Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.
Robust Semi-Active Ride Control under Stochastic Excitation
2014-01-01
broad classes of time-series models which are of practical importance: the Auto-Regressive (AR) models, the Integrated (I) models, and the Moving-Average (MA) models [12]. Combinations of these models result in autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models… [excerpt garbled: the four switching cases (Up/Down combinations) are written in compact form in Eq. (20) using the Heaviside step function]
Zhu, Yu; Xia, Jie-lai; Wang, Jing
2009-09-01
Application of the single auto-regressive integrated moving average (ARIMA) model and the ARIMA-generalized regression neural network (GRNN) combination model in research on the incidence of scarlet fever. An ARIMA model was established based on the monthly incidence of scarlet fever in one city from 2000 to 2006. The fitted values of the ARIMA model were used as the input of the GRNN, and the actual values were used as its output. After training the GRNN, the performance of the single ARIMA model and the ARIMA-GRNN combination model was compared. The mean error rates (MER) of the single ARIMA model and the ARIMA-GRNN combination model were 31.6% and 28.7%, respectively, and the coefficients of determination (R2) of the two models were 0.801 and 0.872, respectively. The fitting efficacy of the ARIMA-GRNN combination model was better than that of the single ARIMA model, giving it practical value in research on time-series data such as the incidence of scarlet fever.
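A minimal sketch of the first stage of such a hybrid, assuming the statsmodels and scikit-learn libraries and a synthetic monthly case-count series; the GRNN stage is approximated here by a generic kernel-based regressor (KernelRidge), since a dedicated GRNN is not part of scikit-learn, and all data and model orders are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from sklearn.kernel_ridge import KernelRidge  # stand-in for a GRNN

# Hypothetical monthly scarlet fever counts, 2000-2006
cases = pd.Series(np.random.poisson(40, 84),
                  index=pd.date_range("2000-01", periods=84, freq="MS"))

# Stage 1: fit an ARIMA model to the incidence series
arima_fit = ARIMA(cases, order=(1, 1, 1)).fit()
fitted = arima_fit.fittedvalues

# Stage 2: train a nonlinear regressor mapping ARIMA fits to observed values
combo = KernelRidge(kernel="rbf", alpha=1.0)
combo.fit(fitted.values.reshape(-1, 1), cases.values)
combined_fit = combo.predict(fitted.values.reshape(-1, 1))

# Compare mean error rates of the single and combined models
mer_arima = np.mean(np.abs(cases - fitted)) / cases.mean()
mer_combo = np.mean(np.abs(cases - combined_fit)) / cases.mean()
print(mer_arima, mer_combo)
```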
Forecasting daily meteorological time series using ARIMA and regression models
NASA Astrophysics Data System (ADS)
Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir
2018-04-01
The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters methods, seasonal auto-regressive integrated moving-average models, autoregressive integrated moving-average models with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, implemented with the R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
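A minimal sketch of ARIMA with Fourier-term external regressors, using Python's statsmodels rather than the R workflow described above; the synthetic temperature series, the number of harmonics, and the model order are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical daily temperature series with an annual cycle plus noise
t = np.arange(3650)
temp = 10 + 8 * np.sin(2 * np.pi * t / 365.25) + np.random.normal(0, 2, t.size)

# Fourier terms approximate the annual seasonality as external regressors
K = 3  # number of harmonics (an assumption)
fourier = np.column_stack(
    [f(2 * np.pi * k * t / 365.25) for k in range(1, K + 1) for f in (np.sin, np.cos)]
)

# Non-seasonal ARIMA errors around the Fourier-based seasonal mean
result = SARIMAX(temp, exog=fourier, order=(2, 0, 1)).fit(disp=False)

# Forecasting requires Fourier terms for the future time indices as well
t_new = np.arange(3650, 3650 + 30)
fourier_new = np.column_stack(
    [f(2 * np.pi * k * t_new / 365.25) for k in range(1, K + 1) for f in (np.sin, np.cos)]
)
print(result.forecast(steps=30, exog=fourier_new))
```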
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
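A minimal sketch of the three Box-Jenkins stages (identification, estimation, diagnosis) in Python's statsmodels, applied to a simulated sequentially dependent series; the (1,0,0) order chosen at the estimation stage is purely illustrative.

```python
import numpy as np
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Hypothetical sequentially dependent series (e.g., 60 terms of mean GPA deviations)
rng = np.random.default_rng(0)
y = np.zeros(60)
for i in range(1, 60):
    y[i] = 0.6 * y[i - 1] + rng.normal(scale=0.1)

# 1) Identification: inspect the autocorrelation structure
plot_acf(y, lags=20)
plot_pacf(y, lags=20)

# 2) Estimation: fit the tentatively identified model
fit = ARIMA(y, order=(1, 0, 0)).fit()
print(fit.summary())

# 3) Diagnosis: residuals should behave like white noise
print(acorr_ljungbox(fit.resid, lags=[10]))
```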
Monthly streamflow forecasting with auto-regressive integrated moving average
NASA Astrophysics Data System (ADS)
Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani
2017-09-01
Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique that includes a step where clustering is performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model were then compared to a conventional auto-regressive integrated moving average model using root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
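A minimal sketch of the SSA pre-processing step, written in plain NumPy rather than the R implementation used in the study; the window length, the synthetic streamflow series, and the choice of which components to keep are illustrative assumptions.

```python
import numpy as np

def ssa_components(x, L):
    """Decompose a series into elementary SSA components via the trajectory matrix."""
    N, K = len(x), len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])              # rank-1 elementary matrix
        # Diagonal averaging (Hankelization) back to a length-N series
        flipped = Xi[::-1, :]
        comps.append(np.array([flipped.diagonal(k).mean() for k in range(-L + 1, K)]))
    return comps

# Hypothetical monthly streamflow with trend, seasonality and noise
t = np.arange(240)
flow = 50 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 12) + np.random.normal(0, 3, t.size)

comps = ssa_components(flow, L=24)
# Reconstruct a denoised series from the leading components before fitting ARIMA
denoised = sum(comps[:4])
```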
NASA Astrophysics Data System (ADS)
Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria
2013-06-01
Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
Using Baidu Search Index to Predict Dengue Outbreak in China
NASA Astrophysics Data System (ADS)
Liu, Kangkang; Wang, Tao; Yang, Zhicong; Huang, Xiaodong; Milinovich, Gabriel J.; Lu, Yi; Jing, Qinlong; Xia, Yao; Zhao, Zhengyang; Yang, Yang; Tong, Shilu; Hu, Wenbiao; Lu, Jiahai
2016-12-01
This study identified possible thresholds to predict dengue fever (DF) outbreaks using the Baidu Search Index (BSI). Time-series classification and regression tree models based on the BSI were used to develop a predictive model for DF outbreaks in Guangzhou and Zhongshan, China. In the regression tree models, the mean autochthonous DF incidence rate increased approximately 30-fold in Guangzhou when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 382. When the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 91.8, there was an approximately 9-fold increase in the mean autochthonous DF incidence rate in Zhongshan. In the classification tree models, the results showed that when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 99.3, there was an 89.28% chance of a DF outbreak in Guangzhou, while in Zhongshan, when the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 68.1, the chance of a DF outbreak rose to 100%. The study indicated that lower-cost Internet-based surveillance systems can be a valuable complement to traditional DF surveillance in China.
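A minimal sketch of a regression tree relating a lagged moving average of a search index to incidence, using scikit-learn on synthetic data; the threshold the tree learns here is illustrative, not the 382 or 91.8 reported above, and all variable names are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

# Hypothetical weekly Baidu Search Index and dengue incidence
weeks = 260
bsi = pd.Series(rng.gamma(shape=2.0, scale=60.0, size=weeks))
incidence = (0.02 * bsi.shift(1).fillna(0) + rng.normal(0, 1, weeks)).clip(lower=0)

# Predictor: lagged 1-3 week moving average of the search index
bsi_ma_1_3 = bsi.shift(1).rolling(window=3).mean()

df = pd.DataFrame({"bsi_ma": bsi_ma_1_3, "incidence": incidence}).dropna()

# A shallow tree yields an interpretable split (threshold) on the moving average
tree = DecisionTreeRegressor(max_depth=1).fit(df[["bsi_ma"]], df["incidence"])
print("learned threshold:", tree.tree_.threshold[0])
```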
Naive vs. Sophisticated Methods of Forecasting Public Library Circulations.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1984-01-01
Two sophisticated--autoregressive integrated moving average (ARIMA), straight-line regression--and two naive--simple average, monthly average--forecasting techniques were used to forecast monthly circulation totals of 34 public libraries. Comparisons of forecasts and actual totals revealed that ARIMA and monthly average methods had smallest mean…
Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data
Tran, Truyen; Luo, Wei; Phung, Dinh; Venkatesh, Svetha
2016-01-01
Background: Modeling patient flow is crucial in understanding resource demand and prioritization. We study patient outflow from an open ward in an Australian hospital, where currently bed allocation is carried out by a manager relying on past experiences and looking at demand. Automatic methods that provide a reasonable estimate of total next-day discharges can aid in efficient bed management. The challenges in building such methods lie in dealing with large amounts of discharge noise introduced by the nonlinear nature of hospital procedures, and the nonavailability of real-time clinical information in wards. Objective: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) the autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. Whereas the autoregressive integrated moving average model relied on the past 3 months of discharges, nearest neighbor forecasting used the median of similar past discharges in estimating next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with the random forests achieving a 22.7% improvement in mean absolute error for all days in the year 2014. Conclusions: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments. PMID:27444059
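A minimal sketch comparing a random forest against a moving-average baseline for next-day counts, using scikit-learn and synthetic data; the feature set here (day of week and recent history) is a small stand-in for the 20 patient and 88 ward features described above, and all names and values are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)

# Hypothetical daily discharge counts with a weekly pattern
days = pd.date_range("2010-01-01", periods=1826, freq="D")
discharges = pd.Series(
    20 + 5 * np.sin(2 * np.pi * days.dayofweek / 7) + rng.poisson(4, days.size),
    index=days,
)

df = pd.DataFrame({"y": discharges})
df["dow"] = df.index.dayofweek
for lag in range(1, 8):                       # last week's discharges as features
    df[f"lag{lag}"] = df["y"].shift(lag)
df = df.dropna()

train, test = df.iloc[:-365], df.iloc[-365:]  # hold out the final year
X_cols = [c for c in df.columns if c != "y"]

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(train[X_cols], train["y"])

baseline = df["y"].shift(1).rolling(7).mean().loc[test.index]   # 7-day moving average
print("RF MAE:      ", mean_absolute_error(test["y"], rf.predict(test[X_cols])))
print("Baseline MAE:", mean_absolute_error(test["y"], baseline))
```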
An Optimization of Inventory Demand Forecasting in University Healthcare Centre
NASA Astrophysics Data System (ADS)
Bon, A. T.; Ng, T. K.
2017-01-01
The healthcare industry has become an important field nowadays as it concerns people's health. With that, forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted at a University Health Centre to collect historical demand data for Panadol 650 mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize overall inventory demand through forecasting techniques. A quantitative, time-series forecasting model was used in the case study to forecast future data as a function of past data. The data pattern needs to be identified before applying the forecasting techniques; here the pattern is a trend, and ten forecasting techniques were applied using the Risk Simulator software. Lastly, the best forecasting technique was identified as the one with the least forecasting error. The ten forecasting techniques include single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters additive, seasonal additive, Holt-Winters multiplicative, seasonal multiplicative and autoregressive integrated moving average (ARIMA). According to the forecasting accuracy measurement, the best forecasting technique is regression analysis.
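A minimal sketch comparing three of the listed techniques (single moving average, single exponential smoothing, and trend regression) by mean absolute percentage error, using pandas on a synthetic monthly demand series; Risk Simulator itself is a commercial tool, so this only illustrates the select-by-error idea, and the window and smoothing parameters are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical monthly demand with an upward trend (68 months)
t = np.arange(68)
demand = pd.Series(200 + 3 * t + rng.normal(0, 15, t.size))

def mape(actual, fitted):
    mask = fitted.notna()
    return (abs(actual[mask] - fitted[mask]) / actual[mask]).mean() * 100

# Single moving average (3-month) used as a one-step-ahead fit
sma = demand.rolling(3).mean().shift(1)

# Single exponential smoothing, alpha chosen arbitrarily
ses = demand.ewm(alpha=0.3, adjust=False).mean().shift(1)

# Trend regression fitted values
slope, intercept = np.polyfit(t, demand, 1)
reg = pd.Series(intercept + slope * t)

scores = {"SMA(3)": mape(demand, sma), "SES(0.3)": mape(demand, ses),
          "Regression": mape(demand, reg)}
print(min(scores, key=scores.get), scores)
```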
Hu, Wenbiao; Tong, Shilu; Mengersen, Kerrie; Connell, Des
2007-09-01
Few studies have examined the relationship between weather variables and cryptosporidiosis in Australia. This paper examines the potential impact of weather variability on the transmission of cryptosporidiosis and explores the possibility of developing an empirical forecast system. Data on weather variables, notified cryptosporidiosis cases, and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively, for the period of January 1, 1996-December 31, 2004. Time series Poisson regression and seasonal auto-regressive integrated moving average (SARIMA) models were performed to examine the potential impact of weather variability on the transmission of cryptosporidiosis. Both the time series Poisson regression and SARIMA models show that seasonal and monthly maximum temperature at a prior moving average of 1 and 3 months were significantly associated with cryptosporidiosis. This suggests that there may be 50 more cases a year for a 1°C increase in average maximum temperature in Brisbane. Model assessments indicated that the SARIMA model had better predictive ability than the Poisson regression model (SARIMA: root mean square error (RMSE): 0.40, Akaike information criterion (AIC): -12.53; Poisson regression: RMSE: 0.54, AIC: -2.84). Furthermore, the analysis of residuals shows that the time series Poisson regression appeared to violate a modeling assumption, in that residual autocorrelation persisted. The results of this study suggest that weather variability (particularly maximum temperature) may have played a significant role in the transmission of cryptosporidiosis. A SARIMA model may be a better predictive model than a Poisson regression model in the assessment of the relationship between weather variability and the incidence of cryptosporidiosis.
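A minimal sketch of a time-series Poisson regression with a prior moving average of maximum temperature as the predictor, using statsmodels on synthetic monthly data; the variable names, the 3-month window, and the simulated relationship are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Hypothetical monthly series: maximum temperature and notified cases
months = 108
tmax = pd.Series(25 + 5 * np.sin(2 * np.pi * np.arange(months) / 12)
                 + rng.normal(0, 1, months))
cases = pd.Series(rng.poisson(np.exp(0.5 + 0.08 * tmax)))

# Predictor: moving average of maximum temperature over the prior 1-3 months
tmax_ma = tmax.shift(1).rolling(3).mean()

df = pd.DataFrame({"cases": cases, "tmax_ma": tmax_ma}).dropna()
X = sm.add_constant(df[["tmax_ma"]])

poisson_fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()
print(poisson_fit.summary())   # the tmax_ma coefficient is on the log-rate scale
```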
An Intelligent Decision Support System for Workforce Forecast
2011-01-01
…an ARIMA model to forecast the demand for construction skills in Hong Kong. This model was based… [list of surveyed forecasting approaches: decision trees, ARIMA, rule-based forecasting, segmentation forecasting, regression analysis, simulation modeling, input-output models, LP and NLP, Markovian models] …data; when results are needed as a set of easily interpretable rules. 4.1.4 Auto-regressive integrated moving-average (ARIMA) models…
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.; Balas, M. J.
1980-01-01
A novel interconnection of distributed parameter system (DPS) identification and adaptive filtering is presented, which culminates in a common statement of coupled autoregressive, moving-average expansion or parallel infinite impulse response configuration adaptive parameterization. The common restricted complexity filter objectives are seen as similar to the reduced-order requirements of the DPS expansion description. The interconnection presents the possibility of an exchange of problem formulations and solution approaches not yet easily addressed in the common finite dimensional lumped-parameter system context. It is concluded that the shared problems raised are nevertheless many and difficult.
Use of Time-Series, ARIMA Designs to Assess Program Efficacy.
ERIC Educational Resources Information Center
Braden, Jeffery P.; And Others
1990-01-01
Illustrates use of time-series designs for determining efficacy of interventions with fictitious data describing drug-abuse prevention program. Discusses problems and procedures associated with time-series data analysis using Auto Regressive Integrated Moving Averages (ARIMA) models. Example illustrates application of ARIMA analysis for…
Inhalant Use among Indiana School Children, 1991-2004
ERIC Educational Resources Information Center
Ding, Kele; Torabi, Mohammad R.; Perera, Bilesha; Jun, Mi Kyung; Jones-McKyer, E. Lisako
2007-01-01
Objective: To examine the prevalence and trend of inhalant use among Indiana public school students. Methods: The Alcohol, Tobacco, and Other Drug Use among Indiana Children and Adolescents surveys conducted annually between 1991 and 2004 were reanalyzed using 2-way moving average, Poisson regression, and ANOVA tests. Results: The prevalence had…
An improved portmanteau test for autocorrelated errors in interrupted time-series regression models.
Huitema, Bradley E; McKean, Joseph W
2007-08-01
A new portmanteau test for autocorrelation among the errors of interrupted time-series regression models is proposed. Simulation results demonstrate that the inferential properties of the proposed Q(H-M) test statistic are considerably more satisfactory than those of the well-known Ljung-Box test and moderately better than those of the Box-Pierce test. These conclusions generally hold for a wide variety of autoregressive (AR), moving average (MA), and ARMA error processes that are associated with time-series regression models of the form described in Huitema and McKean (2000a, 2000b).
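For reference, the two comparison tests mentioned above are available in statsmodels; a minimal sketch applies them to the residuals of an interrupted time-series regression with a level-shift dummy on synthetic data (the Q(H-M) statistic itself is not part of statsmodels, and the simulated error process is an assumption).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(5)

# Hypothetical interrupted time series: level shift at t = 50 plus AR(1) errors
n, shift_at = 100, 50
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.5 * e[i - 1] + rng.normal()
y = 10 + 3 * (np.arange(n) >= shift_at) + e

# Time-series regression with a trend and an intervention (level-change) dummy
X = sm.add_constant(np.column_stack([np.arange(n), np.arange(n) >= shift_at]))
resid = sm.OLS(y, X).fit().resid

# Portmanteau tests for autocorrelated errors (Ljung-Box and Box-Pierce)
print(acorr_ljungbox(resid, lags=[5, 10], boxpierce=True))
```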
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.
Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data it is common that the data consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. Therefore, to overcome these problems, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models. PMID:23766729
Wildfire suppression cost forecasts from the US Forest Service
Karen L. Abt; Jeffrey P. Prestemon; Krista M. Gebert
2009-01-01
The US Forest Service and other land-management agencies seek better tools for anticipating future expenditures for wildfire suppression. We developed regression models for forecasting US Forest Service suppression spending at 1-, 2-, and 3-year lead times. We compared these models to another readily available forecast model, the 10-year moving average model,...
Considerations for monitoring raptor population trends based on counts of migrants
Titus, K.; Fuller, M.R.; Ruos, J.L.; Meyburg, B-U.; Chancellor, R.D.
1989-01-01
Various problems were identified with standardized hawk count data as annually collected at six sites. Some of the hawk lookouts increased their hours of observation from 1979-1985, thereby confounding the total counts. Data-recording problems and missing data hamper the coding of the data and their use with modern analytical techniques. Coefficients of variation in counts among years averaged about 40%. The advantages and disadvantages of various analytical techniques are discussed, including regression, non-parametric rank correlation trend analysis, and moving averages.
A Comparison of Alternative Approaches to the Analysis of Interrupted Time-Series.
ERIC Educational Resources Information Center
Harrop, John W.; Velicer, Wayne F.
1985-01-01
Computer generated data representative of 16 Auto Regressive Integrated Moving Average (ARIMA) models were used to compare the results of interrupted time-series analysis using: (1) the known model identification, (2) an assumed (1,0,0) model, and (3) an assumed (3,0,0) model as an approximation to the General Transformation approach. (Author/BW)
Impact of the Illinois Seat Belt Use Law on Accidents, Deaths, and Injuries.
ERIC Educational Resources Information Center
Rock, Steven M.
1992-01-01
The impact of the 1985 Illinois seat belt law is explored using Box-Jenkins Auto-Regressive Integrated Moving Average (ARIMA) techniques and monthly accident statistical data from the state department of transportation for January-July 1990. A conservative estimate is that the law provides benefits of $15 million per month in Illinois. (SLD)
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Kantz, Holger
2016-04-01
As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as auto-regressive (short memory) and auto-regressive fractionally integrated moving average (long memory) processes do not have an analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. While short-range correlations lead to a simple correction of the sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
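A minimal Monte Carlo sketch of the idea, comparing how often the time average of an AR(1) (short-memory) process deviates from its mean relative to iid noise as the time window grows; the parameter values and deviation threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def large_deviation_prob(series_gen, window, eps, trials=2000):
    """Estimate P(|time average over `window`| > eps) by Monte Carlo."""
    hits = 0
    for _ in range(trials):
        if abs(series_gen(window).mean()) > eps:
            hits += 1
    return hits / trials

def iid(n):
    return rng.normal(size=n)

def ar1(n, phi=0.8):
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

# Correlated data converge to the ensemble mean much more slowly than iid data
for window in (50, 200, 800):
    print(window,
          "iid:", large_deviation_prob(iid, window, eps=0.2),
          "AR(1):", large_deviation_prob(ar1, window, eps=0.2))
```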
Modeling Geodetic Processes with Levy α-Stable Distribution and FARIMA
NASA Astrophysics Data System (ADS)
Montillet, Jean-Philippe; Yu, Kegen
2015-04-01
Over recent years the scientific community has been using the auto-regressive moving average (ARMA) model in the modeling of the noise in global positioning system (GPS) time series (daily solutions). This work starts with an investigation of the limits of the ARMA model, which is widely used in signal processing when the measurement noise is white. Since a typical GPS time series consists of geophysical signals (e.g., a seasonal signal) and stochastic processes (e.g., coloured and white noise), the ARMA model may be inappropriate. Therefore, the application of the fractional auto-regressive integrated moving average (FARIMA) model is investigated. The simulation results using simulated time series as well as real GPS time series from a few selected stations around Australia show that the FARIMA model fits the time series better than other models when the coloured noise is larger than the white noise. The second part of this work focuses on fitting the GPS time series with the family of Levy α-stable distributions. Using this distribution, a hypothesis test is developed to effectively eliminate coarse outliers from GPS time series, achieving better performance than the rule of thumb of n standard deviations (with n chosen empirically).
Yu, Lijing; Zhou, Lingling; Tan, Li; Jiang, Hongbo; Wang, Ying; Wei, Sheng; Nie, Shaofa
2014-01-01
Outbreaks of hand-foot-mouth disease (HFMD) have been reported many times in Asia during the last decades. This emerging disease has drawn worldwide attention and vigilance, and the prevention and control of HFMD has become an imperative issue in China. Early detection and response, supported by modern information technology, can be helpful before an epidemic develops. In this paper, a hybrid model combining a seasonal auto-regressive integrated moving average (ARIMA) model and a nonlinear auto-regressive neural network (NARNN) is proposed to predict the expected incidence cases from December 2012 to May 2013, using the retrospective observations obtained from the China Information System for Disease Control and Prevention from January 2008 to November 2012. The best-fitting hybrid model combined a seasonal ARIMA model [Formula: see text] and a NARNN with 15 hidden units and 5 delays. The hybrid model gives good forecasting performance and estimates the expected incidence cases from December 2012 to May 2013 as -965.03, -1879.58, 4138.26, 1858.17, 4061.86 and 6163.16, respectively, with an obviously increasing trend. The model proposed in this paper can predict the incidence trend of HFMD effectively, which could be helpful to policy makers. The expected cases of HFMD are useful not only for detecting outbreaks or providing probability statements, but also for providing decision makers with a probable trend of the variability of future observations that contains both historical and recent information.
Developing a predictive tropospheric ozone model for Tabriz
NASA Astrophysics Data System (ADS)
Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi
2013-04-01
Predictive ozone models are becoming indispensable tools by providing a capability for pollution alerts to serve people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, using the following five modeling strategies: three regression-type methods, Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models, Nonlinear Local Prediction (NLP), which implements chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type modeling strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed, whereas the auto-regression-type strategies regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN and GEP models are not overly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
A multimodel approach to interannual and seasonal prediction of Danube discharge anomalies
NASA Astrophysics Data System (ADS)
Rimbu, Norel; Ionita, Monica; Patrut, Simona; Dima, Mihai
2010-05-01
Interannual and seasonal predictability of Danube river discharge is investigated using three model types: 1) time series models, 2) linear regression models of discharge with large-scale climate mode indices, and 3) models based on stable teleconnections. All models are calibrated using discharge and climatic data for the period 1901-1977 and validated for the period 1978-2008. Various time series models, such as autoregressive (AR), moving average (MA), autoregressive moving average (ARMA), and singular spectrum analysis combined with autoregressive moving average (SSA+ARMA) models, have been calibrated and their skills evaluated. The best results were obtained using SSA+ARMA models, which have also proved to have the highest forecast skill for other European rivers (Gamiz-Fortis et al. 2008). Multiple linear regression models using large-scale climatic mode indices as predictors have a higher forecast skill than the time series models. The best predictors for Danube discharge are the North Atlantic Oscillation (NAO) and the East Atlantic/Western Russia patterns during winter and spring. Other patterns, like Polar/Eurasian or Tropical Northern Hemisphere (TNH), are good predictors for summer and autumn discharge. Based on the stable teleconnection approach (Ionita et al. 2008), we construct prediction models through a combination of sea surface temperature (SST), temperature (T) and precipitation (PP) from the regions where discharge and SST, T and PP variations are stably correlated. Forecast skills of these models are higher than those of the time series and multiple regression models. The models calibrated and validated in our study can be used for operational prediction of interannual and seasonal Danube discharge anomalies. References: Gamiz-Fortis, S., D. Pozo-Vazquez, R.M. Trigo, and Y. Castro-Diez, Quantifying the predictability of winter river flow in Iberia. Part I: interannual predictability. J. Climate, 2484-2501, 2008. Gamiz-Fortis, S., D. Pozo-Vazquez, R.M. Trigo, and Y. Castro-Diez, Quantifying the predictability of winter river flow in Iberia. Part II: seasonal predictability. J. Climate, 2503-2518, 2008. Ionita, M., G. Lohmann, and N. Rimbu, Prediction of spring Elbe river discharge based on stable teleconnections with global temperature and precipitation. J. Climate, 6215-6226, 2008.
Mao, Qiang; Zhang, Kai; Yan, Wu; Cheng, Chaonan
2018-05-02
The aims of this study were to develop a forecasting model for the incidence of tuberculosis (TB) and analyze the seasonality of infections in China, and to provide a useful tool for formulating intervention programs and allocating medical resources. Data for the monthly incidence of TB from January 2004 to December 2015 were obtained from the National Scientific Data Sharing Platform for Population and Health (China). The Box-Jenkins method was applied to fit a seasonal auto-regressive integrated moving average (SARIMA) model to forecast the incidence of TB over the subsequent six months. During the study period of 144 months, 12,321,559 TB cases were reported in China, with an average monthly incidence of 6.4426 per 100,000 of the population. The monthly incidence of TB showed a clear 12-month cycle, and a seasonality with two peaks occurring in January and March and a trough in December. The best-fit model was SARIMA(1,0,0)(0,1,1)12, which demonstrated adequate information extraction (white noise test, p>0.05). Based on the analysis, the predicted incidences of TB from January to June 2016 were 6.6335, 4.7208, 5.8193, 5.5474, 5.2202 and 4.9156 per 100,000 of the population, respectively. According to the seasonal pattern of TB incidence in China, the SARIMA model was proposed as a useful tool for monitoring epidemics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
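A minimal sketch of fitting a SARIMA(1,0,0)(0,1,1)12 model and producing a six-month forecast with statsmodels; the synthetic monthly incidence series stands in for the surveillance data and is an assumption, while the model order matches the best-fit order reported above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(7)

# Hypothetical monthly TB incidence per 100,000, Jan 2004 - Dec 2015
idx = pd.date_range("2004-01", periods=144, freq="MS")
seasonal = 1.0 * np.sin(2 * np.pi * idx.month / 12)
incidence = pd.Series(6.4 + seasonal + rng.normal(0, 0.3, idx.size), index=idx)

# SARIMA(1,0,0)(0,1,1) with a 12-month seasonal period
model = SARIMAX(incidence, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
result = model.fit(disp=False)

# Forecast the next six months (Jan-Jun 2016)
print(result.forecast(steps=6))
```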
Effect of air pollution on pediatric respiratory emergency room visits and hospital admissions.
Farhat, S C L; Paulo, R L P; Shimoda, T M; Conceição, G M S; Lin, C A; Braga, A L F; Warth, M P N; Saldiva, P H N
2005-02-01
In order to assess the effect of air pollution on pediatric respiratory morbidity, we carried out a time series study using daily levels of PM10, SO2, NO2, ozone, and CO and daily numbers of pediatric respiratory emergency room visits and hospital admissions at the Children's Institute of the University of Sao Paulo Medical School, from August 1996 to August 1997. In this period there were 43,635 hospital emergency room visits, 4534 of which were due to lower respiratory tract disease. The total number of hospital admissions was 6785, 1021 of which were due to lower respiratory tract infectious and/or obstructive diseases. The three health end-points under investigation were the daily number of emergency room visits due to lower respiratory tract diseases, hospital admissions due to pneumonia, and hospital admissions due to asthma or bronchiolitis. Generalized additive Poisson regression models were fitted, controlling for smooth functions of time, temperature and humidity, and an indicator of weekdays. NO2 was positively associated with all outcomes. Interquartile range increases (65.04 microg/m3) in NO2 moving averages were associated with an 18.4% increase (95% confidence interval, 95% CI = 12.5-24.3) in emergency room visits due to lower respiratory tract diseases (4-day moving average), a 17.6% increase (95% CI = 3.3-32.7) in hospital admissions due to pneumonia or bronchopneumonia (3-day moving average), and a 31.4% increase (95% CI = 7.2-55.7) in hospital admissions due to asthma or bronchiolitis (2-day moving average). The study showed that air pollution considerably affects children's respiratory morbidity, deserving attention from the health authorities.
Challenges of Electronic Medical Surveillance Systems
2004-06-01
More sophisticated approaches, such as regression models and classical autoregressive moving average (ARIMA) models that make estimates based on… with those predicted by a mathematical model. The primary benefit of ARIMA models is their ability to correct for local trends in the data so that… works well, for example, during a particularly severe flu season, where prolonged periods of high visit rates are adjusted to by the ARIMA model, thus…
Tani, Yuji; Ogasawara, Katsuhiko
2012-01-01
This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method using the auto-regressive integrated moving-average (ARIMA) model on business data obtained from the Radiology Department. The model was built using the number of radiological examinations over the preceding 9 years, and it was used to predict the number of radiological examinations in the final year; the actual values were then compared with the forecast values. We were able to establish that the performance prediction method was simple and cost-effective using free software. In addition, we were able to build a simple model by removing trend components from the data as a pre-processing step. The difference between predicted and actual values was 10%; however, it was more important to understand the chronological change than the individual time-series values. Furthermore, our method was highly versatile and adaptable to general time-series data. Therefore, different healthcare organizations can use our method for the analysis and forecasting of their business data.
Simultaneous Estimation of Electromechanical Modes and Forced Oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follum, Jim; Pierre, John W.; Martin, Russell
Over the past several years, great strides have been made in the effort to monitor the small-signal stability of power systems. These efforts focus on estimating electromechanical modes, which are a property of the system that dictate how generators in different parts of the system exchange energy. Though the algorithms designed for this task are powerful and important for reliable operation of the power system, they are susceptible to severe bias when forced oscillations are present in the system. Forced oscillations are fundamentally different from electromechanical oscillations in that they are the result of a rogue input to the system, rather than a property of the system itself. To address the presence of forced oscillations, the frequently used AutoRegressive Moving Average (ARMA) model is adapted to include sinusoidal inputs, resulting in the AutoRegressive Moving Average plus Sinusoid (ARMA+S) model. From this model, a new Two-Stage Least Squares algorithm is derived to incorporate the forced oscillations, thereby enabling the simultaneous estimation of the electromechanical modes and the amplitude and phase of the forced oscillations. The method is validated using simulated power system data as well as data obtained from the western North American power system (wNAPS) and Eastern Interconnection (EI).
2012-01-01
…Auto-regressive Integrated Moving Average (ARIMA) model for the data, eliminating the need to identify an appropriate model through trial and error alone… [table of test statistics omitted; values based on the asymptotic chi-square approximation] …In general, ARIMA models address three… performance standards and measurement processes and a prevailing climate of organizational trust were important factors. Unfortunately, uneven…
Wavelet regression model in forecasting crude oil price
NASA Astrophysics Data System (ADS)
Hamid, Mohd Helmie; Shabri, Ani
2017-05-01
This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil price forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series was used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), Autoregressive Integrated Moving Average (ARIMA) and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
Atmospheric mold spore counts in relation to meteorological parameters
NASA Astrophysics Data System (ADS)
Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.
Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied during 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season by a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated (P<0.02) with average daily temperature and relative humidity, and negatively correlated with precipitation. Alternaria and Epicoccum did not show increased predictability with weather variables. A mathematical model was derived for Cladosporium spore counts using the annual seasonal cycle and significant weather variables. The model for Alternaria and Epicoccum incorporated the annual seasonal cycle. Fungal spore counts can be modeled by time series analysis and related to meteorological parameters while controlling for seasonality; this modeling can provide estimates of exposure to fungal aeroallergens.
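A minimal sketch of regression with an ARMA error structure, the analogue of the SAS PROC ARIMA analysis above, done here with statsmodels' SARIMAX on synthetic data; the weather variables, the simulated coefficients, and the ARMA(1,1) error order are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(8)

# Hypothetical daily weather covariates and log-scale spore counts
n = 200
weather = pd.DataFrame({
    "temp": 20 + rng.normal(0, 3, n),
    "rel_humidity": 50 + rng.normal(0, 10, n),
    "precip": rng.gamma(0.5, 2.0, n),
})
log_spores = (0.05 * weather["temp"] + 0.02 * weather["rel_humidity"]
              - 0.1 * weather["precip"] + rng.normal(0, 0.5, n))

# Regress spore counts on weather, with ARMA(1,1) errors instead of iid errors
model = SARIMAX(log_spores, exog=weather, order=(1, 0, 1))
result = model.fit(disp=False)
print(result.params)   # regression coefficients plus AR and MA error terms
```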
Earthquakes Magnitude Predication Using Artificial Neural Network in Northern Red Sea Area
NASA Astrophysics Data System (ADS)
Alarifi, A. S.; Alarifi, N. S.
2009-12-01
Earthquakes are natural hazards that do not happen very often, but they may cause huge losses of life and property. Early preparation for these hazards is a key factor in reducing their damage and consequences. Since early ages, people have tried to predict earthquakes using simple observations such as strange or atypical animal behavior. In this paper, we study data collected from an existing earthquake catalogue to give better forecasting of future earthquakes. The 16,000 events cover a time span of 1970 to 2009; the magnitudes range from greater than 0 to less than 7.2, while the depths range from greater than 0 to less than 100 km. We propose a new artificial-intelligence prediction system based on an artificial neural network, which can be used to predict the magnitude of future earthquakes in the northern Red Sea area, including the Sinai Peninsula, the Gulf of Aqaba, and the Gulf of Suez. We propose a new feed-forward neural network model with multiple hidden layers to predict earthquake occurrences and magnitudes in the northern Red Sea area. Although similar models have been published before for different areas, to our best knowledge this is the first neural network model to predict earthquakes in the northern Red Sea area. Furthermore, we present other forecasting methods such as moving averages over different intervals, a normally distributed random predictor, and a uniformly distributed random predictor. In addition, we present different statistical methods and data fitting such as linear, quadratic, and cubic regression. We present a detailed performance analysis of the proposed methods for different evaluation metrics. The results show that the neural network model provides higher forecast accuracy than the other proposed methods: the neural network achieves an average absolute error of 2.6%, compared with average absolute errors of 3.8%, 7.3% and 6.17% for the moving average, linear regression and cubic regression, respectively. In this work, we also show an analysis of earthquake data in the northern Red Sea area for different statistical parameters such as correlation, mean, and standard deviation. This analysis is intended to provide a deep understanding of the seismicity of the area and existing patterns.
A generic sun-tracking algorithm for on-axis solar collector in mobile platforms
NASA Astrophysics Data System (ADS)
Lai, An-Chow; Chong, Kok-Keong; Lim, Boon-Han; Ho, Ming-Cheng; Yap, See-Hao; Heng, Chun-Kit; Lee, Jer-Vui; King, Yeong-Jin
2015-04-01
This paper proposes a novel dynamic sun-tracking algorithm which allows accurate tracking of the sun for both non-concentrated and concentrated photovoltaic systems located on mobile platforms to maximize solar energy extraction. The proposed algorithm takes not only the date, time, and geographical information, but also the dynamic changes of coordinates of the mobile platforms into account to calculate the sun position angle relative to ideal azimuth-elevation axes in real time using general sun-tracking formulas derived by Chong and Wong. The algorithm acquires data from open-loop sensors, i.e. global position system (GPS) and digital compass, which are readily available in many off-the-shelf portable gadgets, such as smart phone, to instantly capture the dynamic changes of coordinates of mobile platforms. Our experiments found that a highly accurate GPS is not necessary as the coordinate changes of practical mobile platforms are not fast enough to produce significant differences in the calculation of the incident angle. On the contrary, it is critical to accurately identify the quadrant and angle where the mobile platforms are moving toward in real time, which can be resolved by using digital compass. In our implementation, a noise filtering mechanism is found necessary to remove unexpected spikes in the readings of the digital compass to ensure stability in motor actuations and effectiveness in continuous tracking. Filtering mechanisms being studied include simple moving average and linear regression; the results showed that a compound function of simple moving average and linear regression produces a better outcome. Meanwhile, we found that a sampling interval is useful to avoid excessive motor actuations and power consumption while not sacrificing the accuracy of sun-tracking.
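A minimal sketch of a compound smoothing function of the kind described, combining a simple moving average with a linear regression extrapolation over recent digital-compass readings; the window sizes are illustrative assumptions, and heading wrap-around at 0/360 degrees is ignored for brevity.

```python
import numpy as np

def filter_heading(readings, window=10):
    """Smooth recent compass headings with a simple moving average, then
    extrapolate the current heading with a linear fit over the smoothed window.
    Assumes headings do not wrap around 0/360 within the window."""
    readings = np.asarray(readings, dtype=float)
    if readings.size < window:
        return readings[-1]
    recent = readings[-window:]
    smoothed = np.convolve(recent, np.ones(3) / 3, mode="valid")  # moving average
    t = np.arange(smoothed.size)
    slope, intercept = np.polyfit(t, smoothed, 1)                 # linear regression
    return intercept + slope * (smoothed.size - 1)

# Example: noisy readings around 120 degrees with one spike
print(filter_heading([118, 121, 119, 160, 120, 122, 119, 121, 120, 118, 121]))
```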
Two models for identification and predicting behaviour of an induction motor system
NASA Astrophysics Data System (ADS)
Kuo, Chien-Hsun
2018-01-01
System identification or modelling is the process of building mathematical models of dynamical systems based on the available input and output data from the systems. This paper introduces system identification by using ARX (Auto Regressive with eXogeneous input) and ARMAX (Auto Regressive Moving Average with eXogeneous input) models. Through the identified system model, the predicted output could be compared with the measured one to help prevent the motor faults from developing into a catastrophic machine failure and avoid unnecessary costs and delays caused by the need to carry out unscheduled repairs. The induction motor system is illustrated as an example. Numerical and experimental results are shown for the identified induction motor system.
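A minimal sketch of ARX identification by ordinary least squares on simulated input-output data; the model orders (na = 2, nb = 2) and the true system used to generate the data are illustrative assumptions, and an ARMAX fit would additionally model the noise with moving-average terms.

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate input-output data from a hypothetical second-order discrete system
n = 500
u = rng.normal(size=n)                       # excitation input
y = np.zeros(n)
for k in range(2, n):
    y[k] = (1.2 * y[k - 1] - 0.5 * y[k - 2]
            + 0.8 * u[k - 1] + 0.3 * u[k - 2] + 0.05 * rng.normal())

# ARX(2, 2): y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + e[k]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])   # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated parameters:", theta)        # close to [1.2, -0.5, 0.8, 0.3]
```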
Do alcohol excise taxes affect traffic accidents? Evidence from Estonia.
Saar, Indrek
2015-01-01
This article examines the association between alcohol excise tax rates and alcohol-related traffic accidents in Estonia. Monthly time series of traffic accidents involving drunken motor vehicle drivers from 1998 through 2013 were regressed on real average alcohol excise tax rates while controlling for changes in economic conditions and the traffic environment. Specifically, regression models with autoregressive integrated moving average (ARIMA) errors were estimated in order to deal with serial correlation in residuals. Counterfactual models were also estimated in order to check the robustness of the results, using the level of non-alcohol-related traffic accidents as a dependent variable. A statistically significant (P <.01) strong negative relationship between the real average alcohol excise tax rate and alcohol-related traffic accidents was disclosed under alternative model specifications. For instance, the regression model with ARIMA (0, 1, 1)(0, 1, 1) errors revealed that a 1-unit increase in the tax rate is associated with a 1.6% decrease in the level of accidents per 100,000 population involving drunk motor vehicle drivers. No similar association was found in the cases of counterfactual models for non-alcohol-related traffic accidents. This article indicates that the level of alcohol-related traffic accidents in Estonia has been affected by changes in real average alcohol excise taxes during the period 1998-2013. Therefore, in addition to other measures, the use of alcohol taxation is warranted as a policy instrument in tackling alcohol-related traffic accidents.
NASA Astrophysics Data System (ADS)
Zhang, Ying; Bi, Peng; Hiller, Janet
2008-01-01
This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data in Adelaide, South Australia. By using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis. Rainfall was also inversely related to the number of cases. The comparison of goodness-of-fit and forecasting ability suggests that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could, thus, be adopted to quantify the relationship between climate variations and salmonellosis transmission.
Alternatives to the Moving Average
Paul C. van Deusen
2001-01-01
There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
Mehta, Amar J.; Kloog, Itai; Zanobetti, Antonella; Coull, Brent A.; Sparrow, David; Vokonas, Pantel; Schwartz, Joel
2014-01-01
Background: The underlying mechanisms of the association between ambient temperature and cardiovascular morbidity and mortality are not well understood, particularly for daily temperature variability. We evaluated if daily mean temperature and standard deviation of temperature was associated with heart rate-corrected QT interval (QTc) duration, a marker of ventricular repolarization in a prospective cohort of older men. Methods: This longitudinal analysis included 487 older men participating in the VA Normative Aging Study with up to three visits between 2000-2008 (n = 743). We analyzed associations between QTc and moving averages (1-7, 14, 21, and 28 days) of the 24-hour mean and standard deviation of temperature as measured from a local weather monitor, and the 24-hour mean temperature estimated from a spatiotemporal prediction model, in time-varying linear mixed-effect regression. Effect modification by season, diabetes, coronary heart disease, obesity, and age was also evaluated. Results: Higher mean temperature as measured from the local monitor, and estimated from the prediction model, was associated with longer QTc at moving averages of 21 and 28 days. Increased 24-hr standard deviation of temperature was associated with longer QTc at moving averages from 4 and up to 28 days; a 1.9°C interquartile range increase in 4-day moving average standard deviation of temperature was associated with a 2.8 msec (95%CI: 0.4, 5.2) longer QTc. Associations between 24-hr standard deviation of temperature and QTc were stronger in colder months, and in participants with diabetes and coronary heart disease. Conclusion/Significance: In this sample of older men, elevated mean temperature was associated with longer QTc, and increased variability of temperature was associated with longer QTc, particularly during colder months and among individuals with diabetes and coronary heart disease. These findings may offer insight of an important underlying mechanism of temperature-related cardiovascular morbidity and mortality in an older population. PMID:25238150
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
NASA Astrophysics Data System (ADS)
Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing
2017-09-01
The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders the timing to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which a fuzzy logic rule is used to determine the strength of trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommend value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show that, first, the fuzzy moving average strategy can obtain a more stable rate of return than the plain moving average strategies. Second, the holding-amount series is highly sensitive to the price series. Third, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are selected most often. These results are helpful in investment decisions.
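A minimal sketch of the underlying (non-fuzzy) moving average crossover signal that such a strategy extends, using pandas on a synthetic price series; the window lengths are illustrative assumptions, and the fuzzy layer that grades signal strength into a trading volume is not implemented here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(10)

# Hypothetical daily futures price series (random walk)
price = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)))

fast = price.rolling(window=10).mean()    # short moving average
slow = price.rolling(window=50).mean()    # long moving average

# +1 when the fast average is above the slow one (long), -1 otherwise (short)
position = np.where(fast > slow, 1, -1)
signal = pd.Series(position, index=price.index).diff()   # +2 buy cross, -2 sell cross

trades = signal[signal.abs() == 2]
print(f"{len(trades)} crossover signals generated")
```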
Honda, Trenton; Pun, Vivian C; Manjourides, Justin; Suh, Helen
2018-07-01
Hypertension is a highly prevalent cardiovascular risk factor. It is possible that air pollution, also an established cardiovascular risk factor, may contribute to cardiovascular disease through increasing blood pressure. Previous studies evaluating associations between air pollution and blood pressure have had mixed results. We examined the association between long-term (one-year moving average) air pollutant exposures, prevalent hypertension and blood pressure in 4121 older Americans (57+ years) enrolled in the National Social Life, Health, and Aging Project. We estimated exposures to PM2.5 using spatio-temporal models and used logistic regression accounting for repeated measures to evaluate the association between long-term average PM2.5 and prevalence odds of hypertension. We additionally used linear regression to evaluate the associations between air pollutants and systolic, diastolic, mean arterial, and pulse pressures. Health effect models were adjusted for a number of demographic, health and socioeconomic covariates. An inter-quartile range (3.91 μg/m3) increase in the one-year moving average of PM2.5 was associated with increased odds of prevalent hypertension (POR 1.24, 95% CI: 1.11, 1.38), systolic blood pressure (0.93 mm Hg, 95% CI: 0.05, 1.80) and pulse pressure (0.89 mm Hg, 95% CI: 0.21, 1.58). Dose-response relationships were also observed. PM2.5 was associated with increased odds of prevalent hypertension, and increased systolic pressure and pulse pressure in a cohort of older Americans. These findings add to the growing evidence that air pollution may be an important risk factor for hypertension and perturbations in blood pressure. Copyright © 2018 Elsevier Inc. All rights reserved.
Kim, H-I; Park, M S; Song, K J; Woo, Y; Hyung, W J
2014-10-01
The learning curve of robotic gastrectomy has not yet been evaluated in comparison with the laparoscopic approach. We compared the learning curves of robotic gastrectomy and laparoscopic gastrectomy based on operation time and surgical success. We analyzed 172 robotic and 481 laparoscopic distal gastrectomies performed by a single surgeon from May 2003 to April 2009. The operation time was analyzed using a moving average and non-linear regression analysis. Surgical success was evaluated by a cumulative sum plot with a target failure rate of 10%. Surgical failure was defined as laparoscopic or open conversion, insufficient lymph node harvest for staging, resection margin involvement, postoperative morbidity, or mortality. Moving average and non-linear regression analyses indicated a stable state in operation time at 95 and 121 cases for robotic gastrectomy, and at 270 and 262 cases for laparoscopic gastrectomy, respectively. The cumulative sum plot identified no cut-off point for surgical success in robotic gastrectomy and 80 cases in laparoscopic gastrectomy. Excluding the initial 148 laparoscopic gastrectomies that were performed before the first robotic gastrectomy, the two groups showed a similar number of cases needed to reach a steady state in operation time, and showed no cut-off point in the analysis of surgical success. The experience of laparoscopic surgery could affect the learning process of robotic gastrectomy. An experienced laparoscopic surgeon requires fewer cases of robotic gastrectomy to reach a steady state. Moreover, the surgical outcomes of robotic gastrectomy were satisfactory. Copyright © 2013 Elsevier Ltd. All rights reserved.
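A small sketch of the cumulative-sum (CUSUM) idea used here, in Python with synthetic outcomes: each case adds (failure indicator minus the 10% target rate), so the curve rises while the failure rate exceeds the target and a sustained downward trend marks the end of the learning curve. Case counts and failure probabilities are invented for illustration.

```python
# Hedged CUSUM sketch for surgical success with a 10% target failure rate.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
# 1 = surgical failure, 0 = success; failure probability falls as experience grows
p_fail = np.linspace(0.25, 0.05, 172)
failures = rng.binomial(1, p_fail)

target = 0.10
cusum = np.cumsum(failures - target)   # classic CUSUM of observed minus target rate

plt.plot(cusum)
plt.xlabel("Case number")
plt.ylabel("CUSUM (failures - 10% target)")
plt.title("Learning-curve CUSUM (synthetic data)")
plt.show()
```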
Short-term forecasts gain in accuracy [regression technique using "Box-Jenkins" analysis]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Box-Jenkins time-series models offer accuracy for short-term forecasts that compares with large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This new method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of auto-correlations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used according to which provides the most appropriate combination of autocorrelations and related derivatives. Major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited for seasonal patterns, which makes it possible to produce forecasts of load demand as short as hourly. With accuracy up to two years out, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design. (DCK)
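A compact Python sketch of the Box-Jenkins steps named above (identification from autocorrelations, estimation, diagnostic checking), run on a synthetic demand series; the ARIMA(1,1,1) order is an assumption chosen only to illustrate the workflow.

```python
# Hedged Box-Jenkins workflow sketch: identify, estimate, diagnose.
import numpy as np
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
load = 1000 + np.cumsum(rng.normal(0, 20, 400))   # synthetic hourly demand

plot_acf(np.diff(load), lags=40)     # identification: ACF of the differenced series
plot_pacf(np.diff(load), lags=40)

fit = ARIMA(load, order=(1, 1, 1)).fit()          # estimation of a tentative ARIMA(1,1,1)
print(fit.summary())
print(acorr_ljungbox(fit.resid, lags=[24]))       # diagnostic check on the residuals
```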
NASA Astrophysics Data System (ADS)
Musa, Omer; Weixuan, Li; Xiong, Chen; Lunkun, Gong; Wenhe, Liao
2018-07-01
A solid-fuel ramjet converts the thermal energy of combustion products into forward thrust without using any moving parts. Normally, it uses an air intake system to compress the incoming air, without a swirler. A new swirler design has been proposed and used in the current work. In this paper, a series of firing tests has been carried out to investigate the impact of using swirl flow on the regression rate, combustion characteristics, and performance of solid-fuel ramjet engines. The influences of swirl intensity, solid fuel port diameter, and combustor length were studied and varied independently. A new technique for determining the time- and space-averaged regression rate of the high-density polyethylene solid fuel surface after experiments has been proposed based on laser scanning. A code has been developed to reconstruct the data from the scanner and then used to obtain the three-dimensional distribution of the regression rate. It is shown that increasing the swirl number increases the regression rate, thrust, and characteristic velocity, and decreases the air-fuel ratio, corner recirculation zone length, and specific impulse. Using swirl flow enhances flame stability but negatively affects the ignition process and specific impulse, although a significant reduction in combustion chamber length can be achieved when swirl flow is used. A power-law correlation for the average regression rate was developed, taking into account the influence of the swirl number. Furthermore, varying the port diameter and combustor length was found to influence the regression rate, combustion characteristics, and performance of the solid-fuel ramjet.
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP and, eventually, the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model puts forward a parsimonious solution that is noteworthy for practical application. In addition, the approach allows the user to bring human insight into the problem, examine the evolved models, and pick out the best-performing programs for further analysis.
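The Pareto-front selection step can be illustrated with a few lines of Python: given candidate models summarized by (complexity, error) pairs, keep only those not dominated by a simpler and more accurate alternative. The candidate names and numbers below are invented for illustration.

```python
# Hedged sketch of Pareto-optimal model selection on a complexity-error trade-off.
candidates = [
    {"name": "GP-1", "complexity": 25, "rmse": 4.1},
    {"name": "MGGP-1", "complexity": 12, "rmse": 3.2},
    {"name": "MGGP-2", "complexity": 18, "rmse": 3.1},
    {"name": "MA-MGGP", "complexity": 9, "rmse": 3.3},
]

def pareto_front(models):
    """Return models for which no other model is at least as good on both criteria."""
    front = []
    for m in models:
        dominated = any(o["complexity"] <= m["complexity"] and o["rmse"] <= m["rmse"]
                        and o is not m for o in models)
        if not dominated:
            front.append(m)
    return front

print([m["name"] for m in pareto_front(candidates)])
```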
Statistical description of turbulent transport for flux driven toroidal plasmas
NASA Astrophysics Data System (ADS)
Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.
2017-06-01
A novel methodology to analyze non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the auto-regressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It was shown that non-Gaussian tails of the PDFs from first principles based gyrokinetic simulations agree with an analytical estimation based on a two fluid model.
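As a hedged Python sketch of this strategy (not the authors' gyrokinetic pipeline), an ARIMA model is fitted to a synthetic intermittent flux series and the residual distribution is then inspected for heavy, non-Gaussian tails via its skewness and excess kurtosis; the series and ARIMA order are assumptions.

```python
# Hedged sketch: ARIMA detrending of an intermittent series, then residual PDF checks.
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
# Synthetic intermittent flux: correlated noise plus occasional large bursts
flux = rng.normal(0, 1, 2000) + (rng.random(2000) < 0.02) * rng.exponential(5, 2000)

resid = ARIMA(flux, order=(2, 1, 2)).fit().resid  # noise/trend separation via ARIMA
print("skewness:", stats.skew(resid))
print("excess kurtosis:", stats.kurtosis(resid))  # > 0 indicates heavier-than-Gaussian tails
```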
Modelling and Closed-Loop System Identification of a Quadrotor-Based Aerial Manipulator
NASA Astrophysics Data System (ADS)
Dube, Chioniso; Pedro, Jimoh O.
2018-05-01
This paper presents the modelling and system identification of a quadrotor-based aerial manipulator. The aerial manipulator model is first derived analytically using the Newton-Euler formulation for the quadrotor and the recursive Newton-Euler formulation for the manipulator. The aerial manipulator is then simulated with the quadrotor under Proportional Derivative (PD) control, with the manipulator in motion. The simulation data are then used for system identification of the aerial manipulator. Auto-Regressive with eXogenous inputs (ARX) models are obtained from the system identification for the linear accelerations \ddot{X} and \ddot{Y} and the yaw angular acceleration \ddot{\psi}. For the linear acceleration \ddot{Z}, and the pitch and roll angular accelerations \ddot{\theta} and \ddot{\phi}, Auto-Regressive Moving Average with eXogenous inputs (ARMAX) models are identified.
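A minimal Python sketch of ARX identification by least squares: lagged outputs and inputs are stacked into a regressor matrix and the parameters are solved for directly. The second-order structure and the simulated system are assumptions, not the paper's model.

```python
# Hedged ARX identification sketch: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2],
# estimated by ordinary least squares on simulated data.
import numpy as np

rng = np.random.default_rng(5)
u = rng.normal(0, 1, 1000)                 # excitation input (e.g. a control command)
y = np.zeros(1000)
for k in range(2, 1000):                   # simulated "true" second-order system
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.2 * u[k-2] + rng.normal(0, 0.05)

rows = np.arange(2, len(y))
Phi = np.column_stack([y[rows - 1], y[rows - 2], u[rows - 1], u[rows - 2]])
theta, *_ = np.linalg.lstsq(Phi, y[rows], rcond=None)
print("estimated [a1, a2, b1, b2]:", theta)   # should be close to [1.5, -0.7, 0.5, 0.2]
```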
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: an exponential decline phase, a linear decline phase, an equilibrium phase, and a postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
Modelling space of spread Dengue Hemorrhagic Fever (DHF) in Central Java use spatial durbin model
NASA Astrophysics Data System (ADS)
Ispriyanti, Dwi; Prahutama, Alan; Taryono, Arkadina PN
2018-05-01
Dengue Hemorrhagic Fever (DHF) is one of the major public health problems in Indonesia. From year to year, DHF causes Extraordinary Events (outbreaks) in most parts of Indonesia, especially Central Java. Central Java consists of 35 districts and cities that lie close to one another. Spatial regression is an analysis that assesses the influence of independent variables on the dependent variable while accounting for regional (spatial) effects. In spatial regression modeling, there are the spatial autoregressive model (SAR), the spatial error model (SEM) and the spatial autoregressive moving average (SARMA) model. The spatial Durbin model (SDM) is an extension of the SAR model in which both the dependent and the independent variables have spatial influence. In this research, the dependent variable is the number of DHF cases. The independent variables observed are population density, the numbers of hospitals, residents and health centers, and mean years of schooling. From the multiple regression model test, the variables that significantly affect the spread of DHF are population and mean years of schooling. Using queen contiguity and rook contiguity, the best model produced is the SDM with queen contiguity, because it has the smallest AIC value of 494.12. The factors that generally affect the spread of DHF in Central Java Province are population size and mean years of schooling.
A Case Study to Improve Emergency Room Patient Flow at Womack Army Medical Center
2009-06-01
use just the previous month, moving average 2-month period (MA2) uses the average from the previous two months, moving average 3-month period (MA3)... ED prior to discharge by provider) MA2/MA3/MA4 - moving averages of 2-4 months in length; MAD - mean absolute deviation (measure of accuracy for
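A brief Python sketch of the MA2/MA3 forecasts and the MAD accuracy measure mentioned above, using an invented monthly volume series: the forecast for each month is the mean of the previous two (or three) months, and MAD is the mean absolute difference between forecasts and actuals.

```python
# Hedged sketch of MA2/MA3 moving-average forecasts and mean absolute deviation (MAD).
import pandas as pd

volumes = pd.Series([3100, 3250, 3050, 3300, 3400, 3200, 3350, 3500, 3450, 3600, 3550, 3700])

ma2 = volumes.rolling(2).mean().shift(1)   # forecast for month t = mean of months t-1, t-2
ma3 = volumes.rolling(3).mean().shift(1)   # forecast for month t = mean of months t-1..t-3

mad2 = (volumes - ma2).abs().mean()        # mean absolute deviation of the MA2 forecasts
mad3 = (volumes - ma3).abs().mean()
print(f"MAD (MA2): {mad2:.1f}   MAD (MA3): {mad3:.1f}")
```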
Xu, Dandan; Zhang, Yi; Zhou, Lian; Li, Tiantian
2018-03-17
The association between exposure to ambient particulate matter (PM) and reduced lung function parameters has been reported in many works. However, few studies have been conducted in developing countries with high levels of air pollution like China, and little attention has been paid to the acute effects of short-term exposure to air pollution on lung function. The study design consisted of a panel comprising 86 children from the same school in Nanjing, China. Four measurements of lung function were performed. A mixed-effects regression model with study participant as a random effect was used to investigate the relationship between PM2.5 and lung function. An increase in the current-day, 1-day and 2-day moving average PM2.5 concentration was associated with decreases in lung function indicators. The greatest effect of PM2.5 on lung function was detected at the 1-day moving average PM2.5 exposure. An increase of 10 μg/m3 in the 1-day moving average PM2.5 concentration was associated with a 23.22 mL decrease (95% CI: 13.19, 33.25) in Forced Vital Capacity (FVC), an 18.93 mL decrease (95% CI: 9.34, 28.52) in 1-s Forced Expiratory Volume (FEV1), a 29.38 mL/s decrease (95% CI: -0.40, 59.15) in Peak Expiratory Flow (PEF), and a 27.21 mL/s decrease (95% CI: 8.38, 46.04) in forced expiratory flow 25-75% (FEF25-75%). The effects of PM2.5 on lung function showed significant lags. After an air pollution event, the health effects last for several days and attention still needs to be paid to health protection.
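A hedged Python sketch of the panel analysis: a mixed-effects model with a random intercept per child regresses FVC on the 1-day moving average of PM2.5. The data frame, column names, and effect size are invented stand-ins, not the study's dataset.

```python
# Hedged sketch: mixed-effects regression of lung function on moving-average PM2.5.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_children, n_visits = 86, 4
df = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_visits),
    "pm25_ma1": rng.uniform(30, 120, n_children * n_visits),   # 1-day moving average PM2.5
})
df["fvc_ml"] = 2500 - 2.3 * df["pm25_ma1"] + rng.normal(0, 150, len(df))

fit = smf.mixedlm("fvc_ml ~ pm25_ma1", df, groups=df["child"]).fit()
print(fit.params["pm25_ma1"] * 10, "mL change per 10 µg/m3 increase (synthetic data)")
```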
Short-Term Exposure to Air Pollution and Biomarkers of Oxidative Stress: The Framingham Heart Study.
Li, Wenyuan; Wilker, Elissa H; Dorans, Kirsten S; Rice, Mary B; Schwartz, Joel; Coull, Brent A; Koutrakis, Petros; Gold, Diane R; Keaney, John F; Lin, Honghuang; Vasan, Ramachandran S; Benjamin, Emelia J; Mittleman, Murray A
2016-04-28
Short-term exposure to elevated air pollution has been associated with higher risk of acute cardiovascular diseases, with systemic oxidative stress induced by air pollution hypothesized as an important underlying mechanism. However, few community-based studies have assessed this association. Two thousand thirty-five Framingham Offspring Cohort participants living within 50 km of the Harvard Boston Supersite who were not current smokers were included. We assessed circulating biomarkers of oxidative stress including blood myeloperoxidase at the seventh examination (1998-2001) and urinary creatinine-indexed 8-epi-prostaglandin F2α (8-epi-PGF2α) at the seventh and eighth (2005-2008) examinations. We measured fine particulate matter (PM2.5), black carbon, sulfate, nitrogen oxides, and ozone at the Supersite and calculated 1-, 2-, 3-, 5-, and 7-day moving averages of each pollutant. Measured myeloperoxidase and 8-epi-PGF2α were natural-log transformed. We used linear regression models and linear mixed-effects models with random intercepts for myeloperoxidase and indexed 8-epi-PGF2α, respectively. Models were adjusted for demographic variables, individual- and area-level measures of socioeconomic position, clinical and lifestyle factors, weather, and temporal trend. We found positive associations of PM2.5 and black carbon with myeloperoxidase across multiple moving averages. Additionally, 2- to 7-day moving averages of PM2.5 and sulfate were consistently positively associated with 8-epi-PGF2α. Stronger positive associations of black carbon and sulfate with myeloperoxidase were observed among participants with diabetes than in those without. Our community-based investigation supports an association of select markers of ambient air pollution with circulating biomarkers of oxidative stress. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul
2011-07-01
In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric accuracy and dosimetric accuracy of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the γ-test compared with no compensation.
The impact of household cooking and heating with solid fuels on ambient PM2.5 in peri-urban Beijing
NASA Astrophysics Data System (ADS)
Liao, Jiawen; Zimmermann Jin, Anna; Chafe, Zoë A.; Pillarisetti, Ajay; Yu, Tao; Shan, Ming; Yang, Xudong; Li, Haixi; Liu, Guangqing; Smith, Kirk R.
2017-09-01
Household cooking and space heating with biomass and coal have adverse impacts on both indoor and outdoor air quality and are associated with a significant health burden. Though household heating with biomass and coal is common in northern China, the contribution of space heating to ambient air pollution is not well studied. We investigated the impact of space heating on ambient air pollution in a village 40 km southwest of central Beijing during the winter heating season, from January to March 2013. Ambient PM2.5 concentrations and meteorological conditions were measured continuously at rooftop sites in the village during two winter months in 2013. The use of coal- and biomass-burning cookstoves and space heating devices was measured over time with Stove Use Monitors (SUMs) in 33 households and was coupled with fuel consumption data from household surveys to estimate hourly household PM2.5 emissions from cooking and space heating over the same period. We developed a multivariate linear regression model to assess the relationship between household PM2.5 emissions and the hourly average ambient PM2.5 concentration, and a time series autoregressive integrated moving average (ARIMA) regression model to account for autocorrelation. During the heating season, the average hourly ambient PM2.5 concentration was 139 ± 107 μg/m3 (mean ± SD) with strong autocorrelation in hourly concentration. The average primary PM2.5 emission per hour from village household space heating was 0.736 ± 0.138 kg/hour. The linear multivariate regression model indicated that during the heating season - after adjusting for meteorological effects - 39% (95% CI: 26%, 54%) of hourly averaged ambient PM2.5 was associated with household space heating emissions from the previous hour. Our study suggests that a comprehensive pollution control strategy for northern China, including Beijing, should address uncontrolled emissions from household solid fuel combustion in surrounding areas, particularly during the winter heating season.
A Simple Introduction to Moving Least Squares and Local Regression Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao Veerabhadra
In this brief note, a highly simplified introduction to estimating functions over a set of particles is presented. The note starts from Global Least Squares fitting, going on to Moving Least Squares estimation (MLS) and finally, Local Regression Estimation (LRE).
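A minimal one-dimensional moving least squares / local regression sketch in Python, assuming scattered particle data: at each evaluation point a linear polynomial is fitted with Gaussian weights that decay with distance, so the fit "moves" with the point. The bandwidth and data are illustrative.

```python
# Hedged moving least squares / local linear regression sketch in 1D.
import numpy as np

def mls_estimate(x_eval, x_data, f_data, h=0.2):
    """Local linear fit at x_eval with Gaussian weights of bandwidth h."""
    w = np.exp(-((x_data - x_eval) / h) ** 2)        # moving weights
    A = np.column_stack([np.ones_like(x_data), x_data - x_eval])
    W = np.diag(w)
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ f_data)
    return coef[0]                                   # value of the local fit at x_eval

x = np.linspace(0, 1, 50)
f = np.sin(2 * np.pi * x) + np.random.default_rng(7).normal(0, 0.1, 50)
print(mls_estimate(0.5, x, f))                       # smoothed estimate near x = 0.5
```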
Notes sur les mouvements recursifs (Notes on Regressive Moves).
ERIC Educational Resources Information Center
Auchlin, Antoine; And Others
1981-01-01
Examines the phenomenon of regressive moves (retro-interpretation) in the light of a hypothesis according to which the formation of complex and hierarchically organized conversation units is subordinated to the linearity of discourse. Analyzes a transactional exchange, describing the interplay of integration, anticipation, and retro-interpretation…
NASA Astrophysics Data System (ADS)
Sivavaraprasad, G.; Venkata Ratnam, D.
2017-07-01
Ionospheric delay is one of the major atmospheric effects on the performance of satellite-based radio navigation systems. It limits the accuracy and availability of Global Positioning System (GPS) measurements related to critical societal and safety applications. The temporal and spatial gradients of ionospheric total electron content (TEC) are driven by several unknown a priori geophysical conditions and solar-terrestrial phenomena. Thus, predicting ionospheric delay is challenging, especially over the Indian subcontinent, and an appropriate short/long-term ionospheric delay forecasting model is necessary. Hence, the intent of this paper is to forecast ionospheric delays by considering day-to-day, monthly and seasonal ionospheric TEC variations. GPS-TEC data (January 2013-December 2013) are extracted from a multi-frequency GPS receiver established at the K L University, Vaddeswaram, Guntur station (geographic: 16.37°N, 80.37°E; geomagnetic: 7.44°N, 153.75°E), India. An evaluation, in terms of forecasting capabilities, of three ionospheric time delay models - an Auto Regressive Moving Average (ARMA) model, an Auto Regressive Integrated Moving Average (ARIMA) model, and a Holt-Winters model - is presented. The performances of these models are evaluated through error measurement analysis during both geomagnetically quiet and disturbed days. It is found that the ARMA model forecasts the ionospheric delay effectively, with an accuracy of 82-94%, about 10% better than the ARIMA and Holt-Winters models. Moreover, the modeled VTEC derived from the International Reference Ionosphere (IRI-2012) model and the new global TEC model, the Neustrelitz TEC Model (NTCM-GL), was compared with the forecasted VTEC values of the ARMA, ARIMA and Holt-Winters models during geomagnetically quiet days. The forecast results indicate that the ARMA model would be useful for setting up an early warning system for ionospheric disturbances in low-latitude regions.
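The comparison of forecaster families can be sketched in a few lines of Python on a synthetic TEC-like series with a 24-hour cycle; the ARMA order and the additive seasonal Holt-Winters configuration are assumptions chosen only to illustrate how such an evaluation is set up.

```python
# Hedged sketch: ARMA vs Holt-Winters one-day-ahead comparison on a synthetic series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(8)
t = np.arange(24 * 60)                                              # 60 days of hourly data
vtec = 20 + 8 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

train, test = vtec[:-24], vtec[-24:]
arma_fc = ARIMA(train, order=(2, 0, 1)).fit().forecast(24)
hw_fc = ExponentialSmoothing(train, trend=None, seasonal="add",
                             seasonal_periods=24).fit().forecast(24)

for name, fc in [("ARMA", arma_fc), ("Holt-Winters", hw_fc)]:
    print(name, "MAE:", np.mean(np.abs(test - fc)))
```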
The economic impact of a smoke-free bylaw on restaurant and bar sales in Ottawa, Canada.
Luk, Rita; Ferrence, Roberta; Gmel, Gerhard
2006-05-01
On 1 August 2001, the City of Ottawa (Canada's Capital) implemented a smoke-free bylaw that completely prohibited smoking in work-places and public places, including restaurants and bars, with no exemption for separately ventilated smoking rooms. This paper evaluates the effects of this bylaw on restaurant and bar sales. DATA AND MEASURES: We used retail sales tax data from March 1998 to June 2002 to construct two outcome measures: the ratio of licensed restaurant and bar sales to total retail sales and the ratio of unlicensed restaurant sales to total retail sales. Restaurant and bar sales were subtracted from total retail sales in the denominator of these measures. We employed an interrupted time-series design. Autoregressive integrated moving average (ARIMA) intervention analysis was used to test for three possible impacts that the bylaw might have on the sales of restaurants and bars. We repeated the analysis using regression with autoregressive moving average (ARMA) errors method to triangulate our results. Outcome measures showed declining trends at baseline before the bylaw went into effect. Results from ARIMA intervention and regression analyses did not support the hypotheses that the smoke-free bylaw had an impact that resulted in (1) abrupt permanent, (2) gradual permanent or (3) abrupt temporary changes in restaurant and bar sales. While a large body of research has found no significant adverse impact of smoke-free legislation on restaurant and bar sales in the United States, Australia and elsewhere, our study confirms these results in a northern region with a bilingual population, which has important implications for impending policy in Europe and other areas.
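A hedged Python sketch of ARIMA intervention analysis: a step dummy that switches on at the bylaw date enters a SARIMAX model as an exogenous regressor, so its coefficient estimates an abrupt permanent shift in the sales ratio. The series, the ARMA order, and the (null) effect are synthetic, not the Ottawa data.

```python
# Hedged intervention-analysis sketch: ARMA errors plus a step dummy at the bylaw date.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(9)
months = pd.period_range("1998-03", "2002-06", freq="M")
ratio = pd.Series(0.05 - 0.00005 * np.arange(len(months)) + rng.normal(0, 0.002, len(months)),
                  index=months.to_timestamp())       # bar/restaurant share of retail sales

step = (ratio.index >= "2001-08-01").astype(float)   # 0 before the bylaw, 1 after

fit = SARIMAX(ratio, exog=step, order=(1, 0, 1)).fit(disp=False)
print(fit.params)        # the exog coefficient tests for an abrupt permanent change
```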
Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui
2016-01-01
Background: Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods: The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. Results: The morbidity of hepatitis from January 2005 to December 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)12 model was the most appropriate one, with the residual test showing a white noise sequence. The smoothing factors of the basic GRNN model and the combined model were 1.8 and 0.07, respectively. The four parameters of the hybrid model were lower than those of the two single models in the validation. The parameter values of the GRNN model were the lowest in the fitting of the three models. Conclusions: The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County. PMID:27258555
Peng, Ying; Yu, Bin; Wang, Peng; Kong, De-Guang; Chen, Bang-Hua; Yang, Xiao-Bing
2017-12-01
Outbreaks of hand-foot-mouth disease (HFMD) have occurred many times and caused a serious health burden in China since 2008. Application of modern information technology to prediction and early response can be helpful for efficient HFMD prevention and control. A seasonal auto-regressive integrated moving average (ARIMA) model for time series analysis was designed in this study. Eighty-four months (from January 2009 to December 2015) of retrospective data obtained from the Chinese Information System for Disease Prevention and Control were subjected to ARIMA modeling. The coefficient of determination (R2), normalized Bayesian Information Criterion (BIC) and Q-test P value were used to evaluate the goodness-of-fit of the constructed models. Subsequently, the best-fitted ARIMA model was applied to predict the expected incidence of HFMD from January 2016 to December 2016. The best-fitted seasonal ARIMA model was identified as (1,0,1)(0,1,1)12, with the largest coefficient of determination (R2=0.743) and the lowest normalized BIC (BIC=3.645) value. The residuals of the model also showed non-significant autocorrelations (Ljung-Box Q test, P=0.299). The predictions by the optimum ARIMA model adequately captured the pattern in the data and exhibited two peaks of activity over the forecast interval, including a major peak during April to June, and a lighter peak from September to November. The ARIMA model proposed in this study can forecast the HFMD incidence trend effectively, which could provide useful support for future HFMD prevention and control in the study area. In addition, further observations should be added continually to the modeling data set, and the parameters of the models should be adjusted accordingly.
O'Leary, D D; Lin, D C; Hughson, R L
1999-09-01
The heart rate component of the arterial baroreflex gain (BRG) was determined with auto-regressive moving-average (ARMA) analysis during each of spontaneous (SB) and random breathing (RB) protocols. Ten healthy subjects completed each breathing pattern on two different days in each of two different body positions, supine (SUP) and head-up tilt (HUT). The R-R interval, systolic arterial pressure (SAP) and instantaneous lung volume were recorded continuously. BRG was estimated from the ARMA impulse response relationship of R-R interval to SAP and from the spontaneous sequence method. The results indicated that both the ARMA and spontaneous sequence methods were reproducible (r = 0.76 and r = 0.85, respectively). As expected, BRG was significantly less in the HUT compared to SUP position for both ARMA (mean +/- SEM; 3.5 +/- 0.3 versus 11.2 +/- 1.4 ms mmHg-1; P < 0.01) and spontaneous sequence analysis (10.3 +/- 0.8 versus 31.5 +/- 2.3 ms mmHg-1; P < 0.001). However, no significant difference was found between BRG during RB and SB protocols for either ARMA (7.9 +/- 1.4 versus 6.7 +/- 0.8 ms mmHg-1; P = 0.27) or spontaneous sequence methods (21.8 +/- 2.7 versus 20.0 +/- 2.1 ms mmHg-1; P = 0.24). BRG was correlated during RB and SB protocols (r = 0.80; P < 0.0001). ARMA and spontaneous BRG estimates were correlated (r = 0.79; P < 0.0001), with spontaneous sequence values being consistently larger (P < 0.0001). In conclusion, we have shown that ARMA-derived BRG values are reproducible and that they can be determined during SB conditions, making the ARMA method appropriate for use in a wider range of patients.
NASA Astrophysics Data System (ADS)
Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin
2017-09-01
This paper presents the black-box modelling of a palm oil biodiesel (POB) engine using a multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the algorithm for optimization: minimizing the number of terms in the model structure and minimizing the mean square error between actual and predicted outputs. The mathematical model used in this study to represent the POB system is the nonlinear auto-regressive moving average with exogenous inputs (NARMAX) model. Finally, model validity tests are applied in order to validate the possible models obtained from the MOODE algorithm, leading to the selection of an optimal model.
Ultra-Short-Term Wind Power Prediction Using a Hybrid Model
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
This paper aims to develop and apply a hybrid model of two data-analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking Northeast China electricity demand as an example. The data were obtained from historical records of wind power from an offshore region and from a wind farm of the wind power plant in the area. The WPP is achieved in two stages: first, the ratios of wind power are forecasted using the proposed hybrid method, and then these ratios are transformed to obtain the forecasted wind power values. The hybrid model combines the persistence method, MLR, and LS. The proposed method includes two prediction types, multi-point prediction and single-point prediction. The WPP is tested against different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) models. By comparing the results of the above models, the validity of the proposed hybrid model is confirmed in terms of error and correlation coefficient, and the comparison confirms that the proposed method works effectively. Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.
Neighborhood greenspace and health in a large urban center
NASA Astrophysics Data System (ADS)
Kardan, Omid; Gozdyra, Peter; Misic, Bratislav; Moola, Faisal; Palmer, Lyle J.; Paus, Tomáš; Berman, Marc G.
2015-07-01
Studies have shown that natural environments can enhance health and here we build upon that work by examining the associations between comprehensive greenspace metrics and health. We focused on a large urban population center (Toronto, Canada) and related the two domains by combining high-resolution satellite imagery and individual tree data from Toronto with questionnaire-based self-reports of general health perception, cardio-metabolic conditions and mental illnesses from the Ontario Health Study. Results from multiple regressions and multivariate canonical correlation analyses suggest that people who live in neighborhoods with a higher density of trees on their streets report significantly higher health perception and significantly less cardio-metabolic conditions (controlling for socio-economic and demographic factors). We find that having 10 more trees in a city block, on average, improves health perception in ways comparable to an increase in annual personal income of $10,000 and moving to a neighborhood with $10,000 higher median income or being 7 years younger. We also find that having 11 more trees in a city block, on average, decreases cardio-metabolic conditions in ways comparable to an increase in annual personal income of $20,000 and moving to a neighborhood with $20,000 higher median income or being 1.4 years younger.
NASA Astrophysics Data System (ADS)
Abunama, Taher; Othman, Faridah
2017-06-01
Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee sufficient treatment of wastewater before discharging it to the environment. The main objectives of this study are to statistically analyze and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years of weekly influent data (156 weeks) has been conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) were tried to select the best-fitting model, which was then utilized to forecast the wastewater inflow rates. Linear regression analysis was applied to test the correlation between the observed and predicted influents. The ARIMA(3,1,3) model was selected, having the highest significant R-squared and the lowest normalized Bayesian Information Criterion (BIC) value, and accordingly the wastewater inflow rates were forecasted for an additional 52 weeks. The linear regression analysis between the observed and predicted values of the wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.
Reconstruction of missing daily streamflow data using dynamic regression models
NASA Astrophysics Data System (ADS)
Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault
2015-12-01
River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short data-gaps in this information can cause extremely different analysis outputs. Therefore, reconstructing missing data of incomplete data sets is an important step regarding the performance of the environmental models, engineering, and research applications, thus it presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access to only daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average models (ARIMA) called dynamic regression model. This model uses the linear relationship between neighbor and correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow data for the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
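A small Python sketch of the dynamic regression idea: regress the target station on a correlated neighbour and let the residual follow an ARIMA process, estimated jointly here with SARIMAX. The station series and the (1,1,1) order are synthetic assumptions.

```python
# Hedged sketch of regression with ARIMA errors ("dynamic regression") for
# reconstructing a streamflow series from a correlated neighbour station.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(10)
neighbour = 50 + 10 * np.sin(np.arange(1000) / 50) + rng.normal(0, 2, 1000)
target = 0.8 * neighbour + np.cumsum(rng.normal(0, 0.3, 1000))   # correlated daily flows

fit = SARIMAX(target, exog=neighbour, order=(1, 1, 1)).fit(disp=False)
reconstruction = fit.fittedvalues            # in-sample estimates usable to fill short gaps
print(fit.params)
```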
25 CFR 700.173 - Average net earnings of business or farm.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...
25 CFR 700.173 - Average net earnings of business or farm.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin
2017-08-01
Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, owing to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with a stable average value (DFS-SAV). Secondly, the reasonable signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (||x||_2^2) defined in classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
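For context, the sketch below shows the classical Tikhonov-regularized least-squares solution with the standard ||x||_2^2 penalty that the paper sets out to improve; the system matrix, noise level, and regularization parameter are random, illustrative stand-ins for a bridge response model.

```python
# Hedged sketch of classical Tikhonov regularization: min ||Ax - b||^2 + lam*||x||^2.
import numpy as np

rng = np.random.default_rng(11)
A = rng.normal(size=(200, 50)) @ np.diag(np.logspace(0, -6, 50))   # ill-conditioned system
x_true = np.ones(50)
b = A @ x_true + rng.normal(0, 1e-3, 200)

lam = 1e-4                                           # regularization parameter (assumed)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(50), A.T @ b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```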
Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.
Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah
2016-01-01
The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
NASA Astrophysics Data System (ADS)
Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum
2017-04-01
We analyze the relations among the parameters of the moving average method to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If the external events have a unique vibration frequency, the control parameters of the moving average method should be optimized in order to detect these events efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light-receiving part, which has a photo-detector and a high-speed data acquisition system. The moving average method is operated with three control parameters: the total number of raw traces, M; the number of traces per average, N; and the moving step size, n. The raw traces are obtained by the phase-sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation among the control parameters is analyzed. The results show that, if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
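A short Python sketch of the averaging scheme with the three control parameters named above: from M raw traces, average N consecutive traces and advance the window by n traces. The trace data here are random stand-ins for the OTDR output.

```python
# Hedged sketch of the moving-average scheme with parameters M (total traces),
# N (traces per average), and n (step size).
import numpy as np

rng = np.random.default_rng(12)
M, N, n = 1000, 50, 10                 # assumed parameter values
traces = rng.normal(0, 1, (M, 2048))   # M raw traces, 2048 samples each

averaged = np.array([traces[i:i + N].mean(axis=0) for i in range(0, M - N + 1, n)])
print(averaged.shape)                  # number of averaged traces x samples per trace
```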
The Economic Impact of Malignant Catarrhal Fever on Pastoralist Livelihoods
Lankester, Felix; Lugelo, Ahmed; Kazwala, Rudovick; Keyyu, Julius; Cleaveland, Sarah; Yoder, Jonathan
2015-01-01
This study is the first to partially quantify the potential economic benefits that a vaccine, effective at protecting cattle against malignant catarrhal fever (MCF), could accrue to pastoralists living in East Africa. The benefits would result from the removal of household resource and management costs that are traditionally incurred avoiding the disease. MCF, a fatal disease of cattle caused by a virus transmitted from wildebeest calves, has plagued Maasai communities in East Africa for generations. The threat of the disease forces the Maasai to move cattle to less productive grazing areas to avoid wildebeest during the calving season, when forage quality is critical. To assess the management and resource costs associated with moving, we used household survey data. To estimate the costs associated with changes in livestock body condition that result from being herded away from wildebeest calving grounds, we exploited an ongoing MCF vaccine field trial and used a hedonic price regression, a statistical model that allows estimation of the marginal contribution of a good's attributes to its market price. We found that 90 percent of households move, on average, 82 percent of all cattle away from home to avoid MCF. In doing so, the herd's productive contribution to the household is reduced, with 64 percent of milk being unavailable for sale or consumption by the family members remaining at the boma (the children, women, and the elderly). In contrast, cattle that remained on the wildebeest calving grounds during the calving season (and survived MCF) remained fully productive to the family and gained body condition compared with cattle that moved away. This gain was, however, short-lived. We estimated the market value of these condition gains and losses using hedonic regression. The value of a vaccine for MCF is the removal of the costs incurred in avoiding the disease. PMID:25629896
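A minimal hedonic regression sketch in Python: the log of the cattle market price is regressed on attributes including body condition, so the condition coefficient approximates the proportional price change per unit of condition. Variables, data, and coefficients are invented for illustration.

```python
# Hedged hedonic price regression sketch: log price regressed on cattle attributes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
n = 300
df = pd.DataFrame({
    "condition": rng.integers(1, 6, n),            # body condition score 1-5 (assumed scale)
    "age_years": rng.uniform(1, 8, n),
    "male": rng.integers(0, 2, n),
})
df["log_price"] = (5.0 + 0.08 * df["condition"] + 0.05 * df["age_years"]
                   + 0.10 * df["male"] + rng.normal(0, 0.1, n))

fit = smf.ols("log_price ~ condition + age_years + male", data=df).fit()
print(fit.params["condition"])   # ~ proportional price change per unit of body condition
```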
Makrides, Lydia; Smith, Steven; Allt, Jane; Farquharson, Jane; Szpilfogel, Claudine; Curwin, Sandra; Veinot, Paula; Wang, Feifei; Edington, Dee
2011-07-01
To examine the relationship between health risks, absenteeism, and drug costs in the context of comprehensive workplace wellness. Eleven health risks and changes in drug claims and in short-term and general illness absence were calculated across four risk-change groups. Wellness scores were examined using the Wilcoxon test and a regression model for cost change. The results showed 31% of employees at risk; 9 of 11 risks were associated with higher drug costs. Employees moving from low to high risk showed the highest relative increase (81%) in drug costs; those moving from high to low had the lowest (24%). The low-to-high group had the highest increase in absenteeism costs (160%). With each risk increase, absenteeism costs increased by $CDN248 per year (P < 0.05), with an average decrease of 0.07 risk factors and savings of $CDN6979 per year. Both high-risk reduction and low-risk maintenance are important to contain drug costs. Only low-risk maintenance also avoids the absenteeism costs associated with high risks.
Li, Wenyuan; Dorans, Kirsten S; Wilker, Elissa H; Rice, Mary B; Ljungman, Petter L; Schwartz, Joel D; Coull, Brent A; Koutrakis, Petros; Gold, Diane R; Keaney, John F; Vasan, Ramachandran S; Benjamin, Emelia J; Mittleman, Murray A
2017-09-01
The objective of this study is to examine associations between short-term exposure to ambient air pollution and circulating biomarkers of systemic inflammation in participants from the Framingham Offspring and Third Generation cohorts in the greater Boston area. We included 3996 noncurrent smoking participants (mean age, 53.6 years; 54% women) who lived within 50 km from a central air pollution monitoring site in Boston, MA, and calculated the 1- to 7-day moving averages of fine particulate matter (diameter<2.5 µm), black carbon, sulfate, nitrogen oxides, and ozone before the examination visits. We used linear mixed effects models for C-reactive protein and tumor necrosis factor receptor 2, which were measured up to twice for each participant; we used linear regression models for interleukin-6, fibrinogen, and tumor necrosis factor α, which were measured once. We adjusted for demographics, socioeconomic position, lifestyle, time, and weather. The 3- to 7-day moving averages of fine particulate matter (diameter<2.5 µm) and sulfate were positively associated with C-reactive protein concentrations. A 5 µg/m 3 higher 5-day moving average fine particulate matter (diameter<2.5 µm) was associated with 4.2% (95% confidence interval: 0.8, 7.6) higher circulating C-reactive protein. Positive associations were also observed for nitrogen oxides with interleukin-6 and for black carbon, sulfate, and ozone with tumor necrosis factor receptor 2. However, black carbon, sulfate, and nitrogen oxides were negatively associated with fibrinogen, and sulfate was negatively associated with tumor necrosis factor α. Higher short-term exposure to relatively low levels of ambient air pollution was associated with higher levels of C-reactive protein, interleukin-6, and tumor necrosis factor receptor 2 but not fibrinogen or tumor necrosis factor α in individuals residing in the greater Boston area. © 2017 American Heart Association, Inc.
Pre-Drinking and the Temporal Gradient of Intoxication in a New Zealand Nightlife Environment.
Cameron, Michael P; Roskruge, Matthew J; Droste, Nic; Miller, Peter G
2018-01-01
We measured changes in the average level of intoxication over time in the nighttime economy and identified the factors associated with intoxication, including pre-drinking. A random intercept sample of 320 pedestrians (105 women; 215 men) was interviewed and received breath alcohol analysis in the nighttime economy of Hamilton, New Zealand. Data were collected over a five-night period, between 7 P.M. and 2:30 A.M. Data were analyzed by plotting the moving average breath alcohol concentration (BrAC) over time and using linear regression models to identify the factors associated with BrAC. Mean BrAC was 241.5 mcg/L for the full sample; 179.7 for women and 271.7 for men, which is a statistically significant difference. Mean BrAC was also significantly higher among those who engaged in pre-drinking than those who did not. In the regression models, time of night and pre-drinking were significantly associated with higher BrAC. The effect of pre-drinking on BrAC was larger for women than for men. The average level of intoxication increases throughout the night. However, this masks a potentially important gender difference, in that women's intoxication levels stop increasing after midnight, whereas men's increase continuously through the night. Similarly, intoxication of pre-drinkers stops increasing from 11 P.M., although remaining higher than non-pre-drinkers throughout the night. Analysis of BrAC provides a more nuanced understanding of intoxication levels in the nighttime economy.
Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin
2014-01-01
Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow or fast moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average-speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow or fast moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average speed people relative to slow or fast moving people. PMID:24421882
NASA Astrophysics Data System (ADS)
Masselot, Pierre; Chebana, Fateh; Bélanger, Diane; St-Hilaire, André; Abdous, Belkacem; Gosselin, Pierre; Ouarda, Taha B. M. J.
2018-07-01
In environmental epidemiology studies, health response data (e.g. hospitalization or mortality) are often noisy because of hospital organization and other social factors. The noise in the data can hide the true signal related to the exposure. The signal can be unveiled by performing a temporal aggregation on health data and then using it as the response in regression analysis. From aggregated series, a general methodology is introduced to account for the particularities of an aggregated response in a regression setting. This methodology can be used with usually applied regression models in weather-related health studies, such as generalized additive models (GAM) and distributed lag nonlinear models (DLNM). In particular, the residuals are modelled using an autoregressive-moving average (ARMA) model to account for the temporal dependence. The proposed methodology is illustrated by modelling the influence of temperature on cardiovascular mortality in Canada. A comparison with classical DLNMs is provided and several aggregation methods are compared. Results show that there is an increase in the fit quality when the response is aggregated, and that the estimated relationship focuses more on the outcome over several days than the classical DLNM. More precisely, among various investigated aggregation schemes, it was found that an aggregation with an asymmetric Epanechnikov kernel is more suited for studying the temperature-mortality relationship.
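A small Python sketch of the aggregation step with a one-sided (asymmetric) Epanechnikov kernel, applied to a synthetic daily mortality series before it would enter the regression; the kernel width and data are assumptions.

```python
# Hedged sketch of temporal aggregation of a health response with an asymmetric
# Epanechnikov kernel (weights on the current and following days only).
import numpy as np

def epanechnikov_weights(width):
    u = np.linspace(0, 1, width, endpoint=False)   # one-sided (asymmetric) support
    w = 0.75 * (1 - u ** 2)
    return w / w.sum()

rng = np.random.default_rng(14)
deaths = rng.poisson(30, 365).astype(float)        # synthetic daily mortality counts

w = epanechnikov_weights(7)                        # assumed 7-day aggregation window
aggregated = np.array([np.dot(deaths[t:t + 7], w) for t in range(len(deaths) - 6)])
print(aggregated[:5])                              # aggregated response for the regression
```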
Static and moving solid/gas interface modeling in a hybrid rocket engine
NASA Astrophysics Data System (ADS)
Mangeot, Alexandre; William-Louis, Mame; Gillard, Philippe
2018-07-01
A numerical model was developed with the CFD-ACE software to study the working conditions of an oxygen-nitrogen/polyethylene hybrid rocket combustor. As a first approach, a simplified numerical model is presented. It includes a compressible transient gas phase in which a two-step combustion mechanism is implemented, coupled to a radiative model. The solid phase from the fuel grain is a semi-opaque material whose degradation process is modeled by an Arrhenius-type law. Two versions of the model were tested. The first considers the solid/gas interface with a static grid, while the second uses grid deformation during the computation to follow the asymmetrical regression. The numerical results are obtained with two different regression kinetics, originating from thermogravimetric analysis and from test bench results. In each case, the fuel surface temperature is retrieved within a range of 5% error. However, good results are only found using the kinetics from the test bench. The regression rate is found to within 0.03 mm s-1, and the average combustor pressure and its variation over time are of the same magnitude as the measurements conducted on the test bench. The simulation that uses grid deformation to follow the regression shows good stability over a 10 s simulated time.
Examination of the Armagh Observatory Annual Mean Temperature Record, 1844-2004
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
The long-term annual mean temperature record (1844-2004) of the Armagh Observatory (Armagh, Northern Ireland, United Kingdom) is examined for evidence of systematic variation, in particular, as related to solar/geomagnetic forcing and secular variation. Indeed, both are apparent in the temperature record. Ten-year moving averages of temperature are found to correlate highly with 10-year moving averages of both the aa-geomagnetic index and sunspot number, having correlation coefficients of approx. 0.7, implying that nearly half the variance in the 10-year moving average of temperature can be explained by solar/geomagnetic forcing. The residuals appear episodic in nature, with cooling seen in the 1880s and again near 1980. Seven of the last 10 years of the temperature record have exceeded 10 C, unprecedented in the overall record. Variation of sunspot cyclic averages and 2-cycle moving averages of temperature strongly associate with similar averages for the solar/geomagnetic cycle, with the residuals displaying an apparent 9-cycle variation and a steep rise in temperature associated with cycle 23. Hale cycle averages of temperature for even-odd pairs of sunspot cycles correlate with similar averages for the solar/geomagnetic cycle and, especially, with the length of the Hale cycle. Indications are that annual mean temperature will likely exceed 10 C over the next decade.
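A minimal sketch of the kind of computation reported above, 10-year moving averages and their Pearson correlation, using synthetic stand-in series rather than the Armagh or sunspot records themselves.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
years = np.arange(1844, 2005)
df = pd.DataFrame({
    "temp_c": 9.0 + 0.4 * np.sin(2 * np.pi * years / 90) + rng.normal(0, 0.4, years.size),
    "sunspot_number": 80 + 60 * np.sin(2 * np.pi * years / 11) + rng.normal(0, 20, years.size),
}, index=years)   # illustrative stand-ins for the annual records

smooth = df.rolling(window=10, center=True).mean().dropna()
r = smooth["temp_c"].corr(smooth["sunspot_number"])
print(f"correlation of 10-year moving averages: r = {r:.2f}, r^2 = {r**2:.2f}")
```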
Photonic single nonlinear-delay dynamical node for information processing
NASA Astrophysics Data System (ADS)
Ortín, Silvia; San-Martín, Daniel; Pesquera, Luis; Gutiérrez, José Manuel
2012-06-01
An electro-optical system with a delay loop based on semiconductor lasers is investigated for information processing by performing numerical simulations. This system can replace a complex network of many nonlinear elements for the implementation of Reservoir Computing. We show that a single nonlinear-delay dynamical system has the basic properties needed to perform as a reservoir: short-term memory and the separation property. The computing performance of this system is evaluated for two prediction tasks: the Lorenz chaotic time series and the nonlinear auto-regressive moving average (NARMA) model. We sweep the parameters of the system to find the best performance. The results achieved for the Lorenz and the NARMA-10 tasks are comparable to those obtained by other machine learning methods.
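The NARMA-10 task mentioned above is a standard benchmark; a commonly used form of its generating recursion is sketched below (the exact variant used in the paper may differ).

```python
import numpy as np

def narma10(n, seed=0):
    """Generate input u and target y for a commonly used NARMA-10 benchmark:
    y[t+1] = 0.3*y[t] + 0.05*y[t]*sum(y[t-9:t+1]) + 1.5*u[t-9]*u[t] + 0.1,
    with u drawn uniformly from [0, 0.5]."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, size=n)
    y = np.zeros(n)
    for t in range(9, n - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

u, y = narma10(2000)   # a reservoir is then trained to reproduce y from the input history u
```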
Forecasting of Water Consumptions Expenditure Using Holt-Winter’s and ARIMA
NASA Astrophysics Data System (ADS)
Razali, S. N. A. M.; Rusiman, M. S.; Zawawi, N. I.; Arbin, N.
2018-04-01
This study is carried out to forecast the water consumption expenditure of a Malaysian university, specifically University Tun Hussein Onn Malaysia (UTHM). The proposed Holt-Winter's and Auto-Regressive Integrated Moving Average (ARIMA) models were applied to forecast the water consumption expenditure in Ringgit Malaysia from 2006 until 2014. The two models were compared using the Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD) as performance measures. It is found that the ARIMA model gave better forecast accuracy, with lower values of MAPE and MAD. The analysis showed that an ARIMA (2,1,4) model provided a reasonable forecasting tool for university campus water usage.
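A minimal sketch of fitting an ARIMA(2,1,4) model and scoring it with MAPE and MAD; statsmodels and the synthetic expenditure series are assumptions, since the paper does not state its software or publish the data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Stand-in for 9 years of monthly water expenditure (Ringgit Malaysia)
y = 50000 + np.cumsum(rng.normal(0, 500, size=108))
train, test = y[:-12], y[-12:]            # hold out the last 12 months

fit = ARIMA(train, order=(2, 1, 4)).fit()
forecast = fit.forecast(steps=len(test))

mape = np.mean(np.abs((test - forecast) / test)) * 100.0
mad = np.mean(np.abs(test - forecast))
print(f"MAPE = {mape:.2f}%  MAD = {mad:.2f}")
```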
NASA Astrophysics Data System (ADS)
Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.
1988-10-01
A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
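In its usual form, the Complex Mode Indicator Function is the set of singular values of the multi-reference FRF matrix at each spectral line; a minimal numpy sketch follows, with the array shapes and synthetic FRFs as assumptions.

```python
import numpy as np

def cmif(frf):
    """frf: complex array of shape (n_outputs, n_references, n_frequencies).
    Returns singular values per frequency, shape (min(n_outputs, n_references), n_frequencies).
    Peaks in the leading curves indicate modes; several large singular values at one
    frequency suggest repeated or closely spaced modes (high modal density)."""
    n_out, n_ref, n_freq = frf.shape
    s = np.empty((min(n_out, n_ref), n_freq))
    for k in range(n_freq):
        s[:, k] = np.linalg.svd(frf[:, :, k], compute_uv=False)
    return s

# Example with synthetic multi-reference FRF data
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 2, 512)) + 1j * rng.normal(size=(6, 2, 512))
curves = cmif(H)
```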
Short-term electric power demand forecasting based on economic-electricity transmission model
NASA Astrophysics Data System (ADS)
Li, Wenfeng; Bai, Hongkun; Liu, Wei; Liu, Yongmin; Wang, Yubin Mao; Wang, Jiangbo; He, Dandan
2018-04-01
Short-term electricity demand forecasting is basic work for ensuring safe operation of the power system. In this paper, a practical economic-electricity transmission model (EETM) is built. Using the intelligent adaptive modeling capabilities of Prognoz Platform 7.2, an econometric model consisting of the added value of three industries and income levels is first built, followed by the electricity demand transmission model. Through multiple regression, moving averages and seasonal decomposition, the problem of multiple correlations between variables is effectively overcome in EETM. The validity of EETM is demonstrated by comparison with actual values for Henan Province. Finally, the EETM is used to forecast electricity consumption for the first to fourth quarters of 2018.
Forecasting conditional climate-change using a hybrid approach
Esfahani, Akbar Akbari; Friedel, Michael J.
2014-01-01
A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, at a future time period (2030), and as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale provided that self-similarity exists.
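A minimal sketch of the fractional-differencing filter at the core of an ARFIMA(0, d, 0) process, using the standard binomial-expansion weights; the value of d and the input series are illustrative.

```python
import numpy as np

def frac_diff_weights(d, n):
    """Weights of (1 - B)^d truncated to n terms: w_0 = 1, w_k = -w_{k-1}*(d - k + 1)/k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = -w[k - 1] * (d - k + 1) / k
    return w

def frac_diff(x, d):
    """Apply the fractional difference operator (1 - B)^d to a series x."""
    w = frac_diff_weights(d, len(x))
    y = np.empty(len(x))
    for t in range(len(x)):
        y[t] = np.dot(w[:t + 1], x[t::-1])   # sum over lags: w_k * x[t-k]
    return y

rng = np.random.default_rng(0)
# Negative d integrates white noise into a long-memory (ARFIMA-type) series
long_memory = frac_diff(rng.normal(size=600), d=-0.3)
```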
Deep learning architecture for air quality predictions.
Li, Xiang; Peng, Ling; Hu, Yuan; Shao, Jing; Chi, Tianhe
2016-11-01
With the rapid development of urbanization and industrialization, many developing countries are suffering from heavy air pollution. Governments and citizens have expressed increasing concern regarding air pollution because it affects human health and sustainable development worldwide. Current air quality prediction methods mainly use shallow models; however, these methods produce unsatisfactory results, which inspired us to investigate methods of predicting air quality based on deep architecture models. In this paper, a novel spatiotemporal deep learning (STDL)-based air quality prediction method that inherently considers spatial and temporal correlations is proposed. A stacked autoencoder (SAE) model is used to extract inherent air quality features, and it is trained in a greedy layer-wise manner. Compared with traditional time series prediction models, our model can predict the air quality of all stations simultaneously and shows temporal stability in all seasons. Moreover, a comparison with the spatiotemporal artificial neural network (STANN), auto-regressive moving average (ARMA), and support vector regression (SVR) models demonstrates that the proposed method has superior performance in air quality prediction.
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first step is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes in the business environment after the beginning of the zero accident campaign, using quantitative time series analysis methods. These methods include the sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program was developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide key information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
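A minimal sketch of one of the listed methods, double exponential smoothing in Holt's linear-trend form, applied to an illustrative accident-rate series (the smoothing constants and data are assumptions).

```python
def double_exponential_smoothing(x, alpha=0.4, beta=0.2, horizon=3):
    """Holt's linear method: level and trend are updated each step, then extrapolated."""
    level, trend = x[0], x[1] - x[0]
    for value in x[1:]:
        prev_level = level
        level = alpha * value + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

accident_rate = [0.88, 0.81, 0.77, 0.72, 0.70, 0.65, 0.61]   # illustrative, per 100 workers
print(double_exponential_smoothing(accident_rate))           # forecasts for the next 3 years
```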
Rich, David Q.; Mittleman, Murray A.; Link, Mark S.; Schwartz, Joel; Luttmann-Gibson, Heike; Catalano, Paul J.; Speizer, Frank E.; Gold, Diane R.; Dockery, Douglas W.
2006-01-01
Objectives: We reported previously that 24-hr moving average ambient air pollution concentrations were positively associated with ventricular arrhythmias detected by implantable cardioverter defibrillators (ICDs). ICDs also detect paroxysmal atrial fibrillation episodes (PAF) that result in rapid ventricular rates. In this same cohort of ICD patients, we assessed the association between ambient air pollution and episodes of PAF. Design: We performed a case–crossover study. Participants: Patients who lived in the Boston, Massachusetts, metropolitan area and who had ICDs implanted between June 1995 and December 1999 (n = 203) were followed until July 2002. Evaluations/Measurements: We used conditional logistic regression to explore the association between community air pollution and 91 electrophysiologist-confirmed episodes of PAF among 29 subjects. Results: We found a statistically significant positive association between episodes of PAF and increased ozone concentration (22 ppb) in the hour before the arrhythmia (odds ratio = 2.08; 95% confidence interval = 1.22, 3.54; p = 0.001). The risk estimate for a longer (24-hr) moving average was smaller, thus suggesting an immediate effect. Positive but not statistically significant risks were associated with fine particles, nitrogen dioxide, and black carbon. Conclusions: Increased ambient O3 pollution was associated with increased risk of episodes of rapid ventricular response due to PAF, thereby suggesting that community air pollution may be a precipitant of these events. PMID:16393668
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to depict the situation of the coconut industry in the Philippines in future years by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. The autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data. An appropriate Box-Jenkins autoregressive moving average model was fitted. Validity of the model was tested using standard statistical techniques. The fitted autoregressive moving average (ARMA) model was then used to forecast coconut production for the next eight years.
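A minimal sketch of the Box-Jenkins identification step described above, computing the ACF and PACF of the differenced series with statsmodels (a tooling assumption) on a synthetic stand-in for the production data.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf, adfuller

rng = np.random.default_rng(0)
# Stand-in for 1990-2012 annual coconut production (million metric tons)
production = 14.0 + np.cumsum(rng.normal(0, 0.3, size=23))

# Difference until the series looks stationary (ADF test), then inspect ACF/PACF
d1 = np.diff(production)
print("ADF p-value after first differencing:", adfuller(d1)[1])
print("ACF :", np.round(acf(d1, nlags=6), 2))
print("PACF:", np.round(pacf(d1, nlags=6), 2))
# Cut-off patterns in these functions suggest candidate AR and MA orders for the fit.
```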
Factors associated with residential mobility during pregnancy.
Amoah, Doris K; Nolan, Vikki; Relyea, George; Gurney, James G; Yu, Xinhua; Tylavsky, Frances A; Mzayek, Fawaz
2017-09-18
Our objective was to determine the factors associated with residential moving during pregnancy, as it may increase stress during pregnancy and affect birth outcomes. Data were obtained from the Conditions Affecting Neurocognitive Development and Learning in Early Childhood (CANDLE) study. Participants were recruited from December 2006 to June 2011 and included 1,448 pregnant women. The average gestational age at enrollment was 23 weeks. The primary outcome of residential mobility was defined as any change in address during pregnancy. Multivariate regression was used to assess the adjusted associations of factors with residential mobility. Out of 1,448 participants, approximately 9 percent moved between baseline (enrollment) and delivery. After adjusting for covariates, mothers with lower educational attainment [less than high school (adjusted odds ratio [aOR] = 3.74, 95% confidence interval [CI] = 1.78, 7.85) and high school/technical school (aOR = 3.57, 95% CI = 2.01, 6.32) compared to college degree or higher], and shorter length of residence in neighborhood were more likely to have moved compared to other mothers. Length of residence was protective of mobility (aOR = 0.91, 95% CI = 0.86, 0.96 per year). Increased understanding of residential mobility during pregnancy may help improve the health of mothers and their children.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bovy, Jo; Hogg, David W., E-mail: jo.bovy@nyu.ed
2010-07-10
The velocity distribution of nearby stars (≲100 pc) contains many overdensities or 'moving groups', clumps of comoving stars, that are inconsistent with the standard assumption of an axisymmetric, time-independent, and steady-state Galaxy. We study the age and metallicity properties of the low-velocity moving groups based on the reconstruction of the local velocity distribution in Paper I of this series. We perform stringent, conservative hypothesis testing to establish for each of these moving groups whether it could conceivably consist of a coeval population of stars. We conclude that they do not: the moving groups are neither trivially associated with their eponymous open clusters nor with any other inhomogeneous star formation event. Concerning a possible dynamical origin of the moving groups, we test whether any of the moving groups has a higher or lower metallicity than the background population of thin disk stars, as would generically be the case if the moving groups are associated with resonances of the bar or spiral structure. We find clear evidence that the Hyades moving group has higher than average metallicity and weak evidence that the Sirius moving group has lower than average metallicity, which could indicate that these two groups are related to the inner Lindblad resonance of the spiral structure. Further, we find weak evidence that the Hercules moving group has higher than average metallicity, as would be the case if it is associated with the bar's outer Lindblad resonance. The Pleiades moving group shows no clear metallicity anomaly, arguing against a common dynamical origin for the Hyades and Pleiades groups. Overall, however, the moving groups are barely distinguishable from the background population of stars, raising the likelihood that the moving groups are associated with transient perturbations.
Factors Influencing Willingness to Move: An Examination of Nonmetropolitan Residents.
ERIC Educational Resources Information Center
Swanson, Louis E., Jr.; And Others
1979-01-01
Examining relationships between social restraints and economic incentives on individuals' willingness to move, special attention was given to labor force participation relative to social factors. Regression analysis found that age and community tenure correlated negatively with willingness to move; people who were employed or not yet retired showed…
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
For 1996-2006 (cycle 23), 12-month moving averages of the aa geomagnetic index strongly correlate (r = 0.92) with 12-month moving averages of solar wind speed, and 12-month moving averages of the number of coronal mass ejections (CMEs) (halo and partial halo events) strongly correlate (r = 0.87) with 12-month moving averages of sunspot number. In particular, the minimum (15.8, September/October 1997) and maximum (38.0, August 2003) values of the aa geomagnetic index occur simultaneously with the minimum (376 km/s) and maximum (547 km/s) solar wind speeds, both being strongly correlated with the following recurrent component (due to high-speed streams). The large peak of aa geomagnetic activity in cycle 23, the largest on record, spans the interval late 2002 to mid 2004 and is associated with a decreased number of halo and partial halo CMEs, whereas the smaller secondary peak of early 2005 seems to be associated with a slight rebound in the number of halo and partial halo CMEs. Based on the observed aaM during the declining portion of cycle 23, RM for cycle 24 is predicted to be larger than average, being about 168+/-60 (the 90% prediction interval), whereas based on the expected aam for cycle 24 (greater than or equal to 14.6), RM for cycle 24 should measure greater than or equal to 118+/-30, yielding an overlap of about 128+/-20.
Low-flow characteristics of Virginia streams
Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute
2011-01-01
Low-flow annual non-exceedance probabilities (ANEP), called probability-percent chance (P-percent chance) flow estimates, regional regression equations, and transfer methods are provided describing the low-flow characteristics of Virginia streams. Statistical methods are used to evaluate streamflow data. Analysis of Virginia streamflow data collected from 1895 through 2007 is summarized. Methods are provided for estimating low-flow characteristics of gaged and ungaged streams. The 1-, 4-, 7-, and 30-day average streamgaging station low-flow characteristics for 290 long-term, continuous-record, streamgaging stations are determined, adjusted for instances of zero flow using a conditional probability adjustment method, and presented for non-exceedance probabilities of 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01, and 0.005. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression equations to estimate annual non-exceedance probabilities at gaged and ungaged sites and are summarized for 290 long-term, continuous-record streamgaging stations, 136 short-term, continuous-record streamgaging stations, and 613 partial-record streamgaging stations. Regional regression equations for six physiographic regions use basin characteristics to estimate 1-, 4-, 7-, and 30-day average low-flow annual non-exceedance probabilities at gaged and ungaged sites. Weighted low-flow values that combine computed streamgaging station low-flow characteristics and annual non-exceedance probabilities from regional regression equations provide improved low-flow estimates. Regression equations developed using the Maintenance of Variance with Extension (MOVE.1) method describe the line of organic correlation (LOC) with an appropriate index site for low-flow characteristics at 136 short-term, continuous-record streamgaging stations and 613 partial-record streamgaging stations. Monthly streamflow statistics computed on the individual daily mean streamflows of selected continuous-record streamgaging stations and curves describing flow-duration are presented. Text, figures, and lists are provided summarizing low-flow estimates, selected low-flow sites, delineated physiographic regions, basin characteristics, regression equations, error estimates, definitions, and data sources. This study supersedes previous studies of low flows in Virginia.
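A minimal sketch of the MOVE.1 (line of organic correlation) relation mentioned above, whose slope is the ratio of standard deviations carrying the sign of the correlation; the example flows are illustrative, not from the report.

```python
import numpy as np

def move1(x, y):
    """MOVE.1 / line of organic correlation: preserves variance when transferring records.
    Returns (slope, intercept) for y ≈ intercept + slope * x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Typically applied to log-transformed low flows at a short-record site (y) and an index site (x)
index_q = np.log([3.2, 4.1, 2.8, 5.0, 3.7, 2.5, 4.4])
partial_q = np.log([1.1, 1.6, 0.9, 2.1, 1.3, 0.8, 1.8])
slope, intercept = move1(index_q, partial_q)
```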
NASA Astrophysics Data System (ADS)
Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim
2018-01-01
The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
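A minimal sketch of an exponentially weighted recursive least squares update of the kind underlying the scheme above, shown for an ordinary linear regressor vector without the kernelization or TARMA structure; the forgetting factor and test signal are illustrative.

```python
import numpy as np

class EWRecursiveLeastSquares:
    """Recursive least squares with exponential forgetting factor lam (0 < lam <= 1)."""
    def __init__(self, n_params, lam=0.98, delta=100.0):
        self.theta = np.zeros(n_params)          # parameter estimates
        self.P = delta * np.eye(n_params)        # inverse-correlation matrix
        self.lam = lam

    def update(self, phi, y):
        """phi: regressor vector at time t, y: observed output at time t."""
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        err = y - phi @ self.theta
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# Example: track a slowly drifting two-parameter model y = a*x1 + b*x2
rls = EWRecursiveLeastSquares(n_params=2)
rng = np.random.default_rng(0)
for t in range(500):
    a, b = 1.0 + 0.002 * t, -0.5                 # time-varying "true" parameters
    phi = rng.normal(size=2)
    y = a * phi[0] + b * phi[1] + 0.01 * rng.normal()
    theta = rls.update(phi, y)
```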
Nonlinear System Identification for Aeroelastic Systems with Application to Experimental Data
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
Representation and identification of a nonlinear aeroelastic pitch-plunge system as a model of the Nonlinear AutoRegressive, Moving Average eXogenous (NARMAX) class is considered. A nonlinear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to aeroelastic dynamics and its properties demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (1) the outputs of the NARMAX model closely match those generated using continuous-time methods, and (2) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
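A minimal sketch of fitting a polynomial NARX-type difference equation by linear least squares over lagged input/output regressors; it omits the structure-selection and noise-model steps of full NARMAX identification, and the model orders and simulated data are illustrative.

```python
import numpy as np

def build_regressors(u, y, ny=2, nu=2):
    """Regressor matrix with lagged outputs/inputs plus a few quadratic cross terms."""
    start = max(ny, nu)
    rows, targets = [], []
    for t in range(start, len(y)):
        lags = [y[t - i] for i in range(1, ny + 1)] + [u[t - j] for j in range(1, nu + 1)]
        quad = [y[t - 1] ** 2, y[t - 1] * u[t - 1], u[t - 1] ** 2]
        rows.append(lags + quad + [1.0])
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# A toy nonlinear input/output pair stands in for pitch-plunge data
rng = np.random.default_rng(0)
u = rng.normal(size=1000)
y = np.zeros(1000)
for t in range(2, 1000):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1] - 0.1 * y[t - 1] ** 2

X, target = build_regressors(u, y)
theta, *_ = np.linalg.lstsq(X, target, rcond=None)   # discrete-time parameter estimates
```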
System for monitoring an industrial process and determining sensor status
Gross, K.C.; Hoyer, K.K.; Humenik, K.E.
1995-10-17
A method and system for monitoring an industrial process and a sensor are disclosed. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.
System for monitoring an industrial process and determining sensor status
Gross, K.C.; Hoyer, K.K.; Humenik, K.E.
1997-05-13
A method and system are disclosed for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.
System for monitoring an industrial process and determining sensor status
Gross, Kenneth C.; Hoyer, Kristin K.; Humenik, Keith E.
1995-01-01
A method and system for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.
System for monitoring an industrial process and determining sensor status
Gross, Kenneth C.; Hoyer, Kristin K.; Humenik, Keith E.
1997-01-01
A method and system for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.
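A heavily simplified sketch of the processing chain these patent abstracts describe: a difference function between two redundant signals, a Fourier composite built from its dominant modes, and a Wald sequential probability ratio test on the residual for a mean shift. The injected drift, retained mode count, thresholds, and the stand-in for the ARMA-generated artificial signal are all illustrative assumptions.

```python
import numpy as np

def fourier_composite(x, n_modes=8):
    """Keep only the n_modes largest-magnitude Fourier modes of x (the serially
    correlated 'composite' part); the remainder should be close to white noise."""
    spec = np.fft.rfft(x)
    small = np.argsort(np.abs(spec))[:-n_modes]
    spec[small] = 0.0
    return np.fft.irfft(spec, n=x.size)

def sprt_mean_shift(residual, sigma, m, alpha=0.001, beta=0.001):
    """Wald SPRT for H1: mean = m versus H0: mean = 0 (Gaussian, known sigma).
    Returns the indices at which H1 (sensor degradation) is declared."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr, alarms = 0.0, []
    for t, r in enumerate(residual):
        llr += (m / sigma**2) * (r - m / 2.0)     # log-likelihood-ratio increment
        if llr >= upper:
            alarms.append(t)                      # fault decision, restart test
            llr = 0.0
        elif llr <= lower:
            llr = 0.0                             # normal decision, restart test
    return alarms

rng = np.random.default_rng(0)
n = 4096
process = np.sin(2 * np.pi * np.arange(n) / 512)          # shared process variable
sensor = process + 0.05 * rng.normal(size=n)              # physical sensor
artificial = 0.97 * process + 0.05 * rng.normal(size=n)   # stand-in for the ARMA-generated signal

diff = sensor - artificial                     # difference function (structured mismatch + noise)
residual = diff - fourier_composite(diff)      # subtract composite -> near-white residual

monitored = residual.copy()
monitored[3000:] += 0.1                        # inject a drift to illustrate detection
alarms = sprt_mean_shift(monitored, sigma=residual[:2000].std(), m=0.1)
```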
Performance of univariate forecasting on seasonal diseases: the case of tuberculosis.
Permanasari, Adhistya Erna; Rambli, Dayang Rohaya Awang; Dominic, P Dhanapal Durai
2011-01-01
Predicting the annual number of disease incidents worldwide is desirable so that appropriate policies can be taken to prevent disease outbreaks. This chapter considers the performance of different forecasting methods for predicting the future number of disease incidences, especially for a seasonal disease. Six forecasting methods, namely linear regression, moving average, decomposition, Holt-Winter's, ARIMA, and artificial neural network (ANN), were used for disease forecasting on monthly tuberculosis data. The model derived met the requirements of a time series with a seasonal pattern and a downward trend. Forecasting performance was compared using the same error measures on the forecast results for the last 5 years. The findings indicate that the ARIMA model was the most appropriate model, since it obtained a relatively lower error than the other models.
Parameter estimation of an ARMA model for river flow forecasting using goal programming
NASA Astrophysics Data System (ADS)
Mohammadi, Kourosh; Eslami, H. R.; Kahawita, Rene
2006-11-01
River flow forecasting constitutes one of the most important applications in hydrology. Several methods have been developed for this purpose, and one of the best-known techniques is the autoregressive moving average (ARMA) model. In the research reported here, the goal was to minimize the error for a specific season of the year as well as for the complete series. Goal programming (GP) was used to estimate the ARMA model parameters. Shaloo Bridge station on the Karun River, with 68 years of observed stream flow data, was selected to evaluate the performance of the proposed method. Compared with the usual method of maximum likelihood estimation, the results were favorable for the newly proposed algorithm.
Mehta, Amar J; Kubzansky, Laura D; Coull, Brent A; Kloog, Itai; Koutrakis, Petros; Sparrow, David; Spiro, Avron; Vokonas, Pantel; Schwartz, Joel
2015-01-27
There is mixed evidence suggesting that air pollution may be associated with increased risk of developing psychiatric disorders. We aimed to investigate the association between air pollution and non-specific perceived stress, often a precursor to the development of affective psychiatric disorders. This longitudinal analysis consisted of 987 older men participating in at least one visit for the Veterans Administration Normative Aging Study between 1995 and 2007 (n = 2,244 visits). At each visit, participants were administered the 14-item Perceived Stress Scale (PSS), which quantifies stress experienced in the previous week. Scores ranged from 0-56, with higher scores indicating increased stress. Differences in PSS score per interquartile-range increase in the moving average (1-, 2-, and 4-week) of air pollution exposures were estimated using linear mixed-effects regression after adjustment for age, race, education, physical activity, anti-depressant medication use, seasonality, meteorology, and day of week. We also evaluated effect modification by season (April-September and October-March for the warm and cold seasons, respectively). Fine particles (PM2.5), black carbon (BC), nitrogen dioxide, and particle number counts (PNC) at 1-, 2-, and 4-week moving averages were associated with higher perceived stress ratings. The strongest associations were observed for PNC; for example, a 15,997 counts/cm(3) interquartile-range increase in 1-week average PNC was associated with a 3.2 point (95%CI: 2.1-4.3) increase in PSS score. Season modified the associations for specific pollutants; higher PSS scores in association with PM2.5, BC, and sulfate were observed mainly in colder months. Air pollution was associated with higher levels of perceived stress in this sample of older men, particularly in colder months for specific pollutants.
Time series regression model for infectious disease and weather.
Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro
2015-10-01
Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context.
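A minimal sketch of two of the modifications discussed above, a log-lagged-case regressor and quasi-Poisson inference for overdispersion, using statsmodels (a tooling assumption) and synthetic weekly data; a real analysis would include the seasonality adjustments and richer lag structures described in the article.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 520                                                   # ~10 years of weekly data
temp = 20 + 8 * np.sin(2 * np.pi * np.arange(n) / 52) + rng.normal(0, 2, n)
cases = rng.poisson(30 + 2 * np.maximum(temp - 25, 0))    # toy weather-driven counts

y_t = cases[1:]                                           # current week's counts
lag_log = np.log(cases[:-1] + 1)                          # log of last week's counts (contagion proxy)
X = sm.add_constant(np.column_stack([temp[1:], lag_log]))

# Poisson family with a Pearson chi-square scale estimate gives quasi-Poisson inference
model = sm.GLM(y_t, X, family=sm.families.Poisson())
result = model.fit(scale="X2")
print(result.summary())
```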
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
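The abstract is truncated here, but one standard way such averages map onto a regression framework (an assumption about the note's content, not a quotation of it) is to regress a transform of y on a constant only; a minimal sketch:

```python
import numpy as np

y = np.array([2.0, 4.0, 8.0, 16.0])
ones = np.ones_like(y).reshape(-1, 1)

def const_ols(v):
    """OLS of v on a constant; the fitted coefficient is the arithmetic mean of v."""
    return np.linalg.lstsq(ones, v, rcond=None)[0][0]

arithmetic = const_ols(y)                  # 7.5
geometric = np.exp(const_ols(np.log(y)))   # (2*4*8*16)**0.25 ≈ 5.66
harmonic = 1.0 / const_ols(1.0 / y)        # ≈ 4.27
```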
Ryberg, Karen R.; Vecchia, Aldo V.; Akyüz, F. Adnan; Lin, Wei
2016-01-01
Historically unprecedented flooding occurred in the Souris River Basin of Saskatchewan, North Dakota and Manitoba in 2011, during a longer term period of wet conditions in the basin. In order to develop a model of future flows, there is a need to evaluate effects of past multidecadal climate variability and/or possible climate change on precipitation. In this study, tree-ring chronologies and historical precipitation data in a four-degree buffer around the Souris River Basin were analyzed to develop regression models that can be used for predicting long-term variations of precipitation. To focus on longer term variability, 12-year moving average precipitation was modeled in five subregions (determined through cluster analysis of measures of precipitation) of the study area over three seasons (November–February, March–June and July–October). The models used multiresolution decomposition (an additive decomposition based on powers of two using a discrete wavelet transform) of tree-ring chronologies from Canada and the US and seasonal 12-year moving average precipitation based on Adjusted and Homogenized Canadian Climate Data and US Historical Climatology Network data. Results show that precipitation varies on long-term (multidecadal) time scales of 16, 32 and 64 years. Past extended pluvial and drought events, which can vary greatly with season and subregion, were highlighted by the models. Results suggest that the recent wet period may be a part of natural variability on a very long time scale.
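A minimal sketch of the two preprocessing ideas described above, a 12-year moving average of a precipitation series and an additive dyadic multiresolution decomposition of a tree-ring chronology, using PyWavelets (a tooling assumption) and synthetic stand-in series.

```python
import numpy as np
import pandas as pd
import pywt

rng = np.random.default_rng(0)
years = np.arange(1900, 2012)
precip = pd.Series(400 + 60 * np.sin(2 * np.pi * years / 64) + rng.normal(0, 30, years.size),
                   index=years)                      # illustrative seasonal totals, mm
precip_12yr = precip.rolling(window=12).mean()       # 12-year moving average response

chronology = rng.normal(1.0, 0.2, years.size)        # stand-in tree-ring index
coeffs = pywt.wavedec(chronology, "db4", level=4)    # dyadic (powers-of-two) decomposition
components = []
for j in range(len(coeffs)):
    keep = [c if i == j else np.zeros_like(c) for i, c in enumerate(coeffs)]
    components.append(pywt.waverec(keep, "db4")[: years.size])
# The components sum (approximately) back to the chronology and can serve as candidate
# regressors for the smoothed precipitation series.
```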
Annual forest inventory estimates based on the moving average
Francis A. Roesch; James R. Steinman; Michael T. Thompson
2002-01-01
Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
...: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Proposed rule. SUMMARY: This proposed rule..., especially the teaching status adjustment factor. Therefore, we implemented a 3-year moving average approach... moving average to calculate the facility-level adjustment factors. For FY 2011, we issued a notice to...
NASA Astrophysics Data System (ADS)
Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.
2017-08-01
Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance measured by the area under the receiver-operating-curve (AUC). The models also performed well on the test data for presence and absence with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models) and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (~50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated and non-normal distributions, such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
Park, Hyoung Keun; Bae, Sang Rak; Kim, Satbyul E; Choi, Woo Suk; Paick, Sung Hyun; Ho, Kim; Kim, Hyeong Gon; Lho, Yong Soo
2015-02-01
The aim of this study was to evaluate the effect of seasonal variation and climate parameters on urinary tract stone attacks and to investigate whether stone attacks increase sharply at a specific point. Nationwide data on the total number of urinary tract stone attacks per month between January 2006 and December 2010 were obtained from the Korean Health Insurance Review and Assessment Service. The effects of climatic factors on monthly urinary stone attacks were assessed using the auto-regressive integrated moving average (ARIMA) regression method. A total of 1,702,913 stone attack cases were identified. The mean monthly and monthly average daily urinary stone attack cases were 28,382 ± 2,760 and 933 ± 85, respectively. Stone attacks showed a seasonal trend of a sharp incline in June, a peak plateau from July to September, and a sharp decline after September. Correlation analysis showed that ambient temperature (r = 0.557, p < 0.001) and relative humidity (r = 0.513, p < 0.001) were significantly associated with urinary stone attack cases. However, after adjustment for trend and seasonality, ambient temperature was the only climate factor associated with stone attack cases in the ARIMA regression test (p = 0.04). The threshold temperature was estimated as 18.4 °C. The risk of urinary stone attack increases significantly, by 1.71% (95% confidence interval: 1.02-2.41%), with a 1 °C increase in ambient temperature above the threshold point. In conclusion, monthly urinary stone attack cases changed with seasonal variation. Among the climate variables, only temperature had a consistent association with stone attacks, and when the temperature exceeds 18.4 °C, urinary stone attacks increase sharply.
Kepler AutoRegressive Planet Search: Motivation & Methodology
NASA Astrophysics Data System (ADS)
Caceres, Gabriel; Feigelson, Eric; Jogesh Babu, G.; Bahamonde, Natalia; Bertin, Karine; Christen, Alejandra; Curé, Michel; Meza, Cristian
2015-08-01
The Kepler AutoRegressive Planet Search (KARPS) project uses statistical methodology associated with autoregressive (AR) processes to model Kepler lightcurves in order to improve exoplanet transit detection in systems with high stellar variability. We also introduce a planet-search algorithm to detect transits in time-series residuals after application of the AR models. One of the main obstacles in detecting faint planetary transits is the intrinsic stellar variability of the host star. The variability displayed by many stars may have autoregressive properties, wherein later flux values are correlated with previous ones in some manner. Auto-Regressive Moving-Average (ARMA) models, Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) models, and related models are flexible, phenomenological methods used with great success to model stochastic temporal behaviors in many fields of study, particularly econometrics. Powerful statistical methods are implemented in the public statistical software environment R and its many packages. Modeling involves maximum likelihood fitting, model selection, and residual analysis. These techniques provide a useful framework to model stellar variability and are used in KARPS with the objective of reducing stellar noise to enhance opportunities to find as-yet-undiscovered planets. Our analysis procedure consists of three steps: pre-processing of the data to remove discontinuities, gaps and outliers; ARMA-type model selection and fitting; and a transit signal search of the residuals using a new Transit Comb Filter (TCF) that replaces traditional box-finding algorithms. We apply the procedures to simulated Kepler-like time series with known stellar and planetary signals to evaluate the effectiveness of the KARPS procedures. The ARMA-type modeling is effective at reducing stellar noise, but it also reduces the transit signal and transforms it into ingress/egress spikes. A periodogram based on the TCF is constructed to concentrate the signal of these periodic spikes. When a periodic transit is found, the model is displayed on a standard period-folded averaged light curve. We also illustrate the efficient coding in R.
Buckingham-Jeffery, Elizabeth; Morbey, Roger; House, Thomas; Elliot, Alex J; Harcourt, Sally; Smith, Gillian E
2017-05-19
As service provision and patient behaviour varies by day, healthcare data used for public health surveillance can exhibit large day of the week effects. These regular effects are further complicated by the impact of public holidays. Real-time syndromic surveillance requires the daily analysis of a range of healthcare data sources, including family doctor consultations (called general practitioners, or GPs, in the UK). Failure to adjust for such reporting biases during analysis of syndromic GP surveillance data could lead to misinterpretations including false alarms or delays in the detection of outbreaks. The simplest smoothing method to remove a day of the week effect from daily time series data is a 7-day moving average. Public Health England developed the working day moving average in an attempt also to remove public holiday effects from daily GP data. However, neither of these methods adequately account for the combination of day of the week and public holiday effects. The extended working day moving average was developed. This is a further data-driven method for adding a smooth trend curve to a time series graph of daily healthcare data, that aims to take both public holiday and day of the week effects into account. It is based on the assumption that the number of people seeking healthcare services is a combination of illness levels/severity and the ability or desire of patients to seek healthcare each day. The extended working day moving average was compared to the seven-day and working day moving averages through application to data from two syndromic indicators from the GP in-hours syndromic surveillance system managed by Public Health England. The extended working day moving average successfully smoothed the syndromic healthcare data by taking into account the combined day of the week and public holiday effects. In comparison, the seven-day and working day moving averages were unable to account for all these effects, which led to misleading smoothing curves. The results from this study make it possible to identify trends and unusual activity in syndromic surveillance data from GP services in real-time independently of the effects caused by day of the week and public holidays, thereby improving the public health action resulting from the analysis of these data.
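A minimal sketch contrasting the two simpler baselines named above, the 7-day moving average and a working-day moving average taken over the most recent seven non-holiday weekdays; the extended working day moving average itself is not reproduced, and the counts and holiday list are illustrative assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("2016-12-01", periods=90, freq="D")
counts = pd.Series(rng.poisson(200, len(dates)), index=dates, dtype=float)
counts[dates.dayofweek >= 5] *= 0.5                 # weekend drop in GP consultations
holidays = pd.to_datetime(["2016-12-26", "2016-12-27", "2017-01-02"])  # assumed public holidays

seven_day = counts.rolling(window=7).mean()         # simple 7-day moving average

working_mask = (dates.dayofweek < 5) & (~dates.isin(holidays))
working_day = pd.Series(np.nan, index=dates)
for i in range(len(dates)):
    recent = counts.iloc[: i + 1][working_mask[: i + 1]].tail(7)  # last 7 working days so far
    if len(recent) == 7:
        working_day.iloc[i] = recent.mean()
```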
Moving in the Right Direction: Helping Children Cope with a Relocation
ERIC Educational Resources Information Center
Kruse, Tricia
2012-01-01
According to national figures, 37.1 million people moved in 2009 (U.S. Census Bureau, 2010). In fact, the average American will move 11.7 times in their lifetime. Why are Americans moving so much? There are a variety of reasons. Regardless of the reason, moving is a common experience for children. If one looks at the developmental characteristics…
Zhao, Xin; Han, Meng; Ding, Lili; Calin, Adrian Cantemir
2018-01-01
The accurate forecast of carbon dioxide emissions is critical for policy makers to take proper measures to establish a low carbon society. This paper discusses a hybrid of the mixed data sampling (MIDAS) regression model and a BP (back propagation) neural network (the MIDAS-BP model) to forecast carbon dioxide emissions. The analysis uses mixed-frequency data to study the effects of quarterly economic growth on annual carbon dioxide emissions. The forecasting ability of MIDAS-BP is remarkably better than that of the MIDAS, ordinary least squares (OLS), polynomial distributed lags (PDL), autoregressive distributed lags (ADL), and auto-regressive moving average (ARMA) models. The MIDAS-BP model is suitable for forecasting carbon dioxide emissions for both the short and longer term. This research is expected to influence the methodology for forecasting carbon dioxide emissions by improving the forecast accuracy. Empirical results show that economic growth has both negative and positive effects on carbon dioxide emissions that last 15 quarters. Carbon dioxide emissions are also affected by their own changes within 3 years. Therefore, there is a need for policy makers to explore alternative ways to develop the economy, especially applying new energy policies to establish a low carbon society.
NASA Astrophysics Data System (ADS)
Ferreira, Paulo; Kristoufek, Ladislav
2017-11-01
We analyse covered interest parity (CIP) using two novel regression frameworks based on cross-correlation analysis (detrended cross-correlation analysis and detrending moving-average cross-correlation analysis), which allow the relationships to be studied at different scales and work well under non-stationarity and heavy tails. CIP is a measure of capital mobility commonly used to analyse financial integration, which remains an interesting subject of study in the context of the European Union. The importance of this feature is related to the fact that the adoption of a common currency is associated with some benefits for countries, but also involves risks such as the loss of economic instruments to face possible asymmetric shocks. While studying the Eurozone members could explain some problems in the common currency, studying the non-Euro countries is important to assess whether they are fit to reap the possible benefits. Our results point to verification of CIP mainly in the Central European countries, while in the remaining countries the parity holds only residually.
A comparison of several techniques for imputing tree level data
David Gartner
2002-01-01
As Forest Inventory and Analysis (FIA) changes from periodic surveys to the multipanel annual survey, new analytical methods become available. The current official statistic is the moving average. One alternative is an updated moving average. Several methods of updating plot per acre volume have been discussed previously. However, these methods may not be appropriate...
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations: normal,…
NASA Astrophysics Data System (ADS)
Li, Qingchen; Cao, Guangxi; Xu, Wei
2018-01-01
Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the fractionally autoregressive integrated moving average (ARFIMA) process to demonstrate the effectiveness of MFDMA in detecting auto-correlation at different sample lengths and to simulate artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether or not the fluctuations of financial markets caused by meteorological disasters derive from the normal evolution of the financial system itself. We also propose several reasonable recommendations.
Using Time-Series Regression to Predict Academic Library Circulations.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1984-01-01
Four methods were used to forecast monthly circulation totals in 15 midwestern academic libraries: dummy time-series regression, lagged time-series regression, simple average (straight-line forecasting), monthly average (naive forecasting). In tests of forecasting accuracy, dummy regression method and monthly mean method exhibited smallest average…
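A minimal sketch of the dummy time-series regression approach named above: monthly circulation regressed on a linear trend plus month-of-year dummies, with illustrative data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = pd.period_range("2018-01", periods=60, freq="M")
season = np.tile([1.2, 1.1, 1.3, 1.0, 0.8, 0.5, 0.4, 0.6, 1.3, 1.4, 1.2, 0.7], 5)
circulation = 5000 * season + 10 * np.arange(60) + rng.normal(0, 150, 60)

trend = np.arange(60)
dummies = pd.get_dummies(months.month, drop_first=True).to_numpy(dtype=float)  # 11 month dummies
X = np.column_stack([np.ones(60), trend, dummies])

beta, *_ = np.linalg.lstsq(X, circulation, rcond=None)

# Forecast the next month (a January, the dropped baseline month): trend = 60, all dummies = 0
x_next = np.concatenate([[1.0, 60.0], np.zeros(11)])
forecast = x_next @ beta
```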
Geohydrology and simulation of ground-water flow in the aquifer system near Calvert City, Kentucky
Starn, J.J.; Arihood, L.D.; Rose, M.F.
1995-01-01
The U.S. Geological Survey, in cooperation with the Kentucky Natural Resources and Environmental Protection Cabinet, constructed a two-dimensional, steady-state ground-water-flow model to estimate hydraulic properties, contributing areas to discharge boundaries, and the average linear velocity at selected locations in an aquifer system near Calvert City, Ky. Nonlinear regression was used to estimate values of model parameters and the reliability of the parameter estimates. The regression minimizes the weighted difference between observed and calculated hydraulic heads and rates of flow. The calibrated model generally was better than alternative models considered, and although adding transmissive faults in the bedrock produced a slightly better model, fault transmissivity was not estimated reliably. The average transmissivity of the aquifer was 20,000 feet squared per day. Recharge to two outcrop areas, the McNairy Formation of Cretaceous age and the alluvium of Quaternary age, were 0.00269 feet per day (11.8 inches per year) and 0.000484 feet per day (2.1 inches per year), respectively. Contributing areas to wells at the Calvert City Water Company in 1992 did not include the Calvert City Industrial Complex. Since completing the fieldwork for this study in 1992, the Calvert City Water Company discontinued use of their wells and began withdrawing water from new wells that were located 4.5 miles east-southeast of the previous location; the contributing area moved farther from the industrial complex. The extent of the alluvium contributing water to wells was limited by the overlying lacustrine deposits. The average linear ground-water velocity at the industrial complex ranged from 0.90 feet per day to 4.47 feet per day with a mean of 1.98 feet per day.
Meseret, S.; Tamir, B.; Gebreyohannes, G.; Lidauer, M.; Negussie, E.
2015-01-01
The development of effective genetic evaluations and selection of sires requires accurate estimates of genetic parameters for all economically important traits in the breeding goal. The main objective of this study was to assess the relative performance of the traditional lactation average model (LAM) against the random regression test-day model (RRM) in the estimation of genetic parameters and prediction of breeding values for Holstein Friesian herds in Ethiopia. The data used consisted of 6,500 test-day (TD) records from 800 first-lactation Holstein Friesian cows that calved between 1997 and 2013. Co-variance components were estimated using the average information restricted maximum likelihood method under single trait animal model. The estimate of heritability for first-lactation milk yield was 0.30 from LAM whilst estimates from the RRM model ranged from 0.17 to 0.29 for the different stages of lactation. Genetic correlations between different TDs in first-lactation Holstein Friesian ranged from 0.37 to 0.99. The observed genetic correlation was less than unity between milk yields at different TDs, which indicated that the assumption of LAM may not be optimal for accurate evaluation of the genetic merit of animals. A close look at estimated breeding values from both models showed that RRM had higher standard deviation compared to LAM indicating that the TD model makes efficient utilization of TD information. Correlations of breeding values between models ranged from 0.90 to 0.96 for different group of sires and cows and marked re-rankings were observed in top sires and cows in moving from the traditional LAM to RRM evaluations. PMID:26194217
Energy consumption model on WiMAX subscriber station
NASA Astrophysics Data System (ADS)
Mubarakah, N.; Suherman; Al-Hakim, M. Y.; Warman, E.
2018-02-01
Mobile communication technologies are moving toward miniaturization. A mobile device's energy source relies on its battery endurance, and the smaller the mobile device, the slower the battery is expected to drain. Reducing energy consumption in mobile devices has therefore been of interest to researchers. In order to optimize energy consumption, its usage should be predictable. This paper proposes a model that predicts the amount of energy consumed by a WiMAX subscriber station, using regression analysis of the active WiMAX states and their durations. The proposed model was assessed using NS-2 simulations with more than a hundred thousand recorded energy-consumption samples in every WiMAX state. The assessment shows a small average deviation between predicted and measured energy consumption: about 0.18% for the training data and 0.187% and 0.191% for the test data.
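A minimal sketch of the kind of model described: total energy regressed on the time spent in each radio state, with the fitted coefficients acting as effective per-state power draws. The state names, durations, and power values are illustrative assumptions, not taken from the paper or from NS-2.

```python
import numpy as np

rng = np.random.default_rng(0)
states = ["idle", "listen", "receive", "transmit"]
true_power_w = np.array([0.05, 0.30, 0.90, 1.40])        # assumed per-state power draw (W)

# Each row: seconds spent in each state during one simulated run
durations = rng.uniform(0, 60, size=(200, len(states)))
energy_j = durations @ true_power_w + rng.normal(0, 0.5, 200)   # measured energy + noise

coef, *_ = np.linalg.lstsq(durations, energy_j, rcond=None)     # estimated per-state powers

predicted = durations @ coef
deviation = np.mean(np.abs(predicted - energy_j) / energy_j) * 100
print(dict(zip(states, np.round(coef, 3))), f"average deviation {deviation:.2f}%")
```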
Huang, Lei
2015-01-01
To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of observation noise are used to obtain the estimated mean and variance of the observation noise. Using the robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied to gyro random noise modeling applications in which a fast and accurate ARMA modeling method is required. PMID:26437409
GPC-Based Stable Reconfigurable Control
NASA Technical Reports Server (NTRS)
Soloway, Don; Shi, Jian-Jun; Kelkar, Atul
2004-01-01
This paper presents the development of a multi-input multi-output (MIMO) Generalized Predictive Control (GPC) law and its application to reconfigurable control design in the event of actuator saturation. A Controlled Auto-Regressive Integrating Moving Average (CARIMA) model is used to describe the plant dynamics. The control law is derived using an input-output description of the system and is also related to the state-space form of the model. The stability of the GPC control law without reconfiguration is first established using a Riccati-based approach and the state-space formulation. A novel reconfiguration strategy is developed for systems which have actuator redundancy and are faced with actuator-saturation-type failures. An elegant reconfigurable control design is presented with a stability proof. Several numerical examples are presented to demonstrate the application of the various results.
Forecasting seeing and parameters of long-exposure images by means of ARIMA
NASA Astrophysics Data System (ADS)
Kornilov, Matwey V.
2016-02-01
Atmospheric turbulence is one of the major limiting factors for ground-based astronomical observations. In this paper, the problem of short-term forecasting of seeing is discussed. Real data obtained from atmospheric optical turbulence (OT) measurements above Mount Shatdzhatmaz in 2007-2013 have been analysed. Linear auto-regressive integrated moving average (ARIMA) models are used for the forecasting. A new procedure is proposed for forecasting the image characteristics of direct astronomical observations (central image intensity, full width at half maximum, radius encircling 80% of the energy). Probability density functions of the forecasts of these quantities are 1.5-2 times narrower than the respective unconditional probability density functions. Overall, this study found that the described technique adequately describes the temporal stochastic variations of the OT power.
Annual replenishment of bed material by sediment transport in the Wind River near Riverton, Wyoming
Smalley, M.L.; Emmett, W.W.; Wacker, A.M.
1994-01-01
The U.S. Geological Survey, in cooperation with the Wyoming Department of Transportation, conducted a study during 1985-87 to determine the annual replenishment of sand and gravel along a point bar in the Wind River near Riverton, Wyoming. Hydraulic-geometry relations determined from streamflow measurements; streamflow characteristics determined from 45 years of record at the study site; and analyses of suspended-sediment, bedload, and bed-material samples were used to describe river transport characteristics and to estimate the annual replenishment of sand and gravel. The Wind River is a perennial, snowmelt-fed stream. Average daily discharge at the study site is about 734 cubic feet per second, and bankfull discharge (recurrence interval about 1.5 years) is about 5,000 cubic feet per second. At bankfull discharge, the river is about 136 feet wide and has an average depth of about 5.5 feet and average velocity of about 6.7 feet per second. Stream slope is about 0.0010 foot per foot. Bed material sampled on the point bar before the 1986 high flows ranged from sand to cobbles, with a median diameter of about 22 millimeters. Data for sediment samples collected during water year 1986 were used to develop regression equations between suspended-sediment load and water discharge and between bedload and water discharge. Average annual suspended-sediment load was computed to be about 561,000 tons per year using the regression equation in combination with flow-duration data. The regression equation for estimating bedload was not used; instead, average annual bedload was computed as 1.5 percent of the average annual suspended load, or about 8,410 tons per year. This amount of bedload material is estimated to be in temporary storage along a reach containing seven riffles--a length of approximately 1 river mile. On the basis of bedload material sampled during the 1986 high flows, about 75 percent (by weight) is sand (2 millimeters in diameter or finer); median particle size is about 0.5 millimeter. About 20 percent (by weight) is medium gravel to small cobbles--12.7 millimeters (0.5 inch) or coarser. The bedload moves slowly (about 0.03 percent of the water speed) and briefly (about 10 percent of the time). The average travel distance of a median-sized particle is about 1 river mile per year. The study results indicate that the average replenishment rate of bedload material coarser than 12.7 millimeters is about 1,500 to 2,000 tons (less than 1,500 cubic yards) per year. Finer material (0.075 to 6.4 millimeters in diameter) is replenished at about 4,500 to 5,000 cubic yards per year. The total volume of potentially usable material would average about 6,000 cubic yards per year.
Quantifying rapid changes in cardiovascular state with a moving ensemble average.
Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T
2018-04-01
MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinetogram) on 3 separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.
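A minimal sketch of the moving-ensemble idea (not MEAP's implementation): each beat's waveform is replaced by the average of the beat-aligned waveforms in a sliding window of neighbouring beats. The function name, window length, and toy data are illustrative.

```python
import numpy as np

def moving_ensemble_average(beats, window=21):
    """Average beat-aligned waveforms over a sliding window of beats.

    beats  : array of shape (n_beats, n_samples); each row is one
             R-peak-aligned cardiac cycle (e.g., ICG dZ/dt, resampled).
    window : number of neighbouring beats averaged for each output beat.
    """
    beats = np.asarray(beats, dtype=float)
    half = window // 2
    out = np.empty_like(beats)
    for i in range(len(beats)):
        lo, hi = max(0, i - half), min(len(beats), i + half + 1)
        out[i] = beats[lo:hi].mean(axis=0)   # ensemble average around beat i
    return out

# Toy usage: 200 noisy copies of a template waveform.
t = np.linspace(0, 1, 250)
template = np.exp(-((t - 0.3) / 0.05) ** 2)
noisy = template + np.random.default_rng(1).normal(0, 0.3, (200, t.size))
smoothed = moving_ensemble_average(noisy, window=21)
```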
Least Squares Moving-Window Spectral Analysis.
Lee, Young Jong
2017-08-01
Least squares regression is proposed as a moving-window method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of the Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high-frequency noise.
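A small sketch of the moving-window least-squares idea, under the assumption that the first derivative with respect to the perturbation variable is wanted: a local straight line is fitted within each window, and its slope is taken as the derivative estimate. Function and parameter names are hypothetical.

```python
import numpy as np

def lsmw_first_derivative(x, Y, window=7):
    """Slope of a local least-squares line in a moving window.

    x : 1-D perturbation values (may be nonuniformly spaced), length n.
    Y : spectra stacked as an array of shape (n, n_channels).
    Returns dY/dx estimated at each perturbation point.
    """
    x, Y = np.asarray(x, float), np.asarray(Y, float)
    half = window // 2
    dY = np.empty_like(Y)
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        xw = x[lo:hi] - x[lo:hi].mean()
        yw = Y[lo:hi] - Y[lo:hi].mean(axis=0)
        dY[i] = xw @ yw / (xw @ xw)      # least-squares slope per channel
    return dY
```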
The Performance of Multilevel Growth Curve Models under an Autoregressive Moving Average Process
ERIC Educational Resources Information Center
Murphy, Daniel L.; Pituch, Keenan A.
2009-01-01
The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…
NASA Astrophysics Data System (ADS)
Dwi Nugroho, Kreshna; Pebrianto, Singgih; Arif Fatoni, Muhammad; Fatikhunnada, Alvin; Liyantono; Setiawan, Yudi
2017-01-01
Information on the area and spatial distribution of paddy fields is needed to support sustainable agriculture and food security programs. Mapping the distribution of paddy-field cropping patterns is important for sustaining the paddy field area, and it can be done by direct observation or by remote sensing methods. This paper discusses remote sensing for paddy field monitoring based on MODIS time series data. MODIS time series data are difficult to classify directly because of temporal noise, so the wavelet transform and the moving average are needed as filter methods. The objective of this study is to recognize paddy cropping patterns with the wavelet transform and the moving average in West Java using MODIS imagery (MOD13Q1) from 2001 to 2015, and then to compare the two methods. The results showed that both methods produced almost the same spatial distribution of cropping patterns, while the accuracy of the wavelet transform (75.5%) was higher than that of the moving average (70.5%). Both methods showed that the majority of cropping patterns in West Java follow a paddy-fallow-paddy-fallow pattern with various planting times. Differences in the planting schedule were caused by the availability of irrigation water.
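A minimal sketch of the moving-average filtering step for a MODIS vegetation-index time series (the wavelet alternative is not shown); the synthetic EVI series, window length, and peak-counting heuristic are illustrative assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical 16-day MODIS EVI series for one pixel (4 years, 23 scenes/year).
rng = np.random.default_rng(3)
evi = 0.4 + 0.25 * np.sin(np.linspace(0, 8 * np.pi, 92)) + rng.normal(0, 0.05, 92)

# Centered moving average suppresses short temporal noise (e.g., cloud spikes)
# while keeping the seasonal paddy signal; the window length is a tuning choice.
smoothed = pd.Series(evi).rolling(window=5, center=True, min_periods=1).mean()

# Crude cropping-pattern cue: count local peaks in the smoothed curve.
peaks = ((smoothed.shift(1) < smoothed) & (smoothed.shift(-1) < smoothed)).sum()
print("detected growing-season peaks:", int(peaks))
```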
An Examination of Selected Geomagnetic Indices in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
Previous studies have shown geomagnetic indices to be useful for providing early estimates for the size of the following sunspot cycle several years in advance. Examined in this study are various precursor methods for predicting the minimum and maximum amplitude of the following sunspot cycle, these precursors being based on the aa and Ap geomagnetic indices and the number of disturbed days (NDD), days when the daily Ap index equaled or exceeded 25. Also examined are the yearly peak of the daily Ap index (Apmax), the number of days when Ap is greater than or equal to 100, cyclic averages of sunspot number R, aa, Ap, NDD, and the number of sudden storm commencements (NSSC), as well as the cyclic sums of NDD and NSSC. The analysis yields 90-percent prediction intervals for both the minimum and maximum amplitudes for cycle 24, the next sunspot cycle. In terms of yearly averages, the best regressions give Rmin = 9.8+/-2.9 and Rmax = 153.8+/-24.7, equivalent to Rm = 8.8+/-2.8 and RM = 159+/-5.5, based on the 12-mo moving average (or smoothed monthly mean sunspot number). Hence, cycle 24 is expected to be above average in size, similar to cycles 21 and 22, producing more than 300 sudden storm commencements and more than 560 disturbed days, of which about 25 will have Ap greater than or equal to 100. On the basis of annual averages, the sunspot minimum year for cycle 24 will be either 2006 or 2007.
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, which is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model was chosen by examining the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method together with the standard errors of the residuals. The adequacy of the selected model is determined using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and the Q-Q plot. Finally, the monthly maximum and minimum temperature patterns of India for the next 3 years are forecast with the selected model.
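A hedged sketch of fitting the reported SARIMA(1,0,0)×(0,1,1)12 structure with statsmodels; the monthly temperature series below is synthetic and the log transform is optional, so this only illustrates the model form, not the paper's fit.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly maximum-temperature series (deg C), 1981-2015 span.
idx = pd.date_range("1981-01", periods=420, freq="MS")
rng = np.random.default_rng(7)
y = 30 + 6 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 0.8, len(idx))
series = pd.Series(y, index=idx)

# SARIMA(1,0,0)x(0,1,1)_12 on the log-transformed series, as in the abstract.
model = SARIMAX(np.log(series), order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)

forecast = np.exp(fit.forecast(steps=36))   # next 3 years, back-transformed
print(forecast.head())
```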
Inter-comparison of time series models of lake levels predicted by several modeling strategies
NASA Astrophysics Data System (ADS)
Khatibi, R.; Ghorbani, M. A.; Naghipour, L.; Jothiprakash, V.; Fathima, T. A.; Fazelifard, M. H.
2014-04-01
Five modeling strategies are employed to analyze water level time series of six lakes with different physical characteristics such as shape, size, altitude and range of variations. The models comprise chaos theory, Auto-Regressive Integrated Moving Average (ARIMA) - treated for seasonality and hence SARIMA, Artificial Neural Networks (ANN), Gene Expression Programming (GEP) and Multiple Linear Regression (MLR). Each is formulated on a different premise with different underlying assumptions. Chaos theory is elaborated in greater detail, as it is customary to identify the existence of chaotic signals by a number of techniques (e.g. average mutual information and false nearest neighbors), and future values are predicted using the Nonlinear Local Prediction (NLP) technique. This paper takes a critical view of past inter-comparison studies seeking a superior performance, against which it is reported that (i) the performances of all five modeling strategies vary from good to poor, hampering the recommendation of a clear-cut predictive model; (ii) the performances of the datasets of two cases are consistently better with all five modeling strategies; (iii) in other cases, their performances are poor but the results can still be fit-for-purpose; (iv) the simultaneous good performances of NLP and SARIMA pull their underlying assumptions to different ends, which cannot be reconciled. A number of arguments are presented including the culture of pluralism, according to which the various modeling strategies facilitate an insight into the data from different vantages.
Estimating the Length of the North Atlantic Basin Hurricane Season
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2012-01-01
For the interval 1945-2011, the length of the hurricane season in the North Atlantic basin averages about 130 +/- 42 days (the +/-1 standard deviation interval), having a range of 47 to 235 days. Runs-testing reveals that the annual length of season varies nonrandomly at the 5% level of significance. In particular, its trend, as described using 10-yr moving averages, generally has been upward since about 1979, increasing from about 113 to 157 days (in 2003). Based on annual values, one finds a highly statistically important inverse correlation at the 0.1% level of significance between the length of season and the occurrence of the first storm day of the season. For the 2012 hurricane season, based on the reported first storm day of May 19, 2012 (i.e., DOY = 140), the inferred preferential regression predicts that the length of the current season likely will be about 173 +/- 23 days, suggesting that it will end about November 8 +/- 23 days, with only about a 5% chance that it will end either before about September 23, 2012 or after about December 24, 2012.
Model-checking techniques based on cumulative residuals.
Lin, D Y; Wei, L J; Ying, Z
2002-03-01
Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
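A simplified sketch of the cumulative-residual idea: the observed cumulative sum of residuals ordered by a covariate is compared with realizations in which the residuals are multiplied by independent standard normal variables, a common way to approximate the null Gaussian process. This sketch ignores the correction for estimated regression coefficients, so it illustrates the comparison only and is not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(11)

# Fit a simple linear model y ~ x and examine residuals ordered by x.
n = 300
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.3, n)   # true curvature omitted by the model
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

order = np.argsort(x)
W_obs = np.cumsum(resid[order]) / np.sqrt(n)             # observed cumulative-residual process

# Null realizations: perturb residuals with independent N(0,1) multipliers.
sup_null = np.array([
    np.abs(np.cumsum(resid[order] * rng.normal(size=n)) / np.sqrt(n)).max()
    for _ in range(1000)
])
p_value = np.mean(sup_null >= np.abs(W_obs).max())
print("supremum statistic:", np.abs(W_obs).max(), "approx. p-value:", p_value)
```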
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
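A minimal sketch of a migration-velocity-adaptive moving average: the smoothing window widens with migration time, reflecting the observation that slower analytes give broader, lower-frequency peaks. The window-growth rule and parameter values are assumptions, not the authors' calibrated settings.

```python
import numpy as np

def adaptive_moving_average(signal, times, base_window_s=0.5, growth=0.02):
    """Smooth an electropherogram with a window that widens with migration time.

    signal : detector trace (e.g., contactless conductivity), 1-D array.
    times  : matching time axis in seconds.
    The window at time t is roughly base_window_s + growth * t seconds,
    mimicking the idea that slower analytes produce lower-frequency peaks.
    """
    signal, times = np.asarray(signal, float), np.asarray(times, float)
    dt = np.median(np.diff(times))
    out = np.empty_like(signal)
    for i, t in enumerate(times):
        half = max(1, int(round((base_window_s + growth * t) / dt / 2)))
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out
```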
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
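A hedged, long-only sketch combining a price/moving-average crossover entry with a dynamic trailing stop as the exit; the fractional-stop rule, parameters, and return approximation are simplifications and are not the authors' exact threshold specification.

```python
import numpy as np
import pandas as pd

def ma_cross_with_trailing_stop(prices, window=50, stop_frac=0.05):
    """Long-only: enter when price crosses above its moving average,
    exit when price falls stop_frac below the running maximum since entry."""
    prices = pd.Series(prices, dtype=float)
    ma = prices.rolling(window).mean()
    position = np.zeros(len(prices), dtype=int)
    in_trade, peak = False, np.nan
    for i in range(window, len(prices)):
        p = prices.iloc[i]
        if not in_trade and p > ma.iloc[i]:
            in_trade, peak = True, p             # cross-over 'buy' signal
        elif in_trade:
            peak = max(peak, p)
            if p < peak * (1 - stop_frac):       # dynamic trailing stop
                in_trade = False
        position[i] = int(in_trade)
    held = np.concatenate([[0], position[:-1]])  # act on the next bar
    rets = prices.pct_change().fillna(0).to_numpy()
    # Approximate cumulative return of the strategy (sum of simple returns).
    return pd.Series(held * rets, index=prices.index).cumsum()
```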
Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane
2013-01-01
Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-method regression models to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$ 0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383
STOCK MARKET CRASH AND EXPECTATIONS OF AMERICAN HOUSEHOLDS*
HUDOMIET, PÉTER; KÉZDI, GÁBOR; WILLIS, ROBERT J.
2011-01-01
SUMMARY This paper utilizes data on subjective probabilities to study the impact of the stock market crash of 2008 on households’ expectations about the returns on the stock market index. We use data from the Health and Retirement Study that was fielded in February 2008 through February 2009. The effect of the crash is identified from the date of the interview, which is shown to be exogenous to previous stock market expectations. We estimate the effect of the crash on the population average of expected returns, the population average of the uncertainty about returns (subjective standard deviation), and the cross-sectional heterogeneity in expected returns (disagreement). We show estimates from simple reduced-form regressions on probability answers as well as from a more structural model that focuses on the parameters of interest and separates survey noise from relevant heterogeneity. We find a temporary increase in the population average of expectations and uncertainty right after the crash. The effect on cross-sectional heterogeneity is more significant and longer lasting, which implies substantial long-term increase in disagreement. The increase in disagreement is larger among the stockholders, the more informed, and those with higher cognitive capacity, and disagreement co-moves with trading volume and volatility in the market. PMID:21547244
Thompson, Aaron M S; Zanobetti, Antonella; Silverman, Frances; Schwartz, Joel; Coull, Brent; Urch, Bruce; Speck, Mary; Brook, Jeffrey R; Manno, Michael; Gold, Diane R
2010-01-01
Systemic inflammation may be one of the mechanisms mediating the association between ambient air pollution and cardiovascular morbidity and mortality. Interleukin-6 (IL-6) and fibrinogen are biomarkers of systemic inflammation that are independent risk factors for cardiovascular disease. We investigated the association between ambient air pollution and systemic inflammation using baseline measurements of IL-6 and fibrinogen from controlled human exposure studies. In this retrospective analysis we used repeated-measures data in 45 nonsmoking subjects. Hourly and daily moving averages were calculated for ozone, nitrogen dioxide, sulfur dioxide, and particulate matter
Ambient temperature and biomarkers of heart failure: a repeated measures analysis.
Wilker, Elissa H; Yeh, Gloria; Wellenius, Gregory A; Davis, Roger B; Phillips, Russell S; Mittleman, Murray A
2012-08-01
Extreme temperatures have been associated with hospitalization and death among individuals with heart failure, but few studies have explored the underlying mechanisms. We hypothesized that outdoor temperature in the Boston, Massachusetts, area (1- to 4-day moving averages) would be associated with higher levels of biomarkers of inflammation and myocyte injury in a repeated-measures study of individuals with stable heart failure. We analyzed data from a completed clinical trial that randomized 100 patients to 12 weeks of tai chi classes or to time-matched education control. B-type natriuretic peptide (BNP), C-reactive protein (CRP), and tumor necrosis factor (TNF) were measured at baseline, 6 weeks, and 12 weeks. Endothelin-1 was measured at baseline and 12 weeks. We used fixed effects models to evaluate associations with measures of temperature that were adjusted for time-varying covariates. Higher apparent temperature was associated with higher levels of BNP beginning with 2-day moving averages and reached statistical significance for 3- and 4-day moving averages. CRP results followed a similar pattern but were delayed by 1 day. A 5°C change in 3- and 4-day moving averages of apparent temperature was associated with 11.3% [95% confidence interval (CI): 1.1, 22.5; p = 0.03) and 11.4% (95% CI: 1.2, 22.5; p = 0.03) higher BNP. A 5°C change in the 4-day moving average of apparent temperature was associated with 21.6% (95% CI: 2.5, 44.2; p = 0.03) higher CRP. No clear associations with TNF or endothelin-1 were observed. Among patients undergoing treatment for heart failure, we observed positive associations between temperature and both BNP and CRP-predictors of heart failure prognosis and severity.
NASA Technical Reports Server (NTRS)
Pongratz, M.
1972-01-01
Results from a Nike-Tomahawk sounding rocket flight launched from Fort Churchill are presented. The rocket was launched into a breakup aurora at magnetic local midnight on 21 March 1968. The rocket was instrumented to measure electrons with an electrostatic analyzer electron spectrometer which made 29 measurements in the energy interval 0.5 keV to 30 keV. Complete energy spectra were obtained at a rate of 10/sec. Pitch angle information is presented via three computed averages per rocket spin. The dumped electron average corresponds to averages over electrons moving nearly parallel to the B vector. The mirroring electron average corresponds to averages over electrons moving nearly perpendicular to the B vector. An average was also computed over the entire downward hemisphere (the precipitated electron average). The observations were obtained over an altitude range of 10 km at 230 km altitude.
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to environmental real data is presented and discussed.
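For orientation, the conditional-median submodel in ARMA-type models of this kind is often written as below; this is a generic sketch in the style of beta-ARMA-like models, and the exact notation and residual definition may differ from the KARMA paper.

```latex
% Generic median submodel for y_t in (a, b), in the style of ARMA-type GLMs:
g(\mu_t) \;=\; \mathbf{x}_t^{\top}\boldsymbol{\beta}
  \;+\; \sum_{i=1}^{p} \varphi_i \left[ g(y_{t-i}) - \mathbf{x}_{t-i}^{\top}\boldsymbol{\beta} \right]
  \;+\; \sum_{j=1}^{q} \theta_j \, r_{t-j}
```

Here g is a link function mapping (a,b) onto the real line, x_t are the time-varying regressors, φ_i and θ_j are the autoregressive and moving-average parameters, and r_t is a residual term (for instance r_t = g(y_t) - g(μ_t)).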
A comparison of four streamflow record extension techniques
Hirsch, Robert M.
1982-01-01
One approach to developing time series of streamflow, which may be used for simulation and optimization studies of water resources development activities, is to extend an existing gage record in time by exploiting the interstation correlation between the station of interest and some nearby (long-term) base station. Four methods of extension are described, and their properties are explored. The methods are regression (REG), regression plus noise (RPN), and two new methods, maintenance of variance extension types 1 and 2 (MOVE.1, MOVE.2). MOVE.1 is equivalent to a method which is widely used in psychology, biometrics, and geomorphology and which has been called by various names, e.g., 'line of organic correlation,' 'reduced major axis,' 'unique solution,' and 'equivalence line.' The methods are examined for bias and standard error of estimate of moments and order statistics, and an empirical examination is made of the preservation of historic low-flow characteristics using 50-year-long monthly records from seven streams. The REG and RPN methods are shown to have serious deficiencies as record extension techniques. MOVE.2 is shown to be marginally better than MOVE.1, according to the various comparisons of bias and accuracy.
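A small sketch of the MOVE.1 (line of organic correlation) extension step: unlike ordinary regression, the slope is the ratio of standard deviations, so the variance of the extended record is maintained. The function name and the suggestion to work on log-transformed flows are illustrative assumptions.

```python
import numpy as np

def move1_extend(short_y, concurrent_x, long_x):
    """Extend a short streamflow record using MOVE.1.

    short_y      : flows at the short-record station (concurrent period).
    concurrent_x : flows at the long-record base station, same period.
    long_x       : base-station flows for the period to be estimated.
    Often applied to log-transformed flows.
    """
    y, x = np.asarray(short_y, float), np.asarray(concurrent_x, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)   # variance-maintaining slope
    return y.mean() + slope * (np.asarray(long_x, float) - x.mean())

# Ordinary regression (REG) would instead use slope = r * sy/sx, which
# shrinks the variance of the extended estimates toward the mean.
```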
Neural net forecasting for geomagnetic activity
NASA Technical Reports Server (NTRS)
Hernandez, J. V.; Tajima, T.; Horton, W.
1993-01-01
We use neural nets to construct nonlinear models to forecast the AL index given solar wind and interplanetary magnetic field (IMF) data. We follow two approaches: (1) the state space reconstruction approach, which is a nonlinear generalization of autoregressive-moving average models (ARMA) and (2) the nonlinear filter approach, which reduces to a moving average model (MA) in the linear limit. The database used here is that of Bargatze et al. (1985).
Queues with Choice via Delay Differential Equations
NASA Astrophysics Data System (ADS)
Pender, Jamol; Rand, Richard H.; Wesson, Elizabeth
Delay or queue length information has the potential to influence the decision of a customer to join a queue. Thus, it is imperative for managers of queueing systems to understand how the information that they provide will affect the performance of the system. To this end, we construct and analyze two two-dimensional deterministic fluid models that incorporate customer choice behavior based on delayed queue length information. In the first fluid model, customers join each queue according to a Multinomial Logit Model, however, the queue length information the customer receives is delayed by a constant Δ. We show that the delay can cause oscillations or asynchronous behavior in the model based on the value of Δ. In the second model, customers receive information about the queue length through a moving average of the queue length. Although it has been shown empirically that giving patients moving average information causes oscillations and asynchronous behavior to occur in U.S. hospitals, we analytically and mathematically show for the first time that the moving average fluid model can exhibit oscillations and determine their dependence on the moving average window. Thus, our analysis provides new insight on how operators of service systems should report queue length information to customers and how delayed information can produce unwanted system dynamics.
Modeling and roles of meteorological factors in outbreaks of highly pathogenic avian influenza H5N1.
Biswas, Paritosh K; Islam, Md Zohorul; Debnath, Nitish C; Yamage, Mat
2014-01-01
The highly pathogenic avian influenza A virus subtype H5N1 (HPAI H5N1) is a deadly zoonotic pathogen. Its persistence in poultry in several countries is a potential threat: a mutant or genetically reassorted progenitor might cause a human pandemic. Its world-wide eradication from poultry is important to protect public health. The global trend of outbreaks of influenza attributable to HPAI H5N1 shows a clear seasonality. Meteorological factors might be associated with such a trend but have not been studied. For the first time, we analyze the role of meteorological factors in the occurrences of HPAI outbreaks in Bangladesh. We employed autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models to assess the roles of different meteorological factors in outbreaks of HPAI. Outbreaks were modeled best when multiplicative seasonality was incorporated. Incorporation of any meteorological variable(s) as inputs did not improve the performance of any multivariable models, but relative humidity (RH) was a significant covariate in several ARIMA and SARIMA models with different autoregressive and moving average orders. The variable cloud cover was also a significant covariate in two SARIMA models, but air temperature along with RH might be a predictor when a moving average (MA) order at lag 1 month is considered.
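A hedged sketch of a seasonal ARIMA with relative humidity as an exogenous covariate using statsmodels; the orders, synthetic outbreak counts, and humidity series are illustrative and are not the models fitted in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly outbreak counts and relative humidity (illustrative only).
idx = pd.date_range("2007-01", periods=96, freq="MS")
rng = np.random.default_rng(5)
rh = 70 + 15 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 3, len(idx))
outbreaks = np.maximum(
    0, 5 + 0.1 * rh + 4 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 2, len(idx))
)

# Multiplicative seasonal model with RH as a covariate (orders are assumed).
model = SARIMAX(outbreaks, exog=rh, order=(1, 0, 1), seasonal_order=(1, 0, 0, 12))
fit = model.fit(disp=False)
print(fit.params)   # includes the coefficient on the RH covariate
```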
NASA Astrophysics Data System (ADS)
Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj
2012-05-01
For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management, and plant structure planning
[Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].
Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang
2016-07-12
To explore the effect of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates in the population. The ARIMA model, NARNN model and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the lowest, with values of 0.0111, 0.0900 and 0.2824, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates in the population, which might have great application value for the prevention and control of schistosomiasis.
Optimal design and experimental analyses of a new micro-vibration control payload-platform
NASA Astrophysics Data System (ADS)
Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen
2016-07-01
This paper presents a new payload-platform, for precision devices, which possesses the capability of isolating the complex space micro-vibration in low frequency range below 5 Hz. The novel payload-platform equipped with smart material actuators is investigated and designed through optimization strategy based on the minimum energy loss rate, for the aim of achieving high drive efficiency and reducing the effect of the magnetic circuit nonlinearity. Then, the dynamic model of the driving element is established by using the Lagrange method and the performance of the designed payload-platform is further discussed through the combination of the controlled auto regressive moving average (CARMA) model with modified generalized prediction control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload-platform has an impressive potential of micro-vibration isolation.
Symbiosis of Steel, Energy, and CO2 Evolution in Korea
NASA Astrophysics Data System (ADS)
Lee, Hyunjoung; Matsuura, Hiroyuki; Sohn, Il
2016-09-01
This study looks at the energy intensity of the steel industry and the greenhouse gas intensity involved with the production of steel. Using several sources of steel production data and the corresponding energy sources used, a time-series analysis of the greenhouse gas (GHG) and energy intensity from 1990 to 2014 is provided. The relationship of the steel economy to the gross domestic product (GDP) gives an indirect indication of the importance of the general manufacturing sector within Korea, and of the steel industry in particular. Beyond 2008, the shift in excess materials production and the significant increase in total imports have led to an imbalance in the Korean steel market and continue to inhibit the growth of the domestic steel market. The forecast of the GHG and energy intensity along with the steel production up to 2030 is provided using autoregressive integrated moving average analysis.
On the Period-Amplitude and Amplitude-Period Relationships
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
Examined are Period-Amplitude and Amplitude-Period relationships based on the cyclic behavior of the 12-month moving averages of monthly mean sunspot numbers for cycles 0-23, both in terms of Fisher's exact tests for 2x2 contingency tables and linear regression analyses. Concerning the Period-Amplitude relationship (same cycle), because cycle 23's maximum amplitude is known to be 120.8, the inferred regressions (90-percent prediction intervals) suggest that its period will be 131 +/- 24 months (using all cycles) or 131 +/- 18 months (ignoring cycles 2 and 4, which have the extremes of period, 108 and 164 months, respectively). Because cycle 23 has already persisted for 142 months (May 1996 through February 2008), based on the latter prediction, it should end before September 2008. Concerning the Amplitude-Period relationship (following cycle maximum amplitude versus preceding cycle period), because cycle 23's period is known to be at least 142 months, the inferred regressions (90-percent prediction intervals) suggest that cycle 24's maximum amplitude will be about less than or equal to 96.1 +/- 55.0 (using all cycle pairs) or less than or equal to 91.0 +/- 36.7 (ignoring statistical outlier cycle pairs). Hence, cycle 24's maximum amplitude is expected to be less than 151, perhaps even less than 128, unless cycle pair 23/24 proves to be a statistical outlier.
Rimaityte, Ingrida; Ruzgas, Tomas; Denafas, Gintaras; Racys, Viktoras; Martuzevicius, Dainius
2012-01-01
Forecasting of generation of municipal solid waste (MSW) in developing countries is often a challenging task due to the lack of data and the selection of a suitable forecasting method. This article aimed to select and evaluate several methods for MSW forecasting in a medium-sized Eastern European city (Kaunas, Lithuania) with a rapidly developing economy, with respect to affluence-related and seasonal impacts. The MSW generation was forecast with respect to the economic activity of the city (regression modelling) and using time series analysis. The modelling based on socio-economic indicators (regression implemented in the LCA-IWM model) showed particular sensitivity (deviation from actual data in the range from 2.2 to 20.6%) to external factors, such as the synergetic effects of affluence parameters or changes in the MSW collection system. For the time series analysis, the combination of autoregressive integrated moving average (ARIMA) and seasonal exponential smoothing (SES) techniques was found to be the most accurate (mean absolute percentage error equal to 6.5). The time series analysis method was very valuable for forecasting the weekly variation of waste generation data (r2 > 0.87), but the forecast yearly increase should be verified against the data obtained by regression modelling. The methods and findings of this study may assist experts, decision-makers and scientists performing forecasts of MSW generation, especially in developing countries.
Stockwell, Tim; Zhao, Jinhui; Sherk, Adam; Callaghan, Russell C; Macdonald, Scott; Gatley, Jodi
2017-07-01
Saskatchewan's introduction in April 2010 of minimum prices graded by alcohol strength led to an average minimum price increase of 9.1% per Canadian standard drink (=13.45 g ethanol). This increase was shown to be associated with reduced consumption and switching to lower alcohol content beverages. Police also informally reported marked reductions in night-time alcohol-related crime. This study aims to assess the impacts of changes to Saskatchewan's minimum alcohol-pricing regulations between 2008 and 2012 on selected crime events often related to alcohol use. Data were obtained from Canada's Uniform Crime Reporting Survey. Auto-regressive integrated moving average time series models were used to test immediate and lagged associations between minimum price increases and rates of night-time and police identified alcohol-related crimes. Controls were included for simultaneous crime rates in the neighbouring province of Alberta, economic variables, linear trend, seasonality and autoregressive and/or moving-average effects. The introduction of increased minimum-alcohol prices was associated with an abrupt decrease in night-time alcohol-related traffic offences for men (-8.0%, P < 0.001), but not women. No significant immediate changes were observed for non-alcohol-related driving offences, disorderly conduct or violence. Significant monthly lagged effects were observed for violent offences (-19.7% at month 4 to -18.2% at month 6), which broadly corresponded to lagged effects in on-premise alcohol sales. Increased minimum alcohol prices may contribute to reductions in alcohol-related traffic-related and violent crimes perpetrated by men. Observed lagged effects for violent incidents may be due to a delay in bars passing on increased prices to their customers, perhaps because of inventory stockpiling. [Stockwell T, Zhao J, Sherk A, Callaghan RC, Macdonald S, Gatley J. Assessing the impacts of Saskatchewan's minimum alcohol pricing regulations on alcohol-related crime. Drug Alcohol Rev 2017;36:492-501]. © 2016 Australasian Professional Society on Alcohol and other Drugs.
What do UK doctors in training value in a post? A discrete choice experiment.
Cleland, Jennifer; Johnston, Peter; Watson, Verity; Krucien, Nicolas; Skåtun, Diane
2016-02-01
Many individual and job-related factors are known to influence medical careers decision making. Medical trainees' (residents) views of which characteristics of a training post are important to them have been extensively studied but how they trade-off these characteristics is under-researched. Such information is crucial for the development of effective policies to enhance recruitment and retention. Our aim was to investigate the strength of UK foundation doctors' and trainees' preferences for training post characteristics in terms of monetary value. We used an online questionnaire study incorporating a discrete choice experiment (DCE), distributed to foundation programme doctors and doctors in training across all specialty groups within three UK regions, in August-October 2013. The main outcome measures were monetary values for training-post characteristics, based on willingness to forgo and willingness to accept extra income for a change in each job characteristic, calculated from regression coefficients. The questionnaire was answered by 1323 trainees. Good working conditions were the most influential characteristics of a training position. Trainee doctors would need to be compensated by an additional 49.8% above the average earnings within their specialty to move from a post with good working conditions to one with poor working conditions. A training post with limited rather than good opportunities for one's spouse or partner would require compensation of 38.4% above the average earnings within their specialty. Trainees would require compensation of 30.8% above the average earnings within their specialty to move from a desirable to a less desirable locality. These preferences varied only to a limited extent according to individual characteristics. Trainees place most value on good working conditions, good opportunities for their partners and desirable geographical location when making career-related decisions. This intelligence can be used to develop alternative models of workforce planning or to develop information about job opportunities that address trainees' values. © 2016 John Wiley & Sons Ltd.
MARD—A moving average rose diagram application for the geosciences
NASA Astrophysics Data System (ADS)
Munro, Mark A.; Blenkinsop, Thomas G.
2012-12-01
MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
Stone, Wesley W.; Gilliom, Robert J.
2012-01-01
Watershed Regressions for Pesticides (WARP) models, previously developed for atrazine at the national scale, are improved for application to the United States (U.S.) Corn Belt region by developing region-specific models that include watershed characteristics that are influential in predicting atrazine concentration statistics within the Corn Belt. WARP models for the Corn Belt (WARP-CB) were developed for annual maximum moving-average (14-, 21-, 30-, 60-, and 90-day durations) and annual 95th-percentile atrazine concentrations in streams of the Corn Belt region. The WARP-CB models accounted for 53 to 62% of the variability in the various concentration statistics among the model-development sites. Model predictions were within a factor of 5 of the observed concentration statistic for over 90% of the model-development sites. The WARP-CB residuals and uncertainty are lower than those of the National WARP model for the same sites. Although atrazine-use intensity is the most important explanatory variable in the National WARP models, it is not a significant variable in the WARP-CB models. The WARP-CB models provide improved predictions for Corn Belt streams draining watersheds with atrazine-use intensities of 17 kg/km2 of watershed area or greater.
Time series trends of the safety effects of pavement resurfacing.
Park, Juneyoung; Abdel-Aty, Mohamed; Wang, Jung-Han
2017-04-01
This study evaluated the safety performance of pavement resurfacing projects on urban arterials in Florida using observational before-and-after approaches. The safety effects of pavement resurfacing were quantified in crash modification factors (CMFs) and estimated based on different ranges of heavy vehicle traffic volume and time changes for different severity levels. In order to evaluate the variation of CMFs over time, crash modification functions (CMFunctions) were developed using nonlinear regression and time series models. The results showed that pavement resurfacing projects decrease crash frequency and are found to be more effective at reducing severe crashes in general. Moreover, the results of the general relationship between the safety effects and time changes indicated that the CMFs increase over time after the resurfacing treatment. It was also found that pavement resurfacing projects for urban roadways with a higher heavy vehicle volume rate are more safety effective than those for roadways with a lower heavy vehicle volume rate. Based on the exploration and comparison of the developed CMFunctions, the seasonal autoregressive integrated moving average (SARIMA) and exponential functional forms of the nonlinear regression models can be utilized to identify the trend of CMFs over time. Copyright © 2017 Elsevier Ltd. All rights reserved.
A stepwise model to predict monthly streamflow
NASA Astrophysics Data System (ADS)
Mahmood Al-Juboori, Anas; Guven, Aytac
2016-12-01
In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of a month t is considered a function of the antecedent month's flow (t - 1) and is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to the traditional nonlinear regression technique. The coefficient of determination and the root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with those of the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison results, based on five different statistical measures, show that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
NASA Astrophysics Data System (ADS)
Yang, Fan; Xue, Lianqing; Zhang, Luochen; Chen, Xinfang; Chi, Yixia
2017-12-01
This article aims to explore adaptive utilization strategies for the flow regime versus traditional practices in the context of climate change and human activities in an arid area. The study presents a quantitative analysis of the climatic and anthropogenic contributions to streamflow alteration in the Tarim River Basin (TRB) using the Budyko method, and develops adaptive utilization strategies for the eco-hydrological regime by comparing the applicability of an autoregressive moving average (ARMA) model and a combined regression model. Our results suggest that human activities played a dominant role in streamflow reduction in the mainstream, with a contribution of 120.7%~190.1%, while in the headstreams climatic variables were the primary determinant, accounting for 56.5%~152.6% of the increase in streamflow. The comparison revealed that the combined regression model performed better than the ARMA model, with a qualified rate of 80.49%~90.24%. Based on the streamflow forecasts for different purposes, an adaptive utilization scheme for water flow is established from the perspectives of time and space. Our study presents an effective water resources scheduling scheme for the ecological environment and provides a reference for ecological protection and water allocation in arid areas.
Zhou, Qingping; Jiang, Haiyan; Wang, Jianzhou; Zhou, Jianling
2014-10-15
Exposure to high concentrations of fine particulate matter (PM₂.₅) can cause serious health problems because PM₂.₅ contains microscopic solid or liquid droplets that are sufficiently small to be ingested deep into human lungs. Thus, daily prediction of PM₂.₅ levels is notably important for regulatory plans that inform the public and restrict social activities in advance when harmful episodes are foreseen. A hybrid EEMD-GRNN (ensemble empirical mode decomposition-general regression neural network) model based on data preprocessing and analysis is firstly proposed in this paper for one-day-ahead prediction of PM₂.₅ concentrations. The EEMD part is utilized to decompose original PM₂.₅ data into several intrinsic mode functions (IMFs), while the GRNN part is used for the prediction of each IMF. The hybrid EEMD-GRNN model is trained using input variables obtained from principal component regression (PCR) model to remove redundancy. These input variables accurately and succinctly reflect the relationships between PM₂.₅ and both air quality and meteorological data. The model is trained with data from January 1 to November 1, 2013 and is validated with data from November 2 to November 21, 2013 in Xi'an Province, China. The experimental results show that the developed hybrid EEMD-GRNN model outperforms a single GRNN model without EEMD, a multiple linear regression (MLR) model, a PCR model, and a traditional autoregressive integrated moving average (ARIMA) model. The hybrid model with fast and accurate results can be used to develop rapid air quality warning systems. Copyright © 2014 Elsevier B.V. All rights reserved.
Periodicity analysis of tourist arrivals to Banda Aceh using smoothing SARIMA approach
NASA Astrophysics Data System (ADS)
Miftahuddin, Helida, Desri; Sofyan, Hizir
2017-11-01
Forecasting the number of tourist arrivals entering a region is needed for tourism businesses and for economic and industrial policies, so statistical modeling needs to be conducted. Banda Aceh is the capital of Aceh province, where much of the economic activity is driven by the services sector, including tourism. Therefore, prediction of the number of tourist arrivals is needed to develop further policies. The identification results indicate that the foreign tourist arrival data for Banda Aceh contain both trend and seasonal components. The number of arrivals is presumably influenced by external factors, such as economics, politics, and the holiday season, which caused structural breaks in the data. Trend patterns are detected using polynomial regression with quadratic and cubic approaches, while seasonality is detected by periodic polynomial regression with quadratic and cubic approaches. To model data with seasonal effects, one of the statistical methods that can be used is SARIMA (Seasonal Autoregressive Integrated Moving Average). The results showed that the best smoothing method for detecting the trend pattern is the cubic polynomial regression approach, with the modified model and a multiplicative periodicity of 12 months; the AIC value obtained was 70.52. The best method for detecting the seasonal pattern is the cubic periodic polynomial regression approach, with the modified model and a multiplicative periodicity of 12 months; the AIC value obtained was 73.37. Furthermore, the best model to predict the number of foreign tourist arrivals to Banda Aceh in 2017 to 2018 is SARIMA (0,1,1)(1,1,0) with a MAPE of 26%.
Adachi, Yasumoto; Makita, Kohei
2015-09-01
Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer for corrective actions when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis therefore would be a useful threshold to detect an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for 2 years between 2011 and 2012. For the modeling, at first, periodicities were checked using Fast Fourier Transformation, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of minimum Akaike's information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of whole or partial condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from three producers with the highest rate of condemnation due to mycobacteriosis.
Longitudinal follow-up of academic achievement in children with autism from age 2 to 18.
Kim, So Hyun; Bal, Vanessa H; Lord, Catherine
2018-03-01
This study examined early predictors of and changes in school-age academic achievement and class placement in children referred for autism spectrum disorder (ASD) at age 2. Of 111 ASD referrals, 74 were diagnosed with ASD at age 18. Regression analyses were performed to identify age 3 predictors of achievement in arithmetic, passage comprehension, word reading, and spelling at ages 9 and 18. Linear Mixed Models were used to examine predictors of academic growth between ages 9 and 18. Academic skills varied widely at 9 and 18, but were mostly commensurate with or higher than expected given cognitive levels. However, 22% (age 9) and 32% (age 18) of children with average/above average IQ showed below/low average achievement in at least one academic domain. Children who remained in general education/inclusion classrooms had higher achievement than those who moved to special education classrooms. Stronger cognitive skills at age 3 and 9 predicted better academic achievement and faster academic growth from age 9 to 18. Parent participation in intervention by age 3 predicted better achievement at age 9 and 18. Many children with ASD achieve basic academic skills commensurate with or higher than their cognitive ability. However, more rigorous screening for learning difficulties may be important for those with average cognitive skills because a significant minority show relative academic delays. Interventions targeting cognitive skills and parent participation in early treatment may have cascading effects on long-term academic development. © 2017 Association for Child and Adolescent Mental Health.
Spatial variation of pneumonia hospitalization risk in Twin Cities metro area, Minnesota.
Iroh Tam, P Y; Krzyzanowski, B; Oakes, J M; Kne, L; Manson, S
2017-11-01
Fine resolution spatial variability in pneumonia hospitalization may identify correlates with socioeconomic, demographic and environmental factors. We performed a retrospective study within the Fairview Health System network of Minnesota. Patients 2 months of age and older hospitalized with pneumonia between 2011 and 2015 were geocoded to their census block group, and pneumonia hospitalization risk was analyzed in relation to socioeconomic, demographic and environmental factors. Spatial analyses were performed using Esri's ArcGIS software, and multivariate Poisson regression was used. Hospital encounters of 17 840 patients were included in the analysis. Multivariate Poisson regression identified several significant associations, including a 40% increased risk of pneumonia hospitalization among census block groups with large, compared with small, populations of ⩾65 years, a 56% increased risk among census block groups in the bottom (first) quartile of median household income compared to the top (fourth) quartile, a 44% higher risk in the fourth quartile of average nitrogen dioxide emissions compared with the first quartile, and a 47% higher risk in the fourth quartile of average annual solar insolation compared to the first quartile. After adjusting for income, moving from the first to the second quartile of the race/ethnic diversity index resulted in a 21% significantly increased risk of pneumonia hospitalization. In conclusion, the risk of pneumonia hospitalization at the census-block level is associated with age, income, race/ethnic diversity index, air quality, and solar insolation, and varies by region-specific factors. Identifying correlates using fine spatial analysis provides opportunities for targeted prevention and control.
1990-11-01
(Q + aa')^-1 = Q^-1 - Q^-1 a a' Q^-1 / (1 + a' Q^-1 a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and ... 2. The First-Order Moving Average Model ... 3. Some Approaches to the Iterative ... the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and ...
Forecasting Instability Indicators in the Horn of Africa
2008-03-01
... further than 2 (Makridakis, et al., 1983, 359). Autoregressive Integrated Moving Average (ARIMA) Model: similar to the ARMA model except for a ... stationary process. ARIMA models are described as ARIMA(p,d,q), where p is the order of the autoregressive process, d is the degree of the differencing process, and q is the order of the moving average process. The ARMA(1,1) model shown above is equivalent to an ARIMA(1,0,1) model. An ARIMA ...
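To make the ARIMA(p,d,q) notation in the fragment above concrete, here is a short sketch on synthetic data showing that an ARMA(1,1) process can be fitted as ARIMA(1,0,1), i.e. with no differencing:

```python
import statsmodels.api as sm
from statsmodels.tsa.arima_process import arma_generate_sample

# simulate an ARMA(1,1) process: y_t = 0.6*y_{t-1} + e_t + 0.3*e_{t-1}
y = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.3], nsample=500)

# ARMA(1,1) is just ARIMA(1,0,1): d = 0 means no differencing is applied
fit = sm.tsa.ARIMA(y, order=(1, 0, 1)).fit()
print(fit.params)   # AR and MA estimates should come out near 0.6 and 0.3
```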
Decadal Trends of Atlantic Basin Tropical Cyclones (1950-1999)
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Ten-year moving averages of the seasonal rates for 'named storms,' tropical storms, hurricanes, and major (or intense) hurricanes in the Atlantic basin suggest that the present epoch is one of enhanced activity, marked by seasonal rates typically equal to or above respective long-term median rates. As an example, the 10-year moving average of the seasonal rates for named storms is now higher than for any previous year over the past 50 years, measuring 10.65 in 1994, or 2.65 units higher than its median rate of 8. Also, the 10-year moving average for tropical storms has more than doubled, from 2.15 in 1955 to 4.60 in 1992, with 16 of the past 20 years having a seasonal rate of three or more (the median rate). For hurricanes and major hurricanes, their respective 10-year moving averages turned upward, rising above long-term median rates (5.5 and 2, respectively) in 1992, a response to the abrupt increase in seasonal rates that occurred in 1995. Taken together, the outlook for future hurricane seasons is for all categories of Atlantic basin tropical cyclones to have seasonal rates at levels equal to or above long-term median rates, especially during non-El Nino-related seasons. Only during El Nino-related seasons does it appear likely that seasonal rates might be slightly diminished.
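A small sketch of the 10-year moving average used above, computed with pandas over a synthetic series of seasonal named-storm counts (the counts are not the actual Atlantic basin values):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
years = np.arange(1950, 2000)
named_storms = pd.Series(rng.poisson(9, size=years.size), index=years)

# trailing 10-year moving average of the seasonal rate
ten_yma = named_storms.rolling(window=10).mean()
print(ten_yma.dropna().tail())

# compare the latest 10-year average with the long-term median rate
print("long-term median:", named_storms.median(),
      " latest 10-yr average:", round(ten_yma.iloc[-1], 2))
```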
Wu, Shaowei; Deng, Furong; Niu, Jie; Huang, Qinsheng; Liu, Youcheng; Guo, Xinbiao
2010-01-01
Heart rate variability (HRV), a marker of cardiac autonomic function, has been associated with particulate matter (PM) air pollution, especially in older patients and those with cardiovascular diseases. However, the effect of PM exposure on cardiac autonomic function in young, healthy adults has received less attention. We evaluated the relationship between exposure to traffic-related PM with an aerodynamic diameter
Short-Term Mortality Rates during a Decade of Improved Air Quality in Erfurt, Germany
Breitner, Susanne; Stölzel, Matthias; Cyrys, Josef; Pitz, Mike; Wölke, Gabriele; Kreyling, Wolfgang; Küchenhoff, Helmut; Heinrich, Joachim; Wichmann, H.-Erich; Peters, Annette
2009-01-01
Background: Numerous studies have shown associations between ambient air pollution and daily mortality. Objectives: Our goal was to investigate the association of ambient air pollution and daily mortality in Erfurt, Germany, over a 10.5-year period after the German unification, when air quality improved. Methods: We obtained daily mortality counts and data on mass concentrations of particulate matter (PM) < 10 μm in aerodynamic diameter (PM10), gaseous pollutants, and meteorology in Erfurt between October 1991 and March 2002. We obtained ultrafine particle number concentrations (UFP) and mass concentrations of PM < 2.5 μm in aerodynamic diameter (PM2.5) from September 1995 to March 2002. We analyzed the data using semiparametric Poisson regression models adjusting for trend, seasonality, influenza epidemics, day of the week, and meteorology. We evaluated cumulative associations between air pollution and mortality using polynomial distributed lag (PDL) models and multiday moving averages of air pollutants. We evaluated changes in the associations over time in time-varying coefficient models. Results: Air pollution concentrations decreased over the study period. Cumulative exposure to UFP was associated with increased mortality. An interquartile range (IQR) increase in the 15-day cumulative mean UFP of 7,649 cm−3 was associated with a relative risk (RR) of 1.060 [95% confidence interval (CI), 1.008–1.114] for PDL models and an RR/IQR of 1.055 (95% CI, 1.011–1.101) for moving averages. RRs decreased from the mid-1990s to the late 1990s. Conclusion: Results indicate an elevated mortality risk from short-term exposure to UFP. They further suggest that RRs for short-term associations of air pollution decreased as pollution control measures were implemented in Eastern Germany. PMID:19337521
Carlsen, Hanne Krage; Zoëga, Helga; Valdimarsdóttir, Unnur; Gíslason, Thórarinn; Hrafnkelsson, Birgir
2012-02-01
Air pollutants in Iceland's capital area include hydrogen sulfide (H2S) emissions from geothermal power plants, particle pollution (PM10) and traffic-related pollutants. Respiratory health effects of exposure to PM and traffic pollutants are well documented, yet this is one of the first studies to investigate short-term health effects of ambient H2S exposure. The aim of this study was to investigate the associations between daily ambient levels of H2S, PM10, nitrogen dioxide (NO2) and ozone (O3), and the use of drugs for obstructive pulmonary diseases in adults in Iceland's capital area. The study period was 8 March 2006 to 31 December 2009. We used log-linear Poisson generalized additive regression models with cubic splines to estimate relative risks of individually dispensed drugs by air pollution levels. A three-day moving average of the exposure variables gave the best fit to the data. Final models included significant covariates adjusting for climate and influenza epidemics, as well as time-dependent variables. The three-day moving averages of H2S and PM10 levels were positively associated with the number of individuals who were dispensed drugs at lag 3-5, corresponding to a 2.0% (95% confidence interval [CI] 0.4, 3.6) and a 0.9% (95% CI 0.1, 1.8) increase per 10 μg/m3 increase in pollutant concentration, respectively. Our findings indicated that intermittent increases in levels of particulate matter from traffic and natural sources and ambient H2S levels were weakly associated with increased dispensing of drugs for obstructive pulmonary disease in Iceland's capital area. These weak associations could be confounded by unevaluated variables; hence, further studies are needed. Copyright © 2012 Elsevier Inc. All rights reserved.
Ward, S T; Hancox, A; Mohammed, M A; Ismail, T; Griffiths, E A; Valori, R; Dunckley, P
2017-06-01
The aim of this study was to determine the number of OGDs (oesophago-gastro-duodenoscopies) trainees need to perform to acquire competency in terms of successful unassisted completion to the second part of the duodenum 95% of the time. OGD data were retrieved from the trainee e-portfolio developed by the Joint Advisory Group on GI Endoscopy (JAG) in the UK. All trainees were included unless they were known to have a baseline experience of >20 procedures or had submitted data for <20 procedures. The primary outcome measure was OGD completion, defined as passage of the endoscope to the second part of the duodenum without physical assistance. The number of OGDs required to achieve a 95% completion rate was calculated by the moving average method and learning curve cumulative summation (LC-Cusum) analysis. To determine which factors were independently associated with OGD completion, a mixed effects logistic regression model was constructed with OGD completion as the outcome variable. Data were analysed for 1255 trainees over 288 centres, representing 243 555 OGDs. By the moving average method, trainees attained a 95% completion rate at 187 procedures. By LC-Cusum analysis, after 200 procedures, >90% of trainees had attained a 95% completion rate. Total number of OGDs performed, trainee age and experience in lower GI endoscopy were factors independently associated with OGD completion. There are limited published data on the OGD learning curve. This is the largest study to date analysing the learning curve for competency acquisition. The JAG competency requirement for 200 procedures appears appropriate. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
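The two learning-curve summaries mentioned above can be sketched for a single trainee as a rolling completion rate and a simple LC-CUSUM-style score; the window size, acceptable/unacceptable rates, decision limit, and synthetic completion record are illustrative choices, not the JAG values.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
# synthetic record of 300 OGDs: completion probability improves with experience
p = np.clip(0.75 + 0.001 * np.arange(300), None, 0.97)
completed = rng.random(300) < p

# moving-average completion rate over the last 50 procedures
rate = pd.Series(completed.astype(float)).rolling(window=50).mean()
hits = rate >= 0.95
print("first procedure with a 95% rolling completion rate:",
      int(hits.idxmax()) + 1 if hits.any() else "not reached")

# LC-CUSUM-style score: successes add log(p1/p0), failures add log((1-p1)/(1-p0)),
# floored at 0; competence is signalled when the score crosses the limit h
p0, p1, h = 0.90, 0.95, 2.0          # illustrative rates and decision limit
score, signal_at = 0.0, None
for i, ok in enumerate(completed, start=1):
    step = np.log(p1 / p0) if ok else np.log((1 - p1) / (1 - p0))
    score = max(0.0, score + step)
    if signal_at is None and score >= h:
        signal_at = i
print("LC-CUSUM signal at procedure:", signal_at)
```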
Neighborhood Walkability and Body Mass Index Trajectories: Longitudinal Study of Canadians.
Wasfi, Rania A; Dasgupta, Kaberi; Orpana, Heather; Ross, Nancy A
2016-05-01
To assess the impact of neighborhood walkability on body mass index (BMI) trajectories of urban Canadians. Data are from Canada's National Population Health Survey (n = 2935; biannual assessments 1994-2006). We measured walkability with the Walk Score. We modeled body mass index (BMI, defined as weight in kilograms divided by the square of height in meters [kg/m(2)]) trajectories as a function of Walk Score and sociodemographic and behavioral covariates with growth curve models and fixed-effects regression models. In men, BMI increased annually by an average of 0.13 kg/m(2) (95% confidence interval [CI] = 0.11, 0.14) over the 12 years of follow-up. Moving to a high-walkable neighborhood (2 or more Walk Score quartiles higher) decreased BMI trajectories for men by approximately 1 kg/m(2) (95% CI = -1.16, -0.17). Moving to a low-walkable neighborhood increased BMI for men by approximately 0.45 kg/m(2) (95% CI = 0.01, 0.89). There was no detectable influence of neighborhood walkability on body weight for women. Our study of a large sample of urban Canadians followed for 12 years confirms that neighborhood walkability influences BMI trajectories for men, and may be influential in curtailing male age-related weight gain.
Motile and non-motile sperm diagnostic manipulation using optoelectronic tweezers.
Ohta, Aaron T; Garcia, Maurice; Valley, Justin K; Banie, Lia; Hsu, Hsan-Yin; Jamshidi, Arash; Neale, Steven L; Lue, Tom; Wu, Ming C
2010-12-07
Optoelectronic tweezers (OET) were used to manipulate human spermatozoa to determine whether their response to OET predicts sperm viability among non-motile sperm. We review the electro-physical basis for how live and dead human spermatozoa respond to OET. The maximal velocity at which non-motile spermatozoa could be induced to move by attraction to or repulsion from a moving OET field was measured. Viable sperm are attracted to OET fields and can be induced to move at an average maximal velocity of 8.8 ± 4.2 µm s(-1), while non-viable sperm are repelled by OET fields and are induced to move at an average maximal velocity of -0.8 ± 1.0 µm s(-1). Manipulation of the sperm using OET does not appear to result in increased DNA fragmentation, making this a potential method by which to identify viable non-motile sperm for assisted reproductive technologies.
Transport of the moving barrier driven by chiral active particles
NASA Astrophysics Data System (ADS)
Liao, Jing-jing; Huang, Xiao-qun; Ai, Bao-quan
2018-03-01
Transport of a moving V-shaped barrier exposed to a bath of chiral active particles is investigated in a two-dimensional channel. Due to the chirality of active particles and the transversal asymmetry of the barrier position, active particles can power and steer the directed transport of the barrier in the longitudinal direction. The transport of the barrier is determined by the chirality of active particles. The moving barrier and active particles move in the opposite directions. The average velocity of the barrier is much larger than that of active particles. There exist optimal parameters (the chirality, the self-propulsion speed, the packing fraction, and the channel width) at which the average velocity of the barrier takes its maximal value. In particular, tailoring the geometry of the barrier and the active concentration provides novel strategies to control the transport properties of micro-objects or cargoes in an active medium.
A vertical handoff decision algorithm based on ARMA prediction model
NASA Astrophysics Data System (ADS)
Li, Ru; Shen, Jiao; Chen, Jun; Liu, Qiuhuan
2012-01-01
With the development of computer technology and the increasing demand for mobile communications, next generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next generation wireless networks. During the vertical handoff procedure, the handoff decision is a crucial issue for efficient mobility. Based on an autoregressive moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm that aims to improve the performance of vertical handoff and avoid unnecessary handoffs. Based on the current received signal strength (RSS) and the previous RSS, the proposed approach adopts an ARMA model to predict the next RSS, and then uses the predicted RSS to determine whether to trigger the link layer triggering event and complete the vertical handoff. The simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in handoff performance and the number of handoffs.
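A minimal sketch of the decision rule described above: fit an ARMA model to recent RSS samples, predict the next value, and trigger the handoff only if the predicted RSS falls below a threshold. The RSS trace, threshold, window length, and model order are all illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
# synthetic RSS trace (dBm) that slowly degrades as the terminal moves away
rss = -60 - 0.05 * np.arange(400) + rng.normal(0, 1.5, 400)

THRESHOLD_DBM = -82.0   # illustrative handoff threshold

# ARMA(2,1) fitted to the recent window of RSS samples (ARMA = ARIMA with d = 0)
window = rss[-200:]
fit = sm.tsa.ARIMA(window, order=(2, 0, 1)).fit()
predicted_next = fit.forecast(steps=1)[0]

if predicted_next < THRESHOLD_DBM:
    print(f"predicted RSS {predicted_next:.1f} dBm < threshold -> trigger handoff")
else:
    print(f"predicted RSS {predicted_next:.1f} dBm >= threshold -> stay on current network")
```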
Simulation of high-energy radiation belt electron fluxes using NARMAX-VERB coupled codes
Pakhotin, I P; Drozdov, A Y; Shprits, Y Y; Boynton, R J; Subbotin, D A; Balikhin, M A
2014-01-01
This study presents a fusion of data-driven and physics-driven methodologies of energetic electron flux forecasting in the outer radiation belt. Data-driven NARMAX (Nonlinear AutoRegressive Moving Averages with eXogenous inputs) model predictions for geosynchronous orbit fluxes have been used as an outer boundary condition to drive the physics-based Versatile Electron Radiation Belt (VERB) code, to simulate energetic electron fluxes in the outer radiation belt environment. The coupled system has been tested for three extended time periods totalling several weeks of observations. The time periods involved periods of quiet, moderate, and strong geomagnetic activity and captured a range of dynamics typical of the radiation belts. The model has successfully simulated energetic electron fluxes for various magnetospheric conditions. Physical mechanisms that may be responsible for the discrepancies between the model results and observations are discussed. PMID:26167432
[A new kinematics method of determining elbow rotation axis and evaluation of its feasibility].
Han, W; Song, J; Wang, G Z; Ding, H; Li, G S; Gong, M Q; Jiang, X Y; Wang, M Y
2016-04-18
To study a new positioning method for the rotation axis of elbow external fixation, and to evaluate its feasibility. Four normal adult volunteers and six Sawbone elbow models were included in this experiment. The kinematic data of five elbow flexions were collected for each subject by an optical positioning system. The rotation axes of the elbow joints were fitted by the least squares method. The kinematic data and fitting results were visually displayed. According to the fitting results, the average moving planes and rotation axes were calculated, and thus the rotation axes of the new kinematic method were obtained. Using standard clinical methods, the entrance and exit points of the rotation axes of the six Sawbone elbow models were located under X-ray, and Kirschner wires were placed as representatives of the rotation axes determined by the traditional positioning method. Then, the entrance point deviation, the exit point deviation and the angle deviation of the two kinds of located rotation axes were compared. For the volunteers, the indicators representing circular degree and coplanarity of the elbow flexion movement trajectory of each volunteer were both about 1 mm. All the distance deviations of the moving axes from the average moving rotation axes of the volunteers were less than 3 mm, and all the angle deviations of the moving axes from the average moving rotation axes of the volunteers were less than 5°. For the six Sawbone models, the average entrance point deviations, the average exit point deviations and the average angle deviations of the two different rotation axes determined by the two located methods were 1.697 2 mm, 1.838 3 mm and 1.321 7°, respectively. All the deviations were very small and within an acceptable range for clinical practice. The values that represent circular degree and coplanarity of the volunteers' elbow single-curvature movement trajectories are very small. The result shows that the elbow single-curvature movement can be regarded as an approximately fixed-axis movement. The new method can replace the traditional method in accuracy and can make up for the deficiency of the traditional fixed-axis method.
Mapping Shoreline Change Using Digital Orthophotogrammetry on Maui, Hawaii
Fletcher, C.; Rooney, J.; Barbee, M.; Lim, S.-C.; Richmond, B.
2003-01-01
Digital, aerial orthophotomosaics with 0.5-3.0 m horizontal accuracy, used with NOAA topographic maps (T-sheets), document past shoreline positions on Maui Island, Hawaii. Outliers in the shoreline position database are determined using a least median of squares regression. Least squares linear regression of the reweighted data (outliers excluded) is used to determine a shoreline trend termed the reweighted linear squares (RLS). To determine the annual erosion hazard rate (AEHR) for use by shoreline managers the RLS data is smoothed in the longshore direction using a weighted moving average five transects wide with the smoothed rate applied to the center transect. Weightings within each five transect group are 1,3,5,3,1. AEHR's (smoothed RLS values) are plotted on a 1:3000 map series for use by shoreline managers and planners. These maps are displayed on the web for public reference at http://www.co.maui.hi.us/departments/Planning/erosion.htm. An end-point rate of change is also calculated using the earliest T-sheet and the latest collected shoreline (1997 or 2002). The resulting database consists of 3565 separate erosion rates spaced every 20 m along 90 km of sandy shoreline. Three regions are analyzed: Kihei, West Maui, and North Shore coasts. The Kihei Coast has an average AEHR of about 0.3 m/yr, an end point rate (EPR) of 0.2 m/yr, 2.8 km of beach loss and 19 percent beach narrowing in the period 1949-1997. Over the same period the West Maui coast has an average AEHR of about 0.2 m/yr, an average EPR of about 0.2 m/yr, about 4.5 km of beach loss and 25 percent beach narrowing. The North Shore has an average AEHR of about 0.4 m/yr, an average EPR of about 0.3 m/yr, 0.8 km of beach loss and 15 percent beach narrowing. The mean, island-wide EPR of eroding shorelines is 0.24 m/yr and the average AEHR of eroding shorelines is about 0.3 m/yr. The overall shoreline change rate, erosion and accretion included, as measured using the unsmoothed RLS technique is 0.21 m/yr. Island wide changes in beach width show a 19 percent decrease over the period 1949/1950 to 1997/2002. Island-wide, about 8 km of dry beach has been lost since 1949 (i.e., high water against hard engineering structures and natural rock substrate).
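The longshore smoothing step described above reduces to a five-transect weighted moving average with weights 1,3,5,3,1 assigned to the centre transect; a short sketch with made-up RLS rates:

```python
import numpy as np

# RLS shoreline-change rates (m/yr) at consecutive 20-m transects (synthetic)
rls = np.array([0.35, 0.30, 0.28, 0.40, 0.45, 0.38, 0.22, 0.18, 0.25, 0.31])

weights = np.array([1, 3, 5, 3, 1], dtype=float)
weights /= weights.sum()

# weighted moving average, five transects wide, value applied to the centre transect
aehr = np.convolve(rls, weights, mode="valid")   # loses two transects at each end
print("smoothed AEHR (m/yr):", np.round(aehr, 3))
```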
Informing Mitigation of Disaster Loss through Social Media: Evidence from Thailand
NASA Astrophysics Data System (ADS)
Allaire, M.
2015-12-01
This paper is the first to investigate the role of online information and social media in enabling households to reduce natural disaster losses. The historic 2011 Bangkok flood is utilized as a case study to assess how internet use allowed households to mitigate flood losses. This event was one of the first major disasters to affect an urban area with a substantial population connected to social media. The role of online information is investigated with a mixed methods approach, using both quantitative (propensity score matching and multivariate regression analysis) and qualitative (in-depth interviews) techniques. The study relies on two data sources - survey responses from 469 Bangkok households and in-depth interviews with internet users who are a subset of the survey participants. Propensity score matching indicates that social media use enabled households to reduce mean total losses by 37%, using a nearest neighbor estimator. Average loss reductions amounted to USD 3,708 to USD 4,886, depending on the matching estimator. In addition, regression analysis suggests that social media use is associated with lower flood losses (average reduction of USD 2,784). These reductions are notable when considering that total flood losses in 2011 averaged USD 4,903. Social media offered information that was not available from other sources, such as localized and nearly real-time updates of flood location and depth. With knowledge of current flood conditions, Bangkok households could move belongings to higher ground before floodwaters arrived. These findings suggest that utilizing social media users as sensors could better inform populations during natural disasters, particularly in locations that lack real-time, accurate flood monitoring networks. Therefore, expanded access to the internet and social media could be especially useful in developing countries, ungauged basins, and highly complex urban environments. There is also an enormous opportunity for disseminating government disaster communication through social media. Overall, the study reveals that online information can enable effective disaster preparedness and reduce flood losses.
ERIC Educational Resources Information Center
Gaines, Gale F.
Focused state efforts have helped teacher salaries in Southern Regional Education Board (SREB) states move toward the national average. Preliminary 2000-01 estimates put SREB's average teacher salary at its highest point in 22 years compared to the national average. The SREB average teacher salary is approximately 90 percent of the national…
Mechanistic approach to generalized technical analysis of share prices and stock market indices
NASA Astrophysics Data System (ADS)
Ausloos, M.; Ivanova, K.
2002-05-01
Classical technical analysis methods of stock evolution are recalled, i.e. the notion of moving averages and momentum indicators. The moving averages lead to the definition of death and gold crosses, resistance and support lines. Momentum indicators lead the price trend and thus give signals before the price trend turns over. The classical technical analysis investment strategy is thereby sketched. Next, we present a generalization of these tricks drawing on physical principles, i.e. taking into account not only the price of a stock but also the volume of transactions. The latter becomes a time dependent generalized mass. The notions of pressure, acceleration and force are deduced. A generalized (kinetic) energy is easily defined. It is understood that the momentum indicators take into account the sign of the fluctuations, while the energy is geared toward the absolute value of the fluctuations. They have different patterns which are checked by searching for the crossing points of their respective moving averages. The case of IBM evolution over 1990-2000 is used for illustrations.
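A small sketch of the classical moving-average signals recalled above: a short and a long moving average of a synthetic price series, with gold crosses where the short average rises above the long one and death crosses where it falls below (the 50/200-day windows are conventional choices, not taken from the paper).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
# synthetic daily closing prices (geometric random walk, for illustration only)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 750))))

short_ma = price.rolling(50).mean()
long_ma = price.rolling(200).mean()

above = (short_ma > long_ma).astype(int)
cross = above.diff()
gold_crosses = cross[cross == 1].index    # short MA crosses above long MA
death_crosses = cross[cross == -1].index  # short MA crosses below long MA
print("gold crosses at days:", list(gold_crosses))
print("death crosses at days:", list(death_crosses))
```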
An impact analysis of forecasting methods and forecasting parameters on bullwhip effect
NASA Astrophysics Data System (ADS)
Silitonga, R. Y. H.; Jelly, N.
2018-04-01
The bullwhip effect is an increase in the variance of demand fluctuations from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon. To study these factors, simulations can be developed. There are several ways to simulate the bullwhip effect in previous studies, such as mathematical equation modelling, information control modelling, computer programs, and many more. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in bullwhip effect ratio caused by differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were moving average period, smoothing parameter, signalling factor, and safety stock factor. The results showed that decreasing the moving average period, increasing the smoothing parameter, and increasing the signalling factor can create a bigger bullwhip effect ratio, whereas the safety stock factor had no impact on the bullwhip effect.
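A rough sketch of how the bullwhip ratio responds to the moving-average period, assuming a simple two-stage chain in which the retailer forecasts demand with a moving average and places orders under an order-up-to policy; the demand process, lead time, and policy are illustrative assumptions, not the Bullwhip Explorer setup.

```python
import numpy as np

def bullwhip_ratio(ma_period, lead_time=2, n=5000, seed=8):
    """Variance of orders divided by variance of demand under a moving-average forecast."""
    rng = np.random.default_rng(seed)
    demand = rng.normal(100, 10, n)
    orders = np.empty(n)
    prev_target = 0.0
    for t in range(n):
        window = demand[max(0, t - ma_period + 1):t + 1]
        forecast = window.mean()                    # moving-average forecast
        target = forecast * (lead_time + 1)         # order-up-to level
        orders[t] = max(0.0, demand[t] + target - prev_target)
        prev_target = target
    warm = ma_period + lead_time                    # discard the warm-up period
    return orders[warm:].var() / demand[warm:].var()

for p in (2, 5, 10, 20):
    print(f"moving-average period {p:2d}: bullwhip ratio = {bullwhip_ratio(p):.2f}")
```

Shorter moving-average windows react more strongly to noise, so the ratio should grow as the period shrinks, consistent with the finding reported above.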
NASA Astrophysics Data System (ADS)
Nair, Kalyani P.; Harkness, Elaine F.; Gadde, Soujanye; Lim, Yit Y.; Maxwell, Anthony J.; Moschidis, Emmanouil; Foden, Philip; Cuzick, Jack; Brentnall, Adam; Evans, D. Gareth; Howell, Anthony; Astley, Susan M.
2017-03-01
Personalised breast screening requires assessment of individual risk of breast cancer, of which one contributory factor is weight. Self-reported weight has been used for this purpose, but may be unreliable. We explore the use of volume of fat in the breast, measured from digital mammograms. Volumetric breast density measurements were used to determine the volume of fat in the breasts of 40,431 women taking part in the Predicting Risk Of Cancer At Screening (PROCAS) study. Tyrer-Cuzick risk using self-reported weight was calculated for each woman. Weight was also estimated from the relationship between self-reported weight and breast fat volume in the cohort, and used to re-calculate Tyrer-Cuzick risk. Women were assigned to risk categories according to 10 year risk (below average <2%, average 2-3.49%, above average 3.5-4.99%, moderate 5-7.99%, high >=8%) and the original and re-calculated Tyrer-Cuzick risks were compared. Of the 716 women diagnosed with breast cancer during the study, 15 (2.1%) moved into a lower risk category, and 37 (5.2%) moved into a higher category when using weight estimated from breast fat volume. Of the 39,715 women without a cancer diagnosis, 1009 (2.5%) moved into a lower risk category, and 1721 (4.3%) into a higher risk category. The majority of changes were between below average and average risk categories (38.5% of those with a cancer diagnosis, and 34.6% of those without). No individual moved more than one risk group. Automated breast fat measures may provide a suitable alternative to self-reported weight for risk assessment in personalized screening.
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain) based on successive application of two models. The first one is a stochastic model, autoregressive integrated moving average (ARIMA), that forecasts monthly minimum absolute temperature (tmin) and monthly average of minimum temperature (tminav) following Box-Jenkins methodology. The second model relates these monthly temperatures to the minimum daily temperature distribution during one month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They present the same seasonal behavior (moving average differenced model) and different non-seasonal parts: autoregressive model (Model 1), moving average differenced model (Model 2) and autoregressive and moving average model (Model 3). At the same time, the results point out that minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month with a very similar standard deviation through the years. This standard deviation obtained for each station and each month could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures showed the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentation (MAPA) is gratefully acknowledged.
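The second step described above can be sketched as follows: given a forecast monthly mean minimum temperature and a station- and month-specific standard deviation, treat daily minima as normally distributed and take the expected number of frost days as the number of days times P(tdmin < 0 °C). The numbers are invented for illustration.

```python
from scipy.stats import norm

def expected_frost_days(tminav_forecast, sigma_daily, days_in_month=31):
    """Expected frost days: days * P(daily minimum temperature < 0 degC)."""
    p_frost = norm.cdf(0.0, loc=tminav_forecast, scale=sigma_daily)
    return days_in_month * p_frost

# e.g. an ARIMA-forecast monthly mean minimum of 2.5 degC with a daily s.d. of 3 degC
print(round(expected_frost_days(2.5, 3.0), 1), "frost days expected")
```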
Smitley, David; Davis, Terrance; Rebek, Eric
2008-10-01
Our objective was to characterize the rate at which ash (Fraxinus spp.) trees decline in areas adjacent to the leading edge of visible ash canopy thinning due to emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae). Trees in southeastern Michigan were surveyed from 2003 to 2006 for canopy thinning and dieback by comparing survey trees with a set of 11 standard photographs. Freeways stemming from Detroit in all directions were used as survey transects. Between 750 and 1,100 trees were surveyed each year. A rapid method of sampling populations of emerald ash borer was developed by counting emerald ash borer emergence holes with binoculars and then felling trees to validate binocular counts. Approximately 25% of the trees surveyed for canopy thinning in 2005 and 2006 also were sampled for emerald ash borer emergence holes using binoculars. Regression analysis indicates that 41-53% of the variation in ash canopy thinning can be explained by the number of emerald ash borer emergence holes per tree. Emerald ash borer emergence holes were found at every site where ash canopy thinning averaged > 40%. In 2003, ash canopy thinning averaged 40% at a distance of 19.3 km from the epicenter of the emerald ash borer infestation in Canton. By 2006, the point at which ash trees averaged 40% canopy thinning had increased to a distance of 51.2 km away from Canton. Therefore, the point at which ash trees averaged 40% canopy thinning, a state of decline clearly visible to the average person, moved outward at a rate of 10.6 km/yr during this period.
Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick
2018-01-01
When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
GIS Tools to Estimate Average Annual Daily Traffic
DOT National Transportation Integrated Search
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
Dynamics of actin-based movement by Rickettsia rickettsii in vero cells.
Heinzen, R A; Grieshaber, S S; Van Kirk, L S; Devin, C J
1999-08-01
Actin-based motility (ABM) is a virulence mechanism exploited by invasive bacterial pathogens in the genera Listeria, Shigella, and Rickettsia. Due to experimental constraints imposed by the lack of genetic tools and their obligate intracellular nature, little is known about rickettsial ABM relative to Listeria and Shigella ABM systems. In this study, we directly compared the dynamics and behavior of ABM of Rickettsia rickettsii and Listeria monocytogenes. A time-lapse video of moving intracellular bacteria was obtained by laser-scanning confocal microscopy of infected Vero cells synthesizing beta-actin coupled to green fluorescent protein (GFP). Analysis of time-lapse images demonstrated that R. rickettsii organisms move through the cell cytoplasm at an average rate of 4.8 +/- 0.6 micrometer/min (mean +/- standard deviation). This speed was 2.5 times slower than that of L. monocytogenes, which moved at an average rate of 12.0 +/- 3.1 micrometers/min. Although rickettsiae moved more slowly, the actin filaments comprising the actin comet tail were significantly more stable, with an average half-life approximately three times that of L. monocytogenes (100.6 +/- 19.2 s versus 33.0 +/- 7.6 s, respectively). The actin tail associated with intracytoplasmic rickettsiae remained stationary in the cytoplasm as the organism moved forward. In contrast, actin tails of rickettsiae trapped within the nucleus displayed dramatic movements. The observed phenotypic differences between the ABM of Listeria and Rickettsia may indicate fundamental differences in the mechanisms of actin recruitment and polymerization.
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
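A minimal sketch of the lagged moving-average comparison described above: correlate a literary-misery series with the trailing k-year mean of an economic misery index and scan k for the best fit. Both series here are synthetic placeholders, constructed so that the peak falls near 11 years.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
years = np.arange(1930, 2001)
economic_misery = pd.Series(8 + rng.normal(0, 3, years.size), index=years)

# synthetic "literary misery" driven by the previous decade of economic misery
literary_misery = (economic_misery.rolling(11).mean().shift(1)
                   + rng.normal(0, 0.5, years.size))

best = max(range(2, 21),
           key=lambda k: literary_misery.corr(economic_misery.rolling(k).mean().shift(1)))
print("window with best goodness of fit:", best, "years")
```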
Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J
2017-04-01
Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features, and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that the anesthetic brain state tracking performance of linear models is comparable to that of a high-performing depth of anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (auto-regressive moving average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the observer's assessment of alertness/sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA(2,1) models, while Higuchi fractal dimension achieved 52%; however, no statistical difference was observed. For the same ARMA case, there was no statistical difference if medians were used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
Up-down Asymmetries in Speed Perception
NASA Technical Reports Server (NTRS)
Thompson, Peter; Stone, Leland S.
1997-01-01
We compared speed matches for pairs of stimuli that moved in opposite directions (upward and downward). Stimuli were elliptical patches (2 deg horizontally by 1 deg vertically) of horizontal sinusoidal gratings of spatial frequency 2 cycles/deg. Two sequential 380 msec presentations were compared. One of each pair of gratings (the standard) moved at 4 Hz (2 deg/sec), the other (the test) moved at a rate determined by a simple up-down staircase. The point of subjectively equal speed was calculated from the average of the last eight reversals. The task was to fixate a central point and to determine which one of the pair appeared to move faster. Eight of 10 observers perceived the upward drifting grating as moving faster than a grating moving downward but otherwise identical. On average (N = 10), when the standard moved downward, it was matched by a test moving upward at 94.7+/-1.7(SE)% of the standard speed, and when the standard moved upward it was matched by a test moving downward at 105.1+/-2.3(SE)% of the standard speed. Extending this paradigm over a range of spatial (1.5 to 13.5 c/d) and temporal (1.5 to 13.5 Hz) frequencies, preliminary results (N = 4) suggest that, under the conditions of our experiment, upward motion is seen as faster than downward for speeds greater than approx. 1 deg/sec, but the effect appears to reverse at speeds below approx. 1 deg/sec, with downward motion perceived as faster. Given that an up-down asymmetry has been observed for the optokinetic response, both perceptual and oculomotor contributions to this phenomenon deserve exploration.
ERIC Educational Resources Information Center
Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael
2011-01-01
This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method for VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS), and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. Besides, the proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The WPP is tested comparatively against an autoregressive moving average (ARMA) model in terms of predicted values and errors. The validity of the proposed hybrid method is confirmed through error analysis using the probability density function (PDF), mean absolute percent error (MAPE) and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual values and the predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate compared to the SSP approach and ARMA. The MLR&LS is accurate and promising for solving problems in WPP.
Tropical Cyclone Activity in the North Atlantic Basin During the Weather Satellite Era, 1960-2014
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2016-01-01
This Technical Publication (TP) represents an extension of previous work concerning the tropical cyclone activity in the North Atlantic basin during the weather satellite era, 1960-2014, in particular, that of an article published in The Journal of the Alabama Academy of Science. With the launch of the TIROS-1 polar-orbiting satellite in April 1960, a new era of global weather observation and monitoring began. Prior to this, the conditions of the North Atlantic basin were determined only from ship reports, island reports, and long-range aircraft reconnaissance. Consequently, storms that formed far from land, away from shipping lanes, and beyond the reach of aircraft possibly could be missed altogether, thereby leading to an underestimate of the true number of tropical cyclones forming in the basin. Additionally, new analysis techniques have come into use, which sometimes have led to the inclusion of one or more storms at the end of a nominal hurricane season that otherwise would not have been included. In this TP, examined are the yearly (or seasonal) and 10-year moving average values of the (1) first storm day (FSD), last storm day (LSD), and length of season (LOS); (2) frequencies of tropical cyclones (by class); (3) average peak 1-minute sustained wind speed (
Moran, John L; Solomon, Patricia J
2013-05-24
Statistical process control (SPC), an industrial sphere initiative, has recently been applied in health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increase in false-alarm frequency. Monthly mean raw mortality (at hospital discharge) time series, 1995-2009, at the individual Intensive Care Unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for (i) series autocorrelation and seasonality was demonstrated using (partial) autocorrelation ((P)ACF) function displays and classical series decomposition, and (ii) "in-control" status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series methods to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average(SD) raw and expected mortalities ranged from 0.012(0.113) and 0.013(0.045) to 0.296(0.457) and 0.278(0.247) respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag40 and 35% had autocorrelation through to lag40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. The data generating process of the monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues.
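A small sketch of a risk-adjusted EWMA control chart of the kind described above: smooth the observed-minus-expected monthly mortality with an EWMA and flag months outside 3-sigma limits. The smoothing constant, limits, and simulated data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(10)
n_months = 180
expected = np.full(n_months, 0.14)                 # expected (risk-adjusted) mortality
observed = expected + rng.normal(0, 0.02, n_months)
observed[150:] += 0.05                             # simulated deterioration late in the series

lam, sigma = 0.2, 0.02
ewma = np.empty(n_months)
ewma[0] = observed[0] - expected[0]
for t in range(1, n_months):
    ewma[t] = lam * (observed[t] - expected[t]) + (1 - lam) * ewma[t - 1]

# asymptotic 3-sigma EWMA control limits
limit = 3 * sigma * np.sqrt(lam / (2 - lam))
signals = np.where(np.abs(ewma) > limit)[0]
print("first out-of-control month:", signals[0] if signals.size else "none")
```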
Advanced Statistics for Exotic Animal Practitioners.
Hodsoll, John; Hellier, Jennifer M; Ryan, Elizabeth G
2017-09-01
Correlation and regression assess the association between 2 or more variables. This article reviews the core knowledge needed to understand these analyses, moving from visual analysis in scatter plots through correlation, simple and multiple linear regression, and logistic regression. Correlation estimates the strength and direction of a relationship between 2 variables. Regression can be considered more general and quantifies the numerical relationships between an outcome and 1 or multiple variables in terms of a best-fit line, allowing predictions to be made. Each technique is discussed with examples and the statistical assumptions underlying their correct application. Copyright © 2017 Elsevier Inc. All rights reserved.
Kinesin-microtubule interactions during gliding assays under magnetic force
NASA Astrophysics Data System (ADS)
Fallesen, Todd L.
Conventional kinesin is a motor protein capable of converting the chemical energy of ATP into mechanical work. In the cell, this is used to actively transport vesicles through the intracellular matrix. The relationship between the velocity of a single kinesin, as it works against an increasing opposing load, has been well studied. The relationship between the velocity of a cargo being moved by multiple kinesin motors against an opposing load has not been established. A major difficulty in determining the force-velocity relationship for multiple motors is determining the number of motors that are moving a cargo against an opposing load. Here I report on a novel method for detaching microtubules bound to a superparamagnetic bead from kinesin anchor points in an upside down gliding assay using a uniform magnetic field perpendicular to the direction of microtubule travel. The anchor points are presumably kinesin motors bound to the surface which microtubules are gliding over. Determining the distance between anchor points, d, allows the calculation of the average number of kinesins, n, that are moving a microtubule. It is possible to calculate the fraction of motors able to move microtubules as well, which is determined to be ˜ 5%. Using a uniform magnetic field parallel to the direction of microtubule travel, it is possible to impart a uniform magnetic field on a microtubule bound to a superparamagnetic bead. We are able to decrease the average velocity of microtubules driven by multiple kinesin motors moving against an opposing force. Using the average number of kinesins on a microtubule, we estimate that there are an average 2-7 kinesins acting against the opposing force. By fitting Gaussians to the smoothed distributions of microtubule velocities acting against an opposing force, multiple velocities are seen, presumably for n, n-1, n-2, etc motors acting together. When these velocities are scaled for the average number of motors on a microtubule, the force-velocity relationship for multiple motors follows the same trend as for one motor, supporting the hypothesis that multiple motors share the load.
Class III correction using an inter-arch spring-loaded module
2014-01-01
Background: A retrospective study was conducted to determine the cephalometric changes in a group of Class III patients treated with the inter-arch spring-loaded module (CS2000®, Dynaflex, St. Ann, MO, USA). Methods: Thirty Caucasian patients (15 males, 15 females) with an average pre-treatment age of 9.6 years were treated consecutively with this appliance and compared with a control group of subjects from the Bolton-Brush Study who were matched in age, gender, and craniofacial morphology to the treatment group. Lateral cephalograms were taken before treatment and after removal of the CS2000® appliance. The treatment effects of the CS2000® appliance were calculated by subtracting the changes due to growth (control group) from the treatment changes. Results: All patients were improved to a Class I dental arch relationship with a positive overjet. Significant sagittal, vertical, and angular changes were found between the pre- and post-treatment radiographs. With an average treatment time of 1.3 years, the maxillary base moved forward by 0.8 mm, while the mandibular base moved backward by 2.8 mm together with improvements in the ANB and Wits measurements. The maxillary incisor moved forward by 1.3 mm and the mandibular incisor moved forward by 1.0 mm. The maxillary molar moved forward by 1.0 mm while the mandibular molar moved backward by 0.6 mm. The average overjet correction was 3.9 mm and 92% of the correction was due to skeletal contribution and 8% was due to dental contribution. The average molar correction was 5.2 mm and 69% of the correction was due to skeletal contribution and 31% was due to dental contribution. Conclusions: Mild to moderate Class III malocclusion can be corrected using the inter-arch spring-loaded appliance with minimal patient compliance. The overjet correction was contributed by forward movement of the maxilla, backward and downward movement of the mandible, and proclination of the maxillary incisors. The molar relationship was corrected by mesialization of the maxillary molars, distalization of the mandibular molars together with a rotation of the occlusal plane. PMID:24934153
Davoren, Mary; O'Dwyer, Sarah; Abidin, Zareena; Naughton, Leena; Gibbons, Olivia; Doyle, Elaine; McDonnell, Kim; Monks, Stephen; Kennedy, Harry G
2012-07-13
We examined whether new structured professional judgment instruments for assessing need for therapeutic security, treatment completion and recovery in forensic settings were related to moves from higher to lower levels of therapeutic security and added anything to assessment of risk. This was a prospective naturalistic twelve month observational study of a cohort of patients in a forensic hospital placed according to their need for therapeutic security along a pathway of moves from high to progressively less secure units in preparation for discharge. Patients were assessed using the DUNDRUM-1 triage security scale, the DUNDRUM-3 programme completion scale and the DUNDRUM-4 recovery scale and assessments of risk of violence, self harm and suicide, symptom severity and global function. Patients were subsequently observed for positive moves to less secure units and negative moves to more secure units. There were 86 male patients at baseline with mean follow-up 0.9 years, 11 positive and 9 negative moves. For positive moves, logistic regression indicated that along with location at baseline, the DUNDRUM-1, HCR-20 dynamic and PANSS general symptom scores were associated with subsequent positive moves. The receiver operating characteristic was significant for the DUNDRUM-1 while ANOVA co-varying for both location at baseline and HCR-20 dynamic score was significant for DUNDRUM-1. For negative moves, logistic regression showed DUNDRUM-1 and HCR-20 dynamic scores were associated with subsequent negative moves, along with DUNDRUM-3 and PANSS negative symptoms in some models. The receiver operating characteristic was significant for the DUNDRUM-4 recovery and HCR-20 dynamic scores with DUNDRUM-1, DUNDRUM-3, PANSS general and GAF marginal. ANOVA co-varying for both location at baseline and HCR-20 dynamic scores showed only DUNDRUM-1 and PANSS negative symptoms associated with subsequent negative moves. Clinicians appear to decide moves based on combinations of current and imminent (dynamic) risk measured by HCR-20 dynamic score and historical seriousness of risk as measured by need for therapeutic security (DUNDRUM-1) in keeping with Scott's formulation of risk and seriousness. The DUNDRUM-3 programme completion and DUNDRUM-4 recovery scales have utility as dynamic measures that can off-set perceived 'dangerousness'.
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in the German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
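As a rough illustration of the kind of calculation described above, the sketch below builds a trailing moving average of a synthetic annual misery series and scans window lengths for the best Pearson correlation with a synthetic literary index; the data, window choices, and variable names are hypothetical and not taken from the study.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical annual series: economic misery = inflation + unemployment (percent),
    # and a "literary misery" index that, by construction, tracks the previous decade.
    rng = np.random.default_rng(0)
    years = np.arange(1930, 2001)
    econ_misery = 8 + 4 * np.sin(years / 7.0) + rng.normal(0, 1, years.size)

    def trailing_mean(x, i, window):
        # Mean of the `window` values ending at index i (inclusive).
        return x[max(0, i - window + 1):i + 1].mean()

    lit_misery = np.array([trailing_mean(econ_misery, i, 11) for i in range(years.size)])
    lit_misery += rng.normal(0, 0.5, years.size)

    # Scan candidate window lengths and report where the goodness of fit peaks.
    for w in (5, 8, 11, 14):
        smoothed = np.array([trailing_mean(econ_misery, i, w) for i in range(years.size)])
        r, _ = pearsonr(smoothed[w:], lit_misery[w:])
        print(f"window = {w:2d} years, r = {r:.3f}")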
NASA Astrophysics Data System (ADS)
Wang, Jing; Shen, Huoming; Zhang, Bo; Liu, Juan
2018-06-01
In this paper, we studied the parametric resonance of an axially moving viscoelastic nanobeam with varying velocity. Based on the nonlocal strain gradient theory, we established the transversal vibration equation of the axially moving nanobeam and the corresponding boundary condition. By applying the averaging method, we obtained a set of autonomous ordinary differential equations when the excitation frequency of the moving parameters is twice the intrinsic frequency or near the sum of certain second-order intrinsic frequencies. On the plane of parametric excitation frequency and excitation amplitude, we can obtain the instability region generated by the resonance, and through numerical simulation, we analyze the influence of the scale effect and system parameters on the instability region. The results indicate that the viscoelastic damping decreases the resonance instability region, while the average velocity and stiffness shift the instability region to the left- and right-hand sides. Meanwhile, the scale effect of the system is pronounced. The nonlocal parameter exhibits not only a stiffness softening effect but also a damping weakening effect, while the material characteristic length parameter exhibits a stiffness hardening effect and a damping reinforcement effect.
ERIC Educational Resources Information Center
Larkin, Brittany
2016-01-01
Two independent studies conducted by Baker, Sciarra, and Farrie (2015) and Augenblick, Palaich and Associates (2015) reveal Alabama's public school funding mechanism to be regressive and inequitable. The recommendation from both of these studies is to develop a funding formula including per pupil-based allocation and supplemental categorical…
Essays in the California electricity reserves markets
NASA Astrophysics Data System (ADS)
Metaxoglou, Konstantinos
This dissertation examines inefficiencies in the California electricity reserves markets. In Chapter 1, I use the information released during the investigation of the state's electricity crisis of 2000 and 2001 by the Federal Energy Regulatory Commission to diagnose allocative inefficiencies. Building upon the work of Wolak (2000), I calculate a lower bound for the sellers' price-cost margins using the inverse elasticities of their residual demand curves. The downward bias in my estimates stems from the fact that I don't account for the hierarchical substitutability of the reserve types. The margins averaged at least 20 percent for the two highest quality types of reserves, regulation and spinning, generating millions of dollars in transfers to a handful of sellers. I provide evidence that the deviations from marginal cost pricing were due to the markets' high concentration and a principal-agent relationship that emerged from their design. In Chapter 2, I document systematic differences between the markets' day- and hour-ahead prices. I use a high-dimensional vector moving average model to estimate the premia and conduct correct inferences. To obtain exact maximum likelihood estimates of the model, I employ the EM algorithm that I develop in Chapter 3. I uncover significant day-ahead premia, which I attribute to market design characteristics too. On the demand side, the market design established a principal-agent relationship between the markets' buyers (principal) and their supervisory authority (agent). The agent had very limited incentives to shift reserve purchases to the lower priced hour-ahead markets. On the supply side, the market design raised substantial entry barriers by precluding purely speculative trading and by introducing a complicated code of conduct that induced uncertainty about which actions were subject to regulatory scrutiny. In Chapter 3, I introduce a state-space representation for vector autoregressive moving average models that enables exact maximum likelihood estimation using the EM algorithm. Moreover, my algorithm uses only analytical expressions; it requires the Kalman filter and a fixed-interval smoother in the E step and least squares-type regression in the M step. In contrast, existing maximum likelihood estimation methods require numerical differentiation, both for univariate and multivariate models.
Time series modelling of increased soil temperature anomalies during long period
NASA Astrophysics Data System (ADS)
Shirvani, Amin; Moradi, Farzad; Moosavi, Ali Akbar
2015-10-01
Soil temperature just beneath the soil surface is highly dynamic and has a direct impact on plant seed germination; it is probably the most distinct and recognisable factor governing emergence. An autoregressive integrated moving average (ARIMA) stochastic model was developed to predict the weekly soil temperature anomalies at 10 cm depth, one of the most important soil parameters. The weekly soil temperature anomalies for the periods of January 1986-December 2011 and January 2012-December 2013 were taken into consideration to construct and test the ARIMA models. The proposed ARIMA(2,1,1) model had the minimum value of the Akaike information criterion, and its estimated coefficients were different from zero at the 5% significance level. Prediction of the weekly soil temperature anomalies during the test period using this model indicated a high correlation coefficient between the observed and predicted data - 0.99 for a lead time of 1 week. Linear trend analysis indicated that the soil temperature anomalies warmed up significantly by 1.8°C during the period 1986-2011.
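A minimal sketch of this kind of order selection with statsmodels is shown below; the weekly anomaly series is synthetic and the candidate orders are illustrative, so only the workflow (fit several ARIMA orders, pick the lowest AIC, forecast one step ahead) mirrors the abstract.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical weekly soil-temperature anomaly series standing in for the 1986-2011 record.
    rng = np.random.default_rng(1)
    anom = pd.Series(np.cumsum(rng.normal(0, 0.2, 1352)),
                     index=pd.date_range("1986-01-05", periods=1352, freq="W"))

    # Compare a few candidate orders by AIC, in the spirit of the paper's ARIMA(2,1,1) choice.
    candidates = [(1, 1, 0), (1, 1, 1), (2, 1, 1), (2, 1, 2)]
    fits = {order: ARIMA(anom, order=order).fit() for order in candidates}
    best = min(fits, key=lambda o: fits[o].aic)
    print("AIC by order:", {o: round(f.aic, 1) for o, f in fits.items()})
    print("selected order:", best)

    # One-week-ahead forecast from the selected model.
    print(fits[best].forecast(steps=1))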
TERMA Framework for Biomedical Signal Analysis: An Economic-Inspired Approach.
Elgendi, Mohamed
2016-11-02
Biomedical signals contain features that represent physiological events, and each of these events has peaks. The analysis of biomedical signals for monitoring or diagnosing diseases requires the detection of these peaks, making event detection a crucial step in biomedical signal processing. Many researchers have difficulty detecting these peaks to investigate, interpret and analyze their corresponding events. To date, there is no generic framework that captures these events in a robust, efficient and consistent manner. A new method referred to for the first time as two event-related moving averages ("TERMA") involves event-related moving averages and detects events in biomedical signals. The TERMA framework is flexible and universal and consists of six independent LEGO building bricks to achieve high accuracy detection of biomedical events. Results recommend that the window sizes for the two moving averages (W1 and W2) have to follow the inequality (8 × W1) ≥ W2 ≥ (2 × W1). Moreover, TERMA is a simple yet efficient event detector that is suitable for wearable devices, point-of-care devices, fitness trackers and smart watches, compared to more complex machine learning solutions.
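The sketch below is a toy detector in the spirit of the two-moving-average idea, enforcing the reported window inequality; the signal, the threshold offset beta, and the specific window sizes are all assumptions made for illustration rather than the published TERMA building blocks.

    import numpy as np

    def moving_average(x, w):
        # Centered moving average with window w, via convolution.
        return np.convolve(x, np.ones(w) / w, mode="same")

    def two_ma_events(signal, w1, w2, beta=0.1):
        # Toy two-moving-average event detector: w1 ~ event width, w2 ~ cycle width,
        # respecting the reported inequality (8*w1) >= w2 >= (2*w1).
        assert 2 * w1 <= w2 <= 8 * w1, "window sizes should satisfy the TERMA inequality"
        ma_event = moving_average(signal, w1)
        ma_cycle = moving_average(signal, w2)
        threshold = ma_cycle + beta * np.mean(signal)
        return ma_event > threshold          # boolean mask of candidate event blocks

    # Hypothetical quasi-periodic, pulse-like signal with noise.
    rng = np.random.default_rng(2)
    t = np.arange(0, 10, 0.01)
    sig = np.clip(np.sin(2 * np.pi * 1.2 * t), 0, None) ** 4 + 0.05 * rng.normal(size=t.size)
    mask = two_ma_events(sig, w1=15, w2=60)
    print("fraction of samples flagged as events:", round(float(mask.mean()), 3))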
Dijkstra, Aletta; Kibele, Eva U B; Verweij, Antonia; van der Lucht, Fons; Janssen, Fanny
2015-12-01
Health disparities between population declining and non-declining areas have received little attention, even though population decline is an established phenomenon in Europe. Selective migration, in which healthier people move out of deprived areas, can possibly explain worse health in declining regions. We assessed whether selective migration can explain the observed worse average health in declining regions as compared with non-declining regions in the Netherlands. Combining data from the Dutch Housing and Living Survey held in 2002 and 2006 with Dutch registry data, we studied the relation between health status and migration in a 5-year period at the individual level by applying logistic regression. In our sample of 130,600 participants, we compared health status, demographic and socioeconomic factors of movers and stayers from declining and non-declining regions. People in the Netherlands who migrated are healthier than those staying behind [odds ratio (OR): 1.80]. This effect is larger for persons moving out of declining regions (OR: 1.76) than those moving into declining regions (OR: 1.47). When controlled for demographic and socioeconomic characteristics, these effects are not significant. Moreover, only a small part of the population migrates out of (0.29%) or into (0.25%) declining regions in the course of 5 years. Despite the relation between health and migration, the effect of selective migration on health differences between declining and non-declining regions in the Netherlands is small. Both health and migration are complexly linked with socioeconomic and demographic factors. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Dexter, F
2000-10-01
We examined how to program an operating room (OR) information system to assist the OR manager in deciding whether to move the last case of the day in one OR to another OR that is empty to decrease overtime labor costs. We first developed a statistical strategy to predict whether moving the case would decrease overtime labor costs for first shift nurses and anesthesia providers. The strategy was based on using historical case duration data stored in a surgical services information system. Second, we estimated the incremental overtime labor costs achieved if our strategy was used for moving cases versus movement of cases by an OR manager who knew in advance exactly how long each case would last. We found that if our strategy was used to decide whether to move cases, then depending on parameter values, only 2.0 to 4.3 more min of overtime would be required per case than if the OR manager had perfect retrospective knowledge of case durations. The use of other information technologies to assist in the decision of whether to move a case, such as real-time patient tracking information systems, closed-circuit cameras, or graphical airport-style displays, can, on average, reduce overtime by no more than only 2 to 4 min per case that can be moved.
Neighborhood Walkability and Body Mass Index Trajectories: Longitudinal Study of Canadians
Dasgupta, Kaberi; Orpana, Heather; Ross, Nancy A.
2016-01-01
Objectives. To assess the impact of neighborhood walkability on body mass index (BMI) trajectories of urban Canadians. Methods. Data are from Canada’s National Population Health Survey (n = 2935; biannual assessments 1994–2006). We measured walkability with the Walk Score. We modeled body mass index (BMI, defined as weight in kilograms divided by the square of height in meters [kg/m2]) trajectories as a function of Walk Score and sociodemographic and behavioral covariates with growth curve models and fixed-effects regression models. Results. In men, BMI increased annually by an average of 0.13 kg/m2 (95% confidence interval [CI] = 0.11, 0.14) over the 12 years of follow-up. Moving to a high-walkable neighborhood (2 or more Walk Score quartiles higher) decreased BMI trajectories for men by approximately 1 kg/m2 (95% CI = −1.16, −0.17). Moving to a low-walkable neighborhood increased BMI for men by approximately 0.45 kg/m2 (95% CI = 0.01, 0.89). There was no detectable influence of neighborhood walkability on body weight for women. Conclusions. Our study of a large sample of urban Canadians followed for 12 years confirms that neighborhood walkability influences BMI trajectories for men, and may be influential in curtailing male age-related weight gain. PMID:26985612
Is Managed Care Leading to Consolidation in Health-care Markets?
David, Dranove; Simon, Carol J; White, William D
2002-01-01
Objective To determine the extent to which managed care has led to consolidation among hospitals and physicians. Data Sources We use data from the American Hospital Association, American Medical Association, and government censuses. Study Design Two stage least squares regression analysis examines how cross-section variation in managed care penetration affects provider consolidation, while controlling for the endogeneity of managed-care penetration. Specifically, we examine inpatient hospital markets and physician practice size in large metropolitan areas. Data Collection Methods All data are from secondary sources, merged at the level of the Primary Metropolitan Statistical Area. Principal Findings We find that higher levels of local managed-care penetration are associated with substantial increases in consolidation in hospital and physician markets. In the average market (managed-care penetration equaled 34 percent in 1994), managed care was associated with an increase in the Herfindahl of .054 between 1981 and 1994, moving from .096 in 1981 to .154. This is equivalent to moving from 10.4 equal-size hospitals to 6.5 equal-sized hospitals. In the physician market place, we estimate that at the mean, managed care resulted in a 14 percentage point decrease of physicians in solo practice between 1986 and 1995. This implies a decrease in the percentage of doctors in solo practice from 38 percent in 1986 to 24 percent by 1995. PMID:12132596
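The "equal-sized hospitals" figures quoted above follow from the standard numbers-equivalent interpretation of the Herfindahl index; the short check below is my own arithmetic on the reported values, not an additional result from the study.

    N_{eq} = \frac{1}{HHI}, \qquad \frac{1}{0.096} \approx 10.4 \text{ hospitals}, \qquad \frac{1}{0.154} \approx 6.5 \text{ hospitals}.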
Peak Running Intensity of International Rugby: Implications for Training Prescription.
Delaney, Jace A; Thornton, Heidi R; Pryor, John F; Stewart, Andrew M; Dascombe, Ben J; Duthie, Grant M
2017-09-01
To quantify the duration and position-specific peak running intensities of international rugby union for the prescription and monitoring of specific training methodologies. Global positioning systems (GPS) were used to assess the activity profile of 67 elite-level rugby union players from 2 nations across 33 international matches. A moving-average approach was used to identify the peak relative distance (m/min), average acceleration/deceleration (AveAcc; m/s2), and average metabolic power (Pmet) for a range of durations (1-10 min). Differences between positions and durations were described using a magnitude-based network. Peak running intensity increased as the length of the moving average decreased. There were likely small to moderate increases in relative distance and AveAcc for outside backs, halfbacks, and loose forwards compared with the tight 5 group across all moving-average durations (effect size [ES] = 0.27-1.00). Pmet demands were at least likely greater for outside backs and halfbacks than for the tight 5 (ES = 0.86-0.99). Halfbacks demonstrated the greatest relative distance and Pmet outputs but were similar to outside backs and loose forwards in AveAcc demands. The current study has presented a framework to describe the peak running intensities achieved during international rugby competition by position, which are considerably higher than previously reported whole-period averages. These data provide further knowledge of the peak activity profiles of international rugby competition, and this information can be used to assist coaches and practitioners in adequately preparing athletes for the most demanding periods of play.
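A minimal sketch of the moving-average approach for one metric is given below: compute the rolling mean speed over windows of 1-10 minutes and take the maximum as the peak relative distance. The 10 Hz trace and its distribution are invented for illustration; only the windowing logic reflects the method described.

    import numpy as np
    import pandas as pd

    # Hypothetical 10 Hz speed trace (m/s) for one player over an 80-minute match.
    rng = np.random.default_rng(3)
    hz = 10
    speed = np.clip(rng.gamma(shape=2.0, scale=1.0, size=80 * 60 * hz), 0, 9)
    trace = pd.Series(speed)

    # Peak relative distance (m/min) for moving-average windows of 1-10 minutes:
    # rolling mean speed (m/s) over the window, converted to m/min, maximised over the match.
    for minutes in range(1, 11):
        window = minutes * 60 * hz
        peak = trace.rolling(window).mean().max() * 60
        print(f"{minutes:2d} min window: peak {peak:5.1f} m/min")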
ERIC Educational Resources Information Center
Mugrage, Beverly; And Others
Three ridge regression solutions are compared with ordinary least squares regression and with principal components regression using all components. Ridge regression, particularly the Lawless-Wang solution, out-performed ordinary least squares regression and the principal components solution on the criteria of stability of coefficient and closeness…
Heuristic approach to capillary pressures averaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coca, B.P.
1980-10-01
Several methods are available to average capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation on porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems to be theoretically sound because its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.
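For reference, the J-curve approach alluded to here is usually formulated with the Leverett J-function, which normalises capillary pressure by interfacial tension and the square root of the permeability-porosity ratio; the form below is the textbook expression, not a formula quoted from this report.

    J(S_w) = \frac{P_c(S_w)}{\sigma \cos\theta} \sqrt{\frac{k}{\phi}}

where P_c is the capillary pressure at wetting-phase saturation S_w, σ the interfacial tension, θ the contact angle, k the permeability, and φ the porosity.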
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
Control of molt in birds: association with prolactin and gonadal regression in starlings.
Dawson, Alistair
2006-07-01
Despite the importance of molt to birds, very little is known about its environmental or physiological control. In starlings Sturnus vulgaris, and other species, under both natural conditions and experimental regimes, gonadal regression coincides with peak prolactin secretion. The prebasic molt starts at the same time. The aim of this series of experiments was to keep starlings on photo-schedules that would challenge the normally close relationship between gonadal regression and molt, to determine how closely the start of molt is associated with gonadal regression and/or associated changes in prolactin concentrations. In one series of experiments, photosensitive starlings were moved from a short photoperiod, 8 h light per day (8L), to 13 or 18L, and from 13 to 18L or 13 to 8L during testicular maturation. Later, photorefractory birds under 13L that had finished molting were moved to 18L. In another series of experiments, photorefractory starlings were moved from 18 to 8L for 7 weeks, 4 weeks, 2 weeks, 1 week, 3 days, 1 day, or 0 days, before being returned to 18L. There was no consistent relationship between photoperiod, or the increase in photoperiod, and the timing of the start of molt. Nor was there a consistent relationship between gonadal regression and the start of molt; molt could be triggered in the absence of a gonadal cycle. However, there was always an association between the start of molt and prolactin. In all cases where molt was induced, there had been an earlier increase in prolactin. However, the timing of molt was related to the time of peak prolactin, not the magnitude of that peak. This relationship between peak prolactin and the start of molt could explain the normally close relationship between the end of breeding activity and the start of molt.
Balabin, Roman M; Smirnov, Sergey V
2011-04-29
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred of wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic techniques application, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopies, can be greatly improved by an appropriate feature selection choice. Copyright © 2011 Elsevier B.V. All rights reserved.
Historical Data Analysis of Hospital Discharges Related to the Amerithrax Attack in Florida
Burke, Lauralyn K.; Brown, C. Perry; Johnson, Tammie M.
2016-01-01
Interrupted time-series analysis (ITSA) can be used to identify, quantify, and evaluate the magnitude and direction of an event on the basis of time-series data. This study evaluates the impact of the bioterrorist anthrax attacks (“Amerithrax”) on hospital inpatient discharges in the metropolitan statistical area of Palm Beach, Broward, and Miami-Dade counties in the fourth quarter of 2001. Three statistical methods—standardized incidence ratio (SIR), segmented regression, and an autoregressive integrated moving average (ARIMA)—were used to determine whether Amerithrax influenced inpatient utilization. The SIR found a non–statistically significant 2 percent decrease in hospital discharges. Although the segmented regression test found a slight increase in the discharge rate during the fourth quarter, it was also not statistically significant; therefore, it could not be attributed to Amerithrax. Diagnostics performed in preparation for the ARIMA model indicated that the quarterly time series was not serially correlated, which violated one of the assumptions for the use of the ARIMA method, so the impact on the time-series data could not be properly evaluated. Lack of granularity in the time frames hindered the successful evaluation of the impact by the three analytic methods. This study demonstrates that the granularity of the data points is as important as the number of data points in a time series. ITSA is important for the ability to evaluate the impact that any hazard may have on inpatient utilization. Knowledge of hospital utilization patterns during disasters offers healthcare and civic professionals valuable information to plan, respond, mitigate, and evaluate any outcomes stemming from biothreats. PMID:27843420
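As a sketch of the segmented-regression piece of an ITSA, the code below fits a baseline level and trend plus a post-event level shift and trend change; the quarterly counts and the event quarter are fabricated for illustration only.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical quarterly discharge counts; the event falls at quarter 12.
    y = np.array([980, 1010, 995, 1030, 1005, 1040, 1020, 1055,
                  1035, 1060, 1045, 1080, 1070, 1095, 1085, 1110], dtype=float)
    t = np.arange(1, y.size + 1)
    post = (t >= 12).astype(float)                 # indicator for the post-event period
    time_after = np.where(post == 1, t - 11, 0.0)  # quarters elapsed since the event

    # Segmented regression: intercept, pre-event trend, level change, trend change.
    X = sm.add_constant(np.column_stack([t, post, time_after]))
    fit = sm.OLS(y, X).fit()
    print(fit.params)      # [intercept, pre-trend, level shift, trend change]
    print(fit.pvalues)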
Phung, Dung; Huang, Cunrui; Rutherford, Shannon; Chu, Cordia; Wang, Xiaoming; Nguyen, Minh; Nguyen, Nga Huy; Manh, Cuong Do
2015-01-01
The Mekong Delta is highly vulnerable to climate change and is a dengue endemic area in Vietnam. This study aims to examine the association between climate factors and dengue incidence and to identify the best climate prediction model for dengue incidence in Can Tho city, in the Mekong Delta area of Vietnam. We used three different regression models: a standard multiple regression model (SMR), a seasonal autoregressive integrated moving average model (SARIMA), and a Poisson distributed lag model (PDLM) to examine the association between climate factors and dengue incidence over the period 2003-2010. We validated the models by forecasting dengue cases for the period of January-December 2011 using the mean absolute percentage error (MAPE). Receiver operating characteristic curves were used to analyze the sensitivity of the forecast of a dengue outbreak. The results indicate that temperature and relative humidity are significantly associated with changes in dengue incidence consistently across the model methods used, but cumulative rainfall is not. The Poisson distributed lag model (PDLM) gives the best prediction of dengue incidence for 6-, 9-, and 12-month periods and for diagnosis of an outbreak; however, the SARIMA model gives a better prediction of dengue incidence for a 3-month period. The standard multiple regression produced highly imprecise predictions of dengue incidence. We recommend a follow-up study to validate the model on a larger scale in the Mekong Delta region and to analyze the possibility of incorporating a climate-based dengue early warning method into the national dengue surveillance system. Copyright © 2014 Elsevier B.V. All rights reserved.
The role of verbal memory in regressions during reading.
Guérard, Katherine; Saint-Aubin, Jean; Maltais, Marilyne
2013-01-01
During reading, participants generally move their eyes rightward on the line. A number of eye movements, called regressions, are made leftward, to words that have already been fixated. In the present study, we investigated the role of verbal memory during regressions. In Experiment 1, participants were asked to read sentences for comprehension. After reading, they were asked to make a regression to a target word presented auditorily. The results revealed that their regressions were guided by memory, as they differed from those of a control group who did not read the sentences. The role of verbal memory during regressions was then investigated by combining the reading task with articulatory suppression (Exps. 2 and 3). The results showed that articulatory suppression affected the size and the accuracy of the initial regression but had a minimal effect on corrective saccades. This suggests that verbal memory plays an important role in determining the location of the initial saccade during regressions.
Distractor Interference during Smooth Pursuit Eye Movements
ERIC Educational Resources Information Center
Spering, Miriam; Gegenfurtner, Karl R.; Kerzel, Dirk
2006-01-01
When 2 targets for pursuit eye movements move in different directions, the eye velocity follows the vector average (S. G. Lisberger & V. P. Ferrera, 1997). The present study investigates the mechanisms of target selection when observers are instructed to follow a predefined horizontal target and to ignore a moving distractor stimulus. Results show…
Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang
2015-05-01
Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect that might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.
In-use activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks.
Sandhu, Gurdas S; Frey, H Christopher; Bartelt-Hunt, Shannon; Jones, Elizabeth
2015-03-01
The objectives of this study were to quantify real-world activity, fuel use, and emissions for heavy duty diesel roll-off refuse trucks; evaluate the contribution of duty cycles and emissions controls to variability in cycle average fuel use and emission rates; quantify the effect of vehicle weight on fuel use and emission rates; and compare empirical cycle average emission rates with the U.S. Environmental Protection Agency's MOVES emission factor model predictions. Measurements were made at 1 Hz on six trucks of model years 2005 to 2012, using onboard systems. The trucks traveled 870 miles, had an average speed of 16 mph, and collected 165 tons of trash. The average fuel economy was 4.4 mpg, which is approximately twice previously reported values for residential trash collection trucks. On average, 50% of time is spent idling and about 58% of emissions occur in urban areas. Newer trucks with selective catalytic reduction and diesel particulate filter had NOx and PM cycle average emission rates that were 80% lower and 95% lower, respectively, compared to older trucks without. On average, the combined can and trash weight was about 55% of chassis weight. The marginal effect of vehicle weight on fuel use and emissions is highest at low loads and decreases as load increases. Among 36 cycle average rates (6 trucks×6 cycles), MOVES-predicted values and estimates based on real-world data have similar relative trends. MOVES-predicted CO2 emissions are similar to those of the real world, while NOx and PM emissions are, on average, 43% lower and 300% higher, respectively. The real-world data presented here can be used to estimate benefits of replacing old trucks with new trucks. Further, the data can be used to improve emission inventories and model predictions. In-use measurements of the real-world activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks can be used to improve the accuracy of predictive models, such as MOVES, and emissions inventories. Further, the activity data from this study can be used to generate more representative duty cycles for more accurate chassis dynamometer testing. Comparisons of old and new model year diesel trucks are useful in analyzing the effect of fleet turnover. The analysis of effect of haul weight on fuel use can be used by fleet managers to optimize operations to reduce fuel cost.
Long-Term PM2.5 Exposure and Respiratory, Cancer, and Cardiovascular Mortality in Older US Adults.
Pun, Vivian C; Kazemiparkouhi, Fatemeh; Manjourides, Justin; Suh, Helen H
2017-10-15
The impact of chronic exposure to fine particulate matter (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm (PM2.5)) on respiratory disease and lung cancer mortality is poorly understood. In a cohort of 18.9 million Medicare beneficiaries (4.2 million deaths) living across the conterminous United States between 2000 and 2008, we examined the association between chronic PM2.5 exposure and cause-specific mortality. We evaluated confounding through adjustment for neighborhood behavioral covariates and decomposition of PM2.5 into 2 spatiotemporal scales. We found significantly positive associations of 12-month moving average PM2.5 exposures (per 10-μg/m3 increase) with respiratory, chronic obstructive pulmonary disease, and pneumonia mortality, with risk ratios ranging from 1.10 to 1.24. We also found significant PM2.5-associated elevated risks for cardiovascular and lung cancer mortality. Risk ratios generally increased with longer moving averages; for example, an elevation in 60-month moving average PM2.5 exposures was linked to 1.33 times the lung cancer mortality risk (95% confidence interval: 1.24, 1.40), as compared with 1.13 (95% confidence interval: 1.11, 1.15) for 12-month moving average exposures. Observed associations were robust in multivariable models, although evidence of unmeasured confounding remained. In this large cohort of US elderly, we provide important new evidence that long-term PM2.5 exposure is significantly related to increased mortality from respiratory disease, lung cancer, and cardiovascular disease. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
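The moving-average exposure metrics described here are, at their core, trailing rolling means of a monthly concentration series; the sketch below computes 12- and 60-month versions on a synthetic PM2.5 series (values, dates, and seasonality invented for illustration).

    import numpy as np
    import pandas as pd

    # Hypothetical monthly PM2.5 concentrations (ug/m3) at one location, 2000-2008.
    rng = np.random.default_rng(4)
    months = pd.date_range("2000-01-01", "2008-12-01", freq="MS")
    pm25 = pd.Series(10 + 3 * np.sin(np.arange(months.size) * 2 * np.pi / 12)
                     + rng.normal(0, 1, months.size), index=months)

    # Trailing moving-average exposures of the kind used as predictors in the cohort models.
    exposure_12m = pm25.rolling(window=12).mean()
    exposure_60m = pm25.rolling(window=60).mean()
    print(exposure_12m.dropna().tail(3).round(2))
    print(exposure_60m.dropna().tail(3).round(2))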
Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John
2018-03-01
To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated for each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the use of the crown-rump length to establish the referent estimated date of delivery, each method's (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) error at every examination was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations that had errors outside prespecified (±) day ranges was computed by using odds ratios. A total of 16,904 sonograms were identified. The overall and prespecified GA range subset mean errors were significantly smaller for the regression compared to the average (P < .01), and the regression had significantly lower odds of observing examinations outside the specified range of error in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32) than the average method. In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later. © 2017 by the American Institute of Ultrasound in Medicine.
Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed
2018-04-01
The function of a sewage treatment plant is to treat the sewage to acceptable standards before being discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R2) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. The RE results also prove that the SVM model's frequency of errors above 10% or below -10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow and NAR fit well for low and average inflow ranges. The integrated model developed includes the NAR model for low and average influent and the SVM model for peak inflow.
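A small sketch of the initial assessment metrics is given below (RMSE, R2, and a MAPE-style relative error); the observed flows and the two sets of predictions are synthetic stand-ins, so only the metric definitions are meaningful.

    import numpy as np

    def rmse(obs, pred):
        return float(np.sqrt(np.mean((obs - pred) ** 2)))

    def r2(obs, pred):
        ss_res = np.sum((obs - pred) ** 2)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        return float(1 - ss_res / ss_tot)

    def mape(obs, pred):
        return float(np.mean(np.abs((obs - pred) / obs)) * 100)

    # Hypothetical weekly influent flows (m3/d) and predictions from two candidate models.
    rng = np.random.default_rng(5)
    obs = 5000 + 800 * np.sin(np.arange(52) / 4.0) + rng.normal(0, 150, 52)
    pred_a = obs + rng.normal(0, 250, 52)      # stand-in for the better model
    pred_b = obs + rng.normal(0, 400, 52)      # stand-in for the weaker model

    for name, pred in [("model A", pred_a), ("model B", pred_b)]:
        print(name, "RMSE", round(rmse(obs, pred), 1),
              "R2", round(r2(obs, pred), 3), "MAPE", round(mape(obs, pred), 2))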
Chen, Qihong; Long, Rong; Quan, Shuhai
2014-01-01
This paper presents a neural network predictive control strategy to optimize power distribution for a fuel cell/ultracapacitor hybrid power system of a robot. We model the nonlinear power system by employing a time-variant autoregressive moving average model with exogenous input (ARMAX), using a recurrent neural network to represent the complicated coefficients of the ARMAX model. Because the dynamics of the system are viewed as operating-state-dependent, time-varying local linear behavior in this framework, a linear constrained model predictive control algorithm is developed to optimize the power splitting between the fuel cell and ultracapacitor. The proposed algorithm significantly simplifies implementation of the controller and can handle multiple constraints, such as limiting substantial fluctuation of the fuel cell current. Experiment and simulation results demonstrate that the control strategy can optimally split power between the fuel cell and ultracapacitor and limit the rate of change of the fuel cell current, thereby extending the lifetime of the fuel cell. PMID:24707206
Two-stage damage diagnosis based on the distance between ARMA models and pre-whitening filters
NASA Astrophysics Data System (ADS)
Zheng, H.; Mita, A.
2007-10-01
This paper presents a two-stage damage diagnosis strategy for damage detection and localization. Auto-regressive moving-average (ARMA) models are fitted to time series of vibration signals recorded by sensors. In the first stage, a novel damage indicator, which is defined as the distance between ARMA models, is applied to damage detection. This stage can determine the existence of damage in the structure. Such an algorithm uses output only and does not require operator intervention. Therefore it can be embedded in the sensor board of a monitoring network. In the second stage, a pre-whitening filter is used to minimize the cross-correlation of multiple excitations. With this technique, the damage indicator can further identify the damage location and severity when the damage has been detected in the first stage. The proposed methodology is tested using simulation and experimental data. The analysis results clearly illustrate the feasibility of the proposed two-stage damage diagnosis methodology.
An efficient approach to ARMA modeling of biological systems with multiple inputs and delays
NASA Technical Reports Server (NTRS)
Perrott, M. H.; Cohen, R. J.
1996-01-01
This paper presents a new approach to AutoRegressive Moving Average (ARMA or ARX) modeling which automatically seeks the best model order to represent investigated linear, time invariant systems using their input/output data. The algorithm seeks the ARMA parameterization which accounts for variability in the output of the system due to input activity and contains the fewest number of parameters required to do so. The unique characteristics of the proposed system identification algorithm are its simplicity and efficiency in handling systems with delays and multiple inputs. We present results of applying the algorithm to simulated data and experimental biological data. In addition, a technique for assessing the error associated with the impulse responses calculated from estimated ARMA parameterizations is presented. The mapping from ARMA coefficients to impulse response estimates is nonlinear, which complicates any effort to construct confidence bounds for the obtained impulse responses. Here a method for obtaining a linearization of this mapping is derived, which leads to a simple procedure to approximate the confidence bounds.
Freely chosen cadence during a covert manipulation of ambient temperature.
Hartley, Geoffrey L; Cheung, Stephen S
2013-01-01
The present study investigated relationships between changes in power output (PO) to torque (TOR) or freely chosen cadence (FCC) during thermal loading. Twenty participants cycled at a constant rating of perceived exertion while ambient temperature (Ta) was covertly manipulated at 20-min intervals of 20 °C, 35 °C, and 20 °C. The magnitude responses of PO, FCC and TOR were analyzed using repeated-measures ANOVA, while the temporal correlations were analyzed using Auto-Regressive Integrated Moving Averages (ARIMA). Increases in Ta caused significant thermal strain (p < .01), and subsequently, a decrease in PO and TOR magnitude (p < .01), whereas FCC remained unchanged (p = .51). ARIMA indicates that changes in PO were highly correlated to TOR (stationary r2 = .954, p = .04), while FCC was moderately correlated (stationary r2 = .717, p = .01) to PO. In conclusion, changes in PO are caused by a modulation in TOR, whereas FCC remains unchanged and therefore, unaffected by thermal stressors.
NASA Astrophysics Data System (ADS)
Celenk, Mehmet; Song, Yinglei; Ma, Limin; Zhou, Min
2003-05-01
A new algorithm that can be used to automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes the morphological watershed to extract boundaries of cells from their grey-level images. It generates a sequence of Euclidean distances by selecting pixels in the clockwise direction on the boundary of the cell and calculating the Euclidean distances of the selected pixels from the centroid of the cell. A feature vector associated with each cell is then obtained by applying the auto-regressive moving-average (ARMA) model to the generated sequence of Euclidean distances. The clustering measure J3 = trace{Sw^-1 Sm}, involving the within-class (Sw) and mixture (Sm) scatter matrices, is computed for both cell classes to provide an insight into the extent to which different cell classes in the training data are separated. Our test results suggest that the algorithm is highly accurate for the development of an interactive, computer-assisted diagnosis (CAD) tool.
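A sketch of two ingredients mentioned here, the centroid-distance boundary signature and the J3 scatter-matrix separability measure, is given below; the circle boundary and the two Gaussian feature classes are synthetic, and the 4-dimensional features merely stand in for the ARMA coefficients the paper would use.

    import numpy as np

    def boundary_signature(boundary_xy):
        # Clockwise boundary pixels -> sequence of Euclidean distances from the centroid.
        centroid = boundary_xy.mean(axis=0)
        return np.linalg.norm(boundary_xy - centroid, axis=1)

    def j3(features, labels):
        # Scatter-matrix separability measure J3 = trace(Sw^-1 Sm).
        classes = np.unique(labels)
        sw = sum(np.cov(features[labels == c].T, bias=True) * np.mean(labels == c)
                 for c in classes)                       # within-class scatter
        sm = np.cov(features.T, bias=True)               # mixture scatter about the overall mean
        return float(np.trace(np.linalg.inv(sw) @ sm))

    # Toy boundary: a circle of radius 3 centred at (10, 12).
    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    circle = np.column_stack([10 + 3 * np.cos(theta), 12 + 3 * np.sin(theta)])
    print("mean signature radius:", round(float(boundary_signature(circle).mean()), 2))

    # Two synthetic cell classes with 4-dimensional feature vectors.
    rng = np.random.default_rng(6)
    X = np.vstack([rng.normal(0.0, 1.0, (30, 4)), rng.normal(1.5, 1.0, (30, 4))])
    y = np.array([0] * 30 + [1] * 30)
    print("J3 =", round(j3(X, y), 2))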
Short-term forecasting of emergency inpatient flow.
Abraham, Gad; Byrnes, Graham B; Bain, Christopher A
2009-05-01
Hospital managers have to manage resources effectively, while maintaining a high quality of care. For hospitals where admissions from the emergency department to the wards represent a large proportion of admissions, the ability to forecast these admissions and the resultant ward occupancy is especially useful for resource planning purposes. Since emergency admissions often compete with planned elective admissions, modeling emergency demand may result in improved elective planning as well. We compare several models for forecasting daily emergency inpatient admissions and occupancy. The models are applied to three years of daily data. By measuring their mean square error in a cross-validation framework, we find that emergency admissions are largely random, and hence, unpredictable, whereas emergency occupancy can be forecasted using a model combining regression and autoregressive integrated moving average (ARIMA) model, or a seasonal ARIMA model, for up to one week ahead. Faced with variable admissions and occupancy, hospitals must prepare a reserve capacity of beds and staff. Our approach allows estimation of the required reserve capacity.
Morimoto, Tissiani; Costa, Juvenal Soares Dias da
2017-03-01
The goal of this study was to analyze the trend over time of hospitalizations due to conditions susceptible to primary healthcare (HCSPC), and how it relates to healthcare spending and Family Health Strategy (FHS) coverage in the city of São Leopoldo, Rio Grande do Sul State, Brazil, between 2003 and 2012. This is an ecological, time-trend study. We used secondary data available in the Unified Healthcare System Hospital Data System, the Primary Care Department and the Public Health Budget Data System. The analysis compared HCSPC using three-year moving averages and Poisson or negative binomial regressions. The decreases in HCSPC indicators and in primary care spending over the period analyzed were not statistically significant. Healthcare spending, per-capita spending and FHS coverage increased significantly, but we found no correlation with HCSPC. The results show that, despite increases in the funds invested and the population covered by the FHS, they are still insufficient to deliver the level of care the population requires.
Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data
Young, Alistair A.; Li, Xiaosong
2014-01-01
Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382
Forecast of severe fever with thrombocytopenia syndrome incidence with meteorological factors.
Sun, Ji-Min; Lu, Liang; Liu, Ke-Ke; Yang, Jun; Wu, Hai-Xia; Liu, Qi-Yong
2018-06-01
Severe fever with thrombocytopenia syndrome (SFTS) is emerging, and some studies have reported that SFTS incidence is associated with meteorological factors, but no SFTS forecast models have been reported to date. In this study, we constructed and compared three forecast models using an autoregressive integrated moving average (ARIMA) model, a negative binomial regression model (NBM), and a quasi-Poisson generalized additive model (GAM). The dataset from 2011 to 2015 was used for model construction and the dataset from 2016 was used for external validity assessment. All three models fitted the SFTS cases reasonably well during the training and forecast processes, while the NBM model forecast better than the other two models. Moreover, we demonstrated that temperature and relative humidity played key roles in explaining the temporal dynamics of SFTS occurrence. Our study contributes to a better understanding of SFTS dynamics and provides predictive tools for the control and prevention of SFTS. Copyright © 2018 Elsevier B.V. All rights reserved.
Dog days of summer: Influences on decision of wolves to move pups
Ausband, David E.; Mitchell, Michael S.; Bassing, Sarah B.; Nordhagen, Matthew; Smith, Douglas W.; Stahler, Daniel R.
2016-01-01
For animals that forage widely, protecting young from predation can span relatively long time periods due to the inability of young to travel with and be protected by their parents. Moving relatively immobile young to improve access to important resources, limit detection of concentrated scent by predators, and decrease infestations by ectoparasites can be advantageous. Moving young, however, can also expose them to increased mortality risks (e.g., accidents, getting lost, predation). For group-living animals that live in variable environments and care for young over extended time periods, the influence of biotic factors (e.g., group size, predation risk) and abiotic factors (e.g., temperature and precipitation) on the decision to move young is unknown. We used data from 25 satellite-collared wolves ( Canis lupus ) in Idaho, Montana, and Yellowstone National Park to evaluate how these factors could influence the decision to move pups during the pup-rearing season. We hypothesized that litter size, the number of adults in a group, and perceived predation risk would positively affect the number of times gray wolves moved pups. We further hypothesized that wolves would move their pups more often when it was hot and dry to ensure sufficient access to water. Contrary to our hypothesis, monthly temperature above the 30-year average was negatively related to the number of times wolves moved their pups. Monthly precipitation above the 30-year average, however, was positively related to the amount of time wolves spent at pup-rearing sites after leaving the natal den. We found little relationship between risk of predation (by grizzly bears, humans, or conspecifics) or group and litter sizes and number of times wolves moved their pups. Our findings suggest that abiotic factors most strongly influence the decision of wolves to move pups, although responses to unpredictable biotic events (e.g., a predator encountering pups) cannot be ruled out.
Abou-Senna, Hatem; Radwan, Essam; Westerlund, Kurt; Cooper, C David
2013-07-01
The Intergovernmental Panel on Climate Change (IPCC) estimates that baseline global GHG emissions may increase 25-90% from 2000 to 2030, with carbon dioxide (CO2) emissions growing 40-110% over the same period. On-road vehicles are a major source of CO2 emissions in all the developed countries, and in many of the developing countries in the world. Similarly, several criteria air pollutants are associated with transportation, for example, carbon monoxide (CO), nitrogen oxides (NO(x)), and particulate matter (PM). Therefore, accurately quantifying transportation-related emissions from vehicles is essential. The new U.S. Environmental Protection Agency (EPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to combine a microscopic traffic simulation model (such as VISSIM) with MOVES to obtain accurate results. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited-access highway in Orlando, FL. First (at the most basic level), emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NO(x), PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach. Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy. Integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The proposed emission rate estimation process also can be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.
Work-related accidents among the Iranian population: a time series analysis, 2000–2011
Karimlou, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood
2015-01-01
Background Work-related accidents result in human suffering and economic losses and are considered as a major health problem worldwide, especially in the economically developing world. Objectives To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. Methods In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box–Jenkins modeling to develop a time series model of the total number of accidents. Results There was an average of 1476 accidents per month (1476.05±458.77, mean±SD). The final ARIMA (p,d,q) (P,D,Q)s model for fitting to data was: ARIMA(1,1,1)×(0,1,1)12, consisting of the first order of the autoregressive, moving average and seasonal moving average parameters, with a mean absolute percentage error (MAPE) of 20.942. Conclusions The final model showed that time series analysis with ARIMA models was useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 explained the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection. PMID:26119774
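The sketch below fits the reported ARIMA(1,1,1)×(0,1,1)12 structure with statsmodels' SARIMAX on a synthetic monthly accident series and scores a 12-month hold-out with MAPE; the counts, the seasonality amplitude, and the train/test split are invented, so the output is illustrative only.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Hypothetical monthly accident counts, 2000-2010 for fitting, 2011 held out.
    rng = np.random.default_rng(7)
    idx = pd.date_range("2000-01-01", "2011-12-01", freq="MS")
    counts = pd.Series(1476 + 200 * np.sin(np.arange(idx.size) * 2 * np.pi / 12)
                       + rng.normal(0, 80, idx.size), index=idx)
    train, test = counts[:"2010-12-01"], counts["2011-01-01":]

    # Seasonal ARIMA(1,1,1)x(0,1,1)12, matching the order quoted in the abstract.
    fit = SARIMAX(train, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
    forecast = fit.forecast(steps=12)
    mape = float(np.mean(np.abs((test.values - forecast.values) / test.values)) * 100)
    print(f"12-month forecast MAPE: {mape:.2f}%")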
Work-related accidents among the Iranian population: a time series analysis, 2000-2011.
Karimlou, Masoud; Salehi, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood
2015-01-01
Work-related accidents result in human suffering and economic losses and are considered as a major health problem worldwide, especially in the economically developing world. To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box-Jenkins modeling to develop a time series model of the total number of accidents. There was an average of 1476 accidents per month (1476.05±458.77, mean±SD). The final ARIMA (p,d,q) (P,D,Q)s model for fitting to data was: ARIMA(1,1,1)×(0,1,1)12, consisting of the first order of the autoregressive, moving average and seasonal moving average parameters, with a mean absolute percentage error (MAPE) of 20.942. The final model showed that time series analysis with ARIMA models was useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 explained the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection.
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
NASA Astrophysics Data System (ADS)
Levine, Zachary H.; Pintar, Adam L.
2015-11-01
A simple algorithm for averaging a stochastic sequence of 1D arrays in a moving, expanding window is provided. The samples are grouped in bins which increase exponentially in size so that a constant fraction of the samples is retained at any point in the sequence. The algorithm is shown to have particular relevance for a class of Monte Carlo sampling problems which includes one characteristic of iterative reconstruction in computed tomography. The code is available in the CPC program library in both Fortran 95 and C and is also available in R through CRAN.
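The binning idea described above can be illustrated with the sketch below: bins roughly double in size as they age, and the oldest bins are dropped once the remainder still covers a fixed fraction of everything seen, so memory stays O(log N). The class name, the keep_fraction parameter, and the merging rule are assumptions for illustration; the published Fortran 95/C/R code should be consulted for the actual algorithm.

    import numpy as np

    class MovingExpandingAverage:
        """Average a stream of 1D arrays over a moving, expanding window
        using O(log N) memory (sketch only)."""

        def __init__(self, keep_fraction=0.5):
            self.keep_fraction = keep_fraction
            self.bins = []      # (sum_array, count), oldest first
            self.n_seen = 0

        def add(self, arr):
            self.n_seen += 1
            self.bins.append((np.asarray(arr, dtype=float).copy(), 1))
            # allow at most two bins of any size; when a third appears,
            # merge the two oldest of that size, so bin sizes double with age
            merged = True
            while merged:
                merged = False
                for i in range(len(self.bins) - 2):
                    if self.bins[i][1] == self.bins[i + 1][1] == self.bins[i + 2][1]:
                        s1, c1 = self.bins[i]
                        s2, c2 = self.bins[i + 1]
                        self.bins[i:i + 2] = [(s1 + s2, c1 + c2)]
                        merged = True
                        break
            # drop the oldest bins while the remainder still covers the
            # requested fraction of all samples seen so far
            retained = sum(c for _, c in self.bins)
            while (len(self.bins) > 1 and
                   retained - self.bins[0][1] >= self.keep_fraction * self.n_seen):
                retained -= self.bins[0][1]
                self.bins.pop(0)

        def average(self):
            total = sum(c for _, c in self.bins)
            return sum(s for s, _ in self.bins) / total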
Detrending moving average algorithm for multifractals
NASA Astrophysics Data System (ADS)
Gu, Gao-Feng; Zhou, Wei-Xing
2010-07-01
The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. It is found that the backward MFDMA algorithm also outperforms the multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to analyzing the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
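As a rough illustration of the one-dimensional backward (θ=0) case, the sketch below computes a q-th order fluctuation function of the DMA family for a single series and estimates h(q) from a log-log slope; the window sizes, q values, and slope fit are assumptions for illustration, not the authors' implementation, and q=0 is excluded for simplicity. The multifractal exponents then follow as τ(q) = q·h(q) − 1.

    import numpy as np

    def mfdma_backward(x, scales, qs=(-2, 2)):
        """One-dimensional backward MFDMA sketch: returns h(q) estimated
        from the slope of log Fq(n) versus log n (q must be nonzero)."""
        y = np.cumsum(np.asarray(x, dtype=float))   # profile of the series
        hq = {}
        for q in qs:
            logF, logn = [], []
            for n in scales:
                # backward moving average: mean of the most recent n points
                ma = np.convolve(y, np.ones(n) / n, mode="valid")
                resid = y[n - 1:] - ma
                # split residuals into disjoint segments of length n
                n_seg = len(resid) // n
                seg = resid[:n_seg * n].reshape(n_seg, n)
                F2 = np.mean(seg ** 2, axis=1)
                Fq = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)
                logF.append(np.log(Fq))
                logn.append(np.log(n))
            hq[q] = np.polyfit(logn, logF, 1)[0]
        return hq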
TERMA Framework for Biomedical Signal Analysis: An Economic-Inspired Approach
Elgendi, Mohamed
2016-01-01
Biomedical signals contain features that represent physiological events, and each of these events has peaks. The analysis of biomedical signals for monitoring or diagnosing diseases requires the detection of these peaks, making event detection a crucial step in biomedical signal processing. Many researchers have difficulty detecting these peaks to investigate, interpret and analyze their corresponding events. To date, there is no generic framework that captures these events in a robust, efficient and consistent manner. A new method referred to for the first time as two event-related moving averages (“TERMA”) involves event-related moving averages and detects events in biomedical signals. The TERMA framework is flexible and universal and consists of six independent LEGO building bricks to achieve high accuracy detection of biomedical events. Results recommend that the window sizes for the two moving averages (W1 and W2) have to follow the inequality (8×W1)≥W2≥(2×W1). Moreover, TERMA is a simple yet efficient event detector that is suitable for wearable devices, point-of-care devices, fitness trackers and smart watches, compared to more complex machine learning solutions. PMID:27827852
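A schematic of the two event-related moving averages is sketched below; the default window lengths respect the (8×W1)≥W2≥(2×W1) recommendation quoted above, but the offset term beta, the block-rejection rule, the peak-picking step, and all numeric defaults are placeholders rather than the published TERMA parameters.

    import numpy as np

    def terma_detect(sig, fs, w1_ms=50, w2_ms=300, beta=0.08):
        """Detect candidate events: a short moving average (W1, ~event
        duration) is compared with a long moving average (W2, ~cycle
        duration) plus an offset; blocks of interest at least W1 wide
        are kept and their peak indices returned (illustrative sketch)."""
        w1 = max(1, int(fs * w1_ms / 1000))
        w2 = max(w1, int(fs * w2_ms / 1000))
        ma_event = np.convolve(sig, np.ones(w1) / w1, mode="same")
        ma_cycle = np.convolve(sig, np.ones(w2) / w2, mode="same")
        threshold = ma_cycle + beta * np.mean(sig)
        block = ma_event > threshold
        events, start = [], None
        for i, b in enumerate(np.append(block, False)):
            if b and start is None:
                start = i
            elif not b and start is not None:
                if i - start >= w1:          # reject blocks narrower than W1
                    events.append(start + int(np.argmax(sig[start:i])))
                start = None
        return events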
Wu, Shaowei; Deng, Furong; Niu, Jie; Huang, Qinsheng; Liu, Youcheng; Guo, Xinbiao
2010-01-01
Background Heart rate variability (HRV), a marker of cardiac autonomic function, has been associated with particulate matter (PM) air pollution, especially in older patients and those with cardiovascular diseases. However, the effect of PM exposure on cardiac autonomic function in young, healthy adults has received less attention. Objectives We evaluated the relationship between exposure to traffic-related PM with an aerodynamic diameter ≤ 2.5 μm (PM2.5) and HRV in a highly exposed panel of taxi drivers. Methods Continuous measurements of personal exposure to PM2.5 and ambulatory electrocardiogram monitoring were conducted on 11 young healthy taxi drivers for a 12-hr work shift during their work time (0900–2100 hr) before, during, and after the Beijing 2008 Olympic Games. Mixed-effects regression models were used to estimate associations between PM2.5 exposure and percent changes in 5-min HRV indices after combining data from the three time periods and controlling for potentially confounding variables. Results Personal exposures of taxi drivers to PM2.5 changed markedly across the three time periods. The standard deviation of normal-to-normal (SDNN) intervals decreased by 2.2% [95% confidence interval (CI), −3.8% to −0.6%] with an interquartile range (IQR; 69.5 μg/m3) increase in the 30-min PM2.5 moving average, whereas the low-frequency and high-frequency powers decreased by 4.2% (95% CI, −9.0% to 0.8%) and 6.2% (95% CI, −10.7% to −1.5%), respectively, in association with an IQR increase in the 2-hr PM2.5 moving average. Conclusions Marked changes in traffic-related PM2.5 exposure were associated with altered cardiac autonomic function in young healthy adults. PMID:20056565
Exponentially Weighted Moving Average Change Detection Around the Country (and the World)
NASA Astrophysics Data System (ADS)
Brooks, E.; Wynne, R. H.; Thomas, V. A.; Blinn, C. E.; Coulston, J.
2014-12-01
With continuous, freely available moderate-resolution imagery of the Earth's surface, and with the promise of more imagery to come, change detection based on continuous process models continues to be a major area of research. One such method, exponentially weighted moving average change detection (EWMACD), is based on a mixture of harmonic regression (HR) and statistical quality control, a branch of statistics commonly used to detect aberrations in industrial and medical processes. By using HR to approximate per-pixel seasonal curves, the resulting residuals characterize information about the pixels which stands outside of the periodic structure imposed by HR. For stable pixels, these residuals behave as might be expected, but in the presence of changes (growth, stress, removal), the residuals clearly show these changes when they are used as inputs into an EWMA chart. In prior work in Alabama, USA, EWMACD yielded an overall accuracy of 85% on a random sample of known thinned stands, in some cases detecting thinnings as sparse as 25% removal. It was also shown to correctly identify the timing of the thinning activity, typically within a single image date of the change. The net result of the algorithm was to produce date-by-date maps of afforestation and deforestation on a variable scale of severity. In other research, EWMACD has also been applied to detect land use and land cover changes in central Java, Indonesia, despite the heavy incidence of clouds and a monsoonal climate. Preliminary results show that EWMACD accurately identifies land use conversion (agricultural to residential, for example) and also identifies neighborhoods where the building density has increased, removing neighborhood vegetation. In both cases, initial results indicate the potential utility of EWMACD to detect both gross and subtle ecosystem disturbance, but further testing across a range of ecosystems and disturbances is clearly warranted.
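A bare-bones version of the harmonic-regression-plus-EWMA idea is sketched below for a single pixel's time series; the smoothing constant, control-limit multiplier, first-order harmonic, and length of the training segment are all assumptions, not the EWMACD settings used in the studies described above.

    import numpy as np

    def ewmacd_flags(t, y, period=365.0, lam=0.3, L=3.0, n_train=30):
        """Fit a first-order harmonic curve to an initial training segment,
        feed the residuals of the full series into an EWMA chart, and flag
        observations whose EWMA statistic leaves the control limits."""
        t = np.asarray(t, dtype=float)
        y = np.asarray(y, dtype=float)
        X = np.column_stack([np.ones_like(t),
                             np.sin(2 * np.pi * t / period),
                             np.cos(2 * np.pi * t / period)])
        beta, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
        resid = y - X @ beta
        sigma = np.std(resid[:n_train], ddof=1)
        z, flags = 0.0, np.zeros(len(y), dtype=bool)
        for i, r in enumerate(resid):
            z = lam * r + (1 - lam) * z
            # time-varying EWMA control limit with smoothing constant lam
            limit = L * sigma * np.sqrt(lam / (2 - lam) *
                                        (1 - (1 - lam) ** (2 * (i + 1))))
            flags[i] = abs(z) > limit
        return flags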
Mordukhovich, Irina; Lepeule, Johanna; Coull, Brent A; Sparrow, David; Vokonas, Pantel; Schwartz, Joel
2015-02-01
Black carbon (BC) is a pro-oxidant, traffic-related pollutant linked with lung function decline. We evaluated the influence of genetic variation in the oxidative stress pathway on the association between long-term BC exposure and lung function decline. Lung function parameters (FVC and FEV1) were measured during one or more study visits between 1995 and 2011 (n=651 participants) among an elderly cohort: the Normative Aging Study. Residential BC exposure levels were estimated using a spatiotemporal land use regression model. We evaluated whether oxidative stress variants, combined into a genetic score, modify the association between 1-year and 5-year moving averages of BC exposure and lung function levels and rates of decline, using linear mixed models. We report stronger associations between long-term BC exposure and increased rate of lung function decline, but not baseline lung function level, among participants with higher oxidative stress allelic risk profiles compared with participants with lower risk profiles. Associations were strongest when evaluating 5-year moving averages of BC exposure. A 0.5 µg/m(3) increase in 5-year BC exposure was associated with a 0.1% yearly increase in FVC (95% CI -0.5 to 0.7) among participants with low genetic risk scores and a 1.3% yearly decrease (95% CI -1.8 to -0.8) among those with high scores (p-interaction=0.0003). Our results suggest that elderly men with high oxidative stress genetic scores may be more susceptible to the effects of BC on lung function decline. The results, if confirmed, should inform air-quality recommendations in light of a potentially susceptible subgroup.
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving target detection is gaining popularity in civilian and military applications. On some motion-monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. Moving targets occupy only a small fraction of the pixels in HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms makes it difficult to run them at full frame resolution. Hence, we propose a heterogeneous CPU-GPU moving target detection algorithm for UAV video. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for the method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for real-time use.
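The frame-differencing step mentioned above can be illustrated as follows; the sketch assumes the previous frame has already been background-registered onto the current one (the registration and the CPU-GPU partitioning are not shown), and the threshold and minimum-area values are arbitrary placeholders.

    import numpy as np
    from scipy import ndimage

    def detect_small_moving_targets(prev_registered, curr, diff_thresh=25, min_area=9):
        """Frame-difference detection of small moving targets on two
        already-registered grayscale frames; returns bounding-box slices
        of blobs exceeding the difference threshold and minimum area."""
        diff = np.abs(curr.astype(np.int16) - prev_registered.astype(np.int16))
        mask = diff > diff_thresh
        labels, n_blobs = ndimage.label(mask)          # connected components
        boxes = []
        for sl in ndimage.find_objects(labels):
            region = labels[sl] > 0
            if region.sum() >= min_area:                # discard tiny blobs (noise)
                boxes.append(sl)
        return boxes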
Industrial Based Migration in India. A Case Study of Dumdum "Dunlop Industrial Zone"
NASA Astrophysics Data System (ADS)
Das, Biplab; Bandyopadhyay, Aditya; Sen, Jayashree
2012-10-01
Migration is a very important part of our present society. Millions of people moved during the industrial revolution. Some simply moved from a village to a town in the hope of finding work whilst others moved from one country to another in search of a better way of life. The main reason for moving home during the 19th century was to find work. On one hand this involved migration from the countryside to the growing industrial cities; on the other, it involved movement from one country to another. Migration was not just people moving out of the country; it also involved a lot of people moving into Britain. In the 1840s Ireland suffered a terrible famine. Faced with the massive cost of feeding the starving population, many local landowners paid for labourers to emigrate. There was a shift away from agriculturally based rural dwelling towards urban habitation to meet the mass demand for labour that new industry required. Great regional differences arose in population levels and in the structure of their demography. This was due to rates of migration, emigration, and the social changes that were drastically affecting factors such as marriage, birth and death rates. These social changes taking place as a result of capitalism had far-ranging effects, such as lowering the average age of marriage and increasing the size of the average family. There is no serious disagreement as to the extent of the population changes that occurred, but one key question that always arouses debate is whether an expanding population resulted in economic growth or vice versa, i.e. was industrialization a catalyst for population growth? A clear answer is difficult to decipher as the two variables are so closely and fundamentally interlinked, but it seems that both factors provided impetus for each other's take-off. If anything, population and economic growth were complementary towards one another rather than simply being causative factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, M; Rockhill, J; Phillips, M
Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: Standard prescription for GBM was assumed to deliver 46Gy in 23 fractions to GTV1+2cm margin and an additional 14Gy in 7 fractions to GTV2+2cm margin. We simulated the tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029(slow-move), 0.079(average-move), and 0.13(fast-move) mm/day for GTV2 with a radius of 1 and 2cm. For each tumor, the margin around GTV1 and GTV2 was varied from 0–6 cm and 1–3 cm, respectively. Total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: Optimal prescription for the slow-move tumors was to use 3.0(small)-3.5(large) cm margins to GTV1, and a 1.5cm margin to GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0cm margin for GTV1, suggesting that whole-brain therapy is optimal, and then 1.5cm (average-move) and 1.5–3.0cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescriptions reduced the tumor SF to 0.001–0.465% of that resulting from the standard prescription, and increased tumor EUD by 25.3–49.3% and the estimated survival time by 7.6–22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on the individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing EUD to normal brain.
Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng
This paper proposes a new hybrid method for super short-term solar power prediction. Solar output power usually has a complex, nonstationary, and nonlinear characteristic due to intermittent and time varying behavior of solar radiance. In addition, solar power dynamics are fast and essentially inertia-free. An accurate super short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), while the RNN architecture is based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of richer-behaved forming series for prediction. The ARMA model is employed as a linear predictor while NARX is used as a nonlinear pattern recognition tool to estimate and compensate the error of the wavelet-ARMA prediction. The proposed method is applied to the data captured from UCLA solar PV panels and the results are compared with some of the common and most recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in the prediction precision.
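A stripped-down sketch of the wavelet-plus-ARMA portion of such a hybrid is given below (the NARX error-correction stage is omitted); it relies on the third-party PyWavelets and statsmodels packages, and the wavelet, decomposition level, and ARMA orders are illustrative assumptions rather than the values used in the paper.

    import numpy as np
    import pywt
    from statsmodels.tsa.arima.model import ARIMA

    def wavelet_arma_forecast(y, wavelet="db4", level=3, order=(2, 0, 1)):
        """One-step-ahead forecast: decompose the series with the DWT,
        forecast each reconstructed sub-series with an ARMA model, and
        sum the forecasts (sketch only)."""
        coeffs = pywt.wavedec(y, wavelet, level=level)
        # rebuild one sub-series per coefficient level; by linearity the
        # sub-series sum back to the original signal
        subs = []
        for i in range(len(coeffs)):
            c = [np.zeros_like(a) for a in coeffs]
            c[i] = coeffs[i]
            subs.append(pywt.waverec(c, wavelet)[: len(y)])
        forecast = 0.0
        for s in subs:
            res = ARIMA(s, order=order).fit()
            forecast += res.forecast(1)[0]
        return forecast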
The Micromechanics of the Moving Contact Line
NASA Technical Reports Server (NTRS)
Han, Minsub; Lichter, Seth; Lin, Chih-Yu; Perng, Yeong-Yan
1996-01-01
The proposed research is divided into three components concerned with molecular structure, molecular orientation, and continuum averages of discrete systems. In the experimental program, we propose exploring how changes in interfacial molecular structure generate contact line motion. Rather than rely on the electrostatic and electrokinetic fields arising from the molecules themselves, we augment their interactions by an imposed field at the solid/liquid interface. By controlling the field, we can manipulate the molecular structure at the solid/liquid interface. In response to controlled changes in molecular structure, we observe the resultant contact line motion. In the analytical portion of the proposed research we seek to formulate a system of equations governing fluid motion which accounts for the orientation of fluid molecules. In preliminary work, we have focused on describing how molecular orientation affects the forces generated at the moving contact line. Ideally, as assumed above, the discrete behavior of molecules can be averaged into a continuum theory. In the numerical portion of the proposed research, we inquire whether the contact line region is, in fact, large enough to possess a well-defined average. Additionally, we ask what types of behavior distinguish discrete systems from continuum systems. Might the smallness of the contact line region, in itself, lead to behavior different from that in the bulk? Taken together, our proposed research seeks to identify and accurately account for some of the molecular dynamics of the moving contact line, and attempts to formulate a description from which one can compute the forces at the moving contact line.
Murray, Louis C.
2012-01-01
A study to examine the influences of climatic and anthropogenic stressors on groundwater levels, lake stages, and surface-water discharge at selected sites in northern Volusia County, Florida, was conducted in 2009 by the U.S. Geological Survey. Water-level data collected at 20 monitoring sites (17 groundwater and 3 lake sites) in the vicinity of a wetland area were analyzed with multiple linear regression to examine the relative influences of precipitation and groundwater withdrawals on changes in groundwater levels and lake stage. Analyses were conducted across varying periods of record between 1995 and 2010 and included the effects of groundwater withdrawals aggregated from municipal water-supply wells located within 12 miles of the project sites. Surface-water discharge data at the U.S. Geological Survey Tiger Bay canal site were analyzed for changes in flow between 1978 and 2001. As expected, water-level changes in monitoring wells located closer to areas of concentrated groundwater withdrawals were more highly correlated with withdrawals than were water-level changes measured in wells further removed from municipal well fields. Similarly, water-level changes in wells tapping the Upper Floridan aquifer, the source of municipal supply, were more highly correlated with groundwater withdrawals than were water-level changes in wells tapping the shallower surficial aquifer system. Water-level changes predicted by the regression models over precipitation-averaged periods of record were underestimated for observations having large positive monthly changes (generally greater than 1.0 foot). Such observations are associated with high precipitation and were identified as points in the regression analyses that produced large standardized residuals and/or observations of high influence. Thus, regression models produced by multiple linear regression analyses may have better predictive capability in wetland environments when applied to periods of average or below average precipitation conditions than during wetter than average conditions. For precipitation-averaged hydrologic conditions, water-level changes in the surficial aquifer system were statistically correlated solely with precipitation or were more highly correlated with precipitation than with groundwater withdrawals. Changes in Upper Floridan aquifer water levels and in water-surface stage (stage) at Indian and Scoggin Lakes tended to be highly correlated with both precipitation and withdrawals. The greater influence of withdrawals on stage changes, relative to changes in nearby surficial aquifer system water levels, indicates that these karstic lakes may be better connected hydraulically with the underlying Upper Floridan aquifer than is the surficial aquifer system at the other monitoring sites. At most sites, and for both aquifers, the 2-month moving average of precipitation or groundwater withdrawals included as an explanatory variable in the regression models indicates that water-level changes are not only influenced by stressor conditions across the current month, but also by those of the previous month. The relations between changes in water levels, precipitation, and groundwater withdrawals varied seasonally and in response to a period of drought. Water-level changes tended to be most highly correlated with withdrawals during the spring, when relatively large increases contributed to water-level declines, and during the fall when reduced withdrawal rates contributed to water-level recovery. 
Water-level changes tended to be most highly (or solely) correlated with precipitation in the winter, when withdrawals are minimal, and in the summer when precipitation is greatest. Water-level changes measured during the drought of October 2005 to June 2008 tended to be more highly correlated with groundwater withdrawals at Upper Floridan aquifer sites than at surficial aquifer system sites, results that were similar to those for precipitation-averaged conditions. Also, changes in stage at Indian and Scoggin Lakes were highly correlated with precipitation and groundwater withdrawals during the drought. Groundwater-withdrawal rates during the drought were, on average, greater than those for precipitation-averaged conditions. Accounting only for withdrawals aggregated from pumping wells located within varying radial distances of less than 12 miles of each site produced essentially the same relation between water-level changes and groundwater withdrawals as that determined for withdrawals aggregated within 12 miles of the site. Similarly, increases in withdrawals aggregated over distances of 1 to 12 miles of the sites had little effect on adjusted R-squared values. Analyses of streamflow measurements collected between 1978 and 2001 at the U.S. Geological Survey Tiger Bay canal site indicate that significant changes in base-flow conditions occurred during that period. Hypothesis and trend testing, together with analyses of flow duration, the number of zero-flow days, and double-mass curves, indicate that, after 1988, when a municipal well field began production, base flow was statistically lower than in the period before 1988. This decrease in base flow could not be explained by variations in precipitation between these two periods.
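A minimal sketch of the kind of regression described above, with monthly water-level change regressed on 2-month moving averages of precipitation and aggregated withdrawals, is shown below; the column names and units are illustrative placeholders, and the study's actual variable selection and diagnostics are not reproduced.

    import pandas as pd
    import statsmodels.api as sm

    def fit_water_level_model(df):
        """df: monthly DataFrame with columns 'dh' (water-level change, feet),
        'precip' (inches) and 'withdrawal' (Mgal/d); names are assumptions."""
        X = pd.DataFrame({
            # 2-month moving averages: current plus previous month
            "precip_2mo": df["precip"].rolling(2).mean(),
            "withdrawal_2mo": df["withdrawal"].rolling(2).mean(),
        })
        X = sm.add_constant(X)
        return sm.OLS(df["dh"], X, missing="drop").fit()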
Kumar, M Kishore; Sreekanth, V; Salmon, Maëlle; Tonne, Cathryn; Marshall, Julian D
2018-08-01
This study uses spatiotemporal patterns in ambient concentrations to infer the contribution of regional versus local sources. We collected 12 months of monitoring data for outdoor fine particulate matter (PM2.5) in rural southern India. Rural India includes more than one-tenth of the global population and annually accounts for around half a million air pollution deaths, yet little is known about the relative contribution of local sources to outdoor air pollution. We measured 1-min averaged outdoor PM2.5 concentrations during June 2015-May 2016 in three villages, which varied in population size, socioeconomic status, and type and usage of domestic fuel. The daily geometric-mean PM2.5 concentration was ∼30 μg/m3 (geometric standard deviation: ∼1.5). Concentrations exceeded the Indian National Ambient Air Quality standards (60 μg/m3) during 2-5% of observation days. Average concentrations were ∼25 μg/m3 higher during winter than during monsoon and ∼8 μg/m3 higher during morning hours than the diurnal average. A moving average subtraction method based on 1-min average PM2.5 concentrations indicated that local contributions (e.g., nearby biomass combustion, brick kilns) were greater in the most populated village, and that overall the majority of ambient PM2.5 in our study was regional, implying that local air pollution control strategies alone may have limited influence on local ambient concentrations. We compared the relatively new moving average subtraction method against a more established approach. Both methods broadly agree on the relative contribution of local sources across the three sites. The moving average subtraction method has broad applicability across locations.
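The idea of separating a short-lived local excess from a slowly varying regional baseline can be sketched as below; the 60-minute window, the use of a centred rolling mean, and the clipping of negative excesses are assumptions for illustration, not the specific moving average subtraction procedure of the study.

    import pandas as pd

    def local_regional_split(pm25_1min, window=60):
        """pm25_1min: pandas Series of 1-min PM2.5 values. Splits the series
        into a slowly varying 'regional' baseline and a short-lived 'local'
        excess by subtracting a centred moving average (window in minutes)."""
        baseline = pm25_1min.rolling(window, center=True, min_periods=1).mean()
        local = (pm25_1min - baseline).clip(lower=0)
        regional = pm25_1min - local
        return regional, local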
Vrijheid, M; Mann, S; Vecchia, P; Wiart, J; Taki, M; Ardoino, L; Armstrong, B K; Auvinen, A; Bédard, D; Berg-Beckhoff, G; Brown, J; Chetrit, A; Collatz-Christensen, H; Combalot, E; Cook, A; Deltour, I; Feychting, M; Giles, G G; Hepworth, S J; Hours, M; Iavarone, I; Johansen, C; Krewski, D; Kurttio, P; Lagorio, S; Lönn, S; McBride, M; Montestrucq, L; Parslow, R C; Sadetzki, S; Schüz, J; Tynes, T; Woodward, A; Cardis, E
2009-10-01
The output power of a mobile phone is directly related to its radiofrequency (RF) electromagnetic field strength, and may theoretically vary substantially in different networks and phone use circumstances due to power control technologies. To improve indices of RF exposure for epidemiological studies, we assessed determinants of mobile phone output power in a multinational study. More than 500 volunteers in 12 countries used Global System for Mobile communications software-modified phones (GSM SMPs) for approximately 1 month each. The SMPs recorded date, time, and duration of each call, and the frequency band and output power at fixed sampling intervals throughout each call. Questionnaires provided information on the typical circumstances of an individual's phone use. Linear regression models were used to analyse the influence of possible explanatory variables on the average output power and the percentage call time at maximum power for each call. Measurements of over 60,000 phone calls showed that the average output power was approximately 50% of the maximum, and that output power varied by a factor of up to 2 to 3 between study centres and network operators. Maximum power was used during a considerable proportion of call time (39% on average). Output power decreased with increasing call duration, but showed little variation in relation to reported frequency of use while in a moving vehicle or inside buildings. Higher output powers for rural compared with urban use of the SMP were observed principally in Sweden where the study covered very sparsely populated areas. Average power levels are substantially higher than the minimum levels theoretically achievable in GSM networks. Exposure indices could be improved by accounting for average power levels of different telecommunications systems. There appears to be little value in gathering information on circumstances of phone use other than use in very sparsely populated regions.
Rodríguez, Jorge Martín; Peñaloza, Rolando Enrique; Moreno Montoya, José
2015-01-01
Objective To analyze the temporal behavior of road-traffic injuries (RTI) in Valledupar, Colombia, from January 2008 to December 2012. Methodology An observational study was conducted based on records from the Colombian National Legal Medicine and Forensic Sciences Institute regional office in Valledupar. Different variables were analyzed, such as the injured person’s sex, age, education level, and type of road user; the timeframe, place and circumstances of crashes and the vehicles associated with the occurrence. Furthermore, a time series analysis was conducted using an auto-regressive integrated moving average. Results There were, on average, 105 events per month; 64.9% of RTI involved men; 82.3% of the persons injured were from 18 to 59 years of age; the average age was 35.4 years; the road users most involved in RTI were motorcyclists (69%), followed by pedestrians (12%). 70% had up to upper-secondary education. Sunday was the day with the most RTI occurrences; 93% of the RTI occurred in the urban area. The time series showed a seasonal pattern and a significant trend effect. The modeling process verified the existence of both memory and extrinsic variables related. Conclusions An RTI occurrence pattern was identified, which showed an upward trend during the period analyzed. Motorcyclists were the main road users involved in RTI, which suggests the need to design and implement specific measures for that type of road user, from regulations for graduated licensing for young drivers to monitoring road user behavior for the promotion of road safety. PMID:26657887
Rodríguez, Jorge Martín; Peñaloza, Rolando Enrique; Moreno Montoya, José
2015-01-01
To analyze the temporal behavior of road-traffic injuries (RTI) in Valledupar, Colombia, from January 2008 to December 2012. An observational study was conducted based on records from the Colombian National Legal Medicine and Forensic Sciences Institute regional office in Valledupar. Different variables were analyzed, such as the injured person's sex, age, education level, and type of road user; the timeframe, place and circumstances of crashes and the vehicles associated with the occurrence. Furthermore, a time series analysis was conducted using an auto-regressive integrated moving average. There were, on average, 105 events per month; 64.9% of RTI involved men; 82.3% of the persons injured were from 18 to 59 years of age; the average age was 35.4 years; the road users most involved in RTI were motorcyclists (69%), followed by pedestrians (12%). 70% had up to upper-secondary education. Sunday was the day with the most RTI occurrences; 93% of the RTI occurred in the urban area. The time series showed a seasonal pattern and a significant trend effect. The modeling process verified the existence of both memory and extrinsic variables related. An RTI occurrence pattern was identified, which showed an upward trend during the period analyzed. Motorcyclists were the main road users involved in RTI, which suggests the need to design and implement specific measures for that type of road user, from regulations for graduated licensing for young drivers to monitoring road user behavior for the promotion of road safety.
ERIC Educational Resources Information Center
Adams, Gerald J.; Dial, Micah
1998-01-01
The cyclical nature of mathematics grades was studied for a cohort of elementary school students from a large metropolitan school district in Texas over six years (average cohort size of 8495). The study used an autoregressive integrated moving average (ARIMA) model. Results indicate that grades do exhibit a significant cyclical pattern. (SLD)
Evidence of redshifts in the average solar line profiles of C IV and Si IV from OSO-8 observations
NASA Technical Reports Server (NTRS)
Roussel-Dupre, D.; Shine, R. A.
1982-01-01
Line profiles of C IV and Si IV obtained by the Colorado spectrometer on OSO-8 are presented. It is shown that the mean profiles are redshifted with a magnitude varying from 6-20 km/s, and with a mean of 12 km/s. An apparent average downflow of material in the 50,000-100,000 K temperature range is measured. The redshifts are observed in the line center positions of spatially and temporally averaged profiles and are measured either relative to chromospheric Si I lines or from a comparison of sun center and limb profiles. The observations of 6-20 km/s redshifts place constraints on the mechanisms that dominate EUV line emission, since they require a strong weighting of the emission toward regions of downward moving material, and since there is little evidence for corresponding upward moving material in these lines.
2013-01-01
Background Statistical process control (SPC), an industrial sphere initiative, has recently been applied in health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increase in false alarm frequency. Methods Monthly mean raw mortality (at hospital discharge) time series, 1995–2009, at the individual Intensive Care Unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. For each series, (i) evidence of autocorrelation and seasonality was sought using (partial) autocorrelation ((P)ACF) function displays and classical series decomposition, and (ii) “in-control” status was assessed using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series methods to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. Results The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average(SD) raw and expected mortalities ranged from 0.012(0.113) and 0.013(0.045) to 0.296(0.457) and 0.278(0.247) respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag40 and 35% had autocorrelation through to lag40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. Conclusions The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues. PMID:23705957
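A compressed sketch of the ARMA-plus-GARCH step described above is shown below, using statsmodels for the seasonal mean model and the third-party arch package for the conditional-variance model; the model orders are illustrative assumptions, not those selected for the exemplar ICU series, and the risk-adjustment and control-charting stages are not shown.

    from statsmodels.tsa.statespace.sarimax import SARIMAX
    from arch import arch_model

    def fit_mortality_series(monthly_mortality):
        """monthly_mortality: series of monthly mean raw mortality for one ICU.
        Fits a seasonal ARMA mean model, then a GARCH(1,1) model on its
        residuals to capture volatility of the series variance."""
        mean_fit = SARIMAX(monthly_mortality, order=(1, 0, 1),
                           seasonal_order=(1, 0, 0, 12)).fit(disp=False)
        resid = mean_fit.resid
        garch_fit = arch_model(resid, mean="Zero", vol="GARCH",
                               p=1, q=1).fit(disp="off")
        return mean_fit, garch_fit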
Wen, Cheng; Dallimer, Martin; Carver, Steve; Ziv, Guy
2018-05-06
Despite the great potential of mitigating carbon emission, development of wind farms is often opposed by local communities due to the visual impact on landscape. A growing number of studies have applied nonmarket valuation methods like Choice Experiments (CE) to value the visual impact by eliciting respondents' willingness to pay (WTP) or willingness to accept (WTA) for hypothetical wind farms through survey questions. Several meta-analyses have been found in the literature to synthesize results from different valuation studies, but they have various limitations related to the use of the prevailing multivariate meta-regression analysis. In this paper, we propose a new meta-analysis method to establish general functions for the relationships between the estimated WTP or WTA and three wind farm attributes, namely the distance to residential/coastal areas, the number of turbines and turbine height. This method involves establishing WTA or WTP functions for individual studies, fitting the average derivative functions and deriving the general integral functions of WTP or WTA against wind farm attributes. Results indicate that respondents in different studies consistently showed increasing WTP for moving wind farms to greater distances, which can be fitted by non-linear (natural logarithm) functions. However, divergent preferences for the number of turbines and turbine height were found in different studies. We argue that the new analysis method proposed in this paper is an alternative to the mainstream multivariate meta-regression analysis for synthesizing CE studies and the general integral functions of WTP or WTA against wind farm attributes are useful for future spatial modelling and benefit transfer studies. We also suggest that future multivariate meta-analyses should include non-linear components in the regression functions.
Soyiri, Ireneous N; Reidpath, Daniel D
2013-01-01
Forecasting higher than expected numbers of health events provides potentially valuable insights in its own right, and may contribute to health services management and syndromic surveillance. This study investigates the use of quantile regression to predict higher than expected respiratory deaths. Data taken from 70,830 deaths occurring in New York were used. Temporal, weather and air quality measures were fitted using quantile regression at the 90th percentile with half the data (in-sample). Four QR models were fitted: an unconditional model predicting the 90th percentile of deaths (Model 1), a seasonal/temporal model (Model 2), a seasonal, temporal model plus lags of weather and air quality (Model 3), and a seasonal, temporal model with 7-day moving averages of weather and air quality (Model 4). Models were cross-validated with the out of sample data. Performance was measured as proportionate reduction in weighted sum of absolute deviations by a conditional, over unconditional models; i.e., the coefficient of determination (R1). The coefficient of determination showed an improvement over the unconditional model between 0.16 and 0.19. The greatest improvement in predictive and forecasting accuracy of daily mortality was associated with the inclusion of seasonal and temporal predictors (Model 2). No gains were made in the predictive models with the addition of weather and air quality predictors (Models 3 and 4). However, forecasting models that included weather and air quality predictors performed slightly better than the seasonal and temporal model alone (i.e., Model 3 > Model 4 > Model 2). This study provided a new approach to predict higher than expected numbers of respiratory-related deaths. The approach, while promising, has limitations and should be treated at this stage as a proof of concept.
Soyiri, Ireneous N.; Reidpath, Daniel D.
2013-01-01
Forecasting higher than expected numbers of health events provides potentially valuable insights in its own right, and may contribute to health services management and syndromic surveillance. This study investigates the use of quantile regression to predict higher than expected respiratory deaths. Data taken from 70,830 deaths occurring in New York were used. Temporal, weather and air quality measures were fitted using quantile regression at the 90th percentile with half the data (in-sample). Four QR models were fitted: an unconditional model predicting the 90th percentile of deaths (Model 1), a seasonal/temporal model (Model 2), a seasonal, temporal model plus lags of weather and air quality (Model 3), and a seasonal, temporal model with 7-day moving averages of weather and air quality (Model 4). Models were cross-validated with the out of sample data. Performance was measured as proportionate reduction in weighted sum of absolute deviations by a conditional, over unconditional models; i.e., the coefficient of determination (R1). The coefficient of determination showed an improvement over the unconditional model between 0.16 and 0.19. The greatest improvement in predictive and forecasting accuracy of daily mortality was associated with the inclusion of seasonal and temporal predictors (Model 2). No gains were made in the predictive models with the addition of weather and air quality predictors (Models 3 and 4). However, forecasting models that included weather and air quality predictors performed slightly better than the seasonal and temporal model alone (i.e., Model 3 > Model 4 > Model 2). This study provided a new approach to predict higher than expected numbers of respiratory-related deaths. The approach, while promising, has limitations and should be treated at this stage as a proof of concept. PMID:24147122
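For orientation, a minimal sketch of fitting such a 90th-percentile model with statsmodels quantile regression is shown below; the formula, column names, and the way seasonality is encoded are assumptions for illustration, not the specification used in the study.

    import statsmodels.formula.api as smf

    def fit_q90(df):
        """df: daily data with columns 'deaths', 'dow' (day of week),
        'month', and 7-day moving averages 'temp_ma7' and 'pm10_ma7'
        (illustrative names). Fits the conditional 90th percentile."""
        model = smf.quantreg("deaths ~ C(dow) + C(month) + temp_ma7 + pm10_ma7", df)
        return model.fit(q=0.90)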
Anastasopoulou, Panagiota; Tubic, Mirnes; Schmidt, Steffen; Neumann, Rainer; Woll, Alexander; Härtel, Sascha
2014-01-01
The measurement of activity energy expenditure (AEE) via accelerometry is the most commonly used objective method for assessing human daily physical activity and has gained increasing importance in the medical, sports and psychological science research in recent years. The purpose of this study was to determine which of the following procedures is more accurate for determining the energy cost of the most common everyday life activities: a single regression or an activity-based approach. For this we used a device that utilizes single regression models (GT3X, ActiGraph Manufacturing Technology Inc., FL., USA) and a device using activity-dependent calculation models (move II, movisens GmbH, Karlsruhe, Germany). Nineteen adults (11 male, 8 female; 30.4±9.0 years) wore the activity monitors attached to the waist and a portable indirect calorimeter (IC) as reference measure for AEE while performing several typical daily activities. The accuracy of the two devices for estimating AEE was assessed as the mean differences between their output and the reference and evaluated using Bland-Altman analysis. The GT3X overestimated the AEE of walking (GT3X minus reference, 1.26 kcal/min), walking fast (1.72 kcal/min), walking up-/downhill (1.45 kcal/min) and walking upstairs (1.92 kcal/min) and underestimated the AEE of jogging (-1.30 kcal/min) and walking downstairs (-2.46 kcal/min). The errors for move II were smaller than those for GT3X for all activities. The move II overestimated AEE of walking (move II minus reference, 0.21 kcal/min), walking up-/downhill (0.06 kcal/min) and stair walking (upstairs: 0.13 kcal/min; downstairs: 0.29 kcal/min) and underestimated AEE of walking fast (-0.11 kcal/min) and jogging (-0.93 kcal/min). Our data suggest that the activity monitor using activity-dependent calculation models is more appropriate for predicting AEE in daily life than the activity monitor using a single regression model.
A New Trend-Following Indicator: Using SSA to Design Trading Rules
NASA Astrophysics Data System (ADS)
Leles, Michel Carlo Rodrigues; Mozelli, Leonardo Amaral; Guimarães, Homero Nogueira
Singular Spectrum Analysis (SSA) is a non-parametric approach that can be used to decompose a time-series as trends, oscillations and noise. Trend-following strategies rely on the principle that financial markets move in trends for an extended period of time. Moving Averages (MAs) are the standard indicator used to design such strategies. In this study, SSA is used as an alternative method to enhance trend resolution in comparison with the traditional MA. New trading rules using SSA as indicator are proposed. This paper shows that for the Dow Jones Industrial Average (DJIA) and Shanghai Securities Composite Index (SSCI) time-series the SSA trading rules provided, in general, better results in comparison to MA trading rules.
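A basic SSA trend extraction, of the kind such rules could be built on, is sketched below; the window length, the number of reconstructed components, and the simple price-above-trend rule at the end are assumptions for illustration and are not the specific trading rules proposed in the paper.

    import numpy as np

    def ssa_trend(x, window, n_components=1):
        """Extract a smooth trend from a series via basic SSA: embed into a
        trajectory matrix, take the SVD, reconstruct from the leading
        component(s), and return the diagonally averaged series."""
        x = np.asarray(x, dtype=float)
        N, L = len(x), window
        K = N - L + 1
        X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix (L x K)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
        # diagonal (Hankel) averaging back to a series of length N
        trend = np.zeros(N)
        counts = np.zeros(N)
        for j in range(K):
            trend[j:j + L] += Xr[:, j]
            counts[j:j + L] += 1
        return trend / counts

    # a trend-following rule analogous to an MA crossover (illustrative):
    # signal = (prices > ssa_trend(prices, window=30)).astype(int)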
Temporal and long-term trend analysis of class C notifiable diseases in China from 2009 to 2014
Zhang, Xingyu; Hou, Fengsu; Qiao, Zhijiao; Li, Xiaosong; Zhou, Lijun; Liu, Yuanyuan; Zhang, Tao
2016-01-01
Objectives Time series models are effective tools for disease forecasting. This study aims to explore the time series behaviour of 11 notifiable diseases in China and to predict their incidence through effective models. Settings and participants The Chinese Ministry of Health started to publish class C notifiable diseases in 2009. The monthly reported case time series of 11 infectious diseases from the surveillance system between 2009 and 2014 was collected. Methods We performed a descriptive and a time series study using the surveillance data. Decomposition methods were used to explore (1) their seasonality expressed in the form of seasonal indices and (2) their long-term trend in the form of a linear regression model. Autoregressive integrated moving average (ARIMA) models have been established for each disease. Results The number of cases and deaths caused by hand, foot and mouth disease ranks number 1 among the detected diseases. It occurred most often in May and July and increased, on average, by 0.14126/100 000 per month. The remaining incidence models show good fit except the influenza and hydatid disease models. Both the hydatid disease and influenza series become white noise after differencing, so no available ARIMA model can be fitted for these two diseases. Conclusion Time series analysis of effective surveillance time series is useful for better understanding the occurrence of the 11 types of infectious disease. PMID:27797981
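The decomposition step described above, seasonal indices plus a linear long-term trend, can be sketched as follows; the additive decomposition, the calendar-month grouping of the seasonal component, and the ordinary least squares trend line are illustrative choices, not necessarily those of the study.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.seasonal import seasonal_decompose

    def seasonality_and_trend(monthly_incidence):
        """monthly_incidence: pandas Series of reported incidence with a
        monthly DatetimeIndex. Returns per-calendar-month seasonal indices
        and the fitted linear trend slope (average change per month)."""
        decomp = seasonal_decompose(monthly_incidence, model="additive", period=12)
        seasonal_indices = decomp.seasonal.groupby(decomp.seasonal.index.month).mean()
        t = np.arange(len(monthly_incidence))
        trend_fit = sm.OLS(monthly_incidence.values, sm.add_constant(t)).fit()
        return seasonal_indices, trend_fit.params[1]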
Climate variability, weather and enteric disease incidence in New Zealand: time series analysis.
Lal, Aparna; Ikeda, Takayoshi; French, Nigel; Baker, Michael G; Hales, Simon
2013-01-01
Evaluating the influence of climate variability on enteric disease incidence may improve our ability to predict how climate change may affect these diseases. To examine the associations between regional climate variability and enteric disease incidence in New Zealand. Associations between monthly climate and enteric diseases (campylobacteriosis, salmonellosis, cryptosporidiosis, giardiasis) were investigated using Seasonal Auto Regressive Integrated Moving Average (SARIMA) models. No climatic factors were significantly associated with campylobacteriosis and giardiasis, with similar predictive power for univariate and multivariate models. Cryptosporidiosis was positively associated with average temperature of the previous month (β = 0.130, SE = 0.060, p<0.01) and inversely related to the Southern Oscillation Index (SOI) two months previously (β = -0.008, SE = 0.004, p<0.05). By contrast, salmonellosis was positively associated with temperature (β = 0.110, SE = 0.020, p<0.001) of the current month and SOI of the current (β = 0.005, SE = 0.002, p<0.05) and previous month (β = 0.005, SE = 0.002, p<0.05). Forecasting accuracy of the multivariate models for cryptosporidiosis and salmonellosis was significantly higher. Although spatial heterogeneity in the observed patterns could not be assessed, these results suggest that temporally lagged relationships between climate variables and national communicable disease incidence data can contribute to disease prediction models and early warning systems.
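A compact sketch of fitting a SARIMA model with lagged climate covariates, in the spirit of the analysis above, is given below; the column names, lag structure, and model orders are placeholders, not the specifications reported for each disease.

    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    def fit_climate_sarima(df):
        """df: monthly DataFrame with columns 'cases', 'temp' and 'soi'
        (Southern Oscillation Index); names and lags are illustrative."""
        exog = pd.DataFrame({
            "temp_lag1": df["temp"].shift(1),   # previous-month temperature
            "soi_lag2": df["soi"].shift(2),     # SOI two months previously
        })
        mask = exog.notna().all(axis=1)
        model = SARIMAX(df["cases"][mask], exog=exog[mask],
                        order=(1, 0, 1), seasonal_order=(1, 0, 0, 12))
        return model.fit(disp=False)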
Distractor interference during smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R; Kerzel, Dirk
2006-10-01
When 2 targets for pursuit eye movements move in different directions, the eye velocity follows the vector average (S. G. Lisberger & V. P. Ferrera, 1997). The present study investigates the mechanisms of target selection when observers are instructed to follow a predefined horizontal target and to ignore a moving distractor stimulus. Results show that at 140 ms after distractor onset, horizontal eye velocity is decreased by about 25%. Vertical eye velocity increases or decreases by 1 degree/s in the direction opposite from the distractor. This deviation varies in size with distractor direction, velocity, and contrast. The effect was present during the initiation and steady-state tracking phase of pursuit but only when the observer had prior information about target motion. Neither vector averaging nor winner-take-all models could predict the response to a moving to-be-ignored distractor during steady-state tracking of a predefined target. The contributions of perceptual mislocalization and spatial attention to the vertical deviation in pursuit are discussed.
Hinds, Aynslie M; Bechtel, Brian; Distasio, Jino; Roos, Leslie L; Lix, Lisa M
2018-06-05
Residence in public housing, a subsidized and managed government program, may affect health and healthcare utilization. We compared healthcare use in the year before individuals moved into public housing with usage during their first year of tenancy. We also described trends in use. We used linked population-based administrative data housed in the Population Research Data Repository at the Manitoba Centre for Health Policy. The cohort consisted of individuals who moved into public housing in 2009 and 2010. We counted the number of hospitalizations, general practitioner (GP) visits, specialist visits, emergency department visits, and prescriptions drugs dispensed in the twelve 30-day intervals (i.e., months) immediately preceding and following the public housing move-in date. Generalized linear models with generalized estimating equations tested for a period (pre/post-move-in) by month interaction. Odds ratios (ORs), incident rate ratios (IRRs), and means are reported along with 95% confidence intervals (95% CIs). The cohort included 1942 individuals; the majority were female (73.4%) who lived in low income areas and received government assistance (68.1%). On average, the cohort had more than four health conditions. Over the 24 30-day intervals, the percentage of the cohort that visited a GP, specialist, and an emergency department ranged between 37.0% and 43.0%, 10.0% and 14.0%, and 6.0% and 10.0%, respectively, while the percentage of the cohort hospitalized ranged from 1.0% to 5.0%. Generally, these percentages were highest in the few months before the move-in date and lowest in the few months after the move-in date. The period by month interaction was statistically significant for hospitalizations, GP visits, and prescription drug use. The average change in the odds, rate, or mean was smaller in the post-move-in period than in the pre-move-in period. Use of some healthcare services declined after people moved into public housing; however, the decrease was only observed in the first few months and utilization rebounded. Knowledge of healthcare trends before individuals move in is informative for ensuring the appropriate supports are available to new public housing residents. Further study is needed to determine if decreased healthcare utilization following a move is attributable to decreased access.
The change of sleeping and lying posture of Japanese black cows after moving into new environment.
Fukasawa, Michiru; Komatsu, Tokushi; Higashiyama, Yumi
2018-04-25
Environmental change is one of the stressful events in livestock production. A change in environment disturbs cow behavior, and cows need several days to reach a stable behavioral pattern; sleeping posture (SP) and lying posture (LP) in particular have been used as indicators of relaxation and acclimation to the environment. The aim of this study was to examine how long Japanese black cows require to stabilize their SP and LP after moving into a new environment. Seven pregnant Japanese black cows were used. Cows were moved into a new tie-stall shed, and sleeping and lying posture were measured 17 times during the 35 experimental days. SP and LP were detected by accelerometers fixed on the mid-occipital region and the hip cross, respectively. Daily total time, frequency, and average bout of both SP and LP were calculated. Daily SP time was shortest on day 1 and increased to its highest value on day 3. It then decreased until day 9, after which it stabilized at about 65 min/day until the end of the experiment. Daily LP time changed in the same manner as daily SP time. The average SP bout was longest on day 1 and decreased to a stable level by day 7. On the other hand, the average LP bout was shortest on day 1 and increased to a stable level by day 7. These results showed that pregnant Japanese black cows needed 1 week to stabilize their SP. However, the average SP and LP bouts showed different patterns of change, even though the changes in daily SP and LP time were similar.
Ribeiro, Haroldo V; Mendes, Renio S; Lenzi, Ervin K; del Castillo-Mussot, Marcelo; Amaral, Luís A N
2013-01-01
The complexity of chess matches has attracted broad interest since its invention. This complexity and the availability of large numbers of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage from over seventy thousand high level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.
Ribeiro, Haroldo V.; Mendes, Renio S.; Lenzi, Ervin K.; del Castillo-Mussot, Marcelo; Amaral, Luís A. N.
2013-01-01
The complexity of chess matches has attracted broad interest since its invention. This complexity and the availability of large numbers of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player’s advantage from over seventy thousand high level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player’s advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments. PMID:23382876
Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick
2009-08-01
Improved characterization of distributed emission sources of greenhouse gases, such as methane from concentrated animal feeding operations, requires more accurate methods. One promising method has recently been used by the USEPA. It employs a vertical radial plume mapping (VRPM) algorithm using optical remote sensing techniques. We evaluated this method for estimating emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consisted of three ground-level PICs and two above-ground PICs. Three- to 10-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated methane emission rates from the PIC and meteorological data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with the actual release rates irrespective of atmospheric stability conditions. The maximum error was 22% when 3-cycle moving-average PICs were used; it decreased to 11% when 10-cycle moving-average PICs were used. Our validation results suggest that this new VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.
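The abstract does not include the VRPM implementation itself; the short sketch below only illustrates the N-cycle moving-average step applied to path-integrated concentrations before plume reconstruction. The array layout (one row per cycle, one column per optical path) and the simulated values are assumptions for illustration.

```python
import numpy as np

def cycle_moving_average(pics, n_cycles):
    """Average path-integrated concentrations (PICs) over a sliding
    window of measurement cycles.

    pics     : array of shape (num_cycles, num_paths), one row per cycle
               (e.g. 5 optical paths: 3 ground-level + 2 elevated).
    n_cycles : window length (3 to 10 in the study described above).

    Returns an array of shape (num_cycles - n_cycles + 1, num_paths)
    holding the n-cycle moving averages for each optical path.
    """
    pics = np.asarray(pics, dtype=float)
    kernel = np.ones(n_cycles) / n_cycles
    # 'valid' keeps only windows that lie fully inside the record
    return np.vstack([
        np.convolve(pics[:, j], kernel, mode="valid")
        for j in range(pics.shape[1])
    ]).T

# Illustrative use: 20 cycles of 5 simulated PICs
rng = np.random.default_rng(0)
pics = 2.0 + 0.3 * rng.standard_normal((20, 5))
print(cycle_moving_average(pics, 3).shape)   # (18, 5)
print(cycle_moving_average(pics, 10).shape)  # (11, 5)
```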
NASA Astrophysics Data System (ADS)
Jerome, N. P.; Orton, M. R.; d'Arcy, J. A.; Feiweier, T.; Tunariu, N.; Koh, D.-M.; Leach, M. O.; Collins, D. J.
2015-01-01
Respiratory motion commonly confounds abdominal diffusion-weighted magnetic resonance imaging, where averaging of successive samples at different parts of the respiratory cycle, performed in the scanner, manifests the motion as blurring of tissue boundaries and structural features and can introduce bias into calculated diffusion metrics. Storing multiple averages separately allows processing using metrics other than the mean; in this prospective volunteer study, median and trimmed mean values of signal intensity for each voxel over repeated averages and diffusion-weighting directions are shown to give images with sharper tissue boundaries and structural features for moving tissues, while not compromising non-moving structures. Expert visual scoring of derived diffusion maps is significantly higher for the median than for the mean, with modest improvement from the trimmed mean. Diffusion metrics derived from mono- and bi-exponential diffusion models are comparable for non-moving structures, demonstrating a lack of introduced bias from using the median. The use of the median is a simple and computationally inexpensive alternative to complex and expensive registration algorithms, requiring only additional data storage (and no additional scanning time) while returning visually superior images that will facilitate the appropriate placement of regions-of-interest when analysing abdominal diffusion-weighted magnetic resonance images, for assessment of disease characteristics and treatment response.
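As a rough illustration of the idea rather than the authors' pipeline, the sketch below forms voxel-wise mean, median, and trimmed-mean images from separately stored repetitions; the array shapes, the synthetic outliers, and the 25% trim fraction are assumptions.

```python
import numpy as np
from scipy.stats import trim_mean

# Simulated stack of separately stored averages: (n_repeats, nx, ny)
rng = np.random.default_rng(1)
n_repeats, nx, ny = 8, 64, 64
stack = 100 + 10 * rng.standard_normal((n_repeats, nx, ny))
# Corrupt a few repeats to mimic respiratory-motion outliers
stack[2] += 40
stack[5] -= 35

mean_img    = stack.mean(axis=0)                 # what on-scanner averaging returns
median_img  = np.median(stack, axis=0)           # robust alternative
trimmed_img = trim_mean(stack, 0.25, axis=0)     # drop 25% from each tail

for name, img in [("mean", mean_img), ("median", median_img),
                  ("trimmed", trimmed_img)]:
    print(f"{name:8s} global average: {img.mean():6.1f}")
```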
A novel algorithm for Bluetooth ECG.
Pandya, Utpal T; Desai, Uday B
2012-11-01
In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless-transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of the received data, if they occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving-average and Savitzky-Golay (S-Golay) algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any signal where peaks are important for diagnostic purposes.
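The PRASMMA algorithm is not specified in detail in the abstract; the fragment below is only a conceptual sketch of the "moving average everywhere except around QRS complexes" idea, using a generic peak detector as a stand-in. The window length, the guard region around each peak, and the synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_sparing_moving_average(ecg, fs, win_ms=40, guard_ms=80):
    """Smooth an ECG with a moving average, but leave samples near
    detected R peaks untouched so QRS amplitudes are not attenuated."""
    win = max(1, int(fs * win_ms / 1000))
    kernel = np.ones(win) / win
    smoothed = np.convolve(ecg, kernel, mode="same")

    # crude R-peak detection (placeholder for a real QRS detector)
    peaks, _ = find_peaks(ecg, height=0.5 * ecg.max(), distance=int(0.3 * fs))
    guard = int(fs * guard_ms / 1000)

    out = smoothed.copy()
    for p in peaks:
        lo, hi = max(0, p - guard), min(len(ecg), p + guard)
        out[lo:hi] = ecg[lo:hi]          # keep the original QRS region
    return out

# toy example: 5 s of a noisy synthetic "ECG" at 250 Hz
fs = 250
t = np.arange(0, 5, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63        # spiky beats at ~1.2 Hz
ecg += 0.05 * np.random.default_rng(2).standard_normal(t.size)
clean = peak_sparing_moving_average(ecg, fs)
print(clean.shape, float(clean.max()), float(ecg.max()))
```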
Wu, Yan; Aarts, Ronald M.
2018-01-01
A recurring problem regarding the use of conventional comb filter approaches for the elimination of periodic waveforms is the degree of selectivity achieved by the filtering process. Some applications, such as gradient artefact correction in EEG recordings during coregistered EEG-fMRI, require a highly selective comb filter that provides effective attenuation in the stopbands and gain close to unity in the passbands. In this paper, we present a novel comb filtering implementation whereby the iterative application of FIR moving-average-based filters is exploited in order to enhance the comb filtering selectivity. Our results indicate that the proposed approach can be used to effectively approximate the FIR moving-average filter characteristics to those of an ideal filter. A cascaded implementation using the proposed approach is shown to further increase the attenuation in the filter stopbands. Moreover, broadening of the bandwidth of the comb filtering stopbands around −3 dB according to the fundamental frequency of the stopband can be achieved by the novel method, which constitutes an important characteristic to account for broadening of the harmonic gradient artefact spectral lines. In parallel, the proposed filtering implementation can also be used to design a novel notch filtering approach with enhanced selectivity. PMID:29599955
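The paper's exact filter design cannot be reproduced from the abstract; the snippet below sketches one related building block: estimating the period-locked artefact by averaging the signal over preceding periods (an FIR comb) and subtracting it, optionally repeated to mimic a cascaded implementation. The period, the number of averaged periods, and the test signal are illustrative assumptions.

```python
import numpy as np

def comb_artifact_removal(x, period, n_periods=5, n_iter=1):
    """Subtract a periodic-artefact estimate obtained by averaging the
    signal over the `n_periods` preceding periods (an FIR comb).
    Repeating the step (n_iter > 1) mimics a cascaded implementation.
    The first n_periods*period samples have no full history and are
    only approximately corrected here."""
    y = np.asarray(x, dtype=float).copy()
    for _ in range(n_iter):
        template = sum(np.roll(y, k * period) for k in range(1, n_periods + 1))
        template = template / n_periods
        y = y - template
    return y

# toy check: 7 Hz sine ("EEG") plus a sawtooth artefact repeating every 200 samples
fs, period = 1000, 200
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 7 * t)
artefact = 3.0 * ((np.arange(t.size) % period) / period - 0.5)
cleaned = comb_artifact_removal(eeg + artefact, period)
tail = slice(period * 5, None)                  # ignore the start-up transient
print(float(np.std(artefact[tail])))            # artefact level before (~0.87)
print(float(np.std((cleaned - eeg)[tail])))     # residual after, much smaller
```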
Watson, J T; Ritzmann, R E
1998-01-01
We have combined high-speed video motion analysis of leg movements with electromyogram (EMG) recordings from leg muscles in cockroaches running on a treadmill. The mesothoracic (T2) and metathoracic (T3) legs have different kinematics. While in each leg the coxa-femur (CF) joint moves in unison with the femur-tibia (FT) joint, the relative joint excursions differ between T2 and T3 legs. In T3 legs, the two joints move through approximately the same excursion. In T2 legs, the FT joint moves through a narrower range of angles than the CF joint. In spite of these differences in motion, no differences between the T2 and T3 legs were seen in the timing or qualitative patterns of depressor coxa and extensor tibia activity. The average firing frequencies of the slow depressor coxa (Ds) and slow extensor tibia (SETi) motor neurons are directly proportional to the average angular velocity of their joints during stance. The average Ds and SETi firing frequency appears to be modulated on a cycle-by-cycle basis to control running speed and orientation. In contrast, while the frequency variations within Ds and SETi bursts were consistent across cycles, the variations within each burst did not parallel variations in the velocity of the relevant joints.
FARMWORKERS, A REPRINT FROM THE 1966 MANPOWER REPORT.
ERIC Educational Resources Information Center
Manpower Administration (DOL), Washington, DC.
ALTHOUGH THE AVERAGE STANDARD OF LIVING OF FARM PEOPLE HAS BEEN RISING STEADILY, THEY CONTINUE TO FACE SEVERE PROBLEMS OF UNDEREMPLOYMENT AND POVERTY. THE AVERAGE PER CAPITA INCOME OF FARM RESIDENTS IS LESS THAN TWO-THIRDS THAT OF THE NONFARM POPULATION. MILLIONS HAVE MOVED TO CITIES, LEAVING STAGNATING RURAL COMMUNITIES, AND INCREASING THE CITY…
Repeated furrow formation from a single mitotic apparatus in cylindrical sand dollar eggs.
Rappaport, R
1985-04-01
The methods used previously to demonstrate the ability of a single mitotic apparatus to elicit multiple furrows involved considerable cell distortion and did not permit the investigator to control the positioning of the parts or to observe satisfactorily the early stages of furrow development. In this investigation, Echinarachnius parma eggs were confined in 82 microns i.d. transparent, silicone rubber-walled capillaries, and the mitotic apparatus was moved by pushing the poles inward with 55-microns-diameter glass balls. When the mitotic apparatus was shifted immediately after the furrow first appeared, a new furrow appeared in the normal relation to the new position in 1-2 minutes. The same mitotic apparatus could elicit up to 13 furrows as it was shifted back and forth by alternately pushing in the poles. The previous furrow regressed as the new furrow developed. The operations protracted the furrow establishment period to as long as 24.5 minutes after establishment of the first furrow. The characteristics of furrow regression were related to the distance the mitotic apparatus was moved. It is unlikely that regression was caused either by stress imposed on the surface or the removal of the mitotic apparatus from the vicinity of the furrow.
Severe Weather Guide - Mediterranean Ports. 7. Marseille
1988-03-01
the afternoon. Upper-level westerlies and the associated storm track are shifted northward during summer, so extratropical cyclones and associated... autumn as the extratropical storm track moves southward. Precipitation amount is the highest of the year, with an average of 3 inches (76 mm) for the... Subject terms: storm haven, Mediterranean meteorology, Marseille port
Polymer Coatings Degradation Properties
1985-02-01
undertaken (124). The Box-Jenkins approach first evaluates the partial autocorrelation function and determines the order of the moving-average memory function... Tables 15 and 16 show the results of the partial autocorrelation plots. Second-order moving averages with the appropriate lags were... coated films. Kaempf, Guenter; Papenroth, Wolfgang; Kunststoffe, 1982, Vol. 72, No. 7, pp. 424-429. Parameters influencing the accelerated
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Vatsa, Veer N.; Atkins, Harold L.
2005-01-01
We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for unstructured grids to unsteady flows on moving and stationary grids. Example problems considered are relevant to active flow control and stability and control. Computational results are presented using the Spalart-Allmaras turbulence model and are compared to experimental data. The effects of grid and time-step refinement are examined.
Multifractal detrending moving-average cross-correlation analysis
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Zhou, Wei-Xing
2011-07-01
There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract rational multifractal nature.
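For readers unfamiliar with the detrending moving-average idea, the sketch below implements a simplified centered MFXDMA-style estimator for a single moment order; the forward and backward variants and the full multifractal analysis of the paper are omitted, and the synthetic series and scale range are assumptions.

```python
import numpy as np

def dmca(x, y, scales, q=2.0):
    """Centered detrending moving-average cross-correlation analysis.
    q = 2 gives the classic cross-correlation exponent; other q values
    correspond to the multifractal generalization h_xy(q)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    X = np.cumsum(x - x.mean())          # profiles of the two signals
    Y = np.cumsum(y - y.mean())
    F = []
    for n in scales:
        kernel = np.ones(n) / n
        mx = np.convolve(X, kernel, mode="same")   # centered moving average
        my = np.convolve(Y, kernel, mode="same")
        ex, ey = X - mx, Y - my                    # detrended residuals
        F.append(np.mean(np.abs(ex * ey) ** (q / 2)) ** (1.0 / q))
    return np.array(F)

rng = np.random.default_rng(3)
z = rng.standard_normal((2, 20000))
x = z[0]
y = 0.7 * z[0] + np.sqrt(1 - 0.7 ** 2) * z[1]      # cross-correlated noises
scales = np.unique(np.logspace(1, 3, 15).astype(int))
F = dmca(x, y, scales)
hxy = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(float(hxy), 2))                        # ~0.5 for uncorrelated-increment profiles
```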
Traffic-Related Air Pollution, Blood Pressure, and Adaptive Response of Mitochondrial Abundance.
Zhong, Jia; Cayir, Akin; Trevisi, Letizia; Sanchez-Guerra, Marco; Lin, Xinyi; Peng, Cheng; Bind, Marie-Abèle; Prada, Diddier; Laue, Hannah; Brennan, Kasey J M; Dereix, Alexandra; Sparrow, David; Vokonas, Pantel; Schwartz, Joel; Baccarelli, Andrea A
2016-01-26
Exposure to black carbon (BC), a tracer of vehicular-traffic pollution, is associated with increased blood pressure (BP). Identifying biological factors that attenuate BC effects on BP can inform prevention. We evaluated the role of mitochondrial abundance, an adaptive mechanism compensating for cellular-redox imbalance, in the BC-BP relationship. At ≥ 1 visits among 675 older men from the Normative Aging Study (observations=1252), we assessed daily BP and ambient BC levels from a stationary monitor. To determine blood mitochondrial abundance, we used whole blood to analyze mitochondrial-to-nuclear DNA ratio (mtDNA/nDNA) using quantitative polymerase chain reaction. Every standard deviation increase in the 28-day BC moving average was associated with 1.97 mm Hg (95% confidence interval [CI], 1.23-2.72; P<0.0001) and 3.46 mm Hg (95% CI, 2.06-4.87; P<0.0001) higher diastolic and systolic BP, respectively. Positive BC-BP associations existed throughout all time windows. BC moving averages (5-day to 28-day) were associated with increased mtDNA/nDNA; every standard deviation increase in 28-day BC moving average was associated with 0.12 standard deviation (95% CI, 0.03-0.20; P=0.007) higher mtDNA/nDNA. High mtDNA/nDNA significantly attenuated the BC-systolic BP association throughout all time windows. The estimated effect of 28-day BC moving average on systolic BP was 1.95-fold larger for individuals at the lowest mtDNA/nDNA quartile midpoint (4.68 mm Hg; 95% CI, 3.03-6.33; P<0.0001), in comparison with the top quartile midpoint (2.40 mm Hg; 95% CI, 0.81-3.99; P=0.003). In older adults, short-term to moderate-term ambient BC levels were associated with increased BP and blood mitochondrial abundance. Our findings indicate that increased blood mitochondrial abundance is a compensatory response and attenuates the cardiac effects of BC. © 2015 American Heart Association, Inc.
Do Our Means of Inquiry Match our Intentions?
Petscher, Yaacov
2016-01-01
A key stage of the scientific method is the analysis of data, yet despite the variety of methods that are available to researchers they are most frequently distilled to a model that focuses on the average relation between variables. Although research questions are frequently conceived with broad inquiry in mind, most regression methods are limited in comprehensively evaluating how observed behaviors are related to each other. Quantile regression is a largely unknown yet well-suited analytic technique similar to traditional regression analysis, but allows for a more systematic approach to understanding complex associations among observed phenomena in the psychological sciences. Data from the National Education Longitudinal Study of 1988/2000 are used to illustrate how quantile regression overcomes the limitations of average associations in linear regression by showing that psychological well-being and sex each differentially relate to reading achievement depending on one’s level of reading achievement. PMID:27486410
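A hedged illustration of the contrast the article draws, using statsmodels' quantile regression on synthetic data (the NELS:88 files are not reproduced here); the variable names and the heteroscedastic data-generating process are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the achievement data: reading scores whose
# spread grows with psychological well-being.
rng = np.random.default_rng(4)
n = 2000
wellbeing = rng.normal(0, 1, n)
female = rng.integers(0, 2, n)
reading = (50 + 2.0 * wellbeing + 1.5 * female
           + (3 + 2 * np.maximum(wellbeing, 0)) * rng.standard_normal(n))
df = pd.DataFrame({"reading": reading, "wellbeing": wellbeing, "female": female})

# Average association (ordinary least squares)
ols = smf.ols("reading ~ wellbeing + female", df).fit()
print("OLS (mean) slope:", round(float(ols.params["wellbeing"]), 2))

# Same model at several conditional quantiles of reading achievement
for q in (0.10, 0.50, 0.90):
    fit = smf.quantreg("reading ~ wellbeing + female", df).fit(q=q)
    print(f"tau={q:.2f}  slope={fit.params['wellbeing']:.2f}")
```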
An Examination and Comparison of Airline and Navy Pilot Career Earnings
1986-03-01
List of tables (excerpt): ... RECEIVED; 16. Airline Pilot Probationary Wages; 17. 1985 FAPA Maximum Pilot Wage Estimates; 18. 1983 Airline Pilot Wages Regression Equations; 19. Average 1983 Pilot Wages Computed from Regression Analysis; 20. FAPA Maximum... Western N/A 1,200 1,500 (Source: FAPA). This establishes a wage "base" for pilots. In addition, a pilot who flies more than average in one month may "bank
ERIC Educational Resources Information Center
Kasapoglu, Koray
2014-01-01
This study aims to investigate which factors are associated with Turkey's 15-year-olds' scoring above the OECD average (493) on the PISA'09 reading assessment. Collected from a total of 4,996 15-year-old students from Turkey, data were analyzed by logistic regression analysis in order to model the data of students who were split into two: (1)…
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest; a brief literature review is also provided in this chapter. The second chapter investigates the properties of the Lasso under long-range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution, and we then show asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the n^(1/2-d)-consistency of the Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed; here d is the memory parameter of the stationary error sequence. The performance of the Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile-based estimator for measurement error models. Standard formulations of prediction problems in high dimensional regression models assume the availability of fully observed covariates and sub-Gaussian, homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non-sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above-mentioned model assumptions. We study these estimators in both the fixed dimensional and high dimensional sparse setups; in the latter setup, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
Schilling, K.E.; Wolter, C.F.
2005-01-01
Nineteen variables, including precipitation, soils and geology, land use, and basin morphologic characteristics, were evaluated to develop Iowa regression models to predict total streamflow (Q), base flow (Qb), storm flow (Qs) and base flow percentage (%Qb) in gauged and ungauged watersheds in the state. Discharge records from a set of 33 watersheds across the state for the 1980 to 2000 period were separated into Qb and Qs. Multiple linear regression found that 75.5 percent of long term average Q was explained by rainfall, sand content, and row crop percentage variables, whereas 88.5 percent of Qb was explained by these three variables plus permeability and floodplain area variables. Qs was explained by average rainfall and %Qb was a function of row crop percentage, permeability, and basin slope variables. Regional regression models developed for long term average Q and Qb were adapted to annual rainfall and showed good correlation between measured and predicted values. Combining the regression model for Q with an estimate of mean annual nitrate concentration, a map of potential nitrate loads in the state was produced. Results from this study have important implications for understanding geomorphic and land use controls on streamflow and base flow in Iowa watersheds and similar agriculture dominated watersheds in the glaciated Midwest. (JAWRA) (Copyright 2005).
David W. Williams; Guohong Li; Ruitong Gao
2004-01-01
Movements of 55 Anoplophora glabripennis (Motschulsky) adults were monitored on 200 willow trees, Salix babylonica L., at a site approximately 80 km southeast of Beijing, China, for 9-14 d in an individual mark-recapture study using harmonic radar. The average movement distance was approximately 14 m, with many beetles not moving at all and others moving >90 m. The rate of movement...
ERIC Educational Resources Information Center
Huang, Min-Hsiung
2009-01-01
Reports of international studies of student achievement often receive public attention worldwide. However, this attention overly focuses on the national rankings of average student performance. To move beyond the simplistic comparison of national mean scores, this study investigates (a) country differences in the measures of variability as well as…
NASA Technical Reports Server (NTRS)
Labovitz, M. L.; Toll, D. L.; Kennard, R. E.
1980-01-01
Previously established results demonstrate that LANDSAT data are autocorrelated and can be described by a univariate linear stochastic process known as an autoregressive integrated moving-average model of order (1, 0, 1), or ARIMA(1, 0, 1). This model has two coefficients of interest for interpretation, phi(1) and theta(1). In a comparison of LANDSAT thematic mapper simulator (TMS) data and LANDSAT MSS data, several results were established: (1) The form of the relatedness as described by this model is not dependent upon system look angle or pixel size. (2) The phi(1) coefficient increases with decreasing pixel size and increasing topographic complexity. (3) Changes in topography have a greater influence upon phi(1) than changes in land cover class. (4) Theta(1) seems to vary with the amount of atmospheric haze. These patterns of variation in phi(1) and theta(1) are potentially exploitable by the remote sensing community to yield stochastically independent sets of observations, characterize topography, and reduce the number of bytes needed to store remotely sensed data.
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation for the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits; the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, which indicates a close fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural, or industrial use.
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2015-01-01
River water is a major source of drinking water on earth, and its management is essential for survival. Yamuna is a major river of India, and the monthly variation of the water quality of the river Yamuna has been compared at different sites for each water parameter using statistical methods. Regression, correlation coefficient, autoregressive integrated moving average (ARIMA), Box-Jenkins, residual autocorrelation function (ACF), residual partial autocorrelation function (PACF), lag, fractal, Hurst exponent, and predictability index have been estimated to analyze the trend and prediction of water quality. The predictive model is useful at 95% confidence limits, and all water parameters reveal a platykurtic curve. Brownian motion (true random walk) behavior exists at different sites for BOD, AMM, and total Kjeldahl nitrogen (TKN). The quality of Yamuna River water is good at Hathnikund, declines at Nizamuddin, Mazawali, and Agra D/S, and regains good quality at Juhikha. For all sites, almost all parameters except potential of hydrogen (pH) and water temperature (WT) cross the prescribed limits of the World Health Organization (WHO)/United States Environmental Protection Agency (EPA).
NASA Astrophysics Data System (ADS)
Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.
2016-01-01
In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model that takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic terms to fit and extract the trend and cyclic terms of SCB; then, based on the characteristics of the fitting residuals, a time-series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance compared with the quadratic polynomial model, the grey model, and the ARIMA model. In addition, the new method can also overcome the insufficiency of the ARIMA model in model recognition and order determination.
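The two-stage structure described above can be sketched as follows: a least-squares fit of a quadratic trend plus one periodic term, an ARIMA model on the fitting residuals, and the sum of both forecasts. The synthetic clock-bias series, sampling, and model orders are assumptions for illustration only.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic clock-bias-like series: quadratic drift, a periodic term,
# and AR(1) noise stand in for real IGS SCB data.
rng = np.random.default_rng(6)
n, period = 960, 96                       # e.g. 15-min epochs, ~1-day period
t = np.arange(n, dtype=float)
truth = 5.0 + 0.02 * t + 1e-5 * t ** 2 + 0.8 * np.sin(2 * np.pi * t / period)
noise = np.zeros(n)
for k in range(1, n):
    noise[k] = 0.8 * noise[k - 1] + rng.normal(0, 0.05)
scb = truth + noise

# Stage 1: least-squares fit of quadratic trend + one periodic term
A = np.column_stack([np.ones(n), t, t ** 2,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
coef, *_ = np.linalg.lstsq(A, scb, rcond=None)
resid = scb - A @ coef

# Stage 2: ARIMA on the fitting residuals
arima = ARIMA(resid, order=(1, 0, 1)).fit()

# Combine: extrapolate the deterministic part and add the ARIMA forecast
h = 96
tf = np.arange(n, n + h, dtype=float)
Af = np.column_stack([np.ones(h), tf, tf ** 2,
                      np.sin(2 * np.pi * tf / period),
                      np.cos(2 * np.pi * tf / period)])
prediction = Af @ coef + arima.forecast(h)
print(prediction[:3])
```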
The Lateral Tracking Control for the Intelligent Vehicle Based on Adaptive PID Neural Network.
Han, Gaining; Fu, Weiping; Wang, Wen; Wu, Zongsheng
2017-05-30
The intelligent vehicle is a complicated nonlinear system, and the design of a path tracking controller is one of the key technologies in intelligent vehicle research. This paper mainly designs a lateral control dynamic model of the intelligent vehicle, which is used for lateral tracking control. Firstly, the vehicle dynamics model (i.e., transfer function) is established according to the vehicle parameters. Secondly, according to the vehicle steering control system and the CARMA (Controlled Auto-Regression and Moving-Average) model, a second-order control system model is built. Using forgetting factor recursive least squares estimation (FFRLS), the system parameters are identified. Finally, a neural network PID (Proportion Integral Derivative) controller is established for lateral path tracking control based on the vehicle model and the steering system model. Experimental simulation results show that the proposed model and algorithm achieve high real-time performance and robustness in path tracking control. This provides a theoretical basis for intelligent vehicle autonomous navigation tracking control and lays the foundation for the vertical and lateral coupling control.
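The CARMA structure and controller gains are not given in the abstract; the sketch below illustrates only the forgetting-factor recursive least squares (FFRLS) identification step, on a simplified second-order ARX model with simulated input/output data. The model orders, forgetting factor, and "true" parameters are assumptions.

```python
import numpy as np

def ffrls(u, y, n_params=4, lam=0.98):
    """Forgetting-factor recursive least squares for a 2nd-order ARX model
        y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + e[k]
    Returns the estimated parameter vector [a1, a2, b1, b2]."""
    theta = np.zeros(n_params)
    P = 1e4 * np.eye(n_params)
    for k in range(2, len(y)):
        phi = np.array([-y[k - 1], -y[k - 2], u[k - 1], u[k - 2]])
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y[k] - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# Simulate a known stable 2nd-order system and recover its parameters
rng = np.random.default_rng(7)
a1, a2, b1, b2 = -1.5, 0.7, 0.5, 0.25          # illustrative "true" values
N = 2000
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k - 1] - a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2] \
           + 0.01 * rng.normal()
print(np.round(ffrls(u, y), 3))                # ~[-1.5, 0.7, 0.5, 0.25]
```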
The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network
NASA Astrophysics Data System (ADS)
Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.
2017-05-01
The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is a key problem in precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-treated based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by the IGS (International GNSS Service) to carry out short-term prediction experiments, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.
Stock price forecasting based on time series analysis
NASA Astrophysics Data System (ADS)
Chi, Wan Le
2018-05-01
Using historical stock price data to set up a sequence model that explains the intrinsic relationships in the data, future stock prices can be forecasted. The models used are the autoregressive (AR) model, the moving-average (MA) model, and the autoregressive moving-average (ARMA) model. A unit root test was applied to the original data sequence to judge whether it was stationary. A non-stationary original sequence was first-order differenced and processed further; the stationarity of the differenced sequence was then re-inspected, and if it was still non-stationary, second-order differencing was carried out. The autocorrelation and partial autocorrelation diagrams were used to determine the parameters of the identified ARMA model, including the model coefficients and the model order. Finally, the model was used to fit and forecast the daily closing price of the Shanghai Composite Index. Results showed that the non-stationary original data series became stationary after second-order differencing. The forecast values of the Shanghai Composite Index daily closing price were close to the actual values, indicating that the ARMA model used in the paper has a certain accuracy.
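A hedged sketch of the workflow described above (unit-root testing, differencing, then an ARMA-type fit) using statsmodels on a synthetic price series; the actual index data and the (p, q) order chosen in the paper are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

# Synthetic "closing price" series standing in for the index data
rng = np.random.default_rng(8)
price = pd.Series(3000 + np.cumsum(rng.normal(0.5, 15, 500)))

# Step 1: augmented Dickey-Fuller unit-root test, differencing until stationary
def is_stationary(series, alpha=0.05):
    return adfuller(series.dropna())[1] < alpha   # element [1] is the p-value

d = 0
work = price.copy()
while not is_stationary(work) and d < 2:
    work = work.diff()
    d += 1
print("order of differencing d =", d)

# Step 2: in practice the ACF/PACF plots of `work` guide the choice of (p, q);
# a small order is simply assumed here for illustration.
model = ARIMA(price, order=(1, d, 1)).fit()
print(model.params)
print(model.forecast(5))                          # next 5 closing prices
```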
Riemannian multi-manifold modeling and clustering in brain networks
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Salsabilian, Shiva; Wack, David S.; Muldoon, Sarah F.; Baidoo-Williams, Henry E.; Vettel, Jean M.; Cieslak, Matthew; Grafton, Scott T.
2017-08-01
This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: brain-network time series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series thus amounts to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first one is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points in the Grassmann manifold. The second one utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Based on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.
NASA Astrophysics Data System (ADS)
Shiri, Jalal; Kisi, Ozgur; Yoon, Heesung; Lee, Kang-Kun; Hossein Nazemi, Amir
2013-07-01
The knowledge of groundwater table fluctuations is important in agricultural lands as well as in studies related to groundwater utilization and management. This paper investigates the abilities of Gene Expression Programming (GEP), Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Network (ANN) and Support Vector Machine (SVM) techniques for groundwater level forecasting at lead times from the following day up to 7 days. Several input combinations comprising water table level, rainfall and evapotranspiration values from the Hongcheon Well station (South Korea), covering a period of eight years (2001-2008), were used to develop and test the applied models. The data from the first six years were used for developing (training) the applied models and the data from the last two years were reserved for testing. A comparison was also made between the forecasts provided by these models and the Auto-Regressive Moving Average (ARMA) technique. Based on the comparisons, it was found that the GEP models could be employed successfully in forecasting water table level fluctuations up to 7 days beyond data records.
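As one illustration of the machine-learning side of such comparisons, the sketch below trains a support vector regression model for 1-day-ahead water-table forecasting from lagged level, rainfall, and evapotranspiration inputs. The synthetic well record, the train/test split, and the SVR hyperparameters are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic daily series standing in for the well record
rng = np.random.default_rng(9)
n = 8 * 365
rain = rng.gamma(0.4, 6.0, n)                         # mm/day
et = 3 + 2 * np.sin(2 * np.pi * np.arange(n) / 365)   # mm/day
gwl = np.zeros(n)
for t in range(1, n):                                 # toy water-balance recursion
    gwl[t] = 0.97 * gwl[t - 1] + 0.05 * rain[t] - 0.02 * et[t] + rng.normal(0, 0.05)

# Supervised setup: predict tomorrow's level from today's level, rain, and ET
X = np.column_stack([gwl[:-1], rain[:-1], et[:-1]])
y = gwl[1:]
split = 6 * 365                                       # first six years for training
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
print("1-day-ahead test RMSE:", round(rmse, 3))
```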
Detection of meteorological extreme effect on historical crop yield anomaly
NASA Astrophysics Data System (ADS)
Kim, W.; Iizumi, T.; Nishimori, M.
2017-12-01
Meteorological extremes of temperature and precipitation are a critical issue in global climate change, and studies investigating how the extremes change in accordance with climate change are continuously reported. However, little is understood about how the extremes affect crop yields worldwide through heatwaves, cold waves, droughts, and floods, although some local or national reports are available. Therefore, we globally investigated the effects of extremes on the variability of the historical yields of maize, rice, soy, and wheat using a standardized index and a historical yield anomaly. For the regression analysis, the standardized index is aggregated annually in consideration of a crop calendar, and the historical yield is detrended with a 5-year moving average. Throughout this investigation, we found that the relationship between the aggregated standardized index and the historical yield anomaly shows not merely positive correlations but also negative correlations for all crops across the globe. Namely, the extremes cause decreases in crop yield in most regions, as expected, but contrastingly cause increases in some regions. These results help us to quantify the effect of extremes on historical crop yield anomalies.
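A minimal sketch of the detrending step mentioned above, assuming a centered 5-year moving average and a synthetic yield series; whether the study used absolute or relative anomalies is not stated in the abstract, so both are shown.

```python
import numpy as np
import pandas as pd

# Synthetic national maize yields (t/ha) standing in for the historical data
years = np.arange(1961, 2011)
rng = np.random.default_rng(10)
yields = 2.0 + 0.05 * (years - 1961) + rng.normal(0, 0.25, years.size)
s = pd.Series(yields, index=years)

# Centered 5-year moving average as the slowly varying (technological) trend
trend = s.rolling(window=5, center=True, min_periods=3).mean()
anomaly = s - trend                       # absolute anomaly
rel_anomaly = (s - trend) / trend         # relative (fractional) anomaly

print(anomaly.loc[1990:1994].round(3))
print("std of relative anomaly:", round(float(rel_anomaly.std()), 3))
```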
Statistical modeling of valley fever data in Kern County, California
NASA Astrophysics Data System (ADS)
Talamantes, Jorge; Behseta, Sam; Zender, Charles S.
2007-03-01
Coccidioidomycosis (valley fever) is a fungal infection found in the southwestern US, northern Mexico, and some places in Central and South America. The fungus that causes it ( Coccidioides immitis) is normally soil-dwelling but, if disturbed, becomes air-borne and infects the host when its spores are inhaled. It is thus natural to surmise that weather conditions that foster the growth and dispersal of the fungus must have an effect on the number of cases in the endemic areas. We present here an attempt at the modeling of valley fever incidence in Kern County, California, by the implementation of a generalized auto regressive moving average (GARMA) model. We show that the number of valley fever cases can be predicted mainly by considering only the previous history of incidence rates in the county. The inclusion of weather-related time sequences improves the model only to a relatively minor extent. This suggests that fluctuations of incidence rates (about a seasonally varying background value) are related to biological and/or anthropogenic reasons, and not so much to weather anomalies.
Time-Series Forecast Modeling on High-Bandwidth Network Measurements
Yoo, Wucherl; Sim, Alex
2016-06-24
With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with the traditional approach, such as the Box-Jenkins methodology, to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
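One way to realize an STL-plus-ARIMA forecaster is statsmodels' STLForecast wrapper; the sketch below uses it on a synthetic utilization series, since the SNMP measurements are not distributed with the abstract. The seasonal period, the ARIMA order, and the series construction are assumptions, not the authors' settings.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.forecasting.stl import STLForecast

# Synthetic stand-in for hourly path-utilization data: a daily cycle
# plus an AR(1)-like component.
rng = np.random.default_rng(5)
n = 24 * 60                                     # 60 days of hourly values
season = 40 + 20 * np.sin(2 * np.pi * np.arange(n) / 24)
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.7 * noise[t - 1] + rng.normal(0, 3)
util = pd.Series(season + noise, name="gbps")

# STL removes the daily cycle; ARIMA models the deseasonalized remainder.
stlf = STLForecast(util, ARIMA, model_kwargs={"order": (1, 0, 1)}, period=24)
res = stlf.fit()
forecast = np.asarray(res.forecast(48))         # 48-hour multi-step forecast
mad = float(np.mean(np.abs(util - util.mean())))  # scale reference for errors
print(forecast[:3])
print("MAD of the historical series:", round(mad, 2))
```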
Steinmetz-Wood, Madeleine; Wasfi, Rania; Parker, George; Bornstein, Lisa; Caron, Jean; Kestens, Yan
2017-07-14
Collective efficacy has been associated with many health benefits at the neighborhood level. Therefore, understanding why some communities have greater collective efficacy than others is important from a public health perspective. This study examined the relationship between gentrification and collective efficacy, in Montreal Canada. A gentrification index was created using tract level median household income, proportion of the population with a bachelor's degree, average rent, proportion of the population with low income, and proportion of the population aged 30-44. Multilevel linear regression analyses were conducted to measure the association between gentrification and individual level collective efficacy. Gentrification was positively associated with collective efficacy. Gentrifiers (individuals moving into gentrifying neighborhoods) had higher collective efficacy than individuals that lived in a neighborhood that did not gentrify. Perceptions of collective efficacy of the original residents of gentrifying neighborhoods were not significantly different from the perceptions of neighborhood collective efficacy of gentrifiers. Our results indicate that gentrification was positively associated with perceived collective efficacy. This implies that gentrification could have beneficial health effects for individuals living in gentrifying neighborhoods.
Systematic strategies for the third industrial accident prevention plan in Korea.
Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung
2012-01-01
To minimize industrial accidents, it's critical to evaluate a firm's priorities for prevention factors and strategies since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. Therefore, this paper proposes the evaluation of priorities through statistical testing of prevention factors with a cause analysis in a cause and effect model. A priority matrix criterion is proposed to apply the ranking and for the objectivity of questionnaire results. This paper used regression method (RA), exponential smoothing method (ESM), double exponential smoothing method (DESM), autoregressive integrated moving average (ARIMA) model and proposed analytical function method (PAFM) to analyze trends of accident data that will lead to an accurate prediction. This paper standardized the questionnaire results of workers and managers in manufacturing and construction companies with less than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred. Finally, a strategy was provided to construct safety management for the third industrial accident prevention plan and a forecasting method for occupational accident rates and fatality rates for occupational accidents per 10,000 people.
A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction
NASA Astrophysics Data System (ADS)
Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.
2017-03-01
There are two problems with the LS (Least Squares)+AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit within the fitting interval are reasonable, but the residuals of the LS extrapolation are poor; and the LS fitting residual sequence is non-linear, so it is unsuitable to establish an AR model for the residual sequence to be forecasted based only on the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, constraints are added at the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values next to the two endpoints are very close to the observed values. Second, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the modeling object of the AR model for the residual forecast. Calculation examples show that this solution can effectively improve the short-term polar motion prediction accuracy of the LS+AR model. In addition, the comparison with the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirms the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for the polar motion forecast over 1-10 days, show that the forecast accuracy of the proposed model can reach the world level.
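The endpoint constraints and residual-selection rule of the proposed scheme are not reproducible from the abstract; the sketch below shows only the baseline LS+AR framework that the paper refines: a least-squares fit of trend plus periodic terms followed by an AR model on the fitting residuals. The synthetic series, the assumed periods, and the AR order are assumptions.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Synthetic polar-motion-like series: trend + two periodic terms + AR noise
rng = np.random.default_rng(11)
n, h = 2000, 10                              # daily values, 10-day forecast
t = np.arange(n, dtype=float)
truth = 30 + 0.002 * t \
        + 100 * np.sin(2 * np.pi * t / 365.25) \
        + 80 * np.sin(2 * np.pi * t / 432.0)   # annual and Chandler-like terms
noise = np.zeros(n)
for k in range(1, n):
    noise[k] = 0.9 * noise[k - 1] + rng.normal(0, 1.0)
x = truth + noise

def design(tt):
    cols = [np.ones(tt.size), tt]
    for period in (365.25, 432.0):
        cols += [np.sin(2 * np.pi * tt / period), np.cos(2 * np.pi * tt / period)]
    return np.column_stack(cols)

# LS fit of trend + periodic terms, then AR on the fitting residuals
A = design(t)
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
resid = x - A @ coef
ar = AutoReg(resid, lags=5).fit()

# Combine LS extrapolation with the AR residual forecast
tf = np.arange(n, n + h, dtype=float)
forecast = design(tf) @ coef + ar.predict(start=n, end=n + h - 1)
print(np.round(forecast[:3], 1))
```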
Aziz, H. M. Abdul; Ukkusuri, Satish V.
2017-06-29
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates the emissions in most cases because it ignores the speed variation at congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW to sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
Simulations of moving effect of coastal vegetation on tsunami damping
NASA Astrophysics Data System (ADS)
Tsai, Ching-Piao; Chen, Ying-Chi; Octaviani Sihombing, Tri; Lin, Chang
2017-05-01
A coupled wave-vegetation simulation is presented for the effect of moving coastal vegetation on tsunami wave height damping. The problem is idealized as solitary wave propagation past a group of emergent cylinders. The numerical model is based on the general Reynolds-averaged Navier-Stokes equations with a renormalization group turbulence closure model, using the volume-of-fluid technique. The general moving object (GMO) model developed in the computational fluid dynamics (CFD) code Flow-3D is applied to dynamically simulate the motion of the vegetation coupled with the wave. The damping of wave height and the turbulent kinetic energy along moving and stationary cylinders are discussed. The simulated results show that the damping of wave height and the turbulent kinetic energy by the moving cylinders is clearly less than that by the stationary cylinders. This result implies that wave decay by coastal vegetation may be overestimated if the vegetation is represented as stationary.
Chen, Renjie; Samoli, Evangelia; Wong, Chit-Ming; Huang, Wei; Wang, Zongshuang; Chen, Bingheng; Kan, Haidong
2012-09-15
Few multi-city studies in Asian developing countries have examined the acute health effects of ambient nitrogen dioxide (NO2). In the China Air Pollution and Health Effects Study (CAPES), we investigated the short-term association between NO2 and mortality in 17 Chinese cities. We applied two-stage Bayesian hierarchical models to obtain city-specific and national average estimates for NO2. In each city, we used Poisson regression models incorporating natural spline smoothing functions to adjust for the long-term and seasonal trends of mortality, as well as other time-varying covariates. We examined the associations by age, gender, and education status. We combined the individual-city estimates of the concentration-response curves to get an overall NO2-mortality association in China. The average daily concentrations of NO2 in the 17 Chinese cities ranged from 26 μg/m³ to 67 μg/m³. In the combined analysis, a 10-μg/m³ increase in the two-day moving average of NO2 was associated with a 1.63% [95% posterior interval (PI), 1.09 to 2.17], 1.80% (95% PI, 1.00 to 2.59), and 2.52% (95% PI, 1.44 to 3.59) increase of total, cardiovascular, and respiratory mortality, respectively. These associations remained significant after adjustment for ambient particles or sulfur dioxide (SO2). Older people appeared to be more vulnerable to NO2 exposure. The combined concentration-response curves indicated a linear association. In conclusion, this largest epidemiologic study of NO2 in Asian developing countries to date suggests that short-term exposure to NO2 is associated with increased mortality risk. Copyright © 2012 Elsevier Ltd. All rights reserved.
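A single-city sketch of the kind of Poisson time-series regression described above, assuming patsy's cr() natural-spline basis for the time trend and a two-day moving average of NO2; the two-stage Bayesian pooling across cities is not shown, and the synthetic data and spline degrees of freedom are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic single-city daily data standing in for the real records
rng = np.random.default_rng(12)
n_days = 3 * 365
t = np.arange(n_days)
no2 = 45 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 8, n_days)
temp = 15 + 10 * np.sin(2 * np.pi * (t - 30) / 365) + rng.normal(0, 3, n_days)
lam = np.exp(4.0 + 0.0016 * (no2 - 45) + 0.1 * np.sin(2 * np.pi * t / 365))
deaths = rng.poisson(lam)

df = pd.DataFrame({"deaths": deaths, "no2": no2, "temp": temp, "t": t})
df["no2_ma2"] = df["no2"].rolling(2).mean()   # two-day moving average (lags 0-1)

# Poisson GLM with a natural cubic spline in time for long-term/seasonal trend
model = smf.glm("deaths ~ cr(t, df=7) + temp + no2_ma2",
                data=df.dropna(), family=sm.families.Poisson()).fit()
beta = float(model.params["no2_ma2"])
print("estimated % increase per 10 ug/m3 NO2:",
      round(100 * (np.exp(10 * beta) - 1), 2))
```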
Tamulonis, Kathryn L.; Kappel, William M.
2009-01-01
Dendrogeomorphic techniques were used to assess soil movement within the Rattlesnake Gulf landslide in the Tully Valley of central New York during the last century. This landslide is a postglacial, slow-moving earth slide that covers 23 acres and consists primarily of rotated, laminated, glaciolacustrine silt and clay. Sixty-two increment cores were obtained from 30 hemlock (Tsuga canadensis) trees across the active part of the landslide and from 3 control sites to interpret the soil-displacement history. Annual growth rings were measured and reaction wood was identified to indicate years in which ring growth changed from concentric to eccentric, on the premise that soil movement triggered compensatory growth in displaced trees. These data provided a basis for an 'event index' to identify years of landslide activity over the 108 years of record represented by the oldest trees. Event-index values and total annual precipitation increased during this time, but years with sudden event-index increases did not necessarily correspond to years with above-average precipitation. Multiple-regression and residual-values analyses indicated a possible correlation between precipitation and movement within the landslide and a possible cyclic (decades-long) tree-ring response to displacement within the landslide area from the toe upward to, and possibly beyond, previously formed landslide features. The soil movement is triggered by a sequence of factors that include (1) periods of several months with below-average precipitation followed by persistent above-average precipitation, (2) the attendant increase in streamflow, which erodes the landslide toe and results in an upslope propagation of slumping, and (3) the harvesting of mature trees within this landslide during the last century and continuing to the present.
Congdon, P
1990-08-01
London's average total fertility rate (TFR) stood at 1.75. Using a cluster analysis to compare the 1985-1987 fertility patterns of different boroughs of London, demographers identified 5 natural groupings. 4 boroughs in a central London cluster have the distinction of having a low TFR (1.38) and late fertility (average age of 29.58 years). The researchers attributed this to the high levels of employment and career attachment and low rates of marriage among women in this cluster. 2 inner city boroughs constituted the smallest cluster and had the largest TFR (2.37), mainly due to high numbers of births to ethnic minorities. The largest cluster consisted of 12 boroughs located mainly along the periphery plus 2 centrally located boroughs (TFR, 1.79). Some of the upper class outer boroughs characterized another cluster, with a TFR of 1.61. Another cluster, made up of inner and outer boroughs in east and southeast London, had a large proportion of manual workers (TFR, 2.04). Social class most likely accounted for the contrast in TFRs between the 2 aforementioned clusters. Demographers observed cyclical fluctuations of fertility rather than secular trends. Due to these fluctuations, demographers applied autoregressive moving average forecast models to time series of the fertility variables in London since 1952. They also applied structural time series models which included regression variables and the influence of cyclical and/or trend behavior. The results showed that large cohorts and the increase in female economic activity caused a delay in the modal age of births and a reduction in the number of births.
Optimization of Game Formats in U-10 Soccer Using Logistic Regression Analysis
Amatria, Mario; Arana, Javier; Anguera, M. Teresa; Garzón, Belén
2016-01-01
Small-sided games provide young soccer players with better opportunities to develop their skills and progress as individual and team players. There is, however, little evidence on the effectiveness of different game formats in different age groups, and these formats can vary between and even within countries. The Royal Spanish Soccer Association replaced the traditional grassroots 7-a-side format (F-7) with the 8-a-side format (F-8) in the 2011-12 season, and the country's regional federations gradually followed suit. The aim of this observational methodology study was to investigate which of these formats best suited the learning needs of U-10 players transitioning from 5-a-side futsal. We built a multiple logistic regression model to predict the success of offensive moves depending on the game format and the area of the pitch in which the move was initiated. Success was defined as a shot at the goal. We also built two simple logistic regression models to evaluate how the game format influenced the acquisition of technical-tactical skills. It was found that the probability of a shot at the goal was higher in F-7 than in F-8 for moves initiated in the Creation Sector-Own Half (0.08 vs 0.07) and the Creation Sector-Opponent's Half (0.18 vs 0.16). The probability was the same (0.04) in the Safety Sector. Children also had more opportunities to control the ball and pass or take a shot in the F-7 format (0.24 vs 0.20), and these were also more likely to be successful in this format (0.28 vs 0.19). PMID:28031768
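A minimal sketch of the kind of logistic model described above appears below: the probability that an offensive move ends in a shot is modelled as a function of game format and the pitch sector where the move starts. The data frame, category labels, and baseline probabilities are hypothetical, chosen only to mimic the orders of magnitude reported.

```python
# Hedged sketch: logistic regression of shot success on format and pitch sector,
# on simulated move-level data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
fmt = rng.choice(["F7", "F8"], size=n)
sector = rng.choice(["safety", "creation_own", "creation_opp"], size=n)
base = {"safety": 0.04, "creation_own": 0.08, "creation_opp": 0.17}
p = np.array([base[s] + (0.01 if f == "F7" else 0.0) for f, s in zip(fmt, sector)])
moves = pd.DataFrame({"shot": rng.binomial(1, p), "format": fmt, "sector": sector})

fit = smf.logit("shot ~ C(format) + C(sector)", data=moves).fit(disp=0)
# Predicted probability of a shot for a move started in the opponent-half
# creation sector under each format:
new = pd.DataFrame({"format": ["F7", "F8"], "sector": ["creation_opp"] * 2})
print(fit.predict(new))
```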
Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee
2017-07-01
Landsat optical images have enough spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor often degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and multilinear regression analysis have been tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date; careful regional and seasonal consideration is required in selecting input images. During the summer season, due to clouds, it is very difficult to obtain images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R2 values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
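A minimal sketch of the weighted-average idea follows: a simulated NDVI image for a target date is built from the nearest usable images before and after it, weighted by the inverse of their temporal distance. The arrays and day offsets are synthetic placeholders, not the study's inputs.

```python
# Hedged sketch: inverse-temporal-distance weighted average of two NDVI scenes.
import numpy as np

ndvi_before = np.random.default_rng(3).uniform(0.2, 0.8, size=(100, 100))
ndvi_after  = ndvi_before + 0.05          # pretend the canopy greened up slightly
days_before, days_after = 16, 8           # days from each acquisition to the target date

w_before = 1.0 / days_before
w_after  = 1.0 / days_after
ndvi_sim = (w_before * ndvi_before + w_after * ndvi_after) / (w_before + w_after)
print(float(ndvi_sim.mean()))             # simulated NDVI for the target date
```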
Ricketts, Thomas C
2010-02-01
This study attempted to determine whether there were identifiable trends in where surgeons moved from and to over time. Physicians, including surgeons, change the location of their practices over their careers. If this movement follows the economic theory that surgeons, like most professionals, seek better economic opportunities, then their movements should be toward better markets for their services. Using US national data (American Medical Association Masterfile) that describe the practice locations of surgeons, this study tracked county-level changes of practice location and summarized the characteristics of the places surgeons left and those they moved to. The analysis was primarily descriptive, with linear multivariate regression models constructed to determine the characteristics of the surgeons who moved and of the places from and to which they moved. In total, 30,262 (32.1%) of 94,630 actively practicing, post-training, nonfederal surgeons moved in the 10-year period 1996-2006. The overall tendency of movers was to go to places that had more physicians and a better overall economic environment. These trends, if they continue, may create pressure on access in rural and urban underserved areas.
Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults
NASA Astrophysics Data System (ADS)
Abdelrahman, E. M.; Essa, K. S.
2015-02-01
We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained when errors are given in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined approximately. In the case of practical data (Bouguer anomaly over Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.
A monitoring tool for performance improvement in plastic surgery at the individual level.
Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J
2013-05-01
The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
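A minimal sketch of one of the tools named above, an exponentially weighted moving average (EWMA) chart on risk-adjusted operative time, is shown below. The "expected" times would in practice come from a case-mix and experience regression; here they are a flat placeholder, and the smoothing constant and 3-sigma limits are common textbook defaults rather than the study's settings.

```python
# Hedged sketch: EWMA control chart on (observed - expected) operative minutes.
import numpy as np

rng = np.random.default_rng(4)
observed = rng.normal(150, 20, size=60)        # minutes per case (simulated)
expected = np.full(60, 150.0)                  # placeholder for model-adjusted times
resid = observed - expected

lam, sigma = 0.2, resid.std(ddof=1)
ewma = np.zeros_like(resid)
ewma[0] = lam * resid[0]
for t in range(1, len(resid)):
    ewma[t] = lam * resid[t] + (1 - lam) * ewma[t - 1]

# Asymptotic control limits for an EWMA chart centred on zero.
limit = 3 * sigma * np.sqrt(lam / (2 - lam))
flags = np.abs(ewma) > limit
print(f"cases flagged: {int(flags.sum())}")
```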
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms, and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA procedures and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
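The simulation behind a bias detection curve can be sketched roughly as below: take a series of historical results, apply truncation limits, run a moving average with control limits, introduce a fixed bias at a random point, and record how many post-bias results are needed before the MA signals. The window size, truncation limits, and control limits here are arbitrary examples, not validated settings.

```python
# Hedged sketch: simulating the median number of results needed to detect a bias.
import numpy as np

rng = np.random.default_rng(5)
results = rng.normal(140, 3, size=5000)            # e.g. sodium results, mmol/L

def results_to_detection(series, bias, window=20, trunc=(120, 160), limits=(138, 142)):
    """Return the number of post-bias results needed before the MA exceeds a limit."""
    start = rng.integers(window, len(series) - 1000)
    biased = series.copy()
    biased[start:] += bias
    kept = np.clip(biased, *trunc)                  # truncation of extreme results
    for i in range(start, len(kept)):
        ma = kept[i - window + 1:i + 1].mean()
        if ma < limits[0] or ma > limits[1]:
            return i - start + 1
    return np.inf                                   # bias never detected in this run

biases = [1, 2, 3, 4, 5]
curve = [np.median([results_to_detection(results, b) for _ in range(50)]) for b in biases]
print(dict(zip(biases, curve)))                     # median N results to detect each bias
```

Plotting these medians against the introduced bias gives one bias detection curve; repeating the exercise for other truncation limits, algorithms, and control limits allows the graphical comparison described above.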
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
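The Poisson calculation quoted above reduces to one line per event class, as in the sketch below. The rate of about 7 per decade for VEI>=4 events comes from the expectation stated in the abstract; the VEI>=5 and VEI>=6 rates are assumed values chosen only to reproduce the quoted probabilities.

```python
# Hedged sketch: probability of at least one event in a decade under a Poisson model.
from math import exp

def prob_at_least_one(rate_per_decade):
    return 1.0 - exp(-rate_per_decade)

print(f"P(>=1 VEI>=4 event) = {prob_at_least_one(7.0):.3f}")    # ~0.999 (>99 percent)
print(f"P(>=1 VEI>=5 event) = {prob_at_least_one(0.67):.2f}")   # assumed rate -> ~49 percent
print(f"P(>=1 VEI>=6 event) = {prob_at_least_one(0.20):.2f}")   # assumed rate -> ~18 percent
```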
Spatial Assessment of Model Errors from Four Regression Techniques
Lianjun Zhang; Jeffrey H. Gove
2005-01-01
Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...
Quantile Regression in the Study of Developmental Sciences
ERIC Educational Resources Information Center
Petscher, Yaacov; Logan, Jessica A. R.
2014-01-01
Linear regression analysis is one of the most common techniques applied in developmental research, but only allows for an estimate of the average relations between the predictor(s) and the outcome. This study describes quantile regression, which provides estimates of the relations between the predictor(s) and outcome, but across multiple points of…
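A minimal sketch of quantile regression with statsmodels is given below: the 10th, 50th, and 90th percentile relations between a predictor and an outcome are fitted rather than only the mean relation of ordinary least squares. The data are simulated so that the spread of the outcome grows with the predictor, which is exactly the situation where the quantile slopes diverge.

```python
# Hedged sketch: quantile regression at several quantiles on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 500)
y = 2 + 0.8 * x + rng.normal(0, 0.5 + 0.3 * x)      # conditional spread grows with x
df = pd.DataFrame({"x": x, "y": y})

for q in (0.10, 0.50, 0.90):
    fit = smf.quantreg("y ~ x", df).fit(q=q)
    print(f"q={q:.2f}: slope={fit.params['x']:.2f}")
# The slopes differ across quantiles because the relation is not the same
# everywhere in the outcome distribution, which is the point of the method.
```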
Tiffan, Kenneth F.; Kock, Tobias J.; Haskell, Craig A.; Connor, William P.; Steinhorst, R. Kirk
2009-01-01
We studied the migratory behavior of subyearling fall Chinook salmon Oncorhynchus tshawytscha in free-flowing and impounded reaches of the Snake River to evaluate the hypothesis that velocity and turbulence are the primary causal mechanisms of downstream migration. The hypothesis states that impoundment reduces velocity and turbulence and alters the migratory behavior of juvenile Chinook salmon as a result of their reduced perception of these cues. At a constant flow (m3/s), both velocity (km/d) and turbulence (the SD of velocity) decreased from riverine to impounded habitat as cross-sectional areas increased. We found evidence for the hypothesis that subyearling Chinook salmon perceive velocity and turbulence cues and respond to these cues by varying their behavior. The percentage of the subyearlings that moved faster than the average current speed decreased as fish made the transition from riverine reaches with high velocities and turbulence to upper reservoir reaches with low velocities and turbulence, but increased to riverine levels again as the fish moved farther down in the reservoir, where velocity and turbulence remained low. The migration rate (km/d) decreased in accordance with longitudinal reductions in velocity and turbulence, as predicted by the hypothesis. The variation in migration rate was better explained by a repeated-measures regression model containing velocity (Akaike's information criterion = 1,769.0) than by a model containing flow (2,232.6). We conclude that subyearling fall Chinook salmon respond to changes in water velocity and turbulence, which work together to affect the migration rate.
Wangdi, Kinley; Singhasivanon, Pratap; Silawan, Tassanee; Lawpoolsri, Saranath; White, Nicholas J; Kaewkungwal, Jaranit
2010-09-03
Malaria still remains a public health problem in some districts of Bhutan despite a marked reduction of cases in the last few years. To strengthen the country's prevention and control measures, this study was carried out to develop forecasting and prediction models of malaria incidence in the endemic districts of Bhutan using time series and ARIMAX. This study was carried out retrospectively using the monthly malaria cases reported by the health centres to the Vector-borne Disease Control Programme (VDCP) and the meteorological data from the Meteorological Unit, Department of Energy, Ministry of Economic Affairs. Time series analysis was performed on monthly malaria cases, from 1994 to 2008, in seven malaria endemic districts. Time series models derived from a multiplicative seasonal autoregressive integrated moving average (ARIMA) were deployed to identify the best model using data from 1994 to 2006. The best-fit model was selected for each individual district and for the overall endemic area, and the monthly cases from January to December 2009 and 2010 were forecasted. In developing the prediction model, the monthly reported malaria cases and the meteorological factors from 1996 to 2008 of the seven districts were analysed. ARIMAX modelling was employed to determine predictors of malaria in the subsequent month. It was found that the ARIMA (p, d, q)(P, D, Q)s model (p and P representing the autoregressive and seasonal autoregressive terms; d and D representing the non-seasonal and seasonal differencing; q and Q the moving average and seasonal moving average parameters, respectively; and s representing the length of the seasonal period) for the overall endemic districts was (2,1,1)(0,1,1)12; the modelling data from each district revealed two most common ARIMA models, (2,1,1)(0,1,1)12 and (1,1,1)(0,1,1)12. The forecasted monthly malaria cases from January to December 2009 and 2010 varied from 15 to 82 cases in 2009 and 67 to 149 cases in 2010; the population in 2009 was 285,375 and the expected population in 2010 was 289,085. The ARIMAX models of monthly cases and climatic factors showed considerable variation among the different districts. In general, the mean maximum temperature lagged at one month was a strong positive predictor of increased malaria cases for four districts. The monthly number of cases of the previous month was also a significant predictor in one district, whereas no variable could predict malaria cases for two districts. The ARIMA time-series models were useful in forecasting the number of cases in the endemic areas of Bhutan. There was no consistency in the predictors of malaria cases when using the ARIMAX model with selected lag times and climatic predictors. The ARIMA forecasting models could be employed for planning and managing the malaria prevention and control programme in Bhutan.
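A minimal sketch of fitting the seasonal ARIMA reported for the overall endemic area, (2,1,1)(0,1,1)12, and forecasting 24 months ahead is shown below. The `cases` series is a synthetic placeholder for the 1994-2008 monthly counts; a lagged-temperature column could be passed via the `exog` argument to move from ARIMA to ARIMAX.

```python
# Hedged sketch: seasonal ARIMA (SARIMAX) fit and 24-month forecast on simulated counts.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(7)
idx = pd.date_range("1994-01", periods=180, freq="MS")        # Jan 1994 - Dec 2008
cases = pd.Series(50 + 30 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 8, 180),
                  index=idx).clip(lower=0)

model = SARIMAX(cases, order=(2, 1, 1), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=24)                             # Jan 2009 - Dec 2010
print(forecast.round(1).head())
```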
The Hurst exponent in energy futures prices
NASA Astrophysics Data System (ADS)
Serletis, Apostolos; Rosenberg, Aryeh Adam
2007-07-01
This paper extends the work in Elder and Serletis [Long memory in energy futures prices, Rev. Financial Econ., forthcoming, 2007] and Serletis et al. [Detrended fluctuation analysis of the US stock market, Int. J. Bifurcation Chaos, forthcoming, 2007] by re-examining the empirical evidence for random walk type behavior in energy futures prices. In doing so, it uses daily data on energy futures traded on the New York Mercantile Exchange over the period from July 2, 1990 to November 1, 2006, and a statistical physics approach, the 'detrending moving average' technique, which provides a reliable framework for testing the information efficiency of financial markets, as shown by Alessio et al. [Second-order moving average and scaling of stochastic time series, Eur. Phys. J. B 27 (2002) 197-200] and Carbone et al. [Time-dependent Hurst exponent in financial time series, Physica A 344 (2004) 267-271; Analysis of clusters formed by the moving average of a long-range correlated time series, Phys. Rev. E 69 (2004) 026105]. The results show that energy futures returns display long memory and that the particular form of long memory is anti-persistence.
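The detrending moving average idea can be sketched roughly as follows: detrend the integrated series with a moving average of window n, compute the residual fluctuation sigma(n), and read a Hurst-type exponent off the slope of log sigma(n) versus log n. This is a simplified backward-looking variant for illustration, not the exact estimator of the cited papers; all data are simulated.

```python
# Hedged sketch: simplified detrending moving average (DMA) scaling exponent.
import numpy as np

rng = np.random.default_rng(8)
returns = rng.normal(0, 1, 10_000)
y = np.cumsum(returns)                              # integrated (price-like) series

def dma_fluctuation(series, n):
    ma = np.convolve(series, np.ones(n) / n, mode="valid")   # backward moving average
    resid = series[n - 1:] - ma                              # last point minus window mean
    return np.sqrt(np.mean(resid ** 2))

windows = np.array([10, 20, 40, 80, 160, 320])
sigmas = np.array([dma_fluctuation(y, n) for n in windows])
hurst = np.polyfit(np.log(windows), np.log(sigmas), 1)[0]
print(f"estimated scaling exponent: {hurst:.2f}")   # ~0.5 for an uncorrelated series
```

An exponent below 0.5 would indicate anti-persistence, the behavior reported above for energy futures returns.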
NASA Astrophysics Data System (ADS)
Yoon, Eun-A.; Hwang, Doo-Jin; Chae, Jinho; Yoon, Won Duk; Lee, Kyounghoon
2018-03-01
This study was carried out to determine the in situ target strength (TS) and behavioral characteristics of moon jellyfish (Aurelia aurita) using two frequencies (38 and 120 kHz) and a two-frequency-difference method for distinguishing A. aurita from other marine planktonic organisms. The average TS ranged from -71.9 to -67.9 dB at 38 kHz and from -75.5 to -66.0 dB at 120 kHz, and the average ΔMVBS(120-38 kHz) was similar, at -1.5 to 3.5 dB. The TS values varied over a range of about 14 dB, from -83.3 to -69.0 dB, depending on the pulsation of A. aurita. The species moved in a range of -0.1 to 1.0 m and mostly moved horizontally, with moving speeds of 0.3 to 0.6 m·s-1. The TS and behavioral characteristics of A. aurita can distinguish the species from others. The acoustic technology can also contribute to understanding the distribution and abundance of the species.
Environmental Assessment: Installation Development at Sheppard Air Force Base, Texas
2007-05-01
column, or in topographic depressions. Water is then utilized by plants and is respired, or it moves slowly into groundwater and/or eventually to surface...water bodies where it slowly moves through the hydrologic cycle. Removal of vegetation decreases infiltration into the soil column and thereby...
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
This paper investigates the forecasting ability of Mallows Model Averaging (MMA) through an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia, and China. Results reveal that MMA has no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average model (ARFIMA), and that its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.
NASA Astrophysics Data System (ADS)
Zhou, Weijie; Dang, Yaoguo; Gu, Rongbao
2013-03-01
We apply the multifractal detrending moving average (MFDMA) to investigate and compare the efficiency and multifractality of 5-min high-frequency China Securities Index 300 (CSI 300) data. The results show that the CSI 300 market became closer to weak-form efficiency after the introduction of the CSI 300 future. We find that the CSI 300 is characterized by multifractality and that there is less complexity and risk after the CSI 300 index future was introduced. With shuffling, surrogating, and extreme-value-removal procedures, we show that extreme events and the fat-tailed distribution are the main origins of multifractality. In addition, we discuss the knotting phenomenon in multifractality and find that the scaling range and the irregular fluctuations at large scales in the Fq(s) vs. s plot can cause a knot.
Gauging the Nearness and Size of Cycle Maximum
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2003-01-01
A simple method for monitoring the nearness and size of conventional cycle maximum for an ongoing sunspot cycle is examined. The method uses the observed maximum daily value and the maximum monthly mean value of international sunspot number and the maximum value of the 2-mo moving average of monthly mean sunspot number to effect the estimation. For cycle 23, a maximum daily value of 246, a maximum monthly mean of 170.1, and a maximum 2-mo moving average of 148.9 were each observed in July 2000. Taken together, these values strongly suggest that the conventional maximum amplitude for cycle 23 would be approximately 124.5, occurring near July 2002 +/- 5 mo, very close to the now well-established conventional maximum amplitude and occurrence date for cycle 23 (120.8 in April 2000).
An algorithm for testing the efficient market hypothesis.
Boboc, Ioana-Andreea; Dinică, Mihai-Cristian
2013-01-01
The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
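The technical indicators named above can be computed in a few lines of pandas, as in the sketch below on a synthetic EUR/USD close series. The 12/26/9 MACD and 14-period RSI parameters are the conventional defaults, not necessarily those selected by the genetic algorithm, and the final trading rule is only an example of the kind of rule such an optimizer might evolve.

```python
# Hedged sketch: EMA, MACD, and a simple RSI on a simulated close-price series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
close = pd.Series(1.10 + 0.002 * rng.standard_normal(500).cumsum())

ema_fast = close.ewm(span=12, adjust=False).mean()
ema_slow = close.ewm(span=26, adjust=False).mean()
macd = ema_fast - ema_slow
signal = macd.ewm(span=9, adjust=False).mean()

delta = close.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + gain / loss)                 # simple-moving-average RSI variant

# One naive rule an optimizer might evolve: long when MACD is above its signal
# line and RSI is not overbought.
long_signal = (macd > signal) & (rsi < 70)
print(long_signal.tail())
```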
Air quality at night markets in Taiwan.
Zhao, Ping; Lin, Chi-Chi
2010-03-01
In Taiwan, there are more than 300 night markets and they have attracted more and more visitors in recent years. Air quality in night markets has become a public concern. To characterize the current air quality in night markets, four major night markets in Kaohsiung were selected for this study. The results of this study showed that the mean carbon dioxide (CO2) concentrations at fixed and moving sites in night markets ranged from 326 to 427 parts per million (ppm) during non-open hours and from 433 to 916 ppm during open hours. The average carbon monoxide (CO) concentrations at fixed and moving sites in night markets ranged from 0.2 to 2.8 ppm during non-open hours and from 2.1 to 14.1 ppm during open hours. The average 1-hr levels of particulate matter with aerodynamic diameters less than 10 microm (PM10) and less than 2.5 microm (PM2.5) at fixed and moving sites in night markets were high, ranging from 186 to 451 microg/m3 and from 175 to 418 microg/m3, respectively. The levels of PM2.5 accounted for 80-97% of their respective PM10 concentrations. The average formaldehyde (HCHO) concentrations at fixed and moving sites in night markets ranged from 0 to 0.05 ppm during non-open hours and from 0.02 to 0.27 ppm during open hours. The average concentration of individual polycyclic aromatic hydrocarbons (PAHs) was found in the range of 0.09 x 10(4) to 1.8 x 10(4) ng/m3. The total identified PAHs (TIPs) ranged from 7.8 x 10(1) to 20 x 10(1) ng/m3 during non-open hours and from 1.5 x 10(4) to 4.0 x 10(4) ng/m3 during open hours. Of the total analyzed PAHs, the low-molecular-weight PAHs (two to three rings) were the dominant species, corresponding to an average of 97% during non-open hours and 88% during open hours, whereas high-molecular-weight PAHs (four to six rings) represented 3 and 12% of the total detected PAHs in the gas phase during non-open and open hours, respectively.
Nonlinear-regression flow model of the Gulf Coast aquifer systems in the south-central United States
Kuiper, L.K.
1994-01-01
A multiple-regression methodology was used to help answer questions concerning model reliability and to calibrate a time-dependent, variable-density ground-water flow model of the gulf coast aquifer systems in the south-central United States. More than 40 regression models with 2 to 31 regression parameters were used, and detailed results are presented for 12 of the models. More than 3,000 values for grid-element volume-averaged head and hydraulic conductivity were used for the regression model observations. Calculated prediction interval half widths, though perhaps inaccurate due to a lack of normality of the residuals, are the smallest for models with only four regression parameters. In addition, the root-mean weighted residual decreases very little with an increase in the number of regression parameters. The various models showed considerable overlap between the prediction intervals for shallow head and hydraulic conductivity. Approximate 95-percent prediction interval half widths for volume-averaged freshwater head exceed 108 feet; for volume-averaged base-10 logarithm of hydraulic conductivity, they exceed 0.89. All of the models are unreliable for the prediction of head and ground-water flow in the deeper parts of the aquifer systems, including the amount of flow coming from the underlying geopressured zone. Truncating the domain of solution of one model to exclude the part of the system having a ground-water density greater than 1.005 grams per cubic centimeter, or the part of the system below a depth of 3,000 feet, and setting the density to that of freshwater does not appreciably change the results for head and ground-water flow, except for locations close to the truncation surface.
Minute ventilation of cyclists, car and bus passengers: an experimental study.
Zuurbier, Moniek; Hoek, Gerard; van den Hazel, Peter; Brunekreef, Bert
2009-10-27
Differences in minute ventilation between cyclists, pedestrians, and other commuters influence inhaled doses of air pollution. This study estimates the minute ventilation of cyclists, car passengers, and bus passengers as part of a study on the health effects of commuters' exposure to air pollutants. Thirty-four participants performed a submaximal test on a bicycle ergometer, during which heart rate and minute ventilation were measured simultaneously at increasing cycling intensity. Individual regression equations were calculated between heart rate and the natural log of minute ventilation. Heart rates were recorded during 280 two-hour trips by bicycle, bus, and car and were converted into minute ventilation levels using the individual regression coefficients. Minute ventilation during bicycle rides was on average 2.1 times higher than in the car (individual range from 1.3 to 5.3) and 2.0 times higher than in the bus (individual range from 1.3 to 5.1). The ratio of minute ventilation of cycling compared to travelling by bus or car was higher in women than in men. Substantial differences in regression equations were found between individuals. The use of individual regression equations instead of average regression equations resulted in substantially better predictions of individual minute ventilation. The comparability of the gender-specific overall regression equations linking heart rate and minute ventilation with one previous American study supports the use of overall equations for studies at the group level. For estimating individual doses, the use of individual regression coefficients provides more precise data. Minute ventilation levels of cyclists are on average two times higher than those of bus and car passengers, consistent with the ratio found in one small previous study of young adults. The study illustrates the importance of including minute ventilation data when comparing air pollution doses between different modes of transport.
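The individual calibration described above boils down to one regression per participant, as in the sketch below: regress the natural log of minute ventilation on heart rate from the ergometer test, then convert trip heart rates into ventilation with that participant's own equation. All numbers are illustrative placeholders.

```python
# Hedged sketch: per-participant ln(minute ventilation) vs heart rate calibration.
import numpy as np

# Ergometer test for one participant: heart rate (bpm) and minute ventilation (L/min).
hr_test = np.array([70, 85, 100, 115, 130, 145])
ve_test = np.array([9, 13, 19, 27, 38, 52])

slope, intercept = np.polyfit(hr_test, np.log(ve_test), 1)   # ln(VE) = a + b*HR

# Heart rates recorded during a two-hour bicycle trip (placeholder values).
hr_trip = np.array([95, 110, 120, 118, 105])
ve_trip = np.exp(intercept + slope * hr_trip)
print(f"mean trip minute ventilation: {ve_trip.mean():.1f} L/min")
```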
Chen, Renjie; Zhang, Yuhao; Yang, Chunxue; Zhao, Zhuohui; Xu, Xiaohui; Kan, Haidong
2013-04-01
There have been no multicity studies on the acute effects of air pollution on stroke mortality in China. This study was undertaken to examine the associations between daily stroke mortality and outdoor air pollution (particulate matter <10 μm in aerodynamic diameter, sulfur dioxide, and nitrogen dioxide) in 8 Chinese cities. We used Poisson regression models with natural spline-smoothing functions to adjust for long-term and seasonal trends, as well as other time-varying covariates. We applied 2-stage Bayesian hierarchical statistical models to estimate city-specific and national average associations of air pollution with daily stroke mortality. Air pollution was associated with daily stroke mortality in 8 Chinese cities. In the combined analysis, an increase of 10 μg/m(3) of 2-day moving average concentrations of particulate matter <10 μm in aerodynamic diameter, sulfur dioxide, and nitrogen dioxide corresponded to 0.54% (95% posterior intervals, 0.28-0.81), 0.88% (95% posterior intervals, 0.54-1.22), and 1.47% (95% posterior intervals, 0.88-2.06) increase of stroke mortality, respectively. The concentration-response curves indicated linear nonthreshold associations between air pollution and risk of stroke mortality. To our knowledge, this is the first multicity study in China, or even in other developing countries, to report the acute effect of air pollution on stroke mortality. Our results contribute to very limited data on the effect of air pollution on stroke for high-exposure settings typical in developing countries.
Pons, Tracey; Shipton, Edward A
2011-04-01
There are no comparative randomised controlled trials of physiotherapy modalities for chronic low back and radicular pain associated with multilevel fusion. Physiotherapy-based rehabilitation to control pain and improve activation levels for persistent pain following multilevel fusion can be challenging. This is a case report of a 68-year-old man who was referred for physiotherapy intervention 10 months after a multilevel spinal fusion for spinal stenosis. He reported high levels of persistent postoperative pain with minimal activity as a consequence of his pain following the surgery. The physiotherapy interventions consisted of three phases of rehabilitation starting with pool exercise that progressed to land-based walking. These were all combined with transcutaneous electrical nerve stimulation (TENS) that was used consistently for up to 8 hours per day. As outcome measures, daily pain levels and walking distances were charted once the pool programme was completed (in the third phase). Phase progression was determined by shuttle test results. The pain level was correlated with the distance walked using linear regression over a 5-day average. Over a 5-day moving average, the pain level reduced and walking distance increased. The chart of recorded pain level and walking distance showed a trend toward decreased pain with the increased distance walked. In a patient undergoing multilevel lumbar fusion, the combined use of TENS and a progressive walking programme (from pool to land) reduced pain and increased walking distance. This improvement was despite poor medication compliance and a reported high level of postsurgical pain.
Model for forecasting Olea europaea L. airborne pollen in South-West Andalusia, Spain
NASA Astrophysics Data System (ADS)
Galán, C.; Cariñanos, Paloma; García-Mozo, Herminia; Alcázar, Purificación; Domínguez-Vilches, Eugenio
Data on predicted average and maximum airborne pollen concentrations and the dates on which these maximum values are expected are of undoubted value to allergists and allergy sufferers, as well as to agronomists. This paper reports on the development of predictive models for calculating total annual pollen output, on the basis of pollen and weather data compiled over the last 19 years (1982-2000) for Córdoba (Spain). Models were tested in order to predict the 2000 pollen season; in addition, and in view of the heavy rainfall recorded in spring 2000, the 1982-1998 data set was used to test the model for 1999. The results of the multiple regression analysis show that the variables exerting the greatest influence on the pollen index were rainfall in March and temperatures over the months prior to the flowering period. For prediction of maximum values and dates on which these values might be expected, the start of the pollen season was used as an additional independent variable. Temperature proved the best variable for this prediction. Results improved when the 5-day moving average was taken into account. Testing of the predictive model for 1999 and 2000 yielded fairly similar results. In both cases, the difference between expected and observed pollen data was no greater than 10%. However, significant differences were recorded between forecast and expected maximum and minimum values, owing to the influence of rainfall during the flowering period.
Tallon, Lindsay A; Manjourides, Justin; Pun, Vivian C; Mittleman, Murray A; Kioumourtzoglou, Marianthi-Anna; Coull, Brent; Suh, Helen
2017-02-17
Little is known about the association between air pollution and erectile dysfunction (ED), a disorder occurring in 64% of men over the age of 70, and to date no studies of this association have been published. To address this significant knowledge gap, we explored the relationship between ED and air pollution in a group of older men who were part of the National Social Life, Health, and Aging Project (NSHAP), a nationally representative cohort study of older Americans. We obtained incident ED status and participant data for 412 men (age 57-85). Fine particulate matter (PM2.5) exposures were estimated using spatio-temporal models based on participants' geocoded addresses, while nitrogen dioxide (NO2) and ozone (O3) concentrations were estimated using the nearest measurements from the Environmental Protection Agency's Air Quality System. The association between air pollution and incident ED (newly developed in Wave 2) was examined using logistic regression models, with adjusted models controlling for race, education, season, smoking, obesity, diabetes, depression, and median household income of the census tract. We found positive, although statistically insignificant, associations between PM2.5, NO2, and O3 exposures and odds of incident ED for each of our examined exposure windows, including 1- to 7-year moving averages. Odds ratios (OR) for the 1- and 7-year moving averages equaled 1.16 (95% CI: 0.87, 1.55) and 1.16 (95% CI: 0.92, 1.46), respectively, for an IQR increase in PM2.5 exposures. Observed associations were robust to model specifications and were not significantly modified by any of the examined risk factors for ED. We found associations between PM2.5, NO2, and O3 exposures and odds of developing ED that did not reach nominal statistical significance, although exposures to each pollutant were consistently associated with higher odds of developing ED. While more research is needed, our findings suggest a relationship between air pollutant exposure and incident cases of ED, a common condition in older men.
Sarnat, S E; Suh, H H; Coull, B A; Schwartz, J; Stone, P H; Gold, D R
2006-01-01
Objectives Ambient particulate air pollution has been associated with increased risk of cardiovascular morbidity and mortality. Pathways by which particles may act involve autonomic nervous system dysfunction or inflammation, which can affect cardiac rate and rhythm. The importance of these pathways may vary by particle component or source. In an eastern US location with significant regional pollution, the authors examined the association of air pollution and odds of cardiac arrhythmia in older adults. Methods Thirty-two non-smoking older adults were evaluated on a weekly basis for 24 weeks during the summer and autumn of 2000 with a standardised 30-minute protocol that included continuous electrocardiogram measurements. A central ambient monitoring station provided daily concentrations of fine particles (PM2.5, sulfate, elemental carbon) and gases. Sulfate was used as a marker of regional pollution. The authors used logistic mixed effects regression to examine the odds of having any supraventricular ectopy (SVE) or ventricular ectopy (VE) in association with increases in air pollution for moving average pollutant concentrations up to 10 days before the health assessment. Results Participant-specific mean counts of arrhythmia over the protocol varied between 0.1-363 for SVE and 0-350 for VE. The authors observed odds ratios for having SVE over the length of the protocol of 1.42 (95% CI 0.99 to 2.04), 1.70 (95% CI 1.12 to 2.57), and 1.78 (95% CI 0.95 to 3.35) for 10.0 μg/m3, 4.2 μg/m3, and 14.9 ppb increases in five-day moving average PM2.5, sulfate, and ozone concentrations respectively. The other pollutants, including elemental carbon, showed no effect on arrhythmia. Participants reporting cardiovascular conditions (for example, previous myocardial infarction or hypertension) were the most susceptible to pollution-induced SVE. The authors found no association of pollution with VE. Conclusion Increased levels of ambient sulfate and ozone may increase the risk of supraventricular arrhythmia in the elderly. PMID:16757505
Moving object detection and tracking in videos through turbulent medium
NASA Astrophysics Data System (ADS)
Halder, Kalyan Kumar; Tahtali, Murat; Anavatti, Sreenatha G.
2016-06-01
This paper addresses the problem of identifying and tracking moving objects in a video sequence having a time-varying background. This is a fundamental task in many computer vision applications, though a very challenging one because of turbulence that causes blurring and spatiotemporal movements of the background images. Our proposed approach involves two major steps. First, a moving object detection algorithm that deals with the detection of real motions by separating the turbulence-induced motions using a two-level thresholding technique is used. In the second step, a feature-based generalized regression neural network is applied to track the detected objects throughout the frames in the video sequence. The proposed approach uses the centroid and area features of the moving objects and creates the reference regions instantly by selecting the objects within a circle. Simulation experiments are carried out on several turbulence-degraded video sequences and comparisons with an earlier method confirms that the proposed approach provides a more effective tracking of the targets.
The solar wind effect on cosmic rays and solar activity
NASA Technical Reports Server (NTRS)
Fujimoto, K.; Kojima, H.; Murakami, K.
1985-01-01
The relation of cosmic ray intensity to solar wind velocity is investigated using neutron monitor data from Kiel and Deep River. The analysis shows that the regression coefficient of the average intensity for a time interval on the corresponding average velocity is negative, and that its absolute value increases monotonically with the averaging interval, tau, from -0.5% per 100 km/s for tau = 1 day to -1.1% per 100 km/s for tau = 27 days. For tau > 27 days the coefficient becomes almost constant, independent of the value of tau. The analysis also shows that this tau-dependence of the regression coefficient varies with the solar activity.
Soil translocation estimates calibrated for moldboard plow depth
USDA-ARS?s Scientific Manuscript database
Over the past century, one of the biggest culprits of tillage-induced soil erosion and translocation has been the moldboard plow. The distance soil will move by moldboard plow tillage has been shown to be correlated with slope gradient. Lindstrom et al. (1992) developed regression equations describi...
Total Phosphorus Loads for Selected Tributaries to Sebago Lake, Maine
Hodgkins, Glenn A.
2001-01-01
The streamflow and water-quality data-collection networks of the Portland Water District (PWD) and the U.S. Geological Survey (USGS) as of February 2000 were analyzed in terms of their applicability for estimating total phosphorus loads for selected tributaries to Sebago Lake in southern Maine. The long-term unit-area mean annual flows for the Songo River and for small, ungaged tributaries are similar to the long-term unit-area mean annual flows for the Crooked River and other gaged tributaries to Sebago Lake, based on a regression equation that estimates mean annual streamflows in Maine. Unit-area peak streamflows of Sebago Lake tributaries can be quite different, based on a regression equation that estimates peak streamflows for Maine. Crooked River had a statistically significant positive relation (Kendall's Tau test, p=0.0004) between streamflow and total phosphorus concentration. Panther Run had a statistically significant negative relation (p=0.0015). Significant positive relations may indicate contributions from nonpoint sources or sediment resuspension, whereas significant negative relations may indicate dilution of point sources. Total phosphorus concentrations were significantly larger in the Crooked River than in the Songo River (Wilcoxon rank-sum test, p<0.0001). Evidence was insufficient, however, to indicate that phosphorus concentrations from medium-sized drainage basins, at a significance level of 0.05, were different from each other or that concentrations in small-sized drainage basins were different from each other (Kruskal-Wallis test, p=0.0980, 0.1265). All large- and medium-sized drainage basins were sampled for total phosphorus approximately monthly. Although not all small drainage basins were sampled, they may be well represented by the small drainage basins that were sampled. If the tributaries gaged by PWD had adequate streamflow data, the current PWD tributary monitoring program would probably produce total phosphorus loading data that would represent all gaged and ungaged tributaries to Sebago Lake. Outside the PWD tributary-monitoring program, the largest ungaged tributary to Sebago Lake contains 1.5 percent of the area draining to the lake. In the absence of unique point or nonpoint sources of phosphorus, ungaged tributaries are unlikely to have total phosphorus concentrations that differ significantly from those in the small tributaries that have concentration data. The regression method, also known as the rating-curve method, was used to estimate the annual total phosphorus load for Crooked River, Northwest River, and Rich Mill Pond Outlet for water years 1996-98. The MOVE.1 method was used to estimate daily streamflows for the regression method at Northwest River and Rich Mill Pond Outlet, where streamflows were not continuously monitored. An averaging method also was used to compute annual loads at the three sites. The difference between the regression estimate and the averaging estimate for each of the three tributaries was consistent with what was expected from previous studies.
Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An
2010-01-01
This study explored the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. The ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated by the unconditional least squares method, the structure was determined according to residual uncorrelatedness criteria, and the model goodness-of-fit was assessed with the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008, and the validity of the model was evaluated by comparing the predicted incidence rate with the actual one. The incidence rate of dysentery in 2010 was then predicted by the ARIMA model based on the incidence rates from January 1990 to June 2009. The model ARIMA (1,1,1)(0,1,2)12 fit the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), moving average coefficient (MA1 = 0.806), and seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131, respectively, and the prediction error was white noise. The fitted model was (1 - 0.443B)(1 - B)(1 - B^12)Z(t) = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^24)μ(t). The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rates from January 1990 to June 2009, would be 9.390 per 100 thousand. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai; it is a high-precision model for short-term forecasting.
Rate of Oviposition by Culex Quinquefasciatus in San Antonio, Texas, During Three Years
1988-09-01
autoregression and zero orders of integration and moving average (ARIMA(1,0,0)). This model was chosen initially because rainfall appeared to...have no trend requiring integration and no obvious requirement for a moving average component (i.e., no regular periodicity). This ARIMA model was...Say in both the northern and southern hemispheres exposes this species to a variety of climatic challenges to its survival. It is able to adjust
1983-11-01
[Table fragment: approximate household inventory, item and average chance of being moved (%), with High/Medium categories; entries include electric toaster, vacuum cleaner (80), colour television, record...] The items most readily moved are small items of electrical equipment and valuable items such as colour televisions. However, many respondents reported that...WESSEX WATER AUTHORITY, "Somerset Land Drainage District, land drainage survey report", Wessex Water Authority, Bridgwater, England, 1979.
Plans, Patterns, and Move Categories Guiding a Highly Selective Search
NASA Astrophysics Data System (ADS)
Trippen, Gerhard
In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.
Predicting changes in hydrologic retention in an evolving semi-arid alluvial stream
Harvey, J.W.; Conklin, M.H.; Koelsch, R.S.
2003-01-01
Hydrologic retention of solutes in hyporheic zones or other slowly moving waters of natural channels is thought to be a significant control on biogeochemical cycling and ecology of streams. To learn more about factors affecting hydrologic retention, we repeated stream-tracer injections for 5 years in a semi-arid alluvial stream (Pinal Creek, Ariz.) during a period when streamflow was decreasing, channel width increasing, and coverage of aquatic macrophytes expanding. Average stream velocity at Pinal Creek decreased from 0.8 to 0.2 m/s, average stream depth decreased from 0.09 to 0.04 m, and average channel width expanded from 3 to 13 m. Modeling of tracer experiments indicated that the hydrologic retention factor (Rh), a measure of the average time that solute spends in storage per unit length of downstream transport, increased from 0.02 to 8 s/m. At the same time the ratio of cross-sectional area of storage zones to main channel cross-sectional area (As/A) increased from 0.2 to 0.8 m2/m2, and average water residence time in storage zones (ts) increased from 5 to 24 min. Compared with published data from four other streams in the US, Pinal Creek experienced the greatest change in hydrologic retention for a given change in streamflow. The other streams differed from Pinal Creek in that they experienced a change in streamflow between tracer experiments without substantial geomorphic or vegetative adjustments. As a result, a regression of hydrologic retention on streamflow developed for the other streams underpredicted the measured increases in hydrologic retention at Pinal Creek. The increase in hydrologic retention at Pinal Creek was more accurately predicted when measurements of the Darcy-Weisbach friction factor were used (either alone or in addition to streamflow) as a predictor variable. We conclude that relatively simple measurements of channel friction are useful for predicting the response of hydrologic retention in streams to major adjustments in channel morphology as well as changes in streamflow. Published by Elsevier Ltd.
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
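One common formulation of the partial-standard-deviation scaling mentioned above, which may differ in detail from the paper's exact computation, multiplies each predictor's sample standard deviation by sqrt(1/VIF) and a degrees-of-freedom factor before rescaling the coefficient. The sketch below implements that formulation on simulated collinear data; the formula and all data are assumptions for illustration.

```python
# Hedged sketch: scaling OLS coefficients by partial standard deviations so that
# estimates are commensurate across models with multicollinearity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 200
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)      # deliberately collinear with x1
x3 = rng.normal(size=n)
y = 1 + 2 * x1 + 0.5 * x2 + 0.3 * x3 + rng.normal(size=n)

X = np.column_stack([x1, x2, x3])
fit = sm.OLS(y, sm.add_constant(X)).fit()
p = X.shape[1]

def partial_sd(X, j):
    """Assumed formula: s_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p))."""
    others = np.delete(X, j, axis=1)
    r2 = sm.OLS(X[:, j], sm.add_constant(others)).fit().rsquared
    vif = 1.0 / (1.0 - r2)
    return X[:, j].std(ddof=1) * np.sqrt(1.0 / vif) * np.sqrt((n - 1) / (n - p))

standardized = [fit.params[j + 1] * partial_sd(X, j) for j in range(p)]
print(np.round(standardized, 3))                   # coefficients on a commensurate scale
```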
A Generation at Risk: When the Baby Boomers Reach Golden Pond.
ERIC Educational Resources Information Center
Butler, Robert N.
The 20th century has seen average life expectancy in the United States move from under 50 years to over 70 years. Coupled with this increase in average life expectancy is the aging of the 76.4 million persons born between 1946 and 1964. As they approach retirement, these baby-boomers will have to balance their own needs with those of living…
Ivancevich, Nikolas M; Dahl, Jeremy J; Smith, Stephen W
2009-10-01
Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively.
Recent Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Thomas, James L.
2009-01-01
An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been extended to handle general mesh movement involving rigid, deforming, and overset meshes. Mesh deformation is achieved through analogy to elastic media by solving the linear elasticity equations. A general method for specifying the motion of moving bodies within the mesh has been implemented that allows for inherited motion through parent-child relationships, enabling simulations involving multiple moving bodies. Several example calculations are shown to illustrate the range of potential applications. For problems in which an isolated body is rotating with a fixed rate, a noninertial reference-frame formulation is available. An example calculation for a tilt-wing rotor is used to demonstrate that the time-dependent moving grid and noninertial formulations produce the same results in the limit of zero time-step size.
The Impact of Principal Movement and School Achievement on Principal Salaries
ERIC Educational Resources Information Center
Tran, Henry; Buckman, David G.
2017-01-01
This study examines whether principals' movements and school achievement are associated with their salaries. Predictors of principal salaries were examined using three years of panel data. Results from a fixed-effects regression analysis suggest that principals who moved to school leadership positions in other districts leveraged higher salaries…
Diversity and Educational Benefits: Moving Beyond Self-Reported Questionnaire Data
ERIC Educational Resources Information Center
Herzog, Serge
2007-01-01
Effects of ethnic/racial diversity among students and faculty on cognitive growth of undergraduate students are estimated via a series of hierarchical linear and multinomial logistic regression models. Using objective measures of compositional, curricular, and interactional diversity based on actuarial course enrollment records of over 6,000…
NASA Astrophysics Data System (ADS)
Yi, Hou-Hui; Fan, Li-Juan; Yang, Xiao-Feng; Chen, Yan-Yan
2008-09-01
The rolling massage manipulation is a classic Chinese massage, which is expected to eliminate many diseases. Here the effect of the rolling massage on the motion of particles in blood vessels is studied by lattice Boltzmann simulation. The simulation results show that the particle motion depends on the rolling velocity and on the distance between the particle position and the rolling position. The average values, including the particle translational velocity and angular velocity, increase almost linearly as the rolling velocity increases. The results are helpful for understanding the mechanism of the massage and for developing rolling techniques.
Experimental comparisons of hypothesis test and moving average based combustion phase controllers.
Gao, Jinwu; Wu, Yuhu; Shen, Tielong
2016-11-01
For engine control, combustion phase is the most effective and direct parameter for improving fuel efficiency. In this paper, a statistical control strategy based on a hypothesis test criterion is discussed. Taking the location of peak pressure (LPP) as the combustion phase indicator, a statistical model of LPP is first proposed, and the controller design method is then discussed on the basis of both Z and T tests. For comparison, a moving average based control strategy is also presented and implemented in this study. Experiments on a spark ignition gasoline engine at various operating conditions show that the hypothesis test based controller is able to regulate LPP close to the set point while maintaining rapid transient response, and the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
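As a rough illustration of the moving-average-based comparison strategy (not the paper's implementation), the sketch below adjusts spark advance in proportion to the deviation of a moving-average LPP from a set point. The set point, gain, window length, and toy engine model are all assumptions.

```python
import random
from collections import deque

SETPOINT_LPP = 8.0   # target location of peak pressure, deg ATDC (assumed)
K_P = 0.2            # proportional gain on the averaged error (assumed)
WINDOW = 20          # number of engine cycles in the moving average (assumed)

def moving_average_lpp_controller(measure_lpp, n_cycles=500):
    """Adjust spark advance each cycle from the moving average of LPP."""
    history = deque(maxlen=WINDOW)
    spark_advance = 20.0  # initial spark advance, deg BTDC (assumed)
    for _ in range(n_cycles):
        lpp = measure_lpp(spark_advance)
        history.append(lpp)
        avg_lpp = sum(history) / len(history)
        # LPP later than the set point -> advance the spark, and vice versa.
        spark_advance += K_P * (avg_lpp - SETPOINT_LPP)
    return spark_advance, avg_lpp

# Toy plant: LPP retards as spark advance decreases, plus cyclic variability.
toy_plant = lambda sa: (30.0 - sa) + random.gauss(0.0, 1.5)
print(moving_average_lpp_controller(toy_plant))
```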
Neonatal heart rate prediction.
Abdel-Rahman, Yumna; Jeremic, Aleksander; Tan, Kenneth
2009-01-01
Technological advances have caused a decrease in the number of infant deaths. Pre-term infants now have a substantially increased chance of survival. One of the mechanisms that is vital to saving the lives of these infants is continuous monitoring and early diagnosis. With continuous monitoring, huge amounts of data are collected, with much information embedded in them. By using statistical analysis this information can be extracted and used to aid diagnosis and to understand development. In this study we have a large dataset containing over 180 pre-term infants whose heart rates were recorded over the length of their stay in the Neonatal Intensive Care Unit (NICU). We test two types of models, empirical Bayesian and autoregressive moving average, and then attempt to predict future values. The autoregressive moving average model showed better results but required more computation.
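A minimal sketch of the autoregressive-moving-average approach to this kind of prediction is shown below, using statsmodels on a synthetic heart-rate series; the ARMA order and the simulated signal are assumptions, not the study's fitted model.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Synthetic stand-in for a pre-term infant heart-rate series (beats per minute).
n = 300
hr = 160 + np.cumsum(0.1 * rng.normal(size=n)) + rng.normal(0, 2, n)

# ARMA(2,1) is ARIMA with d=0; the order here is illustrative, not tuned.
fit = ARIMA(hr, order=(2, 0, 1)).fit()

# Predict the next 10 observations.
print(fit.forecast(steps=10))
```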
Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm
NASA Astrophysics Data System (ADS)
Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.
2014-08-01
This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP based identifier is adopted to distinguish normal plant states from faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast the time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to the modular identifier, which has been developed using the latest advances in the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to those of the reference transient, rather than on the values of the input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identifications, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to the identification of more transients without unfavorable effects are other merits of the proposed identifier.
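The integration (I) step amounts to differencing a non-stationary plant variable until it is stationary before the AR/MA terms are fit. A sketch of that step on a synthetic random-walk variable, using the augmented Dickey-Fuller test from statsmodels, is shown below; the series and ARIMA order are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)

# Synthetic non-stationary plant variable (random walk with drift).
x = np.cumsum(0.05 + rng.normal(0, 1, 500))

# Augmented Dickey-Fuller test: a large p-value means a unit root cannot be rejected.
print("p-value before differencing:", adfuller(x)[1])
print("p-value after differencing: ", adfuller(np.diff(x))[1])

# The I(1) step is handled internally by setting d=1 in ARIMA(p, d, q).
forecast = ARIMA(x, order=(1, 1, 1)).fit().forecast(steps=20)
print(forecast[:5])
```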
Structural equation modeling of the inflammatory response to traffic air pollution
Baja, Emmanuel S.; Schwartz, Joel D.; Coull, Brent A.; Wellenius, Gregory A.; Vokonas, Pantel S.; Suh, Helen H.
2015-01-01
Several epidemiological studies have reported conflicting results on the effect of traffic-related pollutants on markers of inflammation. In a Bayesian framework, we examined the effect of traffic pollution on inflammation using structural equation models (SEMs). We studied measurements of C-reactive protein (CRP), soluble vascular cell adhesion molecule-1 (sVCAM-1), and soluble intracellular adhesion molecule-1 (sICAM-1) for 749 elderly men from the Normative Aging Study. Using repeated measures SEMs, we fit a latent variable for traffic pollution that is reflected by levels of black carbon, carbon monoxide, nitrogen monoxide and nitrogen dioxide to estimate its effect on a latent variable for inflammation that included sICAM-1, sVCAM-1 and CRP. Exposure periods were assessed using 1-, 2-, 3-, 7-, 14- and 30-day moving averages previsit. We compared our findings using SEMs with those obtained using linear mixed models. Traffic pollution was related to increased inflammation for 3-, 7-, 14- and 30-day exposure periods. An inter-quartile range increase in traffic pollution was associated with a 2.3% (95% posterior interval (PI): 0.0–4.7%) increase in inflammation for the 3-day moving average, with the most significant association observed for the 30-day moving average (23.9%; 95% PI: 13.9–36.7%). Traffic pollution adversely impacts inflammation in the elderly. SEMs in a Bayesian framework can comprehensively incorporate multiple pollutants and health outcomes simultaneously in air pollution–cardiovascular epidemiological studies. PMID:23232970
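Constructing the multi-day moving-average exposure windows used above is straightforward with pandas; the sketch below builds trailing 1- to 30-day averages for two hypothetical pollutant columns on synthetic data (column names and distributions are assumptions).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
days = pd.date_range("2000-01-01", periods=365, freq="D")

# Hypothetical daily traffic-pollutant measurements.
df = pd.DataFrame({"black_carbon": rng.gamma(2.0, 0.5, len(days)),
                   "no2": rng.gamma(5.0, 4.0, len(days))}, index=days)

# Trailing moving averages for the exposure windows used above.
pollutants = list(df.columns)
for window in (1, 2, 3, 7, 14, 30):
    for pollutant in pollutants:
        df[f"{pollutant}_ma{window}"] = (df[pollutant]
                                         .rolling(window, min_periods=window)
                                         .mean())

print(df.filter(like="black_carbon").tail(3).round(2))
```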
Park, Yoonah; Yong, Yuen Geng; Yun, Seong Hyeon; Jung, Kyung Uk; Huh, Jung Wook; Cho, Yong Beom; Kim, Hee Cheol; Lee, Woo Yong; Chun, Ho-Kyung
2015-05-01
This study aimed to compare the learning curves and early postoperative outcomes for conventional laparoscopic (CL) and single incision laparoscopic (SIL) right hemicolectomy (RHC). This retrospective study included the initial 35 cases in each group. Learning curves were evaluated by the moving average of operative time, the mean operative time of every five consecutive cases, and cumulative sum (CUSUM) analysis. The learning phase was considered overcome when the moving average of operative times reached a plateau, and when the mean operative time of every five consecutive cases reached a low point and subsequently did not vary by more than 30 minutes. Six patients with missing data in the CL RHC group were excluded from the analyses. According to the mean operative time of every five consecutive cases, the learning phase of SIL and CL RHC was completed between 26 and 30 cases, and 16 and 20 cases, respectively. Moving average analysis revealed that approximately 31 (SIL) and 25 (CL) cases were needed to complete the learning phase, respectively. CUSUM analysis demonstrated that 10 (SIL) and two (CL) cases were required to reach a steady state of complication-free performance, respectively. The postoperative complication rate was higher in the SIL group than in the CL group, but the difference was not statistically significant (17.1% vs. 3.4%). The learning phase of SIL RHC is longer than that of CL RHC. Early oncological outcomes of both techniques were comparable. However, SIL RHC had a higher, though statistically insignificant, complication rate than CL RHC during the learning phase.
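The two learning-curve summaries used above (a moving average of operative time and a CUSUM of deviations from the overall mean) are simple to compute; the sketch below uses simulated operative times, not the study's cases.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated operative times (minutes) for 35 consecutive cases:
# long and variable early on, shorter and tighter once the learning phase ends.
op_time = np.concatenate([rng.normal(220, 30, 15), rng.normal(160, 15, 20)])

# Moving average over every 5 consecutive cases (trailing window).
window = 5
moving_avg = np.convolve(op_time, np.ones(window) / window, mode="valid")

# CUSUM of deviations from the overall mean operative time.
cusum = np.cumsum(op_time - op_time.mean())

print("last moving averages (min):", np.round(moving_avg[-5:], 1))
# For a downward shift in operative time, the CUSUM peaks near the change point.
print("CUSUM turning point near case", int(np.argmax(cusum)) + 1)
```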
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Franklin, M. Rose (Technical Monitor)
2000-01-01
Since 1750, the number of cataclysmic volcanic eruptions (i.e., those having a volcanic explosivity index, or VEI, equal to 4 or larger) per decade is found to span 2-11, with 96% located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the time series has higher values since the 1860s than before, measuring 8.00 in the 1910s (the highest value) and measuring 6.50 in the 1980s, the highest since the 1810s' peak. On the basis of the usual behavior of the first difference of the two-point moving averages, one infers that the two-point moving average for the 1990s will measure about 6.50 +/- 1.00, implying that about 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI equal to 5 or larger) nearly always have been associated with episodes of short-term global cooling, the occurrence of even one could ameliorate the effects of global warming. Poisson probability distributions reveal that the probability of one or more VEI equal to 4 or larger events occurring within the next ten years is >99%, while it is about 49% for VEI equal to 5 or larger events and 18% for VEI equal to 6 or larger events. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next 10 years appears reasonably high.
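The quoted exceedance probabilities follow directly from a Poisson model, P(N >= 1) = 1 - exp(-lambda). The sketch below uses assumed decadal rates chosen only to roughly reproduce the percentages above; they are not the paper's fitted values.

```python
from math import exp

def prob_at_least_one(rate_per_decade):
    """P(N >= 1) over one decade for a Poisson process with the given rate."""
    return 1.0 - exp(-rate_per_decade)

# Assumed average eruption rates per decade (illustrative only).
for vei, rate in [(">=4", 6.5), (">=5", 0.67), (">=6", 0.20)]:
    print(f"VEI {vei}: P(one or more in 10 yr) = {prob_at_least_one(rate):.2f}")
```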
Levine, Matthew E; Albers, David J; Hripcsak, George
2016-01-01
Time series analysis methods have been shown to reveal clinical and biological associations in data collected in the electronic health record. We wish to develop reliable high-throughput methods for identifying adverse drug effects that are easy to implement and produce readily interpretable results. To move toward this goal, we used univariate and multivariate lagged regression models to investigate associations between twenty pairs of drug orders and laboratory measurements. Multivariate lagged regression models exhibited higher sensitivity and specificity than univariate lagged regression in the 20 examples, and incorporating autoregressive terms for labs and drugs produced more robust signals in cases of known associations among the 20 example pairings. Moreover, including inpatient admission terms in the model attenuated the signals for some cases of unlikely associations, demonstrating how multivariate lagged regression models' explicit handling of context-based variables can provide a simple way to probe for health-care processes that confound analyses of EHR data.
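A minimal sketch of a multivariate lagged regression of this kind is shown below: a lab value is regressed on its own lags, lagged drug exposure, and an admission indicator. The data, variable names, and lag structure are synthetic assumptions, not the study's cohort or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400

# Synthetic daily data: drug exposure indicator, admission indicator, lab value.
drug = rng.binomial(1, 0.2, n)
admit = rng.binomial(1, 0.1, n)
lab = np.zeros(n)
for t in range(1, n):
    lab[t] = 0.6 * lab[t - 1] + 0.8 * drug[t - 1] + 0.3 * admit[t] + rng.normal()

df = pd.DataFrame({"lab": lab, "drug": drug, "admit": admit})
for lag in (1, 2):
    df[f"lab_lag{lag}"] = df["lab"].shift(lag)    # autoregressive lab terms
    df[f"drug_lag{lag}"] = df["drug"].shift(lag)  # lagged drug exposure
df = df.dropna()

X = sm.add_constant(df[["lab_lag1", "lab_lag2", "drug_lag1", "drug_lag2", "admit"]])
print(sm.OLS(df["lab"], X).fit().params.round(2))
```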
NASA Technical Reports Server (NTRS)
Forbes, T. G.; Hones, E. W., Jr.; Bame, S. J.; Asbridge, J. R.; Paschmann, G.; Sckopke, N.; Russell, C. T.
1981-01-01
From an ISEE survey of substorm dropouts and recoveries during the period February 5 to May 25, 1978, 66 timing events observed by the Los Alamos Scientific Laboratory/Max-Planck-Institut Fast Plasma Experiments were studied in detail. Near substorm onset, both the average timing velocity and the bulk flow velocity at the edge of the plasma sheet are inward, toward the center. Measured normal to the surface of the plasma sheet, the timing velocity is 23 ± 18 km/s and the proton flow velocity is 20 ± 8 km/s. During substorm recovery, the plasma sheet reappears moving outward with an average timing velocity of 133 ± 31 km/s; however, the corresponding proton flow velocity is only 3 ± 7 km/s in the same direction. It is suggested that the difference between the average timing velocity for the expansion of the plasma sheet and the plasma bulk flow perpendicular to the surface of the sheet during substorm recovery is most likely the result of surface waves moving past the position of the satellites.
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process control methods have been developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. It has been noted that these charts are not suitable if the same control limits are used as in the case of independent variables. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the residual process, a procedure that is permitted provided the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the EWMA mean chart for autocorrelated processes derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
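For reference, the EWMA statistic and its standard time-varying control limits can be computed as in the sketch below; the smoothing constant, control-limit width, target, and process standard deviation are assumed, and the shift is simulated rather than taken from real process data.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """Return the EWMA statistic and its time-varying control limits."""
    z = np.empty(len(x), dtype=float)
    z_prev = mu0
    for t, xt in enumerate(x):
        z_prev = lam * xt + (1.0 - lam) * z_prev   # z_t = lam*x_t + (1-lam)*z_{t-1}
        z[t] = z_prev
    t = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu0 - half_width, mu0 + half_width

rng = np.random.default_rng(5)
x = rng.normal(10.0, 1.0, 100)
x[60:] += 0.8                      # small sustained shift the chart should flag

z, lcl, ucl = ewma_chart(x, mu0=10.0, sigma=1.0)
# Index of the first out-of-control sample (0 if nothing is flagged).
print("first out-of-control sample:", int(np.argmax((z > ucl) | (z < lcl))))
```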
The vacuum friction paradox and related puzzles
NASA Astrophysics Data System (ADS)
Barnett, Stephen M.; Sonnleitner, Matthias
2018-04-01
The frequency of light emitted by a moving source is shifted by a factor proportional to its velocity. We find that this Doppler shift requires the existence of a paradoxical effect: that a moving atom radiating in otherwise empty space feels a net or average force acting against its direction of motion and proportional in magnitude to its speed. Yet there is no preferred rest frame, either in relativity or in Newtonian mechanics, so how can there be a vacuum friction force?
Gleason, Jessie A; Bielory, Leonard; Fagliano, Jerald A
2014-07-01
Asthma is one of the most common chronic diseases among school-aged children in the United States. Environmental respiratory irritants exacerbate asthma among children. Understanding the impact of a variety of known and biologically plausible environmental irritants and triggers among children in New Jersey - ozone, fine particulate matter (PM2.5), tree pollen, weed pollen, grass pollen and ragweed - would allow for informed public health interventions. A time-stratified case-crossover design was used to study the transient impact of ozone, PM2.5 and pollen on the acute onset of pediatric asthma. Daily emergency department visits were obtained for children aged 3-17 years with a primary diagnosis of asthma during the warm season (April through September), 2004-2007 (inclusive). Bi-directional control sampling was used to select two control periods for each case for a total of 65,562 inclusion days. Since the period of exposure prior to the emergency department visit may be the most clinically relevant, lag exposures were investigated (same day (lag0), 1, 2, 3, 4, and 5, as well as 3-day and 5-day moving averages). Multivariable conditional logistic regression controlling for holiday, school-in-session indicator, and 3-day moving average for temperature and relative humidity was used to examine the associations. Odds ratios are based on interquartile range (IQR) increases or 10-unit increases when IQR ranges were narrow. Single-pollutant models as well as multipollutant models were examined. Stratification on gender, race, ethnicity and socioeconomic status was explored. The associations with ozone and PM2.5 were strongest on the same day (lag0) of the emergency department visit (RR IQR=1.05, 95% CI 1.04-1.06) and (RR IQR=1.03, 95% CI 1.02-1.04), respectively, with a decreasing lag effect. Tree and weed pollen were associated with pediatric ED visits; the largest magnitudes of association were with the 5-day average (RR IQR=1.23, 95% CI 1.21-1.25) and (RR 10=1.13, 95% CI 1.12-1.14), respectively. Grass pollen was only minimally associated with the outcome, while ragweed had a negative association. The ambient air pollutant ozone is associated with increases in pediatric emergency department asthma visits during the warm weather season. The different pollen types showed different associations with the outcome. High levels of tree pollen appear to be an important risk factor in asthma exacerbations. Copyright © 2014 Elsevier Inc. All rights reserved.
Rising gasoline prices increase new motorcycle sales and fatalities.
Zhu, He; Wilson, Fernando A; Stimpson, Jim P; Hilsenrath, Peter E
2015-12-01
We examined whether sales of new motorcycles were a mechanism to explain the relationship between motorcycle fatalities and gasoline prices. The data came from the Motorcycle Industry Council, Energy Information Administration and Fatality Analysis Reporting System for 1984-2009. Autoregressive integrated moving average (ARIMA) regressions estimated the effect of inflation-adjusted gasoline price on motorcycle sales, and logistic regressions estimated odds ratios (ORs) between new and old motorcycle fatalities when gasoline prices increase. New motorcycle sales were positively correlated with gasoline prices (r = 0.78) and new motorcycle fatalities (r = 0.92). ARIMA analysis estimated that a US$1 increase in gasoline prices would result in 295,000 new motorcycle sales and, consequently, 233 new motorcycle fatalities. Compared to crashes on older motorcycle models, those on new motorcycles were more likely to involve young riders, occur in the afternoon, in clear weather, with a large engine displacement, and without alcohol involvement. Riders on new motorcycles were more likely to be in fatal crashes relative to older motorcycles (OR 1.14, 95% confidence interval (CI) 1.02-1.28) when gasoline prices increase. Our findings suggest that, in response to increasing gasoline prices, people tend to purchase new motorcycles, and this is accompanied by significantly increased crash risk. There are several policy mechanisms that can be used to lower the risk of motorcycle crash injuries through the mechanism of gas prices and motorcycle sales such as raising awareness of motorcycling risks, enhancing licensing and testing requirements, limiting motorcycle power-to-weight ratios for inexperienced riders, and developing mandatory training programs for new riders.
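An ARIMA regression with an exogenous price series can be sketched as below using statsmodels; the annual sales and price series are synthetic and the model order is illustrative, not the study's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
n = 26  # annual observations, e.g. a 1984-2009 span

gas_price = pd.Series(1.5 + np.cumsum(rng.normal(0, 0.1, n)), name="gas_price")
sales = pd.Series(300 + 295 * gas_price.to_numpy()
                  + np.cumsum(rng.normal(0, 20, n)), name="new_sales")  # thousands

# ARIMA(1,1,0) regression of sales on inflation-adjusted gas price.
fit = ARIMA(sales, exog=gas_price, order=(1, 1, 0)).fit()
print(fit.params)  # the 'gas_price' coefficient is the estimated sales response
```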
Methodology for the AutoRegressive Planet Search (ARPS) Project
NASA Astrophysics Data System (ADS)
Feigelson, Eric; Caceres, Gabriel; ARPS Collaboration
2018-01-01
The detection of periodic signals of transiting exoplanets is often impeded by the presence of aperiodic photometric variations. This variability is intrinsic to the host star in space-based observations (typically arising from magnetic activity) and from observational conditions in ground-based observations. The most common statistical procedures to remove stellar variations are nonparametric, such as wavelet decomposition or Gaussian Processes regression. However, many stars display variability with autoregressive properties, wherein later flux values are correlated with previous ones. Provided the time series is evenly spaced, parametric autoregressive models can prove very effective. Here we present the methodology of the Autoregressive Planet Search (ARPS) project, which uses Autoregressive Integrated Moving Average (ARIMA) models to treat a wide variety of stochastic short-memory processes, as well as nonstationarity. Additionally, we introduce a planet-search algorithm to detect periodic transits in the time-series residuals after application of ARIMA models. Our matched-filter algorithm, the Transit Comb Filter (TCF), replaces the traditional box-fitting step. We construct a periodogram based on the TCF to concentrate the signal of these periodic spikes. Various features of the original light curves, the ARIMA fits, the TCF periodograms, and folded light curves at peaks of the TCF periodogram can then be collected to provide constraints for planet detection. These features provide input into a multivariate classifier when a training set is available. The ARPS procedure has been applied to NASA's Kepler mission observations of ~200,000 stars (Caceres, Dissertation Talk, this meeting) and will be applied in the future to other datasets.
Stone, Wesley W.; Crawford, Charles G.; Gilliom, Robert J.
2013-01-01
Watershed Regressions for Pesticides for multiple pesticides (WARP-MP) are statistical models developed to predict concentration statistics for a wide range of pesticides in unmonitored streams. The WARP-MP models use the national atrazine WARP models in conjunction with an adjustment factor for each additional pesticide. The WARP-MP models perform best for pesticides with application timing and methods similar to those used with atrazine. For other pesticides, WARP-MP models tend to overpredict concentration statistics for the model development sites. For WARP and WARP-MP, the less-than-ideal sampling frequency for the model development sites leads to underestimation of the shorter-duration concentrations; hence, the WARP models tend to underpredict 4- and 21-d maximum moving-average concentrations, with median errors ranging from 9 to 38%. As a result of this sampling bias, pesticides that performed well with the model development sites are expected to have predictions that are biased low for these shorter-duration concentration statistics. The overprediction by WARP-MP apparent for some of the pesticides is variably offset by underestimation of the model development concentration statistics. Of the 112 pesticides used in the WARP-MP application to stream segments nationwide, 25 were predicted to have concentration statistics with a 50% or greater probability of exceeding one or more aquatic life benchmarks in one or more stream segments. Geographically, many of the modeled streams in the Corn Belt Region were predicted to have one or more pesticides that exceeded an aquatic life benchmark during 2009, indicating the potential vulnerability of streams in this region.
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2016-06-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel
2015-08-01
Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead, with an 8-hour granularity, using support vector regression, M5P, and stratified average time-series models were generated with an open-source software package. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model with a 95% confidence interval. Our findings suggest that medium and severe understaffing situations could be reduced in more than an order of magnitude and average yearly savings of up to €683,500 could be achieved if dynamic nursing staff allocation was performed with support vector regression instead of the static staffing levels currently in use.
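A support vector regression forecast of shift-level visit counts can be sketched as below using scikit-learn; the three years of synthetic 8-hour counts, the lag features, and the SVR hyperparameters are all assumptions, not the hospital data or the tuned models from the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 3 * 365 * 3  # three years of 8-hour shifts (three shifts per day)

shift_of_day = np.tile([0, 1, 2], n // 3)
weekly = 10 * np.sin(2 * np.pi * np.arange(n) / 21)         # weekly pattern
visits = 60 + weekly + 8 * (shift_of_day == 1) + rng.poisson(5, n)

# Features: same-shift counts one and two weeks earlier plus a shift indicator.
X = np.column_stack([np.roll(visits, 21), np.roll(visits, 42), shift_of_day])[42:]
y = visits[42:]

split = len(y) - 90                       # hold out the last 90 shifts
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("mean absolute error:", round(float(np.abs(pred - y[split:]).mean()), 1))
```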
Lankila, Tiina; Näyhä, Simo; Rautio, Arja; Koiranen, Markku; Rusanen, Jarmo; Taanila, Anja
2013-01-01
We examined the association of health and well-being with moving using a detailed geographical scale. 7845 men and women born in northern Finland in 1966 were surveyed by postal questionnaire in 1997 and linked to 1 km(2) geographical grids based on each subject's home address in 1997-2000. Population density was used to classify each grid as rural (1-100 inhabitants/km²) or urban (>100 inhabitants/km²) type. Moving was treated as a three-class response variate (not moved; moved to different type of grid; moved to similar type of grid). Moving was regressed on five explanatory factors (life satisfaction, self-reported health, lifetime morbidity, activity-limiting illness and use of health services), adjusting for factors potentially associated with health and moving (gender, marital status, having children, housing tenure, education, employment status and previous move). The results were expressed as odds ratios (OR) and their 95% confidence intervals (CI). Moves from rural to urban grids were associated with dissatisfaction with current life (adjusted OR 2.01; 95% CI 1.26-3.22) and having somatic (OR 1.66; 1.07-2.59) or psychiatric (OR 2.37; 1.21-4.63) morbidities, the corresponding ORs for moves from rural to other rural grids being 1.71 (0.98-2.98), 1.63 (0.95-2.78) and 2.09 (0.93-4.70), respectively. Among urban dwellers, only the frequent use of health services (≥ 21 times/year) was associated with moving, the adjusted ORs being 1.65 (1.05-2.57) for moves from urban to rural grids and 1.30 (1.03-1.64) for urban to other urban grids. We conclude that dissatisfaction with life and history of diseases and injuries, especially psychiatric morbidity, may increase the propensity to move from rural to urbanised environments, while availability of health services may contribute to moves within urban areas and also to moves from urban areas to the countryside, where high-level health services enable a good quality of life for those attracted by the pastoral environment. Copyright © 2012 Elsevier Ltd. All rights reserved.
Hydrogeology and leachate movement near two chemical-waste sites in Oswego County, New York
Anderson, H.R.; Miller, Todd S.
1986-01-01
Forty-five observation wells and test holes were installed at two chemical waste disposal sites in Oswego County, New York, to evaluate the hydrogeologic conditions and the rate and direction of leachate migration. At the site near Oswego, groundwater moves northward at an average velocity of 0.4 ft/day through unconsolidated glacial deposits and discharges into White Creek and Wine Creek, which border the site and discharge to Lake Ontario. Leaking barrels of chemical wastes have contaminated the groundwater within the site, as evidenced by the detection of 10 'priority pollutant' organic compounds and elevated values of specific conductance, chloride, arsenic, lead, and mercury. At the site near Fulton, where 8,000 barrels of chemical wastes are buried, groundwater in the sandy surficial aquifer bordering the landfill on the south and east moves southward and eastward at an average velocity of 2.8 ft/day and discharges to Bell Creek, which discharges to the Oswego River, or moves beneath the landfill. Leachate is migrating eastward, southeastward, and southwestward, as evidenced by elevated values of specific conductance, temperature, and concentrations of several trace metals at wells east, southeast, and southwest of the site. (USGS)
NASA Astrophysics Data System (ADS)
Ferrera, Elisabetta; Giammanco, Salvatore; Cannata, Andrea; Montalto, Placido
2013-04-01
From November 2009 to April 2011, soil radon activity was continuously monitored using a Barasol® probe located on the upper NE flank of Mt. Etna volcano, close either to the Piano Provenzana fault or to the NE-Rift. Seismic and volcanological data have been analyzed together with the radon data. We also analyzed air and soil temperature, barometric pressure, snow and rain fall data. In order to find possible correlations among the above parameters, and hence to reveal possible anomalies in the radon time-series, we used different statistical methods: i) multivariate linear regression; ii) cross-correlation; iii) coherence analysis through the wavelet transform. Multivariate regression indicated a modest influence on soil radon from environmental parameters (R2 = 0.31). When using 100-day time windows, the R2 values showed wide variations in time, reaching their maxima (~0.63-0.66) during summer. Cross-correlation analysis over 100-day moving averages showed that, similar to the multivariate linear regression analysis, the summer period is characterised by the best correlation between radon data and environmental parameters. Lastly, wavelet coherence analysis allowed a multi-resolution coherence analysis of the acquired time series. This approach allows the relations among different signals to be studied in either the time or the frequency domain. It confirmed the results of the previous methods, but also allowed us to recognize correlations between radon and environmental parameters at different observation scales (e.g., radon activity changed during strong precipitation, but also during anomalous variations of soil temperature uncorrelated with seasonal fluctuations). Our work suggests that in order to make an accurate analysis of the relations among distinct signals it is necessary to use different techniques that give complementary analytical information. In particular, the wavelet analysis proved to be very effective in discriminating radon changes due to environmental influences from those correlated with impending seismic or volcanic events.
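The cross-correlation of 100-day moving averages described above can be sketched with pandas; the radon and soil-temperature series below are synthetic sinusoids with noise, and the lags examined are arbitrary assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
days = pd.date_range("2009-11-01", periods=500, freq="D")
t = np.arange(len(days))

soil_temp = 15 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, len(t))
radon = 800 + 20 * np.sin(2 * np.pi * (t - 15) / 365) + rng.normal(0, 40, len(t))

df = pd.DataFrame({"radon": radon, "soil_temp": soil_temp}, index=days)
smoothed = df.rolling(100, min_periods=100).mean().dropna()

# Correlation of the smoothed series at a few lags (radon lagging temperature).
for lag in (0, 5, 15, 30):
    r = smoothed["radon"].corr(smoothed["soil_temp"].shift(lag))
    print(f"lag {lag:>2} d: r = {r:.2f}")
```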
The Accuracy of Talking Pedometers when Used during Free-Living: A Comparison of Four Devices
ERIC Educational Resources Information Center
Albright, Carolyn; Jerome, Gerald J.
2011-01-01
The purpose of this study was to determine the accuracy of four commercially available talking pedometers in measuring accumulated daily steps of adult participants while they moved independently. Ten young sighted adults (with an average age of 24.1 [plus or minus] 4.6 years), 10 older sighted adults (with an average age of 73 [plus or minus] 5.5…
Comparison of estimators for rolling samples using Forest Inventory and Analysis data
Devin S. Johnson; Michael S. Williams; Raymond L. Czaplewski
2003-01-01
The performance of three classes of weighted average estimators is studied for an annual inventory design similar to the Forest Inventory and Analysis program of the United States. The first class is based on an ARIMA(0,1,1) time series model. The equal weight, simple moving average is a member of this class. The second class is based on an ARIMA(0,2,2) time series...
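The equal-weight simple moving average mentioned above is the simplest member of the first class; the sketch below contrasts it with an exponentially declining weighting of the same annual panel estimates, the flavour of estimator motivated by an ARIMA(0,1,1) model. The panel values and the weighting constant are illustrative assumptions.

```python
import numpy as np

# Hypothetical annual panel estimates of a forest attribute over a 5-year cycle.
panel_estimates = np.array([102.0, 98.5, 101.2, 104.0, 99.8])

# Equal-weight simple moving average.
sma = panel_estimates.mean()

# Exponentially declining weights (most recent panel weighted heaviest).
beta = 0.7
weights = beta ** np.arange(len(panel_estimates))[::-1]
wma = np.average(panel_estimates, weights=weights)

print(f"simple moving average:   {sma:.1f}")
print(f"weighted moving average: {wma:.1f}")
```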
NASA Astrophysics Data System (ADS)
Wu, Yu-Jie; Lin, Guan-Wei
2017-04-01
Since 1999, Taiwan has experienced a rapid rise in the number of landslides, which reached a peak after the 2009 Typhoon Morakot. Although ground-motion signals induced by slope processes can be recorded by seismographs, they are difficult to distinguish from continuous seismic records because they lack distinct P and S waves. In this study, we combine three common seismic detectors: the short-term average/long-term average (STA/LTA) approach and two diagnostic functions, the moving average and the scintillation index. Based on these detectors, we establish an auto-detection algorithm for landslide-quakes, with detection thresholds defined to distinguish landslide-quakes from earthquakes and background noise. To further test the proposed detection algorithm, we apply it to seismic archives recorded by the Broadband Array in Taiwan for Seismology (BATS) during the 2009 Typhoon Morakot, and the discrete landslide-quakes detected by the automatic algorithm are located. The detection results are consistent with those of visual inspection, and hence the algorithm can be used to automatically monitor landslide-quakes.
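A basic STA/LTA detector of the kind combined above can be sketched in a few lines; the window lengths, trigger threshold, sampling rate, and synthetic trace below are assumptions, not the BATS processing parameters.

```python
import numpy as np

def sta_lta(trace, sta_len, lta_len):
    """Ratio of short-term to long-term average of the squared amplitude."""
    energy = trace ** 2
    sta = np.convolve(energy, np.ones(sta_len) / sta_len, mode="same")
    lta = np.convolve(energy, np.ones(lta_len) / lta_len, mode="same")
    return sta / np.maximum(lta, 1e-12)

rng = np.random.default_rng(9)
fs = 100                                        # samples per second (assumed)
trace = rng.normal(0, 1, 60 * fs)               # one minute of background noise
trace[3000:3600] += 6 * rng.normal(0, 1, 600)   # emergent signal without clear P/S

ratio = sta_lta(trace, sta_len=1 * fs, lta_len=10 * fs)
trigger_on = 3.0                                # assumed detection threshold
print("first trigger at sample:", int(np.argmax(ratio > trigger_on)))
```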
High-Resolution Coarse-Grained Modeling Using Oriented Coarse-Grained Sites.
Haxton, Thomas K
2015-03-10
We introduce a method to bring nearly atomistic resolution to coarse-grained models, and we apply the method to proteins. Using a small number of coarse-grained sites (about one per eight atoms) but assigning an independent three-dimensional orientation to each site, we preferentially integrate out stiff degrees of freedom (bond lengths and angles, as well as dihedral angles in rings) that are accurately approximated by their average values, while retaining soft degrees of freedom (unconstrained dihedral angles) mostly responsible for conformational variability. We demonstrate that our scheme retains nearly atomistic resolution by mapping all experimental protein configurations in the Protein Data Bank onto coarse-grained configurations and then analytically backmapping those configurations back to all-atom configurations. This roundtrip mapping throws away all information associated with the eliminated (stiff) degrees of freedom except for their average values, which we use to construct optimal backmapping functions. Despite the 4:1 reduction in the number of degrees of freedom, we find that heavy atoms move only 0.051 Å on average during the roundtrip mapping, while hydrogens move 0.179 Å on average, an unprecedented combination of efficiency and accuracy among coarse-grained protein models. We discuss the advantages of such a high-resolution model for parametrizing effective interactions and accurately calculating observables through direct or multiscale simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Salloum, Maher; Lee, Jina
2017-07-10
KARMA4 is a C++ library for autoregressive moving average (ARMA) modeling and forecasting of time-series data while incorporating both process and observation error. KARMA4 is designed for fitting and forecasting of time-series data for predictive purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burlaga, L. F.; Ness, N. F., E-mail: lburlagahsp@verizon.net, E-mail: nfnudel@yahoo.com
2012-04-10
We examine the relationships between the magnetic field and the radial velocity component V_R observed in the heliosheath by instruments on Voyager 1 (V1). No increase in the magnetic field strength B was observed in a region where V_R decreased linearly from 70 km/s to 0 km/s as plasma moved outward past V1. An unusually broad transition from positive to negative polarity was observed during a ≈26 day interval when the heliospheric current sheet (HCS) moved below the latitude of V1 and the speed of V1 was comparable to the radial speed of the heliosheath flow. When V1 moved through a region where V_R ≈ 0 (the 'stagnation region'), B increased linearly with time by a factor of two, and the average of B was 0.14 nT. Nothing comparable to this was observed previously. The magnetic polarity was negative throughout the stagnation region for ≈580 days until 2011 DOY 235, indicating that the HCS was below the latitude of V1. The average passage times of the magnetic holes and proton boundary layers were the same during 2009 and 2011, because the plasma moved past V1 during 2009 at the same speed that V1 moved through the stagnation region during 2011. The microscale fluctuations of B in the stagnation region during 2011 are qualitatively the same as those observed in the heliosheath during 2009. These results suggest that the stagnation region is a part of the heliosheath, rather than a 'transition region' associated with the heliopause.
Aydogan, Tuğba; Akçay, BetÜl İlkay Sezgin; Kardeş, Esra; Ergin, Ahmet
2017-11-01
The objective of this study is to evaluate the diagnostic ability of retinal nerve fiber layer (RNFL), macular, and optic nerve head (ONH) parameters in healthy subjects, ocular hypertension (OHT), preperimetric glaucoma (PPG), and early glaucoma (EG) patients, to reveal factors affecting the diagnostic ability of spectral domain-optical coherence tomography (SD-OCT) parameters, and to identify risk factors for glaucoma. Three hundred and twenty-six eyes (89 healthy, 77 OHT, 94 PPG, and 66 EG eyes) were analyzed. RNFL, macular, and ONH parameters were measured with SD-OCT. The area under the receiver operating characteristic curve (AUC) and sensitivity at 95% specificity were calculated. Logistic regression analysis was used to determine the glaucoma risk factors. Receiver operating characteristic regression analysis was used to evaluate the influence of covariates on the diagnostic ability of parameters. In PPG patients, the parameters that had the largest AUC value were average RNFL thickness (0.83) and rim volume (0.83). In EG patients, the parameter that had the largest AUC value was average RNFL thickness (0.98). The logistic regression analysis showed average RNFL thickness was a risk factor for both PPG and EG. The diagnostic ability of average RNFL and average ganglion cell complex thickness increased as disease severity increased. Signal strength index did not affect diagnostic abilities. The diagnostic ability of average RNFL and rim area increased as disc area increased. When evaluating patients with glaucoma, patients at risk for glaucoma, and healthy controls, RNFL parameters deserve more attention in clinical practice. Further studies are needed to fully understand the influence of covariates on the diagnostic ability of OCT parameters.
Martin, Gary R.; Fowler, Kathleen K.; Arihood, Leslie D.
2016-09-06
Information on low-flow characteristics of streams is essential for the management of water resources. This report provides equations for estimating the 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and the harmonic-mean flow at ungaged, unregulated stream sites in Indiana. These equations were developed using the low-flow statistics and basin characteristics for 108 continuous-record streamgages in Indiana with at least 10 years of daily mean streamflow data through the 2011 climate year (April 1 through March 31). The equations were developed in cooperation with the Indiana Department of Environmental Management. Regression techniques were used to develop the equations for estimating low-flow frequency statistics and the harmonic-mean flows on the basis of drainage-basin characteristics. A geographic information system was used to measure basin characteristics for selected streamgages. A final set of 25 basin characteristics measured at all the streamgages was evaluated to choose the best predictors of the low-flow statistics. Logistic-regression equations applicable statewide are presented for estimating the probability that selected low-flow frequency statistics equal zero. These equations use the explanatory variables total drainage area, average transmissivity of the full thickness of the unconsolidated deposits within 1,000 feet of the stream network, and latitude of the basin outlet. The percentage of the streamgage low-flow statistics correctly classified as zero or nonzero using the logistic-regression equations ranged from 86.1 to 88.9 percent. Generalized-least-squares regression equations applicable statewide for estimating nonzero low-flow frequency statistics use total drainage area, the average hydraulic conductivity of the top 70 feet of unconsolidated deposits, the slope of the basin, and the index of permeability and thickness of the Quaternary surficial sediments as explanatory variables. The average standard error of prediction of these regression equations ranges from 55.7 to 61.5 percent. Regional weighted-least-squares regression equations were developed for estimating the harmonic-mean flows by dividing the State into three low-flow regions. The Northern region uses total drainage area and the average transmissivity of the entire thickness of unconsolidated deposits as explanatory variables. The Central region uses total drainage area, the average hydraulic conductivity of the entire thickness of unconsolidated deposits, and the index of permeability and thickness of the Quaternary surficial sediments. The Southern region uses total drainage area and the percent of the basin covered by forest. The average standard error of prediction for these equations ranges from 39.3 to 66.7 percent. The regional regression equations are applicable only to stream sites with low flows unaffected by regulation and to stream sites with drainage basin characteristic values within specified limits. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features and for urbanized basins. Extrapolations near and beyond the applicable basin characteristic limits will have unknown errors that may be large. Equations are presented for use in estimating the 90-percent prediction interval of the low-flow statistics estimated by use of the regression equations at a given stream site. The regression equations are to be incorporated into the U.S.
Geological Survey StreamStats Web-based application for Indiana. StreamStats allows users to select a stream site on a map and automatically measure the needed basin characteristics and compute the estimated low-flow statistics and associated prediction intervals.
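The logistic step described above, estimating the probability that a low-flow statistic equals zero from basin characteristics, can be sketched as below; the basin data are synthetic, the predictor names echo those in the report but the values and coefficients are purely illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 108  # number of streamgages used for model development

drainage_area = rng.lognormal(4.0, 1.0, n)          # mi^2 (synthetic)
transmissivity = rng.lognormal(6.0, 0.8, n)         # ft^2/d (synthetic)
latitude = rng.uniform(37.8, 41.8, n)               # decimal degrees (synthetic)

# Synthetic zero/nonzero indicator loosely tied to the predictors.
score = 1.2 * np.log(drainage_area) + 0.6 * np.log(transmissivity)
is_nonzero = rng.binomial(1, 1.0 / (1.0 + np.exp(-(score - score.mean()))))

X = sm.add_constant(np.column_stack([np.log(drainage_area),
                                     np.log(transmissivity), latitude]))
fit = sm.Logit(is_nonzero, X).fit(disp=False)
print(fit.params.round(2))   # constant, log(area), log(transmissivity), latitude
```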
Counties eliminating racial disparities in colorectal cancer mortality.
Rust, George; Zhang, Shun; Yu, Zhongyuan; Caplan, Lee; Jain, Sanjay; Ayer, Turgay; McRoy, Luceta; Levine, Robert S
2016-06-01
Although colorectal cancer (CRC) mortality rates are declining, racial-ethnic disparities in CRC mortality nationally are widening. Herein, the authors attempted to identify county-level variations in this pattern, and to characterize counties with improving disparity trends. The authors examined 20-year trends in US county-level black-white disparities in CRC age-adjusted mortality rates during the study period between 1989 and 2010. Using a mixed linear model, counties were grouped into mutually exclusive patterns of black-white racial disparity trends in age-adjusted CRC mortality across 20 three-year rolling average data points. County-level characteristics from census data and from the Area Health Resources File were normalized and entered into a principal component analysis. Multinomial logistic regression models were used to test the relation between these factors (clusters of related contextual variables) and the disparity trend pattern group for each county. Counties were grouped into 4 disparity trend pattern groups: 1) persistent disparity (parallel black and white trend lines); 2) diverging (widening disparity); 3) sustained equality; and 4) converging (moving from disparate outcomes toward equality). The initial principal component analysis clustered the 82 independent variables into a smaller number of components, 6 of which explained 47% of the county-level variation in disparity trend patterns. County-level variation in social determinants, health care workforce, and health systems all were found to contribute to variations in cancer mortality disparity trend patterns from 1990 through 2010. Counties sustaining equality over time or moving from disparities to equality in cancer mortality suggest that disparities are not inevitable, and provide hope that more communities can achieve optimal and equitable cancer outcomes for all. Cancer 2016;122:1735-48. © 2016 American Cancer Society.
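The analysis pipeline described above, reducing many county-level covariates to principal components and relating them to the four trend groups with multinomial logistic regression, can be sketched with scikit-learn; the county data and group labels below are random placeholders, so the fit itself is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
n_counties, n_vars = 600, 82

X = rng.normal(size=(n_counties, n_vars))   # normalized county-level covariates
groups = rng.integers(0, 4, n_counties)     # 4 disparity trend pattern groups

# Reduce the 82 variables to a handful of components, then fit a
# multinomial logistic regression on the component scores.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=6),
                      LogisticRegression(max_iter=1000))
model.fit(X, groups)
print("explained variance ratio:",
      model.named_steps["pca"].explained_variance_ratio_.round(2))
```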
NASA Astrophysics Data System (ADS)
Dai, Junhu; Xu, Yunjia; Wang, Huanjiong; Alatalo, Juha; Tao, Zexing; Ge, Quansheng
2017-12-01
Continuous long-term temperature sensitivity (ST) of leaf unfolding date (LUD) and main impacting factors in spring in the period 1978-2014 for 40 plant species in Mudanjiang, Heilongjiang Province, Northeast China, were analyzed by using observation data from the China Phenological Observation Network (CPON), together with the corresponding meteorological data from the China Meteorological Data Service Center. Temperature sensitivities, slopes of the regression between LUD and mean temperature during the optimum preseason (OP), were analyzed using a 15-year moving window to determine their temporal trends. Major factors impacting ST were then chosen and evaluated by applying a random sampling method. The results showed that LUD was sensitive to mean temperature in a defined period before phenophase onset for all plant species analyzed. Over the period 1978-2014, the mean ST of LUD for all plant species was −3.2 ± 0.49 days °C⁻¹. The moving window analysis revealed that 75% of species displayed increasing ST of LUD, with 55% showing significant increases (P < 0.05). ST for the other 25% exhibited a decreasing trend, with 17% showing significant decreases (P < 0.05). On average, ST increased by 16%, from −2.8 ± 0.83 days °C⁻¹ during 1980-1994 to −3.30 ± 0.65 days °C⁻¹ during 2000-2014. For species with later LUD and longer OP, ST tended to increase more, while species with earlier LUD and shorter OP tended to display a decreasing ST. The standard deviation of preseason temperature impacted the temporal variation in ST. Chilling conditions influenced ST for some species, but photoperiod limitation did not have significant or coherent effects on changes in ST.
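A 15-year moving-window regression of this kind reduces to fitting a slope within each window; the sketch below uses synthetic temperature and leaf-unfolding series with an assumed sensitivity near −3 days per °C, not the CPON observations.

```python
import numpy as np

rng = np.random.default_rng(12)
years = np.arange(1978, 2015)

# Synthetic preseason temperature (deg C) and leaf-unfolding date (day of year).
temp = 4.0 + 0.03 * (years - years[0]) + rng.normal(0, 1.0, len(years))
lud = 130 - 3.0 * (temp - temp.mean()) + rng.normal(0, 3.0, len(years))

window = 15
for start in range(0, len(years) - window + 1, 5):
    t_win, l_win = temp[start:start + window], lud[start:start + window]
    slope = np.polyfit(t_win, l_win, 1)[0]        # temperature sensitivity S_T
    print(f"{years[start]}-{years[start + window - 1]}: "
          f"S_T = {slope:.1f} days per deg C")
```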
A prediction of the trend of population development in urban and rural areas in China.
Hu, Y
1998-01-01
This study predicts trends in population growth, urbanization, and age structure in China. Data were obtained from the 1990 Census. Population totaled 1.22 billion at the end of 1996. The fertility model predicts future fertility by variant and parity; parameters are provided in a table. High, medium, and low fertility variants, respectively, are based on the total regressive fertility rates (TRFR) of 2.23, 1.9, and 1.6. The medium variant assumes 2 children in rural areas. The low variant is ideal and assumes no third parity in rural areas. Urbanization means an annual average increase of 0.5% after 1996 at pace I and 0.8% at pace II. Urban population will be 57.8% of total population by 2050. Under these three variants, population size in 2000 will be 898 million in rural and 403 million in urban areas, 869 million in rural and 400 million in urban areas, and 856 million in rural and 398 million in urban areas, respectively. Population will peak at 1.7 billion in 2050, at 1.48 billion in 2033, and at 1.38 billion in 2023, respectively. During the period 2000-2020, about 10-14 million rural migrants will move to urban areas; 10 million will move thereafter. The elderly aged over 60 years will reach 7% by 2000 and 20% by 2040. Rural population will age faster than urban population. The working age population will reach 775 million in 2000, peak at 868 million in 2016, and will always be over 60% of total population. School-age population will amount to over 300 million by 2030. Young population will always be more than 25% in rural areas, which is nearly 17 percentage points higher than in urban areas.
Eleven-year trend in acetanilide pesticide degradates in the Iowa River, Iowa
Kalkhoff, Stephen J.; Vecchia, Aldo V.; Capel, Paul D.; Meyer, Michael T.
2012-01-01
Trends in concentration and loads of acetochlor, alachlor, and metolachlor and their ethanesulfonic acid (ESA) and oxanilic acid (OXA) degradates were studied from 1996 through 2006 in the main stem of the Iowa River, Iowa and in the South Fork Iowa River, a small tributary near the headwaters of the Iowa River. Concentration trends were determined using the parametric regression model SEAWAVE-Q, which accounts for seasonal and flow-related variability. Daily estimated concentrations generated from the model were used with daily streamflow to calculate daily and yearly loads. Acetochlor, alachlor, metolachlor, and their ESA and OXA degradates were generally present in >50% of the samples collected from both sites throughout the study. Their concentrations generally decreased from 1996 through 2006, although the rate of decrease was slower after 2001. Concentrations of the ESA and OXA degradates decreased from 3 to about 23% yr⁻¹. The concentration trend was related to the decreasing use of these compounds during the study period. Decreasing concentrations and constant runoff resulted in an average reduction of 10 to >3000 kg per year of alachlor and metolachlor ESA and OXA degradates being transported out of the Iowa River watershed. Transport of acetochlor and metolachlor parent compounds and their degradates from the Iowa River watershed ranged from <1% to about 6% of the annual application. The load trends also reflected the decreasing use of these compounds, but year-to-year changes in loads could not be explained by herbicide use alone. The trends were also affected by the timing and amount of precipitation. As expected, increased amounts of water moving through the watershed moved a greater percentage of the applied herbicides, especially the relatively soluble degradates, from the soils into the rivers through surface runoff, shallow groundwater inflow, and subsurface drainage.
Housing stability over two years and HIV risk among newly homeless youth.
Rosenthal, Doreen; Rotheram-Borus, Mary Jane; Batterham, Philip; Mallett, Shelley; Rice, Eric; Milburn, Norweeta G
2007-11-01
The stability of living situation was examined as a predictor of young people's HIV-related sexual and drug use acts two years after leaving home for the first time. Newly homeless youth aged 12-20 years were recruited in Los Angeles County, California, U.S.A. (n = 261) and Melbourne, Australia (n = 165) and followed longitudinally at 3, 6, 12, 18, and 24 months. Their family history of moves and the type and frequency of moves over the two years following becoming newly homeless were examined. Regression analyses indicated that recent sexual risk two years after becoming newly homeless was not related to the instability of youths' living situations; condom use was higher among youth with more placements in institutional settings and among males. Drug use was significantly related to having moved more often over two years and Melbourne youth used drugs significantly more than youth in Los Angeles.
Wildfire-Migration Dynamics: Lessons from Colorado’s Fourmile Canyon Fire
Nawrotzki, Raphael J.; Brenkert-Smith, Hannah; Hunter, Lori M.; Champ, Patricia A.
2014-01-01
The number of people living in wildfire-prone wildland-urban interface (WUI) communities is on the rise. Yet, no prior study has investigated wildfire-induced residential relocation from WUI areas after a major fire event. To provide insight into the association between socio-demographic and socio-psychological characteristics and wildfire-related intention to move, we use data from a survey of WUI residents in Boulder and Larimer Counties, Colorado. The data were collected two months after the devastating Fourmile Canyon fire destroyed 169 homes and burned over 6,000 acres of public and private land. Although the migrant sample is small, logistic regression models demonstrate that survey respondents intending to move in response to the wildfire do not differ socio-demographically from their non-migrant counterparts. They do, however, show significantly higher levels of risk perception. Investigating destination choices shows a preference for short distance moves. PMID:24882943
NASA Technical Reports Server (NTRS)
Edwards, T. R. (Inventor)
1985-01-01
Apparatus for doubling the data density rate of an analog to digital converter or doubling the data density storage capacity of a memory device is discussed. An interstitial data point midway between adjacent data points in a data stream having an even number of equal interval data points is generated by applying a set of predetermined one-dimensional convolute integer coefficients which can include a set of multiplier coefficients and a normalizer coefficient. Interpolator means apply the coefficients to the data points by weighting equally on each side of the center of the even number of equal interval data points to obtain an interstitial point value at the center of the data points. A one-dimensional output data set, which is twice as dense as a one-dimensional equal interval input data set, can be generated where the output data set includes interstitial points interdigitated between adjacent data points in the input data set. The method for generating the set of interstitial points is a weighted, nearest-neighbor, non-recursive, moving, smoothing averaging technique, equivalent to applying a polynomial regression calculation to the data set.
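A minimal sketch of the kind of midpoint interpolation the patent describes, assuming a symmetric 4-point integer kernel such as (-1, 9, 9, -1) with normalizer 16; the invention's actual coefficient sets may differ, and edge points are simply dropped here.

```python
import numpy as np

def densify(x, weights=(-1, 9, 9, -1), norm=16):
    """Insert an interstitial value midway between adjacent samples by applying
    integer convolute coefficients weighted equally on each side of the gap."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(weights, dtype=float)
    half = len(w) // 2
    out = []
    for i in range(half - 1, len(x) - half):
        out.append(x[i])
        out.append(np.dot(w, x[i - half + 1:i + half + 1]) / norm)
    out.append(x[len(x) - half])
    return np.array(out)  # roughly twice as dense as the interior of the input
```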
NASA Astrophysics Data System (ADS)
Peng, Chi; Wang, Meie; Chen, Weiping
2016-11-01
Spatial statistical methods including Cokriging interpolation, Moran's I analysis, and geographically weighted regression (GWR) were used for studying the spatial characteristics of polycyclic aromatic hydrocarbon (PAH) accumulation in urban, suburban, and rural soils of Beijing. The concentrations of PAHs decreased spatially as the level of urbanization decreased. Generally, PAHs in soil showed two spatial patterns on the regional scale: (1) regional baseline depositions with a radius of 16.5 km related to the level of urbanization and (2) isolated pockets of contaminated soil extending up to around 3.5 km from industrial point sources. In the urban areas, soil PAHs showed high spatial heterogeneity on the block scale, which was probably related to vegetation cover, land use, and physical soil disturbance. The distribution of total PAHs in urban blocks was unrelated to the indicators of the intensity of anthropogenic activity, namely population density, light intensity at night, and road density, but was significantly related to the same indicators in the suburban and rural areas. The moving averages of molecular ratios suggested that PAHs in the suburban and rural soils were a mix of local emissions and diffusion from urban areas.
Morrison, A C; Ferro, C; Pardo, R; Torres, M; Devlin, B; Wilson, M L; Tesh, R B
1995-07-01
Ecological studies on the sand fly Lutzomyia longipalpis (Lutz & Neiva) were conducted during 1990-1993 in a small rural community in Colombia where American visceral leishmaniasis is endemic. Standardized weekly sand fly collections made from pigpens and natural resting sites displayed a bimodal annual abundance cycle, with a small peak occurring in October-November and a larger one in April-May. Time series analysis was employed to quantify the associations between sand fly abundance and weather factors (temperature, relative humidity, and rainfall). In addition to a prominent 6-mo cycle, Fourier analysis of the collection data demonstrated that the L. longipalpis population also exhibited a 5- to 8-wk cycle that may represent the length of larval development. Autoregressive moving average models were fit to weekly collection data and their residuals were regressed against rainfall, temperature, and relative humidity. A significant positive association between female L. longipalpis abundance and the relative humidity and rainfall recorded 3 wk earlier was found, indicating that these factors may be of value in predicting sand fly abundance. Additionally, these data indicated that L. longipalpis larvae may become quiescent during adverse conditions.
Using Latent Class Analysis to Identify Profiles of Elder Abuse Perpetrators.
DeLiema, Marguerite; Yonashiro-Cho, Jeanine; Gassoumis, Zach D; Yon, Yongjie; Conrad, Ken J
2018-06-14
Research suggests that abuser risk factors differ across elder mistreatment types, but abuse interventions are not individualized. To move away from assumptions of perpetrator homogeneity and to inform intervention approaches, this study classifies abusers into subtypes according to their behavior profiles. Data are from the Older Adult Mistreatment Assessment administered to victims by Adult Protective Services (APS) in Illinois. Latent class analysis was used to categorize abusers (N = 336) using victim and caseworker reports on abusers' harmful and supportive behaviors and characteristics. Multinomial logistic regression was then used to determine which abuser profiles are associated with 4 types of mistreatment (neglect, physical, emotional, and financial) and with other sociodemographic characteristics. Abusers fall into 4 profiles descriptively labeled "Caregiver," "Temperamental," "Dependent Caregiver," and "Dangerous." Dangerous abusers have the highest levels of aggression, financial dependency, substance abuse, and irresponsibility. Caregivers are lowest in harmful characteristics and highest in providing emotional and instrumental support to victims. The 4 profiles significantly differ in the average age and gender of the abuser, the relationship to victims, and types of mistreatment committed. This is the first quantitative study to identify and characterize abuser subtypes. Tailored interventions are needed to reduce problem behaviors and enhance strengths specific to each abuser profile.
Chan, Herbert; Brasher, Penelope; Erdelyi, Shannon; Desapriya, Edi; Asbridge, Mark; Purssell, Roy; Macdonald, Scott; Schuurman, Nadine; Pike, Ian
2014-01-01
Objectives. We evaluated the public health benefits of traffic laws targeting speeding and drunk drivers (British Columbia, Canada, September 2010). Methods. We studied fatal crashes and ambulance dispatches and hospital admissions for road trauma, using interrupted time series with multiple nonequivalent comparison series. We determined estimates of effect using linear regression models incorporating an autoregressive integrated moving average error term. We used neighboring jurisdictions (Alberta, Saskatchewan, Washington State) as external controls. Results. In the 2 years after implementation of the new laws, significant decreases occurred in fatal crashes (21.0%; 95% confidence interval [CI] = 15.3, 26.4) and in hospital admissions (8.0%; 95% CI = 0.6, 14.9) and ambulance calls (7.2%; 95% CI = 1.1, 13.0) for road trauma. We found a very large reduction in alcohol-related fatal crashes (52.0%; 95% CI = 34.5, 69.5), and the benefits of the new laws are likely primarily the result of a reduction in drinking and driving. Conclusions. These findings suggest that laws calling for immediate sanctions for dangerous drivers can reduce road trauma and should be supported. PMID:25121822
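A schematic of an interrupted time series with an ARIMA-type error term and a step intervention, in the spirit of the analysis above; the synthetic series, model orders and variable names are placeholders rather than the authors' specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly road-trauma counts with a step change when the laws start.
idx = pd.date_range("2006-01-01", "2012-12-01", freq="MS")
rng = np.random.default_rng(0)
y = pd.Series(200 + rng.normal(0, 10, len(idx)), index=idx)
exog = pd.DataFrame({"law": (idx >= "2010-09-01").astype(int)}, index=idx)

# AR(1) errors with a seasonal MA term; the exog coefficient estimates the
# level shift attributable to the intervention.
res = SARIMAX(y, exog=exog, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(res.params)
```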
Using Google Trends and ambient temperature to predict seasonal influenza outbreaks.
Zhang, Yuzhou; Bambrick, Hilary; Mengersen, Kerrie; Tong, Shilu; Hu, Wenbiao
2018-05-16
The discovery of the dynamics of seasonal and non-seasonal influenza outbreaks remains a great challenge. Previous internet-based surveillance studies built purely on internet or climate data have potential for error. We collected influenza notifications, temperature and Google Trends (GT) data between January 1st, 2011 and December 31st, 2016. We performed time-series cross correlation analysis and temporal risk analysis to discover the characteristics of influenza epidemics in the period. Then, the seasonal autoregressive integrated moving average (SARIMA) model and regression tree model were developed to track influenza epidemics using GT and climate data. Influenza infection was significantly correlated with GT at lags of 1-7 weeks in Brisbane and Gold Coast, and with temperature at lags of 1-10 weeks for the two study settings. SARIMA models with GT and temperature data had better predictive performance. We identified that autoregression (AR) of influenza was the most important determinant of influenza occurrence in both Brisbane and Gold Coast. Our results suggested internet search metrics in conjunction with temperature can be used to predict influenza outbreaks, which can be considered as a pre-requisite for constructing early warning systems using search and temperature data. Copyright © 2018 Elsevier Ltd. All rights reserved.
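The following sketch shows one way a regression tree can be fed lagged influenza, Google Trends and temperature values, echoing the finding that autoregressive terms dominate; the synthetic data, lag set and tree depth are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def lagged_design(flu, gt, temp, lags=(1, 2, 3)):
    """Design matrix of lagged predictors (AR terms, Google Trends, temperature)."""
    X, y, m = [], [], max(lags)
    for t in range(m, len(flu)):
        X.append([flu[t - l] for l in lags] +
                 [gt[t - l] for l in lags] +
                 [temp[t - l] for l in lags])
        y.append(flu[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
flu = rng.poisson(20, 300).astype(float)   # weekly notifications (synthetic)
gt = flu + rng.normal(0, 5, 300)           # search index tracking the epidemic
temp = rng.normal(25, 4, 300)              # weekly mean temperature

X, y = lagged_design(flu, gt, temp)
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
print(tree.feature_importances_)           # AR lags typically rank highest
```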
Climate variation and incidence of Ross river virus in Cairns, Australia: a time-series analysis.
Tong, S; Hu, W
2001-01-01
In this study we assessed the impact of climate variability on the Ross River virus (RRv) transmission and validated an epidemic-forecasting model in Cairns, Australia. Data on the RRv cases recorded between 1985 and 1996 were obtained from the Queensland Department of Health. Climate and population data were supplied by the Australian Bureau of Meteorology and the Australian Bureau of Statistics, respectively. The cross-correlation function (CCF) showed that maximum temperature in the current month and rainfall and relative humidity at a lag of 2 months were positively and significantly associated with the monthly incidence of RRv, whereas relative humidity at a lag of 5 months was inversely associated with the RRv transmission. We developed autoregressive integrated moving average (ARIMA) models on the data collected between 1985 and 1994, and then validated the models using the data collected between 1995 and 1996. The results show that the relative humidity at a lag of 5 months (p < 0.001) and the rainfall at a lag of 2 months (p < 0.05) appeared to play significant roles in the transmission of RRv disease in Cairns. Furthermore, the forecast curves from the fitted models were consistent with the pattern of actual values. PMID:11748035
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2012-01-01
This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp edged gust. This result is compared with the theoretical result. The present simulations will be compared with other CFD gust simulations. This paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA simulated results of a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
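A bare-bones discrete ARMA reduced-order model of the kind described, driven here by a one-minus-cosine gust; the coefficient values and gust length are illustrative placeholders, not quantities identified from FUN3D responses.

```python
import numpy as np

def arma_rom(u, a, b):
    """y[n] = sum_i a[i]*y[n-1-i] + sum_j b[j]*u[n-j]: the response (e.g. a lift
    or pressure-coefficient increment) as a weighted sum of its own past values
    and of present/past gust inputs."""
    y = np.zeros(len(u))
    for n in range(len(u)):
        ar = sum(a[i] * y[n - 1 - i] for i in range(len(a)) if n - 1 - i >= 0)
        ma = sum(b[j] * u[n - j] for j in range(len(b)) if n - j >= 0)
        y[n] = ar + ma
    return y

t = np.arange(400)
gust = np.where(t < 100, 0.5 * (1 - np.cos(2 * np.pi * t / 100)), 0.0)
response = arma_rom(gust, a=[1.6, -0.65], b=[0.02, 0.01])  # stable AR roots assumed
```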
Kuo, R J; Wu, P; Wang, C P
2002-09-01
Sales forecasting plays a very prominent role in business strategy. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average (ARMA) models. However, sales forecasting is very complicated owing to the influence of internal and external environments. Recently, artificial neural networks (ANNs) have also been applied in sales forecasting owing to their promising performance in the areas of control and pattern recognition. However, further improvement is still necessary since unique circumstances, e.g. promotions, cause sudden changes in the sales pattern. Thus, this study utilizes a proposed fuzzy neural network (FNN), which is able to eliminate unimportant weights, to learn the fuzzy IF-THEN rules obtained from marketing experts with respect to promotion. The result from the FNN is further integrated with the time series data through an ANN. Both the simulated and real-world problem results show that the FNN with weight elimination can have lower training error compared with the regular FNN. In addition, the real-world problem results also indicate that the proposed estimation system outperforms the conventional statistical method and a single ANN in accuracy.
Population drinking and fatal injuries in Eastern Europe: a time-series analysis of six countries.
Landberg, Jonas
2010-01-01
To estimate to what extent injury mortality rates in 6 Eastern European countries are affected by changes in population drinking during the post-war period. The analysis included injury mortality rates and per capita alcohol consumption in Russia, Belarus, Poland, Hungary, Bulgaria and the former Czechoslovakia. Total population and gender-specific models were estimated using auto regressive integrated moving average time-series modelling. The estimates for the total population were generally positive and significant. For Russia and Belarus, a 1-litre increase in per capita consumption was associated with an increase in injury mortality of 7.5 and 5.5 per 100,000 inhabitants, respectively. The estimates for the remaining countries ranged between 1.4 and 2.0. The gender-specific estimates displayed national variations similar to the total population estimates although the estimates for males were higher than for females in all countries. The results suggest that changes in per capita consumption have a significant impact on injury mortality in these countries, but the strength of the association tends to be stronger in countries where intoxication-oriented drinking is more common. Copyright 2009 S. Karger AG, Basel.
Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model
NASA Astrophysics Data System (ADS)
Vazifedan, Turaj; Shitan, Mahendran
Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, or the number of customers waiting for service at a certain time. When the values of the observations are large, it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to use an ARMA process to model the observed phenomenon. In such cases we need to model the time series data by using a Non-Negative Integer valued Autoregressive (INAR) process. The modeling of counts data is based on the binomial thinning operator. In this paper we illustrate the modeling of counts data using the monthly number of Poliomyelitis cases in the United States between January 1970 and December 1983. We applied the AR(1), Poisson regression and INAR(1) models, and the suitability of these models was assessed by using the Index of Agreement (I.A.). We found that the INAR(1) model is more appropriate in the sense that it had a better I.A., and it is natural since the data are counts.
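To make the binomial-thinning idea concrete, here is a minimal INAR(1) simulator; the Poisson innovation and the parameter values are illustrative assumptions, not values fitted to the polio series.

```python
import numpy as np

def simulate_inar1(alpha, lam, n, seed=0):
    """INAR(1): X_t = alpha o X_{t-1} + e_t, where 'o' is binomial thinning
    (each of the X_{t-1} counts survives with probability alpha) and e_t is a
    Poisson(lam) innovation, so the series stays a non-negative integer count."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=int)
    x[0] = rng.poisson(lam)
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

series = simulate_inar1(alpha=0.5, lam=1.0, n=168)   # 14 years of monthly counts
```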
Flood characteristics of Alaskan streams
Lamke, R.D.
1979-01-01
Peak discharge data for Alaskan streams are summarized and analyzed. Multiple-regression equations relating peak discharge magnitude and frequency to climatic and physical characteristics of 260 gaged basins were determined in order to estimate average recurrence interval of floods at ungaged sites. These equations are for 1.25-, 2-, 5-, 10-, 25-, and 50-year average recurrence intervals. In this report, Alaska was divided into two regions, one having a maritime climate with fall and winter rains and floods, the other having spring and summer floods of a variety or combinations of causes. Average standard errors of the six multiple-regression equations for these two regions were 48 and 74 percent, respectively. Maximum recorded floods at more than 400 sites throughout Alaska are tabulated. Maps showing lines of equal intensity of the principal climatic variables found to be significant (mean annual precipitation and mean minimum January temperature), and location of the 260 sites used in the multiple-regression analyses are included. Little flood data have been collected in western and arctic Alaska, and the predictive equations are therefore less reliable for those areas. (Woodard-USGS)
Gazolla, Fernanda Mussi; Neves Bordallo, Maria Alice; Madeira, Isabel Rey; de Miranda Carvalho, Cecilia Noronha; Vieira Monteiro, Alexandra Maria; Pinheiro Rodrigues, Nádia Cristina; Borges, Marcos Antonio; Collett-Solberg, Paulo Ferrez; Muniz, Bruna Moreira; de Oliveira, Cecilia Lacroix; Pinheiro, Suellen Martins; de Queiroz Ribeiro, Rebeca Mathias
2015-05-01
Early exposure to cardiovascular risk factors creates a chronic inflammatory state that could damage the endothelium, followed by thickening of the carotid intima-media. To investigate the association of cardiovascular risk factors with thickening of the carotid intima-media in prepubertal children. In this cross-sectional study, carotid intima-media thickness (cIMT) and cardiovascular risk factors were assessed in 129 prepubertal children aged from 5 to 10 years. Association was assessed by simple and multivariate logistic regression analyses. In simple logistic regression analyses, body mass index (BMI) z-score, waist circumference, and systolic blood pressure (SBP) were positively associated with increased left, right, and average cIMT, whereas diastolic blood pressure was positively associated only with increased left and average cIMT (p<0.05). In multivariate logistic regression analyses, increased left cIMT was positively associated with BMI z-score and SBP, and increased average cIMT was positively associated only with SBP (p<0.05). BMI z-score and SBP were the strongest risk factors for increased cIMT.
Intelligent transportation systems infrastructure initiative
DOT National Transportation Integrated Search
1997-01-01
The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...
Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Sornette, Didier
2012-01-01
Notwithstanding the significant efforts to develop estimators of long-range correlations (LRC) and to compare their performance, no clear consensus exists on what is the best method and under which conditions. In addition, synthetic tests suggest that the performance of LRC estimators varies when using different generators of LRC time series. Here, we compare the performances of four estimators [Fluctuation Analysis (FA), Detrended Fluctuation Analysis (DFA), Backward Detrending Moving Average (BDMA), and Centred Detrending Moving Average (CDMA)]. We use three different generators [Fractional Gaussian Noises, and two ways of generating Fractional Brownian Motions]. We find that CDMA has the best performance and DFA is only slightly worse in some situations, while FA performs the worst. In addition, CDMA and DFA are less sensitive to the scaling range than FA. Hence, CDMA and DFA remain “The Methods of Choice” in determining the Hurst index of time series. PMID:23150785
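For readers unfamiliar with these estimators, the sketch below implements plain DFA with linear detrending and reads the Hurst index off the log-log slope; the scale choices are arbitrary, and the DMA variants differ mainly in replacing the polynomial fit with a moving average.

```python
import numpy as np

def dfa_hurst(x, scales=(16, 32, 64, 128, 256, 512)):
    """Detrended Fluctuation Analysis: integrate the series, detrend it linearly
    in non-overlapping windows of size s, and fit F(s) ~ s^H on log-log axes."""
    y = np.cumsum(x - np.mean(x))                       # the profile
    F = []
    for s in scales:
        segs = len(y) // s
        resid = []
        for k in range(segs):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            resid.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(resid)))
    H, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return H

print(dfa_hurst(np.random.default_rng(2).standard_normal(10000)))   # ~0.5 for white noise
```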
NASA Astrophysics Data System (ADS)
Yi, Hou-Hui; Yang, Xiao-Feng; Wang, Cai-Feng; Li, Hua-Bing
2009-07-01
The rolling massage is one of the most important manipulations in Chinese massage, which is expected to eliminate many diseases. Here, the effect of the rolling massage on a pair of particles moving in blood vessels under rolling massage manipulation is studied by the lattice Boltzmann simulation. The simulated results show that the motion of each particle is considerably modified by the rolling massage, and it depends on the relative rolling velocity, the rolling depth, and the distance between particle position and rolling position. Both particles' translational average velocities increase almost linearly as the rolling velocity increases, and obey the same law. The increment of the average relative angular velocity for the leading particle is smaller than that of the trailing one. The result is helpful for understanding the mechanism of the massage and to further develop the rolling techniques.
Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob
2013-11-01
Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
Direct determination approach for the multifractal detrending moving average analysis
NASA Astrophysics Data System (ADS)
Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing
2017-11-01
In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ(q) is related to the partition function and the multifractal spectrum f(α) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p-model, the two-dimensional p-model, and the fractional Brownian motions. We find that both approaches have comparable performances to unveil the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f(α) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.
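In standard notation (which may differ from the paper's), the canonical measure and partition function behind such a direct determination take the form below, with F_v(s) denoting the detrended fluctuation of box v at scale s; this is the generic Chhabra and Jensen type construction rather than a transcription of the article.

```latex
% Generic canonical construction; notation assumed, not transcribed from the article.
\mu_v(q,s) = \frac{|F_v(s)|^{q}}{\sum_{u}|F_u(s)|^{q}}, \qquad
\sum_{v}|F_v(s)|^{q} \sim s^{\tau(q)}
```

α(q) and f(q) are then read off as the slopes of Σ_v μ_v ln|F_v(s)| and Σ_v μ_v ln μ_v(q,s) against ln s over the scaling range, consistent with the Legendre relation f(α) = qα - τ(q) but without computing τ(q) first.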
[A peak recognition algorithm designed for chromatographic peaks of transformer oil].
Ou, Linjun; Cao, Jian
2014-09-01
In chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to achieve peak identification. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying the moving average iterative method and normalized analysis techniques to identify the peaks. Accurate identification of the chromatographic peaks was realized by using multiple iterations of the moving average of signal curves and square wave curves to determine the optimal value of the normalized peak identification parameters, combined with the absolute peak retention times and peak window. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to the noise, the chromatographic peak width or the peak shape changes. It has strong adaptability to meet the on-site requirements of online monitoring devices of dissolved gases in transformer oil.
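A simplified sketch in the spirit of the algorithm described (iterated moving-average smoothing, normalization, local-maximum search); the window width, number of passes and threshold are placeholders, and the retention-time and peak-window logic is omitted.

```python
import numpy as np

def smooth(y, width=5, passes=3):
    """Iterated moving-average smoothing (repeated box filter)."""
    kernel = np.ones(width) / width
    for _ in range(passes):
        y = np.convolve(y, kernel, mode="same")
    return y

def find_peaks_simple(y, threshold=0.05):
    """Normalize the smoothed signal to [0, 1] and flag local maxima above a threshold."""
    z = (y - y.min()) / (y.max() - y.min())
    return [i for i in range(1, len(z) - 1)
            if z[i] > z[i - 1] and z[i] > z[i + 1] and z[i] >= threshold]

# peak_indices = find_peaks_simple(smooth(signal))  # 'signal' would be a chromatogram trace
```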
ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.
Lee, Keunbaik; Baek, Changryong; Daniels, Michael J
2017-11-01
In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: the modified Cholesky decomposition for autoregressive (AR) structure and the moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote as ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
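For reference, the two decompositions being combined are usually written as follows; these are the standard forms, with notation assumed rather than taken from the paper.

```latex
% Modified Cholesky (AR) form: T is unit lower triangular whose below-diagonal
% (t, j) entries are -phi_{tj} (generalized autoregressive parameters) and D is
% diagonal with innovation variances:
T \Sigma T^{\top} = D \quad\Longleftrightarrow\quad \Sigma^{-1} = T^{\top} D^{-1} T
% Moving average Cholesky (MA) form: L is unit lower triangular holding the
% moving average parameters:
\Sigma = L D L^{\top}
```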
Optimized nested Markov chain Monte Carlo sampling: theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
NASA Astrophysics Data System (ADS)
Chen, Feier; Tian, Kang; Ding, Xiaoxu; Miao, Yuqi; Lu, Chunxia
2016-11-01
Analysis of freight rate volatility characteristics has attracted more attention since 2008 owing to the credit crunch and the slowdown in marine transportation. The multifractal detrended fluctuation analysis technique is employed to analyze the time series of the Baltic Dry Bulk Freight Rate Index and the market trend of two bulk ship sizes, namely Capesize and Panamax, for the period March 1, 1999 to February 26, 2015. In this paper, the degree of the multifractality with different fluctuation sizes is calculated. Besides, a multifractal detrending moving average (MF-DMA) counting technique has been developed to quantify the components of the multifractal spectrum with the finite-size effect taken into consideration. Numerical results show that both the Capesize and Panamax freight rate index time series are of multifractal nature. The origin of multifractality for the bulk freight rate market series is found to be mostly due to nonlinear correlation.
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
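As a point of comparison with the brute-force enumeration mentioned above, a naive AIC-based order search might look like the sketch below; statsmodels is assumed here and is not the solver used in the article, with the Kalman-filter likelihood handled internally by the library.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def best_arma_by_aic(y, max_p=3, max_q=3):
    """Fit every ARMA(p, q) up to the given orders on the 1-D series y and keep
    the specification with the smallest AIC (brute-force enumeration baseline)."""
    best = (np.inf, None)
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            try:
                res = ARIMA(y, order=(p, 0, q)).fit()
            except Exception:
                continue  # skip orders that fail to converge
            if res.aic < best[0]:
                best = (res.aic, (p, q))
    return best  # (AIC, (p, q)) of the selected model
```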
Murray, Louis C.
2009-01-01
Water-use data collected between 1992 and 2006 at eight municipal water-supply utilities in east-central and northeast Florida were analyzed to identify seasonal trends in use and to quantify monthly variations. Regression analyses were applied to identify significant correlations between water use and selected meteorological parameters and drought indices. Selected parameters and indices include precipitation (P), air temperature (T), potential evapotranspiration (PET), available water (P-PET), monthly changes in these parameters (Delta P, Delta T, Delta PET, Delta(P-PET), the Palmer Drought Severity Index (PDSI), and the Standardized Precipitation Index (SPI). Selected utilities include the City of Daytona Beach (Daytona), the City of Eustis (Eustis), Gainesville Regional Utilities (GRU), Jacksonville Electric Authority (JEA), Orange County Utilities (OCU), Orlando Utilities Commission (OUC), Seminole County Utilities (SCU), and the City of St. Augustine (St. Augustine). Water-use rates at these utilities in 2006 ranged from about 3.2 million gallons per day at Eustis to about 131 million gallons per day at JEA. Total water-use rates increased at all utilities throughout the 15-year period of record, ranging from about 4 percent at Daytona to greater than 200 percent at OCU and SCU. Metered rates, however, decreased at six of the eight utilities, ranging from about 2 percent at OCU and OUC to about 17 percent at Eustis. Decreases in metered rates occurred because the number of metered connections increased at a greater rate than did total water use, suggesting that factors other than just population growth may play important roles in water-use dynamics. Given the absence of a concurrent trend in precipitation, these decreases can likely be attributed to changes in non-climatic factors such as water-use type, usage of reclaimed water, water-use restrictions, demographics, and so forth. When averaged for the eight utilities, metered water-use rates depict a clear seasonal pattern in which rates were lowest in the winter and greatest in the late spring. Averaged water-use rates ranged from about 9 percent below the 15-year daily mean in January to about 11 percent above the daily mean in May. Water-use rates were found to be statistically correlated to meteorological parameters and drought indices, and to be influenced by system memory. Metered rates (in gallons per day per active metered connection) were consistently found to be influenced by P, T, PET, and P-PET and changes in these parameters that occurred in prior months. In the single-variant analyses, best correlations were obtained by fitting polynomial functions to plots of metered rates versus moving-averaged values of selected parameters (R2 values greater than 0.50 at three of eight sites). Overall, metered water-use rates were best correlated with the 3- to 4-month moving average of Delta T or Delta PET (R2 values up to 0.66), whereas the full suite of meteorological parameters was best correlated with metered rates at Daytona and least correlated with rates at St. Augustine. Similarly, metered rates were substantially better correlated with moving-averaged values of precipitation (significant at all eight sites) than with single (current) monthly values (significant at only three sites). Total and metered water-use rates were positively correlated with T, PET, Delta P, Delta T, and Delta PET, and negatively correlated with P, P-PET, Delta (P-PET), PDSI, and SPI. 
The drought indices were better correlated with total water-use rates than with metered rates, whereas metered rates were better correlated with meteorological parameters. Multivariant analyses produced fits of the data that explained a greater degree of the variance in metered rates than did the single-variant analyses. Adjusted R2 values for the 'best' models ranged from 0.79 at JEA to 0.29 at St. Augustine and exceeded 0.60 at five of eight sites. The amount of available water (P-PET) was the si
Huang, Chiung-Shing; Harikrishnan, Pandurangan; Liao, Yu-Fang; Ko, Ellen W C; Liou, Eric J W; Chen, Philip K T
2007-05-01
To evaluate the changes in maxillary position after maxillary distraction osteogenesis in six growing children with cleft lip and palate. Retrospective, longitudinal study on maxillary changes at A point, anterior nasal spine, posterior nasal spine, central incisor, and first molar. The University Hospital Craniofacial Center. Cephalometric radiographs were used to measure the maxillary position immediately after distraction, at 6 months, and more than 1 year after distraction. After maxillary distraction with a rigid external distraction device, the maxilla (A point) on average moved forward 9.7 mm and downward 3.5 mm immediately after distraction, moved backward 0.9 mm and upward 2.0 mm by 6 months postoperatively, and then moved further backward 2.3 mm and downward 6.8 mm at more than 1 year, relative to the predistraction position. In most cases, the maxilla moved forward during distraction and then drifted backward until 1 year after distraction, but remained forward of its predistraction position. The maxilla also moved downward during distraction and upward within 6 months, but began descending again by 1 year. There was also no further forward growth of the maxilla after distraction in growing children with clefts.
An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.
Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han
2015-12-11
Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm that is on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which are found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrians Dead Reckoning (PDR) positioning, including three parts: trusted point determination, trust state and positioning fusion algorithm. An experiment is carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by the unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move.
2010-01-01
Background Malaria remains a public health problem in some districts of Bhutan despite a marked reduction of cases in the last few years. To strengthen the country's prevention and control measures, this study was carried out to develop forecasting and prediction models of malaria incidence in the endemic districts of Bhutan using time series and ARIMAX. Methods This study was carried out retrospectively using the monthly reported malaria cases from the health centres to the Vector-borne Disease Control Programme (VDCP) and the meteorological data from the Meteorological Unit, Department of Energy, Ministry of Economic Affairs. Time series analysis was performed on monthly malaria cases, from 1994 to 2008, in seven malaria endemic districts. The time series models derived from a multiplicative seasonal autoregressive integrated moving average (ARIMA) were deployed to identify the best model using data from 1994 to 2006. The best-fit model was selected for each individual district and for the overall endemic area, and the monthly cases from January to December 2009 and 2010 were forecasted. In developing the prediction model, the monthly reported malaria cases and the meteorological factors from 1996 to 2008 of the seven districts were analysed. The method of ARIMAX modelling was employed to determine predictors of malaria of the subsequent month. Results It was found that the ARIMA (p, d, q) (P, D, Q)s model (p and P representing the autoregressive and seasonal autoregressive parameters; d and D representing the non-seasonal and seasonal differencing; q and Q the moving average and seasonal moving average parameters, respectively; and s representing the length of the seasonal period) for the overall endemic districts was (2,1,1)(0,1,1)12; the modelling data from each district revealed two common ARIMA models, (2,1,1)(0,1,1)12 and (1,1,1)(0,1,1)12. The forecasted monthly malaria cases from January to December 2009 and 2010 varied from 15 to 82 cases in 2009 and 67 to 149 cases in 2010, where the population in 2009 was 285,375 and the expected population in 2010 was 289,085. The ARIMAX model of monthly cases and climatic factors showed considerable variations among the different districts. In general, the mean maximum temperature lagged at one month was a strong positive predictor of increased malaria cases for four districts. The monthly number of cases of the previous month was also a significant predictor in one district, whereas no variable could predict malaria cases for two districts. Conclusions The ARIMA models of time-series analysis were useful in forecasting the number of cases in the endemic areas of Bhutan. There was no consistency in the predictors of malaria cases when using the ARIMAX model with selected lag times and climatic predictors. The ARIMA forecasting models could be employed for planning and managing the malaria prevention and control programme in Bhutan. PMID:20813066
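A minimal illustration of fitting and forecasting the reported overall model, ARIMA(2,1,1)(0,1,1)12, using statsmodels on a synthetic stand-in for the VDCP monthly counts; neither the real data nor the programme's software is reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly malaria counts, 1994-2008, as a placeholder for VDCP data.
idx = pd.date_range("1994-01-01", "2008-12-01", freq="MS")
rng = np.random.default_rng(3)
season = 50 + 30 * np.sin(2 * np.pi * idx.month / 12)
cases = pd.Series(rng.poisson(season), index=idx)

# The overall best-fit specification reported above: ARIMA(2,1,1)(0,1,1)12.
res = SARIMAX(cases, order=(2, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
fc = res.get_forecast(steps=24)              # January 2009 through December 2010
print(fc.predicted_mean.round(1))
print(fc.conf_int())
```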
Proceedings of the Annual Conference on Manual Control (18th) Held at Dayton, Ohio on 8-10 June 1982
1983-01-01
frequency of the disturbance the probability to cross the borderline becomes larger, and corrective action (moving average value further away... from the...pupillometer. The prototypical data was the average of 10 records from 5 normal subjects who showed similar responses. The different amplitudes of light...following orders touch, position, temperature, and pain. Our subjects sometimes reported numbness in the fingertips, dulled pinprick sensations
Mapping soil textural fractions across a large watershed in north-east Florida.
Lamsal, S; Mishra, U
2010-08-01
Assessment of regional-scale soil spatial variation and mapping of its distribution are constrained by sparse data collected using field surveys that are labor intensive and cost prohibitive. We explored geostatistical (ordinary kriging-OK), regression (Regression Tree-RT), and hybrid methods (RT plus residual Sequential Gaussian Simulation-SGS) to map soil textural fractions across the Santa Fe River Watershed (3,585 km²) in north-east Florida. Soil samples collected from four depths (L1: 0-30 cm, L2: 30-60 cm, L3: 60-120 cm, and L4: 120-180 cm) at 141 locations were analyzed for soil textural fractions (sand, silt and clay contents), and combined with textural data (15 profiles) assembled under the Florida Soil Characterization program. Textural fractions in L1 and L2 were autocorrelated, and spatially mapped across the watershed. OK performance was poor, which may be attributed to the sparse sampling. RT model structure varied among textural fractions, and the variation explained by the model ranged from 25% for L1 silt to 61% for L2 clay content. Regression residuals were simulated using SGS, and the average of the simulated residuals was used to approximate the regression residual distribution map, which was added to the regression trend maps. Independent validation of the prediction maps showed that the regression models performed slightly better than OK, and regression combined with the average of simulated regression residuals improved predictions beyond the regression model alone. Sand content >90% in both the 0-30 and 30-60 cm layers covered 80.6% of the watershed area. Copyright 2010 Elsevier Ltd. All rights reserved.
Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope
2013-01-01
With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non Gaussian, non stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.
Video-Assisted Thoracic Surgical Lobectomy for Lung Cancer: Description of a Learning Curve.
Yao, Fei; Wang, Jian; Yao, Ju; Hang, Fangrong; Cao, Shiqi; Cao, Yongke
2017-07-01
Video-assisted thoracic surgical (VATS) lobectomy is gaining popularity in the treatment of lung cancer. The aim of this study is to investigate the learning curve of VATS lobectomy by using multidimensional methods and to compare the learning curve groups with respect to perioperative clinical outcomes. We retrospectively reviewed a prospective database to identify 67 consecutive patients who underwent VATS lobectomy for lung cancer by a single surgeon. The learning curve was analyzed by using moving average and the cumulative sum (CUSUM) method. With the moving average and CUSUM analyses for the operation time, patients were stratified into two groups, with chronological order defining early and late experiences. Perioperative clinical outcomes were compared between the two learning curve groups. According to the moving average method, the peak point for operation time occurred at the 26th case. The CUSUM method also showed the operation time peak point at the 26th case. When results were compared between early- and late-experience periods, the operation time, duration of chest drainage, and postoperative hospital stay were significantly longer in the early-experience group (cases 1 to 26). The intraoperative estimated blood loss was significantly less in the late-experience group (cases 27 to 67). CUSUM charts showed a decreasing duration of chest drainage after the 36th case and shortening postoperative hospital stay after the 37th case. Multidimensional statistical analyses suggested that the learning curve for VATS lobectomy for lung cancer required ∼26 cases. Favorable intraoperative and postoperative care parameters for VATS lobectomy were observed in the late-experience group.
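To illustrate the two learning-curve tools named above, here is a generic sketch of a moving average of operation times and a CUSUM of their deviations from the overall mean; the window size is arbitrary and no risk adjustment is attempted.

```python
import numpy as np

def moving_average(times, window=5):
    """Mean operation time over a sliding window of consecutive cases."""
    return np.convolve(times, np.ones(window) / window, mode="valid")

def cusum(times):
    """Cumulative sum of each case's deviation from the overall mean operation
    time; the curve rises while cases run long, and its peak marks the point
    after which performance improves (around case 26 in the study above)."""
    times = np.asarray(times, dtype=float)
    return np.cumsum(times - times.mean())

# op_times = [...]               # minutes for consecutive VATS lobectomies
# turning_point = int(np.argmax(cusum(op_times))) + 1
```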
Park, Yoonah; Yong, Yuen Geng; Jung, Kyung Uk; Huh, Jung Wook; Cho, Yong Beom; Kim, Hee Cheol; Lee, Woo Yong; Chun, Ho-Kyung
2015-01-01
Purpose This study aimed to compare the learning curves and early postoperative outcomes for conventional laparoscopic (CL) and single incision laparoscopic (SIL) right hemicolectomy (RHC). Methods This retrospective study included the initial 35 cases in each group. Learning curves were evaluated by the moving average of operative time, mean operative time of every five consecutive cases, and cumulative sum (CUSUM) analysis. The learning phase was considered overcome when the moving average of operative times reached a plateau, and when the mean operative time of every five consecutive cases reached a low point and subsequently did not vary by more than 30 minutes. Results Six patients with missing data in the CL RHC group were excluded from the analyses. According to the mean operative time of every five consecutive cases, learning phase of SIL and CL RHC was completed between 26 and 30 cases, and 16 and 20 cases, respectively. Moving average analysis revealed that approximately 31 (SIL) and 25 (CL) cases were needed to complete the learning phase, respectively. CUSUM analysis demonstrated that 10 (SIL) and two (CL) cases were required to reach a steady state of complication-free performance, respectively. Postoperative complications rate was higher in SIL than in CL group, but the difference was not statistically significant (17.1% vs. 3.4%). Conclusion The learning phase of SIL RHC is longer than that of CL RHC. Early oncological outcomes of both techniques were comparable. However, SIL RHC had a statistically insignificant higher complication rate than CL RHC during the learning phase. PMID:25960990
Sando, Steven K.; Sando, Roy; McCarthy, Peter M.; Dutton, DeAnn M.
2016-04-05
The climatic conditions of the specific time period during which peak-flow data were collected at a given streamflow-gaging station (hereinafter referred to as gaging station) can substantially affect how well the peak-flow frequency (hereinafter referred to as frequency) results represent long-term hydrologic conditions. Differences in the timing of the periods of record can result in substantial inconsistencies in frequency estimates for hydrologically similar gaging stations. Potential for inconsistency increases with decreasing peak-flow record length. The representativeness of the frequency estimates for a short-term gaging station can be adjusted by various methods including weighting the at-site results in association with frequency estimates from regional regression equations (RREs) by using the Weighted Independent Estimates (WIE) program. Also, for gaging stations that cannot be adjusted by using the WIE program because of regulation or drainage areas too large for application of RREs, frequency estimates might be improved by using record extension procedures, including a mixed-station analysis using the maintenance of variance type I (MOVE.1) procedure. The U.S. Geological Survey, in cooperation with the Montana Department of Transportation and the Montana Department of Natural Resources and Conservation, completed a study to provide adjusted frequency estimates for selected gaging stations through water year 2011.The purpose of Chapter D of this Scientific Investigations Report is to present adjusted frequency estimates for 504 selected streamflow-gaging stations in or near Montana based on data through water year 2011. Estimates of peak-flow magnitudes for the 66.7-, 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities are reported. These annual exceedance probabilities correspond to the 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively.The at-site frequency estimates were adjusted by weighting with frequency estimates from RREs using the WIE program for 438 selected gaging stations in Montana. These 438 selected gaging stations (1) had periods of record less than or equal to 40 years, (2) represented unregulated or minor regulation conditions, and (3) had drainage areas less than about 2,750 square miles.The weighted-average frequency estimates obtained by weighting with RREs generally are considered to provide improved frequency estimates. In some cases, there are substantial differences among the at-site frequency estimates, the regression-equation frequency estimates, and the weighted-average frequency estimates. In these cases, thoughtful consideration should be applied when selecting the appropriate frequency estimate. Some factors that might be considered when selecting the appropriate frequency estimate include (1) whether the specific gaging station has peak-flow characteristics that distinguish it from most other gaging stations used in developing the RREs for the hydrologic region; and (2) the length of the peak-flow record and the general climatic characteristics during the period when the peak-flow data were collected. 
For critical structure-design applications, a conservative approach would be to select the higher of the at-site frequency estimate and the weighted-average frequency estimate.The mixed-station MOVE.1 procedure generally was applied in cases where three or more gaging stations were located on the same large river and some of the gaging stations could not be adjusted using the weighted-average method because of regulation or drainage areas too large for application of RREs. The mixed-station MOVE.1 procedure was applied to 66 selected gaging stations on 19 large rivers.The general approach for using mixed-station record extension procedures to adjust at-site frequencies involved (1) determining appropriate base periods for the gaging stations on the large rivers, (2) synthesizing peak-flow data for the gaging stations with incomplete peak-flow records during the base periods by using the mixed-station MOVE.1 procedure, and (3) conducting frequency analysis on the combined recorded and synthesized peak-flow data for each gaging station. Frequency estimates for the combined recorded and synthesized datasets for 66 gaging stations with incomplete peak-flow records during the base periods are presented. The uncertainties in the mixed-station record extension results are difficult to directly quantify; thus, it is important to understand the intended use of the estimated frequencies based on analysis of the combined recorded and synthesized datasets. The estimated frequencies are considered general estimates of frequency relations among gaging stations on the same stream channel that might be expected if the gaging stations had been gaged during the same long-term base period. However, because the mixed-station record extension procedures involve secondary statistical analysis with accompanying errors, the uncertainty of the frequency estimates is larger than would be obtained by collecting systematic records for the same number of years in the base period.
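In its generic form, the weighting of an at-site estimate with a regional-regression estimate is an inverse-variance average, as sketched below; the WIE program's exact variance terms (typically applied to the logarithms of the quantiles) may differ in detail.

```latex
% Inverse-variance weighting of an at-site estimate (Q_s, V_s) and a
% regional-regression estimate (Q_r, V_r); Q_w is the weighted estimate.
Q_w = \frac{Q_s/V_s + Q_r/V_r}{1/V_s + 1/V_r},
\qquad
V_w = \frac{1}{1/V_s + 1/V_r}
```

Because the weights are the reciprocals of the variances, the more reliable of the two estimates dominates, and the variance V_w of the weighted estimate is never larger than the smaller of V_s and V_r.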
Improving Cluster Analysis with Automatic Variable Selection Based on Trees
2014-12-01
regression trees Daisy DISsimilAritY PAM partitioning around medoids PMA penalized multivariate analysis SPC sparse principal components UPGMA unweighted...unweighted pair-group average method ( UPGMA ). This method measures dissimilarities between all objects in two clusters and takes the average value
Hospital financial position and the adoption of electronic health records.
Ginn, Gregory O; Shen, Jay J; Moseley, Charles B
2011-01-01
The objective of this study was to examine the relationship between financial position and adoption of electronic health records (EHRs) in 2442 acute care hospitals. The study was cross-sectional and utilized a general linear mixed model with the multinomial distribution specification for data analysis. We verified the results by also running a multinomial logistic regression model. To measure our variables, we used data from (1) the 2007 American Hospital Association (AHA) electronic health record implementation survey, (2) the 2006 Centers for Medicare and Medicaid Cost Reports, and (3) the 2006 AHA Annual Survey containing organizational and operational data. Our dependent variable was an ordinal variable with three levels used to indicate the extent of EHR adoption by hospitals. Our independent variables were five financial ratios: (1) net days revenue in accounts receivable, (2) total margin, (3) the equity multiplier, (4) total asset turnover, and (5) the ratio of total payroll to total expenses. For control variables, we used (1) bed size, (2) ownership type, (3) teaching affiliation, (4) system membership, (5) network participation, (6) full-time equivalent nurses per adjusted average daily census, (7) average daily census per staffed bed, (8) Medicare patients percentage, (9) Medicaid patients percentage, (10) capitation-based reimbursement, and (11) nonconcentrated market. Only liquidity was significant and positively associated with EHR adoption. Asset turnover ratio was significant but, unexpectedly, was negatively associated with EHR adoption. However, many control variables, most notably bed size, showed significant positive associations with EHR adoption. Thus, it seems that hospitals adopt EHRs as a strategic move to better align themselves with their environment.
Fall in homicides in the City of São Paulo: an exploratory analysis of possible determinants
Peres, Maria Fernanda Tourinho; de Almeida, Juliana Feliciano; Vicentin, Diego; Cerda, Magdalena; Cardia, Nancy; Adorno, Sérgio
2012-01-01
Throughout the first decade of the 2000s the homicide mortality rate (HMR) showed a significant reduction in the state and the city of São Paulo (MSP). The aim of this study is to describe the trend of HMR, socio-demographic indicators, and the investment in social and public security, and to analyze the correlation between HMR and independent variables in the MSP between 1996 and 2008. An exploratory time series ecological study was conducted. The following variables were included: HMR per 100,000 inhabitants, socio-demographic indicators, and investments in social and public security. The moving-averages for all variables were calculated and trends were analyzed through Simple Linear Regression models. Annual percentage changes, the average annual change and periodic percentage changes were calculated for all variables, and the associations between annual percentage changes were tested by Spearman’s correlation analysis. Correlations were found for the proportion of youth in the population (r = 0.69), unemployment rate (r = 0.60), State budget for education and culture (r = 0.87) and health and sanitation (r = 0.56), municipal (r = 0.68) and State (r = 0.53) budget for Public Security, firearms seized (r = 0.69) and the incarceration rate (r = 0.71). The results allow us to support the hypothesis that demographic changes, acceleration of the economy, in particular the fall in unemployment, investment in social policies and changes in public security policies act synergistically to reduce HMR in São Paulo. Complex models of analysis, incorporating the joint action of different potential explanatory variables, should be developed. PMID:22218669
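As a concrete illustration of the exploratory workflow described above (moving averages, linear-regression trends, and Spearman correlation of annual percentage changes), the Python sketch below uses synthetic yearly series; the variable names, the 3-year smoothing window, and the random data are placeholders, not the study's actual inputs.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical yearly series for 1996-2008
rng = np.random.default_rng(0)
years = np.arange(1996, 2009)
hmr = pd.Series(60 - 2.5 * (years - 1996) + rng.normal(0, 3, years.size), index=years)
unemployment = pd.Series(18 - 0.8 * (years - 1996) + rng.normal(0, 1, years.size),
                         index=years)

# 3-year centred moving average to smooth short-term fluctuation
hmr_smooth = hmr.rolling(3, center=True).mean().dropna()

# Simple linear-regression trend of the smoothed series
slope, intercept, r, p, se = stats.linregress(hmr_smooth.index, hmr_smooth.values)

# Annual percentage changes and their Spearman correlation
pct_hmr = hmr.pct_change().dropna() * 100
pct_unemp = unemployment.pct_change().dropna() * 100
rho, p_rho = stats.spearmanr(pct_hmr, pct_unemp)
```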
Girls Thrive Emotionally, Boys Falter After Move to Better Neighborhood
... averaging 34 percent, compared to 50 percent for control group families. Mental illness is more prevalent among youth ... compared to 3.5 percent among boys in control group families who did not receive vouchers. Rates of ...
Rippling Dune Front in Herschel Crater on Mars
2011-11-17
A rippled dune front in Herschel Crater on Mars moved an average of about two meters (about two yards) between March 3, 2007, and December 1, 2010, as seen in one of two images from NASA's Mars Reconnaissance Orbiter.
Rippling Dune Front in Herschel Crater on Mars
2011-11-17
A rippled dune front in Herschel Crater on Mars moved an average of about one meter (about one yard) between March 3, 2007, and December 1, 2010, as seen in one of two images from NASA's Mars Reconnaissance Orbiter.
Shifting Sand in Herschel Crater
2011-11-17
The eastern margin of a rippled dune in Herschel Crater on Mars moved an average distance of three meters (about three yards) between March 3, 2007, and December 1, 2010, in one of two images taken by NASA's Mars Reconnaissance Orbiter.
Computer simulation of concentrated solid solution strengthening
NASA Technical Reports Server (NTRS)
Kuo, C. T. K.; Arsenault, R. J.
1976-01-01
The interaction forces between a straight edge dislocation and a random array of solute atoms were determined as the dislocation moved through a three-dimensional block. The yield stress at 0 K was obtained by determining the average maximum solute-dislocation interaction force encountered by the edge dislocation, and an expression relating the yield stress to the length of the dislocation and the solute concentration is provided. The magnitude of the solid solution strengthening due to solute atoms can be determined directly from the numerical results, provided the dislocation line length that moves as a unit is specified.
Aydoğan, Tuğba; Akçay, Betül İlkay Sezgin; Kardeş, Esra; Ergin, Ahmet
2017-01-01
Purpose: The objective of this study is to evaluate the diagnostic ability of retinal nerve fiber layer (RNFL), macular, and optic nerve head (ONH) parameters in healthy subjects, ocular hypertension (OHT), preperimetric glaucoma (PPG), and early glaucoma (EG) patients, and to reveal factors affecting the diagnostic ability of spectral domain-optical coherence tomography (SD-OCT) parameters and risk factors for glaucoma. Methods: Three hundred and twenty-six eyes (89 healthy, 77 OHT, 94 PPG, and 66 EG eyes) were analyzed. RNFL, macular, and ONH parameters were measured with SD-OCT. The area under the receiver operating characteristic curve (AUC) and the sensitivity at 95% specificity were calculated. Logistic regression analysis was used to determine the glaucoma risk factors. Receiver operating characteristic regression analysis was used to evaluate the influence of covariates on the diagnostic ability of parameters. Results: In PPG patients, the parameters that had the largest AUC value were average RNFL thickness (0.83) and rim volume (0.83). In EG patients, the parameter that had the largest AUC value was average RNFL thickness (0.98). The logistic regression analysis showed average RNFL thickness was a risk factor for both PPG and EG. Diagnostic ability of average RNFL and average ganglion cell complex thickness increased as disease severity increased. Signal strength index did not affect diagnostic abilities. Diagnostic ability of average RNFL and rim area increased as disc area increased. Conclusion: When evaluating patients with glaucoma, patients at risk for glaucoma, and healthy controls, RNFL parameters deserve more attention in clinical practice. Further studies are needed to fully understand the influence of covariates on the diagnostic ability of OCT parameters. PMID:29133640
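For reference, the two performance summaries reported above, AUC and sensitivity at 95% specificity, can be computed as in the following scikit-learn sketch; the simulated RNFL thickness distributions and group sizes are hypothetical and serve only to show the calculation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical data: 66 early-glaucoma eyes (y = 1) and 89 healthy eyes (y = 0)
rng = np.random.default_rng(0)
y = np.r_[np.ones(66), np.zeros(89)]
rnfl = np.r_[rng.normal(75, 10, 66), rng.normal(95, 10, 89)]  # average RNFL, µm

# Thinner RNFL indicates disease, so score with the negated thickness
score = -rnfl
auc = roc_auc_score(y, score)

# Sensitivity at (at least) 95% specificity, i.e. false-positive rate <= 5%
fpr, tpr, thresholds = roc_curve(y, score)
sensitivity_at_95_specificity = tpr[fpr <= 0.05].max()
```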
Zhou, Jun-Fu; Cai, Dong; Zhu, You-Gen; Yang, Jin-Lu; Peng, Cheng-Hong; Yu, Yang-Hai
2000-01-01
AIM: To study the relationship of injury induced by nitric oxide, oxidation, peroxidation, and lipoperoxidation with chronic cholecystitis. METHODS: The values of plasma nitric oxide (P-NO), plasma vitamin C (P-VC), plasma vitamin E (P-VE), plasma β-carotene (P-β-CAR), plasma lipoperoxides (P-LPO), erythrocyte superoxide dismutase (E-SOD), erythrocyte catalase (E-CAT), and erythrocyte glutathione peroxidase (E-GSH-Px) activities and the erythrocyte lipoperoxides (E-LPO) level in 77 patients with chronic cholecystitis and 80 healthy control subjects were determined. Differences of the above average values between the patient group and the control group and differences of the average values between preoperative and postoperative patients were analyzed and compared, and the linear regression and correlation of the disease course with the above determination values, as well as the stepwise regression and correlation of the course with the values, were analyzed. RESULTS: Compared with the control group, the average values of P-NO, P-LPO, and E-LPO were significantly increased (P < 0.01), and those of P-VC, P-VE, P-β-CAR, E-SOD, E-CAT, and E-GSH-Px were decreased (P < 0.01) in the patient group. The analysis of the linear regression and correlation showed that with prolonging of the course, the values of P-NO, P-LPO, and E-LPO in the patients gradually ascended and the values of P-VC, P-VE, P-β-CAR, E-SOD, E-CAT, and E-GSH-Px descended (P < 0.01). The analysis of the stepwise regression and correlation indicated that the correlation of the course with the P-NO, P-VE, and P-β-CAR values was the closest. Compared with the preoperative patients, the average values of P-NO, P-LPO, and E-LPO were significantly decreased (P < 0.01) and the average values of P-VC, E-SOD, E-CAT, and E-GSH-Px were increased (P < 0.01) in postoperative patients. However, there was no significant difference in the average values of P-VE and P-β-CAR between preoperative and postoperative patients. CONCLUSION: Chronic cholecystitis could induce the increase of nitric oxide, oxidation, peroxidation and lipoperoxidation. PMID:11819637
Frndak, Seth E; Smerbeck, Audrey M; Irwin, Lauren N; Drake, Allison S; Kordovski, Victoria M; Kunker, Katrina A; Khan, Anjum L; Benedict, Ralph H B
2016-10-01
We endeavored to clarify how distinct co-occurring symptoms relate to the presence of negative work events in employed multiple sclerosis (MS) patients. Latent profile analysis (LPA) was utilized to elucidate common disability patterns by isolating patient subpopulations. Samples of 272 employed MS patients and 209 healthy controls (HC) were administered neuroperformance tests of ambulation, hand dexterity, processing speed, and memory. Regression-based norms were created from the HC sample. LPA identified latent profiles using the regression-based z-scores. Finally, multinomial logistic regression tested for negative work event differences among the latent profiles. Four profiles were identified via LPA: a common profile (55%) characterized by slightly below average performance in all domains, a broadly low-performing profile (18%), a poor motor abilities profile with average cognition (17%), and a generally high-functioning profile (9%). Multinomial regression analysis revealed that the uniformly low-performing profile demonstrated a higher likelihood of reported negative work events. Employed MS patients with co-occurring motor, memory and processing speed impairments were most likely to report a negative work event, classifying them as uniquely at risk for job loss.
Motion patterns in acupuncture needle manipulation.
Seo, Yoonjeong; Lee, In-Seon; Jung, Won-Mo; Ryu, Ho-Sun; Lim, Jinwoong; Ryu, Yeon-Hee; Kang, Jung-Won; Chae, Younbyoung
2014-10-01
In clinical practice, acupuncture manipulation is highly individualised for each practitioner. Before we establish a standard for acupuncture manipulation, it is important to understand completely the manifestations of acupuncture manipulation in the actual clinic. To examine motion patterns during acupuncture manipulation, we generated a fitted model of practitioners' motion patterns and evaluated their consistency in acupuncture manipulation. Using a motion sensor, we obtained real-time motion data from eight experienced practitioners while they conducted acupuncture manipulation using their own techniques. We calculated the average amplitude and duration of a sampled motion unit for each practitioner and, after normalisation, we generated a true regression curve of motion patterns for each practitioner using a generalised additive mixed model (GAMM). We observed significant differences in rotation amplitude and duration in motion samples among practitioners. GAMM showed marked variations in average regression curves of motion patterns among practitioners, but there was strong consistency in motion parameters for individual practitioners. The fitted regression model showed that the true regression curve accounted for an average of 50.2% of variance in the motion pattern for each practitioner. Our findings suggest that there is great inter-individual variability between practitioners, but remarkable intra-individual consistency within each practitioner. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
What climate changes could be observed by two generations of Poles?
NASA Astrophysics Data System (ADS)
Szwed, M.
2010-09-01
For many years, numerous scientific papers in different disciplines have been published on different aspects of global warming. The issue of climate change and its impacts has certainly become a "fashionable" research area. In Poland, for example, the issue was tackled by one of the greatest hydro-climatological research projects, namely: "Extreme meteorological and hydrological events in Poland (the evaluation of forecasting events and their effects on human environment)". However, for several years, and certainly since 2007, when Al Gore, former U.S. vice-president, and the Intergovernmental Panel on Climate Change (IPCC) won the Nobel Peace Prize, this topic has been raised increasingly frequently by the Polish media. The average Polish citizen increasingly often learns from the press, radio and television about global warming. There are also those skeptical of climate change who loudly express their opinions in the media. Can the average Pole not get lost in the thicket of information? Can they refer to their own memory or the memory of their parents or grandparents on issues of climate change? How was the typical summer or winter perceived by the previous generations? Is it possible to observe such changes without reference to extreme events? This article tries to answer the question of whether the average Pole could see climate change, most simply understood as changes in thermal conditions and precipitation. If yes, then which seasons or months see the biggest changes? Which parts of the country witness the biggest changes? The starting point of the analysis is the 58-year time series of real monthly temperature and precipitation in the period of 1951-2008 for 20 stations across Poland. However, they will not be analyzed in more detail. In order to smooth the data sequences and thus to reject the short-term fluctuations, the long-term moving averages in different sequences (individual months, seasons and years) will be analyzed. The analysis of moving averages will help to find potential longer-term trends or cycles in the test time series. Trends will be detected based on parametric and nonparametric tests, such as linear regression and the Mann-Kendall test. Finally, the current temperature and precipitation will be compared to the climate projections at the end of the 21st century. To this end, the climate models from the ENSEMBLES research project will be used. In the case of temperature, these will be C41RCA3 from the Rossby Centre (Norrköping, Sweden); CLM from ETH (Zurich, Switzerland); KNMI-RACMO2 from the Royal Netherlands Meteorological Institute (De Bilt, the Netherlands); MPI-M-REMO from the Max Planck Institute (Hamburg, Germany); METO-HC from the Met Office's Hadley Centre (Exeter, UK); and RCA from the Swedish Meteorological and Hydrological Institute (SMHI, Norrköping, Sweden). In the case of precipitation, only the MPI-M-REMO model will be used. The reason is the outcome of the validation of models for the territory of Poland (previously performed by the author), which indicated that this model was the best fit for the Polish precipitation conditions.
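The trend-detection tools mentioned above are standard; the sketch below pairs a basic Mann-Kendall test (no correction for ties or autocorrelation) with a linear-regression slope and a 10-year moving average, applied to a synthetic 58-year annual temperature series standing in for a station record.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Basic two-sided Mann-Kendall trend test (no tie/autocorrelation correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return z, 2 * (1 - stats.norm.cdf(abs(z)))

# Synthetic 58-year annual mean temperature series (1951-2008) with a weak warming trend
rng = np.random.default_rng(0)
temps = 8.0 + 0.02 * np.arange(58) + rng.normal(0, 0.8, 58)

ma10 = np.convolve(temps, np.ones(10) / 10, mode="valid")  # 10-year moving average
slope, *_ = stats.linregress(np.arange(58), temps)          # parametric (linear) trend
z, p = mann_kendall(temps)                                  # nonparametric trend test
```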
Defining phases of bedload transport using piecewise regression
Sandra E. Ryan; Laurie S. Porth; C. A. Troendle
2002-01-01
Differences in the transport rate and size of bedload exist for varying levels of flow in coarse-grained channels. For gravel-bed rivers, at least two phases of bedload transport, with notably differing qualities, have been described in the literature. Phase I consists primarily of sand and small gravel moving at relatively low rates over a stable channel surface....
Davoren, Mary; Hennessy, Sarah; Conway, Catherine; Marrinan, Seamus; Gill, Pauline; Kennedy, Harry G
2015-03-28
Detention in a secure forensic psychiatric hospital may inhibit engagement and recovery. Having validated the clinician rated DUNDRUM-3 (programme completion) and DUNDRUM-4 (recovery) in a forensic hospital, we set out to draft and validate scales measuring the same programme completion and recovery items that patients could use to self-rate. Based on previous work, we hypothesised that self-rating scores might be predictors of objective progress including conditional discharge. We hypothesised also that the difference between patients' and clinicians' ratings of progress in treatment and other factors relevant to readiness for discharge (concordance) would diminish as patients neared discharge. We hypothesised also that this difference in matched scores would predict objective progress including conditional discharge. In a prospective naturalistic observational cohort study in a forensic hospital, we examined whether scores on the self-rated DUNDRUM-3 programme completion and DUNDRUM-4 recovery scales or differences between clinician and patient ratings on the same scales (concordance) would predict moves between levels of therapeutic security and conditional discharge over the next twelve months. Both scales stratified along the recovery pathway of the hospital, but clinician ratings matched the level of therapeutic security more accurately than self ratings. The clinician rated scales predicted moves to less secure units and to more secure units and predicted conditional discharge but the self-rated scores did not. The difference between clinician and self-rated scores (concordance) predicted positive and negative moves and conditional discharge, but this was not always an independent predictor as shown by regression analysis. In regression analysis the DUNDRUM-3 predicted moves to less secure places though the HCR-20 C & R score dominated the model. Moves back to more secure places were predicted by lack of concordance on the DUNDRUM-4. Conditional discharge was predicted predominantly by the DUNDRUM-3. Patients accurately self-rate relative to other patients however their absolute ratings were consistently lower (better) than clinicians' ratings and were less accurate predictors of outcomes including conditional discharge. Quantifying concordance is a useful part of the recovery process and predicts outcomes but self-ratings are not accurate predictors.
Honda, Trenton; Pun, Vivian C; Manjourides, Justin; Suh, Helen
2017-04-01
Anemia, a highly prevalent disorder in elderly populations, is associated with numerous adverse health outcomes, including increased mortality, impaired functional status and cognitive disorders. Approximately two-thirds of anemia in American elderly is caused by chronic inflammation or is unexplained. A potential contributing factor may include air pollution exposures, which have been shown to increase systemic inflammation and affect erythropoiesis. Few studies, however, have investigated the associations of air pollution with hemoglobin levels and anemia. We used linear regression models and modified Poisson regression with robust error variance to examine the associations of particulate matter (PM2.5) and nitrogen dioxide (NO2) with hemoglobin concentrations and prevalence of anemia, respectively, among 4121 older Americans enrolled in the National Social Life, Health, and Aging Project. We estimated participant-specific exposures to PM2.5 using spatio-temporal models, and to NO2 using nearest measurements from the Environmental Protection Agency's Air Quality System. Hemoglobin levels were measured for participants in each of two data collection waves from dried blood spots. Anemia was defined using World Health Organization hemoglobin-based criteria of <13 and <12 g/dL for men and women, respectively. Models were adjusted for age, sex, smoking status, race, income, education, neighborhood socioeconomic status, region, urbanicity and medication use. Mediation by C-reactive protein (CRP), a marker of systemic inflammation, was also investigated. An inter-quartile range (IQR, 3.9 μg/m3) increase in the one-year moving average PM2.5 was positively associated with anemia prevalence (prevalence ratio, or PR 1.33, 95% CI: 1.23, 1.45) and with a decrease in average hemoglobin of 0.81 g/dL (p<0.001). Similarly, an IQR (9.6 ppb) increase in NO2 was associated with anemia prevalence (PR 1.43, 95% CI: 1.25, 1.63) and a decrease in average hemoglobin of 0.81 g/dL (p<0.001). Strong dose-response relationships were identified for both pollutants. Mediation of the effect of PM2.5 by CRP was also identified (p=0.007). Air pollution exposures were significantly associated with increased prevalence of anemia and decreased hemoglobin levels in a cohort of older Americans. If causal, these associations could indicate that chronic air pollution exposure is an important risk factor for anemia in older adults. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
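The "modified Poisson regression with robust error variance" used above is conventionally a log-link Poisson GLM applied to a binary outcome with sandwich standard errors, so the exponentiated coefficient is a prevalence ratio. A minimal statsmodels sketch on simulated data (covariates, sample size, and effect sizes are all hypothetical) follows.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated analysis data set (all variables and effects are hypothetical)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pm25": rng.normal(10, 3.9, 500),          # one-year moving-average PM2.5
    "age": rng.integers(57, 86, 500),
    "female": rng.integers(0, 2, 500),
})
df["hemoglobin"] = 14 - 0.05 * df["pm25"] + rng.normal(0, 1.2, 500)
df["anemia"] = (df["hemoglobin"] < 12.5).astype(int)

# Linear regression for hemoglobin concentration
lin = smf.ols("hemoglobin ~ pm25 + age + female", df).fit()

# Modified Poisson regression: log-link Poisson GLM with robust (sandwich) errors
pois = smf.glm("anemia ~ pm25 + age + female", df,
               family=sm.families.Poisson()).fit(cov_type="HC1")
iqr = 3.9
prevalence_ratio_per_iqr = np.exp(pois.params["pm25"] * iqr)
```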
Schaeffel, Frank; Mathis, Ute; Brüggemann, Gunther
2007-07-01
To provide a framework for typical refractive development, as measured without cycloplegia with a commercial infrared photorefractor, and to evaluate the usefulness of screening for refractive errors, we retrospectively analyzed the data of a large number of unselected children of different ages in a pediatric practice in Tuebingen, Germany. During the standard regular preventive examinations that are performed in 80% to 90% of the young children in Germany by a pediatrician (the German "U1 to U9" system), 736 children were also measured with the first-generation PowerRefractor (made by MCS, Reutlingen, Germany, but no longer available in this version). Of those, 172 were also measured with +3 D spectacles to find out whether this helps detect hyperopia. Children with more than +2 D of hyperopia or astigmatism, more than 1.5 D of anisometropia, or more than 1 D of myopia in the second year of life were referred to an eye care specialist. The actions taken by the eye care specialist were used to evaluate the merits of the screening. The average noncycloplegic spherical refractive errors in the right eyes declined linearly from +0.93 to +0.62 D over the first 6 years (p < 0.001), between 1.5 and 0.5 D less hyperopic than in published studies with cycloplegic retinoscopy. As expected, +3 D spectacle lenses moved the refractions into the myopic direction, but this shift was not smaller in hyperopic children. The average negative cylinder magnitudes declined from -0.89 to 0.48 D (linear regression: p < 0.001). The J0 components displayed high correlations in both eyes (p < 0.001) but the J45 components did not. The average absolute anisometropias (difference of spheres) declined from 0.37 to 0.23 (linear regression: p < 0.001). Of the 736 children, 85 (11.5%) were referred to an eye care specialist. Of these, 52 received spectacles (61.2%), 14 (16.4%) were identified as "at risk" and remained under observation, and 18 (21.2%) were considered "false-positive." Noncycloplegic photorefraction provides considerably less hyperopic readings than retinoscopy under cycloplegia. Additional refractions performed through binocular +3-D lenses did not facilitate detection of hyperopia. With the referral criteria above, 11% of the children were referred to an eye care specialist, but with a 20% false-positive rate. The screening had some power to identify children at risk but the number of false-negatives remained uncertain.
Weather explains high annual variation in butterfly dispersal
Rytteri, Susu; Heikkinen, Risto K.; Heliölä, Janne; von Bagh, Peter
2016-01-01
Weather conditions fundamentally affect the activity of short-lived insects. Annual variation in weather is therefore likely to be an important determinant of their between-year variation in dispersal, but conclusive empirical studies are lacking. We studied whether the annual variation of dispersal can be explained by the flight season's weather conditions in a Clouded Apollo (Parnassius mnemosyne) metapopulation. This metapopulation was monitored using the mark–release–recapture method for 12 years. Dispersal was quantified for each monitoring year using three complementary measures: emigration rate (fraction of individuals moving between habitat patches), average residence time in the natal patch, and average distance moved. There was much variation both in dispersal and average weather conditions among the years. Weather variables significantly affected the three measures of dispersal and together with adjusting variables explained 79–91% of the variation observed in dispersal. Different weather variables became selected in the models explaining variation in three dispersal measures apparently because of the notable intercorrelations. In general, dispersal rate increased with increasing temperature, solar radiation, proportion of especially warm days, and butterfly density, and decreased with increasing cloudiness, rainfall, and wind speed. These results help to understand and model annually varying dispersal dynamics of species affected by global warming. PMID:27440662
Highly-resolved numerical simulations of bed-load transport in a turbulent open-channel flow
NASA Astrophysics Data System (ADS)
Vowinckel, Bernhard; Kempe, Tobias; Nikora, Vladimir; Jain, Ramandeep; Fröhlich, Jochen
2015-11-01
The study presents the analysis of phase-resolving Direct Numerical Simulations of a horizontal turbulent open-channel flow laden with a large number of spherical particles. These particles have a mobility close to their threshold of incipient motion and are transported in bed-load mode. The coupling of the fluid phase with the particles is realized by an Immersed Boundary Method. The Double-Averaging Methodology is applied for the first time, convoluting the data into a handy set of quantities averaged in time and space to describe the most prominent flow features. In addition, a systematic study elucidates the impact of mobility and sediment supply on the pattern formation of particle clusters in a very large computational domain. A detailed description of fluid quantities links the developed particle patterns to the enhancement of turbulence and to a modified hydraulic resistance. Conditional averaging is applied to erosion events, providing the processes involved in incipient particle motion. Furthermore, the detection of moving particle clusters as well as their surrounding flow field is addressed by a moving frame analysis. Funded by German Research Foundation (DFG), project FR 1593/5-2, computational time provided by ZIH Dresden, Germany, and JSC Juelich, Germany.
Use of streamflow data to estimate base flow/ground-water recharge for Wisconsin
Gebert, W.A.; Radloff, M.J.; Considine, E.J.; Kennedy, J.L.
2007-01-01
The average annual base flow/recharge was determined for streamflow-gaging stations throughout Wisconsin by base-flow separation. A map of the State was prepared that shows the average annual base flow for the period 1970-99 for watersheds at 118 gaging stations. Trend analysis was performed on 22 of the 118 streamflow-gaging stations that had long-term records, unregulated flow, and provided areal coverage of the State. The analysis found that a statistically significant increasing trend was occurring for watersheds where the primary land use was agriculture. Most gaging stations where the land cover was forest had no significant trend. A method to estimate the average annual base flow at ungaged sites was developed by multiple-regression analysis using basin characteristics. The equation with the lowest standard error of estimate, 9.5%, has drainage area, soil infiltration, and base-flow factor as independent variables. To determine the average annual base flow for smaller watersheds, estimates were made at low-flow partial-record stations in 3 of the 12 major river basins in Wisconsin. Regression equations were developed for each of the three major river basins using basin characteristics. Drainage area, soil infiltration, basin storage, and base-flow factor were the independent variables in the regression equations with the lowest standard error of estimate. The standard error of estimate ranged from 17% to 52% for the three river basins. © 2007 American Water Resources Association.
Driving-forces model on individual behavior in scenarios considering moving threat agents
NASA Astrophysics Data System (ADS)
Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia
2017-09-01
The individual behavior model is a contributory factor to improve the accuracy of agent-based simulation in different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks caused by attackers with close-range weapons (e.g., sword, stick). At the same time, many existing behavior models lack validation from cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model considering scenarios including moving threat agents. An experiment was conducted to validate the key components of the model. Then the model is compared with an advanced Elliptical Specification II social force model, by calculating the fitting errors between the simulated and experimental trajectories, and being applied to simulate a specific circumstance. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and the standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model could be used to simulate individual behavior when analyzing the risk of specific scenarios using agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.
Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo
2013-05-06
A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit high temporal correlation between successive video frames. Here, this concept of motion compensation is applied for the first time to the N-LUT based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase of the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method were found to be reduced to 86.95%, 86.53% and 34.99%, 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
Fischer, A; Friggens, N C; Berry, D P; Faverdin, P
2018-07-01
The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include both, model fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted, in one the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation average net energy intake (NEI) on lactation average milk energy output, average metabolic BW, as well as lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept using fortnight repeated measures for the variables. This method split the predicted NEI in two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows; all fed a constant energy-rich diet. Mixed models fitting cow-specific intercept and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation average model or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.
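One way to realize the cow-specific random regressions described above is a linear mixed model with a random intercept and slope for each cow; the cow-specific part of the prediction then plays the role of the efficiency component. The statsmodels sketch below uses simulated fortnightly data with only two energy sinks and hypothetical variable names, so it is a simplified stand-in for the authors' full model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated fortnightly records for 119 cows over 17 fortnights (hypothetical names)
rng = np.random.default_rng(0)
n_cows, n_fort = 119, 17
df = pd.DataFrame({
    "cow": np.repeat(np.arange(n_cows), n_fort),
    "milk_e": rng.normal(100, 15, n_cows * n_fort),   # milk energy output
    "mbw": rng.normal(130, 8, n_cows * n_fort),       # metabolic body weight
})
df["nei"] = 20 + 1.1 * df["milk_e"] + 0.5 * df["mbw"] + rng.normal(0, 8, len(df))

# Population-level fixed regression plus a cow-specific random intercept and slope
# on milk energy; the cow-specific deviations stand in for the efficiency component.
model = smf.mixedlm("nei ~ milk_e + mbw", df, groups="cow", re_formula="~milk_e")
fit = model.fit()
cow_effects = fit.random_effects   # dict of per-cow deviations (intercept, slope)
```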
Teymuri, Ghulam Heidar; Sadeghian, Marzieh; Kangavari, Mehdi; Asghari, Mehdi; Madrese, Elham; Abbasinia, Marzieh; Ahmadnezhad, Iman; Gholizadeh, Yavar
2013-01-01
Background: One of the significant dangers that threaten people's lives is the increased risk of accidents. Annually, more than 1.3 million people die around the world as a result of accidents, and it has been estimated that approximately 300 deaths occur daily due to traffic accidents in the world, with more than 50% of that number being people who were not even passengers in the cars. The aim of this study was to examine traffic accidents in Tehran and forecast the number of future accidents using a time-series model. Methods: The study was a cross-sectional study that was conducted in 2011. The sample population was all traffic accidents that caused death and physical injuries in Tehran in 2010 and 2011, as registered in the Tehran Emergency ward. The present study used Minitab 15 software to provide a description of accidents in Tehran for the specified time period as well as those that occurred during April 2012. Results: The results indicated that the average number of daily traffic accidents in Tehran in 2010 was 187 with a standard deviation of 83.6. In 2011, there was an average of 180 daily traffic accidents with a standard deviation of 39.5. One-way analysis of variance indicated that the average number of accidents in the city was different for different months of the year (P < 0.05). Most of the accidents occurred in March, July, August, and September. Thus, more accidents occurred in the summer than in the other seasons. The number of accidents for April 2012 was predicted based on an autoregressive moving average (ARMA) model. The number of accidents displayed a seasonal trend. The prediction of the number of accidents in the city during April of 2012 indicated that a total of 4,459 accidents would occur, with a mean of 149 accidents per day during these three months. Conclusion: The number of accidents in Tehran displayed a seasonal trend, and the number of accidents was different for different seasons of the year. PMID:26120405
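For readers who want to reproduce this kind of forecast outside Minitab, the statsmodels sketch below fits an ARMA model to simulated daily accident counts and projects the next 30 days; the ARMA(2,1) order, the simulated seasonality, and the horizon are placeholders rather than the study's actual specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Simulated daily accident counts for 2010-2011 with a mild seasonal cycle
idx = pd.date_range("2010-01-01", "2011-12-31", freq="D")
rng = np.random.default_rng(0)
season = 30 * np.sin(2 * np.pi * idx.dayofyear / 365.25)
counts = pd.Series(185 + season + rng.normal(0, 20, len(idx)), index=idx)

# ARMA(2,1) fitted as an ARIMA with d = 0; forecast the next 30 days
model = ARIMA(counts, order=(2, 0, 1)).fit()
forecast = model.forecast(steps=30)
predicted_month_total = forecast.sum()
```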
A study of video frame rate on the perception of moving imagery detail
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Chuang, Sherry L.
1993-01-01
The rate at which each frame of color moving video imagery is displayed was varied in small steps to determine the minimal acceptable frame rate for life scientists viewing white rats within a small enclosure. Two 25-second-long scenes (slow and fast animal motions) were evaluated by nine NASA principal investigators and animal care technicians. The mean minimum acceptable frame rate across these subjects was 3.9 fps for both the slow and the fast moving animal scenes. The highest single-trial frame rates averaged across all subjects for the slow and the fast scenes were 6.2 and 4.8 fps, respectively. Further research is called for in which frame rate, image size, and color/gray scale depth are covaried during the same observation period.
Haas, Patrick J; Bishop, Charles E; Gao, Yan; Griswold, Michael E; Schweinfurth, John M
2016-10-01
To evaluate the relationships between measures of physical activity and hearing in the Jackson Heart Study. Prospective cohort study. We assessed hearing in 1,221 Jackson Heart Study participants who also had validated physical activity questionnaire data on file. Hearing thresholds were measured across frequency octaves from 250 to 8,000 Hz, and various frequency pure-tone averages (PTAs) were constructed, including PTA4 (average of 500, 1,000, 2,000, and 4,000 Hz), PTA-high (average of 4,000 and 8,000 Hz), PTA-mid (average of 1,000 and 2,000 Hz), and PTA-low (average of 250 and 500 Hz). Hearing loss was defined for pure tones and pure-tone averages as >25 dB HL in either ear and averaged between the ears. Associations between physical activity and hearing were estimated using linear regression, reporting changes in decibel hearing level, and logistic regression, reporting odds ratios (OR) of hearing loss. Physical activity exhibited a statistically significant but small inverse relationship with PTA4, -0.20 dB HL per doubling of activity (95% confidence interval [CI]: -0.35, -0.04; P = .016), as well as with PTA-low and pure tones at 250, 2,000, and 4,000 Hz in adjusted models. Multivariable logistic regression modeling supported a decrease in the odds of high-frequency hearing loss among participants who reported at least some moderate weekly physical activity (PTA-high, OR: 0.69 [95% CI: 0.52, 0.92]; P = .011 and 4000 Hz, OR: 0.75 [95% CI: 0.57, 0.99]; P = .044). Our study provides further evidence that physical activity is related to better hearing; however, the clinical significance of this relationship cannot be estimated given the nature of the cross-sectional study design. Level of evidence: 2b. Laryngoscope, 126:2376-2381, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
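A brief sketch of how the pure-tone averages and the "per doubling of activity" coefficient can be obtained: regressing on log2 of the activity score gives a slope in dB HL per doubling, and a logistic model on a >25 dB HL indicator gives the odds ratio. The thresholds and activity scores below are simulated and the models are unadjusted, unlike the study's covariate-adjusted analyses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated audiometric thresholds (dB HL) and weekly activity scores for 1,221 subjects
rng = np.random.default_rng(0)
df = pd.DataFrame({f"hz{f}": rng.normal(20, 8, 1221)
                   for f in (250, 500, 1000, 2000, 4000, 8000)})
df["activity"] = rng.uniform(1, 64, 1221)

# Pure-tone averages as defined in the abstract
df["pta4"] = df[["hz500", "hz1000", "hz2000", "hz4000"]].mean(axis=1)
df["pta_high"] = df[["hz4000", "hz8000"]].mean(axis=1)

# "dB HL per doubling of activity": regress on log2(activity)
df["log2_act"] = np.log2(df["activity"])
linear = smf.ols("pta4 ~ log2_act", df).fit()
db_per_doubling = linear.params["log2_act"]

# Odds of high-frequency hearing loss (>25 dB HL) by activity
df["hl_high"] = (df["pta_high"] > 25).astype(int)
logit = smf.logit("hl_high ~ log2_act", df).fit(disp=False)
or_per_doubling = np.exp(logit.params["log2_act"])
```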
Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.
Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih
2016-10-01
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
Principal regression analysis and the index leverage effect
NASA Astrophysics Data System (ADS)
Reigneron, Pierre-Alain; Allez, Romain; Bouchaud, Jean-Philippe
2011-09-01
We revisit the index leverage effect, which can be decomposed into a volatility effect and a correlation effect. We investigate the latter using a matrix regression analysis, which we call 'Principal Regression Analysis' (PRA) and for which we provide some analytical (using Random Matrix Theory) and numerical benchmarks. We find that downward index trends increase the average correlation between stocks (as measured by the most negative eigenvalue of the conditional correlation matrix), and make the market mode more uniform. Upward trends, on the other hand, also increase the average correlation between stocks but rotate the corresponding market mode away from uniformity. There are two time scales associated with these effects, a short one on the order of a month (20 trading days), and a longer time scale on the order of a year. We also find indications of a leverage effect for sectorial correlations as well, which reveals itself in the second and third mode of the PRA.
REVIEW ARTICLE: Hither and yon: a review of bi-directional microtubule-based transport
NASA Astrophysics Data System (ADS)
Gross, Steven P.
2004-06-01
Active transport is critical for cellular organization and function, and impaired transport has been linked to diseases such as neuronal degeneration. Much long-distance transport in cells uses opposite-polarity molecular motors of the kinesin and dynein families to move cargos along microtubules. It is increasingly clear that many cargos are moved by both sets of motors, and frequently reverse course. This review compares this bi-directional transport to the better-studied uni-directional transport. It discusses some bi-directionally moving cargos, and critically evaluates three different physical models for how such transport might occur. It then considers the evidence for the number of active motors per cargo, and how the net or average direction of transport might be controlled. The likelihood of a complex linking the activities of kinesin and dynein is also discussed. The paper concludes by reviewing elements of apparent universality between different bi-directionally moving cargos and by briefly considering possible reasons for the existence of bi-directional transport.
Random walk of passive tracers among randomly moving obstacles.
Gori, Matteo; Donato, Irene; Floriani, Elena; Nardecchia, Ilaria; Pettini, Marco
2016-04-14
This study is mainly motivated by the need of understanding how the diffusion behavior of a biomolecule (or even of a larger object) is affected by other moving macromolecules, organelles, and so on, inside a living cell, whence the possibility of understanding whether or not a randomly walking biomolecule is also subject to a long-range force field driving it to its target. By means of the Continuous Time Random Walk (CTRW) technique the topic of random walk in random environment is here considered in the case of a passively diffusing particle among randomly moving and interacting obstacles. The relevant physical quantity which is worked out is the diffusion coefficient of the passive tracer which is computed as a function of the average inter-obstacles distance. The results reported here suggest that if a biomolecule, let us call it a test molecule, moves towards its target in the presence of other independently interacting molecules, its motion can be considerably slowed down.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2013-01-01
Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadal- and SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.
Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction
NASA Astrophysics Data System (ADS)
Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.
2012-12-01
The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect their system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of this data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with ARMA(1,2) errors were fit to the observations. Preliminary model validation exercises at a 30-day forecast horizon show that the ARMA error models generally improve the predictive skill of the linear regression rating curves. Skill seems to vary based on the ambient hydrologic conditions at the onset of the forecast. For example, ARMA error model forecasts issued before a high flow/turbidity event do not show significant improvements over the rating curve approach. However, ARMA error model forecasts issued during the "falling limb" of the hydrograph are significantly more accurate than rating curves for both single day and accumulated event predictions. In order to assist in reservoir operations decisions associated with turbidity events and general water supply reliability, DEP has initiated design of an Operations Support Tool (OST). OST integrates a reservoir operations model with 2D hydrodynamic water quality models and a database compiling near-real-time data sources and hydrologic forecasts. Currently, OST uses conventional flow-turbidity rating curves and hydrologic forecasts for predictive turbidity inputs. Given the improvements in predictive skill over traditional rating curves, the ARMA error models are currently being evaluated as an addition to DEP's Operations Support Tool.
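A rating-curve regression with ARMA(1,2) errors of the kind described can be expressed as a SARIMAX model with log flow supplied as an exogenous regressor. In the sketch below the flow and turbidity series are simulated and the persistence flow "forecast" is a stand-in for an actual hydrologic forecast, so it illustrates the model form rather than DEP's operational setup.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Simulated daily log-flow and log-turbidity series
rng = np.random.default_rng(0)
n = 1000
log_q = pd.Series(np.cumsum(rng.normal(0, 0.05, n)) + 3.0)
log_turb = 0.8 * log_q - 1.0 + rng.normal(0, 0.3, n)

# Rating-curve regression (log turbidity ~ log flow) with ARMA(1,2) errors
model = SARIMAX(log_turb, exog=log_q, order=(1, 0, 2), trend="c").fit(disp=False)

# 30-day-ahead prediction given a flow forecast (persistence of the last value here)
future_q = np.repeat(log_q.iloc[-1], 30).reshape(-1, 1)
pred_log_turb = model.get_forecast(steps=30, exog=future_q).predicted_mean
```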
Quantile Regression in the Study of Developmental Sciences
Petscher, Yaacov; Logan, Jessica A. R.
2014-01-01
Linear regression analysis is one of the most common techniques applied in developmental research, but only allows for an estimate of the average relations between the predictor(s) and the outcome. This study describes quantile regression, which provides estimates of the relations between the predictor(s) and outcome, but across multiple points of the outcome’s distribution. Using data from the High School and Beyond and U.S. Sustained Effects Study databases, quantile regression is demonstrated and contrasted with linear regression when considering models with: (a) one continuous predictor, (b) one dichotomous predictor, (c) a continuous and a dichotomous predictor, and (d) a longitudinal application. Results from each example exhibited the differential inferences which may be drawn using linear or quantile regression. PMID:24329596
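To make the contrast concrete, the following statsmodels example fits ordinary least squares and quantile regressions at the 10th, 50th, and 90th percentiles to simulated heteroscedastic data; the data-generating process is invented for illustration and is not drawn from the databases analyzed above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated heteroscedastic data: the spread of y grows with x
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.uniform(0, 10, 500)})
df["y"] = 2 + 0.5 * df["x"] + rng.normal(0, 1 + 0.3 * df["x"], 500)

# OLS estimates the average relation; quantile regression estimates it at chosen quantiles
ols_fit = smf.ols("y ~ x", df).fit()
quantile_fits = {q: smf.quantreg("y ~ x", df).fit(q=q) for q in (0.1, 0.5, 0.9)}
slopes = {q: fit.params["x"] for q, fit in quantile_fits.items()}
```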
Early Home Activities and Oral Language Skills in Middle Childhood: A Quantile Analysis
ERIC Educational Resources Information Center
Law, James; Rush, Robert; King, Tom; Westrupp, Elizabeth; Reilly, Sheena
2018-01-01
Oral language development is a key outcome of elementary school, and it is important to identify factors that predict it most effectively. Commonly researchers use ordinary least squares regression with conclusions restricted to average performance conditional on relevant covariates. Quantile regression offers a more sophisticated alternative.…
A 12-Year Analysis of Nonbattle Injury Among US Service Members Deployed to Iraq and Afghanistan.
Le, Tuan D; Gurney, Jennifer M; Nnamani, Nina S; Gross, Kirby R; Chung, Kevin K; Stockinger, Zsolt T; Nessen, Shawn C; Pusateri, Anthony E; Akers, Kevin S
2018-05-30
Nonbattle injury (NBI) among deployed US service members increases the burden on medical systems and results in high rates of attrition, affecting the available force. The possible causes and trends of NBI in the Iraq and Afghanistan wars have, to date, not been comprehensively described. To describe NBI among service members deployed to Iraq and Afghanistan, quantify absolute numbers of NBIs and proportion of NBIs within the Department of Defense Trauma Registry, and document the characteristics of this injury category. In this retrospective cohort study, data from the Department of Defense Trauma Registry on 29 958 service members injured in Iraq and Afghanistan from January 1, 2003, through December 31, 2014, were obtained. Injury incidence, patterns, and severity were characterized by battle injury and NBI. Trends in NBI were modeled using time series analysis with autoregressive integrated moving average and the weighted moving average method. Statistical analysis was performed from January 1, 2003, to December 31, 2014. Primary outcomes were proportion of NBIs and the changes in NBI over time. Among 29 958 casualties (battle injury and NBI) analyzed, 29 003 were in men and 955 were in women; the median age at injury was 24 years (interquartile range, 21-29 years). Nonbattle injury caused 34.1% of total casualties (n = 10 203) and 11.5% of all deaths (206 of 1788). Rates of NBI were higher among women than among men (63.2% [604 of 955] vs 33.1% [9599 of 29 003]; P < .001) and in Operation New Dawn (71.0% [298 of 420]) and Operation Iraqi Freedom (36.3% [6655 of 18 334]) compared with Operation Enduring Freedom (29.0% [3250 of 11 204]) (P < .001). A higher proportion of NBIs occurred in members of the Air Force (66.3% [539 of 810]) and Navy (48.3% [394 of 815]) than in members of the Army (34.7% [7680 of 22 154]) and Marine Corps (25.7% [1584 of 6169]) (P < .001). Leading mechanisms of NBI included falls (2178 [21.3%]), motor vehicle crashes (1921 [18.8%]), machinery or equipment accidents (1283 [12.6%]), blunt objects (1107 [10.8%]), gunshot wounds (728 [7.1%]), and sports (697 [6.8%]), causing predominantly blunt trauma (7080 [69.4%]). The trend in proportion of NBIs did not decrease over time, remaining at approximately 35% (by weighted moving average) after 2006 and approximately 39% by autoregressive integrated moving average. Assuming stable battlefield conditions, the autoregressive integrated moving average model estimated that the proportion of NBIs from 2015 to 2022 would be approximately 41.0% (95% CI, 37.8%-44.3%). In this study, approximately one-third of injuries during the Iraq and Afghanistan wars resulted from NBI, and the proportion of NBIs was steady for 12 years. Understanding the possible causes of NBI during military operations may be useful to target protective measures and safety interventions, thereby conserving fighting strength on the battlefield.
Living environment and mobility of older adults.
Cress, M Elaine; Orini, Stefania; Kinsler, Laura
2011-01-01
Older adults often elect to move into smaller living environments. Smaller living space and the addition of services provided by a retirement community (RC) may make living easier for the individual, but it may also reduce the amount of daily physical activity and ultimately reduce functional ability. With home size as an independent variable, the primary purpose of this study was to evaluate the daily physical activity and physical function of community dwellers (CD; n = 31) as compared to residents of an RC (n = 30). In this cross-sectional study design, assessments included: the Continuous Scale Physical Functional Performance-10 test, with a possible range of 0-100, higher scores reflecting better function; the Step Activity Monitor (StepWatch 3.1); a physical activity questionnaire; and the area of the home (in square meters). Groups were compared by one-way ANOVA. A general linear regression model was used to predict the number of steps per day at home. The level of significance was p < 0.05. Of the 61 volunteers (mean age: 79 ± 6.3 years; range: 65-94 years), the RC living space (68 ± 37.7 m2) was 62% smaller than the CD living space (182.8 ± 77.9 m2; p = 0.001). After correcting for age, the RC residents took fewer total steps per day excluding exercise (p = 0.03) and had lower function (p = 0.005) than the CD residents. On average, RC residents take 3,000 fewer steps per day and have approximately 60% of the living space of a CD resident. Home size and physical function were the primary predictors of the number of steps taken at home, as found using a general linear regression analysis. Copyright © 2010 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Putra, Alfian; Vassileva, Maria; Santo, Ryoko; Tsenkova, Roumina
2017-06-01
Cadmium (Cd) is a common industrial pollutant with a long biological half-life, which makes it a cumulative toxicant. Near-infrared spectroscopy has been successfully used for quick and accurate assessment of Cd content in agricultural materials, but the development of a quick detection method for ground and drinking water samples is of equal importance for pollution monitoring. Metals have no absorbance in the NIR spectral range, so the methods developed so far have focused on detection of metal-organic complexes. This study focuses on the use of the Aquaphotomics technique to measure Cd in aqueous solutions by analyzing the changes in water spectra that occur due to water-metal interaction. Measurements were performed with Cd (II) in 0.1 M HNO3, in the 680-1090 nm (water second and third overtones) and 1110-1800 nm (water first overtone) spectral regions, and were subjected to partial least-squares regression analysis. Cd concentrations from 1 mg L-1 to 10 mg L-1 could be predicted by this model with an average prediction correlation coefficient of 0.897. The model was tested with perturbations of temperature and the presence of other metals in the solution. The regression coefficient showed consistent peaks at 728, 752, 770, 780, 1362, 1430, 1444, 1472/1474 and 1484 nm under various perturbations, indicating that the metal influences the water spectra. The residual predictive deviation (RPD) values were greater than 2, indicating that the model is appropriate for practical use. The results suggest that this newly proposed approach is capable of detecting metal ions in a much simpler, rapid and reliable way.
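A minimal sketch of the partial least-squares step described above, assuming a matrix of absorbance spectra and reference Cd concentrations; the synthetic spectra, component count, and cross-validation scheme below stand in for the study's measurements.

```python
# Hedged sketch: PLS regression of NIR spectra against Cd concentration.
# Synthetic spectra stand in for real measurements; the component count
# and cross-validation setup are assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 351            # e.g. a 680-1090 nm region
cd = rng.uniform(1.0, 10.0, n_samples)        # reference values, mg L-1
noise = rng.normal(0.0, 0.01, (n_samples, n_wavelengths))
signal = np.outer(cd, np.sin(np.linspace(0, 3, n_wavelengths))) * 1e-3
spectra = noise + signal                      # absorbance-like matrix

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, cd, cv=10).ravel()
r, _ = pearsonr(cd, pred)
rpd = cd.std() / (cd - pred).std()            # residual predictive deviation
print(f"prediction r = {r:.3f}, RPD = {rpd:.2f}")
```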
Zhao, Desheng; Wang, Lulu; Cheng, Jian; Xu, Jun; Xu, Zhiwei; Xie, Mingyu; Yang, Huihui; Li, Kesheng; Wen, Lingying; Wang, Xu; Zhang, Heng; Wang, Shusi; Su, Hong
2017-03-01
Hand, foot, and mouth disease (HFMD) is one of the most common communicable diseases in China, and current climate change has been recognized as a significant contributor. Nevertheless, no reliable models have been put forward to predict the dynamics of HFMD cases based on short-term weather variations. The present study aimed to examine the association between weather factors and HFMD, and to explore the accuracy of a seasonal auto-regressive integrated moving average (SARIMA) model with local weather conditions in forecasting HFMD. Weather and HFMD data from 2009 to 2014 in Huainan, China, were used. A Poisson regression model combined with a distributed lag non-linear model (DLNM) was applied to examine the relationship between weather factors and HFMD. The forecasting model for HFMD was built using the SARIMA model. The results showed that temperature rise was significantly associated with an elevated risk of HFMD. However, no correlations between relative humidity, barometric pressure, or rainfall and HFMD were observed. SARIMA models with the temperature variable fitted the HFMD data better than the model without it (sR2 increased, while the BIC decreased), and the SARIMA (0, 1, 1)(0, 1, 0)52 offered the best fit for the HFMD data. In addition, the SARIMA model predicted HFMD case numbers with higher precision for males and scattered children than for females and nursery children. In conclusion, high temperature could increase the risk of contracting HFMD. A SARIMA model with a temperature variable can effectively improve forecast accuracy, which can provide valuable information for policy makers and public health practitioners to construct a best-fitting model and optimize HFMD prevention.
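The model form reported above can be expressed directly in statsmodels; the sketch below fits a weekly SARIMA (0, 1, 1)(0, 1, 0)52 model with temperature as an exogenous regressor, using simulated case counts and temperatures as placeholders for the Huainan data.

```python
# Hedged sketch: SARIMA (0,1,1)(0,1,0)_52 with mean temperature as an
# exogenous regressor. Weekly counts and temperatures are simulated.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
weeks = pd.date_range("2009-01-04", periods=312, freq="W")  # 2009-2014
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(312) / 52) + rng.normal(0, 2, 312)
cases = np.maximum(0, 50 + 3 * temp + rng.normal(0, 10, 312)).round()

model = SARIMAX(
    pd.Series(cases, index=weeks),
    exog=pd.Series(temp, index=weeks),
    order=(0, 1, 1),
    seasonal_order=(0, 1, 0, 52),
)
res = model.fit(disp=False)
print(res.summary().tables[1])   # MA(1) and temperature coefficients
print("BIC:", res.bic)
```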
The impact of alcohol policies on alcohol-attributable diseases in Taiwan-A population-based study.
Ying, Yung-Hsiang; Weng, Yung-Ching; Chang, Koyin
2017-11-01
Taiwan has some of the strictest alcohol-related driving laws in the world. However, its laws continue to be toughened to reduce the ever-increasing social cost of alcohol-related harm. This study assumes that alcohol-related driving laws show a spillover effect such that behavioral changes originally meant to apply behind the wheel come to affect drinking behavior in other contexts. The effects of alcohol driving laws and taxes on alcohol-related morbidity are assessed; incidence rates of alcohol-attributable diseases (AAD) serve as our measure of morbidity. Monthly incidence rates of alcohol-attributable diseases were calculated with data from the National Health Insurance Research Database (NHIRD) from 1996 to 2011. These rates were then submitted to intervention analyses using Seasonal Autoregressive Integrated Moving Average models (ARIMA) with multivariate adaptive regression splines (MARS). ARIMA is well-suited to time series analysis while MARS helps fit the regression model to the cubic curvature form of the irregular AAD incidence rates of hospitalization (AIRH). Alcoholic liver disease, alcohol abuse and dependence syndrome, and alcohol psychoses were the most common AADs in Taiwan. Compared to women, men had a higher incidence of AADs and their AIRH were more responsive to changes in the laws governing permissible blood alcohol. The adoption of tougher blood alcohol content (BAC) laws had significant effects on AADs, controlling for overall consumption of alcoholic beverages. Blood alcohol level laws and alcohol taxation effectively reduced alcohol-attributable morbidities with the exception of alcohol dependence and abuse, a disease to which middle-aged, lower income people are particularly susceptible. Attention should be focused on this cohort to protect this vulnerable population. Copyright © 2017 Elsevier B.V. All rights reserved.
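For readers who want a runnable starting point, the sketch below shows a simplified interrupted time-series (intervention) analysis with a step dummy for a BAC law change inside a seasonal ARIMA model. It is only a stand-in for the ARIMA-with-MARS specification above; the law date, series, and model order are assumptions.

```python
# Hedged sketch: seasonal ARIMA with a step-intervention dummy for a BAC law
# change. This simplifies the ARIMA + MARS approach described above; the
# dates, series, and model order are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
months = pd.date_range("1996-01", periods=192, freq="MS")   # 1996-2011
law_step = (months >= "2005-01-01").astype(float)           # assumed law date
airh = (20 + 2 * np.sin(2 * np.pi * np.arange(192) / 12)
        - 3 * law_step + rng.normal(0, 1.5, 192))           # incidence-rate stand-in

res = SARIMAX(pd.Series(airh, index=months), exog=law_step,
              order=(1, 0, 1), seasonal_order=(1, 0, 0, 12)).fit(disp=False)
print("estimated step effect:", round(float(res.params["x1"]), 2))
```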
Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia
NASA Astrophysics Data System (ADS)
Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg
2013-03-01
Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia, were predicted using two different data-driven techniques, the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). A multi-linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level. The input combination comprising the current sea level as well as the five previous values was found to be optimal. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian, and two-sided Gaussian, were tested for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient, and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error, and variance accounted for were used as comparison criteria. The results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while the adaptive learning rate and Levenberg-Marquardt algorithms were most suitable for training the ANN models. Consequently, the ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose over all prediction intervals.
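A minimal sketch of the MLR lag-selection/baseline step described above, using a synthetic hourly tidal series; the lag count, forecast horizon, and train/test split are assumptions.

```python
# Hedged sketch: build lagged inputs from an hourly sea-level series and fit
# a multi-linear regression baseline. The tidal series here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(3)
t = np.arange(5000)
level = (1.5 * np.sin(2 * np.pi * t / 12.42)      # M2-like constituent
         + 0.4 * np.sin(2 * np.pi * t / 24)
         + rng.normal(0, 0.05, t.size))

n_lags, horizon = 6, 1                            # current + 5 previous values; 1 h ahead
n = level.size
X = np.column_stack([level[k : n - horizon - (n_lags - 1) + k] for k in range(n_lags)])
y = level[n_lags - 1 + horizon :]
split = int(0.8 * len(y))

mlr = LinearRegression().fit(X[:split], y[:split])
pred = mlr.predict(X[split:])
print("RMSE:", mean_squared_error(y[split:], pred) ** 0.5)
print("R2:", r2_score(y[split:], pred))
```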
An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones
Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han
2015-01-01
Wi-Fi indoor positioning algorithms experience large positioning errors and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel “quasi-dynamic” Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the “process-level” fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, and includes three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment was carried out for verification in a typical indoor environment; the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move. PMID:26690447
Commercial vehicle fleet management and information systems. Phase 1 : interim report
DOT National Transportation Integrated Search
1998-01-01
The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...
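A minimal sketch of a three-quarter moving composite index computed as a weighted average of three consecutive quarterly indices; the quarterly values and weights are illustrative assumptions.

```python
# Hedged sketch: three-quarter moving composite index as a weighted average
# of three consecutive quarterly indices. Values and weights are assumptions.
import pandas as pd

quarterly_index = pd.Series([101.2, 103.5, 104.1, 106.0, 107.3, 108.8])
weights = [0.25, 0.35, 0.40]   # assumed weights; sum to 1
moving_composite = quarterly_index.rolling(3).apply(lambda q: (q * weights).sum())
print(moving_composite)
```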
Castro, Marcia C; Tsuruta, Atsuko; Kanamori, Shogo; Kannady, Khadija; Mkude, Sixbert
2009-04-08
Historically, environmental management has brought important achievements in malaria control and overall improvements of health conditions. Currently, however, implementation is often considered not to be cost-effective. A community-based environmental management for malaria control was conducted in Dar es Salaam between 2005 and 2007. After community sensitization, two drains were cleaned followed by maintenance. This paper assessed the impact of the intervention on community awareness, prevalence of malaria infection, and Anopheles larval presence in drains. A survey was conducted in neighbourhoods adjacent to cleaned drains; for comparison, neighbourhoods adjacent to two drains treated with larvicides and two drains under no intervention were also surveyed. Data routinely collected by the Urban Malaria Control Programme were also used. Diverse impacts were evaluated through comparison of means, odds ratios (OR), logistic regression, and time trends calculated by moving averages. Individual awareness of health risks and intervention goals were significantly higher among sensitized neighbourhoods. A reduction in the odds of malaria infection during the post-cleaning period in intervention neighbourhoods was observed when compared to the pre-cleaning period (OR = 0.12, 95% CI 0.05-0.3, p < 0.001). During the post-cleaning period, a higher risk of infection (OR = 1.7, 95% CI 1.1-2.4, p = 0.0069) was observed in neighbourhoods under no intervention compared to intervention ones. Eighteen months after the initial cleaning, one of the drains was still clean due to continued maintenance efforts (it contained no waste materials and the water was flowing at normal velocity). A three-month moving average of the percentage of water habitats in that drain containing pupae and/or Anopheles larvae indicated a decline in larval density. In the other drain, lack of proper resources and local commitment limited success. Although environmental management was historically coordinated by authoritarian/colonial regimes or by industries/corporations, its successful implementation as part of an integrated vector management framework for malaria control under democratic governments can be possible if four conditions are observed: political will and commitment, community sensitization and participation, provision of financial resources for initial cleaning and structural repairs, and inter-sectoral collaboration. Such effort not only is expected to reduce malaria transmission, but has the potential to empower communities, improve health and environmental conditions, and ultimately contribute to poverty alleviation and sustainable development.
Social Inequality and Labor Force Participation.
ERIC Educational Resources Information Center
King, Jonathan
The labor force participation rates of whites, blacks, and Spanish-Americans, grouped by sex, are explained in a linear regression model fitted with 1970 U. S. Census data on Standard Metropolitan Statistical Area (SMSA). The explanatory variables are: average age, average years of education, vocational training rate, disabled rate, unemployment…
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2009-01-01
Yearly frequencies of North Atlantic basin tropical cyclones, their locations of origin, peak wind speeds, average peak wind speeds, lowest pressures, and average lowest pressures for the interval 1950-2008 are examined. The effects of El Nino and La Nina on the tropical cyclone parametric values are investigated. Yearly and 10-year moving average (10-yma) values of tropical cyclone parameters are compared against those of temperature and decadal-length oscillation, employing both linear and bi-variate analysis, and first differences in the 10-yma are determined. Discussion of the 2009 North Atlantic basin hurricane season, updating earlier results, is given.
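A minimal sketch of the 10-year moving average (10-yma) and its first differences on an annual count series; the counts below are simulated, not the 1950-2008 record.

```python
# Hedged sketch: centered 10-year moving average (10-yma) of an annual
# cyclone-count series and its first differences. Counts are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
years = np.arange(1950, 2009)
counts = pd.Series(rng.poisson(11, years.size), index=years)

yma10 = counts.rolling(window=10, center=True).mean()   # 10-yma
first_diff = yma10.diff()                               # year-to-year change in the 10-yma
print(pd.DataFrame({"count": counts, "10-yma": yma10,
                    "d(10-yma)": first_diff}).tail())
```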
Chess-playing epilepsy: a case report with video-EEG and back averaging.
Mann, M W; Gueguen, B; Guillou, S; Debrand, E; Soufflet, C
2004-12-01
A patient suffering from juvenile myoclonic epilepsy experienced myoclonic jerks, fairly regularly, while playing chess. The myoclonus appeared particularly when he had to plan his strategy, to choose between two solutions or while raising the arm to move a chess figure. Video-EEG-polygraphy was performed, with back averaging of the myoclonus registered during a chess match and during neuropsychological testing with Kohs cubes. The EEG spike wave complexes were localised in the fronto-central region. [Published with video sequences].
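A minimal sketch of jerk-locked back averaging, i.e. averaging EEG epochs time-locked to each myoclonus onset; the signal, sampling rate, epoch window, and event times below are simulated assumptions.

```python
# Hedged sketch: back averaging of EEG around EMG-marked myoclonus onsets.
# Signal, sampling rate, window, and event times are simulated placeholders.
import numpy as np

fs = 256                                                  # sampling rate (Hz), assumed
eeg = np.random.default_rng(5).normal(0, 10, 60 * fs)     # one channel, 60 s
jerk_samples = np.array([5, 12, 20, 33, 41, 52]) * fs     # myoclonus onsets (samples)

pre, post = int(0.2 * fs), int(0.1 * fs)                  # 200 ms before, 100 ms after
epochs = np.stack([eeg[s - pre:s + post] for s in jerk_samples])
back_average = epochs.mean(axis=0)                        # jerk-locked averaged EEG
print(back_average.shape)                                 # (pre + post,) samples
```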
Verity Salmon; Colleen Iversen; Peter Thornton; Ma
2017-03-01
Transect data is from point center quarter surveys for shrub density performed in July 2016 at the Kougarok hill slope located at Kougarok Road, Mile Marker 64. For each sample point along the transects, moving averages for shrub density and shrub basal area are provided along with GPS coordinates, average shrub height and active layer depth. The individual height, basal area, and species of surveyed shrubs are also included. Data upload will be completed January 2017.
2016-11-22
...compact at all conditions tested, as indicated by the overlap of OH and CH2O distributions. We developed analytical techniques for pseudo-Lagrangian ... condition in a constant-density flow requires that the flow divergence is zero, ∇ · u = 0. Three smoothing schemes were examined, a moving average (i.e. ...
Time series analysis of collective motions in proteins
NASA Astrophysics Data System (ADS)
Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.
2004-01-01
The dynamics of α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, which is the result of damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters, and random shock terms. Frequencies for the first 50 principal components are found to be in the 3-25 cm-1 range and are well correlated with the principal component indices and also with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions at two successive sampling times, showing the mode's tendency to stay close to a minimum. All four of these parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, which is driven by a random shock term considerably smaller than that for the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least in partial consistency with previous studies. The Gaussian-type distributions of the intermediate modes, called "semiconstrained" modes, are explained by asserting that their random walk behavior is not completely free but occurs between energy barriers.
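A minimal sketch of the kind of ARMA fit described above: an ARMA(2,1) model is fitted to a single principal-component trajectory, and an effective oscillation frequency and damping factor are read off the roots of the AR characteristic polynomial. The trajectory, sampling interval, and model order are assumptions.

```python
# Hedged sketch: ARMA(2,1) fit to one principal-component trajectory; the
# frequency and damping factor are derived from the AR-polynomial roots.
# The trajectory, sampling interval, and model order are simulated assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
dt_ps = 0.5                                   # sampling interval in ps (assumed)
n = 4000
phi1, phi2 = 1.6, -0.81                       # damped-oscillator-like AR(2) stand-in
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.normal()

res = ARIMA(x, order=(2, 0, 1)).fit()
a1, a2 = res.arparams
roots = np.roots([1.0, -a1, -a2])             # z**2 - a1*z - a2 = 0
z = roots[np.argmax(np.abs(roots.imag))]      # one of the complex-conjugate roots
freq_hz = abs(np.angle(z)) / (2 * np.pi * dt_ps * 1e-12)
freq_cm1 = freq_hz / 2.9979e10                # convert Hz to cm^-1
damping = -np.log(np.abs(z)) / dt_ps          # decay rate per ps
print(f"frequency ~ {freq_cm1:.1f} cm^-1, damping ~ {damping:.3f} ps^-1")
```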
Models for short term malaria prediction in Sri Lanka
Briët, Olivier JT; Vounatsou, Penelope; Gunawardena, Dissanayake M; Galappaththy, Gawrie NL; Amerasinghe, Priyanie H
2008-01-01
Background: Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods: Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall was assessed for its ability to improve prediction of selected (seasonal) ARIMA models. Results: The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion: Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large, at a minimum of 22% (for one of the districts) for one-month-ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed. PMID:18460204
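A minimal sketch of the simplest model class compared above, an exponentially weighted moving average (simple exponential smoothing) forecast of a monthly case series; the counts and smoothing setup are illustrative.

```python
# Hedged sketch: exponentially weighted moving average (simple exponential
# smoothing) forecast of a monthly case series. Counts are simulated.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

rng = np.random.default_rng(7)
months = pd.date_range("1995-01", periods=120, freq="MS")
cases = pd.Series(rng.poisson(200, 120) +
                  (50 * np.sin(2 * np.pi * np.arange(120) / 12)).astype(int),
                  index=months)

fit = SimpleExpSmoothing(cases, initialization_method="estimated").fit()
forecast = fit.forecast(4)            # one- to four-month-ahead forecasts
mape = np.mean(np.abs(fit.fittedvalues - cases) / cases) * 100
print(forecast.round(1))
print(f"in-sample MAPE: {mape:.1f}%")
```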
Gerber, Brian D.; Kendall, William L.
2017-01-01
Monitoring animal populations can be difficult. Limited resources often force monitoring programs to rely on unadjusted or smoothed counts as an index of abundance. Smoothing counts is commonly done using a moving-average estimator to dampen sampling variation. These indices are commonly used to inform management decisions, although their reliability is often unknown. We outline a process to evaluate the biological plausibility of annual changes in population counts and indices from a typical monitoring scenario and compare results with a hierarchical Bayesian time series (HBTS) model. We evaluated spring and fall counts, fall indices, and model-based predictions for the Rocky Mountain population (RMP) of Sandhill Cranes (Antigone canadensis) by integrating juvenile recruitment, harvest, and survival into a stochastic stage-based population model. We used simulation to evaluate population indices from the HBTS model and the commonly used 3-yr moving average estimator. We found counts of the RMP to exhibit biologically unrealistic annual change, while the fall population index was largely biologically realistic. HBTS model predictions suggested that the RMP changed little over 31 yr of monitoring, but the pattern depended on assumptions about the observational process. The HBTS model fall population predictions were biologically plausible if observed crane harvest mortality was compensatory up to natural mortality, as empirical evidence suggests. Simulations indicated that the predicted mean of the HBTS model was generally a more reliable estimate of the true population than population indices derived using a moving 3-yr average estimator. Practitioners could gain considerable advantages from modeling population counts using a hierarchical Bayesian autoregressive approach. Advantages would include: (1) obtaining measures of uncertainty; (2) incorporating direct knowledge of the observational and population processes; (3) accommodating missing years of data; and (4) forecasting population size.
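A minimal sketch contrasting the 3-yr moving-average estimator with a maximum-likelihood local-level (random walk plus observation noise) state-space smoother. This is a simplified, non-Bayesian stand-in for the hierarchical Bayesian time series model described above, and the counts are simulated.

```python
# Hedged sketch: smoothing an annual count index with a local-level state-space
# model and comparing it with the common 3-yr moving-average estimator.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(8)
true_pop = 20000 + np.cumsum(rng.normal(0, 200, 31))       # latent population
counts = pd.Series(true_pop + rng.normal(0, 1500, 31),     # noisy fall counts
                   index=np.arange(1984, 2015))

ma3 = counts.rolling(3, center=True).mean()                # 3-yr moving average
res = UnobservedComponents(counts, level="local level").fit(disp=False)
smoothed = res.smoothed_state[0]                           # estimated level
print(pd.DataFrame({"count": counts, "3-yr MA": ma3,
                    "state-space level": smoothed}).round(0).tail())
```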
A new image segmentation method based on multifractal detrended moving average analysis
NASA Astrophysics Data System (ADS)
Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le
2015-08-01
In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is calculated for every pixel and considered as the local feature of a surface. A multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is then defined using the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely the popular MFS-based and the more recent MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two methods. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms the other parameters both for the MF-DMS-based method with the centered case and for the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficiency and non-nutrient-deficiency areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficiency areas is much more severe than in non-nutrient-deficiency areas.
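A minimal sketch of the q = 2 detrending-moving-average building block behind MF-DMA, applied to a 1-D series rather than image pixels; the window sizes, θ handling, and test signal are assumptions.

```python
# Hedged sketch: detrending-moving-average (DMA) estimate of the Hurst exponent
# for a 1-D series, the q = 2 building block of MF-DMA (the paper applies the
# multifractal, per-pixel 2-D version). Windows and test signal are assumptions.
import numpy as np

def dma_hurst(x, windows, theta=0.5):
    """Estimate H from the scaling of the DMA fluctuation function F(n)."""
    y = np.cumsum(x - np.mean(x))                 # profile of the series
    F = []
    for n in windows:
        kernel = np.ones(n) / n
        ma = np.convolve(y, kernel, mode="valid") # moving average of the profile
        offset = int((n - 1) * (1 - theta))       # theta = 0 backward, 0.5 centered, 1 forward
        resid = y[offset:offset + ma.size] - ma   # detrended profile
        F.append(np.sqrt(np.mean(resid ** 2)))
    slope, _ = np.polyfit(np.log(windows), np.log(F), 1)
    return slope                                  # Hurst exponent estimate

rng = np.random.default_rng(9)
white_noise = rng.normal(size=10000)
windows = np.array([8, 16, 32, 64, 128, 256])
print("estimated H (white noise, expect ~0.5):",
      round(dma_hurst(white_noise, windows), 2))
```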
Annabi, Majid; Kebriaeezadeh, Abbas; Mohammadi, Timor; Marashi Shoshtari, Seyed Nasrolah; Abedin Dorkoosh, Farid; Pourreza, Abolghasem; Heydari, Hassan
2017-01-01
The aim of this study was to measure the potential of production and the capacity used in the pharmaceutical industry. Capacity use is the ratio of actual production to potential output, which reflects the gap between actual production and production capacity. Using econometric methods, a short-run translog cost function, together with cost-share functions for the factors of production, is estimated through seemingly unrelated regression equations (SURE), the multivariate regression analysis introduced by Zellner. Over the study period, capacity use decreased. Capacity use calculated as a weighted average also decreased and was much lower than the simple industry average. Average capacity utilization in the industry over the five years of the study was 57%, while the weighted-average figure was 37%. Enhancing economic potential requires proper use of resources, the creation of a favorable economic structure, and improved industry productivity. Given the large amount of unused capacity in the pharmaceutical industry, there is no need for further investment except in new areas, and additional investment would further change capacity use.
Waltemeyer, Scott D.
2006-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak-discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, an additional 13 years of peak-discharge data since a 1997 investigation, which used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high elevation, and 6, delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 82 of the 146 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution; use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, were then applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction for the 100-year peak discharge in region 8 was 53 percent. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation, and no distinction of floods produced from a high-elevation region was made in the 1997 investigation. Overall, the equations based on generalized least-squares regression techniques are considered to be more reliable than those in the 1997 report because of the increased length of record and improved GIS methods. Flood-frequency relations can be transferred to ungaged sites on the same stream by direct application of the regional regression equation or, for an ungaged site on a stream that has a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
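The transfer technique in the last sentence can be written as a one-line formula, Q_ungaged = Q_gaged (A_ungaged / A_gaged)^b; the sketch below applies it with illustrative numbers rather than values from the report.

```python
# Hedged sketch: drainage-area ratio transfer of a flood-frequency estimate
# from a gaged to an ungaged site on the same stream. Numbers are illustrative.
def transfer_peak_discharge(q_gaged, area_gaged, area_ungaged, exponent):
    """Q_ungaged = Q_gaged * (A_ungaged / A_gaged) ** b"""
    return q_gaged * (area_ungaged / area_gaged) ** exponent

q100_gaged = 850.0   # 100-yr peak discharge at the gage, cubic feet per second (assumed)
print(transfer_peak_discharge(q100_gaged, area_gaged=120.0,
                              area_ungaged=95.0, exponent=0.55))
```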
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coruh, M; Ewell, L; Demez, N
Purpose: To estimate the dose delivered to a moving lung tumor by proton therapy beams of different modulation types, and compare with Monte Carlo predictions. Methods: A radiology support devices (RSD) phantom was irradiated with therapeutic proton radiation beams using two different types of modulation: uniform scanning (US) and double scattered (DS). The Eclipse© dose plan was designed to deliver 1.00Gy to the isocenter of a static ∼3×3×3cm (27cc) tumor in the phantom with 100% coverage. The peak to peak amplitude of tumor motion varied from 0.0 to 2.5cm. The radiation dose was measured with an ion-chamber (CC-13) located within the tumor. The time required to deliver the radiation dose varied from an average of 65s for the DS beams to an average of 95s for the US beams. Results: The radiation dose varied from 100% (both US and DS) for the static tumor down to approximately 92% for the moving tumor. The ratio of US dose to DS dose ranged from approximately 1.01 for the static tumor down to 0.99 for the 2.5cm moving tumor. A Monte Carlo simulation using TOPAS included a lung tumor with 4.0cm of peak to peak motion. In this simulation, the dose received by the tumor varied by ∼40% as the period of this motion varied from 1s to 4s. Conclusion: The radiation dose deposited to a moving tumor was less than for a static tumor, as expected. At large (2.5cm) amplitudes, the DS proton beams gave a dose closer to the desired dose than the US beams, but equal within experimental uncertainty. TOPAS Monte Carlo simulation can give insight into the relationship between tumor motion and dose. This work was supported in part by the Philips corporation.
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently used to adjust for baseline characteristics in order to increase the precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators [1] when separate treatment-specific working regression models are correctly specified, and is at least as efficient as those estimators for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study comparing the finite sample performance of various methods, along with results from the analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
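A minimal sketch of standard covariate adjustment via separate treatment-specific outcome regressions (a regression/g-computation estimator, not the paper's empirical likelihood estimator), applied to simulated trial data and compared with the unadjusted difference in means.

```python
# Hedged sketch: covariate adjustment via separate treatment-specific outcome
# regressions on simulated randomized-trial data. This is a standard
# regression/g-computation estimator, not the paper's empirical likelihood method.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n = 500
x = rng.normal(size=(n, 3))                         # baseline covariates
a = rng.integers(0, 2, n)                           # randomized treatment indicator
y = 1.0 * a + x @ np.array([0.5, -0.3, 0.2]) + rng.normal(0, 1, n)

m1 = LinearRegression().fit(x[a == 1], y[a == 1])   # working model, treated arm
m0 = LinearRegression().fit(x[a == 0], y[a == 0])   # working model, control arm
ate_adjusted = np.mean(m1.predict(x) - m0.predict(x))
ate_unadjusted = y[a == 1].mean() - y[a == 0].mean()
print(f"unadjusted ATE: {ate_unadjusted:.3f}, adjusted ATE: {ate_adjusted:.3f}")
```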
Consistent and efficient processing of ADCP streamflow measurements
Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. A consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements, all independent of the ADCP manufacturer, are being developed in a software program that can process ADCP moving-boat discharge measurements regardless of the ADCP used to collect the data.
Streamflow record extension using power transformations and application to sediment transport
NASA Astrophysics Data System (ADS)
Moog, Douglas B.; Whiting, Peter J.; Thomas, Robert B.
1999-01-01
To obtain a representative set of flow rates for a stream, it is often desirable to fill in missing data or extend measurements to a longer time period by correlation to a nearby gage with a longer record. Linear least squares regression of the logarithms of the flows is a traditional and still common technique. However, its purpose is to generate optimal estimates of each day's discharge, rather than the population of discharges, for which it tends to underestimate variance. Maintenance-of-variance-extension (MOVE) equations [Hirsch, 1982] were developed to correct this bias. This study replaces the logarithmic transformation by the more general Box-Cox scaled power transformation, generating a more linear, constant-variance relationship for the MOVE extension. Combining the Box-Cox transformation with the MOVE extension is shown to improve accuracy in estimating order statistics of flow rate, particularly for the nonextreme discharges which generally govern cumulative transport over time. This advantage is illustrated by prediction of cumulative fractions of total bed load transport.
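A minimal sketch of the approach described above: Box-Cox transform both records, apply a MOVE.1-type maintenance-of-variance-extension using overlap-period statistics, and back-transform. The flow series, overlap length, and single-lambda pooling are assumptions.

```python
# Hedged sketch: record extension with a Box-Cox transformation followed by a
# MOVE.1-type maintenance-of-variance-extension step. Flow series are synthetic.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(11)
long_gage = rng.lognormal(mean=3.0, sigma=0.8, size=2000)        # index station
short_gage = 0.6 * long_gage[:500] * rng.lognormal(0, 0.2, 500)  # overlap period only

# Transform both records with a single lambda fitted at the index station
# (this pooling choice is an assumption).
x_all, lam = boxcox(long_gage)
y = boxcox(short_gage, lmbda=lam)
x = x_all[:500]                                   # index-station values in the overlap

# MOVE.1: match mean and standard deviation rather than least squares.
y_hat_all = y.mean() + (y.std(ddof=1) / x.std(ddof=1)) * (x_all - x.mean())
extended_flows = inv_boxcox(y_hat_all[500:], lam) # back-transformed extension
print(extended_flows[:5].round(1))
```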
Integrating WEPP into the WEPS infrastructure
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) and the Water Erosion Prediction Project (WEPP) share a common modeling philosophy, that of moving away from primarily empirically based models based on indices or "average conditions", and toward a more process based approach which can be evaluated using ac...
Model Identification of Integrated ARMA Processes
ERIC Educational Resources Information Center
Stadnytska, Tetiana; Braun, Simone; Werner, Joachim
2008-01-01
This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…