Sample records for average SARIMA model

  1. Monthly reservoir inflow forecasting using a new hybrid SARIMA genetic programming approach

    NASA Astrophysics Data System (ADS)

    Moeeni, Hamid; Bonakdari, Hossein; Ebtehaj, Isa

    2017-03-01

    Forecasting reservoir inflow is one of the most important components of water resources and hydroelectric system operation management. Seasonal autoregressive integrated moving average (SARIMA) models have been frequently used for predicting river flow, but they are linear and do not consider the random component of statistical data. To overcome this shortcoming, monthly inflow is predicted in this study with a new hybrid method (SARIMA-GEP) combining a SARIMA model with gene expression programming (GEP). To this end, a four-step process is employed. First, the monthly inflow datasets are pre-processed. Second, the datasets are modelled linearly with SARIMA. Third, the non-linearity of the residual series left by the linear modelling is evaluated. After confirming the non-linearity, the residuals are modelled in the fourth step with GEP. The proposed hybrid model is employed to predict the monthly inflow to the Jamishan Dam in west Iran, using thirty years of site measurements of monthly reservoir inflow with extreme seasonal variations. The results of the hybrid model (SARIMA-GEP) are compared with SARIMA, GEP, artificial neural network (ANN) and SARIMA-ANN models. They indicate that the SARIMA-GEP model (R² = 78.8, VAF = 78.8, RMSE = 0.89, MAPE = 43.4, CRM = 0.053) outperforms the SARIMA and GEP models, and that the SARIMA-ANN model (R² = 68.3, VAF = 66.4, RMSE = 1.12, MAPE = 56.6, CRM = 0.032) performs better than the SARIMA and ANN models. A comparison of the two hybrid models indicates the superiority of SARIMA-GEP over SARIMA-ANN.

  2. Weather variability and the incidence of cryptosporidiosis: comparison of time series poisson regression and SARIMA models.

    PubMed

    Hu, Wenbiao; Tong, Shilu; Mengersen, Kerrie; Connell, Des

    2007-09-01

    Few studies have examined the relationship between weather variables and cryptosporidiosis in Australia. This paper examines the potential impact of weather variability on the transmission of cryptosporidiosis and explores the possibility of developing an empirical forecast system. Data on weather variables, notified cryptosporidiosis cases, and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively, for the period January 1, 1996 to December 31, 2004. Time series Poisson regression and seasonal autoregressive integrated moving average (SARIMA) models were fitted to examine the potential impact of weather variability on the transmission of cryptosporidiosis. Both models show that seasonal and monthly maximum temperature, at a prior moving average of 1 and 3 months, were significantly associated with cryptosporidiosis incidence, suggesting that there may be 50 more cases a year for an average increase of 1 °C in maximum temperature in Brisbane. Model assessments indicated that the SARIMA model had better predictive ability than the Poisson regression model (SARIMA: root mean square error (RMSE): 0.40, Akaike information criterion (AIC): -12.53; Poisson regression: RMSE: 0.54, AIC: -2.84). Furthermore, the analysis of residuals shows that the time series Poisson regression appeared to violate a modeling assumption, in that residual autocorrelation persisted. The results of this study suggest that weather variability (particularly maximum temperature) may have played a significant role in the transmission of cryptosporidiosis, and that a SARIMA model may be a better predictive model than a Poisson regression model for assessing this relationship.

  3. Prediction of South China sea level using seasonal ARIMA models

    NASA Astrophysics Data System (ADS)

    Fernandez, Flerida Regine; Po, Rodolfo; Montero, Neil; Addawe, Rizavel

    2017-11-01

    Accelerating sea level rise is an indicator of global warming and poses a threat to low-lying places and coastal countries. This study aims to fit a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to the time series obtained from the TOPEX and Jason series of satellite radar altimeters over the South China Sea from 2008 to 2015. With altimetric measurements taken in a 10-day repeat cycle, monthly averages of the satellite altimetry measurements were computed to compose the data set used in the study. Candidate SARIMA models were then fitted to the series to find the best-fitting model. Results show that the SARIMA(1,0,0)(0,1,1)12 model best fits the time series; it was used to forecast values for January to December 2016. The 12-month forecast shows the sea level gradually increasing from January to September 2016, then decreasing until December 2016.

  4. Forecasting mortality of road traffic injuries in China using seasonal autoregressive integrated moving average model.

    PubMed

    Zhang, Xujun; Pang, Yuanyuan; Cui, Mengjing; Stallones, Lorann; Xiang, Huiyun

    2015-02-01

    Road traffic injuries have become a major public health problem in China. This study aimed to develop statistical models for predicting road traffic deaths and to analyze the seasonality of deaths in China. A seasonal autoregressive integrated moving average (SARIMA) model was used to fit the data from 2000 to 2011. The Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were used to evaluate the constructed models. The autocorrelation function and partial autocorrelation function of residuals and the Ljung-Box test were used to compare goodness-of-fit between the different models. The SARIMA model was used to forecast monthly road traffic deaths in 2012. The seasonal pattern of road traffic mortality data was statistically significant in China. The SARIMA(1,1,1)(0,1,1)12 model was the best-fitting model among the candidates; its Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were -483.679, -475.053, and 4.937, respectively. Goodness-of-fit testing showed no autocorrelation in the residuals of the model (Ljung-Box test, Q = 4.86, P = .993). The fitted deaths from the SARIMA(1,1,1)(0,1,1)12 model for 2000 to 2011 closely followed the observed number of road traffic deaths for the same years, and the predicted and observed deaths were also very close for 2012. This study suggests that accurate forecasting of road traffic death incidence is possible using a SARIMA model. The SARIMA model applied to historical road traffic deaths data could provide important evidence of the burden of road traffic injuries in China.

  5. Forecasting the number of zoonotic cutaneous leishmaniasis cases in south of Fars province, Iran using seasonal ARIMA time series method.

    PubMed

    Sharafi, Mehdi; Ghaem, Haleh; Tabatabaee, Hamid Reza; Faramarzi, Hossein

    2017-01-01

    To predict the trend of cutaneous leishmaniasis and assess the relationship between the disease trend and weather variables in the south of Fars province using a Seasonal Autoregressive Integrated Moving Average (SARIMA) model. The trend of cutaneous leishmaniasis was predicted using Minitab software and a SARIMA model. Information about the disease and weather conditions was collected monthly in a time series design from January 2010 to March 2016. Various SARIMA models were assessed and the best one was selected; the model's fitness was then evaluated based on normality of the residuals' distribution, correspondence between the fitted and observed values, and calculation of the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC). The results indicated that the SARIMA(4,1,4)(0,1,0)12 model overall, and the SARIMA(4,1,4)(0,1,1)12 model in the under-15 and over-15 age groups, could appropriately predict the disease trend in the study area. Moreover, temperature with a three-month lag (lag 3) increased the disease trend, rainfall with a four-month lag (lag 4) decreased it, and rainfall with a nine-month lag (lag 9) increased it. Based on the results, leishmaniasis will follow a descending trend in the study area if drought conditions continue; SARIMA models can suitably measure the disease trend, and the disease follows a seasonal pattern.

  6. Forecasting the incidence of tuberculosis in China using the seasonal auto-regressive integrated moving average (SARIMA) model.

    PubMed

    Mao, Qiang; Zhang, Kai; Yan, Wu; Cheng, Chaonan

    2018-05-02

    The aims of this study were to develop a forecasting model for the incidence of tuberculosis (TB) and analyze the seasonality of infections in China, and to provide a useful tool for formulating intervention programs and allocating medical resources. Data for the monthly incidence of TB from January 2004 to December 2015 were obtained from the National Scientific Data Sharing Platform for Population and Health (China). The Box-Jenkins method was applied to fit a seasonal autoregressive integrated moving average (SARIMA) model to forecast the incidence of TB over the subsequent six months. During the study period of 144 months, 12,321,559 TB cases were reported in China, with an average monthly incidence of 6.4426 per 100,000 population. The monthly incidence of TB showed a clear 12-month cycle and a seasonality with two peaks, in January and March, and a trough in December. The best-fitting model was SARIMA(1,0,0)(0,1,1)12, which demonstrated adequate information extraction (white noise test, p > 0.05). Based on the analysis, the predicted incidence of TB from January to June 2016 was 6.6335, 4.7208, 5.8193, 5.5474, 5.2202 and 4.9156 per 100,000 population, respectively. Given the seasonal pattern of TB incidence in China, the SARIMA model is proposed as a useful tool for monitoring epidemics.

  7. Forecasting the Incidence of Mumps in Zibo City Based on a SARIMA Model.

    PubMed

    Xu, Qinqin; Li, Runzi; Liu, Yafei; Luo, Cheng; Xu, Aiqiang; Xue, Fuzhong; Xu, Qing; Li, Xiujun

    2017-08-17

    This study aimed to predict the incidence of mumps using a seasonal autoregressive integrated moving average (SARIMA) model, and to provide theoretical evidence for early-warning prevention and control in Zibo City, Shandong Province, China. Monthly mumps data from Zibo City gathered between 2005 and 2013 were used as a training set to construct a SARIMA model, and the monthly data for 2014 were used as a test set. From 2005 to 2014, a total of 8722 cases of mumps were reported in Zibo City; the male-to-female ratio of cases was 1.85:1, the 1-20 years age group accounted for 94.05% of all reported cases, and students made up the largest proportion (65.89%). The most seriously affected endemic areas were Huantai County, Linzi District, and Boshan District of Zibo City. There were two epidemic peaks, from April to July and from October to January of the following year. The fitted SARIMA(0,1,1)(0,1,1)12 model was established (AIC = 157.528), with good validity and reasonableness. The SARIMA model fitted the dynamic changes of mumps in Zibo City well and can be used for short-term forecasting and early warning of mumps.

  8. Forecasting the Incidence of Mumps in Zibo City Based on a SARIMA Model

    PubMed Central

    Xu, Qinqin; Li, Runzi; Liu, Yafei; Luo, Cheng; Xu, Aiqiang; Xue, Fuzhong; Xu, Qing; Li, Xiujun

    2017-01-01

    This study aimed to predict the incidence of mumps using a seasonal autoregressive integrated moving average (SARIMA) model, and to provide theoretical evidence for early-warning prevention and control in Zibo City, Shandong Province, China. Monthly mumps data from Zibo City gathered between 2005 and 2013 were used as a training set to construct a SARIMA model, and the monthly data for 2014 were used as a test set. From 2005 to 2014, a total of 8722 cases of mumps were reported in Zibo City; the male-to-female ratio of cases was 1.85:1, the 1–20 years age group accounted for 94.05% of all reported cases, and students made up the largest proportion (65.89%). The most seriously affected endemic areas were Huantai County, Linzi District, and Boshan District of Zibo City. There were two epidemic peaks, from April to July and from October to January of the following year. The fitted SARIMA(0,1,1)(0,1,1)12 model was established (AIC = 157.528), with good validity and reasonableness. The SARIMA model fitted the dynamic changes of mumps in Zibo City well and can be used for short-term forecasting and early warning of mumps. PMID:28817101

  9. Seasonality and Trend Forecasting of Tuberculosis Prevalence Data in Eastern Cape, South Africa, Using a Hybrid Model.

    PubMed

    Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin

    2016-07-26

    Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. As a chronic and highly infectious disease, tuberculosis is prevalent in almost every part of the globe, and more than 95% of TB mortality occurs in low- and middle-income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) and hybrid SARIMA neural network auto-regression (SARIMA-NNAR) models of TB incidence and to analyse its seasonality in South Africa. TB incidence case data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA and neural network auto-regression (SARIMA-NNAR) model were used to analyse and predict the TB data from 2010 to 2015. The performance measures mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were used to compare the predictive performance of the models. While both models could predict TB incidence, the combined model performed better. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) were 288.56, 308.31 and 299.09, respectively, lower than the SARIMA model's corresponding values of 329.02, 327.20 and 341.99. The SARIMA-NNAR model forecast a slightly higher seasonal TB incidence trend than the single model. The combined model, with its lower AICc, provided better TB incidence forecasting. The model also indicates the need for resolute intervention to reduce transmission, particularly where HIV and other concomitant diseases are co-present and during festival peak periods.

  10. Modeling and roles of meteorological factors in outbreaks of highly pathogenic avian influenza H5N1.

    PubMed

    Biswas, Paritosh K; Islam, Md Zohorul; Debnath, Nitish C; Yamage, Mat

    2014-01-01

    The highly pathogenic avian influenza A virus subtype H5N1 (HPAI H5N1) is a deadly zoonotic pathogen. Its persistence in poultry in several countries is a potential threat: a mutant or genetically reassorted progenitor might cause a human pandemic, so its worldwide eradication from poultry is important to protect public health. The global trend of outbreaks of influenza attributable to HPAI H5N1 shows a clear seasonality. Meteorological factors might be associated with such a trend but have not been studied. For the first time, we analyze the role of meteorological factors in the occurrence of HPAI outbreaks in Bangladesh. We employed autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models to assess the roles of different meteorological factors in outbreaks of HPAI. Outbreaks were modeled best when multiplicative seasonality was incorporated. Incorporating meteorological variables as inputs did not improve the performance of any multivariable model, but relative humidity (RH) was a significant covariate in several ARIMA and SARIMA models with different autoregressive and moving average orders. Cloud cover was also a significant covariate in two SARIMA models, and air temperature along with RH might be a predictor when a moving average (MA) order at lag 1 month is considered.

  11. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    NASA Astrophysics Data System (ADS)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns in India for 1981-2015 using a suitable seasonal autoregressive integrated moving average (SARIMA) model. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, which is not the case for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model was chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the log-transformed temperature series. The SARIMA(1,0,0)×(0,1,1)12 model is selected for both the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are estimated by maximum likelihood, with standard errors of the residuals. The adequacy of the selected model is checked through correlation diagnostics (ACF, PACF, IACF, and p-values of the Ljung-Box statistic of the residuals) and normality diagnostics (kernel and normal density curves over the histogram, and a Q-Q plot). Finally, monthly maximum and minimum temperature patterns for India are forecast for the next 3 years using the selected model.

  12. Impact of weather factors on hand, foot and mouth disease, and its role in short-term incidence trend forecast in Huainan City, Anhui Province.

    PubMed

    Zhao, Desheng; Wang, Lulu; Cheng, Jian; Xu, Jun; Xu, Zhiwei; Xie, Mingyu; Yang, Huihui; Li, Kesheng; Wen, Lingying; Wang, Xu; Zhang, Heng; Wang, Shusi; Su, Hong

    2017-03-01

    Hand, foot, and mouth disease (HFMD) is one of the most common communicable diseases in China, and climate change has been recognized as a significant contributor. Nevertheless, no reliable models have been put forward to predict the dynamics of HFMD cases based on short-term weather variations. The present study aimed to examine the association between weather factors and HFMD, and to explore the accuracy of a seasonal autoregressive integrated moving average (SARIMA) model with local weather conditions in forecasting HFMD. Weather and HFMD data from 2009 to 2014 in Huainan, China, were used. A Poisson regression model combined with a distributed lag non-linear model (DLNM) was applied to examine the relationship between weather factors and HFMD, and forecasting was performed with the SARIMA model. The results showed that temperature rise was significantly associated with an elevated risk of HFMD, whereas no correlations were observed for relative humidity, barometric pressure or rainfall. SARIMA models including the temperature variable fitted the HFMD data better than the model without it (sR² increased, while the BIC decreased), and SARIMA(0,1,1)(0,1,0)52 offered the best fit. In addition, the SARIMA model predicted HFMD case counts with higher precision for males and scattered children than for females and nursery children. In conclusion, high temperature could increase the risk of contracting HFMD, and a SARIMA model with a temperature variable can effectively improve forecast accuracy, providing valuable information for policy makers and public health practitioners to construct a best-fitting model and optimize HFMD prevention.

  13. Impact of weather factors on hand, foot and mouth disease, and its role in short-term incidence trend forecast in Huainan City, Anhui Province

    NASA Astrophysics Data System (ADS)

    Zhao, Desheng; Wang, Lulu; Cheng, Jian; Xu, Jun; Xu, Zhiwei; Xie, Mingyu; Yang, Huihui; Li, Kesheng; Wen, Lingying; Wang, Xu; Zhang, Heng; Wang, Shusi; Su, Hong

    2017-03-01

    Hand, foot, and mouth disease (HFMD) is one of the most common communicable diseases in China, and climate change has been recognized as a significant contributor. Nevertheless, no reliable models have been put forward to predict the dynamics of HFMD cases based on short-term weather variations. The present study aimed to examine the association between weather factors and HFMD, and to explore the accuracy of a seasonal autoregressive integrated moving average (SARIMA) model with local weather conditions in forecasting HFMD. Weather and HFMD data from 2009 to 2014 in Huainan, China, were used. A Poisson regression model combined with a distributed lag non-linear model (DLNM) was applied to examine the relationship between weather factors and HFMD, and forecasting was performed with the SARIMA model. The results showed that temperature rise was significantly associated with an elevated risk of HFMD, whereas no correlations were observed for relative humidity, barometric pressure or rainfall. SARIMA models including the temperature variable fitted the HFMD data better than the model without it (sR² increased, while the BIC decreased), and SARIMA(0,1,1)(0,1,0)52 offered the best fit. In addition, the SARIMA model predicted HFMD case counts with higher precision for males and scattered children than for females and nursery children. In conclusion, high temperature could increase the risk of contracting HFMD, and a SARIMA model with a temperature variable can effectively improve forecast accuracy, providing valuable information for policy makers and public health practitioners to construct a best-fitting model and optimize HFMD prevention.

  14. Forecasting Daily Volume and Acuity of Patients in the Emergency Department.

    PubMed

    Calegari, Rafael; Fogliatto, Flavio S; Lucini, Filipe R; Neyeloff, Jeruza; Kuchenbecker, Ricardo S; Schaan, Beatriz D

    2016-01-01

    This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification.

  15. Forecasting Daily Volume and Acuity of Patients in the Emergency Department

    PubMed Central

    Fogliatto, Flavio S.; Neyeloff, Jeruza; Kuchenbecker, Ricardo S.; Schaan, Beatriz D.

    2016-01-01

    This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification. PMID:27725842

  16. Time series model for forecasting the number of new admission inpatients.

    PubMed

    Zhou, Lingling; Zhao, Ping; Wu, Dongdong; Cheng, Cheng; Huang, Hao

    2018-06-15

    Hospital crowding is a growing problem; effective prediction and management can help reduce it. Our team previously proposed a hybrid model combining the autoregressive integrated moving average (ARIMA) and nonlinear autoregressive neural network (NARNN) models in forecasting studies of schistosomiasis and hand, foot, and mouth disease. In this paper, our aim is to explore the application of the hybrid ARIMA-NARNN model to track trends in new admission inpatients, which provides a methodological basis for reducing crowding. We used the single seasonal ARIMA (SARIMA), NARNN and hybrid SARIMA-NARNN models to fit and forecast the monthly and daily numbers of new admission inpatients. The root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to compare forecasting performance among the three models. The monthly modeling data covered January 2010 to June 2016, with July to October 2016 as the corresponding testing set; the daily modeling data covered January 4 to September 4, 2016, with September 5 to October 2, 2016 as the testing set. For the monthly data, the modeling RMSE and the testing RMSE, MAE and MAPE of the SARIMA-NARNN model were lower than those of the single SARIMA and NARNN models, but the MAE and MAPE of its modeling performance did not improve. For the daily data, the NARNN model had the lowest RMSE, MAE and MAPE in both the modeling and testing stages. A hybrid model does not necessarily outperform its constituent models; it is worth exploring reliable models for forecasting the number of new admission inpatients from different data.
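
The three comparison metrics used throughout these records (RMSE, MAE, MAPE) are straightforward to compute. The helpers below are a generic sketch, not the authors' code, and the example numbers are invented.

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def mae(actual, predicted):
    """Mean absolute error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(actual - predicted)))

def mape(actual, predicted):
    """Mean absolute percentage error (actual values must be nonzero)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

# Example: comparing a forecast against observed admission counts
observed = [100, 110, 120, 130]
forecast = [102, 108, 125, 128]
print(rmse(observed, forecast), mae(observed, forecast), mape(observed, forecast))
```

Because MAPE divides by the observed values, it is scale-free and convenient for comparing series of different magnitudes, but it is unstable when observations approach zero, which is why RMSE and MAE are usually reported alongside it.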

  17. Forecasting zoonotic cutaneous leishmaniasis using meteorological factors in eastern Fars province, Iran: a SARIMA analysis.

    PubMed

    Tohidinik, Hamid Reza; Mohebali, Mehdi; Mansournia, Mohammad Ali; Niakan Kalhori, Sharareh R; Ali-Akbarpour, Mohsen; Yazdani, Kamran

    2018-05-22

    To predict the occurrence of zoonotic cutaneous leishmaniasis (ZCL) and evaluate the effect of climatic variables on disease incidence in the east of Fars province, Iran, using the Seasonal Autoregressive Integrated Moving Average (SARIMA) model. The Box-Jenkins approach was applied to fit the SARIMA model for ZCL incidence from 2004 to 2015, and the model was then used to predict the number of ZCL cases for 2016. Finally, we assessed the relation of meteorological variables (rainfall, rainy days, temperature, hours of sunshine and relative humidity) to ZCL incidence. SARIMA(2,0,0)(2,1,0)12 was the preferred model for predicting ZCL incidence in the east of Fars province (validation root mean square error, RMSE = 0.27). It showed that ZCL incidence in a given month can be estimated by the number of cases occurring 1 and 2 months, as well as 12 and 24 months, earlier. The predictive power of the SARIMA models was improved by including rainfall at a lag of 2 months (β = -0.02), rainy days at a lag of 2 months (β = -0.09) and relative humidity at a lag of 8 months (β = 0.13) as external regressors (P-values < 0.05). The latter was the best climatic variable for predicting ZCL cases (validation RMSE = 0.26). Time series models can be useful tools to predict the trend of ZCL in Fars province, Iran, and can thus be used in the planning of public health programmes. Introducing meteorological variables into the models may improve their precision.

  18. Modeling and forecasting rainfall patterns of southwest monsoons in North-East India as a SARIMA process

    NASA Astrophysics Data System (ADS)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-02-01

    Weather forecasting is an important issue in the field of meteorology all over the world. The pattern and amount of rainfall are essential factors affecting agricultural systems. India experiences the Southwest monsoon season for four months, from June to September. The present paper describes an empirical study of modeling and forecasting the time series of Southwest monsoon rainfall patterns in North-East India. The Box-Jenkins Seasonal Autoregressive Integrated Moving Average (SARIMA) methodology has been adopted for model identification, diagnostic checking and forecasting for this region. The study shows that the SARIMA(0,1,1)(1,0,1)4 model is appropriate for analyzing and forecasting future rainfall patterns. The Analysis of Means (ANOM) is a useful alternative to the analysis of variance (ANOVA) for comparing groups of treatments to study variations and critical comparisons of rainfall patterns in different months of the season.

  19. Passenger Flow Forecasting Research for Airport Terminal Based on SARIMA Time Series Model

    NASA Astrophysics Data System (ADS)

    Li, Ziyu; Bi, Jun; Li, Zhiyin

    2017-12-01

    Based on operational data from Kunming Changshui International Airport during 2016, this paper proposes a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to predict passenger flow. The model considers not only the non-stationarity and autocorrelation of the sequence, but also its daily periodicity. The prediction results accurately describe the trend of airport passenger flow and provide scientific decision support for the optimal allocation of airport resources and optimization of the departure process. The results show that the model is applicable to short-term prediction of airport terminal departure passenger traffic, with an average error ranging from 1% to 3%. The difference between the predicted and observed passenger flow is quite small, indicating that the model has fairly good predictive ability.

  20. Burden of Disease Measured by Disability-Adjusted Life Years and a Disease Forecasting Time Series Model of Scrub Typhus in Laiwu, China

    PubMed Central

    Yang, Li-Ping; Liang, Si-Yuan; Wang, Xian-Jun; Li, Xiu-Jun; Wu, Yan-Ling; Ma, Wei

    2015-01-01

    Background Laiwu District is recognized as a hyper-endemic region for scrub typhus in Shandong Province, but the seriousness of this problem has been neglected in public health circles. Methodology/Principal Findings A disability-adjusted life years (DALYs) approach was adopted to measure the burden of scrub typhus in Laiwu, China during the period 2006 to 2012. A multiple seasonal autoregressive integrated moving average (SARIMA) model was used to identify the most suitable forecasting model for scrub typhus in Laiwu. Results showed that the disease burden of scrub typhus is increasing yearly in Laiwu and is higher in females than in males. For both females and males, DALY rates were highest in the 60–69 age group. Of all the SARIMA models tested, the SARIMA(2,1,0)(0,1,0)12 model was the best fit for scrub typhus cases in Laiwu. Human infections occurred mainly in autumn, with a peak in October. Conclusions/Significance Females, especially those 60 to 69 years of age, were at the highest risk of developing scrub typhus in Laiwu, China. The SARIMA(2,1,0)(0,1,0)12 model was the best-fitting forecasting model for scrub typhus in Laiwu, China. These data are useful for developing public health education and intervention programs to reduce disease. PMID:25569248

  1. Forecasting incidence of dengue in Rajasthan, using time series analyses.

    PubMed

    Bhatnagar, Sunil; Lal, Vivek; Gupta, Shiv D; Gupta, Om P

    2012-01-01

    To develop a prediction model for dengue fever/dengue haemorrhagic fever (DF/DHF) using time series data over the past decade in Rajasthan and to forecast monthly DF/DHF incidence for 2011, a seasonal autoregressive integrated moving average (SARIMA) model was used for statistical modeling. During January 2001 to December 2010, the reported DF/DHF cases showed a cyclical pattern with seasonal variation. The SARIMA (0,0,1)(0,1,1)12 model had the lowest normalized Bayesian information criterion (BIC) of 9.426 and a mean absolute percentage error (MAPE) of 263.361, and appeared to be the best model. The proportion of variance explained by the model was 54.3%. Adequacy of the model was established through the Ljung-Box test (Q statistic 4.910, P-value 0.996), which showed no significant correlation between residuals at different lag times. The forecast for 2011 showed a seasonal peak in October with an estimated 546 cases. Application of the SARIMA model may be useful for forecasting cases and impending outbreaks of DF/DHF and other infectious diseases that exhibit seasonal patterns.
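    The Ljung-Box check used above tests whether model residuals are uncorrelated at all lags up to some horizon. A plain-Python sketch of the Q statistic follows (the residual series here is illustrative; obtaining the quoted P-value additionally requires comparing Q to a chi-square distribution with the appropriate degrees of freedom):

```python
def autocorr(x, k):
    # Sample autocorrelation of x at lag k.
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n)) / var

def ljung_box_q(residuals, max_lag):
    # Q = n(n+2) * sum_{k=1}^{h} r_k^2 / (n - k); a large Q means the
    # residuals are autocorrelated, i.e. the model has left structure behind.
    n = len(residuals)
    return n * (n + 2) * sum(autocorr(residuals, k) ** 2 / (n - k)
                             for k in range(1, max_lag + 1))
```

    A strongly alternating residual sequence such as [1, -1, 1, -1, ...] yields a large Q at lag 1, correctly flagging leftover structure.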

  2. Comparative study of four time series methods in forecasting typhoid fever incidence in China.

    PubMed

    Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A; Li, Xiaosong

    2013-01-01

    Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three neural-network models, namely back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN), were compared. The differences, as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. Data for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. Performance was evaluated using three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models, ranked in descending order, were: RBFNN, ERNN, BPNN and the SARIMA model.
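    The three error metrics used in the comparison are straightforward to compute; a small sketch with hypothetical incidence values (not the study's data):

```python
def mae(actual, pred):
    # Mean absolute error.
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mse(actual, pred):
    # Mean square error.
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    # Mean absolute percentage error; assumes no zeros in `actual`.
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in zip(actual, pred)) / len(actual)

obs = [100.0, 200.0]  # hypothetical monthly case counts
fit = [110.0, 190.0]
# mae -> 10.0, mse -> 100.0, mape -> 7.5 (up to float rounding)
```

    MAPE weights errors relative to the actual count, which is why it is the natural headline metric when incidence levels differ widely between months.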

  4. Climate variations and salmonellosis transmission in Adelaide, South Australia: a comparison between regression models

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Bi, Peng; Hiller, Janet

    2008-01-01

    This is the first study to identify appropriate regression models for the association between climate variation and salmonellosis transmission. A comparison between different regression models was conducted using surveillance data from Adelaide, South Australia. Using notified salmonellosis cases and climatic variables from the Adelaide metropolitan area over the period 1990-2003, four regression methods were examined: standard Poisson regression, autoregressive adjusted Poisson regression, multiple linear regression, and a seasonal autoregressive integrated moving average (SARIMA) model. Notified salmonellosis cases in 2004 were used to test the forecasting ability of the four models. Parameter estimation, goodness-of-fit and forecasting ability of the four regression models were compared. Temperatures occurring 2 weeks prior to cases were positively associated with cases of salmonellosis, while rainfall was inversely related to the number of cases. The comparison of goodness-of-fit and forecasting ability suggests that the SARIMA model is better than the other three regression models. Temperature and rainfall may be used as climatic predictors of salmonellosis cases in regions with climatic characteristics similar to those of Adelaide. The SARIMA model could thus be adopted to quantify the relationship between climate variations and salmonellosis transmission.

  5. Predicting the outbreak of hand, foot, and mouth disease in Nanjing, China: a time-series model based on weather variability

    NASA Astrophysics Data System (ADS)

    Liu, Sijun; Chen, Jiaping; Wang, Jianming; Wu, Zhuchao; Wu, Weihua; Xu, Zhiwei; Hu, Wenbiao; Xu, Fei; Tong, Shilu; Shen, Hongbing

    2017-10-01

    Hand, foot, and mouth disease (HFMD) is a significant public health issue in China, and accurate prediction of epidemics can improve the effectiveness of HFMD control. This study aims to develop a weather-based forecasting model for HFMD using information on climatic variables and HFMD surveillance in Nanjing, China. Daily data on HFMD cases and meteorological variables between 2010 and 2015 were acquired from the Nanjing Center for Disease Control and Prevention and the China Meteorological Data Sharing Service System, respectively. A multivariate seasonal autoregressive integrated moving average (SARIMA) model was developed and validated by dividing the HFMD infection data into two datasets: the data from 2010 to 2013 were used to construct the model and those from 2014 to 2015 to validate it. Weekly predictions were made for the data between 1 January 2014 and 31 December 2015, and leave-1-week-out prediction was used to validate the model's predictive performance. SARIMA (2,0,0)52 with average temperature at a lag of 1 week appeared to be the best model (R 2 = 0.936, BIC = 8.465), and showed non-significant autocorrelations in the residuals. In the validation of the constructed model, the predicted values matched the observed values reasonably well between 2014 and 2015, with a high agreement rate between predicted and observed values (sensitivity 80%, specificity 96.63%). This study suggests that the SARIMA model with average temperature could be used as an important tool for early detection and prediction of HFMD outbreaks in Nanjing, China.
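    The exogenous part of the multivariate SARIMA above, average temperature at a lag of one week, amounts to regressing this week's cases on last week's temperature. A toy pure-Python sketch of that lagged term alone (the full model also carries the ARMA dynamics; the data below are invented):

```python
def fit_lagged_ols(cases, temp, lag=1):
    # Ordinary least squares of cases_t on temp_{t-lag}, with intercept.
    x = temp[:len(temp) - lag]
    y = cases[lag:]
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope  # (intercept, slope)
```

    With cases generated as 3 + 2 * last week's temperature, the fit recovers intercept 3 and slope 2 exactly, illustrating how a lagged covariate enters the model.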

  6. Ecology of West Nile virus across four European countries: empirical modelling of the Culex pipiens abundance dynamics as a function of weather.

    PubMed

    Groen, Thomas A; L'Ambert, Gregory; Bellini, Romeo; Chaskopoulou, Alexandra; Petric, Dusan; Zgomba, Marija; Marrama, Laurence; Bicout, Dominique J

    2017-10-26

    Culex pipiens is the major vector of West Nile virus in Europe and is causing frequent outbreaks throughout the southern part of the continent. Proper empirical modelling of the population dynamics of this species can help in understanding West Nile virus epidemiology and in optimizing vector surveillance and mosquito control efforts, but modelling results may differ from place to place. In this study we examine which types of models and weather variables can be used consistently across different locations. Weekly mosquito trap collections spanning several years from eight functional units located in France, Greece, Italy and Serbia were combined; rainfall, relative humidity and temperature were also recorded. Correlations between lagged weather conditions and Cx. pipiens dynamics were analysed. Seasonal autoregressive integrated moving-average (SARIMA) models were also fitted to describe the temporal dynamics of Cx. pipiens and to check whether the weather variables could improve these models. Correlations were strongest for mean temperature at short time lags, followed by relative humidity, most likely due to collinearity. Precipitation alone showed weak correlations and inconsistent patterns across sites. SARIMA models could also make reasonable predictions, especially when longer time series of Cx. pipiens observations were available. Average temperature was a consistently good predictor across sites. When only short time series (~ < 4 years) of observations are available, average temperature can therefore be used to model Cx. pipiens dynamics. When longer time series (~ > 4 years) are available, SARIMAs can provide better statistical descriptions of Cx. pipiens dynamics without the need for further weather variables, which suggests that density dependence is also an important determinant of Cx. pipiens dynamics.

  7. Optimization of seasonal ARIMA models using differential evolution - simulated annealing (DESA) algorithm in forecasting dengue cases in Baguio City

    NASA Astrophysics Data System (ADS)

    Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.

    2016-10-01

    Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting dengue epidemics specific to the young and adult populations of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since least squares estimators are not robust in the presence of outliers, we suggest robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.

  8. Forecasting and prediction of scorpion sting cases in Biskra province, Algeria, using a seasonal autoregressive integrated moving average model.

    PubMed

    Selmane, Schehrazad; L'Hadj, Mohamed

    2016-01-01

    The aims of this study were to highlight some epidemiological aspects of scorpion envenomations, to analyse and interpret the available data for Biskra province, Algeria, and to develop a forecasting model for scorpion sting cases in Biskra province, which records the highest number of scorpion stings in Algeria. In addition to analysing the epidemiological profile of scorpion stings that occurred throughout the year 2013, we used the Box-Jenkins approach to fit a seasonal autoregressive integrated moving average (SARIMA) model to the monthly recorded scorpion sting cases in Biskra from 2000 to 2012. The epidemiological analysis revealed that scorpion stings were reported continuously throughout the year, with peaks in the summer months. The most affected age group was 15 to 49 years old, with a male predominance. The most prone human body areas were the upper and lower limbs. The majority of cases (95.9%) were classified as mild envenomations. The time series analysis showed that a (5,1,0)×(0,1,1)12 SARIMA model offered the best fit to the scorpion sting surveillance data. This model was used to predict scorpion sting cases for the year 2013, and the fitted data showed considerable agreement with the actual data. SARIMA models are useful for monitoring scorpion sting cases, and provide an estimate of the variability to be expected in future scorpion sting cases. This knowledge is helpful in predicting whether an unusual situation is developing, and could therefore assist decision-makers in strengthening the province's prevention and control measures and in initiating rapid response measures.

  9. Time series analysis of influenza incidence in Chinese provinces from 2004 to 2011

    PubMed Central

    Song, Xin; Xiao, Jun; Deng, Jiang; Kang, Qiong; Zhang, Yanyu; Xu, Jinbo

    2016-01-01

    Abstract Influenza, as a severe infectious disease, has caused catastrophes throughout human history, and every influenza pandemic has produced a great social burden. We compiled monthly data on influenza incidence from all provinces and autonomous regions in mainland China from January 2004 to December 2011, comprehensively evaluated and classified these data, and then randomly selected 4 provinces with higher incidence (Hebei, Gansu, Guizhou, and Hunan), 2 provinces with median incidence (Tianjin and Henan), and 1 province with lower incidence (Shandong), using time series analysis to construct an ARIMA model with the monthly incidence from 2004 to 2011 as the training set. We applied the X-12-ARIMA procedure for modeling because of the seasonality the data exhibit. The autocorrelation function (ACF), partial autocorrelation function (PACF), and automatic model selection were used to determine the order of the model parameters. The optimal model was decided by a nonseasonal and seasonal moving average test. Finally, we applied this model to predict the monthly incidence of influenza in 2012 as the test set, and the simulated incidence was compared with the observed incidence to evaluate the model's validity using both the percentage of variability explained in regression analyses (R2) and the root mean square error (RMSE). SARIMA (0,1,1)(0,1,1)12 could simultaneously forecast the influenza incidence of Hebei, Guizhou, Henan, and Shandong Provinces; SARIMA (1,0,0)(0,1,1)12 could forecast the influenza incidence in Gansu Province; SARIMA (3,1,1)(0,1,1)12 could forecast the influenza incidence in Tianjin City; and SARIMA (0,1,1)(0,0,1)12 could forecast the influenza incidence in Hunan Province. Time series analysis is a good tool for prediction of disease incidence. PMID:27367989
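    The SARIMA(0,1,1)(0,1,1)12 form that recurs above is the classic "airline" model. Its one-step-ahead recursion follows directly from the difference equation; the sketch below is illustrative only (in practice the parameters theta and Theta are estimated by maximum likelihood, not supplied by hand):

```python
def airline_one_step(y, theta, Theta, s=12):
    # SARIMA(0,1,1)(0,1,1)_s: (1-B)(1-B^s) y_t = (1-theta B)(1-Theta B^s) e_t,
    # so the one-step-ahead forecast is
    #   y_hat_t = y_{t-1} + y_{t-s} - y_{t-s-1}
    #             - theta*e_{t-1} - Theta*e_{t-s} + theta*Theta*e_{t-s-1}
    n = len(y)
    e = [0.0] * n                 # residuals, zero-initialized for the warm-up
    forecasts = [None] * (s + 1)  # no forecast until s+1 observations exist
    for t in range(s + 1, n):
        f = (y[t - 1] + y[t - s] - y[t - s - 1]
             - theta * e[t - 1] - Theta * e[t - s]
             + theta * Theta * e[t - s - 1])
        forecasts.append(f)
        e[t] = y[t] - f
    return forecasts, e
```

    On a series that is exactly a linear trend plus a fixed seasonal pattern, the recursion with theta = Theta = 0 reproduces every observation, so all residuals after the warm-up are zero; this is the double differencing (regular plus seasonal) doing its job.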

  11. Development of S-ARIMA Model for Forecasting Demand in a Beverage Supply Chain

    NASA Astrophysics Data System (ADS)

    Mircetic, Dejan; Nikolicic, Svetlana; Maslaric, Marinko; Ralevic, Nebojsa; Debelic, Borna

    2016-11-01

    Demand forecasting is one of the key activities in planning freight flows in supply chains, and accordingly it is essential for planning and scheduling logistic activities within an observed supply chain. Accurate demand forecasting models directly reduce logistics costs, since they provide an assessment of customer demand. Customer demand is a key component in planning all logistic processes in a supply chain, so determining levels of customer demand is of great interest to supply chain managers. In this paper we deal with exactly this kind of problem, developing a Seasonal Autoregressive Integrated Moving Average (SARIMA) model for forecasting the demand pattern of a major product of an observed beverage company. The model is easy to understand, flexible to use and appropriate for assisting the expert in the decision-making process about consumer demand in particular periods.

  12. Time series modelling to forecast prehospital EMS demand for diabetic emergencies.

    PubMed

    Villani, Melanie; Earnest, Arul; Nanayakkara, Natalie; Smith, Karen; de Courten, Barbora; Zoungas, Sophia

    2017-05-05

    Acute diabetic emergencies are often managed by prehospital Emergency Medical Services (EMS). The projected growth in the prevalence of diabetes is likely to result in rising demand for prehospital EMS that are already under pressure. The aims of this study were to model the temporal trends and provide forecasts of prehospital attendances for diabetic emergencies. A time series analysis of monthly cases of hypoglycemia and hyperglycemia was conducted using data from the Ambulance Victoria (AV) electronic database between 2009 and 2015. Using the seasonal autoregressive integrated moving average (SARIMA) modelling process, different models were evaluated and the most parsimonious model with the highest accuracy was selected. Forty-one thousand four hundred fifty-four prehospital diabetic emergencies were attended over the seven-year period, with an increase in the annual median monthly caseload between 2009 (484.5) and 2015 (549.5). Hypoglycemia (70%) and people with type 1 diabetes (48%) accounted for most attendances. The SARIMA (0,1,0,12) model provided the best fit, with a MAPE of 4.2%, and predicts a monthly caseload of approximately 740 by the end of 2017. Prehospital EMS demand for diabetic emergencies is increasing. SARIMA time series models are a valuable tool for forecasting future caseload with high accuracy, and they predict increasing numbers of prehospital diabetic emergencies into the future. The model generated by this study may be used by service providers for appropriate planning and resource allocation of EMS for diabetic emergencies.

  13. Forecasting dengue hemorrhagic fever cases using ARIMA model: a case study in Asahan district

    NASA Astrophysics Data System (ADS)

    Siregar, Fazidah A.; Makmur, Tri; Saprin, S.

    2018-01-01

    Time series analysis has been increasingly used to forecast the number of dengue hemorrhagic fever cases in many studies. Since no vaccine exists and public health infrastructure is poor, predicting the occurrence of dengue hemorrhagic fever (DHF) is crucial. This study was conducted to determine the trend of and forecast the occurrence of DHF in Asahan district, North Sumatera Province. Monthly reported dengue cases for the years 2012-2016 were obtained from the district health offices. A time series analysis was conducted using autoregressive integrated moving average (ARIMA) modeling to forecast the occurrence of DHF. The results demonstrated that the reported DHF cases showed seasonal variation. The SARIMA (1,0,0)(0,1,1)12 model was the best model and was adequate for the data. The SARIMA model could be applied to predict the incidence of DHF in Asahan district and to assist in designing public health measures to prevent and control the disease.

  14. A comparative study of shallow groundwater level simulation with three time series models in a coastal aquifer of South China

    NASA Astrophysics Data System (ADS)

    Yang, Q.; Wang, Y.; Zhang, J.; Delgado, J.

    2017-05-01

    Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, three time series analysis methods, Holt-Winters (HW), integrated time series (ITS), and seasonal autoregressive integrated moving average (SARIMA), are explored to simulate the groundwater level in a coastal aquifer in China. Monthly groundwater table depth data collected over a long time series, from 2000 to 2011, are simulated with the three time series models and compared. The error criteria are estimated using the coefficient of determination (R 2), the Nash-Sutcliffe model efficiency coefficient (E), and the root-mean-squared error. The results indicate that all three models are accurate in reproducing the historical time series of groundwater levels. The comparison of the three models shows that the HW model is more accurate in predicting groundwater levels than the SARIMA and ITS models. It is recommended that additional studies explore this proposed method, which can be used in turn to facilitate the development and implementation of more effective and sustainable groundwater management strategies.
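    The three error criteria named above can each be computed in a few lines; a sketch on invented water-table values (not the study's data):

```python
import math

def rmse(obs, sim):
    # Root-mean-squared error.
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nash_sutcliffe(obs, sim):
    # E = 1 - sum((o - s)^2) / sum((o - mean(o))^2); E = 1 is a perfect fit,
    # E <= 0 means the model is no better than predicting the observed mean.
    mean_o = sum(obs) / len(obs)
    return 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                  / sum((o - mean_o) ** 2 for o in obs))

def r_squared(obs, sim):
    # Squared Pearson correlation between observed and simulated series.
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    return cov * cov / (sum((o - mo) ** 2 for o in obs)
                        * sum((s - ms) ** 2 for s in sim))
```

    Note that R 2 measures only linear association: a simulation that is a scaled copy of the observations still scores R 2 = 1 while its Nash-Sutcliffe E can be strongly negative, which is why studies like this one report both.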

  15. Periodicity analysis of tourist arrivals to Banda Aceh using smoothing SARIMA approach

    NASA Astrophysics Data System (ADS)

    Miftahuddin, Helida, Desri; Sofyan, Hizir

    2017-11-01

    Forecasting the number of tourist arrivals entering a region is needed for tourism businesses and for economic and industrial policy, so statistical modeling must be conducted. Banda Aceh is the capital of Aceh province, where economic activity is driven largely by the services sector, one of which is tourism. Therefore, predicting the number of tourist arrivals is needed to develop further policies. The identification results indicate that the data on foreign tourist arrivals to Banda Aceh contain both trend and seasonal components. The number of arrivals appears to be influenced by external factors, such as economics, politics, and the holiday season, which caused a structural break in the data. Trend patterns are detected using polynomial regression with quadratic and cubic approaches, while seasonality is detected by periodic polynomial regression with quadratic and cubic approaches. To model data with seasonal effects, one statistical method that can be used is SARIMA (Seasonal Autoregressive Integrated Moving Average). The results showed that the best smoothing method for detecting the trend pattern is the cubic polynomial regression approach, with a modified multiplicative model and a periodicity of 12 months; its AIC value was 70.52. The best method for detecting the seasonal pattern is the cubic periodic polynomial regression approach, also with a modified multiplicative model and a periodicity of 12 months; its AIC value was 73.37. Furthermore, the best model for predicting the number of foreign tourist arrivals to Banda Aceh in 2017 to 2018 is SARIMA (0,1,1)(1,1,0), with a MAPE of 26%.

  16. Weather variability, tides, and Barmah Forest virus disease in the Gladstone region, Australia.

    PubMed

    Naish, Suchithra; Hu, Wenbiao; Nicholls, Neville; Mackenzie, John S; McMichael, Anthony J; Dale, Pat; Tong, Shilu

    2006-05-01

    In this study we examined the impact of weather variability and tides on the transmission of Barmah Forest virus (BFV) disease and developed a weather-based forecasting model for BFV disease in the Gladstone region, Australia. We used seasonal autoregressive integrated moving-average (SARIMA) models to determine the contribution of weather variables to BFV transmission after the time-series data of response and explanatory variables were made stationary through seasonal differencing. We obtained data on the monthly counts of BFV cases, weather variables (e.g., mean minimum and maximum temperature, total rainfall, and mean relative humidity), high and low tides, and the population size in the Gladstone region between January 1992 and December 2001 from the Queensland Department of Health, Australian Bureau of Meteorology, Queensland Department of Transport, and Australian Bureau of Statistics, respectively. The SARIMA model shows that the 5-month moving average of minimum temperature (b=0.15, p-value<0.001) was statistically significantly and positively associated with BFV disease, whereas high tide in the current month (b=-1.03, p-value=0.04) was statistically significantly and inversely associated with it. No significant association was found for the other variables. These results may be applied to forecast the occurrence of BFV disease and to target public health resources for BFV control and prevention.

  17. Presentations to Alberta emergency departments for asthma: a time series analysis.

    PubMed

    Rosychuk, Rhonda J; Youngson, Erik; Rowe, Brian H

    2015-08-01

    Asthma is a common chronic respiratory condition, and exacerbations may cause individuals to seek care in emergency departments (EDs). This study examines the monthly patterns of asthma presentations to EDs in Alberta, Canada. All presentations to the ED for asthma from April 1999 to March 2011 were extracted from provincial administrative health databases. Data included age, sex, and health zone of residence. Crude rates per 100,000 population were calculated. Seasonal autoregressive integrated moving average (SARIMA) time-series models were developed. There were a total of 362,430 ED presentations for asthma, and the monthly rate of presentation declined from 115.5 to 41.6 per 100,000 during the study period. Males accounted for 50.1% of ED presentations, and adults for 52.8%. The absolute number of ED presentations for asthma declined in each of the five administrative health zones in the province, with smaller percentage decreases seen in the most urbanized zones (32.1%) than in the other zones (46.9%). One SARIMA model closely predicted overall presentation rates as well as the rates of ED presentations for age and zone subgroups. These models showed strong seasonal components, with the strongest estimates occurring for the pediatric subgroup and the southernmost provincial zone. Rates of ED presentations for asthma have been declining in this province during the past decade. The reasons for this decline warrant further exploration. The SARIMA models quantified the temporal patterns and may be helpful for planning research and health care service needs. © 2015 by the Society for Academic Emergency Medicine.

  18. Inter-comparison of time series models of lake levels predicted by several modeling strategies

    NASA Astrophysics Data System (ADS)

    Khatibi, R.; Ghorbani, M. A.; Naghipour, L.; Jothiprakash, V.; Fathima, T. A.; Fazelifard, M. H.

    2014-04-01

    Five modeling strategies are employed to analyze water level time series of six lakes with different physical characteristics such as shape, size, altitude and range of variations. The models comprise chaos theory, Auto-Regressive Integrated Moving Average (ARIMA) - treated for seasonality and hence SARIMA, Artificial Neural Networks (ANN), Gene Expression Programming (GEP) and Multiple Linear Regression (MLR). Each is formulated on a different premise with different underlying assumptions. Chaos theory is elaborated in greater detail, as it is customary to identify the existence of chaotic signals by a number of techniques (e.g. average mutual information and false nearest neighbors), and future values are predicted using the Nonlinear Local Prediction (NLP) technique. This paper takes a critical view of past inter-comparison studies seeking a superior performance, against which it is reported that (i) the performances of all five modeling strategies vary from good to poor, hampering the recommendation of a clear-cut predictive model; (ii) the performances on the datasets of two cases are consistently better with all five modeling strategies; (iii) in the other cases, performances are poor but the results can still be fit-for-purpose; (iv) the simultaneous good performances of NLP and SARIMA pull their underlying assumptions in different directions, which cannot be reconciled. A number of arguments are presented, including the culture of pluralism, according to which the various modeling strategies facilitate insight into the data from different vantages.

  19. Forecasting daily emergency department visits using calendar variables and ambient temperature readings.

    PubMed

    Marcilio, Izabel; Hajat, Shakoor; Gouveia, Nelson

    2013-08-01

    This study aimed to develop different models to forecast the daily number of patients seeking emergency department (ED) care in a general hospital according to calendar variables and ambient temperature readings and to compare the models in terms of forecasting accuracy. The authors developed and tested six different models of ED patient visits using total daily counts of patient visits to an ED in Sao Paulo, Brazil, from January 1, 2008, to December 31, 2010. The first 33 months of the data set were used to develop the ED patient visit forecasting models (the training set), leaving the last 3 months to measure each model's forecasting accuracy by the mean absolute percentage error (MAPE). Forecasting models were developed using three different time-series analysis methods: generalized linear models (GLM), generalized estimating equations (GEE), and seasonal autoregressive integrated moving average (SARIMA). For each method, models were explored with and without the effect of mean daily temperature as a predictive variable. The daily mean number of ED visits was 389, ranging from 166 to 613. The data showed a weekly seasonal distribution, with the highest patient volumes on Mondays and the lowest on weekends. There was little variation in daily visits by month. GLM and GEE models showed better forecasting accuracy than SARIMA models. For instance, the MAPEs from the GLM and GEE models in the first month of forecasting (October 2010) were 11.5% and 10.8% (models with and without control for the temperature effect, respectively), while the MAPEs from the SARIMA models were 12.8% and 11.7%. For all models, controlling for the effect of temperature resulted in worse or similar forecasting ability compared with models using calendar variables alone, and forecasting accuracy was better for the short-term horizon (7 days in advance) than for the longer term (30 days in advance). This study indicates that time-series models can be developed to provide forecasts of daily ED patient visits, and that forecasting ability depended on the type of model employed and the length of the time horizon being predicted. In this setting, GLM and GEE models showed better accuracy than SARIMA models. Including information about ambient temperature in the models did not improve forecasting accuracy. Forecasting models based on calendar variables alone did, in general, detect patterns of daily variability in ED volume and thus could be used for developing an automated system for better planning of personnel resources. © 2013 by the Society for Academic Emergency Medicine.
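    Several of the records in this list score forecasts by the mean absolute percentage error (MAPE). A minimal, self-contained sketch of the metric, using made-up daily visit counts rather than the study's data:

```python
import numpy as np

def mape(actual, forecast):
    # mean absolute percentage error, in percent
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return float(100.0 * np.mean(np.abs((actual - forecast) / actual)))

# illustrative daily ED visit counts vs. a model's one-step forecasts
actual = [389, 402, 371, 415]
forecast = [400, 390, 380, 430]
print(round(mape(actual, forecast), 1))  # → 3.0
```

    Lower MAPE means a more accurate forecast; the metric is scale-free, which is why these studies use it to compare GLM, GEE, and SARIMA models on the same series.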

  20. Forecasting typhoid fever incidence in the Cordillera administrative region in the Philippines using seasonal ARIMA models

    NASA Astrophysics Data System (ADS)

    Cawiding, Olive R.; Natividad, Gina May R.; Bato, Crisostomo V.; Addawe, Rizavel C.

    2017-11-01

    The prevalence of typhoid fever in developing countries such as the Philippines calls for accurate forecasting of the disease, which is of great assistance in strategic disease prevention. This paper presents the development of models that predict the behavior of typhoid fever incidence based on the monthly incidence in the provinces of the Cordillera Administrative Region from 2010 to 2015 using univariate time series analysis. The data used were obtained from the Cordillera Office of the Department of Health (DOH-CAR). Seasonal autoregressive integrated moving average (SARIMA) models were used to incorporate the seasonality of the data. A comparison of the obtained models revealed that the SARIMA (1,1,7)(0,0,1)12 model with a fixed coefficient at the seventh lag produces the smallest root mean square error (RMSE), mean absolute error (MAE), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC). The model suggested that in 2016 the number of cases would increase from July to September and drop in December. This was then validated using data collected from January 2016 to December 2016.
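    SARIMA models like the one above handle annual seasonality in monthly data partly through seasonal differencing at period 12. A minimal numpy sketch of that operation and its inversion, on a synthetic series (not the DOH-CAR data):

```python
import numpy as np

def seasonal_diff(x, s=12):
    # y_t = x_t - x_{t-s}: removes a period-s seasonal pattern
    x = np.asarray(x, float)
    return x[s:] - x[:-s]

def invert_seasonal_diff(d, x_init, s=12):
    # rebuild the original series from the differences and the first s values
    x = list(np.asarray(x_init, float))
    for v in d:
        x.append(x[-s] + v)
    return np.array(x)

# illustrative monthly incidence: linear trend plus an annual cycle
t = np.arange(48)
x = 50 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
d = seasonal_diff(x)  # the annual cycle cancels; only the trend step remains
print(np.allclose(d, 6.0), np.allclose(invert_seasonal_diff(d, x[:12]), x))  # → True True
```

    Because the annual cycle repeats exactly every 12 months here, the differenced series reduces to the constant 12-month trend increment, and the transform is exactly invertible, which is what lets a fitted SARIMA model be mapped back to forecasts on the original scale.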

  1. Artificial neural network and SARIMA based models for power load forecasting in Turkish electricity market

    PubMed Central

    2017-01-01

    Load information plays an important role in deregulated electricity markets, since it is the primary factor in making critical decisions on production planning, day-to-day operations, unit commitment and economic dispatch. Being able to predict the load for the short term, which covers one hour to a few days, gives power generation facilities and traders an advantage. With the deregulation of electricity markets, a variety of short-term load forecasting models have been developed. Deregulation of the Turkish Electricity Market started in 2001, and liberalization is still in progress, with rules becoming effective on a predefined schedule. However, there are very few studies for the Turkish market. In this study, we introduce two different models for the current Turkish market using Seasonal Autoregressive Integrated Moving Average (SARIMA) and Artificial Neural Network (ANN) methods and present their comparative performance. Building models that cope with the dynamic nature of a deregulated market and are able to run in real time is the main contribution of this study. We also use our ANN-based model to evaluate the effect of several factors that are claimed to affect electrical load. PMID:28426739

  2. Artificial neural network and SARIMA based models for power load forecasting in Turkish electricity market.

    PubMed

    Bozkurt, Ömer Özgür; Biricik, Göksel; Tayşi, Ziya Cihan

    2017-01-01

    Load information plays an important role in deregulated electricity markets, since it is the primary factor in making critical decisions on production planning, day-to-day operations, unit commitment and economic dispatch. Being able to predict the load for the short term, which covers one hour to a few days, gives power generation facilities and traders an advantage. With the deregulation of electricity markets, a variety of short-term load forecasting models have been developed. Deregulation of the Turkish Electricity Market started in 2001, and liberalization is still in progress, with rules becoming effective on a predefined schedule. However, there are very few studies for the Turkish market. In this study, we introduce two different models for the current Turkish market using Seasonal Autoregressive Integrated Moving Average (SARIMA) and Artificial Neural Network (ANN) methods and present their comparative performance. Building models that cope with the dynamic nature of a deregulated market and are able to run in real time is the main contribution of this study. We also use our ANN-based model to evaluate the effect of several factors that are claimed to affect electrical load.

  3. Weather Variability, Tides, and Barmah Forest Virus Disease in the Gladstone Region, Australia

    PubMed Central

    Naish, Suchithra; Hu, Wenbiao; Nicholls, Neville; Mackenzie, John S.; McMichael, Anthony J.; Dale, Pat; Tong, Shilu

    2006-01-01

    In this study we examined the impact of weather variability and tides on the transmission of Barmah Forest virus (BFV) disease and developed a weather-based forecasting model for BFV disease in the Gladstone region, Australia. We used seasonal autoregressive integrated moving-average (SARIMA) models to determine the contribution of weather variables to BFV transmission after the time-series data of response and explanatory variables were made stationary through seasonal differencing. We obtained data on the monthly counts of BFV cases, weather variables (e.g., mean minimum and maximum temperature, total rainfall, and mean relative humidity), high and low tides, and the population size in the Gladstone region between January 1992 and December 2001 from the Queensland Department of Health, Australian Bureau of Meteorology, Queensland Department of Transport, and Australian Bureau of Statistics, respectively. The SARIMA model shows that the 5-month moving average of minimum temperature (β = 0.15, p-value < 0.001) was statistically significantly and positively associated with BFV disease, whereas high tide in the current month (β = −1.03, p-value = 0.04) was statistically significantly and inversely associated with it. However, no significant association was found for other variables. These results may be applied to forecast the occurrence of BFV disease and to use public health resources in BFV control and prevention. PMID:16675420

  4. Assessment and prediction of road accident injuries trend using time-series models in Kurdistan.

    PubMed

    Parvareh, Maryam; Karimi, Asrin; Rezaei, Satar; Woldemichael, Abraha; Nili, Sairan; Nouri, Bijan; Nasab, Nader Esmail

    2018-01-01

    Road traffic accidents are commonly encountered incidents that can cause high-intensity injuries to the victims and have direct impacts on the members of society. Iran has one of the highest incidence rates of road traffic accidents. The objective of this study was to model the patterns of road traffic accidents leading to injury in Kurdistan province, Iran. A time-series analysis was conducted to characterize and predict the frequency of road traffic accidents that lead to injury in Kurdistan province. The injuries were categorized into three separate groups related to car occupant, motorcyclist and pedestrian road traffic accident injuries. The Box-Jenkins time-series approach was used to model the injury observations, applying autoregressive integrated moving average (ARIMA) and seasonal autoregressive integrated moving average (SARIMA) models to data from March 2009 to February 2015 and predicting the accidents up to 24 months later (February 2017). The analysis was carried out using the R-3.4.2 statistical software package. A total of 5199 pedestrian, 9015 motorcyclist, and 28,906 car occupant accidents were observed. The mean numbers of car occupant, motorcyclist and pedestrian accident injuries observed were 401.01 (SD 32.78), 123.70 (SD 30.18) and 71.19 (SD 17.92) per year, respectively. The best models for the patterns of car occupant, motorcyclist, and pedestrian injuries were the ARIMA (1,0,0), SARIMA (1,0,2)(1,0,0)12, and SARIMA (1,1,1)(0,0,1)12, respectively. The motorcyclist and pedestrian injuries showed a seasonal pattern, with a peak during summer (August). The minimum frequencies for motorcyclist and pedestrian injuries were observed during late autumn and early winter (December and January). Our findings revealed that the observed motorcyclist and pedestrian injuries had a seasonal pattern that was explained by air temperature changes over time. These findings call for close monitoring of accidents during high-risk periods in order to control and decrease the rate of injuries.

  5. Using Google Trends and ambient temperature to predict seasonal influenza outbreaks.

    PubMed

    Zhang, Yuzhou; Bambrick, Hilary; Mengersen, Kerrie; Tong, Shilu; Hu, Wenbiao

    2018-05-16

    The discovery of the dynamics of seasonal and non-seasonal influenza outbreaks remains a great challenge. Previous internet-based surveillance studies built purely on internet or climate data have potential error. We collected influenza notifications, temperature and Google Trends (GT) data between January 1st, 2011 and December 31st, 2016. We performed time-series cross-correlation analysis and temporal risk analysis to discover the characteristics of influenza epidemics in the period. Then, the seasonal autoregressive integrated moving average (SARIMA) model and a regression tree model were developed to track influenza epidemics using GT and climate data. Influenza infection was significantly correlated with GT at lags of 1-7 weeks in Brisbane and Gold Coast, and with temperature at lags of 1-10 weeks for the two study settings. SARIMA models with GT and temperature data had better predictive performance. We identified that autoregression (AR) of influenza was the most important determinant of influenza occurrence in both Brisbane and Gold Coast. Our results suggest that internet search metrics in conjunction with temperature can be used to predict influenza outbreaks, which can be considered a pre-requisite for constructing early warning systems using search and temperature data. Copyright © 2018 Elsevier Ltd. All rights reserved.
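    Lagged associations like the ones reported above (search interest leading influenza by 1-7 weeks) are typically found with lagged cross-correlation. A small illustrative sketch on synthetic series, where the "search" signal is constructed to lead the "case" series by 3 weeks:

```python
import numpy as np

def lagged_corr(y, x, lag):
    # Pearson correlation between y_t and x_{t-lag}
    y, x = np.asarray(y, float), np.asarray(x, float)
    if lag > 0:
        y, x = y[lag:], x[:-lag]
    return float(np.corrcoef(y, x)[0, 1])

rng = np.random.default_rng(1)
x = rng.normal(size=300)                        # e.g. a search-interest signal
y = np.roll(x, 3) + 0.3 * rng.normal(size=300)  # cases follow the signal 3 steps later
lags = {k: lagged_corr(y, x, k) for k in range(8)}
best_lag = max(lags, key=lambda k: lags[k])
print(best_lag)  # the constructed 3-step lead is recovered
```

    A lag identified this way can then be used to shift the exogenous predictor before fitting a SARIMA-type model, which is the general pattern these surveillance studies follow.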

  6. Temporal patterns and a disease forecasting model of dengue hemorrhagic fever in Jakarta based on 10 years of surveillance data.

    PubMed

    Sitepu, Monika S; Kaewkungwal, Jaranit; Luplerdlop, Nathanej; Soonthornworasiri, Ngamphol; Silawan, Tassanee; Poungsombat, Supawadee; Lawpoolsri, Saranath

    2013-03-01

    This study aimed to describe the temporal patterns of dengue transmission in Jakarta from 2001 to 2010, using data from the national surveillance system. The Box-Jenkins forecasting technique was used to develop a seasonal autoregressive integrated moving average (SARIMA) model for the study period, which was subsequently applied to forecast DHF incidence in 2011 in Jakarta Utara, Jakarta Pusat, Jakarta Barat, and the municipalities of Jakarta Province. Dengue incidence in 2011, based on the forecasting model, was predicted to increase from the previous year.

  7. Presentations to Emergency Departments for COPD: A Time Series Analysis.

    PubMed

    Rosychuk, Rhonda J; Youngson, Erik; Rowe, Brian H

    2016-01-01

    Background. Chronic obstructive pulmonary disease (COPD) is a common respiratory condition characterized by progressive dyspnea and acute exacerbations which may result in emergency department (ED) presentations. This study examines monthly rates of presentations to EDs in one Canadian province. Methods. Presentations for COPD made by individuals aged ≥55 years during April 1999 to March 2011 were extracted from provincial databases. Data included age, sex, and health zone of residence (North, Central, South, and urban). Crude rates were calculated. Seasonal autoregressive integrated moving average (SARIMA) time series models were developed. Results. ED presentations for COPD totalled 188,824, and the monthly rate of presentation remained relatively stable (from 197.7 to 232.6 per 100,000). Males and seniors (≥65 years) comprised 52.2% and 73.7% of presentations, respectively. The ARIMA(1,0,0) × (1,0,1)12 model was appropriate for the overall rate of presentations, as well as for each sex and for seniors. Zone-specific models showed relatively stable or decreasing rates; the North zone had an increasing trend. Conclusions. ED presentation rates for COPD have been relatively stable in Alberta during the past decade. However, the increases in northern regions deserve further exploration. The SARIMA models quantified the temporal patterns and can help in planning future health care service needs.

  8. Time series analysis of cholera in Matlab, Bangladesh, during 1988-2001.

    PubMed

    Ali, Mohammad; Kim, Deok Ryun; Yunus, Mohammad; Emch, Michael

    2013-03-01

    The study examined the impact of in-situ climatic and marine environmental variability on cholera incidence in an endemic area of Bangladesh and developed a forecasting model for understanding the magnitude of incidence. Diarrhoea surveillance data collected between 1988 and 2001 were obtained from a field research site in Matlab, Bangladesh. Cholera cases were defined as Vibrio cholerae O1 isolated from faecal specimens of patients who sought care at treatment centres serving the Matlab population. Cholera incidence for 168 months was correlated with remotely-sensed sea-surface temperature (SST) and in-situ environmental data, including rainfall and ambient temperature. A seasonal autoregressive integrated moving average (SARIMA) model was used for determining the impact of climatic and environmental variability on cholera incidence and evaluating the ability of the model to forecast the magnitude of cholera. There were 4,157 cholera cases during the study period, with an average of 1.4 cases per 1,000 people. Since monthly cholera cases varied significantly by month, it was necessary to stabilize the variance of cholera incidence by taking the natural logarithm before conducting the analysis. The SARIMA model shows temporal clustering of cholera at one- and 12-month lags. There was a 6% increase in cholera incidence with a minimum temperature increase of one degree Celsius in the current month. For an increase in SST of one degree Celsius, there was a 25% increase in cholera incidence in the current month and an 18% increase at a lag of two months. Rainfall did not significantly influence variation in cholera incidence during the study period. The model forecast the fluctuation of cholera incidence in Matlab reasonably well (root mean square error, RMSE: 0.108). Thus, the ambient and sea-surface temperature-based model could be used in forecasting cholera outbreaks in Matlab.
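    Because the series was modeled on the natural-log scale, a regression coefficient translates into a percentage change in incidence via exp(beta). A one-line illustration; the beta below is a hypothetical value chosen to reproduce a roughly 6% effect, not the study's fitted coefficient:

```python
import math

# On a log-transformed count series, a coefficient beta for a covariate
# corresponds to a multiplicative effect of exp(beta) per unit change.
beta = 0.058  # hypothetical coefficient for a +1 °C rise in minimum temperature
pct_change = 100 * (math.exp(beta) - 1)
print(round(pct_change, 1))  # → 6.0 (i.e. about a 6% rise in incidence)
```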

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voynikova, D. S., E-mail: desi-sl2000@yahoo.com; Gocheva-Ilieva, S. G., E-mail: snegocheva@yahoo.com; Ivanov, A. V., E-mail: aivanov-99@yahoo.com

    Numerous time series methods are used in the environmental sciences, allowing detailed investigation of air pollution processes. The goal of this study is to present an empirical analysis of various aspects of stochastic modeling, in particular the ARIMA/SARIMA methods. The subject of investigation is air pollution in the town of Kardzhali, Bulgaria, with two problematic pollutants: sulfur dioxide (SO2) and particulate matter (PM10). Various SARIMA Transfer Function models are built taking into account meteorological factors, data transformations and the use of different horizons selected to predict future levels of concentrations of the pollutants.

  10. Identification of the prediction model for dengue incidence in Can Tho city, a Mekong Delta area in Vietnam.

    PubMed

    Phung, Dung; Huang, Cunrui; Rutherford, Shannon; Chu, Cordia; Wang, Xiaoming; Nguyen, Minh; Nguyen, Nga Huy; Manh, Cuong Do

    2015-01-01

    The Mekong Delta is highly vulnerable to climate change and is a dengue-endemic area in Vietnam. This study aims to examine the association between climate factors and dengue incidence and to identify the best climate prediction model for dengue incidence in Can Tho city, in the Mekong Delta area of Vietnam. We used three different regression models: a standard multiple regression model (SMR), a seasonal autoregressive integrated moving average (SARIMA) model, and a Poisson distributed lag model (PDLM), to examine the association between climate factors and dengue incidence over the period 2003-2010. We validated the models by forecasting dengue cases for the period of January-December 2011 using the mean absolute percentage error (MAPE). Receiver operating characteristic curves were used to analyze the sensitivity of the forecast of a dengue outbreak. The results indicate that temperature and relative humidity are significantly associated with changes in dengue incidence consistently across the model methods used, but cumulative rainfall is not. The Poisson distributed lag model (PDLM) gives the best prediction of dengue incidence for 6-, 9-, and 12-month periods and for diagnosis of an outbreak; however, the SARIMA model gives a better prediction of dengue incidence for a 3-month period. The simple or standard multiple regression produced highly imprecise predictions of dengue incidence. We recommend a follow-up study to validate the model on a larger scale in the Mekong Delta region and to analyze the possibility of incorporating a climate-based dengue early warning method into the national dengue surveillance system. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network

    PubMed Central

    Yu, Ying; Wang, Yirui; Tang, Zheng

    2017-01-01

    With the impact of global internationalization, the tourism economy has also undergone rapid development. The increasing interest aroused by more advanced forecasting methods leads us to innovate in forecasting methods. In this paper, the seasonal trend autoregressive integrated moving average with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving average (SARIMA) model to exclude the long-term linear trend, and then we train the residual data with the dendritic neural network model and make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used with other models and compare the results. This also proved that the SA-D model achieves good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient. PMID:28246527
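    The two-stage idea above (a linear model first, then a second model for its residuals) can be sketched in a few lines of numpy. Here an ordinary linear-trend fit and a monthly residual profile stand in for the SARIMA and dendritic-network stages, on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(120)  # illustrative: 10 years of monthly observations
y = 100 + 0.4 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 120)

# stage 1: linear fit (stand-in for the linear SARIMA stage)
linear_pred = np.polyval(np.polyfit(t, y, 1), t)
resid = y - linear_pred

# stage 2: model the residuals (stand-in for the nonlinear stage);
# a monthly-mean profile captures the seasonality the line missed
profile = np.array([resid[t % 12 == m].mean() for m in range(12)])
hybrid_pred = linear_pred + profile[t % 12]

def rmse(e):
    return float(np.sqrt(np.mean(e ** 2)))

print(rmse(y - hybrid_pred) < rmse(y - linear_pred))  # residual stage helps
```

    The same decomposition logic underlies the SARIMA-GEP and SARIMA-ANN hybrids in other records in this list: the linear stage removes what it can, and the second stage models whatever structure remains in the residuals.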

  12. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.

    PubMed

    Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng

    2017-01-01

    With the impact of global internationalization, the tourism economy has also undergone rapid development. The increasing interest aroused by more advanced forecasting methods leads us to innovate in forecasting methods. In this paper, the seasonal trend autoregressive integrated moving average with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving average (SARIMA) model to exclude the long-term linear trend, and then we train the residual data with the dendritic neural network model and make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used with other models and compare the results. This also proved that the SA-D model achieves good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient.

  13. Time Series Analysis of Cholera in Matlab, Bangladesh, during 1988-2001

    PubMed Central

    Kim, Deok Ryun; Yunus, Mohammad; Emch, Michael

    2013-01-01

    The study examined the impact of in-situ climatic and marine environmental variability on cholera incidence in an endemic area of Bangladesh and developed a forecasting model for understanding the magnitude of incidence. Diarrhoea surveillance data collected between 1988 and 2001 were obtained from a field research site in Matlab, Bangladesh. Cholera cases were defined as Vibrio cholerae O1 isolated from faecal specimens of patients who sought care at treatment centres serving the Matlab population. Cholera incidence for 168 months was correlated with remotely-sensed sea-surface temperature (SST) and in-situ environmental data, including rainfall and ambient temperature. A seasonal autoregressive integrated moving average (SARIMA) model was used for determining the impact of climatic and environmental variability on cholera incidence and evaluating the ability of the model to forecast the magnitude of cholera. There were 4,157 cholera cases during the study period, with an average of 1.4 cases per 1,000 people. Since monthly cholera cases varied significantly by month, it was necessary to stabilize the variance of cholera incidence by taking the natural logarithm before conducting the analysis. The SARIMA model shows temporal clustering of cholera at one- and 12-month lags. There was a 6% increase in cholera incidence with a minimum temperature increase of one degree Celsius in the current month. For an increase in SST of one degree Celsius, there was a 25% increase in cholera incidence in the current month and an 18% increase at a lag of two months. Rainfall did not significantly influence variation in cholera incidence during the study period. The model forecast the fluctuation of cholera incidence in Matlab reasonably well (root mean square error, RMSE: 0.108). Thus, the ambient and sea-surface temperature-based model could be used in forecasting cholera outbreaks in Matlab. PMID:23617200

  14. Assessing air quality in Aksaray with time series analysis

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Kadilar, Cem

    2017-04-01

    Sulphur dioxide (SO2) is a major air pollutant caused by the dominant usage of diesel, petrol and other fuels by vehicles and industries. One of the most air-polluted cities in Turkey is Aksaray. Hence, in this study, the level of SO2 in Aksaray is analyzed based on the database monitored at the air quality monitoring stations of Turkey. A Seasonal Autoregressive Integrated Moving Average (SARIMA) approach is used to forecast the level of the SO2 air quality parameter. The results indicate that the seasonal ARIMA model provides reliable and satisfactory predictions for the air quality parameters and is expected to be an alternative tool for practical assessment and justification.

  15. Short-term forecasting of meteorological time series using Nonparametric Functional Data Analysis (NPFDA)

    NASA Astrophysics Data System (ADS)

    Curceac, S.; Ternynck, C.; Ouarda, T.

    2015-12-01

    Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach lies in expressing the data as curves. In the present work, the focus is on daily forecasting, and the functional observations (curves) express the daily measurements of the above-mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting, based on a bandwidth obtained by a cross-validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied, and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) Information Criteria and on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
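    Order selection by minimum AIC, as used for the SARIMA model above, can be illustrated with least-squares AR fits on a simulated AR(2) series. This is a simplified stand-in (plain AR, Gaussian AIC up to a constant), not the meteorological data or the authors' procedure:

```python
import numpy as np

def fit_ar(x, p):
    # least-squares AR(p): x_t ≈ c + phi_1*x_{t-1} + ... + phi_p*x_{t-p}
    x = np.asarray(x, float)
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def aic(resid, k):
    # Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k
    n = len(resid)
    return n * np.log(float(resid @ resid) / n) + 2 * k

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):  # simulate an AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

scores = {p: aic(fit_ar(x, p)[1], p + 1) for p in (1, 2, 3, 4)}
print(min(scores, key=scores.get))  # order with minimum AIC
```

    The 2k penalty term is what keeps AIC from always preferring the largest model; BIC replaces it with k·ln(n) and penalizes complexity more strongly.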

  16. Time-series modeling and prediction of global monthly absolute temperature for environmental decision making

    NASA Astrophysics Data System (ADS)

    Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun

    2013-03-01

    A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
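    The deterministic stage of a DSC-style decomposition (polynomial trend plus Fourier seasonal terms, leaving a stochastic remainder for a SARIMA-type model) can be sketched with a single least-squares fit. The monthly "temperatures" below are synthetic, not the global dataset:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(240.0)  # illustrative: 20 years of monthly temperatures
y = 14 + 0.004 * t + 9 * np.cos(2 * np.pi * t / 12 - 1.0) + rng.normal(0, 0.5, 240)

# deterministic stage: polynomial trend plus first-harmonic Fourier terms
X = np.column_stack([
    np.ones_like(t), t, t ** 2,
    np.cos(2 * np.pi * t / 12), np.sin(2 * np.pi * t / 12),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta  # stochastic remainder, left for a SARIMA-type model

print(resid.std() < 1.0)  # trend and annual cycle are captured deterministically
```

    In the DSC framework, the residual series produced here is exactly what the SARIMA stage would then model.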

  17. Models for Train Passenger Forecasting of Java and Sumatra

    NASA Astrophysics Data System (ADS)

    Sartono

    2017-04-01

    People tend to take public transportation to avoid heavy traffic, especially in Java. In Jakarta, the number of railway passengers exceeds the capacity of the trains at peak times. This is an opportunity as well as a challenge: if it is managed well, the company can earn high profits; otherwise, it may lead to disaster. This article discusses models for train passenger numbers, in order to find reasonable models for prediction over time. The Box-Jenkins method is employed to develop a basic model. Then, this model is compared with models obtained using the exponential smoothing method and the regression method. The results show that the Holt-Winters model is better for predicting one-month-, three-month-, and six-month-ahead passenger numbers in Java, while SARIMA(1,1,0)(2,0,0) is more accurate for nine-month and twelve-month horizons. On the other hand, for Sumatra passenger forecasting, SARIMA(1,1,1)(0,0,2) gives a better approximation one month ahead, and an ARIMA model is best for three-month-ahead prediction. For the rest, the trend-seasonal linear model has the lowest RMSE for six-month, nine-month, and twelve-month-ahead forecasts.
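    Additive Holt-Winters smoothing, the exponential smoothing method compared above, can be written out in a few lines. The data and smoothing constants below are illustrative, not the rail-passenger series:

```python
def holt_winters_additive(x, alpha, beta, gamma, s):
    # Additive Holt-Winters: level + trend + period-s seasonal component.
    # Returns the one-step-ahead fitted value for each observation.
    level, trend = float(x[0]), float(x[1] - x[0])
    seasonal = [x[i] - sum(x[:s]) / s for i in range(s)]
    fitted = []
    for t, obs in enumerate(x):
        fitted.append(level + trend + seasonal[t % s])
        prev_level = level
        level = alpha * (obs - seasonal[t % s]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal[t % s] = gamma * (obs - level) + (1 - gamma) * seasonal[t % s]
    return fitted

# illustrative monthly passenger counts: linear growth plus a yearly swing
data = [100 + 2 * t + (10 if t % 12 < 6 else -10) for t in range(60)]
fitted = holt_winters_additive(data, alpha=0.3, beta=0.1, gamma=0.2, s=12)
err = sum(abs(a - f) for a, f in zip(data[12:], fitted[12:])) / len(data[12:])
print(len(fitted))  # one fitted value per observation
```

    Unlike SARIMA, Holt-Winters has no explicit differencing or moving-average terms; it tracks level, trend and seasonality recursively, which is often enough for short horizons such as the one-to-six-month forecasts reported above.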

  18. Climatic Variables and Malaria Morbidity in Mutale Local Municipality, South Africa: A 19-Year Data Analysis

    PubMed Central

    Botai, Joel O.; Rautenbach, Hannes; Ncongwane, Katlego P.; Botai, Christina M.

    2017-01-01

    The north-eastern parts of South Africa, comprising the Limpopo Province, have recorded a sudden rise in the rate of malaria morbidity and mortality in the 2017 malaria season. The epidemiological profiles of malaria, as well as other vector-borne diseases, are strongly associated with climate and environmental conditions. A retrospective understanding of the relationship between climate and the occurrence of malaria may provide insight into the dynamics of the disease’s transmission and its persistence in the north-eastern region. In this paper, the association between climatic variables and the occurrence of malaria was studied in the Mutale local municipality in South Africa over a 19-year period. Time series analysis was conducted on monthly climatic variables and monthly malaria cases in the Mutale municipality for the period of 1998–2017. Spearman correlation analysis was performed and the Seasonal Autoregressive Integrated Moving Average (SARIMA) model was developed. Microsoft Excel was used for data cleaning, and the statistical software R was used to analyse the data and develop the model. Results show that both the climatic variables’ and malaria cases’ time series exhibited seasonal patterns, showing a number of peaks and fluctuations. Spearman correlation analysis indicated that monthly total rainfall, mean minimum temperature, mean maximum temperature, mean average temperature, and mean relative humidity were significantly and positively correlated with monthly malaria cases in the study area. Regression analysis showed that monthly total rainfall and monthly mean minimum temperature (R2 = 0.65), at a two-month lagged effect, are the most significant climatic predictors of malaria transmission in the Mutale local municipality. A SARIMA (2,1,2)(1,1,1) model fitted with only malaria cases has a prediction performance of about 51%, and the SARIMAX (2,1,2)(1,1,1) model with climatic variables as exogenous factors has a prediction performance of about 72% for malaria cases. The model gives a close comparison between the predicted and observed numbers of malaria cases, indicating that it provides an acceptable fit for predicting the number of malaria cases in the municipality. In sum, the association between the climatic variables and malaria cases provides clues to better understand the dynamics of malaria transmission. The lagged effect detected in this study can help in adequate planning for malaria interventions. PMID:29117114

  19. Climatic Variables and Malaria Morbidity in Mutale Local Municipality, South Africa: A 19-Year Data Analysis.

    PubMed

    Adeola, Abiodun M; Botai, Joel O; Rautenbach, Hannes; Adisa, Omolola M; Ncongwane, Katlego P; Botai, Christina M; Adebayo-Ojo, Temitope C

    2017-11-08

    The north-eastern parts of South Africa, comprising the Limpopo Province, have recorded a sudden rise in the rate of malaria morbidity and mortality in the 2017 malaria season. The epidemiological profiles of malaria, as well as other vector-borne diseases, are strongly associated with climate and environmental conditions. A retrospective understanding of the relationship between climate and the occurrence of malaria may provide insight into the dynamics of the disease's transmission and its persistence in the north-eastern region. In this paper, the association between climatic variables and the occurrence of malaria was studied in the Mutale local municipality in South Africa over a period of 19-year. Time series analysis was conducted on monthly climatic variables and monthly malaria cases in the Mutale municipality for the period of 1998-2017. Spearman correlation analysis was performed and the Seasonal Autoregressive Integrated Moving Average (SARIMA) model was developed. Microsoft Excel was used for data cleaning, and statistical software R was used to analyse the data and develop the model. Results show that both climatic variables' and malaria cases' time series exhibited seasonal patterns, showing a number of peaks and fluctuations. Spearman correlation analysis indicated that monthly total rainfall, mean minimum temperature, mean maximum temperature, mean average temperature, and mean relative humidity were significantly and positively correlated with monthly malaria cases in the study area. Regression analysis showed that monthly total rainfall and monthly mean minimum temperature ( R ² = 0.65), at a two-month lagged effect, are the most significant climatic predictors of malaria transmission in Mutale local municipality. 
A SARIMA (2,1,2) (1,1,1) model fitted with only malaria cases has a prediction performance of about 51%, while the SARIMAX (2,1,2) (1,1,1) model with climatic variables as exogenous factors has a prediction performance of about 72%. The predicted and observed numbers of malaria cases are in close agreement, indicating that the model provides an acceptable fit for predicting the number of malaria cases in the municipality. To sum up, the association between the climatic variables and malaria cases provides clues to better understand the dynamics of malaria transmission. The lagged effect detected in this study can help in adequate planning for malaria interventions.
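The screening step above, correlating a climate series with malaria counts at a lag, can be sketched with a rank-based (Spearman) correlation. This is a minimal sketch, not the authors' code, and the monthly values below are hypothetical placeholders rather than the Mutale series.

```python
def ranks(xs):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def lagged_spearman(climate, cases, lag):
    """Correlate climate at month t with cases at month t + lag."""
    return spearman(climate[:len(climate) - lag], cases[lag:])

rainfall = [10, 30, 80, 120, 90, 40, 15, 5, 2, 8, 25, 60]  # hypothetical mm
cases = [3, 4, 6, 12, 25, 30, 20, 10, 5, 3, 2, 4]          # hypothetical counts
two_month = lagged_spearman(rainfall, cases, 2)
```

A significant lagged coefficient of this kind is what motivates adding the climate series as an exogenous SARIMAX regressor at that lag.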

  20. Strengthening economy through tourism sector by tourist arrival prediction

    NASA Astrophysics Data System (ADS)

    Supriatna, A.; Subartini, B.; Hertini, E.; Sukono; Rumaisha; Istiqamah, N.

    2018-03-01

The tourism sector tends to be promoted as a support for the national economy in many countries with a variety of natural resources, such as Indonesia. The number of tourists is closely related to the success of a tourist attraction, as well as to planning and strategy. Hence, it is important to predict the climate of tourism in Indonesia, especially the future number of domestic and international tourists. This study uses the Seasonal Autoregressive Integrated Moving Average (SARIMA) time series method to predict the number of tourist arrivals in strategic tourism areas of Nusa Tenggara Barat. The prediction was made using international and domestic tourist arrival data for Nusa Tenggara Barat from January 2008 to June 2016. The established SARIMA model was (0,1,1)(0,0,2)12, with a MAPE of 15.76. The prediction for the next six time periods showed that the highest number of tourist arrivals occurs in September 2016, with 330,516 arrivals. These predictions can serve as a reference for local and national governments in making policies to strengthen the national economy over the long term.
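The MAPE score reported for the SARIMA fit is the mean absolute percentage error between actual and forecast arrivals. A minimal sketch with hypothetical numbers (not the Nusa Tenggara Barat data):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast))

actual = [300000, 310000, 330516, 320000]    # hypothetical monthly arrivals
forecast = [280000, 325000, 340000, 300000]  # hypothetical SARIMA forecasts
score = mape(actual, forecast)
```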

  1. Treatment on outliers in UBJ-SARIMA models for forecasting dengue cases on age groups not eligible for vaccination in Baguio City, Philippines

    NASA Astrophysics Data System (ADS)

    Magsakay, Clarenz B.; De Vera, Nora U.; Libatique, Criselda P.; Addawe, Rizavel C.; Addawe, Joel M.

    2017-11-01

    Dengue vaccination has become a breakthrough in the fight against dengue infection. This is however not applicable to all ages. Individuals from 0 to 8 years old and adults older than 45 years old remain susceptible to the vector-borne disease dengue. Forecasting future dengue cases accurately from susceptible age groups would aid in the efforts to prevent further increase in dengue infections. For the age groups of individuals not eligible for vaccination, the presence of outliers was observed and was treated using winsorization, square root, and logarithmic transformations to create a SARIMA model. The best model for the age group 0 to 8 years old was found to be ARIMA(13,1,0)(1,0,0)12 with 10 fixed variables using square root transformation with a 95% winsorization, and the best model for the age group older than 45 years old is ARIMA(7,1,0)(1,0,0)12 with 5 fixed variables using logarithmic transformation with 90% winsorization. These models are then used to forecast the monthly dengue cases for Baguio City for the age groups considered.
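The outlier treatment described above, winsorization followed by a power transform, can be sketched as follows. The exact winsorization convention used in the paper is not stated; this sketch assumes a common variant in which the most extreme values in each tail are clipped to the nearest retained value, shown here at the 90% level with hypothetical case counts.

```python
def winsorize(xs, level=90.0):
    """Clip the g most extreme values in each tail to the nearest retained
    value, with g = round(n * (100 - level) / 200)."""
    n = len(xs)
    g = round(n * (100.0 - level) / 200.0)
    s = sorted(xs)
    lo, hi = s[g], s[n - 1 - g]
    return [min(max(x, lo), hi) for x in xs]

cases = [12, 15, 9, 14, 11, 10, 13, 240, 16, 12, 8, 11]  # 240 is an outlier
treated = [x ** 0.5 for x in winsorize(cases)]           # clip, then sqrt
```

A SARIMA model would then be identified on `treated` rather than on the raw counts, and forecasts back-transformed by squaring.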

  2. Daily air quality index forecasting with hybrid models: A case in China.

    PubMed

    Zhu, Suling; Lian, Xiuyuan; Liu, Haixia; Hu, Jianming; Wang, Yuanyuan; Che, Jinxing

    2017-12-01

Air quality is closely related to quality of life. Air pollution forecasting plays a vital role in air pollution warning and control. However, it is difficult to attain accurate forecasts for air pollution indexes because the original data are non-stationary and chaotic. Existing forecasting methods, such as multiple linear models, autoregressive integrated moving average (ARIMA) and support vector regression (SVR), cannot fully capture the information in series of pollution indexes. Therefore, new effective techniques need to be proposed to forecast air pollution indexes. The main purpose of this research is to develop effective forecasting models for regional air quality indexes (AQI) to address the problems above and enhance forecasting accuracy. Two hybrid models (EMD-SVR-Hybrid and EMD-IMFs-Hybrid) are therefore proposed to forecast AQI data. The main steps of the EMD-SVR-Hybrid model are as follows: the data preprocessing technique EMD (empirical mode decomposition) is utilized to sift the original AQI data to obtain one group of smoother IMFs (intrinsic mode functions) and a noise series, where the IMFs contain the important information (level, fluctuations and others) from the original AQI series. LS-SVR is applied to forecast the sum of the IMFs, and then S-ARIMA (seasonal ARIMA) is employed to forecast the residual sequence of LS-SVR. In addition, EMD-IMFs-Hybrid first forecasts the IMFs separately via statistical models and sums the forecasting results of the IMFs as EMD-IMFs; then, S-ARIMA is employed to forecast the residuals of EMD-IMFs. To verify the proposed hybrid models, AQI data from June 2014 to August 2015 collected from Xingtai in China are utilized as a test case. In terms of several forecasting assessment measures, the AQI forecasting results for Xingtai show that the two proposed hybrid models are superior to ARIMA, SVR, GRNN, EMD-GRNN, Wavelet-GRNN and Wavelet-SVR.
Therefore, the proposed hybrid models can be used as effective and simple tools for air pollution forecasting and warning as well as for management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Simulation And Forecasting of Daily Pm10 Concentrations Using Autoregressive Models In Kagithane Creek Valley, Istanbul

    NASA Astrophysics Data System (ADS)

    Ağaç, Kübra; Koçak, Kasım; Deniz, Ali

    2015-04-01

Time series approaches using the autoregressive (AR), moving average (MA) and seasonal autoregressive integrated moving average (SARIMA) models were used in this study to simulate and forecast daily PM10 concentrations in Kagithane Creek Valley, Istanbul. Hourly PM10 concentrations were measured in Kagithane Creek Valley between 2010 and 2014. The Bosphorus divides the city into European and Asian parts; the historical part of the city lies on the Golden Horn, to which our study area, Kagithane Creek Valley, is connected. The study area is highly polluted because of its topographical structure and industrial activities, and its population density is extremely high. Dispersion conditions in the creek valley are very poor, so it is necessary to calculate PM10 levels for air quality and human health purposes. Since some PM10 concentration values were missing in the given period, gaps were filled using Singular Spectrum Analysis (SSA) to obtain accurate results. SSA is an efficient, state-of-the-art method for gap filling; the SSA-MTM Toolkit was used in our study. SSA can also be considered a noise reduction algorithm because it decomposes an original time series into trend (if one exists), oscillatory and noise components by way of a singular value decomposition. The basic SSA algorithm has two stages: decomposition and reconstruction. For the given period, daily and monthly PM10 concentrations were calculated and episodic periods were determined. Long-term and short-term PM10 concentrations were analysed according to European Union (EU) standards. For simulation and forecasting of high PM10 concentrations, meteorological data (wind speed, pressure and temperature) were used to examine their relationship with daily PM10 concentrations.
A Fast Fourier Transform (FFT) was also applied to the data to identify periodicities, and models were built in the MATLAB and EViews programs according to these periods. Because of the seasonality of the PM10 data, a SARIMA model was also used. The order of the autoregressive model was determined according to the AIC and BIC criteria. Model performance was evaluated using Fractional Bias, Normalized Mean Square Error (NMSE) and Mean Absolute Percentage Error (MAPE). As expected, the results were encouraging. Keywords: PM10, Autoregression, Forecast. Acknowledgement: The authors would like to acknowledge the financial support of the Scientific and Technological Research Council of Turkey (TUBITAK, project no: 112Y319).

  4. Application of Time-series Model to Predict Groundwater Quality Parameters for Agriculture: (Plain Mehran Case Study)

    NASA Astrophysics Data System (ADS)

    Mehrdad Mirsanjari, Mir; Mohammadyari, Fatemeh

    2018-03-01

Groundwater is a considerable water source, particularly in arid and semi-arid regions with deficient surface water sources. Forecasting of hydrological variables is a suitable tool in water resources management, and time series methods are an efficient means in the forecasting process. In this study, data on qualitative parameters (electrical conductivity and sodium adsorption ratio) of 17 groundwater wells in the Mehran Plain were used to model the trend of parameter change over time. Using the selected model, the qualitative parameters of groundwater were predicted for the next seven years. Data from 2003 to 2016 were collected and fitted by AR, MA, ARMA, ARIMA and SARIMA models, and the best model was determined using the Akaike information criterion (AIC) and the correlation coefficient. After modelling the parameters, maps of agricultural land use in 2016 and 2023 were generated and the changes between these years were studied. Based on the results, the predicted average SAR (Sodium Adsorption Ratio) in all wells will increase by 2023 compared to 2016. Average EC (Electrical Conductivity) will increase in the ninth and fifteenth wells and decrease in the other wells. The results indicate that the quality of groundwater for agriculture in the Mehran Plain will decline over the next seven years.

  5. Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.

    PubMed

    Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam

    2015-01-01

Hepatitis B (HB) is a major cause of mortality globally. Accurately predicting the trend of the disease can provide an appropriate basis for health policy on disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. A Weighted Markov Chain (WMC) method based on Markov chain theory and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The results of the applied methods were compared in terms of the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The correctly predicted percentages for the first and second clusters were (100, 0) for WMC, (84, 67) for HES and (79, 47) for SARIMA, respectively. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the three models indicated that, in view of the existing seasonality and non-stationarity, HES gave the most accurate prediction of the incidence rates.
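Holt Exponential Smoothing, the HES method compared above, maintains a level and a trend that are updated recursively; an h-step-ahead forecast is level + h·trend. A minimal sketch with hypothetical incidence rates and hypothetical smoothing constants (the paper's fitted constants are not given):

```python
def holt(series, alpha=0.5, beta=0.3, horizon=3):
    """Holt's linear exponential smoothing: recursive level/trend updates,
    then straight-line extrapolation for the forecasts."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

rates = [2.1, 2.0, 1.8, 1.9, 1.6, 1.5, 1.4]  # hypothetical monthly incidence
forecasts = holt(rates)
```

On a perfectly linear series the recursion reproduces the line exactly, which is a handy sanity check on the update equations.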

  6. PREDICTING CLINICALLY DIAGNOSED DYSENTERY INCIDENCE OBTAINED FROM MONTHLY CASE REPORTING BASED ON METEOROLOGICAL VARIABLES IN DALIAN, LIAONING PROVINCE, CHINA, 2005-2011 USING A DEVELOPED MODEL.

    PubMed

    An, Qingyu; Yao, Wei; Wu, Jun

    2015-03-01

    This study describes our development of a model to predict the incidence of clinically diagnosed dysentery in Dalian, Liaoning Province, China, using time series analysis. The model was developed using the seasonal autoregressive integrated moving average (SARIMA). Spearman correlation analysis was conducted to explore the relationship between meteorological variables and the incidence of clinically diagnosed dysentery. The meteorological variables which significantly correlated with the incidence of clinically diagnosed dysentery were then used as covariables in the model, which incorporated the monthly incidence of clinically diagnosed dysentery from 2005 to 2010 in Dalian. After model development, a simulation was conducted for the year 2011 and the results of this prediction were compared with the real observed values. The model performed best when the temperature data for the preceding month was used to predict clinically diagnosed dysentery during the following month. The developed model was effective and reliable in predicting the incidence of clinically diagnosed dysentery for most but not all months, and may be a useful tool for dysentery disease control and prevention, but further studies are needed to fine tune the model.

  7. Models for short term malaria prediction in Sri Lanka

    PubMed Central

    Briët, Olivier JT; Vounatsou, Penelope; Gunawardena, Dissanayake M; Galappaththy, Gawrie NL; Amerasinghe, Priyanie H

    2008-01-01

    Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall were assessed for their ability to improve prediction of selected (seasonal) ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed. PMID:18460204

  8. Climate variability, weather and enteric disease incidence in New Zealand: time series analysis.

    PubMed

    Lal, Aparna; Ikeda, Takayoshi; French, Nigel; Baker, Michael G; Hales, Simon

    2013-01-01

Evaluating the influence of climate variability on enteric disease incidence may improve our ability to predict how climate change may affect these diseases. To examine the associations between regional climate variability and enteric disease incidence in New Zealand, associations between monthly climate and enteric diseases (campylobacteriosis, salmonellosis, cryptosporidiosis, giardiasis) were investigated using Seasonal Auto Regressive Integrated Moving Average (SARIMA) models. No climatic factors were significantly associated with campylobacteriosis and giardiasis, with similar predictive power for univariate and multivariate models. Cryptosporidiosis was positively associated with the average temperature of the previous month (β = 0.130, SE = 0.060, p < 0.01) and inversely related to the Southern Oscillation Index (SOI) two months previously (β = -0.008, SE = 0.004, p < 0.05). By contrast, salmonellosis was positively associated with the temperature of the current month (β = 0.110, SE = 0.020, p < 0.001) and the SOI of the current (β = 0.005, SE = 0.002, p < 0.05) and previous month (β = 0.005, SE = 0.002, p < 0.05). Forecasting accuracy of the multivariate models for cryptosporidiosis and salmonellosis was significantly higher. Although spatial heterogeneity in the observed patterns could not be assessed, these results suggest that temporally lagged relationships between climate variables and national communicable disease incidence data can contribute to disease prediction models and early warning systems.

  9. An application of seasonal ARIMA models on group commodities to forecast Philippine merchandise exports performance

    NASA Astrophysics Data System (ADS)

    Natividad, Gina May R.; Cawiding, Olive R.; Addawe, Rizavel C.

    2017-11-01

    The increase in the merchandise exports of the country offers information about the Philippines' trading role within the global economy. Merchandise exports statistics are used to monitor the country's overall production that is consumed overseas. This paper investigates the comparison between two models obtained by a) clustering the commodity groups into two based on its proportional contribution to the total exports, and b) treating only the total exports. Different seasonal autoregressive integrated moving average (SARIMA) models were then developed for the clustered commodities and for the total exports based on the monthly merchandise exports of the Philippines from 2011 to 2016. The data set used in this study was retrieved from the Philippine Statistics Authority (PSA) which is the central statistical authority in the country responsible for primary data collection. A test for significance of the difference between means at 0.05 level of significance was then performed on the forecasts produced. The result indicates that there is a significant difference between the mean of the forecasts of the two models. Moreover, upon a comparison of the root mean square error (RMSE) and mean absolute error (MAE) of the models, it was found that the models used for the clustered groups outperform the model for the total exports.

  10. Developing a dengue early warning system using time series model: Case study in Tainan, Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, Xiao-Wei; Jan, Chyan-Deng; Wang, Ji-Shang

    2017-04-01

Dengue fever (DF) is a climate-sensitive disease that has been emerging in the southern regions of Taiwan over the past few decades, causing a significant health burden in affected areas. This study aims to propose a predictive model to implement an early warning system so as to enhance dengue surveillance and control in Tainan, Taiwan. The Seasonal Autoregressive Integrated Moving Average (SARIMA) model was used herein to forecast dengue cases. Temporal correlations between dengue incidence and climate variables were examined by Pearson correlation analysis and cross-correlation tests in order to identify key determinants to include as predictors. The dengue surveillance data between 2000 and 2009, as well as the respective climate variables, were then used as inputs for the model. We validated the model by forecasting the number of dengue cases expected to occur each week between January 1, 2010 and December 31, 2015. In addition, we analyzed historical dengue trends and found that 25 cases occurring in one week was a trigger point that often led to a dengue outbreak. This threshold was combined with the season-based framework put forth by the World Health Organization to create a more accurate epidemic threshold for a Tainan-specific warning system. A seasonal ARIMA model of the general form (1,0,5)(1,1,1)52 was identified as the most appropriate model based on the lowest AIC, and proved significant in the prediction of observed dengue cases. Based on the correlation coefficients, lag-11 maximum 1-hr rainfall (r=0.319, P<0.05) and lag-11 minimum temperature (r=0.416, P<0.05) are the most positively correlated climate variables. Comparing the four multivariate models (i.e., 1, 4, 9 and 13 weeks ahead), we found that including the climate variables improves prediction RMSE by as much as 3.24%, 10.39%, 17.96% and 21.81%, respectively, compared with the univariate models.
Furthermore, the ability of the four multivariate models to determine whether the epidemic threshold would be exceeded in any given week during the forecasting period of 2010-2015 was analyzed using a contingency table. The four-weeks-ahead approach was the most appropriate for an operational public health response, with a 78.7% hit rate and a 0.7% false alarm rate. Our findings indicate that the SARIMA model is well suited for detecting outbreaks, as it has high sensitivity and a low risk of false alarms. Accurately forecasting future trends will provide valuable time to activate dengue surveillance and control in Tainan, Taiwan. We conclude that this timely dengue early warning system will enable public health services to allocate limited resources more effectively, and public health officials to adjust dengue emergency response plans to their maximum capabilities.
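The contingency-table evaluation described above reduces each week to a binary outcome (threshold exceeded or not) for both observation and forecast. A minimal sketch, assuming the standard definitions hit rate = hits / (hits + misses) and false alarm rate = false alarms / (false alarms + correct negatives), with hypothetical weekly flags:

```python
def contingency_scores(observed, predicted):
    """observed/predicted: booleans, True = weekly threshold exceeded."""
    hits = sum(o and p for o, p in zip(observed, predicted))
    misses = sum(o and not p for o, p in zip(observed, predicted))
    false_alarms = sum(p and not o for o, p in zip(observed, predicted))
    correct_negs = sum(not o and not p for o, p in zip(observed, predicted))
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negs)
    return hit_rate, false_alarm_rate

obs = [False, False, True, True, True, False, False, True]   # hypothetical
pred = [False, True, True, True, False, False, False, True]  # hypothetical
hit, far = contingency_scores(obs, pred)
```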

  11. Time series trends of the safety effects of pavement resurfacing.

    PubMed

    Park, Juneyoung; Abdel-Aty, Mohamed; Wang, Jung-Han

    2017-04-01

This study evaluated the safety performance of pavement resurfacing projects on urban arterials in Florida using observational before-and-after approaches. The safety effects of pavement resurfacing were quantified as crash modification factors (CMFs) and estimated for different ranges of heavy vehicle traffic volume and time changes at different severity levels. In order to evaluate the variation of CMFs over time, crash modification functions (CMFunctions) were developed using nonlinear regression and time series models. The results showed that pavement resurfacing projects decrease crash frequency and are more effective at reducing severe crashes in general. Moreover, the results on the general relationship between the safety effects and time indicated that the CMFs increase over time after the resurfacing treatment. It was also found that pavement resurfacing projects are more effective for urban roadways with a higher heavy vehicle volume rate than for roadways with a lower rate. Based on the exploration and comparison of the developed CMFunctions, the seasonal autoregressive integrated moving average (SARIMA) and exponential functional forms of the nonlinear regression models can be utilized to identify the trend of CMFs over time. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.

    PubMed

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-06-01

Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation at month-scale, medium-term and long-term time scales is especially needed, given the necessity of upgrading MSW management in many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. At the month scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 - 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually, increasing to 2486.3 thousand tonnes by 2020 - 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
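The grey-system component of such hybrids is typically a GM(1,1) model: the series is accumulated, two parameters are estimated by least squares, and forecasts come from an exponential time-response function. The following is a sketch under that assumption (the paper's exact grey formulation may differ), with hypothetical tonnage data rather than the Xiamen series:

```python
import math

def gm11(x0, horizon=1):
    """GM(1,1): accumulate x0, fit (a, b) by least squares on
    x0[k] + a*z[k] = b, then forecast from the exponential response."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # accumulated series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy = sum(x0[1:])
    szy = sum(v * y for v, y in zip(z, x0[1:]))
    det = szz * m - sz * sz                               # 2x2 normal equations
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # de-accumulate to recover the original-scale forecasts
    return [x1_hat(n + h - 1) - x1_hat(n + h - 2) for h in range(1, horizon + 1)]

waste = [100.0, 110.0, 121.0, 133.1, 146.41]  # hypothetical annual tonnage
next_year = gm11(waste)[0]
```

On data growing at a steady 10% per year, as in this toy series, the grey forecast closely tracks the geometric continuation.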

  13. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    PubMed

    Durdu, Omer Faruk

    2010-10-01

In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes River in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict boron content in the Büyük Menderes catchment. Initially, Box-Whisker plots and Kendall's tau test are used to identify trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models, and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results of the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data.
The comparison of the mean and variance of the 3-year (2002-2004) observed data with the predictions of the selected best models shows that the boron models from the ARIMA modeling approach can be used safely, since the predicted values preserve the basic statistics of the observed data in terms of the mean. The ARIMA modeling approach is recommended for predicting the boron concentration series of a river.
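The identification step described above leans on the sample autocorrelation function (ACF), whose decay pattern (together with the PACF) suggests candidate ARIMA orders. A minimal sketch of the sample ACF, with a hypothetical concentration series:

```python
def acf(series, max_lag):
    """Sample autocorrelation at lags 0..max_lag."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((x - mean) ** 2 for x in series)
    return [sum((series[t] - mean) * (series[t + k] - mean)
                for t in range(n - k)) / c0
            for k in range(max_lag + 1)]

boron = [0.8, 1.1, 1.4, 1.2, 0.9, 1.0, 1.3, 1.5, 1.1, 0.7]  # hypothetical mg/L
correlogram = acf(boron, 3)
```

A slowly decaying ACF points toward differencing, while a sharp cutoff after lag q suggests an MA(q) component; spikes at the seasonal lag motivate the seasonal terms of a SARIMA model.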

  14. Combined influence of multiple climatic factors on the incidence of bacterial foodborne diseases.

    PubMed

    Park, Myoung Su; Park, Ki Hwan; Bahk, Gyung Jin

    2018-01-01

    Information regarding the relationship between the incidence of foodborne diseases (FBD) and climatic factors is useful in designing preventive strategies for FBD based on anticipated future climate change. To better predict the effect of climate change on foodborne pathogens, the present study investigated the combined influence of multiple climatic factors on bacterial FBD incidence in South Korea. During 2011-2015, the relationships between 8 climatic factors and the incidences of 13 bacterial FBD, were determined based on inpatient stays, on a monthly basis using the Pearson correlation analyses, multicollinearity tests, principal component analysis (PCA), and the seasonal autoregressive integrated moving average (SARIMA) modeling. Of the 8 climatic variables, the combination of temperature, relative humidity, precipitation, insolation, and cloudiness was significantly associated with salmonellosis (P<0.01), vibriosis (P<0.05), and enterohemorrhagic Escherichia coli O157:H7 infection (P<0.01). The combined effects of snowfall, wind speed, duration of sunshine, and cloudiness were not significant for these 3 FBD. Other FBD, including campylobacteriosis, were not significantly associated with any combination of climatic factors. These findings indicate that the relationships between multiple climatic factors and bacterial FBD incidence can be valuable for the development of prediction models for future patterns of diseases in response to changes in climate. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Projecting the Water and Electric Consumption of Polytechnic University of the Philippines

    NASA Astrophysics Data System (ADS)

    Urrutia, Jackie D.; Mercado, Joseph; Bautista, Lincoln A.; Baccay, Edcon B.

    2017-03-01

This study investigates water and electric consumption at the Polytechnic University of the Philippines - Sta. Mesa using time series analysis. The researchers analyzed water and electric usage separately: electric consumption was examined in terms of pesos and kilowatt-hours, while water consumption was analyzed in pesos and cubic meters. The data, gathered from the university, are limited to monthly records from January 2009 to July 2015. The aim is to forecast the water and electric usage of the university for the years 2016 and 2017. The researchers conducted two main statistical treatments to formulate mathematical models that can estimate the water and electric consumption of the university. Using the Seasonal Autoregressive Integrated Moving Average (SARIMA) method, electric usage was forecast in pesos and kilowatt-hours, and water usage in pesos and cubic meters. Moreover, the predicted consumption values are compared to the actual values using a paired t-test to examine whether there is a significant difference. Accurately forecasting water and electric consumption would help manage the budget allotted to PUP - Sta. Mesa for the next two years.

  16. Crimean-Congo hemorrhagic fever and its relationship with climate factors in southeast Iran: a 13-year experience.

    PubMed

    Ansari, Hossein; Shahbaz, Babak; Izadi, Shahrokh; Zeinali, Mohammad; Tabatabaee, Seyyed Mehdi; Mahmoodi, Mahmood; Holakouie Naieni, Kourosh; Mansournia, Mohammad Ali

    2014-06-11

Crimean-Congo hemorrhagic fever (CCHF) is endemic in southeast Iran. In this study we present the epidemiological features of CCHF and its relationship with climate factors over a 13-year span. Surveillance system data on CCHF from 2000 to 2012 were obtained from the Province Health Centre of Zahedan University of Medical Sciences in southeast Iran. The climate data were obtained from the climate organization. The seasonal autoregressive integrated moving average (SARIMA) model was used for time series analysis to produce a model as applicable as possible for predicting variations in the occurrence of the disease. Between 2000 and 2012, 647 confirmed CCHF cases were reported from Sistan-va-Baluchistan province. The total case fatality rate was about 10.0%. Climate variables including mean temperature (°C), accumulated rainfall (mm), and maximum relative humidity (%) were significantly correlated with the monthly incidence of CCHF (p <0.05). There was no clear pattern of decline in the reported number of cases within the study's time span. The first spike in the number of CCHF cases in Iran occurred after the first surge of the disease in Pakistan. This study shows the potential of climate indicators as predictive factors in modeling the occurrence of CCHF, although whether a practically applicable model is needed remains to be assessed. Other factors, such as entomological indicators and virological findings, must also be considered.

  17. Modeling malaria control intervention effect in KwaZulu-Natal, South Africa using intervention time series analysis.

    PubMed

    Ebhuoma, Osadolor; Gebreslasie, Michael; Magubane, Lethumusa

    The change in malaria control intervention policy in South Africa (SA), the re-introduction of dichlorodiphenyltrichloroethane (DDT), may be responsible for the low and sustained malaria transmission in KwaZulu-Natal (KZN). We evaluated the effect of the re-introduction of DDT on malaria in KZN and suggest practical ways the province can strengthen its existing malaria control and elimination efforts to achieve zero malaria transmission. We obtained confirmed monthly malaria cases in KZN from the KZN malaria control program from 1998 to 2014. Seasonal autoregressive integrated moving average (SARIMA) intervention time series analysis (ITSA) was employed to model the effect of the re-introduction of DDT on confirmed monthly malaria cases. The result is an abrupt and permanent decline in monthly malaria cases (w0 = -1174.781, p-value = 0.003) following the implementation of the intervention policy. The sustained low malaria case count observed over a long period suggests that the continued usage of DDT did not result in insecticide resistance, as earlier anticipated. It may be due to exophagic malaria vectors, which render indoor residual spraying not fully effective. Therefore, reducing malaria transmission to zero in KZN requires other reliable and complementary intervention resources to optimize the existing ones. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Time lag between immigration and tuberculosis rates in immigrants in the Netherlands: a time-series analysis.

    PubMed

    van Aart, C; Boshuizen, H; Dekkers, A; Korthals Altes, H

    2017-05-01

    In low-incidence countries, most tuberculosis (TB) cases are foreign-born. We explored the temporal relationship between immigration and TB in first-generation immigrants between 1995 and 2012 to assess whether immigration can be a predictor of TB in immigrants from high-incidence countries. We obtained monthly data on immigrant TB cases and immigration for the three countries of origin most frequently represented among TB cases in the Netherlands: Morocco, Somalia and Turkey. The seasonal autoregressive integrated moving average (SARIMA) model best fitting the immigration time series was used to prewhiten the TB time series. The cross-correlation function (CCF) was then computed on the residual time series to detect time lags between immigration and TB rates. We identified a 17-month lag between Somali immigration and Somali immigrant TB cases, but no time lag for immigrants from Morocco and Turkey. The absence of a lag in the Moroccan and Turkish populations may be attributed to the relatively low TB prevalence in the countries of origin and an increased likelihood of reactivation TB in an ageing immigrant population. Understanding the time lag between Somali immigration and TB disease would benefit from a closer epidemiological analysis of cohorts of Somali cases diagnosed within the first years after entry.

  19. Temporal patterns of human and canine Giardia infection in the United States: 2003-2009.

    PubMed

    Mohamed, Ahmed S; Levine, Michael; Camp, Joseph W; Lund, Elisabeth; Yoder, Jonathan S; Glickman, Larry T; Moore, George E

    2014-02-01

    Giardia protozoa have been suspected to be of zoonotic transmission, including transmission from companion animals such as pet dogs to humans. Patterns of infection have been previously described for dogs and humans, but such investigations have used different time periods and locations for these two species. Our objective was to describe and compare the overall trend and seasonality of Giardia species infection among dogs and humans in the United States from 2003 through 2009 in an ecological study using public health surveillance data and medical records of pet dogs visiting a large nationwide private veterinary hospital. Canine data were obtained from all dogs visiting Banfield hospitals in the United States with fecal test results for Giardia species, from January 2003 through December 2009. Incidence data of human cases from the same time period were obtained from the CDC. Descriptive time plots, a seasonal trend decomposition (STL) procedure, and seasonal autoregressive moving-average (SARIMA) models were used to assess the temporal characteristics of Giardia infection in the two species. Canine incidence showed a gradual decline from 2003 to 2009 with no significant/distinct regular seasonal component. By contrast, human incidence showed a stable annual rate with a significant regular seasonal cycle, peaking in August and September. Different temporal patterns in human and canine Giardia cases observed in this study suggest that the epidemiological disease processes underlying both series might be different, and Giardia transmission between humans and their companion dogs seems uncommon. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. An Analysis on the Unemployment Rate in the Philippines: A Time Series Data Approach

    NASA Astrophysics Data System (ADS)

    Urrutia, J. D.; Tampis, R. L.; E Atienza, JB

    2017-03-01

    This study aims to formulate a mathematical model for forecasting and estimating the unemployment rate in the Philippines. Factors that can predict unemployment are also determined among the considered variables, namely Labor Force Rate, Population, Inflation Rate, Gross Domestic Product, and Gross National Income. Granger-causal relationships and cointegration among the dependent and independent variables are examined using the pairwise Granger-causality test and the Johansen cointegration test. The data used were acquired from the Philippine Statistics Authority, the National Statistics Office, and Bangko Sentral ng Pilipinas. Following the Box-Jenkins method, the formulated model for forecasting the unemployment rate is SARIMA (6, 1, 5) × (0, 1, 1)4 with a coefficient of determination of 0.79. The actual values are 99 percent identical to the model's fitted values and 72 percent close to the forecasted values. According to the results of the regression analysis, Labor Force Rate and Population are significant factors of the unemployment rate. Among the independent variables, Population, GDP, and GNI were found to have a Granger-causal relationship with unemployment. It is also found that there are at least four cointegrating relations between the dependent and independent variables.

  1. Handgun Acquisitions in California After Two Mass Shootings.

    PubMed

    Studdert, David M; Zhang, Yifan; Rodden, Jonathan A; Hyndman, Rob J; Wintemute, Garen J

    2017-05-16

    Mass shootings are common in the United States and are the most visible form of firearm violence. Their effect on personal decisions to purchase firearms is not well understood. The objective was to determine changes in handgun acquisition patterns after the mass shootings in Newtown, Connecticut, in 2012 and San Bernardino, California, in 2015. Design: time-series analysis using seasonal autoregressive integrated moving-average (SARIMA) models. Setting: California. Participants: adults who acquired handguns between 2007 and 2016. Measurements: excess handgun acquisitions (defined as the difference between actual and expected acquisitions) in the 6-week and 12-week periods after each shooting, overall and within subgroups of acquirers. In the 6 weeks after the Newtown and San Bernardino shootings, there were 25 705 (95% prediction interval, 17 411 to 32 788) and 27 413 (prediction interval, 15 188 to 37 734) excess acquisitions, respectively, representing increases of 53% (95% CI, 30% to 80%) and 41% (CI, 19% to 68%) over expected volume. Large increases in acquisitions occurred among white and Hispanic persons, but not among black persons, and among persons with no record of having previously acquired a handgun. After the San Bernardino shootings, acquisition rates increased by 85% among residents of that city and adjacent neighborhoods, compared with 35% elsewhere in California. Limitations: the data relate to handguns in 1 state, and the statistical analysis cannot establish causality. In conclusion, large increases in handgun acquisitions occurred after these 2 mass shootings. The spikes were short-lived and accounted for less than 10% of annual handgun acquisitions statewide. Further research should examine whether repeated shocks of this kind lead to substantial increases in the prevalence of firearm ownership. Funding: none.

  2. An Acoustic Levitation Technique for the Study of Nonlinear Oscillations of Gas Bubbles in Liquids.

    DTIC Science & Technology

    1983-08-15

  3. Forecasting international tourism demand from the US, Japan and South Korea to Malaysia: A SARIMA approach

    NASA Astrophysics Data System (ADS)

    Borhan, Nurbaizura; Arsad, Zainudin

    2014-07-01

    One of the major contributing sectors to Malaysia's economic growth is tourism. The number of international tourist arrivals to Malaysia has shown an upward trend as a result of several programs and promotions introduced by the Malaysian government to attract international tourists to the country. This study attempts to model and forecast tourism demand for Malaysia from three selected countries: the US, Japan and South Korea. It utilizes monthly time series data for the period from January 1999 to December 2012 and employs the well-known Box-Jenkins seasonal ARIMA modeling procedures. Not surprisingly, the results show that the number of tourist arrivals from the three countries contains a strong seasonal component, as arrivals depend strongly on the season in the country of origin. The findings also show that the number of tourist arrivals from the US and South Korea will continue to increase in the near future. Meanwhile, arrivals from Japan are forecast to drop in the near future, so tourism authorities in Malaysia need to enhance promotional efforts to attract more tourists from Japan to visit Malaysia.

  4. Hospital daily outpatient visits forecasting using a combinatorial model based on ARIMA and SES models.

    PubMed

    Luo, Li; Luo, Le; Zhang, Xinli; He, Xiaoli

    2017-07-10

    Accurate forecasting of hospital outpatient visits is beneficial for the reasonable planning and allocation of healthcare resources to meet medical demands. Given the multiple attributes of daily outpatient visits, such as randomness, cyclicity and trend, time series methods such as ARIMA can be a good choice for forecasting outpatient visits. On the other hand, hospital outpatient visits are also affected by doctors' scheduling, and these effects are not purely random. To account for this, this paper presents a new forecasting model that takes cyclicity and the day-of-the-week effect into consideration. We formulate a seasonal ARIMA (SARIMA) model on the daily time series and a single exponential smoothing (SES) model on the day-of-the-week time series, and finally establish a combinatorial model by modifying them. The models are applied to 1 year of daily visit data for urban outpatients in two internal medicine departments of a large hospital in Chengdu, to forecast daily outpatient visits about 1 week ahead. The proposed model is applied to forecast 7 consecutive days of daily outpatient visits over an 8-week period, based on 43 weeks of observation data during 1 year. The results show that the two single traditional models and the combinatorial model are simple to implement and computationally light, while being appropriate for short-term forecast horizons. Furthermore, the combinatorial model captures the comprehensive features of the time series data better. The combinatorial model achieves better prediction performance than either single model, with lower residual variance and a smaller mean residual error, though it still needs further optimization in future research.

  5. A Time Series Model for Assessing the Trend and Forecasting the Road Traffic Accident Mortality

    PubMed Central

    Yousefzadeh-Chabok, Shahrokh; Ranjbar-Taklimie, Fatemeh; Malekpouri, Reza; Razzaghi, Alireza

    2016-01-01

    Background: Road traffic accident (RTA) is one of the main causes of trauma and a growing public health concern worldwide, especially in developing countries. Assessing the trend of fatalities in past years and forecasting it enables appropriate planning for prevention and control. Objectives: This study aimed to assess the trend of RTAs and forecast it for the coming years using time series modeling. Materials and Methods: In this historical analytical study, RTA mortalities in Zanjan Province, Iran, were evaluated during 2007 - 2013. Time series analyses including Box-Jenkins models were used to assess the trend of accident fatalities in previous years and forecast it for the next 4 years. Results: The mean age of the victims was 37.22 years (SD = 20.01). Of a total of 2571 deaths, 77.5% (n = 1992) were males and 22.5% (n = 579) were females. The study models showed a descending trend of fatalities over the study years. The SARIMA (1, 1, 3)(0, 1, 0)12 model was recognized as the best-fit model for forecasting the trend of fatalities. The forecasting model also showed a descending trend of traffic accident mortalities over the next 4 years. Conclusions: There was a decreasing trend in both the study years and the forecast years. It seems that interventions implemented in the recent decade have had a positive effect on the decline of RTA fatalities. Nevertheless, more attention is still needed to prevent the occurrence of, and the mortalities related to, traffic accidents. PMID:27800467

  7. Modeling seasonal variation of hip fracture in Montreal, Canada.

    PubMed

    Modarres, Reza; Ouarda, Taha B M J; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre

    2012-04-01

    Investigating the association of climate variables with hip fracture incidence is important for public health. This study examined and modeled the seasonal variation of monthly population-based hip fracture rate (HFr) time series. The seasonal ARIMA time series modeling approach was used to model monthly HFr time series for female and male patients aged 40-74 and 75+ in Montreal, Québec province, Canada, over the period 1993-2004. The correlation coefficients between meteorological variables, such as temperature, snow depth, rainfall depth and day length, and HFr are significant. The nonparametric Mann-Kendall test was used for trend assessment, and the nonparametric Levene's and Wilcoxon's tests for checking the difference in HFr before and after a change point. The seasonality in HFr indicated a sharp difference between winter and summer. The trend assessment showed decreasing trends in HFr for the female and male groups, and the nonparametric tests indicated a significant change in mean HFr. A seasonal ARIMA model was applied to HFr time series without trend, and a time trend ARIMA model (TT-ARIMA) was developed and fitted to HFr time series with a significant trend. The multi-criteria evaluation showed the adequacy of the SARIMA and TT-ARIMA models for modeling seasonal hip fracture time series with and without significant trend. In the time series analysis of HFr in the Montreal region, the effects of the seasonal variation of climate variables on hip fracture are clear. The seasonal ARIMA model is useful for modeling HFr time series without trend; for time series with a significant trend, the TT-ARIMA model should be applied. Copyright © 2011 Elsevier Inc. All rights reserved.
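
    The nonparametric Mann-Kendall trend test mentioned above can be implemented from its standard definition (the S statistic with a normal approximation, ignoring tie corrections), as in this illustrative sketch on a synthetic declining series:

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Return (S, z, two-sided p) for the Mann-Kendall trend test (no ties)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance without tie correction
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return s, z, p

rng = np.random.default_rng(6)
declining = 10 - 0.05 * np.arange(120) + rng.normal(0, 0.5, 120)
s, z, p = mann_kendall(declining)
print(round(z, 1), p < 0.05)  # strongly negative z: a significant decreasing trend
```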

  8. Analysis of significant factors for dengue fever incidence prediction.

    PubMed

    Siriyasatien, Padet; Phumee, Atchara; Ongruk, Phatsavee; Jampachaisri, Katechan; Kesorn, Kraisak

    2016-04-16

    Many popular dengue forecasting techniques have been used by several researchers to extrapolate dengue incidence rates, including the K-H model, support vector machines (SVM), and artificial neural networks (ANN). The time series analysis methodology, particularly ARIMA and SARIMA, has been increasingly applied to the field of epidemiological research for dengue fever, dengue hemorrhagic fever, and other infectious diseases. The main drawback of these methods is that they do not consider other variables that are associated with the dependent variable. Additionally, new factors correlated to the disease are needed to enhance the prediction accuracy of the model when it is applied to areas of similar climates, where weather factors such as temperature, total rainfall, and humidity are not substantially different. Such drawbacks may consequently lower the predictive power for the outbreak. The predictive power of the forecasting model-assessed by Akaike's information criterion (AIC), Bayesian information criterion (BIC), and the mean absolute percentage error (MAPE)-is improved by including the new parameters for dengue outbreak prediction. This study's selected model outperforms all three other competing models with the lowest AIC, the lowest BIC, and a small MAPE value. The exclusive use of climate factors from similar locations decreases a model's prediction power. The multivariate Poisson regression, however, effectively forecasts even when climate variables are slightly different. Female mosquitoes and seasons were strongly correlated with dengue cases. Therefore, the dengue incidence trends provided by this model will assist the optimization of dengue prevention. The present work demonstrates the important roles of female mosquito infection rates from the previous season and climate factors (represented as seasons) in dengue outbreaks. 
Incorporating these two factors in the model significantly improves the predictive power of dengue hemorrhagic fever forecasting models, as confirmed by AIC, BIC, and MAPE.

  9. Scaling properties and symmetrical patterns in the epidemiology of rotavirus infection.

    PubMed Central

    José, Marco V; Bishop, Ruth F

    2003-01-01

    The rich epidemiological database of the incidence of rotavirus, as a cause of severe diarrhoea in young children, coupled with knowledge of the natural history of the infection, can make this virus a paradigm for studies of epidemic dynamics. The cyclic recurrence of childhood rotavirus epidemics in unvaccinated populations provides one of the best documented phenomena in population dynamics. This paper makes use of epidemiological data on rotavirus infection in young children admitted to hospital in Melbourne, Australia from 1977 to 2000. Several mathematical methods were used to characterize the overall dynamics of rotavirus infections as a whole and individually as serotypes G1, G2, G3, G4 and G9. These mathematical methods are as follows: seasonal autoregressive integrated moving-average (SARIMA) models, power spectral density (PSD), higher-order spectral analysis (HOSA) (bispectrum estimation and quadratic phase coupling (QPC)), detrended fluctuation analysis (DFA), wavelet analysis (WA) and a surrogate data analysis technique. Each of these techniques revealed different dynamic aspects of rotavirus epidemiology. In particular, we confirm the existence of an annual, biannual and a quinquennial period, but additionally we found other embedded cycles (e.g. ca. 3 years). There seems to be an overall unique geometric and dynamic structure of the data despite the apparent changes in the dynamics in recent years. The inherent dynamics seems to be conserved regardless of the emergence of new serotypes, the re-emergence of old serotypes or the transient disappearance of a particular serotype. More importantly, the dynamics of all serotypes are mutually synchronized so that they behave as a single entity at the epidemic level. Overall, the whole dynamics follow a scale-free power-law fractal scaling behaviour. 
We found that there are three different scaling regions in the time-series, suggesting that processes influencing the epidemic dynamics of rotavirus over less than 12 months differ from those that operate between 1 and ca. 3 years, as well as those between 3 and ca. 5 years. To discard the possibility that the observed patterns could be due to artefacts, we applied a surrogate data analysis technique which enabled us to discern if only random components or linear features of the incidence of rotavirus contribute to its dynamics. The global dynamics of the epidemic is portrayed by wavelet-based incidence analysis. The resulting wavelet transform of the incidence of rotavirus crisply reveals a repeating pattern over time that looks similar on many scales (a property called self-similarity). Both the self-similar behaviour and the absence of a single characteristic scale of the power-law fractal-like scaling of the incidence of rotavirus infection imply that there is not a universal inherently more virulent serotype to which severe gastroenteritis can uniquely be ascribed. PMID:14561323
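
    One spectral step from the battery above, a power spectral density (PSD) estimate via the periodogram, can be sketched on a synthetic monthly series with an annual cycle:

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(10)
n = 288  # 24 years of monthly data (synthetic)
y = 100 + 40 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 5, n)

# fs=12 samples per year puts frequencies in cycles per year
freqs, psd = periodogram(y, fs=12)
peak_freq = freqs[np.argmax(psd[1:]) + 1]  # skip the zero-frequency term
print(peak_freq)  # the annual cycle: 1 cycle per year
```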

  10. Influence of climate variability on anchovy reproductive timing off northern Chile

    NASA Astrophysics Data System (ADS)

    Contreras-Reyes, Javier E.; Canales, T. Mariella; Rojas, Pablo M.

    2016-12-01

    We investigated the relationship between environmental variables and the Gonadosomatic Monthly Mean (GMM) index of anchovy (Engraulis ringens) to understand how the environment affects the dynamics of anchovy reproductive timing. The data examined corresponds to biological information collected from samples of the landings off northern Chile (18°21‧S, 24°00‧S) during the period 1990-2010. We used the Humboldt Current Index (HCI) and the Multivariate ENSO Index (MEI), which combine several physical-oceanographic factors in the Tropical and South Pacific regions. Using the GMM index, we studied the dynamics of anchovy reproductive timing at different intervals of length, specifically females with a length between 11.5 and 14 cm (medium class) and longer than 14 cm (large class). Seasonal Autoregressive Integrated Mobile Average (SARIMA) was used to predict missing observations. The trends of the environment and reproductive indexes were explored via the Breaks For Additive Season and Trend (BFAST) statistical technique and the relationship between these indexes via cross-correlation functions (CCF) analysis. Our results showed that the habitat of anchovy switched from cool to warm condition, which also influenced gonad development. This was revealed by two and three significant changes (breaks) in the trend of the HCI and MEI indexes, and two significant breaks in the GMM of each time series of anchovy females (medium and large). Negative cross-correlation between the MEI index and GMM of medium and large class females was found, indicating that as the environment gets warmer (positive value of MEI) a decrease in the reproductive activity of anchovy can be expected. Correlation between the MEI index and larger females was stronger than with medium females. Additionally, our results indicate that the GMM index of anchovy for both length classes reaches two maximums per year; the first from August to September and the second from December to January. 
The intensity (maximum GMM values at rise point) of reproductive activity was not equal though, with the August-September peak being the highest. We also discuss how the synchronicity between environment and reproductive timing, the negative correlation found between MEI and GMM indexes, and the two increases per year of anchovy GMM relate to previous studies. Based on these findings we propose ways to advance in the understanding of how anchovy synchronize gonad development with the environment.

  11. Waif goodbye! Average-size female models promote positive body image and appeal to consumers.

    PubMed

    Diedrichs, Phillippa C; Lee, Christina

    2011-10-01

    Despite consensus that exposure to media images of thin fashion models is associated with poor body image and disordered eating behaviours, few attempts have been made to enact change in the media. This study sought to investigate an effective alternative to current media imagery, by exploring the advertising effectiveness of average-size female fashion models, and their impact on the body image of both women and men. A sample of 171 women and 120 men were assigned to one of three advertisement conditions: no models, thin models and average-size models. Women and men rated average-size models as equally effective in advertisements as thin and no models. For women with average and high levels of internalisation of cultural beauty ideals, exposure to average-size female models was associated with a significantly more positive body image state in comparison to exposure to thin models and no models. For men reporting high levels of internalisation, exposure to average-size models was also associated with a more positive body image state in comparison to viewing thin models. These findings suggest that average-size female models can promote positive body image and appeal to consumers.

  12. GI Joe or Average Joe? The impact of average-size and muscular male fashion models on men's and women's body image and advertisement effectiveness.

    PubMed

    Diedrichs, Phillippa C; Lee, Christina

    2010-06-01

    Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.

  13. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
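
    A common criterion-based weighting scheme consistent with the approaches above turns information-criterion values into normalized model weights (Akaike weights); the IC values below are invented for illustration.

```python
import numpy as np

def ic_weights(ic_values):
    """Model weights w_k proportional to exp(-delta_k / 2), summing to 1."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()       # differences from the best (lowest) IC
    w = np.exp(-0.5 * delta)
    return w / w.sum()

weights = ic_weights([230.1, 232.4, 228.7, 240.0])
print(weights.round(3))  # the lowest-IC model gets the largest weight
```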

  14. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and simulation study, we show that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds, marginally, when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
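
    As a stand-in for the paper's specific forecast-weight-averaging procedure, the general idea of combining forecasts by weights can be sketched with a common scheme, inverse-MSE weighting; the forecasters and data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
actual = rng.normal(100, 5, 50)
f1 = actual + rng.normal(0, 1, 50)  # an accurate forecaster
f2 = actual + rng.normal(0, 4, 50)  # a noisier forecaster

mse = np.array([np.mean((actual - f) ** 2) for f in (f1, f2)])
w = (1 / mse) / (1 / mse).sum()     # inverse-MSE combination weights
combined = w[0] * f1 + w[1] * f2

print(w.round(2))  # most of the weight goes to the accurate forecaster
```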

  15. Ranking prediction model using the competition record of Ladies Professional Golf Association players.

    PubMed

    Chae, Jin Seok; Park, Jin; So, Wi-Young

    2017-07-28

    The purpose of this study was to suggest a ranking prediction model using the competition record of Ladies Professional Golf Association (LPGA) players. The top 100 players on the tour money list from the 2013-2016 US Open were analyzed in this model. Stepwise regression analysis was conducted to examine the effect of performance and independent variables (i.e., driving accuracy, green in regulation, putts per round, driving distance, percentage of sand saves, par-3 average, par-4 average, par-5 average, birdies average, and eagle average) on dependent variables (i.e., scoring average, official money, top-10 finishes, winning percentage, and 60-strokes average). The following prediction models were suggested:

    Y (Scoring average) = 55.871 - 0.947 (Birdies average) + 4.576 (Par-4 average) - 0.028 (Green in regulation) - 0.012 (Percentage of sand saves) + 2.088 (Par-3 average) - 0.026 (Driving accuracy) - 0.017 (Driving distance) + 0.085 (Putts per round)

    Y (Official money) = 6628736.723 + 528557.907 (Birdies average) - 1831800.821 (Par-4 average) + 11681.739 (Green in regulation) + 6476.344 (Percentage of sand saves) - 688115.074 (Par-3 average) + 7375.971 (Driving accuracy)

    Y (Top-10 finish%) = 204.462 + 12.562 (Birdies average) - 47.745 (Par-4 average) + 1.633 (Green in regulation) - 5.151 (Putts per round) + 0.132 (Percentage of sand saves)

    Y (Winning percentage) = 49.949 + 3.191 (Birdies average) - 15.023 (Par-4 average) + 0.043 (Percentage of sand saves)

    Y (60-strokes average) = 217.649 + 13.978 (Birdies average) - 44.855 (Par-4 average) - 22.433 (Par-3 average) + 0.16 (Green in regulation)

    Applying these five prediction models to the ranking of the 2016 Women's Olympic golf competition in Rio revealed a significant correlation between the predicted and real rankings (r = 0.689, p < 0.001) and between the predicted and real average scores (r = 0.653, p < 0.001). Our ranking prediction model using LPGA data may help coaches and players to identify which players are likely to participate in Olympic and World competitions, based on their performance.
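The first of the reported equations can be evaluated directly. The coefficients below are taken from the abstract's scoring-average model; the player statistics are hypothetical:

```python
def predict_scoring_average(birdies, par4, gir, sand_saves, par3,
                            drive_acc, drive_dist, putts):
    """Scoring-average prediction model with the coefficients reported
    in the abstract (stepwise regression on 2013-2016 LPGA data)."""
    return (55.871 - 0.947 * birdies + 4.576 * par4 - 0.028 * gir
            - 0.012 * sand_saves + 2.088 * par3 - 0.026 * drive_acc
            - 0.017 * drive_dist + 0.085 * putts)

# Hypothetical season statistics for one player
score = predict_scoring_average(birdies=3.5, par4=4.0, gir=70.0,
                                sand_saves=40.0, par3=3.0,
                                drive_acc=75.0, drive_dist=250.0, putts=29.0)
```

With these inputs the model returns a scoring average near 71 strokes, a plausible tour value, which is a quick sanity check on the signs and magnitudes of the coefficients.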

  16. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  17. Application of Bayesian model averaging to measurements of the primordial power spectrum

    NASA Astrophysics Data System (ADS)

    Parkinson, David; Liddle, Andrew R.

    2010-11-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940

  18. Light propagation in the averaged universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagheri, Samae; Schwarz, Dominik J., E-mail: s_bagheri@physik.uni-bielefeld.de, E-mail: dschwarz@physik.uni-bielefeld.de

    Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.

  19. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence on the grid scale L of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") is also explored.

  20. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors to estimate the negative log-likelihood function common to all the model selection criteria. The problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of the model selection criteria and model averaging weights. While this method was limited in this study to serial data using time series techniques, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved the predictive performance of the individual models and of model averaging in both the synthetic and experimental studies.
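A minimal sketch of the likelihood evaluation that underlies this comparison, assuming Gaussian errors and using an AR(1) covariance as a stand-in for the correlated total-error covariance (all sizes and parameter values here are hypothetical, not those of the study):

```python
import numpy as np

def gaussian_nll(residuals, cov):
    """Negative log-likelihood of calibration residuals under N(0, cov),
    the quantity entering AIC/BIC/KIC-style model selection criteria."""
    n = residuals.size
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "covariance must be positive definite"
    quad = residuals @ np.linalg.solve(cov, residuals)
    return 0.5 * (n * np.log(2 * np.pi) + logdet + quad)

rng = np.random.default_rng(0)
n, rho, sigma = 50, 0.8, 1.0
# AR(1)-correlated "total errors": a rough stand-in for model + measurement error
t = np.arange(n)
cov_ar1 = sigma**2 * rho ** np.abs(t[:, None] - t[None, :])
resid = np.linalg.cholesky(cov_ar1) @ rng.standard_normal(n)

nll_diag = gaussian_nll(resid, sigma**2 * np.eye(n))  # measurement errors only (CE)
nll_ar1 = gaussian_nll(resid, cov_ar1)                # correlated total errors (Cek)
```

For temporally correlated residuals the correlated covariance yields a lower negative log-likelihood, which in turn changes the criterion values and hence the averaging weights, consistent with the effect described in the abstract.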

  1. Average inactivity time model, associated orderings and reliability properties

    NASA Astrophysics Data System (ADS)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  2. SPARSE—A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure

    PubMed Central

    Davis, Sean L.; Sen, Oishik; Udaykumar, H. S.

    2017-01-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian–Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles. PMID:28413341

  3. SPARSE-A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure.

    PubMed

    Davis, Sean L; Jacobs, Gustaaf B; Sen, Oishik; Udaykumar, H S

    2017-03-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian-Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles.

  4. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait.

    PubMed

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2016-06-14

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%) some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Robust Semi-Active Ride Control under Stochastic Excitation

    DTIC Science & Technology

    2014-01-01

    ...broad classes of time-series models which are of practical importance: the Auto-Regressive (AR) models, the Integrated (I) models, and the Moving Average (MA) models [12]. Combinations of these models result in autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models...

  6. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. To address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B and C (GRA, GRB and GRC) and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighting methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighting method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighting methods perform better than the individual members. Model averaging with these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models, and none of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan averaging variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
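Granger-Ramanathan variant A can be sketched as an unconstrained least-squares regression (no intercept) of observed flows on the member simulations; the data below are synthetic, not from the study:

```python
import numpy as np

def granger_ramanathan_a(sims, obs):
    """GRA (variant A): unconstrained least-squares weights obtained by
    regressing observed flows on the member simulations without intercept."""
    w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
    return w

rng = np.random.default_rng(42)
m1 = rng.random(200)                 # simulated flows, member 1
m2 = rng.random(200)                 # simulated flows, member 2
obs = 0.3 * m1 + 0.7 * m2            # synthetic "observed" flows
sims = np.column_stack([m1, m2])
w = granger_ramanathan_a(sims, obs)
```

Because the weights are unconstrained, they need not sum to one or stay non-negative; the other variants differ in how such constraints or an intercept are handled.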

  7. Accounting for uncertainty in health economic decision models by using model averaging.

    PubMed

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-04-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.

  8. A Lagrangian dynamic subgrid-scale model of turbulence

    NASA Technical Reports Server (NTRS)

    Meneveau, C.; Lund, T. S.; Cabot, W.

    1994-01-01

    A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.

  9. 40 CFR 600.512-08 - Model year report.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon-Related Exhaust Emissions § 600.512-08 Model year... average fuel economy. The results of the manufacturer calculations and summary information of model type...

  10. 40 CFR 600.512-08 - Model year report.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon-Related Exhaust Emissions § 600.512-08 Model year... average fuel economy. The results of the manufacturer calculations and summary information of model type...

  11. Improved simulation of group averaged CO2 surface concentrations using GEOS-Chem and fluxes from VEGAS

    NASA Astrophysics Data System (ADS)

    Chen, Z. H.; Zhu, J.; Zeng, N.

    2013-01-01

    CO2 measurements have been combined with simulated CO2 distributions from a transport model in order to produce optimal estimates of CO2 surface fluxes in inverse modeling. However, one persistent problem in using model-observation comparisons for this goal is compatibility: observations at a single site reflect underlying processes on various scales that usually cannot be fully resolved by model simulations at the grid points nearest the site, owing to limited spatial or temporal resolution or to processes missing from the models. In this article we group site observations from multiple stations according to atmospheric mixing regimes and surface characteristics. The group-averaged values of CO2 concentration from model simulations and observations are used to evaluate the regional model results. Using group-averaged measurements of CO2 reduces the noise of individual stations, and the difference between observed and modelled group-averaged values reflects the uncertainty of the large-scale flux in the region containing the grouped stations. We compared group-averaged values from model runs driven by two biospheric fluxes, from the Carnegie-Ames-Stanford-Approach (CASA) model and from VEgetation-Global-Atmosphere-Soil (VEGAS), against observations. Results show that the group-averaged CO2 concentrations modelled with fluxes from VEGAS improve significantly in most regions. Large differences between the two model results and the observations remain for the group-averaged values in the North Atlantic, the Indian Ocean, and the South Pacific Tropics, implying possibly large uncertainties in the fluxes there.

  12. Appropriateness of selecting different averaging times for modelling chronic and acute exposure to environmental odours

    NASA Astrophysics Data System (ADS)

    Drew, G. H.; Smith, R.; Gerard, V.; Burge, C.; Lowe, M.; Kinnersley, R.; Sneath, R.; Longhurst, P. J.

    Odour emissions are episodic, characterised by periods of high emission rates interspersed with periods of low emissions. It is frequently the short-term, high-concentration peaks that cause annoyance in the surrounding population. Dispersion modelling is accepted as a useful tool for odour impact assessment, and two approaches can be adopted. The first, modelling the hourly average concentration, can underestimate peak odour concentrations and hence the annoyance and complaints they cause. The second approach uses short averaging times. This study assesses the appropriateness of using different averaging times to model the dispersion of odour from a landfill site. We also examine perception of odour in the community in conjunction with the modelled odour dispersal, using community monitors to record incidents of odour. The results show that with shorter averaging times, the modelled pattern of dispersal reflects the pattern of observed odour incidents recorded in the community monitoring database, with the modelled odour dispersing further in a north-easterly direction. The current regulatory method of dispersion modelling, using hourly averaging times, is thus less successful at capturing peak concentrations and does not capture the pattern of odour emission indicated by the community monitoring database. Short averaging times are therefore of greater value in predicting the likely nuisance impact of an odour source and in framing appropriate regulatory controls.

  13. Modeling Seasonal Influenza Transmission and Its Association with Climate Factors in Thailand Using Time-Series and ARIMAX Analyses.

    PubMed

    Chadsuthi, Sudarat; Iamsirithaworn, Sopon; Triampo, Wannapong; Modchang, Charin

    2015-01-01

    Influenza is a worldwide respiratory infectious disease that easily spreads from one person to another. Previous research has found that the influenza transmission process is often associated with climate variables. In this study, we used autocorrelation and partial autocorrelation plots to determine the appropriate autoregressive integrated moving average (ARIMA) model for influenza transmission in the central and southern regions of Thailand. The relationships between reported influenza cases and the climate data, such as the amount of rainfall, average temperature, average maximum relative humidity, average minimum relative humidity, and average relative humidity, were evaluated using cross-correlation function. Based on the available data of suspected influenza cases and climate variables, the most appropriate ARIMA(X) model for each region was obtained. We found that the average temperature correlated with influenza cases in both central and southern regions, but average minimum relative humidity played an important role only in the southern region. The ARIMAX model that includes the average temperature with a 4-month lag and the minimum relative humidity with a 2-month lag is the appropriate model for the central region, whereas including the minimum relative humidity with a 4-month lag results in the best model for the southern region.
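The lag selection via the cross-correlation function can be sketched as follows, using synthetic monthly series in place of the Thai surveillance and climate data (the 4-month lag here is built into the toy data, mirroring the temperature lag reported for the central region):

```python
import numpy as np

def best_lag(x, y, max_lag=12):
    """Return the lag k (in steps) at which x, shifted back by k steps,
    correlates most strongly with y, via the sample cross-correlation."""
    ccf = {}
    for k in range(max_lag + 1):
        xk = x[:len(x) - k] if k else x
        yk = y[k:]
        ccf[k] = np.corrcoef(xk, yk)[0, 1]
    return max(ccf, key=lambda k: abs(ccf[k])), ccf

rng = np.random.default_rng(1)
temp = rng.standard_normal(120)       # 10 years of monthly "temperature" anomalies
cases = np.roll(temp, 4)              # toy assumption: cases follow temp by 4 months
cases[:4] = rng.standard_normal(4)    # replace the wrapped-around start
lag, ccf = best_lag(temp, cases)
```

In practice the lagged climate series selected this way would then enter the ARIMAX model as an exogenous regressor.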

  14. Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Averaged Navier-Stokes Formulations

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.

    2008-01-01

    Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f(sub k) is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f(sub k) varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulation of flow and turbulence quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.

  15. Scale Dependence of Statistics of Spatially Averaged Rain Rate Seen in TOGA COARE Comparison with Predictions from a Stochastic Model

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, T. L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    A characteristic feature of rainfall statistics is that they in general depend on the space and time scales over which rain data are averaged. As part of an earlier effort to determine the sampling error of satellite rain averages, a space-time model of rainfall statistics was developed to describe the statistics of gridded rain observed in GATE. The model allows one to compute the second-moment statistics of space- and time-averaged rain rate, which can be fitted to satellite or rain gauge data to determine the four model parameters appearing in the precipitation spectrum: an overall strength parameter, a characteristic length separating the long and short wavelength regimes, a characteristic relaxation time for decay of the autocorrelation of the instantaneous local rain rate, and a certain "fractal" power-law exponent. For area-averaged instantaneous rain rate, this exponent governs the power-law dependence of these statistics on the averaging length scale L predicted by the model in the limit of small L. In particular, the variance of rain rate averaged over an L × L area exhibits a power-law singularity as L → 0. In the present work the model is used to investigate how the statistics of area-averaged rain rate over the tropical western Pacific, measured with shipborne radar during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment) and gridded on a 2 km grid, depend on the size of the spatial averaging scale. Good agreement is found between the data and predictions from the model over a wide range of averaging length scales.
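The scale dependence of the variance of area-averaged rain rate can be illustrated on a toy gridded field. This sketch uses a spatially uncorrelated field, for which the variance of the L × L block average falls off as L to the power -2; real, spatially correlated rain decays more slowly, which is what the fitted "fractal" exponent captures:

```python
import numpy as np

def block_variance(field, L):
    """Variance of the field averaged over non-overlapping L x L blocks."""
    n = field.shape[0] // L
    blocks = field[:n * L, :n * L].reshape(n, L, n, L).mean(axis=(1, 3))
    return blocks.var()

rng = np.random.default_rng(7)
field = rng.exponential(1.0, size=(256, 256))   # toy uncorrelated "rain" field
scales = [2, 4, 8, 16]
variances = [block_variance(field, L) for L in scales]
# Log-log slope of variance vs averaging scale L
slope = np.polyfit(np.log(scales), np.log(variances), 1)[0]
```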

  16. Model Averaging for Predicting the Exposure to Aflatoxin B1 Using DNA Methylation in White Blood Cells of Infants

    NASA Astrophysics Data System (ADS)

    Rahardiantoro, S.; Sartono, B.; Kurnia, A.

    2017-03-01

    In recent years, DNA methylation has become a key means of revealing the patterns underlying many human diseases, and it typically produces huge amounts of data. Some researchers wish to make predictions from these data, especially using regression analysis, but the classical approach fails at this task. Model averaging by Ando and Li [1] is an alternative approach to this problem. This research applied model averaging to obtain the best prediction from high-dimensional data. As a case study, following Vargas et al [3], model averaging was applied to data on exposure to aflatoxin B1 (AFB1) and DNA methylation in white blood cells of infants in The Gambia. The best ensemble model was selected based on the minimum MAPE, MAE, and MSE of the predictions. The result is an ensemble model obtained by model averaging with 15 predictors in each candidate model.

  17. Accounting for uncertainty in health economic decision models by using model averaging

    PubMed Central

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-01-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment. PMID:19381329

  18. Standard-Cell, Open-Architecture Power Conversion Systems

    DTIC Science & Technology

    2005-10-01

    Excerpts from the report: TLmax (maximum junction temperature) 423 K; Table 5.9, "PEBB average model description in VTB" (terminals: A, Power DC Bus; B, Power AC Pole); contents include "A. Switching models" (p. 5), "B. Average ..." (p. 11-6), and "IV. Average Modeling of PEBB-Based Converters" (p. 11-10).

  19. An improved switching converter model. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode was done by averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of the measurement data taken by a conventional technique is affected by the conditions at which the data is collected.

  20. The origin of consistent protein structure refinement from structural averaging.

    PubMed

    Park, Hahnbeom; DiMaio, Frank; Baker, David

    2015-06-02

    Recent studies have shown that explicit solvent molecular dynamics (MD) simulation followed by structural averaging can consistently improve protein structure models. We find that improvement upon averaging is not limited to explicit water MD simulation, as consistent improvements are also observed for more efficient implicit solvent MD or Monte Carlo minimization simulations. To determine the origin of these improvements, we examine the changes in model accuracy brought about by averaging at the individual residue level. We find that the improvement in model quality from averaging results from the superposition of two effects: a dampening of deviations from the correct structure in the least well modeled regions, and a reinforcement of consistent movements towards the correct structure in better modeled regions. These observations are consistent with an energy landscape model in which the magnitude of the energy gradient toward the native structure decreases with increasing distance from the native state. Copyright © 2015 Elsevier Ltd. All rights reserved.
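Structural averaging of the kind studied here amounts, in its simplest form, to averaging aligned atomic coordinates across an ensemble of models. A toy sketch with invented coordinates; in real use the structures must first be superposed (aligned) onto a common frame:

```python
import numpy as np

def average_structure(snapshots):
    """Average atomic coordinates over an ensemble of aligned model
    snapshots; `snapshots` has shape (n_models, n_atoms, 3)."""
    return np.mean(np.asarray(snapshots, float), axis=0)

# Toy ensemble: two aligned two-atom "structures" (coordinates invented).
snaps = [
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
    [[0.2, 0.0, 0.0], [0.8, 0.0, 0.0]],
]
avg = average_structure(snaps)
print(avg[0])  # averaged position of atom 0
```

Averaging damps uncorrelated deviations while reinforcing consistent shifts, which is the mechanism the abstract describes.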

  1. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    PubMed

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.

  2. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    PubMed Central

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
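A hedged sketch of one DMA update step with a dynamic Occam's window, following the general recipe (forgetting, predictive-likelihood update, then pruning of low-weight models). The forgetting factor, window threshold, and numbers below are illustrative assumptions, not the paper's exact algorithm:

```python
import math

def dma_step(weights, log_pred_lik, alpha=0.99, window=0.001):
    """One Dynamic Model Averaging update with a dynamic Occam's window.
    weights: current model probabilities; log_pred_lik: each model's log
    predictive likelihood for the new observation; alpha: forgetting
    factor; window: keep models within `window` of the best weight."""
    # Forgetting step: flatten weights slightly toward uniformity.
    pred = [w ** alpha for w in weights]
    s = sum(pred)
    pred = [p / s for p in pred]
    # Update with each model's predictive likelihood.
    upd = [p * math.exp(l) for p, l in zip(pred, log_pred_lik)]
    s = sum(upd)
    upd = [u / s for u in upd]
    # Dynamic Occam's window: drop models far below the best one.
    cutoff = window * max(upd)
    kept = [u if u >= cutoff else 0.0 for u in upd]
    s = sum(kept)
    return [k / s for k in kept]

w = dma_step([0.5, 0.4999, 0.0001], [-1.0, -1.0, -1.0])
print([round(x, 4) for x in w])  # third model falls out of the window
```

Dropped models can be re-admitted later, which is what makes the window "dynamic" rather than a one-time screen.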

  3. Climate Change Implications for Tropical Islands: Interpolating and Interpreting Statistically Downscaled GCM Projections for Management and Planning

    Treesearch

    Azad Henareh Khalyani; William A. Gould; Eric Harmsen; Adam Terando; Maya Quinones; Jaime A. Collazo

    2016-01-01

  4. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. 
However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.
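The autocorrelation check mentioned above can be sketched in a few lines: compute the normalized autocorrelation of a resolved time signal and inspect how quickly it decays, which indicates whether the averaging window spans enough integral time scales. The AR(1) toy signal below stands in for a real probe signal from the simulation:

```python
import numpy as np

def autocorrelation(x):
    """Normalized autocorrelation of a signal, the diagnostic used to
    judge whether time-averaged statistics have converged."""
    x = np.asarray(x, float) - np.mean(x)
    full = np.correlate(x, x, mode="full")[len(x) - 1:]
    return full / full[0]

# An AR(1)-like toy signal; its autocorrelation decays geometrically.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for i in range(1, 5000):
    x[i] = 0.9 * x[i - 1] + rng.normal()
r = autocorrelation(x)
print(r[0], round(r[1], 1))  # 1.0 at lag zero, roughly 0.9 at lag one
```

The Fourier transform of this function (the power spectrum) gives the complementary frequency-domain view mentioned in the abstract.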

  5. An Approach to Average Modeling and Simulation of Switch-Mode Systems

    ERIC Educational Resources Information Center

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of average modeling of PWM switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The paper discusses the derivation of PSPICE/ORCAD-compatible average models of the switch-mode power stages, their software implementation, and…

  6. Estimating Energy Conversion Efficiency of Thermoelectric Materials: Constant Property Versus Average Property Models

    NASA Astrophysics Data System (ADS)

    Armstrong, Hannah; Boese, Matthew; Carmichael, Cody; Dimich, Hannah; Seay, Dylan; Sheppard, Nathan; Beekman, Matt

    2017-01-01

    Maximum thermoelectric energy conversion efficiencies are calculated using the conventional "constant property" model and the recently proposed "cumulative/average property" model (Kim et al. in Proc Natl Acad Sci USA 112:8205, 2015) for 18 high-performance thermoelectric materials. We find that the constant property model generally predicts higher energy conversion efficiency for nearly all materials and temperature differences studied. Although significant deviations are observed in some cases, on average the constant property model predicts an efficiency that is a factor of 1.16 larger than that predicted by the average property model, with even lower deviations for temperature differences typical of energy harvesting applications. Based on our analysis, we conclude that the conventional dimensionless figure of merit ZT obtained from the constant property model, while not applicable for some materials with strongly temperature-dependent thermoelectric properties, remains a simple yet useful metric for initial evaluation and/or comparison of thermoelectric materials, provided the ZT at the average temperature of projected operation, not the peak ZT, is used.
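For reference, the constant property model evaluates the familiar closed-form maximum efficiency with ZT taken at the average operating temperature. A minimal sketch using that standard formula; the example temperatures and ZT are illustrative, not from the paper's 18 materials:

```python
import math

def efficiency_constant_property(t_hot, t_cold, zT_avg):
    """Maximum conversion efficiency under the constant-property model:
    eta = (dT/Th) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th),
    with ZT evaluated at the average temperature."""
    carnot = (t_hot - t_cold) / t_hot
    root = math.sqrt(1.0 + zT_avg)
    return carnot * (root - 1.0) / (root + t_cold / t_hot)

# Example: Th = 500 K, Tc = 300 K, ZT = 1 at the 400 K average.
eta = efficiency_constant_property(500.0, 300.0, 1.0)
print(round(eta, 4))  # about 0.0823, i.e. ~8.2% of input heat
```

The abstract's factor-of-1.16 finding means the average-property model would typically return a somewhat smaller value for the same inputs.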

  7. Model averaging and muddled multimodel inferences.

    PubMed

    Cade, Brian S

    2015-09-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. 
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
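The partial standard deviation diagnostic described above can be sketched as follows: each predictor's SD is shrunk by the square root of its variance inflation factor, so collinear predictors show much smaller partial SDs than independent ones. A minimal sketch with synthetic data; the exact degrees-of-freedom convention may differ from the paper's:

```python
import numpy as np

def partial_sd(X):
    """Partial standard deviations: each predictor's sample SD scaled by
    sqrt(1/VIF_j) and a degrees-of-freedom factor sqrt((n-1)/(n-p))."""
    X = np.asarray(X, float)
    n, p = X.shape
    out = []
    for j in range(p):
        xj = X[:, j]
        others = np.delete(X, j, axis=1)
        # VIF_j = 1 / (1 - R^2), regressing x_j on the other predictors.
        A = np.column_stack([np.ones(n), others])
        beta, *_ = np.linalg.lstsq(A, xj, rcond=None)
        resid = xj - A @ beta
        r2 = 1.0 - resid.var() / xj.var()
        vif = 1.0 / (1.0 - r2)
        out.append(xj.std(ddof=1) * np.sqrt(1.0 / vif)
                   * np.sqrt((n - 1) / (n - p)))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)              # independent of the others
psd = partial_sd(np.column_stack([x1, x2, x3]))
# The collinear pair gets much smaller partial SDs than the
# independent predictor, flagging the changing scale of estimates.
print(psd[0] < psd[2], psd[1] < psd[2])
```

Multiplying each coefficient estimate by its partial SD puts the estimates on a commensurate scale across models, the necessary condition the abstract describes.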

  8. Model averaging and muddled multimodel inferences

    USGS Publications Warehouse

    Cade, Brian S.

    2015-01-01

    Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. 
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.

  9. Incremental Value of Repeated Risk Factor Measurements for Cardiovascular Disease Prediction in Middle-Aged Korean Adults: Results From the NHIS-HEALS (National Health Insurance System-National Health Screening Cohort).

    PubMed

    Cho, In-Jeong; Sung, Ji Min; Chang, Hyuk-Jae; Chung, Namsik; Kim, Hyeon Chang

    2017-11-01

    Increasing evidence suggests that repeatedly measured cardiovascular disease (CVD) risk factors may have an additive predictive value compared with single measured levels. Thus, we evaluated the incremental predictive value of incorporating periodic health screening data for CVD prediction in a large nationwide cohort with periodic health screening tests. A total of 467 708 persons aged 40 to 79 years and free from CVD were randomly divided into development (70%) and validation subcohorts (30%). We developed 3 different CVD prediction models: a single measure model using single time point screening data; a longitudinal average model using average risk factor values from periodic screening data; and a longitudinal summary model using average values and the variability of risk factors. The development subcohort included 327 396 persons who had 3.2 health screenings on average and 25 765 cases of CVD over 12 years. The C statistics (95% confidence interval [CI]) for the single measure, longitudinal average, and longitudinal summary models were 0.690 (95% CI, 0.682-0.698), 0.695 (95% CI, 0.687-0.703), and 0.752 (95% CI, 0.744-0.760) in men and 0.732 (95% CI, 0.722-0.742), 0.735 (95% CI, 0.725-0.745), and 0.790 (95% CI, 0.780-0.800) in women, respectively. The net reclassification index from the single measure model to the longitudinal average model was 1.78% in men and 1.33% in women, and the index from the longitudinal average model to the longitudinal summary model was 32.71% in men and 34.98% in women. Using averages of repeatedly measured risk factor values modestly improves CVD predictability compared with single measurement values. Incorporating the average and variability information of repeated measurements can lead to great improvements in disease prediction. URL: https://www.clinicaltrials.gov. Unique identifier: NCT02931500. © 2017 American Heart Association, Inc.
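The three feature sets compared above differ only in how repeated screenings are summarized. A minimal sketch of the "longitudinal summary" features, assuming the mean level plus the sample SD as the variability measure (the study's exact variability metric may differ), with an invented blood-pressure series:

```python
import statistics

def longitudinal_summary(measurements):
    """Summarize repeated risk-factor measurements as the average level
    and a variability measure, the two feature types combined in a
    'longitudinal summary' prediction model."""
    mean = statistics.fmean(measurements)
    sd = statistics.stdev(measurements) if len(measurements) > 1 else 0.0
    return {"average": mean, "variability": sd}

# Systolic blood pressure from three biennial screenings (illustrative).
summary = longitudinal_summary([130.0, 138.0, 146.0])
print(summary)  # {'average': 138.0, 'variability': 8.0}
```

A "single measure" model would use only the latest value (146), and a "longitudinal average" model only the mean (138); the summary model adds the variability term that drove the large reclassification gains reported.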

  10. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
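Bayesian parameter inference of the kind described is commonly done with Markov chain Monte Carlo. A minimal random-walk Metropolis sketch on a toy one-parameter posterior; the Gaussian target below, centered at the nominal k-epsilon coefficient C_mu = 0.09, is a stand-in for "coefficient given jet-in-crossflow measurements", not the paper's actual likelihood:

```python
import math
import random

def metropolis(logpost, x0, steps=5000, scale=0.02, seed=1):
    """Minimal random-walk Metropolis sampler (a generic MCMC sketch,
    not the study's calibration machinery)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = logpost(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior: Gaussian centered at C_mu = 0.09 with SD 0.01.
logpost = lambda c: -0.5 * ((c - 0.09) / 0.01) ** 2
s = metropolis(logpost, x0=0.05)
posterior_mean = sum(s[1000:]) / len(s[1000:])  # discard burn-in
print(round(posterior_mean, 2))  # close to 0.09
```

In the actual study the likelihood would involve running (or surrogate-modeling) the RANS solver, which is why such calibrations are computationally demanding.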

  11. Average of delta: a new quality control tool for clinical laboratories.

    PubMed

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
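A minimal sketch of the average-of-delta signal: collect each patient's successive differences and average the most recent ones; a persistent nonzero average suggests an emerging assay bias. The analyte, data, and window size are invented for illustration:

```python
import statistics

def average_of_delta(results_by_patient, window=10):
    """Average-of-delta QC signal: the mean of the most recent `window`
    delta values (current minus previous result, same patient)."""
    deltas = []
    for series in results_by_patient.values():
        deltas.extend(b - a for a, b in zip(series, series[1:]))
    recent = deltas[-window:]
    return statistics.fmean(recent) if recent else 0.0

# Repeat potassium results (mmol/L) for three patients (invented); a
# consistently positive delta average suggests an added assay bias.
patients = {"A": [4.0, 4.3], "B": [3.8, 4.1], "C": [4.5, 4.7]}
print(round(average_of_delta(patients), 2))  # 0.27
```

In the absence of bias, patient-to-patient deltas scatter around zero, so this average is insensitive to the wide between-subject variation that limits average-of-normals.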

  12. Averaging Theory for Description of Environmental Problems: What Have We Learned?

    PubMed Central

    Miller, Cass T.; Schrefler, Bernhard A.

    2012-01-01

    Advances in Water Resources has been a prime archival source for implementation of averaging theories in changing the scale at which processes of importance in environmental modeling are described. Thus in celebration of the 35th year of this journal, it seems appropriate to assess what has been learned about these theories and about their utility in describing systems of interest. We review advances in understanding and use of averaging theories to describe porous medium flow and transport at the macroscale, an averaged scale that models spatial variability, and at the megascale, an integral scale that only considers time variation of system properties. We detail physical insights gained from the development and application of averaging theory for flow through porous medium systems and for the behavior of solids at the macroscale. We show the relationship between standard models that are typically applied and more rigorous models that are derived using modern averaging theory. We discuss how the results derived from averaging theory that are available can be built upon and applied broadly within the community. We highlight opportunities and needs that exist for collaborations among theorists, numerical analysts, and experimentalists to advance the new classes of models that have been derived. Lastly, we comment on averaging developments for rivers, estuaries, and watersheds. PMID:23393409

  13. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses basic survival model estimation to obtain the predicted average failure time of lamps, for a parametric model: the general composite hazard rate model. The underlying random-time model is the exponential distribution, used as the basis, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function using an exponential model as its basis. The model is estimated by fitting its parameters through construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for the lamp type. Grouping the data into several intervals and computing the average failure value on each, the average failure time of the model is calculated per interval; the p value obtained from the goodness-of-fit test is 0.3296.
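For the exponential base model with constant hazard, parameter estimation is one line: the maximum-likelihood estimate of the rate is the reciprocal of the sample mean, and the predicted average failure time is the sample mean itself. A minimal sketch with invented lamp data (not the paper's):

```python
import statistics

def exponential_mle(failure_times):
    """MLE for an exponential failure-time model: the hazard is constant,
    lambda = 1 / mean(t), so the predicted average failure time is just
    the sample mean, 1 / lambda."""
    mean_t = statistics.fmean(failure_times)
    return {"lambda": 1.0 / mean_t, "mean_failure_time": mean_t}

# Hypothetical lamp failure times in hours (illustrative only).
times = [900.0, 1100.0, 1000.0, 1200.0, 800.0]
print(exponential_mle(times))  # lambda = 0.001, mean failure time 1000 h
```

The composite model in the paper layers additional hazard components on this exponential basis, so its parameters require the survival-function construction the abstract describes.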

  14. 40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust emissions at the end of the model year for passenger... for sale, and certifying model types to standards as defined in § 86.1818-12. The model type carbon...
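The production-weighted computation the regulation describes reduces to a weighted mean over model types. A minimal sketch with hypothetical production figures; this illustrates only the arithmetic, not the full 40 CFR 86.1865-12 procedure:

```python
def fleet_average_co2(model_types):
    """Production-weighted fleet average: sum of (production units *
    carbon-related exhaust emissions, g/mi) over model types, divided
    by total production."""
    total = sum(units for units, _ in model_types)
    return sum(units * co2 for units, co2 in model_types) / total

# (production units, g/mi CO2) for three hypothetical passenger-car
# model types certified in one model year.
fleet = [(10000, 250.0), (5000, 300.0), (5000, 200.0)]
print(fleet_average_co2(fleet))  # 250.0 g/mi
```
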

  15. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables, e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric.
Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
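A minimal sketch of a performance-weighted ensemble mean of the kind explored above: each model's projection is weighted by a decreasing function of its process-metric error. The Gaussian weighting form and all numbers are illustrative assumptions, not the study's actual scheme:

```python
import math

def weighted_ensemble(projections, metric_errors, tau=1.0):
    """Performance-weighted multi-model mean: weight each model by
    exp(-(error/tau)^2), a common skill-weighting form; `tau` sets how
    sharply poor metric scores are penalized."""
    weights = [math.exp(-(e / tau) ** 2) for e in metric_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, projections)) / total

# Three models' projected warming (K) and their process-metric errors.
proj = [2.0, 3.0, 4.0]
err = [0.2, 0.5, 1.5]
print(round(weighted_ensemble(proj, err), 3))  # 2.536, pulled toward
# the best-scoring model versus the equal-weight mean of 3.0
```
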

  16. A Reading Paradigm to Meet the Needs of All Students.

    ERIC Educational Resources Information Center

    Bonds, Charles W.; Sida, Don

    1993-01-01

    Describes a reading model that suggests seven components essential for meeting the reading instructional needs of all students in a school. Notes that the model provides differentiated instruction for below-average, average, and above-average readers. (SR)

  17. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  18. Professional hazards? The impact of models' body size on advertising effectiveness and women's body-focused anxiety in professions that do and do not emphasize the cultural ideal of thinness.

    PubMed

    Dittmar, Helga; Howard, Sarah

    2004-12-01

    Previous experimental research indicates that the use of average-size women models in advertising prevents the well-documented negative effect of thin models on women's body image, while such adverts are perceived as equally effective (Halliwell & Dittmar, 2004). The current study extends this work by: (a) seeking to replicate the finding of no difference in advertising effectiveness between average-size and thin models (b) examining level of ideal-body internalization as an individual, internal factor that moderates women's vulnerability to thin media models, in the context of (c) comparing women in professions that differ radically in their focus on, and promotion of, the sociocultural ideal of thinness for women--employees in fashion advertising (n = 75) and teachers in secondary schools (n = 75). Adverts showing thin, average-size and no models were perceived as equally effective. High internalizers in both groups of women felt worse about their body image after exposure to thin models compared to other images. Profession affected responses to average-size models. Teachers reported significantly less body-focused anxiety after seeing average-size models compared to no models, while there was no difference for fashion advertisers. This suggests that women in professional environments with less focus on appearance-related ideals can experience increased body-esteem when exposed to average-size models, whereas women in appearance-focused professions report no such relief.

  19. Averaging principle for second-order approximation of heterogeneous models with homogeneous models.

    PubMed

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-11-27

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).

  20. Averaging principle for second-order approximation of heterogeneous models with homogeneous models

    PubMed Central

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ɛ²) equivalent to the outcome of the corresponding homogeneous model, where ɛ is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing). PMID:23150569
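The second-order equivalence can be seen from a Taylor expansion in the heterogeneity level: differentiability justifies the expansion, and symmetry of the outcome in the heterogeneous parameters forces the first-order term to vanish. The notation below is mine, a sketch of the intuition rather than the paper's proof:

```latex
\[
  y(\varepsilon) \;=\; y(0) \;+\; \varepsilon\, y'(0) \;+\; O(\varepsilon^{2}),
  \qquad
  \text{symmetry} \;\Rightarrow\; y'(0) = 0
  \;\Rightarrow\;
  y(\varepsilon) \;=\; y(0) \;+\; O(\varepsilon^{2}),
\]
```

where \(y(0)\) is the outcome of the homogeneous model with the heterogeneous property replaced by its average.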

  1. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    NASA Astrophysics Data System (ADS)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) average is considered the most reliable for simulating both present-day and future climates. It has been a primary reference for making conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement however comes with tremendous computational cost, which is especially inhibiting for regional climate modeling as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling that achieves a similar level of bias reduction at a fraction of the cost compared with the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
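The core of the ERF construction is an arithmetic mean of the driving fields across GCMs, taken before the single RCM run. A toy numpy sketch with an invented 2x2 boundary-temperature grid (real IBCs are multi-variable, multi-level, time-varying fields):

```python
import numpy as np

def ensemble_reconstructed_forcings(gcm_fields):
    """ERF-style driving data: average the initial/boundary-condition
    fields of several GCMs into a single set, so the RCM is run once
    with ensemble-mean forcing instead of once per GCM."""
    return np.mean(np.stack([np.asarray(f, float) for f in gcm_fields]),
                   axis=0)

# Toy boundary temperature fields (K) from three GCMs on a 2x2 grid.
fields = [
    [[300.0, 301.0], [299.0, 298.0]],
    [[302.0, 300.0], [298.0, 299.0]],
    [[301.0, 302.0], [300.0, 300.0]],
]
erf = ensemble_reconstructed_forcings(fields)
print(erf[0][0])  # 301.0, the three-GCM mean at that grid point
```

This is why the cost is that of one RCM run rather than one per driving GCM.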

  2. Bayes factors and multimodel inference

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    Multimodel inference has two main themes: model selection, and model averaging. Model averaging is a means of making inference conditional on a model set, rather than on a selected model, allowing formal recognition of the uncertainty associated with model choice. The Bayesian paradigm provides a natural framework for model averaging, and provides a context for evaluation of the commonly used AIC weights. We review Bayesian multimodel inference, noting the importance of Bayes factors. Noting the sensitivity of Bayes factors to the choice of priors on parameters, we define and propose nonpreferential priors as offering a reasonable standard for objective multimodel inference.
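As a concrete point of comparison for the Bayesian weights the abstract discusses, the commonly used AIC weights can be computed as Akaike's relative likelihoods. The AIC values and point predictions below are hypothetical:

```python
import math

def aic_weights(aics):
    # Akaike weights: relative likelihood exp(-0.5 * delta_AIC), normalized
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# hypothetical AIC scores for three candidate models
aics = [100.0, 102.0, 110.0]
w = aic_weights(aics)

# model-averaged prediction: weight each model's point prediction
preds = [3.1, 3.4, 2.8]
avg_pred = sum(wi * p for wi, p in zip(w, preds))
```

The inference is then conditional on the whole model set rather than on the single best-scoring model.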

  3. The statistical average of optical properties for alumina particle cluster in aircraft plume

    NASA Astrophysics Data System (ADS)

    Li, Jingying; Bai, Lu; Wu, Zhensen; Guo, Lixin

    2018-04-01

    We establish a lognormal distribution model for the monomer radius and monomer number of alumina particle clusters in aircraft plumes. Based on the Multi-Sphere T-Matrix (MSTM) theory, we provide a method for computing the statistical average of the optical properties of alumina particle clusters in a plume, analyze the effects of different distributions and different detection wavelengths on these statistically averaged optical properties, and compare the statistical average optical properties under the alumina particle cluster model established in this study with those under three simplified alumina particle models. The calculations show that the monomer number of an alumina particle cluster and its size distribution have a considerable effect on its statistically averaged optical properties. The statistically averaged optical properties at common detection wavelengths exhibit clear differences, which strongly affect the modeling of the IR and UV radiation properties of a plume. Compared with the three simplified models, the alumina particle cluster model presented here features both higher extinction and scattering efficiencies. An accurate description of the scattering properties of alumina particles in an aircraft plume is therefore of great significance for the study of plume radiation properties.

  4. A Note on Spatial Averaging and Shear Stresses Within Urban Canopies

    NASA Astrophysics Data System (ADS)

    Xie, Zheng-Tong; Fuka, Vladimir

    2018-04-01

    One-dimensional urban models embedded in mesoscale numerical models may place several grid points within the urban canopy. This requires an accurate parametrization of the shear stresses (i.e. vertical momentum fluxes), including the dispersive stress and the momentum sinks, at these points. We used a case study with a packing density of 33% and rigorously checked the vertical variation of the spatially averaged total shear stress, which can be used in a one-dimensional column urban model. We found that the intrinsic spatial average, in which the volume or area of the solid parts is excluded from the averaging, yields a greater time- and space-averaged total stress within the canopy, and a more abrupt change at the top of the buildings, than the comprehensive spatial average, in which the volume or area of the solid parts is included.

  5. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

    Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO(2) dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
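A minimal sketch of the averaged-dose-response idea (not the authors' code; the two fitted models, their parameter values, and the 0.6/0.4 weights are all assumed for illustration) is to average the model probabilities and solve for the dose giving a 10% extra-risk benchmark:

```python
import math

# two hypothetical fitted dichotomous dose-response models
def logistic(d, a=-3.0, b=0.8):
    return 1.0 / (1.0 + math.exp(-(a + b * d)))

def quantal_linear(d, g=0.047, b=0.12):
    return g + (1.0 - g) * (1.0 - math.exp(-b * d))

W_LOGISTIC, W_QL = 0.6, 0.4  # assumed model weights

def averaged(d):
    # model-averaged response probability at dose d
    return W_LOGISTIC * logistic(d) + W_QL * quantal_linear(d)

def extra_risk(d):
    p0 = averaged(0.0)
    return (averaged(d) - p0) / (1.0 - p0)

def bmd(bmr=0.10, lo=0.0, hi=100.0):
    # bisection for the dose whose extra risk equals the benchmark response
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if extra_risk(mid) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

bmd10 = bmd()
```

In the paper's approach the BMDL is then obtained by bootstrapping this whole procedure, which is omitted here.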

  6. An improved car-following model with two preceding cars' average speed

    NASA Astrophysics Data System (ADS)

    Yu, Shao-Wei; Shi, Zhong-Ke

    2015-01-01

    To better describe cooperative car-following behavior under intelligent transportation circumstances and to increase roadway traffic mobility, data on three successive following cars at a signalized intersection in Jinan, China were obtained and used to explore the link between the two preceding cars' average speed and car-following behavior. The results indicate that the two preceding cars' average velocity has significant effects on the following car's motion. An improved car-following model accounting for the two preceding cars' average velocity was then proposed and calibrated based on the full velocity difference model, and numerical simulations were carried out to study how the two preceding cars' average speed affects the starting process and the evolution of traffic flow subject to an initial small disturbance. The results indicate that the improved car-following model can qualitatively describe the impact of the two preceding cars' average velocity on traffic flow, and that incorporating this average velocity into the control strategy of a cooperative adaptive cruise control system can improve the stability of traffic flow, suppress the appearance of traffic jams and increase the capacity of signalized intersections.
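A sketch of one Euler step of a full-velocity-difference-style model extended with the two preceding cars' average speed. The optimal-velocity functional form and all parameter values below are assumed for illustration, not taken from the paper:

```python
import math

def optimal_velocity(gap, v_max=14.0, h_c=7.0):
    # assumed optimal-velocity function of the headway gap
    return 0.5 * v_max * (math.tanh(gap - h_c) + math.tanh(h_c))

def accel(v, gap, v_lead1, v_lead2, kappa=0.41, lam=0.5, p=0.5):
    # FVD-style acceleration; the relative-speed term uses the
    # average speed of the two preceding cars instead of one leader
    v_avg = p * v_lead1 + (1 - p) * v_lead2
    return kappa * (optimal_velocity(gap) - v) + lam * (v_avg - v)

# one Euler step for the following car (hypothetical states)
dt = 0.1
v, gap = 8.0, 10.0               # m/s, m
a = accel(v, gap, v_lead1=10.0, v_lead2=12.0)
v_next = v + a * dt
```

With the leaders moving faster and a comfortable gap, the follower accelerates, as expected of the model.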

  7. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates.

  8. Cycle-averaged dynamics of a periodically driven, closed-loop circulation model

    NASA Technical Reports Server (NTRS)

    Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.

    2005-01-01

    Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms.

  9. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty to estimate irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
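A simplified illustration of bias-weighted ensemble averaging in the spirit of REA. This sketch implements only a performance-like criterion (weight by inverse distance to a reference); the full REA method also includes a convergence criterion, and all numbers here are hypothetical:

```python
# Weight each ensemble member by the inverse of its absolute bias
# against a reference value (simplified, performance-only REA).
def rea_average(members, reference, eps=1e-6):
    weights = [1.0 / (abs(m - reference) + eps) for m in members]
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, members)) / total

models = [420.0, 455.0, 390.0, 510.0]   # hypothetical irrigation demand (mm)
obs = 430.0                             # hypothetical reference estimate
est = rea_average(models, obs)
```

Down-weighting poorly performing members pulls the estimate closer to the reference than the equally weighted ensemble mean, which is the mechanism behind the reported uncertainty reduction.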

  10. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  11. Potential biases in evapotranspiration estimates from Earth system models due to spatial heterogeneity and lateral moisture redistribution

    NASA Astrophysics Data System (ADS)

    Rouholahnejad, E.; Kirchner, J. W.

    2016-12-01

    Evapotranspiration (ET) is a key process in land-climate interactions and affects the dynamics of the atmosphere at local and regional scales. In estimating ET, most earth system models average over considerable sub-grid heterogeneity in land surface properties, precipitation (P), and potential evapotranspiration (PET). This spatial averaging could potentially bias ET estimates, due to the nonlinearities in the underlying relationships. In addition, most earth system models ignore lateral redistribution of water within and between grid cells, which could potentially alter both local and regional ET. Here we present a first attempt to quantify the effects of spatial heterogeneity and lateral redistribution on grid-cell-averaged ET as seen from the atmosphere over heterogeneous landscapes. Using a Budyko framework to express ET as a function of P and PET, we quantify how sub-grid heterogeneity affects average ET at the scale of typical earth system model grid cells. We show that averaging over sub-grid heterogeneity in P and PET, as typical earth system models do, leads to overestimates of average ET. We use a similar approach to quantify how lateral redistribution of water could affect average ET, as seen from the atmosphere. We show that where the aridity index P/PET increases with altitude, gravitationally driven lateral redistribution will increase average ET, implying that models that neglect lateral moisture redistribution will underestimate average ET. In contrast, where the aridity index P/PET decreases with altitude, gravitationally driven lateral redistribution will decrease average ET. This approach yields a simple conceptual framework and mathematical expressions for determining whether, and how much, spatial heterogeneity and lateral redistribution can affect regional ET fluxes as seen from the atmosphere. 
This analysis provides the basis for quantifying heterogeneity and redistribution effects on ET at regional and continental scales, which will be the focus of future work.
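The overestimation from averaging P and PET before applying the Budyko curve can be demonstrated directly. The sketch below uses the classic Budyko (1974) functional form and two hypothetical sub-grid cells of equal area:

```python
import math

def budyko_et(p, pet):
    # Budyko (1974) curve: ET as a function of precipitation P and PET
    phi = pet / p   # aridity index
    return p * math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))

# two hypothetical sub-grid cells (mm/yr), equal area
p1, pet1 = 400.0, 1600.0    # dry half of the grid cell
p2, pet2 = 1200.0, 800.0    # wet half of the grid cell

# grid-cell ET computed from averaged forcing (what a coarse model does)
et_of_means = budyko_et((p1 + p2) / 2, (pet1 + pet2) / 2)
# true grid-cell ET: average of the sub-grid ET values
mean_of_ets = (budyko_et(p1, pet1) + budyko_et(p2, pet2)) / 2
```

Because the curve is concave, `et_of_means` exceeds `mean_of_ets`: averaging over sub-grid heterogeneity first inflates the ET estimate, exactly the bias the abstract describes.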

  12. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.

  13. Age-dependence of the average and equivalent refractive indices of the crystalline lens

    PubMed Central

    Charman, W. Neil; Atchison, David A.

    2013-01-01

    Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
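The average-index integration can be sketched numerically. The power-law profile below is an assumed concrete form consistent with the description (index gradient as a power function of normalized distance from lens center), using the paper's center and edge indices:

```python
# Average axial refractive index for an assumed power-law gradient
# n(r) = n_edge + (n_center - n_edge) * (1 - r**p),
# where r is the normalized distance from the lens center.
def average_index(n_center=1.415, n_edge=1.37, p=4.0, steps=10000):
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) / steps          # midpoint rule over r in [0, 1]
        total += n_edge + (n_center - n_edge) * (1.0 - r ** p)
    return total / steps

avg = average_index()
```

Increasing the exponent `p` flattens the central plateau and raises the average toward the center index, mirroring the reported age-related increase in average axial index at fixed center and edge values.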

  14. Department of Transportation, National Highway Traffic Safety Administration : light truck average fuel economy standard, model year 1999

    DOT National Transportation Integrated Search

    1997-04-18

    Section 32902(a) of title 49, United States Code, requires the Secretary of Transportation to prescribe by regulation, at least 18 months in advance of each model year, average fuel economy standards (known as "Corporate Average Fuel Economy" or "CAF...

  15. 40 CFR 600.510-08 - Calculation of average fuel economy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Regulations for Model Year 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and...) Average fuel economy will be calculated to the nearest 0.1 mpg for the classes of automobiles identified...
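The fleet average prescribed in these sections is a production-weighted harmonic mean, not an arithmetic mean. A sketch with a hypothetical two-car-line fleet (volumes and mpg figures invented for illustration):

```python
# CAFE-style fleet average fuel economy: harmonic mean weighted by
# production volume, as used in 40 CFR 600.510.
def fleet_average_mpg(volumes, mpgs):
    total_vehicles = sum(volumes)
    total_gallons_per_mile = sum(v / m for v, m in zip(volumes, mpgs))
    return total_vehicles / total_gallons_per_mile

# hypothetical model-year fleet: two car lines
avg = fleet_average_mpg([10000, 5000], [30.0, 20.0])
# harmonic mean (about 25.7 mpg) sits below the volume-weighted
# arithmetic mean (about 26.7 mpg), penalizing low-mpg lines more
```

The regulation then rounds the result to the nearest 0.1 mpg, as the excerpt above notes.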

  16. 40 CFR 600.510-86 - Calculation of average fuel economy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Regulations for Model Year 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and...) Average fuel economy will be calculated to the nearest 0.1 mpg for the classes of automobiles identified...

  17. 40 CFR 600.510-93 - Calculation of average fuel economy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Regulations for Model Year 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and...) Average fuel economy will be calculated to the nearest 0.1 mpg for the classes of automobiles identified...

  18. A review of US anthropometric reference data (1971 2000) with comparisons to both stylized and tomographic anatomic models

    NASA Astrophysics Data System (ADS)

    Huh, C.; Bolch, W. E.

    2003-10-01

    Two classes of anatomic models currently exist for use in both radiation protection and radiation dose reconstruction: stylized mathematical models and tomographic voxel models. The former utilize 3D surface equations to represent internal organ structure and external body shape, while the latter are based on segmented CT or MR images of a single individual. While tomographic models are clearly more anthropomorphic than stylized models, a given model's characterization as being anthropometric is dependent upon the reference human to which the model is compared. In the present study, data on total body mass, standing/sitting heights and body mass index are collected and reviewed for the US population covering the time interval from 1971 to 2000. These same anthropometric parameters are then assembled for the ORNL series of stylized models, the GSF series of tomographic models (Golem, Helga, Donna, etc), the adult male Zubal tomographic model and the UF newborn tomographic model. The stylized ORNL models of the adult male and female are found to be fairly representative of present-day average US males and females, respectively, in terms of both standing and sitting heights for ages between 20 and 60-80 years. While the ORNL adult male model provides a reasonably close match to the total body mass of the average US 21-year-old male (within ~5%), present-day 40-year-old males have an average total body mass that is ~16% higher. For radiation protection purposes, the use of the larger 73.7 kg adult ORNL stylized hermaphrodite model provides a much closer representation of average present-day US females at ages ranging from 20 to 70 years. In terms of the adult tomographic models from the GSF series, only Donna (40-year-old F) closely matches her age-matched US counterpart in terms of average body mass. Regarding standing heights, the better matches to US age-correlated averages belong to Irene (32-year-old F) for the females and Golem (38-year-old M) for the males. 
Both Helga (27-year-old F) and Donna, however, provide good matches to average US sitting heights for adult females, while Golem and Otoko (male of unknown age) yield sitting heights that are slightly below US adult male averages. Finally, Helga is seen as the only GSF tomographic female model that yields a body mass index in line with her average US female counterpart at age 26. In terms of dose reconstruction activities, however, all current tomographic voxel models are valuable assets in attempting to cover the broad distribution of individual anthropometric parameters representative of the current US population. It is highly recommended that similar attempts to create a broad library of tomographic models be initiated in the United States and elsewhere to complement and extend the limited number of tomographic models presently available for these efforts.

  19. Effects of spatial variability and scale on areal-average evapotranspiration

    NASA Technical Reports Server (NTRS)

    Famiglietti, J. S.; Wood, Eric F.

    1993-01-01

    This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.

  20. Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes

    DOE PAGES

    Chapman, E. G.; Shaw, W. J.; Easter, R. C.; ...

    2002-12-03

    The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emission fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere, where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.
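The sensitivity to the averaging period follows from the nonlinearity of the mass transfer velocity in wind speed. With a quadratic dependence (a Wanninkhof-type form, assumed here for illustration), the transfer velocity computed from averaged winds understates the average of the instantaneous values:

```python
# Gas-transfer velocity as a convex (quadratic) function of wind speed.
def transfer_velocity(u):
    return 0.31 * u ** 2    # cm/h; assumed quadratic parameterization

winds = [2.0, 4.0, 12.0, 6.0]                 # instantaneous winds (m/s)
k_of_mean = transfer_velocity(sum(winds) / len(winds))
mean_of_k = sum(transfer_velocity(u) for u in winds) / len(winds)
```

By Jensen's inequality `mean_of_k > k_of_mean` for any variable wind record, so fluxes driven by monthly-mean winds are biased low relative to those driven by instantaneous winds, consistent with the 10-60% differences reported.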

  1. Variability analysis of SAR from 20 MHz to 2.4 GHz for different adult and child models using finite-difference time-domain

    NASA Astrophysics Data System (ADS)

    Conil, E.; Hadjem, A.; Lacroux, F.; Wong, M. F.; Wiart, J.

    2008-03-01

    This paper deals with the variability of body models used in numerical dosimetry studies. Six adult anthropomorphic voxel models have been collected and used to build 5-, 8- and 12-year-old children using a morphing method respecting anatomical parameters. Finite-difference time-domain calculations of the specific absorption rate (SAR) have been performed over a range of frequencies from 20 MHz to 2.4 GHz for isolated models illuminated by plane waves. The whole-body-averaged SAR is presented, as well as averages over specific tissues such as skin, muscle, fat or bone and over specific parts of the body such as the head, legs, arms or torso. The results point out the variability of the adult models: the standard deviation of the whole-body-averaged SAR across adult models can reach 40%. All phantoms are exposed at the ICNIRP reference levels. The results show that for adults, compliance with the reference levels ensures compliance with the basic restrictions, but for the child models involved in this study, the whole-body-averaged SAR exceeds the basic restrictions by up to 40%.

  2. Parameterization of eddy sensible heat transports in a zonally averaged dynamic model of the atmosphere

    NASA Technical Reports Server (NTRS)

    Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean

    1990-01-01

    A Charney-Branscome based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gaged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear appoximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.

  3. Supermodeling With A Global Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Wiegerinck, Wim; Burgers, Willem; Selten, Frank

    2013-04-01

    In weather and climate prediction studies it often turns out that the multi-model ensemble mean prediction has the best prediction skill scores. One possible explanation is that the major part of the model error is random and is averaged out in the ensemble mean. In the standard multi-model ensemble approach, the models are integrated in time independently and the predicted states are combined a posteriori. Recently an alternative ensemble prediction approach has been proposed in which the models exchange information during the simulation and synchronize on a common solution that is closer to the truth than any of the individual model solutions in the standard multi-model ensemble approach, or than a weighted average of these. This is called the supermodeling approach (SUMO). The potential of the SUMO approach has been demonstrated in the context of simple, low-order, chaotic dynamical systems. The information exchange takes the form of linear nudging terms in the dynamical equations that nudge the solution of each model toward the solutions of all other models in the ensemble. With a suitable choice of the connection strengths, the models synchronize on a common solution that is indeed closer to the true system than any of the individual model solutions without nudging. This variant is called connected SUMO. An alternative is to integrate a weighted-average model, called weighted SUMO: at each time step, all models in the ensemble calculate their tendencies, these tendencies are averaged with weights, and the state is integrated one time step into the future using this weighted-average tendency. It has been shown that when connected SUMO synchronizes perfectly, it follows the weighted-average trajectory and the two approaches yield the same solution.
In this study we pioneer both approaches in the context of a global, quasi-geostrophic, three-level atmosphere model that is capable of simulating quite realistically the extra-tropical circulation in the Northern Hemisphere winter.
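The weighted-SUMO idea, integrating a single state with a weighted average of the member models' tendencies, can be sketched with two imperfect toy models of exponential decay (the models and weights are illustrative only, not the atmosphere model used in the study):

```python
# Two imperfect toy models of the true system dx/dt = -x:
def model_a(x):
    return -1.3 * x     # overestimates the damping

def model_b(x):
    return -0.7 * x     # underestimates the damping

def weighted_sumo_step(x, dt=0.01, w=(0.5, 0.5)):
    # one Euler step using the weighted-average tendency
    tendency = w[0] * model_a(x) + w[1] * model_b(x)
    return x + dt * tendency

x = 1.0
for _ in range(100):
    x = weighted_sumo_step(x)
# with equal weights the averaged tendency equals -x, the true model,
# so x after t = 1 is close to exp(-1)
```

In practice the weights (or connection strengths) are trained so that the combined model tracks observations better than any member.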

  4. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit

    PubMed Central

    Hoover, Stephen; Jackson, Eric V.; Paul, David; Locke, Robert

    2016-01-01

    Background Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, which uses the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Objective Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. Methods We used five years of retrospective daily NICU census data for model development (January 2008 – December 2012, N=1827 observations) and one year of data for validation (January – December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. Results The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0), seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14, as well as a seasonal linear regression model. The proposed forecasting models resulted on average in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Conclusions Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning. PMID:27437040

  5. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit.

    PubMed

    Capan, Muge; Hoover, Stephen; Jackson, Eric V; Paul, David; Locke, Robert

    2016-01-01

    Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, which uses the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. We used five years of retrospective daily NICU census data for model development (January 2008 - December 2012, N=1827 observations) and one year of data for validation (January - December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0), seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14, as well as a seasonal linear regression model. The proposed forecasting models resulted on average in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning.
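The simplest candidate model above, ARIMA(1,0,0), is just an AR(1) process. A self-contained least-squares fit and 7-day forecast on a short hypothetical census series (a sketch of the model class, not the study's code or data):

```python
# Fit x[t] = c + phi * x[t-1] by ordinary least squares on lag-1 pairs.
def fit_ar1(series):
    x = series[:-1]
    y = series[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
          sum((a - mx) ** 2 for a in x)
    c = my - phi * mx
    return c, phi

def forecast(series, horizon=7):
    # iterate the fitted recursion forward for a 7-day-ahead forecast
    c, phi = fit_ar1(series)
    preds, last = [], series[-1]
    for _ in range(horizon):
        last = c + phi * last
        preds.append(last)
    return preds

census = [40, 42, 41, 43, 44, 42, 45, 44, 46, 45, 47, 46]  # hypothetical
preds = forecast(census)
```

Seasonal variants such as ARIMA(1,0,0)x(1,1,2)7 add weekly differencing and seasonal terms on top of this recursion; in practice one would fit them with a dedicated library rather than by hand.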

  6. Turbulence modeling for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.; Coakley, T. J.

    1989-01-01

    Turbulence modeling for high speed compressible flows is described and discussed. Starting with the compressible Navier-Stokes equations, methods of statistical averaging are described by means of which the Reynolds-averaged Navier-Stokes equations are developed. Unknown averages in these equations are approximated using various closure concepts. Zero-, one-, and two-equation eddy viscosity models, algebraic stress models and Reynolds stress transport models are discussed. Computations of supersonic and hypersonic flows obtained using several of the models are discussed and compared with experimental results. Specific examples include attached boundary layer flows, shock wave boundary layer interactions and compressible shear layers. From these examples, conclusions regarding the status of modeling and recommendations for future studies are discussed.

  7. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET estimates is far more important than the model parametric uncertainty introduced by crop coefficients, which are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit of 400 mm due to water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
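    The idea of down-weighting outlying ensemble members, in the spirit of REA, can be sketched as follows. The six estimates and the consensus-based weighting rule are hypothetical, not the paper's models or its actual REA implementation.

```python
# Illustrative sketch: equal-weight ensemble mean vs. a reliability-weighted
# mean that down-weights models far from the ensemble consensus.

# Annual irrigation water requirement (mm) from six hypothetical ET models;
# the last value plays the role of an outlying model.
estimates = [310.0, 345.0, 350.0, 355.0, 360.0, 520.0]

equal_mean = sum(estimates) / len(estimates)

# Reliability weight: inverse distance from the equal-weight consensus.
eps = 1e-6
weights = [1.0 / (abs(x - equal_mean) + eps) for x in estimates]
wsum = sum(weights)
rea_mean = sum(w * x for w, x in zip(weights, estimates)) / wsum

print(f"equal-weight mean: {equal_mean:.1f} mm")
print(f"reliability-weighted mean: {rea_mean:.1f} mm")
```

    The weighted mean is pulled toward the cluster of agreeing models and away from the outlier, which is how such averaging can tighten the predictive distribution.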

  8. Quantification of root water uptake in soil using X-ray computed tomography and image-based modelling.

    PubMed

    Daly, Keith R; Tracy, Saoirse R; Crout, Neil M J; Mairhofer, Stefan; Pridmore, Tony P; Mooney, Sacha J; Roose, Tiina

    2018-01-01

    Spatially averaged models of root-soil interactions are often used to calculate plant water uptake. Using a combination of X-ray computed tomography (CT) and image-based modelling, we tested the accuracy of this spatial averaging by directly calculating plant water uptake for young wheat plants in two soil types. The root system was imaged using X-ray CT at 2, 4, 6, 8 and 12 d after transplanting. The roots were segmented using semi-automated root tracking for speed and reproducibility. The segmented geometries were converted to a mesh suitable for the numerical solution of Richards' equation. Richards' equation was parameterized using existing pore scale studies of soil hydraulic properties in the rhizosphere of wheat plants. Image-based modelling allows the spatial distribution of water around the root to be visualized and the fluxes into the root to be calculated. By comparing the results obtained through image-based modelling to spatially averaged models, the impact of root architecture and geometry in water uptake was quantified. We observed that the spatially averaged models performed well in comparison to the image-based models with <2% difference in uptake. However, the spatial averaging loses important information regarding the spatial distribution of water near the root system. © 2017 John Wiley & Sons Ltd.

  9. Ionospheric Storm Reconstructions with a Multimodel Ensemble Prediction System (MEPS) of Data Assimilation Models: Mid and Low Latitude Dynamics

    NASA Astrophysics Data System (ADS)

    Schunk, R. W.; Scherliess, L.; Eccles, V.; Gardner, L. C.; Sojka, J. J.; Zhu, L.; Pi, X.; Mannucci, A. J.; Komjathy, A.; Wang, C.; Rosen, G.

    2016-12-01

    As part of the NASA-NSF Space Weather Modeling Collaboration, we created a Multimodel Ensemble Prediction System (MEPS) for the Ionosphere-Thermosphere-Electrodynamics system that is based on Data Assimilation (DA) models. MEPS is composed of seven physics-based data assimilation models that cover the globe. Ensemble modeling can be conducted for the mid-low latitude ionosphere using the four GAIM data assimilation models, including the Gauss Markov (GM), Full Physics (FP), Band Limited (BL) and 4DVAR DA models. These models can assimilate Total Electron Content (TEC) from a constellation of satellites, bottom-side electron density profiles from digisondes, in situ plasma densities, occultation data and ultraviolet emissions. The four GAIM models were run for the March 16-17, 2013, geomagnetic storm period with the same data, but we also systematically added new data types and re-ran the GAIM models to see how the different data types affected the GAIM results, with the emphasis on elucidating differences in the underlying ionospheric dynamics and thermospheric coupling. Also, for each scenario the outputs from the four GAIM models were used to produce an ensemble mean for TEC, NmF2, and hmF2. A simple average of the models was used in the ensemble averaging to see if there was an improvement of the ensemble average over the individual models. For the scenarios considered, the ensemble average yielded better specifications than the individual GAIM models. The model differences and averages, and the consequent differences in ionosphere-thermosphere coupling and dynamics will be discussed.

  10. 10 CFR 431.445 - Determination of small electric motor efficiency.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) General requirements. The average full-load efficiency of each basic model of small electric motor must be... this section, provided, however, that an AEDM may be used to determine the average full-load efficiency of one or more of a manufacturer's basic models only if the average full-load efficiency of at least...

  11. 10 CFR 431.445 - Determination of small electric motor efficiency.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) General requirements. The average full-load efficiency of each basic model of small electric motor must be... this section, provided, however, that an AEDM may be used to determine the average full-load efficiency of one or more of a manufacturer's basic models only if the average full-load efficiency of at least...

  12. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  13. Indirect Validation of Probe Speed Data on Arterial Corridors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eshragh, Sepideh; Young, Stanley E.; Sharifi, Elham

    This study aimed to estimate the accuracy of probe speed data on arterial corridors on the basis of roadway geometric attributes and functional classification. It was assumed that functional class (medium and low) along with other road characteristics (such as weighted average of the annual average daily traffic, average signal density, average access point density, and average speed) were available as correlation factors to estimate the accuracy of probe traffic data. This study tested these factors as predictors of the fidelity of probe traffic data by using the results of an extensive validation exercise. This study showed strong correlations between these geometric attributes and the accuracy of probe data when they were assessed by using average absolute speed error. Linear models were regressed to existing data to estimate appropriate models for medium- and low-type arterial corridors. The proposed models for medium- and low-type arterials were validated further on the basis of the results of a slowdown analysis. These models can be used to predict the accuracy of probe data indirectly in medium and low types of arterial corridors.

  14. Average Magnetic Field Magnitude Profiles of Wind Magnetic Clouds as a Function of Closest Approach to the Clouds' Axes and Comparison to Model

    NASA Astrophysics Data System (ADS)

    Berdichevsky, D. B.; Lepping, R. P.; Wu, C. C.

    2016-12-01

    We examine the average magnetic field magnitude (|B|) within magnetic clouds (MCs) observed over the period of 1995 to July of 2015, to understand the difference between this field magnitude and the ideal (field magnitude) |B|-profiles expected from using a static, constant-α, force-free, cylindrically symmetric model for MCs (Lepping et al. 1990, denoted as the LJB model here) in general. We classify all MCs according to an objectively assigned quality, Qo (=1,2,3, for excellent, good, and poor). There are a total of 209 MCs and 124 if only Qo=1,2 cases are considered. Average normalized field with respect to closest approach (CA) is stressed where we separate cases into four CA sectors centered at 12.5%, 37.5%, 62.5%, and 87.5% of the average radius; the averaging is done on a percent-duration basis to put all cases on the same footing. By normalized field we mean that, before averaging, the |B| for each MC at each point is divided by the field magnitude estimated for the MC's axis (Bo) as determined by the LJB model. The actual averages for the 209 and 124 MC sets are compared separately to the LJB model, after an adjustment for MC expansion, which is estimated from long-term average conditions of MCs at 1 AU using a typical speed difference of 40 km/s across the average MC. The comparison is a direct difference (average observations - model) vs. time for the four sets separately. These four difference-relationships are fitted with four quadratic curves, which have very small sigmas for the fits. Interpretation of these relationships (called Quad formulae) should provide a comprehensive view of the variation of the normalized field-magnitude throughout the average MC where we expect both front and rear compression (due to solar wind interaction) to be part of its explanation. These formulae are also being considered for modifying the LJB model. 
This modification is expected to be used for assistance in a scheme for forecasting the timing and magnitude of magnetic storms caused by MCs. Extensive testing of the Quad formulae shows that the formulae are quite useful in correcting individual MC field magnitude profiles, especially for the Qo=1,2 cases and especially for the first 1/3 of these MCs. However, the use of this type of |B|-correction constitutes a slight violation of the force-free assumption used in the original LJB MC model.

  15. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.

    PubMed

    Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E

    2011-01-01

    Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or they assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte-Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attached to decision uncertainty by factors of several hundred. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. 
All rights reserved.
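    The EVPI calculation for a model-averaged analysis can be sketched with a small Monte Carlo loop. The two structural models, their net-benefit distributions, and the posterior model weights below are invented for illustration; they are not the asthma example's numbers.

```python
# Illustrative sketch: EVPI = E[max net benefit] - max E[net benefit],
# with structural uncertainty handled by sampling the model first.
import random

random.seed(7)
n_sims = 20000
model_probs = [0.6, 0.4]            # hypothetical posterior model weights

def draw_net_benefits(model):
    """Net benefits (treatment A, treatment B) under one structural model."""
    if model == 0:
        return (random.gauss(100, 30), random.gauss(95, 30))
    return (random.gauss(90, 30), random.gauss(105, 30))

e_nb = [0.0, 0.0]
e_max = 0.0
for _ in range(n_sims):
    # Model-averaged simulation: sample the structural model, then the NBs.
    m = 0 if random.random() < model_probs[0] else 1
    nb = draw_net_benefits(m)
    e_nb[0] += nb[0]
    e_nb[1] += nb[1]
    e_max += max(nb)

e_nb = [v / n_sims for v in e_nb]
e_max /= n_sims
evpi = e_max - max(e_nb)
print(f"E[NB]: A={e_nb[0]:.1f}, B={e_nb[1]:.1f}, EVPI={evpi:.1f}")
```

    Because the two hypothetical models disagree about which treatment is better, the averaged analysis carries substantial decision uncertainty and the EVPI is large, mirroring the abstract's point that structural assumptions can dominate the value-of-information result.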

  16. Design of a hybrid model for cardiac arrhythmia classification based on Daubechies wavelet transform.

    PubMed

    Rajagopal, Rekha; Ranganathan, Vidhyapriya

    2018-06-05

    Automation in cardiac arrhythmia classification helps medical professionals make accurate decisions about the patient's health. The aim of this work was to design a hybrid classification model to classify cardiac arrhythmias. The design phase of the classification model comprises the following stages: preprocessing of the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through Daubechies wavelet transform, and arrhythmia classification using a collaborative decision from the K nearest neighbor classifier (KNN) and a support vector machine (SVM). The proposed model is able to classify 5 arrhythmia classes as per the ANSI/AAMI EC57: 1998 classification standard. Level 1 of the proposed model involves classification using the KNN and the classifier is trained with examples from all classes. Level 2 involves classification using an SVM and is trained specifically to classify overlapped classes. The final classification of a test heartbeat pertaining to a particular class is done using the proposed KNN/SVM hybrid model. The experimental results demonstrated that the average sensitivity of the proposed model was 92.56%, the average specificity 99.35%, the average positive predictive value 98.13%, the average F-score 94.5%, and the average accuracy 99.78%. The results obtained using the proposed model were compared with the results of discriminant, tree, and KNN classifiers. The proposed model is able to achieve a high classification accuracy.

  17. Argentina wheat yield model

    NASA Technical Reports Server (NTRS)

    Callis, S. L.; Sakamoto, C.

    1984-01-01

    Five models based on multiple regression were developed to estimate wheat yields for the five wheat-growing provinces of Argentina. Meteorological data sets were obtained for each province by averaging data for stations within each province. Predictor variables for the models were derived from monthly total precipitation, average monthly mean temperature, and average monthly maximum temperature. Buenos Aires was the only province for which a trend variable was included, because of an increasing trend in yield due to technology from 1950 to 1963.

  18. Argentina corn yield model

    NASA Technical Reports Server (NTRS)

    Callis, S. L.; Sakamoto, C.

    1984-01-01

    A model based on multiple regression was developed to estimate corn yields for the country of Argentina. A meteorological data set was obtained for the country by averaging data for stations within the corn-growing area. Predictor variables for the model were derived from monthly total precipitation, average monthly mean temperature, and average monthly maximum temperature. A trend variable was included for the years 1965 to 1980 since an increasing trend in yields due to technology was observed between these years.

  19. Watershed Regressions for Pesticides (WARP) for Predicting Annual Maximum and Annual Maximum Moving-Average Concentrations of Atrazine in Streams

    USGS Publications Warehouse

    Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.

    2008-01-01

    Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. 
For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.
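    The concentration statistics these models predict can be made concrete with a short sketch that computes the annual maximum and the maximum 21-day moving-average concentration from a daily series. The daily values are hypothetical, chosen only to mimic a low baseline with a spring application pulse.

```python
# Illustrative sketch of the WARP target statistics: annual maximum and
# annual maximum n-day moving-average concentration (hypothetical ug/L data).
def max_moving_average(daily, window):
    """Largest mean over any consecutive `window`-day span."""
    best = None
    for i in range(len(daily) - window + 1):
        avg = sum(daily[i:i + window]) / window
        if best is None or avg > best:
            best = avg
    return best

# Hypothetical 365-day series: low baseline with a short spring pulse.
daily = [0.1] * 120 + [5.0, 12.0, 30.0, 18.0, 6.0, 2.0] + [0.2] * 239

annual_max = max(daily)
print("annual maximum:", annual_max)
print("21-day moving-average maximum:", round(max_moving_average(daily, 21), 3))
```

    The moving-average maximum is far below the instantaneous maximum, which is why separate regression models are needed for each averaging duration when comparing against duration-specific benchmarks.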

  20. On the connection between the Koppel-Young and the Nelkin Models for thermal neutron scattering in water molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markovic, M.I.

    1982-10-01

    A critical analysis of the Koppel-Young model is presented and compared with Nelkin's model, and their equivalence is asserted. It is shown that the only distinction between the two models is in the orientational averaging of the rotational-vibrational intermediate scattering function. Based on total cross sections, the Krieger-Nelkin orientation averaging has been confirmed to give excellent agreement with the Koppel-Young orientation averaging. However, significant quasi-periodic differences are observed when calculating differential cross sections. As a result of these insights, a new unified model is proposed for the microdynamics of water molecules.

  1. Locating helicopter emergency medical service bases to optimise population coverage versus average response time.

    PubMed

    Garner, Alan A; van den Berg, Pieter L

    2017-10-16

    New South Wales (NSW), Australia has a network of multirole retrieval physician staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction with population concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45 min response time threshold, or minimizing the overall average response time to all persons, both in green field scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model in which average response time was optimised subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min if optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average statewide response time by 4 min. The optimum seven-base hybrid model, which covered 97.75% of the population within 45 min and reached all of the population with an average response time of 18 min, included the rapid response HEMS model. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimizes response time for a given number of bases and a minimum defined threshold of population coverage. Addition of specialized rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
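    A greedy heuristic gives a flavour of the maximal covering location problem. Note that the study uses exact optimization models; the candidate sites, coverage sets, and town populations below are invented, and greedy selection is only an approximation.

```python
# Toy sketch: greedily choose base sites to maximise the population
# reachable within a response-time threshold (all data hypothetical).

# coverage[site] = set of towns reachable from that candidate base in time.
coverage = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {6, 7},
}
population = {1: 50, 2: 10, 3: 40, 4: 30, 5: 20, 6: 25, 7: 5}

def greedy_cover(n_bases):
    """Pick n_bases sites, each time taking the largest marginal gain."""
    chosen, covered = [], set()
    for _ in range(n_bases):
        site = max(coverage,
                   key=lambda s: sum(population[t] for t in coverage[s] - covered))
        chosen.append(site)
        covered |= coverage[site]
    return chosen, sum(population[t] for t in covered)

bases, pop_covered = greedy_cover(2)
print(bases, pop_covered, "of", sum(population.values()))
```

    With these numbers the heuristic picks sites A and C, covering 175 of 180 people; an exact maximal covering formulation solves the same trade-off optimally, and the hybrid model in the abstract adds a response-time objective on top of such a coverage constraint.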

  2. The role of heterogeneity in contact timing and duration in network models of influenza spread in schools

    PubMed Central

    Toth, Damon J. A.; Leecaster, Molly; Pettey, Warren B. P.; Gundlapalli, Adi V.; Gao, Hongjiang; Rainey, Jeanette J.; Uzicanin, Amra; Samore, Matthew H.

    2015-01-01

    Influenza poses a significant health threat to children, and schools may play a critical role in community outbreaks. Mathematical outbreak models require assumptions about contact rates and patterns among students, but the level of temporal granularity required to produce reliable results is unclear. We collected objective contact data from students aged 5–14 at an elementary school and middle school in the state of Utah, USA, and paired those data with a novel, data-based model of influenza transmission in schools. Our simulations produced within-school transmission averages consistent with published estimates. We compared simulated outbreaks over the full resolution dynamic network with simulations on networks with averaged representations of contact timing and duration. For both schools, averaging the timing of contacts over one or two school days caused average outbreak sizes to increase by 1–8%. Averaging both contact timing and pairwise contact durations caused average outbreak sizes to increase by 10% at the middle school and 72% at the elementary school. Averaging contact durations separately across within-class and between-class contacts reduced the increase for the elementary school to 5%. Thus, the effect of ignoring details about contact timing and duration in school contact networks on outbreak size modelling can vary across different schools. PMID:26063821

  3. An empirical investigation on different methods of economic growth rate forecast and its behavior from fifteen countries across five continents

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    Our empirical results show that GDP growth rate can be predicted more accurately in continents with fewer large economies, compared to smaller economies like Malaysia. This difficulty is very likely positively correlated with subsidy or social security policies. The stage of economic development and the level of competitiveness also appear to have interactive effects on forecast stability. These results are generally independent of the forecasting procedure. For countries with high stability in their economic growth, forecasting by model selection is better than model averaging. Overall, forecast weight averaging (FWA) is a better forecasting procedure in most countries. FWA also outperforms simple model averaging (SMA) and has the same forecasting ability as Bayesian model averaging (BMA) in almost all countries.

  4. The phenotypic equilibrium of cancer cells: From average-level stability to path-wise convergence.

    PubMed

    Niu, Yuanling; Wang, Yue; Zhou, Da

    2015-12-07

    The phenotypic equilibrium, i.e. a heterogeneous population of cancer cells tending to a fixed equilibrium of phenotypic proportions, has received much attention in cancer biology recently. In the previous literature, theoretical models were used to predict the experimental phenomena of the phenotypic equilibrium, which were often explained by different concepts of stability of the models. Here we present a stochastic multi-phenotype branching model by integrating the conventional cellular hierarchy with phenotypic plasticity mechanisms of cancer cells. Based on our model, it is shown that: (i) our model can serve as a framework to unify the previous models for the phenotypic equilibrium, and thereby harmonizes the different kinds of average-level stability proposed in these models; and (ii) the path-wise convergence of our model provides a deeper understanding of the phenotypic equilibrium from a stochastic point of view. That is, the emergence of the phenotypic equilibrium is rooted in the stochastic nature of (almost) every sample path; the average-level stability simply follows from it by averaging stochastic samples. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. A Comparison of Averaged and Full Models to Study the Third-Body Perturbation

    PubMed Central

    Solórzano, Carlos Renato Huaura; Prado, Antonio Fernando Bertachini de Almeida

    2013-01-01

    The effects of a third-body travelling in a circular orbit around a main body on a massless satellite that is orbiting the same main body are studied under two averaged models, single and double, where expansions of the disturbing function are made, and the full restricted circular three-body problem. The goal is to compare the behavior of these two averaged models against the full problem for long-term effects, in order to have some knowledge of their differences. The single averaged model eliminates the terms due to the short period of the spacecraft. The double average is taken over the mean motion of the satellite and the mean motion of the disturbing body, so removing both short period terms. As an example of the methods, an artificial satellite around the Earth perturbed by the Moon is used. A detailed study of the effects of different initial conditions in the orbit of the spacecraft is made. PMID:24319348

  6. A comparison of averaged and full models to study the third-body perturbation.

    PubMed

    Solórzano, Carlos Renato Huaura; Prado, Antonio Fernando Bertachini de Almeida

    2013-01-01

    The effects of a third-body travelling in a circular orbit around a main body on a massless satellite that is orbiting the same main body are studied under two averaged models, single and double, where expansions of the disturbing function are made, and the full restricted circular three-body problem. The goal is to compare the behavior of these two averaged models against the full problem for long-term effects, in order to have some knowledge of their differences. The single averaged model eliminates the terms due to the short period of the spacecraft. The double average is taken over the mean motion of the satellite and the mean motion of the disturbing body, so removing both short period terms. As an example of the methods, an artificial satellite around the Earth perturbed by the Moon is used. A detailed study of the effects of different initial conditions in the orbit of the spacecraft is made.

  7. Macroscopic neural mass model constructed from a current-based network model of spiking neurons.

    PubMed

    Umehara, Hiroaki; Okada, Masato; Teramae, Jun-Nosuke; Naruse, Yasushi

    2017-02-01

    Neural mass models (NMMs) are efficient frameworks for describing macroscopic cortical dynamics including electroencephalogram and magnetoencephalogram signals. Originally, these models were formulated on an empirical basis of synaptic dynamics with relatively long time constants. By clarifying the relations between NMMs and the dynamics of microscopic structures such as neurons and synapses, we can better understand cortical and neural mechanisms from a multi-scale perspective. In a previous study, NMMs were analytically derived by averaging the equations of synaptic dynamics over the neurons in the population and further averaging the equations of the membrane-potential dynamics. However, the averaging of synaptic current assumes that the neuron membrane potentials are nearly time invariant and that they remain at sub-threshold levels to retain the conductance-based model. This approximation limits the NMM to the non-firing state. In the present study, we propose a new derivation of an NMM by alternatively approximating the synaptic current, which is assumed to be independent of the membrane potential, thus adopting a current-based model. Our proposed model releases the constraint of a nearly constant membrane potential. We confirm that the obtained model reduces to the previous model in the non-firing situation and that it reproduces the temporal mean values and relative power spectral densities of the average membrane potentials for spiking neurons. This further ensures that the existing NMM properly models the dynamics averaged over individual neurons even if they are spiking in the populations.

  8. Evaluation of the accuracy of an offline seasonally-varying matrix transport model for simulating ideal age

    DOE PAGES

    Bardin, Ann; Primeau, Francois; Lindsay, Keith; ...

    2016-07-21

    Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational costs of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton's method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices, to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the Northern Indian Ocean. As a result, for many applications the relatively small bias obtained using the offline model makes the offline approach attractive because it uses significantly less computer resources and is simpler to set up and run.

  9. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) through an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA has no noticeable advantage in predictive ability over the general autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.

  10. Solar-QBO Interaction and Its Impact on Stratospheric Ozone in a Zonally Averaged Photochemical Transport Model of the Middle Atmosphere

    DTIC Science & Technology

    2007-08-28

    We investigate the solar cycle modulation of the quasi-biennial oscillation (QBO) in stratospheric zonal winds and its impact on stratospheric ozone with an updated version of the zonally averaged CHEM2D middle atmosphere model. We find that the duration of the westerly QBO phase at solar maximum is 3 months…

  11. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as from non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on one AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of the AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than any individual model. While NF was expected to be the best AI model owing to its use of both the TS-FL and ANN approaches, it was nearly discarded by the parsimony principle. The TS-FL and ANN models showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when a single AI model is used.
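    The BIC-based weighting described above can be sketched as follows. This is a minimal illustration, not the BAIMA implementation: the function name and toy inputs are hypothetical, the weights follow the usual Δ-BIC approximation to model evidence, and the total variance combines the within-model and between-model components mentioned in the abstract.

```python
import numpy as np

def bma_combine(means, variances, bic):
    """Combine point estimates from several models via BIC-weighted
    Bayesian model averaging (Delta-BIC approximation to evidence)."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    bic = np.asarray(bic, dtype=float)
    # Model weights: w_k proportional to exp(-0.5 * Delta-BIC_k).
    delta = bic - bic.min()
    w = np.exp(-0.5 * delta)
    w /= w.sum()
    # BMA mean is the weight-averaged estimate.
    mean = float(np.sum(w * means))
    # Total variance = within-model + between-model components.
    within = float(np.sum(w * variances))
    between = float(np.sum(w * (means - mean) ** 2))
    return mean, within + between, w
```

    With two equally supported models the weights are equal, and the between-model term grows with the disagreement of their estimates, which is exactly the variance component that a single-model analysis ignores.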

  12. Development of a modal emissions model using data from the Cooperative Industry/Government Exhaust Emission test program

    DOT National Transportation Integrated Search

    2003-06-22

    The Environmental Protection Agency's (EPA's) recommended model, MOBILE5a, has been used extensively to predict emission factors based on average speeds for each fleet type. Because average speeds are not appropriate in modeling intersections...

  13. Measured values of coal mine stopping resistance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oswald, N.; Prosser, B.; Ruckman, R.

    2008-12-15

    As coal mines become larger, the number of stoppings in the ventilation system increases. Each stopping represents a potential leakage path which must be adequately represented in the ventilation model. Stopping resistance can be calculated using two methods: the USBM method, used to determine the resistance of a single stopping, and the MVS technique, in which an average resistance is calculated for multiple stoppings. From MVS data collected during ventilation surveys of different subsurface coal mines, average resistances were determined for stoppings in poor, average, good, and excellent condition. Average stopping resistances were calculated for concrete-block and Kennedy stoppings. Using the average stopping resistance, measured and calculated with the MVS method, provides a ventilation modeling tool that can be used to construct more accurate and useful ventilation models. 3 refs., 3 figs.

  14. An averaging battery model for a lead-acid battery operating in an electric car

    NASA Technical Reports Server (NTRS)

    Bozek, J. M.

    1979-01-01

    A battery model is developed based on time averaging the current or power, and is shown to be an effective means of predicting the performance of a lead-acid battery. The effectiveness of this battery model was tested on battery discharge profiles expected during the operation of an electric vehicle following the various SAE J227a driving schedules. The averaging model predicts the performance of a battery that is periodically charged (regenerated) if the regeneration energy is assumed to be converted to retrievable electrochemical energy on a one-to-one basis.

  15. Forecasting coconut production in the Philippines with ARIMA model

    NASA Astrophysics Data System (ADS)

    Lim, Cristina Teresa

    2015-02-01

    The study aimed to depict the future situation of the coconut industry in the Philippines by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the country's major industrial crops, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data, and an appropriate Box-Jenkins autoregressive moving average model was fitted. Validity of the model was tested using standard statistical techniques. The fitted model was then used to forecast coconut production for the following eight years.
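    The fitting-and-forecasting step can be illustrated with a deliberately minimal ARIMA(1,1,0) sketch: first-difference the series, fit a single AR coefficient by least squares, and integrate the forecast differences back. A real analysis would use a full ARIMA library and the identification, estimation and diagnosis cycle described above; the function name and order here are illustrative only.

```python
import numpy as np

def arima_110_forecast(y, steps):
    """Minimal ARIMA(1,1,0) forecast: difference once, fit an AR(1)
    coefficient by ordinary least squares on the differences, then
    integrate the forecast differences back to the original scale."""
    y = np.asarray(y, dtype=float)
    d = np.diff(y)                    # the "I" in ARIMA: remove the trend
    # AR(1) coefficient from lagged differences (least squares).
    phi = np.dot(d[:-1], d[1:]) / np.dot(d[:-1], d[:-1])
    forecasts, last, d_last = [], y[-1], d[-1]
    for _ in range(steps):
        d_last = phi * d_last         # forecast the next difference
        last = last + d_last          # undo the differencing
        forecasts.append(last)
    return np.array(forecasts)
```

    On a series with a steady trend the fitted coefficient is close to one and the forecast simply extends the trend, which is the expected ARIMA(1,1,0) behaviour.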

  16. Parameter regionalisation methods for a semi-distributed rainfall-runoff model: application to a Northern Apennine region

    NASA Astrophysics Data System (ADS)

    Neri, Mattia; Toth, Elena

    2017-04-01

    The study presents the implementation of different regionalisation approaches for transferring model parameters from similar and/or neighbouring gauged basins to an ungauged catchment; in particular, it uses a semi-distributed, continuously simulating conceptual rainfall-runoff model to simulate daily streamflows. The case study refers to a set of Apennine catchments (in the Emilia-Romagna region, Italy) that, given their spatial proximity, are assumed to belong to the same hydrologically homogeneous region and are used, alternately, as donor and regionalised basins. The model is a semi-distributed version of the HBV model (TUWien model) in which the catchment is divided into elevation zones that contribute separately to the total outlet flow. The model includes a snow module, whose application in the Apennine area has so far been very limited, even though snow accumulation and melting phenomena play an important role in the study basins. Two methods, both widely applied in the recent literature, are used to regionalise the model: i) "parameters averaging", where each parameter is obtained as a weighted mean of the parameters calibrated on the donor catchments; ii) "output averaging", where the model is run over the ungauged basin using the entire parameter set of each donor basin and the simulated outputs are then averaged. In the first approach the parameters are regionalised independently of each other; in the second, the correlation among the parameters is maintained. Since the model is semi-distributed, with each elevation zone contributing separately, the study also tests a modified version of the "output averaging" approach in which each zone is treated as an autonomous entity whose parameters are transferred to the corresponding elevation zone of the ungauged sub-basin.
    The study also explores the choice of the weights used for averaging the parameters (in the "parameters averaging" approach) or the simulated streamflows (in the "output averaging" approach): in particular, weights are estimated as a function of the similarity/distance of the ungauged basin/zone to the donors, on the basis of a set of geo-morphological catchment descriptors. The predictive accuracy of the different regionalisation methods is finally assessed by jack-knife cross-validation against the observed daily runoff for all the study catchments.
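    The two regionalisation schemes can be contrasted in a few lines. Here `run_model` is a hypothetical nonlinear toy, not the HBV/TUWien model; the nonlinearity is what makes the two weighted averages diverge, which is the practical difference between the approaches.

```python
import numpy as np

def run_model(params, forcing):
    """Stand-in rainfall-runoff model (hypothetical, nonlinear toy)."""
    a, b = params
    return a * forcing ** b

def parameters_averaging(donor_params, weights, forcing):
    # Average each parameter across donors first, then run the model once.
    p = np.average(np.asarray(donor_params, dtype=float), axis=0, weights=weights)
    return run_model(p, forcing)

def output_averaging(donor_params, weights, forcing):
    # Run the model once per donor parameter set, then average the outputs;
    # this preserves the correlation structure within each parameter set.
    sims = [run_model(np.asarray(p, dtype=float), forcing) for p in donor_params]
    return np.average(np.asarray(sims), axis=0, weights=weights)
```

    For a linear model the two schemes coincide; with any nonlinearity (as in a real conceptual model) they give different regionalised streamflows, which is why the study compares them.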

  17. Study on characteristics of the aperture-averaging factor of atmospheric scintillation in terrestrial optical wireless communication

    NASA Astrophysics Data System (ADS)

    Shen, Hong; Liu, Wen-xing; Zhou, Xue-yun; Zhou, Li-ling; Yu, Long-Kun

    2018-02-01

    To thoroughly understand the characteristics of the aperture-averaging effect of atmospheric scintillation in terrestrial optical wireless communication, and to provide references for the engineering design and performance evaluation of optical systems employed in the atmosphere, we theoretically derived a general analytic expression for the aperture-averaging factor of atmospheric scintillation and numerically investigated its characteristics under different propagation conditions. The limitations of the commonly used approximate formula for the aperture-averaging factor are discussed; the results show that the current formula is not applicable to small receiving apertures on non-uniform turbulence links. Numerical calculation shows that the aperture-averaging factor follows an exponential decline model for small receiving apertures on non-uniform turbulence links, and the general expression of this model is given. The model offers guidance for evaluating the aperture-averaging effect in terrestrial optical wireless communication.

  18. Modeling the Zeeman effect in high altitude SSMIS channels for numerical weather prediction profiles: comparing a fast model and a line-by-line model

    NASA Astrophysics Data System (ADS)

    Larsson, R.; Milz, M.; Rayer, P.; Saunders, R.; Bell, W.; Booton, A.; Buehler, S. A.; Eriksson, P.; John, V.

    2015-10-01

    We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. For the same channel, the average difference between the fast model and the sensor measurement is 1.2 K, with a standard deviation of 1.4 K. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. For the same channel, the average difference between the fast model and the sensor measurement is 1.3 K, with a standard deviation of 2.4 K. We consider the relatively small model differences a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (below 0.2 K) and smaller standard deviations (below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to a full three-dimensional magnetic field profile, the standard deviation relative to the fast model increases to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to the limited altitude range of the numerical weather prediction profiles. 
    We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels, to better constrain upper atmospheric temperatures.

  19. Modeling the Zeeman effect in high-altitude SSMIS channels for numerical weather prediction profiles: comparing a fast model and a line-by-line model

    NASA Astrophysics Data System (ADS)

    Larsson, Richard; Milz, Mathias; Rayer, Peter; Saunders, Roger; Bell, William; Booton, Anna; Buehler, Stefan A.; Eriksson, Patrick; John, Viju O.

    2016-03-01

    We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. Concerning the same channel, there is 1.2 K on average between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. Regarding the same channel, there is 1.3 K on average between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to limited altitude range of the numerical weather prediction profiles. 
We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels, to better constrain upper atmospheric temperatures.

  20. Model averaging, optimal inference, and habit formation

    PubMed Central

    FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.

    2014-01-01

    Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724

  1. 49 CFR 537.7 - Pre-model year and mid-model year reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AUTOMOTIVE FUEL ECONOMY REPORTS § 537.7 Pre... manufacturer's light trucks for the current model year. (b) Projected average and required fuel economy. (1) State the projected average fuel economy for the manufacturer's automobiles determined in accordance...

  2. 49 CFR 537.7 - Pre-model year and mid-model year reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AUTOMOTIVE FUEL ECONOMY REPORTS § 537.7 Pre... manufacturer's light trucks for the current model year. (b) Projected average and required fuel economy. (1) State the projected average fuel economy for the manufacturer's automobiles determined in accordance...

  3. 49 CFR 537.7 - Pre-model year and mid-model year reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AUTOMOTIVE FUEL ECONOMY REPORTS § 537.7 Pre... manufacturer's light trucks for the current model year. (b) Projected average and required fuel economy. (1) State the projected average fuel economy for the manufacturer's automobiles determined in accordance...

  4. 49 CFR 537.7 - Pre-model year and mid-model year reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... List the model types in order of increasing average inertia weight from top to bottom down the left... form. List the model types in order of increasing average inertia weight from top to bottom down the... trucks in your fleet that meet the mild and strong hybrid vehicle definitions. For each mild and strong...

  5. 49 CFR 537.7 - Pre-model year and mid-model year reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... List the model types in order of increasing average inertia weight from top to bottom down the left... form. List the model types in order of increasing average inertia weight from top to bottom down the... trucks in your fleet that meet the mild and strong hybrid vehicle definitions. For each mild and strong...

  6. Climate modeling for Yamal territory using supercomputer atmospheric circulation model ECHAM5-wiso

    NASA Astrophysics Data System (ADS)

    Denisova, N. Y.; Gribanov, K. G.; Werner, M.; Zakharov, V. I.

    2015-11-01

    The subject of this study is the dependence of monthly mean, regionally averaged model atmospheric parameters on how far in the past the initial and boundary conditions are set. We used the atmospheric general circulation model ECHAM5-wiso to simulate monthly means of regionally averaged climate parameters for the Yamal region with different pre-modeling periods, varying the time interval from several months to 12 years. We present the dependence of modeled monthly means of regionally averaged surface temperature, 2 m air temperature and humidity for December 2000 on the duration of pre-modeling. Comparison of these results with reanalysis data showed that the best agreement with the true parameters is reached when the pre-modeling duration is approximately 10 years.

  7. The Crossover Time as an Evaluation of Ocean Models Against Persistence

    NASA Astrophysics Data System (ADS)

    Phillipson, L. M.; Toumi, R.

    2018-01-01

    A new ocean evaluation metric, the crossover time, is defined as the time it takes for a numerical model to equal the performance of persistence. As an example, the average crossover time calculated using the Lagrangian separation distance (the distance between simulated trajectories and observed drifters) for the global MERCATOR ocean model analysis is found to be about 6 days. Conversely, the model forecast has an average crossover time longer than 6 days, suggesting limited skill in Lagrangian predictability by the current generation of global ocean models. The crossover time of the velocity error is less than 3 days, which is similar to the average decorrelation time of the observed drifters. The crossover time is a useful measure to quantify future ocean model improvements.
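    A minimal sketch of the metric follows; it assumes error time series (e.g. Lagrangian separation distance) for the model and for persistence at matched lead times, with smaller error meaning better skill. The function name and the no-crossover convention (returning NaN) are assumptions for illustration.

```python
import math
import numpy as np

def crossover_time(times, model_error, persistence_error):
    """Return the first lead time at which the model's error no longer
    exceeds the persistence forecast's error; NaN if it never does."""
    better = np.asarray(model_error) <= np.asarray(persistence_error)
    idx = int(np.argmax(better))      # index of first True (0 if none)
    if not better[idx]:
        return math.nan               # model never catches persistence
    return times[idx]
```

    A longer crossover time is better: it means the model out-forecasts simple persistence sooner relative to its own degradation, matching the ~6-day figure quoted for the MERCATOR analysis.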

  8. Water quality characterization and mathematical modeling of dissolved oxygen in the East and West Ponds, Jamaica Bay Wildlife Refuge.

    PubMed

    Maillacheruvu, Krishnanand; Roy, D; Tanacredi, J

    2003-09-01

    The current study was undertaken to characterize the East and West Ponds and develop a mathematical model of the effects of nutrient and BOD loading on dissolved oxygen (DO) concentrations in these ponds. The model predicted that both ponds will recover adequately given the average expected range of nutrient and BOD loading due to waste from surface runoff and migratory birds. The predicted dissolved oxygen levels in both ponds were greater than 5.0 mg/L, and were supported by DO levels in the field, which were typically above 5.0 mg/L during the period of this study. The model predicted a steady-state NBOD concentration of 12.0-14.0 mg/L in the East Pond, compared to an average measured value of 3.73 mg/L in 1994 and an average measured value of 12.51 mg/L in a 1996-97 study. The model predicted that the NBOD concentration in the West Pond would be under 3.0 mg/L, compared to average measured values of 7.50 mg/L in 1997 and 8.51 mg/L in 1994. The model predicted that the phosphorus (as PO4(3-)) concentration in the East Pond will approach 4.2 mg/L in 4 months, compared to a measured average value of 2.01 mg/L in a 1994 study. The model predicted that the phosphorus concentration in the West Pond will approach 1.00 mg/L, compared to a measured average phosphorus (as PO4(3-)) concentration of 1.57 mg/L in a 1994 study.

  9. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has superior performance: faster convergence and an enhanced signal-to-noise ratio.
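    The "good value" selection step might be sketched as below. The abstract does not give the exact spread criterion, so the fixed cutoff and the fallback to a plain spatial mean are assumptions; the function name is hypothetical.

```python
import numpy as np

def adaptive_spatial_average(post_params, spread, spread_cut):
    """ASA-style reduction (sketch): keep grid points whose ensemble
    spread is below a cutoff (the 'good' estimates), then average the
    surviving spatially varying posterior values into one global,
    spatially uniform posterior parameter."""
    post_params = np.asarray(post_params, dtype=float)
    spread = np.asarray(spread, dtype=float)
    good = spread < spread_cut
    if not good.any():
        # Fallback assumption: degrade to the plain spatial average.
        return float(post_params.mean())
    return float(post_params[good].mean())
```

    Screening out high-spread points before averaging is what raises the signal-to-noise ratio relative to a plain spatial mean, since poorly constrained grid points no longer dilute the global estimate.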

  10. Modeling particle number concentrations along Interstate 10 in El Paso, Texas

    PubMed Central

    Olvera, Hector A.; Jimenez, Omar; Provencio-Vasquez, Elias

    2014-01-01

    Annual average daily particle number concentrations around a highway were estimated with an atmospheric dispersion model and a land use regression model. The dispersion model was used to estimate particle concentrations along Interstate 10 at 98 locations within El Paso, Texas. This model employed annual averaged wind speed and annual average daily traffic counts as inputs. A land use regression model with vehicle kilometers traveled as the predictor variable was used to estimate local background concentrations away from the highway to adjust the near-highway concentration estimates. Estimated particle number concentrations ranged between 9.8 × 103 particles/cc and 1.3 × 105 particles/cc, and averaged 2.5 × 104 particles/cc (SE 421.0). Estimates were compared against values measured at seven sites located along I-10 throughout the region. The average fractional error was 6% and ranged between -1% and -13% across sites. The largest bias of -13% was observed at a semi-rural site where traffic was lowest. The average bias among urban sites was 5%. The accuracy of the estimates depended primarily on the emission factor and the adjustment to local background conditions. An emission factor of 1.63 × 1014 particles/veh-km was based on a value proposed in the literature and adjusted with local measurements. The integration of the two modeling techniques ensured that the particle number concentration estimates captured the impact of traffic along both the highway and arterial roadways. The performance and economy of the two modeling techniques used in this study show that producing particle concentration surfaces along major roadways would be feasible in urban regions where traffic and meteorological data are readily available. PMID:25313294

  11. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.

  12. A model for closing the inviscid form of the average-passage equation system

    NASA Technical Reports Server (NTRS)

    Adamczyk, J. J.; Mulac, R. A.; Celestina, M. L.

    1985-01-01

    A mathematical model is proposed for closing, or mathematically completing, the system of equations which describes the time-averaged flow field through the blade passages of multistage turbomachinery. These equations, referred to as the average-passage equation system, govern a conceptual model which has proven useful in turbomachinery aerodynamic design and analysis. The closure model is developed so as to ensure consistency between these equations and the axisymmetric through-flow equations. The closure model was incorporated into a computer code for use in simulating the flow field about a high-speed counter-rotating propeller and a high-speed fan stage. Results from these simulations are presented.

  13. The Case of Effort Variables in Student Performance.

    ERIC Educational Resources Information Center

    Borg, Mary O.; And Others

    1989-01-01

    Tests the existence of a structural shift between above- and below-average students in the econometric models that explain students' grades in principles of economics classes. Identifies a structural shift and estimates separate models for above- and below-average students. Concludes that separate models as well as educational policies are…

  14. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  15. Estimation of open water evaporation using land-based meteorological data

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Zhao, Yong

    2017-10-01

    Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the model simulation's precision, a modified Dalton model considering relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.
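    A Dalton-type mass-transfer calculation can be sketched as follows. This is not the paper's modified model: the Magnus constants are standard, but the wind-function coefficients `a` and `b` are illustrative placeholders (site-specific in practice), and air temperature is taken equal to water temperature for brevity.

```python
import math

def saturation_vapour_pressure(t_c):
    """Magnus approximation for saturation vapour pressure (hPa)
    at temperature t_c in degrees Celsius."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def dalton_evaporation(t_water, rh, wind, a=0.13, b=0.094):
    """Dalton-type open-water evaporation (mm/day, order of magnitude):
    evaporation = wind function * vapour pressure deficit.
    Coefficients a, b are illustrative, not calibrated values."""
    e_s = saturation_vapour_pressure(t_water)   # at the water surface
    e_a = rh / 100.0 * e_s                      # air assumed at water temp.
    return (a + b * wind) * (e_s - e_a)
```

    The vapour pressure deficit term makes the role of relative humidity explicit: at 100% humidity the deficit vanishes and evaporation stops, which is the dependence the modified Dalton model exploits.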

  16. Forecast of Frost Days Based on Monthly Temperatures

    NASA Astrophysics Data System (ADS)

    Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.

    2009-04-01

    Although frost can cause considerable crop damage, and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain), based on the successive application of two models. The first is a stochastic autoregressive integrated moving average (ARIMA) model that forecasts monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature within a month. Three ARIMA models were identified for the time series analyzed, with a seasonal period of one year. They share the same seasonal behavior (a differenced moving average model) and differ in the non-seasonal part: an autoregressive model (Model 1), a differenced moving average model (Model 2), and a mixed autoregressive moving average model (Model 3). The results also point out that minimum daily temperature (tdmin), for the meteorological stations studied, follows a normal distribution each month with a very similar standard deviation across years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
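    The second model's step, converting a forecast monthly mean and a stable standard deviation into an expected frost-day count, amounts to evaluating the normal CDF at 0 °C. A sketch under the normality assumption reported above (the function name and default month length are illustrative):

```python
import math

def expected_frost_days(mu, sigma, days_in_month=30):
    """Expected number of frost days in a month when daily minimum
    temperature is N(mu, sigma^2), in degrees Celsius:
    days * P(T_min < 0), with the normal CDF written via erf."""
    z = (0.0 - mu) / sigma
    p_frost = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return days_in_month * p_frost
```

    For example, a month whose minimum temperatures average exactly 0 °C yields an expected 15 frost days out of 30, and the count falls off rapidly as the forecast mean rises above zero.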

  17. Validation of a spatial model used to locate fish spawning reef construction sites in the St. Clair–Detroit River system

    USGS Publications Warehouse

    Fischer, Jason L.; Bennion, David; Roseman, Edward F.; Manny, Bruce A.

    2015-01-01

    Lake sturgeon (Acipenser fulvescens) populations have suffered precipitous declines in the St. Clair–Detroit River system, following the removal of gravel spawning substrates and overfishing in the late 1800s to mid-1900s. To assist the remediation of lake sturgeon spawning habitat, three hydrodynamic models were integrated into a spatial model to identify areas in two large rivers, where water velocities were appropriate for the restoration of lake sturgeon spawning habitat. Here we use water velocity data collected with an acoustic Doppler current profiler (ADCP) to assess the ability of the spatial model and its sub-models to correctly identify areas where water velocities were deemed suitable for restoration of fish spawning habitat. ArcMap 10.1 was used to create raster grids of water velocity data from model estimates and ADCP measurements which were compared to determine the percentage of cells similarly classified as unsuitable, suitable, or ideal for fish spawning habitat remediation. The spatial model categorized 65% of the raster cells the same as depth-averaged water velocity measurements from the ADCP and 72% of the raster cells the same as surface water velocity measurements from the ADCP. Sub-models focused on depth-averaged velocities categorized the greatest percentage of cells similar to ADCP measurements where 74% and 76% of cells were the same as depth-averaged water velocity measurements. Our results indicate that integrating depth-averaged and surface water velocity hydrodynamic models may have biased the spatial model and overestimated suitable spawning habitat. A model solely integrating depth-averaged velocity models could improve identification of areas suitable for restoration of fish spawning habitat.

  18. Using Discrete Event Simulation to predict KPI's at a Projected Emergency Room.

    PubMed

    Concha, Pablo; Neriz, Liliana; Parada, Danilo; Ramis, Francisco

    2015-01-01

    Discrete Event Simulation (DES) is a powerful tool in the design of clinical facilities. DES enables facilities to be built or adapted to achieve the expected Key Performance Indicators (KPIs), such as average waiting times by acuity, average stay times, and others. Our computational model was built and validated using expert judgment and supporting statistical data. One scenario studied resulted in a 50% decrease in the average cycle time of patients compared to the original model, achieved mainly by modifying the patient attention model.
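
    The DES mechanism behind KPIs like average cycle time can be hedged as a minimal sketch: a single-server FIFO queue with exponential interarrival and service times, reporting the average time from arrival to departure. This is a toy illustration, not the authors' validated emergency-room model; all parameter values are made up.

    ```python
    import random

    def average_cycle_time(n_patients: int, mean_interarrival: float,
                           mean_service: float, seed: int = 1) -> float:
        """Minimal single-server FIFO discrete-event simulation.
        Returns the average arrival-to-departure (cycle) time."""
        rng = random.Random(seed)
        t = 0.0
        arrivals = []
        for _ in range(n_patients):
            t += rng.expovariate(1.0 / mean_interarrival)
            arrivals.append(t)
        server_free = 0.0
        total_cycle = 0.0
        for arr in arrivals:
            start = max(arr, server_free)      # wait if the server is busy
            service = rng.expovariate(1.0 / mean_service)
            server_free = start + service      # departure time
            total_cycle += server_free - arr
        return total_cycle / n_patients

    avg = average_cycle_time(1000, mean_interarrival=10.0, mean_service=6.0)
    ```

    Scenario analysis then amounts to rerunning with changed parameters (e.g. a faster service process) and comparing the resulting KPI.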

  19. The effect of the behavior of an average consumer on the public debt dynamics

    NASA Astrophysics Data System (ADS)

    De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele

    2017-09-01

    An important issue within the present economic crisis is understanding the dynamics of the public debt of a given country, and how the behavior of average consumers and tax payers in that country affects it. Starting from a model of the average consumer behavior introduced earlier by the authors, we propose a simple model to quantitatively address this issue. The model is then studied and analytically solved under some reasonable simplifying assumptions. In this way we obtain a condition under which the public debt steadily decreases.

  20. Understanding the past to interpret the future: Comparison of simulated groundwater recharge in the upper Colorado River basin (USA) using observed and general-circulation-model historical climate data

    USGS Publications Warehouse

    Tillman, Fred D.; Gangopadhyay, Subhrendu; Pruitt, Tom

    2017-01-01

    In evaluating potential impacts of climate change on water resources, water managers seek to understand how future conditions may differ from the recent past. Studies of climate impacts on groundwater recharge often compare simulated recharge from future and historical time periods on an average monthly or overall average annual basis, or compare average recharge from future decades to that from a single recent decade. Baseline historical recharge estimates, which are compared with future conditions, are often from simulations using observed historical climate data. Comparison of average monthly results, average annual results, or even averaging over selected historical decades, may mask the true variability in historical results and lead to misinterpretation of future conditions. Comparison of future recharge results simulated using general circulation model (GCM) climate data to recharge results simulated using actual historical climate data may also result in an incomplete understanding of the likelihood of future changes. In this study, groundwater recharge is estimated in the upper Colorado River basin, USA, using a distributed-parameter soil-water balance groundwater recharge model for the period 1951–2010. Recharge simulations are performed using precipitation, maximum temperature, and minimum temperature data from observed climate data and from 97 CMIP5 (Coupled Model Intercomparison Project, phase 5) projections. Results indicate that average monthly and average annual simulated recharge are similar using observed and GCM climate data. However, 10-year moving-average recharge results show substantial differences between observed and simulated climate data, particularly during period 1970–2000, with much greater variability seen for results using observed climate data.
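
    The masking effect of averaging that the study warns about is easy to demonstrate: a trailing moving average compresses the range of a variable series. A minimal sketch with hypothetical annual recharge values:

    ```python
    def moving_average(series, window=10):
        """Trailing moving average; entry i averages series[i-window+1 .. i]."""
        out = []
        for i in range(window - 1, len(series)):
            out.append(sum(series[i - window + 1 : i + 1]) / window)
        return out

    # Hypothetical annual recharge values with large year-to-year swings:
    annual = [5, 9, 2, 8, 4, 10, 3, 7, 6, 6, 1, 11]
    smooth = moving_average(annual, window=10)
    # The smoothed series spans a much narrower range than the raw series,
    # so comparisons made only on averaged values understate variability.
    ```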

  1. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    PubMed

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best one. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, and we compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study comparing and contrasting the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
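
    As a generic illustration of model averaging (not the double BOOT algorithm itself, which resamples the data), AIC-based Akaike weights combine candidate-model effect estimates instead of keeping only the single best-AIC model's estimate. All names and numbers below are hypothetical:

    ```python
    import math

    def akaike_weights(aics):
        """Akaike weights: w_i proportional to exp(-(AIC_i - AIC_min)/2)."""
        a_min = min(aics)
        raw = [math.exp(-(a - a_min) / 2.0) for a in aics]
        s = sum(raw)
        return [r / s for r in raw]

    def model_averaged_estimate(estimates, aics):
        """Weight candidate-model effect estimates by Akaike weight, so no
        single model's estimate is taken as the whole answer."""
        w = akaike_weights(aics)
        return sum(wi * est for wi, est in zip(w, estimates))

    # Hypothetical PM-mortality effect estimates from three candidate models:
    est = model_averaged_estimate([0.8, 1.1, 0.9], [100.0, 101.2, 104.5])
    ```

    The averaged estimate always lies inside the span of the member estimates, while near-tied models (small AIC differences) share the weight rather than being discarded.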

  2. 40 CFR 60.3042 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...

  3. 40 CFR 60.3042 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...

  4. 40 CFR 60.3042 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...

  5. 40 CFR 60.3042 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...

  6. 40 CFR 60.3042 - How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...

  7. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
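
    A hedged sketch of one weighted-average combination (inverse-MSE weights; the DMIP study's exact WAM weighting may differ) illustrates why a skill-weighted combination can beat a poor member model, while SMA would simply use equal weights:

    ```python
    def mse(pred, obs):
        """Mean squared error of a prediction series against observations."""
        return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

    def weighted_multi_model(preds, obs):
        """Combine member-model series with inverse-MSE weights (one simple
        skill-based choice); returns the combined series and the weights."""
        inv = [1.0 / mse(p, obs) for p in preds]
        s = sum(inv)
        weights = [v / s for v in inv]
        combined = [sum(w * p[i] for w, p in zip(weights, preds))
                    for i in range(len(obs))]
        return combined, weights

    # Hypothetical hydrographs: one lightly biased model, one heavily biased.
    obs = [1.0, 2.0, 3.0, 4.0]
    model_a = [1.1, 2.1, 3.1, 4.1]
    model_b = [0.0, 1.0, 2.0, 3.0]
    combined, weights = weighted_multi_model([model_a, model_b], obs)
    ```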

  8. Testing averaged cosmology with type Ia supernovae and BAO data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  9. [Influence of trabecular microstructure modeling on finite element analysis of dental implant].

    PubMed

    Shen, M J; Wang, G G; Zhu, X H; Ding, X

    2016-09-01

    To analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface, using a three-dimensional finite element mandible model with trabecular structure. Dental implants were embedded in the mandibles of a beagle dog. Three months after implant installation, the mandibles with dental implants were harvested and scanned by micro-CT and cone-beam CT. Two three-dimensional finite element mandible models were built: one with trabecular microstructure (precise model) and one with macrostructure only (simplified model). The stress and strain values at the implant-bone interface were calculated using the Ansys 14.0 software. Compared with the simplified model, the precise models' average implant-bone interface stress values increased markedly, while the maximum values did not change greatly. The maximum equivalent stress values of the precise models were 80% and 110% of the simplified model's, and the average values were 170% and 290%. The maximum and average equivalent strain values of the precise models decreased markedly: the maximum values were 17% and 26% of the simplified model's, and the average values were 21% and 16%, respectively. Stress and strain concentrations at the implant-bone interface were pronounced in the simplified model, whereas the stress and strain distributions were uniform in the precise model. Trabecular microstructure modeling therefore has a significant effect on the computed distribution of stress and strain at the implant-bone interface.

  10. The Stagger-grid: A grid of 3D stellar atmosphere models. II. Horizontal and temporal averaging and spectral line formation

    NASA Astrophysics Data System (ADS)

    Magic, Z.; Collet, R.; Hayek, W.; Asplund, M.

    2013-12-01

    Aims: We study the implications of averaging methods with different reference depth scales for 3D hydrodynamical model atmospheres computed with the Stagger-code. The temporally and spatially averaged (hereafter denoted as ⟨3D⟩) models are explored in the light of local thermodynamic equilibrium (LTE) spectral line formation by comparing spectrum calculations using full 3D atmosphere structures with those from ⟨3D⟩ averages. Methods: We explored methods for computing mean ⟨3D⟩ stratifications from the Stagger-grid time-dependent 3D radiative hydrodynamical atmosphere models by considering four different reference depth scales (geometrical depth, column-mass density, and two optical depth scales). Furthermore, we investigated the influence of alternative averages (logarithmic, enforced hydrostatic equilibrium, flux-weighted temperatures). For the line formation we computed curves of growth for Fe i and Fe ii lines in LTE. Results: The resulting ⟨3D⟩ stratifications for the four reference depth scales can be very different. We typically find that in the upper atmosphere and in the superadiabatic region just below the optical surface, where the temperature and density fluctuations are highest, the differences become considerable and increase for higher Teff, lower log g, and lower [Fe / H]. The differential comparison of spectral line formation shows distinctive differences depending on which ⟨3D⟩ model is applied. The averages over layers of constant column-mass density yield the best mean ⟨3D⟩ representation of the full 3D models for LTE line formation, while the averages on layers at constant geometrical height are the least appropriate. Unexpectedly, the usually preferred averages over layers of constant optical depth are prone to increasing interference by reversed granulation towards higher effective temperature, in particular at low metallicity. 
Appendix A is available in electronic form at http://www.aanda.org. Mean ⟨3D⟩ models are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/560/A8 as well as at http://www.stagger-stars.net

  11. Effects of whole spine alignment patterns on neck responses in rear end impact.

    PubMed

    Sato, Fusako; Odani, Mamiko; Miyazaki, Yusuke; Yamazaki, Kunio; Östh, Jonas; Svensson, Mats

    2017-02-17

    The aim of this study was to investigate whole spine alignment in automotive seated postures for both genders and the effects of the spinal alignment patterns on cervical vertebral motion in rear impact using a human finite element (FE) model. Image data for 8 female and 7 male subjects in a seated posture, acquired by an upright open magnetic resonance imaging (MRI) system, were utilized. Spinal alignment was determined from the centers of the vertebrae, and average spinal alignment patterns for both genders were estimated by multidimensional scaling (MDS). An occupant FE model of female average size (162 cm, 62 kg; the AF 50 size model) was developed by scaling THUMS AF 05. The average spinal alignment pattern for females was implemented in the model, and the model was validated against female volunteer sled test data from rear end impacts. Thereafter, the average spinal alignment pattern for males and representative spinal alignments for all subjects were implemented in the validated female model, and additional FE simulations of the sled test were conducted to investigate the effects of spinal alignment patterns on cervical vertebral motion. The estimated average spinal alignment pattern was a slightly kyphotic, almost straight cervical spine with a less kyphotic thoracic spine for the females, and a lordotic cervical spine with a more pronounced kyphotic thoracic spine for the males. The AF 50 size model with the female average spinal alignment exhibited spine straightening from the upper thoracic vertebra level and showed larger intervertebral angular displacements in the cervical spine than the one with the male average spinal alignment. The cervical spine alignment is continuous with the thoracic spine, and a trend in the relationship between cervical and thoracic spinal alignment was shown in this study. 
Simulation results suggested that variations in thoracic spinal alignment had a potential impact on cervical spine motion as well as cervical spinal alignment in rear end impact condition.

  12. Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Abdol-Hamid, Khaled S.

    2005-01-01

    Partially-averaged Navier Stokes (PANS) is a suite of turbulence closure models of various modeled-to-resolved scale ratios ranging from Reynolds-averaged Navier Stokes (RANS) to Navier-Stokes (direct numerical simulations). The objective of PANS, like hybrid models, is to resolve large scale structures at reasonable computational expense. The modeled-to-resolved scale ratio or the level of physical resolution in PANS is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f(sub k)) and dissipation (f(sub epsilon)). The unresolved-scale stress is modeled with the Boussinesq approximation and modeled transport equations are solved for the unresolved kinetic energy and dissipation. In this paper, we first present a brief discussion of the PANS philosophy followed by a description of the implementation procedure and finally perform preliminary evaluation in benchmark problems.

  13. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.

  14. Redshift drift in an inhomogeneous universe: averaging and the backreaction conjecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koksbang, S.M.; Hannestad, S., E-mail: koksbang@phys.au.dk, E-mail: sth@phys.au.dk

    2016-01-01

    An expression for the average redshift drift in a statistically homogeneous and isotropic dust universe is given. The expression takes the same form as the expression for the redshift drift in FLRW models. It is used for a proof-of-principle study of the effects of backreaction on redshift drift measurements by combining the expression with two-region models. The study shows that backreaction can lead to positive redshift drift at low redshifts, exemplifying that a positive redshift drift at low redshifts does not require dark energy. Moreover, the study illustrates that models without a dark energy component can have an average redshift drift observationally indistinguishable from that of the standard model according to the currently expected precision of ELT measurements. In an appendix, spherically symmetric solutions to Einstein's equations with inhomogeneous dark energy and matter are used to study deviations from the average redshift drift and effects of local voids.

  15. Expected indoor 222Rn levels in counties with very high and very low lung cancer rates.

    PubMed

    Cohen, B L

    1989-12-01

    Counties in the U.S. with high lung cancer rates should have higher average 222Rn levels than counties with low lung cancer rates, assuming the average 222Rn level in a county is not correlated with other factors that cause lung cancer. The magnitude of this effect was calculated, using the absolute risk model, the relative risk model, and an intermediate model, for females who died in 1950-1969. The results were similar for all three models. We concluded that, ignoring migration, the average Rn level in the highest lung cancer counties should be about three times higher than in the lowest lung cancer counties according to the theory. Preliminary data are presented indicating that the situation is quite the opposite: The average Rn level in the highest lung cancer counties was only about one-half that in the lowest lung cancer counties.

  16. Accuracy of three-dimensional dental resin models created by fused deposition modeling, stereolithography, and Polyjet prototype technologies: A comparative study.

    PubMed

    Rebong, Raymund E; Stewart, Kelton T; Utreja, Achint; Ghoneima, Ahmed A

    2018-05-01

    The aim of this study was to assess the dimensional accuracy of fused deposition modeling (FDM)-, Polyjet-, and stereolithography (SLA)-produced models by comparing them to traditional plaster casts. A total of 12 maxillary and mandibular posttreatment orthodontic plaster casts were selected from the archives of the Orthodontic Department at the Indiana University School of Dentistry. Plaster models were scanned, saved as stereolithography files, and printed as physical models using three different three-dimensional (3D) printers: Makerbot Replicator (FDM), 3D Systems SLA 6000 (SLA), and Objet Eden500V (Polyjet). A digital caliper was used to obtain measurements on the original plaster models as well as on the printed resin models. Comparison between the 3D printed models and the plaster casts showed no statistically significant differences in most of the parameters. However, FDM was significantly higher on average than were plaster casts in maxillary left mixed plane (MxL-MP) and mandibular intermolar width (Md-IMW). Polyjet was significantly higher on average than were plaster casts in maxillary intercanine width (Mx-ICW), mandibular intercanine width (Md-ICW), and mandibular left mixed plane (MdL-MP). Polyjet was significantly lower on average than were plaster casts in maxillary right vertical plane (MxR-vertical), maxillary left vertical plane (MxL-vertical), mandibular right anteroposterior plane (MdR-AP), mandibular right vertical plane (MdR-vertical), and mandibular left vertical plane (MdL-vertical). SLA was significantly higher on average than were plaster casts in MxL-MP, Md-ICW, and overbite. SLA was significantly lower on average than were plaster casts in MdR-vertical and MdL-vertical. Dental models reconstructed by FDM technology had the fewest dimensional measurement differences compared to plaster models.

  17. Assessing the Resolution Adaptability of the Zhang-McFarlane Cumulus Parameterization With Spatial and Temporal Averaging: RESOLUTION ADAPTABILITY OF ZM SCHEME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Yuxing; Fan, Jiwen; Xiao, Heng

    Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic since it requires the cumulus scheme to adapt to higher resolutions than it was originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.

  18. 77 FR 17219 - Patient Protection and Affordable Care Act; Standards Related to Reinsurance, Risk Corridors and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-23

    ... parts of the risk adjustment process--the risk adjustment model, the calculation of plan average... risk adjustment process. The risk adjustment model calculates individual risk scores. The calculation...'' to mean all data that are used in a risk adjustment model, the calculation of plan average actuarial...

  19. National Snow Analyses - NOHRSC - The ultimate source for snow information

    Science.gov Websites

    Thumbnail images of modeled Snow Water Equivalent, Snow Depth, and Average Snowpack Temperature, each animatable over the season, the last two weeks, or one day.

  20. Recommendations for the U.S. Coast Guard Survival Prediction Tool

    DTIC Science & Technology

    2009-04-01

    model. Not enough data to support modeling of how alcohol impairs swimming ability. Experimental evidence shows no significant cooling effect 50...equation. When matched for physical attributes, females cool more quickly than males due to lower metabolic response and greater surface-area-to-mass... However, the average female has about 10% more body fat than the average male so, on average, males cool faster than females. (Tipton

  1. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive... credits obtained through trading. (b) Beginning in model year 2004, credits used to demonstrate a zero or...

  2. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive... credits obtained through trading. (b) Beginning in model year 2004, credits used to demonstrate a zero or...

  3. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive... credits obtained through trading. (b) Beginning in model year 2004, credits used to demonstrate a zero or...

  4. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive... credits obtained through trading. (b) Beginning in model year 2004, credits used to demonstrate a zero or...

  5. 40 CFR 91.1304 - Averaging.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Averaging. (a) A manufacturer may use averaging across engine families to demonstrate a zero or positive... credits obtained through trading. (b) Beginning in model year 2004, credits used to demonstrate a zero or...

  6. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
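
    The BMA variance segregation that HBMA arranges into a tree can be sketched directly at a single level: the averaged prediction is the posterior-probability-weighted mean of the candidate models, and the total variance splits into a within-model and a between-model term. The numbers below are hypothetical:

    ```python
    def bma_moments(probs, means, variances):
        """Bayesian model averaging over candidate models: returns the
        weighted mean, within-model variance, between-model variance,
        and total variance (within + between)."""
        mean = sum(p * m for p, m in zip(probs, means))
        within = sum(p * v for p, v in zip(probs, variances))
        between = sum(p * (m - mean) ** 2 for p, m in zip(probs, means))
        return mean, within, between, within + between

    # Three candidate propositions for one uncertain model component:
    mean, within, between, total = bma_moments(
        [0.5, 0.3, 0.2], [10.0, 12.0, 9.0], [1.0, 2.0, 1.5])
    ```

    The between-model term vanishes when all candidates agree, which is exactly what makes it a useful measure of how much a given uncertain component contributes to overall model uncertainty.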

  7. Occupational exposure assessment of magnetic fields generated by induction heating equipment-the role of spatial averaging.

    PubMed

    Kos, Bor; Valič, Blaž; Kotnik, Tadej; Gajšek, Peter

    2012-10-07

    Induction heating equipment is a source of strong and nonhomogeneous magnetic fields, which can exceed occupational reference levels. We investigated a case of an induction tempering tunnel furnace. Measurements of the emitted magnetic flux density (B) were performed during its operation and used to validate a numerical model of the furnace. This model was used to compute the values of B and the induced in situ electric field (E) for 15 different body positions relative to the source. For each body position, the computed B values were used to determine their maximum and average values, using six spatial averaging schemes (9-285 averaging points) and two averaging algorithms (arithmetic mean and quadratic mean). Maximum and average B values were compared to the ICNIRP reference level, and E values to the ICNIRP basic restriction. Our results show that in nonhomogeneous fields, the maximum B is an overly conservative predictor of overexposure, as it yields many false positives. The average B yielded fewer false positives, but as the number of averaging points increased, false negatives emerged. The most reliable averaging schemes were obtained for averaging over the torso with quadratic averaging, with no false negatives even for the maximum number of averaging points investigated.
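The two averaging algorithms compared in the study are easy to state concretely. A minimal sketch, with made-up B samples rather than the furnace measurements: the quadratic mean (RMS) weights strong local peaks more heavily than the arithmetic mean, which is consistent with it producing fewer false negatives as the number of averaging points grew.

```python
import math

def arithmetic_mean(values):
    return sum(values) / len(values)

def quadratic_mean(values):
    # Root-mean-square: emphasizes localized field peaks.
    return math.sqrt(sum(v * v for v in values) / len(values))

# Hypothetical B samples (mT) at grid points over the torso:
b_samples = [0.1, 0.2, 0.9, 0.15]
print(arithmetic_mean(b_samples), quadratic_mean(b_samples))
```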

  8. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. 
The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
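The central idea, that a measured profile equals the true profile convolved with the detector response, can be sketched with a toy discrete convolution. The kernel below is an illustrative three-point response, not the actual CC13 response function; the step profile stands in for a penumbra edge.

```python
def convolve(profile, kernel):
    """Discrete 1-D convolution, same output length; the kernel is
    assumed normalized (sums to 1)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(profile):
                acc += k * profile[idx]
        out.append(acc)
    return out

kernel = [0.25, 0.5, 0.25]            # toy detector response
step = [0.0] * 5 + [1.0] * 5          # idealized penumbra edge
blurred = convolve(step, kernel)
print(blurred)
```

The sharp edge is smeared over neighboring points, mimicking volume averaging; in the paper's scheme, TPS-calculated profiles are blurred the same way before being matched to measurement, so both sides carry an identical averaging effect.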

  9. Comparative evaluation of a new lactation curve model for pasture-based Holstein-Friesian dairy cows.

    PubMed

    Adediran, S A; Ratkowsky, D A; Donaghy, D J; Malau-Aduli, A E O

    2012-09-01

    Fourteen lactation models were fitted to average and individual cow lactation data from pasture-based dairy systems in the Australian states of Victoria and Tasmania. The models included a new "log-quadratic" model, and a major objective was to evaluate and compare the performance of this model with the other models. Nine empirical and 5 mechanistic models were first fitted to average test-day milk yield of Holstein-Friesian dairy cows using the nonlinear procedure in SAS. Two additional semiparametric models were fitted using a linear model in ASReml. To investigate the influence of days to first test-day and the number of test-days, 5 of the best-fitting models were then fitted to individual cow lactation data. Model goodness of fit was evaluated using criteria such as the residual mean square, the distribution of residuals, the correlation between actual and predicted values, and the Wald-Wolfowitz runs test. Goodness of fit was similar in all but one of the models in terms of fitting average lactation, but the models differed in their ability to predict individual lactations. In particular, the widely used incomplete gamma model displayed this failing most prominently. The new log-quadratic model was robust in fitting average and individual lactations, and was less affected by sampled data and more parsimonious in having only 3 parameters, each of which lends itself to biological interpretation. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Thomson scattering in the average-atom approximation.

    PubMed

    Johnson, W R; Nilsen, J; Cheng, K T

    2012-09-01

    The average-atom model is applied to study Thomson scattering of x-rays from warm dense matter with emphasis on scattering by bound electrons. Parameters needed to evaluate the dynamic structure function (chemical potential, average ionic charge, free electron density, bound and continuum wave functions, and occupation numbers) are obtained from the average-atom model. The resulting analysis provides a relatively simple diagnostic for use in connection with x-ray scattering measurements. Applications are given to dense hydrogen, beryllium, aluminum, and titanium plasmas. In the case of titanium, bound states are predicted to modify the spectrum significantly.

  11. Free-free opacity in dense plasmas with an average atom model

    DOE PAGES

    Shaffer, Nathaniel R.; Ferris, Natalie G.; Colgan, James Patrick; ...

    2017-02-28

    A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model, together with the Kubo-Greenwood model for optical conductivity. This, in turn, is used to calculate the opacity with the Kramers-Kronig dispersion relations. Furthermore, comparisons with other methods for dense deuterium show excellent agreement with DFT-MD simulations, and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule.

  12. Free-free opacity in dense plasmas with an average atom model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaffer, Nathaniel R.; Ferris, Natalie G.; Colgan, James Patrick

    A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model, together with the Kubo-Greenwood model for optical conductivity. This, in turn, is used to calculate the opacity with the Kramers-Kronig dispersion relations. Furthermore, comparisons with other methods for dense deuterium show excellent agreement with DFT-MD simulations, and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule.

  13. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between the National Aeronautics and Space Administration (NASA) and the Deutsches Zentrum fuer Luft- und Raumfahrt (DLR) to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
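The common core of the weighting-based methods named above is a skill-weighted mean. This is a minimal sketch in that spirit, not the NASA/DLR implementation; the predictions and skill scores are illustrative numbers.

```python
def ensemble_average(predictions, skills):
    """Weighted ensemble mean, with weights proportional to each
    model's (nonnegative) skill score, e.g. inverse RMSE on past cases."""
    total = sum(skills)
    weights = [s / total for s in skills]
    return sum(w * p for w, p in zip(weights, predictions))

# Hypothetical vortex descent-rate predictions (m/s) from three wake models:
preds = [1.8, 2.0, 2.4]
skills = [0.9, 0.6, 0.3]
print(ensemble_average(preds, skills))
```

Reliability Ensemble Averaging and Bayesian Model Averaging differ mainly in how the weights are derived (performance-plus-convergence criteria vs. posterior model probabilities); the combination step itself looks like the above.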

  14. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  15. Kumaraswamy autoregressive moving average models for double bounded environmental data

    NASA Astrophysics Data System (ADS)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.
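The distribution underlying KARMA has a closed-form CDF, which is why its median (the quantity KARMA's dynamic structure models through a link function) is available explicitly. A minimal sketch on the standard (0, 1) support; the shape parameters a and b below are illustrative.

```python
def kw_cdf(x, a, b):
    """Kumaraswamy CDF on (0, 1): F(x) = 1 - (1 - x^a)^b."""
    return 1.0 - (1.0 - x ** a) ** b

def kw_median(a, b):
    """Closed-form median: the u = 0.5 quantile of the Kumaraswamy CDF."""
    return (1.0 - 0.5 ** (1.0 / b)) ** (1.0 / a)

a, b = 2.0, 3.0
m = kw_median(a, b)
print(m, kw_cdf(m, a, b))  # CDF evaluated at the median is 0.5
```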

  16. Mean-field velocity difference model considering the average effect of multi-vehicle interaction

    NASA Astrophysics Data System (ADS)

    Guo, Yan; Xue, Yu; Shi, Yin; Wei, Fang-ping; Lü, Liang-zhong; He, Hong-di

    2018-06-01

    In this paper, a mean-field velocity difference model (MFVD) is proposed to describe the average effect of multi-vehicle interactions on the whole road. By stability analysis, the stability condition of the traffic system is obtained. Comparison with the stability of the full velocity-difference (FVD) model and the completeness of the MFVD model are discussed. The mKdV equation is derived from the MFVD model through nonlinear analysis to reveal traffic jams in the form of the kink-antikink density wave. Then numerical simulation is performed and the results illustrate that the average effect of multi-vehicle interactions plays an important role in effectively suppressing traffic jams. Increasing the strength of the mean-field velocity difference in the MFVD model can rapidly reduce traffic jams and enhance the stability of the traffic system.

  17. A Framework for Validating Traffic Simulation Models at the Vehicle Trajectory Level

    DOT National Transportation Integrated Search

    2017-03-01

    Based on current practices, traffic simulation models are calibrated and validated using macroscopic measures such as 15-minute averages of traffic counts or average point-to-point travel times. For an emerging number of applications, including conne...

  18. A modeling study of the impact of urban trees on ozone

    Treesearch

    David J. Nowak; Kevin L. Civerolo; S. Trivikrama Rao; Gopal Sistla; Christopher J. Luley; Daniel E. Crane

    2000-01-01

    Modeling the effects of increased urban tree cover on ozone concentrations (July 13-15, 1995) from Washington, DC, to central Massachusetts reveals that urban trees generally reduce ozone concentrations in cities, but tend to increase average ozone concentrations in the overall modeling domain. During the daytime, average ozone reductions in urban areas (1 ppb) were...

  19. Evaluation of average daily gain predictions by the integrated farm system model for forage-finished beef steers

    USDA-ARS?s Scientific Manuscript database

    Representing the performance of cattle finished on an all forage diet in process-based whole farm system models has presented a challenge. To address this challenge, a study was done to evaluate average daily gain (ADG) predictions of the Integrated Farm System Model (IFSM) for steers consuming all-...

  20. Parental and interpersonal relationships of transsexual and masculine and feminine homosexual men.

    PubMed

    Sípová, I; Brzek, A

    1983-01-01

    A group of transsexual and homosexual men was examined using the Leary Test as a psycho-sociogram, and findings were compared to those from a group of heterosexual men. It was found that the fathers of homosexuals and transsexuals were more hostile and less dominant than the fathers of the control group and hence less desirable identification models. The average mothers of transsexuals were close to the ideal person in our culture, e.g., dominant, strong and kindly, and thus an imposing identification model. Heterosexual men and transsexuals, in their behavior towards their wives, on the average identified with the models set by their fathers. Effeminate homosexuals in relations towards their male partners on the average identified with their mothers. Non-effeminate homosexuals modeled their behavior somewhere between both parents. Heterosexual men tended to choose wives modelled on their mothers and modelled their behavior towards their wives on their fathers' behavior. Non-effeminate homosexuals tended to choose their male partners according to the model set by their mothers and behaved toward them in a more dominant manner than any of the other groups studied (effeminate homosexuals, non-effeminate homosexuals and transsexuals). Effeminate homosexuals on average chose the most dominant male partners and modelled their behavior toward them on that of their mothers. The wives of transsexuals were rated as the most hostile. The self-esteem of all the groups studied suffered from lack of dominance. On the average, non-effeminate homosexuals were found to be closest to the heterosexual norms, transsexuals the furthest.

  1. Area-averaged evapotranspiration over a heterogeneous land surface: aggregation of multi-point EC flux measurements with a high-resolution land-cover map and footprint analysis

    NASA Astrophysics Data System (ADS)

    Xu, Feinan; Wang, Weizhen; Wang, Jiemin; Xu, Ziwei; Qi, Yuan; Wu, Yueru

    2017-08-01

    The determination of area-averaged evapotranspiration (ET) at the satellite pixel scale/model grid scale over a heterogeneous land surface plays a significant role in developing and improving the parameterization schemes of remote-sensing-based ET estimation models and general hydro-meteorological models. The Heihe Watershed Allied Telemetry Experimental Research (HiWATER) flux matrix provided a unique opportunity to build an aggregation scheme for area-averaged fluxes. On the basis of the HiWATER flux matrix dataset and high-resolution land-cover map, this study focused on estimating the area-averaged ET over a heterogeneous landscape with footprint analysis and multivariate regression. The procedure is as follows. Firstly, quality control and uncertainty estimation for the data of the flux matrix, including 17 eddy-covariance (EC) sites and four groups of large-aperture scintillometers (LASs), were carefully done. Secondly, the representativeness of each EC site was quantitatively evaluated; footprint analysis was also performed for each LAS path. Thirdly, based on the high-resolution land-cover map derived from aircraft remote sensing, a flux aggregation method was established combining footprint analysis and multiple-linear regression. Then, the area-averaged sensible heat fluxes obtained from the EC flux matrix were validated by the LAS measurements. Finally, the area-averaged ET of the kernel experimental area of HiWATER was estimated. Compared with the formerly used and rather simple approaches, such as the arithmetic average and area-weighted methods, the present scheme not only rests on a much better database but also has a solid grounding in physics and mathematics for integrating area-averaged fluxes over a heterogeneous surface. Results from this study, both instantaneous and daily ET at the satellite pixel scale, can be used for the validation of relevant remote sensing models and land surface process models. 
Furthermore, this work will be extended to the water balance study of the whole Heihe River basin.
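The two simple baselines the abstract mentions, arithmetic and area-weighted averaging of tower fluxes, can be stated in a few lines; the footprint-plus-regression scheme in the paper refines them. The land-cover classes, ET values, and areal fractions below are made up for illustration.

```python
def area_weighted(fluxes, fractions):
    """Area-weighted average: each land-cover class's flux weighted by
    its areal fraction from a high-resolution land-cover map."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return sum(f * w for f, w in zip(fluxes, fractions))

# Hypothetical daily ET (mm/day) per class: maize, orchard, bare soil
et_by_class = [4.2, 3.1, 0.8]
fractions = [0.6, 0.3, 0.1]

arithmetic = sum(et_by_class) / len(et_by_class)
weighted = area_weighted(et_by_class, fractions)
print(arithmetic, weighted)
```

The gap between the two estimates grows with surface heterogeneity, which is why the naive arithmetic mean of tower fluxes is a poor pixel-scale estimate over mixed landscapes.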

  2. Forecasting Instability Indicators in the Horn of Africa

    DTIC Science & Technology

    2008-03-01

    further than 2 (Makridakis, et al., 1983, 359). Autoregressive Integrated Moving Average (ARIMA) Model. Similar to the ARMA model except for...stationary process. ARIMA models are described as ARIMA(p,d,q), where p is the order of the autoregressive process, d is the degree of the...differential process, and q is the order of the moving average process. The ARMA(1,1) model shown above is equivalent to an ARIMA(1,0,1) model. An ARIMA
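The truncated snippet above defines ARIMA(p,d,q): the "I" part is d-fold differencing to reach stationarity, after which autoregressive and moving-average terms are fitted. A minimal sketch of those two ingredients, on a synthetic series rather than any data from the report: one difference of a trending series, then a least-squares AR(1) coefficient for the differenced series.

```python
def difference(series, d=1):
    """Apply d-fold first differencing (the 'I' in ARIMA)."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def ar1_coefficient(series):
    """Ordinary least-squares slope of x[t] on x[t-1] (no intercept):
    the phi in a simple AR(1) fit."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

trend = [2.0 * t + (-1) ** t * 0.5 for t in range(20)]  # line + oscillation
stationary = difference(trend, d=1)
print(len(stationary), ar1_coefficient(stationary))
```

A full SARIMA adds seasonal (P,D,Q)s terms on top of the same structure; in practice one would use a library such as statsmodels' `SARIMAX` rather than fitting by hand.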

  3. Simulation of multistage turbine flows

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Mulac, Richard A.

    1987-01-01

    A flow model has been developed for analyzing multistage turbomachinery flows. This model, referred to as the average passage flow model, describes the time-averaged flow field within a typical passage of a blade row embedded within a multistage configuration. Computer resource requirements, supporting empirical modeling, formulation, code development, and multitasking and storage are discussed. Illustrations from simulations of the space shuttle main engine (SSME) fuel turbine performed to date are given.

  4. Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, N. C.; Taylor, P. C.

    2014-12-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. 
The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models that simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics that inform better methods for ensemble averaging of models and lead to better climate projections.

  5. Modeling level of urban taxi services using neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J.; Wong, S.C.; Tong, C.O.

    1999-05-01

    This paper is concerned with the modeling of the complex demand-supply relationship in urban taxi services. A neural network model is developed, based on a taxi service situation observed in the urban area of Hong Kong. The input consists of several exogenous variables including number of licensed taxis, incremental charge of taxi fare, average occupied taxi journey time, average disposable income, and population and customer price index; the output consists of a set of endogenous variables including daily taxi passenger demand, passenger waiting time, vacant taxi headway, average percentage of occupied taxis, taxi utilization, and average taxi waiting time. Comparisons of the estimation accuracy are made between the neural network model and the simultaneous equations model. The results show that the neural network-based macro taxi model can obtain much more accurate information about the taxi services than the simultaneous equations model does. Although the data set used for training the neural network is small, the results obtained thus far are very encouraging. The neural network model can be used as a policy tool by the regulator to assist with decisions concerning the restriction of the number of taxi licenses and the fixing of the taxi fare structure, as well as a range of service quality controls.

  6. Application of scl - pbl method to increase quality learning of industrial statistics course in department of industrial engineering pancasila university

    NASA Astrophysics Data System (ADS)

    Darmawan, M.; Hidayah, N. Y.

    2017-12-01

    Currently, there has been a paradigm change in learning models in college, i.e., from the Teacher Centered Learning (TCL) model to Student Centered Learning (SCL). It is generally assumed that the SCL model is better than the TCL model. The 2nd Industrial Statistics course in the Department of Industrial Engineering, Pancasila University, belongs to the Basic Engineering group. So far, the applied learning model has referred mostly to the TCL model, and field facts show that the learning outcomes are less than satisfactory. In three consecutive even semesters (2013/2014, 2014/2015, and 2015/2016), the average grades obtained were 56.0, 61.1, and 60.5. In the even semester of 2016/2017, Classroom Action Research (CAR) was conducted for this course through the implementation of the SCL model with the Problem Based Learning (PBL) method. The hypothesis proposed is that the SCL-PBL model will be able to improve the final grade of the course. The results show that the average grade of the course increased to 73.27. This value was then tested using ANOVA, and the test concluded that the average grade was significantly different from the average grades of the previous three semesters.
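The comparison reported is a one-way ANOVA across semester grade samples. A minimal sketch of the F statistic it relies on; the per-student grades below are fabricated small samples purely for illustration, whereas the study used the actual class records.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

semesters = [
    [55, 57, 56],   # 2013/2014 (hypothetical grades)
    [60, 62, 61],   # 2014/2015
    [59, 61, 61],   # 2015/2016
    [72, 74, 74],   # 2016/2017, SCL-PBL
]
f_stat = one_way_anova_f(semesters)
print(f_stat)
```

A large F relative to the F(k-1, n-k) critical value indicates that at least one semester mean differs significantly, which is the conclusion the abstract reports.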

  7. Forecasting Dust Storms Using the CARMA-Dust Model and MM5 Weather Data

    NASA Astrophysics Data System (ADS)

    Barnum, B. H.; Winstead, N. S.; Wesely, J.; Hakola, A.; Colarco, P.; Toon, O. B.; Ginoux, P.; Brooks, G.; Hasselbarth, L. M.; Toth, B.; Sterner, R.

    2002-12-01

    An operational model for the forecast of dust storms in Northern Africa, the Middle East and Southwest Asia has been developed for the United States Air Force Weather Agency (AFWA). The dust forecast model uses the 5th generation Penn State Mesoscale Meteorology Model (MM5), and a modified version of the Colorado Aerosol and Radiation Model for Atmospheres (CARMA). AFWA conducted a 60 day evaluation of the dust model to assess the model's ability to forecast dust storms for short, medium and long range (72 hour) forecast periods. The study used satellite and ground observations of dust storms to verify the model's effectiveness. Each of the main mesoscale forecast theaters was broken down into smaller sub-regions for detailed analysis. The study found the forecast model was able to forecast dust storms in Saharan Africa and the Sahel region with an average Probability of Detection (POD) exceeding 68%, with a 16% False Alarm Rate (FAR). The Southwest Asian theater had average PODs of 61% with FARs averaging 10%.
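The verification scores quoted above come from a standard forecast/observation contingency table. A minimal sketch of the two scores (here FAR is the false-alarm ratio, false alarms over all forecast events); the 60-day counts below are illustrative, not the study's tallies.

```python
def pod(hits, misses):
    """Probability of Detection: observed events correctly forecast."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False Alarm Ratio: forecast events that did not occur."""
    return false_alarms / (hits + false_alarms)

# Hypothetical tally for one sub-region over the evaluation period:
hits, misses, false_alarms = 34, 16, 6
print(pod(hits, misses), far(hits, false_alarms))
```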

  8. Spatial averaging of a dissipative particle dynamics model for active suspensions

    NASA Astrophysics Data System (ADS)

    Panchenko, Alexander; Hinz, Denis F.; Fried, Eliot

    2018-03-01

    Starting from a fine-scale dissipative particle dynamics (DPD) model of self-motile point particles, we derive meso-scale continuum equations by applying a spatial averaging version of the Irving-Kirkwood-Noll procedure. Since the method does not rely on kinetic theory, the derivation is valid for highly concentrated particle systems. Spatial averaging yields stochastic continuum equations similar to those of Toner and Tu. However, our theory also involves a constitutive equation for the average fluctuation force. According to this equation, both the strength and the probability distribution vary with time and position through the effective mass density. The statistics of the fluctuation force also depend on the fine scale dissipative force equation, the physical temperature, and two additional parameters which characterize fluctuation strengths. Although the self-propulsion force entering our DPD model contains no explicit mechanism for aligning the velocities of neighboring particles, our averaged coarse-scale equations include the commonly encountered cubically nonlinear (internal) body force density.

  9. A new type of exact arbitrarily inhomogeneous cosmology: evolution of deceleration in the flat homogeneous-on-average case

    NASA Astrophysics Data System (ADS)

    Hellaby, Charles

    2012-01-01

    A new method for constructing exact inhomogeneous universes is presented, that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal, that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.

  10. Upscaling the Navier-Stokes Equation for Turbulent Flows in Porous Media Using a Volume Averaging Method

    NASA Astrophysics Data System (ADS)

    Wood, Brian; He, Xiaoliang; Apte, Sourabh

    2017-11-01

    Turbulent flows through porous media are encountered in a number of natural and engineered systems. Many attempts to close the Navier-Stokes equation for this type of flow have been made, for example using RANS models and double averaging. On the other hand, Whitaker (1996) applied the volume averaging theorem to close the macroscopic N-S equation for low Re flow. In this work, the volume averaging theory is extended into the turbulent flow regime to posit a relationship between the macroscale velocities and the spatial velocity statistics in terms of the spatially averaged velocity only. Rather than developing a Reynolds stress model, we propose a simple algebraic closure, consistent with generalized effective viscosity models (Pope 1975), to represent the spatial fluctuating velocity and pressure respectively. The coefficients (one 1st order, two 2nd order and one 3rd order tensor) of the linear functions depend on the averaged velocity and its gradient. Using data from DNS performed for inertial and turbulent flows (pore Re of 300, 500 and 1000) through a periodic face-centered cubic (FCC) unit cell, all the unknown coefficients can be computed and the closure is complete. The macroscopic quantity calculated from the averaging is then compared with DNS data to verify the upscaling. NSF Project Numbers 1336983, 1133363.
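The averaging operators used in this kind of upscaling are worth stating concretely. A minimal sketch of the superficial average (over the whole averaging cell) versus the intrinsic average (over the fluid phase only), related through the porosity; the velocity samples and phase indicator below are toy values, not DNS data.

```python
def superficial_average(values, in_fluid):
    """Average over the whole cell; solid-phase samples contribute zero."""
    return sum(v for v, f in zip(values, in_fluid) if f) / len(values)

def intrinsic_average(values, in_fluid):
    """Average over fluid-phase samples only."""
    fluid = [v for v, f in zip(values, in_fluid) if f]
    return sum(fluid) / len(fluid)

u = [0.0, 0.0, 1.2, 1.0, 0.8, 1.0]              # velocity samples in a cell
fluid = [False, False, True, True, True, True]  # phase indicator
porosity = sum(fluid) / len(fluid)
print(superficial_average(u, fluid), intrinsic_average(u, fluid), porosity)
```

The identity (superficial average) = porosity x (intrinsic average) is the basic bookkeeping relation of volume averaging; closures like the one in the abstract are written for the intrinsic quantities.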

  11. Two-Component Modeling of the Solar IR CO Lines

    NASA Astrophysics Data System (ADS)

    Avrett, E. H.

    One-dimensional hydrostatic models of quiet and active solar regions can be constructed that generally account for the observed intensities of lines and continua throughout the spectrum, except for the infrared CO lines. There is an apparent conflict between a) observations of the strongest infrared CO lines formed in LTE at low-chromospheric heights but at temperatures much cooler than the average chromospheric values, and b) observations of Ca II, UV, and microwave intensities that originate from the same chromospheric heights but at the much higher temperatures characteristic of the average chromosphere. A model M_CO has been constructed which gives a good fit to the full range of mean CO line profiles (averaged over the central area of the solar disk and over time) but this model conflicts with other observations of average quiet regions. A model L_CO which is approximately 100 K cooler than M_CO combined with a very bright network model F in the proportions 0.6L_CO+0.4F is found to be generally consistent with the CO, Ca II, UV, and microwave observations. Ayres, Testerman, and Brault found that models COOLC and FLUXT in the proportions 0.925 and 0.075 account for the CO and Ca II lines, but these combined models give an average UV intensity at 140 nm about 20 times larger than observed. The 0.6L_CO+0.4F result may give a better description of the cool and hot components that produce the space- and time-averaged spectra. Recent observations carried out by Uitenbroek, Noyes, and Rabin with high spatial and temporal resolution indicate that the faintest intensities in the strong CO lines measured at given locations usually become much brighter within 1 to 3 minutes. The cool regions thus seem to be mostly the low- temperature portions of oscillatory waves rather than cool structures that are stationary.

  12. Two-component modeling of the solar IR CO lines

    NASA Technical Reports Server (NTRS)

    Avrett, Eugene H.

    1995-01-01

    One-dimensional hydrostatic models of quiet and active solar regions can be constructed that generally account for the observed intensities of lines and continua throughout the spectrum, except for the infrared CO lines. There is an apparent conflict between: (1) observations of the strongest infrared CO lines formed in LTE at low-chromospheric heights but at temperatures much cooler than the average chromospheric values; and (2) observations of Ca II, UV (ultraviolet), and microwave intensities that originate from the same chromospheric heights but at the much higher temperatures characteristic of the average chromosphere. A model M(sub CO) has been constructed which gives a good fit to the full range of mean CO line profiles (averaged over the central area of the solar disk and over time) but this model conflicts with other observations of average quiet regions. A model L(sub CO) which is approximately 100 K cooler than M(sub CO) combined with a very bright network model F in the proportions 0.6 L(sub CO) + 0.4 F is found to be generally consistent with the CO, Ca II, UV, and microwave observations. Ayres, Testerman, and Brault found that models COOLC and FLUXT in the proportions 0.925 and 0.075 account for the CO and Ca II lines, but these combined models give an average UV intensity at 140 nm about 20 times larger than observed. The 0.6 L(sub CO) + 0.4 F result may give a better description of the cool and hot components that produce the space- and time-averaged spectra. Recent observations carried out by Uitenbroek, Noyes, and Rabin with high spatial and temporal resolution indicate that the faintest intensities in the strong CO lines measured at given locations usually become much brighter within 1 to 3 minutes. The cool regions thus seem to be mostly the low-temperature portions of oscillatory waves rather than cool structures that are stationary.

  13. Proof of Concept for the Trajectory-Level Validation Framework for Traffic Simulation Models

    DOT National Transportation Integrated Search

    2017-10-30

    Based on current practices, traffic simulation models are calibrated and validated using macroscopic measures such as 15-minute averages of traffic counts or average point-to-point travel times. For an emerging number of applications, including conne...

  14. 40 CFR 600.512-86 - Model year report.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon... submitted. (3) Separate reports shall be submitted for passenger automobiles and light trucks (as identified...

  15. 40 CFR 600.512-01 - Model year report.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon... report must be submitted. (3) Separate reports shall be submitted for passenger automobiles and light...

  16. 40 CFR 600.512-08 - Model year report.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon... report must be submitted. (3) Separate reports shall be submitted for passenger automobiles and light...

  17. 49 CFR 536.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... manufactured by a manufacturer in that compliance category in a particular model year have greater average fuel.... 32905) than that manufacturer's fuel economy standard for that compliance category and model year... year have lower average fuel economy (calculated in a manner that reflects the incentives for...

  18. 49 CFR 536.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... manufactured by a manufacturer in that compliance category in a particular model year have greater average fuel.... 32905) than that manufacturer's fuel economy standard for that compliance category and model year... year have lower average fuel economy (calculated in a manner that reflects the incentives for...

  19. Spatial Interpretation of Tower, Chamber and Modelled Terrestrial Fluxes in a Tropical Forest Plantation

    NASA Astrophysics Data System (ADS)

    Whidden, E.; Roulet, N.

    2003-04-01

Interpretation of a site average terrestrial flux may be complicated in the presence of inhomogeneities. Inhomogeneity may invalidate the basic assumptions of aerodynamic flux measurement. Chamber measurement may miss or misinterpret important temporal or spatial anomalies. Models may smooth over important nonlinearities depending on the scale of application. Although inhomogeneity is usually seen as a design problem, many sites have spatial variance that may have a large impact on net flux, and in many cases a large homogeneous surface is unrealistic. The sensitivity and validity of a site average flux are investigated in the presence of an inhomogeneous site. Directional differences are used to evaluate the validity of aerodynamic methods and the computation of a site average tower flux. Empirical and modelling methods are used to interpret the spatial controls on flux. An ecosystem model, Ecosys, is used to assess spatial length scales appropriate to the ecophysiologic controls. A diffusion model is used to compare tower, chamber, and model data, by spatially weighting contributions within the tower footprint. Diffusion model weighting is also used to improve tower flux estimates by producing footprint averaged ecological parameters (soil moisture, soil temperature, etc.). Although uncertainty remains in the validity of measurement methods and the accuracy of diffusion models, a detailed spatial interpretation is required at an inhomogeneous site. Flux estimation between methods improves with spatial interpretation, showing the importance of spatial interpretation to estimating a site average flux. Small-scale temporal and spatial anomalies may be relatively unimportant to overall flux, but accounting for medium-scale differences in ecophysiological controls is necessary. A combination of measurements and modelling can be used to define the appropriate time and length scales of significant non-linearity due to inhomogeneity.

  20. MAX-DOAS tropospheric nitrogen dioxide column measurements compared with the Lotos-Euros air quality model

    NASA Astrophysics Data System (ADS)

    Vlemmix, T.; Eskes, H. J.; Piters, A. J. M.; Schaap, M.; Sauter, F. J.; Kelder, H.; Levelt, P. F.

    2015-02-01

A 14-month data set of MAX-DOAS (Multi-Axis Differential Optical Absorption Spectroscopy) tropospheric NO2 column observations in De Bilt, the Netherlands, has been compared with the regional air quality model Lotos-Euros. The model was run on a 7×7 km^2 grid, the same resolution as the emission inventory used. A study was performed to assess the effect of clouds on the retrieval accuracy of the MAX-DOAS observations. Good agreement was found between modeled and measured tropospheric NO2 columns, with an average difference of less than 1% of the average tropospheric column (14.5 × 10^15 molec cm^-2). The comparisons show little cloud cover dependence after cloud corrections for which ceilometer data were used. Hourly differences between observations and model show a Gaussian behavior with a standard deviation (σ) of 5.5 × 10^15 molec cm^-2. For daily averages of tropospheric NO2 columns, a correlation of 0.72 was found for all observations, and 0.79 for cloud free conditions. The measured and modeled tropospheric NO2 columns have an almost identical distribution over the wind direction. A significant difference between model and measurements was found for the average weekly cycle, which shows a much stronger decrease during the weekend for the observations; for the diurnal cycle, the observed range is about twice as large as the modeled range. The results of the comparison demonstrate that averaged over a long time period, the tropospheric NO2 column observations are representative for a large spatial area despite the fact that they were obtained in an urban region. This makes the MAX-DOAS technique especially suitable for validation of satellite observations and air quality models in urban regions.
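The comparison statistics quoted in this record (mean model-observation difference, the Gaussian spread σ of hourly differences, and the daily correlation) are ordinary paired-series quantities. A minimal sketch using synthetic stand-in data, not the De Bilt measurements:

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
# Synthetic stand-ins for daily tropospheric NO2 columns (10^15 molec cm^-2).
observed = [10.0 + 8.0 * random.random() for _ in range(365)]
modelled = [o + random.gauss(0.0, 3.0) for o in observed]

diffs = [m - o for m, o in zip(modelled, observed)]
mean_diff = sum(diffs) / len(diffs)
sigma = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / len(diffs))
r = pearson_r(observed, modelled)
```

The same three numbers (mean difference, σ, correlation) summarize any paired model-vs-observation series.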

  1. A computer model of long-term salinity in San Francisco Bay: Sensitivity to mixing and inflows

    USGS Publications Warehouse

    Uncles, R.J.; Peterson, D.H.

    1995-01-01

    A two-level model of the residual circulation and tidally-averaged salinity in San Francisco Bay has been developed in order to interpret long-term (days to decades) salinity variability in the Bay. Applications of the model to biogeochemical studies are also envisaged. The model has been used to simulate daily-averaged salinity in the upper and lower levels of a 51-segment discretization of the Bay over the 22-y period 1967–1988. Observed, monthly-averaged surface salinity data and monthly averages of the daily-simulated salinity are in reasonable agreement, both near the Golden Gate and in the upper reaches, close to the delta. Agreement is less satisfactory in the central reaches of North Bay, in the vicinity of Carquinez Strait. Comparison of daily-averaged data at Station 5 (Pittsburg, in the upper North Bay) with modeled data indicates close agreement with a correlation coefficient of 0.97 for the 4110 daily values. The model successfully simulates the marked seasonal variability in salinity as well as the effects of rapidly changing freshwater inflows. Salinity variability is driven primarily by freshwater inflow. The sensitivity of the modeled salinity to variations in the longitudinal mixing coefficients is investigated. The modeled salinity is relatively insensitive to the calibration factor for vertical mixing and relatively sensitive to the calibration factor for longitudinal mixing. The optimum value of the longitudinal calibration factor is 1.1, compared with the physically-based value of 1.0. Linear time-series analysis indicates that the observed and dynamically-modeled salinity-inflow responses are in good agreement in the lower reaches of the Bay.

  2. Synchronized Trajectories in a Climate "Supermodel"

    NASA Astrophysics Data System (ADS)

    Duane, Gregory; Schevenhoven, Francine; Selten, Frank

    2017-04-01

    Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seems appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.
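The record's central idea, constituent models that synchronize under limited inter-model nudging, can be illustrated on a toy system. A hedged sketch using two imperfect Lorenz-63 copies rather than SPEEDO, with an invented coupling strength:

```python
# Two imperfect copies of the Lorenz-63 system, nudged toward each other.
# This is an illustration of inter-model nudging only, not the SPEEDO setup.

def lorenz_step(state, sigma, rho, beta, dt):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(coupling, steps=5000, dt=0.005):
    a = (1.0, 1.0, 20.0)   # model A state
    b = (1.1, 0.9, 21.0)   # model B: different start and parameters
    gap_sum, n_avg = 0.0, 2000
    for step in range(steps):
        a_new = lorenz_step(a, 10.0, 28.0, 8.0 / 3.0, dt)
        b_new = lorenz_step(b, 10.2, 28.5, 8.0 / 3.0, dt)  # imperfect copy
        # Inter-model nudging: relax each model toward the other.
        a = tuple(x + coupling * dt * (y - x) for x, y in zip(a_new, b_new))
        b = tuple(x + coupling * dt * (y - x) for x, y in zip(b_new, a_new))
        if step >= steps - n_avg:
            gap_sum += max(abs(p - q) for p, q in zip(a, b))
    return gap_sum / n_avg  # mean inter-model gap over the final stretch

gap_uncoupled = run(coupling=0.0)
gap_coupled = run(coupling=50.0)
```

With the nudging on, the two trajectories track each other closely despite their parameter differences, so one can speak of a single "supermodel" trajectory; with it off, the chaotic copies decorrelate.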

  3. A Novel A Posteriori Investigation of Scalar Flux Models for Passive Scalar Dispersion in Compressible Boundary Layer Flows

    NASA Astrophysics Data System (ADS)

    Braman, Kalen; Raman, Venkat

    2011-11-01

    A novel direct numerical simulation (DNS) based a posteriori technique has been developed to investigate scalar transport modeling error. The methodology is used to test Reynolds-averaged Navier-Stokes turbulent scalar flux models for compressible boundary layer flows. Time-averaged DNS velocity and turbulence fields provide the information necessary to evolve the time-averaged scalar transport equation without requiring the use of turbulence modeling. With this technique, passive dispersion of a scalar from a boundary layer surface in a supersonic flow is studied with scalar flux modeling error isolated from any flowfield modeling errors. Several different scalar flux models are used. It is seen that the simple gradient diffusion model overpredicts scalar dispersion, while anisotropic scalar flux models underpredict dispersion. Further, the use of more complex models does not necessarily guarantee an increase in predictive accuracy, indicating that key physics is missing from existing models. Using comparisons of both a priori and a posteriori scalar flux evaluations with DNS data, the main modeling shortcomings are identified. Results will be presented for different boundary layer conditions.

  4. EMC Global Climate And Weather Modeling Branch Personnel

    Science.gov Websites

Comparison statistics, which include: NCEP raw and bias-corrected ensemble domain-averaged bias; NCEP raw and bias-corrected ensemble domain-averaged bias reduction (percent); CMC raw and bias-corrected control forecast domain-averaged bias; CMC raw and bias-corrected control forecast domain-averaged bias reduction.

  5. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...

  6. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...

  7. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...

  8. 49 CFR 525.11 - Termination of exemption; amendment of alternative average fuel economy standard.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... average fuel economy standard. 525.11 Section 525.11 Transportation Other Regulations Relating to... EXEMPTIONS FROM AVERAGE FUEL ECONOMY STANDARDS § 525.11 Termination of exemption; amendment of alternative average fuel economy standard. (a) Any exemption granted under this part for an affected model year does...

  9. Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2018-02-01

One of the key issues of recent research on the dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters that are related with the puff dispersion. RANS simulations with the ADREA-HF code were, therefore, performed, where a single puff was released in each case. The present method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in the code ADREA-HF is also able to predict the ensemble-average dosage, but the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was under-estimated by slightly more than the acceptance criteria allow.
Ensemble-average peak concentration was systematically underpredicted by the model, to a degree higher than allowed by the acceptance criteria, in 1 of the 2 wind-tunnel experiments. The model performance depended on the positions of the examined sensors in relation to the emission source and the building configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".
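The three validation metrics named in this record have simple textbook forms; a sketch with invented values, not the wind-tunnel data (commonly cited urban-dispersion acceptance limits are around |FB| < 0.67, NMSE < 6, and FAC2 > 0.3, though the record's exact criteria are not stated here):

```python
# Fractional bias (FB), normalized mean square error (NMSE), and
# factor of two (FAC2) for paired observed/predicted concentrations.

def fractional_bias(obs, pred):
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def nmse(obs, pred):
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return mse / (mo * mp)

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of the observations."""
    hits = sum(1 for o, p in zip(obs, pred) if 0.5 <= p / o <= 2.0)
    return hits / len(obs)

# Invented concentration pairs for illustration.
obs = [1.0, 2.0, 4.0, 8.0]
pred = [1.2, 1.8, 3.0, 9.0]
```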

  10. A depth-averaged 2-D shallow water model for breaking and non-breaking long waves affected by rigid vegetation

    USDA-ARS?s Scientific Manuscript database

    This paper presents a depth-averaged two-dimensional shallow water model for simulating long waves in vegetated water bodies under breaking and non-breaking conditions. The effects of rigid vegetation are modelled in the form of drag and inertia forces as sink terms in the momentum equations. The dr...

  11. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    PubMed

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
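The estimation model itself is not reproduced in the record; as a purely hypothetical illustration of the final step, averaged strain readings can be converted to stresses through Hooke's law (σ = Eε) and the maximum taken over gauge locations. The modulus and strain values below are invented, and the paper's actual model additionally accounts for support deflections:

```python
# Minimal sketch (not the paper's estimation model): convert averaged VWSG
# strain readings to stresses via Hooke's law and take the maximum.

E_STEEL = 205e9  # elastic modulus of steel, Pa (typical handbook value)

def stresses_from_strains(avg_strains, modulus=E_STEEL):
    """Hooke's law: sigma = E * epsilon for each averaged strain reading."""
    return [modulus * eps for eps in avg_strains]

# Hypothetical averaged strain readings (dimensionless) at gauge locations.
readings = [120e-6, 310e-6, 95e-6, 260e-6]
sigma = stresses_from_strains(readings)
max_stress = max(sigma)  # the quantity limited for safety monitoring
```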

  12. Potential breeding distributions of U.S. birds predicted with both short-term variability and long-term average climate data.

    PubMed

    Bateman, Brooke L; Pidgeon, Anna M; Radeloff, Volker C; Flather, Curtis H; VanDerWal, Jeremy; Akçakaya, H Resit; Thogmartin, Wayne E; Albright, Thomas P; Vavrus, Stephen J; Heglund, Patricia J

    2016-12-01

    Climate conditions, such as temperature or precipitation, averaged over several decades strongly affect species distributions, as evidenced by experimental results and a plethora of models demonstrating statistical relations between species occurrences and long-term climate averages. However, long-term averages can conceal climate changes that have occurred in recent decades and may not capture actual species occurrence well because the distributions of species, especially at the edges of their range, are typically dynamic and may respond strongly to short-term climate variability. Our goal here was to test whether bird occurrence models can be predicted by either covariates based on short-term climate variability or on long-term climate averages. We parameterized species distribution models (SDMs) based on either short-term variability or long-term average climate covariates for 320 bird species in the conterminous USA and tested whether any life-history trait-based guilds were particularly sensitive to short-term conditions. Models including short-term climate variability performed well based on their cross-validated area-under-the-curve AUC score (0.85), as did models based on long-term climate averages (0.84). Similarly, both models performed well compared to independent presence/absence data from the North American Breeding Bird Survey (independent AUC of 0.89 and 0.90, respectively). However, models based on short-term variability covariates more accurately classified true absences for most species (73% of true absences classified within the lowest quarter of environmental suitability vs. 68%). In addition, they have the advantage that they can reveal the dynamic relationship between species and their environment because they capture the spatial fluctuations of species potential breeding distributions. 
With this information, we can identify which species and guilds are sensitive to climate variability, identify sites of high conservation value where climate variability is low, and assess how species' potential distributions may have already shifted due recent climate change. However, long-term climate averages require less data and processing time and may be more readily available for some areas of interest. Where data on short-term climate variability are not available, long-term climate information is a sufficient predictor of species distributions in many cases. However, short-term climate variability data may provide information not captured with long-term climate data for use in SDMs. © 2016 by the Ecological Society of America.

  13. Modeling aboveground biomass of Tamarix ramosissima in the Arkansas River Basin of Southeastern Colorado, USA

    USGS Publications Warehouse

    Evangelista, P.; Kumar, S.; Stohlgren, T.J.; Crall, A.W.; Newman, G.J.

    2007-01-01

Predictive models of aboveground biomass of nonnative Tamarix ramosissima of various sizes were developed using destructive sampling techniques on 50 individuals and four 100-m^2 plots. Each sample was measured for average height (m) of stems and canopy area (m^2) prior to cutting, drying, and weighing. Five competing regression models (P < 0.05) were developed to estimate aboveground biomass of T. ramosissima using average height and/or canopy area measurements and were evaluated using Akaike's Information Criterion corrected for small sample size (AICc). Our best model (AICc = -148.69, ΔAICc = 0) successfully predicted T. ramosissima aboveground biomass (R^2 = 0.97) and used average height and canopy area as predictors. Our 2nd-best model, using the same predictors, was also successful in predicting aboveground biomass (R^2 = 0.97, AICc = -131.71, ΔAICc = 16.98). A 3rd model demonstrated high correlation between only aboveground biomass and canopy area (R^2 = 0.95), while 2 additional models found high correlations between aboveground biomass and average height measurements only (R^2 = 0.90 and 0.70, respectively). These models illustrate how simple field measurements, such as height and canopy area, can be used in allometric relationships to accurately predict aboveground biomass of T. ramosissima. Although a correction factor may be necessary for predictions at larger scales, the models presented will prove useful for many research and management initiatives.
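The AICc ranking used in this record follows the standard least-squares form AICc = n ln(RSS/n) + 2k + 2k(k+1)/(n-k-1), with models compared by ΔAICc relative to the best. A sketch with invented residual sums of squares, not the paper's fits:

```python
import math

def aicc(rss, n, k):
    """AICc for a least-squares model with n samples and k estimated
    parameters (including the error variance): AIC = n*ln(RSS/n) + 2k,
    plus the small-sample correction term."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

n = 50  # destructively sampled individuals, as in the record
# Hypothetical candidate models: name -> AICc from an invented RSS and k.
candidates = {
    "height+canopy": aicc(rss=0.8, n=n, k=4),
    "canopy_only": aicc(rss=1.4, n=n, k=3),
    "height_only": aicc(rss=2.9, n=n, k=3),
}
best = min(candidates.values())
delta = {name: v - best for name, v in candidates.items()}  # ΔAICc
```

Models with ΔAICc near 0 are best supported; large ΔAICc (such as the record's 16.98) indicates substantially less support.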

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, G.; Abarbanel, H.; Carruthers, P.

The questions of the sources of atmospheric carbon dioxide are addressed; distribution of the present carbon dioxide among the atmospheric, oceanic, and biospheric reservoirs is considered; and the impact on climate as reflected by the average ground temperature at each latitude of significant increases in atmospheric carbon dioxide is assessed. A new model for the mixing of carbon dioxide in the oceans is proposed. The proposed model explicitly takes into account the flow of colder and/or saltier water to great depths. We have constructed two models for the case of radiative equilibrium, treating the atmosphere as gray and dividing the infrared emission region into nine bands. The gray atmosphere model predicts an increase of average surface temperature of 2.8°K for a doubling of CO2, a result about a degree less than the nine-band model. An analytic model of the atmosphere was constructed (JASON Climate Model). Calculation with this zonally averaged model shows an increase of average surface temperature of 2.4° for a doubling of CO2. The equatorial temperature increases by 0.7°K while the poles warm up by 10 to 12°K. The JASON climate model suffers from a number of fundamental weaknesses. The role of clouds in determining the albedo is not adequately taken into account nor are the asymmetries between the northern and southern hemisphere. (JGB)

  15. Predictability of weather and climate in a coupled ocean-atmosphere model: A dynamical systems approach. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Nese, Jon M.

    1989-01-01

    A dynamical systems approach is used to quantify the instantaneous and time-averaged predictability of a low-order moist general circulation model. Specifically, the effects on predictability of incorporating an active ocean circulation, implementing annual solar forcing, and asynchronously coupling the ocean and atmosphere are evaluated. The predictability and structure of the model attractors is compared using the Lyapunov exponents, the local divergence rates, and the correlation, fractal, and Lyapunov dimensions. The Lyapunov exponents measure the average rate of growth of small perturbations on an attractor, while the local divergence rates quantify phase-spatial variations of predictability. These local rates are exploited to efficiently identify and distinguish subtle differences in predictability among attractors. In addition, the predictability of monthly averaged and yearly averaged states is investigated by using attractor reconstruction techniques.
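The Lyapunov exponents mentioned in this record measure the average exponential growth rate of small perturbations along an attractor. A self-contained sketch on a logistic map rather than the thesis's circulation model; the map's exact largest exponent at r = 4 is ln 2 ≈ 0.693:

```python
import math

def lyapunov_logistic(r, x0, n_transient=1000, n_sum=20000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)|."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_sum):
        # |f'(x)| = |r*(1 - 2x)|; the floor avoids log(0) at x = 0.5.
        total += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
        x = r * x * (1.0 - x)
    return total / n_sum

lam = lyapunov_logistic(4.0, 0.3)
```

A positive exponent, as here, means nearby states diverge exponentially, which is exactly what bounds the predictability horizon discussed in the record.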

  16. Statistical Models for Averaging of the Pump–Probe Traces: Example of Denoising in Terahertz Time-Domain Spectroscopy

    NASA Astrophysics Data System (ADS)

    Skorobogatiy, Maksim; Sadasivan, Jayesh; Guerboukha, Hichem

    2018-05-01

    In this paper, we first discuss the main types of noise in a typical pump-probe system, and then focus specifically on terahertz time domain spectroscopy (THz-TDS) setups. We then introduce four statistical models for the noisy pulses obtained in such systems, and detail rigorous mathematical algorithms to de-noise such traces, find the proper averages and characterise various types of experimental noise. Finally, we perform a comparative analysis of the performance, advantages and limitations of the algorithms by testing them on the experimental data collected using a particular THz-TDS system available in our laboratories. We conclude that using advanced statistical models for trace averaging results in the fitting errors that are significantly smaller than those obtained when only a simple statistical average is used.
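The baseline the paper compares against, a simple statistical average of repeated noisy traces, can be sketched directly; the pulse shape and noise level below are synthetic, and averaging N traces shrinks the noise standard deviation roughly as 1/√N:

```python
import math
import random

random.seed(1)
n_points, n_traces, noise_sd = 200, 100, 0.5
# Synthetic clean pulse (a Gaussian stand-in for a THz-TDS trace).
pulse = [math.exp(-((i - 100) / 15.0) ** 2) for i in range(n_points)]

# n_traces noisy repeats of the same pulse.
traces = [[p + random.gauss(0.0, noise_sd) for p in pulse]
          for _ in range(n_traces)]
# Simple statistical average, point by point.
avg = [sum(t[i] for t in traces) / n_traces for i in range(n_points)]

def rms_error(est, truth):
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth))

err_single = rms_error(traces[0], pulse)
err_avg = rms_error(avg, pulse)
```

The record's point is that model-based averaging beats even this; the sketch only quantifies how much the plain average already gains over a single trace.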

  17. 40 CFR 600.512-12 - Model year report.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon... reports shall be submitted for passenger automobiles and light trucks (as identified in § 600.510). (c...

  18. 40 CFR 600.513-08 - Gas Guzzler Tax.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon... automobiles sold after December 27, 1991, regardless of the model year of those vehicles. For alcohol dual...

  19. 40 CFR 600.513-91 - Gas Guzzler Tax.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon... automobiles sold after December 27, 1991, regardless of the model year of those vehicles. For alcohol dual...

  20. Sources and Trends of Nitrogen Loading to New England Estuaries

    EPA Science Inventory

    A database of nitrogen (N) loading components to estuaries of the conterminous United States has been developed through application of regional SPARROW models. The original SPARROW models predict average detrended loads by source based on average flow conditions and 2002 source t...

  1. Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames

    NASA Astrophysics Data System (ADS)

    Schlup, Jason; Blanquart, Guillaume

    2018-03-01

    The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.

  2. Learning Instance-Specific Predictive Models

    PubMed Central

    Visweswaran, Shyam; Cooper, Gregory F.

    2013-01-01

This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average on all performance measures against all the comparison algorithms. PMID:25045325
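The model-averaging step itself (though not the ISMB model search) reduces to a posterior-weighted mean of per-model predictions. A sketch with invented models and weights:

```python
# Bayesian model averaging over per-model predictive probabilities:
# P(target | instance) ~= sum_i P(target | instance, model_i) * P(model_i | data).

def model_average(predictions, posteriors):
    """Posterior-weighted average; posteriors may be unnormalized."""
    z = sum(posteriors)
    return sum(p * w for p, w in zip(predictions, posteriors)) / z

# Three hypothetical Markov blanket models scoring one instance.
preds = [0.9, 0.7, 0.2]    # P(target=1 | instance, model_i)
weights = [0.5, 0.3, 0.2]  # P(model_i | data), invented for illustration
p_avg = model_average(preds, weights)
```

Averaging hedges against committing to a single model, which is what lets the instance-specific variant pick the set of models best suited to the case at hand.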

  3. A FEniCS-based programming framework for modeling turbulent flow by the Reynolds-averaged Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Mortensen, Mikael; Langtangen, Hans Petter; Wells, Garth N.

    2011-09-01

    Finding an appropriate turbulence model for a given flow case usually calls for extensive experimentation with both models and numerical solution methods. This work presents the design and implementation of a flexible, programmable software framework for assisting with numerical experiments in computational turbulence. The framework targets Reynolds-averaged Navier-Stokes models, discretized by finite element methods. The novel implementation makes use of Python and the FEniCS package, the combination of which leads to compact and reusable code, where model- and solver-specific code resemble closely the mathematical formulation of equations and algorithms. The presented ideas and programming techniques are also applicable to other fields that involve systems of nonlinear partial differential equations. We demonstrate the framework in two applications and investigate the impact of various linearizations on the convergence properties of nonlinear solvers for a Reynolds-averaged Navier-Stokes model.

  4. Dynamics of a prey-predator system under Poisson white noise excitation

    NASA Astrophysics Data System (ADS)

    Pan, Shan-Shan; Zhu, Wei-Qiu

    2014-10-01

    The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment is modeled as Poisson white noise, is investigated using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for the prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution of the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter ε²s on ecosystem behavior is evaluated. The analytical result is confirmed by a corresponding Monte Carlo (MC) simulation.
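
    A bare-bones Monte Carlo sketch of an LV system driven by Poisson pulses, of the kind the paper's MC confirmation step performs. All parameters here are hypothetical, pulse arrivals are approximated by one Bernoulli trial per step (valid for rate*dt << 1), and this shows only the simulation side, not the stochastic averaging derivation:

```python
import random

def simulate_lv_poisson(a=1.0, b=0.5, c=0.5, d=1.0,
                        pulse_rate=2.0, pulse_size=0.05,
                        dt=1e-3, steps=20000, seed=42):
    """Explicit Euler scheme for prey x and predator y, with
    multiplicative Poisson pulses acting on the prey population."""
    rng = random.Random(seed)
    x, y = 2.0, 2.0              # deterministic equilibrium: x*=d/c, y*=a/b
    for _ in range(steps):
        # Poisson white noise: P(pulse in dt) ~ pulse_rate * dt.
        pulse = pulse_size if rng.random() < pulse_rate * dt else 0.0
        x_new = x + x * (a - b * y) * dt + x * pulse
        y_new = y + y * (c * x - d) * dt
        x, y = max(x_new, 1e-9), max(y_new, 1e-9)  # keep populations positive
    return x, y

x_end, y_end = simulate_lv_poisson()
```

    Starting at the deterministic equilibrium, the pulses repeatedly kick the system onto nearby LV orbits; averaging many such runs is how an MC estimate of the stationary distribution would be built up.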

  5. Distributional behavior of diffusion coefficients obtained by single trajectories in annealed transit time model

    NASA Astrophysics Data System (ADS)

    Akimoto, Takuma; Yamamoto, Eiji

    2016-12-01

    Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in the long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of distributional behavior of the time-averaged diffusion coefficients in disordered systems.
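
    The central observable here, the time-averaged MSD computed from a single trajectory, is straightforward to compute directly. A short sketch for ordinary Brownian motion, where the time-averaged MSD is reproducible and grows linearly with lag time (the paper's point is that in the annealed transit time model the diffusion-coefficient prefactor becomes random between trajectories):

```python
import random

def time_averaged_msd(traj, lag):
    """Single-trajectory time-averaged MSD at a given lag:
    (1/(T-lag)) * sum over t of (x(t+lag) - x(t))^2."""
    n = len(traj) - lag
    return sum((traj[t + lag] - traj[t]) ** 2 for t in range(n)) / n

# Ordinary 1-D Brownian motion with diffusion coefficient D:
# the time-averaged MSD should be ~ 2*D*lag*dt for every trajectory.
random.seed(0)
D, dt = 0.5, 1.0
x, traj = 0.0, [0.0]
for _ in range(100_000):
    x += random.gauss(0.0, (2 * D * dt) ** 0.5)
    traj.append(x)

msd_1 = time_averaged_msd(traj, 1)     # ~ 2*D = 1.0
msd_10 = time_averaged_msd(traj, 10)   # ~ 20*D = 10.0
```

    For a heterogeneous model like the one in the paper, repeating this over many trajectories would reveal the scatter of the fitted diffusion coefficients rather than a single reproducible value.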

  6. Average male and female virtual dummy model (BioRID and EvaRID) simulations with two seat concepts in the Euro NCAP low severity rear impact test configuration.

    PubMed

    Linder, Astrid; Holmqvist, Kristian; Svensson, Mats Y

    2018-05-01

    Soft tissue neck injuries, also referred to as whiplash injuries, can lead to long-term suffering and account for more than 60% of the insurance cost of all injuries sustained in vehicle crashes that lead to permanent medical impairment. These injuries are sustained in all impact directions; however, they are most common in rear impacts. Injury statistics have consistently shown since the mid-1960s that females are subject to a higher risk of sustaining this type of injury than males, on average twice the risk. Furthermore, some recently developed anti-whiplash systems have been shown to provide less protection for females than for males. The protection of both males and females should be addressed equally when designing and evaluating vehicle safety systems to ensure maximum safety for everyone. This is currently not the case. The norm for crash test dummies representing humans in crash test laboratories is an average male. The female part of the population is not represented in tests performed by consumer information organisations such as NCAP or in regulatory tests, owing to the absence of a physical dummy representing an average female. Recently, the world's first virtual model of an average female crash test dummy was developed. In this study, simulations were run with both this model and an average male dummy model, seated in a simplified model of a vehicle seat. The results of the simulations were compared to earlier published results from simulations run in the same test set-up with a vehicle concept seat. The three crash pulse severities of the Euro NCAP low severity rear impact test were applied. The motions of the neck, head and upper torso were analysed, in addition to the accelerations and the Neck Injury Criterion (NIC). Furthermore, the responses of the virtual models were compared to the responses of volunteers and, for the average male model, to the response of a physical dummy. Simulations with the virtual male and female dummy models revealed differences in dynamic response related to the crash severity, as well as between the two dummies in the two different seat models. When comparing the responses of the virtual models to those of the volunteers and the physical dummy, the peak angular motion of the first thoracic vertebra found in the volunteer tests and mimicked by the physical dummy was not matched in magnitude by the virtual models. The results of the study highlight the need for an extended test matrix that includes an average female dummy model to evaluate the level of occupant protection different seats provide in vehicle crashes. This would provide developers with an additional tool to ensure that both male and female occupants receive satisfactory protection, and promote seat concepts that provide the best possible protection for the whole adult population. This study shows that the mathematical models available today can provide insights suitable for future testing. Copyright © 2017 Elsevier Ltd. All rights reserved.
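
    The NIC referenced above is, in its standard published form (Boström et al.), NIC(t) = 0.2 * a_rel(t) + v_rel(t)^2, where a_rel is the horizontal acceleration of the first thoracic vertebra (T1) relative to the head and v_rel is its running time integral. A sketch with a synthetic, entirely hypothetical acceleration pulse:

```python
def nic_time_series(a_t1, a_head, dt):
    """NIC(t) = 0.2 * a_rel(t) + v_rel(t)**2, with a_rel = a_T1 - a_head
    (horizontal, m/s^2) and v_rel the running time integral of a_rel."""
    nic, v_rel = [], 0.0
    for at1, ah in zip(a_t1, a_head):
        a_rel = at1 - ah
        v_rel += a_rel * dt
        nic.append(0.2 * a_rel + v_rel ** 2)
    return nic

# Synthetic rear-impact-like pulse: T1 accelerates 20 ms before the head.
dt = 0.001
a_t1 = [50.0 if 0.02 <= i * dt < 0.06 else 0.0 for i in range(100)]
a_head = [50.0 if 0.04 <= i * dt < 0.08 else 0.0 for i in range(100)]
nic_max = max(nic_time_series(a_t1, a_head, dt))
```

    The peak NIC occurs while T1 leads the head, which is exactly the retraction phase that rear-impact seat concepts try to shorten.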

  7. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Co-Axial Supersonic Free-Jet Experiment

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Edwards, J. R.

    2009-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The baseline value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was noted when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid simulation results showed the same trends as the baseline Reynolds-averaged predictions. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions are suggested as a remedy to this dilemma. Comparisons between resolved second-order turbulence statistics and their modeled Reynolds-averaged counterparts were also performed.

  8. Climate, soil water storage, and the average annual water balance

    USGS Publications Warehouse

    Milly, P.C.D.

    1994-01-01

    This paper describes the development and testing of the hypothesis that the long-term water balance is determined only by the local interaction of fluctuating water supply (precipitation) and demand (potential evapotranspiration), mediated by water storage in the soil. Adoption of this hypothesis, together with idealized representations of relevant input variabilities in time and space, yields a simple model of the water balance of a finite area having a uniform climate. The partitioning of average annual precipitation into evapotranspiration and runoff depends on seven dimensionless numbers: the ratio of average annual potential evapotranspiration to average annual precipitation (index of dryness); the ratio of the spatial average plant-available water-holding capacity of the soil to the annual average precipitation amount; the mean number of precipitation events per year; the shape parameter of the gamma distribution describing spatial variability of storage capacity; and simple measures of the seasonality of mean precipitation intensity, storm arrival rate, and potential evapotranspiration. The hypothesis is tested in an application of the model to the United States east of the Rocky Mountains, with no calibration. Study area averages of runoff and evapotranspiration, based on observations, are 263 mm and 728 mm, respectively; the model yields corresponding estimates of 250 mm and 741 mm, respectively, and explains 88% of the geographical variance of observed runoff within the study region. The differences between modeled and observed runoff can be explained by uncertainties in the model inputs and in the observed runoff. 
In the humid (index of dryness <1) parts of the study area, the dominant factor producing runoff is the excess of annual precipitation over annual potential evapotranspiration, but runoff caused by variability of supply and demand over time is also significant; in the arid (index of dryness >1) parts, all of the runoff is caused by variability of forcing over time. Contributions to model runoff attributable to small-scale spatial variability of storage capacity are insignificant throughout the study area. The consistency of the model with observational data is supportive of the supply-demand-storage hypothesis, which neglects infiltration excess runoff and other finite-permeability effects on the soil water balance.
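
    The supply-demand-storage idea can be illustrated with a deterministic daily bucket model (not the stochastic model developed in the paper; all numbers hypothetical): precipitation fills a store of finite capacity, spill above capacity becomes runoff, and evapotranspiration is the minimum of demand and stored supply.

```python
def annual_partition(precip, pet, capacity, storage0):
    """Daily bucket: add precipitation; spill above `capacity` as runoff;
    then evaporate min(PET, storage). Returns (ET, runoff) annual totals."""
    storage, et, runoff = storage0, 0.0, 0.0
    for p, e_pot in zip(precip, pet):
        storage += p
        if storage > capacity:          # saturation excess becomes runoff
            runoff += storage - capacity
            storage = capacity
        e = min(e_pot, storage)         # demand limited by available supply
        et += e
        storage -= e
    return et, runoff

# Humid example: 10 units of rain every 4th day against a constant demand
# of 2 per day, so the index of dryness is 730/920 < 1.
precip = [10.0 if d % 4 == 0 else 0.0 for d in range(365)]
pet = [2.0] * 365
et, runoff = annual_partition(precip, pet, capacity=150.0, storage0=75.0)
```

    In this humid case runoff appears only once cumulative supply exceeds demand plus remaining storage capacity, mirroring the paper's finding that excess of precipitation over potential evapotranspiration dominates runoff where the index of dryness is below one.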

  9. [The impact of exposure to images of ideally thin models on body dissatisfaction in young French and Italian women].

    PubMed

    Rodgers, R; Chabrol, H

    2009-06-01

    The thin-ideal of feminine beauty has a strong impact on body image and plays a central part in eating disorders. This ideal is widely promoted by the media images that flood western societies. Although the harmful effects of exposure to thin-ideal media images have been repeatedly demonstrated experimentally in English-speaking western countries, no such studies exist in southern Europe. There is evidence to suggest that the use of average-size models could reduce these negative effects. This study investigates body image amongst French and Italian students following exposure to media images of thin or average-size models, with a neutral or supportive slogan. The data were gathered in three locations: the psychology departments of the Universities of Padua, Italy, and Toulouse, France, and lastly high schools in the Toulouse area. A total of 299 girls took part in the study; their average age was 19.9 years (S.D. = 2.54). In order to investigate the effects of media images, we created three fake advertisements, allegedly promoting body-cream. The first advertisement displayed an ideally-thin model accompanied by a neutral slogan. In the second, the model was average-size with the same neutral slogan. The last advertisement also contained the average-size model, but with a supportive slogan designed to convey acceptance of deviations from the social norms of thinness. The participants first graded themselves on a VAS of body dissatisfaction (0 to 10). On the basis of this score, we created a first group containing girls reporting body dissatisfaction (VAS ≥ 5), and a second with those reporting no body dissatisfaction (VAS < 5). Participants were then randomly exposed to one of the three advertisements, after which they filled in the body dissatisfaction sub-scale of the Eating Disorders Inventory (EDI-2).
    The results showed that girls with initial body dissatisfaction reported higher body dissatisfaction after being exposed to images of ideally thin models than to images of average-size models (F(1,32) = 4.64, p = 0.039). However, there was no significant difference between body dissatisfaction scores reported after exposure to images of average-size models accompanied by neutral or supportive slogans (F(1,39) = 0.093, p = 0.76). This study illustrates the negative effects of exposure to thin-ideal media images among students with body dissatisfaction. The use of average-size models in the media and advertising might help reduce these effects. No improvement was obtained via the use of a supportive slogan. These results highlight the importance of media literacy campaigns in the prevention of eating disorders.

  10. Scattering in infrared radiative transfer: A comparison between the spectrally averaging model JURASSIC and the line-by-line model KOPRA

    NASA Astrophysics Data System (ADS)

    Griessbach, Sabine; Hoffmann, Lars; Höpfner, Michael; Riese, Martin; Spang, Reinhold

    2013-09-01

    The viability of a spectrally averaging model to perform radiative transfer calculations in the infrared including scattering by atmospheric particles is examined for the application of infrared limb remote sensing measurements. Here we focus on the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) aboard the European Space Agency's Envisat. Various spectra for clear air and cloudy conditions were simulated with a spectrally averaging radiative transfer model and a line-by-line radiative transfer model for three atmospheric window regions (825-830, 946-951, 1224-1228 cm-1) and compared to each other. The results are rated in terms of the MIPAS noise equivalent spectral radiance (NESR). The clear air simulations generally agree within one NESR. The cloud simulations neglecting the scattering source term agree within two NESR. The differences between the cloud simulations including the scattering source term are generally below three and always below four NESR. We conclude that the spectrally averaging approach is well suited for fast and accurate infrared radiative transfer simulations including scattering by clouds. We found that the main source of the differences between the cloud simulations of the two models is the cloud edge sampling. Furthermore, we reasoned that this model comparison for clouds is also valid for atmospheric aerosol in general.

  11. Reproducing the Ensemble Average Polar Solvation Energy of a Protein from a Single Structure: Gaussian-Based Smooth Dielectric Function for Macromolecular Modeling.

    PubMed

    Chakravorty, Arghya; Jia, Zhe; Li, Lin; Zhao, Shan; Alexov, Emil

    2018-02-13

    Typically, the ensemble average polar component of the solvation energy (ΔG_solv^polar) of a macromolecule is computed using molecular dynamics (MD) or Monte Carlo (MC) simulations to generate a conformational ensemble, and then a single/rigid-conformation solvation energy calculation is performed on each snapshot. The primary objective of this work is to demonstrate that a Poisson-Boltzmann (PB)-based approach using a Gaussian-based smooth dielectric function for macromolecular modeling previously developed by us (Li et al. J. Chem. Theory Comput. 2013, 9 (4), 2126-2136) can reproduce that ensemble average ΔG_solv^polar of a protein from a single structure. We show that the Gaussian-based dielectric model reproduces the ensemble average ΔG_solv^polar (⟨ΔG_solv^polar⟩) from an energy-minimized structure of a protein regardless of the minimization environment (structure minimized in vacuo, in implicit or explicit waters, or the crystal structure); the best case, however, is when it is paired with an in vacuo-minimized structure. In the other minimization environments (implicit or explicit waters or the crystal structure), the traditional two-dielectric model can still be selected, with which correct solvation energies are produced. Our observations from this work reflect how the ability to appropriately mimic the motion of residues, especially the salt bridge residues, influences a dielectric model's ability to reproduce the ensemble average value of the polar solvation free energy from a single in vacuo-minimized structure.

  12. A kinetic model for estimating net photosynthetic rates of cos lettuce leaves under pulsed light.

    PubMed

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2015-04-01

    Time-averaged net photosynthetic rate (Pn) under pulsed light (PL) is known to be affected by the PL frequency and duty ratio, even though the time-averaged photosynthetic photon flux density (PPFD) is unchanged. This phenomenon can be explained by considering that photosynthetic intermediates (PIs) are pooled during light periods and then consumed by partial photosynthetic reactions during dark periods. In this study, we developed a kinetic model to estimate Pn of cos lettuce (Lactuca sativa L. var. longifolia) leaves under PL based on the dynamics of the amount of pooled PIs. The model inputs are average PPFD, duty ratio, and frequency; the output is Pn. The rates of both PI accumulation and consumption at a given moment are assumed to be dependent on the amount of pooled PIs at that point. Required model parameters and three explanatory variables (average PPFD, frequency, and duty ratio) were determined for the simulation using Pn values under PL based on several combinations of the three variables. The model simulation for various PL levels with a wide range of time-averaged PPFDs, frequencies, and duty ratios further demonstrated that Pn under PL with high frequencies and duty ratios was comparable to, but did not exceed, Pn under continuous light, and also showed that Pn under PL decreased as either frequency or duty ratio was decreased. The developed model can be used to estimate Pn under various light environments where PPFD changes cyclically.
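
    The pooled-intermediate mechanism can be sketched as a one-pool kinetic model: light fills the pool (saturating at a maximum, with excess excitation lost), and consumption of pooled PIs, taken here as the assimilation rate, proceeds at up to a fixed maximum. This is a simplified stand-in for the paper's model; all parameter names and values are hypothetical.

```python
def pn_pulsed(ppfd_avg, frequency, duty, pool_max=1.5,
              alpha=0.05, vmax=15.0, dt=1e-5, seconds=2.0):
    """Time-averaged Pn from a one-pool sketch of PI dynamics under
    pulsed light with a fixed time-averaged PPFD."""
    steps_per_period = round(1.0 / (frequency * dt))
    on_steps = round(duty * steps_per_period)
    ppfd_on = ppfd_avg / duty           # keeps the time-averaged PPFD fixed
    pool, assim = 0.0, 0.0
    for i in range(round(seconds / dt)):
        if i % steps_per_period < on_steps:        # light period
            pool = min(pool + alpha * ppfd_on * dt, pool_max)
        use = min(vmax * dt, pool)      # dark reactions consume pooled PIs
        pool -= use
        assim += use
    return assim / seconds

pn_fast = pn_pulsed(200.0, frequency=1000.0, duty=0.5)  # high frequency
pn_slow = pn_pulsed(200.0, frequency=1.0, duty=0.5)     # low frequency
```

    With the same time-averaged PPFD, the high-frequency case keeps the pool away from both saturation and emptiness and so approaches continuous-light performance, while the low-frequency case loses input to pool saturation during the long light period, matching the qualitative behavior described above.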

  13. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    PubMed

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

    Daily canopy photosynthesis is usually temporally upscaled from instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m⁻² day⁻¹, and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34% and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
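
    The overestimation by SADM is essentially Jensen's inequality: photosynthesis responds concavely (saturating) to light, so the response evaluated at the daily mean input exceeds the mean of the instantaneous responses. An exaggerated toy illustration with a rectangular-hyperbola light response and hypothetical parameters (the ~15% figure above comes from the full coupled model, not from this sketch):

```python
import math

def photosynthesis(par, pmax=20.0, half_sat=300.0):
    """Saturating (concave) light-response curve; parameters hypothetical."""
    return pmax * par / (par + half_sat)

# Half-hourly PAR over one day: sinusoidal daylight, zero at night.
par_series = [1500.0 * max(math.sin(math.pi * (h - 6.0) / 12.0), 0.0)
              for h in (i * 0.5 for i in range(48))]

# IDM-like: average the instantaneous responses over the day.
integrated = sum(photosynthesis(p) for p in par_series) / len(par_series)
# SADM-like: respond once to the daily average input.
simple_avg = photosynthesis(sum(par_series) / len(par_series))
overestimate = (simple_avg - integrated) / integrated
```

    Averaging the input spreads daytime light into the night, where the concave curve is steepest, so the daily-average scheme always sits above the integrated one for a concave response.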

  14. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
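
    The trade-off described above can be seen in a toy setting: n-fold averaging shrinks the noise standard deviation by a factor of sqrt(n), but each averaged value replaces n raw values that would otherwise be available for hyper-parameter (noise-level) estimation. A minimal numerical check of the variance-reduction half of the trade-off, with made-up noise parameters:

```python
import random
import statistics

random.seed(7)
sigma, n_avg, trials = 2.0, 16, 2000

# Distribution of a single noisy value vs. a 16-fold average of such values.
singles = [random.gauss(0.0, sigma) for _ in range(trials)]
averages = [sum(random.gauss(0.0, sigma) for _ in range(n_avg)) / n_avg
            for _ in range(trials)]

std_single = statistics.stdev(singles)   # ~ sigma = 2.0
std_avg = statistics.stdev(averages)     # ~ sigma / sqrt(n_avg) = 0.5
```

    With a fixed measurement budget, each averaged image is 16 times less noisy but there are 16 times fewer of them, which is why the paper finds hyper-parameter estimation, and hence restoration with estimated hyper-parameters, worsens under averaging.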

  15. Equity venture capital platform model based on complex network

    NASA Astrophysics Data System (ADS)

    Guo, Dongwei; Zhang, Lanshu; Liu, Miao

    2018-05-01

    This paper uses small-world and random networks to simulate the relationships among investors and constructs a network model of an equity venture capital platform to explore the impact of the fraud rate and the bankruptcy rate on the robustness of the model, while also observing the impact of the average path length and the average clustering coefficient of the investor relationship network on the income of the model. The study found that when the fraud rate or the bankruptcy rate exceeds a certain threshold, the network collapses; that the bankruptcy rate has a great influence on the income of the platform; that a risk premium exists, with better average returns within a certain range of bankruptcy risk; and that the structure of the investor relationship network has no effect on the income of the investment model.

  16. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models.
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate rather than adding multiple phases or input parameters.
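
    The translation exercise can be sketched in a few lines for a generic single-phase FOD model of the LandGEM type, with two hypothetical waste streams: model each stream separately and sum (multiphase), or pool the tonnage with mass-weighted L0 and k (single phase):

```python
import math

def cumulative_methane(mass, l0, k, years):
    """Cumulative CH4 from `mass` of waste after `years` under a
    single-phase first-order decay model: L0 * M * (1 - exp(-k*t))."""
    return l0 * mass * (1.0 - math.exp(-k * years))

# Two hypothetical waste streams with distinct FOD parameters.
streams = [
    {"mass": 1000.0, "l0": 100.0, "k": 0.15},  # food-rich: fast decay
    {"mass": 3000.0, "l0": 60.0,  "k": 0.04},  # paper/wood: slow decay
]

# Multiphase-style prediction: model each stream separately and sum.
multiphase = sum(cumulative_methane(s["mass"], s["l0"], s["k"], 30)
                 for s in streams)

# Single-phase prediction with mass-weighted average parameters.
total_mass = sum(s["mass"] for s in streams)
l0_avg = sum(s["mass"] * s["l0"] for s in streams) / total_mass
k_avg = sum(s["mass"] * s["k"] for s in streams) / total_mass
single_phase = cumulative_methane(total_mass, l0_avg, k_avg, 30)
deviation = (single_phase - multiphase) / multiphase
```

    With these made-up parameters the single-phase prediction lands within roughly 8% of the multiphase total, the same order as the -7% to +6% deviations reported above.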

  17. 49 CFR 537.4 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., DEPARTMENT OF TRANSPORTATION AUTOMOTIVE FUEL ECONOMY REPORTS § 537.4 Definitions. (a) Statutory terms. (1) The terms average fuel economy standard, fuel, manufacture, and model year are used as defined in... accordance with part 529 of this chapter. (3) The terms average fuel economy, fuel economy, and model type...

  18. Detectability of planetary characteristics in disk-averaged spectra. I: The Earth model.

    PubMed

    Tinetti, Giovanna; Meadows, Victoria S; Crisp, David; Fong, William; Fishbein, Evan; Turnbull, Margaret; Bibring, Jean-Pierre

    2006-02-01

    Over the next 2 decades, NASA and ESA are planning a series of space-based observatories to detect and characterize extrasolar planets. This first generation of observatories will not be able to spatially resolve the terrestrial planets detected. Instead, these planets will be characterized by disk-averaged spectroscopy. To assess the detectability of planetary characteristics in disk-averaged spectra, we have developed a spatially and spectrally resolved model of the Earth. This model uses atmospheric and surface properties from existing observations and modeling studies as input, and generates spatially resolved high-resolution synthetic spectra using the Spectral Mapping Atmospheric Radiative Transfer model. Synthetic spectra were generated for a variety of conditions, including cloud coverage, illumination fraction, and viewing angle geometry, over a wavelength range extending from the ultraviolet to the far-infrared. Here we describe the model and validate it against disk-averaged visible to infrared observations of the Earth taken by the Mars Global Surveyor Thermal Emission Spectrometer, the ESA Mars Express Omega instrument, and ground-based observations of earthshine reflected from the unilluminated portion of the Moon. The comparison between the data and model indicates that several atmospheric species can be identified in disk-averaged Earth spectra, and potentially detected depending on the wavelength range and resolving power of the instrument. At visible wavelengths (0.4-0.9 μm) O3, H2O, O2, and oxygen dimer [(O2)2] are clearly apparent. In the mid-infrared (5-20 μm) CO2, O3, and H2O are present. CH4, N2O, CO2, O3, and H2O are visible in the near-infrared (1-5 μm). A comprehensive three-dimensional model of the Earth is needed to produce a good fit with the observations.

  19. Adaptive Anchoring Model: How Static and Dynamic Presentations of Time Series Influence Judgments and Predictions.

    PubMed

    Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick

    2018-01-01

    When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. 
Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
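
    The anchor-and-adjust account can be sketched with a recency-weighted anchor: static presentation corresponds to weighting all events equally, dynamic presentation to weighting recent events more heavily. This is a schematic stand-in for ADAM, with a hypothetical exponential decay weighting:

```python
def anchored_estimate(series, decay):
    """Recency-weighted anchor: an observation of age a gets weight decay**a,
    so decay=1.0 is the plain average and smaller decay favors recent values."""
    n = len(series)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * x for w, x in zip(weights, series)) / sum(weights)

trend = [10, 12, 14, 16, 18, 20]                      # upward time series
static_anchor = anchored_estimate(trend, decay=1.0)   # all events weighted equally
dynamic_anchor = anchored_estimate(trend, decay=0.5)  # recent events dominate
```

    For an upward trend, the recency-weighted anchor sits closer to the next value in the series, which is one way to capture why dynamic presentation improved prediction accuracy while leaving average estimation anchored away from the true mean.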

  20. Financial modelling of femtosecond laser-assisted cataract surgery within the National Health Service using a 'hub and spoke' model for the delivery of high-volume cataract surgery.

    PubMed

    Roberts, H W; Ni, M Z; O'Brart, D P S

    2017-03-16

    To develop financial models which offset additional costs associated with femtosecond laser (FL)-assisted cataract surgery (FLACS) against improvements in productivity and to determine important factors relating to its implementation into the National Health Service (NHS). FL platforms are expensive, both in initial purchase and in running costs. The additional costs associated with FL technology might be offset by an increase in surgical efficiency. Using a 'hub and spoke' model to provide high-volume cataract surgery, we designed a financial model comparing FLACS against conventional phacoemulsification surgery (CPS). The model was populated with averaged financial data from 4 NHS foundation trusts and 4 commercial organisations manufacturing FL platforms. We tested our model with sensitivity and threshold analyses to allow for variations or uncertainties. The averaged weekly workload for cataract surgery using our hub and spoke model required either 8 or 5.4 theatre sessions with CPS or FLACS, respectively. Despite reduced theatre utilisation, CPS (average £433/case) was still found to be 8.7% cheaper than FLACS (average £502/case). The greatest associated cost of FLACS was the patient interface (PI) (average £135/case). Sensitivity analyses demonstrated that FLACS could be less expensive than CPS, but only if efficiency, in terms of cataract procedures per theatre list, increased by over 100%, or if the cost of the PI was reduced by almost 70%. The financial viability of FLACS within the NHS is currently precluded by the cost of the PI and the lack of knowledge regarding any gains in operational efficiency. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  1. Geohydrology of the French Creek basin and simulated effects of drought and ground-water withdrawals, Chester County, Pennsylvania

    USGS Publications Warehouse

    Sloto, Ronald A.

    2004-01-01

    This report describes the results of a study by the U.S. Geological Survey, in cooperation with the Delaware River Basin Commission, to develop a regional ground-water-flow model of the French Creek Basin in Chester County, Pa. The model was used to assist water-resource managers by illustrating the interconnection between ground-water and surface-water systems. The 70.7-mi2 (square mile) French Creek Basin is in the Piedmont Physiographic Province and is underlain by crystalline and sedimentary fractured-rock aquifers. Annual water budgets were calculated for 1969-2001 for the French Creek Basin upstream of streamflow measurement station French Creek near Phoenixville (01472157). Average annual precipitation was 46.28 in. (inches), average annual streamflow was 20.29 in., average annual base flow determined by hydrograph separation was 12.42 in., and estimated average annual ET (evapotranspiration) was 26.10 in. Estimated average annual recharge was 14.32 in. and is equal to 31 percent of the average annual precipitation. Base flow made up an average of 61 percent of streamflow. Ground-water flow in the French Creek Basin was simulated using the finite-difference MODFLOW-96 computer program. The model structure is based on a simplified two-dimensional conceptualization of the ground-water-flow system. The modeled area was extended outside the French Creek Basin to natural hydrologic boundaries; the modeled area includes 40 mi2 of adjacent areas outside the basin. The hydraulic conductivity for each geologic unit was calculated from reported specific-capacity data determined from aquifer tests and was adjusted during model calibration. The model was calibrated for above-average conditions by simulating base-flow and water-level measurements made on May 1, 2001, using a recharge rate of 20 in/yr (inches per year).
The model was calibrated for below-average conditions by simulating base-flow and water-level measurements made on September 11 and 17, 2001, using a recharge rate of 6.2 in/yr. Average conditions were simulated by adjusting the recharge rate until simulated streamflow at streamflow-measurement station 01472157 matched the long-term (1968-2001) average base flow of 54.1 cubic feet per second. The recharge rate used for average conditions was 15.7 in/yr. The effect of drought in the French Creek Basin was simulated using a drought year recharge rate of 8 in/yr for 3 months. After 3 months of drought, the simulated streamflow of French Creek at streamflow-measurement station 01472157 decreased 34 percent. The simulations show that after 6 months of average recharge (15.7 in/yr) following drought, streamflow and water levels recovered almost to pre-drought conditions. The effect of increased ground-water withdrawals on stream base flow in the South Branch French Creek Subbasin was simulated under average and drought conditions with pumping rates equal to 50, 75, and 100 percent of the Delaware River Basin Commission Ground Water Protected Area (GWPA) withdrawal limit (1,393 million gallons per year) with all pumped water removed from the basin. For average recharge conditions, the simulated streamflow of South Branch French Creek at the mouth decreased 18, 28, and 37 percent at a withdrawal rate equal to 50, 75, and 100 percent of the GWPA limit, respectively. After 3 months of drought recharge conditions, the simulated streamflow of South Branch French Creek at the mouth decreased 27, 40, and 52 percent at a withdrawal rate equal to 50, 75, and 100 percent of the GWPA limit, respectively. The effect of well location on base flow, water levels, and the sources of water to the well was simulated by locating a hypothetical well pumping 200 gallons per minute in different places in the Beaver Run Subbasin with all pumped water removed from the basin. 
The smallest reduction in the base flow of Beaver Run was from a well on the drainage divide
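
    As an illustration, the annual water-budget figures quoted in the abstract can be checked with a few lines of arithmetic (all values in inches per year, taken directly from the record):

```python
# Illustrative check of the annual water-budget figures quoted above.
precip, streamflow, base_flow = 46.28, 20.29, 12.42
et, recharge = 26.10, 14.32

# precipitation minus streamflow and ET should nearly close the budget
residual = precip - streamflow - et            # small (storage/rounding)

recharge_fraction = recharge / precip          # reported as 31 percent
base_flow_fraction = base_flow / streamflow    # reported as 61 percent

print(f"budget residual:          {residual:+.2f} in/yr")
print(f"recharge / precipitation: {recharge_fraction:.0%}")
print(f"base flow / streamflow:   {base_flow_fraction:.0%}")
```
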

  2. Constructing optimal ensemble projections for predictive environmental modelling in Northern Eurasia

    NASA Astrophysics Data System (ADS)

    Anisimov, Oleg; Kokorev, Vasily

    2013-04-01

    Large uncertainties in climate impact modelling are associated with the forcing climate data. This study is targeted at evaluating the quality of GCM-based climatic projections in the specific context of predictive environmental modelling in Northern Eurasia. To accomplish this task, we used the output from 36 CMIP5 GCMs from the IPCC AR-5 database for the control period 1975-2005 and calculated several climatic characteristics and indexes that are most often used in the impact models, i.e. the summer warmth index, duration of the vegetation growth period, precipitation sums, dryness index, thawing degree-day sums, and the annual temperature amplitude. We used data from 744 weather stations in Russia and neighbouring countries to analyze the spatial patterns of modern climatic change and to delineate 17 large regions with coherent temperature changes in the past few decades. GCM results and observational data were averaged over the coherent regions and compared with each other. Ultimately, we evaluated the skills of individual models, ranked them in the context of regional impact modelling and identified the top-end GCMs that reproduce modern regional changes of the selected meteorological parameters and climatic indexes "better than average". Selected top-end GCMs were used to compose several ensembles, each combining results from different numbers of models. Ensembles were ranked using the same algorithm and outliers were eliminated. We then used data from top-end ensembles for the 2000-2100 period to construct climatic projections that are likely to be "better than average" in predicting the climatic parameters that govern the state of the environment in Northern Eurasia. The ultimate conclusions of our study are the following.
    • High-end GCMs that demonstrate excellent skills in conventional atmospheric model intercomparison experiments are not necessarily the best at replicating the climatic characteristics that govern the state of the environment in Northern Eurasia, and independent model evaluation at the regional level is necessary to identify "better than average" GCMs.
    • Each ensemble combining results from several "better than average" models replicates the selected meteorological parameters and climatic indexes better than any single GCM. The ensemble skills are parameter-specific and depend on the models the ensemble comprises. The best results are not necessarily those based on the ensemble comprising all "better than average" models.
    • Comprehensive evaluation of climatic scenarios using specific criteria narrows the range of uncertainties in environmental projections.
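
    A minimal sketch of the ranking-and-ensembling procedure described above: score each model by RMSE against region-averaged observations, then compare ensemble means built from the k best models. The data here are synthetic stand-ins, not the CMIP5 or station values used in the study.

```python
import numpy as np

# Synthetic stand-in data: 6 "models", 17 coherent regions (as in the study).
rng = np.random.default_rng(0)
n_models, n_regions = 6, 17
obs = rng.normal(0.0, 1.0, n_regions)                      # observed regional index
sims = obs + rng.normal(0.0, 0.5, (n_models, n_regions))   # model output with error

rmse = np.sqrt(((sims - obs) ** 2).mean(axis=1))           # per-model skill score
ranking = np.argsort(rmse)                                 # best model first

for k in range(2, n_models + 1):                           # ensembles of the k best
    ens = sims[ranking[:k]].mean(axis=0)                   # ensemble-mean field
    ens_rmse = np.sqrt(((ens - obs) ** 2).mean())
    print(f"top-{k} ensemble RMSE: {ens_rmse:.3f}")
```

As the abstract notes, the best ensemble is not necessarily the one comprising all "better than average" models; the loop above makes that comparison explicit.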

  3. Potential breeding distributions of U.S. birds predicted with both short-term variability and long-term average climate data

    Treesearch

    Brooke L. Bateman; Anna M. Pidgeon; Volker C. Radeloff; Curtis H. Flather; Jeremy VanDerWal; H. Resit Akcakaya; Wayne E. Thogmartin; Thomas P. Albright; Stephen J. Vavrus; Patricia J. Heglund

    2016-01-01

    Climate conditions, such as temperature or precipitation, averaged over several decades strongly affect species distributions, as evidenced by experimental results and a plethora of models demonstrating statistical relations between species occurrences and long-term climate averages. However, long-term averages can conceal climate changes that have occurred in...

  4. How Reliable is Bayesian Model Averaging Under Noisy Data? Statistical Assessment and Implications for Robust Model Selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang

    2014-05-01

    Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability of being the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does not, however, yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may lack robustness due to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as a source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit in performance of the model set which is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length.
    We illustrate our suggested approach with an application to model selection between different soil-plant models, following up on a study by Wöhling et al. (2013). Results show that measurement noise compromises the reliability of model ranking and causes a significant amount of weighting uncertainty if the calibration data time series is not long enough to compensate for its noisiness. This additional contribution to the overall predictive uncertainty is neglected without our approach. Thus, we strongly advocate including our suggested upgrade in the Bayesian model averaging routine.
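
    The brute-force Monte Carlo idea described above can be sketched as follows. The competing models, data and noise level are synthetic stand-ins (not the soil-plant models of the study), and the Gaussian-likelihood weighting with equal priors is a common textbook form, not necessarily the exact formulation used by the authors.

```python
import numpy as np

# Perturb the calibration data with random measurement-error realizations,
# recompute posterior model weights each time, and inspect their spread.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
truth = 2.0 * t                      # "true" process
preds = np.stack([2.0 * t,           # model A (correct)
                  2.2 * t,           # model B (biased)
                  2.0 * t ** 1.1])   # model C (wrong functional form)
sigma = 0.3                          # assumed measurement-noise std
data = truth + rng.normal(0, sigma, t.size)

def bma_weights(y):
    # Gaussian likelihood, equal model priors, normalised weights
    sse = ((preds - y) ** 2).sum(axis=1)
    loglik = -sse / (2 * sigma ** 2)
    w = np.exp(loglik - loglik.max())
    return w / w.sum()

# repeat the weight computation under perturbed data realisations
reps = np.array([bma_weights(data + rng.normal(0, sigma, t.size))
                 for _ in range(500)])
print("mean weights:  ", reps.mean(axis=0).round(3))
print("weighting std: ", reps.std(axis=0).round(3))
```

The per-model standard deviation of the weights is the "weighting variance" diagnostic: if it is large, the ranking is not robust to measurement noise.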

  5. Analysing the accuracy of machine learning techniques to develop an integrated influent time series model: case study of a sewage treatment plant, Malaysia.

    PubMed

    Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed

    2018-04-01

    The function of a sewage treatment plant is to treat the sewage to acceptable standards before it is discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R 2) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. The RE results also show that the SVM model's frequency of errors above 10% or below -10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow and NAR fit well for low and average inflow ranges. The integrated model developed includes the NAR model for low and average influent and the SVM model for peak inflow.
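
    Minimal implementations of the assessment measures named above (RMSE, R 2 and relative error); PFC and LFC are study-specific criteria and are omitted here. The example values are invented.

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

def r_squared(obs, sim):
    """Coefficient of determination, 1 - SS_res / SS_tot."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def relative_error(obs, sim):
    """Signed relative error; values outside +/-10 % can be flagged."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return (sim - obs) / obs

obs = [10.0, 12.0, 9.0, 11.0]   # invented observed flows
sim = [9.5, 12.5, 9.0, 10.0]    # invented simulated flows
print(rmse(obs, sim), r_squared(obs, sim))
```
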

  6. Evaluation of column-averaged methane in models and TCCON with a focus on the stratosphere

    NASA Astrophysics Data System (ADS)

    Ostler, Andreas; Sussmann, Ralf; Patra, Prabir K.; Houweling, Sander; De Bruine, Marko; Stiller, Gabriele P.; Haenel, Florian J.; Plieninger, Johannes; Bousquet, Philippe; Yin, Yi; Saunois, Marielle; Walker, Kaley A.; Deutscher, Nicholas M.; Griffith, David W. T.; Blumenstock, Thomas; Hase, Frank; Warneke, Thorsten; Wang, Zhiting; Kivi, Rigel; Robinson, John

    2016-09-01

    The distribution of methane (CH4) in the stratosphere can be a major driver of spatial variability in the dry-air column-averaged CH4 mixing ratio (XCH4), which is being measured increasingly for the assessment of CH4 surface emissions. Chemistry-transport models (CTMs) therefore need to simulate the tropospheric and stratospheric fractional columns of XCH4 accurately for estimating surface emissions from XCH4. Simulations from three CTMs are tested against XCH4 observations from the Total Carbon Column Network (TCCON). We analyze how the model-TCCON agreement in XCH4 depends on the model representation of stratospheric CH4 distributions. Model equivalents of TCCON XCH4 are computed with stratospheric CH4 fields from both the model simulations and from satellite-based CH4 distributions from MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) and MIPAS CH4 fields adjusted to ACE-FTS (Atmospheric Chemistry Experiment Fourier Transform Spectrometer) observations. Using MIPAS-based stratospheric CH4 fields in place of model simulations improves the model-TCCON XCH4 agreement for all models. For the Atmospheric Chemistry Transport Model (ACTM) the average XCH4 bias is significantly reduced from 38.1 to 13.7 ppb, whereas small improvements are found for the models TM5 (Transport Model, version 5; from 8.7 to 4.3 ppb) and LMDz (Laboratoire de Météorologie Dynamique model with zooming capability; from 6.8 to 4.3 ppb). Replacing model simulations with MIPAS stratospheric CH4 fields adjusted to ACE-FTS reduces the average XCH4 bias for ACTM (3.3 ppb), but increases the average XCH4 bias for TM5 (10.8 ppb) and LMDz (20.0 ppb). These findings imply that model errors in simulating stratospheric CH4 contribute to model biases. Current satellite instruments cannot definitively measure stratospheric CH4 to sufficient accuracy to eliminate these biases. 
Applying transport diagnostics to the models indicates that model-to-model differences in the simulation of stratospheric transport, notably the age of stratospheric air, can largely explain the inter-model spread in stratospheric CH4 and, hence, its contribution to XCH4. Therefore, it would be worthwhile to analyze how individual model components (e.g., physical parameterization, meteorological data sets, model horizontal/vertical resolution) impact the simulation of stratospheric CH4 and XCH4.

  7. Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework

    NASA Astrophysics Data System (ADS)

    Achieng, K. O.; Zhu, J.

    2017-12-01

    There are a number of North American Regional Climate Change Assessment Program (NARCCAP) climatic models that have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits the data. However, model selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future runoff projections. Baseflow separation, performed with a two-parameter recursive digital filter also known as the Eckhardt filter, is used to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of runoff simulated from the ten RCM models. The primary objective of this study is to evaluate how well RCM multi-model ensembles simulate surface runoff, in a Bayesian framework. Specifically, we investigate and discuss the following questions: How well does the ten-model RCM ensemble jointly simulate surface runoff when averaging over all the models using BMA, given a priori surface runoff? What are the effects of model uncertainty on surface runoff simulation?
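
    The two-parameter recursive digital filter mentioned above (Eckhardt, 2005) can be sketched in a few lines; the parameter values below are commonly used defaults, not the study's calibrated ones.

```python
# Sketch of the Eckhardt two-parameter baseflow filter:
# b_t = ((1 - BFImax) * a * b_{t-1} + (1 - a) * BFImax * Q_t) / (1 - a * BFImax)
def eckhardt_baseflow(q, alpha=0.98, bfi_max=0.80):
    """q: streamflow series; alpha: recession constant;
    bfi_max: maximum baseflow index. Returns the baseflow series."""
    b = [q[0] * bfi_max]                       # simple initialisation choice
    for qt in q[1:]:
        bt = ((1 - bfi_max) * alpha * b[-1]
              + (1 - alpha) * bfi_max * qt) / (1 - alpha * bfi_max)
        b.append(min(bt, qt))                  # baseflow cannot exceed total flow
    return b

flow = [5.0, 20.0, 12.0, 8.0, 6.0, 5.5]        # invented streamflow values
base = eckhardt_baseflow(flow)
surface = [qt - bt for qt, bt in zip(flow, base)]   # surface-runoff component
print([round(x, 2) for x in base])
```

The surface-runoff component obtained this way is what the study feeds into the BMA step as the a priori runoff.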

  8. Analysis and comparison of safety models using average daily, average hourly, and microscopic traffic.

    PubMed

    Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie

    2018-02-01

    There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for the real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at the daily and hourly levels; the real-time model was also used at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency. Copyright © 2017 Elsevier Ltd. All rights reserved.
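
    The crash-frequency models above are Bayesian Poisson-lognormal models; as a simplified frequentist stand-in, the sketch below fits an ordinary Poisson regression (log link) by iteratively reweighted least squares to synthetic data with two of the named covariates. The coefficient values and data are invented for illustration.

```python
import numpy as np

# Synthetic segment-level data: crash counts driven by log(volume) and
# the standard deviation of speed, two covariates named in the abstract.
rng = np.random.default_rng(2)
n = 500
log_volume = rng.normal(8.0, 0.5, n)           # log of traffic volume
speed_sd = rng.normal(5.0, 1.0, n)             # std dev of speed
X = np.column_stack([np.ones(n), log_volume, speed_sd])
beta_true = np.array([-6.0, 0.8, 0.1])         # invented "true" coefficients
y = rng.poisson(np.exp(X @ beta_true))         # simulated crash counts

# Poisson regression via iteratively reweighted least squares (IRLS)
beta = np.zeros(3)
for _ in range(25):
    mu = np.exp(X @ beta)                      # current mean estimate
    w = mu                                     # Poisson working weights
    z = X @ beta + (y - mu) / mu               # working response
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

print("estimated coefficients:", beta.round(2))
```

A positive fitted coefficient plays the same interpretive role as the "positively significant" variables reported in the study, though without the Bayesian uncertainty quantification.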

  9. DEVELOPMENT AND EVALUATION OF A MODEL FOR ESTIMATING LONG-TERM AVERAGE OZONE EXPOSURES TO CHILDREN

    EPA Science Inventory

    Long-term average exposures of school-age children can be modelled using longitudinal measurements collected during the Harvard Southern California Chronic Ozone Exposure Study over a 12-month period: June 1995-May 1996. The database contains over 200 young children with perso...

  10. Estimating effective soil properties of heterogeneous areas for modeling infiltration and redistribution

    USDA-ARS's Scientific Manuscript database

    Field scale water infiltration and soil-water and solute transport models require spatially-averaged “effective” soil hydraulic parameters to represent the average flux and storage. The values of these effective parameters vary for different conditions, processes, and component soils in a field. For...

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino

    We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaitre-Tolman-Bondi solution of Einstein's equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w_0 and w_a follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model.

  12. A mathematical model of reservoir sediment quality prediction based on land-use and erosion processes in watershed

    NASA Astrophysics Data System (ADS)

    Junakova, N.; Balintova, M.; Junak, J.

    2017-10-01

    The aim of this paper is to propose a mathematical model for determining the total nitrogen (N) and phosphorus (P) content in eroded soil particles, with emphasis on the prediction of bottom sediment quality in reservoirs. The adsorbed nutrient concentrations are calculated using the Universal Soil Loss Equation (USLE), extended by the determination of the average soil nutrient concentration in top soils. The average annual vegetation and management factor is divided into five periods of the cropping cycle. For selected plants, the average plant nutrient uptake divided into five cropping periods is also proposed. The average nutrient concentrations in eroded soil particles in adsorbed form are modified by a sediment enrichment ratio to obtain the total nutrient content in transported soil particles. The model was designed for the conditions of north-eastern Slovakia. The study was carried out in the agricultural basin of the small water reservoir Klusov.
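
    The load estimate described above amounts to multiplying USLE soil loss by an average top-soil nutrient concentration and a sediment enrichment ratio. A sketch with illustrative parameter values (not the Klusov-basin values, and without the five-period split of the C factor):

```python
# Sketch of a USLE-based nutrient-load estimate. All parameter values
# below are illustrative placeholders.
def usle_soil_loss(R, K, LS, C, P):
    """A = R * K * LS * C * P  (t/ha/yr with metric factor units)."""
    return R * K * LS * C * P

def nutrient_load(soil_loss, nutrient_conc, enrichment_ratio):
    """soil_loss in t/ha/yr; nutrient_conc in kg nutrient per t of soil;
    enrichment_ratio accounts for fines being eroded preferentially."""
    return soil_loss * nutrient_conc * enrichment_ratio

A = usle_soil_loss(R=500.0, K=0.30, LS=1.2, C=0.25, P=1.0)   # t/ha/yr
n_load = nutrient_load(A, nutrient_conc=1.5, enrichment_ratio=2.0)
print(f"soil loss {A:.1f} t/ha/yr, N load {n_load:.1f} kg/ha/yr")
```

In the paper's scheme, C (and the plant nutrient uptake) would be evaluated separately for each of the five cropping periods and the results summed over the year.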

  13. Modeling the effect of control on the wake of a utility-scale turbine via large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Yang, Xiaolei; Annoni, Jennifer; Seiler, Pete; Sotiropoulos, Fotis

    2014-06-01

    A model of the University of Minnesota EOLOS research turbine (Clipper Liberty C96) is developed, integrating the C96 torque control law with a high-fidelity actuator line large-eddy simulation (LES) model. Good agreement with blade element momentum theory is obtained for the power coefficient curve under uniform inflow. Three different cases, fixed rotor rotational speed ω, fixed tip-speed ratio (TSR) and generator torque control, have been simulated for turbulent inflow. With approximately the same time-averaged ω, the time-averaged power is in good agreement with measurements for all three cases. Although the time-averaged aerodynamic torque is nearly the same for the three cases, the root-mean-square (rms) of the aerodynamic torque fluctuations is significantly larger for the case with fixed ω. No significant differences have been observed for the time-averaged flow fields behind the turbine for these three cases.

  14. Derivation and calibration of a gas metal arc welding (GMAW) dynamic droplet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reutzel, E.W.; Einerson, C.J.; Johnson, J.A.

    1996-12-31

    A rudimentary, existing dynamic model for droplet growth and detachment in gas metal arc welding (GMAW) was improved and calibrated to match experimental data. The model simulates droplets growing at the end of an imaginary spring. Mass is added to the drop as the electrode melts, the droplet grows, and the spring is displaced. Detachment occurs when one of two criteria is met, and the amount of mass that is detached is a function of the droplet velocity at the time of detachment. Improvements to the model include the addition of a second criterion for drop detachment, a more sophisticated model of the power supply and secondary electric circuit, and the incorporation of a variable electrode resistance. Relevant physical parameters in the model were adjusted during model calibration. The average current, droplet frequency, and parameter-space location of globular-to-streaming mode transition were used as criteria for tuning the model. The average current predicted by the calibrated model matched the experimental average current to within 5% over a wide range of operating conditions.
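
    A schematic, uncalibrated version of the droplet model described above: a drop of growing mass on a damped spring, with a single displacement-based detachment criterion (the paper uses two criteria and a calibrated circuit model). All parameter values below are illustrative, not the calibrated values from the paper.

```python
# Schematic mass-spring droplet model: mass grows at the melting rate,
# the drop sags under gravity against a damped spring, and it detaches
# (and restarts) when displacement exceeds a critical value.
def simulate_droplet(melt_rate=1e-3,   # molten-wire mass rate, kg/s
                     k=2.0,            # effective spring stiffness, N/m
                     c=0.01,           # damping coefficient, kg/s
                     x_crit=2e-5,      # detachment displacement, m
                     m0=1e-6,          # initial droplet mass, kg
                     dt=1e-5, steps=3000):
    g = 9.81
    m, x, v = m0, 0.0, 0.0
    detach_times = []
    for i in range(steps):
        m += melt_rate * dt                  # electrode melting adds mass
        a = g - (k * x + c * v) / m          # gravity vs spring and damping
        v += a * dt                          # explicit Euler step
        x += v * dt
        if x > x_crit:                       # detachment criterion
            detach_times.append(i * dt)
            m, x, v = m0, 0.0, 0.0           # a new droplet starts growing
    return detach_times

times = simulate_droplet()
print(f"{len(times)} detachments; first at {times[0] * 1e3:.1f} ms")
```

The droplet frequency produced by such a loop is the kind of quantity the authors used (together with average current) to tune their calibrated model.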

  15. [Comparison of predictive effect between the single auto regressive integrated moving average (ARIMA) model and the ARIMA-generalized regression neural network (GRNN) combination model on the incidence of scarlet fever].

    PubMed

    Zhu, Yu; Xia, Jie-lai; Wang, Jing

    2009-09-01

    To compare the single auto regressive integrated moving average (ARIMA) model with the ARIMA-generalized regression neural network (GRNN) combination model in research on the incidence of scarlet fever. An ARIMA model was established based on the monthly incidence of scarlet fever in one city from 2000 to 2006. The fitted values of the ARIMA model were used as the input of the GRNN, and the actual values were used as the output. After training the GRNN, the fitting effects of the single ARIMA model and the ARIMA-GRNN combination model were compared. The mean error rates (MER) of the single ARIMA model and the ARIMA-GRNN combination model were 31.6% and 28.7%, respectively, and the determination coefficients (R(2)) of the two models were 0.801 and 0.872, respectively. The fitting efficacy of the ARIMA-GRNN combination model was better than that of the single ARIMA model, which has practical value in research on time series data such as the incidence of scarlet fever.
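
    The combination scheme described above can be sketched as follows. An AR(1) least-squares fit stands in for the full ARIMA model, the GRNN is implemented as Nadaraya-Watson kernel regression (which is what a GRNN computes), and the data are synthetic; the error measure is a simple mean relative error, a stand-in for the paper's MER.

```python
import numpy as np

# Synthetic monthly-incidence-like series with a seasonal cycle.
rng = np.random.default_rng(3)
n = 120
y = 10 + np.sin(np.arange(n) * 2 * np.pi / 12) + rng.normal(0, 0.3, n)

# Stage 1: AR(1) by least squares (stand-in for the ARIMA model).
a, b = np.polyfit(y[:-1], y[1:], 1)
fitted = a * y[:-1] + b                       # one-step-ahead fitted values
actual = y[1:]

# Stage 2: GRNN = Gaussian-kernel weighted average of training targets,
# with the stage-1 fitted values as input and the actual values as output.
def grnn(x_train, y_train, x_query, sigma=0.3):
    w = np.exp(-((x_query[:, None] - x_train[None, :]) ** 2)
               / (2 * sigma ** 2))
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

corrected = grnn(fitted, actual, fitted)      # combination-model fit
mre_ar = np.mean(np.abs(actual - fitted) / actual)
mre_hybrid = np.mean(np.abs(actual - corrected) / actual)
print(f"mean relative error, AR(1): {mre_ar:.3f}, hybrid: {mre_hybrid:.3f}")
```
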

  16. Evaluation of column-averaged methane in models and TCCON with a focus on the stratosphere

    DOE PAGES

    Ostler, Andreas; Sussmann, Ralf; Patra, Prabir K.; ...

    2016-09-28

    The distribution of methane (CH4) in the stratosphere can be a major driver of spatial variability in the dry-air column-averaged CH4 mixing ratio (XCH4), which is being measured increasingly for the assessment of CH4 surface emissions. Chemistry-transport models (CTMs) therefore need to simulate the tropospheric and stratospheric fractional columns of XCH4 accurately for estimating surface emissions from XCH4. Simulations from three CTMs are tested against XCH4 observations from the Total Carbon Column Network (TCCON). We analyze how the model–TCCON agreement in XCH4 depends on the model representation of stratospheric CH4 distributions. Model equivalents of TCCON XCH4 are computed with stratospheric CH4 fields from both the model simulations and from satellite-based CH4 distributions from MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) and MIPAS CH4 fields adjusted to ACE-FTS (Atmospheric Chemistry Experiment Fourier Transform Spectrometer) observations. Using MIPAS-based stratospheric CH4 fields in place of model simulations improves the model–TCCON XCH4 agreement for all models. For the Atmospheric Chemistry Transport Model (ACTM) the average XCH4 bias is significantly reduced from 38.1 to 13.7 ppb, whereas small improvements are found for the models TM5 (Transport Model, version 5; from 8.7 to 4.3 ppb) and LMDz (Laboratoire de Météorologie Dynamique model with zooming capability; from 6.8 to 4.3 ppb). Replacing model simulations with MIPAS stratospheric CH4 fields adjusted to ACE-FTS reduces the average XCH4 bias for ACTM (3.3 ppb), but increases the average XCH4 bias for TM5 (10.8 ppb) and LMDz (20.0 ppb). These findings imply that model errors in simulating stratospheric CH4 contribute to model biases. Current satellite instruments cannot definitively measure stratospheric CH4 to sufficient accuracy to eliminate these biases.
    Applying transport diagnostics to the models indicates that model-to-model differences in the simulation of stratospheric transport, notably the age of stratospheric air, can largely explain the inter-model spread in stratospheric CH4 and, hence, its contribution to XCH4. Furthermore, it would be worthwhile to analyze how individual model components (e.g., physical parameterization, meteorological data sets, model horizontal/vertical resolution) impact the simulation of stratospheric CH4 and XCH4.

  17. Evaluation of column-averaged methane in models and TCCON with a focus on the stratosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ostler, Andreas; Sussmann, Ralf; Patra, Prabir K.

    The distribution of methane (CH4) in the stratosphere can be a major driver of spatial variability in the dry-air column-averaged CH4 mixing ratio (XCH4), which is being measured increasingly for the assessment of CH4 surface emissions. Chemistry-transport models (CTMs) therefore need to simulate the tropospheric and stratospheric fractional columns of XCH4 accurately for estimating surface emissions from XCH4. Simulations from three CTMs are tested against XCH4 observations from the Total Carbon Column Network (TCCON). We analyze how the model–TCCON agreement in XCH4 depends on the model representation of stratospheric CH4 distributions. Model equivalents of TCCON XCH4 are computed with stratospheric CH4 fields from both the model simulations and from satellite-based CH4 distributions from MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) and MIPAS CH4 fields adjusted to ACE-FTS (Atmospheric Chemistry Experiment Fourier Transform Spectrometer) observations. Using MIPAS-based stratospheric CH4 fields in place of model simulations improves the model–TCCON XCH4 agreement for all models. For the Atmospheric Chemistry Transport Model (ACTM) the average XCH4 bias is significantly reduced from 38.1 to 13.7 ppb, whereas small improvements are found for the models TM5 (Transport Model, version 5; from 8.7 to 4.3 ppb) and LMDz (Laboratoire de Météorologie Dynamique model with zooming capability; from 6.8 to 4.3 ppb). Replacing model simulations with MIPAS stratospheric CH4 fields adjusted to ACE-FTS reduces the average XCH4 bias for ACTM (3.3 ppb), but increases the average XCH4 bias for TM5 (10.8 ppb) and LMDz (20.0 ppb). These findings imply that model errors in simulating stratospheric CH4 contribute to model biases. Current satellite instruments cannot definitively measure stratospheric CH4 to sufficient accuracy to eliminate these biases.
    Applying transport diagnostics to the models indicates that model-to-model differences in the simulation of stratospheric transport, notably the age of stratospheric air, can largely explain the inter-model spread in stratospheric CH4 and, hence, its contribution to XCH4. Furthermore, it would be worthwhile to analyze how individual model components (e.g., physical parameterization, meteorological data sets, model horizontal/vertical resolution) impact the simulation of stratospheric CH4 and XCH4.

  18. Identification of coffee bean varieties using hyperspectral imaging: influence of preprocessing methods and pixel-wise spectra analysis.

    PubMed

    Zhang, Chu; Liu, Fei; He, Yong

    2018-02-01

    Hyperspectral imaging was used to identify and to visualize the coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted by different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD). Meanwhile, spatial preprocessing of the gray-scale image at each wavelength was conducted by median filter (MF). Support vector machine (SVM) models using full sample average spectra and pixel-wise spectra, and the selected optimal wavelengths by second derivative spectra all achieved classification accuracy over 80%. Primarily, the SVM models using pixel-wise spectra were used to predict the sample average spectra, and these models obtained over 80% of the classification accuracy. Secondly, the SVM models using sample average spectra were used to predict pixel-wise spectra, but achieved with lower than 50% of classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set, and resulted in the good prediction results for pixel-wise spectra and sample average spectra. The overall results indicated the effectiveness of using spectral preprocessing and the adoption of pixel-wise spectra. The results provided an alternative way of data processing for applications of hyperspectral imaging in food industry.
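
    A minimal version of the moving-average (MA) spectral preprocessing mentioned above, applied to a synthetic pixel spectrum; the wavelength range and noise level are invented for illustration.

```python
import numpy as np

def moving_average(spectrum, window=5):
    """Smooth a 1-D spectrum with a centred sliding window.
    mode='same' keeps the length; edge bins are zero-padded partial sums,
    so the first and last few values are slightly attenuated."""
    kernel = np.ones(window) / window
    return np.convolve(spectrum, kernel, mode="same")

wavelengths = np.linspace(900, 1700, 256)              # nm, illustrative range
spectrum = np.exp(-((wavelengths - 1200) / 150) ** 2)  # smooth spectral feature
noisy = spectrum + np.random.default_rng(4).normal(0, 0.05, wavelengths.size)

smoothed = moving_average(noisy, window=5)
print("noise std before/after:",
      round(float(np.std(noisy - spectrum)), 3),
      round(float(np.std(smoothed - spectrum)), 3))
```

In the study's pipeline, each pixel spectrum would be preprocessed like this (or with WT/EMD) before being passed to the SVM classifier.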

  19. Average capacity of the ground to train communication link of a curved track in the turbulence of gamma-gamma distribution

    NASA Astrophysics Data System (ADS)

    Yang, Yanqiu; Yu, Lin; Zhang, Yixin

    2017-04-01

    A model of the average capacity of an optical wireless communication link with pointing errors for the ground-to-train channel of a curved track is established based on the non-Kolmogorov turbulence spectrum. By adopting the gamma-gamma distribution model, we derive an average capacity expression for this channel. The numerical analysis reveals that heavier fog reduces the average capacity of the link. To obtain a larger average link capacity, the strength of atmospheric turbulence, the variance of the pointing errors, and the covered track length need to be reduced, while the normalized beamwidth and the average signal-to-noise ratio (SNR) of the turbulence-free link need to be increased. The transmit aperture can be enlarged to expand the beamwidth and enhance the signal intensity, thereby reducing the impact of beam wander. If the system adopts automatic beam tracking at a receiver positioned on the roof of the train, eliminating the pointing errors caused by beam wander and train vibration, the equivalent average capacity of the channel reaches its maximum value. The impact of variations in the non-Kolmogorov spectral index on the average link capacity can be ignored.

  20. The electrostatics of a dusty plasma

    NASA Technical Reports Server (NTRS)

    Whipple, E. C.; Mendis, D. A.; Northrop, T. G.

    1986-01-01

    The potential distribution in a plasma containing dust grains was derived for cases where the Debye length can be larger or smaller than the average intergrain spacing. Three models were treated for the grain-plasma system, under the assumption that the system of dust and plasma is charge-neutral: a permeable grain model, an impermeable grain model, and a capacitor model that does not require the nearest-neighbor approximation of the other two models. A gauge-invariant form of Poisson's equation was used, linearized about the average potential in the system. The charging currents to a grain are functions of the difference between the grain potential and this average potential. Expressions were obtained for the equilibrium potential of the grain and for the gauge-invariant capacitance between the grain and the plasma. The charge on a grain is determined by the product of this capacitance and the grain-plasma potential difference.

  1. Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.

    2016-12-01

    The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients in pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the model are postulated based on microscale averages, and parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving-mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the experimental data not used for the parameter estimation.

  2. Financial Incentives and Cervical Cancer Screening Participation in Ontario's Primary Care Practice Models.

    PubMed

    Pendrith, Ciara; Thind, Amardeep; Zaric, Gregory S; Sarma, Sisira

    2016-08-01

    The primary objective of this paper is to compare cervical cancer screening rates of family physicians in Ontario's two dominant reformed practice models, Family Health Group (FHG) and Family Health Organization (FHO), and traditional fee-for-service (FFS) model. Both reformed models formally enrol patients and offer extensive pay-for-performance incentives; however, they differ by remuneration for core services (FHG is FFS; FHO is capitated). The secondary objective is to estimate the average and marginal costs of screening in each model. Using administrative data on 7,298 family physicians and their 2,083,633 female patients aged 35-69 eligible for cervical cancer screening in 2011, we assessed screening rates after adjusting for patient and physician characteristics. Predicted screening rates, fees and bonus payments were used to estimate the average and marginal costs of cervical cancer screening. Adjusted screening rates were highest in the FHG (81.9%), followed by the FHO (79.6%), and then the traditional FFS model (74.2%). The cost of a cervical cancer screening was $18.30 in the FFS model. The estimated average cost of screening in the FHGs and FHOs were $29.71 and $35.02, respectively, while the corresponding marginal costs were $33.05 and $39.06. We found significant differences in cervical cancer screening rates across Ontario's primary care practice models. Cervical screening rates were significantly higher in practice models eligible for incentives (FHGs and FHOs) than the traditional FFS model. However, the average and marginal cost of screening were lowest in the traditional FFS model and highest in the FHOs. Copyright © 2016 Longwoods Publishing.

  3. Comparing Satellite Rainfall Estimates with Rain-Gauge Data: Optimal Strategies Suggested by a Spectral Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Validation of satellite remote-sensing methods for estimating rainfall against rain-gauge data is attractive because of the direct nature of the rain-gauge measurements. Comparisons of satellite estimates to rain-gauge data are difficult, however, because of the extreme variability of rain and the fact that satellites view large areas over a short time while rain gauges monitor small areas continuously. In this paper, a statistical model of rainfall variability developed for studies of sampling error in averages of satellite data is used to examine the impact of spatial and temporal averaging of satellite and gauge data on intercomparison results. The model parameters were derived from radar observations of rain, but the model appears to capture many of the characteristics of rain-gauge data as well. The model predicts that many months of data from areas containing a few gauges are required to validate satellite estimates over the areas, and that the areas should be of the order of several hundred km in diameter. Over gauge arrays of sufficiently high density, the optimal areas and averaging times are reduced. The possibility of using time-weighted averages of gauge data is explored.

  4. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
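    To make the idea concrete, here is a minimal sketch of model averaging for linear models, using smoothed-AIC weights as one common frequentist weighting choice; the data, the candidate set, and the weighting scheme are illustrative assumptions, not the SEM-specific procedure or test statistic proposed in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    y = 1.0 + 2.0 * x1 + rng.normal(0, 0.5, n)  # true model depends on x1 only

    def fit_and_aic(X, y):
        """Least-squares fit plus the Gaussian AIC of the fitted model."""
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ coef) ** 2))
        aic = n * np.log(rss / n) + 2 * X.shape[1]
        return coef, aic

    ones = np.ones(n)
    candidates = [
        np.column_stack([ones, x1]),        # model 1: intercept + x1
        np.column_stack([ones, x2]),        # model 2: intercept + x2
        np.column_stack([ones, x1, x2]),    # model 3: full model
    ]
    fits = [fit_and_aic(X, y) for X in candidates]
    aics = np.array([aic for _, aic in fits])

    # Smoothed-AIC weights: every candidate contributes, none is "selected".
    w = np.exp(-0.5 * (aics - aics.min()))
    w /= w.sum()

    # Model-averaged prediction = weighted combination over all candidates.
    y_avg = sum(wi * (X @ coef) for wi, (coef, _), X in zip(w, fits, candidates))
    ```

    A poorly fitting candidate (here, the x2-only model) receives near-zero weight automatically, rather than being discarded by a discrete selection step.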

  5. Neural net forecasting for geomagnetic activity

    NASA Technical Reports Server (NTRS)

    Hernandez, J. V.; Tajima, T.; Horton, W.

    1993-01-01

    We use neural nets to construct nonlinear models to forecast the AL index given solar wind and interplanetary magnetic field (IMF) data. We follow two approaches: (1) the state space reconstruction approach, which is a nonlinear generalization of autoregressive-moving average models (ARMA) and (2) the nonlinear filter approach, which reduces to a moving average model (MA) in the linear limit. The database used here is that of Bargatze et al. (1985).
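    As a concrete reference point for the linear limit mentioned above, an autoregressive model can be fitted by least squares in a few lines; the AR(1) series below is a synthetic stand-in, not the Bargatze et al. database:

    ```python
    import numpy as np

    def fit_ar(series, p):
        """Least-squares fit of an AR(p) model:
        x[t] ≈ a_1*x[t-1] + ... + a_p*x[t-p]."""
        n = len(series)
        # Column k holds lag-k values aligned with the targets series[p:].
        X = np.column_stack([series[p - k : n - k] for k in range(1, p + 1)])
        coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
        return coef

    # Synthetic AR(1) series with true coefficient 0.8 (illustrative only).
    rng = np.random.default_rng(0)
    x = np.zeros(2000)
    for t in range(1, x.size):
        x[t] = 0.8 * x[t - 1] + rng.normal(0, 0.1)

    a = fit_ar(x, p=1)
    next_value = float(a[0] * x[-1])  # one-step-ahead forecast for AR(1)
    ```

    A neural-net forecaster generalizes this by replacing the linear map from lagged inputs to the next value with a nonlinear one.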

  6. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... nearest 0.1 mpg; or (iii) For natural gas-fueled model types, the fuel economy value calculated for that... as determined in § 600.208-12(b)(5)(i). (vi) For natural gas dual fuel model types, for model years... natural gas as determined in § 600.208-12(b)(5)(ii) divided by 0.15 provided the requirements of paragraph...

  7. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... nearest 0.1 mpg; or (iii) For natural gas-fueled model types, the fuel economy value calculated for that... as determined in § 600.208-12(b)(5)(i). (vi) For natural gas dual fuel model types, for model years... natural gas as determined in § 600.208-12(b)(5)(ii) divided by 0.15 provided the requirements of paragraph...

  8. An Investigation of the Fit of Linear Regression Models to Data from an SAT[R] Validity Study. Research Report 2011-3

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.; Sinharay, Sandip; Haberman, Shelby J.; Chajewski, Michael

    2011-01-01

    This study examined the adequacy of a multiple linear regression model for predicting first-year college grade point average (FYGPA) using SAT[R] scores and high school grade point average (HSGPA). A variety of techniques, both graphical and statistical, were used to examine if it is possible to improve on the linear regression model. The results…
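    A regression of this form can be reproduced on synthetic data with ordinary least squares; the coefficients and score ranges below are invented for illustration and are not the study's estimates:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    # Hypothetical predictors: combined SAT score and high school GPA.
    sat = rng.uniform(800, 1600, n)
    hsgpa = rng.uniform(2.0, 4.0, n)

    # Hypothetical "true" relationship used to generate FYGPA (illustrative).
    fygpa = 0.4 + 0.0012 * sat + 0.45 * hsgpa + rng.normal(0, 0.3, n)

    # Fit FYGPA ~ intercept + SAT + HSGPA by ordinary least squares.
    X = np.column_stack([np.ones(n), sat, hsgpa])
    coef, *_ = np.linalg.lstsq(X, fygpa, rcond=None)
    intercept, b_sat, b_hsgpa = coef

    predicted = X @ coef
    residuals = fygpa - predicted  # residual plots are one graphical adequacy check
    ```

    Examining the residuals for curvature or non-constant variance is the kind of graphical check used to ask whether a linear model can be improved upon.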

  9. The pitch of short-duration fundamental frequency glissandos.

    PubMed

    d'Alessandro, C; Rosset, S; Rossi, J P

    1998-10-01

    Pitch perception for short-duration fundamental frequency (F0) glissandos was studied. In the first part, new measurements using the method of adjustment are reported. Stimuli were F0 glissandos centered at 220 Hz. The parameters under study were: F0 glissando extents (0, 0.8, 1.5, 3, 6, and 12 semitones, i.e., 0, 10.17, 18.74, 38.17, 76.63, and 155.56 Hz), F0 glissando durations (50, 100, 200, and 300 ms), F0 glissando directions (rising or falling), and the extremity of F0 glissandos matched (beginning or end). In the second part, the main results are discussed: (1) perception seems to correspond to an average of the frequencies present in the vicinity of the extremity matched; (2) the higher extremities of the glissando seem more important; (3) adjustments at the end are closer to the extremities than adjustments at the beginning. In the third part, numerical models accounting for the experimental data are proposed: a time-average model and a weighted time-average model. Optimal parameters for these models are derived. The weighted time-average model achieves a 94% accurate prediction rate for the experimental data. The numerical model is successful in predicting the pitch of short-duration F0 glissandos.
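    The two numerical models can be sketched as follows; the exponential weighting used here is an illustrative way of encoding "later samples matter more", not the paper's fitted parameterization:

    ```python
    import numpy as np

    def time_average_pitch(f0):
        """Time-average model: perceived pitch is the mean of the F0 trajectory."""
        return float(np.mean(f0))

    def weighted_time_average_pitch(f0, decay=5.0):
        """Weighted time-average model: samples near the end of the glissando
        receive larger weights (exponential weighting, an assumed form)."""
        t = np.linspace(0.0, 1.0, f0.size)
        weights = np.exp(decay * (t - 1.0))  # grows toward the glissando's end
        return float(np.sum(weights * f0) / np.sum(weights))

    # Rising glissando: 200 Hz to 240 Hz over the stimulus duration.
    f0 = np.linspace(200.0, 240.0, 100)
    p_plain = time_average_pitch(f0)              # midpoint frequency
    p_weighted = weighted_time_average_pitch(f0)  # pulled toward the end frequency
    ```

    For this rising glissando the weighted model predicts a pitch above the plain time average, consistent with the observation that adjustments sit closer to the matched extremity.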

  10. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    PubMed Central

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-01-01

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam is presented, based on average strains measured from vibrating wire strain gauges (VWSGs), the sensors most frequently used in the construction field. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads. PMID:25831087

  11. Competition model for aperiodic stochastic resonance in a Fitzhugh-Nagumo model of cardiac sensory neurons.

    PubMed

    Kember, G C; Fenton, G A; Armour, J A; Kalyaniwalla, N

    2001-04-01

    Regional cardiac control depends upon feedback of the status of the heart from afferent neurons responding to chemical and mechanical stimuli as transduced by an array of sensory neurites. Emerging experimental evidence shows that neural control in the heart may be partially exerted using subthreshold inputs that are amplified by noisy mechanical fluctuations. This amplification is known as aperiodic stochastic resonance (ASR). Neural control in the noisy, subthreshold regime is difficult to see since there is a near absence of any correlation between input and the output, the latter being the average firing (spiking) rate of the neuron. This lack of correlation is unresolved by traditional energy models of ASR since these models are unsuitable for identifying "cause and effect" between such inputs and outputs. In this paper, the "competition between averages" model is used to determine what portion of a noisy, subthreshold input is responsible, on average, for the output of sensory neurons as represented by the Fitzhugh-Nagumo equations. A physiologically relevant conclusion of this analysis is that a nearly constant amount of input is responsible for a spike, on average, and this amount is approximately independent of the firing rate. Hence, correlation measures are generally reduced as the firing rate is lowered even though neural control under this model is actually unaffected.

  12. Queues with Choice via Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Pender, Jamol; Rand, Richard H.; Wesson, Elizabeth

    Delay or queue length information has the potential to influence a customer's decision to join a queue. It is therefore imperative for managers of queueing systems to understand how the information they provide will affect the performance of the system. To this end, we construct and analyze two two-dimensional deterministic fluid models that incorporate customer choice behavior based on delayed queue length information. In the first fluid model, customers join each queue according to a Multinomial Logit model; however, the queue length information the customer receives is delayed by a constant Δ. We show that the delay can cause oscillations or asynchronous behavior in the model depending on the value of Δ. In the second model, customers receive information about the queue length through a moving average of the queue length. Although it has been shown empirically that giving patients moving average information causes oscillations and asynchronous behavior in U.S. hospitals, we show analytically, for the first time, that the moving average fluid model can exhibit oscillations, and we determine their dependence on the moving average window. Our analysis thus provides new insight into how operators of service systems should report queue length information to customers and how delayed information can produce unwanted system dynamics.
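    A minimal sketch of the first fluid model, two queues with arrivals split by a Multinomial Logit rule applied to delayed queue lengths, can be integrated with a forward-Euler scheme; all parameter values here are illustrative assumptions, not the paper's:

    ```python
    import math

    def simulate_two_queues(lam=10.0, mu=1.0, theta=1.0, delay=1.0,
                            dt=0.01, t_end=50.0):
        """Forward-Euler integration of a two-queue fluid model in which
        customers join according to a Multinomial Logit rule evaluated on
        queue lengths delayed by `delay` (the constant Δ in the text)."""
        steps = int(t_end / dt)
        lag = int(delay / dt)
        q1 = [0.0] * (steps + 1)
        q2 = [1.0] * (steps + 1)  # slight asymmetry so the dynamics are visible
        for t in range(steps):
            # Customers see queue lengths from `delay` time units ago.
            d1 = q1[max(t - lag, 0)]
            d2 = q2[max(t - lag, 0)]
            e1, e2 = math.exp(-theta * d1), math.exp(-theta * d2)
            p1 = e1 / (e1 + e2)  # MNL probability of joining queue 1
            q1[t + 1] = q1[t] + dt * (lam * p1 - mu * q1[t])
            q2[t + 1] = q2[t] + dt * (lam * (1.0 - p1) - mu * q2[t])
        return q1, q2

    q1, q2 = simulate_two_queues()
    ```

    Whether the two trajectories settle to a balanced equilibrium or oscillate against each other depends on the delay Δ, which is the behavior the paper analyzes; the total fluid content always relaxes toward λ/μ.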

  13. Simulated impacts of climate change on phosphorus loading to Lake Michigan

    USGS Publications Warehouse

    Robertson, Dale M.; Saad, David A.; Christiansen, Daniel E.; Lorenz, David J

    2016-01-01

    Phosphorus (P) loading to the Great Lakes has caused various types of eutrophication problems. Future climatic changes may modify this loading because climatic models project changes in future meteorological conditions, especially for the key hydrologic driver — precipitation. Therefore, the goal of this study is to project how P loading may change from the range of projected climatic changes. To project the future response in P loading, the HydroSPARROW approach was developed that links results from two spatially explicit models, the SPAtially Referenced Regression on Watershed attributes (SPARROW) transport and fate watershed model and the water-quantity Precipitation Runoff Modeling System (PRMS). PRMS was used to project changes in streamflow throughout the Lake Michigan Basin using downscaled meteorological data from eight General Circulation Models (GCMs) subjected to three greenhouse gas emission scenarios. Downscaled GCMs project a + 2.1 to + 4.0 °C change in average-annual air temperature (+ 2.6 °C average) and a − 5.1% to + 16.7% change in total annual precipitation (+ 5.1% average) for this geographic area by the middle of this century (2045–2065) and larger changes by the end of the century. The climatic changes by mid-century are projected to result in a − 21.2% to + 8.9% change in total annual streamflow (− 1.8% average) and a − 29.6% to + 17.2% change in total annual P loading (− 3.1% average). Although the average projected changes in streamflow and P loading are relatively small for the entire basin, considerable variability exists spatially and among GCMs because of their variability in projected future precipitation.

  14. PIV measurements in the near wakes of hollow cylinders with holes

    NASA Astrophysics Data System (ADS)

    Firat, Erhan; Ozkan, Gokturk M.; Akilli, Huseyin

    2017-05-01

    The wake flows behind fixed, hollow, rigid circular cylinders with two rows of holes connecting the front and rear stagnation lines were investigated using particle image velocimetry (PIV) for various combinations of three hole diameters, d = 0.1D, 0.15D, and 0.20D, six hole-to-hole distances, l = 2d, 3d, 4d, 5d, 6d, and 7d, and ten angles of incidence (α), from 0° to 45° in steps of 5°, at a Reynolds number of Re = 6,900. Time-averaged velocity distributions, instantaneous and time-averaged vorticity patterns, time-averaged streamline topology, and hot spots of turbulent kinetic energy arising from the interaction of the shear layers from the models were presented to show how the wake flow was modified by the presence of the self-issuing jets, with various momentums, emanating from the downstream holes. In general, as the hole diameter, which is directly related to jet momentum, increased, the values of the time-averaged wake characteristics (length of the time-averaged recirculation region, vortex formation length, length of the shear layers, and gap between the shear layers) increased. Irrespective of the d and l tested, the vortex formation lengths of the models are greater than that of the cylinder without holes (reference model); that is, the vortex formation process was shifted downstream by the aid of the jets. The time-averaged wake characteristics were found to be very sensitive to α; as α increased, the variation of these characteristics could be modeled by exponential decay functions. The effect of l on the three-dimensional vortex shedding patterns in the near wake of the models was also discussed.

  15. Twelve-month, 12 km resolution North American WRF-Chem v3.4 air quality simulation: performance evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tessum, C. W.; Hill, J. D.; Marshall, J. D.

    We present results from and evaluate the performance of a 12-month, 12 km horizontal resolution year 2005 air pollution simulation for the contiguous United States using the WRF-Chem (Weather Research and Forecasting with Chemistry) meteorology and chemical transport model (CTM). We employ the 2005 US National Emissions Inventory, the Regional Atmospheric Chemistry Mechanism (RACM), and the Modal Aerosol Dynamics Model for Europe (MADE) with a volatility basis set (VBS) secondary aerosol module. Overall, model performance is comparable to contemporary modeling efforts used for regulatory and health-effects analysis, with an annual average daytime ozone (O3) mean fractional bias (MFB) of 12% and an annual average fine particulate matter (PM2.5) MFB of −1%. WRF-Chem, as configured here, tends to overpredict total PM2.5 at some high concentration locations and generally overpredicts average 24 h O3 concentrations. Performance is better at predicting daytime-average and daily peak O3 concentrations, which are more relevant for regulatory and health effects analyses relative to annual average values. Predictive performance for PM2.5 subspecies is mixed: the model overpredicts particulate sulfate (MFB = 36%), underpredicts particulate nitrate (MFB = −110%) and organic carbon (MFB = −29%), and relatively accurately predicts particulate ammonium (MFB = 3%) and elemental carbon (MFB = 3%), so that the accuracy in total PM2.5 predictions is to some extent a function of offsetting over- and underpredictions of PM2.5 subspecies. Model predictive performance for PM2.5 and its subspecies is in general worse in winter and in the western US than in other seasons and regions, suggesting spatial and temporal opportunities for future WRF-Chem model development and evaluation.

  16. Twelve-month, 12 km resolution North American WRF-Chem v3.4 air quality simulation: performance evaluation

    DOE PAGES

    Tessum, C. W.; Hill, J. D.; Marshall, J. D.

    2015-04-07

    We present results from and evaluate the performance of a 12-month, 12 km horizontal resolution year 2005 air pollution simulation for the contiguous United States using the WRF-Chem (Weather Research and Forecasting with Chemistry) meteorology and chemical transport model (CTM). We employ the 2005 US National Emissions Inventory, the Regional Atmospheric Chemistry Mechanism (RACM), and the Modal Aerosol Dynamics Model for Europe (MADE) with a volatility basis set (VBS) secondary aerosol module. Overall, model performance is comparable to contemporary modeling efforts used for regulatory and health-effects analysis, with an annual average daytime ozone (O3) mean fractional bias (MFB) of 12% and an annual average fine particulate matter (PM2.5) MFB of −1%. WRF-Chem, as configured here, tends to overpredict total PM2.5 at some high concentration locations and generally overpredicts average 24 h O3 concentrations. Performance is better at predicting daytime-average and daily peak O3 concentrations, which are more relevant for regulatory and health effects analyses relative to annual average values. Predictive performance for PM2.5 subspecies is mixed: the model overpredicts particulate sulfate (MFB = 36%), underpredicts particulate nitrate (MFB = −110%) and organic carbon (MFB = −29%), and relatively accurately predicts particulate ammonium (MFB = 3%) and elemental carbon (MFB = 3%), so that the accuracy in total PM2.5 predictions is to some extent a function of offsetting over- and underpredictions of PM2.5 subspecies. Model predictive performance for PM2.5 and its subspecies is in general worse in winter and in the western US than in other seasons and regions, suggesting spatial and temporal opportunities for future WRF-Chem model development and evaluation.

  17. Simulated effects of irrigation on salinity in the Arkansas River Valley in Colorado

    USGS Publications Warehouse

    Goff, K.; Lewis, M.E.; Person, M.A.; Konikow, Leonard F.

    1998-01-01

    Agricultural irrigation has a substantial impact on water quantity and quality in the lower Arkansas River valley of southeastern Colorado. A two-dimensional flow and solute transport model was used to evaluate the potential effects of changes in irrigation on the quantity and quality of water in the alluvial aquifer and in the Arkansas River along a 17.7 km reach of the river. The model was calibrated to aquifer water-level and dissolved-solids concentration data collected throughout the 24-year study period (1971-95). Two categories of irrigation management were simulated with the calibrated model: (1) a decrease in ground water withdrawals for irrigation; and (2) cessation of all irrigation from ground water and surface water sources. In the modeled category of decreased irrigation from ground water pumping, there was a resulting 6.9% decrease in the average monthly ground water salinity, a 0.6% decrease in average monthly river salinity, and an 11.1% increase in ground water return flows to the river. In the modeled category of the cessation of all irrigation, average monthly ground water salinity decreased by 25%; average monthly river salinity decreased by 4.4%; and ground water return flows to the river decreased by an average of 64%. In all scenarios, simulated ground water salinity decreased relative to historical conditions for about 12 years before reaching a new dynamic equilibrium condition. These potential changes in salinity could result in improved water quality for irrigation purposes downstream from the affected area.

  18. An analytical study of various telecommunication networks using Markov models

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, M.; Jayamani, E.; Ezhumalai, P.

    2015-04-01

    The main aim of this paper is to examine issues relating to the performance of various telecommunication networks and to apply queueing theory for better design and improved efficiency. First, an analytical study of queues is given that quantifies the phenomenon of waiting lines using representative measures of performance, such as average queue length (the average number of customers in the queue), average waiting time in the queue (the average time a customer waits), and average facility utilization (the proportion of time the service facility is in use). Second, using a Matlab simulator, the findings of the investigations are summarized, with results obtained by a methodology to (a) compare the waiting time and average number of messages in the queue for M/M/1 and M/M/2 queues and (b) compare the performance of M/M/1 and M/D/1 queues and study the effect of increasing the number of servers on the blocking probability in the M/M/k/k queue model.
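    The representative measures named above have closed forms for the M/M/1 queue, and the blocking probability of the M/M/k/k model is given by the Erlang B formula; these are standard textbook results, sketched here for illustration:

    ```python
    def mm1_metrics(lam, mu):
        """Average performance measures of an M/M/1 queue (requires lam < mu).

        Returns (L, W, Lq, Wq): average number in system, average time in
        system, average number waiting in queue, and average waiting time.
        """
        rho = lam / mu                  # facility utilization
        L = rho / (1.0 - rho)
        W = 1.0 / (mu - lam)
        Lq = rho ** 2 / (1.0 - rho)
        Wq = rho / (mu - lam)
        return L, W, Lq, Wq

    def erlang_b(lam, mu, k):
        """Blocking probability of an M/M/k/k loss system (Erlang B formula),
        computed with the standard numerically stable recurrence
        B(0) = 1, B(j) = a*B(j-1) / (j + a*B(j-1))."""
        a = lam / mu                    # offered load in Erlangs
        b = 1.0
        for j in range(1, k + 1):
            b = a * b / (j + a * b)
        return b
    ```

    For example, with λ = 1 and μ = 2 the M/M/1 queue has on average L = 1 customer in the system and W = 1 time unit in the system, consistent with Little's law L = λW; adding servers to the M/M/k/k model drives the Erlang B blocking probability down, which is the effect studied in the paper.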

  19. A state-based probabilistic model for tumor respiratory motion prediction

    NASA Astrophysics Data System (ADS)

    Kalet, Alan; Sandison, George; Wu, Huanmei; Schmitz, Ruth

    2010-12-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no-prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more general HMM-type predictive models. RMS errors for the time average model approach the theoretical limit of the HMM, and predicted state sequences are well correlated with sequences known to fit the data.

  20. A highly detailed FEM volume conductor model based on the ICBM152 average head template for EEG source imaging and TCS targeting.

    PubMed

    Haufe, Stefan; Huang, Yu; Parra, Lucas C

    2015-08-01

    In electroencephalographic (EEG) source imaging as well as in transcranial current stimulation (TCS), it is common to model the head using either three-shell boundary element (BEM) or more accurate finite element (FEM) volume conductor models. Since building FEMs is computationally demanding and labor intensive, they are often extensively reused as templates, even for subjects with mismatching anatomies. BEMs can in principle be used to efficiently build individual volume conductor models; however, the limiting factor for such individualization is the high acquisition cost of structural magnetic resonance images. Here, we build a highly detailed (0.5 mm³ resolution, 6 tissue-type segmentation, 231 electrodes) FEM based on the ICBM152 template, a nonlinear average of 152 adult human heads, which we call ICBM-NY. We show that, through more realistic electrical modeling, our model is as accurate as individual BEMs. Moreover, by using an unbiased population average, our model is also more accurate than FEMs built from mismatching individual anatomies. Our model is made available in Matlab format.

  1. EXPERIMENTAL AND MODEL-COMPUTED AREA AVERAGED VERTICAL PROFILES OF WIND SPEED FOR EVALUATION OF MESOSCALE URBAN CANOPY SCHEMES

    EPA Science Inventory

    Numerous urban canopy schemes have recently been developed for mesoscale models in order to approximate the drag and turbulent production effects of a city on the air flow. However, little data exists by which to evaluate the efficacy of the schemes, since "area-averaged"…

  2. 77 FR 69924 - Request for Comments on a Renewal of a Previously Approved Information Collection: Production...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-21

    ...: Kenneth R. Katz, Fuel Economy Division, Office of International Policy, Fuel Economy and Consumer Programs... Parts 531 and 533 Passenger Car Average Fuel Economy Standards--Model Years 2016-2025; Light Truck Average Fuel Economy Standards--Model Years 2016-2025; Production Plan Data. OMB Control Number: 2127-0655...

  3. 49 CFR 526.5 - Earning offsetting monetary credits in future model years.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... each planned product action (e.g., new model, mix change) which will affect the average fuel economy of... Act must contain the following information: (a) Projected average fuel economy and production levels for the class of automobiles which may fail to comply with a fuel economy standard and for any other...

  4. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... MOTOR VEHICLES Fuel Economy Regulations for Model Year 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer... will be calculated to the nearest 0.1 mpg for the categories of automobiles identified in this section...

  5. Flame-conditioned turbulence modeling for reacting flows

    NASA Astrophysics Data System (ADS)

    Macart, Jonathan F.; Mueller, Michael E.

    2017-11-01

    Conventional approaches to turbulence modeling in reacting flows rely on unconditional averaging or filtering, that is, consideration of the momentum equations only in physical space, implicitly assuming that the flame only weakly affects the turbulence, aside from a variation in density. Conversely, for scalars, which are strongly coupled to the flame structure, their evolution equations are often projected onto a reduced-order manifold, that is, conditionally averaged or filtered, on a flame variable such as a mixture fraction or progress variable. Such approaches include Conditional Moment Closure (CMC) and related variants. However, recent observations from Direct Numerical Simulation (DNS) have indicated that the flame can strongly affect turbulence in premixed combustion at low Karlovitz number. In this work, a new approach to turbulence modeling for reacting flows is investigated in which conditionally averaged or filtered equations are evolved for the momentum. The conditionally-averaged equations for the velocity and its covariances are derived, and budgets are evaluated from DNS databases of turbulent premixed planar jet flames. The most important terms in these equations are identified, and preliminary closure models are proposed.
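
    The conditional-averaging idea described above can be sketched numerically: rather than averaging a velocity field over all of physical space, one bins samples by the value of a flame variable and averages within each bin. A minimal sketch follows, assuming a progress variable normalized to [0, 1]; the function name and binning choices are illustrative, not from the paper.

```python
import numpy as np

def conditional_average(u, c, n_bins=10):
    """Mean of a (flattened) field u conditioned on a progress
    variable c in [0, 1]: returns bin centers and <u | c> per bin,
    with NaN where a bin holds no samples."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Map each sample to its bin; clip so c == 1.0 lands in the last bin.
    idx = np.clip(np.digitize(c, edges) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=u, minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    means = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, means
```

    In a conditionally averaged momentum model, the same binning would be applied to velocity components and their covariances before evaluating budgets.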

  6. Development of sustainable precision farming systems for swine: estimating real-time individual amino acid requirements in growing-finishing pigs.

    PubMed

    Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C

    2012-07-01

    The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic model components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing if it followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by the existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with the average mean absolute error of 12.45 and 1.85%, respectively. The average mean absolute error obtained with the InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. 
The mechanistic model component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the feed intake and body weight trajectory of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the amino acid requirements of each animal, taking into account the intake and growth changes of the animal.
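
    The percentage "mean absolute error" used above to compare the proposed model with InraPorc reads as a relative error averaged over days; a sketch under that assumption (the exact formula is not reproduced in the record):

```python
import numpy as np

def mean_absolute_percent_error(observed, predicted):
    """Mean of |predicted - observed| / observed, in percent.
    Assumes strictly positive observed values (e.g., DFI, BW)."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    return float(100.0 * np.mean(np.abs(pred - obs) / obs))
```

    With such a metric, the reported 12.45% (DFI) and 1.85% (BW) figures would be averages of daily relative errors over each pig's trajectory.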

  7. The application of a geometric optical canopy reflectance model to semiarid shrub vegetation

    NASA Technical Reports Server (NTRS)

    Franklin, Janet; Turner, Debra L.

    1992-01-01

    Estimates are obtained of the average plant size and density of shrub vegetation on the basis of SPOT High Resolution Visible Multispectral imagery from Chihuahuan desert areas, using the Li and Strahler (1985) model. The aggregated predictions for a number of stands within a class were accurate to within one or two standard errors of the observed average value. Accuracy was highest for those classes of vegetation where the nonrandom shrub pattern was characterized for the class on the basis of the average coefficient of determination of density.

  8. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.

  9. Relationship between road traffic accidents and conflicts recorded by drive recorders.

    PubMed

    Lu, Guangquan; Cheng, Bo; Kuzumaki, Seigo; Mei, Bingsong

    2011-08-01

    Road traffic conflicts can be used to estimate the probability of accident occurrence, assess road safety, or evaluate road safety programs if the relationship between road traffic accidents and conflicts is known. To this end, we propose a model for the relationship between road traffic accidents and conflicts recorded by drive recorders (DRs). DRs were installed in 50 cars in Beijing to collect records of traffic conflicts. Data containing 1366 conflicts were collected in 193 days. The hourly distributions of conflicts and accidents were used to model the relationship between accidents and conflicts. To eliminate time series and base number effects, we defined and used 2 parameters: average annual number of accidents per 10,000 vehicles per hour and average number of conflicts per 10,000 vehicles per hour. A model was developed to describe the relationship between the two parameters. If A(i) = average annual number of accidents per 10,000 vehicles per hour at hour i, and E(i) = average number of conflicts per 10,000 vehicles per hour at hour i, the relationship can be expressed as [Formula in text] (α>0, β>0). The average number of traffic accidents increases as the number of conflicts rises, but the rate of increase decelerates as the number of conflicts increases further. The proposed model can describe the relationship between road traffic accidents and conflicts in a simple manner. According to our analysis, the model fits the present data.

  10. Model selection and model averaging in phylogenetics: advantages of Akaike information criterion and Bayesian approaches over likelihood ratio tests.

    PubMed

    Posada, David; Buckley, Thomas R

    2004-10-01

    Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
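
    The AIC-based model averaging mentioned above rests on standard Akaike weights: each candidate model's AIC is rescaled against the best model, and the weights give the relative support used to average parameter estimates. A minimal sketch of those textbook formulas (not the authors' phylogenetic implementation):

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
    where delta_i = AIC_i - min(AIC)."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def model_averaged(estimates, aic_values):
    """Model-averaged estimate of a parameter shared across models."""
    w = akaike_weights(aic_values)
    return float(np.dot(w, np.asarray(estimates, dtype=float)))
```

    The same weights can quantify the relative importance of a substitution-model parameter by summing the weights of all models that include it.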

  11. 49 CFR 531.5 - Fuel economy standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel... automobiles shall comply with the fleet average fuel economy standards in Table I, expressed in miles per... passenger automobile fleet shall comply with the fleet average fuel economy level calculated for that model...

  12. 49 CFR 531.5 - Fuel economy standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel... automobiles shall comply with the fleet average fuel economy standards in Table I, expressed in miles per... passenger automobile fleet shall comply with the fleet average fuel economy level calculated for that model...

  13. Calibration of a texture-based model of a ground-water flow system, western San Joaquin Valley, California

    USGS Publications Warehouse

    Phillips, Steven P.; Belitz, Kenneth

    1991-01-01

    The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. 
The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
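
    The three averaging methods compared in the calibration above are the standard ways of forming an equivalent hydraulic conductivity from cell-by-cell values. A sketch assuming equal cell weights (the study's percent-coarse weighting is omitted for brevity):

```python
import numpy as np

def equivalent_conductivity(k_values, method):
    """Equivalent hydraulic conductivity of a set of cells.
    Arithmetic averaging suits flow parallel to layering (horizontal),
    harmonic suits flow across layering (vertical); geometric lies
    between the two. Equal weights are assumed in this sketch."""
    k = np.asarray(k_values, dtype=float)
    if method == "arithmetic":
        return float(k.mean())
    if method == "geometric":
        return float(np.exp(np.log(k).mean()))
    if method == "harmonic":
        return float(len(k) / np.sum(1.0 / k))
    raise ValueError(f"unknown method: {method}")
```

    For any mixture of coarse and fine conductivities, harmonic ≤ geometric ≤ arithmetic, which is why the vertical (across-layer) equivalent conductivity is systematically lower than the horizontal one.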

  14. SU-F-BRD-01: A Logistic Regression Model to Predict Objective Function Weights in Prostate Cancer IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, J; Chan, T; Lee, T

    2014-06-15

    Purpose: To develop a statistical model that predicts optimization objective function weights from patient geometry for intensity-modulated radiotherapy (IMRT) of prostate cancer. Methods: A previously developed inverse optimization method (IOM) is applied retrospectively to determine optimal weights for 51 treated patients. We use an overlap volume ratio (OVR) of bladder and rectum for different PTV expansions in order to quantify patient geometry in explanatory variables. Using the optimal weights as ground truth, we develop and train a logistic regression (LR) model to predict the rectum weight and thus the bladder weight. Post hoc, we fix the weights of the left femoral head, right femoral head, and an artificial structure that encourages conformity to the population average while normalizing the bladder and rectum weights accordingly. The population average of objective function weights is used for comparison. Results: The OVR at 0.7cm was found to be the most predictive of the rectum weights. The LR model performance is statistically significant when compared to the population average over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and mean voxel dose to the bladder, rectum, CTV, and PTV. On average, the LR model predicted bladder and rectum weights that are both 63% closer to the optimal weights compared to the population average. The treatment plans resulting from the LR weights have, on average, a rectum V70Gy that is 35% closer to the clinical plan and a bladder V70Gy that is 43% closer. Similar results are seen for bladder V54Gy and rectum V54Gy. Conclusion: Statistical modelling from patient anatomy can be used to determine objective function weights in IMRT for prostate cancer. Our method allows the treatment planners to begin the personalization process from an informed starting point, which may lead to more consistent clinical plans and reduce overall planning time.

  15. Evaluation of Near-Tropopause Ozone Distributions in the Global Modeling Initiative Combined Stratosphere/Troposphere Model with Ozonesonde Data

    NASA Technical Reports Server (NTRS)

    Considine, David B.; Logan, Jennifer A.; Olsen, Mark A.

    2008-01-01

    The NASA Global Modeling Initiative has developed a combined stratosphere/troposphere chemistry and transport model which fully represents the processes governing atmospheric composition near the tropopause. We evaluate model ozone distributions near the tropopause, using two high vertical resolution monthly mean ozone profile climatologies constructed with ozonesonde data, one by averaging on pressure levels and the other relative to the thermal tropopause. Model ozone is high biased at the SH tropical and NH midlatitude tropopause by approx. 45% in a 4 deg. latitude x 5 deg. longitude model simulation. Increasing the resolution to 2 deg. x 2.5 deg. increases the NH tropopause high bias to approx. 60%, but decreases the tropical tropopause bias to approx. 30%, an effect of a better-resolved residual circulation. The tropopause ozone biases appear not to be due to an overly vigorous residual circulation or excessive stratosphere/troposphere exchange, but are more likely due to insufficient vertical resolution or excessive vertical diffusion near the tropopause. In the upper troposphere and lower stratosphere, model/measurement intercomparisons are strongly affected by the averaging technique. NH and tropical mean model lower stratospheric biases are less than 20%. In the upper troposphere, the 2 deg. x 2.5 deg. simulation exhibits mean high biases of approx. 20% and approx. 35% during April in the tropics and NH midlatitudes, respectively, compared to the pressure-averaged climatology. However, relative-to-tropopause averaging produces upper troposphere high biases of approx. 30% and 70% in the tropics and NH midlatitudes. This is because relative-to-tropopause averaging better preserves large cross-tropopause O3 gradients, which are seen in the daily sonde data, but not in daily model profiles. The relative annual cycle of ozone near the tropopause is reproduced very well in the model at Northern Hemisphere midlatitudes. 
In the tropics, the model amplitude of the near tropopause annual cycle is weak. This is likely due to the annual amplitude of mean vertical upwelling near the tropopause, which analysis suggests is approx. 30% weaker than in the real atmosphere.

  16. Light propagation in Swiss-cheese models of random close-packed Szekeres structures: Effects of anisotropy and comparisons with perturbative results

    NASA Astrophysics Data System (ADS)

    Koksbang, S. M.

    2017-03-01

    Light propagation in two Swiss-cheese models based on anisotropic Szekeres structures is studied and compared with light propagation in Swiss-cheese models based on the Szekeres models' underlying Lemaitre-Tolman-Bondi models. The study shows that the anisotropy of the Szekeres models has only a small effect on quantities such as redshift-distance relations, projected shear and expansion rate along individual light rays. The average angular diameter distance to the last scattering surface is computed for each model. Contrary to earlier studies, the results obtained here are (mostly) in agreement with perturbative results. In particular, a small negative shift, δ_DA := (DA − DA,bg)/DA,bg, in the angular diameter distance relative to its background value DA,bg is obtained upon line-of-sight averaging in three of the four models. The results are, however, not statistically significant. In the fourth model, there is a small positive shift of especially low statistical significance. The line-of-sight averaged inverse magnification at z = 1100 is consistent with 1 to a high level of confidence for all models, indicating that the area of the surface corresponding to z = 1100 is close to that of the background.

  17. A systematic comparison of two-equation Reynolds-averaged Navier-Stokes turbulence models applied to shock-cloud interactions

    NASA Astrophysics Data System (ADS)

    Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.

    2017-07-01

    Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.

  18. Analyzing average and conditional effects with multigroup multilevel structural equation models

    PubMed Central

    Mayer, Axel; Nagengast, Benjamin; Fletcher, John; Steyer, Rolf

    2014-01-01

    Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advantages compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension. PMID:24795668

  19. The B-dot Earth Average Magnetic Field

    NASA Technical Reports Server (NTRS)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth's magnetic field is solved with complex mathematical models based on a mean square integral. Depending on the selection of the Earth magnetic model, the average Earth's magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. The solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
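
    For context, the b-dot controller whose damping the paper exploits follows the classic detumbling law: the commanded magnetic dipole opposes the rate of change of the body-frame magnetic field, m = -k dB/dt. A generic sketch with a finite-difference derivative; the gain and sampling are illustrative, not the paper's flight values:

```python
import numpy as np

def bdot_dipole(b_now, b_prev, dt, gain):
    """Classic b-dot law: commanded dipole m = -k * dB/dt, with dB/dt
    approximated by a finite difference of successive body-frame
    magnetometer readings."""
    b_dot = (np.asarray(b_now, dtype=float) - np.asarray(b_prev, dtype=float)) / dt
    return -gain * b_dot
```

    Because the torque m x B always opposes the apparent field rotation, the law damps body rates without needing any onboard Earth-field model, which is the property the paper's averaging technique builds on.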

  20. Brazil wheat yield covariance model

    NASA Technical Reports Server (NTRS)

    Callis, S. L.; Sakamoto, C.

    1984-01-01

    A model based on multiple regression was developed to estimate wheat yields for the wheat growing states of Rio Grande do Sul, Parana, and Santa Catarina in Brazil. The meteorological data of these three states were pooled and the years 1972 to 1979 were used to develop the model since there was no technological trend in the yields during these years. Predictor variables were derived from monthly total precipitation, average monthly mean temperature, and average monthly maximum temperature.

  1. Argentina soybean yield model

    NASA Technical Reports Server (NTRS)

    Callis, S. L.; Sakamoto, C.

    1984-01-01

    A model based on multiple regression was developed to estimate soybean yields for the country of Argentina. A meteorological data set was obtained for the country by averaging data for stations within the soybean growing area. Predictor variables for the model were derived from monthly total precipitation and monthly average temperature. A trend variable was included for the years 1969 to 1978 since an increasing trend in yields due to technology was observed between these years.
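
    The two yield models above share the same skeleton: ordinary least squares on weather-derived predictors, plus a technology-trend regressor active only from a given year. A simplified sketch with annual aggregates standing in for the monthly predictors (variable names and the trend parameterization are assumptions):

```python
import numpy as np

def fit_yield_model(precip, temp, years, yields, trend_start=None):
    """Least-squares fit of yield ~ 1 + precip + temp (+ linear trend).
    If trend_start is given, the trend regressor is years elapsed since
    that year (zero before it), mimicking a technology trend."""
    years = np.asarray(years, dtype=float)
    cols = [np.ones_like(years),
            np.asarray(precip, dtype=float),
            np.asarray(temp, dtype=float)]
    if trend_start is not None:
        cols.append(np.maximum(years - trend_start, 0.0))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, np.asarray(yields, dtype=float), rcond=None)
    return coef  # [intercept, precip, temp, (trend)]
```

    The Brazil model pools stations and omits the trend term (no trend in 1972-1979), while the Argentina model includes it for 1969-1978.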

  2. Forecasting asthma-related hospital admissions in London using negative binomial models.

    PubMed

    Soyiri, Ireneous N; Reidpath, Daniel D; Sarran, Christophe

    2013-05-01

    Health forecasting can improve health service provision and individual patient outcomes. Environmental factors are known to impact chronic respiratory conditions such as asthma, but little is known about the extent to which these factors can be used for forecasting. Using weather, air quality and hospital asthma admissions data for London (2005-2006), two related negative binomial models were developed and compared with a naive seasonal model. In the first approach, predictive forecasting models were fitted with 7-day averages of each potential predictor, and a subsequent multivariable model was constructed. In the second strategy, an exhaustive search was conducted for the best-fitting models among possible combinations of lags (0-14 days) of all the environmental effects on asthma admissions. Three models were considered: a base model (seasonal effects), contrasted with a 7-day average model and a selected-lags model (weather and air quality effects). Season is the best predictor of asthma admissions. The 7-day average and seasonal models were trivial to implement. The selected-lags model was computationally intensive, but of no real value over much more easily implemented models. Seasonal factors can predict daily hospital asthma admissions in London, and there is little evidence that additional weather and air quality information would add to forecast accuracy.
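
    The 7-day-average predictors in the first approach amount to a trailing rolling mean of each daily environmental series. A minimal sketch; the handling of the first six days (shorter window) is an assumption, as the record does not specify it:

```python
import numpy as np

def trailing_7day_mean(x):
    """Trailing 7-day mean of a daily series. The first six entries use
    the shorter window available up to that day."""
    x = np.asarray(x, dtype=float)
    c = np.cumsum(x)
    out = np.empty_like(x)
    for i in range(len(x)):
        lo = max(0, i - 6)                      # window start (inclusive)
        total = c[i] - (c[lo - 1] if lo > 0 else 0.0)
        out[i] = total / (i - lo + 1)
    return out
```

    Each smoothed series would then enter the negative binomial regression as a candidate predictor alongside the seasonal terms.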

  3. Mass Function of Galaxy Clusters in Relativistic Inhomogeneous Cosmology

    NASA Astrophysics Data System (ADS)

    Ostrowski, Jan J.; Buchert, Thomas; Roukema, Boudewijn F.

    The current cosmological model (ΛCDM) with the underlying FLRW metric relies on the assumption of local isotropy, hence homogeneity of the Universe. Difficulties arise when one attempts to justify this model as an average description of the Universe from first principles of general relativity, since in general, the Einstein tensor built from the averaged metric is not equal to the averaged stress-energy tensor. In this context, the discrepancy between these quantities is called "cosmological backreaction" and has been the subject of scientific debate among cosmologists and relativists for more than 20 years. Here we present one of the methods to tackle this problem, i.e. averaging the scalar parts of the Einstein equations, together with its application, the cosmological mass function of galaxy clusters.

  4. Modelling the average velocity of propagation of the flame front in a gasoline engine with hydrogen additives

    NASA Astrophysics Data System (ADS)

    Smolenskaya, N. M.; Smolenskii, V. V.

    2018-01-01

    The paper presents models for calculating the average velocity of propagation of the flame front, obtained from the results of experimental studies. Experimental studies were carried out on a single-cylinder gasoline engine UIT-85 with hydrogen additions of up to 6% of the fuel mass. The article shows the influence of hydrogen addition on the average propagation velocity of the flame front in the main combustion phase, and presents the dependences of the turbulent propagation velocity of the flame front in the second combustion phase on the mixture composition and operating modes. It also shows the influence of the normal combustion rate on the average flame propagation velocity in the third combustion phase.

  5. 40 CFR 86.449 - Averaging provisions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Later New Motorcycles, General Provisions § 86.449 Averaging provisions. (a) This section describes how... credits may not be banked for use in later model years, except as specified in paragraph (j) of this... average emission levels are at or below the applicable standards in § 86.410-2006. (2) Compliance with the...

  6. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
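
    Order selection of the kind discussed above is commonly done by fitting candidate models and comparing an information criterion. As a simplified stand-in for full ARMA estimation, the sketch below fits pure AR(p) models by least squares and picks the order minimizing AIC (the moving-average part and the authors' parameter-constraining technique are omitted):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: x[t] ~ sum_k a_k * x[t-k].
    Returns coefficients (lag 1 first) and residual variance."""
    x = np.asarray(x, dtype=float)
    # Column k holds x[t-k-1] for targets t = p..n-1.
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, float(np.mean(resid ** 2))

def select_ar_order(x, max_p=6):
    """Pick the AR order minimizing AIC = n*log(sigma2) + 2p."""
    n = len(x)
    best = None
    for p in range(1, max_p + 1):
        _, sigma2 = fit_ar(x, p)
        aic = n * np.log(sigma2) + 2 * p
        if best is None or aic < best[1]:
            best = (p, aic)
    return best[0]
```

    The sampling-interval question enters through the data itself: resampling x at a different interval changes the fitted coefficients and often the selected order.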

  7. 40 CFR 600.503-78 - Abbreviations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon...

  8. 40 CFR 600.505-78 - Recordkeeping.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 1978 Passenger Automobiles and for 1979 and Later Model Year Automobiles (Light Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy and Manufacturer's Average Carbon...

  9. Hyperspectral remote sensing of plant biochemistry using Bayesian model averaging with variable and band selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Kaiguang; Valle, Denis; Popescu, Sorin

    2013-05-15

    Model specification remains challenging in spectroscopy of plant biochemistry, as exemplified by the availability of various spectral indices or band combinations for estimating the same biochemical. This lack of consensus in model choice across applications argues for a paradigm shift in hyperspectral methods to address model uncertainty and misspecification. We demonstrated one such method using Bayesian model averaging (BMA), which performs variable/band selection and quantifies the relative merits of many candidate models to synthesize a weighted average model with improved predictive performance. The utility of BMA was examined using a portfolio of 27 foliage spectral-chemical datasets representing over 80 species across the globe to estimate multiple biochemical properties, including nitrogen, hydrogen, carbon, cellulose, lignin, chlorophyll (a or b), carotenoid, polar and nonpolar extractives, leaf mass per area, and equivalent water thickness. We also compared BMA with partial least squares (PLS) and stepwise multiple regression (SMR). Results showed that all the biochemicals except carotenoid were accurately estimated from hyperspectral data with R2 values > 0.80.
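
    BMA weights are often approximated from BIC: each candidate band subset is fit, and exp(-BIC/2) rescaled against the best model approximates its posterior probability. A generic sketch of that approximation (not the authors' implementation, which used full Bayesian variable selection over bands):

```python
import numpy as np

def bic_weights(models, y, X_full):
    """Approximate posterior model probabilities from BIC.
    `models` is a list of index tuples selecting predictor columns
    (e.g., spectral bands); each candidate is fit by ordinary least
    squares with an intercept."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    bics = []
    for cols in models:
        X = np.column_stack([np.ones(n), X_full[:, cols]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        bics.append(n * np.log(rss / n) + X.shape[1] * np.log(n))
    bics = np.asarray(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    return w / w.sum()
```

    A BMA prediction is then the weight-averaged prediction across candidates, which is how model uncertainty propagates into the final estimate.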

  10. Metainference: A Bayesian inference method for heterogeneous systems.

    PubMed

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.

  11. Elucidating fluctuating diffusivity in center-of-mass motion of polymer models with time-averaged mean-square-displacement tensor

    NASA Astrophysics Data System (ADS)

    Miyaguchi, Tomoshige

    2017-10-01

    There have been increasing reports that the diffusion coefficient of macromolecules depends on time and fluctuates randomly. Here a method is developed to elucidate this fluctuating diffusivity from trajectory data. Time-averaged mean-square displacement (MSD), a common tool in single-particle-tracking (SPT) experiments, is generalized to a second-order tensor with which both magnitude and orientation fluctuations of the diffusivity can be clearly detected. This method is used to analyze the center-of-mass motion of four fundamental polymer models: the Rouse model, the Zimm model, a reptation model, and a rigid rodlike polymer. It is found that these models exhibit distinctly different types of magnitude and orientation fluctuations of diffusivity. This is an advantage of the present method over previous ones, such as the ergodicity-breaking parameter and a non-Gaussian parameter, because with either of these parameters it is difficult to distinguish the dynamics of the four polymer models. Also, the present method of a time-averaged MSD tensor could be used to analyze trajectory data obtained in SPT experiments.
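A minimal sketch of the time-averaged MSD tensor described above, computed for a single trajectory (the 2D random-walk test data are illustrative, not one of the four polymer models):

```python
import numpy as np

def ta_msd_tensor(traj, lag):
    """Time-averaged MSD tensor for one trajectory.

    traj: (T, d) array of positions; lag: integer lag in frames.
    The trace recovers the usual time-averaged MSD; the eigenvectors
    carry the orientation information of the diffusivity.
    """
    disp = traj[lag:] - traj[:-lag]          # (T-lag, d) displacements
    return disp.T @ disp / len(disp)

# isotropic 2D random walk with unit-variance steps: diagonal entries
# should be close to the lag, off-diagonal entries close to zero
rng = np.random.default_rng(1)
steps = rng.normal(size=(100_000, 2))
traj = np.cumsum(steps, axis=0)
M = ta_msd_tensor(traj, lag=10)
```

For an anisotropic or fluctuating diffusivity, the eigenvalues and eigenvectors of `M` over sliding windows would reveal the magnitude and orientation fluctuations the abstract refers to.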

  12. A Lagrangian stochastic model for aerial spray transport above an oak forest

    USGS Publications Warehouse

    Wang, Yansen; Miller, David R.; Anderson, Dean E.; McManus, Michael L.

    1995-01-01

    A model of aerial spray droplet transport has been developed by applying recent advances in Lagrangian stochastic simulation of heavy particles. A two-dimensional Lagrangian stochastic model was adopted to simulate spray droplet dispersion in atmospheric turbulence by adjusting the Lagrangian integral time scale along the drop trajectory. The other major physical processes affecting the transport of spray droplets above a forest canopy, aircraft wingtip vortices and droplet evaporation, were also included in each time step of the droplets' transport. The model was evaluated using data from an aerial spray field experiment. In generally neutral stability conditions, the accuracy of the model predictions varied from run to run, as expected. The average root-mean-square error was 24.61 IU cm−2, and the average relative error was 15%. The model prediction was adequate in two-dimensional steady wind conditions, but was less accurate in variable wind conditions. The results indicated that the model can successfully simulate the ensemble-average transport of aerial spray droplets under neutral, steady atmospheric wind conditions.

  13. Cosmological backreaction within the Szekeres model and emergence of spatial curvature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolejko, Krzysztof, E-mail: krzysztof.bolejko@sydney.edu.au

    This paper discusses the phenomenon of backreaction within the Szekeres model. Cosmological backreaction describes how the mean global evolution of the Universe deviates from the Friedmannian evolution. The analysis is based on models of a single cosmological environment and the global ensemble of the Szekeres models (of the Swiss-Cheese type and Styrofoam type). The obtained results show that non-linear growth of cosmic structures is associated with the growth of the spatial curvature Ω_R (in the FLRW limit Ω_R → Ω_k). If averaged over global scales, the result depends on the assumed global model of the Universe. Within the Swiss-Cheese model, which does have a fixed background, the volume average follows the evolution of the background, and the global spatial curvature averages out to zero (the background model is the ΛCDM model, which is spatially flat). In the Styrofoam-type model, which does not have a fixed background, the mean evolution deviates from the spatially flat ΛCDM model, and the mean spatial curvature evolves from Ω_R = 0 at the CMB to Ω_R ∼ 0.1 at z = 0. If the Styrofoam-type model correctly captures evolutionary features of the real Universe, then one should expect that in our Universe the spatial curvature should build up (local growth of cosmic structures) and its mean global average should deviate from zero (backreaction). As a result, this paper predicts that the low-redshift Universe should not be spatially flat (i.e. Ω_k ≠ 0, even if in the early Universe Ω_k = 0) and therefore, when analysing low-z cosmological data, one should keep Ω_k as a free parameter and independent from the CMB constraints.

  14. Regression modeling of gas-particle partitioning of atmospheric oxidized mercury from temperature data

    NASA Astrophysics Data System (ADS)

    Cheng, Irene; Zhang, Leiming; Blanchard, Pierrette

    2014-10-01

    Models describing the partitioning of atmospheric oxidized mercury (Hg(II)) between the gas and fine particulate phases were developed as a function of temperature. The models were derived from regression analysis of the gas-particle partitioning parameters, defined by a partition coefficient (Kp) and the Hg(II) fraction in fine particles (fPBM), against temperature data from 10 North American sites. The generalized model, log(1/Kp) = 12.69-3485.30(1/T) (R2 = 0.55; root-mean-square error (RMSE) of 1.06 m3/µg for Kp), predicted the observed average Kp at 7 of the 10 sites. Discrepancies between the predicted and observed average Kp were found at the sites impacted by large Hg sources because the model had not accounted for the different mercury speciation profiles and aerosol compositions of different sources. Site-specific equations were also generated from average Kp and fPBM corresponding to temperature interval data. The site-specific models were more accurate than the generalized Kp model at predicting the observations at 9 of the 10 sites, as indicated by RMSE of 0.22-0.5 m3/µg for Kp and 0.03-0.08 for fPBM. Both models reproduced the observed monthly average values, except for a peak in Hg(II) partitioning observed during summer at two locations. Weak correlations between the site-specific model Kp or fPBM and observations suggest the role of aerosol composition, aerosol water content, and relative humidity on Hg(II) partitioning. The use of local temperature data to parameterize Hg(II) partitioning in the proposed models potentially improves the estimation of mercury cycling in chemical transport models and elsewhere.
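The generalized regression quoted above can be inverted directly for Kp. The fPBM relation below is the standard gas-particle partitioning form and is an assumption here, since the abstract does not spell it out:

```python
import math

def kp_from_temperature(T_kelvin):
    """Generalized model from the abstract: log10(1/Kp) = 12.69 - 3485.30/T,
    so Kp = 10**(3485.30/T - 12.69), in m^3/ug."""
    return 10.0 ** (3485.30 / T_kelvin - 12.69)

def f_pbm(kp, pm_ug_m3):
    """Fraction of Hg(II) in fine particles, assuming the common relation
    fPBM = Kp*PM / (1 + Kp*PM); the paper's exact definition may differ."""
    return kp * pm_ug_m3 / (1.0 + kp * pm_ug_m3)

# partitioning shifts toward the particle phase as temperature drops
kp_cold = kp_from_temperature(263.15)   # -10 C
kp_warm = kp_from_temperature(298.15)   # +25 C
```

The strong 1/T dependence is why local temperature alone recovers the observed average Kp at most sites.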

  15. Cosmological backreaction within the Szekeres model and emergence of spatial curvature

    NASA Astrophysics Data System (ADS)

    Bolejko, Krzysztof

    2017-06-01

    This paper discusses the phenomenon of backreaction within the Szekeres model. Cosmological backreaction describes how the mean global evolution of the Universe deviates from the Friedmannian evolution. The analysis is based on models of a single cosmological environment and the global ensemble of the Szekeres models (of the Swiss-Cheese type and Styrofoam type). The obtained results show that non-linear growth of cosmic structures is associated with the growth of the spatial curvature Ω_R (in the FLRW limit Ω_R → Ω_k). If averaged over global scales, the result depends on the assumed global model of the Universe. Within the Swiss-Cheese model, which does have a fixed background, the volume average follows the evolution of the background, and the global spatial curvature averages out to zero (the background model is the ΛCDM model, which is spatially flat). In the Styrofoam-type model, which does not have a fixed background, the mean evolution deviates from the spatially flat ΛCDM model, and the mean spatial curvature evolves from Ω_R = 0 at the CMB to Ω_R ~ 0.1 at z = 0. If the Styrofoam-type model correctly captures evolutionary features of the real Universe, then one should expect that in our Universe the spatial curvature should build up (local growth of cosmic structures) and its mean global average should deviate from zero (backreaction). As a result, this paper predicts that the low-redshift Universe should not be spatially flat (i.e. Ω_k ≠ 0, even if in the early Universe Ω_k = 0) and therefore, when analysing low-z cosmological data, one should keep Ω_k as a free parameter and independent from the CMB constraints.

  16. Reformers, Batting Averages, and Malpractice: The Case for Caution in Value-Added Use

    ERIC Educational Resources Information Center

    Gleason, Daniel

    2014-01-01

    The essay considers two analogies that help to reveal the limitations of value-added modeling: the first, a comparison with batting averages, shows that the model's reliability is quite limited even though year-to-year correlation figures may seem impressive; the second, a comparison between medical malpractice and so-called educational…

  17. Group-theoretical model of developed turbulence and renormalization of the Navier-Stokes equation.

    PubMed

    Saveliev, V L; Gorokhovski, M A

    2005-07-01

    On the basis of the Euler equation and its symmetry properties, this paper proposes a model of stationary homogeneous developed turbulence. A regularized averaging formula for the product of two fields is obtained. An equation for the averaged turbulent velocity field is derived from the Navier-Stokes equation by renormalization-group transformation.

  18. The Effect of Non-Normal Distributions on the Integrated Moving Average Model of Time-Series Analysis.

    ERIC Educational Resources Information Center

    Doerann-George, Judith

    The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…

  19. 40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of § 86.1801-12(j), CO2 fleet average exhaust emission standards apply to: (i) 2012 and later model... businesses meeting certain criteria may be exempted from the greenhouse gas emission standards in § 86.1818... standards applicable in a given model year are calculated separately for passenger automobiles and light...

  20. Initial Conditions in the Averaging Cognitive Model

    ERIC Educational Resources Information Center

    Noventa, S.; Massidda, D.; Vidotto, G.

    2010-01-01

    The initial state parameters s[subscript 0] and w[subscript 0] are intricate issues of the averaging cognitive models in Information Integration Theory. Usually they are defined as a measure of prior information (Anderson, 1981; 1982) but there are no general rules to deal with them. In fact, there is no agreement as to their treatment except in…

  1. Mean-field theory for pedestrian outflow through an exit.

    PubMed

    Yanagisawa, Daichi; Nishinari, Katsuhiro

    2007-12-01

    The average pedestrian flow through an exit is one of the most important indices in evaluating pedestrian dynamics. In order to study the flow in detail, the floor field model, which is a crowd model using cellular automata, is extended by taking into account realistic behavior of pedestrians around the exit. The model is studied by both numerical simulations and cluster analysis to obtain a theoretical expression for the average pedestrian flow through the exit. It is found quantitatively that the effects of exit door width, the wall, and the pedestrian mood of competition or cooperation significantly influence the average flow. The results show that there is a suitable width and position of the exit according to the pedestrians' mood.

  2. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    PubMed

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.

  3. PAB3D: Its History in the Use of Turbulence Models in the Simulation of Jet and Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Pao, S. Paul; Hunter, Craig A.; Deere, Karen A.; Massey, Steven J.; Elmiligui, Alaa

    2006-01-01

    This is a review paper on PAB3D's history in the implementation of turbulence models for simulating jet and nozzle flows. We describe different turbulence models used in the simulation of subsonic and supersonic jet and nozzle flows. The time-averaged simulations use modified linear or nonlinear two-equation models to account for supersonic flow as well as high temperature mixing. Two multiscale-type turbulence models are used for unsteady flow simulations. These models require modifications to the Reynolds Averaged Navier-Stokes (RANS) equations. The first scheme is a hybrid RANS/LES model utilizing the two-equation (k-epsilon) model with a RANS/LES transition function, dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes (PANS) formulation. All of these models are implemented in the three-dimensional Navier-Stokes code PAB3D. This paper discusses computational methods, code implementation, computed results for a wide range of nozzle configurations at various operating conditions, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions.

  4. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    PubMed

    Hamby, D M

    2002-01-01

    Reconstructed meteorological data are often used in some form of long-term wind trajectory models for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-average concentration model was found to be most sensitive to wind speed, followed by vertical dispersion (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.
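A sector-averaged straight-line Gaussian plume calculation of the kind assessed above can be sketched as follows; the 16-sector form and the example inputs are textbook assumptions, not values from the paper:

```python
import math

def sector_avg_rel_conc(x, u, sigma_z, stack_h, n_sectors=16):
    """Sector-averaged relative concentration chi/Q (s/m^3) at a ground-level
    receptor, standard straight-line Gaussian plume form:
    chi/Q = sqrt(2/pi) * exp(-H^2/(2 sz^2)) / (u * sz * (2*pi*x/n))."""
    sector_width = 2.0 * math.pi * x / n_sectors    # arc length at distance x
    return (math.sqrt(2.0 / math.pi)
            * math.exp(-stack_h**2 / (2.0 * sigma_z**2))
            / (u * sigma_z * sector_width))

# chi/Q falls off linearly with wind speed (the model's most sensitive input)
c_slow = sector_avg_rel_conc(x=1000.0, u=2.0, sigma_z=30.0, stack_h=50.0)
c_fast = sector_avg_rel_conc(x=1000.0, u=8.0, sigma_z=30.0, stack_h=50.0)
```

The exact 1/u dependence illustrates why wind speed dominates the sensitivity ranking reported in the abstract.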

  5. Ergodicity Breaking in Geometric Brownian Motion

    NASA Astrophysics Data System (ADS)

    Peters, O.; Klein, W.

    2013-03-01

    Geometric Brownian motion (GBM) is a model for systems as varied as financial instruments and populations. The statistical properties of GBM are complicated by nonergodicity, which can lead to ensemble averages exhibiting exponential growth while any individual trajectory collapses according to its time average. A common tactic for bringing time averages closer to ensemble averages is diversification. In this Letter, we study the effects of diversification using the concept of ergodicity breaking.
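The nonergodicity described above is easy to demonstrate numerically: the ensemble average of GBM grows at rate μ, while a typical trajectory grows at the time-average rate μ − σ²/2. A sketch with illustrative parameters (chosen so the two rates have opposite signs):

```python
import numpy as np

def gbm_growth_rates(mu=0.05, sigma=0.4, dt=0.01, n_steps=1000,
                     n_paths=2000, seed=2):
    """Contrast ensemble- and time-average exponential growth rates of GBM.

    Ensemble average <x> grows at mu; a typical trajectory grows at
    mu - sigma^2/2 (negative here, so individual paths decay).
    """
    rng = np.random.default_rng(seed)
    # exact log increments: d log x = (mu - sigma^2/2) dt + sigma dW
    dlogx = ((mu - 0.5 * sigma**2) * dt
             + sigma * np.sqrt(dt) * rng.normal(size=(n_paths, n_steps)))
    logx = dlogx.sum(axis=1)                 # log of final values, x0 = 1
    T = n_steps * dt
    ensemble_rate = np.log(np.mean(np.exp(logx))) / T
    time_rate = np.mean(logx) / T            # typical trajectory exponent
    return ensemble_rate, time_rate

ensemble_rate, time_rate = gbm_growth_rates()
```

With μ = 0.05 and σ = 0.4, the ensemble average grows (rate ≈ 0.05) while almost every individual path collapses (rate ≈ −0.03), which is exactly the gap that diversification attempts to close.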

  6. Averaged head phantoms from magnetic resonance images of Korean children and young adults

    NASA Astrophysics Data System (ADS)

    Han, Miran; Lee, Ae-Kyoung; Choi, Hyung-Do; Jung, Yong Wook; Park, Jin Seo

    2018-02-01

    Increased use of mobile phones raises concerns about the health risks of electromagnetic radiation. Phantom heads are routinely used for radiofrequency dosimetry simulations, and the purpose of this study was to construct averaged phantom heads for children and young adults. Using magnetic resonance images (MRI), sectioned cadaver images, and a hybrid approach, we initially built template phantoms representing 6-, 9-, 12-, 15-year-old children and young adults. Our subsequent approach revised the template phantoms using 29 averaged items that were identified by averaging the MRI data from 500 children and young adults. In females, the brain size and cranium thickness peaked in the early teens and then decreased. This is contrary to what was observed in males, where brain size and cranium thicknesses either plateaued or grew continuously. The overall shape of brains was spherical in children and became ellipsoidal by adulthood. In this study, we devised a method to build averaged phantom heads by constructing surface and voxel models. The surface model could be used for phantom manipulation, whereas the voxel model could be used for compliance test of specific absorption rate (SAR) for users of mobile phones or other electronic devices.

  7. Scaling laws and fluctuations in the statistics of word frequencies

    NASA Astrophysics Data System (ADS)

    Gerlach, Martin; Altmann, Eduardo G.

    2014-11-01

    In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps’ law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps’ and Taylor's) by modeling the usage of words using a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of the lexical richness of texts with different lengths.
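Heaps' law under the Poisson/Zipf picture used above can be sketched directly: with independent Poisson word counts, the expected vocabulary of an n-word text is V(n) = Σᵢ (1 − e^{−n pᵢ}), which grows sublinearly. The support size and Zipf exponent below are illustrative:

```python
import numpy as np

def expected_vocabulary(n_words, freqs):
    """Expected vocabulary size of an n-word text when word counts are
    independent Poissons with rates n * p_i: V(n) = sum_i (1 - exp(-n p_i))."""
    return np.sum(1.0 - np.exp(-n_words * freqs))

# Zipf's law p_i ~ 1/i over a large support
ranks = np.arange(1, 1_000_001)
p = 1.0 / ranks
p /= p.sum()

v1 = expected_vocabulary(10_000, p)
v2 = expected_vocabulary(100_000, p)   # 10x the text length
```

A tenfold longer text yields far less than ten times the vocabulary, the sublinear growth that Heaps' law describes; reproducing Taylor's law additionally requires the topic-dependent (quenched) frequencies discussed in the abstract.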

  8. FIFRELIN - TRIPOLI-4® coupling for Monte Carlo simulations with a fission model. Application to shielding calculations

    NASA Astrophysics Data System (ADS)

    Petit, Odile; Jouanne, Cédric; Litaize, Olivier; Serot, Olivier; Chebboubi, Abdelhazize; Pénéliau, Yannick

    2017-09-01

    TRIPOLI-4® Monte Carlo transport code and FIFRELIN fission model have been coupled by means of external files so that neutron transport can take into account fission distributions (multiplicities and spectra) that are not averaged, as is the case when using evaluated nuclear data libraries. Spectral effects on responses in shielding configurations with fission sampling are then expected. In the present paper, the principle of this coupling is detailed and a comparison between TRIPOLI-4® fission distributions at the emission of fission neutrons is presented when using JEFF-3.1.1 evaluated data or FIFRELIN data generated either through an n/γ-uncoupled mode or through an n/γ-coupled mode. Finally, an application to a modified version of the ASPIS benchmark is performed and the impact of using FIFRELIN data on neutron transport is analyzed. Differences noticed on average reaction rates on the surfaces closest to the fission source are mainly due to the average prompt fission spectrum. Moreover, when working with the same average spectrum, a complementary analysis based on non-average reaction rates still shows significant differences that point out the real impact of using a fission model in neutron transport simulations.

  9. Flow over bedforms in a large sand-bed river: A field investigation

    USGS Publications Warehouse

    Holmes, Robert R.; Garcia, Marcelo H.

    2008-01-01

    An experimental field study of flows over bedforms was conducted on the Missouri River near St. Charles, Missouri. Detailed velocity data were collected under two different flow conditions along bedforms in this sand-bed river. The large river-scale data reflect flow characteristics similar to those of laboratory-scale flows, with flow separation occurring downstream of the bedform crest and flow reattachment on the stoss side of the next downstream bedform. Wave-like responses of the flow to the bedforms were detected, with the velocity decreasing throughout the flow depth over bedform troughs, and the velocity increasing over bedform crests. Local and spatially averaged velocity distributions were logarithmic for both datasets. The reach-wise spatially averaged vertical-velocity profile from the standard velocity-defect model was evaluated. The vertically averaged mean flow velocities for the velocity-defect model were within 5% of the measured values and estimated spatially averaged point velocities were within 10% for the upper 90% of the flow depth. The velocity-defect model, neglecting the wake function, was evaluated and found to estimate the vertically averaged mean velocity within 1% of the measured values.
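The velocity-defect profile with the wake function neglected, as evaluated in the study, has a simple depth average: since the mean of ln(z/h) over (0, h] is −1, the vertically averaged velocity is u_max − u*/κ. A numerical sketch with illustrative values (not the Missouri River data):

```python
import numpy as np

def velocity_defect(z, u_max, u_star, h, kappa=0.41):
    """Velocity-defect law, wake function neglected:
    (u_max - u(z)) / u_star = -(1/kappa) * ln(z / h), for 0 < z <= h."""
    return u_max + (u_star / kappa) * np.log(z / h)

# depth-average the profile numerically and compare with u_max - u_star/kappa
u_max, u_star, h, kappa = 2.0, 0.1, 5.0, 0.41
z = (np.arange(100_000) + 0.5) * h / 100_000      # midpoint grid on (0, h)
depth_avg = velocity_defect(z, u_max, u_star, h, kappa).mean()
```

The closed-form depth average is what allows a single surface (maximum) velocity measurement to estimate the vertically averaged mean velocity to within the few-percent accuracy quoted above.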

  10. Comparison of cursive models for handwriting instruction.

    PubMed

    Karlsdottir, R

    1997-12-01

    The efficiency of four different cursive handwriting styles as model alphabets for handwriting instruction of primary school children was compared in a cross-sectional field experiment from Grade 3 to 6 in terms of the average handwriting speed developed by the children and the average rate of convergence of the children's handwriting to the style of their model. It was concluded that styles with regular entry stroke patterns give the steadiest rate of convergence to the model and styles with short ascenders and descenders and strokes with not too high curvatures give the highest handwriting speed.

  11. Ergodicity of financial indices

    NASA Astrophysics Data System (ADS)

    Kolesnikov, A. V.; Rühl, T.

    2010-05-01

    We introduce the concept of ensemble averaging for financial markets. We address the question of the equality of ensemble and time averaging and investigate whether these averagings are equivalent for a large number of equity indices and branches. We start with the model of Gaussian-distributed returns, equal-weighted stocks in each index and absence of correlations within a single day, and show that even this oversimplified model captures the run of the corresponding index reasonably well due to its self-averaging properties. We introduce the concept of the instant cross-sectional volatility and discuss its relation to the ordinary time-resolved counterpart. The role of the cross-sectional volatility for the description of the corresponding index, as well as the role of correlations between the single stocks and the role of non-Gaussianity of stock distributions, is briefly discussed. Our model reveals quickly and efficiently some anomalies or bubbles in a particular financial market and gives an estimate of how large these effects can be and how quickly they disappear.

  12. Statistical theory of nucleation in the presence of uncharacterized impurities

    NASA Astrophysics Data System (ADS)

    Sear, Richard P.

    2004-08-01

    First order phase transitions proceed via nucleation. The rate of nucleation varies exponentially with the free-energy barrier to nucleation, and so is highly sensitive to variations in this barrier. In practice, very few systems are absolutely pure, there are typically some impurities present which are rather poorly characterized. These interact with the nucleus, causing the barrier to vary, and so must be taken into account. Here the impurity-nucleus interactions are modelled by random variables. The rate then has the same form as the partition function of Derrida’s random energy model, and as in this model there is a regime in which the behavior is non-self-averaging. Non-self-averaging nucleation is nucleation with a rate that varies significantly from one realization of the random variables to another. In experiment this corresponds to variation in the nucleation rate from one sample to another. General analytic expressions are obtained for the crossover from a self-averaging to a non-self-averaging rate of nucleation.

  13. Optimal weighted averaging of event related activity from acquisitions with artifacts.

    PubMed

    Vollero, Luca; Petrichella, Sara; Innello, Giulio

    2016-08-01

    In several biomedical applications that require the signal processing of biological data, the starting procedure for noise reduction is the ensemble averaging of multiple repeated acquisitions (trials). This method is based on the assumption that each trial is composed of two additive components: (i) a time-locked activity related to some sensitive/stimulation phenomenon (ERA, Event Related Activity in the following) and (ii) a sum of several other non time-locked background activities. The averaging aims at estimating the ERA activity under very low Signal to Noise and Interference Ratio (SNIR). Although averaging is a well established tool, its performance can be improved in the presence of high-power disturbances (artifacts) by a trials classification and removal stage. In this paper we propose, model and evaluate a new approach that avoids trials removal, managing trials classified as artifact-free and artifact-prone with two different weights. Based on the model, a weights tuning is possible and through modeling and simulations we show that, when optimally configured, the proposed solution outperforms classical approaches.
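The down-weighting scheme described above can be sketched as a weighted ensemble average; the weights and the toy event-related activity (ERA) data are illustrative, since the paper tunes its weights from a model of the SNIR:

```python
import numpy as np

def weighted_trial_average(trials, artifact_mask, w_clean=1.0, w_artifact=0.2):
    """Weighted ensemble average: artifact-prone trials are kept but
    down-weighted instead of being discarded (weights are illustrative)."""
    w = np.where(artifact_mask, w_artifact, w_clean).astype(float)
    return (trials * w[:, None]).sum(axis=0) / w.sum()

# toy ERA: a fixed waveform plus noise; trial 3 carries a large artifact
rng = np.random.default_rng(3)
era = np.sin(np.linspace(0, np.pi, 200))
trials = era + 0.5 * rng.normal(size=(20, 200))
trials[3] += 30.0                                  # high-power disturbance
mask = np.zeros(20, dtype=bool)
mask[3] = True

est_weighted = weighted_trial_average(trials, mask)
est_plain = trials.mean(axis=0)
```

Down-weighting the artifact-prone trial suppresses most of the disturbance while still using whatever signal that trial contains, which is the advantage over outright trial removal.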

  14. [The reentrant binomial model of nuclear anomalies growth in rhabdomyosarcoma RA-23 cell populations under increasing doses of sparsely ionizing radiation].

    PubMed

    Alekseeva, N P; Alekseev, A O; Vakhtin, Iu B; Kravtsov, V Iu; Kuzovatov, S N; Skorikova, T I

    2008-01-01

    Distributions of nuclear morphology anomalies in transplantable rhabdomyosarcoma RA-23 cell populations were investigated under the effect of ionizing radiation from 0 to 45 Gy. Internuclear bridges, nuclear protrusions and dumbbell-shaped nuclei were accepted as morphological anomalies. Empirical distributions of the number of anomalies per 100 nuclei were used. An adequate model, the reentrant binomial distribution, was found: the sum of binomial random variables with a binomial number of summands has such a distribution. The averages of these random variables were named, respectively, the internal and external average reentrant components, and their maximum likelihood estimates were obtained. The statistical properties of these estimates were investigated by means of statistical modeling. It was found that, although the radiation dose correlates equally significantly with the average number of nuclear anomalies, in cell populations two to three cell cycles after irradiation in vivo the dose correlates significantly with the internal average reentrant component, whereas in remote descendants of cell transplants irradiated in vitro it correlates with the external one.
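The reentrant binomial construction (a sum of binomial variables whose number of summands is itself binomial) can be checked by simulation; the parameters below are illustrative. Its mean factorizes into the product of the external and internal average components, (n₁p₁)(n₂p₂):

```python
import numpy as np

def reentrant_binomial_sample(rng, n_ext, p_ext, n_int, p_int, size):
    """Sample a 'reentrant' binomial: a sum of Bin(n_int, p_int) variables
    with a Bin(n_ext, p_ext) number of summands. Since a sum of k iid
    Bin(m, q) variables is Bin(k*m, q), we can sample in two stages.
    Mean = (n_ext * p_ext) * (n_int * p_int)."""
    counts = rng.binomial(n_ext, p_ext, size=size)   # number of summands
    return rng.binomial(counts * n_int, p_int)       # sum of the summands

rng = np.random.default_rng(4)
x = reentrant_binomial_sample(rng, n_ext=8, p_ext=0.5,
                              n_int=10, p_int=0.3, size=200_000)
# expected mean: (8 * 0.5) * (10 * 0.3) = 12
```

This factorization is what lets the maximum-likelihood fit separate an internal average (per-summand) from an external average (number of summands), the two components the abstract correlates with dose.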

  15. The influence of averaging procedure on the accuracy of IVIVC predictions: immediate release dosage form case study.

    PubMed

    Ostrowski, Michał; Wilkowska, Ewa; Baczek, Tomasz

    2010-12-01

    In vivo-in vitro correlation (IVIVC) is an effective tool to predict the absorption behavior of active substances from pharmaceutical dosage forms. A model for an immediate release dosage form containing amoxicillin was used in the present study to check whether the calculation method of absorption profiles can influence the final results. The comparison showed that averaging individual absorption profiles obtained by the Wagner-Nelson (WN) conversion method can lead to a loss of the discrimination properties of the model. An approach considering individual plasma concentration versus time profiles made it possible to average the profiles prior to WN conversion, which in turn revealed differences between dispersible tablets and capsules. It was concluded that, in the case of an immediate release dosage form, the decision on the averaging method should be based on the individual situation; however, the influence of this choice on the discrimination properties of the model can be significant. © 2010 Wiley-Liss, Inc. and the American Pharmacists Association

  16. Weighed scalar averaging in LTB dust models: part II. A formalism of exact perturbations

    NASA Astrophysics Data System (ADS)

    Sussman, Roberto A.

    2013-03-01

    We examine the exact perturbations that arise from the q-average formalism that was applied in the preceding article (part I) to Lemaître-Tolman-Bondi (LTB) models. By introducing an initial value parametrization, we show that all LTB scalars that take an FLRW ‘look-alike’ form (frequently used in the literature dealing with LTB models) follow as q-averages of covariant scalars that are common to FLRW models. These q-scalars determine for every averaging domain a unique FLRW background state through Darmois matching conditions at the domain boundary, though the definition of this background does not require an actual matching with an FLRW region (Swiss cheese-type models). Local perturbations describe the deviation from the FLRW background state through the local gradients of covariant scalars at the boundary of every comoving domain, while non-local perturbations do so in terms of the intuitive notion of a ‘contrast’ of local scalars with respect to FLRW reference values that emerge from q-averages assigned to the whole domain or the whole time slice in the asymptotic limit. We derive fluid flow evolution equations that completely determine the dynamics of the models in terms of the q-scalars and both types of perturbations. A rigorous formalism of exact spherical nonlinear perturbations is defined over the FLRW background state associated with the q-scalars, recovering the standard results of linear perturbation theory in the appropriate limit. We examine the notion of the amplitude and illustrate the differences between local and non-local perturbations by qualitative diagrams and through an example of a cosmic density void that follows from the numeric solution of the evolution equations.

  17. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    (Q + aa′)⁻¹ = Q⁻¹ − Q⁻¹aa′Q⁻¹/(1 + a′Q⁻¹a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and...1 2. The First-Order Moving Average Model ..... .................. 3. Some Approaches to the Iterative...the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and

  18. Work-related accidents among the Iranian population: a time series analysis, 2000–2011

    PubMed Central

    Karimlou, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood

    2015-01-01

    Background Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. Objectives To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. Methods In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box–Jenkins modeling to develop a time series model of the total number of accidents. Results There was an average of 1476 accidents per month (1476.05±458.77, mean±SD). The final ARIMA (p,d,q)(P,D,Q)s model fitted to the data was ARIMA(1,1,1)×(0,1,1)12, consisting of first-order autoregressive, moving average and seasonal moving average terms, with a mean absolute percentage error (MAPE) of 20.942. Conclusions The final model showed that time series analysis with ARIMA models is useful for forecasting the number of work-related accidents in Iran. In addition, the forecast of work-related accidents for 2011 reflected the stable occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection. PMID:26119774

  19. Work-related accidents among the Iranian population: a time series analysis, 2000-2011.

    PubMed

    Karimlou, Masoud; Salehi, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood

    2015-01-01

    Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box-Jenkins modeling to develop a time series model of the total number of accidents. There was an average of 1476 accidents per month (1476.05±458.77, mean±SD). The final ARIMA (p,d,q)(P,D,Q)s model fitted to the data was ARIMA(1,1,1)×(0,1,1)12, consisting of first-order autoregressive, moving average and seasonal moving average terms, with a mean absolute percentage error (MAPE) of 20.942. The final model showed that time series analysis with ARIMA models is useful for forecasting the number of work-related accidents in Iran. In addition, the forecast of work-related accidents for 2011 reflected the stable occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection.
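    The seasonal structure of an ARIMA(1,1,1)×(0,1,1)12 model rests on one regular and one seasonal (lag-12) difference of the monthly series, and the fit above is scored with MAPE. A minimal sketch of those two ingredients, using synthetic monthly counts (the data and function names are illustrative, not from the study):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, the fit statistic reported above."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def seasonal_difference(y, d=1, D=1, s=12):
    """Apply the differencing implied by ARIMA(p,1,q)x(P,1,Q)12:
    one seasonal difference at lag s, then one regular difference."""
    y = np.asarray(y, float)
    for _ in range(D):
        y = y[s:] - y[:-s]          # seasonal difference
    for _ in range(d):
        y = np.diff(y)              # regular difference
    return y

# hypothetical monthly accident counts with a yearly cycle
counts = 1476 + 200 * np.sin(2 * np.pi * np.arange(48) / 12)
z = seasonal_difference(counts)     # stationary series the ARMA part models
```

    After differencing, the remaining ARMA(1,1)×(0,1)12 parameters would be estimated on `z`, e.g. with a statistical package; the forecast is then back-transformed to the original count scale.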

  20. Climate model biases in seasonality of continental water storage revealed by satellite gravimetry

    USGS Publications Warehouse

    Swenson, Sean; Milly, P.C.D.

    2006-01-01

    Satellite gravimetric observations of monthly changes in continental water storage are compared with outputs from five climate models. All models qualitatively reproduce the global pattern of annual storage amplitude, and the seasonal cycle of global average storage is reproduced well, consistent with earlier studies. However, global average agreements mask systematic model biases in low latitudes. Seasonal extrema of low‐latitude, hemispheric storage generally occur too early in the models, and model‐specific errors in amplitude of the low‐latitude annual variations are substantial. These errors are potentially explicable in terms of neglected or suboptimally parameterized water stores in the land models and precipitation biases in the climate models.

  1. Real time detection of farm-level swine mycobacteriosis outbreak using time series modeling of the number of condemned intestines in abattoirs.

    PubMed

    Adachi, Yasumoto; Makita, Kohei

    2015-09-01

    Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer for corrective actions when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis therefore would be a useful threshold to detect an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for 2 years between 2011 and 2012. For the modeling, at first, periodicities were checked using Fast Fourier Transformation, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of minimum Akaike's information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of whole or partial condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from three producers with the highest rate of condemnation due to mycobacteriosis.

  2. Dimension reduction method for SPH equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2011-08-26

    A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g. average velocity, concentration and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation, with an SPH model used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations while providing an accurate approximation of the solution and an accurate prediction of the average behavior of the system.

  3. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), and latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model for predicting average hourly global solar radiation at four other locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivities of the parameters to prediction were estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly predictions, error percentages (i.e. MABE and RMSE) were less than 20%. The approach proposed here can be useful for predicting average hourly global solar radiation on horizontal surfaces at different locations, using readily available data (i.e. latitude and longitude of the location) as inputs.
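    Two of the goodness-of-fit statistics named above have standard definitions; as a generic reference (not the authors' code), RMSE and R2 can be computed as:

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# illustrative hourly radiation values (not from the study)
obs = [100.0, 150.0, 200.0]
pred = [110.0, 140.0, 210.0]
```

    A perfect prediction gives RMSE = 0 and R2 = 1; the paper's reported R2 of 0.92-0.98 indicates most of the observed variance is explained.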

  4. Test techniques for model development of repetitive service energy storage capacitors

    NASA Astrophysics Data System (ADS)

    Thompson, M. C.; Mauldin, G. H.

    1984-03-01

    The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature creating new instrumentation performance goals. Thermal response to power loading and the importance of average and spot heating in the bulk regions require technical advancements in real time temperature measurements. Reduction and interpretation of thermal data are crucial to the accurate development of an intelligent thermal transport model. The thermal model is of prime interest in the high repetition rate, high average power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications in both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics and their nonlinearities and terminal effects are considered. Meaningful interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.

  5. Toward {U}(N|M) knot invariant from ABJM theory

    NASA Astrophysics Data System (ADS)

    Eynard, Bertrand; Kimura, Taro

    2017-06-01

    We study {U}(N|M) character expectation value with the supermatrix Chern-Simons theory, known as the ABJM matrix model, with emphasis on its connection to the knot invariant. This average just gives the half-BPS circular Wilson loop expectation value in ABJM theory, which shall correspond to the unknot invariant. We derive the determinantal formula, which gives {U}(N|M) character expectation values in terms of {U}(1|1) averages for a particular type of character representations. This means that the {U}(1|1) character expectation value is a building block for the {U}(N|M) averages and also, by an appropriate limit, for the {U}(N) invariants. In addition to the original model, we introduce another supermatrix model obtained through the symplectic transform, which is motivated by the torus knot Chern-Simons matrix model. We obtain the Rosso-Jones-type formula and the spectral curve for this case.

  6. Models of brachial to finger pulse wave distortion and pressure decrement.

    PubMed

    Gizdulich, P; Prentza, A; Wesseling, K H

    1997-03-01

    To model the pulse wave distortion and pressure decrement occurring between brachial and finger arteries. Distortion reversion and decrement correction were also our aims. Brachial artery pressure was recorded intra-arterially and finger pressure was recorded non-invasively by the Finapres technique in 53 adult human subjects. Mean pressure was subtracted from each pressure waveform and Fourier analysis applied to the pulsations. A distortion model was estimated for each subject and averaged over the group. The average inverse model was applied to the full finger pressure waveform. The pressure decrement was modelled by multiple regression on finger systolic and diastolic levels. Waveform distortion could be described by a general, frequency dependent model having a resonance at 7.3 Hz. The general inverse model has an anti-resonance at this frequency. It converts finger to brachial pulsations thereby reducing average waveform distortion from 9.7 (s.d. 3.2) mmHg per sample for the finger pulse to 3.7 (1.7) mmHg for the converted pulse. Systolic and diastolic level differences between finger and brachial arterial pressures changed from -4 (15) and -8 (11) to +8 (14) and +8 (12) mmHg, respectively, after inverse modelling, with pulse pressures correct on average. The pressure decrement model reduced both the mean and the standard deviation of systolic and diastolic level differences to 0 (13) and 0 (8) mmHg. Diastolic differences were thus reduced most. Brachial to finger pulse wave distortion due to wave reflection in arteries is almost identical in all subjects and can be modelled by a single resonance. The pressure decrement due to flow in arteries is greatest for high pulse pressures superimposed on low means.

  7. Modelling radiative transfer through ponded first-year Arctic sea ice with a plane-parallel model

    NASA Astrophysics Data System (ADS)

    Taskjelle, Torbjørn; Hudson, Stephen R.; Granskog, Mats A.; Hamre, Børge

    2017-09-01

    Under-ice irradiance measurements were made on ponded first-year pack ice along three transects during the ICE12 expedition north of Svalbard. Bulk transmittances (400-900 nm) were found to be on average 0.15-0.20 under bare ice and 0.39-0.46 under ponded ice. Radiative transfer modelling was done with a plane-parallel model. While simulated transmittances deviate significantly from measured transmittances close to the edge of ponds, spatially averaged bulk transmittances agree well. That is, transect-average bulk transmittances, calculated using typical simulated transmittances for ponded and bare ice weighted by the fractional coverage of the two surface types, are in good agreement with the measured values. Radiative heating rates calculated from model output indicate that about 20 % of the incident solar energy is absorbed in bare ice, and 50 % in ponded ice (35 % in the pond itself, 15 % in the underlying ice). This large difference is due to the highly scattering surface scattering layer (SSL), which increases the albedo of the bare ice.
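    The transect-averaging step described above is a simple area-weighted mean over the two surface types. A sketch with illustrative numbers inside the reported ranges (the pond fraction is assumed, not taken from the paper):

```python
def transect_average_transmittance(t_pond, t_bare, pond_fraction):
    """Bulk transmittance averaged over a transect, weighting the typical
    transmittance of each surface type by its fractional coverage."""
    return pond_fraction * t_pond + (1.0 - pond_fraction) * t_bare

# illustrative values within the measured ranges; pond_fraction is assumed
T_avg = transect_average_transmittance(t_pond=0.42, t_bare=0.18, pond_fraction=0.3)
```

    This is the quantity compared against the measured transect-average transmittance in the study.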

  8. Digital flow model of the Chowan River estuary, North Carolina

    USGS Publications Warehouse

    Daniel, C.C.

    1977-01-01

    A one-dimensional deterministic flow model based on the continuity equation has been developed to provide estimates of daily flow past a number of points on the Chowan River estuary of northeast North Carolina. The digital model, programmed in Fortran IV, computes daily average discharge for nine sites; four of these represent inflow at the mouths of major tributaries, and the other five sites are at stage stations along the estuary. Because flows within the Chowan River and the lower reaches of its tributaries are tidally affected, flows occur in both upstream and downstream directions. The period of record generated by the model extends from April 1, 1974, to March 31, 1976. During the two years of model operation, the average discharge at Edenhouse near the mouth of the estuary was 5,830 cfs (cubic feet per second). Daily average flows during this period ranged from 55,900 cfs in the downstream direction on July 17, 1975, to 14,200 cfs in the upstream direction on November 30, 1974.

  9. Approximation to cutoffs of higher modes of Rayleigh waves for a layered earth model

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2009-01-01

    A cutoff defines the long-period termination of a Rayleigh-wave higher mode and is therefore a key characteristic relating higher-mode energy to several material properties of the subsurface. Cutoffs have been used to estimate the shear-wave velocity of the underlying half space of a layered earth model. In this study, we describe a method that replaces the multilayer earth model with a single surface layer overlying the half space, accomplished by harmonic averaging of velocities and arithmetic averaging of densities. Numerical comparisons with theoretical models validate the single-layer approximation. The accuracy of this approximation is best defined by the calculated error in the frequency and phase-velocity estimates at a cutoff. Our proposed method is intuitively explained using ray theory. Numerical results indicate that a cutoff's frequency is controlled by the averaged elastic properties within the passing depth of Rayleigh waves and the shear-wave velocity of the underlying half space. © Birkhäuser Verlag, Basel 2009.
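    The layer-collapsing step above uses a harmonic average for velocities and an arithmetic average for densities, both weighted by layer thickness. A minimal sketch with hypothetical two-layer values (numbers are illustrative, not from the paper):

```python
def harmonic_average(values, weights):
    """Thickness-weighted harmonic mean, used here for layer velocities."""
    return sum(weights) / sum(w / v for w, v in zip(weights, values))

def arithmetic_average(values, weights):
    """Thickness-weighted arithmetic mean, used here for layer densities."""
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# hypothetical two-layer model collapsed to one equivalent surface layer:
# shear velocities (m/s), densities (g/cm^3), thicknesses (m)
vs, rho, h = [200.0, 400.0], [1.8, 2.0], [5.0, 5.0]
vs_eq = harmonic_average(vs, h)       # ~266.7 m/s
rho_eq = arithmetic_average(rho, h)   # 1.9 g/cm^3
```

    The harmonic mean is the natural choice for velocities because travel time through stacked layers adds as thickness divided by velocity.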

  10. Using a GIS to link digital spatial data and the precipitation-runoff modeling system, Gunnison River Basin, Colorado

    USGS Publications Warehouse

    Battaglin, William A.; Kuhn, Gerhard; Parker, Randolph S.

    1993-01-01

    The U.S. Geological Survey Precipitation-Runoff Modeling System, a modular, distributed-parameter, watershed-modeling system, is being applied to 20 smaller watersheds within the Gunnison River basin. The model is used to derive a daily water balance for subareas in a watershed, ultimately producing simulated streamflows that can be input into routing and accounting models used to assess downstream water availability under current conditions, and to assess the sensitivity of water resources in the basin to alterations in climate. A geographic information system (GIS) is used to automate a method for extracting physically based hydrologic response unit (HRU) distributed parameter values from digital data sources, and for the placement of those estimates into GIS spatial datalayers. The HRU parameters extracted are: area, mean elevation, average land-surface slope, predominant aspect, predominant land-cover type, predominant soil type, average total soil water-holding capacity, and average water-holding capacity of the root zone.

  11. 49 CFR 535.7 - Averaging, banking, and trading (ABT) program.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... procedures of 40 CFR part 1065 or using the post-transmission test procedures. (2) Post-transmission hybrid...) Averaging. Averaging is the exchange of FCC among a manufacturer's engines or vehicle families or test... expiration date of five model years after the year in which the credits are earned. For example, credits...

  12. 40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... different strategies are and why they are used. (i) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust... as defined in § 86.1818-12. The model type carbon-related exhaust emission results determined...

  13. 40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... different strategies are and why they are used. (i) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust... as defined in § 86.1818-12. The model type carbon-related exhaust emission results determined...

  14. 40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... different strategies are and why they are used. (i) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust... as defined in § 86.1818-12. The model type carbon-related exhaust emission results determined...

  15. Midwife-led continuity models versus other models of care for childbearing women.

    PubMed

    Sandall, Jane; Soltani, Hora; Gates, Simon; Shennan, Andrew; Devane, Declan

    2016-04-28

    Midwives are primary providers of care for childbearing women around the world. However, there is a lack of synthesised information to establish whether there are differences in morbidity and mortality, effectiveness and psychosocial outcomes between midwife-led continuity models and other models of care. To compare midwife-led continuity models of care with other models of care for childbearing women and their infants. We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (25 January 2016) and reference lists of retrieved studies. All published and unpublished trials in which pregnant women are randomly allocated to midwife-led continuity models of care or other models of care during pregnancy and birth. Two review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. The quality of the evidence was assessed using the GRADE approach. We included 15 trials involving 17,674 women. We assessed the quality of the trial evidence for all primary outcomes (i.e. 
regional analgesia (epidural/spinal), caesarean birth, instrumental vaginal birth (forceps/vacuum), spontaneous vaginal birth, intact perineum, preterm birth (less than 37 weeks) and all fetal loss before and after 24 weeks plus neonatal death using the GRADE methodology: all primary outcomes were graded as of high quality. For the primary outcomes, women who had midwife-led continuity models of care were less likely to experience regional analgesia (average risk ratio (RR) 0.85, 95% confidence interval (CI) 0.78 to 0.92; participants = 17,674; studies = 14; high quality), instrumental vaginal birth (average RR 0.90, 95% CI 0.83 to 0.97; participants = 17,501; studies = 13; high quality), preterm birth less than 37 weeks (average RR 0.76, 95% CI 0.64 to 0.91; participants = 13,238; studies = eight; high quality) and all fetal loss before and after 24 weeks plus neonatal death (average RR 0.84, 95% CI 0.71 to 0.99; participants = 17,561; studies = 13; high quality evidence). Women who had midwife-led continuity models of care were more likely to experience spontaneous vaginal birth (average RR 1.05, 95% CI 1.03 to 1.07; participants = 16,687; studies = 12; high quality). There were no differences between groups for caesarean births or intact perineum. For the secondary outcomes, women who had midwife-led continuity models of care were less likely to experience amniotomy (average RR 0.80, 95% CI 0.66 to 0.98; participants = 3253; studies = four), episiotomy (average RR 0.84, 95% CI 0.77 to 0.92; participants = 17,674; studies = 14) and fetal loss less than 24 weeks and neonatal death (average RR 0.81, 95% CI 0.67 to 0.98; participants = 15,645; studies = 11). 
Women who had midwife-led continuity models of care were more likely to experience no intrapartum analgesia/anaesthesia (average RR 1.21, 95% CI 1.06 to 1.37; participants = 10,499; studies = seven), have a longer mean length of labour (hours) (mean difference (MD) 0.50, 95% CI 0.27 to 0.74; participants = 3328; studies = three) and more likely to be attended at birth by a known midwife (average RR 7.04, 95% CI 4.48 to 11.08; participants = 6917; studies = seven). There were no differences between groups for fetal loss equal to/after 24 weeks and neonatal death, induction of labour, antenatal hospitalisation, antepartum haemorrhage, augmentation/artificial oxytocin during labour, opiate analgesia, perineal laceration requiring suturing, postpartum haemorrhage, breastfeeding initiation, low birthweight infant, five-minute Apgar score less than or equal to seven, neonatal convulsions, admission of infant to special care or neonatal intensive care unit(s) or in mean length of neonatal hospital stay (days). Due to a lack of consistency in measuring women's satisfaction and assessing the cost of various maternity models, these outcomes were reported narratively. The majority of included studies reported a higher rate of maternal satisfaction in midwife-led continuity models of care. Similarly, there was a trend towards a cost-saving effect for midwife-led continuity care compared to other care models. This review suggests that women who received midwife-led continuity models of care were less likely to experience intervention and more likely to be satisfied with their care, with at least comparable adverse outcomes for women or their infants, than women who received other models of care. Further research is needed to explore findings of fewer preterm births and fewer fetal deaths less than 24 weeks, and all fetal loss/neonatal death associated with midwife-led continuity models of care.

  16. Midwife-led continuity models versus other models of care for childbearing women.

    PubMed

    Sandall, Jane; Soltani, Hora; Gates, Simon; Shennan, Andrew; Devane, Declan

    2015-09-15

    Midwives are primary providers of care for childbearing women around the world. However, there is a lack of synthesised information to establish whether there are differences in morbidity and mortality, effectiveness and psychosocial outcomes between midwife-led continuity models and other models of care. To compare midwife-led continuity models of care with other models of care for childbearing women and their infants. We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2015) and reference lists of retrieved studies. All published and unpublished trials in which pregnant women are randomly allocated to midwife-led continuity models of care or other models of care during pregnancy and birth. Two review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy. We included 15 trials involving 17,674 women. We assessed the quality of the trial evidence for all primary outcomes (i.e., regional analgesia (epidural/spinal), caesarean birth, instrumental vaginal birth (forceps/vacuum), spontaneous vaginal birth, intact perineum, preterm birth (less than 37 weeks) and overall fetal loss and neonatal death (fetal loss was assessed by gestation using 24 weeks as the cut-off for viability in many countries) using the GRADE methodology: All primary outcomes were graded as of high quality. For the primary outcomes, women who had midwife-led continuity models of care were less likely to experience regional analgesia (average risk ratio (RR) 0.85, 95% confidence interval (CI) 0.78 to 0.92; participants = 17,674; studies = 14; high quality), instrumental vaginal birth (average RR 0.90, 95% CI 0.83 to 0.97; participants = 17,501; studies = 13; high quality), preterm birth less than 37 weeks (average RR 0.76, 95% CI 0.64 to 0.91; participants = 13,238; studies = 8; high quality) and overall fetal/neonatal death (average RR 0.84, 95% CI 0.71 to 0.99; participants = 17,561; studies = 13; 
high quality evidence). Women who had midwife-led continuity models of care were more likely to experience spontaneous vaginal birth (average RR 1.05, 95% CI 1.03 to 1.07; participants = 16,687; studies = 12; high quality). There were no differences between groups for caesarean births or intact perineum. For the secondary outcomes, women who had midwife-led continuity models of care were less likely to experience amniotomy (average RR 0.80, 95% CI 0.66 to 0.98; participants = 3253; studies = 4), episiotomy (average RR 0.84, 95% CI 0.77 to 0.92; participants = 17,674; studies = 14) and fetal loss/neonatal death before 24 weeks (average RR 0.81, 95% CI 0.67 to 0.98; participants = 15,645; studies = 11). Women who had midwife-led continuity models of care were more likely to experience no intrapartum analgesia/anaesthesia (average RR 1.21, 95% CI 1.06 to 1.37; participants = 10,499; studies = 7), have a longer mean length of labour (hours) (mean difference (MD) 0.50, 95% CI 0.27 to 0.74; participants = 3328; studies = 3) and more likely to be attended at birth by a known midwife (average RR 7.04, 95% CI 4.48 to 11.08; participants = 6917; studies = 7). There were no differences between groups for fetal loss or neonatal death more than or equal to 24 weeks, induction of labour, antenatal hospitalisation, antepartum haemorrhage, augmentation/artificial oxytocin during labour, opiate analgesia, perineal laceration requiring suturing, postpartum haemorrhage, breastfeeding initiation, low birthweight infant, five-minute Apgar score less than or equal to seven, neonatal convulsions, admission of infant to special care or neonatal intensive care unit(s) or in mean length of neonatal hospital stay (days). Due to a lack of consistency in measuring women's satisfaction and assessing the cost of various maternity models, these outcomes were reported narratively. 
The majority of included studies reported a higher rate of maternal satisfaction in midwife-led continuity models of care. Similarly, there was a trend towards a cost-saving effect for midwife-led continuity care compared to other care models. This review suggests that women who received midwife-led continuity models of care were less likely to experience intervention and more likely to be satisfied with their care, with at least comparable adverse outcomes for women or their infants, than women who received other models of care. Further research is needed to explore findings of fewer preterm births and fewer fetal deaths less than 24 weeks, and overall fetal loss/neonatal death associated with midwife-led continuity models of care.

  17. Evaluation of black carbon estimations in global aerosol models

    NASA Astrophysics Data System (ADS)

    Koch, D.; Schulz, M.; Kinne, S.; McNaughton, C.; Spackman, J. R.; Balkanski, Y.; Bauer, S.; Berntsen, T.; Bond, T. C.; Boucher, O.; Chin, M.; Clarke, A.; de Luca, N.; Dentener, F.; Diehl, T.; Dubovik, O.; Easter, R.; Fahey, D. W.; Feichter, J.; Fillmore, D.; Freitag, S.; Ghan, S.; Ginoux, P.; Gong, S.; Horowitz, L.; Iversen, T.; Kirkevåg, A.; Klimont, Z.; Kondo, Y.; Krol, M.; Liu, X.; Miller, R.; Montanaro, V.; Moteki, N.; Myhre, G.; Penner, J. E.; Perlwitz, J.; Pitari, G.; Reddy, S.; Sahu, L.; Sakamoto, H.; Schuster, G.; Schwarz, J. P.; Seland, Ø.; Stier, P.; Takegawa, N.; Takemura, T.; Textor, C.; van Aardenne, J. A.; Zhao, Y.

    2009-11-01

    We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations, aerosol absorption optical depth (AAOD) retrievals from AERONET and the Ozone Monitoring Instrument (OMI), and BC column estimations based on AERONET. In regions other than Asia, most models are biased high compared to surface concentration measurements. However, compared with (column) AAOD or BC burden retrievals, the models are generally biased low. The average ratio of model to retrieved AAOD is less than 0.7 in South American and 0.6 in African biomass burning regions; both of these regions lack surface concentration measurements. In Asia the average model to observed ratio is 0.7 for AAOD and 0.5 for BC surface concentrations. Compared with aircraft measurements over the Americas at latitudes between 0 and 50N, the average model is a factor of 8 larger than observed, and most models exceed the measured BC standard deviation in the mid to upper troposphere. At higher latitudes the average model to aircraft BC ratio is 0.4 and models underestimate the observed BC loading in the lower and middle troposphere associated with springtime Arctic haze. Low model bias for AAOD but overestimation of surface and upper atmospheric BC concentrations at lower latitudes suggests that most models are underestimating BC absorption and should improve estimates for refractive index, particle size, and optical effects of BC coating. Retrieval uncertainties and/or differences with model diagnostic treatment may also contribute to the model-measurement disparity. Largest AeroCom model diversity occurred in northern Eurasia and the remote Arctic, regions influenced by anthropogenic sources. 
Changing emissions, aging, removal, or optical properties within a single model generated a smaller change in model predictions than the range represented by the full set of AeroCom models. Upper tropospheric concentrations of BC mass from the aircraft measurements are suggested to provide a unique new benchmark to test scavenging and vertical dispersion of BC in global models.

  18. Rainier Mesa CAU Infiltration Model using INFILv3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levitt, Daniel G.; Kwicklis, Edward M.

    This presentation outlines: (1) Model inputs - DEM, precipitation, air temperature, soil properties, surface geology, and vegetation; (2) Model pre-processing - runoff routing and sinks, slope and azimuth, soil Ksat reduction with slope (to mitigate the bathtub ring), and soil-bedrock interface permeabilities; (3) Model calibration - ET using PEST, chloride mass balance data, and streamflow using PEST; (4) Model validation - streamflow data not used for calibration; (5) Uncertainty analysis; and (6) Results. Conclusions are: (1) Average annual infiltration rates = 11 to 18 mm/year for the RM domain; (2) Average annual infiltration rates = 7 to 11 mm/year for the SM domain; (3) ET = 70% of precipitation for both domains; (4) Runoff = 8-9% for RM and 22-24% for SM - the apparently high average runoff is caused by the truncation of the lower-elevation portions of watersheds where much of the infiltration of runoff waters would otherwise occur; (5) Model results are calibrated to measured ET, CMB data, and streamflow observations; (6) Model results are validated using streamflow observations discovered after model calibration was complete; (7) Use of soil Ksat reduction with slope to mitigate the bathtub ring was successful (based on calibration results); and (8) The soil-bedrock K_interface is an innovative approach.

  19. Vehicle-specific emissions modeling based upon on-road measurements.

    PubMed

    Frey, H Christopher; Zhang, Kaishan; Rouphail, Nagui M

    2010-05-01

    Vehicle-specific microscale fuel use and emissions rate models are developed based upon real-world hot-stabilized tailpipe measurements made using a portable emissions measurement system. Consecutive averaging periods of one to three multiples of the response time are used to compare two semiempirical physically based modeling schemes. One scheme is based on internally observable variables (IOVs), such as engine speed and manifold absolute pressure, while the other is based on externally observable variables (EOVs), such as speed, acceleration, and road grade. For NO, HC, and CO emission rates, the average R(2) ranged from 0.41 to 0.66 for the former and from 0.17 to 0.30 for the latter. The EOV models have R(2) for CO(2) of 0.43 to 0.79 versus 0.99 for the IOV models. The models are sensitive to episodic events in driving cycles such as high acceleration. Intervehicle and fleet average modeling approaches are compared; the former account for microscale variations that might be useful for some types of assessments. EOV-based models have practical value for traffic management or simulation applications since IOVs usually are not available or not used for emission estimation.
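    The IOV/EOV comparison above rests on the coefficient of determination. A minimal sketch of that metric follows; the emission-rate numbers are hypothetical placeholders, not the study's measurements:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical cycle-average NO rates (g/s): observed vs. an IOV-style fit.
obs = [0.012, 0.020, 0.035, 0.050, 0.041]
pred = [0.014, 0.018, 0.033, 0.047, 0.044]
fit_quality = r_squared(obs, pred)
```

    Averaging such R^2 values over vehicles and pollutants gives the 0.41-0.66 (IOV) versus 0.17-0.30 (EOV) ranges quoted above.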

  20. The Effect of Inhibitory Neuron on the Evolution Model of Higher-Order Coupling Neural Oscillator Population

    PubMed Central

    Qi, Yi; Wang, Rubin; Jiao, Xianfa; Du, Ying

    2014-01-01

    We proposed a higher-order coupling neural network model including inhibitory neurons and examined the dynamical evolution of the average number density and phase neural coding under spontaneous activity and external stimulation conditions. The results indicated that an increase in inhibitory coupling strength causes a decrease in the average number density, whereas an increase in excitatory coupling strength causes an increase in the stable amplitude of the average number density. Whether or not the neural oscillator population is able to enter a new synchronous oscillation is determined by the excitatory and inhibitory coupling strengths. In the presence of external stimulation, the evolution of the average number density depends on the external stimulation and the coupling term; whichever dominates determines the final evolution. PMID:24516505

  1. Properties of bright solitons in averaged and unaveraged models for SDG fibres

    NASA Astrophysics Data System (ADS)

    Kumar, Ajit; Kumar, Atul

    1996-04-01

    Using the slowly varying envelope approximation and averaging over the fibre cross-section the evolution equation for optical pulses in semiconductor-doped glass (SDG) fibres is derived from the nonlinear wave equation. Bright soliton solutions of this equation are obtained numerically and their properties are studied and compared with those of the bright solitons in the unaveraged model.

  2. The Performance of Multilevel Growth Curve Models under an Autoregressive Moving Average Process

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Pituch, Keenan A.

    2009-01-01

    The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…
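    The autoregressive moving average process at issue can be sketched as a simple simulation of the level-1 residuals; the ARMA(1,1) form and parameter values below are illustrative, not taken from the cited studies:

```python
import random

def simulate_arma11(n, phi=0.5, theta=0.3, sigma=1.0, seed=0):
    """Simulate e_t = phi * e_{t-1} + a_t + theta * a_{t-1},
    where a_t is Gaussian white noise (illustrative parameters)."""
    rng = random.Random(seed)
    series, a_prev, e_prev = [], 0.0, 0.0
    for _ in range(n):
        a = rng.gauss(0.0, sigma)
        e_t = phi * e_prev + a + theta * a_prev
        series.append(e_t)
        a_prev, e_prev = a, e_t
    return series
```

    Fitting a growth curve model that ignores this residual structure is exactly the misspecification whose consequences the study examines.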

  3. PTM Modeling of Dredged Suspended Sediment at Proposed Polaris Point and Ship Repair Facility CVN Berthing Sites - Apra Harbor, Guam

    DTIC Science & Technology

    2017-09-01

    (No abstract was indexed; the excerpt consists of list-of-figures entries:) ADCP locations used for model calibration; Figure 4-3, sample water ...; example of a fine sediment sample [Set d, Sample B30]; example of a coarse sediment sample [Set d, Sample B05 ...]; Turning Basin average sediment size distribution curve; Figure 5-5, Turning Basin average size ...

  4. The Use of Individualized Video Modeling to Enhance Positive Peer Interactions in Three Preschool Children

    ERIC Educational Resources Information Center

    Green, Vanessa A.; Prior, Tessa; Smart, Emily; Boelema, Tanya; Drysdale, Heather; Harcourt, Susan; Roche, Laura; Waddington, Hannah

    2017-01-01

    The study described in this article sought to enhance the social interaction skills of 3 preschool children using video modeling. All children had been assessed as having difficulties in their interactions with peers. Two were above average on internalizing problems and the third was above average on externalizing problems. The study used a…

  5. Moisture transfer through the membrane of a cross-flow energy recovery ventilator: Measurement and simple data-driven modeling

    Treesearch

    CR Boardman; Samuel V. Glass

    2015-01-01

    The moisture transfer effectiveness (or latent effectiveness) of a cross-flow, membrane based energy recovery ventilator is measured and modeled. Analysis of in situ measurements for a full year shows that energy recovery ventilator latent effectiveness increases with increasing average relative humidity and surprisingly increases with decreasing average temperature. A...

  6. Using Baidu Search Index to Predict Dengue Outbreak in China

    NASA Astrophysics Data System (ADS)

    Liu, Kangkang; Wang, Tao; Yang, Zhicong; Huang, Xiaodong; Milinovich, Gabriel J.; Lu, Yi; Jing, Qinlong; Xia, Yao; Zhao, Zhengyang; Yang, Yang; Tong, Shilu; Hu, Wenbiao; Lu, Jiahai

    2016-12-01

    This study identified possible thresholds to predict dengue fever (DF) outbreaks using the Baidu Search Index (BSI). Time-series classification and regression tree models based on the BSI were used to develop a predictive model for DF outbreaks in Guangzhou and Zhongshan, China. In the regression tree models, the mean autochthonous DF incidence rate increased approximately 30-fold in Guangzhou when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 382. When the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 91.8, there was an approximately 9-fold increase in the mean autochthonous DF incidence rate in Zhongshan. In the classification tree models, the results showed that when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 99.3, there was an 89.28% chance of a DF outbreak in Guangzhou, while in Zhongshan, when the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 68.1, the chance of a DF outbreak rose to 100%. The study indicated that low-cost internet-based surveillance systems can be a valuable complement to traditional DF surveillance in China.
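    The classification-tree rule described above reduces to a simple threshold on a lagged moving average. A sketch using the Guangzhou values quoted in the abstract (the BSI series itself is synthetic):

```python
def lagged_moving_average(series, week, lags):
    """Mean of the BSI over the preceding `lags` weeks (weeks week-lags .. week-1)."""
    window = series[week - lags:week]
    return sum(window) / len(window)

def outbreak_alert(bsi_series, week, threshold=99.3, lags=3):
    """Guangzhou-style rule: alert when the lagged 1-3 week moving average
    of the Baidu Search Index exceeds the tree's learned threshold.
    Threshold and lag values are taken from the abstract."""
    return lagged_moving_average(bsi_series, week, lags) > threshold

# Synthetic weekly BSI series: quiet weeks followed by a search surge.
bsi = [40, 45, 50, 120, 130, 125]
```

    For Zhongshan, the same rule would use `threshold=68.1` and `lags=5` per the abstract.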

  7. Modelling gas dynamics in 1D ducts with abrupt area change

    NASA Astrophysics Data System (ADS)

    Menina, R.; Saurel, R.; Zereg, M.; Houas, L.

    2011-09-01

    Most gas dynamic computations in industrial ducts are done in one dimension with cross-section-averaged Euler equations. This poses a fundamental difficulty as soon as geometrical discontinuities are present. The momentum equation contains a non-conservative term involving a surface pressure integral, responsible for momentum loss. Definition of this integral is very difficult from a mathematical standpoint as the flow may contain other discontinuities (shocks, contact discontinuities). From a physical standpoint, geometrical discontinuities induce multidimensional vortices that modify the surface pressure integral. In the present paper, an improved 1D flow model is proposed. An extra energy (or entropy) equation is added to the Euler equations expressing the energy and turbulent pressure stored in the vortices generated by the abrupt area variation. The turbulent energy created by the flow-area change interaction is determined by a specific estimate of the surface pressure integral. The model's predictions are compared with 2D-averaged results from numerical solution of the Euler equations. Comparison with shock tube experiments is also presented. The new 1D-averaged model improves the conventional cross-section-averaged Euler equations and is able to reproduce the main flow features.
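    For reference, the conventional cross-section-averaged (quasi-1D) Euler equations that the improved model extends take the standard form below, with A(x) the duct area; this is the textbook formulation, not a set of equations quoted from the abstract. At an abrupt area change, \(\partial_x A\) is a Dirac mass and the term \(p\,\partial_x A\) becomes the surface pressure integral discussed above:

```latex
\begin{aligned}
\partial_t(\rho A) + \partial_x(\rho u A) &= 0,\\
\partial_t(\rho u A) + \partial_x\!\left[(\rho u^2 + p)\,A\right] &= p\,\partial_x A,\\
\partial_t(\rho E A) + \partial_x\!\left[(\rho E + p)\,u A\right] &= 0.
\end{aligned}
```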

  8. Variable diffusion in stock market fluctuations

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2015-02-01

    We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. Each of the five most actively traded stocks contains two time intervals during the day where the variance of increments can be fit by power-law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with a linear variable diffusion coefficient as a lowest-order approximation to the real dynamics of financial markets, and use it to test the effects of the time-averaging techniques typically applied in financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble-averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble-average approaches will yield new insight into the dynamics of financial markets, and the proposed model offers a way to model those dynamics on microscopic time scales.
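    A toy version of such a variable-diffusion process makes the ensemble-average point concrete. The diffusion coefficient below is linear in the scaling variable u = x/sqrt(t), a minimal stand-in for (not a reproduction of) the paper's model:

```python
import random

def simulate_paths(n_paths=2000, n_steps=200, dt=0.01, seed=0):
    """Ensemble of toy diffusive return paths with diffusion coefficient
    D(u) = 1 + |u|, linear in the scaling variable u = x / sqrt(t)."""
    rng = random.Random(seed)
    paths = [[0.0] * (n_steps + 1) for _ in range(n_paths)]
    for p in paths:
        for k in range(n_steps):
            t = (k + 1) * dt
            u = p[k] / t ** 0.5
            diffusion = 1.0 + abs(u)
            p[k + 1] = p[k] + rng.gauss(0.0, (diffusion * dt) ** 0.5)
    return paths

def ensemble_variance(paths, k):
    """Variance across the ensemble at time step k (not a time average)."""
    xs = [p[k] for p in paths]
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

    Statistics computed across the ensemble at fixed time, as here, track the underlying diffusion; a time average along a single path would not, which is the paper's central methodological point.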

  9. Customer focus level following implementation of quality improvement model in Tehran social security hospitals.

    PubMed

    Mehrabi, F; Nasiripour, A; Delgoshaei, B

    2008-01-01

    The key factor for the success of total quality management programs in an organization is focusing on the customer. The purpose of this paper is to assess customer focus level following implementation of a quality improvement model in social security hospitals in Tehran Province. This research was descriptive-comparative in nature. The study population consisted of the implementers of quality improvement model in four Tehran social security hospitals. The data were gathered through a checklist addressing customer knowledge and customer satisfaction. The research findings indicated that the average scores on customer knowledge in Shahriar, Alborz, Milad, and Varamin hospitals were 64.1, 61.2, 54.1, and 46.6, respectively. The average scores on customer satisfaction in Shahriar, Alborz, Milad, and Varamin hospitals were 67.7, 65, 59.4, and 50, respectively. The customer focus average scores in Shahriar, Alborz, Milad, and Varamin hospitals were 66.3, 63.3, 57.3, and 48.6, respectively. The total average scores on customer knowledge, satisfaction and customer focus in the investigated hospitals proved to be 56.4, 60.5, and 58.9, respectively. The paper is of value in showing that implementation of the quality improvement model could considerably improve customer focus level.

  10. In-use activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks.

    PubMed

    Sandhu, Gurdas S; Frey, H Christopher; Bartelt-Hunt, Shannon; Jones, Elizabeth

    2015-03-01

    The objectives of this study were to quantify real-world activity, fuel use, and emissions for heavy-duty diesel roll-off refuse trucks; evaluate the contribution of duty cycles and emissions controls to variability in cycle-average fuel use and emission rates; quantify the effect of vehicle weight on fuel use and emission rates; and compare empirical cycle-average emission rates with the U.S. Environmental Protection Agency's MOVES emission factor model predictions. Measurements were made at 1 Hz on six trucks of model years 2005 to 2012, using onboard systems. The trucks traveled 870 miles, had an average speed of 16 mph, and collected 165 tons of trash. The average fuel economy was 4.4 mpg, which is approximately twice previously reported values for residential trash collection trucks. On average, 50% of time is spent idling and about 58% of emissions occur in urban areas. Newer trucks with selective catalytic reduction and a diesel particulate filter had NOx and PM cycle-average emission rates that were 80% and 95% lower, respectively, compared to older trucks without. On average, the combined can and trash weight was about 55% of chassis weight. The marginal effect of vehicle weight on fuel use and emissions is highest at low loads and decreases as load increases. Among 36 cycle-average rates (6 trucks × 6 cycles), MOVES-predicted values and estimates based on real-world data have similar relative trends. MOVES-predicted CO2 emissions are similar to those of the real world, while NOx and PM emissions are, on average, 43% lower and 300% higher, respectively. The real-world data presented here can be used to estimate benefits of replacing old trucks with new trucks. Further, the data can be used to improve emission inventories and model predictions.

    In-use measurements of the real-world activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks can be used to improve the accuracy of predictive models, such as MOVES, and emissions inventories. Further, the activity data from this study can be used to generate more representative duty cycles for more accurate chassis dynamometer testing. Comparisons of old and new model year diesel trucks are useful in analyzing the effect of fleet turnover. The analysis of the effect of haul weight on fuel use can be used by fleet managers to optimize operations to reduce fuel cost.

  11. How well does the Reliable Ensemble Averaging Method (REA) for 15 CMIP5 GCM simulations work for Mexico?

    NASA Astrophysics Data System (ADS)

    Colorado, G.; Salinas, J. A.; Cavazos, T.; de Grau, P.

    2013-05-01

    Precipitation simulations from 15 CMIP5 GCMs were combined in a weighted ensemble using the Reliable Ensemble Averaging (REA) method, obtaining the weight of each model. This was done for a historical period (1961-2000) and for future emissions based on low (RCP4.5) and high (RCP8.5) radiative forcing for the period 2075-2099. The annual cycles of the simple ensemble of the historical GCM simulations, the historical REA average, and the Climate Research Unit (CRU TS3.1) database were compared in four zones of México. For precipitation, the improvements from using the REA method are evident especially in the two northern zones of México, where the REA average is closer to the observations (CRU) than the simple average. In the southern zones there is also an improvement, but it is not as good as in the north; in particular, in the southeast, instead of reproducing the annual cycle with its mid-summer drought qualitatively well, the REA average greatly underestimated it. The main reason is that precipitation is underestimated by all the models, and the mid-summer drought does not even exist in some models. In the REA average of the future scenarios, as expected, the most drastic decrease in precipitation was simulated under RCP8.5, especially in the monsoon area and in the south of Mexico in summer and winter. In the center and south of Mexico, however, the same scenario simulates an increase of precipitation in autumn.
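    The REA weighting idea can be sketched as follows. Only the bias criterion of Giorgi and Mearns' method is shown (the full method also includes a convergence criterion), and all numbers are placeholders, not values from this study:

```python
def rea_weights(model_means, obs_mean, epsilon, m=1.0):
    """Bias-based reliability factors in the spirit of the REA method:
    models far from the observed mean are down-weighted; models within
    the natural-variability scale `epsilon` get full weight."""
    weights = []
    for mu in model_means:
        bias = abs(mu - obs_mean)
        r = (epsilon / bias) ** m if bias > epsilon else 1.0
        weights.append(r)
    total = sum(weights)
    return [w / total for w in weights]

def rea_average(model_means, weights):
    """Weighted ensemble mean using the normalized reliability factors."""
    return sum(w * mu for w, mu in zip(weights, model_means))
```

    With one badly biased member, the weighted mean stays close to the observations while the simple mean is dragged away, which is the behavior reported for the northern zones.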

  12. Measuring coding intensity in the Medicare Advantage program.

    PubMed

    Kronick, Richard; Welch, W Pete

    2014-01-01

    In 2004, Medicare implemented a system of paying Medicare Advantage (MA) plans that gave them greater incentive than fee-for-service (FFS) providers to report diagnoses. Data: risk scores for all Medicare beneficiaries, 2004-2013, and Medicare Current Beneficiary Survey (MCBS) data, 2006-2011. Measures: change in average risk score for all enrollees and for stayers (beneficiaries who were in either FFS or MA for two consecutive years), and prevalence rates by Hierarchical Condition Category (HCC). Each year the average MA risk score increased faster than the average FFS score. Using the risk adjustment model in place in 2004, the average MA score as a ratio of the average FFS score would have increased from 90% in 2004 to 109% in 2013. Using the model partially implemented in 2014, the ratio would have increased from 88% to 102%. The increase in relative MA scores appears to largely reflect changes in diagnostic coding, not real increases in the morbidity of MA enrollees. In survey-based data for 2006-2011, the MA-FFS ratio of risk scores remained roughly constant at 96%. Intensity of coding varies widely by contract, with some contracts coding very similarly to FFS and others coding much more intensely than the MA average. Underpinning this relative growth in scores is particularly rapid relative growth in a subset of HCCs. Medicare has taken significant steps to mitigate the effects of coding intensity in MA, including implementing a 3.4% coding intensity adjustment in 2010 and revising the risk adjustment model in 2013 and 2014. Given the continuous relative increase in the average MA risk score, further policy changes will likely be necessary.

  13. Stress drop inferred from dynamic rupture simulations consistent with Moment-Rupture area empirical scaling models: Effects of weak shallow zone

    NASA Astrophysics Data System (ADS)

    Dalguer, L. A.; Miyake, H.; Irikura, K.; Wu, H., Sr.

    2016-12-01

    Empirical scaling models of seismic moment and rupture area provide constraints to parameterize source parameters, such as stress drop, for numerical simulations of ground motion. Several such scaling models have been published in the literature. The effects of the finite-width seismogenic zone and the free surface have been attributed to the breaking of the well-known self-similar scaling (e.g. Dalguer et al., 2008), giving origin to the so-called L and W models for large faults. These models imply the existence of a three-stage scaling relationship between seismic moment and rupture area (e.g. Irikura and Miyake, 2011). In this paper we extend the work of Dalguer et al. (2008), who calibrated fault models that match the observations, showing that the average stress drop is independent of earthquake size for buried earthquakes but scale dependent for surface-rupturing earthquakes. Here we have developed additional sets of dynamic rupture models for vertical strike-slip faults to evaluate the effect of the weak shallow layer (WSL) zone on the calibration of stress drop. Rupture in the WSL zone is expected to operate with an enhanced energy absorption mechanism. The set of dynamic models consists of fault models with width 20 km; fault lengths L = 20 km, 40 km, 60 km, 80 km, 100 km, 120 km, 200 km, 300 km and 400 km; and average stress drop values of 2.0 MPa, 2.5 MPa, 3.0 MPa, 3.5 MPa, 5.0 MPa and 7.5 MPa. For models that break the free surface, the WSL zone is modeled assuming a 2 km width with a stress drop of 0.0 MPa or -2.0 MPa. Our results show that, depending on the characterization of the WSL zone, the average stress drop in the seismogenic zone that fits the empirical models changes. If the WSL zone is not considered, that is, if the stress drop in the WSL zone is the same as in the seismogenic zone, the average stress drop is about 20% smaller than in models with a WSL zone. By introducing more energy absorption in the WSL zone, which could be the case for large mature faults, the average stress drop in the seismogenic zone increases, suggesting that large earthquakes need a higher stress drop to break the fault than buried and moderate earthquakes. Therefore, the value of the average stress drop for large events that break the free surface depends on the definition of the WSL, suggesting that the WSL plays an important role in the prediction of final slip and fault displacement.

  14. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment

    NASA Technical Reports Server (NTRS)

    Baurle, Robert A.; Edwards, Jack R.

    2010-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomena under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to the choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure assumptions.

  15. Stand age and climate drive forest carbon balance recovery

    NASA Astrophysics Data System (ADS)

    Besnard, Simon; Carvalhais, Nuno; Clevers, Jan; Herold, Martin; Jung, Martin; Reichstein, Markus

    2016-04-01

    Forests play an essential role in the terrestrial carbon (C) cycle, especially in the C exchanges between the terrestrial biosphere and the atmosphere. Ecological disturbances and forest management are drivers of forest dynamics and strongly impact the forest C budget. However, there is a lack of knowledge on the exogenous and endogenous factors driving forest C recovery. Our analysis includes 68 forest sites in different climate zones to determine the relative influence of stand age and climate conditions on forest carbon balance recovery. In this study, we only included forest regrowth after clear-cut stand replacement (e.g. harvest, fire) and afforestation/reforestation processes. We synthesized net ecosystem production (NEP), gross primary production (GPP), ecosystem respiration (Re), the photosynthetic respiratory ratio (GPP to Re ratio), the ecosystem carbon use efficiency (CUE), that is, the NEP to GPP ratio, and CUEclimax, where GPP is derived from the climate conditions. We implemented a non-linear regression analysis to identify the best model representing the C flux patterns with stand age. Furthermore, we showed that each C flux has a non-linear relationship with stand age, annual precipitation (P) and mean annual temperature (MAT); therefore, we propose to use non-linear transformations of the covariates for C flux estimates. Non-linear stand age and climate models were therefore used to establish multiple linear regressions for C flux predictions and for determining the contributions of stand age and climate to forest carbon recovery. Our findings showed that a coupled stand age-climate model explained 33% (44%, average site), 62% (76%, average site), 56% (71%, average site), 41% (59%, average site), 50% (65%, average site) and 36% (50%, average site) of the variance of annual NEP, GPP, Re, photosynthetic respiratory ratio, CUE and CUEclimax across sites, respectively. In addition, we showed that gross fluxes (e.g. GPP and Re) are mainly climatically driven, with 54.2% (68.4%, average site) and 54.1% (71.0%, average site) of GPP and Re variability, respectively, explained by the sum of MAT and P. However, annual NEP, the GPP to Re ratio and CUEclimax are affected by both forest stand age and climate conditions, in particular MAT. The key result is that forest stand age plays a crucial role in determining CUE (36.4% and 48.2% for all years per site and average site, respectively), while climate conditions have less effect on CUE (13.6% and 15.4% for all years per site and average site, respectively). These findings are relevant for the implementation of Earth system models and imply that information on both forest stand age and climate conditions is critical to improve the accuracy of global terrestrial C model estimates.

  16. The effects of ground hydrology on climate sensitivity to solar constant variations

    NASA Technical Reports Server (NTRS)

    Chou, S. H.; Curran, R. J.; Ohring, G.

    1979-01-01

    The effects of two different evaporation parameterizations on the climate sensitivity to solar constant variations are investigated by using a zonally averaged climate model. The model is based on a two-level quasi-geostrophic zonally averaged annual mean model. One of the evaporation parameterizations tested is a nonlinear formulation with the Bowen ratio determined by the predicted vertical temperature and humidity gradients near the earth's surface. The other is the linear formulation with the Bowen ratio essentially determined by the prescribed linear coefficient.

  17. A Universal and Easy-to-Use Model for the Pressure of Arbitrary-Shape 3D Multifunctional Integumentary Cardiac Membranes.

    PubMed

    Su, Yewang; Liu, Zhuangjian; Xu, Lizhi

    2016-04-20

    Recently developed concepts for 3D, organ-mounted electronics for cardiac applications require a universal and easy-to-use mechanical model to calculate the average pressure associated with operation of the device, which is crucial for evaluation of design efficacy and optimization. This work proposes a simple, accurate, easy-to-use, and universal model to quantify the average pressure for arbitrary-shape organs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Channel Characterization for Free-Space Optical Communications

    DTIC Science & Technology

    2012-07-01

    ...parameters. From the path-average parameters, a Cn^2 profile model, called the HAP model, was constructed so that the entire channel from air to ground... (SR), both of which are required to estimate the Power in the Bucket (PIB) and Power in the Fiber (PIF) associated with the FOENEX data beam. UCF was... of the path-average values of Cn^2; the resulting HAP Cn^2 profile model led to values of ground-level Cn^2 that compared very well with actual...

  19. Developing and Testing a Robust, Multi-Scale Framework for the Recovery of Longleaf Pine Understory Communities

    DTIC Science & Technology

    2015-05-01

    Model averaging for species richness on post-agricultural sites (1000 m2) with a landscape radius of 150 m. Table 3.4.8. Model selection for species richness on post-agricultural sites (1000 m2) with a landscape radius of 150 m. Table 3.4.9. Model averaging for proportion of reference species on... Direct, indirect, and total standardized effects on species richness. Table 4.1.1. Species and number of seeds added to the experimental plots at...

  20. Time series forecasting using ERNN and QR based on Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    The Bayesian model averaging technique is a multi-model combination technique. It was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean-square-error sense.
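    In practice, Bayesian model averaging weights are often approximated from information criteria; a common sketch is shown below. The use of BIC scores here is an assumption for illustration (the abstract does not state how the posterior weights were obtained):

```python
import math

def bic_weights(bics):
    """Approximate posterior model probabilities from BIC scores:
    w_i proportional to exp(-BIC_i / 2), a standard BMA approximation.
    Scores are shifted by the minimum for numerical stability."""
    best = min(bics)
    raw = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def bma_forecast(forecasts, weights):
    """Pointwise weighted combination of member forecasts
    (e.g. the ERNN and QR members of the hybrid)."""
    return [sum(w * f[k] for w, f in zip(weights, forecasts))
            for k in range(len(forecasts[0]))]
```

    The better-scoring member dominates the combination, but the weaker member still contributes, which is what distinguishes averaging from model selection.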

  1. Applications of Analytical Self-Similar Solutions of Reynolds-Averaged Models for Instability-Induced Turbulent Mixing

    NASA Astrophysics Data System (ADS)

    Hartland, Tucker; Schilling, Oleg

    2017-11-01

    Analytical self-similar solutions to several families of single- and two-scale, eddy viscosity and Reynolds stress turbulence models are presented for Rayleigh-Taylor, Richtmyer-Meshkov, and Kelvin-Helmholtz instability-induced turbulent mixing. The use of algebraic relationships between model coefficients and physical observables (e.g., experimental growth rates) following from the self-similar solutions to calibrate a member of a given family of turbulence models is shown. It is demonstrated numerically that the algebraic relations accurately predict the value and variation of physical outputs of a Reynolds-averaged simulation in flow regimes that are consistent with the simplifying assumptions used to derive the solutions. The use of experimental and numerical simulation data on Reynolds stress anisotropy ratios to calibrate a Reynolds stress model is briefly illustrated. The implications of the analytical solutions for future Reynolds-averaged modeling of hydrodynamic instability-induced mixing are briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
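    As an example of the growth-rate observables used for such calibration, the self-similar Rayleigh-Taylor mixing-layer width follows the standard relation (a textbook result, not a formula quoted from this abstract):

```latex
h(t) = \alpha\, A\, g\, t^{2}, \qquad A = \frac{\rho_2 - \rho_1}{\rho_2 + \rho_1},
```

    where A is the Atwood number, g the acceleration, and the dimensionless growth-rate coefficient α is the experimental observable that the algebraic relations tie to the turbulence-model coefficients.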

  2. The effect of inquiry-flipped classroom model toward students' achievement on chemical reaction rate

    NASA Astrophysics Data System (ADS)

    Paristiowati, Maria; Fitriani, Ella; Aldi, Nurul Hanifah

    2017-08-01

The aim of this research is to determine the effect of the inquiry-flipped classroom model on students' achievement on the chemical reaction rate topic. The study was conducted with eleventh graders at SMA Negeri 3 Tangerang. A quasi-experimental method with a non-equivalent control group design was implemented, and a sample of 72 students was selected by purposive sampling. Students in the experimental group learned through the inquiry-flipped classroom model, while students in the control group learned through a guided inquiry learning model. Data analysis showed a significant difference in the students' average achievement: 83.44 with the inquiry-flipped classroom model versus 74.06 with the guided inquiry learning model. It can be concluded that students' achievement with the inquiry-flipped classroom was better than with guided inquiry. The difference was significant by t-test, with tobs = 3.056 > ttable = 1.994 (α = 0.05).
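The reported significance test can be reproduced in outline with a pooled-variance two-sample t statistic. The group scores below are illustrative random draws (the study's raw data are not given); only the group sizes and approximate means follow the abstract.

```python
import numpy as np

def two_sample_t(a, b):
    # Pooled-variance (Student's) two-sample t statistic and its degrees of freedom.
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2

# Illustrative scores only: two groups of 36, means as in the abstract
rng = np.random.default_rng(1)
flipped = rng.normal(83.44, 6.0, 36)   # experimental group
guided = rng.normal(74.06, 6.0, 36)    # control group
t_obs, df = two_sample_t(flipped, guided)
```

With 70 degrees of freedom, the two-tailed critical value at α = 0.05 is indeed about 1.994, matching the ttable value quoted above.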

  3. Monthly streamflow forecasting with auto-regressive integrated moving average

    NASA Astrophysics Data System (ADS)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with an enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering is performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and clustered SSA-ARIMA models were all developed in R software. Results from the proposed model were then compared to the conventional ARIMA model using root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
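The autoregressive core of such a pipeline can be sketched with a plain AR(p) least-squares fit and iterated forecasts, scored by RMSE on a 9:1 split as in the study design. This is a simplified stand-in (no differencing, no MA terms, no SSA pre-processing) run on a synthetic series.

```python
import numpy as np

def fit_ar(x, p):
    # Fit an AR(p) model by ordinary least squares; returns [c, phi_1, ..., phi_p].
    x = np.asarray(x, float)
    n = len(x)
    X = np.column_stack([np.ones(n - p)] + [x[p - k : n - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return beta

def forecast_ar(x, beta, steps):
    # Iterated one-step-ahead forecasts from the end of series x.
    p = len(beta) - 1
    hist = list(np.asarray(x, float)[-p:])
    out = []
    for _ in range(steps):
        nxt = beta[0] + sum(beta[k] * hist[-k] for k in range(1, p + 1))
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

def rmse(y, yhat):
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2)))

# Synthetic "monthly flow" (an AR(1) process), split 9:1 into train/test
rng = np.random.default_rng(2)
n = 240
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()
split = int(0.9 * n)
beta = fit_ar(x[:split], p=2)
pred = forecast_ar(x[:split], beta, steps=n - split)
err = rmse(x[split:], pred)
```

The recovered lag-1 coefficient sits close to the true 0.7, and the RMSE provides the comparison metric used in the abstract.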

  4. Vertical profiles for SO2 and SO on Venus from different one-dimensional simulations

    NASA Astrophysics Data System (ADS)

    Mills, Franklin P.; Jessup, Kandis-Lea; Yung, Yuk

    2017-10-01

Sulfur dioxide (SO2) plays many roles in Venus’ atmosphere. It is a precursor for the sulfuric acid that condenses to form the global cloud layers and is likely a precursor for the unidentified UV absorber, which, along with CO2 near the tops of the clouds, appears to be responsible for absorbing about half of the energy deposited in Venus’ atmosphere [1]. Most published simulations of Venus’ mesospheric chemistry have used one-dimensional numerical models intended to represent global-average or diurnal-average conditions [e.g., 2, 3, 4]. Observations, however, have found significant variations of SO and SO2 with latitude and local time throughout the mesosphere [e.g., 5, 6]. Some recent simulations have examined local time variations of SO and SO2 using analytical models [5], one-dimensional steady-state solar-zenith-angle-dependent numerical models [6], and three-dimensional general circulation models (GCMs) [7]. As an initial step towards a quantitative comparison among these different types of models, this poster compares simulated SO, SO2, and SO/SO2 from global-average, diurnal-average, and solar-zenith-angle (SZA) dependent steady-state models for the mesosphere. The Caltech/JPL photochemical model [8] was used with vertical transport via eddy diffusion set based on observations and observationally-defined lower boundary conditions for HCl, CO, and OCS. Solar fluxes are based on SORCE SOLSTICE and SORCE SIM measurements from 26 December 2010 [9, 10]. The results indicate global-average and diurnal-average models may have significant limitations when used to interpret latitude- and local-time-dependent observations of SO2 and SO. [1] Titov D et al (2007) in Exploring Venus as a Terrestrial Planet, 121-138. [2] Zhang X et al (2012) Icarus, 217, 714-739. [3] Krasnopolsky V A (2012) Icarus, 218, 230-246. [4] Parkinson C D et al (2015) Planet Space Sci, 113-114, 226-236. [5] Sandor B J et al (2010) Icarus, 208, 49-60.
[6] Jessup K-L et al (2015) Icarus, 258, 309-336. [7] Stolzenbach A et al (2014) EGU General Assembly 2014, 16, EGU2014-5315. [8] Allen M et al (1981) J Geophys Res, 86, 3617-3627. [9] Harder J W et al (2010) Sol Phys, 263, 3-24. [10] Snow M et al (2005) Sol Phys, 230, 295-324.

  5. Isolating the cow-specific part of residual energy intake in lactating dairy cows using random regressions.

    PubMed

    Fischer, A; Friggens, N C; Berry, D P; Faverdin, P

    2018-07-01

The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include model-fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted. In the first, the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation-average net energy intake (NEI) on lactation-average milk energy output, average metabolic BW, and lactation loss and gain of body condition score. In the second, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept, using fortnightly repeated measures for the variables. This method splits the predicted NEI into two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows, all fed a constant energy-rich diet. Mixed models fitting cow-specific intercepts and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared.
The variance of REI estimated with the lactation average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation average model or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.
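The decomposition idea can be illustrated in miniature: fit a population regression of intake on an energy sink, then take each cow's mean residual as the cow-specific part. This is only a random-intercept analogue of the paper's full random-regression mixed model, run on simulated data with known cow effects; all numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cows, n_fortnights = 30, 17                 # 17 fortnights per cow, as in the study
cow = np.repeat(np.arange(n_cows), n_fortnights)
milk = rng.normal(30.0, 4.0, n_cows * n_fortnights)   # milk energy output proxy
true_eff = rng.normal(0.0, 1.0, n_cows)               # true cow-specific intake deviation
nei = 5.0 + 0.4 * milk + true_eff[cow] + rng.normal(0.0, 0.5, len(cow))

# Population (fixed-effect) regression of intake on the energy sink
X = np.column_stack([np.ones(len(cow)), milk])
beta, *_ = np.linalg.lstsq(X, nei, rcond=None)
rei = nei - X @ beta                                   # classic residual energy intake

# Cow-specific part: each cow's mean residual (a random-intercept analogue)
cow_effect = np.array([rei[cow == c].mean() for c in range(n_cows)])
```

Averaging the residuals within cow filters out the observation-level noise, so the recovered cow effects track the simulated efficiency differences closely.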

  6. Numerical investigation of airflow in an idealised human extra-thoracic airway: a comparison study

    PubMed Central

    Chen, Jie; Gutmark, Ephraim

    2013-01-01

The large eddy simulation (LES) technique is employed to numerically investigate the airflow through an idealised human extra-thoracic airway under three breathing conditions: 10 l/min, 30 l/min, and 120 l/min. The computational results are compared with single and cross hot-wire measurements, and with the time-averaged flow field computed by the standard k-ω and k-ω-SST Reynolds-averaged Navier-Stokes (RANS) models and the lattice Boltzmann method (LBM). The LES results are also compared to the root-mean-square (RMS) flow field computed by the Reynolds stress model (RSM) and the LBM. LES generally gives better prediction of the time-averaged flow field than the RANS models and the LBM. LES also provides better estimation of the RMS flow field than both the RSM and the LBM. PMID:23619907

  7. Optimal firing rate estimation

    NASA Technical Reports Server (NTRS)

    Paulin, M. G.; Hoffman, L. F.

    2001-01-01

    We define a measure for evaluating the quality of a predictive model of the behavior of a spiking neuron. This measure, information gain per spike (Is), indicates how much more information is provided by the model than if the prediction were made by specifying the neuron's average firing rate over the same time period. We apply a maximum Is criterion to optimize the performance of Gaussian smoothing filters for estimating neural firing rates. With data from bullfrog vestibular semicircular canal neurons and data from simulated integrate-and-fire neurons, the optimal bandwidth for firing rate estimation is typically similar to the average firing rate. Precise timing and average rate models are limiting cases that perform poorly. We estimate that bullfrog semicircular canal sensory neurons transmit in the order of 1 bit of stimulus-related information per spike.
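Gaussian-filter firing-rate estimation of the kind optimized here can be sketched as a sum of unit-area Gaussian kernels centred on the spike times; the spike train and bandwidth below are illustrative, not the bullfrog data.

```python
import numpy as np

def gaussian_rate(spike_times, t_grid, sigma):
    # Firing-rate estimate: sum of unit-area Gaussian kernels centred on each spike,
    # evaluated at the grid times; units are spikes per second.
    t = np.asarray(t_grid, float)[:, None]
    s = np.asarray(spike_times, float)[None, :]
    k = np.exp(-0.5 * ((t - s) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return k.sum(axis=1)

# Regular 20 Hz spike train over 1 s; with sigma comparable to the interspike
# interval (consistent with the abstract's bandwidth finding), the estimate
# hovers near the true 20 spikes/s away from the edges.
spikes = np.arange(0.025, 1.0, 0.05)
grid = np.linspace(0.2, 0.8, 121)
rate = gaussian_rate(spikes, grid, sigma=0.05)
```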

  8. Quantifying predictability variations in a low-order ocean-atmosphere model - A dynamical systems approach

    NASA Technical Reports Server (NTRS)

    Nese, Jon M.; Dutton, John A.

    1993-01-01

    The predictability of the weather and climatic states of a low-order moist general circulation model is quantified using a dynamic systems approach, and the effect of incorporating a simple oceanic circulation on predictability is evaluated. The predictability and the structure of the model attractors are compared using Liapunov exponents, local divergence rates, and the correlation and Liapunov dimensions. It was found that the activation of oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10 percent and decreases the variance of the largest local divergence rate by 20 percent. When an oceanic circulation develops, the average predictability of annually averaged states is improved by 25 percent and the variance of the largest local divergence rate decreases by 25 percent.
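The error-doubling-time language above maps directly onto the largest Lyapunov exponent: small errors grow like exp(λt), so T_d = ln 2 / λ. A tiny sketch with an illustrative (not the model's) exponent:

```python
import numpy as np

def doubling_time(lyapunov):
    # Average error-doubling time implied by the largest Lyapunov exponent:
    # errors grow like exp(lambda * t), so they double every ln(2)/lambda.
    return np.log(2.0) / lyapunov

lam_atmos = 0.5                      # illustrative exponent, per day
t_d = doubling_time(lam_atmos)
t_d_coupled = 1.10 * t_d             # a 10% longer doubling time, as reported
lam_coupled = np.log(2.0) / t_d_coupled
```

A 10% increase in doubling time thus corresponds to the exponent shrinking by the factor 1/1.1.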

  9. Averaged model to study long-term dynamics of a probe about Mercury

    NASA Astrophysics Data System (ADS)

    Tresaco, Eva; Carvalho, Jean Paulo S.; Prado, Antonio F. B. A.; Elipe, Antonio; de Moraes, Rodolpho Vilhena

    2018-02-01

    This paper provides a method for finding initial conditions of frozen orbits for a probe around Mercury. Frozen orbits are those whose orbital elements remain constant on average. Thus, at the same point in each orbit, the satellite always passes at the same altitude. This is very interesting for scientific missions that require close inspection of any celestial body. The orbital dynamics of an artificial satellite about Mercury is governed by the potential attraction of the main body. Besides the Keplerian attraction, we consider the inhomogeneities of the potential of the central body. We include secondary terms of Mercury gravity field from J_2 up to J_6, and the tesseral harmonics \\overline{C}_{22} that is of the same magnitude than zonal J_2. In the case of science missions about Mercury, it is also important to consider third-body perturbation (Sun). Circular restricted three body problem can not be applied to Mercury-Sun system due to its non-negligible orbital eccentricity. Besides the harmonics coefficients of Mercury's gravitational potential, and the Sun gravitational perturbation, our average model also includes Solar acceleration pressure. This simplified model captures the majority of the dynamics of low and high orbits about Mercury. In order to capture the dominant characteristics of the dynamics, short-period terms of the system are removed applying a double-averaging technique. This algorithm is a two-fold process which firstly averages over the period of the satellite, and secondly averages with respect to the period of the third body. This simplified Hamiltonian model is introduced in the Lagrange Planetary equations. Thus, frozen orbits are characterized by a surface depending on three variables: the orbital semimajor axis, eccentricity and inclination. We find frozen orbits for an average altitude of 400 and 1000 km, which are the predicted values for the BepiColombo mission. 
Finally, the paper delves into the orbital stability of frozen orbits and the temporal evolution of the eccentricity of these orbits.

  10. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
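A minimal periodic-AR illustration (a PAR(1) rather than a full PARMA, and moment estimation in the spirit of the seasonal Yule-Walker equations rather than the paper's likelihood method): estimate one AR coefficient per season from simulated data with known seasonal coefficients.

```python
import numpy as np

def fit_par1(x, period):
    # Moment estimates for a periodic AR(1): one coefficient per season,
    # phi_s = Cov(x_t, x_{t-1}) / Var(x_{t-1}), over times t with season(t) = s.
    x = np.asarray(x, float)
    phi = np.zeros(period)
    idx = np.arange(1, len(x))
    for s in range(period):
        sel = idx[idx % period == s]
        prev, cur = x[sel - 1], x[sel]
        phi[s] = np.cov(prev, cur, bias=True)[0, 1] / prev.var()
    return phi

# Simulate a PAR(1) whose coefficient alternates between 0.8 and 0.2 by season
rng = np.random.default_rng(4)
true_phi = np.array([0.8, 0.2])
n = 4000
x = np.zeros(n)
for t in range(1, n):
    x[t] = true_phi[t % 2] * x[t - 1] + rng.normal()
phi_hat = fit_par1(x, period=2)
```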

  11. Evaluation of Observation-Fused Regional Air Quality Model Results for Population Air Pollution Exposure Estimation

    PubMed Central

    Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline

    2014-01-01

In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations from 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse-distance-weighting-based method was applied to produce exposure estimates from observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation-fusing technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results and 5) population-averaged fused CMAQ results. The results show that while the O3 (as well as NOx) monitoring networks in the HRRs are dense enough to provide consistent regional average exposure estimates based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10 and PM2.5 components) are usually sparse, and the average concentrations estimated from the inverse-distance-interpolated observations, the raw CMAQ results and the fused CMAQ results can differ significantly. Population-weighted averages should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses. PMID:24747248
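The observation-fusing step can be sketched as follows: compute observation-minus-model differences at the monitor locations, interpolate them onto the grid with inverse distance weighting, and add the interpolated correction to the model field. The concentration field and monitor layout below are synthetic, not CMAQ output.

```python
import numpy as np

def idw(points, values, targets, power=2.0, eps=1e-12):
    # Inverse-distance-weighted interpolation of `values` (at `points`) onto `targets`.
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * values).sum(axis=1) / w.sum(axis=1)

def truth(xy):
    # Hypothetical "true" concentration field, linear in x for simplicity
    return 10.0 + 0.5 * xy[:, 0]

grid = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
monitors = np.array([[0.5, 0.5], [3.5, 1.5], [1.5, 3.5], [4.0, 4.0]])
model_grid = truth(grid) + 2.0                    # model field with a +2 bias
diff = truth(monitors) - (truth(monitors) + 2.0)  # obs minus model at monitors
fused = model_grid + idw(monitors, diff, grid)    # bias-corrected field
```

Because the bias here is spatially constant, the IDW-interpolated correction removes it exactly; real biases vary in space, which is why a dense monitor network matters.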

  12. The impact of reforestation in the northeast United States on precipitation and surface temperature

    NASA Astrophysics Data System (ADS)

    Clark, Allyson

Since the 1920s, forest coverage in the northeastern United States has recovered from disease, clearing for agricultural and urban development, and the demands of the timber industry. Such a dramatic change in ground cover can influence heat and moisture fluxes to the atmosphere, as measured in altered landscapes in Australia, Israel, and the Amazon. In this study, the impacts of recent reforestation in the northeastern United States on summertime precipitation and surface temperature were quantified by comparing average modern values to 1950s values. Weak positive relationships were found between reforestation and average monthly precipitation and daily minimum temperatures, and a weak negative relationship with average daily maximum surface temperature. There was no relationship between reforestation and average surface temperature. Results of the observational analysis were compared with results obtained from reforestation scenarios simulated with the BUGS5 global climate model. The single difference between the model runs was the amount of forest coverage in the northeast United States; three levels of forest were defined - a grassland state with 0% forest coverage, a completely forested state with approximately 100% forest coverage, and a control state with forest coverage closely resembling modern conditions. The three simulations showed larger-magnitude average changes in precipitation and in all temperature variables than the observations, and the difference in magnitude between the model simulations and the observations was much larger than the difference in the amount of reforestation in each case. Additionally, unlike in the observations, a negative relationship was found between average daily minimum temperature and the amount of forest coverage, implying that additional factors influence temperature and precipitation in the real world that are not accounted for in the model.

  13. An Interactive Multi-Model for Consensus on Climate Change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocarev, Ljupco

This project aims to develop a new scheme for forming a consensus among alternative climate models whose projections of the details of climate change diverge widely; the scheme is intended to be more intelligent than simply averaging the model outputs, or averaging with ex post facto weighting factors. The method under development effectively allows models to assimilate data from one another at run time, with weights chosen in an adaptive training phase using 20th-century data, so that the models synchronize with one another as well as with reality. An alternative approach being explored in parallel is the automated combination of equations from different models in an expert-system-like framework.

  14. Quantification of errors induced by temporal resolution on Lagrangian particles in an eddy-resolving model

    NASA Astrophysics Data System (ADS)

    Qin, Xuerong; van Sebille, Erik; Sen Gupta, Alexander

    2014-04-01

    Lagrangian particle tracking within ocean models is an important tool for the examination of ocean circulation, ventilation timescales and connectivity and is increasingly being used to understand ocean biogeochemistry. Lagrangian trajectories are obtained by advecting particles within velocity fields derived from hydrodynamic ocean models. For studies of ocean flows on scales ranging from mesoscale up to basin scales, the temporal resolution of the velocity fields should ideally not be more than a few days to capture the high frequency variability that is inherent in mesoscale features. However, in reality, the model output is often archived at much lower temporal resolutions. Here, we quantify the differences in the Lagrangian particle trajectories embedded in velocity fields of varying temporal resolution. Particles are advected from 3-day to 30-day averaged fields in a high-resolution global ocean circulation model. We also investigate whether adding lateral diffusion to the particle movement can compensate for the reduced temporal resolution. Trajectory errors reveal the expected degradation of accuracy in the trajectory positions when decreasing the temporal resolution of the velocity field. Divergence timescales associated with averaging velocity fields up to 30 days are faster than the intrinsic dispersion of the velocity fields but slower than the dispersion caused by the interannual variability of the velocity fields. In experiments focusing on the connectivity along major currents, including western boundary currents, the volume transport carried between two strategically placed sections tends to increase with increased temporal averaging. Simultaneously, the average travel times tend to decrease. Based on these two bulk measured diagnostics, Lagrangian experiments that use temporal averaging of up to nine days show no significant degradation in the flow characteristics for a set of six currents investigated in more detail. 
The addition of random-walk-style diffusion does not mitigate the errors introduced by temporal averaging for large-scale open ocean Lagrangian simulations.
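The random-walk diffusion mentioned above is typically implemented as an extra stochastic displacement per advection step, dx = u dt + sqrt(2 K_h dt) N(0, 1). A minimal forward-Euler sketch in an idealised solid-body rotation (not the eddy-resolving model output used in the study):

```python
import numpy as np

def advect(p0, velocity, dt, n_steps, kh=0.0, rng=None):
    # Forward-Euler particle advection; optional random-walk diffusion step
    # with horizontal diffusivity Kh: dx = u*dt + sqrt(2*Kh*dt) * N(0, 1).
    rng = rng if rng is not None else np.random.default_rng(0)
    pos = np.array(p0, dtype=float)
    traj = [pos.copy()]
    for _ in range(n_steps):
        pos = pos + velocity(pos) * dt
        if kh > 0.0:
            pos = pos + np.sqrt(2.0 * kh * dt) * rng.standard_normal(pos.shape)
        traj.append(pos.copy())
    return np.array(traj)

def solid_body(p):
    # Idealised flow: rotation about the origin, u = (-y, x)
    return np.array([-p[1], p[0]])

traj = advect([1.0, 0.0], solid_body, dt=0.01, n_steps=628)            # ~one period
traj_diff = advect([1.0, 0.0], solid_body, dt=0.01, n_steps=628, kh=1e-4)
```

Even without diffusion the forward-Euler particle slowly spirals outward (radius grows by a factor sqrt(1 + dt^2) per step), a reminder that numerical scheme and temporal resolution both contribute to trajectory error.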

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo; Craig, Tim

Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between the LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics.
Conclusions: The authors demonstrated that the KNN and MLR weight prediction methodologies perform comparably to the LR model and can produce clinical-quality treatment plans by simultaneously predicting multiple weights that capture trade-offs associated with sparing multiple OARs.

  16. Combining NMR spectral and structural data to form models of polychlorinated dibenzodioxins, dibenzofurans, and biphenyls binding to the AhR

    NASA Astrophysics Data System (ADS)

    Beger, Richard D.; Buzatu, Dan A.; Wilkes, Jon G.

    2002-10-01

A three-dimensional quantitative spectrometric data-activity relationship (3D-QSDAR) modeling technique which uses NMR spectral and structural information combined in a 3D-connectivity matrix has been developed. The 3D-connectivity matrix was built by displaying all possible assigned carbon NMR chemical shifts, carbon-to-carbon connections, and distances between the carbons. Two-dimensional 13C-13C COSY and 2D slices from the distance dimension of the 3D-connectivity matrix were used to produce a relationship among the 2D spectral patterns for polychlorinated dibenzofurans, dibenzodioxins, and biphenyls (PCDFs, PCDDs, and PCBs, respectively) binding to the aryl hydrocarbon receptor (AhR). We refer to this technique as comparative structural connectivity spectral analysis (CoSCoSA) modeling. All CoSCoSA models were developed using forward multiple linear regression analysis of the predicted 13C NMR structure-connectivity spectral bins. A CoSCoSA model for 26 PCDFs had an explained variance (r2) of 0.93 and an average leave-four-out cross-validated variance (q4^2) of 0.89. A CoSCoSA model for 14 PCDDs produced an r2 of 0.90 and an average leave-two-out cross-validated variance (q2^2) of 0.79. One CoSCoSA model for 12 PCBs gave an r2 of 0.91 and an average q2^2 of 0.80. Another CoSCoSA model for all 52 compounds had an r2 of 0.85 and an average q4^2 of 0.52. Major benefits of CoSCoSA modeling include ease of development, since the technique does not use molecular docking routines.

  17. Mantle structure and composition to 800-km depth beneath southern Africa and surrounding oceans from broadband body waves

    NASA Astrophysics Data System (ADS)

    Simon, R. E.; Wright, C.; Kwadiba, M. T. O.; Kgaswane, E. M.

    2003-12-01

Average one-dimensional P and S wavespeed models from the surface to depths of 800 km were derived for the southern African region using travel times and waveforms from earthquakes recorded at stations of the Kaapvaal and South African seismic networks. The Herglotz-Wiechert method combined with ray tracing was used to derive a preliminary P wavespeed model, followed by refinements using phase-weighted stacking and synthetic seismograms to yield the final model. Travel times combined with ray tracing were used to derive the S wavespeed model, which was also refined using phase-weighted stacking and synthetic seismograms. The S model contains a high wavespeed upper mantle lid overlying a low wavespeed zone (LWZ) at around 210- to ~345-km depth that is not observed in the P wavespeed model. The 410-km discontinuity shows similar characteristics to that in other continental regions, but occurs slightly deeper, at 420 km. Depletion of iron and/or enrichment in aluminium relative to other regions is the preferred explanation, since the P wavespeeds throughout the transition zone are slightly higher than average. The average S wavespeed structure beneath southern Africa within and below the transition zone is similar to that of the IASP91 model. There is no evidence for a discontinuity at 520-km depth. The 660-km discontinuity also appears to be slightly deeper than average (668 km), although the estimated thickness of the transition zone is 248 km, similar to the global average of 241 km. The small size of the 660-km discontinuity for P waves, compared with many other regions, suggests that interpretation of the discontinuity as the transformation of spinel to perovskite and magnesiowüstite may require modification. Alternative explanations include the presence of garnetite-rich material or ilmenite-forming phase transformations above the 660-km discontinuity, and the garnet-perovskite transformation as the discontinuity.

  18. Physical Models of Layered Polar Firn Brightness Temperatures from 0.5 to 2 GHz

    NASA Technical Reports Server (NTRS)

    Tan, Shurun; Aksoy, Mustafa; Brogioni, Marco; Macelloni, Giovanni; Durand, Michael; Jezek, Kenneth C.; Wang, Tian-Lin; Tsang, Leung; Johnson, Joel T.; Drinkwater, Mark R.; hide

    2015-01-01

    We investigate physical effects influencing 0.5-2 GHz brightness temperatures of layered polar firn to support the Ultra Wide Band Software Defined Radiometer (UWBRAD) experiment to be conducted in Greenland and in Antarctica. We find that because ice particle grain sizes are very small compared to the 0.5-2 GHz wavelengths, volume scattering effects are small. Variations in firn density over cm- to m-length scales, however, cause significant effects. Both incoherent and coherent models are used to examine these effects. Incoherent models include a 'cloud model' that neglects any reflections internal to the ice sheet, and the DMRT-ML and MEMLS radiative transfer codes that are publicly available. The coherent model is based on the layered medium implementation of the fluctuation dissipation theorem for thermal microwave radiation from a medium having a nonuniform temperature. Density profiles are modeled using a stochastic approach, and model predictions are averaged over a large number of realizations to take into account an averaging over the radiometer footprint. Density profiles are described by combining a smooth average density profile with a spatially correlated random process to model density fluctuations. It is shown that coherent model results after ensemble averaging depend on the correlation lengths of the vertical density fluctuations. If the correlation length is moderate or long compared with the wavelength (approximately 0.6x longer or greater for Gaussian correlation function without regard for layer thinning due to compaction), coherent and incoherent model results are similar (within approximately 1 K). However, when the correlation length is short compared to the wavelength, coherent model results are significantly different from the incoherent model by several tens of kelvins. For a 10-cm correlation length, the differences are significant between 0.5 and 1.1 GHz, and less for 1.1-2 GHz. 
Model results are shown to be able to match the v-pol SMOS data closely and predict the h-pol data for small observation angles.

  19. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  20. The abundances of hydrogen, helium, oxygen, and iron accelerated in large solar particle events

    NASA Technical Reports Server (NTRS)

    Mazur, J. E.; Mason, G. M.; Klecker, B.; Mcguire, R. E.

    1993-01-01

    Energy spectra measured in 10 large flares with the University of Maryland/Max-Planck-Institut sensors on ISEE I and Goddard Space Flight Center sensors on IMP 8 allowed us to determine the average H, He, O, and Fe abundances as functions of energy in the range of about 0.3-80 MeV/nucleon. Model fits to the spectra of individual events using the predictions of a steady state stochastic acceleration model with rigidity-dependent diffusion provided a means of interpolating small portions of the energy spectra not measured with the instrumentation. Particles with larger mass-to-charge ratios were relatively less abundant at higher energies in the flare-averaged composition. The Fe/O enhancement at low SEP energies was less than the Fe/O ratios observed in He-3-rich flares. Unlike the SEP composition averaged above 5 MeV/nucleon, the average SEP abundances above 0.3 MeV/nucleon were similar to the average solar wind.

  1. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
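    The Richardson-extrapolation/GCI procedure the abstract refers to can be sketched as follows; the three grid solutions, refinement ratio, and safety factor below are illustrative placeholders, not values from the study:

    ```python
    # Richardson extrapolation and grid convergence index (GCI) for a
    # time-averaged quantity computed on three successively refined grids.
    # f3 = coarsest grid, f1 = finest grid; constant refinement ratio r.
    import math

    def richardson_gci(f1, f2, f3, r=2.0, safety=1.25):
        """Return observed order p, extrapolated value, and fine-grid GCI."""
        p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)  # observed order
        f_exact = f1 + (f1 - f2) / (r**p - 1)   # Richardson-extrapolated value
        e21 = abs((f2 - f1) / f1)               # relative error of fine pair
        gci = safety * e21 / (r**p - 1)         # uncertainty estimate
        return p, f_exact, gci

    # Hypothetical time-averaged solid fractions on coarse/medium/fine grids:
    p, f_exact, gci = richardson_gci(0.460, 0.475, 0.520)
    ```

    The GCI expresses a banded uncertainty on the fine-grid value; when the observed order p departs strongly from the scheme's formal order, the grids are outside the asymptotic range and the estimate should not be trusted, which is the coarse-grid situation the abstract's alternative method targets.
    
    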

  2. Planning and Management of Faculty Resources. AIR Forum 1981 Paper.

    ERIC Educational Resources Information Center

    Montgomery, James R.; And Others

    A computerized faculty allocation and reallocation model is presented to aid the decision maker in evaluating the outcomes of various strategies. A unique goal can be computed for each department based on the average index of the institution, the average of the college, the preceding average of the department, and a goal established by management…

  3. A benefit-cost analysis of ten tree species in Modesto, California, U.S.A

    Treesearch

    E.G. McPherson

    2003-01-01

    Tree work records for ten species were analyzed to estimate average annual management costs by dbh class for six activity areas. Average annual benefits were calculated by dbh class for each species with computer modeling. Average annual net benefits per tree were greatest for London plane (Platanus acerifolia) ($178.57), hackberry (...

  4. Pricing and Enrollment Planning.

    ERIC Educational Resources Information Center

    Martin, Robert E.

    2003-01-01

    Presents a management model for pricing and enrollment planning that yields optimal pricing decisions relative to student fees and average scholarship, the institution's financial ability to support students, and an average cost-pricing rule. (SLD)

  5. Predicting top-of-atmosphere radiance for arbitrary viewing geometries from the visible to thermal infrared: generalization to arbitrary average scene temperatures

    NASA Astrophysics Data System (ADS)

    Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.

    2010-08-01

    In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code, to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run-time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method described requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover with sufficient temperature resolution the full range of average scene temperatures that might be of interest. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event, their use would be too cumbersome for use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.

  6. Comparing CT perfusion with oxygen partial pressure in a rabbit VX2 soft-tissue tumor model.

    PubMed

    Sun, Chang-Jin; Li, Chao; Lv, Hai-Bo; Zhao, Cong; Yu, Jin-Ming; Wang, Guang-Hui; Luo, Yun-Xiu; Li, Yan; Xiao, Mingyong; Yin, Jun; Lang, Jin-Yi

    2014-01-01

    The aim of this study was to evaluate the oxygen partial pressure in the rabbit VX2 tumor model using 64-slice perfusion CT and to compare the results with those obtained using the oxygen microelectrode method. Perfusion CT was performed on 45 successfully constructed rabbit models of a VX2 brain tumor. The perfusion values of the brain tumor region of interest, the blood volume (BV), the time to peak (TTP) and the peak enhancement intensity (PEI) were measured. The results were compared with the partial pressure of oxygen (PO2) of the same region of interest obtained using the oxygen microelectrode method. The perfusion values ranged from 1.3 to 127.0 ml/min/ml (average, 21.1 ± 26.7 ml/min/ml); BV ranged from 1.2 to 53.5 ml/100 g (average, 22.2 ± 13.7 ml/100 g); PEI ranged from 8.7 to 124.6 HU (average, 43.5 ± 28.7 HU); and TTP ranged from 8.2 to 62.3 s (average, 38.8 ± 14.8 s). The PO2 in the corresponding region ranged from 0.14 to 47 mmHg (average, 16 ± 14.8 mmHg). The perfusion CT measures correlated positively with tumor PO2 and can be used to evaluate tumor hypoxia in clinical practice.

  7. A geometrical optics approach for modeling aperture averaging in free space optical communication applications

    NASA Astrophysics Data System (ADS)

    Yuksel, Heba; Davis, Christopher C.

    2006-09-01

    Intensity fluctuations at the receiver in free space optical (FSO) communication links lead to a received power variance that depends on the size of the receiver aperture. Increasing the size of the receiver aperture reduces the power variance. This effect of the receiver size on power variance is called aperture averaging. If there were no aperture size limitation at the receiver, then there would be no turbulence-induced scintillation. In practice, there is always a tradeoff between aperture size, transceiver weight, and potential transceiver agility for pointing, acquisition and tracking (PAT) of FSO communication links. We have developed a geometrical simulation model to predict the aperture averaging factor. This model is used to simulate the aperture averaging effect at given range by using a large number of rays, Gaussian as well as uniformly distributed, propagating through simulated turbulence into a circular receiver of varying aperture size. Turbulence is simulated by filling the propagation path with spherical bubbles of varying sizes and refractive index discontinuities statistically distributed according to various models. For each statistical representation of the atmosphere, the three-dimensional trajectory of each ray is analyzed using geometrical optics. These Monte Carlo techniques have proved capable of assessing the aperture averaging effect, in particular, the quantitative expected reduction in intensity fluctuations with increasing aperture diameter. In addition, beam wander results have demonstrated the range-cubed dependence of mean-squared beam wander. An effective turbulence parameter can also be determined by correlating beam wander behavior with the path length.

  8. A model-averaging method for assessing groundwater conceptual model uncertainty.

    PubMed

    Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M

    2010-01-01

    This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
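    The information-criterion variant of model averaging mentioned above can be sketched with Akaike-type weights; the IC values and head predictions below are hypothetical, not taken from the DVRFS models:

    ```python
    # Convert per-model information-criterion (IC) values into model weights,
    # w_k proportional to exp(-delta_IC_k / 2), then average a prediction.
    import math

    def ic_weights(ics):
        """Akaike-type model weights from a list of IC values."""
        deltas = [ic - min(ics) for ic in ics]
        raw = [math.exp(-0.5 * d) for d in deltas]
        total = sum(raw)
        return [r / total for r in raw]

    ics = [100.0, 102.0, 106.0]   # hypothetical IC values for three models
    preds = [3.1, 2.8, 3.6]       # each model's hydraulic-head prediction (m)
    w = ic_weights(ics)
    avg_pred = sum(wi * pi for wi, pi in zip(w, preds))  # model-averaged head
    ```

    The model-averaged prediction is dominated by the lowest-IC model but retains contributions from plausible alternatives, which is how conceptual-model uncertainty propagates into the predictive variance discussed in the abstract.
    
    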

  9. Metainference: A Bayesian inference method for heterogeneous systems

    PubMed Central

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300

  10. Impact of Bias-Correction Type and Conditional Training on Bayesian Model Averaging over the Northeast United States

    Treesearch

    Michael J. Erickson; Brian A. Colle; Joseph J. Charney

    2012-01-01

    The performance of a multimodel ensemble over the northeast United States is evaluated before and after applying bias correction and Bayesian model averaging (BMA). The 13-member Stony Brook University (SBU) ensemble at 0000 UTC is combined with the 21-member National Centers for Environmental Prediction (NCEP) Short-Range Ensemble Forecast (SREF) system at 2100 UTC....

  11. Predicting Student Grade Point Average at a Community College from Scholastic Aptitude Tests and from Measures Representing Three Constructs in Vroom's Expectancy Theory Model of Motivation.

    ERIC Educational Resources Information Center

    Malloch, Douglas C.; Michael, William B.

    1981-01-01

    This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectance theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…

  12. Implementation of ICARE learning model using visualization animation on biotechnology course

    NASA Astrophysics Data System (ADS)

    Hidayat, Habibi

    2017-12-01

    ICARE is a learning model that ensures students actively participate in the learning process, here using animated visualization media. ICARE comprises five key elements of the learning experience for children and adults: introduction, connection, application, reflection, and extension. The ICARE system ensures that participants have the opportunity to apply what they have learned, so that the message delivered by the lecturer can be understood and retained by students for a long time. The ICARE learning model with animated visualization was therefore considered capable of improving learning outcomes and interest in learning in the Biotechnology course. Applying it motivated students to participate in the learning process and increased their learning outcomes: the average midterm score of 70.98 (75%) rose to an average final-test score of 71.57 (68.63%). Student interest also increased across the cycles of student activity: the first cycle yielded an average score of 33.5 (adequate category), while the second and third cycles each yielded an average of 36.5 (good category).

  13. Local and average structure of Mn- and La-substituted BiFeO3

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Selbach, Sverre M.

    2017-06-01

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions.

  14. Quantifying the association between obesity, automobile travel, and caloric intake.

    PubMed

    Behzad, Banafsheh; King, Douglas M; Jacobson, Sheldon H

    2013-02-01

    The objective of this study is to assess the association between average adult body mass index (BMI), automobile travel, and caloric intake in the US in order to predict future trends of adult obesity. Annual BMI data (1984-2010) from the Behavioral Risk Factor Surveillance System (BRFSS), vehicle miles traveled data (1970-2009) from the Federal Highway Administration, licensed drivers data (1970-2009) from the Federal Highway Administration, and adult average daily caloric intake data (1970-2009) from the US Department of Agriculture were collected. A statistical model is proposed to capture multicollinearity across the independent variables. The proposed statistical model provides an estimate of changes in the average adult BMI associated with changes in automobile travel and caloric intake. According to this model, reducing daily automobile travel by one mile per driver would be associated with a 0.21 kg/m² reduction in the national average BMI after six years. Reducing daily caloric intake by 100 calories per person would be associated with a 0.16 kg/m² reduction in the national average BMI after three years. Making small changes in travel or diet choices may lead to comparable obesity interventions, implying that travel-based interventions may be as effective as dietary interventions. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Econophysics of adaptive power markets: When a market does not dampen fluctuations but amplifies them

    NASA Astrophysics Data System (ADS)

    Krause, Sebastian M.; Börries, Stefan; Bornholdt, Stefan

    2015-07-01

    The average economic agent is often used to model the dynamics of simple markets, based on the assumption that the dynamics of a system of many agents can be averaged over in time and space. A popular idea that is based on this seemingly intuitive notion is to dampen electric power fluctuations from fluctuating sources (as, e.g., wind or solar) via a market mechanism, namely by variable power prices that adapt demand to supply. The standard model of an average economic agent predicts that fluctuations are reduced by such an adaptive pricing mechanism. However, the underlying assumption that the actions of all agents average out on the time axis is not always true in a market of many agents. We numerically study an econophysics agent model of an adaptive power market that does not assume averaging a priori. We find that when agents are exposed to source noise via correlated price fluctuations (as adaptive pricing schemes suggest), the market may amplify those fluctuations. In particular, small price changes may translate to large load fluctuations through catastrophic consumer synchronization. As a result, an adaptive power market may cause the opposite effect than intended: Power demand fluctuations are not dampened but amplified instead.

  16. Local and average structure of Mn- and La-substituted BiFeO3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Bo; Selbach, Sverre M.

    2017-06-01

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions.

  17. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    PubMed Central

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on the Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing it with the HS model and two of its modifications (the Samani model and the Chen model) against data measured at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas of China where only air temperature data are available. PMID:24605046
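    For reference, the base Hargreaves-Samani form that such temperature-based models build on estimates radiation from the diurnal temperature range; the coefficient and inputs below are typical textbook values, not the paper's calibrated ones:

    ```python
    # Base Hargreaves-Samani (HS) estimate of global solar radiation Rs from
    # the diurnal temperature range and extraterrestrial radiation Ra.
    import math

    def hargreaves_samani(t_max, t_min, ra, krs=0.16):
        """Rs = krs * sqrt(Tmax - Tmin) * Ra.

        krs is an empirical coefficient (~0.16 inland, ~0.19 coastal);
        ra is extraterrestrial radiation in MJ m-2 day-1.
        """
        return krs * math.sqrt(t_max - t_min) * ra

    # Illustrative monthly means: Tmax 28 C, Tmin 16 C, Ra 35 MJ m-2 day-1
    rs = hargreaves_samani(t_max=28.0, t_min=16.0, ra=35.0)
    ```

    Modified models of the kind the abstract compares typically recalibrate krs regionally or add terms to this expression, which is why only temperature records and site latitude (for Ra) are required as inputs.
    
    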

  18. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    PubMed

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.

  19. Sources of secondary organic aerosols over North China Plain in winter

    NASA Astrophysics Data System (ADS)

    Xing, L.; Li, G.; Tie, X.; Junji, C.; Long, X.

    2017-12-01

    Organic aerosol (OA) concentrations are simulated over the North China Plain (NCP) from 10 to 26 January 2014 using the Weather Research and Forecasting model coupled to chemistry (WRF-CHEM), with the goal of examining the impact of heterogeneous HONO sources on atmospheric oxidation capacity and, consequently, on wintertime SOA formation through different pathways. Generally, the model reproduced the spatial and temporal distributions of PM2.5, SO2, NO2, and O3 concentrations well. Heterogeneous HONO formation contributed a major part of atmospheric HONO concentrations in Beijing. The heterogeneous HONO sources increased the daily maximum OH concentrations in Beijing by 260% on average, which enhanced the atmospheric oxidation capacity and consequently increased SOA concentrations in Beijing by 80% on average. Under the severe haze pollution of 16 January 2014, the regional average HONO concentration over the NCP was 0.86 ppb, which increased SOA concentrations by 68% on average. The average mass fractions of ASOA (SOA from oxidation of anthropogenic VOCs), BSOA (SOA from oxidation of biogenic VOCs), PSOA (SOA from oxidation of evaporated POA), and GSOA (SOA from irreversible uptake of glyoxal and methylglyoxal) during the simulation period over the NCP were 24%, 5%, 26% and 45%, respectively. GSOA contributed the most to total SOA mass over the NCP in winter. A model sensitivity simulation revealed that wintertime GSOA was mainly from primary residential sources, with the regional average GSOA from these sources constituting 87% of the total GSOA mass.

  20. Reliability ensemble averaging of 21st century projections of terrestrial net primary productivity reduces global and regional uncertainties

    NASA Astrophysics Data System (ADS)

    Exbrayat, Jean-François; Bloom, A. Anthony; Falloon, Pete; Ito, Akihiko; Smallman, T. Luke; Williams, Mathew

    2018-02-01

    Multi-model averaging techniques provide opportunities to extract additional information from large ensembles of simulations. In particular, present-day model skill can be used to evaluate their potential performance in future climate simulations. Multi-model averaging methods have been used extensively in climate and hydrological sciences, but they have not been used to constrain projected plant productivity responses to climate change, which is a major uncertainty in Earth system modelling. Here, we use three global observationally orientated estimates of current net primary productivity (NPP) to perform a reliability ensemble averaging (REA) method using 30 global simulations of the 21st century change in NPP based on the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) business as usual emissions scenario. We find that the three REA methods support an increase in global NPP by the end of the 21st century (2095-2099) compared to 2001-2005, which is 2-3 % stronger than the ensemble ISIMIP mean value of 24.2 Pg C y-1. Using REA also leads to a 45-68 % reduction in the global uncertainty of 21st century NPP projection, which strengthens confidence in the resilience of the CO2 fertilization effect to climate change. This reduction in uncertainty is especially clear for boreal ecosystems although it may be an artefact due to the lack of representation of nutrient limitations on NPP in most models. Conversely, the large uncertainty that remains on the sign of the response of NPP in semi-arid regions points to the need for better observations and model development in these regions.
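    A minimal sketch of the skill-weighting idea behind reliability ensemble averaging, assuming weights inversely proportional to each model's present-day bias against observations (the full REA method also includes a model-convergence criterion); all numbers below are illustrative:

    ```python
    # Weight each model's projected NPP change by its present-day skill,
    # w_k proportional to 1 / |bias_k|, and form the weighted ensemble mean.
    def rea_average(changes, biases, eps=1e-6):
        """Skill-weighted mean of projected changes."""
        w = [1.0 / max(abs(b), eps) for b in biases]
        total = sum(w)
        return sum(wi * ci for wi, ci in zip(w, changes)) / total

    changes = [2.0, 6.0, 10.0]  # hypothetical projected NPP changes (Pg C/yr)
    biases = [1.0, 5.0, 10.0]   # hypothetical present-day NPP biases vs obs
    avg_change = rea_average(changes, biases)  # pulled toward skilful models
    ```

    Because low-bias models dominate the weighted mean, the ensemble spread around it shrinks relative to the unweighted mean, which is the mechanism behind the 45-68% uncertainty reduction the abstract reports.
    
    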
