Sample records for moving average analysis

  1. The Effect of Non-Normal Distributions on the Integrated Moving Average Model of Time-Series Analysis.

    ERIC Educational Resources Information Center

    Doerann-George, Judith

    The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations: normal,…

  2. Quantifying rapid changes in cardiovascular state with a moving ensemble average.

    PubMed

    Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T

    2018-04-01

    MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinetogram) on 3 separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.
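
    A minimal sketch of the contrast drawn above, not MEAP's implementation: a fixed-window ensemble average collapses a block of R-peak-aligned beats into one waveform, while a moving ensemble average yields one smoothed waveform per beat by averaging over a sliding window of neighboring beats. The array shapes and window length are illustrative assumptions.

    ```python
    import numpy as np

    def fixed_ensemble_average(beats):
        """Average all beats in a fixed window into a single representative waveform."""
        return beats.mean(axis=0)

    def moving_ensemble_average(beats, window=15):
        """Return one averaged waveform per beat, using a centered sliding window
        of neighboring beats, so beat-to-beat indices can be tracked continuously."""
        n = len(beats)
        half = window // 2
        out = np.empty_like(beats, dtype=float)
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            out[i] = beats[lo:hi].mean(axis=0)
        return out

    beats = np.random.randn(200, 500)          # 200 synthetic beats, 500 samples per beat
    single = fixed_ensemble_average(beats)     # one waveform for the whole block
    per_beat = moving_ensemble_average(beats)  # one smoothed waveform per beat
    ```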

  3. A comparison of several techniques for imputing tree level data

    Treesearch

    David Gartner

    2002-01-01

    As Forest Inventory and Analysis (FIA) changes from periodic surveys to the multipanel annual survey, new analytical methods become available. The current official statistic is the moving average. One alternative is an updated moving average. Several methods of updating plot per acre volume have been discussed previously. However, these methods may not be appropriate...

  4. Correcting for day of the week and public holiday effects: improving a national daily syndromic surveillance service for detecting public health threats.

    PubMed

    Buckingham-Jeffery, Elizabeth; Morbey, Roger; House, Thomas; Elliot, Alex J; Harcourt, Sally; Smith, Gillian E

    2017-05-19

    As service provision and patient behaviour vary by day, healthcare data used for public health surveillance can exhibit large day of the week effects. These regular effects are further complicated by the impact of public holidays. Real-time syndromic surveillance requires the daily analysis of a range of healthcare data sources, including family doctor consultations (called general practitioners, or GPs, in the UK). Failure to adjust for such reporting biases during analysis of syndromic GP surveillance data could lead to misinterpretations including false alarms or delays in the detection of outbreaks. The simplest smoothing method to remove a day of the week effect from daily time series data is a 7-day moving average. Public Health England developed the working day moving average in an attempt also to remove public holiday effects from daily GP data. However, neither of these methods adequately accounts for the combination of day of the week and public holiday effects. To address this, the extended working day moving average was developed. This is a further data-driven method for adding a smooth trend curve to a time series graph of daily healthcare data that aims to take both public holiday and day of the week effects into account. It is based on the assumption that the number of people seeking healthcare services is a combination of illness levels/severity and the ability or desire of patients to seek healthcare each day. The extended working day moving average was compared to the 7-day and working day moving averages through application to data from two syndromic indicators from the GP in-hours syndromic surveillance system managed by Public Health England. The extended working day moving average successfully smoothed the syndromic healthcare data by taking into account the combined day of the week and public holiday effects. In comparison, the 7-day and working day moving averages were unable to account for all these effects, which led to misleading smoothing curves. The results from this study make it possible to identify trends and unusual activity in syndromic surveillance data from GP services in real time, independently of the effects caused by day of the week and public holidays, thereby improving the public health action resulting from the analysis of these data.
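
    A minimal sketch of the baseline smoothers discussed above: a 7-day moving average of daily counts, and a crude working-day variant that drops weekends and a hypothetical holiday list before averaging. This is not Public Health England's extended working day moving average, whose holiday adjustment is more involved.

    ```python
    import numpy as np
    import pandas as pd

    # Synthetic daily GP consultation counts with a weekly pattern
    idx = pd.date_range("2024-01-01", periods=120, freq="D")
    weekly_effect = np.where(idx.dayofweek < 5, 1.0, 0.4)   # fewer weekend consultations
    counts = pd.Series(100 * weekly_effect + np.random.poisson(10, len(idx)), index=idx)

    # 7-day (trailing) moving average: removes the day-of-the-week effect
    ma7 = counts.rolling(window=7).mean()

    # Crude working-day variant: average the last 7 working days only;
    # the holiday list here is hypothetical
    holidays = pd.to_datetime(["2024-01-01", "2024-03-29"])
    working = counts[(idx.dayofweek < 5) & (~idx.isin(holidays))]
    working_day_ma = working.rolling(window=7).mean().reindex(idx).ffill()
    ```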

  5. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    PubMed

    Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah

    2016-01-01

    The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
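
    The Efficacy Ratio and AMA' are defined in the paper itself and are not reproduced here; as a hedged illustration of the same idea, the sketch below implements a generic volatility-adaptive moving average (a Kaufman-style efficiency-ratio weighting) whose smoothing speeds up in trending markets and slows down in ranging markets. All parameter values and the price path are illustrative.

    ```python
    import numpy as np

    def adaptive_ma(price, n=10, fast=2, slow=30):
        """Kaufman-style adaptive moving average: the smoothing constant depends on
        an efficiency ratio (net move / total move) over the last n observations."""
        price = np.asarray(price, dtype=float)
        out = np.full(price.shape, np.nan)
        out[n] = price[n]
        fast_sc, slow_sc = 2 / (fast + 1), 2 / (slow + 1)
        for t in range(n + 1, len(price)):
            change = abs(price[t] - price[t - n])
            noise = np.abs(np.diff(price[t - n:t + 1])).sum()
            er = change / noise if noise > 0 else 0.0        # ~1 in clean trends, ~0 in ranges
            sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
            out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
        return out

    usd_php = 56 + 0.05 * np.cumsum(np.random.randn(1000))   # synthetic exchange-rate path
    ama = adaptive_ma(usd_php)
    ```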

  6. Mechanistic approach to generalized technical analysis of share prices and stock market indices

    NASA Astrophysics Data System (ADS)

    Ausloos, M.; Ivanova, K.

    2002-05-01

    Classical technical analysis methods of stock evolution are recalled, i.e. the notion of moving averages and momentum indicators. The moving averages lead to the definition of death and gold crosses, and of resistance and support lines. Momentum indicators lead the price trend, thus giving signals before the price trend turns over. The classical technical analysis investment strategy is thereby sketched. Next, we present a generalization of these tricks drawing on physical principles, i.e. taking into account not only the price of a stock but also the volume of transactions. The latter becomes a time-dependent generalized mass. The notions of pressure, acceleration and force are deduced. A generalized (kinetic) energy is easily defined. It is understood that the momentum indicators take into account the sign of the fluctuations, while the energy is geared toward the absolute value of the fluctuations. They have different patterns, which are checked by searching for the crossing points of their respective moving averages. The case of IBM evolution over 1990-2000 is used for illustration.
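
    A short sketch of the classical crossover rules recalled above: a gold (golden) cross occurs when a short moving average crosses above a long one, and a death cross when it crosses below. The 50/200-day windows and the synthetic price path are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np
    import pandas as pd

    close = pd.Series(np.cumsum(np.random.randn(500)) + 100)    # synthetic price path
    short_ma = close.rolling(50).mean()
    long_ma = close.rolling(200).mean()

    above = short_ma > long_ma
    gold_cross = above & ~above.shift(1, fill_value=False)      # short MA crosses above long MA
    death_cross = ~above & above.shift(1, fill_value=False)     # short MA crosses below long MA
    support_proxy = long_ma                                     # long MA often read as support/resistance
    ```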

  7. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies

    PubMed Central

    2016-01-01

    The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA′) in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA′. The Efficacy Ratio adjusts the AMA′ to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA′ is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA′ are superior to the passive buy-and-hold strategy. Specifically, AMA′ outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets. PMID:27574972

  8. Kumaraswamy autoregressive moving average models for double bounded environmental data

    NASA Astrophysics Data System (ADS)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.

  9. Work-related accidents among the Iranian population: a time series analysis, 2000–2011

    PubMed Central

    Karimlou, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood

    2015-01-01

    Background Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. Objectives To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. Methods In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box–Jenkins modeling to develop a time series model of the total number of accidents. Results There was an average of 1476 accidents per month (1476·05±458·77, mean±SD). The final ARIMA (p,d,q) (P,D,Q)s model for fitting to data was: ARIMA(1,1,1)×(0,1,1)12 consisting of the first ordering of the autoregressive, moving average and seasonal moving average parameters with a mean absolute percentage error (MAPE) of 20·942. Conclusions The final model showed that time series analysis of ARIMA models was useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 explained the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection. PMID:26119774

  10. Work-related accidents among the Iranian population: a time series analysis, 2000-2011.

    PubMed

    Karimlou, Masoud; Salehi, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood

    2015-01-01

    Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box-Jenkins modeling to develop a time series model of the total number of accidents. There was an average of 1476 accidents per month (1476·05±458·77, mean±SD). The final ARIMA (p,d,q) (P,D,Q)s model for fitting to data was: ARIMA(1,1,1)×(0,1,1)12 consisting of the first ordering of the autoregressive, moving average and seasonal moving average parameters with a mean absolute percentage error (MAPE) of 20·942. The final model showed that time series analysis of ARIMA models was useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 explained the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection.
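
    A hedged sketch of fitting the reported seasonal specification, ARIMA(1,1,1)×(0,1,1)12, with statsmodels; the monthly series below is synthetic (matching only the reported mean and spread), since the ISSO accident counts are not reproduced here.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    idx = pd.date_range("2000-01-01", periods=132, freq="MS")
    monthly_accidents = pd.Series(1476 + 458 * np.random.randn(len(idx)), index=idx)

    model = SARIMAX(monthly_accidents, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12))
    fit = model.fit(disp=False)
    forecast = fit.forecast(steps=12)                               # next year's monthly counts
    mape = np.mean(np.abs(fit.resid / monthly_accidents)) * 100     # rough in-sample MAPE
    ```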

  11. TERMA Framework for Biomedical Signal Analysis: An Economic-Inspired Approach.

    PubMed

    Elgendi, Mohamed

    2016-11-02

    Biomedical signals contain features that represent physiological events, and each of these events has peaks. The analysis of biomedical signals for monitoring or diagnosing diseases requires the detection of these peaks, making event detection a crucial step in biomedical signal processing. Many researchers have difficulty detecting these peaks to investigate, interpret and analyze their corresponding events. To date, there is no generic framework that captures these events in a robust, efficient and consistent manner. A new method referred to for the first time as two event-related moving averages ("TERMA") involves event-related moving averages and detects events in biomedical signals. The TERMA framework is flexible and universal and consists of six independent LEGO building bricks to achieve high accuracy detection of biomedical events. Results recommend that the window sizes for the two moving averages (W1 and W2) have to follow the inequality (8 × W1) ≥ W2 ≥ (2 × W1). Moreover, TERMA is a simple yet efficient event detector that is suitable for wearable devices, point-of-care devices, fitness trackers and smart watches, compared to more complex machine learning solutions.
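
    A simplified sketch of the two-moving-averages idea behind TERMA: a short window W1 tracks the event and a longer window W2 tracks the surrounding cycle, and samples where the short average exceeds the long one form candidate event blocks. The offset/threshold rules and block-to-peak steps of the full framework are omitted; the sampling rate, signal, and window lengths are illustrative and chosen to respect the reported inequality.

    ```python
    import numpy as np

    def terma_candidate_blocks(signal, fs, w1_sec=0.1, w2_sec=0.6):
        """Return a boolean mask of candidate event blocks using two moving averages."""
        w1 = max(1, int(w1_sec * fs))                 # event-scale window (W1)
        w2 = max(2 * w1, int(w2_sec * fs))            # cycle-scale window (W2), here 6*W1
        x = signal.astype(float) ** 2                 # emphasize peaks
        ma_event = np.convolve(x, np.ones(w1) / w1, mode="same")
        ma_cycle = np.convolve(x, np.ones(w2) / w2, mode="same")
        return ma_event > ma_cycle                    # blocks where the event average dominates

    fs = 250                                          # assumed sampling rate (Hz)
    ecg = np.random.randn(10 * fs)                    # stand-in for a filtered ECG segment
    blocks = terma_candidate_blocks(ecg, fs)
    ```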

  12. Alternatives to the Moving Average

    Treesearch

    Paul C. van Deusen

    2001-01-01

    There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...

  13. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

    The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which the fuzzy logic rule is used to determine the strength of trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show that firstly the fuzzy moving average strategy can obtain a more stable rate of return than the moving average strategies. Secondly, the holding-amount series is highly sensitive to the price series. Thirdly, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.

  14. Multifractal detrending moving-average cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2011-07-01

    There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract rational multifractal nature.

  15. Detrending moving average algorithm for multifractals

    NASA Astrophysics Data System (ADS)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

    The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. It is found that the backward MFDMA algorithm also outperforms the multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to analyzing the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
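
    A minimal sketch of the backward (θ=0) detrending moving average step that MFDMA generalizes: detrend the profile of the series with a moving average and measure the root-mean-square residual as a function of the window size; the slope of the log-log fit estimates the Hurst exponent. The multifractal (q-order) extension is not reproduced here, and the input series is synthetic.

    ```python
    import numpy as np

    def dma_fluctuation(x, n):
        """Backward DMA fluctuation F(n): RMS of the profile minus its n-point moving average."""
        y = np.cumsum(x - np.mean(x))                        # profile of the series
        ma = np.convolve(y, np.ones(n) / n, mode="valid")    # backward moving average (theta = 0)
        resid = y[n - 1:] - ma
        return np.sqrt(np.mean(resid ** 2))

    x = np.random.randn(10000)                               # white noise: expect H close to 0.5
    sizes = np.unique(np.logspace(1, 3, 20).astype(int))
    F = [dma_fluctuation(x, n) for n in sizes]
    hurst = np.polyfit(np.log(sizes), np.log(F), 1)[0]       # slope of log F(n) vs log n
    ```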

  16. TERMA Framework for Biomedical Signal Analysis: An Economic-Inspired Approach

    PubMed Central

    Elgendi, Mohamed

    2016-01-01

    Biomedical signals contain features that represent physiological events, and each of these events has peaks. The analysis of biomedical signals for monitoring or diagnosing diseases requires the detection of these peaks, making event detection a crucial step in biomedical signal processing. Many researchers have difficulty detecting these peaks to investigate, interpret and analyze their corresponding events. To date, there is no generic framework that captures these events in a robust, efficient and consistent manner. A new method referred to for the first time as two event-related moving averages (“TERMA”) involves event-related moving averages and detects events in biomedical signals. The TERMA framework is flexible and universal and consists of six independent LEGO building bricks to achieve high accuracy detection of biomedical events. Results recommend that the window sizes for the two moving averages (W1 and W2) have to follow the inequality (8×W1)≥W2≥(2×W1). Moreover, TERMA is a simple yet efficient event detector that is suitable for wearable devices, point-of-care devices, fitness trackers and smart watches, compared to more complex machine learning solutions. PMID:27827852

  17. Queues with Choice via Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Pender, Jamol; Rand, Richard H.; Wesson, Elizabeth

    Delay or queue length information has the potential to influence the decision of a customer to join a queue. Thus, it is imperative for managers of queueing systems to understand how the information that they provide will affect the performance of the system. To this end, we construct and analyze two two-dimensional deterministic fluid models that incorporate customer choice behavior based on delayed queue length information. In the first fluid model, customers join each queue according to a Multinomial Logit Model; however, the queue length information the customer receives is delayed by a constant Δ. We show that the delay can cause oscillations or asynchronous behavior in the model based on the value of Δ. In the second model, customers receive information about the queue length through a moving average of the queue length. Although it has been shown empirically that giving patients moving average information causes oscillations and asynchronous behavior to occur in U.S. hospitals, we analytically and mathematically show for the first time that the moving average fluid model can exhibit oscillations and determine their dependence on the moving average window. Thus, our analysis provides new insight on how operators of service systems should report queue length information to customers and how delayed information can produce unwanted system dynamics.
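
    The paper analyzes deterministic fluid models with delay differential equations; as a hedged toy illustration only, the discrete-time simulation below lets arriving customers choose between two queues via a Multinomial Logit rule applied to a moving average of past queue lengths, the kind of delayed information that can drive the oscillations described above. All rates and the window length are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, window, theta = 2000, 50, 1.0        # horizon, moving-average window, choice sensitivity
    lam, mu = 9.0, 5.0                      # total arrival rate, per-queue service rate
    q = np.zeros((T, 2))                    # queue lengths over time
    history = [np.zeros(2)]

    for t in range(1, T):
        avg = np.mean(history[-window:], axis=0)      # delayed (moving-average) information
        weights = np.exp(-theta * avg)
        p = weights / weights.sum()                   # Multinomial Logit join probabilities
        arrivals = rng.multinomial(rng.poisson(lam), p)
        served = rng.poisson(mu, size=2)
        q[t] = np.maximum(q[t - 1] + arrivals - served, 0)
        history.append(q[t])
    # With a long window, q[:, 0] and q[:, 1] tend to swing out of phase.
    ```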

  18. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.

  19. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.

  20. Comparison of estimators for rolling samples using Forest Inventory and Analysis data

    Treesearch

    Devin S. Johnson; Michael S. Williams; Raymond L. Czaplewski

    2003-01-01

    The performance of three classes of weighted average estimators is studied for an annual inventory design similar to the Forest Inventory and Analysis program of the United States. The first class is based on an ARIMA(0,1,1) time series model. The equal weight, simple moving average is a member of this class. The second class is based on an ARIMA(0,2,2) time series...

  1. An impact analysis of forecasting methods and forecasting parameters on bullwhip effect

    NASA Astrophysics Data System (ADS)

    Silitonga, R. Y. H.; Jelly, N.

    2018-04-01

    Bullwhip effect is an increase of the variance of demand fluctuation from downstream to upstream of a supply chain. Forecasting methods and forecasting parameters have been recognized as factors that affect the bullwhip phenomenon. To study these factors, we can develop simulations. Previous studies have simulated the bullwhip effect in several ways, such as mathematical equation modelling, information control modelling, and computer programs. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in bullwhip effect ratio due to differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were moving average period, smoothing parameter, signalling factor, and safety stock factor. The results showed that decreasing the moving average period, increasing the smoothing parameter, and increasing the signalling factor can create a bigger bullwhip effect ratio. Meanwhile, the safety stock factor had no impact on the bullwhip effect.
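
    A small sketch of the kind of experiment described above (not the Bullwhip Explorer spreadsheet): a retailer forecasts demand with a moving average and places order-up-to orders, and the ratio of order variance to demand variance serves as the bullwhip measure. Shortening the moving average period should enlarge the ratio, in line with the reported finding. All parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    demand = 100 + 10 * rng.standard_normal(1000)

    def bullwhip_ratio(demand, ma_period=4, lead_time=2):
        """Variance of orders over variance of demand under an order-up-to policy
        driven by a moving-average forecast (a simplified single-echelon model)."""
        levels, orders = [], []
        for t in range(ma_period, len(demand)):
            forecast = demand[t - ma_period:t].mean()
            levels.append(lead_time * forecast)                   # order-up-to level
            if len(levels) > 1:
                orders.append(demand[t - 1] + levels[-1] - levels[-2])
        return np.var(orders) / np.var(demand)

    for p in (2, 4, 8):
        print(p, round(bullwhip_ratio(demand, ma_period=p), 2))   # shorter MA -> larger ratio
    ```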

  2. Learning curves for single incision and conventional laparoscopic right hemicolectomy: a multidimensional analysis.

    PubMed

    Park, Yoonah; Yong, Yuen Geng; Yun, Seong Hyeon; Jung, Kyung Uk; Huh, Jung Wook; Cho, Yong Beom; Kim, Hee Cheol; Lee, Woo Yong; Chun, Ho-Kyung

    2015-05-01

    This study aimed to compare the learning curves and early postoperative outcomes for conventional laparoscopic (CL) and single incision laparoscopic (SIL) right hemicolectomy (RHC). This retrospective study included the initial 35 cases in each group. Learning curves were evaluated by the moving average of operative time, the mean operative time of every five consecutive cases, and cumulative sum (CUSUM) analysis. The learning phase was considered overcome when the moving average of operative times reached a plateau, and when the mean operative time of every five consecutive cases reached a low point and subsequently did not vary by more than 30 minutes. Six patients with missing data in the CL RHC group were excluded from the analyses. According to the mean operative time of every five consecutive cases, the learning phase of SIL and CL RHC was completed between 26 and 30 cases, and 16 and 20 cases, respectively. Moving average analysis revealed that approximately 31 (SIL) and 25 (CL) cases were needed to complete the learning phase, respectively. CUSUM analysis demonstrated that 10 (SIL) and two (CL) cases were required to reach a steady state of complication-free performance, respectively. The postoperative complication rate was higher in the SIL than in the CL group, but the difference was not statistically significant (17.1% vs. 3.4%). The learning phase of SIL RHC is longer than that of CL RHC. Early oncological outcomes of both techniques were comparable. However, SIL RHC had a statistically insignificant higher complication rate than CL RHC during the learning phase.
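
    A brief sketch of the two learning-curve summaries used above, on synthetic data: a moving average of operative time (its plateau marks the end of the learning phase) and a CUSUM of complications (cumulative observed minus expected events, whose upward drift flags excess complications). The case counts, times and rates are invented.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    cases = np.arange(35)
    op_time = 180 - 40 * (1 - np.exp(-cases / 10)) + rng.normal(0, 15, 35)   # minutes per case

    moving_avg = pd.Series(op_time).rolling(window=5).mean()   # plateau marks the learning phase

    complications = rng.binomial(1, 0.15, 35)                  # 1 = complication in case i
    acceptable_rate = 0.10                                     # assumed target rate
    cusum = np.cumsum(complications - acceptable_rate)         # upward drift flags excess events
    ```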

  3. Ambient temperature and biomarkers of heart failure: a repeated measures analysis.

    PubMed

    Wilker, Elissa H; Yeh, Gloria; Wellenius, Gregory A; Davis, Roger B; Phillips, Russell S; Mittleman, Murray A

    2012-08-01

    Extreme temperatures have been associated with hospitalization and death among individuals with heart failure, but few studies have explored the underlying mechanisms. We hypothesized that outdoor temperature in the Boston, Massachusetts, area (1- to 4-day moving averages) would be associated with higher levels of biomarkers of inflammation and myocyte injury in a repeated-measures study of individuals with stable heart failure. We analyzed data from a completed clinical trial that randomized 100 patients to 12 weeks of tai chi classes or to time-matched education control. B-type natriuretic peptide (BNP), C-reactive protein (CRP), and tumor necrosis factor (TNF) were measured at baseline, 6 weeks, and 12 weeks. Endothelin-1 was measured at baseline and 12 weeks. We used fixed effects models to evaluate associations with measures of temperature that were adjusted for time-varying covariates. Higher apparent temperature was associated with higher levels of BNP beginning with 2-day moving averages and reached statistical significance for 3- and 4-day moving averages. CRP results followed a similar pattern but were delayed by 1 day. A 5°C change in the 3- and 4-day moving averages of apparent temperature was associated with 11.3% (95% confidence interval (CI): 1.1, 22.5; p = 0.03) and 11.4% (95% CI: 1.2, 22.5; p = 0.03) higher BNP. A 5°C change in the 4-day moving average of apparent temperature was associated with 21.6% (95% CI: 2.5, 44.2; p = 0.03) higher CRP. No clear associations with TNF or endothelin-1 were observed. Among patients undergoing treatment for heart failure, we observed positive associations between temperature and both BNP and CRP, which are predictors of heart failure prognosis and severity.

  4. A Case Study to Improve Emergency Room Patient Flow at Womack Army Medical Center

    DTIC Science & Technology

    2009-06-01

    ...use just the previous month; moving average 2-month period (MA2) uses the average from the previous two months; moving average 3-month period (MA3)... (ED prior to discharge by provider) MA2/MA3/MA4 - moving averages of 2-4 months in length; MAD - mean absolute deviation (measure of accuracy for...
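
    Reading between the elisions, the excerpt compares k-month moving-average forecasts (MA2-MA4) scored by mean absolute deviation (MAD). A hedged sketch with synthetic monthly counts:

    ```python
    import numpy as np
    import pandas as pd

    visits = pd.Series(1000 + 50 * np.random.randn(36))            # synthetic monthly ED volumes

    def mad_for_window(series, k):
        forecast = series.rolling(k).mean().shift(1)               # forecast month t from the k prior months
        return (series - forecast).abs().mean()                    # mean absolute deviation (MAD)

    for k in (2, 3, 4):
        print(f"MA{k}: MAD = {mad_for_window(visits, k):.1f}")
    ```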

  5. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  6. Learning curves for single incision and conventional laparoscopic right hemicolectomy: a multidimensional analysis

    PubMed Central

    Park, Yoonah; Yong, Yuen Geng; Jung, Kyung Uk; Huh, Jung Wook; Cho, Yong Beom; Kim, Hee Cheol; Lee, Woo Yong; Chun, Ho-Kyung

    2015-01-01

    Purpose This study aimed to compare the learning curves and early postoperative outcomes for conventional laparoscopic (CL) and single incision laparoscopic (SIL) right hemicolectomy (RHC). Methods This retrospective study included the initial 35 cases in each group. Learning curves were evaluated by the moving average of operative time, the mean operative time of every five consecutive cases, and cumulative sum (CUSUM) analysis. The learning phase was considered overcome when the moving average of operative times reached a plateau, and when the mean operative time of every five consecutive cases reached a low point and subsequently did not vary by more than 30 minutes. Results Six patients with missing data in the CL RHC group were excluded from the analyses. According to the mean operative time of every five consecutive cases, the learning phase of SIL and CL RHC was completed between 26 and 30 cases, and 16 and 20 cases, respectively. Moving average analysis revealed that approximately 31 (SIL) and 25 (CL) cases were needed to complete the learning phase, respectively. CUSUM analysis demonstrated that 10 (SIL) and two (CL) cases were required to reach a steady state of complication-free performance, respectively. The postoperative complication rate was higher in the SIL than in the CL group, but the difference was not statistically significant (17.1% vs. 3.4%). Conclusion The learning phase of SIL RHC is longer than that of CL RHC. Early oncological outcomes of both techniques were comparable. However, SIL RHC had a statistically insignificant higher complication rate than CL RHC during the learning phase. PMID:25960990

  7. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    PubMed

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric and dosimetric accuracy of the moving average algorithm was between that of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.

  8. The Hurst exponent in energy futures prices

    NASA Astrophysics Data System (ADS)

    Serletis, Apostolos; Rosenberg, Aryeh Adam

    2007-07-01

    This paper extends the work in Elder and Serletis [Long memory in energy futures prices, Rev. Financial Econ., forthcoming, 2007] and Serletis et al. [Detrended fluctuation analysis of the US stock market, Int. J. Bifurcation Chaos, forthcoming, 2007] by re-examining the empirical evidence for random walk type behavior in energy futures prices. In doing so, it uses daily data on energy futures traded on the New York Mercantile Exchange, over the period from July 2, 1990 to November 1, 2006, and a statistical physics approach, the ‘detrending moving average’ technique, which provides a reliable framework for testing the information efficiency in financial markets as shown by Alessio et al. [Second-order moving average and scaling of stochastic time series, Eur. Phys. J. B 27 (2002) 197-200] and Carbone et al. [Time-dependent Hurst exponent in financial time series, Physica A 344 (2004) 267-271; Analysis of clusters formed by the moving average of a long-range correlated time series, Phys. Rev. E 69 (2004) 026105]. The results show that energy futures returns display long memory and that the particular form of long memory is anti-persistence.

  9. Use of Time-Series, ARIMA Designs to Assess Program Efficacy.

    ERIC Educational Resources Information Center

    Braden, Jeffery P.; And Others

    1990-01-01

    Illustrates use of time-series designs for determining efficacy of interventions with fictitious data describing drug-abuse prevention program. Discusses problems and procedures associated with time-series data analysis using Auto Regressive Integrated Moving Average (ARIMA) models. Example illustrates application of ARIMA analysis for…

  10. Time series modelling of increased soil temperature anomalies during long period

    NASA Astrophysics Data System (ADS)

    Shirvani, Amin; Moradi, Farzad; Moosavi, Ali Akbar

    2015-10-01

    Soil temperature just beneath the soil surface is highly dynamic and has a direct impact on plant seed germination and is probably the most distinct and recognisable factor governing emergence. An autoregressive integrated moving average stochastic model was developed to predict the weekly soil temperature anomalies at 10 cm depth, one of the most important soil parameters. The weekly soil temperature anomalies for the periods of January 1986-December 2011 and January 2012-December 2013 were taken into consideration to construct and test autoregressive integrated moving average models. The proposed autoregressive integrated moving average (2,1,1) model had the minimum Akaike information criterion value and its estimated coefficients were different from zero at the 5% significance level. The prediction of the weekly soil temperature anomalies during the test period using this proposed model indicated a high correlation coefficient between the observed and predicted data, which was 0.99 for a lead time of 1 week. Linear trend analysis indicated that the soil temperature anomalies warmed up significantly by 1.8°C during the period of 1986-2011.

  11. Monthly streamflow forecasting with auto-regressive integrated moving average

    NASA Astrophysics Data System (ADS)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang was gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.

  12. The Mathematical Analysis of Style: A Correlation-Based Approach.

    ERIC Educational Resources Information Center

    Oppenheim, Rosa

    1988-01-01

    Examines mathematical models of style analysis, focusing on the pattern in which literary characteristics occur. Describes an autoregressive integrated moving average model (ARIMA) for predicting sentence length in different works by the same author and comparable works by different authors. This technique is valuable in characterizing stylistic…

  13. A New Trend-Following Indicator: Using SSA to Design Trading Rules

    NASA Astrophysics Data System (ADS)

    Leles, Michel Carlo Rodrigues; Mozelli, Leonardo Amaral; Guimarães, Homero Nogueira

    Singular Spectrum Analysis (SSA) is a non-parametric approach that can be used to decompose a time-series as trends, oscillations and noise. Trend-following strategies rely on the principle that financial markets move in trends for an extended period of time. Moving Averages (MAs) are the standard indicator to design such strategies. In this study, SSA is used as an alternative method to enhance trend resolution in comparison with the traditional MA. New trading rules using SSA as indicator are proposed. This paper shows that for the Dow Jones Industrial Average (DJIA) and Shanghai Securities Composite Index (SSCI) time-series, the SSA trading rules provided, in general, better results in comparison to MA trading rules.

  14. Robust Semi-Active Ride Control under Stochastic Excitation

    DTIC Science & Technology

    2014-01-01

    ...broad classes of time-series models which are of practical importance: the Auto-Regressive (AR) models, the Integrated (I) models, and the Moving Average (MA) models [12]. Combinations of these models result in autoregressive moving average (ARMA) and autoregressive integrated moving average... Down Up 4) Down Down. These four cases can be written in compact form as equation (20), where ... is the Heaviside...

  15. Dynamics of actin-based movement by Rickettsia rickettsii in vero cells.

    PubMed

    Heinzen, R A; Grieshaber, S S; Van Kirk, L S; Devin, C J

    1999-08-01

    Actin-based motility (ABM) is a virulence mechanism exploited by invasive bacterial pathogens in the genera Listeria, Shigella, and Rickettsia. Due to experimental constraints imposed by the lack of genetic tools and their obligate intracellular nature, little is known about rickettsial ABM relative to Listeria and Shigella ABM systems. In this study, we directly compared the dynamics and behavior of ABM of Rickettsia rickettsii and Listeria monocytogenes. A time-lapse video of moving intracellular bacteria was obtained by laser-scanning confocal microscopy of infected Vero cells synthesizing beta-actin coupled to green fluorescent protein (GFP). Analysis of time-lapse images demonstrated that R. rickettsii organisms move through the cell cytoplasm at an average rate of 4.8 ± 0.6 μm/min (mean ± standard deviation). This speed was 2.5 times slower than that of L. monocytogenes, which moved at an average rate of 12.0 ± 3.1 μm/min. Although rickettsiae moved more slowly, the actin filaments comprising the actin comet tail were significantly more stable, with an average half-life approximately three times that of L. monocytogenes (100.6 ± 19.2 s versus 33.0 ± 7.6 s, respectively). The actin tail associated with intracytoplasmic rickettsiae remained stationary in the cytoplasm as the organism moved forward. In contrast, actin tails of rickettsiae trapped within the nucleus displayed dramatic movements. The observed phenotypic differences between the ABM of Listeria and Rickettsia may indicate fundamental differences in the mechanisms of actin recruitment and polymerization.

  16. Comparing the performance of FA, DFA and DMA using different synthetic long-range correlated time series

    PubMed Central

    Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Sornette, Didier

    2012-01-01

    Notwithstanding the significant efforts to develop estimators of long-range correlations (LRC) and to compare their performance, no clear consensus exists on what is the best method and under which conditions. In addition, synthetic tests suggest that the performance of LRC estimators varies when using different generators of LRC time series. Here, we compare the performances of four estimators [Fluctuation Analysis (FA), Detrended Fluctuation Analysis (DFA), Backward Detrending Moving Average (BDMA), and Centred Detrending Moving Average (CDMA)]. We use three different generators [Fractional Gaussian Noises, and two ways of generating Fractional Brownian Motions]. We find that CDMA has the best performance and DFA is only slightly worse in some situations, while FA performs the worst. In addition, CDMA and DFA are less sensitive to the scaling range than FA. Hence, CDMA and DFA remain “The Methods of Choice” in determining the Hurst index of time series. PMID:23150785

  17. Direct determination approach for the multifractal detrending moving average analysis

    NASA Astrophysics Data System (ADS)

    Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing

    2017-11-01

    In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ(q) is related to the partition function and the multifractal spectrum f(α) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p-model, the two-dimensional p-model, and the fractional Brownian motions. We find that both approaches have comparable performances to unveil the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f(α) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.

  18. Finite-size effect and the components of multifractality in transport economics volatility based on multifractal detrending moving average method

    NASA Astrophysics Data System (ADS)

    Chen, Feier; Tian, Kang; Ding, Xiaoxu; Miao, Yuqi; Lu, Chunxia

    2016-11-01

    Analysis of freight rate volatility characteristics has attracted more attention since 2008 due to the effect of the credit crunch and the slowdown in marine transportation. The multifractal detrended fluctuation analysis technique is employed to analyze the time series of the Baltic Dry Bulk Freight Rate Index and the market trend of two bulk ship sizes, namely Capesize and Panamax, for the period March 1st 1999-February 26th 2015. In this paper, the degree of the multifractality with different fluctuation sizes is calculated. Besides, a multifractal detrending moving average (MF-DMA) counting technique has been developed to quantify the components of the multifractal spectrum with the finite-size effect taken into consideration. Numerical results show that both the Capesize and Panamax freight rate index time series are of a multifractal nature. The origin of multifractality for the bulk freight rate market series is found to be mostly due to nonlinear correlation.

  19. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA has no noticeable differences in predictive ability compared to the general autoregressive fractional integrated moving average model (ARFIMA), and its predictive ability is sensitive to the effect of financial crisis. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.

  20. An Optimization of Inventory Demand Forecasting in University Healthcare Centre

    NASA Astrophysics Data System (ADS)

    Bon, A. T.; Ng, T. K.

    2017-01-01

    The healthcare industry has become an important field nowadays as it concerns one's health. With that, forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted in the University Health Centre to collect historical demand data of Panadol 650 mg for 68 months, from January 2009 until August 2014. The aim of the research is to optimize the overall inventory demand through forecasting techniques. A quantitative (time series) forecasting model was used in the case study to forecast future data as a function of past data. The data pattern needs to be identified before applying the forecasting techniques; the data exhibit a trend pattern, and ten forecasting techniques are then applied using the Risk Simulator software. Lastly, the best forecasting technique is identified as the one with the least forecasting error. The ten forecasting techniques include single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winter’s additive, seasonal additive, Holt-Winter’s multiplicative, seasonal multiplicative and Autoregressive Integrated Moving Average (ARIMA). According to the forecasting accuracy measurement, the best forecasting technique is regression analysis.

  1. 25 CFR 700.173 - Average net earnings of business or farm.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 2 2011-04-01 Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...

  2. 25 CFR 700.173 - Average net earnings of business or farm.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...

  3. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

    Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by incorporating moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with a stable average value (DFS-SAV). Secondly, the reasonable signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (‖x‖₂²) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and the feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
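
    A hedged sketch of the classical Tikhonov step that the proposed penalty modifies: solve min ‖Ax − b‖² + λ‖x‖₂² in closed form. The moving-average-based penalty and the two-step parameter selection of the paper are not reproduced; the transfer matrix and force profile below are synthetic stand-ins.

    ```python
    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Closed-form classical Tikhonov (ridge) solution of min ||Ax - b||^2 + lam * ||x||^2."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    A = np.random.randn(200, 50)                    # stand-in for the response-to-force transfer matrix
    x_true = np.sin(np.linspace(0, 3 * np.pi, 50))  # synthetic "moving force" profile
    b = A @ x_true + 0.05 * np.random.randn(200)    # noisy measured responses
    x_hat = tikhonov_solve(A, b, lam=1.0)
    ```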

  4. Time Series in Education: The Analysis of Daily Attendance in Two High Schools

    ERIC Educational Resources Information Center

    Koopmans, Matthijs

    2011-01-01

    This presentation discusses the use of a time series approach to the analysis of daily attendance in two urban high schools over the course of one school year (2009-10). After establishing that the series for both schools were stationary, they were examined for moving average processes, autoregression, seasonal dependencies (weekly cycles),…

  5. A Comparison of Alternative Approaches to the Analysis of Interrupted Time-Series.

    ERIC Educational Resources Information Center

    Harrop, John W.; Velicer, Wayne F.

    1985-01-01

    Computer generated data representative of 16 Auto Regressive Integrated Moving Average (ARIMA) models were used to compare the results of interrupted time-series analysis using: (1) the known model identification, (2) an assumed (1,0,0) model, and (3) an assumed (3,0,0) model as an approximation to the General Transformation approach. (Author/BW)

  6. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations of the parameters in the moving average method to enhance the event detectability of a phase sensitive optical time domain reflectometer (OTDR). If the external events have a unique frequency of vibration, then the control parameters of the moving average method should be optimized in order to detect these events efficiently. A phase sensitive OTDR was implemented by a pulsed light source, which is composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier, a fiber Bragg grating filter, and a light receiving part, which has a photo-detector and a high speed data acquisition system. The moving average method is operated with the control parameters: the total number of raw traces, M, the number of averaged traces, N, and the step size of moving, n. The raw traces are obtained by the phase sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. As a result, if the event signal has one frequency, then optimal values of N and n exist to detect the event efficiently.
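
    A minimal sketch of the trace-averaging scheme described by the parameters above: from M raw traces, average N consecutive traces at a time and advance by a step of n traces, so that vibration at a given frequency is either reinforced or washed out depending on the choice of N and n. The array sizes and parameter values are illustrative.

    ```python
    import numpy as np

    def moving_average_traces(raw, N, n):
        """raw: (M, samples) array of phase-OTDR traces; average N consecutive traces,
        advancing the window by n traces each step."""
        M = raw.shape[0]
        starts = range(0, M - N + 1, n)
        return np.stack([raw[s:s + N].mean(axis=0) for s in starts])

    raw = np.random.randn(1000, 5000)                 # M = 1000 synthetic traces, 5000 samples each
    averaged = moving_average_traces(raw, N=20, n=5)  # illustrative N and n
    ```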

  7. Time series analysis of collective motions in proteins

    NASA Astrophysics Data System (ADS)

    Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.

    2004-01-01

    The dynamics of α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, which is the result of the damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters and random shock terms. Frequencies for the first 50 principal components are found to be in the 3-25 cm-1 range, which are well correlated with the principal component indices and also with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions for two successive sampling times, showing the mode's tendency to stay close to the minimum. All four of these parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, which is driven by a random shock term considerably smaller than that for the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least in partial consistency with previous studies. The Gaussian-type distributions of the intermediate modes, called "semiconstrained" modes, are explained by asserting that this random walk behavior is not completely free but occurs between energy barriers.

  8. Efficiency and multifractality analysis of CSI 300 based on multifractal detrending moving average algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Weijie; Dang, Yaoguo; Gu, Rongbao

    2013-03-01

    We apply the multifractal detrending moving average (MFDMA) algorithm to investigate and compare the efficiency and multifractality of the 5-min high-frequency China Securities Index 300 (CSI 300). The results show that the CSI 300 market becomes closer to weak-form efficiency after the introduction of the CSI 300 future. We find that the CSI 300 is featured by multifractality, and there is less complexity and risk after the CSI 300 index future was introduced. Using shuffling, surrogating, and extreme-value removal procedures, we unveil that extreme events and the fat-tailed distribution are the main origins of multifractality. Besides, we discuss the knotting phenomenon in multifractality, and find that the scaling range and the irregular fluctuations at large scales in the Fq(s) vs s plot can cause a knot.

  9. An algorithm for testing the efficient market hypothesis.

    PubMed

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).

  10. An Algorithm for Testing the Efficient Market Hypothesis

    PubMed Central

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH). PMID:24205148

  11. Defense Applications of Signal Processing

    DTIC Science & Technology

    1999-08-27

    class of multiscale autoregressive moving average (MARMA) processes. These are generalisations of ARMA models in time series analysis, and they contain...including the two theoretical sinusoidal components. Analysis of the amplitude and frequency time series provided some novel insight into the real...communication channels, underwater acoustic signals, radar systems, economic time series and biomedical signals [7]. The alpha stable (aS) distribution has

  12. Singularity analysis based on wavelet transform of fractal measures for identifying geochemical anomaly in mineral exploration

    NASA Astrophysics Data System (ADS)

    Chen, Guoxiong; Cheng, Qiuming

    2016-02-01

    Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method was proposed to investigate the multiscale nature of geochemical patterns from large to small scales. In the light of the wavelet transformation of fractal measures, we demonstrated that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we presented a novel local singularity analysis (LSA) using the WMD algorithm, which extends conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated using a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with the LSA implemented using the moving averaging method, the novel LSA using WMD better identified weak geochemical anomalies associated with mineralization in the covered area.

  13. Cluster structure of EU-15 countries derived from the correlation matrix analysis of macroeconomic index fluctuations

    NASA Astrophysics Data System (ADS)

    Gligor, M.; Ausloos, M.

    2007-05-01

    The statistical distances between countries, calculated for various moving average time windows, are mapped into the ultrametric subdominant space as in classical Minimal Spanning Tree methods. The Moving Average Minimal Length Path (MAMLP) algorithm allows a decoupling of fluctuations with respect to the mass center of the system from the movement of the mass center itself. A Hamiltonian representation given by a factor graph is used and plays the role of cost function. The present analysis pertains to 11 macroeconomic (ME) indicators, namely the GDP (x1), Final Consumption Expenditure (x2), Gross Capital Formation (x3), Net Exports (x4), Consumer Price Index (y1), Rates of Interest of the Central Banks (y2), Labour Force (z1), Unemployment (z2), GDP/hour worked (z3), GDP/capita (w1) and Gini coefficient (w2). The target group of countries is composed of 15 EU countries, data taken between 1995 and 2004. By two different methods (the Bipartite Factor Graph Analysis and the Correlation Matrix Eigensystem Analysis) it is found that the strongly correlated countries with respect to the macroeconomic indicators fluctuations can be partitioned into stable clusters.

  14. Timescale Halo: Average-Speed Targets Elicit More Positive and Less Negative Attributions than Slow or Fast Targets

    PubMed Central

    Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin

    2014-01-01

    Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow or fast moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow or fast moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average-speed people relative to slow or fast moving people. PMID:24421882

  15. Tropical Cyclone Activity in the North Atlantic Basin During the Weather Satellite Era, 1960-2014

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2016-01-01

    This Technical Publication (TP) represents an extension of previous work concerning the tropical cyclone activity in the North Atlantic basin during the weather satellite era, 1960-2014, in particular, that of an article published in The Journal of the Alabama Academy of Science. With the launch of the TIROS-1 polar-orbiting satellite in April 1960, a new era of global weather observation and monitoring began. Prior to this, the conditions of the North Atlantic basin were determined only from ship reports, island reports, and long-range aircraft reconnaissance. Consequently, storms that formed far from land, away from shipping lanes, and beyond the reach of aircraft possibly could be missed altogether, thereby leading to an underestimate of the true number of tropical cyclones forming in the basin. Additionally, new analysis techniques have come into use which sometimes have led to the inclusion of one or more storms at the end of a nominal hurricane season that otherwise would not have been included. In this TP, examined are the yearly (or seasonal) and 10-year moving average (10-yma) values of the (1) first storm day (FSD), last storm day (LSD), and length of season (LOS); (2) frequencies of tropical cyclones (by class); (3) average peak 1-minute sustained wind speed (PWS) and average lowest pressure (LP); (4) average genesis location in terms of north latitudinal and west longitudinal positions; (5) sum and average power dissipation index (PDI); (6) sum and average accumulated cyclone energy (ACE); (7) sum and average number of storm days (NSD); (8) sum of the number of hurricane days (NHD) and number of major hurricane days (NMHD); (9) net tropical cyclone activity index (NTCA); (10) largest individual storm (LIS) PWS, LP, PDI, ACE, NSD, NHD, NMHD; and (11) number of category 4 and 5 hurricanes (N4/5). Also examined are the December-May (D-M) and June-November (J-N) averages and 10-yma values of several climatic factors, including the (1) oceanic Nino index; (2) Atlantic multi-decadal oscillation index; (3) Atlantic meridional mode index; (4) global land-ocean temperature index; and (5) quasi-biennial oscillation index. Lastly, the associational aspects (using both linear and nonparametric statistical tests) between selected tropical cyclone parameters and the climatic factors are examined based on their 10-yma trend values.

  16. Leg kinematics and muscle activity during treadmill running in the cockroach, Blaberus discoidalis: I. Slow running.

    PubMed

    Watson, J T; Ritzmann, R E

    1998-01-01

    We have combined high-speed video motion analysis of leg movements with electromyogram (EMG) recordings from leg muscles in cockroaches running on a treadmill. The mesothoracic (T2) and metathoracic (T3) legs have different kinematics. While in each leg the coxa-femur (CF) joint moves in unison with the femur-tibia (FT) joint, the relative joint excursions differ between T2 and T3 legs. In T3 legs, the two joints move through approximately the same excursion. In T2 legs, the FT joint moves through a narrower range of angles than the CF joint. In spite of these differences in motion, no differences between the T2 and T3 legs were seen in timing or qualitative patterns of depressor coxa and extensor tibia activity. The average firing frequencies of slow depressor coxa (Ds) and slow extensor tibia (SETi) motor neurons are directly proportional to the average angular velocity of their joints during stance. The average Ds and SETi firing frequency appears to be modulated on a cycle-by-cycle basis to control running speed and orientation. In contrast, while the frequency variations within Ds and SETi bursts were consistent across cycles, the variations within each burst did not parallel variations in the velocity of the relevant joints.

  17. Examination of the Armagh Observatory Annual Mean Temperature Record, 1844-2004

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2006-01-01

    The long-term annual mean temperature record (1844-2004) of the Armagh Observatory (Armagh, Northern Ireland, United Kingdom) is examined for evidence of systematic variation, in particular, as related to solar/geomagnetic forcing and secular variation. Indeed, both are apparent in the temperature record. Ten-year moving averages of temperature are found to correlate strongly with 10-year moving averages of both the aa geomagnetic index and sunspot number, having correlation coefficients of approximately 0.7, implying that nearly half the variance in the 10-year moving average of temperature can be explained by solar/geomagnetic forcing. The residuals appear episodic in nature, with cooling seen in the 1880s and again near 1980. Seven of the last 10 years of the temperature record have exceeded 10 °C, unprecedented in the overall record. Variation of sunspot cyclic averages and 2-cycle moving averages of temperature strongly associates with similar averages for the solar/geomagnetic cycle, with the residuals displaying an apparent 9-cycle variation and a steep rise in temperature associated with cycle 23. Hale cycle averages of temperature for even-odd pairs of sunspot cycles correlate with similar averages for the solar/geomagnetic cycle and, especially, with the length of the Hale cycle. Indications are that the annual mean temperature will likely exceed 10 °C over the next decade.

  18. Scaling range of power laws that originate from fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Grech, Dariusz; Mazur, Zygmunt

    2013-05-01

    We extend our previous study of scaling range properties performed for detrended fluctuation analysis (DFA) [Physica A 392, 2384 (2013), doi:10.1016/j.physa.2013.01.049] to other techniques of fluctuation analysis (FA). A new technique, called modified detrended moving average analysis (MDMA), is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that, contrary to DFA, the DMA and MDMA techniques exhibit a power law dependence of the scaling range on the length of the searched signal and on the accuracy R² of the fit to the scaling law imposed by the DMA or MDMA methods. This power law dependence is satisfied for both uncorrelated and autocorrelated data. We also find a simple generalization of this power law relation for series with different levels of autocorrelation, measured in terms of the Hurst exponent. Basic relations between scaling ranges for different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finance, or physiology, where a huge number of short time series has to be examined at once and where a preliminary check of the scaling range regime for each series separately is neither effective nor possible.

  19. Moving Average Models with Bivariate Exponential and Geometric Distributions.

    DTIC Science & Technology

    1985-03-01

    ordinary time series and of point processes. Developments in Statistics, Vol. 1, P.R. Krishnaiah, ed. Academic Press, New York. [9] Esary, J.D. and...valued and discrete-valued time series with ARMA correlation structure. Multivariate Analysis V, P.R. Krishnaiah, ed. North-Holland. 151-166. [28

  20. Neonatal heart rate prediction.

    PubMed

    Abdel-Rahman, Yumna; Jeremic, Aleksander; Tan, Kenneth

    2009-01-01

    Technological advances have caused a decrease in the number of infant deaths. Pre-term infants now have a substantially increased chance of survival. One of the mechanisms vital to saving the lives of these infants is continuous monitoring and early diagnosis. Continuous monitoring generates huge amounts of data with much information embedded in them; statistical analysis can extract this information and use it to aid diagnosis and to understand development. In this study we have a large dataset containing over 180 pre-term infants whose heart rates were recorded over the length of their stay in the Neonatal Intensive Care Unit (NICU). We test two types of models, empirical Bayesian and autoregressive moving average, and then attempt to predict future values. The autoregressive moving average model showed better results but required more computation.

  1. Behavior and Frequency Analysis of Aurelia aurita by Using in situ Target Strength at a Port in Southwestern Korea

    NASA Astrophysics Data System (ADS)

    Yoon, Eun-A.; Hwang, Doo-Jin; Chae, Jinho; Yoon, Won Duk; Lee, Kyounghoon

    2018-03-01

    This study was carried out to determine the in situ target strength (TS) and behavioral characteristics of moon jellyfish (Aurelia aurita) using two frequencies (38 and 120 kHz) that present a 2-frequency-difference method for distinguishing A. aurita from other marine planktonic organisms. The average TS was -71.9 to -67.9 dB at 38 kHz and -75.5 to -66.0 dB at 120 kHz, and the average ΔMVBS120-38 kHz was similar, at -1.5 to 3.5 dB. The TS values varied in a range of about 14 dB, from -83.3 to -69.0 dB, depending on the pulsation of A. aurita. The species moved in a range of -0.1 to 1.0 m, mostly horizontally, with moving speeds of 0.3 to 0.6 m·s-1. The TS and behavioral characteristics of A. aurita can distinguish the species from others. The acoustic technology can also contribute to understanding the distribution and abundance of the species.

  2. Considerations for monitoring raptor population trends based on counts of migrants

    USGS Publications Warehouse

    Titus, K.; Fuller, M.R.; Ruos, J.L.; Meyburg, B-U.; Chancellor, R.D.

    1989-01-01

    Various problems were identified with standardized hawk count data as annually collected at six sites. Some of the hawk lookouts increased their hours of observation from 1979-1985, thereby confounding the total counts. Data-recording problems and missing data hamper the coding of data and their use with modern analytical techniques. Coefficients of variation among years in counts averaged about 40%. The advantages and disadvantages of various analytical techniques are discussed, including regression, non-parametric rank correlation trend analysis, and moving averages.

  3. Forecasting coconut production in the Philippines with ARIMA model

    NASA Astrophysics Data System (ADS)

    Lim, Cristina Teresa

    2015-02-01

    The study aimed to depict the situation of the coconut industry in the Philippines in future years by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data. An appropriate Box-Jenkins autoregressive moving average model was fitted. The validity of the model was tested using standard statistical techniques. The fitted autoregressive moving average (ARMA) model was then used to forecast coconut production for the following eight years.
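
    The Box-Jenkins workflow sketched in this abstract (inspect ACF/PACF, fit an ARIMA model, validate, forecast) can be reproduced with standard tooling; the sketch below uses statsmodels with placeholder production figures rather than the study's actual data, and the (1, 1, 1) order is an arbitrary illustrative choice:

        import pandas as pd
        import matplotlib.pyplot as plt
        from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
        from statsmodels.tsa.arima.model import ARIMA

        # Placeholder annual production series for 1990-2012 (23 values, arbitrary units).
        production = pd.Series(
            [11.9, 11.2, 11.4, 11.3, 11.2, 12.1, 11.9, 12.1, 11.0, 11.6, 12.9, 13.1,
             14.1, 14.3, 14.4, 14.8, 16.2, 14.9, 15.3, 15.7, 15.5, 15.2, 15.8],
            index=range(1990, 2013))

        # Identification: inspect the ACF and PACF of the differenced series.
        plot_acf(production.diff().dropna(), lags=8)
        plot_pacf(production.diff().dropna(), lags=8)
        plt.show()

        # Estimation and diagnostics for one candidate model, then an eight-year forecast.
        model = ARIMA(production, order=(1, 1, 1)).fit()
        print(model.summary())
        print(model.forecast(steps=8))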

  4. THE VELOCITY DISTRIBUTION OF NEARBY STARS FROM HIPPARCOS DATA. II. THE NATURE OF THE LOW-VELOCITY MOVING GROUPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bovy, Jo; Hogg, David W., E-mail: jo.bovy@nyu.ed

    2010-07-10

    The velocity distribution of nearby stars (≲100 pc) contains many overdensities or 'moving groups', clumps of comoving stars, that are inconsistent with the standard assumption of an axisymmetric, time-independent, and steady-state Galaxy. We study the age and metallicity properties of the low-velocity moving groups based on the reconstruction of the local velocity distribution in Paper I of this series. We perform stringent, conservative hypothesis testing to establish for each of these moving groups whether it could conceivably consist of a coeval population of stars. We conclude that they do not: the moving groups are neither trivially associated with their eponymous open clusters nor with any other inhomogeneous star formation event. Concerning a possible dynamical origin of the moving groups, we test whether any of the moving groups has a higher or lower metallicity than the background population of thin disk stars, as would generically be the case if the moving groups are associated with resonances of the bar or spiral structure. We find clear evidence that the Hyades moving group has higher than average metallicity and weak evidence that the Sirius moving group has lower than average metallicity, which could indicate that these two groups are related to the inner Lindblad resonance of the spiral structure. Further, we find weak evidence that the Hercules moving group has higher than average metallicity, as would be the case if it is associated with the bar's outer Lindblad resonance. The Pleiades moving group shows no clear metallicity anomaly, arguing against a common dynamical origin for the Hyades and Pleiades groups. Overall, however, the moving groups are barely distinguishable from the background population of stars, raising the likelihood that the moving groups are associated with transient perturbations.

  5. In-use activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks.

    PubMed

    Sandhu, Gurdas S; Frey, H Christopher; Bartelt-Hunt, Shannon; Jones, Elizabeth

    2015-03-01

    The objectives of this study were to quantify real-world activity, fuel use, and emissions for heavy duty diesel roll-off refuse trucks; evaluate the contribution of duty cycles and emissions controls to variability in cycle average fuel use and emission rates; quantify the effect of vehicle weight on fuel use and emission rates; and compare empirical cycle average emission rates with the U.S. Environmental Protection Agency's MOVES emission factor model predictions. Measurements were made at 1 Hz on six trucks of model years 2005 to 2012, using onboard systems. The trucks traveled 870 miles, had an average speed of 16 mph, and collected 165 tons of trash. The average fuel economy was 4.4 mpg, which is approximately twice previously reported values for residential trash collection trucks. On average, 50% of time is spent idling and about 58% of emissions occur in urban areas. Newer trucks with selective catalytic reduction and diesel particulate filter had NOx and PM cycle average emission rates that were 80% lower and 95% lower, respectively, compared to older trucks without. On average, the combined can and trash weight was about 55% of chassis weight. The marginal effect of vehicle weight on fuel use and emissions is highest at low loads and decreases as load increases. Among 36 cycle average rates (6 trucks×6 cycles), MOVES-predicted values and estimates based on real-world data have similar relative trends. MOVES-predicted CO2 emissions are similar to those of the real world, while NOx and PM emissions are, on average, 43% lower and 300% higher, respectively. The real-world data presented here can be used to estimate benefits of replacing old trucks with new trucks. Further, the data can be used to improve emission inventories and model predictions. In-use measurements of the real-world activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks can be used to improve the accuracy of predictive models, such as MOVES, and emissions inventories. Further, the activity data from this study can be used to generate more representative duty cycles for more accurate chassis dynamometer testing. Comparisons of old and new model year diesel trucks are useful in analyzing the effect of fleet turnover. The analysis of effect of haul weight on fuel use can be used by fleet managers to optimize operations to reduce fuel cost.

  6. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    PubMed

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which has not been utilized thus far. We examined the performance of a prediction method based on the auto-regressive integrated moving-average (ARIMA) model, applied to business data obtained at the Radiology Department. We built the model from the number of radiological examinations over the past 9 years and predicted the number of radiological examinations in the final year. Then, we compared the actual values with the forecast values. We established that the prediction method was simple and cost-effective to apply using free software. In addition, we were able to build a simple model by pre-processing the data to remove trend components. The difference between predicted and actual values was 10%; however, it was more important to understand the chronological change than the individual time-series values. Furthermore, our method was highly versatile and adaptable to general time-series data. Therefore, different healthcare organizations can use our method for the analysis and forecasting of their business data.

  7. On the Relationship between Solar Wind Speed, Earthward-Directed Coronal Mass Ejections, Geomagnetic Activity, and the Sunspot Cycle Using 12-Month Moving Averages

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2008-01-01

    For 1996-2006 (cycle 23), 12-month moving averages of the aa geomagnetic index strongly correlate (r = 0.92) with 12-month moving averages of solar wind speed, and 12-month moving averages of the number of coronal mass ejections (CMEs) (halo and partial halo events) strongly correlate (r = 0.87) with 12-month moving averages of sunspot number. In particular, the minimum (15.8, September/October 1997) and maximum (38.0, August 2003) values of the aa geomagnetic index occur simultaneously with the minimum (376 km/s) and maximum (547 km/s) solar wind speeds, both being strongly correlated with the following recurrent component (due to high-speed streams). The large peak of aa geomagnetic activity in cycle 23, the largest on record, spans the interval late 2002 to mid 2004 and is associated with a decreased number of halo and partial halo CMEs, whereas the smaller secondary peak of early 2005 seems to be associated with a slight rebound in the number of halo and partial halo CMEs. Based on the observed aaM during the declining portion of cycle 23, RM for cycle 24 is predicted to be larger than average, being about 168+/-60 (the 90% prediction interval), whereas based on the expected aam for cycle 24 (greater than or equal to 14.6), RM for cycle 24 should measure greater than or equal to 118+/-30, yielding an overlap of about 128+/-20.
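
    As a minimal illustration of the 12-month moving average smoothing and correlation used here, the sketch below applies a centered 12-month rolling mean to two hypothetical monthly series (placeholders, not the actual aa index or solar wind measurements) and reports their linear correlation:

        import numpy as np
        import pandas as pd

        # Hypothetical monthly series spanning 1996-2006; only the smoothing-then-correlating
        # recipe matters, not the values themselves.
        idx = pd.date_range("1996-01-01", "2006-12-01", freq="MS")
        rng = np.random.default_rng(2)
        base = np.sin(np.linspace(0.0, 2.0 * np.pi, len(idx)))
        aa_index = pd.Series(20 + 10 * base + rng.normal(0, 3, len(idx)), index=idx)
        wind_speed = pd.Series(450 + 80 * base + rng.normal(0, 30, len(idx)), index=idx)

        # Centered 12-month moving averages and their linear correlation coefficient.
        aa_smooth = aa_index.rolling(window=12, center=True).mean()
        v_smooth = wind_speed.rolling(window=12, center=True).mean()
        print("r =", aa_smooth.corr(v_smooth))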

  8. Opportunities to improve monitoring of temporal trends with FIA panel data

    Treesearch

    Raymond Czaplewski; Michael Thompson

    2009-01-01

    The Forest Inventory and Analysis (FIA) Program of the Forest Service, Department of Agriculture, is an annual monitoring system for the entire United States. Each year, an independent "panel" of FIA field plots is measured. To improve accuracy, FIA uses the "Moving Average" or "Temporally Indifferent" method to combine estimates from...

  9. The Prediction of Teacher Turnover Employing Time Series Analysis.

    ERIC Educational Resources Information Center

    Costa, Crist H.

    The purpose of this study was to combine knowledge of teacher demographic data with time-series forecasting methods to predict teacher turnover. Moving averages and exponential smoothing were used to forecast discrete time series. The study used data collected from the 22 largest school districts in Iowa, designated as FACT schools. Predictions…

  10. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate some regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS), D(h(q)), is defined following the idea of the box-counting dimension method. Therefore, we call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested by two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely backward (θ = 0), centered (θ = 0.5) and forward (θ = 1) moving averages with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two methods. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters for both the MF-DMS-based method with the centered case and the MF-DFS-based algorithm. By comparing the multifractal nature between the nutrient-deficiency and non-nutrient-deficiency areas determined by the segmentation results, an important finding is that the fluctuation of gray values in the nutrient-deficiency area is much more severe than that in the non-nutrient-deficiency area.
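
    For readers unfamiliar with the detrending moving average idea underlying MF-DMA, the sketch below shows its simplest one-dimensional, q = 2 form (a centered moving average estimating a single Hurst exponent); the multifractal, per-pixel h(q) computation described in the abstract builds on this but is not reproduced here:

        import numpy as np

        def dma_hurst(series, scales):
            """Estimate the Hurst exponent with a centered detrending moving average (DMA)."""
            profile = np.cumsum(series - np.mean(series))
            fluct = []
            for s in scales:
                kernel = np.ones(s) / s
                trend = np.convolve(profile, kernel, mode="same")   # moving-average trend
                resid = profile - trend
                core = resid[s:-s]                                   # drop incomplete-window edges
                fluct.append(np.sqrt(np.mean(core ** 2)))
            # The scaling law F(s) ~ s^H gives H as the slope in log-log coordinates.
            hurst, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
            return hurst

        rng = np.random.default_rng(7)
        white_noise = rng.standard_normal(10000)
        scales = np.array([8, 16, 32, 64, 128, 256])
        print("Hurst estimate (expected near 0.5 for white noise):",
              round(dma_hurst(white_noise, scales), 2))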

  11. [A peak recognition algorithm designed for chromatographic peaks of transformer oil].

    PubMed

    Ou, Linjun; Cao, Jian

    2014-09-01

    In the field of chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to achieve peak identification. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying a moving average iterative method and normalization techniques to identify the peaks. Accurate identification of the chromatographic peaks was realized by using multiple iterations of the moving average of the signal curves and square wave curves to determine the optimal values of the normalized peak-identification parameters, combined with the absolute peak retention times and the peak window. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to noise, chromatographic peak width or peak shape changes. It has strong adaptability to meet the on-site requirements of online monitoring devices for dissolved gases in transformer oil.
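
    The abstract only outlines the algorithm, so the following is a loose sketch of the general idea (iterated moving-average smoothing plus a normalized, slope-threshold-free criterion); the window size, iteration count, and normalization used here are illustrative choices, not the published parameters:

        import numpy as np

        def iterated_moving_average(y, window=25, iterations=3):
            """Repeatedly smooth a signal with a centered moving average."""
            kernel = np.ones(window) / window
            smoothed = np.asarray(y, dtype=float)
            for _ in range(iterations):
                smoothed = np.convolve(smoothed, kernel, mode="same")
            return smoothed

        def find_peaks_normalized(y, window=25, iterations=3, threshold=0.3):
            """Flag samples rising above the iterated moving-average baseline.

            The excess over the baseline is normalized to [0, 1], so the criterion does
            not depend on the absolute detector response (no slope threshold is needed).
            """
            baseline = iterated_moving_average(y, window, iterations)
            excess = np.clip(y - baseline, 0.0, None)
            if excess.max() > 0:
                excess = excess / excess.max()
            return np.where(excess > threshold)[0]

        # Synthetic chromatogram: two Gaussian peaks on a noisy baseline.
        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 10.0, 1000)
        signal = (np.exp(-((t - 3.0) ** 2) / 0.01) + 0.5 * np.exp(-((t - 7.0) ** 2) / 0.02)
                  + rng.normal(0.0, 0.02, t.size))
        peak_idx = find_peaks_normalized(signal, window=25, iterations=3, threshold=0.3)
        print("samples flagged near retention times:", np.unique(np.round(t[peak_idx], 1)))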

  12. North Atlantic Basin Tropical Cyclone Activity in Relation to Temperature and Decadal- Length Oscillation Patterns

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2009-01-01

    Yearly frequencies of North Atlantic basin tropical cyclones, their locations of origin, peak wind speeds, average peak wind speeds, lowest pressures, and average lowest pressures for the interval 1950-2008 are examined. The effects of El Nino and La Nina on the tropical cyclone parametric values are investigated. Yearly and 10-year moving average (10-yma) values of tropical cyclone parameters are compared against those of temperature and decadal-length oscillation, employing both linear and bi-variate analysis, and first differences in the 10-yma are determined. Discussion of the 2009 North Atlantic basin hurricane season, updating earlier results, is given.

  13. Hydromagnetic couple-stress nanofluid flow over a moving convective wall: OHAM analysis

    NASA Astrophysics Data System (ADS)

    Awais, M.; Saleem, S.; Hayat, T.; Irum, S.

    2016-12-01

    This communication presents the magnetohydrodynamics (MHD) flow of a couple-stress nanofluid over a convective moving wall. The flow dynamics are analyzed in the boundary layer region. The convective cooling phenomenon combined with thermophoresis and Brownian motion effects is discussed. Similarity transforms are utilized to convert the system of partial differential equations into coupled non-linear ordinary differential equations. The optimal homotopy analysis method (OHAM) is utilized, and the concept of minimization is employed by defining the average squared residual errors. Effects of the couple-stress parameter, the convective cooling parameter and the energy enhancement parameters are displayed via graphs and discussed in detail. Various tables are also constructed to present the error analysis and a comparison of the obtained results with already published data. Streamlines are plotted, showing the difference between the Newtonian fluid model and the couple-stress fluid model.

  14. Associations between Changes in City and Address Specific Temperature and QT Interval - The VA Normative Aging Study

    PubMed Central

    Mehta, Amar J.; Kloog, Itai; Zanobetti, Antonella; Coull, Brent A.; Sparrow, David; Vokonas, Pantel; Schwartz, Joel

    2014-01-01

    Background The underlying mechanisms of the association between ambient temperature and cardiovascular morbidity and mortality are not well understood, particularly for daily temperature variability. We evaluated whether daily mean temperature and the standard deviation of temperature were associated with heart rate-corrected QT interval (QTc) duration, a marker of ventricular repolarization, in a prospective cohort of older men. Methods This longitudinal analysis included 487 older men participating in the VA Normative Aging Study with up to three visits between 2000–2008 (n = 743). We analyzed associations between QTc and moving averages (1–7, 14, 21, and 28 days) of the 24-hour mean and standard deviation of temperature as measured from a local weather monitor, and the 24-hour mean temperature estimated from a spatiotemporal prediction model, in time-varying linear mixed-effect regression. Effect modification by season, diabetes, coronary heart disease, obesity, and age was also evaluated. Results Higher mean temperature, as measured from the local monitor and estimated from the prediction model, was associated with longer QTc at moving averages of 21 and 28 days. Increased 24-hr standard deviation of temperature was associated with longer QTc at moving averages from 4 up to 28 days; a 1.9°C interquartile range increase in the 4-day moving average standard deviation of temperature was associated with a 2.8 msec (95% CI: 0.4, 5.2) longer QTc. Associations between the 24-hr standard deviation of temperature and QTc were stronger in colder months and in participants with diabetes and coronary heart disease. Conclusion/Significance In this sample of older men, elevated mean temperature was associated with longer QTc, and increased variability of temperature was associated with longer QTc, particularly during colder months and among individuals with diabetes and coronary heart disease. These findings may offer insight into an important underlying mechanism of temperature-related cardiovascular morbidity and mortality in an older population. PMID:25238150

  15. A 12-Year Analysis of Nonbattle Injury Among US Service Members Deployed to Iraq and Afghanistan.

    PubMed

    Le, Tuan D; Gurney, Jennifer M; Nnamani, Nina S; Gross, Kirby R; Chung, Kevin K; Stockinger, Zsolt T; Nessen, Shawn C; Pusateri, Anthony E; Akers, Kevin S

    2018-05-30

    Nonbattle injury (NBI) among deployed US service members increases the burden on medical systems and results in high rates of attrition, affecting the available force. The possible causes and trends of NBI in the Iraq and Afghanistan wars have, to date, not been comprehensively described. To describe NBI among service members deployed to Iraq and Afghanistan, quantify absolute numbers of NBIs and proportion of NBIs within the Department of Defense Trauma Registry, and document the characteristics of this injury category. In this retrospective cohort study, data from the Department of Defense Trauma Registry on 29 958 service members injured in Iraq and Afghanistan from January 1, 2003, through December 31, 2014, were obtained. Injury incidence, patterns, and severity were characterized by battle injury and NBI. Trends in NBI were modeled using time series analysis with autoregressive integrated moving average and the weighted moving average method. Statistical analysis was performed from January 1, 2003, to December 31, 2014. Primary outcomes were proportion of NBIs and the changes in NBI over time. Among 29 958 casualties (battle injury and NBI) analyzed, 29 003 were in men and 955 were in women; the median age at injury was 24 years (interquartile range, 21-29 years). Nonbattle injury caused 34.1% of total casualties (n = 10 203) and 11.5% of all deaths (206 of 1788). Rates of NBI were higher among women than among men (63.2% [604 of 955] vs 33.1% [9599 of 29 003]; P < .001) and in Operation New Dawn (71.0% [298 of 420]) and Operation Iraqi Freedom (36.3% [6655 of 18 334]) compared with Operation Enduring Freedom (29.0% [3250 of 11 204]) (P < .001). A higher proportion of NBIs occurred in members of the Air Force (66.3% [539 of 810]) and Navy (48.3% [394 of 815]) than in members of the Army (34.7% [7680 of 22 154]) and Marine Corps (25.7% [1584 of 6169]) (P < .001). Leading mechanisms of NBI included falls (2178 [21.3%]), motor vehicle crashes (1921 [18.8%]), machinery or equipment accidents (1283 [12.6%]), blunt objects (1107 [10.8%]), gunshot wounds (728 [7.1%]), and sports (697 [6.8%]), causing predominantly blunt trauma (7080 [69.4%]). The trend in proportion of NBIs did not decrease over time, remaining at approximately 35% (by weighted moving average) after 2006 and approximately 39% by autoregressive integrated moving average. Assuming stable battlefield conditions, the autoregressive integrated moving average model estimated that the proportion of NBIs from 2015 to 2022 would be approximately 41.0% (95% CI, 37.8%-44.3%). In this study, approximately one-third of injuries during the Iraq and Afghanistan wars resulted from NBI, and the proportion of NBIs was steady for 12 years. Understanding the possible causes of NBI during military operations may be useful to target protective measures and safety interventions, thereby conserving fighting strength on the battlefield.

  16. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…

  17. Relating Factor Models for Longitudinal Data to Quasi-Simplex and NARMA Models

    ERIC Educational Resources Information Center

    Rovine, Michael J.; Molenaar, Peter C. M.

    2005-01-01

    In this article we show the one-factor model can be rewritten as a quasi-simplex model. Using this result along with addition theorems from time series analysis, we describe a common general model, the nonstationary autoregressive moving average (NARMA) model, that includes as a special case, any latent variable model with continuous indicators…

  18. An Intelligent Decision Support System for Workforce Forecast

    DTIC Science & Technology

    2011-01-01

    ARIMA) model to forecast the demand for construction skills in Hong Kong. This model was based...Decision Trees, ARIMA, Rule Based Forecasting, Segmentation Forecasting, Regression Analysis, Simulation Modeling, Input-Output Models, LP and NLP, Markovian...data • When results are needed as a set of easily interpretable rules 4.1.4 ARIMA Auto-regressive, integrated, moving-average (ARIMA) models

  19. Annual forest inventory estimates based on the moving average

    Treesearch

    Francis A. Roesch; James R. Steinman; Michael T. Thompson

    2002-01-01

    Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...

  20. 78 FR 26879 - Medicare Program; Inpatient Rehabilitation Facility Prospective Payment System for Federal Fiscal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-08

    ...: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Proposed rule. SUMMARY: This proposed rule..., especially the teaching status adjustment factor. Therefore, we implemented a 3-year moving average approach... moving average to calculate the facility-level adjustment factors. For FY 2011, we issued a notice to...

  1. Respiratory sinus arrhythmia: time domain characterization using autoregressive moving average analysis

    NASA Technical Reports Server (NTRS)

    Triedman, J. K.; Perrott, M. H.; Cohen, R. J.; Saul, J. P.

    1995-01-01

    Fourier-based techniques are mathematically noncausal and are therefore limited in their application to feedback-containing systems, such as the cardiovascular system. In this study, a mathematically causal time domain technique, autoregressive moving average (ARMA) analysis, was used to parameterize the relations of respiration and arterial blood pressure to heart rate in eight humans before and during total cardiac autonomic blockade. Impulse-response curves thus generated showed the relation of respiration to heart rate to be characterized by an immediate increase in heart rate of 9.1 +/- 1.8 beats.min-1.l-1, followed by a transient mild decrease in heart rate to -1.2 +/- 0.5 beats.min-1.l-1 below baseline. The relation of blood pressure to heart rate was characterized by a slower decrease in heart rate of -0.5 +/- 0.1 beats.min-1.mmHg-1, followed by a gradual return to baseline. Both of these relations nearly disappeared after autonomic blockade, indicating autonomic mediation. Maximum values obtained from the respiration to heart rate impulse responses were also well correlated with frequency domain measures of high-frequency "vagal" heart rate control (r = 0.88). ARMA analysis may be useful as a time domain representation of autonomic heart rate control for cardiovascular modeling.
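
    The causal input-output identification described here can be loosely illustrated as follows: a plain least-squares ARX fit (a simplified stand-in, since the published ARMA method also models the noise term) relates an input such as respiration to heart rate, and the fitted model is then driven with a unit impulse to obtain the impulse-response curve; all signal names, model orders, and the synthetic data are illustrative assumptions:

        import numpy as np

        def fit_arx(y, u, na=4, nb=4):
            """Least-squares ARX fit:  y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j] + e[t]."""
            lag = max(na, nb)
            rows, targets = [], []
            for t in range(lag, len(y)):
                rows.append(np.r_[y[t - na:t][::-1], u[t - nb:t][::-1]])
                targets.append(y[t])
            theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
            return theta[:na], theta[na:]

        def impulse_response(a, b, length=50):
            """Response of the fitted model to a unit impulse in the input u."""
            h, u = np.zeros(length), np.zeros(length)
            u[0] = 1.0
            for t in range(length):
                ar_part = sum(a[i] * h[t - 1 - i] for i in range(len(a)) if t - 1 - i >= 0)
                x_part = sum(b[j] * u[t - 1 - j] for j in range(len(b)) if t - 1 - j >= 0)
                h[t] = ar_part + x_part
            return h

        # Synthetic demo: a respiration-like input driving a damped heart-rate response.
        rng = np.random.default_rng(4)
        T = 2000
        u = np.sin(np.pi * np.arange(T) / 8) + 0.1 * rng.standard_normal(T)
        y = np.zeros(T)
        for t in range(2, T):
            y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 2.0 * u[t - 1] + 0.1 * rng.standard_normal()
        a_hat, b_hat = fit_arx(y, u, na=2, nb=2)
        print("impulse response, first 5 lags:", np.round(impulse_response(a_hat, b_hat)[:5], 3))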

  2. Watershed Regressions for Pesticides (WARP) for Predicting Annual Maximum and Annual Maximum Moving-Average Concentrations of Atrazine in Streams

    USGS Publications Warehouse

    Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.

    2008-01-01

    Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.

  3. Moving in the Right Direction: Helping Children Cope with a Relocation

    ERIC Educational Resources Information Center

    Kruse, Tricia

    2012-01-01

    According to national figures, 37.1 million people moved in 2009 (U.S. Census Bureau, 2010). In fact, the average American will move 11.7 times in their lifetime. Why are Americans moving so much? There are a variety of reasons. Regardless of the reason, moving is a common experience for children. If one looks at the developmental characteristics…

  4. Relationship research between meteorological disasters and stock markets based on a multifractal detrending moving average algorithm

    NASA Astrophysics Data System (ADS)

    Li, Qingchen; Cao, Guangxi; Xu, Wei

    2018-01-01

    Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the fractionally autoregressive integrated moving average process (ARFIMA) to demonstrate the effectiveness of MFDMA in the detection of auto-correlation at different sample lengths and to simulate some artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether the fluctuations of financial markets caused by meteorological disasters are derived from the normal evolution of the financial system itself or not. We also propose several reasonable recommendations.

  5. No Evidence of Suicide Increase Following Terrorist Attacks in the United States: An Interrupted Time-Series Analysis of September 11 and Oklahoma City

    ERIC Educational Resources Information Center

    Pridemore, William Alex; Trahan, Adam; Chamlin, Mitchell B.

    2009-01-01

    There is substantial evidence of detrimental psychological sequelae following disasters, including terrorist attacks. The effect of these events on extreme responses such as suicide, however, is unclear. We tested competing hypotheses about such effects by employing autoregressive integrated moving average techniques to model the impact of…

  6. The Press Relations of a Local School District: An Analysis of the Emergence of School Issues.

    ERIC Educational Resources Information Center

    Morris, Jon R.; Guenter, Cornelius

    Press coverage of a suburban midwest school district is analyzed as a set of time series of observations including the amount and quality of coverage. Possible shifts in these series because of the emergence of controversial issues are analyzed statistically using the Integrated Moving Average Time Series Model. Evidence of significant shifts in…

  7. Highly-resolved numerical simulations of bed-load transport in a turbulent open-channel flow

    NASA Astrophysics Data System (ADS)

    Vowinckel, Bernhard; Kempe, Tobias; Nikora, Vladimir; Jain, Ramandeep; Fröhlich, Jochen

    2015-11-01

    The study presents the analysis of phase-resolving Direct Numerical Simulations of a horizontal turbulent open-channel flow laden with a large number of spherical particles. These particles have a mobility close to their threshold of incipient motion and are transported in bed-load mode. The coupling of the fluid phase with the particles is realized by an Immersed Boundary Method. The Double-Averaging Methodology is applied for the first time, convoluting the data into a handy set of quantities averaged in time and space to describe the most prominent flow features. In addition, a systematic study elucidates the impact of mobility and sediment supply on the pattern formation of particle clusters in a very large computational domain. A detailed description of fluid quantities links the developed particle patterns to the enhancement of turbulence and to a modified hydraulic resistance. Conditional averaging is applied to erosion events, providing the processes involved in incipient particle motion. Furthermore, the detection of moving particle clusters as well as their surrounding flow field is addressed by a moving frame analysis. Funded by German Research Foundation (DFG), project FR 1593/5-2, computational time provided by ZIH Dresden, Germany, and JSC Juelich, Germany.

  8. Analysis Monthly Import of Palm Oil Products Using Box-Jenkins Model

    NASA Astrophysics Data System (ADS)

    Ahmad, Nurul F. Y.; Khalid, Kamil; Saifullah Rusiman, Mohd; Ghazali Kamardan, M.; Roslan, Rozaini; Che-Him, Norziha

    2018-04-01

    The palm oil industry has been an important component of the national economy, especially the agriculture sector. The aim of this study is to identify the pattern of imports of palm oil products, to model the time series using Box-Jenkins models, and to forecast the monthly import of palm oil products. The approach includes statistical tests for verifying model adequacy and statistical measures for comparing three models, namely the Autoregressive (AR) model, the Moving Average (MA) model and the Autoregressive Moving Average (ARMA) model. The identified model differs across products: AR(1) was found to be the best model for palm oil imports, while MA(3) was found to be the best model for palm kernel oil imports. For palm kernel, MA(4) was found to be the best model. The forecasts for the next four months for imports of palm oil, palm kernel oil and palm kernel showed a marked decrease compared to the actual data.

  9. The Performance of Multilevel Growth Curve Models under an Autoregressive Moving Average Process

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Pituch, Keenan A.

    2009-01-01

    The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…

  10. Dynamics of slow-moving landslides from permanent scatterer analysis.

    PubMed

    Hilley, George E; Bürgmann, Roland; Ferretti, Alessandro; Novali, Fabrizio; Rocca, Fabio

    2004-06-25

    High-resolution interferometric synthetic aperture radar (InSAR) permanent scatterer data allow us to resolve the rates and variations in the rates of slow-moving landslides. Satellite-to-ground distances (range changes) on landslides increase at rates of 5 to 7 millimeters per year, indicating average downslope sliding velocities from 27 to 38 millimeters per year. Time-series analysis shows that displacement occurs mainly during the high-precipitation season; during the 1997-1998 El Niño event, rates of range change increased to as much as 11 millimeters per year. The observed nonlinear relationship of creep and precipitation rates suggests that increased pore fluid pressures within the shallow subsurface may initiate and accelerate these features. Changes in the slope of a hill resulting from increases in the pore pressure and lithostatic stress gradients may then lead to landslides.

  11. Using Baidu Search Index to Predict Dengue Outbreak in China

    NASA Astrophysics Data System (ADS)

    Liu, Kangkang; Wang, Tao; Yang, Zhicong; Huang, Xiaodong; Milinovich, Gabriel J.; Lu, Yi; Jing, Qinlong; Xia, Yao; Zhao, Zhengyang; Yang, Yang; Tong, Shilu; Hu, Wenbiao; Lu, Jiahai

    2016-12-01

    This study identified possible thresholds for predicting dengue fever (DF) outbreaks using the Baidu Search Index (BSI). Time-series classification and regression tree models based on the BSI were used to develop a predictive model for DF outbreaks in Guangzhou and Zhongshan, China. In the regression tree models, the mean autochthonous DF incidence rate increased approximately 30-fold in Guangzhou when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 382. When the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 91.8, there was an approximately 9-fold increase in the mean autochthonous DF incidence rate in Zhongshan. In the classification tree models, the results showed that when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 99.3, there was an 89.28% chance of a DF outbreak in Guangzhou, while in Zhongshan, when the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 68.1, the chance of a DF outbreak rose to 100%. The study indicated that lower-cost internet-based surveillance systems can be a valuable complement to traditional DF surveillance in China.
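
    A rough sketch of the core feature engineering is given below, assuming the "1-3 week lagged moving average" is the mean of the BSI over the three preceding weeks, and using a shallow regression tree whose learned split threshold plays the role of the warning threshold; the weekly series here are random placeholders, not the study's data:

        import numpy as np
        import pandas as pd
        from sklearn.tree import DecisionTreeRegressor

        # Placeholder weekly BSI and incidence series (three years of weeks).
        rng = np.random.default_rng(5)
        weeks = pd.date_range("2014-01-05", periods=156, freq="W")
        bsi = pd.Series(rng.gamma(shape=2.0, scale=60.0, size=len(weeks)), index=weeks)
        incidence = pd.Series(0.02 * bsi.shift(1).fillna(0)
                              + rng.normal(0, 0.5, len(weeks)).clip(0), index=weeks)

        # Lagged moving average of the BSI over weeks t-3 .. t-1.
        bsi_lag_ma = bsi.shift(1).rolling(3).mean()

        df = pd.DataFrame({"bsi_lag_ma": bsi_lag_ma, "incidence": incidence}).dropna()
        tree = DecisionTreeRegressor(max_depth=2, random_state=0)
        tree.fit(df[["bsi_lag_ma"]], df["incidence"])
        # Split thresholds on the BSI feature act as candidate outbreak-warning thresholds.
        print(tree.tree_.threshold[tree.tree_.feature == 0])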

  12. Comparison between wavelet transform and moving average as filter method of MODIS imagery to recognize paddy cropping pattern in West Java

    NASA Astrophysics Data System (ADS)

    Dwi Nugroho, Kreshna; Pebrianto, Singgih; Arif Fatoni, Muhammad; Fatikhunnada, Alvin; Liyantono; Setiawan, Yudi

    2017-01-01

    Information on the area and spatial distribution of paddy fields is needed to support sustainable agriculture and food security programs. Mapping the distribution of paddy-field cropping patterns is important for maintaining a sustainable paddy field area. This can be done by direct observation or by remote sensing methods. This paper discusses remote sensing for paddy field monitoring based on MODIS time series data. Time-series MODIS data are difficult to classify directly because of temporal noise; therefore, the wavelet transform and the moving average are needed as filter methods. The objective of this study is to recognize paddy cropping patterns in West Java with the wavelet transform and the moving average using MODIS imagery (MOD13Q1) from 2001 to 2015, and then to compare the two methods. The results showed that the spatial distributions from both methods have almost the same cropping pattern. The accuracy of the wavelet transform (75.5%) is higher than that of the moving average (70.5%). Both methods showed that the majority of cropping patterns in West Java are paddy-fallow-paddy-fallow with various planting times. Differences in the planting schedule are caused by the availability of irrigation water.
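
    The two filters can be compared on any vegetation-index time series; the sketch below smooths a synthetic 16-day NDVI-like series (a placeholder, not MOD13Q1 data) with a centered moving average and with a wavelet decomposition whose finest detail levels are discarded. The wavelet, decomposition level, and window length are arbitrary illustrative values:

        import numpy as np
        import pandas as pd
        import pywt

        # Synthetic 16-day composite series: 23 observations per year for 15 years,
        # two cropping cycles per year plus noise.
        rng = np.random.default_rng(8)
        n = 23 * 15
        t = np.arange(n)
        ndvi = 0.45 + 0.25 * np.sin(2 * np.pi * t / 11.5) + rng.normal(0, 0.08, n)

        # Filter 1: centered moving average.
        smoothed_ma = pd.Series(ndvi).rolling(window=5, center=True).mean()

        # Filter 2: wavelet decomposition, discarding the two finest detail levels.
        coeffs = pywt.wavedec(ndvi, "db4", level=4)
        coeffs[-2:] = [np.zeros_like(c) for c in coeffs[-2:]]
        smoothed_wt = pywt.waverec(coeffs, "db4")[:n]

        # Compare how much high-frequency noise each filter removes.
        print("residual std, moving average:", float(np.nanstd(ndvi - smoothed_ma)))
        print("residual std, wavelet:", float(np.std(ndvi - smoothed_wt)))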

  13. Driving-forces model on individual behavior in scenarios considering moving threat agents

    NASA Astrophysics Data System (ADS)

    Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia

    2017-09-01

    The individual behavior model is a contributory factor in improving the accuracy of agent-based simulation in different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks carried out by attackers with close-range weapons (e.g., sword, stick). At the same time, many existing behavior models lack validation from cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model to scenarios that include moving threat agents. An experiment was conducted to validate the key components of the model. The model is then compared with an advanced Elliptical Specification II social force model by calculating the fitting errors between simulated and experimental trajectories, and is applied to simulate a specific scenario. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and the standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model could be used to simulate individual behavior when analyzing the risk of specific scenarios using agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.

  14. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    PubMed

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of “out of lab” portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified: higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on the migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
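
    A minimal sketch of the general idea, not the published algorithm: slower, later-migrating analytes give broader, lower-frequency peaks, so the averaging window is allowed to grow with migration time. The linear window-growth law and all parameter values below are assumptions made for illustration.

        import numpy as np

        def adaptive_moving_average(signal, base_window=5, growth=0.01):
            """Smooth a trace with an averaging window that widens along the run."""
            signal = np.asarray(signal, dtype=float)
            smoothed = np.empty_like(signal)
            n = signal.size
            for i in range(n):
                half = int(round((base_window + growth * i) / 2))
                lo, hi = max(0, i - half), min(n, i + half + 1)
                smoothed[i] = signal[lo:hi].mean()
            return smoothed

        # Synthetic electropherogram sampled at 25 Hz: a sharp early peak, a broad late peak, noise.
        t = np.arange(0, 600, 1 / 25)
        trace = (np.exp(-(t - 120) ** 2 / 4) + 0.5 * np.exp(-(t - 480) ** 2 / 60)
                 + np.random.default_rng(0).normal(0, 0.05, t.size))
        smoothed = adaptive_moving_average(trace)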

  15. Method and Apparatus for the Portable Identification of Material Thickness and Defects Using Spatially Controlled Heat Application

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott (Inventor); Winfree, William P. (Inventor)

    1999-01-01

    A method and a portable apparatus for the nondestructive identification of defects in structures. The apparatus comprises a heat source and a thermal imager that move at a constant speed past a test surface of a structure. The thermal imager is offset at a predetermined distance from the heat source. The heat source induces a constant surface temperature, and the imager follows the heat source and produces a video image of the thermal characteristics of the test surface. Material defects produce deviations from the constant surface temperature that move at the inverse of the constant speed, whereas thermal noise produces deviations that move at random speed. Computer averaging of the digitized thermal image data with respect to the constant speed minimizes noise and improves the signal of valid defects. The motion of the thermographic equipment, coupled with the high signal-to-noise ratio, renders the method suitable for portable, on-site analysis.

  16. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
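
    The sketch below illustrates the flavour of such a rule rather than the authors' exact specification: a 'long only' position is opened on a price/moving-average cross-over and closed when the price falls a fixed fraction below the highest price reached since entry, so the ratcheting threshold acts as a dynamic trailing stop. The window length and stop fraction are illustrative assumptions.

        import pandas as pd

        def crossover_with_trailing_stop(prices: pd.Series, window: int = 50, stop: float = 0.95) -> pd.Series:
            """Return a 0/1 'long only' position series from a cross-over rule with a trailing stop."""
            ma = prices.rolling(window).mean()
            position = pd.Series(0, index=prices.index)
            in_market, trail = False, None
            for t in range(window, len(prices)):
                p = prices.iloc[t]
                if not in_market and p > ma.iloc[t]:     # cross-over 'buy' signal
                    in_market, trail = True, p
                elif in_market:
                    trail = max(trail, p)                # dynamic threshold ratchets upward
                    if p < trail * stop:                 # exit when price drops below the threshold
                        in_market, trail = False, None
                position.iloc[t] = int(in_market)
            return position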

  17. Generalized seasonal autoregressive integrated moving average models for count data with application to malaria time series with low case numbers.

    PubMed

    Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope

    2013-01-01

    With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.

  18. Generalized Seasonal Autoregressive Integrated Moving Average Models for Count Data with Application to Malaria Time Series with Low Case Numbers

    PubMed Central

    Briët, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope

    2013-01-01

    Introduction With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low. PMID:23785448

  19. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    NASA Astrophysics Data System (ADS)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and eventually the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical importance. In addition, the approach allows the user to bring human insight into the problem, examine the evolved models, and pick the best performing programs out for further analysis.
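
    For the data pre-processing ingredient alone, a sketch of the kind of moving-average filtering involved is given below; the file name, window length and lag structure are assumptions rather than the paper's settings.

        import pandas as pd

        # Hypothetical daily streamflow series indexed by date.
        flow = pd.read_csv("senoz_daily_flow.csv", index_col=0, parse_dates=True)["flow"]
        flow_ma = flow.rolling(window=3, center=True).mean()   # 3-day centred moving average

        # One-day-ahead input-output pairs built from the smoothed series.
        X = pd.concat({f"lag{k}": flow_ma.shift(k) for k in (1, 2, 3)}, axis=1).dropna()
        y = flow.loc[X.index]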

  20. Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.

    PubMed

    Kis, Maria

    2005-01-01

    In this paper we demonstrate the application of time series models to medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. The use of ARIMA modelling for mortality data is demonstrated by two examples: analysis of the mortality rates of ischemic heart disease and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis, and the relationships between the time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, the estimation of White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we may conclude that the intervals obtained with the continuous-time estimation model were much smaller than those from the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia by decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons, demonstrating the seasonal occurrence of childhood leukaemia in Hungary.

  1. SM91: Observations of interchange between acceleration and thermalization processes in auroral electrons

    NASA Technical Reports Server (NTRS)

    Pongratz, M.

    1972-01-01

    Results from a Nike-Tomahawk sounding rocket flight launched from Fort Churchill are presented. The rocket was launched into a breakup aurora at magnetic local midnight on 21 March 1968. The rocket was instrumented to measure electrons with an electrostatic analyzer electron spectrometer, which made 29 measurements in the energy interval 0.5 keV to 30 keV. Complete energy spectra were obtained at a rate of 10/sec. Pitch angle information is presented via three computed averages per rocket spin. The dumped electron average corresponds to averages over electrons moving nearly parallel to the B vector. The mirroring electron average corresponds to averages over electrons moving nearly perpendicular to the B vector. The average was also computed over the entire downward hemisphere (the precipitated electron average). The observations were obtained in an altitude range of 10 km at 230 km altitude.

  2. Improved Statistical Fault Detection Technique and Application to Biological Phenomena Modeled by S-Systems.

    PubMed

    Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N

    2017-09-01

    In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) for different window sizes. However, most real systems are nonlinear, and the linear PCA method cannot address this nonlinearity to a great extent. Thus, in this paper, we first apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model, which is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), since this may further improve fault detection performance by reducing the FAR through an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates, smaller FARs, and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea behind the KPCA-based EWMA-GLRT fault detection algorithm is to combine the advantages of the proposed EWMA-GLRT fault detection chart with the KPCA model. It is used to enhance fault detection in the Cad System in E. coli model by monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
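
    As a minimal sketch of the exponential weighting underlying the EWMA-GLRT statistic (not the full KPCA-based method), the snippet below smooths a monitored statistic so that recent samples carry geometrically more weight and flags samples that exceed a conventional EWMA control limit; the smoothing constant, simulated fault and limit are illustrative assumptions.

        import numpy as np

        def ewma(x, lam=0.2):
            """EWMA recursion z_t = lam * x_t + (1 - lam) * z_{t-1}: recent samples weigh more."""
            z = np.zeros(len(x))
            for t, v in enumerate(x):
                z[t] = lam * v + (1 - lam) * (z[t - 1] if t > 0 else v)
            return z

        rng = np.random.default_rng(0)
        stat = rng.normal(0.0, 1.0, 200)
        stat[150:] += 2.0                                          # simulated fault after sample 150

        z = ewma(stat)
        limit = 3 * stat[:150].std() * np.sqrt(0.2 / (2 - 0.2))    # asymptotic EWMA control limit
        alarms = np.where(z > limit)[0]
        print("first alarm at sample:", alarms[0] if alarms.size else None)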

  3. Neural net forecasting for geomagnetic activity

    NASA Technical Reports Server (NTRS)

    Hernandez, J. V.; Tajima, T.; Horton, W.

    1993-01-01

    We use neural nets to construct nonlinear models to forecast the AL index given solar wind and interplanetary magnetic field (IMF) data. We follow two approaches: (1) the state space reconstruction approach, which is a nonlinear generalization of autoregressive-moving average models (ARMA) and (2) the nonlinear filter approach, which reduces to a moving average model (MA) in the linear limit. The database used here is that of Bargatze et al. (1985).

  4. Modeling and roles of meteorological factors in outbreaks of highly pathogenic avian influenza H5N1.

    PubMed

    Biswas, Paritosh K; Islam, Md Zohorul; Debnath, Nitish C; Yamage, Mat

    2014-01-01

    The highly pathogenic avian influenza A virus subtype H5N1 (HPAI H5N1) is a deadly zoonotic pathogen. Its persistence in poultry in several countries is a potential threat: a mutant or genetically reassorted progenitor might cause a human pandemic. Its world-wide eradication from poultry is important to protect public health. The global trend of outbreaks of influenza attributable to HPAI H5N1 shows a clear seasonality. Meteorological factors might be associated with such a trend but have not been studied. For the first time, we analyze the role of meteorological factors in the occurrence of HPAI outbreaks in Bangladesh. We employed autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models to assess the roles of different meteorological factors in outbreaks of HPAI. Outbreaks were modeled best when multiplicative seasonality was incorporated. Incorporation of meteorological variables as inputs did not improve the performance of any multivariable model, but relative humidity (RH) was a significant covariate in several ARIMA and SARIMA models with different autoregressive and moving average orders. Cloud cover was also a significant covariate in two SARIMA models, and air temperature along with RH might be a predictor when a moving average (MA) order at a lag of 1 month is considered.

  5. Cause Resolving of Typhoon Precipitation Using Principle Component Analysis under Complex Interactive Effect of Terrain, Monsoon and Typhoon Vortex

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.

    2015-12-01

    This study develops a novel methodology for resolving the causes of typhoon-induced precipitation using principal component analysis (PCA) and for developing a long lead-time precipitation prediction model. The discovered spatial and temporal features of rainfall are used to develop a state-of-the-art descriptive statistical model for predicting long lead-time precipitation during typhoons. The time series of 12-hour precipitation for typhoons with different types of invading moving track are first passed through the signal-analysis process to identify the causes of rainfall and to quantify the degree to which each cause contributes. The causes include: (1) interaction between the typhoon rain band and terrain; (2) the co-movement effect induced by the typhoon wind field together with the monsoon; (3) pressure gradient; (4) wind velocity; (5) temperature environment; (6) characteristic distance between the typhoon center and the surface target station; (7) distance between the grade-7 storm radius and the surface target station; and (8) relative humidity. The results obtained from PCA can detect the hidden pattern of the eight causes in space and time and help in understanding future trends and changes in precipitation. This study applies the developed methodology to Taiwan Island, which comprises complex and diverse terrain formations and heights. Results show that: (1) for typhoons moving toward the direction of 245° to 330°, Causes (1), (2) and (6) are the primary ones generating rainfall; and (2) for the direction of 330° to 380°, Causes (1), (4) and (6) are the primary ones. In addition, the precipitation prediction model developed using PCA with the distributed moving-track approach (PCA-DMT) is 32% more accurate than PCA without the distributed moving-track approach, and it can effectively achieve long lead-time precipitation prediction with an average prediction error of 13% over an average forecast lead-time of 48 hours.

  6. Time-Series Analysis: Assessing the Effects of Multiple Educational Interventions in a Small-Enrollment Course

    NASA Astrophysics Data System (ADS)

    Warren, Aaron R.

    2009-11-01

    Time-series designs are an alternative to pretest-posttest methods that are able to identify and measure the impacts of multiple educational interventions, even for small student populations. Here, we use an instrument employing standard multiple-choice conceptual questions to collect data from students at regular intervals. The questions are modified by asking students to distribute 100 Confidence Points among the options in order to indicate the perceived likelihood of each answer option being the correct one. Tracking the class-averaged ratings for each option produces a set of time-series. ARIMA (autoregressive integrated moving average) analysis is then used to test for, and measure, changes in each series. In particular, it is possible to discern which educational interventions produce significant changes in class performance. Cluster analysis can also identify groups of students whose ratings evolve in similar ways. A brief overview of our methods and an example are presented.

  7. Tennessee's forest land area was stable 1999-2005 but early successional forest area declined

    Treesearch

    Christopher M. Oswalt

    2008-01-01

    A new analysis of the most recent (2005) annualized moving average data for Tennessee indicates that the area of forest land in the State remained stable between 1999 and 2005. Although trends in forest land area vary from region to region within the State, Tennessee neither lost nor gained forest land between 1999 and 2005. However, Tennessee had more than 2.5 times...

  8. Road traffic accidents prediction modelling: An analysis of Anambra State, Nigeria.

    PubMed

    Ihueze, Chukwutoo C; Onwurah, Uchendu O

    2018-03-01

    One of the major problems in the world today is the rate of road traffic crashes and deaths on our roads. The majority of these deaths occur in low- and middle-income countries, including Nigeria. This study analyzed road traffic crashes in Anambra State, Nigeria with the intention of developing accurate predictive models for forecasting crash frequency in the State using autoregressive integrated moving average (ARIMA) and autoregressive integrated moving average with explanatory variables (ARIMAX) modelling techniques. The results showed that the ARIMAX model outperformed the ARIMA(1,1,1) model when their performances were compared using the Bayesian information criterion, mean absolute percentage error and root mean square error (lower values indicating better accuracy) and the coefficient of determination (R-squared, higher values indicating better accuracy). The findings of this study reveal that incorporating human, vehicle and environmental factors in time series analysis of crash data produces a more robust predictive model than solely using aggregated crash counts. This study contributes to the body of knowledge on road traffic safety and provides an approach to forecasting that incorporates human, vehicle and environmental factors. The recommendations made in this study, if applied, will help reduce the number of road traffic crashes in Nigeria. Copyright © 2017 Elsevier Ltd. All rights reserved.
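
    A hedged sketch of the comparison described above, using statsmodels' SARIMAX class, which fits ARIMA models with optional exogenous regressors; the data file and the exogenous column names are hypothetical placeholders rather than the study's variables.

        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        crashes = pd.read_csv("anambra_monthly_crashes.csv", index_col=0, parse_dates=True)

        arima = SARIMAX(crashes["crash_count"], order=(1, 1, 1)).fit(disp=False)
        arimax = SARIMAX(crashes["crash_count"], order=(1, 1, 1),
                         exog=crashes[["rainfall", "vehicle_volume"]]).fit(disp=False)

        print("ARIMA  BIC:", arima.bic)    # the study reports the exogenous-variable model fits better
        print("ARIMAX BIC:", arimax.bic)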

  9. MARD—A moving average rose diagram application for the geosciences

    NASA Astrophysics Data System (ADS)

    Munro, Mark A.; Blenkinsop, Thomas G.

    2012-12-01

    MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
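
    The core smoothing step can be sketched as a wrap-around moving average over binned azimuths; this is an independent illustration of the idea, not MARD's own code, and the bin width and averaging aperture are illustrative choices.

        import numpy as np

        def smoothed_rose(azimuths_deg, bin_width=10, aperture=30):
            """Bin azimuths and apply an unweighted wrap-around moving average for a rose diagram."""
            edges = np.arange(0, 360 + bin_width, bin_width)
            counts, _ = np.histogram(np.mod(azimuths_deg, 360), bins=edges)
            half = aperture // (2 * bin_width)              # neighbouring bins averaged on each side
            kernel = np.ones(2 * half + 1) / (2 * half + 1)
            padded = np.concatenate([counts[-half:], counts, counts[:half]])  # circular padding
            smoothed = np.convolve(padded, kernel, mode="valid")
            centres = edges[:-1] + bin_width / 2
            return centres, smoothed

        centres, freq = smoothed_rose(np.random.default_rng(0).uniform(0, 360, 500))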

  10. Atmospheric mold spore counts in relation to meteorological parameters

    NASA Astrophysics Data System (ADS)

    Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.

    Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied during 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season by a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated (P<0.02) with average daily temperature and relative humidity, and negatively correlated with precipitation. Alternaria and Epicoccum did not show increased predictability with weather variables. A mathematical model was derived for Cladosporium spore counts using the annual seasonal cycle and significant weather variables. The model for Alternaria and Epicoccum incorporated the annual seasonal cycle. Fungal spore counts can be modeled by time series analysis and related to meteorological parameters while controlling for seasonality; this modeling can provide estimates of exposure to fungal aeroallergens.

  11. Naive vs. Sophisticated Methods of Forecasting Public Library Circulations.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1984-01-01

    Two sophisticated--autoregressive integrated moving average (ARIMA), straight-line regression--and two naive--simple average, monthly average--forecasting techniques were used to forecast monthly circulation totals of 34 public libraries. Comparisons of forecasts and actual totals revealed that ARIMA and monthly average methods had smallest mean…

  12. Short-term forecasts gain in accuracy. [Regression technique using ''Box-Jenkins'' analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Box-Jenkins time-series models offer accuracy for short-term forecasts that compares with large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This new method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of autocorrelations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used according to which provides the most appropriate combination of autocorrelations and related derivatives. The major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited to seasonal patterns, which makes it possible to produce load-demand forecasts at intervals as short as one hour. With accuracy up to two years ahead, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design. (DCK)
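
    A rough illustration of the three Box-Jenkins steps named above (identification, estimation, diagnostic checking) using statsmodels on a hypothetical hourly load series; the file name, model orders and seasonal period are assumptions.

        import pandas as pd
        from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
        from statsmodels.stats.diagnostic import acorr_ljungbox
        from statsmodels.tsa.arima.model import ARIMA

        load = pd.read_csv("hourly_load.csv", index_col=0, parse_dates=True)["mw"]

        # Identification: inspect (partial) autocorrelations of the differenced series.
        plot_acf(load.diff().dropna(), lags=48)
        plot_pacf(load.diff().dropna(), lags=48)

        # Estimation: fit a tentative seasonal model (orders are illustrative).
        fit = ARIMA(load, order=(1, 1, 1), seasonal_order=(0, 1, 1, 24)).fit()

        # Diagnostic check: residuals should show no remaining autocorrelation.
        print(acorr_ljungbox(fit.resid, lags=[24]))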

  13. Estimation of inhaled airborne particle number concentration by subway users in Seoul, Korea.

    PubMed

    Kim, Minhae; Park, Sechan; Namgung, Hyeong-Gyu; Kwon, Soon-Bark

    2017-12-01

    Exposure to airborne particulate matter (PM) causes several diseases in the human body. The smaller particles, which have relatively large surface areas, are actually more harmful to the human body since they can penetrate deeper parts of the lungs or become secondary pollutants by bonding with other atmospheric pollutants, such as nitrogen oxides. The purpose of this study is to present the number of PM inhaled by subway users as a possible reference material for any analysis of the hazards to the human body arising from the inhalation of such PM. Two transfer stations in Seoul, Korea, which have the greatest number of users, were selected for this study. For 0.3-0.422 μm PM, particle number concentration (PNC) was highest outdoors but decreased as the tester moved deeper underground. On the other hand, the PNC between 1 and 10 μm increased as the tester moved deeper underground and showed a high number concentration inside the subway train as well. An analysis of the particles to which subway users are actually exposed (the inhaled particle number), using particle concentration at each measurement location, the average inhalation rate of an adult, and the average stay time at each location, showed that particles sized 0.01-0.422 μm are mostly inhaled from the outdoor air whereas particles sized 1-10 μm are inhaled as the passengers move deeper underground. Based on these findings, we expect that the inhaled particle number of subway users can be used as reference data for an evaluation of the hazards to health caused by PM inhalation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    (Q + aa')^{-1} = Q^{-1} - Q^{-1}aa'Q^{-1}/(1 + a'Q^{-1}a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... 2. The First-Order Moving Average Model... 3. Some Approaches to the Iterative... the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and

  15. Forecasting Instability Indicators in the Horn of Africa

    DTIC Science & Technology

    2008-03-01

    further than 2 (Makridakis et al., 1983, 359). Autoregressive Integrated Moving Average (ARIMA) Model. Similar to the ARMA model except for... stationary process. ARIMA models are described as ARIMA(p,d,q), where p is the order of the autoregressive process, d is the degree of the... differencing process, and q is the order of the moving average process. The ARMA(1,1) model shown above is equivalent to an ARIMA(1,0,1) model. An ARIMA
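
    The notation can be illustrated with statsmodels: the differencing degree d folded into an ARIMA(p,d,q) fit on the levels is, apart from the handling of the constant term, the same as fitting the ARMA(p,q) part to the d-times differenced series. The synthetic series and orders below are purely illustrative.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        y = np.cumsum(np.random.default_rng(1).normal(size=300))   # synthetic non-stationary series

        # ARMA(1,1) fitted to the first differences ...
        arma_on_diff = ARIMA(np.diff(y), order=(1, 0, 1), trend="n").fit()
        # ... is essentially the same model as ARIMA(1,1,1) fitted to the levels (d = 1).
        arima_levels = ARIMA(y, order=(1, 1, 1)).fit()

        print(arma_on_diff.params)   # AR and MA coefficients should nearly coincide
        print(arima_levels.params)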

  16. Decadal Trends of Atlantic Basin Tropical Cyclones (1950-1999)

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Ten-year moving averages of the seasonal rates for 'named storms,' tropical storms, hurricanes, and major (or intense) hurricanes in the Atlantic basin suggest that the present epoch is one of enhanced activity, marked by seasonal rates typically equal to or above respective long-term median rates. As an example, the 10-year moving average of the seasonal rates for named storms is now higher than for any previous year over the past 50 years, measuring 10.65 in 1994, or 2.65 units higher than its median rate of 8. Also, the 10-year moving average for tropical storms has more than doubled, from 2.15 in 1955 to 4.60 in 1992, with 16 of the past 20 years having a seasonal rate of three or more (the median rate). For hurricanes and major hurricanes, their respective 10-year moving averages turned upward, rising above long-term median rates (5.5 and 2, respectively) in 1992, a response to the abrupt increase in seasonal rates that occurred in 1995. Taken together, the outlook for future hurricane seasons is for all categories of Atlantic basin tropical cyclones to have seasonal rates at levels equal to or above long-term median rates, especially during non-El Nino-related seasons. Only during El Nino-related seasons does it appear likely that seasonal rates might be slightly diminished.

  17. Tree-level imputation techniques to estimate current plot-level attributes in the Pacific Northwest using paneled inventory data

    Treesearch

    Bianca Eskelson; Temesgen Hailemariam; Tara Barrett

    2009-01-01

    The Forest Inventory and Analysis program (FIA) of the US Forest Service conducts a nationwide annual inventory. One panel (20% or 10% of all plots in the eastern and western United States, respectively) is measured each year. The precision of the estimates for any given year from one panel is low, and the moving average (MA), which is considered to be the default...

  18. Rapid and safe learning of robotic gastrectomy for gastric cancer: multidimensional analysis in a comparison with laparoscopic gastrectomy.

    PubMed

    Kim, H-I; Park, M S; Song, K J; Woo, Y; Hyung, W J

    2014-10-01

    The learning curve of robotic gastrectomy has not yet been evaluated in comparison with the laparoscopic approach. We compared the learning curves of robotic gastrectomy and laparoscopic gastrectomy based on operation time and surgical success. We analyzed 172 robotic and 481 laparoscopic distal gastrectomies performed by a single surgeon from May 2003 to April 2009. The operation time was analyzed using a moving average and non-linear regression analysis. Surgical success was evaluated by a cumulative sum plot with a target failure rate of 10%. Surgical failure was defined as laparoscopic or open conversion, insufficient lymph node harvest for staging, resection margin involvement, postoperative morbidity, and mortality. Moving average and non-linear regression analyses indicated a stable state for operation time at 95 and 121 cases in robotic gastrectomy, and at 270 and 262 cases in laparoscopic gastrectomy, respectively. The cumulative sum plot identified no cut-off point for surgical success in robotic gastrectomy and 80 cases in laparoscopic gastrectomy. Excluding the initial 148 laparoscopic gastrectomies that were performed before the first robotic gastrectomy, the two groups showed a similar number of cases to reach steady state in operation time, and showed no cut-off point in the analysis of surgical success. The experience of laparoscopic surgery could affect the learning process of robotic gastrectomy. An experienced laparoscopic surgeon requires fewer cases of robotic gastrectomy to reach steady state. Moreover, the surgical outcomes of robotic gastrectomy were satisfactory. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Electromyographic analysis of trunk and hip muscles during resisted lateral band walking.

    PubMed

    Youdas, James W; Foley, Brooke M; Kruger, BreAnna L; Mangus, Jessica M; Tortorelli, Alis M; Madson, Timothy J; Hollman, John H

    2013-02-01

    The purpose of this study was to simultaneously quantify bilateral activation/recruitment levels (% maximum voluntary isometric contraction [MVIC]) for trunk and hip musculature on both moving and stance lower limbs during resisted lateral band walking. Differential electromyographic (EMG) activity was recorded in neutral, internal, and external hip rotation in 21 healthy participants. EMG signals were collected with DE-3.1 double-differential surface electrodes at a sampling frequency of 1,000 Hz during three consecutive lateral steps. Gluteus medius average EMG activation was greater (p = 0.001) for the stance limb (52 SD 18% MVIC) than moving limb (35 SD 16% MVIC). Gluteus maximus EMG activation was greater (p = 0.002) for the stance limb (19 SD 13% MVIC) than moving limb (13 SD 9% MVIC). Erector spinae activation was greater (p = 0.007) in hip internal rotation (30 SD 13% MVIC) than neutral rotation (26 SD 10% MVIC) and the moving limb (31 SD 15% MVIC) was greater (p = 0.039) than the stance limb (23 SD 11% MVIC). Gluteus medius and maximus muscle activation were greater on the stance limb than moving limb during resisted lateral band walking. Therefore, clinicians may wish to consider using the involved limb as the stance limb during resisted lateral band walking exercise.

  20. Effects of weather anomalies on the intellectual performance: Chess mistakes of the world top-ranked players

    NASA Astrophysics Data System (ADS)

    Mika, J.; Verőci, Zs.; Fülöp, A.; Hirsch, T.; Dúll, A.

    2009-04-01

    Weather disturbances such as fronts influence the human biorhythm: our biological balance is disturbed and adaptation mechanisms are impaired. Our working hypothesis is that even the best chess players of the world are no exception to this rule. Because their moves on the chess board, as well as the best possible moves they failed to make, can be assessed objectively by computer, the game can be used as a model of intellectual performance. By the time this abstract was written, 580 wrong chess moves had been selected using a threshold of more than 1/3 of a pawn lost, i.e. the minimum difference between the computer assessment of the position after the best possible move and after the move actually played. (All moves by both sides in roughly the same number of games were checked, i.e. over 35,000 moves were assessed.) The moves were assessed with the MegaDatabase 2006 database (ChessBase, Hamburg), Chess Informant Expert (Chess Informant, Beograd) and the program ChessBase 9.0 together with the engines Fritz 10, Rybka 2.3, and Junior 10. First, the games of the World Chess Champions Karpov, Kasparov, Kramnik and Anand in traditional strong tournaments, of category 19 and above (average rating above 2701 Elo points), were examined. We further selected the games of the top-ranked players of the world between 2005 and 2008. This selection is explained by the fact that these players are unlikely to make wrong moves simply through a lack of chess understanding and, as full professionals, they minimise non-weather disturbing circumstances (e.g. poor sleep before the game). Their moves were classified as (i) very wrong moves, with an assessment difference of more than 3.0 (i.e. the unforced loss of a knight or a bishop), (ii) very weak moves, with an assessment of 1.0-3.0 (i.e. an unforced loss between one pawn and one bishop/knight), and (iii) weak moves, with an assessed loss of less than 1.0, i.e. an unforced loss of less than one pawn. These new data on mental performance are statistically compared with a common set of diurnal meteorological parameters, including various near-surface and lower-troposphere temperatures, sea-level pressure, relative topographies, precipitation amount and occurrence (duration), and wind speed. The data and the aerological fields are retrieved from the ECMWF ERA-40 reanalysis (until 2002) and the ECMWF operational analysis (after 2002) for the date and site of each individual mistake. According to our preliminary results, the wrong moves fall in the lower- or higher-than-average parts of the diurnal mean temperature distribution. Although care is needed because of the well-known bimodal distribution of temperature (if no seasonal correction is performed), even after accounting for this the best players make mistakes more frequently when temperatures are higher or lower than normal. Another preliminary finding is that a decreasing tendency of the RT850/500 hPa relative topography also indicates an increase in wrong and very wrong moves. After this analysis, the results will be compared with the better-known empirical paradigms of medical meteorology and experimental psychology.

  1. Automated digital magnetofluidics

    NASA Astrophysics Data System (ADS)

    Schneider, J.; Garcia, A. A.; Marquez, M.

    2008-08-01

    Drops can be moved in complex patterns on superhydrophobic surfaces using a reconfigured computer-controlled x-y metrology stage with a high degree of accuracy, flexibility, and reconfigurability. The stage employs a DMC-4030 controller which has a RISC-based, clock multiplying processor with DSP functions, accepting encoder inputs up to 22 MHz, provides servo update rates as high as 32 kHz, and processes commands at rates as fast as 40 milliseconds. A 6.35 mm diameter cylindrical NdFeB magnet is translated by the stage causing water drops to move by the action of induced magnetization of coated iron microspheres that remain in the drop and are attracted to the rare earth magnet through digital magnetofluidics. Water drops are easily moved in complex patterns in automated digital magnetofluidics at an average speed of 2.8 cm/s over a superhydrophobic polyethylene surface created by solvent casting. With additional components, some potential uses for this automated microfluidic system include characterization of superhydrophobic surfaces, water quality analysis, and medical diagnostics.

  2. Motile and non-motile sperm diagnostic manipulation using optoelectronic tweezers.

    PubMed

    Ohta, Aaron T; Garcia, Maurice; Valley, Justin K; Banie, Lia; Hsu, Hsan-Yin; Jamshidi, Arash; Neale, Steven L; Lue, Tom; Wu, Ming C

    2010-12-07

    Optoelectronic tweezers (OET) were used to manipulate human spermatozoa to determine whether their response to OET predicts sperm viability among non-motile sperm. We review the electro-physical basis for how live and dead human spermatozoa respond to OET. The maximal velocity at which non-motile spermatozoa could be induced to move by attraction to or repulsion from a moving OET field was measured. Viable sperm are attracted to OET fields and can be induced to move at an average maximal velocity of 8.8 ± 4.2 µm s(-1), while non-viable sperm are repelled by OET fields and are induced to move at an average maximal velocity of -0.8 ± 1.0 µm s(-1). Manipulation of the sperm using OET does not appear to result in increased DNA fragmentation, making this a potential method by which to identify viable non-motile sperm for assisted reproductive technologies.

  3. Transport of the moving barrier driven by chiral active particles

    NASA Astrophysics Data System (ADS)

    Liao, Jing-jing; Huang, Xiao-qun; Ai, Bao-quan

    2018-03-01

    Transport of a moving V-shaped barrier exposed to a bath of chiral active particles is investigated in a two-dimensional channel. Due to the chirality of active particles and the transversal asymmetry of the barrier position, active particles can power and steer the directed transport of the barrier in the longitudinal direction. The transport of the barrier is determined by the chirality of active particles. The moving barrier and active particles move in the opposite directions. The average velocity of the barrier is much larger than that of active particles. There exist optimal parameters (the chirality, the self-propulsion speed, the packing fraction, and the channel width) at which the average velocity of the barrier takes its maximal value. In particular, tailoring the geometry of the barrier and the active concentration provides novel strategies to control the transport properties of micro-objects or cargoes in an active medium.

  4. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm of time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 272 is considered as an example.

  5. [A new kinematics method of determing elbow rotation axis and evaluation of its feasibility].

    PubMed

    Han, W; Song, J; Wang, G Z; Ding, H; Li, G S; Gong, M Q; Jiang, X Y; Wang, M Y

    2016-04-18

    To study a new positioning method for the rotation axis of elbow external fixation and to evaluate its feasibility. Four normal adult volunteers and six Sawbone elbow models were included in this experiment. Kinematic data from five elbow flexions were collected for each subject with an optical positioning system. The rotation axes of the elbow joints were fitted by the least-squares method, and the kinematic data and fitting results were displayed visually. From the fitting results, the average moving planes and rotation axes were calculated; thus, the rotation axes of the new kinematic method were obtained. Using standard clinical methods, the entrance and exit points of the rotation axes of the six Sawbone elbow models were located under X-ray, and Kirschner wires were placed to represent the rotation axes obtained with the traditional positioning method. The entrance-point deviation, the exit-point deviation, and the angle deviation between the two kinds of located rotation axes were then compared. For the volunteers, the indicators representing the circularity and coplanarity of each volunteer's elbow flexion trajectory were both about 1 mm. All distance deviations of the moving axes from the average moving rotation axes of the five volunteers were less than 3 mm, and all angle deviations were less than 5°. For the six Sawbone models, the average entrance-point deviation, average exit-point deviation and average angle deviation between the two rotation axes determined by the two positioning methods were 1.6972 mm, 1.8383 mm and 1.3217°, respectively. All deviations were very small and within an acceptable range for clinical practice. The values representing the circularity and coplanarity of the volunteers' single-curvature elbow movement trajectories are very small, showing that single-curvature elbow movement can be regarded as approximately fixed-axis movement. The new method can replace the traditional method in accuracy and can make up for the deficiencies of the traditional fixed-axis method.

  6. Focus on Teacher Salaries: An Update on Average Salaries and Recent Legislative Actions in the SREB States.

    ERIC Educational Resources Information Center

    Gaines, Gale F.

    Focused state efforts have helped teacher salaries in Southern Regional Education Board (SREB) states move toward the national average. Preliminary 2000-01 estimates put SREB's average teacher salary at its highest point in 22 years compared to the national average. The SREB average teacher salary is approximately 90 percent of the national…

  7. Granger causality for state-space models

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Seth, Anil K.

    2015-04-01

    Granger causality has long been a prominent method for inferring causal interactions between stochastic variables for a broad range of complex physical systems. However, it has been recognized that a moving average (MA) component in the data presents a serious confound to Granger causal analysis, as routinely performed via autoregressive (AR) modeling. We solve this problem by demonstrating that Granger causality may be calculated simply and efficiently from the parameters of a state-space (SS) model. Since SS models are equivalent to autoregressive moving average models, Granger causality estimated in this fashion is not degraded by the presence of a MA component. This is of particular significance when the data has been filtered, downsampled, observed with noise, or is a subprocess of a higher dimensional process, since all of these operations—commonplace in application domains as diverse as climate science, econometrics, and the neurosciences—induce a MA component. We show how Granger causality, conditional and unconditional, in both time and frequency domains, may be calculated directly from SS model parameters via solution of a discrete algebraic Riccati equation. Numerical simulations demonstrate that Granger causality estimators thus derived have greater statistical power and smaller bias than AR estimators. We also discuss how the SS approach facilitates relaxation of the assumptions of linearity, stationarity, and homoscedasticity underlying current AR methods, thus opening up potentially significant new areas of research in Granger causal analysis.

  8. The impact of using weight estimated from mammographic images vs. self-reported weight on breast cancer risk calculation

    NASA Astrophysics Data System (ADS)

    Nair, Kalyani P.; Harkness, Elaine F.; Gadde, Soujanye; Lim, Yit Y.; Maxwell, Anthony J.; Moschidis, Emmanouil; Foden, Philip; Cuzick, Jack; Brentnall, Adam; Evans, D. Gareth; Howell, Anthony; Astley, Susan M.

    2017-03-01

    Personalised breast screening requires assessment of individual risk of breast cancer, of which one contributory factor is weight. Self-reported weight has been used for this purpose, but may be unreliable. We explore the use of volume of fat in the breast, measured from digital mammograms. Volumetric breast density measurements were used to determine the volume of fat in the breasts of 40,431 women taking part in the Predicting Risk Of Cancer At Screening (PROCAS) study. Tyrer-Cuzick risk using self-reported weight was calculated for each woman. Weight was also estimated from the relationship between self-reported weight and breast fat volume in the cohort, and used to re-calculate Tyrer-Cuzick risk. Women were assigned to risk categories according to 10 year risk (below average <2%, average 2-3.49%, above average 3.5-4.99%, moderate 5-7.99%, high >=8%) and the original and re-calculated Tyrer-Cuzick risks were compared. Of the 716 women diagnosed with breast cancer during the study, 15 (2.1%) moved into a lower risk category, and 37 (5.2%) moved into a higher category when using weight estimated from breast fat volume. Of the 39,715 women without a cancer diagnosis, 1009 (2.5%) moved into a lower risk category, and 1721 (4.3%) into a higher risk category. The majority of changes were between below average and average risk categories (38.5% of those with a cancer diagnosis, and 34.6% of those without). No individual moved more than one risk group. Automated breast fat measures may provide a suitable alternative to self-reported weight for risk assessment in personalized screening.

  9. Forecast of Frost Days Based on Monthly Temperatures

    NASA Astrophysics Data System (ADS)

    Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.

    2009-04-01

    Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. This paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain), based on the successive application of two models. The first is a stochastic model, an autoregressive integrated moving average (ARIMA), that forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature within a month. Three ARIMA models were identified for the analysed time series, with a seasonal period corresponding to one year. They share the same seasonal behavior (a moving average differenced model) and differ in the non-seasonal part: an autoregressive model (Model 1), a moving average differenced model (Model 2), and an autoregressive and moving average model (Model 3). At the same time, the results point out that minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month with a very similar standard deviation through the years. The standard deviation obtained for each station and each month could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
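
    A schematic of the two-step procedure (an ARIMA forecast of the monthly minimum-temperature statistic, followed by a normal-distribution conversion to a frost-day count) is sketched below; the ARIMA orders, the standard deviation and the file name are illustrative assumptions, not the study's fitted values.

        import numpy as np
        from scipy.stats import norm
        from statsmodels.tsa.arima.model import ARIMA

        # Hypothetical monthly series of the average minimum temperature at one station.
        tminav = np.loadtxt("station_monthly_tminav.txt")

        # Step 1: one-month-ahead forecast from a seasonal ARIMA (orders are illustrative).
        forecast = ARIMA(tminav, order=(1, 0, 1), seasonal_order=(0, 1, 1, 12)).fit().forecast(1)[0]

        # Step 2: convert to frost days, assuming daily minima ~ Normal(forecast, sigma_daily).
        sigma_daily = 3.2                                   # assumed within-month st. dev. (deg C)
        p_frost = norm.cdf(0.0, loc=forecast, scale=sigma_daily)
        expected_frost_days = 30 * p_frost
        print(f"forecast tminav = {forecast:.1f} C, expected frost days = {expected_frost_days:.1f}")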

  10. Volatility-constrained multifractal detrended cross-correlation analysis: Cross-correlation among Mainland China, US, and Hong Kong stock markets

    NASA Astrophysics Data System (ADS)

    Cao, Guangxi; Zhang, Minjia; Li, Qingchen

    2017-04-01

    This study focuses on multifractal detrended cross-correlation analysis of the different volatility intervals of Mainland China, US, and Hong Kong stock markets. A volatility-constrained multifractal detrended cross-correlation analysis (VC-MF-DCCA) method is proposed to study the volatility conductivity of Mainland China, US, and Hong Kong stock markets. Empirical results indicate that fluctuation may be related to important activities in real markets. The Hang Seng Index (HSI) stock market is more influential than the Shanghai Composite Index (SCI) stock market. Furthermore, the SCI stock market is more influential than the Dow Jones Industrial Average stock market. The conductivity between the HSI and SCI stock markets is the strongest. HSI was the most influential market in the large fluctuation interval of 1991 to 2014. The autoregressive fractionally integrated moving average method is used to verify the validity of VC-MF-DCCA. Results show that VC-MF-DCCA is effective.

  11. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
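
    The headline computation can be sketched in a few lines: build the economic misery index as inflation plus unemployment, take trailing moving averages of different lengths, and check which window correlates best with the literary misery series. The file and column names are hypothetical.

        import pandas as pd

        # Hypothetical yearly table with columns: literary, inflation, unemployment.
        df = pd.read_csv("misery_indices.csv", index_col="year")
        economic = df["inflation"] + df["unemployment"]     # the economic misery index

        for window in range(5, 16):
            trailing = economic.rolling(window).mean().shift(1)   # average of the previous `window` years
            r = df["literary"].corr(trailing)
            print(window, round(r, 3))          # the paper reports the best fit near 11 years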

  12. Up-down Asymmetries in Speed Perception

    NASA Technical Reports Server (NTRS)

    Thompson, Peter; Stone, Leland S.

    1997-01-01

    We compared speed matches for pairs of stimuli that moved in opposite directions (upward and downward). Stimuli were elliptical patches (2 deg horizontally by 1 deg vertically) of horizontal sinusoidal gratings with a spatial frequency of 2 cycles/deg. Two sequential 380-msec presentations were compared. One of each pair of gratings (the standard) moved at 4 Hz (2 deg/sec); the other (the test) moved at a rate determined by a simple up-down staircase. The point of subjectively equal speed was calculated from the average of the last eight reversals. The task was to fixate a central point and to determine which one of the pair appeared to move faster. Eight of 10 observers perceived the upward-drifting grating as moving faster than a grating moving downward but otherwise identical. On average (N = 10), when the standard moved downward, it was matched by a test moving upward at 94.7+/-1.7(SE)% of the standard speed, and when the standard moved upward it was matched by a test moving downward at 105.1+/-2.3(SE)% of the standard speed. Extending this paradigm over a range of spatial (1.5 to 13.5 c/d) and temporal (1.5 to 13.5 Hz) frequencies, preliminary results (N = 4) suggest that, under the conditions of our experiment, upward motion is seen as faster than downward for speeds greater than approx. 1 deg/sec, but the effect appears to reverse at speeds below approx. 1 deg/sec, with downward motion perceived as faster. Given that an up-down asymmetry has been observed for the optokinetic response, both perceptual and oculomotor contributions to this phenomenon deserve exploration.
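
The staircase procedure and the point-of-subjective-equality (PSE) estimate from the last eight reversals can be sketched as follows; the simulated observer, step size and trial count are illustrative assumptions rather than the authors' exact settings.

```python
# Sketch: a 1-up/1-down staircase with the point of subjective equality (PSE)
# taken as the mean of the last eight reversals.  The simulated observer, step
# size and number of trials are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(9)
true_pse, slope = 1.05, 10.0      # simulated observer: true test/standard speed ratio at PSE
test, step = 1.30, 0.05           # starting test speed ratio and staircase step
reversals, last_faster = [], None

for _ in range(80):
    p_faster = 1.0 / (1.0 + np.exp(-slope * (test - true_pse)))
    faster = rng.random() < p_faster                 # observer says "test looked faster"
    if last_faster is not None and faster != last_faster:
        reversals.append(test)                       # a change of direction is a reversal
    test += -step if faster else step                # 1-up/1-down rule
    last_faster = faster

pse = np.mean(reversals[-8:])
print(f"estimated PSE = {pse:.3f} (true value {true_pse})")
```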

  13. $1.8 Million and counting: how volatile agent education has decreased our spending $1000 per day.

    PubMed

    Miller, Scott A; Aschenbrenner, Carol A; Traunero, Justin R; Bauman, Loren A; Lobell, Samuel S; Kelly, Jeffrey S; Reynolds, John E

    2016-12-01

    Volatile anesthetic agents comprise a substantial portion of every hospital's pharmacy budget. Challenged with an initiative to lower anesthetic drug expenditures, we developed an education-based intervention focused on reducing volatile anesthetic costs while preserving access to all available volatile anesthetics. When postintervention evaluation demonstrated a dramatic year-over-year reduction in volatile agent acquisition costs, we undertook a retrospective analysis of volatile anesthetic purchasing data using time series analysis to determine the impact of our educational initiative. We obtained detailed volatile anesthetic purchasing data from the Central Supply of Wake Forest Baptist Health from 2007 to 2014 and integrated these data with the time course of our educational intervention. Aggregate volatile anesthetic purchasing data were analyzed for 7 consecutive fiscal years. The educational initiative emphasized tissue partition coefficients of volatile anesthetics in adipose tissue and muscle and their impact on case management. We used an interrupted time series analysis of monthly cost per unit data using autoregressive integrated moving average modeling, with the monthly cost per unit being the amount spent per bottle of anesthetic agent per month. The cost per unit decreased significantly after the intervention (t=-6.73, P<.001). The autoregressive integrated moving average model predicted that the average cost per unit decreased $48 after the intervention, with 95% confidence interval of $34 to $62. As evident from the data, the purchasing of desflurane and sevoflurane decreased, whereas that of isoflurane increased. An educational initiative focused solely on the selection of volatile anesthetic agent per case significantly reduced volatile anesthetic expense at a tertiary medical center. This approach appears promising for application in other hospitals in the rapidly evolving, value-added health care environment. We were able to accomplish this with instruction on tissue partition coefficients and each agent's individual cost per MAC-hour delivered. Copyright © 2016 Elsevier Inc. All rights reserved.
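
A hedged sketch of an interrupted time series analysis in the spirit of the study: an ARIMA-type model with a post-intervention step regressor estimates the change in monthly cost per unit. The series, model order and intervention month below are simulated, not the hospital's data.

```python
# Sketch: interrupted time series analysis of monthly cost-per-unit using an
# ARIMA-type model with a post-intervention step regressor.  The series, model
# order and intervention month are simulated assumptions, not the hospital data.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
n_months, intervention_month = 84, 48
step = (np.arange(n_months) >= intervention_month).astype(float)   # 0 before, 1 after
cost_per_unit = 120 + np.cumsum(rng.normal(0, 1, n_months)) - 48 * step

# ARIMA(1,0,0) errors with the intervention step as an exogenous regressor;
# the coefficient on `step` estimates the post-intervention change in cost.
fit = SARIMAX(cost_per_unit, exog=step, order=(1, 0, 0), trend="c").fit(disp=False)
print(fit.summary().tables[1])
```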

  14. On nonstationarity and antipersistency in global temperature series

    NASA Astrophysics Data System (ADS)

    KäRner, O.

    2002-10-01

    Statistical analysis is carried out for satellite-based global daily tropospheric and stratospheric temperature anomaly and solar irradiance data sets. Behavior of the series appears to be nonstationary with stationary daily increments. Estimating long-range dependence between the increments reveals a remarkable difference between the two temperature series. Global average tropospheric temperature anomaly behaves similarly to the solar irradiance anomaly. Their daily increments show antipersistency for scales longer than 2 months. The property points at a cumulative negative feedback in the Earth climate system governing the tropospheric variability during the last 22 years. The result emphasizes a dominating role of the solar irradiance variability in variations of the tropospheric temperature and gives no support to the theory of anthropogenic climate change. The global average stratospheric temperature anomaly proceeds like a 1-dim random walk at least up to 11 years, allowing good presentation by means of the autoregressive integrated moving average (ARIMA) models for monthly series.

  15. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    PubMed Central

    Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process. PMID:26977450

  16. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    PubMed

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process.

  17. Profitability of simple technical trading rules of Chinese stock exchange indexes

    NASA Astrophysics Data System (ADS)

    Zhu, Hong; Jiang, Zhi-Qiang; Li, Sai-Ping; Zhou, Wei-Xing

    2015-12-01

    Although technical trading rules have been widely used by practitioners in financial markets, their profitability still remains controversial. We here investigate the profitability of moving average (MA) and trading range break (TRB) rules by using the Shanghai Stock Exchange Composite Index (SHCI) from May 21, 1992 through December 31, 2013 and Shenzhen Stock Exchange Component Index (SZCI) from April 3, 1991 through December 31, 2013. The t-test is adopted to check whether the mean returns which are conditioned on the trading signals are significantly different from unconditioned returns and whether the mean returns conditioned on the buy signals are significantly different from the mean returns conditioned on the sell signals. We find that TRB rules outperform MA rules and short-term variable moving average (VMA) rules outperform long-term VMA rules. By applying White's Reality Check test and accounting for the data snooping effects, we find that the best trading rule outperforms the buy-and-hold strategy when transaction costs are not taken into consideration. Once transaction costs are included, trading profits will be eliminated completely. Our analysis suggests that simple trading rules like MA and TRB cannot beat the standard buy-and-hold strategy for the Chinese stock exchange indexes.
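
A minimal sketch of a moving-average (MA) style rule and the conditional-return comparison described above, using simulated prices; the window lengths and t-test setup are illustrative, and data-snooping adjustments such as White's Reality Check are not included.

```python
# Sketch: buy/sell signals from a simple moving-average (MA) crossover rule and a
# t-test comparing mean returns conditioned on buy vs. sell signals.
# Prices are simulated; window lengths are illustrative choices.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))   # synthetic index level
returns = np.diff(np.log(prices))

short_w, long_w = 5, 50
short_ma = np.convolve(prices, np.ones(short_w) / short_w, mode="valid")
long_ma = np.convolve(prices, np.ones(long_w) / long_w, mode="valid")
short_ma = short_ma[len(short_ma) - len(long_ma):]             # align the two series

signal = np.where(short_ma > long_ma, 1, -1)[:-1]              # 1 = buy, -1 = sell (drop last: no next-day return)
next_ret = returns[long_w - 1:]                                # return on the day after each signal
buy_ret, sell_ret = next_ret[signal == 1], next_ret[signal == -1]
t_stat, p_val = ttest_ind(buy_ret, sell_ret, equal_var=False)
print(f"mean buy = {buy_ret.mean():.5f}, mean sell = {sell_ret.mean():.5f}, p = {p_val:.3f}")
```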

  18. Proceedings of the Conference on the Design of Experiments in Army Research Development and Testing (32nd)

    DTIC Science & Technology

    1987-06-01

    number of series among the 63 which were identified as a particular ARIMA form and were "best" modeled by a particular technique. Figure 1 illustrates a...th time from xe's. The integrated autoregressive-moving average model, denoted by ARIMA (p,d,q), is a result of combining a d-th differencing process...Experiments, (4) Data Analysis and Modeling, (5) Theory and Probabilistic Inference, (6) Fuzzy Statistics, (7) Forecasting and Prediction, (8) Small Sample

  19. Nodding motions of accretion rings and disks - A short-term period in SS 433

    NASA Technical Reports Server (NTRS)

    Katz, J. I.; Anderson, S. F.; Grandi, S. A.; Margon, B.

    1982-01-01

    It is pointed out that accretion disks and rings in mass transfer binaries have been observed spectroscopically and calculated theoretically for many years. The present investigation is partly based on the availability of several years of spectroscopic observations of the Doppler shifts of the moving lines in SS433. A formalism is presented to compute frequencies and amplitudes of short-term 'nodding' motions in precessing accretion disks in close binary systems. This formalism is applied to an analysis of the moving-line Doppler shifts in SS433. The 35d X-ray cycle of Hercules X-1 is also discussed. In the considered model, the companion star exerts a gravitational torque on the disk rim. Averaged over the binary orbit, this yields a steady torque which results in the mean driven counterprecession of the disk.

  20. Kinesin-microtubule interactions during gliding assays under magnetic force

    NASA Astrophysics Data System (ADS)

    Fallesen, Todd L.

    Conventional kinesin is a motor protein capable of converting the chemical energy of ATP into mechanical work. In the cell, this is used to actively transport vesicles through the intracellular matrix. The relationship between the velocity of a single kinesin, as it works against an increasing opposing load, has been well studied. The relationship between the velocity of a cargo being moved by multiple kinesin motors against an opposing load has not been established. A major difficulty in determining the force-velocity relationship for multiple motors is determining the number of motors that are moving a cargo against an opposing load. Here I report on a novel method for detaching microtubules bound to a superparamagnetic bead from kinesin anchor points in an upside-down gliding assay using a uniform magnetic field perpendicular to the direction of microtubule travel. The anchor points are presumably kinesin motors bound to the surface over which the microtubules are gliding. Determining the distance between anchor points, d, allows the calculation of the average number of kinesins, n, that are moving a microtubule. It is also possible to calculate the fraction of motors able to move microtubules, which is determined to be ~5%. Using a uniform magnetic field parallel to the direction of microtubule travel, it is possible to impart a uniform force on a microtubule bound to a superparamagnetic bead. We are able to decrease the average velocity of microtubules driven by multiple kinesin motors moving against an opposing force. Using the average number of kinesins on a microtubule, we estimate that there are an average of 2-7 kinesins acting against the opposing force. By fitting Gaussians to the smoothed distributions of microtubule velocities acting against an opposing force, multiple velocities are seen, presumably for n, n-1, n-2, etc. motors acting together. When these velocities are scaled for the average number of motors on a microtubule, the force-velocity relationship for multiple motors follows the same trend as for one motor, supporting the hypothesis that multiple motors share the load.

  1. Class III correction using an inter-arch spring-loaded module

    PubMed Central

    2014-01-01

    Background A retrospective study was conducted to determine the cephalometric changes in a group of Class III patients treated with the inter-arch spring-loaded module (CS2000®, Dynaflex, St. Ann, MO, USA). Methods Thirty Caucasian patients (15 males, 15 females) with an average pre-treatment age of 9.6 years were treated consecutively with this appliance and compared with a control group of subjects from the Bolton-Brush Study who were matched in age, gender, and craniofacial morphology to the treatment group. Lateral cephalograms were taken before treatment and after removal of the CS2000® appliance. The treatment effects of the CS2000® appliance were calculated by subtracting the changes due to growth (control group) from the treatment changes. Results All patients were improved to a Class I dental arch relationship with a positive overjet. Significant sagittal, vertical, and angular changes were found between the pre- and post-treatment radiographs. With an average treatment time of 1.3 years, the maxillary base moved forward by 0.8 mm, while the mandibular base moved backward by 2.8 mm together with improvements in the ANB and Wits measurements. The maxillary incisor moved forward by 1.3 mm and the mandibular incisor moved forward by 1.0 mm. The maxillary molar moved forward by 1.0 mm while the mandibular molar moved backward by 0.6 mm. The average overjet correction was 3.9 mm and 92% of the correction was due to skeletal contribution and 8% was due to dental contribution. The average molar correction was 5.2 mm and 69% of the correction was due to skeletal contribution and 31% was due to dental contribution. Conclusions Mild to moderate Class III malocclusion can be corrected using the inter-arch spring-loaded appliance with minimal patient compliance. The overjet correction was contributed by forward movement of the maxilla, backward and downward movement of the mandible, and proclination of the maxillary incisors. The molar relationship was corrected by mesialization of the maxillary molars, distalization of the mandibular molars together with a rotation of the occlusal plane. PMID:24934153

  2. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  3. Studies on the dynamic stability of an axially moving nanobeam based on the nonlocal strain gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Shen, Huoming; Zhang, Bo; Liu, Juan

    2018-06-01

    In this paper, we studied the parametric resonance of an axially moving viscoelastic nanobeam with varying velocity. Based on the nonlocal strain gradient theory, we established the transverse vibration equation of the axially moving nanobeam and the corresponding boundary conditions. By applying the averaging method, we obtained a set of autonomous ordinary differential equations for the cases in which the excitation frequency of the moving parameters is twice an intrinsic frequency or near the sum of certain second-order intrinsic frequencies. On the plane of parametric excitation frequency and excitation amplitude, we obtain the instability region generated by the resonance, and through numerical simulation we analyze the influence of the scale effect and system parameters on the instability region. The results indicate that the viscoelastic damping shrinks the resonance instability region, and that the average velocity and stiffness shift the instability region to the left- and right-hand sides. The scale effect of the system is also pronounced: the nonlocal parameter exhibits not only a stiffness-softening effect but also a damping-weakening effect, while the material characteristic length parameter exhibits a stiffness-hardening effect and a damping-reinforcement effect.

  4. Analysis of forecasting and inventory control of raw material supplies in PT INDAC INT’L

    NASA Astrophysics Data System (ADS)

    Lesmana, E.; Subartini, B.; Riaman; Jabar, D. A.

    2018-03-01

    This study discusses forecasting of carbon electrode sales data at PT. INDAC INT'L using the Winters and double moving average methods, while the amount of inventory and the cost required to order raw material for carbon electrodes in the next period are predicted using the Economic Order Quantity (EOQ) model. The error analysis (MAE, MSE, and MAPE) for the next period shows that the Winters method is the better forecasting method for forecasting sales of carbon electrode products. PT. INDAC INT'L is therefore advised to stock products in line with the sales forecast produced by the Winters method.
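
For reference, the EOQ calculation mentioned above takes the familiar closed form Q* = sqrt(2DS/H); the sketch below uses assumed demand and cost figures, not the company's actual data.

```python
# Sketch: the classic Economic Order Quantity (EOQ) calculation, Q* = sqrt(2DS/H),
# with assumed demand and cost figures (not the company's actual data).
import math

annual_demand = 12000        # units of carbon electrode per year (assumed)
order_cost = 50.0            # fixed cost per order (assumed)
holding_cost = 2.5           # holding cost per unit per year (assumed)

eoq = math.sqrt(2 * annual_demand * order_cost / holding_cost)
orders_per_year = annual_demand / eoq
total_cost = order_cost * orders_per_year + holding_cost * eoq / 2
print(f"EOQ = {eoq:.0f} units, about {orders_per_year:.1f} orders/year, ordering+holding cost = {total_cost:.0f}")
```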

  5. A strategy to decide whether to move the last case of the day in an operating room to another empty operating room to decrease overtime labor costs.

    PubMed

    Dexter, F

    2000-10-01

    We examined how to program an operating room (OR) information system to assist the OR manager in deciding whether to move the last case of the day in one OR to another OR that is empty to decrease overtime labor costs. We first developed a statistical strategy to predict whether moving the case would decrease overtime labor costs for first shift nurses and anesthesia providers. The strategy was based on using historical case duration data stored in a surgical services information system. Second, we estimated the incremental overtime labor costs achieved if our strategy was used for moving cases versus movement of cases by an OR manager who knew in advance exactly how long each case would last. We found that if our strategy was used to decide whether to move cases, then depending on parameter values, only 2.0 to 4.3 more min of overtime would be required per case than if the OR manager had perfect retrospective knowledge of case durations. The use of other information technologies to assist in the decision of whether to move a case, such as real-time patient tracking information systems, closed-circuit cameras, or graphical airport-style displays, can, on average, reduce overtime by no more than only 2 to 4 min per case that can be moved.

  6. Peak Running Intensity of International Rugby: Implications for Training Prescription.

    PubMed

    Delaney, Jace A; Thornton, Heidi R; Pryor, John F; Stewart, Andrew M; Dascombe, Ben J; Duthie, Grant M

    2017-09-01

    To quantify the duration and position-specific peak running intensities of international rugby union for the prescription and monitoring of specific training methodologies. Global positioning systems (GPS) were used to assess the activity profile of 67 elite-level rugby union players from 2 nations across 33 international matches. A moving-average approach was used to identify the peak relative distance (m/min), average acceleration/deceleration (AveAcc; m/s2), and average metabolic power (Pmet) for a range of durations (1-10 min). Differences between positions and durations were described using a magnitude-based network. Peak running intensity increased as the length of the moving average decreased. There were likely small to moderate increases in relative distance and AveAcc for outside backs, halfbacks, and loose forwards compared with the tight 5 group across all moving-average durations (effect size [ES] = 0.27-1.00). Pmet demands were at least likely greater for outside backs and halfbacks than for the tight 5 (ES = 0.86-0.99). Halfbacks demonstrated the greatest relative distance and Pmet outputs but were similar to outside backs and loose forwards in AveAcc demands. The current study has presented a framework to describe the peak running intensities achieved during international rugby competition by position, which are considerably higher than previously reported whole-period averages. These data provide further knowledge of the peak activity profiles of international rugby competition, and this information can be used to assist coaches and practitioners in adequately preparing athletes for the most demanding periods of play.
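
A rough sketch of the moving-average approach to peak running intensity: take a per-second speed trace and report the highest rolling mean for each window length. The GPS trace here is simulated and the windows are illustrative.

```python
# Sketch: peak rolling-average running intensity (m/min) over 1-10 min windows
# from a per-second speed trace, mirroring the moving-average approach described.
# The GPS speed trace below is simulated.
import numpy as np

rng = np.random.default_rng(4)
speed_m_per_s = np.clip(rng.normal(2.0, 1.2, 80 * 60), 0, None)    # 80 min of 1 Hz data

def peak_rolling_mean(x, window_s):
    kernel = np.ones(window_s) / window_s
    return np.convolve(x, kernel, mode="valid").max()

for minutes in (1, 2, 5, 10):
    peak = peak_rolling_mean(speed_m_per_s, minutes * 60) * 60      # m/s -> m/min
    print(f"{minutes:2d}-min peak intensity: {peak:.0f} m/min")
```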

  7. Analysis of the learning curve for peroral endoscopic myotomy for esophageal achalasia: Single-center, two-operator experience.

    PubMed

    Lv, Houning; Zhao, Ningning; Zheng, Zhongqing; Wang, Tao; Yang, Fang; Jiang, Xihui; Lin, Lin; Sun, Chao; Wang, Bangmao

    2017-05-01

    Peroral endoscopic myotomy (POEM) has emerged as an advanced technique for the treatment of achalasia, and defining the learning curve is mandatory. From August 2011 to June 2014, two operators in our institution (A&B) carried out POEM on 35 and 33 consecutive patients, respectively. Moving average and cumulative sum (CUSUM) methods were used to analyze the POEM learning curve for corrected operative time (cOT), referring to duration of per centimeter myotomy. Additionally, perioperative outcomes were compared among distinct learning curve phases. Using the moving average method, cOT reached a plateau at the 29th case and at the 24th case for operators A and B, respectively. CUSUM analysis identified three phases: initial learning period (Phase 1), efficiency period (Phase 2) and mastery period (Phase 3). The relatively smooth state in the CUSUM graph occurred at the 26th case and at the 24th case for operators A and B, respectively. Mean cOT of distinct phases for operator A were 8.32, 5.20 and 3.97 min, whereas they were 5.99, 3.06 and 3.75 min for operator B, respectively. Eckardt score and lower esophageal sphincter pressure significantly decreased during the 1-year follow-up period. Data were comparable regarding patient characteristics and perioperative outcomes. This single-center study demonstrated that expert endoscopists with experience in esophageal endoscopic submucosal dissection reached a plateau in learning of POEM after approximately 25 cases. © 2016 Japan Gastroenterological Endoscopy Society.
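
A generic CUSUM of corrected operative time against a target value, sketched below, conveys the idea of locating the end of the learning phase; it is not the exact LC-CUSUM parameterisation (with predefined acceptable and unacceptable failure rates) used in the study.

```python
# Sketch: a simple CUSUM of corrected operative time (cOT) against a target value
# to visualise when performance settles.  Simulated cases; not the LC-CUSUM
# parameterisation (predefined acceptable/unacceptable rates) used in the study.
import numpy as np

rng = np.random.default_rng(5)
# Simulated cOT (min per cm of myotomy): slower early cases, faster later ones.
cot = np.concatenate([rng.normal(8, 1.5, 25), rng.normal(4, 0.8, 15)])

target = 5.0                                   # acceptable cOT (assumed)
cusum = np.cumsum(cot - target)                # rises while cases exceed the target
turning_point = int(np.argmax(cusum)) + 1      # case where the curve turns over
print(f"CUSUM peaks at case {turning_point}: roughly the end of the learning phase")
```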

  8. Geophysical Factor Resolving of Rainfall Mechanism for Super Typhoons by Using Multiple Spatiotemporal Components Analysis

    NASA Astrophysics Data System (ADS)

    Huang, Chien-Lin; Hsu, Nien-Sheng

    2016-04-01

    This study develops a novel methodology to resolve the geophysical causes of typhoon-induced rainfall, considering diverse dynamic co-evolution at multiple spatiotemporal components. The multi-order hidden patterns of the complex, chaotic hydrological process are detected to understand the fundamental laws of the rainfall mechanism. The discovered spatiotemporal features are used to develop a state-of-the-art descriptive statistical model for mechanism validation, modeling and further prediction during typhoons. Time series of hourly typhoon precipitation associated with different types of moving track, atmospheric field and landform are each passed through the signal-analysis process to qualify each type of rainfall cause and to quantify the corresponding degree of influence, based on the measured geophysical atmospheric-hydrological variables. The developed methodology is applied to Taiwan Island, which has a complex and diverse landform formation. The identified driving causes include: (1) cloud height above the ground surface; (2) the co-movement effect induced by the typhoon wind field with the monsoon; (3) stem capacity; (4) the interaction between the typhoon rain band and terrain; (5) the structural intensity variance of the typhoon; and (6) the integrated cloud density of the rain band. Results show that: (1) for a central maximum wind speed exceeding 51 m/sec, Causes (1) and (3) are the primary ones generating rainfall; (2) for typhoons moving toward a direction of 155° to 175°, Cause (2) is primary; (3) for directions of 90° to 155°, Cause (4) is primary; (4) for typhoons passing over mountain chains above 3500 m, Cause (5) is primary; and (5) for moving speeds lower than 18 km/hr, Cause (6) is primary. In addition, the multiple geophysical component-based precipitation model achieves an average accuracy of 81% and an average correlation coefficient (CC) of 0.732 over an average duration of 46 hours, improving predictability.

  9. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.

  10. Forecasting the incidence of tuberculosis in China using the seasonal auto-regressive integrated moving average (SARIMA) model.

    PubMed

    Mao, Qiang; Zhang, Kai; Yan, Wu; Cheng, Chaonan

    2018-05-02

    The aims of this study were to develop a forecasting model for the incidence of tuberculosis (TB) and analyze the seasonality of infections in China, and to provide a useful tool for formulating intervention programs and allocating medical resources. Data for the monthly incidence of TB from January 2004 to December 2015 were obtained from the National Scientific Data Sharing Platform for Population and Health (China). The Box-Jenkins method was applied to fit a seasonal auto-regressive integrated moving average (SARIMA) model to forecast the incidence of TB over the subsequent six months. During the study period of 144 months, 12,321,559 TB cases were reported in China, with an average monthly incidence of 6.4426 per 100,000 of the population. The monthly incidence of TB showed a clear 12-month cycle, and a seasonality with two peaks occurring in January and March and a trough in December. The best-fit model was SARIMA (1,0,0)(0,1,1)12, which demonstrated adequate information extraction (white noise test, p>0.05). Based on the analysis, the forecast incidences of TB from January to June 2016 were 6.6335, 4.7208, 5.8193, 5.5474, 5.2202 and 4.9156 per 100,000 of the population, respectively. Given the seasonal pattern of TB incidence in China, the SARIMA model is proposed as a useful tool for monitoring epidemics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
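
A minimal sketch of fitting the reported SARIMA(1,0,0)(0,1,1)12 specification and producing a six-month forecast with statsmodels; the monthly incidence series below is simulated rather than the national surveillance data.

```python
# Sketch: fitting the reported SARIMA(1,0,0)(0,1,1)12 specification to a monthly
# incidence series and forecasting six months ahead.  The series is simulated,
# not the national surveillance data.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
months = np.arange(144)                                       # Jan 2004 - Dec 2015
incidence = 6.4 + 1.2 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.3, 144)

model = SARIMAX(incidence, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
forecast = fit.get_forecast(steps=6)
print("AIC:", round(fit.aic, 1))
print("6-month forecast:", np.round(forecast.predicted_mean, 4))
```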

  11. Relevance analysis and short-term prediction of PM2.5 concentrations in Beijing based on multi-source data

    NASA Astrophysics Data System (ADS)

    Ni, X. Y.; Huang, H.; Du, W. P.

    2017-02-01

    The PM2.5 problem is proving to be a major public crisis and is of great public concern, requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we attempted to carry out relevance analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model relating PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed, maximum wind speed, and other pollutant concentration data, including CO, NO2, SO2, PM10) and social media data (microblog data) was proposed, based on the Multivariate Statistical Analysis method. The study found that, among these factors, the average wind speed, the concentrations of CO, NO2 and PM10, and the daily number of microblog entries with the key words 'Beijing; Air pollution' show high mathematical correlation with PM2.5 concentrations. The correlation analysis was further studied using a machine learning model, the Back Propagation Neural Network (hereinafter referred to as BPNN). It was found that the BPNN method performs better in correlation mining. Finally, an Autoregressive Integrated Moving Average (hereinafter referred to as ARIMA) time series model was applied to explore short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study is useful for helping realize real-time monitoring, analysis and pre-warning of PM2.5, and it also helps to broaden the application of big data and multi-source data mining methods.

  12. MOVES regional level sensitivity analysis

    DOT National Transportation Integrated Search

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  13. Distractor Interference during Smooth Pursuit Eye Movements

    ERIC Educational Resources Information Center

    Spering, Miriam; Gegenfurtner, Karl R.; Kerzel, Dirk

    2006-01-01

    When 2 targets for pursuit eye movements move in different directions, the eye velocity follows the vector average (S. G. Lisberger & V. P. Ferrera, 1997). The present study investigates the mechanisms of target selection when observers are instructed to follow a predefined horizontal target and to ignore a moving distractor stimulus. Results show…

  14. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

    Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.

  15. Long-Term PM2.5 Exposure and Respiratory, Cancer, and Cardiovascular Mortality in Older US Adults.

    PubMed

    Pun, Vivian C; Kazemiparkouhi, Fatemeh; Manjourides, Justin; Suh, Helen H

    2017-10-15

    The impact of chronic exposure to fine particulate matter (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm (PM2.5)) on respiratory disease and lung cancer mortality is poorly understood. In a cohort of 18.9 million Medicare beneficiaries (4.2 million deaths) living across the conterminous United States between 2000 and 2008, we examined the association between chronic PM2.5 exposure and cause-specific mortality. We evaluated confounding through adjustment for neighborhood behavioral covariates and decomposition of PM2.5 into 2 spatiotemporal scales. We found significantly positive associations of 12-month moving average PM2.5 exposures (per 10-μg/m3 increase) with respiratory, chronic obstructive pulmonary disease, and pneumonia mortality, with risk ratios ranging from 1.10 to 1.24. We also found significant PM2.5-associated elevated risks for cardiovascular and lung cancer mortality. Risk ratios generally increased with longer moving averages; for example, an elevation in 60-month moving average PM2.5 exposures was linked to 1.33 times the lung cancer mortality risk (95% confidence interval: 1.24, 1.40), as compared with 1.13 (95% confidence interval: 1.11, 1.15) for 12-month moving average exposures. Observed associations were robust in multivariable models, although evidence of unmeasured confounding remained. In this large cohort of US elderly, we provide important new evidence that long-term PM2.5 exposure is significantly related to increased mortality from respiratory disease, lung cancer, and cardiovascular disease. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Effect of air pollution on pediatric respiratory emergency room visits and hospital admissions.

    PubMed

    Farhat, S C L; Paulo, R L P; Shimoda, T M; Conceição, G M S; Lin, C A; Braga, A L F; Warth, M P N; Saldiva, P H N

    2005-02-01

    In order to assess the effect of air pollution on pediatric respiratory morbidity, we carried out a time series study using daily levels of PM10, SO2, NO2, ozone, and CO and daily numbers of pediatric respiratory emergency room visits and hospital admissions at the Children's Institute of the University of Sao Paulo Medical School, from August 1996 to August 1997. In this period there were 43,635 hospital emergency room visits, 4534 of which were due to lower respiratory tract disease. The total number of hospital admissions was 6785, 1021 of which were due to lower respiratory tract infectious and/or obstructive diseases. The three health end-points under investigation were the daily number of emergency room visits due to lower respiratory tract diseases, hospital admissions due to pneumonia, and hospital admissions due to asthma or bronchiolitis. Generalized additive Poisson regression models were fitted, controlling for smooth functions of time, temperature and humidity, and an indicator of weekdays. NO2 was positively associated with all outcomes. Interquartile range increases (65.04 microg/m3) in NO2 moving averages were associated with an 18.4% increase (95% confidence interval, 95% CI = 12.5-24.3) in emergency room visits due to lower respiratory tract diseases (4-day moving average), a 17.6% increase (95% CI = 3.3-32.7) in hospital admissions due to pneumonia or bronchopneumonia (3-day moving average), and a 31.4% increase (95% CI = 7.2-55.7) in hospital admissions due to asthma or bronchiolitis (2-day moving average). The study showed that air pollution considerably affects children's respiratory morbidity, deserving attention from the health authorities.

  17. Seasonal trend analysis and ARIMA modeling of relative humidity and wind speed time series around Yamula Dam

    NASA Astrophysics Data System (ADS)

    Eymen, Abdurrahman; Köylü, Ümran

    2018-02-01

    Local climate change is determined by analysis of long-term recorded meteorological data. In the statistical analysis of the meteorological data, the Mann-Kendall rank test, one of the non-parametric tests, has been used; for determining the magnitude of the trend, the Theil-Sen method has been applied to data obtained from 16 meteorological stations. The stations cover the provinces of Kayseri, Sivas, Yozgat, and Nevşehir in the Central Anatolia region of Turkey. Changes in land use affect local climate. Dams are structures that cause major changes on the land. Yamula Dam is located 25 km northwest of Kayseri. The dam has a huge water body of approximately 85 km2. The mentioned tests were used to detect the presence of any positive or negative trend in the meteorological data. The meteorological data, namely the seasonal average, maximum, and minimum values of relative humidity and the seasonal average wind speed, were organized as time series and the tests were conducted accordingly. As a result of these tests, an increase was observed in minimum relative humidity values in the spring, summer, and autumn seasons. As for the seasonal average wind speed, a decrease was detected at nine stations in all seasons, whereas an increase was observed at four stations. After the trend analysis, pre-dam mean relative humidity time series were modeled with Autoregressive Integrated Moving Average (ARIMA) models, a statistical modeling tool. Post-dam relative humidity values were then predicted by the ARIMA models.
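
A short sketch of the trend tests named above, using SciPy's Kendall tau (closely related to the Mann-Kendall test applied against a time index) and Theil-Sen slope on a simulated humidity series; station data and units are assumed for illustration.

```python
# Sketch: trend screening with Kendall's tau (closely related to the Mann-Kendall
# test applied against a time index) and the Theil-Sen slope, on a simulated
# seasonal-average relative humidity series.  Values and units are assumed.
import numpy as np
from scipy.stats import kendalltau, theilslopes

rng = np.random.default_rng(10)
years = np.arange(1980, 2015)
rel_humidity = 60 + 0.15 * (years - 1980) + rng.normal(0, 2, len(years))  # upward trend

tau, p_value = kendalltau(years, rel_humidity)                            # monotonic-trend significance
slope, intercept, lo_slope, hi_slope = theilslopes(rel_humidity, years)   # robust trend magnitude
print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f}), Theil-Sen slope = {slope:.2f} %/yr")
```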

  18. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is a powerful optical imaging technique for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to resolve this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell-tracking method. After these procedures, most of the noise was eliminated and host images were recovered, with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
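
As one illustration of the Kalman filtering step mentioned above, a constant-velocity Kalman filter can smooth a noisy 2-D track of a detected object across frames; the state model, noise covariances and trajectory below are assumptions, not the authors' implementation.

```python
# Sketch: a constant-velocity Kalman filter smoothing a noisy 2-D track of a
# detected object (e.g. a host cell) across video frames.  The state-space
# matrices, noise covariances and trajectory are illustrative assumptions.
import numpy as np

dt = 1.0                                            # one frame per step
F = np.array([[1, 0, dt, 0],                        # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)            # only position is measured
Q = 0.01 * np.eye(4)                                 # process noise (assumed)
R = 4.0 * np.eye(2)                                  # measurement noise (assumed)

rng = np.random.default_rng(7)
true_pos = np.cumsum(np.tile([1.0, 0.5], (100, 1)), axis=0)   # straight-line motion
measurements = true_pos + rng.normal(0, 2, (100, 2))          # noisy detections

x, P, track = np.zeros(4), np.eye(4), []
for z in measurements:
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + K @ (z - H @ x)                          # update with the measurement
    P = (np.eye(4) - K @ H) @ P
    track.append(x[:2].copy())
print("last smoothed position:", np.round(track[-1], 1), "true:", true_pos[-1])
```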

  19. Effect of environmental factors on Internet searches related to sinusitis.

    PubMed

    Willson, Thomas J; Lospinoso, Joshua; Weitzel, Erik K; McMains, Kevin C

    2015-11-01

    Sinusitis significantly affects the population of the United States, exacting direct cost and lost productivity. Patients are likely to search the Internet for information related to their health before seeking care by a healthcare professional. Utilizing data generated from these searches may serve as an epidemiologic surrogate. A retrospective time series analysis was performed. Google search trend data from the Dallas-Fort Worth metro region for the years 2012 and 2013 were collected from www.google.com/trends for terms related to sinusitis based on literature outlining the most important symptoms for diagnosis. Additional terms were selected based on common English language terms used to describe the disease. Twelve months of data from the same time period and location for common pollutants (nitrogen dioxide, ozone, sulfur dioxide, and particulates), pollen and mold counts, and influenza-like illness were also collected. Statistical analysis was performed using Pearson correlation coefficients, and potential search activity predictors were assessed using autoregressive integrated moving average. Pearson correlation was strongest between the terms congestion and influenza-like illness (r=0.615), and sinus and influenza-like illness (r=0.534) and nitrogen dioxide (r=0.487). Autoregressive integrated moving average analysis revealed ozone, influenza-like illness, and nitrogen dioxide levels to be potential predictors for sinus pressure searches, with estimates of 0.118, 0.349, and 0.438, respectively. Nitrogen dioxide was also a potential predictor for the terms congestion and sinus, with estimates of 0.191 and 0.272, respectively. Google search activity for related terms follow the pattern of seasonal influenza-like illness and nitrogen dioxide. These data highlight the epidemiologic potential of this novel surveillance method. NA. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  20. Statistical properties of the yuan exchange rate index

    NASA Astrophysics Data System (ADS)

    Wang, Dong-Hua; Yu, Xiao-Wen; Suo, Yuan-Yuan

    2012-06-01

    We choose the yuan exchange rate index based on a basket of currencies as the effective exchange rate of the yuan and investigate the statistical properties of the yuan exchange rate index after China's exchange rate system reform on 21st July 2005. After dividing the time series into two parts according to the change in the yuan exchange rate regime in July 2008, we compare the statistical properties of the yuan exchange rate index during these two periods. We find that the distributions of the two return series have an exponential form. We also perform detrending moving average analysis (DMA) and multifractal detrending moving average analysis (MFDMA). The two periods possess different degrees of long-range correlation, and a multifractal nature is also unveiled in these two time series. A significant difference is found in the scaling exponents τ(q) and singularity spectra f(α) of the two periods obtained from the MFDMA analysis. In addition, in order to detect the sources of multifractality, shuffling and phase randomization procedures are applied to destroy the long-range temporal correlation and the fat-tailed distribution of the yuan exchange rate index, respectively. We find that the fat-tailedness plays a critical role in the sources of multifractality in the first period, while long memory is the major cause in the second period. The results suggest that the change in China's exchange rate regime in July 2008 gives rise to the different multifractal properties of the yuan exchange rate index in these two periods, and thus has an effect on the effective exchange rate of the yuan after the exchange rate reform on 21st July 2005.
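
A simplified sketch of detrending moving average (DMA) scaling estimation, using a backward moving average as the local trend; this monofractal version with synthetic returns only illustrates the idea behind the DMA/MFDMA analyses applied in the paper.

```python
# Sketch: estimating the DMA scaling exponent of a return series using a backward
# moving average as the local trend.  This is a simplified monofractal version of
# the DMA/MFDMA procedure, applied here to synthetic returns.
import numpy as np

rng = np.random.default_rng(8)
returns = rng.normal(0, 1, 5000)                   # placeholder for index returns
profile = np.cumsum(returns - returns.mean())      # integrated (profile) series

def dma_fluctuation(y, n):
    ma = np.convolve(y, np.ones(n) / n, mode="valid")   # backward moving average of window n
    resid = y[n - 1:] - ma                               # detrended residuals
    return np.sqrt(np.mean(resid ** 2))

windows = np.unique(np.logspace(1, 3, 15).astype(int))
F = np.array([dma_fluctuation(profile, n) for n in windows])
alpha = np.polyfit(np.log(windows), np.log(F), 1)[0]     # slope of log F(n) vs log n
print(f"DMA scaling exponent = {alpha:.2f} (about 0.5 for uncorrelated returns)")
```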

  1. The learning curve to achieve satisfactory completion rates in upper GI endoscopy: an analysis of a national training database.

    PubMed

    Ward, S T; Hancox, A; Mohammed, M A; Ismail, T; Griffiths, E A; Valori, R; Dunckley, P

    2017-06-01

    The aim of this study was to determine the number of OGDs (oesophago-gastro-duodenoscopies) trainees need to perform to acquire competency in terms of successful unassisted completion to the second part of the duodenum 95% of the time. OGD data were retrieved from the trainee e-portfolio developed by the Joint Advisory Group on GI Endoscopy (JAG) in the UK. All trainees were included unless they were known to have a baseline experience of >20 procedures or had submitted data for <20 procedures. The primary outcome measure was OGD completion, defined as passage of the endoscope to the second part of the duodenum without physical assistance. The number of OGDs required to achieve a 95% completion rate was calculated by the moving average method and learning curve cumulative summation (LC-Cusum) analysis. To determine which factors were independently associated with OGD completion, a mixed effects logistic regression model was constructed with OGD completion as the outcome variable. Data were analysed for 1255 trainees over 288 centres, representing 243 555 OGDs. By moving average method, trainees attained a 95% completion rate at 187 procedures. By LC-Cusum analysis, after 200 procedures, >90% trainees had attained a 95% completion rate. Total number of OGDs performed, trainee age and experience in lower GI endoscopy were factors independently associated with OGD completion. There are limited published data on the OGD learning curve. This is the largest study to date analysing the learning curve for competency acquisition. The JAG competency requirement for 200 procedures appears appropriate. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  2. Determination of baroreflex gain using auto-regressive moving-average analysis during spontaneous breathing.

    PubMed

    O'Leary, D D; Lin, D C; Hughson, R L

    1999-09-01

    The heart rate component of the arterial baroreflex gain (BRG) was determined with auto-regressive moving-average (ARMA) analysis during each of spontaneous (SB) and random breathing (RB) protocols. Ten healthy subjects completed each breathing pattern on two different days in each of two different body positions, supine (SUP) and head-up tilt (HUT). The R-R interval, systolic arterial pressure (SAP) and instantaneous lung volume were recorded continuously. BRG was estimated from the ARMA impulse response relationship of R-R interval to SAP and from the spontaneous sequence method. The results indicated that both the ARMA and spontaneous sequence methods were reproducible (r = 0.76 and r = 0.85, respectively). As expected, BRG was significantly less in the HUT compared to SUP position for both ARMA (mean +/- SEM; 3.5 +/- 0.3 versus 11.2 +/- 1.4 ms mmHg-1; P < 0.01) and spontaneous sequence analysis (10.3 +/- 0.8 versus 31.5 +/- 2.3 ms mmHg-1; P < 0.001). However, no significant difference was found between BRG during RB and SB protocols for either ARMA (7.9 +/- 1.4 versus 6.7 +/- 0.8 ms mmHg-1; P = 0.27) or spontaneous sequence methods (21.8 +/- 2.7 versus 20.0 +/- 2.1 ms mmHg-1; P = 0.24). BRG was correlated during RB and SB protocols (r = 0.80; P < 0.0001). ARMA and spontaneous BRG estimates were correlated (r = 0.79; P < 0.0001), with spontaneous sequence values being consistently larger (P < 0.0001). In conclusion, we have shown that ARMA-derived BRG values are reproducible and that they can be determined during SB conditions, making the ARMA method appropriate for use in a wider range of patients.

  3. Dog days of summer: Influences on decision of wolves to move pups

    USGS Publications Warehouse

    Ausband, David E.; Mitchell, Michael S.; Bassing, Sarah B.; Nordhagen, Matthew; Smith, Douglas W.; Stahler, Daniel R.

    2016-01-01

    For animals that forage widely, protecting young from predation can span relatively long time periods due to the inability of young to travel with and be protected by their parents. Moving relatively immobile young to improve access to important resources, limit detection of concentrated scent by predators, and decrease infestations by ectoparasites can be advantageous. Moving young, however, can also expose them to increased mortality risks (e.g., accidents, getting lost, predation). For group-living animals that live in variable environments and care for young over extended time periods, the influence of biotic factors (e.g., group size, predation risk) and abiotic factors (e.g., temperature and precipitation) on the decision to move young is unknown. We used data from 25 satellite-collared wolves (Canis lupus) in Idaho, Montana, and Yellowstone National Park to evaluate how these factors could influence the decision to move pups during the pup-rearing season. We hypothesized that litter size, the number of adults in a group, and perceived predation risk would positively affect the number of times gray wolves moved pups. We further hypothesized that wolves would move their pups more often when it was hot and dry to ensure sufficient access to water. Contrary to our hypothesis, monthly temperature above the 30-year average was negatively related to the number of times wolves moved their pups. Monthly precipitation above the 30-year average, however, was positively related to the amount of time wolves spent at pup-rearing sites after leaving the natal den. We found little relationship between risk of predation (by grizzly bears, humans, or conspecifics) or group and litter sizes and number of times wolves moved their pups. Our findings suggest that abiotic factors most strongly influence the decision of wolves to move pups, although responses to unpredictable biotic events (e.g., a predator encountering pups) cannot be ruled out.

  4. Using a traffic simulation model (VISSIM) with an emissions model (MOVES) to predict emissions from vehicles on a limited-access highway.

    PubMed

    Abou-Senna, Hatem; Radwan, Essam; Westerlund, Kurt; Cooper, C David

    2013-07-01

    The Intergovernmental Panel on Climate Change (IPCC) estimates that baseline global GHG emissions may increase 25-90% from 2000 to 2030, with carbon dioxide (CO2) emissions growing 40-110% over the same period. On-road vehicles are a major source of CO2 emissions in all the developed countries, and in many of the developing countries in the world. Similarly, several criteria air pollutants are associated with transportation, for example, carbon monoxide (CO), nitrogen oxides (NO(x)), and particulate matter (PM). Therefore, the need to accurately quantify transportation-related emissions from vehicles is essential. The new U.S. Environmental Protection Agency (EPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to combine a microscopic traffic simulation model (such as VISSIM) with MOVES to obtain accurate results. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited-access highway in Orlando, FL. First (at the most basic level), emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NO(x), PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach. Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy. Integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The proposed emission rate estimation process also can be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.

  5. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012

    PubMed Central

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    Background The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. Methods In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results The syphilis incidence rates have increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Conclusion Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682

  6. The Association between Air Pollution and Outpatient and Inpatient Visits in Shenzhen, China

    PubMed Central

    Liu, Yachuan; Chen, Shanen; Xu, Jian; Liu, Xiaojian; Wu, Yongsheng; Zhou, Lin; Cheng, Jinquan; Ma, Hanwu; Zheng, Jing; Lin, Denan; Zhang, Li; Chen, Lili

    2018-01-01

    Nowadays, air pollution is a severe environmental problem in China. To investigate the effects of ambient air pollution on health, a time series analysis of daily outpatient and inpatient visits in 2015 were conducted in Shenzhen (China). Generalized additive model was employed to analyze associations between six air pollutants (namely SO2, CO, NO2, O3, PM10, and PM2.5) and daily outpatient and inpatient visits after adjusting confounding meteorological factors, time and day of the week effects. Significant associations between air pollutants and two types of hospital visits were observed. The estimated increase in overall outpatient visits associated with each 10 µg/m3 increase in air pollutant concentration ranged from 0.48% (O3 at lag 2) to 11.48% (SO2 with 2-day moving average); for overall inpatient visits ranged from 0.73% (O3 at lag 7) to 17.13% (SO2 with 8-day moving average). Our results also suggested a heterogeneity of the health effects across different outcomes and in different populations. The findings in present study indicate that even in Shenzhen, a less polluted area in China, significant associations exist between air pollution and daily number of overall outpatient and inpatient visits. PMID:29360738

  7. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012.

    PubMed

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis.

  8. Distributed parameter system coupled ARMA expansion identification and adaptive parallel IIR filtering - A unified problem statement. [Auto Regressive Moving-Average]

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Balas, M. J.

    1980-01-01

    A novel interconnection of distributed parameter system (DPS) identification and adaptive filtering is presented, which culminates in a common statement of coupled autoregressive, moving-average expansion or parallel infinite impulse response configuration adaptive parameterization. The common restricted complexity filter objectives are seen as similar to the reduced-order requirements of the DPS expansion description. The interconnection presents the possibility of an exchange of problem formulations and solution approaches not yet easily addressed in the common finite dimensional lumped-parameter system context. It is concluded that the shared problems raised are nevertheless many and difficult.

  9. Forecasting daily meteorological time series using ARIMA and regression models

    NASA Astrophysics Data System (ADS)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

    The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 in four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving-average methodology, the autoregressive integrated moving-average with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, all implemented with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
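
    The "ARIMA with external regressors in the form of Fourier terms" variant can be sketched as follows. The original work used R; the Python translation below, with its choice of two Fourier harmonics and an ARMA(1,1) error structure, is an illustrative assumption rather than the authors' specification.

        # Sketch: ARIMA errors with Fourier-term regressors for annual seasonality in a
        # daily temperature series (illustrative Python translation; the study used R).
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        def fourier_terms(index, period=365.25, K=2):
            """K pairs of sine/cosine regressors capturing smooth annual seasonality."""
            t = index.dayofyear.to_numpy()     # phase tied to the calendar, so it extends naturally
            cols = {}
            for k in range(1, K + 1):
                cols[f"sin{k}"] = np.sin(2 * np.pi * k * t / period)
                cols[f"cos{k}"] = np.cos(2 * np.pi * k * t / period)
            return pd.DataFrame(cols, index=index)

        days = pd.date_range("1980-01-01", "1984-12-31", freq="D")
        rng = np.random.default_rng(2)
        temp = pd.Series(10 - 12 * np.cos(2 * np.pi * np.arange(len(days)) / 365.25)
                         + rng.normal(0, 2, len(days)), index=days)

        X = fourier_terms(days, K=2)
        fit = SARIMAX(temp, exog=X, order=(1, 0, 1)).fit(disp=False)
        future_X = fourier_terms(pd.date_range("1985-01-01", periods=30, freq="D"), K=2)
        print(fit.forecast(steps=30, exog=future_X).head())         # 30-day-ahead forecast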

  10. A fixed-memory moving, expanding window for obtaining scatter corrections in X-ray CT and other stochastic averages

    NASA Astrophysics Data System (ADS)

    Levine, Zachary H.; Pintar, Adam L.

    2015-11-01

    A simple algorithm for averaging a stochastic sequence of 1D arrays in a moving, expanding window is provided. The samples are grouped in bins which increase exponentially in size so that a constant fraction of the samples is retained at any point in the sequence. The algorithm is shown to have particular relevance for a class of Monte Carlo sampling problems which includes one characteristic of iterative reconstruction in computed tomography. The code is available in the CPC program library in both Fortran 95 and C and is also available in R through CRAN.
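
    The bin-grouping idea can be imitated in a few lines. The sketch below is a simplified illustration of keeping roughly a constant fraction of the most recent samples in exponentially sized bins while storing only a handful of running sums; it is not the published Fortran 95/C/R implementation, and the growth and fraction parameters are assumptions.

        # Simplified illustration of a fixed-memory moving, expanding window:
        # samples accumulate into bins whose sizes grow roughly geometrically, and the
        # oldest bin is dropped once the retained count exceeds an upper fraction of all
        # samples seen, so the averaging window covers roughly a constant fraction of the
        # sequence while only a few (sum, count) pairs are kept in memory.
        class FixedMemoryMovingAverage:
            def __init__(self, max_fraction=0.75):
                self.max_fraction = max_fraction
                self.bins = [[0.0, 0]]          # [sum, count], oldest first; last bin is open
                self.seen = 0

            def push(self, x):
                self.seen += 1
                self.bins[-1][0] += x
                self.bins[-1][1] += 1
                retained = sum(c for _, c in self.bins)
                # Seal the open bin once it is as large as everything before it,
                # so sealed bin sizes grow roughly geometrically.
                if self.bins[-1][1] >= max(1, retained - self.bins[-1][1]):
                    self.bins.append([0.0, 0])
                # Drop the oldest bin when the window holds too large a fraction.
                if len(self.bins) > 2 and retained > self.max_fraction * self.seen:
                    self.bins.pop(0)

            def average(self):
                total = sum(s for s, _ in self.bins)
                count = sum(c for _, c in self.bins)
                return total / count

        import random
        w = FixedMemoryMovingAverage()
        for i in range(100000):
            w.push(random.gauss(i * 0.001, 1.0))    # slowly drifting signal
        print(len(w.bins), round(w.average(), 2))   # few bins stored; average tracks the recent drift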

  11. [Comparison of predictive effect between the single auto regressive integrated moving average (ARIMA) model and the ARIMA-generalized regression neural network (GRNN) combination model on the incidence of scarlet fever].

    PubMed

    Zhu, Yu; Xia, Jie-lai; Wang, Jing

    2009-09-01

    To compare the 'single autoregressive integrated moving average (ARIMA) model' and the 'ARIMA-generalized regression neural network (GRNN) combination model' in research on the incidence of scarlet fever. An autoregressive integrated moving average model was established based on the monthly incidence of scarlet fever in one city from 2000 to 2006. The fitted values of the ARIMA model were used as input to the GRNN, and the actual values were used as output. After training the GRNN, the performance of the single ARIMA model and the ARIMA-GRNN combination model was compared. The mean error rates (MER) of the single ARIMA model and the ARIMA-GRNN combination model were 31.6% and 28.7%, respectively, and the determination coefficients (R(2)) of the two models were 0.801 and 0.872, respectively. The fitting efficacy of the ARIMA-GRNN combination model was better than that of the single ARIMA model, which has practical value in research on time series data such as the incidence of scarlet fever.

  12. Heterogeneous CPU-GPU moving targets detection for UAV video

    NASA Astrophysics Data System (ADS)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. The pixels belonging to moving targets in HD video taken by a UAV are always in the minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the algorithm prevents it from running in real time at full frame resolution. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough to solve the problem.
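
    A CPU-only sketch of the two core steps described above (registering the moving background with a homography, then frame differencing) is given below using OpenCV. The feature detector, RANSAC threshold, and minimum-area filter are illustrative choices, and the GPU partitioning is not reproduced.

        # Sketch: compensate camera motion by registering the previous frame to the
        # current one with a feature-based homography, then detect moving targets by
        # frame differencing (CPU-only illustration with OpenCV 4.x).
        import cv2
        import numpy as np

        def detect_moving_targets(prev_bgr, curr_bgr, min_area=50):
            prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
            curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

            orb = cv2.ORB_create(1000)
            kp1, des1 = orb.detectAndCompute(prev, None)
            kp2, des2 = orb.detectAndCompute(curr, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

            src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # background motion model

            h, w = curr.shape
            prev_warped = cv2.warpPerspective(prev, H, (w, h))     # register the background
            diff = cv2.absdiff(curr, prev_warped)                  # residual = moving pixels
            _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]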

  13. Industrial Based Migration in India. A Case Study of Dumdum "Dunlop Industrial Zone"

    NASA Astrophysics Data System (ADS)

    Das, Biplab; Bandyopadhyay, Aditya; Sen, Jayashree

    2012-10-01

    Migration is a very important part of our present society. Millions of people moved during the industrial revolution. Some simply moved from a village to a town in the hope of finding work, whilst others moved from one country to another in search of a better way of life. The main reason for moving home during the 19th century was to find work. On one hand this involved migration from the countryside to the growing industrial cities; on the other it involved rates of migration, emigration, and the social changes that were drastically affecting factors such as marriage, birth and death rates. These social changes, taking place as a result of capitalism, had far-ranging effects, such as lowering the average age of marriage and increasing the size of the average family. Migration was not just people moving out of the country; it also involved a lot of people moving into Britain. In the 1840s Ireland suffered a terrible famine. Faced with the massive cost of feeding the starving population, many local landowners paid for labourers to emigrate. There was a shift away from agriculturally based rural dwelling towards urban habitation to meet the mass demand for labour that new industry required. Great regional differences arose in population levels and in the structure of their demography, due to rates of migration, emigration, and the social changes affecting marriage, birth and death rates. There is no serious disagreement as to the extent of the population changes that occurred, but one key question that always arouses debate is whether an expanding population resulted in economic growth or vice versa, i.e., was industrialization a catalyst for population growth? A clear answer is difficult to decipher as the two variables are so closely and fundamentally interlinked, but it seems that both factors provided impetus for each other's take-off. If anything, population and economic growth were complementary to one another rather than simply being causative factors.

  14. Earthquakes Magnitude Predication Using Artificial Neural Network in Northern Red Sea Area

    NASA Astrophysics Data System (ADS)

    Alarifi, A. S.; Alarifi, N. S.

    2009-12-01

    Earthquakes are natural hazards that do not happen very often, but they may cause huge losses in life and property. Early preparation for these hazards is a key factor in reducing their damage and consequences. Since early ages, people have tried to predict earthquakes using simple observations such as strange or atypical animal behavior. In this paper, we study data collected from an existing earthquake catalogue to give better forecasting of future earthquakes. The 16000 events cover a time span of 1970 to 2009; the magnitudes range from greater than 0 to less than 7.2, while the depths range from greater than 0 to less than 100 km. We propose a new artificial intelligence prediction system based on an artificial neural network, which can be used to predict the magnitude of future earthquakes in the northern Red Sea area, including the Sinai Peninsula, the Gulf of Aqaba, and the Gulf of Suez. We propose a feed-forward neural network model with multiple hidden layers to predict earthquake occurrences and magnitudes in the northern Red Sea area. Although similar models have been published before for other areas, to the best of our knowledge this is the first neural network model to predict earthquakes in the northern Red Sea area. Furthermore, we present other forecasting methods such as a moving average over different intervals, a normally distributed random predictor, and a uniformly distributed random predictor. In addition, we present different statistical methods and data fitting such as linear, quadratic, and cubic regression. We present a detailed performance analysis of the proposed methods for different evaluation metrics. The results show that the neural network model provides higher forecast accuracy than the other proposed methods: the neural network achieves an average absolute error of 2.6%, while the moving average, linear regression and cubic regression achieve average absolute errors of 3.8%, 7.3% and 6.17%, respectively. In this work, we also show an analysis of earthquake data in the northern Red Sea area for different statistical parameters such as correlation, mean, and standard deviation. This analysis provides a deeper understanding of the seismicity of the area and its existing patterns.

  15. SU-F-T-497: Spatiotemporally Optimal, Personalized Prescription Scheme for Glioblastoma Patients Using the Proliferation and Invasion Glioma Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, M; Rockhill, J; Phillips, M

    Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: The standard prescription for GBM was assumed to deliver 46Gy in 23 fractions to GTV1+2cm margin and an additional 14Gy in 7 fractions to GTV2+2cm margin. We simulated tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day for GTV2 with a radius of 1 and 2cm. For each tumor, the margin around GTV1 was varied from 0 to 6 cm and that around GTV2 from 1 to 3 cm. Total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals the EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: The optimal prescription for the slow-move tumors was to use 3.0 (small) to 3.5 (large) cm margins to GTV1, and a 1.5cm margin to GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0cm margin for GTV1, suggesting that whole brain therapy is optimal, and then 1.5cm (average-move) and 1.5-3.0cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001-0.465% of that resulting from the standard prescription, and increased tumor EUD by 25.3-49.3% and the estimated survival time by 7.6-22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing EUD to normal brain.

  16. Classical technical analysis of Latin American market indices. Correlations in Latin American Currencies (ARS, CLP, MXP) exchange rates with respect to DEM, GBP, JPY and USD

    NASA Astrophysics Data System (ADS)

    Ausloos, M.; Ivanova, K.

    2004-06-01

    The classical technical analysis methods for financial time series based on moving averages and momentum are recalled. Illustrations use the IBM share price and Latin American (Argentinian MerVal, Brazilian Bovespa and Mexican IPC) market indices. We have also searched for scaling ranges and exponents in exchange rates between Latin American currencies ($ARS$, $CLP$, $MXP$) and other major currencies $DEM$, $GBP$, $JPY$, $USD$, and $SDR$s. We have sorted out correlations and anticorrelations of such exchange rates with respect to $DEM$, $GBP$, $JPY$ and $USD$. They indicate a very complex or speculative behavior.
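
    The two classical indicators referred to above reduce to a few lines of pandas; the window lengths below (20 and 50 days, 10-day momentum) are conventional illustrative choices, not values taken from the paper.

        # Sketch of the classical technical-analysis indicators: moving-average
        # crossover and momentum, computed on a daily price series (illustrative windows).
        import pandas as pd

        def technical_signals(price: pd.Series, short=20, long=50, mom_lag=10):
            out = pd.DataFrame(index=price.index)
            out["ma_short"] = price.rolling(short).mean()
            out["ma_long"] = price.rolling(long).mean()
            out["momentum"] = price - price.shift(mom_lag)          # price change over mom_lag days
            # +1 when the short average is above the long one ("buy" regime), -1 otherwise.
            out["crossover"] = (out["ma_short"] > out["ma_long"]).astype(int) * 2 - 1
            return out

        # Usage: technical_signals(ibm_close), where ibm_close is a pd.Series of daily closes.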

  17. The Micromechanics of the Moving Contact Line

    NASA Technical Reports Server (NTRS)

    Han, Minsub; Lichter, Seth; Lin, Chih-Yu; Perng, Yeong-Yan

    1996-01-01

    The proposed research is divided into three components concerned with molecular structure, molecular orientation, and continuum averages of discrete systems. In the experimental program, we propose exploring how changes in interfacial molecular structure generate contact line motion. Rather than rely on the electrostatic and electrokinetic fields arising from the molecules themselves, we augment their interactions by an imposed field at the solid/liquid interface. By controlling the field, we can manipulate the molecular structure at the solid/liquid interface. In response to controlled changes in molecular structure, we observe the resultant contact line motion. In the analytical portion of the proposed research we seek to formulate a system of equations governing fluid motion which accounts for the orientation of fluid molecules. In preliminary work, we have focused on describing how molecular orientation affects the forces generated at the moving contact line. Ideally, as assumed above, the discrete behavior of molecules can be averaged into a continuum theory. In the numerical portion of the proposed research, we inquire whether the contact line region is, in fact, large enough to possess a well-defined average. Additionally, we ask what types of behavior distinguish discrete systems from continuum systems. Might the smallness of the contact line region, in itself, lead to behavior different from that in the bulk? Taken together, our proposed research seeks to identify and accurately account for some of the molecular dynamics of the moving contact line, and attempts to formulate a description from which one can compute the forces at the moving contact line.

  18. Plasmoid growth and expulsion revealed by two-point ARTEMIS observations

    NASA Astrophysics Data System (ADS)

    Li, S.; Angelopoulos, V.; Runov, A.; Kiehas, S.

    2012-12-01

    On 12 October 2011, the two ARTEMIS probes, in lunar orbit ~7 RE north of the neutral sheet, sequentially observed a tailward-moving, expanding plasmoid. Their observations reveal a multi-layered plasma sheet composed of tailward-flowing hot plasma within the plasmoid proper enshrouded by earthward-flowing, less energetic plasma. Prior observations of similar earthward flow structures ahead of or behind plasmoids have been interpreted as earthward outflow from a continuously active distant-tail neutral line (DNL) opposite an approaching plasmoid. However, no evidence of active DNL reconnection was observed by the probes as they traversed the plasmoid's leading and trailing edges, penetrating to slightly above its core. We suggest an alternate interpretation: compression of the ambient plasma by the tailward-moving plasmoid propels the plasma lobeward and earthward, i.e., over and above the plasmoid. Using the propagation velocity obtained from timing analysis, we estimate the average plasmoid size to be 9 RE and its expansion rate to be ~ 7 RE/min at the observation locations. The velocity inside the plasmoid proper was found to be non-uniform; the core likely moves as fast as 500 km/s, yet the outer layers move more slowly (and reverse direction), possibly resulting in the observed expansion. The absence of lobe reconnection, in particular on the earthward side, suggests that plasmoid formation and expulsion result from closed plasma sheet field line reconnection.

  19. Use of spatiotemporal characteristics of ambient PM2.5 in rural South India to infer local versus regional contributions.

    PubMed

    Kumar, M Kishore; Sreekanth, V; Salmon, Maëlle; Tonne, Cathryn; Marshall, Julian D

    2018-08-01

    This study uses spatiotemporal patterns in ambient concentrations to infer the contribution of regional versus local sources. We collected 12 months of monitoring data for outdoor fine particulate matter (PM2.5) in rural southern India. Rural India includes more than one-tenth of the global population and annually accounts for around half a million air pollution deaths, yet little is known about the relative contribution of local sources to outdoor air pollution. We measured 1-min averaged outdoor PM2.5 concentrations during June 2015-May 2016 in three villages, which varied in population size, socioeconomic status, and type and usage of domestic fuel. The daily geometric-mean PM2.5 concentration was ∼30 μg m-3 (geometric standard deviation: ∼1.5). Concentrations exceeded the Indian National Ambient Air Quality standards (60 μg m-3) during 2-5% of observation days. Average concentrations were ∼25 μg m-3 higher during winter than during monsoon and ∼8 μg m-3 higher during morning hours than the diurnal average. A moving average subtraction method based on 1-min average PM2.5 concentrations indicated that local contributions (e.g., nearby biomass combustion, brick kilns) were greater in the most populated village, and that overall the majority of ambient PM2.5 in our study was regional, implying that local air pollution control strategies alone may have limited influence on local ambient concentrations. We compared the relatively new moving average subtraction method against a more established approach. Both methods broadly agree on the relative contribution of local sources across the three sites. The moving average subtraction method has broad applicability across locations. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
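
    One simple reading of a moving average subtraction method is sketched below: a long-window rolling average of the 1-min series is treated as the slowly varying regional background, and the residual spikes are attributed to local sources. The 6-hour window and the use of a plain rolling mean are assumptions for illustration, not the exact published procedure.

        # Sketch: split a 1-min PM2.5 series into a slowly varying "regional" baseline
        # (long-window rolling average) and a fast-varying "local" residual.
        import numpy as np
        import pandas as pd

        def local_regional_split(pm25: pd.Series, window=360):      # 360 one-minute samples ~= 6 h
            regional = pm25.rolling(window, min_periods=1).mean()
            local = (pm25 - regional).clip(lower=0)                 # spikes above baseline -> local sources
            return regional, local

        minutes = pd.date_range("2015-06-01", periods=24 * 60, freq="min")
        rng = np.random.default_rng(3)
        series = pd.Series(25 + 5 * np.sin(2 * np.pi * np.arange(len(minutes)) / (24 * 60))
                           + rng.exponential(2, len(minutes)), index=minutes)
        regional, local = local_regional_split(series)
        print(f"regional share of the mean: {regional.mean() / series.mean():.2f}")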

  20. A complete passive blind image copy-move forensics scheme based on compound statistics features.

    PubMed

    Peng, Fei; Nie, Yun-ying; Long, Min

    2011-10-10

    Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly limits their applicability. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. Firstly, a color image is transformed into a grayscale one, and a wavelet-transform-based de-noising filter is used to extract the sensor pattern noise; then the variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are chosen as features, and non-overlapping sliding window operations are applied to divide the images into sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  1. Microvolt T-Wave Alternans

    PubMed Central

    Verrier, Richard L.; Klingenheben, Thomas; Malik, Marek; El-Sherif, Nabil; Exner, Derek V.; Hohnloser, Stefan H.; Ikeda, Takanori; Martínez, Juan Pablo; Narayan, Sanjiv M.; Nieminen, Tuomo; Rosenbaum, David S.

    2014-01-01

    This consensus guideline was prepared on behalf of the International Society for Holter and Noninvasive Electrocardiology and is cosponsored by the Japanese Circulation Society, the Computers in Cardiology Working Group on e-Cardiology of the European Society of Cardiology, and the European Cardiac Arrhythmia Society. It discusses the electrocardiographic phenomenon of T-wave alternans (TWA) (i.e., a beat-to-beat alternation in the morphology and amplitude of the ST-segment or T-wave). This statement focuses on its physiological basis and measurement technologies and its clinical utility in stratifying risk for life-threatening ventricular arrhythmias. Signal processing techniques including the frequency-domain Spectral Method and the time-domain Modified Moving Average method have demonstrated the utility of TWA in arrhythmia risk stratification in prospective studies in >12,000 patients. The majority of exercise-based studies using both methods have reported high relative risks for cardiovascular mortality and for sudden cardiac death in patients with preserved as well as depressed left ventricular ejection fraction. Studies with ambulatory electrocardiogram-based TWA analysis with the Modified Moving Average method have yielded significant predictive capacity. However, negative studies with the Spectral Method have also appeared, including 2 interventional studies in patients with implantable defibrillators. Meta-analyses have been performed to gain insights into this issue. Frontiers of TWA research include use in arrhythmia risk stratification of individuals with preserved ejection fraction, improvements in predictivity with quantitative analysis, and utility in guiding medical as well as device-based therapy. Overall, although TWA appears to be a useful marker of risk for arrhythmic and cardiovascular death, there is as yet no definitive evidence that it can guide therapy. PMID:21920259
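
    For orientation, the time-domain modified moving average idea can be sketched as follows: beats are split into even and odd streams, each stream's running average is nudged toward every new beat by only a fraction of the difference, and the alternans magnitude is read off as the largest separation between the two averages. The update fraction of 1/8 and the synthetic beats below are illustrative assumptions, not the clinical implementation.

        # Sketch of a modified-moving-average style estimate of T-wave alternans (TWA):
        # maintain separate limited-update averages for even and odd beats and report
        # the maximum separation across the beat (illustrative only; fraction assumed 1/8).
        import numpy as np

        def mma_twa(beats, fraction=0.125):
            """beats: array of shape (n_beats, n_samples), each row one aligned beat."""
            avg_even = beats[0].astype(float)
            avg_odd = beats[1].astype(float)
            for i, beat in enumerate(beats[2:], start=2):
                avg = avg_even if i % 2 == 0 else avg_odd
                avg += fraction * (beat - avg)          # move only a fraction toward the new beat
            return np.max(np.abs(avg_even - avg_odd))   # alternans magnitude (input units)

        # Synthetic example: alternating beats whose T-wave amplitude differs by ~20 units.
        n, t = 64, np.linspace(0, 1, 400)
        base = 100 * np.exp(-((t - 0.7) ** 2) / 0.002)                 # crude T-wave bump
        beats = np.array([base * (1.1 if i % 2 else 0.9) for i in range(n)])
        print(round(mma_twa(beats), 1))                                # ~20 after convergence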

  2. Forecasting and prediction of scorpion sting cases in Biskra province, Algeria, using a seasonal autoregressive integrated moving average model.

    PubMed

    Selmane, Schehrazad; L'Hadj, Mohamed

    2016-01-01

    The aims of this study were to highlight some epidemiological aspects of scorpion envenomations, to analyse and interpret the available data for Biskra province, Algeria, and to develop a forecasting model for scorpion sting cases in Biskra province, which records the highest number of scorpion stings in Algeria. In addition to analysing the epidemiological profile of scorpion stings that occurred throughout the year 2013, we used the Box-Jenkins approach to fit a seasonal autoregressive integrated moving average (SARIMA) model to the monthly recorded scorpion sting cases in Biskra from 2000 to 2012. The epidemiological analysis revealed that scorpion stings were reported continuously throughout the year, with peaks in the summer months. The most affected age group was 15 to 49 years old, with a male predominance. The most prone human body areas were the upper and lower limbs. The majority of cases (95.9%) were classified as mild envenomations. The time series analysis showed that a (5,1,0)×(0,1,1)12 SARIMA model offered the best fit to the scorpion sting surveillance data. This model was used to predict scorpion sting cases for the year 2013, and the fitted data showed considerable agreement with the actual data. SARIMA models are useful for monitoring scorpion sting cases, and provide an estimate of the variability to be expected in future scorpion sting cases. This knowledge is helpful in predicting whether an unusual situation is developing or not, and could therefore assist decision-makers in strengthening the province's prevention and control measures and in initiating rapid response measures.

  3. An analysis of underlying factors for seasonal variation in gonorrhoea in India: a 6-year statistical assessment.

    PubMed

    Kakran, M; Bala, M; Singh, V

    2015-01-01

    A statistical assessment of a disease is often necessary before resources can be allocated to any control programme. No literature on seasonal trends of gonorrhoea is available from India. The objectives were (1) to determine whether any seasonal trends were present in India, (2) to describe factors contributing to the seasonality of gonorrhoea, and (3) to formulate approaches for gonorrhoea control at the national level. Quarterly seasonal indices for gonorrhoea were calculated for the period 2005 to 2010. The ratio-to-moving-average method was used to determine the seasonal variation: the original values in the time series were expressed as percentages of moving averages. Results were also analyzed by a second statistical method, i.e. a seasonal subseries plot. The seasonally adjusted average for culture-positive gonorrhoea cases was highest in the second quarter (128.61%), followed by the third quarter (108.48%), while a trough was observed in the first (96.05%) and last quarters (64.85%). The second-quarter peak coincided with summer vacations in schools and colleges. Moreover, April is the harvesting month, followed by celebrations and social gatherings. Both these factors are associated with increased sexual activity and partner change. The trough in the first and last quarters was indicative of the festival season and winter, leading to fewer patients reporting to the hospital. The findings highlight the immediate need to strengthen sexual health education among young people in schools and colleges and education on risk-reduction practices, especially at crucial points in the calendar year, for effective gonorrhoea control.
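
    The ratio-to-moving-average calculation used above is straightforward to reproduce; in the sketch below the quarterly series is synthetic, and normalizing the indices to average 100% follows the usual textbook convention (an assumption, since the abstract does not spell it out).

        # Sketch: quarterly seasonal indices by the ratio-to-moving-average method.
        import numpy as np
        import pandas as pd

        quarters = pd.period_range("2005Q1", "2010Q4", freq="Q")
        rng = np.random.default_rng(4)
        cases = pd.Series(100 + 2 * np.arange(len(quarters))                  # mild trend
                          + np.tile([25, 10, -5, -30], len(quarters) // 4)    # seasonal swing
                          + rng.normal(0, 3, len(quarters)), index=quarters)

        # 2x4 centered moving average removes the seasonal component.
        ma4 = cases.rolling(4).mean()                            # trailing 4-quarter average
        centered_ma = (ma4.shift(-1) + ma4.shift(-2)) / 2        # centered on each quarter
        ratios = 100 * cases / centered_ma                       # data as % of moving average
        seasonal_index = ratios.groupby(ratios.index.quarter).mean()
        seasonal_index *= 400 / seasonal_index.sum()             # normalize so indices average 100
        print(seasonal_index.round(2))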

  4. Mechanical break junctions: enormous information in a nanoscale package.

    PubMed

    Natelson, Douglas

    2012-04-24

    Mechanical break junctions, particularly those in which a metal tip is repeatedly moved in and out of contact with a metal film, have provided many insights into electronic conduction at the atomic and molecular scale, most often by averaging over many possible junction configurations. This averaging throws away a great deal of information, and Makk et al. in this issue of ACS Nano demonstrate that, with both simulated and real experimental data, more sophisticated two-dimensional analysis methods can reveal information otherwise obscured in simple histograms. As additional measured quantities come into play in break junction experiments, including thermopower, noise, and optical response, these more sophisticated analytic approaches are likely to become even more powerful. While break junctions are not directly practical for useful electronic devices, they are incredibly valuable tools for unraveling the electronic transport physics relevant for ultrascaled nanoelectronics.

  5. Rapid range shifts of species associated with high levels of climate warming.

    PubMed

    Chen, I-Ching; Hill, Jane K; Ohlemüller, Ralf; Roy, David B; Thomas, Chris D

    2011-08-19

    The distributions of many terrestrial organisms are currently shifting in latitude or elevation in response to changing climate. Using a meta-analysis, we estimated that the distributions of species have recently shifted to higher elevations at a median rate of 11.0 meters per decade, and to higher latitudes at a median rate of 16.9 kilometers per decade. These rates are approximately two and three times faster than previously reported. The distances moved by species are greatest in studies showing the highest levels of warming, with average latitudinal shifts being generally sufficient to track temperature changes. However, individual species vary greatly in their rates of change, suggesting that the range shift of each species depends on multiple internal species traits and external drivers of change. Rapid average shifts derive from a wide diversity of responses by individual species.

  6. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  7. Are Math Grades Cyclical?

    ERIC Educational Resources Information Center

    Adams, Gerald J.; Dial, Micah

    1998-01-01

    The cyclical nature of mathematics grades was studied for a cohort of elementary school students from a large metropolitan school district in Texas over six years (average cohort size of 8495). The study used an autoregressive integrated moving average (ARIMA) model. Results indicate that grades do exhibit a significant cyclical pattern. (SLD)

  8. Evidence of redshifts in the average solar line profiles of C IV and Si IV from OSO-8 observations

    NASA Technical Reports Server (NTRS)

    Roussel-Dupre, D.; Shine, R. A.

    1982-01-01

    Line profiles of C IV and Si IV obtained by the Colorado spectrometer on OSO-8 are presented. It is shown that the mean profiles are redshifted with a magnitude varying from 6-20 km/s, and with a mean of 12 km/s. An apparent average downflow of material in the 50,000-100,000 K temperature range is measured. The redshifts are observed in the line center positions of spatially and temporally averaged profiles and are measured either relative to chromospheric Si I lines or from a comparison of sun center and limb profiles. The observations of 6-20 km/s redshifts place constraints on the mechanisms that dominate EUV line emission, since they require a strong weighting of the emission in regions of downward-moving material, and since there is little evidence for corresponding upward-moving material in these lines.

  9. Understanding the health of lorry drivers in context: A critical discourse analysis.

    PubMed

    Caddick, Nick; Varela-Mato, Veronica; Nimmo, Myra A; Clemes, Stacey; Yates, Tom; King, James A

    2017-01-01

    This article moves beyond previous attempts to understand health problems in the lives of professional lorry drivers by placing the study of drivers' health in a wider social and cultural context. A combination of methods including focus groups, interviews and observations was used to collect data from a group of 24 lorry drivers working at a large transport company in the United Kingdom. Employing a critical discourse analysis, we identified the dominant discourses and subject positions shaping the formation of drivers' health and lifestyle choices. This analysis was systematically combined with an exploration of the gendered ways in which an almost exclusively male workforce talked about health. Findings revealed that drivers were constituted within a neoliberal economic discourse, which is reflective of the broader social structure, and which partly restricted drivers' opportunities for healthy living. Concurrently, drivers adopted the subject position of 'average man' as a way of defending their personal and masculine status with regard to health and to justify jettisoning approaches to healthy living that were deemed too extreme or irrational in the face of the constraints of their working lives. Suggestions for driver health promotion include refocusing on the social and cultural - rather than individual - underpinnings of driver health issues and a move away from moralistic approaches to health promotion.

  10. A life cycle assessment of environmental performances of two combustion- and gasification-based waste-to-energy technologies.

    PubMed

    Arena, Umberto; Ardolino, Filomena; Di Gregorio, Fabrizio

    2015-07-01

    An attributional life cycle analysis (LCA) was developed to compare the environmental performances of two waste-to-energy (WtE) units, which utilize the predominant technologies among those available for combustion and gasification processes: a moving grate combustor and a vertical shaft gasifier coupled with direct melting. The two units were assumed to be fed with the same unsorted residual municipal waste, having a composition estimated as a European average. Data from several plants in operation were processed by means of mass and energy balances, and on the basis of the flows and stocks of materials and elements inside and throughout the two units, as provided by a specific substance flow analysis. The potential life cycle environmental impacts related to the operations of the two WtE units were estimated by means of the Impact 2002+ methodology. They indicate that both the technologies have sustainable environmental performances, but those of the moving grate combustion unit are better for most of the selected impact categories. The analysis of the contributions from all the stages of each specific technology suggests where improvements in technological solutions and management criteria should be focused to obtain further and remarkable environmental improvements. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Distractor interference during smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R; Kerzel, Dirk

    2006-10-01

    When 2 targets for pursuit eye movements move in different directions, the eye velocity follows the vector average (S. G. Lisberger & V. P. Ferrera, 1997). The present study investigates the mechanisms of target selection when observers are instructed to follow a predefined horizontal target and to ignore a moving distractor stimulus. Results show that at 140 ms after distractor onset, horizontal eye velocity is decreased by about 25%. Vertical eye velocity increases or decreases by 1 degree/s in the direction opposite from the distractor. This deviation varies in size with distractor direction, velocity, and contrast. The effect was present during the initiation and steady-state tracking phases of pursuit but only when the observer had prior information about target motion. Neither vector averaging nor winner-take-all models could predict the response to a moving to-be-ignored distractor during steady-state tracking of a predefined target. The contributions of perceptual mislocalization and spatial attention to the vertical deviation in pursuit are discussed. Copyright 2006 APA.

  12. Changes in healthcare use among individuals who move into public housing: a population-based investigation.

    PubMed

    Hinds, Aynslie M; Bechtel, Brian; Distasio, Jino; Roos, Leslie L; Lix, Lisa M

    2018-06-05

    Residence in public housing, a subsidized and managed government program, may affect health and healthcare utilization. We compared healthcare use in the year before individuals moved into public housing with usage during their first year of tenancy. We also described trends in use. We used linked population-based administrative data housed in the Population Research Data Repository at the Manitoba Centre for Health Policy. The cohort consisted of individuals who moved into public housing in 2009 and 2010. We counted the number of hospitalizations, general practitioner (GP) visits, specialist visits, emergency department visits, and prescription drugs dispensed in the twelve 30-day intervals (i.e., months) immediately preceding and following the public housing move-in date. Generalized linear models with generalized estimating equations tested for a period (pre/post-move-in) by month interaction. Odds ratios (ORs), incident rate ratios (IRRs), and means are reported along with 95% confidence intervals (95% CIs). The cohort included 1942 individuals; the majority were female (73.4%), lived in low-income areas, and received government assistance (68.1%). On average, the cohort had more than four health conditions. Over the 24 30-day intervals, the percentage of the cohort that visited a GP, specialist, and an emergency department ranged between 37.0% and 43.0%, 10.0% and 14.0%, and 6.0% and 10.0%, respectively, while the percentage of the cohort hospitalized ranged from 1.0% to 5.0%. Generally, these percentages were highest in the few months before the move-in date and lowest in the few months after the move-in date. The period by month interaction was statistically significant for hospitalizations, GP visits, and prescription drug use. The average change in the odds, rate, or mean was smaller in the post-move-in period than in the pre-move-in period. Use of some healthcare services declined after people moved into public housing; however, the decrease was only observed in the first few months and utilization rebounded. Knowledge of healthcare trends before individuals move in is informative for ensuring the appropriate supports are available to new public housing residents. Further study is needed to determine if decreased healthcare utilization following a move is attributable to decreased access.

  13. The change of sleeping and lying posture of Japanese black cows after moving into new environment.

    PubMed

    Fukasawa, Michiru; Komatsu, Tokushi; Higashiyama, Yumi

    2018-04-25

    Environmental change is one of the stressful events in livestock production. A change in environment disturbs cow behavior, and cows need several days to reach a stable behavioral pattern; sleeping posture (SP) and lying posture (LP) in particular have been used as indicators of relaxation and acclimation to the environment. The aim of this study was to examine how long Japanese black cows require to stabilize their SP and LP after moving into a new environment. Seven pregnant Japanese black cows were used. Cows were moved into a new tie-stall shed, and sleeping and lying posture were measured 17 times during the 35 experimental days. SP and LP were detected by accelerometers fixed on the middle occipital region and hip-cross, respectively. Daily total time, frequency, and average bout length of both SP and LP were calculated. Daily SP time was shortest on day 1 and increased to its highest value on day 3. It then decreased until day 9, after which it stabilized at about 65 min/day until the end of the experiment. The average SP bout was longest on day 1 and decreased to a stable level by day 7. Daily LP time changed in the same manner as daily SP time. The average LP bout, in contrast, was shortest on day 1 and increased to a stable level by day 7. These results showed that pregnant Japanese black cows needed one week to stabilize their SP. However, the change patterns of the average SP and LP bouts differed, even though the change patterns of daily SP and LP time were similar.

  14. Move-by-move dynamics of the advantage in chess matches reveals population-level learning of the game.

    PubMed

    Ribeiro, Haroldo V; Mendes, Renio S; Lenzi, Ervin K; del Castillo-Mussot, Marcelo; Amaral, Luís A N

    2013-01-01

    The complexity of chess matches has attracted broad interest since its invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage from over seventy thousand high-level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent [Formula: see text] characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.

  15. Move-by-Move Dynamics of the Advantage in Chess Matches Reveals Population-Level Learning of the Game

    PubMed Central

    Ribeiro, Haroldo V.; Mendes, Renio S.; Lenzi, Ervin K.; del Castillo-Mussot, Marcelo; Amaral, Luís A. N.

    2013-01-01

    The complexity of chess matches has attracted broad interest since its invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage from over seventy thousand high-level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments. PMID:23382876

  16. Measurement of greenhouse gas emissions from agricultural sites using open-path optical remote sensing method.

    PubMed

    Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick

    2009-08-01

    Improved characterization of distributed emission sources of greenhouse gases such as methane from concentrated animal feeding operations requires more accurate methods. One promising method has recently been used by the USEPA. It employs a vertical radial plume mapping (VRPM) algorithm using optical remote sensing techniques. We evaluated this method to estimate emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consists of 3 ground-level PICs and 2 above-ground PICs. Three- to 10-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated methane emission rates from the PIC and meteorological data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with actual release rates irrespective of atmospheric stability conditions. The maximum error was 22% when 3-cycle moving average PICs were used; however, it decreased to 11% when 10-cycle moving average PICs were used. Our validation results suggest that this new VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.

  17. Use of the temporal median and trimmed mean mitigates effects of respiratory motion in multiple-acquisition abdominal diffusion imaging

    NASA Astrophysics Data System (ADS)

    Jerome, N. P.; Orton, M. R.; d'Arcy, J. A.; Feiweier, T.; Tunariu, N.; Koh, D.-M.; Leach, M. O.; Collins, D. J.

    2015-01-01

    Respiratory motion commonly confounds abdominal diffusion-weighted magnetic resonance imaging, where averaging of successive samples at different parts of the respiratory cycle, performed in the scanner, manifests the motion as blurring of tissue boundaries and structural features and can introduce bias into calculated diffusion metrics. Storing multiple averages separately allows processing using metrics other than the mean; in this prospective volunteer study, median and trimmed mean values of signal intensity for each voxel over repeated averages and diffusion-weighting directions are shown to give images with sharper tissue boundaries and structural features for moving tissues, while not compromising non-moving structures. Expert visual scoring of derived diffusion maps is significantly higher for the median than for the mean, with modest improvement from the trimmed mean. Diffusion metrics derived from mono- and bi-exponential diffusion models are comparable for non-moving structures, demonstrating a lack of introduced bias from using the median. The use of the median is a simple and computationally inexpensive alternative to complex and expensive registration algorithms, requiring only additional data storage (and no additional scanning time) while returning visually superior images that will facilitate the appropriate placement of regions-of-interest when analysing abdominal diffusion-weighted magnetic resonance images, for assessment of disease characteristics and treatment response.
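
    Because the repeated averages are stored separately, replacing the scanner's mean by a median or trimmed mean is a one-line change at analysis time. The sketch below shows the voxel-wise computation over the repetition axis with NumPy/SciPy; the array shapes, noise levels, and trimming proportion are illustrative assumptions.

        # Sketch: combine repeated diffusion-weighted acquisitions voxel-wise with the
        # mean, median, or trimmed mean along the repetition axis (axis 0).
        import numpy as np
        from scipy import stats

        # repeats: shape (n_averages, nx, ny, nz) -- synthetic data with one
        # motion-corrupted repetition for illustration.
        rng = np.random.default_rng(5)
        repeats = rng.normal(100, 5, size=(8, 64, 64, 20))
        repeats[3] += rng.normal(30, 10, size=(64, 64, 20))        # one corrupted average

        mean_img = repeats.mean(axis=0)
        median_img = np.median(repeats, axis=0)
        trimmed_img = stats.trim_mean(repeats, proportiontocut=0.125, axis=0)  # drop 12.5% per tail

        # The corrupted repetition biases the mean but barely affects the median.
        print(mean_img.mean().round(1), median_img.mean().round(1), trimmed_img.mean().round(1))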

  18. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when the battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, referred to as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of the received data, if they occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.

  19. Use of the temporal median and trimmed mean mitigates effects of respiratory motion in multiple-acquisition abdominal diffusion imaging.

    PubMed

    Jerome, N P; Orton, M R; d'Arcy, J A; Feiweier, T; Tunariu, N; Koh, D-M; Leach, M O; Collins, D J

    2015-01-21

    Respiratory motion commonly confounds abdominal diffusion-weighted magnetic resonance imaging, where averaging of successive samples at different parts of the respiratory cycle, performed in the scanner, manifests the motion as blurring of tissue boundaries and structural features and can introduce bias into calculated diffusion metrics. Storing multiple averages separately allows processing using metrics other than the mean; in this prospective volunteer study, median and trimmed mean values of signal intensity for each voxel over repeated averages and diffusion-weighting directions are shown to give images with sharper tissue boundaries and structural features for moving tissues, while not compromising non-moving structures. Expert visual scoring of derived diffusion maps is significantly higher for the median than for the mean, with modest improvement from the trimmed mean. Diffusion metrics derived from mono- and bi-exponential diffusion models are comparable for non-moving structures, demonstrating a lack of introduced bias from using the median. The use of the median is a simple and computationally inexpensive alternative to complex and expensive registration algorithms, requiring only additional data storage (and no additional scanning time) while returning visually superior images that will facilitate the appropriate placement of regions-of-interest when analysing abdominal diffusion-weighted magnetic resonance images, for assessment of disease characteristics and treatment response.

  20. Enhancement of the Comb Filtering Selectivity Using Iterative Moving Average for Periodic Waveform and Harmonic Elimination

    PubMed Central

    Wu, Yan; Aarts, Ronald M.

    2018-01-01

    A recurring problem regarding the use of conventional comb filter approaches for elimination of periodic waveforms is the degree of selectivity achieved by the filtering process. Some applications, such as the gradient artefact correction in EEG recordings during coregistered EEG-fMRI, require a highly selective comb filtering that provides effective attenuation in the stopbands and gain close to unity in the pass-bands. In this paper, we present a novel comb filtering implementation whereby the iterative filtering application of FIR moving average-based approaches is exploited in order to enhance the comb filtering selectivity. Our results indicate that the proposed approach can be used to effectively approximate the FIR moving average filter characteristics to those of an ideal filter. A cascaded implementation using the proposed approach shows to further increase the attenuation in the filter stopbands. Moreover, broadening of the bandwidth of the comb filtering stopbands around −3 dB according to the fundamental frequency of the stopband can be achieved by the novel method, which constitutes an important characteristic to account for broadening of the harmonic gradient artefact spectral lines. In parallel, the proposed filtering implementation can also be used to design a novel notch filtering approach with enhanced selectivity as well. PMID:29599955
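
    To see why iterating an FIR moving average sharpens the comb characteristic: an L-point moving average has spectral nulls at multiples of fs/L, and applying it k times raises the magnitude response to the k-th power, which deepens those nulls (at the cost of some extra passband droop). The SciPy sketch below illustrates this; the filter length, iteration count, and sampling rate are illustrative assumptions, not the parameters of the published design.

        # Sketch: frequency response of an L-point FIR moving average applied once and
        # applied iteratively (k passes), showing the deepened comb-like stopbands.
        import numpy as np
        from scipy import signal

        fs = 250.0                     # assumed sampling rate, Hz
        L = 25                         # moving-average length -> nulls at multiples of fs/L = 10 Hz
        k = 3                          # number of iterative passes

        h1 = np.ones(L) / L                        # single moving-average kernel
        hk = h1.copy()
        for _ in range(k - 1):
            hk = np.convolve(hk, h1)               # k passes == convolving the kernel k times

        w, H1 = signal.freqz(h1, worN=4096, fs=fs)
        _, Hk = signal.freqz(hk, worN=4096, fs=fs)

        for f0 in (10, 20, 30):                    # harmonics of the notch frequency
            i = np.argmin(np.abs(w - (f0 + 0.3)))  # sample just off the exact null
            print(f"{f0:>2} Hz (+0.3): single pass {20*np.log10(abs(H1[i])):6.1f} dB, "
                  f"{k} passes {20*np.log10(abs(Hk[i])):6.1f} dB")

    Cascading additional passes deepens the stopbands further, mirroring the cascaded implementation described in the abstract.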

  1. Stochastic Flow Cascades

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Shlesinger, Michael F.

    2012-01-01

    We introduce and explore a Stochastic Flow Cascade (SFC) model: A general statistical model for the unidirectional flow through a tandem array of heterogeneous filters. Examples include the flow of: (i) liquid through heterogeneous porous layers; (ii) shocks through tandem shot noise systems; (iii) signals through tandem communication filters. The SFC model combines together the Langevin equation, convolution filters and moving averages, and Poissonian randomizations. A comprehensive analysis of the SFC model is carried out, yielding closed-form results. Lévy laws are shown to universally emerge from the SFC model, and characterize both heavy tailed retention times (Noah effect) and long-ranged correlations (Joseph effect).

  2. FARMWORKERS, A REPRINT FROM THE 1966 MANPOWER REPORT.

    ERIC Educational Resources Information Center

    Manpower Administration (DOL), Washington, DC.

    ALTHOUGH THE AVERAGE STANDARD OF LIVING OF FARM PEOPLE HAS BEEN RISING STEADILY, THEY CONTINUE TO FACE SEVERE PROBLEMS OF UNDEREMPLOYMENT AND POVERTY. THE AVERAGE PER CAPITA INCOME OF FARM RESIDENTS IS LESS THAN TWO-THIRDS THAT OF THE NONFARM POPULATION. MILLIONS HAVE MOVED TO CITIES, LEAVING STAGNATING RURAL COMMUNITIES, AND INCREASING THE CITY…

  3. Severe Weather Guide - Mediterranean Ports. 7. Marseille

    DTIC Science & Technology

    1988-03-01

    the afternoon. Upper-level westerlies and the associated storm track move northward during summer, so extratropical cyclones and associated...autumn as the extratropical storm track moves southward. Precipitation amount is the highest of the year, with an average of 3 inches (76 mm) for the... Subject terms: storm haven, Mediterranean meteorology, Marseille port.

  4. Polymer Coatings Degradation Properties

    DTIC Science & Technology

    1985-02-01

    undertaken (24). The Box-Jenkins approach first evaluates the partial auto-correlation function and determines the order of the moving average memory function... Tables 15 and 16 show the results of the partial auto-correlation plots. Second-order moving averages with the appropriate lags were...coated films. Kaempf, Guenter; Papenroth, Wolfgang; Kunststoffe, 1982, Volume 72, Number 7, Pages 424-429. Parameters influencing the accelerated

  5. Simulation of Unsteady Flows Using an Unstructured Navier-Stokes Solver on Moving and Stationary Grids

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Vatsa, Veer N.; Atkins, Harold L.

    2005-01-01

    We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for unstructured grids to unsteady flows on moving and stationary grids. Example problems considered are relevant to active flow control and stability and control. Computational results are presented using the Spalart-Allmaras turbulence model and are compared to experimental data. The effects of grid and time-step refinement are examined.

  6. A multimodel approach to interannual and seasonal prediction of Danube discharge anomalies

    NASA Astrophysics Data System (ADS)

    Rimbu, Norel; Ionita, Monica; Patrut, Simona; Dima, Mihai

    2010-05-01

    Interannual and seasonal predictability of Danube river discharge is investigated using three model types: 1) time series models, 2) linear regression models of discharge on large-scale climate mode indices, and 3) models based on stable teleconnections. All models are calibrated using discharge and climatic data for the period 1901-1977 and validated for the period 1978-2008. Various time series models, such as autoregressive (AR), moving average (MA), autoregressive moving average (ARMA), and singular spectrum analysis combined with autoregressive moving average (SSA+ARMA) models, have been calibrated and their skills evaluated. The best results were obtained using SSA+ARMA models. SSA+ARMA models also proved to have the highest forecast skill for other European rivers (Gamiz-Fortis et al. 2008). Multiple linear regression models using large-scale climatic mode indices as predictors have a higher forecast skill than the time series models. The best predictors for Danube discharge are the North Atlantic Oscillation (NAO) and the East Atlantic/Western Russia patterns during winter and spring. Other patterns, like the Polar/Eurasian or Tropical Northern Hemisphere (TNH) patterns, are good predictors for summer and autumn discharge. Based on the stable teleconnection approach (Ionita et al. 2008), we construct prediction models from a combination of sea surface temperature (SST), temperature (T) and precipitation (PP) in the regions where discharge and SST, T and PP variations are stably correlated. Forecast skills of these models are higher than those of the time series and multiple regression models. The models calibrated and validated in our study can be used for operational prediction of interannual and seasonal Danube discharge anomalies. References: Gamiz-Fortis, S., D. Pozo-Vazquez, R.M. Trigo, and Y. Castro-Diez, Quantifying the predictability of winter river flow in Iberia. Part I: interannual predictability. J. Climate, 2484-2501, 2008. Gamiz-Fortis, S., D. Pozo-Vazquez, R.M. Trigo, and Y. Castro-Diez, Quantifying the predictability of winter river flow in Iberia. Part II: seasonal predictability. J. Climate, 2503-2518, 2008. Ionita, M., G. Lohmann, and N. Rimbu, Prediction of spring Elbe river discharge based on stable teleconnections with global temperature and precipitation. J. Climate, 6215-6226, 2008.

  7. Traffic-Related Air Pollution, Blood Pressure, and Adaptive Response of Mitochondrial Abundance.

    PubMed

    Zhong, Jia; Cayir, Akin; Trevisi, Letizia; Sanchez-Guerra, Marco; Lin, Xinyi; Peng, Cheng; Bind, Marie-Abèle; Prada, Diddier; Laue, Hannah; Brennan, Kasey J M; Dereix, Alexandra; Sparrow, David; Vokonas, Pantel; Schwartz, Joel; Baccarelli, Andrea A

    2016-01-26

    Exposure to black carbon (BC), a tracer of vehicular-traffic pollution, is associated with increased blood pressure (BP). Identifying biological factors that attenuate BC effects on BP can inform prevention. We evaluated the role of mitochondrial abundance, an adaptive mechanism compensating for cellular-redox imbalance, in the BC-BP relationship. At ≥ 1 visits among 675 older men from the Normative Aging Study (observations=1252), we assessed daily BP and ambient BC levels from a stationary monitor. To determine blood mitochondrial abundance, we used whole blood to analyze mitochondrial-to-nuclear DNA ratio (mtDNA/nDNA) using quantitative polymerase chain reaction. Every standard deviation increase in the 28-day BC moving average was associated with 1.97 mm Hg (95% confidence interval [CI], 1.23-2.72; P<0.0001) and 3.46 mm Hg (95% CI, 2.06-4.87; P<0.0001) higher diastolic and systolic BP, respectively. Positive BC-BP associations existed throughout all time windows. BC moving averages (5-day to 28-day) were associated with increased mtDNA/nDNA; every standard deviation increase in 28-day BC moving average was associated with 0.12 standard deviation (95% CI, 0.03-0.20; P=0.007) higher mtDNA/nDNA. High mtDNA/nDNA significantly attenuated the BC-systolic BP association throughout all time windows. The estimated effect of 28-day BC moving average on systolic BP was 1.95-fold larger for individuals at the lowest mtDNA/nDNA quartile midpoint (4.68 mm Hg; 95% CI, 3.03-6.33; P<0.0001), in comparison with the top quartile midpoint (2.40 mm Hg; 95% CI, 0.81-3.99; P=0.003). In older adults, short-term to moderate-term ambient BC levels were associated with increased BP and blood mitochondrial abundance. Our findings indicate that increased blood mitochondrial abundance is a compensatory response and attenuates the cardiac effects of BC. © 2015 American Heart Association, Inc.
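
    The exposure metric here is a trailing moving average of the daily pollutant series, reported per standard deviation. A minimal sketch of that step only (the variable name bc is hypothetical; the study itself used linear mixed-effects models with covariate adjustment, which is not shown):

        import pandas as pd

        # bc: hypothetical daily ambient black-carbon series indexed by date
        bc_28d = bc.rolling(window=28, min_periods=28).mean()    # 28-day moving average
        bc_28d_per_sd = (bc_28d - bc_28d.mean()) / bc_28d.std()  # scaled per SD, since effects
                                                                 # are reported per SD increase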

  8. Identification of coffee bean varieties using hyperspectral imaging: influence of preprocessing methods and pixel-wise spectra analysis.

    PubMed

    Zhang, Chu; Liu, Fei; He, Yong

    2018-02-01

    Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted with different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD). Meanwhile, spatial preprocessing of the gray-scale image at each wavelength was conducted with a median filter (MF). Support vector machine (SVM) models using full sample average spectra, pixel-wise spectra, and the optimal wavelengths selected by second-derivative spectra all achieved classification accuracy over 80%. First, the SVM models built on pixel-wise spectra were used to predict the sample average spectra, and these models obtained over 80% classification accuracy. Second, the SVM models built on sample average spectra were used to predict pixel-wise spectra, but achieved lower than 50% classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set and resulted in good prediction results for both pixel-wise spectra and sample average spectra. The overall results indicated the effectiveness of spectral preprocessing and of adopting pixel-wise spectra, and provide an alternative way of data processing for applications of hyperspectral imaging in the food industry.
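
    Of the preprocessing methods listed, moving average smoothing is the simplest to state. A minimal sketch, assuming spectrum is a 1-D NumPy array of pixel reflectance values and an arbitrarily chosen window width:

        import numpy as np

        def moving_average_smooth(spectrum, width=7):
            """Centered moving average smoothing of a 1-D spectrum."""
            kernel = np.ones(width) / width
            # zero-padding in np.convolve attenuates the first and last few points
            return np.convolve(spectrum, kernel, mode="same")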

  9. Odor-conditioned rheotaxis of the sea lamprey: modeling, analysis and validation

    USGS Publications Warehouse

    Choi, Jongeun; Jean, Soo; Johnson, Nicholas S.; Brant, Cory O.; Li, Weiming

    2013-01-01

    Mechanisms for orienting toward and locating an odor source are sought in both biology and engineering. Chemical ecology studies have demonstrated that adult female sea lamprey show rheotaxis in response to a male pheromone with dichotomous outcomes: sexually mature females locate the source of the pheromone, whereas immature females swim by the source and continue moving upstream. Here we introduce a simple switching mechanism modeled after odor-conditioned rheotaxis for the sea lamprey as they search for the source of a pheromone in a one-dimensional riverine environment. In this strategy, the females move upstream only if they detect that the pheromone concentration is higher than a threshold value, and drift downstream (turning off the control action to save energy) otherwise. In addition, we propose various uncertainty models such as measurement noise, actuator disturbance, and a probabilistic model of a concentration field in turbulent flow. Based on the proposed model with uncertainties, a convergence analysis showed that with this simplistic switching mechanism the lamprey converges to the source location on average in spite of all such uncertainties. Furthermore, a slightly modified model and its extensive simulation results explain the behaviors of immature female lamprey near the source location.
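
    The switching rule itself can be stated in a few lines. The sketch below is a toy one-dimensional, discrete-time version; the concentration field, threshold, speeds and noise level are illustrative assumptions, not values from the paper:

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_lamprey(x0=100.0, threshold=0.2, steps=2000,
                             v_up=1.0, v_drift=0.5, noise_sd=0.05):
            """Toy odor-conditioned rheotaxis: swim upstream while the noisy sensed
            concentration exceeds a threshold, otherwise drift downstream.
            x is the downstream distance from the source (x = 0 at the source)."""
            x = x0
            for _ in range(steps):
                conc = 1.0 / (1.0 + abs(x))                  # assumed concentration field
                sensed = conc + rng.normal(0.0, noise_sd)    # measurement noise
                x += -v_up if sensed > threshold else v_drift
            return x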

  10. Decomposition Analyses Applied to a Complex Ultradian Biorhythm: The Oscillating NADH Oxidase Activity of Plasma Membranes Having a Potential Time-Keeping (Clock) Function

    PubMed Central

    Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James

    2003-01-01

    Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating activity for a plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Accuracy was evaluated with three measured values: the average percentage error (MAPE), a measure of the periodic oscillation; the mean average deviation (MAD), a measure of the absolute average deviations from the fitted values; and the mean standard deviation (MSD), a measure of the standard deviation from the fitted values; together with R-squared and the Henriksson-Merton p value. Decomposition was carried out by fitting a trend line to the data and then, if necessary, detrending the data by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
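
    A minimal sketch of the detrend-then-smooth steps described above, under assumed names and sampling: activity is a pandas Series of oxidase readings, and the 24-min period is assumed to correspond to 16 samples (one reading every 1.5 min), which in practice would come from the prior Fourier analysis:

        import numpy as np
        import pandas as pd

        period = 16                                                # samples per 24-min period (assumed)

        t = np.arange(len(activity))
        trend = np.polyval(np.polyfit(t, activity.values, 1), t)   # fitted linear trend
        detrended = activity - trend                               # detrend if necessary

        # subtract a centered moving average of length equal to the period
        smooth = detrended.rolling(window=period, center=True).mean()
        cyclic_plus_error = detrended - smooth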

  11. Intimate partner violence in Madrid: a time series analysis (2008-2016).

    PubMed

    Sanz-Barbero, Belén; Linares, Cristina; Vives-Cases, Carmen; González, José Luis; López-Ossorio, Juan José; Díaz, Julio

    2018-06-02

    This study analyzes whether there are time patterns in different intimate partner violence (IPV) indicators and aims to obtain models that can predict the behavior of these time series. Univariate autoregressive moving average models were used to analyze the time series corresponding to the number of daily calls to the 016 telephone IPV helpline and the number of daily police reports filed in the Community of Madrid during the period 2008-2015. Predictions were made for both dependent variables for 2016. The daily number of calls to the 016 telephone IPV helpline decreased during January 2008-April 2012 and increased during April 2012-December 2015. No statistically significant change was observed in the trend of the number of daily IPV police reports. The number of IPV police reports filed increased on weekends and on Christmas holidays. The number of calls to the 016 IPV help line increased on Mondays. Using data from 2008 to 2015, the univariate autoregressive moving average models predicted 64.2% of calls to the 016 telephone IPV helpline and 73.2% of police reports filed during 2016 in the Community of Madrid. Our results suggest the need for an increase in police and judicial resources on nonwork days. Also, the 016 telephone IPV helpline should be especially active on work days. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Visualizing disease associations: graphic analysis of frequency distributions as a function of age using moving average plots (MAP) with application to Alzheimer's and Parkinson's disease.

    PubMed

    Payami, Haydeh; Kay, Denise M; Zabetian, Cyrus P; Schellenberg, Gerard D; Factor, Stewart A; McCulloch, Colin C

    2010-01-01

    Age-related variation in marker frequency can be a confounder in association studies, leading to both false-positive and false-negative findings and subsequently to inconsistent reproducibility. We have developed a simple method, based on a novel extension of moving average plots (MAP), which allows investigators to inspect the frequency data for hidden age-related variations. MAP uses the standard case-control association data and generates a bird's-eye view of the frequency distributions across the age spectrum; a picture in which one can see if, how, and when the marker frequencies in cases differ from those in controls. The marker can be specified as an allele, genotype, haplotype, or environmental factor; and age can be age-at-onset, age when subject was last known to be unaffected, or duration of exposure. Signature patterns that emerge can help distinguish true disease associations from spurious associations due to age effects, age-varying associations from associations that are uniform across all ages, and associations with risk from associations with age-at-onset. Utility of MAP is illustrated by application to genetic and epidemiological association data for Alzheimer's and Parkinson's disease. MAP is intended as a descriptive method, to complement standard statistical techniques. Although originally developed for age patterns, MAP is equally useful for visualizing any quantitative trait.
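
    The core computation behind such a plot is a moving average of marker frequency along the age axis, done separately for cases and controls. A hedged sketch (column names, group labels and window width are assumptions):

        import pandas as pd

        def map_curve(df, window=15):
            """Moving-average frequency of a 0/1 marker across age.
            df: DataFrame with assumed columns 'age' and 'marker';
            window: number of neighbouring subjects averaged at each point."""
            d = df.sort_values("age")
            return d.set_index("age")["marker"].rolling(window, center=True).mean()

        case_curve = map_curve(subjects[subjects["status"] == "case"])
        ctrl_curve = map_curve(subjects[subjects["status"] == "control"])
        # plotting the two curves against age gives the MAP-style view described above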

  13. A framework for activity detection in wide-area motion imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Reid B; Ruggiero, Christy E; Morrison, Jack D

    2009-01-01

    Wide-area persistent imaging systems are becoming increasingly cost effective and now large areas of the earth can be imaged at relatively high frame rates (1-2 fps). The efficient exploitation of the large geo-spatial-temporal datasets produced by these systems poses significant technical challenges for image and video analysis and data mining. In recent years there has been significant progress made on stabilization, moving object detection and tracking, and automated systems now generate hundreds to thousands of vehicle tracks from raw data with little human intervention. However, the tracking performance at this scale is unreliable and the average track length is much smaller than the average vehicle route. This is a limiting factor for applications which depend heavily on track identity, i.e. tracking vehicles from their points of origin to their final destination. In this paper we propose and investigate a framework for wide-area motion imagery (WAMI) exploitation that minimizes the dependence on track identity. In its current form this framework takes noisy, incomplete moving object detection tracks as input, and produces a small set of activities (e.g. multi-vehicle meetings) as output. The framework can be used to focus and direct human users and additional computation, and suggests a path towards high-level content extraction by learning from the human-in-the-loop.

  14. The Healthy LifeWorks Project: a pilot study of the economic analysis of a comprehensive workplace wellness program in a Canadian government department.

    PubMed

    Makrides, Lydia; Smith, Steven; Allt, Jane; Farquharson, Jane; Szpilfogel, Claudine; Curwin, Sandra; Veinot, Paula; Wang, Feifei; Edington, Dee

    2011-07-01

    To examine the relationship between health risks and absenteeism and drug costs vis-a-vis comprehensive workplace wellness. Eleven health risks and changes in drug claims and in short-term and general illness absence were calculated across four risk-change groups. Wellness scores were examined using the Wilcoxon test and a regression model for cost change. The results showed 31% at risk; 9 of 11 risks were associated with higher drug costs. Employees moving from low to high risk showed the highest relative increase (81%) in drug costs; those moving from high to low had the lowest (24%). The low-to-high group had the highest increase in absenteeism costs (160%). With each risk increase, absenteeism costs increased by $CDN248 per year (P < 0.05), with an average decrease of 0.07 risk factors and savings of $CDN6979 per year. Both high-risk reduction and low-risk maintenance are important to contain drug costs. Only low-risk maintenance also avoids the absenteeism costs associated with high risks.

  15. Structured Overlapping Grid Simulations of Contra-rotating Open Rotor Noise

    NASA Technical Reports Server (NTRS)

    Housman, Jeffrey A.; Kiris, Cetin C.

    2015-01-01

    Computational simulations using structured overlapping grids with the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for predicting tonal noise generated by a contra-rotating open rotor (CROR) propulsion system. A coupled Computational Fluid Dynamics (CFD) and Computational AeroAcoustics (CAA) numerical approach is applied. Three-dimensional time-accurate hybrid Reynolds Averaged Navier-Stokes/Large Eddy Simulation (RANS/LES) CFD simulations are performed in the inertial frame, including dynamic moving grids, using a higher-order accurate finite difference discretization on structured overlapping grids. A higher-order accurate free-stream preserving metric discretization with discrete enforcement of the Geometric Conservation Law (GCL) on moving curvilinear grids is used to create an accurate, efficient, and stable numerical scheme. The aeroacoustic analysis is based on a permeable surface Ffowcs Williams-Hawkings (FW-H) approach, evaluated in the frequency domain. A time-step sensitivity study was performed using only the forward row of blades to determine an adequate time-step. The numerical approach is validated against existing wind tunnel measurements.

  16. The Impact of Critical Thinking on Clinical Judgment During Simulation With Senior Nursing Students.

    PubMed

    Cazzell, Mary; Anderson, Mindi

    2016-01-01

    The study examined the impact of critical thinking (CT) on clinical judgment (CJ) during a pediatric Objective Structured Clinical Evaluation (OSCE) with 160 pre-licensure nursing students. Educators are called to transform teaching strategies to develop CJ, but confusion exists over definitions. A descriptive correlational design was used to examine demographics and Tower of Hanoi (TOH) and Health Science Reasoning Test (HSRT) scores. CJ was measured by scores on the Lasater Clinical Judgment Rubric (LCJR) from videotaped OSCEs. Participants were 86 percent female and 42 percent Caucasian, with a median age of 23 years and 49 percent having health care experience. Students averaged seven moves over the minimum on the TOH. Average scores were HSRT 25/38 and LCJR 31/44. Statistically significant predictors of CJ were gender, ethnicity, and HSRT deduction and analysis; 11 CT variables accounted for 17 percent of LCJR scores. Educators need to utilize or develop innovative teaching strategies addressing CJ predictors.

  17. Weather variability and the incidence of cryptosporidiosis: comparison of time series poisson regression and SARIMA models.

    PubMed

    Hu, Wenbiao; Tong, Shilu; Mengersen, Kerrie; Connell, Des

    2007-09-01

    Few studies have examined the relationship between weather variables and cryptosporidiosis in Australia. This paper examines the potential impact of weather variability on the transmission of cryptosporidiosis and explores the possibility of developing an empirical forecast system. Data on weather variables, notified cryptosporidiosis cases, and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively, for the period January 1, 1996-December 31, 2004. Time series Poisson regression and seasonal autoregressive integrated moving average (SARIMA) models were used to examine the potential impact of weather variability on the transmission of cryptosporidiosis. Both the time series Poisson regression and SARIMA models show that seasonal and monthly maximum temperature, at a prior moving average of 1 and 3 months, were significantly associated with cryptosporidiosis. This suggests that there may be 50 more cases a year for an increase of 1°C in maximum temperature, on average, in Brisbane. Model assessments indicated that the SARIMA model had better predictive ability than the Poisson regression model (SARIMA: root mean square error (RMSE): 0.40, Akaike information criterion (AIC): -12.53; Poisson regression: RMSE: 0.54, AIC: -2.84). Furthermore, the analysis of residuals shows that the time series Poisson regression appeared to violate a modeling assumption, in that residual autocorrelation persisted. The results of this study suggest that weather variability (particularly maximum temperature) may have played a significant role in the transmission of cryptosporidiosis. A SARIMA model may be a better predictive model than a Poisson regression model in the assessment of the relationship between weather variability and the incidence of cryptosporidiosis.
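
    A sketch of the Poisson-regression arm of such a comparison is below (hypothetical monthly data with assumed column names; the SARIMA arm could be fitted along the lines of the SARIMAX sketch under record 20 below, and neither snippet reproduces the covariate set or seasonal terms of the actual study):

        import pandas as pd
        import statsmodels.api as sm

        # monthly: hypothetical DataFrame with columns 'cases' and 'tmax' (monthly max temperature)
        monthly["tmax_ma3"] = monthly["tmax"].rolling(3).mean().shift(1)  # prior 3-month moving average
        d = monthly.dropna()

        X = sm.add_constant(d[["tmax_ma3"]])
        poisson_fit = sm.GLM(d["cases"], X, family=sm.families.Poisson()).fit()
        print(poisson_fit.summary())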

  18. Associations between air pollution and perceived stress: the Veterans Administration Normative Aging Study.

    PubMed

    Mehta, Amar J; Kubzansky, Laura D; Coull, Brent A; Kloog, Itai; Koutrakis, Petros; Sparrow, David; Spiro, Avron; Vokonas, Pantel; Schwartz, Joel

    2015-01-27

    There is mixed evidence suggesting that air pollution may be associated with increased risk of developing psychiatric disorders. We aimed to investigate the association between air pollution and non-specific perceived stress, often a precursor to development of affective psychiatric disorders. This longitudinal analysis consisted of 987 older men participating in at least one visit for the Veterans Administration Normative Aging Study between 1995 and 2007 (n = 2,244 visits). At each visit, participants were administered the 14-item Perceived Stress Scale (PSS), which quantifies stress experienced in the previous week. Scores ranged from 0-56, with higher scores indicating increased stress. Differences in PSS score per interquartile range increase in moving averages (1, 2, and 4 weeks) of air pollution exposures were estimated using linear mixed-effects regression after adjustment for age, race, education, physical activity, anti-depressant medication use, seasonality, meteorology, and day of week. We also evaluated effect modification by season (April-September and October-March for the warm and cold seasons, respectively). Fine particles (PM2.5), black carbon (BC), nitrogen dioxide, and particle number counts (PNC) at moving averages of 1, 2, and 4 weeks were associated with higher perceived stress ratings. The strongest associations were observed for PNC; for example, a 15,997 counts/cm3 interquartile range increase in 1-week average PNC was associated with a 3.2 point (95% CI: 2.1-4.3) increase in PSS score. Season modified the associations for specific pollutants; higher PSS scores in association with PM2.5, BC, and sulfate were observed mainly in colder months. Air pollution was associated with higher levels of perceived stress in this sample of older men, particularly in colder months for specific pollutants.

  19. Least Squares Moving-Window Spectral Analysis.

    PubMed

    Lee, Young Jong

    2017-08-01

    Least squares regression is proposed as a moving-windows method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of the Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high frequency noise.
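
    The underlying idea is a least-squares slope estimated in a sliding window, which copes naturally with nonuniform perturbation spacing. A minimal sketch, not the authors' code; the window size and array names are assumptions:

        import numpy as np

        def lsmw_derivative(perturbation, spectra, window=5):
            """Least-squares moving-window first derivative of spectra with respect to
            the perturbation variable. perturbation: shape (n,), possibly nonuniform;
            spectra: shape (n, n_wavelengths)."""
            n = len(perturbation)
            half = window // 2
            deriv = np.full(spectra.shape, np.nan)
            for i in range(half, n - half):
                x = perturbation[i - half:i + half + 1]
                y = spectra[i - half:i + half + 1]
                deriv[i] = np.polyfit(x, y, 1)[0]     # per-wavelength least-squares slope
            return deriv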

  20. Forecasting and prediction of scorpion sting cases in Biskra province, Algeria, using a seasonal autoregressive integrated moving average model

    PubMed Central

    2016-01-01

    OBJECTIVES The aims of this study were to highlight some epidemiological aspects of scorpion envenomations, to analyse and interpret the available data for Biskra province, Algeria, and to develop a forecasting model for scorpion sting cases in Biskra province, which records the highest number of scorpion stings in Algeria. METHODS In addition to analysing the epidemiological profile of scorpion stings that occurred throughout the year 2013, we used the Box-Jenkins approach to fit a seasonal autoregressive integrated moving average (SARIMA) model to the monthly recorded scorpion sting cases in Biskra from 2000 to 2012. RESULTS The epidemiological analysis revealed that scorpion stings were reported continuously throughout the year, with peaks in the summer months. The most affected age group was 15 to 49 years old, with a male predominance. The most prone human body areas were the upper and lower limbs. The majority of cases (95.9%) were classified as mild envenomations. The time series analysis showed that a (5,1,0)×(0,1,1)12 SARIMA model offered the best fit to the scorpion sting surveillance data. This model was used to predict scorpion sting cases for the year 2013, and the fitted data showed considerable agreement with the actual data. CONCLUSIONS SARIMA models are useful for monitoring scorpion sting cases, and provide an estimate of the variability to be expected in future scorpion sting cases. This knowledge is helpful in predicting whether an unusual situation is developing or not, and could therefore assist decision-makers in strengthening the province’s prevention and control measures and in initiating rapid response measures. PMID:27866407
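
    A hedged sketch of fitting the reported (5,1,0)x(0,1,1)12 specification with statsmodels; the monthly case-series name is an assumption, and the original analysis arrived at this order through Box-Jenkins identification rather than fixing it up front:

        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # cases: hypothetical monthly scorpion-sting counts for 2000-2012
        model = SARIMAX(cases, order=(5, 1, 0), seasonal_order=(0, 1, 1, 12))
        fit = model.fit(disp=False)
        forecast_2013 = fit.get_forecast(steps=12).predicted_mean  # monthly predictions for 2013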

  1. The economic impact of a smoke-free bylaw on restaurant and bar sales in Ottawa, Canada.

    PubMed

    Luk, Rita; Ferrence, Roberta; Gmel, Gerhard

    2006-05-01

    On 1 August 2001, the City of Ottawa (Canada's Capital) implemented a smoke-free bylaw that completely prohibited smoking in work-places and public places, including restaurants and bars, with no exemption for separately ventilated smoking rooms. This paper evaluates the effects of this bylaw on restaurant and bar sales. DATA AND MEASURES: We used retail sales tax data from March 1998 to June 2002 to construct two outcome measures: the ratio of licensed restaurant and bar sales to total retail sales and the ratio of unlicensed restaurant sales to total retail sales. Restaurant and bar sales were subtracted from total retail sales in the denominator of these measures. We employed an interrupted time-series design. Autoregressive integrated moving average (ARIMA) intervention analysis was used to test for three possible impacts that the bylaw might have on the sales of restaurants and bars. We repeated the analysis using regression with autoregressive moving average (ARMA) errors method to triangulate our results. Outcome measures showed declining trends at baseline before the bylaw went into effect. Results from ARIMA intervention and regression analyses did not support the hypotheses that the smoke-free bylaw had an impact that resulted in (1) abrupt permanent, (2) gradual permanent or (3) abrupt temporary changes in restaurant and bar sales. While a large body of research has found no significant adverse impact of smoke-free legislation on restaurant and bar sales in the United States, Australia and elsewhere, our study confirms these results in a northern region with a bilingual population, which has important implications for impending policy in Europe and other areas.

  2. Tracking Movements of Individual Anoplophora glabripennis (Coleoptera: Cerambycidae) Adults: Application of Harmonic Radar

    Treesearch

    David W. Williams; Guohong Li; Ruitong Gao

    2004-01-01

    Movements of 55 Anoplophora glabripennis (Motschulsky) adults were monitored on 200 willow trees, Salix babylonica L., at a site appx. 80 km southeast of Beijing, China, for 9-14 d in an individual mark-recapture study using harmonic radar. The average movement distance was appx. 14 m, with many beetles not moving at all and others moving >90 m. The rate of movement...

  3. Beyond Horse Race Comparisons of National Performance Averages: Math Performance Variation within and between Classrooms in 38 Countries

    ERIC Educational Resources Information Center

    Huang, Min-Hsiung

    2009-01-01

    Reports of international studies of student achievement often receive public attention worldwide. However, this attention overly focuses on the national rankings of average student performance. To move beyond the simplistic comparison of national mean scores, this study investigates (a) country differences in the measures of variability as well as…

  4. A novel approach to estimate emissions from large transportation networks: Hierarchical clustering-based link-driving-schedules for EPA-MOVES using dynamic time warping measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, H. M. Abdul; Ukkusuri, Satish V.

    EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution while underestimating the emissions in most cases because it ignores the speed variation at congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW on sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.

  5. A novel approach to estimate emissions from large transportation networks: Hierarchical clustering-based link-driving-schedules for EPA-MOVES using dynamic time warping measures

    DOE PAGES

    Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2017-06-29

    EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution while underestimating the emissions in most cases because it ignores the speed variation at congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW on sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.

  6. Simulations of moving effect of coastal vegetation on tsunami damping

    NASA Astrophysics Data System (ADS)

    Tsai, Ching-Piao; Chen, Ying-Chi; Octaviani Sihombing, Tri; Lin, Chang

    2017-05-01

    A coupled wave-vegetation simulation is presented for the moving effect of coastal vegetation on tsunami wave height damping. The problem is idealized as solitary wave propagation over a group of emergent cylinders. The numerical model is based on the general Reynolds-averaged Navier-Stokes equations with a renormalization group turbulence closure model, using the volume-of-fluid technique. The general moving object (GMO) model developed in the computational fluid dynamics (CFD) code Flow-3D is applied to dynamically simulate the coupled motion of the vegetation with the waves. The damping of wave height and the turbulent kinetic energy along moving and stationary cylinders are discussed. The simulated results show that the damping of wave height and the turbulent kinetic energy by the moving cylinders is clearly less than that by the stationary cylinders. The result implies that wave decay by coastal vegetation may be overestimated if the vegetation is represented as stationary.

  7. Analysis of concentric and eccentric contractions in biceps brachii muscles using surface electromyography signals and multifractal analysis.

    PubMed

    Marri, Kiran; Swaminathan, Ramakrishnan

    2016-06-23

    Muscle contractions can be categorized into isometric, isotonic (concentric and eccentric) and isokinetic contractions. The eccentric contractions are very effective for promoting muscle hypertrophy and produce larger forces when compared to the concentric or isometric contractions. Surface electromyography signals are widely used for analyzing muscle activities. These signals are nonstationary, nonlinear and exhibit self-similar multifractal behavior. The research on surface electromyography signals using multifractal analysis is not well established for concentric and eccentric contractions. In this study, an attempt has been made to analyze the concentric and eccentric contractions associated with biceps brachii muscles using surface electromyography signals and multifractal detrended moving average algorithm. Surface electromyography signals were recorded from 20 healthy individuals while performing a single curl exercise. The preprocessed signals were divided into concentric and eccentric cycles and in turn divided into phases based on range of motion: lower (0°-90°) and upper (>90°). The segments of surface electromyography signal were subjected to multifractal detrended moving average algorithm, and multifractal features such as strength of multifractality, peak exponent value, maximum exponent and exponent index were extracted in addition to conventional linear features such as root mean square and median frequency. The results show that surface electromyography signals exhibit multifractal behavior in both concentric and eccentric cycles. The mean strength of multifractality increased by 15% in eccentric contraction compared to concentric contraction. The lowest and highest exponent index values are observed in the upper concentric and lower eccentric contractions, respectively. The multifractal features are observed to be helpful in differentiating surface electromyography signals along the range of motion as compared to root mean square and median frequency. It appears that these multifractal features extracted from the concentric and eccentric contractions can be useful in the assessment of surface electromyography signals in sports medicine and training and also in rehabilitation programs. © IMechE 2016.

  8. SU-E-T-538: Does Abdominal Compression Through Prone Patient Position Reduce Respiratory Motion in Lung Cancer Radiotherapy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catron, T; Rosu, M; Weiss, E

    2014-06-01

    Purpose: This study assesses the effect of physiological abdominal compression from prone positioning by comparing respiratory-induced tumor movements in supine and prone positions. Methods: 19 lung cancer patients underwent repeated supine and prone free-breathing 4DCT scans. The effect of patient position on motion magnitude was investigated for tumors, lymph nodes (9 cases), and subgroups of central (11 cases), peripheral (8 cases) and small peripheral tumors (5 cases), by evaluating the population average excursions, absolute and relative to a carina point. Results: Absolute motion analysis: In prone, motion increased by ~20% for tumors and ~25% for lymph nodes. Central tumors moved more than peripheral tumors in both supine and prone positions (~22% and ~4%, respectively). Central tumor movement increased by ~12% in prone. For peripheral tumors the increase in prone position was ~25% (~40% and ~29% changes along the RL and AP directions). Motion relative to carina-point analysis: Overall, tumor excursions relative to the carina point increased by ~17% in prone. Lymph node relative magnitudes were lower by ~4%. Likewise, the central tumors moved ~7% less in prone. The subgroup of peripheral tumors exhibited amplitudes increased by ~44%; the small peripheral tumors had even larger relative displacements in prone (~46%). Conclusion: Tumor and lymph node movement in the patient population from this study averaged higher in prone than in supine position. Results from the carina analysis also suggest that peripheral tissues have more physiologic freedom of motility when placed in the prone position, regardless of size. From these observations we should continue to avoid prone positioning for all types of primary lung tumor, suggesting that patients should receive radiotherapy for primary lung cancer in the supine position to minimize target tissue mobility during normal respiratory effort. Further investigation will include more patients with peripheral tumors to validate our observations.

  9. Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults

    NASA Astrophysics Data System (ADS)

    Abdelrahman, E. M.; Essa, K. S.

    2015-02-01

    We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained when errors are given in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined approximately. In the case of practical data (Bouguer anomaly over Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.

  10. A monitoring tool for performance improvement in plastic surgery at the individual level.

    PubMed

    Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J

    2013-05-01

    The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
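
    An exponentially weighted moving average chart of this kind reduces to a one-line recursion. The sketch below (assumed names, arbitrary smoothing constant and limits) shows only the unadjusted chart, without the case-mix and experience adjustment used in the study:

        import pandas as pd

        # op_times: hypothetical operative times (minutes) in chronological order
        lam = 0.1                                              # EWMA smoothing constant (assumed)
        ewma = op_times.ewm(alpha=lam, adjust=False).mean()

        center = op_times.mean()
        sigma = op_times.std()
        halfwidth = 3 * sigma * (lam / (2 - lam)) ** 0.5       # asymptotic 3-sigma control limits
        out_of_control = (ewma > center + halfwidth) | (ewma < center - halfwidth)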

  11. Optimization and validation of moving average quality control procedures using bias detection curves and moving average validation charts.

    PubMed

    van Rossum, Huub H; Kemperman, Hans

    2017-02-01

    To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combination of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various bias. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
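
    A minimal sketch of the simulation step behind a bias detection curve (all settings are illustrative; an MA procedure here is a truncation range, a calculation algorithm and a control range):

        import numpy as np

        def results_to_detection(baseline, new_results, bias, window=20,
                                 trunc=(130.0, 150.0), limits=(138.0, 142.0)):
            """Count how many biased results are needed before the moving average of
            truncated values crosses a control limit (illustrative settings only)."""
            buffer = list(baseline[-window:])                   # seed with in-control results
            for i, x in enumerate(new_results, start=1):
                x = min(max(x + bias, trunc[0]), trunc[1])      # introduce bias, then truncate
                buffer.append(x)
                if not limits[0] <= np.mean(buffer[-window:]) <= limits[1]:
                    return i                                    # results needed for detection
            return None                                         # bias not detected

    Repeating this over many resampled result streams and plotting the median count against the introduced bias gives a bias detection curve of the kind described above.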

  12. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
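
    The Poisson step of the argument is easy to reproduce: with an expected count of roughly 7 VEI>=4 eruptions per decade, the probability of at least one such event follows directly (a sketch; the rates for the VEI>=5 and VEI>=6 classes are not restated here):

        from scipy.stats import poisson

        rate_vei4 = 7.0                                   # expected VEI>=4 eruptions per decade
        p_at_least_one = 1.0 - poisson.pmf(0, rate_vei4)  # same as poisson.sf(0, rate_vei4)
        print(round(p_at_least_one, 4))                   # > 0.99, matching the stated likelihood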

  13. MOVES sensitivity analysis update : Transportation Research Board Summer Meeting 2012 : ADC-20 Air Quality Committee

    DOT National Transportation Integrated Search

    2012-01-01

    OVERVIEW OF PRESENTATION : Evaluation Parameters : EPA's Sensitivity Analysis : Comparison to Baseline Case : MOVES Sensitivity Run Specification : MOVES Sensitivity Input Parameters : Results : Uses of Study

  14. Detailed analysis of particle launch velocities, size distributions and gas densities during normal explosions at Stromboli

    NASA Astrophysics Data System (ADS)

    Harris, Andrew J. L.; Ripepe, Maurizio; Hughes, Elizabeth A.

    2012-06-01

    Using high frame rate (33 Hz) thermal video data we describe and parameterize the emission and ascent dynamics of a mixed plume of gas and particles emitted during a normal explosion at Stromboli (Aeolian Islands, Italy). Analysis of 34 events showed that 31 of them were characterized by a first phase consisting of an initial diffuse spray of relatively small (lapilli-sized) particles moving at high velocities (up to 213 m s^-1; average 66-82 m s^-1). This was followed, typically within 0.1 s, by a burst comprising a mixture of ash and lapilli, but dominated by larger bomb-sized particles, moving at lower exit velocities of up to 129 m s^-1, but typically 46 m s^-1. We interpret these results as revealing initial emission of a previously unrecorded high-velocity gas-jet phase, to which the lapilli are coupled. This is followed by emission of slower moving larger particles that are decoupled from the faster moving gas phase. Diameters for particles carried by the gas phase are typically around 4 cm, but can be up to 9 cm, with the diameter of the particles carried by the gas jet (D) decreasing with increased density and velocity of the erupted gas cloud (ρ_gas and U_gas). Data for 101 particles identified as moving with the gas jet during 32 eruptions allow us to define a new relation, whereby U_gas = U_particle + a [ρ_gas √D]^b. Here, U_particle is the velocity of bombs whose motion is decoupled from that of the gas cloud, and a and b are two empirically derived coefficients. This replaces the old relation, whereby U_gas = U_particle + k √D; a relation that requires a constant gas density for each eruption. This is an assumption that we show to be invalid, with gas density potentially varying between 0.04 kg m^-3 and 9 kg m^-3 for the 32 cases considered, so that k varies between 54 m^(1/2) s^-1 and 828 m^(1/2) s^-1, compared with the traditionally used constant of 150 m^(1/2) s^-1.

  15. Environmental Assessment: Installation Development at Sheppard Air Force Base, Texas

    DTIC Science & Technology

    2007-05-01

    column, or in topographic depressions. Water is then utilized by plants and is respired, or it moves slowly into groundwater and/or eventually to surface water bodies, where it slowly moves through the hydrologic cycle. Removal of vegetation decreases infiltration into the soil column and thereby... Abbreviations (excerpt): JP-4, jet propulsion fuel 4; kts, knots; Ldn, Day-Night Average Sound Level; Leq, equivalent noise level; Lmax, maximum sound level; lb, pound

  16. Studies in astronomical time series analysis: Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA model for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
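
    The AR-to-MA step mentioned here can be illustrated with standard tools. A hedged sketch under assumed names (series flux, AR order 3), fitting an AR model and expanding it into its equivalent moving-average (pulse) representation:

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg
        from statsmodels.tsa.arima_process import arma2ma

        # flux: hypothetical evenly sampled light-curve values
        ar_fit = AutoReg(flux, lags=3).fit()
        ar_coefs = np.asarray(ar_fit.params)[1:]              # AR coefficients (constant excluded)

        # first 20 MA weights (impulse response) of the fitted AR process
        ma_weights = arma2ma(np.r_[1.0, -ar_coefs], np.array([1.0]), 20)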

  17. The application of time series models to cloud field morphology analysis

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.

    1987-01-01

    A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.

  18. GUMAP: A GUPIXWIN-compatible code for extracting regional spectra from nuclear microbeam list mode files

    NASA Astrophysics Data System (ADS)

    Russell, John L.; Campbell, John L.; Boyd, Nicholas I.; Dias, Johnny F.

    2018-02-01

    The newly developed GUMAP software creates element maps from OMDAQ list mode files, displays these maps individually or collectively, and facilitates on-screen definitions of specified regions from which a PIXE spectrum can be built. These include a free-hand region defined by moving the cursor. The regional charge is entered automatically into the spectrum file in a new GUPIXWIN-compatible format, enabling a GUPIXWIN analysis of the spectrum. The code defaults to the OMDAQ dead time treatment but also facilitates two other methods for dead time correction in sample regions with count rates different from the average.

  19. Assessing air quality in Aksaray with time series analysis

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Kadilar, Cem

    2017-04-01

    Sulphur dioxide (SO2) is a major air pollutant caused by the dominant use of diesel, petrol and other fuels by vehicles and industry. One of the most air-polluted cities in Turkey is Aksaray. Hence, in this study, the level of SO2 in Aksaray is analyzed based on the data monitored at an air quality monitoring station in Turkey. The Seasonal Autoregressive Integrated Moving Average (SARIMA) approach is used to forecast the level of the SO2 air quality parameter. The results indicate that the seasonal ARIMA model provides reliable and satisfactory predictions for the air quality parameters and is expected to be an alternative tool for practical assessment and justification.

  20. Gauging the Nearness and Size of Cycle Maximum

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2003-01-01

    A simple method for monitoring the nearness and size of conventional cycle maximum for an ongoing sunspot cycle is examined. The method uses the observed maximum daily value and the maximum monthly mean value of international sunspot number and the maximum value of the 2-mo moving average of monthly mean sunspot number to effect the estimation. For cycle 23, a maximum daily value of 246, a maximum monthly mean of 170.1, and a maximum 2-mo moving average of 148.9 were each observed in July 2000. Taken together, these values strongly suggest that conventional maximum amplitude for cycle 23 would be approx. 124.5, occurring near July 2000 +/- 5 mo, very close to the now well-established conventional maximum amplitude and occurrence date for cycle 23 (120.8 in April 2000).

  1. Air quality at night markets in Taiwan.

    PubMed

    Zhao, Ping; Lin, Chi-Chi

    2010-03-01

    In Taiwan, there are more than 300 night markets and they have attracted more and more visitors in recent years. Air quality in night markets has become a public concern. To characterize the current air quality in night markets, four major night markets in Kaohsiung were selected for this study. The results of this study showed that the mean carbon dioxide (CO2) concentrations at fixed and moving sites in night markets ranged from 326 to 427 parts per million (ppm) during non-open hours and from 433 to 916 ppm during open hours. The average carbon monoxide (CO) concentrations at fixed and moving sites in night markets ranged from 0.2 to 2.8 ppm during non-open hours and from 2.1 to 14.1 ppm during open hours. The average 1-hr levels of particulate matter with aerodynamic diameters less than 10 μm (PM10) and less than 2.5 μm (PM2.5) at fixed and moving sites in night markets were high, ranging from 186 to 451 μg/m3 and from 175 to 418 μg/m3, respectively. The levels of PM2.5 accounted for 80-97% of their respective PM10 concentrations. The average formaldehyde (HCHO) concentrations at fixed and moving sites in night markets ranged from 0 to 0.05 ppm during non-open hours and from 0.02 to 0.27 ppm during open hours. The average concentration of individual polycyclic aromatic hydrocarbons (PAHs) was found in the range of 0.09 x 10^4 to 1.8 x 10^4 ng/m3. The total identified PAHs (TIPs) ranged from 7.8 x 10^1 to 20 x 10^1 ng/m3 during non-open hours and from 1.5 x 10^4 to 4.0 x 10^4 ng/m3 during open hours. Of the total analyzed PAHs, the low-molecular-weight PAHs (two to three rings) were the dominant species, corresponding to an average of 97% during non-open hours and 88% during open hours, whereas high-molecular-weight PAHs (four to six rings) represented 3 and 12% of the total detected PAHs in the gas phase during non-open and open hours, respectively.

  2. Nonlinear filtering properties of detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-11-01

    Detrended fluctuation analysis (DFA) has been widely used for quantifying long-range correlation and fractal scaling behavior. In DFA, to avoid spurious detection of scaling behavior caused by a nonstationary trend embedded in the analyzed time series, a detrending procedure using piecewise least-squares fitting has been applied. However, it has been pointed out that the nonlinear filtering properties involved with detrending may induce instabilities in the scaling exponent estimation. To understand this issue, we investigate the adverse effects of the DFA detrending procedure on the statistical estimation. We show that the detrending procedure using piecewise least-squares fitting results in the nonuniformly weighted estimation of the root-mean-square deviation and that this property could induce an increase in the estimation error. In addition, for comparison purposes, we investigate the performance of a centered detrending moving average analysis with a linear detrending filter and sliding window DFA and show that these methods have better performance than the standard DFA.
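
    For reference, the centered detrending moving average statistic discussed here reduces to a few lines. A minimal sketch under assumed names, where the fluctuation function at scale n is the RMS deviation of the integrated series from its centered moving average; the scaling exponent is then the slope of log F(n) versus log n over a range of scales:

        import numpy as np
        import pandas as pd

        def dma_fluctuation(x, n):
            """Centered detrending moving average fluctuation F(n) at scale n."""
            profile = np.cumsum(np.asarray(x) - np.mean(x))            # integrated series
            ma = pd.Series(profile).rolling(window=n, center=True).mean().to_numpy()
            resid = profile - ma                                       # detrended by the moving average
            return np.sqrt(np.nanmean(resid ** 2))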

  3. Are pound and euro the same currency?

    NASA Astrophysics Data System (ADS)

    Matsushita, Raul; Gleria, Iram; Figueiredo, Annibal; da Silva, Sergio

    2007-08-01

    Based on long-range dependence, some analysts claim that the exchange rate time series of the pound sterling and of an artificially extended euro have been locked together for years despite daily changes [M. Ausloos, K. Ivanova, Physica A 286 (2000) 353; K. Ivanova, M. Ausloos, False EUR exchange rates vs DKK, CHF, JPY and USD. What is a strong currency? in: H. Takayasu (Ed.), Empirical Sciences in Financial Fluctuations: The Advent of Econophysics, Springer-Verlag, Berlin, 2002, pp. 62-76]. They conclude that pound and euro are in practice the same currency. We assess the long-range dependence over time through Hurst exponents of pound-dollar and extended euro-dollar exchange rates employing three alternative techniques, namely rescaled range analysis, detrended fluctuation analysis, and detrended moving average. We find the result above (which is based on detrended fluctuation analysis) not to be robust to changing techniques and parameterization.

  4. [Spatiotemporal variation characteristics and related affecting factors of actual evapotranspiration in the Hun-Taizi River Basin, Northeast China].

    PubMed

    Feng, Xue; Cai, Yan-Cong; Guan, De-Xin; Jin, Chang-Jie; Wang, An-Zhi; Wu, Jia-Bing; Yuan, Feng-Hui

    2014-10-01

    Based on the meteorological and hydrological data from 1970 to 2006, the advection-aridity (AA) model with calibrated parameters was used to calculate evapotranspiration in the Hun-Taizi River Basin in Northeast China. The original parameter of the AA model was tuned according to the water balance method, and four subbasins were then selected for validation. Spatiotemporal variation characteristics of evapotranspiration and related affecting factors were analyzed using the methods of linear trend analysis, moving average, kriging interpolation and sensitivity analysis. The results showed that the empirical parameter value of 0.75 for the AA model was suitable for the Hun-Taizi River Basin, with an error of 11.4%. In the Hun-Taizi River Basin, the average annual actual evapotranspiration was 347.4 mm, which had a slight upward trend at a rate of 1.58 mm per decade but did not change significantly. The annual actual evapotranspiration presented a single-peaked pattern with its peak value in July; the evapotranspiration in summer was higher than in spring and autumn, and it was the smallest in winter. The annual average evapotranspiration showed a decreasing trend from the northwest to the southeast in the Hun-Taizi River Basin from 1970 to 2006, with minor differences. Net radiation was largely responsible for the change of actual evapotranspiration in the Hun-Taizi River Basin.

  5. [Establishing and applying of autoregressive integrated moving average model to predict the incidence rate of dysentery in Shanghai].

    PubMed

    Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An

    2010-01-01

    To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated through the unconditional least squares method, the structure was determined according to criteria of residual un-correlation and conclusion, and the model goodness-of-fit was determined through the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008 and to evaluate the validity of the model by comparing the predicted incidence rate with the actual one. The incidence rate of dysentery in 2010 was predicted by the ARIMA model based on the incidence rate from January 1990 to June 2009. The model ARIMA(1,1,1)(0,1,2)_12 had a good fit to the incidence rate, with the autoregressive coefficient (AR1 = 0.443) during the past time series, moving average coefficient (MA1 = 0.806) and seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all being statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131, respectively, and the prediction error was white noise. The mathematical form of the model was (1 - 0.443B)(1 - B)(1 - B^12)Z_t = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^24)μ_t. The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rate from January 1990 to June 2009, would be 9.390 per 100 thousand. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a predictive model of high precision for short-term forecasting.

  6. Rate of Oviposition by Culex Quinquefasciatus in San Antonio, Texas, During Three Years

    DTIC Science & Technology

    1988-09-01

    autoregression and zero orders of integration and moving average (ARIMA (1,0,0)). This model was chosen initially because rainfall appeared to...have no trend requiring integration and no obvious requirement for a moving average component (i.e., no regular periodicity). This ARIMA model was...Say in both the northern and southern hemispheres exposes this species to a variety of climatic challenges to its survival. It is able to adjust

  7. Seminar Proceedings Implementation of Nonstructural Measures Held at Ft. Belvoir, Virginia on 15, 16 and 17 November 1983

    DTIC Science & Technology

    1983-11-01

    Approximate household inventory item average chance of being moved (%): High - electric toaster, vacuum cleaner (80); Medium - colour television, record...most readily moved are small items of electrical equipment and valuable items such as colour televisions. However, many respondents reported that...WESSEX WATER AUTHORITY, "Somerset Land Drainage District, land drainage survey report", Wessex Water Authority, Bridgwater, England, 1979.

  8. Tree-ring-based estimates of long-term seasonal precipitation in the Souris River Region of Saskatchewan, North Dakota and Manitoba

    USGS Publications Warehouse

    Ryberg, Karen R.; Vecchia, Aldo V.; Akyüz, F. Adnan; Lin, Wei

    2016-01-01

    Historically unprecedented flooding occurred in the Souris River Basin of Saskatchewan, North Dakota and Manitoba in 2011, during a longer term period of wet conditions in the basin. In order to develop a model of future flows, there is a need to evaluate effects of past multidecadal climate variability and/or possible climate change on precipitation. In this study, tree-ring chronologies and historical precipitation data in a four-degree buffer around the Souris River Basin were analyzed to develop regression models that can be used for predicting long-term variations of precipitation. To focus on longer term variability, 12-year moving average precipitation was modeled in five subregions (determined through cluster analysis of measures of precipitation) of the study area over three seasons (November–February, March–June and July–October). The models used multiresolution decomposition (an additive decomposition based on powers of two using a discrete wavelet transform) of tree-ring chronologies from Canada and the US and seasonal 12-year moving average precipitation based on Adjusted and Homogenized Canadian Climate Data and US Historical Climatology Network data. Results show that precipitation varies on long-term (multidecadal) time scales of 16, 32 and 64 years. Past extended pluvial and drought events, which can vary greatly with season and subregion, were highlighted by the models. Results suggest that the recent wet period may be a part of natural variability on a very long time scale.
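
    The 12-year moving average used here to isolate long-term variability is straightforward to reproduce. The sketch below assumes a hypothetical annual seasonal-precipitation series and computes a centered 12-year rolling mean with pandas.

```python
# Minimal sketch: centered 12-year moving average of an annual precipitation
# series, used to focus on multidecadal variability. `precip` is a
# hypothetical pandas Series indexed by year.
import pandas as pd

def twelve_year_moving_average(precip: pd.Series) -> pd.Series:
    # window=12, center=True averages each year with its neighbours;
    # min_periods=12 leaves the series ends undefined rather than biased.
    return precip.rolling(window=12, center=True, min_periods=12).mean()
```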

  9. Nitrifying moving bed biofilm reactor (MBBR) biofilm and biomass response to long term exposure to 1 °C.

    PubMed

    Hoang, V; Delatolla, R; Abujamel, T; Mottawea, W; Gadbois, A; Laflamme, E; Stintzi, A

    2014-02-01

    This study aims to investigate moving bed biofilm reactor (MBBR) nitrification rates, nitrifying biofilm morphology, biomass viability as well as bacterial community shifts during long-term exposure to 1 °C. Long-term exposure to 1 °C is the key operational condition for potential ammonia removal upgrade units to numerous northern region treatment systems. The average laboratory MBBR ammonia removal rate after long-term exposure to 1 °C was measured to be 18 ± 5.1% as compared to the average removal rate at 20 °C. Biofilm morphology and specifically the thickness along with biomass viability at various depths in the biofilm were investigated using variable pressure scanning electron microscope (VPSEM) imaging and confocal laser scanning microscope (CLSM) imaging in combination with viability live/dead staining. The biofilm thickness along with the number of viable cells showed significant increases after long-term exposure to 1 °C. Hence, this study observed nitrifying bacteria with higher activities at warm temperatures and a slightly greater quantity of nitrifying bacteria with lower activities at cold temperatures in nitrifying MBBR biofilms. Using DNA sequencing analysis, Nitrosomonas and Nitrosospira (ammonia oxidizers) as well as Nitrospira (nitrite oxidizer) were identified and no population shift was observed between 20 °C and after long-term exposure to 1 °C. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Nonuniform Moving Boundary Method for Computational Fluid Dynamics Simulation of Intrathecal Cerebrospinal Flow Distribution in a Cynomolgus Monkey.

    PubMed

    Khani, Mohammadreza; Xing, Tao; Gibbs, Christina; Oshinski, John N; Stewart, Gregory R; Zeller, Jillynne R; Martin, Bryn A

    2017-08-01

    A detailed quantification and understanding of cerebrospinal fluid (CSF) dynamics may improve detection and treatment of central nervous system (CNS) diseases and help optimize CSF system-based delivery of CNS therapeutics. This study presents a computational fluid dynamics (CFD) model that utilizes a nonuniform moving boundary approach to accurately reproduce the nonuniform distribution of CSF flow along the spinal subarachnoid space (SAS) of a single cynomolgus monkey. A magnetic resonance imaging (MRI) protocol was developed and applied to quantify subject-specific CSF space geometry and flow and define the CFD domain and boundary conditions. An algorithm was implemented to reproduce the axial distribution of unsteady CSF flow by nonuniform deformation of the dura surface. Results showed that maximum difference between the MRI measurements and CFD simulation of CSF flow rates was <3.6%. CSF flow along the entire spine was laminar with a peak Reynolds number of ∼150 and average Womersley number of ∼5.4. Maximum CSF flow rate was present at the C4-C5 vertebral level. Deformation of the dura ranged up to a maximum of 134 μm. Geometric analysis indicated that total spinal CSF space volume was ∼8.7 ml. Average hydraulic diameter, wetted perimeter, and SAS area were 2.9 mm, 37.3 mm and 27.24 mm2, respectively. CSF pulse wave velocity (PWV) along the spine was quantified to be 1.2 m/s.

  11. Suppression of AC railway power-line interference in ECG signals recorded by public access defibrillators

    PubMed Central

    Dotsinsky, Ivan

    2005-01-01

    Background Public access defibrillators (PADs) are now available for more efficient and rapid treatment of out-of-hospital sudden cardiac arrest. PADs are used normally by untrained people on the streets and in sports centers, airports, and other public areas. Therefore, automated detection of ventricular fibrillation, or its exclusion, is of high importance. A special case exists at railway stations, where electric power-line frequency interference is significant. Many countries, especially in Europe, use 16.7 Hz AC power, which introduces high level frequency-varying interference that may compromise fibrillation detection. Method Moving signal averaging is often used for 50/60 Hz interference suppression if its effect on the ECG spectrum has little importance (no morphological analysis is performed). This approach may be also applied to the railway situation, if the interference frequency is continuously detected so as to synchronize the analog-to-digital conversion (ADC) for introducing variable inter-sample intervals. A better solution consists of rated ADC, software frequency measuring, internal irregular re-sampling according to the interference frequency, and a moving average over a constant sample number, followed by regular back re-sampling. Results The proposed method leads to a total railway interference cancellation, together with suppression of inherent noise, while the peak amplitudes of some sharp complexes are reduced. This reduction has negligible effect on accurate fibrillation detection. Conclusion The method is developed in the MATLAB environment and represents a useful tool for real time railway interference suppression. PMID:16309558
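
    The re-sampling idea can be sketched in a few lines: warp the signal so that every interference period contains the same number of samples, average over exactly one period (which cancels the periodic component), and warp back. The code below is a simplified illustration under the assumption of a known, slowly varying interference frequency; the helper and its inputs are hypothetical, not the author's implementation.

```python
# Simplified sketch of interference suppression by irregular re-sampling:
# warp time so each interference cycle spans n_per_cycle samples, apply a
# one-cycle moving average (nulls the periodic interference), then warp back.
# `ecg` is the sampled signal, `f_int` the measured interference frequency
# (Hz) at each sample, `fs` the sampling rate -- all hypothetical inputs.
import numpy as np

def suppress_powerline(ecg, f_int, fs, n_per_cycle=20):
    # accumulated interference phase (in cycles) defines the warped time axis
    phase = np.cumsum(f_int) / fs
    # irregular re-sampling: constant number of samples per interference cycle
    phase_uniform = np.arange(phase[0], phase[-1], 1.0 / n_per_cycle)
    warped = np.interp(phase_uniform, phase, ecg)
    # moving average over exactly one cycle cancels the periodic component
    kernel = np.ones(n_per_cycle) / n_per_cycle
    smoothed = np.convolve(warped, kernel, mode="same")
    # regular back re-sampling onto the original time base
    return np.interp(phase, phase_uniform, smoothed)
```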

  12. Suppression of AC railway power-line interference in ECG signals recorded by public access defibrillators.

    PubMed

    Dotsinsky, Ivan

    2005-11-26

    Public access defibrillators (PADs) are now available for more efficient and rapid treatment of out-of-hospital sudden cardiac arrest. PADs are used normally by untrained people on the streets and in sports centers, airports, and other public areas. Therefore, automated detection of ventricular fibrillation, or its exclusion, is of high importance. A special case exists at railway stations, where electric power-line frequency interference is significant. Many countries, especially in Europe, use 16.7 Hz AC power, which introduces high level frequency-varying interference that may compromise fibrillation detection. Moving signal averaging is often used for 50/60 Hz interference suppression if its effect on the ECG spectrum has little importance (no morphological analysis is performed). This approach may be also applied to the railway situation, if the interference frequency is continuously detected so as to synchronize the analog-to-digital conversion (ADC) for introducing variable inter-sample intervals. A better solution consists of rated ADC, software frequency measuring, internal irregular re-sampling according to the interference frequency, and a moving average over a constant sample number, followed by regular back re-sampling. The proposed method leads to a total railway interference cancellation, together with suppression of inherent noise, while the peak amplitudes of some sharp complexes are reduced. This reduction has negligible effect on accurate fibrillation detection. The method is developed in the MATLAB environment and represents a useful tool for real time railway interference suppression.

  13. Plans, Patterns, and Move Categories Guiding a Highly Selective Search

    NASA Astrophysics Data System (ADS)

    Trippen, Gerhard

    In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.

  14. ECG artifact cancellation in surface EMG signals by fractional order calculus application.

    PubMed

    Miljković, Nadica; Popović, Nenad; Djordjević, Olivera; Konstantinović, Ljubica; Šekara, Tomislav B

    2017-03-01

    New aspects of automatic electrocardiography artifact removal from surface electromyography signals by application of fractional order calculus in combination with linear and nonlinear moving window filters are explored. Surface electromyography recordings of skeletal trunk muscles are commonly contaminated with spike-shaped artifacts. This artifact originates from electrical heart activity, recorded by electrocardiography, commonly present in surface electromyography signals recorded in the proximity of the heart. For appropriate assessment of neuromuscular changes by means of surface electromyography, application of a proper filtering technique for the electrocardiography artifact is crucial. A novel method for automatic artifact cancellation in surface electromyography signals by applying fractional order calculus and a nonlinear median filter is introduced. The proposed method is compared with the linear moving average filter, with and without prior application of fractional order calculus. 3D graphs for assessment of window lengths of the filters, crest factors, root mean square differences, and fractional calculus orders (called WFC and WRC graphs) have been introduced. For an appropriate quantitative filtering evaluation, a synthetic electrocardiography signal and an analogous semi-synthetic dataset have been generated. Examples of noise removal in 10 able-bodied subjects and in one patient with muscular dystrophy are presented for qualitative analysis. The crest factors, correlation coefficients, and root mean square differences of the recorded and semi-synthetic electromyography datasets showed that the most successful method was the median filter in combination with fractional order calculus of order 0.9. Significantly greater (p < 0.001) ECG peak reduction was obtained with the median filter than with the moving average filter in cases where the amplitude of muscle contraction was low compared to the ECG spikes. The presented results suggest that the novel method combining a median filter and fractional order calculus can be used for automatic filtering of electrocardiography artifacts in surface electromyography signal envelopes recorded from trunk muscles. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
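
    As a rough illustration of the best-performing combination (fractional-order calculus of order 0.9 with a moving median filter), the sketch below uses a Grünwald-Letnikov approximation of the fractional derivative followed by scipy's median filter. The window lengths and the exact placement of the fractional operator in the processing chain are assumptions, not the authors' published pipeline.

```python
# Rough sketch: fractional-order differentiation (Grunwald-Letnikov weights)
# followed by a moving median filter -- one plausible reading of the
# "median filter + fractional calculus of order 0.9" combination.
# `emg` is a hypothetical raw sEMG channel; window sizes are illustrative.
import numpy as np
from scipy.signal import medfilt

def gl_fractional_derivative(x, alpha=0.9, memory=64):
    # Grunwald-Letnikov weights: w0 = 1, wk = w(k-1) * (1 - (alpha + 1) / k)
    w = np.ones(memory)
    for k in range(1, memory):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    # truncated causal convolution implements the short-memory GL derivative
    return np.convolve(x, w, mode="full")[: len(x)]

def remove_ecg_artifact(emg, alpha=0.9, median_window=101):
    frac = gl_fractional_derivative(emg, alpha=alpha)
    # nonlinear moving-window (median) filtering suppresses the ECG spikes
    return medfilt(frac, kernel_size=median_window)
```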

  15. Analysis of offshore platforms lifting with fixed pile structure type (fixed platform) based on ASD89

    NASA Astrophysics Data System (ADS)

    Sugianto, Agus; Indriani, Andi Marini

    2017-11-01

    The GTS (Gathering Testing Satellite) platform is an offshore platform with a fixed pile structure (fixed platform) that supports petroleum exploitation. After fabrication, the platform was moved onto barges and then shipped to the installation site. The moving process is generally done by pulling or pushing, based on the construction design determined during planning. However, when lifting equipment (cranes) is available in the work area, the moving process can be done by lifting, so that the moving activity can be completed more quickly. This analysis considers moving the GTS platform by lifting, a different approach from the one generally used for this platform type; the question is what structural reinforcement is required, so the working stresses that occur during the lifting process were analyzed and checked against the AISC code standard using the SAP2000 structural analysis program. The analysis results showed that the existing structure cannot be moved by lifting because the stress ratio exceeds the maximum allowable value of 0.950 (AISC-ASD89). Overstress occurs in members 295 and 324, with stress ratio values of 0.97 and 0.95, so structural reinforcement is required. Applying box plates to both members produces stress ratios of 0.78 for member 295 and 0.77 for member 324. These results indicate that, with this reinforcement, the construction is qualified to be moved by lifting.

  16. A Generation at Risk: When the Baby Boomers Reach Golden Pond.

    ERIC Educational Resources Information Center

    Butler, Robert N.

    The 20th century has seen average life expectancy in the United States move from under 50 years to over 70 years. Coupled with this increase in average life expectancy is the aging of the 76.4 million persons born between 1946 and 1964. As they approach retirement, these baby-boomers will have to balance their own needs with those of living…

  17. Comparison of 3-D Multi-Lag Cross-Correlation and Speckle Brightness Aberration Correction Algorithms on Static and Moving Targets

    PubMed Central

    Ivancevich, Nikolas M.; Dahl, Jeremy J.; Smith, Stephen W.

    2010-01-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively. PMID:19942503

  18. Comparison of 3-D multi-lag cross- correlation and speckle brightness aberration correction algorithms on static and moving targets.

    PubMed

    Ivancevich, Nikolas M; Dahl, Jeremy J; Smith, Stephen W

    2009-10-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively.

  19. Recent Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T,; Thomas, James L.

    2009-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been extended to handle general mesh movement involving rigid, deforming, and overset meshes. Mesh deformation is achieved through analogy to elastic media by solving the linear elasticity equations. A general method for specifying the motion of moving bodies within the mesh has been implemented that allows for inherited motion through parent-child relationships, enabling simulations involving multiple moving bodies. Several example calculations are shown to illustrate the range of potential applications. For problems in which an isolated body is rotating with a fixed rate, a noninertial reference-frame formulation is available. An example calculation for a tilt-wing rotor is used to demonstrate that the time-dependent moving grid and noninertial formulations produce the same results in the limit of zero time-step size.

  20. Divorce, Separation, and Housing Changes: A Multiprocess Analysis of Longitudinal Data from England and Wales.

    PubMed

    Mikolai, Júlia; Kulu, Hill

    2018-02-01

    This study investigates the effect of marital and nonmarital separation on individuals' residential and housing trajectories. Using rich data from the British Household Panel Survey (BHPS) and applying multilevel competing-risks event history models, we analyze the risk of a move of single, married, cohabiting, and separated men and women to different housing types. We distinguish moves due to separation from moves of separated people and account for unobserved codeterminants of moving and separation risks. Our analysis shows that many individuals move due to separation, as expected, but that the likelihood of moving is also relatively high among separated individuals. We find that separation has a long-term effect on individuals' residential careers. Separated women exhibit high moving risks regardless of whether they moved out of the joint home upon separation, whereas separated men who did not move out upon separation are less likely to move. Interestingly, separated women are most likely to move to terraced houses, whereas separated men are equally likely to move to flats (apartments) and terraced (row) houses, suggesting that family structure shapes moving patterns of separated individuals.

  1. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Effect of Rolling Massage on Particle Moving Behaviour in Blood Vessels

    NASA Astrophysics Data System (ADS)

    Yi, Hou-Hui; Fan, Li-Juan; Yang, Xiao-Feng; Chen, Yan-Yan

    2008-09-01

    The rolling massage manipulation is a classic Chinese massage, which is expected to eliminate many diseases. Here the effect of rolling massage on the motion of particles in blood vessels is studied by lattice Boltzmann simulation. The simulation results show that the particle motion depends on the rolling velocity and on the distance between the particle position and the rolling position. The average values, including particle translational velocity and angular velocity, increase almost linearly as the rolling velocity increases. The result is helpful for understanding the mechanism of the massage and developing rolling techniques.

  2. Interrupted time series analysis of children’s blood lead levels: A case study of lead hazard control program in Syracuse, New York

    PubMed Central

    Shao, Liyang; Zhang, Lianjun; Zhen, Zhen

    2017-01-01

    Children’s blood lead concentrations have been closely monitored over the last two decades in the United States. The bio-monitoring surveillance data collected in local agencies reflected the local temporal trends of children’s blood lead levels (BLLs). However, the analysis and modeling of the long-term time series of BLLs have rarely been reported. We attempted to quantify the long-term trends of children’s BLLs in the city of Syracuse, New York and evaluate the impacts of local lead poisoning prevention programs and the Lead Hazard Control Program on reducing the children’s BLLs. We applied interrupted time series analysis on the monthly time series of BLLs surveillance data and used ARMA (autoregressive and moving average) models to measure the average children’s blood lead level shift and detect the seasonal pattern change. Our results showed that there were three intervention stages over the past 20 years to reduce children’s BLLs in the city of Syracuse, NY. The average of children’s BLLs was significantly decreased after the interventions, declining from 8.77 μg/dL to 3.94 μg/dL from 1992 to 2011. The seasonal variation diminished over the past decade, but more short-term influences appeared in the variation. The lead hazard control treatment intervention proved effective in reducing the children’s blood lead levels in Syracuse, NY. Also, the reduction of the seasonal variation of children’s BLLs reflected the impacts of the local lead-based paint mitigation program. The replacement of windows and doors was the major cost of lead abatement in houses. However, soil lead was not considered a major source of lead hazard in our analysis. PMID:28182688

  3. Interrupted time series analysis of children's blood lead levels: A case study of lead hazard control program in Syracuse, New York.

    PubMed

    Shao, Liyang; Zhang, Lianjun; Zhen, Zhen

    2017-01-01

    Children's blood lead concentrations have been closely monitored over the last two decades in the United States. The bio-monitoring surveillance data collected in local agencies reflected the local temporal trends of children's blood lead levels (BLLs). However, the analysis and modeling of the long-term time series of BLLs have rarely been reported. We attempted to quantify the long-term trends of children's BLLs in the city of Syracuse, New York and evaluate the impacts of local lead poisoning prevention programs and the Lead Hazard Control Program on reducing the children's BLLs. We applied interrupted time series analysis on the monthly time series of BLLs surveillance data and used ARMA (autoregressive and moving average) models to measure the average children's blood lead level shift and detect the seasonal pattern change. Our results showed that there were three intervention stages over the past 20 years to reduce children's BLLs in the city of Syracuse, NY. The average of children's BLLs was significantly decreased after the interventions, declining from 8.77 μg/dL to 3.94 μg/dL from 1992 to 2011. The seasonal variation diminished over the past decade, but more short-term influences appeared in the variation. The lead hazard control treatment intervention proved effective in reducing the children's blood lead levels in Syracuse, NY. Also, the reduction of the seasonal variation of children's BLLs reflected the impacts of the local lead-based paint mitigation program. The replacement of windows and doors was the major cost of lead abatement in houses. However, soil lead was not considered a major source of lead hazard in our analysis.
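
    An interrupted time-series model of this kind can be approximated by regressing the monthly BLL series on step (intervention) indicators with ARMA errors; the sketch below uses statsmodels' SARIMAX with exogenous regressors. The series, the intervention dates and the ARMA order are placeholders, not the values estimated in the study.

```python
# Minimal sketch of an interrupted time-series model with ARMA errors:
# monthly mean BLLs regressed on step indicators for the interventions,
# with an ARMA(1,1) error structure. `bll` is a hypothetical monthly
# pandas Series; the intervention dates passed in are placeholders.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_interrupted_series(bll: pd.Series, intervention_dates):
    steps = pd.DataFrame(
        {f"step_{i}": (bll.index >= d).astype(float)
         for i, d in enumerate(intervention_dates)},
        index=bll.index)
    model = SARIMAX(bll, exog=steps, order=(1, 0, 1), trend="c")
    result = model.fit(disp=False)
    # the coefficients on step_0, step_1, ... estimate the level shift
    # associated with each intervention stage
    return result.params
```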

  4. Experimental comparisons of hypothesis test and moving average based combustion phase controllers.

    PubMed

    Gao, Jinwu; Wu, Yuhu; Shen, Tielong

    2016-11-01

    For engine control, combustion phase is the most effective and direct parameter to improve fuel efficiency. In this paper, the statistical control strategy based on hypothesis test criterion is discussed. Taking location of peak pressure (LPP) as combustion phase indicator, the statistical model of LPP is first proposed, and then the controller design method is discussed on the basis of both Z and T tests. For comparison, moving average based control strategy is also presented and implemented in this study. The experiments on a spark ignition gasoline engine at various operating conditions show that the hypothesis test based controller is able to regulate LPP close to set point while maintaining the rapid transient response, and the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
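
    For comparison, a moving-average based combustion-phase controller can be sketched as a simple integral regulator acting on the windowed mean of LPP. Everything below (window length, gain, set point, the spark-advance update rule) is an illustrative assumption, not the controller implemented in the paper.

```python
# Illustrative sketch of a moving-average based combustion phase controller:
# the spark advance is nudged by the error between the moving average of the
# measured location of peak pressure (LPP) and its set point. Window length
# and gain are arbitrary placeholders.
from collections import deque

class MovingAverageLPPController:
    def __init__(self, setpoint_lpp=8.0, window=20, gain=0.05):
        self.setpoint = setpoint_lpp        # target LPP, deg ATDC (assumed)
        self.buffer = deque(maxlen=window)  # recent cycle-by-cycle LPP values
        self.gain = gain                    # integral gain (assumed)
        self.spark_advance = 20.0           # initial spark advance, deg BTDC

    def update(self, measured_lpp):
        self.buffer.append(measured_lpp)
        avg_lpp = sum(self.buffer) / len(self.buffer)
        # late combustion (average LPP too large) -> advance spark, and vice versa
        self.spark_advance += self.gain * (avg_lpp - self.setpoint)
        return self.spark_advance
```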

  5. Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm

    NASA Astrophysics Data System (ADS)

    Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.

    2014-08-01

    This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant normal states from the faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast the time series of the selected plant variables. In the third step, for identification of the type of transient, the forecasted time series are fed to the modular identifier, which has been developed using the latest advances of the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to those of the reference, rather than on the values of the input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identification of more transients without unfavorable effects are other merits of the proposed identifier.

  6. Structural equation modeling of the inflammatory response to traffic air pollution

    PubMed Central

    Baja, Emmanuel S.; Schwartz, Joel D.; Coull, Brent A.; Wellenius, Gregory A.; Vokonas, Pantel S.; Suh, Helen H.

    2015-01-01

    Several epidemiological studies have reported conflicting results on the effect of traffic-related pollutants on markers of inflammation. In a Bayesian framework, we examined the effect of traffic pollution on inflammation using structural equation models (SEMs). We studied measurements of C-reactive protein (CRP), soluble vascular cell adhesion molecule-1 (sVCAM-1), and soluble intracellular adhesion molecule-1 (sICAM-1) for 749 elderly men from the Normative Aging Study. Using repeated measures SEMs, we fit a latent variable for traffic pollution that is reflected by levels of black carbon, carbon monoxide, nitrogen monoxide and nitrogen dioxide to estimate its effect on a latent variable for inflammation that included sICAM-1, sVCAM-1 and CRP. Exposure periods were assessed using 1-, 2-, 3-, 7-, 14- and 30-day moving averages previsit. We compared our findings using SEMs with those obtained using linear mixed models. Traffic pollution was related to increased inflammation for 3-, 7-, 14- and 30-day exposure periods. An inter-quartile range increase in traffic pollution was associated with a 2.3% (95% posterior interval (PI): 0.0–4.7%) increase in inflammation for the 3-day moving average, with the most significant association observed for the 30-day moving average (23.9%; 95% PI: 13.9–36.7%). Traffic pollution adversely impacts inflammation in the elderly. SEMs in a Bayesian framework can comprehensively incorporate multiple pollutants and health outcomes simultaneously in air pollution–cardiovascular epidemiological studies. PMID:23232970

  7. An Estimate of the Likelihood for a Climatically Significant Volcanic Eruption Within the Present Decade (2000-2009)

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Franklin, M. Rose (Technical Monitor)

    2000-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (i.e., those having a volcanic explosivity index, or VEI, equal to 4 or larger) per decade is found to span 2-11, with 96% located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the time series has higher values since the 1860s than before, measuring 8.00 in the 1910s (the highest value) and measuring 6.50 in the 1980s, the highest since the 1810s peak. On the basis of the usual behavior of the first difference of the two-point moving averages, one infers that the two-point moving average for the 1990s will measure about 6.50 +/- 1.00, implying that about 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI equal to 5 or larger) nearly always have been associated with episodes of short-term global cooling, the occurrence of even one could ameliorate the effects of global warming. Poisson probability distributions reveal that the probability of one or more VEI equal to 4 or larger events occurring within the next ten years is >99%, while it is about 49% for VEI equal to 5 or larger events and 18% for VEI equal to 6 or larger events. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next 10 years appears reasonably high.
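
    The Poisson calculation behind these probabilities is short: P(at least one event in a decade) = 1 − e^(−λ), where λ is the expected number of events per decade. The sketch below reproduces the form of the estimate; the decadal rates used are illustrative stand-ins, not the paper's fitted values.

```python
# Probability of at least one eruption in a decade under a Poisson model:
# P(N >= 1) = 1 - exp(-lambda). The decadal rates below are illustrative
# placeholders, not the rates fitted in the paper.
from scipy.stats import poisson

decadal_rates = {"VEI >= 4": 6.5, "VEI >= 5": 0.7, "VEI >= 6": 0.2}

for label, lam in decadal_rates.items():
    p_at_least_one = poisson.sf(0, lam)   # survival function: 1 - P(N = 0)
    print(f"{label}: P(>=1 event in 10 yr) = {p_at_least_one:.2f}")
```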

  8. Validation of a pretreatment delivery quality assurance method for the CyberKnife Synchrony system.

    PubMed

    Mastella, E; Vigorito, S; Rondi, E; Piperno, G; Ferrari, A; Strata, E; Rozza, D; Jereczek-Fossa, B A; Cattani, F

    2016-08-01

    To evaluate the geometric and dosimetric accuracies of the CyberKnife Synchrony respiratory tracking system (RTS) and to validate a method for pretreatment patient-specific delivery quality assurance (DQA). An EasyCube phantom was mounted on the ExacTrac gating phantom, which can move along the superior-inferior (SI) axis of a patient to simulate a moving target. The authors compared dynamic and static measurements. For each case, a Gafchromic EBT3 film was positioned between two slabs of the EasyCube, while a PinPoint ionization chamber was placed in the appropriate space. There were three steps to their evaluation: (1) the field size, the penumbra, and the symmetry of six secondary collimators were measured along the two main orthogonal axes. Dynamic measurements with deliberately simulated errors were also taken. (2) The delivered dose distributions (from step 1) were compared with the planned ones, using the gamma analysis method. The local gamma passing rates were evaluated using three acceptance criteria: 3% local dose difference (LDD)/3 mm, 2%LDD/2 mm, and 3%LDD/1 mm. (3) The DQA plans for six clinical patients were irradiated in different dynamic conditions, to give a total of 19 cases. The measured and planned dose distributions were evaluated with the same gamma-index criteria used in step 2 and the measured chamber doses were compared with the planned mean doses in the sensitive volume of the chamber. (1) A very slight enlargement of the field size and of the penumbra was observed in the SI direction (on average <1 mm), in line with the overall average CyberKnife system error for tracking treatments. (2) Comparison between the planned and the correctly delivered dose distributions confirmed the dosimetric accuracy of the RTS for simple plans. The multicriteria gamma analysis was able to detect the simulated errors, proving the robustness of their method of analysis. (3) All of the DQA clinical plans passed the tests, both in static and dynamic conditions. No statistically significant differences were found between static and dynamic cases, confirming the high degree of accuracy of the Synchrony RTS. The presented methods and measurements verified the mechanical and dosimetric accuracy of the Synchrony RTS. Their method confirms the fact that the RTS, if used properly, is able to treat a moving target with great precision. By combining PinPoint ion chamber, EBT3 films, and gamma evaluation of dose distributions, their DQA method robustly validated the effectiveness of CyberKnife and Synchrony system.
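
    The gamma analysis mentioned above compares each measured dose point against the planned distribution under combined dose-difference and distance-to-agreement criteria. A deliberately simplified one-dimensional sketch is given below; real DQA software works on 2-D film planes with careful dose normalization, so this is illustrative only.

```python
# Deliberately simplified 1-D gamma index: for each measured point, search the
# planned profile for the minimum combined dose-difference / distance penalty.
# A point passes when gamma <= 1. A criterion such as 3%LDD/1 mm maps onto
# (dose_tol=0.03, dist_tol_mm=1.0). Illustrative only.
import numpy as np

def gamma_index_1d(x_mm, dose_meas, dose_plan, dose_tol=0.03, dist_tol_mm=1.0):
    gammas = np.empty(len(dose_meas))
    for i, (xm, dm) in enumerate(zip(x_mm, dose_meas)):
        dist_term = ((x_mm - xm) / dist_tol_mm) ** 2
        # local dose difference, normalized to the measured dose at this point
        dose_term = ((dose_plan - dm) / (dose_tol * dm)) ** 2
        gammas[i] = np.sqrt(np.min(dist_term + dose_term))
    return gammas

def passing_rate(gammas):
    return 100.0 * np.mean(gammas <= 1.0)
```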

  9. Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.

    1988-01-01

    A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment, E(R^m), is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values for E(R^m) for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.

  10. Substorm-related plasma sheet motions as determined from differential timing of plasma changes at the ISEE satellites

    NASA Technical Reports Server (NTRS)

    Forbes, T. G.; Hones, E. W., Jr.; Bame, S. J.; Asbridge, J. R.; Paschmann, G.; Sckopke, N.; Russell, C. T.

    1981-01-01

    From an ISEE survey of substorm dropouts and recoveries during the period February 5 to May 25, 1978, 66 timing events observed by the Los Alamos Scientific Laboratory/Max-Planck-Institut Fast Plasma Experiments were studied in detail. Near substorm onset, both the average timing velocity and the bulk flow velocity at the edge of the plasma sheet are inward, toward the center. Measured normal to the surface of the plasma sheet, the timing velocity is 23 ± 18 km/s and the proton flow velocity is 20 ± 8 km/s. During substorm recovery, the plasma sheet reappears moving outward with an average timing velocity of 133 ± 31 km/s; however, the corresponding proton flow velocity is only 3 ± 7 km/s in the same direction. It is suggested that the difference between the average timing velocity for the expansion of the plasma sheet and the plasma bulk flow perpendicular to the surface of the sheet during substorm recovery is most likely the result of surface waves moving past the position of the satellites.

  11. Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data

    NASA Astrophysics Data System (ADS)

    Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti

    2018-03-01

    In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process control methods were developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. One researcher noted that such charts are not suitable if the same control limits as in the independent case are used. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the residual process; this procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value relative to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process, derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
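
    An EWMA chart of the classical (independent-observation) form is easy to state; the sketch below computes the EWMA statistic and its time-varying control limits, which is the baseline that modified charts for autocorrelated data then adjust. The parameter values are common textbook defaults, not those derived in the paper.

```python
# Classical EWMA control chart: z_i = lam*x_i + (1-lam)*z_(i-1), with limits
# mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2i))). This is the
# independent-observation baseline; lam and L are common textbook defaults.
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    z = np.empty(len(x))
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1.0 - lam) * prev
        z[i] = prev
    idx = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * idx)))
    ucl, lcl = mu0 + half_width, mu0 - half_width
    out_of_control = (z > ucl) | (z < lcl)
    return z, ucl, lcl, out_of_control
```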

  12. The vacuum friction paradox and related puzzles

    NASA Astrophysics Data System (ADS)

    Barnett, Stephen M.; Sonnleitner, Matthias

    2018-04-01

    The frequency of light emitted by a moving source is shifted by a factor proportional to its velocity. We find that this Doppler shift requires the existence of a paradoxical effect: that a moving atom radiating in otherwise empty space feels a net or average force acting against its direction of motion and proportional in magnitude to its speed. Yet there is no preferred rest frame, either in relativity or in Newtonian mechanics, so how can there be a vacuum friction force?

  13. Osmium Atoms and Os2 Molecules Move Faster on Selenium-Doped Compared to Sulfur-Doped Boronic Graphenic Surfaces.

    PubMed

    Barry, Nicolas P E; Pitto-Barry, Anaïs; Tran, Johanna; Spencer, Simon E F; Johansen, Adam M; Sanchez, Ana M; Dove, Andrew P; O'Reilly, Rachel K; Deeth, Robert J; Beanland, Richard; Sadler, Peter J

    2015-07-28

    We deposited Os atoms on S- and Se-doped boronic graphenic surfaces by electron bombardment of micelles containing 16e complexes [Os(p-cymene)(1,2-dicarba-closo-dodecarborane-1,2-diselenate/dithiolate)] encapsulated in a triblock copolymer. The surfaces were characterized by energy-dispersive X-ray (EDX) analysis and electron energy loss spectroscopy of energy filtered TEM (EFTEM). Os atoms moved ca. 26× faster on the B/Se surface compared to the B/S surface (233 ± 34 pm·s(-1) versus 8.9 ± 1.9 pm·s(-1)). Os atoms formed dimers with an average Os-Os distance of 0.284 ± 0.077 nm on the B/Se surface and 0.243 ± 0.059 nm on B/S, close to that in metallic Os. The Os2 molecules moved 0.83× and 0.65× more slowly than single Os atoms on B/S and B/Se surfaces, respectively, and again markedly faster (ca. 20×) on the B/Se surface (151 ± 45 pm·s(-1) versus 7.4 ± 2.8 pm·s(-1)). Os atom motion did not follow Brownian motion and appears to involve anchoring sites, probably S and Se atoms. The ability to control the atomic motion of metal atoms and molecules on surfaces has potential for exploitation in nanodevices of the future.

  14. Using argumentation to retrieve articles with similar citations: an inquiry into improving related articles search in the MEDLINE digital library.

    PubMed

    Tbahriti, Imad; Chichester, Christine; Lisacek, Frédérique; Ruch, Patrick

    2006-06-01

    The aim of this study is to investigate the relationships between citations and the scientific argumentation found in abstracts. We design a related article search task and observe how the argumentation can affect the search results. We extracted citation lists from a set of 3200 full-text papers originating from a narrow domain. In parallel, we recovered the corresponding MEDLINE records for analysis of the argumentative moves. Our argumentative model is founded on four classes: PURPOSE, METHODS, RESULTS and CONCLUSION. A Bayesian classifier trained on explicitly structured MEDLINE abstracts generates these argumentative categories. The categories are used to generate four different argumentative indexes. A fifth index contains the complete abstract, together with the title and the list of Medical Subject Headings (MeSH) terms. To appraise the relationship of the moves to the citations, the citation lists were used as the criteria for determining relatedness of articles, establishing a benchmark; that is, two articles are considered "related" if they share a significant set of co-citations. Our results show that the average precision of queries with the PURPOSE and CONCLUSION features is the highest, while the precision of the RESULTS and METHODS features was relatively low. A linear weighting combination of the moves is proposed, which significantly improves retrieval of related articles.

  15. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.

  16. CMS distributed data analysis with CRAB3

    NASA Astrophysics Data System (ADS)

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-01

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.

  17. Statistical physics in foreign exchange currency and stock markets

    NASA Astrophysics Data System (ADS)

    Ausloos, M.

    2000-09-01

    Problems in economy and finance have attracted the interest of statistical physicists all over the world. Fundamental problems pertain to the existence or not of long-, medium- or/and short-range power-law correlations in various economic systems, to the presence of financial cycles and on economic considerations, including economic policy. A method like the detrended fluctuation analysis is recalled emphasizing its value in sorting out correlation ranges, thereby leading to predictability at short horizon. The ( m, k)-Zipf method is presented for sorting out short-range correlations in the sign and amplitude of the fluctuations. A well-known financial analysis technique, the so-called moving average, is shown to raise questions to physicists about fractional Brownian motion properties. Among spectacular results, the possibility of crash predictions has been demonstrated through the log-periodicity of financial index oscillations.

  18. Symbiosis of Steel, Energy, and CO2 Evolution in Korea

    NASA Astrophysics Data System (ADS)

    Lee, Hyunjoung; Matsuura, Hiroyuki; Sohn, Il

    2016-09-01

    This study looks at the energy intensity of the steel industry and the greenhouse gas intensity involved in the production of steel. Several sources of steel production data and the corresponding energy sources used provide a time-series analysis of greenhouse gas (GHG) and energy intensity from 1990 to 2014. The relationship of the steel economy to gross domestic product (GDP) indicates indirectly the importance of the general manufacturing sector within Korea, and of the steel industry in particular. Beyond 2008, the shift to excess materials production and a significant increase in total imports have led to an imbalance in the Korean steel market and continue to inhibit the growth of the domestic steel market. A forecast of the GHG and energy intensity along with steel production up to 2030 is provided using autoregressive integrated moving average (ARIMA) analysis.

  19. Increased delivery stride length places greater loads on the ankle joint in elite male cricket fast bowlers.

    PubMed

    Spratford, Wayne; Hicks, Amy

    2014-01-01

    The purpose of this study was to investigate the effect stride length has on ankle biomechanics of the leading leg with reference to the potential risk of injury in cricket fast bowlers. Ankle joint kinematic and kinetic data were collected from 51 male fast bowlers during the stance phase of the final delivery stride. The bowling cohort comprised national under-19, first class and international-level athletes. Bowlers were placed into either Short, Average or Long groups based on final stride length, allowing statistical differences to be measured. A multivariate analysis of variance with a Bonferroni post-hoc correction (α = 0.05) revealed significant differences between peak plantarflexion angles (Short-Long P = 0.005, Average and Long P = 0.04) and negative joint work (Average-Long P = 0.026). This study highlighted that during fast bowling the ankle joint of the leading leg experiences high forces under wide ranges of movement. As stride length increases, greater amounts of negative work and plantarflexion are experienced. These increases place greater loads on the ankle joint and move the foot into positions that make it more susceptible to injuries such as posterior impingement syndrome.

  20. Effect of Reynolds and Grashof numbers on mixed convection inside a lid-driven square cavity filled with water-Al2O3 nanofluid

    NASA Astrophysics Data System (ADS)

    Jaman, Md. Shah; Islam, Showmic; Saha, Sumon; Hasan, Mohammad Nasim; Islam, Md. Quamrul

    2016-07-01

    A numerical analysis is carried out to study the performance of steady laminar mixed convection flow inside a square lid-driven cavity filled with water-Al2O3 nanofluid. The top wall of the cavity is moving at a constant velocity and is heated by an isothermal heat source. Two-dimensional Navier-Stokes equations along with the energy equations are solved using the Galerkin finite element method. Results are obtained for a range of Reynolds and Grashof numbers by considering cases with and without the presence of nanoparticles. The parametric studies for a wide range of governing parameters in the case of pure mixed convective flow show significant features of the present problem in terms of streamline and isotherm contours, average Nusselt number and average temperature profiles. The computational results indicate that the heat transfer coefficient is strongly influenced by the above governing parameters in the pure mixed convection regime.

  1. Hydrogeology and leachate movement near two chemical-waste sites in Oswego County, New York

    USGS Publications Warehouse

    Anderson, H.R.; Miller, Todd S.

    1986-01-01

    Forty-five observation wells and test holes were installed at two chemical waste disposal sites in Oswego County, New York, to evaluate the hydrogeologic conditions and the rate and direction of leachate migration. At the site near Oswego, groundwater moves northward at an average velocity of 0.4 ft/day through unconsolidated glacial deposits and discharges into White Creek and Wine Creek, which border the site and discharge to Lake Ontario. Leaking barrels of chemical wastes have contaminated the groundwater within the site, as evidenced by detection of 10 'priority pollutant' organic compounds and elevated values of specific conductance, chloride, arsenic, lead, and mercury. At the site near Fulton, where 8,000 barrels of chemical wastes are buried, groundwater in the sandy surficial aquifer bordering the landfill on the south and east moves southward and eastward at an average velocity of 2.8 ft/day and discharges to Bell Creek, which discharges to the Oswego River, or moves beneath the landfill. Leachate is migrating eastward, southeastward, and southwestward, as evidenced by elevated values of specific conductance, temperature, and concentrations of several trace metals at wells east, southeast, and southwest of the site. (USGS)

  2. The Accuracy of Talking Pedometers when Used during Free-Living: A Comparison of Four Devices

    ERIC Educational Resources Information Center

    Albright, Carolyn; Jerome, Gerald J.

    2011-01-01

    The purpose of this study was to determine the accuracy of four commercially available talking pedometers in measuring accumulated daily steps of adult participants while they moved independently. Ten young sighted adults (with an average age of 24.1 ± 4.6 years), 10 older sighted adults (with an average age of 73 ± 5.5…

  3. A landslide-quake detection algorithm with STA/LTA and diagnostic functions of moving average and scintillation index: A preliminary case study of the 2009 Typhoon Morakot in Taiwan

    NASA Astrophysics Data System (ADS)

    Wu, Yu-Jie; Lin, Guan-Wei

    2017-04-01

    Since 1999, Taiwan has experienced a rapid rise in the number of landslides, which reached a peak after the 2009 Typhoon Morakot. Although it has been shown that ground-motion signals induced by slope processes can be recorded by seismographs, they are difficult to distinguish from continuous seismic records due to the lack of distinct P and S waves. In this study, we combine three common seismic detectors: the short-term average/long-term average (STA/LTA) approach and two diagnostic functions of moving average and scintillation index. Based on these detectors, we have established an auto-detection algorithm for landslide-quakes, with detection thresholds defined to distinguish landslide-quakes from earthquakes and background noise. To further evaluate the proposed detection algorithm, we apply it to seismic archives recorded by the Broadband Array in Taiwan for Seismology (BATS) during the 2009 Typhoon Morakot and locate the discrete landslide-quakes detected by the automatic algorithm. The detection results are consistent with those of visual inspection, and hence the algorithm can be used to automatically monitor landslide-quakes.
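
    A bare-bones STA/LTA trigger illustrates the first of the three detectors: it computes the ratio of short- and long-term moving averages of signal energy and flags samples where the ratio exceeds a threshold. The window lengths and threshold below are generic placeholders, not the values tuned for BATS.

```python
# Bare-bones STA/LTA detector: ratio of short-term to long-term moving
# averages of signal energy, triggering where the ratio exceeds a threshold.
# Window lengths and threshold are generic placeholders.
import numpy as np

def sta_lta(trace, fs, sta_sec=1.0, lta_sec=30.0, threshold=3.0):
    energy = np.asarray(trace, dtype=float) ** 2
    n_sta = max(1, int(sta_sec * fs))
    n_lta = max(1, int(lta_sec * fs))
    csum = np.cumsum(np.insert(energy, 0, 0.0))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # short-term average
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta   # long-term average
    # align the two series on their common (most recent) samples
    n = min(len(sta), len(lta))
    ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
    return ratio, ratio > threshold
```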

  4. High-Resolution Coarse-Grained Modeling Using Oriented Coarse-Grained Sites.

    PubMed

    Haxton, Thomas K

    2015-03-10

    We introduce a method to bring nearly atomistic resolution to coarse-grained models, and we apply the method to proteins. Using a small number of coarse-grained sites (about one per eight atoms) but assigning an independent three-dimensional orientation to each site, we preferentially integrate out stiff degrees of freedom (bond lengths and angles, as well as dihedral angles in rings) that are accurately approximated by their average values, while retaining soft degrees of freedom (unconstrained dihedral angles) mostly responsible for conformational variability. We demonstrate that our scheme retains nearly atomistic resolution by mapping all experimental protein configurations in the Protein Data Bank onto coarse-grained configurations and then analytically backmapping those configurations back to all-atom configurations. This roundtrip mapping throws away all information associated with the eliminated (stiff) degrees of freedom except for their average values, which we use to construct optimal backmapping functions. Despite the 4:1 reduction in the number of degrees of freedom, we find that heavy atoms move only 0.051 Å on average during the roundtrip mapping, while hydrogens move 0.179 Å on average, an unprecedented combination of efficiency and accuracy among coarse-grained protein models. We discuss the advantages of such a high-resolution model for parametrizing effective interactions and accurately calculating observables through direct or multiscale simulations.

  5. KARMA4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Salloum, Maher; Lee, Jina

    2017-07-10

    KARMA4 is a C++ library for autoregressive moving average (ARMA) modeling and forecasting of time-series data while incorporating both process and observation error. KARMA4 is designed for fitting and forecasting of time-series data for predictive purposes.

  6. HELIOSHEATH MAGNETIC FIELDS BETWEEN 104 AND 113 AU IN A REGION OF DECLINING SPEEDS AND A STAGNATION REGION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burlaga, L. F.; Ness, N. F., E-mail: lburlagahsp@verizon.net, E-mail: nfnudel@yahoo.com

    2012-04-10

    We examine the relationships between the magnetic field and the radial velocity component V_R observed in the heliosheath by instruments on Voyager 1 (V1). No increase in the magnetic field strength B was observed in a region where V_R decreased linearly from 70 km/s to 0 km/s as plasma moved outward past V1. An unusually broad transition from positive to negative polarity was observed during a ≈26 day interval when the heliospheric current sheet (HCS) moved below the latitude of V1 and the speed of V1 was comparable to the radial speed of the heliosheath flow. When V1 moved through a region where V_R ≈ 0 (the 'stagnation region'), B increased linearly with time by a factor of two, and the average of B was 0.14 nT. Nothing comparable to this was observed previously. The magnetic polarity was negative throughout the stagnation region for ≈580 days until 2011 DOY 235, indicating that the HCS was below the latitude of V1. The average passage times of the magnetic holes and proton boundary layers were the same during 2009 and 2011, because the plasma moved past V1 during 2009 at the same speed that V1 moved through the stagnation region during 2011. The microscale fluctuations of B in the stagnation region during 2011 are qualitatively the same as those observed in the heliosheath during 2009. These results suggest that the stagnation region is a part of the heliosheath, rather than a 'transition region' associated with the heliopause.

  7. Forecasting of Water Consumptions Expenditure Using Holt-Winter’s and ARIMA

    NASA Astrophysics Data System (ADS)

    Razali, S. N. A. M.; Rusiman, M. S.; Zawawi, N. I.; Arbin, N.

    2018-04-01

    This study forecasts the water consumption expenditure of a Malaysian university, specifically University Tun Hussein Onn Malaysia (UTHM). The proposed Holt-Winter’s and Auto-Regressive Integrated Moving Average (ARIMA) models were applied to forecast the water consumption expenditure in Ringgit Malaysia from 2006 to 2014. The two models were compared using the Mean Absolute Percentage Error (MAPE) and the Mean Absolute Deviation (MAD) as performance measures. It was found that the ARIMA model gave more accurate forecasts, with lower MAPE and MAD values. The analysis showed that the ARIMA (2,1,4) model provides a reasonable forecasting tool for university campus water usage.
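
    A sketch of the comparison workflow in Python's statsmodels, using a hypothetical monthly series in place of the UTHM expenditure data; the Holt-Winters settings shown are assumptions, while the ARIMA(2,1,4) order follows the abstract:

        import numpy as np
        from statsmodels.tsa.holtwinters import ExponentialSmoothing
        from statsmodels.tsa.arima.model import ARIMA

        # Hypothetical monthly expenditure series standing in for the 2006-2014 UTHM data.
        rng = np.random.default_rng(1)
        y = 50 + 5 * np.sin(np.arange(108) * 2 * np.pi / 12) + rng.normal(0, 2, 108)
        train, test = y[:96], y[96:]

        hw = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=12).fit()
        arima = ARIMA(train, order=(2, 1, 4)).fit()

        def mape(actual, pred):                 # Mean Absolute Percentage Error
            return np.mean(np.abs((actual - pred) / actual)) * 100

        def mad(actual, pred):                  # Mean Absolute Deviation
            return np.mean(np.abs(actual - pred))

        for name, fit in [("Holt-Winters", hw), ("ARIMA(2,1,4)", arima)]:
            pred = fit.forecast(len(test))
            print(name, "MAPE:", round(mape(test, pred), 2), "MAD:", round(mad(test, pred), 2))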

  8. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.

  9. Random walker in temporally deforming higher-order potential forces observed in a financial crisis.

    PubMed

    Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako

    2009-11-01

    Basic peculiarities of market price fluctuations are known to be well described by a recently developed random-walk model in a temporally deforming quadratic potential force whose center is given by a moving average of past price traces [M. Takayasu, T. Mizuno, and H. Takayasu, Physica A 370, 91 (2006)]. By analyzing high-frequency financial time series of exceptional events, such as bubbles and crashes, we confirm the appearance of a higher-order potential force in the markets. We show the statistical significance of its existence by applying an information criterion. This time-series analysis is expected to be widely applicable for detecting nonstationary symptoms in random phenomena.
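
    A toy simulation of the underlying idea: a random walk attracted toward the moving average of its own past values through a quadratic plus an illustrative higher-order (cubic) restoring force. The window length and coefficients below are placeholders, not values fitted to market data:

        import numpy as np

        rng = np.random.default_rng(2)
        T, M = 5000, 10           # number of steps and moving-average window (placeholders)
        b2, b3 = 0.3, 0.05        # quadratic and illustrative higher-order (cubic) coefficients
        p = np.zeros(T)
        for t in range(M, T - 1):
            center = p[t - M:t].mean()          # moving average of past "prices"
            x = p[t] - center                   # displacement from the potential center
            force = -b2 * x - b3 * x ** 3       # gradient of a quadratic-plus-quartic potential
            p[t + 1] = p[t] + force + rng.normal(0.0, 1.0)
        print("sample std of one-step changes:", np.diff(p[M:]).std())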

  10. Diversity in migratory patterns among Neotropical fishes in a highly regulated river basin.

    PubMed

    Makrakis, M C; Miranda, L E; Makrakis, S; Fontes Júnior, H M; Morlis, W G; Dias, J H P; Garcia, J O

    2012-07-01

    Migratory behaviour of selected fish species is described in the Paraná River, Brazil-Argentina-Paraguay, to search for patterns relevant to tropical regulated river systems. In a 10 year mark-recapture study, spanning a 1425 km section of the river, 32 867 fishes composed of 18 species were released and 1083 fishes were recaptured. The fishes recaptured were at liberty an average 166 days (maximum 1548 days) and travelled an average 35 km (range 0-625 km). Cluster analysis applied to variables descriptive of movement behaviour identified four general movement patterns. Cluster 1 included species that moved long distances (mean 164 km) upstream (54%) and downstream (40%) the mainstem river and showed high incidence (27%) of passage through dams; cluster 2 also exhibited high rate of movement along the mainstem (49% upstream, 13% downstream), but moved small distances (mean 10 km); cluster 3 included the most fishes moving laterally into tributaries (45%) or not moving at all (25%), but little downstream movement (8%); fishes in cluster 4 exhibited little upstream movement (13%) and farthest downstream movements (mean 41 km). Whereas species could be numerically clustered with statistical models, a species ordination showed ample spread, suggesting that species exhibit diverse movement patterns that cannot be easily classified into just a few classes. The cluster and ordination procedures also showed that adults and juveniles of the same species exhibit similar movement patterns. Conventional concepts about Neotropical migratory fishes portray them as travelling long distances upstream. The present results broaden these concepts suggesting that migratory movements are more diverse, could be long, short or at times absent, upriver, downriver or lateral, and the diversity of movements can vary within and among species. The intense lateral migrations exhibited by a diversity of species, especially to and from large tributaries (above reservoirs) and reservoir tributaries, illustrate the importance of these habitats for the fish species life cycle. Considering that the Paraná River is highly impounded, special attention should be given to the few remaining low-impact habitats as they continue to be targets of hydropower development that will probably intensify the effects on migratory fish stocks. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  11. Single Upconversion Nanoparticle-Bacterium Cotrapping for Single-Bacterium Labeling and Analysis.

    PubMed

    Xin, Hongbao; Li, Yuchao; Xu, Dekang; Zhang, Yueli; Chen, Chia-Hung; Li, Baojun

    2017-04-01

    Detecting and analyzing pathogenic bacteria in an effective and reliable manner is crucial for the diagnosis of acute bacterial infection and initial antibiotic therapy. However, the precise labeling and analysis of bacteria at the single-bacterium level remain a technical challenge, yet they are very important for revealing details about cell heterogeneity and responses to the environment. This study demonstrates an optical strategy for single-bacterium labeling and analysis by the cotrapping of single upconversion nanoparticles (UCNPs) and bacteria together. A single UCNP with an average size of ≈120 nm is first optically trapped. Both ends of a single bacterium are then trapped and labeled with single UCNPs emitting green light. The labeled bacterium can be flexibly moved to designated locations for further analysis. Signals from bacteria of different sizes are detected in real time for single-bacterium analysis. This cotrapping method provides a new approach for single-pathogenic-bacterium labeling, detection, and real-time analysis at the single-particle and single-bacterium level. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Canada on the Move: an intensive media analysis from inception to reception.

    PubMed

    Faulkner, Guy; Finlay, Sara-Jane

    2006-01-01

    Research evaluating mediated physical activity campaigns uses an unsophisticated conceptualization of the media and would benefit from the application of a media studies approach. The purpose of this article is to report on the application of this type of analysis to the Canada on the Move media campaign. Through interviews and document analysis, the press release surrounding Canada on the Move was examined at four levels: inception, production, transmission and reception. Analytic strategies of thematic and textual analysis were conducted. The press release was well received by journalists and editors and was successfully transmitted as inferred from national and local television coverage, although there was no national print pickup. Canada on the Move was perceived by sampled audience members as a useful and interesting strategy to encourage walking. A holistic approach to media analysis reveals the complex and frequently messy process of this mediated communication process. Implications for future media disseminations of Canada on the Move are discussed.

  13. An Examination of Selected Geomagnetic Indices in Relation to the Sunspot Cycle

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2006-01-01

    Previous studies have shown geomagnetic indices to be useful for providing early estimates for the size of the following sunspot cycle several years in advance. Examined in this study are various precursor methods for predicting the minimum and maximum amplitude of the following sunspot cycle, these precursors being based on the aa and Ap geomagnetic indices and the number of disturbed days (NDD), days when the daily Ap index equaled or exceeded 25. Also examined are the yearly peak of the daily Ap index (Apmax), the number of days when Ap is greater than or equal to 100, cyclic averages of sunspot number R, aa, Ap, NDD, and the number of sudden storm commencements (NSSC), as well as the cyclic sums of NDD and NSSC. The analysis yields 90-percent prediction intervals for both the minimum and maximum amplitudes for cycle 24, the next sunspot cycle. In terms of yearly averages, the best regressions give Rmin = 9.8+/-2.9 and Rmax = 153.8+/-24.7, equivalent to Rm = 8.8+/-2.8 and RM = 159+/-5.5, based on the 12-mo moving average (or smoothed monthly mean sunspot number). Hence, cycle 24 is expected to be above average in size, similar to cycles 21 and 22, producing more than 300 sudden storm commencements and more than 560 disturbed days, of which about 25 will have Ap greater than or equal to 100. On the basis of annual averages, the sunspot minimum year for cycle 24 will be either 2006 or 2007.
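
    A small illustration of the smoothing step, assuming a hypothetical monthly mean sunspot series held in a pandas Series; it shows only the 12-month moving average used to define the smoothed sunspot number, not the precursor regressions themselves:

        import numpy as np
        import pandas as pd

        # Hypothetical monthly mean sunspot numbers; a real analysis would use an
        # observatory series such as the international sunspot number.
        rng = np.random.default_rng(3)
        months = pd.date_range("1996-01", periods=180, freq="MS")
        R = 80 * (1 + np.sin(np.arange(180) * 2 * np.pi / 132)) / 2 + rng.normal(0, 8, 180)
        R = pd.Series(R.clip(min=0), index=months)

        # 12-month centered moving average, a simple form of the smoothed sunspot number.
        R_smoothed = R.rolling(window=12, center=True).mean()
        print("smoothed maximum:", round(R_smoothed.max(), 1), "at", R_smoothed.idxmax())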

  14. Intelligent transportation systems infrastructure initiative

    DOT National Transportation Integrated Search

    1997-01-01

    The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...

  15. CLASSICAL AREAS OF PHENOMENOLOGY: Lattice Boltzmann simulation of behaviour of particles moving in blood vessels under the rolling massage

    NASA Astrophysics Data System (ADS)

    Yi, Hou-Hui; Yang, Xiao-Feng; Wang, Cai-Feng; Li, Hua-Bing

    2009-07-01

    The rolling massage is one of the most important manipulations in Chinese massage and is expected to eliminate many diseases. Here, the effect of the rolling massage on a pair of particles moving in blood vessels is studied by lattice Boltzmann simulation. The simulated results show that the motion of each particle is considerably modified by the rolling massage, and that it depends on the relative rolling velocity, the rolling depth, and the distance between the particle position and the rolling position. Both particles' translational average velocities increase almost linearly as the rolling velocity increases, and obey the same law. The increment of the average relative angular velocity for the leading particle is smaller than that of the trailing one. The result is helpful for understanding the mechanism of the massage and for further developing the rolling techniques.

  16. Compression of head-related transfer function using autoregressive-moving-average models and Legendre polynomials.

    PubMed

    Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob

    2013-11-01

    Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
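
    A rough sketch of the first compression step under simplifying assumptions: a least-squares Prony-style fit of an ARMA (pole-zero) filter to a single measured impulse response. The filter orders, the toy impulse response, and this particular Prony variant are illustrative, not the authors' exact procedure, and the Legendre-polynomial stage is omitted:

        import numpy as np
        from scipy.signal import lfilter

        def prony_arma(h, nb, na):
            """Least-squares Prony fit of an IIR filter b/a (ARMA form) to an impulse response h.
            Assumes nb >= na so that no negative-index samples are needed."""
            N = len(h)
            # Denominator: linear-prediction equations h[k] = -sum_i a[i] * h[k - i] for k > nb.
            H = np.column_stack([h[nb - i: N - 1 - i] for i in range(na)])
            a_tail, *_ = np.linalg.lstsq(H, -h[nb + 1:], rcond=None)
            a = np.concatenate(([1.0], a_tail))
            # Numerator from the first nb+1 samples of h filtered by the denominator.
            b = np.convolve(a, h)[: nb + 1]
            return b, a

        # Toy stand-in for an HRIR: impulse response of a known low-order filter plus noise.
        true_b, true_a = [1.0, -0.4, 0.2], [1.0, -0.8, 0.3]
        impulse = np.zeros(256)
        impulse[0] = 1.0
        h = lfilter(true_b, true_a, impulse) + np.random.normal(0, 1e-4, 256)
        b, a = prony_arma(h, nb=8, na=8)
        print("max reconstruction error:", np.max(np.abs(h - lfilter(b, a, impulse))))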

  17. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    PubMed

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

    In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote as ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
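
    The AR half of the decomposition can be illustrated concretely. Below is a minimal sketch, assuming a known covariance matrix; it shows only the modified Cholesky (autoregressive) factorization, not the combined ARMACD model proposed in the paper:

        import numpy as np

        def modified_cholesky(Sigma):
            """AR-style modified Cholesky: T @ Sigma @ T.T = D, with T unit lower triangular.
            Row t of T holds 1 and the negated coefficients of regressing y_t on y_1..y_{t-1};
            D holds the innovation variances."""
            n = Sigma.shape[0]
            T = np.eye(n)
            d = np.zeros(n)
            d[0] = Sigma[0, 0]
            for t in range(1, n):
                phi = np.linalg.solve(Sigma[:t, :t], Sigma[:t, t])  # autoregressive coefficients
                T[t, :t] = -phi
                d[t] = Sigma[t, t] - Sigma[t, :t] @ phi             # innovation variance
            return T, np.diag(d)

        # Check on an AR(1)-like covariance with correlation 0.6 (hypothetical example).
        n, rho = 5, 0.6
        Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        T, D = modified_cholesky(Sigma)
        print(np.allclose(T @ Sigma @ T.T, D))   # True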

  18. Optimized nested Markov chain Monte Carlo sampling: theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
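
    A schematic toy version of the composite-move idea in a simple canonical (NVT) setting; the one-dimensional reference and "full" potentials, inner-chain length, and step size are placeholders, and the isothermal-isobaric bookkeeping and reference-parameter optimization described in the paper are omitted:

        import numpy as np

        rng = np.random.default_rng(4)
        beta = 1.0
        u_ref = lambda x: 0.5 * x ** 2                   # cheap reference potential (placeholder)
        u_full = lambda x: 0.5 * x ** 2 + 0.1 * x ** 4   # expensive "full" potential (placeholder)

        def composite_move(x, n_inner=20, step=0.5):
            """Inner Metropolis chain on u_ref, then accept the whole composite move
            with a modified criterion based on the full/reference energy difference."""
            y = x
            for _ in range(n_inner):
                prop = y + rng.normal(0.0, step)
                if rng.random() < np.exp(-beta * (u_ref(prop) - u_ref(y))):
                    y = prop
            # Correct for the mismatch between the full and reference potentials.
            log_acc = -beta * ((u_full(y) - u_ref(y)) - (u_full(x) - u_ref(x)))
            return y if np.log(rng.random()) < log_acc else x

        x, samples = 0.0, []
        for _ in range(2000):
            x = composite_move(x)
            samples.append(x)
        print("sample variance under the full potential:", np.var(samples))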

  19. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
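
    For context, the brute-force enumeration used as the comparison baseline can be sketched as a grid search over ARMA orders scored by an information criterion; the series below is hypothetical, and the MINLP solvers themselves are not shown:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # Hypothetical ARMA(2,1)-like series used only to exercise the grid search.
        rng = np.random.default_rng(5)
        e = rng.normal(size=400)
        y = np.zeros(400)
        for t in range(2, 400):
            y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + e[t] + 0.4 * e[t - 1]

        best = None
        for p in range(4):                    # brute-force enumeration of candidate orders
            for q in range(4):
                try:
                    fit = ARIMA(y, order=(p, 0, q)).fit()
                except Exception:
                    continue                  # skip orders that fail to converge
                if best is None or fit.aic < best[0]:
                    best = (fit.aic, p, q)
        print("best (AIC, p, q):", best)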

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ono, Tomohiro; Miyabe, Yuki, E-mail: miyabe@kuhp.kyoto-u.ac.jp; Yamada, Masahiro

    Purpose: The Vero4DRT system has the capability for dynamic tumor-tracking (DTT) stereotactic irradiation using a unique gimbaled x-ray head. The purposes of this study were to develop DTT conformal arc irradiation and to estimate its geometric and dosimetric accuracy. Methods: The gimbaled x-ray head, supported on an O-ring gantry, was moved in the pan and tilt directions during O-ring gantry rotation. To evaluate the mechanical accuracy, the gimbaled x-ray head was moved during gantry rotation according to input command signals without target tracking, and a machine log analysis was performed. The difference between a commanded position and a measured position was calculated as the mechanical error. To evaluate beam-positioning accuracy, a moving phantom, which had a steel ball fixed at the center, was driven based on a sinusoidal wave (amplitude [A]: 20 mm, time period [T]: 4 s), a patient breathing motion with a regular pattern (A: 16 mm, average T: 4.5 s), and an irregular pattern (A: 7.2–23.0 mm, T: 2.3–10.0 s), and irradiated with DTT during gantry rotation. The beam-positioning error was evaluated as the difference between the centroid position of the irradiated field and the steel ball on images from an electronic portal imaging device. For dosimetric accuracy, dose distributions in static and moving targets were evaluated with DTT conformal arc irradiation. Results: The root mean squares (RMSs) of the mechanical error were up to 0.11 mm for pan motion and up to 0.14 mm for tilt motion. The RMSs of the beam-positioning error were within 0.23 mm for each pattern. The dose distribution in a moving phantom with tracking arc irradiation was in good agreement with that in static conditions. Conclusions: The gimbal positional accuracy was not degraded by gantry motion. As in the case of a fixed port, the Vero4DRT system showed adequate accuracy of DTT conformal arc irradiation.

  1. Long-term follow-up after maxillary distraction osteogenesis in growing children with cleft lip and palate.

    PubMed

    Huang, Chiung-Shing; Harikrishnan, Pandurangan; Liao, Yu-Fang; Ko, Ellen W C; Liou, Eric J W; Chen, Philip K T

    2007-05-01

    To evaluate the changes in maxillary position after maxillary distraction osteogenesis in six growing children with cleft lip and palate. Retrospective, longitudinal study on maxillary changes at A point, anterior nasal spine, posterior nasal spine, central incisor, and first molar. The University Hospital Craniofacial Center. Cephalometric radiographs were used to measure the maxillary position immediately after distraction, at 6 months, and more than 1 year after distraction. After maxillary distraction with a rigid external distraction device, the maxilla (A point) on average moved forward 9.7 mm and downward 3.5 mm immediately after distraction, moved backward 0.9 mm and upward 2.0 mm by 6 months postoperatively, and then moved further backward 2.3 mm and downward 6.8 mm after more than 1 year from the predistraction position. In most cases, the maxilla moved forward at distraction and then moved backward until 1 year after distraction, but remained forward of the predistraction position. The maxilla also moved downward during distraction and upward within 6 months, but started descending again by 1 year. There was also no further forward growth of the maxilla after distraction in growing children with clefts.

  2. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.

    PubMed

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-12-11

    Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, including three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment is carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move.

  3. Analyzing the prices of the most expensive sheet iron all over the world: Modeling, prediction and regime change

    NASA Astrophysics Data System (ADS)

    Song, Fu-Tie; Zhou, Wei-Xing

    2010-09-01

    The private car license plates issued in Shanghai are bestowed the title of “the most expensive sheet iron all over the world”, more expensive than gold. A citizen has to bid in a monthly auction to obtain a license plate for a new private car. We perform statistical analysis to investigate the influence of the minimal price Pmin of the bidding winners, the quota of private car license plates, the number of bidders, as well as two external shocks, including the legality debate of the auction in 2004 and the auction regime reform in January 2008, on the average price P of all bidding winners. It is found that the legality debate of the auction had a marginal transient impact on the average price in a short time period. In contrast, the change of the auction rules has a significant permanent influence on the average price, which reduces the price by about 3020 yuan Renminbi. This means that the average price exhibits nonlinear behaviors with a regime change. The evolution of the average price is independent of the number of bidders in both regimes. In the early regime before January 2008, the average price P was influenced only by the minimal price Pmin in the preceding month, with a positive correlation. In the current regime since January 2008, the average price is positively correlated with the minimal price and the quota in the preceding month and negatively correlated with the quota in the same month. We test the predictive power of the two models using 2-year and 3-year moving windows and find that the latter outperforms the former. It seems that the auction market becomes more efficient after the auction reform since the prediction error increases.

  4. Proceedings of the Annual Conference on Manual Control (18th) Held at Dayton, Ohio on 8-10 June 1982

    DTIC Science & Technology

    1983-01-01

    frequency of the disturbance the probability to cross the borderline becomes larger, and corrective action (moving average value further away... from the... pupillometer. The prototypical data was the average of 10 records from 5 normal subjects who showed similar responses. The different amplitudes of light... following orders touch, position, temperature, and pain. Our subjects sometimes reported numbness in the fingertips, dulled pinprick sensations

  5. Canadian Rural-urban Differences in End-of-life Care Setting Transitions

    PubMed Central

    Wilson, Donna M.; Thomas, Roger; Burns, Katharina Kovacs; Hewitt, Jessica A.; Jane, Osei-Waree; Sandra, Robertson

    2012-01-01

    Few studies have focused on the care setting transitions that occur in the last year of life. A three part mixed-methods study was conducted to gain an understanding of the number and implications or impact of care setting transitions in the last year of life for rural Canadians. Provincial health services utilization data, national online survey data, and local qualitative interview data were analyzed to gain general and specific information for consideration. Rural Albertans had significantly more healthcare setting transitions than urbanites in the last year of life (M=4.2 vs 3.3). Online family respondents reported 8 moves on average occurred for family members in the last year of life. These moves were most often identified (65%) on a likert-type scale as “very difficult,” with the free text information revealing these trips were often emotionally painful for themselves and physically painful for their ill family member. Eleven informants were then interviewed until data saturation, with constant-comparative data analysis conducted for a more in-depth understanding of rural transitions. Moving from place to place for needed care in the last year of life was identified as common and concerning for rural people and their families, with three data themes developing: (a) needed care in the last year of life is scattered across many places, (b) travelling is very difficult for terminally-ill persons and their caregivers, and (c) local rural services are minimal. These findings indicate planning is needed to avoid unnecessary end-of-life care setting transitions and to make needed moves for essential services in the last year of life less costly, stressful, and socially disruptive for rural people and their families. PMID:22980372

  6. Documentation of a spreadsheet for time-series analysis and drawdown estimation

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    Drawdowns during aquifer tests can be obscured by barometric pressure changes, earth tides, regional pumping, and recharge events in the water-level record. These stresses can create water-level fluctuations that should be removed from observed water levels prior to estimating drawdowns. Simple models have been developed for estimating unpumped water levels during aquifer tests that are referred to as synthetic water levels. These models sum multiple time series such as barometric pressure, tidal potential, and background water levels to simulate non-pumping water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function. Root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were greater than or equal to four times prediction periods. The proposed drawdown estimation approach has been implemented in a spreadsheet application. Measured time series are independent so that collection frequencies can differ and sampling times can be asynchronous. Time series can be viewed selectively and magnified easily. Fitting and prediction periods can be defined graphically or entered directly. Synthetic water levels for each observation well are created with earth tides, measured time series, moving averages of time series, and differences between measured and moving averages of time series. Selected series and fitting parameters for synthetic water levels are stored and drawdowns are estimated for prediction periods. Drawdowns can be viewed independently and adjusted visually if an anomaly skews initial drawdowns away from 0. The number of observations in a drawdown time series can be reduced by averaging across user-defined periods. Raw or reduced drawdown estimates can be copied from the spreadsheet application or written to tab-delimited ASCII files.
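
    As a rough illustration of the fitting idea (not the spreadsheet itself), the sketch below builds a synthetic water level as a least-squares weighted sum of a constant, a barometric-pressure series, and a background series over a fitting period, and treats the residual as an estimated drawdown; earth tides, phase shifts, and moving-average terms are omitted, and all series are hypothetical:

        import numpy as np

        # Hypothetical hourly series for a fitting period unaffected by pumping.
        rng = np.random.default_rng(6)
        n = 500
        baro = np.sin(np.arange(n) * 2 * np.pi / 24) + 0.1 * rng.normal(size=n)   # barometric pressure
        background = 0.002 * np.arange(n)                                          # regional water-level trend
        water = 10.0 - 0.4 * baro + background + 0.02 * rng.normal(size=n)         # measured water levels

        # Synthetic water level = weighted sum of the component series plus a constant,
        # with the weights chosen by least squares over the fitting period.
        X = np.column_stack([np.ones(n), baro, background])
        coef, *_ = np.linalg.lstsq(X, water, rcond=None)
        synthetic = X @ coef

        # During an aquifer test, drawdown would be estimated as measured minus synthetic
        # over the prediction period; here the residual just reflects the fitting error.
        residual = water - synthetic
        print("RMS fitting error:", np.sqrt(np.mean(residual ** 2)))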

  7. Drift correction of the dissolved signal in single particle ICPMS.

    PubMed

    Cornelis, Geert; Rauch, Sebastien

    2016-07-01

    A method is presented where drift, the random fluctuation of the signal intensity, is compensated for based on the estimation of the drift function by a moving average. It was shown using single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs that drift reduces accuracy of spICPMS analysis at the calibration stage and during calculations of the particle size distribution (PSD), but that the present method can again correct the average signal intensity as well as the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models signal distributions of dissolved signals, fails in some cases when using standards and samples affected by drift, but the present method was shown to improve accuracy again. Relatively high particle signals have to be removed prior to drift correction in this procedure, which was done using a 3 × sigma method, and the signals are treated separately and added again. The method can also correct for flicker noise that increases when signal intensity is increased because of drift. The accuracy was improved in many cases when flicker correction was used, but when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful to extract results from experimental runs that would otherwise have to be run again. Graphical Abstract A method is presented where a spICP-MS signal affected by drift (left) is corrected (right) by adjusting the local (moving) averages (green) and standard deviations (purple) to the respective values at a reference time (red). In combination with removing particle events (blue) in the case of calibration standards, this method is shown to obtain particle size distributions where that would otherwise be impossible, even when the deconvolution method is used to discriminate dissolved and particle signals.
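
    A minimal sketch of the correction idea, assuming a single time-resolved intensity trace in counts; the window length, 3-sigma particle flag, and reference interval are placeholder choices, and unlike the published method the flagged particle events are not treated separately here:

        import numpy as np
        import pandas as pd

        def drift_correct(signal, window=501, ref_slice=slice(0, 2000)):
            """Rescale a dissolved-signal trace so its local moving average and standard
            deviation match their values over a reference interval. Particle events are
            flagged with a crude 3-sigma rule and excluded from the moving statistics."""
            s = pd.Series(signal, dtype=float)
            dissolved = s.copy()
            dissolved[s > s.mean() + 3 * s.std()] = np.nan      # mask particle events
            local_mu = dissolved.rolling(window, center=True, min_periods=50).mean()
            local_sd = dissolved.rolling(window, center=True, min_periods=50).std()
            ref_mu, ref_sd = dissolved[ref_slice].mean(), dissolved[ref_slice].std()
            return ((s - local_mu) / local_sd * ref_sd + ref_mu).to_numpy()

        # Toy run: a slowly drifting baseline with occasional particle spikes.
        rng = np.random.default_rng(7)
        sig = rng.poisson(100 + 0.02 * np.arange(20000)).astype(float)
        sig[rng.integers(0, 20000, 200)] += rng.integers(300, 800, 200)
        corrected = drift_correct(sig)
        print(np.nanmean(corrected[:2000]), np.nanmean(corrected[-2000:]))   # similar baselines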

  8. Pre-Drinking and the Temporal Gradient of Intoxication in a New Zealand Nightlife Environment.

    PubMed

    Cameron, Michael P; Roskruge, Matthew J; Droste, Nic; Miller, Peter G

    2018-01-01

    We measured changes in the average level of intoxication over time in the nighttime economy and identified the factors associated with intoxication, including pre-drinking. A random intercept sample of 320 pedestrians (105 women; 215 men) was interviewed and received breath alcohol analysis in the nighttime economy of Hamilton, New Zealand. Data were collected over a five-night period, between 7 P.M. and 2:30 A.M. Data were analyzed by plotting the moving average breath alcohol concentration (BrAC) over time and using linear regression models to identify the factors associated with BrAC. Mean BrAC was 241.5 mcg/L for the full sample; 179.7 for women and 271.7 for men, which is a statistically significant difference. Mean BrAC was also significantly higher among those who engaged in pre-drinking than those who did not. In the regression models, time of night and pre-drinking were significantly associated with higher BrAC. The effect of pre-drinking on BrAC was larger for women than for men. The average level of intoxication increases throughout the night. However, this masks a potentially important gender difference, in that women's intoxication levels stop increasing after midnight, whereas men's increase continuously through the night. Similarly, intoxication of pre-drinkers stops increasing from 11 P.M., although remaining higher than non-pre-drinkers throughout the night. Analysis of BrAC provides a more nuanced understanding of intoxication levels in the nighttime economy.

  9. SU-G-JeP4-05: Effects of Irregular Respiratory Motion On the Positioning Accuracy of Moving Target with Free Breathing Cone-Beam Computerized Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, X; Xiong, W; Gewanter, R

    Purpose: Average or maximum intensity projection (AIP or MIP) images derived from 4DCT images are often used as a reference image for target alignment when free breathing Cone-beam CT (FBCBCT) is used for positioning a moving target at treatment. This method can be highly accurate if the patient has stable respiratory motion. However, a patient’s breathing pattern often varies irregularly. The purpose of this study is to investigate the effect of irregular respiration on the positioning accuracy of a moving target with FBCBCT. Methods: Eight patients’ respiratory motion curves were selected to drive a Quasar phantom with embedded cubic and spherical targets. A 4DCT of the moving phantom was acquired on a CT scanner (Philips Brilliance 16) equipped with a Varian RPM system. The phase binned 4DCT images and the corresponding MIP and AIP images were transferred into Eclipse for analysis. CBCTs of the phantom driven by the same breathing curves were acquired on a Varian TrueBeam and fused such that the zero positions of moving targets are the same on both CBCT and AIP images. The sphere and cube volumes and centroid differences (alignment error) determined by MIP, AIP and FBCBCT images were compared. Results: Compared to the volume determined by FBCBCT, the volumes of cube and sphere in MIP images were 22.4%±8.8% and 34.2%±6.2% larger while the volumes in AIP images were 7.1%±6.2% and 2.7%±15.3% larger, respectively. The alignment errors for the cube and sphere with center-center matches between MIP and FBCBCT were 3.5±3.1mm and 3.2±2.3mm, and the alignment errors between AIP and FBCBCT were 2.1±2.6mm and 2.1±1.7mm, respectively. Conclusion: AIP images appear to be superior to MIP images as reference images. However, irregular respiratory motions could compromise the positioning accuracy of a moving target if the target center-center match is used to align FBCBCT and AIP images.

  10. Dynamical features of hazardous near-Earth objects

    NASA Astrophysics Data System (ADS)

    Emel'yanenko, V. V.; Naroenkov, S. A.

    2015-07-01

    We discuss the dynamical features of near-Earth objects moving in dangerous proximity to Earth. We report the computation results for the motions of all observed near-Earth objects over a 600-year-long time period: 300 years in the past and 300 years in the future. We analyze the dynamical features of Earth-approaching objects. In particular, we established that the observed distribution of geocentric velocities of dangerous objects depends on their size. No bodies with geocentric velocities smaller than 5 km/s have been found among hazardous objects with absolute magnitudes H <18, whereas 9% of observed objects with H <27 pass near Earth moving at such velocities. On the other hand, we found a tendency for geocentric velocities to increase at H >29. We estimated the distribution of absolute magnitudes of hazardous objects based on our analysis of the data for the asteroids that have passed close to Earth. We inferred the Earth-impact frequencies for objects of different sizes. Impacts of objects with H <18 occur on average once every 0.53 Myr, and impacts of objects with H <27 occur once every 130-240 years. We show that currently about 0.1% of all near-Earth objects with diameters greater than 10 m have been discovered. We point out the discrepancies between the estimates of impact rates of Chelyabinsk-type objects, determined from fireball observations and from the data of telescopic asteroid tracking surveys. These estimates can be reconciled assuming that Chelyabinsk-sized asteroids have very low albedos (about 0.02 on average).

  11. Baseline repeated measures from controlled human exposure studies: associations between ambient air pollution exposure and the systemic inflammatory biomarkers IL-6 and fibrinogen.

    PubMed

    Thompson, Aaron M S; Zanobetti, Antonella; Silverman, Frances; Schwartz, Joel; Coull, Brent; Urch, Bruce; Speck, Mary; Brook, Jeffrey R; Manno, Michael; Gold, Diane R

    2010-01-01

    Systemic inflammation may be one of the mechanisms mediating the association between ambient air pollution and cardiovascular morbidity and mortality. Interleukin-6 (IL-6) and fibrinogen are biomarkers of systemic inflammation that are independent risk factors for cardio-vascular disease. We investigated the association between ambient air pollution and systemic inflammation using baseline measurements of IL-6 and fibrinogen from controlled human exposure studies. In this retrospective analysis we used repeated-measures data in 45 nonsmoking subjects. Hourly and daily moving averages were calculated for ozone, nitrogen dioxide, sulfur dioxide, and particulate matter

  12. Video-Assisted Thoracic Surgical Lobectomy for Lung Cancer: Description of a Learning Curve.

    PubMed

    Yao, Fei; Wang, Jian; Yao, Ju; Hang, Fangrong; Cao, Shiqi; Cao, Yongke

    2017-07-01

    Video-assisted thoracic surgical (VATS) lobectomy is gaining popularity in the treatment of lung cancer. The aim of this study is to investigate the learning curve of VATS lobectomy by using multidimensional methods and to compare the learning curve groups with respect to perioperative clinical outcomes. We retrospectively reviewed a prospective database to identify 67 consecutive patients who underwent VATS lobectomy for lung cancer by a single surgeon. The learning curve was analyzed by using moving average and the cumulative sum (CUSUM) method. With the moving average and CUSUM analyses for the operation time, patients were stratified into two groups, with chronological order defining early and late experiences. Perioperative clinical outcomes were compared between the two learning curve groups. According to the moving average method, the peak point for operation time occurred at the 26th case. The CUSUM method also showed the operation time peak point at the 26th case. When results were compared between early- and late-experience periods, the operation time, duration of chest drainage, and postoperative hospital stay were significantly longer in the early-experience group (cases 1 to 26). The intraoperative estimated blood loss was significantly less in the late-experience group (cases 27 to 67). CUSUM charts showed a decreasing duration of chest drainage after the 36th case and shortening postoperative hospital stay after the 37th case. Multidimensional statistical analyses suggested that the learning curve for VATS lobectomy for lung cancer required ∼26 cases. Favorable intraoperative and postoperative care parameters for VATS lobectomy were observed in the late-experience group.
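
    As a simple illustration of the two learning-curve diagnostics named above, the sketch below applies a CUSUM chart and a moving average to simulated operation times; the transition at case 26 is built into the toy data rather than derived from the study's records:

        import numpy as np

        # Hypothetical operation times (minutes) for 67 consecutive VATS lobectomies:
        # longer and more variable early on, shorter once the learning curve flattens.
        rng = np.random.default_rng(8)
        op_time = np.concatenate([rng.normal(260, 40, 26), rng.normal(200, 25, 41)])

        # CUSUM of deviations from the overall mean: the curve rises while cases run
        # longer than average and peaks near the transition point of the learning curve.
        cusum = np.cumsum(op_time - op_time.mean())
        print("CUSUM peak at case:", int(np.argmax(cusum)) + 1)

        # A simple moving average over consecutive cases gives a complementary view.
        window = 10
        moving_avg = np.convolve(op_time, np.ones(window) / window, mode="valid")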

  13. Breathing-motion-compensated robotic guided stereotactic body radiation therapy : Patterns of failure analysis.

    PubMed

    Stera, Susanne; Balermpas, Panagiotis; Chan, Mark K H; Huttenlocher, Stefan; Wurster, Stefan; Keller, Christian; Imhoff, Detlef; Rades, Dirk; Dunst, Jürgen; Rödel, Claus; Hildebrandt, Guido; Blanck, Oliver

    2018-02-01

    We retrospectively evaluated the patterns of failure for robotic guided real-time breathing-motion-compensated (BMC) stereotactic body radiation therapy (SBRT) in the treatment of tumors in moving organs. Between 2011 and 2016, a total of 198 patients with 280 lung, liver, and abdominal tumors were treated with BMC-SBRT. The median gross tumor volume (GTV) was 12.3 cc (0.1-372.0 cc). The median of the mean GTV BED (α/β = 10 Gy; BED = biological effective dose) was 148.5 Gy10 (31.5-233.3 Gy10) and the median prescribed planning target volume (PTV) BED (α/β = 10 Gy) was 89.7 Gy10 (28.8-151.2 Gy10). We analyzed overall survival (OS) and local control (LC) based on various factors, including BEDs with α/β ratios of 15 Gy (lung metastases), 21 Gy (primary lung tumors), and 27 Gy (liver metastases). Median follow-up was 10.4 months (2.0-59.0 months). The 2-year actuarial LC was 100 and 86.4% for primary early and advanced stage lung tumors, respectively, 100% for lung metastases, 82.2% for liver metastases, and 90% for extrapulmonary extrahepatic metastases. The 2-year OS rate was 47.9% for all patients. In uni- and multivariate analysis, a comparatively lower PTV prescription dose (equivalent to 3 × 12-13 Gy) and a higher average GTV dose (equivalent to 3 × 18 Gy) relative to current practice were significantly associated with LC. For OS, Karnofsky performance score (100%), gender (female), and SBRT without simultaneous chemotherapy were significant prognostic factors. Grade 3 side effects were rare (0.5%). Robotic guided BMC-SBRT can be considered a safe and effective treatment for solid tumors in moving organs. To reach sufficient local control rates, high average GTV doses are necessary. Further prospective studies are warranted to evaluate these points.

  14. Trend analysis of a tropical urban river water quality in Malaysia.

    PubMed

    Othman, Faridah; M E, Alaa Eldin; Mohamed, Ibrahim

    2012-12-01

    Rivers play a significant role in providing water resources for human and ecosystem survival and health. Hence, river water quality is an important parameter that must be preserved and monitored. As the state of Selangor and the city of Kuala Lumpur, Malaysia, are undergoing tremendous development, the river is subjected to pollution from point and non-point sources. The water quality of the Klang River basin, one of the most densely populated areas within the region, is significantly degraded due to human activities as well as urbanization. Evaluation of the overall river water quality status is normally represented by a water quality index (WQI), which consists of six parameters, namely dissolved oxygen, biochemical oxygen demand, chemical oxygen demand, suspended solids, ammoniacal nitrogen and pH. The objectives of this study are to assess the water quality status for this tropical, urban river and to establish the WQI trend. Using monthly WQI data from 1997 to 2007, time series were plotted and trend analysis was performed by employing the first-order autocorrelated trend model on the moving average values for every station. The initial and final values of either the moving average or the trend model were used as the estimates of the initial and final WQI at the stations. It was found that Klang River water quality has shown some improvement between 1997 and 2007. Water quality remains good in the upper stream area, which provides vital water sources for water treatment plants in the Klang valley. Meanwhile, the water quality has also improved in other stations. Results of the current study suggest that the present policy on managing river quality in the Klang River has produced encouraging results; the policy should, however, be further improved alongside more vigorous monitoring of pollution discharge from various point sources such as industrial wastewater, municipal sewers, wet markets, sand mining and landfills, as well as non-point sources such as agricultural or urban runoff and commercial activity.

  15. The BMPix and PEAK Tools: New Methods for Automated Laminae Recognition and Counting - Application to Glacial Varves From Antarctic Marine Sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.

    2009-12-01

    We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway-passage of the gray-scale curve through a wide moving average. Hence, the record is separated into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components before positive and negative passages are counted. We applied the new methods successfully to tree rings and to well-dated and already manually counted marine varves from Saanich Inlet before we adapted the tools to rather complex marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition, a coarser-grained bright layer, and a finer-grained dark layer. The new tools offer several advantages over previous tools. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Since PEAK associates counts with a specific depth, the thickness of each year or each season is also measured, which is an important prerequisite for later spectral analysis. Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
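
    A minimal sketch of the zero-crossing idea, assuming a one-dimensional gray-scale profile in a NumPy array; the smoothing width, moving-average window, and synthetic profile are placeholders rather than the PEAK tool's actual settings:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def count_couplets_zero_crossing(gray, smooth_sigma=3, wide_window=101):
            """Count bright/dark couplets as halfway passages of the smoothed gray-scale
            curve through a wide moving average: one positive plus one negative passage
            per couplet, in the spirit of the PEAK zero-crossing algorithm."""
            smooth = gaussian_filter1d(gray.astype(float), smooth_sigma)
            wide = np.convolve(smooth, np.ones(wide_window) / wide_window, mode="same")
            sign = np.sign(smooth - wide)
            crossings = np.sum(sign[1:] != sign[:-1])
            return crossings // 2               # two crossings per bright/dark couplet

        # Toy gray-scale profile: about 40 synthetic couplets on a slowly drifting background.
        x = np.arange(4000)
        gray = 128 + 40 * np.sin(2 * np.pi * x / 100) + 0.01 * x + np.random.normal(0, 5, 4000)
        print("estimated couplets:", count_couplets_zero_crossing(gray))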

  16. Cost Analysis and Policy Implications of a Pediatric Palliative Care Program.

    PubMed

    Gans, Daphna; Hadler, Max W; Chen, Xiao; Wu, Shang-Hua; Dimand, Robert; Abramson, Jill M; Ferrell, Betty; Diamant, Allison L; Kominski, Gerald F

    2016-09-01

    In 2010, California launched Partners for Children (PFC), a pediatric palliative care pilot program offering hospice-like services for children eligible for full-scope Medicaid delivered concurrently with curative care, regardless of the child's life expectancy. We assessed the change from before PFC enrollment to the enrolled period in 1) health care costs per enrollee per month (PEPM), 2) costs by service type and diagnosis category, and 3) health care utilization (days of inpatient care and length of hospital stay). A pre-post analysis compared enrollees' health care costs and utilization up to 24 months before enrollment with their costs during participation in the pilot, from January 2010 through December 2012. Analyses were conducted using paid Medicaid claims and program enrollment data. The average PEPM health care costs of program enrollees decreased by $3331 from before their participation in PFC to the enrolled period, driven by a reduction in inpatient costs of $4897 PEPM. PFC enrollees experienced a nearly 50% reduction in the average number of inpatient days per month, from 4.2 to 2.3. Average length of stay per hospitalization dropped from an average of 16.7 days before enrollment to 6.5 days while in the program. Through the provision of home-based therapeutic services, 24/7 access to medical advice, and enhanced, personally tailored care coordination, PFC demonstrated an effective way to reduce costs for children with life-limiting conditions by moving from costly inpatient care to more coordinated and less expensive outpatient care. PFC's home-based care strategy is a cost-effective model for pediatric palliative care elsewhere. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  17. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  20. Girls Thrive Emotionally, Boys Falter After Move to Better Neighborhood

    MedlinePlus

    ... averaging 34 percent, compared to 50 percent for control group families. Mental illness is more prevalent among youth ... compared to 3.5 percent among boys in control group families who did not receive vouchers. Rates of ...

  1. Rippling Dune Front in Herschel Crater on Mars

    NASA Image and Video Library

    2011-11-17

    A rippled dune front in Herschel Crater on Mars moved an average of about two meters (about two yards) between March 3, 2007 and December 1, 2010, as seen in one of two images from NASA's Mars Reconnaissance Orbiter.

  2. Rippling Dune Front in Herschel Crater on Mars

    NASA Image and Video Library

    2011-11-17

    A rippled dune front in Herschel Crater on Mars moved an average of about one meter (about one yard) between March 3, 2007 and December 1, 2010, as seen in one of two images from NASA's Mars Reconnaissance Orbiter.

  3. Shifting Sand in Herschel Crater

    NASA Image and Video Library

    2011-11-17

    The eastern margin of a rippled dune in Herschel Crater on Mars moved an average distance of three meters (about three yards) between March 3, 2007 and December 1, 2010, in one of two images taken by NASA's Mars Reconnaissance Orbiter.

  4. Assessment and prediction of road accident injuries trend using time-series models in Kurdistan.

    PubMed

    Parvareh, Maryam; Karimi, Asrin; Rezaei, Satar; Woldemichael, Abraha; Nili, Sairan; Nouri, Bijan; Nasab, Nader Esmail

    2018-01-01

    Road traffic accidents are commonly encountered incidents that can cause high-intensity injuries to the victims and have direct impacts on members of society. Iran has one of the highest incidence rates of road traffic accidents. The objective of this study was to model the patterns of road traffic accidents leading to injury in Kurdistan province, Iran. A time-series analysis was conducted to characterize and predict the frequency of road traffic accidents that lead to injury in Kurdistan province. The injuries were categorized into three separate groups related to car occupant, motorcyclist and pedestrian road traffic accident injuries. The Box-Jenkins time-series approach was used to model the injury observations from March 2009 to February 2015, applying autoregressive integrated moving average (ARIMA) and seasonal autoregressive integrated moving average (SARIMA) models, and to predict the accidents up to 24 months ahead (February 2017). The analysis was carried out using the R-3.4.2 statistical software package. A total of 5199 pedestrian, 9015 motorcyclist, and 28,906 car occupant accident injuries were observed. The mean numbers of car occupant, motorcyclist and pedestrian accident injuries observed were 401.01 (SD 32.78), 123.70 (SD 30.18) and 71.19 (SD 17.92) per month, respectively. The best models for the patterns of car occupant, motorcyclist, and pedestrian injuries were ARIMA (1, 0, 0), SARIMA (1, 0, 2)(1, 0, 0)_12, and SARIMA (1, 1, 1)(0, 0, 1)_12, respectively. The motorcyclist and pedestrian injuries showed a seasonal pattern with a peak during summer (August). The minimum frequency for the motorcyclist and pedestrian injuries was observed during late autumn and early winter (December and January). Our findings revealed that the observed motorcyclist and pedestrian injuries had a seasonal pattern that was explained by air temperature changes over time. These findings call for close monitoring of accidents during the high-risk periods in order to control and decrease the rate of injuries.
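
    For illustration, a seasonal ARIMA of the form reported for the motorcyclist series can be fitted with Python's statsmodels (the study itself used R); the monthly series below is hypothetical:

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # Hypothetical monthly injury counts with a summer peak, standing in for the
        # March 2009 - February 2015 motorcyclist series.
        rng = np.random.default_rng(9)
        months = np.arange(72)
        y = 120 + 30 * np.sin(2 * np.pi * (months - 5) / 12) + rng.normal(0, 10, 72)

        # Seasonal ARIMA of the form reported for motorcyclist injuries: (1,0,2)(1,0,0)_12.
        model = SARIMAX(y, order=(1, 0, 2), seasonal_order=(1, 0, 0, 12))
        fit = model.fit(disp=False)
        forecast = fit.get_forecast(steps=24)    # predict 24 months ahead
        print(forecast.predicted_mean[:12])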

  5. Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data

    PubMed Central

    Tran, Truyen; Luo, Wei; Phung, Dinh; Venkatesh, Svetha

    2016-01-01

    Background: Modeling patient flow is crucial in understanding resource demand and prioritization. We study patient outflow from an open ward in an Australian hospital, where bed allocation is currently carried out by a manager relying on past experience and observed demand. Automatic methods that provide a reasonable estimate of total next-day discharges can aid efficient bed management. The challenges in building such methods lie in dealing with large amounts of discharge noise introduced by the nonlinear nature of hospital procedures, and in the nonavailability of real-time clinical information in wards. Objective: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. Whereas the autoregressive integrated moving average model relied on the past 3 months of discharges, nearest neighbor forecasting used the median of similar past discharges to estimate the next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with the random forests achieving a 22.7% improvement in mean absolute error for all days in the year 2014. Conclusions: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments. PMID:27444059
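    The comparison between a learned regressor and a moving-average baseline described above can be sketched as follows. The lag and day-of-week features and the synthetic discharge series are illustrative assumptions, not the study's 108-feature predictor set.

```python
# Sketch: a random-forest next-day discharge forecaster compared against a
# 7-day moving-average baseline, on a synthetic series.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
days = pd.date_range("2010-01-01", periods=1826, freq="D")
discharges = pd.Series(
    20 + 5 * (days.dayofweek < 5) + rng.poisson(3, len(days)), index=days
)

df = pd.DataFrame({"y": discharges})
df["dow"] = days.dayofweek
for lag in range(1, 8):                       # last week's discharges as features
    df[f"lag{lag}"] = df["y"].shift(lag)
df = df.dropna()

train, test = df.iloc[:-365], df.iloc[-365:]  # hold out the final year
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(train.drop(columns="y"), train["y"])

rf_mae = mean_absolute_error(test["y"], rf.predict(test.drop(columns="y")))
ma_mae = mean_absolute_error(test["y"], df["y"].rolling(7).mean().shift(1).loc[test.index])
print(f"random forest MAE {rf_mae:.2f} vs moving average MAE {ma_mae:.2f}")
```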

  6. Computer simulation of concentrated solid solution strengthening

    NASA Technical Reports Server (NTRS)

    Kuo, C. T. K.; Arsenault, R. J.

    1976-01-01

    The interaction forces between a straight edge dislocation and a random array of solute atoms were determined as the dislocation moved through a three-dimensional block containing the array. The yield stress at 0 K was obtained by determining the average maximum solute-dislocation interaction force encountered by the edge dislocation, and an expression relating the yield stress to the length of the dislocation and the solute concentration is provided. The magnitude of the solid solution strengthening due to solute atoms can be determined directly from the numerical results, provided the dislocation line length that moves as a unit is specified.

  7. Unsteady characteristics of low-Re flow past two tandem cylinders

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Dou, Hua-Shu; Zhu, Zuchao; Li, Yi

    2018-06-01

    This study investigated the two-dimensional flow past two tandem circular or square cylinders at Re = 100 and D/d = 4-10, where D is the center-to-center distance and d is the cylinder diameter. Numerical simulation was performed to compare the effects of cylinder geometry and spacing on the aerodynamic characteristics, unsteady flow patterns, time-averaged flow characteristics and flow unsteadiness. We also provide the first global linear stability analysis and sensitivity analysis of this physical problem for the potential application of flow control. The objective of this work is to quantitatively identify the effect of the cylinder geometry and spacing on the characteristic quantities. Numerical results reveal that there is a wake flow transition for both geometries depending on the spacing. The characteristic quantities, including the time-averaged and fluctuating streamwise velocity and pressure coefficient, are quite similar to those of the single-cylinder case for the upstream cylinder, while an entirely different variation pattern is observed for the downstream cylinder. The global linear stability analysis shows that the spatial structure of the perturbation is mainly observed in the wake of the downstream cylinder for small spacing, while it moves upstream with reduced size and is also observed behind the upstream cylinder for large spacing. The sensitivity analysis shows that the temporal growth rate of the perturbation is most sensitive to the near-wake flow of the downstream cylinder for small spacing and of the upstream cylinder for large spacing.

  8. From Moves to Sequences: Expanding the Unit of Analysis in the Study of Classroom Discourse

    ERIC Educational Resources Information Center

    Lefstein, Adam; Snell, Julia; Israeli, Mirit

    2015-01-01

    What is the appropriate unit of analysis for the study of classroom discourse? One common analytic strategy employs individual discourse moves, which are coded, counted and used as indicators of the quality of classroom talk. In this article we question this practice, arguing that discourse moves are positioned within sequences that critically…

  9. [Analysis on influential factors of Chinese medicinal herb growers' willingness to use green pesticides: evidence on Panax notoginseng production areas in Wenshan, Yunnan province].

    PubMed

    Qian, Yun-Xu; Yang, Yue; Zhao, Wei; Cui, Xiu-Ming; Bi, Kai-Shun

    2013-10-01

    The purpose of this article is to apply a binary logistic model to analyze the major factors that influence Chinese medicinal herb growers' willingness to use green pesticides, using survey data collected in Wenshan, Yunnan Province. The results indicate that output per capita, average pesticide cost per mu, cognition of pesticide residues, expectations of Panax notoginseng prices, cognition of pesticides' effectiveness in pest control, and cognition of the prices of low-pesticide-residue P. notoginseng have a significant influence on growers' willingness to use green pesticides. Based on this analysis, several proposals for enhancing Chinese medicinal herb growers' willingness to use green pesticides are put forward, such as moving toward intensive planting systems, bringing down the prices of green pesticides, emphasizing and publicizing the advantages of green pesticides, and keeping the prices of Chinese medicinal herbs stable.

  10. Network effects in environmental justice struggles: An investigation of conflicts between mining companies and civil society organizations from a network perspective.

    PubMed

    Aydin, Cem Iskender; Ozkaynak, Begum; Rodríguez-Labajos, Beatriz; Yenilmez, Taylan

    2017-01-01

    This paper examines conflicts that occur between mining companies and civil society organizations (CSOs) around the world and offers an innovative analysis of mining conflicts from a social network perspective. The analysis showed that, as the number of CSOs involved in a conflict increased, its outcome was more likely to be perceived as a success in terms of environmental justice (EJ); if a CSO was connected to other central CSOs, the average perception of EJ success was likely to increase; and as network distance between two conflicts increased (or decreased), they were more likely to lead to different (or similar) EJ outcomes. Such network effects in mining conflicts have policy implications for EJ movements. It would be a strategic move on the part of successful CSOs to become involved in other major conflicts and disseminate information about how they achieved greater EJ success.

  11. Spectral analysis based on fast Fourier transformation (FFT) of surveillance data: the case of scarlet fever in China.

    PubMed

    Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X

    2014-03-01

    Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations of time-domain approaches include over-differencing and over-fitting; furthermore, the use of these approaches is inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transformation (FFT) approach to evaluate the epidemic dynamics of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validity and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cycling behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable to the study of oscillating diseases.
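    A hedged sketch of the FFT-based periodogram idea follows; the monthly series is synthetic and the procedure is a simplified stand-in for the full method described in the record.

```python
# Sketch: locating the dominant cycle in a monthly surveillance series with a
# plain FFT periodogram, assuming evenly spaced counts (synthetic data here).
import numpy as np

rng = np.random.default_rng(2)
n = 84                                   # 7 years of monthly counts
t = np.arange(n)
cases = 200 + 80 * np.cos(2 * np.pi * t / 12) + rng.normal(0, 15, n)

detrended = cases - cases.mean()
spectrum = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)        # cycles per month

peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
print(f"dominant period ≈ {1 / peak:.1f} months")
```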

  12. Multiscale volatility duration characteristics on financial multi-continuum percolation dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Min; Wang, Jun

    A random stock price model based on the multi-continuum percolation system is developed to investigate the nonlinear dynamics of stock price volatility duration, in an attempt to explain various statistical facts found in financial data and to gain a deeper understanding of mechanisms in the financial market. The continuum percolation system, usually referred to as a random coverage process or a Boolean model, is a member of a class of statistical physics systems. In this paper, multi-continuum percolation (with different values of radius) is employed to model and reproduce the dispersal of information among investors. To test the rationality of the proposed model, nonlinear analyses of return volatility duration series are performed by multifractal detrending moving average analysis and Zipf analysis. The empirical comparison indicates similar nonlinear behaviors for the proposed model and the actual Chinese stock market.

  13. Network effects in environmental justice struggles: An investigation of conflicts between mining companies and civil society organizations from a network perspective

    PubMed Central

    Aydin, Cem Iskender; Ozkaynak, Begum; Rodríguez-Labajos, Beatriz

    2017-01-01

    This paper examines conflicts that occur between mining companies and civil society organizations (CSOs) around the world and offers an innovative analysis of mining conflicts from a social network perspective. The analysis showed that, as the number of CSOs involved in a conflict increased, its outcome was more likely to be perceived as a success in terms of environmental justice (EJ); if a CSO was connected to other central CSOs, the average perception of EJ success was likely to increase; and as network distance between two conflicts increased (or decreased), they were more likely to lead to different (or similar) EJ outcomes. Such network effects in mining conflicts have policy implications for EJ movements. It would be a strategic move on the part of successful CSOs to become involved in other major conflicts and disseminate information about how they achieved greater EJ success. PMID:28686618

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Arizona. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Hawaii. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  16. A wirelessly-powered homecage with animal behavior analysis and closed-loop power control.

    PubMed

    Yaoyao Jia; Zheyuan Wang; Canales, Daniel; Tinkler, Morgan; Chia-Chun Hsu; Madsen, Teresa E; Mirbozorgi, S Abdollah; Rainnie, Donald; Ghovanloo, Maysam

    2016-08-01

    This paper presents a new EnerCage-homecage system, EnerCage-HC2, for longitudinal electrophysiology data acquisition experiments on small freely moving animal subjects, such as rodents. EnerCage-HC2 is equipped with multi-coil wireless power transmission (WPT), closed-loop power control, bidirectional data communication via Bluetooth Low Energy (BLE), and Microsoft Kinect® based animal behavior tracking and analysis. The EnerCage-HC2 achieves a homogeneous power transfer efficiency (PTE) of 14% on average, with ~42 mW power delivered to the load (PDL) at a nominal height of 7 cm by the closed-loop power control mechanism. The Microsoft Kinect® behavioral analysis algorithm can not only track the animal position in real-time but also classify 5 different types of rodent behaviors: standstill, walking, grooming, rearing, and rotating. A proof-of-concept in vivo experiment was conducted on two awake freely behaving rats while successfully operating a one-channel stimulator and generating an ethogram.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Connecticut. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  18. Averaging interval selection for the calculation of Reynolds shear stress for studies of boundary layer turbulence.

    NASA Astrophysics Data System (ADS)

    Lee, Zoe; Baas, Andreas

    2013-04-01

    It is widely recognised that boundary layer turbulence plays an important role in sediment transport dynamics in aeolian environments. Improvements in the design and affordability of ultrasonic anemometers have provided significant contributions to studies of aeolian turbulence by facilitating high-frequency monitoring of three-dimensional wind velocities. Consequently, research has moved beyond studies of mean airflow properties to investigations of quasi-instantaneous turbulent fluctuations at high spatio-temporal scales. To fully understand how temporal fluctuations in shear stress drive wind erosivity and sediment transport, research into best practice for calculating shear stress is necessary. This paper builds upon work published by Lee and Baas (2012) on the influence of streamline correction techniques on Reynolds shear stress by investigating the time-averaging interval used in the calculation. Concerns relating to the selection of appropriate averaging intervals for turbulence research, where the data are typically non-stationary at all timescales, are well documented in the literature (e.g. Treviño and Andreas, 2000). For example, Finnigan et al. (2003) found that underestimating the required averaging interval can lead to a reduction in the calculated momentum flux, as contributions from turbulent eddies longer than the averaging interval are lost. To avoid the risk of underestimating fluxes, researchers have typically used the total measurement duration as a single averaging period. For non-stationary data, however, using the whole measurement run as a single block average is inadequate for defining turbulent fluctuations. The data presented in this paper were collected in a field study of boundary layer turbulence conducted at Tramore beach near Rosapenna, County Donegal, Ireland. High-frequency (50 Hz) 3D wind velocity measurements were collected using ultrasonic anemometry at thirteen different heights between 0.11 and 1.62 metres above the bed. A technique for determining time-averaging intervals for a series of anemometers stacked in a close vertical array is presented. A minimum timescale is identified using spectral analysis to determine the inertial sub-range, where energy is neither produced nor dissipated but passed down to increasingly smaller scales. An autocorrelation function is then used to derive a scaling pattern between anemometer heights, which defines a series of averaging intervals of increasing length with height above the surface. Results demonstrate the effect of different averaging intervals on the calculation of Reynolds shear stress and highlight the inadequacy of using the total measurement duration as a single block average. Lee, Z. S. and Baas, A. C. W., 2012. Streamline correction for the analysis of boundary layer turbulence. Geomorphology, 171-172, 69-82. Treviño, G. and Andreas, E. L., 2000. Averaging intervals for spectral analysis of nonstationary turbulence. Boundary-Layer Meteorology, 95(2): 231-247. Finnigan, J. J., Clement, R., Malhi, Y., Leuning, R. and Cleugh, H. A., 2003. Re-evaluation of long-term flux measurement techniques. Part I: Averaging and coordinate rotation. Boundary-Layer Meteorology, 107(1): 1-48.
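    The dependence of the Reynolds shear stress on the chosen averaging interval can be illustrated with a short sketch; the 50 Hz velocity series below is synthetic, not the Tramore data, and the simple block-averaging scheme is an assumption made for illustration.

```python
# Sketch: how the choice of averaging interval changes the Reynolds shear
# stress -<u'w'> computed from high-frequency anemometer data (synthetic).
import numpy as np

rng = np.random.default_rng(3)
fs, minutes = 50, 30
n = fs * 60 * minutes
slow_trend = np.cumsum(rng.normal(0, 0.002, n))          # non-stationary drift
u = 6.0 + slow_trend + rng.normal(0, 0.8, n)              # streamwise velocity
w = -0.2 * (u - u.mean()) + rng.normal(0, 0.3, n)         # vertical velocity

def reynolds_stress(u, w, block_len):
    """Mean -u'w' with fluctuations defined against block averages."""
    stresses = []
    for start in range(0, len(u) - block_len + 1, block_len):
        ub, wb = u[start:start + block_len], w[start:start + block_len]
        stresses.append(-np.mean((ub - ub.mean()) * (wb - wb.mean())))
    return np.mean(stresses)

for seconds in (30, 120, 600, 60 * minutes):              # last = one single block
    print(seconds, reynolds_stress(u, w, seconds * fs))
```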

  19. Weather explains high annual variation in butterfly dispersal

    PubMed Central

    Rytteri, Susu; Heikkinen, Risto K.; Heliölä, Janne; von Bagh, Peter

    2016-01-01

    Weather conditions fundamentally affect the activity of short-lived insects. Annual variation in weather is therefore likely to be an important determinant of their between-year variation in dispersal, but conclusive empirical studies are lacking. We studied whether the annual variation in dispersal can be explained by the flight season's weather conditions in a Clouded Apollo (Parnassius mnemosyne) metapopulation. This metapopulation was monitored using the mark–release–recapture method for 12 years. Dispersal was quantified for each monitoring year using three complementary measures: emigration rate (fraction of individuals moving between habitat patches), average residence time in the natal patch, and average distance moved. There was much variation both in dispersal and in average weather conditions among the years. Weather variables significantly affected the three measures of dispersal and, together with adjusting variables, explained 79–91% of the variation observed in dispersal. Different weather variables were selected in the models explaining variation in the three dispersal measures, apparently because of notable intercorrelations. In general, dispersal rate increased with increasing temperature, solar radiation, proportion of especially warm days, and butterfly density, and decreased with increasing cloudiness, rainfall, and wind speed. These results help to understand and model the annually varying dispersal dynamics of species affected by global warming. PMID:27440662

  20. Identifying hidden sexual bridging communities in Chicago.

    PubMed

    Youm, Yoosik; Mackesy-Amiti, Mary Ellen; Williams, Chyvette T; Ouellet, Lawrence J

    2009-07-01

    Bridge populations can play a central role in the spread of human immunodeficiency virus (HIV) by providing transmission links between higher and lower prevalence populations. While social network methods are well suited to the study of bridge populations, analyses tend to focus on dyads (i.e., risk between drug and/or sex partners) and ignore bridges between distinct subpopulations. This study takes initial steps toward moving the analysis of sexual network linkages beyond individual and risk group levels to a community level in which Chicago's 77 community areas are examined as subpopulations for the purpose of identifying potential bridging communities. Of particular interest are "hidden" bridging communities; that is, areas with above-average levels of sexual ties with other areas but whose below-average AIDS prevalence may hide their potential importance for HIV prevention. Data for this analysis came from the first wave of recruiting at the Chicago Sexual Acquisition and Transmission of HIV Cooperative Agreement Program site. Between August 2005 and October 2006, respondent-driven sampling was used to recruit users of heroin, cocaine, or methamphetamine, men who have sex with men regardless of drug use, the sex partners of these two groups, and sex partners of the sex partners. In this cross-sectional study of the sexual transmission of HIV, participants completed a network-focused computer-assisted self-administered interview, which included questions about the geographic locations of sexual contacts with up to six recent partners. Bridging scores for each area were determined using a matrix representing Chicago's 77 community areas and were assessed using two measures: non-redundant ties and flow betweenness. Bridging measures and acquired immunodeficiency syndrome (AIDS) case prevalence rates were plotted for each community area on charts representing four conditions: below-average bridging and AIDS prevalence, below-average bridging and above-average AIDS prevalence, above-average bridging and AIDS prevalence, and above-average bridging and below-average AIDS prevalence (hidden bridgers). The majority of the 1,068 study participants were male (63%), African American (74%), and very poor, and the median age was 44 years. Most (85%) were sexually active, and 725 provided useable geographic information regarding 1,420 sexual partnerships that involved 57 Chicago community areas. Eight community areas met or came close to meeting the definition of hidden bridgers. Six areas were near the city's periphery, and all eight areas likely had high inflows or outflows of low-income persons displaced by gentrification. The results suggest that further research on this method is warranted, and we propose a means for public health officials in other cities to duplicate the analysis.

  1. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied for the first time to the N-LUT, based on its inherent property of shift-invariance. That is, motion vectors of the 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point of the proposed method were found to be reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared with those of the conventional N-LUT and the temporal redundancy-based N-LUT (TR-N-LUT) methods.

  2. Osmium Atoms and Os2 Molecules Move Faster on Selenium-Doped Compared to Sulfur-Doped Boronic Graphenic Surfaces

    PubMed Central

    2015-01-01

    We deposited Os atoms on S- and Se-doped boronic graphenic surfaces by electron bombardment of micelles containing 16e complexes [Os(p-cymene)(1,2-dicarba-closo-dodecarborane-1,2-diselenate/dithiolate)] encapsulated in a triblock copolymer. The surfaces were characterized by energy-dispersive X-ray (EDX) analysis and electron energy loss spectroscopy of energy filtered TEM (EFTEM). Os atoms moved ca. 26× faster on the B/Se surface compared to the B/S surface (233 ± 34 pm·s–1 versus 8.9 ± 1.9 pm·s–1). Os atoms formed dimers with an average Os–Os distance of 0.284 ± 0.077 nm on the B/Se surface and 0.243 ± 0.059 nm on B/S, close to that in metallic Os. The Os2 molecules moved 0.83× and 0.65× more slowly than single Os atoms on B/S and B/Se surfaces, respectively, and again markedly faster (ca. 20×) on the B/Se surface (151 ± 45 pm·s–1 versus 7.4 ± 2.8 pm·s–1). Os atom motion did not follow Brownian motion and appears to involve anchoring sites, probably S and Se atoms. The ability to control the atomic motion of metal atoms and molecules on surfaces has potential for exploitation in nanodevices of the future. PMID:26525180

  3. A reconstruction method of intra-ventricular blood flow using color flow ultrasound: a simulation study

    NASA Astrophysics Data System (ADS)

    Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Choi, Jung-il; Lee, Changhoon; Seo, Jin Keun

    2015-03-01

    A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color Doppler echocardiography measurements. From the 3D incompressible Navier-Stokes equation, a 2D incompressible Navier-Stokes equation with a mass source term is derived to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. To demonstrate the feasibility of the proposed method, we performed numerical simulations of the forward problem and a numerical analysis of the reconstruction method. First, we constructed a 3D moving LV region having a specific stroke volume. To obtain synthetic intra-ventricular flows, we performed a numerical simulation of the forward problem of the Navier-Stokes equation inside the 3D moving LV, computed 3D intra-ventricular velocity fields as a solution of the forward problem, projected the 3D velocity fields onto the imaging plane, and took the inner product of the 2D velocity fields on the imaging plane with the scanline directional velocity fields to obtain a synthetic scanline directional projected velocity at each position. The proposed method utilized the 2D synthetic projected velocity data for reconstructing LV blood flow. By computing the difference between the synthetic flow and reconstructed flow fields, we obtained averaged point-wise errors of 0.06 m/s and 0.02 m/s for the u- and v-components, respectively.

  4. A study of video frame rate on the perception of moving imagery detail

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    The rate at which each frame of color moving video imagery is displayed was varied in small steps to determine the minimal acceptable frame rate for life scientists viewing white rats within a small enclosure. Two 25-second-long scenes (slow and fast animal motions) were evaluated by nine NASA principal investigators and animal care technicians. The mean minimum acceptable frame rate across these subjects was 3.9 fps for both the slow- and fast-moving animal scenes. The highest single-trial frame rate averaged across all subjects was 6.2 fps for the slow scene and 4.8 fps for the fast scene. Further research is called for in which frame rate, image size, and color/gray-scale depth are covaried during the same observation period.

  5. The Dynamics and Correlates of Religious Service Attendance in Adolescence

    PubMed Central

    Hardie, Jessica Halliday; Pearce, Lisa D.; Denton, Melinda Lundquist

    2013-01-01

    This study examines changes in religious service attendance over time for a contemporary cohort of adolescents moving from middle to late adolescence. We use two waves of a nationally representative panel survey of youth from the National Study of Youth and Religion (NSYR) to examine the dynamics of religious involvement during adolescence. We then follow with an analysis of how demographic characteristics, family background, and life course transitions relate to changes in religious service attendance during adolescence. Our findings suggest that, on average, adolescent religious service attendance declines over time, related to major life course transitions such as becoming employed, leaving home, and initiating sexual activity. Parents’ affiliation and attendance, on the other hand, are protective factors against decreasing attendance. PMID:26900186

  6. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown-mixture analysis capability as a result of their greater spectral information content compared with two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
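    A minimal sketch of a two-dimensional nearest-neighbor weighted moving average applied by convolution is given below; the 3x3 kernel weights are illustrative and are not the convolute integer tables of the paper.

```python
# Sketch: a two-dimensional weighted nearest-neighbour moving average applied
# as a convolution, in the spirit of extending Savitzky-Golay-style smoothing
# to two dimensions (illustrative kernel and data).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(4)
x, y = np.meshgrid(np.linspace(-3, 3, 64), np.linspace(-3, 3, 64))
signal = np.exp(-(x**2 + y**2))                 # smooth 2-D peak
noisy = signal + rng.normal(0, 0.05, signal.shape)

kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float)
kernel /= kernel.sum()                          # normalise so the mean is preserved

smoothed = convolve2d(noisy, kernel, mode="same", boundary="symm")
print("smoothed error:", np.abs(smoothed - signal).mean(),
      "noisy error:", np.abs(noisy - signal).mean())
```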

  7. Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions

    NASA Astrophysics Data System (ADS)

    Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard

    2017-12-01

    Real-time monitoring of engineering structures in case of an emergency or disaster requires the collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a probable rescue action. One of the more significant methods for evaluating large sets of data, collected either during a specified interval of time or permanently, is time series analysis. This paper presents a search algorithm for those time series elements which deviate from their values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. In the algorithm, the mathematical formulae used provide maximal sensitivity to detect even minimal changes in the object's behavior. Sensitivity analyses were conducted for the moving average algorithm as well as for the Douglas-Peucker algorithm used in the generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations carried out and the verification against laboratory survey data showed that the approach provides sufficient sensitivity for automatic real-time analysis of large amounts of data obtained from different and various sensors (total stations, levelling, cameras, radar).
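    A simplified sketch of moving-average-based outlier flagging of the kind described above follows; the window length, threshold, and synthetic readings are assumptions for illustration only.

```python
# Sketch: flagging observations that deviate from a running average, in the
# spirit of the real-time outlier search described above.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
readings = pd.Series(10 + 0.01 * np.arange(500) + rng.normal(0, 0.05, 500))
readings.iloc[250] += 1.0                      # inject an anomalous jump

window = 20
baseline = readings.rolling(window, min_periods=window).mean().shift(1)
deviation = (readings - baseline).abs()
threshold = 4 * readings.rolling(window).std().shift(1)

outliers = readings.index[deviation > threshold]
print("flagged indices:", list(outliers))
```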

  8. REVIEW ARTICLE: Hither and yon: a review of bi-directional microtubule-based transport

    NASA Astrophysics Data System (ADS)

    Gross, Steven P.

    2004-06-01

    Active transport is critical for cellular organization and function, and impaired transport has been linked to diseases such as neuronal degeneration. Much long-distance transport in cells uses opposite-polarity molecular motors of the kinesin and dynein families to move cargos along microtubules. It is increasingly clear that many cargos are moved by both sets of motors, and frequently reverse course. This review compares this bi-directional transport to the better-studied uni-directional transport. It discusses some bi-directionally moving cargos, and critically evaluates three different physical models for how such transport might occur. It then considers the evidence for the number of active motors per cargo, and how the net or average direction of transport might be controlled. The likelihood of a complex linking the activities of kinesin and dynein is also discussed. The paper concludes by reviewing elements of apparent universality between different bi-directionally moving cargos and by briefly considering possible reasons for the existence of bi-directional transport.

  9. Random walk of passive tracers among randomly moving obstacles.

    PubMed

    Gori, Matteo; Donato, Irene; Floriani, Elena; Nardecchia, Ilaria; Pettini, Marco

    2016-04-14

    This study is mainly motivated by the need of understanding how the diffusion behavior of a biomolecule (or even of a larger object) is affected by other moving macromolecules, organelles, and so on, inside a living cell, whence the possibility of understanding whether or not a randomly walking biomolecule is also subject to a long-range force field driving it to its target. By means of the Continuous Time Random Walk (CTRW) technique the topic of random walk in random environment is here considered in the case of a passively diffusing particle among randomly moving and interacting obstacles. The relevant physical quantity which is worked out is the diffusion coefficient of the passive tracer which is computed as a function of the average inter-obstacles distance. The results reported here suggest that if a biomolecule, let us call it a test molecule, moves towards its target in the presence of other independently interacting molecules, its motion can be considerably slowed down.

  10. On the Trend of the Annual Mean, Maximum, and Minimum Temperature and the Diurnal Temperature Range in the Armagh Observatory, Northern Ireland, Dataset, 1844 -2012

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2013-01-01

    Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadal- and SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.
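    A short sketch of computing 10-year moving averages and decadal averages from an annual series is given below; the temperature values are synthetic placeholders for the Armagh record.

```python
# Sketch: 10-year moving averages and decadal averages of an annual series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
years = np.arange(1844, 2013)
annual_mean = 9.0 + 0.005 * (years - years[0]) + rng.normal(0, 0.4, len(years))
temps = pd.Series(annual_mean, index=years)

moving10 = temps.rolling(window=10, center=True).mean()   # 10-year moving average
decadal = temps.groupby((temps.index // 10) * 10).mean()  # averages by decade
print(moving10.dropna().tail(3), decadal.tail(3), sep="\n")
```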

  11. Computational Characterization of Type I collagen-based Extra-cellular Matrix

    NASA Astrophysics Data System (ADS)

    Liang, Long; Jones, Christopher Allen Rucksack; Lin, Daniel; Jiao, Yang; Sun, Bo

    2015-03-01

    A model of the extracellular matrix (ECM) of collagen fibers has been built, in which cells can communicate with distant partners via fiber-mediated, long-range-transmitted stress states. The ECM is modeled as a spring-like fiber network derived from skeletonized confocal microscopy data. Different local and global perturbations were applied to the network, each followed by an optimized global Monte Carlo (MC) energy minimization yielding the deformed network in response to the perturbation. In the optimization, a highly efficient local energy update procedure is employed and force-directed MC moves are used, which results in convergence to the energy minimum state 20 times faster than the commonly used random-displacement trial moves in MC. Further analysis and visualization of the distribution and correlation of the resulting force network reveal that local perturbations can give rise to global impacts: force chains form with a linear extent much greater than the characteristic length scale associated with the perturbation sites and the average fiber length. This behavior provides strong evidence for our hypothesis of fiber-mediated long-range force transmission in ECM networks and the resulting long-range cell-cell mechanical signaling. ASU Seed Grant.

  12. Moving force identification based on modified preconditioned conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy

    2018-06-01

    This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of a bridge deck caused by passing vehicles, which are known to be sensitive to the ill-posedness of the inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j are important factors influencing the identification accuracy and noise immunity of M-PCG. Compared with the conventional counterpart of SVD embedded in the time domain method (TDM) and the standard form of CG, the M-PCG with a proper regularization matrix has many advantages, such as better adaptability and more robustness to ill-posed problems. More importantly, it is shown that the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, and this apparently makes M-PCG a preferred choice for field MFI applications.

  13. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
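    A simplified sketch of the moving-sum-of-residuals idea follows; it simulates independent Gaussian residuals under the null and omits the parameter-estimation correction used in the paper, so it illustrates the flavor of the method rather than the method itself.

```python
# Sketch: comparing an observed moving-sum-of-residuals statistic with
# realizations simulated under an assumed (here, ordinary least squares
# with Gaussian errors) model. Purely illustrative.
import numpy as np

rng = np.random.default_rng(7)
n = 300
x = np.sort(rng.uniform(0, 10, n))
y = 1.0 + 0.5 * x + rng.normal(0, 1.0, n)          # data truly linear in x

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

def moving_sum(r, width=30):
    return np.convolve(r, np.ones(width), mode="valid")

observed = np.max(np.abs(moving_sum(resid)))
simulated = [np.max(np.abs(moving_sum(rng.normal(0, resid.std(), n))))
             for _ in range(500)]
p_value = np.mean(np.array(simulated) >= observed)
print(f"approximate p-value for lack of fit: {p_value:.3f}")
```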

  14. Interaction of aerodynamic noise with laminar boundary layers in supersonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Schopper, M. R.

    1984-01-01

    The interaction between incoming aerodynamic noise and the supersonic laminar boundary layer is studied. The noise field is modeled as a Mach wave radiation field consisting of discrete waves emanating from coherent turbulent entities moving downstream within the supersonic turbulent boundary layer. The individual disturbances are likened to miniature sonic booms, and the laminar boundary layer is strafed by the waves as the sources move downstream. The mean, autocorrelation, and power spectral density of the field are expressed in terms of the wave shapes and their average arrival rates. Some consideration is given to the possibly appreciable thickness of the weak shock fronts. The emphasis in the interaction analysis is on the behavior of the shocklets in the noise field. The shocklets are shown to be focused by the laminar boundary layer in its outer region. Borrowing wave propagation terminology, this region is termed the caustic region. Using scaling laws from sonic boom work, focus factors at the caustic are estimated to vary from 2 to 6 for incoming shocklet strengths of 1 to 0.01 percent of the free stream pressure level. The situation regarding experimental evidence of the caustic region is reviewed.

  15. Development of temporal modelling for forecasting and prediction of malaria infections using time-series and ARIMAX analyses: a case study in endemic districts of Bhutan.

    PubMed

    Wangdi, Kinley; Singhasivanon, Pratap; Silawan, Tassanee; Lawpoolsri, Saranath; White, Nicholas J; Kaewkungwal, Jaranit

    2010-09-03

    Malaria still remains a public health problem in some districts of Bhutan despite a marked reduction of cases in the last few years. To strengthen the country's prevention and control measures, this study was carried out to develop forecasting and prediction models of malaria incidence in the endemic districts of Bhutan using time series and ARIMAX analyses. The study was carried out retrospectively using the monthly malaria cases reported from the health centres to the Vector-borne Disease Control Programme (VDCP) and the meteorological data from the Meteorological Unit, Department of Energy, Ministry of Economic Affairs. Time series analysis was performed on monthly malaria cases, from 1994 to 2008, in seven malaria-endemic districts. Time series models derived from a multiplicative seasonal autoregressive integrated moving average (ARIMA) were used to identify the best model using data from 1994 to 2006. The best-fit model was selected for each individual district and for the overall endemic area, and the monthly cases from January to December of 2009 and 2010 were forecast. In developing the prediction model, the monthly reported malaria cases and the meteorological factors from 1996 to 2008 for the seven districts were analysed. ARIMAX modelling was employed to determine predictors of malaria in the subsequent month. It was found that the ARIMA (p, d, q)(P, D, Q)s model (p and P representing the autoregressive and seasonal autoregressive terms; d and D the non-seasonal and seasonal differencing; q and Q the moving average and seasonal moving average parameters, respectively; and s the length of the seasonal period) for the overall endemic districts was (2,1,1)(0,1,1)₁₂; the modelling data from each district revealed two most common ARIMA models, (2,1,1)(0,1,1)₁₂ and (1,1,1)(0,1,1)₁₂. The forecast monthly malaria cases from January to December varied from 15 to 82 cases in 2009 and from 67 to 149 cases in 2010, where the population in 2009 was 285,375 and the expected population in 2010 was 289,085. The ARIMAX models of monthly cases and climatic factors showed considerable variation among the districts. In general, the mean maximum temperature lagged by one month was a strong positive predictor of increased malaria cases for four districts. The monthly number of cases in the previous month was also a significant predictor in one district, whereas no variable could predict malaria cases for two districts. The ARIMA time-series models were useful in forecasting the number of cases in the endemic areas of Bhutan. There was no consistency in the predictors of malaria cases when using the ARIMAX model with selected lag times and climatic predictors. The ARIMA forecasting models could be employed for planning and managing the malaria prevention and control programme in Bhutan.
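    An ARIMAX-style fit with a lagged exogenous temperature predictor, as described in this record, can be sketched as follows; both series are synthetic placeholders for the VDCP case counts and meteorological data, and the model orders are illustrative.

```python
# Sketch: a seasonal ARIMA with an exogenous regressor (last month's maximum
# temperature) predicting this month's malaria cases. Synthetic data only.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(8)
months = pd.date_range("1996-01-01", periods=156, freq="MS")
max_temp = 22 + 6 * np.sin(2 * np.pi * (months.month - 4) / 12) + rng.normal(0, 1, 156)
cases = 40 + 3 * np.roll(max_temp, 1) + rng.normal(0, 8, 156)  # cases follow lagged temp

endog = pd.Series(cases, index=months)
exog = pd.DataFrame(
    {"max_temp_lag1": pd.Series(max_temp, index=months).shift(1).bfill()}
)

fit = SARIMAX(endog, exog=exog, order=(1, 0, 1),
              seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(fit.params["max_temp_lag1"])   # coefficient on lagged maximum temperature
```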

  16. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones

    PubMed Central

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-01-01

    Wi-Fi indoor positioning algorithms experience large positioning errors and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovations include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, and comprises three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment was carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move. PMID:26690447

  17. 77 FR 16566 - Submission for OMB Review, Comment Request, Proposed Collection: Let's Move Museums, Let's Move...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-21

    ... the Nation's museum, library, and information services. The policy research, analysis, and data..., Proposed Collection: Let's Move Museums, Let's Move Gardens AGENCY: Institute of Museum and Library..., comment request. SUMMARY: The Institute of Museum and Library Services announces that the following...

  18. Commercial vehicle fleet management and information systems. Phase 1 : interim report

    DOT National Transportation Integrated Search

    1998-01-01

    The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...

  19. NASA Tech Briefs, November 2012

    NASA Technical Reports Server (NTRS)

    2012-01-01

    The topics include: Visual System for Browsing, Analysis, and Retrieval of Data (ViSBARD); Time-Domain Terahertz Computed Axial Tomography NDE System; Adaptive Sampling of Time Series During Remote Exploration; A Tracking Sun Photometer Without Moving Parts; Surface Temperature Data Analysis; Modular, Autonomous Command and Data Handling Software with Built-In Simulation and Test; In-Situ Wire Damage Detection System; Amplifier Module for 260-GHz Band Using Quartz Waveguide Transitions; Wideband Agile Digital Microwave Radiometer; Buckyball Nucleation of HiPco Tubes; FACT, Mega-ROSA, SOLAROSA; An Integrated, Layered-Spinel Composite Cathode for Energy Storage Applications; Engineered Multifunctional Surfaces for Fluid Handling; Polyolefin-Based Aerogels; Adjusting Permittivity by Blending Varying Ratios of SWNTs; Gravity-Assist Mechanical Simulator for Outreach; Concept for Hydrogen-Impregnated Nanofiber/Photovoltaic Cargo Stowage System; DROP: Durable Reconnaissance and Observation Platform; Developing Physiologic Models for Emergency Medical Procedures Under Microgravity; Spectroscopic Chemical Analysis Methods and Apparatus; Low Average Sidelobe Slot Array Antennas for Radiometer Applications; Motion-Corrected 3D Sonic Anemometer for Tethersondes and Other Moving Platforms; Water Treatment Systems for Long Spaceflights; Microchip Non-Aqueous Capillary Electrophoresis (MicronNACE) Method to Analyze Long-Chain Primary Amines; Low-Cost Phased Array Antenna for Sounding Rockets, Missiles, and Expendable Launch Vehicles; Mars Science Laboratory Engineering Cameras; Seismic Imager Space Telescope; Estimating Sea Surface Salinity and Wind Using Combined Passive and Active L-Band Microwave Observations; A Posteriori Study of a DNS Database Describing Super critical Binary-Species Mixing; Scalable SCPPM Decoder; QuakeSim 2.0; HURON (HUman and Robotic Optimization Network) Multi-Agent Temporal Activity Planner/Scheduler; MPST Software: MoonKommand

  20. Time Series Analysis and Forecasting of Wastewater Inflow into Bandar Tun Razak Sewage Treatment Plant in Selangor, Malaysia

    NASA Astrophysics Data System (ADS)

    Abunama, Taher; Othman, Faridah

    2017-06-01

    Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee sufficient treatment of wastewater before it is discharged to the environment. The main objectives of this study are to statistically analyse and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years of weekly influent data (156 weeks) was conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) were tried to select the best-fitting model, which was then used to forecast the wastewater inflow rates. Linear regression analysis was applied to test the correlation between the observed and predicted influents. The ARIMA (3, 1, 3) model was selected, with the highest R-square and the lowest normalized Bayesian Information Criterion (BIC) value, and accordingly the wastewater inflow rates were forecast for an additional 52 weeks. The linear regression analysis between the observed and predicted values of the wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.
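    Selecting an ARIMA order by the lowest BIC, as described above, can be sketched as follows; the weekly inflow series and the candidate order grid are illustrative assumptions.

```python
# Sketch: choosing an ARIMA(p, d, q) order for a weekly inflow series by the
# lowest BIC. The data and candidate grid are synthetic and illustrative.
import itertools
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(9)
weeks = pd.date_range("2013-01-06", periods=156, freq="W")
inflow = pd.Series(5000 + np.cumsum(rng.normal(0, 40, 156)), index=weeks)

best = None
for p, d, q in itertools.product(range(4), range(2), range(4)):
    try:
        res = ARIMA(inflow, order=(p, d, q)).fit()
    except Exception:
        continue                      # skip orders that fail to converge
    if best is None or res.bic < best[1]:
        best = ((p, d, q), res.bic)

print("selected order:", best[0], "BIC:", round(best[1], 1))
```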

  1. Noise modeling and analysis of an IMU-based attitude sensor: improvement of performance by filtering and sensor fusion

    NASA Astrophysics Data System (ADS)

    K., Nirmal; A. G., Sreejith; Mathew, Joice; Sarpotdar, Mayuresh; Suresh, Ambily; Prakash, Ajin; Safonova, Margarita; Murthy, Jayant

    2016-07-01

    We describe the characterization and removal of noise present in the Inertial Measurement Unit (IMU) MPU-6050, which was initially used in an attitude sensor and later used in the development of a pointing system for small balloon-borne astronomical payloads. We found that the performance of the IMU degraded with time because of the accumulation of different errors. Using the Allan variance analysis method, we identified the different components of noise present in the IMU, and verified the results with power spectral density (PSD) analysis. We tried to remove the high-frequency noise using smoothing filters such as the moving average filter and the Savitzky-Golay (SG) filter. Even though we managed to filter some of the high-frequency noise, the performance of these filters was not satisfactory for our application. We found the distribution of the random noise present in the IMU using probability density analysis and identified that the noise in our IMU was white Gaussian in nature. Hence, we used a Kalman filter to remove the noise, which gave us good real-time performance.
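    The moving average and Savitzky-Golay smoothing filters mentioned in this record can be compared with a short sketch; the synthetic gyroscope-like signal and window settings are assumptions for illustration.

```python
# Sketch: smoothing a noisy gyroscope-like signal with a moving average and a
# Savitzky-Golay filter, and comparing their errors against the known signal.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(10)
t = np.linspace(0, 10, 2000)
true_rate = 0.5 * np.sin(0.8 * t)                      # slowly varying rotation rate
measured = true_rate + rng.normal(0, 0.05, t.size)     # white Gaussian sensor noise

window = 51
moving_avg = np.convolve(measured, np.ones(window) / window, mode="same")
sg = savgol_filter(measured, window_length=window, polyorder=3)

for name, est in [("moving average", moving_avg), ("Savitzky-Golay", sg)]:
    rmse = np.sqrt(np.mean((est - true_rate) ** 2))
    print(f"{name:16s} RMSE: {rmse:.4f}")
```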

  2. Statistical analysis of dynamic fibrils observed from NST/BBSO observations

    NASA Astrophysics Data System (ADS)

    Gopalan Priya, Thambaje; Su, Jiang-Tao; Chen, Jie; Deng, Yuan-Yong; Prasad Choudhury, Debi

    2018-02-01

    We present the results obtained from the analysis of dynamic fibrils in NOAA active region (AR) 12132, using high-resolution Hα observations from the New Solar Telescope operating at Big Bear Solar Observatory. The dynamic fibrils are seen to be moving up and down, and most of these dynamic fibrils are periodic and have a jet-like appearance. We found from our observations that the fibrils follow almost perfect parabolic paths in many cases. A statistical analysis of the properties of the parabolic paths, covering the deceleration, maximum velocity, duration and kinetic energy of these fibrils, is presented here. We found the average maximum velocity to be around 15 km s−1 and the mean deceleration to be around 100 m s−2. The observed deceleration appears to be a fraction of the gravity of the Sun and is not compatible with the path of ballistic motion under the Sun's gravity. We found a positive correlation between deceleration and maximum velocity. This correlation is consistent with simulations done earlier on magnetoacoustic shock waves propagating upward.

  3. CMS distributed data analysis with CRAB3

    DOE PAGES

    Mascheroni, M.; Balcas, J.; Belforte, S.; ...

    2015-12-23

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. Furthermore, the new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.

  4. Amyloplast sedimentation dynamics in maize columella cells support a new model for the gravity-sensing apparatus of roots

    NASA Technical Reports Server (NTRS)

    Yoder, T. L.; Zheng, H. Q.; Todd, P.; Staehelin, L. A.

    2001-01-01

    Quantitative analysis of statolith sedimentation behavior was accomplished using videomicroscopy of living columella cells of corn (Zea mays) roots, which displayed no systematic cytoplasmic streaming. Following 90 degrees rotation of the root, the statoliths moved downward along the distal wall and then spread out along the bottom with an average velocity of 1.7 microm min(-1). When statolith trajectories traversed the complete width or length of the cell, they initially moved horizontally toward channel-initiation sites and then moved vertically through the channels to the lower side of the reoriented cell where they again dispersed. These statoliths exhibited a significantly lower average velocity than those sedimenting on distal-to-side trajectories. In addition, although statoliths undergoing distal-to-side sedimentation began at their highest velocity and slowed monotonically as they approached the lower cell membrane, statoliths crossing the cell's central region remained slow initially and accelerated to maximum speed once they reached a channel. The statoliths accelerated sooner, and the channeling effect was less pronounced in roots treated with cytochalasin D. Parallel ultrastructural studies of high-pressure frozen-freeze-substituted columella cells suggest that the low-resistance statolith pathway in the cell periphery corresponds to the sharp interface between the endoplasmic reticulum (ER)-rich cortical and the ER-devoid central region of these cells. The central region is also shown to contain an actin-based cytoskeletal network in which the individual, straight, actin-like filaments are randomly distributed. To explain these findings as well as the results of physical simulation experiments, we have formulated a new, tensegrity-based model of gravity sensing in columella cells. This model envisages the cytoplasm as pervaded by an actin-based cytoskeletal network that is denser in the ER-devoid central region than in the ER-rich cell cortex and is linked to stretch receptors in the plasma membrane. Sedimenting statoliths are postulated to produce a directional signal by locally disrupting the network and thereby altering the balance of forces acting on the receptors in different plasma membrane regions.

  5. Amyloplast Sedimentation Dynamics in Maize Columella Cells Support a New Model for the Gravity-Sensing Apparatus of Roots1

    PubMed Central

    Yoder, Thomas L.; Zheng, Hui-qiong; Todd, Paul; Staehelin, L. Andrew

    2001-01-01

    Quantitative analysis of statolith sedimentation behavior was accomplished using videomicroscopy of living columella cells of corn (Zea mays) roots, which displayed no systematic cytoplasmic streaming. Following 90° rotation of the root, the statoliths moved downward along the distal wall and then spread out along the bottom with an average velocity of 1.7 μm min−1. When statolith trajectories traversed the complete width or length of the cell, they initially moved horizontally toward channel-initiation sites and then moved vertically through the channels to the lower side of the reoriented cell where they again dispersed. These statoliths exhibited a significantly lower average velocity than those sedimenting on distal-to-side trajectories. In addition, although statoliths undergoing distal-to-side sedimentation began at their highest velocity and slowed monotonically as they approached the lower cell membrane, statoliths crossing the cell's central region remained slow initially and accelerated to maximum speed once they reached a channel. The statoliths accelerated sooner, and the channeling effect was less pronounced in roots treated with cytochalasin D. Parallel ultrastructural studies of high-pressure frozen-freeze-substituted columella cells suggest that the low-resistance statolith pathway in the cell periphery corresponds to the sharp interface between the endoplasmic reticulum (ER)-rich cortical and the ER-devoid central region of these cells. The central region is also shown to contain an actin-based cytoskeletal network in which the individual, straight, actin-like filaments are randomly distributed. To explain these findings as well as the results of physical simulation experiments, we have formulated a new, tensegrity-based model of gravity sensing in columella cells. This model envisages the cytoplasm as pervaded by an actin-based cytoskeletal network that is denser in the ER-devoid central region than in the ER-rich cell cortex and is linked to stretch receptors in the plasma membrane. Sedimenting statoliths are postulated to produce a directional signal by locally disrupting the network and thereby altering the balance of forces acting on the receptors in different plasma membrane regions. PMID:11161060

  6. Amyloplast sedimentation dynamics in maize columella cells support a new model for the gravity-sensing apparatus of roots.

    PubMed

    Yoder, T L; Zheng, H Q; Todd, P; Staehelin, L A

    2001-02-01

    Quantitative analysis of statolith sedimentation behavior was accomplished using videomicroscopy of living columella cells of corn (Zea mays) roots, which displayed no systematic cytoplasmic streaming. Following 90 degrees rotation of the root, the statoliths moved downward along the distal wall and then spread out along the bottom with an average velocity of 1.7 microm min(-1). When statolith trajectories traversed the complete width or length of the cell, they initially moved horizontally toward channel-initiation sites and then moved vertically through the channels to the lower side of the reoriented cell where they again dispersed. These statoliths exhibited a significantly lower average velocity than those sedimenting on distal-to-side trajectories. In addition, although statoliths undergoing distal-to-side sedimentation began at their highest velocity and slowed monotonically as they approached the lower cell membrane, statoliths crossing the cell's central region remained slow initially and accelerated to maximum speed once they reached a channel. The statoliths accelerated sooner, and the channeling effect was less pronounced in roots treated with cytochalasin D. Parallel ultrastructural studies of high-pressure frozen-freeze-substituted columella cells suggest that the low-resistance statolith pathway in the cell periphery corresponds to the sharp interface between the endoplasmic reticulum (ER)-rich cortical and the ER-devoid central region of these cells. The central region is also shown to contain an actin-based cytoskeletal network in which the individual, straight, actin-like filaments are randomly distributed. To explain these findings as well as the results of physical simulation experiments, we have formulated a new, tensegrity-based model of gravity sensing in columella cells. This model envisages the cytoplasm as pervaded by an actin-based cytoskeletal network that is denser in the ER-devoid central region than in the ER-rich cell cortex and is linked to stretch receptors in the plasma membrane. Sedimenting statoliths are postulated to produce a directional signal by locally disrupting the network and thereby altering the balance of forces acting on the receptors in different plasma membrane regions.

  7. MO-FG-CAMPUS-TeP3-01: A Model of Baseline Shift to Improve Robustness of Proton Therapy Treatments of Moving Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souris, K; Barragan Montero, A; Di Perri, D

    Purpose: The shift in mean position of a moving tumor, also known as “baseline shift”, has been modeled in order to automatically generate uncertainty scenarios for the assessment and robust optimization of proton therapy treatments in lung cancer. Methods: An average CT scan and a Mid-Position CT scan (MidPCT) of the patient at the planning time are first generated from 4D-CT data. The mean position of the tumor along the breathing cycle is represented by the GTV contour in the MidPCT. Several studies reported both systematic and random variations of the mean tumor position from fraction to fraction. Our model can simulate this baseline shift by generating a local deformation field that moves the tumor on all phases of the 4D-CT without creating any non-physical artifact. The deformation field comprises normal and tangential components with respect to the lung wall in order to allow the tumor to slip within the lung instead of deforming the lung surface. The deformation field is eventually smoothed in order to enforce its continuity. Two 4D-CT series acquired 1 week apart were used to validate the model. Results: Based on the first 4D-CT set, the model was able to generate a third 4D-CT that reproduced the 5.8 mm baseline shift measured in the second 4D-CT. The water equivalent thickness (WET) of the voxels was computed for the 3 average CTs. The root mean square deviation of the WET in the GTV is 0.34 mm between week 1 and week 2, and 0.08 mm between the simulated data and week 2. Conclusion: Our model can be used to automatically generate uncertainty scenarios for robustness analysis of a proton therapy plan. The generated scenarios can also feed a TPS equipped with a robust optimizer. Kevin Souris, Ana Barragan, and Dario Di Perri are financially supported by Televie Grants from F.R.S.-FNRS.

  8. Chess-playing epilepsy: a case report with video-EEG and back averaging.

    PubMed

    Mann, M W; Gueguen, B; Guillou, S; Debrand, E; Soufflet, C

    2004-12-01

    A patient suffering from juvenile myoclonic epilepsy experienced myoclonic jerks fairly regularly while playing chess. The myoclonus appeared particularly when he had to plan his strategy, choose between two solutions, or raise the arm to move a chess piece. Video-EEG-polygraphy was performed, with back averaging of the myoclonus registered during a chess match and during neuropsychological testing with Kohs cubes. The EEG spike-wave complexes were localised in the fronto-central region. [Published with video sequences].

  9. NGEE Arctic Plant Traits: Shrub Transects, Kougarok Road Mile Marker 64, Seward Peninsula, Alaska, 2016

    DOE Data Explorer

    Verity Salmon; Colleen Iversen; Peter Thornton; Ma

    2017-03-01

    Transect data are from point-centered quarter surveys for shrub density performed in July 2016 on the Kougarok hillslope located at Kougarok Road, Mile Marker 64. For each sample point along the transects, moving averages for shrub density and shrub basal area are provided along with GPS coordinates, average shrub height and active layer depth. The individual height, basal area, and species of surveyed shrubs are also included. Data upload will be completed January 2017.
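
    The moving averages referenced above can be reproduced with a short script; the sketch below, which uses hypothetical point IDs and density values rather than the actual dataset columns, smooths shrub density along the transect with a 3-point centered window.

```python
# Minimal sketch: moving average of shrub density along transect sample points.
# Column names (point_id, density_stems_m2) are hypothetical, not from the dataset.
import pandas as pd

transect = pd.DataFrame({
    "point_id": range(1, 11),
    "density_stems_m2": [0.8, 1.1, 0.9, 1.6, 2.0, 1.7, 1.2, 0.6, 0.4, 0.5],
})

# 3-point centered moving average smooths point-to-point sampling noise
transect["density_ma3"] = (
    transect["density_stems_m2"].rolling(window=3, center=True, min_periods=1).mean()
)
print(transect)
```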

  10. A Posteriori Quantification of Rate-Controlling Effects from High-Intensity Turbulence-Flame Interactions Using 4D Measurements

    DTIC Science & Technology

    2016-11-22

    ...compact at all conditions tested, as indicated by the overlap of OH and CH2O distributions. 5. We developed analytical techniques for pseudo-Lagrangian ...condition in a constant density flow requires that the flow divergence is zero, ∇ · u = 0. Three smoothing schemes were examined, a moving average (i.e

  11. Acute effects of PM2.5 on lung function parameters in schoolchildren in Nanjing, China: a panel study.

    PubMed

    Xu, Dandan; Zhang, Yi; Zhou, Lian; Li, Tiantian

    2018-03-17

    The association between exposure to ambient particulate matter (PM) and reduced lung function parameters has been reported in many studies. However, few studies have been conducted in developing countries with high levels of air pollution like China, and little attention has been paid to the acute effects of short-term exposure to air pollution on lung function. The study design consisted of a panel comprising 86 children from the same school in Nanjing, China. Four measurements of lung function were performed. A mixed-effects regression model with study participant as a random effect was used to investigate the relationship between PM2.5 and lung function. An increase in the current-day, 1-day and 2-day moving average PM2.5 concentration was associated with decreases in lung function indicators. The greatest effect of PM2.5 on lung function was detected at the 1-day moving average PM2.5 exposure. An increase of 10 μg/m3 in the 1-day moving average PM2.5 concentration was associated with a 23.22 mL decrease (95% CI: 13.19, 33.25) in Forced Vital Capacity (FVC), an 18.93 mL decrease (95% CI: 9.34, 28.52) in 1-s Forced Expiratory Volume (FEV1), a 29.38 mL/s decrease (95% CI: -0.40, 59.15) in Peak Expiratory Flow (PEF), and a 27.21 mL/s decrease (95% CI: 8.38, 46.04) in forced expiratory flow 25-75% (FEF25-75%). The effects of PM2.5 on lung function showed significant lag effects. After an air pollution event, the health effects persist for several days, and attention to health protection is still needed.
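
    As a rough illustration of this design, the sketch below builds a 1-day moving-average PM2.5 exposure with pandas and fits a random-intercept mixed-effects model with statsmodels; all data, column names, and coefficients are synthetic and only stand in for the study's panel.

```python
# Sketch (hypothetical data and column names): 1-day moving-average PM2.5 exposure
# related to FVC with a random intercept per child, in the spirit of the panel design above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
days = pd.date_range("2017-03-01", periods=60, freq="D")
pm25 = pd.Series(60 + 20 * rng.standard_normal(len(days)), index=days)
pm25_ma1 = pm25.rolling(window=2).mean()   # average of current and previous day

# Four repeated lung-function tests per child on hypothetical measurement dates
exam_dates = days[[10, 25, 40, 55]]
records = []
for child in range(30):
    base = 2400 + 300 * rng.standard_normal()
    for d in exam_dates:
        records.append({
            "child_id": child,
            "fvc_ml": base - 2.3 * pm25_ma1[d] + 30 * rng.standard_normal(),
            "pm25_ma1": pm25_ma1[d],
        })
df = pd.DataFrame(records)

# Random intercept per participant, as in a longitudinal panel design
model = smf.mixedlm("fvc_ml ~ pm25_ma1", data=df, groups=df["child_id"]).fit()
print(model.summary())
```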

  12. Short-Term Exposure to Air Pollution and Biomarkers of Oxidative Stress: The Framingham Heart Study.

    PubMed

    Li, Wenyuan; Wilker, Elissa H; Dorans, Kirsten S; Rice, Mary B; Schwartz, Joel; Coull, Brent A; Koutrakis, Petros; Gold, Diane R; Keaney, John F; Lin, Honghuang; Vasan, Ramachandran S; Benjamin, Emelia J; Mittleman, Murray A

    2016-04-28

    Short-term exposure to elevated air pollution has been associated with higher risk of acute cardiovascular diseases, with systemic oxidative stress induced by air pollution hypothesized as an important underlying mechanism. However, few community-based studies have assessed this association. Two thousand thirty-five Framingham Offspring Cohort participants living within 50 km of the Harvard Boston Supersite who were not current smokers were included. We assessed circulating biomarkers of oxidative stress including blood myeloperoxidase at the seventh examination (1998-2001) and urinary creatinine-indexed 8-epi-prostaglandin F2α (8-epi-PGF2α) at the seventh and eighth (2005-2008) examinations. We measured fine particulate matter (PM2.5), black carbon, sulfate, nitrogen oxides, and ozone at the Supersite and calculated 1-, 2-, 3-, 5-, and 7-day moving averages of each pollutant. Measured myeloperoxidase and 8-epi-PGF2α were loge transformed. We used linear regression models and linear mixed-effects models with random intercepts for myeloperoxidase and indexed 8-epi-PGF2α, respectively. Models were adjusted for demographic variables, individual- and area-level measures of socioeconomic position, clinical and lifestyle factors, weather, and temporal trend. We found positive associations of PM2.5 and black carbon with myeloperoxidase across multiple moving averages. Additionally, 2- to 7-day moving averages of PM2.5 and sulfate were consistently positively associated with 8-epi-PGF2α. Stronger positive associations of black carbon and sulfate with myeloperoxidase were observed among participants with diabetes than in those without. Our community-based investigation supports an association of select markers of ambient air pollution with circulating biomarkers of oxidative stress. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  13. Models for short term malaria prediction in Sri Lanka

    PubMed Central

    Briët, Olivier JT; Vounatsou, Penelope; Gunawardena, Dissanayake M; Galappaththy, Gawrie NL; Amerasinghe, Priyanie H

    2008-01-01

    Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall were assessed for their ability to improve prediction of selected (seasonal) ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed. PMID:18460204
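
    A minimal sketch of the kind of covariate-augmented seasonal ARIMA compared in the study is shown below, using statsmodels' SARIMAX with synthetic monthly case counts and rainfall; the model order, data, and forecast horizon are illustrative assumptions, not the paper's fitted models.

```python
# Sketch only: a seasonal ARIMA with rainfall as an exogenous covariate, forecasting
# one to four months ahead, loosely mirroring the district-level comparison above.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
months = pd.period_range("1995-01", periods=120, freq="M")
rainfall = pd.Series(
    100 + 50 * np.sin(2 * np.pi * np.arange(120) / 12) + 10 * rng.standard_normal(120),
    index=months,
)
cases = pd.Series(200 + 0.5 * rainfall.values + 20 * rng.standard_normal(120), index=months)

train, test = cases[:-4], cases[-4:]
model = SARIMAX(train, exog=rainfall[:-4],
                order=(1, 0, 1), seasonal_order=(1, 0, 0, 12)).fit(disp=False)

# Forecast 1-4 months ahead using (here, assumed known) future rainfall
forecast = model.forecast(steps=4, exog=rainfall[-4:])
mape = (abs(forecast - test) / test).mean() * 100
print(forecast.round(1))
print(f"MAPE over the 4-month horizon: {mape:.1f}%")
```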

  14. Evaluating and improving count-based population inference: A case study from 31 years of monitoring Sandhill Cranes

    USGS Publications Warehouse

    Gerber, Brian D.; Kendall, William L.

    2017-01-01

    Monitoring animal populations can be difficult. Limited resources often force monitoring programs to rely on unadjusted or smoothed counts as an index of abundance. Smoothing counts is commonly done using a moving-average estimator to dampen sampling variation. These indices are commonly used to inform management decisions, although their reliability is often unknown. We outline a process to evaluate the biological plausibility of annual changes in population counts and indices from a typical monitoring scenario and compare results with a hierarchical Bayesian time series (HBTS) model. We evaluated spring and fall counts, fall indices, and model-based predictions for the Rocky Mountain population (RMP) of Sandhill Cranes (Antigone canadensis) by integrating juvenile recruitment, harvest, and survival into a stochastic stage-based population model. We used simulation to evaluate population indices from the HBTS model and the commonly used 3-yr moving average estimator. We found counts of the RMP to exhibit biologically unrealistic annual change, while the fall population index was largely biologically realistic. HBTS model predictions suggested that the RMP changed little over 31 yr of monitoring, but the pattern depended on assumptions about the observational process. The HBTS model fall population predictions were biologically plausible if observed crane harvest mortality was compensatory up to natural mortality, as empirical evidence suggests. Simulations indicated that the predicted mean of the HBTS model was generally a more reliable estimate of the true population than population indices derived using a moving 3-yr average estimator. Practitioners could gain considerable advantages from modeling population counts using a hierarchical Bayesian autoregressive approach. Advantages would include: (1) obtaining measures of uncertainty; (2) incorporating direct knowledge of the observational and population processes; (3) accommodating missing years of data; and (4) forecasting population size.
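
    For reference, the commonly used index discussed above, a 3-year moving average of fall counts, can be computed as in the sketch below; the counts shown are hypothetical.

```python
# Illustrative only: the 3-year moving-average index of fall counts that the study
# compares against a hierarchical Bayesian time-series model.
import pandas as pd

fall_counts = pd.Series(
    [19500, 21000, 17800, 20400, 22300, 18900, 20100],   # hypothetical counts
    index=range(1995, 2002), name="fall_count",
)

# Trailing 3-year moving average dampens sampling variation in the raw counts
index_3yr = fall_counts.rolling(window=3).mean().rename("ma3")
print(pd.concat([fall_counts, index_3yr], axis=1))
```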

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coruh, M; Ewell, L; Demez, N

    Purpose: To estimate the dose delivered to a moving lung tumor by proton therapy beams of different modulation types, and compare with Monte Carlo predictions. Methods: A Radiology Support Devices (RSD) phantom was irradiated with therapeutic proton radiation beams using two different types of modulation: uniform scanning (US) and double scattered (DS). The Eclipse© dose plan was designed to deliver 1.00 Gy to the isocenter of a static ∼3×3×3 cm (27 cc) tumor in the phantom with 100% coverage. The peak-to-peak amplitude of tumor motion varied from 0.0 to 2.5 cm. The radiation dose was measured with an ion chamber (CC-13) located within the tumor. The time required to deliver the radiation dose varied from an average of 65 s for the DS beams to an average of 95 s for the US beams. Results: The radiation dose varied from 100% (both US and DS) for the static tumor down to approximately 92% for the moving tumor. The ratio of US dose to DS dose ranged from approximately 1.01 for the static tumor down to 0.99 for the 2.5 cm moving tumor. A Monte Carlo simulation using TOPAS included a lung tumor with 4.0 cm of peak-to-peak motion. In this simulation, the dose received by the tumor varied by ∼40% as the period of this motion varied from 1 s to 4 s. Conclusion: The radiation dose deposited to a moving tumor was less than for a static tumor, as expected. At large (2.5 cm) amplitudes, the DS proton beams gave a dose closer to the desired dose than the US beams, but equal within experimental uncertainty. TOPAS Monte Carlo simulation can give insight into the moving tumor-dose relationship. This work was supported in part by the Philips Corporation.

  16. MOVES-Matrix and distributed computing for microscale line source dispersion analysis.

    PubMed

    Liu, Haobing; Xu, Xiaodan; Rodgers, Michael O; Xu, Yanzhi Ann; Guensler, Randall L

    2017-07-01

    MOVES and AERMOD are the U.S. Environmental Protection Agency's recommended models for use in project-level transportation conformity and hot-spot analysis. However, the structure and algorithms involved in running MOVES make analyses cumbersome and time-consuming. Likewise, the modeling setup process in AERMOD, including extensive data requirements and required input formats, leads to a high potential for analysis error in dispersion modeling. This study presents a distributed computing method for line source dispersion modeling that integrates MOVES-Matrix, a high-performance emission modeling tool, with the microscale dispersion models CALINE4 and AERMOD. MOVES-Matrix was prepared by iteratively running MOVES across all possible combinations of vehicle source type, fuel, operating conditions, and environmental parameters to create a large multi-dimensional emission rate lookup matrix. AERMOD and CALINE4 are connected with MOVES-Matrix in a distributed computing cluster using a series of Python scripts. This streamlined system built on MOVES-Matrix generates exactly the same emission rates and concentration results as using MOVES with AERMOD and CALINE4, but the approach is more than 200 times faster than using the MOVES graphical user interface. Because AERMOD requires detailed meteorological input, which is difficult to obtain, this study also recommends using CALINE4 as a screening tool for identifying areas that may exceed air quality standards before using AERMOD (and for identifying areas that are exceedingly unlikely to exceed air quality standards). The CALINE4 worst-case method yields consistently higher concentration results than AERMOD for all comparisons in this paper, as expected given the nature of the meteorological data employed. The paper demonstrates a distributed computing method for line source dispersion modeling that integrates MOVES-Matrix with CALINE4 and AERMOD. This streamlined system generates exactly the same emission rates and concentration results as the traditional approach of using MOVES with AERMOD and CALINE4, which are regulatory models approved by the U.S. EPA for conformity analysis, but it is more than 200 times faster than implementing the MOVES model. We highlight the potentially significant benefit of using CALINE4 as a screening tool to identify areas that may exceed air quality standards before using AERMOD, which requires much more meteorological input than CALINE4.
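
    The core idea of a pre-computed emission-rate lookup can be illustrated as below; the bin definitions, source types, and rates are invented for the sketch and do not reflect the actual MOVES-Matrix dimensions or file format.

```python
# Conceptual sketch (not the actual MOVES-Matrix format): a pre-computed emission-rate
# lookup array indexed by discretized operating conditions, so a dispersion run can
# fetch rates without re-running the emission model.
import numpy as np

speed_bins = np.arange(0, 80, 5)          # mph
temp_bins = np.arange(-10, 45, 5)         # deg C
source_types = ["passenger_car", "transit_bus", "combination_truck"]

# Hypothetical rate matrix: g/mile for each (source type, speed bin, temperature bin)
rng = np.random.default_rng(2)
rate_matrix = rng.uniform(0.05, 2.0, size=(len(source_types), len(speed_bins), len(temp_bins)))

def lookup_rate(source_type: str, speed_mph: float, temp_c: float) -> float:
    """Return the pre-computed emission rate for the matching bins."""
    i = source_types.index(source_type)
    j = np.clip(np.searchsorted(speed_bins, speed_mph) - 1, 0, len(speed_bins) - 1)
    k = np.clip(np.searchsorted(temp_bins, temp_c) - 1, 0, len(temp_bins) - 1)
    return float(rate_matrix[i, j, k])

print(lookup_rate("transit_bus", speed_mph=32.0, temp_c=18.0))
```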

  17. Consistent and efficient processing of ADCP streamflow measurements

    USGS Publications Warehouse

    Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. A consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements that are independent of the ADCP manufacturer are being developed in a software program that can process ADCP moving-boat discharge measurements regardless of the ADCP used to collect the data.

  18. A filament of energetic particles near the high-latitude dawn magnetopause

    NASA Technical Reports Server (NTRS)

    Lui, A. T. Y.; Williams, D. J.; Mcentire, R. W.; Christon, S. P.; Jacquey, C.; Angelopoulos, V.; Yamamoto, T.; Kokubun, S.; Frank, L. A.; Ackerson, K. L.

    1994-01-01

    The Geotail satellite detected a filament of tailward-streaming energetic particles spatially separated from the boundary layer of energetic particles at the high-latitude dawn magnetopause at a downstream distance of approximately 80 R(sub E) on October 27, 1992. During this event, the composition and charge states of energetic ions at energies above approximately 10 keV show significant intermixing of ions from solar wind and ionospheric sources. Detailed analysis leads to the deduction that the filament was moving southward towards the neutral sheet at an average speed of approximately 80 km/s, implying an average duskward electric field of approximately 1 mV/m. Its north-south dimension was approximately 1 R(sub E) and it was associated with an earthward directed field-aligned current of approximately 5 mA/m. The filament was separated from the energetic particle boundary layer straddling the magnetopause by approximately 0.8 R(sub E) and was inferred to have detached from the boundary layer at a downstream distance beyond approximately 70 R(sub E) in the distant tail.

  19. The challenge of distinguishing figure from ground: reaction to Gelso's work on the real relationship.

    PubMed

    McCullough, Leigh

    2009-05-01

    The motives of the beginning psychotherapist for choosing his or her orientation are an underresearched issue in psychotherapy training. This study focuses on the role of personality-based factors, specifically the epistemological preferences of the therapist that Kolb (1984) has termed "learning style" (LS). The aim of the present study was to explore possible associations between psychology students' developing LSs and their choice of psychotherapeutic orientation (psychodynamic [PDT] vs. cognitive-behavioural [CBT]). Students in a psychologist's program (N = 175) took the Learning Style Inventory in their third semester and, before their formal choice, in their seventh semester. Besides a common trend toward radicalization or purification of their LS, the average PDT student tended to stick to the "feel and watch" style from the third semester to the seventh, whereas the CBT student tended to move toward "think and do." A cluster analysis revealed that the average movement among the CBT students was the result of the forces in two different subgroups, one toward "think" (and, more weakly, "watch"), the other toward "do" (and, more weakly, "feel").

  20. Estimating Perturbation and Meta-Stability in the Daily Attendance Rates of Six Small High Schools

    NASA Astrophysics Data System (ADS)

    Koopmans, Matthijs

    This paper discusses the daily attendance rates in six small high schools over a ten-year period and evaluates how stable those rates are. “Stability” is approached from two vantage points: pulse models are fitted to estimate the impact of sudden perturbations and their reverberation through the series, and Autoregressive Fractionally Integrated Moving Average (ARFIMA) techniques are used to detect dependencies over the long range of the series. The analyses are meant to (1) exemplify the utility of time series approaches in educational research, which lacks a time series tradition, (2) discuss some time series features that seem to be particular to daily attendance rate trajectories such as the distinct downward pull coming from extreme observations, and (3) present an analytical approach to handle the important yet distinct patterns of variability that can be found in these data. The analysis also illustrates why the assumption of stability that underlies the habitual reporting of weekly, monthly and yearly averages in the educational literature is questionable, as it reveals dynamical processes (perturbation, meta-stability) that remain hidden in such summaries.
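
    The long-memory component that distinguishes ARFIMA from ARIMA is the fractional differencing operator (1 - B)^d; a minimal sketch of that step, applied to a synthetic attendance-rate series, is shown below.

```python
# Sketch of the fractional-differencing step behind ARFIMA's long-memory behaviour:
# the weights follow the recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.
import numpy as np

def frac_diff(series: np.ndarray, d: float, n_weights: int = 100) -> np.ndarray:
    """Apply (1 - B)^d to a series using a truncated weight expansion."""
    w = np.zeros(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    out = np.full(len(series), np.nan)
    for t in range(n_weights - 1, len(series)):
        # dot product of weights with the most recent n_weights observations
        out[t] = np.dot(w, series[t::-1][:n_weights])
    return out

rng = np.random.default_rng(3)
attendance = 0.92 + 0.03 * rng.standard_normal(500)   # hypothetical daily rates
filtered = frac_diff(attendance, d=0.3)
print(np.nanstd(filtered))
```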

  1. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.

  2. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses.

    PubMed

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
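
    For readers unfamiliar with the estimators being analyzed, the sketch below implements a minimal first-order DFA (integration, windowed polynomial detrending, and the root-mean-square fluctuation function); it is a plain textbook version, not the analytical frequency-response machinery developed in the paper.

```python
# Minimal DFA sketch (first-order detrending), illustrating the fluctuation function
# whose frequency-domain characteristics the paper derives analytically.
import numpy as np

def dfa(x: np.ndarray, scales: np.ndarray, order: int = 1) -> np.ndarray:
    """Return the root-mean-square fluctuation F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))          # integrated (profile) series
    fluctuations = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, order)   # local polynomial trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

rng = np.random.default_rng(4)
x = rng.standard_normal(4096)              # white noise: expected exponent ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"estimated scaling exponent: {alpha:.2f}")
```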

  3. 76 FR 71080 - Notice of Proposed Information Collection Requests: Let's Move Museums, Let's Move Gardens

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ..., library, and information services. The policy research, analysis, and data collection is used to: Identify... Requests: Let's Move Museums, Let's Move Gardens AGENCY: Institute of Museum and Library Services, National.... SUMMARY: The Institute of Museum and Library Services (IMLS), as part of its continuing effort to reduce...

  4. Integrating WEPP into the WEPS infrastructure

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) and the Water Erosion Prediction Project (WEPP) share a common modeling philosophy, that of moving away from primarily empirically based models based on indices or "average conditions", and toward a more process based approach which can be evaluated using ac...

  5. Model Identification of Integrated ARMA Processes

    ERIC Educational Resources Information Center

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…

  6. Integration of social information by human groups

    PubMed Central

    Granovskiy, Boris; Gold, Jason M.; Sumpter, David; Goldstone, Robert L.

    2015-01-01

    We consider a situation in which individuals search for accurate decisions without direct feedback on their accuracy but with information about the decisions made by peers in their group. The “wisdom of crowds” hypothesis states that the average judgment of many individuals can give a good estimate of, for example, the outcomes of sporting events and the answers to trivia questions. Two conditions for the application of wisdom of crowds are that estimates should be independent and unbiased. Here, we study how individuals integrate social information when answering trivia questions with answers that range between 0 and 100% (e.g., ‘What percentage of Americans are left-handed?’). We find that, consistent with the wisdom of crowds hypothesis, average performance improves with group size. However, individuals show a consistent bias to produce estimates that are insufficiently extreme. We find that social information provides significant, albeit small, improvement to group performance. Outliers with answers far from the correct answer move towards the position of the group mean. Given that these outliers also tend to be nearer to 50% than do the answers of other group members, this move creates group polarization away from 50%. By looking at individual performance over different questions we find that some people are more likely to be affected by social influence than others. There is also evidence that people differ in their competence in answering questions, but lack of competence is not significantly correlated with willingness to change guesses. We develop a mathematical model based on these results that postulates a cognitive process in which people first decide whether to take into account peer guesses, and if so, to move in the direction of these guesses. The size of the move is proportional to the distance between their own guess and the average guess of the group. This model closely approximates the distribution of guess movements and shows how outlying incorrect opinions can be systematically removed from a group resulting, in some situations, in improved group performance. However, improvement is only predicted for cases in which the initial guesses of individuals in the group are biased. PMID:26189568

  7. Integration of Social Information by Human Groups.

    PubMed

    Granovskiy, Boris; Gold, Jason M; Sumpter, David J T; Goldstone, Robert L

    2015-07-01

    We consider a situation in which individuals search for accurate decisions without direct feedback on their accuracy, but with information about the decisions made by peers in their group. The "wisdom of crowds" hypothesis states that the average judgment of many individuals can give a good estimate of, for example, the outcomes of sporting events and the answers to trivia questions. Two conditions for the application of wisdom of crowds are that estimates should be independent and unbiased. Here, we study how individuals integrate social information when answering trivia questions with answers that range between 0% and 100% (e.g., "What percentage of Americans are left-handed?"). We find that, consistent with the wisdom of crowds hypothesis, average performance improves with group size. However, individuals show a consistent bias to produce estimates that are insufficiently extreme. We find that social information provides significant, albeit small, improvement to group performance. Outliers with answers far from the correct answer move toward the position of the group mean. Given that these outliers also tend to be nearer to 50% than do the answers of other group members, this move creates group polarization away from 50%. By looking at individual performance over different questions we find that some people are more likely to be affected by social influence than others. There is also evidence that people differ in their competence in answering questions, but lack of competence is not significantly correlated with willingness to change guesses. We develop a mathematical model based on these results that postulates a cognitive process in which people first decide whether to take into account peer guesses, and if so, to move in the direction of these guesses. The size of the move is proportional to the distance between their own guess and the average guess of the group. This model closely approximates the distribution of guess movements and shows how outlying incorrect opinions can be systematically removed from a group resulting, in some situations, in improved group performance. However, improvement is only predicted for cases in which the initial guesses of individuals in the group are biased. Copyright © 2015 Cognitive Science Society, Inc.
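
    A toy simulation of the postulated updating rule is sketched below; the adoption probability and step size are illustrative parameters, not estimates from the study.

```python
# Toy simulation of the updating rule described above: each individual either keeps
# their guess or moves toward the group mean by an amount proportional to the gap.
import numpy as np

rng = np.random.default_rng(5)
truth = 11.0                                # e.g. % of Americans who are left-handed
guesses = np.clip(truth + 15 * rng.standard_normal(50), 0, 100)

adopt = rng.random(guesses.size) < 0.6      # who takes the social information into account
step = 0.5                                  # proportion of the gap moved
group_mean = guesses.mean()
updated = np.where(adopt, guesses + step * (group_mean - guesses), guesses)

print(f"spread of guesses before: {guesses.std():.1f}, after: {updated.std():.1f}")
print(f"error of the group mean before: {abs(guesses.mean() - truth):.2f}, "
      f"after: {abs(updated.mean() - truth):.2f}")
```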

  8. Psychometric Evaluation of Lexical Diversity Indices: Assessing Length Effects.

    PubMed

    Fergadiotis, Gerasimos; Wright, Heather Harris; Green, Samuel B

    2015-06-01

    Several novel techniques have been developed recently to assess the breadth of a speaker's vocabulary exhibited in a language sample. The specific aim of this study was to increase our understanding of the validity of the scores generated by different lexical diversity (LD) estimation techniques. Four techniques were explored: D, Maas, measure of textual lexical diversity, and moving-average type-token ratio. Four LD indices were estimated for language samples on 4 discourse tasks (procedures, eventcasts, story retell, and recounts) from 442 adults who are neurologically intact. The resulting data were analyzed using structural equation modeling. The scores for measure of textual lexical diversity and moving-average type-token ratio were stronger indicators of the LD of the language samples. The results for the other 2 techniques were consistent with the presence of method factors representing construct-irrelevant sources. These findings offer a deeper understanding of the relative validity of the 4 estimation techniques and should assist clinicians and researchers in the selection of LD measures of language samples that minimize construct-irrelevant sources.
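
    Of the four techniques, the moving-average type-token ratio is the simplest to state in code; the sketch below is a minimal version using an illustrative window length.

```python
# Minimal moving-average type-token ratio (MATTR) sketch: the type-token ratio is
# computed in a sliding window of fixed length and averaged over all window positions.
def mattr(tokens: list[str], window: int = 50) -> float:
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens)   # fall back to plain TTR
    ttrs = [
        len(set(tokens[i:i + window])) / window
        for i in range(len(tokens) - window + 1)
    ]
    return sum(ttrs) / len(ttrs)

sample = ("the cat sat on the mat and then the cat ran to the door " * 20).split()
print(round(mattr(sample, window=25), 3))
```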

  9. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.

    2014-09-12

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and from insufficient data and strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using an Autoregressive Integrated Moving Average (ARIMA) approach; such a model is applicable even when data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
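
    A minimal sketch of the modeling step described above, fitting an ARIMA(6,1,0) to a synthetic monthly series with statsmodels and reporting an in-sample RMSE, is shown below; the data are invented, so the numbers will not reproduce the paper's 0.0952.

```python
# Sketch with synthetic data: ARIMA(6,1,0) fit to a monthly waste-generation series,
# in-sample RMSE, and a 12-month forecast for service planning.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
months = pd.period_range("2005-01", periods=96, freq="M")
waste = pd.Series(np.cumsum(0.02 + 0.05 * rng.standard_normal(96)) + 1.0, index=months)

fit = ARIMA(waste, order=(6, 1, 0)).fit()
rmse = np.sqrt(np.mean(fit.resid[1:] ** 2))   # skip the first differenced point
print(f"in-sample RMSE: {rmse:.4f}")
print(fit.forecast(steps=12).round(3))        # next year's monthly plan figures
```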

  10. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network

    PubMed Central

    Yu, Ying; Wang, Yirui; Tang, Zheng

    2017-01-01

    With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate in forecasting methods. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend and then train the residual data with the dendritic neural network model and make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also apply it to the data that other authors used with other models and compare the results. These comparisons also show that the SA-D model achieves good predictive performance in terms of the normalized mean square error, absolute percentage error, and correlation coefficient. PMID:28246527
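
    The hybrid structure can be sketched as below: a seasonal ARIMA captures the linear and seasonal part, and a small neural network trained on lagged residuals supplies a nonlinear correction. A generic multilayer perceptron stands in for the dendritic neural network, and all data and model orders are illustrative.

```python
# Hedged sketch of the hybrid idea: seasonal ARIMA for trend/seasonality, a plain MLP
# (standing in for the dendritic network) for the residuals; the two forecasts are summed.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
t = np.arange(120)
arrivals = 100 + 0.5 * t + 20 * np.sin(2 * np.pi * t / 12) + 5 * rng.standard_normal(120)
series = pd.Series(arrivals, index=pd.period_range("2005-01", periods=120, freq="M"))

sarima = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 0, 12)).fit(disp=False)
resid = sarima.resid.to_numpy()

# Lag matrix of residuals for one-step-ahead residual prediction
lags = 12
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
y = resid[lags:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

linear_part = float(sarima.forecast(steps=1).iloc[0])
nonlinear_part = float(nn.predict(resid[-lags:].reshape(1, -1))[0])
print(f"hybrid one-step forecast: {linear_part + nonlinear_part:.1f}")
```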

  11. The Use of an Autoregressive Integrated Moving Average Model for Prediction of the Incidence of Dysentery in Jiangsu, China.

    PubMed

    Wang, Kewei; Song, Wentao; Li, Jinping; Lu, Wu; Yu, Jiangang; Han, Xiaofeng

    2016-05-01

    The aim of this study is to forecast the incidence of bacillary dysentery with a prediction model. We collected the annual and monthly laboratory data of confirmed cases from January 2004 to December 2014. In this study, we applied an autoregressive integrated moving average (ARIMA) model to forecast bacillary dysentery incidence in Jiangsu, China. The ARIMA (1, 1, 1) × (1, 1, 2)12 model fitted the number of cases during January 2004 to December 2014 well. The fitted model was then used to predict bacillary dysentery incidence during the period January to August 2015, and the observed number of cases during January-August 2015 fell within the model's confidence interval for the predicted counts. This study shows that the ARIMA model fits the fluctuations in bacillary dysentery frequency, and it can be used for future forecasting when applied to bacillary dysentery prevention and control. © 2016 APJPH.

  12. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    NASA Astrophysics Data System (ADS)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-09-01

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and from insufficient data and strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using an Autoregressive Integrated Moving Average (ARIMA) approach; such a model is applicable even when data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  13. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.

    PubMed

    Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng

    2017-01-01

    With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate in forecasting methods. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend and then train the residual data with the dendritic neural network model and make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also apply it to the data that other authors used with other models and compare the results. These comparisons also show that the SA-D model achieves good predictive performance in terms of the normalized mean square error, absolute percentage error, and correlation coefficient.

  14. An Improved Harmonic Current Detection Method Based on Parallel Active Power Filter

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiwu; Xie, Yunxiang; Wang, Yingpin; Guan, Yuanpeng; Li, Lanfang; Zhang, Xiaoyu

    2017-05-01

    Harmonic detection technology plays an important role in the application of active power filters. The accuracy and real-time performance of harmonic detection are preconditions for ensuring the compensation performance of an Active Power Filter (APF). This paper proposes an improved instantaneous reactive power harmonic current detection algorithm. The algorithm uses an improved ip-iq method combined with a moving-average filter. The proposed ip-iq algorithm removes the αβ and dq coordinate transformations, decreasing the computational cost, simplifying the extraction of the fundamental components of the load currents, and improving the detection speed. The traditional low-pass filter is replaced by a moving-average filter, which detects the harmonic currents more precisely and quickly. Compared with the traditional algorithm, the THD (Total Harmonic Distortion) of the grid currents is reduced from 4.41% to 3.89% in the simulations and from 8.50% to 4.37% in the experiments after the improvement. The results show the proposed algorithm is more accurate and efficient.
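
    The sketch below illustrates the general synchronous-reference-frame idea with a one-cycle moving-average filter in place of the low-pass filter; it is not the paper's improved ip-iq formulation (which avoids the explicit coordinate transformations), and the waveforms and gains are invented for the example.

```python
# Simplified numerical sketch: rotate the load currents into the synchronous (d-q)
# frame, keep the DC component with a one-cycle moving-average filter, reconstruct the
# fundamental, and take the remainder as the harmonic (compensation) reference.
import numpy as np

fs, f1 = 10_000, 50                       # sample rate, fundamental frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)
theta = 2 * np.pi * f1 * t

# Three-phase load current: fundamental plus a 5th-harmonic distortion
def phase(shift):
    return 10 * np.sin(theta + shift) + 2 * np.sin(5 * (theta + shift))
ia, ib, ic = phase(0), phase(-2 * np.pi / 3), phase(2 * np.pi / 3)

# Park transform (amplitude-invariant form) to the rotating d-q frame
i_d = (2 / 3) * (ia * np.sin(theta) + ib * np.sin(theta - 2 * np.pi / 3)
                 + ic * np.sin(theta + 2 * np.pi / 3))
i_q = (2 / 3) * (ia * np.cos(theta) + ib * np.cos(theta - 2 * np.pi / 3)
                 + ic * np.cos(theta + 2 * np.pi / 3))

# Moving-average filter over exactly one fundamental cycle extracts the DC part
N = fs // f1
kernel = np.ones(N) / N
i_d_dc = np.convolve(i_d, kernel, mode="same")
i_q_dc = np.convolve(i_q, kernel, mode="same")

# Reconstruct the fundamental of phase a and form the harmonic reference
ia_fund = i_d_dc * np.sin(theta) + i_q_dc * np.cos(theta)
ia_harm_ref = ia - ia_fund
print(f"peak of extracted harmonic reference: {np.max(np.abs(ia_harm_ref[N:-N])):.2f} A")
```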

  15. Prognostics of slurry pumps based on a moving-average wear degradation index and a general sequential Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2015-05-01

    Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
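
    The smoothing idea behind the degradation index can be sketched as below; a simple linear extrapolation to an alert threshold stands in for the paper's state-space model with sequential Monte Carlo, and the wear feature, window, and threshold are hypothetical.

```python
# Sketch of the smoothing step only: a noisy wear indicator per inspection is converted
# to a moving-average degradation index, and a crude remaining-useful-life estimate is
# read off where an extrapolated linear trend crosses an alert threshold.
import numpy as np

rng = np.random.default_rng(8)
inspections = np.arange(60)
raw_index = 0.02 * inspections + 0.1 * rng.standard_normal(60)   # noisy wear feature

window = 5
deg_index = np.convolve(raw_index, np.ones(window) / window, mode="valid")

# Crude RUL estimate: linear fit to the smoothed index, extrapolated to the threshold
threshold = 1.5
slope, intercept = np.polyfit(np.arange(len(deg_index)), deg_index, 1)
crossing = (threshold - intercept) / slope
rul = crossing - (len(deg_index) - 1)
print(f"estimated remaining useful life: {rul:.1f} inspections")
```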

  16. Non-equilibrium steady states in the Klein-Gordon theory

    NASA Astrophysics Data System (ADS)

    Doyon, Benjamin; Lucas, Andrew; Schalm, Koenraad; Bhaseen, M. J.

    2015-03-01

    We construct non-equilibrium steady states in the Klein-Gordon theory in arbitrary space dimension d following a local quench. We consider the approach where two independently thermalized semi-infinite systems, with temperatures T_L and T_R, are connected along a (d-1)-dimensional hypersurface. A current-carrying steady state, described by thermally distributed modes with temperatures T_L and T_R for left and right-moving modes, respectively, emerges at late times. The non-equilibrium density matrix is the exponential of a non-local conserved charge. We obtain exact results for the average energy current and the complete distribution of energy current fluctuations. The latter shows that the long-time energy transfer can be described by a continuum of independent Poisson processes, for which we provide the exact weights. We further describe the full time evolution of local observables following the quench. Averages of generic local observables, including the stress-energy tensor, approach the steady state with a power-law in time, where the exponent depends on the initial conditions at the connection hypersurface. We describe boundary conditions and special operators for which the steady state is reached instantaneously on the connection hypersurface. A semiclassical analysis of freely propagating modes yields the average energy current at large distances and late times. We conclude by comparing and contrasting our findings with results for interacting theories and provide an estimate for the timescale governing the crossover to hydrodynamics. As a modification of our Klein-Gordon analysis we also include exact results for free Dirac fermions.

  17. Battle Damage Modeling

    DTIC Science & Technology

    2010-05-01

    has been an increasing move towards armor systems which are both structural and protection components at the same time. Analysis of material response...the materials can move. As the FE analysis progresses the component will move while the mesh remains motionless (Figure 4). Individual nodes and cells...this parameter. This subroutine needs many inputs, such as the speed of sound in the material, the FE mesh size and the safety factor, which prevents

  18. Improved ICU design reduces acquisition of antibiotic-resistant bacteria: a quasi-experimental observational study

    PubMed Central

    2011-01-01

    Introduction The role of ICU design and particularly single-patient rooms in decreasing bacterial transmission between ICU patients has been debated. A recent change in our ICU allowed further investigation. Methods Pre-move ICU-A and pre-move ICU-B were open-plan units. In March 2007, ICU-A moved to single-patient rooms (post-move ICU-A). ICU-B remained unchanged (post-move ICU-B). The same physicians cover both ICUs. Cultures of specified resistant organisms in surveillance or clinical cultures from consecutive patients staying >48 hours were compared for the different ICUs and periods to assess the effect of ICU design on acquisition of resistant organisms. Results Data were collected for 62, 62, 44 and 39 patients from pre-move ICU-A, post-move ICU-A, pre-move ICU-B and post-move ICU-B, respectively. Fewer post-move ICU-A patients acquired resistant organisms (3/62, 5%) compared with post-move ICU-B patients (7/39, 18%; P = 0.043, P = 0.011 using survival analysis) or pre-move ICU-A patients (14/62, 23%; P = 0.004, P = 0.012 on survival analysis). Only the admission period was significant for acquisition of resistant organisms comparing pre-move ICU-A with post-move ICU-A (hazard ratio = 5.18, 95% confidence interval = 1.03 to 16.06; P = 0.025). More antibiotic-free days were recorded in post-move ICU-A (median = 3, interquartile range = 0 to 5) versus post-move ICU-B (median = 0, interquartile range = 0 to 4; P = 0.070) or pre-move ICU-A (median = 0, interquartile range = 0 to 4; P = 0.017). Adequate hand hygiene was observed on 140/242 (58%) occasions in post-move ICU-A versus 23/66 (35%) occasions in post-move ICU-B (P < 0.001). Conclusions Improved ICU design, and particularly use of single-patient rooms, decreases acquisition of resistant bacteria and antibiotic use. This observation should be considered in future ICU design. PMID:21914222

  19. Improved ICU design reduces acquisition of antibiotic-resistant bacteria: a quasi-experimental observational study.

    PubMed

    Levin, Phillip D; Golovanevski, Mila; Moses, Allon E; Sprung, Charles L; Benenson, Shmuel

    2011-01-01

    The role of ICU design and particularly single-patient rooms in decreasing bacterial transmission between ICU patients has been debated. A recent change in our ICU allowed further investigation. Pre-move ICU-A and pre-move ICU-B were open-plan units. In March 2007, ICU-A moved to single-patient rooms (post-move ICU-A). ICU-B remained unchanged (post-move ICU-B). The same physicians cover both ICUs. Cultures of specified resistant organisms in surveillance or clinical cultures from consecutive patients staying >48 hours were compared for the different ICUs and periods to assess the effect of ICU design on acquisition of resistant organisms. Data were collected for 62, 62, 44 and 39 patients from pre-move ICU-A, post-move ICU-A, pre-move ICU-B and post-move ICU-B, respectively. Fewer post-move ICU-A patients acquired resistant organisms (3/62, 5%) compared with post-move ICU-B patients (7/39, 18%; P = 0.043, P = 0.011 using survival analysis) or pre-move ICU-A patients (14/62, 23%; P = 0.004, P = 0.012 on survival analysis). Only the admission period was significant for acquisition of resistant organisms comparing pre-move ICU-A with post-move ICU-A (hazard ratio = 5.18, 95% confidence interval = 1.03 to 16.06; P = 0.025). More antibiotic-free days were recorded in post-move ICU-A (median = 3, interquartile range = 0 to 5) versus post-move ICU-B (median = 0, interquartile range = 0 to 4; P = 0.070) or pre-move ICU-A (median = 0, interquartile range = 0 to 4; P = 0.017). Adequate hand hygiene was observed on 140/242 (58%) occasions in post-move ICU-A versus 23/66 (35%) occasions in post-move ICU-B (P < 0.001). Improved ICU design, and particularly use of single-patient rooms, decreases acquisition of resistant bacteria and antibiotic use. This observation should be considered in future ICU design.

  20. Acoustic power of a moving point source in a moving medium

    NASA Technical Reports Server (NTRS)

    Cole, J. E., III; Sarris, I. I.

    1976-01-01

    The acoustic power output of a moving point-mass source in an acoustic medium which is in uniform motion and infinite in extent is examined. The acoustic medium is considered to be a homogeneous fluid having both zero viscosity and zero thermal conductivity. Two expressions for the acoustic power output are obtained, each based on a different definition cited in the literature for the average energy-flux vector in an acoustic medium in uniform motion. The acoustic power output of the source is found by integrating the component of the acoustic intensity vector in the radial direction over the surface of an infinitely long cylinder which is within the medium and encloses the line of motion of the source. One of the power expressions is found to give unreasonable results even though the flow is uniform.

  1. Human speed perception is contrast dependent

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Thompson, Peter

    1992-01-01

    When two parallel gratings moving at the same speed are presented simultaneously, the lower-contrast grating appears slower. This misperception is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g. a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). On average, a 70 percent contrast grating must be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, the effect is largely independent of the absolute contrast level and is a quasi-linear function of log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, relative orientation is important. Finally, the misperception of relative speed appears lessened when the stimuli to be matched are presented sequentially.

  2. Performance evaluation of ionospheric time delay forecasting models using GPS observations at a low-latitude station

    NASA Astrophysics Data System (ADS)

    Sivavaraprasad, G.; Venkata Ratnam, D.

    2017-07-01

    Ionospheric delay is one of the major atmospheric effects on the performance of satellite-based radio navigation systems. It limits the accuracy and availability of Global Positioning System (GPS) measurements, related to critical societal and safety applications. The temporal and spatial gradients of ionospheric total electron content (TEC) are driven by several a priori unknown geophysical conditions and solar-terrestrial phenomena. The prediction of ionospheric delay is therefore challenging, especially over the Indian subcontinent, and an appropriate short/long-term ionospheric delay forecasting model is necessary. Hence, the intent of this paper is to forecast ionospheric delays by considering day-to-day, monthly and seasonal ionospheric TEC variations. GPS-TEC data (January 2013-December 2013) are extracted from a multi-frequency GPS receiver established at K L University, Vaddeswaram, Guntur station (geographic: 16.37°N, 80.37°E; geomagnetic: 7.44°N, 153.75°E), India. An evaluation, in terms of forecasting capability, of three ionospheric time delay models - an Auto Regressive Moving Average (ARMA) model, an Auto Regressive Integrated Moving Average (ARIMA) model, and a Holt-Winters model - is presented. The performance of these models is evaluated through error measurement analysis during both geomagnetically quiet and disturbed days. It is found that the ARMA model effectively forecasts the ionospheric delay with an accuracy of 82-94%, which is 10% better than the ARIMA and Holt-Winters models. Moreover, the modeled VTEC derived from the International Reference Ionosphere (IRI-2012) model and the new global TEC model, the Neustrelitz TEC Model (NTCM-GL), has been compared with the forecast VTEC values of the ARMA, ARIMA and Holt-Winters models during geomagnetically quiet days. The forecast results indicate that the ARMA model would be useful for setting up an early warning system for ionospheric disturbances at low-latitude regions.
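
    As a rough illustration of the model comparison, the sketch below fits a Holt-Winters model with a daily seasonal period and an ARMA model to a synthetic diurnal TEC-like series and compares one-day-ahead forecast errors; the data and model orders are assumptions, not the paper's GPS-TEC series.

```python
# Sketch comparing two of the candidate forecasters on a synthetic diurnal series:
# Holt-Winters with a 24-sample seasonal period versus a plain ARMA model.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(9)
hours = np.arange(24 * 30)                              # 30 days of hourly values
vtec = 20 + 15 * np.sin(2 * np.pi * hours / 24) ** 2 + rng.standard_normal(hours.size)
series = pd.Series(vtec)

train, test = series[:-24], series[-24:]
hw = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=24).fit()
arma = ARIMA(train, order=(2, 0, 1)).fit()

for name, fc in [("Holt-Winters", hw.forecast(24)), ("ARMA(2,1)", arma.forecast(24))]:
    mape = float((abs(fc.to_numpy() - test.to_numpy()) / test.to_numpy()).mean() * 100)
    print(f"{name}: MAPE over the next day = {mape:.1f}%")
```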

  3. Conventional and advanced time series estimation: application to the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database, 1993-2006.

    PubMed

    Moran, John L; Solomon, Patricia J

    2011-02-01

    Time series analysis has seen limited application in the biomedical literature. The utility of conventional and advanced time series estimators was explored for intensive care unit (ICU) outcome series. Monthly mean time series, 1993-2006, for hospital mortality, severity-of-illness score (APACHE III), ventilation fraction and patient type (medical and surgical), were generated from the Australian and New Zealand Intensive Care Society adult patient database. Analyses encompassed geographical seasonal mortality patterns, structural changes in the series over time, mortality series volatility using autoregressive moving average and Generalized Autoregressive Conditional Heteroscedasticity models in which predicted variances are updated adaptively, and bivariate and multivariate (vector error correction models) cointegrating relationships between series. The mortality series exhibited marked seasonality, a declining mortality trend and substantial autocorrelation beyond 24 lags. Mortality increased in winter months (July-August); the medical series featured annual cycling, whereas the surgical series demonstrated long and short (3-4 month) cycling. Structural breaks in the series were apparent in January 1995 and December 2002. The covariance-stationary first-differenced mortality series was consistent with a seasonal autoregressive moving average process; the observed conditional-variance volatility (1993-1995) and residual Autoregressive Conditional Heteroscedasticity effects entailed a Generalized Autoregressive Conditional Heteroscedasticity model, preferred by information criterion and mean-model forecast performance. Bivariate cointegration, indicating long-term equilibrium relationships, was established between mortality and severity-of-illness scores at the database level and for categories of ICUs. Multivariate cointegration was demonstrated for {log APACHE III score, log ICU length of stay, ICU mortality and ventilation fraction}. A system approach to understanding series time-dependence may be established using conventional and advanced econometric time series estimators. © 2010 Blackwell Publishing Ltd.

  4. Gesture Based Control and EMG Decomposition

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Chang, Mindy H.; Knuth, Kevin H.

    2005-01-01

    This paper presents two probabilistic developments for use with electromyograms (EMG). First described is a new electric interface for virtual device control based on gesture recognition. The second development is a Bayesian method for decomposing EMG into individual motor unit action potentials. This more complex technique will then allow for higher resolution in separating muscle groups for gesture recognition. All examples presented rely upon sampling EMG data from a subject's forearm. The gesture-based recognition uses pattern recognition software that has been trained to identify gestures from among a given set of gestures. The pattern recognition software consists of hidden Markov models which are used to recognize the gestures as they are being performed in real time from moving averages of EMG. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard. Moving averages of EMG do not provide easy distinction between fine muscle groups. To better distinguish between different fine motor skill muscle groups, we present a Bayesian algorithm to separate surface EMG into representative motor unit action potentials. The algorithm is based upon differential Variable Component Analysis (dVCA) [1], [2], which was originally developed for electroencephalograms. The algorithm uses a simple forward model representing a mixture of motor unit action potentials as seen across multiple channels. The parameters of this model are iteratively optimized for each component. Results are presented on both synthetic and experimental EMG data. The synthetic case has additive white noise and is compared with known components. The experimental EMG data were obtained using a custom linear electrode array designed for this study.

  5. Application of seasonal auto-regressive integrated moving average model in forecasting the incidence of hand-foot-mouth disease in Wuhan, China.

    PubMed

    Peng, Ying; Yu, Bin; Wang, Peng; Kong, De-Guang; Chen, Bang-Hua; Yang, Xiao-Bing

    2017-12-01

    Outbreaks of hand-foot-mouth disease (HFMD) have occurred many times and have caused a serious health burden in China since 2008. Application of modern information technology to prediction and early response can help make HFMD prevention and control more efficient. A seasonal autoregressive integrated moving average (ARIMA) model for time series analysis was designed in this study. Eighty-four months of retrospective data (January 2009 to December 2015) obtained from the Chinese Information System for Disease Prevention and Control were subjected to ARIMA modeling. The coefficient of determination (R²), normalized Bayesian Information Criterion (BIC) and the Ljung-Box Q-test P value were used to evaluate the goodness-of-fit of the constructed models. Subsequently, the best-fitted ARIMA model was applied to predict the expected incidence of HFMD from January 2016 to December 2016. The best-fitted seasonal ARIMA model was identified as (1,0,1)(0,1,1)_12, with the largest coefficient of determination (R² = 0.743) and the lowest normalized BIC (3.645). The residuals of the model also showed non-significant autocorrelations (Ljung-Box Q test, P = 0.299). The predictions by the optimum ARIMA model adequately captured the pattern in the data and exhibited two peaks of activity over the forecast interval: a major peak from April to June and a smaller peak from September to November. The ARIMA model proposed in this study can forecast the HFMD incidence trend effectively, which could provide useful support for future HFMD prevention and control in the study area. Further observations should be added continually to the modeling data set, and the parameters of the models should be adjusted accordingly.
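
    To make the reported specification concrete, the following is a minimal sketch of fitting a seasonal ARIMA(1,0,1)(0,1,1)_12 model to a monthly incidence series with statsmodels and forecasting twelve months ahead. The placeholder series `hfmd_monthly`, its synthetic values and the printed diagnostics are illustrative assumptions, not the study's data or code.

      # Sketch: seasonal ARIMA (1,0,1)(0,1,1)_12 fit and 12-month forecast.
      # `hfmd_monthly` is a placeholder monthly incidence series (2009-2015).
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(1)
      idx = pd.period_range("2009-01", "2015-12", freq="M")
      season = 500 + 400 * np.sin(2 * np.pi * (np.arange(len(idx)) % 12) / 12)
      hfmd_monthly = pd.Series(np.maximum(season + rng.normal(0, 80, len(idx)), 0), index=idx)

      model = SARIMAX(hfmd_monthly, order=(1, 0, 1), seasonal_order=(0, 1, 1, 12))
      fit = model.fit(disp=False)
      print(f"AIC = {fit.aic:.1f}, BIC = {fit.bic:.1f}")   # goodness-of-fit comparison
      forecast_2016 = fit.forecast(steps=12)               # expected incidence, Jan-Dec 2016
      print(forecast_2016.round(1))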

  6. Analysis of tipping-curve measurements performed at the DSS-13 beam-waveguide antenna at 32.0 and 8.45 GigaHertz

    NASA Technical Reports Server (NTRS)

    Morabito, D. D.; Skjerve, L.

    1995-01-01

    This article reports on the analysis of the Ka-band Antenna Performance Experiment tipping-curve data acquired at the DSS-13 research and development beam-waveguide (BWG) antenna. By measuring the operating system temperatures as the antenna is moved from zenith to low-elevation angles and fitting a model to the data, one can obtain information on how well the overall temperature model behaves at zenith and approximate the contribution due to the atmosphere. The atmospheric contribution estimated from the data can be expressed in the form of (1) atmospheric noise temperatures, which can provide weather statistics and be compared against those estimated from other methods, and (2) the atmospheric loss factor used to refer efficiency measurements to zero atmosphere. This article reports on an analysis performed on a set of 68 8.4-GHz and 67 32-GHz tipping-curve data sets acquired between December 1993 and May 1995 and compares the results with those inferred from a surface model using input meteorological data and from water vapor radiometer (WVR) data. The general results are that, for a selected subset of tip curves, (1) the BWG tipping-curve atmospheric temperatures are in good agreement with those determined from WVR data (the average difference is 0.06 +/- 0.64 K at 32 GHz) and (2) the surface model average values are biased 3.6 K below those of the BWG and WVR at 32 GHz.

  7. Structural Equation Modeling of Multivariate Time Series

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Browne, Michael W.

    2007-01-01

    The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…

  8. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  9. Operational Control Procedures for the Activated Sludge Process: Appendix.

    ERIC Educational Resources Information Center

    West, Alfred W.

    This document is the appendix for a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. Categories discussed include: control test data, trend charts, moving averages, semi-logarithmic plots, probability…

  10. Are U.S. Military Interventions Contagious over Time? Intervention Timing and Its Implications for Force Planning

    DTIC Science & Technology

    2013-01-01

    Table-of-contents fragments: 3.5. ARIMA Models, Temporal Clustering of Conflicts; 3.9. ARIMA Models. Abstract excerpt: … variance across a distribution. Autoregressive integrated moving average (ARIMA) models are used with time-series data sets and are designed to capture …

  11. Simulated lumped-parameter system reduced-order adaptive control studies

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.

    1981-01-01

    Two methods of interpreting the misbehavior of reduced order adaptive controllers are discussed. The first method is based on system input-output description and the second is based on state variable description. The implementation of the single input, single output, autoregressive, moving average system is considered.

  12. Geometric Theory of Moving Grid Wavefront Sensor

    DTIC Science & Technology

    1977-06-30

    Keywords: Adaptive Optics; Wavefront Sensor; Geometric Optics Analysis; Moving Ronchi Grid. Abstract excerpt: A geometric optics analysis is made for a wavefront sensor that uses a moving Ronchi grid. It is shown that by simple data … optical systems being considered or being developed for imaging an object through a turbulent atmosphere. Some of these use a wavefront sensor to …

  13. Geochemistry and geohydrology of the West Decker and Big Sky coal-mining areas, southeastern Montana

    USGS Publications Warehouse

    Davis, R.E.

    1984-01-01

    In the West Decker Mine area, water levels west of the mine at post-mining equilibrium may be almost 12 feet higher than pre-mining levels. Dissolved-solids concentration in water from coal aquifers is about 1,400 milligrams per liter and from mine spoils is about 2,500 milligrams per liter. About 13 years will be required for ground water moving at an average velocity of 2 feet per day to flow from the spoils to the Tongue River Reservoir. The increase in dissolved-solids load to the reservoir due to mining will be less than 1 percent. In the Big Sky Mine area, water levels at post-mining equilibrium will closely resemble pre-mining levels. Dissolved-solids concentration in water from coal aquifers is about 2,700 milligrams per liter and from spoils is about 3,700 milligrams per liter. About 36 to 60 years will be required for ground water moving at an average velocity of 1.2 feet per day to flow from the spoils to Rosebud Creek. The average annual increase in dissolved-solids load to the creek due to mining will be about 2 percent, although a greater increase probably will occur during summer months when flow in the creek is low. (USGS)

  14. Weather explains high annual variation in butterfly dispersal.

    PubMed

    Kuussaari, Mikko; Rytteri, Susu; Heikkinen, Risto K; Heliölä, Janne; von Bagh, Peter

    2016-07-27

    Weather conditions fundamentally affect the activity of short-lived insects. Annual variation in weather is therefore likely to be an important determinant of their between-year variation in dispersal, but conclusive empirical studies are lacking. We studied whether the annual variation of dispersal can be explained by the flight season's weather conditions in a Clouded Apollo (Parnassius mnemosyne) metapopulation. This metapopulation was monitored using the mark-release-recapture method for 12 years. Dispersal was quantified for each monitoring year using three complementary measures: emigration rate (fraction of individuals moving between habitat patches), average residence time in the natal patch, and average distance moved. There was much variation both in dispersal and average weather conditions among the years. Weather variables significantly affected the three measures of dispersal and together with adjusting variables explained 79-91% of the variation observed in dispersal. Different weather variables became selected in the models explaining variation in three dispersal measures apparently because of the notable intercorrelations. In general, dispersal rate increased with increasing temperature, solar radiation, proportion of especially warm days, and butterfly density, and decreased with increasing cloudiness, rainfall, and wind speed. These results help to understand and model annually varying dispersal dynamics of species affected by global warming. © 2016 The Author(s).

  15. Adult survival of Black-legged Kittiwakes Rissa tridactyla in a Pacific colony

    USGS Publications Warehouse

    Hatch, Scott A.; Roberts, Bay D.; Fadely, Brian S.

    1993-01-01

    Breeding Black-legged Kittiwakes Rissa tridactyla survived at a mean annual rate of 0.926 in four years at a colony in Alaska. Survival rates observed in sexed males (0.930) and females (0.937) did not differ significantly. The rate of return among nonbreeding Kittiwakes (0.839) was lower than that of known breeders, presumably because more nonbreeders moved away from the study plots where they were marked. Individual nonbreeders frequented sites up to 5 km apart on the same island, while a few established breeders moved up to 2.5 km between years. Mate retention in breeding Kittiwakes averaged 69% in three years. Among pairs that split, the cause of changing mates was about equally divided between death (46%) and divorce (54%). Average adult life expectancy was estimated at 13.0 years. Combined with annual productivity averaging 0.17 chick per nest, the observed survival was insufficient for maintaining population size. Rather, an irregular decline observed in the study colony since 1981 is consistent with the model of a closed population with little or no recruitment. Compared to their Atlantic counterparts, Pacific Kittiwakes have low productivity and high survival. The question arises whether differences reflect phenotypic plasticity or genetically determined variation in population parameters.

  16. Time resolved aerosol monitoring in the urban centre of Soweto

    NASA Astrophysics Data System (ADS)

    Formenti, P.; Annegarn, H. J.; Piketh, S. J.

    1998-03-01

    A programme of aerosol sampling was conducted from 1982 to 1984 in the urban area of Soweto, Johannesburg, South Africa. The particulate matter (aerodynamic diameter <15 μm) was collected using a single-stage streaker sampler with two-hour time resolution, and elemental concentrations were resolved via Particle Induced X-ray Emission (PIXE) analysis. Samples were selected for analysis from an aerosol sample archive to establish baseline atmospheric conditions that existed in Soweto prior to large-scale electrification, and to establish source apportionment of crustal elements between coal smoke and traffic-induced road dust, based on chemical elemental measurements. A novel technique is demonstrated for processing PIXE-derived time sequences of elemental concentration vectors. Slowly varying background components were extracted from the sulphur and crustal aerosol components using two alternative digital filters: a moving minimum and a moving average. The residuals of the crustal elements, assigned to locally generated aerosol components, were modelled using surrogate tracers: sulphur as a surrogate for coal smoke and Pb as a surrogate for traffic activity. Results from this source apportionment revealed that coal emissions contributed between 40% and 50% of the aerosol mineral matter, while 18-22% originated from road dust. Background aerosol, characteristic of the regional winter aerosol burden over the South African Highveld, was between 12% and 21%. Minor contributors identified included a manganese smelter, located 30 km from the sampling site, and informal trash burning as the source of intermittent heavy metals (Cu, Zn). Elemental source profiles derived for these various sources are presented.
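
    The background-extraction step described above can be illustrated with a short sketch: a rolling minimum or rolling mean is treated as the slowly varying background and subtracted to leave the locally generated residual. The window length, the centred windows and the placeholder series `concentration` are illustrative assumptions.

      # Sketch: separating a slowly varying background from a 2-hour elemental time series
      # using a moving minimum and, alternatively, a moving average.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(2)
      n = 12 * 30                                                        # ~30 days of 2-hour samples
      background = 5 + 2 * np.sin(2 * np.pi * np.arange(n) / (12 * 7))   # slow weekly drift
      local_peaks = rng.gamma(1.0, 2.0, n) * (rng.random(n) < 0.2)       # intermittent local sources
      concentration = pd.Series(background + local_peaks)

      window = 12                                                        # one day of samples
      bg_min = concentration.rolling(window, center=True, min_periods=1).min()
      bg_avg = concentration.rolling(window, center=True, min_periods=1).mean()

      residual_min = concentration - bg_min      # local component, moving-minimum background
      residual_avg = concentration - bg_avg      # local component, moving-average background
      print(residual_min.clip(lower=0).mean(), residual_avg.clip(lower=0).mean())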

  17. Underestimating the effects of spatial heterogeneity due to individual movement and spatial scale: infectious disease as an example

    USGS Publications Warehouse

    Cross, Paul C.; Caillaud, Damien; Heisey, Dennis M.

    2013-01-01

    Many ecological and epidemiological studies occur in systems with mobile individuals and heterogeneous landscapes. Using a simulation model, we show that the accuracy of inferring an underlying biological process from observational data depends on movement and spatial scale of the analysis. As an example, we focused on estimating the relationship between host density and pathogen transmission. Observational data can result in highly biased inference about the underlying process when individuals move among sampling areas. Even without sampling error, the effect of host density on disease transmission is underestimated by approximately 50 % when one in ten hosts move among sampling areas per lifetime. Aggregating data across larger regions causes minimal bias when host movement is low, and results in less biased inference when movement rates are high. However, increasing data aggregation reduces the observed spatial variation, which would lead to the misperception that a spatially targeted control effort may not be very effective. In addition, averaging over the local heterogeneity will result in underestimating the importance of spatial covariates. Minimizing the bias due to movement is not just about choosing the best spatial scale for analysis, but also about reducing the error associated with using the sampling location as a proxy for an individual’s spatial history. This error associated with the exposure covariate can be reduced by choosing sampling regions with less movement, including longitudinal information of individuals’ movements, or reducing the window of exposure by using repeated sampling or younger individuals.

  18. Performance Analysis of Inter-Domain Handoff Scheme Based on Virtual Layer in PMIPv6 Networks for IP-Based Internet of Things.

    PubMed

    Cho, Chulhee; Choi, Jae-Young; Jeong, Jongpil; Chung, Tai-Myoung

    2017-01-01

    Recently, the Internet of Things (IoT) has been introduced into medical services to provide global connection among patients, sensors, and nearby devices. The principal purpose of this global connection is to provide context awareness, bringing convenience to a patient's life and implementing clinical processes more effectively. In health care, monitoring of a patient's biosignals has to be performed continuously while the patient moves inside and outside the hospital. Also, to monitor the accurate location and biosignals of the patient, appropriate mobility management is necessary to maintain the connection between the patient and the hospital network. In this paper, a binding update scheme on PMIPv6, which reduces signal traffic during location updates by means of a Virtual LMA (VLMA) on top of the original Local Mobility Anchor (LMA) domain, is proposed to reduce the total cost. If a Mobile Node (MN) moves to a Mobile Access Gateway (MAG) located at the boundary of an adjacent LMA domain, the MN switches itself into a virtual mode and its movement is treated as part of the VLMA domain. In the proposed scheme, MAGs eliminate global binding updates for MNs between LMA domains and significantly reduce packet loss and latency by eliminating the handoff between LMAs. In conclusion, the performance analysis results show that the proposed scheme improves performance significantly versus PMIPv6 and HMIPv6 in terms of the binding update rate per user and the average handoff latency.

  19. Moving standard deviation and moving sum of outliers as quality tools for monitoring analytical precision.

    PubMed

    Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping

    2018-02-01

    An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e. noise) into patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of a moving standard deviation (movSD) and a recently described moving sum of outliers (movSO) of patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and average of normals (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it generally had the worst capability for detecting an increased CVa. On the other hand, the movSD and movSO approaches were able to detect an increased CVa at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CVa. In conclusion, the movSD and movSO approaches are effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large. However, the clinical risks of an increase in analytical imprecision are attenuated for these measurands, as the increased analytical imprecision will add only marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
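
    The moving standard deviation idea is straightforward to sketch: compute the standard deviation of the most recent block of patient results and flag when it drifts above a control limit derived from an in-control period. The window size, synthetic result stream and 3-SD alarm threshold below are illustrative assumptions, not the parameters optimized in the paper.

      # Sketch: moving standard deviation (movSD) of patient results for detecting
      # an increase in analytical imprecision (CVa). Parameters are illustrative.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(3)
      in_control = rng.normal(100, 5, 2000)                # stable analytical + biological variation
      drifted = rng.normal(100, 7, 500)                    # analytical imprecision has increased
      results = pd.Series(np.concatenate([in_control, drifted]))

      window = 100
      movsd = results.rolling(window).std()

      baseline = movsd[window:len(in_control)]             # movSD behaviour while in control
      upper_limit = baseline.mean() + 3 * baseline.std()   # simple 3-SD control limit (chance
                                                           #  exceedances are possible)
      alarm_index = int(np.argmax(movsd.values > upper_limit))
      print(f"control limit = {upper_limit:.2f}, first alarm at result #{alarm_index}")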

  20. Ants determine their next move at rest: motor planning and causality in complex systems.

    PubMed

    Hunt, Edmund R; Baddeley, Roland J; Worley, Alan; Sendova-Franks, Ana B; Franks, Nigel R

    2016-01-01

    To find useful work to do for their colony, individual eusocial animals have to move, somehow staying attentive to relevant social information. Recent research on individual Temnothorax albipennis ants moving inside their colony's nest found a power-law relationship between a movement's duration and its average speed; and a universal speed profile for movements showing that they mostly fluctuate around a constant average speed. From this predictability it was inferred that movement durations are somehow determined before the movement itself. Here, we find similar results in lone T. albipennis ants exploring a large arena outside the nest, both when the arena is clean and when it contains chemical information left by previous nest-mates. This implies that these movement characteristics originate from the same individual neural and/or physiological mechanism(s), operating without immediate regard to social influences. However, the presence of pheromones and/or other cues was found to affect the inter-event speed correlations. Hence we suggest that ants' motor planning results in intermittent response to the social environment: movement duration is adjusted in response to social information only between movements, not during them. This environmentally flexible, intermittently responsive movement behaviour points towards a spatially allocated division of labour in this species. It also prompts more general questions on collective animal movement and the role of intermittent causation from higher to lower organizational levels in the stability of complex systems.

  1. Effect of human movement on airborne disease transmission in an airplane cabin: study using numerical modeling and quantitative risk analysis.

    PubMed

    Han, Zhuyang; To, Gin Nam Sze; Fu, Sau Chung; Chao, Christopher Yu-Hang; Weng, Wenguo; Huang, Quanyi

    2014-08-06

    Airborne transmission of respiratory infectious diseases in indoor environments (e.g. airplane cabins, conference rooms, hospitals, isolation rooms and inpatient wards) may cause outbreaks, which can lead to many infection cases and significantly affect public health. This issue has received increasing attention from researchers. This work investigates the influence of human movement on the airborne transmission of respiratory infectious diseases in an airplane cabin by using an accurate human model in numerical simulation and comparing the influences of different human movement behaviors on disease transmission. The Eulerian-Lagrangian approach is adopted to simulate the dispersion and deposition of the expiratory aerosols. A dose-response model is used to assess the infection risks of the occupants. A likelihood analysis is performed as a hypothesis test on the input parameters and the different assumptions about human movement patterns. An in-flight SARS outbreak case is used for investigation. A moving person with different moving speeds is simulated to represent the movement behaviors. A digital human model was used to represent the detailed profile of the occupants, obtained by scanning a real thermal manikin with a 3D laser scanning system. The analysis results indicate that human movement can strengthen the downward transport of the aerosols, significantly reduce the overall deposition and removal rate of the suspended aerosols, and increase the average infection risk in the cabin. The likelihood estimation shows that the risk assessment results better fit the outcome of the outbreak case when the movements of the seated passengers are considered. The intake fraction of the moving person is significantly higher than that of most of the seated passengers. The infection risk distribution in the airplane cabin depends strongly on the movement behaviors of the passengers and the index patient. The walking activities of the crew members and the seated passengers can significantly increase their personal infection risks. Taking the influence of the movement of the seated passengers and the index patient into consideration is therefore necessary and important. For future studies, investigations of passenger behavior during flight will be useful for infection control.

  2. 3D printer generated thorax phantom with mobile tumor for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Mayer, Rulon; Liacouras, Peter; Thomas, Andrew; Kang, Minglei; Lin, Liyong; Simone, Charles B.

    2015-07-01

    This article describes the design, construction, and properties of an anthropomorphic thorax phantom with a moving surrogate tumor. This novel phantom permits detection of dose both inside and outside a moving tumor and within the substitute lung tissue material. A 3D printer generated the thorax shell composed of a chest wall, spinal column, and posterior regions of the phantom. Images of a computed tomography scan of the thorax from a patient with lung cancer provided the template for the 3D printing. The plastic phantom is segmented into two materials representing the muscle and bones, and its geometry closely matches a patient. A surrogate spherical plastic tumor controlled by a 3D linear stage simulates a lung tumor's trajectory during normal breathing. Sawdust emulates the lung tissue in terms of average and distribution in Hounsfield numbers. The sawdust also provides a forgiving medium that permits tumor motion and sandwiching of radiochromic film inside the mobile surrogate plastic tumor for dosimetry. A custom cork casing shields the film and tumor and eliminates film bending during extended scans. The phantom, lung tissue surrogate, and radiochromic film are exposed to a seven field plan based on an ECLIPSE plan for 6 MV photons from a Trilogy machine delivering 230 cGy to the isocenter. The dose collected in a sagittal plane is compared to the calculated plan. Gamma analysis finds 8.8% and 5.5% gamma failure rates for measurements of large amplitude trajectory and static measurements relative to the large amplitude plan, respectively. These particular gamma analysis results were achieved using parameters of 3% dose and 3 mm, for regions receiving doses >150 cGy. The plan assumes a stationary detection grid unlike the moving radiochromic film and tissues. This difference was experimentally observed and motivated calculated dose distributions that incorporated the phase of the tumor periodic motion. These calculations modestly improve agreement between the measured and intended doses.

  3. The TW Hydrae association: trigonometric parallaxes and kinematic analysis

    NASA Astrophysics Data System (ADS)

    Ducourant, C.; Teixeira, R.; Galli, P. A. B.; Le Campion, J. F.; Krone-Martins, A.; Zuckerman, B.; Chauvin, G.; Song, I.

    2014-03-01

    Context. The nearby TW Hydrae association (TWA) is currently a benchmark for the study of the formation and evolution of young low-mass stars, circumstellar disks, and the imaging detection of planetary companions. For these studies, it is crucial to evaluate the distance to group members in order to access their physical properties. Membership of several stars is strongly debated and age estimates vary from one author to another with doubts about coevality. Aims: We revisit the kinematic properties of the TWA in light of new trigonometric parallaxes and proper motions to derive the dynamical age of the association and physical parameters of kinematic members. Methods: Using observations performed with the New Technology Telescope (NTT) from ESO we measured trigonometric parallaxes and proper motions for 13 stars in TWA. Results: With the convergent point method we identify a co-moving group with 31 TWA stars. We deduce kinematic distances for seven members of the moving group that lack trigonometric parallaxes. A traceback strategy is applied to the stellar space motions of a selection of 16 of the co-moving objects with accurate and reliable data yielding a dynamical age for the association of t ≃ 7.5 ± 0.7 Myr. Using our new parallaxes and photometry available in the literature we derive stellar ages and masses from theoretical evolutionary models. Conclusions: With new parallax and proper motion measurements from this work and current astrometric catalogs we provide an improved and accurate database for TWA stars to be used in kinematical analysis. We conclude that the dynamical age obtained via traceback strategy is consistent with previous age estimates for the TWA, and is also compatible with the average ages derived in the present paper from evolutionary models for pre-main-sequence stars. Based on observations performed at the European Southern Observatory, Chile (79.C-0229, 81.C-0143, 82.C-0103, 83.C-0102, 84.C-0014).

  4. 3D printer generated thorax phantom with mobile tumor for radiation dosimetry.

    PubMed

    Mayer, Rulon; Liacouras, Peter; Thomas, Andrew; Kang, Minglei; Lin, Liyong; Simone, Charles B

    2015-07-01

    This article describes the design, construction, and properties of an anthropomorphic thorax phantom with a moving surrogate tumor. This novel phantom permits detection of dose both inside and outside a moving tumor and within the substitute lung tissue material. A 3D printer generated the thorax shell composed of a chest wall, spinal column, and posterior regions of the phantom. Images of a computed tomography scan of the thorax from a patient with lung cancer provided the template for the 3D printing. The plastic phantom is segmented into two materials representing the muscle and bones, and its geometry closely matches a patient. A surrogate spherical plastic tumor controlled by a 3D linear stage simulates a lung tumor's trajectory during normal breathing. Sawdust emulates the lung tissue in terms of average and distribution in Hounsfield numbers. The sawdust also provides a forgiving medium that permits tumor motion and sandwiching of radiochromic film inside the mobile surrogate plastic tumor for dosimetry. A custom cork casing shields the film and tumor and eliminates film bending during extended scans. The phantom, lung tissue surrogate, and radiochromic film are exposed to a seven field plan based on an ECLIPSE plan for 6 MV photons from a Trilogy machine delivering 230 cGy to the isocenter. The dose collected in a sagittal plane is compared to the calculated plan. Gamma analysis finds 8.8% and 5.5% gamma failure rates for measurements of large amplitude trajectory and static measurements relative to the large amplitude plan, respectively. These particular gamma analysis results were achieved using parameters of 3% dose and 3 mm, for regions receiving doses >150 cGy. The plan assumes a stationary detection grid unlike the moving radiochromic film and tissues. This difference was experimentally observed and motivated calculated dose distributions that incorporated the phase of the tumor periodic motion. These calculations modestly improve agreement between the measured and intended doses.

  5. 3D printer generated thorax phantom with mobile tumor for radiation dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, Rulon; Liacouras, Peter; Thomas, Andrew

    2015-07-15

    This article describes the design, construction, and properties of an anthropomorphic thorax phantom with a moving surrogate tumor. This novel phantom permits detection of dose both inside and outside a moving tumor and within the substitute lung tissue material. A 3D printer generated the thorax shell composed of a chest wall, spinal column, and posterior regions of the phantom. Images of a computed tomography scan of the thorax from a patient with lung cancer provided the template for the 3D printing. The plastic phantom is segmented into two materials representing the muscle and bones, and its geometry closely matches a patient. A surrogate spherical plastic tumor controlled by a 3D linear stage simulates a lung tumor's trajectory during normal breathing. Sawdust emulates the lung tissue in terms of average and distribution in Hounsfield numbers. The sawdust also provides a forgiving medium that permits tumor motion and sandwiching of radiochromic film inside the mobile surrogate plastic tumor for dosimetry. A custom cork casing shields the film and tumor and eliminates film bending during extended scans. The phantom, lung tissue surrogate, and radiochromic film are exposed to a seven field plan based on an ECLIPSE plan for 6 MV photons from a Trilogy machine delivering 230 cGy to the isocenter. The dose collected in a sagittal plane is compared to the calculated plan. Gamma analysis finds 8.8% and 5.5% gamma failure rates for measurements of large amplitude trajectory and static measurements relative to the large amplitude plan, respectively. These particular gamma analysis results were achieved using parameters of 3% dose and 3 mm, for regions receiving doses >150 cGy. The plan assumes a stationary detection grid unlike the moving radiochromic film and tissues. This difference was experimentally observed and motivated calculated dose distributions that incorporated the phase of the tumor periodic motion. These calculations modestly improve agreement between the measured and intended doses.

  6. Statistical analysis of low level atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Tieleman, H. W.; Chen, W. W. L.

    1974-01-01

    The statistical properties of low-level wind-turbulence data were obtained with the model 1080 total vector anemometer and the model 1296 dual split-film anemometer, both manufactured by Thermo Systems Incorporated. The data obtained from the above fast-response probes were compared with the results obtained from a pair of Gill propeller anemometers. The digitized time series representing the three velocity components and the temperature were each divided into a number of blocks, the length of which depended on the lowest frequency of interest and also on the storage capacity of the available computer. A moving-average and differencing high-pass filter was used to remove the trend and the low frequency components in the time series. The calculated results for each of the anemometers used are represented in graphical or tabulated form.
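
    As a rough illustration of the detrending step described above, the sketch below removes low-frequency content from a velocity record by subtracting a centred moving average and then first-differencing the result; the sampling rate, window length and synthetic record are illustrative assumptions.

      # Sketch: moving-average-and-differencing high-pass filtering of a turbulence record.
      # Window length and synthetic data are illustrative.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(7)
      fs = 10.0                                              # samples per second (assumed)
      t = np.arange(0, 600, 1 / fs)
      trend = 0.002 * t + 0.5 * np.sin(2 * np.pi * t / 300)  # slow trend / low-frequency drift
      u = pd.Series(trend + rng.normal(0, 0.3, t.size))      # longitudinal velocity component

      window = int(60 * fs)                                  # 60-second centred moving average
      low_pass = u.rolling(window, center=True, min_periods=1).mean()

      u_highpass = u - low_pass                              # remove trend and low frequencies
      u_diff = u_highpass.diff().dropna()                    # optional further differencing
      print(u.std().round(3), u_highpass.std().round(3), u_diff.std().round(3))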

  7. Energy consumption model on WiMAX subscriber station

    NASA Astrophysics Data System (ADS)

    Mubarakah, N.; Suherman; Al-Hakim, M. Y.; Warman, E.

    2018-02-01

    Mobile communication technologies are moving toward miniaturization, and a mobile device's operating time relies on its battery endurance. The smaller the mobile device, the slower the battery is expected to drain. Reducing energy consumption in mobile devices has therefore been of interest to researchers. In order to optimize energy consumption, its usage should be predictable. This paper proposes a model that predicts the amount of energy consumed by a WiMAX subscriber station by using regression analysis of the active WiMAX states and their durations. The proposed model was assessed using NS-2 simulation of more than a hundred thousand recorded energy-consumption data points across the WiMAX states. The assessment shows a small average deviation between predicted and measured energy consumption: about 0.18% for training data, and 0.187% and 0.191% for test data.
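
    A regression model of this kind can be sketched as total energy expressed as a weighted sum of the time spent in each state, with per-state power coefficients estimated by least squares. The state names, synthetic durations and use of ordinary least squares below are illustrative assumptions rather than the paper's exact formulation.

      # Sketch: least-squares estimation of per-state power draw for a subscriber station,
      # then prediction of total energy from state durations. States/values are illustrative.
      import numpy as np

      rng = np.random.default_rng(4)
      states = ["idle", "sleep", "receive", "transmit"]
      true_power = np.array([0.5, 0.1, 1.2, 1.8])                  # watts per state (assumed)

      durations = rng.uniform(0, 60, size=(200, 4))                # seconds in each state per run
      energy = durations @ true_power + rng.normal(0, 0.5, 200)    # measured energy (J) + noise

      power_hat, *_ = np.linalg.lstsq(durations, energy, rcond=None)
      print(dict(zip(states, power_hat.round(3))))

      new_run = np.array([30.0, 10.0, 15.0, 5.0])                  # predict energy for a new run
      print("predicted energy:", float(new_run @ power_hat))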

  8. Traffic dynamics of carnival processions

    NASA Astrophysics Data System (ADS)

    Polichronidis, Petros; Wegerle, Dominik; Dieper, Alexander; Schreckenberg, Michael

    2018-03-01

    The traffic dynamics of processions are described in this study. GPS data from participating groups in the Cologne Rose Monday processions of 2014–2017 are used to analyze the kinematic characteristics. The preparation of the measured data requires an adjustment by a specially adapted algorithm for the map-matching method. A higher average velocity is observed for the last participant, the Carnival Prince, than for the leading participant of the parade. Based on the results of the data analysis, a model for defilading parade groups is established for the first time, as a modified Nagel-Schreckenberg model. This model can reproduce the observed characteristics in simulations. They can be explained partly by the constantly moving vehicle driving ahead of the parade and leaving the pathway, and partly by a spatial contraction of the parade during the procession.
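
    For context, a minimal sketch of the standard (unmodified) Nagel-Schreckenberg cellular automaton is given below; the road length, maximum speed, randomization probability and circular road are the usual textbook defaults, not the paper's parade-specific modification.

      # Sketch: standard Nagel-Schreckenberg traffic cellular automaton on a circular road.
      # Parameters are textbook choices, not the modified parade model.
      import numpy as np

      rng = np.random.default_rng(5)
      road_len, n_cars, v_max, p_slow, steps = 200, 40, 5, 0.3, 500

      pos = np.sort(rng.choice(road_len, n_cars, replace=False))   # ring order = array order
      vel = np.zeros(n_cars, dtype=int)

      mean_speeds = []
      for _ in range(steps):
          gaps = (np.roll(pos, -1) - pos - 1) % road_len      # empty cells to the car ahead
          vel = np.minimum(vel + 1, v_max)                    # 1) accelerate
          vel = np.minimum(vel, gaps)                         # 2) brake to avoid collision
          vel = np.where(rng.random(n_cars) < p_slow,         # 3) random slowdown
                         np.maximum(vel - 1, 0), vel)
          pos = (pos + vel) % road_len                        # 4) move
          mean_speeds.append(vel.mean())

      print("average speed (cells/step):", np.mean(mean_speeds[100:]).round(2))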

  9. Nutrition and hydration: an analysis of the recent papal statement in the light of the Roman Catholic bioethical tradition.

    PubMed

    Shannon, Thomas A

    2006-04-01

    This article discusses the unexpectedly firm stance professed by John Paul II on the provision of artificial nutrition and hydration to patients who are in a persistent vegetative state, and its implications for previously held standards of judging medical treatments. The traditional ordinary/extraordinary care distinction is assessed in light of the complexities of the recent allocution as well as its impact on Catholic individuals and on Catholic health care facilities. Shannon concludes that the papal allocution implies that the average Catholic patient is incapable of making proper judgments about their own care. Shannon sees the preservation of life at all costs as at least highly troubling, if not a radical move against the Catholic medical ethics tradition.

  10. Microcolumn Formation due to Induced-Charge Electroosmosis in a Floating Mode

    NASA Astrophysics Data System (ADS)

    Sugioka, Hideyuki; Dan, Hironobu; Hanazawa, Yuya

    2017-10-01

    Self-organization of particles is important since it may provide new functional materials. Previously, using two-dimensional multiphysics simulations, we theoretically showed microcolumn formation due to induced-charge electroosmosis (ICEO). In this study, we experimentally demonstrate that gold leaves on a water surface move slowly and dynamically form a microcolumn through a hydrodynamic interaction under an ac electric field. Further, by numerically analyzing video data, we show the time evolution of the maximum cluster length and the maximum cluster area. In addition, by cluster analysis, we show the dependence of the average velocity on the applied voltage and frequency to clarify the phenomena. We believe that our findings mark a new stage in the development of functional materials on a water surface.

  11. VizieR Online Data Catalog: HARPS timeseries data for HD41248 (Jenkins+, 2014)

    NASA Astrophysics Data System (ADS)

    Jenkins, J. S.; Tuomi, M.

    2017-05-01

    We modeled the HARPS radial velocities of HD 41248 by adopting the analysis techniques and the statistical model applied in Tuomi et al. (2014, arXiv:1405.2016). This model contains Keplerian signals, a linear trend, a moving average component with exponential smoothing, and linear correlations with activity indices, namely BIS, FWHM, and the chromospheric activity S index. We applied our statistical model outlined above to the full data set of radial velocities for HD 41248, combining the previously published data in Jenkins et al. (2013ApJ...771...41J) with the newly published data in Santos et al. (2014, J/A+A/566/A35), giving rise to a total time series of 223 HARPS (Mayor et al. 2003Msngr.114...20M) velocities. (1 data file).

  12. SU-C-209-02: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Clinical Patient Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhou, S; Cai, W; Hurwitz, M

    Purpose: We develop a method to generate time varying volumetric images (3D fluoroscopic images) using patient-specific motion models derived from four-dimensional cone-beam CT (4DCBCT). Methods: Motion models are derived by selecting one 4DCBCT phase as a reference image, and registering the remaining images to it. Principal component analysis (PCA) is performed on the resultant displacement vector fields (DVFs) to create a reduced set of PCA eigenvectors that capture the majority of respiratory motion. 3D fluoroscopic images are generated by optimizing the weights of the PCA eigenvectors iteratively through comparison of measured cone-beam projections and simulated projections generated from the motion model. This method was applied to images from five lung-cancer patients. The spatial accuracy of this method is evaluated by comparing landmark positions in the 3D fluoroscopic images to manually defined ground truth positions in the patient cone-beam projections. Results: 4DCBCT motion models were shown to accurately generate 3D fluoroscopic images when the patient cone-beam projections contained clearly visible structures moving with respiration (e.g., the diaphragm). When no moving anatomical structure was clearly visible in the projections, the 3D fluoroscopic images generated did not capture breathing deformations, and reverted to the reference image. For the subset of 3D fluoroscopic images generated from projections with visibly moving anatomy, the average tumor localization error and the 95th percentile were 1.6 mm and 3.1 mm respectively. Conclusion: This study showed that 4DCBCT-based 3D fluoroscopic images can accurately capture respiratory deformations in a patient dataset, so long as the cone-beam projections used contain visible structures that move with respiration. For clinical implementation of 3D fluoroscopic imaging for treatment verification, an imaging field of view (FOV) that contains visible structures moving with respiration should be selected. If no other appropriate structures are visible, the images should include the diaphragm. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc, Palo Alto, CA.
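
    The PCA step of such a motion model can be sketched as follows: the DVFs (one per registered phase) are flattened into rows of a matrix, the mean is removed, and a singular value decomposition yields eigenvectors whose weights parameterize new deformations. The array shapes, number of retained components and synthetic DVFs are illustrative assumptions, not the study's implementation.

      # Sketch: PCA motion model over displacement vector fields (DVFs).
      # Each DVF is flattened to one row; weights on the leading eigenvectors
      # parameterize a respiratory deformation. Shapes/data are placeholders.
      import numpy as np

      rng = np.random.default_rng(6)
      n_phases, n_voxels = 10, 32 * 32 * 32
      dvfs = rng.normal(0, 1, (n_phases, n_voxels * 3))   # placeholder registered DVFs (x, y, z stacked)

      mean_dvf = dvfs.mean(axis=0)
      centered = dvfs - mean_dvf
      U, S, Vt = np.linalg.svd(centered, full_matrices=False)

      n_comp = 2                                           # components capturing most motion
      eigenvectors = Vt[:n_comp]                           # shape (n_comp, n_voxels*3)
      explained = (S[:n_comp] ** 2).sum() / (S ** 2).sum()
      print(f"variance captured by {n_comp} components: {explained:.1%}")

      weights = np.array([1.5, -0.3])                      # example weights (in practice optimized
      dvf_new = mean_dvf + weights @ eigenvectors          #  against measured projections)
      print(dvf_new.shape)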

  13. Discovery, utilisation and analysis of credible threats for 2×2 incomplete information games in the Theory of Moves framework

    NASA Astrophysics Data System (ADS)

    Olsen, Jolie; Sen, Sandip

    2014-04-01

    Steven Brams's [(1994). Theory of moves. Cambridge University Press] Theory of Moves (TOM) is an alternative to traditional game theoretic treatment of real-life interactions, in which players choose strategies based on analysis of future moves and counter-moves that arise if game play commences at a specified start state and either player can choose to move first. In repeated play, players using TOM rationality arrive at nonmyopic equilibria. One advantage of TOM is its ability to model scenarios in which power asymmetries exist between players. In particular, threat power, i.e. the ability of one player to threaten and sustain immediate, globally disadvantageous outcomes to force a desirable result long term, can be utilised to induce Pareto optimal states in games such as Prisoner's Dilemma which result in Pareto-dominated outcomes using traditional methods. Unfortunately, prior work on TOM is limited by an assumption of complete information. This paper presents a mechanism that can be used by a player to utilise threat power when playing a strict, ordinal 2×2 game under incomplete information. We also analyse the benefits of threat power and support in this analysis with empirical evidence.

  14. Permeation of limonene through disposable nitrile gloves using a dextrous robot hand

    PubMed Central

    Banaee, Sean; S Que Hee, Shane

    2017-01-01

    Objectives: The purpose of this study was to investigate the permeation of the low-volatile solvent limonene through different disposable, unlined, unsupported, nitrile exam whole gloves (blue, purple, sterling, and lavender, from Kimberly-Clark). Methods: This study utilized a moving and static dextrous robot hand as part of a novel dynamic permeation system that allowed sampling at specific times. Quantitation of limonene in samples was based on capillary gas chromatography-mass spectrometry and the internal standard method (4-bromophenol). Results: The average post-permeation thicknesses (before reconditioning) for all gloves for both the moving and static hand were more than 10% of the pre-permeation ones (P≤0.05), although this was not so on reconditioning. The standardized breakthrough times and steady-state permeation periods were similar for the blue, purple, and sterling gloves. Both methods had similar sensitivity. The lavender glove showed a higher permeation rate (0.490±0.031 μg/cm2/min) for the moving robotic hand compared to the non-moving hand (P≤0.05), this being ascribed to a thickness threshold. Conclusions: Permeation parameters for the static and dynamic robot hand models indicate that both methods have similar sensitivity in detecting the analyte during permeation and the blue, purple, and sterling gloves behave similarly during the permeation process whether moving or non-moving. PMID:28111415

  15. The Spin Move: A Reliable and Cost-Effective Gowning Technique for the 21st Century.

    PubMed

    Ochiai, Derek H; Adib, Farshad

    2015-04-01

    Operating room efficiency (ORE) and utilization are considered among the most crucial components of quality improvement in every hospital. We introduced a new gowning technique that could optimize ORE. The Spin Move quickly and efficiently wraps a surgical gown around the surgeon's body, saving the operative time expended by traditional gowning techniques. In the Spin Move, while the surgeon is approaching the scrub nurse, he or she uses the left heel as the fulcrum. The torque, which is generated by twisting the right leg around the left leg, helps the surgeon close the gown as quickly and safely as possible. From 2003 to 2012, the Spin Move was performed in 1,725 consecutive procedures with no complications. The estimated average time was 5.3 seconds for the Spin Move and 7.8 seconds for traditional gowning. The estimated time saving for the senior author during this period was 71.875 minutes. Approximately 20,000 orthopaedic surgeons practice in the United States; if this technique had been used, 23,958 hours could have been saved. The monetary saving could have been $14,374,800.00 (23,958 hours × $600/operating room hour) over the past 10 years. The Spin Move is easy to perform and reproducible. It saves operating room time and increases ORE.

  16. The Spin Move: A Reliable and Cost-Effective Gowning Technique for the 21st Century

    PubMed Central

    Ochiai, Derek H.; Adib, Farshad

    2015-01-01

    Operating room efficiency (ORE) and utilization are considered among the most crucial components of quality improvement in every hospital. We introduced a new gowning technique that could optimize ORE. The Spin Move quickly and efficiently wraps a surgical gown around the surgeon's body, saving the operative time expended by traditional gowning techniques. In the Spin Move, while the surgeon is approaching the scrub nurse, he or she uses the left heel as the fulcrum. The torque, which is generated by twisting the right leg around the left leg, helps the surgeon close the gown as quickly and safely as possible. From 2003 to 2012, the Spin Move was performed in 1,725 consecutive procedures with no complications. The estimated average time was 5.3 seconds for the Spin Move and 7.8 seconds for traditional gowning. The estimated time saving for the senior author during this period was 71.875 minutes. Approximately 20,000 orthopaedic surgeons practice in the United States; if this technique had been used, 23,958 hours could have been saved. The monetary saving could have been $14,374,800.00 (23,958 hours × $600/operating room hour) over the past 10 years. The Spin Move is easy to perform and reproducible. It saves operating room time and increases ORE. PMID:26052490

  17. Effects of coarse-graining on the scaling behavior of long-range correlated and anti-correlated signals.

    PubMed

    Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch

    2011-11-01

    We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
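
    To make the coarse-graining operations concrete, the sketch below applies a Floor-type magnitude quantization with partition width Δ and a non-overlapping time-window average to a synthetic signal. The definition of the Floor method as Δ·floor(x/Δ), the window sizes and the synthetic signal are assumptions for illustration; the Symmetry and Centro-Symmetry variants are not reproduced here.

      # Sketch: coarse-graining a signal in magnitude (Floor-type quantization with width delta)
      # and in time (averaging over non-overlapping windows). Definitions are illustrative.
      import numpy as np

      rng = np.random.default_rng(8)
      x = np.cumsum(rng.normal(0, 1, 10_000))        # synthetic positively correlated signal
      x = (x - x.mean()) / x.std()

      def floor_coarse_grain(signal: np.ndarray, delta: float) -> np.ndarray:
          """Quantize signal amplitudes onto a grid of width delta (Floor-type method)."""
          return delta * np.floor(signal / delta)

      def time_coarse_grain(signal: np.ndarray, n: int) -> np.ndarray:
          """Average the signal in non-overlapping windows of n points."""
          m = len(signal) // n
          return signal[: m * n].reshape(m, n).mean(axis=1)

      for delta in (0.5, 1.0, 3.0):
          xq = floor_coarse_grain(x, delta)
          print(f"delta = {delta}: {np.unique(xq).size} amplitude levels")

      print("time coarse-grained length:", time_coarse_grain(x, 10).size)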

  18. Effects of coarse-graining on the scaling behavior of long-range correlated and anti-correlated signals

    PubMed Central

    Xu, Yinlin; Ma, Qianli D.Y.; Schmitt, Daniel T.; Bernaola-Galván, Pedro; Ivanov, Plamen Ch.

    2014-01-01

    We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences. PMID:25392599

  19. In Vivo Microdialysis and Electroencephalographic Activity in Freely Moving Guinea Pigs Exposed to Organophosphorus Nerve Agents Sarin and VX: Analysis of Acetylcholine and Glutamate

    DTIC Science & Technology

    2011-01-01

    Title: In vivo microdialysis and electroencephalographic activity in freely moving guinea pigs exposed to organophosphorus nerve agents sarin and VX: analysis of acetylcholine and glutamate. Abstract excerpt: … brain seizure activity. This robust double multivariate design provides greater fidelity when comparing data while also reducing the required number …

  20. How Do Changes in Speed Affect the Perception of Duration?

    ERIC Educational Resources Information Center

    Matthews, William J.

    2011-01-01

    Six experiments investigated how changes in stimulus speed influence subjective duration. Participants saw rotating or translating shapes in three conditions: constant speed, accelerating motion, and decelerating motion. The distance moved and average speed were the same in all three conditions. In temporal judgment tasks, the constant-speed…

  1. Unpacking the "Black Box" of Social Programs and Policies: Introduction

    ERIC Educational Resources Information Center

    Solmeyer, Anna R.; Constance, Nicole

    2015-01-01

    Traditionally, evaluation has primarily tried to answer the question "Does a program, service, or policy work?" Recently, more attention is given to questions about variation in program effects and the mechanisms through which program effects occur. Addressing these kinds of questions requires moving beyond assessing average program…

  2. Tax Breaks for Law Students.

    ERIC Educational Resources Information Center

    Button, Alan L.

    1981-01-01

    A guide to federal income tax law as it affects law students is presented. Some costs that may constitute valuable above-the-line deductions are identified: moving expenses, educational expenses, job-seeking expenses, and income averaging. (Available from Washington and Lee University School of Law, Lexington, VA 24450, $5.50 sc) (MLW)

  3. Alabama's Education Report Card, 2009-2010

    ERIC Educational Resources Information Center

    Alabama Department of Education, 2011

    2011-01-01

    In a more consistent and viable manner than ever before, education in Alabama is moving toward its ultimate goal of providing every student with a quality education, thereby preparing them for work, college, and life after high school. Alabama's graduation rates from 2002 to 2008 increased significantly, tripling the national average increase and…

  4. Moving toward climate-informed agricultural decision support - can we use PRISM data for more than just monthly averages?

    USDA-ARS?s Scientific Manuscript database

    Decision support systems/models for agriculture are varied in target application and complexity, ranging from simple worksheets to near real-time forecast systems requiring significant computational and manpower resources. Until recently, most such decision support systems have been constructed with...

  5. A MOVING AVERAGE BAYESIAN MODEL FOR SPATIAL SURFACE AND COVERAGE PREDICTION FROM ENVIRONMENTAL POINT-SOURCE DATA

    EPA Science Inventory

    This paper addresses the general problem of estimating at arbitrary locations the value of an unobserved quantity that varies over space, such as ozone concentration in air or nitrate concentrations in surface groundwater, on the basis of approximate measurements of the quantity ...

  6. On the Nature of SEM Estimates of ARMA Parameters.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2002-01-01

    Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

  7. A Computer Program for the Generation of ARIMA Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Noles, Keith O.

    1977-01-01

    The autoregressive integrated moving averages model (ARIMA) has been applied to time series data in psychological and educational research. A program is described that generates ARIMA data of a known order. The program enables researchers to explore statistical properties of ARIMA data and simulate systems producing time dependent observations.…

  8. Inhalant Use among Indiana School Children, 1991-2004

    ERIC Educational Resources Information Center

    Ding, Kele; Torabi, Mohammad R.; Perera, Bilesha; Jun, Mi Kyung; Jones-McKyer, E. Lisako

    2007-01-01

    Objective: To examine the prevalence and trend of inhalant use among Indiana public school students. Methods: The Alcohol, Tobacco, and Other Drug Use among Indiana Children and Adolescents surveys conducted annually between 1991 and 2004 were reanalyzed using 2-way moving average, Poisson regression, and ANOVA tests. Results: The prevalence had…

  9. Average pollutant concentration in soil profile simulated with Convective-Dispersive Equation. Model and Manual

    USDA-ARS?s Scientific Manuscript database

    Different parts of soil solution move with different velocities, and therefore chemicals are leached gradually from soil with infiltrating water. Solute dispersivity is the soil parameter characterizing this phenomenon. To characterize the dispersivity of soil profile at field scale, it is desirable...

  10. A review of metropolitan area early deployment plans and congestion management systems for the development of intelligent transportation systems

    DOT National Transportation Integrated Search

    1997-01-01

    The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...

  11. The Choice of Spatial Interpolation Method Affects Research Conclusions

    NASA Astrophysics Data System (ADS)

    Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.

    2017-12-01

    Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of these studies have adopted interpolation procedures, including kriging, moving average or inverse distance weighted (IDW) averaging, and nearest point, without the necessary recourse to their uncertainties. This study compared the results of popular interpolation procedures as implemented in two commonly used GIS packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. The data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in the Ere stream at Ayepe-Olode, southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and at a palm oil effluent discharge point in the stream); four stations were sited at each location (Figure 1). The data were first examined for their spatial distributions and associated variogram parameters (nugget, sill and range) using the PAleontological STatistics package (PAST3) before the mean values were interpolated in the selected GIS software using each of the kriging (simple), moving average and nearest point approaches. The determined variogram parameters were then substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity interpolated with kriging varied from 120.1 to 219.5 µScm-1, it varied from 105.6 to 220.0 µScm-1 and from 135.0 to 173.9 µScm-1 with the nearest point and moving average interpolations, respectively (Figure 2). The study also showed that whereas the computed variogram produced the best-fit lines (with the lowest associated error, SSerror) under a Gaussian model, the spherical model was assumed by default for all the distributions in the software, with the nugget set to 0.00 when this was rarely the case (Figure 3). The study concluded that the choice of interpolation procedure may affect modelling inferences and the decisions and conclusions drawn from them.
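
    A minimal sketch of inverse distance weighted (IDW) interpolation as a general technique, not the specific ILWIS or ArcGIS implementation compared above; the station coordinates and conductivity values below are hypothetical and purely illustrative.

    import numpy as np

    def idw_interpolate(xy_obs, values, xy_query, power=2.0):
        """IDW: each prediction is a weighted mean of the observations,
        with weights proportional to 1 / distance**power."""
        xy_obs = np.asarray(xy_obs, float)
        values = np.asarray(values, float)
        preds = []
        for q in np.atleast_2d(xy_query):
            d = np.linalg.norm(xy_obs - q, axis=1)
            if np.any(d == 0):                  # query coincides with an observation
                preds.append(values[d == 0][0])
                continue
            w = 1.0 / d**power
            preds.append(np.sum(w * values) / np.sum(w))
        return np.array(preds)

    stations = [(0, 0), (0, 100), (100, 0), (100, 100)]   # hypothetical station coordinates (m)
    cond = [120.1, 150.0, 180.2, 219.5]                   # hypothetical conductivities (µS/cm)
    print(idw_interpolate(stations, cond, [(50, 50), (10, 90)]))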

  12. Cloud motion in relation to the ambient wind field

    NASA Technical Reports Server (NTRS)

    Fuelberg, H. E.; Scoggins, J. R.

    1975-01-01

    Trajectories of convective clouds were computed from a mathematical model and compared with trajectories observed by radar. The ambient wind field was determined from the AVE IIP data. The model includes gradient, Coriolis, drag, lift, and lateral forces. The results show that rotational effects may account for large differences between the computed and observed trajectories and that convective clouds may move 10 to 20 degrees to the right or left of the average wind vector and at speeds 5 to 10 m/sec faster or slower than the average ambient wind speed.

  13. MoveU? Assessing a Social Marketing Campaign to Promote Physical Activity.

    PubMed

    Scarapicchia, Tanya M F; Sabiston, Catherine M F; Brownrigg, Michelle; Blackburn-Evans, Althea; Cressy, Jill; Robb, Janine; Faulkner, Guy E J

    2015-01-01

    MoveU is a social marketing initiative aimed at increasing moderate-to-vigorous physical activity (MVPA) among undergraduate students. Using the Hierarchy of Effects model (HOEM), this study identified awareness of MoveU and examined associations between awareness, outcome expectations, self-efficacy, intentions, and MVPA. Participants were students (N = 2,784) from a Canadian university, surveyed in March 2013. Data came from a secondary analysis of the National College Health Assessment-II survey together with measures specific to the MoveU campaign. The main associations were examined in a path analysis. MoveU awareness (36.4%) was lower than that of other well-established university health campaigns. Younger students, females, and individuals living on campus were more likely to be aware of MoveU. The HOEM was supported, and improvements in model fit were evident, with additional direct relationships between outcome expectancy and intention, and between self-efficacy and MVPA. The intended population was aware of the campaign. The HOEM was useful in the development and evaluation of the MoveU campaign. Longitudinal studies are needed to further test the efficacy of the HOEM in the social marketing of physical activity.

  14. Finding the average speed of a light-emitting toy car with a smartphone light sensor

    NASA Astrophysics Data System (ADS)

    Kapucu, Serkan

    2017-07-01

    This study aims to demonstrate how the average speed of a light-emitting toy car may be determined using a smartphone’s light sensor. The freely available Android smartphone application, ‘AndroSensor’, was used for the experiment. The classroom experiment combines complementary physics knowledge of optics and kinematics to find the average speed of a moving object. The speed of the toy car is found by determining the distance between the light-emitting toy car and the smartphone, and the time taken to travel these distances. To ensure that the average speed of the toy car calculated with the help of the AndroSensor was correct, the average speed was also calculated by analyzing video-recordings of the toy car. The resulting speeds found with these different methods were in good agreement with each other. Hence, it can be concluded that reliable measurements of the average speed of light-emitting objects can be determined with the help of the light sensor of an Android smartphone.
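
    The underlying calculation is simply path length divided by elapsed time. The sketch below assumes the distances to the car have already been estimated from the light-sensor readings (the abstract does not detail that step); all numbers are hypothetical.

    def average_speed(distances_m, times_s):
        """Average speed: total path length covered divided by total elapsed time,
        using successive distance estimates to the moving car."""
        total_path = sum(abs(d2 - d1) for d1, d2 in zip(distances_m, distances_m[1:]))
        return total_path / (times_s[-1] - times_s[0])

    d = [1.00, 0.85, 0.70, 0.55, 0.40]   # hypothetical distance estimates (m)
    t = [0.0, 0.5, 1.0, 1.5, 2.0]        # sample times (s)
    print(f"average speed = {average_speed(d, t):.2f} m/s")   # 0.30 m/s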

  15. Relative distance between tracers as a measure of diffusivity within moving aggregates

    NASA Astrophysics Data System (ADS)

    Pönisch, Wolfram; Zaburdaev, Vasily

    2018-02-01

    Tracking of particles, be it a passive tracer or an actively moving bacterium in the growing bacterial colony, is a powerful technique to probe the physical properties of the environment of the particles. One of the most common measures of particle motion driven by fluctuations and random forces is its diffusivity, which is routinely obtained by measuring the mean squared displacement of the particles. However, often the tracer particles may be moving in a domain or an aggregate which itself experiences some regular or random motion and thus masks the diffusivity of tracers. Here we provide a method for assessing the diffusivity of tracer particles within mobile aggregates by measuring the so-called mean squared relative distance (MSRD) between two tracers. We provide analytical expressions for both the ensemble and time averaged MSRD allowing for direct identification of diffusivities from experimental data.
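
    A minimal sketch of a time-averaged mean squared relative distance (MSRD) estimate for two tracer trajectories, under the assumption that subtracting the trajectories removes the common motion of the aggregate; the surrogate random walks below are illustrative only, not the authors' data or exact estimator.

    import numpy as np

    def time_averaged_msrd(traj_a, traj_b, lag):
        """Time-averaged MSRD at a given lag: squared change of the separation
        vector between two tracers, averaged over all start times. Any drift
        common to both tracers cancels in the separation vector."""
        r = np.asarray(traj_a, float) - np.asarray(traj_b, float)   # separation vectors
        dr = r[lag:] - r[:-lag]
        return np.mean(np.sum(dr**2, axis=1))

    rng = np.random.default_rng(1)
    drift = np.cumsum(np.full((1000, 2), 0.05), axis=0)             # common aggregate motion
    a = drift + np.cumsum(rng.standard_normal((1000, 2)), axis=0)   # tracer 1
    b = drift + np.cumsum(rng.standard_normal((1000, 2)), axis=0)   # tracer 2
    print([round(time_averaged_msrd(a, b, lag), 1) for lag in (1, 10, 100)])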

  16. Laser ablation for the synthesis of carbon nanotubes

    DOEpatents

    Holloway, Brian C; Eklund, Peter C; Smith, Michael W; Jordan, Kevin C; Shinn, Michelle

    2012-11-27

    Single walled carbon nanotubes are produced in a novel apparatus by the laser-induced ablation of a moving carbon target. The laser used is of high average power and ultra-fast pulsing. According to various preferred embodiments, the laser produces an output above about 50 watts/cm.sup.2 at a repetition rate above about 15 MHz and exhibits a pulse duration below about 10 picoseconds. The carbon, carbon/catalyst target and the laser beam are moved relative to one another and a focused flow of "side pumped", preheated inert gas is introduced near the point of ablation to minimize or eliminate interference by the ablated plume by removal of the plume and introduction of new target area for incidence with the laser beam. When the target is moved relative to the laser beam, rotational or translational movement may be imparted thereto, but rotation of the target is preferred.

  17. Laser ablation for the synthesis of carbon nanotubes

    DOEpatents

    Holloway, Brian C.; Eklund, Peter C.; Smith, Michael W.; Jordan, Kevin C.; Shinn, Michelle

    2010-04-06

    Single walled carbon nanotubes are produced in a novel apparatus by the laser-induced ablation of a moving carbon target. The laser used is of high average power and ultra-fast pulsing. According to various preferred embodiments, the laser produces an output above about 50 watts/cm.sup.2 at a repetition rate above about 15 MHz and exhibits a pulse duration below about 10 picoseconds. The carbon, carbon/catalyst target and the laser beam are moved relative to one another and a focused flow of "side pumped", preheated inert gas is introduced near the point of ablation to minimize or eliminate interference by the ablated plume by removal of the plume and introduction of new target area for incidence with the laser beam. When the target is moved relative to the laser beam, rotational or translational movement may be imparted thereto, but rotation of the target is preferred.

  18. Laser ablation for the synthesis of carbon nanotubes

    NASA Technical Reports Server (NTRS)

    Holloway, Brian C. (Inventor); Eklund, Peter C. (Inventor); Smith, Michael W. (Inventor); Jordan, Kevin C. (Inventor); Shinn, Michelle (Inventor)

    2010-01-01

    Single walled carbon nanotubes are produced in a novel apparatus by the laser-induced ablation of a moving carbon target. The laser used is of high average power and ultra-fast pulsing. According to various preferred embodiments, the laser produces an output above about 50 watts/cm.sup.2 at a repetition rate above about 15 MHz and exhibits a pulse duration below about 10 picoseconds. The carbon, carbon/catalyst target and the laser beam are moved relative to one another and a focused flow of side pumped, preheated inert gas is introduced near the point of ablation to minimize or eliminate interference by the ablated plume by removal of the plume and introduction of new target area for incidence with the laser beam. When the target is moved relative to the laser beam, rotational or translational movement may be imparted thereto, but rotation of the target is preferred.

  19. Laser ablation for the synthesis of carbon nanotubes

    NASA Technical Reports Server (NTRS)

    Holloway, Brian C. (Inventor); Eklund, Peter C. (Inventor); Smith, Michael W. (Inventor); Jordan, Kevin C. (Inventor); Shinn, Michelle (Inventor)

    2012-01-01

    Single walled carbon nanotubes are produced in a novel apparatus by the laser-induced ablation of a moving carbon target. The laser used is of high average power and ultra-fast pulsing. According to various preferred embodiments, the laser produces an output above about 50 watts/cm.sup.2 at a repetition rate above about 15 MHz and exhibits a pulse duration below about 10 picoseconds. The carbon, carbon/catalyst target and the laser beam are moved relative to one another and a focused flow of "side pumped", preheated inert gas is introduced near the point of ablation to minimize or eliminate interference by the ablated plume by removal of the plume and introduction of new target area for incidence with the laser beam. When the target is moved relative to the laser beam, rotational or translational movement may be imparted thereto, but rotation of the target is preferred.

  20. THE SEDIMENTATION PROPERTIES OF THE SKIN-SENSITIZING ANTIBODIES OF RAGWEED-SENSITIVE PATIENTS

    PubMed Central

    Andersen, Burton R.; Vannier, Wilton E.

    1964-01-01

    The sedimentation coefficients of the skin-sensitizing antibodies to ragweed were evaluated by the moving partition cell method and the sucrose density gradient method. The most reliable results were obtained by sucrose density gradient ultracentrifugation which showed that the major portion of skin-sensitizing antibodies to ragweed sediment with an average value of 7.7S (7.4 to 7.9S). This is about one S unit faster than γ-globulins (6.8S). The data from the moving partition cell method are in agreement with these results. Our studies failed to demonstrate heterogeneity of the skin-sensitizing antibodies with regard to sedimentation rate. PMID:14194391

  1. Progression of ash canopy thinning and dieback outward from the initial infestation of emerald ash borer (Coleoptera: Buprestidae) in southeastern Michigan.

    PubMed

    Smitley, David; Davis, Terrance; Rebek, Eric

    2008-10-01

    Our objective was to characterize the rate at which ash (Fraxinus spp.) trees decline in areas adjacent to the leading edge of visible ash canopy thinning due to emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae). Trees in southeastern Michigan were surveyed from 2003 to 2006 for canopy thinning and dieback by comparing survey trees with a set of 11 standard photographs. Freeways stemming from Detroit in all directions were used as survey transects. Between 750 and 1,100 trees were surveyed each year. A rapid method of sampling populations of emerald ash borer was developed by counting emerald ash borer emergence holes with binoculars and then felling trees to validate binocular counts. Approximately 25% of the trees surveyed for canopy thinning in 2005 and 2006 also were sampled for emerald ash borer emergence holes using binoculars. Regression analysis indicates that 41-53% of the variation in ash canopy thinning can be explained by the number of emerald ash borer emergence holes per tree. Emerald ash borer emergence holes were found at every site where ash canopy thinning averaged > 40%. In 2003, ash canopy thinning averaged 40% at a distance of 19.3 km from the epicenter of the emerald ash borer infestation in Canton. By 2006, the point at which ash trees averaged 40% canopy thinning had increased to a distance of 51.2 km away from Canton. Therefore, the point at which ash trees averaged 40% canopy thinning, a state of decline clearly visible to the average person, moved outward at a rate of 10.6 km/yr during this period.
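
    A quick arithmetic check of the spread rate quoted above, using the distances and years reported in the abstract:

    (51.2 km - 19.3 km) / (2006 - 2003) = 31.9 km / 3 yr ≈ 10.6 km/yr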

  2. Allometric scaling of UK urban emissions: interpretation and implications for air quality management

    NASA Astrophysics Data System (ADS)

    MacKenzie, Rob; Barnes, Matt; Whyatt, Duncan; Hewitt, Nick

    2016-04-01

    Allometry uncovers structures and patterns by relating the characteristics of complex systems to a measure of scale. We present an allometric analysis of air quality for UK urban settlements, beginning with emissions and moving on to consider air concentrations. We consider both airshed-average 'urban background' concentrations (cf. those derived from satellites for NO2) and local pollution 'hotspots'. We show that there is a strong and robust scaling (with respect to population) of the non-point-source emissions of the greenhouse gases carbon dioxide and methane, as well as the toxic pollutants nitrogen dioxide, PM2.5, and 1,3-butadiene. The scaling of traffic-related emissions is not simply a reflection of road length, but rather results from the socio-economic patterning of road-use. The recent controversy regarding diesel vehicle emissions is germane to our study but does not affect our overall conclusions. We next develop an hypothesis for the population-scaling of airshed-average air concentrations, with which we demonstrate that, although average air quality is expected to be worse in large urban centres compared to small urban centres, the overall effect is an economy of scale (i.e., large cities reduce the overall burden of emissions compared to the same population spread over many smaller urban settlements). Our hypothesis explains satellite-derived observations of airshed-average urban NO2 concentrations. The theory derived also explains which properties of nature-based solutions (urban greening) can make a significant contribution at city scale, and points to a hitherto unforeseen opportunity to make large cities cleaner than smaller cities in absolute terms with respect to their airshed-average pollutant concentration.

  3. Maps of averaged spectral deviations from soil lines and their comparison with traditional soil maps

    NASA Astrophysics Data System (ADS)

    Rukhovich, D. I.; Rukhovich, A. D.; Rukhovich, D. D.; Simakova, M. S.; Kulyanitsa, A. L.; Bryzzhev, A. V.; Koroleva, P. V.

    2016-07-01

    The analysis of 34 cloudless fragments of Landsat 5, 7, and 8 images (1985-2014) covering the Plavsk, Arsen'evsk, and Chern districts of Tula oblast has been performed. It is shown that the bare soil surface on the RED-NIR plots derived from the images cannot be described as a sector of the spectral plane, as can be done for NDVI values. The notion of the spectral neighborhood of the soil line (SNSL) is suggested; it is defined as the set of points of the RED-NIR spectral space characterized by the spectral characteristics of the bare soil used for constructing soil lines. A way of separating the SNSL along the line of lowest point density in the RED-NIR spectral space is suggested; this line separates the bare soil surface from vegetating plants. The SNSL has been applied to construct a soil line (SL) for each of the 34 images and to delineate the bare soil surface on them. Distances from points with averaged RED-NIR coordinates to the SL have been calculated using a moving-window method; these distances are referred to as averaged spectral deviations (ASDs). The calculations were performed strictly within the SNSL areas. As a result, 34 maps of ASDs were created, containing ASD values for the 6036 grid points used in the study. An integral map of normalized ASD values was then built, taking into account the number of points participating in the calculation (i.e., lying in the SNSL) within the moving window. The integral map of ASD values has been compared with four traditional soil maps of the studied territory. It is shown that this integral map can be interpreted in terms of soil taxa: the areas of seven soil subtypes (soddy moderately podzolic, soddy slightly podzolic, light gray forest, gray forest, dark gray forest, podzolized chernozems, and leached chernozems) belonging to three soil types (soddy-podzolic, gray forest, and chernozemic soils) can be delineated on it.
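
    A simplified sketch of the deviation-from-soil-line idea: fit a soil line in RED-NIR space over bare-soil pixels, take the perpendicular distance of each pixel from that line, and average the distances in windows. The least-squares fit, the window handling and the synthetic reflectances below are assumptions for illustration, not the authors' exact procedure.

    import numpy as np

    def soil_line_deviations(red, nir):
        """Fit a soil line nir = a*red + b over bare-soil pixels and return the
        perpendicular distance of every pixel from that line (a simplified
        stand-in for the averaged spectral deviations, ASD)."""
        a, b = np.polyfit(red, nir, 1)
        return np.abs(a * red - nir + b) / np.hypot(a, 1.0)

    def moving_window_mean(values, window):
        """Average the deviations in non-overlapping windows of `window` pixels."""
        n = len(values) // window * window
        return values[:n].reshape(-1, window).mean(axis=1)

    rng = np.random.default_rng(2)
    red = rng.uniform(0.05, 0.30, 600)                   # hypothetical RED reflectances
    nir = 1.2 * red + 0.05 + rng.normal(0, 0.01, 600)    # points scattered about a soil line
    asd = soil_line_deviations(red, nir)
    print(moving_window_mean(asd, 100))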

  4. What is new about covered interest parity condition in the European Union? Evidence from fractal cross-correlation regressions

    NASA Astrophysics Data System (ADS)

    Ferreira, Paulo; Kristoufek, Ladislav

    2017-11-01

    We analyse the covered interest parity (CIP) using two novel regression frameworks based on cross-correlation analysis (detrended cross-correlation analysis and detrending moving-average cross-correlation analysis), which allow the relationships to be studied at different scales and work well under non-stationarity and heavy tails. CIP is a measure of capital mobility commonly used to analyse financial integration, which remains an interesting subject of study in the context of the European Union. The importance of this feature stems from the fact that the adoption of a common currency brings some benefits for countries but also involves some risks, such as the loss of economic instruments with which to face possible asymmetric shocks. While studying the Eurozone members could explain some problems in the common currency, studying the non-Euro countries is important to assess whether they are fit to reap the possible benefits. Our results point to the CIP being verified mainly in the Central European countries, while in the remaining countries the verification of the parity is only residual.
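
    A simplified illustration of the moving-average detrending step behind DMCA-type cross-correlation: both series are detrended by subtracting a centred moving average at a given window length and the residuals are correlated. This is an illustrative reduction of the approach, not the authors' exact estimator; the surrogate series below are synthetic.

    import numpy as np

    def dmca_coefficient(x, y, window):
        """Detrending moving-average cross-correlation coefficient (simplified):
        remove a centred moving-average trend of length `window` from both
        series and correlate the residuals."""
        kernel = np.ones(window) / window
        trend_x = np.convolve(x, kernel, mode="same")
        trend_y = np.convolve(y, kernel, mode="same")
        rx, ry = x - trend_x, y - trend_y
        rx, ry = rx[window:-window], ry[window:-window]    # drop edge effects
        return np.sum(rx * ry) / np.sqrt(np.sum(rx**2) * np.sum(ry**2))

    rng = np.random.default_rng(3)
    common = np.cumsum(rng.standard_normal(5000))          # shared non-stationary component
    x = common + rng.standard_normal(5000)
    y = common + rng.standard_normal(5000)
    print([round(dmca_coefficient(x, y, w), 2) for w in (10, 50, 250)])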

  5. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Texas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Texas. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  6. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Minnesota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Minnesota. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  7. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Indiana

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Indiana. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  8. A Study on Analysis of EEG Caused by Grating Stimulation Imaging

    NASA Astrophysics Data System (ADS)

    Urakawa, Hiroshi; Nishimura, Toshihiro; Tsubai, Masayoshi; Itoh, Kenji

    Recently, many researchers have studied visual perception, with particular attention to perception phenomena elicited by grating stimulation images. Previous research has suggested that a subset of retinal ganglion cells responds to motion in the receptive-field center, but only if the wider surround moves with a different trajectory. We discuss the function of the human retina and measure and analyse the EEG (electroencephalography) of a normal subject viewing grating stimulation images. We confirmed the subject's visual perception through EEG signal analysis. We also found that, when a sinusoidal grating stimulus was presented, asymmetry was observed in the α-wave component of the EEG between symmetric sites of the left and right hemispheres of the brain. It is therefore presumed that the image projected onto the retinas of the right and left eyes is even when a still picture is viewed but uneven for a dynamic scene. This was evaluated by taking the envelope curve of the detected α wave and using its average and standard deviation.
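
    One common way to obtain an α-wave envelope and summarize it with its mean and standard deviation is band-pass filtering to the alpha band followed by a Hilbert-transform envelope; this specific pipeline is an assumption here, not the procedure stated by the authors, and the synthetic EEG trace is illustrative only.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def alpha_envelope_stats(eeg, fs):
        """Band-pass the EEG to the alpha band (8-13 Hz), take the analytic-signal
        envelope via the Hilbert transform, and return its mean and std."""
        b, a = butter(4, [8.0, 13.0], btype="bandpass", fs=fs)
        alpha = filtfilt(b, a, eeg)
        envelope = np.abs(hilbert(alpha))
        return envelope.mean(), envelope.std()

    fs = 250.0
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(4)
    eeg = np.sin(2 * np.pi * 10 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.2 * t)) \
          + 0.5 * rng.standard_normal(t.size)     # synthetic 10 Hz rhythm plus noise
    print(alpha_envelope_stats(eeg, fs))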

  9. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Florida. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  10. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Maine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Maine. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  11. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Vermont

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Vermont. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  12. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. Monthly variation of water quality standards was used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. The predictive model is found to be useful at the 95 % confidence limits; the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. The predicted series is close to the original series, which indicates a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
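
    A minimal sketch of ARIMA forecasting with confidence limits, of the kind described above, using statsmodels; the monthly series, the (1,1,1) order and the parameter name are hypothetical stand-ins for the paper's actual data and model selection.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical monthly series of a water-quality parameter (e.g., BOD, mg/L)
    rng = np.random.default_rng(5)
    y = pd.Series(5 + 0.02 * np.arange(120) + rng.normal(0, 0.3, 120),
                  index=pd.date_range("2004-01-01", periods=120, freq="MS"))

    model = ARIMA(y, order=(1, 1, 1)).fit()        # autoregressive integrated moving average
    forecast = model.get_forecast(steps=12)
    print(forecast.predicted_mean.round(2))        # point forecasts for the next 12 months
    print(forecast.conf_int(alpha=0.05).round(2))  # 95 % confidence limits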

  13. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Michigan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Michigan. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  14. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Alabama

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Alabama. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  15. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of New Hampshire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of New Hampshire. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  16. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of New Mexico. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  17. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Colorado. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  18. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Washington

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Washington. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  19. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Montana

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Montana. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  20. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the District of Columbia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the District of Columbia. The table below shows the district-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  1. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Massachusetts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Massachusetts. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  2. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Oregon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Oregon. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  3. Geospatial Analysis of Grey Wolf Movement Patterns

    NASA Astrophysics Data System (ADS)

    Sur, D.

    2017-12-01

    The grey wolf is a top predator that lives across a diverse habitat, ranging from Europe to North America. They often hunt in packs, preferring caribou, deer and elk as prey. Currently, many gray wolves live in Denali National Park and Preserve. In this study, several wolf packs were studied in three distinct regions of Denali. The purpose of my research was to investigate the links between wolf habitat, movement patterns, and prey thresholds. These are needed for projecting the future population, growth and distribution of wolves in the studied region. I also investigated the effect wolves have on the ecological structure of the communities they inhabit. In the study I carried out a quantitative analysis of wolf population trends and daily distance moved by utilizing an analysis of variance (ANOVA) in the program JmpPro12 (SAS Institute, Cary, NC) to assess regional differences in pack size, wolf density, and average daily distance moved. I found a clear link between wolf habitat and prey thresholds; the habitat directly influences the types of prey available. However, there was no link between daily distance moved, wolf habitat and prey density.
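
    A minimal one-way ANOVA sketch of the regional comparison described above, using scipy as a stand-in for the JMP analysis; the daily distances below are hypothetical and illustrative only.

    from scipy import stats

    # Hypothetical daily distances moved (km) for packs in three regions of the park
    region_a = [12.1, 9.8, 14.3, 11.0, 13.5]
    region_b = [10.2, 8.9, 11.7, 9.5, 10.8]
    region_c = [15.0, 13.2, 16.4, 14.1, 12.9]

    f_stat, p_value = stats.f_oneway(region_a, region_b, region_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 would indicate a regional difference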

  4. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Wisconsin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Wisconsin. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  5. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Ohio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Ohio. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  6. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of South Carolina

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of South Carolina. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  7. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of North Carolina

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of North Carolina. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  8. Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Iowa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong

    2015-12-01

    Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Iowa. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs, without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.

  9. Reducing the legal blood alcohol concentration limit for driving in developing countries: a time for change? Results and implications derived from a time-series analysis (2001-10) conducted in Brazil.

    PubMed

    Andreuccetti, Gabriel; Carvalho, Heraclito B; Cherpitel, Cheryl J; Ye, Yu; Ponce, Julio C; Kahn, Tulio; Leyton, Vilma

    2011-12-01

    In Brazil, a new law introduced in 2008 lowered the blood alcohol concentration limit for drivers from 0.06 to 0.02, but its effectiveness in reducing traffic accidents remained uncertain. This study evaluated the effects of this enactment on road traffic injuries and fatalities using a time-series analysis with autoregressive integrated moving average (ARIMA) modelling of data from the State and the capital of São Paulo, Brazil. A total of 1,471,087 non-fatal and 51,561 fatal road traffic accident cases in the two regions were analysed as monthly rates of traffic injuries and fatalities per 100,000 inhabitants from January 2001 to June 2010. The new traffic law was responsible for significant reductions in traffic injury and fatality rates in both localities (P<0.05). A stronger effect was observed for traffic fatality rates (-7.2 and -16.0% in the average monthly rate in the State and the capital, respectively) than for traffic injury rates (-1.8 and -2.3% in the State and the capital, respectively). Lowering the blood alcohol concentration limit in Brazil had a greater impact on traffic fatalities than injuries, with a stronger effect in the capital, where presumably police enforcement was enhanced. © 2011 The Authors, Addiction © 2011 Society for the Study of Addiction.

  10. Temporal and long-term trend analysis of class C notifiable diseases in China from 2009 to 2014

    PubMed Central

    Zhang, Xingyu; Hou, Fengsu; Qiao, Zhijiao; Li, Xiaosong; Zhou, Lijun; Liu, Yuanyuan; Zhang, Tao

    2016-01-01

    Objectives: Time series models are effective tools for disease forecasting. This study aims to explore the time series behaviour of 11 notifiable diseases in China and to predict their incidence through effective models. Settings and participants: The Chinese Ministry of Health started to publish class C notifiable diseases in 2009. The monthly reported case time series of 11 infectious diseases from the surveillance system between 2009 and 2014 was collected. Methods: We performed a descriptive and a time series study using the surveillance data. Decomposition methods were used to explore (1) their seasonality, expressed in the form of seasonal indices, and (2) their long-term trend, in the form of a linear regression model. Autoregressive integrated moving average (ARIMA) models have been established for each disease. Results: The number of cases and deaths caused by hand, foot and mouth disease ranks first among the detected diseases. It occurred most often in May and July and increased, on average, by 0.14126/100 000 per month. The remaining incidence models show good fit except the influenza and hydatid disease models. Both the hydatid disease and influenza series become white noise after differencing, so no suitable ARIMA model could be fitted for these two diseases. Conclusion: Time series analysis of effective surveillance time series is useful for better understanding the occurrence of the 11 types of infectious disease. PMID:27797981

  11. Research on measurement method of optical camouflage effect of moving object

    NASA Astrophysics Data System (ADS)

    Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen

    2016-10-01

    Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment against tactical and technical requirements. Current camouflage effectiveness measurements in the optical band are aimed mainly at static targets and therefore cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines dynamic object detection with camouflage effect detection, taking the digital camouflage of a moving object as the research object. The adaptive background update algorithm of Surendra was improved, and a method for detecting the optical camouflage effect using the Lab colour space during moving-object detection is presented. The binary image of the moving object is extracted, and in the sequence diagram the characteristic parameters, such as the degree of dispersion, eccentricity, complexity and moment invariants, are used to construct the feature vector space. The Euclidean distance of the moving target with digital camouflage was calculated; the average Euclidean distance over 375 frames was 189.45, indicating that the degree of dispersion, eccentricity, complexity and moment invariants of the digital camouflage pattern differ greatly from those of the moving target without digital camouflage. The measurement results show that the camouflage effect was good. Meanwhile, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some fluctuation, reflecting the adaptability of the target to the background under dynamic conditions. In view of existing infrared camouflage technology, the next step is to develop camouflage effect measurement for moving targets in the infrared band.
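
    The core of the detection step, a Surendra-style adaptive background update, is essentially a selective running average: the background is refreshed only at pixels not flagged as moving. The sketch below illustrates that idea on grayscale frames; the learning rate and threshold are assumed values, and it is not the authors' improved algorithm or their Lab-colour-space variant.

```python
# Surendra-style adaptive background update: a moving average that is frozen
# wherever motion is detected. Parameter values are illustrative assumptions.
import numpy as np

def update_background(background, frame, alpha=0.05, thresh=15.0):
    """Return (new_background, motion_mask) for one grayscale frame."""
    diff = np.abs(frame.astype(float) - background)
    motion = diff > thresh                                   # binary moving-object mask
    new_bg = np.where(motion,
                      background,                            # freeze background under moving pixels
                      (1.0 - alpha) * background + alpha * frame)
    return new_bg, motion

# Typical use: initialise the background with the first frame, then call
# update_background(...) for each subsequent frame of the sequence.
```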

  12. [Metrological analysis of measuring systems in testing an anticipatory reaction to the position of a moving object].

    PubMed

    Aksiuta, E F; Ostashev, A V; Sergeev, E V; Aksiuta, V E

    1997-01-01

    The methods of information (entropy) error theory were used to perform a metrological analysis of well-known commercial measuring systems for timing an anticipatory reaction (AR) to the position of a moving object, systems based on electromechanical, gas-discharge, and electronic principles. The required measurement accuracy was found to be achieved only by the systems based on the electronic principle of moving-object simulation and AR measurement.

  13. Analysis of electrochemical noise (ECN) data in time and frequency domain for comparison corrosion inhibition of some azole compounds on Cu in 1.0 M H2SO4 solution

    NASA Astrophysics Data System (ADS)

    Ramezanzadeh, B.; Arman, S. Y.; Mehdipour, M.; Markhali, B. P.

    2014-01-01

    In this study, the corrosion inhibition properties of two similar heterocyclic compounds, namely the benzotriazole (BTA) and benzothiazole (BNS) inhibitors, on copper in 1.0 M H2SO4 solution were studied by electrochemical techniques as well as surface analysis. The results showed that the corrosion inhibition of copper depends largely on the molecular structure and concentration of the inhibitors. The effect of the DC trend on the interpretation of electrochemical noise (ECN) results in the time domain was evaluated by the moving average removal (MAR) method. Likewise, the impact of square and Hanning window functions as drift removal methods in the frequency domain was studied. After DC trend removal, good agreement was observed between the electrochemical noise (ECN) data and the results obtained from EIS and potentiodynamic polarization. Furthermore, the shot noise theory was applied in the frequency domain to estimate the charge of each electrochemical event (q) from the potential and current noise signals.
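
    Moving average removal itself is a one-line detrending filter: the slow DC drift is estimated by a moving average and subtracted from the noise record. A minimal sketch, with an assumed window length, is shown below; it is not the authors' full processing chain.

```python
# Moving average removal (MAR) as a DC-trend filter for a noise record.
# The window length is an assumed value; edge samples are only partially averaged.
import numpy as np

def moving_average_removal(signal, window=51):
    """Subtract a centered moving average to remove the slow DC drift."""
    kernel = np.ones(window) / window
    trend = np.convolve(signal, kernel, mode="same")
    return signal - trend
```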

  14. Hospitalization for primary care susceptible conditions, health spending and Family Health Strategy: an analysis of trends.

    PubMed

    Morimoto, Tissiani; Costa, Juvenal Soares Dias da

    2017-03-01

    The goal of this study was to analyze the trend over time of hospitalizations due to conditions susceptible to primary healthcare (HCSPC) and how it relates to healthcare spending and Family Health Strategy (FHS) coverage in the city of São Leopoldo, Rio Grande do Sul State, Brazil, between 2003 and 2012. This is an ecological time-trend study. We used secondary data available in the Unified Healthcare System Hospital Data System, the Primary Care Department and the Public Health Budget Data System. The analysis examined HCSPC using three-year moving averages and Poisson or negative binomial regressions. We found no statistically significant decrease in HCSPC indicators or in primary care spending over the period analyzed. Overall healthcare spending, per-capita spending and FHS coverage increased significantly, but we found no correlation with HCSPC. The results show that, despite increases in the funds invested and in the population covered by the FHS, these are still insufficient to deliver the level of care the population requires.
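
    The trend analysis described above pairs a three-year centered moving average with a count regression. The sketch below shows one way to do this with statsmodels; the admission counts and the Poisson specification are illustrative assumptions, not the study's data or final model.

```python
# Three-year centered moving average of annual counts plus a Poisson trend
# regression. Counts are made-up illustrative values.
import numpy as np
import pandas as pd
import statsmodels.api as sm

years = np.arange(2003, 2013)
admissions = np.array([410, 395, 402, 380, 376, 370, 365, 372, 358, 350])  # assumed counts

smoothed = pd.Series(admissions, index=years).rolling(3, center=True).mean()
print(smoothed)

X = sm.add_constant(years - years.min())                 # intercept + linear year term
trend_fit = sm.GLM(admissions, X, family=sm.families.Poisson()).fit()
print(trend_fit.params)                                   # negative slope = declining trend (log scale)
```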

  15. Do smoke-free laws affect revenues in pubs and restaurants?

    PubMed

    Melberg, Hans Olav; Lund, Karl E

    2012-02-01

    In the debate about laws regulating smoking in restaurants and pubs, there has been some controversy as to whether smoke-free laws would reduce revenues in the hospitality industry. Norway presents an interesting case for three reasons. First, it was among the first countries to implement smoke-free laws, so it is possible to assess the long-term effects. Second, it has a cold climate, so if there is a negative effect on revenue one would expect to find it in Norway. Third, the data from Norway are detailed enough to distinguish between revenue from pubs and restaurants. Autoregressive integrated moving average (ARIMA) intervention analysis of bi-monthly observations of revenues in restaurants and pubs shows that the law did not have a statistically significant long-term effect on revenue in restaurants or on restaurant revenue as a share of personal consumption. A similar analysis for pubs shows that there was no significant long-run effect on pub revenue.

  16. The chronnectome: time-varying connectivity networks as the next frontier in fMRI data discovery.

    PubMed

    Calhoun, Vince D; Miller, Robyn; Pearlson, Godfrey; Adalı, Tulay

    2014-10-22

    Recent years have witnessed a rapid growth of interest in moving functional magnetic resonance imaging (fMRI) beyond simple scan-length averages and into approaches that capture time-varying properties of connectivity. In this Perspective we use the term "chronnectome" to describe metrics that allow a dynamic view of coupling. In the chronnectome, coupling refers to possibly time-varying levels of correlated or mutually informed activity between brain regions whose spatial properties may also be temporally evolving. We primarily focus on multivariate approaches developed in our group and review a number of approaches with an emphasis on matrix decompositions such as principal component analysis and independent component analysis. We also discuss the potential these approaches offer to improve characterization and understanding of brain function. There are a number of methodological directions that need to be developed further, but chronnectome approaches already show great promise for the study of both the healthy and the diseased brain.
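
    The simplest entry point to time-varying connectivity of this kind is a sliding-window correlation between two region time courses. The sketch below is that baseline only; it is not the multivariate decomposition approaches the authors focus on, and the window length is an assumption.

```python
# Sliding-window Pearson correlation between two regional time courses,
# a baseline time-varying connectivity estimate. Window length is assumed.
import numpy as np

def sliding_window_corr(x, y, window=30):
    """Correlation of x and y within each sliding window (length len(x) - window + 1)."""
    return np.array([np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
                     for i in range(len(x) - window + 1)])
```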

  17. Multifractal detrended cross-correlation between the Chinese domestic and international gold markets based on DCCA and DMCA methods

    NASA Astrophysics Data System (ADS)

    Cao, Guangxi; Han, Yan; Chen, Yuemeng; Yang, Chunxia

    2014-05-01

    Based on the daily price data of the Shanghai and London gold spot markets, we applied detrended cross-correlation analysis (DCCA) and detrended moving average cross-correlation analysis (DMCA) methods to quantify the power-law cross-correlation between the domestic and international gold markets. Results show that the cross-correlations between the Chinese domestic and international gold spot markets are multifractal. Furthermore, forward DMCA and backward DMCA seem to outperform DCCA and centered DMCA for short-range gold series, which confirms the comparison results for short-range artificial data in L. Y. He and S. P. Chen [Physica A 390 (2011) 3806-3814]. Finally, we analyzed the local multifractal characteristics of the cross-correlation between the Chinese domestic and international gold markets. We show that the multifractal characteristics of this cross-correlation are time-varying and that they were strengthened by the financial crisis of 2007-2008.
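
    The building block shared by DCCA and DMCA is a detrended covariance fluctuation function whose log-log slope against the box size gives the cross-correlation exponent. The sketch below implements plain DCCA with linear detrending; the box sizes and detrending order are assumptions, and the multifractal and moving-average (DMCA) variants used in the paper are not reproduced here.

```python
# Plain DCCA fluctuation function for two series at one box size n.
# Linear detrending per box; box sizes are chosen by the user.
import numpy as np

def dcca_fluctuation(x, y, n):
    """Detrended cross-correlation fluctuation F(n) for box size n."""
    X = np.cumsum(x - np.mean(x))                 # profiles of the two series
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(n)
    covs = []
    for b in range(len(X) // n):
        seg = slice(b * n, (b + 1) * n)
        X_res = X[seg] - np.polyval(np.polyfit(t, X[seg], 1), t)
        Y_res = Y[seg] - np.polyval(np.polyfit(t, Y[seg], 1), t)
        covs.append(np.mean(X_res * Y_res))
    return np.sqrt(np.abs(np.mean(covs)))

# The slope of log F(n) versus log n over a range of n estimates the scaling exponent.
```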

  18. Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements

    USGS Publications Warehouse

    Parsons, D.R.; Jackson, P.R.; Czuba, J.A.; Engel, F.L.; Rhoads, B.L.; Oberg, K.A.; Best, J.L.; Mueller, D.S.; Johnson, K.K.; Riley, J.D.

    2013-01-01

    The use of acoustic Doppler current profilers (ADCP) for discharge measurements and three-dimensional flow mapping has increased rapidly in recent years and has been primarily driven by advances in acoustic technology and signal processing. Recent research has developed a variety of methods for processing data obtained from a range of ADCP deployments and this paper builds on this progress by describing new software for processing and visualizing ADCP data collected along transects in rivers or other bodies of water. The new utility, the Velocity Mapping Toolbox (VMT), allows rapid processing (vector rotation, projection, averaging and smoothing), visualization (planform and cross-section vector and contouring), and analysis of a range of ADCP-derived datasets. The paper documents the data processing routines in the toolbox and presents a set of diverse examples that demonstrate its capabilities. The toolbox is applicable to the analysis of ADCP data collected in a wide range of aquatic environments and is made available as open-source code along with this publication.
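
    VMT itself is distributed as MATLAB source, but the kind of vector rotation it performs can be illustrated compactly. The sketch below projects east/north velocity components onto along- and across-section axes for an assumed section azimuth; the sign convention is an assumption, and this is not VMT code.

```python
# Rotate east/north (ENU) velocity components into along- and across-section
# components for a cross-section of assumed azimuth. Not VMT code; the sign
# convention here is an assumption.
import numpy as np

def rotate_to_section(v_east, v_north, section_azimuth_deg):
    """Project ENU velocities onto the section-parallel and section-normal axes."""
    theta = np.deg2rad(section_azimuth_deg)
    along = v_east * np.sin(theta) + v_north * np.cos(theta)
    across = v_east * np.cos(theta) - v_north * np.sin(theta)
    return along, across
```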

  19. Investigation of stickiness influence in the anomalous transport and diffusion for a non-dissipative Fermi-Ulam model

    NASA Astrophysics Data System (ADS)

    Livorati, André L. P.; Palmero, Matheus S.; Díaz-I, Gabriel; Dettmann, Carl P.; Caldas, Iberê L.; Leonel, Edson D.

    2018-02-01

    We study the dynamics of an ensemble of non-interacting particles constrained by two infinitely heavy walls, one moving periodically in time and the other fixed. The system presents mixed dynamics, where the accessible region in which a particle can diffuse chaotically is bordered by an invariant spanning curve. Statistical analysis of the root mean square velocity, considering high- and low-velocity ensembles, shows that the dynamics reach the same steady-state plateau at long times. A transport investigation of the dynamics via escape basins reveals that, depending on the initial velocity ensemble, the decay of the survival probability presents different shapes and bumps, in a mix of exponential, power-law and stretched-exponential decays. After an analysis of step-size averages, we found that the stable manifolds play the role of a preferential path for faster escape, being responsible for the bumps and different shapes of the survival probability.
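
    The root-mean-square velocity statistic described above can be illustrated with a simplified (static-wall) Fermi-Ulam map. The map form below is the common textbook simplification, and the parameter, ensemble size and initial velocities are assumptions; it is not the exact model or parameter set of the paper.

```python
# Ensemble root-mean-square velocity for a simplified Fermi-Ulam map
# (static-wall approximation). Map form and parameters are assumed.
import numpy as np

def rms_velocity(v0, eps=1e-3, n_iter=10_000, n_particles=500, seed=2):
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0, 2 * np.pi, n_particles)       # random initial phases
    v = np.full(n_particles, v0, dtype=float)
    vrms = np.empty(n_iter)
    for k in range(n_iter):
        phi = (phi + 2.0 / v) % (2 * np.pi)             # phase advance between collisions
        v = np.abs(v - 2.0 * eps * np.sin(phi))         # velocity change at the moving wall
        vrms[k] = np.sqrt(np.mean(v ** 2))
    return vrms

# Low- and high-velocity ensembles (e.g. v0 = 1e-2 and v0 = 1e-1) are expected to
# converge to the same long-time plateau, as described in the abstract.
```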

  20. NEW SUNS IN THE COSMOS. III. MULTIFRACTAL SIGNATURE ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitas, D. B. de; Nepomuceno, M. M. F.; Junior, P. R. V. de Moraes

    2016-11-01

    In the present paper, we investigate the multifractality signatures in hourly time series extracted from the CoRoT spacecraft database. Our analysis is intended to highlight the possibility that astrophysical time series can be members of a particular class of complex and dynamic processes, which require several photometric variability diagnostics to characterize their structural and topological properties. To achieve this goal, we search for contributions due to a nonlinear temporal correlation and effects caused by heavier tails than the Gaussian distribution, using a detrending moving average algorithm for one-dimensional multifractal signals (MFDMA). We observe that the correlation structure is the main source of multifractality, while the heavy-tailed distribution plays a minor role in generating the multifractal effects. Our work also reveals that the rotation period of stars is inherently scaled by the degree of multifractality. As a result, analyzing the multifractal degree of the referred series, we uncover an evolution of multifractality from shorter to larger periods.
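
    The q-th order fluctuation function at the heart of a detrending moving average analysis can be sketched briefly. The version below uses a backward moving average and leaves the choice of q values to the user; it is an assumption-laden outline, not the authors' full MFDMA pipeline.

```python
# q-th order fluctuation function for a detrending moving average (MFDMA-style)
# analysis of a one-dimensional signal. The backward moving average is assumed.
import numpy as np

def mfdma_fluctuation(x, n, q):
    """Fluctuation F_q(n) using a backward moving average of window n."""
    y = np.cumsum(x - np.mean(x))                              # profile of the signal
    kernel = np.ones(n) / n
    trend = np.convolve(y, kernel, mode="full")[:len(y)]       # backward moving average
    resid = (y - trend)[n - 1:]                                 # drop the start-up region
    f2 = np.array([np.mean(resid[v * n:(v + 1) * n] ** 2)      # variance per segment
                   for v in range(len(resid) // n)])
    if q == 0:
        return np.exp(0.5 * np.mean(np.log(f2)))
    return np.mean(f2 ** (q / 2.0)) ** (1.0 / q)

# The generalized Hurst exponent h(q) is the slope of log F_q(n) against log n;
# its dependence on q indicates multifractality.
```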
