ERIC Educational Resources Information Center
St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.
2014-01-01
Researchers are increasingly using comparative interrupted time series (CITS) designs to estimate the effects of programs and policies when randomized controlled trials are not feasible. In a simple interrupted time series design, researchers compare the pre-treatment values of a treatment group time series to post-treatment values in order to…
Highly comparative time-series analysis: the empirical structure of time series and their methods.
Fulcher, Ben D; Little, Max A; Jones, Nick S
2013-06-06
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
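The reduced-representation idea described above can be illustrated in miniature: represent each series by a small feature vector and retrieve similar series by distance in feature space. This is a toy sketch of the highly comparative approach, not the authors' pipeline of over 9000 methods; the three features and the series library below are illustrative choices.

```python
import numpy as np

def features(x):
    """Map a time series to a small feature vector:
    standard deviation, lag-1 autocorrelation, mean absolute increment."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    lag1 = np.corrcoef(z[:-1], z[1:])[0, 1]
    return np.array([x.std(), lag1, np.abs(np.diff(z)).mean()])

def nearest(query, library):
    """Return the key of the library series closest to `query` in feature space."""
    qf = features(query)
    return min(library, key=lambda k: np.linalg.norm(features(library[k]) - qf))

rng = np.random.default_rng(0)
t = np.arange(500)
library = {
    "noise": rng.normal(size=500),
    "sine": np.sin(0.1 * t),
    "walk": np.cumsum(rng.normal(size=500)),
}
# A noisy sine should be organized next to the clean sine, not the noise or the walk.
query = np.sin(0.1 * t) + 0.1 * rng.normal(size=500)
print(nearest(query, library))
```

The same feature vectors can also drive clustering or feature selection for classification, which is the role the full method library plays in the paper.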
Regenerating time series from ordinal networks.
McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael
2017-03-01
Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
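A minimal sketch of the construction described above, under simplifying assumptions: ordinal patterns of length 3 are extracted from a logistic-map series, a transition network is built from consecutive patterns, and a weighted random walk regenerates a surrogate symbol sequence. Function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from collections import defaultdict

def ordinal_patterns(x, m=3):
    """Symbolize a series as the rank-order pattern of each embedded window."""
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

def build_ordinal_network(symbols):
    """Edge weights = observed transition counts between consecutive patterns."""
    net = defaultdict(lambda: defaultdict(int))
    for a, b in zip(symbols, symbols[1:]):
        net[a][b] += 1
    return net

def random_walk(net, start, steps, rng):
    """Regenerate a symbol sequence by a weighted random walk on the network."""
    seq, node = [start], start
    for _ in range(steps):
        nbrs = list(net[node])
        if not nbrs:                      # dead end (should not occur here)
            break
        w = np.array([net[node][n] for n in nbrs], dtype=float)
        node = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
        seq.append(node)
    return seq

# Chaotic source series: the logistic map at r = 4.
x = np.empty(2000); x[0] = 0.3
for i in range(1999):
    x[i + 1] = 4 * x[i] * (1 - x[i])

symbols = ordinal_patterns(x)
net = build_ordinal_network(symbols)
rng = np.random.default_rng(1)
surrogate = random_walk(net, symbols[0], 1000, rng)
# The walk can only take transitions seen in the original series, so ordinal
# transitions forbidden by the source dynamics stay forbidden in the surrogate.
observed = set(zip(symbols, symbols[1:]))
assert set(zip(surrogate, surrogate[1:])) <= observed
```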
An Energy-Based Similarity Measure for Time Series
NASA Astrophysics Data System (ADS)
Boudraa, Abdel-Ouahab; Cexus, Jean-Christophe; Groussat, Mathieu; Brunagel, Pierre
2007-12-01
A new similarity measure, called SimilB, for time series analysis is introduced, based on the cross-Ψ_B-energy operator (2004). Ψ_B is a nonlinear measure which quantifies the interaction between two time series. Compared to the Euclidean distance (ED) or the Pearson correlation coefficient (CC), SimilB includes the temporal information and relative changes of the time series using the first and second derivatives of the time series. SimilB is well suited for both nonstationary and stationary time series and particularly those presenting discontinuities. Some new properties of Ψ_B are presented. In particular, we show that Ψ_B as a similarity measure is robust to both scale and time shift. SimilB is illustrated with synthetic time series and an artificial dataset and compared to the CC and the ED measures.
Fulcher, Ben D; Jones, Nick S
2017-11-22
Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Quantifying memory in complex physiological time-series.
Shirazi, Amir H; Raoufy, Mohammad R; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R; Amodio, Piero; Jafari, G Reza; Montagnese, Sara; Mani, Ali R
2013-01-01
In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of "memory length" was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are 'forgotten' quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations.
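The idea that rare events in a series with memory do not occur at random times can be illustrated with a toy experiment (this is not the authors' inverse-statistics procedure): compare waiting times between threshold exceedances in an autocorrelated series and in its shuffle. In the correlated series the rare events cluster, which is the signature the memory-length analysis exploits.

```python
import numpy as np

def waiting_times(x, q=0.95):
    """Gaps between successive exceedances of the q-th quantile ('rare events')."""
    idx = np.flatnonzero(x > np.quantile(x, q))
    return np.diff(idx)

rng = np.random.default_rng(2)
n = 20000
# Strongly autocorrelated series: AR(1) with phi = 0.95.
e = rng.normal(size=n)
y = np.empty(n); y[0] = e[0]
for i in range(1, n):
    y[i] = 0.95 * y[i - 1] + e[i]

w_corr = waiting_times(y)
w_shuf = waiting_times(rng.permutation(y))  # memory-less reference

# With memory, rare events cluster: many gaps of exactly 1 sample.
frac_consecutive = lambda w: float(np.mean(w == 1))
print(frac_consecutive(w_corr), frac_consecutive(w_shuf))
```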
Process fault detection and nonlinear time series analysis for anomaly detection in safeguards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, T.L.; Mullen, M.F.; Wangen, L.E.
In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
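A simplified sketch of the residual-testing step described above, with made-up numbers standing in for the three-tank simulation: fault-free residual vectors are generated, a leak-like bias is injected, and both the univariate z-score test and the joint Mahalanobis-distance test are applied.

```python
import numpy as np

rng = np.random.default_rng(3)
# Fault-free residuals for three tank-level variables (illustrative covariance).
cov = np.array([[1.0, 0.3, 0.1],
                [0.3, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
normal = rng.normal(size=(500, 3)) @ np.linalg.cholesky(cov).T

mean = normal.mean(axis=0)
S = np.cov(normal.T)
Sinv = np.linalg.inv(S)

def z_scores(r):
    """Univariate test: standardize each residual component separately."""
    return (r - mean) / np.sqrt(np.diag(S))

def mahalanobis2(r):
    """Joint test: squared Mahalanobis distance of the whole residual vector."""
    d = r - mean
    return float(d @ Sinv @ d)

# A leak-like fault: a persistent bias on the third variable.
fault = np.array([0.5, -0.2, 4.0])
print(z_scores(fault), mahalanobis2(fault))
assert abs(z_scores(fault)[2]) > 3.0   # univariate test trips on the biased variable
assert mahalanobis2(fault) > 11.34     # exceeds the chi-square(3) 99% critical value
```

The joint test matters when a small diversion is spread over several correlated variables: no single z-score need be extreme for the Mahalanobis distance to flag the vector.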
A Method for Comparing Multivariate Time Series with Different Dimensions
Tapinos, Avraam; Mendes, Pedro
2013-01-01
In many situations it is desirable to compare dynamical systems based on their behavior. Similarity of behavior often implies similarity of internal mechanisms or dependency on common extrinsic factors. While there are widely used methods for comparing univariate time series, most dynamical systems are characterized by multivariate time series. Yet, comparison of multivariate time series has been limited to cases where they share a common dimensionality. A semi-metric is a distance function that has the properties of non-negativity, symmetry and reflexivity, but not sub-additivity. Here we develop a semi-metric – SMETS – that can be used for comparing groups of time series that may have different dimensions. To demonstrate its utility, the method is applied to dynamic models of biochemical networks and to portfolios of shares. The former is an example of a case where the dependencies between system variables are known, while in the latter the system is treated (and behaves) as a black box. PMID:23393554
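SMETS itself is not reproduced here, but a toy semi-metric with the same three properties (non-negativity, symmetry, and reflexivity, with no guaranteed sub-additivity) can be sketched for multivariate series with different numbers of variables; the matching rule below is an illustrative stand-in.

```python
import numpy as np

def h(A, B):
    """Mean, over component series of A, of the distance to the closest component of B."""
    return np.mean([min(np.linalg.norm(a - b) for b in B) for a in A])

def semi_metric(A, B):
    """Toy semi-metric between multivariate series with possibly different
    numbers of variables (rows). Symmetric, reflexive, non-negative; it need
    not satisfy the triangle inequality. Assumes equal series length (columns)."""
    return max(h(A, B), h(B, A))

t = np.linspace(0, 2 * np.pi, 200)
A = np.vstack([np.sin(t), np.cos(t)])                        # 2 variables
B = np.vstack([np.sin(t), np.cos(t), np.sin(t) + 0.1 * np.cos(5 * t)])  # 3 variables
C = np.vstack([np.sin(3 * t), np.exp(-t)])                   # 2 variables, different dynamics

assert semi_metric(A, A) == 0.0                # reflexivity
assert semi_metric(A, B) == semi_metric(B, A)  # symmetry
assert semi_metric(A, B) < semi_metric(A, C)   # similar systems score closer
```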
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-17
... comparable to the obligations proposed in this filing: [flattened table comparing continuous quoting obligations by % of time, % of series, and option classes for Market-Makers and PMMs under the current CBOE rule (e.g., quoting 99% of the time, proposed to be reduced to 90%, on a class-by-class basis)]
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
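The DTW core that DTW-S extends can be sketched as follows; the significance-simulation layer is omitted, and the median path offset below is only a crude stand-in for the paper's interpolated time-shift estimate.

```python
import numpy as np

def dtw_path(x, y):
    """Classic dynamic time warping: cumulative cost matrix plus backtracked path."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(x[i - 1] - y[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = [(i - 1, j - 1), (i - 1, j), (i, j - 1)][step]
    return D[n, m], path[::-1]

def time_shift(x, y):
    """Median offset j - i along the optimal warping path: a crude shift estimate."""
    _, path = dtw_path(x, y)
    return np.median([j - i for i, j in path])

t = np.linspace(0, 4 * np.pi, 80)
x = np.sin(t)
y = np.roll(np.sin(t), 5)   # same signal, delayed by 5 samples
print(time_shift(x, y))
```

DTW-S adds what this sketch lacks: interpolated time points, per-time-point shift estimates, and a simulation-based false positive rate for each shift.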
Second-degree Stokes coefficients from multi-satellite SLR
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Müller, Horst; Gerstl, Michael; Štefka, Vojtěch; Bouman, Johannes; Göttl, Franziska; Horwath, Martin
2015-09-01
The long-wavelength part of the Earth's gravity field can be determined, with varying accuracy, from satellite laser ranging (SLR). In this study, we investigate the combination of up to ten geodetic SLR satellites using iterative variance component estimation. SLR observations to different satellites are combined in order to identify the impact of each satellite on the estimated Stokes coefficients. The combination of satellite-specific weekly or monthly arcs reduces parameter correlations of the single-satellite solutions and leads to alternative estimates of the second-degree Stokes coefficients. These alternative time series might be helpful for assessing the uncertainty in the impact of the low-degree Stokes coefficients on geophysical investigations. To validate the obtained time series of second-degree Stokes coefficients, they are compared with the SLR RL05 time series of the Center for Space Research (CSR). This investigation shows that all time series are comparable to the CSR time series. The precision of the weekly/monthly coefficient estimates is analyzed by comparing mass-related equatorial excitation functions with geophysical model results and reduced geodetic excitation functions. For one of the coefficients, the annual amplitude and phase of the DGFI solution agree better with three of four geophysical model combinations than the other time series; for the other coefficient, all time series agree very well with each other. The impact on the ice mass trend estimates for Antarctica is compared based on CSR GRACE RL05 solutions in which different monthly time series are substituted. We found differences in the long-term Antarctic ice loss between the GRACE solutions induced by the different SLR time series of CSR and DGFI amounting to about 13% of the total ice loss of Antarctica. This result shows that quantifications of Antarctic ice mass loss must be interpreted carefully.
Luo, Yi; Zhang, Tao; Li, Xiao-song
2016-05-01
To explore the application of a fuzzy time series model based on fuzzy c-means clustering in forecasting the monthly incidence of Hepatitis E in mainland China. A predictive model (fuzzy time series method based on fuzzy c-means clustering) was developed using Hepatitis E incidence data in mainland China between January 2004 and July 2014. The incidence data from August 2014 to November 2014 were used to test the fitness of the predictive model. The forecasting results were compared with those resulting from traditional fuzzy time series models. The fuzzy time series model based on fuzzy c-means clustering had a mean squared error (MSE) of fitting of 0.0011 and an MSE of forecasting of 6.9775 × 10⁻⁴, compared with 0.0017 and 0.0014 from the traditional forecasting model. The results indicate that the fuzzy time series model based on fuzzy c-means clustering has a better performance in forecasting the incidence of Hepatitis E.
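A minimal first-order fuzzy time series forecaster in the Song-Chissom/Chen style can be sketched as below; equal-width intervals stand in for the paper's fuzzy c-means partition, and the seasonal series is synthetic, not the Hepatitis E data.

```python
import numpy as np

def fit_fts(x, k=7):
    """First-order fuzzy time series model (Chen-style): partition the range
    into k equal-width intervals and collect fuzzy logical relationship groups."""
    edges = np.linspace(x.min(), x.max(), k + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    label = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, k - 1)
    groups = {}  # A_i -> set of observed successor intervals
    for a, b in zip(label, label[1:]):
        groups.setdefault(a, set()).add(b)
    return mids, label, groups

def forecast_next(x, k=7):
    """Defuzzified one-step forecast: mean midpoint of the successor group."""
    mids, label, groups = fit_fts(x, k)
    succ = groups.get(label[-1], {label[-1]})
    return float(np.mean([mids[s] for s in sorted(succ)]))

# Illustrative seasonal incidence-like series (synthetic).
t = np.arange(120)
x = 10 + 3 * np.sin(2 * np.pi * t / 12)
pred = forecast_next(x)
print(pred)
```

Replacing the equal-width cut with intervals derived from fuzzy c-means cluster centers is the refinement the paper evaluates.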
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András
2015-04-01
For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, intelligent design concepts often use uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different precipitation modelling approaches regarding their ability to reproduce rainfall and runoff characteristics: one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event-based statistics such as mean dry spell and wet spell duration, wet spell amount and intensity, long-term means of precipitation sum and number of events, and extreme value distributions for different durations. The series are then compared with regard to simulated discharge characteristics, using an urban hydrological model on a fictitious sewage network. First results show that all rainfall models are in principle suitable, but with different strengths and weaknesses regarding the different rainfall and runoff characteristics considered.
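Event-based criteria of the kind listed above (mean dry and wet spell duration, wet spell amount) can be computed from a run-length decomposition of the series; the sketch below uses a toy 5-minute intensity series, not project data.

```python
import numpy as np

def spell_stats(p, wet_threshold=0.0):
    """Mean wet and dry spell durations (in time steps) and mean wet-spell depth;
    a spell is a maximal run of consecutive wet or dry steps."""
    wet = p > wet_threshold
    spells, start = [], 0              # list of (is_wet, length, total depth)
    for i in range(1, len(p) + 1):
        if i == len(p) or wet[i] != wet[start]:
            spells.append((bool(wet[start]), i - start, float(p[start:i].sum())))
            start = i
    wet_spells = [s for s in spells if s[0]]
    dry_spells = [s for s in spells if not s[0]]
    return {
        "mean_wet_duration": float(np.mean([s[1] for s in wet_spells])),
        "mean_dry_duration": float(np.mean([s[1] for s in dry_spells])),
        "mean_wet_amount": float(np.mean([s[2] for s in wet_spells])),
    }

# Toy 5-minute intensities: two wet events separated by dry periods.
p = np.array([0, 0, 0, 1.2, 0.8, 0, 0, 0, 0, 2.0, 1.0, 0.5, 0, 0])
stats = spell_stats(p)
print(stats)  # wet spells of 2 and 3 steps with depths 2.0 and 3.5
```

Computing the same statistics for each synthetic series and for the observed reference gives the event-based comparison described above.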
Forecasting Enrollments with Fuzzy Time Series.
ERIC Educational Resources Information Center
Song, Qiang; Chissom, Brad S.
The concept of fuzzy time series is introduced and used to forecast the enrollment of a university. Fuzzy time series, an aspect of fuzzy set theory, forecasts enrollment using a first-order time-invariant model. To evaluate the model, the conventional linear regression technique is applied and the predicted values obtained are compared to the…
Elbert, Yevgeniy; Burkom, Howard S
2009-11-20
This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
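A sketch of the general idea, not the paper's algorithm: additive Holt-Winters one-step forecasts drive a control chart that alerts when the forecast residual exceeds k times a robust running spread of past residuals. All parameters, the weekly season length, and the injected outbreak are illustrative.

```python
import numpy as np

def holt_winters_alerts(y, season=7, alpha=0.4, beta=0.0, gamma=0.15, k=4.0):
    """One-step-ahead additive Holt-Winters forecasts plus control-chart alerts:
    flag step t when the residual exceeds k times a MAD-based spread estimate."""
    level = float(np.mean(y[:season]))
    trend = 0.0
    seas = list(y[:season] - level)
    forecasts, residuals, alerts = [], [], []
    for t in range(season, len(y)):
        f = level + trend + seas[t % season]
        forecasts.append(f)
        r = y[t] - f
        spread = 1.4826 * np.median(np.abs(residuals)) if len(residuals) >= 14 else None
        alerts.append(spread is not None and spread > 0 and r > k * spread)
        residuals.append(r)
        last_level = level
        level = alpha * (y[t] - seas[t % season]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seas[t % season] = gamma * (y[t] - level) + (1 - gamma) * seas[t % season]
    return np.array(forecasts), alerts

# Synthetic daily syndromic counts with a weekly cycle, plus an injected outbreak.
rng = np.random.default_rng(4)
t = np.arange(200)
y = 50 + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 2, 200)
y[150:154] += 40                       # outbreak signal
forecasts, alerts = holt_winters_alerts(y)
print(sum(alerts))
```

Because the threshold adapts to each series' own residual spread, the same rule can run across data streams that differ widely in mean value and seasonal behavior, which is the setting the paper studies.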
Lara, Juan A; Lizcano, David; Pérez, Aurora; Valente, Juan P
2014-10-01
There are now domains where information is recorded over a period of time, leading to sequences of data known as time series. In many domains, like medicine, time series analysis requires to focus on certain regions of interest, known as events, rather than analyzing the whole time series. In this paper, we propose a framework for knowledge discovery in both one-dimensional and multidimensional time series containing events. We show how our approach can be used to classify medical time series by means of a process that identifies events in time series, generates time series reference models of representative events and compares two time series by analyzing the events they have in common. We have applied our framework on time series generated in the areas of electroencephalography (EEG) and stabilometry. Framework performance was evaluated in terms of classification accuracy, and the results confirmed that the proposed schema has potential for classifying EEG and stabilometric signals. The proposed framework is useful for discovering knowledge from medical time series containing events, such as stabilometric and electroencephalographic time series. These results would be equally applicable to other medical domains generating iconographic time series, such as, for example, electrocardiography (ECG). Copyright © 2014 Elsevier Inc. All rights reserved.
Empirical method to measure stochasticity and multifractality in nonlinear time series
NASA Astrophysics Data System (ADS)
Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping
2013-12-01
An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of the time series from a Wiener process so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Ito process, the emergent markets are far from efficient. Differences about the multifractal structures and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
G14A-06- Analysis of the DORIS, GNSS, SLR, VLBI and Gravimetric Time Series at the GGOS Core Sites
NASA Technical Reports Server (NTRS)
Moreaux, G.; Lemoine, F.; Luceri, V.; Pavlis, E.; MacMillan, D.; Bonvalot, S.; Saunier, J.
2017-01-01
Analysis of the time series at the 3-4 multi-technique GGOS core sites to compare the spectral content of the space geodetic and gravity time series, and to evaluate the level of agreement between the space geodesy measurements and the physical tie vectors.
Information and Complexity Measures Applied to Observed and Simulated Soil Moisture Time Series
USDA-ARS?s Scientific Manuscript database
Time series of soil moisture-related parameters provides important insights in functioning of soil water systems. Analysis of patterns within these time series has been used in several studies. The objective of this work was to compare patterns in observed and simulated soil moisture contents to u...
Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models
ERIC Educational Resources Information Center
Price, Larry R.
2012-01-01
The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…
NASA Astrophysics Data System (ADS)
Yang, Q.; Wang, Y.; Zhang, J.; Delgado, J.
2017-05-01
Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, three time series analysis methods, Holt-Winters (HW), integrated time series (ITS), and seasonal autoregressive integrated moving average (SARIMA), are explored to simulate the groundwater level in a coastal aquifer in China. The monthly groundwater table depth data, collected in a long time series from 2000 to 2011, are simulated and compared with those three time series models. The error criteria are estimated using the coefficient of determination (R²), the Nash-Sutcliffe model efficiency coefficient (E), and the root-mean-squared error (RMSE). The results indicate that all three models are accurate in reproducing the historical time series of groundwater levels. The comparison of the three models shows that the HW model is more accurate in predicting the groundwater levels than the SARIMA and ITS models. It is recommended that additional studies explore this proposed method, which can be used in turn to facilitate the development and implementation of more effective and sustainable groundwater management strategies.
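The three error criteria used above can be computed as follows; the observed and simulated water-table values are made-up illustrations, not the study's data.

```python
import numpy as np

def error_criteria(obs, sim):
    """Coefficient of determination (R^2), Nash-Sutcliffe efficiency (E),
    and root-mean-squared error (RMSE) between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return {"R2": r ** 2, "E": nse, "RMSE": rmse}

obs = np.array([2.1, 2.4, 2.9, 3.4, 3.1, 2.6, 2.2, 2.0])  # e.g. table depth (m)
sim = np.array([2.0, 2.5, 3.0, 3.3, 3.0, 2.7, 2.3, 2.1])
crit = error_criteria(obs, sim)
print(crit)
```

Note that E compares squared errors against the variance of the observations, so E = 1 is a perfect fit and E < 0 means the model predicts worse than the observed mean.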
Zhang, Yatao; Wei, Shoushui; Liu, Hai; Zhao, Lina; Liu, Chengyu
2016-09-01
The Lempel-Ziv (LZ) complexity and its variants have been extensively used to analyze the irregularity of physiological time series. To date, these measures cannot explicitly discern between the irregularity and the chaotic characteristics of physiological time series. Our study compared the performance of an encoding LZ (ELZ) complexity algorithm, a novel variant of the LZ complexity algorithm, with those of the classic LZ (CLZ) and multistate LZ (MLZ) complexity algorithms. Simulation experiments on Gaussian noise, logistic chaotic, and periodic time series showed that only the ELZ algorithm monotonically declined with the reduction in irregularity in time series, whereas the CLZ and MLZ approaches yielded overlapped values for chaotic time series and time series mixed with Gaussian noise, demonstrating the accuracy of the proposed ELZ algorithm in capturing the irregularity, rather than the complexity, of physiological time series. In addition, the effect of sequence length on the ELZ algorithm was more stable compared with those on CLZ and MLZ, especially when the sequence length was longer than 300. A sensitivity analysis for all three LZ algorithms revealed that both the MLZ and the ELZ algorithms could respond to the change in time sequences, whereas the CLZ approach could not. Cardiac interbeat (RR) interval time series from the MIT-BIH database were also evaluated, and the results showed that the ELZ algorithm could accurately measure the inherent irregularity of the RR interval time series, as indicated by lower LZ values yielded from a congestive heart failure group versus those yielded from a normal sinus rhythm group (p < 0.01). Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
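A sketch of the classic (CLZ-style) Lempel-Ziv complexity after median-threshold binarization; the encoding (ELZ) and multistate (MLZ) variants discussed above are not reproduced. An irregular series parses into more distinct phrases than a regular one.

```python
import numpy as np

def lz_complexity(s):
    """Lempel-Ziv (1976) complexity: number of distinct phrases in the
    left-to-right exhaustive parsing of symbol string s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the current phrase while it is reproducible from what came before
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def binarize(x):
    """Median-threshold symbolization applied before LZ parsing."""
    m = np.median(x)
    return "".join("1" if v > m else "0" for v in x)

rng = np.random.default_rng(5)
noise = rng.normal(size=1000)                        # irregular
periodic = np.sin(2 * np.pi * np.arange(1000) / 20)  # regular

c_noise = lz_complexity(binarize(noise))
c_periodic = lz_complexity(binarize(periodic))
print(c_noise, c_periodic)
```

A chaotic series binarized this way typically falls between these two extremes, which is exactly the regime where the paper argues the variants disagree.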
Volatility of linear and nonlinear time series
NASA Astrophysics Data System (ADS)
Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo
2005-07-01
Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, ui, can be detected and quantified by studying the correlations in the magnitude series ∣ui∣, the “volatility.” However, the origin of this empirical observation remains unclear, and the exact relation between the correlations in ui and the correlations in ∣ui∣ is still unknown. Here we develop analytical relations between the scaling exponent of a linear series ui and that of its magnitude series ∣ui∣. Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long-range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (representing the magnitude series) with an uncorrelated time series [representing the sign series sgn(ui)]. We apply our techniques to daily deep ocean temperature records from the equatorial Pacific, the region of the El-Niño phenomenon, and find: (i) long-range correlations from several days to several years with a 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) a broad multifractal spectrum.
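The generative model described above is easy to reproduce. The sketch below builds a long-range correlated magnitude series by Fourier filtering (power spectrum ~ 1/f^beta) and multiplies it by an uncorrelated random sign series; the exponent beta = 0.8 and the series length are illustrative assumptions, not values from the paper.

```python
import numpy as np

def correlated_noise(n, beta, rng):
    """Gaussian-like noise with power spectrum ~ 1/f**beta, generated by
    assigning power-law amplitudes and random phases in Fourier space."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2)          # zero out the DC component
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    spectrum = amp * np.exp(1j * phases)
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(1)
n = 4096
magnitude = np.abs(correlated_noise(n, 0.8, rng))   # long-range correlated |u_i|
signs = rng.choice([-1.0, 1.0], n)                  # uncorrelated sgn(u_i)
series = magnitude * signs                          # multifractal-style series
```

The long-range correlations live entirely in `magnitude`; the `signs` factor destroys the two-point linear correlations while preserving the volatility structure.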
Challenges to validity in single-group interrupted time series analysis.
Linden, Ariel
2017-04-01
Single-group interrupted time series analysis (ITSA) is a popular evaluation methodology in which a single unit of observation is studied; the outcome variable is serially ordered as a time series, and the intervention is expected to "interrupt" the level and/or trend of the time series subsequent to its introduction. The most common threat to validity is history, the possibility that some other event caused the observed effect in the time series. Although history limits the ability to draw causal inferences from single ITSA models, it can be controlled for by using a comparable control group to serve as the counterfactual. Time series data from 2 natural experiments (the effect of Florida's 2000 repeal of its motorcycle helmet law on motorcycle fatalities, and California's 1988 Proposition 99 to reduce cigarette sales) are used to illustrate how history biases the results of single-group ITSA, as opposed to when that group's results are contrasted with those of a comparable control group. In the first example, an external event occurring at the same time as the helmet repeal appeared to be the cause of a rise in motorcycle deaths, but this was only revealed when Florida was contrasted with comparable control states. Conversely, in the second example, a decreasing trend in cigarette sales prior to the intervention raised questions about the treatment effect attributed to Proposition 99, but the effect was reinforced when California was contrasted with comparable control states. Results of single-group ITSA should be considered preliminary, and interpreted with caution, until a more robust study design can be implemented. © 2016 John Wiley & Sons, Ltd.
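A single-group ITSA is commonly estimated as a segmented regression with a level-change and a trend-change term. The sketch below fits that specification by ordinary least squares on simulated data; the intervention time, effect sizes, and noise level are illustrative, and no comparison group is included (which is exactly the limitation the abstract warns about).

```python
import numpy as np

rng = np.random.default_rng(2)
n, t0 = 60, 30                         # 60 periods, intervention at t = 30
t = np.arange(n)
D = (t >= t0).astype(float)            # post-intervention indicator

# Simulated outcome: baseline level 10, slope 0.2, level jump 4.0, slope change 0.1.
y = 10 + 0.2 * t + 4.0 * D + 0.1 * (t - t0) * D + rng.normal(0, 0.5, n)

# Segmented regression: intercept, pre trend, level change, post-trend change.
X = np.column_stack([np.ones(n), t, D, (t - t0) * D])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]
```

In a comparative (CITS) design the same model would gain interaction terms with a group indicator, so that history shared by both groups cancels out.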
A window-based time series feature extraction method.
Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife
2017-10-01
This study proposes a robust similarity score-based time series feature extraction method termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and to three precordial leads from a publicly available electrocardiogram (ECG) dataset. WTC is then compared, in terms of predictive accuracy and computational complexity, with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves slightly higher classification performance with significantly lower execution time than its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
Detection of a sudden change of the field time series based on the Lorenz system.
Da, ChaoJiu; Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan
2017-01-01
We conducted an exploratory study of the detection of a sudden change in a field time series based on the numerical solution of the Lorenz system. First, the times when the Lorenz path jumped between the regions to the left and right of the equilibrium point of the Lorenz system were quantitatively marked, giving the sudden change times of the Lorenz system. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered a vector time series. We transformed the vector time series into a scalar time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, the sudden changes in the resulting time series were detected using the sliding t-test method. Comparing the test results with the quantitatively marked times indicated that the method could detect every sudden change of the Lorenz path; thus, the method is effective. Finally, we used the method to detect sudden changes in a pressure field time series and a temperature field time series, and obtained good results for both series, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between a field time series and a vector time series; thus, we provide a new method for detecting a sudden change in a field time series.
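The pipeline described above (integrate the Lorenz system, reduce the vector path to a scalar via an inner product, then apply a sliding t-test) can be sketched as follows. The RK4 step size, window width, and t-statistic threshold are illustrative choices, not the authors' settings; projecting onto the x-axis is the simplest possible inner-product reduction, chosen because the x sign distinguishes the two Lorenz lobes.

```python
import numpy as np

def lorenz_series(n=5000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with a simple fixed-step RK4 scheme."""
    def f(v):
        x, y, z = v
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    v = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        k1 = f(v); k2 = f(v + 0.5 * dt * k1)
        k3 = f(v + 0.5 * dt * k2); k4 = f(v + dt * k3)
        v = v + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = v
    return out

def sliding_t(series, w):
    """Two-sample t statistic between adjacent windows of width w."""
    stats = np.full(len(series), np.nan)
    for i in range(w, len(series) - w):
        a, b = series[i - w:i], series[i:i + w]
        s = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / w)
        stats[i] = (b.mean() - a.mean()) / s
    return stats

path = lorenz_series()
scalar = path @ np.array([1.0, 0.0, 0.0])   # inner product: project onto x-axis
tstat = sliding_t(scalar, w=25)
jumps = np.where(np.abs(tstat) > 4.0)[0]    # candidate sudden-change points
```

Large |t| values cluster where the path hops between the two wings of the attractor, mimicking the regime jumps the abstract marks by hand.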
ERIC Educational Resources Information Center
St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.
2016-01-01
We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to…
ERIC Educational Resources Information Center
Price, Cristofer; Unlu, Fatih
2014-01-01
The Comparative Short Interrupted Time Series (C-SITS) design is a frequently employed quasi-experimental method, in which the pre- and post-intervention changes observed in the outcome levels of a treatment group is compared with those of a comparison group where the difference between the former and the latter is attributed to the treatment. The…
NASA Astrophysics Data System (ADS)
Stiller-Reeve, Mathew; Stephenson, David; Spengler, Thomas
2017-04-01
For climate services to be relevant and informative for users, scientific data definitions need to match users' perceptions or beliefs. This study proposes and tests novel yet simple methods to compare beliefs of timing of recurrent climatic events with empirical evidence from multiple historical time series. The methods are tested by applying them to the onset date of the monsoon in Bangladesh, where several scientific monsoon definitions can be applied, yielding different results for monsoon onset dates. It is a challenge to know which monsoon definition compares best with people's beliefs. Time series from eight different scientific monsoon definitions in six regions are compared with respondent beliefs from a previously completed survey concerning the monsoon onset. Beliefs about the timing of the monsoon onset are represented probabilistically for each respondent by constructing a probability mass function (PMF) from elicited responses about the earliest, normal, and latest dates for the event. A three-parameter circular modified triangular distribution (CMTD) is used to allow for the possibility (albeit small) of the onset at any time of the year. These distributions are then compared to the historical time series using two approaches: likelihood scores, and the mean and standard deviation of time series of dates simulated from each belief distribution. The methods proposed give the basis for further iterative discussion with decision-makers in the development of eventual climate services. This study uses Jessore, Bangladesh, as an example and finds that a rainfall definition, applying a 10 mm day^-1 threshold to NCEP-NCAR reanalysis (Reanalysis-1) data, best matches the survey respondents' beliefs about monsoon onset.
Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin
2013-01-01
Background: The time evolution and complex interactions of many nonlinear systems, such as those in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales, described by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series, and detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509
Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin
2014-01-30
The time evolution and complex interactions of many nonlinear systems, such as those in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales, described by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
Trend time-series modeling and forecasting with neural networks.
Qi, Min; Zhang, G Peter
2008-05-01
Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how to best model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.
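The differencing strategy favored above amounts to modeling lagged first differences and then undoing the differencing to recover level forecasts. In the sketch below a linear autoregression stands in for the neural network purely to keep the example self-contained and deterministic; the lag order p and the random-walk-with-drift test series are illustrative assumptions, and any regressor (including an NN) can be dropped in where the least-squares fit appears.

```python
import numpy as np

# Toy stochastic-trend series: a random walk with drift 0.5.
rng = np.random.default_rng(3)
y = np.cumsum(0.5 + rng.normal(0, 1.0, 300))

d = np.diff(y)                        # differencing removes the stochastic trend
p = 4                                 # number of lagged differences fed to the model

# Design matrix: row t holds d[t], ..., d[t+p-1]; the target is d[t+p].
X = np.column_stack([d[i:len(d) - p + i] for i in range(p)])
target = d[p:]

# A linear AR(p) model stands in for the NN here; swap in any regressor.
A = np.column_stack([np.ones(len(target)), X])
w, *_ = np.linalg.lstsq(A, target, rcond=None)
pred_diff = A @ w

# Undo the differencing: last observed level plus the predicted step.
pred_level = y[p:-1] + pred_diff
rmse = np.sqrt(np.mean((pred_level - y[p + 1:]) ** 2))
```

For a pure random walk the best achievable one-step RMSE equals the innovation standard deviation, which is what the model approaches here.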
ERIC Educational Resources Information Center
St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly
2014-01-01
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…
University of San Francisco Institutional-Level Financial Indicators, FY 1968-1969 to FY 1972-1973.
ERIC Educational Resources Information Center
Counelis, James S.; Rizzo, Claude J.
Managing the higher education enterprise effectively is a complex matter. Part of the problem of managing complexity is the need for monitoring data in time series such that the on-going enterprise is viewed at comparable times in comparable terms. Part of the answer at the University of San Francisco is the development of time series data on…
Koltun, G.F.
2015-01-01
Streamflow hydrographs were plotted for modeled/computed time series for the Ohio River near the USGS Sardis gage and the Ohio River at the Hannibal Lock and Dam. In general, the time series at these two locations compared well. Some notable differences include the exclusive presence of short periods of negative streamflows in the USGS 15-minute time-series data for the gage on the Ohio River above Sardis, Ohio, and the occurrence of several peak streamflows in the USACE gate/hydropower time series for the Hannibal Lock and Dam that were appreciably larger than corresponding peaks in the other time series, including those modeled/computed for the downstream Sardis gage.
Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance
Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao
2018-01-01
Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. In biomedical engineering especially, outstanding clustering algorithms for time series may help improve people's health. Considering the data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on a Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online patterns, our algorithms can handle large-scale time series data by splitting the data into chunks that are processed sequentially. Moreover, our algorithms use DTW to measure the distance between pairs of time series, encouraging higher clustering accuracy, because DTW can determine an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to several existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and outperform all the competitors in terms of clustering accuracy. PMID:29795600
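The DTW distance at the core of these algorithms is a short dynamic program. The sketch below is the textbook full-table DTW on 1-D series (no warping-window constraint, since the abstract does not specify one); the test signals are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D series via the full
    (n+1) x (m+1) cost table with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Warping step: match, or stretch one series against the other.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.sin(np.linspace(0, 2 * np.pi, 50))
y = np.sin(np.linspace(0, 2 * np.pi, 70))   # same shape, different length/speed
z = np.cos(np.linspace(0, 2 * np.pi, 50))   # phase-shifted, different shape
```

DTW treats `y` as close to `x` despite their different lengths, which is exactly why it suits time-shifted biomedical series; a Euclidean distance could not even compare them.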
Multiscale Poincaré plots for visualizing the structure of heartbeat time series.
Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L
2016-02-09
Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
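The two building blocks of an MSP plot, coarse-graining and the Poincaré pairing, can be sketched directly. The example below uses Gaussian white noise, for which the abstract predicts a marked reduction in the cloud's spread with scale; the scales and series length are illustrative.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping window averages (multiscale-entropy-style coarse-graining)."""
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def poincare_points(x):
    """(x_n, x_{n+1}) pairs, i.e. the points of a lag-1 Poincaré plot."""
    return np.column_stack([x[:-1], x[1:]])

rng = np.random.default_rng(4)
white = rng.normal(0, 1, 3000)

pts_s1 = poincare_points(coarse_grain(white, 1))    # original scale
pts_s10 = poincare_points(coarse_grain(white, 10))  # coarse scale

# For white noise the cloud shrinks roughly as 1/sqrt(scale).
spread1 = pts_s1.std()
spread10 = pts_s10.std()
```

Repeating this for a 1/f series instead of white noise would show the spread staying roughly constant across scales, which is the visual signature the MSP method exploits.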
NASA Astrophysics Data System (ADS)
Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.
2018-03-01
This paper proposes a combination of the Firefly Algorithm (FA) and Chen's fuzzy time series forecasting. Most existing fuzzy forecasting methods based on fuzzy time series use a static interval length. We therefore apply the Firefly Algorithm (FA) to set non-stationary interval lengths for each cluster in Chen's method. The method is evaluated by applying it to the Jakarta Composite Index (IHSG) and comparing it with classical Chen fuzzy time series forecasting. Its performance is verified through simulation in Matlab.
Tuning the Voices of a Choir: Detecting Ecological Gradients in Time-Series Populations.
Buras, Allan; van der Maaten-Theunissen, Marieke; van der Maaten, Ernst; Ahlgrimm, Svenja; Hermann, Philipp; Simard, Sonia; Heinrich, Ingo; Helle, Gerd; Unterseher, Martin; Schnittler, Martin; Eusemann, Pascal; Wilmking, Martin
2016-01-01
This paper introduces a new approach, Principal Component Gradient Analysis (PCGA), to detect ecological gradients in time-series populations, i.e. several time series originating from different individuals of a population. Detection of ecological gradients is of particular importance when dealing with time series from heterogeneous populations which express differing trends. PCGA makes use of the polar coordinates of loadings from the first two axes obtained by principal component analysis (PCA) to define groups of similar trends. Based on the mean inter-series correlation (rbar), the gain in a common underlying signal achieved by PCGA groups is quantified using Monte Carlo simulations. For validation, PCGA is compared to three other existing approaches. Focusing on dendrochronological examples, PCGA is shown to correctly determine population gradients and, in particular cases, to be advantageous over the other considered methods. Furthermore, PCGA groups in each example allowed for enhancing the strength of a common underlying signal, performing comparably well to hierarchical cluster analysis. Our results indicate that PCGA potentially allows for a better understanding of the mechanisms causing time-series population gradients, as well as objectively enhancing the performance of climate transfer functions in dendroclimatology. While our examples highlight the relevance of PCGA to the field of dendrochronology, we believe that other disciplines working with data of comparable structure may also benefit from PCGA.
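The core of PCGA (polar coordinates of first-two-axis PCA loadings used to group series with similar trends) can be sketched as follows. The two-group toy population, the PCA via SVD, and the angular-distance grouping rule are illustrative simplifications of the published method, not its actual implementation.

```python
import numpy as np

# Toy "population": two groups of series sharing opposite trends plus noise.
rng = np.random.default_rng(5)
t = np.arange(100)
up = np.array([0.05 * t + rng.normal(0, 1, 100) for _ in range(10)])
down = np.array([-0.05 * t + rng.normal(0, 1, 100) for _ in range(10)])
data = np.vstack([up, down])                 # 20 series x 100 time steps

# PCA via SVD of the row-centered series matrix; loadings on the first two axes.
centered = data - data.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
loadings = U[:, :2] * S[:2]                  # one 2-D loading point per series

# Polar angle of each loading point; group by angular distance to series 0
# (wrap-around handled via the complex exponential).
angles = np.arctan2(loadings[:, 1], loadings[:, 0])
delta = np.angle(np.exp(1j * (angles - angles[0])))
groups = (np.abs(delta) < np.pi / 2).astype(int)
```

Series with a common trend cluster at nearly the same polar angle, while the opposing trend group sits roughly π away, so a simple angular split recovers the gradient.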
2017-01-01
time histories with peak pressures of approximately 250 psi and 500 psi. 1.2 TESTING OBJECTIVES The first goal of this test series was to explore how...finally the late-time at-rest fragments were physically collected and analyzed post-test. Because this test series physically collected over 50,000...for a single fragmenting object. Comparing the three measurement techniques used in this test series, the late-time physically-collected mass
Clustering Financial Time Series by Network Community Analysis
NASA Astrophysics Data System (ADS)
Piccardi, Carlo; Calatroni, Lisa; Bertoni, Fabio
In this paper, we describe a method for clustering financial time series that is based on community analysis, a recently developed approach for partitioning the nodes of a network (graph). A network with N nodes is associated to the set of N time series. The weight of the link (i, j), which quantifies the similarity between the two corresponding time series, is defined according to a metric based on symbolic time series analysis, which has recently proved effective in the context of financial time series. Then, searching for network communities allows one to identify groups of nodes (and thus time series) with strong similarity. A quantitative assessment of the significance of the obtained partition is also provided. The method is applied to two distinct case studies concerning the US and Italian stock exchanges, respectively. In the US case, the stability of the partitions over time is also thoroughly investigated. The results compare favorably with those obtained with the standard tools typically used for clustering financial time series, such as the minimal spanning tree and the hierarchical tree.
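A minimal version of the standard baseline mentioned at the end, the minimal spanning tree built from a correlation-based metric, is sketched below; the community-analysis step itself is not reproduced here. The two-sector factor model and the Mantegna-style distance d = sqrt(2(1 - rho)) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
# Two synthetic "sectors": series within a sector share a common factor.
f1, f2 = rng.normal(0, 1, (2, 500))
sector1 = np.array([0.8 * f1 + 0.6 * rng.normal(0, 1, 500) for _ in range(5)])
sector2 = np.array([0.8 * f2 + 0.6 * rng.normal(0, 1, 500) for _ in range(5)])
returns = np.vstack([sector1, sector2])       # series 0-4 vs series 5-9

rho = np.corrcoef(returns)
dist = np.sqrt(2.0 * (1.0 - rho))             # Mantegna's correlation metric

def prim_mst(dist):
    """Minimum spanning tree edges via Prim's algorithm on a dense matrix."""
    n = len(dist)
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or dist[i, j] < best[2]):
                    best = (i, j, dist[i, j])
        edges.append(best[:2])
        in_tree.append(best[1])
    return edges

edges = prim_mst(dist)
# Count tree edges crossing the two sectors; a faithful tree keeps sectors together.
cross = sum(1 for i, j in edges if (i < 5) != (j < 5))
```

With strongly correlated sectors, the tree joins the two groups through only one (or very few) cross-sector edges, which is the clustering signal community analysis then generalizes.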
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method that uses Bayesian analysis to classify time series data, such as market prices, produced by an agent-based simulation of the international emissions trading market, and compares this approach with a discrete Fourier transform analytical method. The purpose is to demonstrate analytical methods that map time series data. These analytical methods revealed the following: (1) the classification methods express the mapped time series data as distances, which are easier to understand and draw inferences from than the raw time series; (2) the methods can analyze uncertain time series data via these distances through agent-based simulation, including both stationary and non-stationary processes; and (3) the Bayesian analytical method can describe the 1% difference in the emission reduction targets of agents.
How long will the traffic flow time series keep efficacious to forecast the future?
NASA Astrophysics Data System (ADS)
Yuan, PengCheng; Lin, XuXun
2017-02-01
This paper investigates how long a historical traffic flow time series remains efficacious for forecasting the future. To this end, we first collect traffic flow time series data at different granularities. Then, using a modified rescaled range analysis method, we analyze the long-memory property of the traffic flow time series by computing the Hurst exponent. We calculate the long-term memory cycle and test its significance, and we compare the result with that of the maximum Lyapunov exponent method. Our results show that both the freeway and the ground-way traffic flow time series demonstrate a positively correlated trend (i.e., a long-term memory property), and that the memory cycle of each is about 30 h. We believe this study is useful for short-term and long-term traffic flow prediction and management.
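The rescaled range (R/S) estimate of the Hurst exponent used above can be sketched as follows. This is the classical unmodified R/S procedure, not the authors' modified variant, and the chunk sizes and test series are illustrative.

```python
import numpy as np

def hurst_rs(x, min_chunk=16):
    """Hurst exponent via rescaled range (R/S) analysis: average R/S over
    non-overlapping chunks at doubling sizes, then fit log(R/S) ~ H log(size)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviations from the mean
            r = dev.max() - dev.min()           # range of the rescaled walk
            s = seg.std(ddof=1)
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(7)
h_white = hurst_rs(rng.normal(0, 1, 4096))            # uncorrelated: H near 0.5
h_walk = hurst_rs(np.cumsum(rng.normal(0, 1, 4096)))  # strongly persistent series
```

A Hurst exponent well above 0.5, as the abstract reports for traffic flow, indicates the positively correlated long-memory behavior that makes multi-step forecasts worthwhile.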
NASA Astrophysics Data System (ADS)
Radhakrishnan, Srinivasan; Duvvuru, Arjun; Sultornsanee, Sivarit; Kamarthi, Sagar
2016-02-01
The cross correlation coefficient has been widely applied in financial time series analysis, in specific, for understanding chaotic behaviour in terms of stock price and index movements during crisis periods. To better understand time series correlation dynamics, the cross correlation matrices are represented as networks, in which a node stands for an individual time series and a link indicates cross correlation between a pair of nodes. These networks are converted into simpler trees using different schemes. In this context, Minimum Spanning Trees (MST) are the most favoured tree structures because of their ability to preserve all the nodes and thereby retain essential information imbued in the network. Although cross correlations underlying MSTs capture essential information, they do not faithfully capture dynamic behaviour embedded in the time series data of financial systems because cross correlation is a reliable measure only if the relationship between the time series is linear. To address the issue, this work investigates a new measure called phase synchronization (PS) for establishing correlations among different time series which relate to one another, linearly or nonlinearly. In this approach the strength of a link between a pair of time series (nodes) is determined by the level of phase synchronization between them. We compare the performance of phase synchronization based MST with cross correlation based MST along selected network measures across temporal frame that includes economically good and crisis periods. We observe agreement in the directionality of the results across these two methods. They show similar trends, upward or downward, when comparing selected network measures. 
Though both methods give similar trends, the phase synchronization based MST is a more reliable representation of the dynamic behaviour of financial systems than the cross correlation based MST, because of the former's ability to quantify nonlinear relationships among time series and relations among phase-shifted time series.
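A standard way to quantify the phase synchronization strength used to weight MST links is the phase locking value (PLV), the magnitude of the mean phase-difference phasor. The sketch below uses synthetic signals with known instantaneous phases; a real pipeline would extract phases with a Hilbert transform, and the PLV itself is a common stand-in, not necessarily the authors' exact PS measure.

```python
import cmath
import math
import random

def phase_locking_value(phi1, phi2):
    """PLV in [0, 1]: 1 = perfectly phase locked, ~0 = no locking."""
    z = sum(cmath.exp(1j * (a - b)) for a, b in zip(phi1, phi2))
    return abs(z) / len(phi1)

random.seed(1)
t = [0.01 * k for k in range(2000)]
# two series locked up to a constant offset and small jitter
phi_a = [2 * math.pi * 1.0 * tk for tk in t]
phi_b = [2 * math.pi * 1.0 * tk + 0.3 + random.gauss(0, 0.1) for tk in t]
# an unrelated series at a different, noisy rate
phi_c = [2 * math.pi * 1.37 * tk + random.gauss(0, 1.0) for tk in t]

locked = phase_locking_value(phi_a, phi_b)    # near 1
unlocked = phase_locking_value(phi_a, phi_c)  # much smaller
```

In the PS-based MST, each link weight would be derived from such a pairwise synchronization score rather than from a cross correlation coefficient.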
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline, and a 5.25% average accuracy improvement when only short-term predictions were considered.
Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
Short Term Rain Prediction For Sustainability of Tanks in the Tropic Influenced by Shadow Rains
NASA Astrophysics Data System (ADS)
Suresh, S.
2007-07-01
Rainfall and flow prediction, adopting the Venkataraman single time series approach and the Wiener multiple time series approach, were conducted for the Aralikottai and Kothamangalam tank systems in Tamilnadu, India. The results indicated that raw prediction of daily values is closer to actual values than trend-identified predictions. The sister seasonal time series were more amenable to prediction than the whole parent time series. The Venkataraman single time series approach was better suited for rainfall prediction, while the Wiener approach proved better for daily prediction of flow based on rainfall. The major conclusion is that the sister seasonal time series of rain and flow have their own identities even though they form part of the whole parent time series. Further studies with other tropical small watersheds are necessary to establish this unique characteristic of independent but not exclusive behavior of seasonal stationary stochastic processes as compared to parent non-stationary stochastic processes.
Rodgers, Joseph Lee; Beasley, William Howard; Schuelke, Matthew
2014-01-01
Many data structures, particularly time series data, are naturally seasonal, cyclical, or otherwise circular. Past graphical methods for time series have focused on linear plots. In this article, we move graphical analysis onto the circle. We focus on 2 particular methods, one old and one new. Rose diagrams are circular histograms and can be produced in several different forms using the RRose software system. In addition, we propose, develop, illustrate, and provide software support for a new circular graphical method, called Wrap-Around Time Series Plots (WATS Plots), which is a graphical method useful to support time series analyses in general but in particular in relation to interrupted time series designs. We illustrate the use of WATS Plots with an interrupted time series design evaluating the effect of the Oklahoma City bombing on birthrates in Oklahoma County during the 10 years surrounding the bombing of the Murrah Building in Oklahoma City. We compare WATS Plots with linear time series representations and overlay them with smoothing and error bands. Each method is shown to have advantages in relation to the other; in our example, the WATS Plots more clearly show the existence and effect size of the fertility differential.
NASA Technical Reports Server (NTRS)
Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.
2013-01-01
Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was first assessed over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Performances were then analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performance in smoothing AVHRR time series characterized by a high level of noise and frequent missing observations. The resulting smoothed time series capture the vegetation dynamics well and show no gaps, in contrast to the 50-60% of data still missing after AG or SG reconstruction. The results of the simulation experiments, as well as the comparison with actual AVHRR time series, indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
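The shift-and-scale adjustment at the heart of CACAO can be illustrated with a toy grid search: find the time shift and magnitude scale that best map a seasonal climatology onto sparse observations, then fill gaps from the adjusted climatology. The candidate shifts and scales below are illustrative assumptions, not the paper's fitting procedure.

```python
import math

def cacao_fit(obs, climatology, shifts=range(-30, 31, 5),
              scales=(0.6, 0.8, 1.0, 1.2, 1.4)):
    """Grid-search the time shift (days) and magnitude scale that best
    map the climatology onto sparse observations (dict: day -> value)."""
    best = None
    for dt in shifts:
        for k in scales:
            sse = sum((v - k * climatology[(t + dt) % len(climatology)]) ** 2
                      for t, v in obs.items())
            if best is None or sse < best[0]:
                best = (sse, dt, k)
    return best[1], best[2]

# climatological LAI-like seasonal cycle
clim = [0.3 + 0.25 * (1 + math.sin(2 * math.pi * (t - 90) / 365))
        for t in range(365)]
# sparse observations: the actual season is 10 days early and 20% stronger
obs = {t: 1.2 * clim[(t + 10) % 365] for t in range(0, 365, 15)}
shift, scale = cacao_fit(obs, clim)   # recovers the shift and scale
```

The recovered shift directly quantifies the phenological timing anomaly, and the scale the inter-annual magnitude anomaly, which is the interpretation the abstract gives to the two parameters.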
Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue
2017-01-01
Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes in entropy values even when the time series changes only slightly. This problem becomes especially severe as the time series becomes short. To solve it, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function that measures the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise, and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially short ones, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis. PMID:28383496
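The soft-similarity idea behind FMSE can be sketched by replacing sample entropy's hard 0/1 template match with a smooth function of the template distance. The logistic similarity below is an illustrative assumption, not the paper's exact definition.

```python
import math
import random

def soft_similarity(d, r, steep=10.0):
    """~1 when distance d << tolerance r, ~0 when d >> r, smooth in between."""
    return 1.0 / (1.0 + math.exp(steep * (d - r) / r))

def fuzzy_sampen(x, m=2, r_factor=0.2):
    """Sample entropy with the hard match replaced by soft_similarity."""
    n = len(x)
    mean = sum(x) / n
    r = r_factor * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def avg_match(mm):
        total, count = 0.0, 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                d = max(abs(x[i + t] - x[j + t]) for t in range(mm))
                total += soft_similarity(d, r)
                count += 1
        return total / count

    return -math.log(avg_match(m + 1) / avg_match(m))

random.seed(2)
noise = [random.gauss(0, 1) for _ in range(300)]
ramp = [0.05 * i for i in range(300)]   # perfectly regular signal

e_noise = fuzzy_sampen(noise)   # irregular: higher entropy
e_ramp = fuzzy_sampen(ramp)     # regular: much lower entropy
```

Because the similarity varies continuously with the distance, a small perturbation of the series moves the entropy estimate smoothly rather than flipping matches on and off, which is the stability gain the abstract claims.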
Fuzzy time-series based on Fibonacci sequence for stock price forecasting
NASA Astrophysics Data System (ADS)
Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia
2007-07-01
Time-series models have been utilized to make reasonably accurate predictions in the areas of stock price movements, academic enrollments, weather, etc. To improve the forecasting performance of fuzzy time-series models, this paper proposes a new model that incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model, and the weighted method of Yu's model. This paper employs 5 years of TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and 13 years of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. By comparing our forecasting performance with that of Chen's (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491) models, we conclude that the proposed model surpasses these conventional fuzzy time-series models in accuracy.
Wild, Beate; Stadnitski, Tatjana; Wesche, Daniela; Stroe-Kunold, Esther; Schultz, Jobst-Hendrik; Rudofsky, Gottfried; Maser-Gluth, Christiane; Herzog, Wolfgang; Friederich, Hans-Christoph
2016-04-01
The aim of the study was to investigate the characteristics of awakening salivary cortisol in patients with anorexia nervosa (AN) using a time series design. We included ten AN inpatients, six with a very low BMI (high symptom severity, HSS group) and four patients with less severe symptoms (low symptom severity, LSS group). Patients collected salivary cortisol daily upon awakening. The number of collected saliva samples varied across patients between n=65 and n=229 (due to the different lengths of their inpatient stays). In addition, before retiring, the patients answered daily questions on a handheld device regarding disorder-related psychosocial variables. The analysis of cortisol and diary data was conducted using a time series approach. The time series showed that the awakening cortisol of the AN patients was elevated compared to a control group. Cortisol measurements of patients with LSS essentially fluctuated in a stationary manner around a constant mean. The series of patients with HSS were generally less stable; four HSS patients showed a non-stationary cortisol awakening series. Antipsychotic medication did not change awakening cortisol in a specific way. The lagged dependencies between cortisol and depressive feelings became significant for four patients; here, higher cortisol values were temporally associated with higher values of depressive feelings. Upon awakening, the cortisol of all AN patients was in the standard range but elevated compared to healthy controls. Patients with HSS appeared to show less stable awakening cortisol time series than patients with LSS. Copyright © 2016 Elsevier B.V. All rights reserved.
A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting
NASA Astrophysics Data System (ADS)
Kim, T.; Joo, K.; Seo, J.; Heo, J. H.
2016-12-01
Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach to time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their advantage of discovering relevant features in nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees built with a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for the Chungju dam in South Korea were used for modeling and forecasting. To evaluate the performance of the models, both one-step-ahead and multi-step-ahead forecasting were applied, and the root mean squared error and mean absolute error of the two models were compared.
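The comparison ultimately rests on two error metrics over held-out inflows. A minimal sketch of those metrics, with placeholder forecasts that are not the study's actual results:

```python
import math

def rmse(obs, pred):
    """Root mean squared error: penalises large misses more heavily."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error: average miss in the data's own units."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

# placeholder monthly inflows and model forecasts (illustrative numbers only)
observed = [120.0, 95.0, 240.0, 310.0, 180.0]
sarima = [110.0, 100.0, 220.0, 290.0, 200.0]
forest = [125.0, 90.0, 250.0, 305.0, 170.0]

scores = {"SARIMA": (rmse(observed, sarima), mae(observed, sarima)),
          "RF": (rmse(observed, forest), mae(observed, forest))}
```

Reporting both metrics side by side, for one-step-ahead and multi-step-ahead horizons separately, is exactly the evaluation design the abstract describes.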
A Comparative Analysis of Vocabulary Load of Three Provincially Adopted Primary Arithmetic Series.
ERIC Educational Resources Information Center
Horodezky, Betty; Weinstein, Pauline Smith
1981-01-01
Results indicated considerable difference in total number of running words among series; an increase in different words for grades two-three in each series; repetition of most words in each series between one and nine times; and no high degree of word overlap between series or grades in any given series. (Author/CM)
Baroreflex Coupling Assessed by Cross-Compression Entropy
Schumann, Andy; Schulz, Steffen; Voss, Andreas; Scharbrodt, Susann; Baumert, Mathias; Bär, Karl-Jürgen
2017-01-01
Estimating interactions between physiological systems is an important challenge in modern biomedical research. Here, we explore a new concept for quantifying information common to two time series by cross-compressibility. Cross-compression entropy (CCE) exploits the ZIP data compression algorithm extended to bivariate data analysis. First, time series are transformed into symbol vectors. Symbols of the target time series are coded by the symbols of the source series. Uncoupled and linearly coupled surrogates were derived from cardiovascular recordings of 36 healthy controls obtained during rest to demonstrate the suitability of this method for assessing physiological coupling. CCE at rest was compared to that during isometric handgrip exercise. Finally, spontaneous baroreflex interaction assessed by CCEBRS was compared between 21 patients suffering from acute schizophrenia and 21 matched controls. The CCEBRS of original time series was significantly higher than in uncoupled surrogates in 89% of the subjects and higher than in linearly coupled surrogates in 47% of the subjects. Handgrip exercise led to sympathetic activation and vagal inhibition accompanied by reduced baroreflex sensitivity. CCEBRS decreased from 0.553 ± 0.030 at rest to 0.514 ± 0.035 during exercise (p < 0.001). In acute schizophrenia, heart rate and blood pressure were elevated. Heart rate variability indicated a change of sympathovagal balance. The CCEBRS of patients with schizophrenia was reduced compared to healthy controls (0.546 ± 0.042 vs. 0.507 ± 0.046, p < 0.01) and revealed a decrease of blood pressure influence on heart rate in patients with schizophrenia. Our results indicate that CCE is suitable for the investigation of linear and non-linear coupling in cardiovascular time series. CCE can quantify causal interactions in short, noisy and non-stationary physiological time series. PMID:28539889
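The compression-based coupling idea can be sketched with a general-purpose coder: if coding the target symbol sequence together with the source sequence compresses better than coding each alone, the two series share information. Here zlib stands in for the ZIP coder, and the normalization is an assumption, not the paper's exact CCE definition.

```python
import random
import zlib

def csize(b):
    """Compressed size in bytes at maximum compression level."""
    return len(zlib.compress(b, 9))

def coupling(source, target):
    """Higher when knowing `source` helps compress `target` (shared information)."""
    s, t = bytes(source), bytes(target)
    return (csize(t) + csize(s) - csize(s + t)) / csize(t)

random.seed(3)
shared = [random.randrange(4) for _ in range(4000)]   # 4-letter symbol stream
noise = [random.randrange(4) for _ in range(4000)]    # unrelated stream

coupled = coupling(shared, shared)      # identical streams: near 1
independent = coupling(noise, shared)   # unrelated streams: near 0
```

In the paper's setting the symbol streams would come from coarse-graining cardiovascular series (e.g. blood pressure as source, heart rate as target), so the score reflects baroreflex information transfer rather than generic redundancy.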
Acosta-Mesa, Héctor-Gabriel; Rechy-Ramírez, Fernando; Mezura-Montes, Efrén; Cruz-Ramírez, Nicandro; Hernández Jiménez, Rodolfo
2014-06-01
In this work, we present a novel application of time series discretization using evolutionary programming for the classification of precancerous cervical lesions. The approach optimizes the number of intervals into which the length and amplitude of the time series should be compressed, preserving the information important for classification purposes. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function that considers three criteria: the entropy with respect to the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. This discretization approach is evaluated on time series data based on temporal patterns observed during a classical test used in cervical cancer detection; the classification accuracy reached by our method is compared with the well-known time series discretization algorithm SAX and the dimensionality reduction method PCA. Statistical analysis of the classification accuracy shows that the discrete representation is as efficient as the complete raw representation for the present application, reducing the dimensionality of the time series length by 97%. This representation is also very competitive in terms of classification accuracy when compared with similar approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
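The length-and-amplitude compression being optimized resembles SAX-style discretization: reduce the series to a few segment means, then map each mean to a symbol. The sketch below uses fixed equal-width amplitude intervals; the paper instead searches for the intervals with evolutionary programming.

```python
def discretize(series, segments=8, alphabet="abcd"):
    """Piecewise aggregate approximation to `segments` means,
    then equal-width amplitude bins mapped to alphabet symbols."""
    n, w = len(series), segments
    paa = [sum(series[i * n // w:(i + 1) * n // w])
           / ((i + 1) * n // w - i * n // w)
           for i in range(w)]
    lo, hi = min(series), max(series)
    width = (hi - lo) / len(alphabet) or 1.0   # guard against a constant series
    return "".join(alphabet[min(int((v - lo) / width), len(alphabet) - 1)]
                   for v in paa)

word = discretize(list(range(32)))   # monotone ramp -> "aabbccdd"
```

An evolutionary search like the paper's would vary the number and placement of intervals and score each candidate string representation with the entropy/complexity/compression cost function described in the abstract.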
Data imputation analysis for Cosmic Rays time series
NASA Astrophysics Data System (ADS)
Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.
2017-05-01
The occurrence of missing data in Galactic Cosmic Ray (GCR) time series is inevitable, since data are lost due to mechanical and human failure, technical problems, and the different periods of operation of GCR stations. The aim of this study was to perform multiple dataset imputation in order to reconstruct the observational dataset. The study used the monthly time series of GCR Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10% to 90% missing data, in steps of 10%, relative to the observed ROME series, with 50 replicates; the CLMX station was used as a proxy for allocating these scenarios. Three methods for monthly dataset imputation were selected: AMELIA II, which runs a bootstrap Expectation Maximization algorithm; MICE, which runs an algorithm based on Multivariate Imputation by Chained Equations; and MTSDI, an Expectation Maximization-based method for imputing missing values in multivariate normal time series. The synthetic time series were evaluated against the observed ROME series using several skill measures, such as RMSE, NRMSE, the Agreement Index, R, R2, the F-test and the t-test. The results showed that for CLMX and ROME, the R2 and R statistics were equal to 0.98 and 0.96, respectively. Increasing the number of gaps was observed to degrade the quality of the time series. Data imputation was most efficient with the MTSDI method, with negligible errors and the best skill coefficients. The results suggest a limit of about 60% missing data for imputation of monthly averages. It is noteworthy that the CLMX, ROME and KIEL stations present no missing data in the target period. This methodology allowed reconstructing 43 time series.
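The validation design can be sketched end to end: mask a fraction of a complete series, impute, then score the imputed values against the held-out truth. Simple linear interpolation stands in here for AMELIA/MICE/MTSDI, and the synthetic series is not GCR data.

```python
import math
import random

def mask(series, frac, rng):
    """Hide a fraction of points; return the gapped series and gap indices."""
    idx = rng.sample(range(len(series)), int(frac * len(series)))
    gapped = list(series)
    for i in idx:
        gapped[i] = None
    return gapped, idx

def impute_linear(gapped):
    """Fill gaps by linear interpolation between nearest known neighbours."""
    filled = list(gapped)
    known = [i for i, v in enumerate(filled) if v is not None]
    for i, v in enumerate(gapped):
        if v is None:
            left = max((k for k in known if k < i), default=None)
            right = min((k for k in known if k > i), default=None)
            if left is None:
                filled[i] = filled[right]
            elif right is None:
                filled[i] = filled[left]
            else:
                w = (i - left) / (right - left)
                filled[i] = (1 - w) * filled[left] + w * filled[right]
    return filled

def rmse_at(truth, filled, idx):
    return math.sqrt(sum((truth[i] - filled[i]) ** 2 for i in idx) / len(idx))

rng = random.Random(4)
# 45 years of monthly values with an annual cycle, as a stand-in series
truth = [10 + 3 * math.sin(2 * math.pi * k / 12) for k in range(540)]
errors = {}
for frac in (0.1, 0.3, 0.6, 0.9):
    gapped, idx = mask(truth, frac, rng)
    errors[frac] = rmse_at(truth, impute_linear(gapped), idx)
```

As in the study, the error grows with the missing-data fraction: sparse gaps are bridged almost perfectly, while at 90% missing the gaps span most of a cycle and the reconstruction degrades.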
Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology
NASA Technical Reports Server (NTRS)
Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus
2013-01-01
Changing trends in ecosystem productivity can be quantified using satellite observations of the Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on the analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annually aggregated time series or on a seasonal-trend model show better performance than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods to long-term NDVI time series. In particular, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed, and quantify the multi-method uncertainty of NDVI trends through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be made more robust against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.
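The best-performing approach in the abstract, a trend on annually aggregated values, is simple to sketch: average each year of a monthly series, then take the least-squares slope of the annual means. The series below is synthetic greening data, not real AVHRR NDVI.

```python
import math
import random

def annual_means(monthly, per_year=12):
    """Aggregate a monthly series to one mean per year."""
    return [sum(monthly[i:i + per_year]) / per_year
            for i in range(0, len(monthly) - per_year + 1, per_year)]

def slope(y):
    """Least-squares slope of y against its index."""
    n = len(y)
    xm, ym = (n - 1) / 2, sum(y) / n
    return (sum((i - xm) * (v - ym) for i, v in enumerate(y))
            / sum((i - xm) ** 2 for i in range(n)))

rng = random.Random(5)
# 20 years of synthetic monthly NDVI: seasonal cycle + weak greening + noise
monthly = [0.5 + 0.2 * math.sin(2 * math.pi * m / 12)
           + 0.002 * m / 12 + rng.gauss(0, 0.02)
           for m in range(240)]
trend = slope(annual_means(monthly))   # per-year greening rate, near 0.002
```

Aggregating first removes the seasonal cycle exactly (each year contributes one full cycle), which is one reason this class of methods is less sensitive to the cycle-removal errors the abstract warns about.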
Identification of periods of clear sky irradiance in time series of GHI measurements
Reno, Matthew J.; Hansen, Clifford W.
2016-01-18
In this study, we present a simple algorithm for identifying periods of time with broadband global horizontal irradiance (GHI) similar to that occurring during clear sky conditions from a time series of GHI measurements. Other available methods identify these periods using additional measurements, such as direct or diffuse irradiance. Our algorithm compares characteristics of the time series of measured GHI with the output of a clear sky model without requiring additional measurements. We validate our algorithm using data from several locations by comparing our results with those obtained from a clear sky detection algorithm, and with satellite and ground-based sky imagery.
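The core idea, comparing statistics of a measured GHI window against a clear sky model, can be sketched with two simple criteria. The specific criteria and thresholds below are illustrative assumptions, not the published algorithm's exact components.

```python
import math

def clear_window(measured, modeled, mean_tol=0.05, var_tol=0.02):
    """Flag a window as clear when measured GHI tracks the clear sky model."""
    m_mean = sum(measured) / len(measured)
    c_mean = sum(modeled) / len(modeled)
    # criterion 1: mean irradiance close to the clear sky value
    if abs(m_mean - c_mean) > mean_tol * c_mean:
        return False
    # criterion 2: total variability close too (clouds add jitter)
    m_var = sum(abs(a - b) for a, b in zip(measured[1:], measured))
    c_var = sum(abs(a - b) for a, b in zip(modeled[1:], modeled))
    return abs(m_var - c_var) <= var_tol * c_mean * len(measured)

# clear sky model for one hour of minute data (W/m^2), plus two scenarios
model = [800 * math.sin(math.pi * i / 60) for i in range(60)]
clear = [v * 0.99 for v in model]                               # slight haze
cloudy = [v * (0.6 if 20 < i < 40 else 1.0)                     # passing cloud
          for i, v in enumerate(model)]
```

The published method uses a larger set of window statistics, but the structure is the same: no direct or diffuse irradiance measurements are needed, only the GHI series and a clear sky model.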
CI2 for creating and comparing confidence-intervals for time-series bivariate plots.
Mullineaux, David R
2017-02-01
Currently no method exists for calculating and comparing the confidence intervals (CI) for the time series of a bivariate plot. The study's aim was to develop 'CI2' as a method to calculate the CI on time-series bivariate plots and to identify whether the CI of two bivariate time series overlap. The test data were the knee and ankle angles of 10 healthy participants running on a motorised standard treadmill and a non-motorised curved treadmill. For a recommended 10+ trials, CI2 involves calculating 95% confidence ellipses at each time point, then taking as the CI the points on the ellipses that are perpendicular to the direction vector between the means of two adjacent time points. Consecutive pairs of CI create convex quadrilaterals, and any overlap of these quadrilaterals, at the same time or at ±1 frame as a time lag calculated using cross-correlations, indicates where the two time series differ. CI2 showed no group differences between the left and right legs on either treadmill, but comparing the same legs between treadmills showed, for all participants, less knee extension on the curved treadmill before heel-strike. To improve and standardise the use of CI2 it is recommended to remove outlier time series, use 95% confidence ellipses, and scale the ellipse by the fixed chi-square value rather than the sample-size-dependent F-value. For practical use, and to aid in standardisation or future development of CI2, Matlab code is provided. CI2 provides an effective method to quantify the CI of bivariate plots and to explore the differences in CI between two bivariate time series. Copyright © 2016 Elsevier B.V. All rights reserved.
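The per-time-point building block of CI2 is a 95% confidence ellipse from the 2x2 covariance of the (knee, ankle) angles across trials, scaled by the fixed chi-square value the paper recommends. A sketch with hypothetical trial data (the original provides Matlab code; this Python version is only illustrative):

```python
import math

CHI2_95_2DOF = 5.991   # fixed chi-square scaling for 2 degrees of freedom

def confidence_ellipse(points):
    """(centre, semi-axis lengths, orientation) of a 95% ellipse
    for a sample of 2-D points at one time instant."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # eigenvalues of the 2x2 covariance matrix, by hand
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, max(tr / 2 - disc, 0.0)
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return ((mx, my),
            (math.sqrt(CHI2_95_2DOF * l1), math.sqrt(CHI2_95_2DOF * l2)),
            angle)

# ten hypothetical trials of (knee, ankle) angle at one instant of the gait cycle
trials = [(40.1, 12.3), (41.0, 12.9), (39.5, 11.8), (40.7, 12.5),
          (40.3, 12.1), (39.9, 12.6), (40.8, 12.8), (39.6, 12.0),
          (40.2, 12.4), (40.5, 12.7)]
centre, axes, angle = confidence_ellipse(trials)
```

CI2 then reduces each ellipse to the two points perpendicular to the trajectory direction and joins consecutive pairs into the quadrilaterals whose overlap is tested.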
Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.
Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias
2016-01-01
To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
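The interrupted time series logic the article discusses can be reduced to a toy sketch: fit the pre-intervention trend, project it forward as the counterfactual, and take the mean post-intervention deviation as the effect estimate. The data are synthetic, and a full segmented regression would also model a slope change.

```python
def fit_line(y):
    """Ordinary least squares intercept and slope against the index."""
    n = len(y)
    xm, ym = (n - 1) / 2, sum(y) / n
    b = (sum((i - xm) * (v - ym) for i, v in enumerate(y))
         / sum((i - xm) ** 2 for i in range(n)))
    return ym - b * xm, b

pre = [50 + 0.5 * t for t in range(20)]        # baseline level and trend
post = [45 + 0.5 * t for t in range(20, 40)]   # intervention drops the level by 5
a, b = fit_line(pre)
counterfactual = [a + b * t for t in range(20, 40)]
effect = sum(p - c for p, c in zip(post, counterfactual)) / len(post)
# effect is -5: the estimated level change at the intervention
```

A statistical process control analysis of the same data would instead put control limits around the baseline and flag post-intervention points that fall outside them, which is the methodological contrast the article draws.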
Empirical forecast of quiet time ionospheric Total Electron Content maps over Europe
NASA Astrophysics Data System (ADS)
Badeke, Ronny; Borries, Claudia; Hoque, Mainul M.; Minkwitz, David
2018-06-01
An accurate forecast of the ionospheric Total Electron Content (TEC) is helpful for investigating space weather influences on the ionosphere and on technical applications such as satellite-receiver radio links. The purpose of this work is to compare four empirical methods for a 24-h forecast of vertical TEC maps over Europe under geomagnetically quiet conditions. TEC map data are obtained from the Space Weather Application Center Ionosphere (SWACI) and the Universitat Politècnica de Catalunya (UPC). The time-series methods, the Standard Persistence Model (SPM), a 27-day median model (MediMod) and a Fourier Series Expansion, are evaluated against maps for the entire year of 2015. As a representative of the climatological coefficient models, the forecast performance of the Global Neustrelitz TEC model (NTCM-GL) is also investigated. Time periods of magnetic storms, identified with the Dst index, are excluded from the validation. Because they compute TEC values from the most recent maps, the time-series methods perform slightly better than the coefficient model NTCM-GL; the benefit of NTCM-GL is its independence from observational TEC data. Among the time-series methods, MediMod delivers the best overall performance regarding accuracy and data gap handling. Quiet-time SWACI maps can be forecasted accurately and in real time by the MediMod time-series approach.
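The 27-day median idea behind MediMod, as described, amounts to forecasting tomorrow's TEC at each hour as the median of the same hour over the previous 27 days. A single-grid-point sketch (real SWACI/UPC maps are 2-D grids, and the exact MediMod definition may differ in detail):

```python
import math
import statistics

def medimod_forecast(history, hours_per_day=24, window_days=27):
    """24-h-ahead forecast: per-hour median over the previous 27 days."""
    n = len(history)
    forecast = []
    for h in range(hours_per_day):
        same_hour = [history[n - d * hours_per_day + h]
                     for d in range(1, window_days + 1)]
        forecast.append(statistics.median(same_hour))
    return forecast

# 30 days of hourly "TEC" with a clean diurnal cycle at one grid point
history = [10 + 5 * math.sin(2 * math.pi * (t % 24) / 24)
           for t in range(30 * 24)]
next_day = medimod_forecast(history)
```

The median makes the forecast robust to the occasional outlier or data gap in the trailing 27-day window, which matches the data-gap-handling advantage the abstract attributes to MediMod.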
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
Fuzzy time series have been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of the universe of discourse, the content of the forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy compared to other fuzzy-time-series forecasting models.
Multiscale multifractal time irreversibility analysis of stock markets
NASA Astrophysics Data System (ADS)
Jiang, Chenguang; Shang, Pengjian; Shi, Wenbin
2016-11-01
Time irreversibility is one of the most important properties of nonstationary time series. Complex time series often demonstrate even multiscale time irreversibility, such that not only the original but also coarse-grained time series are asymmetric over a wide range of scales. We study the multiscale time irreversibility of time series. In this paper, we develop a method called multiscale multifractal time irreversibility analysis (MMRA), which allows us to extend the description of time irreversibility to include the dependence on the segment size and statistical moments. We test the effectiveness of MMRA in detecting multifractality and time irreversibility of time series generated from delayed Henon map and binomial multifractal model. Then we employ our method to the time irreversibility analysis of stock markets in different regions. We find that the emerging market has higher multifractality degree and time irreversibility compared with developed markets. In this sense, the MMRA method may provide new angles in assessing the evolution stage of stock markets.
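A multiscale irreversibility check in the spirit of the abstract can be sketched by coarse-graining the series at several scales and measuring the up/down increment asymmetry at each scale. This simple asymmetry statistic is a stand-in for the paper's MMRA, which additionally varies the statistical moments.

```python
import random

def coarse_grain(x, scale):
    """Average non-overlapping blocks of length `scale`."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]

def asymmetry(x):
    """|P(increment up) - 0.5|: near 0 for a time-reversible series."""
    diffs = [b - a for a, b in zip(x, x[1:]) if b != a]
    ups = sum(1 for d in diffs if d > 0)
    return abs(ups / len(diffs) - 0.5)

def multiscale_irreversibility(x, scales=(1, 2, 4, 8)):
    return {s: asymmetry(coarse_grain(x, s)) for s in scales}

rng = random.Random(6)
reversible = [rng.gauss(0, 1) for _ in range(4000)]   # white noise
sawtooth = [(t % 10) / 10 for t in range(4000)]       # slow rises, sudden drops

rev = multiscale_irreversibility(reversible)
irr = multiscale_irreversibility(sawtooth)
```

Reversing a sawtooth swaps slow rises for slow falls, so its statistics differ under time reversal at every scale, whereas white noise looks the same forwards and backwards; a market in an early, strongly multifractal stage would, per the abstract, sit closer to the sawtooth end of this spectrum.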
Yang, Heping; Zhang, Hongwu; Chen, Haidi; Yang, Shuxiong; Wang, Jun; Hu, Dawang
2016-04-01
To compare the effectiveness of complex defects repair between using chimeric anterolateral thigh flap and series-wound flaps after resection of oral and maxillofacial cancer. After resection of oral and maxillofacial cancer, defect was repaired with chimeric anterolateral thigh flap in 39 patients between January 2011 and July 2014 (chimeric anterolateral thigh flap group); and defect was repaired with series-wound flaps in 35 patients between January 2009 and December 2010 (series-wound flaps group). There was no significant difference in gender, age, duration of disease, tumor type, tumor staging, defect location, and defect area between 2 groups (P > 0.05). The operation time, flap harvesting and microvascular anastomosis time, stomach tube extraction time, and oral feeding time were recorded and compared between 2 groups, and postoperative complications were observed; the effectiveness was evaluated according to clinical efficacy evaluation table of bone and soft tissue defects reconstruction surgery in oral and maxillofacial region. Vascular crisis occurred in 2 cases of chimeric anterolateral thigh flap group, and 4 cases of series-wound flaps group. Partial necrosis appeared at distal end of a series-wound flaps, and oral fistula and infection developed in 3 series-wound flaps. The other flaps and the grafted skin at donor site survived; wounds at recipient site healed by first intention. The operation time, stomach tube extraction time, and oral feeding time of chimeric anterolateral thigh flap group were significantly shorter than those of series-wound flaps group (P < 0.05), while the flap harvesting and microvascular anastomosis time was significantly longer than that of series-wound flaps group (P < 0.05). The patients were followed up 1-5 years (mean, 2.5 years). 
At 3 months after operation, the appearance, patients' satisfaction, working condition, oral closure function, chewing, language performance, and swallowing scores of the chimeric anterolateral thigh flap group were significantly better than those of the series-wound flaps group (P < 0.05), while there was no significant difference in diet, mouth opening degree, oral cavity water-holding test, or occlusion scores between the 2 groups (P > 0.05). Using a chimeric anterolateral thigh flap for defect repair after resection of oral and maxillofacial cancer can significantly shorten the operation time, accelerate postoperative rehabilitation, and help the functional recovery of oral closure, chewing, language performance, and swallowing when compared with series-wound flaps.
ERIC Educational Resources Information Center
Somers, Marie-Andrée; Zhu, Pei; Jacob, Robin; Bloom, Howard
2013-01-01
In this paper, we examine the validity and precision of two nonexperimental study designs (NXDs) that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-difference (DD) design. In a CITS design, program impacts are evaluated by looking at whether the treatment group deviates from its…
Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun
2017-12-01
Identifying biologically meaningful gene expression patterns from time series gene expression data is important for understanding the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of both time and sample dimensions. Thus, the analysis of such data seeks gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting three-dimensional data, i.e. gene-time-condition. The computational complexity of analyzing such data is very high, exceeding that of the already NP-hard two-dimensional biclustering problem. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns across sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters to detect similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high-throughput sequencing platforms, we demonstrate that TimesVector successfully detects biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools, and only TimesVector successfully detected clusters with differential expression patterns across conditions. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr.
Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved.
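Step (i) above (clustering of time-condition concatenated vectors) can be illustrated on toy data. The minimal k-means below is an illustrative stand-in for whatever dimension-reduction and clustering engine TimesVector actually uses; all names, shapes, and parameters are assumptions.

```python
import numpy as np

def concat_conditions(tensor):
    """Flatten a (genes x conditions x timepoints) tensor into
    gene-wise time-condition concatenated vectors."""
    g, c, t = tensor.shape
    return tensor.reshape(g, c * t)

def kmeans(X, k, iters=20):
    """Minimal Lloyd's k-means with deterministic farthest-first seeding
    (a stand-in for a real clustering library)."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy data: 20 "genes", 2 conditions, 10 timepoints; half rise, half fall.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 10)
rising = np.stack([np.stack([t, t]) for _ in range(10)])
falling = np.stack([np.stack([1 - t, 1 - t]) for _ in range(10)])
tensor = np.concatenate([rising, falling]) + 0.01 * rng.standard_normal((20, 2, 10))
labels = kmeans(concat_conditions(tensor), k=2)
```

Concatenating the condition axis before clustering is what lets a single pass separate genes whose patterns differ across conditions, which is the point of the triclustering formulation.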
NASA Astrophysics Data System (ADS)
WANG, D.; Wang, Y.; Zeng, X.
2017-12-01
Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, Wavelet De-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influences at some representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP-error Back Propagation, MLP-Multilayer Perceptron and RBF-Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate the model performance. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when the extreme events are included within a time series.
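The WD step can be illustrated with a one-level Haar transform plus soft thresholding under the universal threshold. A real application would use a deeper decomposition via a wavelet library; treat this as a schematic of the de-noising idea, not the authors' pipeline.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inverse_haar(a, d):
    out = np.empty(2 * len(a))
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def denoise(x):
    """Soft-threshold the finest-scale details with the universal threshold."""
    a, d = haar_step(x)
    sigma = np.median(np.abs(d)) / 0.6745          # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
    return inverse_haar(a, d)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.3 * rng.standard_normal(1024)
smooth = denoise(noisy)
```

Because the noise is spread across all scales while the signal concentrates in the coarse approximation, thresholding the details removes noise energy with little distortion of the underlying series.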
Monitoring of tissue ablation using time series of ultrasound RF data.
Imani, Farhad; Wu, Mark Z; Lasso, Andras; Burdette, Everett C; Daoud, Mohammad; Fitchinger, Gabor; Abolmaesumi, Purang; Mousavi, Parvin
2011-01-01
This paper is the first report on the monitoring of tissue ablation using ultrasound RF echo time series. We calculate frequency- and time-domain features of time series of RF echoes from stationary tissue and transducer, and correlate them with ablated and non-ablated tissue properties. We combine these features in a nonlinear classification framework and demonstrate up to 99% classification accuracy in distinguishing ablated and non-ablated regions of tissue, in areas as small as 12 mm² in size. We also demonstrate significant improvement in ablated tissue classification using RF time series compared to the conventional approach of using single RF scan lines. The results of this study suggest RF echo time series as a promising approach for monitoring ablation and capturing the changes in tissue microstructure that result from heat-induced necrosis.
A harmonic linear dynamical system for prominent ECG feature extraction.
Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc
2014-01-01
Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. For the clustering results to be efficient, the prominent features extracted by preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology effectively support the accuracy and reliability of the clustering results. In particular, the empirical evaluation results demonstrate improved clustering performance compared to the previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.
A new approach for measuring power spectra and reconstructing time series in active galactic nuclei
NASA Astrophysics Data System (ADS)
Li, Yan-Rong; Wang, Jian-Min
2018-05-01
We provide a new approach to measure power spectra and reconstruct time series in active galactic nuclei (AGNs) based on the fact that the Fourier transform of AGN stochastic variations is a series of complex Gaussian random variables. The approach parametrizes a stochastic series in frequency domain and transforms it back to time domain to fit the observed data. The parameters and their uncertainties are derived in a Bayesian framework, which also allows us to compare the relative merits of different power spectral density models. The well-developed fast Fourier transform algorithm together with parallel computation enables an acceptable time complexity for the approach.
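The frequency-domain parametrization can be sketched as follows, assuming a simple power-law PSD. This is the common simulation recipe of drawing complex Gaussian Fourier coefficients with variances set by the PSD and inverting them (often attributed to Timmer & König), not the paper's Bayesian fitting code.

```python
import numpy as np

def simulate_from_psd(psd, n, rng):
    """Draw complex Gaussian Fourier coefficients with variance given by the
    PSD and invert them to a real-valued time series."""
    freqs = np.fft.rfftfreq(n, d=1.0)[1:]            # skip the zero frequency
    amp = np.sqrt(psd(freqs) / 2.0)
    coeffs = amp * (rng.standard_normal(len(freqs))
                    + 1j * rng.standard_normal(len(freqs)))
    spectrum = np.concatenate(([0.0], coeffs))        # zero-mean series
    return np.fft.irfft(spectrum, n)

rng = np.random.default_rng(42)
series = simulate_from_psd(lambda f: f ** -2.0, 1024, rng)   # red-noise PSD
```

Fitting proceeds in the opposite direction: the same parametrization is transformed back to the time domain and compared against the observed light curve, with the PSD parameters explored in a Bayesian framework as the abstract describes.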
A comparison of high-frequency cross-correlation measures
NASA Astrophysics Data System (ADS)
Precup, Ovidiu V.; Iori, Giulia
2004-12-01
On a high-frequency scale, time series are not homogeneous, so standard correlation measures cannot be applied directly to the raw data. There are two ways to deal with this problem. The time series can be homogenised through an interpolation method (An Introduction to High-Frequency Finance, Academic Press, NY, 2001) (linear or previous tick) and the Pearson correlation statistic then computed. Recently, methods that can handle raw non-synchronous time series have been developed (Int. J. Theor. Appl. Finance 6(1) (2003) 87; J. Empirical Finance 4 (1997) 259). This paper compares two traditional methods that use interpolation with an alternative method applied directly to the actual time series.
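A minimal sketch of the interpolation route, assuming previous-tick sampling onto a regular grid followed by the Pearson statistic; the function names and the test data are illustrative, not from the paper.

```python
import numpy as np

def previous_tick(times, values, grid):
    """Sample an inhomogeneous series on a regular grid using the last
    observation at or before each grid point."""
    idx = np.searchsorted(times, grid, side="right") - 1
    return values[np.clip(idx, 0, None)]

def sync_corr(t1, x1, t2, x2, step):
    """Pearson correlation of two irregular series after previous-tick
    synchronization onto a common grid."""
    start = max(t1[0], t2[0])
    stop = min(t1[-1], t2[-1])
    grid = np.arange(start, stop, step)
    a = previous_tick(t1, x1, grid)
    b = previous_tick(t2, x2, grid)
    return np.corrcoef(a, b)[0, 1]

# Usage: two irregular samplings of the same random walk stay highly correlated.
rng = np.random.default_rng(3)
path = np.cumsum(rng.standard_normal(1000))
t_all = np.arange(1000.0)
obs1 = np.sort(rng.choice(1000, 300, replace=False))
obs2 = np.sort(rng.choice(1000, 300, replace=False))
rho = sync_corr(t_all[obs1], path[obs1], t_all[obs2], path[obs2], step=5.0)
```

The alternative covariance estimators the paper compares against work on the raw tick times directly and avoid the bias that this kind of interpolation introduces at fine sampling steps.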
Use of the Box and Jenkins time series technique in traffic forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nihan, N.L.; Holmesland, K.O.
The use of recently developed time series techniques for short-term traffic volume forecasting is examined. A data set containing monthly volumes on a freeway segment for 1968-76 is used to fit a time series model. The resultant model is used to forecast volumes for 1977. The forecast volumes are then compared with actual volumes in 1977. Time series techniques can be used to develop highly accurate and inexpensive short-term forecasts. The feasibility of using these models to evaluate the effects of policy changes or other outside impacts is considered. (1 diagram, 1 map, 14 references, 2 tables)
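A Box-Jenkins style fit-and-forecast loop can be sketched with a hand-rolled AR(1) estimate. A real study would identify the model order from the ACF/PACF and use a full ARIMA routine (e.g. a statistics library); this is only a minimal illustration, and the simulated data are assumptions.

```python
import numpy as np

def fit_ar1(x):
    """Least-squares estimate of x_t = c + phi * x_{t-1} + e_t."""
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    c, phi = np.linalg.lstsq(X, x[1:], rcond=None)[0]
    return c, phi

def forecast_ar1(x, c, phi, steps):
    """Iterate the fitted recursion forward from the last observation."""
    out, last = [], x[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return np.array(out)

# Simulate an AR(1) with phi = 0.8, then recover it and forecast 12 steps
# (analogous to fitting on 1968-76 monthly volumes and forecasting 1977).
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
c, phi = fit_ar1(x)
future = forecast_ar1(x, c, phi, 12)
```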
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, A.
2005-01-01
We analyzed databases of gait time series from healthy adults and from persons with Parkinson's disease, Huntington's disease, and amyotrophic lateral sclerosis (ALS). We obtained staircase graphs of accumulated events that can be bounded by a straight line whose slope can be used to distinguish between gait time series from healthy and ill persons. The global Hurst exponents of these series show no clear tendencies; we contend that this is because some gait time series have monofractal behavior and others have multifractal behavior, so they cannot be characterized by a single Hurst exponent. We calculated the multifractal spectra and their widths and found that the spectra of healthy young persons are almost monofractal, whereas the spectra of ill persons are wider than those of healthy persons. In contrast to interbeat time series, where pathology implies a loss of multifractality, in gait time series multifractal behavior emerges with pathology. Data were collected from healthy and ill subjects as they walked along a roughly circular path, with sensors on both feet, so there is one time series for the left foot and another for the right foot. First, we analyzed these time series separately, and then we compared the results, both by direct comparison and by cross-correlation analysis. We tried to find differences between the two time series that can be used as indicators of equilibrium problems.
Visual analytics techniques for large multi-attribute time series data
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel A.
2008-01-01
Time series data commonly occur when variables are monitored over time. Many real-world applications involve the comparison of long time series across multiple variables (multi-attributes). Business people often want to compare this year's monthly sales with last year's sales to make decisions. Data warehouse administrators (DBAs) want to know their daily data-loading job performance and need to detect outliers early enough to act upon them. In this paper, two new visual analytic techniques are introduced: the color cell-based Visual Time Series Line Charts and Maps highlight significant changes over time in long time series data, and the new Visual Content Query facilitates finding the contents and histories of interesting patterns and anomalies, which leads to root-cause identification. We have applied both methods to two real-world applications, mining enterprise data warehouse and customer credit card fraud data, to illustrate the wide applicability and usefulness of these techniques.
NASA Astrophysics Data System (ADS)
Unnikrishnan, Poornima; Jothiprakash, Vinayakam
2017-04-01
Precipitation is the major component of the hydrologic cycle. Awareness not only of the total amount of rainfall over a catchment but also of the pattern of its spatial and temporal distribution is important for managing water resources systems efficiently. Trend is the long-term direction of a time series; it determines the overall pattern of the series. Singular Spectrum Analysis (SSA) is a time series analysis technique that decomposes a time series into small components (eigentriples). This property of SSA has been utilized to extract the trend component of the rainfall time series. To derive the trend from a rainfall time series, the components corresponding to trend must be selected from the eigentriples. For this purpose, periodogram analysis of the eigentriples is proposed in the present study to be coupled with SSA. Seasonal England and Wales Precipitation (EWP) data for the period 1766-2013 have been analyzed, and a nonlinear trend has been derived from the precipitation data. To benchmark the performance of SSA in deriving the trend component, the Mann-Kendall (MK) test is also used to detect trends in the EWP seasonal series, and the results have been compared. The results showed that the MK test could detect the presence of a positive or negative trend at a given significance level, whereas the proposed SSA methodology could extract the nonlinear trend present in the rainfall series along with its shape. The comparison of both methodologies, along with the results, will be discussed further in the presentation.
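The SSA decomposition and trend extraction can be sketched as follows, assuming the standard embed/SVD/diagonal-averaging recipe with the leading eigentriple taken as trend. The periodogram-based component selection the study proposes is not reproduced here, and the toy series is an assumption.

```python
import numpy as np

def ssa_trend(x, window, rank=1):
    """Basic SSA: embed into a trajectory (Hankel) matrix, keep the leading
    eigentriple(s), and diagonally average back to a series (the trend)."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # low-rank part
    # Anti-diagonal averaging back to a length-n series.
    trend = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        trend[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return trend / counts

t = np.arange(200, dtype=float)
series = 0.05 * t + np.sin(2 * np.pi * t / 20)   # trend plus a seasonal cycle
trend = ssa_trend(series, window=40, rank=1)
```

In practice the number and identity of the trend eigentriples are not known in advance, which is exactly the selection problem the periodogram analysis in the study is meant to solve.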
Filter-based multiscale entropy analysis of complex physiological time series.
Xu, Yuesheng; Zhao, Liang
2013-08-01
Multiscale entropy (MSE) has been widely and successfully used in analyzing the complexity of physiological time series. We reinterpret the averaging process in MSE as filtering a time series by a filter of a piecewise constant type. From this viewpoint, we introduce filter-based multiscale entropy (FME), which filters a time series to generate multiple frequency components, and then we compute the blockwise entropy of the resulting components. By choosing filters adapted to the feature of a given time series, FME is able to better capture its multiscale information and to provide more flexibility for studying its complexity. Motivated by the heart rate turbulence theory, which suggests that the human heartbeat interval time series can be described in piecewise linear patterns, we propose piecewise linear filter multiscale entropy (PLFME) for the complexity analysis of the time series. Numerical results from PLFME are more robust to data of various lengths than those from MSE. The numerical performance of the adaptive piecewise constant filter multiscale entropy without prior information is comparable to that of PLFME, whose design takes prior information into account.
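The averaging-based MSE that the paper reinterprets (and that FME generalizes by replacing the piecewise-constant filter with other filters) can be sketched as below. The tolerance and scale choices are conventional assumptions, not the authors' settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """SampEn: negative log of the ratio of (m+1)- to m-length template
    matches within Chebyshev tolerance r (self-matches excluded)."""
    n = len(x)

    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(n - mm)])
        c = 0
        for i in range(len(templ) - 1):
            dist = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(dist <= r))
        return c

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales, m=2):
    r = 0.15 * np.std(x)            # tolerance fixed from the original series
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)  # piecewise-constant filter
        out.append(sample_entropy(coarse, m, r))
    return out

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)
mse_curve = multiscale_entropy(noise, scales=[1, 2, 5])
```

The `reshape(...).mean(...)` line is the piecewise-constant filtering the paper identifies; FME swaps it for filters adapted to the signal, such as the piecewise-linear filters motivated by heart rate turbulence.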
Wavelet entropy of BOLD time series: An application to Rolandic epilepsy.
Gupta, Lalit; Jansen, Jacobus F A; Hofman, Paul A M; Besseling, René M H; de Louw, Anton J A; Aldenkamp, Albert P; Backes, Walter H
2017-12-01
To assess the wavelet entropy for the characterization of intrinsic aberrant temporal irregularities in the time series of resting-state blood-oxygen-level-dependent (BOLD) signal fluctuations. Further, to evaluate the temporal irregularities (disorder/order) on a voxel-by-voxel basis in the brains of children with Rolandic epilepsy. The BOLD time series was decomposed using the discrete wavelet transform and the wavelet entropy was calculated. Using a model time series consisting of multiple harmonics and nonstationary components, the wavelet entropy was compared with Shannon and spectral (Fourier-based) entropy. As an application, the wavelet entropy in 22 children with Rolandic epilepsy was compared to 22 age-matched healthy controls. The images were obtained by performing resting-state functional magnetic resonance imaging (fMRI) using a 3T system, an 8-element receive-only head coil, and an echo planar imaging pulse sequence (T2*-weighted). The wavelet entropy was also compared to spectral entropy, regional homogeneity, and Shannon entropy. Wavelet entropy was found to identify the nonstationary components of the model time series. In Rolandic epilepsy patients, a significantly elevated wavelet entropy was observed relative to controls for the whole cerebrum (P = 0.03). Spectral entropy (P = 0.41), regional homogeneity (P = 0.52), and Shannon entropy (P = 0.32) did not reveal significant differences. The wavelet entropy measure appeared more sensitive in detecting abnormalities in cerebral fluctuations represented by nonstationary effects in the BOLD time series than more conventional measures. This effect was observed in the model time series as well as in Rolandic epilepsy. These observations suggest that the brains of children with Rolandic epilepsy exhibit stronger nonstationary temporal signal fluctuations than controls. Level of Evidence: 2. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2017;46:1728-1737. © 2017 International Society for Magnetic Resonance in Medicine.
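The wavelet-entropy computation can be sketched with a Haar decomposition, assuming the usual definition as the Shannon entropy of the relative wavelet energy across scales. The study's mother wavelet, decomposition depth, and fMRI preprocessing are not reproduced; the test signals are assumptions.

```python
import numpy as np

def haar_decompose(x, levels):
    """Multi-level Haar DWT; returns detail coefficients per level plus the
    final approximation."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        details.append(d)
    return details, a

def wavelet_entropy(x, levels=6):
    """Shannon entropy of the relative wavelet energy across scales:
    low for band-limited signals, high for broadband/irregular ones."""
    details, approx = haar_decompose(x, levels)
    energies = np.array([np.sum(d ** 2) for d in details] + [np.sum(approx ** 2)])
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
sine = np.sin(2 * np.pi * 16 * np.arange(1024) / 1024)  # energy in few bands
noise = rng.standard_normal(1024)                        # energy in all bands
we_noise = wavelet_entropy(noise)
we_sine = wavelet_entropy(sine)
```

A regular oscillation concentrates its energy in a few scales (low entropy), while irregular, broadband fluctuations spread energy across scales (high entropy), which is the contrast the voxel-wise analysis exploits.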
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-07-25
This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve other statistical characteristics as well as they do. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capability of each algorithm to preserve the characteristics of the historical forecast data sets.
Modified DTW for a quantitative estimation of the similarity between rainfall time series
NASA Astrophysics Data System (ADS)
Djallel Dilmi, Mohamed; Barthès, Laurent; Mallet, Cécile; Chazottes, Aymeric
2017-04-01
Precipitation results from complex meteorological phenomena and can be described as an intermittent process. The spatial and temporal variability of this phenomenon is significant and covers large scales. To analyze and model this variability and/or structure, several studies use a network of rain gauges providing several time series of precipitation measurements. To compare these different time series, the authors compute for each time series some parameters (PDF, rain peak intensity, occurrence, amount, duration, intensity …). However, despite the calculation of these parameters, the comparison of the parameters between two series of measurements remains qualitative. Due to advection processes, when different sensors of an observation network measure precipitation time series that are identical in terms of intermittency or intensities, there is a time lag between the measured series. Analyzing and extracting relevant information on physical phenomena from these precipitation time series implies the development of automatic analytical methods capable of comparing two time series of precipitation measured by different sensors or at two different locations, and thus quantifying their difference/similarity. The limits of the Euclidean distance for measuring the similarity between precipitation time series have been well demonstrated and explained (e.g. the Euclidean distance is very sensitive to phase-shift effects: between two identical but slightly shifted time series, this distance is not negligible). To quantify and analyze these time lags, correlation functions are well established, normalized, and commonly used to measure the spatial dependences required by many applications. However, authors generally observe considerable scatter in the inter-rain-gauge correlation coefficients obtained from individual pairs of rain gauges.
Because of this substantial dispersion of estimated time lags, the interpretation of the inter-correlation is not straightforward. We propose here to use an improvement of the Euclidean distance that integrates the global complexity of the rainfall series. Dynamic Time Warping (DTW), used in speech recognition, allows matching of two time series whose events occur at different instants and provides the most probable time lag. However, the original formulation of DTW suffers from some limitations; in particular, it is not suited to rain intermittency. In this study we present an adaptation of DTW for the analysis of rainfall time series: we used time series from the "Météo France" rain gauge network observed between January 1st, 2007 and December 31st, 2015 at 25 stations located in the Île de France area. We then analyze the results (e.g. the distance, and the relationship between the time lag detected by our method and other measured parameters such as wind speed and direction) to show the ability of the proposed similarity measure to provide useful information on the rain structure. The possibility of using this similarity measure to define a quality indicator for a sensor integrated into an observation network is also envisaged.
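The baseline DTW that the authors adapt can be sketched in its classic dynamic-programming form; the rainfall-specific modifications described above are not included, and the pulse signals are illustrative stand-ins for shifted rain events.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW with unit step moves and L1 local cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A rain-like pulse and a time-shifted copy: the lock-step distance is large,
# while DTW realigns the pulses and stays small.
t = np.arange(100)
pulse = np.exp(-0.5 * ((t - 40) / 5.0) ** 2)
shifted = np.exp(-0.5 * ((t - 55) / 5.0) ** 2)
lockstep = float(np.sum(np.abs(pulse - shifted)))
dtw = dtw_distance(pulse, shifted)
```

This insensitivity to phase shift is precisely the property motivating DTW over the Euclidean distance here; the warping path itself also yields the most probable time lag between the two gauges.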
A study of stationarity in time series by using wavelet transform
NASA Astrophysics Data System (ADS)
Dghais, Amel Abdoullah Ahmed; Ismail, Mohd Tahir
2014-07-01
In this work the core objective is to apply discrete wavelet transform (DWT) functions, namely the Haar, Daubechies, Symlet, Coiflet and discrete approximation of the Meyer wavelets, to non-stationary financial time series data from the US stock market (DJIA30). The data consist of 2048 daily closing-index values starting from December 17, 2004 until October 23, 2012. The unit root test results show that the data are non-stationary in level. In order to study the stationarity of a time series, the autocorrelation function (ACF) is used. Results indicate that the Haar function yields the least noisy series compared with the Daubechies, Symlet, Coiflet and discrete approximation of the Meyer wavelets. In addition, the original data after decomposition by DWT are less noisy than the DWT decomposition of the return time series.
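The ACF-based stationarity check can be sketched as follows; the random-walk/returns pair stands in for the level and differenced index series and is purely illustrative.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function up to max_lag."""
    x = x - np.mean(x)
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[:len(x) - k] * x[k:]) / denom
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(2048))  # non-stationary "level" series
returns = np.diff(walk)                       # stationary increments
acf_walk = acf(walk, 20)
acf_ret = acf(returns, 20)
```

A slowly decaying ACF (as for the level series) signals non-stationarity, consistent with the unit root test result above, while the differenced series' ACF drops to near zero immediately.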
Superstatistical fluctuations in time series: Applications to share-price dynamics and turbulence
NASA Astrophysics Data System (ADS)
van der Straeten, Erik; Beck, Christian
2009-09-01
We report a general technique to study a given experimental time series with superstatistics. Crucial for the applicability of the superstatistics concept is the existence of a parameter β that fluctuates on a large time scale as compared to the other time scales of the complex system under consideration. The proposed method extracts the main superstatistical parameters out of a given data set and examines the validity of the superstatistical model assumptions. We test the method thoroughly with surrogate data sets. Then the applicability of the superstatistical approach is illustrated using real experimental data. We study two examples, velocity time series measured in turbulent Taylor-Couette flows and time series of log returns of the closing prices of some stock market indices.
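The basic superstatistical extraction, estimating a slowly fluctuating β = 1/variance from data windows, can be sketched as below. A full analysis would also verify the timescale separation and fit the β distribution; this shows only the first step, with window sizes and test data as assumptions.

```python
import numpy as np

def local_beta(x, window):
    """Estimate the superstatistical parameter beta = 1/variance in
    non-overlapping windows (assumes locally Gaussian statistics)."""
    n = len(x) // window
    segs = x[:n * window].reshape(n, window)
    return 1.0 / segs.var(axis=1, ddof=1)

# Two variance regimes -> beta fluctuates on a long time scale.
rng = np.random.default_rng(0)
x = np.concatenate([rng.standard_normal(5000),
                    3.0 * rng.standard_normal(5000)])
betas = local_beta(x, window=250)
```

For the method to apply, the window must be long enough for local statistics to equilibrate yet short compared with the timescale on which β itself varies; the two regimes here make that separation visible in the β series.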
NASA Astrophysics Data System (ADS)
Hermosilla, Txomin; Wulder, Michael A.; White, Joanne C.; Coops, Nicholas C.; Hobart, Geordie W.
2017-12-01
The use of time series satellite data allows for the temporally dense, systematic, transparent, and synoptic capture of land dynamics over time. Subsequent to the opening of the Landsat archive, several time series approaches for characterizing landscape change have been developed, often representing a particular analytical time window. The information richness and widespread utility of these time series data have created a need to maintain the currency of time series information via the addition of new data, as it becomes available. When an existing time series is temporally extended, it is critical that previously generated change information remains consistent, thereby not altering reported change statistics or science outcomes based on that change information. In this research, we investigate the impacts and implications of adding additional years to an existing 29-year annual Landsat time series for forest change. To do so, we undertook a spatially explicit comparison of the 29 overlapping years of a time series representing 1984-2012, with a time series representing 1984-2016. Surface reflectance values, and presence, year, and type of change were compared. We found that the addition of years to extend the time series had minimal effect on the annual surface reflectance composites, with slight band-specific differences (r ≥ 0.1) in the final years of the original time series being updated. The area of stand replacing disturbances and determination of change year are virtually unchanged for the overlapping period between the two time-series products. Over the overlapping temporal period (1984-2012), the total area of change differs by 0.53%, equating to an annual difference in change area of 0.019%. Overall, the spatial and temporal agreement of the changes detected by both time series was 96%. Further, our findings suggest that the entire pre-existing historic time series does not need to be re-processed during the update process. 
Critically, given the time series change detection and update approach followed here, science outcomes or reports representing one temporal epoch can be considered stable and will not be altered when a time series is updated with newly available data.
Wang, Dong; Borthwick, Alistair G; He, Handan; Wang, Yuankun; Zhu, Jieyu; Lu, Yuan; Xu, Pengcheng; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin
2018-01-01
Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, wavelet de-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influence at representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP-error Back Propagation, MLP-Multilayer Perceptron and RBF-Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate model performance. Compared to the three other generic methods, the WD-RSPA model consistently produced smaller error measures, meaning that its forecasting capability is better than that of the other models. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when extreme events are included within a time series. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Susanto, R. D.; Setiawan, A.; Zheng, Q.; Sulistyo, B.; Adi, T. R.; Agustiadi, T.; Trenggono, M.; Triyono, T.; Kuswardani, A.
2016-12-01
The seasonal variability of the full-lifetime Aquarius sea surface salinity time series, from August 25, 2011 to June 7, 2015, is compared to salinity time series obtained from in situ observations in the Karimata Strait. The Karimata Strait plays dual roles in water exchange between the Pacific and the Indian Ocean. The salinity in the Karimata Strait is strongly affected by seasonal monsoon winds. During the boreal winter monsoon, northwesterly winds draw low-salinity water from the South China Sea into the Java Sea, and at the same time the Java Sea receives an influx of Indian Ocean water via the Sunda Strait. The Java Sea water reduces the main Indonesian throughflow in the Makassar Strait. Conditions are reversed during the summer monsoon. Low-salinity water from the South China Sea also controls the vertical structure of water properties in the upper layer of the Makassar Strait and the Lombok Strait. As part of the South China Sea and Indonesian Seas Transport/Exchange (SITE) program, a trawl-resistant bottom-mounted CTD was deployed in the Karimata Strait from mid-2010 to mid-2016 at a water depth of 40 m. CTD casts during the mooring recoveries and deployments are used to compare the bottom salinity data. This in situ salinity time series is compared with various Aquarius NASA salinity products (level 2, level 3 ascending and descending tracks, and the seven-day rolling average) to check their consistency and correlation and to perform statistical analysis. Preliminary results show that the seasonal variability of the Aquarius salinity time series has larger amplitude than that of the in situ data.
Complexity multiscale asynchrony measure and behavior for interacting financial dynamics
NASA Astrophysics Data System (ADS)
Yang, Ge; Wang, Jun; Niu, Hongli
2016-08-01
A stochastic financial price process is proposed and investigated using a finite-range multitype contact dynamical system, in an attempt to study the nonlinear behaviors of real asset markets. The virus-spreading process in a finite-range multitype system is used to imitate the interacting behaviors of diverse investment attitudes in a financial market, and empirical research on the descriptive statistics and autocorrelation behaviors of the return time series is performed for different values of the propagation rates. Multiscale entropy analysis is then adopted to study several different shuffled return series, including the original return series, the corresponding reversal series, the randomly shuffled series, the volatility shuffled series and the Zipf-type shuffled series. Furthermore, we propose and compare the multiscale cross-sample entropy and its modification, called composite multiscale cross-sample entropy. We apply them to study the asynchrony of pairs of time series at different time scales.
Characterizing Time Series Data Diversity for Wind Forecasting: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Chartan, Erol Kevin; Feng, Cong
Wind forecasting plays an important role in integrating variable and uncertain wind power into the power grid. Various forecasting models have been developed to improve forecasting accuracy. However, it is challenging to accurately compare the true forecasting performances of different methods and forecasters due to the lack of diversity in forecasting test datasets. This paper proposes a time series characteristic analysis approach to visualize and quantify wind time series diversity. The developed method first calculates six time series characteristic indices from various perspectives. Then principal component analysis is performed to reduce the data dimension while preserving the important information. The diversity of the time series dataset is visualized by the geometric distribution of the newly constructed principal component space. The volume of the 3-dimensional (3D) convex polytope (or the length of the 1D number axis, or the area of the 2D convex polygon) is used to quantify the time series data diversity. The method is tested with five datasets with various degrees of diversity.
2018-03-01
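The indices-then-PCA pipeline described in the abstract above can be sketched in a few lines. The four characteristic indices below are illustrative stand-ins (the paper's six indices are not reproduced here), and `characteristic_indices` and `diversity_volume` are hypothetical helper names:

```python
import numpy as np
from scipy.spatial import ConvexHull

def characteristic_indices(ts):
    """Per-series characteristic indices (illustrative stand-ins for the
    paper's six indices)."""
    diffs = np.diff(ts)
    return np.array([
        ts.mean(),                               # level
        ts.std(),                                # spread
        np.abs(diffs).mean(),                    # mean absolute change
        ((diffs[:-1] * diffs[1:]) < 0).mean(),   # rough oscillation rate
    ])

def diversity_volume(series_list, n_components=3):
    """Project feature vectors onto the leading principal components and
    quantify dataset diversity as the convex-hull volume."""
    X = np.array([characteristic_indices(s) for s in series_list])
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    pcs = Xc @ Vt[:n_components].T
    return ConvexHull(pcs).volume

rng = np.random.default_rng(0)
series = [rng.normal(loc=i, scale=1 + 0.3 * i, size=500) for i in range(12)]
print(diversity_volume(series))
```

A more diverse feature cloud occupies a larger hull, so the volume serves as a scalar diversity score for comparing test datasets.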
Variance fluctuations in nonstationary time series: a comparative study of music genres
NASA Astrophysics Data System (ADS)
Jennings, Heather D.; Ivanov, Plamen Ch.; De Martins, Allan M.; da Silva, P. C.; Viswanathan, G. M.
2004-05-01
An important problem in physics concerns the analysis of audio time series generated by transduced acoustic phenomena. Here, we develop a new method to quantify the scaling properties of the local variance of nonstationary time series. We apply this technique to analyze audio signals obtained from selected genres of music. We find quantitative differences in the correlation properties of high art music, popular music, and dance music. We discuss the relevance of these objective findings in relation to the subjective experience of music.
Academic Workload and Working Time: Retrospective Perceptions versus Time-Series Data
ERIC Educational Resources Information Center
Kyvik, Svein
2013-01-01
The purpose of this article is to examine the validity of perceptions by academic staff about their past and present workload and working hours. Retrospective assessments are compared with time-series data. The data are drawn from four mail surveys among academic staff in Norwegian universities undertaken in the period 1982-2008. The findings show…
Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Sornette, Didier
2012-01-01
Notwithstanding the significant efforts to develop estimators of long-range correlations (LRC) and to compare their performance, no clear consensus exists on what is the best method and under which conditions. In addition, synthetic tests suggest that the performance of LRC estimators varies when using different generators of LRC time series. Here, we compare the performances of four estimators [Fluctuation Analysis (FA), Detrended Fluctuation Analysis (DFA), Backward Detrending Moving Average (BDMA), and Centred Detrending Moving Average (CDMA)]. We use three different generators [Fractional Gaussian Noises, and two ways of generating Fractional Brownian Motions]. We find that CDMA has the best performance and DFA is only slightly worse in some situations, while FA performs the worst. In addition, CDMA and DFA are less sensitive to the scaling range than FA. Hence, CDMA and DFA remain “The Methods of Choice” in determining the Hurst index of time series. PMID:23150785
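A minimal DFA-1 Hurst estimator of the kind compared in the abstract above, assuming a noise-like input series; the scale choices and first-order detrending below are simplifications, not the benchmarked implementations:

```python
import numpy as np

def dfa_hurst(x, scales=None):
    """Estimate the Hurst index of a noise-like series via DFA order 1."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())            # integrated profile
    n = len(y)
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(n // 4), 12).astype(int))
    flucts = []
    for s in scales:
        n_seg = n // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # detrend each segment with a linear fit (DFA-1) and collect the MSE
        rms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
               for seg in segs]
        flucts.append(np.sqrt(np.mean(rms)))
    # the fluctuation function scales as F(s) ~ s^alpha; the slope is the estimate
    h, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return h

rng = np.random.default_rng(1)
white = rng.normal(size=4000)
print(dfa_hurst(white))
```

White noise should give an exponent near 0.5, and its running sum (a random walk) near 1.5, which makes a quick sanity check for any such estimator.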
A KST framework for correlation network construction from time series signals
NASA Astrophysics Data System (ADS)
Qi, Jin-Peng; Gu, Quan; Zhu, Ying; Zhang, Ping
2018-04-01
A KST (Kolmogorov-Smirnov test and T statistic) method is used for construction of a correlation network based on the fluctuation of each time series within the multivariate time signals. In this method, each time series is divided equally into multiple segments, and the maximal data fluctuation in each segment is calculated by a KST change detection procedure. Connections between the time series are derived from the data fluctuation matrix and are used for construction of the fluctuation correlation network (FCN). The method was tested with synthetic simulations and the results were compared with those obtained using the KS or T statistic alone for detection of data fluctuation. The novelty of this study is that the correlation analysis is based on the data fluctuation in each segment of each time series rather than on the original time signals, which is more meaningful for many real-world applications and for the analysis of large-scale time signals where prior knowledge is uncertain.
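A simplified sketch of the idea above: per-segment fluctuations feed a correlation matrix that is thresholded into a network. Here the KS statistic of each segment against the whole series stands in for the paper's KST change-detection procedure, and `fluctuation_network` and the threshold `rho` are illustrative choices:

```python
import numpy as np
from scipy import stats

def fluctuation_network(signals, n_seg=10, rho=0.7):
    """Adjacency matrix from correlations between per-segment fluctuation
    vectors (KS statistic of each segment vs. the whole series)."""
    F = []
    for x in signals:
        x = np.asarray(x, float)
        segs = np.array_split(x, n_seg)
        F.append([stats.ks_2samp(seg, x).statistic for seg in segs])
    C = np.corrcoef(np.array(F))          # series-by-series correlation
    A = (np.abs(C) >= rho).astype(int)    # threshold into 0/1 links
    np.fill_diagonal(A, 0)
    return A

rng = np.random.default_rng(6)
s1 = np.concatenate([rng.normal(size=500), rng.normal(loc=3, size=500)])
s2 = s1.copy()                            # identical fluctuation pattern
s3 = rng.normal(size=1000)                # unrelated series
A = fluctuation_network([s1, s2, s3])
print(A)
```

Series with matching fluctuation patterns (here s1 and s2, which share a level shift) become linked, while structurally unrelated series usually do not.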
Comparative Fatigue Lives of Rubber and PVC Wiper Cylindrical Coatings
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.; Savage, Michael
2002-01-01
Three coating materials for rotating cylindrical-coated wiping rollers were fatigue tested in 2 Intaglio printing presses. The coatings were a hard, cross-linked, plasticized PVC thermoset (P-series); a plasticized PVC (A-series); and a hard, nitryl rubber (R-series). Both 2- and 3-parameter Weibull analyses as well as a cost-benefit analysis were performed. The mean life of the R-series coating is 24 and 9 times longer than those of the P- and A-series coatings, respectively. Both the cost and the replacement rate for the R-series coating were significantly less than those for the P- and A-series coatings. At a very high probability of survival, the R-series coating lasts approximately 2 and 6 times as long as the P- and A-series, respectively, before the first failure occurs. Where all coatings are run to failure, using the mean (life) time between removal (MTBR) for each coating to calculate the number of replacements and costs provides qualitatively similar results to those of a Weibull analysis.
Comparison of ITRF2014 station coordinate input time series of DORIS, VLBI and GNSS
NASA Astrophysics Data System (ADS)
Tornatore, Vincenza; Tanır Kayıkçı, Emine; Roggero, Marco
2016-12-01
In this paper, station coordinate time series from three space geodesy techniques that have contributed to the realization of the International Terrestrial Reference Frame 2014 (ITRF2014) are compared. In particular, the height component time series extracted from the official combined intra-technique solutions submitted for ITRF2014 by the DORIS, VLBI and GNSS Combination Centers have been investigated. The main goal of this study is to assess the level of agreement among these three space geodetic techniques. A novel analytic method, modeling time series as discrete-time Markov processes, is presented and applied to the compared time series. The analysis method has proven to be particularly suited to obtaining quasi-cyclostationary residuals, an important property for carrying out a reliable harmonic analysis. We looked for common signatures among the three techniques. Frequencies and amplitudes of the detected signals have been reported along with their percentage of incidence. Our comparison shows that two of the estimated signals, having one-year and 14-day periods, are common to all the techniques. Different hypotheses on the nature of the 14-day signal are presented. As a final check, we compared the estimated velocities and their standard deviations (STD) for the sites with co-located VLBI, GNSS and DORIS stations, obtaining a good agreement among the three techniques in both the horizontal (1.0 mm/yr mean STD) and the vertical (0.7 mm/yr mean STD) components, although some sites show larger STDs, mainly due to lack of data, different data spans or noisy observations.
Higher-Order Hurst Signatures: Dynamical Information in Time Series
NASA Astrophysics Data System (ADS)
Ferenbaugh, Willis
2005-10-01
Understanding and comparing time series from different systems requires characteristic measures of the dynamics embedded in the series. The Hurst exponent is a second-order dynamical measure of a time series which grew up within the blossoming fractal world of Mandelbrot. This characteristic measure is directly related to the behavior of the autocorrelation, the power spectrum, and other second-order measures. And as with these other measures, the Hurst exponent captures and quantifies some but not all of the intrinsic nature of a series. The more elusive characteristics live in the phase spectrum and the higher-order spectra. This research is a continuing quest to (more) fully characterize the dynamical information in time series produced by plasma experiments or models. The goal is to supplement the series information which can be represented by a Hurst exponent, and we would like to develop supplemental techniques in analogy with Hurst's original R/S analysis. These techniques should be another way to plumb the higher-order dynamics.
Volatility Behaviors of Financial Time Series by Percolation System on Sierpinski Carpet Lattice
NASA Astrophysics Data System (ADS)
Pei, Anqi; Wang, Jun
2015-01-01
The financial time series is simulated and investigated by the percolation system on the Sierpinski carpet lattice, where percolation is usually employed to describe the behavior of connected clusters in a random graph, and the Sierpinski carpet lattice is a graph which corresponds to the fractal Sierpinski carpet. To study the fluctuation behavior of returns for the financial model and the Shanghai Composite Index, we establish a daily volatility measure, the multifractal volatility (MFV) measure, to obtain MFV series, which have long-range cross-correlations with the squared daily return series. The autoregressive fractionally integrated moving average (ARFIMA) model is used to analyze the MFV series, on which it performs better than on other volatility series. A comparative study of the multifractality and volatility of the data shows that the simulation data of the proposed model exhibit behaviors very similar to those of the real stock index, which indicates a degree of rationality of the model for market applications.
Conditional heteroscedasticity as a leading indicator of ecological regime shifts.
Seekell, David A; Carpenter, Stephen R; Pace, Michael L
2011-10-01
Regime shifts are massive, often irreversible, rearrangements of nonlinear ecological processes that occur when systems pass critical transition points. Ecological regime shifts sometimes have severe consequences for human well-being, including eutrophication in lakes, desertification, and species extinctions. Theoretical and laboratory evidence suggests that statistical anomalies may be detectable leading indicators of regime shifts in ecological time series, making it possible to foresee and potentially avert incipient regime shifts. Conditional heteroscedasticity is persistent variance characteristic of time series with clustered volatility. Here, we analyze conditional heteroscedasticity as a potential leading indicator of regime shifts in ecological time series. We evaluate conditional heteroscedasticity by using ecological models with and without four types of critical transition. On approaching transition points, all time series contain significant conditional heteroscedasticity. This signal is detected hundreds of time steps in advance of the regime shift. Time series without regime shifts do not have significant conditional heteroscedasticity. Because probability values are easily associated with tests for conditional heteroscedasticity, detection of false positives in time series without regime shifts is minimized. This property reduces the need for a reference system to compare with the perturbed system.
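Conditional heteroscedasticity of the kind described above is commonly tested with Engle's Lagrange multiplier (ARCH) test; a compact version, with a simulated ARCH(1) series standing in for a clustered-volatility ecological series, might look like this (the paper's own detection procedure is not reproduced):

```python
import numpy as np
from scipy import stats

def arch_lm_test(x, lags=4):
    """Engle's LM test for ARCH effects: regress squared deviations on
    their lags; LM = n * R^2 is chi-squared with `lags` df under the null."""
    e2 = (np.asarray(x, float) - np.mean(x)) ** 2
    n = len(e2) - lags
    X = np.column_stack([np.ones(n)] +
                        [e2[lags - k - 1 : lags - k - 1 + n] for k in range(lags)])
    y = e2[lags:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid.var() / y.var()
    lm = n * r2
    return lm, stats.chi2.sf(lm, lags)

# simulate an ARCH(1) series with strong volatility clustering
rng = np.random.default_rng(2)
z = rng.normal(size=3000)
x = np.empty(3000)
x[0] = z[0]
for t in range(1, 3000):
    x[t] = z[t] * np.sqrt(0.2 + 0.7 * x[t - 1] ** 2)
lm, p = arch_lm_test(x)
print(lm, p)
```

The small p-value that the test attaches to a volatility-clustered series is exactly the property the abstract highlights: it lets significance be assessed without a reference system.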
A Synthesis of VIIRS Solar and Lunar Calibrations
NASA Technical Reports Server (NTRS)
Eplee, Robert E.; Turpie, Kevin R.; Meister, Gerhard; Patt, Frederick S.; Fireman, Gwyn F.; Franz, Bryan A.; McClain, Charles R.
2013-01-01
The NASA VIIRS Ocean Science Team (VOST) has developed two independent calibrations of the SNPP VIIRS moderate resolution reflective solar bands using solar diffuser and lunar observations through June 2013. Fits to the solar calibration time series show mean residuals per band of 0.078-0.10%. There are apparent residual lunar libration correlations in the lunar calibration time series that are not accounted for by the ROLO photometric model of the Moon. Fits to the lunar time series that account for residual librations show mean residuals per band of 0.071-0.17%. Comparison of the solar and lunar time series shows that the relative differences in the two calibrations are 0.12-0.31%. Relative uncertainties in the VIIRS solar and lunar calibration time series are comparable to those achieved for SeaWiFS, Aqua MODIS, and Terra MODIS. Intercomparison of the VIIRS lunar time series with those from SeaWiFS, Aqua MODIS, and Terra MODIS shows that the scatter in the VIIRS lunar observations is consistent with that observed for the heritage instruments. Based on these analyses, the VOST has derived a calibration lookup table for VIIRS ocean color data based on fits to the solar calibration time series.
PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting
Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie
2013-01-01
Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision without paying much attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which can segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor network data streams. PMID:23956693
PRESEE: an MDL/MML algorithm to time-series stream segmenting.
Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie
2013-01-01
Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision without paying much attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which can segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor network data streams.
Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.
Malkin, Zinovy
2016-04-01
The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics of geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, the three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
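A minimal non-overlapping Allan variance for an evenly spaced, equally weighted scalar series; the weighted and multidimensional modifications discussed above are omitted from this sketch:

```python
import numpy as np

def allan_variance(x, m):
    """Classical non-overlapping Allan variance at averaging length m:
    half the mean squared difference between successive bin averages."""
    x = np.asarray(x, float)
    n_bins = len(x) // m
    means = x[: n_bins * m].reshape(n_bins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(3)
white = rng.normal(size=20000)
# for unit-variance white noise, AVAR(m) is approximately 1/m
print(allan_variance(white, 1), allan_variance(white, 16))
```

The 1/m decay for white noise (versus flatter curves for flicker-type noise) is what makes AVAR useful for diagnosing the noise type of a coordinate series.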
Clustering of financial time series
NASA Astrophysics Data System (ADS)
D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo
2013-05-01
This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models both based on GARCH models. In general, clustering of financial time series, due to their peculiar features, requires the definition of suitable distance measures. To this aim, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning around medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning around medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances that takes into account the information about the volatility structure of the time series. In order to illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp versions.
Directionality volatility in electroencephalogram time series
NASA Astrophysics Data System (ADS)
Mansor, Mahayaudin M.; Green, David A.; Metcalfe, Andrew V.
2016-06-01
We compare time series of electroencephalograms (EEGs) from healthy volunteers with EEGs from subjects diagnosed with epilepsy. The EEG time series from the healthy group are recorded during awake state with their eyes open and eyes closed, and the records from subjects with epilepsy are taken from three different recording regions of pre-surgical diagnosis: hippocampal, epileptogenic and seizure zone. The comparisons for these 5 categories are in terms of deviations from linear time series models with constant variance Gaussian white noise error inputs. One feature investigated is directionality, and how this can be modelled by either non-linear threshold autoregressive models or non-Gaussian errors. A second feature is volatility, which is modelled by Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) processes. Other features include the proportion of variability accounted for by time series models, and the skewness and the kurtosis of the residuals. The results suggest these comparisons may have diagnostic potential for epilepsy and provide early warning of seizures.
A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis
NASA Astrophysics Data System (ADS)
Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz
2018-04-01
For the first time, we introduce probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove the Common Mode Error (CME) without interpolation of missing values. We used data from the International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series, and CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset have fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed), which, compared to the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of the stations' velocities estimated from filtered residuals: by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analyzed in the context of environmental mass loading influences on the filtering results. Subtracting the environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
Weighted combination of LOD values split into frequency windows
NASA Astrophysics Data System (ADS)
Fernandez, L. I.; Gambis, D.; Arias, E. F.
In this analysis a combined time series of one-day LOD (length-of-day) estimates is presented. We use individual data series derived by 7 GPS and 3 SLR analysis centers, which routinely contributed to the IERS database over a recent 27-month period (Jul 1996 - Oct 1998). The result is compared to the multi-technique combined series C04 produced by the Central Bureau of the IERS, which is commonly used as a reference for the study of Earth rotation variations. The Frequency Windows Combined Series procedure yields a time series which is close to C04 but shows an amplitude difference that might explain the evident periodic behavior present in the differences between these two combined series. This method could be useful for generating a new time series to be used as a reference in studies of high-frequency variations of Earth rotation.
Coupling detrended fluctuation analysis for analyzing coupled nonstationary signals.
Hedayatifar, L; Vahabi, M; Jafari, G R
2011-08-01
When many variables are coupled to each other, a single case study cannot give us thorough and precise information. When these time series are stationary, different methods of random matrix analysis and complex networks can be used. However, in nonstationary cases, the multifractal detrended cross-correlation analysis (MF-DXA) method was introduced for just two coupled time series. In this article, we extend MF-DXA to the method of coupling detrended fluctuation analysis (CDFA) for the case when more than two series are correlated to each other. Here, we calculate the multifractal properties of the coupled time series, and by comparing the CDFA results of the original series with those of the shuffled and surrogate series, we can estimate the source of multifractality and the extent to which our series are coupled to each other. We illustrate the method with selected examples from air pollution and foreign exchange rates.
Quantifying complexity of financial short-term time series by composite multiscale entropy measure
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun
2015-05-01
It is significant to study the complexity of financial time series since the financial market is a complex, evolving dynamic system. Multiscale entropy is a prevailing method used to quantify the complexity of a time series. Because its entropy estimation is less reliable for short-term time series at large time scales, a modified method, the composite multiscale entropy (CMSE), is applied to the financial market. To verify its effectiveness, its applications to synthetic white noise and 1/f noise with different data lengths are first reproduced in the present paper. It is then introduced for the first time to make a reliability test with two Chinese stock indices. When applied to short-term return series, the CMSE method shows advantages in reducing the deviations of entropy estimation and demonstrates more stable and reliable results compared with the conventional MSE algorithm. Finally, the composite multiscale entropy of six important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.
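A compact sketch of the composite multiscale entropy idea described above: fix the tolerance r on the original series, coarse-grain at every offset of the chosen scale, and average the sample entropies. This O(n^2) implementation is illustrative, not the paper's code:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points also match for m + 1 points."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    def n_matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)  # Chebyshev
        return (np.sum(d <= r) - len(t)) / 2.0               # drop self-matches
    b, a = n_matches(m), n_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def composite_mse(x, scale, m=2, r=None):
    """Composite MSE: average SampEn over every coarse-graining offset."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()     # tolerance fixed at the original-series scale
    vals = []
    for k in range(scale):
        trimmed = x[k : k + ((len(x) - k) // scale) * scale]
        cg = trimmed.reshape(-1, scale).mean(axis=1)
        vals.append(sample_entropy(cg, m, r))
    return float(np.mean(vals))

rng = np.random.default_rng(4)
noise = rng.normal(size=600)
print(composite_mse(noise, scale=2))
```

Averaging over all coarse-graining offsets is what reduces the estimation variance for short series relative to the conventional single-offset MSE.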
On the equivalence of case-crossover and time series methods in environmental epidemiology.
Lu, Yun; Zeger, Scott L
2007-04-01
The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover using conditional logistic regression is a special case of time series analysis when there is a common exposure such as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.
New Features for Neuron Classification.
Hernández-Pérez, Leonardo A; Delgado-Castillo, Duniel; Martín-Pérez, Rainer; Orozco-Morales, Rubén; Lorenzo-Ginori, Juan V
2018-04-28
This paper addresses the problem of obtaining new neuron features capable of improving the results of neuron classification. Most studies on neuron classification using morphological features have been based on Euclidean geometry. Here, three one-dimensional (1D) time series are instead derived from the three-dimensional (3D) structure of the neuron, and a spatial time series is then constructed from which the features are calculated. Digitally reconstructed neurons were separated into control and pathological sets, the latter related to alterations caused by epilepsy, Alzheimer's disease (long and local projections), and ischemia. These neuron sets were then subjected to supervised classification and the results were compared considering three sets of features: morphological features, features obtained from the time series, and a combination of both. The best results were obtained using features from the time series, which outperformed classification using only morphological features, showing higher correct classification rates with differences of 5.15, 3.75, and 5.33% for epilepsy and Alzheimer's disease (long and local projections), respectively. The morphological features were better for the ischemia set, with a difference of 3.05%. Features related to the time series, such as variance, Spearman auto-correlation, partial auto-correlation, mutual information, and local minima and maxima, exhibited the best performance. We also compared different feature evaluators, among which ReliefF was the best ranked.
Ultrasonic RF time series for early assessment of the tumor response to chemotherapy.
Lin, Qingguang; Wang, Jianwei; Li, Qing; Lin, Chunyi; Guo, Zhixing; Zheng, Wei; Yan, Cuiju; Li, Anhua; Zhou, Jianhua
2018-01-05
Ultrasound radio-frequency (RF) time series have been shown to carry tissue typing information. To evaluate the potential of RF time series for early prediction of tumor response to chemotherapy, 50 MCF-7 breast cancer-bearing nude mice were randomized to receive cisplatin and paclitaxel (treatment group; n = 26) or sterile saline (control group; n = 24). Sequential ultrasound imaging was performed on days 0, 3, 6, and 8 of treatment to simultaneously collect B-mode images and RF data. Six RF time series features (slope, intercept, S1, S2, S3, and S4) were extracted during RF data analysis and contrasted with microstructural tumor changes on histopathology. Chemotherapy administration reduced tumor growth relative to control on days 6 and 8. Compared with day 0, intercept, S1, and S2 were increased while slope was decreased on days 3, 6, and 8 in the treatment group. Compared with the control group, intercept, S1, S2, S3, and S4 were increased, and slope was decreased, on days 3, 6, and 8 in the treatment group. Tumor cell density in the treatment group decreased significantly by day 3. We conclude that ultrasonic RF time series analysis provides a simple way to noninvasively assess the early tumor response to chemotherapy.
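The slope and intercept features above come from a line fitted to the RF time series. Below is a sketch under the assumption that S1-S4 are spectral band energies; the paper's exact band definitions are not reproduced, so the quarter-spectrum split and the function name are illustrative:

```python
import numpy as np

def rf_time_series_features(ts):
    """Slope/intercept of a fitted line plus four spectral band energies
    (illustrative quarter-spectrum bands, not the paper's definitions)."""
    ts = np.asarray(ts, float)
    t = np.arange(len(ts))
    slope, intercept = np.polyfit(t, ts, 1)          # linear trend features
    spec = np.abs(np.fft.rfft(ts - ts.mean())) ** 2  # power spectrum
    s1, s2, s3, s4 = (float(b.sum()) for b in np.array_split(spec, 4))
    return {"slope": slope, "intercept": intercept,
            "S1": s1, "S2": s2, "S3": s3, "S4": s4}

# synthetic RF intensity series: rising trend plus an oscillatory component
frames = 0.5 * np.arange(100) + np.sin(np.arange(100))
feats = rf_time_series_features(frames)
print(feats)
```

Such per-pixel feature vectors can then be tracked across imaging days, which is the comparison the study performs between treated and control tumors.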
Characterising experimental time series using local intrinsic dimension
NASA Astrophysics Data System (ADS)
Buzug, Thorsten M.; von Stamm, Jens; Pfister, Gerd
1995-02-01
Experimental strange attractors are analysed with the averaged local intrinsic dimension proposed by A. Passamante et al. [Phys. Rev. A 39 (1989) 3640], which is based on singular value decomposition of local trajectory matrices. The results are compared to the Kaplan-Yorke dimension and the correlation dimension. The attractors, reconstructed with Takens' delay time coordinates from scalar velocity time series, are measured in the hydrodynamic Taylor-Couette system. A period doubling route towards chaos obtained from a very short Taylor-Couette cylinder yields a sequence of experimental time series to which the local intrinsic dimension is applied.
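A simplified version of the averaged local intrinsic dimension: delay-embed the series, take the k nearest neighbours of sampled reference points, and count the singular values of the centred local trajectory matrix that exceed a fraction of the largest. The parameter choices below are illustrative, not Passamante et al.'s:

```python
import numpy as np

def local_intrinsic_dimension(x, dim=6, delay=1, k=20, thresh=0.1,
                              n_ref=100, seed=0):
    """Average local intrinsic dimension from SVDs of local trajectory
    matrices built around randomly sampled reference points."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    rng = np.random.default_rng(seed)
    refs = rng.choice(n, size=min(n_ref, n), replace=False)
    dims = []
    for i in refs:
        d2 = np.sum((emb - emb[i]) ** 2, axis=1)
        nbrs = emb[np.argsort(d2)[:k]]               # k nearest neighbours
        sv = np.linalg.svd(nbrs - nbrs.mean(axis=0), compute_uv=False)
        dims.append(int(np.sum(sv >= thresh * sv[0])))
    return float(np.mean(dims))

t = np.arange(4000)
sine = np.sin(0.37 * t)                  # trajectory on a closed curve
rng = np.random.default_rng(5)
noise = rng.normal(size=4000)            # fills the embedding space
print(local_intrinsic_dimension(sine), local_intrinsic_dimension(noise))
```

A low-dimensional trajectory such as the sine yields a small average dimension, while noise fills the embedding space and yields a value near the embedding dimension, which is the contrast the method exploits for experimental attractors.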
The MEM of spectral analysis applied to L.O.D.
NASA Astrophysics Data System (ADS)
Fernandez, L. I.; Arias, E. F.
The maximum entropy method (MEM) has been widely applied in polar motion studies, taking advantage of its performance in the management of complex time series. The authors used the MEM algorithm to estimate the cross-spectral function in order to compare interannual Length-of-Day (LOD) time series with Southern Oscillation Index (SOI) and Sea Surface Temperature (SST) series, which are closely related to El Niño-Southern Oscillation (ENSO) events.
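Before any cross-spectral estimation, a lagged cross-correlation already shows whether two interannual series co-vary at a delay. The sketch below is illustrative only (pure Python, not the MEM algorithm of the abstract), with synthetic stand-ins for the SOI and LOD series:

```python
import math

def xcorr(x, y, max_lag):
    """Normalized cross-correlation of two equal-length series at lags 0..max_lag.
    A peak at lag L suggests x lags y by L steps."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return {lag: sum((x[i] - mx) * (y[i - lag] - my) for i in range(lag, n)) / (sx * sy)
            for lag in range(max_lag + 1)}

# Synthetic stand-ins: an "SOI-like" oscillation and an "LOD-like" delayed copy.
soi = [math.sin(2 * math.pi * i / 12) for i in range(48)]
lod = soi[-3:] + soi[:-3]                 # the same signal delayed by 3 steps
c = xcorr(lod, soi, 6)
best_lag = max(c, key=c.get)              # lag with the strongest correlation
```

For the delayed copy above, the correlation peaks at the true 3-step lag; a cross-spectral method such as MEM resolves the same relationship frequency by frequency.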
Design considerations for case series models with exposure onset measurement error.
Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V
2013-02-28
The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.
Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis
NASA Astrophysics Data System (ADS)
Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal
Missing values in time series data are a well-known and important problem which researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation than other methods.
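Every SSA variant, including the L1 version above, starts by embedding the series into a Hankel "trajectory" matrix. A minimal sketch of that embedding step (the SVD-based reconstruction and the L1-norm decomposition are not reproduced here):

```python
def trajectory_matrix(series, window):
    """First step of SSA: embed the series into an L x K Hankel (trajectory)
    matrix, K = N - L + 1, whose column j is series[j : j + L]."""
    k = len(series) - window + 1
    return [[series[i + j] for j in range(k)] for i in range(window)]

M = trajectory_matrix([1, 2, 3, 4, 5, 6], 3)   # 3 x 4, constant anti-diagonals
```

SSA then decomposes M (classically by SVD, here by an L1-norm criterion) and averages anti-diagonals to reconstruct the series, filling the missing entries.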
Neural network versus classical time series forecasting models
NASA Astrophysics Data System (ADS)
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems: an ANN is a data-driven approach that can be trained to map past values of a time series. In this study the forecast performance of a neural network and a classical time series forecasting method, the seasonal autoregressive integrated moving average (SARIMA) model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used for data preprocessing.
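The preprocessing and accuracy measures named above are standard and easy to state; a minimal pure-Python sketch (the series and predictions below are made-up placeholders, not the gold-price data):

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform; reduces to the log transform when lam == 0."""
    return [math.log(v) if lam == 0 else (v ** lam - 1) / lam for v in x]

def mad(actual, pred):
    """Mean absolute deviation of forecasts from observations."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean square error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mape(actual, pred):
    """Mean absolute percentage error."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

actual = [100.0, 110.0, 120.0]     # placeholder "gold price" observations
pred = [98.0, 112.0, 117.0]        # placeholder forecasts
scores = (mad(actual, pred), rmse(actual, pred), mape(actual, pred))
```

In studies like the one above, the transform is applied before model fitting and inverted before scoring, so all three metrics are computed on the original scale.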
An evaluation of dynamic mutuality measurements and methods in cyclic time series
NASA Astrophysics Data System (ADS)
Xia, Xiaohua; Huang, Guitian; Duan, Na
2010-12-01
Several measurements and techniques have been developed in econometrics to detect dynamic mutuality and synchronicity of time series. This study compares the performance of five methods, i.e., linear regression, dynamic correlation, Markov switching models, the concordance index and recurrence quantification analysis, through numerical simulations. We evaluate the abilities of these methods to capture structural change and cyclicity in time series; the findings offer guidance to both academic and empirical researchers. Illustrative examples are also provided to demonstrate the subtle differences between these techniques.
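Of the five methods, the concordance index is the simplest to state: the fraction of periods two demeaned series spend in the same cyclical phase. A hedged sketch of the sign-based (Harding-Pagan style) form; the paper's exact variant may differ:

```python
def concordance_index(x, y):
    """Fraction of periods in which two demeaned series have the same sign,
    i.e., are in the same phase of their cycles."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    same = sum(1 for a, b in zip(x, y) if (a - mx) * (b - my) >= 0)
    return same / len(x)

a = [1, 2, -1, -2] * 5          # a stylized cycle
b = [-v for v in a]             # a perfectly counter-cyclical partner
```

The index is 1 for perfectly synchronized cycles and 0 for perfectly opposed ones, which makes it easy to benchmark against the model-based measures in the study.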
The time series approach to short term load forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagan, M.T.; Behr, S.M.
The application of time series analysis methods to load forecasting is reviewed. It is shown that Box-Jenkins time series models, in particular, are well suited to this application. The logical and organized procedures for model development using the autocorrelation function make these models particularly attractive. One of the drawbacks of these models is their inability to accurately represent the nonlinear relationship between load and temperature. A simple procedure for overcoming this difficulty is introduced, and several Box-Jenkins models are compared with a forecasting procedure currently used by a utility company.
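As a toy illustration of the autocorrelation-driven fitting the review highlights, here is an AR(1) coefficient estimated from the lag-1 sample autocorrelation (the Yule-Walker estimate for this model order); the "load" series is synthetic, and a full Box-Jenkins identification involves far more than this:

```python
def ar1_fit(x):
    """Estimate an AR(1) coefficient as the lag-1 sample autocorrelation
    (the Yule-Walker estimate for this model order)."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i - 1] - m) for i in range(1, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def ar1_forecast(x, phi):
    """One-step-ahead forecast for a mean-reverting AR(1) process."""
    m = sum(x) / len(x)
    return m + phi * (x[-1] - m)

x = [float((-1) ** i) for i in range(100)]   # strongly anti-persistent toy "load"
phi = ar1_fit(x)                             # close to -1 for this series
```

The handling of the nonlinear load-temperature relationship described in the paper would enter as an additional regression term; the abstract does not specify its exact form, so it is not sketched here.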
NASA Technical Reports Server (NTRS)
Acker, James G.; Shen, Suhung; Leptoukh, Gregory G.; Lee, Zhongping
2012-01-01
Oceanographic time-series stations provide vital data for the monitoring of oceanic processes, particularly those associated with trends over time and interannual variability. There are likely numerous locations where the establishment of a time-series station would be desirable, but for reasons of funding or logistics, such establishment may not be feasible. An alternative to an operational time-series station is monitoring of sites via remote sensing. In this study, the NASA Giovanni data system is employed to simulate the establishment of two time-series stations near the outflow region of California's Eel River, which carries a high sediment load. Previous time-series analysis of this location (Acker et al. 2009) indicated that remotely-sensed chl a exhibits a statistically significant increasing trend during summer (low flow) months, but no apparent trend during winter (high flow) months. Examination of several newly-available ocean data parameters in Giovanni, including 8-day resolution data, demonstrates the differences in ocean parameter trends at the two locations compared to regionally-averaged time-series. The hypothesis that the increased summer chl a values are related to increasing SST is evaluated, and the signature of the Eel River plume is defined with ocean optical parameters.
NASA Astrophysics Data System (ADS)
Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł
2018-01-01
The paper presents key issues from the preliminary stage of a proposed extended equivalence assessment for new portable devices: the comparability of hourly PM10 concentration series with reference-station measurements using statistical methods. Technical aspects of the new portable meters are presented. Emphasis is placed on a methodological concept for comparing results with stochastic and exploratory methods, based on the observation that a simple comparison of result series in the time domain is insufficient: regularity should be compared in three complementary fields of statistical modeling, namely time, frequency and space. The proposal is based on models of five annual series of measurement results from the new mobile devices and from a WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence of the new devices' measurements with the reference.
Coil-to-coil physiological noise correlations and their impact on fMRI time-series SNR
Triantafyllou, C.; Polimeni, J. R.; Keil, B.; Wald, L. L.
2017-01-01
Purpose Physiological nuisance fluctuations (“physiological noise”) are a major contribution to the time-series Signal to Noise Ratio (tSNR) of functional imaging. While thermal noise correlations between array coil elements have a well-characterized effect on the image Signal to Noise Ratio (SNR0), the element-to-element covariance matrix of the time-series fluctuations has not yet been analyzed. We examine this effect with a goal of ultimately improving the combination of multichannel array data. Theory and Methods We extend the theoretical relationship between tSNR and SNR0 to include a time-series noise covariance matrix Ψt, distinct from the thermal noise covariance matrix Ψ0, and compare its structure to Ψ0 and the signal coupling matrix SSH formed from the signal intensity vectors S. Results Inclusion of the measured time-series noise covariance matrix into the model relating tSNR and SNR0 improves the fit of experimental multichannel data and is shown to be distinct from Ψ0 or SSH. Conclusion Time-series noise covariances in array coils are found to differ from Ψ0 and more surprisingly, from the signal coupling matrix SSH. Correct characterization of the time-series noise has implications for the analysis of time-series data and for improving the coil element combination process. PMID:26756964
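For context, the scalar tSNR model that this work generalizes to the multichannel, covariance-aware case is usually written tSNR = SNR0 / sqrt(1 + λ²·SNR0²), with λ the signal-proportional physiological noise fraction (the Krüger-Glover form). A minimal sketch of that scalar relation only; the paper's matrix extension with Ψt is not reproduced:

```python
import math

def tsnr(snr0, lam):
    """Scalar tSNR model: thermal noise plus a physiological noise component
    proportional to signal (fraction lam). tSNR saturates at 1/lam as SNR0 grows."""
    return snr0 / math.sqrt(1.0 + (lam * snr0) ** 2)

# At low SNR0 thermal noise dominates (tSNR ~ SNR0); at high SNR0 the curve
# plateaus near 1/lam, which is why raising image SNR eventually stops helping.
plateau = tsnr(2000, 0.01)     # just below the 1/lam = 100 asymptote
```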
The Design of Time-Series Comparisons under Resource Constraints.
ERIC Educational Resources Information Center
Willemain, Thomas R.; Hartunian, Nelson S.
1982-01-01
Two methods for dividing an interrupted time-series study between baseline and experimental phases when study resources are limited are compared. In fixed designs, the baseline duration is predetermined. In flexible designs the baseline duration is contingent on remaining resources and the match of results to prior expectations of the evaluator.…
Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes
NASA Astrophysics Data System (ADS)
Białek, Małgorzata
2006-03-01
Long-term changes in total ozone time series for Arosa, Belsk, Boulder and Sapporo stations are examined. For each station we analyze time series of the following statistical characteristics of the distribution of daily ozone data: seasonal mean, standard deviation, maximum and minimum of total daily ozone values for all seasons. The iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe lessening of negative trends in the seasonal means as compared to those calculated by WMO for 1980-2000. We discuss a possibility of a change of the distribution shape of ozone daily data using the Kolmogorov-Smirnov test and comparing trend values in the seasonal mean, standard deviation, maximum and minimum time series for the selected stations and seasons. The distribution shift toward lower values without a change in the distribution shape is suggested with the following exceptions: the spreading of the distribution toward lower values for Belsk during winter and no decisive result for Sapporo and Boulder in summer.
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
NASA Astrophysics Data System (ADS)
Piecuch, Christopher G.; Landerer, Felix W.; Ponte, Rui M.
2018-05-01
Monthly ocean bottom pressure solutions from the Gravity Recovery and Climate Experiment (GRACE), derived using surface spherical cap mass concentration (MC) blocks and spherical harmonics (SH) basis functions, are compared to tide gauge (TG) monthly averaged sea level data over 2003-2015 to evaluate improved gravimetric data processing methods near the coast. MC solutions can explain ≳42% of the monthly variance in TG time series over broad shelf regions and in semi-enclosed marginal seas. MC solutions also generally explain ~5-32% more TG data variance than SH estimates. Applying a coastline resolution improvement algorithm in the GRACE data processing leads to ~31% more variance in TG records explained by the MC solution on average compared to not using this algorithm. Synthetic observations sampled from an ocean general circulation model exhibit similar patterns of correspondence between modeled TG and MC time series and differences between MC and SH time series in terms of their relationship with TG time series, suggesting that observational results here are generally consistent with expectations from ocean dynamics. This work demonstrates the improved quality of recent MC solutions compared to earlier SH estimates over the coastal ocean, and suggests that the MC solutions could be a useful tool for understanding contemporary coastal sea level variability and change.
Volatility behavior of visibility graph EMD financial time series from Ising interacting system
NASA Astrophysics Data System (ADS)
Zhang, Bo; Wang, Jun; Fang, Wen
2015-08-01
A financial market dynamics model is developed and investigated using a stochastic Ising system, the most popular ferromagnetic model in statistical physics. Applying two graph-based analyses and the multiscale entropy method, we investigate and compare the statistical volatility behavior of return time series and the corresponding IMF series derived from the empirical mode decomposition (EMD) method, and real stock market indices are studied comparatively alongside the simulation data of the proposed model. Further, we find that the degree distribution of the visibility graph for the simulated series has power-law tails, and the assortative network exhibits the mixing pattern property. All these features are in agreement with the real market data; the research confirms that the financial model established on the Ising system is reasonable.
NASA Astrophysics Data System (ADS)
Du, Kongchang; Zhao, Ying; Lei, Jiaqiang
2017-09-01
In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then use the resulting reconstructed or decomposed series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction we found that this usage of SSA and DWT in building hybrid models is incorrect: since SSA and DWT use 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore report misleadingly 'high' prediction performance and may cause large errors in practice.
Robust extrema features for time-series data analysis.
Vemulapalli, Pramod K; Monga, Vishal; Brennan, Sean N
2013-06-01
The extraction of robust features for comparing and analyzing time series is a fundamentally important problem. Research efforts in this area encompass dimensionality reduction using popular signal analysis tools such as the discrete Fourier and wavelet transforms, various distance metrics, and the extraction of interest points from time series. Recently, extrema features for analysis of time-series data have assumed increasing significance because of their natural robustness under a variety of practical distortions, their economy of representation, and their computational benefits. Invariably, the process of encoding extrema features is preceded by filtering of the time series with an intuitively motivated filter (e.g., for smoothing), and subsequent thresholding to identify robust extrema. We define the properties of robustness, uniqueness, and cardinality as a means to identify the design choices available in each step of the feature generation process. Unlike existing methods, which utilize filters "inspired" from either domain knowledge or intuition, we explicitly optimize the filter based on training time series to optimize robustness of the extracted extrema features. We demonstrate further that the underlying filter optimization problem reduces to an eigenvalue problem and has a tractable solution. An encoding technique that enhances control over cardinality and uniqueness is also presented. Experimental results obtained for the problem of time series subsequence matching establish the merits of the proposed algorithm.
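The filter-then-threshold pipeline described above can be sketched in a few lines; the moving-average filter below is a naive stand-in for the optimized filter the paper derives via an eigenvalue problem:

```python
def smooth(x, k):
    """Centered moving average of odd width k; a naive stand-in for the
    optimized, training-derived filter of the paper."""
    h = k // 2
    out = []
    for i in range(len(x)):
        win = x[max(0, i - h): i + h + 1]
        out.append(sum(win) / len(win))
    return out

def robust_extrema(x, threshold):
    """Local extrema whose height over both immediate neighbors exceeds threshold,
    discarding small fluctuations that would not survive distortion."""
    ext = []
    for i in range(1, len(x) - 1):
        if min(x[i] - x[i - 1], x[i] - x[i + 1]) > threshold:
            ext.append((i, 'max'))
        elif min(x[i - 1] - x[i], x[i + 1] - x[i]) > threshold:
            ext.append((i, 'min'))
    return ext

x = [0, 0, 5, 0, 0, -5, 0, 0.1, 0]            # two strong extrema, one tiny bump
ext = robust_extrema(smooth(x, 1), 1.0)       # width-1 filter leaves x unchanged
```

The thresholding step is what makes the features robust: the small bump at index 7 is ignored while the two genuine extrema survive.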
Time series analysis of InSAR data: Methods and trends
NASA Astrophysics Data System (ADS)
Osmanoğlu, Batuhan; Sunar, Filiz; Wdowinski, Shimon; Cabral-Cano, Enrique
2016-05-01
Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to solve for this ambiguity. These algorithms may employ different models for time series analysis, but they all generate a first-order deformation rate, which can be compared to each other. However, there is no single algorithm that can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data using an example set of results for measuring subsidence rates in Mexico City.
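The wrapping ambiguity described above is easy to demonstrate in one dimension; a minimal sketch (real InSAR stacks require spatio-temporal unwrapping, which is the hard problem the reviewed algorithms address):

```python
import math

def wrap(phi):
    """Wrap a phase into (-pi, pi], as interferometric phase is observed."""
    return math.atan2(math.sin(phi), math.cos(phi))

def unwrap_1d(phases):
    """Undo wrapping along one dimension by forbidding jumps larger than pi
    between consecutive samples. Valid only when the true phase changes by
    less than pi per step."""
    out = [phases[0]]
    for p in phases[1:]:
        out.append(out[-1] + wrap(p - out[-1]))
    return out

truth = [0.1 * i for i in range(100)]          # steadily accumulating phase
observed = [wrap(p) for p in truth]            # what the interferogram records
recovered = unwrap_1d(observed)                # matches truth, not observed
```

Because each true increment here is far below pi, the 1-D rule recovers the accumulated deformation phase exactly; aliasing between acquisitions is what breaks this assumption in practice.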
Inference of scale-free networks from gene expression time series.
Daisuke, Tominaga; Horton, Paul
2006-04-01
Quantitative time-series observation of gene expression is becoming possible, for example by cell array technology. However, there are no practical methods with which to infer network structures using only observed time-series data. As most computational models of biological networks for continuous time-series data have a high degree of freedom, it is almost impossible to infer the correct structures. On the other hand, it has been reported that some kinds of biological networks, such as gene networks and metabolic pathways, may have scale-free properties. We hypothesize that the architecture of inferred biological network models can be restricted to scale-free networks. We developed an inference algorithm for biological networks using only time-series data by introducing such a restriction. We adopt the S-system as the network model, and a distributed genetic algorithm to optimize models to fit its simulated results to observed time series data. We have tested our algorithm on a case study (simulated data). We compared optimization under no restriction, which allows for a fully connected network, and under the restriction that the total number of links must equal that expected from a scale free network. The restriction reduced both false positive and false negative estimation of the links and also the differences between model simulation and the given time-series data.
Time Series Analysis of Insar Data: Methods and Trends
NASA Technical Reports Server (NTRS)
Osmanoglu, Batuhan; Sunar, Filiz; Wdowinski, Shimon; Cano-Cabral, Enrique
2015-01-01
Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to solve for this ambiguity. These algorithms may employ different models for time series analysis, but they all generate a first-order deformation rate, which can be compared to each other. However, there is no single algorithm that can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data using an example set of results for measuring subsidence rates in Mexico City.
NASA Technical Reports Server (NTRS)
Remsberg, Ellis E.
2009-01-01
Fourteen-year time series of mesospheric and upper stratospheric temperatures from the Halogen Occultation Experiment (HALOE) are analyzed and reported. The data have been binned according to ten-degree wide latitude zones from 40S to 40N and at 10 altitudes from 43 to 80 km, a total of 90 separate time series. Multiple linear regression (MLR) analysis techniques have been applied to those time series. This study focuses on resolving their 11-yr solar cycle (or SC-like) responses and their linear trend terms. Findings for T(z) from HALOE are compared directly with published results from ground-based Rayleigh lidar and rocketsonde measurements. SC-like responses from HALOE compare well with those from lidar station data at low latitudes. The cooling trends from HALOE also agree reasonably well with those from the lidar data for the concurrent decade. Cooling trends of the lower mesosphere from HALOE are not as large as those from rocketsondes and from lidar station time series of the previous two decades, presumably because the changes in the upper stratospheric ozone were near zero during the HALOE time period and did not affect those trends.
Liu, Siwei; Molenaar, Peter C M
2014-12-01
This article introduces iVAR, an R program for imputing missing data in multivariate time series on the basis of vector autoregressive (VAR) models. We conducted a simulation study to compare iVAR with three methods for handling missing data: listwise deletion, imputation with sample means and variances, and multiple imputation ignoring time dependency. The results showed that iVAR produces better estimates for the cross-lagged coefficients than do the other three methods. We demonstrate the use of iVAR with an empirical example of time series electrodermal activity data and discuss the advantages and limitations of the program.
NASA Astrophysics Data System (ADS)
Wziontek, Hartmut; Wilmes, Herbert; Güntner, Andreas; Creutzfeldt, Benjamin
2010-05-01
Water mass changes are a major source of variations in residual gravimetric time series obtained from the combination of observations with superconducting and absolute gravimeters. Changes in the local water storage are the main influence, but global variations contribute to the signal significantly. For three European gravity stations, Bad Homburg, Wettzell and Medicina, different global hydrology models are compared. The influence of topographic effects is discussed and, due to the long-term stability of the combined gravity time series, inter-annual signals in model data and gravimetric observations are compared. Two sources of influence are discriminated, i.e., the effect of a local zone with an extent of a few kilometers around the gravimetric station and the global contribution beyond 50 km. Considering their coarse resolution and uncertainties, local effects calculated from global hydrological models are compared with the in-situ gravity observations and, for the station Wettzell, with local hydrological monitoring data.
A hybrid group method of data handling with discrete wavelet transform for GDP forecasting
NASA Astrophysics Data System (ADS)
Isa, Nadira Mohamed; Shabri, Ani
2013-09-01
This study proposes a hybrid model using the Group Method of Data Handling (GMDH) and the Discrete Wavelet Transform (DWT) for time series forecasting. The objective of this paper is to examine the flexibility of the hybrid GMDH in time series forecasting using Gross Domestic Product (GDP) data. A time series data set is used to demonstrate the effectiveness of the forecasting model; the data are utilized to forecast through an application aimed at handling real-life time series. The experiment compares the performance of the hybrid model with single models: Wavelet-Linear Regression (WR), an Artificial Neural Network (ANN), and conventional GMDH. It is shown that the proposed model can provide a promising alternative technique for GDP forecasting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentili, Pier Luigi, E-mail: pierluigi.gentili@unipg.it; Gotoda, Hiroshi; Dolnik, Milos
Forecasting of aperiodic time series is a compelling challenge for science. In this work, we analyze aperiodic spectrophotometric data, proportional to the concentrations of two forms of a thermoreversible photochromic spiro-oxazine, that are generated when a cuvette containing a solution of the spiro-oxazine undergoes photoreaction and convection due to localized ultraviolet illumination. We construct the phase space for the system using Takens' theorem and we calculate the Lyapunov exponents and the correlation dimensions to ascertain the chaotic character of the time series. Finally, we predict the time series using three distinct methods: a feed-forward neural network, fuzzy logic, and a local nonlinear predictor. We compare the performances of these three methods.
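The Takens phase-space reconstruction mentioned above amounts to a simple reindexing of the scalar series; a minimal sketch (choosing the embedding dimension and delay, e.g. by false nearest neighbours or mutual information, is the hard part and is not shown):

```python
def delay_embed(x, dim, tau):
    """Takens delay-coordinate embedding: map a scalar series to points
    (x[i], x[i + tau], ..., x[i + (dim - 1) * tau]) in dim-dimensional space."""
    n = len(x) - (dim - 1) * tau
    return [tuple(x[i + j * tau] for j in range(dim)) for i in range(n)]

pts = delay_embed(list(range(10)), dim=3, tau=2)   # 6 reconstructed state vectors
```

Lyapunov exponents and correlation dimensions, as well as the local nonlinear predictor, are then computed on these reconstructed state vectors rather than on the raw scalar series.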
NASA Astrophysics Data System (ADS)
Diao, Chunyuan
In today's big data era, the increasing availability of satellite and airborne platforms at various spatial and temporal scales creates unprecedented opportunities to understand the complex and dynamic systems (e.g., plant invasion). Time series remote sensing is becoming more and more important to monitor the earth system dynamics and interactions. To date, most of the time series remote sensing studies have been conducted with the images acquired at coarse spatial scale, due to their relatively high temporal resolution. The construction of time series at fine spatial scale, however, is limited to few or discrete images acquired within or across years. The objective of this research is to advance the time series remote sensing at fine spatial scale, particularly to shift from discrete time series remote sensing to continuous time series remote sensing. The objective will be achieved through the following aims: 1) Advance intra-annual time series remote sensing under the pure-pixel assumption; 2) Advance intra-annual time series remote sensing under the mixed-pixel assumption; 3) Advance inter-annual time series remote sensing in monitoring the land surface dynamics; and 4) Advance the species distribution model with time series remote sensing. Taking invasive saltcedar as an example, four methods (i.e., phenological time series remote sensing model, temporal partial unmixing method, multiyear spectral angle clustering model, and time series remote sensing-based spatially explicit species distribution model) were developed to achieve the objectives. Results indicated that the phenological time series remote sensing model could effectively map saltcedar distributions through characterizing the seasonal phenological dynamics of plant species throughout the year. 
The proposed temporal partial unmixing method, compared to conventional unmixing methods, could more accurately estimate saltcedar abundance within a pixel by exploiting the adequate temporal signatures of saltcedar. The multiyear spectral angle clustering model could guide the selection of the most representative remotely sensed image for repetitive saltcedar mapping over space and time. Through incorporating spatial autocorrelation, the species distribution model developed in the study could identify the suitable habitats of saltcedar at a fine spatial scale and locate appropriate areas at high risk of saltcedar infestation. Among 10 environmental variables, the distance to the river and the phenological attributes summarized by the time series remote sensing were regarded as the most important. These methods developed in the study provide new perspectives on how the continuous time series can be leveraged under various conditions to investigate the plant invasion dynamics.
Continuous time transfer using GPS carrier phase.
Dach, Rolf; Schildknecht, Thomas; Springer, Tim; Dudle, Gregor; Prost, Leon
2002-11-01
The Astronomical Institute of the University of Berne is hosting one of the Analysis Centers (AC) of the International GPS Service (IGS). A network of a few GPS stations in Europe and North America is routinely analyzed for time transfer purposes, using the carrier phase observations. This work is done in the framework of a joint project with the Swiss Federal Office of Metrology and Accreditation (METAS). The daily solutions are computed independently. The resulting time transfer series show jumps of up to 1 ns at the day boundaries. A method to concatenate the daily time transfer solutions to a continuous series was developed. A continuous time series is available for a time span of more than 4 mo. The results were compared with the time transfer results from other techniques such as two-way satellite time and frequency transfer. This concatenation improves the results obtained in a daily computing scheme because a continuous time series better reflects the characteristics of continuously working clocks.
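The day-boundary jumps described above arise because each daily solution has its own arbitrary datum. A naive way to chain daily solutions is to level out the jump between consecutive days; this single-offset alignment is only a sketch of the idea, not the authors' concatenation method:

```python
def concatenate_days(days):
    """Chain daily solutions into one continuous series by removing the jump
    between the last epoch of one day and the first epoch of the next.
    (A simple levelling; a production method would use overlapping epochs
    and clock modelling.)"""
    out = list(days[0])
    for day in days[1:]:
        offset = out[-1] - day[0]
        out.extend(v + offset for v in day)
    return out

day1 = [0.0, 0.1, 0.2]             # clock offsets, day 1 (arbitrary units)
day2 = [1.2, 1.3, 1.4]             # day 2, with a ~1 ns boundary jump
series = concatenate_days([day1, day2])
```

After levelling, the concatenated series varies smoothly across the boundary, which better reflects a continuously running clock.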
Classification of time-series images using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Hatami, Nima; Gavet, Yann; Debayle, Johan
2018-04-01
Convolutional Neural Networks (CNNs) have achieved great success in image recognition tasks by automatically learning a hierarchical feature representation from raw data. While the majority of the Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time series into 2D texture images and then takes advantage of a deep CNN classifier. Image representation of time series introduces feature types that are not available for 1D signals, so TSC can be treated as a texture image recognition task. The CNN model also allows learning different levels of representation together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to existing deep architectures but also to state-of-the-art TSC algorithms.
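The RP transform at the core of the approach is a thresholded pairwise-distance matrix that is then treated as a texture image. A minimal sketch for a scalar series (the paper typically embeds the series in phase space first and feeds the rendered matrix to the CNN):

```python
def recurrence_plot(x, eps):
    """Binary recurrence matrix: R[i][j] = 1 where |x[i] - x[j]| <= eps.
    With a delay-embedded series, a state-space distance replaces the
    scalar difference."""
    n = len(x)
    return [[1 if abs(x[i] - x[j]) <= eps else 0 for j in range(n)] for i in range(n)]

R = recurrence_plot([0.0, 1.0, 0.0, 1.0], 0.1)   # period-2 checkerboard texture
```

Periodic signals produce diagonal-line textures and noise produces speckle, which is why the matrix is a natural input for an image classifier.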
RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.
Stránský, V; Thinová, L
2017-11-01
In 2010, continuous radon measurement was established at the Mladeč Caves in the Czech Republic using a RADIM3A continuous radon monitor. To model the radon time series over the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal autoregressive integrated moving-average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the seasonality of the time series, previously acquired values and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes that occurred during the RC measurements. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor.
A Comparison of Alternative Approaches to the Analysis of Interrupted Time-Series.
ERIC Educational Resources Information Center
Harrop, John W.; Velicer, Wayne F.
1985-01-01
Computer-generated data representative of 16 Autoregressive Integrated Moving Average (ARIMA) models were used to compare the results of interrupted time-series analysis using: (1) the known model identification, (2) an assumed (1,0,0) model, and (3) an assumed (3,0,0) model as an approximation to the General Transformation approach. (Author/BW)
An Evaluation Method of Words Tendency Depending on Time-Series Variation and Its Improvements.
ERIC Educational Resources Information Center
Atlam, El-Sayed; Okada, Makoto; Shishibori, Masami; Aoe, Jun-ichi
2002-01-01
Discussion of word frequency and keywords in text focuses on a method to estimate automatically the stability classes that indicate a word's popularity with time-series variations based on the frequency change in past electronic text data. Compares the evaluation of decision tree stability class results with manual classification results.…
Triantafyllou, Christina; Polimeni, Jonathan R; Keil, Boris; Wald, Lawrence L
2016-12-01
Physiological nuisance fluctuations ("physiological noise") are a major contribution to the time-series signal-to-noise ratio (tSNR) of functional imaging. While thermal noise correlations between array coil elements have a well-characterized effect on the image signal-to-noise ratio (SNR0), the element-to-element covariance matrix of the time-series fluctuations has not yet been analyzed. We examine this effect with a goal of ultimately improving the combination of multichannel array data. We extend the theoretical relationship between tSNR and SNR0 to include a time-series noise covariance matrix Ψt, distinct from the thermal noise covariance matrix Ψ0, and compare its structure to Ψ0 and the signal coupling matrix SS^H formed from the signal intensity vectors S. Inclusion of the measured time-series noise covariance matrix into the model relating tSNR and SNR0 improves the fit of experimental multichannel data and is shown to be distinct from Ψ0 or SS^H. Time-series noise covariances in array coils are found to differ from Ψ0 and, more surprisingly, from the signal coupling matrix SS^H. Correct characterization of the time-series noise has implications for the analysis of time-series data and for improving the coil element combination process. Magn Reson Med 76:1708-1719, 2016. © 2016 International Society for Magnetic Resonance in Medicine.
Phase walk analysis of leptokurtic time series.
Schreiber, Korbinian; Modest, Heike I; Räth, Christoph
2018-06-01
The Fourier phase information plays a key role in the quantified description of nonlinear data. We present a novel tool for time series analysis that identifies nonlinearities by sensitively detecting correlations among the Fourier phases. The method, called phase walk analysis, is based on well-established measures from random walk analysis, which are here applied to the unwrapped Fourier phases of time series. We provide an analytical description of its functionality and demonstrate its capabilities on systematically controlled leptokurtic noise. In doing so, we investigate the properties of leptokurtic time series and their influence on the Fourier phases of time series. The phase walk analysis is applied to measured and simulated intermittent time series whose probability density distributions are approximated by power laws. We use the day-to-day returns of the Dow Jones industrial average, a synthetic time series with tailored nonlinearities mimicking the power-law behavior of the Dow Jones, and the acceleration of the wind at an Atlantic offshore site. Testing for nonlinearities by means of surrogates shows that the new method yields strong significances for nonlinear behavior. Due to the drastically decreased computing time compared to embedding space methods, the number of surrogate realizations can be increased by orders of magnitude. Thereby, the probability distribution of the test statistics can be derived very accurately and parameterized, which allows for much more precise tests for nonlinearities.
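A toy version of the core construction, assuming the simplest possible walk statistic (detrended cumulative phase increments; the authors' exact measures may differ):

```python
import numpy as np

def phase_walk(x):
    """Random-walk path over the unwrapped Fourier phases of a series."""
    phases = np.angle(np.fft.rfft(np.asarray(x, dtype=float)))
    steps = np.diff(np.unwrap(phases))      # phase increments
    return np.cumsum(steps - steps.mean())  # detrended walk path

rng = np.random.default_rng(0)
walk = phase_walk(rng.standard_normal(512))
```

For Gaussian noise the walk should look diffusive; systematic phase correlations, the signature of nonlinearity, would show up as anomalous excursions of this path relative to surrogate realizations.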
Fontana, Fabio; Rixen, Christian; Jonas, Tobias; Aberegg, Gabriel; Wunderle, Stefan
2008-01-01
This study evaluates the ability to track grassland growth phenology in the Swiss Alps with NOAA-16 Advanced Very High Resolution Radiometer (AVHRR) Normalized Difference Vegetation Index (NDVI) time series. Three growth parameters from 15 alpine and subalpine grassland sites were investigated between 2001 and 2005: Melt-Out (MO), Start Of Growth (SOG), and End Of Growth (EOG). We estimated these phenological dates from yearly NDVI time series by identifying the dates at which certain fractions (thresholds) of the maximum annual NDVI amplitude were crossed for the first time. For this purpose, the NDVI time series were smoothed using two commonly used approaches (Fourier adjustment or, alternatively, Savitzky-Golay filtering). Moreover, the AVHRR NDVI time series were compared against data from the newer-generation sensors SPOT VEGETATION and TERRA MODIS. All remote sensing NDVI time series were highly correlated with single-point ground measurements and therefore accurately represented the growth dynamics of alpine grassland. The newer-generation sensors VGT and MODIS performed better than AVHRR; however, the differences were minor. Thresholds for the determination of MO, SOG, and EOG were similar across sensors and smoothing methods, which demonstrates the robustness of the results. For our purpose, the Fourier adjustment algorithm created better NDVI time series than the Savitzky-Golay filter, since the latter appeared to be more sensitive to noisy NDVI time series. The findings show that the application of various thresholds to NDVI time series allows the temporal progression of vegetation growth at the selected sites to be observed with high consistency. Hence, we believe that our study helps to better understand large-scale vegetation growth dynamics above the tree line in the European Alps. PMID:27879852
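The threshold-crossing idea can be sketched in a few lines. The logistic NDVI curve and the 25% threshold below are illustrative assumptions, not the study's data or calibrated thresholds:

```python
import numpy as np

def threshold_date(doy, ndvi, frac):
    """First day-of-year where NDVI crosses a fraction of its annual amplitude."""
    ndvi = np.asarray(ndvi, dtype=float)
    level = ndvi.min() + frac * (ndvi.max() - ndvi.min())
    above = np.nonzero(ndvi >= level)[0]
    return doy[above[0]] if above.size else None

doy = np.arange(1, 366, 8)                           # 8-day composites
ndvi = 0.2 + 0.5 / (1 + np.exp(-(doy - 150) / 15))   # synthetic green-up curve
sog = threshold_date(doy, ndvi, 0.25)                # start-of-growth estimate
```

In practice the series would first be smoothed (Fourier adjustment or Savitzky-Golay filtering, as in the study) before the crossing date is read off.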
Documentation of a spreadsheet for time-series analysis and drawdown estimation
Halford, Keith J.
2006-01-01
Drawdowns during aquifer tests can be obscured by barometric pressure changes, earth tides, regional pumping, and recharge events in the water-level record. These stresses can create water-level fluctuations that should be removed from observed water levels prior to estimating drawdowns. Simple models have been developed for estimating unpumped water levels during aquifer tests that are referred to as synthetic water levels. These models sum multiple time series such as barometric pressure, tidal potential, and background water levels to simulate non-pumping water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function. Root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were greater than or equal to four times prediction periods. The proposed drawdown estimation approach has been implemented in a spreadsheet application. Measured time series are independent so that collection frequencies can differ and sampling times can be asynchronous. Time series can be viewed selectively and magnified easily. Fitting and prediction periods can be defined graphically or entered directly. Synthetic water levels for each observation well are created with earth tides, measured time series, moving averages of time series, and differences between measured and moving averages of time series. Selected series and fitting parameters for synthetic water levels are stored and drawdowns are estimated for prediction periods. Drawdowns can be viewed independently and adjusted visually if an anomaly skews initial drawdowns away from 0. 
The number of observations in a drawdown time series can be reduced by averaging across user-defined periods. Raw or reduced drawdown estimates can be copied from the spreadsheet application or written to tab-delimited ASCII files.
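The amplitude-fitting step can be illustrated with an ordinary least-squares sketch. The component series below are synthetic stand-ins; the actual spreadsheet also adjusts phases, uses moving averages of the series, and minimizes a sum-of-squares objective over a user-defined fitting period, all omitted here:

```python
import numpy as np

# Hypothetical component series sampled at the same times for simplicity:
# barometric pressure, tidal potential, and a background trend.
rng = np.random.default_rng(1)
n = 200
baro = np.sin(np.linspace(0, 20, n))
tide = np.cos(np.linspace(0, 50, n))
trend = np.linspace(0, 1, n)
observed = 0.8 * baro - 0.3 * tide + 0.5 * trend + 0.01 * rng.standard_normal(n)

# Least-squares amplitudes make the synthetic water level match the
# measured one; the residual is what remains after removing the stresses.
A = np.column_stack([baro, tide, trend])
coef, *_ = np.linalg.lstsq(A, observed, rcond=None)
synthetic = A @ coef
residual = observed - synthetic
```

During an aquifer test, drawdown would appear in `residual` once the fitted synthetic series removes barometric, tidal, and background fluctuations.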
Investigation of a long time series of CO2 from a tall tower using WRF-SPA
NASA Astrophysics Data System (ADS)
Smallman, Luke; Williams, Mathew; Moncrieff, John B.
2013-04-01
Atmospheric observations from tall towers are an important source of information about CO2 exchange at the regional scale. Here, we have used a forward-running model, WRF-SPA, to generate a time series of CO2 at a tall tower for comparison with observations from Scotland over multiple years (2006-2008). We use this comparison to infer the strength and distribution of sources and sinks of carbon, and ecosystem process information, at the seasonal scale. The specific aim of this research is to combine a high-resolution (6 km) forward-running meteorological model (WRF) with a modified version of a mechanistic ecosystem model (SPA). SPA provides surface fluxes calculated from coupled energy, hydrological and carbon cycles. This closely coupled representation of the biosphere provides realistic surface exchanges to drive mixing within the planetary boundary layer. The combined model is used to investigate the sources and sinks of CO2 and to explore which land surfaces contribute to a time series of hourly observations of atmospheric CO2 at a tall tower at Angus, Scotland. In addition to comparing the modelled CO2 time series to observations, modelled ecosystem-specific (i.e. forest, cropland, grassland) CO2 tracers (e.g., assimilation and respiration) have been compared to the modelled land surface assimilation to investigate how representative tall tower observations are of land surface processes. The WRF-SPA modelled CO2 time series compares well to observations (R2 = 0.67, rmse = 3.4 ppm, bias = 0.58 ppm). Through comparison of model-observation residuals, we have found evidence that non-cropped components of agricultural land (e.g., hedgerows and forest patches) likely have a significant and observable impact on the regional carbon balance.
Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali
2005-09-01
We present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with the high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and with their poor signal-to-noise ratio (SNR), which limits accurate classification of fMRI time series according to their activation content. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false-positive detection rate is controlled using randomization. To compare the different methods, several points of the receiver operating characteristic (ROC) curves are estimated using simulated data and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to the CC feature space. Multiwavelet features show superior detection sensitivity compared to scalar wavelet features.
A large-signal dynamic simulation for the series resonant converter
NASA Technical Reports Server (NTRS)
King, R. J.; Stuart, T. A.
1983-01-01
A simple nonlinear discrete-time dynamic model for the series resonant dc-dc converter is derived using approximations appropriate to most power converters. This model is useful for the dynamic simulation of a series resonant converter using only a desktop calculator. The model is compared with a laboratory converter for a large transient event.
NASA Astrophysics Data System (ADS)
Godsey, S. E.; Kirchner, J. W.
2008-12-01
The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. 
Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
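The forward model described above (stream tracer output as the convolution of the precipitation tracer input with a travel-time distribution) can be sketched directly. The exponential distribution and the parameter values are illustrative choices, not the study's:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n, tau = 1.0, 1024, 20.0
t = np.arange(n) * dt

# Exponential travel-time distribution with mean residence time tau,
# normalised to unit area on the discrete grid.
h = np.exp(-t / tau) / tau
h /= h.sum() * dt

rain = rng.exponential(1.0, n)            # synthetic tracer input
stream = np.convolve(rain, h)[:n] * dt    # tracer output in streamflow

# Catchment mixing damps the signal: output variance << input variance.
damping = stream.var() / rain.var()
```

Recovering `tau` from `rain` and `stream` is then the inverse problem, solved either by fitting in the time domain or via the ratio of input to output spectral power in the frequency domain, as the abstract describes.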
Accuracy of the Garmin 920 XT HRM to perform HRV analysis.
Cassirame, Johan; Vanhaesebrouck, Romain; Chevrolat, Simon; Mourot, Laurent
2017-12-01
Heart rate variability (HRV) analysis is widely used to investigate autonomic cardiac drive. This method requires a periodogram measurement, which can be obtained from an electrocardiogram (ECG) or from a heart rate monitor (HRM), e.g. the Garmin 920 XT device. The purpose of this investigation was to assess the accuracy of RR time series measurements from a Garmin 920 XT HRM as compared to a standard ECG, and to verify whether the measurements thus obtained are suitable for HRV analysis. RR time series were collected simultaneously with an ECG (Powerlab system, AD Instruments, Castle Hill, Australia) and a Garmin 920 XT in 11 healthy subjects during three conditions, namely in the supine position, in the standing position and during moderate exercise. In a first step, we compared the RR time series obtained with both tools using the Bland-Altman method to obtain the limits of agreement in all three conditions. In a second step, we compared the results of HRV analysis between the ECG RR time series and the Garmin 920 XT series. The results show that the accuracy of this system is in accordance with the literature in terms of the limits of agreement. In the supine position, the bias was 0.01 ms, with limits of agreement of -2.24 to +2.26 ms; in the standing position, -0.01 ms (-3.12 to +3.11 ms); and during exercise, -0.01 ms (-4.43 to +4.40 ms). Regarding HRV analysis, we did not find any difference in the supine position, but the standing and exercise conditions both showed small modifications.
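The Bland-Altman quantities reported above (bias plus 95% limits of agreement) can be computed as follows. The RR series here are synthetic stand-ins, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired series."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

rng = np.random.default_rng(3)
ecg_rr = 800 + 50 * rng.standard_normal(500)   # hypothetical ECG RR, ms
hrm_rr = ecg_rr + rng.normal(0.0, 1.5, 500)    # small simulated device error
bias, lower, upper = bland_altman(hrm_rr, ecg_rr)
```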
The application of complex network time series analysis in turbulent heated jets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charakopoulos, A. K.; Karakasidis, T. E., E-mail: thkarak@uth.gr; Liakopoulos, A.
2014-06-15
In the present study, we applied the methodology of complex network-based time series analysis to experimental temperature time series from a vertical turbulent heated jet. More specifically, we approach the hydrodynamic problem of discriminating time series corresponding to various regions relative to the jet axis, i.e., distinguishing time series from regions close to the jet axis from time series originating in regions with a different dynamical regime, based on the constructed network properties. Applying the phase-space transformation method (k nearest neighbors) and also the visibility algorithm, we transformed time series into networks and evaluated topological properties of the networks such as degree distribution, average path length, diameter, modularity, and clustering coefficient. The results show that the complex network approach allows distinguishing, identifying, and exploring in detail various dynamical regions of the jet flow, and associating them with the corresponding physical behavior. In addition, in order to reject the hypothesis that the studied networks originate from a stochastic process, we generated random networks and compared their statistical properties with those originating from the experimental data. As far as the efficiency of the two methods for network construction is concerned, we conclude that both methodologies lead to network properties that present almost the same qualitative behavior and allow us to reveal the underlying system dynamics.
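The natural visibility algorithm mentioned above is simple enough to state exactly; this brute-force version favors clarity over speed:

```python
import numpy as np

def visibility_graph(x):
    """Adjacency matrix of the natural visibility graph of a series.

    Nodes i < j are linked iff every intermediate sample lies strictly
    below the straight line joining (i, x[i]) and (j, x[j])."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    adj = np.zeros((n, n), dtype=np.uint8)
    for i in range(n - 1):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            line = x[i] + (x[j] - x[i]) * (k - i) / (j - i)
            if k.size == 0 or np.all(x[k] < line):
                adj[i, j] = adj[j, i] = 1
    return adj

adj = visibility_graph([1.0, 0.5, 2.0, 0.3, 1.5])
degrees = adj.sum(axis=0)
```

Network measures such as degree distribution, average path length, or clustering coefficient are then computed on `adj` in the usual graph-theoretic way.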
Improved visibility graph fractality with application for the diagnosis of Autism Spectrum Disorder
NASA Astrophysics Data System (ADS)
Ahmadlou, Mehran; Adeli, Hojjat; Adeli, Amir
2012-10-01
Recently, the visibility graph (VG) algorithm was proposed for mapping a time series to a graph in order to study the complexity and fractality of the time series through investigation of the complexity of its graph. The visibility graph algorithm converts a fractal time series to a scale-free graph. VG has been used to investigate fractality in the dynamic behavior of both artificial and natural complex systems. However, the robustness and performance of the power of scale-freeness of VG (PSVG) as a method for measuring fractality have not been investigated. Since noise is unavoidable in real-life time series, the robustness of a fractality measure is of paramount importance. To improve the accuracy and robustness to noise of PSVG for measuring the fractality of biological time series, an improved PSVG is presented in this paper. The proposed method is evaluated using two examples: a synthetic benchmark time series and a complicated real-life electroencephalogram (EEG)-based diagnostic problem, that is, distinguishing autistic children from non-autistic children. It is shown that the proposed improved PSVG is less sensitive to noise and therefore more robust than PSVG. Further, it is shown that using the improved PSVG in place of the Katz fractal dimension in the wavelet-chaos neural network model of Adeli and co-workers results in a more accurate diagnosis of autism, a complicated neurological and psychiatric disorder.
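The PSVG is the magnitude of the slope of the log-log degree distribution of the visibility graph. Given a degree sequence, a minimal estimate of that slope might look like this; the unweighted binning and least-squares fit are assumptions, not the paper's procedure:

```python
import numpy as np

def scale_free_exponent(degrees):
    """Slope magnitude of the log-log degree distribution.

    For a scale-free graph, P(k) ~ k^(-gamma); a straight-line fit of
    log P(k) against log k estimates -gamma."""
    k, counts = np.unique(np.asarray(degrees), return_counts=True)
    p = counts / counts.sum()
    slope, _ = np.polyfit(np.log(k), np.log(p), 1)
    return -slope

# Degrees drawn from an approximately power-law distribution, for
# illustration only (not degrees of an actual visibility graph).
rng = np.random.default_rng(4)
deg = np.round(rng.pareto(1.5, 2000) + 1).astype(int)
gamma = scale_free_exponent(deg)
```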
A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.
Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang
2017-01-01
Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
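For contrast with the point-wise methods the abstract criticizes, here is the simplest single change-point baseline: a least-squares search for one mean shift (not the authors' doubly stochastic method):

```python
import numpy as np

def mean_shift_change_point(x):
    """Index minimising the within-segment sum of squares for one split."""
    x = np.asarray(x, dtype=float)
    best, idx = np.inf, None
    for k in range(2, len(x) - 2):
        cost = (((x[:k] - x[:k].mean()) ** 2).sum()
                + ((x[k:] - x[k:].mean()) ** 2).sum())
        if cost < best:
            best, idx = cost, k
    return idx

rng = np.random.default_rng(7)
series = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
cp = mean_shift_change_point(series)
```

Such a baseline examines one split at a time; the paper's contribution is precisely to reason jointly over the number of change points and the elapsed time between consecutive anomalies.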
Modelling short time series in metabolomics: a functional data analysis approach.
Montana, Giovanni; Berk, Maurice; Ebbels, Tim
2011-01-01
Metabolomics is the study of the complement of small molecule metabolites in cells, biofluids and tissues. Many metabolomic experiments are designed to compare changes observed over time under two or more experimental conditions (e.g. a control and drug-treated group), thus producing time course data. Models from traditional time series analysis are often unsuitable because, by design, only very few time points are available and there are a high number of missing values. We propose a functional data analysis approach for modelling short time series arising in metabolomic studies which overcomes these obstacles. Our model assumes that each observed time series is a smooth random curve, and we propose a statistical approach for inferring this curve from repeated measurements taken on the experimental units. A test statistic for detecting differences between temporal profiles associated with two experimental conditions is then presented. The methodology has been applied to NMR spectroscopy data collected in a pre-clinical toxicology study.
Energy-Based Wavelet De-Noising of Hydrologic Time Series
Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu
2014-01-01
De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the limitations of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of a series with a background energy distribution established from Monte Carlo tests. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process with the former is more easily operable. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or WTD; such a series, however, shows purely random rather than autocorrelated behavior, so de-noising is no longer needed. PMID:25360533
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. 
We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
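The kernel-weighted estimator described above can be written compactly: products of centred observations are weighted by a Gaussian kernel applied to the time separation of each pair. The bandwidth and the test signal below are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel_corr(tx, x, ty, y, h):
    """Kernel-weighted Pearson correlation for irregularly sampled series.

    tx, ty are (possibly different) observation times of x and y; h is
    the Gaussian kernel bandwidth in time units."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = tx[:, None] - ty[None, :]
    w = np.exp(-0.5 * (dt / h) ** 2)
    return np.sum(w * x[:, None] * y[None, :]) / w.sum()

rng = np.random.default_rng(5)
tx = np.sort(rng.uniform(0, 100, 300))   # irregular sampling times
ty = np.sort(rng.uniform(0, 100, 300))   # different, asynchronous times
sig = lambda t: np.sin(0.3 * t)
r = gaussian_kernel_corr(tx, sig(tx), ty, sig(ty), h=0.5)
r_anti = gaussian_kernel_corr(tx, sig(tx), ty, -sig(ty), h=0.5)
```

No interpolation is performed at any point, which is exactly what avoids the artificial increase in autocorrelation the abstract warns about.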
NASA Astrophysics Data System (ADS)
Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan
Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed, and this approach is generally effective for measuring the time irreversibility of time series. However, its results may be unreliable when dealing with high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve this visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate of the irreversibility of time series and is more effective at distinguishing irreversible from reversible stochastic processes. We also use this approach to extract multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing periods of financial crisis from plateau periods. In addition, the separation of Asian stock indexes from the other indexes is clearly visible at larger time scales. Simulations and real data support the effectiveness of the improved approach for detecting time irreversibility.
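The phase-space reconstruction step relies on time-delay embedding, which is short enough to state exactly (the dimension and lag below are illustrative, not values from the paper):

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Phase-space reconstruction by time-delay embedding.

    Row i of the result is the delay vector
    (x[i], x[i + lag], ..., x[i + (dim - 1) * lag])."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

emb = delay_embed(np.arange(10.0), dim=3, lag=2)
```

The visibility-graph irreversibility measure is then computed on the embedded trajectory rather than on the raw scalar series.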
Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application
NASA Astrophysics Data System (ADS)
Chen, Jinduan; Boccelli, Dominic L.
2018-02-01
Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands from observed hydraulic measurements are generally not capable of forecasting demands or providing uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regional aggregated demands, at a scale that captures demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties, with results compared to traditional pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive parameter-estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model produced improved forecasts, (2) the double-seasonal model outperformed the other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capability of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate with real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
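A heavily simplified version of the double-seasonal autoregressive idea can be fit by ordinary least squares; this sketch assumes hourly data (lag 24 for the daily cycle, lag 168 for the weekly cycle) and omits the moving-average terms, log transformation, and adaptive updating described above:

```python
import numpy as np

def fit_seasonal_ar(y, lags=(1, 24, 168)):
    """Least-squares fit of an autoregression with a short lag and two
    seasonal lags, a minimal stand-in for a double-seasonal model of
    hourly demand data (the specific lag set is an assumption)."""
    p = max(lags)
    X = np.column_stack([y[p - L:len(y) - L] for L in lags])
    X = np.column_stack([np.ones(len(y) - p), X])  # intercept column
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta

def forecast_one_step(y, beta, lags=(1, 24, 168)):
    """One-step-ahead forecast from the fitted coefficients."""
    x = np.concatenate([[1.0], [y[-L] for L in lags]])
    return float(x @ beta)
```

On data simulated from a known seasonal autoregression, the fit recovers the generating coefficients, which is a quick way to validate the lag bookkeeping.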
Spectral Unmixing Analysis of Time Series Landsat 8 Images
NASA Astrophysics Data System (ADS)
Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.
2018-05-01
Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has clear advantages, it has rarely been studied; using temporal information can improve unmixing performance compared with independent image analyses. Moreover, different land cover types may exhibit different temporal patterns, which can aid their discrimination. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used to initialize the endmembers. First, nonnegative least squares (NNLS) is used to estimate abundance maps given the endmembers. Then, each endmember is updated as the mean value of its "purified" pixels, i.e., the residuals of mixed pixels after excluding the contributions of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework yields the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimates than the "separate unmixing" approach.
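The NNLS abundance-estimation step can be illustrated with SciPy's solver (a generic sketch; the sum-to-one renormalization is an assumption, and the iterative endmember update of time series K-P-Means is omitted):

```python
import numpy as np
from scipy.optimize import nnls

def estimate_abundances(E, pixels):
    """Nonnegative least-squares abundance estimation.

    For each pixel spectrum x, solve min ||E a - x||_2 subject to a >= 0,
    where E is the (bands x endmembers) endmember matrix. Rows are then
    renormalized to approximate the sum-to-one abundance constraint
    (an illustrative choice, not part of plain NNLS)."""
    A = np.array([nnls(E, x)[0] for x in pixels])
    return A / A.sum(axis=1, keepdims=True)
```

On noiseless synthetic mixtures the true abundances are recovered exactly, which makes this step easy to unit-test before embedding it in the iterative loop.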
NASA Astrophysics Data System (ADS)
Machiwal, Deepesh; Kumar, Sanjay; Dayal, Devi
2016-05-01
This study aimed at characterization of rainfall dynamics in a hot arid region of Gujarat, India, by employing time-series modeling techniques and a sustainability approach. Five characteristics, i.e., normality, stationarity, homogeneity, presence/absence of trend, and persistence, of the 34-year (1980-2013) annual rainfall time series of ten stations were identified/detected by applying multiple parametric and non-parametric statistical tests. Furthermore, the study proposes a sustainability concept for evaluating rainfall time series and demonstrates the concept, for the first time, by identifying the most sustainable rainfall series following the reliability (Ry), resilience (Re), and vulnerability (Vy) approach. Box-whisker plots, normal probability plots, and histograms indicated that the annual rainfall of the Mandvi and Dayapar stations is relatively more positively skewed and non-normal compared with that of the other stations, owing to the presence of severe outliers and extremes. Results of the Shapiro-Wilk and Lilliefors tests revealed that the annual rainfall series of all stations deviated significantly from the normal distribution. Two parametric t tests and the non-parametric Mann-Whitney test indicated significant non-stationarity in the annual rainfall of the Rapar station, where the rainfall was also found to be non-homogeneous based on the results of four parametric homogeneity tests. Four trend tests indicated significantly increasing rainfall trends at the Rapar and Gandhidham stations. Autocorrelation analysis suggested statistically significant persistence in the rainfall series of Bhachau (3-year time lag), Mundra (1- and 9-year time lags), Nakhatrana (9-year time lag), and Rapar (3- and 4-year time lags).
Results of the sustainability approach indicated that the annual rainfall of the Mundra and Naliya stations (Ry = 0.50 and 0.44; Re = 0.47 and 0.47; Vy = 0.49 and 0.46, respectively) is the most sustainable and dependable compared with that of the other stations. The highest values of the sustainability index at the Mundra (0.120) and Naliya (0.112) stations confirmed the findings of the Ry-Re-Vy approach. In general, the annual rainfall of the study area is less reliable, less resilient, and moderately vulnerable, which emphasizes the need to develop suitable strategies for managing the water resources of the area on a sustainable basis. Finally, it is recommended that multiple statistical tests (at least two) be used in time-series modeling to support reliable decisions. Moreover, the methodology and findings of the sustainability concept in rainfall time series can easily be adopted in other arid regions of the world.
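The recommendation to combine multiple statistical tests can be sketched generically; the particular battery below (Shapiro-Wilk for normality, a split-half Mann-Whitney comparison for stationarity, and a 1.96/sqrt(n) bound on the lag-1 autocorrelation for persistence) is an illustrative assumption, not the exact set used in the study:

```python
import numpy as np
from scipy import stats

def characterize_rainfall(series, alpha=0.05):
    """Screen an annual rainfall series with several tests at once.

    Returns booleans for normality, stationarity (first vs second half),
    and lag-1 persistence (autocorrelation outside its approximate
    95% confidence bound for white noise)."""
    out = {}
    out["normal"] = stats.shapiro(series).pvalue > alpha
    half = len(series) // 2
    out["stationary"] = stats.mannwhitneyu(series[:half],
                                           series[half:]).pvalue > alpha
    r1 = np.corrcoef(series[:-1], series[1:])[0, 1]
    out["persistent"] = abs(r1) > 1.96 / np.sqrt(len(series))
    return out
```

A strongly trending series, for instance, is flagged as non-stationary and persistent, illustrating why agreement between at least two tests gives more reliable decisions than any single one.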
NASA Astrophysics Data System (ADS)
Miura, T.; Kato, A.; Wang, J.; Vargas, M.; Lindquist, M.
2015-12-01
Satellite vegetation index (VI) time series data serve as an important means to monitor and characterize seasonal changes of terrestrial vegetation and their interannual variability. It is, therefore, critical to ensure the quality of such VI products, and one method of validating VI product quality is cross-comparison with in situ flux tower measurements. In this study, we evaluated the quality of VI time series derived from the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (NPP) spacecraft by cross-comparison with in situ radiation flux measurements at selected flux tower sites over North America and Europe. VIIRS is a new polar-orbiting satellite sensor series, slated to replace the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer in the afternoon overpass and to continue the highly calibrated data streams initiated with the Moderate Resolution Imaging Spectroradiometer of the National Aeronautics and Space Administration's Earth Observing System. The selected sites covered a wide range of biomes, including croplands, grasslands, evergreen needleleaf forest, woody savanna, and open shrublands. The two VIIRS indices of the Top-of-Atmosphere (TOA) Normalized Difference Vegetation Index (NDVI) and the atmospherically corrected, Top-of-Canopy (TOC) Enhanced Vegetation Index (EVI) (daily, 375 m spatial resolution) were compared against the TOC NDVI and a two-band version of the EVI (EVI2) calculated from tower radiation flux measurements, respectively. VIIRS and tower VI time series showed comparable seasonal profiles across biomes, with statistically significant correlations (> 0.60; p-value < 0.01). "Start-of-season" (SOS) phenological metric values extracted from VIIRS and tower VI time series were also highly compatible (R2 > 0.95), with mean differences of 2.3 days and 5.0 days for the NDVI and the EVI, respectively. These results indicate that VIIRS VI time series can capture the seasonal evolution of the vegetated land surface as well as in situ radiometric measurements. Future studies that address biophysical or physiological interpretations of tower VI time series derived from radiation flux measurements are desirable.
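The two tower-side indices have simple closed forms; a small sketch, assuming surface reflectances in [0, 1] and the standard two-band EVI2 coefficients:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def evi2(nir, red):
    """Two-band Enhanced Vegetation Index (standard EVI2 formulation),
    the index computed from tower radiation fluxes in the comparison."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)
```

Both functions broadcast over NumPy arrays, so a whole tower time series of paired NIR/red reflectances can be converted to index time series in one call.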
Calculation of power spectrums from digital time series with missing data points
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.
1980-01-01
Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
Towards seasonal forecasting of malaria in India.
Lauderdale, Jonathan M; Caminade, Cyril; Heath, Andrew E; Jones, Anne E; MacLeod, David A; Gouda, Krushna C; Murty, Upadhyayula Suryanarayana; Goswami, Prashant; Mutheneni, Srinivasa R; Morse, Andrew P
2014-08-10
Malaria presents a public health challenge despite extensive intervention campaigns. A 30-year hindcast of the climatic suitability for malaria transmission in India is presented, using meteorological variables from a state-of-the-art seasonal forecast model to drive a process-based, dynamic disease model. The spatial distributions and seasonal cycles of temperature and precipitation from the forecast model are compared to three observationally based meteorological datasets. These time series are then used to drive the disease model, producing a simulated forecast of malaria and three synthetic malaria time series that are qualitatively compared to contemporary and pre-intervention malaria estimates. The area under the Relative Operating Characteristic (ROC) curve is calculated as a quantitative metric of forecast skill, comparing the forecast to the meteorologically driven synthetic malaria time series. The forecast shows probabilistic skill in predicting the spatial distribution of Plasmodium falciparum incidence when compared to the simulated meteorologically driven malaria time series, particularly where modelled incidence shows high seasonal and interannual variability, such as in Orissa, West Bengal, and Jharkhand (North-east India), and Gujarat, Rajasthan, Madhya Pradesh and Maharashtra (North-west India). Focusing on these two regions, the malaria forecast is able to distinguish between years of "high", "above average" and "low" malaria incidence in the peak malaria transmission seasons, with more than 70% sensitivity and a statistically significant area under the ROC curve. These results are encouraging given that the three-month forecast lead time used is well in excess of the target for early warning systems adopted by the World Health Organization. This approach could form the basis of an operational system to identify the probability of regional malaria epidemics, allowing advanced and targeted allocation of resources for combating malaria in India.
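The area under the ROC curve used as the skill metric can be computed directly from its probabilistic interpretation; a generic sketch (not the hindcast's actual verification code):

```python
import numpy as np

def roc_auc(forecast_prob, event_occurred):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen event year receives a higher
    forecast probability than a randomly chosen non-event year,
    with ties counted as one half."""
    pos = forecast_prob[event_occurred]
    neg = forecast_prob[~event_occurred]
    diff = pos[:, None] - neg[None, :]
    return float(((diff > 0).sum() + 0.5 * (diff == 0).sum())
                 / (len(pos) * len(neg)))
```

An AUC of 1.0 corresponds to perfect discrimination of event years, 0.5 to no skill, which is why significance of the AUC above 0.5 is the natural test for a probabilistic epidemic forecast.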
NASA Astrophysics Data System (ADS)
Pardo-Iguzquiza, Eulogio; Rodríguez-Tovar, Francisco J.
2011-12-01
One important handicap when working with stratigraphic sequences is the discontinuous character of the sedimentary record, which is especially relevant in cyclostratigraphic analysis. Uneven palaeoclimatic/palaeoceanographic time series are common, and their cyclostratigraphic analysis is comparatively difficult because most spectral methodologies are appropriate only for evenly sampled data. As a means to solve this problem, a program for calculating the smoothed Lomb-Scargle periodogram and cross-periodogram has been developed, which additionally evaluates the statistical confidence of the estimated power spectrum through a Monte Carlo procedure (the permutation test). The spectral analysis of a short uneven time series calls for assessment of the statistical significance of the spectral peaks, since a periodogram can always be calculated but the main challenge resides in identifying true spectral features. To demonstrate the effectiveness of this program, two case studies are presented: one deals with synthetic data, the other with palaeoceanographic/palaeoclimatic proxies. From a simulated time series of 500 points, two uneven time series (with 100 and 25 points) were generated by selecting data at random. Comparative analysis of the power spectra from the simulated series and from the two uneven time series demonstrates the usefulness of the smoothed Lomb-Scargle periodogram for uneven sequences, making it possible to distinguish between statistically significant and spurious spectral peaks. Fragmentary time series of Cd/Ca ratios and δ18O from core AII107-131 of SPECMAP were analysed as a real case study. The efficiency of the direct and cross Lomb-Scargle periodograms in recognizing Milankovitch and sub-Milankovitch signals related to palaeoclimatic/palaeoceanographic changes is demonstrated. As implemented, the Lomb-Scargle periodogram may be applied to any palaeoclimatic/palaeoceanographic proxies, including those usually recovered from contourites, and it holds special interest in the context of centennial- to millennial-scale climatic changes affecting contouritic currents.
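A minimal usage sketch of a Lomb-Scargle analysis with SciPy (the smoothing, cross-periodogram, and permutation-based significance testing of the program described above are not reproduced; the synthetic series is an assumption):

```python
import numpy as np
from scipy.signal import lombscargle

# An unevenly sampled sinusoid with period 8 time units, recovered by the
# Lomb-Scargle periodogram without any interpolation onto a regular grid.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 200, 200))          # uneven sampling times
y = np.sin(2 * np.pi * t / 8.0) + 0.2 * rng.standard_normal(200)

freqs = np.linspace(0.01, 1.0, 500)            # angular frequencies (rad/unit)
power = lombscargle(t, y - y.mean(), freqs)    # pre-center before analysis
peak_period = 2 * np.pi / freqs[np.argmax(power)]
```

Note that scipy.signal.lombscargle expects angular frequencies; converting the peak back to a period makes the recovered cycle directly comparable to Milankovitch-style period estimates.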
Comparing the structure of an emerging market with a mature one under global perturbation
NASA Astrophysics Data System (ADS)
Namaki, A.; Jafari, G. R.; Raei, R.
2011-09-01
In this paper we investigate the Tehran Stock Exchange (TSE) and the Dow Jones Industrial Average (DJIA) in terms of perturbed correlation matrices. There are two ways to perturb a stock market, locally and globally. In the local method, we replace a single coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, whereas in the global method, we reconstruct the correlation matrix after replacing all the original return series with Gaussian-distributed time series. The local perturbation serves only as a technical check. We analyze these markets through two statistical approaches, random matrix theory (RMT) and the distribution of correlation coefficients. Using RMT, we find that the largest eigenvalue represents an influence common to all stocks and that this eigenvalue peaks during financial shocks. We find that a few strongly correlated stocks account for the essential robustness of the stock market, but when these return series are replaced with Gaussian-distributed time series, the mean correlation coefficient, the largest eigenvalue, and the fraction of eigenvalues deviating from the RMT prediction all fall sharply in both markets. Comparing the two markets, we see that the DJIA is more sensitive to global perturbations. These findings are crucial for risk management and portfolio selection.
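The RMT comparison relies on the Marchenko-Pastur bounds, which delimit the eigenvalues expected from purely random correlations; a sketch (the single-factor test data below are illustrative, not TSE/DJIA returns):

```python
import numpy as np

def rmt_deviating_eigenvalues(returns):
    """Compare the eigenvalues of the empirical correlation matrix with the
    Marchenko-Pastur bounds for uncorrelated series of the same shape.
    Eigenvalues outside [lam_min, lam_max] carry genuine cross-correlation
    information; the largest is the common 'market mode'."""
    T, N = returns.shape                       # T observations, N assets
    C = np.corrcoef(returns, rowvar=False)
    eig = np.linalg.eigvalsh(C)
    q = N / T
    lam_min = (1 - np.sqrt(q)) ** 2
    lam_max = (1 + np.sqrt(q)) ** 2
    return eig, eig[(eig < lam_min) | (eig > lam_max)]
```

With a common factor in the returns, the largest eigenvalue escapes the random-matrix band, reproducing in miniature the market-mode signature described above.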
Applying "domino" model to study dipolar geomagnetic field reversals and secular variation
NASA Astrophysics Data System (ADS)
Peqini, Klaudio; Duka, Bejo
2014-05-01
Aiming to understand the physical processes underlying geomagnetic field reversals, different numerical models have been conceived. We considered the so-called "domino" model, an Ising-Heisenberg model of interacting magnetic spins aligned along a ring [Mazaud and Laj, EPSL, 1989; Mori et al., arXiv:1110.5062v2, 2012]. We will present here some results that differ slightly from those already published, and will give our interpretation of the differences. Following empirical studies of long series of the axial magnetic moment (dipolar moment or "magnetization") generated by the model while varying all model parameters, we defined the set of parameters that yields the longest mean time between reversals. Using this set of parameters, a short time series (about 10,000 years) of the axial magnetic moment was generated. After de-noising the fluctuations of this time series, we compared it with the series of dipolar magnetic moment values supplied by the CALS10K.1b model for the last 10,000 years. We found similar behavior in both series, even if the "domino" model cannot supply a full explanation of the geomagnetic field secular variation (SV). In a similar way we will compare a 14,000-year-long series with the dipolar magnetic moment obtained from the SHA.DIF.14k model [Pavón-Carrasco et al., EPSL, 2014].
Yang, Hong; Lin, Shan; Cui, Jingru
2014-02-10
Arsenic trioxide (ATO) is presently the most active single agent in the treatment of acute promyelocytic leukemia (APL). In order to explore the molecular mechanism of ATO in leukemia cells over time, we adopted a bioinformatics strategy to analyze the expression-change patterns and changes in transcription regulation modules of time series genes filtered from the Gene Expression Omnibus database (GSE24946). In total, we screened out 1847 time series genes for subsequent analysis. KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway enrichment analysis of these genes showed that oxidative phosphorylation and ribosome were the top two significantly enriched pathways. STEM software was employed to compare the changing patterns of gene expression against 50 assigned expression patterns. We screened out 7 significantly enriched patterns and 4 tendency charts of time series genes. Gene Ontology results showed that the functions of time series genes were mainly distributed in profiles 41, 40, 39 and 38. Seven genes with positive regulation of cell adhesion function were enriched in profile 40 and presented the same initial increase followed by a decrease as profile 40. Transcription module analysis showed that they were mainly involved in the oxidative phosphorylation and ribosome pathways. Overall, our data summarize the gene expression changes in ATO-treated K562-r cell lines over time and suggest that time series genes mainly regulate cell adhesion. Furthermore, our results may provide a theoretical basis in molecular biology for treating acute promyelocytic leukemia. Copyright © 2013 Elsevier B.V. All rights reserved.
An approach to checking case-crossover analyses based on equivalence with time-series methods.
Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L
2008-03-01
The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression modeling when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates in case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.
NASA Astrophysics Data System (ADS)
Chorozoglou, D.; Kugiumtzis, D.; Papadimitriou, E.
2018-06-01
Seismic hazard assessment in the area of Greece is attempted by studying the structure of the earthquake network, e.g., whether it is small-world or random. In this network, a node represents a seismic zone in the study area, and a connection between two nodes is given by the correlation of the seismic activity of the two zones. To investigate the network structure, and particularly the small-world property, the earthquake correlation network is compared with randomized ones. Simulations on multivariate time series of different lengths and numbers of variables show that, for the construction of randomized networks, the method that randomizes the time series performs better than methods that directly randomize the original network connections. Based on the appropriate randomization method, the network approach is applied to time series of earthquakes that occurred between main shocks in the territory of Greece, spanning the period 1999-2015. The characterization of networks on sliding time windows revealed that small-world structure emerges in the last time interval, shortly before the main shock.
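The time-series randomization can be sketched with independent shuffles, which destroy cross-correlations while preserving each zone's amplitude distribution (the shuffle surrogate and the 0.5 correlation threshold are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def correlation_network(series, threshold=0.5):
    """Adjacency matrix: zones i and j are linked when the absolute
    correlation of their activity series exceeds the threshold."""
    C = np.corrcoef(series)
    A = (np.abs(C) > threshold).astype(int)
    np.fill_diagonal(A, 0)
    return A

def randomized_edge_count(series, n_surrogates=20, threshold=0.5, seed=0):
    """Mean edge count of networks built from time-series randomizations:
    each series is shuffled independently, so any edges that survive are
    attributable to chance rather than genuine coupling."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_surrogates):
        surr = np.array([rng.permutation(s) for s in series])
        counts.append(correlation_network(surr, threshold).sum() // 2)
    return float(np.mean(counts))
```

Comparing the original network's edge count against the surrogate ensemble is the simplest version of the randomization benchmark used to test for small-world structure.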
A cluster merging method for time series microarray with production values.
Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio
2014-09-01
A challenging task in time-course microarray data analysis is to cluster genes meaningfully while combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created for the individual time series (representing different biological replicates measured at the same time points) and to merge them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study focuses on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevance measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated from the mean values of the time series using the same shape-based algorithm.
NASA Astrophysics Data System (ADS)
Berx, Barbara; Payne, Mark R.
2017-04-01
Scientific interest in the sub-polar gyre of the North Atlantic Ocean has increased in recent years. The sub-polar gyre has contracted and weakened, and changes in circulation pathways have been linked to changes in marine ecosystem productivity. To aid fisheries and environmental scientists, we present here a time series of the Sub-Polar Gyre Index (SPG-I) based on monthly mean maps of sea surface height. The established definition of the SPG-I is applied, and the first EOF (empirical orthogonal function) and PC (principal component) are presented. Sensitivity to the spatial domain and time series length are explored but found not to be important factors in terms of the SPG-I's interpretation. Our time series compares well with indices presented previously. The SPG-I time series is freely available online (http://dx.doi.org/10.7489/1806-1), and we invite the community to access, apply, and publish studies using this index time series.
Hatch, Christine E; Fisher, Andrew T.; Revenaugh, Justin S.; Constantz, Jim; Ruehl, Chris
2006-01-01
We present a method for determining streambed seepage rates using time series thermal data. The new method is based on quantifying changes in phase and amplitude of temperature variations between pairs of subsurface sensors. For a reasonable range of streambed thermal properties and sensor spacings, the time series method should allow reliable estimation of seepage rates over a range of at least ±10 m d−1 (±1.2 × 10−4 m s−1), with amplitude variations being most sensitive at low flow rates and phase variations retaining sensitivity out to much higher rates. Compared to forward modeling, the new method requires less observational data and less setup and data handling, and is faster, particularly when interpreting many long data sets. The time series method is insensitive to streambed scour and sedimentation, which allows application under a wide range of flow conditions and permits time series estimation of variable streambed hydraulic conductivity. This new approach should facilitate wider use of thermal methods and improve understanding of the complex spatial and temporal dynamics of surface water–groundwater interactions.
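The amplitude and phase extraction at the heart of the method can be sketched with a discrete Fourier transform; this simplified illustration recovers the diurnal amplitude and phase of a single record, whereas the published method pairs these quantities between sensor depths and inverts an analytical heat-transport solution, which is not reproduced here:

```python
import numpy as np

def diurnal_amplitude_phase(temp, dt_hours):
    """Amplitude and phase of the diurnal (1 cycle/day) component of a
    temperature record, read off the DFT bin nearest one cycle per day.
    The amplitude ratio and phase shift between a shallow and a deep
    sensor are the quantities the seepage method is built on."""
    n = len(temp)
    freqs = np.fft.rfftfreq(n, d=dt_hours)      # cycles per hour
    spec = np.fft.rfft(temp - np.mean(temp))
    k = np.argmin(np.abs(freqs - 1.0 / 24.0))   # bin nearest 1/day
    amp = 2.0 * np.abs(spec[k]) / n             # one-sided amplitude
    phase = np.angle(spec[k])
    return amp, phase
```

Applied to records from two depths, the ratio of the two amplitudes and the difference of the two phases give the damping and lag used to estimate seepage rates.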
van Mierlo, Pieter; Lie, Octavian; Staljanssens, Willeke; Coito, Ana; Vulliémoz, Serge
2018-04-26
We investigated the influence of processing steps on the estimation of multivariate directed functional connectivity during seizures recorded with intracranial EEG (iEEG) for seizure-onset zone (SOZ) localization. We studied the effect of (i) the number of nodes, (ii) time-series normalization, (iii) the choice of multivariate time-varying connectivity measure, the Adaptive Directed Transfer Function (ADTF) or the Adaptive Partial Directed Coherence (APDC), and (iv) the graph theory measure, outdegree or shortest path length. First, simulations were performed to quantify the influence of the various processing steps on the accuracy of SOZ localization. Afterwards, the SOZ was estimated from a 113-electrode iEEG seizure recording and compared with the resection that rendered the patient seizure-free. The simulations revealed that the ADTF is preferred over the APDC for localizing the SOZ from ictal iEEG recordings. Normalizing the time series before analysis resulted in an increase of 25-35% in correctly localized SOZs, while adding more nodes to the connectivity analysis led to a moderate decrease of 10% when comparing 128 with 32 input nodes. The real-seizure connectivity estimates localized the SOZ inside the resection area using the ADTF coupled to outdegree or shortest path length. Our study showed that normalizing the time series is an important pre-processing step, while adding nodes to the analysis only marginally affected the SOZ localization. The study shows that directed multivariate Granger-based connectivity analysis is feasible with many input nodes (> 100) and that normalization of the time series before connectivity analysis is preferred.
ERIC Educational Resources Information Center
Zvoch, Keith
2016-01-01
Piecewise growth models (PGMs) were used to estimate and model changes in the preliteracy skill development of kindergartners in a moderately sized school district in the Pacific Northwest. PGMs were applied to interrupted time-series (ITS) data that arose within the context of a response-to-intervention (RtI) instructional framework. During the…
Bendel, David; Beck, Ferdinand; Dittmer, Ulrich
2013-01-01
In the present study, climate change impacts on combined sewer overflows (CSOs) in Baden-Wuerttemberg, Southern Germany, were assessed based on continuous long-term rainfall-runoff simulations. Synthetic rainfall time series were used as input data. The applied precipitation generator, NiedSim-Klima, accounts for climate change effects on precipitation patterns. Time series for the past (1961-1990) and the future (2041-2050) were generated for various locations. Comparing the simulated CSO activity of the two periods, we observe significantly higher overflow frequencies in the future. Changes in overflow volume and overflow duration depend on the type of overflow structure. Both values will increase at simple CSO structures that merely divide the flow, whereas they will decrease when the CSO structure is combined with a storage tank. However, there is wide variation between the results for different precipitation time series (representative of different locations).
López-Caraballo, C. H.; Lazzús, J. A.; Salfate, I.; Rojas, P.; Rivera, M.; Palma-Chilla, L.
2015-01-01
An artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the Mackey-Glass chaotic time series for short-term prediction, x(t + 6). The prediction performance was evaluated and compared with other studies available in the literature. We also present properties of the dynamical system via the study of the chaotic behaviour obtained from the predicted time series. Next, the hybrid ANN+PSO algorithm was complemented with a Gaussian stochastic procedure (called stochastic hybrid ANN+PSO) to obtain a new estimator of the predictions, which also allowed us to compute prediction uncertainties for noisy Mackey-Glass chaotic time series. Thus, we studied the impact of noise for several cases with white noise levels (σN) from 0.01 to 0.1. PMID:26351449
NASA Astrophysics Data System (ADS)
Francile, C.; Luoni, M. L.
We present a prediction of the time series of the sunspot Wolf number R using time-lagged feed-forward neural networks. We use two types of networks, focused and distributed, trained with the error back-propagation algorithm and the temporal back-propagation algorithm, respectively. As inputs to the neural networks we use the time series of the number R averaged annually and monthly with the IR5 method. As data sets for training and testing we chose certain intervals of the time series similar to those in other works, in order to compare the results. Finally, we discuss the topology of the networks used, the number of delays, the number of neurons per layer, the number of hidden layers, and the results in predicting the series between one and six steps ahead. FULL TEXT IN SPANISH
Using ordinal partition transition networks to analyze ECG data
NASA Astrophysics Data System (ADS)
Kulp, Christopher W.; Chobot, Jeremy M.; Freitas, Helena R.; Sprechini, Gene D.
2016-07-01
Electrocardiogram (ECG) data from patients with a variety of heart conditions are studied using ordinal pattern partition networks. The ordinal pattern partition networks are formed from the ECG time series by symbolizing the data into ordinal patterns. The ordinal patterns form the nodes of the network, and edges are defined through the time ordering of the ordinal patterns in the symbolized time series. A network measure, called the mean degree, is computed from each time series-generated network. In addition, the entropy and the number of non-occurring ordinal patterns (NFP) are computed for each series. The distributions of mean degrees, entropies, and NFPs for each heart condition studied are compared. A statistically significant difference between healthy patients and several groups of unhealthy patients with varying heart conditions is found for the distributions of the mean degrees, unlike for any of the distributions of the entropies or NFPs.
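The construction described (windows symbolized by their rank order, patterns as nodes, consecutive patterns as edges) can be sketched as follows; treating the network as undirected and ignoring self-transitions is one common convention and may differ from the paper's exact definition:

```python
import numpy as np

def mean_degree(x, m=3):
    """Mean degree of an ordinal partition transition network.

    Each length-m window of the series is symbolized by its ordinal
    pattern (the argsort rank order); patterns are nodes, and an
    undirected edge links patterns that occur consecutively."""
    pats = [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]
    edges = set()
    for a, b in zip(pats, pats[1:]):
        if a != b:                         # ignore self-transitions
            edges.add(frozenset((a, b)))
    nodes = set(pats)
    return 2.0 * len(edges) / len(nodes)   # each edge adds 2 to total degree
```

A monotone series visits a single pattern (mean degree 0), while richer dynamics visit more patterns and transitions.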
NASA Astrophysics Data System (ADS)
Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.
2018-04-01
In this study, we proposed a method to map urban encroachment onto farmland using satellite image time series (SITS) based on the hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels, i.e., the land cover level, the vegetation phenology level, and the SITS level. Then a three-level HHMM is constructed to model the multi-level semantic structure of the farmland change process. Once the HHMM is established, a change from farmland to built-up can be detected by inferring the underlying state sequence that is most likely to have generated the input time series. The performance of the method is evaluated on MODIS time series in Beijing. Results on both simulated and real datasets demonstrate that our method improves the change detection accuracy compared with the HMM-based method.
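Inferring the most likely underlying state sequence of a hidden Markov model rests on the Viterbi algorithm; a flat-HMM sketch in log space (the paper's three-level hierarchical model and MODIS inputs are not reproduced here):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence of an HMM (log-space Viterbi).
    pi: initial probabilities, A[i, j]: transition i -> j,
    B[i, o]: probability of emitting symbol o from state i."""
    n, k = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])      # best log-prob per state
    back = np.zeros((n, k), dtype=int)            # best predecessor table
    for t in range(1, n):
        scores = logd[:, None] + np.log(A)        # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(n - 1, 0, -1):                 # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With sticky transitions and informative emissions, the decoded sequence tracks the observations and switches state only once, which is the behaviour a change-detection application relies on.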
NASA Astrophysics Data System (ADS)
Baldysz, Zofia; Nykiel, Grzegorz; Figurski, Mariusz; Szafranek, Karolina; Kroszczynski, Krzysztof; Araszkiewicz, Andrzej
2015-04-01
In recent years, GNSS has begun to play an increasingly important role in research related to climate monitoring. Based on the GPS system, which has the longest operational record in comparison with other systems, and a common computational strategy applied to all observations, long and homogeneous ZTD (Zenith Tropospheric Delay) time series were derived. This paper presents results of an analysis of 16-year ZTD time series obtained from the EPN (EUREF Permanent Network) reprocessing performed by the Military University of Technology. To maintain the uniformity of the data, the analyzed period (1998-2013) is exactly the same for all stations: observations carried out before 1998 were removed from the time series, and observations processed using a different strategy were recalculated according to the MUT LAC approach. For all 16-year time series (59 stations), Lomb-Scargle periodograms were created to obtain information about the oscillations in the ZTD time series. Because strong annual oscillations disturb the character of oscillations with smaller amplitudes and thus hinder their investigation, Lomb-Scargle periodograms were also created for time series with the annual oscillations removed, in order to verify the presence of semi-annual, ter-annual and quarto-annual oscillations. The linear trend and seasonal components were estimated using LSE (Least Squares Estimation), and the Mann-Kendall trend test was used to confirm the presence of the linear trend identified by the LSE method. In order to verify the effect of the length of the time series on the estimated size of the linear trend, two different lengths of ZTD time series were compared. To carry out this comparative analysis, 30 stations which have been operating since 1996 were selected. For these stations two periods were analyzed: a shortened 16-year period (1998-2013) and the full 18-year period (1996-2013).
For some stations the additional two years of observations have a significant impact on the size of the linear trend: only for 4 stations was the size of the linear trend exactly the same for the two periods. In one case, the sign of the trend changed from negative (16-year time series) to positive (18-year time series). The average value of the linear trends for the 16-year time series is 1.5 mm/decade, but their spatial distribution is not uniform. The average value of the linear trends for the 18-year time series is 2.0 mm/decade, with a better spatial distribution and smaller discrepancies.
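The trend-plus-seasonal least-squares estimation described above can be sketched for a single series; fitting an annual sinusoid alongside the linear term is an illustrative simplification of the full seasonal model:

```python
import numpy as np

def fit_trend_seasonal(t, y, period=365.25):
    """Least-squares fit of y = a + b*t + c*sin(2*pi*t/T) + d*cos(2*pi*t/T).
    Returns the coefficient vector (a, b, c, d); b is the linear trend."""
    t = np.asarray(t, float)
    w = 2.0 * np.pi / period
    A = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return coef
```

Because a sinusoid with an arbitrary phase is a linear combination of the sine and cosine columns, the annual cycle is absorbed exactly and the trend coefficient is unbiased by it.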
van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter
2015-08-07
Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. 
Analysis of additional datasets is needed in order to validate and refine the application for general use.
PMID:26254160
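AutoVAR's core idea of scoring every candidate model in a search space by an information criterion can be illustrated on a univariate AR analogue (AutoVAR itself fits multivariate VAR models; the function names here are hypothetical):

```python
import numpy as np

def ar_aic(y, p):
    """Fit AR(p) by least squares and return the Akaike information criterion."""
    y = np.asarray(y, float)
    n = len(y)
    # design matrix: rows [y[t-1], ..., y[t-p], 1] predicting y[t]
    X = np.column_stack([y[p - 1 - k : n - 1 - k] for k in range(p)]
                        + [np.ones(n - p)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss = np.sum((target - X @ coef) ** 2)
    return (n - p) * np.log(rss / (n - p)) + 2 * (p + 1)

def select_order(y, pmax=6):
    """Return the lag order with the lowest AIC, as an automated search would."""
    return min(range(1, pmax + 1), key=lambda p: ar_aic(y, p))
```

The automated procedure simply replaces a researcher's manual inspection with this exhaustive evaluation of the candidate set.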
Baksi, Krishanu D; Kuntal, Bhusan K; Mande, Sharmila S
2018-01-01
Realization of the importance of microbiome studies, coupled with the decreasing sequencing cost, has led to the exponential growth of microbiome data. A number of these microbiome studies have focused on understanding changes in the microbial community over time. Such longitudinal microbiome studies have the potential to offer unique insights pertaining to the microbial social networks as well as their responses to perturbations. In this communication, we introduce a web based framework called 'TIME' ('Temporal Insights into Microbial Ecology'), developed specifically to obtain meaningful insights from microbiome time series data. The TIME web-server is designed to accept a wide range of popular formats as input, with options to preprocess and filter the data. Multiple samples, defined by a series of longitudinal time points along with their metadata information, can be compared in order to interactively visualize the temporal variations. In addition to standard microbiome data analytics, the web server implements popular time series analysis methods like dynamic time warping, Granger causality and the Dickey-Fuller test to generate interactive layouts for facilitating easy biological inferences. Apart from this, a new metric for comparing metagenomic time series data has been introduced to effectively visualize the similarities/differences in the trends of the resident microbial groups. Augmenting the visualizations with stationarity information pertaining to the microbial groups helps predict the microbial competition as well as the community structure. Additionally, the 'causality graph analysis' module incorporated in TIME allows predicting taxa that might have a higher influence on community structure in different conditions. TIME also allows users to easily identify potential taxonomic markers from a longitudinal microbiome analysis. 
We illustrate the utility of the web-server features on several published time-series microbiome datasets and demonstrate the ease with which they can be used to perform complex analyses.
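Of the methods listed above, dynamic time warping admits a compact sketch; this is the textbook O(n·m) dynamic program, not TIME's own implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.
    D[i, j] holds the cheapest alignment cost of a[:i] against b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: step in a, step in b, or step in both
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Unlike a point-by-point Euclidean comparison, DTW scores two abundance trajectories as identical even when one is locally stretched in time, which is why it suits longitudinal microbiome profiles sampled at uneven intervals.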
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques.
Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; De Gaetani, Carlo Iapige; Pinto, Livio; Tornatore, Vincenza
2018-03-02
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to the recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D'Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit pendulum data and GNSS data, with standard deviation of residuals smaller than one millimeter. These encouraging results led to the conclusion that GNSS technique can be profitably applied to dam monitoring allowing a denser description, both in space and time, of the dam displacements than the one based on pendulum observations.
Evolution of the Sunspot Number and Solar Wind B Time Series
NASA Astrophysics Data System (ADS)
Cliver, Edward W.; Herbst, Konstantin
2018-03-01
The past two decades have witnessed significant changes in our knowledge of long-term solar and solar wind activity. The sunspot number time series (1700-present) developed by Rudolf Wolf during the second half of the 19th century was revised and extended by the group sunspot number series (1610-1995) of Hoyt and Schatten during the 1990s. The group sunspot number is significantly lower than the Wolf series before ˜1885. An effort from 2011-2015 to understand and remove differences between these two series via a series of workshops had the unintended consequence of prompting several alternative constructions of the sunspot number. Thus it has been necessary to expand and extend the sunspot number reconciliation process. On the solar wind side, after a decade of controversy, an ISSI International Team used geomagnetic and sunspot data to obtain a high-confidence time series of the solar wind magnetic field strength (B) from 1750-present that can be compared with two independent long-term (> ˜600 year) series of annual B-values based on cosmogenic nuclides. In this paper, we trace the twists and turns leading to our current understanding of long-term solar and solar wind activity.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Molnar, Gyula; Iredell, Lena; Loeb, Norman G.
2011-01-01
This paper compares recent spatial and temporal anomaly time series of OLR as observed by CERES and computed based on AIRS retrieved surface and atmospheric geophysical parameters over the 7 year time period September 2002 through February 2010. This time period is marked by a substantial decrease of OLR, on the order of +/-0.1 W/sq m/yr, averaged over the globe, and very large spatial variations of changes in OLR in the tropics, with local values ranging from -2.8 W/sq m/yr to +3.1 W/sq m/yr. Global and Tropical OLR both began to decrease significantly at the onset of a strong La Niña in mid-2007. Late 2009 is characterized by a strong El Niño, with a corresponding change in sign of both Tropical and Global OLR anomalies. The spatial patterns of the 7 year short term changes in AIRS and CERES OLR have a spatial correlation of 0.97 and slopes of the linear least squares fits of anomaly time series averaged over different spatial regions agree on the order of +/-0.01 W/sq m/yr. This essentially perfect agreement of OLR anomaly time series derived from observations by two different instruments, determined in totally independent and different manners, implies that both sets of results must be highly stable. This agreement also validates the anomaly time series of the AIRS derived products used to compute OLR and furthermore indicates that anomaly time series of AIRS derived products can be used to explain the factors contributing to anomaly time series of OLR.
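Anomaly time series of the kind compared above are deviations from a mean seasonal cycle; a minimal sketch for monthly data (whole years starting in the same calendar month is an assumption of this sketch, not of the paper):

```python
import numpy as np

def monthly_anomalies(x):
    """Anomalies relative to the mean seasonal cycle: subtract each
    calendar month's long-term mean. Assumes monthly values covering
    whole years, all starting in the same calendar month."""
    x = np.asarray(x, float).reshape(-1, 12)   # rows = years, cols = months
    return (x - x.mean(axis=0)).ravel()
```

A purely repeating seasonal cycle maps to zero anomalies, while an interannual trend or an ENSO excursion survives the subtraction.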
New insights into soil temperature time series modeling: linear or nonlinear?
NASA Astrophysics Data System (ADS)
Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram
2018-03-01
Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields including agriculture because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with the two nonlinear methods in two forms: considering hydrological variables (HV) as input variables and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison results signify that the relative error projected in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. 
Due to these models' relatively inferior performance to the proposed methodology, two hybrid models were implemented: the weights and membership function of MLP and ANFIS (respectively) were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform and nonlinear methods (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with individual and hybrid nonlinear models in predicting DST time series indicates the lowest Akaike Information Criterion (AIC) index value, which considers model simplicity and accuracy simultaneously at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to complex nonlinear methods that are normally employed to examine DST.
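The seasonal standardization and seasonal differencing methods compared above can be sketched directly (the season length s is a parameter; daily data with an annual cycle would use s = 365):

```python
import numpy as np

def seasonal_difference(x, s):
    """Seasonal differencing: y[t] = x[t] - x[t - s]."""
    x = np.asarray(x, float)
    return x[s:] - x[:-s]

def seasonal_standardize(x, s):
    """Seasonal standardization: remove each seasonal position's mean
    and divide by its standard deviation (whole seasons assumed)."""
    x = np.asarray(x, float).reshape(-1, s)    # rows = seasons
    return ((x - x.mean(axis=0)) / x.std(axis=0)).ravel()
```

Differencing removes a repeating cycle exactly but shortens the series by one season, whereas standardization keeps the full length and also equalizes the seasonal variance, which is why the two preprocessing choices can lead to different downstream model rankings.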
Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...
2016-10-20
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
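A representative Information Theory Quantifier of the kind discussed above is normalized permutation entropy (Bandt-Pompe ordinal patterns); this is a minimal sketch of one such measure, not the paper's exact quantifier set:

```python
import math
from collections import Counter

def permutation_entropy(x, m=3):
    """Normalized permutation entropy: the Shannon entropy of the
    ordinal-pattern distribution of x, scaled to [0, 1] by log(m!)."""
    counts = Counter(tuple(sorted(range(m), key=lambda k: x[i + k]))
                     for i in range(len(x) - m + 1))
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))
```

A perfectly ordered series scores near 0 and uncorrelated noise scores near 1, so two GPP series with similar error statistics can still be separated by where they fall on this scale.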
Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.
2016-01-01
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187
Boolean network inference from time series data incorporating prior biological knowledge.
Haider, Saad; Pal, Ranadip
2012-01-01
Numerous approaches exist for modeling of genetic regulatory networks (GRNs), but the low sampling rates often employed in biological studies prevent the inference of detailed models from experimental data. In this paper, we analyze the issues involved in estimating a model of a GRN from single cell line time series data with limited time points. We present an inference approach for a Boolean Network (BN) model of a GRN from limited transcriptomic or proteomic time series data based on prior biological knowledge of connectivity, constraints on attractor structure and robust design. We applied our inference approach to 6 time point transcriptomic data on a Human Mammary Epithelial Cell line (HMEC) after application of Epidermal Growth Factor (EGF) and generated a BN with a plausible biological structure satisfying the data. We further defined and applied a similarity measure to compare synthetic BNs and BNs generated through the proposed approach constructed from transitions of various paths of the synthetic BNs. We have also compared the performance of our algorithm with two existing BN inference algorithms. Through theoretical analysis and simulations, we showed the rarity of arriving at a BN with plausible biological structure from limited time series data when using random connectivity and in the absence of structure in the data. The framework, when applied to experimental data and to data generated from synthetic BNs, was able to estimate BNs with high similarity scores. Comparison with existing BN inference algorithms showed that our proposed algorithm performs better for limited time series data. The proposed framework can also be applied to optimize the connectivity of a GRN from experimental data when the prior biological knowledge on regulators is limited or not unique.
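Synchronous Boolean network dynamics and attractor detection, the core objects of the inference problem above, can be sketched as follows (the update functions in the test are toy examples, not rules inferred from data):

```python
def bn_step(state, funcs):
    """One synchronous update of a Boolean network: funcs[i] maps the
    full state tuple to node i's next Boolean value."""
    return tuple(f(state) for f in funcs)

def find_attractor(state, funcs):
    """Iterate the network until a state repeats; return the attractor
    cycle (a fixed point is a cycle of length 1)."""
    seen, trail = {}, []
    while state not in seen:
        seen[state] = len(trail)
        trail.append(state)
        state = bn_step(state, funcs)
    return trail[seen[state]:]
```

Constraints on attractor structure, as used in the paper, amount to requiring that candidate networks reproduce cycles or fixed points like the ones this routine enumerates.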
A matching framework to improve causal inference in interrupted time-series analysis.
Linden, Ariel
2018-04-01
Interrupted time-series analysis (ITSA) is a popular evaluation methodology in which a single treatment unit's outcome is studied over time and the intervention is expected to "interrupt" the level and/or trend of the outcome, subsequent to its introduction. When ITSA is implemented without a comparison group, the internal validity may be quite poor. Therefore, adding a comparable control group to serve as the counterfactual is always preferred. This paper introduces a novel matching framework, ITSAMATCH, to create a comparable control group by matching directly on covariates and then use these matches in the outcomes model. We evaluate the effect of California's Proposition 99 (passed in 1988) for reducing cigarette sales, by comparing California to other states not exposed to smoking reduction initiatives. We compare ITSAMATCH results to 2 commonly used matching approaches, synthetic controls (SYNTH), and regression adjustment; SYNTH reweights nontreated units to make them comparable to the treated unit, and regression adjusts covariates directly. Methods are compared by assessing covariate balance and treatment effects. Both ITSAMATCH and SYNTH achieved covariate balance and estimated similar treatment effects. The regression model found no treatment effect and produced inconsistent covariate adjustment. While the matching framework achieved results comparable to SYNTH, it has the advantage of being technically less complicated, while producing statistical estimates that are straightforward to interpret. Conversely, regression adjustment may "adjust away" a treatment effect. Given its advantages, ITSAMATCH should be considered as a primary approach for evaluating treatment effects in multiple-group time-series analysis. © 2017 John Wiley & Sons, Ltd.
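The single-group segmented regression that underlies ITSA can be sketched directly; the variable names are illustrative and this omits the matched control group that ITSAMATCH adds:

```python
import numpy as np

def itsa_fit(t, y, t0):
    """Single-group interrupted time-series (segmented) regression:
    y = b0 + b1*t + b2*post + b3*(t - t0)*post, with post = 1(t >= t0).
    b2 is the level change and b3 the slope change at the interruption."""
    t = np.asarray(t, float)
    post = (t >= t0).astype(float)
    A = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return coef
```

Adding a comparable control series, as the paper argues, amounts to estimating the same level and slope changes for the control and differencing them out of the treated unit's estimates.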
Lee, Hyokyeong; Moody-Davis, Asher; Saha, Utsab; Suzuki, Brian M; Asarnow, Daniel; Chen, Steven; Arkin, Michelle; Caffrey, Conor R; Singh, Rahul
2012-01-01
Neglected tropical diseases, especially those caused by helminths, constitute some of the most common infections of the world's poorest people. Development of techniques for automated, high-throughput drug screening against these diseases, especially in whole-organism settings, constitutes one of the great challenges of modern drug discovery. We present a method for enabling high-throughput phenotypic drug screening against diseases caused by helminths with a focus on schistosomiasis. The proposed method allows for a quantitative analysis of the systemic impact of a drug molecule on the pathogen as exhibited by the complex continuum of its phenotypic responses. This method consists of two key parts: first, biological image analysis is employed to automatically monitor and quantify shape-, appearance-, and motion-based phenotypes of the parasites. Next, we represent these phenotypes as time-series and show how to compare, cluster, and quantitatively reason about them using techniques of time-series analysis. We present results on a number of algorithmic issues pertinent to the time-series representation of phenotypes. These include results on appropriate representation of phenotypic time-series, analysis of different time-series similarity measures for comparing phenotypic responses over time, and techniques for clustering such responses by similarity. Finally, we show how these algorithmic techniques can be used for quantifying the complex continuum of phenotypic responses of parasites. An important corollary is the ability of our method to recognize and rigorously group parasites based on the variability of their phenotypic response to different drugs. The methods and results presented in this paper enable automatic and quantitative scoring of high-throughput phenotypic screens focused on helmintic diseases. Furthermore, these methods allow us to analyze and stratify parasites based on their phenotypic response to drugs. 
Together, these advancements represent a significant breakthrough for the process of drug discovery against schistosomiasis in particular and can be extended to other helminthic diseases, which together afflict a large part of humankind. PMID:22369037
Estimating trends in atmospheric water vapor and temperature time series over Germany
NASA Astrophysics Data System (ADS)
Alshawaf, Fadwa; Balidakis, Kyriakos; Dick, Galina; Heise, Stefan; Wickert, Jens
2017-08-01
Ground-based GNSS (Global Navigation Satellite System) has been used effectively since the 1990s as a meteorological observing system. Recently, scientists have used GNSS time series of precipitable water vapor (PWV) for climate research. In this work, we compare the temporal trends estimated from GNSS time series with those estimated from European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-Interim) data and meteorological measurements. We aim to evaluate climate evolution in Germany by monitoring different atmospheric variables such as temperature and PWV. PWV time series were obtained by three methods: (1) estimated from ground-based GNSS observations using the method of precise point positioning, (2) inferred from ERA-Interim reanalysis data, and (3) determined from daily in situ measurements of temperature and relative humidity. The other relevant atmospheric parameters are available from surface measurements of meteorological stations or derived from ERA-Interim. The trends are estimated using two methods: the first applies least squares to the deseasonalized time series and the second uses the Theil-Sen estimator. The trends estimated at 113 GNSS sites, with 10 to 19 years of temporal coverage, vary between -1.5 and 2.3 mm decade⁻¹ with standard deviations below 0.25 mm decade⁻¹. These results were validated by estimating the trends from ERA-Interim data over the same time windows, which show similar values. The trend values depend on the length and variability of the time series. Therefore, to give a mean value of the PWV trend over Germany, we estimated the trends using ERA-Interim data spanning from 1991 to 2016 (26 years) at 227 synoptic stations over Germany. The ERA-Interim data show positive PWV trends of 0.33 ± 0.06 mm decade⁻¹ with standard errors below 0.03 mm decade⁻¹.
The increment in PWV varies between 4.5 and 6.5% per degree Celsius rise in temperature, which is comparable to the theoretical rate from the Clausius-Clapeyron equation.
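The Theil-Sen estimator used above has a compact definition: the trend is the median of the slopes over all pairs of observations, which makes it robust to occasional outliers in a record. A minimal pure-Python sketch on a synthetic PWV-like series (illustrative numbers, not the GNSS data):

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(t, y):
    """Theil-Sen trend: the median of the slopes over all point pairs,
    robust to occasional outliers in the record."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i, j in combinations(range(len(t)), 2)]
    return median(slopes)

# Synthetic record with a 0.033 units/step trend and one gross outlier.
t = list(range(20))
y = [10.0 + 0.033 * ti for ti in t]
y[5] += 3.0  # the outlier barely moves the median slope
print(round(theil_sen_slope(t, y), 3))  # 0.033
```

An ordinary least-squares slope on the same data would be pulled noticeably by the outlier; the median of pairwise slopes is not.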
ERIC Educational Resources Information Center
Castro-Schilo, Laura; Ferrer, Emilio
2013-01-01
We illustrate the idiographic/nomothetic debate by comparing 3 approaches to using daily self-report data on affect for predicting relationship quality and breakup. The 3 approaches included (a) the first day in the series of daily data; (b) the mean and variability of the daily series; and (c) parameters from dynamic factor analysis, a…
Correlation and Stacking of Relative Paleointensity and Oxygen Isotope Data
NASA Astrophysics Data System (ADS)
Lurcock, P. C.; Channell, J. E.; Lee, D.
2012-12-01
The transformation of a depth-series into a time-series is routinely implemented in the geological sciences. This transformation often involves correlation of a depth-series to an astronomically calibrated time-series. Eyeball tie-points with linear interpolation are still regularly used, although these have the disadvantages of being non-repeatable and not based on firm correlation criteria. Two automated correlation methods are compared: the simulated annealing algorithm (Huybers and Wunsch, 2004) and the Match protocol (Lisiecki and Lisiecki, 2002). Simulated annealing seeks to minimize energy (cross-correlation) as "temperature" is slowly decreased. The Match protocol divides records into intervals, applies penalty functions that constrain accumulation rates, and minimizes the sum of the squares of the differences between two series while maintaining the data sequence in each series. Paired relative paleointensity (RPI) and oxygen isotope records, such as those from IODP Site U1308 and/or reference stacks such as LR04 and PISO, are warped using known warping functions, and then the un-warped and warped time-series are correlated to evaluate the efficiency of the correlation methods. Correlations are performed in tandem to simultaneously optimize RPI and oxygen isotope data. Noise spectra are introduced at differing levels to determine correlation efficiency as noise levels change. A third potential method, known as dynamic time warping, involves minimizing the sum of distances between correlated point pairs across the whole series. A "cost matrix" between the two series is analyzed to find a least-cost path through the matrix. This least-cost path is used to nonlinearly map the time/depth of one record onto the depth/time of another. Dynamic time warping can be expanded to more than two dimensions and used to stack multiple time-series. This procedure can improve on arithmetic stacks, which often lose coherent high-frequency content during the stacking process.
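The dynamic time warping step described above can be sketched compactly: fill a cumulative cost matrix between the two series and take the least-cost monotone path, which supplies the nonlinear depth-to-time mapping. A minimal sketch on toy series (not the RPI or isotope data):

```python
def dtw(a, b):
    """Classic dynamic time warping: fill the cumulative cost matrix and
    return the minimal total alignment cost between two series."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A series and a nonlinearly stretched copy of it align perfectly,
# because the warping path absorbs the changes in "accumulation rate".
x = [0, 1, 2, 3, 2, 1, 0]
x_warped = [0, 0, 1, 2, 2, 3, 2, 1, 1, 0]
print(dtw(x, x_warped))  # 0.0
```

Backtracking through the matrix (omitted here) recovers the least-cost path itself, i.e. the depth-to-time mapping.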
Predicting long-term catchment nutrient export: the use of nonlinear time series models
NASA Astrophysics Data System (ADS)
Valent, Peter; Howden, Nicholas J. K.; Szolgay, Jan; Komornikova, Magda
2010-05-01
After the Second World War, nitrate concentrations in European water bodies changed significantly as a result of increased nitrogen fertilizer use and changes in land use. In recent decades, however, as a consequence of the implementation of nitrate-reducing measures in Europe, nitrate concentrations in water bodies have been slowly decreasing. As a result, the mean and variance of the observed time series also change with time (nonstationarity and heteroscedasticity). In order to detect changes and properly describe the behaviour of such time series, linear models (such as autoregressive (AR), moving average (MA) and autoregressive moving average (ARMA) models) are no longer suitable. Time series with sudden changes in statistical characteristics can cause various problems in the calibration of traditional water quality models and thus give biased predictions. Proper statistical analysis of these non-stationary and heteroscedastic time series, with the aim of detecting and subsequently explaining the variations in their statistical characteristics, requires the use of nonlinear time series models. This information can then be used to improve the building and calibration of conceptual water quality models, or to select the right calibration periods in order to produce reliable predictions. The objective of this contribution is to analyze two long time series of nitrate concentrations of the rivers Ouse and Stour with advanced nonlinear statistical modelling techniques and to compare their performance with traditional linear models of the ARMA class in order to identify changes in the time series characteristics. The time series were analysed with nonlinear models with multiple regimes, represented by self-exciting threshold autoregressive (SETAR) and Markov-switching (MSW) models.
The analysis showed that, based on the value of the residual sum of squares (RSS), SETAR and MSW models described both time series better than models of the ARMA class. In most cases the relative improvement of SETAR models over first-order AR models was low, ranging between 1% and 4%, with the exception of the three-regime model for the River Stour time series, where the improvement was 48.9%. In comparison, the relative improvement of MSW models was between 44.6% and 52.5% for two-regime models and from 60.4% to 75% for three-regime models. However, visual assessment of the models plotted against the original datasets showed that, despite a higher RSS, some ARMA models could describe the analyzed time series better than AR, MA and SETAR models with lower RSS values. In both datasets MSW models provided a very good visual fit, describing most of the extreme values.
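A SETAR model of the kind fitted above switches between AR regimes according to a threshold on the lagged value. The sketch below is a minimal two-regime SETAR(1) on synthetic data (not the Ouse or Stour series); it also illustrates why raw RSS comparisons need care: splitting the data at a threshold and fitting each regime separately by least squares can never increase the RSS relative to a single AR(1) fit.

```python
import random

def simulate_setar(n, threshold, phi_low, phi_high, seed=1):
    """Two-regime SETAR(1): the AR coefficient depends on whether the
    previous value lies below or above the threshold."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n - 1):
        phi = phi_low if y[-1] <= threshold else phi_high
        y.append(phi * y[-1] + rng.gauss(0, 0.5))
    return y

def setar_rss(y, threshold):
    """Residual sum of squares of a two-regime SETAR(1) fit, with a
    least-squares slope estimated separately in each regime."""
    regimes = ([], [])
    for t in range(1, len(y)):
        regimes[0 if y[t - 1] <= threshold else 1].append((y[t - 1], y[t]))
    rss = 0.0
    for pairs in regimes:
        if not pairs:
            continue
        phi = sum(x * z for x, z in pairs) / (sum(x * x for x, z in pairs) or 1.0)
        rss += sum((z - phi * x) ** 2 for x, z in pairs)
    return rss

y = simulate_setar(500, 0.0, 0.2, 0.9)
# A two-regime fit at the true threshold cannot do worse than a single
# AR(1) fit (obtained here by pushing the threshold above all the data).
print(setar_rss(y, 0.0) <= setar_rss(y, float("inf")))  # True
```

Real SETAR fitting also estimates intercepts, the threshold and the delay, and compares models with an information criterion rather than raw RSS; all of that is omitted here.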
NASA Astrophysics Data System (ADS)
Salmon, B. P.; Kleynhans, W.; Olivier, J. C.; van den Bergh, F.; Wessels, K. J.
2018-05-01
Humans are transforming land cover at an ever-increasing rate. Accurate geographical maps of land cover, especially of rural and urban settlements, are essential for planning sustainable development. Time series extracted from MODerate resolution Imaging Spectroradiometer (MODIS) land surface reflectance products have been used to differentiate land cover classes by analyzing the seasonal patterns in reflectance values. The proper fitting of a parametric model to these time series usually requires several adjustments to the regression method. To reduce the workload, the parameters of the regression method are usually set globally for a whole geographical area. In this work we have modified a meta-optimization approach so that the parameters of the regression method are set on a per time series basis. The standard deviation of the model parameters and the magnitude of the residuals are used as the scoring function. We successfully fitted a triply modulated model to the seasonal patterns of our study area using a non-linear extended Kalman filter (EKF). The approach uses temporal information, which significantly reduces the processing time and storage requirements for each time series. It also derives reliability metrics for each time series individually. The features extracted using the proposed method are classified with a support vector machine, and the performance of the method is compared to the original approach on our ground truth data.
Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCullough, Michael; Iu, Herbert Ho-Ching; Small, Michael
2015-05-15
We investigate a generalised version of the recently proposed ordinal partition time series to network transformation algorithm. First, we introduce a fixed time lag for the elements of each partition that is selected using techniques from traditional time delay embedding. The resulting partitions define regions in the embedding phase space that are mapped to nodes in the network space. Edges are allocated between nodes based on temporal succession, thus creating a Markov chain representation of the time series. We then apply this new transformation algorithm to time series generated by the Rössler system and find that periodic dynamics translate to ring structures whereas chaotic time series translate to band or tube-like structures, thereby indicating that our algorithm generates networks whose structure is sensitive to system dynamics. Furthermore, we demonstrate that simple network measures including the mean out degree and variance of out degrees can track changes in the dynamical behaviour in a manner comparable to the largest Lyapunov exponent. We also apply the same analysis to experimental time series generated by a diode resonator circuit and show that the network size, mean shortest path length, and network diameter are highly sensitive to the interior crisis captured in this particular data set.
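The core transformation described above is easy to sketch: each window of d values spaced by a lag tau is reduced to its rank-order (ordinal) pattern, patterns become nodes, and temporal succession between consecutive windows gives the directed edges. A minimal sketch, not the authors' implementation:

```python
def ordinal_network(y, d=3, tau=1):
    """Map a time series to a directed network: each window of d values
    spaced tau apart becomes a node labelled by its rank-order (ordinal)
    pattern, and successive windows are linked by an edge, giving a
    Markov-chain view of the dynamics."""
    patterns = []
    for t in range(len(y) - (d - 1) * tau):
        window = [y[t + k * tau] for k in range(d)]
        patterns.append(tuple(sorted(range(d), key=lambda k: window[k])))
    nodes = set(patterns)
    edges = set(zip(patterns, patterns[1:]))
    return nodes, edges

# A monotone ramp visits a single ordinal pattern (one node, one self-loop),
# while irregular dynamics spread over more of the d! possible patterns.
nodes, edges = ordinal_network(list(range(50)))
print(len(nodes), len(edges))  # 1 1
```

Network measures such as the mean out-degree mentioned above can then be computed directly from the edge set.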
NASA Astrophysics Data System (ADS)
Zingone, Adriana; Harrison, Paul J.; Kraberg, Alexandra; Lehtinen, Sirpa; McQuatters-Gollop, Abigail; O'Brien, Todd; Sun, Jun; Jakobsen, Hans H.
2015-09-01
Phytoplankton diversity and its variation over an extended time scale can provide answers to a wide range of questions relevant to societal needs. These include human health, the safe and sustained use of marine resources and the ecological status of the marine environment, including long-term changes under the impact of multiple stressors. The analysis of phytoplankton data collected at the same place over time, as well as the comparison among different sampling sites, provides key information for assessing environmental change and evaluating new actions that must be taken to reduce human-induced pressures on the environment. To achieve these aims, phytoplankton data may be used several decades later, including through automatic data retrieval and analysis, by users who have not participated in their production. The methods used in phytoplankton species analysis vary widely among research and monitoring groups, while quality control procedures have not been implemented in most cases. Here we highlight some of the main differences in the sampling and analytical procedures applied to phytoplankton analysis and identify critical steps that are required to improve the quality and inter-comparability of data obtained at different sites and/or times. Harmonization of methods may not be a realistic goal, considering the wide range of purposes of phytoplankton time-series data collection. However, we propose that more consistent and detailed metadata and complementary information be recorded and made available along with phytoplankton time-series datasets, including a description of the procedures and elements allowing for quality control of the data. To keep up with the progress in taxonomic research, there is a need for continued training of taxonomists, and for supporting and complementing existing web resources, in order to allow a constant upgrade of knowledge in phytoplankton classification and identification.
Efforts towards the improvement of metadata recording, data annotation and quality control procedures will ensure the internal consistency of phytoplankton time series and facilitate their comparability and accessibility, thus strongly increasing the value of the precious information they provide. Ultimately, the sharing of quality controlled data will allow one to recoup the high cost of obtaining the data through the multiple use of the time-series data in various projects over many decades.
Graphic analysis and multifractal on percolation-based return interval series
NASA Astrophysics Data System (ADS)
Pei, A. Q.; Wang, J.
2015-05-01
A financial time series model is developed and investigated using the oriented percolation system (one of the statistical physics systems). The nonlinear and statistical behaviors of the return interval time series are studied for the proposed model and the real stock market by applying the visibility graph (VG) and multifractal detrended fluctuation analysis (MF-DFA). We investigate the fluctuation behaviors of return intervals of the model for different parameter settings, and also comparatively study these fluctuation patterns with those of the real financial data for different threshold values. The empirical research of this work exhibits the multifractal features of the corresponding financial time series. Further, the VGs derived from both the simulated data and the real data exhibit small-world and hierarchical behavior, high clustering, and power-law tails in the degree distributions.
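The visibility graph used above has a compact definition: two observations are linked when the straight line between them passes above every intermediate observation. A minimal O(n²) sketch on toy data (not the return-interval series studied above):

```python
def visibility_graph(y):
    """Natural visibility graph: nodes are time points; i and j are linked
    if the straight line between (i, y[i]) and (j, y[j]) clears every
    intermediate point."""
    n = len(y)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(y[k] < y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

# A strictly convex (bowl-shaped) series: every point sees every other,
# so all 7 * 6 / 2 = 21 pairs are connected.
y = [(t - 3) ** 2 for t in range(7)]  # 9, 4, 1, 0, 1, 4, 9
print(len(visibility_graph(y)))  # 21
```

Degree distributions, clustering and path lengths of the resulting graph are then computed with standard network tools.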
Modeling BAS Dysregulation in Bipolar Disorder.
Hamaker, Ellen L; Grasman, Raoul P P P; Kamphuis, Jan Henk
2016-08-01
Time series analysis is a technique that can be used to analyze the data from a single subject and has great potential to investigate clinically relevant processes like affect regulation. This article uses time series models to investigate the assumed dysregulation of affect that is associated with bipolar disorder. By formulating a number of alternative models that capture different kinds of theoretically predicted dysregulation, and by comparing these in both bipolar patients and controls, we aim to illustrate the heuristic potential this method of analysis has for clinical psychology. We argue that, not only can time series analysis elucidate specific maladaptive dynamics associated with psychopathology, it may also be clinically applied in symptom monitoring and the evaluation of therapeutic interventions.
CauseMap: fast inference of causality from complex time series.
Maher, M Cyrus; Hernandez, Ryan D
2015-01-01
Background. Establishing health-related causal relationships is a central pursuit in biomedical research. Yet, the interdependent non-linearity of biological systems renders causal dynamics laborious and at times impractical to disentangle. This pursuit is further impeded by the dearth of time series that are sufficiently long to observe and understand recurrent patterns of flux. However, as data generation costs plummet and technologies like wearable devices democratize data collection, we anticipate a coming surge in the availability of biomedically-relevant time series data. Given the life-saving potential of these burgeoning resources, it is critical to invest in the development of open source software tools that are capable of drawing meaningful insight from vast amounts of time series data. Results. Here we present CauseMap, the first open source implementation of convergent cross mapping (CCM), a method for establishing causality from long time series data (≳25 observations). Compared to existing time series methods, CCM has the advantage of being model-free and robust to unmeasured confounding that could otherwise induce spurious associations. CCM builds on Takens' Theorem, a well-established result from dynamical systems theory that requires only mild assumptions. This theorem allows us to reconstruct high dimensional system dynamics using a time series of only a single variable. These reconstructions can be thought of as shadows of the true causal system. If reconstructed shadows can predict points from opposing time series, we can infer that the corresponding variables are providing views of the same causal system, and so are causally related. Unlike traditional metrics, this test can establish the directionality of causation, even in the presence of feedback loops. Furthermore, since CCM can extract causal relationships from time series of, e.g., a single individual, it may be a valuable tool for personalized medicine.
We implement CCM in Julia, a high-performance programming language designed for facile technical computing. Our software package, CauseMap, is platform-independent and freely available as an official Julia package. Conclusions. CauseMap is an efficient implementation of a state-of-the-art algorithm for detecting causality from time series data. We believe this tool will be a valuable resource for biomedical research and personalized medicine.
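The mechanics of cross mapping can be sketched in a few lines: delay-embed one series (Takens' theorem), then predict the other series from nearest neighbours in that shadow manifold and score the prediction. This toy version is not CauseMap: it uses a single nearest neighbour rather than the usual simplex of E+1 neighbours, skips the convergence test over library sizes, and runs on synthetic coupled logistic maps rather than real data.

```python
import math

def embed(x, E=2, tau=1):
    """Takens delay embedding: reconstruct state vectors from one variable."""
    return [tuple(x[t - k * tau] for k in range(E))
            for t in range((E - 1) * tau, len(x))]

def cross_map_skill(x, y, E=2, tau=1):
    """Toy CCM step: predict y from the nearest neighbour in the shadow
    manifold of x, and score the prediction with a correlation coefficient."""
    M = embed(x, E, tau)
    offset = (E - 1) * tau
    preds, truth = [], []
    for i, v in enumerate(M):
        j = min((k for k in range(len(M)) if k != i),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(M[k], v)))
        preds.append(y[offset + j])
        truth.append(y[offset + i])
    mp, mt = sum(preds) / len(preds), sum(truth) / len(truth)
    cov = sum((p - mp) * (q - mt) for p, q in zip(preds, truth))
    sp = math.sqrt(sum((p - mp) ** 2 for p in preds))
    st = math.sqrt(sum((q - mt) ** 2 for q in truth))
    return cov / (sp * st)

# x drives y, so the shadow manifold of y should carry information about x.
x, y = [0.4], [0.2]
for _ in range(300):
    x.append(3.8 * x[-1] * (1 - x[-1]))
    y.append(3.5 * y[-1] * (1 - y[-1]) - 0.1 * x[-2] * y[-1])
print(round(cross_map_skill(y, x), 2))
```

Note the direction: to test whether x drives y, CCM cross-maps from the *driven* variable's manifold back to the driver.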
Refined composite multiscale weighted-permutation entropy of financial time series
NASA Astrophysics Data System (ADS)
Zhang, Yongping; Shang, Pengjian
2018-04-01
For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimated values fluctuate strongly under slight variation of the data locations and distinguish series significantly only by their length. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing the RCMWPE results with those of other methods on both synthetic data and financial time series shows that RCMWPE not only inherits the advantages of MWPE but is also less sensitive to the data locations, more stable, and much less dependent on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on the daily price return series from Asian and European stock markets. There are significant differences between Asian markets and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of European markets. The reliability of the proposed RCMWPE method is supported by simulations on generated and real data. It could be applied to a variety of fields to quantify the complexity of systems over multiple scales more accurately.
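The weighted-permutation-entropy building block that MWPE and RCMWPE share is straightforward: ordinal patterns are tallied as in permutation entropy, but each window contributes its variance as a weight, so amplitude information is retained. A minimal single-scale sketch (the multiscale coarse-graining and the refined composite averaging are omitted):

```python
from math import log, factorial

def weighted_permutation_entropy(y, d=3):
    """Single-scale weighted permutation entropy, normalised to [0, 1]:
    ordinal patterns of dimension d, weighted by each window's variance."""
    weights, total = {}, 0.0
    for t in range(len(y) - d + 1):
        w = y[t:t + d]
        pattern = tuple(sorted(range(d), key=lambda k: w[k]))
        mean = sum(w) / d
        var = sum((v - mean) ** 2 for v in w) / d
        weights[pattern] = weights.get(pattern, 0.0) + var
        total += var
    if total == 0.0:
        return 0.0
    probs = [wt / total for wt in weights.values() if wt > 0.0]
    return -sum(p * log(p) for p in probs) / log(factorial(d))

ramp = weighted_permutation_entropy(list(range(50)))  # one pattern: entropy 0
zigzag = weighted_permutation_entropy([0, 1] * 25)    # two patterns: between 0 and 1
print(ramp == 0.0, 0.0 < zigzag < 1.0)  # True True
```

The refined composite variant averages pattern-weight distributions over the shifted coarse-grained series at each scale before taking the entropy, which is what reduces the fluctuation the abstract describes.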
A 15-Year Time-series Study of Tooth Extraction in Brazil
Cunha, Maria Aparecida Goncalves de Melo; Lino, Patrícia Azevedo; dos Santos, Thiago Rezende; Vasconcelos, Mara; Lucas, Simone Dutra; de Abreu, Mauro Henrique Nogueira Guimarães
2015-01-01
Tooth loss is considered to be a public health problem. Time-series studies that assess the influence of social conditions and access to health services on tooth loss are scarce. This study aimed to examine the time series of permanent tooth extraction in Brazil between 1998 and 2012 and to compare these series in municipalities with different Human Development Index (HDI) scores and with different access to primary and secondary care. The time-series study was performed between 1998 and 2012, using data from the Brazilian National Health Information System. Two annual rates of tooth extraction were calculated and evaluated separately according to 3 parameters: the HDI, the presence of a Dental Specialty Center, and coverage by Oral Health Teams. The time series was analyzed using a linear regression model. An overall decrease in tooth-loss tendencies during this period was observed, particularly in the tooth-extraction rate during primary care procedures. In the municipalities with an HDI lower than the median, the average tooth-loss rates were higher than in the municipalities with a higher HDI. The municipalities with lower rates of Oral Health Team coverage also showed lower extraction rates than the municipalities with higher coverage rates. In general, Brazil has shown a decrease in the trend to extract permanent teeth during these 15 years. Increased human development and access to dental services have influenced tooth-extraction rates. PMID:26632688
Wavelet analysis in ecology and epidemiology: impact of statistical tests
Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario
2014-01-01
Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892
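A red-noise null model of the kind criticised above is typically built by fitting an AR(1) process to the observed series and simulating from it. A minimal sketch of such a surrogate generator, useful as the baseline against which data-driven resampling schemes can be compared:

```python
import random
from statistics import mean

def ar1_surrogate(y, rng):
    """Red-noise surrogate: simulate an AR(1) process matching the series'
    mean, variance and lag-1 autocorrelation (a common, and as argued
    above often inadequate, null model)."""
    m = mean(y)
    z = [v - m for v in y]
    var = sum(v * v for v in z) / len(z)
    phi = sum(a * b for a, b in zip(z, z[1:])) / sum(v * v for v in z)
    sigma = (var * max(0.0, 1 - phi ** 2)) ** 0.5
    s = [z[0]]
    for _ in range(len(y) - 1):
        s.append(phi * s[-1] + rng.gauss(0, sigma))
    return [v + m for v in s]

rng = random.Random(42)
white = [rng.gauss(0, 1) for _ in range(200)]
red = [white[0]]
for w in white[1:]:                  # redden: strong lag-1 autocorrelation
    red.append(0.7 * red[-1] + w)
surrogate = ar1_surrogate(red, rng)
print(len(surrogate))  # 200
```

In a wavelet significance test, many such surrogates would be generated and the observed wavelet quantity compared against their distribution; the point of the study above is that this choice of null model materially affects the outcome.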
Autoregressive modeling for the spectral analysis of oceanographic data
NASA Technical Reports Server (NTRS)
Gangopadhyay, Avijit; Cornillon, Peter; Jackson, Leland B.
1989-01-01
Over the last decade there has been a dramatic increase in the number and volume of data sets useful for oceanographic studies. Many of these data sets consist of long temporal or spatial series derived from satellites and large-scale oceanographic experiments. These data sets are, however, often 'gappy' in space, irregular in time, and always of finite length. The conventional Fourier transform (FT) approach to spectral analysis is thus often inapplicable, or where applicable, it provides questionable results. Here, through comparative analysis with the FT for different oceanographic data sets, the possibilities offered by autoregressive (AR) modeling for performing spectral analysis of gappy, finite-length series are discussed. The applications demonstrate that as the length of the time series becomes shorter, the resolving power of the AR approach as compared with that of the FT improves. For the longest data sets examined here, 98 points, the AR method performed only slightly better than the FT, but for the very short ones, 17 points, the AR method showed a dramatic improvement over the FT. The application of the AR method to a gappy time series, although a secondary concern of this manuscript, further underlines the value of this approach.
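A parametric AR spectrum of the kind advocated above can be sketched by solving the Yule-Walker equations for a low-order model and evaluating the model's transfer function; unlike the raw periodogram, its resolution does not collapse for very short records. A minimal AR(2) sketch on a synthetic 17-point series (the order, data and noise level are illustrative, not the oceanographic records):

```python
import math, random

def acov(y, k):
    """Biased sample autocovariance at lag k."""
    m = sum(y) / len(y)
    return sum((y[t] - m) * (y[t + k] - m) for t in range(len(y) - k)) / len(y)

def ar2_spectrum(y, freqs):
    """Fit AR(2) by the Yule-Walker equations and evaluate the model's
    power spectrum at the given frequencies (cycles per sample)."""
    r0, r1, r2 = acov(y, 0), acov(y, 1), acov(y, 2)
    det = r0 * r0 - r1 * r1            # solve [[r0, r1], [r1, r0]] a = [r1, r2]
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    s2 = r0 - a1 * r1 - a2 * r2        # innovation variance
    spec = []
    for f in freqs:
        z = complex(math.cos(2 * math.pi * f), -math.sin(2 * math.pi * f))
        spec.append(s2 / abs(1 - a1 * z - a2 * z * z) ** 2)
    return spec

# A 17-point noisy oscillation near f = 0.1 cycles/sample: far too short for
# a sharp periodogram, yet the AR(2) spectrum still peaks near f = 0.1.
rng = random.Random(3)
y = [math.sin(2 * math.pi * 0.1 * t) + 0.2 * rng.gauss(0, 1) for t in range(17)]
freqs = [i / 200 for i in range(1, 100)]
spec = ar2_spectrum(y, freqs)
peak = freqs[spec.index(max(spec))]
print(round(peak, 2))
```

Higher model orders need the full Levinson-Durbin recursion rather than this 2x2 solve, and handling the gaps the abstract mentions requires estimating the autocovariances from the available pairs only.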
Time Series Forecasting of the Number of Malaysia Airlines and AirAsia Passengers
NASA Astrophysics Data System (ADS)
Asrah, N. M.; Nor, M. E.; Rahim, S. N. A.; Leng, W. K.
2018-04-01
The standard practice in forecasting involves fitting a model and then analysing the residuals. If the distributional behaviour of the time series data is known, it can directly inform model identification, parameter estimation, and model checking. In this paper, we compare the distributional behaviour of data on the numbers of Malaysia Airlines (MAS) and AirAsia passengers. Previous research found that AirAsia passenger numbers are governed by geometric Brownian motion (GBM): the data were normally distributed, stationary and independent. GBM was therefore used to forecast the number of AirAsia passengers. The same methods were applied to the MAS data and the results were then compared. The MAS data, however, were not governed by GBM, so the standard time series forecasting approach was applied to them instead. From this comparison, we conclude that AirAsia passenger numbers remain at peak-season levels more consistently than MAS passenger numbers.
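For readers unfamiliar with the GBM approach referenced above: drift and volatility are estimated from the log returns of the series, and forecasts come from the resulting stochastic model. A minimal Monte-Carlo sketch with made-up passenger counts (hypothetical numbers, not the MAS or AirAsia data):

```python
import math, random

def gbm_forecast(series, horizon, n_paths=2000, seed=7):
    """Estimate drift and volatility from log returns, then forecast
    `horizon` steps ahead by averaging simulated GBM paths."""
    rets = [math.log(b / a) for a, b in zip(series, series[1:])]
    mu = sum(rets) / len(rets)
    sigma = math.sqrt(sum((r - mu) ** 2 for r in rets) / (len(rets) - 1))
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x = math.log(series[-1])          # simulate in log space
        for _ in range(horizon):
            x += mu + sigma * rng.gauss(0, 1)
        total += math.exp(x)
    return total / n_paths

# Hypothetical monthly passenger counts (thousands), drifting upward.
passengers = [1500, 1560, 1580, 1650, 1700, 1660, 1720, 1790, 1810, 1880]
forecast = gbm_forecast(passengers, horizon=3)
print(forecast > passengers[-1])  # True: the estimated drift is positive
```

The normality, stationarity and independence checks the abstract mentions are exactly the tests that justify (or, for the MAS data, rule out) this model.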
Linden, Ariel
2018-04-01
Interrupted time series analysis (ITSA) is an evaluation methodology in which a single treatment unit's outcome is studied over time and the intervention is expected to "interrupt" the level and/or trend of the outcome. The internal validity is strengthened considerably when the treated unit is contrasted with a comparable control group. In this paper, we introduce a robust evaluation framework that combines the synthetic controls method (SYNTH) to generate a comparable control group and ITSA regression to assess covariate balance and estimate treatment effects. We evaluate the effect of California's Proposition 99 for reducing cigarette sales, by comparing California to other states not exposed to smoking reduction initiatives. SYNTH is used to reweight nontreated units to make them comparable to the treated unit. These weights are then used in ITSA regression models to assess covariate balance and estimate treatment effects. Covariate balance was achieved for all but one covariate. While California experienced a significant decrease in the annual trend of cigarette sales after Proposition 99, there was no statistically significant treatment effect when compared to synthetic controls. The advantage of using this framework over regression alone is that it ensures that a comparable control group is generated. Additionally, it offers a common set of statistical measures familiar to investigators, the capability for assessing covariate balance, and enhancement of the evaluation with a comprehensive set of postestimation measures. Therefore, this robust framework should be considered as a primary approach for evaluating treatment effects in multiple group time series analysis. © 2018 John Wiley & Sons, Ltd.
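The single-group ITSA component of the framework above can be illustrated with segmented regression: fit trend lines before and after the interruption and read off the level and trend changes. A toy sketch on synthetic data (California's actual Proposition 99 series is not reproduced, and the synthetic-control reweighting step is omitted):

```python
def ols_line(t, y):
    """Least-squares line; returns (intercept, slope)."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    slope = (sum((a - mt) * (b - my) for a, b in zip(t, y))
             / sum((a - mt) ** 2 for a in t))
    return my - slope * mt, slope

def itsa_effects(t, y, t0):
    """Single-group interrupted time series: separate trend lines before and
    after the interruption at t0; returns (level change, trend change)."""
    pre = [(a, b) for a, b in zip(t, y) if a < t0]
    post = [(a, b) for a, b in zip(t, y) if a >= t0]
    i0, s0 = ols_line(*zip(*pre))
    i1, s1 = ols_line(*zip(*post))
    return (i1 + s1 * t0) - (i0 + s0 * t0), s1 - s0

# Synthetic sales-like series: decline of 1/yr, then a 5-unit drop in level
# and a steeper decline of 2/yr after an intervention at t = 10.
t = list(range(20))
y = [100 - 1.0 * ti for ti in t[:10]] + [85 - 2.0 * (ti - 10) for ti in t[10:]]
level, trend = itsa_effects(t, y, 10)
print(round(level, 1), round(trend, 1))  # -5.0 -1.0
```

The framework in the paper goes further: the same regression is run with synthetic-control weights on the donor states, which is what turns this descriptive fit into a comparative effect estimate.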
Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting
Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut
2016-01-01
Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network. PMID:27959927
Design and fabrication of two kind of SOI-based EA-type VOAs
NASA Astrophysics Data System (ADS)
Yuan, Pei; Wang, Yue; Wu, Yuanda; An, Junming; Hu, Xiongwei
2018-06-01
SOI-based variable optical attenuators (VOAs) based on the electro-absorption mechanism are demonstrated in this paper. Two different doping structures are adopted to realize the attenuation: a structure with a single lateral p-i-n diode and a structure with several lateral p-i-n diodes connected in series. The VOA with lateral p-i-n diodes connected in series (series VOA) greatly improves the attenuation efficiency compared to the VOA with a single lateral p-i-n diode (single VOA), as verified by the experimental results: the attenuation efficiencies of the series VOA and the single VOA are 3.76 dB/mA and 0.189 dB/mA, respectively. The corresponding power consumption at 20 dB attenuation is 202 mW (series VOA) and 424 mW (single VOA). The rise time is 34.5 ns (single VOA) and 45.5 ns (series VOA), and the fall time is 37 ns (single VOA) and 48.5 ns (series VOA).
Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances
NASA Astrophysics Data System (ADS)
Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.
2018-04-01
Outliers often lurk in many datasets, especially in real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation. Hence, handling outliers requires special attention, and it is important to determine suitable ways of treating them so as to ensure that the quality of the analyzed data is high. This paper discusses an alternative treatment of outliers via the linear interpolation method: treating an outlier as a missing value in the dataset allows the interpolation method to fill it in, enabling a comparison of forecast accuracy for the series before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to build the interpolated series. The results indicated that the linear interpolation method, which produced an improved time series, displayed better forecasting results than the original time series for both Box-Jenkins and neural network approaches.
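The treat-as-missing-then-interpolate idea can be sketched in a few lines. The IQR-based detector below is illustrative; the abstract does not specify which outlier rule was used:

```python
import numpy as np

def treat_outliers_linear(y, k=1.5):
    """Flag outliers with an IQR fence rule, treat them as missing
    values, and fill them by linear interpolation from the rest."""
    y = np.asarray(y, dtype=float)
    q1, q3 = np.percentile(y, [25, 75])
    iqr = q3 - q1
    bad = (y < q1 - k * iqr) | (y > q3 + k * iqr)
    idx = np.arange(len(y))
    out = y.copy()
    out[bad] = np.interp(idx[bad], idx[~bad], y[~bad])
    return out, bad

series = np.array([10.0, 11.0, 12.0, 200.0, 14.0, 15.0, 16.0])
clean, flagged = treat_outliers_linear(series)
```

Forecast accuracy can then be compared between `series` and `clean`, mirroring the before/after comparison in the study.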
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
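The lookup step of such an approach, matching each offshore observation to the nearest archived scenario and returning its precomputed inshore output, can be sketched as follows. Names, toy numbers, and the plain nearest-neighbor rule are illustrative stand-ins for the authors' probabilistic machinery:

```python
import numpy as np

def inshore_from_scenarios(offshore_obs, scenario_params, scenario_output):
    """For each offshore observation (height, period, direction), return
    the archived inshore model output of the nearest stored scenario."""
    d = np.linalg.norm(scenario_params[None, :, :] - offshore_obs[:, None, :],
                       axis=2)
    return scenario_output[np.argmin(d, axis=1)]

# Hypothetical archive: 3 offshore scenarios and their precomputed
# inshore significant wave heights
scenario_params = np.array([[1.0, 8.0, 0.0],
                            [2.0, 10.0, 45.0],
                            [3.0, 12.0, 90.0]])
scenario_output = np.array([0.5, 1.1, 1.8])
offshore_obs = np.array([[1.1, 8.2, 5.0],
                         [2.9, 11.5, 85.0]])
inshore = inshore_from_scenarios(offshore_obs, scenario_params, scenario_output)
```

In practice the (height, period, direction) components would be normalized before computing distances, and the archive would carry uncertainty estimates alongside each stored output.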
Annual, Seasonal, and Secular Changes in Time-Variable Gravity from GRACE
NASA Astrophysics Data System (ADS)
Lemoine, F. G.; Luthcke, S. B.; Klosko, S. M.; Rowlands, D. D.; Chinn, D. S.; McCarthy, J. J.; Ray, R. D.; Boy, J.
2007-12-01
The NASA/DLR GRACE mission, launched in 2002, has now operated for more than five years, producing monthly and ten-day snapshots of the variations of the gravity field of the Earth. The available solutions, either from spherical harmonics or from mascons, allow us new insights into the variations of surface gravity on the Earth at annual, inter-annual, and secular time scales. Our baseline time series, based on GGM02C, NCEP Atmospheric Gravity with IB, and GOT00 tides now is extended to July 2007, spanning four+ years, and we analyze both mascon and spherical harmonic solutions from this time series with respect to global hydrology variations. Our 4degx4deg mascon solutions are extended to cover all continental regions of the globe. Comparisons with hydrology (land-surface) models can offer insights into how these models might be improved. We compare our baseline time series, with new time series that include an updated Goddard Ocean Tide (GOT) model, ECMWF- 3hr atmosphere de-aliasing data, and the MOG-2D ocean dealiasing product. Finally, we intercompare the spherical harmonic solutions at low degree from GRACE from the various product centers (e.g., GFZ, CSR, GRGS), and look for secular signals in both the GSFC mascon and spherical harmonic solutions, taking care to compare the results for secular gravity field change with independent solutions developed over 25 years of independent tracking to geodetic satellites by Satellite Laser Ranging (SLR) and DORIS.
Examining deterrence of adult sex crimes: A semi-parametric intervention time series approach.
Park, Jin-Hong; Bandyopadhyay, Dipankar; Letourneau, Elizabeth
2014-01-01
Motivated by recent developments in dimension reduction (DR) techniques for time series data, the general deterrent effect of South Carolina (SC)'s sex offender registration and notification (SORN) policy for preventing sex crimes was examined. Using adult sex crime arrestee data from 1990 to 2005, the idea of the Central Mean Subspace (CMS) is extended to intervention time series analysis (CMS-ITS) to model the sequential intervention effects of 1995 (the year SC's SORN policy was initially implemented) and 1999 (the year the policy was revised to include online notification) on the time series spectrum. The CMS-ITS model estimation was achieved via kernel smoothing techniques and compared to interrupted autoregressive integrated moving average (ARIMA) models. Simulation studies and application to the real data underscore our model's ability to achieve parsimony and to detect intervention effects not determined earlier via traditional ARIMA models. From a public health perspective, findings from this study draw attention to the potential general deterrent effects of SC's SORN policy. These findings are considered in light of the overall body of research on sex crime arrestee registration and notification policies, which remain controversial.
Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory
Yang, Haimin; Pan, Zhisong; Tao, Qing
2017-01-01
Online time series prediction is the mainstream method in a wide range of fields, ranging from speech analysis and noise cancelation to stock market analysis. However, the data often contains many outliers with the increasing length of time series in real world. These outliers can mislead the learned model if treated as normal points in the process of prediction. To address this issue, in this paper, we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. This method tunes the learning rate of the stochastic gradient algorithm adaptively in the process of prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average through modifying Adam, a popular stochastic gradient method algorithm for training deep neural networks. In our algorithm, the large value of the relative prediction error corresponds to a small learning rate, and vice versa. The experiments on both synthetic data and real time series show that our method achieves better performance compared to the existing methods based on LSTM. PMID:29391864
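The central idea, a large relative prediction error implies a small learning rate and vice versa, can be illustrated with plain SGD on a toy streaming-mean problem. This is a schematic sketch of the damping principle only, not the paper's actual Adam modification:

```python
def fit_mean(stream, base_lr=0.4, damp=False):
    """Online SGD on squared error for a running location estimate.
    With damp=True, each step size is divided by the clipped ratio of
    current to previous loss, so a point whose loss spikes (an outlier)
    receives a much smaller update."""
    theta, loss_prev = 0.0, 1.0
    for x in stream:
        loss = (theta - x) ** 2
        ratio = min(max(loss / (loss_prev + 1e-12), 1.0), 100.0) if damp else 1.0
        theta -= (base_lr / ratio) * 2.0 * (theta - x)
        loss_prev = loss
    return theta

stream = [1.0] * 5 + [100.0] + [1.0] * 5   # clean level 1.0 with one outlier
plain = fit_mean(stream)
robust = fit_mean(stream, damp=True)
```

The damped estimate recovers to the clean level far faster after the outlier; RoAdam applies the same principle inside Adam's adaptive learning rate while training an LSTM.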
Multiscale entropy-based methods for heart rate variability complexity analysis
NASA Astrophysics Data System (ADS)
Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio
2015-03-01
Physiologic complexity is an important concept for characterizing time series from biological systems, and combined with multiscale analysis it can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by the generation of surrogate data, i.e. SDiffqmax, qmax, and qzero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the areas under the curves were computed for three physiological situations. Heart rate variability (HRV) time series in normal sinus rhythm, atrial fibrillation, and a congestive heart failure data set were analyzed. Results show that the proposed complexity metrics are accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate for lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here shows potential to assess complex physiological time series and deserves further investigation in a wide context.
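Multiscale analyses of this kind rest on coarse-graining plus an entropy estimate per scale. A minimal sketch using classic sample entropy follows; the paper's nonadditive-entropy metrics (SDiffqmax, qmax, qzero) are not reproduced here:

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping window averages: the coarse-grained series
    used at each scale of a multiscale analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r): negative log of the ratio of (m+1)-point to
    m-point template matches within tolerance r, self-matches excluded."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def matches(mm):
        templ = np.array([x[i : i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return ((d <= r).sum() - len(templ)) / 2.0  # drop self-matches
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(0)
noise = rng.standard_normal(400)                      # irregular signal
regular = np.sin(np.linspace(0.0, 40.0 * np.pi, 400))  # highly regular signal
```

Applying `sample_entropy` to `coarse_grain(x, s)` for s = 1, 2, 3, ... yields the usual multiscale entropy curve; the variants studied in the paper swap in a different entropy estimator per scale.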
High Speed Solution of Spacecraft Trajectory Problems Using Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2008-01-01
Taylor series integration is implemented in a spacecraft trajectory analysis code, the Spacecraft N-body Analysis Program (SNAP), and compared with the code's existing eighth-order Runge-Kutta-Fehlberg time integration scheme. Nine trajectory problems, including near-Earth, lunar, Mars, and Europa missions, are analyzed. Head-to-head comparison at five different error tolerances shows that, on average, Taylor series integration is faster than Runge-Kutta-Fehlberg by a factor of 15.8. Results further show that Taylor series integration has superior convergence properties, proving that it can provide rapid, highly accurate solutions to spacecraft trajectory problems.
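For an ODE whose higher derivatives follow a simple recurrence, a Taylor series step is just a truncated power series in the step size. A toy sketch for y' = λy (SNAP's implementation for the full N-body equations is of course far more elaborate):

```python
import math

def taylor_step(y, lam, h, order=15):
    """One Taylor-series step for y' = lam * y.  The derivative
    recurrence y^(n+1) = lam * y^(n) makes each term cheap:
    term_n = h^n * y^(n) / n!."""
    term, total = y, y
    for n in range(1, order + 1):
        term *= lam * h / n
        total += term
    return total

# Integrate y' = -y, y(0) = 1 over [0, 1] in ten steps and compare
# against the exact solution exp(-1)
y, h = 1.0, 0.1
for _ in range(10):
    y = taylor_step(y, -1.0, h)
err = abs(y - math.exp(-1.0))
```

With a high series order the per-step truncation error (here on the order of h^16/16!) is tiny, which is why Taylor methods can take large, highly accurate steps.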
Altiparmak, Fatih; Ferhatosmanoglu, Hakan; Erdal, Selnur; Trost, Donald C
2006-04-01
An effective analysis of clinical trials data involves analyzing different types of data, such as heterogeneous and high-dimensional time series data. Current time series analysis methods generally assume that the series at hand are long enough for statistical techniques to apply. Other ideal-case assumptions are that data are collected at equal-length intervals and that, when comparing time series, the lengths are equal to each other. However, these assumptions are not valid for many real data sets, especially clinical trials data sets. In addition, the data sources differ from each other, the data are heterogeneous, and the sensitivity of the experiments varies by source. Approaches for mining time series data need to be revisited with this wide range of requirements in mind. In this paper, we propose a novel approach for information mining that involves two major steps: applying a data mining algorithm over homogeneous subsets of data, and identifying common or distinct patterns over the information gathered in the first step. Our approach is implemented specifically for heterogeneous and high-dimensional time series clinical trials data. Using this framework, we propose a new way of utilizing frequent itemset mining, as well as clustering and declustering techniques with novel distance metrics for measuring similarity between time series data. By clustering the data, we find groups of analytes (substances in blood) that are most strongly correlated. Most of these relationships are already known and are verified by the clinical panels, and, in addition, we identify novel groups that need further biomedical analysis. A slight modification to our algorithm results in an effective declustering of high-dimensional time series data, which is then used for "feature selection." Using industry-sponsored clinical trials data sets, we are able to identify a small set of analytes that effectively models the state of normal health.
ERIC Educational Resources Information Center
Huitema, Bradley E.; McKean, Joseph W.
2007-01-01
Regression models used in the analysis of interrupted time-series designs assume statistically independent errors. Four methods of evaluating this assumption are the Durbin-Watson (D-W), Huitema-McKean (H-M), Box-Pierce (B-P), and Ljung-Box (L-B) tests. These tests were compared with respect to Type I error and power under a wide variety of error…
Streamflow Prediction based on Chaos Theory
NASA Astrophysics Data System (ADS)
Li, X.; Wang, X.; Babovic, V. M.
2015-12-01
Chaos theory is a popular approach to hydrologic time series prediction. The local model (LM) based on this theory uses time-delay embedding to reconstruct the phase-space diagram, and its efficacy depends on the embedding parameters, i.e. the embedding dimension, time lag, and number of nearest neighbors; optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately, which may lead to locally optimal choices and thus limit prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find globally optimal embedding parameters, and compares it with another global optimization approach based on a Genetic Algorithm (GA). The proposed hybrid methods are examined on daily and monthly streamflow time series. The results show that global optimization helps the local model provide more accurate predictions than local optimization, and that the LM combined with SA has an additional advantage in computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
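The local model's two ingredients, time-delay embedding and nearest-neighbor evolution, can be sketched as below. This is a zeroth-order local predictor with fixed parameters; the SA/GA optimization of the embedding dimension m, lag τ, and neighbor count k is omitted:

```python
import numpy as np

def embed(x, m, tau):
    """Delay-vector matrix: row t is [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def local_predict(x, m, tau, k):
    """Zeroth-order local model: average the one-step evolution of the
    k delay vectors nearest to the current state."""
    x = np.asarray(x, dtype=float)
    V = embed(x, m, tau)
    target, history = V[-1], V[:-1]          # earlier states have known successors
    dist = np.linalg.norm(history - target, axis=1)
    nn = np.argsort(dist)[:k]
    return x[nn + (m - 1) * tau + 1].mean()  # successor of each neighbor

# Deterministic toy dynamics: a repeating 0, 1, 2, 3 cycle
x = np.tile([0.0, 1.0, 2.0, 3.0], 10)
pred = local_predict(x, m=2, tau=1, k=1)     # current state [2, 3] -> 0.0
```

On real streamflow data the prediction quality hinges on (m, τ, k), which is exactly what the paper optimizes globally with SA.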
Brigode, Pierre; Brissette, Francois; Nicault, Antoine; ...
2016-09-06
Over the last decades, different methods have been used by hydrologists to extend observed hydro-climatic time series, based on other data sources such as tree rings or sedimentological datasets. For example, tree ring multi-proxies have been studied for the Caniapiscau Reservoir in northern Québec (Canada), leading to the reconstruction of flow time series for the last 150 years. In this paper, we applied a new hydro-climatic reconstruction method on the Caniapiscau Reservoir, compared the obtained streamflow time series against time series derived from dendrohydrology by other authors on the same catchment, and studied the natural streamflow variability over the 1881–2011 period in that region. This new reconstruction is based not on natural proxies but on a historical reanalysis of global geopotential height fields, and aims firstly to produce daily climatic time series, which are then used as inputs to a rainfall–runoff model in order to obtain daily streamflow time series. The performance of the hydro-climatic reconstruction was quantified over the observed period and showed good results in terms of both monthly regimes and interannual variability. The streamflow reconstructions were then compared to two different reconstructions performed on the same catchment using tree ring data series, one focused on mean annual flows and the other on spring floods. In terms of mean annual flows, the interannual variability in the reconstructed flows was similar (except for the 1930–1940 decade), with noteworthy changes seen in wetter and drier years. For spring floods, the reconstructed interannual variabilities were quite similar for the 1955–2011 period, but strongly different between 1880 and 1940. The results emphasize the need to apply different reconstruction methods on the same catchments. Indeed, such comparisons highlight potential differences between available reconstructions and, finally, allow a retrospective analysis of the proposed reconstructions of past hydro-climatological variabilities.
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction. The strategy has been validated with simulated and real datasets.
An analysis of the kangaroo care intervention using neonatal EEG complexity: a preliminary study.
Kaffashi, F; Scher, M S; Ludington-Hoe, S M; Loparo, K A
2013-02-01
Skin-to-skin contact (SSC), also known as kangaroo care (KC), promotes physiological stability and interaction between parents and infants. Temporal analyses of predictability in EEG-sleep time series can elucidate functional brain maturation between SSC and non-SSC cohorts at similar post-menstrual ages (PMAs). Sixteen EEG-sleep studies were performed on eight preterm infants who received 8 weeks of SSC, and compared with two non-SSC cohorts at term (N=126) comprising a preterm group corrected to term age and a full-term group. Two time series measures of predictability were used for comparisons. The SSC premature neonate group had increased complexity when compared to the non-SSC premature neonate group at the same PMA. Discriminant analysis shows that SSC neonates at 40 weeks PMA are closer to the full-term non-SSC group than to the premature non-SSC group at the same PMA, suggesting that the KC intervention accelerates neurophysiological maturation of premature neonates. Based on the hypothesis that EEG-derived complexity increases with neurophysiological maturation, as supported by previously published research, SSC accelerates brain maturation in healthy preterm infants as quantified by time series measures of predictability when compared to a similar non-SSC group. Time series methods that quantify predictability of EEG sleep in neonates can provide useful information about altered neural development after developmental care interventions such as SSC. Analyses of this type may be helpful in assessing other neuroprotection strategies. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Discovering time-lagged rules from microarray data using gene profile classifiers
2011-01-01
Background: Gene regulatory networks have an essential role in every process of life. In this regard, the amount of genome-wide time series data is becoming increasingly available, providing the opportunity to discover the time-delayed gene regulatory networks that govern the majority of these molecular processes. Results: This paper aims at reconstructing gene regulatory networks from multiple genome-wide microarray time series datasets. In this sense, a new model-free algorithm called GRNCOP2 (Gene Regulatory Network inference by Combinatorial OPtimization 2), which is a significant evolution of the GRNCOP algorithm, was developed using combinatorial optimization of gene profile classifiers. The method is capable of inferring potential time-delay relationships with any span of time between genes from various time series datasets given as input. The proposed algorithm was applied to time series data composed of twenty yeast genes that are highly relevant for the cell-cycle study, and the results were compared against several related approaches. The outcomes have shown that GRNCOP2 outperforms the contrasted methods in terms of the proposed metrics, and that the results are consistent with previous biological knowledge. Additionally, a genome-wide study on multiple publicly available time series data was performed. In this case, the experimentation has exhibited the soundness and scalability of the new method, which inferred highly-related statistically-significant gene associations. Conclusions: A novel method for inferring time-delayed gene regulatory networks from genome-wide time series datasets is proposed in this paper. The method was carefully validated with several publicly available data sets. The results have demonstrated that the algorithm constitutes a usable model-free approach capable of predicting meaningful relationships between genes, revealing the time-trends of gene regulation. PMID:21524308
Fluctuations in Wikipedia access-rate and edit-event data
NASA Astrophysics Data System (ADS)
Kämpf, Mirko; Tismer, Sebastian; Kantelhardt, Jan W.; Muchnik, Lev
2012-12-01
Internet-based social networks often reflect extreme events in nature and society by drastic increases in user activity. We study and compare the dynamics of the two major complex processes necessary for information spread via the online encyclopedia ‘Wikipedia’, i.e., article editing (information upload) and article access (information viewing) based on article edit-event time series and (hourly) user access-rate time series for all articles. Daily and weekly activity patterns occur in addition to fluctuations and bursting activity. The bursts (i.e., significant increases in activity for an extended period of time) are characterized by a power-law distribution of durations of increases and decreases. For describing the recurrence and clustering of bursts we investigate the statistics of the return intervals between them. We find stretched exponential distributions of return intervals in access-rate time series, while edit-event time series yield simple exponential distributions. To characterize the fluctuation behavior we apply detrended fluctuation analysis (DFA), finding that most article access-rate time series are characterized by strong long-term correlations with fluctuation exponents α≈0.9. The results indicate significant differences in the dynamics of information upload and access and help in understanding the complex process of collecting, processing, validating, and distributing information in self-organized social networks.
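A compact DFA sketch, estimating the fluctuation exponent α from the log-log slope of windowed, linearly detrended fluctuations of the integrated series (the window scales and fit details here are illustrative):

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64)):
    """Detrended fluctuation analysis: integrate the centered series,
    remove a linear trend in non-overlapping windows of each scale,
    and return the log-log slope of r.m.s. fluctuation versus scale."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))
    fluct = []
    for s in scales:
        n = len(y) // s
        t = np.arange(s)
        sq = []
        for i in range(n):
            w = y[i * s : (i + 1) * s]
            trend = np.polyval(np.polyfit(t, w, 1), t)
            sq.append(np.mean((w - trend) ** 2))
        fluct.append(np.sqrt(np.mean(sq)))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(1)
alpha_noise = dfa_alpha(rng.standard_normal(4096))            # theory: 0.5
alpha_walk = dfa_alpha(np.cumsum(rng.standard_normal(4096)))  # theory: 1.5
```

Uncorrelated noise yields α near 0.5, while the strong long-term correlations reported for article access rates correspond to α around 0.9, between the noise and random-walk regimes.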
Mumbare, Sachin S; Gosavi, Shriram; Almale, Balaji; Patil, Aruna; Dhakane, Supriya; Kadu, Aniruddha
2014-10-01
India's National Family Welfare Programme is dominated by sterilization, particularly tubectomy. Sterilization, being a terminal method of contraception, decides the final number of children for a couple. Many studies have shown a declining trend in the average number of living children at the time of sterilization over a short period of time. This study was therefore planned to perform a time series analysis of the average number of children at the time of terminal contraception, to produce forecasts for the same up to 2020, and to compare the rates of change in various subgroups of the population. Data were preprocessed in MS Access 2007 by creating and running SQL queries. After testing the stationarity of every series with the augmented Dickey-Fuller test, time series analysis and forecasting were done using the best-fit nonseasonal Box-Jenkins ARIMA (p, d, q) model. To compare the rates of change of the average number of children at sterilization in various subgroups, analysis of covariance (ANCOVA) was applied. Forecasting showed that the replacement level of 2.1 total fertility rate (TFR) will be achieved in 2018 for couples opting for sterilization. The same will be achieved in 2020, 2016, 2018, and 2019 for rural areas, urban areas, Hindu couples, and Buddhist couples, respectively. It will not be achieved by 2020 in Muslim couples. Every stratum of the population showed a declining trend. The decline for male children and in rural areas was significantly faster than the decline for female children and in urban areas, respectively.
NASA Astrophysics Data System (ADS)
Jia, Duo; Wang, Cangjiao; Lei, Shaogang
2018-01-01
Mapping vegetation dynamic types in mining areas is significant for revealing the mechanisms of environmental damage and for guiding ecological construction. Dynamic types of vegetation can be identified by applying interannual normalized difference vegetation index (NDVI) time series. However, phase differences and time shifts in interannual time series decrease mapping accuracy in mining regions. To overcome these problems and to increase the accuracy of mapping vegetation dynamics, an interannual Landsat time series for optimum vegetation growing status was constructed first by using the enhanced spatial and temporal adaptive reflectance fusion model algorithm. We then proposed a Markov random field optimized semisupervised Gaussian dynamic time warping kernel-based fuzzy c-means (FCM) cluster algorithm for interannual NDVI time series to map dynamic vegetation types in mining regions. The proposed algorithm has been tested in the Shengli mining region and Shendong mining region, which are typical representatives of China's open-pit and underground mining regions, respectively. Experiments show that the proposed algorithm can solve the problems of phase differences and time shifts to achieve better performance when mapping vegetation dynamic types. The overall accuracies for the Shengli and Shendong mining regions were 93.32% and 89.60%, respectively, with improvements of 7.32% and 25.84% when compared with the original semisupervised FCM algorithm.
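The tolerance of dynamic time warping to exactly the phase differences and time shifts discussed above can be seen in the classic DTW recurrence. This is the plain DTW distance with absolute-difference local cost, not the paper's Gaussian DTW kernel:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance via the standard
    O(len(a) * len(b)) dynamic-programming recurrence."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Two NDVI-like profiles that differ only by a one-step phase shift
a = [0.0, 0.0, 1.0, 2.0]
b = [0.0, 1.0, 2.0, 2.0]
aligned = dtw_distance(a, b)                        # warping absorbs the shift
pointwise = sum(abs(x - y) for x, y in zip(a, b))   # Euclidean-style mismatch
```

The warped distance is zero while the pointwise comparison is not, which is why DTW-based (dis)similarity is a natural fit for clustering shifted interannual NDVI series.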
NASA Astrophysics Data System (ADS)
Delgado, Oihane; Campo-Bescós, Miguel A.; López, J. Javier
2017-04-01
Frequently, when solving hydrological engineering problems, it is necessary to know rain intensity values associated with a specific probability or return period, T. Based on analyses of extreme rainfall events at different time scales of aggregation, we can deduce the Intensity-Duration-Frequency (IDF) relationships that are widely used in hydraulic infrastructure design. However, the lack of long time series of rainfall intensities at small time steps, minutes or hours, leads to the use of mathematical expressions to characterize and extend these curves. One way to deduce them, evaluated in this work, is through synthetic rainfall time series generated from stochastic models. From rainfall time series accumulated every 10 min at the Igueldo pluviograph (San Sebastian, Spain) for the period 1927-2005, homogeneity has been checked and possible statistically significant increasing or decreasing trends have been examined. Subsequently, two models have been calibrated: the Bartlett-Lewis and Markov chain models, which are based on successions of storms, each composed of a series of rainfall events separated by short time intervals. Finally, synthetic ten-minute rainfall time series are generated, which allow detailed IDF curves to be estimated and compared with the IDF curves estimated from the recorded data.
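Of the two calibrated models, the Markov chain component is the simpler to sketch. The toy below generates a synthetic rainfall series from a two-state (wet/dry) occurrence chain with gamma-distributed wet-step depths; every parameter value here is hypothetical, and the study itself calibrated both the Bartlett-Lewis and Markov chain models against the 10-min Igueldo record.

```python
import numpy as np

def markov_rain(n_steps, p_wd, p_ww, gamma_shape, gamma_scale, rng):
    """Two-state (wet/dry) Markov chain for rainfall occurrence:
    p_wd = P(wet | previous step dry), p_ww = P(wet | previous step wet).
    Wet-step depths are drawn from a gamma distribution."""
    wet = False
    series = np.zeros(n_steps)
    for i in range(n_steps):
        p = p_ww if wet else p_wd
        wet = rng.random() < p
        if wet:
            series[i] = rng.gamma(gamma_shape, gamma_scale)
    return series

rng = np.random.default_rng(5)
# Hypothetical transition probabilities and depth parameters
synthetic = markov_rain(10_000, p_wd=0.05, p_ww=0.6,
                        gamma_shape=0.8, gamma_scale=2.0, rng=rng)
```

The stationary wet fraction of such a chain is p_wd / (1 - p_ww + p_wd), about 0.11 for the values above, so long synthetic records preserve the calibrated occurrence statistics.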
Samiee, Kaveh; Kovács, Petér; Gabbouj, Moncef
2015-02-01
A system for epileptic seizure detection in electroencephalography (EEG) is described in this paper. One of the challenges is to distinguish rhythmic discharges from nonstationary patterns occurring during seizures. The proposed approach is based on an adaptive and localized time-frequency representation of EEG signals by means of rational functions. The corresponding rational discrete short-time Fourier transform (DSTFT) is a novel feature extraction technique for epileptic EEG data. A multilayer perceptron classifier is fed by the coefficients of the rational DSTFT in order to separate seizure epochs from seizure-free epochs. The effectiveness of the proposed method is compared with several state-of-the-art feature extraction algorithms used in offline epileptic seizure detection. The results of the comparative evaluations show that the proposed method outperforms competing techniques in terms of classification accuracy. In addition, it provides a compact representation of EEG time-series.
Generalized sample entropy analysis for traffic signals based on similarity measure
NASA Astrophysics Data System (ADS)
Shang, Du; Xu, Mengjia; Shang, Pengjian
2017-05-01
Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on a similarity distance, matches signal patterns in a different way and reveals distinct complexity behaviors. Simulations over synthetic data and traffic signals provide a comparative study that demonstrates the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. First, it overcomes the limitation concerning the relationship between the dimension parameter and the length of the series. Second, the modified sample entropy functions can quantitatively distinguish time series from different complex systems by the similarity measure.
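For reference, classic sample entropy, which the paper generalizes via a similarity-distance-based pattern match, counts template matches of length m and m+1 within a Chebyshev tolerance r. The sketch below is a simplified plain-numpy version (it keeps all available templates rather than the conventional N-m truncation, and it is not the authors' modified measure).

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Simplified sample entropy: -log of the ratio of (m+1)- to m-length
    template matches within tolerance r (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)          # common rule of thumb
    def count_matches(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))  # highly predictable
noisy = rng.standard_normal(500)                   # white noise
# A regular signal scores lower (less complex) than noise.
```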
Zhao, Yan-Hong; Zhang, Xue-Fang; Zhao, Yan-Qiu; Bai, Fan; Qin, Fan; Sun, Jing; Dong, Ying
2017-08-01
Chronic myeloid leukemia (CML) is characterized by the accumulation of active BCR-ABL protein. Imatinib is the first-line treatment of CML; however, many patients are resistant to this drug. In this study, we aimed to compare the differences in expression patterns and functions of time-series genes in imatinib-resistant CML cells under different drug treatments. GSE24946 was downloaded from the GEO database, which included 17 samples of K562-r cells with (n=12) or without drug administration (n=5). Three drug treatment groups were considered for this study: arsenic trioxide (ATO), AMN107, and ATO+AMN107. Each group had one sample at each time point (3, 12, 24, and 48 h). Time-series genes with a ratio of standard deviation/average (coefficient of variation) >0.15 were screened, and their expression patterns were revealed based on Short Time-series Expression Miner (STEM). Then, the functional enrichment analysis of time-series genes in each group was performed using DAVID, and the genes enriched in the top ten functional categories were extracted to detect their expression patterns. Different time-series genes were identified in the three groups, and most of them were enriched in the ribosome and oxidative phosphorylation pathways. Time-series genes in the three treatment groups had different expression patterns and functions. Time-series genes in the ATO group (e.g. CCNA2 and DAB2) were significantly associated with cell adhesion, those in the AMN107 group were related to cellular carbohydrate metabolic process, while those in the ATO+AMN107 group (e.g. AP2M1) were significantly related to cell proliferation and antigen processing. In imatinib-resistant CML cells, ATO could influence genes related to cell adhesion, AMN107 might affect genes involved in cellular carbohydrate metabolism, and the combination therapy might regulate genes involved in cell proliferation.
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques
Barzaghi, Riccardo; De Gaetani, Carlo Iapige
2018-01-01
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to the recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D’Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit pendulum data and GNSS data, with standard deviations of residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description of dam displacements, in both space and time, than the one based on pendulum observations. PMID:29498650
Argon concentration time-series as a tool to study gas dynamics in the hyporheic zone.
Mächler, Lars; Brennwald, Matthias S; Kipfer, Rolf
2013-07-02
The oxygen dynamics in the hyporheic zone of a peri-alpine river (Thur, Switzerland), were studied through recording and analyzing the concentration time-series of dissolved argon, oxygen, carbon dioxide, and temperature during low flow conditions, for a period of one week. The argon concentration time-series was used to investigate the physical gas dynamics in the hyporheic zone. Differences in the transport behavior of heat and gas were determined by comparing the diel temperature evolution of groundwater to the measured concentration of dissolved argon. These differences were most likely caused by vertical heat transport which influenced the local groundwater temperature. The argon concentration time-series were also used to estimate travel times by cross correlating argon concentrations in the groundwater with argon concentrations in the river. The information gained from quantifying the physical gas transport was used to estimate the oxygen turnover in groundwater after water recharge. The resulting oxygen turnover showed strong diel variations, which correlated with the water temperature during groundwater recharge. Hence, the variation in the consumption rate was most likely caused by the temperature dependence of microbial activity.
NASA Astrophysics Data System (ADS)
Nechad, Bouchra; Alvera-Azcaràte, Aida; Ruddick, Kevin; Greenwood, Naomi
2011-08-01
In situ measurements of total suspended matter (TSM) over the period 2003-2006, collected with two autonomous platforms from the Centre for Environment, Fisheries and Aquatic Sciences (Cefas) measuring the optical backscatter (OBS) in the southern North Sea, are used to assess the accuracy of TSM time series extracted from satellite data. Since there are gaps in the remote sensing (RS) data, due mainly to cloud cover, the Data Interpolating Empirical Orthogonal Functions (DINEOF) is used to fill in the TSM time series and build a continuous daily "recoloured" dataset. The RS datasets consist of TSM maps derived from MODIS imagery using the bio-optical model of Nechad et al. (Rem Sens Environ 114: 854-866, 2010). In this study, the DINEOF time series are compared to the in situ OBS measured in moderately to very turbid waters respectively in West Gabbard and Warp Anchorage, in the southern North Sea. The discrepancies between instantaneous RS, DINEOF-filled RS data and Cefas data are analysed in terms of TSM algorithm uncertainties, space-time variability and DINEOF reconstruction uncertainty.
Qian, Liwei; Zheng, Haoran; Zhou, Hong; Qin, Ruibin; Li, Jinlong
2013-01-01
The increasing availability of time series expression datasets, although promising, raises a number of new computational challenges. Accordingly, the development of suitable classification methods to make reliable and sound predictions is becoming a pressing issue. We propose, here, a new method to classify time series gene expression via integration of biological networks. We evaluated our approach on 2 different datasets and showed that the use of a hidden Markov model/Gaussian mixture models hybrid explores the time-dependence of the expression data, thereby leading to better prediction results. We demonstrated that the biclustering procedure identifies function-related genes as a whole, giving rise to high accordance in prognosis prediction across independent time series datasets. In addition, we showed that integration of biological networks into our method significantly improves prediction performance. Moreover, we compared our approach with several state-of-the-art algorithms and found that our method outperformed previous approaches with regard to various criteria. Finally, our approach achieved better prediction results on early-stage data, implying the potential of our method for practical prediction. PMID:23516469
Development of web tools to disseminate space geodesy data-related products
NASA Astrophysics Data System (ADS)
Soudarin, Laurent; Ferrage, Pascale; Mezerette, Adrien
2015-04-01
In order to promote the products of the DORIS system, the French Space Agency CNES has developed and implemented on the web site of the International DORIS Service (IDS) a set of plot tools to interactively build and display time series of site positions, orbit residuals and terrestrial parameters (scale, geocenter). An interactive global map is also available to select sites, and to get access to their information. Besides the products provided by the CNES Orbitography Team and the IDS components, these tools allow comparing time evolutions of coordinates for collocated DORIS and GNSS stations, thanks to the collaboration with the Terrestrial Frame Combination Center of the International GNSS Service (IGS). A database was created to improve robustness and efficiency of the tools, with the objective to propose a complete web service to foster data exchange with the other geodetic services of the International Association of Geodesy (IAG). The possibility to visualize and compare position time series of the four main space geodetic techniques DORIS, GNSS, SLR and VLBI is already under way at the French level. A dedicated version of these web tools has been developed for the French Space Geodesy Research Group (GRGS). It will give access to position time series provided by the GRGS Analysis Centers involved in DORIS, GNSS, SLR and VLBI data processing for the realization of the International Terrestrial Reference Frame. In this presentation, we will describe the functionalities of these tools, and we will address some aspects of the time series (content, format).
Coronal Mass Ejection Data Clustering and Visualization of Decision Trees
NASA Astrophysics Data System (ADS)
Ma, Ruizhe; Angryk, Rafal A.; Riley, Pete; Filali Boubrahimi, Soukaina
2018-05-01
Coronal mass ejections (CMEs) can be categorized as either “magnetic clouds” (MCs) or non-MCs. Features such as a large magnetic field, low plasma-beta, and low proton temperature suggest that a CME event is also an MC event; however, so far there is neither a definitive method nor an automatic process to distinguish the two. Human labeling is time-consuming, and results can fluctuate owing to the imprecise definition of such events. In this study, we approach the problem of MC and non-MC distinction from a time series data analysis perspective and show how clustering can shed some light on this problem. Although many algorithms exist for traditional data clustering in the Euclidean space, they are not well suited for time series data. Problems such as inadequate distance measure, inaccurate cluster center description, and lack of intuitive cluster representations need to be addressed for effective time series clustering. Our data analysis in this work is twofold: clustering and visualization. For clustering we compared the results from the popular hierarchical agglomerative clustering technique to a distance density clustering heuristic we developed previously for time series data clustering. In both cases, dynamic time warping will be used for similarity measure. For classification as well as visualization, we use decision trees to aggregate single-dimensional clustering results to form a multidimensional time series decision tree, with averaged time series to present each decision. In this study, we achieved modest accuracy and, more importantly, an intuitive interpretation of how different parameters contribute to an MC event.
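The similarity measure used in both clustering approaches above is dynamic time warping (DTW), which, unlike the Euclidean distance, tolerates series that are shifted or locally stretched in time. A standard textbook implementation (not the authors' distance density clustering code):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series,
    computed by filling the cumulative-cost matrix D."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.sin(np.linspace(0, 2 * np.pi, 50))
y = np.sin(np.linspace(0, 2 * np.pi, 50) + 0.5)   # phase-shifted copy
z = np.zeros(50)                                   # unrelated flat series
```

DTW assigns a small distance to the phase-shifted sine while still clearly separating it from the flat series, which is why it suits time series whose features are misaligned in time.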
NASA Astrophysics Data System (ADS)
Rao, A.; Onderdonk, N.
2016-12-01
The Davis-Schrimpf Seep Field (DSSF) is a group of approximately 50 geothermal mud seeps (gryphons) in the Salton Trough of southeastern California. Its location puts it in line with the mapped San Andreas Fault, if extended further south, as well as within the poorly understood Brawley Seismic Zone. Much of the geomorphology, geochemistry, and other characteristics of the DSSF have been analyzed, but its subsurface structure remains unknown. Here we present data and interpretations from five new temperature time series from four separate gryphons at the DSSF, and compare them both amongst themselves and within the context of all previously collected data to identify possible patterns constraining the subsurface dynamics. Simultaneously collected time series from different seeps were cross-correlated to quantify similarity. All years' time series were checked against the record of local seismicity to identify any seismic influence on temperature excursions. Time series captured from the same feature in different years were statistically summarized and the results plotted to examine their evolution over time. We found that adjacent vents often alternate in temperature, suggesting a switching of the flow path of the erupted mud at the scale of a few meters or less. Noticeable warming over time was observed in most of the features with time series covering multiple years. No synchronicity was observed between DSSF features' temperature excursions and seismic events within a 24-kilometer radius covering most of the width of the surrounding Salton Trough.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Molnar, Gyula; Iredell, Lena; Loeb, Norman G.
2011-01-01
This paper compares recent spatial anomaly time series of OLR (Outgoing Longwave Radiation) and OLRCLR (Clear Sky OLR) as determined using CERES and AIRS observations over the time period September 2002 through June 2010. We find excellent agreement in OLR anomaly time series of both data sets in almost every detail, down to the 1 x 1 spatial grid point level. This extremely close agreement of OLR anomaly time series derived from observations by two different instruments implies that both sets of results must be highly stable. This agreement also validates to some extent the anomaly time series of the AIRS derived products used in the computation of the AIRS OLR product. The paper then examines anomaly time series of AIRS derived products over the extended time period September 2002 through April 2011. We show that OLR anomalies during this period are closely in phase with those of an El Nino index, and that the recent global and tropical mean decreases in OLR and OLRCLR are a result of a transition from an El Nino condition at the beginning of the data record to La Nina conditions toward the end of the data period. We show that the relationship between global mean, and especially tropical mean, OLR anomalies to the El Nino index can be explained by temporal changes of the distribution of mid-tropospheric water vapor and cloud cover in two spatial regions that are in direct response to El Nino/La Nina activity which occurs outside these spatial regions.
Low Streamflow Forecasting using Minimum Relative Entropy
NASA Astrophysics Data System (ADS)
Cui, H.; Singh, V. P.
2013-12-01
Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation such that the relative entropy of the underlying process is minimized, so that the time series can be forecasted. Different priors, such as uniform, exponential and Gaussian assumptions, are used to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of low streamflow series with higher resolution than conventional methods. Forecasted streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy, and the advantages and disadvantages of each method in forecasting low streamflow are discussed.
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
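The core reconstruction idea, constraining time-series signals to a low-rank temporal subspace extracted from an ensemble of magnetization evolutions and then solving a linear least-squares problem, can be sketched in one dimension. The exponential "dictionary" below is only a stand-in for Bloch-simulated MRF signal evolutions, and the rank and noise level are arbitrary choices.

```python
import numpy as np

# Build a small "dictionary" of hypothetical signal time courses and extract
# a rank-r temporal subspace from its SVD, mimicking the subspace constraint.
t = np.linspace(0, 1, 100)
dictionary = np.array([np.exp(-t / tau) for tau in np.linspace(0.05, 1.0, 50)])
_, s, Vt = np.linalg.svd(dictionary, full_matrices=False)
r = 4
subspace = Vt[:r]                        # (r x T) temporal basis

# Reconstruct a noisy measured series by least-squares projection onto the
# subspace -- the "simple linear least-squares problem" of the abstract.
rng = np.random.default_rng(4)
clean = np.exp(-t / 0.3)
noisy = clean + 0.05 * rng.standard_normal(len(t))
coeffs, *_ = np.linalg.lstsq(subspace.T, noisy, rcond=None)
recon = subspace.T @ coeffs              # denoised, subspace-constrained series
```

Because the noise is spread over all temporal dimensions while the signal lives in the low-rank subspace, the projection suppresses most of the noise energy.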
Contributions to Climate Research Using the AIRS Science Team Version-5 Products
NASA Technical Reports Server (NTRS)
Susskind, Joel; Molnar, Gyula; Iredell, Lena
2011-01-01
This paper compares recent spatial anomaly time series of OLR (Outgoing Longwave Radiation) and OLRCLR (Clear Sky OLR) as determined using CERES and AIRS observations over the time period September 2002 through June 2010. We find excellent agreement in OLR anomaly time series of both data sets in almost every detail, down to the 1 x 1 spatial grid point level. This extremely close agreement of OLR anomaly time series derived from observations by two different instruments implies that both sets of results must be highly stable. This agreement also validates to some extent the anomaly time series of the AIRS derived products used in the computation of the AIRS OLR product. The paper then examines anomaly time series of AIRS derived products over the extended time period September 2002 through April 2011. We show that OLR anomalies during this period are closely in phase with those of an El Nino index, and that recent global and tropical mean decreases in OLR and OLRCLR are a result of a transition from an El Nino condition at the beginning of the data record to La Nina conditions toward the end of the data period. This relationship can be explained by temporal changes of the distribution of mid-tropospheric water vapor and cloud cover in two spatial regions that are in direct response to El Nino/La Nina activity which occurs outside these spatial regions.
Analysis of financial time series using multiscale entropy based on skewness and kurtosis
NASA Astrophysics Data System (ADS)
Xu, Meng; Shang, Pengjian
2018-01-01
There is great interest in studying the dynamic characteristics of financial time series of daily stock closing prices in different regions. Multiscale entropy (MSE) is effective mainly in quantifying the complexity of time series on different time scales. This paper applies a new MSE-based method for assessing financial stability, using coarse-graining based on skewness and kurtosis. To better understand which coarse-graining method suits different kinds of stock indexes, we take into account the developmental characteristics of the stock markets of three continents: Asia, North America and Europe. We study the volatility of different financial time series and analyze the similarities and differences of the coarse-grained series from the perspective of skewness and kurtosis. A correspondence between the entropy value of stock sequences and the degree of stability of financial markets was observed. Three stocks with particular characteristics among the eight stock sequences are discussed, and their behavior matches the graphical results of applying the MSE method. A comparative study is conducted over synthetic and real-world data. Results show that the modified method is more sensitive to changes in dynamics and carries more valuable information, and that the discrimination provided by skewness and kurtosis is both clear and more stable.
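The modification at the heart of this method is the coarse-graining statistic: standard MSE averages each non-overlapping block at a given scale, while the variant studied here summarizes blocks by their skewness or kurtosis before computing sample entropy. A sketch of just the coarse-graining step (the subsequent entropy computation is omitted, and the moment formulas are the plain biased sample versions):

```python
import numpy as np

def coarse_grain(x, scale, stat):
    """Coarse-grain a series at the given scale with a chosen block statistic:
    'mean' as in standard MSE; 'skewness' or 'kurtosis' as in the modified
    method discussed in the abstract."""
    n = len(x) // scale
    blocks = np.asarray(x[:n * scale], dtype=float).reshape(n, scale)
    if stat == "mean":
        return blocks.mean(axis=1)
    mu = blocks.mean(axis=1, keepdims=True)
    sd = blocks.std(axis=1, keepdims=True)
    z = (blocks - mu) / sd
    if stat == "skewness":
        return (z ** 3).mean(axis=1)
    if stat == "kurtosis":
        return (z ** 4).mean(axis=1)
    raise ValueError(stat)

rng = np.random.default_rng(0)
noise = rng.standard_normal(3000)          # stand-in for a return series
scale5_mean = coarse_grain(noise, 5, "mean")
scale5_kurt = coarse_grain(noise, 5, "kurtosis")
```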
NASA Astrophysics Data System (ADS)
Chanard, Kristel; Fleitout, Luce; Calais, Eric; Rebischung, Paul; Avouac, Jean-Philippe
2018-04-01
We model surface displacements induced by variations in continental water, atmospheric pressure, and nontidal oceanic loading, derived from the Gravity Recovery and Climate Experiment (GRACE) for spherical harmonic degrees two and higher. As they are not observable by GRACE, we use at first the degree-1 spherical harmonic coefficients from Swenson et al. (2008, https://doi.org/10.1029/2007JB005338). We compare the predicted displacements with the position time series of 689 globally distributed continuous Global Navigation Satellite System (GNSS) stations. While GNSS vertical displacements are well explained by the model at a global scale, horizontal displacements are systematically underpredicted and out of phase with GNSS station position time series. We then reestimate the degree 1 deformation field from a comparison between our GRACE-derived model, with no a priori degree 1 loads, and the GNSS observations. We show that this approach reconciles GRACE-derived loading displacements and GNSS station position time series at a global scale, particularly in the horizontal components. Assuming that they reflect surface loading deformation only, our degree-1 estimates can be translated into geocenter motion time series. We also address and assess the impact of systematic errors in GNSS station position time series at the Global Positioning System (GPS) draconitic period and its harmonics on the comparison between GNSS and GRACE-derived annual displacements. Our results confirm that surface mass redistributions observed by GRACE, combined with an elastic spherical and layered Earth model, can be used to provide first-order corrections for loading deformation observed in both horizontal and vertical components of GNSS station position time series.
Burby, Joshua W.; Lacker, Daniel
2016-01-01
Systems as diverse as the interacting species in a community, alleles at a genetic locus, and companies in a market are characterized by competition (over resources, space, capital, etc) and adaptation. Neutral theory, built around the hypothesis that individual performance is independent of group membership, has found utility across the disciplines of ecology, population genetics, and economics, both because of the success of the neutral hypothesis in predicting system properties and because deviations from these predictions provide information about the underlying dynamics. However, most tests of neutrality are weak, based on static system properties such as species-abundance distributions or the number of singletons in a sample. Time-series data provide a window onto a system’s dynamics, and should furnish tests of the neutral hypothesis that are more powerful in detecting deviations from neutrality and more informative about the type of competitive asymmetry that drives the deviation. Here, we present a neutrality test for time-series data. We apply this test to several microbial time-series and financial time-series and find that most of these systems are not neutral. Our test isolates the covariance structure of neutral competition, thus facilitating further exploration of the nature of asymmetry in the covariance structure of competitive systems. Much like neutrality tests from population genetics that use relative abundance distributions have enabled researchers to scan entire genomes for genes under selection, we anticipate our time-series test will be useful for quick significance tests of neutrality across a range of ecological, economic, and sociological systems for which time-series data are available. Future work can use our test to categorize and compare the dynamic fingerprints of particular competitive asymmetries (frequency dependence, volatility smiles, etc) to improve forecasting and management of complex adaptive systems. PMID:27689714
Flicker Noise in GNSS Station Position Time Series: How much is due to Crustal Loading Deformations?
NASA Astrophysics Data System (ADS)
Rebischung, P.; Chanard, K.; Metivier, L.; Altamimi, Z.
2017-12-01
The presence of colored noise in GNSS station position time series was detected 20 years ago. It has been shown since then that the background spectrum of non-linear GNSS station position residuals closely follows a power-law process (known as flicker noise, 1/f noise or pink noise), with some white noise taking over at the highest frequencies. However, the origin of the flicker noise present in GNSS station position time series is still unclear. Flicker noise is often described as intrinsic to the GNSS system, i.e. due to errors in the GNSS observations or in their modeling, but no such error source has been identified so far that could explain the level of observed flicker noise, nor its spatial correlation. We investigate another possible contributor to the observed flicker noise, namely real crustal displacements driven by surface mass transports, i.e. non-tidal loading deformations. This study is motivated by the presence of power-law noise in the time series of low-degree (≤ 40) and low-order (≤ 12) Stokes coefficients observed by GRACE; power-law noise might also exist at higher degrees and orders, but is obscured by GRACE observational noise. By comparing GNSS station position time series with loading deformation time series derived from GRACE gravity fields, both with their periodic components removed, we therefore assess whether GNSS and GRACE both plausibly observe the same flicker behavior of surface mass transports / loading deformations. Taking into account GRACE observability limitations, we also quantify the amount of flicker noise in GNSS station position time series that could be explained by such flicker loading deformations.
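Flicker noise means a power spectrum proportional to 1/f. A common way to synthesize such a process, and to check a series' spectral slope in the way one would when characterizing GNSS residuals, is spectral shaping. This is a generic sketch of the noise model, not the paper's GRACE/GNSS comparison.

```python
import numpy as np

def powerlaw_noise(n, alpha, rng):
    """Generate noise with power spectrum ~ 1/f^alpha by shaping the
    amplitude spectrum and randomizing phases (alpha = 1: flicker noise)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-alpha / 2)      # amplitude ~ f^(-alpha/2)
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    spectrum = amp * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n)

def spectral_slope(x):
    """Least-squares slope of the log-log periodogram."""
    f = np.fft.rfftfreq(len(x))[1:]
    p = np.abs(np.fft.rfft(x))[1:] ** 2
    return np.polyfit(np.log(f), np.log(p), 1)[0]

rng = np.random.default_rng(3)
flicker = powerlaw_noise(4096, 1.0, rng)
slope = spectral_slope(flicker)              # expected near -1 for flicker noise
```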
NASA Astrophysics Data System (ADS)
Su, Xiaoli; Luo, Zhicai; Zhou, Zebing
2018-06-01
Knowledge of backscatter change is important to accurately retrieve elevation change time series from satellite radar altimetry over continental ice sheets. Previously, backscatter coefficients generated in two cases, namely with and without accounting for backscatter gradient (BG), have been used. However, the difference between backscatter time series obtained separately in these two cases and its impact on retrieving elevation change are not well known. Here we first compare the mean profiles of the Ku and Ka band backscatter over the Greenland ice sheet (GrIS), with results illustrating that the Ku-band backscatter is 3 ∼ 5 dB larger than that of the Ka band. We then conduct a statistical analysis of the time series of backscatter formed separately in the above two cases for both Ku and Ka bands over two regions in the GrIS. It is found that the standard deviation of the backscatter time series becomes slightly smaller after removing the BG effect, which suggests that the method for the BG correction is effective. Furthermore, the impact on elevation change from backscatter change due to the BG effect is separately assessed for both Ku and Ka bands over the GrIS. We conclude that Ka band altimetry would benefit from a BG-induced backscatter analysis (∼10% over region 2). This study may provide a reference for forming backscatter time series towards refining elevation change time series from satellite radar altimetry over ice sheets using repeat-track analysis.
NASA Astrophysics Data System (ADS)
Alshawaf, Fadwa; Dick, Galina; Heise, Stefan; Balidakis, Kyriakos; Schmidt, Torsten; Wickert, Jens
2017-04-01
Ground-based GNSS (Global Navigation Satellite Systems) have been used efficiently since the 1990s as a meteorological observing system. Recently, scientists have used GNSS time series of precipitable water vapor (PWV) for climate research, although these series may not be sufficiently long. In this work, we compare the trend estimated from GNSS time series with that estimated from European Center for Medium-Range Weather Forecasts Reanalysis (ERA-Interim) data and meteorological measurements. We aim at evaluating climate evolution in Central Europe by monitoring different atmospheric variables such as temperature and PWV. PWV time series were obtained by three methods: 1) estimated from ground-based GNSS observations using the method of precise point positioning, 2) inferred from ERA-Interim data, and 3) determined based on daily surface measurements of temperature and relative humidity. The other variables are available from surface meteorological stations or received from ERA-Interim. The PWV trend component estimated from GNSS data strongly correlates (>70%) with that estimated from the other data sets. The linear trend is estimated by straight-line fitting over 30 years of seasonally adjusted PWV time series obtained using the meteorological measurements. The results show a positive trend in the PWV time series with an increase of 0.2-0.7 mm/decade and a mean standard deviation of 0.016 mm/decade. In this paper, we present the results at three GNSS stations. The temporal increase of PWV correlates with the temporal increase in temperature levels.
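The trend estimate described, a straight-line fit to a seasonally adjusted series, can be sketched as follows. The synthetic monthly PWV series below (annual cycle plus a 0.5 mm/decade trend plus noise) is entirely hypothetical; only the procedure mirrors the paper.

```python
import numpy as np

def deseasonalize_and_trend(y, period):
    """Remove the mean seasonal cycle, then fit a straight line.
    Returns the per-step slope of the seasonally adjusted series."""
    y = np.asarray(y, dtype=float)
    n = len(y) // period * period
    y = y[:n]
    seasonal = y.reshape(-1, period).mean(axis=0)   # mean cycle, one value per month
    adjusted = y - np.tile(seasonal, n // period)
    slope, _ = np.polyfit(np.arange(n), adjusted, 1)
    return slope

# Hypothetical 30-year monthly PWV record: annual cycle + trend + noise
rng = np.random.default_rng(6)
months = np.arange(360)
pwv = (15.0
       + 2.0 * np.sin(2 * np.pi * months / 12)     # annual cycle (mm)
       + (0.05 / 12) * months                       # 0.5 mm/decade trend
       + 0.3 * rng.standard_normal(360))            # observation noise
slope = deseasonalize_and_trend(pwv, period=12)     # mm per month
trend_per_decade = slope * 120                      # mm per decade
```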
Mutual information estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view of sparse datasets. The irregular sampling of many of these time series, however, makes it necessary to either perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto- and cross-mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series, and contrasted our results with the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method.
It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, like k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
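As a rough illustration of the kernel-weighted approach to irregular sampling, the Gaussian-kernel lagged correlation (the authors' earlier linear measure, not the mutual-information estimator itself) might look like this; the bandwidth h and the Gamma-sampled synthetic series are assumptions:

```python
import numpy as np

def gaussian_kernel_corr(tx, x, ty, y, lag=0.0, h=1.0):
    """Kernel-weighted lagged cross-correlation for irregularly sampled series:
    every pair (x_i, y_j) contributes with a Gaussian weight in the time
    offset ty_j - tx_i - lag, so no interpolation is needed."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    d = ty[None, :] - tx[:, None] - lag        # all pairwise time offsets
    w = np.exp(-0.5 * (d / h) ** 2)
    return float(np.sum(w * np.outer(x, y)) / np.sum(w))

# synthetic series with Gamma-distributed inter-sampling intervals
rng = np.random.default_rng(0)
tx = rng.gamma(2.0, 0.5, 300).cumsum()
ty = rng.gamma(2.0, 0.5, 300).cumsum()
x, y = np.sin(0.3 * tx), np.sin(0.3 * ty)
```

The mutual-information estimator proposed in the abstract replaces the product x_i·y_j with Gaussian-weighted contributions to (joint) histogram bins, but the pairwise weighting idea is the same.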
NASA Astrophysics Data System (ADS)
Moeeni, Hamid; Bonakdari, Hossein; Fatemi, Seyed Ehsan
2017-04-01
Because time series stationarization has a key role in stochastic modeling results, three methods are analyzed in this study: seasonal differencing, seasonal standardization and spectral analysis, each used to eliminate the periodic effect on time series stationarity. First, six time series, including 4 streamflow series and 2 water temperature series, are stationarized. The stochastic term for these series is subsequently modeled with ARIMA. For the analysis, 9228 models are introduced. It is observed that seasonal standardization and spectral analysis eliminate the periodic term completely, while seasonal differencing maintains seasonal correlation structures. The obtained results indicate that all three methods present acceptable performance overall. However, model accuracy in monthly streamflow prediction is higher with seasonal differencing than with the other two methods. Another advantage of seasonal differencing over the other methods is that the monthly streamflow is never estimated as negative. Standardization is the best method for predicting monthly water temperature, although its performance is quite similar to seasonal differencing, while spectral analysis performed the weakest in all cases. It is concluded that for each monthly seasonal series, seasonal differencing is the best stationarization method in terms of periodic effect elimination. Moreover, the monthly water temperature is predicted with more accuracy than monthly streamflow. The ratio of the average stochastic term to the amplitude of the periodic term obtained for monthly streamflow and monthly water temperature was 0.19 and 0.30, 0.21 and 0.13, and 0.07 and 0.04, respectively. As a result, the periodic term is more dominant relative to the stochastic term in the monthly water temperature series than in the streamflow series.
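Two of the stationarization steps compared above, seasonal differencing and seasonal standardization, can be sketched in a few lines (a minimal sketch; the study's subsequent ARIMA modeling of the stochastic term is not shown):

```python
import numpy as np

def seasonal_difference(x, period=12):
    """Seasonal differencing: y_t = x_t - x_{t-period}."""
    x = np.asarray(x, dtype=float)
    return x[period:] - x[:-period]

def seasonal_standardize(x, period=12):
    """Subtract each calendar month's mean and divide by its std
    (series length must be a multiple of the period)."""
    m = np.asarray(x, dtype=float).reshape(-1, period)
    return ((m - m.mean(axis=0)) / m.std(axis=0)).ravel()
```

Standardization forces every calendar month to zero mean and unit variance, removing the periodic term completely; differencing removes the seasonal level but, as the abstract notes, leaves the seasonal correlation structure in place.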
Regional Glacier Mapping by Combination of Dense Optical and SAR Satellite Image Time-Series
NASA Astrophysics Data System (ADS)
Winsvold, S. H.; Kääb, A.; Andreassen, L. M.; Nuth, C.; Schellenberger, T.; van Pelt, W.
2016-12-01
Near-future dense time series from both SAR (Sentinel-1A and B) and optical satellite sensors (Landsat 8, Sentinel-2A and B) will promote new multi-sensor time series applications for glacier mapping. We assess such combinations of optical and SAR data, among others, by 1) using SAR data to supplement optical time series that suffer from heavy cloud cover (chronological gap-filling), 2) merging the two data types based on stack statistics (std. dev., mean, max., etc.), or 3) better explaining glacier facies patterns in SAR data using optical satellite images. As one example, summer SAR backscatter time series have been largely unexplored and even neglected in many glaciological studies due to the high content of liquid melt water on the ice surface and its intrusion into the upper part of the snow and firn. This water content causes strong specular scattering and absorption of the radar signal, and little energy is scattered back to the SAR sensor. We find in many scenes of a Sentinel-1 time series a significant temporal backscatter difference between the glacier ice surface and the seasonal snow as it melts up-glacier. Even though both surfaces are typically wet, we suggest that the backscatter difference is due to the different roughness lengths of the two surfaces. Higher backscatter is found on the ice surface in the ablation area compared to the firn/seasonal snow surface. We also find and present other backscatter patterns in the Sentinel-1 time series related to glacier facies and weather events. For the Ny-Ålesund area, Svalbard, we use Radarsat-2 time series to explore the glacier backscatter conditions over a period of more than 5 years, discussing distinct temporal signals from, among others, refreezing of the firn in late autumn and temporary lakes. All these examples are analyzed using the above 3 methods.
By this multi-temporal and multi-sensor approach we also explore and describe the possible connection between combined SAR/optical time series and surface mass balance.
Analysis of crude oil markets with improved multiscale weighted permutation entropy
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun; Liu, Cheng
2018-03-01
Entropy measures have recently been used extensively to study the complexity property of nonlinear systems. Weighted permutation entropy (WPE) overcomes the neglect of amplitude information in time series compared with PE and shows a distinctive ability to extract complexity information from data having abrupt changes in magnitude. The improved (sometimes called composite) multi-scale (MS) method has the advantage of reducing errors and improving accuracy when applied to evaluate multiscale entropy values of time series of limited length. In this paper, we combine the merits of WPE and improved MS to propose the improved multiscale weighted permutation entropy (IMWPE) method for complexity investigation of a time series. The method is then validated on artificial data, white noise and 1 / f noise, and on real market data of Brent and Daqing crude oil. Meanwhile, the complexity properties of crude oil markets are explored for return series, volatility series with multiple exponents, and EEMD-produced intrinsic mode functions (IMFs) which represent different frequency components of the return series. Moreover, the instantaneous amplitude and frequency of Brent and Daqing crude oil are analyzed by applying the Hilbert transform to each IMF.
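A single-scale weighted permutation entropy, the building block that the IMWPE method averages over scales, might be sketched like this (the variance weighting follows the common WPE formulation; the improved multiscale coarse-graining is omitted):

```python
import math
from itertools import permutations
import numpy as np

def weighted_permutation_entropy(x, order=3, delay=1):
    """Weighted permutation entropy: ordinal patterns weighted by the variance
    of each embedding vector, normalized by log(order!) to lie in [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    weights = {p: 0.0 for p in permutations(range(order))}
    for i in range(n):
        v = x[i:i + (order - 1) * delay + 1:delay]
        weights[tuple(np.argsort(v))] += v.var()   # amplitude-aware weight
    total = sum(weights.values())
    probs = [w / total for w in weights.values() if w > 0]
    return -sum(p * math.log(p) for p in probs) / math.log(math.factorial(order))
```

White noise gives values near 1 (all ordinal patterns equally likely), while a monotonic series yields 0; weighting by the embedding-vector variance is what lets WPE respond to abrupt amplitude changes that plain PE ignores.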
Multifractal behavior of an air pollutant time series and the relevance to the predictability.
Dong, Qingli; Wang, Yong; Li, Peizhi
2017-03-01
Compared with the traditional method of detrended fluctuation analysis, which is used to characterize fractal scaling properties and long-range correlations, this research provides new insight into the multifractality and predictability of a nonstationary air pollutant time series using the methods of spectral analysis and multifractal detrended fluctuation analysis. First, the existence of a significant power-law behavior and long-range correlations for such series are verified. Then, by employing shuffling and surrogating procedures and estimating the scaling exponents, the major source of multifractality in these pollutant series is found to be the fat-tailed probability density function. Long-range correlations also partly contribute to the multifractal features. The relationship between the predictability of the pollutant time series and their multifractal nature is then investigated with extended analyses from the quantitative perspective, and it is found that the contribution of the multifractal strength of long-range correlations to the overall multifractal strength can affect the predictability of a pollutant series in a specific region to some extent. The findings of this comprehensive study can help to better understand the mechanisms governing the dynamics of air pollutant series and aid in performing better meteorological assessment and management. Copyright © 2016 Elsevier Ltd. All rights reserved.
High-order fuzzy time-series based on multi-period adaptation model for forecasting stock markets
NASA Astrophysics Data System (ADS)
Chen, Tai-Liang; Cheng, Ching-Hsue; Teoh, Hia-Jong
2008-02-01
Stock investors usually make their short-term investment decisions according to recent stock information such as the latest market news, technical analysis reports, and price fluctuations. To reflect these short-term factors which impact stock price, this paper proposes a comprehensive fuzzy time-series model, which factors linear relationships between recent periods of stock prices and fuzzy logical relationships (nonlinear relationships) mined from the time series into the forecasting process. In the empirical analysis, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and HSI (Hang Seng Index) are employed as experimental datasets, and four recent fuzzy time-series models, Chen’s (1996), Yu’s (2005), Cheng’s (2006) and Chen’s (2007), are used as comparison models. In addition, for comparison with a conventional statistical method, the method of least squares is utilized to estimate auto-regressive models for the testing periods within the databases. The performance comparisons indicate that the multi-period adaptation model proposed in this paper can effectively improve the forecasting performance of conventional fuzzy time-series models, which only factor fuzzy logical relationships into the forecasting process. From the empirical study, the traditional statistical method and the proposed model both reveal that stock price patterns in the Taiwan and Hong Kong stock markets are short-term.
NASA Astrophysics Data System (ADS)
Corbineau, A.; Rouyer, T.; Fromentin, J.-M.; Cazelles, B.; Fonteneau, A.; Ménard, F.
2010-07-01
Catch data of large pelagic fish such as tuna, swordfish and billfish are highly variable on time scales ranging from short to long term. Being based on fisheries data, these time series are noisy and reflect mixed information on exploitation (targeting, strategy, fishing power), population dynamics (recruitment, growth, mortality, migration, etc.), and environmental forcing (local conditions or dominant climate patterns). In this work, we investigated patterns of variation of large pelagic fish (i.e. yellowfin tuna, bigeye tuna, swordfish and blue marlin) in Japanese longliner catch data from 1960 to 2004. We performed wavelet analyses on the yearly time series of each fish species in each biogeographic province of the tropical Indian and Atlantic Oceans. In addition, we carried out cross-wavelet analyses between these biological time series and a large-scale climatic index, i.e. the Southern Oscillation Index (SOI). Results showed that the biogeographic province was the most important factor structuring the patterns of variability of the Japanese catch time series. Relationships between the SOI and the fish catches in the Indian and Atlantic Oceans also pointed out the role of climatic variability in structuring patterns of variation of catch time series. This work finally confirmed that Japanese longline CPUE data poorly reflect the underlying population dynamics of tunas.
Crustal displacements due to continental water loading
Van Dam, T.; Wahr, J.; Milly, P.C.D.; Shmakin, A.B.; Blewitt, G.; Lavallee, D.; Larson, K.M.
2001-01-01
The effects of long-wavelength (> 100 km), seasonal variability in continental water storage on vertical crustal motions are assessed. The modeled vertical displacements (ΔrM) have root-mean-square (RMS) values for 1994-1998 as large as 8 mm, with ranges up to 30 mm, and are predominantly annual in character. Regional strains are on the order of 20 nanostrain for tilt and 5 nanostrain for horizontal deformation. We compare ΔrM with observed Global Positioning System (GPS) heights (ΔrO) (which include adjustments to remove estimated effects of atmospheric pressure and annual tidal and non-tidal ocean loading) for 147 globally distributed sites. When the ΔrO time series are adjusted by ΔrM, their variances are reduced, on average, by an amount equal to the variance of the ΔrM. Of the ΔrO time series exhibiting a strong annual signal, more than half are found to have an annual harmonic that is in phase and of comparable amplitude with the annual harmonic in the ΔrM. The ΔrM time series exhibit long-period variations that could be mistaken for secular tectonic trends or post-glacial rebound when observed over a time span of a few years.
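The variance-reduction comparison described above can be sketched as follows; the 4 mm annual signal and 2 mm noise level are illustrative assumptions. When the modeled signal and the residual are uncorrelated, the observed variance drops by roughly the variance of the model, which is the behavior the abstract reports:

```python
import numpy as np

def variance_reduction(obs, model):
    """Fractional variance reduction of an observed height series after
    subtracting a modeled loading signal."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    return (obs.var() - (obs - model).var()) / obs.var()

# illustrative: a 4 mm annual loading signal plus 2 mm observation noise
t = np.arange(0.0, 5.0, 1.0 / 365.0)          # five years, daily sampling
model = 4.0 * np.sin(2.0 * np.pi * t)         # modeled annual displacement, mm
obs = model + np.random.default_rng(2).normal(0.0, 2.0, len(t))
```

Here var(model) = 8 mm² and the noise variance is about 4 mm², so the expected reduction is about 8/12 ≈ 0.67.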
Statistical characteristics of surrogate data based on geophysical measurements
NASA Astrophysics Data System (ADS)
Venema, V.; Bachner, S.; Rust, H. W.; Simmer, C.
2006-09-01
In this study, the statistical properties of a range of measurements are compared with those of their surrogate time series. Seven different records are studied, amongst others, historical time series of mean daily temperature, daily rain sums and runoff from two rivers, and cloud measurements. Seven different algorithms are used to generate the surrogate time series. The best-known method is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm, which is able to reproduce the measured distribution as well as the power spectrum. Using this setup, the measurements and their surrogates are compared with respect to their power spectrum, increment distribution, structure functions, annual percentiles and return values. It is found that the surrogates that reproduce the power spectrum and the distribution of the measurements are able to closely match the increment distributions and the structure functions of the measurements, but this often does not hold for surrogates that only mimic the power spectrum of the measurement. However, even the best performing surrogates do not have asymmetric increment distributions, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found deviations of the structure functions on small scales.
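A minimal IAAFT sketch, alternating between imposing the target power spectrum and the target amplitude distribution (Schreiber-Schmitz style; the iteration count and seed are arbitrary choices):

```python
import numpy as np

def iaaft_surrogate(x, n_iter=100, seed=0):
    """Iterative amplitude-adjusted Fourier transform surrogate: reproduces
    both the power spectrum and the amplitude distribution of x."""
    x = np.asarray(x, dtype=float)
    target_amp = np.abs(np.fft.rfft(x))     # target spectrum amplitudes
    ranked = np.sort(x)                     # target value distribution
    s = np.random.default_rng(seed).permutation(x)
    for _ in range(n_iter):
        # step 1: impose the target spectrum, keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=len(x))
        # step 2: impose the target distribution by rank ordering
        s = ranked[np.argsort(np.argsort(s))]
    return s

x = np.cumsum(np.random.default_rng(3).normal(size=512))  # correlated test series
s = iaaft_surrogate(x)
```

Because the last step is a rank ordering, the surrogate's value distribution matches the measurement exactly, while the spectrum is matched approximately; this is why such surrogates reproduce increment distributions and structure functions well, yet cannot create time-asymmetric increments.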
NASA Astrophysics Data System (ADS)
Yu, Hongjuan; Guo, Jinyun; Kong, Qiaoli; Chen, Xiaodong
2018-04-01
The static observation data from a relative gravimeter contain noise and signals such as gravity tides. This paper focuses on the extraction of the gravity tides from the static relative gravimeter data for the first time applying the combined method of empirical mode decomposition (EMD) and independent component analysis (ICA), called the EMD-ICA method. The experimental results from the CG-5 gravimeter (SCINTREX Limited Ontario Canada) data show that the gravity tides time series derived by EMD-ICA are consistent with the theoretical reference (Longman formula) and the RMS of their differences only reaches 4.4 μGal. The time series of the gravity tides derived by EMD-ICA have a strong correlation with the theoretical time series and the correlation coefficient is greater than 0.997. The accuracy of the gravity tides estimated by EMD-ICA is comparable to the theoretical model and is slightly higher than that of independent component analysis (ICA). EMD-ICA could overcome the limitation of ICA having to process multiple observations and slightly improve the extraction accuracy and reliability of gravity tides from relative gravimeter data compared to that estimated with ICA.
Initial Validation of NDVI Time Series from AVHRR, VEGETATION, and MODIS
NASA Technical Reports Server (NTRS)
Morisette, Jeffrey T.; Pinzon, Jorge E.; Brown, Molly E.; Tucker, Jim; Justice, Christopher O.
2004-01-01
The paper will address Theme 7: Multi-sensor opportunities for VEGETATION. We present analysis of a long-term vegetation record derived from three moderate resolution sensors: AVHRR, VEGETATION, and MODIS. While empirically based manipulation can ensure agreement between the three data sets, there is a need to validate the series. This paper uses atmospherically corrected ETM+ data available over the EOS Land Validation Core Sites as an independent data set with which to compare the time series. We use ETM+ data from 15 globally distributed sites, 7 of which contain repeat coverage in time. These high-resolution data are compared to the values of each sensor by spatially aggregating the ETM+ to each specific sensor's spatial coverage. The aggregated ETM+ value provides a point estimate for a specific site on a specific date. The standard deviation of that point estimate is used to construct a confidence interval for that point estimate. The values from each moderate resolution sensor are then evaluated with respect to that confidence interval. Results show that AVHRR, VEGETATION, and MODIS data can be combined to assess temporal uncertainties and address data continuity issues, and that the atmospherically corrected ETM+ data provide an independent source with which to compare that record. The final product is a consistent time series climate record that links historical observations to current and future measurements.
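The aggregation-and-confidence-interval check can be sketched as follows; the pixel values and the 1.96 z-score are illustrative assumptions:

```python
import numpy as np

def within_confidence_interval(fine_pixels, sensor_value, z=1.96):
    """Aggregate fine-resolution pixel values to one footprint estimate and
    test whether a moderate-resolution sensor value lies inside the
    z-score confidence interval of that point estimate."""
    p = np.asarray(fine_pixels, dtype=float)
    mean = p.mean()
    sem = p.std(ddof=1) / np.sqrt(len(p))
    return mean - z * sem <= sensor_value <= mean + z * sem

# hypothetical fine-resolution NDVI pixels aggregated over one coarse footprint
pixels = 0.6 + 0.05 * np.sin(np.arange(400))
```

A sensor value that falls outside the interval for many sites and dates would flag a calibration or continuity problem in that sensor's record.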
NASA Astrophysics Data System (ADS)
Serov, Vladislav V.; Kheifets, A. S.
2014-12-01
We analyze a transfer ionization (TI) reaction in the fast proton-helium collision H⁺ + He → H⁰ + He²⁺ + e⁻ by solving a time-dependent Schrödinger equation (TDSE) under the classical projectile motion approximation in one-dimensional kinematics. In addition, we construct various time-independent analogs of our model using lowest-order perturbation theory in the form of the Born series. By comparing various aspects of the TDSE and the Born series calculations, we conclude that the recent discrepancies of experimental and theoretical data may be attributed to deficiency of the Born models used by other authors. We demonstrate that the correct Born series for TI should include the momentum-space overlap between the double-ionization amplitude and the wave function of the transferred electron.
Causality networks from multivariate time series and application to epilepsy.
Siggiridou, Elsa; Koutlis, Christos; Tsimpiris, Alkiviadis; Kimiskidis, Vasilios K; Kugiumtzis, Dimitris
2015-08-01
Granger causality and variants of this concept allow the study of complex dynamical systems as networks constructed from multivariate time series. In this work, a large number of Granger causality measures used to form causality networks from multivariate time series are assessed. For this, realizations on high dimensional coupled dynamical systems are considered and the performance of the Granger causality measures is evaluated, seeking for the measures that form networks closest to the true network of the dynamical system. In particular, the comparison focuses on Granger causality measures that reduce the state space dimension when many variables are observed. Further, the linear and nonlinear Granger causality measures of dimension reduction are compared to a standard Granger causality measure on electroencephalographic (EEG) recordings containing episodes of epileptiform discharges.
Series resonant converter with auxiliary winding turns: analysis, design and implementation
NASA Astrophysics Data System (ADS)
Lin, Bor-Ren
2018-05-01
Conventional series resonant converters have been researched and applied in high-efficiency power units owing to their low switching losses. Their main problems are wide frequency variation and high circulating current. Thus, the resonant converter is limited to a narrow input voltage range, and a large input capacitor is normally adopted in commercial power units to meet the minimum hold-up time requirement when AC power is off. To overcome these problems, a resonant converter with auxiliary secondary windings is presented in this paper to achieve high voltage gain in the low-input-voltage case, such as during the hold-up time when utility power is off. Since the high voltage gain is used in the low-input-voltage case, the frequency variation of the proposed converter is reduced compared to the conventional resonant converter. Compared to the conventional resonant converter, the hold-up time of the proposed converter is more than 40 ms. A larger magnetising inductance of the transformer is used to reduce the circulating current losses. Finally, a laboratory prototype is constructed and experiments are provided to verify the converter performance.
Plazas-Nossa, Leonardo; Torres, Andrés
2014-01-01
The objective of this work is to introduce a forecasting method for UV-Vis spectrometry time series that combines principal component analysis (PCA) and discrete Fourier transform (DFT), and to compare the results obtained with those obtained by using DFT. Three time series for three different study sites were used: (i) Salitre wastewater treatment plant (WWTP) in Bogotá; (ii) Gibraltar pumping station in Bogotá; and (iii) San Fernando WWTP in Itagüí (in the south part of Medellín). Each of these time series had an equal number of samples (1051). In general terms, the results obtained are hardly generalizable, as they seem to be highly dependent on specific water system dynamics; however, some trends can be outlined: (i) for UV range, DFT and PCA/DFT forecasting accuracy were almost the same; (ii) for visible range, the PCA/DFT forecasting procedure proposed gives systematically lower forecasting errors and variability than those obtained with the DFT procedure; and (iii) for short forecasting times the PCA/DFT procedure proposed is more suitable than the DFT procedure, according to processing times obtained.
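A DFT-only forecast of the kind used as the baseline here might be sketched as follows, keeping only the dominant harmonics and evaluating the truncated Fourier series beyond the observed window (the harmonic count is an assumption; the paper's PCA/DFT combination is not shown):

```python
import numpy as np

def dft_forecast(x, n_ahead, n_harmonics=10):
    """Extrapolate a series by keeping the n_harmonics largest-amplitude
    DFT components and evaluating the truncated Fourier series past the
    end of the observed window (assumes an even-length input)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    spec = np.fft.rfft(x)
    keep = np.argsort(np.abs(spec))[-n_harmonics:]   # dominant bins
    freqs = np.fft.rfftfreq(n)
    t = np.arange(n + n_ahead)
    out = np.zeros(n + n_ahead)
    for k in keep:
        scale = 1.0 if k in (0, n // 2) else 2.0     # real-FFT symmetry factor
        out += scale * (spec[k].real * np.cos(2 * np.pi * freqs[k] * t)
                        - spec[k].imag * np.sin(2 * np.pi * freqs[k] * t)) / n
    return out[n:]
```

The PCA/DFT variant described in the abstract would first project the spectrometry series onto its leading principal components, forecast each component score this way, and then reconstruct the spectra.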
Seasonal to multi-decadal trends in apparent optical properties in the Sargasso Sea
NASA Astrophysics Data System (ADS)
Allen, James G.; Nelson, Norman B.; Siegel, David A.
2017-01-01
Multi-decadal, monthly observations of optical and biogeochemical properties, made as part of the Bermuda Bio-Optics Project (BBOP) at the Bermuda Atlantic Time-series Study (BATS) site in the Sargasso Sea, allow for the examination of temporal trends in vertical light attenuation and their potential controls. Trends in the magnitude of the diffuse attenuation coefficient, Kd(λ), and a proxy for its spectral shape reflect changes in phytoplankton and chromophoric dissolved organic matter (CDOM) characteristics. The length and methodological consistency of this time series provide an excellent opportunity to extend analyses of seasonal cycles of apparent optical properties to interannual and decadal time scales. Here, we characterize changes in the magnitude and spectral shape proxy of diffuse attenuation coefficient spectra and compare them to available biological and optical data from the BATS time series program. The time series analyses reveal a 1.01%±0.18% annual increase of the magnitude of the diffuse attenuation coefficient at 443 nm over the upper 75 m of the water column while showing no significant change in selected spectral characteristics over the study period. These and other observations indicate that changes in phytoplankton rather than changes in CDOM abundance are the primary driver for the diffuse attenuation trends on multi-year timescales for this region. Our findings are inconsistent with previous decadal-scale global ocean water clarity and global satellite ocean color analyses yet are consistent with recent analyses of the BATS time series and highlight the value of long-term consistent observation at ocean time series sites.
Cross-recurrence quantification analysis of categorical and continuous time series: an R package
Coco, Moreno I.; Dale, Rick
2014-01-01
This paper describes the R package crqa to perform cross-recurrence quantification analysis of two time series of either a categorical or continuous nature. Streams of behavioral information, from eye movements to linguistic elements, unfold over time. When two people interact, such as in conversation, they often adapt to each other, leading these behavioral levels to exhibit recurrent states. In dialog, for example, interlocutors adapt to each other by exchanging interactive cues: smiles, nods, gestures, choice of words, and so on. In order for us to capture closely the goings-on of dynamic interaction, and uncover the extent of coupling between two individuals, we need to quantify how much recurrence is taking place at these levels. Methods available in crqa would allow researchers in cognitive science to pose such questions as how much are two people recurrent at some level of analysis, what is the characteristic lag time for one person to maximally match another, or whether one person is leading another. First, we set the theoretical ground to understand the difference between “correlation” and “co-visitation” when comparing two time series, using an aggregative or cross-recurrence approach. Then, we describe more formally the principles of cross-recurrence, and show with the current package how to carry out analyses applying them. We end the paper by comparing computational efficiency, and results’ consistency, of crqa R package, with the benchmark MATLAB toolbox crptoolbox (Marwan, 2013). We show perfect comparability between the two libraries on both levels. PMID:25018736
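The core cross-recurrence computation for continuous series can be sketched in a few lines (one-dimensional, fixed radius; the crqa package itself additionally supports embedding dimensions, delays, and categorical inputs):

```python
import numpy as np

def cross_recurrence(x, y, radius):
    """Cross-recurrence matrix for two continuous series:
    CR[i, j] = 1 when |x_i - y_j| <= radius."""
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[None, :]
    return (np.abs(x - y) <= radius).astype(int)

def recurrence_rate(cr):
    """Fraction of recurrent points, the simplest CRQA measure."""
    return float(cr.mean())

x = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
cr = cross_recurrence(x, x, radius=0.1)
```

Diagonal-wise recurrence profiles computed from such a matrix are what reveal the characteristic lag at which one speaker maximally matches another.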
Rapid on-site defibrillation versus community program.
Fedoruk, J C; Paterson, D; Hlynka, M; Fung, K Y; Gobet, Michael; Currie, Wayne
2002-01-01
For patients who suffer out-of-hospital cardiac arrest, the time from collapse to initial defibrillation is the single most important factor that affects survival to hospital discharge. The purpose of this study was to compare the survival rates of cardiac arrest victims within an institution that has a rapid defibrillation program with those of its own urban community, tiered EMS system. A logistic regression analysis of a retrospective data series (n = 23) and comparative analysis to a second retrospective data series (n = 724) were gathered for the study period September 1994 to September 1999. The first data series included all persons at Casino Windsor who suffered a cardiac arrest. Data collected included: age, gender, death/survival (neurologically intact discharge), presenting rhythm (ventricular fibrillation (VF), ventricular tachycardia (VT), or other), time of collapse, time to arrival of security personnel, time to initiation of cardiopulmonary resuscitation (CPR) prior to defibrillation (when applicable), time to arrival of staff nurse, time to initial defibrillation, and time to return of spontaneous circulation (if any). Significantly, all arrests within this series were witnessed by the surveillance camera systems, allowing time of collapse to be accurately determined rather than estimated. These data were compared to those of similar events, times, and intervals for all patients in the greater Windsor area who suffered cardiac arrest. This second series was based upon the Ontario Prehospital Advanced Life Support (OPALS) Study database, as coordinated by the Clinical Epidemiology Unit of the Ottawa Hospital, University of Ottawa. The Casino Windsor had 23 cases of cardiac arrest. Of the cases, 13 (56.5%) were male and 10 (43.5%) were female. All cases (100%) were witnessed. The average age was 61.1 years, the average time to initial defibrillation was 7.7 minutes, and the average time for EMS to reach the patient was 13.3 minutes.
The presenting rhythm was VF/VT in 91% of the cases. Fifteen patients were discharged alive from hospital, for a 65% survival rate. The Greater Windsor Study area included 668 cases of out-of-hospital cardiac arrest: of these, 410 (61.4%) were male and 258 (38.6%) were female; 365 (54.6%) were witnessed and 303 (45.4%) were not witnessed. The initial rhythm was VF/VT in 34.3%. Thirty-seven (5.5%) were discharged alive from the hospital. This study provides further evidence that PAD Programs may enhance cardiac arrest survival rates and should be considered for any venue with large numbers of adults as well as areas with difficult medical access.
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva
2014-08-01
To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva
2013-01-01
Purpose To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. Methods We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist’s segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist’s segmentation and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist’s segmentation and CADstream output, computed in terms of the DSC, was 0.77 and 0.72 respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175
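A drastically simplified, scalar version of per-voxel linear-dynamic-system fitting might look like this; the paper's actual LDS model and its parameters are richer than this sketch:

```python
import numpy as np

def lds_parameter_map(dce):
    """Fit s[t+1] = a*s[t] + b to each voxel's enhancement curve by least
    squares and return the map of 'a' (a scalar stand-in for the LDS state
    matrix; thresholding this map would give a crude segmentation)."""
    t_len, h, w = dce.shape
    a_map = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            s = dce[:, i, j]
            A = np.column_stack([s[:-1], np.ones(t_len - 1)])
            coef, *_ = np.linalg.lstsq(A, s[1:], rcond=None)
            a_map[i, j] = coef[0]
    return a_map
```

Because the fitted dynamics, not the raw signal intensities, drive the segmentation, this family of methods is comparatively insensitive to imaging noise, consistent with the DSC = 0.95 noise-robustness result reported above.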
A framework for periodic outlier pattern detection in time-series sequences.
Rasheed, Faraz; Alhajj, Reda
2014-05-01
Periodic pattern detection in time-ordered sequences is an important data mining task that discovers in a time series all patterns exhibiting temporal regularities. Periodic pattern mining has a large number of real-life applications; it helps in understanding regular trends in the data over time and enables the forecasting and prediction of future events. An interesting, related, and vital problem that has not received enough attention is discovering outlier periodic patterns in a time series. Outlier patterns are defined as those which differ from the rest of the patterns; outliers are not noise. While noise does not belong to the data and is mostly eliminated by preprocessing, outliers are actual instances in the data that have exceptional characteristics compared with the majority of other instances. Outliers are unusual patterns that rarely occur and thus have lower support (frequency of appearance) in the data. Outlier patterns may hint at discrepancies in the data such as fraudulent transactions, network intrusion, changes in customer behavior, recession in the economy, epidemic and disease biomarkers, severe weather conditions like tornadoes, etc. We argue that detecting the periodicity of outlier patterns may be more important in many sequences than the periodicity of regular, more frequent patterns. In this paper, we present a robust and time-efficient suffix-tree-based algorithm capable of detecting the periodicity of outlier patterns in a time series by giving more significance to less frequent yet periodic patterns. Several experiments have been conducted using both real and synthetic data; all aspects of the proposed approach are compared with the existing algorithm InfoMiner, and the reported results demonstrate the effectiveness and applicability of the proposed approach.
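The key notion of a pattern that is rare overall yet strongly periodic can be illustrated without the paper's suffix-tree machinery. This simplified sketch (single symbols only; the sequence, period, and thresholds are made-up) flags symbols whose global support is low but whose confidence at some phase of a candidate period is high:

```python
from collections import Counter

def periodic_confidence(seq, symbol, period, offset):
    """Fraction of positions at phase `offset` (mod `period`) holding `symbol`."""
    positions = list(range(offset, len(seq), period))
    hits = sum(1 for i in positions if seq[i] == symbol)
    return hits / len(positions)

def outlier_periodic_patterns(seq, period, min_conf=0.9, max_support=0.2):
    """Symbols that are rare overall (low support) yet strongly periodic."""
    n = len(seq)
    found = []
    for symbol, count in Counter(seq).items():
        if count / n > max_support:
            continue  # too frequent to count as an outlier pattern
        for offset in range(period):
            conf = periodic_confidence(seq, symbol, period, offset)
            if conf >= min_conf:
                found.append((symbol, offset, round(conf, 2)))
    return found

# 'z' has support only 0.2 but occurs at every 5th position
seq = list("zabab" "zbaba" "zabba" "zaabb")
print(outlier_periodic_patterns(seq, period=5))  # → [('z', 0, 1.0)]
```

The real algorithm searches over multi-symbol patterns and candidate periods efficiently via a suffix tree; this brute-force phase check only conveys the scoring idea of favoring rare yet periodic patterns.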
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.
1986-01-01
Single-channel pilot manual control output in closed tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time series generation technique. A Levinson-Durbin algorithm is used to determine the filter that prewhitens the input, and a projective (least squares) fit of pulse response estimates is used to guarantee identified-model stability. Results from two case studies are compared to previous findings, where the data consist of relatively short records, approximately 25 seconds long. Time delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time-domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
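The Levinson-Durbin recursion mentioned above solves the Yule-Walker equations for the prewhitening AR filter in O(p²) rather than O(p³). A minimal pure-Python sketch (the autocorrelation values in the example are illustrative):

```python
def levinson_durbin(r, order):
    """Solve the Yule-Walker equations via the Levinson-Durbin recursion.
    r: autocorrelation sequence r[0..order].
    Returns (AR polynomial coefficients a, final prediction-error variance)."""
    a = [1.0] + [0.0] * order
    err = r[0]
    for k in range(1, order + 1):
        # reflection coefficient from the current prediction error
        acc = sum(a[j] * r[k - j] for j in range(k))
        reflection = -acc / err
        a_new = a[:]
        for j in range(1, k):
            a_new[j] = a[j] + reflection * a[k - j]
        a_new[k] = reflection
        a = a_new
        err *= 1.0 - reflection * reflection
    return a, err

# AR(1) process with lag-1 autocorrelation 0.5: recover a = [1, -0.5]
a, err = levinson_durbin([1.0, 0.5], 1)
print(a, err)  # → [1.0, -0.5] 0.75
```

The resulting filter is then applied to the input series to prewhiten it before the projective fit of the pulse response estimates.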
NASA Astrophysics Data System (ADS)
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, the normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas the multilayer-perceptron-type neural network and a multiple imputation strategy based on expectation-maximization and Markov chain Monte Carlo (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Based on detailed graphical and quantitative analyses, although the computational methods, particularly the EM-MCMC method, are computationally expensive, they seem favorable for the imputation of meteorological time series with respect to different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm to impute missing values before conducting any statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for evaluating the performance of missing data imputation, particularly with computational methods, since it gives more precise results on meteorological time series.
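Of the simple methods compared, the normal ratio (NR) estimator fills a missing value at a target station with neighbour observations scaled by the ratio of long-term station means. A sketch with made-up precipitation figures:

```python
def normal_ratio_impute(neighbor_values, neighbor_means, target_mean):
    """Normal ratio (NR) estimate of a missing observation at a target station:
    each neighbour value is scaled by the ratio of long-term means, then averaged."""
    n = len(neighbor_values)
    return sum((target_mean / m) * v
               for v, m in zip(neighbor_values, neighbor_means)) / n

# Hypothetical monthly precipitation (mm) at three neighbour stations,
# with their long-term means and the target station's long-term mean
est = normal_ratio_impute([80.0, 120.0, 100.0], [100.0, 150.0, 125.0], 110.0)
print(round(est, 1))  # → 88.0
```

The "NR weighted with correlations" variant mentioned in the abstract replaces the uniform 1/n weights with weights derived from inter-station correlations.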
Ren, Meng; Li, Na; Wang, Zhan; Liu, Yisi; Chen, Xi; Chu, Yuanyuan; Li, Xiangyu; Zhu, Zhongmin; Tian, Liqiao; Xiang, Hao
2017-01-01
Few studies have compared different methods when exploring the short-term effects of air pollutants on respiratory disease mortality in Wuhan, China. This study assesses the association between air pollutants and respiratory disease mortality with both time-series and time-stratified case-crossover designs. The generalized additive model (GAM) and the conditional logistic regression model were used to assess the short-term effects of air pollutants on respiratory disease mortality. Stratified analyses were performed by age, sex, and disease. A 10 μg/m3 increment in SO2 level was associated with an increase in relative risk for all respiratory disease mortality of 2.4% and 1.9% in the case-crossover and time-series analyses in single-pollutant models, respectively. Strong evidence of an association between NO2 and daily respiratory disease mortality among men or people older than 65 years was found in the case-crossover study. There was a positive association between air pollutants and respiratory disease mortality in Wuhan, China. Both time-series and case-crossover analyses consistently reveal the association between the three air pollutants and respiratory disease mortality. The estimates of the association between air pollution and respiratory disease mortality from the case-crossover analysis displayed greater variation than those from the time-series analysis. PMID:28084399
Lutaif, N.A.; Palazzo, R.; Gontijo, J.A.R.
2014-01-01
Maintenance of thermal homeostasis in rats fed a high-fat diet (HFD) is associated with changes in their thermal balance. The thermodynamic relationship between heat dissipation and energy storage is altered by the ingestion of a high-energy diet. Observation of thermal registers of core temperature behavior, in humans and rodents, permits identification of characteristics of the time series, such as autoreference and stationarity, that fit adequately into a stochastic analysis. To identify this change, we used, for the first time, a stochastic autoregressive model, whose concepts match those of the physiological systems involved, applied to male HFD rats compared with age-matched male controls on a standard food intake (n=7 per group). By analyzing a recorded temperature time series, we were able to identify when thermal homeostasis was affected by the new diet. The autoregressive time series (AR) model was used to predict the occurrence of thermal homeostasis, and proved very effective in distinguishing this physiological disorder. We therefore infer from our results that maximum entropy distributions, as a means of stochastic characterization of temperature time series registers, may be established as an important and early tool to aid in the diagnosis and prevention of metabolic diseases, given their ability to detect small variations in the thermal profile. PMID:24519093
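The AR modelling step can be sketched with a least-squares fit of an AR(1) coefficient to a mean-centred series; the series below is synthetic with a known coefficient, not the rats' temperature data:

```python
import random

def fit_ar1(series):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + noise."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(v * v for v in series[:-1])
    return num / den

# Simulate a series with known phi = 0.8, then recover it from the data
random.seed(0)
x, series = 0.0, []
for _ in range(5000):
    x = 0.8 * x + random.gauss(0.0, 1.0)
    series.append(x)
print(abs(fit_ar1(series) - 0.8) < 0.05)  # the estimate lands close to 0.8
```

In the spirit of the abstract, a shift of the fitted coefficients relative to a baseline window of the temperature record would signal that thermal homeostasis has been affected.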
Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition
NASA Astrophysics Data System (ADS)
Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.
2005-12-01
Traditionally, forecasting and characterization of hydrologic systems are performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been extensively used. The difficulty common to all methods is determining the information and predictors sufficient and necessary for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series, combining the wavelet transform and a nonlinear model. The present model draws on the merits of both: the wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand, and prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that the wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This ultimately serves the purpose of integrated water resources planning and management.
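The decomposition step of such a hybrid can be illustrated with the simplest wavelet, the Haar transform: one level splits a series into pairwise averages (the approximation) and pairwise differences (the detail), each of which would then be modelled separately. The runoff numbers are made-up:

```python
def haar_step(signal):
    """One level of a Haar wavelet decomposition: pairwise averages
    (approximation) and pairwise differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

flow = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0]  # toy runoff values
approx, detail = haar_step(flow)
print(approx)  # → [5.0, 11.0, 7.0, 4.0]
print(detail)  # → [-1.0, -1.0, 1.0, 1.0]
```

Repeating the step on the approximation yields a multi-level decomposition; in the hybrid scheme, a nonlinear model is fitted to each mono-component signal and the forecasts are recombined.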
Huang, Xin; Zeng, Jun; Zhou, Lina; Hu, Chunxiu; Yin, Peiyuan; Lin, Xiaohui
2016-08-31
Time-series metabolomics studies can provide insight into the dynamics of disease development and facilitate the discovery of prospective biomarkers. To improve the performance of early risk identification, a new strategy for analyzing time-series data based on dynamic networks (ATSD-DN) in a systematic time dimension is proposed. In ATSD-DN, the non-overlapping ratio was applied to measure the changes in feature ratios during the process of disease development and to construct dynamic networks. Dynamic concentration analysis and network topological structure analysis were performed to extract early warning information. This strategy was applied to the study of time-series lipidomics data from a stepwise hepatocarcinogenesis rat model. A ratio of lyso-phosphatidylcholine (LPC) 18:1/free fatty acid (FFA) 20:5 was identified as the potential biomarker for hepatocellular carcinoma (HCC). It can be used to classify HCC and non-HCC rats, and the area under the curve values in the discovery and external validation sets were 0.980 and 0.972, respectively. This strategy was also compared with a weighted relative difference accumulation algorithm (wRDA), multivariate empirical Bayes statistics (MEBA) and support vector machine-recursive feature elimination (SVM-RFE). The better performance of ATSD-DN suggests its potential for a more complete presentation of time-series changes and effective extraction of early warning information.
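The reported area under the curve (AUC) can be computed directly from biomarker scores via the rank (Mann-Whitney) formulation; the ratio values below are hypothetical, not the study's measurements:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical biomarker ratios (LPC 18:1 / FFA 20:5) for HCC vs. non-HCC rats
hcc     = [2.1, 1.8, 2.5, 1.9]
non_hcc = [1.0, 1.2, 1.9, 0.8]
print(auc(hcc, non_hcc))  # → 0.90625
```

An AUC of 1.0 would mean the ratio perfectly separates the two groups, 0.5 is chance level; the study's 0.980 and 0.972 thus indicate near-perfect separation.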
Reibling, Nadine
2013-09-01
This paper outlines the capabilities of pooled cross-sectional time series methodology for the international comparison of health system performance in population health. It shows how common model specifications can be improved so that they not only better address the specific nature of time series data on population health but are also more closely aligned with our theoretical expectations of the effect of healthcare systems. Three methodological innovations for this field of applied research are discussed: (1) how dynamic models help us understand the timing of effects, (2) how parameter heterogeneity can be used to compare performance across countries, and (3) how multiple imputation can be used to deal with incomplete data. We illustrate these methodological strategies with an analysis of infant mortality rates in 21 OECD countries between 1960 and 2008 using OECD Health Data. Copyright © 2013 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
Functional clustering of time series gene expression data by Granger causality
2012-01-01
Background: A common approach for time series gene expression data analysis includes the clustering of genes with similar expression patterns throughout time. Clustered gene expression profiles point to the joint contribution of groups of genes to a particular cellular process. However, since genes belong to intricate networks, other features, besides comparable expression patterns, should provide additional information for the identification of functionally similar genes. Results: In this study we perform gene clustering through the identification of Granger causality between and within sets of time series gene expression data. Granger causality is based on the idea that the cause of an event cannot come after its consequence. Conclusions: This kind of analysis can be used as a complementary approach for functional clustering, wherein genes would be clustered not solely based on their expression similarity but on their topological proximity built according to the intensity of Granger causality among them. PMID:23107425
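Pairwise Granger causality can be sketched as a comparison of two regressions: if adding x's past to y's own past substantially reduces the residual sum of squares, x "Granger-causes" y. A minimal illustration with a synthetic white-noise driver (no F-test or lag selection, which a full analysis would use):

```python
import random

def ols_rss(X, y):
    """Residual sum of squares from ordinary least squares (normal equations
    solved by Gaussian elimination; fine for a couple of predictors)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * ai for a, ai in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return sum((yi - sum(bj * xj for bj, xj in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

def granger_improvement(x, y):
    """Relative RSS reduction when lagged x is added to an AR(1) model of y."""
    X_r = [[1.0, y[t - 1]] for t in range(1, len(y))]
    X_u = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]
    target = y[1:]
    return 1.0 - ols_rss(X_u, target) / ols_rss(X_r, target)

# x drives y with a one-step lag; the reverse direction should score near zero
random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(500)]
y = [0.0] + [0.9 * x[t - 1] + 0.1 * random.gauss(0.0, 1.0) for t in range(1, 500)]
print(granger_improvement(x, y) > 0.5)   # x's past strongly predicts y
print(granger_improvement(y, x) < 0.1)   # y's past barely predicts x
```

For gene expression clustering, such pairwise scores between expression profiles define the topological proximity on which genes are grouped.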
Root System Water Consumption Pattern Identification on Time Series Data
Figueroa, Manuel; Pope, Christopher
2017-01-01
In agriculture, soil and meteorological sensors are used along low-power networks to capture data, which allows for optimal resource usage and minimizes environmental impact. This study uses time series analysis methods for outlier detection and pattern recognition on soil moisture sensor data to identify irrigation and consumption patterns and to improve a soil moisture prediction and irrigation system. This study compares three new algorithms with the current detection technique in the project; the results greatly decrease the number of false positives detected. The best result is obtained by the Series Strings Comparison (SSC) algorithm, averaging a precision of 0.872 on the testing sets, vastly improving on the current system’s 0.348 precision. PMID:28621739
Nonlinear Dynamics of River Runoff Elucidated by Horizontal Visibility Graphs
NASA Astrophysics Data System (ADS)
Lange, Holger; Rosso, Osvaldo A.
2017-04-01
We investigate a set of long-term river runoff time series at daily resolution from Brazil, monitored by the Agencia Nacional de Aguas. A total of 150 time series was obtained, with an average length of 65 years. Both long-term trends and human influence (water management, e.g. for power production) on the dynamical behaviour are analyzed. We use Horizontal Visibility Graphs (HVGs) to determine the individual temporal networks for the time series, and extract their degree and their distance (shortest path length) distributions. Statistical and information-theoretic properties of these distributions are calculated: robust estimators of skewness and kurtosis, the maximum degree occurring in the time series, the Shannon entropy, permutation complexity and Fisher information. For the latter, we also compare the information measures obtained from the degree distributions to those computed from the original time series directly, to investigate the impact of graph construction on the dynamical properties as reflected in these measures. Focus is, on the one hand, on universal properties of the HVG common to all runoff series and, on the other, on site-specific aspects. Results demonstrate that the assumption of power law behaviour for the degree distribution does not generally hold, and that management has a significant impact on this distribution. We also show that a pretreatment of the time series conventional in hydrology, the elimination of seasonality by a separate z-transformation for each calendar day, is highly detrimental to the nonlinear behaviour: it changes long-term correlations and shifts the overall dynamics towards more random behaviour. Analysis based on the transformed data easily leads to spurious results and bears a high risk of misinterpretation.
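The horizontal visibility graph construction itself is compact: two time points are linked when every value strictly between them lies below both endpoints. A brute-force sketch returning the degree sequence (the toy series is illustrative; efficient stack-based constructions exist for long series):

```python
def hvg_degrees(series):
    """Degree of each node in the horizontal visibility graph: points i < j
    are linked when every intermediate value lies strictly below both."""
    n = len(series)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg

print(hvg_degrees([3.0, 1.0, 2.0, 4.0]))  # → [3, 2, 3, 2]
```

Degree and shortest-path distributions extracted from such graphs feed the entropy, complexity, and Fisher information measures discussed in the abstract.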
Variations in atmospheric angular momentum and the length of day
NASA Technical Reports Server (NTRS)
Rosen, R. D.; Salstein, D. A.
1982-01-01
Six years of twice-daily global analyses were used to create and study a lengthy time series of high-temporal-resolution angular momentum values. Changes in these atmospheric values were compared to independently determined changes in the rotation rate of the solid Earth. Finally, the atmospheric data were examined in more detail to determine the time and space scales on which variations in momentum occur within the atmosphere and which regions contribute most to the changes found in the global integral. The data and techniques used to derive the time series of momentum values are described.
NASA Astrophysics Data System (ADS)
Genty, Dominique; Massault, Marc
1999-05-01
Twenty-two AMS 14C measurements have been made on a modern stalagmite from SW France in order to reconstruct the 14C activity history of the calcite deposit. Annual growth laminae provide a chronology up to 1919 A.D. Results show that the stalagmite 14C activity time series is sensitive to modern atmospheric 14C activity changes such as those produced by the nuclear weapon tests. The comparison between the two 14C time series shows that the stalagmite time series is damped: its amplitude variation between pre-bomb and post-bomb values is 75% less, and the time delay between the two time series peaks is 16 ± 3 years. A model is developed using atmospheric 14C and 13C data, fractionation processes, and three soil organic matter (SOM) components whose mean turnover rates differ. The linear correlation coefficient between modeled and measured activities is 0.99. These results, combined with two other published stalagmite 14C time series and compared with local vegetation and climate, demonstrate that most of the carbon transfer dynamics are controlled in the soil by SOM degradation rates. Where vegetation produces debris whose degradation is slow, the fraction of old carbon injected into the system increases, the observed 14C time series is much more damped, and the lag time is longer than that observed under grassland sites. The same mixing model applied to 13C shows good agreement (R2 = 0.78) between modeled and measured stalagmite δ13C and demonstrates that the Suess effect, due to fossil fuel combustion in the atmosphere, is recorded in the stalagmite but damped by the SOM degradation rate. The different sources of dead carbon in the seepage water are calculated and discussed.
Exploratory Causal Analysis in Bivariate Time Series Data
NASA Astrophysics Data System (ADS)
McCracken, James M.
Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, so data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools, including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between ''voltage drives current'' and ''current drives voltage'' as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools.
A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data sets, but little research exists on how these tools compare to each other in practice. This work introduces and defines exploratory causal analysis (ECA) to address this issue, along with the concept of data causality in the taxonomy of causal studies introduced here. The motivation is to provide a framework for exploring potential causal structures in time series data sets. ECA is applied to several synthetic and empirical data sets, and it is found that all of the tested time series causality tools agree with each other (and with intuitive notions of causality) for many simple systems, but can provide conflicting causal inferences for more complicated systems. It is proposed that such disagreements between different time series causality tools during ECA might provide deeper insight into the data than could be found otherwise.
Swartz, R. Andrew
2013-01-01
This paper investigates the time series representation methods and similarity measures for sensor data feature extraction and structural damage pattern recognition. Both model-based time series representation and dimensionality reduction methods are studied to compare the effectiveness of feature extraction for damage pattern recognition. The evaluation of feature extraction methods is performed by examining the separation of feature vectors among different damage patterns and the pattern recognition success rate. In addition, the impact of similarity measures on the pattern recognition success rate and the metrics for damage localization are also investigated. The test data used in this study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case datasets and damage test data with different damage modalities are used. The simulation results show that both time series representation methods and similarity measures have significant impact on the pattern recognition success rate. PMID:24191136
A coupled stochastic rainfall-evapotranspiration model for hydrological impact analysis
NASA Astrophysics Data System (ADS)
Pham, Minh Tu; Vernieuwe, Hilde; De Baets, Bernard; Verhoest, Niko E. C.
2018-02-01
A hydrological impact analysis concerns the study of the consequences of certain scenarios on one or more variables or fluxes in the hydrological cycle. In such an exercise, discharge is often considered, as floods originating from extremely high discharges often cause damage. Investigating the impact of extreme discharges generally requires long time series of precipitation and evapotranspiration to be used to force a rainfall-runoff model. However, such kinds of data may not be available and one should resort to stochastically generated time series, even though the impact of using such data on the overall discharge, and especially on the extreme discharge events, is not well studied. In this paper, stochastically generated rainfall and corresponding evapotranspiration time series, generated by means of vine copulas, are used to force a simple conceptual hydrological model. The results obtained are comparable to the modelled discharge using observed forcing data. Yet, uncertainties in the modelled discharge increase with an increasing number of stochastically generated time series used. Notwithstanding this finding, it can be concluded that using a coupled stochastic rainfall-evapotranspiration model has great potential for hydrological impact analysis.
Zhang, Hong; Zhang, Sheng; Wang, Ping; Qin, Yuzhe; Wang, Huifeng
2017-07-01
Particulate matter with aerodynamic diameter below 10 μm (PM10) is difficult to forecast because of uncertainties in describing the emission and meteorological fields. This paper proposes a wavelet-ARMA/ARIMA model to forecast short-term series of PM10 concentrations. It was evaluated by experiments using a 10-year data set of daily PM10 concentrations from 4 stations located in Taiyuan, China. The results indicated the following: (1) PM10 concentrations in Taiyuan had a decreasing trend from 2005 to 2012 but increased in 2013, with an obvious seasonal fluctuation related to coal-fired heating in winter and early spring. (2) Spatial differences among the four stations showed that PM10 concentrations in industrial and heavily trafficked areas were higher than those in residential and suburban areas. (3) Wavelet analysis revealed that the trend variation and the changes of the PM10 concentration of Taiyuan were complicated; wavelet analysis can filter noisy signals and identify the variation trend and the fluctuation of the PM10 time-series data, and wavelet decomposition and reconstruction reduce the nonstationarity of the data, thus improving the accuracy of the prediction. (4) The proposed wavelet-ARMA/ARIMA model could be efficiently and successfully applied to PM10 forecasting: compared with the traditional ARMA/ARIMA methods, it effectively reduced the forecasting error, improved the prediction accuracy, and realized multiple-time-scale prediction.
Improvements to surrogate data methods for nonstationary time series.
Lucio, J H; Valdés, R; Rodríguez, L R
2012-05-01
The method of surrogate data has been extensively applied to hypothesis testing of system linearity, when only one realization of the system, a time series, is known. Normally, surrogate data should preserve the linear stochastic structure and the amplitude distribution of the original series. Classical surrogate data methods (such as random permutation, amplitude adjusted Fourier transform, or iterative amplitude adjusted Fourier transform) are successful at preserving one or both of these features in stationary cases. However, they always produce stationary surrogates, hence existing nonstationarity could be interpreted as dynamic nonlinearity. Certain modifications have been proposed that additionally preserve some nonstationarity, at the expense of reproducing a great deal of nonlinearity. However, even those methods generally fail to preserve the trend (i.e., global nonstationarity in the mean) of the original series. This is the case of time series with unit roots in their autoregressive structure. Additionally, those methods, based on Fourier transform, either need first and last values in the original series to match, or they need to select a piece of the original series with matching ends. These conditions are often inapplicable and the resulting surrogates are adversely affected by the well-known artefact problem. In this study, we propose a simple technique that, applied within existing Fourier-transform-based methods, generates surrogate data that jointly preserve the aforementioned characteristics of the original series, including (even strong) trends. Moreover, our technique avoids the negative effects of end mismatch. Several artificial and real, stationary and nonstationary, linear and nonlinear time series are examined, in order to demonstrate the advantages of the methods. Corresponding surrogate data are produced with the classical and with the proposed methods, and the results are compared.
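For reference, the classical amplitude adjusted Fourier transform (AAFT) surrogate that the paper takes as its starting point can be sketched as follows. This is the standard construction, not the authors' proposed trend-preserving modification:

```python
import numpy as np

def aaft_surrogate(x, seed=None):
    """Amplitude-adjusted Fourier transform (AAFT) surrogate of a 1-D series."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.size
    # 1. Rank-remap sorted Gaussian noise onto the original series
    gauss = np.sort(rng.normal(size=n))
    ranks = np.argsort(np.argsort(x))
    y = gauss[ranks]
    # 2. Randomize Fourier phases of the Gaussianized series
    spec = np.fft.rfft(y)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = 0.0  # leave the mean component untouched
    y_rand = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)
    # 3. Map back to the original amplitude distribution by rank
    return np.sort(x)[np.argsort(np.argsort(y_rand))]

rng = np.random.default_rng(42)
series = np.cumsum(rng.normal(size=512))  # a nonstationary random walk
surr = aaft_surrogate(series, seed=1)
```

By construction the surrogate is a permutation of the original values, so the amplitude distribution is preserved exactly, while the phase randomization destroys any nonlinear structure; as the abstract notes, it also destroys trends such as the random walk above, which motivates the authors' improvement.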
Using time series structural characteristics to analyze grain prices in food insecure countries
Davenport, Frank; Funk, Chris
2015-01-01
Two components of food security monitoring are accurate forecasts of local grain prices and the ability to identify unusual price behavior. We evaluated a method that can both facilitate forecasts of cross-country grain price data and identify dissimilarities in price behavior across multiple markets. This method, characteristic-based clustering (CBC), identifies similarities among multiple time series based on structural characteristics of the data. Here, we conducted a simulation experiment to determine whether CBC can be used to improve the accuracy of maize price forecasts. We then compared forecast accuracies among clustered and non-clustered price series over a rolling time horizon. We found that the accuracy of forecasts on clusters of time series was equal to or worse than that of forecasts based on individual time series. However, in a follow-up experiment we found that CBC was still useful for price analysis. We used the clusters to explore the similarity of price behavior among Kenyan maize markets. We found that price behavior in the isolated markets of Mandera and Marsabit has become increasingly dissimilar from that in markets in other Kenyan cities, and that these dissimilarities could not be explained solely by geographic distance. The structural isolation of Mandera and Marsabit that we find in this paper is supported by field studies on food security and market integration in Kenya. Our results suggest that a market with a unique price series (as measured by structural characteristics that differ from those of neighboring markets) may lack market integration and food security.
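The characteristic-based idea can be sketched minimally: reduce each series to a small vector of structural features, then compare series by distance in feature space. The features and market series below are simplified illustrations, not the paper's actual feature set or data.

```python
import numpy as np

def series_features(x):
    """Reduce a price series to structural characteristics:
    linear trend slope, lag-1 autocorrelation, coefficient of variation."""
    x = np.asarray(x, dtype=float)
    t = np.arange(x.size)
    slope = np.polyfit(t, x, 1)[0]
    d = x - x.mean()
    acf1 = np.dot(d[1:], d[:-1]) / np.dot(d, d)
    cv = x.std() / x.mean()
    return np.array([slope, acf1, cv])

def feature_distance(a, b):
    """Euclidean distance between the feature vectors of two series."""
    return float(np.linalg.norm(series_features(a) - series_features(b)))

# Invented stand-ins for maize price series in three markets
rng = np.random.default_rng(3)
market_a = 100.0 + np.cumsum(rng.normal(0.1, 1.0, 200))
market_b = 100.0 + np.cumsum(rng.normal(0.1, 1.0, 200))
market_c = 100.0 + 30.0 * np.sin(np.arange(200) / 10.0) + rng.normal(0, 1.0, 200)

d_ab = feature_distance(market_a, market_b)
d_ac = feature_distance(market_a, market_c)
```

A market whose feature vector sits far from its neighbors' (a large pairwise distance) is the kind of structurally dissimilar series the paper flags for Mandera and Marsabit; in practice the features would be z-scored before clustering.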
Dionne-Odom, Jodie; Westfall, Andrew O; Nzuobontane, Divine; Vinikoor, Michael J; Halle-Ekane, Gregory; Welty, Thomas; Tita, Alan T N
2018-01-01
Although most African countries offer hepatitis B immunization through a 3-dose vaccine series recommended at 6, 10 and 14 weeks of age, very few provide birth dose vaccination. In support of Cameroon's national plan to implement the birth dose vaccine in 2017, we investigated predictors of infant hepatitis B virus (HBV) vaccination under the current program. Using the 2011 Demographic Health Survey in Cameroon, we identified women with at least one living child (age 12-60 months) and information about the hepatitis B vaccine series. Vaccination rates were calculated, and logistic regression modeling was used to identify factors associated with 3-dose series completion. Changes over time were assessed with a linear logistic model. Among 4594 mothers analyzed, 66.7% (95% confidence interval [CI]: 64.1-69.3) of infants completed the hepatitis B vaccine series; however, an average 4-week delay in series initiation was noted, with median dose timing at 10, 14 and 19 weeks of age. Predictors of series completion included facility delivery (adjusted odds ratio [aOR]: 2.1; 95% CI: 1.7-2.6), household wealth (aOR: 1.9; 95% CI: 1.2-3.1 comparing the highest and lowest quintiles), Christian religion (aOR: 1.8; 95% CI: 1.3-2.5 compared with Muslim religion) and older maternal age (aOR: 1.4; 95% CI: 1.2-1.7 per 10-year increase). Birth dose vaccination to reduce vertical and early childhood transmission of hepatitis B may overcome some of the obstacles to timely and complete HBV immunization in Cameroon. Pregnant women and high-risk groups need increased awareness of vertical HBV transmission, the importance of facility delivery, and the effectiveness of prevention beginning with monovalent HBV vaccination at birth.
NASA Astrophysics Data System (ADS)
Allen, D. M.; Henry, C.; Demon, H.; Kirste, D. M.; Huang, J.
2011-12-01
Sustainable management of groundwater resources, particularly in water-stressed regions, requires estimates of groundwater recharge. This study in southern Mali, Africa, compares approaches for estimating groundwater recharge and understanding recharge processes using a variety of methods encompassing groundwater level-climate data analysis, GRACE satellite data analysis, and recharge modelling for current and future climate conditions. Time series data for GRACE (2002-2006) and observed groundwater level data (1982-2001) do not overlap. To overcome this problem, GRACE time series data were appended to the observed historical time series data, and the records compared. Terrestrial water storage anomalies from GRACE were corrected for soil moisture (SM) using the Global Land Data Assimilation System (GLDAS) to obtain monthly groundwater storage anomalies (GRACE-SM) and monthly recharge estimates. Historical groundwater storage anomalies and recharge were determined with the water table fluctuation method using observation data from 15 wells. Historical annual recharge averaged 145.0 mm (or 15.9% of annual rainfall) and compared favourably with the GRACE-SM estimate of 149.7 mm (or 14.8% of annual rainfall). Both records show a low in May; however, whereas the historical record peaks in September, the peak in the GRACE-SM data is shifted later in the year, to November, suggesting that GLDAS may poorly predict the timing of soil water storage in this region. Recharge simulation results show good agreement between the timing and magnitude of the mean monthly simulated recharge and the regional mean monthly storage anomaly hydrograph generated from all monitoring wells. Under future climate conditions, annual recharge is projected to decrease by 8% for areas with luvisols and by 11% for areas with nitosols. Given this potential reduction in groundwater recharge, there may be added stress placed on an already stressed resource.
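The water table fluctuation method used for the historical recharge estimates can be sketched as follows, assuming a known specific yield; the head values are invented for illustration.

```python
# Water table fluctuation (WTF) method: recharge ~ Sy * (head rise),
# summed over the rising limbs of the well hydrograph.

def wtf_recharge(heads_mm, specific_yield):
    """Sum specific_yield * head rise over every rising interval."""
    recharge = 0.0
    for h_prev, h_next in zip(heads_mm, heads_mm[1:]):
        rise = h_next - h_prev
        if rise > 0:
            recharge += specific_yield * rise
    return recharge

# Hypothetical monthly water-table heights (mm above a local datum)
heads = [0, 50, 120, 90, 60, 200, 500, 900, 700, 400, 200, 100]
annual_recharge = wtf_recharge(heads, specific_yield=0.15)  # -> 144.0 mm
```

The method only credits rises (recessions are assumed to reflect drainage, not negative recharge), and its output scales directly with the assumed specific yield, which is the dominant source of uncertainty in practice.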
Oleson, Jacob J; Cavanaugh, Joseph E; McMurray, Bob; Brown, Grant
2015-01-01
In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses. PMID:26400088
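The pointwise-testing idea can be sketched as follows. This toy uses a plain Bonferroni correction rather than the serial-correlation-aware adjustment the authors develop, and the sigmoid "fixation curves" and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n = 100, 30                         # time points, subjects per group
t = np.linspace(0.0, 1.0, T)
# Two groups whose population curves are shifted sigmoids plus noise
group_a = 1.0 / (1.0 + np.exp(-10.0 * (t - 0.4))) + rng.normal(0, 0.05, (n, T))
group_b = 1.0 / (1.0 + np.exp(-10.0 * (t - 0.6))) + rng.normal(0, 0.05, (n, T))

# Pointwise two-sample z statistics across the T time points
diff = group_a.mean(axis=0) - group_b.mean(axis=0)
se = np.sqrt(group_a.var(axis=0, ddof=1) / n + group_b.var(axis=0, ddof=1) / n)
z = diff / se

alpha = 0.05 / T   # Bonferroni: overall 5% level split across T tests
crit = 3.5         # approx. two-sided normal quantile for alpha = 5e-4
significant = np.abs(z) > crit
```

Bonferroni is conservative when neighboring tests are strongly serially correlated, which is exactly the situation the paper addresses with a less wasteful alpha adjustment.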
Jehu, Deborah A; Lajoie, Yves; Paquet, Nicole
2017-12-21
The purpose of this study was to investigate obstacle clearance and reaction time parameters when crossing a series of six obstacles in older adults. A second aim was to examine the repeated exposure of this testing protocol once per week for 5 weeks. In total, 10 older adults (five females; age: 67.0 ± 6.9 years) walked onto and over six obstacles of varying heights (range: 100-200 mm) while completing no reaction time, simple reaction time, and choice reaction time tasks once per week for 5 weeks. The highest obstacles elicited the lowest toe clearance, and the first three obstacles revealed smaller heel clearance compared with the last three obstacles. Dual tasking negatively impacted obstacle clearance parameters when information processing demands were high. Longer and less consistent time to completion was observed in Session 1 compared with Sessions 2-5. Finally, improvements in simple reaction time were displayed after Session 2, but choice reaction time gradually improved and did not reach a plateau after repeated testing.
Development of web tools to disseminate space geodesy data-related products
NASA Astrophysics Data System (ADS)
Soudarin, L.; Ferrage, P.; Mezerette, A.
2014-12-01
In order to promote the products of the DORIS system, the French Space Agency CNES has developed and implemented on the web site of the International DORIS Service (IDS) a set of plot tools to interactively build and display time series of site positions, orbit residuals and terrestrial parameters (scale, geocenter). An interactive global map is also available to select sites and to access their information. Besides the products provided by the CNES Orbitography Team and the IDS components, these tools allow comparison of the time evolution of coordinates for collocated DORIS and GNSS stations, thanks to a collaboration with the Terrestrial Frame Combination Center of the International GNSS Service (IGS). The next step, currently in progress, is the creation of a database to improve the robustness and efficiency of the tools, with the objective of offering a complete web service that fosters data exchange with the other geodetic services of the International Association of Geodesy (IAG). The possibility of visualizing and comparing position time series of the four main space geodetic techniques (DORIS, GNSS, SLR and VLBI) is already under way at the French level: a dedicated version of these web tools has been developed for the French Space Geodesy Research Group (GRGS). It will give access to position time series provided by the GRGS Analysis Centers involved in DORIS, GNSS, SLR and VLBI data processing for the realization of the International Terrestrial Reference Frame. In this presentation, we describe the functionalities of these tools and address some aspects of the time series (content, format).
Johnson, Timothy C.; Slater, Lee D.; Ntarlagiannis, Dimitris; Day-Lewis, Frederick D.; Elwaseif, Mehrez
2012-01-01
Time-lapse resistivity imaging is increasingly used to monitor hydrologic processes. Compared to conventional hydrologic measurements, surface time-lapse resistivity provides superior spatial coverage in two or three dimensions, potentially high-resolution information in time, and information in the absence of wells. However, interpretation of time-lapse electrical tomograms is complicated by the ever-increasing size and complexity of long-term, three-dimensional (3-D) time-series conductivity data sets. Here we use 3-D surface time-lapse electrical imaging to monitor subsurface electrical conductivity variations associated with stage-driven groundwater-surface water interactions along a stretch of the Columbia River adjacent to the Hanford 300 Area near Richland, Washington, USA. We reduce the resulting 3-D conductivity time series using both time-series and time-frequency analyses to isolate a paleochannel causing enhanced groundwater-surface water interactions. Correlation analysis on the time-lapse imaging results concisely represents enhanced groundwater-surface water interactions within the paleochannel, and provides information concerning groundwater flow velocities. Time-frequency analysis using the Stockwell (S) transform provides additional information by identifying the stage periodicities driving groundwater-surface water interactions due to upstream dam operations, and identifying segments in time-frequency space when these interactions are most active. These results provide new insight into the distribution and timing of river water intrusion into the Hanford 300 Area, which has a governing influence on the behavior of a uranium plume left over from historical nuclear fuel processing operations.
Resuspension of ash after the 2014 phreatic eruption at Ontake volcano, Japan
NASA Astrophysics Data System (ADS)
Miwa, Takahiro; Nagai, Masashi; Kawaguchi, Ryohei
2018-02-01
We determined the resuspension process of an ash deposit after the phreatic eruption of September 27th, 2014 at Ontake volcano, Japan, by analyzing the time series data of particle concentrations obtained using an optical particle counter and the characteristics of an ash sample. The time series of particle concentration was obtained by an optical particle counter installed 11 km from the volcano from September 21st to October 19th, 2014. The time series contains counts of dust particles (ash and soil), pollen, and water drops, and was corrected to calculate the concentration of dust particles based on a polarization factor reflecting the optical anisotropy of particles. The dust concentration was compared with the time series of wind velocity. The dust concentration was high and the correlation coefficient with wind velocity was positive from September 28th to October 2nd. Grain-size analysis of an ash sample confirmed that the ash deposit contains abundant very fine particles (< 30 μm). Simple theoretical calculations revealed that the daily peaks of the moderate wind (a few m/s at 10 m above the ground surface) were comparable with the threshold wind velocity for resuspension of an unconsolidated deposit with a wide range of particle densities. These results demonstrate that moderate wind drove the resuspension of an ash deposit containing abundant fine particles produced by the phreatic eruption.
Mayaud, C; Wagner, T; Benischke, R; Birk, S
2014-04-16
The Lurbach karst system (Styria, Austria) is drained by two major springs and replenished by both autogenic recharge from the karst massif itself and a sinking stream that originates in low permeable schists (allogenic recharge). Detailed data from two events recorded during a tracer experiment in 2008 demonstrate that an overflow from one of the sub-catchments to the other is activated if the discharge of the main spring exceeds a certain threshold. Time series analysis (autocorrelation and cross-correlation) was applied to examine to what extent the various available methods support the identification of the transient inter-catchment flow observed in this binary karst system. As inter-catchment flow is found to be intermittent, the evaluation was focused on single events. In order to support the interpretation of the results from the time series analysis a simplified groundwater flow model was built using MODFLOW. The groundwater model is based on the current conceptual understanding of the karst system and represents a synthetic karst aquifer for which the same methods were applied. Using the wetting capability package of MODFLOW, the model simulated an overflow similar to what has been observed during the tracer experiment. Various intensities of allogenic recharge were employed to generate synthetic discharge data for the time series analysis. In addition, geometric and hydraulic properties of the karst system were varied in several model scenarios. This approach helps to identify effects of allogenic recharge and aquifer properties in the results from the time series analysis. Comparing the results from the time series analysis of the observed data with those of the synthetic data a good agreement was found. For instance, the cross-correlograms show similar patterns with respect to time lags and maximum cross-correlation coefficients if appropriate hydraulic parameters are assigned to the groundwater model. 
The comparable behaviors of the real and the synthetic system allow us to deduce that similar aquifer properties are relevant in both systems. In particular, the heterogeneity of aquifer parameters appears to be a controlling factor. Moreover, the location of the overflow connecting the sub-catchments of the two springs is of primary importance for the occurrence of inter-catchment flow. This further supports our current understanding of an overflow zone located in the upper part of the Lurbach karst aquifer. Thus, time series analysis of single events can potentially be used to characterize the transient inter-catchment flow behavior of karst systems.
Application of dynamic topic models to toxicogenomics data.
Lee, Mikyung; Liu, Zhichao; Huang, Ruili; Tong, Weida
2016-10-06
All biological processes are inherently dynamic. Biological systems evolve transiently or sustainably through sequential time points after perturbation by environmental insults, drugs and chemicals. Investigating the temporal behavior of molecular events is important for understanding the mechanisms by which a biological system responds to, for example, drug treatment. The intrinsic complexity of time series data requires appropriate computational algorithms for data interpretation. In this study, we propose, for the first time, the application of dynamic topic models (DTM) for analyzing time-series gene expression data. A large time-series toxicogenomics dataset was studied: over 3144 microarrays of gene expression data corresponding to rat livers treated with 131 compounds (most are drugs) at two doses (control and high dose) on a repeated schedule with four separate time points (4, 8, 15 and 29 days). We analyzed, with DTM, the topics (each consisting of a set of genes) and their biological interpretations over these four time points, and identified hidden patterns embedded in the time-series gene expression profiles. From the topic distribution for each compound-time condition, a number of drugs were successfully clustered by their shared mode of action, such as PPARα agonists and COX inhibitors. The biological meaning underlying each topic was interpreted using diverse sources of information, such as functional analysis of the pathways and therapeutic uses of the drugs. Additionally, we found that sample clusters produced by DTM are much more coherent in terms of functional categories than those from traditional clustering algorithms. We demonstrated that DTM, a text-mining technique, can be a powerful computational approach for clustering time-series gene expression profiles with a probabilistic representation of their dynamic features along sequential time frames. 
The method offers an alternative way for uncovering hidden patterns embedded in time series gene expression profiles to gain enhanced understanding of dynamic behavior of gene regulation in the biological system.
A late wake time phase delays the human dim light melatonin rhythm.
Burgess, Helen J; Eastman, Charmane I
2006-03-13
Short sleep/dark durations, due to late bedtimes or early wake times or both, are common in modern society. We have previously shown that a series of days with a late bedtime phase delays the human dim light melatonin rhythm, as compared to a series of days with an early bedtime, despite a fixed wake time. Here we compared the effect of an early versus late wake time with a fixed bedtime on the human dim light melatonin rhythm. Fourteen healthy subjects experienced 2 weeks of short 6h nights with an early wake time fixed at their habitual weekday wake time and 2 weeks of long 9 h nights with a wake time that occurred 3h later than the early wake time, in counterbalanced order. We found that after 2 weeks with the late wake time, the dim light melatonin onset delayed by 2.4 h and the dim light melatonin offset delayed by 2.6 h (both p < 0.001), as compared to after 2 weeks with the early wake time. These results highlight the substantial influence that wake time, likely via the associated morning light exposure, has on the timing of the human circadian clock. Furthermore, the results suggest that when people truncate their sleep by waking early their circadian clocks phase advance and when people wake late their circadian clocks phase delay.
A Combined Length-of-Day Series Spanning 1832-1997
NASA Technical Reports Server (NTRS)
Gross, Richard S.
1999-01-01
The Earth's rotation is not constant but exhibits minute changes on all observable time scales ranging from subdaily to secular. This rich spectrum of observed Earth rotation changes reflects the rich variety of astronomical and geophysical phenomena that are causing the Earth's rotation to change, including, but not limited to, ocean and solid body tides, atmospheric wind and pressure changes, oceanic current and sea level height changes, post-glacial rebound, and torques acting at the core-mantle boundary. In particular, the decadal-scale variations of the Earth's rotation are thought to be largely caused by interactions between the Earth's outer core and mantle. Comparing the inferred Earth rotation variations caused by the various core-mantle interactions to observed variations requires Earth rotation observations spanning decades, if not centuries. During the past century many different techniques have been used to observe the Earth's rotation. By combining the individual Earth rotation series determined by each of these techniques, a series of the Earth's rotation can be obtained that is based upon independent measurements spanning the greatest possible time interval. In this study, independent observations of the Earth's rotation are combined to generate a length-of-day series spanning 1832-1997. The observations combined include lunar occultation measurements spanning 1832-1955, optical astrometric measurements spanning 1956-1982, lunar laser ranging measurements spanning 1970-1997, and very long baseline interferometric measurements spanning 1978-1998. These series are combined using a Kalman filter developed at JPL for just this purpose. The resulting combined length-of-day series will be presented and compared with other available length-of-day series of similar duration.
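A minimal sketch of the combination idea: a one-dimensional random-walk Kalman filter fusing measurements from two hypothetical techniques with different noise levels. The JPL filter itself is more elaborate, and every number here is invented.

```python
import numpy as np

def kalman_combine(times, values, variances, q=0.01):
    """Combine irregular measurements of one slowly varying quantity
    (e.g. length-of-day) with a scalar random-walk Kalman filter."""
    order = np.argsort(times)
    x, p = values[order][0], variances[order][0]   # initialize at first point
    out = [x]
    for t_prev, t, z, r in zip(times[order], times[order][1:],
                               values[order][1:], variances[order][1:]):
        p = p + q * (t - t_prev)        # predict: random-walk process noise
        k = p / (p + r)                 # Kalman gain
        x = x + k * (z - x)             # update with the new measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

def truth(t):
    return 2.0 + 0.1 * t                # the underlying drift being observed

# Two interleaved techniques: one noisy and biased, one precise
t1 = np.arange(0.0, 10.0, 1.0)          # "optical astrometry": noisy
t2 = np.arange(0.5, 10.0, 1.0)          # "VLBI": precise
times = np.concatenate([t1, t2])
vals = np.concatenate([truth(t1) + 0.3, truth(t2) - 0.05])
var = np.concatenate([np.full(t1.size, 0.3 ** 2), np.full(t2.size, 0.05 ** 2)])

combined = kalman_combine(times, vals, var)
```

The gain automatically down-weights the noisy technique wherever a precise one overlaps it, which is how a single combined series can span eras dominated by very different observation types.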
75 FR 8082 - Government-Owned Inventions; Availability for Licensing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-23
... this invention, the colon registration problem is formulated as time series matching along the... on patients, less time-consuming, and more accurate. The effectiveness of the method was verified in...) compared to a traditional method based on dynamic time warping. Colon cancer is the second leading cause of...
Biogeochemical Response to Mesoscale Physical Forcing in the California Current System
NASA Technical Reports Server (NTRS)
Niiler, Pearn P.; Letelier, Ricardo; Moisan, John R.; Marra, John A. (Technical Monitor)
2001-01-01
In the first part of the project, we investigated the local response of coastal ocean ecosystems (changes in chlorophyll concentration and chlorophyll fluorescence quantum yield) to physical forcing by developing and deploying Autonomous Drifting Ocean Stations (ADOS) within several mesoscale features along the U.S. west coast. We also compared the temporal and spatial variability registered by sensors mounted on the drifters to that registered by sensors mounted on the satellites, in order to assess the scales of variability that are not resolved by the ocean color satellite. The second part of the project used the existing WOCE SVP Surface Lagrangian drifters to track individual water parcels through time. The individual drifter tracks were used to generate multivariate time series by interpolating/extracting the biological and physical data fields retrieved by remote sensors (ocean color, SST, wind speed and direction, wind stress curl, and sea level topography). The individual time series of the physical data (AVHRR, TOPEX, NCEP) were analyzed against the ocean color (SeaWiFS) time series to determine the time scale of the biological response to the physical forcing. The results from this part of the research are being used to compare the decorrelation scales of chlorophyll in Lagrangian and Eulerian frameworks. The results from both parts of this research augmented the time series data needed to investigate the interactions between ocean mesoscale features, wind, and biogeochemical processes. Using the historical Lagrangian data sets, we have completed a comparison of the decorrelation scales in both the Eulerian and Lagrangian reference frames for the SeaWiFS data set. We are continuing to investigate how these results might be used in objective mapping efforts.
Status of CSR RL06 GRACE reprocessing and preliminary results
NASA Astrophysics Data System (ADS)
Save, H.
2017-12-01
The GRACE project plans to reprocess the GRACE mission data in order to be consistent with the first gravity products released by the GRACE-FO project. The RL06 reprocessing will harmonize the GRACE time series with the first release of GRACE-FO. This paper catalogues the changes in the upcoming RL06 release and discusses the quality improvements over the current RL05 release, along with the associated processing and parameterization changes. It also traces the evolution of the quality of the GRACE solutions, characterizes the errors over the past few years, and discusses the possible challenges of connecting the GRACE time series with that from GRACE-FO.
Future projects in asteroseismology: the unique role of Antarctica
NASA Astrophysics Data System (ADS)
Mosser, B.; Siamois Team
Asteroseismology requires observables registered under stringent conditions: very high sensitivity, uninterrupted time series, and long duration. These specifications make it possible to study the details of stellar interior structure. Space-borne and ground-based asteroseismic projects are presented and compared. With CoRoT as a precursor, then Kepler and perhaps PLATO, the roadmap in space appears to be precisely designed. In parallel, ground-based projects are necessary to provide different and unique information on bright stars with Doppler measurements. Dome C appears to be the ideal place for ground-based asteroseismic observations: its unequalled weather conditions yield a duty cycle comparable to space, and long time series (up to 3 months) will be possible thanks to the long duration of the polar night.
Asquith, W.H.; Mosier, J. G.; Bush, P.W.
1997-01-01
The watershed simulation model Hydrologic Simulation Program—Fortran (HSPF) was used to generate simulated flow (runoff) from the 13 watersheds to the six bay systems because adequate gaged streamflow data from which to estimate freshwater inflows are not available; only about 23 percent of the adjacent contributing watershed area is gaged. The model was calibrated for the gaged parts of three watersheds—that is, selected input parameters (meteorologic and hydrologic properties and conditions) that control runoff were adjusted in a series of simulations until an adequate match between model-generated flows and a set (time series) of gaged flows was achieved. The primary model input is rainfall and evaporation data and the model output is a time series of runoff volumes. After calibration, simulations driven by daily rainfall for a 26-year period (1968–93) were done for the 13 watersheds to obtain runoff under current (1983–93), predevelopment (pre-1940 streamflow and pre-urbanization), and future (2010) land-use conditions for estimating freshwater inflows and for comparing runoff under the three land-use conditions; and to obtain time series of runoff from which to estimate time series of freshwater inflows for trend analysis.
Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie
2018-04-01
A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheritance characteristics, but also deterministic components reflecting variability characteristics such as jumps, trends, periodicity, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in inconsistency studies. We then proposed a new concept of hydrological genes, by analogy with biological genes, to describe inconsistent hydrological processes. The hydrological genes were constructed using moment methods, such as general moments, weight function moments, probability weighted moments and L-moments. Meanwhile, the five components of a stochastic hydrological process, including jump, trend, periodic, dependence and pure random components, were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were considered together, and the inheritance, variability and evolution principles were fully described. Our study contributes to revealing the inheritance, variability and evolution principles of the probability distribution of hydrological elements.
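The first two sample L-moments, one of the moment families mentioned for constructing the hydrological genes, can be computed from probability weighted moments (b0, b1); a pure-Python sketch:

```python
# Sample L-moments via unbiased probability weighted moment estimators:
# b0 is the sample mean, b1 weights the ordered values by their rank,
# l2 = 2*b1 - b0 is the L-scale (half the mean pairwise absolute difference).

def l_moments_12(sample):
    """Return (l1, l2): sample mean and L-scale."""
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((i - 1) * xi for i, xi in enumerate(x, start=1)) / (n * (n - 1))
    l1 = b0
    l2 = 2.0 * b1 - b0
    return l1, l2

l1, l2 = l_moments_12([1.0, 2.0, 3.0, 4.0])  # l1 = 2.5, l2 = 5/6
```

L-moments are linear in the ordered data, which makes them far less sensitive to outliers than conventional product moments, a useful property for series containing jumps.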
Dynamic correlations at different time-scales with empirical mode decomposition
NASA Astrophysics Data System (ADS)
Nava, Noemi; Di Matteo, T.; Aste, Tomaso
2018-07-01
We introduce a simple approach which combines Empirical Mode Decomposition (EMD) and Pearson's cross-correlations over rolling windows to quantify dynamic dependency at different time scales. The EMD is a tool to separate time series into implicit components which oscillate at different time-scales. We apply this decomposition to intraday time series of the following three financial indices: the S&P 500 (USA), the IPC (Mexico) and the VIX (volatility index USA), obtaining time-varying multidimensional cross-correlations at different time-scales. The correlations computed over a rolling window are compared across the three indices, across the components at different time-scales and across different time lags. We uncover a rich heterogeneity of interactions, which depends on the time-scale and has important lead-lag relations that could have practical use for portfolio management, risk estimation and investment decisions.
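The rolling-window correlation step described above (leaving the EMD itself aside) is straightforward to sketch. This is an illustrative numpy fragment on synthetic series standing in for two index components that share a common oscillation; window length and noise levels are assumptions, not the paper's settings:

```python
import numpy as np

def rolling_corr(x, y, window):
    """Pearson correlation of x and y over a sliding window."""
    n = len(x) - window + 1
    out = np.empty(n)
    for i in range(n):
        out[i] = np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
    return out

rng = np.random.default_rng(0)
t = np.arange(500)
common = np.sin(2 * np.pi * t / 50)            # shared oscillatory component
a = common + 0.3 * rng.standard_normal(500)    # two noisy observations of it
b = common + 0.3 * rng.standard_normal(500)
c = rolling_corr(a, b, window=100)             # time-varying dependency estimate
```

Repeating this per EMD component and per time lag yields the time-scale-resolved correlation structure the abstract describes.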
Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy
2014-11-01
In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Taking intrusive multipath time-of-flight ultrasonic measurements (differences in flight time and speed of ultrasound) as input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). Principally, by representing these velocities as Zernike polynomial series, we reduce the tomography problem to the ill-posed problem of finding the coefficients of the series from the acquired ultrasonic measurements. Thereupon, this problem is treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity by employing SVD-based filtering to select the Zernike polynomials included in the series. The first approach, Tikhonov regularization without filtering, serves as the reference for comparison. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show a relative residual norm and an error in flow estimation of ~0.3% and ~1.6%, respectively, for the less turbulent flow, and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by proximate matching of the derived tomograms with a physical flow model.
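The Tikhonov step, recovering series coefficients from an ill-posed linear system, can be sketched generically. This is an illustrative ridge solve on synthetic data, not the authors' Zernike formulation; the matrix sizes and regularization weight are assumptions:

```python
import numpy as np

def tikhonov(A, y, lam):
    """Solve min ||A x - y||^2 + lam ||x||^2 (Tikhonov regularization)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
A[:, 1] = A[:, 0] + 1e-6 * rng.standard_normal(40)   # near-collinear columns: ill-posed
x_true = np.zeros(20); x_true[:3] = [1.0, -0.5, 2.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = tikhonov(A, y, lam=1e-2)                     # stable despite ill-conditioning
resid = np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y)
```

The penalty term keeps the normal-equations matrix well conditioned even when the plain least-squares problem is nearly singular.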
Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M
2014-01-01
This paper forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with the nonparametric method of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.
Finding hidden periodic signals in time series - an application to stock prices
NASA Astrophysics Data System (ADS)
O'Shea, Michael
2014-03-01
Data in the form of time series appear in many areas of science. In cases where the periodicity is apparent and the only other contribution to the time series is stochastic in origin, the data can be `folded' to improve signal to noise and this has been done for light curves of variable stars with the folding resulting in a cleaner light curve signal. Stock index prices versus time are classic examples of time series. Repeating patterns have been claimed by many workers and include unusually large returns on small-cap stocks during the month of January, and small returns on the Dow Jones Industrial average (DJIA) in the months June through September compared to the rest of the year. Such observations imply that these prices have a periodic component. We investigate this for the DJIA. If such a component exists it is hidden in a large non-periodic variation and a large stochastic variation. We show how to extract this periodic component and for the first time reveal its yearly (averaged) shape. This periodic component leads directly to the `Sell in May and buy at Halloween' adage. We also drill down and show that this yearly variation emerges from approximately half of the underlying stocks making up the DJIA index.
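The folding procedure mentioned above amounts to phase-averaging the series over its known period. A minimal sketch on synthetic data (a hidden yearly cycle buried in noise; the period and noise level are illustrative assumptions):

```python
import numpy as np

def fold(series, period):
    """Average the series over phase bins of the given period."""
    n = (len(series) // period) * period       # drop the incomplete final cycle
    return series[:n].reshape(-1, period).mean(axis=0)

rng = np.random.default_rng(2)
t = np.arange(12 * 30)                          # e.g. 30 "years" of 12 "months"
periodic = np.sin(2 * np.pi * t / 12)           # hidden yearly component
noisy = periodic + 2.0 * rng.standard_normal(len(t))
shape = fold(noisy, 12)                          # averaged yearly shape emerges
```

Averaging over many cycles suppresses the stochastic part by roughly the square root of the number of cycles, which is how folding improves signal to noise.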
NASA Astrophysics Data System (ADS)
Prada, D. A.; Sanabria, M. P.; Torres, A. F.; Álvarez, M. A.; Gómez, J.
2018-04-01
The study of persistence in time series of seismic events in two of the most important networks, Hindu Kush in Afghanistan and Los Santos, Santander, in Colombia, generates great interest due to their high telluric activity. The data were taken from the global seismological network. The presence of a Gaussian distribution was tested using the Jarque-Bera test; because the distributions of the series were asymmetric and not mesokurtic, the Hurst coefficient was calculated using the rescaled range method. From this coefficient the fractal dimension associated with these time series was obtained, from which the persistence, antipersistence and volatility of these phenomena can be determined.
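The rescaled-range (R/S) estimate of the Hurst coefficient can be sketched as follows. This is a generic illustration on white noise (where H should be near 0.5), not the authors' seismic data; segment sizes are an assumption:

```python
import numpy as np

def hurst_rs(x):
    """Estimate the Hurst exponent by the rescaled-range (R/S) method."""
    x = np.asarray(x, float)
    N = len(x)
    log_n, log_rs = [], []
    for n in 2 ** np.arange(3, int(np.log2(N)) + 1):     # segment sizes 8, 16, ...
        rs_vals = []
        for start in range(0, N - n + 1, n):
            seg = x[start:start + n]
            z = np.cumsum(seg - seg.mean())               # cumulative deviations
            R = z.max() - z.min()                         # range
            S = seg.std()                                 # standard deviation
            if S > 0:
                rs_vals.append(R / S)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)               # R/S ~ n^H
    return slope

rng = np.random.default_rng(3)
h = hurst_rs(rng.standard_normal(4096))   # uncorrelated increments: H near 0.5
```

H > 0.5 indicates persistence, H < 0.5 antipersistence; the fractal dimension mentioned in the abstract follows as D = 2 - H for a self-affine series.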
The Living Planet Index: using species population time series to track trends in biodiversity
Loh, Jonathan; Green, Rhys E; Ricketts, Taylor; Lamoreux, John; Jenkins, Martin; Kapos, Valerie; Randers, Jorgen
2005-01-01
The Living Planet Index was developed to measure the changing state of the world's biodiversity over time. It uses time-series data to calculate average rates of change in a large number of populations of terrestrial, freshwater and marine vertebrate species. The dataset contains about 3000 population time series for over 1100 species. Two methods of calculating the index are outlined: the chain method and a method based on linear modelling of log-transformed data. The dataset is analysed to compare the relative representation of biogeographic realms, ecoregional biomes, threat status and taxonomic groups among species contributing to the index. The two methods show very similar results: terrestrial species declined on average by 25% from 1970 to 2000. Birds and mammals are over-represented in comparison with other vertebrate classes, and temperate species are over-represented compared with tropical species, but there is little difference in representation between threatened and non-threatened species. Some of the problems arising from over-representation are reduced by the way in which the index is calculated. It may be possible to reduce this further by post-stratification and weighting, but new information would first need to be collected for data-poor classes, realms and biomes. PMID:15814346
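The chain method amounts to averaging log rates of change across populations year by year and chaining the result into an index. A small sketch with two hypothetical declining populations (the data values are invented for illustration):

```python
import numpy as np

def chain_index(pops):
    """Chain-method index from pops: array (n_populations, n_years), all > 0."""
    logratios = np.log10(pops[:, 1:] / pops[:, :-1])   # per-population annual change
    dbar = logratios.mean(axis=0)                      # average log rate per year
    return np.concatenate([[1.0], 10 ** np.cumsum(dbar)])

pops = np.array([[100.0,  90.0,  80.0,  70.0],
                 [200.0, 190.0, 185.0, 150.0]])        # two hypothetical populations
idx = chain_index(pops)                                 # index relative to year 0
```

Because the chained mean log ratios telescope, the final index equals the geometric mean across populations of each population's overall change, which is why the method is robust to populations of very different absolute sizes.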
Crustal Displacements Due to Continental Water Loading
NASA Technical Reports Server (NTRS)
vanDam, T.; Wahr, J.; Milly, P. C. D.; Shmakin, A. B.; Blewitt, G.; Lavallee, D.; Larson, K. M.
2001-01-01
The effects of long-wavelength (> 100 km), seasonal variability in continental water storage on vertical crustal motions are assessed. The modeled vertical displacements (delta-r(sub M)) have root-mean-square (RMS) values for 1994-1998 as large as 8 mm with ranges up to 30 mm, and are predominantly annual in character. Regional strains are on the order of 20 nanostrain for tilt and 5 nanostrain for horizontal deformation. We compare delta-r(sub M) with observed Global Positioning System (GPS) heights (delta-r(sub O)) (which include adjustments to remove estimated effects of atmospheric pressure and annual tidal and non-tidal ocean loading) for 147 globally distributed sites. When the delta-r(sub O) time series are adjusted by delta-r(sub M), their variances are reduced, on average, by an amount equal to the variance of the delta-r(sub M). Of the delta-r(sub O) time series exhibiting a strong annual signal, more than half are found to have an annual harmonic that is in phase and of comparable amplitude with the annual harmonic in the delta-r(sub M). The delta-r(sub M) time series exhibit long-period variations that could be mistaken for secular tectonic trends or post-glacial rebound when observed over a time span of a few years.
A Landsat Time-Series Stacks Model for Detection of Cropland Change
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, J.; Zhang, J.
2017-09-01
Global, timely, accurate and cost-effective cropland monitoring at a fine spatial resolution will dramatically improve our understanding of the effects of agriculture on greenhouse gas emissions, food safety, and human health. Time-series remote sensing imagery has shown particular potential to describe land cover dynamics. Traditional change detection techniques are often not capable of detecting land cover changes within time series that are severely influenced by seasonal differences, and are therefore more likely to generate pseudo changes. Here, we introduced and tested LTSM (Landsat time-series stacks model), an improvement on the previously proposed Continuous Change Detection and Classification (CCDC) approach, to extract spectral trajectories of land surface change using dense Landsat time-series stacks (LTS). The method is expected to eliminate pseudo changes caused by phenology driven by seasonal patterns. The main idea of the method is that, using all available Landsat 8 images within a year, an LTSM consisting of a two-term harmonic function is estimated iteratively for each pixel in each spectral band. The LTSM defines change areas by differencing the predicted and observed Landsat images. The LTSM approach was compared with the change vector analysis (CVA) method. The results indicated that the LTSM method correctly detected the "true changes" without overestimating the "false" ones, while CVA pointed out "true change" pixels together with a large number of "false changes". The detection of change areas achieved an overall accuracy of 92.37 %, with a kappa coefficient of 0.676.
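The per-pixel harmonic model underlying such approaches — fit a two-term harmonic to a year of observations, then flag observations that depart from the prediction — can be sketched roughly as follows. The synthetic NDVI-like values and the 3-RMSE threshold are illustrative assumptions, not the paper's calibrated settings:

```python
import numpy as np

def harmonic_design(doy, period=365.25):
    """Design matrix for a two-term (annual + semi-annual) harmonic model."""
    w = 2 * np.pi * doy / period
    return np.column_stack([np.ones_like(w),
                            np.cos(w), np.sin(w),
                            np.cos(2 * w), np.sin(2 * w)])

rng = np.random.default_rng(4)
doy = np.sort(rng.uniform(0, 365, 60))          # 60 clear observations in a year
ndvi = 0.4 + 0.2 * np.sin(2 * np.pi * doy / 365.25) + 0.01 * rng.standard_normal(60)
X = harmonic_design(doy)
coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None) # fit the seasonal trajectory
rmse = np.sqrt(np.mean((ndvi - X @ coef) ** 2))

# a new observation far below the seasonal prediction is flagged as change
new_obs, new_doy = 0.05, 200.0                  # e.g. cropland cleared mid-season
pred_new = (harmonic_design(np.array([new_doy])) @ coef).item()
is_change = abs(new_obs - pred_new) > 3 * rmse
```

Because the harmonic fit tracks the phenological cycle, seasonal dips are predicted rather than flagged, which is how this family of models suppresses pseudo changes.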
Analysis on Difference of Forest Phenology Extracted from EVI and LAI Based on PhenoCams
NASA Astrophysics Data System (ADS)
Wang, C.; Jing, L.; Qinhuo, L.
2017-12-01
Land surface phenology can compensate for the deficiencies of field observation, with the advantage of capturing the continuous expression of phenology on a large scale. However, there is some variability in phenological metrics derived from different satellite time-series data of vegetation parameters. This paper aims to assess the difference in phenology information extracted from EVI and LAI time series. To achieve this, several web-camera sites were selected to analyze the characteristics of MODIS-EVI and MODIS-LAI time series from 2010 to 2014 for different forest types, including evergreen coniferous forest, evergreen broadleaf forest, deciduous coniferous forest and deciduous broadleaf forest. At the same time, satellite-based phenological metrics were extracted by the logistic algorithm and compared with camera-based phenological metrics. Results show that the SOS and EOS extracted from LAI are close to bud burst and leaf defoliation, respectively, while the SOS and EOS extracted from EVI are close to leaf unfolding and leaf coloring, respectively. Thus the SOS extracted from LAI is earlier than that from EVI, while the EOS extracted from LAI is later than that from EVI at deciduous forest sites. Although the seasonal variation characteristics of evergreen forests are not apparent, significant discrepancies exist between the LAI and EVI time series. In addition, satellite- and camera-based phenological metrics generally agree well, but EVI correlates more strongly than LAI with the camera-based canopy greenness (green chromatic coordinate, gcc).
Alford, Timothy J; Roberts, W Eugene; Hartsfield, James K; Eckert, George J; Snyder, Ronald J
2011-05-01
Utilize American Board of Orthodontics (ABO) cast/radiographic evaluation (CRE) to compare a series of 63 consecutive patients, finished with manual wire bending (conventional) treatment, vs a subsequent series of 69 consecutive patients, finished by the same orthodontist using the SureSmile™ (SS) method. Records of 132 nonextraction patients were scored by a calibrated examiner blinded to treatment mode. Age and discrepancy index (DI) between groups were compared by t-tests. A chi-square test was used to compare for differences in sex and whether the patient was treated using braces only (no orthopedic correction). Analysis of covariance tested for differences in CRE outcomes and treatment times, with sex and DI included as covariates. A logarithmic transformation of CRE outcomes and treatment times was used because their distributions were skewed. Significance was defined as P < .05. Compared with conventional finishing, SS patients had significantly lower DI scores, less treatment time (∼7 months), and better CRE scores for first-order alignment-rotation and interproximal space closure; however, second-order root angulation (RA) was inferior. SS patients were treated in less time to better CRE scores for first-order rotation (AR) and interproximal space closure (IC) but on the average, malocclusions were less complex and second order root alignment was inferior, compared with patients finished with manual wire bending.
Alford, Timothy J.; Roberts, W. Eugene; Hartsfield, James K.; Eckert, George J.; Snyder, Ronald J.
2016-01-01
Objective Utilize American Board of Orthodontics (ABO) cast/radiographic evaluation (CRE) to compare a series of 63 consecutive patients, finished with manual wire bending (conventional) treatment, vs a subsequent series of 69 consecutive patients, finished by the same orthodontist using the SureSmile™ (SS) method. Materials and Methods Records of 132 nonextraction patients were scored by a calibrated examiner blinded to treatment mode. Age and discrepancy index (DI) between groups were compared by t-tests. A chi-square test was used to compare for differences in sex and whether the patient was treated using braces only (no orthopedic correction). Analysis of covariance tested for differences in CRE outcomes and treatment times, with sex and DI included as covariates. A logarithmic transformation of CRE outcomes and treatment times was used because their distributions were skewed. Significance was defined as P < .05. Results Compared with conventional finishing, SS patients had significantly lower DI scores, less treatment time (~7 months), and better CRE scores for first-order alignment-rotation and interproximal space closure; however, second-order root angulation (RA) was inferior. Conclusion SS patients were treated in less time to better CRE scores for first-order rotation (AR) and interproximal space closure (IC) but on the average, malocclusions were less complex and second order root alignment was inferior, compared with patients finished with manual wire bending. PMID:21261488
NASA Technical Reports Server (NTRS)
Yue, G. K.; Poole, L. R.; McCormick, M. P.; Veiga, R. E.; Wang, P.-H.; Rizi, V.; Masci, F.; DAltorio, A.; Visconti, G.
1995-01-01
Stratospheric aerosol and ozone profiles obtained simultaneously from the lidar station at the University of L'Aquila (42.35 deg N, 13.33 deg E, 683 m above sea level) during the first 6 months following the eruption of Mount Pinatubo are compared with corresponding nearby Stratospheric Aerosol and Gas Experiment (SAGE) 2 profiles. The agreement between the two data sets is found to be reasonably good. The temporal change of aerosol profiles obtained by both techniques showed the intrusion and growth of Pinatubo aerosols. In addition, ozone concentration profiles derived from an empirical time-series model based on SAGE 2 ozone data obtained before the Pinatubo eruption are compared with measured profiles. Good agreement is shown in the 1991 profiles, but ozone concentrations measured in January 1992 were reduced relative to time-series model estimates. Possible reasons for the differences between measured and model-based ozone profiles are discussed.
Temporal dynamics of catchment transit times from stable isotope data
NASA Astrophysics Data System (ADS)
Klaus, Julian; Chun, Kwok P.; McGuire, Kevin J.; McDonnell, Jeffrey J.
2015-06-01
Time-variant catchment transit time distributions are fundamental descriptors of catchment function but are not yet fully understood, characterized, and modeled. Here we present a new approach for use with standard runoff and tracer data sets that is based on tracking of tracer and age information and time-variant catchment mixing. Our new approach is able to deal with nonstationarity of flow paths and catchment mixing, and with an irregular shape of the transit time distribution. The approach extracts information on catchment mixing from the stable isotope time series instead of relying on prior assumptions about mixing or the shape of the transit time distribution. We first demonstrate proof of concept of the approach with artificial data; the Nash-Sutcliffe efficiencies in tracer and instantaneous transit times were >0.9. The model provides very accurate estimates of time-variant transit times when the boundary conditions and fluxes are fully known. We then tested the model with real rainfall-runoff flow and isotope tracer time series from the H.J. Andrews Watershed 10 (WS10) in Oregon. Model efficiency was 0.37 for the 18O modeling over a 2 year time series; the efficiency increased to 0.86 for the second year, underlining the need for long tracer time series with a long overlap of tracer input and output. The approach was able to determine the time-variant transit time of WS10 with field data and showed how it follows the storage dynamics and related changes in flow paths, with wet periods of high flow resulting in clearly shorter transit times compared to dry low-flow periods.
NASA Astrophysics Data System (ADS)
Caro Cuenca, Miguel; Esfahany, Sami Samiei; Hanssen, Ramon F.
2010-12-01
Persistent scatterer radar interferometry (PSI) can provide a wealth of information on surface motion. These methods overcome the major limitations of the predecessor technique, interferometric SAR (InSAR), such as atmospheric disturbances, by detecting the scatterers that are only slightly affected by noise. The time span over which surface deformation processes can be observed is limited by the satellite lifetime, which is usually less than 10 years; however, most deformation phenomena last longer. In order to fully monitor and comprehend the observed signal, acquisitions from different sensors can be merged. This is a complex task for one main reason: PSI methods provide estimates that are relative in time to one of the acquisitions, referred to as the master or reference image. Therefore, time series acquired by different sensors have different reference images and cannot be directly compared or joined unless they are set to the same time reference system. In global terms, the operation of translating from one reference system to another consists of calculating a vertical offset, which is the total deformation that occurs between the two master times. To estimate this offset, different strategies can be applied, for example, using additional data such as leveling or GPS measurements. In this contribution we propose to use least squares to merge PSI time series without any ancillary information. This method treats the time series individually, i.e., per PS, and requires some knowledge of the deformation signal, for example, whether a polynomial would fairly describe the expected behavior. To test the proposed approach, we applied it to the southern Netherlands, where the surface is affected by groundwater processes in abandoned mines. The time series were obtained by processing images from ERS-1/2 and Envisat. The results were validated using in-situ water measurements, which show very high correlation with the deformation time series.
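The offset estimation can be illustrated with a least-squares joint fit: a polynomial shared by both series plus one unknown per-series reference offset. This synthetic sketch assumes a quadratic deformation signal and invented epochs; it is not the authors' processing chain:

```python
import numpy as np

def subsidence(t):
    """Assumed smooth deformation history (mm), quadratic for illustration."""
    return -2.0 * t + 0.05 * t ** 2

rng = np.random.default_rng(10)
t1 = np.arange(0.0, 6.0, 0.1)              # sensor A epochs (years)
t2 = np.arange(4.0, 10.0, 0.1)             # sensor B epochs, overlapping in 4-6
# each series is relative to its own first (master) acquisition
d1 = subsidence(t1) - subsidence(t1[0]) + 0.3 * rng.standard_normal(len(t1))
d2 = subsidence(t2) - subsidence(t2[0]) + 0.3 * rng.standard_normal(len(t2))

# design: shared quadratic in time, plus one column for B's reference offset
t = np.concatenate([t1, t2])
is_b = np.concatenate([np.zeros(len(t1)), np.ones(len(t2))])
A = np.column_stack([np.ones_like(t), t, t ** 2, is_b])
coef, *_ = np.linalg.lstsq(A, np.concatenate([d1, d2]), rcond=None)
offset = coef[3]
d2_in_a_datum = d2 - offset                 # series B expressed in A's datum
```

Here the true offset is -(subsidence(4) - subsidence(0)) = 7.2 mm, and the least-squares fit recovers it from the series alone, without leveling or GPS ties.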
Washburne, Alex D.; Burby, Joshua W.; Lacker, Daniel; ...
2016-09-30
Systems as diverse as the interacting species in a community, alleles at a genetic locus, and companies in a market are characterized by competition (over resources, space, capital, etc.) and adaptation. Neutral theory, built around the hypothesis that individual performance is independent of group membership, has found utility across the disciplines of ecology, population genetics, and economics, both because of the success of the neutral hypothesis in predicting system properties and because deviations from these predictions provide information about the underlying dynamics. However, most tests of neutrality are weak, based on static system properties such as species-abundance distributions or the number of singletons in a sample. Time-series data provide a window onto a system's dynamics, and should furnish tests of the neutral hypothesis that are more powerful for detecting deviations from neutrality and more informative about the type of competitive asymmetry that drives the deviation. Here, we present a neutrality test for time-series data. We apply this test to several microbial time series and financial time series and find that most of these systems are not neutral. Our test isolates the covariance structure of neutral competition, thus facilitating further exploration of the nature of asymmetry in the covariance structure of competitive systems. Much as neutrality tests from population genetics that use relative abundance distributions have enabled researchers to scan entire genomes for genes under selection, we anticipate our time-series test will be useful for quick significance tests of neutrality across a range of ecological, economic, and sociological systems for which time-series data are available. Future work can use our test to categorize and compare the dynamic fingerprints of particular competitive asymmetries (frequency dependence, volatility smiles, etc.) to improve forecasting and management of complex adaptive systems.
Complex-valued time-series correlation increases sensitivity in FMRI analysis.
Kociuba, Mary C; Rowe, Daniel B
2016-07-01
To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. 
Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI data sets with high noise variance, and to avoid excessive processing-induced correlation.
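The gain from keeping the phase can be illustrated with a simplified time-domain stand-in (not the paper's Fourier-domain framework): correlate demeaned complex series via their real-valued isomorphism, on a synthetic series whose task signal lives only in the phase. All signal parameters here are invented for the sketch:

```python
import numpy as np

def cv_corr(x, y):
    """Correlation of demeaned complex series via their real-valued isomorphism."""
    x = x - x.mean()
    y = y - y.mean()
    xr = np.concatenate([x.real, x.imag])
    yr = np.concatenate([y.real, y.imag])
    return np.corrcoef(xr, yr)[0, 1]

rng = np.random.default_rng(5)
task = (np.arange(200) % 40 < 20).astype(float)        # on/off block design
mag = 1.0 + 0.02 * rng.standard_normal(200)            # magnitude: no task signal
phase = 0.2 * task + 0.02 * rng.standard_normal(200)   # task modulates phase only
x = mag * np.exp(1j * phase)                           # complex-valued voxel series
ref = np.exp(1j * 0.2 * task)                          # complex-valued reference
mo = np.corrcoef(np.abs(x), task)[0, 1]                # magnitude-only: near zero
cv = cv_corr(x, ref)                                   # complex-valued: strong
```

Magnitude-only analysis discards exactly the component carrying the activation here, while the complex-valued correlation recovers it, mirroring the sensitivity argument in the abstract.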
Measuring efficiency of international crude oil markets: A multifractality approach
NASA Astrophysics Data System (ADS)
Niere, H. M.
2015-01-01
The three major international crude oil markets are treated as complex systems and their multifractal properties are explored. The study covers daily prices of Brent crude, the OPEC reference basket and West Texas Intermediate (WTI) crude from January 2, 2003 to January 2, 2014. Multifractal detrended fluctuation analysis (MFDFA) is employed to extract the generalized Hurst exponents of each time series. The generalized Hurst exponent is used to measure the degree of multifractality, which in turn is used to quantify the efficiency of the three international crude oil markets. To identify whether the source of multifractality is long-range correlations or broad fat-tailed distributions, shuffled data and surrogate data corresponding to each time series are generated. Shuffled data are obtained by randomizing the order of the price-return data; this destroys any long-range correlation in the time series. Surrogate data are produced using Fourier-detrended fluctuation analysis (F-DFA), by randomizing the phases of the price-return data in Fourier space; this normalizes the distribution of the time series. The study found that for the three crude oil markets there is a strong dependence of the generalized Hurst exponents on the order of fluctuations, showing that the daily price time series of the markets under study have signs of multifractality. Using the degree of multifractality as a measure of efficiency, the results show that WTI is the most efficient market while OPEC is the least efficient. This implies that OPEC has the highest likelihood of being manipulated among the three markets, and reflects the fact that Brent and WTI are very competitive markets and hence have a higher level of complexity than OPEC, which has large monopoly power. Comparison with the shuffled and surrogate data suggests that for all three crude oil markets the multifractality is mainly due to long-range correlations.
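The two null models used here — shuffling, which destroys temporal correlations while keeping the distribution, and Fourier phase randomization, which preserves the power spectrum (and hence linear correlations) — can be sketched on a synthetic autocorrelated series. This is a generic illustration, not the paper's F-DFA pipeline:

```python
import numpy as np

def shuffled(x, rng):
    """Destroys all temporal correlations; keeps the value distribution."""
    return rng.permutation(x)

def phase_surrogate(x, rng):
    """Randomizes Fourier phases; keeps the power spectrum."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                      # keep the mean
    if len(x) % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(6)
x = np.empty(2000); x[0] = 0.0           # strongly autocorrelated AR(1) series
eps = rng.standard_normal(2000)
for i in range(1, 2000):
    x[i] = 0.9 * x[i - 1] + eps[i]

def ac(s):
    """Lag-1 autocorrelation."""
    return np.corrcoef(s[:-1], s[1:])[0, 1]
```

Comparing a multifractality measure on the original, shuffled, and surrogate versions then attributes it to long-range correlations, fat tails, or both; the lag-1 autocorrelation below makes the contrast between the two nulls visible directly.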
Real time wave forecasting using wind time history and numerical model
NASA Astrophysics Data System (ADS)
Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.
Operational activities in the ocean, such as planning for structural repairs or fishing expeditions, require real-time prediction of waves over typical time durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for stations where costly wave buoys are not deployed and instead only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of artificial neural networks (ANN), genetic programming (GP) and model trees (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data were generated using a numerical model. The waves predicted by the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
NASA Astrophysics Data System (ADS)
Krawiecki, A.
A multi-agent spin model for changes of prices in the stock market based on the Ising-like cellular automaton with interactions between traders randomly varying in time is investigated by means of Monte Carlo simulations. The structure of interactions has topology of a small-world network obtained from regular two-dimensional square lattices with various coordination numbers by randomly cutting and rewiring edges. Simulations of the model on regular lattices do not yield time series of logarithmic price returns with statistical properties comparable with the empirical ones. In contrast, in the case of networks with a certain degree of randomness for a wide range of parameters the time series of the logarithmic price returns exhibit intermittent bursting typical of volatility clustering. Also the tails of distributions of returns obey a power scaling law with exponents comparable to those obtained from the empirical data.
Xue, Fangzheng; Li, Qian; Li, Xiumin
2017-01-01
Recently, the echo state network (ESN) has attracted a great deal of attention due to its high accuracy and efficient learning performance. Compared with the traditional random structure and classical sigmoid units, a simple circle topology and leaky integrator neurons have advantages for the reservoir computing of an ESN. In this paper, we propose a new ESN model with both a circle reservoir structure and leaky integrator units. By comparing the prediction capability on the Mackey-Glass chaotic time series of four ESN models (classical ESN, circle ESN, traditional leaky integrator ESN, and circle leaky integrator ESN), we find that our circle leaky integrator ESN shows significantly better performance than the other ESNs, with roughly two orders of magnitude reduction in predictive error. Moreover, this model has a stronger ability to approximate nonlinear dynamics and resist noise than a conventional ESN or an ESN with only a simple circle structure or leaky integrator neurons. Our results show that the combination of circle topology and leaky integrator neurons can remarkably increase dynamical diversity while decreasing the correlation of reservoir states, which contributes to the significant improvement in the computational performance of echo state networks on time series prediction.
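A minimal circle-reservoir, leaky-integrator ESN can be sketched in a few lines. This is an illustrative one-step-ahead predictor on a simple synthetic signal standing in for Mackey-Glass, with hyperparameters chosen for the sketch rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
N, leak, rho = 100, 0.3, 0.9

# circle (ring) reservoir: unit i-1 feeds unit i with a single fixed weight
W = np.zeros((N, N))
W[np.arange(1, N), np.arange(N - 1)] = rho
W[0, N - 1] = rho
w_in = 0.5 * rng.uniform(-1, 1, N)

# one-step-ahead prediction of a smooth quasi-periodic signal
u = np.sin(0.2 * np.arange(1200)) * np.cos(0.0311 * np.arange(1200))
states = np.zeros((len(u), N))
x = np.zeros(N)
for t in range(len(u) - 1):
    # leaky-integrator update of the reservoir state
    x = (1 - leak) * x + leak * np.tanh(W @ x + w_in * u[t])
    states[t + 1] = x

washout, split = 100, 1000
X, y = states[washout:split], u[washout:split]   # states[t] has only seen u[..t-1]
w_out = np.linalg.solve(X.T @ X + 1e-4 * np.eye(N), X.T @ y)  # ridge readout
pred = states[split:] @ w_out
mse = float(np.mean((pred - u[split:]) ** 2))
```

Only the linear readout is trained; the deterministic ring plus leaky units supplies the memory, which is the appeal of the simple-cycle construction the abstract builds on.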
Efficient multidimensional regularization for Volterra series estimation
NASA Astrophysics Data System (ADS)
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with that of the white-box (physical) models.
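The regularization idea the paper extends, kernel-based regularized impulse response estimation, can be sketched for the linear (first-order) case with a TC ("tuned/correlated") prior kernel. The kernel hyperparameters and noise variance below are illustrative assumptions, and the paper's multidimensional Volterra extension is not reproduced.

```python
import numpy as np

def tc_kernel(n, c=1.0, alpha=0.8):
    """TC prior kernel: encodes smooth, exponentially decaying impulse responses."""
    idx = np.arange(n)
    return c * alpha ** np.maximum.outer(idx, idx)

def regularized_fir(u, y, n_taps, noise_var=1e-2):
    """Regularized FIR estimate: theta = (Phi'Phi + s2 * P^-1)^-1 Phi' y."""
    N = len(u)
    Phi = np.zeros((N, n_taps))
    for k in range(n_taps):
        Phi[k:, k] = u[:N - k]            # column k holds the input lagged by k
    P = tc_kernel(n_taps)
    theta = np.linalg.solve(Phi.T @ Phi + noise_var * np.linalg.inv(P),
                            Phi.T @ y)
    return theta
```

The prior shrinks late, poorly excited taps toward zero, which is what lets short, noisy records yield smooth estimates.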
Maximum Entropy Method applied to Real-time Time-Dependent Density Functional Theory
NASA Astrophysics Data System (ADS)
Zempo, Yasunari; Toogoshi, Mitsuki; Kano, Satoru S.
The Maximum Entropy Method (MEM) is widely used for the analysis of time-series data, such as earthquake records, that have fairly long periodicity but short observation windows. We have examined the application of MEM to the optical analysis of time-series data from real-time TDDFT. Conventionally, the Fourier transform (FT) is used for this analysis, but attention must be paid to the low-energy part of the spectrum, such as the band gap, which requires long time evolution, so the computational cost becomes quite high. Since MEM is based on the autocorrelation of the signal, in which periodicity is described through differences of time lags, its resolution at low energies is naturally poorer than at high energies. To overcome this difficulty, our MEM has two features: the raw data are repeated many times and concatenated, which improves the resolution in the low-energy region; and, together with the repeated data, an appropriate phase for the target frequency is introduced to reduce the side effects of the artificial periodicity. We have compared our improved MEM and the FT spectrum using small-to-medium size molecules. The MEM spectrum is noticeably clearer than the FT spectrum, and our new technique provides higher resolution in fewer time steps than FT. This work was partially supported by JSPS Grants-in-Aid for Scientific Research (C) Grant number 16K05047, Sumitomo Chemical, Co. Ltd., and Simulatio Corp.
Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan
2016-01-01
Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker of related diseases. A previous study showed that the average wavelet method provides the most accurate estimates of this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. We present a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents covering the range of physiologically plausible fractal signals. Five candidate mother wavelets were selected for their good performance on the mean square error test for both short and long signals. We then comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother wavelet regardless of signal length. PMID:27960102
Arbitrary-order corrections for finite-time drift and diffusion coefficients
NASA Astrophysics Data System (ADS)
Anteneodo, C.; Riera, R.
2009-09-01
We address a standard class of diffusion processes with linear drift and quadratic diffusion coefficients. These contributions to the dynamic equations can be drawn directly from data time series. However, real data are constrained to finite sampling rates, and it is therefore crucial to establish a suitable mathematical description of the required finite-time corrections. Based on Itô-Taylor expansions, we present the exact corrections to the finite-time drift and diffusion coefficients. These results allow one to reconstruct the real hidden coefficients from the empirical estimates. We also derive higher-order finite-time expressions for the third and fourth conditional moments that furnish extra theoretical checks for this class of diffusion models. The analytical predictions are compared with the numerical outcomes of representative artificial time series.
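The finite-time effect the paper corrects can be illustrated on a simulated Ornstein-Uhlenbeck process: regressing increments on the state estimates the drift, but at sampling interval dt the slope approaches (exp(-gamma*dt) - 1)/dt rather than -gamma. A minimal sketch; all parameters are arbitrary choices, not values from the paper.

```python
import numpy as np

def simulate_ou(gamma=1.0, D=1.0, dt=0.01, n=200_000, seed=1):
    """Euler-Maruyama simulation of dX = -gamma*X dt + sqrt(2D) dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.normal(0.0, np.sqrt(2 * D * dt), size=n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] - gamma * x[i] * dt + noise[i]
    return x

def finite_time_drift(x, dt):
    """Estimate the drift D1(x) = <dx | x>/dt by regressing increments on the state.

    For a linear drift -gamma*x this converges to (exp(-gamma*dt) - 1)/dt,
    i.e. the finite-time (biased) coefficient that must be corrected."""
    dx = np.diff(x)
    slope = np.polyfit(x[:-1], dx / dt, 1)[0]
    return slope
```

With small dt the bias is mild; for coarse sampling the discrepancy between the regressed slope and the true drift coefficient is exactly what the exact corrections address.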
A comparative simulation study of AR(1) estimators in short time series.
Krone, Tanja; Albers, Casper J; Timmerman, Marieke E
2017-01-01
Various estimators of the autoregressive model exist. We compare their performance in estimating the autocorrelation in short time series. In Study 1, under correct model specification, we compare the frequentist r1 estimator, C-statistic, ordinary least squares estimator (OLS) and maximum likelihood estimator (MLE), and a Bayesian method, considering flat (Bf) and symmetrized reference (Bsr) priors. In a completely crossed experimental design we vary the length of the time series (T = 10, 25, 40, 50 and 100) and the autocorrelation (from -0.90 to 0.90 in steps of 0.10). The results show the lowest bias for Bsr and the lowest variability for r1. The power in different conditions is highest for Bsr and OLS. For T = 10, the absolute performance of all methods is poor, as expected. In Study 2, we study robustness of the methods to misspecification by generating the data according to an ARMA(1,1) model but still analysing the data with an AR(1) model. We use the two methods with the lowest bias for this study, i.e., Bsr and MLE. The bias grows as the non-modelled moving average parameter becomes larger. Both the variability and power depend on the non-modelled parameter. The differences between the two estimation methods are negligible on all measures.
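Two of the frequentist estimators compared in Study 1 can be written down directly: the lag-1 sample autocorrelation r1 and the OLS regression of each observation on its predecessor. A sketch on simulated AR(1) data; the Bayesian variants and the C-statistic are omitted.

```python
import numpy as np

def ar1_estimates(y):
    """Two frequentist AR(1) estimators: lag-1 autocorrelation r1 and OLS."""
    yc = y - y.mean()
    r1 = (yc[:-1] @ yc[1:]) / (yc @ yc)               # sample autocorrelation
    ols = (yc[:-1] @ yc[1:]) / (yc[:-1] @ yc[:-1])    # regress y_t on y_{t-1}
    return r1, ols

def simulate_ar1(phi, n, seed=0):
    """Generate an AR(1) series y_t = phi*y_{t-1} + e_t with unit-variance noise."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal()
    return y
```

For the short series of the study (T = 10) both estimators are strongly biased toward zero, which is what motivates the Bayesian alternatives.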
Abuabara, Alexander; Abuabara, Allan; Tonchuk, Carin Albino Luçolli
2017-01-01
The World Health Organization recognizes suicide as a public health priority. Increased knowledge of suicide risk factors is needed in order to be able to adopt effective prevention strategies. The aim of this study was to analyze and compare the association between the Gini coefficient (which is used to measure inequality) and suicide death rates over a 14-year period (2000-2013) in Brazil and in the United States (US). The hypothesis put forward was that reduction of income inequality is accompanied by reduction of suicide rates. Descriptive cross-sectional time-series study in Brazil and in the US. Population, death and suicide death data were extracted from the DATASUS database in Brazil and from the National Center for Health Statistics in the US. Gini coefficient data were obtained from the World Development Indicators. Time series analysis was performed on Brazilian and American official data regarding the number of deaths caused by suicide between 2000 and 2013 and the Gini coefficients of the two countries. The suicide trends were examined and compared. Brazil and the US present converging Gini coefficients, mainly due to reduction of inequality in Brazil over the last decade. However, suicide rates are not converging as hypothesized, but are in fact rising in both countries. The hypothesis that reduction of income inequality is accompanied by reduction of suicide rates was not verified.
Sector Identification in a Set of Stock Return Time Series Traded at the London Stock Exchange
NASA Astrophysics Data System (ADS)
Coronnello, C.; Tumminello, M.; Lillo, F.; Micciche, S.; Mantegna, R. N.
2005-09-01
We compare some methods recently used in the literature to detect the existence of a certain degree of common behavior among stock returns belonging to the same economic sector. Specifically, we discuss methods based on random matrix theory and hierarchical clustering techniques. We apply these methods to a portfolio of stocks traded at the London Stock Exchange. The investigated time series are recorded both at a daily time horizon and at a 5-minute time horizon. The correlation coefficient matrix is very different at different time horizons, confirming that more structured correlation coefficient matrices are observed for long time horizons. All the considered methods are able to detect economic information and the presence of clusters characterized by the economic sector of the stocks. However, different methods show different degrees of sensitivity with respect to different sectors. Our comparative analysis suggests that applying just a single method may not extract all the economic information present in the correlation coefficient matrix of a stock portfolio.
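A minimal version of the hierarchical-clustering approach uses Mantegna's correlation-to-distance transform d = sqrt(2*(1 - rho)) followed by linkage clustering. The SciPy sketch below is illustrative; the linkage method and target cluster count are arbitrary choices, not those of the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def sector_clusters(returns, n_clusters=2):
    """Cluster assets (rows of `returns`) by return correlation:
    Mantegna distance d_ij = sqrt(2*(1 - rho_ij)), then single-linkage
    hierarchical clustering cut at `n_clusters` groups."""
    rho = np.corrcoef(returns)
    d = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))  # clip guards tiny negatives
    np.fill_diagonal(d, 0.0)
    Z = linkage(squareform(d, checks=False), method="single")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```

Applied to returns driven by sector-specific common factors, the recovered clusters track the factors, mirroring the sector structure the paper detects.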
NASA Astrophysics Data System (ADS)
Yu, B.; Shang, S.
2016-12-01
Food shortage is one of the major challenges that human beings are facing, and improved monitoring of the planting and distribution of the main crops is urgently needed to address the associated economic and social issues. Recently, the extensive use of remote sensing satellite data has provided favorable conditions for crop identification in large irrigation districts with complex planting structures. Differences in crop phenology are the main basis for crop identification, and normalized difference vegetation index (NDVI) time series can delineate the crop phenology cycle well. The key to crop identification is therefore obtaining high-quality NDVI time series. MODIS and Landsat TM satellite images are the most frequently used sources; however, neither can guarantee both high temporal and high spatial resolution. Accordingly, this paper uses NDVI time series extracted from China Environment Satellite (HJ-1A/1B) data, which have a two-day repeat cycle and 30 m spatial resolution. The NDVI time series are fitted with an asymmetric logistic curve; the fit is good, with correlation coefficients greater than 0.9. Phenological parameters are derived from the fitted NDVI curves, and crops are identified through the distinct relation ellipses between NDVI and its phenological parameters for different crops. Taking the Hetao Irrigation District of Inner Mongolia as an example, multi-year maize and sunflower are identified in the district with good results: compared with official statistics, the relative errors are both below 5%. The results show that the NDVI time-series dataset derived from the HJ-1A/1B CCD can delineate the crop phenology cycle accurately, demonstrating its applicability to crop identification in irrigated districts.
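Fitting an asymmetric logistic curve to an NDVI annual cycle can be sketched with SciPy. The double-logistic parameterization below (background level, amplitude, green-up and senescence slopes and dates) is a common choice and an assumption on our part, not necessarily the exact form used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, k1, t1, k2, t2):
    """Asymmetric (double) logistic model of an NDVI annual cycle:
    rise at day t1 with slope k1, decline at day t2 with slope k2."""
    return base + amp * (1.0 / (1.0 + np.exp(-k1 * (t - t1)))
                         - 1.0 / (1.0 + np.exp(-k2 * (t - t2))))

def fit_ndvi(t, ndvi, p0):
    """Least-squares fit; p0 is a rough initial guess for the six parameters."""
    params, _ = curve_fit(double_logistic, t, ndvi, p0=p0, maxfev=10000)
    return params
```

Phenological parameters (start, peak, and end of season) can then be read off the fitted curve, which is what drives the crop discrimination step.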
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
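The first step of the procedure, a simple linear regression of suspended-sediment concentration (SSC) on turbidity with a model-error summary, can be sketched as follows. The MSPE here is approximated as the regression standard error relative to mean concentration; this is a simplification for illustration, not the exact USGS definition.

```python
import numpy as np

def fit_ssc_model(turbidity, ssc):
    """Simple linear regression SSC = b0 + b1*turbidity.

    Returns the coefficients and a rough model standard percentage error
    (regression standard error as a percentage of mean SSC)."""
    b1, b0 = np.polyfit(turbidity, ssc, 1)
    pred = b0 + b1 * turbidity
    resid = ssc - pred
    se = np.sqrt(resid @ resid / (len(ssc) - 2))   # regression standard error
    mspe = 100.0 * se / ssc.mean()
    return b0, b1, mspe
```

If the MSPE fails the minimum criterion, the procedure described above moves on to a multiple regression that adds streamflow as a second predictor.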
Deviations from uniform power law scaling in nonstationary time series
NASA Technical Reports Server (NTRS)
Viswanathan, G. M.; Peng, C. K.; Stanley, H. E.; Goldberger, A. L.
1997-01-01
A classic problem in physics is the analysis of highly nonstationary time series that typically exhibit long-range correlations. Here we test the hypothesis that the scaling properties of the dynamics of healthy physiological systems are more stable than those of pathological systems by studying beat-to-beat fluctuations in the human heart rate. We develop techniques based on the Fano factor and Allan factor functions, as well as on detrended fluctuation analysis, for quantifying deviations from uniform power-law scaling in nonstationary time series. By analyzing extremely long data sets of up to N = 10^5 beats for 11 healthy subjects, we find that the fluctuations in the heart rate scale approximately uniformly over several temporal orders of magnitude. By contrast, we find that in data sets of comparable length for 14 subjects with heart disease, the fluctuations grow erratically, indicating a loss of scaling stability.
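Detrended fluctuation analysis, one of the techniques used above, can be sketched compactly: integrate the series, detrend it within windows of size s, and read the scaling exponent from F(s) ~ s^alpha. The scale choices below are illustrative.

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """First-order DFA: scaling exponent alpha of F(s) ~ s^alpha.

    alpha ~ 0.5 for white noise, ~ 1.0 for 1/f noise."""
    y = np.cumsum(x - x.mean())                 # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        sq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Deviations of the local slope of log F(s) versus log s from a single straight line are precisely the departures from uniform power-law scaling quantified in the paper.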
Relation between delayed feedback and delay-coupled systems and its application to chaotic lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soriano, Miguel C., E-mail: miguel@ifisc.uib-csic.es; Flunkert, Valentin; Fischer, Ingo
2013-12-15
We present a systematic approach to identify the similarities and differences between a chaotic system with delayed feedback and two mutually delay-coupled systems. We consider the general case in which the coupled systems are either unsynchronized or in a generally synchronized state, in contrast to the mostly studied case of identical synchronization. We construct a new time-series for each of the two coupling schemes, respectively, and present analytic evidence and numerical confirmation that these two constructed time-series are statistically equivalent. From the construction, it then follows that the distribution of time-series segments that are small compared to the overall delay in the system is independent of the value of the delay and of the coupling scheme. By focusing on numerical simulations of delay-coupled chaotic lasers, we present a practical example of our findings.
Detecting a currency’s dominance using multivariate time series analysis
NASA Astrophysics Data System (ADS)
Syahidah Yusoff, Nur; Sharif, Shamshuritawati
2017-09-01
A currency exchange rate is the price of one country’s currency in terms of another country’s currency. Daily trading yields four different prices: opening, closing, highest, and lowest. In the past, many studies have used the closing price only. However, the four prices are interrelated, so a multivariate time series can provide more information than a univariate one. The aim of this paper is therefore to compare the results of two different approaches, the mean vector and Escoufier’s RV coefficient, in constructing similarity matrices of 20 world currencies. Both matrices are then used in place of the correlation matrix required by network topology. With the help of the degree centrality measure, we can detect the dominant currency in both networks. The pros and cons of both approaches are presented at the end of this paper.
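Escoufier's RV coefficient between two multivariate series (for example, the four daily prices of two currencies) has a short closed form over the cross-covariance matrices. A minimal sketch:

```python
import numpy as np

def rv_coefficient(X, Y):
    """Escoufier's RV coefficient between data matrices X and Y
    (rows = observations, columns = variables). Ranges from 0 to 1;
    RV(X, X) = 1."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxy = Xc.T @ Yc
    Sxx = Xc.T @ Xc
    Syy = Yc.T @ Yc
    return np.trace(Sxy @ Sxy.T) / np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))
```

Computing RV for every currency pair fills the similarity matrix that the paper then feeds into the network topology and degree centrality analysis.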
ERIC Educational Resources Information Center
Li, Dongmei; Yi, Qing; Harris, Deborah
2017-01-01
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
NASA Astrophysics Data System (ADS)
Auer, I.; Kirchengast, A.; Proske, H.
2009-09-01
The ongoing climate change debate focuses more and more on changing extreme events. Information on past events can be derived from a number of sources, such as instrumental data, residual impacts in the landscape, chronicles, and people's memories. A project called "A Tale of Two Valleys", within the framework of the research program "proVision", made it possible to study past extreme events in two inner-alpine valleys using the sources mentioned above. Instrumental climate time series provided information for the past 200 years; however, great attention had to be paid to the homogeneity of the series. To derive homogenized time series of selected climate change indices, methods such as HOCLIS and Vincent were applied. Trend analyses of the climate change indices inform about increases or decreases in extreme events. Traces of major geomorphodynamic processes of the past (e.g., rockfalls, landslides, debris flows) that were triggered or affected by extreme weather events are still apparent in the landscape and could be evaluated by geomorphological analysis using remote sensing and field data. Regional chronicles provided additional knowledge and covered longer periods back in time; however, compared to meteorological time series they involve a high degree of subjectivity, and intermittent recording cannot be ruled out. Finally, questionnaires and oral history complemented our picture of past extreme weather events, as people were affected differently and retain different memories of them. The joint analysis of these four data sources showed agreement to some extent, but also some reasonable differences: meteorological data are point measurements only, sometimes with too coarse a temporal resolution, and due to land-use changes and improved constructional measures the impact of an extreme meteorological event may be different today than in earlier times.
Gleason, Jessie A; Fagliano, Jerald A
2015-10-01
Asthma is one of the most common chronic diseases affecting children. This study assesses the associations of ozone and fine particulate matter (PM2.5) with pediatric emergency department visits in the urban environment of Newark, NJ. Two study designs were utilized and evaluated for usability. We obtained daily emergency department visits among children aged 3-17 years with a primary diagnosis of asthma during April to September for 2004-2007. Both a time-stratified case-crossover study design with bi-directional control sampling and a time-series study design were utilized. Lagged effects (1-d through 5-d lag, 3-d average, and 5-d average) of ozone and PM2.5 were explored and a dose-response analysis comparing the bottom 5th percentile of 3-d average lag ozone with each 5 percentile increase was performed. Associations of interquartile range increase in same-day ozone were similar between the time-series and case-crossover study designs (RR = 1.08, 95% CI 1.04-1.12) and (OR = 1.10, 95% CI 1.06-1.14), respectively. Similar associations were seen for 1-day lag and 3-day average lag ozone levels. PM2.5 was not associated with the outcome in either study design. Dose-response assessment indicated a statistically significant and increasing association around 50-55 ppb consistent for both study designs. Ozone was statistically positively associated with pediatric asthma ED visits in Newark, NJ. Our results were generally comparable across the time-series and case-crossover study designs, indicating both are useful to assess local air pollution impacts.
Ambient temperature and coronary heart disease mortality in Beijing, China: a time series study
2012-01-01
Background Many studies have examined the association between ambient temperature and mortality. However, less evidence is available on the temperature effects on coronary heart disease (CHD) mortality, especially in China. In this study, we examined the relationship between ambient temperature and CHD mortality in Beijing, China during 2000 to 2011. In addition, we compared time series and time-stratified case-crossover models for the non-linear effects of temperature. Methods We examined the effects of temperature on CHD mortality using both time series and time-stratified case-crossover models. We also assessed the effects of temperature on CHD mortality by subgroups: gender (female and male) and age (age > =65 and age < 65). We used a distributed lag non-linear model to examine the non-linear effects of temperature on CHD mortality up to 15 lag days. We used the Akaike information criterion to assess the model fit for the two designs. Results The time series models had a better model fit than the time-stratified case-crossover models. Both designs showed that the relationships between temperature and group-specific CHD mortality were non-linear. Extreme cold and hot temperatures significantly increased the risk of CHD mortality. Hot effects were acute and short-term, while cold effects were delayed by two days and lasted for five days. Older people and women were more sensitive to extreme cold and hot temperatures than younger people and men. Conclusions This study suggests that time series models performed better than time-stratified case-crossover models according to the model fit, even though they produced similar non-linear effects of temperature on CHD mortality. In addition, our findings indicate that extreme cold and hot temperatures increase the risk of CHD mortality in Beijing, China, particularly for women and older people. PMID:22909034
Delay differential analysis of time series.
Lainscsek, Claudia; Sejnowski, Terrence J
2015-03-01
Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. 
Together, this time-domain toolbox provides higher temporal resolution, increased frequency and phase coupling information, and it allows an easy and straightforward implementation of higher-order spectra across time compared with frequency-based methods such as the DFT and cross-spectral analysis.
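The functional-embedding idea above, combining delayed copies of a signal and fitting a small DDE model whose coefficients and residual serve as detection/classification features, can be sketched as follows. The delays and the purely linear model form are illustrative choices, not the authors' selected models.

```python
import numpy as np

def delay_embed(x, delays):
    """Nonuniform delay embedding: column j holds x shifted by delays[j] samples."""
    d_max = max(delays)
    return np.column_stack([x[d_max - d:len(x) - d] for d in delays])

def fit_linear_dde(x, dt, delays):
    """Least-squares fit of dx/dt = sum_j a_j * x(t - tau_j).

    The coefficients a_j and the residual can be used as features for
    signal detection or classification."""
    d_max = max(delays)
    dxdt = np.gradient(x, dt)[d_max:]       # numerical derivative, aligned
    E = delay_embed(x, delays)
    a, *_ = np.linalg.lstsq(E, dxdt, rcond=None)
    return a, dxdt - E @ a
```

For a pure sinusoid, two delayed copies with distinct delays span both the sine and cosine components, so the linear DDE fit is nearly exact; richer signals leave structured residuals that carry the spectral information discussed above.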
Small-world bias of correlation networks: From brain to climate
NASA Astrophysics Data System (ADS)
Hlinka, Jaroslav; Hartman, David; Jajcay, Nikola; Tomeček, David; Tintěra, Jaroslav; Paluš, Milan
2017-03-01
Complex systems are commonly characterized by the properties of their graph representation. Dynamical complex systems are then typically represented by a graph of temporal dependencies between time series of state variables of their subunits. It has been shown recently that graphs constructed in this way tend to have relatively clustered structure, potentially leading to spurious detection of small-world properties even in the case of systems with no or randomly distributed true interactions. However, the strength of this bias depends heavily on a range of parameters and its relevance for real-world data has not yet been established. In this work, we assess the relevance of the bias using two examples of multivariate time series recorded in natural complex systems. The first is the time series of local brain activity as measured by functional magnetic resonance imaging in resting healthy human subjects, and the second is the time series of average monthly surface air temperature coming from a large reanalysis of climatological data over the period 1948-2012. In both cases, the clustering in the thresholded correlation graph is substantially higher compared with a realization of a density-matched random graph, while the shortest paths are relatively short, thus showing distinguishing features of small-world structure. However, comparable or even stronger small-world properties were reproduced in correlation graphs of model processes with randomly scrambled interconnections. This suggests that the small-world properties of the correlation matrices of these real-world systems do not genuinely reflect the properties of the underlying interaction structure, but rather result from the inherent properties of the correlation matrix.
Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion
NASA Astrophysics Data System (ADS)
Li, Z.; Ghaith, M.
2017-12-01
Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models, providing probabilistic forecasting in a more computationally efficient manner than the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. The developed approach can be extended to many other complicated geophysical and environmental modeling systems to support uncertainty quantification and risk analysis.
NASA Astrophysics Data System (ADS)
Horvath, Alexander; Horwath, Martin; Pail, Roland
2014-05-01
The Release-05 monthly solutions by the three centers of the GRACE Science and Data System are a significant improvement with respect to the previous Release-04. However, previous assessments have revealed different noise levels between the solutions of CSR, GFZ and JPL, and also different amplitudes of interannual signal in the GFZ solutions compared to the other two centers. Encouraged by the science community, GFZ and CSR have kindly provided additional sets of time series. GFZ has reprocessed the RL05 monthly solutions (up to degree and order 90) with revised processing. CSR has made available monthly solutions with standard processing up to degree and order 96, in addition to their solutions up to degree and order 60. We compare these different time series with respect to their signal and noise content and analyze them at global and regional scales. At the regional scale our particular interest is Antarctica and the detection of polar signals such as ice mass trends and GIA. Given the necessity of destriping, an optimal setup of the Swenson & Wahr filter approach is evaluated to adapt to the specific signal and noise levels in Antarctica. Furthermore, we analyze the potential benefit of mixed time series solutions that combine the strengths of the available solutions. Concerning the question of an optimal maximum degree, we suggest that for resolving large polar ice mass changes it would be beneficial to provide gravity field variations even beyond degree 90.
Time series modeling of live-cell shape dynamics for image-based phenotypic profiling.
Gordonov, Simon; Hwang, Mun Kyung; Wells, Alan; Gertler, Frank B; Lauffenburger, Douglas A; Bathe, Mark
2016-01-01
Live-cell imaging can be used to capture spatio-temporal aspects of cellular responses that are not accessible to fixed-cell imaging. As the use of live-cell imaging continues to increase, new computational procedures are needed to characterize and classify the temporal dynamics of individual cells. For this purpose, here we present the general experimental-computational framework SAPHIRE (Stochastic Annotation of Phenotypic Individual-cell Responses) to characterize phenotypic cellular responses from time series imaging datasets. Hidden Markov modeling is used to infer and annotate morphological state and state-switching properties from image-derived cell shape measurements. Time series modeling is performed on each cell individually, making the approach broadly useful for analyzing asynchronous cell populations. Two-color fluorescent cells simultaneously expressing actin and nuclear reporters enabled us to profile temporal changes in cell shape following pharmacological inhibition of cytoskeleton-regulatory signaling pathways. Results are compared with existing approaches conventionally applied to fixed-cell imaging datasets, and indicate that time series modeling captures heterogeneous dynamic cellular responses that can improve drug classification and offer additional important insight into mechanisms of drug action. The software is available at http://saphire-hcs.org.
The coupling analysis between stock market indices based on permutation measures
NASA Astrophysics Data System (ADS)
Shi, Wenbin; Shang, Pengjian; Xia, Jianan; Yeh, Chien-Hung
2016-04-01
Many information-theoretic methods have been proposed for analyzing the coupling dependence between time series, and it is important to quantify the correlation between financial sequences, since the financial market is a complex, evolving dynamic system. Recently, we developed a new permutation-based entropy, called cross-permutation entropy (CPE), to detect the coupling structures between two synchronous time series. In this paper, we extend the CPE method to weighted cross-permutation entropy (WCPE) to address some of CPE's limitations, mainly its inability to differentiate between distinct patterns of a certain motif and its sensitivity to patterns close to the noise floor. It shows more stable and reliable results than CPE does when applied to spiky data and AR(1) processes. Besides, we adapt the CPE method to infer the complexity of short-length time series by freely changing the time delay, and test it with Gaussian random series and random walks. The modified method shows advantages in reducing deviations of the entropy estimation compared with the conventional one. Finally, the weighted cross-permutation entropy of eight important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.
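The weighting idea can be illustrated on a single series. The sketch below is a generic variance-weighted permutation entropy in the spirit of WCPE, not the authors' exact two-series WCPE: each ordinal pattern's count is weighted by the variance of its embedded vector, so flat near-noise motifs contribute little.

```python
import math
import numpy as np

def weighted_permutation_entropy(x, order=3, delay=1):
    """Permutation entropy with each ordinal pattern weighted by the
    variance of its embedded vector (normalised to [0, 1])."""
    x = np.asarray(x, float)
    n = len(x) - (order - 1) * delay
    weights = {}
    for i in range(n):
        v = x[i:i + order * delay:delay]
        pat = tuple(np.argsort(v))                     # ordinal pattern of the motif
        weights[pat] = weights.get(pat, 0.0) + float(v.var())
    total = sum(weights.values())
    if total == 0.0:                                   # constant series: no information
        return 0.0
    probs = [w / total for w in weights.values() if w > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))

wpe_noise = weighted_permutation_entropy(
    np.random.default_rng(0).standard_normal(500))     # near 1 for white noise
```

A monotonic series produces a single pattern and therefore zero entropy, while white noise spreads weight over all patterns and approaches 1.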
Cross-sample entropy of foreign exchange time series
NASA Astrophysics Data System (ADS)
Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao
2010-11-01
The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method for the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before the Asian currency crisis. Comparison with the correlation coefficient shows that cross-SampEn is better suited to describing the correlation between time series.
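A minimal cross-SampEn sketch, following the Richman-and-Moorman-style definition (templates from one series matched against the other; higher values mean greater asynchrony). Boundary conventions for the template counts vary slightly between implementations, so treat this as one reasonable variant.

```python
import numpy as np

def _match_count(u, v, m, r):
    """Template pairs (one from u, one from v) with Chebyshev distance <= r."""
    cu = np.lib.stride_tricks.sliding_window_view(u, m)
    cv = np.lib.stride_tricks.sliding_window_view(v, m)
    d = np.max(np.abs(cu[:, None, :] - cv[None, :, :]), axis=2)
    return int(np.sum(d <= r))

def cross_sampen(u, v, m=2, r=0.2):
    """Cross-sample entropy of two series, standardised before matching."""
    u = (np.asarray(u, float) - np.mean(u)) / np.std(u)
    v = (np.asarray(v, float) - np.mean(v)) / np.std(v)
    b = _match_count(u, v, m, r)        # matches at template length m
    a = _match_count(u, v, m + 1, r)    # matches at template length m + 1
    return float('inf') if a == 0 else float(-np.log(a / b))

rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(300)) + 0.1 * rng.standard_normal(300)
y = np.sin(0.3 * np.arange(300)) + 0.1 * rng.standard_normal(300)  # synchronised with x
z = rng.standard_normal(300)                                       # unrelated noise
ce_sync = cross_sampen(x, y)    # low: the two series move together
ce_rand = cross_sampen(x, z)    # higher: asynchronous pair
```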
Keshmiri, Soheil; Sumioka, Hidenubo; Yamazaki, Ryuji; Ishiguro, Hiroshi
2018-01-01
Neuroscience research shows a growing interest in the application of Near-Infrared Spectroscopy (NIRS) to the analysis and decoding of the brain activity of human subjects. Given the correlation observed between the Blood Oxygen Level Dependent (BOLD) responses exhibited by the time series data of functional Magnetic Resonance Imaging (fMRI) and the hemoglobin oxy/deoxygenation captured by NIRS, linear models play a central role in these applications. This, in turn, has led to the adoption of feature extraction strategies that are well suited to data exhibiting a high degree of linearity, namely, the slope and the mean as well as their combination, to summarize the informational content of the NIRS time series. In this article, we demonstrate that these features are inefficient in capturing the variational information of NIRS data, limiting the reliability and adequacy of the conclusions drawn from their results. Alternatively, we propose the linear estimate of the differential entropy of these time series as a natural representation of such information. We provide evidence for our claim through comparative analysis of the application of these features to NIRS data pertaining to several working memory tasks as well as naturalistic conversational stimuli.
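Under a Gaussian assumption, the linear (closed-form) estimate of differential entropy reduces to a function of the sample variance, H = ½ ln(2πe·σ²), so it captures variational information that the mean and slope ignore. A small sketch (not the authors' exact estimator):

```python
import math
import statistics

def gaussian_diff_entropy(x):
    """Linear (Gaussian) estimate of differential entropy in nats:
    H = 0.5 * ln(2 * pi * e * var(x))."""
    return 0.5 * math.log(2 * math.pi * math.e * statistics.pvariance(x))
```

For a signal alternating between 0 and 1 (population variance 0.25) this gives roughly 0.726 nats, and the estimate grows monotonically with the variance of the series.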
NASA Astrophysics Data System (ADS)
Dash, Y.; Mishra, S. K.; Panigrahi, B. K.
2017-12-01
Prediction of northeast/post-monsoon rainfall, which occurs during October, November and December (OND) over the Indian peninsula, is a challenging task due to the dynamic nature of the uncertain, chaotic climate. It is imperative to elucidate this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors such as Sea Surface Temperature (SST) and Sea Level Pressure (SLP) and b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques were applied to SST and SLP data (1948-2014) obtained from the NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. This study then investigated the applicability of ML methods using the OND rainfall time series for 1948-2014 and forecasted up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the results revealed good performance of the ML algorithms with minimal error scores. Thus, both statistical and empirical methods are found useful for long-range climatic projections.
A randomized intervention of reminder letter for human papillomavirus vaccine series completion.
Chao, Chun; Preciado, Melissa; Slezak, Jeff; Xu, Lanfang
2015-01-01
Completion rate for the three-dose series of the human papillomavirus (HPV) vaccine has generally been low. This study evaluated the effectiveness of a reminder letter intervention on HPV vaccine three-dose series completion. Female members of Kaiser Permanente Southern California Health Plan who received at least one dose, but not more than two doses, of the HPV vaccine by February 13, 2013, and who were between ages 9 and 26 years at the time of first HPV vaccination were included. Eighty percent of these females were randomized to receive the reminder letter, and 20% were randomized to receive standard of care (control). The reminder letters were mailed quarterly to those who had not completed the series. The proportion of series completion at the end of the 12-month evaluation period was compared using the chi-square test. A total of 9,760 females were included in the intervention group and 2,445 in the control group. HPV vaccine series completion was 56.4% in the intervention group and 46.6% in the control group (p < .001). The effect of the intervention appeared to be stronger in girls aged 9-17 years compared with young women aged 18-26 years at the first dose, and in blacks compared with whites. Reminder letters scheduled quarterly were effective in enhancing HPV vaccine series completion among those who initiated the vaccine. However, a large gap in series completion remained despite the intervention. Future studies should address other barriers to series completion, including those at the provider and health-care-system levels. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
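The reported comparison can be reproduced approximately from the published figures. The cell counts below are reconstructed by rounding the percentages against the group sizes, so they are approximations rather than the study's raw data; the statistic nevertheless lands far beyond the p < .05 critical value of 3.84 (1 df), consistent with the reported p < .001.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (completed vs. not completed, by study arm)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# cell counts reconstructed from the reported percentages (approximate)
n_int, n_ctl = 9760, 2445
done_int = round(0.564 * n_int)   # intervention completers
done_ctl = round(0.466 * n_ctl)   # control completers
chi2 = chi2_2x2(done_int, n_int - done_int, done_ctl, n_ctl - done_ctl)
```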
Severe European winters in a secular perspective
NASA Astrophysics Data System (ADS)
Hoy, Andreas; Hänsel, Stephanie
2017-04-01
Temperature conditions during the winter time are substantially shaped by a strong year-to-year variability. European winters since the late 1980s - compared to previous decades and centuries - were mainly characterised by a high temperature level, including recent record-warm winters. Yet comparably cold winters and severe cold spells still occur nowadays, as observed from 2009 to 2013 and in early 2017. In 2010, Central England experienced its second coldest December since the start of observations more than 350 years ago, and some of the lowest temperatures ever measured in northern Europe (below -50 °C in Lapland) were recorded in January 1999. Analysing the thermal characteristics and spatial distribution of severe (historical) winters - using early instrumental data - helps expand and consolidate our knowledge of past weather extremes. This contribution presents efforts in this direction. We focus on a) compiling and assessing a very long-term instrumental, spatially widespread and well-distributed, high-quality meteorological data set in order to b) investigate very cold winter temperatures in Europe from the earliest measurements until today. In a first step, we analyse the longest available time series of monthly temperature averages within Europe. Our dataset extends from the Nordic countries to the Mediterranean and from the British Isles to Russia. We utilise homogenised time series wherever possible in order to ensure reliable results. Homogenised data derive from the NORDHOM (Scandinavia) and HISTALP (greater alpine region) datasets or were obtained from national weather services and universities. Other (not specifically homogenised) data were derived from the ECA&D dataset or national institutions. The employed time series often begin in the 18th century, with Paris & Central England being the longest datasets (from 1659). In a second step, daily temperature averages are involved.
Only some of those series are homogenised, but those available are sufficiently well distributed throughout Europe to ensure reliable results. Furthermore, the comparably dense network of long-term observations allows appropriate quality checking within the network. Additionally, the large collection of homogenised monthly data enables assessing the quality of many daily series. Daily data are used to sum up negative values over the respective winter periods to create time series of "cold sums", which are a good indicator of the severity of winters in most parts of Europe. Additionally, days below certain thresholds may be counted or summed. Future work will include daily minimum and maximum temperatures, allowing an extensive set of climate indices to be calculated and applied, refining the work presented here.
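The "cold sum" and threshold-day indicators described above are straightforward to compute from daily means; a small sketch:

```python
def cold_sum(daily_means):
    """Sum of negative daily mean temperatures over a winter
    (a negative number; more negative = more severe)."""
    return sum(t for t in daily_means if t < 0)

def days_below(daily_means, threshold):
    """Count of days with a mean temperature below the given threshold."""
    return sum(1 for t in daily_means if t < threshold)
```

For example, a four-day stretch of -5, 2, -3 and 0 °C yields a cold sum of -8 °C·days and one day below -4 °C.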
NASA Astrophysics Data System (ADS)
Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia
2016-04-01
Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) by chaotic time series prediction, using the least squares self-exciting threshold autoregressive (SETAR) model on umbilical cord blood in an electronic waste (e-waste) contaminated area. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the proposed scheme's validity was further verified by numerical simulation experiments. The experimental results show that: 1) seven kinds of PCB congeners correlate negatively with five different indices of birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) the prenatally PCB-exposed group is at greater risk compared to the reference group; 3) PCBs increasingly accumulate with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future is greater. The desirable numerical simulation results demonstrate the feasibility of applying mathematical models in the environmental toxicology field.
Chen, C P; Wan, J Z
1999-01-01
A fast learning algorithm is proposed to find the optimal weights of flat neural networks (in particular, the functional-link network). Although flat networks are used for nonlinear function approximation, they can be formulated as linear systems. Thus, the weights of the networks can be solved easily using a linear least-squares method. This formulation makes it easy to update the weights instantly for both a newly added pattern and a newly added enhancement node. A dynamic stepwise updating algorithm is proposed to update the weights of the system on the fly. The model is tested on several time-series data sets, including an infrared laser data set, a chaotic time series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared to existing models that require more complex architectures and more costly training. The results indicate that the proposed model is very attractive for real-time processes.
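The key point, that a flat functional-link network reduces to a linear least-squares problem, can be sketched as follows. This is a generic random-enhancement-node setup (not the paper's exact architecture): the enhancement weights are fixed at random, and only the output weights are solved, in one shot, by least squares.

```python
import numpy as np

rng = np.random.default_rng(42)

# toy regression task: y = sin(x)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

# random enhancement nodes (functional-link layer); these weights are never trained
W = rng.standard_normal((1, 30))
b = rng.standard_normal(30)
H = np.hstack([X, np.tanh(X @ W + b)])     # flat network: inputs + enhancement nodes

# output weights are the only trained parameters: a linear least-squares solve
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = float(np.mean((H @ beta - y) ** 2))
```

The stepwise updating in the abstract corresponds to rank-one updates of this solve (e.g. recursive least squares) when a pattern or an enhancement node is appended.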
Artificial neural networks applied to forecasting time series.
Montaño Moreno, Juan J; Palmer Pol, Alfonso; Muñoz Gracia, Pilar
2011-04-01
This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved to be useful in time series forecasting, and also a standard procedure for the practical application of ANN in this type of task. The Multilayer Perceptron (MLP), Radial Basis Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparative study establishes that the error made by the four neural network models analyzed is less than 10%. According to the interpretation criteria for this performance, it can be concluded that the neural network models show a close fit in terms of their forecasting capacity. The model with the best performance is the RBF, followed by the RNN and MLP. The GRNN model is the one with the worst performance. Finally, we analyze the advantages and limitations of ANN, the possible solutions to these limitations, and provide an orientation towards future research.
Optical signal processing using photonic reservoir computing
NASA Astrophysics Data System (ADS)
Salehi, Mohammad Reza; Dehyadegari, Louiza
2014-10-01
As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency and high speed. In this paper, a photonic structure is proposed for reservoir computing and investigated using a simple yet nontrivial noisy time series prediction task. This study includes the application of a suitable topology with self-feedback in a network of SOAs, which lends the system a strong memory, and the tuning of adequate parameters, resulting in perfect recognition accuracy (100%) for noise-free time series, a 3% improvement over previous results. For the classification of noisy time series, the accuracy showed a 4% increase, reaching 96%. Furthermore, an analytical approach is suggested for solving the rate equations, which leads to a substantial decrease in simulation time, an important parameter in the classification of large signals such as speech, and yields better results compared with previous works.
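The reservoir-computing principle behind such photonic systems can be illustrated with a conventional echo-state sketch: the reservoir (here a random recurrent network rather than a network of SOAs) is fixed, and only a linear readout is trained. This is a generic toy, not the paper's photonic system; the spectral radius of 0.9 and input scaling are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res = 100

# fixed random reservoir, rescaled to spectral radius 0.9 for the echo-state property
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, n_res)

u = np.sin(0.2 * np.arange(1000))            # input signal
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t, ut in enumerate(u):
    x = np.tanh(W @ x + W_in * ut)           # reservoir state update
    states[t] = x

# only the linear readout is trained (ridge regression), here to predict u[t+1]
S, target = states[100:-1], u[101:]          # discard the initial transient
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ target)
mse = float(np.mean((S @ w_out - target) ** 2))
```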
Edwards, Ann L; Dawson, Michael R; Hebert, Jacqueline S; Sherstan, Craig; Sutton, Richard S; Chan, K Ming; Pilarski, Patrick M
2016-10-01
Myoelectric prostheses currently used by amputees can be difficult to control. Machine learning, and in particular learned predictions about user intent, could help to reduce the time and cognitive load required by amputees while operating their prosthetic device. The goal of this study was to compare two switching-based methods of controlling a myoelectric arm: non-adaptive (or conventional) control and adaptive control (involving real-time prediction learning). Case series study. We compared non-adaptive and adaptive control in two different experiments. In the first, one amputee and one non-amputee subject controlled a robotic arm to perform a simple task; in the second, three able-bodied subjects controlled a robotic arm to perform a more complex task. For both tasks, we calculated the mean time and total number of switches between robotic arm functions over three trials. Adaptive control significantly decreased the number of switches and total switching time for both tasks compared with the conventional control method. Real-time prediction learning was successfully used to improve the control interface of a myoelectric robotic arm during uninterrupted use by an amputee subject and able-bodied subjects. Adaptive control using real-time prediction learning has the potential to help decrease both the time and the cognitive load required by amputees in real-world functional situations when using myoelectric prostheses. © The International Society for Prosthetics and Orthotics 2015.
NASA Astrophysics Data System (ADS)
Wang, Dong; Ding, Hao; Singh, Vijay P.; Shang, Xiaosan; Liu, Dengfeng; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing
2015-05-01
For scientific and sustainable management of water resources, hydrologic and meteorologic data series often need to be extended. This paper proposes a hybrid approach, named WA-CM (wavelet analysis-cloud model), for data series extension. Wavelet analysis has time-frequency localization features, known as a "mathematical microscope," that can decompose and reconstruct hydrologic and meteorologic series by wavelet transform. The cloud model is a mathematical representation of fuzziness and randomness and has strong robustness for uncertain data. The WA-CM approach first employs the wavelet transform to decompose the measured nonstationary series and then uses the cloud model to develop an extension model for each decomposition-layer series. The final extension is obtained by summing the results of the extension of each layer. Two kinds of meteorologic and hydrologic data sets with different characteristics and different influence of human activity from six (three pairs of) representative stations are used to illustrate the WA-CM approach. The approach is also compared with four other methods: the conventional correlation extension method, the Kendall-Theil robust line method, the artificial neural network method (back propagation, multilayer perceptron, and radial basis function), and the single cloud model method. To evaluate the model performance completely and thoroughly, five measures are used: relative error, mean relative error, standard deviation of relative error, root mean square error, and the Theil inequality coefficient. Results show that the WA-CM approach is effective, feasible, and accurate, and it is found to be better than the other four methods compared. The theory employed and the approach developed here can be applied to the extension of data in other areas as well.
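The five evaluation measures listed above have simple closed forms. A sketch, where the Theil inequality coefficient follows the common bounded definition, RMSE divided by the sum of the root mean squares of the two series (this specific form is an assumption, since the abstract does not spell out the formula):

```python
import numpy as np

def evaluation_measures(obs, ext):
    """Five accuracy measures for an extended series against observations:
    relative error (RE), mean RE (MRE), standard deviation of RE (SDRE),
    RMSE, and the Theil inequality coefficient (TIC, bounded in [0, 1])."""
    obs, ext = np.asarray(obs, float), np.asarray(ext, float)
    re = (ext - obs) / obs                      # relative error at each point
    rmse = float(np.sqrt(np.mean((ext - obs) ** 2)))
    tic = rmse / (np.sqrt(np.mean(ext ** 2)) + np.sqrt(np.mean(obs ** 2)))
    return {"RE": re, "MRE": float(re.mean()), "SDRE": float(re.std()),
            "RMSE": rmse, "TIC": float(tic)}

perfect = evaluation_measures([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

A perfect extension scores zero on MRE, RMSE and TIC, which gives a quick sanity check on any implementation.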
NASA Astrophysics Data System (ADS)
Ai, Jinquan; Gao, Wei; Gao, Zhiqiang; Shi, Runhe; Zhang, Chao
2017-04-01
Spartina alterniflora is an aggressive invasive plant species that replaces native species and changes the structure and function of ecosystems across coastal wetlands in China, and it is thus a major conservation concern. Mapping the spread of its invasion is a necessary first step for the implementation of effective ecological management strategies. The performance of a phenology-based approach to S. alterniflora mapping is explored in the coastal wetland of the Yangtze Estuary using a time series of GaoFen satellite no. 1 wide field of view camera (GF-1 WFV) imagery. First, a time series of the normalized difference vegetation index (NDVI) was constructed to evaluate the phenology of S. alterniflora. Two phenological stages (the senescence stage from November to mid-December and the green-up stage from late April to May) were determined to be important for S. alterniflora detection in the study area, based on NDVI temporal profiles, spectral reflectance curves of S. alterniflora and its coexistent species, and field surveys. Three phenology feature sets representing three major phenology-based detection strategies were then compared to map S. alterniflora: (1) the single-date imagery acquired within the optimal phenological window, (2) the multitemporal imagery, including four images from the two important phenological windows, and (3) the monthly NDVI time series imagery. Support vector machines and maximum likelihood classifiers were applied to each phenology feature set at different training sample sizes. For all phenology feature sets, the overall results were consistently produced with high mapping accuracies under sufficient training sample sizes, although significantly improved classification accuracies (10%) were obtained when the monthly NDVI time series imagery was employed. The optimal single-date imagery had the lowest accuracies of all detection strategies.
The multitemporal analysis demonstrated little reduction in the overall accuracy compared with the use of monthly NDVI time series imagery. These results show the importance of considering the phenological stage for image selection for mapping S. alterniflora using GF-1 WFV imagery. Furthermore, in light of the better tradeoff between the number of images and classification accuracy when using multitemporal GF-1 WFV imagery, we suggest using multitemporal imagery acquired at appropriate phenological windows for S. alterniflora mapping at regional scales.
Why didn't Box-Jenkins win (again)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pack, D.J.; Downing, D.J.
This paper focuses on the forecasting performance of the Box-Jenkins methodology applied to the 111 time series of the Makridakis competition. It considers the influence of the following factors: (1) time-series length, (2) time-series information (autocorrelation) content, (3) time-series outliers or structural changes, (4) averaging results over time series, and (5) forecast time origin choice. It is found that the 111 time series contain substantial numbers of very short series, series with obvious structural change, and series whose histories are relatively uninformative. If these series are typical of those that one must face in practice, the real message of the competition is that univariate time series extrapolations will frequently fail regardless of the methodology employed to produce them.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate the multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by Ito equations with multiplicative power-law noise. We show that the multifractality of the connectivity time series (i.e., the series of the number of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility graph procedure, which, by connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in the input time series.
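A natural visibility graph and its connectivity (degree) series can be computed directly from the definition of Lacasa et al.: two samples are linked if no intermediate sample rises above the straight line connecting them. A quadratic-time sketch:

```python
def visibility_edges(ts):
    """Natural visibility graph edges: nodes a and b are linked if every
    intermediate sample lies strictly below the line joining them."""
    n = len(ts)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                ts[c] < ts[b] + (ts[a] - ts[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

def connectivity_series(ts):
    """Degree of each node -- the connectivity time series studied above."""
    deg = [0] * len(ts)
    for a, b in visibility_edges(ts):
        deg[a] += 1
        deg[b] += 1
    return deg
```

On the toy series [1, 3, 2, 4], node 1 can "see" node 3 over the dip at node 2, while nodes 0 and 2 are blocked by the peak at node 1, giving the degree series [1, 3, 2, 2].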
Large-scale Granger causality analysis on resting-state functional MRI
NASA Astrophysics Data System (ADS)
D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel
2016-03-01
We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, the large-scale Granger causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at the individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, such as that accomplished by the Louvain method. We demonstrate the effectiveness of our approach by recovering the motor and visual cortices from resting-state human brain fMRI data and compare the result with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice coefficient (DC). The best DC obtained was 0.59, implying strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in resting-state fMRI.
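The underlying idea of Granger causality, that adding the other series' past should shrink the prediction error, can be shown with a bivariate toy example using ordinary least squares (not the paper's dimensionality-reduced, multivariate lsGC):

```python
import numpy as np

def granger_gain(x, y, p=2):
    """Log ratio of restricted to full residual variance when predicting y;
    > 0 means the past of x improves the forecast of y beyond y's own past."""
    n = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    full = np.column_stack([own] + [x[p - k:n - k] for k in range(1, p + 1)])
    def resid_var(X):
        X1 = np.column_stack([np.ones(len(Y)), X])
        beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        return float(np.var(Y - X1 @ beta))
    return float(np.log(resid_var(own) / resid_var(full)))

rng = np.random.default_rng(7)
x = rng.standard_normal(1000)
y = np.zeros(1000)
for t in range(1, 1000):
    y[t] = 0.8 * x[t - 1] + 0.3 * rng.standard_normal()   # x drives y

gain_xy = granger_gain(x, y)   # clearly positive: x Granger-causes y
gain_yx = granger_gain(y, x)   # near zero: no influence in reverse
```

The asymmetry between the two gains is what makes the measure directed, unlike a plain correlation.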
Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates
NASA Astrophysics Data System (ADS)
Liu, Bin; King, Matt; Dai, Wujiao
2018-05-01
Spatially correlated common mode error (CME) always exists in regional or larger GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses. We assume the components with common spatial responses are the CME. An average reduction of ˜40% in the RMS values was achieved with both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features. The ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power-law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE) analysis, with a ˜55% reduction in velocity uncertainties after filtering using ICA. Meanwhile, spatiotemporal filtering reduces the amplitude of the PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement between the GPS-observed velocities and four GIA models is generally improved after spatiotemporal filtering, with a mean reduction of ˜0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of the various GIA model predictions.
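The spatiotemporal filtering step can be illustrated with the PCA variant: the leading principal component of the network residuals is taken as the common mode and subtracted at every station. This toy example injects a synthetic common signal into white station noise (ICA is analogous but separates statistically independent rather than orthogonal components):

```python
import numpy as np

rng = np.random.default_rng(3)
n_epochs, n_stations = 500, 12
common = 0.1 * np.cumsum(rng.standard_normal(n_epochs))    # shared signal (the CME)
series = common[:, None] + 0.5 * rng.standard_normal((n_epochs, n_stations))

# PCA via SVD of the centred residual matrix; PC1 is taken as the CME
centred = series - series.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
cme = np.outer(U[:, 0] * s[0], Vt[0])       # rank-1 common-mode estimate
filtered = centred - cme

rms_before = float(np.sqrt(np.mean(centred ** 2)))
rms_after = float(np.sqrt(np.mean(filtered ** 2)))
```

Because the common signal dominates the first singular value, removing it cuts the RMS sharply, mirroring the ~40% reduction reported above.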
Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity
NASA Astrophysics Data System (ADS)
Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján
2017-06-01
It is widely acknowledged in the hydrological and meteorological communities that there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines, in parallel, the advantages of modelling the system dynamics with a deterministic model and modelling the deterministic model's forecasting error series with a data-driven model. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models has to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting, applicable to daily river discharge forecast error data, from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose, we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; we then fitted the GARCH-type models to the data and compared their fit with that of an ARMA-type model. We produced one-step-ahead forecasts from the fitted models and again compared the models' performance.
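A minimal GARCH(1,1) simulator shows the conditional-variance recursion that makes such models suitable for heteroscedastic error series; the parameter values below are illustrative, not fitted to any river data:

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate a GARCH(1,1) error series:
    e_t = sqrt(h_t) * z_t,  h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    e = np.empty(n)
    h[0] = omega / (1 - alpha - beta)       # start at the unconditional variance
    e[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
        e[t] = np.sqrt(h[t]) * rng.standard_normal()
    return e, h

e, h = simulate_garch11(20000)
```

With alpha + beta < 1 the process is covariance-stationary, and the sample variance hovers near omega / (1 - alpha - beta), here 1.0, while the conditional variance h clusters in time.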
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62 percent in the case of neural networks). The coverage rates for the three methods were 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in planning blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
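A bare-bones additive Holt-Winters recursion, one member of the exponential-smoothing family used above; the initialisation is simplified and the smoothing constants are illustrative, not the study's fitted values:

```python
import numpy as np

def holt_winters_additive(y, period, alpha=0.3, beta=0.05, gamma=0.3, horizon=12):
    """Additive Holt-Winters smoothing with simple moving-average initialisation;
    returns point forecasts for the next `horizon` periods."""
    y = np.asarray(y, float)
    level = y[:period].mean()
    trend = (y[period:2 * period].mean() - y[:period].mean()) / period
    season = list(y[:period] - level)
    for t in range(len(y)):
        s = season[t % period]
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % period] = gamma * (y[t] - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + season[(len(y) + h) % period]
            for h in range(horizon)]

# quick check on a deterministic trend-plus-seasonality series (period 12, like monthly demand)
t = np.arange(120)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
fc = holt_winters_additive(y, 12)
truth = 0.5 * (120 + np.arange(12)) + 10 * np.sin(2 * np.pi * (120 + np.arange(12)) / 12)
```

On this clean series the 12-month-ahead forecasts track the true continuation closely, which is the behaviour the study exploits over 1- and 2-year horizons.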
Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.
Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo
2013-11-13
Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban ln(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background ln(NO2) and 38% for rural ln(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural ln(NO2) but more marked for urban ln(NO2).
Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.
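The attenuation mechanism is easy to demonstrate: under classical additive error, the regression slope shrinks by roughly var(x)/(var(x)+var(u)). A simplified linear (rather than Poisson) simulation, with made-up variances:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
x_true = rng.normal(0.0, 1.0, n)                   # true daily exposure
y = 2.0 + 0.5 * x_true + rng.normal(0.0, 1.0, n)   # outcome, true slope 0.5
x_obs = x_true + rng.normal(0.0, 1.0, n)           # classical additive error

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / x.var()

slope_true = ols_slope(x_true, y)   # close to 0.5
slope_err = ols_slope(x_obs, y)     # ~0.5 * var(x)/(var(x)+var(u)) = 0.25
```

With equal exposure and error variances the slope is halved, which mirrors (in simplified form) the attenuation percentages reported above.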
NASA Astrophysics Data System (ADS)
Barraza Bernadas, V.; Grings, F.; Roitberg, E.; Perna, P.; Karszenbaum, H.
2017-12-01
The Dry Chaco region (DCF) has the highest absolute deforestation rates of all Argentinian forests. The most recent report indicates a current deforestation rate of 200,000 ha per year. In order to better monitor this process, DCF was chosen to implement an early warning program for illegal deforestation. Although the area is intensively studied using medium resolution imagery (Landsat), the products obtained are updated yearly and are therefore unsuited to an early warning program. In this paper, we evaluated the performance of an online Bayesian change-point detection algorithm for MODIS Enhanced Vegetation Index (EVI) and Land Surface Temperature (LST) datasets. The goal was to monitor the abrupt changes in vegetation dynamics associated with deforestation events. We tested this model by simulating 16-day EVI and 8-day LST time series with varying amounts of seasonality, noise and time-series length, and by adding abrupt changes with different magnitudes. The model was then tested on real satellite time series available through Google Earth Engine, over a pilot area in DCF where deforestation was common in the 2004-2016 period. A comparison with yearly benchmark products based on Landsat images is also presented (REDAF dataset). The results show the advantages of using an automatic model to detect changepoints in the time series rather than relying on visual inspection alone. Simulating time series with varying amounts of seasonality and noise, and adding abrupt changes at different times and magnitudes, revealed that the model is robust against noise and is not influenced by changes in the amplitude of the seasonal component. Furthermore, the results compared favorably with the REDAF dataset (nearly 65% agreement). These results show the potential of combining LST and EVI to identify deforestation events. 
This work is being developed within the frame of the national Forest Law for the protection and sustainable development of Native Forest in Argentina in agreement with international legislation (REDD+).
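The paper's Bayesian online change-point detector is not reproduced here, but the flavor of online detection on an EVI-like series can be conveyed with a much simpler two-sided CUSUM rule; all numbers below are invented for the demo:

```python
import numpy as np

def cusum_detect(x, target, k, h):
    """Two-sided CUSUM: return the first index where the accumulated
    deviation from `target` (less slack k) exceeds threshold h."""
    s_hi = s_lo = 0.0
    for t, v in enumerate(x):
        s_hi = max(0.0, s_hi + (v - target - k))
        s_lo = max(0.0, s_lo + (target - v - k))
        if s_hi > h or s_lo > h:
            return t
    return None

# synthetic 16-day EVI series: stable forest, then clearing at step 100
rng = np.random.default_rng(1)
evi = np.concatenate([rng.normal(0.5, 0.05, 100),
                      rng.normal(0.2, 0.05, 50)])
alarm = cusum_detect(evi, target=0.5, k=0.1, h=0.5)
```

Unlike the Bayesian approach, CUSUM has no seasonal component, which is precisely why the paper's detector is better suited to real EVI/LST data.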
NASA Astrophysics Data System (ADS)
Murray, J. R.; Svarc, J. L.
2016-12-01
Constant secular velocities estimated from Global Positioning System (GPS)-derived position time series are a central input for modeling interseismic deformation in seismically active regions. Both postseismic motion and temporally correlated noise produce long-period signals that are difficult to separate from secular motion and can bias velocity estimates. For GPS sites installed post-earthquake it is especially challenging to uniquely estimate velocities and postseismic signals and to determine when the postseismic transient has decayed sufficiently to enable use of subsequent data for estimating secular rates. Within 60 km of the 2003 M6.5 San Simeon and 2004 M6 Parkfield earthquakes in California, 16 continuous GPS sites (group 1) were established prior to mid-2001, and 52 stations (group 2) were installed following the events. We use group 1 data to investigate how early in the post-earthquake time period one may reliably begin using group 2 data to estimate velocities. For each group 1 time series, we obtain eight velocity estimates using observation time windows with successively later start dates (2006 - 2013) and a parameterization that includes constant velocity, annual, and semi-annual terms but no postseismic decay. We compare these to velocities estimated using only pre-San Simeon data to find when the pre- and post-earthquake velocities match within uncertainties. To obtain realistic velocity uncertainties, for each time series we optimize a temporally correlated noise model consisting of white, flicker, random walk, and, in some cases, band-pass filtered noise contributions. Preliminary results suggest velocities can be reliably estimated using data from 2011 to the present. Ongoing work will assess velocity bias as a function of epicentral distance and length of post-earthquake time series as well as explore spatio-temporal filtering of detrended group 1 time series to provide empirical corrections for postseismic motion in group 2 time series.
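The secular-rate estimation step, fitting an offset, velocity, and annual and semi-annual terms to a position series, can be sketched with ordinary least squares. Note the noise here is white for simplicity, whereas the study uses temporally correlated noise models, so uncertainties from this sketch would be optimistic:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 8.0, 1.0 / 365.25)                  # 8 years of daily epochs (yr)
v_true = 12.0                                          # secular rate, mm/yr
pos = (v_true * t + 3.0 * np.sin(2 * np.pi * t)        # annual term
       + 1.5 * np.cos(4 * np.pi * t)                   # semi-annual term
       + rng.normal(0.0, 2.0, t.size))                 # white noise, mm

# design matrix: offset, velocity, annual and semi-annual sinusoids
G = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
m, *_ = np.linalg.lstsq(G, pos, rcond=None)
v_est = m[1]                                           # estimated velocity, mm/yr
```

The study's central difficulty is exactly what this sketch omits: postseismic decay terms and flicker/random-walk noise, both of which trade off against the velocity term.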
Comparing temperature of subauroral mesopause over Yakutia with SABER radiometer data for 2002-2014
NASA Astrophysics Data System (ADS)
Ammosova, Anastasiya; Gavrilyeva, Galina; Ammosov, Petr; Koltovskoi, Igor
2017-06-01
We present the temperature database for the mesopause region, which was collected from spectral measurements of the O2(0-1) and OH(6-2) bands with the infrared spectrograph SP-50 at the Maimaga station (63° N; 129.5° E) in 2002-2014. The temperature time series covers an 11-year solar cycle. It is compared with the temperature obtained with the Sounding of the Atmosphere using Broadband Emission Radiometry instrument (SABER, v.1.07 and v.2.0) installed onboard the TIMED satellite. We compare temperatures measured during satellite passes at distances under 500 km from the intersection of the spectrograph sighting line with the hydroxyl emitting layer (~87 km) and the oxygen emitting layer (~95 km). The time criterion is 30 min. We observe a seasonal dependence of the difference between the ground-based and satellite measurements. The data obtained using SABER v2.0 show good agreement with the temperatures measured with the infrared digital spectrograph. The analysis we carried out allows us to conclude that the series of rotational temperatures obtained at the Maimaga station can be used to study temperature variations on different time scales, including long-term trends at the mesopause height.
Updating stand-level forest inventories using airborne laser scanning and Landsat time series data
NASA Astrophysics Data System (ADS)
Bolton, Douglas K.; White, Joanne C.; Wulder, Michael A.; Coops, Nicholas C.; Hermosilla, Txomin; Yuan, Xiaoping
2018-04-01
Vertical forest structure can be mapped over large areas by combining samples of airborne laser scanning (ALS) data with wall-to-wall spatial data, such as Landsat imagery. Here, we use samples of ALS data and Landsat time-series metrics to produce estimates of top height, basal area, and net stem volume for two timber supply areas near Kamloops, British Columbia, Canada, using an imputation approach. Both single-year and time series metrics were calculated from annual, gap-free Landsat reflectance composites representing 1984-2014. Metrics included long-term means of vegetation indices, as well as measures of the variance and slope of the indices through time. Terrain metrics, generated from a 30 m digital elevation model, were also included as predictors. We found that imputation models improved with the inclusion of Landsat time series metrics when compared to single-year Landsat metrics (relative RMSE decreased from 22.8% to 16.5% for top height, from 32.1% to 23.3% for basal area, and from 45.6% to 34.1% for net stem volume). Landsat metrics that characterized 30-years of stand history resulted in more accurate models (for all three structural attributes) than Landsat metrics that characterized only the most recent 10 or 20 years of stand history. To test model transferability, we compared imputed attributes against ALS-based estimates in nearby forest blocks (>150,000 ha) that were not included in model training or testing. Landsat-imputed attributes correlated strongly to ALS-based estimates in these blocks (R2 = 0.62 and relative RMSE = 13.1% for top height, R2 = 0.75 and relative RMSE = 17.8% for basal area, and R2 = 0.67 and relative RMSE = 26.5% for net stem volume), indicating model transferability. 
These findings suggest that in areas with spatially limited ALS acquisitions, imputation models driven by Landsat time series and terrain metrics can be used effectively to produce wall-to-wall estimates of key inventory attributes, providing an opportunity to update estimates of forest attributes in areas where inventory information is either out of date or non-existent.
We present a framework to compare water treatment costs to source water protection costs, an important knowledge gap for drinking water treatment plants (DWTPs). This trade-off helps to determine what incentives a DWTP has to invest in natural infrastructure or pollution reductio...
Real coded genetic algorithm for fuzzy time series prediction
NASA Astrophysics Data System (ADS)
Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.
2017-10-01
Genetic Algorithms (GAs) form a subset of evolutionary computing, a rapidly growing area of Artificial Intelligence (AI). Some variants of the GA are the binary GA, real-coded GA, messy GA, micro GA, saw-tooth GA and differential evolution. This research article presents a real-coded GA for predicting enrollments at the University of Alabama, whose enrollment data form a fuzzy time series. Here, fuzzy logic is used to predict enrollments and the genetic algorithm optimizes the fuzzy intervals. Results are compared with those of other published methods and found satisfactory, indicating that real-coded GAs are fast and accurate.
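The optimizer itself is generic; a minimal real-coded GA with tournament selection, blend crossover and Gaussian mutation can be sketched as follows. The quadratic objective merely stands in for the fuzzy-interval fitness used in the article:

```python
import numpy as np

def real_ga(fitness, lo, hi, pop_size=40, gens=100, pc=0.9, pm=0.1, seed=3):
    """Minimal real-coded GA: tournament selection, BLX-0.5 blend
    crossover, Gaussian mutation and one-elite survival."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))

    def tournament(fit):
        i, j = rng.integers(pop_size, size=2)
        return pop[i] if fit[i] < fit[j] else pop[j]

    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in pop])
        new = [pop[fit.argmin()].copy()]                  # elitism
        while len(new) < pop_size:
            p1, p2 = tournament(fit), tournament(fit)
            child = p1.copy()
            if rng.random() < pc:                         # BLX-0.5 crossover
                d = np.abs(p1 - p2)
                child = rng.uniform(np.minimum(p1, p2) - 0.5 * d,
                                    np.maximum(p1, p2) + 0.5 * d)
            if rng.random() < pm:                         # Gaussian mutation
                child = child + rng.normal(0.0, 0.1 * (hi - lo), dim)
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmin()], fit.min()

# toy quadratic objective standing in for the fuzzy-interval fitness
best, best_val = real_ga(lambda x: np.sum((x - 0.7) ** 2),
                         np.zeros(3), np.ones(3))
```

In the article's setting, the chromosome would encode fuzzy interval boundaries and the fitness would be a prediction-error measure on the enrollment series.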
Initial Results from Fitting Resolved Modes using HMI Intensity Observations
NASA Astrophysics Data System (ADS)
Korzennik, Sylvain G.
2017-08-01
The HMI project recently started processing the continuum intensity images following global helioseismology procedures similar to those used to process the velocity images. The spatial decomposition of these images has produced time series of spherical harmonic coefficients for degrees up to l=300, using a different apodization than the one used for velocity observations. The first 360 days of observations were processed and made available. I present initial results from fitting these time series using my state-of-the-art fitting methodology and compare the derived mode characteristics to those estimated using co-eval velocity observations.
Time series regression-based pairs trading in the Korean equities market
NASA Astrophysics Data System (ADS)
Kim, Saejoon; Heo, Jun
2017-07-01
Pairs trading is an instance of statistical arbitrage that relies on heavy quantitative data analysis to profit by capitalising on low-risk trading opportunities created by pricing anomalies between related assets. A key element in pairs trading is the rule by which open and close trading triggers are defined. This paper investigates the use of time series regression to define this rule, which has previously been specified with fixed threshold-based approaches. Empirical results indicate that our approach may yield significantly higher excess returns than those obtained by previous approaches on large-capitalisation stocks in the Korean equities market.
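For context, the fixed-threshold baseline that the paper improves upon works on the z-score of the regression spread between the paired assets. A compact sketch with simulated cointegrated prices (all parameters invented):

```python
import numpy as np

def spread_zscore(pa, pb):
    """OLS hedge ratio of A on B and the standardised spread whose
    excursions define open/close triggers."""
    beta = np.cov(pa, pb)[0, 1] / pb.var()
    spread = pa - beta * pb
    return (spread - spread.mean()) / spread.std(), beta

# simulated cointegrated pair sharing a random-walk factor
rng = np.random.default_rng(7)
common = np.cumsum(rng.normal(0.0, 1.0, 500))
pa = 100 + common + rng.normal(0.0, 0.5, 500)
pb = 50 + 0.5 * common + rng.normal(0.0, 0.5, 500)

z, beta = spread_zscore(pa, pb)
opens = np.flatnonzero(np.abs(z) > 2.0)    # fixed-threshold entry points
```

The paper's contribution is to replace the fixed |z| > 2 style trigger with one derived from time series regression; the sketch above shows only the baseline being improved upon.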
Capabilities of stochastic rainfall models as data providers for urban hydrology
NASA Astrophysics Data System (ADS)
Haberlandt, Uwe
2017-04-01
For planning of urban drainage systems using hydrological models, long, continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5-minute continuous rainfall time series for a) sizing of drainage networks for urban flood protection and b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single-site and multi-site generators. The models are applied with regionalised parameters, assuming that there is no station at the target location. Rainfall and discharge characteristics are used to evaluate model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany, and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models, with good capability for single-site simulations but low skill for multi-site simulations. Remarkably, there is no significant difference in simulation performance between the flood-protection and pollution-reduction tasks, so the models are able to simulate both the extremes and the long-term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Wat. Resour. Res., 34(7): 1737-1744. 
Berg, P., Wagner, S., Kunstmann, H., Schädler, G., 2013. High resolution regional climate model simulations for Germany: part I — validation. Climate Dynamics, 40(1): 401-414. Haberlandt, U., Ebner von Eschenbach, A.-D., Buchwald, I., 2008. A space-time hybrid hourly rainfall model for derived flood frequency analysis. Hydrol. Earth Syst. Sci., 12: 1353-1367.
Buonaccorsi, Giovanni A; Roberts, Caleb; Cheung, Sue; Watson, Yvonne; O'Connor, James P B; Davies, Karen; Jackson, Alan; Jayson, Gordon C; Parker, Geoff J M
2006-09-01
The quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data is subject to model fitting errors caused by motion during the time-series data acquisition. However, the time-varying features that occur as a result of contrast enhancement can confound motion correction techniques based on conventional registration similarity measures. We have therefore developed a heuristic, locally controlled tracer kinetic model-driven registration procedure, in which the model accounts for contrast enhancement, and applied it to the registration of abdominal DCE-MRI data at high temporal resolution. Using severely motion-corrupted data sets that had been excluded from analysis in a clinical trial of an antiangiogenic agent, we compared the results obtained when using different models to drive the tracer kinetic model-driven registration with those obtained when using a conventional registration against the time series mean image volume. Using tracer kinetic model-driven registration, it was possible to improve model fitting by reducing the sum of squared errors but the improvement was only realized when using a model that adequately described the features of the time series data. The registration against the time series mean significantly distorted the time series data, as did tracer kinetic model-driven registration using a simpler model of contrast enhancement. When an appropriate model is used, tracer kinetic model-driven registration influences motion-corrupted model fit parameter estimates and provides significant improvements in localization in three-dimensional parameter maps. This has positive implications for the use of quantitative DCE-MRI for example in clinical trials of antiangiogenic or antivascular agents.
Wavelet application to the time series analysis of DORIS station coordinates
NASA Astrophysics Data System (ADS)
Bessissi, Zahia; Terbeche, Mekki; Ghezali, Boualem
2009-06-01
This article analyses the residual time series of DORIS station coordinates using the wavelet transform. Several analysis techniques already developed in other disciplines were employed in the statistical study of the geodetic station time series. The wavelet transform provides both temporal and frequency-domain descriptions of the residual signals, and allows systematic signals such as periodicity and trend to be identified and quantified. The trend is the short- or long-term change in the signal: an average curve representing the general pace of the signal's evolution. Periodicity, in contrast, is a process that repeats itself identically after a time interval called the period. The aims of this article are therefore, first, to determine the systematic signals by wavelet analysis of the DORIS station coordinate time series and, second, to apply wavelet-packet denoising to obtain a well-filtered signal, smoother than the original. The DORIS data used are a set of weekly residual time series from 1993 to 2004 for eight stations: DIOA, COLA, FAIB, KRAB, SAKA, SODB, THUB and SYPB. It is the ign03wd01 solution expressed in stcd format, derived by the IGN/JPL analysis center. Although these data are not very recent, the goal of this study is to assess the contribution of the wavelet analysis method on the DORIS data, compared to the other analysis methods already studied.
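The article's wavelet-packet machinery is not reproduced here, but the basic decompose-threshold-reconstruct cycle can be illustrated with a single-level Haar transform, an assumed stand-in for the wavelet actually used; the test series is synthetic:

```python
import numpy as np

def haar_decompose(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_reconstruct(approx, detail):
    out = np.empty(2 * approx.size)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def haar_denoise(x, threshold):
    """Soft-threshold the detail coefficients, then invert the transform."""
    a, d = haar_decompose(x)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return haar_reconstruct(a, d)

# noisy synthetic residual series
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * np.random.default_rng(2).normal(size=512)
smooth = haar_denoise(noisy, threshold=0.4)
```

A full wavelet-packet analysis would recurse on both the approximation and detail branches and select a best basis before thresholding; this sketch shows only the core idea.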
Validation of a national hydrological model
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Booker, D. J.; Cattoën, C.
2016-10-01
Nationwide predictions of flow time-series are valuable for development of policies relating to environmental flows, calculating reliability of supply to water users, or assessing risk of floods or droughts. This breadth of model utility is possible because various hydrological signatures can be derived from simulated flow time-series. However, producing national hydrological simulations can be challenging due to strong environmental diversity across catchments and a lack of data available to aid model parameterisation. A comprehensive and consistent suite of test procedures to quantify spatial and temporal patterns in performance across various parts of the hydrograph is described and applied to quantify the performance of an uncalibrated national rainfall-runoff model of New Zealand. Flow time-series observed at 485 gauging stations were used to calculate Nash-Sutcliffe efficiency and percent bias when simulating between-site differences in daily series, between-year differences in annual series, and between-site differences in hydrological signatures. The procedures were used to assess the benefit of applying a correction to the modelled flow duration curve based on an independent statistical analysis. They were used to aid understanding of climatological, hydrological and model-based causes of differences in predictive performance by assessing multiple hypotheses that describe where and when the model was expected to perform best. As the procedures produce quantitative measures of performance, they provide an objective basis for model assessment that could be applied when comparing observed daily flow series with competing simulated flow series from any region-wide or nationwide hydrological model. Model performance varied in space and time with better scores in larger and medium-wet catchments, and in catchments with smaller seasonal variations. Surprisingly, model performance was not sensitive to aquifer fraction or rain gauge density.
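The two headline scores are easy to state precisely. A small sketch of Nash-Sutcliffe efficiency and percent bias (the toy flow values are invented):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE / SS about the observed mean; 1 is a perfect fit,
    0 means no better than predicting the mean flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """Positive PBIAS here means the model over-predicts total flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

q_obs = np.array([3.0, 5.0, 9.0, 6.0, 4.0])   # observed daily flows
q_sim = np.array([2.5, 5.5, 8.0, 6.5, 4.5])   # simulated daily flows
nse = nash_sutcliffe(q_obs, q_sim)
pbias = percent_bias(q_obs, q_sim)
```

Applying both metrics per gauging station, as the paper does across 485 stations, then allows performance to be mapped against catchment characteristics.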
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (STL). The data series, daily Poaceae pollen concentrations over the period 2006-2014, was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation coefficient between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each component of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed series, and for this reason the procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
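STL itself uses iterated LOESS fits; the idea of splitting a series into trend, seasonal and residual parts can be conveyed with a much simpler moving-average decomposition (a crude stand-in for STL, applied to a synthetic daily series):

```python
import numpy as np

def decompose_additive(y, period):
    """Simplified additive decomposition (a crude stand-in for STL):
    trend by centred moving average, seasonal by phase-wise means of
    the detrended series, residual as the remainder."""
    y = np.asarray(y, float)
    k = period // 2
    trend = np.convolve(y, np.ones(period) / period, mode="same")
    trend[:k] = trend[k]                  # crude edge handling
    trend[-k:] = trend[-k - 1]
    detrended = y - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal -= seasonal.mean()
    seas_full = np.tile(seasonal, len(y) // period + 1)[:len(y)]
    resid = y - trend - seas_full
    return trend, seas_full, resid

# synthetic daily pollen-like series: slow trend + annual cycle
t = np.arange(5 * 365)
pollen = 20 + 0.01 * t + 15 * np.sin(2 * np.pi * t / 365)
trend, seas, resid = decompose_additive(pollen, 365)
```

In the study, the residual component produced by STL is then regressed on meteorological drivers via PLSR; the decomposition above merely shows what such a residual series represents.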
The high order dispersion analysis based on first-passage-time probability in financial markets
NASA Astrophysics Data System (ADS)
Liu, Chenggong; Shang, Pengjian; Feng, Guochen
2017-04-01
First-passage-time (FPT) events in financial time series have attracted broad research interest recently, as they can inform risk management and investment. In this paper, a new measure, high-order dispersion (HOD), is developed based on FPT probability to explore financial time series. The tick-by-tick data of three Chinese stock markets and three American stock markets are investigated. We classify the financial markets successfully by analyzing the scaling properties of the FPT probabilities of the six stock markets and employing the HOD method to compare the differences between FPT decay curves. Applying the HOD method, we conclude that long-range correlation, a fat-tailed broad probability density function, and its coupling with nonlinearity mainly drive the multifractality of financial time series. Furthermore, we use the fluctuation function of multifractal detrended fluctuation analysis (MF-DFA) to distinguish markets and obtain results consistent with the HOD method, while the HOD method is additionally capable of separating stock markets within the same region. We believe such explorations are relevant for a better understanding of financial market mechanisms.
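The basic FPT statistic is simple to compute: for each starting point, find how long the series takes to move by a given amount. An illustrative implementation on a simulated random-walk price series (the threshold and parameters are arbitrary):

```python
import numpy as np

def first_passage_times(prices, delta):
    """For each start t, the number of steps until the price first moves
    by at least `delta` from prices[t]; unfinished passages are dropped."""
    fpt = []
    for t in range(len(prices) - 1):
        crossed = np.flatnonzero(np.abs(prices[t + 1:] - prices[t]) >= delta)
        if crossed.size:
            fpt.append(crossed[0] + 1)
    return np.array(fpt)

# random-walk stand-in for a tick-by-tick price series
rng = np.random.default_rng(5)
prices = np.cumsum(rng.normal(0.0, 1.0, 3000))
fpt = first_passage_times(prices, delta=3.0)
fpt_prob = np.bincount(fpt) / fpt.size     # empirical FPT distribution
```

The paper's HOD measure goes further, characterising higher-order moments of such FPT distributions and comparing their decay curves across markets.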
NASA Astrophysics Data System (ADS)
Tsai, Christina; Yeh, Ting-Gu
2017-04-01
Extreme weather events are occurring more frequently as a result of climate change. Recently, dengue fever has become a serious issue in southern Taiwan. Dengue incidence may have characteristic temporal scales that can be identified, and some researchers have hypothesized that dengue fever incidence is related to climate change. This study applies time-frequency analysis to time series data concerning dengue fever and hydrologic and meteorological variables. Results of three time-frequency analysis methods, the Hilbert-Huang transform (HHT), the wavelet transform (WT) and the short-time Fourier transform (STFT), are compared and discussed. A more effective time-frequency analysis method will be identified to analyze the relevant time series data. The most influential time scales of hydrologic and meteorological variables associated with dengue fever are determined. Finally, the linkage between hydrologic/meteorological factors and dengue fever incidence can be established.
Power estimation using simulations for air pollution time-series studies
2012-01-01
Background: Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods: Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results: In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models.
Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions: These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
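The simulation recipe, simulate Poisson counts with a known pollutant association, refit the model many times, and count how often the effect is detected, can be sketched directly. The effect size, daily means and series lengths below are illustrative, not the study's values:

```python
import numpy as np

def poisson_fit(X, y, iters=30):
    """Poisson log-linear regression by Newton-Raphson; returns the
    coefficient vector and its standard errors."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 0.1)          # sensible starting intercept
    for _ in range(iters):
        mu = np.exp(X @ beta)
        H = X.T @ (mu[:, None] * X)           # Fisher information
        beta = beta + np.linalg.solve(H, X.T @ (y - mu))
    return beta, np.sqrt(np.diag(np.linalg.inv(H)))

def estimate_power(n_days, mean_count, log_rr, n_sims=200, seed=0):
    """Share of simulated series in which the pollutant effect is
    detected at the 5% level (Wald |z| > 1.96)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        pollutant = rng.normal(0.0, 1.0, n_days)   # standardised exposure
        y = rng.poisson(mean_count * np.exp(log_rr * pollutant))
        X = np.column_stack([np.ones(n_days), pollutant])
        beta, se = poisson_fit(X, y)
        hits += abs(beta[1] / se[1]) > 1.96
    return hits / n_sims

power_3yr = estimate_power(n_days=1095, mean_count=10, log_rr=0.02)
power_1yr = estimate_power(n_days=365, mean_count=10, log_rr=0.02)
```

This stripped-down version omits the study's time-trend and meteorology covariates, which is exactly the complexity that makes off-the-shelf power software hard to apply to these designs.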
GPS Position Time Series @ JPL
NASA Technical Reports Server (NTRS)
Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen
2013-01-01
Different flavors of GPS time series analysis at JPL: all use the same GPS Precise Point Positioning Analysis raw time series; variations in time series analysis/post-processing are driven by different users.
- JPL Global Time Series/Velocities: researchers studying the reference frame, combining with VLBI/SLR/DORIS
- JPL/SOPAC Combined Time Series/Velocities: crustal deformation for tectonic, volcanic and ground water studies
- ARIA Time Series/Coseismic Data Products: hazard monitoring and response focused
- ARIA data system designed to integrate GPS and InSAR: GPS tropospheric delay used for correcting InSAR; Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR; Zhen Liu is talking tomorrow on InSAR time series analysis
Weekly Solutions of Time-Variable Gravity from 1993 to 2010
NASA Technical Reports Server (NTRS)
Lemoine, F.; Chinn, D.; Le Bail, K.; Zelensky, N.; Melachroinos, S.; Beall, J.
2011-01-01
The GRACE mission has been highly successful in determining the time-variable gravity field of the Earth, producing monthly or even more frequent (e.g., 10-day) solutions using both spherical harmonics and mascons. However, the GRACE time series only commences in 2002-2003, and a gap of several years may occur in the series before a GRACE follow-on satellite is launched. Satellites tracked by SLR and DORIS have also been used to study time variations in the Earth's gravitational field; these include (most recently) the solutions of Cox and Chao (2002), Cheng et al. (2004, 2007) and Lemoine et al. (2007). In this paper we discuss the development of a new time series of low-degree spherical harmonic fields based on the available SLR, DORIS and GPS data. We develop simultaneous solutions for both the geocenter and the low-degree harmonics up to 5x5. The solutions integrate data from SLR geodetic satellites (e.g., Lageos-1, Lageos-2, Starlette, Stella, Ajisai, Larets, Westpac), altimetry satellites (TOPEX/Poseidon, Envisat, Jason-1, Jason-2), and satellites tracked solely by DORIS (e.g., SPOT-2 to SPOT-5). We discuss some pertinent aspects of the satellite-specific modeling, and include altimeter crossovers in the weekly solutions where feasible. The resulting geocenter time series is compared with geophysical model predictions and other independently derived solutions, and over the GRACE time period we assess the fidelity and consistency of our series with the GRACE solutions.
Araújo, Ricardo de A
2010-12-01
This paper presents a hybrid intelligent methodology to design increasing translation invariant morphological operators applied to Brazilian stock market prediction (overcoming the random walk dilemma). The proposed Translation Invariant Morphological Robust Automatic phase-Adjustment (TIMRAA) method consists of a hybrid intelligent model composed of a Modular Morphological Neural Network (MMNN) with a Quantum-Inspired Evolutionary Algorithm (QIEA), which searches for the best time lags to reconstruct the phase space of the time series generator phenomenon and determines the initial (sub-optimal) parameters of the MMNN. Each individual of the QIEA population is further trained by the Back Propagation (BP) algorithm to improve the MMNN parameters supplied by the QIEA. Also, for each prediction model generated, it uses a behavioral statistical test and a phase fix procedure to adjust time phase distortions observed in stock market time series. Furthermore, an experimental analysis is conducted with the proposed method through four Brazilian stock market time series, and the achieved results are discussed and compared to results found with random walk models and the previously introduced Time-delay Added Evolutionary Forecasting (TAEF) and Morphological-Rank-Linear Time-lag Added Evolutionary Forecasting (MRLTAEF) methods. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.
1959-01-01
A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag is evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers is comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method only requires up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution and less time for a larger number of bodies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Giraldez, Julieta; Gruchalla, Kenny
2016-11-01
Duke Energy, Alstom Grid, and the National Renewable Energy Laboratory teamed up to better understand the impacts of solar photovoltaics (PV) on distribution system operations. The core goal of the project is to compare the operational - specifically, voltage regulation - impacts of three classes of PV inverter operations: 1.) Active power only (Baseline); 2.) Local inverter control (e.g., PF ≠ 1, Q(V), etc.); and 3.) Integrated volt-VAR control (centralized through the distribution management system). These comparisons were made using multiple approaches, each of which represents an important research-and-development effort on its own: a) Quasi-steady-state time-series modeling for approximately 1 year of operations using the Alstom eTerra (DOTS) system as a simulation engine, augmented by Python scripting for scenario and time-series control and using external models for an advanced inverter; b) Power-hardware-in-the-loop (PHIL) testing of a 500-kVA-class advanced inverter and traditional voltage regulating equipment. This PHIL testing used cosimulation to link full-scale feeder simulation using DOTS in real time to hardware testing; c) Advanced visualization to provide improved insights into time-series results and other PV operational impacts; and d) Cost-benefit analysis to compare the financial and business-model impacts of each integration approach.
NASA Astrophysics Data System (ADS)
Osada, Y.; Ohta, Y.; Demachi, T.; Kido, M.; Fujimoto, H.; Azuma, R.; Hino, R.
2013-12-01
Large interplate earthquakes have repeatedly occurred in the Japan Trench. Recently, detailed crustal deformation has been revealed by the nationwide inland GPS network (GEONET) operated by GSI. However, the region of maximum displacement for interplate earthquakes is mainly located offshore. GPS/Acoustic seafloor geodetic observation (hereafter GPS/A) is therefore important and useful for understanding the shallower part of interplate coupling between the subducting and overriding plates. GPS/A is typically conducted in a specific ocean area in repeated campaigns using a research vessel or buoy, so the temporal variation of seafloor crustal deformation cannot be monitored in real time. One technical issue for real-time observation is kinematic GPS analysis, because it requires both reference and rover data. If precise kinematic GPS analysis were possible in the offshore region, it would be a promising method for real-time GPS/A with a USV (Unmanned Surface Vehicle) or a moored buoy. We assessed the stability, precision and accuracy of the StarFire(TM) global satellite-based augmentation system. We first tested StarFire under static conditions. To assess coordinate precision and accuracy, we compared the 1 Hz StarFire time series with post-processed precise point positioning (PPP) 1 Hz time series from the GIPSY-OASIS II processing software (Ver. 6.1.2), using three different product types (ultra-rapid, rapid, and final orbits). We also used different clock-interval information (30 and 300 seconds) for the post-processed PPP processing. The standard deviation of the real-time StarFire time series is less than 30 mm (horizontal components) and 60 mm (vertical component), based on 1 month of continuous processing. We also assessed the noise spectra of the time series estimated by StarFire and by post-processed GIPSY PPP.
We found that the noise spectrum of the StarFire time series is similar to that of the GIPSY-OASIS II result based on JPL rapid orbit products with 300-second clock-interval information. We also report the stability, precision and accuracy of StarFire under moving conditions.
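Noise-spectrum assessments of position time series like those above are commonly based on a periodogram of the detrended residuals; a minimal numpy sketch (illustrative only, not the processing actually used in the study):

```python
import numpy as np

def periodogram(x, fs=1.0):
    """One-sided periodogram of a mean-removed series sampled at fs Hz."""
    x = np.asarray(x, float)
    x = x - x.mean()                       # remove DC before transforming
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec
```

Comparing such spectra between two solutions (e.g., real-time versus post-processed) shows whether their noise has a similar frequency dependence.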
Dishman, J Donald; Weber, Kenneth A; Corbin, Roger L; Burke, Jeanmarie R
2012-09-30
The purpose of this research was to characterize unique neurophysiologic events following a high velocity, low amplitude (HVLA) spinal manipulation (SM) procedure. Descriptive time series analysis techniques of time plots, outlier detection and autocorrelation functions were applied to time series of tibial nerve H-reflexes that were evoked at 10-s intervals from 100 s before the event until 100 s after three distinct events L5-S1 HVLA SM, or a L5-S1 joint pre-loading procedure, or the control condition. Sixty-six subjects were randomly assigned to three procedures, i.e., 22 time series per group. If the detection of outliers and correlograms revealed a pattern of non-randomness that was only time-locked to a single, specific event in the normalized time series, then an experimental effect would be inferred beyond the inherent variability of H-reflex responses. Tibial nerve F-wave responses were included to determine if any new information about central nervous function following a HVLA SM procedure could be ascertained. Time series analyses of H(max)/M(max) ratios, pre-post L5-S1 HVLA SM, substantiated the hypothesis that the specific aspects of the manipulative thrust lead to a greater attenuation of the H(max)/M(max) ratio as compared to the non-specific aspects related to the postural perturbation and joint pre-loading. The attenuation of the H(max)/M(max) ratio following the HVLA SM procedure was reliable and may hold promise as a translational tool to measure the consistency and accuracy of protocol implementation involving SM in clinical trials research. F-wave responses were not sensitive to mechanical perturbations of the lumbar spine. Copyright © 2012 Elsevier B.V. All rights reserved.
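A correlogram test for non-randomness of the kind described can be sketched as follows (an illustrative outline, not the authors' exact procedure): compute the sample autocorrelation function of the evoked-response series and flag lags that exceed the approximate 95% white-noise bounds.

```python
import numpy as np

def acf(series, max_lag):
    """Sample autocorrelation function, r[0..max_lag]."""
    x = np.asarray(series, float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, max_lag + 1)])

def nonrandom_lags(series, max_lag=10):
    """Lags whose autocorrelation falls outside the approximate
    95% white-noise bounds +/- 1.96/sqrt(N)."""
    r = acf(series, max_lag)
    bound = 1.96 / np.sqrt(len(series))
    return [k for k in range(1, max_lag + 1) if abs(r[k]) > bound]
```

A series of independent responses should produce few flagged lags, whereas a pattern time-locked to an event (as hypothesized for the HVLA SM thrust) produces systematic departures.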
Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation
NASA Astrophysics Data System (ADS)
Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.
2017-12-01
When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR times series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with spread of 100-600 km) was found. R2 from correlations between point observations and colocated grid box area means determined from satellite data were found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m2 for monthly (annual) time series. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
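The decorrelation length (the e-folding distance of R²) can be estimated from station-pair correlations with a simple exponential-decay fit; a hedged sketch using synthetic inputs rather than the study's data:

```python
import numpy as np

def decorrelation_length(distances, r2):
    """e-folding distance L from an exponential-decay fit R^2(d) = exp(-d/L),
    estimated by least squares on log R^2 versus distance."""
    d = np.asarray(distances, float)
    y = np.log(np.asarray(r2, float))
    dc = d - d.mean()
    slope = np.dot(dc, y - y.mean()) / np.dot(dc, dc)
    return -1.0 / slope
```

Applied to observed station-pair R² values, the fitted L summarizes how quickly spatial representativeness decays with distance (on the order of 400 km in the study above).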
Satellite Ocean Color: Present Status, Future Challenges
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; McClain, Charles R.; Zukor, Dorothy J. (Technical Monitor)
2001-01-01
We are midway into our 5th consecutive year of nearly continuous, high quality ocean color observations from space. The Ocean Color and Temperature Scanner/Polarization and Directionality of the Earth's Reflectances (OCTS/POLDER: Nov. 1996 - Jun. 1997), the Sea-viewing Wide Field-of-view Sensor (SeaWiFS: Sep. 1997 - present), and now the Moderate Resolution Imaging Spectroradiometer (MODIS: Sep. 2000 - present) have provided, and continue to provide, unprecedented views of chlorophyll dynamics on global scales. Global synoptic views of ocean chlorophyll were once a fantasy for ocean color scientists: it took nearly the entire 8-year lifetime of limited Coastal Zone Color Scanner (CZCS) observations to compile seasonal climatologies, whereas SeaWiFS now produces comparably complete fields in about 8 days. For the first time, scientists may observe spatial and temporal variability never before seen in a synoptic context. Even more exciting, we are beginning to plausibly ask questions of interannual variability. We stand at the beginning of a long time series of ocean color, from which we may begin to ask questions of interdecadal variability and climate change. These are the scientific questions being addressed by users of the 18-year Advanced Very High Resolution Radiometer time series with respect to terrestrial processes and ocean temperatures. The nearly 5-year time series of ocean color observations now being constructed, with possibilities of continued observations, can put us on a comparable footing with our terrestrial and physical oceanographic colleagues, and enable us to understand how ocean biological processes contribute to, and are affected by, global climate change.
Improved outcomes in pediatric epilepsy surgery: the UCLA experience, 1986-2008.
Hemb, M; Velasco, T R; Parnes, M S; Wu, J Y; Lerner, J T; Matsumoto, J H; Yudovin, S; Shields, W D; Sankar, R; Salamon, N; Vinters, H V; Mathern, G W
2010-06-01
Epilepsy neurosurgery is a treatment option for children with refractory epilepsy. Our aim was to determine if outcomes improved over time. Pediatric epilepsy surgery patients operated on in the first 11 years (1986-1997; pre-1997) were compared with those in the second 11 years (1998-2008; post-1997) for differences in presurgical and postsurgical variables. Despite similarities in seizure frequency, age at seizure onset, and age at surgery, the post-1997 series had more lobar/focal and fewer multilobar resections, and more patients with tuberous sclerosis complex and fewer cases of nonspecific gliosis compared with the pre-1997 group. Fewer cases had intracranial EEG studies in the post-1997 group (0.8%) compared with the pre-1997 group (9%). Compared with the pre-1997 group, the post-1997 series had more seizure-free patients at 0.5 (83%, +16%), 1 (81%, +18%), 2 (77%, +19%), and 5 (74%, +29%) years, and more seizure-free patients were on medications at 0.5 (97%, +6%), 1 (88%, +9%), and 2 (76%, +29%), but not 5 (64%, +8%) years after surgery. There were fewer complications and reoperations in the post-1997 series compared with the pre-1997 group. Logistic regression identified post-1997 series and less aggressive medication withdrawal as the main predictors of becoming seizure-free 2 years after surgery. Improved technology and surgical procedures along with changes in clinical practice were likely factors linked with enhanced and sustained seizure-free outcomes in the post-1997 series. These findings support the general concept that clearer identification of lesions and complete resection are linked with better outcomes in pediatric epilepsy surgery patients.
Estimating terrestrial snow depth with the Topex-Poseidon altimeter and radiometer
Papa, F.; Legresy, B.; Mognard, N.M.; Josberger, E.G.; Remy, F.
2002-01-01
Active and passive microwave measurements obtained by the dual-frequency Topex-Poseidon radar altimeter from the Northern Great Plains of the United States are used to develop a snow pack radar backscatter model. The model results are compared with daily time series of surface snow observations made by the U.S. National Weather Service. The model results show that Ku-band provides more accurate snow depth determinations than does C-band. Comparing the snow depth determinations derived from the Topex-Poseidon nadir-looking passive microwave radiometers with the oblique-looking Satellite Sensor Microwave Imager (SSM/I) passive microwave observations and surface observations shows that both instruments accurately portray the temporal characteristics of the snow depth time series. While both retrievals consistently underestimate the actual snow depths, the Topex-Poseidon results are more accurate.
Evaluation of recent GRACE monthly solution series with an ice sheet perspective
NASA Astrophysics Data System (ADS)
Horwath, Martin; Groh, Andreas
2016-04-01
GRACE monthly global gravity field solutions have undergone a remarkable evolution, leading to the latest (Release 5) series by CSR, GFZ, and JPL, to new series by other processing centers, such as ITSG and AIUB, as well as to efforts to derive combined solutions, particularly by the EGSIEM (European Gravity Service for Improved Emergency Management) project. For applications, such as GRACE inferences on ice sheet mass balance, the obvious question is on what GRACE solution series to base the assessment. Here we evaluate different GRACE solution series (including the ones listed above) in a unified framework. We concentrate on solutions expanded up to degree 90 or higher, since this is most appropriate for polar applications. We empirically assess the error levels in the spectral as well as in the spatial domain based on the month-to-month scatter in the high spherical harmonic degrees. We include empirical assessment of error correlations. We then apply all series to infer Antarctic and Greenland mass change time series and compare the results in terms of apparent signal content and noise level. We find that the ITSG solutions show lowest noise level in the high degrees (above 60). A preliminary combined solution from the EGSIEM project shows lowest noise in the degrees below 60. This virtue maps into the derived ice mass time series, where the EGSIEM-based results show the lowest noise in most cases. Meanwhile, there is no indication that any of the considered series systematically dampens actual geophysical signals.
Shi, K; Liu, C Q; Huang, Z W; Zhang, B; Su, Y
2010-01-01
Detrended fluctuation analysis (DFA) and multifractal methods are applied to analyse the time-scaling properties of water pH series at the Poyang Lake Inlet and Outlet in China. The results show that these pH series are characterised by long-term memory and multifractal scaling, and that these characteristics differ markedly between the Lake Inlet and Outlet. The comparison suggests that monofractal and multifractal parameters can serve as quantitative dynamical indexes reflecting the capability of anti-acidification of Poyang Lake. Furthermore, we investigated the frequency-size distribution of the pH series at the Poyang Lake Inlet and Outlet. Our findings suggest that water pH is an example of a self-organised criticality (SOC) process, and that differences in SOC behaviour account for the different power-law relations between the pH series at the Inlet and Outlet. This work can help improve the modelling of lake water quality.
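The DFA exponent used in such analyses is the slope of log fluctuation versus log window size for the integrated, window-detrended series; a minimal monofractal sketch (the multifractal extensions are not shown):

```python
import numpy as np

def dfa_exponent(series, scales=(8, 16, 32, 64)):
    """Detrended fluctuation analysis: slope of log F(n) vs log n.
    ~0.5 for white noise, ~1.5 for a random walk."""
    x = np.asarray(series, float)
    y = np.cumsum(x - x.mean())            # integrated profile
    flucts = []
    for n in scales:
        n_seg = len(y) // n
        f2 = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)   # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope
```

An exponent well above 0.5 indicates the kind of long-term memory reported for the pH series.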
NASA Astrophysics Data System (ADS)
Krämer, Stefan; Rohde, Sophia; Schröder, Kai; Belli, Aslan; Maßmann, Stefanie; Schönfeld, Martin; Henkel, Erik; Fuchs, Lothar
2015-04-01
The design of urban drainage systems with numerical simulation models requires long, continuous rainfall time series with high temporal resolution. However, suitable observed time series are rare. As a result, usual design concepts often use uncertain or unsuitable rainfall data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic rainfall data as input for urban drainage modelling are advanced, tested, and compared. Synthetic rainfall time series from three different precipitation modelling approaches (one parametric stochastic model based on an alternating renewal approach, one non-parametric stochastic model based on resampling, and one downscaling approach based on a regional climate model) are provided for three catchments with different sewer system characteristics in different climate regions of Germany:
- Hamburg (northern Germany): maritime climate; mean annual rainfall 770 mm; combined sewer system length 1,729 km (city centre of Hamburg); storm water sewer system length (Hamburg-Harburg) 168 km
- Brunswick (Lower Saxony, northern Germany): transitional climate from maritime to continental; mean annual rainfall 618 mm; sewer system length 278 km; connected impervious area 379 ha; height difference 27 m
- Freiburg im Breisgau (southern Germany): Central European transitional climate; mean annual rainfall 908 mm; sewer system length 794 km; connected impervious area 1,546 ha; height difference 284 m
Hydrodynamic models are set up for each catchment to simulate rainfall runoff processes in the sewer systems.
Long-term event time series are extracted, according to national hydraulic design standards, from the three different synthetic rainfall time series (comprising up to 600 years of continuous rainfall) provided for each catchment and from observed gauge rainfall (reference rainfall). The synthetic and reference long-term event time series are used as rainfall input for the hydrodynamic sewer models. For comparison of the synthetic rainfall time series against the reference rainfall and against each other, the number of surcharged manholes, the number of surcharges per manhole, and the average surcharge volume per manhole are applied as hydraulic performance criteria. The results are discussed and assessed to answer the following questions: Are the synthetic rainfall approaches suitable to generate high resolution rainfall series, and do they produce, in combination with numerical rainfall runoff models, valid results for the design of urban drainage systems? What are the bounds of uncertainty in the runoff results, depending on the synthetic rainfall model and on the climate region? The work is carried out within the SYNOPSE project, funded by the German Federal Ministry of Education and Research (BMBF).
Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.
2016-01-01
Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use this data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects performing eight different physical activities while wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. The choice of machine learning technique was also found to be important. However, on the energy cost estimation task, the choices of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
Agreement evaluation of AVHRR and MODIS 16-day composite NDVI data sets
Ji, Lei; Gallo, Kevin P.; Eidenshink, Jeffery C.; Dwyer, John L.
2008-01-01
Satellite-derived normalized difference vegetation index (NDVI) data have been used extensively to detect and monitor vegetation conditions at regional and global levels. A combination of NDVI data sets derived from AVHRR and MODIS can be used to construct a long NDVI time series that may also be extended to VIIRS. Comparative analysis of NDVI data derived from AVHRR and MODIS is critical to understanding the data continuity through the time series. In this study, the AVHRR and MODIS 16-day composite NDVI products were compared using regression and agreement analysis methods. The analysis shows a high agreement between the AVHRR-NDVI and MODIS-NDVI observed from 2002 and 2003 for the conterminous United States, but the difference between the two data sets is appreciable. Twenty per cent of the total difference between the two data sets is due to systematic difference, with the remainder due to unsystematic difference. The systematic difference can be eliminated with a linear regression-based transformation between two data sets, and the unsystematic difference can be reduced partially by applying spatial filters to the data. We conclude that the continuity of NDVI time series from AVHRR to MODIS is satisfactory, but a linear transformation between the two sets is recommended.
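The split into systematic and unsystematic difference follows from regressing one data set on the other; below is a sketch of a Willmott-style decomposition (the exact formulation used by the authors may differ):

```python
import numpy as np

def difference_decomposition(obs, pred):
    """Split the mean squared difference between two paired data sets into
    a systematic part (removable by a linear transformation) and an
    unsystematic part, following a Willmott-style decomposition."""
    o = np.asarray(obs, float)
    p = np.asarray(pred, float)
    oc = o - o.mean()
    b = np.dot(oc, p - p.mean()) / np.dot(oc, oc)
    a = p.mean() - b * o.mean()
    p_hat = a + b * o                      # linear fit of pred on obs
    mse_systematic = np.mean((p_hat - o) ** 2)
    mse_unsystematic = np.mean((p - p_hat) ** 2)
    return mse_systematic, mse_unsystematic
```

Because the systematic part is exactly the portion removable by a linear regression-based transformation, this decomposition motivates recommending such a transformation between the two NDVI data sets.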
Tani, Yuji; Ogasawara, Katsuhiko
2012-01-01
This study aimed to contribute to the management of a healthcare organization by providing management information derived from time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method based on the auto-regressive integrated moving-average (ARIMA) model, using business data obtained from the Radiology Department. We built the model from the number of radiological examinations over the past 9 years and predicted the number of radiological examinations in the final year, then compared the actual values with the forecast values. The prediction method proved simple and cost-effective, using free software, and the model itself was kept simple by removing trend components from the data during pre-processing. The difference between predicted and actual values was 10%; however, understanding the chronological change was more important than the individual time-series values. Furthermore, the method is highly versatile and adaptable to general time-series data, so different healthcare organizations can use it for the analysis and forecasting of their own business data.
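The difference-then-forecast pattern behind such ARIMA models can be sketched without any specialist software; the following minimal ARIMA(1,1,0)-with-drift illustration (not the authors' fitted model) shows the difference, fit, and re-integration steps:

```python
import numpy as np

def arima_110_forecast(series, steps):
    """Minimal ARIMA(1,1,0)-with-drift sketch: difference once to remove
    the trend, fit an AR(1) to the differences by least squares, then
    forecast the differences and re-integrate."""
    x = np.asarray(series, float)
    d = np.diff(x)
    dm = d - d.mean()
    phi = np.dot(dm[:-1], dm[1:]) / np.dot(dm[:-1], dm[:-1])
    preds = []
    last, prev = x[-1], dm[-1]
    for _ in range(steps):
        prev = phi * prev                  # AR(1) decay of the last shock
        last = last + d.mean() + prev      # drift + AR component
        preds.append(last)
    return np.array(preds)
```

For a trending count series such as monthly examination volumes, the first difference absorbs the trend, which is the same role trend-removal played in the study's pre-processing.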
A technique to detect microclimatic inhomogeneities in historical temperature records
NASA Astrophysics Data System (ADS)
Runnalls, K. E.; Oke, T. R.
2003-04-01
A technique is presented to identify inhomogeneities in historical temperature records caused by microclimatic changes to the surroundings of a climate station (e.g. minor instrument relocations, vegetation growth/removal, construction of houses, roads, runways). The technique uses daily maximum and minimum temperatures to estimate the magnitude of nocturnal cooling. The test station is compared to a nearby reference station by constructing time series of monthly "cooling ratios". It is argued that the cooling ratio is a particularly sensitive measure of microclimatic differences between neighbouring climate stations: first, because microclimatic character is best expressed at night in stable conditions, and second, because larger-scale climatic influences common to both stations are removed by the use of a ratio, which can be shown to be invariant in the mean with respect to weather variables such as wind and cloud. Inflections (change points) in time series of cooling ratios therefore signal microclimatic change in one of the station records. Hurst rescaling is applied to the time series to aid in the identification of change points, which can then be compared to documented station history events, if sufficient metadata is available. Results for a variety of air temperature records, ranging from rural to urban stations, are presented to illustrate the applicability of the technique.
Duality between Time Series and Networks
Campanharo, Andriana S. L. O.; Sirer, M. Irmak; Malmgren, R. Dean; Ramos, Fernando M.; Amaral, Luís A. Nunes.
2011-01-01
Studying the interaction between a system's components and the temporal evolution of the system are two common ways to uncover and characterize its internal workings. Recently, several maps from a time series to a network have been proposed with the intent of using network metrics to characterize time series. Although these maps demonstrate that different time series result in networks with distinct topological properties, it remains unclear how these topological properties relate to the original time series. Here, we propose a map from a time series to a network with an approximate inverse operation, making it possible to use network statistics to characterize time series and time series statistics to characterize networks. As a proof of concept, we generate an ensemble of time series ranging from periodic to random and confirm that much of the information encoded in the original time series (or networks) is retained after application of the map (or its inverse). Our results suggest that network analysis can be used to distinguish different dynamic regimes in time series and, perhaps more importantly, time series analysis can provide a powerful set of tools that augment the traditional network analysis toolkit to quantify networks in new and useful ways. PMID:21858093
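One concrete map of this kind (in the spirit of the quantile-based transition networks used by these authors, though the details here are a simplified sketch) assigns each observation to a quantile bin and counts transitions between bins:

```python
import numpy as np

def series_to_transition_network(series, n_bins=4):
    """Map a time series to a weighted directed network: nodes are
    quantile bins, edge (i, j) counts transitions from bin i to bin j."""
    x = np.asarray(series, float)
    inner = np.quantile(x, np.linspace(0, 1, n_bins + 1))[1:-1]
    labels = np.searchsorted(inner, x, side='right')  # node of each sample
    A = np.zeros((n_bins, n_bins))
    for i, j in zip(labels[:-1], labels[1:]):
        A[i, j] += 1.0
    return A / A.sum()                     # normalized transition weights
```

The approximate inverse is a weighted random walk on the network that emits bin representatives, which is what allows network statistics and time-series statistics to be exchanged.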
Identification of ELF3 as an early transcriptional regulator of human urothelium.
Böck, Matthias; Hinley, Jennifer; Schmitt, Constanze; Wahlicht, Tom; Kramer, Stefan; Southgate, Jennifer
2014-02-15
Despite major advances in high-throughput and computational modelling techniques, understanding of the mechanisms regulating tissue specification and differentiation in higher eukaryotes, particularly man, remains limited. Microarray technology has been explored exhaustively in recent years and several standard approaches have been established to analyse the resultant datasets on a genome-wide scale. Gene expression time series offer a valuable opportunity to define temporal hierarchies and gain insight into the regulatory relationships of biological processes. However, unless datasets are exactly synchronous, time points cannot be compared directly. Here we present a data-driven analysis of regulatory elements from a microarray time series that tracked the differentiation of non-immortalised normal human urothelial (NHU) cells grown in culture. The datasets were obtained by harvesting differentiating and control cultures from finite bladder- and ureter-derived NHU cell lines at different time points using two previously validated, independent differentiation-inducing protocols. Due to the asynchronous nature of the data, a novel ranking analysis approach was adopted whereby we compared changes in the amplitude of experiment and control time series to identify common regulatory elements. Our approach offers a simple, fast and effective ranking method for genes that can be applied to other time series. The analysis identified ELF3 as a candidate transcriptional regulator involved in human urothelial cytodifferentiation. Differentiation-associated expression of ELF3 was confirmed in cell culture experiments and by immunohistochemical demonstration in situ. The importance of ELF3 in urothelial differentiation was verified by knockdown in NHU cells, which led to reduced expression of FOXA1 and GRHL3 transcription factors in response to PPARγ activation. 
The consequences of this were seen in the repressed expression of late/terminal differentiation-associated uroplakin 3a gene expression and in the compromised development and regeneration of urothelial barrier function. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Heimhuber, V.; Tulbure, M. G.; Broich, M.
2017-02-01
Periodically inundated floodplain areas are hot spots of biodiversity and provide a broad range of ecosystem services but have suffered alarming declines in recent history. Despite their importance, their long-term surface water (SW) dynamics and hydroclimatic drivers remain poorly quantified on continental scales. In this study, we used a 26-year time series of Landsat-derived SW maps in combination with river flow data from 68 gauges and spatial time series of rainfall, evapotranspiration and soil moisture to statistically model SW dynamics as a function of key drivers across Australia's Murray-Darling Basin (~1 million km²). We fitted generalized additive models for 18,521 individual modeling units made up of 10 × 10 km grid cells, each split into floodplain, floodplain-lake, and nonfloodplain area. Average goodness of fit of models was high across floodplains and floodplain-lakes (r² > 0.65), which were primarily driven by river flow, and was lower for nonfloodplain areas (r² > 0.24), which were primarily driven by rainfall. Local climate conditions were more relevant for SW dynamics in the northern compared to the southern basin and had the highest influence in the least regulated and most extended floodplains. We further applied the models of two contrasting floodplain areas to predict SW extents of cloud-affected time steps in the Landsat series during the large 2010 floods with high validated accuracy (r² > 0.97). Our framework is applicable to other complex river basins across the world and enables a more detailed quantification of large floods and drivers of SW dynamics compared to existing methods.
ERIC Educational Resources Information Center
Brooks, Clifford J.
Created for the Master of Arts in Teaching (MAT) in-service program at Webster University (St. Louis, Missouri), this series of seminars, presented over a 9-day period, focuses on a comparative study of four operas set in the time period of the French Revolution. The operas examined are: (1) Mozart's "Marriage of Figaro" (1786); (2)…
NASA Astrophysics Data System (ADS)
Mosier, T. M.; Hill, D. F.; Sharp, K. V.
2013-12-01
High spatial resolution time-series data are critical for many hydrological and earth science studies. Multiple groups have developed historical and forecast datasets of high-resolution monthly time-series for regions of the world such as the United States (e.g. PRISM for hindcast data and MACA for long-term forecasts); however, analogous datasets have not been available for most data-scarce regions. The current work fills this data need by producing and freely distributing hindcast and forecast time-series datasets of monthly precipitation and mean temperature for all global land surfaces, gridded at a 30 arc-second resolution. The hindcast data are constructed through a Delta downscaling method, using as inputs 0.5 degree monthly time-series and 30 arc-second climatology global weather datasets developed by Willmott & Matsuura and WorldClim, respectively. The forecast data are formulated using a similar downscaling method, but with an additional step to remove bias from the climate variable's probability distribution over each region of interest. The downscaling package is designed to be compatible with a number of general circulation models (GCMs) (e.g. GCMs developed for the IPCC AR4 report and CMIP5), and is presently implemented using time-series data from the NCAR CESM1 model in conjunction with 30 arc-second future decadal climatologies distributed by the Consultative Group on International Agricultural Research. The resulting downscaled datasets are 30 arc-second time-series forecasts of monthly precipitation and mean temperature available for all global land areas. As an example of these data, historical and forecast 30 arc-second monthly time-series from 1950 through 2070 are created and analyzed for the region encompassing Pakistan. For this case study, forecast datasets corresponding to the Representative Concentration Pathway (RCP) 4.5 and 8.5 scenarios developed by the IPCC are presented and compared.
This exercise highlights a range of potential meteorological trends for the Pakistan region and more broadly serves to demonstrate the utility of the presented 30 arc-second monthly precipitation and mean temperature datasets for use in data scarce regions.
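The Delta method described above reduces, at each fine-resolution cell, to applying the coarse-cell anomaly (additive for temperature, multiplicative for precipitation) to a high-resolution climatology. A minimal sketch of that idea in plain Python, with illustrative function names and toy values, not the authors' package:

```python
def delta_downscale_temperature(fine_clim, coarse_series, coarse_clim):
    """Additive delta: fine climatology plus the coarse-cell anomaly."""
    return [fine_clim + (v - coarse_clim) for v in coarse_series]

def delta_downscale_precip(fine_clim, coarse_series, coarse_clim):
    """Multiplicative delta: fine climatology scaled by the coarse-cell ratio."""
    return [fine_clim * (v / coarse_clim) if coarse_clim else fine_clim
            for v in coarse_series]

# A 30 arc-second cell with a long-term January mean of 4.0 degC, inside a
# 0.5 degree cell whose climatology is 5.0 degC (all numbers invented):
downscaled = delta_downscale_temperature(4.0, [5.0, 6.5, 7.0], 5.0)
# -> [4.0, 5.5, 6.0]
```

The bias-correction step for the forecast data mentioned in the abstract would sit between reading the GCM series and this delta application.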
Rasner, P I; Pushkar', D Iu; Kolontarev, K B; Kotenkov, D V
2014-01-01
The appearance of a new surgical technique always requires evaluation of its effectiveness and ease of acquisition. A comparative study of the results of the first three series of consecutive robot-assisted radical prostatectomies (RARP), each series performed by one of three surgeons, was conducted. Each series consisted of 40 procedures and was divided into 4 groups of 10 operations for analysis. Comparison of the data revealed statistically significant improvement of intra- and postoperative performance within each series as the number of operations performed increased, and in each subsequent series compared with the preceding one. We recommend performing the planned conversion at the first operation. In our study, previous laparoscopic experience did not provide any significant advantage in the acquisition of the robot-assisted technique. To characterize the individual learning curve, we recommend using the number of operations that the surgeon observed in live-surgery sessions and/or in which he participated as an assistant before beginning his own surgical activity, as well as the indicator "technical defect". In addition to the term "individual learning curve", we propose introducing the terms "surgeon's individual training phase" and "clinic's learning curve".
61. The World-Wide Inaccessible Web, Part 2: Internet Routes
ERIC Educational Resources Information Center
Baggaley, Jon; Batpurev, Batchuluun; Klaas, Jim
2007-01-01
In the previous report in this series, Web browser loading times were measured in 12 Asian countries, and were found to be up to four times slower than commonly prescribed as acceptable. Failure of webpages to load at all was frequent. The current follow-up study compares these loading times with the complexity of the Internet routes linking the…
Ferguson, Jake M; Ponciano, José M
2014-01-01
Predicting population extinction risk is a fundamental application of ecological theory to the practice of conservation biology. Here, we compared the prediction performance of a wide array of stochastic, population dynamics models against direct observations of the extinction process from an extensive experimental data set. By varying a series of biological and statistical assumptions in the proposed models, we were able to identify the assumptions that affected predictions about population extinction. We also show how certain autocorrelation structures can emerge due to interspecific interactions, and that accounting for the stochastic effect of these interactions can improve predictions of the extinction process. We conclude that it is possible to account for the stochastic effects of community interactions on extinction when using single-species time series. PMID:24304946
Dakos, Vasilis; Carpenter, Stephen R.; Brock, William A.; Ellison, Aaron M.; Guttal, Vishwesha; Ives, Anthony R.; Kéfi, Sonia; Livina, Valerie; Seekell, David A.; van Nes, Egbert H.; Scheffer, Marten
2012-01-01
Many dynamical systems, including lakes, organisms, ocean circulation patterns, or financial markets, are now thought to have tipping points where critical transitions to a contrasting state can happen. Because critical transitions can occur unexpectedly and are difficult to manage, there is a need for methods that can be used to identify when a critical transition is approaching. Recent theory shows that we can identify the proximity of a system to a critical transition using a variety of so-called ‘early warning signals’, and successful empirical examples suggest a potential for practical applicability. However, while the range of proposed methods for predicting critical transitions is rapidly expanding, opinions on their practical use differ widely, and there is no comparative study that tests the limitations of the different methods to identify approaching critical transitions using time-series data. Here, we summarize a range of currently available early warning methods and apply them to two simulated time series that are typical of systems undergoing a critical transition. In addition to a methodological guide, our work offers a practical toolbox that may be used in a wide range of fields to help detect early warning signals of critical transitions in time series data. PMID:22815897
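One of the most widely used early warning signals surveyed above is rising lag-1 autocorrelation in a rolling window, the statistical footprint of "critical slowing down". A minimal sketch, with the window length and names being illustrative choices:

```python
def lag1_autocorr(x):
    """Lag-1 autocorrelation of one window of observations."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    if var == 0.0:
        return 0.0
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    return cov / var

def rolling_lag1(series, window):
    """Indicator time series: lag-1 autocorrelation in a sliding window."""
    return [lag1_autocorr(series[i:i + window])
            for i in range(len(series) - window + 1)]
```

In the early-warning literature it is a sustained upward drift of this rolling indicator, not its absolute value, that is read as a sign of an approaching transition.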
Quantitative Assessment of Arrhythmia Using Non-linear Approach: A Non-invasive Prognostic Tool
NASA Astrophysics Data System (ADS)
Chakraborty, Monisha; Ghosh, Dipak
2017-12-01
An accurate prognostic tool to identify the severity of Arrhythmia is yet to be developed, owing to the complexity of the ECG signal. In this paper, we have shown that quantitative assessment of Arrhythmia is possible using a non-linear technique based on "Hurst Rescaled Range Analysis". Although the concept of applying "non-linearity" to the study of various cardiac dysfunctions is not entirely new, the novel objective of this paper is to identify the severity of the disease, to monitor different medicines and their doses, and to assess the efficacy of different medicines. The approach presented in this work is simple, which in turn will help doctors in efficient disease management. In this work, Arrhythmia ECG time series were collected from the MIT-BIH database. Normal ECG time series were acquired using the POLYPARA system. Both types of time series were analyzed in the light of a non-linear approach following the method of "Rescaled Range Analysis". The quantitative parameter, "Fractal Dimension" (D), was obtained from both types of time series. The major finding is that Arrhythmia ECG exhibits lower values of D than normal ECG. Further, this information can be used to assess the severity of Arrhythmia quantitatively, which is a new direction of prognosis, and adequate software may be developed for use in medical practice.
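The rescaled range (R/S) procedure behind the Hurst analysis described above, and the conversion to a fractal dimension D = 2 − H, can be sketched as follows. The window sizes and the least-squares fit are illustrative choices, not the authors' exact implementation:

```python
import math
import random

def rescaled_range(x):
    """R/S statistic of one window: range of cumulative deviations over the std."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    cum, z = [], 0.0
    for d in dev:
        z += d
        cum.append(z)
    s = math.sqrt(sum(d * d for d in dev) / n)
    return (max(cum) - min(cum)) / s if s else 0.0

def hurst_exponent(series, sizes=(8, 16, 32, 64)):
    """H = slope of log(mean R/S) against log(window size)."""
    pts = []
    for n in sizes:
        rs = [rescaled_range(series[i:i + n])
              for i in range(0, len(series) - n + 1, n)]
        pts.append((math.log(n), math.log(sum(rs) / len(rs))))
    mx = sum(p for p, _ in pts) / len(pts)
    my = sum(q for _, q in pts) / len(pts)
    return (sum((p - mx) * (q - my) for p, q in pts)
            / sum((p - mx) ** 2 for p, _ in pts))

def fractal_dimension(series):
    """Fractal dimension from the Hurst exponent: D = 2 - H."""
    return 2.0 - hurst_exponent(series)

# White noise should give H in the vicinity of 0.5, hence D near 1.5
_rng = random.Random(0)
white = [_rng.gauss(0, 1) for _ in range(512)]
H = hurst_exponent(white)
D = fractal_dimension(white)
```

The paper's claim is then a comparison of D between the two ECG classes, with Arrhythmia records yielding the lower values.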
Lambert, Bruno; Flahault, Antoine; Chartier-Kastler, Emmanuel; Hanslik, Thomas
2013-01-01
Background: Despite the fact that urinary tract infection (UTI) is a very frequent disease, little is known about its seasonality in the community. Methods and Findings: The seasonality of UTI was estimated using multiple time series constructed from available proxies of UTI. Eight time series based on two databases were used: sales of urinary antibacterial medications reported by a panel of pharmacy stores in France between 2000 and 2012, and search trends on the Google search engine for UTI-related terms between 2004 and 2012 in France, Germany, Italy, the USA, China, Australia and Brazil. Differences between summers and winters were statistically assessed with the Mann-Whitney test. Seasonality was evaluated by applying the Harmonic Product Spectrum to the Fast Fourier Transform. Seven of the eight time series displayed a significant increase in medication sales or web searches in the summer compared to the winter, ranging from 8% to 20%. All eight time series displayed a periodicity of one year. Annual increases were seen in the summer for UTI drug sales in France and Google searches in France, the USA, Germany, Italy, and China. Increases occurred in the austral summer for Google searches in Brazil and Australia. Conclusions: An annual seasonality of UTIs was evidenced in seven different countries, with peaks during the summer. PMID:24204587
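The periodicity-detection step (finding the one-year peak in the spectrum) can be illustrated with a plain discrete Fourier transform over a synthetic monthly series. This is a simplified stand-in for the Harmonic Product Spectrum used in the paper:

```python
import math

def dominant_period(series):
    """Period (in samples) of the strongest discrete Fourier component."""
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]
    best_k, best_power = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return n / best_k

# Ten years of invented monthly sales with a summer peak:
monthly = [100 + 15 * math.sin(2 * math.pi * t / 12) for t in range(120)]
# dominant_period(monthly) -> 12.0 (an annual cycle)
```

A Mann-Whitney test between the summer and winter subsamples, as in the paper, would then quantify the amplitude of that seasonal difference.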
NASA Astrophysics Data System (ADS)
Manikumari, N.; Murugappan, A.; Vinodhini, G.
2017-07-01
Time series forecasting has gained remarkable interest from researchers in the last few decades. Neural network based time series forecasting has been employed in various application areas. Reference evapotranspiration (ETO) is one of the most important components of the hydrologic cycle, and its precise assessment is vital in water balance and crop yield estimation and in water resources system design and management. This work aimed at achieving accurate time series forecasts of ETO using a combination of neural network approaches. It was carried out using data collected in the command area of VEERANAM Tank in India during the period 2004-2014. In this work, Neural Network (NN) models were combined by ensemble learning in order to improve the accuracy of forecasting daily ETO (for the year 2015). Bagged Neural Network (Bagged-NN) and Boosted Neural Network (Boosted-NN) ensemble learning were employed. The Bagged-NN and Boosted-NN ensemble models were shown to be better than individual NN models in terms of accuracy. Among the ensemble models, Boosted-NN reduces the forecasting errors compared to Bagged-NN and the individual NNs. The regression coefficient, Mean Absolute Deviation, Mean Absolute Percentage Error and Root Mean Square Error also ascertain that Boosted-NN leads to improved ETO forecasting performance.
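The bagging idea (train each base learner on a bootstrap resample of the data, then average the predictions) can be sketched with a trivial linear base learner standing in for the neural networks; all names and values below are illustrative:

```python
import random

def fit_line(xs, ys):
    """Least-squares line as a stand-in base learner."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    den = sum((x - mx) ** 2 for x in xs)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / den) if den else 0.0
    a = my - b * mx
    return lambda x: a + b * x

def bagged_predict(xs, ys, x_new, n_models=25, seed=1):
    """Average the predictions of models fitted on bootstrap resamples."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        sample = [rng.randrange(len(xs)) for _ in xs]
        model = fit_line([xs[i] for i in sample], [ys[i] for i in sample])
        preds.append(model(x_new))
    return sum(preds) / n_models
```

Boosting differs in that each successive model is fitted to the residual errors of the ensemble so far, rather than to an independent resample, which is why the paper finds it corrects errors more aggressively.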
Genetic programming and serial processing for time series classification.
Alfaro-Cid, Eva; Sharman, Ken; Esparcia-Alcázar, Anna I
2014-01-01
This work describes an approach devised by the authors for time series classification. In our approach, genetic programming is used in combination with serial processing of the data, where the last output is the result of the classification. The use of genetic programming for classification, although still a field where more research is needed, is not new. However, the application of genetic programming to classification tasks is normally done by considering the input data as a feature vector. That is, to the best of our knowledge, there are no examples in the genetic programming literature of approaches where the time series data are processed serially and the last output is considered as the classification result. The serial processing approach presented here fills a gap in the existing literature. This approach was tested on three different problems. Two of them are real-world problems whose data were gathered for online or conference competitions. As there are published results for these two problems, this gives us the chance to compare the performance of our approach against top-performing methods. The serial processing of data in combination with genetic programming obtained competitive results in both competitions, showing its potential for solving time series classification problems. The main advantage of our serial processing approach is that it can easily handle very large datasets.
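The serial-processing scheme the abstract contrasts with feature-vector classification can be stated in a few lines: the evolved program is applied to one sample at a time, carrying a state forward, and the final output gives the class. A hypothetical sketch, where the program shown is hand-written rather than evolved:

```python
def serial_classify(program, series, init_state=0.0):
    """Feed the series through `program` one sample at a time;
    the sign of the last output determines the class."""
    state = init_state
    for x in series:
        state = program(x, state)
    return 1 if state > 0 else 0

# A hand-written stand-in for an evolved program: accumulate
# evidence that samples exceed 0.5
program = lambda x, state: state + (x - 0.5)
```

Because each sample is consumed once and only the state is kept, memory use is independent of series length, which is the "very large datasets" advantage the abstract claims.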
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool
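The separation of a time series into magnitude and sequence components, and the lossless reconstruction back, can be sketched as: sort the values (magnitude) and remember each time step's rank (sequence). An illustrative reading of the report's idea, not its implementation:

```python
def separate(series):
    """Split a series into magnitude (sorted values) and sequence (ranks)."""
    order = sorted(range(len(series)), key=lambda i: series[i])
    magnitude = [series[i] for i in order]       # values with ordering removed
    sequence = [0] * len(series)
    for rank, i in enumerate(order):
        sequence[i] = rank                       # rank held at each time step
    return magnitude, sequence

def reconstruct(magnitude, sequence):
    """Recombine the two components into the original series."""
    return [magnitude[r] for r in sequence]
```

Magnitude errors can then be scored by comparing sorted observed and simulated values alone, and sequence errors by comparing the rank series alone.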
Keshmiri, Soheil; Sumioka, Hidenubo; Yamazaki, Ryuji; Ishiguro, Hiroshi
2018-01-01
Neuroscience research shows a growing interest in the application of Near-Infrared Spectroscopy (NIRS) to the analysis and decoding of the brain activity of human subjects. Given the correlation that is observed between the Blood Oxygen Level Dependent (BOLD) responses that are exhibited by the time series data of functional Magnetic Resonance Imaging (fMRI) and the hemoglobin oxy/deoxy-genation that is captured by NIRS, linear models play a central role in these applications. This, in turn, results in the adaptation of feature extraction strategies that are well-suited for discretization of data that exhibit a high degree of linearity, namely, the slope and the mean as well as their combination, to summarize the informational contents of NIRS time series. In this article, we demonstrate that these features are inefficient in capturing the variational information of NIRS data, limiting the reliability and adequacy of the conclusions drawn from them. Alternatively, we propose the linear estimate of the differential entropy of these time series as a natural representation of such information. We provide evidence for our claim through comparative analysis of the application of these features to NIRS data pertinent to several working memory tasks as well as naturalistic conversational stimuli. PMID:29922144
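Under a Gaussian assumption, the differential entropy the authors propose reduces to a closed form, 0.5·ln(2πeσ²), so a sketch needs only the sample variance; the NIRS-specific preprocessing is omitted here:

```python
import math

def gaussian_diff_entropy(series):
    """Differential entropy of a series modeled as Gaussian:
    0.5 * ln(2 * pi * e * variance)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)
```

Unlike the mean or slope, this quantity grows with the spread of the window, which is the "variational information" the abstract argues linear summary features discard.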
NASA Technical Reports Server (NTRS)
Carroll, Mark; Wooten, Margaret; DiMiceli, Charlene; Sohlberg, Robert; Kelly, Maureen
2016-01-01
The availability of a dense time series of satellite observations at moderate (30 m) spatial resolution is enabling unprecedented opportunities for understanding ecosystems around the world. A time series of data from Landsat was used to generate a series of three maps at decadal time steps to show how surface water changed from 1991 to 2011 in the high northern latitudes of North America. Previous attempts to characterize the change in surface water in this region have been limited in either spatial or temporal resolution, or both. This series of maps was generated for the NASA Arctic-Boreal Vulnerability Experiment (ABoVE), which began in fall 2015. These maps show a nominal extent of surface water by using multiple observations to make a single map for each time step. This increases the confidence that any detected changes are related to climate or ecosystem changes and are not simply caused by short-duration weather events such as floods or droughts. The methods and a comparison to other contemporary maps of the region are presented here. Initial verification results indicate 96% producer accuracy and 54% user accuracy when compared to 2-m resolution WorldView-2 data. All water bodies that were omitted were one Landsat pixel or smaller, hence below the detection limits of the instrument.
A discrete wavelet spectrum approach for identifying non-monotonic trends in hydroclimate data
NASA Astrophysics Data System (ADS)
Sang, Yan-Fang; Sun, Fubao; Singh, Vijay P.; Xie, Ping; Sun, Jian
2018-01-01
The hydroclimatic process is changing non-monotonically and identifying its trends is a great challenge. Building on discrete wavelet transform theory, we developed a discrete wavelet spectrum (DWS) approach for identifying non-monotonic trends in hydroclimate time series and evaluating their statistical significance. After validating the DWS approach using two typical synthetic time series, we examined annual temperature and potential evaporation over China from 1961 to 2013 and found that the DWS approach detected both the warming and the warming hiatus in temperature, and the reversed changes in potential evaporation. Further, the identified non-monotonic trends showed stable significance when the time series was longer than 30 years or so (i.e. the widely defined climate timescale). The significance of trends in potential evaporation measured at 150 stations in China, with an obvious non-monotonic trend, was underestimated and was not detected by the Mann-Kendall test. Comparatively, the DWS approach overcame the problem and detected those significant non-monotonic trends at 380 stations, which helped understand and interpret the spatiotemporal variability in the hydroclimatic process. Our results suggest that non-monotonic trends of hydroclimate time series and their significance should be carefully identified, and the DWS approach proposed has the potential for wide use in the hydrological and climate sciences.
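The discrete wavelet spectrum idea can be illustrated with the simplest wavelet (Haar): the energy of the detail coefficients at each decomposition level forms a spectrum, and a trend concentrates energy at the coarse levels. A toy sketch, not the authors' DWS implementation:

```python
def haar_step(x):
    """One Haar decomposition level: pairwise means (approximation)
    and pairwise half-differences (detail)."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def wavelet_spectrum(x, levels=3):
    """Mean detail energy per level; coarse levels dominate for a trend."""
    spectrum, a = [], list(x)
    for _ in range(levels):
        a, d = haar_step(a)
        spectrum.append(sum(v * v for v in d) / len(d))
    return spectrum

# For a linear (monotonic) trend the spectrum rises with level:
# wavelet_spectrum(list(range(16))) -> [0.25, 1.0, 4.0]
```

The significance test in the paper then compares the coarse-scale energy against what synthetic series without a trend would produce.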
Wu, Mingquan; Yang, Chenghai; Song, Xiaoyu; Hoffmann, Wesley Clint; Huang, Wenjiang; Niu, Zheng; Wang, Changyao; Li, Wang; Yu, Bo
2018-01-31
To better understand the progression of cotton root rot within the season, time series monitoring is required. In this study, an improved spatial and temporal data fusion approach (ISTDFA) was employed to combine 250-m Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) and 10-m Sentinel-2 NDVI data to generate a synthetic Sentinel-2 NDVI time series for monitoring this disease. Then, the phenology of healthy cotton and infected cotton was modeled using a logistic model. Finally, several phenology parameters, including the onset day of greenness minimum (OGM), growing season length (GSL), onset of greenness increase (OGI), maximum NDVI value, and integral area of the phenology curve, were calculated. The results showed that ISTDFA could be used to combine time series MODIS and Sentinel-2 NDVI data with a correlation coefficient of 0.893. The logistic model could describe the phenology curves with R-squared values from 0.791 to 0.969. Moreover, the phenology curve of infected cotton showed a significant difference from that of healthy cotton. The maximum NDVI value, OGM, GSL and the integral area of the phenology curve for infected cotton were reduced by 0.045, 30 days, 22 days, and 18.54%, respectively, compared with those for healthy cotton.
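The logistic phenology model and the derived comparison metrics can be sketched as follows; the parameter values are invented for illustration, and the fitting step (done in the paper against the fused NDVI series) is omitted:

```python
import math

def logistic_ndvi(t, base, amp, rate, t_mid):
    """Logistic green-up curve: NDVI(t) = base + amp / (1 + exp(-rate*(t - t_mid)))."""
    return base + amp / (1 + math.exp(-rate * (t - t_mid)))

def phenology_metrics(values):
    """Max NDVI and (crude) integral area of a sampled phenology curve."""
    return max(values), sum(values)

days = range(0, 200, 5)
healthy  = [logistic_ndvi(t, 0.2, 0.6, 0.3, 100) for t in days]  # invented parameters
infected = [logistic_ndvi(t, 0.2, 0.4, 0.3, 110) for t in days]  # invented parameters
```

Infection shows up in the fitted parameters exactly as in the abstract: a lower amplitude depresses the maximum NDVI and the integral area, and shifted onset/offset dates shorten the growing season.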
Comparative case study between D3 and highcharts on lustre data visualization
NASA Astrophysics Data System (ADS)
ElTayeby, Omar; John, Dwayne; Patel, Pragnesh; Simmerman, Scott
2013-12-01
One of the challenging tasks in visual analytics is to target clustered time-series data sets, since it is important for data analysts to discover patterns changing over time while keeping their focus on particular subsets. In order to leverage humans' ability to quickly perceive these patterns visually, multivariate features should be implemented according to the attributes available. A comparative case study was done using JavaScript libraries to demonstrate the differences in their capabilities. A web-based application to monitor the Lustre file system for systems administrators and operations teams has been developed using D3 and Highcharts. Lustre file systems are responsible for managing Remote Procedure Calls (RPCs), which include input/output (I/O) requests between clients and Object Storage Targets (OSTs). The objective of this application is to provide time-series visuals of these calls and of the storage patterns of users on Kraken, a University of Tennessee High Performance Computing (HPC) resource at Oak Ridge National Laboratory (ORNL).
NASA Astrophysics Data System (ADS)
Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.
2016-06-01
We introduce a schematic formalism for the time evolution of a random population entering some set of classes, such that each member of the population evolves among these classes according to a scheme based on a Markov chain model. We consider that the flow of incoming members is modeled by a time series, and we detail the time series structure of the elements in each of the classes. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways - namely as an ARIMA process and as a deterministic sigmoid-type trend plus a SARMA process for the residuals - we simulate the behavior of the population and compare the results. We find that the second method is more accurate in describing the behavior of the population when compared to the observed values in a direct simulation of the Markov chain.
ARIMA representation for daily solar irradiance and surface air temperature time series
NASA Astrophysics Data System (ADS)
Kärner, Olavi
2009-06-01
Autoregressive integrated moving average (ARIMA) models are used to compare the long-range temporal variability of the total solar irradiance (TSI) at the top of the atmosphere (TOA) and surface air temperature series. The comparison shows that one and the same type of model is applicable to represent both the TSI and the air temperature series. In terms of model type, surface air temperature closely imitates the TSI. This may mean that currently no other forcing of the climate system is capable of changing the random-walk-type variability established by the varying activity of the rotating Sun. The result should inspire more detailed examination of the dependence of various climate series on short-range fluctuations of the TSI.
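Fitting a full ARIMA model is a job for a statistics package, but the core autoregressive step can be sketched with a least-squares AR(1) estimate. This toy stands in for, and is far simpler than, the ARIMA fits in the paper:

```python
def fit_ar1(series):
    """Least-squares estimate of phi in the zero-mean AR(1) model
    x[t] = phi * x[t-1] + noise."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(v * v for v in series[:-1])
    return num / den

# A noiseless AR(1) realisation recovers its coefficient exactly:
phi = fit_ar1([0.8 ** t for t in range(20)])  # -> 0.8
```

The "I" in ARIMA simply means the model is fitted to the differenced series; phi approaching 1 on the undifferenced series is the random-walk-type behaviour the abstract refers to.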
Low gravity investigations in suborbital rockets
NASA Technical Reports Server (NTRS)
Wessling, Francis C.; Lundquist, Charles A.
1990-01-01
Two series of suborbital rocket missions are outlined which are intended to support materials and biotechnology investigations under microgravity conditions and enhance commercial rocket activity. The Consort series of missions employs the two-stage Starfire I rocket and recovery systems as well as a payload of three sealed or vented cylindrical sections. The Consort 1 and 2 missions are described which successfully supported six classes of experiments each. The Joust program is the second series of rocket missions, and the Prospector rocket is employed to provide comparable payload masses with twice as much microgravity time as the Consort series. The Joust and Consort missions provide 6-8 and 13-15 mins, respectively, of microgravity flight to support such experiments as polymer processing, scientific apparatus testing, and electrodeposition.
Aggregated Indexing of Biomedical Time Series Data
Woodbridge, Jonathan; Mortazavi, Bobak; Sarrafzadeh, Majid; Bui, Alex A.T.
2016-01-01
Remote and wearable medical sensing has the potential to create very large and high dimensional datasets. Medical time series databases must be able to efficiently store, index, and mine these datasets to enable medical professionals to effectively analyze data collected from their patients. Conventional high dimensional indexing methods are a two-stage process. First, a superset of the true matches is efficiently extracted from the database. Second, supersets are pruned by comparing each of their objects to the query object and rejecting any objects falling outside a predetermined radius. This pruning stage heavily dominates the computational complexity of most conventional search algorithms. Therefore, indexing algorithms can be significantly improved by reducing the amount of pruning. This paper presents an online algorithm to aggregate biomedical time series data to significantly reduce the search space (index size) without compromising the quality of search results. This algorithm is built on the observation that biomedical time series signals are composed of cyclical and often similar patterns. The algorithm takes in a stream of segments and groups them into highly concentrated collections. Locality Sensitive Hashing (LSH) is used to reduce the overall complexity of the algorithm, allowing it to run online. The output of this aggregation is used to populate an index. The proposed algorithm yields logarithmic growth of the index (with respect to the total number of objects) while keeping sensitivity and specificity simultaneously above 98%. Both memory and runtime complexities of time series search are improved when using aggregated indexes. In addition, data mining tasks, such as clustering, exhibit runtimes that are orders of magnitude faster when run on aggregated indexes. PMID:27617298
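The bucketing stage can be illustrated with a crude locality-sensitive signature: quantizing each segment onto a coarse grid so that near-identical cycles collide into one bucket. Real LSH uses random projections; this grid hash is a simplified stand-in:

```python
def segment_signature(segment, grid=0.5):
    """Coarse quantization acting as a locality-sensitive signature."""
    return tuple(round(v / grid) for v in segment)

def aggregate(segments, grid=0.5):
    """Group similar segments under one signature; each bucket would then be
    indexed by a single representative instead of all its members."""
    buckets = {}
    for seg in segments:
        buckets.setdefault(segment_signature(seg, grid), []).append(seg)
    return buckets

# Two near-identical cycles share a bucket; an outlier gets its own:
buckets = aggregate([[1.0, 2.0], [1.1, 2.1], [5.0, 1.0]])
```

Because a query only needs to touch one representative per bucket, the index grows with the number of distinct patterns rather than the number of raw segments, which is the source of the logarithmic growth reported above.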
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
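The naïve benchmark in the comparison (repeat the monthly values of the last year) is worth spelling out, since the abstract's conclusion is that more sophisticated methods barely beat well-chosen simple ones. A sketch with an RMSE helper for multi-step evaluation; the 48-month test horizon of the paper is replaced by a toy example:

```python
import math

def naive_seasonal(history, horizon, period=12):
    """Forecast by repeating the last full seasonal cycle."""
    last_cycle = history[-period:]
    return [last_cycle[h % period] for h in range(horizon)]

def rmse(pred, obs):
    """Root mean square error over a forecast horizon."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# A perfectly periodic monthly record is forecast without error:
history = [m for _ in range(3) for m in range(12)]
forecast = naive_seasonal(history, 12)
```

The classical seasonal decomposition the authors recommend amounts to removing such a cycle externally before handing the residual to a forecasting model.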
Why the null matters: statistical tests, random walks and evolution.
Sheets, H D; Mitchell, C E
2001-01-01
A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test', in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer-generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise-immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests shows that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
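The excursion-based logic described here — less range than a random walk suggests stabilizing selection, more suggests directional selection — can be sketched as a Monte Carlo version of the scaled-maximum idea. This is an illustrative stdlib sketch, not any of the cited tests' exact formulations:

```python
import random
import statistics

random.seed(1)

def excursion(series):
    return max(series) - min(series)

def random_walk_null_test(series, n_sims=2000):
    """Compare the observed excursion with excursions of driftless Gaussian
    random walks whose step standard deviation is estimated from the data.
    Returns the fraction of simulated walks with excursion >= observed:
    near 0 suggests a directional trend, near 1 suggests stasis."""
    steps = [b - a for a, b in zip(series, series[1:])]
    sd = statistics.stdev(steps)
    obs = excursion(series)
    sims = []
    for _ in range(n_sims):
        pos, lo, hi = 0.0, 0.0, 0.0
        for _ in steps:
            pos += random.gauss(0, sd)
            lo, hi = min(lo, pos), max(hi, pos)
        sims.append(hi - lo)
    return sum(s >= obs for s in sims) / n_sims

trend = [0.5 * t + random.gauss(0, 1) for t in range(100)]   # directional
flat = [random.gauss(0, 1) for _ in range(100)]              # stasis-like
p_trend = random_walk_null_test(trend)
p_flat = random_walk_null_test(flat)
print(p_trend, p_flat)
```

The asymmetry the paper reports shows up here too: adding noise to the trending series inflates the estimated step deviation and widens the null distribution, eroding power against directional selection, while the stasis case is barely affected.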
NASA Astrophysics Data System (ADS)
Socquet, Anne; Déprez, Aline; Cotte, Nathalie; Maubant, Louise; Walpersdorf, Andrea; Bato, Mary Grace
2017-04-01
We present here a new pan-European velocity field, obtained by processing 500+ cGPS stations in double difference, in the framework of the implementation phase of the European Plate Observing System (EPOS) project. This prototype solution spans the 2000-2016 period, and includes data from the RING, NOA, RENAG and European Permanent Network (EPN) cGPS networks. The data set is first split into daily sub-networks (between 8 and 14 sub-networks). The sub-networks consist of about 40 stations, with 2 overlapping stations. For each day and for each sub-network, the GAMIT processing is conducted independently. Once each sub-network achieves satisfactory results, a daily combination is performed in order to produce SINEX files. The chi-square value associated with the combination allows us to evaluate its quality. Finally, a multi-year combination generates position time series for each station. Each time series is visualized and the jumps associated with equipment changes (antenna or receiver) are estimated and corrected. This procedure allows us to generate daily solutions and position time series for all stations. The associated "interseismic" velocity field has then been estimated using a time series analysis with the MIDAS software, and compared to another independent estimate obtained by Kalman filtering with the globk software. In addition to this velocity field, we made a specific zoom on Italy and present a strain rate map as well as time series showing co- and post-seismic movements associated with the 2016 Amatrice and Norcia earthquakes.
Evaluation of the significance of abrupt changes in precipitation and runoff process in China
NASA Astrophysics Data System (ADS)
Xie, Ping; Wu, Ziyi; Sang, Yan-Fang; Gu, Haiting; Zhao, Yuxi; Singh, Vijay P.
2018-05-01
Abrupt changes are an important manifestation of hydrological variability. How to accurately detect abrupt changes in hydrological time series and evaluate their significance is an important issue, but methods for dealing with them effectively are lacking. In this study, we propose an approach to evaluate the significance of abrupt changes in time series at five levels: no, weak, moderate, strong, and dramatic. The approach is based on a correlation coefficient index calculated between the original time series and its abrupt change component. A larger correlation coefficient reflects a higher significance level of abrupt change. Results of Monte-Carlo experiments verified the reliability of the proposed approach, and also indicated the great influence of the statistical characteristics of a time series on the significance level of abrupt change. The approach was derived from the relationship between the correlation coefficient index and abrupt change, and can estimate and grade the significance levels of abrupt changes in hydrological time series. Application of the proposed approach to ten major watersheds in China showed that abrupt changes mainly occurred in five watersheds in northern China, which have arid or semi-arid climate and severe shortages of water resources. Runoff processes in northern China were more sensitive to precipitation change than those in southern China. Although annual precipitation and surface water resources amount (SWRA) exhibited a harmonious relationship in most watersheds, abrupt changes in the latter were more significant. Compared with abrupt changes in annual precipitation, human activities contributed much more to the abrupt changes in the corresponding SWRA, except for the Northwest Inland River watershed.
Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng
This paper proposes a new hybrid method for super short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time-varying behavior of solar radiance. In addition, solar power dynamics are fast and have little inertia. An accurate super short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), while the RNN architecture is based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved series for prediction. The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern recognition tool to estimate and compensate for the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels, and the results are compared with some of the most common and recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in prediction precision.
Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.
Bühler, Jonas; von Lieres, Eric; Huber, Gregor J
2018-01-01
Studies of long-distance transport of tracer isotopes in plants offer a high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h⁻¹. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h⁻¹. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase of sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule depending on the required statistical reliability of data acquired by future experiments.
NASA Astrophysics Data System (ADS)
Wu, Yue; Shang, Pengjian; Li, Yilong
2018-03-01
A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to research the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, such as financial time series. We apply MSEBSS to financial markets, and the results show that American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show a decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets having relatively high synchrony, the entropy values decrease with increasing scale factor, while for stock markets having high asynchrony, the entropy values do not decrease with increasing scale factor; sometimes they even tend to increase. So both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful if used together for studying other features of financial time series.
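The sample entropy at the core of MSE-style measures (before the symbolic modification this paper adds) can be sketched in plain Python. The choices of m, r, and the coarse-graining step follow the usual conventions and are assumptions here, not the paper's exact MSEBSS algorithm:

```python
import math
import random

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r) = -ln(A / B): A counts template matches of length m+1,
    B of length m; tolerance r is given as a fraction of the std deviation."""
    n = len(series)
    mean = sum(series) / n
    tol = r * (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    def matches(mm):
        t = [series[i:i + mm] for i in range(n - mm + 1)]
        return sum(max(abs(a - b) for a, b in zip(t[i], t[j])) <= tol
                   for i in range(len(t)) for j in range(i + 1, len(t)))
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

def coarse_grain(series, scale):
    """Multiscale step: averages over non-overlapping windows."""
    return [sum(series[i:i + scale]) / scale
            for i in range(0, len(series) - scale + 1, scale)]

random.seed(2)
noise = [random.gauss(0, 1) for _ in range(300)]
regular = [math.sin(0.3 * i) for i in range(300)]
e_noise, e_regular = sample_entropy(noise), sample_entropy(regular)
print(e_noise, e_regular)
```

The irregular noise series scores a much higher entropy than the regular sine, which is the basic complexity contrast these multiscale measures build on; repeating the calculation on `coarse_grain(series, scale)` for increasing scales gives the multiscale profile.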
Li, Shuying; Zhuang, Jun; Shen, Shifei
2017-07-01
In recent years, various types of terrorist attacks occurred, causing worldwide catastrophes. According to the Global Terrorism Database (GTD), among all attack tactics, bombing attacks happened most frequently, followed by armed assaults. In this article, a model for analyzing and forecasting the conditional probability of bombing attacks (CPBAs) based on time-series methods is developed. In addition, intervention analysis is used to analyze the sudden increase in the time-series process. The results show that the CPBA increased dramatically at the end of 2011. During that time, the CPBA increased by 16.0% in a two-month period to reach the peak value, but still stays 9.0% greater than the predicted level after the temporary effect gradually decays. By contrast, no significant fluctuation can be found in the conditional probability process of armed assault. It can be inferred that some social unrest, such as America's troop withdrawal from Afghanistan and Iraq, could have led to the increase of the CPBA in Afghanistan, Iraq, and Pakistan. The integrated time-series and intervention model is used to forecast the monthly CPBA in 2014 and through 2064. The average relative error compared with the real data in 2014 is 3.5%. The model is also applied to the total number of attacks recorded by the GTD between 2004 and 2014. © 2016 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Patra, S. R.
2017-12-01
Evapotranspiration (ET0) influences water resources and is considered a vital process in arid hydrologic frameworks. It is one of the most important measures for identifying drought conditions. Therefore, time series forecasting of evapotranspiration is very important in order to help decision makers and water system managers build proper systems to sustain and manage water resources. Time series analysis assumes that history repeats itself; hence, by analysing past values, better choices, or forecasts, can be made for the future. Ten years of ET0 data were used in this study to ensure a satisfactory forecast of monthly values. In this study, three models are presented: an ARIMA mathematical model, an artificial neural network (ANN) model, and a support vector machine (SVM) model. These three models are used for forecasting monthly reference crop evapotranspiration based on ten years of past historical records (1991-2001) of measured evaporation at the Ganjam region, Odisha, India, without considering climate data. The developed models allow water resource managers to predict up to 12 months ahead, making these predictions very useful for optimizing the resources needed for effective water resources management. In this study, multistep-ahead prediction is performed, which is more complex and troublesome than one-step-ahead prediction. Our investigation suggests that nonlinear relationships may exist among the monthly indices, so that the ARIMA model might not be able to effectively extract the full relationship hidden in the historical data. Support vector machines are potentially helpful time series forecasting strategies on account of their strong nonlinear mapping capability and resistance to complexity in forecasting data. SVMs have great learning capability in time series modelling compared to ANNs. For instance, SVMs implement the structural risk minimization principle, which yields better generalization than neural networks that use the empirical risk minimization principle. The reliability of these computational models was analysed in light of simulation results, and it was found that the SVM model produces the best results among the three. Future research should be directed to extending the validation data set and checking the validity of our results on different areas with hybrid intelligence techniques.
Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data
Hallac, David; Vare, Sagar; Boyd, Stephen; Leskovec, Jure
2018-01-01
Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (i.e., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios. PMID:29770257
The long-range correlation and evolution law of centennial-scale temperatures in Northeast China.
Zheng, Xiaohui; Lian, Yi; Wang, Qiguang
2018-01-01
This paper applies the detrended fluctuation analysis (DFA) method to investigate the long-range correlation of monthly mean temperatures from three typical measurement stations at Harbin, Changchun, and Shenyang in Northeast China from 1909 to 2014. The results reveal the memory characteristics of the climate system in this region. By comparing the temperatures from different time periods and investigating the variations of the scaling exponents at the three stations during these different time periods, we found that the monthly mean temperature has long-range correlation, which indicates that the temperature in Northeast China has long-term memory and good predictability. The monthly time series of temperatures over the past 106 years also shows good long-range correlation characteristics. These characteristics are also clearly observed in the annual mean temperature time series. Finally, we separated the centennial-length temperature time series into two time periods. The results reveal that the long-range correlations at the Harbin station over these two time periods have large variations, whereas no obvious variations are observed at the other two stations. This indicates that warming affects the regional climate system's predictability differently at different time periods. The research results can provide a quantitative reference point for regional climate predictability assessment and future climate model evaluation.
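The DFA procedure applied in this study can be sketched compactly (first-order detrending; the window sizes are illustrative assumptions). White noise should give a scaling exponent near 0.5 and a random walk near 1.5, with long-range-correlated temperature series falling in between:

```python
import math
import random

def dfa_exponent(series, scales=(4, 8, 16, 32, 64)):
    """DFA(1): integrate the centered series, fit a line in each window,
    and regress log F(s) on log s; the slope estimates the exponent alpha."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)

    def rms(window):  # root-mean-square residual after a linear fit
        n = len(window)
        xbar, ybar = (n - 1) / 2, sum(window) / n
        beta = (sum((i - xbar) * (y - ybar) for i, y in enumerate(window))
                / sum((i - xbar) ** 2 for i in range(n)))
        return (sum((y - ybar - beta * (i - xbar)) ** 2
                    for i, y in enumerate(window)) / n) ** 0.5

    pts = []
    for s in scales:
        wins = [profile[i:i + s] for i in range(0, len(profile) - s + 1, s)]
        f = (sum(rms(w) ** 2 for w in wins) / len(wins)) ** 0.5
        pts.append((math.log(s), math.log(f)))
    xbar = sum(x for x, _ in pts) / len(pts)
    ybar = sum(y for _, y in pts) / len(pts)
    return (sum((x - xbar) * (y - ybar) for x, y in pts)
            / sum((x - xbar) ** 2 for x, _ in pts))

random.seed(4)
white = [random.gauss(0, 1) for _ in range(4000)]
walk, pos = [], 0.0
for _ in range(4000):
    pos += random.gauss(0, 1)
    walk.append(pos)
alpha_white, alpha_walk = dfa_exponent(white), dfa_exponent(walk)
print(alpha_white, alpha_walk)
```

An exponent persistently above 0.5 on detrended monthly anomalies is the "long-term memory" signature the study reports.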
Children's Well-Being during Parents' Marital Disruption Process: A Pooled Time-Series Analysis.
ERIC Educational Resources Information Center
Sun, Yongmin; Li, Yuanzhang
2002-01-01
Examines the extent to which parents' marital disruption process affects children's academic performance and well-being both before and after parental divorce. Compared with peers in intact families, children of divorce fared less well. Discusses how family resources mediate detrimental effects over time. Similar results are noted for girls and…
Empirical mode decomposition and long-range correlation analysis of sunspot time series
NASA Astrophysics Data System (ADS)
Zhou, Yu; Leung, Yee
2010-12-01
Sunspots, which are the best known and most variable features of the solar surface, affect our planet in many ways. The number of sunspots during a period of time is highly variable and arouses strong research interest. When multifractal detrended fluctuation analysis (MF-DFA) is employed to study the fractal properties and long-range correlation of the sunspot series, some spurious crossover points might appear because of the periodic and quasi-periodic trends in the series. However many cycles of solar activities can be reflected by the sunspot time series. The 11-year cycle is perhaps the most famous cycle of the sunspot activity. These cycles pose problems for the investigation of the scaling behavior of sunspot time series. Using different methods to handle the 11-year cycle generally creates totally different results. Using MF-DFA, Movahed and co-workers employed Fourier truncation to deal with the 11-year cycle and found that the series is long-range anti-correlated with a Hurst exponent, H, of about 0.12. However, Hu and co-workers proposed an adaptive detrending method for the MF-DFA and discovered long-range correlation characterized by H≈0.74. In an attempt to get to the bottom of the problem in the present paper, empirical mode decomposition (EMD), a data-driven adaptive method, is applied to first extract the components with different dominant frequencies. MF-DFA is then employed to study the long-range correlation of the sunspot time series under the influence of these components. On removing the effects of these periods, the natural long-range correlation of the sunspot time series can be revealed. With the removal of the 11-year cycle, a crossover point located at around 60 months is discovered to be a reasonable point separating two different time scale ranges, H≈0.72 and H≈1.49. And on removing all cycles longer than 11 years, we have H≈0.69 and H≈0.28. 
The three cycle-removing methods—Fourier truncation, adaptive detrending and the proposed EMD-based method—are further compared, and possible reasons for the different results are given. Two numerical experiments are designed for quantitatively evaluating the performances of these three methods in removing periodic trends with inexact/exact cycles and in detecting the possible crossover points.
NASA Astrophysics Data System (ADS)
Menne, Matthew J.; Williams, Claude N., Jr.
2005-10-01
An evaluation of three hypothesis test statistics that are commonly used in the detection of undocumented changepoints is described. The goal of the evaluation was to determine whether the use of multiple tests could improve undocumented, artificial changepoint detection skill in climate series. The use of successive hypothesis testing is compared to optimal approaches, both of which are designed for situations in which multiple undocumented changepoints may be present. In addition, the importance of the form of the composite climate reference series is evaluated, particularly with regard to the impact of undocumented changepoints in the various component series that are used to calculate the composite. In a comparison of single test changepoint detection skill, the composite reference series formulation is shown to be less important than the choice of the hypothesis test statistic, provided that the composite is calculated from the serially complete and homogeneous component series. However, each of the evaluated composite series is not equally susceptible to the presence of changepoints in its components, which may be erroneously attributed to the target series. Moreover, a reference formulation that is based on the averaging of the first-difference component series is susceptible to random walks when the composition of the component series changes through time (e.g., values are missing), and its use is, therefore, not recommended. When more than one test is required to reject the null hypothesis of no changepoint, the number of detected changepoints is reduced proportionately less than the number of false alarms in a wide variety of Monte Carlo simulations. Consequently, a consensus of hypothesis tests appears to improve undocumented changepoint detection skill, especially when reference series homogeneity is violated.
A consensus of successive hypothesis tests using a semihierarchic splitting algorithm also compares favorably to optimal solutions, even when changepoints are not hierarchic.
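A single-changepoint version of the hypothesis tests being compared can be sketched as a maximum two-sample t statistic scan. This is an illustrative sketch only; the study evaluates several test statistics, reference-series formulations, and semihierarchic splitting, none of which are reproduced here:

```python
import random
import statistics

def max_t_changepoint(series, min_seg=5):
    """Scan candidate breakpoints; return (index, t) maximizing the two-sample
    t statistic between the segments before and after the break."""
    n = len(series)
    best_k, best_t = None, 0.0
    for k in range(min_seg, n - min_seg):
        left, right = series[:k], series[k:]
        m1, m2 = statistics.fmean(left), statistics.fmean(right)
        v1, v2 = statistics.variance(left), statistics.variance(right)
        pooled = ((len(left) - 1) * v1 + (len(right) - 1) * v2) / (n - 2)
        t = abs(m1 - m2) / (pooled * (1 / len(left) + 1 / len(right))) ** 0.5
        if t > best_t:
            best_k, best_t = k, t
    return best_k, best_t

random.seed(3)
# 60 homogeneous values, then an undocumented shift of +2
series = ([random.gauss(0, 1) for _ in range(60)] +
          [random.gauss(2, 1) for _ in range(60)])
k, t = max_t_changepoint(series)
print(k, round(t, 1))
```

Because the break position is unknown, the maximum t over all candidate positions no longer follows the ordinary t distribution; critical values must come from simulation, which is exactly where the choice of test statistic studied here begins to matter.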
Polar motion excitation analysis due to global continental water redistribution
NASA Astrophysics Data System (ADS)
Fernandez, L.; Schuh, H.
2006-10-01
We present the results obtained when studying the hydrological excitation of the Earth's wobble due to global redistribution of continental water storage. This work was performed in two steps. First, we computed hydrological angular momentum (HAM) time series based on the global hydrological model LaD (Land Dynamics model) for the period 1980 to 2004. Then, we assessed the effectiveness of this excitation by comparing the residuals of the geodetic time series, after removing atmospheric and oceanic contributions, with the respective hydrological ones. The emphasis was put on low-frequency variations. We also present a comparison of the HAM time series from LaD with one from a global model based on assimilated soil moisture and snow accumulation data from the NCEP/NCAR (National Center for Environmental Prediction/National Center for Atmospheric Research) reanalysis. Finally, we evaluate the performance of the LaD model in closing the polar motion budget at seasonal periods in comparison with the NCEP and Land Data Assimilation System (LDAS) models.
Artificial Neural Network versus Linear Models Forecasting Doha Stock Market
NASA Astrophysics Data System (ADS)
Yousif, Adil; Elfaki, Faiz
2017-12-01
The purpose of this study is to determine the instability of the Doha stock market and to develop forecasting models. Linear time series models are used and compared with a nonlinear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. The aim is to establish the most useful model based on daily and monthly data collected from the Qatar exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange and for use in several other sectors. With the help of these models, the Doha stock market index and various sectors were predicted. The study was conducted using various time series techniques to study and analyze data trends in producing appropriate results. After applying several models, such as the quadratic trend model, the double exponential smoothing model, and ARIMA, it was concluded that ARIMA (2,2) was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.
A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.
Ben Taieb, Souhaib; Atiya, Amir F
2016-01-01
Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
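The two basic strategies contrasted in this abstract can be sketched with a toy AR(1) example. On a noiseless AR(1) series the recursive and direct forecasts coincide exactly; their bias/variance behaviour only diverges once noise and model misspecification enter, which is what the paper analyzes:

```python
def fit_ar1(series):
    """Least-squares AR(1) without intercept: y[t] ~ phi * y[t-1]."""
    num = sum(a * b for a, b in zip(series, series[1:]))
    den = sum(a * a for a in series[:-1])
    return num / den

def recursive_forecast(series, horizon):
    """One model, iterated: feed each forecast back in as the next input."""
    phi = fit_ar1(series)
    preds, last = [], series[-1]
    for _ in range(horizon):
        last = phi * last
        preds.append(last)
    return preds

def direct_forecast(series, horizon):
    """One model per horizon h: regress y[t+h] on y[t] directly."""
    preds = []
    for h in range(1, horizon + 1):
        num = sum(a * b for a, b in zip(series, series[h:]))
        den = sum(a * a for a in series[:-h])
        preds.append((num / den) * series[-1])
    return preds

series = [0.8 ** t for t in range(30)]  # noiseless AR(1) with phi = 0.8
rec = recursive_forecast(series, 3)
direct = direct_forecast(series, 3)
print(rec, direct)
```

The recursive strategy reuses one estimated coefficient, so its errors compound across the horizon (lower variance, potentially higher bias under misspecification); the direct strategy estimates a fresh coefficient per horizon, trading the reverse way.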
A multivariate time series approach to modeling and forecasting demand in the emergency department.
Jones, Spencer S; Evans, R Scott; Allen, Todd L; Thomas, Alun; Haug, Peter J; Welch, Shari J; Snow, Gregory L
2009-02-01
The goals of this investigation were to study the temporal relationships between the demands for key resources in the emergency department (ED) and the inpatient hospital, and to develop multivariate forecasting models. Hourly data were collected from three diverse hospitals for the year 2006. Descriptive analysis and model fitting were carried out using graphical and multivariate time series methods. Multivariate models were compared to a univariate benchmark model in terms of their ability to provide out-of-sample forecasts of ED census and the demands for diagnostic resources. Descriptive analyses revealed little temporal interaction between the demand for inpatient resources and the demand for ED resources at the facilities considered. Multivariate models provided more accurate forecasts of ED census and of the demands for diagnostic resources. Our results suggest that multivariate time series models can be used to reliably forecast ED patient census; however, forecasts of the demands for diagnostic resources were not sufficiently reliable to be useful in the clinical setting.
Proisy, Christophe; Viennois, Gaëlle; Sidik, Frida; Andayani, Ariani; Enright, James Anthony; Guitet, Stéphane; Gusmawati, Niken; Lemonnier, Hugues; Muthusankar, Gowrappan; Olagoke, Adewole; Prosperi, Juliana; Rahmania, Rinny; Ricout, Anaïs; Soulard, Benoit; Suhardjono
2018-06-01
Revegetation of abandoned aquaculture regions should be a priority for any integrated coastal zone management (ICZM). This paper examines the potential of a matchless time series of 20 very high spatial resolution (VHSR) optical satellite images acquired for mapping trends in the evolution of mangrove forests from 2001 to 2015 in an estuary fragmented into aquaculture ponds. Evolution of mangrove extent was quantified through robust multitemporal analysis based on supervised image classification. Results indicated that mangroves are expanding inside and outside ponds and over pond dykes. However, the yearly expansion rate of vegetation cover greatly varied between replanted ponds. Ground truthing showed that only Rhizophora species had been planted, whereas natural mangroves consist of Avicennia and Sonneratia species. In addition, the dense Rhizophora plantations present very low regeneration capabilities compared with natural mangroves. Time series of VHSR images provide comprehensive and intuitive level of information for the support of ICZM. Copyright © 2017 Elsevier Ltd. All rights reserved.
Detrended fluctuation analysis based on higher-order moments of financial time series
NASA Astrophysics Data System (ADS)
Teng, Yue; Shang, Pengjian
2018-01-01
In this paper, a generalized method of detrended fluctuation analysis (DFA) is proposed as a new measure to assess the complexity of a complex dynamical system such as a stock market. We extend DFA and local scaling DFA to higher moments such as skewness and kurtosis (labeled SMDFA and KMDFA), so as to investigate the volatility scaling property of financial time series. Simulations are conducted over synthetic and financial data to provide a comparative study. We further report the results of volatility behaviors in three American, three Chinese, and three European stock markets by using the DFA and LSDFA methods based on higher moments. They demonstrate the dynamic behaviors of time series in different aspects, which can quantify the changes of complexity in stock market data and provide us with more meaningful information than a single exponent. The results reveal some higher-moment volatility and higher-moment multiscale volatility details that cannot be obtained using the traditional DFA method.
Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling
Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...
2014-07-14
Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies be orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models. The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation, detection of neoclassical tearing modes, including locked mode precursors, automatic clustering of modes, and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.
Scale invariance in chaotic time series: Classical and quantum examples
NASA Astrophysics Data System (ADS)
Landa, Emmanuel; Morales, Irving O.; Stránský, Pavel; Fossion, Rubén; Velázquez, Victor; López Vieyra, J. C.; Frank, Alejandro
Important aspects of chaotic behavior appear in systems of low dimension, as illustrated by the map module 1. It is indeed a remarkable fact that all systems that make a transition from order to disorder display common properties, irrespective of their exact functional form. We discuss evidence for 1/f power spectra in the chaotic time series associated with classical and quantum examples: the one-dimensional map module 1 and the spectrum of 48Ca. A detrended fluctuation analysis (DFA) method is applied to investigate the scaling properties of the energy fluctuations in the spectrum of 48Ca, obtained with a large realistic shell model calculation (ANTOINE code) and with a random shell model (TBRE) calculation, as well as in the time series obtained with the map module 1. We compare the scale-invariant properties of the 48Ca nuclear spectrum with similar analyses applied to the RMT ensembles GOE and GDE. A comparison with the corresponding power spectra is made in both cases. The possible consequences of the results are discussed.
A study of the effect of legal settlement on post-concussion symptoms.
Fee, C R; Rutherford, W H
1988-01-01
Forty-four consecutive patients with concussion for whom a medico-legal report had been written were followed up for 3-4 years after their accidents. Three cases were still pending at the end of the study. Fifty-seven per cent complained of symptoms when the medico-legal reports were written (mean interval from accident 12.9 months), 39% had symptoms at the time of settlement (mean interval 22.1 months) and 34% had symptoms one year later. When these results were compared with a general series from the same department some years earlier, it was found that the symptoms at the time of writing the reports were not significantly different from symptoms at 6 weeks in the earlier series, but the symptoms one year after settlement were almost two-and-a-half times greater than the symptoms at 12 months in the general series. No evidence could be found to suggest any organic basis for the higher symptom rate in the litigation series. It is suggested that the litigation process itself is a factor in the persistence of symptoms and this effect continues after legal settlement has been reached. Early settlement of the cases might significantly reduce morbidity. PMID:3408521
Automated time series forecasting for biosurveillance.
Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit
2007-09-30
For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
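The third method in the abstract, Holt-Winters generalized exponential smoothing, and its use for forming residuals (observation minus one-step forecast) can be sketched as follows. This is a minimal additive Holt-Winters implementation with illustrative smoothing constants, not the paper's tuned configuration.

```python
def holt_winters_residuals(y, period, alpha=0.4, beta=0.05, gamma=0.2):
    """One-step-ahead additive Holt-Winters forecasts; returns the residuals
    (observation - forecast) that would feed an alerting algorithm."""
    # Initialize level, trend, and seasonal terms from the first two cycles
    level = sum(y[:period]) / period
    trend = (sum(y[period:2*period]) - sum(y[:period])) / (period * period)
    season = [y[i] - level for i in range(period)]
    residuals = []
    for t in range(period, len(y)):
        forecast = level + trend + season[t % period]
        residuals.append(y[t] - forecast)
        prev_level = level
        level = alpha * (y[t] - season[t % period]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % period] = gamma * (y[t] - level) + (1 - gamma) * season[t % period]
    return residuals

# Synthetic daily counts with a day-of-week effect (period 7)
season_pattern = [3, -1, 2, 0, -2, -1, -1]   # zero-mean weekly pattern
counts = [10 + season_pattern[t % 7] for t in range(35)]
res = holt_winters_residuals(counts, period=7)
```

For a perfectly periodic series the residuals vanish, which is exactly the behavior the abstract exploits: systematic structure is absorbed by the forecast, leaving residuals for the control chart.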
Dst and a map of average equivalent ring current: 1958-2007
NASA Astrophysics Data System (ADS)
Love, J. J.
2008-12-01
A new Dst index construction is made using the original hourly magnetic-observatory data collected over the years 1958-2007 at four stations: Hermanus, South Africa; Kakioka, Japan; Honolulu, Hawaii; and San Juan, Puerto Rico. The construction method we use is generally consistent with the algorithm defined by Sugiura (1964), which forms the basis for the standard Kyoto Dst index. This involves corrections for observatory baseline shifts, subtraction of the main-field secular variation, and subtraction of specific harmonics that approximate the solar-quiet (Sq) variation. Fourier analysis of the observatory data reveals the nature of Sq: it consists primarily of periodic variation driven by the Earth's rotation, the Moon's orbit, the Earth's orbit, and, to some extent, the solar cycle. Cross coupling of the harmonics associated with each of the external periodic driving forces results in a seemingly complicated Sq time series that is sometimes considered to be relatively random and unpredictable, but which is, in fact, well described in terms of Fourier series. Working in the frequency domain, Sq can be filtered out, and, upon return to the time domain, the local disturbance time series (Dist) for each observatory can be recovered. After averaging the local disturbance time series from each observatory, the global magnetic disturbance time series Dst is obtained. Analysis of this new Dst index is compared with that produced by Kyoto, and various biases and differences are discussed. The combination of the Dist and Dst time series can be used to explore the local-time/universal-time symmetry of an equivalent ring current. Individual magnetic storms can have a complicated disturbance field that is asymmetrical in longitude, presumably due to partial ring currents. Using 50 years of data we map the average local-time magnetic disturbance, finding that it is very nearly proportional to Dst.
To our surprise, the primary asymmetry in mean magnetic disturbance is not between midnight and noon, but rather between dawn and dusk, with greatest mean disturbance occurring at dusk. As a result, proposed corrections to Dst for magnetopause and tail currents might be reasonably reconsidered.
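The frequency-domain removal of a periodic Sq-like signal described above can be sketched in miniature: zero the DFT bins at the known period and its harmonics, then invert to recover the disturbance series. This is a hedged toy illustration (a pure-Python O(N^2) textbook DFT, with the series length assumed to be an exact multiple of the period), not the authors' processing chain.

```python
import cmath, math

def remove_harmonics(x, period, n_harmonics=3):
    """Filter out a periodic 'Sq-like' component by zeroing the DFT bins at
    the fundamental (len(x)/period) and its harmonics, then inverting.
    Assumes len(x) is a multiple of `period` so the bins align exactly."""
    n = len(x)
    # Forward DFT (O(N^2); for illustration only)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    fundamental = n // period
    for h in range(1, n_harmonics + 1):
        k = h * fundamental
        X[k] = 0.0        # positive-frequency bin
        X[n - k] = 0.0    # matching negative-frequency bin
    # Inverse DFT; imaginary parts cancel for a real filtered signal
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

# Hourly series, 10 days: a constant 'disturbance' level plus a daily wave
x = [7 + 5 * math.sin(2 * math.pi * t / 24) for t in range(240)]
filtered = remove_harmonics(x, period=24)
```

The daily wave is removed exactly while the non-periodic part (here the constant level 7) survives, which is the essence of recovering Dist from the observatory series.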
Time-Series Forecast Modeling on High-Bandwidth Network Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Sim, Alex
With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever increasing data volume for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with the traditional approach such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model produces a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
Dynamical glucometry: Use of multiscale entropy analysis in diabetes
NASA Astrophysics Data System (ADS)
Costa, Madalena D.; Henriques, Teresa; Munshi, Medha N.; Segal, Alissa R.; Goldberger, Ary L.
2014-09-01
Diabetes mellitus (DM) is one of the world's most prevalent medical conditions. Contemporary management focuses on lowering mean blood glucose values toward a normal range, but largely ignores the dynamics of glucose fluctuations. We probed analyte time series obtained from continuous glucose monitor (CGM) sensors. We show that the fluctuations in CGM values sampled every 5 min are not uncorrelated noise. Next, using multiscale entropy analysis, we quantified the complexity of the temporal structure of the CGM time series from a group of elderly subjects with type 2 DM and age-matched controls. We further probed the structure of these CGM time series using detrended fluctuation analysis. Our findings indicate that the dynamics of glucose fluctuations from control subjects are more complex than those of subjects with type 2 DM over time scales ranging from about 5 min to 5 h. These findings support consideration of a new framework, dynamical glucometry, to guide mechanistic research and to help assess and compare therapeutic interventions, which should enhance complexity of glucose fluctuations and not just lower mean and variance of blood glucose levels.
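The multiscale entropy analysis used on the CGM series combines two steps: coarse-graining the series at increasing scales, and computing sample entropy at each scale. The sketch below is a simplified textbook version (parameters m and r are illustrative defaults), not the authors' exact pipeline.

```python
import math
import random
from statistics import pstdev

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    return [sum(x[i:i+scale]) / scale for i in range(0, len(x) - scale + 1, scale)]

def sample_entropy(x, m=2, r=0.2):
    """Simplified SampEn(m, r): -ln(A/B), where B counts template pairs of
    length m within tolerance r*std and A counts the same for length m+1."""
    tol = r * pstdev(x)
    n = len(x)
    def count_matches(length):
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if all(abs(x[i+k] - x[j+k]) <= tol for k in range(length)):
                    c += 1
        return c
    b = count_matches(m)
    a = count_matches(m + 1)
    return float('inf') if a == 0 or b == 0 else -math.log(a / b)

# Complexity profile across scales: entropy of coarse-grained versions
random.seed(1)
noise = [random.random() for _ in range(200)]      # irregular series
regular = [0.0, 1.0] * 100                          # perfectly regular series
mse_noise = [sample_entropy(coarse_grain(noise, s)) for s in (1, 2)]
```

A perfectly regular series has zero sample entropy while an irregular one scores high; the paper's finding is that healthy glucose dynamics retain higher entropy than diabetic dynamics across scales.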
SHARPs - A Near-Real-Time Space Weather Data Product from HMI
NASA Astrophysics Data System (ADS)
Bobra, M.; Turmon, M.; Baldner, C.; Sun, X.; Hoeksema, J. T.
2012-12-01
A data product from the Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO), called Space-weather HMI Active Region Patches (SHARPs), is now available through the SDO Joint Science Operations Center (JSOC) and the Virtual Solar Observatory. SHARPs are magnetically active regions identified on the solar disk and tracked automatically in time. SHARP data are processed within a few hours of the observation time. The SHARP data series contains active region-sized disambiguated vector magnetic field data in both Lambert Cylindrical Equal-Area and CCD coordinates on a 12 minute cadence. The series also provides simultaneous HMI maps of the line-of-sight magnetic field, continuum intensity, and velocity on the same ~0.5 arc-second pixel grid. In addition, the SHARP data series provides space weather quantities computed on the inverted, disambiguated, and remapped data. The values for each tracked region are computed and updated in near real time. We present space weather results for several X-class flares; furthermore, we compare these space weather quantities with helioseismic quantities calculated using ring-diagram analysis.
Time-Series Forecast Modeling on High-Bandwidth Network Measurements
Yoo, Wucherl; Sim, Alex
2016-06-24
With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever increasing data volume for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with the traditional approach such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model produces a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
Multiple Indicator Stationary Time Series Models.
ERIC Educational Resources Information Center
Sivo, Stephen A.
2001-01-01
Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…
Interannual Variability of OLR as Observed by AIRS and CERES
NASA Technical Reports Server (NTRS)
Susskind, Joel; Molnar, Gyula; Iredell, Lena; Loeb, Norman G.
2012-01-01
This paper compares spatial anomaly time series of OLR (Outgoing Longwave Radiation) and OLR(sub CLR) (Clear Sky OLR) as determined using observations from CERES Terra and AIRS over the time period September 2002 through June 2011. Both AIRS and CERES show a significant decrease in global mean and tropical mean OLR over this time period. We find excellent agreement of the anomaly time-series of the two OLR data sets in almost every detail, down to 1 deg X 1 deg spatial grid point level. The extremely close agreement of OLR anomaly time series derived from observations by two different instruments implies that both sets of results must be highly stable. This agreement also validates to some extent the anomaly time series of the AIRS derived products used in the computation of the AIRS OLR product. The paper also examines the correlations of anomaly time series of AIRS and CERES OLR, on different spatial scales, as well as those of other AIRS derived products, with that of the NOAA Sea Surface Temperature (SST) product averaged over the NOAA Nino-4 spatial region. We refer to these SST anomalies as the El Nino Index. Large spatially coherent positive and negative correlations of OLR anomaly time series with that of the El Nino Index are found in different spatial regions. Anomalies of global mean, and especially tropical mean, OLR are highly positively correlated with the El Nino Index. These correlations indicate that the recent global and tropical mean decreases in OLR over the period September 2002 through June 2011, as observed by both AIRS and CERES, are primarily the result of a transition from an El Nino condition at the beginning of the data record to La Nina conditions toward the end of the data period.
We show that the close correlation of global mean, and especially tropical mean, OLR anomalies with the El Nino Index can be well accounted for by temporal changes of OLR within two spatial regions which lie outside the NOAA Nino-4 region, in which anomalies of cloud cover and mid-tropospheric water vapor are both highly negatively correlated with the El Nino Index. Agreement of the AIRS and CERES OLR(sub CLR) anomaly time series is poorer, which may be a result of the large sampling differences in the ensemble of cases included in each OLR(sub CLR) data set.
NASA Astrophysics Data System (ADS)
Hwang, Sunghwan
1997-08-01
One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response characteristics of such a linear time periodic system exhibit sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because the linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions of the identified harmonic transfer function were then formulated using the spectral density functions both with and without additive noise processes at input and/or output. A procedure was developed to identify parameters of a model to match the frequency response characteristics between measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link. The linear time periodic technique results were compared with the linear time invariant technique results.
Also, the effects of noise processes and of the initial parameter guess on the identification procedure were investigated. To study the effect of elastic modes, a rigid blade with a trailing edge flap excited by a smart actuator was selected and system parameters were successfully identified, though at some expense of computational storage and time. In conclusion, the linear time periodic technique substantially improved the accuracy of the identified parameters compared to the linear time invariant technique. It was also robust to noise and to the initial parameter guess. However, an elastic mode of higher frequency relative to the system pumping frequency tends to increase the computer storage requirement and computing time.
Ensemble Deep Learning for Biomedical Time Series Classification
2016-01-01
Ensemble learning has been shown, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on ensemble learning. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost. PMID:27725828
Fractal stock markets: International evidence of dynamical (in)efficiency.
Bianchi, Sergio; Frezza, Massimiliano
2017-07-01
The last systemic financial crisis has reawakened the debate on the efficient nature of financial markets, traditionally described as semimartingales. The standard approaches to endowing the general notion of efficiency with empirical content have turned out to be somewhat inconclusive and misleading. We propose a topological-based approach to quantify the informational efficiency of a financial time series. The idea is to measure the efficiency by means of the pointwise regularity of a (stochastic) function, given that the signature of a martingale is that its pointwise regularity equals 1/2. We provide estimates for real financial time series and investigate their (in)efficient behavior by comparing three main stock indexes.
Time series ARIMA models for daily price of palm oil
NASA Astrophysics Data System (ADS)
Ariff, Noratiqah Mohd; Zamhawari, Nor Hashimah; Bakar, Mohd Aftar Abu
2015-02-01
Palm oil is deemed one of the most important commodities forming the economic backbone of Malaysia. Modeling and forecasting the daily price of palm oil is of great interest for Malaysia's economic growth. In this study, time series ARIMA models are used to fit the daily price of palm oil. The Akaike Information Criterion (AIC), the Akaike Information Criterion with a correction for finite sample sizes (AICc) and the Bayesian Information Criterion (BIC) are used to compare between the different ARIMA models being considered. It is found that the ARIMA(1,2,1) model is suitable for the daily price of crude palm oil in Malaysia for the years 2010 to 2012.
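The model-selection step (comparing candidate models by AIC) can be sketched with a much simpler pure-AR fit; this is a hedged illustration of the criterion, not the paper's ARIMA(p,d,q) machinery or data.

```python
import math
import random

def fit_ar(x, p):
    """Least-squares AR(p) fit via the normal equations (Gaussian
    elimination on the augmented matrix); returns (coefficients, RSS)."""
    n = len(x)
    rows = [[sum(x[t-i-1] * x[t-j-1] for t in range(p, n)) for j in range(p)]
            + [sum(x[t] * x[t-i-1] for t in range(p, n))] for i in range(p)]
    for c in range(p):
        pivot = max(range(c, p), key=lambda r: abs(rows[r][c]))
        rows[c], rows[pivot] = rows[pivot], rows[c]
        for r in range(c + 1, p):
            f = rows[r][c] / rows[c][c]
            for k in range(c, p + 1):
                rows[r][k] -= f * rows[c][k]
    coef = [0.0] * p
    for c in range(p - 1, -1, -1):
        coef[c] = (rows[c][p] - sum(rows[c][k] * coef[k]
                                    for k in range(c + 1, p))) / rows[c][c]
    rss = sum((x[t] - sum(coef[j] * x[t-j-1] for j in range(p))) ** 2
              for t in range(p, n))
    return coef, rss

def aic(rss, n, k):
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

# Toy comparison of AR orders on a seeded noisy AR(1) series
random.seed(0)
y = [0.0]
for _ in range(300):
    y.append(0.7 * y[-1] + random.gauss(0, 1))
scores = {p: aic(fit_ar(y, p)[1], len(y) - p, p) for p in (1, 2, 3)}
```

The order with the lowest AIC balances goodness of fit (RSS) against the parameter-count penalty, the same trade-off AIC, AICc and BIC make among ARIMA candidates.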
Fractal stock markets: International evidence of dynamical (in)efficiency
NASA Astrophysics Data System (ADS)
Bianchi, Sergio; Frezza, Massimiliano
2017-07-01
The last systemic financial crisis has reawakened the debate on the efficient nature of financial markets, traditionally described as semimartingales. The standard approaches to endowing the general notion of efficiency with empirical content have turned out to be somewhat inconclusive and misleading. We propose a topological-based approach to quantify the informational efficiency of a financial time series. The idea is to measure the efficiency by means of the pointwise regularity of a (stochastic) function, given that the signature of a martingale is that its pointwise regularity equals 1/2. We provide estimates for real financial time series and investigate their (in)efficient behavior by comparing three main stock indexes.
Detecting a periodic signal in the terrestrial cratering record
NASA Technical Reports Server (NTRS)
Grieve, Richard A. F.; Rupert, James D.; Goodacre, Alan K.; Sharpton, Virgil L.
1988-01-01
A time-series analysis of model periodic data, where the period and phase are known, has been performed in order to investigate whether a significant period can be detected consistently from a mix of random and periodic impacts. Special attention is given to the effect of age uncertainties and random ages in the detection of a periodic signal. An equivalent analysis is performed with observed data on crater ages and compared with the model data, and the effects of the temporal distribution of crater ages on the results from the time-series analysis are studied. Evidence for a consistent 30-m.y. period is found to be weak.
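One simple way to score a trial period against a mix of periodic and random event ages, in the spirit of the analysis above though not the authors' method, is a phase-folding (Rayleigh) statistic: fold the ages at the trial period and measure how tightly the phases cluster. Adding Gaussian jitter to the ages models the age uncertainties the abstract emphasizes; the numbers below are purely illustrative.

```python
import cmath
import math
import random

def rayleigh_power(ages, period):
    """Length of the mean phase vector when event ages are folded at a
    trial period: 1.0 = perfectly phase-locked, near 0 = random phases."""
    z = sum(cmath.exp(2j * math.pi * (a / period)) for a in ages) / len(ages)
    return abs(z)

# 19 craters on a strict 30 Myr cycle vs. 19 craters at random ages
periodic = [30.0 * k for k in range(1, 20)]
random.seed(42)
scattered = [random.uniform(0.0, 600.0) for _ in range(19)]
strong = rayleigh_power(periodic, 30.0)
weak = rayleigh_power(scattered, 30.0)
```

Jittering the periodic ages (e.g. `a + random.gauss(0, sigma)`) degrades the statistic smoothly, which is why age uncertainties and a random admixture can wash out a genuine 30 Myr signal, consistent with the weak evidence the study reports.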
NASA Astrophysics Data System (ADS)
Csatho, B. M.; Schenk, A. F.; Babonis, G. S.; van den Broeke, M. R.; Kuipers Munneke, P.; van der Veen, C. J.; Khan, S. A.; Porter, D. F.
2016-12-01
This study presents a new, comprehensive reconstruction of Greenland Ice Sheet elevation changes, generated using the Surface Elevation And Change detection (SERAC) approach. 35-year-long elevation-change time series (1980-2015) were obtained at more than 150,000 locations from observations acquired by NASA's airborne and spaceborne laser altimeters (ATM, LVIS, ICESat), PROMICE laser altimetry data (2007-2011) and a DEM covering the ice sheet margin derived from stereo aerial photographs (1970s-80s). After removing the effect of Glacial Isostatic Adjustment (GIA) and the elastic crustal response to changes in ice loading, the time series were partitioned into changes due to surface processes and ice dynamics and then converted into mass change histories. Using gridded products, we examined ice sheet elevation and mass change patterns, and compared them with other estimates at different scales, from individual outlet glaciers through large drainage basins to the entire ice sheet. Both the SERAC time series and the grids derived from these time series revealed significant spatial and temporal variations of dynamic mass loss and widespread intermittent thinning, indicating the complexity of ice sheet response to climate forcing. To investigate the regional and local controls of ice dynamics, we examined thickness change time series near outlet glacier grounding lines. Changes on most outlet glaciers were consistent with one or more episodes of dynamic thinning that propagates upstream from the glacier terminus. The spatial pattern of the onset, duration, and termination of these dynamic thinning events suggests a regional control, such as warming ocean and air temperatures. However, the intricate spatiotemporal pattern of dynamic thickness change suggests that, regardless of the forcing responsible for initial glacier acceleration and thinning, the response of individual glaciers is modulated by local conditions.
We use statistical methods, such as principal component analysis and multivariate regression, to analyze the dynamic ice-thickness change time series derived by SERAC and to investigate the primary forcings and controls on outlet glacier changes.
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
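The dynamic-programming scheme described above can be sketched concretely. The measure function and difference below are illustrative choices (segment item set = union of its points' sets; segment difference = total items missing from each point relative to that union); the paper defines several such functions, and this is not claimed to be any particular one of them.

```python
def segment_difference(points, a, b):
    """Difference between the union item set of points[a:b] and its members."""
    seg = set().union(*points[a:b])
    return sum(len(seg - p) for p in points[a:b])

def optimal_segmentation(points, k):
    """Dynamic program over segment boundaries: minimal total segment
    difference for a segmentation into exactly k segments."""
    n = len(points)
    inf = float('inf')
    # Precompute segment difference for every candidate segment [a, b)
    cost = [[segment_difference(points, a, b) if a < b else 0
             for b in range(n + 1)] for a in range(n + 1)]
    best = [[inf] * (k + 1) for _ in range(n + 1)]
    back = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, min(i, k) + 1):
            for a in range(j - 1, i):
                c = best[a][j - 1] + cost[a][i]
                if c < best[i][j]:
                    best[i][j] = c
                    back[i][j] = a
    bounds, i = [], n
    for j in range(k, 0, -1):   # recover segment end points
        bounds.append(i)
        i = back[i][j]
    return best[n][k], sorted(bounds)

# Two regimes of item sets: an optimal 2-segmentation should split them
points = [{1, 2}, {1, 2}, {1, 3}, {5}, {5, 6}, {5, 6}]
total, bounds = optimal_segmentation(points, 2)
```

The DP finds the boundary between the {1,2,3}-like and {5,6}-like regimes, illustrating how an optimal segmentation tracks changes in item-set content rather than segment length.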
A novel weight determination method for time series data aggregation
NASA Astrophysics Data System (ADS)
Xu, Paiheng; Zhang, Rong; Deng, Yong
2017-09-01
Aggregation in time series is of great importance in time series smoothing, prediction and other time series analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced to the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator generates weights with respect to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
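The two ingredients, degree-based weights from a natural visibility graph and time-decay weights linearly mixed with them, can be sketched as follows. The decay and mixing parameters are illustrative assumptions, not values from the paper, and the time-decay weights are only an IOWA-like stand-in for the full operator.

```python
def visibility_degrees(y):
    """Degree of each point in the natural visibility graph: (a, y[a]) and
    (b, y[b]) are linked if every point between them lies strictly below
    the straight line connecting them."""
    n = len(y)
    deg = [0] * n
    for a in range(n):
        for b in range(a + 1, n):
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                deg[a] += 1
                deg[b] += 1
    return deg

def combined_weights(y, decay=0.9, mix=0.5):
    """Linear mix of time-decay weights (recent points count more, an
    IOWA-like ingredient) and visibility-graph degree weights, each
    normalized to sum to 1."""
    n = len(y)
    tw = [decay ** (n - 1 - i) for i in range(n)]
    dw = visibility_degrees(y)
    tw_s, dw_s = sum(tw), sum(dw)
    return [mix * t / tw_s + (1 - mix) * d / dw_s for t, d in zip(tw, dw)]

w = combined_weights([3, 1, 4, 1, 5, 9, 2, 6])
```

An aggregated (smoothed) value is then simply `sum(wi * yi for wi, yi in zip(w, y))`; the degree term emphasizes structurally prominent points while the decay term emphasizes recent ones.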
Applied automatic offset detection using HECTOR within EPOS-IP
NASA Astrophysics Data System (ADS)
Fernandes, R. M. S.; Bos, M. S.
2016-12-01
It is well known that offsets are present in most GNSS coordinate time series. These offsets need to be taken into account in the analysis to avoid incorrect estimation of tectonic motions. The times of the offsets are normally determined by visual inspection of the time series, but with the ever increasing number of GNSS stations this is becoming too time consuming, and automatic offset detection algorithms are required. This is particularly true in projects like EPOS (European Plate Observing System), where routine analysis of thousands of daily time series will be required. It is also planned to include stations installed for technical applications, whose metadata are not always properly maintained. In this research we present an offset detection scheme that uses the Bayesian Information Criterion (BIC) to determine the most likely time of an offset. The novelty of this scheme is that it takes the temporal correlation of the noise into account. This aspect is normally ignored because it significantly increases the computation time; however, it needs to be taken into account to ensure that the estimated BIC value is correct. We were able to create a fast algorithm by adopting the methodology implemented in HECTOR (Bos et al., 2013). We evaluate the feasibility of the approach using the core IGS network, where most of the offsets have been accurately determined, which permits an external evaluation of this new offset detection approach to be included in HECTOR. We also apply the scheme to regional networks in Iberia, where offsets are often not documented properly, in order to compare the usual manual approach with the new automatic approach. Finally, we also compare the approach used by HECTOR with other algorithms such as MIDAS and STARS.
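The core of BIC-based offset detection can be sketched in a deliberately simplified form: scan candidate epochs, fit a mean-shift model at each, and keep the model (with or without an offset) whose BIC is lowest. Crucially, this sketch assumes white noise; the scheme described above additionally models the temporal correlation of the noise, which this toy omits.

```python
import math

def bic_offset_detection(y):
    """Single-offset scan under a white-noise assumption: compare the BIC
    of a mean-shift model at every candidate epoch with the no-offset
    model. Returns (best BIC, offset epoch or None)."""
    n = len(y)

    def bic(rss, k):
        # Gaussian BIC up to an additive constant
        return n * math.log(rss / n) + k * math.log(n)

    mean = sum(y) / n
    best = (bic(sum((v - mean) ** 2 for v in y), 1), None)
    for t in range(2, n - 1):            # candidate offset epochs
        m1 = sum(y[:t]) / t
        m2 = sum(y[t:]) / (n - t)
        rss = (sum((v - m1) ** 2 for v in y[:t])
               + sum((v - m2) ** 2 for v in y[t:]))
        if bic(rss, 2) < best[0]:
            best = (bic(rss, 2), t)
    return best

# A coordinate-like series with a 1.0 step at epoch 20 plus a small wiggle
y = [0.01 * (-1) ** i for i in range(20)] + \
    [1 + 0.01 * (-1) ** i for i in range(20, 40)]
score, epoch = bic_offset_detection(y)
```

On a series without a step, the extra-parameter penalty in the BIC makes the no-offset model win, which is the safeguard against declaring spurious offsets.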
Dimension reduction of frequency-based direct Granger causality measures on short time series.
Siggiridou, Elsa; Kimiskidis, Vasilios K; Kugiumtzis, Dimitris
2017-09-01
The mainstream in the estimation of effective brain connectivity relies on Granger causality measures in the frequency domain. If the measure is meant to capture direct causal effects accounting for the presence of other observed variables, as in multi-channel electroencephalograms (EEG), typically the fit of a vector autoregressive (VAR) model on the multivariate time series is required. For short time series of many variables, the estimation of the VAR may not be stable, requiring dimension reduction that results in restricted or sparse VAR models. The restricted VAR obtained by the modified backward-in-time selection method (mBTS) is adapted to the generalized partial directed coherence (GPDC), termed restricted GPDC (RGPDC). Dimension reduction on other frequency-based measures, such as the direct directed transfer function (dDTF), is straightforward. First, a simulation study using linear stochastic multivariate systems is conducted and RGPDC is favorably compared to GPDC on short time series in terms of sensitivity and specificity. Then the two measures are tested for their ability to detect changes in brain connectivity during an epileptiform discharge (ED) from multi-channel scalp EEG. It is shown that RGPDC identifies the connectivity structure of the simulated systems, as well as changes in brain connectivity, better than GPDC, and is less dependent on the free parameter of the VAR order. The proposed dimension reduction in frequency measures based on VAR constitutes an appropriate strategy to reliably estimate brain networks within short time windows. Copyright © 2017 Elsevier B.V. All rights reserved.
Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C
2016-11-01
Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data, and we provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared to learning approaches based on static data. Finally, as an application, our method proposes erroneous time-points in the time series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
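The idea of discriminating Boolean networks by their consistency with a discretized trace can be illustrated with a toy check. This sketch assumes synchronous updating and a hypothetical 3-node network; the paper's method uses richer semantics and Answer Set Programming, which this does not attempt to reproduce.

```python
def is_consistent(update_fns, trace):
    """Toy form of the necessary condition: under synchronous updating,
    the network must reproduce the discretized trace state by state."""
    state = trace[0]
    for observed in trace[1:]:
        state = tuple(f(state) for f in update_fns)
        if state != observed:
            return False
    return True

# Hypothetical 3-node network: A' = B, B' = A and not C, C' = A
net = (
    lambda s: s[1],
    lambda s: int(s[0] and not s[2]),
    lambda s: s[0],
)
trace = [(1, 0, 0), (0, 1, 1), (1, 0, 0)]

# "Learning" as filtering: keep every candidate rule for node B that is
# consistent with the trace; several may fit (an over-approximation)
candidates = [
    lambda s: int(s[0] and not s[2]),
    lambda s: s[0],
    lambda s: int(s[0] or s[2]),
]
consistent = [i for i, g in enumerate(candidates)
              if is_consistent((net[0], g, net[2]), trace)]
```

That more than one candidate rule can survive the filter is precisely why the paper speaks of an over-approximation of the set of fitting networks rather than a unique model.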
A morphological perceptron with gradient-based learning for Brazilian stock market forecasting.
Araújo, Ricardo de A
2012-04-01
Several linear and non-linear techniques have been proposed to solve the stock market forecasting problem. However, a limitation common to all these techniques is known as the random walk dilemma (RWD). In this scenario, forecasts generated by arbitrary models have a characteristic one-step-ahead delay with respect to the time series values, so that there is a time phase distortion in the reconstruction of stock market phenomena. In this paper, we propose a suitable model inspired by concepts from mathematical morphology (MM) and lattice theory (LT). This model is generically called the increasing morphological perceptron (IMP). We also present a gradient steepest-descent method to design the proposed IMP, based on ideas from the back-propagation (BP) algorithm and using a systematic approach to overcome the non-differentiability of morphological operations. Into the learning process we have included a procedure to overcome the RWD: an automatic correction step geared toward eliminating the time phase distortions that occur in stock market phenomena. Furthermore, an experimental analysis is conducted with the IMP using four complex non-linear time series forecasting problems from the Brazilian stock market. Additionally, two natural-phenomena time series are used to assess the forecasting performance of the proposed IMP on non-financial time series. Finally, the obtained results are discussed and compared to results found using models recently proposed in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
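The building block of morphological perceptrons is the max-plus (dilation) neuron, whose max operation is non-differentiable; a common workaround is to route the gradient through the arg-max component only. A minimal single-neuron sketch follows (this is not the full IMP with its RWD correction step; the training setup and names are my own):

```python
import numpy as np

def dilation(x, w):
    """Morphological (max-plus) neuron: y = max_j (x_j + w_j)."""
    return np.max(x + w)

def train(X, y, lr=0.01, epochs=200):
    """Steepest-descent training of one max-plus neuron on squared error.
    The max is non-differentiable, so each update is a subgradient step on
    the single component that attains the max."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmax(xi + w)          # active component
            err = dilation(xi, w) - yi
            w[j] -= lr * err               # update only the active weight
    return w
```

Because only the winning component is updated per sample, training behaves like a competitive learning rule over the lattice structure.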
Category-Specific Comparison of Univariate Alerting Methods for Biosurveillance Decision Support
Elbert, Yevgeniy; Hung, Vivian; Burkom, Howard
2013-01-01
Objective For a multi-source decision support application, we sought to match univariate alerting algorithms to surveillance data types to optimize detection performance. Introduction Temporal alerting algorithms commonly used in syndromic surveillance systems are often adjusted for data features such as cyclic behavior, but are subject to overfitting or misspecification errors when applied indiscriminately. In a project for the Armed Forces Health Surveillance Center to enable multivariate decision support, we obtained 4.5 years of outpatient, prescription, and laboratory test records from all US military treatment facilities. A proof-of-concept project phase produced 16 events with multiple-evidence corroboration for comparison of alerting algorithms for detection performance. We used the representative streams from each data source to compare the sensitivity of 6 algorithms to injected spikes, and we used all data streams from the 16 known events to compare them for detection timeliness. Methods The six methods compared were: the Holt-Winters generalized exponential smoothing method (1); automated choice between daily methods, regression and an exponentially weighted moving average (EWMA) (2); an adaptive daily Shewhart-type chart; an adaptive one-sided daily CUSUM; EWMA applied to 7-day means with a trend correction; and a 7-day temporal scan statistic. Sensitivity testing: We conducted comparative sensitivity testing for categories of time series with similar scales and seasonal behavior. We added multiples of the standard deviation of each time series as single-day injects in separate algorithm runs. For each candidate method, we then used as a sensitivity measure the proportion of these runs for which the output of each algorithm was below alerting thresholds estimated empirically for each algorithm using simulated data streams. We identified the algorithm(s) whose sensitivity was most consistently high for each data category.
For each syndromic query applied to each data source (outpatient, lab test orders, and prescriptions), 502 authentic time series were derived, one for each reporting treatment facility. Data categories were selected in order to group time series with similar expected algorithm performance: Median > 10; 0 < Median ≤ 10; Median = 0; lag-7 autocorrelation coefficient ≥ 0.2; and lag-7 autocorrelation coefficient < 0.2. Timeliness testing: For the timeliness testing, we avoided the artificiality of simulated signals by measuring alerting detection delays in the 16 corroborated outbreaks. The multiple time series from these events gave a total of 141 time series with outbreak intervals for timeliness testing. The following measures were computed to quantify timeliness of detection: Median Detection Delay, the median number of days to detect the outbreak; and Penalized Mean Detection Delay, the mean number of days to detect the outbreak with outbreak misses penalized as 1 day plus the maximum detection time. Results Based on the injection results, the Holt-Winters algorithm was most sensitive among time series with positive medians. The adaptive CUSUM and the Shewhart methods were most sensitive for data streams with median zero. Table 1 provides timeliness results using the 141 outbreak-associated streams on sparse (Median = 0) and non-sparse data categories.

Table 1. Detection delay (days) by data category and method:

  Data category  Measure         Holt-Winters  Regression/EWMA  Adaptive Shewhart  Adaptive CUSUM  7-day trend-adj. EWMA  7-day temporal scan
  Median = 0     Median          3             2                4                  2               4.5                    2
                 Penalized mean  7.2           7                6.6                6.2             7.3                    7.6
  Median > 0     Median          2             2                2.5                2               6                      4
                 Penalized mean  6.1           7                7.2                7.1             7.7                    6.6

The gray shading in Table 1 indicates the methods with the shortest detection delays for sparse and non-sparse data streams. The Holt-Winters method was again superior for non-sparse data. For data with median = 0, the adaptive CUSUM was superior for a daily false alarm probability of 0.01, but the Shewhart method was timelier for more liberal thresholds.
Conclusions Both kinds of detection performance analysis showed the method based on Holt-Winters exponential smoothing to be superior on non-sparse time series with day-of-week effects. The adaptive CUSUM and Shewhart methods proved optimal on sparse data and data without weekly patterns.
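An adaptive one-sided CUSUM of the kind compared above can be sketched as follows. This is a simplified illustration, not the AFHSC implementation; the sliding-baseline length and the reference and threshold parameters k and h are hypothetical:

```python
import statistics

def cusum_alerts(series, baseline=28, k=0.5, h=4.0):
    """One-sided adaptive CUSUM over daily counts. Each day is standardized
    against a sliding baseline window; the cumulative statistic
    s_t = max(0, s_{t-1} + z_t - k) triggers an alert when it exceeds h."""
    alerts, s = [], 0.0
    for t in range(baseline, len(series)):
        window = series[t - baseline:t]
        mu = statistics.mean(window)
        sd = statistics.pstdev(window) or 1.0   # guard against a constant window
        z = (series[t] - mu) / sd
        s = max(0.0, s + z - k)
        alert = s > h
        alerts.append((t, alert))
        if alert:
            s = 0.0                             # reset after signaling
    return alerts
```

The reset after each alert keeps one sustained excursion from masking later, separate events.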
Association mining of dependency between time series
NASA Astrophysics Data System (ADS)
Hafez, Alaaeldin
2001-03-01
Time series analysis is considered a crucial component of strategic control across a broad variety of disciplines in business, science, and engineering. Time series data is a sequence of observations collected over intervals of time, and each time series describes a phenomenon as a function of time. Analysis of time series data includes discovering trends (or patterns) in a time series sequence. In the last few years, data mining has emerged and been recognized as a new technology for data analysis. Data mining is the process of discovering potentially valuable patterns, associations, trends, sequences, and dependencies in data. Data mining techniques can discover information that many traditional business analysis and statistical techniques fail to deliver. In this paper, we adapt data mining techniques to analyze time series data. Using data mining techniques, maximal frequent patterns are discovered and used in predicting future sequences or trends, where trends describe the behavior of a sequence. In order to include different types of time series (e.g., irregular and non-systematic), we consider past frequent patterns of the same time sequences (local patterns) and of other dependent time sequences (global patterns). We use the word 'dependent' instead of the word 'similar' to emphasize real-life time series where two sequences could be completely different (in values, shapes, etc.) but still react to the same conditions in a dependent way. We propose the Dependence Mining Technique, which could be used in predicting time series sequences. The proposed technique consists of three phases: (a) for all time series sequences, generate their trend sequences; (b) discover maximal frequent trend patterns and generate pattern vectors (to keep information about frequent trend patterns); and (c) use trend pattern vectors to predict future time series sequences.
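Phases (a) and (b) can be sketched concretely: discretize a series into up/down/flat trend symbols, then count sliding windows to find frequent trend patterns. This is a minimal illustration, not the paper's algorithm; the U/D/F alphabet and threshold names are my own:

```python
from collections import Counter

def trend_sequence(values, eps=0.0):
    """Phase (a): discretize a series into trend symbols
    U (up), D (down), F (flat) based on consecutive differences."""
    out = []
    for a, b in zip(values, values[1:]):
        out.append("U" if b - a > eps else "D" if a - b > eps else "F")
    return "".join(out)

def frequent_patterns(trends, length, min_support):
    """Phase (b), simplified: count all sliding windows of trend symbols
    and keep those meeting the minimum support."""
    counts = Counter(trends[i:i + length]
                     for i in range(len(trends) - length + 1))
    return {p: c for p, c in counts.items() if c >= min_support}
```

Prediction (phase (c)) would then look up which frequent pattern best continues the most recent trend window, possibly drawing on patterns from dependent series as well as the series itself.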
Sigoillot, Frederic D; Huckins, Jeremy F; Li, Fuhai; Zhou, Xiaobo; Wong, Stephen T C; King, Randall W
2011-01-01
Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only 2 features, nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence timing of cell division.
Fourier series expansion for nonlinear Hamiltonian oscillators.
Méndez, Vicenç; Sans, Cristina; Campos, Daniel; Llopis, Isaac
2010-06-01
The problem of nonlinear Hamiltonian oscillators is one of the classical questions in physics. When an analytic solution is not possible, one can resort to a numerical solution or to perturbation theory around the linear problem. We apply the Fourier series expansion to find approximate solutions for the oscillator position as a function of time, as well as the period-amplitude relationship. We compare our results with other recent approaches, such as variational methods and heuristic approximations, in particular Ren-He's method. Based on its application to the Duffing oscillator, the nonlinear pendulum, and the eardrum equation, it is shown that the Fourier series expansion method is the most accurate.
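For the Duffing oscillator x'' + x + eps*x^3 = 0, the lowest-order Fourier ansatz x = A cos(wt) already gives the period-amplitude relationship: since cos^3 = (3 cos + cos 3·)/4, matching the cos(wt) terms yields w^2 = 1 + (3/4) eps A^2. The sketch below (a one-term truncation, not the paper's higher-order expansion) compares this against an RK4 reference:

```python
import math

def duffing_period_fourier(A, eps):
    """One-term Fourier (harmonic balance) period for x'' + x + eps*x^3 = 0:
    substituting x = A cos(w t) gives w^2 = 1 + (3/4) eps A^2."""
    return 2 * math.pi / math.sqrt(1 + 0.75 * eps * A ** 2)

def duffing_period_numeric(A, eps, dt=1e-4):
    """Reference period by RK4: start at x = A, v = 0 and time the first zero
    crossing; by symmetry of the potential that is a quarter period."""
    def acc(x):
        return -(x + eps * x ** 3)
    x, v, t = A, 0.0, 0.0
    while x > 0:
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
    return 4 * t
```

For weak nonlinearity the one-term approximation is already accurate to a fraction of a percent; adding higher harmonics to the Fourier ansatz tightens the agreement further.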