Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-14
... the time-series of costs and benefits into annualized values. First, DOE calculated a present value... the present value, DOE then calculated the corresponding time-series of fixed annual payments over a... does not imply that the time-series of costs and benefits from which the annualized values were...
Marwaha, Puneeta; Sunkaria, Ramesh Kumar
2016-09-01
The sample entropy (SampEn) has been widely used to quantify the complexity of RR-interval time series. It is a fact that higher complexity, and hence higher entropy, is associated with the RR-interval time series of healthy subjects. But SampEn suffers from the disadvantage that it assigns higher entropy to randomized surrogate time series, as well as to certain pathological time series, which is a misleading observation. This wrong estimation of the complexity of a time series may be due to the fact that the existing SampEn technique updates the threshold value as a function of the long-term standard deviation (SD) of a time series. However, the time series of certain pathologies exhibit substantial variability in beat-to-beat fluctuations. So the SD of the first-order difference (short-term SD) of the time series should be considered while updating the threshold value, to account for period-to-period variations inherent in a time series. In the present work, a new methodology, improved sample entropy (I-SampEn), has been proposed, in which the threshold value is updated by considering the period-to-period variations of a time series. The I-SampEn technique results in assigning a higher entropy value to age-matched healthy subjects than to patients suffering from atrial fibrillation (AF) and diabetes mellitus (DM). Our results are in agreement with the theory of reduced complexity of RR-interval time series in patients suffering from chronic cardiovascular and non-cardiovascular diseases.
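A minimal numpy sketch of the idea, not the authors' implementation: classic SampEn sets the tolerance r from the long-term SD, while the I-SampEn variant described above derives it from the SD of the first-order differences. The 0.2 factor and the parameter names are illustrative assumptions.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2, short_term_sd=False):
    """SampEn of a 1-D series. With short_term_sd=True the tolerance is
    taken from the SD of the first-order differences (the I-SampEn idea);
    otherwise from the long-term SD (classic SampEn)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sd = np.std(np.diff(x)) if short_term_sd else np.std(x)
    r = r_factor * sd

    def matches(dim):
        # All n-m overlapping templates of length `dim`.
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        total = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to all later templates.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += np.sum(d <= r)
        return total

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.normal(size=1000)))          # high: white noise
print(sample_entropy(np.sin(np.arange(1000) * 0.1)))  # low: regular signal
```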
ERIC Educational Resources Information Center
St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.
2014-01-01
Researchers are increasingly using comparative interrupted time series (CITS) designs to estimate the effects of programs and policies when randomized controlled trials are not feasible. In a simple interrupted time series design, researchers compare the pre-treatment values of a treatment group time series to post-treatment values in order to…
Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory
2016-05-12
...proved. Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously-valued time series from a sample. (A practical algorithm to compute the estimator is a work in progress.) Third, finitely-valued spatial processes... Keywords: mathematical statistics; time series; Markov chains; random... [Sponsor: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211]
Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue
2017-01-01
Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, measures the similarity of two subsequences of a time series with either zero or one, but without in-between values, which causes sudden changes of entropy values even if the time series changes only slightly. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis.
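A small sketch of the contrast being described, under the assumption of a sigmoidal similarity function; the exact function used by FMSE may differ, and k is an illustrative sharpness parameter.

```python
import numpy as np

def binary_similarity(d, r):
    # Classic sample entropy: hard 0/1 decision at the tolerance r,
    # so a tiny change in d near r flips the similarity entirely.
    return (d <= r).astype(float)

def flexible_similarity(d, r, k=10.0):
    # Smooth similarity in [0, 1]: nearby subsequences score close to 1,
    # distant ones close to 0, with a gradual transition around r.
    return 1.0 / (1.0 + np.exp(k * (d - r) / r))

d = np.array([0.19, 0.20, 0.21])
print(binary_similarity(d, r=0.2))    # [1. 1. 0.]  (abrupt jump)
print(flexible_similarity(d, r=0.2))  # smooth values around 0.5
```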
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
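As a rough illustration of the segmentation machinery (not the authors' algorithms, which are more efficient), the sketch below assumes the union of item sets as the measure function, the summed symmetric-difference size as the segment difference, and a dynamic program over segment boundaries; all names are illustrative.

```python
import numpy as np

def segment_cost(sets, i, j):
    """Cost of a segment covering time points i..j: its item set is the
    union of the member sets (one possible measure function), and the
    segment difference sums the symmetric difference with each member."""
    seg = set().union(*sets[i:j + 1])
    return sum(len(seg ^ s) for s in sets[i:j + 1])

def optimal_segmentation(sets, k):
    """Dynamic program: minimum-cost split of the series into k segments."""
    n = len(sets)
    cost = [[segment_cost(sets, i, j) for j in range(n)] for i in range(n)]
    dp = np.full((k, n), np.inf)       # dp[s][j]: 0..j covered by s+1 segments
    back = np.zeros((k, n), dtype=int)
    dp[0] = cost[0]
    for s in range(1, k):
        for j in range(s, n):
            for i in range(s, j + 1):  # last segment starts at i
                c = dp[s - 1][i - 1] + cost[i][j]
                if c < dp[s][j]:
                    dp[s][j], back[s][j] = c, i
    bounds, j = [], n - 1              # recover segment start points
    for s in range(k - 1, 0, -1):
        j = back[s][j] - 1
        bounds.append(j + 1)
    return dp[k - 1][n - 1], sorted(bounds)

series = [{'a', 'b'}] * 4 + [{'c'}] * 4 + [{'a', 'd'}] * 4
print(optimal_segmentation(series, 3))   # cost 0.0, boundaries [4, 8]
```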
The short time Fourier transform and local signals
NASA Astrophysics Data System (ADS)
Okumura, Shuhei
In this thesis, I examine the theoretical properties of the short time discrete Fourier transform (STFT). The STFT is obtained by applying the Fourier transform with a fixed-size moving window to an input series. We move the window by one time point at a time, so we have overlapping windows. I present several theoretical properties of the STFT, applied to various types of complex-valued, univariate time series inputs, and their outputs in closed forms. In particular, just like the discrete Fourier transform, the STFT's modulus time series takes large positive values when the input is a periodic signal. One main point is that a white noise time series input results in the STFT output being a complex-valued stationary time series, and we can derive the time and time-frequency dependency structure such as the cross-covariance functions. Our primary focus is the detection of local periodic signals. I present a method to detect local signals by computing the probability that the squared-modulus STFT time series has consecutive large values exceeding some threshold after one exceeding observation that follows an observation below the threshold. We discuss a method to reduce the computation of such probabilities by the Box-Cox transformation and the delta method, and show that it works well in comparison to the Monte Carlo simulation method.
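A minimal numpy sketch of such a maximally overlapping STFT (rectangular window stepped one point at a time) and of flagging a local periodic burst through the modulus series; the window length, period, and example signal are illustrative.

```python
import numpy as np

def stft_modulus(x, window_size):
    """STFT with a rectangular window moved one time point at a time
    (maximally overlapping windows). Returns the modulus time series
    |STFT| with shape (n_windows, n_frequencies)."""
    n = len(x)
    frames = np.array([x[t:t + window_size]
                       for t in range(n - window_size + 1)])
    return np.abs(np.fft.rfft(frames, axis=1))

# A local periodic burst (period 20) embedded in white noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
x = rng.normal(size=2000)
x[800:1200] += 2.0 * np.sin(2 * np.pi * t[800:1200] / 20)

mod = stft_modulus(x, window_size=100)
k = np.argmax(mod[900])  # dominant frequency bin inside the burst
# Consecutive large modulus values along this bin flag the local signal.
print(mod[:, k].max(), mod[:, k].mean())
```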
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-21
...-step calculation process to convert the time-series of costs and benefits into annualized values... costs and savings, for the time-series of costs and benefits using discount rates of three and seven... that the time-series of costs and benefits from which the annualized values were determined would be a...
Recurrent Neural Networks for Multivariate Time Series with Missing Values.
Che, Zhengping; Purushotham, Sanjay; Cho, Kyunghyun; Sontag, David; Liu, Yan
2018-04-17
Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a. informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on the Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments on time series classification tasks with real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
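A sketch of the input-decay idea described above, in plain numpy with fixed illustrative weights; in GRU-D the decay weights are trained jointly with the recurrent network, so this is not the authors' code.

```python
import numpy as np

def decay_impute(x_last, mask, delta, x_mean, w_gamma=0.1, b_gamma=0.0):
    """GRU-D-style input decay for one time step.

    x_last : last observed value of each variable (carried forward)
    mask   : 1 where the variable is observed at this step, else 0
    delta  : time elapsed since each variable was last observed
    x_mean : empirical mean of each variable

    A missing input decays from the last observation toward the
    empirical mean as the gap since the last observation grows.
    """
    gamma = np.exp(-np.maximum(0.0, w_gamma * delta + b_gamma))
    return mask * x_last + (1 - mask) * (gamma * x_last + (1 - gamma) * x_mean)

# Variable missing for 2 vs. 50 time units: the stale value is trusted
# less as the gap grows, sliding toward the mean (8.0 -> 5.0).
print(decay_impute(np.array([8.0, 8.0]), np.array([0, 0]),
                   np.array([2.0, 50.0]), np.array([5.0, 5.0])))
```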
Hierarchical time series bottom-up approach for forecast the export value in Central Java
NASA Astrophysics Data System (ADS)
Mahkya, D. A.; Ulama, B. S.; Suhartono
2017-10-01
The purpose of this study is to obtain the best model for, and predictions of, the export value of Central Java using hierarchical time series. The export value is an injection variable in the economy of a country, meaning that if the export value of the country increases, the country's economy grows further. Appropriate modeling is therefore needed to predict the export value, especially in Central Java. Export values in Central Java are grouped into 21 commodities, each commodity having a different pattern. One time series approach that can handle this structure is the hierarchical approach; here the bottom-up variant of hierarchical time series is used. To forecast the individual series at all levels, Autoregressive Integrated Moving Average (ARIMA), Radial Basis Function Neural Network (RBFNN), and hybrid ARIMA-RBFNN models are applied. For the selection of the best models, the symmetric Mean Absolute Percentage Error (sMAPE) is used. The results of the analysis show that, for the export value of Central Java, the bottom-up approach with hybrid ARIMA-RBFNN modeling can be used for long-term predictions, while for short- and medium-term predictions the bottom-up approach with RBFNN modeling can be used. Overall, the bottom-up approach with RBFNN modeling gives the best results.
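A minimal sketch of the bottom-up step and the sMAPE criterion named above; the per-series forecasting models (ARIMA, RBFNN) are out of scope here, and the shapes and toy numbers are illustrative.

```python
import numpy as np

def bottom_up(forecasts):
    """Bottom-up step: forecasts of all bottom-level series (here the 21
    commodities), shape (n_series, horizon), are summed to obtain the
    top-level (total export value) forecast."""
    return np.asarray(forecasts).sum(axis=0)

def smape(actual, predicted):
    """Symmetric MAPE (in percent), the model-selection criterion above."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.mean(2.0 * np.abs(predicted - actual)
                           / (np.abs(actual) + np.abs(predicted)))

# Toy check: two commodity forecasts aggregated against a known total.
commodities = np.array([[10.0, 11.0, 12.0],
                        [ 5.0,  5.5,  6.0]])
total_actual = np.array([15.5, 16.0, 18.5])
print(bottom_up(commodities), smape(total_actual, bottom_up(commodities)))
```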
Process fault detection and nonlinear time series analysis for anomaly detection in safeguards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, T.L.; Mullen, M.F.; Wangen, L.E.
In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material in two different leak scenarios for the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
Fluctuations in Cerebral Hemodynamics
2003-12-01
Determination of scaling properties: Detrended Fluctuation Analysis (DFA; see (28) and references therein) is commonly used to determine scaling... [Figure residue; recoverable captions: (a) arterial pressure (averaged over a cardiac beat) of a healthy subject, first 1000 values of the time series shown; (b) detrended fluctuation analysis of the time series shown in (a); Fig. 3, side-by-side boxplot.]
Heart rate time series characteristics for early detection of infections in critically ill patients.
Tambuyzer, T; Guiza, F; Boonen, E; Meersseman, P; Vervenne, H; Hansen, T K; Bjerre, M; Van den Berghe, G; Berckmans, D; Aerts, J M; Meyfroidt, G
2017-04-01
It is difficult to make a distinction between inflammation and infection. Therefore, new strategies are required to allow accurate detection of infection. Here, we hypothesize that we can distinguish infected from non-infected ICU patients based on dynamic features of serum cytokine concentrations and heart rate time series. Serum cytokine profiles and heart rate time series of 39 patients were available for this study. The serum concentrations of ten cytokines were measured using blood sampled every 10 min between 2100 and 0600 hours. Heart rate was recorded every minute. Ten metrics were used to extract features from these time series to obtain an accurate classification of infected patients. The predictive power of the metrics derived from the heart rate time series was investigated using decision tree analysis. Finally, logistic regression methods were used to examine whether classification performance improved with inclusion of features derived from the cytokine time series. The AUC of a decision tree based on two heart rate features was 0.88. The model had good calibration, with a Hosmer-Lemeshow p value of 0.09. There was no significant additional value in adding static cytokine levels or cytokine time series information to the generated decision tree model. The results suggest that heart rate is a better marker for infection than information captured by cytokine time series when the exact stage of infection is not known. The predictive value of (expensive) biomarkers should always be weighed against the routinely monitored data, and such biomarkers have to demonstrate added value.
Short Term Rain Prediction For Sustainability of Tanks in the Tropic Influenced by Shadow Rains
NASA Astrophysics Data System (ADS)
Suresh, S.
2007-07-01
Rainfall and flow prediction, adapting the Venkataraman single time series approach and the Wiener multiple time series approach, were conducted for the Aralikottai tank system and the Kothamangalam tank system, Tamilnadu, India. The results indicated that the raw prediction of daily values is closer to actual values than trend-identified predictions. The sister seasonal time series were more amenable to prediction than the whole parent time series. The Venkataraman single time series approach was better suited for rainfall prediction. The Wiener approach proved better for daily prediction of flow based on rainfall. The major conclusion is that the sister seasonal time series of rain and flow have their own identities even though they form part of the whole parent time series. Further studies with other tropical small watersheds are necessary to establish this unique characteristic of independent but not exclusive behavior of seasonal stationary stochastic processes as compared to parent non-stationary stochastic processes.
Application of an Entropic Approach to Assessing Systems Integration
2012-03-01
two econometric measures of information efficiency – Shannon entropy and the Hurst exponent. Shannon entropy (which is explained in Chapter III) can be...applied to evaluate long-term correlation of time series, while the Hurst exponent can be applied to classify the time series according to the existence...of trend. The Hurst exponent is the statistical measure of time series long-range dependence, and its value falls in the interval [0, 1] – a value in
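A minimal rescaled-range (R/S) sketch of the Hurst exponent mentioned in this snippet; the power-of-two window sizes and the absence of small-sample corrections are simplifying assumptions.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent.
    H ~ 0.5: uncorrelated; H > 0.5: persistent (long-range dependence);
    H < 0.5: anti-persistent."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    w = min_window
    while w <= n // 2:
        rs_vals = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())  # mean-adjusted profile
            r = dev.max() - dev.min()          # range
            s = seg.std()                      # scale
            if s > 0:
                rs_vals.append(r / s)
        sizes.append(w)
        rs_means.append(np.mean(rs_vals))
        w *= 2
    # Slope of log(R/S) versus log(window size) estimates H.
    h, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return h

# ~0.5-0.6 for white noise (R/S is biased slightly upward for finite n).
print(hurst_rs(np.random.default_rng(1).normal(size=4096)))
```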
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different initial rough imputation methods.
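The DTW distance at the heart of DTWimpute can be sketched with the classic dynamic program; this is generic DTW, not the package's optimized C++ code, and it assumes complete (already rough-imputed) inputs.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-programming DTW distance between two expression profiles.
    D[i, j] holds the cost of the best warping path aligning a[:i]
    with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Profiles with similar shape but shifted phase still score as close.
t = np.linspace(0, 2 * np.pi, 24)
print(dtw_distance(np.sin(t), np.sin(t - 0.5)))   # small
print(dtw_distance(np.sin(t), -np.sin(t)))        # large
```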
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-13
..., are the lowest observed values in the time series. The Commission may wish to protect the 2003 year... has implemented a series of management measures designed to regulate the incidental catch of BFT in... less well estimated, are the lowest observed values in the time series. The Commission may wish to...
Predicting long-term catchment nutrient export: the use of nonlinear time series models
NASA Astrophysics Data System (ADS)
Valent, Peter; Howden, Nicholas J. K.; Szolgay, Jan; Komornikova, Magda
2010-05-01
After the Second World War, the nitrate concentrations in European water bodies changed significantly as a result of increased nitrogen fertilizer use and changes in land use. In recent decades, however, as a consequence of the implementation of nitrate-reducing measures in Europe, the nitrate concentrations in water bodies have slowly decreased. As a result, the mean and variance of the observed time series also change with time (nonstationarity and heteroscedasticity). In order to detect changes and properly describe the behaviour of such time series by time series analysis, linear models (such as autoregressive (AR), moving average (MA) and autoregressive moving average (ARMA) models) are no longer suitable. Time series with sudden changes in statistical characteristics can cause various problems in the calibration of traditional water quality models and thus give biased predictions. Proper statistical analysis of these non-stationary and heteroscedastic time series, with the aim of detecting and subsequently explaining the variations in their statistical characteristics, requires the use of nonlinear time series models. This information can then be used to improve the building and calibration of conceptual water quality models, or to select the right calibration periods in order to produce reliable predictions. The objective of this contribution is to analyze two long time series of nitrate concentrations of the rivers Ouse and Stour with advanced nonlinear statistical modelling techniques and to compare their performance with traditional linear models of the ARMA class, in order to identify changes in the time series characteristics. The time series were analysed with nonlinear models with multiple regimes, represented by self-exciting threshold autoregressive (SETAR) and Markov-switching (MSW) models. The analysis showed that, based on the value of the residual sum of squares (RSS) in both datasets, SETAR and MSW models described the time series better than models of the ARMA class. In most cases the relative improvement of SETAR models against AR models of first order was low, ranging between 1% and 4%, with the exception of the three-regime model for the River Stour time series, where the improvement was 48.9%. In comparison, the relative improvement of MSW models was between 44.6% and 52.5% for two-regime and from 60.4% to 75% for three-regime models. However, the visual assessment of models plotted against the original datasets showed that, despite a high value of RSS, some ARMA models could describe the analyzed time series better than AR, MA and SETAR models with lower values of RSS. In both datasets MSW models provided a very good visual fit, describing most of the extreme values.
Logarithmic compression methods for spectral data
Dunham, Mark E.
2003-01-01
A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
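The compress-transmit-expand pipeline can be sketched as follows; for brevity this sketch substitutes a plain FFT for the patented log Gabor transform, so it illustrates only the threshold-select-and-invert structure, and the threshold value is an illustrative assumption.

```python
import numpy as np

def compress(x, threshold):
    """Transform, then keep only components whose log-magnitude exceeds
    the threshold; transmit (index, phase, log-magnitude) triples.
    A plain FFT stands in for the log Gabor transform here."""
    spec = np.fft.rfft(x)
    logmag, phase = np.log(np.abs(spec) + 1e-12), np.angle(spec)
    keep = np.flatnonzero(logmag > threshold)
    return keep, phase[keep], logmag[keep], len(x)

def expand(keep, phase, logmag, n):
    """Receiver side: rebuild the sparse spectrum and invert."""
    spec = np.zeros(n // 2 + 1, dtype=complex)
    spec[keep] = np.exp(logmag + 1j * phase)
    return np.fft.irfft(spec, n)

t = np.arange(1024)
x = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 16)
payload = compress(x, threshold=3.0)
x_hat = expand(*payload)
print(len(payload[0]), np.max(np.abs(x - x_hat)))  # few coefficients, tiny error
```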
Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu
2015-09-21
Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g., data from next generation sequencing technology based studies. By extending the theories for the tail probability of the range of the sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps), in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large scale all-versus-all comparisons possible. We also propose a hybrid approach by integrating the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data, where we found interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package, which now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
A time series model: First-order integer-valued autoregressive (INAR(1))
NASA Astrophysics Data System (ADS)
Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.
2017-07-01
Nonnegative integer-valued time series arise in many applications. The first-order integer-valued autoregressive model, INAR(1), is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on the process value one period before. The parameters of the model can be estimated by conditional least squares (CLS). The specification of INAR(1) follows the specification of AR(1). Forecasting in INAR(1) uses median or Bayesian forecasting methodology. The median forecasting methodology takes the least integer s at which the cumulative distribution function (CDF) up to s is greater than or equal to 0.5. The Bayesian forecasting methodology forecasts h steps ahead by generating the model parameter and the innovation-term parameter using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s where the CDF up to s is greater than or equal to u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 to April 2016.
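A minimal simulation-and-estimation sketch of the model just described: binomial thinning with Poisson innovations, and CLS via the regression of X_t on X_{t-1}. The parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_inar1(alpha, lam, n, x0=0):
    """INAR(1) via binomial thinning: X_t = alpha o X_{t-1} + eps_t,
    where alpha o X is Binomial(X, alpha) and eps_t ~ Poisson(lam)."""
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def cls_estimate(x):
    """Conditional least squares: ordinary regression of X_t on X_{t-1}."""
    y, z = x[1:], x[:-1]
    alpha_hat = np.cov(z, y, bias=True)[0, 1] / np.var(z)
    lam_hat = y.mean() - alpha_hat * z.mean()
    return alpha_hat, lam_hat

x = simulate_inar1(alpha=0.5, lam=2.0, n=5000)
print(cls_estimate(x))  # roughly (0.5, 2.0)
```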
Marwaha, Puneeta; Sunkaria, Ramesh Kumar
2017-02-01
Multiscale entropy (MSE) and refined multiscale entropy (RMSE) techniques are being widely used to evaluate the complexity of a time series across multiple time scales 't'. Both these techniques, at certain time scales (sometimes for the entire range of time scales, in the case of RMSE), assign higher entropy to the HRV time series of certain pathologies than to those of healthy subjects, and to their corresponding randomized surrogate time series. This incorrect assessment of signal complexity may be due to the fact that these techniques suffer from the following limitations: (1) the threshold value 'r' is updated as a function of the long-term standard deviation and is hence unable to capture the short-term variability as well as the substantial variability inherent in beat-to-beat fluctuations of long-term HRV time series; (2) in RMSE, the entropy values assigned to different filtered scaled time series are the result of changes in variance, but do not completely reflect the real structural organization inherent in the original time series. In the present work, we propose an improved RMSE (I-RMSE) technique that introduces a new procedure to set the threshold value by taking into account the period-to-period variability inherent in a signal, and we evaluated it on simulated and real HRV databases. The proposed I-RMSE assigns higher entropy to age-matched healthy subjects than to patients suffering from atrial fibrillation, congestive heart failure, sudden cardiac death and diabetes mellitus, for the entire range of time scales. The results strongly support the reduction in complexity of HRV time series in the female group, the old-aged group, patients suffering from severe cardiovascular and non-cardiovascular diseases, and in their corresponding surrogate time series.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by the Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of the number of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the procedure of the visibility graph that, in connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive for detecting wide "depressions" in input time series.
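A small sketch of how a connectivity (degree) series is obtained from the natural visibility graph; the O(n^2) scan below is the textbook construction, not the authors' specific implementation.

```python
import numpy as np

def visibility_degrees(y):
    """Degree (connectivity) series of the natural visibility graph:
    points a and b see each other if every point c between them lies
    strictly below the straight line joining them."""
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                y[c] < y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                deg[a] += 1
                deg[b] += 1
    return deg

# The resulting connectivity series is what gets analyzed for
# multifractality and persistence in the study above.
print(visibility_degrees(np.random.default_rng(0).normal(size=200)))
```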
Moving Average Models with Bivariate Exponential and Geometric Distributions.
1985-03-01
Glossary (Precipitation Frequency Data Server): ...provides a measure of the average time between years (and not events) in which a particular value is ... (RECURRENCE INTERVAL). ANNUAL MAXIMUM SERIES (AMS) - Time series of the largest precipitation amounts in a
Forecasting Enrollments with Fuzzy Time Series.
ERIC Educational Resources Information Center
Song, Qiang; Chissom, Brad S.
The concept of fuzzy time series is introduced and used to forecast the enrollment of a university. Fuzzy time series, an aspect of fuzzy set theory, forecasts enrollment using a first-order time-invariant model. To evaluate the model, the conventional linear regression technique is applied and the predicted values obtained are compared to the…
Unveiling signatures of interdecadal climate changes by Hilbert analysis
NASA Astrophysics Data System (ADS)
Zappalà, Dario; Barreiro, Marcelo; Masoller, Cristina
2017-04-01
A recent study demonstrated that, in a class of networks of oscillators, the optimal network reconstruction from dynamics is obtained when the similarity analysis is performed not on the original dynamical time series, but on transformed series obtained by the Hilbert transform [1]. That motivated us to use the Hilbert transform to study another kind of (in a broad sense) "oscillating" series, such as temperature series. Indeed, we found that Hilbert analysis of SAT (Surface Air Temperature) time series uncovers meaningful information about climate and is therefore a promising tool for the study of other climatological variables [2]. In this work we analysed a large dataset of SAT series, performing the Hilbert transform and further analysis with the goal of finding signs of climate change during the analysed period. We used the publicly available ERA-Interim dataset, containing reanalysis data [3]. In particular, we worked on daily SAT time series, from 1979 to 2015, at 16380 points arranged over a regular grid on the Earth's surface. From each SAT time series we calculate the anomaly series and also, by using the Hilbert transform, the instantaneous amplitude and instantaneous frequency series. Our first approach is to calculate the relative variation: the difference between the average value over the last 10 years and the average value over the first 10 years, divided by the average value over the whole analysed period. We did these calculations on our transformed series, frequency and amplitude, using both average values and standard deviation values. Furthermore, to have a comparison with an already known analysis method, we did the same calculations on the anomaly series. We plotted these results as maps, where the colour of each site indicates the value of its relative variation. Finally, to gain insight into the interpretation of our results on real SAT data, we generated synthetic sinusoidal series with various levels of additive noise. By applying Hilbert analysis to the synthetic data, we uncovered a clear trend between mean amplitude and mean frequency: as the noise level grows, the amplitude increases while the frequency decreases. Research funded in part by AGAUR (Generalitat de Catalunya), EU LINC project (Grant No. 289447) and Spanish MINECO (FIS2015-66503-C3-2-P).
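A minimal sketch of extracting instantaneous amplitude and frequency from a temperature-like series via the analytic signal (scipy's hilbert); the synthetic series and the unit time step are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_frequency(x, dt=1.0):
    """Instantaneous amplitude and frequency from the analytic signal.
    The mean is removed first so the phase is well defined; dt is the
    sampling step (1 day for the daily SAT series discussed above)."""
    analytic = hilbert(x - np.mean(x))
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    frequency = np.diff(phase) / (2 * np.pi * dt)  # cycles per time unit
    return amplitude, frequency

# Synthetic temperature-like series: annual cycle plus noise.
rng = np.random.default_rng(1)
t = np.arange(3650)
sat = 10 * np.sin(2 * np.pi * t / 365.25) + rng.normal(size=t.size)
amp, freq = amplitude_frequency(sat)
print(amp.mean(), freq.mean())  # amplitude near 10, frequency near 1/365.25
```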
Characterising experimental time series using local intrinsic dimension
NASA Astrophysics Data System (ADS)
Buzug, Thorsten M.; von Stamm, Jens; Pfister, Gerd
1995-02-01
Experimental strange attractors are analysed with the averaged local intrinsic dimension proposed by A. Passamante et al. [Phys. Rev. A 39 (1989) 3640], which is based on singular value decomposition of local trajectory matrices. The results are compared to the values of the Kaplan-Yorke dimension and the correlation dimension. The attractors, reconstructed with Takens' delay time coordinates from scalar velocity time series, are measured in the hydrodynamic Taylor-Couette system. A period doubling route towards chaos obtained from a very short Taylor-Couette cylinder yields a sequence of experimental time series to which the local intrinsic dimension is applied.
Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis
NASA Astrophysics Data System (ADS)
Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal
Missing values in time series data are a well-known and important problem which many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation in comparison to other methods.
Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes
NASA Astrophysics Data System (ADS)
Białek, Małgorzata
2006-03-01
Long-term changes in total ozone time series for the Arosa, Belsk, Boulder and Sapporo stations are examined. For each station we analyze time series of the following statistical characteristics of the distribution of daily ozone data: seasonal mean, standard deviation, maximum and minimum of total daily ozone values for all seasons. An iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe a lessening of the negative trends in the seasonal means as compared to those calculated by WMO for 1980-2000. We discuss the possibility of a change in the distribution shape of daily ozone data using the Kolmogorov-Smirnov test and comparing trend values in the seasonal mean, standard deviation, maximum and minimum time series for the selected stations and seasons. A distribution shift toward lower values without a change in the distribution shape is suggested, with the following exceptions: a spreading of the distribution toward lower values for Belsk during winter, and no decisive result for Sapporo and Boulder in summer.
Long series of geomagnetic measurements - unique at satellite era
NASA Astrophysics Data System (ADS)
Mandea, Mioara; Balasis, Georgios
2017-04-01
We have long appreciated that magnetic measurements obtained at Earth's surface are of great value in characterizing geomagnetic field behavior and probing the deep interior of our planet. Data from new magnetic satellite missions offer a new, detailed global understanding of the geomagnetic field. However, when our interest moves to long time scales, very long series of measurements play an important role. Here, we first provide an updated series of geomagnetic declination in Paris, shortly after a very special occasion: its value reached zero after some 350 years of westerly values. We take this occasion to emphasize the importance of long series of continuous measurements, mainly when various techniques are used to detect abrupt changes in the geomagnetic field, the geomagnetic jerks. Many novel concepts originating in dynamical systems or information theory have been developed, partly motivated by specific research questions from the geosciences. This continuously extending toolbox of nonlinear time series analysis is a key to understanding the complexity of the geomagnetic field. Here, motivated by these efforts, a series of entropy analyses is applied to geomagnetic field time series, aiming to detect dynamical complexity changes associated with geomagnetic jerks.
NASA Astrophysics Data System (ADS)
Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik
2018-05-01
Time series data are data taken or measured from observations at regular time intervals. Time series data analysis is used to perform data analysis considering the effect of time. The purpose of time series analysis is to know the characteristics and patterns of data and to predict a data value in some future period based on data from the past. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal Autoregressive (AR) order selection, determination of the state vector through canonical correlation analysis, estimation of parameters, and forecasting. The results of this research show that modeling electric energy consumption with a state space model of order 4 gives a Mean Absolute Percentage Error (MAPE) of 3.655%, placing the model in the very good forecasting category.
NASA Astrophysics Data System (ADS)
Du, Kongchang; Zhao, Ying; Lei, Jiaqiang
2017-09-01
In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then obtain a set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found that this usage of SSA and DWT in building hybrid models is incorrect. Since SSA and DWT use 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore report misleadingly 'high' prediction performance and may cause large errors in practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu
This paper intends to reveal the ability of the linear interpolation method to predict missing values in solar radiation time series. A reliable dataset depends on a complete observed time series: the absence or presence of radiation data alters the long-term variation of solar radiation measurement values, and with such gaps the chance of biased output results in the modelling and validation process is higher. The completeness of the observed variable dataset is significantly important for data analysis. Gaps and unreliable stretches in solar radiation time series are widespread and have become the main problematic issue. However, only a limited number of studies have been carried out that give full attention to estimating missing values in solar radiation datasets.
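For the method named in the title, a minimal pandas sketch of filling gaps by interpolation that is linear in time; the synthetic diurnal profile and gap positions are illustrative stand-ins for observed data.

```python
import numpy as np
import pandas as pd

# Hourly solar radiation with gaps (synthetic stand-in for observations).
idx = pd.date_range("2015-01-01", periods=48, freq="H")
radiation = pd.Series(
    np.clip(800 * np.sin(np.pi * (idx.hour - 6) / 12), 0, None), index=idx)
radiation.iloc[[10, 11, 30]] = np.nan   # introduce missing values

filled = radiation.interpolate(method="time")  # linear in time
print(filled.iloc[[10, 11, 30]])
```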
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-20
... survey catch shows no trend over the full survey time series and is currently at about 40 cm TL (16 in... highest value in the time series and the autumn survey nearing the peak values found in the 1960s. In 2007... not initiate a review of the status of these species at this time. FOR FURTHER INFORMATION CONTACT...
Persistence analysis of extreme CO, NO2 and O3 concentrations in ambient air of Delhi
NASA Astrophysics Data System (ADS)
Chelani, Asha B.
2012-05-01
Persistence analysis of air pollutant concentration time series and the corresponding exceedance time series is carried out to examine their temporal evolution. For this purpose, air pollutant concentrations, namely CO, NO2 and O3, observed during 2000-2009 at a traffic site in Delhi, are analyzed using detrended fluctuation analysis. Two types of extreme values are analyzed: concentrations exceeding a threshold provided by the national pollution control agency, and the time intervals between two exceedances. The time series of the three pollutants are observed to possess the persistence property, whereas the extreme value time series of only the primary pollutant concentrations are found to be persistent. Two time scaling regions are observed to be significant in the extreme time series of CO and NO2, mainly attributed to the implementation of CNG in vehicles. The presence of persistence in the three pollutant concentration time series is linked to the property of self-organized criticality. The observed persistence in the time intervals between two exceeded levels is a matter of concern, as persistent high concentrations can trigger health problems.
31 CFR 321.12 - Redemption value of securities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... each savings security is determined by the terms of its offering and the length of time it has been outstanding. The Bureau of the Public Debt determines redemption values for Series A-E bonds, eligible Series...
31 CFR 321.12 - Redemption value of securities.
Code of Federal Regulations, 2012 CFR
2012-07-01
... savings security is determined by the terms of its offering and the length of time it has been outstanding. The Bureau of the Public Debt determines redemption values for Series A-E bonds, eligible Series EE...
31 CFR 321.12 - Redemption value of securities.
Code of Federal Regulations, 2011 CFR
2011-07-01
... security is determined by the terms of its offering and the length of time it has been outstanding. The Bureau of the Public Debt determines redemption values for Series A-E bonds, eligible Series EE and I...
31 CFR 321.12 - Redemption value of securities.
Code of Federal Regulations, 2013 CFR
2013-07-01
... savings security is determined by the terms of its offering and the length of time it has been outstanding. The Bureau of the Public Debt determines redemption values for Series A-E bonds, eligible Series EE...
Model-based Clustering of Categorical Time Series with Multinomial Logit Classification
NASA Astrophysics Data System (ADS)
Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea
2010-09-01
A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule, using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
Dynamical complexity of short and noisy time series. Compression-Complexity vs. Shannon entropy
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Balasubramanian, Karthi
2017-07-01
Shannon entropy has been extensively used for characterizing the complexity of time series arising from chaotic dynamical systems and stochastic processes such as Markov chains. However, for short and noisy time series, Shannon entropy performs poorly. Complexity measures which are based on lossless compression algorithms are a good substitute in such scenarios. We evaluate the performance of two such compression-complexity measures, namely Lempel-Ziv complexity (LZ) and Effort-To-Compress (ETC), on short time series from chaotic dynamical systems in the presence of noise. Both LZ and ETC outperform Shannon entropy (H) in accurately characterizing the dynamical complexity of such systems. For very short binary sequences (which arise in neuroscience applications), ETC has a higher number of distinct complexity values than LZ and H, thus enabling a finer resolution. For two-state ergodic Markov chains, we empirically show that ETC converges to a steady state value faster than LZ. Compression-complexity measures are promising for applications which involve short and noisy time series.
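As a rough illustration of a compression-based complexity count, here is an LZ78-style phrase-counting sketch; this is a common proxy, and the LZ and ETC measures evaluated in the paper are defined differently in detail.

```python
import random

def lz_phrase_count(symbols):
    """LZ78-style parsing: count the distinct phrases needed to cover the
    sequence. More phrases means the sequence compresses less, i.e.
    higher compression complexity."""
    phrases, current = set(), ""
    for s in map(str, symbols):
        current += s
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

random.seed(0)
periodic = "01" * 32
noisy = "".join(random.choice("01") for _ in range(64))
print(lz_phrase_count(periodic), lz_phrase_count(noisy))  # periodic < noisy
```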
Russian State Time and Earth Rotation Service: Observations, Eop Series, Prediction
NASA Astrophysics Data System (ADS)
Kaufman, M.; Pasynok, S.
2010-01-01
The Russian State Time, Frequency and Earth Rotation Service provides the official EOP data and time for use in scientific, technical and metrological works in Russia. The observations of GLONASS and GPS at 30 stations in Russia, and also the Russian and worldwide observation data of VLBI (35 stations) and SLR (20 stations), are used now. To these three series of EOP, the data calculated in two other Russian analysis centers are added: IAA (VLBI, GPS and SLR series) and MCC (SLR). Joint processing of these 7 series is carried out every day (the operational EOP data for the last day and the predicted values for 50 days). The EOP values are refined weekly, and the systematic errors of every individual series are corrected. The combined results become accessible on the VNIIFTRI server (ftp.imvp.ru) at approximately 6h UT daily.
Complex-valued time-series correlation increases sensitivity in FMRI analysis.
Kociuba, Mary C; Rowe, Daniel B
2016-07-01
To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task-related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task-related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second-order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI data sets with high noise variance, and to avoid excessive processing-induced correlation.
NASA Astrophysics Data System (ADS)
Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz
2012-12-01
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by Tsallis generalized entropy, as a function of the q parameter (qSampEn). qSDiff curves were calculated, which consist of differences between original and surrogate series qSampEn. We evaluated qSDiff for 125 real heart rate variability (HRV) dynamics, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. Values of q where the maximum point occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10×10^-3, 1.11×10^-7, and 5.50×10^-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistently with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis.
Dunea, Daniel; Pohoata, Alin; Iordache, Stefania
2015-07-01
The paper presents the screening of various feedforward neural networks (FANN) and wavelet-feedforward neural networks (WFANN) applied to time series of ground-level ozone (O3), nitrogen dioxide (NO2), and particulate matter (PM10 and PM2.5 fractions) recorded at four monitoring stations located in various urban areas of Romania, to identify common configurations with optimal generalization performance. Two distinct model runs were performed as follows: data processing using hourly-recorded time series of airborne pollutants during cold months (O3, NO2, and PM10), when residential heating increases the local emissions, and data processing using 24-h daily averaged concentrations (PM2.5) recorded between 2009 and 2012. Dataset variability was assessed using statistical analysis. Time series were passed through various FANNs. Each time series was decomposed into four time-scale components using three-level wavelets, which were also passed through a FANN and recomposed into a single time series. The agreement between observed and modelled output was evaluated based on statistical significance (r coefficient and correlation between errors and data). Use of a Daubechies db3 wavelet with a Rprop FANN (6-4-1) gave positive results for the O3 time series, improving on the exclusive use of the FANN for hourly-recorded time series. NO2 was difficult to model due to time series specificity, but wavelet integration improved FANN performance. The Daubechies db3 wavelet did not improve the FANN outputs for the PM10 time series. Both models (FANN/WFANN) overestimated PM2.5 forecasted values in the last quarter of the time series. A potential improvement of the forecasted values could be the integration of a smoothing algorithm to adjust the model outputs for PM2.5.
Zhang, Yatao; Wei, Shoushui; Liu, Hai; Zhao, Lina; Liu, Chengyu
2016-09-01
The Lempel-Ziv (LZ) complexity and its variants have been extensively used to analyze the irregularity of physiological time series. To date, these measures cannot explicitly discern between the irregularity and the chaotic characteristics of physiological time series. Our study compared the performance of an encoding LZ (ELZ) complexity algorithm, a novel variant of the LZ complexity algorithm, with those of the classic LZ (CLZ) and multistate LZ (MLZ) complexity algorithms. Simulation experiments on Gaussian noise, logistic chaotic, and periodic time series showed that only the ELZ algorithm monotonically declined with the reduction in irregularity in time series, whereas the CLZ and MLZ approaches yielded overlapping values for chaotic time series and time series mixed with Gaussian noise, demonstrating the accuracy of the proposed ELZ algorithm in capturing the irregularity, rather than the complexity, of physiological time series. In addition, the effect of sequence length on the ELZ algorithm was more stable compared with those on the CLZ and MLZ algorithms, especially when the sequence length was longer than 300. A sensitivity analysis of all three LZ algorithms revealed that both the MLZ and the ELZ algorithms could respond to changes in time sequences, whereas the CLZ approach could not. Cardiac interbeat (RR) interval time series from the MIT-BIH database were also evaluated, and the results showed that the ELZ algorithm could accurately measure the inherent irregularity of the RR interval time series, as indicated by lower LZ values yielded from a congestive heart failure group versus those yielded from a normal sinus rhythm group (p < 0.01).
31 CFR 321.12 - Redemption value of securities.
Code of Federal Regulations, 2014 CFR
2014-07-01
... value of each savings security is determined by the terms of its offering and the length of time it has been outstanding. The Bureau of the Fiscal Service determines redemption values for Series A-E bonds, eligible Series EE and I bonds, and savings notes, that should be used in redeeming savings securities. [63...
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
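A toy sketch of the autoregressive ingredient: an AR(p) model fit by least squares and used to forecast a missing point. The single-pass gap handling and the rough fit on the concatenated observed values are simplifications, not the ARLSimpute algorithm.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) coefficients, coef[k-1] multiplying x[t-k]."""
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def impute_missing(x, p=3):
    """Fill each nan with the AR(p) forecast from the preceding p values.
    Rough one-pass variant: the model is fit on the observed values
    concatenated together, ignoring where the gaps sit."""
    x = np.array(x, dtype=float)
    coef = fit_ar(x[~np.isnan(x)], p)
    for t in np.where(np.isnan(x))[0]:
        if t >= p and not np.isnan(x[t - p:t]).any():
            x[t] = coef @ x[t - p:t][::-1]
    return x

series = np.sin(np.arange(60) * 0.3)
series[[20, 40]] = np.nan
print(impute_missing(series)[[20, 40]])  # close to sin(6.0), sin(12.0)
```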
Rainfall disaggregation for urban hydrology: Effects of spatial consistence
NASA Astrophysics Data System (ADS)
Müller, Hannes; Haberlandt, Uwe
2015-04-01
For urban hydrology, rainfall time series with a high temporal resolution are crucial. Observed time series of this kind are very short in most cases, so they cannot be used. In contrast, time series with a lower temporal resolution (daily measurements) exist for much longer periods. The objective is to derive time series with a long duration and a high resolution by disaggregating the time series of the non-recording stations with information from the time series of the recording stations. The multiplicative random cascade model is a well-known disaggregation model for daily time series. For urban hydrology it is often assumed that a day consists of only 1280 minutes in total, as the starting point for the disaggregation process. We introduce a new variant of the cascade model, which works without this assumption and also outperforms the existing approach regarding time series characteristics such as wet and dry spell duration, average intensity, fraction of dry intervals and extreme value representation. However, in both approaches the rainfall time series of different stations are disaggregated without consideration of the surrounding stations. This results in unrealistic spatial patterns of rainfall. We apply a simulated annealing algorithm that has been used successfully for hourly values before. Relative diurnal cycles of the disaggregated time series are resampled to reproduce the spatial dependence of rainfall. To describe spatial dependence we use bivariate characteristics such as probability of occurrence, continuity ratio and coefficient of correlation. The investigation area is a sewage system in northern Germany. We show that the algorithm has the capability to improve spatial dependence. The influence of the chosen disaggregation routine and of the spatial dependence on overflow occurrences and volumes of the sewage system will be analyzed.
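The multiplicative random cascade idea is simple to sketch: each coarse interval's rainfall is split into two finer intervals with random weights that sum to one, repeated until the target resolution is reached. The branching probabilities and the beta distribution below are illustrative placeholders, not the calibrated values from the study.

```python
# Hedged sketch of a multiplicative random cascade disaggregation step.
# P(dry half) and the beta-distributed splitting weight are placeholder
# choices; real applications calibrate these from recording stations.
import numpy as np

rng = np.random.default_rng(42)

def cascade_split(values, p_dry=0.2):
    """Split each interval's rainfall depth into two halves."""
    out = []
    for v in values:
        u = rng.random()
        if u < p_dry / 2:          # all rain in the first half
            w = 1.0
        elif u < p_dry:            # all rain in the second half
            w = 0.0
        else:                      # distribute between both halves
            w = rng.beta(2.0, 2.0)
        out.extend([v * w, v * (1.0 - w)])
    return np.asarray(out)

daily = np.array([12.0, 0.0, 5.5])         # mm/day, toy example
fine = daily
for _ in range(5):                          # 5 splits: 1 day -> 32 intervals
    fine = cascade_split(fine)
print(fine.sum(), daily.sum())              # rainfall mass is conserved
```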
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-08
.... (``Bloomberg''), FactSet Research Systems, Inc. (``FactSet'') and Thomson Reuters (``Reuters''). Real time data... reasonably related to the current value of the underlying index at the time such series are first opened for... to which such series relates at or about the time such series of options is first opened for trading...
Appropriate use of the increment entropy for electrophysiological time series.
Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin
2018-04-01
The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q should take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications.
21 CFR 177.1330 - Ionomeric resins.
Code of Federal Regulations, 2012 CFR
2012-04-01
...% ethanol 72, 96, 120 The results from a series of extraction times demonstrate equilibrium when the net... the above time series, extraction times must be extended until three successive unchanging values for... characterizing the type of food and under the conditions of time and temperature characterizing the conditions of...
21 CFR 177.1330 - Ionomeric resins.
Code of Federal Regulations, 2011 CFR
2011-04-01
...% ethanol 72, 96, 120 The results from a series of extraction times demonstrate equilibrium when the net... the above time series, extraction times must be extended until three successive unchanging values for... characterizing the type of food and under the conditions of time and temperature characterizing the conditions of...
21 CFR 177.1330 - Ionomeric resins.
Code of Federal Regulations, 2013 CFR
2013-04-01
...% ethanol 72, 96, 120 The results from a series of extraction times demonstrate equilibrium when the net... the above time series, extraction times must be extended until three successive unchanging values for... characterizing the type of food and under the conditions of time and temperature characterizing the conditions of...
Code of Federal Regulations, 2011 CFR
2011-10-01
... recipient in the market. Present value means the value at the time of calculation of a future payment, or series of future payments discounted by the time value of money as represented by an interest rate or...
Fast Algorithms for Mining Co-evolving Time Series
2011-09-01
Keogh et al., 2001, 2004] and (b) forecasting, like an autoregressive integrated moving average model (ARIMA) and related methods [Box et al., 1994...computing hardware? We develop models to mine time series with missing values, to extract compact representation from time sequences, to segment the...sequences, and to do forecasting. For large scale data, we propose algorithms for learning time series models, in particular, including Linear Dynamical...
Variations in atmospheric angular momentum and the length of day
NASA Technical Reports Server (NTRS)
Rosen, R. D.; Salstein, D. A.
1982-01-01
Six years of twice-daily global analyses were used to create and study a lengthy time series of high temporal resolution angular momentum values. Changes in these atmospheric values were compared to independently determined changes in the rotation rate of the solid Earth. Finally, the atmospheric data were examined in more detail to determine the time and space scales on which variations in momentum occur within the atmosphere and which regions contribute most to the changes found in the global integral. The data and techniques used to derive the time series of momentum values are described.
NASA Astrophysics Data System (ADS)
Butler, P. G.; Scourse, J. D.; Richardson, C. A.; Wanamaker, A. D., Jr.
2009-04-01
Determinations of the local correction (ΔR) to the globally averaged marine radiocarbon reservoir age are often isolated in space and time, derived from heterogeneous sources and constrained by significant uncertainties. Although time series of ΔR at single sites can be obtained from sediment cores, these are subject to multiple uncertainties related to sedimentation rates, bioturbation and interspecific variations in the source of radiocarbon in the analysed samples. Coral records provide better resolution, but these are available only for tropical locations. It is shown here that it is possible to use the shell of the long-lived bivalve mollusc Arctica islandica as a source of high resolution time series of absolutely-dated marine radiocarbon determinations for the shelf seas surrounding the North Atlantic Ocean. Annual growth increments in the shell can be crossdated and chronologies can be constructed in precise analogy with the use of tree-rings. Because the calendar dates of the samples are known, ΔR can be determined with high precision and accuracy, and because all the samples are from the same species, the time series of ΔR values possesses a high degree of internal consistency. Presented here is a multi-centennial (AD 1593 - AD 1933) time series of 31 ΔR values for a site in the Irish Sea close to the Isle of Man. The mean value of ΔR (-62 14C yrs) does not change significantly during this period, but increased variability is apparent before AD 1750.
31 CFR 315.35 - Payment (redemption).
Code of Federal Regulations, 2013 CFR
2013-07-01
... and Savings Notes. A Series E bond will be paid at any time after two months from issue date at the... interest is $6.90 per $500. (e) Series H. A Series H bond will be redeemed at face value at any time after..., SERIES A, B, C, D, E, F, G, H, J, AND K, AND U.S. SAVINGS NOTES General Provisions for Payment § 315.35...
31 CFR 315.35 - Payment (redemption).
Code of Federal Regulations, 2010 CFR
2010-07-01
... and Savings Notes. A Series E bond will be paid at any time after two months from issue date at the... interest is $6.90 per $500. (e) Series H. A Series H bond will be redeemed at face value at any time after..., SERIES A, B, C, D, E, F, G, H, J, AND K, AND U.S. SAVINGS NOTES General Provisions for Payment § 315.35...
31 CFR 315.35 - Payment (redemption).
Code of Federal Regulations, 2012 CFR
2012-07-01
... and Savings Notes. A Series E bond will be paid at any time after two months from issue date at the... interest is $6.90 per $500. (e) Series H. A Series H bond will be redeemed at face value at any time after..., SERIES A, B, C, D, E, F, G, H, J, AND K, AND U.S. SAVINGS NOTES General Provisions for Payment § 315.35...
31 CFR 315.35 - Payment (redemption).
Code of Federal Regulations, 2011 CFR
2011-07-01
... and Savings Notes. A Series E bond will be paid at any time after two months from issue date at the... interest is $6.90 per $500. (e) Series H. A Series H bond will be redeemed at face value at any time after..., SERIES A, B, C, D, E, F, G, H, J, AND K, AND U.S. SAVINGS NOTES General Provisions for Payment § 315.35...
NASA Astrophysics Data System (ADS)
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, the normal ratio (NR), and the NR weighted with correlations comprise the simple ones, whereas the multilayer perceptron type neural network and the multiple imputation strategy adopted by Markov chain Monte Carlo based on expectation-maximization (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performances. Based on detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally expensive, they seem favorable for the imputation of meteorological time series with respect to different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm to impute missing values before conducting any statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results in meteorological time series.
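Of the simple methods mentioned, the normal ratio (NR) estimator is easy to state: the missing value at the target station is an average of the neighbouring stations' simultaneous observations, each scaled by the ratio of long-term normals. A hedged sketch, with all station names and numbers purely illustrative:

```python
# Hedged sketch of normal ratio (NR) imputation for one missing monthly value.
# target_normal / neighbor_normals are long-term station normals; the numbers
# below are toy values, not data from the study.
import numpy as np

def normal_ratio(neighbor_obs, neighbor_normals, target_normal):
    """Estimate the target station's missing value from n neighbours."""
    neighbor_obs = np.asarray(neighbor_obs, dtype=float)
    neighbor_normals = np.asarray(neighbor_normals, dtype=float)
    return np.mean(target_normal / neighbor_normals * neighbor_obs)

# Three neighbouring stations' precipitation for the missing month (mm),
# their long-term normals, and the target station's normal:
print(normal_ratio([48.0, 55.0, 61.0], [50.0, 60.0, 65.0], 52.0))
```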
A Proposed Data Fusion Architecture for Micro-Zone Analysis and Data Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kevin McCarthy; Milos Manic
Data Fusion requires the ability to combine or “fuse” data from multiple data sources. Time Series Analysis is a data mining technique used to predict future values from a data set based upon past values. Unlike other data mining techniques, however, Time Series places special emphasis on periodicity and how seasonal and other time-based factors tend to affect trends over time. One of the difficulties encountered in developing generic time series techniques is the wide variability of the data sets available for analysis. This presents challenges all the way from the data gathering stage to results presentation. This paper presents an architecture designed and used to facilitate the collection of disparate data sets well suited to Time Series analysis as well as other predictive data mining techniques. Results show this architecture provides a flexible, dynamic framework for the capture and storage of a myriad of dissimilar data sets and can serve as a foundation from which to build a complete data fusion architecture.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General... securities. This curve relates the yield on a security to its time to maturity. Yields at particular points...
Code of Federal Regulations, 2013 CFR
2013-07-01
..., SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General... securities. This curve relates the yield on a security to its time to maturity. Yields at particular points...
Code of Federal Regulations, 2010 CFR
2010-07-01
..., SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General... securities. This curve relates the yield on a security to its time to maturity. Yields at particular points...
Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick
2018-01-01
When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task.
On the Prony series representation of stretched exponential relaxation
NASA Astrophysics Data System (ADS)
Mauro, John C.; Mauro, Yihong Z.
2018-09-01
Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
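The Prony fit described above becomes a linear least-squares problem once the relaxation times are fixed on a grid: approximate exp(-(t/τ)^β) by Σ w_i exp(-t/τ_i) with non-negative weights. A hedged sketch using non-negative least squares; the grid bounds, number of terms, and choice β = 3/7 (one of the critical exponents of the diffusion-trap model) are illustrative:

```python
# Hedged sketch: Prony-series fit of a stretched exponential with fixed,
# log-spaced relaxation times and NNLS weights. Grid choices are illustrative.
import numpy as np
from scipy.optimize import nnls

beta, tau = 3.0 / 7.0, 1.0
t = np.logspace(-3, 3, 400)
target = np.exp(-(t / tau) ** beta)

n_terms = 12
taus = np.logspace(-4, 4, n_terms)                 # fixed relaxation times
A = np.exp(-t[:, None] / taus[None, :])            # design matrix
w, resid = nnls(A, target)                         # non-negative weights

print("max abs error:", np.max(np.abs(A @ w - target)))
```

Adding terms shrinks the error in the tail, but no finite sum of simple exponentials reproduces the infinite initial slope noted in the abstract.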
Aggregate Measures of Watershed Health from Reconstructed ...
Risk-based indices such as reliability, resilience, and vulnerability (R-R-V) have the potential to serve as watershed health assessment tools. Recent research has demonstrated the applicability of such indices for water quality (WQ) constituents such as total suspended solids and nutrients on an individual basis. However, the calculations can become tedious when time-series data for several WQ constituents have to be evaluated individually. Also, comparisons between locations with different sets of constituent data can prove difficult. In this study, data reconstruction using a relevance vector machine algorithm was combined with dimensionality reduction via variational Bayesian noisy principal component analysis to reconstruct and condense sparse multidimensional WQ data sets into a single time series. The methodology allows incorporation of uncertainty in both the reconstruction and dimensionality-reduction steps. The R-R-V values were calculated using the aggregate time series at multiple locations within two Indiana watersheds, which serve as motivating examples. Results showed that uncertainty present in the reconstructed WQ data set propagates to the aggregate time series and subsequently to the aggregate R-R-V values as well. Locations with different WQ constituents and different standards for impairment were successfully combined to provide aggregate measures of R-R-V values. Comparisons with individual constituent R-R-V values showed that...
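For concreteness, Hashimoto-style R-R-V indices on an aggregate time series can be sketched as below. The failure orientation (value above the standard), the use of mean exceedance for vulnerability, and all input numbers are assumptions for illustration, not the study's exact definitions.

```python
# Hedged sketch of reliability-resilience-vulnerability (R-R-V) indices in
# the spirit of Hashimoto et al. (1982); failure = value above the standard.
import numpy as np

def rrv(series, standard):
    fail = series > standard
    reliability = 1.0 - fail.mean()            # fraction of compliant steps
    # resilience: chance a failure step is followed by a satisfactory step
    recoveries = np.sum(fail[:-1] & ~fail[1:])
    resilience = recoveries / max(fail[:-1].sum(), 1)
    # vulnerability: mean exceedance during failure steps
    vulnerability = np.mean(series[fail] - standard) if fail.any() else 0.0
    return reliability, resilience, vulnerability

wq = np.array([8.0, 12.5, 11.0, 7.5, 9.9, 14.2, 10.1, 8.8])  # toy aggregate
print(rrv(wq, standard=10.0))
```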
Cost-benefit analysis of the 55-mph speed limit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forester, T.H.; McNown, R.F.; Singell, L.D.
1984-01-01
This article presents the results of an empirical study which estimates the reduction in fatalities resulting from the imposed 55-mph speed limit. Time series data for the US from 1952 to 1979 are employed in a regression model capturing the relation between fatalities, average speed, variability of speed, and the speed limit. Also discussed are the alternative approaches to valuing human life and the value of time. Provided is a series of benefit-cost ratios based on alternative measures of the benefits and costs of life saving. The paper concludes that the 55-mph speed limit is not cost efficient unless additional time on the highway is valued significantly below levels estimated in the best research on the value of time. 12 references, 1 table.
On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.
Thompson, William Hedley; Fransson, Peter
2016-12-01
Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adheres to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
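The combined transformation the authors advocate is easy to prototype: Fisher-transform the sliding-window correlations, then apply a Box-Cox transformation to the result. Because Box-Cox requires strictly positive inputs, the sketch below shifts the Fisher-transformed series first; the shift convention and the input file name are assumptions, not necessarily the paper's exact procedure.

```python
# Hedged sketch: variance stabilization of a dynamic connectivity time series
# by a Fisher transform followed by a Box-Cox transform.
import numpy as np
from scipy.stats import boxcox

# Hypothetical file of sliding-window correlation values in (-1, 1).
r = np.clip(np.loadtxt("sliding_window_r.txt"), -0.999, 0.999)
z = np.arctanh(r)                      # Fisher transformation

shifted = z - z.min() + 1e-6           # Box-Cox needs strictly positive data
z_bc, lam = boxcox(shifted)            # maximum-likelihood lambda estimate
print("Box-Cox lambda:", lam)
```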
Permutation entropy of finite-length white-noise time series.
Little, Douglas J; Kane, Deb M
2016-08-01
Permutation entropy (PE) is commonly used to discriminate complex structure from white noise in a time series. While the PE of white noise is well understood in the long time-series limit, analysis in the general case is currently lacking. Here the expectation value and variance of white-noise PE are derived as functions of the number of ordinal pattern trials, N, and the embedding dimension, D. It is demonstrated that the probability distribution of the white-noise PE converges to a χ² distribution with D!-1 degrees of freedom as N becomes large. It is further demonstrated that the PE variance for an arbitrary time series can be estimated as the variance of a related metric, the Kullback-Leibler entropy (KLE), allowing the qualitative N≫D! condition to be recast as a quantitative estimate of the N required to achieve a desired PE calculation precision. Application of this theory to statistical inference is demonstrated in the case of an experimentally obtained noise series, where the probability of obtaining the observed PE value was calculated assuming a white-noise time series. Standard statistical inference can be used to draw conclusions whether the white-noise null hypothesis can be accepted or rejected. This methodology can be applied to other null hypotheses, such as discriminating whether two time series are generated from different complex system states.
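One way to operationalize the inference procedure described above is a G-type statistic: under the white-noise null the ordinal-pattern counts are uniform over the D! patterns, and 2N times the Kullback-Leibler divergence from uniformity is asymptotically χ² with D!-1 degrees of freedom. The sketch below uses this standard G-test argument; the paper's exact test statistic may differ in detail.

```python
# Hedged sketch: permutation-entropy-based white-noise test. Under the null,
# 2*N*(ln D! - H) is asymptotically chi-square with D!-1 degrees of freedom
# (a standard G-test identity; the paper's formulation may differ).
import math
import numpy as np
from scipy.stats import chi2

def ordinal_counts(x, D=3):
    """Count occurrences of each ordinal pattern of length D."""
    patterns = {}
    for i in range(len(x) - D + 1):
        key = tuple(np.argsort(x[i:i + D]))
        patterns[key] = patterns.get(key, 0) + 1
    return np.array(list(patterns.values()))

x = np.random.default_rng(1).normal(size=2000)   # toy "experimental" noise
counts = ordinal_counts(x, D=3)
N = counts.sum()
p = counts / N
H = -np.sum(p * np.log(p))                       # permutation entropy (nats)
G = 2.0 * N * (math.log(math.factorial(3)) - H)
print("PE =", H, "p-value =", chi2.sf(G, df=math.factorial(3) - 1))
```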
Pseudo-random bit generator based on lag time series
NASA Astrophysics Data System (ADS)
García-Martínez, M.; Campos-Cantón, E.
2014-12-01
In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we have used a delay in the generation of the time series. When these new series are mapped as x(n) against x(n+1), they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
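A toy version of the lag idea can be sketched as follows: iterate the logistic map, keep two delayed readings of the orbit, and emit a bit by comparing them. This simplification uses a single positive bifurcation parameter, so it only illustrates the delay mechanism, not the authors' exact generator (which also uses negative parameter values).

```python
# Toy sketch of a lag-based pseudo-random bit generator: two delayed readings
# of a logistic-map orbit are compared to emit bits. The single positive
# parameter r and the delay are simplifying assumptions.
import numpy as np

def logistic_orbit(x0, r=3.9999, n=10000, burn=1000):
    """Iterate x -> r*x*(1-x), discarding a burn-in transient."""
    x = x0
    out = np.empty(n)
    for i in range(burn + n):
        x = r * x * (1.0 - x)
        if i >= burn:
            out[i - burn] = x
    return out

orbit = logistic_orbit(0.123)
lag = 37                                  # illustrative delay
bits = (orbit[lag:] > orbit[:-lag]).astype(np.uint8)
print(bits[:32])                          # candidate keystream bits
```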
Complexity analysis of the turbulent environmental fluid flow time series
NASA Astrophysics Data System (ADS)
Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.
2014-02-01
We have used the Kolmogorov complexities, sample and permutation entropies to quantify the randomness degree in river flow time series of two mountain rivers in Bosnia and Herzegovina, representing the turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series from two rivers (the Miljacka and the Bosnia) in the mountain part of their flow and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv Algorithm (LZA) (lower, KLL, and upper, KLU), sample entropy (SE) and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE and PE values of the two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have explored the sensitivity of the considered measures to the length of the time series. In addition, we have divided the period 1926-1990 into three subintervals: (a) 1926-1945, (b) 1946-1965, (c) 1966-1990, and calculated the KLL, KLU, SE, PE values for the various time series in these subintervals. It is found that during the period 1946-1965, there is a decrease in their complexities, and corresponding changes in the SE and PE, in comparison to the period 1926-1990. This complexity loss may be primarily attributed to (i) human interventions, after the Second World War, on these two rivers because of their use for water consumption and (ii) climate change in recent times.
A simple and fast representation space for classifying complex time series
NASA Astrophysics Data System (ADS)
Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.
2017-03-01
In the context of time series analysis considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series from stationary and non-stationary processes with long-range dependences. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has been also proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease.
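The two coordinates of this representation space are cheap to compute: the fraction of turning points (interior local extrema) and the Abbe value, the mean squared successive difference divided by twice the variance. A hedged sketch with toy series:

```python
# Hedged sketch: the number-of-turning-points vs. Abbe value plane used to
# discriminate time series. Toy series stand in for financial/physiological data.
import numpy as np

def turning_point_rate(x):
    d1, d2 = np.diff(x[:-1]), np.diff(x[1:])
    return np.mean((d1 * d2) < 0)        # fraction of interior extrema

def abbe_value(x):
    return 0.5 * np.mean(np.diff(x) ** 2) / np.var(x)

rng = np.random.default_rng(0)
white = rng.normal(size=5000)            # stationary, uncorrelated
walk = np.cumsum(white)                  # non-stationary comparison series
for name, s in [("white noise", white), ("random walk", walk)]:
    print(name, turning_point_rate(s), abbe_value(s))
```

White noise sits near a turning-point rate of 2/3 and an Abbe value of 1, while strongly persistent series fall toward the origin, which is what makes the plane discriminative.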
A cluster merging method for time series microarray with production values.
Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio
2014-09-01
A challenging task in time-course microarray data analysis is to cluster genes meaningfully combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal obtaining groups with highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured in the same time points) and merging them by taking into account the frequency by which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim to find co-expressed genes related to the production and growth of a certain bacteria. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevant measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of time series and the same shape-based algorithm.
García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan
2009-02-01
An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
A comment on measuring the Hurst exponent of financial time series
NASA Astrophysics Data System (ADS)
Couillard, Michel; Davison, Matt
2005-03-01
A fundamental hypothesis of quantitative finance is that stock price variations are independent and can be modeled using Brownian motion. In recent years, it was proposed to use rescaled range analysis and its characteristic value, the Hurst exponent, to test for independence in financial time series. Theoretically, independent time series should be characterized by a Hurst exponent of 1/2. However, finite Brownian motion data sets will always give a value of the Hurst exponent larger than 1/2 and without an appropriate statistical test such a value can mistakenly be interpreted as evidence of long term memory. We obtain a more precise statistical significance test for the Hurst exponent and apply it to real financial data sets. Our empirical analysis shows no long-term memory in some financial returns, suggesting that Brownian motion cannot be rejected as a model for price dynamics.
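A compact version of rescaled range analysis underlying the Hurst exponent: for each window size, compute the range of the mean-adjusted cumulative sum divided by the standard deviation, and estimate H as the log-log slope. The window grid and input file are illustrative choices.

```python
# Hedged sketch of rescaled range (R/S) estimation of the Hurst exponent.
# As the abstract notes, finite samples bias this above 1/2 for Brownian
# motion, so a statistical test is needed before inferring long memory.
import numpy as np

def hurst_rs(x, sizes=(8, 16, 32, 64, 128, 256)):
    rs = []
    for n in sizes:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())          # mean-adjusted partial sums
            s = w.std(ddof=1)
            if s > 0:
                vals.append((z.max() - z.min()) / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

returns = np.diff(np.log(np.loadtxt("prices.txt")))   # hypothetical prices
print("H =", hurst_rs(returns))                       # ~0.5 for independence
```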
A Spatiotemporal Prediction Framework for Air Pollution Based on Deep RNN
NASA Astrophysics Data System (ADS)
Fan, J.; Li, Q.; Hou, J.; Feng, X.; Karimian, H.; Lin, S.
2017-10-01
Time series data in practical applications always contain missing values due to sensor malfunction, network failure, outliers, etc. To handle missing values in time series, and to address the lack of consideration of temporal properties in machine learning models, we propose a spatiotemporal prediction framework based on missing value processing algorithms and a deep recurrent neural network (DRNN). By using a missing tag and missing interval to represent time series patterns, we implement three different missing value fixing algorithms, which are further incorporated into a deep neural network that consists of LSTM (Long Short-term Memory) layers and fully connected layers. Real-world air quality and meteorological datasets (Jingjinji area, China) are used for model training and testing. Deep feedforward neural networks (DFNN) and gradient boosting decision trees (GBDT) are trained as baseline models against the proposed DRNN. The performances of the three missing value fixing algorithms, as well as of the different machine learning models, are evaluated and analysed. Experiments show that the proposed DRNN framework outperforms both DFNN and GBDT, therefore validating the capacity of the proposed framework. Our results also provide useful insights for a better understanding of the different strategies that handle missing values.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-26
... 500 Index option series in the pilot: (1) A time series analysis of open interest; and (2) an analysis... issue's total market share value, which is the share price times the number of shares outstanding. These... other series. Strike price intervals would be set no less than 5 points apart. Consistent with existing...
Mehdizadeh, Sina; Sanjari, Mohammad Ali
2017-11-07
This study aimed to determine the effect of added noise, filtering and time series length on the largest Lyapunov exponent (LyE) value calculated for time series obtained from a passive dynamic walker. The simplest passive dynamic walker model, comprising two massless legs connected by a frictionless hinge joint at the hip, was adopted to generate walking time series. The generated time series was used to construct a state space with an embedding dimension of 3 and a time delay of 100 samples. The LyE was calculated as the exponential rate of divergence of neighboring trajectories of the state space using Rosenstein's algorithm. To determine the effect of noise on LyE values, seven levels of Gaussian white noise (SNR=55-25dB with 5dB steps) were added to the time series. In addition, filtering was performed using a range of cutoff frequencies from 3Hz to 19Hz with 2Hz steps. The LyE was calculated for both noise-free and noisy time series with different lengths of 6, 50, 100 and 150 strides. Results demonstrated a high percentage error in LyE in the presence of noise. Therefore, these observations suggest that Rosenstein's algorithm might not perform well in the presence of added experimental noise. Furthermore, findings indicated that at least 50 walking strides are required to calculate LyE to account for the effect of noise. Finally, observations support that a conservative filtering of the time series with a high cutoff frequency might be more appropriate prior to calculating LyE.
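The core of Rosenstein's algorithm is short enough to sketch: embed the series (dimension 3, delay 100 samples, matching the abstract), find each point's nearest neighbour outside a temporal exclusion window, and fit the slope of the average log-divergence curve. The exclusion window, divergence horizon, and fit range below are illustrative choices, not the study's settings.

```python
# Hedged sketch of Rosenstein's largest-Lyapunov-exponent estimate
# (embedding dimension 3, delay 100 samples as in the abstract; exclusion
# window and fit range are illustrative).
import numpy as np

def rosenstein_lye(x, dim=3, tau=100, exclude=100, k_max=300, fs=1.0):
    m = len(x) - (dim - 1) * tau
    Y = np.column_stack([x[i * tau: i * tau + m] for i in range(dim)])
    usable = m - k_max
    log_div = np.zeros(k_max)
    counts = np.zeros(k_max)
    for i in range(usable):
        d = np.linalg.norm(Y[:usable] - Y[i], axis=1)
        d[max(0, i - exclude): i + exclude + 1] = np.inf  # temporal exclusion
        j = int(np.argmin(d))                             # nearest neighbour
        for k in range(k_max):
            dist = np.linalg.norm(Y[i + k] - Y[j + k])
            if dist > 0:
                log_div[k] += np.log(dist)
                counts[k] += 1
    curve = log_div / np.maximum(counts, 1)
    k = np.arange(k_max)
    slope, _ = np.polyfit(k[:100] / fs, curve[:100], 1)   # initial region fit
    return slope

x = np.loadtxt("walker_series.txt")        # hypothetical walking time series
print("LyE =", rosenstein_lye(x))
```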
Dollar$ & $en$e. Part V: What is your added value?
Wilkinson, I
2001-01-01
In Part I of this series, I introduced the concept of memes (1). Memes are ideas or concepts--the information world equivalent of genes. The goal of this series of articles is to infect you with memes, so that you will assimilate, translate, and express them. No matter what our area of expertise or "-ology," we all are in the information business. Our goal is to be in the wisdom business. In the previous papers in this series, I showed that when we convert raw data into wisdom we are moving along a value chain. Each step in the chain adds a different amount of value to the final product: timely, relevant, accurate, and precise knowledge that can be applied to create the ultimate product in the value chain: wisdom. In Part II of this series, I introduced a set of memes for measuring the cost of adding value (2). In Part III of this series, I presented a new set of memes for measuring the added value of knowledge, i.e., intellectual capital (3). In Part IV of this series, I discussed practical knowledge management tools for measuring the value of people, structural, and customer capital (4). In Part V of this series, I will apply intellectual capital and knowledge management concepts at the individual level, to help answer a fundamental question: What is my added value?
NASA Astrophysics Data System (ADS)
Laib, Mohamed; Telesca, Luciano; Kanevski, Mikhail
2018-03-01
This paper studies the daily connectivity time series of a wind speed-monitoring network using multifractal detrended fluctuation analysis. It investigates the long-range fluctuation and multifractality in the residuals of the connectivity time series. Our findings reveal that the daily connectivity of the correlation-based network is persistent for any correlation threshold. Further, the multifractality degree is higher for larger absolute values of the correlation threshold.
31 CFR 351.6 - When may I redeem my Series EE savings bond?
Code of Federal Regulations, 2014 CFR
2014-07-01
... before January 1, 2003. You may redeem your Series EE savings bond at any time beginning six months after... 31 Money and Finance: Treasury 2 2014-07-01 2014-07-01 false When may I redeem my Series EE... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds...
31 CFR 351.6 - When may I redeem my Series EE savings bond?
Code of Federal Regulations, 2010 CFR
2010-07-01
... before January 1, 2003. You may redeem your Series EE savings bond at any time beginning six months after... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false When may I redeem my Series EE... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds...
Measurement of cardiac output from dynamic pulmonary circulation time CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yee, Seonghwan, E-mail: Seonghwan.Yee@Beaumont.edu; Scalzetti, Ernest M.
Purpose: To introduce a method of estimating cardiac output from the dynamic pulmonary circulation time CT that is primarily used to determine the optimal time window of CT pulmonary angiography (CTPA). Methods: Dynamic pulmonary circulation time CT series, acquired for eight patients, were retrospectively analyzed. The dynamic CT series was acquired, prior to the main CTPA, in cine mode (1 frame/s) for a single slice at the level of the main pulmonary artery covering the cross sections of the ascending aorta (AA) and descending aorta (DA) during the infusion of iodinated contrast. The time series of contrast changes obtained for DA, which is downstream of AA, was assumed to be related to the time series for AA by convolution with a delay function. The delay time constant in the delay function, representing the average time interval between the cross sections of AA and DA, was determined by least-square-error fitting between the convolved AA time series and the DA time series. The cardiac output was then calculated by dividing the volume of the aortic arch between the cross sections of AA and DA (estimated from the single slice CT image) by the average time interval, and multiplying the result by a correction factor. Results: The mean cardiac output value for the six patients was 5.11 l/min (with a standard deviation of 1.57 l/min), which is in good agreement with the literature value; the data for the other two patients were too noisy for processing. Conclusions: The dynamic single-slice pulmonary circulation time CT series can also be used to estimate cardiac output.
Refined composite multiscale weighted-permutation entropy of financial time series
NASA Astrophysics Data System (ADS)
Zhang, Yongping; Shang, Pengjian
2018-04-01
For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimated values show large fluctuations for slight variations of the data locations and differ significantly only for different lengths of time series. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing the RCMWPE results with other methods' results on both synthetic data and financial time series, the RCMWPE method shows not only the advantages inherited from MWPE but also lower sensitivity to data locations, greater stability, and much less dependence on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on the daily price return series from Asian and European stock markets. There are significant differences between Asian markets and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method has been supported by simulations on generated and real data. It could be applied to a variety of fields to quantify the complexity of systems over multiple scales more accurately.
The Value of Interrupted Time-Series Experiments for Community Intervention Research
Biglan, Anthony; Ary, Dennis; Wagenaar, Alexander C.
2015-01-01
Greater use of interrupted time-series experiments is advocated for community intervention research. Time-series designs enable the development of knowledge about the effects of community interventions and policies in circumstances in which randomized controlled trials are too expensive, premature, or simply impractical. The multiple baseline time-series design typically involves two or more communities that are repeatedly assessed, with the intervention introduced into one community at a time. It is particularly well suited to initial evaluations of community interventions and the refinement of those interventions. This paper describes the main features of multiple baseline designs and related repeated-measures time-series experiments, discusses the threats to internal validity in multiple baseline designs, and outlines techniques for statistical analyses of time-series data. Examples are given of the use of multiple baseline designs in evaluating community interventions and policy changes. PMID:11507793
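A single interrupted time series is commonly analyzed with segmented regression: baseline level and trend terms plus level-change and trend-change terms after the interruption. A hedged sketch with toy data (the multiple baseline design described above repeats this across communities with staggered intervention dates):

```python
# Hedged sketch: segmented (interrupted time series) regression with a level
# change and a trend change at intervention time t0. All data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(48)                      # e.g., 48 monthly observations
t0 = 24                                # intervention introduced at month 24
after = (t >= t0).astype(float)
y = (10 + 0.1 * t                      # baseline level and trend
     - 2.0 * after                     # immediate level change
     - 0.05 * after * (t - t0)         # post-intervention trend change
     + rng.normal(0, 0.5, t.size))

X = np.column_stack([np.ones_like(t), t, after, after * (t - t0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("baseline level/trend, level change, trend change:", beta)
```

In practice the residuals of such series are autocorrelated, which is one reason the paper discusses dedicated statistical techniques for time-series data.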
CI2 for creating and comparing confidence-intervals for time-series bivariate plots.
Mullineaux, David R
2017-02-01
Currently no method exists for calculating and comparing the confidence intervals (CI) for the time series of a bivariate plot. The study's aim was to develop 'CI2' as a method to calculate the CI on time-series bivariate plots, and to identify whether the CI of two bivariate time series overlap. The test data were the knee and ankle angles from 10 healthy participants running on a motorised standard treadmill and a non-motorised curved treadmill. For a recommended 10+ trials, CI2 involved calculating 95% confidence ellipses at each time point, then taking as the CI the points on the ellipses that were perpendicular to the direction vector between the means of two adjacent time points. Consecutive pairs of CI created convex quadrilaterals, and any overlap of these quadrilaterals at the same time, or at ±1 frame as a time lag calculated using cross-correlations, indicated where the two time series differed. CI2 showed no group differences between left and right legs on either treadmill, but between treadmills the same legs showed less knee extension on the curved treadmill before heel-strike for all participants. To improve and standardise the use of CI2 it is recommended to remove outlier time series, use 95% confidence ellipses, and scale the ellipse by the fixed Chi-square value as opposed to the sample-size-dependent F-value. For practical use, and to aid in standardisation or future development of CI2, Matlab code is provided. CI2 provides an effective method to quantify the CI of bivariate plots, and to explore the differences in CI between two bivariate time series.
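The ellipse construction recommended above (scaling by the fixed χ² value rather than the sample-size-dependent F value) can be sketched for one time point as follows; the original method is published as Matlab code, so this Python rendering and its toy inputs are illustrative only.

```python
# Hedged sketch: 95% confidence ellipse for one time point of a bivariate
# (e.g., knee-ankle angle) plot, scaled by the fixed chi-square value.
import numpy as np
from scipy.stats import chi2

def confidence_ellipse(points, prob=0.95, n_vertices=100):
    """Return ellipse boundary vertices for a (trials, 2) array of points."""
    mean = points.mean(axis=0)
    cov = np.cov(points.T)
    scale = chi2.ppf(prob, df=2)             # fixed chi-square scaling
    vals, vecs = np.linalg.eigh(cov)         # principal axes of the cloud
    theta = np.linspace(0, 2 * np.pi, n_vertices)
    circle = np.column_stack([np.cos(theta), np.sin(theta)])
    return mean + circle * np.sqrt(vals * scale) @ vecs.T

# Toy: 12 trials of a (knee, ankle) angle pair at one normalized time point.
trials = np.random.default_rng(7).normal([40.0, 10.0], [3.0, 2.0], (12, 2))
print(confidence_ellipse(trials)[:3])        # first few boundary vertices
```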
NASA Astrophysics Data System (ADS)
Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang
2017-10-01
Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, several sensors with wide spatial coverage and high observation frequency are designed to have a large field of view (FOV), which causes variations in the sun-target-sensor geometry in time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectances under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model were used. The semi-empirical model was first fitted using all simulated bidirectional reflectances. Experimental results showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated value. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in the time-series reflectance data were also reduced after the sun-target-sensor normalization process.
Code of Federal Regulations, 2010 CFR
2010-10-01
... eligible for capital assistance. Capital assistance means Federal financial assistance for capital projects... recipient in the market. Present value means the value at the time of calculation of a future payment, or series of future payments discounted by the time value of money as represented by an interest rate or...
31 CFR 351.6 - When may I redeem my Series EE savings bond?
Code of Federal Regulations, 2012 CFR
2012-07-01
... before January 1, 2003. You may redeem your Series EE savings bond at any time beginning six months after... 31 Money and Finance:Treasury 2 2012-07-01 2012-07-01 false When may I redeem my Series EE savings... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds...
31 CFR 351.6 - When may I redeem my Series EE savings bond?
Code of Federal Regulations, 2013 CFR
2013-07-01
... before January 1, 2003. You may redeem your Series EE savings bond at any time beginning six months after... 31 Money and Finance:Treasury 2 2013-07-01 2013-07-01 false When may I redeem my Series EE savings... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds...
31 CFR 351.6 - When may I redeem my Series EE savings bond?
Code of Federal Regulations, 2011 CFR
2011-07-01
... before January 1, 2003. You may redeem your Series EE savings bond at any time beginning six months after... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false When may I redeem my Series EE savings... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds...
Entropy of electromyography time series
NASA Astrophysics Data System (ADS)
Kaufman, Miron; Zurcher, Ulrich; Sung, Paul S.
2007-12-01
A nonlinear analysis based on Renyi entropy is applied to electromyography (EMG) time series from back muscles. The time dependence of the entropy of the EMG signal exhibits a crossover from a subdiffusive regime at short times to a plateau at longer times. We argue that this behavior characterizes complex biological systems. The plateau value of the entropy can be used to differentiate between healthy and low back pain individuals.
Change point detection of the Persian Gulf sea surface temperature
NASA Astrophysics Data System (ADS)
Shirvani, A.
2017-01-01
In this study, the Student's t parametric and Mann-Whitney nonparametric change point models (CPMs) were applied to detect a change point in the annual Persian Gulf sea surface temperature anomalies (PGSSTA) time series for the period 1951-2013. The PGSSTA time series, which were serially correlated, were transformed to produce an uncorrelated, pre-whitened time series. The pre-whitened PGSSTA time series were used as the input of the change point models. Both the parametric and nonparametric CPMs estimated the change point in the PGSSTA in 1992. The PGSSTA follow a normal distribution both before and after 1992, but with a different mean value after the detected change point. The estimated slope of the linear trend in the PGSSTA time series for the period 1951-1992 was negative, whereas it was positive after the detected change point. Unlike for the PGSSTA, the applied CPMs suggested no change point in the Niño3.4SSTA time series.
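The pre-whitening step used above is a one-liner once the lag-1 autocorrelation is estimated; a simple parametric CPM then scans candidate split points with a two-sample t statistic. The sketch below uses a plain maximum-|t| scan on a hypothetical input file; production CPMs control the false-alarm rate with tabulated thresholds rather than the raw maximum.

```python
# Hedged sketch: lag-1 pre-whitening followed by a Student's t change point
# scan (a real CPM controls the false-alarm rate with tabulated thresholds).
import numpy as np
from scipy import stats

def prewhiten(x):
    """Remove lag-1 serial correlation: y_t = x_t - r1 * x_{t-1}."""
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    return x[1:] - r1 * x[:-1]

def t_changepoint(x, min_seg=5):
    """Return the split index maximizing the two-sample |t| statistic."""
    best_t, best_k = 0.0, None
    for k in range(min_seg, len(x) - min_seg):
        t, _ = stats.ttest_ind(x[:k], x[k:])
        if abs(t) > best_t:
            best_t, best_k = abs(t), k
    return best_k, best_t

ssta = np.loadtxt("pg_ssta.txt")           # hypothetical annual anomalies
k, t = t_changepoint(prewhiten(ssta))
print("candidate change point index:", k, "max |t| =", t)
```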
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and ICal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an ICal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
RankExplorer: Visualization of Ranking Changes in Large Time Series Data.
Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin
2012-12-01
For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.
Compression based entropy estimation of heart rate variability on multiple time scales.
Baumert, Mathias; Voss, Andreas; Javorka, Michal
2013-01-01
Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of the RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than the randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
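The compression-based entropy idea can be prototyped with any lossless compressor: coarse-grain the RR series, symbolize it, and take the compressed size per symbol as the entropy estimate. Substituting zlib for a hand-rolled Lempel-Ziv coder, and the quantile binning and file name, are illustrative choices, not the paper's exact pipeline.

```python
# Hedged sketch: compression "entropy" of an RR series over multiple time
# scales; zlib stands in for the Lempel-Ziv coder used in the study.
import zlib
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of the given scale."""
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def compression_entropy(x, n_bins=16):
    """Compressed bytes per symbol of a quantile-binned series."""
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    symbols = np.digitize(x, bins).astype(np.uint8).tobytes()
    return len(zlib.compress(symbols, 9)) / len(symbols)

rr = np.loadtxt("rr_intervals.txt")          # hypothetical RR series
for scale in (1, 2, 4, 8):
    print(scale, compression_entropy(coarse_grain(rr, scale)))
```

Shuffled surrogates of the same series should compress noticeably worse, mirroring the paper's comparison against randomized RR data.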
Mihailovic, D T; Udovičić, V; Krmar, M; Arsenić, I
2014-02-01
We have suggested a complexity-measure-based method for studying the dependence of measured (222)Rn concentration time series on indoor air temperature and humidity. This method is based on the Kolmogorov complexity (KL). We have introduced (i) the sequence of the KL, (ii) the highest value of the KL in the sequence (KLM) and (iii) the KL of the product of time series. The observed loss of KLM complexity of the (222)Rn concentration time series can be attributed to the indoor air humidity, which keeps the radon daughters in the air.
Estimates of Zenith Total Delay trends from GPS reprocessing with autoregressive process
NASA Astrophysics Data System (ADS)
Klos, Anna; Hunegnaw, Addisu; Teferle, Felix Norman; Ebuy Abraha, Kibrom; Ahmed, Furqan; Bogusz, Janusz
2017-04-01
Nowadays, near real-time Zenith Total Delay (ZTD) estimates from Global Positioning System (GPS) observations are routinely assimilated into numerical weather prediction (NWP) models to improve the reliability of forecasts. On the other hand, ZTD time series derived from homogeneously re-processed GPS observations over long periods have the potential to improve our understanding of climate change on various temporal and spatial scales. With such time series only recently reaching somewhat adequate time spans, the application of GPS-derived ZTD estimates to climate monitoring is still to be developed further. In this research, we examine the character of noise in ZTD time series for 1995-2015 in order to estimate more realistic magnitudes of the trend and its uncertainty than would be the case if the stochastic properties were not taken into account. Furthermore, the hourly sampled, homogeneously re-processed and carefully homogenized ZTD time series from over 700 globally distributed stations were classified into five major climate zones. We found that the amplitudes of annual signals reach values of 10-150 mm, with minimum values for the polar and Alpine zones. The amplitudes of daily signals were estimated to be 0-12 mm, with maximum values found for the dry zone. We examined seven different noise models for the residual ZTD time series after modelling all known periodicities. This identified a combination of white noise plus a fourth-order autoregressive process as optimal to match all changes in power of the ZTD data. When the stochastic properties are neglected, i.e., a pure white noise model is employed, only 11 of the 120 trends were insignificant. Using the optimum noise model, more than half of the 120 examined trends became insignificant. We show that the uncertainty of ZTD trends is underestimated by a factor of 3-12 when the stochastic properties of the ZTD time series are ignored, and we conclude that it is essential to properly model the noise characteristics of such time series when interpretations in terms of climate change are to be performed.
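The practical consequence — trend uncertainties grow once autocorrelation is modelled — can be illustrated with the common AR(1) effective-sample-size correction, a simpler stand-in for the white-plus-AR(4) model identified in the study; all file names and the correction formula choice are assumptions for illustration.

```python
# Hedged sketch: trend and AR(1)-corrected trend uncertainty for a ZTD-like
# series. The AR(1) effective-sample-size inflation stands in for the study's
# white + AR(4) noise model.
import numpy as np

def trend_with_ar1_uncertainty(t, y):
    A = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = len(y) * (1 - r1) / (1 + r1)          # effective sample size
    # OLS slope variance, inflated by n / n_eff to account for persistence
    s2 = resid.var(ddof=2) / np.sum((t - t.mean()) ** 2)
    return beta[1], np.sqrt(s2 * len(y) / max(n_eff, 2))

t = np.arange(2000.0)                  # hypothetical hourly epochs
y = np.loadtxt("ztd_station.txt")      # hypothetical ZTD series (mm)
slope, sigma = trend_with_ar1_uncertainty(t, y)
print(slope, "+/-", sigma)
```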
NASA Astrophysics Data System (ADS)
Baldysz, Zofia; Nykiel, Grzegorz; Figurski, Mariusz; Szafranek, Karolina; Kroszczynski, Krzysztof; Araszkiewicz, Andrzej
2015-04-01
In recent years, the GNSS system began to play an increasingly important role in research related to climate monitoring. Based on the GPS system, which has the longest operational record in comparison with other systems, and a common computational strategy applied to all observations, long and homogeneous ZTD (Zenith Tropospheric Delay) time series were derived. This paper presents results of an analysis of 16-year ZTD time series obtained from the EPN (EUREF Permanent Network) reprocessing performed by the Military University of Technology. To maintain the uniformity of the data, the analyzed period (1998-2013) is exactly the same for all stations: observations carried out before 1998 were removed from the time series, and observations processed using a different strategy were recalculated according to the MUT LAC approach. For all 16-year time series (59 stations), Lomb-Scargle periodograms were created to obtain information about the oscillations in the ZTD time series. Because strong annual oscillations disturb oscillations with smaller amplitudes and thus hinder their investigation, Lomb-Scargle periodograms for time series with the annual oscillations removed were also created, in order to verify the presence of semi-annual, ter-annual and quarto-annual oscillations. The linear trend and seasonal components were estimated using LSE (Least Squares Estimation), and the Mann-Kendall trend test was used to confirm the presence of the linear trend identified by the LSE method. In order to verify the effect of the length of the time series on the estimated size of the linear trend, a comparison between two different lengths of ZTD time series was performed. To carry out this comparative analysis, 30 stations which have been operating since 1996 were selected. For these stations two periods were analyzed: a shortened 16-year period (1998-2013) and the full 18-year period (1996-2013). For some stations the additional two years of observations had a significant impact on the size of the linear trend; only for 4 stations was the linear trend exactly the same for both periods. In one case, the trend changed from negative (16-year time series) to positive (18-year time series). The average value of the linear trends for the 16-year time series is 1.5 mm/decade, but their spatial distribution is not uniform. The average value of the linear trends for the 18-year time series is 2.0 mm/decade, with a more uniform spatial distribution and smaller discrepancies.
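For readers who want to reproduce this kind of spectral screening, the sketch below computes a Lomb-Scargle periodogram with SciPy on a synthetic, irregularly sampled ZTD-like series; the periods, amplitudes, and noise level are assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.choice(np.arange(16 * 365), size=4000, replace=False)).astype(float)  # irregular days
ztd = (2400.0
       + 80.0 * np.sin(2 * np.pi * t / 365.25)        # annual term
       + 5.0 * np.sin(4 * np.pi * t / 365.25)         # semi-annual term
       + rng.normal(0.0, 20.0, t.size))               # observation noise, mm

periods = np.linspace(30.0, 800.0, 2000)              # candidate periods in days
power = lombscargle(t, ztd - ztd.mean(), 2 * np.pi / periods, normalize=True)
print("strongest period [days]:", periods[power.argmax()])
```

Removing the fitted annual term and recomputing the periodogram, as the authors do, makes the weaker semi-annual peak easier to see.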
An algorithm of Saxena-Easo on fuzzy time series forecasting
NASA Astrophysics Data System (ADS)
Ramadhani, L. C.; Anggraeni, D.; Kamsyakawuni, A.; Hadi, A. F.
2018-04-01
This paper presents a Saxena-Easo fuzzy time series forecast model to study the prediction of the Indonesian inflation rate in 1970-2016. We use MATLAB software to compute this method. Unlike conventional forecasting methods, the Saxena-Easo fuzzy time series algorithm does not require stationarity, is capable of dealing with time series values that are linguistic, and has the advantage of reducing computation time and simplifying the calculation process. It focuses on percentage change as the universe of discourse, interval partitioning and defuzzification. The results indicate that the forecast data are close to the actual data, with a Root Mean Square Error (RMSE) of 1.5289.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-31
... year used for discounting the NPV of total consumer costs and savings, for the time-series of costs and... does not imply that the time-series of cost and benefits from which the annualized values were..., engineering resources, and other conversion activities over a longer period of time depending on the [[Page...
Miao, Beibei; Dou, Chao; Jin, Xuebo
2016-01-01
The storage volume of an internet data center is a classical time series, and predicting it is of great business value. However, the storage volume series from a data center is always "dirty": it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series for future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the "dirty" data; then cubic spline interpolation and an averaging method are used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and contributes greatly to predicting future volume values. PMID:28090205
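The following is a minimal sketch of the two ingredients named in the abstract: a Kalman filter to suppress "dirty" samples and cubic spline interpolation to reconstruct a smooth main trend. The local-level model, noise variances, and knot spacing are assumptions, not the authors' exact irregular-sampling scheme.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def kalman_1d(z, q=1e-4, r=1.0):
    """Minimal local-level Kalman filter: state = trend level, obs = noisy volume.
    q is process-noise variance, r is observation-noise variance."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p += q                       # predict
        k = p / (p + r)              # Kalman gain
        x += k * (zi - x)            # update with the new observation
        p *= (1 - k)
        out[i] = x
    return out

t = np.arange(500, dtype=float)
volume = 0.05 * t + np.random.standard_normal(500)   # noisy, growing storage volume
volume[::50] += 8.0                                  # a few outliers
level = kalman_1d(volume, q=1e-3, r=4.0)             # filter absorbs spikes slowly
trend = CubicSpline(t[::25], level[::25])(t)         # smooth main trend on sparse knots
```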
Dollar$ & $en$e. Part IV: Measuring the value of people, structural, and customer capital.
Wilkinson, I
2001-01-01
In Part I of this series, I introduced the concept of memes (1). Memes are ideas or concepts, the information world equivalent of genes. The goal of this series of articles is to infect you with my memes, so that you will assimilate, translate, and express them. We discovered that no matter what our area of expertise or "-ology," we all are in the information business. Our goal is to be in the wisdom business. We saw that when we convert raw data into wisdom we are moving along a value chain. Each step in the chain adds a different amount of value to the final product: timely, relevant, accurate, and precise knowledge which can then be applied to create the ultimate product in the value chain: wisdom. In Part II of this series, I infected you with a set of memes for measuring the cost of adding value (2). In Part III of this series, I infected you with a new set of memes for measuring the added value of knowledge, i.e., intellectual capital (3). In Part IV of this series, I will infect you with memes for measuring the value of people, structural, and customer capital.
NASA Astrophysics Data System (ADS)
Ramli, Nazirah; Mutalib, Siti Musleha Ab; Mohamad, Daud
2017-08-01
Fuzzy time series forecasting models have been proposed since 1993 to cater for data in linguistic values. Many improvements and modifications have been made to the model, such as enhancements to the interval length and the types of fuzzy logical relations. However, most of the improved models represent the linguistic terms in the form of discrete fuzzy sets. In this paper, a fuzzy time series model with data in the form of trapezoidal fuzzy numbers and a natural partitioning length approach is introduced for predicting the unemployment rate. Two types of fuzzy relations are used in this study: first order and second order. The proposed model can produce forecasted values under different degrees of confidence.
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
Fuzzy time series have been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partitioning of the universe of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.
Wu, Mingquan; Yang, Chenghai; Song, Xiaoyu; Hoffmann, Wesley Clint; Huang, Wenjiang; Niu, Zheng; Wang, Changyao; Li, Wang; Yu, Bo
2018-01-31
To better understand the progression of cotton root rot within the season, time series monitoring is required. In this study, an improved spatial and temporal data fusion approach (ISTDFA) was employed to combine 250-m Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) and 10-m Sentinel-2 NDVI data to generate a synthetic Sentinel-2 NDVI time series for monitoring this disease. Then, the phenology of healthy cotton and infected cotton was modeled using a logistic model. Finally, several phenology parameters, including the onset day of greenness minimum (OGM), growing season length (GSL), onset of greenness increase (OGI), max NDVI value, and integral area of the phenology curve, were calculated. The results showed that ISTDFA could be used to combine time series MODIS and Sentinel-2 NDVI data with a correlation coefficient of 0.893. The logistic model could describe the phenology curves with R-squared values from 0.791 to 0.969. Moreover, the phenology curve of infected cotton showed a significant difference from that of healthy cotton. The max NDVI value, OGM, GSL and the integral area of the phenology curve for infected cotton were reduced by 0.045, 30 days, 22 days, and 18.54%, respectively, compared with those for healthy cotton.
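A hedged sketch of the logistic phenology fit: `curve_fit` estimates a four-parameter logistic from synthetic NDVI samples, from which parameters such as the inflection (green-up) day can be read. The functional form, starting values, and data here are illustrative assumptions, not the study's calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, c, a, b, d):
    """Four-parameter logistic: baseline d, amplitude c, slope a, inflection day b."""
    return d + c / (1.0 + np.exp(-a * (t - b)))

doy = np.arange(60.0, 200.0, 10.0)                               # NDVI sampling days of year
ndvi = logistic(doy, 0.6, 0.15, 120.0, 0.15) + np.random.normal(0.0, 0.02, doy.size)

popt, _ = curve_fit(logistic, doy, ndvi, p0=(0.5, 0.1, 110.0, 0.1), maxfev=10000)
print("amplitude, slope, inflection (green-up) day, baseline:", np.round(popt, 3))
```

Phenology parameters such as GSL or the curve's integral area would then be derived from the fitted curve rather than from the noisy samples.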
On demand processing of climate station sensor data
NASA Astrophysics Data System (ADS)
Wöllauer, Stephan; Forteva, Spaska; Nauss, Thomas
2015-04-01
Large sets of climate stations with several sensors produce large amounts of fine-grained time series data. To gain value from these data, further processing and aggregation is needed. We present a flexible system to process the raw data on demand. Several aspects need to be considered to process the raw data in a way that scientists can use the processed data conveniently for their specific research interests. First of all, it is not feasible to pre-process the data in advance because of the great variety of ways it can be processed. Therefore, in this approach only the raw measurement data is archived in a database. When a scientist requires some time series, the system processes the required raw data according to the user-defined request. Based on the type of measurement sensor, some data validation is needed, because the climate station sensors may produce erroneous data. Currently, three validation methods are integrated in the on-demand processing system and are optionally selectable, as sketched below. The most basic validation method checks whether measurement values are within a predefined range of possible values. For example, it may be assumed that an air temperature sensor measures values within a range of -40 °C to +60 °C. Values outside of this range are considered measurement errors by this validation method and consequently rejected. Another validation method checks for outliers in the stream of measurement values by defining a maximum change rate between subsequent measurement values. The third validation method compares measurement data to the average values of neighboring stations and rejects measurement values with a high variance. These quality checks are optional, because extreme climatic values in particular may be valid but rejected by some quality check method. Another important task is the preparation of measurement data in terms of time. The observed stations measure values in intervals of minutes to hours. Often scientists need a coarser temporal resolution (days, months, years). Therefore, the interval of time aggregation is selectable for the processing. For some use cases it is desirable that the resulting time series be as continuous as possible. To meet these requirements, the processing system includes techniques to fill gaps of missing values by interpolating measurement values with data from adjacent stations, using available contemporaneous measurements from the respective stations as training datasets. Alongside the processing of sensor values, we created interactive visualization techniques to get a quick overview of a large amount of archived time series data.
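A minimal sketch of the first two validation methods described above (range check and maximum change rate); the thresholds and the error-code example are assumptions, not the system's configured values.

```python
import numpy as np

def validate(values, vmin=-40.0, vmax=60.0, max_step=5.0):
    """Boolean mask of plausible air-temperature readings: a physical range
    check plus a maximum change between subsequent values. Simplification:
    the first good value after a spike is also flagged."""
    in_range = (values >= vmin) & (values <= vmax)
    step_ok = np.abs(np.diff(values, prepend=values[0])) <= max_step
    return in_range & step_ok

temps = np.array([12.1, 12.3, 55.0, 12.4, -99.9, 12.8])   # a spike and an error code
print(validate(temps))   # the spike and the -99.9 error code are rejected
```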
Layered Ensemble Architecture for Time Series Forecasting.
Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin
2016-01-01
Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and the diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets, with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble. This indicates LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 neural network forecasting competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results have revealed clearly that LEA is better than other ensemble and nonensemble methods.
Causal judgments about empirical information in an interrupted time series design.
White, Peter A
2016-07-19
Empirical information available for causal judgment in everyday life tends to take the form of quasi-experimental designs, lacking control groups, more than the form of contingency information that is usually presented in experiments. Stimuli were presented in which values of an outcome variable for a single individual were recorded over six time periods, and an intervention was introduced between the fifth and sixth time periods. Participants judged whether and how much the intervention affected the outcome. With numerical stimulus information, judgments were higher for a pre-intervention profile in which all values were the same than for pre-intervention profiles with any other kind of trend. With graphical stimulus information, judgments were more sensitive to trends, tending to be higher when an increase after the intervention was preceded by a decreasing series than when it was preceded by an increasing series ending on the same value at the fifth time period. It is suggested that a feature-analytic model, in which the salience of different features of information varies between presentation formats, may provide the best prospect of explaining the results.
Estimating trends in atmospheric water vapor and temperature time series over Germany
NASA Astrophysics Data System (ADS)
Alshawaf, Fadwa; Balidakis, Kyriakos; Dick, Galina; Heise, Stefan; Wickert, Jens
2017-08-01
Ground-based GNSS (Global Navigation Satellite System) has been used efficiently since the 1990s as a meteorological observing system. Recently, scientists have used GNSS time series of precipitable water vapor (PWV) for climate research. In this work, we compare the temporal trends estimated from GNSS time series with those estimated from European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-Interim) data and meteorological measurements. We aim to evaluate climate evolution in Germany by monitoring different atmospheric variables such as temperature and PWV. PWV time series were obtained by three methods: (1) estimated from ground-based GNSS observations using the method of precise point positioning, (2) inferred from ERA-Interim reanalysis data, and (3) determined based on daily in situ measurements of temperature and relative humidity. The other relevant atmospheric parameters are available from surface measurements of meteorological stations or derived from ERA-Interim. The trends are estimated using two methods: the first applies least squares to deseasonalized time series and the second uses the Theil-Sen estimator. The trends estimated at 113 GNSS sites, with 10 to 19 years of temporal coverage, vary between -1.5 and 2.3 mm decade-1 with standard deviations below 0.25 mm decade-1. These results were validated by estimating the trends from ERA-Interim data over the same time windows, which show similar values. The values of the trend depend on the length and the variations of the time series. Therefore, to give a mean value of the PWV trend over Germany, we estimated the trends using ERA-Interim data spanning from 1991 to 2016 (26 years) at 227 synoptic stations over Germany. The ERA-Interim data show positive PWV trends of 0.33 ± 0.06 mm decade-1 with standard errors below 0.03 mm decade-1. The increment in PWV varies between 4.5 and 6.5% per degree Celsius rise in temperature, which is comparable to the theoretical rate of the Clausius-Clapeyron equation.
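The Theil-Sen estimator mentioned above is available directly in SciPy; the sketch below applies it to a synthetic deseasonalized PWV series. The trend magnitude, noise level, and time span are illustrative assumptions.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(1)
years = np.arange(1991, 2017) + 0.5                                   # annual means, 1991-2016
pwv = 20.0 + 0.033 * (years - years[0]) + rng.normal(0.0, 0.8, years.size)   # mm

slope, intercept, lo, hi = theilslopes(pwv, years)                    # robust median-of-slopes fit
print(f"trend: {10 * slope:+.2f} mm/decade (95% CI {10 * lo:+.2f} to {10 * hi:+.2f})")
```

Because it takes the median of pairwise slopes, the Theil-Sen fit is far less sensitive to outlying years than ordinary least squares.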
Stress Corrosion Cracking Study of Aluminum Alloys Using Electrochemical Noise Analysis
NASA Astrophysics Data System (ADS)
Rathod, R. C.; Sapate, S. G.; Raman, R.; Rathod, W. S.
2013-12-01
Stress corrosion cracking studies of aluminum alloys AA2219, AA8090, and AA5456 in heat-treated and non-heat-treated conditions were carried out using the electrochemical noise technique with various applied stresses. Electrochemical noise time series data (corrosion potential vs. time) were obtained for the stressed tensile specimens in 3.5% NaCl aqueous solution at room temperature (27 °C). The values of the drop in corrosion potential, total corrosion potential, mean corrosion potential, and hydrogen overpotential were evaluated from the corrosion potential versus time series data. The electrochemical noise time series data were further analyzed with the rescaled range (R/S) analysis proposed by Hurst to obtain the Hurst exponent. According to the results, higher values of the Hurst exponent with increased applied stresses indicate greater susceptibility to stress corrosion cracking, as confirmed in the case of alloys AA2219 and AA8090.
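A compact sketch of Hurst's rescaled-range (R/S) analysis as it might be applied to a potential time series; window sizes and regression details differ across implementations, so treat this as one plausible variant rather than the authors' exact procedure.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Classical rescaled-range estimate of the Hurst exponent: slope of
    log(mean R/S) against log(window size)."""
    n = len(x)
    sizes = np.unique(np.floor(n / 2 ** np.arange(int(np.log2(n / min_chunk)))).astype(int))
    log_s, log_rs = [], []
    for s in sizes:
        chunks = x[: (n // s) * s].reshape(-1, s)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)          # range of cumulative deviations
        sd = chunks.std(axis=1)
        valid = sd > 0
        if valid.any():
            log_rs.append(np.log((r[valid] / sd[valid]).mean()))
            log_s.append(np.log(s))
    return np.polyfit(log_s, log_rs, 1)[0]

print(hurst_rs(np.random.standard_normal(4096)))   # white noise gives H near 0.5
```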
Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory
NASA Astrophysics Data System (ADS)
Wang, Na; Li, Dong; Wang, Qiwen
2012-12-01
The visibility graph approach and complex network theory provide a new insight into time series analysis. The inheritance of properties of the original time series by the visibility graph was further explored in this paper. We found that the degree distributions of visibility graphs extracted from Pseudo Brownian Motion series obtained by the Frequency Domain algorithm exhibit exponential behaviors, in which the exponential exponent is a binomial function of the Hurst index inherited from the time series. Our simulations showed that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function are different for different series and that the visibility graph inherits some important features of the original time series. Further, we convert some quarterly macroeconomic series, including the growth rates of value-added of the three industry series and the growth rates of the Gross Domestic Product series of China, to graphs by the visibility algorithm and explore the topological properties of the graphs associated with the four macroeconomic series, namely, the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis, we find that the degree distributions of the networks associated with the growth rates of value-added of the three industry series are almost exponential, while the degree distributions of the networks associated with the growth rates of the GDP series are scale free. We also discuss the assortativity and disassortativity of the four associated networks as they relate to the evolutionary process of the original macroeconomic series. All the constructed networks have "small-world" features. The community structures of the associated networks suggest dynamic changes in the original macroeconomic series. We also detected relationships among government policy changes, the community structures of the associated networks and macroeconomic dynamics, finding that government policies in China strongly influence the dynamics of GDP and the adjustment of the three industries. The work in our paper provides a new way to understand the dynamics of economic development.
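To make the visibility mapping concrete, here is a brute-force natural visibility graph construction (checking every pair of samples); the random input stands in for a macroeconomic growth-rate series, and efficient divide-and-conquer implementations exist for longer data.

```python
import numpy as np

def nvg_edges(x):
    """Natural visibility graph (brute force): samples a and b are linked if all
    intermediate samples lie strictly below the line joining (a, x[a]) and (b, x[b]).
    Adjacent samples always see each other."""
    n, edges = len(x), []
    for a in range(n - 1):
        for b in range(a + 1, n):
            k = np.arange(a + 1, b)
            line = x[a] + (x[b] - x[a]) * (k - a) / (b - a)
            if np.all(x[k] < line):
                edges.append((a, b))
    return edges

series = np.random.standard_normal(200)     # stand-in for a quarterly growth-rate series
deg = np.bincount(np.array(nvg_edges(series)).ravel(), minlength=len(series))
print(deg.mean(), deg.max())
```

Network measures such as clustering, path length, or community structure can then be computed on the edge list with any graph library.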
Dollar$ & $en$e. Part VI: Knowledge management: the state of the art.
Wilkinson, I
2001-01-01
In Part I of this series, I introduced the concept of memes (1). Memes are ideas or concepts--the information world equivalent of genes. The goal of this series of articles is to infect you with memes so you will assimilate, translate, and express them. No matter what our area of expertise or "-ology," we all are in the information business. Our goal is to be in the wisdom business. In the previous articles in this series, I showed that when we convert raw data into wisdom, we are moving along a value chain. Each step in the chain adds a different amount of value to the final product: timely, relevant, accurate, and precise knowledge that then can be applied to create the ultimate product in the value chain--wisdom. In part II of this series, I introduced a set of memes for measuring the cost of adding value (2). In part III of this series, I presented a new set of memes for measuring the added value of knowledge, i.e., intellectual capital (3). In part IV of this series, I discussed practical knowledge management tools for measuring the value of people, structural, and customer capital (4). In part V of this series, I applied intellectual capital and knowledge management concepts at the individual level, to help answer a fundamental question: what is my added value (5)? In the final part of this series, I will review the state of intellectual capital and knowledge management development to date and outline the direction of current knowledge management initiatives and research projects.
Association mining of dependency between time series
NASA Astrophysics Data System (ADS)
Hafez, Alaaeldin
2001-03-01
Time series analysis is considered a crucial component of strategic control over a broad variety of disciplines in business, science and engineering. Time series data is a sequence of observations collected over intervals of time. Each time series describes a phenomenon as a function of time. Analysis of time series data includes discovering trends (or patterns) in a time series sequence. In the last few years, data mining has emerged and been recognized as a new technology for data analysis. Data mining is the process of discovering potentially valuable patterns, associations, trends, sequences and dependencies in data. Data mining techniques can discover information that many traditional business analysis and statistical techniques fail to deliver. In this paper, we adapt and extend data mining techniques to analyze time series data. By using data mining techniques, maximal frequent patterns are discovered and used in predicting future sequences or trends, where trends describe the behavior of a sequence. In order to include different types of time series (e.g. irregular and non-systematic), we consider past frequent patterns of the same time sequences (local patterns) and of other dependent time sequences (global patterns). We use the word 'dependent' instead of the word 'similar' to emphasize real-life time series where two time series sequences could be completely different (in values, shapes, etc.) but still react to the same conditions in a dependent way. In this paper, we propose the Dependence Mining Technique, which can be used in predicting time series sequences. The proposed technique consists of three phases: (a) for all time series sequences, generate their trend sequences, (b) discover maximal frequent trend patterns and generate pattern vectors (to keep information on frequent trend patterns), and (c) use trend pattern vectors to predict future time series sequences.
Complexity multiscale asynchrony measure and behavior for interacting financial dynamics
NASA Astrophysics Data System (ADS)
Yang, Ge; Wang, Jun; Niu, Hongli
2016-08-01
A stochastic financial price process is proposed and investigated by means of the finite-range multitype contact dynamical system, in an attempt to study the nonlinear behaviors of real asset markets. The virus-spreading process in a finite-range multitype system is used to imitate the interacting behaviors of diverse investment attitudes in a financial market, and empirical research on the descriptive statistics and autocorrelation behaviors of the return time series is performed for different values of the propagation rates. Then multiscale entropy analysis is adopted to study several different shuffled return series, including the original return series, the corresponding reversal series, the random shuffled series, the volatility shuffled series and the Zipf-type shuffled series. Furthermore, we propose and compare the multiscale cross-sample entropy and its modified algorithm called composite multiscale cross-sample entropy. We apply them to study the asynchrony of pairs of time series under different time scales.
Darius M. Adams; Kristine C. Jackson; Richard W. Haynes
1988-01-01
This report provides 35 years of information on softwood timber production and consumption in the United States and Canada. Included are regional time series on production and prices of softwood lumber, plywood, residues, and pulpwood; timber harvest volumes and values; production costs; and recovery factors.
NASA Astrophysics Data System (ADS)
Fink, G.; Koch, M.
2010-12-01
An important aspect of water resources and hydrological engineering is the assessment of hydrological risk due to the occurrence of extreme events, e.g. droughts or floods. When dealing with the latter - as is the focus here - the classical methods of flood frequency analysis (FFA) are usually used for the proper dimensioning of a hydraulic structure, for the purpose of bringing the flood risk down to an acceptable level. FFA is based on extreme value statistics theory. Despite the progress of methods in this scientific branch, the development, selection, and fitting of an appropriate distribution function still remains a challenge, particularly when certain underlying assumptions of the theory are not met in real applications. This is, for example, the case when the stationarity condition for a random flood time series is no longer satisfied, as could be the situation when long-term hydrological impacts of future climate change are to be considered. The objective here is to verify the applicability of classical (stationary) FFA to predicted flood time series in the Fulda catchment in central Germany, as they may occur in the wake of climate change during the 21st century. These discharge time series at the outlet of the Fulda basin have been simulated with a distributed hydrological model (SWAT) that is forced by predicted climate variables of a regional climate model for Germany (REMO). From the simulated future daily time series, annual maximum (extreme) values are computed and analyzed for the purpose of risk evaluation. Although the estimated 21st-century extreme flood series of the Fulda river turn out to be only mildly non-stationary, at first sight alleviating the need for further action and concern, a more detailed analysis of the risk, as quantified, for example, by the return period, shows non-negligible differences in the calculated risk levels. This could be verified by employing a new method, the so-called flood series maximum analysis (FSMA) method, which consists of the stochastic simulation of numerous trajectories of a stochastic process with a given GEV distribution over a certain length of time (larger than the desired return period). Then the maximum value of each trajectory is computed, all of which are then used to determine the empirical distribution of this maximum series. Through graphical inversion of this distribution function, the size of the design flood for a given risk (quantile) and given life duration can be inferred. The results of numerous simulations show that for stationary flood series, the new FSMA method results, as expected, in nearly identical risk values to the classical FFA approach. However, once the flood time series becomes slightly non-stationary - for the reasons discussed - and regardless of whether the trend is increasing or decreasing, large differences in the computed risk values for a given design flood occur. In other words, for the same risk, the new FSMA method would lead to different values of the design flood for a hydraulic structure than the classical FFA method. This, in turn, could lead to some cost savings in the realization of a hydraulic project.
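A sketch of the FSMA idea as described: simulate many GEV trajectories over the design life, take each trajectory's maximum, and invert the empirical distribution at the accepted risk. The GEV parameters, design life, and risk level below are assumptions, not values from the Fulda study.

```python
import numpy as np
from scipy.stats import genextreme

shape, loc, scale = -0.1, 300.0, 80.0           # assumed GEV parameters for annual maxima (m^3/s)
life, n_traj = 50, 100_000                       # 50-year design life, number of trajectories

# maximum annual flood seen in each simulated design life
traj_max = genextreme.rvs(shape, loc, scale, size=(n_traj, life)).max(axis=1)

risk = 0.10                                      # accepted failure probability over the life
design_flood = np.quantile(traj_max, 1.0 - risk) # empirical inversion of the maximum series
print(f"design flood for {risk:.0%} lifetime risk: {design_flood:.0f} m^3/s")
```

For a non-stationary series, the GEV parameters would themselves be functions of time in the simulation step, which is where the FSMA and classical FFA results diverge.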
76 FR 20571 - Bidding by Affiliates in Open Seasons for Pipeline Capacity
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-13
... time value of money to determine the present value of a time series of discounted cash flows.\\6\\ The... addressing yesterday's concerns may not address tomorrow's concerns. Over time, however, experience in...:30 a.m. to 5 p.m. Eastern time) at 888 First Street, NE., Room 2A, Washington, DC 20426. 27. From...
Tani, Yuji; Ogasawara, Katsuhiko
2012-01-01
This study aimed to contribute to the management of a healthcare organization by providing management information using time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. In this study, we examined the performance of a prediction method using the auto-regressive integrated moving-average (ARIMA) model, using business data obtained at the Radiology Department. We built the model using the number of radiological examinations over the past 9 years and predicted the number of radiological examinations in the final year. Then, we compared the actual values with the forecast values. We were able to establish that the prediction method was simple and cost-effective, using free software. In addition, we were able to build a simple model by removing trend components from the data in pre-processing. The difference between predicted and actual values was 10%; however, it was more important to understand the chronological change than the individual time-series values. Furthermore, our method is highly versatile and adaptable to general time-series data. Therefore, different healthcare organizations can use our method for the analysis and forecasting of their business data.
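A minimal statsmodels sketch in the same spirit: fit an ARIMA model to 9 years of monthly counts and forecast a held-out final year. The synthetic data, model order, and seasonal terms are assumptions; the study details its own pre-processing and software.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
months = np.arange(120)                                       # 10 years of monthly data
idx = pd.date_range("2003-01-01", periods=120, freq="MS")
counts = 1000 + 2 * months + 50 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 20, 120)
y = pd.Series(counts, index=idx)

fit = ARIMA(y.iloc[:108], order=(1, 1, 1), seasonal_order=(1, 0, 0, 12)).fit()
forecast = fit.forecast(steps=12)                             # the held-out final year
mape = float(np.mean(np.abs((y.iloc[108:].values - forecast.values) / y.iloc[108:].values)))
print(f"MAPE over the held-out year: {mape:.1%}")
```

The differencing term (the middle "1" in the order) plays the role of the trend removal described in the abstract.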
Fractal dynamics of heartbeat time series of young persons with metabolic syndrome
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, A.; Alonso-Martínez, A.; Ramírez-Hernández, L.; Martínez-Hernández, G.
2012-10-01
In recent years, many physiological systems have been quantitatively characterized using fractal analysis. We applied it to study the heart rate variability of young subjects with metabolic syndrome (MS); we examined the RR time series (the time between two R waves in the ECG) with the detrended fluctuation analysis (DFA) method, Higuchi's fractal dimension method and multifractal analysis to detect the possible presence of heart problems. The results show that although these young persons have MS, the majority do not present alterations in heart dynamics. However, there were cases where the fractal parameter values differed significantly from those of healthy people.
Wild, Beate; Stadnitski, Tatjana; Wesche, Daniela; Stroe-Kunold, Esther; Schultz, Jobst-Hendrik; Rudofsky, Gottfried; Maser-Gluth, Christiane; Herzog, Wolfgang; Friederich, Hans-Christoph
2016-04-01
The aim of the study was to investigate the characteristics of the awakening salivary cortisol in patients with anorexia nervosa (AN) using a time series design. We included ten AN inpatients, six with a very low BMI (high symptom severity, HSS group) and four patients with less severe symptoms (low symptom severity, LSS group). Patients collected salivary cortisol daily upon awakening. The number of collected saliva samples varied across patients between n=65 and n=229 (due to the different lengths of their inpatient stay). In addition, before retiring, the patients answered questions daily on the handheld regarding disorder-related psychosocial variables. The analysis of cortisol and diary data was conducted by using a time series approach. Time series showed that the awakening cortisol of the AN patients was elevated as compared to a control group. Cortisol measurements of patients with LSS essentially fluctuated in a stationary manner around a constant mean. The series of patients with HSS were generally less stable; four HSS patients showed a non-stationary cortisol awakening series. Antipsychotic medication did not change awakening cortisol in a specific way. The lagged dependencies between cortisol and depressive feelings became significant for four patients. Here, higher cortisol values were temporally associated with higher values of depressive feelings. Upon awakening, the cortisol of all AN patients was in the standard range but elevated as compared to healthy controls. Patients with HSS appeared to show less stable awakening cortisol time series compared to patients with LSS. Copyright © 2016 Elsevier B.V. All rights reserved.
31 CFR 316.10 - Payment or redemption.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., SERIES E § 316.10 Payment or redemption. (a) General. A Series E bond may be redeemed in accordance with... tables. However, the redemption value of a bond in that denomination will be equal to ten times the.... (b) Federal Reserve Banks and Branches and United States Treasury. Owners of Series E bonds may...
31 CFR 316.10 - Payment or redemption.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., SERIES E § 316.10 Payment or redemption. (a) General. A Series E bond may be redeemed in accordance with... tables. However, the redemption value of a bond in that denomination will be equal to ten times the.... (b) Federal Reserve Banks and Branches and United States Treasury. Owners of Series E bonds may...
31 CFR 316.10 - Payment or redemption.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., SERIES E § 316.10 Payment or redemption. (a) General. A Series E bond may be redeemed in accordance with... tables. However, the redemption value of a bond in that denomination will be equal to ten times the.... (b) Federal Reserve Banks and Branches and United States Treasury. Owners of Series E bonds may...
31 CFR 316.10 - Payment or redemption.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., SERIES E § 316.10 Payment or redemption. (a) General. A Series E bond may be redeemed in accordance with... tables. However, the redemption value of a bond in that denomination will be equal to ten times the.... (b) Federal Reserve Banks and Branches and United States Treasury. Owners of Series E bonds may...
31 CFR 316.10 - Payment or redemption.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., SERIES E § 316.10 Payment or redemption. (a) General. A Series E bond may be redeemed in accordance with... tables. However, the redemption value of a bond in that denomination will be equal to ten times the.... (b) Federal Reserve Banks and Branches and United States Treasury. Owners of Series E bonds may...
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Vousdoukas, Michalis; Voukouvalas, Evangelos; Sartini, Ludovica; Feyen, Luc; Besio, Giovanni; Alfieri, Lorenzo
2016-09-01
Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
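A minimal sketch of the transformation step in the TS methodology described above: normalize by a running mean and running standard deviation (here via pandas rolling windows), apply stationary EVA to the result, and back-transform return levels. The window length and the synthetic wave-height series are assumptions.

```python
import numpy as np
import pandas as pd

def ts_transform(x, window):
    """Transformed-stationary sketch: strip a slowly varying running mean and
    running standard deviation so stationary EVA applies to the result."""
    s = pd.Series(x)
    mu = s.rolling(window, center=True, min_periods=window // 2).mean()
    sigma = s.rolling(window, center=True, min_periods=window // 2).std()
    return ((s - mu) / sigma).to_numpy(), mu.to_numpy(), sigma.to_numpy()

t = np.arange(40 * 365)                                       # 40 years of daily data
hs = (1.5 + 0.00005 * t) * np.random.weibull(2.0, t.size)     # wave-height-like series with trend
stationary, mu, sigma = ts_transform(hs, window=5 * 365)
# fit a stationary GEV/GPD to `stationary`; a return level z maps back as mu + sigma * z
```

Keeping the shape parameter constant and letting only mu and sigma vary is exactly what makes the reverse transformation straightforward.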
An operational definition of a statistically meaningful trend.
Bryhn, Andreas C; Dimberg, Peter H
2011-04-28
Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data points is large, a trend may be statistically significant even if the data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends referred to as statistical meaningfulness, which is a stricter quality criterion than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r² and p values are calculated from regressions of interval mean values on time. If r² ≥ 0.65 at p ≤ 0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may increase the operationality of the test. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
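A sketch of the criterion as described: split the series into intervals, regress interval means on interval mid-times, and apply the r² ≥ 0.65, p ≤ 0.05 rule. The paper considers the test over interval divisions generally; this helper checks a single, assumed interval count.

```python
import numpy as np
from scipy.stats import linregress

def statistically_meaningful(t, y, n_intervals=5, r2_min=0.65, p_max=0.05):
    """Regress interval means on interval mid-times and apply the
    r^2 >= 0.65 at p <= 0.05 criterion for one interval count."""
    groups = np.array_split(np.argsort(t), n_intervals)
    t_mean = np.array([t[g].mean() for g in groups])
    y_mean = np.array([y[g].mean() for g in groups])
    res = linregress(t_mean, y_mean)
    return res.rvalue ** 2 >= r2_min and res.pvalue <= p_max

t = np.arange(200.0)
y = 0.02 * t + np.random.standard_normal(t.size)   # weak trend buried in scatter
print(statistically_meaningful(t, y))
```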
Complex network approach to fractional time series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manshour, Pouya
In order to extract correlation information inherited in stochastic time series, the visibility graph algorithm has been recently proposed, by which a time series can be mapped onto a complex network. We demonstrate that the visibility algorithm is not an appropriate one to study the correlation aspects of a time series. We then employ the horizontal visibility algorithm, as a much simpler one, to map fractional processes onto complex networks. The degree distributions are shown to have parabolic exponential forms with a Hurst-dependent fitting parameter. Further, we take into account other topological properties such as the maximum eigenvalue of the adjacency matrix and the degree assortativity, and show that such topological quantities can also be used to predict the Hurst exponent, with an exception for anti-persistent fractional Gaussian noises. To solve this problem, we take into account the Spearman correlation coefficient between nodes' degrees and their corresponding data values in the original time series.
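Since the horizontal visibility algorithm is central here, a compact sketch: two samples are linked when every sample strictly between them is lower than both. For i.i.d. series the mean degree tends to 4 and the degree distribution decays like exp(-k ln(3/2)); deviations from that exponent signal correlations such as those of fractional processes.

```python
import numpy as np

def hvg_edges(x):
    """Horizontal visibility graph: samples i and j are linked if every sample
    strictly between them is lower than both x[i] and x[j]."""
    edges = []
    for i in range(len(x) - 1):
        bar = -np.inf                      # running max of the samples between i and j
        for j in range(i + 1, len(x)):
            if bar < min(x[i], x[j]):
                edges.append((i, j))
            bar = max(bar, x[j])
            if bar >= x[i]:                # nothing further right can see i
                break
    return edges

x = np.random.standard_normal(1000)
deg = np.bincount(np.array(hvg_edges(x)).ravel(), minlength=len(x))
print(deg.mean())   # close to 4 for long i.i.d. series
```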
Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error
NASA Astrophysics Data System (ADS)
Jung, Insung; Koo, Lockjo; Wang, Gi-Nam
2008-11-01
The objective of this paper was to design a human bio-signal data prediction system that decreases prediction error, using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm have been widely applied in industry for time series prediction systems. However, a residual error remains between the real value and the prediction result. Therefore, we designed a two-state neural network model to compensate for the residual error, which may be used in the prevention of sudden death and metabolic syndrome diseases such as hypertension and obesity. We found that most of the simulation cases were handled satisfactorily by the two-states-mapping-based time series prediction model. In particular, predictions for small-sample-size time series were more accurate than those of the standard MLP model.
Weighted combination of LOD values split into frequency windows
NASA Astrophysics Data System (ADS)
Fernandez, L. I.; Gambis, D.; Arias, E. F.
In this analysis a combined one-day time series of LOD (length-of-day) estimates is presented. We use individual data series derived by 7 GPS and 3 SLR analysis centers, which routinely contributed to the IERS database over a recent 27-month period (Jul 1996 - Oct 1998). The result is compared to the multi-technique combined series C04 produced by the Central Bureau of the IERS, which is commonly used as a reference for the study of Earth rotation variations. The Frequency Windows Combined Series procedure yields a time series which is close to C04 but shows an amplitude difference that might explain the evident periodic behavior present in the differences between these two combined series. This method could be useful for generating a new time series to be used as a reference in studies of high-frequency variations of the Earth's rotation.
Comparative Fatigue Lives of Rubber and PVC Wiper Cylindrical Coatings
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.; Savage, Michael
2002-01-01
Three coating materials for rotating cylindrical-coated wiping rollers were fatigue tested in 2 Intaglio printing presses. The coatings were a hard, cross-linked, plasticized PVC thermoset (P-series); a plasticized PVC (A-series); and a hard, nitryl rubber (R-series). Both 2- and 3-parameter Weibull analyses as well as a cost-benefit analysis were performed. The mean life of the R-series coating is 24 and 9 times longer than those of the P- and A-series coatings, respectively. Both the cost and the replacement rate for the R-series coating were significantly less than those for the P- and A-series coatings. At a very high probability of survival, the R-series coating lasts approximately 2 and 6 times as long as the P- and A-series, respectively, before the first failure occurs. Where all coatings are run to failure, using the mean (life) time between removal (MTBR) for each coating to calculate the number of replacements and costs provides qualitatively similar results to those of a Weibull analysis.
Neural network versus classical time series forecasting models
NASA Astrophysics Data System (ADS)
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data pre-processing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when a Box-Cox transformation was used as data pre-processing.
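A small sketch of the Box-Cox pre-processing step named above, using SciPy; the synthetic price series is an assumption, and the fitted λ would feed whichever forecaster is trained downstream.

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(4)
price = 1200 * np.exp(np.cumsum(rng.normal(2e-4, 0.01, 1000)))   # positive, trending series

transformed, lam = boxcox(price)      # fits the variance-stabilizing exponent lambda
# ... train the ANN or SARIMA on `transformed` and forecast, then invert:
recovered = inv_boxcox(transformed, lam)
print(round(lam, 3), np.allclose(recovered, price))
```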
77 FR 31964 - Energy Conservation Program: Energy Conservation Standards for Residential Dishwashers
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-30
... year used for discounting the NPV of total consumer costs and savings, for the time-series of costs and... does not imply that the time-series of cost and benefits from which the annualized values were... each TSL, DOE has included tables that present a summary of the results of DOE's quantitative analysis...
Metric projection for dynamic multiplex networks.
Jurman, Giuseppe
2016-08-01
Evolving multiplex networks are a powerful model for representing the dynamics along time of different phenomena, such as social networks, power grids, and biological pathways. However, exploring the structure of a multiplex network time series is still an open problem. Here we propose a two-step strategy to tackle this problem based on the concept of distance (metric) between networks. Given a multiplex graph, first a network of networks is built for each time step, and then a real-valued time series is obtained from the sequence of (simple) networks by evaluating each network's distance from the first element of the sequence. The effectiveness of this approach in detecting the changes occurring along the original time series is shown first on a synthetic example, and then on the Gulf dataset of political events.
NASA Technical Reports Server (NTRS)
Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.
2017-01-01
A data driven, near photospheric, 3-D, non-force-free magnetohydrodynamic model predicts time series of the complete current density, and the resistive heating rate Q at the photosphere in neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of the magnetic field B observed by the Helioseismic and Magnetic Imager on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series for B in every AR pixel. Errors in B due to these periods can be significant. The number of occurrences N(q) of values of Q ≥ q for each AR time series is found to be a scale-invariant power law distribution, N(Q) ∝ Q^(-s), above an AR-dependent threshold value of Q, where 0.3952 ≤ s ≤ 0.5298 with mean and standard deviation of 0.4678 and 0.0454, indicating little variation between ARs. Observations show that the number of occurrences N(E) of coronal flares with a total energy released ≥ E obeys the same type of distribution, N(E) ∝ E^(-S), above an AR-dependent threshold value of E, with 0.38 ≲ S ≲ 0.60, also with little variation among ARs. Within error margins the ranges of s and S are nearly identical. This strong similarity between N(Q) and N(E) suggests a fundamental connection between the process that drives coronal flares and the process that drives photospheric NLR heating rates in ARs. In addition, the results suggest it is plausible that spikes in Q, several orders of magnitude above background values, are correlated with times of the subsequent occurrence of M or X flares.
NASA Astrophysics Data System (ADS)
Suresha, Suhas; Sujith, R. I.; Emerson, Benjamin; Lieuwen, Tim
2016-10-01
The flame or flow behavior of a turbulent reacting wake is known to be fundamentally different at high and low values of the flame density ratio (ρu/ρb), as the flow transitions from globally stable to unstable. This paper analyzes the nonlinear dynamics present in a bluff-body stabilized flame, and identifies the transition characteristics in the wake as ρu/ρb is varied over a Reynolds number (based on the bluff-body lip velocity) range of 1000-3300. Recurrence quantification analysis (RQA) of the experimentally obtained time series of the flame edge fluctuations reveals that the time series is highly aperiodic at high values of ρu/ρb and transitions to increasingly correlated or nearly periodic behavior at low values. From the RQA of the transverse velocity time series, we observe that periodicity in the flame oscillations is related to periodicity in the flow. Therefore, we hypothesize that this transition from aperiodic to nearly periodic behavior in the flame edge time series is a manifestation of the transition in the flow from a globally stable, convective instability to a global instability as ρu/ρb decreases. The recurrence analysis further reveals that the transition in periodicity is not a sudden shift; rather it occurs through an intermittent regime present at low and intermediate ρu/ρb. During intermittency, the flow behavior switches between aperiodic oscillations, reminiscent of a globally stable, convective instability, and periodic oscillations, reminiscent of a global instability. Analysis of the distribution of the lengths of the periodic regions in the intermittent time series and the first return map indicates the presence of type-II intermittency.
Support vector machines for TEC seismo-ionospheric anomalies detection
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-02-01
Using time series prediction methods, it is possible to track the behavior of earthquake precursors and to announce early warnings when the difference between the predicted value and the observed value exceeds a predefined threshold. Support Vector Machines (SVMs) are widely used due to their many advantages for classification and regression tasks. This study investigates Total Electron Content (TEC) time series using an SVM to detect seismo-ionospheric anomalous variations induced by three powerful earthquakes: Tohoku (11 March 2011), Haiti (12 January 2010) and Samoa (29 September 2009). The durations of the TEC time series datasets are 49, 46 and 71 days for the Tohoku, Haiti and Samoa earthquakes, respectively, each at a time resolution of 2 h. In the case of the Tohoku earthquake, the results show that the difference between the value predicted by the SVM method and the observed value reaches its maximum (i.e., 129.31 TECU) at the earthquake time, in a period of high geomagnetic activity. The SVM method detected a considerable number of anomalous occurrences 1 and 2 days prior to the Haiti earthquake and also 1 and 5 days before the Samoa earthquake, in a period of low geomagnetic activity. In order to show that the method acts sensibly on both non-event and event TEC data, i.e., to perform null-hypothesis tests against which the method can be calibrated, the same period of data from the year before the Samoa earthquake was also analyzed. Further, the TEC anomalies detected using the SVM method were compared to previous results (Akhoondzadeh and Saradjian, 2011; Akhoondzadeh, 2012) obtained from the mean, median, wavelet and Kalman filter methods. The SVM-detected anomalies are similar to those detected using the previous methods. It can be concluded that the SVM is a suitable learning method to detect novelty changes in a nonlinear time series such as variations of earthquake precursors.
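A hedged sketch of the prediction-residual idea with scikit-learn's SVR: learn to predict each value from its recent past and flag large residuals. This in-sample toy differs from the study's setup; the lag, kernel, threshold, and data are all assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def svr_anomalies(series, lag=12, k=2.0):
    """Predict each value from the previous `lag` values and flag observations
    whose residual exceeds k standard deviations of the residuals."""
    X = np.array([series[i - lag:i] for i in range(lag, len(series))])
    y = series[lag:]
    resid = y - SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y).predict(X)
    return np.where(np.abs(resid) > k * resid.std())[0] + lag

tec = 20 + 5 * np.sin(2 * np.pi * np.arange(600) / 12) + np.random.normal(0, 0.5, 600)
tec[450] += 8.0                                  # injected anomaly
print(svr_anomalies(tec))                        # index 450 should be flagged
```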
Conditional heteroscedasticity as a leading indicator of ecological regime shifts.
Seekell, David A; Carpenter, Stephen R; Pace, Michael L
2011-10-01
Regime shifts are massive, often irreversible, rearrangements of nonlinear ecological processes that occur when systems pass critical transition points. Ecological regime shifts sometimes have severe consequences for human well-being, including eutrophication in lakes, desertification, and species extinctions. Theoretical and laboratory evidence suggests that statistical anomalies may be detectable leading indicators of regime shifts in ecological time series, making it possible to foresee and potentially avert incipient regime shifts. Conditional heteroscedasticity is persistent variance characteristic of time series with clustered volatility. Here, we analyze conditional heteroscedasticity as a potential leading indicator of regime shifts in ecological time series. We evaluate conditional heteroscedasticity by using ecological models with and without four types of critical transition. On approaching transition points, all time series contain significant conditional heteroscedasticity. This signal is detected hundreds of time steps in advance of the regime shift. Time series without regime shifts do not have significant conditional heteroscedasticity. Because probability values are easily associated with tests for conditional heteroscedasticity, detection of false positives in time series without regime shifts is minimized. This property reduces the need for a reference system to compare with the perturbed system.
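Conditional heteroscedasticity of the kind described can be tested with Engle's ARCH LM test, available in statsmodels. Below is a toy comparison of a calm series and one whose variance grows toward a "transition"; the synthetic series are assumptions, not the authors' ecological models.

```python
import numpy as np
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(3)
n = 1000
calm = rng.normal(0.0, 1.0, n)                                    # stationary control
drifting = rng.normal(0.0, 1.0 + np.linspace(0.0, 2.0, n) ** 2)   # clustered, growing variance

for name, x in (("no regime shift", calm), ("approaching transition", drifting)):
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(x, nlags=5)
    print(f"{name}: ARCH LM p-value = {lm_pvalue:.4f}")
```

The easily interpreted p-value is the property the abstract highlights: it keeps false positives low on series without regime shifts.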
An improvement of the measurement of time series irreversibility with visibility graph approach
NASA Astrophysics Data System (ADS)
Wu, Zhenyu; Shang, Pengjian; Xiong, Hui
2018-07-01
We propose a method to improve the measurement of real-valued time series irreversibility which combines two tools: the directed horizontal visibility graph and the Kullback-Leibler divergence. The degree of time irreversibility is estimated by the Kullback-Leibler divergence between the in- and out-degree distributions of the associated visibility graph. In our work, we reframe the in- and out-degree distributions by encoding them with different embedding dimensions, as used in calculating permutation entropy (PE). With this improved method, we can not only estimate time series irreversibility efficiently, but also detect it across multiple dimensions. We verify the validity of our method and then estimate the amount of time irreversibility of series generated by chaotic maps, as well as of global stock markets over the period 2005-2015. The results show that the amount of time irreversibility peaks at embedding dimension d = 3 both in the experiments and in the financial market data.
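The base construction the authors improve on is compact enough to sketch in plain numpy: build the directed horizontal visibility graph of the series and take the Kullback-Leibler divergence between its out- and in-degree distributions. The O(n^2) scan and the test series below are illustrative; the paper's refinement of encoding the distributions with different embedding dimensions is not reproduced.

```python
import numpy as np

def dhvg_degrees(x):
    """Directed horizontal visibility graph: node i links forward to node j > i
    if every intermediate value is strictly below min(x[i], x[j])."""
    n = len(x)
    k_out = np.zeros(n, dtype=int)
    k_in = np.zeros(n, dtype=int)
    for i in range(n):
        inter_max = -np.inf                      # max over intermediate points
        for j in range(i + 1, n):
            if inter_max < min(x[i], x[j]):      # j is visible from i
                k_out[i] += 1
                k_in[j] += 1
            inter_max = max(inter_max, x[j])
            if inter_max >= x[i]:                # nothing further can be visible
                break
    return k_out, k_in

def degree_dist(k, kmax):
    p = np.bincount(k, minlength=kmax + 1).astype(float)
    return p / p.sum()

def kl_divergence(p, q, eps=1e-12):
    """D(p || q); eps regularizes empty bins in the comparison distribution."""
    q = np.where(q > 0, q, eps)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(0)
noise = rng.normal(size=2000)                    # white noise: irreversibility near 0
x = np.empty(2000); x[0] = 0.4
for i in range(1, 2000):                         # logistic map: irreversible dynamics
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

for name, series in [("white noise", noise), ("logistic map", x)]:
    ko, ki = dhvg_degrees(series)
    kmax = int(max(ko.max(), ki.max()))
    d = kl_divergence(degree_dist(ko, kmax), degree_dist(ki, kmax))
    print(f"{name}: D(out || in) = {d:.4f}")
```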
NASA Astrophysics Data System (ADS)
ten Veldhuis, Marie-Claire; Schleiss, Marc
2017-04-01
In this study, we introduce an alternative approach for the analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference from conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km2 to 238 km2 in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are the most prevalent values in flow time series, IAT sampling gives relatively more weight to high flow values, which accumulate a given flow amount in a shorter time. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
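The IAT sampling step itself reduces to interpolation on the cumulative volume curve: record the time needed to accumulate each successive fixed flow amount. A minimal numpy sketch on synthetic daily streamflow follows; the sampling amount is an assumed parameter, not a value from the study.

```python
import numpy as np

def inter_amount_times(flow, dt, amount):
    """Times (in units of dt) needed to accumulate each successive fixed
    `amount` of flow, by linear interpolation on the cumulative volume curve."""
    cum = np.concatenate([[0.0], np.cumsum(flow) * dt])   # cumulative volume
    t = np.arange(cum.size) * dt
    targets = np.arange(amount, cum[-1], amount)          # fixed-volume sampling levels
    crossing_times = np.interp(targets, cum, t)           # when each level is reached
    return np.diff(np.concatenate([[0.0], crossing_times]))

rng = np.random.default_rng(0)
# Synthetic daily streamflow: lognormal baseflow with occasional peaks.
flow = rng.lognormal(mean=1.0, sigma=0.8, size=3650)

iat = inter_amount_times(flow, dt=1.0, amount=flow.mean() * 5)
print(f"IAT: n={iat.size}, median={np.median(iat):.2f} d, "
      f"min={iat.min():.2f} d (high flow), max={iat.max():.2f} d (low flow)")
```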
NASA Astrophysics Data System (ADS)
Keylock, C. J.
2017-03-01
An algorithm is described that can generate random variants of a time series while preserving the probability distribution of the original values and the pointwise Hölder regularity; thus, it preserves the multifractal properties of the data. Our algorithm is similar in principle to well-known algorithms based on the preservation of the Fourier amplitude spectrum and the original values of a time series. However, it is underpinned by a dual-tree complex wavelet transform rather than a Fourier transform. Our method, which we term the iterated amplitude adjusted wavelet transform, can be used to generate bootstrapped versions of multifractal data, and because it preserves the pointwise Hölder regularity but not the local Hölder regularity, it can be used to test hypotheses concerning the presence of oscillating singularities in a time series, an important feature of turbulence and econophysics data. Because the locations of the data values are randomized with respect to the multifractal structure, hypotheses about their mutual coupling can be tested, which is important for the velocity-intermittency structure of turbulence and self-regulating processes.
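The Fourier-based ancestor referred to here, the iterated amplitude adjusted Fourier transform (IAAFT), is short enough to sketch and shows the shared principle: alternately impose the original amplitude spectrum and the original value distribution until the surrogate stabilizes. The wavelet method described above replaces the FFT step with dual-tree complex wavelet coefficients; that substitution is not shown in this illustrative numpy sketch.

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterated amplitude adjusted Fourier transform surrogate:
    preserves the amplitude spectrum and the value distribution of x."""
    rng = np.random.default_rng(seed)
    amplitudes = np.abs(np.fft.rfft(x))      # target spectrum
    sorted_x = np.sort(x)                    # target value distribution
    y = rng.permutation(x)                   # random starting point
    for _ in range(n_iter):
        # Step 1: impose the target amplitude spectrum, keep current phases.
        phases = np.angle(np.fft.rfft(y))
        y = np.fft.irfft(amplitudes * np.exp(1j * phases), n=len(x))
        # Step 2: impose the target value distribution by rank ordering.
        ranks = np.argsort(np.argsort(y))
        y = sorted_x[ranks]
    return y

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=1024))         # correlated toy series
s = iaaft(x)
print(np.allclose(np.sort(s), np.sort(x)))   # True: same values, reshuffled
```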
Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model
NASA Astrophysics Data System (ADS)
Vazifedan, Turaj; Shitan, Mahendran
Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, or the number of customers waiting for service at a certain time. When the values of the observations are large, it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to use an ARMA process to model the observed phenomenon. In such cases we need to model the time series data by using a Non-Negative Integer-valued Autoregressive (INAR) process. The modeling of count data is based on the binomial thinning operator. In this paper we illustrate the modeling of count data using the monthly number of poliomyelitis cases in the United States between January 1970 and December 1983. We applied the AR(1), Poisson regression and INAR(1) models, and the suitability of these models was assessed by using the Index of Agreement (I.A.). We found that the INAR(1) model is more appropriate in the sense that it had a better I.A., and it is natural since the data are counts.
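The binomial thinning operator is easy to make concrete: alpha applied to X is a Binomial(X, alpha) draw, so one INAR(1) step is X_t = alpha o X_{t-1} + eps_t with, say, Poisson innovations. A minimal simulation sketch follows; the parameter values are illustrative, not fitted to the polio data.

```python
import numpy as np

def simulate_inar1(n, alpha, lam, seed=0):
    """INAR(1): X_t = alpha o X_{t-1} + eps_t, where alpha o X is binomial
    thinning (each of the X_{t-1} previous counts survives with probability
    alpha) and eps_t ~ Poisson(lam) are the new arrivals."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))          # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning
        x[t] = survivors + rng.poisson(lam)
    return x

x = simulate_inar1(n=168, alpha=0.5, lam=1.0)       # 14 years of monthly counts
print(x[:24])
print("sample mean:", x.mean(), "(stationary mean lam/(1-alpha) =", 1.0 / 0.5, ")")
```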
NASA Astrophysics Data System (ADS)
Staehelin, J.; Rieder, H. E.; Maeder, J. A.; Ribatet, M.; Davison, A. C.; Stübi, R.
2009-04-01
Atmospheric ozone protects the biota living at the Earth's surface from harmful solar UV-B and UV-C radiation. The global ozone shield is expected to gradually recover from the anthropogenic disturbance of ozone depleting substances (ODS) in the coming decades. The extratropical stratospheric ozone layer might increase significantly above the thickness of the chemically undisturbed atmosphere, which might enhance ozone concentrations at the tropopause altitude, where ozone is an important greenhouse gas. At Arosa, a resort village in the Swiss Alps, total ozone measurements started in 1926, yielding the world's longest total ozone series. One Fery spectrograph and seven Dobson spectrophotometers were operated at Arosa, and the method used to homogenize the series will be presented. Due to its unique length, the series allows studying total ozone in the chemically undisturbed as well as in the ODS-loaded stratosphere. The series is particularly valuable for studying natural variability in the period prior to 1970, when ODS started to affect stratospheric ozone. Concepts developed in extreme value statistics allow objective definitions of "ozone extreme high" and "ozone extreme low" values by fitting the (daily mean) time series using the Generalized Pareto Distribution (GPD). Extreme high ozone events can be attributed to effects of El Niño and/or the NAO, whereas in the chemically disturbed stratosphere high frequencies of extreme low total ozone values occur simultaneously with periods of strong polar ozone depletion (identified by statistical modeling with Equivalent Stratospheric Chlorine times the Volume of Polar Stratospheric Clouds) and volcanic eruptions (such as El Chichón and Pinatubo).
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
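The classic dynamic time warping recursion underlying DTW-S is simple to state: fill a cumulative cost matrix D[i, j] = |a_i - b_j| + min(D[i-1, j], D[i, j-1], D[i-1, j-1]) and trace back the optimal path. The toy numpy sketch below shows only this base algorithm; the DTW-S extensions (interpolated time points, per-gene significance simulation) are not reproduced.

```python
import numpy as np

def dtw(a, b):
    """Plain DTW: optimal alignment cost and warping path between two 1-D
    series, with absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Trace back from (n, m) to (1, 1), recording 0-based series indices.
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min(steps, key=lambda s: D[s])
    path.append((0, 0))
    return D[n, m], path[::-1]

t = np.linspace(0, 2 * np.pi, 40)
a = np.sin(t)
b = np.sin(t - 0.6)              # same profile, shifted in time
cost, path = dtw(a, b)
shifts = [j - i for i, j in path]
print(f"alignment cost {cost:.2f}, median index shift {np.median(shifts):.1f}")
```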
Characterizing artifacts in RR stress test time series.
Astudillo-Salinas, Fabian; Palacio-Baus, Kenneth; Solano-Quinde, Lizandro; Medina, Ruben; Wong, Sara
2016-08-01
Electrocardiographic stress test records contain many artifacts. In this paper we explore a simple method to characterize the amount of artifacts present in unprocessed RR stress test time series. Four time series classes were defined: Very good lead, Good lead, Low quality lead and Useless lead. Sixty-five 8-lead ECG stress test records were analyzed. First, the RR time series were annotated by two experts. The automatic methodology is based on dividing the RR time series into non-overlapping windows. Each window is marked as noisy whenever it exceeds an established standard deviation threshold (SDT). Series are classified according to the percentage of windows that exceed a given value, with class boundaries based on the first manual annotation. Different SDTs were explored. Results show that an SDT close to 20% of the mean provides the best results. Agreement between the two annotators' classifications is 70.77%, whereas agreement between the second annotator and the best-matching automatic method exceeds 63%. Leads classified as Very good leads and Good leads could be combined to improve automatic heartbeat labeling.
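The screening rule is simple enough to state in code: slice the RR series into non-overlapping windows, mark a window noisy when its standard deviation exceeds a fixed fraction of the series mean, and grade the lead by the fraction of noisy windows. In the sketch below the window length and class boundaries are assumptions for illustration; the paper derives its boundaries from the first manual annotation.

```python
import numpy as np

def grade_rr_series(rr, win=30, sdt_frac=0.20, bounds=(0.05, 0.15, 0.40)):
    """Grade an RR series by the fraction of noisy windows.
    A window is noisy if its SD exceeds sdt_frac * mean(rr).
    `bounds` are assumed class boundaries (fractions of noisy windows)."""
    n_win = len(rr) // win
    windows = rr[: n_win * win].reshape(n_win, win)
    noisy = windows.std(axis=1) > sdt_frac * rr.mean()
    frac = noisy.mean()
    labels = ["Very good lead", "Good lead", "Low quality lead", "Useless lead"]
    grade = labels[np.searchsorted(bounds, frac, side="right")]
    return frac, grade

rng = np.random.default_rng(0)
rr = rng.normal(800, 25, size=1200)        # clean RR intervals (ms)
rr[300:420] += rng.normal(0, 400, 120)     # injected artifact burst
frac, grade = grade_rr_series(rr)
print(f"{frac:.1%} noisy windows -> {grade}")
```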
NASA Astrophysics Data System (ADS)
Anwar, Faizan; Bárdossy, András; Seidel, Jochen
2017-04-01
Estimating missing values in a time series of a hydrological variable is an everyday task for a hydrologist. Existing methods such as inverse distance weighting, multivariate regression, and kriging, though simple to apply, provide no indication of the quality of the estimated value and depend mainly on the values of neighboring stations at a given step in the time series. Copulas have the advantage of representing the pure dependence structure between two or more variables (given that the relationship between them is monotonic). They remove the need to transform the data before use or to specify functions that model the relationship between the considered variables. A copula-based approach is suggested to infill discharge, precipitation, and temperature data. As a first step the normal copula is used; subsequently, the necessity of using non-normal/non-symmetric dependence is investigated. Discharge and temperature are treated as regular continuous variables and can be used without processing for infilling and quality checking. Due to the mixed distribution of precipitation values, precipitation has to be treated differently: a discrete probability is assigned to the zeros and the rest is treated as a continuous distribution. Building on the work of others, along with infilling, the normal copula is also utilized to identify values in a time series that might be erroneous. This is done by treating the available value as missing, infilling it using the normal copula, and checking whether it lies within a confidence band (5 to 95% in our case) of the obtained conditional distribution. Hydrological data from two catchments, the Upper Neckar River (Germany) and the Santa River (Peru), are used to demonstrate the application for datasets with different data quality. The Python code used here is also made available on GitHub. The required input is the time series of a given variable at different stations.
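A minimal sketch of the normal-copula infilling idea: map each station's values to standard-normal scores through its empirical distribution, estimate the correlation between stations in that Gaussian space, condition on the neighbours to obtain a conditional normal for the missing score, and map the estimate and a 5-95% band back through the target station's empirical quantiles. The station count, marginals, and back-transform below are illustrative simplifications, not the authors' code.

```python
import numpy as np
from scipy import stats

def to_normal_scores(x):
    """Empirical-CDF transform to standard normal scores."""
    ranks = stats.rankdata(x) / (len(x) + 1)     # avoids probabilities 0 and 1
    return stats.norm.ppf(ranks)

rng = np.random.default_rng(0)
n = 1000
# Synthetic correlated discharge at 3 stations with lognormal marginals.
z = rng.multivariate_normal([0, 0, 0],
                            [[1, .8, .6], [.8, 1, .7], [.6, .7, 1]], size=n)
q = np.exp(0.8 * z + 1.5)                        # columns: stations 0..2

scores = np.column_stack([to_normal_scores(q[:, j]) for j in range(3)])
R = np.corrcoef(scores, rowvar=False)

# Conditional normal for station 0 given stations 1 and 2 at time t.
t = 123
r0, R11 = R[0, 1:], R[1:, 1:]
w = np.linalg.solve(R11, r0)
mu_c = w @ scores[t, 1:]                         # conditional mean (score space)
sd_c = np.sqrt(1.0 - r0 @ w)                     # conditional std dev

# Back-transform score quantiles through station 0's empirical quantiles.
sorted0 = np.sort(q[:, 0])
def from_score(s):
    return np.quantile(sorted0, stats.norm.cdf(s))

est = from_score(mu_c)
lo, hi = from_score(mu_c - 1.645 * sd_c), from_score(mu_c + 1.645 * sd_c)
print(f"observed {q[t, 0]:.2f}, estimate {est:.2f}, 5-95% band [{lo:.2f}, {hi:.2f}]")
```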
Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.
2010-10-01
In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
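The peaks-over-threshold core of this approach is easy to sketch: collect exceedances over a threshold and fit a Generalized Pareto Distribution to them. The scipy sketch below uses a constant 95th-percentile threshold on synthetic data for brevity, whereas the study uses a daily moving threshold to absorb the seasonal cycle.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic daily total ozone (Dobson units): seasonal cycle + skewed noise.
days = np.arange(365 * 40)
ozone = 330 + 40 * np.sin(2 * np.pi * days / 365.25) + rng.gamma(4, 6, days.size)

# Peaks over threshold with a constant 95th-percentile threshold (for brevity).
u = np.quantile(ozone, 0.95)
exceed = ozone[ozone > u] - u

# Fit the GPD to the exceedances, with the location parameter fixed at 0.
shape, loc, scale = stats.genpareto.fit(exceed, floc=0)
print(f"threshold u = {u:.1f} DU, GPD shape = {shape:.3f}, scale = {scale:.2f}")

# Example use: the extreme-high level exceeded on average once per decade.
n_per_year = exceed.size / 40
ret_level = u + stats.genpareto.ppf(1 - 1 / (10 * n_per_year), shape, 0, scale)
print(f"10-year return level: {ret_level:.1f} DU")
```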
Analysis and Forecasting of Shoreline Position
NASA Astrophysics Data System (ADS)
Barton, C. C.; Tebbens, S. F.
2007-12-01
Analysis of historical shoreline positions on sandy coasts and in the geologic record, together with study of sea-level rise curves, reveals that the dynamics of the underlying processes produce temporal/spatial signals that exhibit power-law scaling and are therefore self-affine fractals. Self-affine time series signals can be quantified over many orders of magnitude in time and space in terms of persistence, a measure of the degree of correlation between adjacent values in the stochastic portion of a time series. Fractal statistics developed for self-affine time series are used to forecast a probability envelope bounding future shoreline positions. The envelope provides the standard deviation as a function of three variables: persistence, a constant equal to the value of the power spectral density when 1/period equals 1, and the number of time increments. The persistence of a twenty-year time series of mean-high-water (MHW) shoreline positions was measured for four profiles surveyed at Duck, NC at the Field Research Facility (FRF) by the U.S. Army Corps of Engineers. The four MHW shoreline time series signals are self-affine, with persistence ranging between 0.8 and 0.9, which indicates that the shoreline position time series is weakly persistent (where zero is uncorrelated) and has highly varying trends for all time intervals sampled. Forecasts of a probability envelope for future MHW positions are made for the 20 years of record and beyond, to 50 years from the start of the data records. The forecasts describe the twenty-year data sets well and indicate that, within a 96% confidence envelope, future decadal MHW shoreline excursions should be within 14.6 m of the position at the start of data collection. This is a stable-oscillatory shoreline. The forecasting method introduced here retains the stochastic portion of the time series. The traditional method of predicting shoreline change, by contrast, reduces the time series to a linear trend line fitted to historic shoreline positions and extrapolates it to forecast future positions; the resulting linearly increasing mean breaks the confidence envelope eight years into the future and continues to increase. The traditional method is thus a poor representation of the observed shoreline position time series and a poor basis for extrapolating future shoreline positions.
26 CFR 1.454-1 - Obligations issued at discount.
Code of Federal Regulations, 2010 CFR
2010-04-01
... series E bond, at which time the stated redemption value was $674.60. A never elected under section 454(a... obligation, in which he retains his investment in a matured series E U.S. savings bond, or (iii) A nontransferable obligation (whether or not a current income obligation) of the United States for which a series E...
Code of Federal Regulations, 2010 CFR
2010-04-01
... can be sold at or near their carrying value within a reasonably short period of time and either: (i... portion. (6) Series of a series company means any class or series of a registered investment company that issues two or more classes or series of preferred or special stock, each of which is preferred over all...
Code of Federal Regulations, 2013 CFR
2013-04-01
... can be sold at or near their carrying value within a reasonably short period of time and either: (i... portion. (6) Series of a series company means any class or series of a registered investment company that issues two or more classes or series of preferred or special stock, each of which is preferred over all...
Code of Federal Regulations, 2014 CFR
2014-04-01
... can be sold at or near their carrying value within a reasonably short period of time and either: (i... portion. (6) Series of a series company means any class or series of a registered investment company that issues two or more classes or series of preferred or special stock, each of which is preferred over all...
NASA Astrophysics Data System (ADS)
Hermosilla, Txomin; Wulder, Michael A.; White, Joanne C.; Coops, Nicholas C.; Hobart, Geordie W.
2017-12-01
The use of time series satellite data allows for the temporally dense, systematic, transparent, and synoptic capture of land dynamics over time. Subsequent to the opening of the Landsat archive, several time series approaches for characterizing landscape change have been developed, often representing a particular analytical time window. The information richness and widespread utility of these time series data have created a need to maintain the currency of time series information via the addition of new data, as it becomes available. When an existing time series is temporally extended, it is critical that previously generated change information remains consistent, thereby not altering reported change statistics or science outcomes based on that change information. In this research, we investigate the impacts and implications of adding additional years to an existing 29-year annual Landsat time series for forest change. To do so, we undertook a spatially explicit comparison of the 29 overlapping years of a time series representing 1984-2012, with a time series representing 1984-2016. Surface reflectance values, and presence, year, and type of change were compared. We found that the addition of years to extend the time series had minimal effect on the annual surface reflectance composites, with slight band-specific differences (r ≥ 0.1) in the final years of the original time series being updated. The area of stand replacing disturbances and determination of change year are virtually unchanged for the overlapping period between the two time-series products. Over the overlapping temporal period (1984-2012), the total area of change differs by 0.53%, equating to an annual difference in change area of 0.019%. Overall, the spatial and temporal agreement of the changes detected by both time series was 96%. Further, our findings suggest that the entire pre-existing historic time series does not need to be re-processed during the update process. Critically, given the time series change detection and update approach followed here, science outcomes or reports representing one temporal epoch can be considered stable and will not be altered when a time series is updated with newly available data.
Evolution of record-breaking high and low monthly mean temperatures
NASA Astrophysics Data System (ADS)
Anderson, A. L.; Kostinski, A. B.
2011-12-01
We examine the ratio of record-breaking highs to record-breaking lows with respect to the length of the time series for monthly mean temperatures within the continental United States (1900-2006) and ask the following question: how are record-breaking high and low surface temperatures in the United States affected by the time period considered? We find that the ratio of record-breaking highs to lows in 2006 increases as the time series extend further into the past. For example, in 2006 the ratio of record-breaking highs to record-breaking lows is ≈ 13 : 1 with 1950 as the first year and ≈ 25 : 1 with 1900 as the first year; both ratios are an order of magnitude greater than 3-σ for stationary simulations. We also find that record-breaking events are more sensitive to trends in time series of monthly averages than in time series of the corresponding daily values. When we consider the ratio as it evolves with respect to a fixed start year, we find it is strongly correlated with the ensemble mean. Correlation coefficients are 0.76 and 0.82 for 1900-2006 and 1950-2006, respectively; 3-σ = 0.3 for pairs of uncorrelated stationary time series. We find similar values for globally distributed time series: 0.87 and 0.92 for 1900-2006 and 1950-2006, respectively. However, the ratios evolve differently: global ratios increase throughout (1920-2006), while continental United States ratios decrease from about 1940 to 1970. (Based on Anderson and Kostinski (2011), Evolution and distribution of record-breaking high and low monthly mean temperatures, Journal of Applied Meteorology and Climatology, doi:10.1175/JAMC-D-10-05025.1.)
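For a stationary i.i.d. series, the expected number of records after n observations is the harmonic number H_n ≈ ln n + 0.577, and record highs and lows balance, so the high:low ratio stays near 1. The Monte Carlo sketch below reproduces this null baseline against which the observed 13:1 and 25:1 ratios stand out; it illustrates the stationary null model only, not the paper's station data handling.

```python
import numpy as np

def count_records(x):
    """Counts of record highs and record lows in a 1-D series."""
    highs = lows = 0
    hi, lo = -np.inf, np.inf
    for v in x:
        if v > hi:
            highs += 1; hi = v
        if v < lo:
            lows += 1; lo = v
    return highs, lows

rng = np.random.default_rng(0)
n_years, n_series = 107, 5000        # e.g. 1900-2006, many station-months
ratios = []
for _ in range(n_series):
    h, l = count_records(rng.normal(size=n_years))
    ratios.append(h / l)
ratios = np.array(ratios)
print(f"stationary null: mean high:low ratio {ratios.mean():.2f}, "
      f"expected records per series ~ {np.log(n_years) + 0.5772:.1f}")
```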
Quesada-Medina, Joaquín; López-Cremades, Francisco Javier; Olivares-Carrillo, Pilar
2010-11-01
The solubility of lignin from hydrolyzed almond (Prunus amygdalus) shells in different acetone-, ethanol- and dioxane-water mixtures and under different conditions (extraction time and temperature) was studied. The concept of the solubility parameter (delta-value) was applied to explain the effect of organic solvent concentration on lignin solubility. The organic solvent-water mixture that led to the highest lignin extraction was composed of 75 vol.% organic solvent for all the solvent series investigated (acetone, ethanol and dioxane). Moreover, the best lignin extraction conditions were a temperature of 210 °C and an extraction time of 40 min for the acetone and ethanol series, and 25 min for the dioxane series. The delta-value of the hydrolyzed almond shell lignin [14.60 (cal/cm(3))(1/2)] and that of the organic solvent-water mixtures were calculated. The experimental delignification capacity of the aqueous organic solvents clearly reflected the proximity of their delta-value to that of lignin. The hydrogen-bonding capacity of the solvent-water mixtures was also taken into account.
Monitoring volcano activity through Hidden Markov Model
NASA Astrophysics Data System (ADS)
Cassisi, C.; Montalto, P.; Prestifilippo, M.; Aliotta, M.; Cannata, A.; Patanè, D.
2013-12-01
During 2011-2013, Mt. Etna was mainly characterized by cyclic occurrences of lava fountains, totaling 38 episodes. During this time interval, the Etna volcano's states (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN), whose automatic recognition is very useful for monitoring purposes, turned out to be strongly related to the trend of the RMS (Root Mean Square) of the seismic signal recorded by stations close to the summit area. Since the RMS time series is considered stochastic, we can model the system generating its values, assuming it to be a Markov process, by using hidden Markov models (HMMs). HMMs are a powerful tool for modeling any time-varying series. HMM analysis seeks to recover the sequence of hidden states from the observed emissions. In our framework, the observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique, which maps RMS time series values to discrete literal emissions. The experiments show how it is possible to infer the volcano states by means of HMMs and SAX.
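To make the RMS to SAX to HMM pipeline concrete, here is a self-contained sketch: z-normalize the series, discretize it with Gaussian breakpoints into a small SAX alphabet, and decode a hidden state path with the Viterbi algorithm under a hand-specified two-state HMM (calm vs. fountaining). The alphabet size and the transition and emission matrices are invented for illustration; the monitoring system's four states and parameters are learned from data.

```python
import numpy as np
from scipy import stats

def sax_symbols(x, alphabet_size=4):
    """SAX discretization: z-normalize, then bin by standard-normal breakpoints."""
    z = (x - x.mean()) / x.std()
    breakpoints = stats.norm.ppf(np.arange(1, alphabet_size) / alphabet_size)
    return np.digitize(z, breakpoints)       # symbols 0..alphabet_size-1

def viterbi(obs, pi, A, B):
    """Most likely hidden state path for discrete observations (log space)."""
    n, S = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((n, S), dtype=int)
    for t in range(1, n):
        cand = logd[:, None] + np.log(A)      # [from_state, to_state]
        back[t] = cand.argmax(axis=0)
        logd = cand.max(axis=0) + np.log(B[:, obs[t]])
    path = np.empty(n, dtype=int)
    path[-1] = logd.argmax()
    for t in range(n - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

rng = np.random.default_rng(0)
# Synthetic RMS: calm background with a fountaining episode of high RMS.
rms = np.concatenate([rng.gamma(2, 1, 300), rng.gamma(20, 1, 60), rng.gamma(2, 1, 300)])
obs = sax_symbols(rms, alphabet_size=4)

pi = np.array([0.9, 0.1])                     # [calm, fountain]
A = np.array([[0.99, 0.01], [0.05, 0.95]])    # sticky state transitions
B = np.array([[0.45, 0.35, 0.15, 0.05],       # calm emits low symbols
              [0.05, 0.15, 0.35, 0.45]])      # fountain emits high symbols
states = viterbi(obs, pi, A, B)
print("fountain detected at indices", np.flatnonzero(states == 1)[[0, -1]])
```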
Gaussian mixture clustering and imputation of microarray data.
Ouyang, Ming; Welsh, William J; Georgopoulos, Panos
2004-04-12
In microarray experiments, missing entries arise from blemishes on the chips. In large-scale studies, virtually every chip contains some missing entries and more than 90% of the genes are affected. Many analysis methods require a full set of data. Either those genes with missing entries are excluded, or the missing entries are filled with estimates prior to the analyses. This study compares methods of missing value estimation. Two evaluation metrics of imputation accuracy are employed. First, the root mean squared error measures the difference between the true values and the imputed values. Second, the number of mis-clustered genes measures the difference between clustering with true values and that with imputed values; it examines the bias introduced by imputation to clustering. The Gaussian mixture clustering with model averaging imputation is superior to all other imputation methods, according to both evaluation metrics, on both time-series (correlated) and non-time series (uncorrelated) data sets.
Covariance Function for Nearshore Wave Assimilation Systems
2018-01-30
covariance can be modeled by a parameterized Gaussian function, for nearshore wave assimilation applications, the covariance function depends primarily on...case of missing values at the compiled time series, the gaps were filled by weighted interpolation. The weights depend on the number of the...averaging, in order to create the continuous time series, filters out the dependency on the instantaneous meteorological and oceanographic conditions
Temporal fluctuation scaling in populations and communities
Michael Kalyuzhny; Yishai Schreiber; Rachel Chocron; Curtis H. Flather; David A. Kessler; Nadav M. Shnerb
2014-01-01
Taylor's law, one of the most widely accepted generalizations in ecology, states that the variance of a population abundance time series scales as a power law of its mean. Here we reexamine this law and the empirical evidence presented in support of it. Specifically, we show that the exponent generally depends on the length of the time series, and its value...
NASA Astrophysics Data System (ADS)
Aliotta, M. A.; Cassisi, C.; Prestifilippo, M.; Cannata, A.; Montalto, P.; Patanè, D.
2014-12-01
During the last years, volcanic activity at Mt. Etna was often characterized by cyclic occurrences of fountains. In the period between January 2011 and June 2013, 38 episodes of lava fountains have been observed. Automatic recognition of the volcano's states related to lava fountain episodes (Quiet, Pre-Fountaining, Fountaining, Post-Fountaining) is very useful for monitoring purposes. We discovered that such states are strongly related to the trend of the RMS (Root Mean Square) of the seismic signal recorded in the summit area. In the framework of the PON SIGMA project (Integrated Cloud-Sensor System for Advanced Multirisk Management), we tried to model the system generating the sampled values (assuming it to be a Markov process and the RMS time series to be a stochastic process) by using hidden Markov models (HMMs), which are a powerful tool for modeling any time-varying series. HMM analysis seeks to discover the sequence of hidden states from the observed emissions. In our framework, the observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique. SAX is able to map RMS time series values to discrete literal emissions. Our experiments showed how to predict volcano states by means of SAX and HMMs.
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Peter, Thomas; Ribatet, Mathieu; Davison, Anthony C.; Stübi, Rene; Weihs, Philipp; Holawe, Franz
2010-05-01
In this study tools from extreme value theory (e.g. Coles, 2001; Ribatet, 2007) are applied for the first time in the field of stratospheric ozone research, as statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the internal data structure concerning extremes. The study illustrates that tools based on extreme value theory are appropriate to identify ozone extremes and to describe the tails of the world's longest total ozone record (Arosa, Switzerland; for details see Staehelin et al., 1998a, b) (Rieder et al., 2010a). A daily moving threshold was implemented to account for the seasonal cycle in total ozone. The frequency of days with extreme low (termed ELOs) and extreme high (termed EHOs) total ozone, and the influence of these events on mean values and trends, is analyzed for the Arosa total ozone time series. The results show (a) an increase in ELOs and (b) a decrease in EHOs during the last decades, and (c) that the overall trend in total ozone during the 1970s and 1980s is strongly dominated by changes in these extreme events. After removing the extremes, the time series shows a strongly reduced trend (a reduction by a factor of 2.5 for the trend in the annual mean). Furthermore, it is shown that the fitted model represents the tails of the total ozone data set with very high accuracy over the entire range (including absolute monthly minima and maxima). The frequency distribution of ozone mini-holes (using constant thresholds) can also be calculated with high accuracy. Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into time series properties. Excursions in the frequency of extreme events reveal "fingerprints" of dynamical factors, such as ENSO or NAO, and chemical factors, such as cold Arctic vortex ozone losses, as well as major volcanic eruptions of the 20th century (e.g. Gunung Agung, El Chichón, Mt. Pinatubo). Furthermore, atmospheric loading of ozone depleting substances leads to a continuous modification of column ozone in the northern hemisphere, also with respect to extreme values (partly again in connection with polar vortex contributions). It is shown that the application of extreme value theory allows the identification of many more such fingerprints than conventional time series analysis of annual and seasonal mean values. In particular, the analysis shows the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone (Rieder et al., 2010b). Overall, the presented extremes concept provides new information on time series properties, variability, trends and the influence of dynamics and chemistry, complementing earlier analyses focusing only on monthly (or annual) mean values.
References:
Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN 1852334592, Springer, Berlin, 2001.
Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007.
Rieder, H. E., Staehelin, J., Maeder, J. A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A. C.: Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD, 2010a.
Rieder, H. E., Staehelin, J., Maeder, J. A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A. C.: Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD, 2010b.
Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a.
Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998b.
Long-range memory and multifractality in gold markets
NASA Astrophysics Data System (ADS)
Mali, Provash; Mukhopadhyay, Amitabha
2015-03-01
Long-range correlation and fluctuation in the gold market time series of the world's two leading gold consuming countries, namely China and India, are studied. For both market series during the period 1985-2013 we observe long-range persistence of memory in the sequences of maxima (minima) of returns in successive time windows of fixed length, but the series as a whole are found to be uncorrelated. Multifractal analysis of these series, as well as of the sequences of maxima (minima), is carried out using the multifractal detrended fluctuation analysis (MF-DFA) method. We observe a weak multifractal structure for the original series that mainly originates from the fat-tailed probability distribution function of the values, and the multifractal nature of the original time series is enhanced in their sequences of maximal (minimal) returns. A quantitative measure of multifractality is provided by a set of 'complexity parameters'.
NASA Astrophysics Data System (ADS)
Febrian Umbara, Rian; Tarwidi, Dede; Budi Setiawan, Erwin
2018-03-01
The paper discusses the prediction of the Jakarta Composite Index (JCI) in the Indonesia Stock Exchange. The study is based on JCI historical data for 1286 days, used to predict the value of JCI one day ahead. This paper proposes a two-stage prediction: the first stage uses Fuzzy Time Series (FTS) to predict the values of ten technical indicators, and the second stage uses Support Vector Regression (SVR) to predict the value of JCI one day ahead, resulting in a hybrid FTS-SVR prediction model. The performance of this combined model is compared with that of a single-stage prediction model using SVR only. Ten technical indicators are used as input for each model.
Time series of the northeast Pacific
NASA Astrophysics Data System (ADS)
Peña, M. Angelica; Bograd, Steven J.
2007-10-01
In July 2006, the North Pacific Marine Science Organization (PICES) and Fisheries & Oceans Canada sponsored the symposium “Time Series of the Northeast Pacific: A symposium to mark the 50th anniversary of Line P”. The symposium, which celebrated 50 years of oceanography along Line P and at Ocean Station Papa (OSP), explored the scientific value of the Line P and other long oceanographic time series of the northeast Pacific (NEP). Overviews of the principal NEP time-series were presented, which facilitated regional comparisons and promoted interaction and exchange of information among investigators working in the NEP. More than 80 scientists from 8 countries attended the symposium. This introductory essay is a brief overview of the symposium and the 10 papers that were selected for this special issue of Progress in Oceanography.
Strauss, Ludwig G; Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2011-03-01
(18)F-FDG kinetics are quantified by a 2-tissue-compartment model. The routine use of dynamic PET is limited because of this modality's 1-h acquisition time. We evaluated shortened acquisition protocols of up to 0-30 min with regard to their accuracy for data analysis with the 2-tissue-compartment model. Full dynamic series for 0-60 min were analyzed using a 2-tissue-compartment model. The time-activity curves and the resulting model parameters were stored in a database. Shortened acquisition data were generated from the database using the following time intervals: 0-10, 0-16, 0-20, 0-25, and 0-30 min. Furthermore, the impact of adding a 60-min uptake value to the dynamic series was evaluated. The datasets were analyzed using dedicated software to predict the results of the full dynamic series. The software is based on a modified support vector machines (SVM) algorithm and predicts the compartment parameters of the full dynamic series. The SVM-based software provides user-independent results and was accurate at predicting the compartment parameters of the full dynamic series. If a squared correlation coefficient of 0.8 (corresponding to 80% explained variance of the data) was used as a limit, a shortened acquisition of 0-16 min was sufficient to predict the 60-min 2-tissue-compartment parameters. If a limit of 0.9 (90% explained variance) was used, a dynamic series of at least 0-20 min together with the 60-min uptake value was required. Shortened acquisition protocols can thus be used to predict the parameters of the 2-tissue-compartment model: either a dynamic PET series of 0-16 min or a combination of a dynamic PET/CT series of 0-20 min and a 60-min uptake value is adequate for analysis with a 2-tissue-compartment model.
Depletions in winter total ozone values over southern England
NASA Technical Reports Server (NTRS)
Lapworth, A.
1994-01-01
A study has been made of the recently re-evaluated time series of daily total ozone values for the period 1979 to 1992 for southern England. The series consists of measurements made at two stations, Bracknell and Camborne. The series shows a steady decline in ozone values in the spring months over the period, and this is consistent with data from an earlier decade that have been published but not re-evaluated. Of exceptional note is the monthly mean for January 1992, which was significantly below the normal value and was the lowest so far measured for this month. This winter was also noteworthy for a prolonged period during which a blocking anticyclone dominated the region, and the possibility existed that this was related to the ozone anomaly. It was possible to determine whether the origin of the low ozone value lay in ascending stratospheric motions. A linear regression of ozone deviations against 100 hPa temperature deviations was used to reduce ozone values to those expected in the absence of high pressure, under the assumption that the normal regression relation was not affected by atmospheric anomalies during the winter. This showed that vertical motions in the stratosphere accounted for only part of the ozone anomaly, and that the main cause of the ozone deficit lay either in a reduced stratospheric circulation, to which the anticyclone may be related, or in chemical effects in the reduced stratospheric temperatures above the high pressure area. A study of the ozone time series, adjusted to remove variations correlated with meteorological quantities, showed that during the period since 1979 one other winter, that of 1982/3, showed a similar although less well defined deficit in total ozone values.
NASA Astrophysics Data System (ADS)
Gang, Zhang; Fansong, Meng; Jianzhong, Wang; Mingtao, Ding
2018-02-01
Determining the magnetotelluric impedance precisely and accurately is fundamental to valid inversion and geological interpretation. This study aims to determine the minimum signal-to-noise ratio (SNR) at which the remote reference technique remains effective. Standard time series were simulated, different Gaussian noises were added to obtain time series with different SNRs, and intermediate data such as the polarization direction, correlation coefficient and impedance tensor were analyzed. The results show that when the SNR is larger than 23.5743, the polarization directions are disordered in morphology and a smooth and accurate sounding curve can be obtained. Under this condition, the correlation coefficient of nearly all segments between the base and remote station is larger than 0.9, and the impedance tensor Zxy presents only one aggregation, which meets the characteristics of the natural magnetotelluric signal.
NASA Technical Reports Server (NTRS)
Menenti, M.; Azzali, S.; Verhoef, W.; Van Swol, R.
1993-01-01
Examples are presented of applications of a fast Fourier transform algorithm to analyze time series of images of Normalized Difference Vegetation Index values. The results obtained for a case study on Zambia indicated that differences in vegetation development among map units of an existing agroclimatic map were not significant, while reliable differences were observed among the map units obtained using the Fourier analysis.
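The harmonic summary at the heart of such analyses is the amplitude and phase of the annual Fourier term of each pixel's NDVI series. A minimal single-pixel numpy sketch follows; the 10-day compositing interval and the synthetic series are assumptions for illustration.

```python
import numpy as np

# Synthetic NDVI series: 3 years of 10-day composites (36 per year).
per_year = 36
t = np.arange(3 * per_year)
ndvi = (0.35 + 0.25 * np.sin(2 * np.pi * t / per_year - 1.2)
        + np.random.default_rng(0).normal(0, 0.02, t.size))

spec = np.fft.rfft(ndvi)
freq_index = t.size // per_year            # annual harmonic: 3 cycles in 3 years

mean_ndvi = spec[0].real / t.size          # DC term = series mean
amplitude = 2 * np.abs(spec[freq_index]) / t.size
phase = np.angle(spec[freq_index])         # timing of the green-up peak
print(f"mean {mean_ndvi:.3f}, annual amplitude {amplitude:.3f}, phase {phase:.2f} rad")
```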
NASA Astrophysics Data System (ADS)
Zhang, F.; Barriot, J. P.; Maamaatuaiahutapu, K.; Sichoix, L.; Xu, G., Sr.
2017-12-01
In order to better understand and predict the complex meteorological context of French Polynesia, we focus on the time evolution of Integrated Precipitable Water (PW) using radiosounding (RS) data from 1974 to 2017. In a first step, we compare, over selected months, the PW estimate reconstructed from the raw two-second acquisitions with the PW estimate reconstructed from the highly compressed and undersampled Integrated Global Radiosonde Archive (IGRA). In a second step, we compare with other techniques of PW acquisition (radio delays, sky temperature, infrared band absorption) in order to assess the intrinsic biases of RS acquisition. In a final step, we analyze the PW time series in our area, validated in the light of the first and second steps, with respect to seasonality (dry season and wet season) and spatial location. During the wet season (November to April), the PW values are higher than the corresponding values observed during the dry season (May to October). The PW values are smaller at higher latitudes, but PW values are higher in Tahiti than in the other islands because of the presence of the South Pacific Convergence Zone (SPCZ) around Tahiti. All the PW time series show the same upward trend in French Polynesia in recent years. This study provides further evidence that PW time series derived from RS can be assimilated into weather forecasting and climate warming models.
NASA Astrophysics Data System (ADS)
Pal, Mayukha; Madhusudana Rao, P.; Manimaran, P.
2014-12-01
We apply the recently developed multifractal detrended cross-correlation analysis method to investigate the cross-correlation behavior and fractal nature between two non-stationary time series. We analyze the daily return prices of gold, West Texas Intermediate and Brent crude oil, and foreign exchange rate data, over a period of 18 years. The cross-correlation has been measured quantitatively from the Hurst scaling exponents and the singularity spectrum. The results establish the existence of multifractal cross-correlation between all of these time series. We also find that the cross-correlation between gold and oil prices exhibits uncorrelated behavior, while the remaining bivariate time series exhibit persistent behavior. For five bivariate series, the cross-correlation exponents are less than the calculated average generalized Hurst exponents (GHE) for q < 0 and greater than the GHE for q > 0, and for one bivariate series the cross-correlation exponent is greater than the GHE for all q values.
Improving cluster-based missing value estimation of DNA microarray data.
Brás, Lígia P; Menezes, José C
2007-06-01
We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing value (MV) estimation in microarray data, based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute), as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNNimpute has a smaller detrimental effect on the detection of differentially expressed genes.
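The iterative-reuse idea can be sketched in plain numpy: start from row-mean estimates, then repeatedly re-impute each missing entry from its K nearest genes, computing distances on the current completed matrix so that previously estimated values participate in later rounds. K, the iteration count, and the convergence rule below are illustrative choices, not the authors' exact IKNNimpute.

```python
import numpy as np

def iknn_impute(X, k=10, n_iter=5):
    """Iterative KNN imputation for a genes x samples matrix with NaNs."""
    miss = np.isnan(X)
    filled = X.copy()
    row_means = np.nanmean(X, axis=1)
    filled[miss] = np.take(row_means, np.where(miss)[0])   # initial estimates
    for _ in range(n_iter):
        prev = filled.copy()
        for g in np.flatnonzero(miss.any(axis=1)):
            # Distances on the completed data, so earlier estimates are reused.
            d = np.sqrt(((prev - prev[g]) ** 2).mean(axis=1))
            d[g] = np.inf
            neighbours = np.argsort(d)[:k]
            w = 1.0 / (d[neighbours] + 1e-12)               # inverse-distance weights
            cols = np.flatnonzero(miss[g])
            filled[g, cols] = (w @ prev[np.ix_(neighbours, cols)]) / w.sum()
        if np.allclose(prev, filled, atol=1e-6):
            break
    return filled

rng = np.random.default_rng(0)
true = rng.normal(size=(200, 20)) + rng.normal(size=(200, 1))  # correlated rows
X = true.copy()
X[rng.random(X.shape) < 0.05] = np.nan                          # 5% missing
est = iknn_impute(X)
mask = np.isnan(X)
nrmse = np.sqrt(np.mean((est[mask] - true[mask]) ** 2)) / true.std()
print(f"NRMSE = {nrmse:.3f}")
```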
Multiresolution analysis of Bursa Malaysia KLCI time series
NASA Astrophysics Data System (ADS)
Ismail, Mohd Tahir; Dghais, Amel Abdoullah Ahmed
2017-05-01
In general, a time series is simply a sequence of numbers collected at regular intervals over a period of time. Financial time series data processing is concerned with the theory and practice of processing asset prices over time, such as currency, commodity and stock market data. The primary aim of this study is to understand the fundamental characteristics of selected financial time series by using both time and frequency domain analysis, after which predictions can be made for the desired system for in-sample forecasting. In this study, multiresolution analysis, with the assistance of the discrete wavelet transform (DWT) and the maximal overlap discrete wavelet transform (MODWT), will be used to pinpoint special characteristics of the Bursa Malaysia KLCI (Kuala Lumpur Composite Index) daily closing prices and return values. In addition, further case study discussions include the modeling of the Bursa Malaysia KLCI using linear ARIMA with wavelets to address how the multiresolution approach improves fitting and forecasting results.
NASA Technical Reports Server (NTRS)
Acker, James G.; Shen, Suhung; Leptoukh, Gregory G.; Lee, Zhongping
2012-01-01
Oceanographic time-series stations provide vital data for the monitoring of oceanic processes, particularly those associated with trends over time and interannual variability. There are likely numerous locations where the establishment of a time-series station would be desirable, but for reasons of funding or logistics, such establishment may not be feasible. An alternative to an operational time-series station is monitoring of sites via remote sensing. In this study, the NASA Giovanni data system is employed to simulate the establishment of two time-series stations near the outflow region of California's Eel River, which carries a high sediment load. Previous time-series analysis of this location (Acker et al. 2009) indicated that remotely-sensed chl a exhibits a statistically significant increasing trend during summer (low flow) months, but no apparent trend during winter (high flow) months. Examination of several newly-available ocean data parameters in Giovanni, including 8-day resolution data, demonstrates the differences in ocean parameter trends at the two locations compared to regionally-averaged time-series. The hypothesis that the increased summer chl a values are related to increasing SST is evaluated, and the signature of the Eel River plume is defined with ocean optical parameters.
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
26 CFR 1.454-1 - Obligations issued at discount.
Code of Federal Regulations, 2013 CFR
2013-04-01
... series E bond, at which time the stated redemption value was $674.60. A never elected under section 454(a... current income obligation, in which he retains his investment in a matured series E U.S. savings bond, or... for which a series E U.S. savings bond was exchanged (whether or not at final maturity) in an exchange...
26 CFR 1.454-1 - Obligations issued at discount.
Code of Federal Regulations, 2011 CFR
2011-04-01
... series E bond, at which time the stated redemption value was $674.60. A never elected under section 454(a... current income obligation, in which he retains his investment in a matured series E U.S. savings bond, or... for which a series E U.S. savings bond was exchanged (whether or not at final maturity) in an exchange...
26 CFR 1.454-1 - Obligations issued at discount.
Code of Federal Regulations, 2012 CFR
2012-04-01
... series E bond, at which time the stated redemption value was $674.60. A never elected under section 454(a... current income obligation, in which he retains his investment in a matured series E U.S. savings bond, or... for which a series E U.S. savings bond was exchanged (whether or not at final maturity) in an exchange...
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-04-01
In this paper, a number of classical and intelligent methods, including interquartile, autoregressive integrated moving average (ARIMA), artificial neural network (ANN) and support vector machine (SVM), have been proposed to quantify potential thermal anomalies around the time of the 11 August 2012 Varzeghan, Iran, earthquake (Mw = 6.4). The duration of the data set, which is comprised of Aqua-MODIS land surface temperature (LST) night-time snapshot images, is 62 days. In order to quantify variations of LST data obtained from satellite images, the air temperature (AT) data derived from the meteorological station close to the earthquake epicenter has been taken into account. For the models examined here, results indicate the following: (i) ARIMA models, which are the most widely used in the time series community for short-term forecasting, are quickly and easily implemented, and can efficiently act through linear solutions. (ii) A multilayer perceptron (MLP) feed-forward neural network can be a suitable non-parametric method to detect the anomalous changes of a non-linear time series such as variations of LST. (iii) Since SVMs are often used due to their many advantages for classification and regression tasks, it can be shown that, if the difference between the predicted value using the SVM method and the observed value exceeds the pre-defined threshold value, then the observed value could be regarded as an anomaly. (iv) ANN and SVM methods could be powerful tools in modeling complex phenomena such as earthquake precursor time series where we may not know what the underlying data generating process is. There is good agreement in the results obtained from the different methods for quantifying potential anomalies in a given LST time series. This paper indicates that the detection of the potential thermal anomalies derive credibility from the overall efficiencies and potentialities of the four integrated methods.
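As an illustration of the ARIMA branch of such a comparison, the sketch below fits a low-order model with statsmodels on the early part of a synthetic LST-like series, forecasts the remainder with a 95% interval, and flags observations falling outside the band. The model order, split point, and injected anomaly are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic nightly LST (degrees C): drift + correlated noise + late anomaly.
n = 62
lst = 20 + 0.05 * np.arange(n) + 0.3 * np.cumsum(rng.normal(0, 0.5, n))
lst[50] += 6.0                        # injected thermal anomaly

train, test = lst[:45], lst[45:]
res = ARIMA(train, order=(2, 1, 1)).fit()
fc = res.get_forecast(steps=test.size)
lo, hi = fc.conf_int(alpha=0.05).T    # 95% prediction interval

outside = (test < lo) | (test > hi)
print("anomalous days (index from series start):", np.flatnonzero(outside) + 45)
```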
Curve Number Application in Continuous Runoff Models: An Exercise in Futility?
NASA Astrophysics Data System (ADS)
Lamont, S. J.; Eli, R. N.
2006-12-01
The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.
Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.
Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva
2011-06-01
The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal) over a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit and bottom). The model with the highest average correlation value achieved correlations of 0.991, 0.843 and 0.978 for the training, verification and test series, respectively. This model had the three series independent in time: first the test series, then the verification series and, finally, the training series. Only six input variables were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH and water evaporation, physical and chemical parameters referring to the three depths of the reservoir. These variables are common to the next four best models produced and, although these included other input variables, their performance was not better than that of the selected best model.
Tissue classification using depth-dependent ultrasound time series analysis: in-vitro animal study
NASA Astrophysics Data System (ADS)
Imani, Farhad; Daoud, Mohammad; Moradi, Mehdi; Abolmaesumi, Purang; Mousavi, Parvin
2011-03-01
Time series analysis of ultrasound radio-frequency (RF) signals has been shown to be an effective tissue classification method. Previous studies of this method for tissue differentiation at high and clinical frequencies have been reported. In this paper, analysis of RF time series is extended to improve tissue classification at clinical frequencies by including novel features extracted from the time series spectrum. The primary feature examined is the Mean Central Frequency (MCF), computed for regions of interest (ROIs) in the tissue extending along the axial axis of the transducer. In addition, the intercept and slope of a line fitted to the MCF values of the RF time series as a function of depth have been included. To evaluate the accuracy of the new features, an in vitro animal study was performed using three tissue types: bovine muscle, bovine liver, and chicken breast, for which perfect two-way classification is achieved. The results show statistically significant improvements over the classification accuracies obtained with previously reported features.
Investigation of the 16-year and 18-year ZTD Time Series Derived from GPS Data Processing
NASA Astrophysics Data System (ADS)
Bałdysz, Zofia; Nykiel, Grzegorz; Figurski, Mariusz; Szafranek, Karolina; KroszczyńSki, Krzysztof
2015-08-01
The GPS system can play an important role in activities related to the monitoring of climate. Long time series, a coherent processing strategy, and the very high quality of the tropospheric parameter Zenith Tropospheric Delay (ZTD) estimated from GPS data analysis allow investigation of its usefulness for climate research as a direct GPS product. This paper presents results of an analysis of 16-year time series derived from EUREF Permanent Network (EPN) reprocessing performed by the Military University of Technology. For 58 stations, Lomb-Scargle periodograms were computed in order to obtain information about the oscillations in the ZTD time series. Seasonal components and a linear trend were estimated using Least Square Estimation (LSE), and the Mann-Kendall trend test was used to confirm the presence of the linear trend identified by the LSE method. In order to verify the impact of the length of the time series on the trend value, a comparison between the 16- and 18-year series was performed.
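As a hedged illustration of the periodogram step described above, the sketch below computes a Lomb-Scargle periodogram, which handles unevenly sampled records, for a synthetic ZTD-like series; the sampling pattern, series values and scanned period range are stand-in assumptions, not the study's data.

```python
# Lomb-Scargle periodogram of an unevenly sampled ZTD-like series.
import numpy as np
from scipy.signal import lombscargle

days = np.sort(np.random.choice(np.arange(16 * 365), 4000, replace=False)).astype(float)
ztd = 2.4 + 0.05 * np.sin(2 * np.pi * days / 365.25) + 0.01 * np.random.randn(days.size)

periods = np.linspace(30, 3650, 2000)        # scan periods from ~1 month to 10 years
ang_freqs = 2 * np.pi / periods              # lombscargle expects angular frequencies
power = lombscargle(days, ztd - ztd.mean(), ang_freqs)
print("dominant period ~ %.1f days" % periods[power.argmax()])  # ~365 here
```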
RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.
Stránský, V; Thinová, L
2017-11-01
In the year 2010 continual radon measurement was established at Mladeč Caves in the Czech Republic using the continual radon monitor RADIM3A. In order to model the radon time series over the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal autoregressive integrated moving average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the seasonality of the time series, previously acquired values and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes that occurred during the RC measurements. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor.
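A minimal sketch of such a model fit, assuming statsmodels' SARIMAX class: the (5,1,3) order follows the regARIMA(5,1,3) named above, while the radon series, the exogenous atmospheric covariates and the seasonal order are illustrative placeholders rather than the paper's configuration.

```python
# SARIMAX fit with exogenous regressors on a stand-in radon series.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
rc = np.cumsum(rng.normal(size=500)) + 50     # stand-in radon concentration series
exog = rng.normal(size=(500, 2))              # stand-in (lagged) pressure/temperature

model = SARIMAX(rc, exog=exog, order=(5, 1, 3),
                seasonal_order=(1, 0, 0, 24)) # 24-step seasonality, assumed
fit = model.fit(disp=False)
# Forecasting needs future exogenous values; here we simply reuse the last rows.
forecast = fit.forecast(steps=24, exog=exog[-24:])
print(forecast[:5])
```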
Andrew G. Bunn; Esther Jansma; Mikko Korpela; Robert D. Westfall; James Baldwin
2013-01-01
Mean sensitivity (ζ) continues to be used in dendrochronology despite a literature that shows it to be of questionable value in describing the properties of a time series. We simulate first-order autoregressive models with known parameters and show that ζ is a function of variance and autocorrelation of a time series. We then use 500 random tree-ring...
Economic value of big game habitat production from natural and prescribed fire
Armando Gonzalez-Caban; John B. Loomis; Dana Griffin; Elen Wu; Daniel McCollum; Jane McKeever; Diane Freeman
2003-01-01
A macro time-series model and a micro GIS model were used to estimate a production function relating deer harvest response to prescribed fire, holding constant other environmental variables. The macro time-series model showed a marginal increase in deer harvested of 33 for an increase of 1,100 acres of prescribed burn. The marginal deer increase for the micro GIS model...
ERIC Educational Resources Information Center
Sween, Joyce; Campbell, Donald T.
Computational formulae for the following three tests of significance, useful in the interrupted time series design, are given: (1) a "t" test (Mood, 1950) for the significance of the first post-change observation from a value predicted by a linear fit of the pre-change observations; (2) an "F" test (Walker and Lev, 1953) of the…
Fault Diagnosis from Raw Sensor Data Using Deep Neural Networks Considering Temporal Coherence.
Zhang, Ran; Peng, Zhen; Wu, Lifeng; Yao, Beibei; Guan, Yong
2017-03-09
Intelligent condition monitoring and fault diagnosis by analyzing sensor data can assure the safety of machinery. Conventional fault diagnosis and classification methods usually implement pretreatments to decrease noise and extract some time-domain or frequency-domain features from raw time series sensor data; then some classifiers are utilized to make a diagnosis. However, these conventional fault diagnosis approaches depend on expertise in feature selection and do not consider the temporal coherence of time series data. This paper proposes a fault diagnosis model based on Deep Neural Networks (DNN). The model can directly recognize raw time series sensor data without feature selection and signal processing. It also takes advantage of the temporal coherence of the data. Firstly, raw time series training data collected by sensors are used to train the DNN until its cost function reaches the minimal value; secondly, test data are used to test the classification accuracy of the DNN on local time series data. Finally, fault diagnosis considering temporal coherence with former time series data is implemented. Experimental results show that the classification accuracy for bearing faults can reach 100%. The proposed fault diagnosis approach is effective in recognizing the type of bearing faults.
Common mode error in Antarctic GPS coordinate time series and its effect on bedrock-uplift estimates
NASA Astrophysics Data System (ADS)
Liu, Bin; King, Matt; Dai, Wujiao
2018-05-01
Spatially correlated common mode error always exists in regional, or larger, GPS networks. We applied independent component analysis (ICA) to GPS vertical coordinate time series in Antarctica from 2010 to 2014 and made a comparison with principal component analysis (PCA). Using PCA/ICA, the time series can be decomposed into a set of temporal components and their spatial responses. We assume the components with common spatial responses are common mode error (CME). An average reduction of ~40% in the RMS values was achieved with both PCA and ICA filtering. However, the common mode components obtained from the two approaches have different spatial and temporal features. ICA time series present interesting correlations with modeled atmospheric and non-tidal ocean loading displacements. A white noise (WN) plus power law noise (PL) model was adopted in the GPS velocity estimation using maximum likelihood estimation (MLE) analysis, with a ~55% reduction of the velocity uncertainties after filtering using ICA. Meanwhile, spatiotemporal filtering reduces the amplitude of the PL and periodic terms in the GPS time series. Finally, we compare the GPS uplift velocities, after correction for elastic effects, with recent models of glacial isostatic adjustment (GIA). The agreement of the GPS-observed velocities with four GIA models is generally improved after the spatiotemporal filtering, with a mean reduction of ~0.9 mm/yr in the WRMS values, possibly allowing for more confident separation of various GIA model predictions.
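The PCA branch of this spatiotemporal filtering can be sketched in a few lines. This is a hedged illustration, not the authors' pipeline: the first principal component of a stations-by-epochs residual matrix is treated as the CME and subtracted; a real analysis would also verify how "common" each spatial response is before removing it.

```python
# PCA-style common mode error removal from a network of vertical residuals.
import numpy as np

def remove_cme(residuals, n_modes=1):
    """residuals: (n_epochs, n_stations) detrended vertical coordinate series."""
    demeaned = residuals - residuals.mean(axis=0)
    u, s, vt = np.linalg.svd(demeaned, full_matrices=False)
    cme = u[:, :n_modes] * s[:n_modes] @ vt[:n_modes]   # rank-n common signal
    return residuals - cme

epochs, stations = 1460, 12                 # ~4 years of daily solutions, assumed
rng = np.random.default_rng(1)
common = rng.normal(size=(epochs, 1))       # signal shared by every station
data = 3.0 * common + rng.normal(size=(epochs, stations))
filtered = remove_cme(data)
print(data.std(), filtered.std())           # RMS drops after filtering
```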
Long-term persistence of solar activity
NASA Technical Reports Server (NTRS)
Ruzmaikin, Alexander; Feynman, Joan; Robinson, Paul
1994-01-01
We examine the question of whether or not the non-periodic variations in solar activity are caused by a white-noise, random process. The Hurst exponent, which characterizes the persistence of a time series, is evaluated for the series of C-14 data for the time interval from about 6000 BC to 1950 AD. We find a constant Hurst exponent, suggesting that solar activity in the frequency range from 100 to 3000 years includes an important continuum component in addition to the well-known periodic variations. The value we calculate, H approximately 0.8, is significantly larger than the value of 0.5 that would correspond to variations produced by a white-noise process. This value is in good agreement with the results for the monthly sunspot data reported elsewhere, indicating that the physics that produces the continuum is a correlated random process and that it is the same type of process over a wide range of time interval lengths.
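For readers unfamiliar with the statistic, a textbook rescaled-range (R/S) estimate of the Hurst exponent is sketched below; H near 0.5 indicates white noise, while H near 0.8, as reported above, indicates a persistent correlated process. This is a generic estimator under assumed window sizes, not the paper's exact procedure.

```python
# Rescaled-range (R/S) estimate of the Hurst exponent H.
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    rs = []
    for w in window_sizes:
        ratios = []
        for i in range(0, len(x) - w + 1, w):
            chunk = x[i:i + w]
            dev = np.cumsum(chunk - chunk.mean())
            r, s = dev.max() - dev.min(), chunk.std()
            if s > 0:
                ratios.append(r / s)
        rs.append(np.mean(ratios))
    # H is the slope of log(R/S) against log(window size).
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

print(hurst_rs(np.random.randn(4096)))   # ~0.5 for an uncorrelated process
```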
Evaluation of scaling invariance embedded in short time series.
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average detrending procedure this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate the Shannon entropy from limited records.
NASA Astrophysics Data System (ADS)
Tanaka, Rie; Matsuda, Hiroaki; Sanada, Shigeru
2017-03-01
The density of lung tissue, as demonstrated on imagery, depends on the relative increases and decreases in the volume of air and lung vessels per unit volume of lung. Therefore, a time-series analysis of lung texture can be used to evaluate relative pulmonary function. This study was performed to assess time-series analysis of lung texture on dynamic chest radiographs during respiration, and to demonstrate its usefulness in the diagnosis of pulmonary impairments. Sequential chest radiographs of 30 patients were obtained using a dynamic flat-panel detector (FPD; 100 kV, 0.2 mAs/pulse, 15 frames/s, SID = 2.0 m; prototype, Konica Minolta). Imaging was performed during respiration, and 210 images were obtained over 14 seconds. Commercial bone suppression image-processing software (ClearRead Bone Suppression; Riverain Technologies, Miamisburg, Ohio, USA) was applied to the sequential chest radiographs to create corresponding bone suppression images. Average pixel values, standard deviation (SD), kurtosis, and skewness were calculated based on a density histogram analysis in lung regions. Regions of interest (ROIs) were manually located in the lungs, and the same ROIs were traced by a template matching technique during respiration. The average pixel value effectively differentiated regions with ventilatory defects from normal lung tissue. The average pixel values in normal areas changed dynamically in synchronization with the respiratory phase, whereas those in regions of ventilatory defects showed reduced variation. There were no significant differences between ventilatory defects and normal lung tissue in the other parameters. We confirmed that time-series analysis of lung texture is useful for the evaluation of pulmonary function in dynamic chest radiography during respiration. Pulmonary impairments were detected as reduced changes in pixel value. This technique is a simple, cost-effective diagnostic tool for the evaluation of regional pulmonary function.
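The per-ROI histogram features named above (average pixel value, SD, kurtosis, skewness) reduce to a few lines of array code. This is a hedged sketch on synthetic frames; ROI tracking by template matching is outside its scope, so the ROI is held fixed.

```python
# Density-histogram features per ROI per frame of a dynamic radiograph series.
import numpy as np
from scipy.stats import kurtosis, skew

def roi_features(frames, roi):
    """frames: (n_frames, H, W) array; roi: (row, col, height, width)."""
    r, c, h, w = roi
    feats = []
    for f in frames:
        patch = f[r:r + h, c:c + w].ravel()
        feats.append((patch.mean(), patch.std(), kurtosis(patch), skew(patch)))
    return np.array(feats)   # one (mean, SD, kurtosis, skewness) row per frame

frames = np.random.rand(210, 256, 256)   # 14 s at 15 frames/s, as above
print(roi_features(frames, (100, 100, 32, 32)).shape)
```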
Wild, Lauren A; Chenoweth, Ellen M; Mueter, Franz J; Straley, Janice M
2018-05-18
Stable isotope analysis integrates diet information over a time period specific to the type of tissue sampled. For the metabolically active skin of free-ranging cetaceans, cells are generated at the basal layer of the skin and migrate outward until they eventually slough off, suggesting the potential for a dietary time series. Skin samples from cetaceans were analyzed using continuous-flow elemental analyzer isotope ratio mass spectrometry (EA-IRMS). We used ANOVAs to compare the variability of δ13C and δ15N values within and among layers and columns ("cores") of the skin of a fin, humpback, and sperm whale. We then used mixed-effects models to analyze isotopic variability among layers of 28 sperm whale skin samples, over the course of a season and among years. We found layer to be a significant predictor of δ13C values in the sperm whale's skin, and of δ15N values in the humpback whale's skin. There was no evidence for significant differences in δ15N or δ13C values among cores for any species. Mixed-effects models selected layer and day of the year as significant predictors of δ13C and δ15N values in sperm whale skin across individuals sampled during the summer months in the Gulf of Alaska. These results suggest that skin samples from cetaceans may be subsampled to reflect diet during a narrower time period; specifically, different layers of skin may contain a dietary time series. This underscores the importance of selecting an appropriate portion of skin to analyze based on the species and objectives of the study.
The long-term changes in total ozone, as derived from Dobson measurements at Arosa (1948-2001)
NASA Astrophysics Data System (ADS)
Krzyscin, J. W.
2003-04-01
The longest possible total ozone time series (Arosa, Switzerland) is examined for the detection of trends. A two-step procedure is proposed to estimate the long-term (decadal) variations in the ozone time series. The first step consists of a standard least-squares multiple regression applied to the total ozone monthly means to parameterize "natural" (related to oscillations in atmospheric dynamics) variations in the analyzed time series. The standard proxies for the dynamical ozone variations are used, including the 11-year solar activity cycle and indices of QBO, ENSO and NAO. We use the detrended time series of temperature at 100 hPa and 500 hPa over Arosa to parameterize short-term variations (with time periods < 1 year) in total ozone related to local changes in the meteorological conditions over the station. The second step consists of a smooth-curve fit to the total ozone residuals (original minus modeled "natural" time series), time differentiation of this curve to obtain local trends, and bootstrapping of the residual time series to estimate the standard error of the local trends. Locally weighted regression and wavelet analysis methodology are used to extract the smooth component from the residual time series. The time integral over the local trend values provides the cumulative long-term change since the beginning of the data. Examining the pattern of the cumulative change, we see periods of total ozone loss (the end of the 1950s up to the early 1960s, probably the effect of the nuclear bomb tests), recovery (the mid-1960s up to the beginning of the 1970s), apparent decrease (the beginning of the 1970s lasting to the mid-1990s, probably the effect of contamination of the atmosphere by anthropogenic substances containing chlorine), and a kind of stabilization or recovery (starting in the mid-1990s, probably the effect of the Montreal Protocol to eliminate substances depleting the ozone layer). We can also estimate that a full ozone recovery (return to the undisturbed total ozone level of the beginning of the 1970s) is expected around 2050. We propose to calculate both the time series of local trends and the cumulative long-term change instead of a single trend value derived as the slope of a straight-line fit to the data.
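The second step admits a compact sketch: fit a smooth curve to the residuals by locally weighted regression, differentiate it in time to get local trends, and integrate the trends to recover the cumulative change. This is a hedged reconstruction on synthetic residuals; the smoothing fraction is an assumption, and the bootstrap for trend errors is omitted.

```python
# Local trends and cumulative change from LOESS-smoothed ozone residuals.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

t = np.arange(12 * 54) / 12.0                     # monthly means over ~54 years
resid = 10 * np.sin(t / 8.0) + 5 * np.random.randn(t.size)  # stand-in residuals

smooth = lowess(resid, t, frac=0.3, return_sorted=False)
local_trend = np.gradient(smooth, t)              # local trend (units per year)
cumulative = np.concatenate(([0.0], np.cumsum(local_trend[:-1] * np.diff(t))))
print(cumulative[-1], smooth[-1] - smooth[0])     # the two agree closely
```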
Models for forecasting hospital bed requirements in the acute sector.
Farmer, R D; Emami, J
1990-01-01
STUDY OBJECTIVE--The aim was to evaluate the current approach to forecasting hospital bed requirements. DESIGN--The study was a time series and regression analysis. The time series for mean duration of stay for general surgery in the age group 15-44 years (1969-1982) was used in the evaluation of different methods of forecasting future values of mean duration of stay and their subsequent use in the estimation of hospital bed requirements. RESULTS--It has been suggested that the simple trend fitting approach suffers from model specification error and imposes unjustified restrictions on the data. The time series approach (Box-Jenkins method) was shown to be a more appropriate way of modelling the data. CONCLUSION--The simple trend fitting approach is inferior to the time series approach in modelling hospital bed requirements. PMID:2277253
A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series
NASA Astrophysics Data System (ADS)
Ghanbarzadeh, Mitra; Aminghafari, Mina
2016-02-01
Singular spectrum analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters, which are difficult to determine and to whose values the results are very sensitive. Since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or correlated noise. We therefore introduce a novel method to handle these problems. It is based on the prediction of non-decimated wavelet (NDW) signals by SSA, followed by prediction of the residuals by wavelet regression. The advantages of our method are the automatic determination of the parameters and the accounting for the stochastic structure of the time series. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.
Observation of daily and annual variations in the 214Po half-life
NASA Astrophysics Data System (ADS)
Alexeev, E. N.; Gavrilyuk, Yu. M.; Gangapshev, A. M.; Gezhaev, A. M.; Kazalov, V. V.; Kuzminov, V. V.; Panasenko, S. I.; Ratkevich, S. S.
2017-11-01
Results of the analysis of a time series of values of the half-life (τ) of the 214Po nucleus with a different time step obtained from the TAU-1 (354 days) and TAU-2 (973 days) installations are presented. The annual variation with an amplitude of (9.8 ± 0.6) × 10⁻⁴ and daily variations in the solar, lunar, and sidereal times with amplitudes of (5.3 ± 0.3) × 10⁻⁴, (6.9 ± 2.0) × 10⁻⁴, and (7.2 ± 1.2) × 10⁻⁴, respectively, are found in the series of τ values. It is shown that variations in microclimatic parameters cannot be a cause of the τ variations.
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline, and a 5.25% average improvement when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance.
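The lower level of such a framework, a Gaussian process fitted to one irregularly sampled lab-value series, can be sketched briefly. This is a minimal illustration with an assumed RBF-plus-noise kernel, not the authors' model; the linear dynamical system layer on top is omitted.

```python
# Gaussian process interpolation of one irregularly sampled clinical series.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t_obs = np.sort(np.random.uniform(0, 30, 25))[:, None]     # irregular sample days
y_obs = 10 + np.sin(t_obs.ravel() / 3.0) + 0.1 * np.random.randn(25)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0)
                              + WhiteKernel(noise_level=0.01),
                              normalize_y=True)
gp.fit(t_obs, y_obs)
t_grid = np.linspace(0, 30, 200)[:, None]
mean, std = gp.predict(t_grid, return_std=True)   # smoothed series with uncertainty
```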
NASA Astrophysics Data System (ADS)
Miura, T.; Kato, A.; Wang, J.; Vargas, M.; Lindquist, M.
2015-12-01
Satellite vegetation index (VI) time series data serve as an important means to monitor and characterize seasonal changes of terrestrial vegetation and their interannual variability. It is, therefore, critical to ensure the quality of such VI products, and one method of validating VI product quality is cross-comparison with in situ flux tower measurements. In this study, we evaluated the quality of VI time series derived from the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (NPP) spacecraft by cross-comparison with in situ radiation flux measurements at select flux tower sites over North America and Europe. VIIRS is a new polar-orbiting satellite sensor series, slated to replace the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer in the afternoon overpass and to continue the highly calibrated data streams initiated with the Moderate Resolution Imaging Spectroradiometer (MODIS) of the National Aeronautics and Space Administration's Earth Observing System. The selected sites covered a wide range of biomes, including croplands, grasslands, evergreen needle forest, woody savanna, and open shrublands. The two VIIRS indices of the Top-of-Atmosphere (TOA) Normalized Difference Vegetation Index (NDVI) and the atmospherically corrected, Top-of-Canopy (TOC) Enhanced Vegetation Index (EVI) (daily, 375 m spatial resolution) were compared against the TOC NDVI and a two-band version of the EVI (EVI2) calculated from tower radiation flux measurements, respectively. VIIRS and tower VI time series showed comparable seasonal profiles across biomes with statistically significant correlations (> 0.60; p-value < 0.01). "Start-of-season (SOS)" phenological metric values extracted from VIIRS and tower VI time series were also highly compatible (R2 > 0.95), with mean differences of 2.3 days and 5.0 days for the NDVI and the EVI, respectively. These results indicate that VIIRS VI time series can capture the seasonal evolution of the vegetated land surface as well as in situ radiometric measurements do. Future studies that address biophysical or physiological interpretations of tower VI time series derived from radiation flux measurements are desirable.
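The two tower-side indices are simple band arithmetic, sketched below as a hedged illustration: NDVI from red and near-infrared reflectance, and a commonly used two-band EVI (EVI2) formulation; the band values are synthetic stand-ins for tower radiation-flux retrievals.

```python
# NDVI and two-band EVI (EVI2) from red and NIR reflectance.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def evi2(nir, red):
    # Two-band EVI with the widely used coefficients 2.5, 2.4 and 1.
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

red = np.array([0.05, 0.08, 0.12])
nir = np.array([0.45, 0.40, 0.30])
print(ndvi(nir, red), evi2(nir, red))
```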
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed data series, and for this reason this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
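The decomposition step has a direct counterpart in statsmodels' STL class, sketched here under assumptions: a synthetic daily pollen series and period=365 for an annual cycle. The residual component would then feed the PLSR stage described above.

```python
# STL decomposition of a synthetic daily pollen series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

days = pd.date_range("2006-01-01", "2013-12-31", freq="D")
t = np.arange(days.size)
pollen = (50 * np.sin(2 * np.pi * t / 365.25) ** 8        # sharp annual peak
          + np.random.gamma(2.0, 2.0, days.size))         # stochastic component

result = STL(pd.Series(pollen, index=days), period=365).fit()
seasonal, residual = result.seasonal, result.resid        # inputs to the PLSR stage
print(residual.std())
```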
Modelling short time series in metabolomics: a functional data analysis approach.
Montana, Giovanni; Berk, Maurice; Ebbels, Tim
2011-01-01
Metabolomics is the study of the complement of small molecule metabolites in cells, biofluids and tissues. Many metabolomic experiments are designed to compare changes observed over time under two or more experimental conditions (e.g. a control and drug-treated group), thus producing time course data. Models from traditional time series analysis are often unsuitable because, by design, only very few time points are available and there are a high number of missing values. We propose a functional data analysis approach for modelling short time series arising in metabolomic studies which overcomes these obstacles. Our model assumes that each observed time series is a smooth random curve, and we propose a statistical approach for inferring this curve from repeated measurements taken on the experimental units. A test statistic for detecting differences between temporal profiles associated with two experimental conditions is then presented. The methodology has been applied to NMR spectroscopy data collected in a pre-clinical toxicology study.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Molnar, Gyula; Iredell, Lena
2011-01-01
Outline: (1) Comparison of AIRS and CERES anomaly time series of outgoing longwave radiation (OLR) and OLR(sub CLR), i.e. Clear Sky OLR (2) Explanation of recent decreases in global and tropical mean values of OLR (3) AIRS "Short-term" Longwave Cloud Radiative Feedback -- A new product
Comparison of time-series registration methods in breast dynamic infrared imaging
NASA Astrophysics Data System (ADS)
Riyahi-Alam, S.; Agostini, V.; Molinari, F.; Knaflitz, M.
2015-03-01
Automated motion reduction in dynamic infrared imaging is in demand in clinical applications, since movement disarranges the time-temperature series of each pixel, thus originating thermal artifacts that might bias the clinical decision. All previously proposed registration methods are feature-based algorithms requiring manual intervention. The aim of this work is to optimize the registration strategy specifically for breast dynamic infrared imaging and to make it user-independent. We implemented and evaluated three different 3D time-series registration methods: (1) linear affine, (2) non-linear B-spline, and (3) Demons, applied to 12 datasets of healthy breast thermal images. The results are evaluated through normalized mutual information, with average values of 0.70 ± 0.03, 0.74 ± 0.03 and 0.81 ± 0.09 (out of 1) for affine, B-spline and Demons registration, respectively, as well as through breast boundary overlap and the Jacobian determinant of the deformation field. The statistical analysis of the results showed that the symmetric diffeomorphic Demons registration method outperforms the others, also achieving the best breast alignment and non-negative Jacobian values, which guarantee image similarity and anatomical consistency of the transformation, owing to homologous forces enforcing the pixel geometric disparities to be shortened on all the frames. We propose Demons registration as an effective technique for time-series dynamic infrared registration, to stabilize the local temperature oscillation.
Forecasting malaria cases using climatic factors in delhi, India: a time series analysis.
Kumar, Varun; Mangal, Abha; Panesar, Sanjeet; Yadav, Geeta; Talwar, Richa; Raut, Deepak; Singh, Saudan
2014-01-01
Background. Malaria still remains a public health problem in developing countries, and changing environmental and climatic factors pose the biggest challenge in fighting against the scourge of malaria. Therefore, the study was designed to forecast malaria cases using climatic factors as predictors in Delhi, India. Methods. The total number of monthly cases of malaria slide positives occurring from January 2006 to December 2013 was taken from the register maintained at the malaria clinic at the Rural Health Training Centre (RHTC), Najafgarh, Delhi. Climatic data of monthly mean rainfall, relative humidity, and mean maximum temperature were taken from the Regional Meteorological Centre, Delhi. The expert modeler of SPSS ver. 21 was used for analyzing the time series data. Results. A seasonal autoregressive integrated moving average model, ARIMA(0,1,1)(0,1,0)12, was the best-fit model and could explain 72.5% of the variability in the time series data. Rainfall (P value = 0.004) and relative humidity (P value = 0.001) were found to be significant predictors for malaria transmission in the study area. The seasonal adjusted factor (SAF) for malaria cases shows a peak during the months of August and September. Conclusion. ARIMA modeling of time series data is a simple and reliable tool for producing forecasts for malaria in Delhi, India.
Forbidden patterns in financial time series
NASA Astrophysics Data System (ADS)
Zanin, Massimiliano
2008-03-01
The existence of forbidden patterns, i.e., certain missing sequences in a given time series, is a recently proposed instrument of potential application in the study of time series. Forbidden patterns are related to the permutation entropy, which has the basic properties of classic chaos indicators, such as the Lyapunov exponent or the Kolmogorov entropy, thus allowing one to separate deterministic (usually chaotic) from random series; however, it requires fewer values of the series to be calculated, and it is suitable for use with small datasets. In this paper, the appearance of forbidden patterns is studied in different economic indicators such as stock indices (Dow Jones Industrial Average and Nasdaq Composite), NYSE stocks (IBM and Boeing), and others (ten-year Bond interest rate), to find evidence of deterministic behavior in their evolution. Moreover, the rate of appearance of the forbidden patterns is calculated, and some considerations about the underlying dynamics are suggested.
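Counting forbidden patterns is straightforward, as the hedged sketch below shows: each window of length n is mapped to its ordinal pattern, and patterns that never occur are counted. Deterministic maps leave many patterns forbidden, while i.i.d. noise of sufficient length realizes nearly all n! of them; the order n = 4 and the test series are illustrative choices.

```python
# Count forbidden ordinal patterns of order n in a series.
import numpy as np
from itertools import permutations

def forbidden_patterns(x, n=4):
    # Each window's ordinal pattern is the argsort of its values.
    seen = {tuple(np.argsort(x[i:i + n])) for i in range(len(x) - n + 1)}
    return sum(1 for p in permutations(range(n)) if p not in seen)

logistic = [0.4]
for _ in range(2000):                      # chaotic logistic map, r = 4
    logistic.append(4 * logistic[-1] * (1 - logistic[-1]))

print(forbidden_patterns(np.array(logistic)))     # > 0: deterministic structure
print(forbidden_patterns(np.random.rand(2001)))   # ~0 for white noise
```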
Dynamical glucometry: Use of multiscale entropy analysis in diabetes
NASA Astrophysics Data System (ADS)
Costa, Madalena D.; Henriques, Teresa; Munshi, Medha N.; Segal, Alissa R.; Goldberger, Ary L.
2014-09-01
Diabetes mellitus (DM) is one of the world's most prevalent medical conditions. Contemporary management focuses on lowering mean blood glucose values toward a normal range, but largely ignores the dynamics of glucose fluctuations. We probed analyte time series obtained from continuous glucose monitor (CGM) sensors. We show that the fluctuations in CGM values sampled every 5 min are not uncorrelated noise. Next, using multiscale entropy analysis, we quantified the complexity of the temporal structure of the CGM time series from a group of elderly subjects with type 2 DM and age-matched controls. We further probed the structure of these CGM time series using detrended fluctuation analysis. Our findings indicate that the dynamics of glucose fluctuations from control subjects are more complex than those of subjects with type 2 DM over time scales ranging from about 5 min to 5 h. These findings support consideration of a new framework, dynamical glucometry, to guide mechanistic research and to help assess and compare therapeutic interventions, which should enhance complexity of glucose fluctuations and not just lower mean and variance of blood glucose levels.
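The multiscale entropy machinery referred to above reduces to two pieces: coarse-graining the series at each scale, and computing sample entropy of the coarse-grained series. The sketch below is a hedged, simplified implementation; the parameter choices (m = 2, r = 0.15 SD of the analyzed series) are common defaults, not necessarily the authors' settings.

```python
# Multiscale entropy: coarse-graining plus sample entropy at each scale.
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    tol = r * x.std()
    def matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= tol) - 1   # exclude the self-match
        return count
    return -np.log(matches(m + 1) / matches(m))

def coarse_grain(x, tau):
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)  # non-overlapping means

cgm = np.random.randn(1000)   # stand-in for 5-min CGM samples
mse_curve = [sample_entropy(coarse_grain(cgm, tau)) for tau in range(1, 7)]
print(np.round(mse_curve, 2))  # entropy as a function of time scale
```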
Hopke, P K; Liu, C; Rubin, D B
2001-03-01
Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
Statistical test for ΔρDCCA cross-correlation coefficient
NASA Astrophysics Data System (ADS)
Guedes, E. F.; Brito, A. A.; Oliveira Filho, F. M.; Fernandez, B. F.; de Castro, A. P. N.; da Silva Filho, A. M.; Zebende, G. F.
2018-07-01
In this paper we propose a new statistical test for ΔρDCCA, the detrended cross-correlation coefficient difference, a tool to measure the contagion/interdependence effect between time series of size N at different time scales n. For this proposition we analyzed simulated and real time series. The results showed that the statistical significance of ΔρDCCA depends on the size N and the time scale n, and we can define a critical value for this dependency at the 90%, 95%, and 99% confidence levels, as will be shown in this paper.
Combining Fourier and lagged k-nearest neighbor imputation for biomedical time series data.
Rahman, Shah Atiqur; Huang, Yuxiao; Claassen, Jan; Heintzman, Nathaniel; Kleinberg, Samantha
2015-12-01
Most clinical and biomedical data contain missing values. A patient's record may be split across multiple institutions, devices may fail, and sensors may not be worn at all times. While these missing values are often ignored, this can lead to bias and error when the data are mined. Further, the data are not simply missing at random: the measurement of a variable such as blood glucose may depend on its prior values as well as those of other variables. These dependencies exist across time as well, but current methods have yet to incorporate these temporal relationships along with multiple types of missingness. To address this, we propose an imputation method (FLk-NN) that incorporates time-lagged correlations both within and across variables by combining two imputation methods, based on an extension to k-NN and on the Fourier transform. This enables imputation of missing values even when all data at a time point are missing and when there are different types of missingness both within and across variables. In comparison to other approaches on three biological datasets (simulated and actual Type 1 diabetes datasets, and multi-modality neurological ICU monitoring), the proposed method has the highest imputation accuracy. This was true for up to half the data being missing and when consecutive missing values are a significant fraction of the overall time series length.
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.
2010-10-01
In this study the frequency of days with extreme low (termed ELOs) and extreme high (termed EHOs) total ozone values and their influence on mean values and trends are analyzed for the world's longest total ozone record (Arosa, Switzerland). The results show (i) an increase in ELOs and (ii) a decrease in EHOs during the last decades and (iii) that the overall trend during the 1970s and 1980s in total ozone is strongly dominated by changes in these extreme events. After removing the extremes, the time series shows a strongly reduced trend (reduction by a factor of 2.5 for trend in annual mean). Excursions in the frequency of extreme events reveal "fingerprints" of dynamical factors such as ENSO or NAO, and chemical factors, such as cold Arctic vortex ozone losses, as well as major volcanic eruptions of the 20th century (Gunung Agung, El Chichón, Mt. Pinatubo). Furthermore, atmospheric loading of ozone depleting substances leads to a continuous modification of column ozone in the Northern Hemisphere also with respect to extreme values (partly again in connection with polar vortex contributions). Application of extreme value theory allows the identification of many more such "fingerprints" than conventional time series analysis of annual and seasonal mean values. The analysis shows in particular the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone. Overall the approach to extremal modelling provides new information on time series properties, variability, trends and the influence of dynamics and chemistry, complementing earlier analyses focusing only on monthly (or annual) mean values.
NASA Astrophysics Data System (ADS)
Rawat, Kishan Singh; Singh, Sudhir Kumar; Jacintha, T. German Amali; Nemčić-Jurec, Jasna; Tripathi, Vinod Kumar
2017-12-01
A review has been made to understand the hydrogeochemical behaviour of groundwater through statistical analysis of long-term water quality data (years 2005-2013). The Water Quality Index (WQI), descriptive statistics, Hurst exponent, fractal dimension and predictability index were estimated for each water parameter. WQI results showed that the majority of samples fall in the moderate category during 2005-2013, but monitoring site four falls under the severe category (water unfit for domestic use). Brownian time series behaviour (a true random walk nature) exists between calcium (Ca²⁺) and electric conductivity (EC); magnesium (Mg²⁺) with EC; sodium (Na⁺) with EC; sulphate (SO₄²⁻) with EC; and total dissolved solids (TDS) with chloride (Cl⁻) during the pre- (2005-2013) and post- (2006-2013) monsoon seasons. These parameters have values of the Hurst exponent (H) close to the Brownian time series condition (H = 0.5). The time series analysis of the water quality data shows that a persistent behaviour (a positive autocorrelation) exists between Cl⁻ and Mg²⁺, Cl⁻ and Ca²⁺, TDS and Na⁺, TDS and SO₄²⁻, and TDS and Ca²⁺ in the pre- and post-monsoon time series because of the higher value of H (> 1), whereas an anti-persistent behaviour (a negative autocorrelation) was found between Cl⁻ and EC, and TDS and EC, during pre- and post-monsoon due to the low value of H. The work outline shows that the groundwater of a few areas needs treatment before direct consumption, and it also needs to be protected from contamination.
NASA Astrophysics Data System (ADS)
Wu, Yue; Shang, Pengjian; Li, Yilong
2018-03-01
A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to research the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering its validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, like financial time series. We apply MSEBSS to financial markets, and results show that American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show a decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets having relatively high synchrony, the entropy values decrease with increasing scale factor, while for stock markets having high asynchrony, the entropy values do not decrease with increasing scale factor; sometimes they even tend to increase. So both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful if used together for studying other features of financial time series.
Röhling, Steffi; Dunger, Karsten; Kändler, Gerald; Klatt, Susann; Riedel, Thomas; Stümer, Wolfgang; Brötz, Johannes
2016-12-01
The German greenhouse gas inventory in the land use change sector strongly depends on national forest inventory data. As these data were collected periodically in 1987, 2002, 2008 and 2012, the time series of emissions shows several "jumps" due to biomass stock change, especially between 2001 and 2002 and between 2007 and 2008, while within the periods the emissions seem to be constant due to the application of periodic average emission factors. This does not reflect the inter-annual variability in the time series that would be expected, since the drivers of carbon stock changes fluctuate between the years. Therefore additional data, which are available on an annual basis, should be introduced into the calculation of the emissions inventories in order to obtain more plausible time series. This article explores the possibility of introducing an annual rather than periodic approach to calculating emission factors with the given data, and thus smoothing the trajectory of the time series for emissions from forest biomass. Two approaches are introduced to estimate annual changes derived from periodic data: the so-called logging factor method and the growth factor method. The logging factor method incorporates annual logging data to project annual values from periodic values. This is less complex to implement than the growth factor method, which additionally adds growth data into the calculations. Calculation of the input variables is based on sound statistical methodologies and periodically collected data that cannot be altered. Thus a discontinuous trajectory of the emissions over time remains, even after the adjustments. It is intended to adopt this approach in the German greenhouse gas reporting in order to meet the request for annually adjusted values.
FPGA implementation of predictive degradation model for engine oil lifetime
NASA Astrophysics Data System (ADS)
Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.
2018-03-01
This paper presents the implementation of a linear regression model for degradation prediction at the Register Transfer Level (RTL) using Quartus II. A stationary model was identified in the degradation trend of vehicle engine oil using a time series method. For the RTL implementation, the degradation model is written in Verilog HDL and the input data are sampled at fixed times. A clock divider was designed to support the timing sequence of the input data. For every five data points, a regression analysis is applied to determine the slope variation and compute the prediction. Only negative slope values are considered for prediction purposes, reducing the number of logic gates required. The least-squares method is adopted to obtain the best linear model based on the mean values of the time series data. The coded algorithm has been implemented on an FPGA for validation purposes. The result shows the predicted time to change the engine oil.
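A software sketch of the arithmetic the RTL implements may help: every five samples, a least-squares line is fitted and only negative slopes are kept as degradation indicators. The 5-sample window follows the text; the data and names are illustrative, and this Python version only mirrors, not reproduces, the Verilog design.

```python
# Windowed least-squares slope, keeping only negative (degrading) slopes.
import numpy as np

def window_slopes(values, window=5):
    t = np.arange(window)
    slopes = []
    for i in range(0, len(values) - window + 1, window):
        slope, _ = np.polyfit(t, values[i:i + window], 1)  # least-squares line
        slopes.append(slope if slope < 0 else None)        # negative slopes only
    return slopes

oil_quality = 100 - 0.8 * np.arange(25) + np.random.randn(25)  # stand-in sensor data
print(window_slopes(oil_quality))
```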
Aerosol Index Dynamics over Athens and Beijing
NASA Astrophysics Data System (ADS)
Christodoulakis, J.; Varotsos, C.; Tzanis, C.; Xue, Y.
2014-11-01
We present the analysis of monthly mean Aerosol Index (AI) values over Athens, Greece, and Beijing, China, for the period 1979-2012. The aim of the analysis is the identification of time scaling in the AI time series, using a data analysis technique that is not affected by the non-stationarity of the data. The appropriate technique satisfying this criterion is Detrended Fluctuation Analysis (DFA). For the deseasonalization of the time series, the classic Wiener method was applied, filtering out the seasonal (3-month), semiannual (6-month) and annual (12-month) periods. The data analysis for both Athens and Beijing revealed that the exponents α for both time periods are greater than 0.5, indicating that persistence of the correlations in the fluctuations of the deseasonalized AI values exists for time scales between about 4 months and 3.5 years (for the period 1979-1993) or 4 years (for the period 1996-2012).
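A hedged sketch of the DFA step: integrate the (deseasonalized) series, detrend it in windows of varying size, and read the scaling exponent α from the log-log slope of fluctuation against window size; α > 0.5 indicates persistence. The window sizes and test series are illustrative assumptions.

```python
# Detrended Fluctuation Analysis (DFA) with linear detrending per window.
import numpy as np

def dfa(x, scales=(8, 16, 32, 64, 128)):
    profile = np.cumsum(x - x.mean())        # integrated (profile) series
    fluctuations = []
    for s in scales:
        n = len(profile) // s
        f2 = []
        for seg in profile[:n * s].reshape(n, s):
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            f2.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha

print(dfa(np.random.randn(4096)))   # ~0.5 for uncorrelated noise
```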
Power estimation using simulations for air pollution time-series studies
2012-01-01
Background: Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods: Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results: In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions: These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
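The simulation approach can be illustrated with a hedged sketch: generate Poisson daily counts with a specified log-linear pollutant association, refit the model on each simulated dataset, and take power as the fraction of significant estimates. Here confounder control is reduced to a single smooth seasonal term, the effect size and mean count are assumptions, and 200 replicates stand in for the study's 2000.

```python
# Power by simulation for a Poisson time-series regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_days, beta_true = 365, 0.03
pollution = rng.normal(size=n_days)
season = np.sin(2 * np.pi * np.arange(n_days) / 365)
X = sm.add_constant(np.column_stack([pollution, season]))

n_sims, hits = 200, 0
for _ in range(n_sims):
    lam = np.exp(np.log(20) + beta_true * pollution)   # mean ~20 visits/day
    y = rng.poisson(lam)                               # simulated daily counts
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    hits += fit.pvalues[1] < 0.05                      # pollution coefficient
print("estimated power: %.2f" % (hits / n_sims))
```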
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
Topological properties of the limited penetrable horizontal visibility graph family
NASA Astrophysics Data System (ADS)
Wang, Minggang; Vilela, André L. M.; Du, Ruijin; Zhao, Longfeng; Dong, Gaogao; Tian, Lixin; Stanley, H. Eugene
2018-05-01
The limited penetrable horizontal visibility graph algorithm was recently introduced to map time series into complex networks. In this work, we extend this algorithm to create a directed limited penetrable horizontal visibility graph and an image limited penetrable horizontal visibility graph. We define the two algorithms and provide theoretical results on the topological properties of these graphs associated with different types of real-valued series. We perform several numerical simulations to check the accuracy of our theoretical results. Finally, we present an application of the directed limited penetrable horizontal visibility graph to measuring the irreversibility of real-valued time series and an application of the image limited penetrable horizontal visibility graph to discriminating noise from chaos. We also propose a method to measure systematic risk using the image limited penetrable horizontal visibility graph, and the empirical results show the effectiveness of our proposed algorithms.
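A brief sketch of the underlying construction may help: in the (undirected) limited penetrable horizontal visibility graph, two points are linked if at most ρ intermediate points block their horizontal line of sight, and ρ = 0 recovers the ordinary HVG. The code below is an illustrative O(n²) implementation, not the authors' code; all names are assumptions.

```python
import numpy as np

def lphvg_edges(x, rho=1):
    """Edges of the limited penetrable HVG; rho = 0 gives the plain HVG."""
    n, edges = len(x), []
    for i in range(n - 1):
        for j in range(i + 1, n):
            # intermediate points blocking the horizontal line of sight
            blockers = sum(x[k] >= min(x[i], x[j]) for k in range(i + 1, j))
            if blockers <= rho:          # up to rho blockers are "penetrated"
                edges.append((i, j))
    return edges

x = np.random.default_rng(1).normal(size=50)
print(len(lphvg_edges(x, rho=0)), len(lphvg_edges(x, rho=1)))
```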
Memory persistency and nonlinearity in daily mean dew point across India
NASA Astrophysics Data System (ADS)
Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik; Bhattacharjee, Anup Kumar
2016-04-01
This work estimates the memory persistence of the daily mean dew point time series obtained from seven weather stations, viz. Kolkata, Chennai (Madras), New Delhi, Mumbai (Bombay), Bhopal, Agartala and Ahmedabad, representing different geographical zones of India. Hurst exponent values reveal an anti-persistent behaviour of these dew point series. To corroborate the Hurst exponent values, five different scaling methods have been used and the corresponding results compared, so as to reach a finer and more reliable conclusion. The analysis also indicates that the variation in daily mean dew point is governed by a non-stationary process with stationary increments. The delay vector variance (DVV) method has been exploited to investigate nonlinearity, and the present calculation confirms the presence of a deterministic nonlinear profile in the daily mean dew point time series of the seven stations.
Sun Series program for the REEDA System. [predicting orbital lifetime using sunspot values
NASA Technical Reports Server (NTRS)
Shankle, R. W.
1980-01-01
Modifications made to data bases and to four programs in a series of computer programs (Sun Series), which run on the REEDA HP minicomputer system to aid NASA's solar activity predictions used in orbital lifetime predictions, are described. These programs utilize various mathematical smoothing techniques and perform statistical and graphical analyses of various solar activity data bases residing on the REEDA system.
Bendel, David; Beck, Ferdinand; Dittmer, Ulrich
2013-01-01
In the present study, climate change impacts on combined sewer overflows (CSOs) in Baden-Wuerttemberg, Southern Germany, were assessed based on continuous long-term rainfall-runoff simulations. Synthetic rainfall time series were used as input data. The applied precipitation generator NiedSim-Klima accounts for climate change effects on precipitation patterns. Time series for the past (1961-1990) and future (2041-2050) were generated for various locations. Comparing the simulated CSO activity of the two periods, we observe significantly higher overflow frequencies for the future. Changes in overflow volume and overflow duration depend on the type of overflow structure. Both values will increase at simple CSO structures that merely divide the flow, whereas they will decrease when the CSO structure is combined with a storage tank. However, there is wide variation between the results for different precipitation time series (representative of different locations).
Neutron monitors and muon detectors for solar modulation studies: 2. ϕ time series
NASA Astrophysics Data System (ADS)
Ghelfi, A.; Maurin, D.; Cheminet, A.; Derome, L.; Hubert, G.; Melot, F.
2017-08-01
The level of solar modulation at different times (related to solar activity) is a central question of solar and galactic cosmic-ray physics. In the first paper of this series, we established a correspondence between the uncertainties on ground-based detector count rates and the parameter ϕ (modulation level in the force-field approximation) reconstructed from these count rates. In this second paper, we detail a procedure to obtain a reference ϕ time series from neutron monitor data. We show that we can have an unbiased and accurate ϕ reconstruction (Δϕ/ϕ ≃ 10%). We also discuss the potential of Bonner sphere spectrometers and muon detectors to provide ϕ time series. Two by-products of this calculation are updated ϕ values for the cosmic-ray database and a web interface to retrieve and plot ϕ from the 1950s to today (http://lpsc.in2p3.fr/crdb).
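For context, the force-field approximation referred to here maps the local interstellar spectrum to the spectrum at Earth through the single parameter ϕ. A standard statement of the relation (not quoted from this paper; notation assumed, with E the kinetic energy, Z and A the charge and mass numbers, and m_p the proton mass) is:

```latex
% Standard force-field relation between the spectrum at Earth J and the
% local interstellar spectrum J_LIS:
J(E) = J_{\mathrm{LIS}}(E+\Phi)\,
       \frac{E\,\bigl(E + 2 m_p c^{2}\bigr)}
            {(E+\Phi)\,\bigl(E+\Phi + 2 m_p c^{2}\bigr)},
\qquad \Phi = \frac{Ze}{A}\,\phi .
```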
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding window technique, defines a new type of correlation measure and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that can characterize the time series is computed for each window and a cross correlation analysis is carried out on the set of values obtained for the time series under investigation. We apply this method to the study of some currency daily exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
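To make the procedure concrete, here is an illustrative sketch of the sliding-window scheme, assuming a simple window parameter (the standard deviation) in place of the Hurst exponent and intermittency parameter used in the paper; all names and numbers are placeholders.

```python
import numpy as np

def window_parameter(x, win, step, param=np.std):
    # compute the chosen characterizing parameter in each sliding window
    return np.array([param(x[i:i + win])
                     for i in range(0, len(x) - win + 1, step)])

rng = np.random.default_rng(2)
a, b = rng.normal(size=2000), rng.normal(size=2000)  # e.g. two exchange rates
pa = window_parameter(a, win=200, step=50)
pb = window_parameter(b, win=200, step=50)
print(np.corrcoef(pa, pb)[0, 1])   # cross-correlation of the parameter series
```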
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1989-01-01
This paper develops techniques to evaluate the discrete Fourier transform (DFT), the autocorrelation function (ACF), and the cross-correlation function (CCF) of time series which are not evenly sampled. The series may consist of quantized point data (e.g., yes/no processes such as photon arrivals). The DFT, which can be inverted to recover the original data and the sampling, is used to compute correlation functions by means of a procedure which is effectively, but not explicitly, an interpolation. The CCF can be computed even for two time series that are not sampled at the same set of times. Techniques for removing the distortion of the correlation functions caused by the sampling, determining the value of a constant component of the data, and treating unequally weighted data are also discussed. FORTRAN code for the Fourier transform algorithm and numerical examples of the techniques are given.
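A modern, widely available analogue of the unevenly sampled spectral estimation described here is the Lomb-Scargle periodogram; the sketch below uses SciPy's implementation on synthetic irregular data (this is illustrative and is not the paper's FORTRAN code).

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 100, 300))            # uneven sampling times
y = np.sin(2 * np.pi * 0.2 * t) + 0.5 * rng.normal(size=300)
omegas = np.linspace(0.01, 2.0, 500) * 2 * np.pi  # angular test frequencies
power = lombscargle(t, y - y.mean(), omegas)
print(omegas[np.argmax(power)] / (2 * np.pi))    # recovers ~0.2
```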
Fractal analysis on human dynamics of library loans
NASA Astrophysics Data System (ADS)
Fan, Chao; Guo, Jin-Li; Zha, Yi-Long
2012-12-01
In this paper, the fractal characteristic of human behaviors is investigated from the perspective of time series constructed with the amount of library loans. The values of the Hurst exponent and length of non-periodic cycle calculated through rescaled range analysis indicate that the time series of human behaviors and their sub-series are fractal with self-similarity and long-range dependence. Then the time series are converted into complex networks by the visibility algorithm. The topological properties of the networks such as scale-free property and small-world effect imply that there is a close relationship among the numbers of repetitious behaviors performed by people during certain periods of time. Our work implies that there is intrinsic regularity in the human collective repetitious behaviors. The conclusions may be helpful to develop some new approaches to investigate the fractal feature and mechanism of human dynamics, and provide some references for the management and forecast of human collective behaviors.
Time reversibility from visibility graphs of nonstationary processes
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Flanagan, Ryan
2015-08-01
Visibility algorithms are a family of methods to map time series into networks, with the aim of describing the structure of time series and their underlying dynamical properties in graph-theoretical terms. Here we explore some properties of both natural and horizontal visibility graphs associated with several nonstationary processes, and we pay particular attention to their capacity to assess time irreversibility. Nonstationary signals are (infinitely) irreversible by definition (independently of whether the process is Markovian or producing entropy at a positive rate), and thus the link between entropy production and time series irreversibility has only been explored in nonequilibrium stationary states. Here we show that the visibility formalism naturally induces a new working definition of time irreversibility, which allows us to quantify several degrees of irreversibility for stationary and nonstationary series, yielding finite values that can be used to efficiently assess the presence of memory and off-equilibrium dynamics in nonstationary processes without the need to differentiate or detrend them. We provide rigorous results complemented by extensive numerical simulations on several classes of stochastic processes.
NASA Astrophysics Data System (ADS)
Vyhnalek, Brian; Zurcher, Ulrich; O'Dwyer, Rebecca; Kaufman, Miron
2009-10-01
A wide range of heart rate irregularities have been reported in small studies of patients with temporal lobe epilepsy [TLE]. We hypothesize that patients with TLE display cardiac dysautonomia in either a subclinical or clinical manner. In a small study, we retrospectively identified (2003-8) two groups of patients from the epilepsy monitoring unit [EMU] at the Cleveland Clinic. No patients were diagnosed with cardiovascular morbidities. The control group consisted of patients with confirmed pseudoseizures, and the experimental group had confirmed right temporal lobe epilepsy through a seizure-free outcome after temporal lobectomy. We quantified the heart rate variability using the approximate entropy [ApEn]. We found similar values of the ApEn in all three states of consciousness (awake, sleep, and preceding seizure onset). In the TLE group, there is some evidence for greater variability in the awake state than in either sleep or the period preceding seizure onset. Here we present results for mathematically generated time series: the heart rate fluctuations ξ follow gamma statistics, i.e., p(ξ) = Γ(k)^{-1} ξ^{k-1} exp(-ξ). This probability function has well-known properties, and its Shannon entropy can be expressed in terms of the Γ-function. The parameter k allows us to generate a family of heart rate time series with different statistics. The ApEn calculated for the generated time series for different values of k mimics the properties found for the TLE and pseudoseizure groups. Our results suggest that the ApEn is an effective tool to probe differences in the statistics of heart rate fluctuations.
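For readers who want to reproduce the flavor of this experiment, below is a compact, standard ApEn implementation (self-matches included, tolerance r = 0.2 SD by convention) applied to gamma-distributed surrogates, with the shape parameter playing the role of k. The choices of m, r and series length are assumptions, not taken from the abstract.

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy with self-matches, Chebyshev distance."""
    x = np.asarray(x, float)
    r = 0.2 * np.std(x) if r is None else r
    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = [np.mean(np.max(np.abs(emb - v), axis=1) <= r) for v in emb]
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(4)
for k in (1, 4, 16):                  # family of gamma-statistics series
    print(k, round(apen(rng.gamma(k, size=500)), 3))
```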
Rivera, Diego; Lillo, Mario; Granda, Stalin
2014-12-01
The concept of time stability has been widely used in the design and assessment of soil moisture monitoring networks, as well as in hydrological studies, because it is a technique that allows the identification of particular locations that represent mean values of soil moisture in the field. In this work, we assess how time stability calculations change as new information is added, and how they behave over shorter periods, subsampled from the original time series, containing different amounts of precipitation. In doing so, we defined two experiments to explore the time stability behavior. The first experiment sequentially adds new data to the previous time series to investigate the long-term influence of new data on the results. The second experiment applies a windowing approach, taking sequential subsamples from the entire time series to investigate the influence of short-term changes associated with the precipitation in each window. Our results from an operating network (seven monitoring points equipped with four sensors each in a 2-ha blueberry field) show that as information is added to the time series, there are changes in the location of the most stable point (MSP), and that, taking moving 21-day windows, most of the variability of soil water content changes is associated with both the amount and the intensity of rainfall. The changes of the MSP over each window depend on the amount of water entering the soil and the previous state of the soil water content. For our case study, the upper strata are proxies for hourly to daily changes in soil water content, while the deeper strata are proxies for medium-range stored water. Thus, different locations and depths are representative of processes at different time scales. This situation must be taken into account when water management depends on soil water content values from fixed locations.
Stochastic modelling of non-stationary financial assets
NASA Astrophysics Data System (ADS)
Estevens, Joana; Rocha, Paulo; Boto, João P.; Lind, Pedro G.
2017-11-01
We model non-stationary volume-price distributions with a log-normal distribution and collect the time series of its two parameters. The time series of the two parameters are shown to be stationary and Markov-like, and consequently can be modelled with Langevin equations, which are derived directly from their series of values. Having the evolution equations of the log-normal parameters, we reconstruct the statistics of the first moments of the volume-price distributions, which fit the empirical data well. Finally, the proposed framework is general enough to study other non-stationary stochastic variables in other research fields, namely biology, medicine, and geology.
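The reconstruction step, estimating a Langevin equation directly from a stationary series, can be sketched via conditional increment moments (the first two Kramers-Moyal coefficients). The code below demonstrates the idea on a simulated Ornstein-Uhlenbeck process; it is a generic sketch under these assumptions, not the authors' implementation.

```python
import numpy as np

def kramers_moyal(x, dt, bins=20):
    """Binned estimates of drift D1(x) and diffusion D2(x)."""
    inc = np.diff(x)
    centers = np.linspace(x.min(), x.max(), bins)
    half = (centers[1] - centers[0]) / 2
    d1, d2 = np.full(bins, np.nan), np.full(bins, np.nan)
    for i, c in enumerate(centers):
        sel = np.abs(x[:-1] - c) < half          # samples near state value c
        if sel.any():
            d1[i] = inc[sel].mean() / dt         # drift coefficient
            d2[i] = (inc[sel] ** 2).mean() / (2 * dt)  # diffusion coefficient
    return centers, d1, d2

# Ornstein-Uhlenbeck test signal: dx = -x dt + dW, so D1 = -x, D2 = 1/2
rng, dt = np.random.default_rng(5), 0.01
x = np.zeros(50001)
for i in range(50000):
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(dt) * rng.normal()
centers, d1, d2 = kramers_moyal(x, dt)
print(np.round(d1[8:12], 2), np.round(d2[8:12], 2))  # middle, well-sampled bins
```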
Sample entropy applied to the analysis of synthetic time series and tachograms
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, A.; Gálvez-Coyt, G. G.; Solís-Montufar, E.
2017-01-01
Entropy is a method of non-linear analysis that allows an estimate of the irregularity of a system. However, there are different types of computational entropy, which were considered and tested here in order to obtain one that gives an index of signal complexity while taking into account the length of the analysed time series, the computational resources demanded by the method, and the accuracy of the calculation. An algorithm for the generation of fractal time series with a given spectral exponent β was used for the characterization of the different entropy algorithms. We obtained a significant variation with series size for most of the algorithms, which could be counterproductive for the study of real signals of different lengths. The chosen method was sample entropy, which shows great independence of series size. With this method, time series of heart interbeat intervals, or tachograms, of healthy subjects and patients with congestive heart failure were analysed. The calculation of sample entropy was carried out for 24-hour tachograms and for 6-hour subseries corresponding to sleep and wakefulness. The comparison between the two populations shows a significant difference that is accentuated when the patient is sleeping.
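A minimal sample entropy implementation follows; unlike ApEn it excludes self-matches, which underlies the length-independence noted above. The parameters m = 2 and r = 0.2 SD are conventional defaults, assumed here rather than quoted from the paper.

```python
import numpy as np

def sampen(x, m=2, r=None):
    """Sample entropy; self-matches excluded, Chebyshev distance."""
    x = np.asarray(x, float)
    r = 0.2 * np.std(x) if r is None else r
    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        total = 0
        for i in range(len(emb) - 1):
            d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            total += np.sum(d <= r)
        return total
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(6)
print(sampen(rng.normal(size=1000)))   # roughly 2.2 for white Gaussian noise
```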
Does preprocessing change nonlinear measures of heart rate variability?
Gomes, Murilo E D; Guimarães, Homero N; Ribeiro, Antônio L P; Aguirre, Luis A
2002-11-01
This work investigated whether methods used to produce a uniformly sampled heart rate variability (HRV) time series significantly change the deterministic signature underlying the dynamics of such signals and some nonlinear measures of HRV. Two methods of preprocessing were used: the convolution of inverse interval function values with a rectangular window, and cubic polynomial interpolation. The HRV time series were obtained from 33 Wistar rats submitted to autonomic blockade protocols and from 17 healthy adults. The analysis of determinism was carried out by the method of surrogate data sets and nonlinear autoregressive moving average modelling and prediction. The scaling exponents alpha, alpha(1) and alpha(2) derived from detrended fluctuation analysis were calculated from the raw HRV time series and the respective preprocessed signals. It was shown that cubic interpolation of HRV time series did not significantly change any nonlinear characteristic studied in this work, while the convolution method affected only the alpha(1) index. The results suggest that preprocessed time series may be used to study HRV in the field of nonlinear dynamics.
Initial Validation of NDVI time series from AVHRR, VEGETATION, and MODIS
NASA Technical Reports Server (NTRS)
Morisette, Jeffrey T.; Pinzon, Jorge E.; Brown, Molly E.; Tucker, Jim; Justice, Christopher O.
2004-01-01
The paper addresses Theme 7: multi-sensor opportunities for VEGETATION. We present analysis of a long-term vegetation record derived from three moderate-resolution sensors: AVHRR, VEGETATION, and MODIS. While empirically based manipulation can ensure agreement between the three data sets, there is a need to validate the series. This paper uses atmospherically corrected ETM+ data available over the EOS Land Validation Core Sites as an independent data set with which to compare the time series. We use ETM+ data from 15 globally distributed sites, 7 of which have repeat coverage in time. These high-resolution data are compared to the values of each sensor by spatially aggregating the ETM+ data to each specific sensor's spatial coverage. The aggregated ETM+ value provides a point estimate for a specific site on a specific date. The standard deviation of that point estimate is used to construct a confidence interval, and the values from each moderate-resolution sensor are then evaluated with respect to that confidence interval. Results show that AVHRR, VEGETATION, and MODIS data can be combined to assess temporal uncertainties and address data continuity issues, and that the atmospherically corrected ETM+ data provide an independent source with which to compare that record. The final product is a consistent time-series climate record that links historical observations to current and future measurements.
Graphic analysis and multifractal on percolation-based return interval series
NASA Astrophysics Data System (ADS)
Pei, A. Q.; Wang, J.
2015-05-01
A financial time series model is developed and investigated using the oriented percolation system (one of the statistical physics systems). The nonlinear and statistical behaviors of the return interval time series are studied for the proposed model and for the real stock market by applying the visibility graph (VG) and multifractal detrended fluctuation analysis (MF-DFA). We investigate the fluctuation behaviors of return intervals of the model for different parameter settings, and comparatively study these fluctuation patterns with those of the real financial data for different threshold values. The empirical research in this work exhibits multifractal features for the corresponding financial time series. Further, the VGs derived from both the simulated data and the real data show small-world, hierarchical and highly clustered behaviors, with power-law tails in their degree distributions.
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95% confidence limits, and the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture and industrial use.
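The fit-forecast-validate cycle described here can be sketched with statsmodels; the ARIMA order, the synthetic pH-like series and the 12-step horizon below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
y = pd.Series(7.5 + np.cumsum(rng.normal(0, 0.05, 120)))  # pH-like monthly series
fit = ARIMA(y, order=(1, 1, 1)).fit()
fc = fit.get_forecast(steps=12)
print(fc.predicted_mean.tail(3))          # point forecasts
print(fc.conf_int(alpha=0.05).tail(3))    # 95% confidence limits
```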
Development and analysis of a meteorological database, Argonne National Laboratory, Illinois
Over, Thomas M.; Price, Thomas H.; Ishii, Audrey L.
2010-01-01
A database of hourly values of air temperature, dewpoint temperature, wind speed, and solar radiation from January 1, 1948, to September 30, 2003, primarily using data collected at the Argonne National Laboratory station, was developed for use in continuous-time hydrologic modeling in northeastern Illinois. Missing and apparently erroneous data values were replaced with adjusted values from nearby stations used as 'backup'. Temporal variations in the statistical properties of the data resulting from changes in measurement and data-storage methodologies were adjusted to match the statistical properties resulting from the data-collection procedures that have been in place since January 1, 1989. The adjustments were computed based on the regressions between the primary data series from Argonne National Laboratory and the backup series using data obtained during common periods; the statistical properties of the regressions were used to assign estimated standard errors to values that were adjusted or filled from other series. Each hourly value was assigned a corresponding data-source flag that indicates the source of the value and its transformations. An analysis of the data-source flags indicates that all the series in the database except dewpoint have a similar fraction of Argonne National Laboratory data, with about 89 percent for the entire period, about 86 percent from 1949 through 1988, and about 98 percent from 1989 through 2003. The dewpoint series, for which observations at Argonne National Laboratory did not begin until 1958, has only about 71 percent Argonne National Laboratory data for the entire period, about 63 percent from 1948 through 1988, and about 93 percent from 1989 through 2003, indicating a lower reliability of the dewpoint sensor. A basic statistical analysis of the filled and adjusted data series in the database, and of a series of potential evapotranspiration computed from them using the computer program LXPET (Lamoreux Potential Evapotranspiration), also was carried out. This analysis indicates annual cycles in solar radiation and potential evapotranspiration that follow the annual cycle of extraterrestrial solar radiation, whereas temperature and dewpoint annual cycles are lagged by about 1 month relative to the solar cycle. The annual cycle of wind has a late-summer minimum, and spring and fall maxima. At the annual time scale, the filled and adjusted data series and computed potential evapotranspiration have significant serial correlation and possibly have significant temporal trends. The inter-annual fluctuations of temperature and dewpoint are weakest, whereas those of wind and potential evapotranspiration are strongest.
Le Strat, Yann
2017-01-01
The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performances. PMID:28715489
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing
NASA Astrophysics Data System (ADS)
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a Seasonal-Trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series, daily Poaceae pollen concentrations over the period 2006-2014, was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed data series, and for this reason this procedure has proved to be a suitable technique for analysing the main environmental factors influencing airborne pollen concentrations.
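The decomposition step can be reproduced with the STL implementation in statsmodels; the code below is a hedged sketch on a synthetic daily pollen-like series (the PLSR regression of the residual on temperature and rainfall, which the paper performs, is indicated only in comments).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(8)
days = pd.date_range("2006-01-01", periods=9 * 365, freq="D")
season = np.clip(np.sin(2 * np.pi * days.dayofyear / 365), 0, None) * 80
y = pd.Series(season + rng.gamma(2, 2, len(days)), index=days)
res = STL(y, period=365).fit()
# res.seasonal would be compared with flowering phenology; res.resid would
# be regressed on temperature and rainfall (the paper uses PLSR for that)
print(res.seasonal.iloc[:3].round(1), round(res.resid.std(), 2))
```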
Biogeochemical control of marine productivity in the Mediterranean Sea during the last 50 years
Macias, Diego; Garcia-Gorriz, Elisa; Piroddi, Chiara; Stips, Adolf
2014-01-01
The temporal dynamics of biogeochemical variables derived from a coupled 3-D model of the Mediterranean Sea are evaluated for the last 50 years (1960–2010) against independent data on fisheries catch per unit effort (CPUE) for the same time period. Concordant patterns are found in the time series of all of the biological variables (from the model and from fisheries statistics), with low values at the beginning of the series, a later increase, with maximum levels reached at the end of the 1990s, and a posterior stabilization. Spectral analysis of the annual biological time series reveals coincident low-frequency signals in all of them. The first, more energetic signal peaks around the year 2000, while the second, less energetic signal peaks near 1982. Almost identical low-frequency signals are found in the nutrient loads of the rivers and in the integrated nutrient levels in the surface marine ecosystem. Nitrate concentration shows a maximum level in 1998, with a later stabilization to present-day values, coincident with the first low-frequency signal found in the biological series. Phosphate shows maximum concentrations around 1982 and a posterior sharp decline, in concordance with the second low-frequency signal observed in the biological series. That result seems to indicate that the control of marine productivity (plankton to fish) in the Mediterranean is principally mediated through bottom-up processes that could be traced back to the characteristics of riverine discharges. The high sensitivity of CPUE time series to environmental conditions might be another indicator of the overexploitation of this marine ecosystem. Key points: biogeochemical evolution of the Mediterranean over the past 50 years; river nutrient loads drive primary and secondary production; strong link between low trophic levels and fisheries. PMID:26180286
Memory effects in stock price dynamics: evidences of technical trading
Garzarelli, Federico; Cristelli, Matthieu; Pompa, Gabriele; Zaccaria, Andrea; Pietronero, Luciano
2014-01-01
Technical trading represents a class of investment strategies for financial markets based on the analysis of trends and recurrent patterns in price time series. According to standard economic theories, these strategies should not be used because they cannot be profitable. On the contrary, it is well known that technical traders exist and operate on different time scales. In this paper we investigate whether technical trading produces detectable signals in price time series and whether some kind of memory effect is introduced into the price dynamics. In particular, we focus on a specific pattern called supports and resistances. We first develop a criterion to detect the potential values of supports and resistances. Then we show that memory effects in the price dynamics are associated with these selected values. In fact, we show that prices are more likely to bounce off than to cross these values. Such an effect is quantitative evidence of the so-called self-fulfilling prophecy, that is, the self-reinforcement of agents' beliefs and sentiment about future stock price behavior. PMID:24671011
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
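A minimal version of the Gaussian-kernel estimator at lag zero might look as follows; the bandwidth, the synthetic series and all names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def gaussian_kernel_corr(tx, x, ty, y, h):
    """Lag-zero Pearson correlation with Gaussian weights in the time gap."""
    xc = (x - x.mean()) / x.std()
    yc = (y - y.mean()) / y.std()
    dt = tx[:, None] - ty[None, :]        # all pairwise observation-time gaps
    w = np.exp(-0.5 * (dt / h) ** 2)      # Gaussian kernel weights
    return np.sum(w * np.outer(xc, yc)) / np.sum(w)

rng = np.random.default_rng(9)
tx = np.sort(rng.uniform(0, 100, 200))    # two irregular sampling grids
ty = np.sort(rng.uniform(0, 100, 240))
x = np.sin(tx / 5) + 0.3 * rng.normal(size=200)
y = np.sin(ty / 5) + 0.3 * rng.normal(size=240)
print(gaussian_kernel_corr(tx, x, ty, y, h=2.0))
```

For a lagged correlation function, the weights would instead depend on the difference between the observation-time separation and the estimation lag, as the abstract describes.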
Multivariate stochastic analysis for Monthly hydrological time series at Cuyahoga River Basin
NASA Astrophysics Data System (ADS)
zhang, L.
2011-12-01
Copulas have become a very powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas, e.g. the Gumbel-Hougaard copula, Cook-Johnson copula and Frank copula, and the meta-elliptical copulas, e.g. the Gaussian copula and Student-t copula, have been applied in multivariate hydrological analyses, e.g. multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copulas have also been applied in flood frequency analysis at the confluences of river systems by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been treated as stationary signals whose values have been assumed to be independent identically distributed (i.i.d.) random variables. But in reality, hydrological time series, especially daily and monthly hydrological time series, cannot be considered as i.i.d. random variables due to the periodicity existing in the data structure. The stationarity assumption is also under question due to climate change and land use and land cover (LULC) change in past years. To this end, it is necessary to re-evaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption through a nonstationary approach. As to the study of the dependence structure of hydrological time series, the assumption of the same type of univariate distribution also needs to be relaxed by adopting copula theory. In this paper, the univariate monthly hydrological time series are studied through a nonstationary time series analysis approach, and the dependence structure of the multivariate monthly hydrological time series is studied through copula theory. As to parameter estimation, maximum likelihood estimation (MLE) is applied. To illustrate the method, the univariate time series model and the dependence structure are determined and tested using the monthly discharge time series of the Cuyahoga River Basin.
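As a small illustration of the copula step, the Gumbel-Hougaard parameter can be estimated from Kendall's tau via the known relation τ = 1 − 1/θ; the sketch below applies this to synthetic pseudo-observations and is not the paper's implementation (the paper uses full MLE).

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(13)
L = np.linalg.cholesky(np.array([[1.0, 0.6], [0.6, 1.0]]))
z = rng.normal(size=(600, 2)) @ L.T          # two dependent monthly series
u = np.argsort(np.argsort(z[:, 0])) / (len(z) + 1)  # pseudo-observations
v = np.argsort(np.argsort(z[:, 1])) / (len(z) + 1)
tau, _ = kendalltau(u, v)
theta = 1.0 / (1.0 - tau)                    # Gumbel-Hougaard parameter
print(round(tau, 3), round(theta, 3))
```

Using ranks (pseudo-observations) rather than raw values is what relaxes the identical-margins assumption mentioned in the abstract.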
Evolution of the Sunspot Number and Solar Wind B Time Series
NASA Astrophysics Data System (ADS)
Cliver, Edward W.; Herbst, Konstantin
2018-03-01
The past two decades have witnessed significant changes in our knowledge of long-term solar and solar wind activity. The sunspot number time series (1700-present) developed by Rudolf Wolf during the second half of the 19th century was revised and extended by the group sunspot number series (1610-1995) of Hoyt and Schatten during the 1990s. The group sunspot number is significantly lower than the Wolf series before ˜1885. An effort from 2011-2015 to understand and remove differences between these two series via a series of workshops had the unintended consequence of prompting several alternative constructions of the sunspot number. Thus it has been necessary to expand and extend the sunspot number reconciliation process. On the solar wind side, after a decade of controversy, an ISSI International Team used geomagnetic and sunspot data to obtain a high-confidence time series of the solar wind magnetic field strength (B) from 1750-present that can be compared with two independent long-term (> ˜600 year) series of annual B-values based on cosmogenic nuclides. In this paper, we trace the twists and turns leading to our current understanding of long-term solar and solar wind activity.
Scaling behaviors of precipitation over China
NASA Astrophysics Data System (ADS)
Jiang, Lei; Li, Nana; Zhao, Xia
2017-04-01
Scaling behaviors of precipitation time series from 1951 to 2009 over China are investigated using the detrended fluctuation analysis (DFA) method. The results show that there exists long-term memory in the precipitation time series at some stations, where the values of the scaling exponent α are less than 0.62, implying weak persistence characteristics. The values of the scaling exponent at other stations indicate random behavior. In addition, the scaling exponent α in precipitation records varies from station to station over China. A numerical test is made to verify the significance of the DFA exponents by shuffling the data records many times: an exponent is considered significant when the value for the original precipitation record is larger than the 95% confidence threshold obtained from the shuffled records. By comparison, the daily precipitation records exhibit weakly positive long-range correlation in a power-law fashion mainly at stations with a zonal distribution in south China, the upper and middle reaches of the Yellow River, and the northern part of northeast China. This may be related to the subtropical high. Furthermore, the values of the scaling exponent which do not pass the significance test do not show a clear distribution pattern; the corresponding stations are mainly distributed in coastal areas, southwest China, and the southern part of north China. In fact, many complicated factors may affect the scaling behaviors of precipitation, such as the east and south Asian monsoon systems, the interaction between sea and land, and the large landform of the Tibetan Plateau. These results may provide a better prerequisite for long-term prediction of precipitation time series in different regions over China.
Measuring information interactions on the ordinal pattern of stock time series
NASA Astrophysics Data System (ADS)
Zhao, Xiaojun; Shang, Pengjian; Wang, Jing
2013-02-01
The interactions among time series as individual components of complex systems can be quantified by measuring to what extent they exchange information among each other. In many applications, one focuses not on the original series but on its ordinal pattern. In such cases, trivial noises appear more likely to be filtered and the abrupt influence of extreme values can be weakened. Cross-sample entropy and inner composition alignment have been introduced as prominent methods to estimate the information interactions of complex systems. In this paper, we modify both methods to detect the interactions among the ordinal pattern of stock return and volatility series, and we try to uncover the information exchanges across sectors in Chinese stock markets.
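The ordinal-pattern step they build on can be sketched in a few lines: each length-m window is replaced by the rank order of its values before any entropy-style comparison. The code below is illustrative only; m = 3 and the heavy-tailed surrogate returns are assumptions.

```python
import numpy as np

def ordinal_patterns(x, m=3):
    """Replace each length-m window by the rank order of its values."""
    return np.array([np.argsort(x[i:i + m]) for i in range(len(x) - m + 1)])

rng = np.random.default_rng(10)
returns = rng.standard_t(df=3, size=20)   # heavy-tailed, like stock returns
print(ordinal_patterns(returns, m=3)[:5])
# entropy-style comparisons (e.g. cross-sample entropy) are then run on
# these symbol sequences instead of on the raw values
```

Because only rank orders survive, small noise and extreme values cannot flip a pattern unless they change the ordering, which is the filtering effect the abstract describes.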
On statistical inference in time series analysis of the evolution of road safety.
Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora
2013-11-01
Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include the annual (or monthly) number of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations on the applicability of standard methods of statistical inference, which leads to an under- or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating the accident occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future. Traditional regression models (whether linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches, are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research.
Sea change: Charting the course for biogeochemical ocean time-series research in a new millennium
NASA Astrophysics Data System (ADS)
Church, Matthew J.; Lomas, Michael W.; Muller-Karger, Frank
2013-09-01
Ocean time-series provide vital information needed for assessing ecosystem change. This paper summarizes the historical context, major program objectives, and future research priorities for three contemporary ocean time-series programs: the Hawaii Ocean Time-series (HOT), the Bermuda Atlantic Time-series Study (BATS), and the CARIACO Ocean Time-Series. These three programs operate in physically and biogeochemically distinct regions of the world's oceans, with HOT and BATS located in the open-ocean waters of the subtropical North Pacific and North Atlantic, respectively, and CARIACO situated in the anoxic Cariaco Basin of the tropical Atlantic. All three programs sustain near-monthly shipboard occupations of their field sampling sites, with HOT and BATS beginning in 1988, and CARIACO initiated in 1996. The resulting data provide some of the only multi-disciplinary, decadal-scale determinations of time-varying ecosystem change in the global ocean. Facilitated by a scoping workshop (September 2010) sponsored by the Ocean Carbon Biogeochemistry (OCB) program, leaders of these time-series programs sought community input on existing program strengths and on future research directions. Themes that emerged from these discussions included: 1. Shipboard time-series programs are key to informing our understanding of the connectivity between changes in ocean-climate and biogeochemistry. 2. The scientific and logistical support provided by shipboard time-series programs forms the backbone for numerous research and education programs. Future studies should be encouraged that seek mechanistic understanding of ecological interactions underlying the biogeochemical dynamics at these sites. 3. Detecting time-varying trends in ocean properties and processes requires consistent, high-quality measurements. Time-series must carefully document analytical procedures and, where possible, trace the accuracy of analyses to certified standards and internal reference materials. 4. Leveraged implementation, testing, and validation of autonomous and remote observing technologies at time-series sites provide new insights into spatiotemporal variability underlying ecosystem changes. 5. The value of existing time-series data for formulating and validating ecosystem models should be promoted. In summary, the scientific underpinnings of ocean time-series programs remain as strong and important today as when these programs were initiated. The emerging data inform our knowledge of the ocean's biogeochemistry and ecology, and improve our predictive capacity about planetary change.
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-12-18
This paper presents four algorithms to generate random forecast error time series, including a truncated-normal distribution model, a state-space based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, for use in variable generation integration studies. A comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. This paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
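A hedged sketch of one such generator, a plain (non-seasonal) ARMA(1,1) error model rescaled to a target error spread, is given below; the coefficients and the megawatt scale are invented for illustration and are not the paper's parameters.

```python
import numpy as np

def arma_errors(n, phi=0.8, theta=0.2, target_std=1.0, seed=0):
    """ARMA(1,1) forecast-error series rescaled to a target spread."""
    rng = np.random.default_rng(seed)
    e = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t] + theta * e[t - 1]
    return x * (target_std / x.std())    # match the observed error spread

hourly = arma_errors(24 * 365, target_std=120.0)  # e.g. DA load errors in MW
print(round(hourly.std(), 1))
```

Adding such an error series to actual load or wind values yields synthetic forecasts whose error statistics match the historical record, which is the use case the abstract describes.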
Chang, Li-Chiu; Chen, Pin-An; Chang, Fi-John
2012-08-01
A reliable forecast of future events possesses great value. The main purpose of this paper is to propose an innovative learning technique for reinforcing the accuracy of two-step-ahead (2SA) forecasts. The real-time recurrent learning (RTRL) algorithm for recurrent neural networks (RNNs) can effectively model the dynamics of complex processes and has been used successfully in one-step-ahead forecasts for various time series. A reinforced RTRL algorithm for 2SA forecasts using RNNs is proposed in this paper, and its performance is investigated on two well-known benchmark time series and on streamflow during flood events in Taiwan. Results demonstrate that the proposed reinforced 2SA RTRL algorithm for RNNs can adequately forecast the benchmark (theoretical) time series, significantly improve the accuracy of flood forecasts, and effectively reduce time-lag effects.
Occurrence of CPPopt Values in Uncorrelated ICP and ABP Time Series.
Cabeleira, M; Czosnyka, M; Liu, X; Donnelly, J; Smielewski, P
2018-01-01
Optimal cerebral perfusion pressure (CPPopt) is a concept that uses the pressure reactivity (PRx)-CPP relationship over a given period to find a value of CPP at which PRx shows best autoregulation. It has been proposed that this relationship be modelled by a U-shaped curve, where the minimum is interpreted as being the CPP value that corresponds to the strongest autoregulation. Owing to the nature of the calculation and the signals involved in it, the occurrence of CPPopt curves generated by non-physiological variations of intracranial pressure (ICP) and arterial blood pressure (ABP), termed here "false positives", is possible. Such random occurrences would artificially increase the yield of CPPopt values and decrease the reliability of the methodology.In this work, we studied the probability of the random occurrence of false-positives and we compared the effect of the parameters used for CPPopt calculation on this probability. To simulate the occurrence of false-positives, uncorrelated ICP and ABP time series were generated by destroying the relationship between the waves in real recordings. The CPPopt algorithm was then applied to these new series and the number of false-positives was counted for different values of the algorithm's parameters. The percentage of CPPopt curves generated from uncorrelated data was demonstrated to be 11.5%. This value can be minimised by tuning some of the calculation parameters, such as increasing the calculation window and increasing the minimum PRx span accepted on the curve.
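The U-shaped-curve step can be sketched as binning PRx by CPP over a window and taking the vertex of a fitted parabola; the code below is an illustrative simplification (real CPPopt pipelines add weighting and multi-window averaging), and all numbers are assumed.

```python
import numpy as np

def cppopt(cpp, prx, nbins=10):
    """Bin PRx by CPP, fit a parabola, return its vertex if convex."""
    edges = np.linspace(cpp.min(), cpp.max(), nbins + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    means = np.array([prx[(cpp >= lo) & (cpp < hi)].mean()
                      for lo, hi in zip(edges[:-1], edges[1:])])
    a, b, c = np.polyfit(centers, means, 2)
    return -b / (2 * a) if a > 0 else np.nan   # reject non-U-shaped fits

rng = np.random.default_rng(11)
cpp = rng.uniform(55, 95, 720)                 # simulated CPP samples, mmHg
prx = 0.002 * (cpp - 75) ** 2 - 0.3 + 0.1 * rng.normal(size=720)
print(cppopt(cpp, prx))                        # close to 75 mmHg
```

Running the same fit on shuffled (uncorrelated) ICP and ABP, as the paper does, shows how often noise alone produces a plausible-looking vertex.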
76 FR 70165 - Proposed Collection, Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-10
... vacancies, labor hires, and labor separations. As the monthly JOLTS time series grow longer, their value in... ensure that requested data can be provided in the desired format, reporting burden (time and financial... businesses and organizations. Total Total Average time Estimated Affected public respondents Frequency...
NASA Astrophysics Data System (ADS)
Miksovsky, J.; Raidl, A.
Time-delay phase space reconstruction is one of the useful tools of nonlinear time series analysis, enabling a number of applications. Its utilization requires the value of the time delay to be known, as well as the value of the embedding dimension. There are several methods to estimate both of these parameters. Typically, the time delay is computed first, followed by the embedding dimension. Our presented approach is slightly different: we reconstructed the phase space for various combinations of the mentioned parameters and used it for prediction by means of the nearest neighbours in the phase space. Then some measure of the prediction's success was computed (e.g., correlation or RMSE). The position of its global maximum (minimum) should indicate the suitable combination of time delay and embedding dimension. Several meteorological (particularly climatological) time series were used for the computations. We have also created an MS-Windows based program implementing this approach; its basic features will be presented as well.
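A compact sketch of this joint selection idea follows: embed with delay tau and dimension m, forecast with the nearest phase-space neighbour, and keep the (tau, m) pair with the best score. Everything here (test signal, candidate grids, correlation score) is an assumption for illustration, not the authors' program.

```python
import numpy as np

def nn_forecast_skill(x, tau, m, horizon=1):
    # rows of emb are delay vectors (x[i], x[i+tau], ..., x[i+(m-1)tau])
    n = len(x) - (m - 1) * tau - horizon
    emb = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    half = n // 2                     # first half = library, second = test
    preds, truth = [], []
    for i in range(half, n):
        j = np.argmin(np.linalg.norm(emb[:half] - emb[i], axis=1))
        preds.append(x[j + (m - 1) * tau + horizon])  # neighbour's future
        truth.append(x[i + (m - 1) * tau + horizon])
    return np.corrcoef(preds, truth)[0, 1]

t = np.linspace(0, 100, 2000)
x = np.sin(t) + 0.1 * np.random.default_rng(12).normal(size=2000)
best = max(((tau, m) for tau in (5, 10, 20) for m in (2, 3, 4)),
           key=lambda p: nn_forecast_skill(x, *p))
print(best)                           # (tau, m) with the best skill
```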
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective: Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods: Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results: We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion: A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
NASA Astrophysics Data System (ADS)
Baldysz, Zofia; Nykiel, Grzegorz; Araszkiewicz, Andrzej; Figurski, Mariusz; Szafranek, Karolina
2016-09-01
The main purpose of this research was to assess the consistency of ZTD (zenith total delay) linear trends and seasonal components between two consecutive GPS reprocessing campaigns. The analysis concerned two sets of ZTD time series estimated during EUREF (Reference Frame Sub-Commission for Europe) EPN (Permanent Network) reprocessing campaigns according to the 2008 and 2015 MUT AC (Military University of Technology Analysis Centre) scenarios. Firstly, Lomb-Scargle periodograms were generated for 57 EPN stations to characterise the oscillations occurring in the ZTD time series. Then, the values of the seasonal components and linear trends were estimated using the least squares estimation (LSE) approach. The Mann-Kendall trend test was also carried out to verify the presence of long-term linear ZTD changes. Finally, differences in seasonal signals and linear trends between the two data sets were investigated. All these analyses were conducted for ZTD time series of two lengths: a shortened 16-year series and a full 18-year one. In the case of the spectral analysis, the amplitudes of the annual and semi-annual periods were almost exactly the same for both reprocessing campaigns. Exceptions were found for only a few stations, and they did not exceed 1 mm. The estimated trends were also similar; however, for the reprocessing performed in 2008, the trend values were usually higher. In general, shortening the analysed time period by 2 years resulted in a decrease in the linear trend values of about 0.07 mm yr-1. This was confirmed by analyses based on the two data sets.
Adachi, Yasumoto; Makita, Kohei
2015-09-01
Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer so that corrective actions can be taken when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis would therefore be a useful threshold for detecting an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at two abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent two years with the actual data for 2011 and 2012. For the modeling, periodicities were first checked using the Fast Fourier Transform, and ensemble average profiles for the weekly periodicity were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of minimum Akaike's information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of wholly or partially condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from the three producers with the highest rates of condemnation due to mycobacteriosis.
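A minimal sketch of the two-step structure described here, with hypothetical condemnation counts and an assumed ARIMA order (the paper selects orders by minimum AIC):

```python
# Sketch: subtract a weekly ensemble-average profile, fit an ARIMA to the
# residual, and form the time-dependent expected value as their sum.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
days = pd.date_range("2003-01-01", periods=8 * 365, freq="D")
counts = pd.Series(5 + 2 * (days.dayofweek < 5) + rng.poisson(1, days.size),
                   index=days, dtype=float)  # hypothetical condemnations

weekly_profile = counts.groupby(counts.index.dayofweek).mean()
residual = counts - counts.index.dayofweek.map(weekly_profile).values

fit = ARIMA(residual, order=(1, 0, 1)).fit()      # order assumed, not AIC-selected
forecast_resid = fit.forecast(steps=14)
expected = forecast_resid + [weekly_profile[d.dayofweek]
                             for d in forecast_resid.index]
```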
NASA Astrophysics Data System (ADS)
Shirota, Yukari; Hashimoto, Takako; Fitri Sari, Riri
2018-03-01
Visualizing big time series data has become very significant. In the paper we shall discuss a new analysis method called “statistical shape analysis” or “geometry driven statistics” applied to time series statistical data in economics. We analyse the changes in agriculture value added and industry value added (as percentages of GDP) from 2000 to 2010 in Asia. We handle the data as a set of landmarks on a two-dimensional image and examine the deformation using principal components. The key point of the method is that the principal components of a given formation are eigenvectors of its bending energy matrix. The local deformation can be expressed as a set of non-affine transformations, which give us information about the local differences between 2000 and 2010. Because a non-affine transformation can be decomposed into a set of partial warps, we present the partial warps visually. Statistical shape analysis is widely used in biology but, in economics, no application can be found. In the paper, we investigate its potential for analysing economic data.
Modeling commodity salam contract between two parties for discrete and continuous time series
NASA Astrophysics Data System (ADS)
Hisham, Azie Farhani Badrol; Jaffar, Maheran Mohd
2017-08-01
In order for Islamic finance to remain as competitive as conventional finance, new syariah-compliant products need to be developed, such as Islamic derivatives that can be used to manage risk. However, under syariah principles and regulations, all financial instruments must not conflict with the five syariah elements, which are riba (interest paid), rishwah (corruption), gharar (uncertainty or unnecessary risk), maysir (speculation or gambling) and jahl (taking advantage of the counterparty's ignorance). This study proposes that a traditional Islamic contract, namely salam, can be built into an Islamic derivative product. Although many studies have discussed and proposed implementations of the salam contract as an Islamic product, they have mostly addressed qualitative and legal issues. Since quantitative studies of the salam contract are scarce, this study introduces mathematical models that can value the appropriate salam price for a commodity salam contract between two parties. In modeling the commodity salam contract, this study modifies the existing conventional derivative model with some adjustments to comply with syariah rules and regulations. The cost of carry model was chosen as the foundation for developing the commodity salam model between two parties for discrete and continuous time series. However, the conventional time value of money results from the concept of interest, which is prohibited in Islam. Therefore, this study adopts the idea of the Islamic time value of money, known as positive time preference, in modeling the commodity salam contract between two parties for discrete and continuous time series.
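The conventional cost-of-carry foundation mentioned here can be sketched in a few lines for both compounding schemes; the function names and the reinterpretation of the rate as a syariah-compliant rate of return (positive time preference) are illustrative assumptions, not the paper's calibrated model:

```python
# Minimal sketch of cost-of-carry pricing in discrete and continuous time.
# Under the paper's adjustment, `rate` would be a syariah-compliant rate of
# return rather than an interest rate; names and numbers are assumptions.
import math

def salam_price_discrete(spot, rate, carry_cost, periods):
    """Deferred-delivery price with per-period compounding."""
    return (spot + carry_cost) * (1.0 + rate) ** periods

def salam_price_continuous(spot, rate, carry_cost, maturity_years):
    """Deferred-delivery price with continuous compounding."""
    return (spot + carry_cost) * math.exp(rate * maturity_years)

print(salam_price_discrete(100.0, 0.04, 2.0, 1))      # ~106.08
print(salam_price_continuous(100.0, 0.04, 2.0, 1.0))  # ~106.16
```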
TaiWan Ionospheric Model (TWIM) prediction based on time series autoregressive analysis
NASA Astrophysics Data System (ADS)
Tsai, L. C.; Macalalad, Ernest P.; Liu, C. H.
2014-10-01
As described in a previous paper, a three-dimensional ionospheric electron density (Ne) model has been constructed from vertical Ne profiles retrieved from the FormoSat3/Constellation Observing System for Meteorology, Ionosphere, and Climate GPS radio occultation measurements and worldwide ionosonde foF2 and foE data, and named the TaiWan Ionospheric Model (TWIM). The TWIM exhibits vertically fitted α-Chapman-type layers with distinct F2, F1, E, and D layers, and uses surface spherical harmonic approaches for the fitted layer parameters, including peak density, peak density height, and scale height. To improve the TWIM into a real-time model, we have developed a time series autoregressive model to forecast short-term TWIM coefficients. The time series of TWIM coefficients are considered as realizations of stationary stochastic processes within a processing window of 30 days. The sample autocorrelation coefficients are used to derive the autoregressive parameters and then forecast the TWIM coefficients, based on the least squares method and the Lagrange multiplier technique. The forecast root-mean-square relative TWIM coefficient errors are generally <30% for 1 day predictions. The forecast TWIM foE and foF2 values are also compared with and evaluated against worldwide ionosonde data.
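A minimal sketch of the general idea: fit an AR(p) model to one coefficient series by least squares within a processing window and forecast ahead. The order p = 3 and the synthetic data are assumptions, and the paper's Lagrange multiplier refinement is not reproduced:

```python
# Sketch: least-squares AR(p) fit and recursive multi-step forecast for a
# single TWIM-like coefficient series within a 30-day window.
import numpy as np

def ar_forecast(x, p=3, steps=1):
    """Fit x_t = sum_k a_k * x_{t-1-k} by least squares, then forecast."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    hist = list(x)
    out = []
    for _ in range(steps):
        nxt = np.dot(a, hist[-1:-p - 1:-1])   # most recent p values
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

rng = np.random.default_rng(2)
coeff = np.sin(np.arange(30) * 2 * np.pi / 27) + 0.1 * rng.normal(size=30)
print(ar_forecast(coeff, p=3, steps=1))
```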
Modelling of Vortex-Induced Loading on a Single-Blade Installation Setup
NASA Astrophysics Data System (ADS)
Skrzypiński, Witold; Gaunaa, Mac; Heinz, Joachim
2016-09-01
Vortex-induced integral loading fluctuations on a single suspended blade at various inflow angles were modeled in the present work by means of stochastic modelling methods. The reference time series were obtained by 3D DES CFD computations carried out on the DTU 10MW reference wind turbine blade. In the reference time series, the flapwise force component, Fx, showed both higher absolute values and larger variation than the chordwise force component, Fz, for every inflow angle considered. For this reason, the present paper focused on modelling Fx rather than Fz, although Fz could be modelled using exactly the same procedure. The reference time series differed significantly depending on the inflow angle, which made modelling all the time series with a single and relatively simple engineering model challenging. In order to find model parameters, optimizations were carried out based on the root-mean-square error between the Single-Sided Amplitude Spectra of the reference and modelled time series. In order to model well-defined frequency peaks present at certain inflow angles, optimized sine functions were superposed on the stochastically modelled time series. The results showed that the modelling accuracy varied depending on the inflow angle. Nonetheless, the modelled and reference time series showed a satisfactory general agreement in terms of their visual and frequency characteristics, indicating that the proposed method is suitable for modelling loading fluctuations on suspended blades.
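One common way to realize such stochastic modelling, sketched below with assumed numbers, is to impose a target amplitude spectrum with random phases and superpose a tuned sine for a distinct frequency peak. This illustrates the ingredients named in the abstract, not the authors' optimized model:

```python
# Sketch: stochastic load series with a prescribed amplitude spectrum plus
# a superposed sine for a vortex-shedding-like peak. Numbers illustrative.
import numpy as np

rng = np.random.default_rng(3)
n, dt = 4096, 0.05                                  # samples, time step [s]
freqs = np.fft.rfftfreq(n, dt)

target_amp = 1.0 / (1.0 + (freqs / 0.5) ** 2)       # assumed spectral shape
phases = rng.uniform(0, 2 * np.pi, freqs.size)
phases[0] = 0.0                                     # keep DC component real
spectrum = target_amp * np.exp(1j * phases)
fx_stochastic = np.fft.irfft(spectrum, n)

t = np.arange(n) * dt
f_shed, amp_shed = 0.8, 0.02                        # assumed peak [Hz], amplitude
fx = fx_stochastic + amp_shed * np.sin(2 * np.pi * f_shed * t)
```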
The influence of trading volume on market efficiency: The DCCA approach
NASA Astrophysics Data System (ADS)
Sukpitak, Jessada; Hengpunya, Varagorn
2016-09-01
For a single market, the cross-correlation between market efficiency and trading volume, an indicator of market liquidity, is carefully analysed. The study begins by creating a time series of market efficiency by applying the time-varying Hurst exponent with a one-year sliding window to daily closing prices. The time series of trading volume corresponding to the same time period is derived from a one-year moving average of daily trading volume. Subsequently, the detrended cross-correlation coefficient is employed to quantify the degree of cross-correlation between the two time series. It was found that the cross-correlation coefficients of all considered stock markets are close to 0 and clearly outside the range in which the correlation would be considered significant at almost every time scale. The results show that market liquidity, in terms of trading volume, has hardly any effect on market efficiency.
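A compact sketch of the detrended cross-correlation coefficient for two series, using non-overlapping boxes and linear detrending as simplifying assumptions:

```python
# Sketch: rho_DCCA(n), the detrended cross-correlation coefficient between
# two series at box size n (Zebende-style normalization of DCCA by DFA).
import numpy as np

def rho_dcca(x, y, n):
    X = np.cumsum(x - np.mean(x))          # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    n_boxes = len(X) // n
    f_xy = f_xx = f_yy = 0.0
    t = np.arange(n)
    for b in range(n_boxes):
        s = slice(b * n, (b + 1) * n)
        rx = X[s] - np.polyval(np.polyfit(t, X[s], 1), t)   # detrend box
        ry = Y[s] - np.polyval(np.polyfit(t, Y[s], 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)

rng = np.random.default_rng(4)
eff = rng.normal(size=2000)
vol = rng.normal(size=2000)                # independent -> rho near 0
print(rho_dcca(eff, vol, n=50))
```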
Empirical forecast of quiet time ionospheric Total Electron Content maps over Europe
NASA Astrophysics Data System (ADS)
Badeke, Ronny; Borries, Claudia; Hoque, Mainul M.; Minkwitz, David
2018-06-01
An accurate forecast of the ionospheric Total Electron Content (TEC) is helpful for investigating space weather influences on the ionosphere and technical applications like satellite-receiver radio links. The purpose of this work is to compare four empirical methods for a 24-h forecast of vertical TEC maps over Europe under geomagnetically quiet conditions. TEC map data are obtained from the Space Weather Application Center Ionosphere (SWACI) and the Universitat Politècnica de Catalunya (UPC). The time-series methods Standard Persistence Model (SPM), a 27-day median model (MediMod) and a Fourier Series Expansion are compared to maps for the entire year of 2015. As a representative of the climatological coefficient models, the forecast performance of the Global Neustrelitz TEC model (NTCM-GL) is also investigated. Time periods of magnetic storms, identified with the Dst index, are excluded from the validation. By calculating the TEC values from the most recent maps, the time-series methods perform slightly better than the coefficient model NTCM-GL. The benefit of NTCM-GL is its independence from observational TEC data. Among the time-series methods, MediMod delivers the best overall performance regarding accuracy and data gap handling. Quiet-time SWACI maps can be forecasted accurately and in real time by the MediMod time-series approach.
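A median model of the MediMod type can be sketched very compactly; the 27-day window follows the abstract, while the array layout and data are assumptions:

```python
# Sketch: forecast tomorrow's TEC at each map cell and time of day as the
# median of the previous 27 days at the same epoch.
import numpy as np

def medimod_forecast(tec, window=27):
    """tec: array of shape (days, epochs_per_day, lat, lon)."""
    return np.nanmedian(tec[-window:], axis=0)   # nan-aware: tolerates gaps

rng = np.random.default_rng(5)
tec_maps = rng.gamma(5.0, 2.0, size=(27, 24, 16, 16))   # synthetic TECU maps
tomorrow = medimod_forecast(tec_maps)
```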
Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances
NASA Astrophysics Data System (ADS)
Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.
2018-04-01
Outliers often lurk in many datasets, especially real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation; hence, handling the occurrence of outliers requires special attention. It is therefore important to determine suitable ways of treating outliers to ensure that the quality of the analyzed data is high. As such, this paper discusses an alternative method for treating outliers via the linear interpolation method. Treating an outlier as a missing value in the dataset allows the application of the interpolation method to interpolate the outliers, enabling comparison of the data series using forecast accuracy before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to interpolate the new series. The results indicated that the linear interpolation method, which produced an improved time series, displayed better results than the original time series data when forecasting with both Box-Jenkins and neural network approaches.
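The core of the approach, outliers flagged, set to missing, and linearly interpolated, can be sketched with pandas; the 3-sigma detection rule and the synthetic data are assumptions:

```python
# Sketch: treat detected outliers as missing values and fill them by
# linear interpolation before forecasting.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
arrivals = pd.Series(100 + 10 * rng.normal(size=216),
                     index=pd.date_range("1998-01-01", periods=216, freq="MS"))
arrivals.iloc[[50, 120]] += 80                       # inject two outliers

z = (arrivals - arrivals.mean()) / arrivals.std()
cleaned = arrivals.mask(z.abs() > 3)                 # outliers -> NaN
cleaned = cleaned.interpolate(method="linear")       # linear interpolation
```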
Extended space expectation values in quantum dynamical system evolutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demiralp, Metin
2014-10-06
The time variant power series expansion for the expectation value of a given quantum dynamical operator is a well-known and well-investigated issue in quantum dynamics. However, depending on the operator and Hamiltonian singularities, this expansion either may not exist or may not converge for all time instances except the beginning of the evolution. This work focuses on this issue and seeks certain cures for the negativities. We work in the extended space obtained by adding all images of the initial wave function under the system Hamiltonian’s positive integer powers. This requires the introduction of certain appropriately defined weight operators. The resulting better convergence in the temporal power series urges us to call the newly defined entities “extended space expectation values” even though they are constructed over certain weight operators and are somehow pseudo expectation values.
Combinations of Earth Orientation Measurements: SPACE2001, COMB2001, and POLE2001
NASA Technical Reports Server (NTRS)
Gross, Richard S.
2002-01-01
Independent Earth-orientation measurements taken by the space-geodetic techniques of lunar and satellite laser ranging, very long baseline interferometry, and the global positioning system have been combined using a Kalman filter. The resulting combined Earth-orientation series, SPACE2001, consists of values and uncertainties for Universal Time, polar motion, and their rates that span from September 28.0, 1976 to January 19.0, 2002 at daily intervals. The space-geodetic measurements used to generate SPACE2001 have been combined with optical astrometric measurements to form two additional combined Earth-orientation series: (1) COMB2001, consisting of values and uncertainties for Universal Time, polar motion, and their rates that span from January 20.0, 1962 to January 15.0, 2002 at five-day intervals, and (2) POLE2001, consisting of values and uncertainties for polar motion and its rates that span from January 20, 1900 to December 21, 2001 at 30.4375-day intervals.
Spatial variation of deterministic chaos in mean daily temperature and rainfall over Nigeria
NASA Astrophysics Data System (ADS)
Fuwape, I. A.; Ogunjo, S. T.; Oluyamo, S. S.; Rabiu, A. B.
2017-10-01
Daily rainfall and temperature data from 47 locations across Nigeria for the 36-year period 1979-2014 were subjected to time series analysis techniques to investigate nonlinear trends in rainfall and temperature. Quantifiers such as Lyapunov exponents, correlation dimension, and entropy were obtained for the various locations. Positive Lyapunov exponents were obtained for the time series of mean daily rainfall for all locations in the southern part of Nigeria, while negative Lyapunov exponents were obtained for all locations in the northern part of Nigeria. The mean daily temperature had positive Lyapunov exponent values (0.35-1.6) for all locations. Attempts were also made at reconstructing the phase space of the rainfall and temperature time series.
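The first step behind such quantifiers is phase-space reconstruction by time-delay embedding, sketched below; the delay and embedding dimension are assumptions that would normally be chosen with mutual-information and false-nearest-neighbour tests:

```python
# Sketch: time-delay (Takens) phase-space reconstruction, the precursor to
# estimating Lyapunov exponents or correlation dimension.
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Return trajectory matrix of shape (N - (dim-1)*tau, dim)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

rng = np.random.default_rng(7)
temps = 25 + 5 * np.sin(np.arange(2000) * 2 * np.pi / 365) \
          + rng.normal(0, 0.5, 2000)                 # synthetic daily temps
traj = delay_embed(temps, dim=3, tau=5)
```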
NASA Astrophysics Data System (ADS)
Kim, Y.; Johnson, M. S.
2017-12-01
Spectral entropy (Hs) is an index which can be used to measure the structural complexity of time series data. When a time series is made up of one periodic function, the Hs value becomes smaller, while Hs becomes larger when a time series is composed of several periodic functions. We hypothesized that this characteristic of the Hs could be used to quantify the water stress history of vegetation. For the ideal condition for which sufficient water is supplied to an agricultural crop or natural vegetation, there should be a single distinct phenological cycle represented in a vegetation index time series (e.g., NDVI and EVI). However, time series data for a vegetation area that repeatedly experiences water stress may include several fluctuations that can be observed in addition to the predominant phenological cycle. This is because the process of experiencing water stress and recovering from it generates small fluctuations in phenological characteristics. Consequently, the value of Hs increases when vegetation experiences several water shortages. Therefore, the Hs could be used as an indicator for water stress history. To test this hypothesis, we analyzed Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) data for a natural area in comparison to a nearby sugarcane area in seasonally-dry western Costa Rica. In this presentation we will illustrate the use of spectral entropy to evaluate the vegetative responses of natural vegetation (dry tropical forest) and sugarcane under three different irrigation techniques (center pivot irrigation, drip irrigation and flood irrigation). Through this comparative analysis, the utility of Hs as an indicator will be tested. Furthermore, crop response to the different irrigation methods will be discussed in terms of Hs, NDVI and yield.
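A minimal sketch of spectral entropy as described, the Shannon entropy of the normalized power spectrum, on simulated NDVI-like series (all data assumed):

```python
# Sketch: spectral entropy Hs. A single-harmonic phenology gives lower Hs
# than one with extra stress-induced fluctuations.
import numpy as np

def spectral_entropy(x):
    psd = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(len(psd))   # normalized to [0, 1]

t = np.arange(23 * 10)                       # ~10 years of 16-day composites
clean = np.sin(2 * np.pi * t / 23)                     # single phenological cycle
stressed = clean + 0.3 * np.sin(2 * np.pi * t / 7.7)   # extra fluctuations
print(spectral_entropy(clean), spectral_entropy(stressed))
```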
NASA Astrophysics Data System (ADS)
Salmon, B. P.; Kleynhans, W.; Olivier, J. C.; van den Bergh, F.; Wessels, K. J.
2018-05-01
Humans are transforming land cover at an ever-increasing rate. Accurate geographical maps of land cover, especially rural and urban settlements, are essential to planning sustainable development. Time series extracted from MODerate resolution Imaging Spectroradiometer (MODIS) land surface reflectance products have been used to differentiate land cover classes by analyzing the seasonal patterns in reflectance values. Properly fitting a parametric model to these time series usually requires several adjustments to the regression method. To reduce the workload, a single global parameter setting is typically applied to the regression method for an entire geographical area. In this work we have modified a meta-optimization approach so that the regression method is set on a per-time-series basis. The standard deviation of the model parameters and the magnitude of the residuals are used as the scoring function. We successfully fitted a triply modulated model to the seasonal patterns of our study area using a non-linear extended Kalman filter (EKF). The approach uses temporal information, which significantly reduces the processing time and storage requirements for each time series, and it derives reliability metrics for each time series individually. The features extracted using the proposed method are classified with a support vector machine, and the performance of the method is compared to the original approach on our ground truth data.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Molnar, Gyula; Iredell, Lena; Loeb, Norman G.
2011-01-01
This paper compares recent spatial and temporal anomaly time series of OLR as observed by CERES and computed from AIRS retrieved surface and atmospheric geophysical parameters over the 7-year time period September 2002 through February 2010. This time period is marked by a substantial decrease of OLR, on the order of 0.1 W/sq m/yr averaged over the globe, and very large spatial variations of changes in OLR in the tropics, with local values ranging from -2.8 W/sq m/yr to +3.1 W/sq m/yr. Global and Tropical OLR both began to decrease significantly at the onset of a strong La Niña in mid-2007. Late 2009 is characterized by a strong El Niño, with a corresponding change in sign of both Tropical and Global OLR anomalies. The spatial patterns of the 7-year short-term changes in AIRS and CERES OLR have a spatial correlation of 0.97, and the slopes of the linear least squares fits of anomaly time series averaged over different spatial regions agree to on the order of +/-0.01 W/sq m/yr. This essentially perfect agreement of OLR anomaly time series derived from observations by two different instruments, determined in totally independent and different manners, implies that both sets of results must be highly stable. This agreement also validates the anomaly time series of the AIRS derived products used to compute OLR, and furthermore indicates that anomaly time series of AIRS derived products can be used to explain the factors contributing to anomaly time series of OLR.
Data imputation analysis for Cosmic Rays time series
NASA Astrophysics Data System (ADS)
Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.
2017-05-01
The occurrence of missing data in Galactic Cosmic Ray (GCR) time series is inevitable, since data loss is due to mechanical and human failure, technical problems, and the different periods of operation of GCR stations. The aim of this study was to perform multiple imputation of the dataset in order to reproduce the observational dataset. The study used the monthly time series of the GCR stations Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90% missing data relative to the observed ROME series, with 50 replicates, using the CLMX station as a proxy for allocating these scenarios. Three different methods for monthly dataset imputation were selected: AMÉLIA II, which runs a bootstrap Expectation Maximization algorithm; MICE, which runs an algorithm based on Multivariate Imputation by Chained Equations; and MTSDI, an Expectation Maximization-based method for imputation of missing values in multivariate normal time series. The synthetic time series were compared with the observed ROME series and evaluated using several skill measures, such as RMSE, NRMSE, the Agreement Index, R, R2, the F-test and the t-test. The results showed that for CLMX and ROME the R2 and R statistics were equal to 0.98 and 0.96, respectively. Increases in the number of gaps were observed to degrade the quality of the time series. Data imputation was most efficient with the MTSDI method, with negligible errors and the best skill coefficients. The results suggest a limit of about 60% missing data for imputation of monthly averages. It is noteworthy that the CLMX, ROME and KIEL stations present no missing data in the target period. This methodology allowed 43 time series to be reconstructed.
Dihydropyrimidine based hydrazine dihydrochloride derivatives as potent urease inhibitors.
Khan, Ajmal; Hashim, Jamshed; Arshad, Nuzhat; Khan, Ijaz; Siddiqui, Naureen; Wadood, Abdul; Ali, Muzaffar; Arshad, Fiza; Khan, Khalid Mohammed; Choudhary, M Iqbal
2016-02-01
Four series of heterocyclic compounds, 4-dihydropyrimidine-2-thiones 7-12 (series A), N,S-dimethyl-dihydropyrimidines 13-18 (series B), hydrazine derivatives of dihydropyrimidine 19-24 (series C), and tetrazolo dihydropyrimidine derivatives 25-30 (series D), were synthesized and evaluated for in vitro urease inhibitory activity. Series B-D were examined for urease inhibition for the first time. Series A and C were found to be significantly active, with IC50 values of 34.7-42.9 and 15.0-26.0 μM, respectively. The structure-activity relationship showed that the free S atom and the hydrazine moiety are the key pharmacophores against the urease enzyme. Kinetic studies of the active series A (7-12) and C (19-24) were carried out to determine their modes of inhibition and dissociation constants Ki. Compounds of series A (7-12) and series C (19-24) showed a mixed type of inhibition, with Ki values ranging between 15.76-25.66 and 14.63-29.42 μM, respectively. Molecular docking results showed that all the active compounds of both series have significant binding interactions with the active site, especially the Ni ion, of the urease enzyme. The cytotoxicity of all series A-D was also evaluated against the mammalian mouse fibroblast 3T3 cell line, and no toxicity was observed in this cellular model.
Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S
2005-05-15
Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate them as accurately as possible before using these algorithms. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented, which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques, including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN) imputation. All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performs better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation of missing values compared with the other methods for both types of data, at the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE algorithm. The CMVE software is available upon request from the authors.
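CMVE itself is not reproduced here, but the NRMS-based evaluation protocol can be sketched with one of the compared baselines, KNN imputation, using scikit-learn; the data and missing-value rate are assumptions:

```python
# Sketch: evaluating an imputation method with an NRMS-style error measure,
# using KNN imputation (one of the paper's baselines) on synthetic data.
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(8)
true = rng.normal(size=(200, 30))                 # hypothetical expression matrix
mask = rng.random(true.shape) < 0.05              # 5% missing at random
observed = np.where(mask, np.nan, true)

imputed = KNNImputer(n_neighbors=10).fit_transform(observed)
nrms = np.linalg.norm(imputed[mask] - true[mask]) / np.linalg.norm(true[mask])
print("NRMS:", nrms)
```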
A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.
Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan
2015-07-17
Continuity, real-time operation, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of the periodic oscillation errors. The method gains multiple sets of navigation solutions with different phase delays by virtue of the forecasted time series acquired from the measurement data of the inertial measurement unit (IMU). With the help of least-squares curve fitting, the forecasted time series is obtained while small angular motion interference is identified and removed in the process of initial alignment. Finally, the periodic oscillation errors are restricted based on the principle of eliminating a periodic oscillation signal by averaging it with a half-period-delayed copy of itself. Simulation and test results show that the method performs well in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283
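The cancellation principle at the heart of the method can be shown in a few lines: averaging a signal with a copy of itself delayed by half the oscillation period removes that periodic component while leaving slower errors. The Schuler period and the synthetic error signal below are assumptions:

```python
# Sketch: half-wave-delay averaging cancels a periodic oscillation
# (sin(x) + sin(x + pi) = 0) while the slow drift remains.
import numpy as np

dt = 1.0                                   # s
t = np.arange(0, 6 * 3600, dt)             # 6 hours
T_schuler = 84.4 * 60                      # Schuler period, s
error = 0.5 * np.sin(2 * np.pi * t / T_schuler) + 0.001 * t   # osc + drift

half = int(T_schuler / 2 / dt)
restricted = 0.5 * (error[half:] + error[:-half])   # half-wave-delay mean
print(np.ptp(error), np.ptp(restricted))            # oscillation removed
```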
NASA Astrophysics Data System (ADS)
Reimer, Janet J.; Cai, Wei-Jun; Xue, Liang; Vargas, Rodrigo; Noakes, Scott; Hu, Xinping; Signorini, Sergio R.; Mathis, Jeremy T.; Feely, Richard A.; Sutton, Adrienne J.; Sabine, Christopher; Musielewicz, Sylvia; Chen, Baoshan; Wanninkhof, Rik
2017-08-01
Marine carbonate system monitoring programs often consist of multiple observational methods that include underway cruise data, moored autonomous time series, and discrete water bottle samples. Monitored parameters include all or some of the following: partial pressure of CO2 of the water (pCO2w) and air, dissolved inorganic carbon (DIC), total alkalinity (TA), and pH. Any combination of at least two of the aforementioned parameters can be used to calculate the others. In this study at the Gray's Reef (GR) mooring in the South Atlantic Bight (SAB) we: examine the internal consistency of pCO2w from underway cruise data, the moored autonomous time series, and values calculated from bottle samples (DIC-TA pairing); describe the seasonal to interannual pCO2w time series variability and air-sea flux (FCO2), as well as the potential sources of pCO2w variability; and determine the source/sink status with respect to atmospheric pCO2. Over the 8.5 years of the GR mooring time series, mooring-underway and mooring-bottle calculated pCO2w strongly correlate, with r-values > 0.90. The pCO2w and FCO2 time series follow seasonal thermal patterns; however, seasonal non-thermal processes, such as terrestrial export, net biological production, and air-sea exchange, also influence variability. The linear slope of the pCO2w time series is an increase of 5.2 ± 1.4 μatm y-1, with FCO2 increasing by 51-70 mmol m-2 y-1. The sign of the net FCO2 can switch interannually, with the magnitude varying greatly. Non-thermal pCO2w is also increasing over the time series, likely indicating that terrestrial export and net biological processes drive the long-term pCO2w increase.
Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory
Yang, Haimin; Pan, Zhisong; Tao, Qing
2017-01-01
Online time series prediction is the mainstream method in a wide range of fields, ranging from speech analysis and noise cancelation to stock market analysis. However, as real-world time series grow longer, the data often contain many outliers, which can mislead the learned model if treated as normal points in the process of prediction. To address this issue, in this paper we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. This method tunes the learning rate of the stochastic gradient algorithm adaptively in the process of prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average by modifying Adam, a popular stochastic gradient algorithm for training deep neural networks. In our algorithm, a large value of the relative prediction error corresponds to a small learning rate, and vice versa. Experiments on both synthetic data and real time series show that our method achieves better performance compared to existing methods based on LSTM. PMID:29391864
Time series analysis for psychological research: examining and forecasting change
Jebb, Andrew T.; Tay, Louis; Wang, Wei; Huang, Qiming
2015-01-01
Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials. PMID:26106341
Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong
2012-11-01
By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject and sleep-cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (∼0.8). The waking and REM stages have a significantly higher average scaling exponent than the light sleep stages (∼0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts were made to keep conditions unchanged. The global behavior of a single physiological signal may thus miss rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (∼10^{2}), while the original DE method sometimes underestimates scale-invariance exponents or even fails to detect scale-invariant behavior. The BEDE method is sensitive to trends in time series: the existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.
NASA Astrophysics Data System (ADS)
Pierini, J. O.; Restrepo, J. C.; Aguirre, J.; Bustamante, A. M.; Velásquez, G. J.
2017-04-01
A measure of the variability in seasonal extreme streamflow was estimated for the Colombian Caribbean coast, using monthly time series of freshwater discharge from ten watersheds. The aim was to detect modifications in the monthly streamflow distribution, seasonal trends, variance, and extreme monthly values. A 20-year moving window, shifted successively by 1 year, was applied to the monthly series to analyze the seasonal variability of streamflow. The seasonally windowed data were fitted with the Gamma distribution function. Scale and shape parameters were computed using Maximum Likelihood Estimation (MLE) and the bootstrap method with 1000 resamples. A trend analysis was performed for each windowed series, allowing the window with the maximum absolute trend values to be detected. Significant temporal shifts in the seasonal streamflow distribution and quantiles (QT) were obtained for different frequencies. Wet and dry extreme periods increased significantly in the last decades, although this increase did not occur simultaneously throughout the region. Some locations exhibited continuous increases only at minimum QT.
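A sketch of the per-window estimation step, Gamma MLE plus a bootstrap for parameter uncertainty, with synthetic data and a fixed location parameter as assumptions:

```python
# Sketch: Gamma fit of windowed streamflow with a 1000-resample bootstrap,
# mirroring the MLE + bootstrap procedure described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
window = rng.gamma(shape=2.0, scale=150.0, size=20)   # 20 windowed values

shape, loc, scale = stats.gamma.fit(window, floc=0)   # MLE, location fixed at 0

boot = np.array([stats.gamma.fit(rng.choice(window, window.size), floc=0)
                 for _ in range(1000)])
shape_ci = np.percentile(boot[:, 0], [2.5, 97.5])     # 95% CI for shape
q95 = stats.gamma.ppf(0.95, shape, loc=0, scale=scale)  # extreme quantile
```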
Alteration of Box-Jenkins methodology by implementing genetic algorithm method
NASA Astrophysics Data System (ADS)
Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad
2015-02-01
A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking, and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium-to-long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the accurate order in the model identification step and in finding the right parameter estimates. This paper presents the development of a Genetic Algorithm heuristic to solve the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated by the proposed model outperformed the single traditional Box-Jenkins model.
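A toy sketch of the idea: a genetic algorithm searching ARIMA (p, d, q) orders by AIC instead of manual identification. The population size, mutation rate, and synthetic data are assumptions, and the paper's exact operators may differ:

```python
# Sketch: GA-style search over ARIMA orders, fitness = AIC.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(10)
y = np.cumsum(rng.normal(size=120)) + 5          # synthetic "arrivals"

def fitness(order):
    try:
        return ARIMA(y, order=order).fit().aic
    except Exception:
        return np.inf                            # penalize invalid orders

pop = [tuple(rng.integers(0, 4, 3)) for _ in range(8)]
for _ in range(5):                               # a few GA generations
    pop.sort(key=fitness)
    parents = pop[:4]
    children = []
    for a, b in zip(parents[::2], parents[1::2]):
        child = list(a[:2] + b[2:])              # one-point crossover
        if rng.random() < 0.3:                   # mutation
            child[rng.integers(0, 3)] = int(rng.integers(0, 4))
        children.append(tuple(child))
    pop = parents + children + [tuple(rng.integers(0, 4, 3))
                                for _ in range(2)]
best = min(pop, key=fitness)
print("best (p, d, q):", best)
```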
Giannelli, Marco; Diciotti, Stefano; Tessa, Carlo; Mascalchi, Mario
2010-01-01
Although in EPI-fMRI analyses typical acquisition parameters (TR, TE, matrix, slice thickness, etc.) are generally employed, various readout bandwidth (BW) values are used as a function of the gradient characteristics of the MR scanner. Echo spacing (ES) is another fundamental parameter of EPI-fMRI acquisition sequences, but the employed ES value is not usually reported in fMRI studies. In the present work, the authors investigated the effect of ES and BW on basic performances of EPI-fMRI sequences in terms of temporal stability and overall image quality of time series acquisition. EPI-fMRI acquisitions of the same water phantom were performed using two clinical MR scanner systems (scanners A and B) with different gradient characteristics and functional designs of radiofrequency coils. For both scanners, the employed ES values ranged from 0.75 to 1.33 ms. The BW values ranged from 125.0 to 250.0 kHz/64 pixels and from 78.1 to 185.2 kHz/64 pixels for scanners A and B, respectively. The temporal stability of the EPI-fMRI sequence was assessed by measuring the signal-to-fluctuation-noise ratio (SFNR) and signal drift (DR), while the overall image quality was assessed by evaluating the signal-to-noise ratio (SNR(ts)) and nonuniformity (NU(ts)) of the time series acquisition. For both scanners, no significant effect of ES and BW on signal drift was revealed. The SFNR, NU(ts), and SNR(ts) values of scanner A did not vary significantly with ES. On the other hand, the SFNR, NU(ts), and SNR(ts) values of scanner B varied significantly with ES: SFNR (5.8%) and SNR(ts) (5.9%) increased with increasing ES. The SFNR (25% scanner A, 32% scanner B) and SNR(ts) (26.2% scanner A, 30.1% scanner B) values of both scanners decreased significantly with increasing BW. NU(ts) values of scanners A and B were less than 3% for all BW and ES values. Nonetheless, scanner A was characterized by a significant upward trend (3% variation) of time series nonuniformity with increasing BW, while NU(ts) of scanner B significantly increased (19% variation) with increasing ES. Temporal stability (SFNR and DR) and overall image quality (NU(ts) and SNR(ts)) of EPI-fMRI time series can thus vary significantly with echo spacing and readout bandwidth. The specific pattern of variation may depend on the performance of each single MR scanner system in terms of gradient characteristics, EPI sequence calibrations (eddy currents, shimming, etc.), and functional design of the radiofrequency coil. Our results indicate that the employment of a low BW improves not only the signal-to-noise ratio of EPI-fMRI time series but also the temporal stability of functional acquisitions. The use of minimum ES values is not entirely advantageous when the MR scanner system is characterized by gradients with low performance and suboptimal EPI sequence calibration. Since differences in the basic performances of MR scanner systems are a potential source of variability for fMRI activation, phantom measurements of SFNR, DR, NU(ts), and SNR(ts) can be executed before subject acquisitions to monitor the stability of MR scanner performances in clinical group comparison and longitudinal studies.
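The phantom stability metrics used here can be sketched from a single ROI time series; the definitions below follow common fMRI quality-assurance practice and may differ in detail from the paper's exact formulas:

```python
# Sketch: SFNR and percent signal drift from a phantom ROI time series.
import numpy as np

rng = np.random.default_rng(11)
ts = 1000 + 0.05 * np.arange(200) + rng.normal(0, 5, 200)  # ROI mean signal

t = np.arange(ts.size)
trend = np.polyval(np.polyfit(t, ts, 2), t)      # slow drift (2nd-order fit)
fluct = ts - trend                               # fluctuation noise

sfnr = ts.mean() / fluct.std()                   # signal-to-fluctuation-noise
drift_pct = 100 * (trend.max() - trend.min()) / ts.mean()
print(f"SFNR = {sfnr:.1f}, drift = {drift_pct:.2f}%")
```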
NASA Astrophysics Data System (ADS)
Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea; Montalto, Placido; Patanè, Domenico; Privitera, Eugenio
2016-07-01
From January 2011 to December 2015, Mt. Etna was mainly characterized by a cyclic eruptive behavior with more than 40 lava fountains from the New South-East Crater. Using the RMS (Root Mean Square) of the seismic signal recorded by stations close to the summit area, an automatic recognition of the different states of volcanic activity (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN) has been applied for monitoring purposes. Since the values of the RMS time series calculated on the seismic signal are generated by a stochastic process, we can model the system generating its sampled values, assumed to be a Markov process, using Hidden Markov Models (HMMs). HMM analysis seeks to recover the sequence of hidden states from the observations. In our framework, the observations are characters generated by the Symbolic Aggregate approXimation (SAX) technique, which maps RMS time series values to symbols of a pre-defined alphabet. The main advantages of the proposed framework, based on HMMs and SAX, with respect to other automatic systems applied to seismic signals at Mt. Etna, are the use of multiple stations and static thresholds to well characterize the volcano states. Its application to a large seismic dataset of Etna volcano shows that the volcano states can be inferred. The experimental results show that, in most cases, we detected lava fountains in advance.
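A compact sketch of SAX symbolization as applied to an RMS-like series; the alphabet size and segment count are assumptions:

```python
# Sketch: SAX = z-normalize, piecewise-aggregate (PAA), then map segment
# means to letters via Gaussian breakpoints.
import numpy as np
from scipy.stats import norm

def sax(x, n_segments=20, alphabet=4):
    z = (x - np.mean(x)) / np.std(x)
    paa = z[: len(z) // n_segments * n_segments] \
            .reshape(n_segments, -1).mean(axis=1)
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet + 1)[1:-1])
    return "".join(chr(ord("a") + int(np.searchsorted(breakpoints, v)))
                   for v in paa)

rng = np.random.default_rng(12)
rms = np.concatenate([rng.normal(1, 0.1, 300),    # QUIET-like background
                      rng.normal(3, 0.5, 100)])   # FOUNTAIN-like burst
print(sax(rms))
```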
Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho
2018-05-11
Early detection of infectious disease outbreaks is one of the most important and significant issues in syndromic surveillance systems, as it helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: CUmulative SUM (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed based not only on 42 different time series generated taking into account trends, seasonality, and randomly occurring outbreaks, but also on real-world daily and weekly data related to diarrheal infection. The algorithms were evaluated using different metrics, namely sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). Although the comparison results showed better performance for the EARS C3 method with respect to the other algorithms regardless of the characteristics of the underlying time series data, Holt-Winters showed better performance when the baseline frequency and the dispersion parameter values were less than 1.5 and 2, respectively.
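As one concrete example of the compared detectors, a minimal EARS-style control chart (its C1 variant) can be sketched as follows; the 7-day baseline and 3-sigma threshold follow the common formulation, and the data are synthetic:

```python
# Sketch: EARS C1-style detector. Flag day t when the count exceeds the
# mean of the previous 7 days by 3 standard deviations. C2/C3 add a guard
# band and accumulation, not reproduced here.
import numpy as np

def ears_c1_alerts(counts, baseline=7, threshold=3.0):
    counts = np.asarray(counts, dtype=float)
    alerts = []
    for t in range(baseline, len(counts)):
        window = counts[t - baseline:t]
        mu, sd = window.mean(), max(window.std(ddof=1), 1e-6)
        if (counts[t] - mu) / sd > threshold:
            alerts.append(t)
    return alerts

rng = np.random.default_rng(13)
diarrhea = rng.poisson(20, 120)
diarrhea[90:94] += 25                     # injected outbreak
print(ears_c1_alerts(diarrhea))
```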
Changes in the Hurst exponent of heartbeat intervals during physical activity
NASA Astrophysics Data System (ADS)
Martinis, M.; Knežević, A.; Krstačić, G.; Vargović, E.
2004-07-01
The fractal scaling properties of heartbeat time series are studied in different controlled ergometric regimes using both an improved Hurst rescaled range (R/S) analysis and detrended fluctuation analysis (DFA). The long-time “memory effect” quantified by a Hurst exponent H>0.5 is found to increase during progressive physical activity in healthy subjects, in contrast to subjects with stable angina pectoris, where it decreases. The results are also supported by the detrended fluctuation analysis. We argue that this finding may serve as a useful new diagnostic parameter for short heartbeat time series.
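A sketch of the classical rescaled-range estimate of H; the paper's improved R/S analysis includes corrections not reproduced here:

```python
# Sketch: classical R/S estimate of the Hurst exponent. log(R/S) is fit
# against log(segment length); the slope estimates H.
import numpy as np

def hurst_rs(x, min_n=8):
    x = np.asarray(x, dtype=float)
    sizes, rs_vals = [], []
    n = min_n
    while n <= len(x) // 2:
        rs = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping segments
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())
            r, s = dev.max() - dev.min(), seg.std(ddof=1)
            if s > 0:
                rs.append(r / s)
        sizes.append(n)
        rs_vals.append(np.mean(rs))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(14)
rr_increments = rng.normal(size=4096)     # white noise: expect H near 0.5
print(hurst_rs(rr_increments))
```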
NASA Astrophysics Data System (ADS)
Carter, W. E.; Robertson, D. S.; Nothnagel, A.; Nicolson, G. D.; Schuh, H.
1988-12-01
High-accuracy geodetic very long baseline interferometry measurements between the African, Eurasian, and North American plates have been analyzed to determine the terrestrial coordinates of the Hartebeesthoek observatory to better than 10 cm, to determine the celestial coordinates of eight Southern Hemisphere radio sources with milliarcsecond (mas) accuracy, and to derive quasi-independent polar motion, UT1, and nutation time series. Comparison of the Earth orientation time series with ongoing International Radio Interferometric Surveying project values shows agreement at about the 1 mas level in polar motion and nutation and 0.1 ms of time in UT1. Given the independence of the observing sessions and the unlikeliness of common systematic error sources, this level of agreement serves to bound the total errors in both measurement series.
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: the watershed acts as an information filter in the hydrologic conversion process from precipitation to streamflow, so the streamflow series were less random and more complex than those of precipitation. The Nash-Sutcliffe efficiency metric increased with model complexity, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
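A sketch of the symbolization step and a simplified information measure; here mean information gain is approximated by the conditional entropy of the next symbol given the current one, an assumed simplification of the block-based definition:

```python
# Sketch: quantile symbolization of a streamflow series and an
# order-1 mean-information-gain proxy, H(next symbol | current symbol).
import numpy as np

def symbolize(x, n_symbols=4):
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.searchsorted(edges, x)            # symbols 0..n_symbols-1

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mean_information_gain(symbols, n_symbols=4):
    singles = np.bincount(symbols, minlength=n_symbols)
    pairs = np.bincount(symbols[:-1] * n_symbols + symbols[1:],
                        minlength=n_symbols ** 2)
    return entropy(pairs) - entropy(singles)     # = H(S_{t+1} | S_t)

rng = np.random.default_rng(15)
streamflow = np.convolve(rng.gamma(2, 50, 3650),
                         np.ones(10) / 10, mode="valid")  # smoothed, less random
print(mean_information_gain(symbolize(streamflow)))
```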
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András
2015-04-01
For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, design concepts often rely on uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The present study compares four different precipitation modelling approaches regarding their ability to reproduce rainfall and runoff characteristics: one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach based on a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event-based statistics such as mean dry spell and wet spell duration, wet spell amount and intensity, long-term means of precipitation sum and number of events, and extreme value distributions for different durations. They are then compared regarding simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show that all rainfall models are suitable in principle, but with different strengths and weaknesses regarding the different rainfall and runoff characteristics considered.
Elbert, Yevgeniy; Burkom, Howard S
2009-11-20
This paper discusses further advances in making robust predictions with Holt-Winters forecasts for a variety of syndromic time series behaviors, and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time to detect. The study presents practical rules for the initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and with more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely in both mean value and cyclic or seasonal behavior. Plausible simulated signals are added to the background data for detection performance testing, at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm are comparable or superior to those of the more traditional prediction methods used for syndromic surveillance.
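A minimal sketch of Holt-Winters forecast-based alerting with statsmodels; the alert rule (observed count exceeding the one-step forecast by three residual standard deviations) is an assumed simplification of the paper's control-chart approach:

```python
# Sketch: fit additive Holt-Winters to a daily syndromic series with a
# weekly cycle, then flag days far above the forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(16)
idx = pd.date_range("2009-01-01", periods=365, freq="D")
base = 50 + 15 * np.sin(2 * np.pi * np.arange(365) / 7)   # weekly cycle
counts = pd.Series(rng.poisson(np.clip(base, 1, None)), index=idx)

train, test = counts[:300], counts[300:]
model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=7).fit()
resid_sd = (train - model.fittedvalues).std()
forecast = model.forecast(len(test))
alerts = test[test > forecast + 3 * resid_sd]   # assumed alerting rule
```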
Dissolved organic nitrogen dynamics in the North Sea: A time series analysis (1995-2005)
NASA Astrophysics Data System (ADS)
Van Engeland, T.; Soetaert, K.; Knuijt, A.; Laane, R. W. P. M.; Middelburg, J. J.
2010-09-01
Dissolved organic nitrogen (DON) dynamics in the North Sea were explored by means of long-term time series of nitrogen parameters from the Dutch national monitoring program. Generally, the data quality was good, with few missing data points. Different imputation methods were used to verify the robustness of the patterns against these missing data. No long-term trends in DON concentrations were found over the sampling period (1995-2005). Inter-annual variability in the different time series showed both common and station-specific behavior. The stations could be divided into two regions, based on absolute concentrations and the dominant time scales of variability. Average DON concentrations were 11 μmol l-1 in the coastal region and 5 μmol l-1 in the open sea. Organic fractions of total dissolved nitrogen (TDN) averaged 38 and 71% in the coastal zone and open sea, respectively, but increased over time due to decreasing dissolved inorganic nitrogen (DIN) concentrations. In both regions intra-annual variability dominated over inter-annual variability, but DON variation in the open sea was markedly shifted towards shorter time scales relative to the coastal stations. In the coastal zone a consistent seasonal DON cycle existed, with high values in spring-summer and low values in autumn-winter. In the open sea seasonality was weak. A marked shift in seasonality was found at the Dogger Bank, with DON accumulation towards summer and low values in winter prior to 1999, and accumulation in spring and decline throughout summer after 1999. This study clearly shows that DON is a dynamic actor in the North Sea and should be monitored systematically to enable us to fully understand the functioning of this ecosystem.
NASA Astrophysics Data System (ADS)
Delgado, Oihane; Campo-Bescós, Miguel A.; López, J. Javier
2017-04-01
When solving certain hydrological engineering problems, it is often necessary to know rain intensity values associated with a specific probability or return period, T. From analyses of extreme rainfall events at different time scales of aggregation, Intensity-Duration-Frequency (IDF) relationships can be deduced, which are widely used in hydraulic infrastructure design. However, the lack of long time series of rainfall intensities at short time steps, minutes or hours, leads to the use of mathematical expressions to characterize and extend these curves. One way to deduce them is through synthetic rainfall time series generated from stochastic models, which is the approach evaluated in this work. Accumulated rainfall recorded every 10 min at the Igueldo gauge (San Sebastian, Spain) over the period 1927-2005 was checked for homogeneity and for statistically significant increasing or decreasing trends. Subsequently, two models were calibrated: a Bartlett-Lewis model and a Markov chain model, both based on successions of storms, each composed of a series of rainfall events separated by short time intervals. Finally, synthetic ten-minute rainfall time series are generated, which allow detailed IDF curves to be estimated and compared with the IDF curves estimated from the recorded data.
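To make the IDF estimation step concrete, here is a hedged sketch (not the Bartlett-Lewis or Markov-chain calibration itself): aggregate a synthetic 10-min series to several durations, take annual maxima, and fit a Gumbel distribution. The synthetic series and the choice of the Gumbel law are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
years, steps_per_year = 50, 6 * 24 * 365          # 10-min steps
rain = rng.exponential(0.1, years * steps_per_year) * \
       (rng.random(years * steps_per_year) < 0.03)   # mm per 10 min

T = 10.0                                          # return period, years
for agg in (1, 3, 6, 18):                         # 10, 30, 60, 180 min
    x = rain.reshape(-1, agg).sum(axis=1)         # depth per duration window
    annual_max = x.reshape(years, -1).max(axis=1)
    loc, scale = stats.gumbel_r.fit(annual_max)
    depth = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)   # mm
    dur_h = agg * 10 / 60
    print(f"duration {dur_h:4.1f} h: {depth / dur_h:6.2f} mm/h at T = {T:.0f} yr")
```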
NASA Astrophysics Data System (ADS)
Klevtsov, S. I.
2018-05-01
The impact of physical factors, such as temperature, leads to changes in the parameters of a technical object. Monitoring these changes in real time is necessary to prevent dangerous situations. In this paper, a time series is used to predict parameter changes: forecasting makes it possible to detect a potentially dangerous change in a parameter before it occurs, giving the control system more time to react. A simple time series model was chosen so that the algorithm remains simple and can be executed in the background on the microprocessor module. The efficiency of the time series approach depends on its characteristics, which must be tuned. This work studies the influence of these characteristics on the prediction error of the controlled parameter, taking the behavior of the parameter into account, and determines suitable values of the forecast lag. The results, when applied, will improve the efficiency of monitoring a technical object during its operation.
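A minimal sketch of the kind of lightweight background prediction described, under stated assumptions: a least-squares line over a short sliding window, extrapolated a fixed lag ahead to flag an approaching threshold crossing. The window length, lag, threshold, and signal are all illustrative.

```python
import numpy as np

def predict_ahead(window, lag):
    """Least-squares line through the window, extrapolated lag steps ahead."""
    t = np.arange(len(window))
    slope, intercept = np.polyfit(t, window, 1)
    return intercept + slope * (len(window) - 1 + lag)

threshold, window_len, lag = 80.0, 16, 8
samples = 25 + 0.9 * np.arange(100.0) ** 0.9        # slowly rising parameter
buf = []
for k, x in enumerate(samples):
    buf.append(x)
    if len(buf) > window_len:
        buf.pop(0)                                   # ring-buffer behaviour
    if len(buf) == window_len and predict_ahead(np.array(buf), lag) > threshold:
        print(f"step {k}: crossing of {threshold} predicted within {lag} steps")
        break
```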
Mobile Visualization and Analysis Tools for Spatial Time-Series Data
NASA Astrophysics Data System (ADS)
Eberle, J.; Hüttich, C.; Schmullius, C.
2013-12-01
The Siberian Earth System Science Cluster (SIB-ESS-C) provides access and analysis services for spatial time-series data built on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and climate data from meteorological stations. A web portal for data access, visualization, and analysis with standard-compliant web services was previously developed for SIB-ESS-C. As a further enhancement, a mobile app was developed to provide easy access to these time-series data during field campaigns. The app sends the current GPS position and a user-selected dataset (such as land surface temperature or vegetation indices) to the SIB-ESS-C web service and receives the requested time-series data for the corresponding pixel in real time. The data are then plotted directly in the app. The user can also analyze the time-series data for breakpoints and other phenological values; this processing is executed on demand on the SIB-ESS-C web server and the results are transferred to the app. Any processing can also be done at the SIB-ESS-C web portal. The aim of this work is to make spatial time-series data and analysis functions available to end users without the need for local data processing. This presentation gives an overview of the new mobile app, its functionality, the technical infrastructure, and the lessons learned during development.
Nonlinear dynamics of the atmospheric pollutants in Mexico City
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, Alejandro; Barrera-Ferrer, Amilcar; Angulo-Brown, Fernando
2014-05-01
Atmospheric pollution in the Metropolitan Zone of Mexico City (MZMC) is a serious problem with social, economic, and political consequences, since the region concentrates both the largest share of the country's population and a great part of its commercial and industrial activities. According to the World Health Organization, maximum permissible concentrations of atmospheric pollutants are exceeded frequently. In the MZMC, environmental monitoring has been limited to criteria pollutants, so named because their measured atmospheric levels indicate air quality in a precise way. The Automatic Atmospheric Monitoring Network monitors and registers pollutant concentrations in the air of the MZMC; it currently comprises approximately 35 automatically equipped remote stations, each reporting hourly records. Local and global invariant quantities have been widely used to describe the fractal properties of diverse time series. Many studies assume that a time series is monofractal, that is, describable by a single fractal dimension. This hypothesis is often unrealistic, because many time series are heterogeneous and non-stationary, so their scaling properties change over time and more than one fractal dimension may be required to describe them. The complexity of atmospheric pollutant dynamics therefore suggests analyzing the hourly concentration time series with the multifractal formalism, and in this work the air concentration time series of MZMC criteria pollutants were studied with this method. The pollutants chosen for the analysis are ozone, sulfur dioxide, carbon monoxide, nitrogen dioxide, and PM10 (particles smaller than 10 micrometers). We found that the pollutant concentration time series are multifractal. The degree of multifractality indicates complexity: the more multifractal a time series is, the more complex both the series and the system from which the measurements were obtained. We studied the variation of the degree of multifractality over time by calculating the multifractal spectra of the time series for each year, for each monitoring station, from 1990 to 2013. Multifractal analysis can also reveal what kinds of correlations are present in the time series, and it is interesting to consider how these correlations vary over time. Our results show that, for all pollutants and all monitoring stations, the time series have long-range correlations and are highly persistent.
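For readers who want the flavour of the computation, here is a compact multifractal-DFA-style sketch (the authors' exact implementation is not given in the abstract): the generalized Hurst exponent h(q) is estimated from the scaling of the q-th order fluctuation function, and a spread of h(q) across q indicates multifractality. Scales, orders, and the white-noise test signal are illustrative.

```python
import numpy as np

def mfdfa_h(x, scales, qs, order=1):
    """Generalized Hurst exponents h(q) from the q-th order fluctuation
    function; a spread of h(q) over q indicates multifractality."""
    y = np.cumsum(x - np.mean(x))                     # profile
    h = []
    for q in qs:
        logF = []
        for s in scales:
            n = y.size // s
            segs = y[:n * s].reshape(n, s)
            t = np.arange(s)
            coef = np.polyfit(t, segs.T, order)       # detrend each segment
            res = segs - np.polyval(coef, t[:, None]).T
            F2 = np.mean(res ** 2, axis=1)            # segment variances
            logF.append(0.5 * np.mean(np.log(F2)) if q == 0
                        else np.log(np.mean(F2 ** (q / 2))) / q)
        h.append(np.polyfit(np.log(scales), logF, 1)[0])
    return np.array(h)

rng = np.random.default_rng(3)
x = rng.standard_normal(20_000)                       # monofractal test signal
scales = np.unique(np.logspace(1.2, 3.2, 12).astype(int))
qs = [-4, -2, 0, 2, 4]
print(dict(zip(qs, mfdfa_h(x, scales, qs).round(2))))  # all ~0.5 for white noise
```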
Near Real-Time Event Detection & Prediction Using Intelligent Software Agents
2006-03-01
value was 0.06743. Multiple autoregressive integrated moving average (ARIMA) models were then built to see if the raw data, differenced data, or... slight improvement. The best adjusted r² value was found to be 0.1814. Successful results were not expected from linear or ARIMA-based modelling... appear, 2005. [63] Mora-Lopez, L., Mora, J., Morales-Bueno, R., et al. Modelling time series of climatic parameters with probabilistic finite
NASA Astrophysics Data System (ADS)
Sobradelo, R.; Martí, J.; Mendoza-Rosas, A. T.; Gómez, G.
2012-04-01
The Canary Islands are a densely populated active volcanic region visited by several million tourists every year. Nearly twenty eruptions have been reported in written chronicles over the last 600 years, suggesting that the probability of a new eruption in the near future is far from zero. This shows the importance of assessing and monitoring the volcanic hazard of the region in order to reduce and manage its potential volcanic risk, and ultimately to contribute to the design of appropriate preparedness plans. Hence, probabilistic analysis of the volcanic eruption time series for the Canary Islands is an essential step in assessing volcanic hazard and risk in the area. Such a series describes complex processes involving different types of eruptions over different time scales. Here we propose a statistical method for calculating the probabilities of future eruptions that is appropriate to the nature of the documented historical eruptive data. We first characterise the eruptions by their magnitudes, and then carry out a preliminary analysis of the data to establish the requirements for the statistical method. Past studies of eruptive time series used conventional statistics and treated the series as a homogeneous process. In this paper, we use a method that accounts for the time-dependence of the series and includes rare or extreme events, in the form of a few large eruptions, since such data require special methods of analysis. Hence, we use a statistical method from extreme value theory. In particular, we apply a non-homogeneous Poisson process to the historical eruptive data of the Canary Islands to estimate the probability of having at least one volcanic event of magnitude greater than one in the upcoming years. Shortly after the publication of this method, an eruption took place on the island of El Hierro for the first time in historical times, supporting our method and contributing towards the validation of our results.
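The probability computation under a non-homogeneous Poisson process can be sketched as follows; the power-law (Weibull-process) intensity and its parameter values are illustrative assumptions, not the model fitted in the paper.

```python
import numpy as np

def prob_at_least_one(rate, t0, horizon, n=10_000):
    """P(N >= 1 in (t0, t0 + horizon]) = 1 - exp(-integral of the intensity)."""
    t = np.linspace(t0, t0 + horizon, n)
    return 1.0 - np.exp(-np.trapz(rate(t), t))

beta, theta = 1.3, 35.0                            # hypothetical shape/scale (yr)
rate = lambda t: (beta / theta) * (t / theta) ** (beta - 1.0)

t0 = 600.0                                         # length of the written record
for horizon in (10, 20, 50):
    print(f"P(>=1 eruption in next {horizon:2d} yr) = "
          f"{prob_at_least_one(rate, t0, horizon):.2f}")
```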
Occurrence analysis of daily rainfalls by using non-homogeneous Poissonian processes
NASA Astrophysics Data System (ADS)
Sirangelo, B.; Ferrari, E.; de Luca, D. L.
2009-09-01
In recent years several temporally homogeneous stochastic models have been applied to describe the rainfall process. In particular, stochastic analysis of daily rainfall time series may help explain the statistical features of the temporal variability of the phenomenon. Because of the evident periodicity of the physical process, such models should be applied only over short temporal intervals in which the occurrences and intensities of rainfall can reliably be considered homogeneous. To this end, occurrences of daily rainfall can be treated as a stationary stochastic process within monthly periods. In this context, point process models are widely used for at-site analysis of daily rainfall occurrence; they are continuous-time models, able to capture the intermittent nature of rainfall and to simulate interstorm periods. With a different approach, the periodic features of daily rainfall can be described by a temporally non-homogeneous stochastic model whose parameters are expressed as continuous functions of time. In this case, great attention must be paid to the parsimony of the models, as regards the number of parameters and the bias introduced into the generation of synthetic series, and to the influence of the threshold values used to extract the peak storm database from recorded daily rainfall depths. In this work, a stochastic model based on a non-homogeneous Poisson process, characterized by a time-dependent intensity of rainfall occurrence, is employed to explain seasonal effects in daily rainfalls exceeding prefixed threshold values. In particular, the variation of the rainfall occurrence intensity λ(t) is modelled using Fourier series analysis, in which the non-homogeneous process is transformed into a homogeneous unit-rate process through a proper transformation of the time domain, and the minimum number of harmonics is chosen by applying available statistical tests. The procedure is applied to a dataset of rain gauges located in different geographical zones of the Mediterranean area. Time series were selected on the basis of the availability of at least 50 years of data in the period 1921-1985, chosen as the calibration period, and of all years of observation in the subsequent validation period 1986-2005, whose daily rainfall occurrence variability is under hypothesis. First, for each time series and for each fixed threshold value, the parameters of the non-homogeneous Poisson model are estimated over the calibration period. Second, to test the hypothesis that the daily rainfall occurrence process preserves the same behaviour in more recent periods, the intensity distribution evaluated for the calibration period is also adopted for the validation period. On this basis, using a Monte Carlo approach, 1000 synthetic generations of daily rainfall occurrences, of length equal to the validation period, were carried out, and the sample λ(t) was evaluated for each simulation. This procedure is adopted because of the complexity of determining analytical confidence limits for the sample intensity λ(t).
Finally, the sample intensity, the theoretical function of the calibration period, and the 95% statistical band evaluated by the Monte Carlo approach are compared, together with, for each threshold value, the mean square error (MSE) between the theoretical λ(t) and the sample λ(t) of the recorded data, and its corresponding 95% one-tailed statistical band, estimated from the MSE values between the sample λ(t) of each synthetic series and the theoretical one. The results obtained may be very useful for the identification and calibration of stochastic rainfall models based on historical precipitation data. Further applications of the non-homogeneous Poisson model will concern joint analyses of the storm occurrence process with the rainfall depth marks, interpreted using a temporally homogeneous model within proper sub-year intervals.
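A sketch of the generation step, assuming a Fourier-series occurrence intensity λ(t) with a single annual harmonic and Lewis-Shedler thinning to draw occurrence times; the coefficients are illustrative, not fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, a1, b1 = 0.15, 0.08, 0.05                     # events/day, one harmonic
lam = lambda t: np.maximum(mu + a1 * np.cos(2 * np.pi * t / 365.25)
                              + b1 * np.sin(2 * np.pi * t / 365.25), 0.0)

def simulate_nhpp(lam, t_end, lam_max, rng):
    """Lewis-Shedler thinning: draw a homogeneous process at rate lam_max,
    keep each point with probability lam(t)/lam_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_end:
            return np.array(times)
        if rng.random() < lam(t) / lam_max:
            times.append(t)

events = simulate_nhpp(lam, 20 * 365.25, mu + a1 + b1, rng)
print(f"{events.size} occurrences in 20 years (expected ~{20 * 365.25 * mu:.0f})")
```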
Jane, Nancy Yesudhas; Nehemiah, Khanna Harichandran; Arputharaj, Kannan
2016-01-01
Clinical time-series data acquired from electronic health records (EHR) are subject to temporal complexities such as irregular observations, missing values, and time-constrained attributes that make the knowledge discovery process challenging. This paper presents a temporal rough set induced neuro-fuzzy (TRiNF) mining framework that handles these complexities and builds an effective clinical decision-making system. TRiNF provides two functionalities, namely temporal data acquisition (TDA) and temporal classification. In TDA, a time-series forecasting model is constructed by adopting an improved double exponential smoothing method. The forecasting model is used for missing value imputation and temporal pattern extraction. The relevant attributes are selected using a temporal pattern based rough set approach. In temporal classification, a classification model is built with the selected attributes using a temporal pattern induced neuro-fuzzy classifier. For experimentation, this work uses two clinical time series datasets of hepatitis and thrombosis patients. The experimental results show that with the proposed TRiNF framework there is a significant reduction in the error rate, with average classification accuracies of 92.59% for the hepatitis dataset and 91.69% for the thrombosis dataset. The obtained classification results demonstrate the efficiency of the proposed framework in terms of improved classification accuracy.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model, based on missing value estimation followed by variable selection, to forecast a reservoir's water level. The study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015; the two datasets were concatenated by date into an integrated research dataset. The proposed forecasting model has three main steps. First, five imputation methods are used to estimate missing values rather than simply deleting them. Second, key variables are identified via factor analysis, and unimportant variables are then deleted sequentially via the variable selection method. Finally, a Random Forest is used to build the forecasting model of the reservoir's water level, which is compared with the listed benchmark methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model combined with variable selection has better forecasting performance than the listed models with full variables. In addition, the experiments show that the proposed variable selection can improve the forecasting capability of the five forecasting methods used here.
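A minimal sketch of the final forecasting stage under stated assumptions: lagged water levels plus one exogenous variable as predictors of the next day's level, fed to a Random Forest. The lag depth and the synthetic data are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 1500
rainfall = rng.gamma(2.0, 2.0, n)                      # synthetic daily rain
level = 240 + np.cumsum(0.02 * (rainfall - rainfall.mean())) \
            + rng.normal(0, 0.05, n)                    # synthetic water level

lags = 7
X = np.column_stack([level[i:n - lags + i] for i in range(lags)]
                    + [rainfall[lags - 1:n - 1]])       # lagged levels + rain
y = level[lags:]
split = int(0.8 * len(y))
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"test MAE: {np.mean(np.abs(pred - y[split:])):.3f} m")
```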
Nonlinear analysis of dynamic signature
NASA Astrophysics Data System (ADS)
Rashidi, S.; Fallah, A.; Towhidkhah, F.
2013-12-01
A signature is a highly trained motor skill resulting in a well-practiced combination of segments such as strokes and loops; it is a physical manifestation of complex motor processes. The problem, generally stated, is how relative simplicity in behavior emerges from the considerable complexity of the perception-action system that produces behavior within an infinitely variable biomechanical and environmental context. To address this problem, we present evidence that the motor control dynamics of the signing process are chaotic. This chaotic dynamic may explain the rich array of time series behavior in the motor skill of signature. Nonlinear analysis is a powerful approach and a suitable tool for characterizing dynamical systems through concepts such as fractal dimension and the Lyapunov exponent. Accordingly, the position and velocity time series can be analyzed in both the horizontal and vertical directions. We observed that non-integer values of the correlation dimension indicate low-dimensional deterministic dynamics, a result confirmed by surrogate data tests. We also used the time series to calculate the largest Lyapunov exponent and obtained a positive value. These results constitute significant evidence that signature data are the outcome of chaos in a nonlinear dynamical system of motor control.
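A Rosenstein-style estimate of the largest Lyapunov exponent, of the general kind used for such evidence, can be sketched as follows. The embedding dimension, delay, and Theiler window are illustrative choices, and the demonstration uses the logistic map rather than signature data.

```python
import numpy as np
from scipy.spatial.distance import cdist

def largest_lyapunov(x, dim=5, tau=1, theiler=10, k_steps=6):
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    usable = n - k_steps
    d = cdist(emb[:usable], emb[:usable])
    for i in range(usable):                      # Theiler window: exclude
        lo, hi = max(0, i - theiler), min(usable, i + theiler + 1)
        d[i, lo:hi] = np.inf                     # temporally close neighbours
    nn = np.argmin(d, axis=1)
    idx = np.arange(usable)
    logdiv = [np.mean(np.log(np.linalg.norm(emb[idx + k] - emb[nn + k],
                                            axis=1) + 1e-12))
              for k in range(k_steps)]
    # slope of the mean log-divergence curve ~ largest Lyapunov exponent
    return np.polyfit(np.arange(k_steps), logdiv, 1)[0]

x = np.empty(1500); x[0] = 0.4
for i in range(1, x.size):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])     # fully chaotic logistic map
print(f"lambda_max ~ {largest_lyapunov(x):.2f}   (ln 2 ~ 0.69 expected)")
```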
Voltage and Current Clamp Transients with Membrane Dielectric Loss
Fitzhugh, R.; Cole, K. S.
1973-01-01
Transient responses of a space-clamped squid axon membrane to step changes of voltage or current are often approximated by exponential functions of time, corresponding to a series resistance and a membrane capacity of 1.0 μF/cm². Curtis and Cole (1938, J. Gen. Physiol. 21:757) found, however, that the membrane had a constant phase angle impedance z = z1(jωτ)^−α, with a mean α = 0.85. (α = 1.0 for an ideal capacitor; α < 1.0 may represent dielectric loss.) This result is supported by more recently published experimental data. For comparison with experiments, we have computed functions expressing voltage and current transients with constant phase angle capacitance, a parallel leakage conductance, and a series resistance, at nine values of α from 0.5 to 1.0. A series in powers of t^α provided a good approximation for short times; one in powers of t^−α, for long times; for intermediate times, a rational approximation matching both series for a finite number of terms was used. These computations may help in determining experimental series resistances and parallel leakage conductances from membrane voltage or current clamp data. PMID:4754194
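As an illustrative companion (not the authors' series expansions), the following evaluates the complex impedance of the circuit they describe: a constant-phase-angle element in parallel with a leakage conductance, in series with a resistance. All parameter values are arbitrary round numbers.

```python
import numpy as np

alpha, z1, tau = 0.85, 1.0e3, 1.0e-3       # CPE: z = z1 * (j*w*tau)**(-alpha)
g_leak, r_series = 1.0e-4, 30.0            # leak conductance and series resistance

w = np.logspace(1, 5, 5)                   # angular frequencies, rad/s
z_cpe = z1 * (1j * w * tau) ** (-alpha)
z_mem = 1.0 / (1.0 / z_cpe + g_leak)       # CPE in parallel with the leak
z_tot = r_series + z_mem
for wi, zi in zip(w, z_tot):
    print(f"w = {wi:8.0f} rad/s   |z| = {abs(zi):9.2f}   "
          f"phase = {np.degrees(np.angle(zi)):7.2f} deg")
```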
NASA Astrophysics Data System (ADS)
Cernesson, Flavie; Tournoud, Marie-George; Lalande, Nathalie
2018-06-01
Among the various parameters monitored in river monitoring networks, bioindicators provide very informative data. Analysing temporal variations in bioindicator data is tricky for water managers because the data sets are often short, irregular, and non-normally distributed; it is likewise a challenging methodological issue for scientists. This is the case in the Saône basin (30 000 km², France) where, of 812 IBGN (French macroinvertebrate bioindicator) monitoring stations operating between 1998 and 2010, only 71 time series had more than 10 data values and were studied here. Combining various analytical tools (three parametric and non-parametric statistical tests plus a graphical analysis), 45 IBGN time series were classified as stationary and 26 as non-stationary (only one of which showed a degradation). Series from sampling stations located within the same hydroecoregion showed similar trends, while river size classes seemed non-significant in explaining temporal trends. From a methodological point of view, combining statistical tests and graphical analysis is therefore a relevant option when striving to improve trend detection. Moreover, it was possible to propose a way to summarise the series in order to analyse links between ecological river quality indicators and land use stressors.
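One non-parametric tool suited to short, non-normal series of this kind is the Mann-Kendall trend test; a minimal sketch follows. The paper does not name its three tests, so this choice is an assumption, and the simple variance formula below omits the tie correction that discrete IBGN scores would need.

```python
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    x = np.asarray(x, float)
    n = x.size
    s = np.sum(np.sign(x[None, :] - x[:, None])[np.triu_indices(n, 1)])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # no tie correction here
    z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))   # two-sided
    return s, z, p

ibgn = [12, 13, 11, 14, 14, 15, 13, 16, 15, 17, 16, 18]   # hypothetical scores
s, z, p = mann_kendall(ibgn)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.4f}")   # small p: monotonic trend
```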
NASA Astrophysics Data System (ADS)
Katselis, George; Koukou, Katerina; Dimitriou, Evagelos; Koutsikopoulos, Constantin
2007-07-01
In the present study we analysed the daily seaward migratory behaviour of four dominant euryhaline fish species (Mugilidae: Liza saliens, Liza aurata, Mugil cephalus and Sparidae: Sparus aurata) in the Messolonghi Etoliko lagoon system (western Greek coast) based on the daily landings time series of barrier traps, and assessed the relationship between their migratory behaviour and various climatic variables (air temperature and atmospheric pressure) and the lunar cycle. A 2-year time series of daily fish landings (1993 and 1994), a long time series of daily air temperature and daily temperature range (1991-1998), as well as a 4-year time series of daily atmospheric pressure (1994-1997) and daily pressure range were used. Harmonic models (HM) consisting of annual and lunar cycle harmonic components explained most (R² > 0.80) of the mean daily species landings and temperature variations, while a rather low part of the variation (0.18 < R² < 0.27) was explained for pressure, daily pressure range and daily temperature range. In all the time series sets the amplitude of the annual component was highest. The model values of all species revealed two important migration periods (summer and winter) corresponding to the spawning and refuge migrations. The lunar cycle effect on species' daily migration rates and the short-term fluctuation of daily migration rates were rather low. However, the short-term fluctuation of some species' daily migration rates during winter was greater than during summer. In all species, the main migration was the spawning migration. The model lunar components of the species landings showed a monthly oscillation synchronous with the full moon (S. aurata and M. cephalus) or a semi-monthly oscillation synchronous with the new and full moon (L. aurata and L. saliens). Bispectral analysis of the model values and the model residuals' time series revealed that the species' daily migrations were correlated (coherencies > 0.6) with the daily fluctuations of the climatic variables at seasonal, mid- and short-term scales.
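A harmonic model of the kind described can be fitted by ordinary least squares. In the sketch below the annual, lunar (synodic), and semi-lunar periods are the standard ones, while the synthetic landings series and the component choice are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(730.0)                               # two years of days
y = (5 + 3 * np.sin(2 * np.pi * t / 365.25 - 1.0)  # annual migration cycle
       + 0.8 * np.sin(2 * np.pi * t / 29.53)       # lunar (synodic) component
       + rng.normal(0, 0.5, t.size))

def harmonics(t, period):
    w = 2 * np.pi * t / period
    return np.column_stack([np.cos(w), np.sin(w)])

X = np.column_stack([np.ones_like(t),
                     harmonics(t, 365.25),          # annual
                     harmonics(t, 29.53),           # lunar
                     harmonics(t, 29.53 / 2)])      # semi-lunar
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
amps = [np.hypot(beta[i], beta[i + 1]) for i in (1, 3, 5)]
print(f"R^2 = {r2:.2f}; annual/lunar/semi-lunar amplitudes = "
      + ", ".join(f"{a:.2f}" for a in amps))
```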
Ghalyan, Najah F; Miller, David J; Ray, Asok
2018-06-12
Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points of the time series. They cast estimation of a generating partition as the minimization of this objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective. The difficulty lies in a heuristic nearest-neighbor symbol assignment step. We instead develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding-block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
NASA Astrophysics Data System (ADS)
Di Piazza, A.; Cordano, E.; Eccel, E.
2012-04-01
The issue of climate change detection is considered a major challenge. In particular, high-temporal-resolution climate change scenarios are required to evaluate the effects of climate change on agricultural management (crop suitability, yields, risk assessment, etc.), energy production, and water management. In this work, a "weather generator" technique was used to downscale climate change scenarios for temperature. An R package (RMAWGEN, Cordano and Eccel, 2011 - available at http://cran.r-project.org) was developed to generate synthetic daily weather conditions using the theory of vector autoregressive (VAR) models. The VAR model was chosen for its ability to maintain the temporal and spatial correlations among variables. In particular, observed time series of daily maximum and minimum temperature are transformed into "new" normally-distributed variable time series, which are used to calibrate the parameters of a VAR model by ordinary least squares. The implemented algorithm, applied to monthly mean climatic values downscaled from Global Climate Model predictions, can therefore generate several stochastic daily scenarios in which the statistical consistency among series is preserved. Further details are provided in the RMAWGEN documentation. An application is presented here using a dataset of daily temperature time series recorded at 41 sites in the Trentino region for the period 1958-2010. The temperature time series were pre-processed to fill missing values (by a site-specifically calibrated Inverse Distance Weighting algorithm, corrected for elevation) and to remove inhomogeneities. Several climatic indices useful for impact assessment applications were considered, and their time trends within the series were analyzed. The indices range from classical ones, such as annual mean temperatures, seasonal mean temperatures and their anomalies (relative to the reference period 1961-1990), to the climate change indices recommended by the World Meteorological Organization Commission for Climatology (WMO-CCL) and the Expert Team on Climate Change Detection, Monitoring and Indices (ETCCDMI) of the Research Programme on Climate Variability and Predictability (CLIVAR) project. Each index was applied both to the observed (and processed) data and to synthetic time series produced by the weather generator over the thirty-year reference period 1981-2010, in order to validate the procedure. Climate projections were statistically downscaled for a selection of sites for the two 30-year periods 2021-2050 and 2071-2099 of the multi-model output of the European project "Ensembles" (scenario A1B). The use of several climatic indices strengthens the trend analysis of both the generated synthetic series and the future climate projections.
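The core VAR idea can be sketched compactly (the RMAWGEN package itself additionally handles the transformation to Gaussian variables, multiple lags, and many sites): fit a VAR(1) by ordinary least squares to normalized series and simulate with Gaussian residuals so that cross-correlations among stations are preserved. The synthetic three-station data are an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 4000, 3                                     # days, stations
innov = rng.multivariate_normal(np.zeros(k),
        [[1, .7, .5], [.7, 1, .6], [.5, .6, 1]], n)
data = np.empty((n, k)); data[0] = innov[0]
for i in range(1, n):                              # synthetic "observed" data
    data[i] = 0.6 * data[i - 1] + innov[i]

z = (data - data.mean(0)) / data.std(0)            # normalize each station
Y, X = z[1:], np.column_stack([np.ones(n - 1), z[:-1]])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)          # OLS: intercept row + A matrix
cov = np.cov((Y - X @ B).T)                        # residual covariance

sim = np.zeros((n, k))
for i in range(1, n):                              # generate a synthetic run
    e = rng.multivariate_normal(np.zeros(k), cov)
    sim[i] = B[0] + sim[i - 1] @ B[1:] + e

print("obs cross-corr:\n", np.corrcoef(z.T).round(2))
print("sim cross-corr:\n", np.corrcoef(sim.T).round(2))
```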
Analysis of crude oil markets with improved multiscale weighted permutation entropy
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun; Liu, Cheng
2018-03-01
Entropy measures have recently been used extensively to study complexity in nonlinear systems. Weighted permutation entropy (WPE) overcomes the neglect of amplitude information in standard permutation entropy (PE) and shows a distinctive ability to extract complexity information from data with abrupt changes in magnitude. The improved (sometimes called composite) multiscale (MS) method has the advantage of reducing errors and improving accuracy when evaluating multiscale entropy values of time series of limited length. In this paper, we combine the merits of WPE and the improved MS method in the improved multiscale weighted permutation entropy (IMWPE) method for investigating the complexity of a time series. The method is validated on artificial data, white noise and 1/f noise, and on real market data for Brent and Daqing crude oil. The complexity properties of the crude oil markets are then explored for the return series, for volatility series with multiple exponents, and for the EEMD-produced intrinsic mode functions (IMFs) that represent different frequency components of the return series. Moreover, the instantaneous amplitude and frequency of Brent and Daqing crude oil are analyzed via the Hilbert transform applied to each IMF.
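A compact weighted permutation entropy sketch, following the idea the paper builds on: ordinal patterns of embedded vectors, each weighted by the vector's variance so that amplitude information is retained. Embedding parameters and test signals are illustrative.

```python
import math
import numpy as np
from collections import defaultdict

def wpe(x, dim=4, tau=1):
    """Normalized weighted permutation entropy in [0, 1]."""
    n = len(x) - (dim - 1) * tau
    weights = defaultdict(float)
    for i in range(n):
        v = x[i:i + dim * tau:tau]
        weights[tuple(np.argsort(v))] += np.var(v)   # amplitude-aware weight
    p = np.array(list(weights.values()))
    p = p / p.sum()
    return -np.sum(p * np.log(p)) / math.log(math.factorial(dim))

rng = np.random.default_rng(8)
white = rng.standard_normal(5000)
walk = np.cumsum(white)                              # strongly correlated series
print(f"WPE white noise: {wpe(white):.3f}")          # close to 1 (max entropy)
print(f"WPE random walk: {wpe(walk):.3f}")           # lower than white noise
```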
Brewer spectrometer total ozone column measurements in Sodankylä
NASA Astrophysics Data System (ADS)
Karppinen, Tomi; Lakkala, Kaisa; Karhu, Juha M.; Heikkinen, Pauli; Kivi, Rigel; Kyrö, Esko
2016-06-01
Brewer total ozone column measurements started in Sodankylä in May 1988, nine months after the signing of the Montreal Protocol. The Brewer instrument has been well maintained and frequently calibrated since then, producing a high-quality ozone time series now spanning more than 25 years. The data have been uniformly reprocessed for 1988-2014. Data quality has been assured by automatic data rejection rules as well as by manual checking. Daily mean values calculated from the highest-quality direct sun measurements are available 77% of the time, with up to 75 measurements per day on clear days. Zenith sky measurements fill another 14% of the time series, and winter months are sparsely covered by moon measurements. The time series provides information for surveying the evolution of the Arctic ozone layer and can be used as a reference point for assessing other total ozone column measurement practices.
Automated Bayesian model development for frequency detection in biological time series.
Granqvist, Emma; Oldroyd, Giles E D; Morris, Richard J
2011-06-24
A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domains, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down, and the question reduces to an inference problem of identifying the most probable frequencies. In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlight the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure.
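A minimal sketch of the single-frequency case in the textbook (Bretthorst) form that Bayesian Spectrum Analysis builds on: with amplitudes and the noise level marginalized out, the posterior over frequency follows from the Schuster periodogram C(f) as p(f|D) proportional to [1 - 2C(f)/(N<d^2>)]^((2-N)/2), and the sampling need not be uniform. The test signal and frequency grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.sort(rng.uniform(0, 60, 120))               # non-uniform sampling times
d = np.sin(2 * np.pi * 0.25 * t) + rng.normal(0, 0.8, t.size)
d -= d.mean()

freqs = np.linspace(0.05, 0.6, 2000)
# Schuster periodogram C(f) = |sum d exp(-2*pi*i*f*t)|^2 / N
C = np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ d) ** 2 / t.size
logpost = (2 - t.size) / 2 * np.log(1 - 2 * C / (t.size * np.mean(d ** 2)))
logpost -= logpost.max()
post = np.exp(logpost); post /= np.trapz(post, freqs)
print(f"MAP frequency: {freqs[np.argmax(post)]:.3f} Hz (true 0.250 Hz)")
```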
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.
1986-01-01
Single-channel pilot manual control output in closed-loop tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time series generation technique. A Levinson-Durbin algorithm is used to determine the filter which prewhitens the input, and a projective (least squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the source data are relatively short records, approximately 25 seconds long. Time delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
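The prewhitening step can be illustrated with a short Levinson-Durbin recursion solving the Yule-Walker equations for an AR(p) filter, which is then applied to whiten the input. The AR order and the test process below are assumptions, not values from the study.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations given autocovariances r[0..order].
    Returns a with a[0] = 1 such that sum_k a[k] x[n-k] is (nearly) white."""
    a = np.zeros(order + 1); a[0] = 1.0
    err = r[0]
    for k in range(1, order + 1):
        acc = r[k] + np.dot(a[1:k], r[k - 1:0:-1])
        lam = -acc / err                            # reflection coefficient
        a[1:k + 1] += lam * np.concatenate((a[k - 1:0:-1], [1.0]))
        err *= 1.0 - lam * lam
    return a, err

rng = np.random.default_rng(10)
n = 2000
x, e = np.zeros(n), rng.standard_normal(n)
for i in range(2, n):
    x[i] = 1.5 * x[i - 1] - 0.7 * x[i - 2] + e[i]   # AR(2) test process

r = np.correlate(x, x, "full")[n - 1:n + 9] / n     # biased autocovariances
a, err = levinson_durbin(r, order=8)
white = np.convolve(x, a, mode="valid")             # prewhitened input
print("leading AR coefficients:", np.round(-a[1:3], 2))   # ~ [1.5, -0.7]
print("variance in/out:", round(x.var(), 2), "->", round(white.var(), 2))
```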
Analysis of financial time series using multiscale entropy based on skewness and kurtosis
NASA Astrophysics Data System (ADS)
Xu, Meng; Shang, Pengjian
2018-01-01
There is great interest in studying the dynamic characteristics of financial time series of daily stock closing prices in different regions. Multiscale entropy (MSE) is effective mainly for quantifying the complexity of time series on different time scales. This paper applies a new MSE-based method for assessing financial stability in which the coarse-graining is based on skewness and kurtosis. To better understand which coarse-graining method is superior for different kinds of stock indexes, we take into account the developmental characteristics of stock markets on the three continents of Asia, North America, and Europe. We study the volatility of the different financial time series and analyze the similarities and differences of the coarse-grained time series from the perspective of skewness and kurtosis. A correspondence between the entropy values of stock sequences and the degree of stability of financial markets was observed. Three of the eight stock sequences, which have particular characteristics, are discussed; their behavior matches the graphical results of applying the MSE method. A comparative study is conducted on synthetic and real-world data. The results show that the modified method is more sensitive to changes in dynamics and carries more valuable information, and that the discrimination provided by skewness and kurtosis is not only clear but also more stable.
NASA Astrophysics Data System (ADS)
Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal
2018-06-01
Prediction of water amount that will enter the reservoirs in the following month is of vital importance especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which then has been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
Analysis of the QRS complex for apnea-bradycardia characterization in preterm infants
Altuve, Miguel; Carrault, Guy; Cruz, Julio; Beuchée, Alain; Pladys, Patrick; Hernandez, Alfredo I.
2009-01-01
This work presents an analysis of the information content of new features derived from the electrocardiogram (ECG) for the characterization of apnea-bradycardia events in preterm infants. Automatic beat detection and segmentation methods have been adapted to the ECG signals of preterm infants through the application of two evolutionary algorithms. ECG data acquired from 32 preterm infants with persistent apnea-bradycardia were used for quantitative evaluation. The adaptation procedure led to improved sensitivity and positive predictive value, and to reduced jitter, for the detection of the R-wave, QRS onset, QRS offset, and iso-electric level. Additionally, time series representing the RR interval, R-wave amplitude, and QRS duration were automatically extracted for periods at rest and before, during, and after apnea-bradycardia episodes. Significant variations (p < 0.05) were observed for all time series when the difference between values at rest and values just before the bradycardia event was compared with the difference between values at rest and values during the bradycardia event. These results reveal changes in the R-wave amplitude and QRS duration appearing at the onset and termination of apnea-bradycardia episodes, which could be potentially useful for the early detection and characterization of these episodes. PMID:19963984
Using lagged dependence to identify (de)coupled surface and subsurface soil moisture values
NASA Astrophysics Data System (ADS)
Carranza, Coleen D. U.; van der Ploeg, Martine J.; Torfs, Paul J. J. F.
2018-04-01
Recent advances in radar remote sensing popularized the mapping of surface soil moisture at different spatial scales. Surface soil moisture measurements are used in combination with hydrological models to determine subsurface soil moisture values. However, variability of soil moisture across the soil column is important for estimating depth-integrated values, as decoupling between surface and subsurface can occur. In this study, we employ new methods to investigate the occurrence of (de)coupling between surface and subsurface soil moisture. Using time series datasets, lagged dependence was incorporated in assessing (de)coupling with the idea that surface soil moisture conditions will be reflected at the subsurface after a certain delay. The main approach involves the application of a distributed-lag nonlinear model (DLNM) to simultaneously represent both the functional relation and the lag structure in the time series. The results of an exploratory analysis using residuals from a fitted loess function serve as a posteriori information to determine (de)coupled values. Both methods allow for a range of (de)coupled soil moisture values to be quantified. Results provide new insights into the decoupled range as its occurrence among the sites investigated is not limited to dry conditions.
Homogenisation of minimum and maximum air temperature in northern Portugal
NASA Astrophysics Data System (ADS)
Freitas, L.; Pereira, M. G.; Caramelo, L.; Mendes, L.; Amorim, L.; Nunes, L.
2012-04-01
Homogenization of minimum and maximum air temperature has been carried out for northern Portugal for the period 1941-2010. The database consists of monthly arithmetic averages calculated from daily values observed at stations within the network managed by the national Institute of Meteorology (IM). Some of the weather stations in IM's network have been collecting data for more than a century; however, over the entire observing period several factors affecting the climate series have to be considered, such as changes in the station surroundings and the replacement of manually operated instruments. Beyond these typical changes, of particular interest are the relocation of stations to rural areas or to the urban-rural interface, and the installation of automatic weather stations in the vicinity of principal or synoptic stations with the aim of replacing them. The information from these relocated and new stations was merged to produce a single representative time series for each site. This process began at the end of the 1990s, and the information on the series-merging process constitutes the metadata used. Two basic procedures were performed: (i) preliminary statistical and quality control analysis; and (ii) detection and correction of homogeneity problems. For the first, software was developed and used for quality control, specifically dedicated to the detection of outliers, based on the quartile values of the time series itself. The homogeneity analysis was performed using MASH (Multiple Analysis of Series for Homogenisation) and HOMER, a software application developed and recently made available within the COST Action ES0601 (COST-ES0601, 2012). Both methods provide fast quality control of the original data and were developed for automatic processing, analysis, homogeneity testing, and adjustment of climatological data, although manual usage is also possible. Results obtained with both methods will be presented, compared, and discussed, along with the results of the sensitivity tests performed with each. COST-ES0601, 2012: "ACTION COST-ES0601 - Advances in homogenisation methods of climate series: an integrated approach HOME". Available at http://www.homogenisation.org/v_02_15/ [accessed 3 January 2012].
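The quartile-based quality-control step can be sketched as follows; the exact rule used in the software is not specified, so the common 1.5 x IQR fence and the application per calendar month are assumptions.

```python
import numpy as np

def quartile_outliers(values, factor=1.5):
    """Flag values outside the [Q1 - factor*IQR, Q3 + factor*IQR] fence."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - factor * iqr) | (values > q3 + factor * iqr)

rng = np.random.default_rng(12)
tmin = rng.normal(4.0, 2.0, 70)            # 70 Januaries of monthly mean Tmin
tmin[12] = 19.0                            # an implausible value
print("flagged years (index):", np.where(quartile_outliers(tmin))[0])
```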
Tormene, Paolo; Giorgino, Toni; Quaglini, Silvana; Stefanelli, Mario
2009-01-01
The purpose of this study was to assess the performance of a real-time ("open-end") version of the dynamic time warping (DTW) algorithm for the recognition of motor exercises. Given a possibly incomplete input stream of data and a reference time series, the open-end DTW algorithm computes both the size of the prefix of the reference which is best matched by the input, and the dissimilarity between the matched portions. The algorithm was used to provide real-time feedback to neurological patients undergoing motor rehabilitation. We acquired a dataset of multivariate time series from a sensorized long-sleeve shirt which contains 29 strain sensors distributed on the upper limb. Seven typical rehabilitation exercises were recorded in several variations, both correctly and incorrectly executed, and at various speeds, totaling a data set of 840 time series. Nearest-neighbour classifiers were built according to the outputs of open-end DTW alignments and their global counterparts on exercise pairs. The classifiers were also tested on well-known public datasets from heterogeneous domains. Nonparametric tests show that (1) on full time series the two algorithms achieve the same classification accuracy (p-value = 0.32); (2) on partial time series, classifiers based on open-end DTW have a far higher accuracy (κ = 0.898 versus κ = 0.447; p < 10⁻⁵); and (3) the prediction of the matched fraction follows the ground truth closely (root mean square error < 10%). The results hold for both the motor rehabilitation data and the other datasets tested. The open-end variant of the DTW algorithm is suitable for the classification of truncated quantitative time series, even in the presence of noise. Early recognition and accurate class prediction can be achieved, provided that enough variance is available over the time span of the reference. Therefore, the proposed technique expands the use of DTW to a wider range of applications, such as real-time biofeedback systems.
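A compact open-end DTW sketch matching the description: the cost matrix is the standard one, but the alignment may end at any prefix of the reference, returning both the dissimilarity and the matched reference fraction. The sine-wave "exercise" is a stand-in for the sensor data.

```python
import numpy as np

def open_end_dtw(query, reference):
    n, m = len(query), len(reference)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - reference[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    j_best = int(np.argmin(D[n, 1:])) + 1          # best reference prefix end
    return D[n, j_best], j_best / m                # dissimilarity, matched frac

t = np.linspace(0, 2 * np.pi, 100)
reference = np.sin(t)                              # full exercise template
partial = np.sin(t[:55]) + 0.05 * np.random.default_rng(13).standard_normal(55)
dist, frac = open_end_dtw(partial, reference)
print(f"dissimilarity = {dist:.2f}, matched fraction of reference = {frac:.2f}")
```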
A spectral analysis of team dynamics and tactics in Brazilian football.
Moura, Felipe Arruda; Martins, Luiz Eduardo Barreto; Anido, Ricardo O; Ruffino, Paulo Régis C; Barros, Ricardo M L; Cunha, Sergio Augusto
2013-01-01
The purposes of this study were to characterise the total space covered and the distances between players within teams over ten Brazilian First Division Championship matches. Filmed recordings, combined with a tracking system, were used to obtain the trajectories of the players (n = 277), before and after half-time. The team surface area (the area of the convex hull formed by the positions of the players) and spread (the Frobenius norm of the distance-between-player matrix) were calculated as functions of time. A Fast Fourier Transform (FFT) was applied to each time series. The median frequency was then calculated. The results of the surface area time series median frequencies for the first half (0.63 ± 0.10 cycles · min⁻¹) were significantly greater (P < 0.01) than the second-half values (0.47 ± 0.14 cycles · min⁻¹). Similarly, the spread variable median frequencies for the first half (0.60 ± 0.14 cycles · min⁻¹) were significantly greater (P < 0.01) than the second-half values (0.46 ± 0.16 cycles · min⁻¹). The median frequencies allowed the characterisation of the time series oscillations that represent the speed at which players distribute and then compact their team formation during a match. This analysis can provide insights that allow coaches to better control the team organisation on the pitch.
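The two descriptors can be sketched as follows: team surface area as the convex-hull area of player positions per frame, and median frequency as the frequency below which half of the spectral power lies. The player tracks are synthetic stand-ins with an imposed expand-compact cycle.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(14)
frames, players, fs = 3000, 10, 1.0                # one sample per second
t = np.arange(frames) / fs
spread = 10 + 4 * np.sin(2 * np.pi * (0.6 / 60) * t)   # ~0.6 cycles/min
base = rng.normal(0, 1, (players, 2))              # fixed formation shape
xy = base[None, :, :] * spread[:, None, None]      # formation expands/compacts

area = np.array([ConvexHull(f).volume for f in xy])   # .volume is area in 2-D

def median_frequency(x, fs):
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    cum = np.cumsum(psd)
    return freqs[np.searchsorted(cum, cum[-1] / 2)]

print(f"median frequency: {median_frequency(area, fs) * 60:.2f} cycles/min")
```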
Malik, Nishant; Marwan, Norbert; Zou, Yong; Mucha, Peter J.; Kurths, Jürgen
2016-01-01
A method to identify distinct dynamical regimes and transitions between those regimes in a short univariate time series was recently introduced [1], employing the computation of fluctuations in a measure of nonlinear similarity based on local recurrence properties. In the present work, we describe the details of the analytical relationships between this newly introduced measure and the well-known concepts of attractor dimensions and Lyapunov exponents. We show that the new measure depends linearly on the effective dimension of the attractor and that it measures variations in the sum of the Lyapunov spectrum. To illustrate the practical usefulness of the method, we identify various types of dynamical transitions in different nonlinear models. We present testbed examples for the new method's robustness against noise and missing values in the time series. We also use this method to analyze time series of social dynamics, specifically the U.S. crime record time series from 1975 to 1993. Using this method, we find that dynamical complexity in robberies was influenced by the unemployment rate until the late 1980s. We also observed a dynamical transition in homicide and robbery rates in the late 1980s and early 1990s, leading to an increase in the dynamical complexity of these rates. PMID:25019852
NASA Astrophysics Data System (ADS)
Aulia, D.; Ayu, S. F.; Matondang, A.
2018-01-01
Malaria is a contagious disease of global concern. As a public health problem prone to outbreaks, it affects quality of life and the economy, and can lead to death. This study therefore forecasts malaria cases using climatic factors as predictors in Mandailing Natal Regency. The total numbers of positive malaria cases from January 2008 to December 2016 were taken from the health department of Mandailing Natal Regency. Climate data such as rainfall, humidity, and temperature were taken from the Center of Statistics Department of Mandailing Natal Regency. EViews ver. 9 was used for the analysis. An autoregressive integrated moving average model, ARIMA(0,1,1)(1,0,0)12, is the best model, explaining 67.2% of the variability of the data in this time series study. Rainfall (P value = 0.0005), temperature (P value = 0.0029), and humidity (P value = 0.0001) are significant predictors of malaria transmission. The seasonal adjustment factor (SAF) shows peaks in malaria cases in November and March.
Prediction of South China sea level using seasonal ARIMA models
NASA Astrophysics Data System (ADS)
Fernandez, Flerida Regine; Po, Rodolfo; Montero, Neil; Addawe, Rizavel
2017-11-01
Accelerating sea level rise is an indicator of global warming and poses a threat to low-lying places and coastal countries. This study aims to fit a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to the time series obtained from the TOPEX and Jason series of satellite radar altimeters over the South China Sea from 2008 to 2015. With altimetric measurements taken in a 10-day repeat cycle, monthly averages of the satellite altimetry measurements were computed to compose the data set used in the study. SARIMA models were then fitted to the time series in order to find the best-fitting model. Results show that the SARIMA(1,0,0)(0,1,1)12 model best fits the time series and was used to forecast the values for January 2016 to December 2016. The 12-month forecast using SARIMA(1,0,0)(0,1,1)12 shows that the sea level gradually increases from January to September 2016, and decreases until December 2016.
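Fitting the selected model and producing the 12-month forecast takes only a few lines with statsmodels; the monthly series below is a synthetic stand-in for the South China Sea altimetry averages.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(15)
t = np.arange(96)                                  # monthly means, 2008-2015
sea_level = (0.4 * t / 12 + 2.0 * np.sin(2 * np.pi * t / 12)
             + rng.normal(0, 0.5, t.size))         # cm, synthetic stand-in

model = SARIMAX(sea_level, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
res = model.fit(disp=False)
print(np.round(res.forecast(steps=12), 2))         # the next 12 months
```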
Guatteri, Mariagiovanna; Spudich, P.; Beroza, G.C.
2001-01-01
We consider the applicability of laboratory-derived rate- and state-variable friction laws to the dynamic rupture of the 1995 Kobe earthquake. We analyze the shear stress and slip evolution of Ide and Takeo's [1997] dislocation model, fitting the inferred stress change time histories by calculating the dynamic load and the instantaneous friction at a series of points within the rupture area. For points exhibiting a fast-weakening behavior, the Dieterich-Ruina friction law, with values of dc = 0.01-0.05 m for critical slip, fits the stress change time series well. This range of dc is 10-20 times smaller than the slip distance over which the stress is released, Dc, which previous studies have equated with the slip-weakening distance. The limited resolution and low-pass character of the strong motion inversion degrades the resolution of the frictional parameters and suggests that the actual dc is less than this value. Stress time series at points characterized by a slow-weakening behavior are well fitted by the Dieterich-Ruina friction law with values of dc ≥ 0.01-0.05 m. The apparent fracture energy Gc can be estimated from waveform inversions more stably than the other friction parameters. We obtain Gc = 1.5×10⁶ J m⁻² for the 1995 Kobe earthquake, in agreement with estimates for previous earthquakes. From this estimate and a plausible upper bound for the local rock strength we infer a lower bound for Dc of about 0.008 m. Copyright 2001 by the American Geophysical Union.
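For readers unfamiliar with the friction law, the sketch below integrates the Dieterich-Ruina rate- and state-dependent law with the aging evolution law through a velocity step; the parameter values (a, b, dc) are illustrative, with dc chosen in the range inferred in the study.

```python
import numpy as np

mu0, a, b, dc, v0 = 0.6, 0.010, 0.015, 0.02, 1e-5   # dc = 2 cm, v0 = 10 um/s

def friction(v, theta):
    # Dieterich-Ruina: mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc)
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)

dt, steps = 0.01, 100_000                            # 1000 s of slip history
v = np.full(steps, v0); v[steps // 3:] = 10 * v0     # tenfold velocity step
theta = dc / v0                                      # start at steady state
mu = np.empty(steps)
for i in range(steps):
    mu[i] = friction(v[i], theta)
    theta += dt * (1.0 - v[i] * theta / dc)          # aging evolution law
print(f"steady-state friction drop: {mu[0] - mu[-1]:.4f} "
      f"(theory (b-a)*ln10 = {(b - a) * np.log(10):.4f})")
```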
New insights into soil temperature time series modeling: linear or nonlinear?
NASA Astrophysics Data System (ADS)
Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram
2018-03-01
Soil temperature (ST) is an important dynamic parameter whose prediction is a major research topic in various fields, including agriculture, because ST plays a critical role in hydrological processes at the soil surface. In this study, a new linear methodology based on stochastic methods is proposed for modeling daily soil temperature (DST). With this approach, the ST series components are determined in order to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing, on four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that, in all ST series reviewed, the periodic term is the most robust component. A comparison of the three methods applied to the various series components suggests that spectral analysis combined with stochastic methods outperforms the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables, and modeling DST as a time series. In a previous study at the same sites, Kim and Singh (Theor. Appl. Climatol. 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and the Adaptive Neuro-Fuzzy Inference System (ANFIS), considering HV as input variables. The comparison shows that the relative error in estimating DST by the proposed methodology was about 6%, while with MLP and ANFIS it was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Given their relatively inferior performance, two hybrid models were also implemented, in which the weights of the MLP and the membership functions of the ANFIS were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform (Wavelet-MLP and Wavelet-ANFIS). A comparison of the proposed methodology with the individual and hybrid nonlinear models in predicting the DST time series shows that it attains the lowest Akaike Information Criterion (AIC) value, which accounts for model simplicity and accuracy simultaneously, at the different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to the complex nonlinear methods normally employed to model DST.
Barba, Lida; Rodríguez, Nibaldo; Montt, Cecilia
2014-01-01
Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models to improve the forecasting of time series are presented. The forecasting strategy is implemented in two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents in the Valparaíso region of Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO with a MAPE of 15.51%.
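The HSVD smoothing stage lends itself to a compact illustration. Below is a minimal sketch under stated assumptions (window length and rank are illustrative choices, not the authors' settings): the series is embedded in a Hankel matrix, the leading singular components are kept, and the anti-diagonals are averaged back into a smoothed series that would then feed the ARIMA or ANN forecaster.

```python
# Hedged sketch of Hankel-SVD (HSVD) smoothing of a 1-D series.
import numpy as np

def hsvd_smooth(x, window, rank=1):
    n = len(x)
    k = n - window + 1
    H = np.column_stack([x[i:i + window] for i in range(k)])  # Hankel embedding
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # low-rank approximation
    out = np.zeros(n)                                         # Hankel (anti-diagonal) averaging
    cnt = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Hr[:, j]
        cnt[j:j + window] += 1
    return out / cnt

x = np.sin(np.linspace(0, 20, 200)) + 0.3 * np.random.default_rng(1).normal(size=200)
smoothed = hsvd_smooth(x, window=20, rank=2)  # stage-1 output for the stage-2 forecaster
```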
Extended AIC model based on high order moments and its application in the financial market
NASA Astrophysics Data System (ADS)
Mao, Xuegeng; Shang, Pengjian
2018-07-01
In this paper, an extended method of the traditional Akaike Information Criterion (AIC) is proposed to detect the volatility of time series by combining it with higher order moments, such as skewness and kurtosis. Since measures based on higher order moments are powerful in many respects, the properties of asymmetry and flatness can be observed. Furthermore, in order to reduce the effect of noise and other incoherent features, we combine the extended AIC algorithm with multiscale wavelet analysis, in which the newly extended AIC algorithm is applied to wavelet coefficients at several scales and the time series are reconstructed by the wavelet transform. After that, we create AIC planes to derive the relationship among the AIC values computed with variance, skewness and kurtosis, respectively. When we test this technique on the financial market, the aim is to analyze the trend and volatility of the closing price of stock indices and to classify them. We also adopt multiscale analysis to measure the complexity of time series over a range of scales. Empirical results show that the singularity of time series in the stock market can be detected via the extended AIC algorithm.
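The abstract does not state the extended-AIC formula itself, but its ingredients can be illustrated: a multiscale wavelet decomposition whose detail coefficients are summarized by variance, skewness and kurtosis at each scale. The following is a hedged sketch using PyWavelets; the series, wavelet and level count are arbitrary assumptions, and the per-scale moment table stands in for whatever combination the extended criterion actually uses.

```python
# Hedged sketch: per-scale variance/skewness/kurtosis of wavelet detail coefficients.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
price = np.cumsum(rng.normal(0, 1, 2048))           # stand-in for a closing-price series

level = 5
coeffs = pywt.wavedec(price, 'db4', level=level)    # [cA5, cD5, cD4, ..., cD1]
for lvl, d in zip(range(level, 0, -1), coeffs[1:]):
    print(f"detail level {lvl}: var={np.var(d):.3f}  "
          f"skew={skew(d):.3f}  kurt={kurtosis(d):.3f}")
```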
NASA Astrophysics Data System (ADS)
Zakynthinaki, M. S.; Stirling, J. R.
2007-01-01
Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.
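The ALOPEX family is built around a simple correlation update: each parameter moves in the direction that previously correlated with an improvement of the objective, plus annealed noise. Below is a minimal sketch of that update with a toy quadratic misfit standing in for the heart-rate model error; the gain, noise schedule and cost function are illustrative assumptions, not the ALOPEX IV settings of the study.

```python
# Hedged sketch of an ALOPEX-style correlation update minimizing a toy cost.
import numpy as np

def cost(p):
    return np.sum((p - np.array([2.0, -1.0, 0.5])) ** 2)   # placeholder model misfit

rng = np.random.default_rng(3)
p_prev = rng.normal(size=3)
p = p_prev + rng.normal(scale=0.1, size=3)
c_prev, c = cost(p_prev), cost(p)

gamma, temp = 0.5, 0.5
for _ in range(2000):
    dp, dc = p - p_prev, c - c_prev
    step = -gamma * dp * dc + rng.normal(scale=temp, size=3)  # correlate, then perturb
    p_prev, c_prev = p, c
    p, c = p + step, cost(p + step)
    temp *= 0.995                                             # anneal the noise
print(p, c)
```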
Atrial fibrillation detection using an iPhone 4S.
Lee, Jinseok; Reyes, Bersain A; McManus, David D; Maitas, Oscar; Mathias, Oscar; Chon, Ki H
2013-01-01
Atrial fibrillation (AF) affects three to five million Americans and is associated with significant morbidity and mortality. Existing methods to diagnose this paroxysmal arrhythmia are cumbersome and/or expensive. We hypothesized that an iPhone 4S can be used to detect AF based on its ability to record a pulsatile photoplethysmogram signal from a fingertip using the built-in camera lens. To investigate the capability of the iPhone 4S for AF detection, we first used two databases, the MIT-BIH AF and normal sinus rhythm (NSR) databases, to derive discriminatory threshold values between the two rhythms. Both databases include RR time series originating from 250 Hz sampled ECG recordings. We rescaled the RR time series to 30 Hz so that the RR time series resolution is 1/30 s, which is equivalent to the resolution from an iPhone 4S. We investigated three statistical methods: the root mean square of successive differences (RMSSD), the Shannon entropy (ShE) and the sample entropy (SampE), which have proven to be useful tools for AF assessment. Using 64-beat segments from the MIT-BIH databases, we found beat-to-beat accuracy values of 0.9405, 0.9300, and 0.9614 for RMSSD, ShE, and SampE, respectively. Using an iPhone 4S, we collected 2-min pulsatile time series from 25 prospectively recruited subjects with AF pre- and post-electrical cardioversion. Using the threshold values of RMSSD, ShE and SampE derived from the MIT-BIH databases, we found beat-to-beat accuracies of 0.9844, 0.8494, and 0.9522, respectively. It should be recognized that for clinical applications, the most relevant objective is to detect the presence of AF in the data. Using this criterion, we achieved an accuracy of 100% for both the MIT-BIH AF and iPhone 4S databases.
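Two of the three statistics named above are easy to state compactly. Here is a sketch computing RMSSD and sample entropy on a synthetic 64-beat RR segment; the segment, the template length m = 1 and the tolerance r = 0.2·SD are illustrative conventions, and the paper's MIT-BIH-derived decision thresholds are not reproduced.

```python
# Hedged sketch: RMSSD and sample entropy of a 64-beat RR segment.
import numpy as np

def rmssd(rr):
    d = np.diff(rr)
    return np.sqrt(np.mean(d ** 2))

def sampen(rr, m=1, r=None):
    rr = np.asarray(rr, float)
    r = 0.2 * rr.std() if r is None else r
    def count(mm):
        templ = np.array([rr[i:i + mm] for i in range(len(rr) - mm)])
        c = 0
        for i in range(len(templ)):
            dist = np.max(np.abs(templ - templ[i]), axis=1)   # Chebyshev distance
            c += np.sum(dist <= r) - 1                        # exclude self-match
        return c
    return -np.log(count(m + 1) / count(m))

rr = np.random.default_rng(4).normal(0.8, 0.05, 64)  # 64 synthetic RR intervals (s)
print("RMSSD:", rmssd(rr), " SampEn:", sampen(rr, m=1))
```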
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1983-10-04
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20-800. 6 figs.
49 CFR 571.126 - Standard No. 126; Electronic stability control systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... counterclockwise steering, and the other series uses clockwise steering. The maximum time permitted between each... or side slip derivative with respect to time; (4) That has a means to monitor driver steering inputs... dwell steering input (time T0 + 1 in Figure 1) must not exceed 35 percent of the first peak value of yaw...
49 CFR 571.126 - Standard No. 126; Electronic stability control systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... counterclockwise steering, and the other series uses clockwise steering. The maximum time permitted between each... or side slip derivative with respect to time; (4) That has a means to monitor driver steering inputs... dwell steering input (time T0 + 1 in Figure 1) must not exceed 35 percent of the first peak value of yaw...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Management Reports, the NASA form 533 series, is required on cost-type, price redetermination, and fixed... implemented in the contract based on the estimated final contract value at the time of award. [62 FR 14017...
Resolution Enhancement of MODIS-derived Water Indices for Studying Persistent Flooding
NASA Astrophysics Data System (ADS)
Underwood, L. W.; Kalcic, M. T.; Fletcher, R. M.
2012-12-01
Monitoring coastal marshes for persistent flooding and salinity stress is a high priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offers timely and relatively accurate information for aiding wetland vegetation management. Monitoring accuracy is often limited by mixed pixels, which occur when the area represented by a pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within a 250 m MODIS pixel. Fusion of MODIS and Landsat can improve both the spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, a method for estimating sub-pixel values from a MODIS time series of a Normalized Difference Water Index (NDWI), using temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily NDWI computed from daily 250 m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30 m data. Each MODIS pixel was disaggregated into a mixture of eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. A new daily time series was produced using both the reconstructed 250 m MODIS data, with enhanced features, and the approximated daily 30 m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike, to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering flooding events from Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High-resolution images were formed for all days in 2008 between the first and last cloud-free Landsat scenes. To refine and validate flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels adjusted to marsh level to optimize thresholds for the MODIS-derived NDWI time series. Seasonal fluctuations were adjusted for by subtracting the ten-year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and a combination of indices were compared. Flooding persistence mapped with the higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is that it provides improved mapping of the extent and duration of inundation.
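The disaggregation step can be illustrated for a single pixel. Below is a toy sketch of the exponential temporal weighting between the bracketing Landsat scenes and the recombination of per-class NDWI with cover fractions; every number (fractions, per-class means, day offsets, decay scale) is hypothetical.

```python
# Hedged sketch: temporally weighted per-class NDWI recombined into one MODIS pixel.
import numpy as np

frac = np.array([0.6, 0.3, 0.1])             # cover-type fractions inside one MODIS pixel
ndwi_before = np.array([0.15, 0.55, -0.05])  # per-class Landsat NDWI means, earlier scene
ndwi_after = np.array([0.25, 0.60, 0.00])    # per-class means, later scene

dt_before, dt_after, tau = 6.0, 10.0, 8.0    # days to each Landsat scene; decay scale
w_b, w_a = np.exp(-dt_before / tau), np.exp(-dt_after / tau)
ndwi_class = (w_b * ndwi_before + w_a * ndwi_after) / (w_b + w_a)

ndwi_modis = np.sum(frac * ndwi_class)       # reconstructed 250 m pixel value
print(ndwi_modis)
```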
Deterministic chaos in atmospheric radon dynamics
NASA Astrophysics Data System (ADS)
Cuculeanu, Vasile; Lupu, Alexandru
2001-08-01
The correlation dimension and Lyapunov exponents have been calculated for two time series of atmospheric radon daughter concentrations obtained from four daily measurements during the period 1993-1996. About 6000 activity concentration values of 222Rn and 220Rn daughters were used. The measuring method is based on aerosol collection on filters. In order to determine the filter activity, a low-background gross beta measuring device with Geiger-Müller counter tubes in anticoincidence was used. The small noninteger value of the correlation dimension (≃2.2) and the existence of a positive Lyapunov exponent prove that deterministic chaos is present in the time series of atmospheric 220Rn daughters. This shows that a simple diffusion equation with a parameterized turbulent diffusion coefficient is insufficient for describing the dynamics in the near-ground layer, where turbulence is not fully developed and coherent structures dominate. The analysis of the 222Rn series confirms that the dynamics of the boundary layer cannot be described by a system of ordinary differential equations with a low number of independent variables.
Huang, Xin; Zeng, Jun; Zhou, Lina; Hu, Chunxiu; Yin, Peiyuan; Lin, Xiaohui
2016-08-31
Time-series metabolomics studies can provide insight into the dynamics of disease development and facilitate the discovery of prospective biomarkers. To improve the performance of early risk identification, a new strategy for analyzing time-series data based on dynamic networks (ATSD-DN) in a systematic time dimension is proposed. In ATSD-DN, the non-overlapping ratio was applied to measure the changes in feature ratios during the process of disease development and to construct dynamic networks. Dynamic concentration analysis and network topological structure analysis were performed to extract early warning information. This strategy was applied to the study of time-series lipidomics data from a stepwise hepatocarcinogenesis rat model. A ratio of lyso-phosphatidylcholine (LPC) 18:1/free fatty acid (FFA) 20:5 was identified as the potential biomarker for hepatocellular carcinoma (HCC). It can be used to classify HCC and non-HCC rats, and the area under the curve values in the discovery and external validation sets were 0.980 and 0.972, respectively. This strategy was also compared with a weighted relative difference accumulation algorithm (wRDA), multivariate empirical Bayes statistics (MEBA) and support vector machine-recursive feature elimination (SVM-RFE). The better performance of ATSD-DN suggests its potential for a more complete presentation of time-series changes and effective extraction of early warning information.
Meteorological factors for PM10 concentration levels in Northern Spain
NASA Astrophysics Data System (ADS)
Santurtún, Ana; Mínguez, Roberto; Villar-Fernández, Alejandro; González Hidalgo, Juan Carlos; Zarrabeitia, María Teresa
2013-04-01
Atmospheric particulate matter (PM) is made up of a mixture of solid and aqueous species which enter the atmosphere by anthropogenic and natural pathways. The levels and composition of ambient air PM depend on the climatology and on the geography (topography, soil cover, proximity to arid zones or to the coast) of a given region. Spain has particular difficulties in achieving compliance with the limit values established by the European Union (based on recommendations from the World Health Organization) for particulate matter on the order of 10 micrometers in diameter or less (PM10), but anthropogenic emissions are not solely responsible for this: some studies show that PM10 concentrations originating from these kinds of sources are similar to what is found in other European countries, while some of the geographical features of the Iberian Peninsula (such as African mineral dust intrusion, soil aridity or rainfall) are proven to be a factor in higher PM concentrations. This work aims to describe PM10 concentration levels in Cantabria (Northern Spain) and their relationship with the following meteorological variables: rainfall, solar radiation, temperature, barometric pressure and wind speed. The data consist of daily series, obtained from hourly records for the 2000-2010 period, of PM10 concentrations from 4 urban-background stations, and daily series of the meteorological variables provided by the Spanish National Meteorology Agency. The method used for establishing the relationships between these variables consists of several steps: i) fitting a non-stationary probability density function for each variable, accounting for long-term trends, seasonality during the year and possible seasonality during the week to distinguish between work and weekend days; ii) using the marginal distribution function obtained, transforming the time series of historical values of each variable into a normalized Gaussian time series, a step that allows time series models to be used consistently; iii) fitting a time series model (autoregressive moving average, ARMA) to the transformed historical values in order to eliminate the temporal autocorrelation structure of each stochastic process, obtaining a white noise for each variable; and finally, iv) calculating cross correlations between the white noises at different time lags. These cross correlations allow characterization of the true correlation between signals, avoiding the problems induced by data scaling or by the autocorrelations inherent to each signal. Results provide the relationship and possible contribution to PM10 concentration levels associated with each meteorological variable. This information can be used to improve the forecasting of PM10 concentration levels using existing meteorological forecasts.
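Steps ii)-iv) of this pipeline can be compressed into a few lines. In the hedged sketch below, a stationary empirical CDF stands in for the non-stationary fitted distribution of step i), a low-order ARMA model does the prewhitening, and the white-noise residuals are cross-correlated at a few lags; the two synthetic series and the ARMA order are assumptions for illustration.

```python
# Hedged sketch: Gaussianize, ARMA-prewhiten, then cross-correlate two series.
import numpy as np
from scipy.stats import norm, rankdata
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
pm10 = np.cumsum(rng.normal(size=500)) + rng.normal(size=500)
wind = -0.3 * pm10 + rng.normal(size=500)          # toy meteorological driver

def to_white_noise(x):
    u = rankdata(x) / (len(x) + 1)                 # empirical CDF values in (0, 1)
    z = norm.ppf(u)                                # normalized Gaussian series
    return ARIMA(z, order=(2, 0, 1)).fit().resid   # ARMA prewhitening -> white noise

e1, e2 = to_white_noise(pm10), to_white_noise(wind)
for lag in range(-2, 3):
    c = np.corrcoef(e1[max(0, lag):len(e1) + min(0, lag)],
                    e2[max(0, -lag):len(e2) + min(0, -lag)])[0, 1]
    print(f"lag {lag:+d}: r = {c:+.3f}")
```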
A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis
NASA Astrophysics Data System (ADS)
Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz
2018-04-01
For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove the Common Mode Error (CME) without interpolation of missing values. We used data from International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series; then CME was estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset share fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed), which, compared to the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of stations' velocities estimated from filtered residuals, by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analyzed in the context of environmental mass loading influences on the filtering results. Subtracting the environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
Development and Testing of Data Mining Algorithms for Earth Observation
NASA Technical Reports Server (NTRS)
Glymour, Clark
2005-01-01
The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables (called the Markov Blanket) sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm, developed and implemented in TETRAD IV for time series, elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms for the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.
Seasonal to multi-decadal trends in apparent optical properties in the Sargasso Sea
NASA Astrophysics Data System (ADS)
Allen, James G.; Nelson, Norman B.; Siegel, David A.
2017-01-01
Multi-decadal, monthly observations of optical and biogeochemical properties, made as part of the Bermuda Bio-Optics Project (BBOP) at the Bermuda Atlantic Time-series Study (BATS) site in the Sargasso Sea, allow for the examination of temporal trends in vertical light attenuation and their potential controls. Trends in the magnitude of the diffuse attenuation coefficient, Kd(λ), and a proxy for its spectral shape reflect changes in phytoplankton and chromophoric dissolved organic matter (CDOM) characteristics. The length and methodological consistency of this time series provide an excellent opportunity to extend analyses of seasonal cycles of apparent optical properties to interannual and decadal time scales. Here, we characterize changes in the magnitude and spectral shape proxy of diffuse attenuation coefficient spectra and compare them to available biological and optical data from the BATS time series program. The time series analyses reveal a 1.01%±0.18% annual increase of the magnitude of the diffuse attenuation coefficient at 443 nm over the upper 75 m of the water column while showing no significant change in selected spectral characteristics over the study period. These and other observations indicate that changes in phytoplankton rather than changes in CDOM abundance are the primary driver for the diffuse attenuation trends on multi-year timescales for this region. Our findings are inconsistent with previous decadal-scale global ocean water clarity and global satellite ocean color analyses yet are consistent with recent analyses of the BATS time series and highlight the value of long-term consistent observation at ocean time series sites.
Reproducibility of geochemical and climatic signals in the Atlantic coral Montastraea faveolata
Smith, Joseph M.; Quinn, T.M.; Helmle, K.P.; Halley, R.B.
2006-01-01
Monthly resolved, 41-year-long stable isotopic and elemental ratio time series were generated from two separate heads of Montastraea faveolata from Looe Key, Florida, to assess the fidelity of using geochemical variations in Montastraea, the dominant reef-building coral of the Atlantic, to reconstruct sea surface environmental conditions at this site. The stable isotope time series of the two corals replicate well; mean values of δ18O and δ13C are indistinguishable between cores (compare 0.70‰ versus 0.68‰ for δ13C and -3.90‰ versus -3.94‰ for δ18O). Mean values from the Sr/Ca time series differ by 0.037 mmol/mol, which is outside of analytical error and indicates that nonenvironmental factors are influencing the coral Sr/Ca records at Looe Key. We have generated significant δ18O-sea surface temperature (SST) (R = -0.84) and Sr/Ca-SST (R = -0.86) calibration equations at Looe Key; however, these equations are different from previously published equations for Montastraea. Variations in growth parameters or kinetic effects are not sufficient to explain either the observed differences in the mean offset between Sr/Ca time series or the disagreement between previous calibrations and our calculated δ18O-SST and Sr/Ca-SST relationships. Calibration differences are most likely due to variations in seawater chemistry in the continentally influenced waters at Looe Key. Additional geochemical replication studies of Montastraea are needed and should include multiple coral heads from open ocean localities complemented whenever possible by seawater chemistry determinations. Copyright 2006 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Mehrvand, Masoud; Baghanam, Aida Hosseini; Razzaghzadeh, Zahra; Nourani, Vahid
2017-04-01
Since statistical downscaling methods are the most widely used models in hydrologic impact studies under climate change scenarios, nonlinear regression models known as Artificial Intelligence (AI)-based models, such as the Artificial Neural Network (ANN) and Support Vector Machine (SVM), have been used to spatially downscale the precipitation outputs of Global Climate Models (GCMs). The study has been carried out using GCM and station data over GCM grid points located around the Peace-Tampa Bay watershed weather stations. Before downscaling with the AI-based model, correlation coefficients were computed between a few selected large-scale predictor variables and local-scale predictands to select the most effective predictors. The selected predictors are then assessed considering the grid location of the site in question. In order to increase the accuracy of the AI-based downscaling model, pre-processing was applied to the precipitation time series. In this way, the precipitation data derived from various GCMs were analyzed thoroughly to find the highest correlation coefficient between GCM-based historical data and station precipitation data. Both GCM and station precipitation time series were assessed by comparing means and variances over specific intervals. Results indicated that there is a similar trend between GCM and station precipitation data; however, the station data form a non-stationary time series while the GCM data do not. Finally, the AI-based downscaling model was applied to several GCMs with the selected predictors, targeting the local precipitation time series as predictand. The outputs of this step were used to produce multiple ensembles of downscaled AI-based models.
Visual Analysis among Novices: Training and Trend Lines as Graphic Aids
ERIC Educational Resources Information Center
Nelson, Peter M.; Van Norman, Ethan R.; Christ, Theodore J.
2017-01-01
The current study evaluated the degree to which novice visual analysts could discern trends in simulated time-series data across differing levels of variability and extreme values. Forty-five novice visual analysts were trained in general principles of visual analysis. One group received brief training on how to identify and omit extreme values.…
NASA Astrophysics Data System (ADS)
Zheng, Jinde; Pan, Haiyang; Cheng, Junsheng
2017-02-01
To detect the incipient failure of rolling bearings in a timely manner and find the accurate fault location, a novel rolling bearing fault diagnosis method is proposed based on composite multiscale fuzzy entropy (CMFE) and ensemble support vector machines (ESVMs). Fuzzy entropy (FuzzyEn), as an improvement of sample entropy (SampEn), is a new nonlinear method for measuring the complexity of time series. Since FuzzyEn (or SampEn) at a single scale cannot reflect the complexity effectively, multiscale fuzzy entropy (MFE) is developed by defining the FuzzyEns of coarse-grained time series, which represent the system dynamics at different scales. However, the MFE values will be affected by the data length, especially when the data are not long enough. By combining the information of the multiple coarse-grained time series at the same scale, the CMFE algorithm is proposed in this paper to enhance MFE, as well as FuzzyEn. Compared with MFE, with increasing scale factor, CMFE obtains much more stable and consistent values for a short-term time series. In this paper CMFE is employed to measure the complexity of vibration signals of rolling bearings and is applied to extract the nonlinear features hidden in the vibration signals. The physical reasons why CMFE is suitable for rolling bearing fault diagnosis are also explored. Based on these, to accomplish automatic fault diagnosis, an ensemble-SVM-based multi-classifier is constructed for the intelligent classification of fault features. Finally, the proposed fault diagnosis method is applied to experimental data analysis and the results indicate that it can effectively distinguish different fault categories and severities of rolling bearings.
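A hedged sketch of the CMFE construction as described: fuzzy entropy with an exponential membership function is computed on every offset coarse-grained series at a given scale, and the results are averaged. The parameter choices (m = 2, r = 0.15·SD, membership exponent n = 2) follow common FuzzyEn conventions and are assumptions, not necessarily the paper's settings.

```python
# Hedged sketch: composite multiscale fuzzy entropy (CMFE) of a 1-D signal.
import numpy as np

def fuzzy_en(x, m=2, r=0.15, n=2):
    x = np.asarray(x, float)
    r = r * x.std()
    def phi(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        templ = templ - templ.mean(axis=1, keepdims=True)     # remove local baseline
        s = 0.0
        for i in range(len(templ)):
            d = np.max(np.abs(templ - templ[i]), axis=1)      # Chebyshev distance
            mu = np.exp(-(d ** n) / r)                        # fuzzy membership
            s += (mu.sum() - 1) / (len(templ) - 1)            # exclude self-match
        return s / len(templ)
    return np.log(phi(m)) - np.log(phi(m + 1))

def cmfe(x, scale, **kw):
    # average FuzzyEn over all 'scale' offset coarse-grained series at this scale
    vals = []
    for k in range(scale):
        seg = x[k:len(x) - (len(x) - k) % scale]
        cg = seg.reshape(-1, scale).mean(axis=1)
        vals.append(fuzzy_en(cg, **kw))
    return np.mean(vals)

x = np.random.default_rng(6).normal(size=1024)   # stand-in for a vibration signal
print([round(cmfe(x, s), 3) for s in (1, 2, 3)])
```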
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2016-04-01
Geodetic/geophysical observations, such as time series of global terrestrial water storage change or of sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and, more recently, independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation; the complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part; (ii) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set of (i); (iii) dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the amplitude and phase propagation in space and time. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm. Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5.
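Step (i), the complexification, is the easiest part to illustrate. The minimal sketch below builds the analytic signal of a toy series with scipy, giving the real/imaginary structure that CICA then decomposes; the cumulant-based ICA step itself is not reproduced here.

```python
# Hedged sketch: complexify a series via the Hilbert transform (CICA step (i)).
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 10, 1000)
x = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.1 * t)   # toy non-stationary signal
z = hilbert(x)                                       # analytic signal: x + i*H[x]

amplitude = np.abs(z)                                # time-varying amplitude
phase = np.unwrap(np.angle(z))                       # propagating phase
```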
Akkar, Sinan; Boore, David M.
2009-01-01
Most digital accelerograph recordings are plagued by long-period drifts, best seen in the velocity and displacement time series obtained from integration of the acceleration time series. These drifts often result in velocity values that are nonzero near the end of the record. This is clearly unphysical and can lead to inaccurate estimates of peak ground displacement and long-period spectral response. The source of the long-period noise seems to be variations in the acceleration baseline in many cases. These variations could be due to true ground motion (tilting and rotation, as well as local permanent ground deformation), instrumental effects, or analog-to-digital conversion. Very often the trends in velocity are well approximated by a linear trend after the strong shaking subsides. The linearity of the trend in velocity implies that no variations in the baseline could have occurred after the onset of linearity in the velocity time series. This observation, combined with the lack of any trends in the pre-event motion, allows us to compute the time interval in which any baseline variations could occur. We then use several models of the variations in a Monte Carlo procedure to derive a suite of baseline-corrected accelerations for each noise model using records from the 1999 Chi-Chi earthquake and several earthquakes in Turkey. Comparisons of the mean values of the peak ground displacements, spectral displacements, and residual displacements computed from these corrected accelerations for the different noise models can be used as a guide to the accuracy of the baseline corrections. For many of the records considered here the mean values are similar for each noise model, giving confidence in the estimation of the mean values. The dispersion of the ground-motion measures increases with period and is noise-model dependent. The dispersion of inelastic spectra is greater than the elastic spectra at short periods but approaches that of the elastic spectra at longer periods. The elastic spectra from the most basic processing, in which only the pre-event mean is removed from the acceleration time series, do not diverge from the baseline-corrected spectra until periods of 10-20 sec or more for the records studied here, implying that for many engineering purposes elastic spectra can be used from records with no baseline correction or filtering.
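The diagnostic that drives the Monte Carlo correction scheme (a nearly linear velocity trend after the shaking) is simple to sketch. Below is a toy example, assuming a synthetic record with a small baseline step: the pre-event mean is removed, acceleration is integrated to velocity, and the late linear trend is fitted; its slope estimates the baseline shift and its onset bounds when the shift can have occurred. The window times and one-step shift model are illustrative assumptions, not the paper's noise models.

```python
# Hedged sketch: velocity-trend diagnostic and a one-step baseline correction.
import numpy as np
from scipy.integrate import cumulative_trapezoid

dt = 0.01
t = np.arange(0, 60, dt)
rng = np.random.default_rng(7)
acc = rng.normal(0, 0.02, t.size)
acc[1000:2000] += np.sin(2 * np.pi * 1.0 * t[1000:2000])  # "strong shaking" interval
acc[1500:] += 0.005                                       # small hidden baseline shift

acc = acc - acc[:1000].mean()                  # remove pre-event mean
vel = cumulative_trapezoid(acc, dx=dt, initial=0.0)

late = t > 30.0                                # window after shaking subsides
slope, intercept = np.polyfit(t[late], vel[late], 1)
acc_corr = acc.copy()
acc_corr[t > 15.0] -= slope                    # subtract inferred baseline step
```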
NASA Astrophysics Data System (ADS)
Scholkmann, Felix; Cifra, Michal; Alexandre Moraes, Thiago; de Mello Gallep, Cristiano
2011-12-01
The aim of the present study was to test whether the multifractal properties of ultra-weak photon emission (UPE) from germinating wheat seedlings (Triticum aestivum) change when the seedlings are treated with different concentrations of the toxin potassium dichromate (PD). To this end, UPE was measured (50 seedlings in one Petri dish, duration: approx. 16.6-28 h) from samples of three groups: (i) control (group C, N = 9), (ii) treated with 25 ppm of PD (group G25, N = 32), and (iii) treated with 150 ppm of PD (group G150, N = 23). For the multifractal analysis, the following steps were performed: (i) each UPE time series was trimmed to a final length of 1000 min; (ii) each UPE time series was filtered, linearly detrended and normalized; (iii) the multifractal spectrum (f(α)) was calculated for every UPE time series using the backward multifractal detrended moving average (MFDMA) method; (iv) each multifractal spectrum was characterized by calculating the mode (αmode) of the spectrum and the degree of multifractality (Δα); (v) for every UPE time series its mean, skewness and kurtosis were also calculated; finally, (vi) all obtained parameters were analyzed to determine their ability to differentiate between the three groups. This was based on Fisher's discriminant ratio (FDR), which was calculated for each parameter combination. Additionally, a non-parametric test was used to test whether the parameter values differ significantly. The analysis showed that when comparing all three groups, FDR had the highest values for the multifractal parameters (αmode, Δα). Furthermore, the differences in these parameters between the groups were statistically significant (p < 0.05). The classical parameters (mean, skewness and kurtosis) had lower FDR values than the multifractal parameters in all cases and showed no significant difference between the groups (except for the skewness between groups C and G150). In conclusion, multifractal analysis enables changes in UPE time series to be detected even when they are hidden from normal linear signal analysis methods. The analysis of changes in the multifractal properties might be a basis for designing a classification system enabling the intoxication of cell cultures to be quantified based on UPE measurements.
Volcanic hazard assessment for the Canary Islands (Spain) using extreme value theory
NASA Astrophysics Data System (ADS)
Sobradelo, R.; Martí, J.; Mendoza-Rosas, A. T.; Gómez, G.
2011-10-01
The Canary Islands are an active volcanic region, densely populated and visited by several million tourists every year. Nearly twenty eruptions have been reported in written chronicles in the last 600 yr, suggesting that the probability of a new eruption in the near future is far from zero. This shows the importance of assessing and monitoring the volcanic hazard of the region in order to reduce and manage its potential volcanic risk, and ultimately contribute to the design of appropriate preparedness plans. Hence, the probabilistic analysis of the volcanic eruption time series for the Canary Islands is an essential step for the assessment of volcanic hazard and risk in the area. Such a series describes complex processes involving different types of eruptions over different time scales. Here we propose a statistical method for calculating the probabilities of future eruptions which is most appropriate given the nature of the documented historical eruptive data. We first characterize the eruptions by their magnitudes, and then carry out a preliminary analysis of the data to establish the requirements for the statistical method. Past studies of eruptive time series used conventional statistics and treated the series as a homogeneous process. In this paper, we use a method that accounts for the time-dependence of the series and includes rare or extreme events, in the form of few data of large eruptions, since these data require special methods of analysis. Hence, we use a statistical method from extreme value theory. In particular, we apply a non-homogeneous Poisson process to the historical eruptive data of the Canary Islands to estimate the probability of having at least one volcanic event of a magnitude greater than one in the upcoming years. This is done in three steps: first, we analyze the historical eruptive series to assess independence and homogeneity of the process. Second, we perform a Weibull analysis of the distribution of repose times between successive eruptions. Third, we analyze the non-homogeneous Poisson process with a generalized Pareto distribution as the intensity function.
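The second step, the Weibull description of repose times, can be sketched directly with scipy; the repose intervals below are invented for illustration, and the generalized-Pareto intensity of the final non-homogeneous Poisson step is not reproduced.

```python
# Hedged sketch: Weibull fit to repose times and a simple event probability.
import numpy as np
from scipy.stats import weibull_min

repose_years = np.array([12.0, 31.0, 55.0, 9.0, 73.0, 27.0, 44.0, 18.0])  # invented
shape, loc, scale = weibull_min.fit(repose_years, floc=0)

horizon = 50.0
p_event = 1.0 - weibull_min.sf(horizon, shape, loc=loc, scale=scale)
print(f"P(repose <= {horizon:.0f} yr) = {p_event:.2f}")
```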
Wu, Wei-Sheng; Jhou, Meng-Jhun
2017-01-13
Missing value imputation is important for microarray data analyses because microarray data with missing values would significantly degrade the performance of the downstream analyses. Although many microarray missing value imputation algorithms have been developed, an objective and comprehensive performance comparison framework is still lacking. To solve this problem, we previously proposed a framework which can perform a comprehensive performance comparison of different existing algorithms. Also the performance of a new algorithm can be evaluated by our performance comparison framework. However, constructing our framework is not an easy task for the interested researchers. To save researchers' time and efforts, here we present an easy-to-use web tool named MVIAeval (Missing Value Imputation Algorithm evaluator) which implements our performance comparison framework. MVIAeval provides a user-friendly interface allowing users to upload the R code of their new algorithm and select (i) the test datasets among 20 benchmark microarray (time series and non-time series) datasets, (ii) the compared algorithms among 12 existing algorithms, (iii) the performance indices from three existing ones, (iv) the comprehensive performance scores from two possible choices, and (v) the number of simulation runs. The comprehensive performance comparison results are then generated and shown as both figures and tables. MVIAeval is a useful tool for researchers to easily conduct a comprehensive and objective performance evaluation of their newly developed missing value imputation algorithm for microarray data or any data which can be represented as a matrix form (e.g. NGS data or proteomics data). Thus, MVIAeval will greatly expedite the progress in the research of missing value imputation algorithms.
Li, Xiaoning; Huang, Lijun; Hu, Xiche; Huang, Xuefei
2009-01-01
Three series of thioglycosyl donors, differing only in their respective aglycon substituents within each series, have been prepared as representatives of typical glycosyl donors. The relative anomeric reactivities of these donors were quantified under competitive glycosylation conditions with various reaction times, promoters, solvents and acceptors. A reactivity difference of over three orders of magnitude was generated by simple transformation of the para-substituent on the aglycon with methanol as the acceptor, while chemoselectivities became lower with carbohydrate acceptors. Excellent linear correlations were attained between relative reactivity values of donors and σp values of the substituents in the Hammett plots. This indicates that the glycosylation mechanism remains the same over a wide range of reactivities and glycosylation conditions. The negative slopes of the Hammett plots suggest that electron-donating substituents expedite the reactions, and the magnitudes of the slopes can be rationalized by neighboring group participation as well as the electronic properties of the glycon protective groups. Within the same series of donors, less nucleophilic acceptors gave smaller slopes in their Hammett plots. This is consistent with the notion that nucleophilic attack by the acceptor onto the reactive intermediate is part of the rate-limiting step of the glycosylation reaction. Excellent linear Hammett correlations were obtained between the relative reactivity values of the three series of donors differing only in their aglycon substituents and the σp values of the substituents. PMID:19081954
Local sample thickness determination via scanning transmission electron microscopy defocus series.
Beyer, A; Straubinger, R; Belz, J; Volz, K
2016-05-01
The usable aperture sizes in (scanning) transmission electron microscopy ((S)TEM) have significantly increased in the past decade due to the introduction of aberration correction. In parallel with the consequent increase in convergence angle, the depth of focus has decreased severely and optical sectioning in the STEM has become feasible. Here we apply STEM defocus series to derive the local sample thickness of a TEM sample. To this end, experimental as well as simulated defocus series of thin Si foils were acquired. The systematic blurring of high-resolution high-angle annular dark field images is quantified by evaluating the standard deviation of the image intensity for each image of a defocus series. The derived dependencies exhibit a pronounced maximum at the optimum defocus and drop to a background value for higher or lower values. The full width at half maximum (FWHM) of the curve is equal to the sample thickness, above a minimum thickness given by the size of the used aperture and the chromatic aberration of the microscope. The thicknesses obtained from experimental defocus series applying the proposed method are in good agreement with the values derived from other established methods. The key advantages of this method compared to others are its high spatial resolution and that it does not involve any time-consuming simulations. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.
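The thickness read-out itself is a one-liner once the per-image intensity standard deviations are in hand. Below is a toy sketch, assuming a synthetic Gaussian-like focus curve in place of measured HAADF images: the FWHM of the std-versus-defocus curve is taken as the thickness estimate.

```python
# Hedged sketch: FWHM of an intensity-std vs. defocus curve as a thickness proxy.
import numpy as np

defocus = np.arange(-40, 41, 2.0)                         # nm (illustrative grid)
std_curve = 0.02 + 0.08 * np.exp(-(defocus / 12.0) ** 2)  # stand-in for per-image std

half = std_curve.min() + 0.5 * (std_curve.max() - std_curve.min())
above = defocus[std_curve >= half]
fwhm = above.max() - above.min()                          # ~ sample thickness (nm)
print(f"FWHM ~ {fwhm:.1f} nm")
```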
Baccini, Michela; Carreras, Giulia
2014-10-01
This paper describes the methods used to investigate variations in total alcoholic beverage consumption as related to selected control intervention policies and other socioeconomic factors (unplanned factors) within 12 European countries involved in the AMPHORA project. The analysis presented several critical points: the presence of missing values, strong correlation among the unplanned factors, and long-term waves or trends in both the time series of alcohol consumption and the time series of the main explanatory variables. These difficulties were addressed by implementing a multiple imputation procedure for filling in missing values, and then specifying for each country a multiple regression model which accounted for time trend, policy measures and a limited set of unplanned factors selected in advance on the basis of sociological and statistical considerations. This approach allowed estimating the "net" effect of the selected control policies on alcohol consumption, but not the association between each unplanned factor and the outcome.
Improvements to surrogate data methods for nonstationary time series.
Lucio, J H; Valdés, R; Rodríguez, L R
2012-05-01
The method of surrogate data has been extensively applied to hypothesis testing of system linearity, when only one realization of the system, a time series, is known. Normally, surrogate data should preserve the linear stochastic structure and the amplitude distribution of the original series. Classical surrogate data methods (such as random permutation, amplitude adjusted Fourier transform, or iterative amplitude adjusted Fourier transform) are successful at preserving one or both of these features in stationary cases. However, they always produce stationary surrogates, hence existing nonstationarity could be interpreted as dynamic nonlinearity. Certain modifications have been proposed that additionally preserve some nonstationarity, at the expense of reproducing a great deal of nonlinearity. However, even those methods generally fail to preserve the trend (i.e., global nonstationarity in the mean) of the original series. This is the case of time series with unit roots in their autoregressive structure. Additionally, those methods, based on Fourier transform, either need first and last values in the original series to match, or they need to select a piece of the original series with matching ends. These conditions are often inapplicable and the resulting surrogates are adversely affected by the well-known artefact problem. In this study, we propose a simple technique that, applied within existing Fourier-transform-based methods, generates surrogate data that jointly preserve the aforementioned characteristics of the original series, including (even strong) trends. Moreover, our technique avoids the negative effects of end mismatch. Several artificial and real, stationary and nonstationary, linear and nonlinear time series are examined, in order to demonstrate the advantages of the methods. Corresponding surrogate data are produced with the classical and with the proposed methods, and the results are compared.
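The core proposal (surrogates that keep the linear structure and the trend) can be approximated in a few lines. The following is a hedged sketch, not the authors' exact technique: the trend is separated first, a classical phase-randomized surrogate is built for the detrended part, and the trend is restored; the end-matching refinements discussed above are omitted.

```python
# Hedged sketch: trend-preserving phase-randomized surrogate of a time series.
import numpy as np

rng = np.random.default_rng(8)
n = 1024
x = 0.01 * np.arange(n) + np.sin(2 * np.pi * np.arange(n) / 50) + rng.normal(0, 0.3, n)

trend = np.polyval(np.polyfit(np.arange(n), x, 1), np.arange(n))
resid = x - trend

spec = np.fft.rfft(resid)
phases = rng.uniform(0, 2 * np.pi, spec.size)
phases[0] = 0.0                                  # keep the mean untouched
surr_resid = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

surrogate = trend + surr_resid                   # linear structure and trend preserved
```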
NASA Technical Reports Server (NTRS)
Cox, Christopher M.; Chao, Benjamin F.; Au, Andrew Y.; Boy, J.-P.
2003-01-01
The oblateness of the Earth's gravity field, J2, has long been observed to undergo a slight decrease due to post-glacial rebound of the mantle. Sometime around 1998 this trend reversed quite suddenly. This reversal persisted until 2001, at which point the atmosphere-corrected time series appears to have reversed yet again. Presently, the time series appears to be returning to the value that would nominally have been reached had the anomaly not occurred. This anomaly signifies a large interannual change in global mass distribution whose J2 effect overshadows that of the post-glacial rebound over such timescales. A number of possible causes have been considered, with oceanic mass redistribution as the leading candidate, although other effects, such as glacial melting and core effects, may be contributing. The amount by which J2 returns to its nominal value provides a valuable constraint on the separation of the causes, and will be considered. We will present our latest Satellite Laser Ranging and DORIS Doppler derived time series for J2, and various other low-degree harmonic terms, as well as our investigations into the causes. In addition, we will show the comparison of the J2 results with those derived from CHAMP, as computed at NASA GSFC, and the recently released GRACE gravity model.
A wrinkle in time: asymmetric valuation of past and future events.
Caruso, Eugene M; Gilbert, Daniel T; Wilson, Timothy D
2008-08-01
A series of studies shows that people value future events more than equivalent events in the equidistant past. Whether people imagined being compensated or compensating others, they required and offered more compensation for events that would take place in the future than for identical events that had taken place in the past. This temporal value asymmetry (TVA) was robust in between-persons comparisons and absent in within-persons comparisons, which suggests that participants considered the TVA irrational. Contemplating future events produced greater affect than did contemplating past events, and this difference mediated the TVA. We suggest that the TVA, the gain-loss asymmetry, and hyperbolic time discounting can be unified in a three-dimensional value function that describes how people value gains and losses of different magnitudes at different moments in time.
Approximate scaling properties of RNA free energy landscapes
NASA Technical Reports Server (NTRS)
Baskaran, S.; Stadler, P. F.; Schuster, P.
1996-01-01
RNA free energy landscapes are analysed by means of "time-series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yardstick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
Relation between delayed feedback and delay-coupled systems and its application to chaotic lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soriano, Miguel C., E-mail: miguel@ifisc.uib-csic.es; Flunkert, Valentin; Fischer, Ingo
2013-12-15
We present a systematic approach to identify the similarities and differences between a chaotic system with delayed feedback and two mutually delay-coupled systems. We consider the general case in which the coupled systems are either unsynchronized or in a generally synchronized state, in contrast to the mostly studied case of identical synchronization. We construct a new time-series for each of the two coupling schemes, respectively, and present analytic evidence and numerical confirmation that these two constructed time-series are statistically equivalent. From the construction, it then follows that the distribution of time-series segments that are small compared to the overall delay in the system is independent of the value of the delay and of the coupling scheme. By focusing on numerical simulations of delay-coupled chaotic lasers, we present a practical example of our findings.
Modeling Philippine Stock Exchange Composite Index Using Time Series Analysis
NASA Astrophysics Data System (ADS)
Gayo, W. S.; Urrutia, J. D.; Temple, J. M. F.; Sandoval, J. R. D.; Sanglay, J. E. A.
2015-06-01
This study was conducted to develop a time series model of the Philippine Stock Exchange Composite Index (PSEi) and its volatility using a finite mixture of ARIMA models with conditional variance equations such as the ARCH, GARCH, EGARCH, TARCH and PARCH models. The study also aimed to find out the reason behind the behavior of PSEi, that is, which of the economic variables - Consumer Price Index, crude oil price, foreign exchange rate, gold price, interest rate, money supply, price-earnings ratio, Producers' Price Index and terms of trade - can be used in projecting future values of PSEi; this was examined using the Granger Causality Test. The findings showed that the best time series model for the Philippine Stock Exchange Composite Index is ARIMA(1,1,5) - ARCH(1). Also, Consumer Price Index, crude oil price and foreign exchange rate were concluded to Granger-cause the Philippine Stock Exchange Composite Index.
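A minimal sketch of the two-stage fit described above, assuming statsmodels and the arch package are available; the ARIMA(1,1,5)-ARCH(1) orders come from the abstract, while the data and variable names are illustrative stand-ins:

```python
# Sketch: fit ARIMA(1,1,5) to the index, then ARCH(1) to its residuals.
# `psei` is hypothetical stand-in data, not the actual PSEi series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(0)
psei = pd.Series(7000 + np.cumsum(rng.normal(0, 50, 1000)))  # stand-in index levels

mean_fit = ARIMA(psei, order=(1, 1, 5)).fit()                 # conditional mean
resid = mean_fit.resid.dropna()
vol_fit = arch_model(resid, vol="ARCH", p=1).fit(disp="off")  # conditional variance
print(vol_fit.summary())
```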
A New Normal for the Sea Ice Index
NASA Technical Reports Server (NTRS)
Fetterer, Florence; Windnagel, Ann; Meier, Walter N.
2014-01-01
The NSIDC Sea Ice Index is a popular data product that shows users how ice extent and concentration have changed since the beginning of the passive microwave satellite record in 1978. It shows time series of monthly ice extent anomalies rather than actual extent values, in order to emphasize the information the data are carrying. Along with the time series, an image of average extent for the previous month is shown as a white field, with a pink line showing the median extent for that month. These are updated monthly; corresponding daily products are updated daily.
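The anomaly series described here is simply each month's extent minus that calendar month's long-term mean; a small sketch with pandas, where the synthetic record stands in for the passive microwave data:

```python
# Sketch: convert monthly sea ice extents to anomalies from each month's mean.
import numpy as np
import pandas as pd

# Stand-in for the satellite record: monthly extents since late 1978.
dates = pd.date_range("1978-11-01", "2014-01-01", freq="MS")
rng = np.random.default_rng(0)
extent = (12
          + 4 * np.cos(2 * np.pi * (dates.month - 3) / 12)   # seasonal cycle
          - 0.005 * np.arange(dates.size)                    # slow decline
          + rng.normal(0, 0.3, dates.size))
df = pd.DataFrame({"date": dates, "extent_mkm2": extent})

# Anomaly = extent minus that calendar month's climatological mean.
monthly_mean = df.groupby(df["date"].dt.month)["extent_mkm2"].transform("mean")
df["anomaly"] = df["extent_mkm2"] - monthly_mean
```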
NASA Astrophysics Data System (ADS)
Kryanev, A. V.; Ivanov, V. V.; Romanova, A. O.; Sevastyanov, L. A.; Udumyan, D. K.
2018-03-01
This paper considers the problem of separating the trend and the chaotic component of a chaotic time series in the absence of information on the characteristics of the chaotic component. Such a problem arises in nuclear physics, biomedicine, and many other applied fields. The scheme has two stages. At the first stage, smoothing linear splines with different values of the smoothing parameter are used to separate the trend component. At the second stage, the method of least squares is used to find the unknown variance σ2 of the noise component.
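A minimal sketch of the two-stage scheme, under stated assumptions: scipy's smoothing splines stand in for the paper's smoothing linear splines, and the noise variance is estimated as the least-squares variance of the residuals:

```python
# Sketch: separate a trend with smoothing splines at several smoothing values,
# then estimate the noise variance from the residuals. Data are illustrative.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
x = np.sin(0.5 * t) + rng.normal(0.0, 0.3, t.size)   # trend + noise component

for s in (10.0, 40.0, 160.0):                        # smoothing parameter values
    trend = UnivariateSpline(t, x, s=s)(t)           # stage 1: trend component
    resid = x - trend
    sigma2 = np.sum(resid**2) / (resid.size - 1)     # stage 2: variance estimate
    print(f"s={s:6.1f}  estimated noise variance = {sigma2:.4f}")
```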
Turbulence Time Series Data Hole Filling using Karhunen-Loeve and ARIMA methods
2007-01-01
...memory is represented by higher values of d. We applied an ARIMA(0,d,0) model to predict the behaviour of the final section of the... to a simplified ARIMA(0,d,0) model, which performed better than the linear interpolant but less effectively than the KL algorithm, disregarding edge... (arXiv:physics/0701238v1, 22 Jan 2007)
NASA Astrophysics Data System (ADS)
Genty, Dominique; Massault, Marc
1999-05-01
Twenty-two AMS 14C measurements have been made on a modern stalagmite from SW France in order to reconstruct the 14C activity history of the calcite deposit. Annual growth laminae provide a chronology back to 1919 A.D. Results show that the stalagmite 14C activity time series is sensitive to modern atmospheric 14C activity changes such as those produced by the nuclear weapon tests. The comparison between the two 14C time series shows that the stalagmite time series is damped: its amplitude variation between pre-bomb and post-bomb values is 75% less, and the time delay between the two time series peaks is 16 ± 3 years. A model is developed using atmospheric 14C and 13C data, fractionation processes and three soil organic matter components whose mean turnover rates are different. The linear correlation coefficient between modeled and measured activities is 0.99. These results, combined with two other stalagmite 14C time series already published and compared with local vegetation and climate, demonstrate that most of the carbon transfer dynamics are controlled in the soil by soil organic matter degradation rates. Where vegetation produces debris whose degradation is slow, the fraction of old carbon injected into the system increases, the observed 14C time series is much more damped, and the lag time is longer than that observed under grassland sites. The same mixing model applied to the 13C shows a good agreement (R2 = 0.78) between modeled and measured stalagmite δ13C and demonstrates that the Suess Effect due to fossil fuel combustion in the atmosphere is recorded in the stalagmite, but with a damped effect due to the SOM degradation rate. The different sources of dead carbon in the seepage water are calculated and discussed.
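The damping mechanism described (three soil organic matter pools with different turnover times mixing the atmospheric 14C signal before it reaches the stalagmite) can be sketched as exponentially weighted averages of the atmospheric series; the pool fractions, turnover times, and input series below are illustrative, not the paper's calibrated values:

```python
# Sketch: damp an atmospheric 14C activity series through three SOM pools with
# different mean turnover times (illustrative parameters, annual resolution).
import numpy as np

years = np.arange(1900, 2000)
atm = 100.0 + 80.0 * np.exp(-0.5 * ((years - 1964) / 6.0) ** 2)  # stand-in bomb peak

def pool_response(signal, turnover):
    """Convolve with a normalized exponential turnover distribution (causal)."""
    k = np.arange(signal.size)
    kernel = np.exp(-k / turnover)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel)[: signal.size]

fractions, turnovers = (0.5, 0.3, 0.2), (1.0, 15.0, 80.0)  # fast, medium, slow pools
stal = sum(f * pool_response(atm, tau) for f, tau in zip(fractions, turnovers))
lag = int(np.argmax(stal) - np.argmax(atm))          # damped peak arrives later
print(f"peak lag: {lag} years, amplitude ratio: {np.ptp(stal) / np.ptp(atm):.2f}")
```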
Wet tropospheric delays forecast based on Vienna Mapping Function time series analysis
NASA Astrophysics Data System (ADS)
Rzepecka, Zofia; Kalita, Jakub
2016-04-01
It is well known that the dry part of the zenith tropospheric delay (ZTD) is much easier to model than the wet part (ZTW). The aim of this research is to apply stochastic modeling and prediction of ZTW using time series analysis tools. Application of time series analysis enables a closer understanding of ZTW behavior as well as short-term prediction of future ZTW values. The ZTW data used for the studies were obtained from the GGOS service held by the Vienna University of Technology. The resolution of the data is six hours. ZTW for the years 2010-2013 were adopted for the study. The International GNSS Service (IGS) permanent stations LAMA and GOPE, located in mid-latitudes, were selected for the investigation. Initially the seasonal part was separated and modeled using periodic signals and frequency analysis. The prominent annual and semi-annual signals were removed using sine and cosine functions. The autocorrelation of the resulting signal is significant for several days (20-30 samples). The residuals of this fitting were further analyzed and modeled with ARIMA processes. For both stations, optimal ARMA processes were obtained based on several criteria. On this basis, predicted ZTW values were computed one day ahead, leaving white-noise residuals. The accuracy of the prediction can be estimated at about 3 cm.
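A compact sketch of the pipeline described: fit and remove annual and semi-annual sine/cosine terms by least squares, then fit an ARMA model to the residuals and forecast one day ahead. The data, the six-hour sampling, and the ARMA order are assumptions for illustration:

```python
# Sketch: deseasonalize ZTW with annual/semi-annual harmonics, then ARMA-forecast.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
n = 4 * 365 * 2                      # six-hour resolution, two years (assumed)
t = np.arange(n) / 4.0               # time in days
ztw = 150 + 40 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 8, n)  # stand-in, mm

# Least-squares fit of annual and semi-annual sine and cosine terms.
cols = [np.ones(n)]
for period in (365.25, 182.625):
    cols += [np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, ztw, rcond=None)
resid = ztw - A @ coef

# ARMA on the deseasonalized residuals (order chosen by criteria in the study).
fit = ARIMA(resid, order=(2, 0, 1)).fit()
forecast = fit.forecast(steps=4)     # one day ahead at six-hour resolution
```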
Deriving Vegetation Dynamics of Natural Terrestrial Ecosystems from MODIS NDVI/EVI Data over Turkey.
Evrendilek, Fatih; Gulbeyaz, Onder
2008-09-01
The 16-day composite MODIS vegetation indices (VIs) at 500-m resolution for the period from 2000 to 2007 were seasonally averaged on the basis of the estimated distribution of 16 potential natural terrestrial ecosystems (NTEs) across Turkey. Graphical and statistical analyses of the time-series VIs for the NTEs, spatially disaggregated in terms of biogeoclimate zones and land cover types, included descriptive statistics, correlations, discrete Fourier transform (DFT), time-series decomposition, and simple linear regression (SLR) models. Our spatio-temporal analyses revealed that both MODIS VIs, on average, depicted similar seasonal variations for the NTEs, with the NDVI values having higher mean and SD values. The seasonal VIs were most correlated in decreasing order for: barren/sparsely vegetated land > grassland > shrubland/woodland > forest; (sub)nival > warm temperate > alpine > cool temperate > boreal = Mediterranean; and summer > spring > autumn > winter. The most pronounced differences between the MODIS VI responses over Turkey occurred in boreal and Mediterranean climate zones and forests, and in winter (the senescence phase of the growing season). Our results showed the potential of the time-series MODIS VI datasets for the estimation and monitoring of seasonal and interannual ecosystem dynamics over Turkey, which needs to be further improved and refined through systematic and extensive field measurements and validations across various biomes.
ERIC Educational Resources Information Center
Gabriel, Rachael; Lester, Jessica Nina
2013-01-01
Background/Context: This paper illustrates how the media, particularly "The LA Times," entered the debate surrounding teacher evaluation, resulting in a storyline that shaped how the public perceives teacher effectiveness. With their series of articles in 2010, "The LA Times" entered the conversation about the place and value…
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2015-12-01
Tide gauge records of mean sea level are some of the most valuable instrumental time series of oceanic variability and change. Yet these time series sometimes have short record lengths and intermittently missing values. Such issues can limit the utility of the data, for example, precluding rigorous analyses of return periods of extreme mean sea level events and whether they are unprecedented. With a view to filling gaps in the tide gauge mean sea level time series, we describe a hierarchical Bayesian modeling approach. The model, which is predicated on the notion of conditional probabilities, comprises three levels: a process level, which casts mean sea level as a field with spatiotemporal covariance; a data level, which represents tide gauge observations as noisy, biased versions of the true process; and a prior level, which gives prior functional forms to model parameters. Using Bayes' rule, this technique gives estimates of the posterior probability of the process and the parameters given the observations. To demonstrate the approach, we apply it to 2,967 station-years of annual mean sea level observations over 1856-2013 from 70 tide gauges along the United States East Coast from Florida to Maine (i.e., 26.8% record completeness). The model overcomes the data paucity by sharing information across space and time. The result is an ensemble of realizations, each member of which is a possible history of sea level changes at these locations over this period, which is consistent with and equally likely given the tide gauge data and underlying model assumptions. Using the ensemble of histories furnished by the Bayesian model, we identify extreme events of mean sea level change in the tide gauge time series. Specifically, we use the model to address the particular hypothesis (with rigorous uncertainty quantification) that a recently reported interannual sea level rise during 2008-2010 was unprecedented in the instrumental record along the northeast coast of North America, and that it had a return period of 850 years. Preliminary analysis suggests that this event was likely unprecedented on the coast of Maine in the last century.
Traffic dispersion through a series of signals with irregular split
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
2016-01-01
We study the traffic behavior of a group of vehicles moving through a sequence of signals with irregular splits on a roadway. We present a stochastic model of vehicular traffic controlled by signals. The dynamic behavior of vehicular traffic is clarified by analyzing the traffic pattern and travel time numerically. The group of vehicles breaks up more and more due to the irregularity of the signals' splits. The traffic dispersion is induced by the irregular split. We show that the traffic dispersion depends highly on the cycle time and the strength of the split's irregularity. We also study the traffic behavior through the series of signals under the green-wave strategy. The dependence of the travel time on offset time is derived for various values of cycle time. The region map of the traffic dispersion is shown in (cycle time, offset time)-space.
Quantifying the value of redundant measurements at GCOS Reference Upper-Air Network sites
Madonna, F.; Rosoldi, M.; Güldner, J.; ...
2014-11-19
The potential for measurement redundancy to reduce uncertainty in atmospheric variables has not been investigated comprehensively for climate observations. We evaluated the usefulness of entropy and mutual correlation concepts, as defined in information theory, for quantifying random uncertainty and redundancy in time series of the integrated water vapour (IWV) and water vapour mixing ratio profiles provided by five highly instrumented GRUAN (GCOS, Global Climate Observing System, Reference Upper-Air Network) stations in 2010–2012. Results show that the random uncertainties on the IWV measured with radiosondes, global positioning system, microwave and infrared radiometers, and Raman lidar measurements differed by less than 8%. Comparisons of time series of IWV content from ground-based remote sensing instruments with in situ soundings showed that microwave radiometers have the highest redundancy with the IWV time series measured by radiosondes and therefore the highest potential to reduce the random uncertainty of the radiosonde time series. Moreover, the random uncertainty of a time series from one instrument can be reduced by ~60% by constraining the measurements with those from another instrument. The best reduction of random uncertainty is achieved by conditioning Raman lidar measurements with microwave radiometer measurements. In conclusion, specific instruments are recommended for atmospheric water vapour measurements at GRUAN sites. This approach can be applied to the study of redundant measurements for other climate variables.
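A sketch of the information-theoretic quantities involved: discretize two collocated IWV series, estimate entropy and mutual information from histograms, and report the shared fraction. The binning and the synthetic data are assumptions, not the paper's processing:

```python
# Sketch: entropy and mutual information of two collocated IWV time series.
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) from a histogram of counts."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
iwv_sonde = rng.gamma(4.0, 5.0, 2000)               # stand-in radiosonde IWV
iwv_mwr = iwv_sonde + rng.normal(0.0, 1.5, 2000)    # stand-in radiometer IWV

h_xy, _, _ = np.histogram2d(iwv_sonde, iwv_mwr, bins=32)
h_x, h_y = h_xy.sum(axis=1), h_xy.sum(axis=0)
mi = entropy(h_x) + entropy(h_y) - entropy(h_xy.ravel())  # mutual information
print(f"redundancy: MI = {mi:.2f} bits, {mi / entropy(h_x):.0%} of sonde entropy")
```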
NASA Astrophysics Data System (ADS)
Malik, Nishant; Marwan, Norbert; Zou, Yong; Mucha, Peter J.; Kurths, Jürgen
2014-06-01
A method to identify distinct dynamical regimes and transitions between those regimes in a short univariate time series was recently introduced [N. Malik et al., Europhys. Lett. 97, 40009 (2012), 10.1209/0295-5075/97/40009], employing the computation of fluctuations in a measure of nonlinear similarity based on local recurrence properties. In this work, we describe the details of the analytical relationships between this newly introduced measure and the well-known concepts of attractor dimensions and Lyapunov exponents. We show that the new measure has a linear dependence on the effective dimension of the attractor and that it measures the variations in the sum of the Lyapunov spectrum. To illustrate the practical usefulness of the method, we identify various types of dynamical transitions in different nonlinear models. We present testbed examples for the new method's robustness against noise and missing values in the time series. We also use this method to analyze time series of social dynamics, specifically an analysis of the US crime record time series from 1975 to 1993. Using this method, we find that dynamical complexity in robberies was influenced by the unemployment rate until the late 1980s. We have also observed a dynamical transition in homicide and robbery rates in the late 1980s and early 1990s, leading to an increase in the dynamical complexity of these rates.
Code of Federal Regulations, 2011 CFR
2011-07-01
... rates, penalties, and redemption values for Series EE bonds with issue dates of May 1, 2005, or... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds Series Ee Savings Bonds with Issue Dates of May 1, 2005, Or Thereafter § 351.35 What do I need to know...
Code of Federal Regulations, 2012 CFR
2012-07-01
... rates, penalties, and redemption values for Series EE bonds with issue dates of May 1, 2005, or... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds Series Ee Savings Bonds with Issue Dates of May 1, 2005, Or Thereafter § 351.35 What do I need to know...
Code of Federal Regulations, 2010 CFR
2010-07-01
... rates, penalties, and redemption values for Series EE bonds with issue dates of May 1, 2005, or... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds Series Ee Savings Bonds with Issue Dates of May 1, 2005, Or Thereafter § 351.35 What do I need to know...
Code of Federal Regulations, 2014 CFR
2014-07-01
... rates, penalties, and redemption values for Series EE bonds with issue dates of May 1, 2005, or... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds Series Ee Savings Bonds with Issue Dates of May 1, 2005, Or Thereafter § 351.35 What do I need to know...
Code of Federal Regulations, 2013 CFR
2013-07-01
... rates, penalties, and redemption values for Series EE bonds with issue dates of May 1, 2005, or... SAVINGS BONDS, SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds Series Ee Savings Bonds with Issue Dates of May 1, 2005, Or Thereafter § 351.35 What do I need to know...
NASA Astrophysics Data System (ADS)
Wang, Zhuosen; Schaaf, Crystal B.; Sun, Qingsong; Kim, JiHyun; Erb, Angela M.; Gao, Feng; Román, Miguel O.; Yang, Yun; Petroy, Shelley; Taylor, Jeffrey R.; Masek, Jeffrey G.; Morisette, Jeffrey T.; Zhang, Xiaoyang; Papuga, Shirley A.
2017-07-01
Seasonal vegetation phenology can significantly alter surface albedo, which in turn affects the global energy balance and the albedo warming/cooling feedbacks that impact climate change. To monitor and quantify the surface dynamics of heterogeneous landscapes, high temporal and spatial resolution synthetic time series of albedo and the enhanced vegetation index (EVI) were generated from the 500 m Moderate Resolution Imaging Spectroradiometer (MODIS) operational Collection V006 daily BRDF/NBAR/albedo products and 30 m Landsat 5 albedo and near-nadir reflectance data through the use of the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM). The traditional Landsat Albedo (Shuai et al., 2011) makes use of the MODIS BRDF/Albedo products (MCD43) by assigning appropriate BRDFs from coincident MODIS products to each Landsat image to generate a 30 m Landsat albedo product for that acquisition date. The available cloud-free Landsat 5 albedos (due to clouds, generated every 16 days at best) were used in conjunction with the daily MODIS albedos to determine the appropriate 30 m albedos for the intervening daily time steps in this study. These enhanced daily 30 m spatial resolution synthetic time series were then used to track albedo and vegetation phenology dynamics over three Ameriflux tower sites (Harvard Forest in 2007, Santa Rita in 2011 and Walker Branch in 2005). These Ameriflux sites were chosen as they are all quite near new towers coming online for the National Ecological Observatory Network (NEON), and thus represent locations which will be served by spatially paired albedo measures in the near future. The availability of data from the NEON towers will greatly expand the sources of tower albedometer data available for evaluation of satellite products. At these three Ameriflux tower sites the synthetic time series of broadband shortwave albedos were evaluated using the tower albedo measurements, with a Root Mean Square Error (RMSE) less than 0.013 and a bias within the range of ±0.006. These synthetic time series provide much greater spatial detail than the 500 m gridded MODIS data, especially over more heterogeneous surfaces, which improves the efforts to characterize and monitor the spatial variation across species and communities. The mean of the difference between the maximum and minimum synthetic albedo time series within the MODIS pixels, over a subset of satellite data of Harvard Forest (16 km by 14 km), was as high as 0.2 during the snow-covered period and reduced to around 0.1 during the snow-free period. Similarly, we have used STARFM to couple MODIS Nadir BRDF Adjusted Reflectance (NBAR) values with Landsat 5 reflectances to generate daily synthetic time series of NBAR and thus of the Enhanced Vegetation Index (NBAR-EVI) at a 30 m resolution. While normally STARFM is used with directional reflectances, the use of the view-angle-corrected daily MODIS NBAR values provides more consistent time series. These synthetic time series of EVI are shown to capture seasonal vegetation dynamics with finer spatial and temporal details, especially over heterogeneous land surfaces.
46 CFR 2.10-105 - Prepayment of annual vessel inspection fees.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the design life or remaining expected service life of the vessel. (b) To prepay the annual vessel... present value using the following formula: ER13MR95.000 Where: PV is the Present Value of the series of... i is the interest rate for 10-year Treasury notes at the time of prepayment calculation π is the...
46 CFR 2.10-105 - Prepayment of annual vessel inspection fees.
Code of Federal Regulations, 2013 CFR
2013-10-01
... the design life or remaining expected service life of the vessel. (b) To prepay the annual vessel... present value using the following formula: ER13MR95.000 Where: PV is the Present Value of the series of... i is the interest rate for 10-year Treasury notes at the time of prepayment calculation π is the...
46 CFR 2.10-105 - Prepayment of annual vessel inspection fees.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the design life or remaining expected service life of the vessel. (b) To prepay the annual vessel... present value using the following formula: ER13MR95.000 Where: PV is the Present Value of the series of... i is the interest rate for 10-year Treasury notes at the time of prepayment calculation π is the...
46 CFR 2.10-105 - Prepayment of annual vessel inspection fees.
Code of Federal Regulations, 2012 CFR
2012-10-01
... the design life or remaining expected service life of the vessel. (b) To prepay the annual vessel... present value using the following formula: ER13MR95.000 Where: PV is the Present Value of the series of... i is the interest rate for 10-year Treasury notes at the time of prepayment calculation π is the...
46 CFR 2.10-105 - Prepayment of annual vessel inspection fees.
Code of Federal Regulations, 2014 CFR
2014-10-01
... the design life or remaining expected service life of the vessel. (b) To prepay the annual vessel... present value using the following formula: ER13MR95.000 Where: PV is the Present Value of the series of... i is the interest rate for 10-year Treasury notes at the time of prepayment calculation π is the...
Separation of components from a scale mixture of Gaussian white noises
NASA Astrophysics Data System (ADS)
Vamoş, Călin; Crăciun, Maria
2010-05-01
The time evolution of a physical quantity associated with a thermodynamic system whose equilibrium fluctuations are modulated in amplitude by a slowly varying phenomenon can be modeled as the product of a Gaussian white noise {Zt} and a stochastic process with strictly positive values {Vt} referred to as volatility. The probability density function (pdf) of the process Xt=VtZt is a scale mixture of Gaussian white noises, expressed as a time average of Gaussian distributions weighted by the pdf of the volatility. The separation of the two components of {Xt} can be achieved by imposing the condition that the absolute values of the estimated white noise be uncorrelated. We apply this method to the time series of the returns of the daily S&P500 index, which has also been analyzed by means of the superstatistics method, which imposes the condition that the estimated white noise be Gaussian. The advantage of our method is that this financial time series is processed without partitioning or removal of the extreme events, and the estimated white noise becomes almost Gaussian only as a result of the uncorrelation condition.
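A minimal sketch of the uncorrelation condition: estimate the volatility as a moving average of |X| and choose the window so that the lag-1 autocorrelation of |Z| = |X|/V is closest to zero. The moving-average estimator and the simulated returns are simplifications of the paper's procedure:

```python
# Sketch: separate volatility from a scale mixture of Gaussian white noises by
# requiring the absolute estimated white noise to be (lag-1) uncorrelated.
import numpy as np

rng = np.random.default_rng(4)
n = 5000
vol = np.exp(np.cumsum(rng.normal(0, 0.02, n)))   # slowly varying volatility
x = vol * rng.normal(0, 1, n)                     # observed series X_t = V_t Z_t

def lag1_corr(a):
    return np.corrcoef(a[:-1], a[1:])[0, 1]

best = None
for w in (11, 31, 101, 301, 1001):                # candidate smoothing windows
    v_hat = np.convolve(np.abs(x), np.ones(w) / w, mode="same")
    z_abs = np.abs(x) / v_hat                     # absolute estimated white noise
    c = abs(lag1_corr(z_abs))
    if best is None or c < best[0]:
        best = (c, w)
print(f"window {best[1]} gives |lag-1 corr| = {best[0]:.4f} for |Z|")
```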
Tabet, Michael R.; Norman, Mantana K.; Fey, Brittney K.; Tsibulsky, Vladimir L.; Millard, Ronald W.
2011-01-01
Differences in the time to maximal effect (Tmax) of a series of dopamine receptor antagonists on the self-administration of cocaine are not consistent with their lipophilicity (octanol-water partition coefficients at pH 7.4) and expected rapid entry into the brain after intravenous injection. It was hypothesized that the Tmax reflects the time required for maximal occupancy of receptors, which would occur as equilibrium was approached. If so, the Tmax should be related to the affinity for the relevant receptor population. This hypothesis was tested using a series of nine antagonists having a 2500-fold range of Ki or Kd values for D2-like dopamine receptors. Rats self-administered cocaine at regular intervals and then were injected intravenously with a dose of antagonist, and the self-administration of cocaine was continued for 6 to 10 h. The level of cocaine at the time of every self-administration (satiety threshold) was calculated throughout the session. The satiety threshold was stable before the injection of antagonist and then increased approximately 3-fold over the baseline value at doses of antagonists selected to produce this approximately equivalent maximal magnitude of effect (maximum increase in the equiactive cocaine concentration, satiety threshold; Cmax). Despite the similar Cmax, the mean Tmax varied between 5 and 157 min across this series of antagonists. Furthermore, there was a strong and significant correlation between the in vivo Tmax values for each antagonist and the affinity for D2-like dopamine receptors measured in vitro. It is concluded that the cocaine self-administration paradigm offers a reliable and predictive bioassay for measuring the affinity of a competitive antagonist for D2-like dopamine receptors. PMID:21606176
Studies in astronomical time series analysis: Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1979-01-01
Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
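The AR-to-MA transformation described can be sketched with statsmodels: fit an AR model, then expand it into its moving-average (pulse) representation. The simulated "light curve" and orders are illustrative, and statsmodels stands in for the paper's FORTRAN algorithm:

```python
# Sketch: fit an AR model and convert it to an MA representation for interpretation.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.arima_process import arma2ma

rng = np.random.default_rng(5)
x = np.zeros(1000)
for t in range(2, 1000):                       # stand-in AR(2) "light curve"
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

fit = AutoReg(x, lags=2).fit()
ar_poly = np.r_[1.0, -fit.params[1:]]          # AR polynomial (intercept dropped)
psi = arma2ma(ar_poly, np.array([1.0]), lags=20)
print("pulse response (MA weights):", np.round(psi[:6], 3))
```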
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1981-02-11
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.
Fifth-order complex Korteweg-de Vries-type equations
NASA Astrophysics Data System (ADS)
Khanal, Netra; Wu, Jiahong; Yuan, Juan-Ming
2012-05-01
This paper studies spatially periodic complex-valued solutions of the fifth-order Korteweg-de Vries (KdV)-type equations. The aim is to address several fundamental issues, including the existence, uniqueness and finite-time blowup problems. Special attention is paid to the Kawahara equation, a fifth-order KdV-type equation. When a Burgers dissipation is attached to the Kawahara equation, we establish the existence and uniqueness of the Fourier series solution with the Fourier modes decaying algebraically in terms of the wave numbers. We also examine a special series solution to the Kawahara equation and prove the convergence and global regularity of such solutions associated with single-mode initial data. In addition, finite-time blowup results are discussed for the special series solution of the Kawahara equation.
Fractal and Multifractal Analysis of Human Gait
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, A.; del Río Correa, J. L.; Angulo-Brown, F.
2003-09-01
We carried out a fractal and multifractal analysis of human gait time series of young and old individuals, and of adults with three illnesses that affect gait: Parkinson's disease, Huntington's disease, and amyotrophic lateral sclerosis (ALS). We obtained cumulative plots of events, the correlation function, the Hurst exponent and Higuchi's fractal dimension for these time series, and found that these fractal markers could be a factor to characterize gait, since we obtained different values of these quantities for youths and adults, and the values also differ between healthy and ill persons, with the most anomalous values belonging to ill persons. In other physiological signals there is complexity loss related to age and illness; in the case of gait the opposite occurs.
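One of the fractal markers mentioned, Higuchi's fractal dimension, has a compact standard implementation; this is a generic sketch, with the stride range, kmax, and the test signal as assumptions:

```python
# Sketch: Higuchi's fractal dimension of a 1-D time series.
import numpy as np

def higuchi_fd(x, kmax=10):
    """Slope of log curve length vs. log(1/k) over strides k = 1..kmax."""
    x = np.asarray(x, dtype=float)
    n = x.size
    log_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # one subsampled curve per offset
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((idx.size - 1) * k)  # Higuchi normalization
            lengths.append(length * norm / k)
        log_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_k, log_l, 1)
    return slope

rng = np.random.default_rng(6)
stride = np.cumsum(rng.normal(1.1, 0.05, 1024))  # stand-in gait interval series
print(f"Higuchi FD ~ {higuchi_fd(stride):.2f}")
```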
Non-stationary time series modeling on caterpillars pest of palm oil for early warning system
NASA Astrophysics Data System (ADS)
Setiyowati, Susi; Nugraha, Rida F.; Mukhaiyar, Utriweni
2015-12-01
Oil palm production plays an important role in the plantation and economic sectors in Indonesia. One of the important problems in the cultivation of oil palm plantations is pests, which damage the quality of the fruit. Caterpillar pests, which feed on the palm trees' leaves, cause a decline in the quality of palm oil production. An early warning system is needed to minimize losses due to this pest. Here, we applied non-stationary time series modeling, especially the family of autoregressive models, to predict the number of pests based on historical data. We observed a distinctive feature of these pest data, namely spike values that occur almost periodically. Through simulations and a case study, we find that the selection of the constant factor has a significant influence on the model, determining whether it can capture the spike values precisely.
High gradient lens for charged particle beam
Chen, Yu-Jiuan
2014-04-29
Methods and devices enable shaping of a charged particle beam. A dynamically adjustable electric lens includes a series of alternating layers of insulators and conductors with a hollow center. The series of alternating layers, when stacked together, forms a high gradient insulator (HGI) tube that allows propagation of the charged particle beam through the hollow center of the HGI tube. A plurality of transmission lines are connected to a plurality of sections of the HGI tube, and one or more voltage sources are provided to supply an adjustable voltage value to each transmission line of the plurality of transmission lines. By changing the voltage values supplied to each section of the HGI tube, any desired electric field can be established across the HGI tube. This way various functionalities including focusing, defocusing, acceleration, deceleration, intensity modulation and others can be effectuated on a time-varying basis.
Calculation of power spectrums from digital time series with missing data points
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.
1980-01-01
Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
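A sketch of the correlation-function approach under stated assumptions: lagged products are averaged only over pairs where both samples exist, and the spectrum is the Fourier transform of that autocorrelation estimate (a simplified stand-in for the two algorithms compared in the paper, using a uniform grid with gaps rather than an average sampling interval):

```python
# Sketch: power spectrum of a time series with missing points, via lagged
# products over available pairs (simplified Blackman-Tukey style estimate).
import numpy as np

rng = np.random.default_rng(7)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 0.5, n)
x[rng.random(n) < 0.3] = np.nan                 # 30% of samples missing

max_lag = n // 2                                # only ~50% of maximum lags needed
acf = np.empty(max_lag)
for lag in range(max_lag):
    prod = x[: n - lag] * x[lag:]               # NaN where either sample missing
    acf[lag] = np.nanmean(prod)                 # average over existing pairs

window = np.hanning(2 * max_lag)[max_lag:]      # taper the lagged products
spectrum = np.abs(np.fft.rfft(acf * window))
freqs = np.fft.rfftfreq(max_lag, d=1.0)
print(f"recovered frequency ~ {freqs[np.argmax(spectrum)]:.3f} (true 0.050)")
```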
Satellite imagery in the fight against Malaria, the case for Genetic Programming
NASA Astrophysics Data System (ADS)
Ssentongo, J. S.; Hines, E. L.
The analysis of multi-temporal data is a critical issue in the field of remote sensing and presents a constant challenge. The approach used here relies primarily on a method commonly used in statistics and signal processing: Empirical Orthogonal Function (EOF) analysis. Normalized Difference Vegetation Index (NDVI) and Rainfall Estimate (RFE) satellite images pertaining to the Sub-Saharan Africa region were obtained. The images are derived from the Advanced Very High Resolution Radiometer (AVHRR) on the United States National Oceanic and Atmospheric Administration (NOAA) polar orbiting satellites, spanning from January 2000 to December 2002. The region of interest was narrowed down to the Limpopo Province (Northern Province) of South Africa. EOF analyses of the space-time-intensity series of dekadal mean NDVI values were performed. They reveal that NDVI can be accurately approximated by its principal component time series and contains a near-sinusoidal oscillation pattern. Peak greenness (essentially what NDVI measures) seasons last approximately 8 weeks. This oscillation period is very similar to that of Malaria cases reported in the same period, but lags behind by 4 dekads (about 40 days). Singular Value Decomposition (SVD) of Coupled Fields is performed on the space-time-intensity series of dekadal mean NDVI and RFE values. Correlation analyses indicate that both Malaria and greenness appear to be dependent on rainfall, the onset of their seasonal highs always following an arrival of rain. There is a greater...
Bayesian dynamic modeling of time series of dengue disease case counts.
Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander
2017-07-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the models' short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage error for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
Krafty, Robert T; Rosen, Ori; Stoffer, David S; Buysse, Daniel J; Hall, Martica H
2017-01-01
This article considers the problem of analyzing associations between power spectra of multiple time series and cross-sectional outcomes when data are observed from multiple subjects. The motivating application comes from sleep medicine, where researchers are able to non-invasively record physiological time series signals during sleep. The frequency patterns of these signals, which can be quantified through the power spectrum, contain interpretable information about biological processes. An important problem in sleep research is drawing connections between power spectra of time series signals and clinical characteristics; these connections are key to understanding biological pathways through which sleep affects, and can be treated to improve, health. Such analyses are challenging as they must overcome the complicated structure of a power spectrum from multiple time series as a complex positive-definite matrix-valued function. This article proposes a new approach to such analyses based on a tensor-product spline model of Cholesky components of outcome-dependent power spectra. The approach flexibly models power spectra as nonparametric functions of frequency and outcome while preserving geometric constraints. Formulated in a fully Bayesian framework, a Whittle-likelihood-based Markov chain Monte Carlo (MCMC) algorithm is developed for automated model fitting and for conducting inference on associations between outcomes and spectral measures. The method is used to analyze data from a study of sleep in older adults and uncovers new insights into how stress and arousal are connected to the amount of time one spends in bed.
Topological data analysis of financial time series: Landscapes of crashes
NASA Astrophysics Data System (ADS)
Gidea, Marian; Katz, Yuri
2018-02-01
We explore the evolution of daily returns of four major US stock market indices during the technology crash of 2000, and the financial crisis of 2007-2009. Our methodology is based on topological data analysis (TDA). We use persistent homology to detect and quantify topological patterns that appear in multidimensional time series. Using a sliding window, we extract time-dependent point cloud data sets, to which we associate a topological space. We detect transient loops that appear in this space, and we measure their persistence. This is encoded in real-valued functions referred to as 'persistence landscapes'. We quantify the temporal changes in persistence landscapes via their Lp-norms. We test this procedure on multidimensional time series generated by various non-linear and non-equilibrium models. We find that, in the vicinity of financial meltdowns, the Lp-norms exhibit strong growth prior to the primary peak, which ascends during a crash. Remarkably, the average spectral density at low frequencies of the time series of Lp-norms of the persistence landscapes demonstrates a strong rising trend for 250 trading days prior to either the dotcom crash on 03/10/2000 or the Lehman bankruptcy on 09/15/2008. Our study suggests that TDA provides a new type of econometric analysis, which complements the standard statistical measures. The method can be used to detect early warning signals of imminent market crashes. We believe that this approach can be used beyond the analysis of financial time series presented here.
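A rough sketch of the pipeline, assuming the ripser package is installed; as a simplification, total H1 persistence stands in for the Lp-norm of the persistence landscape, and the four-index return matrix is simulated rather than taken from market data:

```python
# Sketch: sliding-window point clouds from multidimensional returns, persistent
# homology via ripser, and total H1 persistence as a simple crash indicator.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(8)
returns = rng.normal(0, 0.01, size=(500, 4))     # stand-in daily returns, 4 indices

window = 50
signal = []
for start in range(returns.shape[0] - window):
    cloud = returns[start : start + window]      # one point per day in R^4
    dgm_h1 = ripser(cloud, maxdim=1)["dgms"][1]  # H1 persistence diagram
    total = (dgm_h1[:, 1] - dgm_h1[:, 0]).sum() if dgm_h1.size else 0.0
    signal.append(total)                         # grows as transient loops strengthen
print(f"max indicator {max(signal):.4f} at window {int(np.argmax(signal))}")
```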
Prediction of regulatory gene pairs using dynamic time warping and gene ontology.
Yang, Andy C; Hsu, Hui-Huang; Lu, Ming-Da; Tseng, Vincent S; Shih, Timothy K
2014-01-01
Selecting informative genes is the most important task for data analysis on microarray gene expression data. In this work, we aim at identifying regulatory gene pairs from microarray gene expression data. However, microarray data often contain multiple missing expression values. Missing value imputation is thus needed before further processing for regulatory gene pairs becomes possible. We develop a novel approach to first impute missing values in microarray time series data by combining k-Nearest Neighbour (KNN), Dynamic Time Warping (DTW) and Gene Ontology (GO). After missing values are imputed, we then perform gene regulation prediction based on our proposed DTW-GO distance measurement of gene pairs. Experimental results show that our approach is more accurate when compared with existing missing value imputation methods on real microarray data sets. Furthermore, our approach can also discover more regulatory gene pairs that are known in the literature than other methods.
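The DTW distance at the core of the approach can be sketched in a few lines of dynamic programming; this is a generic implementation, not the paper's DTW-GO measure (which additionally incorporates Gene Ontology similarity), and the expression profiles are hypothetical:

```python
# Sketch: classic dynamic-time-warping distance between two expression profiles.
import numpy as np

def dtw_distance(a, b):
    """DTW with absolute-difference local cost and unit step pattern."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

gene_a = [0.1, 0.5, 1.2, 0.9, 0.3]       # hypothetical expression time courses
gene_b = [0.0, 0.2, 0.6, 1.1, 1.0, 0.4]
print(f"DTW(gene_a, gene_b) = {dtw_distance(gene_a, gene_b):.3f}")
```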
Off-line tracking of series parameters in distribution systems using AMI data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Tess L.; Sun, Yannan; Schneider, Kevin
2016-05-01
Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal change to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.
Nonlinear analysis of the occurrence of hurricanes in the Gulf of Mexico and the Caribbean Sea
NASA Astrophysics Data System (ADS)
Rojo-Garibaldi, Berenice; Salas-de-León, David Alberto; Adela Monreal-Gómez, María; Sánchez-Santillán, Norma Leticia; Salas-Monreal, David
2018-04-01
Hurricanes are complex systems that carry large amounts of energy. Their impact often produces natural disasters involving the loss of human lives and of materials and infrastructure valued at billions of US dollars. However, not everything about hurricanes is negative, as hurricanes are the main source of rainwater for the regions where they develop. This study presents a nonlinear analysis of the time series of the occurrence of hurricanes in the Gulf of Mexico and the Caribbean Sea from 1749 to 2012. The hurricane time series was constructed from the North Atlantic basin hurricane database (HURDAT) and published historical information, and it provides a unique historical record of ocean-atmosphere interactions. The Lyapunov exponent indicated that the system presents chaotic dynamics, and the spectral and nonlinear analyses of the hurricane time series showed behavior at the edge of chaos. One possible explanation for this edge-of-chaos behavior is the chaotic behavior of individual hurricanes, whether grouped by category or considered individually regardless of category, combined with their regular seasonal recurrence.
Cross-sample entropy of foreign exchange time series
NASA Astrophysics Data System (ADS)
Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao
2010-11-01
The correlation of foreign exchange rates in currency markets is investigated based on empirical data for DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The method for calculating the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series are calculated for the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis). The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before it. Comparison with the correlation coefficient shows that cross-SampEn is superior for describing the correlation between time series.
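A compact sketch of cross-sample entropy for two return series, following the usual SampEn template; the parameters m and r and the synthetic data are assumptions, and the confidence-interval extension developed in the paper is omitted:

```python
# Sketch: cross-sample entropy between two standardized return series.
import numpy as np

def cross_sampen(u, v, m=2, r=0.2):
    """Cross-SampEn: -log of the conditional template-match probability."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()

    def matches(length):
        tu = np.array([u[i : i + length] for i in range(len(u) - length + 1)])
        tv = np.array([v[j : j + length] for j in range(len(v) - length + 1)])
        # Chebyshev distance between every template pair across the two series.
        dist = np.max(np.abs(tu[:, None, :] - tv[None, :, :]), axis=2)
        return np.sum(dist < r)

    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(9)
common = rng.normal(0, 1, 1500)
fx1 = common + 0.5 * rng.normal(0, 1, 1500)   # stand-in exchange-rate returns
fx2 = common + 0.5 * rng.normal(0, 1, 1500)
print(f"cross-SampEn = {cross_sampen(fx1, fx2):.3f}")  # higher = more asynchrony
```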
NASA Astrophysics Data System (ADS)
Dash, Y.; Mishra, S. K.; Panigrahi, B. K.
2017-12-01
Prediction of the northeast/post-monsoon rainfall that occurs during October, November and December (OND) over the Indian peninsula is a challenging task due to the dynamic nature of the uncertain, chaotic climate, and it is worth elucidating this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors like Sea Surface Temperature (SST) and Sea Level Pressure (SLP), and b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques were applied to SST and SLP data (1948-2014) obtained from the NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. This study then investigated the applicability of ML methods using the OND rainfall time series for 1948-2014, with forecasts up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the results revealed good performance of the ML algorithms with minimal error scores. Thus, both statistical and empirical methods are found to be useful for long-range climatic projections.
Spectral Unmixing Analysis of Time Series Landsat 8 Images
NASA Astrophysics Data System (ADS)
Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.
2018-05-01
Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Nevertheless, using the temporal information can provide improved unmixing performance when compared to independent image analyses. Moreover, different land cover types may demonstrate different temporal patterns, which can aid the discrimination of different natures. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach is to obtain the "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used to extract endmembers for endmember initialization. First, nonnegative least squares (NNLS) is used to estimate abundance maps using the endmembers. Then, each estimated endmember is updated as the mean value of its "purified" pixels, i.e., the residual of the mixed pixel after excluding the contribution of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework generates the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimation results compared with the "separate unmixing" approach.
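A sketch of the two alternating steps under stated assumptions: random endmember initialization replaces VCA, scipy's NNLS estimates abundances, and the endmember update is a least-squares fit to the "purified" pixels (the mixed pixels minus the contributions of all other endmembers) rather than the paper's exact mean:

```python
# Sketch: alternating NNLS abundance estimation and "purified pixel" endmember
# update, in the spirit of time series K-P-Means (random init instead of VCA).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(10)
bands, pixels, k = 30, 400, 3
true_E = rng.random((bands, k))
true_A = rng.dirichlet(np.ones(k), size=pixels).T
Y = true_E @ true_A + rng.normal(0, 0.01, (bands, pixels))  # simulated mixed pixels

E = Y[:, rng.choice(pixels, k, replace=False)]              # init (VCA in the paper)
for _ in range(20):
    # Step 1: per-pixel abundances by nonnegative least squares.
    A = np.column_stack([nnls(E, Y[:, p])[0] for p in range(pixels)])
    # Step 2: re-estimate each endmember from its purified pixels.
    for j in range(k):
        purified = Y - E @ A + np.outer(E[:, j], A[j])      # remove other endmembers
        E[:, j] = purified @ A[j] / max(A[j] @ A[j], 1e-12) # LS fit to purified pixels
```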
Clustering Multivariate Time Series Using Hidden Markov Models
Ghassempour, Shima; Girosi, Federico; Maeder, Anthony
2014-01-01
In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge, and therefore are accessible to a wide range of researchers. PMID:24662996
NASA Astrophysics Data System (ADS)
Hashiba, Hideki; Nakayama, Yasunori; Sugimura, Toshiro
The growth of major cities in Asia, as a consequence of economic development, is feared to have adverse influences on the natural environment of the surrounding areas. Comparison of land cover changes in major cities from both spatial and time series viewpoints is necessary to fully understand the characteristics of urban development in Asia. To accomplish this, multiple satellite remote sensing data were analyzed across a wide range and over a long term in this study. The transition processes of the major Asian cities Tokyo, Osaka, Beijing, Shanghai, and Hong Kong were analyzed from the characteristic changes of vegetation index values and land cover over about 40 years, from 1972 to 2010. Image data from LANDSAT/MSS, LANDSAT/TM, ALOS/AVNIR-2, and ALOS/PRISM were assembled into a time series. The ratio and state of the detailed distribution of land cover were clarified by the classification processing. The time series clearly showed different change characteristics for each city and its surrounding natural environment (vegetation, forest, etc.) as a result of development processes.
Rodríguez, Nibaldo
2014-01-01
Two smoothing strategies, combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models, are presented to improve the forecasting of time series. The forecasting strategy is implemented in two stages. In the first stage, the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs are used for one-step-ahead time series forecasting. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents in the Valparaíso region of Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO with a MAPE of 15.51%. PMID:25243200
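The HSVD smoothing step can be illustrated compactly: embed the series in a Hankel matrix, keep the leading singular components, and reconstruct by anti-diagonal averaging (the 3-point moving average alternative is a one-liner, `np.convolve(x, np.ones(3)/3, 'same')`). The window length and rank below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.linalg import hankel

def hsvd_smooth(x, window=None, rank=1):
    """Smooth a series via a low-rank approximation of its Hankel matrix,
    then map back by averaging anti-diagonals (diagonal averaging)."""
    n = len(x)
    L = window or n // 2
    H = hankel(x[:L], x[L - 1:])               # L x (n - L + 1), H[i, j] = x[i + j]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # keep leading components
    smooth = np.zeros(n)
    counts = np.zeros(n)
    for i in range(H_low.shape[0]):
        for j in range(H_low.shape[1]):
            smooth[i + j] += H_low[i, j]
            counts[i + j] += 1
    return smooth / counts
```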
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Sizonenko, A. B.; Zhdanov, A. A.
2018-05-01
Discrete time series or mappings are proposed for describing the dynamics of a nonlinear system. The article considers the problem of forecasting the dynamics of a system from the time series it generates. In particular, the commercial rate of drilling oil and gas wells can be considered as a series in which each next value depends on the previous one; the main parameter here is the technical drilling speed. To eliminate measurement error and represent the commercial speed of the object with good accuracy at the current, future, or any elapsed time point, the use of the Kalman filter is suggested. For the transition from a deterministic model to a probabilistic one, the use of ensemble modeling is suggested. Ensemble systems can provide a wide range of visual output, which helps the user to evaluate the measure of confidence in the model. In particular, the availability of information on the estimated calendar duration of the construction of oil and gas wells will allow drilling companies to optimize production planning by rationalizing the approach to loading drilling rigs, which ultimately leads to maximization of profit and an increase in their competitiveness.
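A minimal scalar Kalman filter of the kind suggested, under the assumption of a random-walk state observed with noise; the noise variances below are illustrative, not calibrated to drilling data:

```python
def kalman_1d(z, q=1e-3, r=1e-1):
    """Scalar Kalman filter with a random-walk state model:
    x_k = x_{k-1} + w,  z_k = x_k + v, where q and r are the process
    and measurement noise variances. Returns the filtered estimates."""
    x, p = z[0], 1.0              # initial state estimate and its variance
    out = []
    for zk in z:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (zk - x)      # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return out
```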
Awan, Imtiaz; Aziz, Wajid; Habib, Nazneen; Alowibdi, Jalal S.; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali
2018-01-01
Considerable interest has been devoted for developing a deeper understanding of the dynamics of healthy biological systems and how these dynamics are affected due to aging and disease. Entropy based complexity measures have widely been used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and underlying stimuli that are responsible for anomalous behavior. The single scale based traditional entropy measures yielded contradictory results about the dynamics of real world time series data of healthy and pathological subjects. Recently the multiscale entropy (MSE) algorithm was introduced for precise description of the complexity of biological signals, which was used in numerous fields since its inception. The original MSE quantified the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ, however, MSE works well for long signals. To overcome the drawback of original MSE, various variants of this method have been proposed for evaluating complexity efficiently. In this study, we have proposed multiscale normalized corrected Shannon entropy (MNCSE), in which instead of using sample entropy, symbolic entropy measure NCSE has been used as an entropy estimate. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values. The results show that MNCSE based features lead to higher classification accuracies in comparison with the MSE based features. PMID:29771977
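The coarse-graining at the heart of any MSE-style measure, and the reason short records become problematic at large scales, can be seen in a few lines. The NCSE estimator itself is not specified in the abstract, so a plain Shannon entropy of a crude two-symbol encoding stands in for it here; the series and threshold are synthetic stand-ins.

```python
import numpy as np

def coarse_grain(x, tau):
    """Coarse-grained series at scale tau: non-overlapping window means.
    Its length len(x) // tau shrinks with tau, which is why single-scale
    estimates become unreliable for short records at large scales."""
    n = (len(x) // tau) * tau
    return x[:n].reshape(-1, tau).mean(axis=1)

def shannon_entropy(symbols):
    """Shannon entropy (bits) of a symbolized series; a stand-in for the
    normalized corrected Shannon entropy (NCSE) used by MNCSE."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

# Example: entropy of a median-threshold symbolization across scales.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)                  # stand-in for an interbeat series
for tau in (1, 2, 5, 10):
    y = coarse_grain(x, tau)
    s = (y > np.median(y)).astype(int)     # crude 2-symbol encoding
    print(tau, len(y), round(shannon_entropy(s), 3))
```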
NASA Astrophysics Data System (ADS)
Vargas-Yáñez, M.; García-Martínez, M. C.; Moya, F.; Balbín, R.; López-Jurado, J. L.; Serra, M.; Zunino, P.; Pascual, J.; Salat, J.
2017-09-01
The RADMED project is devoted to the implementation and maintenance of a multidisciplinary monitoring system around the Spanish Mediterranean waters. This observing system is based on periodic multidisciplinary cruises covering the coastal waters, continental shelf and slope waters, and some deep stations (>2000 m) from the westernmost Alboran Sea to Barcelona in the Catalan Sea, including the Balearic Islands. The project was launched in 2007, unifying and extending some previous monitoring projects that had a more reduced geographical coverage. Some of the time series currently available extend from 1992, while the more recent ones were initiated in 2007. The present work updates the available time series up to 2015 (included) and shows the capability of these time series for two main purposes: the calculation of mean values for the properties of the main water masses around the Spanish Mediterranean, and the study of the interannual and decadal variability of such properties. The data set provided by the RADMED project has been merged with historical data from the MEDAR/MEDATLAS data base for the calculation of temperature and salinity trends from 1900 to 2015. The analysis of these time series shows that the intermediate and deep layers of the Western Mediterranean have increased their temperature and salinity, with an acceleration of the warming and salting trends from 1943. Trends for the heat absorbed by the water column for the 1943-2015 period range between 0.2 and 0.6 W/m2, depending on the methodology used. The temperature and salinity trends for the same period and for the intermediate layer are 0.002 °C/yr and 0.001 yr-1, respectively. Deep layers warmed and increased their salinity at rates of 0.004 °C/yr and 0.001 yr-1.
Use of a Principal Components Analysis for the Generation of Daily Time Series.
NASA Astrophysics Data System (ADS)
Dreveton, Christine; Guillou, Yann
2004-07-01
A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges for a generator used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed to represent the present climate correctly. The results show that the generator correctly represents the interannual variance of the observed climate; this is the main result of the work, because one of the main shortcomings of other generators is their inability to represent accurately the observed interannual climate variance, a shortcoming that is not acceptable for an application to weather derivatives. The generator was also tested for calculating conditional probabilities: for example, knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability of reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.
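The core idea (decompose into orthogonal components, generate each score independently, map back) can be sketched as below under simplifying assumptions: the input is a years-by-days anomaly matrix that has already been homogenized and detrended, and each component score is drawn from a Gaussian with the observed variance.

```python
import numpy as np

def pca_generator(X, n_years=100, seed=0):
    """Sketch of a PCA-based daily series generator. X: (years, days)
    matrix of observed daily values. Returns synthetic (n_years, days)."""
    rng = np.random.default_rng(seed)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    scores = U * s                          # observed, mutually orthogonal scores
    std = scores.std(axis=0)
    # Generate each independent component separately, then map back.
    new_scores = rng.normal(scale=std, size=(n_years, len(std)))
    return new_scores @ Vt + mean
```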
NASA Astrophysics Data System (ADS)
Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.
2018-03-01
In the development of information systems and software for predicting dynamics series, neural network methods have recently been applied. They are more flexible than existing analogues and are capable of taking into account the nonlinearities of a series. In this paper, we propose a modified algorithm for predicting dynamics series, which includes a method for training neural networks and an approach to describing and presenting the input data, based on prediction by the multilayer perceptron method. To construct the neural network, the values of the series at its extremum points, and the time values corresponding to them, formed by the sliding window method, are used as input data. The proposed algorithm can act as an independent approach to predicting dynamics series, or as one part of a forecasting system. The efficiency of predicting the evolution of dynamics series for short-term one-step and long-term multi-step forecasts is compared between the classical multilayer perceptron method and the modified algorithm, using synthetic and real data. The result of this modification is the minimization of the iterative error that arises when previously predicted values are fed back as inputs to the neural network, as well as an increase in the accuracy of the network's iterative prediction.
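One way to read the input construction is sketched below: collect the series' extremum values with their time indices through a sliding window and train a perceptron on them. This is an interpretation under stated assumptions, not the authors' code; scikit-learn's MLPRegressor stands in for the multilayer perceptron, and the window width is illustrative.

```python
import numpy as np
from scipy.signal import argrelextrema
from sklearn.neural_network import MLPRegressor

def extremum_windows(x, width=4):
    """Build training pairs from the series' extrema: each input holds the
    last `width` extremum values and their time indices; the target is the
    next extremum value."""
    idx = np.sort(np.concatenate([
        argrelextrema(x, np.greater)[0], argrelextrema(x, np.less)[0]]))
    feats, targets = [], []
    for k in range(width, len(idx)):
        win = idx[k - width:k]
        feats.append(np.concatenate([x[win], win.astype(float)]))
        targets.append(x[idx[k]])
    return np.array(feats), np.array(targets)

# Demo on a noisy synthetic series (layer size and iterations illustrative).
t = np.linspace(0, 40, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
X, y = extremum_windows(x)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
```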
Dobson, P.F.; O'Neil, J.R.
1987-01-01
Measurements of stable isotope compositions and water contents of boninite series volcanic rocks from the island of Chichi-jima, Bonin Islands, Japan, confirm that a large amount (1.6-2.4 wt.%) of primary water was present in these unusual magmas. An enrichment of 0.6‰ in ¹⁸O during differentiation is explained by crystallization of ¹⁸O-depleted mafic phases. Silicic glasses have elevated δ¹⁸O values and relatively low δD values, indicating that they were modified by low-temperature alteration and hydration processes. Mafic glasses, on the other hand, have for the most part retained their primary isotopic signatures since Eocene time. Primary δD values of -53‰ for boninite glasses are higher than those of MORB and suggest that the water was derived from subducted oceanic lithosphere.
NASA Astrophysics Data System (ADS)
Rodriguez-Galiano, Victor; Aragones, David; Caparros-Santiago, Jose A.; Navarro-Cerrillo, Rafael M.
2017-10-01
Land surface phenology (LSP) can improve the characterisation of forest areas and their change processes. The aims of this work were: i) to characterise the temporal dynamics of Mediterranean Pinus forests, and ii) to evaluate the potential of LSP for species discrimination. The experiments were based on 679 mono-specific plots for the 5 native species on the Iberian Peninsula: P. sylvestris, P. pinea, P. halepensis, P. nigra and P. pinaster. The entire MODIS NDVI time series (2000-2016) of the MOD13Q1 product was used to characterise phenology. The following phenological parameters were extracted: the start, end and median days of the season, and the length of the season in days, as well as the base value, maximum value, amplitude and integrated value. Multi-temporal metrics were calculated to synthesise the inter-annual variability of the phenological parameters. The species were discriminated by applying Random Forest (RF) classifiers to different subsets of variables: model 1) the NDVI-smoothed time series, model 2) the multi-temporal metrics of the phenological parameters, and model 3) the multi-temporal metrics plus auxiliary physical variables (altitude, slope, aspect and distance to the coastline). Model 3 was the best, with an overall accuracy of 82% and a kappa coefficient of 0.77; its most important variables were elevation, distance to the coast, and the end and start days of the growing season. The species that presented the largest errors was P. nigra (kappa = 0.45), having locations with behaviour similar to P. sylvestris or P. pinaster.
NASA Astrophysics Data System (ADS)
Rodriguez-Galiano, Victor; Aragones, David; Navarro-Cerrillo, Rafael M.; Caparros-Santiago, Jose A.
2017-04-01
Land surface phenology (LSP) can improve the monitoring of forest areas and their change processes. The aim of this work is to characterize the temporal dynamics of Mediterranean Pinus forests. The experiments were based on 679 mono-specific plots for the 5 native species on the Iberian Peninsula: P. sylvestris, P. pinea, P. halepensis, P. nigra and P. pinaster, which were obtained from the Third National Forest Inventory of Spain. The whole MODIS NDVI time series (2000-2016) was used to characterize the seasonal behavior of the pine forests. The following phenological parameters were extracted for each cycle from the smoothed time series: the days of the beginning, end and middle of the season and the length of the season in days, as well as the base value, maximum value, amplitude and integrated value. Multi-temporal metrics were calculated to synthesize the inter-annual variability of the phenological parameters. Atypical behavior was detected in 2004 and 2011, and in 2000, 2009 and 2015, for all Pinus species, matching wet and dry cycles, respectively. The inter- and intra-species analysis of NDVI and LSP showed two different patterns: a marked decrease during the summer for species such as P. halepensis, P. pinea and P. pinaster, and lower NDVI variation over the year for P. sylvestris and P. nigra in certain areas. P. sylvestris had a phenological behavior different from that of P. pinea, P. halepensis and P. pinaster. P. nigra showed a heterogeneous intra-specific behaviour that might be associated with the existence of subspecies with different phenology.
NASA Astrophysics Data System (ADS)
Eymen, Abdurrahman; Köylü, Ümran
2018-02-01
Local climate change is determined by analysis of long-term recorded meteorological data. In the statistical analysis of the meteorological data, the Mann-Kendall rank test, one of the non-parametric tests, has been used; the Theil-Sen method has been used to determine the magnitude of the trend. The data were obtained from 16 meteorological stations covering the provinces of Kayseri, Sivas, Yozgat, and Nevşehir in the Central Anatolia region of Turkey. Changes in land use affect local climate, and dams are structures that cause major changes on the land. Yamula Dam is located 25 km northwest of Kayseri and has a huge water body of approximately 85 km2. The aforementioned tests have been used for detecting the presence of any positive or negative trend in the meteorological data. The meteorological data for the seasonal average, maximum, and minimum values of relative humidity and the seasonal average wind speed were organized as time series and the tests were conducted accordingly. As a result of these tests, the following were identified: an increase was observed in minimum relative humidity values in the spring, summer, and autumn seasons. As for the seasonal average wind speed, a decrease was detected at nine stations in all seasons, whereas an increase was observed at four stations. After the trend analysis, the pre-dam mean relative humidity time series were modeled with the Autoregressive Integrated Moving Average (ARIMA) model, a statistical modeling tool, and post-dam relative humidity values were predicted by the ARIMA models.
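Both tests are simple to state in code. The sketch below is a textbook formulation under the assumptions of no tied values and unit time spacing, not the authors' exact implementation (which would include tie corrections):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_theil_sen(x):
    """Mann-Kendall S statistic with a normal-approximation two-sided
    p-value (no tie correction), plus the Theil-Sen median slope."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return s, p, np.median(slopes)
```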
Combinations of Earth Orientation Measurements: SPACE2005, COMB2005, and POLE2005
NASA Technical Reports Server (NTRS)
Gross, Richard S.
2006-01-01
Independent Earth orientation measurements taken by the space-geodetic techniques of lunar and satellite laser ranging, by very long baseline interferometry, and by the Global Positioning System have been combined using a Kalman filter. The resulting combined Earth orientation series, SPACE2005, consists of values and uncertainties for Universal Time, polar motion, and their rates that span from September 28, 1976, to January 7, 2006, at daily intervals and is available in versions whose epochs are given at either midnight or noon. The space-geodetic measurements used to generate SPACE2005 have then been combined with optical astrometric measurements to form two additional combined Earth orientation series: (1) COMB2005, consisting of values and uncertainties for Universal Time, polar motion, and their rates that span from January 20, 1962, to January 7, 2006, at daily intervals and which is also available in versions whose epochs are given at either midnight or noon; and (2) POLE2005, consisting of values and uncertainties for polar motion and its rate that span from January 20, 1900, to December 21, 2005, at 30.4375-day intervals.
Coral radiocarbon constraints on the source of the Indonesian throughflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, M.D.; Schrag, D.P.; Kashgarian, M.
1997-06-01
Radiocarbon variability in Porites spp. corals from Guam and the Makassar Strait (Indonesian Seaway) was used to identify the source waters contributing to the Indonesian throughflow. Time series with bimonthly resolution were constructed using accelerator mass spectrometry. The seasonal variability ranges from 15 to 60‰, with large interannual variability. Δ¹⁴C values from Indonesia and Guam have a nearly identical range. Annual mean Δ¹⁴C values from Indonesia are 50 to 60‰ higher than in corals from Canton in the South Equatorial Current [Druffel, 1987]. These observations support a year-round North Pacific source for the Indonesian throughflow and imply negligible contribution by South Equatorial Current water. The large seasonality in Δ¹⁴C values from both sites emphasizes the dynamic behavior of radiocarbon in the surface ocean and suggests that Δ¹⁴C time series of similar resolution can help constrain seasonal and interannual changes in ocean circulation in the Pacific over the last several decades.
Combinations of Earth Orientation Measurements: SPACE2003, COMB2003, and POLE2003
NASA Technical Reports Server (NTRS)
Gross, Richard S.
2004-01-01
Independent Earth orientation measurements taken by the space-geodetic techniques of lunar and satellite laser ranging, very long baseline interferometry, and the Global Positioning System have been combined using a Kalman filter. The resulting combined Earth orientation series, SPACE2003, consists of values and uncertainties for Universal Time, polar motion, and their rates that span from September 28.0, 1976 to January 31.0, 2004 at daily intervals and is available in versions whose epochs are given at either midnight or noon. The space-geodetic measurements used to generate SPACE2003 have then been combined with optical astrometric measurements to form two additional combined Earth orientation series: (1) COMB2003, consisting of values and uncertainties for Universal Time, polar motion, and their rates that span from January 20.0, 1962 to January 31.0, 2004 at daily intervals and which is also available in versions whose epochs are given at either midnight or noon, and (2) POLE2003, consisting of values and uncertainties for polar motion and its rate that span from January 20, 1900 to January 21, 2004 at 30.4375-day intervals.
Combinations of Earth Orientation Measurements: SPACE2004, COMB2004, and POLE2004
NASA Technical Reports Server (NTRS)
Gross, Richard S.
2005-01-01
Independent Earth orientation measurements taken by the space-geodetic techniques of lunar and satellite laser ranging, very long baseline interferometry, and the global positioning system have been combined using a Kalman filter. The resulting combined Earth orientation series, SPACE2004, consists of values and uncertainties for Universal Time, polar motion, and their rates that span from September 28, 1976, to January 22, 2005, at daily intervals and is available in versions whose epochs are given at either midnight or noon. The space-geodetic measurements used to generate SPACE2004 have then been combined with optical astrometric measurements to form two additional combined Earth orientation series: (1) COMB2004, consisting of values and uncertainties for Universal Time, polar motion, and their rates that span from January 20, 1962, to January 22, 2005, at daily intervals and which is also available in versions whose epochs are given at either midnight or noon, and (2) POLE2004, consisting of values and uncertainties for polar motion and its rate that span from January 20, 1900, to January 20, 2005, at 30.4375-day intervals.
Combinations of Earth Orientation Measurements: SPACE2011, COMB2011, and POLE2011
NASA Technical Reports Server (NTRS)
Ratcliff, J. T.; Gross, R. S.
2013-01-01
Independent Earth orientation measurements taken by the space-geodetic techniques of lunar and satellite laser ranging, very long baseline interferometry, and the Global Positioning System have been combined using a Kalman filter. The resulting combined Earth orientation series, SPACE2011, consists of values and uncertainties for Universal Time, polar motion, and their rates that span from September 28, 1976, to July 13, 2012, at daily intervals and is available in versions with epochs given at either midnight or noon. The space-geodetic measurements used to generate SPACE2011 have then been combined with optical astrometric measurements to form two additional combined Earth orientation series: (1) COMB2011, consisting of values and uncertainties for Universal Time, polar motion, and their rates that span from January 20, 1962, to July 13, 2012, at daily intervals and which are also available in versions with epochs given at either midnight or noon; and (2) POLE2011, consisting of values and uncertainties for polar motion and its rate that span from January 20, 1900, to June 21, 2012, at 30.4375-day intervals.
Combinations of Earth Orientation Measurements: SPACE2013, COMB2013, and POLE2013
NASA Technical Reports Server (NTRS)
Ratcliff, J. T.; Gross, R. S.
2015-01-01
Independent Earth orientation measurements taken by the space-geodetic techniques of lunar and satellite laser ranging, very long baseline interferometry, and the Global Positioning System have been combined using a Kalman filter. The resulting combined Earth orientation series, SPACE2013, consists of values and uncertainties for Universal Time, polar motion, and their rates that span from September 28, 1976, to June 30, 2014, at daily intervals and is available in versions with epochs given at either midnight or noon. The space-geodetic measurements used to generate SPACE2013 have then been combined with optical astrometric measurements to form two additional combined Earth orientation series: (1) COMB2013, consisting of values and uncertainties for Universal Time, polar motion, and their rates that span from January 20, 1962, to June 30, 2014, at daily intervals and which are also available in versions with epochs given at either midnight or noon; and (2) POLE2013, consisting of values and uncertainties for polar motion and its rate that span from January 20, 1900, to June 22, 2014, at 30.4375-day intervals.
Combinations of Earth Orientation Measurements: SPACE2016, COMB2016, and POLE2016
NASA Technical Reports Server (NTRS)
Ratcliff, J. T.; Gross, R. S.
2017-01-01
Independent Earth orientation measurements taken by the space-geodetic techniques of lunar and satellite laser ranging, very long baseline interferometry, and the Global Positioning System have been combined using a Kalman filter. The resulting combined Earth orientation series, SPACE2016, consists of values and uncertainties for Universal Time, polar motion, and their rates that span from September 28, 1976, to June 30, 2017, at daily intervals and is available in versions with epochs given at either midnight or noon. The space-geodetic measurements used to generate SPACE2016 have then been combined with optical astrometric measurements to form two additional combined Earth orientation series: (1) COMB2016, consisting of values and uncertainties for Universal Time, polar motion, and their rates that span from January 20, 1962, to June 30, 2017, at daily intervals and which are also available in versions with epochs given at either midnight or noon; and (2) POLE2016, consisting of values and uncertainties for polar motion and its rate that span from January 20, 1900, to June 22, 2017, at 30.4375-day intervals.
Combinations of Earth Orientation Measurements: SPACE2012, COMB2012, and POLE2012
NASA Technical Reports Server (NTRS)
Ratcliff, J. T.; Gross, R. S.
2013-01-01
Independent Earth orientation measurements taken by the space-geodetic techniques of lunar and satellite laser ranging, very long baseline interferometry, and the Global Positioning System have been combined using a Kalman filter. The resulting combined Earth orientation series, SPACE2012, consists of values and uncertainties for Universal Time, polar motion, and their rates that span from September 28, 1976, to April 26, 2013, at daily intervals and is available in versions with epochs given at either midnight or noon. The space-geodetic measurements used to generate SPACE2012 have then been combined with optical astrometric measurements to form two additional combined Earth orientation series: (1) COMB2012, consisting of values and uncertainties for Universal Time, polar motion, and their rates that span from January 20, 1962, to April 26, 2013, at daily intervals and which are also available in versions with epochs given at either midnight or noon; and (2) POLE2012, consisting of values and uncertainties for polar motion and its rate that span from January 20, 1900, to May 22, 2013, at 30.4375-day intervals.
Advanced methods for modeling water-levels and estimating drawdowns with SeriesSEE, an Excel add-in
Halford, Keith; Garcia, C. Amanda; Fenelon, Joe; Mirus, Benjamin B.
2012-12-21
Water-level modeling is used for multiple-well aquifer tests to reliably differentiate pumping responses from natural water-level changes in wells, or “environmental fluctuations.” Synthetic water levels are created during water-level modeling and represent the summation of multiple component fluctuations, including those caused by environmental forcing and pumping. Pumping signals are modeled by transforming step-wise pumping records into water-level changes by using superimposed Theis functions. Water levels can be modeled robustly with this Theis-transform approach because environmental fluctuations and pumping signals are simulated simultaneously. Water-level modeling with Theis transforms has been implemented in the program SeriesSEE, which is a Microsoft® Excel add-in. Moving average, Theis, pneumatic-lag, and gamma functions transform time series of measured values into water-level model components in SeriesSEE. Earth tides and step transforms are additional computed water-level model components. Water-level models are calibrated by minimizing a sum-of-squares objective function where singular value decomposition and Tikhonov regularization stabilize results. Drawdown estimates from a water-level model are the summation of all Theis transforms minus residual differences between synthetic and measured water levels. The accuracy of drawdown estimates is limited primarily by noise in the data sets, not the Theis-transform approach. Drawdowns much smaller than environmental fluctuations have been detected across major fault structures, at distances of more than 1 mile from the pumping well, and with limited pre-pumping and recovery data at sites across the United States. In addition to water-level modeling, utilities exist in SeriesSEE for viewing, cleaning, manipulating, and analyzing time-series data.
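The Theis transform of a step-wise pumping record reduces to superposing well-function responses at each rate change. A minimal sketch, with hypothetical transmissivity T, storativity S, and radius r, and units left to the user; scipy's exp1 is the exponential integral, which equals the Theis well function W(u):

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(t, r, Q, T, S):
    """Theis drawdown at distance r and elapsed time t > 0 for a constant
    pumping rate Q: s = Q / (4 pi T) * W(u), with u = r^2 S / (4 T t)."""
    u = r**2 * S / (4.0 * T * np.asarray(t, dtype=float))
    return Q / (4.0 * np.pi * T) * exp1(u)

def step_pumping_drawdown(times, steps, r, T, S):
    """Superpose Theis responses for a step-wise pumping record given as
    (start_time, rate) pairs; times is a numpy array of output times."""
    s = np.zeros_like(times, dtype=float)
    prev_q = 0.0
    for t0, q in steps:
        active = times > t0
        s[active] += theis_drawdown(times[active] - t0, r, q - prev_q, T, S)
        prev_q = q
    return s
```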
NASA Astrophysics Data System (ADS)
El Yazidi, Abdelhadi; Ramonet, Michel; Ciais, Philippe; Broquet, Gregoire; Pison, Isabelle; Abbaris, Amara; Brunner, Dominik; Conil, Sebastien; Delmotte, Marc; Gheusi, Francois; Guerin, Frederic; Hazan, Lynn; Kachroudi, Nesrine; Kouvarakis, Giorgos; Mihalopoulos, Nikolaos; Rivier, Leonard; Serça, Dominique
2018-03-01
This study deals with the problem of identifying atmospheric data influenced by local emissions that can result in spikes in time series of greenhouse gases and long-lived tracer measurements. We considered three spike detection methods known as coefficient of variation (COV), robust extraction of baseline signal (REBS) and standard deviation of the background (SD) to detect and filter positive spikes in continuous greenhouse gas time series from four monitoring stations representative of the European ICOS (Integrated Carbon Observation System) Research Infrastructure network. The results of the different methods are compared to each other and against a manual detection performed by station managers. Four stations were selected as test cases to apply the spike detection methods: a continental rural tower of 100 m height in eastern France (OPE), a high-mountain observatory in the south-west of France (PDM), a regional marine background site in Crete (FKL) and a marine clean-air background site in the Southern Hemisphere on Amsterdam Island (AMS). This selection allows us to address spike detection problems in time series with different variability. Two years of continuous measurements of CO2, CH4 and CO were analysed. All methods were found to be able to detect short-term spikes (lasting from a few seconds to a few minutes) in the time series. Analysis of the results of each method leads us to exclude the COV method due to the requirement to arbitrarily specify an a priori percentage of rejected data in the time series, which may over- or underestimate the actual number of spikes. The two other methods freely determine the number of spikes for a given set of parameters, and the values of these parameters were calibrated to provide the best match with spikes known to reflect local emissions episodes that are well documented by the station managers. More than 96 % of the spikes manually identified by station managers were successfully detected both in the SD and the REBS methods after the best adjustment of parameter values. At PDM, measurements made by two analyzers located 200 m from each other allow us to confirm that the CH4 spikes identified in one of the time series but not in the other correspond to a local source from a sewage treatment facility in one of the observatory buildings. From this experiment, we also found that the REBS method underestimates the number of positive anomalies in the CH4 data caused by local sewage emissions. As a conclusion, we recommend the use of the SD method, which also appears to be the easiest one to implement in automatic data processing, used for the operational filtering of spikes in greenhouse gases time series at global and regional monitoring stations of networks like that of the ICOS atmosphere network.
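As a flavor of the recommended SD approach, the sketch below flags positive departures from a moving background. The window length and threshold k are stand-ins for the parameters the authors calibrated against the station managers' manual flags, and the paper's exact background definition may differ.

```python
import numpy as np
import pandas as pd

def sd_spike_filter(x, window=600, k=3.0):
    """Flag positive spikes: points exceeding the centered moving median
    of the background by k running standard deviations."""
    s = pd.Series(x)
    med = s.rolling(window, center=True, min_periods=1).median()
    sd = s.rolling(window, center=True, min_periods=1).std()
    spikes = (s - med) > k * sd        # positive spikes only
    return spikes.to_numpy()
```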
Code of Federal Regulations, 2014 CFR
2014-07-01
... series of daily values represents the 98th percentile for that year. Creditable samples include daily... measured (or averaged from hourly measurements in AQS) from midnight to midnight (local standard time) from... design value (DV) or a 24-hour PM2.5 NAAQS DV to determine if those metrics, which are judged to be based...
Efficient parallel algorithms for string editing and related problems
NASA Technical Reports Server (NTRS)
Apostolico, Alberto; Atallah, Mikhail J.; Larmore, Lawrence; Mcfaddin, H. S.
1988-01-01
The string editing problem for input strings x and y consists of transforming x into y by performing a series of weighted edit operations on x of overall minimum cost. An edit operation on x can be the deletion of a symbol from x, the insertion of a symbol into x, or the substitution of a symbol of x with another symbol. This problem has a well-known O(|x||y|)-time sequential solution (25). Efficient parallel random-access machine (PRAM) algorithms for the string editing problem are given. If m = min(|x|, |y|) and n = max(|x|, |y|), then the CREW bound is O(log m log n) time with O(mn/log m) processors. In all algorithms, space is O(mn).
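For reference, this is the sequential dynamic program that the PRAM algorithms parallelize; the weights here are illustrative defaults:

```python
def edit_distance(x, y, w_del=1, w_ins=1, w_sub=1):
    """Classic O(|x||y|) DP for weighted string editing: d[i][j] is the
    minimum cost of transforming x[:i] into y[:j]."""
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * w_del
    for j in range(1, n + 1):
        d[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (0 if x[i - 1] == y[j - 1] else w_sub)
            d[i][j] = min(sub, d[i - 1][j] + w_del, d[i][j - 1] + w_ins)
    return d[m][n]
```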
A Physiological Time Series Dynamics-Based Approach to Patient Monitoring and Outcome Prediction
Lehman, Li-Wei H.; Adams, Ryan P.; Mayaud, Louis; Moody, George B.; Malhotra, Atul; Mark, Roger G.; Nemati, Shamim
2015-01-01
Cardiovascular variables such as heart rate (HR) and blood pressure (BP) are regulated by an underlying control system, and therefore, the time series of these vital signs exhibit rich dynamical patterns of interaction in response to external perturbations (e.g., drug administration), as well as pathological states (e.g., onset of sepsis and hypotension). A question of interest is whether “similar” dynamical patterns can be identified across a heterogeneous patient cohort, and be used for prognosis of patients’ health and progress. In this paper, we used a switching vector autoregressive framework to systematically learn and identify a collection of vital sign time series dynamics, which are possibly recurrent within the same patient and may be shared across the entire cohort. We show that these dynamical behaviors can be used to characterize the physiological “state” of a patient. We validate our technique using simulated time series of the cardiovascular system, and human recordings of HR and BP time series from an orthostatic stress study with known postural states. Using the HR and BP dynamics of an intensive care unit (ICU) cohort of over 450 patients from the MIMIC II database, we demonstrate that the discovered cardiovascular dynamics are significantly associated with hospital mortality (dynamic modes 3 and 9, p = 0.001, p = 0.006 from logistic regression after adjusting for the APACHE scores). Combining the dynamics of BP time series and SAPS-I or APACHE-III provided a more accurate assessment of patient survival/mortality in the hospital than using SAPS-I and APACHE-III alone (p = 0.005 and p = 0.045). Our results suggest that the discovered dynamics of vital sign time series may contain additional prognostic value beyond that of the baseline acuity measures, and can potentially be used as an independent predictor of outcomes in the ICU. PMID:25014976
33 CFR 74.20-1 - Buoy and vessel use costs.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 7310 (series) which is available from the District Budget Office of the appropriate Coast Guard District Commander. (b) Buoy and vessel use charges under this part are made for the cost or value of time... obstruction. No charge for time and expense of Coast Guard vessels is made when the marking of the obstruction...
33 CFR 74.20-1 - Buoy and vessel use costs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 7310 (series) which is available from the District Budget Office of the appropriate Coast Guard District Commander. (b) Buoy and vessel use charges under this part are made for the cost or value of time... obstruction. No charge for time and expense of Coast Guard vessels is made when the marking of the obstruction...
NASA Astrophysics Data System (ADS)
Amorese, D.; Grasso, J.-R.; Garambois, S.; Font, M.
2018-05-01
The rank-sum multiple change-point method is a robust statistical procedure designed to search for the optimal number and location of change points in an arbitrary continuous or discrete sequence of values. As such, this procedure can be used to analyse time-series data. Twelve years of robust data sets for the Séchilienne (French Alps) rockslide show a continuous increase in average displacement rate from 50 to 280 mm per month in the 2004-2014 period, followed by a strong decrease back to 50 mm per month in the 2014-2015 period. Where previous studies tentatively suggest possible kinematic phases, they rely solely on empirical threshold values. In this paper, we analyse how the use of a statistical algorithm for change-point detection helps to better understand time phases in landslide kinematics. First, we test the efficiency of the statistical algorithm on geophysical benchmark data, these data sets (stream flows and Northern Hemisphere temperatures) having already been analysed by independent statistical tools. Second, we apply the method to 12-yr daily time-series of the Séchilienne landslide, for rainfall and displacement data, from 2003 December to 2015 December, in order to quantitatively extract changes in landslide kinematics. We find two strong significant discontinuities in the weekly cumulated rainfall values: an average rainfall rate increase is resolved in 2012 April and a decrease in 2014 August. Four robust changes are highlighted in the displacement time-series (2008 May, 2009 November-December to 2010 January, 2012 September and 2014 March), the 2010 one being preceded by a significant but weak rainfall rate increase (in 2009 November). Accordingly, we are able to quantitatively define five kinematic stages for the Séchilienne rock avalanche during this period. The synchronization between rainfall and displacement rate, only resolved at the end of 2009 and the beginning of 2010, corresponds to a remarkable change (a fourfold increase in mean displacement rate) in the landslide kinematics. This suggests that an increase in rainfall is able to drive an increase in the landslide displacement rate, but that most of the kinematics of the landslide is not directly attributable to rainfall amount. The detailed exploration of the characteristics of the five kinematic stages suggests that the weekly averaged displacement rates are more tied to the frequency of rainy days than to the rainfall rate values. These results suggest the pattern of the Séchilienne rock avalanche is consistent with previous findings that landslide kinematics depends not only on rainfall but also on soil moisture conditions (known to be more strongly related to precipitation frequency than to precipitation amount). Finally, our analysis of the displacement rate time-series pinpoints a change in the susceptibility of the slope response to rainfall, which was slower before the end of 2009 than after. The kinematic history as depicted by statistical tools opens new routes to understanding the apparent complexity of the Séchilienne landslide kinematics.
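A minimal rank-based change-point scan in the spirit of the method, shown here for a single change point only (the published procedure searches for the optimal number and locations of multiple change points):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def best_rank_changepoint(x, min_seg=10):
    """Scan every admissible split of the series, test the two segments
    with the rank-based Mann-Whitney U test, and return the split with
    the smallest p-value as the candidate change point."""
    best = (None, 1.0)
    for k in range(min_seg, len(x) - min_seg):
        _, p = mannwhitneyu(x[:k], x[k:], alternative="two-sided")
        if p < best[1]:
            best = (k, p)
    return best   # (index of split, p-value)
```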
NASA Astrophysics Data System (ADS)
Dobrovolný, Petr; Brázdil, Rudolf; Kotyza, Oldřich; Valášek, Hubert
2010-05-01
Series of temperature and precipitation indices (on an ordinal scale), based on the interpretation of various sources of documentary evidence (e.g. narrative written reports, visual daily weather records, personal correspondence, special prints, official economic records, etc.), are used as predictors in the reconstruction of mean seasonal temperatures and seasonal precipitation totals for the Czech Lands from A.D. 1500. Long instrumental measurements from 1771 (temperatures) and 1805 (precipitation) are used as target values to calibrate and verify the documentary-based index series. The reconstruction is based on linear regression with variance and mean adjustments. The reconstructed series were compared with similar European documentary-based reconstructions as well as with reconstructions based on different natural proxies, and were analyzed with respect to trends on different time scales and the occurrence of extreme values. We discuss the uncertainties typical for documentary evidence from historical archives. Although reports on weather and climate in documentary archives cover all seasons, our reconstructions provide the best results for winter temperatures and summer precipitation; the explained variance for these seasons is comparable to other existing reconstructions for Central Europe.
NASA Astrophysics Data System (ADS)
Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.
2008-11-01
We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal components determination, followed by the least squares iterative regression method, was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the analyzed series, in decreasing order of significance: from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
Use of a prototype pulse oximeter for time series analysis of heart rate variability
NASA Astrophysics Data System (ADS)
González, Erika; López, Jehú; Hautefeuille, Mathieu; Velázquez, Víctor; Del Moral, Jésica
2015-05-01
This work presents the development of a low-cost pulse oximeter prototype consisting of pulsed red and infrared commercial LEDs and a broad-spectrum photodetector used to register time series of heart rate and oxygen saturation of blood. Besides providing these values, like any other pulse oximeter, the platform processes the signals to compute a power spectrum analysis of the patient's heart rate variability in real time; additionally, the device allows access to all raw and analyzed data if database construction or any other kind of further analysis is desired. Since the prototype is capable of acquiring data for long periods of time, it is suitable for collecting data during real-life activities, enabling the development of future wearable applications.
NASA Astrophysics Data System (ADS)
Yamada, Masayoshi; Fukuzawa, Masayuki; Kitsunezuka, Yoshiki; Kishida, Jun; Nakamori, Nobuyuki; Kanamori, Hitoshi; Sakurai, Takashi; Kodama, Souichi
1995-05-01
In order to detect pulsation from a series of noisy ultrasound-echo moving images of a newborn baby's head for pediatric diagnosis, a digital image processing system capable of recording at the video rate and processing the recorded series of images was constructed. The time-sequence variations of each pixel value in a series of moving images were analyzed and then an algorithm based on Fourier transform was developed for the pulsation detection, noting that the pulsation associated with blood flow was periodically changed by heartbeat. Pulsation detection for pediatric diagnosis was successfully made from a series of noisy ultrasound-echo moving images of newborn baby's head by using the image processing system and the pulsation detection algorithm developed here.
Reverse engineering gene regulatory networks from measurement with missing values.
Ogundijo, Oyetunji E; Elmas, Abdulkadir; Wang, Xiaodong
2016-12-01
Gene expression time series data are usually in the form of high-dimensional arrays. Unfortunately, the data may sometimes contain missing values: either the expression values of some genes at some time points, or the entire expression values of a single time point or of sets of consecutive time points. This significantly affects the performance of many algorithms for gene expression analysis that take as input the complete matrix of gene expression measurements. For instance, previous works have shown that gene regulatory interactions can be estimated from the complete matrix of gene expression measurements. Yet, to date, few algorithms have been proposed for the inference of gene regulatory networks from gene expression data with missing values. We describe a nonlinear dynamic stochastic model for the evolution of gene expression. The model captures the structural, dynamical, and nonlinear natures of the underlying biomolecular systems. We present point-based Gaussian approximation (PBGA) filters for joint state and parameter estimation of the system with one-step or two-step missing measurements. The PBGA filters use Gaussian approximation and various quadrature rules, such as the unscented transform (UT), the third-degree cubature rule and the central difference rule, for computing the related posteriors. The proposed algorithm is evaluated with satisfying results for synthetic networks, in silico networks released as part of the DREAM project, and a real biological network, the in vivo reverse engineering and modeling assessment (IRMA) network of the yeast Saccharomyces cerevisiae. PBGA filters are proposed to elucidate the underlying gene regulatory network (GRN) from time series gene expression data that contain missing values. In our state-space model, we propose a measurement model that incorporates the effect of the missing data points into the sequential algorithm. This approach produces a better inference of the model parameters and, hence, a more accurate prediction of the underlying GRN than conventional Gaussian approximation (GA) filters that ignore the missing data points.
From Engineering Science to Big Science: The NACA and NASA Collier Trophy Research Project Winners
NASA Technical Reports Server (NTRS)
Mack, Pamela E. (Editor)
1998-01-01
The chapters of this book discuss a series of case studies of notable technological projects carried out at least in part by the NACA and NASA. The case studies chosen are those projects that won the National Aeronautic Association's (NAA) Collier Trophy for "the greatest achievement in aviation in America, the value of which has been thoroughly demonstrated by use during the preceding year." Looking back on the whole series of projects we can examine both what successes were seen as important at various times, and how the goals and organization of these notable projects changed over time.
Multifractal Value at Risk model
NASA Astrophysics Data System (ADS)
Lee, Hojin; Song, Jae Wook; Chang, Woojin
2016-06-01
In this paper, a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR) model. The MFVaR introduced in this paper is analytically tractable and not based on simulation. An empirical study showed that MFVaR can provide more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for the risk measurement of extreme credit events.
Comparing the structure of an emerging market with a mature one under global perturbation
NASA Astrophysics Data System (ADS)
Namaki, A.; Jafari, G. R.; Raei, R.
2011-09-01
In this paper we investigate the Tehran Stock Exchange (TSE) and the Dow Jones Industrial Average (DJIA) in terms of perturbed correlation matrices. There are two methods of perturbing a stock market, namely local and global perturbation. In the local method, we replace one correlation coefficient of the cross-correlation matrix with a coefficient calculated from two Gaussian-distributed time series, whereas in the global method, we reconstruct the correlation matrix after replacing the original return series with Gaussian-distributed time series. The local perturbation is a purely technical exercise. We analyze these markets through two statistical approaches, random matrix theory (RMT) and the correlation coefficient distribution. Using RMT, we find that the largest eigenvalue is an influence common to all stocks and that this eigenvalue peaks during financial shocks. We find there are a few correlated stocks that provide the essential robustness of the stock market, but by replacing their return time series with Gaussian-distributed time series, the mean values of the correlation coefficients, the largest eigenvalues of the stock markets and the fraction of eigenvalues that deviate from the RMT prediction fall sharply in both markets. By comparing the two markets, we can see that the DJIA is more sensitive to global perturbations. These findings are crucial for risk management and portfolio selection.
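The global perturbation is easy to reproduce in outline: recompute the correlation spectrum after swapping every return series for an independent Gaussian surrogate, then compare the leading eigenvalues. A minimal sketch (real return data would replace the random stand-in):

```python
import numpy as np

def global_perturbation_eigs(returns, seed=0):
    """returns: (stocks, time) matrix. Compare eigenvalues of the empirical
    correlation matrix with those after the 'global' perturbation, where
    every series is replaced by an independent Gaussian surrogate."""
    rng = np.random.default_rng(seed)
    c_emp = np.corrcoef(returns)
    c_rand = np.corrcoef(rng.normal(size=returns.shape))
    eig_emp = np.sort(np.linalg.eigvalsh(c_emp))[::-1]
    eig_rand = np.sort(np.linalg.eigvalsh(c_rand))[::-1]
    return eig_emp, eig_rand   # the market mode shows up as eig_emp[0]
```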
Applying "domino" model to study dipolar geomagnetic field reversals and secular variation
NASA Astrophysics Data System (ADS)
Peqini, Klaudio; Duka, Bejo
2014-05-01
Aiming to understand the physical processes underlying geomagnetic field reversal events, different numerical models have been conceived. We considered the so-called "domino" model, an Ising-Heisenberg model of interacting magnetic spins aligned along a ring [Mazaud and Laj, EPSL, 1989; Mori et al., arXiv:1110.5062v2, 2012]. We present here some results which differ slightly from the already published ones, and give our interpretation of the differences. Following empirical studies of long series of the axial magnetic moment (dipolar moment or "magnetization") generated by the model while varying all model parameters, we defined the set of parameters that supplies the longest mean time between reversals. Using this set of parameters, a short time series (about 10,000 years) of the axial magnetic moment was generated. After de-noising the fluctuations of this time series, we compared it with the series of dipolar magnetic moment values supplied by the CALS10K.1b model for the last 10,000 years. We found similar behavior in both series, even though the "domino" model cannot supply a full explanation of the secular variation (SV) of the geomagnetic field. In a similar way, we compare a 14,000-year-long series with the dipolar magnetic moment obtained from the SHA.DIF.14k model [Pavón-Carrasco et al., EPSL, 2014].
Analyzing time-ordered event data with missed observations.
Dokter, Adriaan M; van Loon, E Emiel; Fokkema, Wimke; Lameris, Thomas K; Nolet, Bart A; van der Jeugd, Henk P
2017-09-01
A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can come from any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences on the process of interest, and statistical analyses are needed that can identify and correct the compromised detection processes. Missed observations in time series lead to observed time intervals between events at multiples of the true inter-event time, which conveys information on their detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for the analysis of event data with potential observation bias and for its removal. The methodology was applied to simulation data and to a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance of missing arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor of 3, while parameter values corrected for missed observations were within 1% of their true simulated values. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and to spurious rate differences between sites, which are introduced by differences in observational conditions. These results show that the derived methodology can be used to effectively remove observational biases in time-ordered event data.
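The form of such a density can be sketched as follows; this is an assumption about the structure of the derivation, not the paper's exact result. If each event is independently detected with probability p, an observed interval spans k true inter-event intervals with geometric weight, giving a mixture of k-fold convolutions of the true interval density f:

```latex
f_{\mathrm{obs}}(t) \;=\; \sum_{k=1}^{\infty} p\,(1-p)^{k-1}\, f^{*k}(t),
\qquad f^{*k} = \underbrace{f * \cdots * f}_{k\ \text{factors}}
```

For nearly periodic events with period tau, the observed intervals then cluster at multiples k*tau with weights p(1-p)^(k-1), which is exactly the multiples-of-the-true-interval signature described above and is what carries the information on detection probability.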
Forecasting air quality time series using deep learning.
Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse
2018-04-13
This paper presents one of the first applications of deep learning (DL) techniques to predict air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis for identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-hr averaged surface ozone (O3) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train the network and forecast values up to 72 hours ahead with low error rates. The LSTM was also able to forecast the duration of continuous O3 exceedances. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that averaged gaps of less than eight time steps with incremental steps based on first-order differences of neighboring time periods. The data were then used to train decision trees to evaluate input feature importance over different prediction horizons. The number of features used to train the LSTM model was reduced from 25 to 5, resulting in improved accuracy as measured by the mean absolute error (MAE). Parameter sensitivity analysis identified the look-back nodes associated with the RNN as a significant source of error if not aligned with the prediction horizon. Overall, MAE values of less than 2 were calculated for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured dataset were replaced using a new imputation method that generated values closer to the expected value based on time and season. Decision trees were used to identify the input variables with the greatest importance. The methods presented in this paper allow air managers to forecast long-range air pollution concentrations while monitoring only key parameters, and without transforming the dataset in its entirety, thus allowing real-time inputs and continuous prediction.
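A minimal sketch of the forecasting setup described above, assuming a Keras-style LSTM and a synthetic stand-in for the hourly monitoring-station record; the layer sizes and training settings are illustrative, not the authors' architecture. The window construction echoes the paper's finding that the look-back should be aligned with the prediction horizon.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, look_back, horizon):
    """Slice a 1-D series into (look_back window -> horizon-ahead target) pairs."""
    X, y = [], []
    for i in range(len(series) - look_back - horizon + 1):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back + horizon - 1])
    return np.array(X)[..., None], np.array(y)

# synthetic hourly "ozone" stand-in: a daily cycle plus noise
t = np.arange(5000)
series = 30 + 10 * np.sin(2 * np.pi * t / 24) + np.random.randn(t.size)

look_back, horizon = 72, 72        # align the look-back with the horizon
X, y = make_windows(series, look_back, horizon)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(look_back, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")   # MAE, as reported in the paper
model.fit(X[:-500], y[:-500], epochs=3, batch_size=64, verbose=0)
print("held-out MAE:", model.evaluate(X[-500:], y[-500:], verbose=0))
```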
Rescaled range analysis of streamflow records in the São Francisco River Basin, Brazil
NASA Astrophysics Data System (ADS)
Araujo, Marcelo Vitor Oliveira; Celeste, Alcigeimes B.
2018-01-01
Hydrological time series are sometimes found to have a distinctive behavior known as long-term persistence, in which subsequent values depend on each other even over very large time scales, implying multiyear consecutive droughts or floods. Typical models used to generate synthetic hydrological scenarios, widely used in the planning and management of water resources, fail to preserve this kind of persistence in the generated data and may therefore have a major impact on projects whose design lives span long periods of time. This study evaluates long-term persistence in streamflow records by means of the rescaled range analysis proposed by the British engineer Harold E. Hurst, who first observed the phenomenon in the mid-twentieth century. In this paper, Hurst's procedure is enhanced by a strategy based on statistical hypothesis testing. The case study comprises the six main hydroelectric power plants located in the São Francisco River Basin, part of the Brazilian National Grid. Historical time series of inflows to the major reservoirs of the system are investigated, and five of the six sites show significant persistence, with values of the so-called Hurst exponent near or greater than 0.7, i.e., around 40% above the value of 0.5 that represents a white noise process, suggesting that decision makers should take long-term persistence into consideration when conducting water resources planning and management studies in the region.
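For readers unfamiliar with Hurst's procedure, the sketch below is a bare-bones rescaled range (R/S) estimator: compute R/S over blocks of varying size and take the slope of log(R/S) against log(n). It is a generic textbook version, without the hypothesis-testing enhancement the paper proposes.

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by rescaled range (R/S) analysis.

    For each window size n, the series is split into blocks; R/S is the range
    of the cumulative mean-adjusted sums divided by the block's standard
    deviation. H is the slope of log(R/S) versus log(n).
    """
    x = np.asarray(series, dtype=float)
    sizes, rs_vals = [], []
    n = len(x) // 2
    while n >= min_chunk:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            block = x[start:start + n]
            z = np.cumsum(block - block.mean())
            r = z.max() - z.min()
            s = block.std()
            if s > 0:
                rs.append(r / s)
        sizes.append(n)
        rs_vals.append(np.mean(rs))
        n //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

white = np.random.randn(4096)
print(f"H(white noise) ~ {hurst_rs(white):.2f}")  # expect a value near 0.5
```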
NASA Astrophysics Data System (ADS)
Patra, S. R.
2017-12-01
Reference crop evapotranspiration (ET0) influences water resources and is considered a vital process in arid hydrologic systems; it is also one of the most important measures for identifying drought conditions. Therefore, time-series forecasting of evapotranspiration is very important to help decision makers and water-system managers build proper systems to sustain and manage water resources. Time-series forecasting assumes that history repeats itself; hence, by analyzing past values, better choices, or forecasts, can be made for the future. Ten years of ET0 data were used in this study to ensure satisfactory forecasts of monthly values. Three models are presented: an autoregressive integrated moving average (ARIMA) model, an artificial neural network (ANN) model, and a support vector machine (SVM) model. These three models are used for forecasting monthly reference crop evapotranspiration based on ten years (1991-2001) of measured evaporation records at the Ganjam region, Odisha, India, without considering climate data. The developed models allow water resource managers to predict up to 12 months ahead, making these predictions very useful for optimizing the resources needed for effective water resources management. In this study, multistep-ahead prediction is performed, which is more complex and troublesome than one-step-ahead prediction. Our investigation suggested that nonlinear relationships may exist among the monthly indices, so the ARIMA model might not be able to effectively extract the full relationship hidden in the historical data. Support vector machines are potentially useful time-series forecasting strategies on account of their strong nonlinear mapping capability and robustness to complexity in forecasting data. SVMs have great learning capability in time-series modelling compared to ANNs: for instance, SVMs implement the structural risk minimization principle, which allows better generalization than neural networks, which use the empirical risk minimization principle. The reliability of these computational models was analysed in light of simulation results, and it was found that the SVM model produces the best results among the three. Future research should extend the validation data set and check the validity of our results in different areas with hybrid intelligence techniques.
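A minimal sketch of the direct multistep-ahead strategy with an SVM regressor, assuming sklearn and a synthetic monthly ET0 series in place of the Ganjam record; the lag count, kernel, and hyperparameters are illustrative guesses.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

def lagged_matrix(series, n_lags, n_ahead):
    """Build a lagged design matrix: past n_lags values -> next n_ahead values."""
    X, Y = [], []
    for i in range(len(series) - n_lags - n_ahead + 1):
        X.append(series[i:i + n_lags])
        Y.append(series[i + n_lags:i + n_lags + n_ahead])
    return np.array(X), np.array(Y)

months = np.arange(120)  # ten years of monthly values
et0 = 5 + 2 * np.sin(2 * np.pi * months / 12) + 0.3 * np.random.randn(120)

# direct multistep strategy: predict the next 12 months in one shot
X, Y = lagged_matrix(et0, n_lags=12, n_ahead=12)
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:-1], Y[:-1])
forecast = model.predict(X[-1:])  # 12-month-ahead forecast
print(np.round(forecast, 2))
```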
Adjusted monthly temperature and precipitation values for Guinea Conakry (1941-2010) using HOMER.
NASA Astrophysics Data System (ADS)
Aguilar, Enric; Aziz Barry, Abdoul; Mestre, Olivier
2013-04-01
Africa is a data-sparse region and there are very few studies presenting homogenized monthly records. In this work, we introduce a dataset consisting of 12 stations spread over Guinea Conakry containing daily values of maximum and minimum temperature and accumulated rainfall for the period 1941-2010. The daily values were quality controlled using R-Climdex routines, plus other interactive quality control applications coded by the authors. After applying the different tests, more than 200 daily values were flagged as doubtful and carefully checked against the statistical distribution of the series and the rest of the dataset. Finally, 40 values were modified or set to missing and the rest were validated. The quality-controlled daily dataset was used to produce monthly means, which were homogenized with HOMER, a new R package that includes the relative methods that performed best in the experiments conducted in the framework of the COST-HOME action. A total of 38 inhomogeneities were found for temperature. As a total of 788 years of data were analyzed, the average ratio was one break every 20.7 years. The station with the largest number of inhomogeneities was Conakry (5 breaks) and one station, Kissidougou, was identified as homogeneous. The average number of breaks per station was 3.2. The mean value of the monthly factors applied to maximum (minimum) temperature was 0.17 °C (-1.08 °C). For precipitation, because a denser network is needed to correctly homogenize this variable, only two major inhomogeneities, in Conakry (1941-1961, -12%) and Kindia (1941-1976, -10%), were corrected. The adjusted dataset was used to compute regional series for the three variables and trends for the 1941-2010 period. The regional mean was computed by simply averaging anomalies with respect to 1971-2000 of the 12 time series. Two different versions were obtained: the first (A) makes use of the missing-value interpolation performed by HOMER (so all annual values in the regional series are an average of 12 anomalies); the second (B) removes the missing values, so each value of the regional series is an average of 5 to 12 anomalies; in this case, a variance stabilization factor was applied. As a last step, a trend analysis was applied to the regional series, using two different approaches: standard least squares regression (LS) and Zhang's implementation of the Sen slope estimator (SEN), applied using the zyp R package. The results for the A and B series and the different trend calculations are very similar in terms of slopes and significance. All the identified trends are significant at the 95% confidence level or better. Using the A series and the SEN slope, the annual regional mean of maximum temperatures has increased by 0.135 °C/decade (95% confidence interval: 0.087/0.173) and the annual regional mean of minimum temperatures by 0.092 °C/decade (0.050/0.135). Maximum temperatures present high values in the 1940s to 1950s and a large increase in the last decades. In contrast, minimum temperatures were relatively cooler in the 1940s and 1950s and their increase in the last decades is more moderate. Finally, the regional mean of annual accumulated precipitation decreased between 1941 and 2010 (trend of -2.20 mm; confidence interval -3.82/-0.64). The precipitation series are dominated by the high values before 1970, followed by a well-known decrease in rainfall. These homogenized monthly series will improve future analyses over this portion of Western Africa.
Multifractal Analysis of Asian Foreign Exchange Markets and Financial Crisis
NASA Astrophysics Data System (ADS)
Oh, Gabjin; Kwon, Okyu; Jung, Woo-Sung
2012-02-01
We analyze the multifractal spectra of daily foreign exchange rates for Japan, Hong Kong, Korea, and Thailand with respect to the United States dollar from 1991 to 2005. We find that the return time series show multifractal spectrum features in all four cases. To observe the effect of the Asian currency crisis, we also estimate the multifractal spectra of limited series before and after the crisis. We find that the Korean and Thai foreign exchange markets experienced a significant increase in multifractality compared to Hong Kong and Japan. We also show that the multifractality is strongly related to the presence of high values of returns in the series.
A multifractal analysis of Asian foreign exchange markets
NASA Astrophysics Data System (ADS)
Oh, G.; Eom, C.; Havlin, S.; Jung, W.-S.; Wang, F.; Stanley, H. E.; Kim, S.
2012-06-01
We analyze the multifractal spectra of daily foreign exchange rates for Japan, Hong Kong, Korea, and Thailand with respect to the United States dollar in the period from 1991 until 2005. We find that the return time series show multifractal spectrum features in all four cases. To observe the effect of the Asian currency crisis, we also estimate the multifractal spectra of limited series before and after the crisis. We find that the Korean and Thai foreign exchange markets experienced a significant increase in multifractality compared to Hong Kong and Japan. We also show that the multifractality is strongly related to the presence of high values of returns in the series.
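One common way to quantify the multifractality both abstracts describe is the generalized Hurst exponent h(q) from q-th order structure functions; a q-dependent h(q) signals multifractality, and heavy-tailed returns (large values) typically widen the spread. This is a generic estimator sketch, not necessarily the spectrum method used in these papers.

```python
import numpy as np

def generalized_hurst(x, qs=(1, 2, 3, 4), taus=range(1, 20)):
    """Generalized Hurst exponent h(q) from the scaling of q-th order
    structure functions: E|x(t+tau) - x(t)|^q ~ tau^(q*h(q)).
    An h(q) that varies with q signals multifractality; a constant h(q)
    indicates a monofractal series.
    """
    x = np.asarray(x, dtype=float)
    h = {}
    for q in qs:
        logs, logt = [], []
        for tau in taus:
            inc = np.abs(x[tau:] - x[:-tau])
            logs.append(np.log(np.mean(inc ** q)))
            logt.append(np.log(tau))
        slope, _ = np.polyfit(logt, logs, 1)
        h[q] = slope / q
    return h

# cumulative sum of heavy-tailed returns as a toy "log exchange rate"
returns = np.random.standard_t(df=3, size=20000)
print(generalized_hurst(np.cumsum(returns)))
```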
Prediction possibilities of Arosa total ozone
NASA Astrophysics Data System (ADS)
Kane, R. P.
1987-01-01
Using the periodicities obtained by a Maximum Entropy Spectral Analysis (MESA) of the Arosa total ozone (CC') series for 1932-1971, the values predicted for 1972 onwards were compared with the observed values of the (AD) series. A change of level was noticed, with the observed (AD) values lower by about 7 D.U. Also, the matching was poor in 1980, 1981, and 1982. In the monthly values, the most prominent periodicity was the annual wave, comprising some 80% of the variance. In the 12-month running averages, the annual wave was eliminated and the most prominent periodicity was T = 3.7 years, encompassing roughly 20% of the variance. This and other periodicities at T = 4.7, 5.4, 6.2, 10 and 16 years were all statistically significant at a 3.5σ a priori (i.e., 2σ a posteriori) level. However, the predictions from these were unsatisfactory, probably because some of these periodicities may be transient, i.e., changing amplitudes and/or phases with time. Thus, no meaningful prediction seems possible for Arosa total ozone.
An effective approach for gap-filling continental scale remotely sensed time-series
Weiss, Daniel J.; Atkinson, Peter M.; Bhatt, Samir; Mappin, Bonnie; Hay, Simon I.; Gething, Peter W.
2014-01-01
The archives of imagery and modeled data products derived from remote sensing programs with high temporal resolution provide powerful resources for characterizing inter- and intra-annual environmental dynamics. The impressive depth of available time-series from such missions (e.g., MODIS and AVHRR) affords new opportunities for improving data usability by leveraging spatial and temporal information inherent to longitudinal geospatial datasets. In this research we develop an approach for filling gaps in imagery time-series that result primarily from cloud cover, which is particularly problematic in forested equatorial regions. Our approach consists of two complementary gap-filling algorithms and a variety of run-time options that allow users to balance the competing demands of model accuracy and processing time. We applied the gap-filling methodology to MODIS Enhanced Vegetation Index (EVI) and daytime and nighttime Land Surface Temperature (LST) datasets for the African continent for 2000–2012, with a 1 km spatial resolution and an 8-day temporal resolution. We validated the method by introducing and filling artificial gaps, and then comparing the original data with model predictions. Our approach achieved R2 values above 0.87 even for pixels within 500 km wide introduced gaps. Furthermore, the structure of our approach allows estimation of the error associated with each gap-filled pixel based on the distance to the non-gap pixels used to model its fill value, thus providing a mechanism for including uncertainty associated with the gap-filling process in downstream applications of the resulting datasets. PMID:25642100
Devils Hole, Nevada, δ18O record extended to the mid-Holocene
Winograd, Isaac J.; Landwehr, Jurate M.; Coplen, Tyler B.; Sharp, Warren D.; Riggs, Alan C.; Ludwig, Kenneth R.; Kolesar, Peter T.
2006-01-01
The mid-to-late Pleistocene Devils Hole δ18O record has been extended from 60,000 to 4500 yr ago. The new δ18O time series, in conjunction with the one previously published, is shown to be a proxy of Pacific Ocean sea surface temperature (SST) off the coast of California. During marine oxygen isotope stages (MIS) 2 and 6, the Devils Hole and SST time series exhibit a steady warming that began 5000 to >10,000 yr prior to the last and penultimate deglaciations. Several possible proximate causes for this early warming are evaluated. The magnitude of the peak δ18O or SST during the last interglacial (LIG) is significantly greater (1 per mil and 2 to 3°C, respectively) than the peak value of these parameters for the Holocene; in contrast, benthic δ18O records of ice volume show only a few tenths per mil difference in the peak value for these interglacials. Statistical analysis provides an estimate of the large shared information (variation) between the Devils Hole and Eastern Pacific SST time series from ∼41° to ∼2°N and reinforces the concept of a common forcing among all of these records. The extended Devils Hole record adds to evidence of the importance of uplands bordering the eastern Pacific as a source of archives for reconstructing Pacific climate variability.
Fast Fourier Transformation Algorithms: Experiments with Microcomputers.
1986-07-01
… functions with a known discrete Fourier transform. Such functions are given in [1]. The functions TF1, TF2, and TF3 were used and are … the IBM PC, all with TF1 (Eq. 1). The compilers provided options to improve performance, as noted, for which a penalty in compiling time has to be … BASIC only. Series I: in this series the procedures were as follows: (i) calculate the input values for TF1 of ar and the modulus |ar| (which is …
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the ±10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rates for the three methods were 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods: its predictions lay within the ±10 percent interval of the real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in planning blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
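As a sketch of fitting the seasonal model the study selected, the code below fits a SARIMA(0,1,1)(0,1,1)12 to a synthetic monthly demand series with statsmodels and scores 12-month forecasts by the paper's ±10 percent criterion. The synthetic data and the scoring step are illustrative assumptions, not the hospital's records.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# synthetic monthly RBC demand with a mild trend and annual seasonality
idx = pd.date_range("1988-01", "2002-12", freq="MS")
demand = (1000 + 0.5 * np.arange(len(idx))
          + 80 * np.sin(2 * np.pi * idx.month / 12)
          + np.random.randn(len(idx)) * 30)
y = pd.Series(demand, index=idx)

train, test = y[:-12], y[-12:]
model = SARIMAX(train, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=12)

# fraction of forecast months within ±10% of the "real" demand
within_10pct = (np.abs(forecast - test) / test <= 0.10).mean()
print(f"months within ±10%: {within_10pct:.0%}")
```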
Tourism demand in the Algarve region: Evolution and forecast using SVARMA models
NASA Astrophysics Data System (ADS)
Lopes, Isabel Cristina; Soares, Filomena; Silva, Eliana Costa e.
2017-06-01
Tourism is one of the Portuguese economy's key sectors, and its relative weight has grown over recent years. The Algarve region is particularly focused on attracting foreign tourists and has built up over the years a large and diversified offer of hotel units. In this paper we present a multivariate time-series approach to forecast the number of overnight stays in hotel units (hotels, guesthouses or hostels, and tourist apartments) in the Algarve. We fit a seasonal vector autoregressive moving average (SVARMA) model to monthly data between 2006 and 2016. The forecast values were compared with the actual overnight stays in the Algarve in 2016, yielding a MAPE of 15.1% and an RMSE of 53,847.28. The MAPE for the Hotel series was only 4.56%. These forecasts can be used by hotel managers to predict occupancy and determine the best pricing policy.
NASA Astrophysics Data System (ADS)
Mizukami, N.; Smith, M. B.
2010-12-01
It is common for the error characteristics of long-term precipitation data to change over time due to various factors such as gauge relocation and changes in data processing methods. The temporal consistency of precipitation data error characteristics is as important as data accuracy itself for hydrologic model calibration and subsequent use of the calibrated model for streamflow prediction. In mountainous areas, the generation of precipitation grids relies on sparse gauge networks, the makeup of which often varies over time; this causes a change in the error characteristics of the long-term precipitation record. We discuss the diagnostic analysis of the consistency of gridded precipitation time series and illustrate the adverse effect of inconsistent precipitation data on a hydrologic model simulation. We used hourly 4 km gridded precipitation time series over a mountainous basin in the Sierra Nevada of California from October 1988 through September 2006. The basin is part of the broader study area that served as the focus of the second phase of the Distributed Model Intercomparison Project (DMIP-2), organized by the U.S. National Weather Service (NWS) of the National Oceanic and Atmospheric Administration (NOAA). To check the consistency of the gridded precipitation time series, double mass analysis was performed using single-pixel and basin mean areal precipitation (MAP) values derived from the gridded DMIP-2 and Parameter-Elevation Regressions on Independent Slopes Model (PRISM) precipitation data. The analysis leads to the conclusion that, over the entire study period, a clear change in the error characteristics of the DMIP-2 data occurred at the beginning of 2003, matching the timing of one of the major gauge network changes. The inconsistency of two MAP time series computed from the gridded precipitation fields over two elevation zones was corrected by adjusting hourly values based on the double mass analysis. We show that model simulations using the adjusted MAP data produce improved streamflow compared to simulations using the inconsistent MAP input data.
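A minimal sketch of double mass analysis, assuming a synthetic reference series and a test series with an inserted 20% catch change; the break-search helper double_mass_break is a simple illustration of locating the slope change in the cumulative-sum plot, not the NWS procedure.

```python
import numpy as np

def double_mass_break(series, reference):
    """Double mass analysis: compare cumulative sums of a test series against
    a reference; a persistent slope change flags an inconsistency. Returns
    the index of the largest slope change and the before/after slopes, whose
    ratio can be used to scale one segment onto the other."""
    cx = np.cumsum(reference)
    cy = np.cumsum(series)
    best_i, best_gap, slopes = None, 0.0, (np.nan, np.nan)
    for i in range(10, len(cx) - 10):   # ignore the unstable ends
        s1 = np.polyfit(cx[:i], cy[:i], 1)[0]
        s2 = np.polyfit(cx[i:], cy[i:], 1)[0]
        if abs(s1 - s2) > best_gap:
            best_gap, best_i, slopes = abs(s1 - s2), i, (s1, s2)
    return best_i, slopes

# reference gauge vs. a series whose catch drops 20% midway through
ref = np.random.gamma(2.0, 5.0, 400)
test = ref * np.where(np.arange(400) < 200, 1.0, 0.8)
i, (s1, s2) = double_mass_break(test, ref)
print(f"break near index {i}; slopes {s1:.2f} -> {s2:.2f}; ratio {s2 / s1:.2f}")
```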
Multifractal diffusion entropy analysis: Optimal bin width of probability histograms
NASA Astrophysics Data System (ADS)
Jizba, Petr; Korbel, Jan
2014-11-01
In the framework of Multifractal Diffusion Entropy Analysis we propose a method for choosing an optimal bin width in histograms generated from the underlying probability distributions of interest. The presented method uses Rényi entropy and mean-squared-error analysis to discuss the conditions under which the error in the multifractal spectrum estimation is minimal. We illustrate the utility of our approach by focusing on the scaling behavior of financial time series. In particular, we analyze the S&P 500 stock index as sampled at a daily rate over the period 1950-2013. To demonstrate the strength of the proposed method, we compare the multifractal δ-spectrum for various bin widths and show the robustness of the method, especially for large values of q. For such values, other methods in use, e.g., those based on moment estimation, tend to fail for heavy-tailed data or data with long correlations. The connection between the δ-spectrum and Rényi's q parameter is also discussed and elucidated with a simple example of multiscale time series.
Fractal Dimensions of Umbral and Penumbral Regions of Sunspots
NASA Astrophysics Data System (ADS)
Rajkumar, B.; Haque, S.; Hrudey, W.
2017-11-01
Images of sunspots in 16 active regions, taken at the University College of the Cayman Islands (UCCI) Observatory on Grand Cayman during June-November 2015, were used to determine the fractal dimensions of the umbral and penumbral regions using the perimeter-area method. Scale-free fractal dimensions of 2.09 ± 0.42 and 1.72 ± 0.4 were found, respectively. The penumbral value is higher than that determined by Chumak and Chumak (Astron. Astrophys. Trans. 10, 329, 1996), who used a similar method, but only for the penumbral regions of their sample set. The umbral and penumbral fractal dimensions for the specific sunspots are positively correlated, with r = 0.58. Furthermore, a similar time-series analysis was performed on eight images of AR 12403, from 21 August 2015 to 28 August 2015, taken from the Debrecen Photoheliographic Data (DPD). The correlation between the umbral and penumbral fractal dimensions in the time series is r = 0.623, indicating that the complexity in morphology, as measured by the fractal dimension, of the umbra and penumbra followed each other in time as well.
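The perimeter-area method the study uses reduces to a log-log regression across regions: P ~ A^(D/2), so D is twice the slope of log P on log A. Below is a minimal sketch under that assumption, with synthetic near-circular regions as a sanity check (smooth boundaries should give D close to 1); the inputs are illustrative, not the UCCI measurements.

```python
import numpy as np

def perimeter_area_dimension(areas, perimeters):
    """Perimeter-area fractal dimension: for irregular regions, P ~ A^(D/2),
    so D is twice the slope of log(perimeter) against log(area) across a
    sample of regions (e.g., umbrae or penumbrae measured in pixels)."""
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2 * slope

# sanity check: near-circular regions, P = 2*sqrt(pi*A), should give D ~ 1
a = np.random.uniform(50, 5000, 200)
p_smooth = 2 * np.sqrt(np.pi * a) * (1 + 0.05 * np.random.randn(200))
print(f"D (smooth boundaries) ~ {perimeter_area_dimension(a, p_smooth):.2f}")
```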
Analysis of Suspended-Sediment Dynamics in Gulf of Mexico Estuaries Using MODIS/Terra 250-m Imagery
NASA Astrophysics Data System (ADS)
McCarthy, M. J.; Otis, D. B.; Muller-Karger, F. E.; Mendez-Lazaro, P.; Chen, F. R.
2016-02-01
Suspended sediments in coastal ecosystems reduce light penetration, degrade water quality, and inhibit primary production. In this study, a 15-year Moderate Resolution Imaging Spectroradiometer (MODIS/Terra) turbidity time-series was developed for use in the estuaries of the Gulf of Mexico (GOM). Remote-sensing reflectance (Rrs) at 645 nm and 250-m resolution was validated with in-situ turbidity measurements in these estuaries: Coastal Bend Bays (TX), Galveston Bay (TX), Barataria and Terrebonne Bays (LA), Mobile Bay (AL), Tampa Bay (FL), Sarasota Bay (FL), and Charlotte Harbor (FL). Mean values of turbidity over the time-series ranged from 2.5 NTU to over 10 NTU. Turbidity patterns exhibited seasonal cycles with peak values generally found during spring months, although there is considerable variability in the timing of peak turbidity. The number of episodes of elevated turbidity ranged from 6 in Galveston Bay to 15 in Mobile Bay. The spatial extent of elevated turbidity within estuaries, the frequency and duration of turbidity events, and potential driving factors behind episodes of elevated turbidity were also examined.
A statistical analysis of flank eruptions on Etna volcano
NASA Astrophysics Data System (ADS)
Mulargia, Francesco; Tinti, Stefano; Boschi, Enzo
1985-02-01
A singularly complete record exists for the eruptive activity of Etna volcano. The time series of occurrence of flank eruptions in the period 1600-1980, in which the record is presumably complete, is found to follow a stationary Poisson process. A revision of the available data shows that eruption durations are rather well correlated with estimates of the volume of lava flows, which implies that the magnitude of an eruption can be defined directly by its duration. Extreme value statistics are then applied to the time series, using duration as the dependent variable. The probability of occurrence of a very long (300-day) eruption is greater than 50% only in time intervals of the order of 50 years. The correlation found between duration and total output also allows estimation of the probability of occurrence of a major event that exceeds a given duration and total flow of lava. The composite probabilities do not differ considerably from the pure ones. Paralleling a well-established application to seismic events, extreme value theory can be profitably used in volcanic risk estimates, provided that appropriate account is also taken of all other variables.
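As a hedged sketch of the extreme-value step, the code below fits a Gumbel distribution to block maxima of a toy eruption-duration catalogue and evaluates an exceedance probability; the Poisson rate and exponential durations are illustrative assumptions, not Etna's actual statistics, and the block-maxima framing is one standard way to apply extreme value theory, not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# toy catalogue: a stationary Poisson process of eruptions with
# exponentially distributed durations (days); values are illustrative
years = 380
rate = 0.16                                   # eruptions per year
durations = rng.exponential(60, size=rng.poisson(rate * years))

# extreme value step: fit a Gumbel law to block maxima of duration
blocks = np.array_split(durations, 19)        # roughly 20-year blocks
maxima = [b.max() for b in blocks if len(b)]
loc, scale = stats.gumbel_r.fit(maxima)

p_exceed = 1 - stats.gumbel_r.cdf(300, loc, scale)
print(f"P(20-yr maximum duration > 300 days) ~ {p_exceed:.2f}")
```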
NASA Astrophysics Data System (ADS)
Fusilli, L.; Collins, M. O.; Laneve, G.; Palombo, A.; Pignatti, S.; Santini, F.
2013-02-01
The objective of this research study is to assess the capability of time series of MODIS imagery to provide information suitable for enhancing the understanding of the temporal cycles shown by the abnormal growth of floating macrophytes, in order to support monitoring and management of Lake Victoria water resources. The proliferation of invasive plants and aquatic weeds is of growing concern. Starting from 1989, Lake Victoria has been affected by heavy infestation of water hyacinth, with significant socio-economic impact on riparian populations. In this paper, we describe an approach based on the MODIS time series to derive the temporal behaviour, abundance, and distribution of floating macrophytes in the Winam Gulf (the Kenyan portion of Lake Victoria) and its possible links to the concentrations of the main water constituents. To this end, we consider the NDVI values computed from the MODIS imagery time series from 2000 to 2009 to identify the floating macrophyte cover, and an appropriate bio-optical model to retrieve, by means of an inverse procedure, the concentrations of chlorophyll a, coloured dissolved organic matter, and total suspended solids. The maps of floating vegetation based on the NDVI values allow us to assess the spatial and temporal dynamics of the weeds with high time resolution. A floating vegetation index (FVI) has been introduced to describe the weed pollution level. The results of the analysis show a consistent temporal relation between the water constituent concentrations within the Winam Gulf and the FVI, especially in the proximity of the greatest proliferation of floating vegetation in the last 10 years, which occurred between the second half of 2006 and the first half of 2007. The adopted approach will be useful to implement an automatic system for monitoring and predicting floating macrophyte proliferation in Lake Victoria.
The spectra and periodograms of anti-correlated discrete fractional Gaussian noise.
Raymond, G M; Percival, D B; Bassingthwaighte, J B
2003-05-01
Discrete fractional Gaussian noise (dFGN) has been proposed as a model for interpreting a wide variety of physiological data. The form of the actual spectra of dFGN for frequencies near zero varies as f^(1-2H), where 0 < H < 1 is the Hurst coefficient; however, this form for the spectra need not be a good approximation at other frequencies. When H approaches zero, dFGN spectra exhibit the 1 - 2H power-law behavior only over a range of low frequencies that is vanishingly small. When dealing with a time series of finite length drawn from a dFGN process with unknown H, practitioners must deal with estimated spectra in lieu of actual spectra. The most basic spectral estimator is the periodogram. The expected value of the periodogram for dFGN with small H also exhibits non-power-law behavior. At the lowest Fourier frequencies associated with a time series of N values sampled from a dFGN process, the expected value of the periodogram for H approaching zero varies as f^0 rather than f^(1-2H). For finite N and small H, the expected value of the periodogram can in fact exhibit a local power-law behavior with a spectral exponent of 1 - 2H at only two distinct frequencies.
MODIS Interactive Subsetting Tool (MIST)
NASA Astrophysics Data System (ADS)
McAllister, M.; Duerr, R.; Haran, T.; Khalsa, S. S.; Miller, D.
2008-12-01
In response to requests from the user community, NSIDC has teamed with the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) and the Moderate Resolution Data Center (MrDC) to provide time-series subsets of satellite data covering stations in the Greenland Climate Network (GC-Net) and the International Arctic Systems for Observing the Atmosphere (IASOA) network. To serve these data NSIDC created the MODIS Interactive Subsetting Tool (MIST). MIST works with 7 km by 7 km subset time series of certain Version 5 (V005) MODIS products over GC-Net and IASOA stations. User-selected data are delivered in a text Comma Separated Value (CSV) file format. MIST also provides online analysis capabilities that include generating time-series and scatter plots. Currently, MIST is a Beta prototype and NSIDC intends that user requests will drive future development of the tool. The intent of this poster is to introduce MIST to the MODIS data user audience and illustrate some of its online analysis capabilities.
Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling
Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...
2014-07-14
Time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies be orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models, and the parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation, detection of neoclassical tearing modes including locked mode precursors, automatic clustering of modes, and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations, and fishbones.
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power-law-type efflux terms. Iterative calculation using this method was found to converge readily, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of the smoothing curves are useful in estimating parameters, because they are probably close to the true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values.
NASA Astrophysics Data System (ADS)
Machiwal, Deepesh; Kumar, Sanjay; Dayal, Devi
2016-05-01
This study aimed at characterizing rainfall dynamics in a hot arid region of Gujarat, India, by employing time-series modeling techniques and a sustainability approach. Five characteristics, i.e., normality, stationarity, homogeneity, presence/absence of trend, and persistence, of the 34-year (1980-2013) annual rainfall time series of ten stations were identified/detected by applying multiple parametric and non-parametric statistical tests. Furthermore, the study proposes a sustainability concept for evaluating rainfall time series and demonstrates the concept, for the first time, by identifying the most sustainable rainfall series following the reliability (Ry), resilience (Re), and vulnerability (Vy) approach. Box-whisker plots, normal probability plots, and histograms indicated that the annual rainfall of Mandvi and Dayapar stations is relatively more positively skewed and non-normal compared with that of other stations, which is due to the presence of severe outliers and extremes. Results of the Shapiro-Wilk and Lilliefors tests revealed that the annual rainfall series of all stations deviate significantly from a normal distribution. Two parametric t tests and the non-parametric Mann-Whitney test indicated significant non-stationarity in the annual rainfall of Rapar station, where the rainfall was also found to be non-homogeneous based on the results of four parametric homogeneity tests. Four trend tests indicated significantly increasing rainfall trends at Rapar and Gandhidham stations. The autocorrelation analysis suggested the presence of statistically significant persistence in the rainfall series of Bhachau (3-year time lag), Mundra (1- and 9-year time lags), Nakhatrana (9-year time lag), and Rapar (3- and 4-year time lags). Results of the sustainability approach indicated that the annual rainfall of Mundra and Naliya stations (Ry = 0.50 and 0.44; Re = 0.47 and 0.47; Vy = 0.49 and 0.46, respectively) is the most sustainable and dependable compared with that of other stations. The highest values of the sustainability index at Mundra (0.120) and Naliya (0.112) stations confirmed the earlier findings of the Ry-Re-Vy approach. In general, the annual rainfall of the study area is less reliable, less resilient, and moderately vulnerable, which emphasizes the need to develop suitable strategies for managing the water resources of the area on a sustainable basis. Finally, it is recommended that multiple statistical tests (at least two) be used in time-series modeling for making reliable decisions. The methodology and findings of the sustainability concept in rainfall time series can easily be adopted in other arid regions of the world.
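A minimal sketch of the Ry-Re-Vy computation, assuming a threshold at the long-term mean and a geometric-mean sustainability index; the rrv helper and the synthetic rainfall values are illustrative, and the exact definitions (especially the normalization of vulnerability and the index formula) vary across studies.

```python
import numpy as np

def rrv(series, threshold):
    """Reliability-resilience-vulnerability of an annual series against a
    threshold (e.g., the long-term mean rainfall). Reliability is the
    fraction of satisfactory years; resilience the chance a failure year is
    followed by a satisfactory one; vulnerability the mean shortfall in
    failure years, normalized here by the threshold."""
    ok = series >= threshold
    fail = ~ok
    reliability = ok.mean()
    if fail[:-1].sum() > 0:
        resilience = (ok[1:] & fail[:-1]).sum() / fail[:-1].sum()
    else:
        resilience = 1.0
    vulnerability = ((threshold - series[fail]).mean() / threshold
                     if fail.any() else 0.0)
    return reliability, resilience, vulnerability

rain = np.random.gamma(4.0, 100.0, 34)        # 34 years of annual rainfall
rel, res, vul = rrv(rain, threshold=rain.mean())
index = (rel * res * (1 - vul)) ** (1 / 3)    # one common sustainability index
print(f"Ry={rel:.2f}  Re={res:.2f}  Vy={vul:.2f}  index={index:.3f}")
```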
Streamflow properties from time series of surface velocity and stage
Plant, W.J.; Keller, W.C.; Hayes, K.; Spicer, K.
2005-01-01
Time series of surface velocity and stage have been collected simultaneously. Surface velocity was measured using an array of newly developed continuous-wave microwave sensors. Stage was obtained from the standard U.S. Geological Survey (USGS) measurements. The depth of the river was measured several times during our experiments using sounding weights. The data clearly showed that the point of zero flow was not the bottom at the measurement site, indicating that a downstream control exists. Fathometer measurements confirmed this finding. A model of the surface velocity expected at a site having a downstream control was developed. The model showed that the standard form for the friction velocity does not apply to sites where a downstream control exists. This model fit our measured surface velocity versus stage plots very well with reasonable parameter values. Discharges computed using the surface velocities and measured depths matched the USGS rating curve for the site. Values of depth-weighted mean velocities derived from our data did not agree with those expected from Manning's equation, due to the downstream control. These results suggest that if real-time surface velocities were available at a gauging station, unstable stream beds could be monitored.
Size distribution of Parkfield’s microearthquakes reflects changes in surface creep rate
Tormann, Theresa; Wiemer, Stefan; Metzger, Sabrina; Michael, Andrew J.; Hardebeck, Jeanne L.
2013-01-01
The nucleation area of the series of M6 events at Parkfield has been shown to be characterized by low b-values throughout the seismic cycle. Since low b-values represent high differential stresses, the asperity structure seems to be stably stressed at all times, and even unaffected by the latest main shock in 2004. However, because fault loading rates and applied shear stress vary with time, some degree of temporal variability of the b-value within stable blocks is to be expected. In this study we discuss adequate techniques and uncertainty treatment for a detailed analysis of the temporal evolution of b-values. We show that the derived signal for the Parkfield asperity correlates with changes in surface creep, suggesting a sensitive time resolution of the b-value stress meter and confirming near-critical loading conditions within the Parkfield asperity.
Improving flash flood frequency analyses by using non-systematic dendrogeomorphic data
NASA Astrophysics Data System (ADS)
Mediero, Luis; María Bodoque, Jose; Garrote, Julio; Ballesteros-Cánovas, Juan Antonio; Aroca-Jimenez, Estefania
2017-04-01
Flash floods have a rapid hydrological response in catchments with short lag times, characterized by "peaky" hydrographs. Peak flows are reached within a few hours, giving little or no advance warning to prevent and mitigate flood damage. As a result, flash floods may carry a high social risk, as shown for instance by the 1997 Biescas disaster in Spain. The analysis and management of flood risk are clearly conditioned by data availability, especially in the mountain areas where flash floods usually occur. In mountain basins, however, the available data series are often short and therefore lacking in statistical significance. In addition, even when flow data are available, annual maximum values are generally not as reliable as average flow values, since conventional stream gauge stations may not record the extreme floods, leading to gaps in the time series. Dendrogeomorphology has been shown to be especially useful for improving flood frequency analyses in catchments where short flood series limit the use of conventional hydrological methods. This study presents the pros and cons of using a given probability distribution function, such as the Generalized Extreme Value (GEV) distribution, and Bayesian Markov Chain Monte Carlo (MCMC) methods to account for non-systematic data provided by dendrogeomorphic techniques, in order to assess the accuracy of flood quantile estimates. To this end, we consider a set of locations in Central Spain where the systematic flow record available at a gauging site can be extended with non-systematic data obtained from the implementation of dendrogeomorphic techniques.
NASA Astrophysics Data System (ADS)
Sandric, Ionut; Onose, Diana; Vanau, Gabriel; Ioja, Cristian
2016-04-01
The present study focuses on the identification of the urban heat island in Bucharest using both remote sensing products and low-cost temperature sensors. The urban heat island in Bucharest was analyzed through a network of sensors located at 56 points (47 inside the administrative boundary of the city, 9 outside) during 2009-2011. The network progressively lost its initial density, but was re-formed during a new phase in 2013-2015. Time series of MODIS satellite images were intersected with the sensor records for both phases. Statistical analyses were conducted to identify the temporal and spatial patterns of extreme temperatures in Bucharest. Several environmental factors, such as albedo and the presence or absence of vegetation, were used to fit a regression model between MODIS satellite products and the sensors in order to upscale the temperature values recorded by MODIS. For Bucharest, local environmental conditions proved to play an important role for air temperature values in urban environments, leading to differences in air temperature at the city scale of 3-5 °C (both in summer and in winter). The UHI maps show a good correlation with the presence of green areas. Differences in air temperature between areas of higher tree density and isolated trees can reach much higher values; averages over 24-h periods are still in the 3-5 °C range. The results have been obtained within the project UCLIMESA (Urban Heat Island Monitoring under Present and Future Climate), ongoing between 2013 and 2015 in the framework of the Programme for Research-Development-Innovation for Space Technology and Advanced Research (STAR), administered by the Romanian Space Agency.
31 CFR 359.55 - How are redemption values calculated for book-entry Series I savings bonds?
Code of Federal Regulations, 2011 CFR
2011-07-01
... for book-entry Series I savings bonds? 359.55 Section 359.55 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES I Book-Entry Series I Savings Bonds § 359.55 How are redemption values calculated for book-entry Series I savings bonds? We base current redemption...
31 CFR 351.70 - How are redemption values calculated for book-entry Series EE savings bonds?
Code of Federal Regulations, 2010 CFR
2010-07-01
... for book-entry Series EE savings bonds? 351.70 Section 351.70 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Book-Entry Series EE Savings Bonds § 351.70 How are redemption values calculated for book-entry Series EE savings bonds? We base current redemption...
31 CFR 351.70 - How are redemption values calculated for book-entry Series EE savings bonds?
Code of Federal Regulations, 2011 CFR
2011-07-01
... for book-entry Series EE savings bonds? 351.70 Section 351.70 Money and Finance: Treasury Regulations... DEBT OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE Book-Entry Series EE Savings Bonds § 351.70 How are redemption values calculated for book-entry Series EE savings bonds? We base current redemption...
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
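A minimal sketch of the block-wise idea, assuming NumPy's Chebyshev utilities: each fitting interval is mapped to [-1, 1], a short Chebyshev series is fitted, and only the coefficients are stored. Note that chebfit is a least-squares fit, used here as a convenient stand-in for the min-max (equal-error) approximation the article describes.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress(block, n_coeffs):
    """Fit a Chebyshev series of n_coeffs terms over one fitting interval;
    the coefficients are the compressed representation of the block."""
    x = np.linspace(-1, 1, len(block))     # map the fitting interval to [-1, 1]
    return C.chebfit(x, block, deg=n_coeffs - 1)

def decompress(coeffs, n_samples):
    """Evaluate the stored Chebyshev series back onto the sample grid."""
    x = np.linspace(-1, 1, n_samples)
    return C.chebval(x, coeffs)

# 256 samples represented by 16 coefficients: a 16:1 compression factor
t = np.linspace(0, 1, 256)
block = np.sin(12 * t) + 0.5 * np.cos(40 * t ** 2)
coeffs = compress(block, 16)
recon = decompress(coeffs, 256)
print(f"ratio 16:1, max |error| = {np.abs(recon - block).max():.4f}")
```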
Non-linear motions in reprocessed GPS station position time series
NASA Astrophysics Data System (ADS)
Rudenko, Sergei; Gendt, Gerd
2010-05-01
Global Positioning System (GPS) data from about 400 globally distributed stations, spanning 1998 to 2007, were reprocessed using the GFZ Potsdam EPOS (Earth Parameter and Orbit System) software within the International GNSS Service (IGS) Tide Gauge Benchmark Monitoring (TIGA) Pilot Project and the IGS Data Reprocessing Campaign, in order to determine precise weekly coordinates of GPS stations located at or near tide gauges. Vertical motions of these stations are used to correct the vertical motions of tide gauges for local motions and to tie tide gauge measurements to the geocentric reference frame. Other estimated parameters include daily values of the Earth rotation parameters and their rates, as well as satellite antenna offsets. The derived solution, GT1, is based on an absolute phase-center variation model, ITRF2005 as the a priori reference frame, and other new models; it also contributed to ITRF2008. The time series of station positions are analyzed to identify non-linear motions caused by different effects. The paper presents the time series of GPS station coordinates and investigates apparent non-linear motions and their influence on GPS station height rates.
SHARPs - A Near-Real-Time Space Weather Data Product from HMI
NASA Astrophysics Data System (ADS)
Bobra, M.; Turmon, M.; Baldner, C.; Sun, X.; Hoeksema, J. T.
2012-12-01
A data product from the Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO), called Space-weather HMI Active Region Patches (SHARPs), is now available through the SDO Joint Science Operations Center (JSOC) and the Virtual Solar Observatory. SHARPs are magnetically active regions identified on the solar disk and tracked automatically in time. SHARP data are processed within a few hours of the observation time. The SHARP data series contains active-region-sized disambiguated vector magnetic field data in both Lambert Cylindrical Equal-Area and CCD coordinates on a 12-minute cadence. The series also provides simultaneous HMI maps of the line-of-sight magnetic field, continuum intensity, and velocity on the same ~0.5 arc-second pixel grid. In addition, the SHARP data series provides space weather quantities computed on the inverted, disambiguated, and remapped data. The values for each tracked region are computed and updated in near real time. We present space weather results for several X-class flares; furthermore, we compare these space weather quantities with helioseismic quantities calculated using ring-diagram analysis.
NASA Astrophysics Data System (ADS)
Ausloos, M.
2012-09-01
A nonlinear dynamics approach can be used to quantify complexity in written texts. As a first step, a one-dimensional system is examined: two written texts by one author (Lewis Carroll), together with a translation of one of them into an artificial language (Esperanto), are mapped into time series. Their shuffled versions are used to obtain a baseline. Two different one-dimensional time series are used here: one based on word lengths (LTS), the other on word frequencies (FTS). It is shown that the generalized Hurst exponent h(q) and the derived f(α) curves of the original and translated texts show marked differences. The original texts are far from giving a parabolic f(α) function, in contrast to the shuffled texts. Moreover, the Esperanto text has more extreme values. This suggests that finally written texts are cascade-model-like, with multiscale, time-asymmetric features. A discussion of the difference and complementarity of mapping into an LTS or FTS is presented. The FTS f(α) curves are more open than the LTS ones.
Quantitative Assessment of Arrhythmia Using Non-linear Approach: A Non-invasive Prognostic Tool
NASA Astrophysics Data System (ADS)
Chakraborty, Monisha; Ghosh, Dipak
2017-12-01
An accurate prognostic tool to identify the severity of arrhythmia is yet to be developed, owing to the complexity of the ECG signal. In this paper, we have shown that quantitative assessment of arrhythmia is possible using a non-linear technique based on Hurst rescaled range analysis. Although the concept of applying non-linearity to the study of various cardiac dysfunctions is not entirely new, the novel objective of this paper is to identify the severity of the disease, to monitor different medicines and their doses, and to assess the efficiency of different medicines. The approach presented in this work is simple, which in turn will help doctors in efficient disease management. In this work, arrhythmia ECG time series are collected from the MIT-BIH database, and normal ECG time series are acquired using the POLYPARA system. Both types of time series are analyzed in the light of a non-linear approach following the rescaled range analysis method. The quantitative parameter, the fractal dimension (D), is obtained from both types of time series. The major finding is that arrhythmia ECG exhibits lower values of D than normal ECG. This information can be used to assess the severity of arrhythmia quantitatively, which is a new direction for prognosis, and adequate software may be developed for use in medical practice.
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics provide useful information only about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components, and the reconstruction back into time series, provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool …
The Excel file contains time-series data on flow rates and concentrations of alachlor, atrazine, ammonia, total phosphorus, and total suspended solids observed in two watersheds in Indiana from 2002 to 2007. The aggregate time-series data corresponding to all these parameters were obtained using a specialized, data-driven technique. The aggregate data are hypothesized in the published paper to represent the overall health of both watersheds with respect to various potential water quality impairments. The time-series data for each of the individual water quality parameters were used to compute the corresponding risk measures (Rel, Res, and Vul) reported in Tables 4 and 5. The aggregation of the risk measures, computed from the aggregate time series and the water quality standards in Table 1, is also reported in Tables 4 and 5 of the published paper. Values under the column heading "uncertainty" report uncertainties associated with the reconstruction of missing records of the water quality parameters. Long-term records of the water quality parameters were reconstructed in order to estimate the reliability-resilience-vulnerability (R-R-V) and corresponding aggregate risk measures. This dataset is associated with the following publication: Hoque, Y., S. Tripathi, M. Hantush, and R. Govindaraju. Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty. Journal of Environmental Quality, American Society of Agronomy, Madison, WI.
Risk assessment of environmentally influenced airway diseases based on time-series analysis.
Herbarth, O
1995-09-01
Threshold values are of prime importance in providing a sound basis for public health decisions. A key issue is determining threshold or maximum exposure values for pollutants and assessing their potential health risks. Environmental epidemiology can be instrumental in assessing these levels, especially since the assessment of ambient exposures involves relatively low concentrations of pollutants. This paper presents a statistical method that allows the determination of threshold values as well as the assessment of the associated risk, using a retrospective, longitudinal study design with a prospective follow-up. Morbidity data were analyzed using the Fourier method, a time-series analysis that assumes a high temporal resolution of the data; this method eliminates time-dependent effects such as temporal inhomogeneity and pseudocorrelation. The frequency of calls for respiratory distress conditions to the regional Mobile Medical Emergency Service (MMES) in the city of Leipzig was investigated, with the entire population of Leipzig serving as the pool for data collection. In addition to the collection of morbidity data, air pollution measurements were taken every 30 min for the entire study period, using sulfur dioxide as the regional indicator variable. This approach allowed the calculation of a dose-response curve for respiratory diseases and air pollution indices in children and adults. Significantly higher morbidities were observed above a 24-hr mean value of 0.6 mg SO2/m3 air for children and 0.8 mg SO2/m3 for adults. (ABSTRACT TRUNCATED AT 250 WORDS)
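The Python sketch below is a toy illustration of the general idea behind such Fourier-based screening, not the paper's actual method: low-frequency Fourier components (shared slow seasonal cycles) are removed from a synthetic morbidity series, and the residual excess morbidity is binned by 24-hr mean SO2 to reveal the exposure level above which morbidity rises. All series, coefficients, and the 0.6 mg/m3 threshold built into the simulation are invented for demonstration.

```python
import numpy as np

def remove_seasonal_fft(y, n_harmonics=3):
    """Zero the DC term and the first few Fourier harmonics so that
    shared slow cycles do not confound the exposure-morbidity link."""
    Y = np.fft.rfft(y - y.mean())
    Y[:n_harmonics + 1] = 0
    return np.fft.irfft(Y, n=len(y))

# Synthetic two-year daily data: 24-h mean SO2 (mg/m3) and emergency calls,
# with a built-in morbidity excess above 0.6 mg/m3.
rng = np.random.default_rng(0)
days = np.arange(730)
so2 = 0.4 + 0.3 * np.sin(2 * np.pi * days / 365) + rng.gamma(2.0, 0.1, days.size)
calls = (5 + 4 * np.sin(2 * np.pi * days / 365)
         + 8 * np.clip(so2 - 0.6, 0, None) + rng.poisson(2, days.size))

resid = remove_seasonal_fft(calls.astype(float))

# Crude dose-response: mean residual morbidity per exposure bin; a sustained
# rise marks the apparent threshold.
bins = np.arange(0.0, 1.4, 0.2)
idx = np.digitize(so2, bins)
for b in range(1, len(bins)):
    sel = idx == b
    if sel.any():
        print(f"SO2 [{bins[b-1]:.1f}, {bins[b]:.1f}) mg/m3: "
              f"mean excess calls = {resid[sel].mean():+.2f}")
```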
Autoregressive modeling for the spectral analysis of oceanographic data
NASA Technical Reports Server (NTRS)
Gangopadhyay, Avijit; Cornillon, Peter; Jackson, Leland B.
1989-01-01
Over the last decade there has been a dramatic increase in the number and volume of data sets useful for oceanographic studies. Many of these data sets consist of long temporal or spatial series derived from satellites and large-scale oceanographic experiments. These data sets are, however, often 'gappy' in space, irregular in time, and always of finite length. The conventional Fourier transform (FT) approach to spectral analysis is thus often inapplicable or, where applicable, provides questionable results. Here, through comparative analysis with the FT on different oceanographic data sets, the possibilities offered by autoregressive (AR) modeling for the spectral analysis of gappy, finite-length series are discussed. The applications demonstrate that as the length of the time series becomes shorter, the resolving power of the AR approach improves relative to that of the FT. For the longest data sets examined here, 98 points, the AR method performed only slightly better than the FT, but for the very short ones, 17 points, the AR method showed a dramatic improvement over the FT. The application of the AR method to a gappy time series, although a secondary concern of this manuscript, further underlines the value of this approach.
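For readers who want to reproduce the comparison, the sketch below is a standard Yule-Walker AR spectral estimator in Python, not the authors' implementation: sample autocovariances are fit to an AR(p) model via a Toeplitz solve, and the power spectral density follows as sigma² / |1 - Σ a_k e^(-2πifk)|². The 17-point test series, model order, and frequency grid are illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_spectrum(x, order, freqs):
    """Yule-Walker AR(p) fit, then PSD(f) = sigma2 / |1 - sum_k a_k e^{-2pi i f k}|^2."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # Biased autocovariance estimates r[0..p].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    a = solve_toeplitz(r[:-1], r[1:])      # AR coefficients a_1..a_p
    sigma2 = r[0] - np.dot(a, r[1:])       # innovation (driving-noise) variance
    k = np.arange(1, order + 1)
    denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a) ** 2
    return sigma2 / denom

# A 17-point series, too short for a useful periodogram:
rng = np.random.default_rng(2)
t = np.arange(17)
x = np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.standard_normal(17)
f = np.linspace(0.01, 0.5, 200)
psd = ar_spectrum(x, order=4, freqs=f)
print("AR spectral peak at f =", f[np.argmax(psd)], "cycles/sample (true value 0.2)")
```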
Analysis of temperature trends in Northern Serbia
NASA Astrophysics Data System (ADS)
Tosic, Ivana; Gavrilov, Milivoj; Unkašević, Miroslava; Marković, Slobodan; Petrović, Predrag
2017-04-01
An analysis of air temperature trends in Northern Serbia for the annual and seasonal time series is performed for two periods: 1949-2013 and 1979-2013. Three data sets of surface air temperatures, monthly mean, monthly maximum, and monthly minimum temperatures, are analyzed at 9 stations with altitudes between 75 m and 102 m. Monthly mean temperatures are obtained as the average of the daily mean temperatures, while monthly maximum (minimum) temperatures are the maximum (minimum) values of the daily temperatures in the corresponding month. Positive trends were found in 29 of the 30 time series; the only negative trend was in winter during the period 1979-2013. Applying the Mann-Kendall test, significant positive trends were found in 15 series (7 in the period 1949-2013 and 8 in the period 1979-2013), and no significant trend was found in the remaining 15 series. Significant positive trends dominate the annual, spring, and summer series, where they were found in 14 of 18 cases. Significant positive trends were found 7, 5, and 3 times in the mean, maximum, and minimum temperatures, respectively. Overall, positive temperature trends are dominant in Northern Serbia.
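The Mann-Kendall test used above is fully specified by its S statistic and a normal approximation; below is a generic Python implementation (omitting the tie correction that the full test includes) applied to a hypothetical annual-mean series of the same length as the 1949-2013 record.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x, alpha=0.05):
    """Nonparametric Mann-Kendall trend test (no tie correction);
    returns the S statistic, standardized Z, and a significance flag."""
    x = np.asarray(x, float)
    n = len(x)
    # S = number of increasing pairs minus number of decreasing pairs.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    significant = abs(z) > norm.ppf(1 - alpha / 2)  # two-sided test
    return s, z, significant

# Hypothetical 65-year annual mean temperature series with a mild warming trend:
rng = np.random.default_rng(3)
temps = 10.5 + 0.02 * np.arange(65) + 0.5 * rng.standard_normal(65)
print(mann_kendall(temps))
```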