Time Series Spectroscopic and Photometric Observations of the Massive DAV BPM 37093
NASA Astrophysics Data System (ADS)
Nitta, Atsuko; Kepler, S. O.; Chene, Andre-Nicolas; Koester, D.; Provencal, J. L.; Sullivan, D. J.; Chote, Paul; Safeko, Ramotholo; Kanaan, Antonio; Romero, Alejandra; Corti, Mariela; Kilic, Mukremin; Winget, D. E.
2015-06-01
BPM 37093 was the first of only a handful of massive (1.05+/-0.05 M⊙; Bergeron 2004; Koester & Allard 2000) white dwarf pulsators discovered (Kanaan et al. 1992). These stars are particularly interesting because the crystallized mass fraction as a function of mass and temperature is poorly constrained by observation, yet this process adds 1-2 Gyr of uncertainty to the ages of the oldest white dwarf stars observed and hence to the ages of associations that contain them (Abrikosov 1960; Kirzhnits 1960; Salpeter 1961). Last year, we discovered that ESO uses BPM 37093 as a standard star and extracted corresponding spectra from the public archive. The data suggested a large variation in the observed hydrogen line profiles that could potentially be due to pulsations, but the measurement did not reach a detection-quality threshold. To explore this possibility further, we obtained 4 hr of continuous time series spectroscopy of BPM 37093 with Gemini in the northern spring of 2014. We present our preliminary results from these data along with those from the accompanying time series photometric observations we gathered from Mt. John (New Zealand), the South African Astronomical Observatory (SAAO), the Panchromatic Robotic Optical Monitoring and Polarimetry Telescopes (PROMPT) in Chile, and Complejo Astronomico El Leoncito (Argentina) to support the Gemini observations.
NASA Astrophysics Data System (ADS)
Pons, Xavier; Miquel, Ninyerola; Oscar, González-Guerrero; Cristina, Cea; Pere, Serra; Alaitz, Zabala; Lluís, Pesquer; Ivette, Serral; Joan, Masó; Cristina, Domingo; Maria, Serra Josep; Jordi, Cristóbal; Chris, Hain; Martha, Anderson; Juanjo, Vidal
2014-05-01
Combining climate dynamics and land cover at relatively coarse resolution allows a very interesting approach to global studies, because in many cases these studies rely on quite high temporal resolution, but they may be limited in large areas like the Mediterranean. However, the current availability of long time series of Landsat imagery and of spatially detailed surface climate models makes it possible to envision global databases that improve mapping results in areas with a complex history of landscape dynamics, characterized by fragmentation, or areas where relief creates intricate climate patterns that can hardly be monitored or modeled at coarse spatial resolutions. DinaCliVe (supported by the Spanish Government and ERDF, and by the Catalan Government, under grants CGL2012-33927 and SGR2009-1511) is the name of the project that aims to analyze land cover and land use dynamics as well as vegetation stress, with a particular emphasis on droughts, and the role that climate variation may have played in such phenomena. To meet this objective, it is proposed to design a massive database from long time series of Landsat land cover products (grouped in quinquennia) and monthly climate records (in situ climate data) for the Iberian Peninsula (582,000 km2). The whole area encompasses 47 Landsat WRS2 scenes (Landsat 4 to 8 missions, paths 197 to 202 and rows 30 to 34) and 52 Landsat WRS1 scenes (for the previous Landsat missions, paths 212 to 221 and rows 30 to 34). Therefore, a mean of 49.5 Landsat scenes, 8 quinquennia per scene and about 6 dates per quinquennium, from 1975 to present, produces around 2,376 sets resulting in 30 m x 30 m spatial resolution maps. Each set is composed of highly coherent geometric and radiometric multispectral and multitemporal (to account for phenology) imagery as well as vegetation and wetness indexes and several derived topographic layers (about 10 Tbyte of data). Furthermore, on the basis of a previous work: the Digital Climatic Atlas of
NASA Astrophysics Data System (ADS)
Loredo, Thomas
The key, central objectives of the proposed Time Series Explorer project are to develop an organized collection of software tools for analysis of time series data in current and future NASA astrophysics data archives, and to make the tools available in two ways: as a library (the Time Series Toolbox) that individual science users can use to write their own data analysis pipelines, and as an application (the Time Series Automaton) providing an accessible, data-ready interface to many Toolbox algorithms, facilitating rapid exploration and automatic processing of time series databases. A number of time series analysis methods will be implemented, including techniques that range from standard ones to state-of-the-art developments by the proposers and others. Most of the algorithms will be able to handle time series data subject to real-world problems such as data gaps, sampling that is otherwise irregular, asynchronous sampling (in multi-wavelength settings), and data with non-Gaussian measurement errors. The proposed research responds to the ADAP element supporting the development of tools for mining the vast reservoir of information residing in NASA databases. The tools that will be provided to the community of astronomers studying variability of astronomical objects (from nearby stars and extrasolar planets through galactic and extragalactic sources) will revolutionize the quality of timing analyses that can be carried out, and greatly enhance the scientific throughput of all NASA astrophysics missions past, present, and future. The Automaton will let scientists explore time series (individual records or large databases) with the most informative and useful analysis methods available, without having to develop the tools themselves or understand the computational details. Both elements, the Toolbox and the Automaton, will enable deep but efficient exploratory time series data analysis, which is why we have named the project the Time Series Explorer. Science
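As an illustration of the kind of algorithm such a toolbox would need for irregularly sampled archival data, here is a minimal pure-Python Lomb-Scargle periodogram, a standard method for unevenly spaced time series. This is a sketch, not the proposed Toolbox API, and the test signal is invented:

```python
import math
import random

def lomb_scargle(t, y, freqs):
    """Lomb-Scargle periodogram for unevenly sampled data (t, y)."""
    y_mean = sum(y) / len(y)
    y = [v - y_mean for v in y]
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # The offset tau makes the estimate invariant to shifts of the time origin
        s2 = sum(math.sin(2 * w * ti) for ti in t)
        c2 = sum(math.cos(2 * w * ti) for ti in t)
        tau = math.atan2(s2, c2) / (2 * w)
        ct = [math.cos(w * (ti - tau)) for ti in t]
        st = [math.sin(w * (ti - tau)) for ti in t]
        yc = sum(yi * c for yi, c in zip(y, ct))
        ys = sum(yi * s for yi, s in zip(y, st))
        cc = sum(c * c for c in ct)
        ss = sum(s * s for s in st)
        power.append(0.5 * (yc * yc / cc + ys * ys / ss))
    return power

# Irregularly (randomly) sampled sinusoid at 0.2 cycles per unit time
random.seed(1)
t = sorted(random.uniform(0, 100) for _ in range(200))
y = [math.sin(2 * math.pi * 0.2 * ti) for ti in t]
freqs = [0.01 * k for k in range(1, 50)]
p = lomb_scargle(t, y, freqs)
best = freqs[p.index(max(p))]  # expect a peak near 0.2
```

Unlike an FFT, this estimator needs no interpolation onto a regular grid, which is why it is the usual starting point for gap-ridden archival light curves.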
Pattern Recognition in Time Series
NASA Astrophysics Data System (ADS)
Lin, Jessica; Williamson, Sheri; Borne, Kirk D.; DeBarr, David
2012-03-01
Perhaps the most commonly encountered data types are time series, touching almost every aspect of human life, including astronomy. One obvious problem in handling time-series databases concerns their typically massive size: gigabytes or even terabytes are common, with more and more databases reaching the petabyte scale. For example, in telecommunication, large companies like AT&T produce several hundred million long-distance records per day [Cort00]. In astronomy, time-domain surveys are relatively new; these are surveys that cover a significant fraction of the sky with many repeat observations, thereby producing time series for millions or billions of objects. Several such time-domain sky surveys are now complete, such as the MACHO [Alco01], OGLE [Szym05], SDSS Stripe 82 [Bram08], SuperMACHO [Garg08], and Berkeley's Transients Classification Pipeline (TCP) [Star08] projects. The Pan-STARRS project is an active sky survey; begun in 2010, it is a 3-year survey covering three-fourths of the sky with ~60 observations of each field [Kais04]. The Large Synoptic Survey Telescope (LSST) project proposes to survey 50% of the visible sky repeatedly approximately 1000 times over a 10-year period, creating a 100-petabyte image archive and a 20-petabyte science database (http://www.lsst.org/). The LSST science database will include time series of over 100 scientific parameters for each of approximately 50 billion astronomical sources; this will be the largest data collection (and certainly the largest time series database) ever assembled in astronomy, and it rivals any other discipline's massive data collections for sheer size and complexity. More common in astronomy are time series of flux measurements. As a consequence of many decades of observations (and in some cases, hundreds of years), a large variety of flux variations have been detected in astronomical objects, including periodic variations (e.g., pulsating stars, rotators, pulsars, eclipsing binaries
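Pattern recognition on massive time-series archives commonly begins by symbolizing the data. The sketch below implements a SAX-style symbolization (z-normalization, piecewise aggregate approximation, then discretization against standard-normal breakpoints); the 4-letter alphabet and segment count are illustrative choices, not values taken from the chapter:

```python
import math

def paa(series, segments):
    """Piecewise Aggregate Approximation: mean of each equal-width segment."""
    n = len(series)
    out = []
    for s in range(segments):
        lo, hi = s * n // segments, (s + 1) * n // segments
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def sax(series, segments=8, alphabet="abcd"):
    """Symbolic Aggregate approXimation: z-normalize, PAA, then discretize."""
    mean = sum(series) / len(series)
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / len(series))
    z = [(x - mean) / std for x in series]
    # Breakpoints giving equiprobable symbols under a standard normal assumption
    breakpoints = [-0.67, 0.0, 0.67]
    word = ""
    for v in paa(z, segments):
        idx = sum(1 for b in breakpoints if v > b)
        word += alphabet[idx]
    return word

# One period of a sine compresses to a short, shape-preserving word
word = sax([math.sin(2 * math.pi * i / 64) for i in range(64)])
```

Reducing each series to a short word is what makes indexing and motif search tractable at the archive scales described above.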
2007-11-02
TSDB is a Python module for storing large volumes of time series data. TSDB stores data in binary files indexed by a timestamp. Aggregation functions (such as rate, sum, avg, etc.) can be performed on the data, but data is never discarded. TSDB is presently best suited for SNMP data but new data types are easily added.
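TSDB's actual on-disk format and API are not described here beyond "binary files indexed by a timestamp", so the following is a hypothetical sketch of that idea: fixed-size packed records plus an SNMP-style rate aggregation computed on read, with the raw data never discarded:

```python
import io
import struct

# Hypothetical record layout: 8-byte float timestamp, 4-byte unsigned counter
RECORD = struct.Struct("<dI")

def append(buf, ts, value):
    """Append one fixed-size binary record."""
    buf.write(RECORD.pack(ts, value))

def scan(buf):
    """Yield (timestamp, value) pairs back out of the binary stream."""
    buf.seek(0)
    while True:
        chunk = buf.read(RECORD.size)
        if len(chunk) < RECORD.size:
            break
        yield RECORD.unpack(chunk)

def rate(samples):
    """Per-second rate between consecutive counter samples (SNMP-style)."""
    out, prev = [], None
    for ts, v in samples:
        if prev is not None:
            pt, pv = prev
            out.append((v - pv) / (ts - pt))
        prev = (ts, v)
    return out

buf = io.BytesIO()
for ts, v in [(0.0, 100), (10.0, 150), (20.0, 250)]:
    append(buf, ts, v)
rates = rate(scan(buf))   # [5.0, 10.0]
```

Computing aggregates such as `rate` at query time, rather than at write time, is what lets the raw counters be kept forever.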
GPS Position Time Series @ JPL
NASA Technical Reports Server (NTRS)
Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen
2013-01-01
Different flavors of GPS time series analysis at JPL all use the same GPS Precise Point Positioning Analysis raw time series; variations in time series analysis/post-processing are driven by different users.
- JPL Global Time Series/Velocities: researchers studying the reference frame, combining with VLBI/SLR/DORIS
- JPL/SOPAC Combined Time Series/Velocities: crustal deformation for tectonic, volcanic, and ground water studies
- ARIA Time Series/Coseismic Data Products: hazard monitoring and response focused
- ARIA data system designed to integrate GPS and InSAR: GPS tropospheric delay used for correcting InSAR; Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR
- Zhen Liu is talking tomorrow on InSAR time series analysis
Network structure of multivariate time series.
Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito
2015-01-01
Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows us to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow us to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail. PMID:26487040
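One concrete way to map each component of a multivariate series to its own network layer, in the spirit of the method described (the authors' multiplex construction may differ in detail), is the natural visibility graph: two time points are linked if the straight line between them clears every intermediate sample. A minimal sketch:

```python
def visibility_edges(series):
    """Natural visibility graph: i and j are linked if every point between
    them lies strictly below the straight line connecting (i, y_i), (j, y_j)."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i] + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j))
            if visible:
                edges.add((i, j))
    return edges

def multiplex_visibility(mts):
    """One visibility-graph layer per component of a multivariate series."""
    return [visibility_edges(component) for component in mts]

# Two toy components of a multivariate series -> two network layers
layers = multiplex_visibility([[1.0, 3.0, 2.0, 4.0],
                               [4.0, 2.0, 3.0, 1.0]])
```

Structural descriptors (degree sequences, inter-layer overlap, and so on) are then computed on the resulting layers instead of on the raw signals.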
Permutations and time series analysis.
Cánovas, Jose S; Guillamón, Antonio
2009-12-01
The main aim of this paper is to show how permutations can be useful in the study of time series. In particular, we introduce a test for checking the independence of a time series which is based on the number of admissible permutations in it. The main improvement in our test is that we are able to give a theoretical distribution for independent time series. PMID:20059199
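The test described rests on counting the admissible (ordinal) permutations occurring in a series. A minimal sketch of that counting step follows; the paper's actual test statistic and null distribution are not reproduced here:

```python
import random

def ordinal_pattern(window):
    """Permutation that sorts the window: the ordinal pattern of the values."""
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def admissible_permutations(series, order):
    """Set of distinct ordinal patterns of the given order found in the series."""
    return {ordinal_pattern(series[i:i + order])
            for i in range(len(series) - order + 1)}

# A monotone series admits only one pattern per order, while an independent
# (i.i.d.) series should exhibit essentially all order! possible patterns.
mono = admissible_permutations(list(range(50)), 3)
random.seed(7)
iid = admissible_permutations([random.random() for _ in range(500)], 3)
```

The gap between the observed number of patterns and the number expected under independence is what such a test turns into a decision rule.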
NASA Astrophysics Data System (ADS)
Allan, Alasdair
2014-06-01
FROG performs time series analysis and display. It provides a simple user interface for astronomers who want to do time-domain astrophysics, while still offering the powerful features found in packages such as PERIOD (ascl:1406.005). FROG includes a number of tools for manipulating time series. Among other things, the user can combine individual time series, detrend series (by multiple methods) and perform basic arithmetic functions. The data can also be exported directly into the TOPCAT (ascl:1101.010) application for further manipulation if needed.
Time sharing massively parallel machines. Draft
Gorda, B.; Wolski, R.
1995-03-01
As part of the Massively Parallel Computing Initiative (MPCI) at the Lawrence Livermore National Laboratory, the authors have developed a simple, effective and portable time sharing mechanism by scheduling gangs of processes on tightly coupled parallel machines. By time-sharing the resources, the system interleaves production and interactive jobs. Immediate priority is given to interactive use, maintaining good response time. Production jobs are scheduled during idle periods, making use of the otherwise unused resources. In this paper the authors discuss their experience with gang scheduling over the 3-year lifetime of the project. In Section 2, they motivate the project and discuss some of its details. Section 3 describes the general scheduling problem and how gang scheduling addresses it. In Section 4, they describe the implementation. Section 8 presents results culled over the lifetime of the project. They conclude the paper with some observations and possible future directions.
Linear Time Vertex Partitioning on Massive Graphs
Mell, Peter; Harang, Richard; Gueye, Assane
2016-01-01
The problem of optimally removing a set of vertices from a graph to minimize the size of the largest resultant component is known to be NP-complete. Prior work has provided near optimal heuristics with a high time complexity that function on up to hundreds of nodes and less optimal but faster techniques that function on up to thousands of nodes. In this work, we analyze how to perform vertex partitioning on massive graphs of tens of millions of nodes. We use a previously known and very simple heuristic technique: iteratively removing the node of largest degree and all of its edges. This approach has an apparent quadratic complexity since, upon removal of a node and adjoining set of edges, the node degree calculations must be updated prior to choosing the next node. However, we describe a linear time complexity solution using an array whose indices map to node degree and whose values are hash tables indicating the presence or absence of a node at that degree value. This approach also has linear growth with respect to memory usage, which is surprising since we lowered the time complexity from quadratic to linear. We empirically demonstrate linear scalability and linear memory usage on random graphs of up to 15,000 nodes. We then demonstrate tractability on massive graphs through execution on a graph with 34 million nodes representing Internet-wide router connectivity. PMID:27336059
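The degree-bucket structure described in the abstract can be sketched as follows (node labels and the toy graph are illustrative). Since removing the maximum-degree node never raises any degree, the pointer to the highest occupied bucket only moves downward, which is what makes the whole run linear:

```python
def greedy_partition(adj, removals):
    """Iteratively remove the highest-degree node; nodes are bucketed by
    degree so each degree update is O(1)."""
    degree = {v: len(ns) for v, ns in adj.items()}
    max_deg = max(degree.values(), default=0)
    buckets = [set() for _ in range(max_deg + 1)]
    for v, d in degree.items():
        buckets[d].add(v)
    removed = []
    top = max_deg
    for _ in range(removals):
        while top > 0 and not buckets[top]:
            top -= 1          # removals never raise a degree, so top only falls
        if not buckets[top]:
            break
        v = buckets[top].pop()
        removed.append(v)
        del degree[v]
        for u in adj[v]:
            if u in degree:   # skip neighbors already removed
                buckets[degree[u]].discard(u)
                degree[u] -= 1
                buckets[degree[u]].add(u)
    return removed

# Star on {0,1,2,3} plus a pendant edge 3-4: node 0 has the largest degree
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
order = greedy_partition(adj, removals=2)
```

Each edge is touched at most twice over the whole run, so both time and memory grow linearly with graph size, matching the paper's claim.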
Time series with tailored nonlinearities
NASA Astrophysics Data System (ADS)
Räth, C.; Laut, I.
2015-10-01
It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearities are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions being typical for, e.g., turbulence and financial data can thus be explained in terms of phase correlations.
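A minimal sketch in the spirit of this approach: keep the Fourier amplitudes of a series but alter its phases, preserving Hermitian symmetry so the inverse transform stays real. Here the phases are simply randomized; the paper instead imposes specific, well-defined phase constraints to tailor the nonlinearity:

```python
import cmath
import math
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

def phase_surrogate(x, rng):
    """Keep Fourier amplitudes, replace phases (Hermitian-symmetric)."""
    X = dft(x)
    n = len(x)
    Y = [0j] * n
    Y[0] = X[0]                      # mean is preserved
    if n % 2 == 0:
        Y[n // 2] = X[n // 2]        # Nyquist bin is real for real input
    for k in range(1, (n + 1) // 2):
        phi = rng.uniform(0, 2 * math.pi)
        Y[k] = abs(X[k]) * cmath.exp(1j * phi)
        Y[n - k] = Y[k].conjugate()  # mirror phase keeps the output real
    return idft(Y)

rng = random.Random(0)
x = [math.sin(2 * math.pi * 3 * t / 64) + 0.5 * math.sin(2 * math.pi * 7 * t / 64)
     for t in range(64)]
y = phase_surrogate(x, rng)
# Amplitude spectra agree even though the time series differ
amps_x = [abs(v) for v in dft(x)]
amps_y = [abs(v) for v in dft(y)]
```

All linear properties (power spectrum, autocorrelation) are carried by the amplitudes and survive the transformation; everything the surrogate changes lives in the phases, which is exactly the degree of freedom the paper manipulates.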
ERIC Educational Resources Information Center
Bos, Theodore; Culver, Sarah E.
2000-01-01
Describes the Economagic Web site, a comprehensive site of free economic time-series data that can be used for research and instruction. Explains that it contains 100,000+ economic data series from sources such as the Federal Reserve Banking System, the Census Bureau, and the Department of Commerce. (CMK)
DETECTING MASSIVE GRAVITONS USING PULSAR TIMING ARRAYS
Lee, Kejia; Kramer, Michael; Jenet, Fredrick A.; Price, Richard H.; Wex, Norbert
2010-10-20
At the limit of weak static fields, general relativity becomes Newtonian gravity, with a potential field that falls off as inverse distance rather than as in a theory of Yukawa-type fields with a finite range. General relativity also predicts that its waves propagate at c, the vacuum light speed, without dispersion. For these reasons, the graviton, the boson for general relativity, can be considered to be massless. Massive gravitons, however, are features of some alternatives to general relativity. This has motivated experiments and observations that, so far, have been consistent with the zero-mass graviton of general relativity, but further tests will be valuable. A basis for new tests may be the high sensitivity gravitational wave (GW) experiments that are now being performed and the higher sensitivity experiments that are being planned. In these experiments, it should be feasible to detect low levels of dispersion due to non-zero graviton mass. One of the most promising techniques for such a detection may be the pulsar timing program that is sensitive to nano-Hertz GWs. Here, we present some details of such a detection scheme. The pulsar timing response to a GW background with the massive graviton is calculated, and the algorithm to detect the massive graviton is presented. We conclude that, with 90% probability, massless gravitons can be distinguished from gravitons heavier than 3 x 10^-22 eV (Compton wavelength lambda_g = 4.1 x 10^12 km) if bi-weekly observation of 60 pulsars is performed for 5 years with a pulsar rms timing accuracy of 100 ns. If 60 pulsars are observed for 10 years with the same accuracy, the detectable graviton mass is reduced to 5 x 10^-23 eV (lambda_g = 2.5 x 10^13 km); for 5-year observations of 100 or 300 pulsars, the sensitivity is respectively 2.5 x 10^-22 eV (lambda_g = 5.0 x 10^12 km) and 10^-22 eV (lambda_g = 1.2 x 10^13 km). Finally, a 10 year
Entropy of electromyography time series
NASA Astrophysics Data System (ADS)
Kaufman, Miron; Zurcher, Ulrich; Sung, Paul S.
2007-12-01
A nonlinear analysis based on Renyi entropy is applied to electromyography (EMG) time series from back muscles. The time dependence of the entropy of the EMG signal exhibits a crossover from a subdiffusive regime at short times to a plateau at longer times. We argue that this behavior characterizes complex biological systems. The plateau value of the entropy can be used to differentiate between healthy and low back pain individuals.
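For reference, the Renyi entropy of order q reduces to the Shannon entropy as q approaches 1. A small sketch with a histogram probability estimator follows; the bin count and input data are illustrative, not taken from the EMG study:

```python
import math

def renyi_entropy(probs, q):
    """Renyi entropy of order q; q = 1 is treated as the Shannon limit."""
    if q == 1:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return math.log(sum(p ** q for p in probs if p > 0)) / (1 - q)

def histogram_probs(series, bins):
    """Occupation probabilities of equal-width bins over the signal range."""
    lo, hi = min(series), max(series)
    counts = [0] * bins
    for x in series:
        idx = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    return [c / len(series) for c in counts]

# Uniform occupancy gives the maximum entropy log(bins) for every order q
probs = [0.25, 0.25, 0.25, 0.25]
h2 = renyi_entropy(probs, 2)     # log 4
hp = histogram_probs([0.0, 1.0, 2.0, 3.0], 2)
```

Tracking how such an entropy grows with the observation time is what produces the crossover-then-plateau behavior the abstract describes.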
The rationale for chemical time-series sampling has its roots in the same fundamental relationships as govern well hydraulics. Samples of ground water are collected as a function of increasing time of pumpage. The most efficient pattern of collection consists of logarithmically s...
Random time series in astronomy.
Vaughan, Simon
2013-02-13
Progress in astronomy comes from interpreting the signals encoded in the light received from distant objects: the distribution of light over the sky (images), over photon wavelength (spectrum), over polarization angle and over time (usually called light curves by astronomers). In the time domain, we see transient events such as supernovae, gamma-ray bursts and other powerful explosions; we see periodic phenomena such as the orbits of planets around nearby stars, radio pulsars and pulsations of stars in nearby galaxies; and we see persistent aperiodic variations ('noise') from powerful systems such as accreting black holes. I review just a few of the recent and future challenges in the burgeoning area of time domain astrophysics, with particular attention to persistently variable sources, the recovery of reliable noise power spectra from sparsely sampled time series, higher order properties of accreting black holes, and time delays and correlations in multi-variate time series. PMID:23277606
Inductive time series modeling program
Kirk, B.L.; Rust, B.W.
1985-10-01
A number of features that comprise environmental time series share a common mathematical behavior. Analysis of the Mauna Loa carbon dioxide record and other time series is aimed at constructing mathematical functions which describe as many major features of the data as possible. A trend function is fit to the data, removed, and the resulting residuals analyzed for any significant behavior. This is repeated until the residuals are driven to white noise. In the following discussion, the concept of trend will include cyclic components. The mathematical tools and program packages used are VARPRO (Golub and Pereyra 1973), for the least squares fit, and a modified version of our spectral analysis program (Kirk et al. 1979), for spectrum and noise analysis. The program is written in FORTRAN. All computations are done in double precision, except for the plotting calls where the DISSPLA package is used. The core requirement varies between 600 K and 700 K. The program is implemented on the IBM 360/370. Currently, the program can analyze up to five different time series where each series contains no more than 300 points. 12 refs.
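The fit-remove-reanalyze loop can be illustrated with its simplest step: closed-form least squares removal of a linear trend, after which the residuals would be passed to spectral analysis. VARPRO itself handles separable nonlinear models (trend plus cycles), which this sketch does not attempt; the data are synthetic:

```python
def fit_linear_trend(t, y):
    """Ordinary least squares fit of y = a + b*t (closed form)."""
    n = len(t)
    tm = sum(t) / n
    ym = sum(y) / n
    b = (sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
         / sum((ti - tm) ** 2 for ti in t))
    a = ym - b * tm
    return a, b

# Synthetic record with a pure linear trend: residuals should vanish
t = list(range(20))
y = [2.0 + 0.5 * ti for ti in t]
a, b = fit_linear_trend(t, y)
residuals = [yi - (a + b * ti) for ti, yi in zip(t, y)]
```

In the program described above, this step would be repeated, with cyclic components added to the trend function, until the residuals are indistinguishable from white noise.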
Massive subchorionic thrombohematoma: a series of 10 cases.
Fung, Tak Yuen; To, Ka Fai; Sahota, Daljit Singh; Chan, Lin Wai; Leung, Tak Yeung; Lau, Tze Kin
2010-10-01
A retrospective audit identified 10 cases of massive idiopathic subchorionic thrombohematoma. The incidence was 1:3,133. Only six of these pregnancies resulted in a livebirth and only two reached term. In eight cases there were ultrasound abnormalities, including two cases of placentomegaly both of which resulted in a pregnancy loss. There was one placental abruption. Seven of the women were nulliparous. Massive subchorionic thrombohematoma is associated with poor pregnancy outcome. Ultrasound findings of placentomegaly might be a bad prognostic sign. PMID:20846069
High Performance Biomedical Time Series Indexes Using Salient Segmentation
Woodbridge, Jonathan; Mortazavi, Bobak; Bui, Alex A.T.; Sarrafzadeh, Majid
2016-01-01
The advent of remote and wearable medical sensing has created a dire need for efficient medical time series databases. Wearable medical sensing devices provide continuous patient monitoring by various types of sensors and have the potential to create massive amounts of data. Therefore, time series databases must utilize highly optimized indexes in order to efficiently search and analyze stored data. This paper presents a highly efficient technique for indexing medical time series signals using Locality Sensitive Hashing (LSH). Unlike previous work, only salient (or interesting) segments are inserted into the index. This technique reduces search times by up to 95% while yielding near identical search results. PMID:23367072
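A minimal sketch of the random-hyperplane family of LSH, under which nearby segments tend to share a hash signature. The paper's exact hash family and its salient-segment detector are not specified here, so all names and parameters below are illustrative:

```python
import random

def lsh_signature(segment, hyperplanes):
    """Sign of the dot product with each random hyperplane gives one hash bit."""
    return tuple(int(sum(h * x for h, x in zip(hp, segment)) >= 0)
                 for hp in hyperplanes)

random.seed(3)
dim, n_bits = 8, 6
hyperplanes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

# Only (hypothetically salient) segments would be inserted into the index
index = {}
seg_a = [0.1, 0.4, 0.9, 1.0, 0.8, 0.3, 0.1, 0.0]
seg_b = list(seg_a)                # identical segment: identical signature
seg_c = [x + 0.01 for x in seg_a]  # near neighbor: likely the same bucket
for name, seg in [("a", seg_a), ("b", seg_b), ("c", seg_c)]:
    index.setdefault(lsh_signature(seg, hyperplanes), []).append(name)
```

A query then hashes the probe segment and compares it only against its own bucket, which is what makes sub-linear search over massive sensor archives possible.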
Introduction to Time Series Analysis
NASA Technical Reports Server (NTRS)
Hardin, J. C.
1986-01-01
The field of time series analysis is explored from its logical foundations to the most modern data analysis techniques. The presentation is developed, as far as possible, for continuous data, so that the inevitable use of discrete mathematics is postponed until the reader has gained some familiarity with the concepts. The monograph seeks to provide the reader with both the theoretical overview and the practical details necessary to correctly apply the full range of these powerful techniques. In addition, the last chapter introduces many specialized areas where research is currently in progress.
Hydrodynamic analysis of time series
NASA Astrophysics Data System (ADS)
Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.
2003-04-01
It was proved that balance equations for systems with corpuscular structure can be derived if a kinematic description by piece-wise analytic functions is available [1]. For example, the hydrodynamic equations for one-dimensional systems of inelastic particles, derived in [2], were used to prove the inconsistency of the Fourier law of heat with the microscopic structure of the system. The hydrodynamic description is also possible for single-particle systems. In this case, averages of physical quantities associated with the particle, taken over a space-time window and generalizing the usual "moving averages" which are performed over time intervals only, were shown to be almost everywhere continuous space-time functions. Moreover, they obey balance partial differential equations (a continuity equation for the 'concentration', a Navier-Stokes equation, and so on) [3]. Time series can be interpreted as trajectories in the space of the recorded parameter. Their hydrodynamic interpretation is expected to enable deterministic predictions when closure relations can be obtained for the balance equations. For the time being, a first result is the estimation of the probability density for the occurrence of a given parameter value, by the normalized concentration field from the hydrodynamic description. The method is illustrated by hydrodynamic analysis of three types of time series: white noise, stock prices from financial markets and groundwater levels recorded at the Krauthausen experimental field of Forschungszentrum Jülich (Germany). [1] C. Vamoş, A. Georgescu, N. Suciu, I. Turcu, Physica A 227, 81-92, 1996. [2] C. Vamoş, N. Suciu, A. Georgescu, Phys. Rev. E 55, 5, 6277-6280, 1997. [3] C. Vamoş, N. Suciu, W. Blaj, Physica A 287, 461-467, 2000.
Massive localized lymphedema: A case series and literature review.
Evans, Robin James; Scilley, Chris
2011-01-01
A large, deep, soft tissue mass is often malignant in nature; however, a recent study described a large soft tissue mass present in morbidly obese patients that was found to be benign. Massive localized lymphedema (MLL) is a large pedunculated lymphadematous mass found in the lower extremity of morbidly obese patients. MLL often enlarges over many years and may interfere with mobility. Although histologically similar to well-differentiated liposarcoma, MLL has recently emerged as a separate, benign clinical entity. The pathophysiology of MLL is yet to be understood. A literature review, and the authors' experiences are discussed to assist in clinical decision making. PMID:22942667
Multiple Indicator Stationary Time Series Models.
ERIC Educational Resources Information Center
Sivo, Stephen A.
2001-01-01
Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…
Analysis of time series from stochastic processes
Gradisek; Siegert; Friedrich; Grabec
2000-09-01
Analysis of time series from stochastic processes governed by a Langevin equation is discussed. Several applications for the analysis are proposed based on estimates of drift and diffusion coefficients of the Fokker-Planck equation. The coefficients are estimated directly from a time series. The applications are illustrated by examples employing various synthetic time series and experimental time series from metal cutting. PMID:11088809
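Drift and diffusion coefficients can be estimated directly from a time series via conditional moments of the increments (the first Kramers-Moyal coefficients). A sketch on a simulated Ornstein-Uhlenbeck process, where the true drift at x is -theta*x and the true diffusion is d; the parameters and seed are illustrative:

```python
import math
import random

def simulate_ou(n, dt, theta, d, rng):
    """Euler-Maruyama integration of dX = -theta*X dt + sqrt(2 d) dW."""
    x, path = 0.0, [0.0]
    for _ in range(n - 1):
        x += -theta * x * dt + math.sqrt(2 * d * dt) * rng.gauss(0, 1)
        path.append(x)
    return path

def kramers_moyal(path, dt, x0, width):
    """Estimate drift D1 and diffusion D2 near x0 from conditional moments
    of the increments: D1 = <dX>/dt, D2 = <dX^2>/(2 dt)."""
    incs = [path[i + 1] - path[i] for i in range(len(path) - 1)
            if abs(path[i] - x0) < width]
    m1 = sum(incs) / len(incs)
    m2 = sum(v * v for v in incs) / len(incs)
    return m1 / dt, m2 / (2 * dt)

rng = random.Random(42)
path = simulate_ou(200000, 0.01, theta=1.0, d=0.5, rng=rng)
d1, d2 = kramers_moyal(path, 0.01, x0=0.5, width=0.1)
# expect d1 near -0.5 (= -theta * x0) and d2 near 0.5 (= d)
```

Repeating the estimate over a grid of x0 values reconstructs the full drift and diffusion functions of the Fokker-Planck equation, which is the basis of the applications proposed in the paper.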
Multivariate Time Series Similarity Searching
Wang, Jimin; Zhu, Yuelong; Li, Shijin; Wan, Dingsheng; Zhang, Pengcheng
2014-01-01
Multivariate time series (MTS) datasets are very common in various financial, multimedia, and hydrological fields. In this paper, a dimension-combination method is proposed to search similar sequences for MTS. Firstly, the similarity of single-dimension series is calculated; then the overall similarity of the MTS is obtained by synthesizing each of the single-dimension similarities based on the weighted BORDA voting method. The dimension-combination method can use existing similarity searching methods. Several experiments, which used classification accuracy as a measure, were performed on six datasets from the UCI KDD Archive to validate the method. The results show the advantage of the approach compared to traditional similarity measures, such as Euclidean distance (ED), dynamic time warping (DTW), point distribution (PD), PCA similarity factor (SPCA), and extended Frobenius norm (Eros), for MTS datasets in some ways. Our experiments also demonstrate that no measure can fit all datasets, and the proposed measure offers another choice for similarity searches. PMID:24895665
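A minimal sketch of the dimension-combination idea: rank candidates per dimension by a single-dimension distance, then combine the ranks with weighted Borda voting (best rank earns the most points). The paper's exact scoring and weighting scheme may differ; the data below are invented:

```python
def euclidean(a, b):
    """Single-dimension distance between two equal-length series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def borda_search(query, dataset, weights):
    """Rank candidates within each dimension, then combine the ranks with
    weighted Borda voting; returns the index of the best overall match."""
    n = len(dataset)
    scores = [0.0] * n
    for d in range(len(query)):
        order = sorted(range(n), key=lambda i: euclidean(query[d], dataset[i][d]))
        for rank, i in enumerate(order):
            scores[i] += weights[d] * (n - rank)   # Borda points
    return max(range(n), key=lambda i: scores[i])

# Two 2-dimensional MTS candidates, each dimension a short series
query = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]]
data = [
    [[1.0, 2.1, 3.0], [0.0, 1.1, 0.0]],   # close in both dimensions
    [[5.0, 5.0, 5.0], [9.0, 9.0, 9.0]],   # far in both dimensions
]
best = borda_search(query, data, weights=[0.5, 0.5])
```

Because only per-dimension rankings are combined, any existing single-series measure (ED, DTW, and so on) can be swapped in for `euclidean` without changing the voting step.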
A Review of Subsequence Time Series Clustering
Zolhavarieh, Seyedjamal; Aghabozorgi, Saeed; Teh, Ying Wah
2014-01-01
Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and background related to subsequence time series clustering. The categorization of the literature reviewed is divided into three periods: preproof, interproof, and postproof. Moreover, various state-of-the-art approaches to subsequence time series clustering are discussed under each of these categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies. PMID:25140332
Translation invariant time-dependent solutions to massive gravity
Mourad, J.; Steer, D.A. E-mail: steer@apc.univ-paris7.fr
2013-12-01
Homogeneous time-dependent solutions of massive gravity generalise the plane wave solutions of the linearised Fierz-Pauli equations for a massive spin-two particle, as well as the Kasner solutions of General Relativity. We show that they also allow a clear counting of the degrees of freedom and represent a simplified framework to work out the constraints, the equations of motion and the initial value formulation. We work in the vielbein formulation of massive gravity, find the phase space resulting from the constraints and show that several disconnected sectors of solutions exist some of which are unstable. The initial values determine the sector to which a solution belongs. Classically, the theory is not pathological but quantum mechanically the theory may suffer from instabilities. The latter are not due to an extra ghost-like degree of freedom.
Nonparametric causal inference for bivariate time series
NASA Astrophysics Data System (ADS)
McCracken, James M.; Weigel, Robert S.
2016-02-01
We introduce new quantities for exploratory causal inference between bivariate time series. The quantities, called penchants and leanings, are computationally straightforward to apply, follow directly from assumptions of probabilistic causality, do not depend on any assumed models for the time series generating process, and do not rely on any embedding procedures; these features may provide a clearer interpretation of the results than those from existing time series causality tools. The penchant and leaning are computed based on a structured method for computing probabilities.
Forecasting Enrollments with Fuzzy Time Series.
ERIC Educational Resources Information Center
Song, Qiang; Chissom, Brad S.
The concept of fuzzy time series is introduced and used to forecast the enrollment of a university. Fuzzy time series, an aspect of fuzzy set theory, forecasts enrollment using a first-order time-invariant model. To evaluate the model, the conventional linear regression technique is applied and the predicted values obtained are compared to the…
Statistical criteria for characterizing irradiance time series.
Stein, Joshua S.; Ellis, Abraham; Hansen, Clifford W.
2010-10-01
We propose and examine several statistical criteria for characterizing time series of solar irradiance. Time series of irradiance are used in analyses that seek to quantify the performance of photovoltaic (PV) power systems over time. Time series of irradiance are either measured or are simulated using models. Simulations of irradiance are often calibrated to or generated from statistics for observed irradiance and simulations are validated by comparing the simulation output to the observed irradiance. Criteria used in this comparison should derive from the context of the analyses in which the simulated irradiance is to be used. We examine three statistics that characterize time series and their use as criteria for comparing time series. We demonstrate these statistics using observed irradiance data recorded in August 2007 in Las Vegas, Nevada, and in June 2009 in Albuquerque, New Mexico.
Generation of artificial helioseismic time-series
NASA Technical Reports Server (NTRS)
Schou, J.; Brown, T. M.
1993-01-01
We present an outline of an algorithm to generate artificial helioseismic time-series, taking into account as much as possible of the knowledge we have on solar oscillations. The hope is that it will be possible to find the causes of some of the systematic errors in analysis algorithms by testing them with such artificial time-series.
Salient Segmentation of Medical Time Series Signals
Woodbridge, Jonathan; Lan, Mars; Sarrafzadeh, Majid; Bui, Alex
2016-01-01
Searching and mining medical time series databases is extremely challenging due to large, high entropy, and multidimensional datasets. Traditional time series databases are populated using segments extracted by a sliding window. The resulting database index contains an abundance of redundant time series segments with little to no alignment. This paper presents the idea of “salient segmentation”. Salient segmentation is a probabilistic segmentation technique for populating medical time series databases. Segments with the lowest probabilities are considered salient and are inserted into the index. The resulting index has little redundancy and is composed of aligned segments. This approach reduces index sizes by more than 98% over conventional sliding window techniques. Furthermore, salient segmentation can reduce redundancy in motif discovery algorithms by more than 85%, yielding a more succinct representation of a time series signal.
Entropic Analysis of Electromyography Time Series
NASA Astrophysics Data System (ADS)
Kaufman, Miron; Sung, Paul
2005-03-01
We are in the process of assessing the effectiveness of fractal and entropic measures for the diagnosis of low back pain from surface electromyography (EMG) time series. Surface electromyography is used to assess patients with low back pain. In a typical EMG measurement, the voltage is measured every millisecond. We observed back muscles fatiguing during one minute, which results in a time series with 60,000 entries. We characterize the complexity of the time series by computing the time dependence of the Shannon entropy. The analysis of time series from relevant muscles of healthy and low back pain (LBP) individuals provides evidence that the level of variability of back muscle activity is much larger for healthy individuals than for individuals with LBP. In general, the time dependence of the entropy shows a crossover from a diffusive regime to a regime characterized by long-time correlations (self-organization) at about 0.01 s.
Biclustering of time series microarray data.
Meng, Jia; Huang, Yufei
2012-01-01
Clustering is a popular data exploration technique widely used in microarray data analysis. In this chapter, we review the ideas and algorithms of biclustering and its applications in time series microarray analysis. We first introduce the concept and importance of biclustering and its different variations. We then focus our discussion on the popular iterative signature algorithm (ISA) for searching for biclusters in a microarray dataset. Next, we discuss in detail the enrichment constraint time-dependent ISA (ECTDISA) for identifying biologically meaningful temporal transcription modules from a time series microarray dataset. In the end, we provide an example of an ECTDISA application to time series microarray data of Kaposi's Sarcoma-associated Herpesvirus (KSHV) infection. PMID:22130875
Homogenising time series: beliefs, dogmas and facts
NASA Astrophysics Data System (ADS)
Domonkos, P.
2011-06-01
In recent decades various homogenisation methods have been developed, but the real effects of their application on time series are still not sufficiently known. The ongoing COST Action HOME (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As a part of the COST activity, a benchmark dataset was built whose characteristics approximate well those of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and analyse the real effects of selected theoretical recommendations. Empirical results show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities often have statistical characteristics similar to natural changes caused by climatic variability, so the pure application of the classic theory that change-points of observed time series can be found and corrected one by one is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than those of raw time series. Some problems around detecting multiple structures of inhomogeneities, as well as problems of time series comparison within homogenisation procedures, are discussed briefly in the study.
Modeling Time Series Data for Supervised Learning
ERIC Educational Resources Information Center
Baydogan, Mustafa Gokce
2012-01-01
Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning…
Developing consistent time series landsat data products
Technology Transfer Automated Retrieval System (TEKTRAN)
The Landsat series of satellites has provided a continuous earth observation data record since the early 1970s. There are increasing demands for a consistent time series of Landsat data products. In this presentation, I will summarize the work supported by the USGS Landsat Science Team project from 20...
Visibility Graph Based Time Series Analysis
Stephen, Mutua; Gu, Changgui; Yang, Huijie
2015-01-01
Network-based time series analysis has made considerable achievements in recent years. By mapping mono- or multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, and consequently provide limited information on evolutionary behaviors. In the present paper we propose a method called visibility-graph-based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states, and the successively occurring states are linked. This procedure converts a time series into a temporal network and, at the same time, a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks. PMID:26571115
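The mapping at the heart of such methods is easy to state. Here is a minimal natural-visibility-graph sketch on a tiny made-up series (the basic construction only; the paper's segment-wise network-of-networks layering is omitted): samples a and b are connected iff every sample between them lies strictly below the straight line joining them.

```python
def visibility_edges(y):
    """Natural visibility graph: edge (a, b) iff all intermediate
    samples lie strictly below the line from (a, y[a]) to (b, y[b])."""
    n = len(y)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(y[c] < y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges

series = [3.0, 1.0, 2.0, 0.5, 4.0]
print(sorted(visibility_edges(series)))
# → [(0, 1), (0, 2), (0, 4), (1, 2), (2, 3), (2, 4), (3, 4)]
```

Adjacent samples are always mutually visible, so the resulting graph is connected; network statistics (degree distribution, assortativity, etc.) are then computed on `edges`.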
Measuring nonlinear behavior in time series data
NASA Astrophysics Data System (ADS)
Wai, Phoong Seuk; Ismail, Mohd Tahir
2014-12-01
Stationarity tests are important in characterizing time series behavior, since financial and economic data series often have missing data, structural change, and jumps or breaks. Moreover, a stationarity test can guide the transformation of a nonlinear time series variable into a stationary one through a difference-stationary or trend-stationary process. Two different stationarity tests, the Augmented Dickey-Fuller (ADF) test and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test, are examined in this paper to describe the properties of time series variables in a financial model. The least squares method is used in the ADF test to detect changes in the series, and the Lagrange multiplier is used in the KPSS test to examine the properties of oil price, gold price and the Malaysian stock market. Moreover, the Quandt-Andrews, Bai-Perron and Chow tests are also used to detect the existence of breaks in the data series. The monthly index data range from December 1989 to May 2012. The results show that these three series exhibit nonlinear properties but can be transformed into stationary series after taking the first difference.
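A hand-rolled sketch of the Dickey-Fuller idea behind the ADF test illustrates the difference-stationary point above. This is the unaugmented, constant-only regression on synthetic data (a real application should use a full ADF/KPSS implementation with proper critical values): the t-statistic on the lagged level is strongly negative for a stationary series and near zero for a random walk, and first-differencing a random walk restores stationarity.

```python
import numpy as np

def df_tstat(y):
    """t-statistic on gamma in dy_t = alpha + gamma * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(3)
random_walk = np.cumsum(rng.normal(size=2000))  # unit root: cannot reject
stationary = np.diff(random_walk)               # differencing restores stationarity

print(round(df_tstat(random_walk), 2), round(df_tstat(stationary), 2))
```

The first statistic stays in the non-rejection region of the Dickey-Fuller distribution; the second is far below any conventional critical value.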
Nonlinear Analysis of Surface EMG Time Series
NASA Astrophysics Data System (ADS)
Zurcher, Ulrich; Kaufman, Miron; Sung, Paul
2004-04-01
Applications of nonlinear analysis of surface electromyography time series of patients with and without low back pain are presented. Limitations of the standard methods based on the power spectrum are discussed.
Complex network approach to fractional time series
Manshour, Pouya
2015-10-15
In order to extract correlation information embedded in stochastic time series, the visibility graph algorithm has recently been proposed, by which a time series can be mapped onto a complex network. We demonstrate that the visibility algorithm is not an appropriate one for studying the correlation aspects of a time series. We then employ the horizontal visibility algorithm, a much simpler one, to map fractional processes onto complex networks. The degree distributions are shown to have parabolic exponential forms with a Hurst-dependent fitting parameter. Further, we consider other topological properties such as the maximum eigenvalue of the adjacency matrix and the degree assortativity, and show that such topological quantities can also be used to predict the Hurst exponent, with an exception for anti-persistent fractional Gaussian noises. To solve this problem, we take into account the Spearman correlation coefficient between nodes' degrees and their corresponding data values in the original time series.
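The horizontal visibility construction the authors switch to is compact enough to sketch. In this variant, samples a and b are linked iff every sample strictly between them is lower than both; for uncorrelated noise the mean degree is known to tend to 4, which gives a quick sanity check. This is a toy illustration on synthetic noise, not the paper's Hurst-exponent pipeline.

```python
import numpy as np

def hvg_degrees(y):
    """Node degrees of the horizontal visibility graph of y."""
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for a in range(n - 1):
        top = -np.inf                  # running max of samples between a and b
        for b in range(a + 1, n):
            if y[b] > top:             # interior max < y[b] (and < y[a], below)
                deg[a] += 1
                deg[b] += 1
                top = y[b]
            if y[b] >= y[a]:           # the view from a is now blocked
                break
    return deg

rng = np.random.default_rng(7)
deg = hvg_degrees(rng.uniform(size=5000))
print(round(deg.mean(), 2))            # near 4 for uncorrelated noise
```

The early `break` makes the scan short on average, which is why the horizontal variant is so much cheaper than the natural visibility graph.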
Advanced spectral methods for climatic time series
Ghil, M.; Allen, M.R.; Dettinger, M.D.; Ide, K.; Kondrashov, D.; Mann, M.E.; Robertson, A.W.; Saunders, A.; Tian, Y.; Varadi, F.; Yiou, P.
2002-01-01
The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict climatic variability. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this review we describe the connections between time series analysis and nonlinear dynamics, discuss signal- to-noise enhancement, and present some of the novel methods for spectral analysis. The various steps, as well as the advantages and disadvantages of these methods, are illustrated by their application to an important climatic time series, the Southern Oscillation Index. This index captures major features of interannual climate variability and is used extensively in its prediction. Regional and global sea surface temperature data sets are used to illustrate multivariate spectral methods. Open questions and further prospects conclude the review.
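One of the reviewed spectral methods, singular spectrum analysis, fits in a few lines. The sketch below is a generic textbook version on synthetic data (window length and component count are illustrative, and this is not the authors' implementation): embed the series in a trajectory matrix, take the SVD, and reconstruct the leading components by anti-diagonal averaging.

```python
import numpy as np

def ssa_reconstruct(y, window, ncomp):
    """Reconstruct y from its ncomp leading SSA components."""
    n = len(y)
    k = n - window + 1
    X = np.column_stack([y[i:i + window] for i in range(k)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :ncomp] * s[:ncomp]) @ Vt[:ncomp]
    rec = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                 # average over anti-diagonals (i + j = const)
        rec[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return rec / counts

t = np.arange(400)
rng = np.random.default_rng(9)
y = np.sin(2 * np.pi * t / 40) + 0.5 * rng.normal(size=400)
smooth = ssa_reconstruct(y, window=60, ncomp=2)
```

A sinusoid occupies a pair of singular components, so keeping the top two recovers the oscillation and discards most of the noise.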
Detecting nonlinear structure in time series
Theiler, J.
1991-01-01
We describe an approach for evaluating the statistical significance of evidence for nonlinearity in a time series. The formal application of our method requires the careful statement of a null hypothesis which characterizes a candidate linear process, the generation of an ensemble of "surrogate" data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. While some data sets very cleanly exhibit low-dimensional chaos, there are many cases where the evidence is sketchy and difficult to evaluate. We hope to provide a framework within which such claims of nonlinearity can be evaluated.
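A minimal version of this recipe can be written down directly. The sketch below uses phase-randomized surrogates (which preserve the power spectrum, hence the linear null) and a time-asymmetry statistic as the discriminator, with the chaotic logistic map standing in for data with genuine nonlinear structure; the choices of statistic and test series are illustrative.

```python
import numpy as np

def phase_surrogate(y, rng):
    """Surrogate with the same amplitude spectrum but random phases."""
    spec = np.abs(np.fft.rfft(y))
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0          # keep the mean; Nyquist-bin handling glossed over
    return np.fft.irfft(spec * np.exp(1j * phases), n=len(y))

def asymmetry(y):
    """Time-reversal asymmetry: zero in expectation for linear Gaussian data."""
    d = np.diff(y)
    return float(np.mean(d ** 3))

x = np.empty(4000)
x[0] = 0.4
for i in range(len(x) - 1):  # logistic map: nonlinear and time-irreversible
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

rng = np.random.default_rng(4)
stat = asymmetry(x)
surr_stats = [asymmetry(phase_surrogate(x, rng)) for _ in range(99)]
extreme = sum(abs(s) >= abs(stat) for s in surr_stats)
print(extreme)               # rank among 99 surrogates; small -> reject the null
```

With 99 surrogates, fewer than 5 values as extreme as the original corresponds to rejecting the linear null at the 5% level in a two-sided rank test.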
Homogenising time series: Beliefs, dogmas and facts
NASA Astrophysics Data System (ADS)
Domonkos, P.
2010-09-01
For obtaining reliable information about climate change and climate variability, the use of high-quality data series is essential, and one basic tool of quality improvement is the statistical homogenisation of observed time series. In recent decades a large number of homogenisation methods have been developed, but the real effects of their application on time series are still not entirely known. The ongoing COST HOME project (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics approximate well those of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and analyse the real effects of selected theoretical recommendations. The author believes that several old theoretical rules have to be re-evaluated. Some examples of the open questions: a) Can statistically detected change-points be accepted only with the confirmation of metadata information? b) Do semi-hierarchic algorithms for detecting multiple change-points in time series function effectively in practice? c) Is it good to limit the spatial comparison of candidate series to up to five other series in the neighbourhood? Empirical results, from the COST benchmark and other experiments, show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities look like part of the climatic variability, so the pure application of the classic theory that change-points of observed time series can be found and corrected one by one is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than those of raw time series. The developers and users of homogenisation methods have to bear in mind that…
Translation invariant time-dependent solutions to massive gravity II
Mourad, J.; Steer, D.A. E-mail: steer@apc.univ-paris7.fr
2014-06-01
This paper is a sequel to JCAP 12 (2013) 004 and is also devoted to translation-invariant solutions of ghost-free massive gravity in its moving-frame formulation. Here we consider a mass term which is linear in the vielbein (corresponding to a β₃ term in the 4D metric formulation) in addition to the cosmological constant. We determine the constraints explicitly and, from the initial value formulation, show that the time-dependent solutions can have singularities at a finite time. Although the constraints give, as in the β₁ case, the correct number of degrees of freedom for a massive spin-two field, we show that the lapse function can change sign at a finite time, causing a singular time evolution. This is very different from the β₁ case, where time evolution is always well defined. We conclude that the β₃ mass term can be pathological and should be treated with care.
Detecting chaos in irregularly sampled time series.
Kulp, C W
2013-09-01
Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) instead of the DFT to compute a series' power spectrum. The DFT is not appropriate for irregularly sampled time series, but the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars. PMID:24089946
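The Lomb-Scargle step generalizes the DFT to irregular sampling times. Below is a from-scratch sketch using the classical normalization on synthetic data (signal frequency, noise level and frequency grid are illustrative; the chaos-detection layer of the paper is omitted): the periodogram recovers the dominant frequency without requiring a regular grid.

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for irregular sampling times t."""
    y = y - y.mean()
    power = []
    for f in freqs:
        w = 2.0 * np.pi * f
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power.append(0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s)))
    return np.array(power)

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 50, 300))      # irregular sampling times
y = np.sin(2 * np.pi * 0.7 * t) + 0.3 * rng.normal(size=300)

freqs = np.linspace(0.05, 2.0, 400)
best = freqs[np.argmax(lomb_scargle(t, y, freqs))]
print(round(best, 2))
```

The peak lands at the true 0.7 cycles per unit time despite the irregular grid, which is exactly the property the modified algorithm relies on.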
Heuristic segmentation of a nonstationary time series
NASA Astrophysics Data System (ADS)
Fukuda, Kensuke; Eugene Stanley, H.; Nunes Amaral, Luís A.
2004-02-01
Many phenomena, both natural and human-influenced, give rise to signals whose statistical properties change under time translation, i.e., are nonstationary. For some practical purposes, a nonstationary time series can be seen as a concatenation of stationary segments. However, the exact segmentation of a nonstationary time series is a hard computational problem which cannot be solved exactly by existing methods. For this reason, heuristic methods have been proposed. Using one such method, it has been reported that for several cases of interest (e.g., heart beat data and Internet traffic fluctuations) the distribution of durations of these stationary segments decays with a power-law tail. A potential technical difficulty that has not been thoroughly investigated is that a nonstationary time series with a (scale-free) power-law distribution of stationary segments is harder to segment than other nonstationary time series because of the wider range of possible segment lengths. Here, we investigate the validity of a heuristic segmentation algorithm recently proposed by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)] by systematically analyzing surrogate time series with different statistical properties. We find that if a given nonstationary time series has stationary periods whose length is distributed as a power law, the algorithm can split the time series into a set of stationary segments with the correct statistical properties. We also find that the estimated power-law exponent of the distribution of stationary-segment lengths is affected by (i) the minimum segment length and (ii) the ratio R ≡ σ_ε/σ_x̄, where σ_x̄ is the standard deviation of the mean values of the segments and σ_ε is the standard deviation of the fluctuations within a segment. Furthermore, we determine that the performance of the algorithm is generally not affected by uncorrelated noise spikes or by weak long-range temporal correlations of the fluctuations within segments.
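The core splitting step of the Bernaola-Galván heuristic can be sketched as a single split at the point of maximum t-statistic between the left and right means (the full algorithm recurses on the resulting halves with a significance threshold; data and the minimum segment length here are illustrative).

```python
import numpy as np

def best_split(y, min_len=20):
    """Index maximizing the two-sample t-statistic between y[:i] and y[i:]."""
    n = len(y)
    best_t, best_i = 0.0, None
    for i in range(min_len, n - min_len):
        a, b = y[:i], y[i:]
        sp = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                      (len(b) - 1) * b.var(ddof=1)) / (n - 2))
        t = abs(a.mean() - b.mean()) / (sp * np.sqrt(1 / len(a) + 1 / len(b)))
        if t > best_t:
            best_t, best_i = t, i
    return best_i, best_t

rng = np.random.default_rng(8)
y = np.concatenate([rng.normal(0, 1, 300), rng.normal(2, 1, 300)])
i, t = best_split(y)
print(i)
```

The split lands at the true change point near index 300; in the full algorithm the segment is accepted and both halves are recursively re-split while the maximum t-statistic remains significant.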
Forbidden patterns in financial time series.
Zanin, Massimiliano
2008-03-01
The existence of forbidden patterns, i.e., certain sequences missing from a given time series, is a recently proposed instrument of potential application in the study of time series. Forbidden patterns are related to the permutation entropy, which has the basic properties of classic chaos indicators, such as the Lyapunov exponent or the Kolmogorov entropy, thus allowing one to separate deterministic (usually chaotic) from random series; moreover, it requires fewer values of the series to be calculated, and it is suitable for use with small datasets. In this paper, the appearance of forbidden patterns is studied in different economic indicators such as stock indices (Dow Jones Industrial Average and Nasdaq Composite), NYSE stocks (IBM and Boeing), and others (ten-year bond interest rate), to find evidence of deterministic behavior in their evolution. Moreover, the rate of appearance of the forbidden patterns is calculated, and some considerations about the underlying dynamics are suggested. PMID:18377070
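Forbidden-pattern counting itself is a short computation: tally the order patterns of length 3 in a series, and any pattern that never occurs signals deterministic structure. The sketch below uses the fully chaotic logistic map as the test series (illustrative, not the paper's financial data); for this map a decreasing triple x₀ > x₁ > x₂ is impossible, because a decrease forces the next value below 3/4, and every value below 3/4 is followed by an increase.

```python
from itertools import permutations

def pattern_counts(y, m=3):
    """Count order patterns of length m; a pattern is the index order
    that sorts a window's values ascending."""
    counts = {p: 0 for p in permutations(range(m))}
    for i in range(len(y) - m + 1):
        window = y[i:i + m]
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))
        counts[pattern] += 1
    return counts

x = [0.1]
for _ in range(5000):                    # logistic map, r = 4
    x.append(4.0 * x[-1] * (1.0 - x[-1]))

counts = pattern_counts(x[100:])         # drop the transient
forbidden = [p for p, c in counts.items() if c == 0]
print(forbidden)                         # the strictly decreasing pattern
```

A random series of the same length would visit all six patterns, so a persistently empty bin is exactly the evidence of determinism the paper looks for.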
Predicting road accidents: Structural time series approach
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-07-01
In this paper, a model for the occurrence of road accidents in Malaysia between 1970 and 2010 was developed, and the number of road accidents was predicted using a structural time series approach. The models were developed using a stepwise method, and the residuals of each step were analyzed. The accuracy of the models was assessed with the mean absolute percentage error (MAPE), and the best model was chosen based on the smallest Akaike information criterion (AIC) value. The structural time series approach found the local linear trend model to be the best representation of the road accidents; this model allows the level and slope components to vary over time. In addition, the approach provides useful information for improving on conventional time series methods.
Development of an IUE Time Series Browser
NASA Technical Reports Server (NTRS)
Massa, Derck
2005-01-01
The International Ultraviolet Explorer (IUE) satellite operated successfully for more than 17 years. Its archive of more than 100,000 science exposures is widely acknowledged as an invaluable scientific resource that will not be duplicated in the foreseeable future. We have searched this archive for objects which were observed 10 or more times with the same spectral dispersion and wavelength coverage over the lifetime of IUE. Using this definition of a time series, we find that roughly half of the science exposures are members of such time series. This paper describes a Web-based IUE time series browser which enables the user to visually inspect the repeated observations for variability and to examine each member spectrum individually. Further, if the researcher determines that a specific data set is worthy of further investigation, it can be easily downloaded for further, detailed analysis.
Learning time series for intelligent monitoring
NASA Technical Reports Server (NTRS)
Manganaris, Stefanos; Fisher, Doug
1994-01-01
We address the problem of classifying time series according to their morphological features in the time domain. In a supervised machine-learning framework, we induce a classification procedure from a set of preclassified examples. For each class, we infer a model that captures its morphological features using Bayesian model induction and the minimum message length approach to assign priors. In the performance task, we classify a time series in one of the learned classes when there is enough evidence to support that decision. Time series with sufficiently novel features, belonging to classes not present in the training set, are recognized as such. We report results from experiments in a monitoring domain of interest to NASA.
Integrated method for chaotic time series analysis
Hively, L.M.; Ng, E.G.
1998-09-29
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data are disclosed. The steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.
Integrated method for chaotic time series analysis
Hively, Lee M.; Ng, Esmond G.
1998-01-01
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.
Building Chaotic Model From Incomplete Time Series
NASA Astrophysics Data System (ADS)
Siek, Michael; Solomatine, Dimitri
2010-05-01
This paper presents a number of novel techniques for building a predictive chaotic model from incomplete time series. A predictive chaotic model is built by reconstructing the time-delayed phase space from observed time series, and the prediction is made by a global model or by adaptive local models based on the dynamical neighbors found in the reconstructed phase space. In general, the building of any data-driven model depends on the completeness and quality of the data itself. However, complete data availability cannot always be guaranteed, since measurement or data transmission is intermittently interrupted for various reasons. We propose two main solutions for dealing with incomplete time series: imputing and non-imputing methods. For imputing methods, we utilized interpolation methods (weighted sums of linear interpolations, Bayesian principal component analysis and cubic spline interpolation) and predictive models (neural network, kernel machine, chaotic model) for estimating the missing values. After imputing the missing values, the phase space reconstruction and chaotic model prediction are executed as a standard procedure. For non-imputing methods, we reconstructed the time-delayed phase space from the observed time series with missing values. This reconstruction results in non-continuous trajectories. However, the local model prediction can still be made from the other dynamical neighbors reconstructed from non-missing values. We implemented and tested these methods to construct a chaotic model for predicting storm surges at Hoek van Holland, the entrance of Rotterdam Port. The hourly surge time series is available for the period 1990-1996. For measuring the performance of the proposed methods, a synthetic time series with missing values, generated by applying a random process to the original (complete) time series, is utilized. There are two main performance measures used in this work: (1) error measures between the actual…
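The reconstruction-plus-local-model step can be sketched compactly. The example below is a zeroth-order local model on a complete synthetic series (a sine wave; delay, embedding dimension and the exclusion window are illustrative, and the missing-value machinery of the paper is omitted): build time-delay embedding vectors, find the nearest dynamical neighbor of the current state, and reuse that neighbor's successor as the prediction.

```python
import numpy as np

def embed(y, dim, delay):
    """Time-delay embedding: row j is (y[j], y[j+delay], ..., y[j+(dim-1)*delay])."""
    n = len(y) - (dim - 1) * delay
    return np.column_stack([y[i * delay:i * delay + n] for i in range(dim)])

t = np.arange(2000) * 0.05
y = np.sin(t)

dim, delay = 3, 4
X = embed(y, dim, delay)
succ = y[(dim - 1) * delay + 1:]       # value following each embedded state
X = X[:len(succ)]                      # align states with their successors

query = X[-1]                          # predict the successor of the last state
dists = np.linalg.norm(X[:-50] - query, axis=1)   # exclude temporal neighbors
pred = succ[np.argmin(dists)]
print(round(abs(pred - y[-1]), 3))     # prediction error vs. the true next value
```

Averaging the successors of several neighbors, or fitting a local linear map to them, gives the adaptive local models the abstract refers to.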
Fractal and natural time analysis of geoelectrical time series
NASA Astrophysics Data System (ADS)
Ramirez Rojas, A.; Moreno-Torres, L. R.; Cervantes, F.
2013-05-01
In this work we show the analysis of geoelectric time series linked with two earthquakes of M=6.6 and M=7.4. These time series were monitored at the South Pacific Mexican coast, which is the most important active seismic subduction zone in México. The geoelectric time series were analyzed using two complementary methods: a fractal analysis, by means of detrended fluctuation analysis (DFA) in conventional time, and the power spectrum defined in the natural time domain (NTD). In conventional time we found long-range correlations prior to the EQ occurrences and, simultaneously in NTD, the behavior of the power spectrum suggests the possible existence of seismic electric signals (SES) similar to those previously reported in equivalent time series monitored in Greece prior to earthquakes of relevant magnitude.
Layered Ensemble Architecture for Time Series Forecasting.
Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin
2016-01-01
Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and the diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble using different training sets, with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble, which indicates LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on the time series data of the NN3 and NN5 neural network competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results clearly reveal that LEA is better than other ensemble and non-ensemble methods. PMID:25751882
Climate Time Series Analysis and Forecasting
NASA Astrophysics Data System (ADS)
Young, P. C.; Fildes, R.
2009-04-01
This paper will discuss various aspects of climate time series data analysis, modelling and forecasting being carried out at Lancaster. This will include state-dependent-parameter, nonlinear, stochastic modelling of globally averaged atmospheric carbon dioxide; the computation of emission strategies based on modern control theory; and extrapolative time series benchmark forecasts of annual average temperature, both global and local. The key to the forecasting evaluation will be the iterative estimation of forecast error based on rolling-origin comparisons, as recommended in the forecasting research literature. The presentation will conclude with a comparison of the time series forecasts with forecasts produced from global circulation models and a discussion of the implications for climate modelling research.
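Rolling-origin evaluation, as recommended in the forecasting literature, can be sketched in a few lines. The harness and the naive random-walk benchmark below are generic assumptions, not the Lancaster models themselves:

```python
def rolling_origin_errors(series, forecaster, min_train):
    """Rolling-origin forecast evaluation: at each origin t >= min_train,
    give `forecaster` the history series[:t], score its one-step-ahead
    forecast against the actual value series[t], and collect the
    absolute errors."""
    errors = []
    for t in range(min_train, len(series)):
        forecast = forecaster(series[:t])
        errors.append(abs(series[t] - forecast))
    return errors

# Naive (random-walk) benchmark: forecast the last observed value.
naive = lambda history: history[-1]
```

Comparing a candidate model's rolling-origin errors against such a benchmark is exactly the iterative forecast-error estimation the abstract refers to.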
Intrinsic superstatistical components of financial time series
NASA Astrophysics Data System (ADS)
Vamoş, Călin; Crăciun, Maria
2014-12-01
Time series generated by a complex hierarchical system exhibit various types of dynamics at different time scales. A financial time series is an example of such a multiscale structure, with time scales ranging from minutes to several years. In this paper we decompose the volatility of financial indices into five intrinsic components and we show that it has a heterogeneous scale structure. The small-scale components have a stochastic nature and they are independent 99% of the time, becoming synchronized during financial crashes and enhancing the heavy tails of the volatility distribution. The deterministic behavior of the large-scale components is related to the nonstationarity of the evolution of financial markets. Our decomposition of the financial volatility is a superstatistical model more complex than those usually limited to a superposition of two independent statistics at well-separated time scales.
Characterization of Experimental Chaotic Time Series
NASA Astrophysics Data System (ADS)
Tomlin, Brett; Olsen, Thomas; Callan, Kristine; Wiener, Richard
2004-05-01
Correlation dimension and Lyapunov dimension are complementary measures of the strength of the chaotic dynamics of a nonlinear system. Long time series were obtained from experiment, both in a modified Taylor-Couette fluid flow apparatus and in a non-linear electronic circuit. The irregular generation of Taylor vortex pairs in Taylor-Couette flow with hourglass geometry has previously demonstrated low-dimensional chaos (T. Olsen, R. Bjorge, & R. Wiener, Bull. Am. Phys. Soc. 47(10), 76 (2002)). The non-linear circuit allows for the acquisition of very large time series and serves as a test case for the numerical procedures. Details of the calculation and results are presented.
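The core of a correlation-dimension estimate is the Grassberger-Procaccia correlation sum; a minimal sketch (generic, not the authors' full procedure, which also involves embedding the measured series) is:

```python
import math

def correlation_sum(points, r):
    """Fraction of distinct point pairs closer than r (Euclidean).
    The correlation dimension is the slope of log C(r) vs log r over a
    suitable scaling range of r."""
    n = len(points)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(points[i], points[j])))
            if d < r:
                count += 1
    return 2.0 * count / (n * (n - 1))
```

For points filling a curve, the slope of log C(r) against log r approaches 1; for a strange attractor it converges to a non-integer value.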
Detecting smoothness in noisy time series
Cawley, R.; Hsu, G.; Salvino, L.W.
1996-06-01
We describe the role of chaotic noise reduction in detecting an underlying smoothness in a dataset. We have described elsewhere a general method for assessing the presence of determinism in a time series, which is to test against the class of datasets producing smoothness (i.e., the null hypothesis is determinism). In order to reduce the likelihood of a false call, we recommend this kind of analysis be applied first to a time series whose deterministic origin is in question. We believe this step should be taken before implementing other methods of dynamical analysis and measurement, such as correlation dimension or Lyapunov spectrum. © 1996 American Institute of Physics.
Clustering Short Time-Series Microarray
NASA Astrophysics Data System (ADS)
Ping, Loh Wei; Hasan, Yahya Abu
2008-01-01
Most microarray analyses are carried out on static gene expressions. However, the dynamical study of microarrays has lately gained more attention. Most research on time-series microarrays emphasizes the bioscience and medical aspects, but little addresses the numerical aspect. This study attempts to analyze short time-series microarrays mathematically using the STEM clustering tool, which formally preprocesses data before clustering. We next introduce the Circular Mould Distance (CMD) algorithm, which combines both preprocessing and clustering analysis. Both methods are subsequently compared in terms of efficiency.
TimeSeer: Scagnostics for high-dimensional time series.
Dang, Tuan Nhon; Anand, Anushka; Wilkinson, Leland
2013-03-01
We introduce a method (Scagnostic time series) and an application (TimeSeer) for organizing multivariate time series and for guiding interactive exploration through high-dimensional data. The method is based on nine characterizations of the 2D distributions of orthogonal pairwise projections of a set of points in multidimensional Euclidean space. These characterizations include measures such as density, skewness, shape, outliers, and texture. Working directly with these Scagnostic measures, we can locate anomalous or interesting subseries for further analysis. Our application is designed to handle the types of doubly multivariate data series that are often found in security, financial, social, and other sectors. PMID:23307611
Multifractal analysis of polyalanines time series
NASA Astrophysics Data System (ADS)
Figueirêdo, P. H.; Nogueira, E.; Moret, M. A.; Coutinho, Sérgio
2010-05-01
Multifractal properties of the energy time series of short α-helix structures, specifically from a polyalanine family, are investigated through the MF-DFA technique (multifractal detrended fluctuation analysis). Estimates for the generalized Hurst exponent h(q) and its associated multifractal exponents τ(q) are obtained for several series generated by numerical simulations of molecular dynamics in different systems from distinct initial conformations. All simulations were performed using the GROMOS force field, implemented in the program THOR. The main results show that all series exhibit multifractal behavior depending on the number of residues and on temperature. Moreover, the multifractal spectra reveal important aspects of the time evolution of the system and suggest that the nucleation process of the secondary structures during visits to the energy hypersurface is an essential feature of the folding process.
SO2 EMISSIONS AND TIME SERIES MODELS
The paper describes a time series model that permits the estimation of the statistical properties of pounds of SO2 per million Btu in stack emissions. It uses measured values for this quantity provided by coal sampling and analysis (CSA), by a continuous emissions monitor (CEM), ...
Three Analysis Examples for Time Series Data
Technology Transfer Automated Retrieval System (TEKTRAN)
With improvements in instrumentation and the automation of data collection, plot level repeated measures and time series data are increasingly available to monitor and assess selected variables throughout the duration of an experiment or project. Records and metadata on variables of interest alone o...
Directionality and volatility in electroencephalogram time series
NASA Astrophysics Data System (ADS)
Mansor, Mahayaudin M.; Green, David A.; Metcalfe, Andrew V.
2016-06-01
We compare time series of electroencephalograms (EEGs) from healthy volunteers with EEGs from subjects diagnosed with epilepsy. The EEG time series from the healthy group are recorded during awake state with their eyes open and eyes closed, and the records from subjects with epilepsy are taken from three different recording regions of pre-surgical diagnosis: hippocampal, epileptogenic and seizure zone. The comparisons for these 5 categories are in terms of deviations from linear time series models with constant variance Gaussian white noise error inputs. One feature investigated is directionality, and how this can be modelled by either non-linear threshold autoregressive models or non-Gaussian errors. A second feature is volatility, which is modelled by Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) processes. Other features include the proportion of variability accounted for by time series models, and the skewness and the kurtosis of the residuals. The results suggest these comparisons may have diagnostic potential for epilepsy and provide early warning of seizures.
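The volatility feature is modelled with GARCH processes; the standard GARCH(1,1) recursion can be sketched with a simulation (a generic illustration with assumed parameter values, not the fitted EEG models):

```python
import math
import random

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=42):
    """Simulate a GARCH(1,1) process:
        r_t     = sigma_t * z_t,  z_t ~ N(0, 1)
        sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
    With alpha + beta < 1 the unconditional variance is finite and
    equals omega / (1 - alpha - beta); here that is 1.0."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)  # start at the stationary variance
    returns, sigmas = [], []
    for _ in range(n):
        sigma = math.sqrt(var)
        r = sigma * rng.gauss(0.0, 1.0)
        returns.append(r)
        sigmas.append(sigma)
        var = omega + alpha * r * r + beta * var
    return returns, sigmas
```

Large residuals raise the conditional variance for subsequent steps, producing the volatility clustering that distinguishes the EEG categories in the study.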
Topological analysis of chaotic time series
NASA Astrophysics Data System (ADS)
Gilmore, Robert
1997-10-01
Topological methods have recently been developed for the classification, analysis, and synthesis of chaotic time series. These methods can be applied to time series with a Lyapunov dimension less than three. The procedure determines the stretching and squeezing mechanisms which operate to create a strange attractor and organize all the unstable periodic orbits in the attractor in a unique way. Strange attractors are identified by a set of integers. These are topological invariants for a two-dimensional branched manifold, which is the infinite dissipation limit of the strange attractor. It is remarkable that this topological information can be extracted from chaotic time series. The data required for this analysis need not be extensive or exceptionally clean. The topological invariants: (1) are subject to validation/invalidation tests; (2) describe how to model the data; and (3) do not change as control parameters change. Topological analysis is the first step in a doubly discrete classification scheme for strange attractors. The second discrete classification involves specification of a 'basis set' of periodic orbits whose presence forces the existence of all other periodic orbits in the strange attractor. The basis set of orbits does change as control parameters change. Quantitative models developed to describe time series data are tested by the methods of entrainment. This analysis procedure has been applied to analyze a number of data sets. Several analyses are described.
Nonlinear time-series analysis revisited
NASA Astrophysics Data System (ADS)
Bradley, Elizabeth; Kantz, Holger
2015-09-01
In 1980 and 1981, two pioneering papers laid the foundation for what became known as nonlinear time-series analysis: the analysis of observed data—typically univariate—via dynamical systems theory. Based on the concept of state-space reconstruction, this set of methods allows us to compute characteristic quantities such as Lyapunov exponents and fractal dimensions, to predict the future course of the time series, and even to reconstruct the equations of motion in some cases. In practice, however, there are a number of issues that restrict the power of this approach: whether the signal accurately and thoroughly samples the dynamics, for instance, and whether it contains noise. Moreover, the numerical algorithms that we use to instantiate these ideas are not perfect; they involve approximations, scale parameters, and finite-precision arithmetic, among other things. Even so, nonlinear time-series analysis has been used to great advantage on thousands of real and synthetic data sets from a wide variety of systems ranging from roulette wheels to lasers to the human heart. Even in cases where the data do not meet the mathematical or algorithmic requirements to assure full topological conjugacy, the results of nonlinear time-series analysis can be helpful in understanding, characterizing, and predicting dynamical systems.
Event Discovery in Astronomical Time Series
NASA Astrophysics Data System (ADS)
Preston, D.; Protopapas, P.; Brodley, C.
2009-09-01
The discovery of events in astronomical time series data is a non-trivial problem. Existing methods address the problem by requiring a fixed-sized sliding window which, given the varying lengths of events and sampling rates, could overlook important events. In this work, we develop probability models for finding the significance of an arbitrary-sized sliding window, and use these probabilities to find areas of significance. In addition, we present our analyses of major surveys archived at the Time Series Center, part of the Initiative in Innovative Computing at Harvard University. We applied our method to the time series data in order to discover events such as microlensing or any non-periodic events in the MACHO, OGLE and TAOS surveys. The analysis shows that the method is an effective tool for filtering out nearly 99% of noisy and uninteresting time series from a large set of data, while still providing full recovery of all known variable events (microlensing, blue star events, supernovae, etc.). Furthermore, due to its efficiency, this method can be performed on-the-fly and will be used to analyze upcoming surveys, such as Pan-STARRS.
Nonlinear Time Series Analysis via Neural Networks
NASA Astrophysics Data System (ADS)
Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin
This article deals with time series analysis based on neural networks, aimed at effective pattern recognition in the forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)]. Our goal is to find and recognize important patterns which repeatedly appear in the market history, and to adapt our trading system's behaviour based on them.
Classification of time series patterns from complex dynamic systems
Schryver, J.C.; Rao, N.
1998-07-01
An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.
Delay Differential Analysis of Time Series
Lainscsek, Claudia; Sejnowski, Terrence J.
2015-01-01
Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. Together, this time-domain toolbox provides higher temporal resolution, increased frequency and phase coupling information, and it allows an easy and straightforward implementation of higher-order spectra across time
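The uniform delay embedding that the functional embeddings above generalize can be written in one short function (a standard construction, shown here as a generic sketch):

```python
def delay_embed(x, dim, tau):
    """Uniform time-delay embedding: map the scalar series x to vectors
    (x_t, x_{t-tau}, ..., x_{t-(dim-1)*tau}).  Nonuniform embeddings
    replace the fixed tau with a list of distinct delays; derivative
    embeddings use successive derivatives (differences) instead."""
    start = (dim - 1) * tau
    return [tuple(x[t - k * tau] for k in range(dim))
            for t in range(start, len(x))]
```

DDE models are then fitted on such embedded coordinates, with the delays chosen per candidate model rather than uniformly.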
Remote Sensing Time Series Product Tool
NASA Technical Reports Server (NTRS)
Predos, Don; Ryan, Robert E.; Ross, Kenton W.
2006-01-01
The TSPT (Time Series Product Tool) software was custom-designed for NASA to rapidly create and display single-band and band-combination time series, such as NDVI (Normalized Difference Vegetation Index) images, for wide-area crop surveillance and for other time-critical applications. The TSPT, developed in MATLAB, allows users to create and display various MODIS (Moderate Resolution Imaging Spectroradiometer) or simulated VIIRS (Visible/Infrared Imager Radiometer Suite) products as single images, as time series plots at a selected location, or as temporally processed image videos. Manually creating these types of products is extremely labor intensive; the TSPT makes the process simple and efficient. MODIS is ideal for monitoring large crop areas because of its wide swath (2330 km), its relatively small ground sample distance (250 m), and its high temporal revisit time (twice daily). Furthermore, because MODIS imagery is acquired daily, rapid changes in vegetative health can potentially be detected. The new TSPT technology provides users with the ability to temporally process high-revisit-rate satellite imagery, such as that acquired from MODIS and from its successor, the VIIRS. The TSPT features the important capability of fusing data from both MODIS instruments onboard the Terra and Aqua satellites, which drastically improves cloud statistics. With the TSPT, MODIS metadata are used to find and optionally remove bad and suspect data. Noise removal and temporal processing techniques allow users to create low-noise time series plots and image videos and to select settings and thresholds that tailor particular output products. The TSPT GUI (graphical user interface) provides an interactive environment for crafting what-if scenarios by enabling a user to repeat product generation using different settings and thresholds. The TSPT Application Programming Interface provides more fine-tuned control of product generation, allowing experienced
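The NDVI product the tool builds is a simple band combination; a per-pixel sketch (not TSPT's MATLAB implementation, and with assumed function names) is:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectances: (NIR - Red) / (NIR + Red), in [-1, 1]."""
    return (nir - red) / (nir + red)

def ndvi_series(nir_series, red_series):
    """Per-date NDVI from paired NIR and red reflectance time series,
    passing through dates flagged as bad (None), analogous to TSPT's
    use of MODIS metadata to drop suspect data."""
    return [None if (n is None or r is None) else ndvi(n, r)
            for n, r in zip(nir_series, red_series)]
```

Healthy vegetation reflects strongly in NIR and absorbs red, so declining NDVI through a season can flag crop stress.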
Algorithm for Compressing Time-Series Data
NASA Technical Reports Server (NTRS)
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
Modelling population change from time series data
Barker, R.J.; Sauer, J.R.
1992-01-01
Information on change in population size over time is among the most basic inputs for population management. Unfortunately, population changes are generally difficult to identify, and once identified, difficult to explain. Sources of variation (patterns) in population data include: changes in environment that affect carrying capacity and produce trends, autocorrelative processes, irregular environmentally induced perturbations, and stochasticity arising from population processes. In addition, populations are almost never censused, and many surveys (e.g., the North American Breeding Bird Survey) produce multiple, incomplete time series of population indices, providing further sampling complications. We suggest that each source of pattern should be used to address specific hypotheses regarding population change, but that failure to correctly model each source can lead to false conclusions about the dynamics of populations. We consider hypothesis tests based on each source of pattern, and the effects of autocorrelated observations and sampling error. We identify important constraints on analyses of time series that limit their use in identifying underlying relationships.
Time series regression studies in environmental epidemiology
Bhaskaran, Krishnan; Gasparrini, Antonio; Hajat, Shakoor; Smeeth, Liam; Armstrong, Ben
2013-01-01
Time series regression studies have been widely used in environmental epidemiology, notably in investigating the short-term associations between exposures such as air pollution, weather variables or pollen, and health outcomes such as mortality, myocardial infarction or disease-specific hospital admissions. Typically, for both exposure and outcome, data are available at regular time intervals (e.g. daily pollution levels and daily mortality counts) and the aim is to explore short-term associations between them. In this article, we describe the general features of time series data, and we outline the analysis process, beginning with descriptive analysis, then focusing on issues in time series regression that differ from other regression methods: modelling short-term fluctuations in the presence of seasonal and long-term patterns, dealing with time varying confounding factors and modelling delayed (‘lagged’) associations between exposure and outcome. We finish with advice on model checking and sensitivity analysis, and some common extensions to the basic model. PMID:23760528
Sliced Inverse Regression for Time Series Analysis
NASA Astrophysics Data System (ADS)
Chen, Li-Sue
1995-11-01
In this thesis, general nonlinear models for time series data are considered. A basic form is x_t = f(β_1^T X_{t-1}, β_2^T X_{t-1}, ..., β_k^T X_{t-1}, ε_t), where x_t is the observed time series, X_{t-1} is the vector of the first d time lags (x_{t-1}, x_{t-2}, ..., x_{t-d}), f is an unknown function, the β_i are unknown vectors, and the ε_t are independently distributed. Special cases include AR and TAR models. We investigate the feasibility of applying SIR/PHD (Li 1990, 1991) (the sliced inverse regression and principal Hessian direction methods) to estimate the β_i. PCA (principal component analysis) is brought in to check one critical condition for SIR/PHD. Through simulation and a study of three well-known data sets (the Canadian lynx, the U.S. unemployment rate and the sunspot numbers), we demonstrate how SIR/PHD can effectively retrieve the interesting low-dimensional structures of time series data.
Time-Series Analysis: A Cautionary Tale
NASA Technical Reports Server (NTRS)
Damadeo, Robert
2015-01-01
Time-series analysis has often been a useful tool in atmospheric science for deriving long-term trends in various atmospherically important parameters (e.g., temperature or the concentration of trace gas species). In particular, time-series analysis has been repeatedly applied to satellite datasets in order to derive the long-term trends in stratospheric ozone, which is a critical atmospheric constituent. However, many of the potential pitfalls relating to the non-uniform sampling of the datasets were often ignored and the results presented by the scientific community have been unknowingly biased. A newly developed and more robust application of this technique is applied to the Stratospheric Aerosol and Gas Experiment (SAGE) II version 7.0 ozone dataset and the previous biases and newly derived trends are presented.
Time Series Analysis Using Geometric Template Matching.
Frank, Jordan; Mannor, Shie; Pineau, Joelle; Precup, Doina
2013-03-01
We present a novel framework for analyzing univariate time series data. At the heart of the approach is a versatile algorithm for measuring the similarity of two segments of time series called geometric template matching (GeTeM). First, we use GeTeM to compute a similarity measure for clustering and nearest-neighbor classification. Next, we present a semi-supervised learning algorithm that uses the similarity measure with hierarchical clustering in order to improve classification performance when unlabeled training data are available. Finally, we present a boosting framework called TDEBOOST, which uses an ensemble of GeTeM classifiers. TDEBOOST augments the traditional boosting approach with an additional step in which the features used as inputs to the classifier are adapted at each step to improve the training error. We empirically evaluate the proposed approaches on several datasets, such as accelerometer data collected from wearable sensors and ECG data. PMID:22641699
Univariate time series forecasting algorithm validation
NASA Astrophysics Data System (ADS)
Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan
2014-12-01
Forecasting is a complex process which requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts. Automating the process with an algorithm can act as a bridge between them. An algorithm is a well-defined rule for solving a problem. In this study a univariate time series forecasting algorithm was developed in JAVA and validated using SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (i.e. Moving Average, Decomposition, Exponential Smoothing, Time Series Regressions and ARIMA), and a recent forecasting process (data partition, several error measures, recursive evaluation, etc.) were employed. The results of the algorithm tally with the results of SPSS and Excel. This algorithm will benefit not just forecasters but also end users who lack in-depth knowledge of the forecasting process.
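A minimal sketch of one of the listed techniques, a moving-average forecaster with MAE and MAPE as error measures; the study's JAVA implementation is not reproduced, and the names and data below are illustrative:

```python
import numpy as np

def moving_average_forecast(y, window):
    # one-step-ahead forecast: mean of the last `window` observations
    y = np.asarray(y, float)
    return np.array([y[i - window:i].mean() for i in range(window, len(y))])

def mae(actual, forecast):
    # mean absolute error
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    # mean absolute percentage error
    return 100 * np.mean(np.abs((actual - forecast) / actual))

y = np.array([10., 12., 11., 13., 12., 14., 13., 15.])
f = moving_average_forecast(y, window=3)
print(f)                      # [11. 12. 12. 13. 13.]
print(round(mae(y[3:], f), 3))  # 1.2
```

Validating such an algorithm then amounts to checking these outputs against the same computation in SPSS or Excel.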
Multifractal Analysis of Sunspot Number Time Series
NASA Astrophysics Data System (ADS)
Kasde, Satish Kumar; Gwal, Ashok Kumar; Sondhiya, Deepak Kumar
2016-07-01
Multifractal-analysis-based approaches have recently been developed as an alternative framework to study the complex dynamical fluctuations in sunspot number data, covering solar cycles 20 to 23 and the ascending phase of the current solar cycle 24. To reveal the multifractal nature, the time series of monthly sunspot numbers are analyzed by singularity spectrum and multiresolution wavelet analysis. Generally, the multifractality in sunspot numbers generates turbulence with the typical characteristics of the anomalous processes governing the magnetosphere and the interior of the Sun. Our analysis shows that the singularity spectrum of the sunspot data has a well-defined Gaussian shape, which establishes that the monthly sunspot number has a multifractal character. The multifractal analysis provides a local and adaptive description of the cyclic components of the sunspot number time series, which are non-stationary and the result of nonlinear processes. Keywords: Sunspot Numbers, Magnetic field, Multifractal analysis and Wavelet Transform Techniques.
Aggregated Indexing of Biomedical Time Series Data
Woodbridge, Jonathan; Mortazavi, Bobak; Sarrafzadeh, Majid; Bui, Alex A.T.
2016-01-01
Remote and wearable medical sensing has the potential to create very large and high dimensional datasets. Medical time series databases must be able to efficiently store, index, and mine these datasets to enable medical professionals to effectively analyze data collected from their patients. Conventional high dimensional indexing methods are a two stage process. First, a superset of the true matches is efficiently extracted from the database. Second, supersets are pruned by comparing each of their objects to the query object and rejecting any objects falling outside a predetermined radius. This pruning stage heavily dominates the computational complexity of most conventional search algorithms. Therefore, indexing algorithms can be significantly improved by reducing the amount of pruning. This paper presents an online algorithm to aggregate biomedical time series data to significantly reduce the search space (index size) without compromising the quality of search results. This algorithm is built on the observation that biomedical time series signals are composed of cyclical and often similar patterns. This algorithm takes in a stream of segments and groups them into highly concentrated collections. Locality Sensitive Hashing (LSH) is used to reduce the overall complexity of the algorithm, allowing it to run online. The output of this aggregation is used to populate an index. The proposed algorithm yields logarithmic growth of the index (with respect to the total number of objects) while keeping sensitivity and specificity simultaneously above 98%. Both memory and runtime complexities of time series search are improved when using aggregated indexes. In addition, data mining tasks, such as clustering, exhibit runtimes that are orders of magnitude faster when run on aggregated indexes.
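The grouping step can be sketched with random-hyperplane LSH, a standard LSH family in which similar segments tend to hash to the same sign pattern; the paper's exact hashing scheme and segment representation are not specified in the abstract, so this is an assumption:

```python
import numpy as np

def lsh_buckets(segments, n_planes=8, seed=0):
    # random-hyperplane LSH: each segment is hashed to the sign pattern
    # of its projections onto random planes, so similar segments tend
    # to land in the same bucket
    rng = np.random.default_rng(seed)
    X = np.asarray(segments, float)
    planes = rng.standard_normal((n_planes, X.shape[1]))
    keys = (X @ planes.T > 0).astype(int)
    buckets = {}
    for i, k in enumerate(map(tuple, keys)):
        buckets.setdefault(k, []).append(i)
    return buckets

t = np.linspace(0, 1, 32)
segs = [np.sin(2 * np.pi * t),           # a cyclic pattern
        np.sin(2 * np.pi * t) + 0.01,    # a near-duplicate of it
        np.cos(7 * np.pi * t)]           # a dissimilar segment
b = lsh_buckets(segs)
# near-duplicate segments will usually share a bucket key
```

An aggregated index would then store one representative per concentrated bucket instead of every raw segment.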
FTSPlot: Fast Time Series Visualization for Large Datasets
Riss, Michael
2014-01-01
The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of ; the visualization itself can be done with a complexity of and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with ms. The current 64-bit implementation theoretically supports datasets with up to bytes, on the x86_64 architecture currently up to bytes are supported, and benchmarks have been conducted with bytes/1 TiB or double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments. PMID:24732865
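The hierarchic level-of-detail idea can be sketched as a min/max pyramid: each coarser level keeps per-block extrema of the level below, so rendering at any zoom touches a bounded number of points. This is a simplified stand-in for FTSPlot's data format, not its actual code:

```python
import numpy as np

def build_lod_pyramid(x, factor=2):
    # level 0: each sample is its own (min, max) pair; every further
    # level stores the per-block (min, max) of the level below,
    # shrinking the data volume by `factor` per level
    cur = np.column_stack([x, x]).astype(float)
    levels = [cur]
    while len(cur) > factor:
        n = len(cur) // factor * factor
        blocks = cur[:n].reshape(-1, factor, 2)
        cur = np.column_stack([blocks[:, :, 0].min(axis=1),
                               blocks[:, :, 1].max(axis=1)])
        levels.append(cur)
    return levels

x = np.arange(16.0)
pyr = build_lod_pyramid(x)
print(len(pyr))    # 4 levels: 16, 8, 4, 2 entries
print(pyr[-1])     # [[ 0.  7.] [ 8. 15.]]
```

A viewer picks the level whose entry count roughly matches the screen width, which is what makes browsing independent of the total data volume.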
Analysis of Polyphonic Musical Time Series
NASA Astrophysics Data System (ADS)
Sommer, Katrin; Weihs, Claus
A general model for pitch tracking of polyphonic musical time series will be introduced. Based on a model of Davy and Godsill (Bayesian harmonic models for musical pitch estimation and analysis, Technical Report 431, Cambridge University Engineering Department, 2002), the different pitches of the musical sound are estimated simultaneously with MCMC methods. Additionally, a preprocessing step is designed to improve the estimation of the fundamental frequencies (A comparative study on polyphonic musical time series using MCMC methods. In C. Preisach et al., editors, Data Analysis, Machine Learning, and Applications, Springer, Berlin, 2008). The preprocessing step compares real audio data with an alphabet constructed from the McGill Master Samples (Opolko and Wapnick, McGill University Master Samples [Compact disc], McGill University, Montreal, 1987), which consists of tones of different instruments. The tones with minimal Itakura-Saito distortion (Gray et al., Transactions on Acoustics, Speech, and Signal Processing ASSP-28(4):367-376, 1980) are chosen as first estimates and as starting points for the MCMC algorithms. Furthermore, the implementation of the alphabet is an approach to the recognition of the instruments generating the musical time series. Results are presented for mixed monophonic data from McGill and for self-recorded polyphonic audio data.
Characterization of noisy symbolic time series.
Kulp, Christopher W; Smith, Suzanne
2011-02-01
The 0-1 test for chaos is a recently developed time series characterization algorithm that can determine whether a system is chaotic or nonchaotic. While the 0-1 test was designed for deterministic series, in real-world measurement situations, noise levels may not be known and the 0-1 test may have difficulty distinguishing between chaos and randomness. In this paper, we couple the 0-1 test for chaos with a test for determinism and apply these tests to noisy symbolic series generated from various model systems. We find that the pairing of the 0-1 test with a test for determinism improves the ability to correctly distinguish between chaos and randomness from a noisy series. Furthermore, we explore the modes of failure for the 0-1 test and the test for determinism so that we can better understand the effectiveness of the two tests to handle various levels of noise. We find that while the tests can handle low noise and high noise situations, moderate levels of noise can lead to inconclusive results from the two tests. PMID:21405890
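A compact sketch of the 0-1 test in its modified mean-square-displacement form (following Gottwald and Melbourne; the parameter choices here are illustrative), applied to the logistic map rather than to the paper's symbolic series:

```python
import numpy as np

def zero_one_test(phi, n_c=20, seed=0):
    # 0-1 test for chaos: K near 1 suggests chaos, K near 0 regular motion
    rng = np.random.default_rng(seed)
    phi = np.asarray(phi, float)
    N = len(phi)
    ncut = N // 10
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        j = np.arange(1, N + 1)
        p = np.cumsum(phi * np.cos(j * c))   # translation variables
        q = np.cumsum(phi * np.sin(j * c))
        ns = np.arange(1, ncut + 1)
        M = np.array([np.mean((p[n:] - p[:-n]) ** 2 +
                              (q[n:] - q[:-n]) ** 2) for n in ns])
        # subtract the oscillatory term (modified mean square displacement)
        D = M - phi.mean() ** 2 * (1 - np.cos(ns * c)) / (1 - np.cos(c))
        Ks.append(np.corrcoef(ns, D)[0, 1])
    return np.median(Ks)

def logistic(r, n, x0=0.3):
    x = np.empty(n); x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1 - x[i])
    return x

# r = 4 is chaotic (K typically near 1); r = 3.2 is period-2 (K near 0)
print(round(zero_one_test(logistic(4.0, 2000)), 2))
print(round(zero_one_test(logistic(3.2, 2000)), 2))
```

The paper's point is that with noise added, K alone can mislead, motivating the companion test for determinism.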
Spectrophotometric Time Series of η Carinae's Great Eruption
NASA Astrophysics Data System (ADS)
Rest, Armin; Bianco, Federica; Chornock, Ryan; Clocchiatti, Alejandro; James, David; Margheim, Steve; Matheson, Thomas; Prieto, Jose Luis; Smith, Chris; Smith, Nathan; Walborn, Nolan; Welch, Doug; Zenteno, Alfredo
2014-08-01
η Car serves as our most important template for understanding non-SN transients from massive stars in external galaxies. However, until recently, no spectra were available because its historic ``Great Eruption'' (GE) occurred from 1838-1858, before the invention of the astronomical spectrograph, and only visual estimates of its brightness were recorded. Now we can also obtain a spectral sequence of the eruption through the light echoes we discovered, which will be of great value since spectra are our most important tool for inferring physical properties of extragalactic transients. Subsequent spectroscopic follow-up revealed that its outburst was most similar to those of G-type supergiants, rather than the reported LBV outburst spectral types of F-type (or earlier). These differences between the GE and the extragalactic transients presumed to be its analogues raise questions about traditional scenarios for the outburst. We propose to obtain a spectrophotometric time series of the GE from different directions, allowing the original eruption of η Car to be studied as a function of time as well as latitude, something only possible with light echoes. This unique detailed spectroscopic study of the light echoes of η Car will help us understand (episodic) mass-loss in the most massive evolved stars and their connection to the most energetic core-collapse SNe.
Evolutionary factor analysis of replicated time series.
Motta, Giovanni; Ombao, Hernando
2012-09-01
In this article, we develop a novel method that explains the dynamic structure of multi-channel electroencephalograms (EEGs) recorded from several trials in a motor-visual task experiment. Preliminary analyses of our data suggest two statistical challenges. First, the variance at each channel and cross-covariance between each pair of channels evolve over time. Moreover, the cross-covariance profiles display a common structure across all pairs, and these features consistently appear across all trials. In the light of these features, we develop a novel evolutionary factor model (EFM) for multi-channel EEG data that systematically integrates information across replicated trials and allows for smoothly time-varying factor loadings. The individual EEGs series share common features across trials, thus, suggesting the need to pool information across trials, which motivates the use of the EFM for replicated time series. We explain the common co-movements of EEG signals through the existence of a small number of common factors. These latent factors are primarily responsible for processing the visual-motor task which, through the loadings, drive the behavior of the signals observed at different channels. The estimation of the time-varying loadings is based on the spectral decomposition of the estimated time-varying covariance matrix. PMID:22364516
Homogenization of precipitation time series with ACMANT
NASA Astrophysics Data System (ADS)
Domonkos, Peter
2015-10-01
A new method for the time series homogenization of observed precipitation (PP) totals is presented; this method is a unit of the ACMANT software package. ACMANT is a relative homogenization method; a minimum of four time series with adequate spatial correlations is necessary for its use. The detection of inhomogeneities (IHs) is performed by fitting an optimal step function, while the calculation of adjustment terms is based on the minimization of the residual variance in homogenized datasets. Together with the presentation of PP homogenization with ACMANT, some peculiarities of PP homogenization, for instance, the frequency and seasonal variation of IHs in observed PP data and their relation to the performance of homogenization methods, are discussed. In climatic regions with snowy winters, ACMANT distinguishes two seasons, namely a rainy season and a snowy season, and the seasonal IHs are searched for with bivariate detection. ACMANT is a fully automatic method, is freely downloadable from the internet, and treats either daily or monthly input. Series of observed data in the input dataset may cover different periods, and the occurrence of data gaps is allowed. False zero values instead of the missing-data code, as well as physical outliers, should be corrected before running ACMANT. Efficiency tests indicate that ACMANT belongs among the best performing methods, although further comparative tests of automatic homogenization methods are needed to confirm or reject this finding.
Fractal fluctuations in cardiac time series
NASA Technical Reports Server (NTRS)
West, B. J.; Zhang, R.; Sanders, A. W.; Miniyar, S.; Zuckerman, J. H.; Levine, B. D.; Blomqvist, C. G. (Principal Investigator)
1999-01-01
Human heart rate, controlled by complex feedback mechanisms, is a vital index of systemic circulation. However, it has been shown that beat-to-beat values of heart rate fluctuate continually over a wide range of time scales. Herein we use the relative dispersion, the ratio of the standard deviation to the mean, to show, by systematically aggregating the data, that the correlation in the beat-to-beat cardiac time series is a modulated inverse power law. This scaling property indicates the existence of long-time memory in the underlying cardiac control process and supports the conclusion that heart rate variability is a temporal fractal. We argue that the cardiac control system has allometric properties that enable it to respond to a dynamical environment through scaling.
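The aggregation procedure can be sketched as follows: bin the series at increasing sizes, track the relative dispersion of the bin means, and read the scaling exponent off the log-log slope. This is a minimal numpy sketch on synthetic uncorrelated data, not the cardiac data:

```python
import numpy as np

def relative_dispersion(x, sizes):
    # aggregate the series at each bin size m and compute std/mean of the
    # bin means; a power law RD(m) ~ m^(H-1) indicates fractal scaling
    x = np.asarray(x, float)
    rd = []
    for m in sizes:
        n = len(x) // m * m
        means = x[:n].reshape(-1, m).mean(axis=1)
        rd.append(means.std() / means.mean())
    return np.array(rd)

rng = np.random.default_rng(1)
x = rng.normal(60, 5, 4096)          # stand-in "heart rate" series, no memory
sizes = np.array([1, 2, 4, 8, 16, 32])
rd = relative_dispersion(x, sizes)
H = 1 + np.polyfit(np.log(sizes), np.log(rd), 1)[0]
print(round(H, 2))  # H should come out near 0.5 for uncorrelated noise
```

Long-time memory of the kind reported in the abstract would show up as H clearly above 0.5.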
Time series analyses of global change data.
Lane, L J; Nichols, M H; Osborn, H B
1994-01-01
The hypothesis that statistical analyses of historical time series data can be used to separate the influences of natural variations from anthropogenic sources on global climate change is tested. Point, regional, national, and global temperature data are analyzed. Trend analyses for the period 1901-1987 suggest mean annual temperatures increased (in degrees C per century) globally at the rate of about 0.5, in the USA at about 0.3, in the south-western USA desert region at about 1.2, and at the Walnut Gulch Experimental Watershed in south-eastern Arizona at about 0.8. However, the rates of temperature change are not constant but vary within the 87-year period. Serial correlation and spectral density analysis of the temperature time series showed weak periodicities at various frequencies. The only common periodicity among the temperature series is an apparent cycle of about 43 years. The temperature time series were correlated with the Wolf sunspot index, atmospheric CO2 concentrations interpolated from the Siple ice core data, and atmospheric CO2 concentration data from Mauna Loa measurements. Correlation analysis of temperature data with concurrent data on atmospheric CO2 concentrations and the Wolf sunspot index supports previously reported significant correlation over the 1901-1987 period. Correlation analysis between temperature, atmospheric CO2 concentration, and the Wolf sunspot index for the shorter period, 1958-1987, when continuous Mauna Loa CO2 data are available, suggests significant correlation between global warming and atmospheric CO2 concentrations but no significant correlation between global warming and the Wolf sunspot index. This may be because the Wolf sunspot index apparently increased from 1901 until about 1960 and then decreased thereafter, while global warming apparently continued to increase through 1987. Correlation of sunspot activity with global warming may be spurious, but additional analyses are required to test this hypothesis.
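The trend and correlation analyses can be sketched with ordinary least squares on a synthetic record; the numbers below are illustrative, not the paper's data:

```python
import numpy as np

def trend_per_century(years, temps):
    # least-squares slope in degrees C per year, scaled to per century
    slope = np.polyfit(years, temps, 1)[0]
    return 100 * slope

# synthetic record: 0.5 C/century warming plus observational noise
rng = np.random.default_rng(0)
years = np.arange(1901, 1988)
temps = 14.0 + 0.005 * (years - 1901) + rng.normal(0, 0.05, len(years))
print(round(trend_per_century(years, temps), 2))  # close to 0.5

# correlation of temperature with a concurrent forcing series
co2 = 300 + 0.3 * (years - 1901)   # hypothetical CO2 ramp, for illustration
r = np.corrcoef(temps, co2)[0, 1]
```

A high r here only demonstrates co-movement over the period, which is why the abstract is careful to flag possibly spurious correlations.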
Time Series Photometry of KZ Lacertae
NASA Astrophysics Data System (ADS)
Joner, Michael D.
2016-01-01
We present BVRI time series photometry of the high amplitude delta Scuti star KZ Lacertae secured using the 0.9-meter telescope located at the Brigham Young University West Mountain Observatory. In addition to the multicolor light curves that are presented, the V data from the last six years of observations are used to plot an O-C diagram in order to determine the ephemeris and evaluate evidence for period change. We wish to thank the Brigham Young University College of Physical and Mathematical Sciences as well as the Department of Physics and Astronomy for their continued support of the research activities at the West Mountain Observatory.
Time series analysis of temporal networks
NASA Astrophysics Data System (ADS)
Sikdar, Sandipan; Ganguly, Niloy; Mukherjee, Animesh
2016-01-01
A common but important feature of all real-world networks is that they are temporal in nature, i.e., the network structure changes over time. Due to this dynamic nature, it becomes difficult to propose suitable growth models that can explain the various important characteristic properties of these networks. In fact, in many application-oriented studies, only knowing these properties is sufficient. For instance, if one wishes to launch a targeted attack on a network, this can be done even without knowledge of the full network structure; an estimate of some of the properties is sufficient to launch the attack. In this paper, we show that even if the network structure at a future time point is not available, one can still manage to estimate its properties. We propose a novel method to map a temporal network to a set of time series instances, analyze them, and, using a standard forecast model of time series, try to predict the properties of a temporal network at a later time instance. To this aim, we consider eight properties, such as number of active nodes, average degree, and clustering coefficient, and apply our prediction framework to them. We mainly focus on the temporal network of human face-to-face contacts and observe that it represents a stochastic process with memory that can be modeled as Auto-Regressive Integrated Moving Average (ARIMA). We use cross-validation techniques to find the percentage accuracy of our predictions. An important observation is that the frequency-domain properties of the time series obtained from spectrogram analysis could be used to refine the prediction framework by identifying beforehand the cases where the error in prediction is likely to be high. This leads to an improvement of 7.96% (for error level ≤20%) in prediction accuracy on average across all datasets. As an application we show how such a prediction scheme can be used to launch targeted attacks on temporal networks.
Time series modelling of surface pressure data
NASA Astrophysics Data System (ADS)
Al-Awadhi, Shafeeqah; Jolliffe, Ian
1998-03-01
In this paper we examine time series modelling of surface pressure data, as measured by a barograph, at Herne Bay, England, during the years 1981-1989. Autoregressive moving average (ARMA) models have been popular in many fields over the past 20 years, although applications in climatology have been rather less widespread than in some disciplines. Some recent examples are Milionis and Davies (Int. J. Climatol., 14, 569-579) and Seleshi et al. (Int. J. Climatol., 14, 911-923). We fit standard ARMA models to the pressure data separately for each of six 2-month natural seasons. Differences between the best fitting models for different seasons are discussed. Barograph data are recorded continuously, whereas ARMA models are fitted to discretely recorded data. The effect of different spacings between the fitted data on the models chosen is discussed briefly. Often, ARMA models can give a parsimonious and interpretable representation of a time series, but for many series the assumptions underlying such models are not fully satisfied, and more complex models may be considered. A specific feature of surface pressure data in the UK is that its behaviour is different at high and at low pressures: day-to-day changes are typically larger at low pressure levels than at higher levels. This means that standard assumptions used in fitting ARMA models are not valid, and two ways of overcoming this problem are investigated. Transformation of the data to better satisfy the usual assumptions is considered, as is the use of non-linear, specifically threshold autoregressive (TAR), models.
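The AR part of such models can be fitted via the Yule-Walker equations; a minimal sketch on a simulated AR(1) series (not the Herne Bay data):

```python
import numpy as np

def yule_walker(x, order):
    # fit AR(order) coefficients from sample autocovariances:
    # solve R a = r, where R is the Toeplitz autocovariance matrix
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    acov = np.array([np.sum(x[:n - k] * x[k:]) / n for k in range(order + 1)])
    R = np.array([[acov[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, acov[1:order + 1])

# simulate AR(1): x_t = 0.7 x_{t-1} + e_t, then recover the coefficient
rng = np.random.default_rng(2)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for t in range(1, len(e)):
    x[t] = 0.7 * x[t - 1] + e[t]
print(round(yule_walker(x, 1)[0], 1))  # recovers approximately 0.7
```

A TAR model, as considered in the paper, would fit separate AR coefficients above and below a pressure threshold instead of one global set.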
Ensemble vs. time averages in financial time series analysis
NASA Astrophysics Data System (ADS)
Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.
2012-12-01
Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding-interval technique, which assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble-average approaches will yield new insight into the study of financial market dynamics.
Singular spectrum analysis for time series with missing data
Schoellhamer, D.H.
2001-01-01
Geophysical time series often contain missing data, which prevents analysis with many signal processing and multivariate tools. A modification of singular spectrum analysis for time series with missing data is developed and successfully tested with synthetic and actual incomplete time series of suspended-sediment concentration from San Francisco Bay. This method also can be used to low pass filter incomplete time series.
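Basic SSA (embedding, SVD, diagonal averaging) can be sketched as below; Schoellhamer's modification for missing data is not reproduced, so this sketch assumes a complete series:

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    # singular spectrum analysis: embed the series into a trajectory
    # matrix, keep the leading SVD components, and diagonal-average
    # back to a 1-D series (here used as a smoother)
    x = np.asarray(x, float)
    N = len(x)
    K = N - window + 1
    X = np.column_stack([x[i:i + window] for i in range(K)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    rec = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(window):          # anti-diagonal averaging
        for j in range(K):
            rec[i + j] += Xr[i, j]
            cnt[i + j] += 1
    return rec / cnt

t = np.arange(200)
signal = np.sin(2 * np.pi * t / 20)
noisy = signal + np.random.default_rng(3).normal(0, 0.3, len(t))
smooth = ssa_reconstruct(noisy, window=40, n_components=2)
# the rank-2 reconstruction recovers the oscillation with reduced noise
```

The missing-data variant in the paper effectively performs the same decomposition while skipping gap positions, which is what standard SVD-based SSA cannot do directly.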
Nonparametric, nonnegative deconvolution of large time series
NASA Astrophysics Data System (ADS)
Cirpka, O. A.
2006-12-01
There is a long tradition of characterizing hydrologic systems by linear models, in which the response of the system to a time-varying stimulus is computed by convolution of a system-specific transfer function with the input signal. Despite its limitations, the transfer-function concept has been shown valuable for many situations such as the precipitation/run-off relationships of catchments and solute transport in agricultural soils and aquifers. A practical difficulty lies in the identification of the transfer function. A common approach is to fit a parametric function, enforcing a particular shape of the transfer function, which may be in contradiction to the real behavior (e.g., multimodal transfer functions, long tails, etc.). In our nonparametric deconvolution, the transfer function is assumed an auto-correlated random time function, which is conditioned on the data by a Bayesian approach. Nonnegativity, which is a vital constraint for solute-transport applications, is enforced by the method of Lagrange multipliers. This makes the inverse problem nonlinear. In nonparametric deconvolution, identifying the auto-correlation parameters is crucial. Enforcing too much smoothness prohibits the identification of important features, whereas insufficient smoothing leads to physically meaningless transfer functions, mapping noise components in the two data series onto each other. We identify optimal smoothness parameters by the expectation-maximization method, which requires the repeated generation of many conditional realizations. The overall approach, however, is still significantly faster than Markov-Chain Monte-Carlo methods presented recently. We apply our approach to electric-conductivity time series measured in a river and monitoring wells in the adjacent aquifer. The data cover 1.5 years with a temporal resolution of 1h. The identified transfer functions have lengths of up to 60 days, making up 1440 parameters. We believe that nonparametric deconvolution is an
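The nonnegativity-constrained identification can be sketched with nonnegative least squares (scipy's `nnls`) on a discrete convolution model; this is a simplified stand-in for the paper's Bayesian, auto-correlated formulation, and the transfer function below is hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_nonneg(inp, out, m):
    # recover a length-m nonnegative transfer function g with
    # out[t] = sum_k g[k] * inp[t-k], via nonnegative least squares
    n = len(out)
    A = np.zeros((n, m))
    for k in range(m):
        A[k:, k] = inp[:n - k]      # convolution as a matrix product
    g, _ = nnls(A, out)
    return g

# synthetic example: a known multimodal transfer function
g_true = np.array([0.0, 0.5, 0.3, 0.0, 0.2, 0.0])
rng = np.random.default_rng(4)
inp = rng.uniform(0, 1, 200)
out = np.convolve(inp, g_true)[:200]
g_est = deconvolve_nonneg(inp, out, len(g_true))
print(np.round(g_est, 2))  # recovers g_true in this noise-free case
```

Note that this plain NNLS has no smoothness prior; the paper's point is precisely that auto-correlation parameters must be tuned so that noise in the two series is not mapped onto spurious transfer-function structure.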
Assessing burn severity using satellite time series
NASA Astrophysics Data System (ADS)
Veraverbeke, Sander; Lhermitte, Stefaan; Verstraeten, Willem; Goossens, Rudi
2010-05-01
In this study a multi-temporal differenced Normalized Burn Ratio (dNBRMT) is presented to assess burn severity of the 2007 Peloponnese (Greece) wildfires. 8-day composites were created using the daily near infrared (NIR) and mid infrared (MIR) reflectance products of the Moderate Resolution Imaging Spectroradiometer (MODIS). Prior to the calculation of the dNBRMT a pixel-based control plot selection procedure was initiated for each burned pixel based on time series similarity of the pre-fire year 2006 to estimate the spatio-temporal NBR dynamics in the case that no fire event would have occurred. The dNBRMT is defined as the one-year post-fire integrated difference between the NBR values of the control and focal pixels. Results reveal the temporal dependency of the absolute values of bi-temporal dNBR maps as the mean temporal standard deviation of the one-year post-fire bi-temporal dNBR time series equaled 0.14 (standard deviation of 0.04). The dNBRMT's integration of temporal variability into one value potentially enhances the comparability of fires across space and time. In addition, the dNBRMT is robust to random noise thanks to the averaging effect. The dNBRMT, based on coarse resolution imagery with high temporal frequency, has the potential to become either a valuable complement to fine resolution Landsat dNBR mapping or an imperative option for assessing burn severity at a continental to global scale.
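The underlying index is simple to compute; below is a sketch of NBR, the bi-temporal dNBR, and a mean-difference stand-in for the multi-temporal dNBRMT (the paper's control-pixel selection procedure is not reproduced):

```python
import numpy as np

def nbr(nir, mir):
    # Normalized Burn Ratio from near- and mid-infrared reflectance
    nir = np.asarray(nir, float)
    mir = np.asarray(mir, float)
    return (nir - mir) / (nir + mir)

def dnbr(nir_pre, mir_pre, nir_post, mir_post):
    # bi-temporal burn severity: pre-fire NBR minus post-fire NBR
    return nbr(nir_pre, mir_pre) - nbr(nir_post, mir_post)

def dnbr_mt(nbr_control, nbr_focal):
    # multi-temporal variant: mean post-fire difference between a
    # control (unburned) pixel series and the burned (focal) series
    return np.mean(np.asarray(nbr_control) - np.asarray(nbr_focal))

# healthy vegetation: high NIR, low MIR; burned surface: the reverse
print(round(dnbr(0.5, 0.1, 0.2, 0.3), 2))  # 0.87
```

Integrating over a full post-fire year, as dnbr_mt does, is what removes the dependence on the particular acquisition date that affects bi-temporal maps.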
A New SBUV Ozone Profile Time Series
NASA Technical Reports Server (NTRS)
McPeters, Richard
2011-01-01
Under NASA's MEaSUREs program for creating long term multi-instrument data sets, our group at Goddard has re-processed ozone profile data from a series of SBUV instruments. We have processed data from the Nimbus 7 SBUV instrument (1979-1990) and data from SBUV/2 instruments on NOAA-9 (1985-1998), NOAA-11 (1989-1995), NOAA-16 (2001-2010), NOAA-17 (2002-2010), and NOAA-18 (2005-2010). This reprocessing uses the version 8 ozone profile algorithm but now uses the Brion, Daumont, and Malicet (BDM) ozone cross sections instead of the Bass and Paur cross sections. The new cross sections have much better resolution, an extended wavelength range, and a more consistent temperature dependence. The re-processing also uses an improved cloud height climatology based on the Raman cloud retrievals of OMI. Finally, the instrument-to-instrument calibration is set using matched scenes so that ozone diurnal variation in the upper stratosphere does not alias into the ozone trends. Where there is no instrument overlap, SAGE and MLS are used to estimate calibration offsets. Preliminary analysis shows a more coherent time series as a function of altitude. The net effect on profile total column ozone is on average an absolute reduction of about one percent. Comparisons with ground-based systems are significantly better at high latitudes.
Periodograms for multiband astronomical time series
NASA Astrophysics Data System (ADS)
Ivezic, Z.; VanderPlas, J. T.
2016-05-01
We summarize the multiband periodogram, a general extension of the well-known Lomb-Scargle approach for detecting periodic signals in time-domain data, developed by VanderPlas & Ivezic (2015). A Python implementation of this method is available on GitHub. The multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES, and LSST), and can treat non-uniform sampling and heteroscedastic errors. The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization, which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. We use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature, and find that this method will be able to efficiently determine the correct period in the majority of LSST's bright RR Lyrae stars with as little as six months of LSST data.
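The single-band Lomb-Scargle ingredient can be sketched directly from its classic definition; the multiband extension, with its shared period and phase plus Tikhonov regularization, is not reproduced here:

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    # classic single-band Lomb-Scargle power for unevenly sampled data
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        # time offset tau makes the sine and cosine terms orthogonal
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 100, 300))   # random (uneven) sampling times
y = np.sin(2 * np.pi * 0.2 * t) + rng.normal(0, 0.3, len(t))
freqs = np.linspace(0.05, 0.45, 401)
best = freqs[np.argmax(lomb_scargle(t, y, freqs))]
print(round(best, 2))  # recovers the injected 0.2 frequency
```

The multiband method fits one such model per band but regularizes all bands toward a common base model, which is what suppresses the aliases that plague per-band fits.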
Time Series Analysis of the Blazar OJ 287
NASA Astrophysics Data System (ADS)
Gamel, Ellen; Ryle, W. T.; Carini, M. T.
2013-06-01
Blazars are a subset of active galactic nuclei (AGN) where the light is viewed along the jet of radiation produced by the central supermassive black hole. These very luminous objects vary in brightness and are associated with the cores of distant galaxies. The blazar, OJ 287, has been monitored and its brightness tracked over time. From these light curves the relationship between the characteristic “break frequency” and black hole mass can be determined through the use of power density spectra. In order to obtain a well-sampled light curve, this blazar will be observed at a wide range of timescales. Long time scales will be obtained using archived light curves from published literature. Medium time scales were obtained through a combination of data provided by Western Kentucky University and data collected at The Bank of Kentucky Observatory. Short time scales were achieved via a single night of observation at the 72” Perkins Telescope at Lowell Observatory in Flagstaff, AZ. Using time series analysis, we present a revised mass estimate for the super massive black hole of OJ 287. This object is of particular interest because it may harbor a binary black hole at its center.
Normalizing the causality between time series.
Liang, X San
2015-08-01
Recently, a rigorous yet concise formula was derived to evaluate information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing a Lyapunov exponent-like, one-dimensional phase-space stretching rate and a noise-to-signal ratio from the rate of information flow in the balance of the marginal entropy evolution of the flow recipient. It is verified with autoregressive models and applied to a real financial analysis problem. An unusually strong one-way causality is identified from IBM (International Business Machines Corporation) to GE (General Electric Company) in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a giant for the mainframe computer market. PMID:26382363
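A sketch of the maximum-likelihood information-flow estimator from this line of work; the formula follows Liang's linear estimator for two series, and the normalization step described in the abstract is omitted:

```python
import numpy as np

def liang_info_flow(x1, x2, dt=1.0):
    # maximum-likelihood estimate of the rate of information flow from
    # series x2 to series x1 (Liang's linear estimator), unnormalized:
    # T(2->1) = (C11*C12*C2,d1 - C12^2*C1,d1) / (C11^2*C22 - C11*C12^2)
    x1 = np.asarray(x1, float)
    x2 = np.asarray(x2, float)
    d1 = (x1[1:] - x1[:-1]) / dt          # Euler-forward difference of x1
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(x1, x2)
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d = np.cov(x1, d1)[0, 1]
    c2d = np.cov(x2, d1)[0, 1]
    return (c11 * c12 * c2d - c12 ** 2 * c1d) / (c11 ** 2 * c22 - c11 * c12 ** 2)

# x2 drives x1 but not vice versa, so |T(2->1)| should dominate |T(1->2)|
rng = np.random.default_rng(6)
n = 50000
x1 = np.zeros(n)
x2 = np.zeros(n)
for t in range(n - 1):
    x1[t + 1] = 0.5 * x1[t] + 0.4 * x2[t] + rng.standard_normal()
    x2[t + 1] = 0.6 * x2[t] + rng.standard_normal()
t21 = abs(liang_info_flow(x1, x2))
t12 = abs(liang_info_flow(x2, x1))
```

Normalizing t21 against the stretching-rate and noise terms, as the paper does, is what turns this raw rate into a comparable measure of causal importance.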
Scaling laws from geomagnetic time series
Voros, Z.; Kovacs, P.; Juhasz, A.; Kormendi, A.; Green, A.W.
1998-01-01
The notion of extended self-similarity (ESS) is applied here to the X-component time series of geomagnetic field fluctuations. Plotting nth-order structure functions against the fourth-order structure function, we show that low-frequency geomagnetic fluctuations up to the order n = 10 follow the same scaling laws as MHD fluctuations in the solar wind; however, for higher frequencies (f > 1/5 [h]) a clear departure from the expected universality is observed for n > 6. ESS does not allow us to make an unambiguous statement about the non-triviality of scaling laws in "geomagnetic" turbulence. However, we suggest using higher-order moments as promising diagnostic tools for mapping the contributions of various remote magnetospheric sources to local observatory data. Copyright 1998 by the American Geophysical Union.
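Structure functions and the ESS plot can be sketched as below on a synthetic random walk, not geomagnetic data; for Gaussian increments the relative exponent ζ₂/ζ₄ is 0.5:

```python
import numpy as np

def structure_function(x, order, lags):
    # S_n(tau) = <|x(t + tau) - x(t)|^n> for each lag tau
    x = np.asarray(x, float)
    return np.array([np.mean(np.abs(x[l:] - x[:-l]) ** order) for l in lags])

# extended self-similarity: the slope of log S_n against log S_4
# estimates the relative scaling exponent zeta_n / zeta_4
rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(100000))       # Brownian-like walk
lags = np.unique(np.logspace(0, 3, 20).astype(int))
s2 = structure_function(x, 2, lags)
s4 = structure_function(x, 4, lags)
rel = np.polyfit(np.log(s4), np.log(s2), 1)[0]
print(round(rel, 2))  # theoretical value for Gaussian increments: 0.5
```

Departures of the measured ζₙ/ζ₄ from the Gaussian prediction at high n are what the abstract interprets as possible signatures of intermittent, multi-source turbulence.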
Using entropy to cut complex time series
NASA Astrophysics Data System (ADS)
Mertens, David; Poncela Casasnovas, Julia; Spring, Bonnie; Amaral, L. A. N.
2013-03-01
Using techniques from statistical physics, physicists have modeled and analyzed human phenomena ranging from academic citation rates to disease spreading to vehicular traffic jams. The last decade's explosion of digital information and the growing ubiquity of smartphones have led to a wealth of human self-reported data. This wealth of data comes at a cost, including non-uniform sampling and statistically significant but physically insignificant correlations. In this talk I present our work using entropy to identify stationary sub-sequences of self-reported human weight from a weight management web site. Our entropic approach, inspired by the Infomap network community detection algorithm, is far less biased by rare fluctuations than more traditional time series segmentation techniques. Supported by the Howard Hughes Medical Institute.
Spectrophotometric Time Series of η Carinae's Great Eruption
NASA Astrophysics Data System (ADS)
Rest, Armin; Bianco, Federica; Chornock, Ryan; Matheson, Thomas; Prieto, Jose Luis; Smith, Chris; Smith, Nathan; Walborn, Nolan; Welch, Doug
2014-02-01
η Carinae (η Car) serves as our most important template for understanding non-SN transients from massive stars in external galaxies. However, until recently, no spectra were available because its historic ``Great Eruption'' (GE) occurred before the invention of the astronomical spectrograph. Now we can also obtain a spectral sequence of the eruption through echoes, which will be of great value since spectra are our most important tool for inferring physical properties of extragalactic transients. η Car was seen as the second brightest star in the sky during its 1800s GE, but only visual estimates of its brightness were recorded (SF11). In 2011 we discovered several light echoes (LEs) which appear to be from the 1838-1858 η Car eruptions (Rest12_eta). Subsequent spectroscopic follow-up revealed that its outburst spectral type was most similar to those of G-type supergiants, rather than the reported LBV outburst spectral types of F-type (or earlier) (Rest12_eta). These differences between the GE and the extragalactic transients presumed to be its analogues raise questions about traditional scenarios for the outburst. We propose to obtain a spectrophotometric time series of the GE from different directions, allowing the original eruption of η Car to be studied as a function of time as well as latitude. A detailed spectroscopic study of the LEs of η Car would help us understand (episodic) mass loss in the most massive evolved stars and their connection to the most energetic core-collapse SNe.
Periodograms for Multiband Astronomical Time Series
NASA Astrophysics Data System (ADS)
VanderPlas, Jacob T.; Ivezić, Željko
2015-10-01
This paper introduces the multiband periodogram, a general extension of the well-known Lomb-Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb-Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES, and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. This decrease in the effective model complexity is the main reason for improved performance. After a pedagogical development of the formalism of least-squares spectral analysis, which motivates the essential features of the multiband model, we use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature and find that this method will be able to efficiently determine the correct period in the majority of LSST’s bright RR Lyrae stars with as little as six months of LSST data, a vast improvement over the years of data reported to be required by previous studies. A Python implementation of this method, along with code to fully reproduce the results reported here, is available on GitHub.
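The core idea, a single period shared across bands with per-band truncated Fourier terms, can be sketched without the paper's Tikhonov-regularized base model (the authors' own Python implementation is the one referenced on GitHub; this simplified version fits each band's Fourier coefficients independently and is only an illustration of the shared-period construction):

```python
import numpy as np

def multiband_power(t, y, dy, band, periods, nterms=1):
    """Chi-squared-reduction periodogram with the period shared across all
    bands; each band gets its own offset and Fourier amplitudes.  Unlike
    the paper's model, no regularization ties the bands together."""
    ubands = np.unique(band)
    # reference chi^2: per-band weighted-mean model
    chi2_ref = sum(
        np.sum(((y[band == b] - np.average(y[band == b],
                                           weights=dy[band == b]**-2))
                / dy[band == b])**2)
        for b in ubands)
    power = np.empty(len(periods))
    for i, P in enumerate(periods):
        w = 2 * np.pi / P
        chi2 = 0.0
        for b in ubands:
            m = band == b
            cols = [np.ones(m.sum())]            # per-band offset
            for n in range(1, nterms + 1):       # truncated Fourier series
                cols += [np.sin(n * w * t[m]), np.cos(n * w * t[m])]
            A = np.column_stack(cols) / dy[m][:, None]
            yw = y[m] / dy[m]
            coef, *_ = np.linalg.lstsq(A, yw, rcond=None)
            chi2 += np.sum((yw - A @ coef)**2)
        power[i] = 1.0 - chi2 / chi2_ref         # fractional chi^2 reduction
    return power
```

Scanning a period grid and taking the argmax of the returned power recovers the shared period even when each individual band is too sparsely sampled to yield it alone, which is the regime the abstract targets.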
Fast and Flexible Multivariate Time Series Subsequence Search
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Oza, Nikunj C.; Zhu, Qiang; Srivastava, Ashok N.
2010-01-01
Multivariate Time-Series (MTS) are ubiquitous, and are generated in areas as disparate as sensor recordings in aerospace systems, music and video streams, medical monitoring, and financial systems. Domain experts are often interested in searching for interesting multivariate patterns from these MTS databases which often contain several gigabytes of data. Surprisingly, research on MTS search is very limited. Most of the existing work only supports queries with the same length of data, or queries on a fixed set of variables. In this paper, we propose an efficient and flexible subsequence search framework for massive MTS databases, that, for the first time, enables querying on any subset of variables with arbitrary time delays between them. We propose two algorithms to solve this problem (1) a List Based Search (LBS) algorithm which uses sorted lists for indexing, and (2) a R*-tree Based Search (RBS) which uses Minimum Bounding Rectangles (MBR) to organize the subsequences. Both algorithms guarantee that all matching patterns within the specified thresholds will be returned (no false dismissals). The very few false alarms can be removed by a post-processing step. Since our framework is also capable of Univariate Time-Series (UTS) subsequence search, we first demonstrate the efficiency of our algorithms on several UTS datasets previously used in the literature. We follow this up with experiments using two large MTS databases from the aviation domain, each containing several millions of observations. Both these tests show that our algorithms have very high prune rates (>99%) thus needing actual disk access for only less than 1% of the observations. To the best of our knowledge, MTS subsequence search has never been attempted on datasets of the size we have used in this paper.
Timing calibration and spectral cleaning of LOFAR time series data
NASA Astrophysics Data System (ADS)
Corstanje, A.; Buitink, S.; Enriquez, J. E.; Falcke, H.; Hörandel, J. R.; Krause, M.; Nelles, A.; Rachen, J. P.; Schellart, P.; Scholten, O.; ter Veen, S.; Thoudam, S.; Trinh, T. N. G.
2016-05-01
We describe a method for spectral cleaning and timing calibration of short time series data of the voltage in individual radio interferometer receivers. It makes use of phase differences in fast Fourier transform (FFT) spectra across antenna pairs. For strong, localized terrestrial sources these are stable over time, while being approximately uniform-random for a sum over many sources or for noise. Using only milliseconds-long datasets, the method finds the strongest interfering transmitters, a first-order solution for relative timing calibrations, and faulty data channels. No knowledge of gain response or quiescent noise levels of the receivers is required. With relatively small data volumes, this approach is suitable for use in an online system monitoring setup for interferometric arrays. We have applied the method to our cosmic-ray data collection, a collection of measurements of short pulses from extensive air showers, recorded by the LOFAR radio telescope. Per air shower, we have collected 2 ms of raw time series data for each receiver. The spectral cleaning has a calculated optimal sensitivity corresponding to a power signal-to-noise ratio of 0.08 (or -11 dB) in a spectral window of 25 kHz, for 2 ms of data in 48 antennas. This is well sufficient for our application. Timing calibration across individual antenna pairs has been performed at 0.4 ns precision; for calibration of signal clocks across stations of 48 antennas the precision is 0.1 ns. Monitoring differences in timing calibration per antenna pair over the course of the period 2011 to 2015 shows a precision of 0.08 ns, which is useful for monitoring and correcting drifts in signal path synchronizations. A cross-check method for timing calibration is presented, using a pulse transmitter carried by a drone flying over the array. Timing precision is similar, 0.3 ns, but is limited by transmitter position measurements, while requiring dedicated flights.
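The phase-difference stability test can be sketched for a single receiver pair: split each voltage series into blocks, FFT each block, and measure the resultant length of the unit phasors of the inter-receiver phase difference in every frequency bin. Values near 1 indicate a stable terrestrial transmitter; values near 0 indicate noise or a sum over many sources. This is an illustrative reduction of the method, not the LOFAR pipeline; the function name and the 0.8 threshold are our assumptions:

```python
import numpy as np

def stable_transmitter_flags(x1, x2, block, threshold=0.8):
    """Flag FFT bins whose inter-receiver phase difference is stable
    across consecutive blocks (candidate RFI lines)."""
    n = min(len(x1), len(x2)) // block
    s1 = np.fft.rfft(np.asarray(x1[:n * block]).reshape(n, block), axis=1)
    s2 = np.fft.rfft(np.asarray(x2[:n * block]).reshape(n, block), axis=1)
    phase = np.angle(s1) - np.angle(s2)
    # resultant length of the phase-difference phasors over the n blocks:
    # ~1 for a fixed terrestrial source, ~1/sqrt(n) for random phases
    stability = np.abs(np.mean(np.exp(1j * phase), axis=0))
    return stability > threshold
```

In the full method the same statistic, evaluated over many antenna pairs, simultaneously identifies the strongest transmitters and yields relative timing offsets from the slope of the stable phase differences with frequency.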
`Geologic time series' of earth surface deformation
NASA Astrophysics Data System (ADS)
Friedrich, A. M.
2004-12-01
The debate over whether the earth has evolved gradually or by catastrophic change has dominated the geological sciences for many centuries. On a human timescale, the earth appears to be changing slowly except for a few sudden events (singularities) such as earthquakes, floods, or landslides. While these singularities dramatically affect the loss of life or the destruction of habitat locally, they have little effect on the global population growth rate or the evolution of the earth's surface. It is also unclear to what degree such events leave their traces in the geologic record. Yet, the earth's surface is changing! For example, rocks that equilibrated at depths of > 30 km below the surface are exposed at high elevations in mountain belts, indicating vertical motion (uplift) of tens of kilometers; and rocks that acquired a signature of the earth's magnetic field are found up to hundreds of kilometers from their origin, indicating significant horizontal transport along great faults. Whether such long-term motion occurs at the rate indicated by the recurrence interval of singular events, or whether singularities also operate at a higher-order scale ("mega-singularities"), are open questions. Attempts to address these questions require time series significantly longer than several recurrence intervals of singularities. For example, for surface-rupturing earthquakes (magnitude > 7) with recurrence intervals ranging from tens to tens of thousands of years, observation periods on the order of thousands of years to a million years would be needed. However, few if any of the presently available measurement methods provide both the necessary resolution and "recording duration." While paleoseismic methods have the appropriate spatial and temporal resolution, data collection along most faults has been limited to the last one or two earthquakes. Geologic and geomorphic measurements may record long-term changes in fault slip, but only provide rates averaged over many recurrence
Peat conditions mapping using MODIS time series
NASA Astrophysics Data System (ADS)
Poggio, Laura; Gimona, Alessandro; Bruneau, Patricia; Johnson, Sally; McBride, Andrew; Artz, Rebekka
2016-04-01
Large areas of Scotland are covered in peatlands, providing an important sink of carbon in their near natural state but act as a potential source of gaseous and dissolved carbon emission if not in good conditions. Data on the condition of most peatlands in Scotland are, however, scarce and largely confined to sites under nature protection designations, often biased towards sites in better condition. The best information available at present is derived from labour intensive field-based monitoring of relatively few designated sites (Common Standard Monitoring Dataset). In order to provide a national dataset of peat conditions, the available point information from the CSM data was modelled with morphological features and information derived from MODIS sensor. In particular we used time series of indices describing vegetation greenness (Enhanced Vegetation Index), water availability (Normalised Water Difference index), Land Surface Temperature and vegetation productivity (Gross Primary productivity). A scorpan-kriging approach was used, in particular using Generalised Additive Models for the description of the trend. The model provided the probability of a site to be in favourable conditions and the uncertainty of the predictions was taken into account. The internal validation (leave-one-out) provided a mis-classification error of around 0.25. The derived dataset was then used, among others, in the decision making process for the selection of sites for restoration.
NASA Astrophysics Data System (ADS)
Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui
2014-07-01
The linear regression parameters between two time series can be different under different lengths of observation period. If we study the whole period by the sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We tackle fundamental research that presents a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequency of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
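A minimal sketch of the pattern-transmission construction: slide a window over the paired series, label each position by a (slope interval, significance) pattern, and count transitions between consecutive patterns as weighted directed edges. The pattern definition follows the abstract; the bin edges, the 0.05 level, and all names here are illustrative choices of ours:

```python
import numpy as np
from collections import Counter
from scipy import stats

def regression_pattern_network(x, y, window, slope_bins, alpha=0.05):
    """Transform sliding-window linear-regression results into a directed,
    weighted transition network.  Nodes are (slope-interval index,
    significant?) patterns; edge weights are transition frequencies."""
    patterns = []
    for i in range(len(x) - window + 1):
        fit = stats.linregress(x[i:i + window], y[i:i + window])
        patterns.append((int(np.digitize(fit.slope, slope_bins)),
                         bool(fit.pvalue < alpha)))
    # each consecutive pair of patterns is one (directed) transmission
    edges = Counter(zip(patterns[:-1], patterns[1:]))
    return patterns, edges
```

The `edges` counter is exactly the weighted adjacency structure the abstract describes; feeding it to a graph library then gives weighted out-degree and betweenness centrality per pattern.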
Singular spectrum analysis and forecasting of hydrological time series
NASA Astrophysics Data System (ADS)
Marques, C. A. F.; Ferreira, J. A.; Rocha, A.; Castanheira, J. M.; Melo-Gonçalves, P.; Vaz, N.; Dias, J. M.
The singular spectrum analysis (SSA) technique is applied to some hydrological univariate time series to assess its ability to uncover important information from those series, and also its forecast skill. The SSA is carried out on annual precipitation, monthly runoff, and hourly water temperature time series. Information is obtained by extracting important components or, when possible, the whole signal from the time series. The extracted components are then subject to forecasting by the SSA algorithm. We illustrate the ability of SSA to extract a slowly varying component (i.e. the trend) from the precipitation time series, the trend and oscillatory components from the runoff time series, and the whole signal from the water temperature time series. The SSA was also able to accurately forecast the extracted components of these time series.
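Basic SSA (embedding, SVD of the trajectory matrix, and diagonal averaging) can be sketched in a few lines. This is an illustration of the decomposition step only; the grouping of components into trend and oscillations, and the SSA forecasting step used in the paper, are not shown:

```python
import numpy as np

def ssa_decompose(x, L):
    """Decompose a series into additive SSA components: embed with window
    length L, SVD the trajectory matrix, and Hankelize (diagonally average)
    each rank-one term back into a length-N series."""
    x = np.asarray(x, float)
    N = len(x)
    K = N - L + 1
    # Hankel trajectory matrix: column j is x[j : j + L]
    X = np.column_stack([x[j:j + L] for j in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # average each anti-diagonal to map the matrix back to a series
        comps.append(np.array([Xk[::-1].diagonal(d).mean()
                               for d in range(-L + 1, K)]))
    return np.array(comps)
```

Because the SVD is exact and Hankelization is linear, the components sum back to the original series; trend extraction amounts to keeping the leading (slowly varying) components.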
Intercomparison of six Mediterranean zooplankton time series
NASA Astrophysics Data System (ADS)
Berline, Léo; Siokou-Frangou, Ioanna; Marasović, Ivona; Vidjak, Olja; Fernández de Puelles, M.ª Luz; Mazzocchi, Maria Grazia; Assimakopoulou, Georgia; Zervoudaki, Soultana; Fonda-Umani, Serena; Conversi, Alessandra; Garcia-Comas, Carmen; Ibanez, Frédéric; Gasparini, Stéphane; Stemmann, Lars; Gorsky, Gabriel
2012-05-01
We analyzed and compared Mediterranean mesozooplankton time series spanning 1957-2006 from six coastal stations in the Balearic, Ligurian, Tyrrhenian, North and Middle Adriatic and Aegean Sea. Our analysis focused on fluctuations of major zooplankton taxonomic groups and their relation with environmental and climatic variability. Average seasonal cycles and interannual trends were derived. Stations spanned a large range of trophic status from oligotrophic to moderately eutrophic. Intra-station analyses showed (1) coherent multi-taxa trends off Villefranche sur mer that diverge from the previous results found at species level, (2) in Baleares, covariation of zooplankton and water masses as a consequence of the boundary hydrographic regime in the middle Western Mediterranean, (3) decrease in trophic status and abundance of some taxonomic groups off Naples, and (4) off Athens, an increase of zooplankton abundance and decrease in chlorophyll possibly caused by reduction of anthropogenic nutrient input, increase of microbial components, and more efficient grazing control on phytoplankton. (5) At basin scale, the analysis of temperature revealed significant positive correlations between Villefranche, Trieste and Naples for annual and/or winter average, and synchronous abrupt cooling and warming events centered in 1987 at the same three sites. After correction for multiple comparisons, we found no significant correlations between climate indices and local temperature or zooplankton abundance, nor between stations for zooplankton abundance, therefore we suggest that for these coastal stations local drivers (climatic, anthropogenic) are dominant and that the link between local and larger scale of climate should be investigated further if we are to understand zooplankton fluctuations.
The Massive Hosts of Radio Galaxies Across Cosmic Time
NASA Astrophysics Data System (ADS)
Seymour, Nick; SHzRG Collaboration
2007-05-01
We present the results of a comprehensive Spitzer survey of 69 radio galaxies across 1 < z < 5.2.
NASA Astrophysics Data System (ADS)
Vyhnalek, Brian; Zurcher, Ulrich; O'Dwyer, Rebecca; Kaufman, Miron
2009-10-01
A wide range of heart rate irregularities have been reported in small studies of patients with temporal lobe epilepsy [TLE]. We hypothesize that patients with TLE display cardiac dysautonomia in either a subclinical or clinical manner. In a small study, we retrospectively identified two groups of patients (2003-2008) from the epilepsy monitoring unit [EMU] at the Cleveland Clinic. No patients were diagnosed with cardiovascular morbidities. The control group consisted of patients with confirmed pseudoseizures, and the experimental group had confirmed right temporal lobe epilepsy through a seizure-free outcome after temporal lobectomy. We quantified the heart rate variability using the approximate entropy [ApEn]. We found similar values of the ApEn in all three states of consciousness (awake, sleep, and preceding seizure onset). In the TLE group, there is some evidence for greater variability in the awake state than in either the sleep or preceding-seizure-onset state. Here we present results for mathematically generated time series: the heart rate fluctuations ξ follow gamma statistics, i.e., p(ξ) = Γ(k)^{-1} ξ^{k-1} exp(-ξ). This probability function has well-known properties, and its Shannon entropy can be expressed in terms of the Γ-function. The parameter k allows us to generate a family of heart rate time series with different statistics. The ApEn calculated for the generated time series for different values of k mimics the properties found for the TLE and pseudoseizure groups. Our results suggest that the ApEn is an effective tool to probe differences in the statistics of heart rate fluctuations.
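The ApEn statistic itself is standard (Pincus); a compact sketch with the conventional defaults m = 2 and tolerance r = 0.2 times the series standard deviation (an assumption here, since the abstract does not state its parameters):

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r): a regularity statistic.  Smaller values mean a more
    regular (predictable) series; larger values mean more irregularity."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    def phi(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])  # mm-length templates
        # Chebyshev distance between all template pairs (self-matches included,
        # per the original ApEn definition)
        d = np.abs(emb[:, None] - emb[None, :]).max(axis=2)
        return np.log((d <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)
```

A regular signal such as a sine wave scores far lower than white noise, which is the sense in which ApEn quantifies the heart-rate-variability differences discussed above.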
Multiscale entropy to distinguish physiologic and synthetic RR time series.
Costa, M; Goldberger, A L; Peng, C-K
2002-01-01
We address the challenge of distinguishing physiologic interbeat interval time series from those generated by synthetic algorithms via a newly developed multiscale entropy method. Traditional measures of time series complexity only quantify the degree of regularity on a single time scale. However, many physiologic variables, such as heart rate, fluctuate in a very complex manner and present correlations over multiple time scales. We have proposed a new method to calculate multiscale entropy from complex signals. In order to distinguish between physiologic and synthetic time series, we first applied the method to a learning set of RR time series derived from healthy subjects. We empirically established selected criteria characterizing the entropy dependence on scale factor for these datasets. We then applied this algorithm to the CinC 2002 test datasets. Using only the multiscale entropy method, we correctly classified 48 of 50 (96%) time series. In combination with Fourier spectral analysis, we correctly classified all time series. PMID:14686448
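A sketch of the two ingredients, coarse-graining and sample entropy, with the tolerance r fixed from the original series as in the canonical multiscale entropy method (function names are ours; the paper's classification criteria on the resulting curves are not reproduced):

```python
import numpy as np

def sample_entropy(x, m, r):
    """SampEn(m, r) = -ln(A/B), where B and A count template pairs of
    length m and m+1 matching within tolerance r (self-matches excluded)."""
    x = np.asarray(x, float)
    def pairs(mm):
        n = len(x) - mm + 1
        emb = np.array([x[i:i + mm] for i in range(n)])
        d = np.abs(emb[:, None] - emb[None, :]).max(axis=2)  # Chebyshev
        return ((d <= r).sum() - n) / 2.0                    # drop self-matches
    return -np.log(pairs(m + 1) / pairs(m))

def multiscale_entropy(x, scales, m=2, r_frac=0.15):
    """SampEn of non-overlapping coarse-grained versions of x, with the
    tolerance fixed once from the original series' standard deviation."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    out = []
    for s in scales:
        n = len(x) // s
        cg = x[:n * s].reshape(n, s).mean(axis=1)  # coarse-grain at scale s
        out.append(sample_entropy(cg, m, r))
    return out
```

For uncorrelated noise the curve decreases with scale, while long-range correlated physiologic signals stay flat or rise; the dependence of entropy on scale factor is what the abstract's empirical criteria are built on.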
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
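The dynamic-programming scheme can be sketched for one concrete choice of measure function (set union) and the symmetric difference as the segment-difference measure; both choices, and all names below, are illustrative assumptions rather than necessarily those of the paper, and the O(n^2) cost table ignores the paper's efficient segment-difference algorithms:

```python
def segment_cost(sets, i, j):
    """Segment difference for time points i..j under the union measure:
    the segment's item set is the union of its time points, and the cost
    is the total symmetric difference against each time point."""
    seg = set().union(*sets[i:j + 1])
    return sum(len(seg ^ s) for s in sets[i:j + 1])

def optimal_segmentation(sets, k):
    """Split the item-set time series into k contiguous segments
    minimizing the total segment difference (classic segmentation DP)."""
    n = len(sets)
    cost = [[segment_cost(sets, i, j) for j in range(n)] for i in range(n)]
    INF = float('inf')
    dp = [[INF] * n for _ in range(k + 1)]    # dp[m][j]: best m-split of 0..j
    back = [[-1] * n for _ in range(k + 1)]
    for j in range(n):
        dp[1][j] = cost[0][j]
    for m in range(2, k + 1):
        for j in range(m - 1, n):
            for i in range(m - 2, j):
                c = dp[m - 1][i] + cost[i + 1][j]
                if c < dp[m][j]:
                    dp[m][j], back[m][j] = c, i
    bounds, j = [], n - 1                     # recover segment boundaries
    for m in range(k, 1, -1):
        i = back[m][j]
        bounds.append(i + 1)
        j = i
    return dp[k][n - 1], sorted(bounds)
```

Swapping in a different measure function only changes `segment_cost`; the DP over boundaries is unchanged, which is the structure the abstract's scheme relies on.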
Multifractal Analysis of Aging and Complexity in Heartbeat Time Series
NASA Astrophysics Data System (ADS)
Muñoz D., Alejandro; Almanza V., Victor H.; del Río C., José L.
2004-09-01
Recently, multifractal analysis has been used intensively in the analysis of physiological time series. In this work we apply multifractal analysis to the study of heartbeat time series from healthy young subjects and series obtained from healthy elderly subjects. We show that this multifractal formalism could be a useful tool to discriminate between these two kinds of series. We used the algorithm proposed by Chhabra and Jensen, which provides a highly accurate, practical and efficient method for the direct computation of the singularity spectrum. Aging causes loss of multifractality in the heartbeat time series: the heartbeat time series of elderly persons are less complex than those of young persons. This analysis reveals a new level of complexity characterized by the wide range of exponents necessary to characterize the dynamics of young people.
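The Chhabra-Jensen construction estimates α(q) and f(q) directly from normalized box measures at several scales, avoiding the Legendre transform; a compact sketch using simple box sums over a positive series (names and the scale grid are our illustrative choices):

```python
import numpy as np

def chhabra_jensen(x, qs, scales):
    """Direct f(alpha) singularity spectrum: for each q, alpha(q) and f(q)
    are the slopes of sum(mu*log P) and sum(mu*log mu) against log(box size),
    where P are box probabilities and mu the normalized q-weighted measure."""
    x = np.asarray(x, float)
    x = x - x.min() + 1e-12                   # ensure a positive measure
    alphas, fs = [], []
    for q in qs:
        num_a, num_f, log_eps = [], [], []
        for s in scales:
            n = len(x) // s
            P = x[:n * s].reshape(n, s).sum(axis=1)
            P = P / P.sum()                   # box probabilities at scale s
            mu = P**q / np.sum(P**q)          # normalized q-measure
            num_a.append(np.sum(mu * np.log(P)))
            num_f.append(np.sum(mu * np.log(mu)))
            log_eps.append(np.log(s / len(x)))  # log relative box size
        alphas.append(np.polyfit(log_eps, num_a, 1)[0])
        fs.append(np.polyfit(log_eps, num_f, 1)[0])
    return np.array(alphas), np.array(fs)
```

On a binomial multiplicative cascade the recovered α(q) decreases monotonically from α_max (q ≪ 0) to α_min (q ≫ 0), and f at q = 0 equals the box-counting dimension of the support; a narrow recovered α range is the loss of multifractality the abstract associates with aging.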
Time efficient 3-D electromagnetic modeling on massively parallel computers
Alumbaugh, D.L.; Newman, G.A.
1995-08-01
A numerical modeling algorithm has been developed to simulate the electromagnetic response of a three-dimensional earth to a dipole source for frequencies ranging from 100 Hz to 100 MHz. The numerical problem is formulated in terms of a frequency-domain, modified vector Helmholtz equation for the scattered electric fields. The resulting differential equation is approximated using a staggered finite-difference grid, which results in a linear system of equations for which the matrix is sparse and complex symmetric. The system of equations is solved using a preconditioned quasi-minimum-residual method. Dirichlet boundary conditions are employed at the edges of the mesh by setting the tangential electric fields equal to zero. At frequencies less than 1 MHz, normal grid stretching is employed to mitigate unwanted reflections off the grid boundaries. For frequencies greater than this, absorbing boundary conditions must be employed by making the stretching parameters of the modified vector Helmholtz equation complex, which introduces loss at the boundaries. To allow for faster calculation of realistic models, the original serial version of the code has been modified to run on a massively parallel architecture. This modification involves three distinct tasks: (1) mapping the finite-difference stencil to a processor stencil, which allows the necessary information to be exchanged between processors that contain adjacent nodes in the model; (2) determining the most efficient method to input the model, which is accomplished by dividing the input into "global" and "local" data and then reading the two sets in differently; and (3) deciding how to output the data, which is an inherently nonparallel process.
Visibility graph network analysis of gold price time series
NASA Astrophysics Data System (ADS)
Long, Yu
2013-08-01
Mapping time series into a visibility graph network, the characteristics of the gold price time series and the return series, and the mechanism underlying gold price fluctuations, have been explored from the perspective of complex network theory. The network degree distribution, which changes from a power law to an exponential law when the series is shuffled from its original sequence, and the average path length, which changes from L∼lnN into lnL∼lnN as the sequence is shuffled, demonstrate that the price series and return series are both long-range dependent fractal series. The relation of the Hurst exponent to the power-law exponent of the degree distribution demonstrates that the logarithmic price series is a fractal Brownian series and the logarithmic return series is a fractal Gaussian series. Power-law exponents of the degree distribution in a moving time window demonstrate that the logarithmic gold price series is a multifractal series. The power-law average clustering coefficient demonstrates that the gold price visibility graph is a hierarchy network. The hierarchy character, in light of the correspondence of the graph to price fluctuation, means that gold price fluctuation has a hierarchy structure, which appears to be in agreement with Elliott's empirical Wave Theory on stock price fluctuation, and the local-rule growth theory of a hierarchy network means that the hierarchy structure of gold price fluctuation originates from persistent, short-term factors, such as short-term speculation.
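The natural visibility graph construction behind this analysis is simple to state: connect samples i and j whenever the straight line between them clears every intermediate sample. A brute-force O(n³) sketch (sufficient for illustration; efficient divide-and-conquer variants exist):

```python
import numpy as np

def natural_visibility_graph(x):
    """Nodes are time points; (i, j) is an edge when every intermediate
    sample lies strictly below the line joining (i, x[i]) and (j, x[j])."""
    x = np.asarray(x, float)
    edges = set()
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            k = np.arange(i + 1, j)
            # height of the i-j sight line at the intermediate positions
            line = x[j] + (x[i] - x[j]) * (j - k) / (j - i)
            if k.size == 0 or np.all(x[k] < line):
                edges.add((i, j))
    return edges
```

The degree distribution and clustering coefficient of the resulting graph are the network quantities the abstract maps onto the Hurst exponent and the hierarchy structure of the price series.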
Apparatus for statistical time-series analysis of electrical signals
NASA Technical Reports Server (NTRS)
Stewart, C. H. (Inventor)
1973-01-01
An apparatus for performing statistical time-series analysis of complex electrical signal waveforms, permitting prompt and accurate determination of the statistical characteristics of the signal, is presented.
Interpretable Early Classification of Multivariate Time Series
ERIC Educational Resources Information Center
Ghalwash, Mohamed F.
2013-01-01
Recent advances in technology have led to an explosion in data collection over time rather than in a single snapshot. For example, microarray technology allows us to measure gene expression levels in different conditions over time. Such temporal data grants the opportunity for data miners to develop algorithms to address domain-related problems,…
Time series change detection: Algorithms for land cover change
NASA Astrophysics Data System (ADS)
Boriah, Shyam
can be used for decision making and policy planning purposes. In particular, previous change detection studies have primarily relied on examining differences between two or more satellite images acquired on different dates. Thus, a technological solution that detects global land cover change using high temporal resolution time series data will represent a paradigm-shift in the field of land cover change studies. To realize these ambitious goals, a number of computational challenges in spatio-temporal data mining need to be addressed. Specifically, analysis and discovery approaches need to be cognizant of climate and ecosystem data characteristics such as seasonality, non-stationarity/inter-region variability, multi-scale nature, spatio-temporal autocorrelation, high-dimensionality and massive data size. This dissertation, a step in that direction, translates earth science challenges to computer science problems, and provides computational solutions to address these problems. In particular, three key technical capabilities are developed: (1) Algorithms for time series change detection that are effective and can scale up to handle the large size of earth science data; (2) Change detection algorithms that can handle large numbers of missing and noisy values present in satellite data sets; and (3) Spatio-temporal analysis techniques to identify the scale and scope of disturbance events.
Simulation of Ground Winds Time Series
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
2008-01-01
A simulation process has been developed for generation of the longitudinal and lateral components of ground wind atmospheric turbulence as a function of mean wind speed, elevation, temporal frequency range and distance between locations. The distance between locations influences the spectral coherence between the simulated series at adjacent locations. Short distances reduce correlation only at high frequencies; as distances increase, correlation is reduced over a wider range of frequencies. The choice of values for the constants d1 and d3 in the PSD model is the subject of work in progress. An improved knowledge of the values of z0 as a function of wind direction at the ARES-1 launch pads is necessary for definition of d1. Results of other studies at other locations may be helpful, as summarized in Fichtl's recent correspondence. Ideally, further research is needed based on measurements of ground wind turbulence with high-resolution anemometers at a number of altitudes at a new KSC tower located closer to the ARES-1 launch pad. The proposed research would be based on turbulence measurements that may be influenced by surface terrain roughness significantly different from the roughness prior to 1970 in Fichtl's measurements. Significant improvements in instrumentation, data storage and processing will greatly enhance the capability to model ground wind profiles and ground wind turbulence.
How to analyse irregularly sampled geophysical time series?
NASA Astrophysics Data System (ADS)
Eroglu, Deniz; Ozken, Ibrahim; Stemler, Thomas; Marwan, Norbert; Wyrwoll, Karl-Heinz; Kurths, Juergen
2015-04-01
One of the challenges of time series analysis is to detect changes in the dynamics of the underlying system. There are numerous methods that can be used to detect such regime changes in regularly sampled time series. Here we present a new approach that can be applied when the time series is irregularly sampled. Such data sets occur frequently in real-world applications, as in paleoclimate proxy records. The basic idea follows Victor and Purpura [1] and considers segments of the time series. For each segment we compute the cost of transforming the segment into the following one. If the time series is from one dynamical regime, the cost of transformation should be similar for each segment of the data. Dramatic changes in the cost time series indicate a change in the underlying dynamics. Any kind of analysis is applicable to the cost time series, since it is a regularly sampled time series. While recurrence plots are not the best choice for irregularly sampled data with some measurement noise component, we show that a recurrence plot analysis based on the cost time series can successfully identify the changes in the dynamics of the system. We tested this method using synthetically created time series and use these results to highlight the performance of our method. Furthermore, we present our analysis of a suite of calcite and aragonite stalagmites located in the eastern Kimberley region of tropical Western Australia. This oxygen isotopic data is a proxy for monsoon activity over the last 8,000 years. In this time series our method picks up several so-far-undetected changes from wet to dry in the monsoon system and therefore enables us to get a better understanding of the monsoon dynamics in the north-east of Australia over the last couple of thousand years. [1] J. D. Victor and K. P. Purpura, Network: Computation in Neural Systems 8, 127 (1997)
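The Victor-Purpura cost referenced above is an edit distance between event sequences: unit cost to add or delete an event, and a cost proportional to the time shift to move one. A sketch of the standard dynamic program (the parameter name q and the function name are ours; how segments are paired and compared in the full method is described in the abstract, not reproduced here):

```python
import numpy as np

def transformation_cost(a, b, q):
    """Victor-Purpura-style distance between two sorted event-time
    sequences: add/delete costs 1, shifting an event costs q*|dt|."""
    n, m = len(a), len(b)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)        # delete all events of a
    D[0, :] = np.arange(m + 1)        # insert all events of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,                        # delete a[i-1]
                          D[i, j - 1] + 1,                        # insert b[j-1]
                          D[i - 1, j - 1] + q * abs(a[i - 1] - b[j - 1]))  # shift
    return D[n, m]
```

Evaluating this cost between consecutive segments of an irregularly sampled record yields the regularly sampled "cost time series" on which the recurrence plot analysis is then performed.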
Astrophysics in the Era of Massive Time-Domain Surveys
NASA Astrophysics Data System (ADS)
Djorgovski, G.
Synoptic sky surveys are now the largest data producers in astronomy, entering the Petascale regime and opening the time domain to systematic exploration. A great variety of interesting phenomena, spanning essentially all subfields of astronomy, can only be studied in the time domain, and these new surveys are producing large statistical samples of the known types of objects and events for further study (e.g., SNe, AGN, variable stars of many kinds), and have already uncovered previously unknown subtypes (e.g., rare or peculiar types of SNe). These surveys are generating new science and paving the way for even larger surveys to come, e.g., the LSST; our ability to fully exploit such forthcoming facilities depends critically on the science, methodology, and experience being accumulated now. Among the outstanding challenges, the foremost is our ability to conduct an effective follow-up of the interesting events discovered by the surveys in any wavelength regime. The follow-up resources, especially spectroscopy, are already severely limited and will remain so for the predictable future, thus requiring an intelligent down-selection of the most astrophysically interesting events to follow. The first step in that process is an automated, real-time, iterative classification of events that incorporates heterogeneous data from the surveys themselves, archival and contextual information (spatial, temporal, and multiwavelength), and the incoming follow-up observations. The second step is an optimal automated event prioritization and allocation of the available follow-up resources, which also change in time. Both of these challenges are highly non-trivial, and require a strong cyber-infrastructure based on the Virtual Observatory data grid and the various astroinformatics efforts. Time-domain astronomy is inherently an astronomy of telescope-computational systems, and will increasingly depend on novel machine learning and artificial intelligence tools.
Volatility modeling of rainfall time series
NASA Astrophysics Data System (ADS)
Yusof, Fadhilah; Kane, Ibrahim Lawal
2013-07-01
Networks of rain gauges can provide a better insight into the spatial and temporal variability of rainfall, but they tend to be too widely spaced for accurate estimates. A way to estimate the spatial variability of rainfall between gauge points is to interpolate between them. This paper evaluates the spatial autocorrelation of rainfall data at some locations in Peninsular Malaysia using a geostatistical technique. The results give an insight into the spatial variability of rainfall in the area; on that basis, two rain gauges were selected for an in-depth study of the temporal dependence of the rainfall data-generating process. It could be shown that rainfall data are affected by nonlinear characteristics of the variance, often referred to as variance clustering or volatility, where large changes tend to follow large changes and small changes tend to follow small changes. The autocorrelation structure of the residuals and the squared residuals derived from autoregressive integrated moving average (ARIMA) models was inspected: the residuals are uncorrelated, but the squared residuals show autocorrelation, and the Ljung-Box test confirmed these results. A test based on the Lagrange multiplier principle was applied to the squared residuals from the ARIMA models. The results of this auxiliary test show clear evidence to reject the null hypothesis of no autoregressive conditional heteroskedasticity (ARCH) effect, indicating that generalized ARCH (GARCH) modeling is necessary. An ARIMA error model is proposed to capture the mean behavior, and a GARCH model is used to model the heteroskedasticity (variance behavior) of the residuals from the ARIMA model. The composite ARIMA-GARCH model thus captures the dynamics of daily rainfall in the study area. In addition, a seasonal ARIMA model was found suitable for the monthly average rainfall series of the same locations.
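The Lagrange-multiplier test for ARCH effects described above is simple enough to sketch without a statistics package: regress the squared residuals on their own lags and compare T·R² against a chi-squared threshold. This is a simplified illustration with synthetic data, not the paper's fitted rainfall models:

```python
import numpy as np

def arch_lm_stat(resid, nlags=5):
    """Engle's LM statistic for ARCH effects: regress e_t^2 on
    e_{t-1}^2 .. e_{t-q}^2 and return T * R^2, asymptotically
    chi-squared with `nlags` dof under the null of no ARCH."""
    e2 = np.asarray(resid) ** 2
    y = e2[nlags:]
    X = np.column_stack([np.ones_like(y)] +
                        [e2[nlags - k:-k] for k in range(1, nlags + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return len(y) * (1.0 - ss_res / ss_tot)

rng = np.random.default_rng(1)
T = 2000
# series with GARCH(1,1)-style volatility clustering, vs. white noise
eps = np.empty(T)
sig2 = 1.0
for t in range(T):
    eps[t] = rng.normal(0.0, np.sqrt(sig2))
    sig2 = 0.1 + 0.3 * eps[t] ** 2 + 0.6 * sig2
white = rng.normal(0.0, 1.0, T)

# 5% chi-squared critical value with 5 dof is about 11.07
print(arch_lm_stat(eps) > arch_lm_stat(white))  # clustering is detected
```

For the clustered series the statistic far exceeds the critical value, rejecting the null of no ARCH effect, which is exactly the condition that motivates fitting a GARCH model to the ARIMA residuals.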
Common trends in northeast Atlantic squid time series
NASA Astrophysics Data System (ADS)
Zuur, A. F.; Pierce, G. J.
2004-06-01
In this paper, dynamic factor analysis is used to estimate common trends in time series of squid catch per unit effort in Scottish (UK) waters. Results indicated that time series of most months were related to sea surface temperature measured at Millport (UK) and a few series were related to the NAO index. The DFA methodology identified three common trends in the squid time series not revealed by traditional approaches, which suggest a possible shift in relative abundance of summer- and winter-spawning populations.
Time series analysis of air pollutants in Beirut, Lebanon.
Farah, Wehbeh; Nakhlé, Myriam Mrad; Abboud, Maher; Annesi-Maesano, Isabella; Zaarour, Rita; Saliba, Nada; Germanos, Georges; Gerard, Jocelyne
2014-12-01
This study reports for the first time a time series analysis of daily urban air pollutant levels (CO, NO, NO2, O3, PM10, and SO2) in Beirut, Lebanon. The study examines data obtained between September 2005 and July 2006, and their descriptive analysis shows long-term variations of daily levels of air pollution concentrations. Strong persistence of these daily levels is identified in the time series using an autocorrelation function, except for SO2. Time series of standardized residual values (SRVs) are also calculated to compare fluctuations of the time series with different levels. Time series plots of the SRVs indicate that NO and NO2 had similar temporal fluctuations. However, NO2 and O3 had opposite temporal fluctuations, attributable to weather conditions and the accumulation of vehicular emissions. The effects of both desert dust storms and airborne particulate matter resulting from the Lebanon War in July 2006 are also discernible in the SRV plots. PMID:25150052
Horizontal visibility graphs: exact results for random time series.
Luque, B; Lacasa, L; Ballesteros, F; Luque, J
2009-10-01
The visibility algorithm has recently been introduced as a mapping between time series and complex networks. This procedure allows us to apply methods of complex network theory to characterize time series. In this work we present the horizontal visibility algorithm, a geometrically simpler and analytically solvable version of our former algorithm, focusing on the mapping of random series (series of independent identically distributed random variables). After presenting some properties of the algorithm, we present exact results on the topological properties of graphs associated with random series, namely the degree distribution, the clustering coefficient, and the mean path length. We show that the horizontal visibility algorithm stands as a simple method to discriminate randomness in time series, since any random series maps to a graph with an exponential degree distribution of the shape P(k) = (1/3)(2/3)^(k-2), independent of the probability distribution from which the series was generated. Accordingly, visibility graphs with other P(k) are related to nonrandom series. Numerical simulations confirm the accuracy of the theorems for finite series. In a second part, we show that the method is able to distinguish chaotic series from i.i.d. ones, studying the following situations: (i) noise-free low-dimensional chaotic series, (ii) low-dimensional noisy chaotic series, even in the presence of large amounts of noise, and (iii) high-dimensional chaotic series (coupled map lattice), without the need for additional techniques such as surrogate data or noise reduction methods. Finally, heuristic arguments are given to explain the topological properties of chaotic series, and several sequences that are conjectured to be random are analyzed. PMID:19905386
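The horizontal visibility criterion is simple to implement, and the exact result quoted in the abstract is easy to check numerically: for an i.i.d. series the fraction of degree-2 nodes should approach P(2) = 1/3. A minimal sketch (function names are ours):

```python
import numpy as np

def hvg_degrees(x):
    """Degree of each node in the horizontal visibility graph:
    nodes i < j are linked iff every datum strictly between them
    lies below min(x[i], x[j]).  Simple O(n^2) reference version."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
            # a datum at least as high as x[i] blocks all later views
            if x[j] >= x[i]:
                break
    return deg

rng = np.random.default_rng(42)
deg = hvg_degrees(rng.uniform(size=3000))
# exact result for i.i.d. series: P(k) = (1/3) * (2/3)**(k - 2)
print(abs(np.mean(deg == 2) - 1 / 3) < 0.05)
```

The empirical degree distribution matches the predicted exponential form regardless of the marginal distribution of the series, which is what makes the graph a distribution-free randomness test.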
Analysis of Time-Series Quasi-Experiments. Final Report.
ERIC Educational Resources Information Center
Glass, Gene V.; Maguire, Thomas O.
The objective of this project was to investigate the adequacy of statistical models developed by G. E. P. Box and G. C. Tiao for the analysis of time-series quasi-experiments: (1) The basic model developed by Box and Tiao is applied to actual time-series experiment data from two separate experiments, one in psychology and one in educational…
Spectral Procedures Enhance the Analysis of Three Agricultural Time Series
Technology Transfer Automated Retrieval System (TEKTRAN)
Many agricultural and environmental variables are influenced by cyclic processes that occur naturally. Consequently their time series often have cyclic behavior. This study developed time series models for three different phenomena: (1) a 60 year-long state average crop yield record, (2) a four ...
A Computer Evolution in Teaching Undergraduate Time Series
ERIC Educational Resources Information Center
Hodgess, Erin M.
2004-01-01
In teaching undergraduate time series courses, we have used a mixture of various statistical packages. We have finally been able to teach all of the applied concepts within one statistical package; R. This article describes the process that we use to conduct a thorough analysis of a time series. An example with a data set is provided. We compare…
Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models
ERIC Educational Resources Information Center
Price, Larry R.
2012-01-01
The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…
Nonlinear parametric model for Granger causality of time series
NASA Astrophysics Data System (ADS)
Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano
2006-06-01
The notion of Granger causality between two time series examines whether the prediction of one series can be improved by incorporating information from the other. In particular, if the prediction error of the first time series is reduced by including measurements from the second time series, then the second time series is said to have a causal influence on the first. We propose a radial basis function approach to nonlinear Granger causality. The proposed model is not constrained to be additive in variables from the two time series and can approximate any function of these variables, while remaining suitable for evaluating causality. The usefulness of this measure of causality is shown in two applications. In the first, physiological, application, we consider time series of heart rate and blood pressure in congestive heart failure patients and patients affected by sepsis: we find that sepsis patients, unlike congestive heart failure patients, show symmetric causal relationships between the two time series. In the second application, we consider the feedback loop in a model of excitatory and inhibitory neurons: we find that in this system causality measures the combined influence of couplings and membrane time constants.
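The prediction-error comparison that defines Granger causality can be sketched in its classical linear form. The paper replaces the linear regression below with radial basis functions; this simplified version only illustrates the underlying idea, and the coupled test system is our own:

```python
import numpy as np

def granger_improvement(x, y, p=2):
    """Ratio of residual variances when predicting x from its own
    past alone vs. jointly with y's past.  Values well below 1
    indicate that y Granger-causes x (linear version only)."""
    n = len(x)
    X_own = np.column_stack([np.ones(n - p)] +
                            [x[p - k:n - k] for k in range(1, p + 1)])
    X_joint = np.column_stack([X_own] +
                              [y[p - k:n - k] for k in range(1, p + 1)])

    def resvar(A):
        beta, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
        return (x[p:] - A @ beta).var()

    return resvar(X_joint) / resvar(X_own)

rng = np.random.default_rng(7)
n = 4000
y = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):          # y drives x with one step of delay
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.normal()

print(granger_improvement(x, y) < 0.5)   # y -> x: large improvement
print(granger_improvement(y, x) > 0.9)   # x -> y: essentially none
```

The asymmetry of the two ratios recovers the direction of the coupling; the RBF extension keeps this structure while allowing nonlinear dependence on the lagged variables.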
Measurements of spatial population synchrony: influence of time series transformations.
Chevalier, Mathieu; Laffaille, Pascal; Ferdy, Jean-Baptiste; Grenouillet, Gaël
2015-09-01
Two mechanisms have been proposed to explain spatial population synchrony: dispersal among populations, and the spatial correlation of density-independent factors (the "Moran effect"). To identify which of these two mechanisms is driving spatial population synchrony, time series transformations (TSTs) of abundance data have been used to remove the signature of one mechanism and highlight the effect of the other. However, several issues with TSTs remain, and to date no consensus has emerged about how population time series should be handled in synchrony studies. Here, using 3131 time series involving 34 fish species found in French rivers, we computed several metrics commonly used in synchrony studies to determine whether a large-scale climatic factor (temperature) influenced fish population dynamics at the regional scale, and to test the effect of three commonly used TSTs (detrending, prewhitening and a combination of both) on these metrics. We also tested whether the influence of TSTs on time series and population synchrony levels was related to the features of the time series, using both empirical and simulated time series. For several species, and regardless of the TST used, we found evidence of a Moran effect on freshwater fish populations. However, these results were globally biased downward by TSTs, which reduced our ability to detect significant signals. Depending on the species and the features of the time series, we found that TSTs could lead to contradictory results, regardless of the metric considered. Finally, we suggest guidelines on how population time series should be processed in synchrony studies. PMID:25953116
Using Time-Series Regression to Predict Academic Library Circulations.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1984-01-01
Four methods were used to forecast monthly circulation totals in 15 midwestern academic libraries: dummy time-series regression, lagged time-series regression, simple average (straight-line forecasting), monthly average (naive forecasting). In tests of forecasting accuracy, dummy regression method and monthly mean method exhibited smallest average…
Sunspot Time Series: Passive and Active Intervals
NASA Astrophysics Data System (ADS)
Zięba, S.; Nieckarz, Z.
2014-07-01
Solar activity slowly and irregularly decreases from the first spotless day (FSD) in the declining phase of the old sunspot cycle and systematically, but also in an irregular way, increases to the new cycle maximum after the last spotless day (LSD). The time interval between the first and the last spotless day can be called the passive interval (PI), while the time interval from the last spotless day to the first one after the new cycle maximum is the related active interval (AI). Minima of solar cycles are inside PIs, while maxima are inside AIs. In this article, we study the properties of passive and active intervals to determine the relation between them. We have found that some properties of PIs, and related AIs, differ significantly between two groups of solar cycles; this has allowed us to classify Cycles 8 - 15 as passive cycles, and Cycles 17 - 23 as active ones. We conclude that the solar activity in the PI declining phase (a descending phase of the previous cycle) determines the strength of the approaching maximum in the case of active cycles, while the activity of the PI rising phase (a phase of the ongoing cycle early growth) determines the strength of passive cycles. This can have implications for solar dynamo models. Our approach indicates the important role of solar activity during the declining and the rising phases of the solar-cycle minimum.
Functional and stochastic models estimation for GNSS coordinates time series
NASA Astrophysics Data System (ADS)
Galera Monico, J. F.; Silva, H. A.; Marques, H. A.
2014-12-01
GNSS has been widely used in Geodesy and related areas for positioning. The position and velocity of terrestrial stations have been estimated from GNSS data based on daily solutions, so it is now possible to analyse GNSS coordinate time series with the aim of improving the functional and stochastic models, which can help to understand geodynamic phenomena. Several sources of error are mathematically modelled or estimated in GNSS data processing to obtain precise coordinates, which in general is carried out using scientific software. However, because not all errors can be modelled, some noise remains in the coordinate time series, especially noise related to seasonal effects. The noise affecting GNSS coordinate time series can be composed of white and coloured noise, which can be characterized by Variance Component Estimation through the Least Squares Method. This paper presents a methodology to characterize the noise in GNSS coordinate time series, so that the estimated variances can be used to reconstruct the stochastic and functional models of the time series, providing a more realistic and reliable modelling. Experiments were carried out using GNSS time series for a few Brazilian stations covering almost ten years of daily solutions. The noise components were characterized as white, flicker and random-walk noise and used to estimate the time series functional model, considering semiannual and annual effects. The results show that adopting an adequate stochastic model that accounts for the noise variances of the time series produces a more realistic and reliable functional model for GNSS coordinate time series. Such results may be applied in the context of the realization of the Brazilian Geodetic System.
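The functional part of such a model (offset, linear rate, annual and semiannual harmonics) reduces to an ordinary least-squares fit. The sketch below uses white noise only, whereas real GNSS series also carry the flicker and random-walk noise that motivates the variance component estimation; all names and numbers are illustrative:

```python
import numpy as np

def fit_functional_model(t, y):
    """Least-squares fit of a typical GNSS coordinate model:
    offset + linear rate + annual + semiannual harmonics.
    t is in years; returns the estimated parameter vector
    [offset, rate, cosA, sinA, cosSA, sinSA]."""
    w = 2 * np.pi  # annual angular frequency for t in years
    A = np.column_stack([np.ones_like(t), t,
                         np.cos(w * t), np.sin(w * t),
                         np.cos(2 * w * t), np.sin(2 * w * t)])
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return params

rng = np.random.default_rng(3)
t = np.arange(0, 10, 1 / 365.25)            # ten years of daily solutions
truth = 5.0 + 2.0 * t + 3.0 * np.cos(2 * np.pi * t) \
        + 1.0 * np.sin(4 * np.pi * t)
y = truth + rng.normal(0.0, 2.0, t.size)    # white noise only, for brevity
p = fit_functional_model(t, y)
print(np.round(p[1], 1))                     # recovered rate (simulated: 2.0)
```

With coloured noise present, the same design matrix is kept but the identity weight matrix is replaced by the covariance implied by the estimated white, flicker and random-walk variance components.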
The delay time distribution of massive double compact star mergers
NASA Astrophysics Data System (ADS)
Mennekens, N.; Vanbeveren, D.
2016-05-01
To investigate the temporal evolution of binary populations, in general, and double compact-star binaries and mergers, in particular, within a galactic evolution context, a very straightforward method is obviously to implement a detailed binary evolutionary model in a galactic chemical evolution code. To our knowledge, the Brussels galactic chemical evolution code is the only one that fully and consistently accounts for the important effects of interacting binaries on the predictions of chemical evolution. With a galactic code that does not explicitly include binaries, the temporal evolution of the population of double compact star binaries and mergers can be estimated with reasonable accuracy if the delay time distribution (DTD) for these mergers is available. The DTD for supernovae type Ia has been studied extensively in the past decade. In the present paper we present the DTD for merging double neutron-star binaries and mixed systems consisting of a neutron star and a black hole. The latter mergers are very promising sites for producing r-process elements, and the DTDs can be used to study the galactic evolution of these elements with a code that does not explicitly account for binaries.
Comparison of New and Old Sunspot Number Time Series
NASA Astrophysics Data System (ADS)
Cliver, E. W.
2016-06-01
Four new sunspot number time series have been published in this Topical Issue: a backbone-based group number in Svalgaard and Schatten (Solar Phys., 2016; referred to here as SS, 1610 - present), a group number series in Usoskin et al. (Solar Phys., 2016; UEA, 1749 - present) that employs active day fractions from which it derives an observational threshold in group spot area as a measure of observer merit, a provisional group number series in Cliver and Ling (Solar Phys., 2016; CL, 1841 - 1976) that removed flaws in the Hoyt and Schatten (Solar Phys. 179, 189, 1998a; 181, 491, 1998b) normalization scheme for the original relative group sunspot number ( RG, 1610 - 1995), and a corrected Wolf (international, RI) number in Clette and Lefèvre (Solar Phys., 2016; SN, 1700 - present). Despite quite different construction methods, the four new series agree well after about 1900. Before 1900, however, the UEA time series is lower than SS, CL, and SN, particularly so before about 1885. Overall, the UEA series most closely resembles the original RG series. Comparison of the UEA and SS series with a new solar wind B time series (Owens et al. in J. Geophys. Res., 2016; 1845 - present) indicates that the UEA time series is too low before 1900. We point out incongruities in the Usoskin et al. (Solar Phys., 2016) observer normalization scheme and present evidence that this method under-estimates group counts before 1900. In general, a correction factor time series, obtained by dividing an annual group count series by the corresponding yearly averages of raw group counts for all observers, can be used to assess the reliability of new sunspot number reconstructions.
Time series photometry and starspot properties
NASA Astrophysics Data System (ADS)
Oláh, Katalin
2011-08-01
Systematic efforts to monitor starspots since the middle of the 20th century, and the results obtained from the datasets, are summarized with special focus on the observations made by automated telescopes. Multicolour photometry shows correlations between colour indices and brightness, indicating spotted regions with different average temperatures originating from spots and faculae. Long-term monitoring of spotted stars reveals variability on different timescales. On the rotational timescale, new spot appearances and starspot proper motions are followed through continuous changes of light curves during subsequent rotations. A sudden interchange of the more and less active hemispheres on the stellar surface is the so-called flip-flop phenomenon. The existence and strength of differential rotation is seen from the rotational signals of spots at different stellar latitudes. Long datasets, with only short annual interruptions, shed light on the nature of stellar activity cycles and multiple cycles. Systematic and/or random changes of the spot cycle lengths are discovered and described using various time-frequency analysis tools. Positions and sizes of spotted regions on stellar surfaces are calculated from photometric data by various software packages. From spot positions derived over decades, active longitudes on the stellar surfaces are found, which, in the case of synchronized eclipsing binaries, can be well positioned in the orbital frame with respect to, and affected by, the companion stars.
Sensor-Generated Time Series Events: A Definition Language
Anguera, Aurea; Lara, Juan A.; Lizcano, David; Martínez, Maria Aurora; Pazos, Juan
2012-01-01
There are now a great many domains where information is recorded by sensors over a limited time period or on a permanent basis. This data flow leads to sequences of data known as time series. In many domains, like seismography or medicine, time series analysis focuses on particular regions of interest, known as events, whereas the remainder of the time series contains hardly any useful information. In these domains, there is a need for mechanisms to identify and locate such events. In this paper, we propose an events definition language that is general enough to be used to easily and naturally define events in time series recorded by sensors in any domain. The proposed language has been applied to the definition of time series events generated within the branch of medicine dealing with balance-related functions in human beings. A device called a posturograph is used to study balance-related functions. The platform has four sensors that record the pressure intensity being exerted on the platform, generating four interrelated time series. As opposed to existing ad hoc proposals, the results confirm that the proposed language is valid, that is, generally applicable and accurate, for identifying the events contained in the time series.
From time series to complex networks: The visibility graph
Lacasa, Lucas; Luque, Bartolo; Ballesteros, Fernando; Luque, Jordi; Nuño, Juan Carlos
2008-01-01
In this work we present a simple and fast computational method, the visibility algorithm, that converts a time series into a graph. The constructed graph inherits several properties of the series in its structure. Thereby, periodic series convert into regular graphs, and random series into random graphs. Moreover, fractal series convert into scale-free networks, reinforcing the recently much-discussed connection between power-law degree distributions and fractality. Some remarkable examples and analytical tools are outlined to test the method's reliability. Many measures recently developed in complex network theory could, by means of this new approach, characterize time series from a new point of view. PMID:18362361
DEM time series of an agricultural watershed
NASA Astrophysics Data System (ADS)
Pineux, Nathalie; Lisein, Jonathan; Swerts, Gilles; Degré, Aurore
2014-05-01
In agricultural landscapes the soil surface evolves notably due to erosion and deposition phenomena. Even if most field data come from plot-scale studies, the watershed scale seems more appropriate to understand them. Currently, small unmanned aircraft systems and image processing are improving; 3D models can be built from multiple overlapping shots. Where techniques for large areas would be too expensive for a watershed-level study, and techniques for small areas too time-consuming, an unmanned aerial system seems a promising solution to quantify erosion and deposition patterns. The continuing technical improvements in this growing field allow us to obtain very good data quality and very high spatial resolution with high accuracy in Z. In the centre of Belgium, we equipped an agricultural watershed of 124 ha. For three years (2011-2013), we have been monitoring weather (including rainfall erosivity using a spectropluviograph), discharge at three different locations, sediment in runoff water, and watershed microtopography through unmanned airborne imagery (Gatewing X100). We also collected all available historical data to try to capture the "long-term" changes in watershed morphology during the last decades: old topographic maps, historical soil descriptions, etc. An erosion model (LANDSOIL) is also used to assess the evolution of the relief. Short-term evolution of the surface is now observed through flights at 200 m height. The pictures are taken with a side overlap of 80%. To precisely georeference the DEMs produced, ground control points are placed on the study site and surveyed using a Leica GPS1200 (accuracy of 1 cm for the x and y coordinates and 1.5 cm for z). Flights are done each year in December, when the ground surface is as bare as possible. Specific treatments are being developed to counteract the effect of vegetation, which is known as a key source of error in DEMs produced by small unmanned aircraft
Performance of multifractal detrended fluctuation analysis on short time series
NASA Astrophysics Data System (ADS)
López, Juan Luis; Contreras, Jesús Guillermo
2013-02-01
The performance of the multifractal detrended analysis on short time series is evaluated for synthetic samples of several mono- and multifractal models. The reconstruction of the generalized Hurst exponents is used to determine the range of applicability of the method and the precision of its results as a function of the decreasing length of the series. As an application the series of the daily exchange rate between the U.S. dollar and the euro is studied.
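The monofractal backbone of the method, ordinary detrended fluctuation analysis (the q = 2 case of MFDFA), is compact enough to sketch. For a white-noise series the fitted exponent should be near 0.5; this is a simplified illustration, not the full multifractal estimator evaluated in the paper:

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis (the q = 2 special case of
    MFDFA): slope of log F(s) versus log s for the profile of x."""
    y = np.cumsum(x - np.mean(x))      # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)            # local linear detrend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(11)
scales = np.array([16, 32, 64, 128, 256])
h = dfa_exponent(rng.normal(size=8192), scales)
print(abs(h - 0.5) < 0.1)   # white noise: exponent close to 0.5
```

Shortening the series shrinks the usable range of `scales` and inflates the scatter of log F(s), which is precisely the finite-length degradation the paper quantifies for the generalized Hurst exponents.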
Time series modeling of system self-assessment of survival
Lu, H.; Kolarik, W.J.
1999-06-01
Self-assessment of survival for a system, subsystem or component is implemented by assessing conditional performance reliability in real-time, which includes modeling and analysis of physical performance data. This paper proposes a time series analysis approach to system self-assessment (prediction) of survival. In the approach, physical performance data are modeled in a time series. The performance forecast is based on the model developed and is converted to the reliability of system survival. In contrast to a standard regression model, a time series model, using on-line data, is suitable for the real-time performance prediction. This paper illustrates an example of time series modeling and survival assessment, regarding an excessive tool edge wear failure mode for a twist drill operation.
Database for Hydrological Time Series of Inland Waters (DAHITI)
NASA Astrophysics Data System (ADS)
Schwatke, Christian; Dettmering, Denise
2016-04-01
Satellite altimetry was designed for ocean applications. However, for some years satellite altimetry has also been used over inland water to estimate water level time series of lakes, rivers and wetlands. The resulting water level time series can help to understand the Earth system's water cycle and make altimetry a very useful instrument for hydrological applications. In this poster, we introduce the "Database for Hydrological Time Series of Inland Waters" (DAHITI). Currently, the database contains about 350 water level time series of lakes, reservoirs, rivers, and wetlands, which are freely available after a short registration process via http://dahiti.dgfi.tum.de. We introduce the products of DAHITI and the functionality of the DAHITI web service, and present selected examples of inland water targets in detail. DAHITI provides time series of water level heights of inland water bodies and their formal errors. These time series are available within the period 1992-2015 and have varying temporal resolutions depending on the data coverage of the investigated water body. The accuracy of the water level time series depends mainly on the extent of the investigated water body and the quality of the altimeter measurements. An external validation with in-situ data reveals RMS differences between 5 cm and 40 cm for lakes and between 10 cm and 140 cm for rivers, respectively.
Estimation of Parameters from Discrete Random Nonstationary Time Series
NASA Astrophysics Data System (ADS)
Takayasu, H.; Nakamura, T.
For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, batting average of a baseball player and sales volume of home electronics.
Detecting temporal and spatial correlations in pseudoperiodic time series
NASA Astrophysics Data System (ADS)
Zhang, Jie; Luo, Xiaodong; Nakamura, Tomomichi; Sun, Junfeng; Small, Michael
2007-01-01
Recently there has been much attention devoted to exploring the complicated, possibly chaotic dynamics in pseudoperiodic time series. Two methods [Zhang, Phys. Rev. E 73, 016216 (2006); Zhang and Small, Phys. Rev. Lett. 96, 238701 (2006)] have been put forward to reveal the chaotic temporal and spatial correlations, respectively, among the cycles in the time series. Both methods treat the cycle as the basic unit and design specific statistics that indicate the presence of chaotic dynamics. In this paper, we verify the validity of these statistics for capturing the chaotic correlation among cycles by using the surrogate data method. In particular, the statistics computed for the original time series are compared with those from its surrogates. The surrogate data we generate are of the pseudoperiodic surrogate (PPS) type, which preserves the inherent periodic components while destroying the subtle nonlinear (chaotic) structure. Since the inherent chaotic correlations among cycles, either spatial or temporal (which are suitably characterized by the proposed statistics), are eliminated through the surrogate generation process, we expect the statistics from the surrogates to take significantly different values than those from the original time series. Hence the ability of the statistics to capture the chaotic correlation in the time series can be validated. Application of this procedure to both chaotic time series and real-world data clearly demonstrates the effectiveness of the statistics. We have found clear evidence of chaotic correlations among cycles in human electrocardiogram and vowel time series. Furthermore, we show that this framework is more sensitive to subtle changes in the dynamics of the time series due to the match between the PPS surrogates and the statistics adopted. It offers a more reliable tool to reveal the possible correlations among cycles intrinsic to the chaotic nature of pseudoperiodic time series.
Modelling road accidents: An approach using structural time series
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
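The local level component of such a structural time series model can be sketched with a standard Kalman filter; the seasonal component is omitted for brevity, and the variances and data below are illustrative, not the Malaysian accident counts.

```python
def local_level_filter(y, var_eps, var_eta, mu0=0.0, p0=1e7):
    """Kalman filter for the local level model
        y_t  = mu_t + eps_t,     eps_t ~ N(0, var_eps)
        mu_t = mu_{t-1} + eta_t, eta_t ~ N(0, var_eta)
    Returns the filtered level estimates. A diffuse prior (large p0)
    lets the first observation dominate the initial level."""
    mu, p = mu0, p0
    filtered = []
    for obs in y:
        # prediction step: the level follows a random walk
        p = p + var_eta
        # update step
        f = p + var_eps          # innovation variance
        k = p / f                # Kalman gain
        mu = mu + k * (obs - mu)
        p = (1 - k) * p
        filtered.append(mu)
    return filtered

# With a high signal-to-noise ratio the filter tracks level shifts closely.
level = local_level_filter([10.0, 10.2, 9.9, 14.0, 14.1, 13.8],
                           var_eps=0.1, var_eta=1.0)
```

Adding the seasonal term turns the scalar recursion into the matrix form of the Kalman filter, which is what structural time series software fits in practice.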
Wavelet analysis and scaling properties of time series
NASA Astrophysics Data System (ADS)
Manimaran, P.; Panigrahi, Prasanta K.; Parikh, Jitendra C.
2005-10-01
We propose a wavelet-based method for the characterization of the scaling behavior of nonstationary time series. It makes use of the built-in ability of wavelets to capture trends in a data set over variable window sizes. Discrete wavelets from the Daubechies family are used to illustrate the efficacy of this procedure. After studying binomial multifractal time series with the present and earlier approaches of detrending for comparison, we analyze the time series of averaged spin density in the 2D Ising model at the critical temperature, along with several experimental data sets possessing multifractal behavior.
Estimation of connectivity measures in gappy time series
NASA Astrophysics Data System (ADS)
Papadopoulos, G.; Kugiumtzis, D.
2015-10-01
A new method is proposed to compute connectivity measures on multivariate time series with gaps. Rather than removing or filling the gaps, the rows of the joint data matrix containing empty entries are removed and the calculations are done on the remainder matrix. The method, called measure adapted gap removal (MAGR), can be applied to any connectivity measure that uses a joint data matrix, such as cross correlation, cross mutual information and transfer entropy. MAGR is favorably compared using these three measures to a number of known gap-filling techniques, as well as the gap closure. The superiority of MAGR is illustrated on time series from synthetic systems and financial time series.
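For the zero-lag cross correlation, the MAGR idea reduces to dropping every time index at which either series has a gap and computing the measure on the remaining joint rows. The sketch below shows only this case; for lagged measures such as cross mutual information or transfer entropy, the joint data matrix would also contain lagged copies of the series before rows are removed.

```python
import math

GAP = None  # marker for a missing entry

def magr_cross_correlation(x, y):
    """Measure-adapted gap removal for zero-lag cross correlation:
    drop every time index where either series has a gap, then compute
    the Pearson correlation on the remaining joint rows."""
    pairs = [(a, b) for a, b in zip(x, y) if a is not GAP and b is not GAP]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs)
    sx = math.sqrt(sum((a - mx) ** 2 for a, _ in pairs))
    sy = math.sqrt(sum((b - my) ** 2 for _, b in pairs))
    return cov / (sx * sy)

# Perfectly correlated series stay perfectly correlated after gap removal.
r = magr_cross_correlation([1.0, None, 3.0, 4.0, 5.0],
                           [2.0, 9.9, 6.0, None, 10.0])
```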
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. PMID:26953181
Quantifying Memory in Complex Physiological Time-Series
Shirazi, Amir H.; Raoufy, Mohammad R.; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R.; Amodio, Piero; Jafari, G. Reza; Montagnese, Sara; Mani, Ali R.
2013-01-01
In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of “memory length” was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are ‘forgotten’ quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations. PMID:24039811
Nonstationary time series prediction combined with slow feature analysis
NASA Astrophysics Data System (ADS)
Wang, G.; Chen, X.
2015-07-01
Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when constructing the climate dynamics. This paper presents a new technique of obtaining the driving forces of a time series from the slow feature analysis (SFA) approach, and then introduces them into a predictive model to predict nonstationary time series. The basic theory of the technique is to consider the driving forces as state variables and to incorporate them into the predictive model. Experiments using a modified logistic time series and winter ozone data in Arosa, Switzerland, were conducted to test the model. The results showed improved prediction skills.
A mixed time series model of binomial counts
NASA Astrophysics Data System (ADS)
Khoo, Wooi Chen; Ong, Seng Huat
2015-10-01
Continuous time series modelling has been an active research area in the past few decades. However, time series data in terms of correlated counts appear in many situations, such as counts of rainy days and of access downloads. Therefore, the study of count data has become popular in time series modelling recently. This article introduces a new mixture model, which is a univariate non-negative stationary time series model with binomial marginal distribution, arising from the combination of the well-known binomial thinning and Pegram's operators. A brief review of important properties is carried out and the EM algorithm is applied in parameter estimation. A numerical study is presented to show the performance of the model. Finally, a potential real application is presented to illustrate the advantage of the new mixture model.
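A hedged simulation sketch conveys the flavour of such thinning-based count models: at each step the new count is either a binomial-thinning "survival" of the previous count or a fresh draw from the binomial innovation distribution. This is one plausible mixture of that kind, not necessarily the exact specification obtained via Pegram's operator in the cited paper.

```python
import random

def simulate_thinning_mixture(n, N, p, alpha, phi, seed=0):
    """Hedged sketch: a count series on {0, ..., N} built by mixing, at
    each step, binomial thinning of the previous count (prob. phi) with
    a fresh Binomial(N, p) innovation (prob. 1 - phi). The exact
    construction in the cited paper (via Pegram's operator) may differ."""
    rng = random.Random(seed)

    def binom(trials, prob):
        # draw Binomial(trials, prob) as a sum of Bernoulli trials
        return sum(rng.random() < prob for _ in range(trials))

    x = [binom(N, p)]                       # start from the marginal
    for _ in range(n - 1):
        if rng.random() < phi:
            x.append(binom(x[-1], alpha))   # alpha o X_{t-1}: thinning
        else:
            x.append(binom(N, p))           # binomial innovation
    return x

series = simulate_thinning_mixture(1000, N=10, p=0.4, alpha=0.7, phi=0.5)
```

Since thinning can only keep or remove existing counts, the series stays on {0, ..., N} by construction.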
The use of synthetic input sequences in time series modeling
NASA Astrophysics Data System (ADS)
de Oliveira, Dair José; Letellier, Christophe; Gomes, Murilo E. D.; Aguirre, Luis A.
2008-08-01
In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input, that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.
Time Series Analysis of Insar Data: Methods and Trends
NASA Technical Reports Server (NTRS)
Osmanoglu, Batuhan; Sunar, Filiz; Wdowinski, Shimon; Cano-Cabral, Enrique
2015-01-01
Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to solve for this ambiguity. These algorithms may employ different models for time series analysis, but they all generate a first-order deformation rate, which can be compared to each other. However, there is no single algorithm that can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data using an example set of results for measuring subsidence rates in Mexico City.
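The phase-ambiguity problem mentioned above can be illustrated in its simplest, purely temporal, one-dimensional form: whenever consecutive phase samples jump by more than pi, add the multiple of 2*pi that makes the step smallest. InSAR time series methods solve this ambiguity jointly in space and time; this sketch shows only the basic idea.

```python
import math

def unwrap(phases):
    """1-D phase unwrapping: whenever consecutive samples jump by more
    than pi, accumulate the 2*pi offset that makes the step smallest."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2 * math.pi
        elif d < -math.pi:
            offset += 2 * math.pi
        out.append(cur + offset)
    return out

# A steadily increasing phase wrapped into (-pi, pi] is fully recovered.
true = [0.5 * i for i in range(10)]
wrapped = [math.atan2(math.sin(v), math.cos(v)) for v in true]
recovered = unwrap(wrapped)
```

The assumption baked into this (and every) unwrapping scheme is that the true phase changes by less than pi between samples; when motion between acquisitions exceeds half a wavelength, temporal unwrapping alone fails, which is why spatial information is used as well.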
Comparison of New and Old Sunspot Number Time Series
NASA Astrophysics Data System (ADS)
Cliver, Edward W.; Clette, Frédéric; Lefévre, Laure; Svalgaard, Leif
2016-05-01
As a result of the Sunspot Number Workshops, five new sunspot series have recently been proposed: a revision of the original Wolf or international sunspot number (Lockwood et al., 2014), a backbone-based group sunspot number (Svalgaard and Schatten, 2016), a revised group number series that employs active day fractions (Usoskin et al., 2016), a provisional group sunspot number series (Cliver and Ling, 2016) that removes flaws in the normalization scheme for the original group sunspot number (Hoyt and Schatten, 1998), and a revised Wolf or international number (termed SN) published on the SILSO website as a replacement for the original Wolf number (Clette and Lefèvre, 2016; http://www.sidc.be/silso/datafiles). Despite quite different construction methods, the five new series agree reasonably well after about 1900. From 1750 to ~1875, however, the Lockwood et al. and Usoskin et al. time series are lower than the other three series. Analysis of the Hoyt and Schatten normalization factors used to scale secondary observers to their Royal Greenwich Observatory primary observer reveals a significant inhomogeneity spanning the divergence in ~1885 of the group number from the original Wolf number. In general, a correction factor time series, obtained by dividing an annual group count series by the corresponding yearly averages of raw group counts for all observers, can be used to assess the reliability of new sunspot number reconstructions.
A method for detecting changes in long time series
Downing, D.J.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.
1995-09-01
Modern scientific activities, both physical and computational, can result in time series of many thousands or even millions of data values. Here the authors describe a statistically motivated algorithm for quick screening of very long time series data for the presence of potentially interesting but arbitrary changes. The basic data model is a stationary Gaussian stochastic process, and the approach to detecting a change is the comparison of two predictions of the series at a time point or contiguous collection of time points. One prediction is a "forecast", i.e., based on data from earlier times, while the other is a "backcast", i.e., based on data from later times. The statistic is the absolute value of the log-likelihood ratio for these two predictions, evaluated at the observed data. A conservative procedure is suggested for specifying critical values for the statistic under the null hypothesis of "no change".
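The forecast/backcast comparison can be sketched in a deliberately simplified form: predict the value at time t once from the k earlier points and once from the k later points, and evaluate the absolute log-likelihood ratio of the two predictions at the observed value. Here both predictions are plain window means with a Gaussian error model, not the full stationary-process predictors of the paper.

```python
import math
import statistics

def change_statistic(series, t, k=20):
    """Simplified forecast/backcast change statistic at index t:
    |log L_forecast(x_t) - log L_backcast(x_t)| under Gaussian
    error models fitted to the k points before and after t."""
    fore = series[t - k:t]          # "forecast" window (earlier data)
    back = series[t + 1:t + 1 + k]  # "backcast" window (later data)

    def loglik(window, x):
        mu = statistics.fmean(window)
        sd = statistics.stdev(window) or 1e-12
        return (-math.log(sd * math.sqrt(2 * math.pi))
                - (x - mu) ** 2 / (2 * sd ** 2))

    return abs(loglik(fore, series[t]) - loglik(back, series[t]))

# A level shift right after t makes the two predictions disagree sharply.
flat = [0.0, 0.1, -0.1] * 20
shifted = flat[:30] + [v + 5.0 for v in flat[30:]]
stat_change = change_statistic(shifted, 30, k=20)
stat_none = change_statistic(flat, 30, k=20)
```

Screening a long series then amounts to sliding t along the record and flagging indices where the statistic exceeds a critical value.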
Symplectic geometry spectrum regression for prediction of noisy time series
NASA Astrophysics Data System (ADS)
Xie, Hong-Bo; Dokos, Socrates; Sivakumar, Bellie; Mengersen, Kerrie
2016-05-01
We present the symplectic geometry spectrum regression (SGSR) technique as well as a regularized method based on SGSR for prediction of nonlinear time series. The main tool of analysis is the symplectic geometry spectrum analysis, which decomposes a time series into the sum of a small number of independent and interpretable components. The key to successful regularization is to damp higher order symplectic geometry spectrum components. The effectiveness of SGSR and its superiority over local approximation using ordinary least squares are demonstrated through prediction of two noisy synthetic chaotic time series (Lorenz and Rössler series), and then tested for prediction of three real-world data sets (Mississippi River flow data and electromyographic and mechanomyographic signals recorded from the human body).
Exploratory Causal Analysis in Bivariate Time Series Data
NASA Astrophysics Data System (ADS)
McCracken, James M.
Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, and data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between ''voltage drives current'' and ''current drives voltage'' as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools. A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data.
Scalable Hyper-parameter Estimation for Gaussian Process Based Time Series Analysis
Chandola, Varun; Vatsavai, Raju
2010-01-01
Gaussian processes (GPs) are increasingly popular as a kernel machine learning tool for non-parametric data analysis. Recently, GPs have been applied to model non-linear dependencies in time series data. GP based analysis can be used to solve problems of time series prediction, forecasting, missing data imputation, change point detection, anomaly detection, etc. But the use of GPs to handle massive scientific time series data sets has been limited, owing to their expensive computational complexity. The primary bottleneck is the handling of the covariance matrix, whose size is quadratic in the length of the time series. In this paper we propose a scalable method that exploits the special structure of the covariance matrix for hyper-parameter estimation in GP based learning. The proposed method allows estimation of the hyper-parameters associated with the GP in quadratic time, which is an order of magnitude improvement over standard methods with cubic complexity. Moreover, the proposed method does not require explicit computation of the covariance matrix and hence has a memory requirement linear in the length of the time series, as opposed to the quadratic memory requirement of standard methods. To further improve the computational complexity of the proposed method, we provide a parallel version to concurrently estimate the log likelihood for a set of time series, which is the key step in the hyper-parameter estimation. Performance results on a multi-core system show that our proposed method provides significant speedups, as high as 1000, even when running in serial mode, while maintaining a small memory footprint. The parallel version exploits the natural parallelization potential of the serial algorithm and is shown to perform significantly better than the fast serial algorithm, with speedups as high as 10.
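The structural observation underlying such methods can be shown concretely: for an evenly sampled time series with a stationary kernel, the covariance matrix is Toeplitz, so storing its first row suffices (memory linear in the series length) and matrix-vector products never need the full matrix. The paper's full method layers further tricks on top of this; the sketch below shows only the storage and implicit-product idea.

```python
import math

def toeplitz_matvec(first_row, v):
    """Multiply K @ v where K[i][j] = first_row[|i - j|] is the Toeplitz
    covariance matrix of an evenly sampled, stationary-kernel GP.
    Only the first row is stored; K is never formed explicitly."""
    n = len(v)
    return [sum(first_row[abs(i - j)] * v[j] for j in range(n))
            for i in range(n)]

# Squared-exponential kernel evaluated at integer lags 0..n-1.
n = 5
row = [math.exp(-0.5 * lag ** 2) for lag in range(n)]
# Multiplying by a unit vector picks out one column of K.
col0 = toeplitz_matvec(row, [1.0, 0.0, 0.0, 0.0, 0.0])
```

In practice the O(n^2) product above is further reduced to O(n log n) by embedding the Toeplitz matrix in a circulant one and using the FFT, which is how quadratic-time hyper-parameter estimation becomes feasible.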
Late time tails of the massive vector field in a black hole background
Konoplya, R. A.; Zhidenko, A.; Molina, C.
2007-04-15
We investigate the late-time behavior of the massive vector field in the background of the Schwarzschild and Schwarzschild-de Sitter black holes. For the Schwarzschild black hole, at intermediately late times the massive vector field is represented by three functions with different decay laws: ψ0 ~ t^(-(l+3/2)) sin(mt), ψ1 ~ t^(-(l+5/2)) sin(mt), ψ2 ~ t^(-(l+1/2)) sin(mt), while at asymptotically late times the decay law ψ ~ t^(-5/6) sin(mt) is universal and does not depend on the multipole number l. Together with a previous study of massive scalar and Dirac fields, where the same asymptotically late-time decay law was found, this means that the asymptotically late-time decay law ~ t^(-5/6) sin(mt) also does not depend on the spin of the field under consideration. For Schwarzschild-de Sitter black holes, two different regimes are observed in the late-time decay of perturbations: nonoscillatory exponential damping for small values of m and oscillatory quasinormal mode decay for high enough m. Numerical and analytical results are found for these quasinormal frequencies.
Evaluation of Scaling Invariance Embedded in Short Time Series
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent of a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series. Calculations with specified Hurst exponent values show that, by using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents of short time series with negligible bias and a sharp confidence interval. Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records. PMID:25549356
Detection of flood events in hydrological discharge time series
NASA Astrophysics Data System (ADS)
Seibert, S. P.; Ehret, U.
2012-04-01
The shortcomings of mean-squared-error (MSE) based distance metrics are well known (Beran 1999, Schaeffli & Gupta 2007), and the development of novel distance metrics (Pappenberger & Beven 2004, Ehret & Zehe 2011) and multi-criteria approaches enjoys increasing popularity (Reusser 2009, Gupta et al. 2009). Nevertheless, the hydrological community still lacks metrics which identify and thus allow signature-based evaluations of hydrological discharge time series. Signature-based evaluations are required wherever specific time series features, such as flood events, are of special concern. Calculation of event-based runoff coefficients or precise knowledge of flood event characteristics (such as the onset or duration of the rising limb, or the volume of the falling limb) are possible applications. The same applies to flood forecasting/simulation models. Directly comparing simulated and observed flood event features may reveal thorough insights into model dynamics. Compared to continuous space- and time-aggregated distance metrics, event-based evaluations may provide answers like the distributions of event characteristics or the percentage of the events which were actually reproduced by a hydrological model. They may also help to provide information on the simulation accuracy of small, medium and/or large events in terms of timing and magnitude. However, the number of approaches which expose time series features is small and their usage is limited to very specific questions (Merz & Blöschl 2009, Norbiato et al. 2009). We believe this is due to the following reasons: i) a generally accepted definition of the signature of interest is missing or difficult to obtain (in our case: what makes a flood event a flood event?) and/or ii) it is difficult to translate such a definition into an equation or (graphical) procedure which exposes the feature of interest in the discharge time series. We reviewed approaches which detect event starts and/or ends in hydrological discharge time series.
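The difficulty of pinning down "what makes a flood event a flood event" is easy to appreciate against the crudest possible baseline: declare an event to be any maximal run of time steps with discharge above a fixed threshold. The sketch below is exactly that naive baseline, not the signature-based detection the abstract calls for; it ignores rising/falling limbs, baseflow, and event merging entirely.

```python
def detect_events(discharge, threshold):
    """Naive event extraction: a flood event is a maximal contiguous
    run of time steps with discharge strictly above the threshold.
    Returns (start_index, end_index) pairs, inclusive."""
    events, start = [], None
    for i, q in enumerate(discharge):
        if q > threshold and start is None:
            start = i                     # event begins
        elif q <= threshold and start is not None:
            events.append((start, i - 1)) # event ends at previous step
            start = None
    if start is not None:                 # event still open at series end
        events.append((start, len(discharge) - 1))
    return events

# Two synthetic flood peaks in an otherwise low-flow record.
hydro = [1, 1, 2, 6, 9, 7, 3, 1, 1, 5, 8, 2, 1]
events = detect_events(hydro, threshold=4)
```

Every shortcoming of this baseline (threshold choice, split peaks, neglected limbs) is an instance of the definitional problem the authors describe.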
Statistical modelling of agrometeorological time series by exponential smoothing
NASA Astrophysics Data System (ADS)
Murat, Małgorzata; Malinowska, Iwona; Hoffmann, Holger; Baranowski, Piotr
2016-01-01
Meteorological time series are used in modelling agrophysical processes of the soil-plant-atmosphere system which determine plant growth and yield. Additionally, long-term meteorological series are used in climate change scenarios. Such studies often require forecasting or projection of meteorological variables, eg the projection of occurrence of the extreme events. The aim of the article was to determine the most suitable exponential smoothing models to generate forecast using data on air temperature, wind speed, and precipitation time series in Jokioinen (Finland), Dikopshof (Germany), Lleida (Spain), and Lublin (Poland). These series exhibit regular additive seasonality or non-seasonality without any trend, which is confirmed by their autocorrelation functions and partial autocorrelation functions. The most suitable models were indicated by the smallest mean absolute error and the smallest root mean squared error.
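The "additive seasonality without trend" case singled out above corresponds to exponential smoothing with a level and a seasonal index but no trend term. The sketch below is a minimal illustration with made-up data and smoothing constants, not the models fitted to the Jokioinen, Dikopshof, Lleida, or Lublin series.

```python
def seasonal_exponential_smoothing(y, period, alpha, gamma):
    """Additive-seasonal exponential smoothing without a trend term.
    alpha smooths the level, gamma the seasonal indices.
    Returns the one-step-ahead forecast for each observation."""
    level = y[0]
    seasonal = [0.0] * period
    forecasts = []
    for t, obs in enumerate(y):
        s = seasonal[t % period]
        forecasts.append(level + s)
        # update the level, then the seasonal index for this phase
        level_new = alpha * (obs - s) + (1 - alpha) * level
        seasonal[t % period] = gamma * (obs - level_new) + (1 - gamma) * s
        level = level_new
    return forecasts

# A pure period-3 pattern is learned after a few cycles.
data = [10.0, 20.0, 30.0] * 8
fc = seasonal_exponential_smoothing(data, period=3, alpha=0.3, gamma=0.6)
```

Model selection in the abstract's sense then amounts to comparing such variants (with/without trend, seasonality) by mean absolute error and root mean squared error of these one-step forecasts.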
Stochastic modeling of hourly rainfall times series in Campania (Italy)
NASA Astrophysics Data System (ADS)
Giorgio, M.; Greco, R.
2009-04-01
Occurrence of flowslides and floods in small catchments is uneasy to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. Effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which can allow gaining larger lead-times. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX, ARMAX (e.g. Salas [1992]). Such models gave the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neymann-Scott Rectangular Pulse) model [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of rainfall time series is modeled as an alternating renewal process. Final aim of the study is to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall hieght data provided by the rain gauges of Campania Region civil
Compounding approach for univariate time series with nonstationary variances.
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances. PMID:26764768
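The first step of the compounding approach, extracting the distribution of local variances from windowed data, is straightforward to sketch. The window length and the synthetic volatility-shift data below are illustrative choices, not the fan-turbulence or exchange-rate settings of the paper.

```python
import random
import statistics

def local_variances(series, window):
    """Decompose a signal into non-overlapping windows, assume rough
    stationarity within each window, and collect the local variances.
    Their empirical distribution is what gets compounded with the
    short-horizon Gaussian to model long-term statistics."""
    return [statistics.pvariance(series[i:i + window])
            for i in range(0, len(series) - window + 1, window)]

# A series whose volatility doubles halfway through yields a clearly
# bimodal collection of local variances (about 1 early, about 4 late).
rng = random.Random(42)
series = ([rng.gauss(0, 1) for _ in range(500)]
          + [rng.gauss(0, 2) for _ in range(500)])
lv = local_variances(series, window=100)
```

Fitting a parametric form to this variance distribution and integrating it against the local Gaussian then gives the heavy-tailed long-horizon distribution the compounding ansatz predicts.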
Generalized Dynamic Factor Models for Mixed-Measurement Time Series
Cui, Kai; Dunson, David B.
2013-01-01
In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation (GDKA) and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulated studies and applied them to the analysis of intertwined credit and recovery risk for Moody’s rated firms from 1982–2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplemental materials available online. PMID:24791133
A refined fuzzy time series model for stock market forecasting
NASA Astrophysics Data System (ADS)
Jilani, Tahseen Ahmed; Burney, Syed Muhammad Aqil
2008-05-01
Time series models have been used to make predictions of stock prices, academic enrollments, weather, road accident casualties, etc. In this paper we present a simple time-variant fuzzy time series forecasting method. The proposed method uses a heuristic approach to define frequency-density-based partitions of the universe of discourse. We have proposed a fuzzy metric to use the frequency-density-based partitioning. The proposed fuzzy metric also uses a trend predictor to calculate the forecast. The new method is applied to forecasting the TAIEX and enrollment forecasting at the University of Alabama. It is shown that the proposed method works with higher accuracy than other fuzzy time series methods developed for forecasting the TAIEX and enrollments of the University of Alabama.
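Frequency-density-based partitioning can be sketched as follows: start from equal-width intervals over the universe of discourse, then split any interval that holds a disproportionate share of the observations, so dense regions receive finer fuzzy sets. The split threshold, interval count, and single-level splitting below are illustrative choices; fuzzy time series papers differ in these details.

```python
def frequency_density_partitions(data, n_coarse=7, split_threshold=0.25):
    """Hedged sketch of frequency-density-based partitioning: equal-width
    intervals over [min, max], with any interval holding more than
    split_threshold of the observations split once at its midpoint."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / n_coarse
    bounds = [lo + i * width for i in range(n_coarse + 1)]
    out = []
    for a, b in zip(bounds, bounds[1:]):
        # count observations in [a, b); fold the maximum into the last bin
        inside = sum(a <= x < b for x in data) + (b == hi) * data.count(hi)
        if inside / len(data) > split_threshold:
            mid = (a + b) / 2
            out.extend([(a, mid), (mid, b)])  # dense: finer fuzzy sets
        else:
            out.append((a, b))
    return out

# Most mass sits in [1, 4), so only that interval gets subdivided.
parts = frequency_density_partitions([1, 2, 2, 2, 2, 2, 3, 5, 8, 13],
                                     n_coarse=4)
```

Each resulting interval then becomes the support of one fuzzy set, and forecasting proceeds on the induced fuzzy logical relationships.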
Multiscale entropy analysis of complex physiologic time series.
Costa, Madalena; Goldberger, Ary L; Peng, C-K
2002-08-01
There has been considerable interest in quantifying the complexity of physiologic time series, such as heart rate. However, traditional algorithms indicate higher complexity for certain pathologic processes associated with random outputs than for healthy dynamics exhibiting long-range correlations. This paradox may be due to the fact that conventional algorithms fail to account for the multiple time scales inherent in healthy physiologic dynamics. We introduce a method to calculate multiscale entropy (MSE) for complex time series. We find that MSE robustly separates healthy and pathologic groups and consistently yields higher values for simulated long-range correlated noise compared to uncorrelated noise. PMID:12190613
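The MSE procedure of Costa et al. (coarse-grain the series at each scale, then compute sample entropy) can be sketched as follows. The tolerance r = 0.2 SD of the original series and the template-counting SampEn implementation are common conventions, not the authors' exact code.

```python
import numpy as np

def sample_entropy(x, m=2, tol=None):
    # SampEn(m, r): -log of the ratio of (m+1)-point to m-point
    # template matches within tolerance tol (Chebyshev distance).
    x = np.asarray(x, dtype=float)
    if tol is None:
        tol = 0.2 * x.std()
    def matches(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(t) - 1):  # count pairs i < j
            c += np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= tol)
        return c
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a and b else float("inf")

def multiscale_entropy(x, max_scale=4, m=2):
    # Coarse-grain by non-overlapping means at each scale, keeping
    # the tolerance fixed from the original series (Costa et al.).
    x = np.asarray(x, dtype=float)
    tol = 0.2 * x.std()
    vals = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        cg = x[:n * s].reshape(n, s).mean(axis=1)
        vals.append(sample_entropy(cg, m, tol))
    return vals

rng = np.random.default_rng(1)
white = rng.normal(size=2000)
mse = multiscale_entropy(white, max_scale=3)
# for uncorrelated noise, entropy decreases with scale (averaging
# removes variance), reproducing the qualitative result above
```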
Minimum entropy density method for the time series analysis
NASA Astrophysics Data System (ADS)
Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae
2009-01-01
The entropy density is an intuitive and powerful concept for studying the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, defined as the scale at which the uncertainty is minimized and hence the underlying pattern is most clearly revealed. The MEDM is applied to the financial time series of the Standard and Poor's 500 index from February 1983 to April 2006. The temporal behavior of the structure scale is then obtained and analyzed in relation to the information delivery time and the efficient market hypothesis.
Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system
Fijany, A.; Milman, M.; Redding, D.
1994-12-31
In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, the large size of the system and the high sampling-rate requirement make the implementation of this control algorithm a computationally challenging problem, since it demands a sustained computational throughput on the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
Wavelet analysis for non-stationary, nonlinear time series
NASA Astrophysics Data System (ADS)
Schulte, Justin A.
2016-08-01
Methods for detecting and quantifying nonlinearities in nonstationary time series are introduced and developed. In particular, higher-order wavelet analysis was applied to an ideal time series and the quasi-biennial oscillation (QBO) time series. Multiple-testing problems inherent in wavelet analysis were addressed by controlling the false discovery rate. A new local autobicoherence spectrum facilitated the detection of local nonlinearities and the quantification of cycle geometry. The local autobicoherence spectrum of the QBO time series showed that the QBO time series contained a mode with a period of 28 months that was phase coupled to a harmonic with a period of 14 months. An additional nonlinearly interacting triad was found among modes with periods of 10, 16 and 26 months. Local biphase spectra determined that the nonlinear interactions were not quadratic and that the effect of the nonlinearities was to produce non-smoothly varying oscillations. The oscillations were found to be skewed so that negative QBO regimes were preferred, and also asymmetric in the sense that phase transitions between the easterly and westerly phases occurred more rapidly than those from westerly to easterly regimes.
Time Series Analysis Based on Running Mann Whitney Z Statistics
Technology Transfer Automated Retrieval System (TEKTRAN)
A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
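A minimal sketch of the running construction follows, assuming the moving window is split into "before" and "after" halves at each point (the abstract is truncated, so the exact windowing is an assumption). The null mean and variance of U are the standard normal-approximation constants for the Mann-Whitney statistic.

```python
import numpy as np

def running_mann_whitney_z(x, half=20):
    # At each position, compare the `half` points before and after
    # with a Mann-Whitney U statistic, then normalize U to Z using
    # its null mean n1*n2/2 and variance n1*n2*(n1+n2+1)/12.
    x = np.asarray(x, dtype=float)
    n1 = n2 = half
    mu = n1 * n2 / 2.0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = np.full(len(x), np.nan)
    for i in range(half, len(x) - half):
        a, b = x[i - half:i], x[i:i + half]
        # U = number of (a_j, b_k) pairs with a_j < b_k (+0.5 per tie)
        u = sum((a < bv).sum() + 0.5 * (a == bv).sum() for bv in b)
        z[i] = (u - mu) / sigma
    return z

steps = np.concatenate([np.zeros(100), np.ones(100)])
z = running_mann_whitney_z(steps, half=20)
# |Z| peaks at the change-point near index 100
```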
Nonlinear Analysis of Surface EMG Time Series of Back Muscles
NASA Astrophysics Data System (ADS)
Dolton, Donald C.; Zurcher, Ulrich; Kaufman, Miron; Sung, Paul
2004-10-01
A nonlinear analysis of surface electromyography time series of subjects with and without low back pain is presented. The mean-square displacement and entropy show anomalous diffusive behavior in the intermediate time range 10 ms < t < 1 s. This behavior implies the presence of correlations in the signal. We discuss the shape of the power spectrum of the signal.
Long-range correlations in time series generated by time-fractional diffusion: A numerical study
NASA Astrophysics Data System (ADS)
Barbieri, Davide; Vivoli, Alessandro
2005-09-01
Time series models showing power-law tails in autocorrelation functions are common in econometrics. A special non-Markovian model for this kind of time series is provided by the random walk introduced by Gorenflo et al. as a discretization of time-fractional diffusion. The time series so obtained are analyzed here from a numerical point of view in terms of autocorrelations and covariance matrices.
MODIS Vegetation Indices time series improvement considering real acquisition dates
NASA Astrophysics Data System (ADS)
Testa, S.; Borgogno Mondino, E.
2013-12-01
Satellite Vegetation Indices (VI) time series images are widely used for the characterization of phenology, which requires a high temporal accuracy of the satellite data. The present work is based on the MODerate resolution Imaging Spectroradiometer (MODIS) MOD13Q1 product - Vegetation Indices 16-Day L3 Global 250m, which is generated through a maximum value compositing process that reduces the number of cloudy pixels and excludes, when possible, off-nadir ones. Because of its 16-day compositing period, the distance between two adjacent-in-time values within each pixel's NDVI time series can range from 1 to 32 days, which is not acceptable for phenologic studies. Moreover, most of the available smoothing algorithms, widely used for phenology characterization, assume that data points are equidistant in time and simultaneous across the image. The objective of this work was to assess temporal features of NDVI time series over a test area, composed of Castanea sativa (chestnut) and Fagus sylvatica (beech) pure pixels within the Piemonte region in Northwestern Italy. Firstly, NDVI, Pixel Reliability (PR) and Composite Day of the Year (CDOY) data ranging from 2000 to 2011 were extracted from MOD13Q1 and the corresponding time series were generated (in further computations, 2000 was not considered, since it is incomplete, acquisition having begun in February, and calibration is unreliable until October). Analysis of the CDOY time series (containing the actual reference date of each NDVI value) over the selected study areas showed NDVI values to be generated predominantly from data acquired at the centre of each 16-day period (the 9th day), roughly constantly throughout the year. This leads us to consider each original NDVI value as nominally placed at the centre of its 16-day reference period. Then, a new NDVI time series was generated: a) moving each NDVI value to its actual "acquisition" date, b) interpolating the obtained temporary time series through SPLINE functions, c) sampling such
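Steps a)-c) can be sketched as follows; linear interpolation stands in here for the SPLINE functions of the abstract, and the sample CDOY/NDVI values are invented for illustration.

```python
import numpy as np

def resample_to_actual_dates(ndvi, composite_doy, step=16):
    # (a) place each composite NDVI value at its actual acquisition
    #     day taken from the CDOY layer,
    # (b) interpolate the irregular series (linear here; the paper
    #     uses SPLINE functions),
    # (c) sample the result on a regular grid.
    order = np.argsort(composite_doy)
    doy = np.asarray(composite_doy, dtype=float)[order]
    val = np.asarray(ndvi, dtype=float)[order]
    grid = np.arange(doy[0], doy[-1] + 1, step)
    return grid, np.interp(grid, doy, val)

# invented actual acquisition days inside successive 16-day windows
doy = np.array([9, 22, 41, 55, 73, 86, 105, 119])
ndvi = np.array([0.30, 0.35, 0.45, 0.60, 0.72, 0.70, 0.55, 0.40])
grid, resampled = resample_to_actual_dates(ndvi, doy)
```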
Mining approximate periodic pattern in hydrological time series
NASA Astrophysics Data System (ADS)
Zhu, Y. L.; Li, S. J.; Bao, N. N.; Wan, D. S.
2012-04-01
There is a lot of information about the hidden laws of nature's evolution and the influence of human activities on the earth's surface in long sequences of hydrological time series. Data mining technology can help find those hidden laws, such as flood frequency and abrupt change, which is useful for the decision support of hydrological prediction and flood control scheduling. The periodic nature of hydrological time series is important for trend forecasting of drought and flood and for hydraulic engineering planning. In hydrology, full-period analysis of hydrological time series has attracted a lot of attention, with methods such as the discrete periodogram, the simple partial wave method, Fourier analysis, maximum entropy spectral analysis and wavelet analysis. In fact, the hydrological process is influenced both by deterministic factors and by stochastic ones. For example, the tidal level is affected by the Moon circling the Earth, in addition to the Earth's revolution and rotation. Hence, there is some kind of approximate period hidden in hydrological time series, which is sometimes called the cryptic period. Recently, partial period mining, which originated in the data mining domain, has emerged as a remedy for the traditional period analysis methods in hydrology, since it has looser requirements on data integrity and continuity and can find partial periods in the time series. This paper is focused on partial period mining in hydrological time series. Based on asynchronous periodic patterns and partial period mining with suffix trees, this paper proposes to mine multi-event asynchronous periodic patterns based on a modified suffix tree representation and traversal, and introduces a dynamic adjustment method for candidate period intervals, which avoids period omissions and waste of time and space. The experimental results on synthetic data and real water level data of the Yangtze River at Nanjing station indicate that this algorithm can discover hydrological
Finding unstable periodic orbits from chaotic time series
NASA Astrophysics Data System (ADS)
Buhl, Michael
Contained within a chaotic attractor is an infinite number of unstable periodic orbits (UPOs). Although these orbits have zero measure, they form a skeleton of the dynamics. However, they are difficult to find from an observed time series. In this thesis I present several methods to find UPOs from measured time series. In Chapter 2 I look at data measured from the stomatogastric system of the California spiny lobster as an example to find unstable periodic orbits. With this time series I use two methods. The first creates a local linear model of the dynamics and finds the periodic orbits of the model, and the second applies a linear transform to the model such that unstable orbits are stable. In addition, in this chapter I describe methods of filtering and embedding the chaotic time series. In Chapter 3 I look at a more complicated model system where the dynamics are described by delay differential equations. Now the future state of the system depends on both the current state and the state a time tau earlier. This makes the phase space of the system infinite dimensional. I present a method for modeling systems such as this and finding UPOs in the infinite dimensional phase space. In Chapters 4 and 5 I describe a new method to find UPOs using symbolic dynamics. This has many advantages over the methods described in Chapter 2; more orbits can be found using a smaller time series---even in the presence of noise. First in Chapter 4 I describe how the phase space can be partitioned so that we can use symbolic dynamics. Then in Chapter 5 I describe how the UPOs can be found from the symbolic time series. Here, I model the symbolic dynamics with a Markov chain, represented by a graph, and then the symbolic UPOs are found from the graph. These symbolic cycles can then be localized back in phase space.
Massively parallel per-pixel-based zerotree processing architecture for real-time video compression
NASA Astrophysics Data System (ADS)
Alagoda, Geoffrey; Rassau, Alexander M.; Eshraghian, Kamran
2001-11-01
In the span of a few years, mobile multimedia communication has rapidly become a significant area of research and development constantly challenging boundaries on a variety of technological fronts. Video compression, a fundamental component for most mobile multimedia applications, generally places heavy demands in terms of the required processing capacity. Hardware implementations of typical modern hybrid codecs require realisation of components such as motion compensation, wavelet transform, quantisation, zerotree coding and arithmetic coding in real-time. While the implementation of such codecs using a fast generic processor is possible, undesirable trade-offs in terms of power consumption and speed must generally be made. The improvement in power consumption that is achievable through the use of a slow-clocked massively parallel processing environment, while maintaining real-time processing speeds, should thus not be overlooked. An architecture to realise such a massively parallel solution for a zerotree entropy coder is, therefore, presented in this paper.
Entropy measure of stepwise component in GPS time series
NASA Astrophysics Data System (ADS)
Lyubushin, A. A.; Yakovlev, P. V.
2016-01-01
A new method for estimating the stepwise component in a time series is suggested. The method is based on the application of a pseudo-derivative. The advantage of this method lies in the simplicity of its practical implementation compared to the more common methods for identifying peculiarities in a time series against the noise. The need for automatic detection of jumps in a noised signal, and for introducing a quantitative measure of the stepwise behavior of the signal, arises in problems of GPS time series analysis. The interest in jumps in the mean level of the GPS signal is associated with the fact that they may reflect typical earthquakes or so-called silent earthquakes. In this paper, we offer criteria for quantifying the degree of stepwise behavior of a noised time series. These criteria are based on calculating the entropy of auxiliary series of averaged stepwise approximations, which are constructed with the use of pseudo-derivatives.
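One simple reading of a "pseudo-derivative" is the difference between local means on either side of each point; the sketch below uses that reading to locate a jump in a noised signal. It is an illustration of the general idea, not the authors' exact construction, and the window length is an invented parameter.

```python
import numpy as np

def pseudo_derivative(x, w=15):
    # Difference between the means of the right and left windows
    # around each point: robust to high-frequency noise, large at
    # step-like changes in the mean level.
    x = np.asarray(x, dtype=float)
    d = np.zeros(len(x))
    for i in range(w, len(x) - w):
        d[i] = x[i:i + w].mean() - x[i - w:i].mean()
    return d

rng = np.random.default_rng(2)
signal = np.concatenate([np.zeros(200), 3 * np.ones(200)])
signal += rng.normal(0, 0.5, 400)          # GPS-like noise
d = pseudo_derivative(signal, w=15)
jump = int(np.argmax(np.abs(d)))           # near the true step at 200
```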
Time series, correlation matrices and random matrix models
Vinayak; Seligman, Thomas H.
2014-01-08
In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role in describing a null hypothesis or a minimum information hypothesis for the description of a quantum system or subsystem. In the former case we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. As a consequence, the correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high-temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.
Improvements in Accurate GPS Positioning Using Time Series Analysis
NASA Astrophysics Data System (ADS)
Koyama, Yuichiro; Tanaka, Toshiyuki
Although the Global Positioning System (GPS) is used widely in car navigation systems, cell phones, surveying, and other areas, several issues still exist. We focus on the continuous data received in public use of GPS, and propose a new positioning algorithm that uses time series analysis. By fitting an autoregressive model to the time series of the pseudorange, we propose an appropriate state-space model. We apply the Kalman filter to the state-space model and use the pseudorange estimated by the filter in our positioning calculations. The results of our positioning experiment show that the accuracy of the proposed method is much better than that of the standard method. In addition, as valid estimates can be obtained by time series analysis using the state-space model, the proposed state-space model can be applied in several other fields.
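A minimal sketch of the approach, assuming a scalar AR(1)/random-walk state-space model for the pseudorange (the AR order and coefficients fitted in the paper may differ, and a real pseudorange has a large deterministic geometric component omitted here):

```python
import numpy as np

def kalman_ar1(y, phi, q, r):
    # Scalar Kalman filter for the state-space model
    #   x_t = phi * x_{t-1} + w_t,  w ~ N(0, q)   (state)
    #   y_t = x_t + v_t,            v ~ N(0, r)   (pseudorange)
    # Returns filtered state estimates used in place of the raw
    # pseudorange in the positioning solution.
    xhat, p = y[0], r
    out = [xhat]
    for yt in y[1:]:
        xhat, p = phi * xhat, phi ** 2 * p + q      # predict
        k = p / (p + r)                             # Kalman gain
        xhat, p = xhat + k * (yt - xhat), (1 - k) * p  # update
        out.append(xhat)
    return np.array(out)

rng = np.random.default_rng(3)
true = np.cumsum(rng.normal(0, 0.1, 300))   # slowly varying range
noisy = true + rng.normal(0, 1.0, 300)      # noisy pseudorange
filtered = kalman_ar1(noisy, phi=1.0, q=0.01, r=1.0)
# the filtered series tracks `true` with much smaller error
```

With phi = 1 this is a random-walk model; in practice the AR coefficient would be fitted to the observed pseudorange series.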
On fractal analysis of cardiac interbeat time series
NASA Astrophysics Data System (ADS)
Guzmán-Vargas, L.; Calleja-Quevedo, E.; Angulo-Brown, F.
2003-09-01
In recent years the complexity of a cardiac beat-to-beat time series has been taken as an auxiliary tool to identify the health status of human hearts. Several methods have been employed to characterize the time series complexity. In this work we calculate the fractal dimension of interbeat time series arising from three groups: 10 young healthy persons, 8 elderly healthy persons and 10 patients with congestive heart failure. Our numerical results reflect evident differences in the dynamic behavior corresponding to each group. We discuss these results within the context of the neuroautonomic control of heart rate dynamics. We also propose a numerical simulation that reproduces aging effects of heart rate behavior.
Pharmacokinetic modelling of multi-decadal luminescence time series in coral skeletons
NASA Astrophysics Data System (ADS)
Llewellyn, Lyndon E.; Everingham, Yvette L.; Lough, Janice M.
2012-04-01
As corals grow, they incorporate chemical indicators of seawater conditions into their aragonite skeleton after they have traversed an outer living tissue layer. Long-lived, massive coral skeletons can record decade- and century-long time series of seawater status. One such environmental clue is luminescence intensity which can correspond to river flow patterns and has been attributed to humic acid incorporation. Seawater humic acid levels are linked to river flow as rainfall extracts them from catchment soils to then flow into rivers and coastal seas. However, discrepancies exist when validating coral luminescence records against river flow data with intense luminescence sometimes occurring in the absence of increased flows. This contributes to uncertainty when reconstructing pre-instrumental river flows and rainfall from coral luminescence. Here we demonstrate that a major portion of coral core luminescence time series can be explained using a single-compartment, pharmacokinetic model that incorporates river flow measurements as the equivalent of drug dose. The model was robust for luminescence series in corals from near-shore reefs regularly influenced by river flow. The model implies that after floods, a proportion of subsequent luminescence peaks can be derived from the initial flood. This explains why some luminescence peaks after floods often do not correspond to additional significant river flows. This provides the first mechanism-based explanation for temporal changes in coral skeleton luminescence that incorporates a mathematical link between two independent time series making this proxy even more robust for reconstructing river flow and rainfall.
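The single-compartment idea (river flow as the drug dose, first-order elimination of the luminescence precursor) can be sketched as a discrete recursion; the rate constants below are illustrative, not fitted values from the paper.

```python
import numpy as np

def one_compartment(flow, k_elim=0.3, k_in=1.0):
    # Single-compartment pharmacokinetic sketch: each period's river
    # flow adds k_in * flow to the compartment, which then decays at
    # first-order rate k_elim, so a flood's luminescence signal
    # persists into later periods even without new flow.
    lum = np.zeros(len(flow))
    for t in range(1, len(flow)):
        lum[t] = lum[t - 1] * np.exp(-k_elim) + k_in * flow[t]
    return lum

flow = np.zeros(20)
flow[5] = 10.0                 # a single flood year
lum = one_compartment(flow)
# luminescence peaks at the flood, then decays over following years,
# producing peaks that do not correspond to additional river flows
```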
A multidisciplinary database for geophysical time series management
NASA Astrophysics Data System (ADS)
Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.
2013-12-01
The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. By the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of the period or sampling frequency. Our work describes in detail a possible methodology for storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. The standardization provides the ability to perform operations, such as query and visualization, on many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series on a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.
Dynamic Modeling of time series using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Nair, A. D.; Principe, Jose C.
1995-12-01
Because Artificial Neural Networks (ANNs) have the ability to adapt to and learn complex topologies, they represent a new technology with which to explore dynamical systems. Multi-step prediction is used to capture the dynamics of the system that produced the time series, and is implemented by a recurrent ANN trained with trajectory learning. Two separate memories are employed in training the ANN: the common tapped delay-line memory and the new gamma memory. This methodology has been applied to the time series of a white dwarf and to the quasar 3C 345.
Application of nonlinear time series models to driven systems
Hunter, N.F. Jr.
1990-01-01
In our laboratory we have been engaged in an effort to model nonlinear systems using time series methods. Our objectives have been, first, to understand how the time series response of a nonlinear system unfolds as a function of the underlying state variables, second, to model the evolution of the state variables, and finally, to predict nonlinear system responses. We hope to address the relationship between model parameters and system parameters in the near future. Control of nonlinear systems based on experimentally derived parameters is also a planned topic of future research. 28 refs., 15 figs., 2 tabs.
Scale dependence of the directional relationships between coupled time series
NASA Astrophysics Data System (ADS)
Shirazi, Amir Hossein; Aghamohammadi, Cina; Anvari, Mehrnaz; Bahraminasab, Alireza; Rahimi Tabar, M. Reza; Peinke, Joachim; Sahimi, Muhammad; Marsili, Matteo
2013-02-01
Using the cross-correlation of the wavelet transformation, we propose a general method of studying the scale dependence of the direction of coupling for coupled time series. The method is first demonstrated by applying it to coupled van der Pol forced oscillators and coupled nonlinear stochastic equations. We then apply the method to the analysis of the log-return time series of the stock values of the IBM and General Electric (GE) companies. Our analysis indicates that, on average, IBM stocks react earlier to possible common sector price movements than those of GE.
Scaling analysis of multi-variate intermittent time series
NASA Astrophysics Data System (ADS)
Kitt, Robert; Kalda, Jaan
2005-08-01
The scaling properties of the time series of asset prices and trading volumes of stock markets are analysed. It is shown that similar to the asset prices, the trading volume data obey multi-scaling length-distribution of low-variability periods. In the case of asset prices, such scaling behaviour can be used for risk forecasts: the probability of observing next day a large price movement is (super-universally) inversely proportional to the length of the ongoing low-variability period. Finally, a method is devised for a multi-factor scaling analysis. We apply the simplest, two-factor model to equity index and trading volume time series.
Adaptive median filtering for preprocessing of time series measurements
NASA Technical Reports Server (NTRS)
Paunonen, Matti
1993-01-01
A median (L1-norm) filtering program using polynomials was developed. This program was used in automatic recycling data screening. Additionally, a special adaptive program to work with asymmetric distributions was developed. Examples of adaptive median filtering of satellite laser range observations and TV satellite time measurements are given. The program proved to be versatile and time saving in data screening of time series measurements.
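A running-median screen in the same spirit can be sketched as follows; the MAD-based threshold is a common robust convention and stands in for the polynomial L1-norm fit of the abstract, and all parameter values are illustrative.

```python
import numpy as np

def median_screen(y, window=15, k=3.0):
    # Flag points farther than k robust-sigmas (1.4826 * MAD) from
    # the running median; robust screening in the spirit of the
    # L1-norm (median) filtering described in the abstract.
    y = np.asarray(y, dtype=float)
    half = window // 2
    keep = np.ones(len(y), dtype=bool)
    for i in range(len(y)):
        seg = y[max(0, i - half):i + half + 1]
        med = np.median(seg)
        mad = np.median(np.abs(seg - med)) or 1e-12  # guard zero MAD
        keep[i] = abs(y[i] - med) <= k * 1.4826 * mad
    return keep

rng = np.random.default_rng(4)
ranges = np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.05, 200)
ranges[50] += 2.0                     # a spurious return
keep = median_screen(ranges)          # flags the outlier at index 50
```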
Kālī: Time series data modeler
NASA Astrophysics Data System (ADS)
Kasliwal, Vishal P.
2016-07-01
The fully parallelized and vectorized software package Kālī models time series data using various stochastic processes, such as continuous-time ARMA (C-ARMA) processes, and uses Bayesian Markov Chain Monte Carlo (MCMC) for inference on stochastic light curves. Kālī is written in C++ with Python language bindings for ease of use. Kālī is named jointly after the Hindu goddess of time, change, and power, and also as an acronym for KArma LIbrary.
Learning time series evolution by unsupervised extraction of correlations
Deco, G.; Schuermann, B. )
1995-03-01
As a consequence, we are able to model chaotic and nonchaotic time series. Furthermore, one critical point in modeling time series is the determination of the dimension of the embedding vector used, i.e., the number of components of the past that are needed to predict the future. With this method we can detect the embedding dimension by extracting the influence of the past on the future, i.e., the correlation of the remote past and the future. Optimal embedding dimensions are obtained for the Hénon map and the Mackey-Glass series. When noisy data corrupted by colored noise are used, a model is still possible; the noise is then decorrelated by the network. In the case of modeling a chemical reaction, the most natural volume-conserving architecture is a symplectic network, which describes a system that conserves entropy and therefore the transmitted information.
A multiscale statistical model for time series forecasting
NASA Astrophysics Data System (ADS)
Wang, W.; Pollak, I.
2007-02-01
We propose a stochastic grammar model for random-walk-like time series that has features at several temporal scales. We use a tree structure to model these multiscale features. The inside-outside algorithm is used to estimate the model parameters. We develop an algorithm to forecast the sign of the first difference of a time series. We illustrate the algorithm using log-price series of several stocks and compare with linear prediction and a neural network approach. We furthermore illustrate our algorithm using synthetic data and show that it significantly outperforms both the linear predictor and the neural network. The construction of our synthetic data indicates what types of signals our algorithm is well suited for.
Segmentation of time series with long-range fractal correlations
Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.
2012-01-01
Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997
Ocean time-series near Bermuda: Hydrostation S and the US JGOFS Bermuda Atlantic time-series study
NASA Technical Reports Server (NTRS)
Michaels, Anthony F.; Knap, Anthony H.
1992-01-01
Bermuda is the site of two ocean time-series programs. At Hydrostation S, the ongoing biweekly profiles of temperature, salinity and oxygen now span 37 years. This is one of the longest open-ocean time-series data sets and provides a view of decadal scale variability in ocean processes. In 1988, the U.S. JGOFS Bermuda Atlantic Time-series Study began a wide range of measurements at a frequency of 14-18 cruises each year to understand temporal variability in ocean biogeochemistry. On each cruise, the data range from chemical analyses of discrete water samples to data from electronic packages of hydrographic and optics sensors. In addition, a range of biological and geochemical rate measurements are conducted that integrate over time-periods of minutes to days. This sampling strategy yields a reasonable resolution of the major seasonal patterns and of decadal scale variability. The Sargasso Sea also has a variety of episodic production events on scales of days to weeks and these are only poorly resolved. In addition, there is a substantial amount of mesoscale variability in this region and some of the perceived temporal patterns are caused by the intersection of the biweekly sampling with the natural spatial variability. In the Bermuda time-series programs, we have added a series of additional cruises to begin to assess these other sources of variation and their impacts on the interpretation of the main time-series record. However, the adequate resolution of higher frequency temporal patterns will probably require the introduction of new sampling strategies and some emerging technologies such as biogeochemical moorings and autonomous underwater vehicles.
Complexity analysis of the turbulent environmental fluid flow time series
NASA Astrophysics Data System (ADS)
Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.
2014-02-01
We have used the Kolmogorov complexities, sample and permutation entropies to quantify the randomness degree in river flow time series of two mountain rivers in Bosnia and Herzegovina, representing the turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series from two rivers (the Miljacka and the Bosnia) in the mountain part of their flow and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv Algorithm (LZA) (lower-KLL and upper-KLU), sample entropy (SE) and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE and PE values in two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have explored the sensitivity of considered measures in dependence on the length of time series. In addition, we have divided the period 1926-1990 into three subintervals: (a) 1926-1945, (b) 1946-1965, (c) 1966-1990, and calculated the KLL, KLU, SE, PE values for the various time series in these subintervals. It is found that during the period 1946-1965, there is a decrease in their complexities, and corresponding changes in the SE and PE, in comparison to the period 1926-1990. This complexity loss may be primarily attributed to (i) human interventions, after the Second World War, on these two rivers because of their use for water consumption and (ii) climate change in recent times.
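The Kolmogorov complexity estimate via the Lempel-Ziv algorithm can be sketched as follows, assuming median binarization of the flow series and the usual n/log2(n) normalization; the paper's lower (KLL) and upper (KLU) variants are not reproduced here.

```python
import math
import random

def lempel_ziv_complexity(bits):
    # LZ76: count the phrases produced when scanning left to right,
    # where each phrase is the shortest prefix not seen earlier.
    s = "".join(str(b) for b in bits)
    i, count, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        count += 1
        i += l
    return count

def normalized_kl(bits):
    # Normalize by n / log2(n), the asymptotic phrase count for a
    # random binary sequence; values near 1 indicate randomness.
    n = len(bits)
    return lempel_ziv_complexity(bits) * math.log2(n) / n

periodic = [0, 1] * 100               # fully regular "flow" signal
random.seed(0)
noisy = [random.randint(0, 1) for _ in range(200)]
# the periodic series compresses far better than the random one
```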
Estimating The Seasonal Components In Hydrological Time Series
NASA Astrophysics Data System (ADS)
Grimaldi, S.; Montanari, A.
The hydrological safety of dams is usually evaluated by analysing historical data of river flows into the reservoir. When only short observed records are available, one is often forced to generate synthetic flow series in order to verify the safety of the dam with respect to a wider set of equally likely hydrological scenarios. To this end, stochastic processes are frequently applied, and a key point of many of the simulation procedures which can be used is the estimation of the seasonal periodicities that may be present in the analysed time series. Such seasonalities often have to be removed from the historical record before estimating the parameters of the simulation model. A usual procedure is to estimate and subsequently eliminate the periodicities which may be present in the mean and variance of the considered time series. This study analyses the performance of various techniques for the estimation of the seasonal components which may affect the statistics of hydrological time series observed at a fine time step. The scientific literature has proposed different approaches to this end, but their application to records collected at a fine time step is often difficult, owing to the high variability of the data and the significance of measurement errors which may occur during extreme events. This study compares some of the techniques proposed in the literature with a simple approach obtained by modifying the well-known STL method. The proposed approach is tested by estimating the periodical components of synthetic time series and applied to the daily river flows of two major rivers located in Italy.
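The "estimate and subsequently eliminate the periodicities in mean and variance" step can be sketched as a phase-wise standardization; this is a generic baseline, not the modified STL method the study proposes.

```python
import math

def deseasonalize(series, period=12):
    """Estimate and remove periodicity in mean and variance: every
    observation is standardized against the long-run mean and standard
    deviation of its own phase of the cycle (e.g. its calendar month)."""
    phases = [[] for _ in range(period)]
    for i, x in enumerate(series):
        phases[i % period].append(x)
    means = [sum(p) / len(p) for p in phases]
    stds = []
    for p, m in zip(phases, means):
        s = math.sqrt(sum((x - m) ** 2 for x in p) / len(p))
        stds.append(s if s > 1e-12 else 1.0)  # guard degenerate phases
    return [(x - means[i % period]) / stds[i % period]
            for i, x in enumerate(series)]

# A purely periodic monthly "flow": after deseasonalizing, nothing is left.
flow = [10 + 5 * math.sin(2 * math.pi * m / 12) for m in range(120)]
resid = deseasonalize(flow)
print(max(abs(r) for r in resid) < 1e-9)  # True
```

The residual series is what the parameters of the stochastic simulation model would then be fitted to.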
Handbook for Using the Intensive Time-Series Design.
ERIC Educational Resources Information Center
Mayer, Victor J.; Monk, John S.
Work on the development of the intensive time-series design was initiated because of the dissatisfaction with existing research designs. This dissatisfaction resulted from the paucity of data obtained from designs such as the pre-post and randomized posttest-only designs. All have the common characteristic of yielding data from only one or two…
IDENTIFICATION OF REGIME SHIFTS IN TIME SERIES USING NEIGHBORHOOD STATISTICS
The identification of alternative dynamic regimes in ecological systems requires several lines of evidence. Previous work on time series analysis of dynamic regimes includes mainly model-fitting methods. We introduce two methods that do not use models. These approaches use state-...
Time Series Data Visualization in World Wide Telescope
NASA Astrophysics Data System (ADS)
Fay, J.
WorldWide Telescope provides a rich set of time series visualizations for both archival and real-time data. WWT consists of interactive desktop tools for immersive visualization and HTML5 web-based controls that can be embedded in customized web pages. WWT supports a range of display options including full dome, power walls, stereo and virtual reality headsets.
The Design of Time-Series Comparisons under Resource Constraints.
ERIC Educational Resources Information Center
Willemain, Thomas R.; Hartunian, Nelson S.
1982-01-01
Two methods for dividing an interrupted time-series study between baseline and experimental phases when study resources are limited are compared. In fixed designs, the baseline duration is predetermined. In flexible designs the baseline duration is contingent on remaining resources and the match of results to prior expectations of the evaluator.…
Synchronization-based parameter estimation from time series
NASA Astrophysics Data System (ADS)
Parlitz, U.; Junge, L.; Kocarev, L.
1996-12-01
The parameters of a given (chaotic) dynamical model are estimated from scalar time series by adapting a computer model until it synchronizes with the given data. This parameter identification method is applied to numerically generated and experimental data from Chua's circuit.
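A minimal sketch of the idea, substituting a logistic map for Chua's circuit for brevity: the model copy is driven by the observed data, and the parameter retained is the one that minimizes the synchronization (one-step prediction) error. The grid search and the parameter value are illustrative only.

```python
def logistic(r, x):
    return r * x * (1.0 - x)

def estimate_r(observed, r_grid):
    """Retain the parameter whose model copy best synchronizes with the
    data when driven by it (one-step prediction error as sync error)."""
    def sync_error(r):
        return sum((logistic(r, x) - x_next) ** 2
                   for x, x_next in zip(observed, observed[1:]))
    return min(r_grid, key=sync_error)

# Data generated with the "true" parameter r = 3.7 (chaotic regime).
x, data = 0.4, []
for _ in range(500):
    data.append(x)
    x = logistic(3.7, x)

r_grid = [3.5 + 0.01 * k for k in range(41)]  # candidate parameters
print(round(estimate_r(data, r_grid), 2))     # 3.7
```

In the paper the adaptation is continuous rather than a grid search, but the principle is the same: synchronization quality acts as the cost function for parameter identification.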
Ultrasound RF time series for classification of breast lesions.
Uniyal, Nishant; Eskandari, Hani; Abolmaesumi, Purang; Sojoudi, Samira; Gordon, Paula; Warren, Linda; Rohling, Robert N; Salcudean, Septimiu E; Moradi, Mehdi
2015-02-01
This work reports the use of ultrasound radio frequency (RF) time series analysis as a method for ultrasound-based classification of malignant breast lesions. The RF time series method is versatile and requires only a few seconds of raw ultrasound data with no need for additional instrumentation. Using the RF time series features and a machine learning framework, we have generated malignancy maps, from the estimated cancer likelihood, for decision support in biopsy recommendation. These maps depict the likelihood of malignancy for regions of size 1 mm(2) within the suspicious lesions. We report an area under the receiver operating characteristic curve of 0.86 (95% confidence interval [CI]: 0.84-0.90) using support vector machines and 0.81 (95% CI: 0.78-0.85) using Random Forests classification algorithms, on 22 subjects with leave-one-subject-out cross-validation. Changing the classification method yielded consistent results, indicating the robustness of this tissue typing method. The findings of this report suggest that ultrasound RF time series, along with the developed machine learning framework, can help in differentiating malignant from benign breast lesions, subsequently reducing the number of unnecessary biopsies after mammography screening. PMID:25350925
The Relationship of Negative Affect and Thought: Time Series Analyses.
ERIC Educational Resources Information Center
Rubin, Amy; And Others
In recent years, the relationship between moods and thoughts has been the focus of much theorizing and some empirical work. A study was undertaken to examine the intraindividual relationship between negative affect and negative thoughts using a Box-Jenkins time series analysis. College students (N=33) completed a measure of negative mood and…
Analysis of Complex Intervention Effects in Time-Series Experiments.
ERIC Educational Resources Information Center
Bower, Cathleen
An iterative least squares procedure for analyzing the effect of various kinds of intervention in time-series data is described. There are numerous applications of this design in economics, education, and psychology, although until recently, no appropriate analysis techniques had been developed to deal with the model adequately. This paper…
ADAPTIVE DATA ANALYSIS OF COMPLEX FLUCTUATIONS IN PHYSIOLOGIC TIME SERIES
PENG, C.-K.; COSTA, MADALENA; GOLDBERGER, ARY L.
2009-01-01
We introduce a generic framework of dynamical complexity to understand and quantify fluctuations of physiologic time series. In particular, we discuss the importance of applying adaptive data analysis techniques, such as the empirical mode decomposition algorithm, to address the challenges of nonlinearity and nonstationarity that are typically exhibited in biological fluctuations. PMID:20041035
A Time-Series Analysis of Hispanic Unemployment.
ERIC Educational Resources Information Center
Defreitas, Gregory
1986-01-01
This study undertakes the first systematic time-series research on the cyclical patterns and principal determinants of Hispanic joblessness in the United States. The principal findings indicate that Hispanics tend to bear a disproportionate share of increases in unemployment during recessions. (Author/CT)
What Makes a Coursebook Series Stand the Test of Time?
ERIC Educational Resources Information Center
Illes, Eva
2009-01-01
Intriguingly, at a time when the ELT market is inundated with state-of-the-art coursebooks teaching modern-day English, a 30-year-old series enjoys continuing popularity in some secondary schools in Hungary. Why would teachers, several of whom are school-based teacher-mentors in the vanguard of the profession, purposefully choose materials which…
A Method for Comparing Multivariate Time Series with Different Dimensions
Tapinos, Avraam; Mendes, Pedro
2013-01-01
In many situations it is desirable to compare dynamical systems based on their behavior. Similarity of behavior often implies similarity of internal mechanisms or dependency on common extrinsic factors. While there are widely used methods for comparing univariate time series, most dynamical systems are characterized by multivariate time series. Yet, comparison of multivariate time series has been limited to cases where they share a common dimensionality. A semi-metric is a distance function that has the properties of non-negativity, symmetry and reflexivity, but not sub-additivity. Here we develop a semi-metric – SMETS – that can be used for comparing groups of time series that may have different dimensions. To demonstrate its utility, the method is applied to dynamic models of biochemical networks and to portfolios of shares. The former is an example of a case where the dependencies between system variables are known, while in the latter the system is treated (and behaves) as a black box. PMID:23393554
Daily time series evapotranspiration maps for Oklahoma and Texas panhandle
Technology Transfer Automated Retrieval System (TEKTRAN)
Evapotranspiration (ET) is an important process in ecosystems’ water budget and closely linked to its productivity. Therefore, regional scale daily time series ET maps developed at high and medium resolutions have large utility in studying the carbon-energy-water nexus and managing water resources. ...
The application of the transfer entropy to gappy time series
NASA Astrophysics Data System (ADS)
Kulp, C. W.; Tracy, E. R.
2009-03-01
The application of the transfer entropy to gappy symbolic time series is discussed. Although the transfer entropy can fail to correctly identify the drive-response relationship, it is able to robustly detect phase relationships. Hence, it might still be of use in applications requiring the detection of changes in these relationships.
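For binary symbolic series with a history length of one, a plug-in estimate of the transfer entropy can be written directly from counts; the drive-response toy pair below is illustrative, not the gappy data of the study.

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy T(X -> Y), history length 1, for
    symbolic series: sum of p(y+, y, x) * log2[p(y+|y,x) / p(y+|y)]."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    te = 0.0
    for (yp, yo, xo), c in triples.items():
        p_full = c / pairs_yx[(yo, xo)]            # p(y+ | y, x)
        p_self = pairs_yy[(yp, yo)] / singles[yo]  # p(y+ | y)
        te += (c / n) * math.log2(p_full / p_self)
    return te

# Y is a one-step delayed copy of X: information flows X -> Y only.
random.seed(1)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y) > 0.9)   # True (close to 1 bit)
print(transfer_entropy(y, x) < 0.05)  # True (no flow back)
```

The gap problem the abstract addresses arises because the triple counts above assume consecutive samples; missing samples invalidate them, which is why the authors fall back on the more robust phase relationships.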
Identification of human operator performance models utilizing time series analysis
NASA Technical Reports Server (NTRS)
Holden, F. M.; Shinners, S. M.
1973-01-01
The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.
Time Series Analysis for the Drac River Basin (france)
NASA Astrophysics Data System (ADS)
Parra-Castro, K.; Donado-Garzon, L. D.; Rodriguez, E.
2013-12-01
This research analyzes discharge time series at four stream flow gage stations located in the Drac River basin in France: (i) Guinguette Naturelle, (ii) Infernet, (iii) Parassat and (iv) Villard Loubière. Time series models such as linear regression (single and multiple) and the MORDOR model were implemented to analyze the behavior of the Drac River from 1969 to 2010. Twelve different models were implemented to assess the daily and monthly discharge time series at the four gage stations, and five selection criteria were used to compare them: average division, variance division, the coefficient R2, the Kling-Gupta Efficiency (KGE) and the Nash number. The models were selected so as to retain the strongest ones with a high confidence level, according to the best correlation between the time series of the stream flow gage stations and the best-fitting models. Four of the twelve models were retained: two for the station Guinguette Naturelle, one for the station Infernet and one for the station Villard Loubière. The R2 coefficients achieved were 0.87, 0.95, 0.85 and 0.87, respectively. Both the modeled and the empirical confidence levels were then tested on the selected models, leading to the best fit between the discharge time series and the models within the empirical confidence interval. Additionally, the models were validated using data for the year 2011, in which extreme hydrologic events and changes in hydrologic regime were identified. Furthermore, two different ways of estimating uncertainty through confidence levels were studied: the modeled and the empirical confidence levels. This research was useful for updating the procedures used to validate time series at the four stream flow gage stations for the company Électricité de France. Additionally, coefficients for both the models and
Multiple imputation for time series data with Amelia package.
Zhang, Zhongheng
2016-02-01
Time series data are common in medical research. Many laboratory variables or study endpoints can be measured repeatedly over time. Multiple imputation (MI) that does not take the time trend of a variable into account may be unreliable. This article illustrates how to perform MI with the Amelia package in a clinical scenario. The Amelia package is powerful in that it allows MI for time series data. External information on the variable of interest can also be incorporated by using the prior or bound argument. Such information may be based on previously published observations, academic consensus, and personal experience. Diagnostics of the imputation model can be performed by examining the distributions of imputed and observed values, or by using the over-imputation technique. PMID:26904578
[Anomaly Detection of Multivariate Time Series Based on Riemannian Manifolds].
Xu, Yonghong; Hou, Xiaoying; Li, Shuting; Cui, Jie
2015-06-01
Multivariate time series problems are ubiquitous in production and daily life. Anomaly detection has provided valuable information in finance, hydrology and meteorology, and in research areas such as seismology, video surveillance and medicine. In order to find anomalies in time sequences quickly and efficiently, and to present them intuitively, in this study we combined Riemannian manifolds with statistical process control charts, using a sliding window and describing each window of the time sequence by its covariance matrix, to achieve anomaly detection and visualization for multivariate time series. We used simulated moving-average (MA) data streams and abnormal electrocardiogram data from the MIT-BIH database as experimental objects to verify the anomaly detection method. The results showed that the method is reasonable and effective. PMID:26485975
Irreversibility of financial time series: A graph-theoretical approach
NASA Astrophysics Data System (ADS)
Flanagan, Ryan; Lacasa, Lucas
2016-04-01
The relation between time series irreversibility and entropy production has been recently investigated in thermodynamic systems operating away from equilibrium. In this work we explore this concept in the context of financial time series. We make use of visibility algorithms to quantify, in graph-theoretical terms, time irreversibility of 35 financial indices evolving over the period 1998-2012. We show that this metric is complementary to standard measures based on volatility and exploit it to both classify periods of financial stress and to rank companies accordingly. We then validate this approach by finding that a projection in principal components space of financial years, based on time irreversibility features, clusters together periods of financial stress from stable periods. Relations between irreversibility, efficiency and predictability are briefly discussed.
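A sketch of the graph-theoretical measure: a directed horizontal visibility graph is built from the series, and the Kullback-Leibler divergence between the out- and in-degree distributions (restricted to their common support, a crude plug-in estimator) serves as the irreversibility score. The sawtooth example is illustrative, not one of the 35 indices studied.

```python
import math
from collections import Counter

def hvg_degrees(series):
    """Directed horizontal visibility graph: two points are linked when
    every value strictly between them is lower than both; links point
    forward in time, giving each node an out- and an in-degree."""
    n = len(series)
    out_deg, in_deg = [0] * n, [0] * n
    for i in range(n):
        blocker = -float('inf')      # tallest value seen between i and j
        for j in range(i + 1, n):
            if blocker < min(series[i], series[j]):
                out_deg[i] += 1
                in_deg[j] += 1
            blocker = max(blocker, series[j])
            if blocker >= series[i]:
                break                # the view from i is now fully blocked
    return out_deg, in_deg

def kld(p_counts, q_counts):
    """Kullback-Leibler divergence between two degree distributions,
    restricted to their common support (a crude estimator)."""
    n_p, n_q = sum(p_counts.values()), sum(q_counts.values())
    return sum((c / n_p) * math.log((c / n_p) / (q_counts[k] / n_q))
               for k, c in p_counts.items() if k in q_counts)

# A sawtooth (slow rise, sharp drop) is irreversible in time: its out-
# and in-degree distributions differ, so the divergence is positive.
saw = [t % 5 for t in range(500)]
out_deg, in_deg = hvg_degrees(saw)
print(kld(Counter(out_deg), Counter(in_deg)) > 0)  # True
```

A statistically reversible series (e.g. white noise) yields nearly identical out- and in-degree distributions and a divergence close to zero.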
Mixed Spectrum Analysis on fMRI Time-Series.
Kumar, Arun; Lin, Feng; Rajapakse, Jagath C
2016-06-01
Temporal autocorrelation present in functional magnetic resonance imaging (fMRI) data poses challenges to its analysis. Existing approaches handling autocorrelation in fMRI time-series often presume a specific model of autocorrelation, such as an auto-regressive model. The main limitation here is that the correlation structure of voxels is generally unknown and varies across brain regions because of different levels of neurogenic noise and pulsatile effects. Enforcing a universal model on all brain regions leads to bias and loss of efficiency in the analysis. In this paper, we propose mixed spectrum analysis of the voxel time-series to separate the discrete component corresponding to input stimuli and the continuous component carrying temporal autocorrelation. A mixed spectral analysis technique based on the M-spectral estimator is proposed, which effectively removes autocorrelation effects from voxel time-series and identifies significant peaks of the spectrum. As the proposed method does not assume any prior model for the autocorrelation effect in voxel time-series, varying correlation structure among the brain regions does not affect its performance. We have modified the standard M-spectral method for application to a spatial set of time-series by incorporating contextual information related to the continuous spectrum of neighborhood voxels, thus considerably reducing the computational cost. The likelihood of activation is predicted by comparing the amplitude of the discrete component at the stimulus frequency of voxels across the brain, using a normal distribution and modeling spatial correlations among the likelihoods with a conditional random field. We also demonstrate the application of the proposed method in detecting other desired frequencies. PMID:26800533
An Introductory Overview of Statistical Methods for Discrete Time Series
NASA Astrophysics Data System (ADS)
Meng, X.-L.; California-Harvard AstroStat Collaboration
2004-08-01
A number of statistical problems encountered in astrophysics are concerned with discrete time series, such as photon counts with variation in source intensity over time. This talk provides an introductory overview of the current state-of-the-art methods in statistics, including Bayesian methods aided by Markov chain Monte Carlo, for modeling and analyzing such data. These methods have also been successfully applied in other fields, such as economics.
Normalization methods in time series of platelet function assays
Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham
2016-01-01
Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the most suitable approach for platelet function data series is discussed. Normalization was calculated per assay (test) across all time points and per time point across all tests. Interquartile range, range transformation, and z-transformation demonstrated the correlation, as calculated by the Spearman correlation test, when normalized per assay (test) across all time points. When normalizing per time point across all tests, no correlation could be extracted from the charts, as was also the case when using all data as one dataset for normalization. PMID:27428217
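The transformations compared in the article are short enough to state exactly; the assay readings below are hypothetical. Whether they are applied per assay across time points or per time point across assays is then just a matter of which axis of the data table each function is mapped over.

```python
import statistics

def z_transform(xs):
    """Standardize to zero mean and unit (sample) standard deviation."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

def range_transform(xs):
    """Rescale linearly onto the interval [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def iqr_transform(xs):
    """Center on the median, scale by the interquartile range; robust
    against the outliers that assay data can produce."""
    q1, q2, q3 = statistics.quantiles(xs, n=4)
    return [(x - q2) / (q3 - q1) for x in xs]

# Hypothetical ADP aggregometry readings over five time points,
# e.g. iqr_transform(adp) normalizes this one assay across time.
adp = [45.0, 52.0, 48.0, 61.0, 50.0]
print(range_transform([2, 4, 6]))  # [0.0, 0.5, 1.0]
print(z_transform([2, 4, 6]))      # [-1.0, 0.0, 1.0]
```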
National Ignition Campaign (NIC) Precision Tuning Series Shock Timing Experiments
Robey, H F; Celliers, P M
2011-07-19
A series of precision shock timing experiments has been performed on NIF. These experiments continue to adjust the laser pulse shape and employ the adjusted cone fraction (CF) in the picket (the first 2 ns of the laser pulse) as determined from the re-emit experiment series. The NIF ignition laser pulse is precisely shaped and consists of a series of four impulses, which drive a corresponding series of shock waves of increasing strength to accelerate and compress the capsule ablator and fuel layer. To optimize the implosion, not only the strength (or power) but also the timing of the shock waves is tuned, to sub-nanosecond accuracy. In a well-tuned implosion, the shock waves work together to compress and heat the fuel. For the shock timing experiments, a re-entrant cone is inserted through both the hohlraum wall and the capsule ablator, allowing a direct optical view of the propagating shocks in the capsule interior using the VISAR (Velocity Interferometer System for Any Reflector) diagnostic from outside the hohlraum. To emulate the DT ice of an ignition capsule, the inside of the cone and the capsule are filled with liquid deuterium.
Fast computation of recurrences in long time series
NASA Astrophysics Data System (ADS)
Rawald, Tobias; Sips, Mike; Marwan, Norbert; Dransch, Doris
2014-05-01
The quadratic time complexity of calculating basic RQA measures (doubling the size of the input time series quadruples the number of operations) impairs the fast computation of RQA in many application scenarios. As an example, we analyze the Potsdamer Reihe, an uninterrupted hourly temperature record maintained since 1893, consisting of 1,043,112 data points. Using an optimized single-threaded CPU implementation this analysis requires about six hours; our approach conducts RQA for the Potsdamer Reihe in five minutes. We automatically split a long time series into smaller chunks (Divide) and distribute the computation of RQA measures across multiple GPU devices. To guarantee valid RQA results, we employ carryover buffers that allow sharing information between pairs of chunks (Recombine). We demonstrate the capability of our Divide and Recombine approach to process long time series by comparing the runtime of our implementation to existing RQA tools. We support a variety of platforms by employing the computing framework OpenCL. Our current implementation supports the computation of the standard RQA measures (recurrence rate, determinism, laminarity, ratio, average diagonal line length, trapping time, longest diagonal line, longest vertical line, divergence, entropy, trend) and also calculates recurrence times. To make the potential of our approach available to a number of applications, we plan to release our implementation under an Open Source software license. It will be available at http://www.gfz-potsdam.de/fast-rqa/. Since our approach allows computing RQA measures for long time series quickly, we plan to extend our implementation to support multi-scale RQA.
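For reference, the two most basic RQA measures can be computed naively as follows; the O(n²) cost of this direct scan is precisely what the Divide-and-Recombine scheme attacks. Embedding is omitted and the main diagonal is excluded from the determinism line count, both simplifications of this sketch.

```python
def recurrence_matrix(series, eps):
    """Binary recurrence matrix: R[i][j] = 1 when the two states are
    within eps of each other (1-D states, no embedding, for brevity)."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0
             for j in range(n)] for i in range(n)]

def rqa_measures(R, lmin=2):
    """Recurrence rate and determinism (fraction of recurrent points on
    diagonal lines of length >= lmin, main diagonal skipped)."""
    n = len(R)
    total = sum(sum(row) for row in R)
    rr = total / (n * n)
    on_lines = 0
    for d in range(-(n - 1), n):        # scan every diagonal
        if d == 0:
            continue                    # skip the trivial line of identity
        i0, j0 = max(0, -d), max(0, d)
        length = n - abs(d)
        run = 0
        for k in range(length + 1):
            hit = k < length and R[i0 + k][j0 + k] == 1
            if hit:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    det = on_lines / total if total else 0.0
    return rr, det

# A period-2 signal recurs on long diagonals, giving high determinism.
R = recurrence_matrix([0, 1, 0, 1, 0, 1], 0.5)
rr, det = rqa_measures(R)
print(rr, round(det, 3))  # 0.5 0.667
```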
Efficient spectral estimation for time series with intermittent gaps
NASA Astrophysics Data System (ADS)
Smith, L. T.; Constable, C.
2009-12-01
Data from magnetic satellites like CHAMP, Ørsted, and Swarm can be used to study electromagnetic induction in Earth’s mantle. Time series of internal and external spherical harmonic coefficients (usually those associated with the predominantly dipolar structure of ring current variations) are used to determine Earth’s electromagnetic response as a function of frequency of the external variations. Inversion of this response can yield information about electrical conductivity variations in Earth’s mantle. The inductive response depends on frequency through skin depth, so it is desirable to work with the longest time series possible. Intermittent gaps in available data complicate attempts to estimate the power or cross spectra and thus the electromagnetic response for satellite records. Complete data series are most effectively analyzed using direct multi-taper spectral estimation, either with prolate multitapers that efficiently minimize broadband bias, or with a set designed to minimize local bias. The latter group have frequently been approximated by sine tapers. Intermittent gaps in data may be patched over using custom designed interpolation. We focus on a different approach, using sets of multitapers explicitly designed to accommodate gaps in the data. The optimization problems for the prolate and minimum bias tapers are altered to allow a specific arrangement of data samples, producing a modified eigenvalue-eigenfunction problem. We have shown that the prolate tapers with gaps and the minimum bias tapers with gaps provide higher resolution spectral estimates with less leakage than spectral averaging of data sections bounded by gaps. Current work is focused on producing efficient algorithms for spectral estimation of data series with gaps. A major limitation is the time and memory needed for the solution of large eigenvalue problems used to calculate the tapers for long time series. Fortunately only a limited set of the largest eigenvalues are needed, and
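For a complete (gap-free) record, the sine-taper approximation mentioned above is compact; handling gaps, as the authors do, requires solving the modified eigenvalue problem instead. The naive DFT below is for illustration only.

```python
import math, cmath

def sine_tapers(n, k):
    """First k sine tapers (the usual approximation to minimum-bias
    tapers): w_j[t] = sqrt(2/(n+1)) * sin(pi*(j+1)*(t+1)/(n+1))."""
    return [[math.sqrt(2.0 / (n + 1))
             * math.sin(math.pi * (j + 1) * (t + 1) / (n + 1))
             for t in range(n)] for j in range(k)]

def multitaper_spectrum(x, k=3):
    """Average the periodograms of k orthogonally tapered copies of x.
    A naive DFT keeps the sketch short; real code would use an FFT."""
    n = len(x)
    spec = [0.0] * (n // 2 + 1)
    for w in sine_tapers(n, k):
        tapered = [wi * xi for wi, xi in zip(w, x)]
        for f in range(n // 2 + 1):
            z = sum(tapered[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n))
            spec[f] += abs(z) ** 2 / k
    return spec

# A pure tone in bin 16 of a 128-sample record: the smoothed spectral
# peak lands at (or immediately next to) the tone's bin.
x = [math.sin(2 * math.pi * 16 * t / 128) for t in range(128)]
spec = multitaper_spectrum(x)
peak = spec.index(max(spec))
```

Averaging over tapers trades a slight broadening of spectral features for a large reduction in the variance of the estimate, which is the property the gap-adapted tapers preserve.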
Wavelet transform approach for fitting financial time series data
NASA Astrophysics Data System (ADS)
Ahmed, Amel Abdoullah; Ismail, Mohd Tahir
2015-10-01
This study investigates a newly developed technique, combining wavelet filtering with a VEC model, to study the dynamic relationships among financial time series. A wavelet filter has been used to remove noise from the daily data sets of the NASDAQ stock market of the US and three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. The returns of the series generated by the wavelet filter and of the original series are then analyzed by a cointegration test and a VEC model. The results of the cointegration test affirm the existence of cointegration between the studied series: there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed and traditional models demonstrates that the proposed model (DWT with VEC model) outperforms the traditional one (VEC model) in fitting the financial stock market series, and reveals real information about the relationships among the stock markets.
Time series segmentation with shifting means hidden markov models
NASA Astrophysics Data System (ADS)
Kehagias, Ath.; Fortin, V.
2006-08-01
We present a new family of hidden Markov models and apply these to the segmentation of hydrological and environmental time series. The proposed hidden Markov models have a discrete state space and their structure is inspired from the shifting means models introduced by Chernoff and Zacks and by Salas and Boes. An estimation method inspired from the EM algorithm is proposed, and we show that it can accurately identify multiple change-points in a time series. We also show that the solution obtained using this algorithm can serve as a starting point for a Monte-Carlo Markov chain Bayesian estimation method, thus reducing the computing time needed for the Markov chain to converge to a stationary distribution.
Segmentation of biological multivariate time-series data
NASA Astrophysics Data System (ADS)
Omranian, Nooshin; Mueller-Roeber, Bernd; Nikoloski, Zoran
2015-03-01
Time-series data from multicomponent systems capture the dynamics of the ongoing processes and reflect the interactions between the components. The progression of processes in such systems usually involves check-points and events at which the relationships between the components are altered in response to stimuli. Detecting these events together with the implicated components can help understand the temporal aspects of complex biological systems. Here we propose a regularized regression-based approach for identifying breakpoints and corresponding segments from multivariate time-series data. In combination with techniques from clustering, the approach also allows estimating the significance of the determined breakpoints as well as the key components implicated in the emergence of the breakpoints. Comparative analysis with the existing alternatives demonstrates the power of the approach to identify biologically meaningful breakpoints in diverse time-resolved transcriptomics data sets from the yeast Saccharomyces cerevisiae and the diatom Thalassiosira pseudonana.
Mulstiscale Stochastic Generator of Multivariate Met-Ocean Time Series
NASA Astrophysics Data System (ADS)
Guanche, Yanira; Mínguez, Roberto; Méndez, Fernando J.
2013-04-01
The design of maritime structures requires information on the sea state conditions that influence their behavior during their life cycle. In the last decades there has been an increasing development of sea databases (buoys, reanalysis, satellite) that allow an accurate description of the marine climate and its interaction with a given structure in terms of functionality and stability. However, these databases have a limited time span, and their use entails an associated uncertainty. To avoid this limitation, engineers sample synthetically generated, statistically consistent time series, which allow the simulation of longer time periods. The present work proposes a hybrid methodology for this purpose, based on the combination of a clustering algorithm (k-means) and an autoregressive logistic regression model (logit). Since the marine climate is directly related to the atmospheric conditions at a synoptic scale, the proposed methodology takes both systems into account, simultaneously generating time series of circulation patterns (weather types) and the related sea states. The generation of these time series can be summarized in three steps: (1) by applying the clustering technique k-means, the atmospheric conditions are classified into a representative number of synoptic patterns; (2) taking into account the different covariates involved (such as seasonality, interannual variability, trends or an autoregressive term), the autoregressive logistic model is fitted; (3) once the model is able to simulate weather type time series, multivariate hourly met-ocean parameters related to these weather types are generated by an autoregressive model (ARMA) for each variable, including the cross-correlations between them. To show the goodness of the proposed method the following data have been used: Sea Level Pressure (SLP) databases from NCEP-NCAR and the Global Ocean Wave (GOW) reanalysis from IH Cantabria. The synthetic met-ocean hourly
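Step (1) is ordinary k-means on the pattern vectors; a stdlib sketch with deterministic farthest-point seeding follows (in the real analysis each point would be a flattened SLP field, not a 2-D toy point).

```python
def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    """Plain k-means: the k centroids play the role of the synoptic
    'weather types'. Farthest-point seeding keeps the run deterministic."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(dist2(p, c) for c in centers)))
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centers[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:  # recenter on the mean of the assigned points
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return centers, labels

patterns = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),   # one synoptic regime
            (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]   # another regime
centers, labels = kmeans(patterns, k=2)
print(labels)  # [0, 0, 0, 1, 1, 1]
```

The daily sequence of labels is then the categorical weather-type series that the autoregressive logistic model of step (2) is fitted to.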
Alignment of Noisy and Uniformly Scaled Time Series
NASA Astrophysics Data System (ADS)
Lipowsky, Constanze; Dranischnikow, Egor; Göttler, Herbert; Gottron, Thomas; Kemeter, Mathias; Schömer, Elmar
The alignment of noisy and uniformly scaled time series is an important but difficult task. Given two time series, one of which is a uniformly stretched subsequence of the other, we want to determine the stretching factor and the offset of the second time series within the first one. We adapted and enhanced different methods to address this problem: classical FFT-based approaches to determine the offset combined with a naïve search for the stretching factor or its direct computation in the frequency domain, bounded dynamic time warping and a new approach called shotgun analysis, which is inspired by sequencing and reassembling of genomes in bioinformatics. We thoroughly examined the strengths and weaknesses of the different methods on synthetic and real data sets. The FFT-based approaches are very accurate on high quality data, the shotgun approach is especially suitable for data with outliers. Dynamic time warping is a candidate for non-linear stretching or compression. We successfully applied the presented methods to identify steel coils via their thickness profiles.
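The offset part of the problem (stretching factor fixed at one) reduces to locating a correlation maximum. A direct normalized cross-correlation sketch follows; in practice one would evaluate it via FFT, as the paper describes.

```python
import math, random

def best_offset(series, template):
    """Locate `template` inside `series` by maximizing the normalized
    cross-correlation (direct O(n*m) form; FFT-based in practice)."""
    m = len(template)
    mt = sum(template) / m
    tc = [b - mt for b in template]
    nt = math.sqrt(sum(b * b for b in tc))
    best, best_score = 0, -2.0
    for off in range(len(series) - m + 1):
        seg = series[off:off + m]
        ms = sum(seg) / m
        sc = [a - ms for a in seg]                  # mean-centered segment
        ns = math.sqrt(sum(a * a for a in sc)) or 1.0
        score = sum(a * b for a, b in zip(sc, tc)) / (ns * nt)
        if score > best_score:
            best, best_score = off, score
    return best

# Hide a 60-sample template at offset 37 of a seeded random walk.
random.seed(3)
walk = [0.0]
for _ in range(299):
    walk.append(walk[-1] + random.uniform(-1, 1))
print(best_offset(walk, walk[37:97]))  # 37
```

Searching over candidate stretching factors (resampling the template for each) on top of this offset search is the naïve combination the paper describes before moving to the frequency-domain and shotgun variants.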
Supplementing environmental isotopes with time series methods to date groundwater
NASA Astrophysics Data System (ADS)
Farlin, Julien
2015-04-01
A popular method to estimate the transit time of groundwater is to fit the predictions of a lumped parameter model (LPM) to environmental isotope measurements. The fitting, or inverse modeling, procedure consists in rejecting all parameters (or parameter combinations for more complex LPMs) that exceeds a given error threshold. In many usual cases where this does not lead to a single acceptable solution, additional and independent data can prove useful to further eliminate some of the remaining solutions. In the case study presented here, groundwater transit times have been estimated by combining tritium, temperature, and discharge measurements. Tritium measurements from a series of contact springs draining the Luxembourg Sandstone aquifer were used to estimate the two parameters of an exponential piston flow model. The piston flow parameter gives the transit time of tritium through the thick unsaturated zone of the aquifer, while the exponential component corresponds to its mean transit time in the saturated zone. Due to the limited extent of the tritium time series and the fact that tritium activity has nearly returned to its background concentration, the solution of the inverse modeling was not unique. The discharge measurements were then used to reduce the number of retained parameter combinations by estimating independently from tritium the transit time through the unsaturated and saturated zones. The former was calculated from the time lag between a time series of net annual recharge over ten years and the fluctuations in discharge over that same period, while the latter was calculated from the discharge recession during the dry season. Although both methods necessitate relatively long time series of at least a few years, they reduce dramatically the range of estimated transit times. Another possibility is to use the temperature signal measured in spring water. The amplitude damping and its shift relatively to air temperature (which we used as proxy for the
The Puoko-nui CCD Time-Series Photometer
NASA Astrophysics Data System (ADS)
Chote, P.; Sullivan, D. J.
2013-01-01
Puoko-nui (te reo Maori for ‘big eye’) is a precision time series photometer developed at Victoria University of Wellington, primarily for use with the 1 m McLellan telescope at Mt John University Observatory (MJUO), at Lake Tekapo, New Zealand. GPS-based timing provides excellent timing accuracy, and online reduction software processes frames as they are acquired. The user is presented with a simple interface that includes instrument control and an up-to-date light curve and Fourier amplitude spectrum of the target star. Puoko-nui has been operating in its current form since early 2011, and is primarily used to monitor pulsating white dwarf stars.
Rényi’s information transfer between financial time series
NASA Astrophysics Data System (ADS)
Jizba, Petr; Kleinert, Hagen; Shefaat, Mohammad
2012-05-01
In this paper, we quantify the statistical coherence between financial time series by means of the Rényi entropy. With the help of Campbell’s coding theorem, we show that the Rényi entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppressing others. This accentuation is controlled with Rényi’s parameter q. To tackle the issue of the information flow between time series, we formulate the concept of Rényi’s transfer entropy as a measure of information that is transferred only between certain parts of underlying distributions. This is particularly pertinent in financial time series, where the knowledge of marginal events such as spikes or sudden jumps is of crucial importance. We apply the Rényian information flow to stock market time series from 11 world stock indices as sampled at a daily rate in the time period 02.01.1990-31.12.2009. Corresponding heat maps and net information flows are represented graphically. A detailed discussion of the transfer entropy between the DAX and S&P500 indices based on minute tick data gathered in the period 02.04.2008-11.09.2009 is also provided. Our analysis shows that the bivariate information flow between world markets is strongly asymmetric with a distinct information surplus flowing from the Asia-Pacific region to both European and US markets. An important yet less dramatic excess of information also flows from Europe to the US. This is seen particularly clearly in a careful analysis of the Rényi information flow between the DAX and S&P500 indices.
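A minimal plug-in sketch of a Rényi-flavoured transfer entropy on discretized series. Note the hedge: the paper defines Rényi transfer entropy via escort distributions, while this toy uses the simpler convention of writing conditional Rényi entropies as differences of joint entropies; the data are synthetic binary symbols, not market returns.

```python
import numpy as np

def renyi_entropy(p, q):
    """Rényi entropy (bits) of a probability vector p."""
    p = p[p > 0]
    if q == 1.0:
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** q)) / (1.0 - q)

def joint_probs(*seqs):
    """Empirical joint distribution of symbol tuples."""
    _, counts = np.unique(np.asarray(list(zip(*seqs))), axis=0,
                          return_counts=True)
    return counts / counts.sum()

def renyi_transfer_entropy(x, y, q=0.8):
    """T_q(Y -> X) with conditional entropies as joint-entropy differences
    (one common convention; not the escort-distribution definition)."""
    x_next, x_now, y_now = x[1:], x[:-1], y[:-1]
    h_xx = renyi_entropy(joint_probs(x_next, x_now), q)
    h_x = renyi_entropy(joint_probs(x_now), q)
    h_xxy = renyi_entropy(joint_probs(x_next, x_now, y_now), q)
    h_xy = renyi_entropy(joint_probs(x_now, y_now), q)
    return (h_xx - h_x) - (h_xxy - h_xy)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 5000)
x = np.empty_like(y)
x[0] = 0
x[1:] = y[:-1]          # x copies y with lag 1: strong flow Y -> X, none back
print(renyi_transfer_entropy(x, y), renyi_transfer_entropy(y, x))
```

On this construction the estimated flow Y to X is close to 1 bit while the reverse flow is close to 0, mirroring the kind of asymmetry the paper reports between markets.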
Assessing spatial covariance among time series of abundance.
Jorgensen, Jeffrey C; Ward, Eric J; Scheuerell, Mark D; Zabel, Richard W
2016-04-01
For species of conservation concern, an essential part of the recovery planning process is identifying discrete population units and their location with respect to one another. A common feature among geographically proximate populations is that the number of organisms tends to covary through time as a consequence of similar responses to exogenous influences. In turn, high covariation among populations can threaten the persistence of the larger metapopulation. Historically, explorations of the covariance in population size of species with many (>10) time series have been computationally difficult. Here, we illustrate how dynamic factor analysis (DFA) can be used to characterize diversity among time series of population abundances and the degree to which all populations can be represented by a few common signals. Our application focuses on anadromous Chinook salmon (Oncorhynchus tshawytscha), a species listed under the US Endangered Species Act, that is impacted by a variety of natural and anthropogenic factors. Specifically, we fit DFA models to 24 time series of population abundance and used model selection to identify the minimum number of latent variables that explained the most temporal variation after accounting for the effects of environmental covariates. We found support for grouping the time series according to 5 common latent variables. The top model included two covariates: the Pacific Decadal Oscillation in spring and summer. The assignment of populations to the latent variables matched the currently established population structure at a broad spatial scale. At a finer scale, there was more population grouping complexity. Some relatively distant populations were grouped together, and some relatively close populations - considered to be more aligned with each other - were more associated with populations further away. These coarse- and fine-grained examinations of spatial structure are important because they reveal different structural patterns not evident
Financial time series analysis based on information categorization method
NASA Astrophysics Data System (ADS)
Tian, Qiang; Shang, Pengjian; Feng, Guochen
2014-12-01
The paper applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply this method to quantify the similarity of different stock markets, and report similarity results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results reveal how the similarity between stock markets differs across time periods, and show that the similarity of the two stock markets becomes larger after these two crises. We also obtain similarity results for 10 stock indices in three areas, showing that the method can distinguish markets from different areas via the resulting phylogenetic trees. These results show that the method extracts useful information from financial markets, and that information categorization can be applied not only to physiologic time series but also to financial time series.
Dynamical analysis and visualization of tornadoes time series.
Lopes, António M; Tenreiro Machado, J A
2015-01-01
In this paper we analyze the behavior of tornado time-series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series involving 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns. PMID:25790281
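The first step of the analysis, impulses in time, Fourier amplitude spectrum, power-law fit, can be sketched with synthetic data (event days and sizes below are invented, not the tornado record):

```python
import numpy as np

rng = np.random.default_rng(2)

# Tornado record as a sequence of Dirac-like impulses: zeros except at
# event days, with amplitude proportional to a (synthetic) event size.
n_days = 4096
series = np.zeros(n_days)
event_days = rng.choice(n_days, size=400, replace=False)
series[event_days] = rng.pareto(2.0, size=400) + 1.0     # heavy-tailed sizes

# Fourier amplitude spectrum (DC term dropped).
spectrum = np.abs(np.fft.rfft(series))[1:]
freqs = np.fft.rfftfreq(n_days)[1:]

# Approximate the spectrum by a power law |S(f)| ~ f^beta via a
# least-squares line in log-log coordinates.
beta, log_c = np.polyfit(np.log(freqs), np.log(spectrum), 1)
print("fitted exponent beta =", beta)
```

For temporally uncorrelated synthetic events the spectrum is flat (beta near 0); the paper's point is that real tornado series deviate from this, signalling long-range memory.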
Satellite time series analysis using Empirical Mode Decomposition
NASA Astrophysics Data System (ADS)
Pannimpullath, R. Renosh; Doolaeghe, Diane; Loisel, Hubert; Vantrepotte, Vincent; Schmitt, Francois G.
2016-04-01
Geophysical fields possess large fluctuations over many spatial and temporal scales. Successive satellite images provide an interesting sampling of this spatio-temporal multiscale variability. Here we propose to study such variability by performing satellite time series analysis, pixel by pixel, using Empirical Mode Decomposition (EMD). EMD is a time series analysis technique able to decompose an original time series into a sum of modes, each one having a different mean frequency. It can be used to smooth signals and to extract trends. It is built in a data-adaptive way, and is able to extract information from nonlinear signals. Here we use MERIS Suspended Particulate Matter (SPM) data, on a weekly basis, over 10 years, giving 458 successive time steps. We have selected 5 different regions of coastal waters for the present study: Vietnam coastal waters, the Brahmaputra region, the St. Lawrence, the English Channel, and the McKenzie. These regions have high SPM concentrations due to large-scale river runoff. Trend and Hurst exponents are derived for each pixel in each region. The energy is also extracted using Hilbert Spectral Analysis (HSA) together with the EMD method, and the energy of each mode is normalised by the total energy over all modes for each region.
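A toy sifting loop illustrates the EMD idea of splitting a signal into modes of different mean frequency. This is only a sketch: it omits the standard stopping criteria and boundary handling, and a real analysis would use a tested EMD implementation; the test signal is invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def local_extrema(x):
    """Indices of strict local maxima and minima."""
    d = np.diff(x)
    maxima = np.where((np.hstack([d, 0]) < 0) & (np.hstack([0, d]) > 0))[0]
    minima = np.where((np.hstack([d, 0]) > 0) & (np.hstack([0, d]) < 0))[0]
    return maxima, minima

def sift(x, n_iter=10):
    """One round of sifting: repeatedly subtract the mean envelope."""
    h = x.copy()
    for _ in range(n_iter):
        mx, mn = local_extrema(h)
        if len(mx) < 4 or len(mn) < 4:
            break
        tt = np.arange(len(h))
        upper = CubicSpline(mx, h[mx])(tt)
        lower = CubicSpline(mn, h[mn])(tt)
        h = h - (upper + lower) / 2.0
    return h

def emd(x, n_imfs=3):
    """Extract n_imfs modes; whatever is left is the residual/trend."""
    imfs, residual = [], x.copy()
    for _ in range(n_imfs):
        imf = sift(residual)
        imfs.append(imf)
        residual = residual - imf
    return np.array(imfs), residual

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 5 * t) + 2 * t
imfs, trend = emd(signal)
```

By construction the modes plus the residual reconstruct the signal exactly, which is the property that lets EMD-based trend extraction be applied pixel by pixel.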
Learning time series evolution by unsupervised extraction of correlations
NASA Astrophysics Data System (ADS)
Deco, Gustavo; Schürmann, Bernd
1995-03-01
We focus on the problem of modeling time series by learning statistical correlations between the past and present elements of the series in an unsupervised fashion. This kind of correlation is, in general, nonlinear, especially in the chaotic domain. Therefore the learning algorithm should be able to extract statistical correlations, i.e., higher-order correlations between the elements of the time signal. This problem can be viewed as a special case of factorial learning. Factorial learning may be formulated as an unsupervised redundancy reduction between the output components of a transformation that conserves the transmitted information. An information-theoretic-based architecture and learning paradigm are introduced. The neural architecture has only one layer and a triangular structure in order to transform elements by observing only the past and to conserve the volume. In this fashion, a transformation that guarantees transmission of information without loss is formulated. The learning rule decorrelates the output components of the network. Two methods are used: higher-order decorrelation by explicit evaluation of higher-order cumulants of the output distributions, and minimization of the sum of entropies of each output component in order to minimize the mutual information between them, assuming that the entropies have an upper bound given by Gibbs second theorem. After decorrelation between the output components, the correlation between the elements of the time series can be extracted by analyzing the trained neural architecture. As a consequence, we are able to model chaotic and nonchaotic time series. Furthermore, one critical point in modeling time series is the determination of the dimension of the embedding vector used, i.e., the number of components of the past that are needed to predict the future. With this method we can detect the embedding dimension by extracting the influence of the past on the future, i.e., the correlation of remote past and future
Scalable time series change detection for biomass monitoring using gaussian process
Chandola, Varun; Vatsavai, Raju
2010-01-01
Biomass monitoring, specifically detecting changes in the biomass or vegetation of a geographical region, is vital for studying the carbon cycle of the system and has significant implications in the context of understanding climate change and its impacts. Recently, several time series change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While the Gaussian process (GP) has been widely used as a kernel based learning method for regression and classification, its applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. In this paper we address the scalability aspect of GP based time series change detection. Specifically, we exploit the special structure of the covariance matrix generated for GP analysis to come up with methods that can efficiently estimate the hyper-parameters associated with GP as well as identify changes in the time series while requiring a memory footprint that is linear in the size of the input data, compared to the traditional method, which involves solving a linear system of equations via the Cholesky decomposition of the quadratically sized covariance matrix. Experimental results show that our proposed method achieves significant speedups, as high as 1000, when processing long time series, while maintaining a small memory footprint. To further improve the computational complexity of the proposed method, we provide a parallel version which can concurrently process multiple input time series using the same set of hyper-parameters. The parallel version exploits the natural parallelization potential of the serial algorithm and is shown to perform significantly better than the serial version, with speedups as high as 10. Finally, we demonstrate the
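A baseline version of GP-based online change detection can be sketched as below: predict each new point from a sliding window and flag it when it falls outside the predictive interval. The paper's contribution is to exploit covariance structure for linear memory; this sketch deliberately uses the standard O(n^3) Cholesky solve on a short window, with an invented signal and kernel settings.

```python
import numpy as np

def rbf_kernel(a, b, length=5.0, sigma=1.0):
    d = a[:, None] - b[None, :]
    return sigma ** 2 * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(t_train, y_train, t_star, noise=0.1):
    """Standard GP regression: predictive mean and std at t_star."""
    K = rbf_kernel(t_train, t_train) + noise ** 2 * np.eye(len(t_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    k_star = rbf_kernel(t_train, t_star)
    mean = k_star.T @ alpha
    v = np.linalg.solve(L, k_star)
    var = rbf_kernel(t_star, t_star) - v.T @ v + noise ** 2
    return mean, np.sqrt(np.diag(var))

rng = np.random.default_rng(3)
t = np.arange(200, dtype=float)
y = np.sin(2 * np.pi * t / 25.0) + 0.1 * rng.normal(size=200)
y[120:] += 2.0                               # abrupt "land cover" change

window, flags = 40, []
for i in range(window, 200):
    mean, sd = gp_predict(t[i - window:i], y[i - window:i], t[i:i + 1])
    if abs(y[i] - mean[0]) > 3.0 * sd[0]:    # 3-sigma change flag
        flags.append(i)
print(flags[:5])
```

The abrupt shift at index 120 lands well outside the 3-sigma predictive band and is flagged.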
Nonlinear modeling of chaotic time series: Theory and applications
NASA Astrophysics Data System (ADS)
Casdagli, M.; Eubank, S.; Farmer, J. D.; Gibson, J.; Desjardins, D.; Hunter, N.; Theiler, J.
We review recent developments in the modeling and prediction of nonlinear time series. In some cases, apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases, it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years, methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics, and human speech.
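The pipeline the abstract describes, reconstruct a state space, then use local function approximation to forecast, can be sketched on the logistic map. The embedding dimension, lag, and neighbour count below are choices made for the sketch, not values from the paper.

```python
import numpy as np

def logistic(n, x0=0.4, r=3.9):
    """Logistic map: deterministic but chaotic for r = 3.9."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

series = logistic(2100)
train, test = series[:2000], series[2000:]

# Delay embedding (dimension m=2, lag 1) reconstructs the state space.
m = 2
states = np.column_stack([train[i:len(train) - m + i] for i in range(m)])
targets = train[m:]

def predict(history):
    """One-step forecast: average the successors of the 3 nearest
    neighbours of the current embedded state."""
    query = np.array(history[-m:])
    d = np.linalg.norm(states - query, axis=1)
    return targets[np.argsort(d)[:3]].mean()

# Short-term nonlinear forecasts versus a trivial baseline (train mean).
errs, baseline = [], []
hist = list(train)
for value in test:
    errs.append((predict(hist) - value) ** 2)
    baseline.append((np.mean(train) - value) ** 2)
    hist.append(value)
print(np.mean(errs), np.mean(baseline))
```

The nearest-neighbour forecaster exploits the determinism and beats the baseline by orders of magnitude at one-step horizon, exactly the short-term advantage the abstract describes.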
Modeling Philippine Stock Exchange Composite Index Using Time Series Analysis
NASA Astrophysics Data System (ADS)
Gayo, W. S.; Urrutia, J. D.; Temple, J. M. F.; Sandoval, J. R. D.; Sanglay, J. E. A.
2015-06-01
This study was conducted to develop a time series model of the Philippine Stock Exchange Composite Index and its volatility using a finite mixture of ARIMA models with conditional variance equations such as ARCH, GARCH, EGARCH, TARCH and PARCH models. The study also aimed to find out the reason behind the behavior of PSEi, that is, which of the economic variables - Consumer Price Index, crude oil price, foreign exchange rate, gold price, interest rate, money supply, price-earnings ratio, Producers' Price Index and terms of trade - can be used in projecting future values of PSEi; this was examined using the Granger Causality Test. The findings showed that the best time series model for the Philippine Stock Exchange Composite Index is ARIMA(1,1,5) - ARCH(1). Also, Consumer Price Index, crude oil price and foreign exchange rate were concluded to Granger-cause the Philippine Stock Exchange Composite Index.
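The ARCH(1) variance equation used in the selected model can be illustrated with a small simulate-and-fit exercise. The parameter values are invented, and the full ARIMA(1,1,5)-ARCH(1) mixture of the study is not reproduced; this only shows maximum-likelihood estimation of the ARCH(1) component.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Simulate ARCH(1) returns: r_t = sigma_t * z_t with
# sigma_t^2 = w + a * r_{t-1}^2 and z_t standard normal.
w_true, a_true, n = 0.5, 0.4, 4000
r = np.zeros(n)
for t in range(1, n):
    sigma2 = w_true + a_true * r[t - 1] ** 2
    r[t] = np.sqrt(sigma2) * rng.normal()

def neg_loglik(params):
    """Gaussian negative log-likelihood (constants dropped),
    conditioning on the first observation."""
    w, a = params
    if w <= 0 or a < 0 or a >= 1:
        return np.inf
    sigma2 = w + a * r[:-1] ** 2
    return 0.5 * np.sum(np.log(sigma2) + r[1:] ** 2 / sigma2)

res = minimize(neg_loglik, x0=[1.0, 0.1], method="Nelder-Mead")
w_hat, a_hat = res.x
print(w_hat, a_hat)
```

With 4000 observations the estimates land close to the true (w, a), showing why conditional-variance parameters are identifiable from returns alone.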
Time Series Analysis of 3D Coordinates Using Nonstochastic Observations
NASA Astrophysics Data System (ADS)
Velsink, Hiddo
2016-03-01
Adjustment and testing of a combination of stochastic and nonstochastic observations is applied to the deformation analysis of a time series of 3D coordinates. Nonstochastic observations are constant values that are treated as if they were observations. They are used to formulate constraints on the unknown parameters of the adjustment problem. Thus they describe deformation patterns. If deformation is absent, the epochs of the time series are supposed to be related via affine, similarity or congruence transformations. S-basis invariant testing of deformation patterns is treated. The model is experimentally validated by showing the procedure for a point set of 3D coordinates, determined from total station measurements during five epochs. The modelling of two patterns, the movement of just one point in several epochs, and of several points, is shown. Full, rank deficient covariance matrices of the 3D coordinates, resulting from free network adjustments of the total station measurements of each epoch, are used in the analysis.
Fast Nonparametric Clustering of Structured Time-Series.
Hensman, James; Rattray, Magnus; Lawrence, Neil D
2015-02-01
In this publication, we combine two Bayesian nonparametric models: the Gaussian Process (GP) and the Dirichlet Process (DP). Our innovation in the GP model is to introduce a variation on the GP prior which enables us to model structured time-series data, i.e., data containing groups where we wish to model inter- and intra-group variability. Our innovation in the DP model is an implementation of a new fast collapsed variational inference procedure which enables us to optimize our variational approximation significantly faster than standard VB approaches. In a biological time series application we show how our model better captures salient features of the data, leading to better consistency with existing biological classifications, while the associated inference algorithm provides a significant speed-up over EM-based variational inference. PMID:26353249
Deviations from uniform power law scaling in nonstationary time series
NASA Technical Reports Server (NTRS)
Viswanathan, G. M.; Peng, C. K.; Stanley, H. E.; Goldberger, A. L.
1997-01-01
A classic problem in physics is the analysis of highly nonstationary time series that typically exhibit long-range correlations. Here we test the hypothesis that the scaling properties of the dynamics of healthy physiological systems are more stable than those of pathological systems by studying beat-to-beat fluctuations in the human heart rate. We develop techniques based on the Fano factor and Allan factor functions, as well as on detrended fluctuation analysis, for quantifying deviations from uniform power-law scaling in nonstationary time series. By analyzing extremely long data sets of up to N = 10^5 beats for 11 healthy subjects, we find that the fluctuations in the heart rate scale approximately uniformly over several temporal orders of magnitude. By contrast, we find that in data sets of comparable length for 14 subjects with heart disease, the fluctuations grow erratically, indicating a loss of scaling stability.
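The detrended fluctuation analysis (DFA) mentioned here is straightforward to sketch. This is the standard first-order DFA on a synthetic white-noise series (expected exponent 0.5), not heart-rate data:

```python
import numpy as np

def dfa(x, scales):
    """First-order DFA: fluctuation function F(n) for each scale n."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    fluct = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[: n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)       # local linear detrending
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    return np.array(fluct)

rng = np.random.default_rng(5)
x = rng.normal(size=20000)                     # white noise: expect alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
print("scaling exponent alpha =", alpha)
```

Uniform power-law scaling corresponds to a single straight line in log F versus log n; the paper's deviation measures quantify departures from that line.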
Simple Patterns in Fluctuations of Time Series of Economic Interest
NASA Astrophysics Data System (ADS)
Fanchiotti, H.; García Canal, C. A.; García Zúñiga, H.
Time series corresponding to nominal exchange rates between the US dollar and the currencies of Argentina, Brazil and the European Economic Community; different financial indexes such as the Industrial Dow Jones, the British Footsie, the German DAX Composite, the Australian Share Price and the Nikkei Cash; and also different Argentine local tax revenues are analyzed, looking for the appearance of simple patterns and the possible definition of forecast evaluators. In every case, the statistical fractal dimensions are obtained from the behavior of the corresponding variance of increments at a given lag. The detrended fluctuation analysis of the data, in terms of the corresponding exponent in the resulting power law, is carried out. Finally, the frequency power spectra of all the time series considered are computed and compared.
Analyzing Single-Molecule Time Series via Nonparametric Bayesian Inference
Hines, Keegan E.; Bankston, John R.; Aldrich, Richard W.
2015-01-01
The ability to measure the properties of proteins at the single-molecule level offers an unparalleled glimpse into biological systems at the molecular scale. The interpretation of single-molecule time series has often been rooted in statistical mechanics and the theory of Markov processes. While existing analysis methods have been useful, they are not without significant limitations including problems of model selection and parameter nonidentifiability. To address these challenges, we introduce the use of nonparametric Bayesian inference for the analysis of single-molecule time series. These methods provide a flexible way to extract structure from data instead of assuming models beforehand. We demonstrate these methods with applications to several diverse settings in single-molecule biophysics. This approach provides a well-constrained and rigorously grounded method for determining the number of biophysical states underlying single-molecule data. PMID:25650922
Nonlinear modeling of chaotic time series: Theory and applications
Casdagli, M.; Eubank, S.; Farmer, J. D.; Gibson, J. (Santa Fe Inst., NM); Des Jardins, D.; Hunter, N.; Theiler, J.
1990-01-01
We review recent developments in the modeling and prediction of nonlinear time series. In some cases apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics and human speech. 162 refs., 13 figs.
The multiscale analysis between stock market time series
NASA Astrophysics Data System (ADS)
Shi, Wenbin; Shang, Pengjian
2015-11-01
This paper is devoted to multiscale cross-correlation analysis of stock market time series, applying both the multiscale DCCA cross-correlation coefficient and multiscale cross-sample entropy (MSCE). The multiscale DCCA cross-correlation coefficient is a realization of the DCCA cross-correlation coefficient on multiple scales. The results of this method present a good scaling characterization and, more significantly, the method is able to group stock markets by area. Compared to the multiscale DCCA cross-correlation coefficient, MSCE presents a more remarkable scaling characterization, and the MSCE value of each log-return series decreases as the scale factor increases. However, its grouping results are not as good as those of the multiscale DCCA cross-correlation coefficient.
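The DCCA cross-correlation coefficient is the detrended covariance normalized by the two detrended variances. A sketch on synthetic series sharing a common component (all data and scales invented):

```python
import numpy as np

def detrended_profiles(x, n):
    """Linearly detrended segments of the integrated profile at scale n."""
    y = np.cumsum(x - x.mean())
    n_seg = len(y) // n
    segs = y[: n_seg * n].reshape(n_seg, n)
    t = np.arange(n)
    resid = np.empty_like(segs)
    for i, seg in enumerate(segs):
        resid[i] = seg - np.polyval(np.polyfit(t, seg, 1), t)
    return resid

def rho_dcca(x, y, n):
    """DCCA cross-correlation coefficient at scale n."""
    rx, ry = detrended_profiles(x, n), detrended_profiles(y, n)
    f2_xy = np.mean(rx * ry)
    return f2_xy / np.sqrt(np.mean(rx ** 2) * np.mean(ry ** 2))

rng = np.random.default_rng(6)
common = rng.normal(size=10000)                  # shared market factor
x = common + 0.5 * rng.normal(size=10000)
y = common + 0.5 * rng.normal(size=10000)
print([round(rho_dcca(x, y, n), 2) for n in (8, 32, 128)])
```

Two series sharing a common factor yield a coefficient near 0.8 across scales; grouping markets by area amounts to clustering such coefficients.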
The Connected Scatterplot for Presenting Paired Time Series.
Haroz, Steve; Kosara, Robert; Franconeri, Steven L
2016-09-01
The connected scatterplot visualizes two related time series in a scatterplot and connects the points with a line in temporal sequence. News media are increasingly using this technique to present data under the intuition that it is understandable and engaging. To explore these intuitions, we (1) describe how paired time series relationships appear in a connected scatterplot, (2) qualitatively evaluate how well people understand trends depicted in this format, (3) quantitatively measure the types and frequency of misinterpretations, and (4) empirically evaluate whether viewers will preferentially view graphs in this format over the more traditional format. The results suggest that low-complexity connected scatterplots can be understood with little explanation, and that viewers are biased towards inspecting connected scatterplots over the more traditional format. We also describe misinterpretations of connected scatterplots and propose further research into mitigating these mistakes for viewers unfamiliar with the technique. PMID:26600062
A Multiscale Approach to InSAR Time Series Analysis
NASA Astrophysics Data System (ADS)
Hetland, E. A.; Muse, P.; Simons, M.; Lin, N.; Dicaprio, C. J.
2010-12-01
We present a technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale InSAR Time Series analysis), relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. As opposed to single-pixel InSAR time series techniques, MInTS takes advantage of both spatial and temporal characteristics of the deformation field. We use a weighting scheme which accounts for the presence of localized holes due to decorrelation or unwrapping errors in any given interferogram. We represent time-dependent deformation using a dictionary of general basis functions, capable of detecting both steady and transient processes. The estimation is regularized using a model resolution based smoothing so as to be able to capture rapid deformation where there are temporally dense radar acquisitions and to avoid oscillations during time periods devoid of acquisitions. MInTS also has the flexibility to explicitly parametrize known time-dependent processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). We use cross validation to choose the regularization penalty parameter in the inversion for the time-dependent deformation field. We demonstrate MInTS using a set of 63 ERS-1/2 and 29 Envisat interferograms for Long Valley Caldera.
Multifractal analysis of time series generated by discrete Ito equations
Telesca, Luciano; Czechowski, Zbigniew; Lovallo, Michele
2015-06-15
In this study, we show that discrete Ito equations with short-tail Gaussian marginal distribution function generate multifractal time series. The multifractality is due to the nonlinear correlations, which are hidden in Markov processes and are generated by the interrelation between the drift and the multiplicative stochastic forces in the Ito equation. A link between the range of the generalized Hurst exponents and the mean of the squares of all averaged net forces is suggested.
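Generating a time series from a discrete Ito equation of the kind studied here is a few lines. The drift and multiplicative noise amplitude below are invented for the sketch, and the multifractal analysis itself (generalized Hurst exponents) is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(7)

# Discrete Ito (Langevin) equation with linear drift and a
# state-dependent (multiplicative) stochastic force:
#   x_{t+1} = x_t + a(x_t) dt + b(x_t) sqrt(dt) xi_t
n, dt = 50000, 0.01
x = np.zeros(n)
for t in range(1, n):
    drift = -1.0 * x[t - 1]                          # mean reversion
    diffusion = np.sqrt(0.1 + 0.5 * x[t - 1] ** 2)   # multiplicative force
    x[t] = x[t - 1] + drift * dt + diffusion * np.sqrt(dt) * rng.normal()
print(x.mean(), x.std())
```

The interplay between the drift and the state-dependent noise amplitude is what, per the abstract, hides nonlinear correlations in this Markov process and produces multifractality.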
Identification of neutral biochemical network models from time series data
Vilela, Marco; Vinga, Susana; Maia, Marco A Grivet Mattoso; Voit, Eberhard O; Almeida, Jonas S
2009-01-01
Background The major difficulty in modeling biological systems from multivariate time series is the identification of parameter sets that endow a model with dynamical behaviors sufficiently similar to the experimental data. Directly related to this parameter estimation issue is the task of identifying the structure and regulation of ill-characterized systems. Both tasks are simplified if the mathematical model is canonical, i.e., if it is constructed according to strict guidelines. Results In this report, we propose a method for the identification of admissible parameter sets of canonical S-systems from biological time series. The method is based on a Monte Carlo process that is combined with an improved version of our previous parameter optimization algorithm. The method maps the parameter space into the network space, which characterizes the connectivity among components, by creating an ensemble of decoupled S-system models that imitate the dynamical behavior of the time series with sufficient accuracy. The concept of sloppiness is revisited in the context of these S-system models with an exploration not only of different parameter sets that produce similar dynamical behaviors but also different network topologies that yield dynamical similarity. Conclusion The proposed parameter estimation methodology was applied to actual time series data from the glycolytic pathway of the bacterium Lactococcus lactis and led to ensembles of models with different network topologies. In parallel, the parameter optimization algorithm was applied to the same dynamical data upon imposing a pre-specified network topology derived from prior biological knowledge, and the results from both strategies were compared. The results suggest that the proposed method may serve as a powerful exploration tool for testing hypotheses and the design of new experiments. PMID:19416537
Time series prediction using a rational fraction neural networks
Lee, K.; Lee, Y.C.; Barnes, C.; Aldrich, C.H.; Kindel, J.
1988-01-01
An efficient neural network based on a rational fraction representation has been trained to perform time series prediction. The network is a generalization of the Volterra-Wiener network while still retaining the computational efficiency of the latter. Because of the second order convergent nature of the learning algorithm, the rational net is computationally far more efficient than multilayer networks. The rational fractional representation is, however, more restrictive than the multilayer networks.
Stratospheric ozone time series analysis using dynamical linear models
NASA Astrophysics Data System (ADS)
Laine, Marko; Kyrölä, Erkki
2013-04-01
We describe a hierarchical statistical state space model for ozone profile time series. The time series are from satellite measurements by the SAGE II and GOMOS instruments spanning the years 1984-2012. The original data sets are combined and gridded monthly using 10 degree latitude bands, covering 20-60 km with 1 km vertical spacing. Model components include level, trend, and seasonal effect, with solar activity and quasi-biennial oscillations as proxy variables. A typical feature of an atmospheric time series is that it is not stationary but exhibits both slowly varying and abrupt changes in its distributional properties. These are caused by external forcing such as changes in solar activity or volcanic eruptions. Further, the data sampling is often nonuniform, there are data gaps, and the uncertainty of the observations can vary. When observations are combined from various sources there will be instrument and retrieval method related biases, and the differences in sampling also lead to uncertainties. Standard ARIMA-type statistical time series methods are mostly useless for atmospheric data. A more general approach makes use of dynamical linear models and Kalman-filter-type sequential algorithms. These state space models assume a linear relationship between the unknown state of the system and the observations, and for the process evolution of the hidden states. They are still flexible enough to model both smooth trends and sudden changes. The above mentioned methodological challenges are discussed, together with an analysis of change points in trends related to the recovery of stratospheric ozone. This work is part of the ESA SPIN and ozone CCI projects.
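The core of such a dynamic linear model is a Kalman filter over a small state vector. A minimal local-linear-trend sketch (level and slope only; the noise variances are assumed known and invented, and the paper's seasonal and proxy components are omitted):

```python
import numpy as np

# Local linear trend model:
#   level_t = level_{t-1} + slope_{t-1} + w1,  slope_t = slope_{t-1} + w2
#   observation: y_t = level_t + v
F = np.array([[1.0, 1.0], [0.0, 1.0]])    # state transition
H = np.array([[1.0, 0.0]])                # observation operator
Q = np.diag([1e-4, 1e-6])                 # process noise (assumed known)
R = np.array([[0.25]])                    # observation noise

def kalman_filter(y):
    state = np.zeros(2)
    P = np.eye(2)
    levels = []
    for obs in y:
        # predict
        state = F @ state
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        state = state + K @ (np.array([obs]) - H @ state)
        P = (np.eye(2) - K @ H) @ P
        levels.append(state[0])
    return np.array(levels)

rng = np.random.default_rng(8)
t = np.arange(300)
truth = 0.05 * t                          # slow trend
y = truth + 0.5 * rng.normal(size=300)
level = kalman_filter(y)
print(level[-1])
```

Because the slope is part of the state, the filter tracks smooth trends while remaining able to react to abrupt level shifts, the flexibility the abstract contrasts with ARIMA methods.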
A data-fitting procedure for chaotic time series
McDonough, J.M.; Mukerji, S.; Chung, S.
1998-10-01
In this paper the authors introduce data characterizations for fitting chaotic data to linear combinations of one-dimensional maps (say, of the unit interval) for use in subgrid-scale turbulence models. They test the efficacy of these characterizations on data generated by a chaotically-forced Burgers' equation and demonstrate very satisfactory results in terms of modeled time series, power spectra and delay maps.
An online novel adaptive filter for denoising time series measurements.
Willis, Andrew J
2006-04-01
A nonstationary form of the Wiener filter based on a principal components analysis is described for filtering time series data possibly derived from noisy instrumentation. The theory of the filter is developed, implementation details are presented and two examples are given. The filter operates online, approximating the maximum a posteriori optimal Bayes reconstruction of a signal with arbitrarily distributed and nonstationary statistics. PMID:16649562
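The filter above is online and Bayesian; as a loose batch illustration of the principal-components idea only (an SSA-style sketch under assumed window and rank parameters, not the paper's algorithm), delay-embedded windows can be projected onto their top principal components and averaged back into a series:

```python
import numpy as np

def pca_denoise(y, window=20, rank=2):
    """Denoise by projecting delay-embedded windows onto the top
    principal components (a batch, SSA-style simplification)."""
    n = len(y)
    # trajectory matrix: one delay-embedded window per row
    X = np.array([y[i:i + window] for i in range(n - window + 1)])
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mean
    # diagonal averaging maps the rank-reduced matrix back to a series
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(Xr.shape[0]):
        out[i:i + window] += Xr[i]
        counts[i:i + window] += 1
    return out / counts
```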
One nanosecond time synchronization using SERIES and GPS
NASA Technical Reports Server (NTRS)
Buennagel, A. A.; Spitzmesser, D. J.; Young, L. E.
1983-01-01
Subnanosecond time synchronization between two remote rubidium frequency standards is verified by a traveling clock comparison. Using a novel, code-ignorant Global Positioning System (GPS) receiver developed at JPL, the SERIES geodetic baseline measurement system is applied to establish the offset between the 1-Hz outputs of the remote standards. Results of the two intercomparison experiments to date are presented, as well as experimental details.
Time series regression model for infectious disease and weather.
Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro
2015-10-01
Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. PMID:26188633
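One of the proposed modifications, the logarithm of lagged case counts as a covariate in a Poisson regression, can be sketched as follows. The simulation, variable names, and the plain-IRLS fitter are illustrative assumptions; in practice quasi-Poisson or negative binomial variants would guard against overdispersion, as the article notes.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson GLM with log link via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)[0]  # stable start
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu      # working response for the log link
        WX = X * mu[:, None]              # Poisson variance equals the mean
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

def disease_design(cases, weather, lag=1):
    """Intercept + weather + log(lagged cases + 1); the lagged-count term
    absorbs autocorrelation from true contagion, as motivated by SIR models."""
    y = cases[lag:]
    X = np.column_stack([np.ones(len(y)),
                         weather[lag:],
                         np.log(cases[:-lag] + 1.0)])
    return X, y
```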
New Comprehensive System to Construct Speleothem Fabrics Time Series
NASA Astrophysics Data System (ADS)
Frisia, S.; Borsato, A.
2014-12-01
Speleothem fabrics record processes that influence the way geochemical proxy data are encoded in speleothems; yet there has been little advance in the use of fabrics as a complement to palaeo-proxy datasets since the fabric classification we proposed in 2010. The systematic use of fabric documentation in speleothem science has been limited by the absence of a comprehensive, numerical system that would allow the construction of fabric time series comparable with the widely used geochemical time series. Documentation of speleothem fabrics is fundamental for a robust interpretation of speleothem time series where stable isotopes and trace elements are used as proxies, because fabrics highlight depositional as well as post-depositional processes whose understanding complements reconstructions based on geochemistry. Here we propose a logic system that transforms microscope observations into numbers tied to acronyms that specify each fabric type and subtype. The rationale for ascribing progressive numbers to fabrics is based on the most up-to-date growth models. In this conceptual framework, the progression reflects hydrological conditions, bio-mediation and diagenesis. The lowest numbers are given to calcite fabrics formed at relatively constant drip rates: the columnar types (compact and open). Higher numbers are ascribed to columnar fabrics characterized by impurities that cause elongation or lattice distortion (Elongated, Fascicular Optic and Radiaxial calcites). The sequence progresses with the dendritic fabrics, followed by micrite (M), which has been observed in association with microbial films. Microsparite (Ms) and mosaic calcite (Mc) have the highest numbers, being considered diagenetic. The acronyms and suffixes are intended to become universally acknowledged. Fabrics can thus be plotted against age to yield time series, where the numbers are replaced by the acronyms. This will result in a visual representation of climate or environmental change.
Mode Analysis with Autocorrelation Method (Single Time Series) in Tokamak
NASA Astrophysics Data System (ADS)
Saadat, Shervin; Salem, Mohammad K.; Goranneviss, Mahmoud; Khorshid, Pejman
2010-08-01
In this paper, plasma modes are analyzed with a statistical method based on the autocorrelation function. The autocorrelation function requires only a single time series, so a single Mirnov coil suffices for this purpose. After autocorrelation analysis of the Mirnov coil data, the spectral density diagram is plotted; plasma modes can then be identified from the symmetries and trends of this diagram. The effects of RHF fields are investigated with this method in the IR-T1 tokamak, and the results agree with those of multichannel methods such as SVD and FFT.
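The single-coil procedure (autocorrelate one Mirnov signal, then read the mode frequency off its spectral density via the Wiener-Khinchin relation) can be sketched as below; the synthetic 5 kHz tone stands in for real coil data:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized autocorrelation of a single time series."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf[:max_lag] / acf[0]

def spectral_density(acf, dt):
    """Wiener-Khinchin: the power spectrum is the Fourier transform of the ACF."""
    psd = np.abs(np.fft.rfft(acf))
    freqs = np.fft.rfftfreq(len(acf), d=dt)
    return freqs, psd
```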
NASA Astrophysics Data System (ADS)
Phillips, D. A.; Meertens, C. M.; Hodgkinson, K. M.; Puskas, C. M.; Boler, F. M.; Snett, L.; Mattioli, G. S.
2013-12-01
We present an overview of time series data, tools and services available from UNAVCO along with two specific and compelling examples of geodetic time series analysis. UNAVCO provides a diverse suite of geodetic data products and cyberinfrastructure services to support community research and education. The UNAVCO archive includes data from 2500+ continuous GPS stations, borehole geophysics instruments (strainmeters, seismometers, tiltmeters, pore pressure sensors), and long baseline laser strainmeters. These data span temporal scales from seconds to decades and provide global spatial coverage with regionally focused networks including the EarthScope Plate Boundary Observatory (PBO) and COCONet. This rich, open access dataset is a tremendous resource that enables the exploration, identification and analysis of time varying signals associated with crustal deformation, reference frame determinations, isostatic adjustments, atmospheric phenomena, hydrologic processes and more. UNAVCO provides a suite of time series exploration and analysis resources including static plots, dynamic plotting tools, and data products and services designed to enhance time series analysis. The PBO GPS network allows for the identification of ~1 mm level deformation signals. At some GPS stations, seasonal signals and longer-term trends in both the vertical and horizontal components can be dominated by the effects of hydrological loading from natural and anthropogenic sources. Modeling of hydrologic deformation using GLDAS and a variety of land surface models (NOAH, MOSAIC, VIC and CLM) shows promise for independently modeling hydrologic effects and separating them from tectonic deformation as well as anthropogenic loading sources. A major challenge is to identify where loading is dominant and corrections from GLDAS can be applied, and where pumping is the dominant signal and corrections are not possible without some other data. In another arena, the PBO strainmeter network was designed to capture small short
Data visualization in interactive maps and time series
NASA Astrophysics Data System (ADS)
Maigne, Vanessa; Evano, Pascal; Brockmann, Patrick; Peylin, Philippe; Ciais, Philippe
2014-05-01
State-of-the-art data visualization has little in common with the plots and maps we used a few years ago. Many open-source tools are now available to provide access to scientific data and implement accessible, interactive, and flexible web applications. Here we present a web site, opened in November 2013, for creating custom global and regional maps and time series from research models and datasets. For maps, we explore and access data sources from a THREDDS Data Server (TDS) with the OGC WMS protocol (using the ncWMS implementation), then create interactive maps with the OpenLayers javascript library and extra information layers from a GeoServer. Maps become dynamic, zoomable, synchronously connected to each other, and exportable to Google Earth. For time series, we extract data from a TDS with the NetCDF Subset Service (NCSS), then display interactive graphs with a custom library based on the Data Driven Documents javascript library (D3.js). This time series application provides dynamic functionality such as interpolation, interactive zoom on different axes, display of point values, and export to different formats. These tools were implemented for the Global Carbon Atlas (http://www.globalcarbonatlas.org): a web portal to explore, visualize, and interpret global and regional carbon fluxes from various model simulations, arising from both human activities and natural processes, a work led by the Global Carbon Project.
Characterization of aggressive prostate cancer using ultrasound RF time series
NASA Astrophysics Data System (ADS)
Khojaste, Amir; Imani, Farhad; Moradi, Mehdi; Berman, David; Siemens, D. Robert; Sauerberi, Eric E.; Boag, Alexander H.; Abolmaesumi, Purang; Mousavi, Parvin
2015-03-01
Prostate cancer is the most commonly diagnosed cancer and the second leading cause of cancer-related death in North American men. Several approaches have been proposed to augment the detection of prostate cancer using different imaging modalities. Due to the advantages of ultrasound imaging, these approaches have been the subject of several recent studies. This paper presents the results of a feasibility study on differentiating between lower and higher grade prostate cancer using ultrasound RF time series data. We also propose new spectral features of the RF time series to highlight aggressive prostate cancer in small ROIs of size 1 mm × 1 mm in a cohort of 19 ex vivo specimens of human prostate tissue. In a leave-one-patient-out cross-validation strategy, an area under the accumulated ROC curve of 0.8 has been achieved, with an overall sensitivity and specificity of 81% and 80%, respectively. The current method shows promising results for differentiating between lower and higher grades of prostate cancer using ultrasound RF time series.
Time series analysis for psychological research: examining and forecasting change
Jebb, Andrew T.; Tay, Louis; Wang, Wei; Huang, Qiming
2015-01-01
Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials. PMID:26106341
Hydroxyl time series and recirculation in turbulent nonpremixed swirling flames
Guttenfelder, Walter A.; Laurendeau, Normand M.; Ji, Jun; King, Galen B.; Gore, Jay P.; Renfro, Michael W.
2006-10-15
Time-series measurements of OH, as related to accompanying flow structures, are reported using picosecond time-resolved laser-induced fluorescence (PITLIF) and particle-imaging velocimetry (PIV) for turbulent, swirling, nonpremixed methane-air flames. The [OH] data portray a primary reaction zone surrounding the internal recirculation zone, with residual OH in the recirculation zone approaching chemical equilibrium. Modeling of the OH electronic quenching environment, when compared to fluorescence lifetime measurements, offers additional evidence that the reaction zone burns as a partially premixed flame. A time-series analysis affirms the presence of thin flamelet-like regions based on the relation between swirl-induced turbulence and fluctuations of [OH] in the reaction and recirculation zones. The OH integral time-scales are found to correspond qualitatively to local mean velocities. Furthermore, quantitative dependencies can be established with respect to axial position, Reynolds number, and global equivalence ratio. Given these relationships, the OH time-scales, and thus the primary reaction zone, appear to be dominated by convection-driven fluctuations. Surprisingly, the OH time-scales for these nominally swirling flames demonstrate significant similarities to previous PITLIF results in nonpremixed jet flames.
A method for generating high resolution satellite image time series
NASA Astrophysics Data System (ADS)
Guo, Tao
2014-10-01
There is an increasing demand for satellite remote sensing data with both high spatial and temporal resolution in many applications, but it remains a challenge to improve spatial resolution and temporal frequency simultaneously given the technical limits of current satellite observation systems. Years of R&D effort have produced successes in roughly two directions. Super-resolution, pan-sharpening, and similar methods can effectively enhance spatial resolution and generate good visual results, but they rarely preserve spectral signatures and therefore have limited analytical value. Temporal interpolation, on the other hand, is a straightforward way to increase temporal frequency, but it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining a low resolution time series with only a very small number of high resolution images. Our method starts with a pair of high and low resolution data sets, and performs a spatial registration by introducing an LDA model to map high and low resolution pixels to one another. Temporal change information is then captured by comparing the low resolution time series, projected onto the high resolution data plane, and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally, the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal change through analysis of a time sequence of low resolution images only, reducing the use of costly high resolution data as far as possible; it presents a highly effective way to build an economically operational monitoring solution for agriculture, forest, and land use investigation
Estimating the Lyapunov spectrum of time delay feedback systems from scalar time series
NASA Astrophysics Data System (ADS)
Hegger, Rainer
1999-08-01
On the basis of a recently developed method for modeling time delay systems, we propose a procedure to estimate the spectrum of Lyapunov exponents from a scalar time series. It turns out that the spectrum is approximated very well and allows for good estimates of the Lyapunov dimension even if the sampling rate of the time series is so low that the infinite dimensional tangent space is spanned quite sparsely.
Robust, automatic GPS station velocities and velocity time series
NASA Astrophysics Data System (ADS)
Blewitt, G.; Kreemer, C.; Hammond, W. C.
2014-12-01
Automation in GPS coordinate time series analysis makes results more objective and reproducible, but not necessarily as robust as the human eye to detect problems. Moreover, it is not a realistic option to manually scan our current load of >20,000 time series per day. This motivates us to find an automatic way to estimate station velocities that is robust to outliers, discontinuities, seasonality, and noise characteristics (e.g., heteroscedasticity). Here we present a non-parametric method based on the Theil-Sen estimator, defined as the median of velocities vij = (xj-xi)/(tj-ti) computed between all pairs (i, j). Theil-Sen estimators produce statistically identical solutions to ordinary least squares for normally distributed data, but they can tolerate up to 29% of data being problematic. To mitigate seasonality, our proposed estimator only uses pairs approximately separated by an integer number of years, (N-δt) < (tj-ti) < (N+δt), where δt is chosen to be small enough to capture seasonality, yet large enough to reduce random error. We fix N=1 to maximally protect against discontinuities. In addition to estimating an overall velocity, we also use these pairs to estimate velocity time series. To test our methods, we process real data sets that have already been used with velocities published in the NA12 reference frame. Accuracy can be tested by the scatter of horizontal velocities in the North American plate interior, which is known to be stable to ~0.3 mm/yr. This presents new opportunities for time series interpretation. For example, the pattern of velocity variations at the interannual scale can help separate tectonic from hydrological processes. Without any step detection, velocity estimates prove to be robust for stations affected by the Mw7.2 2010 El Mayor-Cucapah earthquake, and velocity time series show a clear change after the earthquake, without any of the usual parametric constraints, such as relaxation of postseismic velocities to their preseismic values.
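A stripped-down sketch of the estimator described above: the median of pairwise velocities over pairs separated by about one year. The test series, noise levels, and step offset are illustrative assumptions, chosen to exercise the claimed robustness to seasonality and discontinuities.

```python
import numpy as np

def median_pair_velocity(t, x, dt_tol=0.1):
    """Median of pairwise velocities over pairs separated by roughly one year
    (t in years, sorted); robust to outliers, seasonality, and offsets."""
    rates = []
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            sep = t[j] - t[i]
            if sep > 1.0 + dt_tol:
                break                 # t is sorted, so no later pair qualifies
            if sep > 1.0 - dt_tol:
                rates.append((x[j] - x[i]) / sep)
    return np.median(rates)
```

Because pairs are ~one year apart, the seasonal term nearly cancels within each pair, and the median discards the minority of pairs that straddle a step.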
Detecting hidden nodes in complex networks from time series.
Su, Ri-Qi; Wang, Wen-Xu; Lai, Ying-Cheng
2012-06-01
We develop a general method to detect hidden nodes in complex networks, using only time series from nodes that are accessible to external observation. Our method is based on compressive sensing and we formulate a general framework encompassing continuous- and discrete-time and the evolutionary-game type of dynamical systems as well. For concrete demonstration, we present an example of detecting hidden nodes from an experimental social network. Our paradigm for detecting hidden nodes is expected to find applications in a variety of fields where identifying hidden or black-boxed objects based on a limited amount of data is of interest. PMID:23005153
Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James
2013-01-01
This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations, at the same time suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it-an improved and generalized version of Bayesian Blocks [Scargle 1998]-that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode, or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by [Arias-Castro, Donoho and Huo 2003]. In the spirit of Reproducible Research [Donoho et al. 2008] all of the code and data necessary to reproduce all of the figures in this paper are included as auxiliary material.
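A compact sketch of the dynamic-programming core for the event-data mode (fitness N log(N/T) per block, with the empirical prior calibration from the paper); the other data modes, trigger mode, and error distributions are omitted:

```python
import numpy as np

def bayesian_blocks(t, p0=0.05):
    """Optimal segmentation of event times into constant-rate blocks.
    Returns the start edges of the blocks."""
    t = np.sort(np.asarray(t, float))
    n = len(t)
    # candidate change points: midpoints between consecutive events
    edges = np.concatenate([[t[0]], 0.5 * (t[1:] + t[:-1]), [t[-1]]])
    # per-block prior penalty (empirical calibration, Scargle et al. 2013)
    ncp_prior = 4.0 - np.log(73.53 * p0 * n ** -0.478)
    best = np.zeros(n)
    last = np.zeros(n, int)
    for k in range(n):
        widths = edges[k + 1] - edges[:k + 1]     # block durations T
        counts = k + 1 - np.arange(k + 1)         # block event counts N
        fit = counts * np.log(counts / widths) - ncp_prior
        fit[1:] += best[:k]                       # optimal cost of the prefix
        last[k] = np.argmax(fit)
        best[k] = fit[last[k]]
    cps = []                                      # backtrack the change points
    k = n
    while k > 0:
        k = last[k - 1]
        cps.append(k)
    return edges[np.array(cps[::-1])]
```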
Nonlinear time-series-based adaptive control applications
NASA Technical Reports Server (NTRS)
Mohler, R. R.; Rajkumar, V.; Zakrzewski, R. R.
1991-01-01
A control design methodology based on a nonlinear time-series reference model is presented. It is indicated by highly nonlinear simulations that such designs successfully stabilize troublesome aircraft maneuvers undergoing large changes in angle of attack as well as large electric power transients due to line faults. In both applications, the nonlinear controller was significantly better than the corresponding linear adaptive controller. For the electric power network, a flexible AC transmission system with series capacitor power feedback control is studied. A bilinear autoregressive moving average reference model is identified from system data, and the feedback control is manipulated according to a desired reference state. The control is optimized according to a predictive one-step quadratic performance index. A similar algorithm is derived for control of rapid changes in aircraft angle of attack over a normally unstable flight regime. In the latter case, however, a generalization of a bilinear time-series model reference includes quadratic and cubic terms in angle of attack.
Unraveling the cause-effect relation between time series
NASA Astrophysics Data System (ADS)
Liang, X. San
2014-11-01
Given two time series, can one faithfully tell, in a rigorous and quantitative way, the cause and effect between them? Based on a recently rigorized physical notion, namely, information flow, we solve an inverse problem and give this important and challenging question, which is of interest in a wide variety of disciplines, a positive answer. Here causality is measured by the time rate of information flowing from one series to the other. The resulting formula is tight in form, involving only commonly used statistics, namely, sample covariances; an immediate corollary is that causation implies correlation, but correlation does not imply causation. It has been validated with touchstone linear and nonlinear series, purportedly generated with one-way causality that evades the traditional approaches. It has also been applied successfully to the investigation of real-world problems; an example presented here is the cause-and-effect relation between the two climate modes, El Niño and the Indian Ocean Dipole (IOD), which have been linked to hazards in far-flung regions of the globe. In general, the two modes are mutually causal, but the causality is asymmetric: El Niño tends to stabilize IOD, while IOD functions to make El Niño more uncertain. To El Niño, the information flowing from IOD manifests itself as a propagation of uncertainty from the Indian Ocean.
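Since the result involves only sample covariances, it admits a short sketch: the estimator below is the Euler-differenced form of the information flow rate from Liang (2014), and the one-way coupled AR pair used to exercise it is an illustrative assumption.

```python
import numpy as np

def liang_flow(x1, x2, dt=1.0):
    """Rate of information flow from series 2 to series 1 (Liang 2014),
    built only from sample covariances, as the abstract notes."""
    dx1 = (x1[1:] - x1[:-1]) / dt        # Euler forward difference of series 1
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(np.vstack([x1, x2]))
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d1 = np.cov(x1, dx1)[0, 1]
    c2d1 = np.cov(x2, dx1)[0, 1]
    return (c11 * c12 * c2d1 - c12 ** 2 * c1d1) / (c11 ** 2 * c22 - c11 * c12 ** 2)
```

Note the asymmetry the abstract emphasizes: in the test below, x2 drives x1, so the flow from x2 to x1 is sizeable while the reverse flow vanishes even though the two series are strongly correlated.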
Time-series animation techniques for visualizing urban growth
Acevedo, W.; Masuoka, P.
1997-01-01
Time-series animation is a visually intuitive way to display urban growth. Animations of land-use change for the Baltimore-Washington region were generated by showing a series of images one after the other in sequential order. Before creating an animation, various issues which will affect the appearance of the animation should be considered, including the number of original data frames to use, the optimal animation display speed, the number of intermediate frames to create between the known frames, and the output media on which the animations will be displayed. To create new frames between the known years of data, the change in each theme (i.e. urban development, water bodies, transportation routes) must be characterized and an algorithm developed to create the in-between frames. Example time-series animations were created using a temporal GIS database of the Baltimore-Washington area. Creating the animations involved generating raster images of the urban development, water bodies, and principal transportation routes; overlaying the raster images on a background image; and importing the frames to a movie file. Three-dimensional perspective animations were created by draping each image over digital elevation data prior to importing the frames to a movie file. © 1997 Elsevier Science Ltd.
Deriving crop calendar using NDVI time-series
NASA Astrophysics Data System (ADS)
Patel, J. H.; Oza, M. P.
2014-11-01
Agricultural intensification is defined in terms of cropping intensity, i.e., the number of crops (single, double, or triple) grown per year in a unit area of cropland. Information about the crop calendar (the number of crops in a parcel of land, their planting and harvesting dates, and the date of the peak vegetative stage) is essential for proper management of agriculture. Remote sensing sensors provide regular, consistent, and reliable measurements of the vegetation response at various growth stages of a crop, and are therefore ideally suited for monitoring. The spectral response of vegetation, as measured by the Normalized Difference Vegetation Index (NDVI) and its profiles, provides a new dimension for describing the vegetation growth cycle. Analysis of NDVI values at regular time intervals yields useful information about crop growth stages and the performance of a crop in a season. However, the NDVI data series contains a considerable amount of local fluctuation in the time domain and needs to be smoothed so that the dominant seasonal behavior is enhanced. Based on temporal analysis of the smoothed NDVI series, it is possible to extract the number of crop cycles per year and their crop calendar. In the present study, a methodology is developed to extract the key elements of the crop growth cycle (i.e., the number of crops per year and their planting - peak - harvesting dates). This is illustrated by analysing a MODIS-NDVI data series for one agricultural year (June 2012 to May 2013) over Gujarat. Such an analysis is very useful for analysing the dynamics of kharif and rabi crops.
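The extraction of crop cycles from a smoothed NDVI profile can be sketched as follows. A moving average stands in for whatever smoother is actually used, and the thresholds and the synthetic double-crop profile are illustrative assumptions:

```python
import numpy as np

def crop_cycles(ndvi, window=5, min_peak=0.4, min_sep=8):
    """Smooth an NDVI composite series, then report crop cycles as local
    maxima above a vegetation threshold. Each peak marks a peak vegetative
    stage; planting/harvest dates can be read off the flanking minima."""
    padded = np.pad(ndvi, window // 2, mode="reflect")
    smooth = np.convolve(padded, np.ones(window) / window, mode="valid")
    peaks = []
    for i in range(1, len(smooth) - 1):
        if smooth[i - 1] < smooth[i] >= smooth[i + 1] and smooth[i] > min_peak:
            if peaks and i - peaks[-1] < min_sep:
                if smooth[i] > smooth[peaks[-1]]:
                    peaks[-1] = i        # merge near-duplicate maxima
            else:
                peaks.append(i)
    return smooth, peaks
```

With one peak per year the parcel is single-cropped, with two it is double-cropped (e.g. kharif plus rabi), and so on.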
Time-Series Analysis of Supergranule Characteristics at Solar Minimum
NASA Technical Reports Server (NTRS)
Williams, Peter E.; Pesnell, W. Dean
2013-01-01
Sixty days of Doppler images from the Solar and Heliospheric Observatory (SOHO) / Michelson Doppler Imager (MDI) investigation during the 1996 and 2008 solar minima have been analyzed to show that certain supergranule characteristics (size, size range, and horizontal velocity) exhibit fluctuations of three to five days. Cross-correlating parameters showed a good, positive correlation between supergranulation size and size range, and a moderate, negative correlation between size range and velocity. The size and velocity do exhibit a moderate, negative correlation, but with a small time lag (less than 12 hours). Supergranule sizes during five days of co-temporal data from MDI and the Solar Dynamics Observatory (SDO) / Helioseismic Magnetic Imager (HMI) exhibit similar fluctuations with a high level of correlation between them. This verifies the solar origin of the fluctuations, which cannot be caused by instrumental artifacts according to these observations. Similar fluctuations are also observed in data simulations that model the evolution of the MDI Doppler pattern over a 60-day period. Correlations between the supergranule size and size range time-series derived from the simulated data are similar to those seen in MDI data. A simple toy-model using cumulative, uncorrelated exponential growth and decay patterns at random emergence times produces a time-series similar to the data simulations. The qualitative similarities between the simulated and the observed time-series suggest that the fluctuations arise from stochastic processes occurring within the solar convection zone. This behavior, propagating to surface manifestations of supergranulation, may assist our understanding of magnetic-field-line advection, evolution, and interaction.
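Lagged cross-correlation of the kind reported above (a moderate negative size-velocity correlation at a small time lag) can be computed generically as below. This is a toy sketch on invented AR(1) "size" and "velocity" series with an imposed two-sample (12-hour at 6-hour cadence) lag; it is in no way the MDI analysis pipeline.

```python
import numpy as np

def lagged_corr(a, b, max_lag):
    """Correlation of standardized series for each lag k in
    [-max_lag, max_lag]; entry k pairs a[t] with b[t + k], so a
    positive best lag means b responds after a."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = np.empty(lags.size)
    for i, k in enumerate(lags):
        if k >= 0:
            corrs[i] = np.mean(a[:n - k] * b[k:])
        else:
            corrs[i] = np.mean(a[-k:] * b[:n + k])
    return lags, corrs

# Toy series: 'velocity' follows 'size' inverted, two samples later.
rng = np.random.default_rng(2)
n = 3000
ar = np.zeros(n + 2)
for t in range(n + 1):
    ar[t + 1] = 0.7 * ar[t] + rng.normal()
size = ar[2:]                                   # proxy size series
velocity = -ar[:-2] + rng.normal(0, 0.5, n)     # lags size by 2 samples

lags, corrs = lagged_corr(size, velocity, max_lag=6)
best_lag = lags[np.argmin(corrs)]               # most negative correlation
```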
Astrophysical constraints on massive black hole binary evolution from pulsar timing arrays
NASA Astrophysics Data System (ADS)
Middleton, Hannah; Del Pozzo, Walter; Farr, Will M.; Sesana, Alberto; Vecchio, Alberto
2016-01-01
We consider the information that can be derived about massive black hole binary (MBHB) populations and their formation history solely from current and possible future pulsar timing array (PTA) results. We use models of the stochastic gravitational-wave background from circular MBHBs with chirp mass in the range 10^6-10^11 M⊙ evolving solely due to radiation reaction. Our parametrized models for the black hole merger history make only weak assumptions about the properties of the black holes merging over cosmic time. We show that current PTA results place an upper limit on the black hole merger density which does not depend on the choice of a particular merger history model; however, they provide no information about the redshift or mass distribution. We show that even in the case of a detection resulting from a factor of 10 increase in amplitude sensitivity, PTAs will only put weak constraints on the source merger density as a function of mass, and will not provide any additional information on the redshift distribution. Without additional assumptions or information from other observations, a detection cannot meaningfully bound the massive black hole merger rate above zero for any particular mass.
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1995-01-01
When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index' (CI) is developed as a quantitative indicator of whether the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code, 'Sequitor'.
Quantile-oriented nonlinear time series modelling of river flows
NASA Astrophysics Data System (ADS)
Elek, P.; Márkus, L.
2003-04-01
Daily river flows of the Tisza River in Hungary are investigated. Various, by now classical, methods suggest that the series exhibits substantial long memory. Thus, as a first step, a fractional ARIMA model may be fitted to the appropriately deseasonalised data. Synthetic streamflow series can then be generated easily from the bootstrapped innovations. (This approach has recently been used by Montanari et al., Water Resources Res. 33, 1035-1044, 1997.) However, when simulating flows for the Tisza River this way, we experience a significant difference between the empirical and the synthetic density functions as well as the quantiles. This draws attention to the fact that the innovations are not independent: their squares and their absolute values are autocorrelated. Furthermore, they display nonseasonal periods of high and low variance. We propose to fit a smooth transition generalised autoregressive conditional heteroscedastic (GARCH) process to the innovations. Similar models are frequently used in mathematical finance to analyse uncorrelated series with time-varying variance. However, as hydrologic time series are less heavy-tailed than financial ones, the models must differ as well. In a standard GARCH model the dependence of the variance on the lagged innovation is quadratic, whereas in the model we present in detail at the conference it is a bounded function. The new model is superior to the previously mentioned ones in approximating the probability density, the high quantiles and the extremal behaviour of the empirical river flows. Acknowledgement: This research was supported by Hungarian Research Dev. Programme NKFP, grant No. 3/067/2001, and by Nat. Sci. Research Fund OTKA, grant No. T 032725.
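A minimal simulation contrasting a standard GARCH(1,1) recursion with a bounded-response variant illustrates why the bounded form gives lighter tails, as the abstract argues hydrologic series require. The recursion parameters and the simple cap used as the bounded function are invented; the paper's actual model is a smooth transition GARCH fitted to river-flow innovations.

```python
import numpy as np

def simulate(n, response, omega=0.05, alpha=0.3, beta=0.6, seed=3):
    """Simulate e_t = sigma_t * z_t where sigma_t^2 follows a
    GARCH(1,1)-type recursion whose dependence on the lagged
    innovation is the supplied `response` function."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    e = np.zeros(n)
    s2 = np.full(n, omega / (1 - alpha - beta))   # stationary start
    for t in range(1, n):
        s2[t] = omega + alpha * response(e[t - 1]) + beta * s2[t - 1]
        e[t] = np.sqrt(s2[t]) * z[t]
    return e

n = 100_000
quad = simulate(n, lambda e: e**2)                # standard GARCH
bounded = simulate(n, lambda e: min(e**2, 1.0))   # bounded response (toy cap)

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2 - 3

k_quad = excess_kurtosis(quad)        # heavy-tailed
k_bounded = excess_kurtosis(bounded)  # still leptokurtic, but lighter
```

Both processes are conditionally heteroscedastic, but capping the variance response tames the tail, which is the property motivating the bounded specification for river flows.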
Removing atmosphere loading effect from GPS time series
NASA Astrophysics Data System (ADS)
Tiampo, K. F.; Samadi Alinia, H.; Samsonov, S. V.; Gonzalez, P. J.
2015-12-01
The GPS time series of site position are contaminated by various sources of noise; in particular, the ionospheric and tropospheric path delays are significant [Gray et al., 2000; Meyer et al., 2006]. The GPS path delay in the ionosphere depends largely on the wave frequency, whereas the delay in the troposphere depends on the length of the travel path and therefore on site elevation. The various approaches available for compensating the ionospheric path delay cannot be used to remove the tropospheric component. Quantifying the tropospheric delay plays an important role in determining the precision of the vertical GPS component, as tropospheric parameters over a large distance have very little correlation with each other. Several methods have been proposed for eliminating the tropospheric signal from GPS vertical time series. Here we utilize surface temperature fluctuations and seasonal variations in water vapour and air pressure data for various spatial and temporal profiles in order to more accurately remove the atmospheric path delay [Samsonov et al., 2014]. In this paper, we model the atmospheric path delay of vertical position time series by analyzing the signal in the frequency domain, and study its dependence on topography in eastern Ontario for the period from January 2008 to December 2012. The systematic dependence of the amplitude of the atmospheric path delay on height, and its temporal variations, underpin the development of a new, physics-based model relating tropospheric effects to topography, which can help in determining the most accurate GPS positions.
Monitoring Forest Regrowth Using a Multi-Platform Time Series
NASA Technical Reports Server (NTRS)
Sabol, Donald E., Jr.; Smith, Milton O.; Adams, John B.; Gillespie, Alan R.; Tucker, Compton J.
1996-01-01
Over the past 50 years, the forests of western Washington and Oregon have been extensively harvested for timber. This has resulted in a heterogeneous mosaic of remaining mature forests, clear-cuts, new plantations, and second-growth stands in areas that formerly were dominated by extensive old-growth forests and younger forests resulting from fire disturbance. Traditionally, determination of seral stage and stand condition has been made using aerial photography and spot field observations, a methodology that is not only time- and resource-intensive, but falls short of providing current information on a regional scale. These limitations may be overcome, in part, through the use of multispectral images, which can cover large areas at spatial resolutions on the order of tens of meters. Multiple images comprising a time series potentially can be used to monitor land use (e.g. cutting and replanting) and to observe natural processes such as regeneration, maturation and phenologic change. These processes are more likely to be spectrally observed in a time series composed of images taken during different seasons over a long period of time. Therefore, for many areas, it may be necessary to use a variety of images taken with different imaging systems. A common framework for interpretation is needed that reduces topographic, atmospheric, and instrumental effects, as well as differences in lighting geometry between images. The present state of remote-sensing technology in general use does not realize the full potential of multispectral data in areas of high topographic relief. For example, the primary method for analyzing images of forested landscapes in the Northwest has been statistical classifiers (e.g. parallelepiped, nearest-neighbor, maximum likelihood, etc.), often applied to uncalibrated multispectral data. Although this approach has produced useful information from individual images in some areas, landcover classes defined by these
Remote-Sensing Time Series Analysis, a Vegetation Monitoring Tool
NASA Technical Reports Server (NTRS)
McKellip, Rodney; Prados, Donald; Ryan, Robert; Ross, Kenton; Spruce, Joseph; Gasser, Gerald; Greer, Randall
2008-01-01
The Time Series Product Tool (TSPT) is software, developed in MATLAB, which creates and displays high signal-to-noise Vegetation Indices imagery and other higher-level products derived from remotely sensed data. This tool enables automated, rapid, large-scale regional surveillance of crops, forests, and other vegetation. TSPT temporally processes high-revisit-rate satellite imagery produced by the Moderate Resolution Imaging Spectroradiometer (MODIS) and by other remote-sensing systems. Although MODIS imagery is acquired daily, cloudiness and other sources of noise can greatly reduce the effective temporal resolution. To improve cloud statistics, the TSPT combines MODIS data from multiple satellites (Aqua and Terra). The TSPT produces MODIS products as single time-frame and multitemporal change images, as time-series plots at a selected location, or as temporally processed image videos. The TSPT program uses MODIS metadata to remove and/or correct bad and suspect data. Bad pixel removal, multiple satellite data fusion, and temporal processing techniques create high-quality plots and animated image video sequences that depict changes in vegetation greenness. This tool provides several temporal processing options not found in other comparable imaging software tools. Because the framework to generate and use other algorithms is established, small modifications to this tool will enable the use of a large range of remotely sensed data types. An effective remote-sensing crop monitoring system must be able to detect subtle changes in plant health in the earliest stages, before the effects of a disease outbreak or other adverse environmental conditions can become widespread and devastating. The integration of the time series analysis tool with ground-based information, soil types, crop types, meteorological data, and crop growth models in a Geographic Information System could provide the foundation for a large-area crop-surveillance system that could identify
Dynamical recurrent neural networks--towards environmental time series prediction.
Aussem, A; Murtagh, F; Sarazin, M
1995-06-01
Dynamical Recurrent Neural Networks (DRNN) (Aussem 1995a) are a class of fully recurrent networks obtained by modeling synapses as autoregressive filters. By virtue of their internal dynamics, these networks approximate the underlying law governing the time series by a system of nonlinear difference equations of internal variables. They therefore provide history-sensitive forecasts without having to be explicitly fed with external memory. The model is trained by a local and recursive error propagation algorithm called temporal recurrent backpropagation. The efficiency of the procedure benefits from the exponential decay of the gradient terms backpropagated through the adjoint network. We assess the predictive ability of the DRNN model with meteorological and astronomical time series recorded around the candidate observation sites for the future VLT telescope. The hope is that reliable environmental forecasts provided by the model will allow modern telescopes to be preset, a few hours in advance, in the most suitable instrumental mode. In this perspective, the model is first appraised on precipitation measurements against traditional nonlinear AR and ARMA techniques using feedforward networks. Then we tackle a complex problem, namely the prediction of astronomical seeing, known to be a very erratic time series. A fuzzy coding approach is used to reduce the complexity of the underlying laws governing the seeing. Then, a fuzzy correspondence analysis is carried out to explore the internal relationships in the data. Based on a carefully selected set of meteorological variables at the same time-point, a nonlinear multiple regression, termed nowcasting (Murtagh et al. 1993, 1995), is carried out on the fuzzily coded seeing records. The DRNN is shown to outperform the fuzzy k-nearest neighbors method. PMID:7496587
Loading effects in GPS vertical displacement time series
NASA Astrophysics Data System (ADS)
Memin, A.; Boy, J. P.; Santamaría-Gómez, A.; Watson, C.; Gravelle, M.; Tregoning, P.
2015-12-01
Surface deformations due to loading, which still lack a comprehensive representation, account for a significant part of the variability in geodetic time series. We assess the effects of loading in GPS vertical displacement time series in several frequency bands. We compare displacements derived from up-to-date loading models to two global sets of positioning time series, and investigate how the latter are reduced at interannual periods (> 2 months), intermediate periods (> 7 days) and over the whole spectrum (> 1 day). We also assess the impact of interannual loading on velocity estimation. We compute atmospheric loading effects using surface pressure fields from the ECMWF. We use the inverted barometer (IB) hypothesis, valid for periods exceeding a week, to describe the ocean response to the pressure forcing. We use general circulation ocean models (ECCO and GLORYS) to account for wind, heat and fresh-water flux. We separately use the Toulouse Unstructured Grid Ocean model (TUGO-m), forced by air pressure and winds, to represent the dynamics of the ocean response at high frequencies. The continental water storage is described using the GLDAS/Noah and MERRA-land models. Non-hydrological loading reduces the variability of the observed vertical displacement differently according to the frequency band. The hydrological loading leads to a further reduction, mostly at annual periods. ECMWF+TUGO-m agrees better with vertical surface motion than the ECMWF+IB model at all frequencies. The interannual deformation is time-correlated at most locations. It is adequately described by a power-law process with spectral index varying from -1.5 to -0.2. Depending on the power-law parameters, the predicted non-linear deformation due to mass loading variations leads to vertical velocity biases of up to 0.7 mm/yr when estimated from 5 years of continuous observations. The maximum velocity bias can reach up to 1 mm/yr in regions around the southern Tropical band.
Identifying multiple periodicities in sparse photon event time series
NASA Astrophysics Data System (ADS)
Koen, Chris
2016-07-01
The data considered are event times (e.g. photon arrival times, or the occurrence of sharp pulses). The source is multiperiodic, or the data could be multiperiodic because several unresolved sources contribute to the time series. Most events may be unobserved, either because the source is intermittent, or because some events are below the detection limit. The data may also be contaminated by spurious pulses. The problem considered is the determination of the periods in the data. A two-step procedure is proposed: in the first, a likely period is identified; in the second, events associated with this periodicity are removed from the time series. The steps are repeated until the remaining events do not exhibit any periodicity. A number of period-finding methods from the literature are reviewed, and a new maximum likelihood statistic is also introduced. It is shown that the latter is competitive compared to other techniques. The proposed methodology is tested on simulated data. Observations of two rotating radio transients are discussed, but contrary to claims in the literature, no evidence for multiperiodicity could be found.
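A simple stand-in for the first step above (identifying a likely period in sparse event times) is the classical Rayleigh statistic scanned over trial periods. This is not the maximum likelihood statistic the paper introduces; the true period, jitter, missed-cycle fraction, and spurious-event rate below are all invented for the demonstration.

```python
import numpy as np

def rayleigh_power(events, period):
    """Rayleigh statistic: large when event phases cluster at the
    trial period, of order 1 on average for unperiodic data."""
    phi = 2 * np.pi * (events / period)
    return (np.cos(phi).sum() ** 2 + np.sin(phi).sum() ** 2) / len(events)

# Toy data: events locked to a 1.7 s period with jitter, most cycles
# missed (intermittent source), plus spurious background events.
rng = np.random.default_rng(4)
true_p = 1.7
cycles = rng.choice(np.arange(5000), size=400, replace=False)   # sparse
events = cycles * true_p + rng.normal(0, 0.05, 400)
events = np.concatenate([events, rng.uniform(0, 5000 * true_p, 100)])

trial = np.linspace(1.5, 2.0, 20001)   # grid finer than p^2 / span
power = np.array([rayleigh_power(events, p) for p in trial])
best_p = trial[np.argmax(power)]
```

The second step of the proposed procedure would then remove the events whose phases cluster at `best_p` and rescan the remainder until no significant peak survives.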
Long-term time series prediction using OP-ELM.
Grigorievskiy, Alexander; Miche, Yoan; Ventelä, Anne-Mari; Séverin, Eric; Lendasse, Amaury
2014-03-01
In this paper, an Optimally Pruned Extreme Learning Machine (OP-ELM) is applied to the problem of long-term time series prediction. Three known strategies for long-term time series prediction, i.e. Recursive, Direct and DirRec, are considered in combination with OP-ELM and compared with a baseline linear least squares model and Least-Squares Support Vector Machines (LS-SVM). Among these three strategies, DirRec is the most time-consuming, and its usage with nonlinear models like LS-SVM, where several hyperparameters need to be adjusted, leads to relatively heavy computations. It is shown that OP-ELM, being also a nonlinear model, allows reasonable computational time for the DirRec strategy. In all our experiments except one, OP-ELM with the DirRec strategy outperforms the linear model with any strategy. In contrast to the proposed algorithm, LS-SVM behaves unstably without variable selection. It is also shown that there is no superior strategy for OP-ELM: any of the three can be the best. In addition, the prediction accuracy of an ensemble of OP-ELMs is studied, and it is shown that averaging predictions of the ensemble can improve the accuracy (Mean Square Error) dramatically. PMID:24365536
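The Recursive and Direct strategies named above can be sketched with the paper's linear least squares baseline (DirRec, which retrains after each recursive step, is omitted for brevity). The toy series, embedding order, and horizon are invented; this is a sketch of the strategies, not of OP-ELM itself.

```python
import numpy as np

def embed(x, order):
    """Lag embedding: row t is [x_t, ..., x_{t+order-1}], target x_{t+order}."""
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    return X, x[order:]

def fit(X, y):
    # Linear least squares with an intercept column.
    return np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y,
                           rcond=None)[0]

def predict(w, X):
    return np.column_stack([X, np.ones(len(X))]) @ w

rng = np.random.default_rng(5)
t = np.arange(400)
x = np.sin(2 * np.pi * t / 40) + 0.05 * rng.normal(size=400)
train, order, H = x[:360], 8, 10        # forecast 10 steps ahead

X, y = embed(train, order)

# Recursive: one 1-step model, fed its own outputs H times.
w1 = fit(X, y)
window = list(train[-order:])
rec = []
for _ in range(H):
    nxt = predict(w1, np.array([window[-order:]]))[0]
    rec.append(nxt)
    window.append(nxt)

# Direct: H separate models, each mapping the window straight to t+h.
direct = []
for h in range(1, H + 1):
    Xh, yh = X[:len(X) - h + 1], train[order + h - 1:]
    wh = fit(Xh, yh)
    direct.append(predict(wh, np.array([train[-order:]]))[0])

truth = x[360:370]
mse_rec = np.mean((np.array(rec) - truth) ** 2)
mse_dir = np.mean((np.array(direct) - truth) ** 2)
```

Recursive reuses one model but compounds its errors; Direct avoids compounding at the cost of fitting H models, which is the computational trade-off the abstract discusses.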
Periodicity detection method for small-sample time series datasets.
Tominaga, Daisuke
2010-01-01
Time series of gene expression often exhibit periodic behavior under the influence of multiple signal pathways, and are represented by a model that incorporates multiple harmonics and noise. Most of these data, which are observed using DNA microarrays, consist of few sampling points in time, but most periodicity detection methods require a relatively large number of sampling points. We have previously developed a detection algorithm based on the discrete Fourier transform and Akaike's information criterion. Here we demonstrate the performance of the algorithm for small-sample time series data through a comparison with conventional and newly proposed periodicity detection methods based on a statistical analysis of the power of harmonics. We show that this method has higher sensitivity for data consisting of multiple harmonics, and is more robust against noise than other methods. Although "combinatorial explosion" occurs for large datasets, the computational time is not a problem for small-sample datasets. The MATLAB/GNU Octave script of the algorithm is available on the author's web site: http://www.cbrc.jp/%7Etominaga/piccolo/. PMID:21151841
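A minimal version of harmonic-count selection by AIC, in the spirit of the approach described (least-squares harmonic fits standing in for the explicit DFT-based algorithm), can be run on a short synthetic expression profile. The harmonic amplitudes, noise level, and series length are invented.

```python
import numpy as np

def aic_harmonics(y, max_k):
    """Fit y with k harmonics of the record-length base period by
    least squares and return the k minimizing AIC; k = 0 means
    'no periodicity detected'."""
    n = len(y)
    t = np.arange(n)
    best_k, best_aic = 0, n * np.log(np.var(y)) + 2   # mean-only model
    for k in range(1, max_k + 1):
        cols = [np.ones(n)]
        for j in range(1, k + 1):
            cols += [np.cos(2 * np.pi * j * t / n),
                     np.sin(2 * np.pi * j * t / n)]
        A = np.column_stack(cols)
        res = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        aic = n * np.log(np.mean(res ** 2)) + 2 * (2 * k + 1)
        if aic < best_aic:
            best_k, best_aic = k, aic
    return best_k

rng = np.random.default_rng(6)
n = 24                     # few sampling points, as with microarray data
t = np.arange(n)
periodic = (np.cos(2 * np.pi * 2 * t / n)
            + 0.5 * np.sin(2 * np.pi * 3 * t / n)
            + 0.3 * rng.normal(size=n))
flat = 0.3 * rng.normal(size=n)

k_sig = aic_harmonics(periodic, 5)    # should reach the 3rd harmonic
k_flat = aic_harmonics(flat, 5)       # usually 0 for pure noise
```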
ERIC Educational Resources Information Center
Ngan, Chun-Kit
2013-01-01
Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…
Synthesis of rainfall time series in a high temporal resolution
NASA Astrophysics Data System (ADS)
Callau Poduje, Ana Claudia; Haberlandt, Uwe
2014-05-01
In order to optimize the design and operation of urban drainage systems, long and continuous rain series in a high temporal resolution are essential. As the length of the rainfall records is often short, particularly the data available with the temporal and regional resolutions required for urban hydrology, it is necessary to find some numerical representation of the precipitation phenomenon to generate long synthetic rainfall series. An Alternating Renewal Model (ARM) is applied for this purpose, which consists of two structures: external and internal. The former is the sequence of wet and dry spells, described by their durations which are simulated stochastically. The internal structure is characterized by the amount of rain corresponding to each wet spell and its distribution within the spell. A multivariate frequency analysis is applied to analyze the internal structure of the wet spells and to generate synthetic events. The stochastic time series must reproduce the statistical characteristics of observed high resolution precipitation measurements used to generate them. The spatio-temporal interdependencies between stations are addressed by resampling the continuous synthetic series based on the Simulated Annealing (SA) procedure. The state of Lower-Saxony and surrounding areas, located in the north-west of Germany is used to develop the ARM. A total of 26 rainfall stations with high temporal resolution records, i.e. rainfall data every 5 minutes, are used to define the events, find the most suitable probability distributions, calibrate the corresponding parameters, simulate long synthetic series and evaluate the results. The length of the available data ranges from 10 to 20 years. The rainfall series involved in the different steps of calculation are compared using a rainfall-runoff model to simulate the runoff behavior in urban areas. The EPA Storm Water Management Model (SWMM) is applied for this evaluation. The results show a good representation of the
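The external/internal two-part structure of an Alternating Renewal Model can be sketched as below. Exponential distributions and the parameter values are placeholders for the fitted distributions and the multivariate frequency analysis described above, and the within-spell rainfall distribution is reduced to a single depth per wet spell.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters (hours / mm); a real application would fit
# these to high-resolution gauge records as described in the abstract.
MEAN_DRY, MEAN_WET, MEAN_DEPTH = 30.0, 4.0, 6.0

def simulate_arm(total_hours):
    """External structure: alternating dry/wet spell durations drawn
    stochastically. Internal structure: a rain depth per wet spell
    (to be distributed within the spell in a fuller model)."""
    t, spells = 0.0, []
    while t < total_hours:
        t += rng.exponential(MEAN_DRY)        # dry spell duration
        wet = rng.exponential(MEAN_WET)       # wet spell duration
        depth = rng.exponential(MEAN_DEPTH)   # total depth of the spell
        spells.append((t, wet, depth))        # (start, duration, depth)
        t += wet
    return spells

spells = simulate_arm(20 * 365 * 24)          # 20 synthetic years
durations = np.array([w for _, w, _ in spells])
depths = np.array([d for _, _, d in spells])
```

A long synthetic record generated this way can then be resampled (e.g. by simulated annealing, as in the study) to restore spatial interdependence between stations.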
Estimation of coupling between time-delay systems from time series
NASA Astrophysics Data System (ADS)
Prokhorov, M. D.; Ponomarenko, V. I.
2005-07-01
We propose a method for estimation of coupling between the systems governed by scalar time-delay differential equations of the Mackey-Glass type from the observed time series data. The method allows one to detect the presence of certain types of linear coupling between two time-delay systems, to define the type, strength, and direction of coupling, and to recover the model equations of coupled time-delay systems from chaotic time series corrupted by noise. We verify our method using both numerical and experimental data.
Timing of a young mildly recycled pulsar with a massive white dwarf companion
NASA Astrophysics Data System (ADS)
Lazarus, P.; Tauris, T. M.; Knispel, B.; Freire, P. C. C.; Deneva, J. S.; Kaspi, V. M.; Allen, B.; Bogdanov, S.; Chatterjee, S.; Stairs, I. H.; Zhu, W. W.
2014-01-01
We report on timing observations of the recently discovered binary pulsar PSR J1952+2630 using the Arecibo Observatory. The mildly recycled 20.7-ms pulsar is in a 9.4-h orbit with a massive, MWD > 0.93 M⊙, white dwarf (WD) companion. We present, for the first time, a phase-coherent timing solution, with precise spin, astrometric and Keplerian orbital parameters. This shows that the characteristic age of PSR J1952+2630 is 77 Myr, younger by one order of magnitude than any other recycled pulsar-massive WD system. We derive an upper limit on the true age of the system of 150 Myr. We investigate the formation of PSR J1952+2630 using detailed modelling of the mass-transfer process from a naked helium star on to the neutron star following a common-envelope phase (Case BB Roche lobe overflow). From our modelling of the progenitor system, we constrain the accretion efficiency of the neutron star, which suggests a value between 100 and 300 per cent of the Eddington accretion limit. We present numerical models of the chemical structure of a possible oxygen-neon-magnesium WD companion. Furthermore, we calculate the past and the future spin evolution of PSR J1952+2630, until the system merges in about 3.4 Gyr due to gravitational wave emission. Although we detect no relativistic effects in our timing analysis, we show that several such effects will become measurable with continued observations over the next 10 yr; thus, PSR J1952+2630 has potential as a testbed for gravitational theories.
Characterizability of metabolic pathway systems from time series data.
Voit, Eberhard O
2013-12-01
Over the past decade, the biomathematical community has devoted substantial effort to the complicated challenge of estimating parameter values for biological systems models. An even more difficult issue is the characterization of functional forms for the processes that govern these systems. Most parameter estimation approaches tacitly assume that these forms are known or can be assumed with some validity. However, this assumption is not always true. The recently proposed method of Dynamic Flux Estimation (DFE) addresses this problem in a genuinely novel fashion for metabolic pathway systems. Specifically, DFE allows the characterization of fluxes within such systems through an analysis of metabolic time series data. Its main drawback is the fact that DFE can only directly be applied if the pathway system contains as many metabolites as unknown fluxes. This situation is unfortunately rare. To overcome this roadblock, earlier work in this field had proposed strategies for augmenting the set of unknown fluxes with independent kinetic information, which however is not always available. Employing Moore-Penrose pseudo-inverse methods of linear algebra, the present article discusses an approach for characterizing fluxes from metabolic time series data that is applicable even if the pathway system is underdetermined and contains more fluxes than metabolites. Intriguingly, this approach is independent of a specific modeling framework and unaffected by noise in the experimental time series data. The results reveal whether any fluxes may be characterized and, if so, which subset is characterizable. They also help with the identification of fluxes that, if they could be determined independently, would allow the application of DFE. PMID:23391489
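The underdetermined situation described above, and what the Moore-Penrose pseudo-inverse does about it, can be seen on a toy two-metabolite, three-flux pathway; the stoichiometry and flux values are invented, and the slopes stand in for derivatives estimated from smoothed time series data.

```python
import numpy as np

# Toy pathway with 2 metabolites and 3 fluxes (underdetermined: more
# fluxes than metabolites), dX/dt = N v:
#   v1: -> X1,   v2: X1 -> X2,   v3: X2 ->
N = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

true_v = np.array([2.0, 1.5, 1.2])   # unknown fluxes at one time point
dxdt = N @ true_v                     # slopes read off the time series

# Minimum-norm flux estimate via the Moore-Penrose pseudo-inverse.
v_hat = np.linalg.pinv(N) @ dxdt

# Any valid flux vector differs from v_hat only within the null space
# of N (spanned here by (1,1,1)): those flux combinations are exactly
# the ones that cannot be characterized from the time series alone.
null_dir = np.ones(3) / np.sqrt(3)
uncharacterizable = true_v - v_hat    # lies along null_dir
```

Components of the flux vector orthogonal to the null space are characterizable; an independent measurement of any one flux here would pin down the null-space component and make the whole system identifiable, enabling DFE as the abstract suggests.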
Analysis of Multispectral Time Series for supporting Forest Management Plans
NASA Astrophysics Data System (ADS)
Simoniello, T.; Carone, M. T.; Costantini, G.; Frattegiani, M.; Lanfredi, M.; Macchiato, M.
2010-05-01
Adequate forest management requires specific plans based on updated and detailed mapping. Multispectral satellite time series have been largely applied to forest monitoring and studies at different scales thanks to their capability of providing synoptic information on some basic parameters descriptive of vegetation distribution and status. As an inexpensive tool for supporting forest management plans in an operative context, we tested the use of Landsat-TM/ETM time series (1987-2006) in the high Agri Valley (Southern Italy) for planning field surveys as well as for the integration of existing cartography. As a preliminary activity, to make all scenes radiometrically consistent, the no-change regression normalization was applied to the time series; then all the data concerning available forest maps, municipal boundaries, water basins, rivers, and roads were overlapped in a GIS environment. From the 2006 image we elaborated the NDVI map and analyzed the distribution for each land cover class. To separate the physiological variability and identify the anomalous areas, a threshold on the distributions was applied. To label the non-homogeneous areas, a multitemporal analysis was performed by separating heterogeneity due to cover changes from that linked to basic unit mapping and classification labelling aggregations. Then a map of priority areas was produced to support the field survey plan. To analyze the territorial evolution, the historical land cover maps were elaborated by adopting a hybrid classification approach based on a preliminary segmentation, the identification of training areas, and a subsequent maximum likelihood categorization. Such an analysis was fundamental for the general assessment of the territorial dynamics and, in particular, for the evaluation of the efficacy of past intervention activities.
Assemblage time series reveal biodiversity change but not systematic loss.
Dornelas, Maria; Gotelli, Nicholas J; McGill, Brian; Shimadzu, Hideyasu; Moyes, Faye; Sievers, Caya; Magurran, Anne E
2014-04-18
The extent to which biodiversity change in local assemblages contributes to global biodiversity loss is poorly understood. We analyzed 100 time series from biomes across Earth to ask how diversity within assemblages is changing through time. We quantified patterns of temporal α diversity, measured as change in local diversity, and temporal β diversity, measured as change in community composition. Contrary to our expectations, we did not detect systematic loss of α diversity. However, community composition changed systematically through time, in excess of predictions from null models. Heterogeneous rates of environmental change, species range shifts associated with climate change, and biotic homogenization may explain the different patterns of temporal α and β diversity. Monitoring and understanding change in species composition should be a conservation priority. PMID:24744374
Financial Time Series Prediction Using Elman Recurrent Random Neural Networks
Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli
2016-01-01
In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods and taking the model compared with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices. PMID:27293423
Time series analysis of electron flux at geostationary orbit
Szita, S.; Rodgers, D.J.; Johnstone, A.D.
1996-07-01
Time series of energetic (42.9–300 keV) electron flux data from the geostationary satellite Meteosat-3 shows variability over various timescales. Of particular interest are the strong local time dependence of the flux data and the large flux peaks associated with particle injection events which occur over a timescale of a few hours. Fourier analysis has shown that for this energy range, the average electron flux diurnal variation can be approximated by a combination of two sine waves with periods of 12 and 24 hours. The data have been further examined using wavelet analysis, which shows how the diurnal variation changes and where it appears most significant. The injection events have a characteristic appearance but do not occur in phase with one another and therefore do not show up in a Fourier spectrum. Wavelet analysis has been used to look for characteristic time scales for these events. © 1996 American Institute of Physics.
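The two-sine-wave approximation of the diurnal variation described above can be sketched as an ordinary least-squares fit with the 12 h and 24 h periods held fixed. This is a generic illustration on synthetic data, not the Meteosat-3 analysis itself; all numbers are assumed example values.

```python
import numpy as np

t = np.arange(0.0, 72.0, 0.5)  # time in hours, three simulated days
# Synthetic "observed" flux built from the two assumed periodic components
flux = 5.0 + 2.0 * np.sin(2 * np.pi * t / 24.0 + 0.3) \
           + 1.0 * np.sin(2 * np.pi * t / 12.0 - 0.7)

# Design matrix: constant plus sin/cos pairs at each fixed period.
# Writing each sinusoid as a sin/cos pair keeps the fit linear in the unknowns.
X = np.column_stack([
    np.ones_like(t),
    np.sin(2 * np.pi * t / 24.0), np.cos(2 * np.pi * t / 24.0),
    np.sin(2 * np.pi * t / 12.0), np.cos(2 * np.pi * t / 12.0),
])
coef, *_ = np.linalg.lstsq(X, flux, rcond=None)
amp24 = np.hypot(coef[1], coef[2])  # amplitude of the 24 h component
amp12 = np.hypot(coef[3], coef[4])  # amplitude of the 12 h component
```

Because the periods are fixed, only amplitudes and phases are estimated, so a plain linear solve suffices and no nonlinear optimisation is needed.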
Detecting and characterising ramp events in wind power time series
NASA Astrophysics Data System (ADS)
Gallego, Cristóbal; Cuerva, Álvaro; Costa, Alexandre
2014-12-01
In order to implement accurate models for wind power ramp forecasting, ramps need to be characterised beforehand. This issue has typically been addressed by performing binary ramp/non-ramp classifications based on ad hoc thresholds. However, recent works question this approach. This paper presents the ramp function, an innovative wavelet-based tool which detects and characterises ramp events in wind power time series. The underlying idea is to assess a continuous index related to the ramp intensity at each time step, obtained by considering large power output gradients evaluated over different time scales (up to typical ramp durations). The ramp function overcomes some of the drawbacks of the aforementioned binary classification and permits forecasters to easily reveal specific features of the ramp behaviour observed at a wind farm. As an example, the daily profiles of the ramp-up and ramp-down intensities are obtained for the case of a wind farm located in Spain.
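The paper's ramp function is wavelet-based; as a rough stand-in for the idea, scoring each time step by the largest absolute power change observed over several time scales can be sketched as follows (the function name, the scales and the toy series are illustrative assumptions, not the authors' exact definition):

```python
import numpy as np

def ramp_index(power, scales):
    # For each time step, take the largest absolute power change over any of
    # the considered time scales: a continuous ramp-intensity index rather
    # than a binary ramp/non-ramp label.
    n = len(power)
    r = np.zeros(n)
    for L in scales:
        d = np.abs(power[L:] - power[:-L])   # change over a lag of L steps
        r[L:] = np.maximum(r[L:], d)
    return r

power = np.array([0., 0., 0., 5., 10., 10., 10., 10.])  # one sharp ramp-up
r = ramp_index(power, scales=[1, 2])
```

A ramp that unfolds over two steps is scored by the two-step gradient even though no single one-step jump is large, which is the point of evaluating several time scales.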
On the maximum-entropy/autoregressive modeling of time series
NASA Technical Reports Server (NTRS)
Chao, B. F.
1984-01-01
The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (the z domain) to the complex frequency of one complex harmonic function in the time domain. Thus the AR model of a time series models the series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (the frequency domain), is merely a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
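The pole-to-frequency reading described above can be sketched numerically: build an AR(2) model with a chosen conjugate pole pair, recover the poles as roots of the AR polynomial, and read off the frequency and damping of the implied complex harmonic. The pole radius and angle below are arbitrary example values.

```python
import numpy as np

r, theta = 0.95, 2 * np.pi * 0.1       # assumed pole radius and angle
a1, a2 = 2 * r * np.cos(theta), -r**2  # AR(2) coefficients with poles r*e^{+-i*theta}

# Poles are the roots of z^2 - a1*z - a2 (the AR characteristic polynomial)
poles = np.roots([1.0, -a1, -a2])
freq = np.abs(np.angle(poles)) / (2 * np.pi)  # oscillation frequency, cycles/sample
damping = np.abs(poles)                       # <1: decaying exponential envelope
```

Here the pole angle gives the sinusoid's frequency and the pole radius its exponential decay rate, which is exactly the Prony-style correspondence the abstract invokes; a pole on the positive real axis would be a pure real exponential.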
Time series analysis of waterfowl species number change
NASA Astrophysics Data System (ADS)
Mengjung Chou, Caroline; Da-Wei Tsai, David; Honglay Chen, Paris
2014-05-01
The objective of this study is to analyze the time series of waterfowl species numbers in the Da-du estuary, which was designated an Important Bird Area (IBA) by BirdLife International in 2004. The multiplicative decomposition method was adopted to determine the species variations, including long-term (T), seasonal (S), cyclical (C), and irregular (I) components. The results indicated: (1) The long-term trend decreased with time from 1989 to 2012. (2) There were two seasonal peaks, in April and November each year, with the lowest point in June; moreover, since winter visitors dominated the total species numbers, the seasonal changes mainly depended on the winter birds' migration. (3) The waterfowl gradually recovered from the lowest point in 1996, but the difference between 1989 and 2003 indicated that an irreversible effect already existed. (4) The irregular variation was shown to follow a random distribution by several statistical tests, including a normality test, homogeneity of variance, an independence test and the variation probability method, used to portray the characteristics of the distributions and to demonstrate their randomness. Consequently, this study showed that time series analysis methods can reasonably represent the waterfowl species changes numerically, and the results could provide valuable data for research on ecosystem succession and anthropogenic impacts in the estuary.
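The classical multiplicative decomposition used above, X = T × S × I, can be sketched in a few lines: estimate the trend with a centered moving average and the seasonal indices from the detrended ratios. This is a textbook sketch on synthetic data, not necessarily the authors' exact procedure.

```python
import numpy as np

def multiplicative_decompose(x, period):
    # Trend T by a centered (2 x m) moving average for an even period,
    # seasonal indices S by averaging the detrended ratios x / T per season.
    n = len(x)
    k = period // 2
    w = np.ones(period + 1)
    w[0] = w[-1] = 0.5                       # half-weights at the window ends
    trend = np.full(n, np.nan)
    for t in range(k, n - k):
        trend[t] = np.dot(w, x[t - k:t + k + 1]) / period
    ratio = x / trend                        # detrended series, approximately S * I
    seasonal = np.array([np.nanmean(ratio[s::period]) for s in range(period)])
    seasonal *= period / seasonal.sum()      # normalise indices to mean 1
    return trend, np.tile(seasonal, n // period + 1)[:n]

# Synthetic example: a rising trend times a fixed 4-season pattern
t = np.arange(24, dtype=float)
x = (10.0 + t) * np.tile([0.8, 1.2, 0.9, 1.1], 6)
trend, seasonal = multiplicative_decompose(x, period=4)
```

The recovered seasonal indices closely match the pattern built into the synthetic series; the irregular component would then be the residual ratio x / (T × S).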
Quantifying evolutionary dynamics from variant-frequency time series.
Khatri, Bhavin S
2016-01-01
From Kimura's neutral theory of protein evolution to Hubbell's neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the 2-variant case, we exploit Fisher's angular transformation, which, despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, together with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time series. PMID:27616332
Exploratory joint and separate tracking of geographically related time series
NASA Astrophysics Data System (ADS)
Balasingam, Balakumar; Willett, Peter; Levchuk, Georgiy; Freeman, Jared
2012-05-01
Target tracking techniques have usually been applied to physical systems via radar, sonar or imaging modalities. But the same techniques - filtering, association, classification, track management - can be applied to nontraditional data such as one might find in other fields such as economics, business and national defense. In this paper we explore a particular data set. The measurements are time series collected at various sites; but other than that little is known about them. We shall refer to the data as representing the Megawatt hour (MWh) output of various power plants located in Afghanistan. We pose such questions as: 1. Which power plants seem to share a common model? 2. Do any power plants change their models with time? 3. Can power plant behavior be predicted, and if so, how far into the future? 4. Are some of the power plants stochastically linked? That is, does an observed lack of power demand at one power plant imply a surfeit of demand elsewhere? The observations seem well modeled as hidden Markov. This HMM modeling is compared to other approaches, and tests are extended to other (albeit self-generated) data sets with similar characteristics. Keywords: time-series analysis, hidden Markov models, statistical similarity, weighted clustering
Albedo Pattern Recognition and Time-Series Analyses in Malaysia
NASA Astrophysics Data System (ADS)
Salleh, S. A.; Abd Latif, Z.; Mohd, W. M. N. Wan; Chan, A.
2012-07-01
Pattern recognition and time-series analyses enable one to evaluate and generate predictions of specific phenomena. Albedo pattern and time-series analyses are particularly useful for climate condition monitoring. This study was conducted to identify changes in Malaysia's albedo pattern. The patterns and changes will be useful for a variety of environmental and climate monitoring studies, such as carbon budgeting and aerosol mapping. Ten years (2000-2009) of MODIS satellite images were used for the analyses and interpretation. These images were processed using ERDAS Imagine remote sensing software, ArcGIS 9.3, the 6S code for atmospheric calibration, and several MODIS tools (MRT, HDF2GIS, albedo tools). Several methods for time-series analysis were explored; this paper demonstrates trend and seasonal time-series analyses using the converted HDF-format MODIS MCD43A3 albedo land product. The results revealed significant changes in albedo percentages over the past 10 years, and a pattern related to Malaysia's nebulosity index (NI) and aerosol optical depth (AOD). A noticeable trend can be identified in the maximum and minimum values of the albedo. The rises and falls of the line graph show a similar trend in the daily observations, differing in the value or percentage of the rises and falls of albedo. Thus, it can be concluded that the temporal behaviour of land surface albedo in Malaysia is uniform with respect to the local monsoons. However, although the average albedo shows a linear trend with the nebulosity index, the pattern of albedo changes with respect to the nebulosity index indicates that external factors also affect the albedo values, as the plotted sky conditions and diffusion do not follow a uniform trend over the years, especially when the trend at 5-year intervals is examined; 2000 shows a high negative linear
Best linear forecast of volatility in financial time series
NASA Astrophysics Data System (ADS)
Krivoruchenko, M. I.
2004-09-01
The autocorrelation function of volatility in financial time series is fitted well by a superposition of several exponents. This case admits an explicit analytical solution of the problem of constructing the best linear forecast of a stationary stochastic process. We describe and apply the proposed analytical method for forecasting volatility. The leverage effect and volatility clustering are taken into account. Parameters of the predictor function are determined numerically for the Dow Jones 30 Industrial Average. Connection of the proposed method to the popular autoregressive conditional heteroskedasticity models is discussed.
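Constructing the best linear forecast from a fitted autocorrelation reduces to solving the normal (Yule-Walker-type) equations for the predictor coefficients. The sketch below uses a single exponential term for the autocovariance; the paper fits a superposition of several exponentials and also accounts for leverage and volatility clustering, which are omitted here, and the decay rate is an assumed example value.

```python
import numpy as np

def best_linear_coeffs(acov, p):
    # Solve the normal equations R a = r for the one-step predictor
    # x_t ~ sum_k a_k x_{t-k}, where R_ij = gamma(|i-j|) and r_k = gamma(k+1).
    R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])
    r = np.array([acov[k + 1] for k in range(p)])
    return np.linalg.solve(R, r)

lam = 0.8                     # assumed decay rate of the autocovariance
acov = lam ** np.arange(10)   # gamma(k) = lam^k, a single-exponential fit
a = best_linear_coeffs(acov, p=3)
```

For a purely exponential autocovariance the optimal predictor collapses onto the first lag (the process is Markovian); with a superposition of exponentials, as in the paper, all lags receive non-zero weight.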
Time series analysis using semiparametric regression on oil palm production
NASA Astrophysics Data System (ADS)
Yundari, Pasaribu, U. S.; Mukhaiyar, U.
2016-04-01
This paper presents a semiparametric kernel regression method, which has shown flexibility and ease of mathematical calculation, especially in estimating density and regression functions. The kernel function is continuous and produces a smooth estimate. The classical kernel density estimator is constructed by completely nonparametric analysis and works reasonably well for all forms of function. Here, we discuss parameter estimation in time series analysis. First, we assume that the parameters exist; we then combine this with nonparametric estimation, which makes the approach semiparametric. The optimum bandwidth is selected by minimising an approximation of the mean integrated squared error (MISE).
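A common kernel regression estimator, Nadaraya-Watson with a Gaussian kernel, can illustrate the smoothing step (this is a generic sketch, not necessarily the authors' exact estimator; in practice the bandwidth h would be chosen by minimising an MISE-type criterion rather than fixed as below):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    # Kernel-weighted local average: m(x) = sum_i K((x - x_i)/h) y_i / sum_i K(...)
    d = (x_eval[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * d**2)                 # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)  # noisy signal
m = nadaraya_watson(x, y, x, h=0.05)        # smooth estimate of the regression curve
```

A smaller h tracks the data more closely but passes more noise through; a larger h smooths more aggressively but biases away sharp features, which is exactly the trade-off the MISE criterion balances.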
Time series ARIMA models for daily price of palm oil
NASA Astrophysics Data System (ADS)
Ariff, Noratiqah Mohd; Zamhawari, Nor Hashimah; Bakar, Mohd Aftar Abu
2015-02-01
Palm oil is deemed one of the most important commodities forming the economic backbone of Malaysia. Modeling and forecasting the daily price of palm oil is of great interest for Malaysia's economic growth. In this study, time series ARIMA models are used to fit the daily price of palm oil. The Akaike Information Criterion (AIC), the Akaike Information Criterion with a correction for finite sample sizes (AICc) and the Bayesian Information Criterion (BIC) are used to compare the different ARIMA models considered. It is found that an ARIMA(1,2,1) model is suitable for the daily price of crude palm oil in Malaysia for the years 2010 to 2012.
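The criterion-based model comparison can be illustrated with a minimal sketch: fit several AR(p) candidates by least squares to a simulated series and score them with AIC. This uses a standard textbook AIC form on synthetic data; the abstract's full ARIMA fits with differencing would use maximum likelihood on the actual price series.

```python
import numpy as np

def ar_aic(x, p):
    # Fit AR(p) by least squares and return AIC = n*log(RSS/n) + 2k,
    # with k = p + 1 counting the AR coefficients plus the noise variance.
    n = len(x)
    X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])  # lagged regressors
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * (p + 1)

rng = np.random.default_rng(1)
e = rng.standard_normal(600)
x = np.zeros(600)
for t in range(1, 600):
    x[t] = 0.7 * x[t - 1] + e[t]            # simulated AR(1) series
aics = {p: ar_aic(x, p) for p in (1, 2, 3)}  # lower AIC = preferred model
```

AICc and BIC differ only in the penalty term, so the same loop with a different final expression reproduces the abstract's three-criterion comparison.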
Chaotic time series analysis in economics: Balance and perspectives
Faggini, Marisa
2014-12-15
The aim of the paper is not to review the large body of work concerning nonlinear time series analysis in economics, about which much has been written, but rather to focus on the new techniques developed to detect chaotic behaviours in economic data. More specifically, our attention will be devoted to reviewing some of these techniques and their application to economic and financial data in order to understand why chaos theory, after a period of growing interest, appears now not to be such an interesting and promising research area.
Time series as a diagnostic tool for EKG
NASA Astrophysics Data System (ADS)
Erkal, Cahit; Cecen, Aydin
2007-11-01
A preliminary analysis of heart rate variability (peak-to-peak intervals based on EKG) will be presented using the tools of nonlinear dynamics and chaos. We show that determining the uncertainty of the most commonly used invariant, the correlation dimension, and properly implementing time series analysis tools are necessary to differentiate between the healthy and unhealthy state of the heart. We present an example analysis based on normal and atrial fibrillation EKGs and point out some pitfalls that may give rise to misleading conclusions.
Multifractal Time Series Analysis Based on Detrended Fluctuation Analysis
NASA Astrophysics Data System (ADS)
Kantelhardt, Jan; Stanley, H. Eugene; Zschiegner, Stephan; Bunde, Armin; Koscielny-Bunde, Eva; Havlin, Shlomo
2002-03-01
In order to develop an easily applicable method for the multifractal characterization of non-stationary time series, we generalize the detrended fluctuation analysis (DFA), which is a well-established method for determining monofractal scaling properties and detecting long-range correlations. We relate the new multifractal DFA method to the standard partition function-based multifractal formalism, and compare it to the wavelet transform modulus maxima (WTMM) method, which is a well-established but more difficult procedure for this purpose. We employ the multifractal DFA method to determine whether the heart rhythm during different sleep stages is characterized by different multifractal properties.
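The monofractal DFA that the method generalizes can be sketched in a few lines: integrate the series, detrend it piecewise with polynomials, and measure how the RMS fluctuation F(s) grows with segment size s. This is the q = 2 special case only (first-order detrending, white-noise test data); the full multifractal DFA additionally varies the moment order q.

```python
import numpy as np

def dfa(x, scales, order=1):
    # Detrended fluctuation analysis: F(s) ~ s^alpha, with alpha = 0.5 for
    # uncorrelated noise and alpha > 0.5 for long-range correlations.
    y = np.cumsum(x - np.mean(x))            # integrated "profile" of the series
    F = []
    for s in scales:
        f2 = []
        for v in range(len(y) // s):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, order)  # local polynomial trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))        # RMS fluctuation at scale s
    return np.array(F)

rng = np.random.default_rng(2)
wn = rng.standard_normal(4096)                # uncorrelated test signal
scales = np.array([16, 32, 64, 128, 256])
F = dfa(wn, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]  # scaling exponent
```

The multifractal generalization replaces the q = 2 average of the segment variances with q-th order moments, yielding a spectrum of exponents h(q) instead of the single alpha.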
Time series analysis of transient chaos: Theory and experiment
Janosi, I.M.; Tel, T.
1996-06-01
A simple method is described for reconstructing nonattracting chaotic sets from time series by gluing together those pieces of many transiently chaotic signals that come close to this invariant set. The method is illustrated both with a map of well-known dynamics, the Hénon map, and with a signal obtained from an experiment, the NMR laser. The strange saddle responsible for the transient chaotic behavior is reconstructed and its characteristics, such as dimension, Lyapunov exponent, and correlation function, are determined. © 1996 American Institute of Physics.
Feature extraction for change analysis in SAR time series
NASA Astrophysics Data System (ADS)
Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan
2015-10-01
In remote sensing, change detection represents a broad field of research. If time series data are available, change detection can be used for monitoring applications. These applications require regular image acquisitions at an identical time of day over a defined period. Among remote sensing sensors, radar is especially well suited to applications requiring regularity, since it is independent of most weather and atmospheric influences. Furthermore, the time of day of the image acquisitions plays no role due to the independence from daylight. Since 2007, the German SAR (Synthetic Aperture Radar) satellite TerraSAR-X (TSX) has permitted the acquisition of high resolution radar images suitable for the analysis of dense built-up areas. In a former study, we presented the change analysis of the Stuttgart (Germany) airport. The aim of this study is the categorization of the changes detected in the time series. This categorization is motivated by the fact that it is insufficient to state only where and when a specific area has changed; at least as important is what has caused the change. The focus is set on the analysis of so-called high activity areas (HAA), representing areas changing at least four times over the investigated period. As a first step in categorizing these HAAs, the matching HAA changes (blobs) have to be identified. Afterwards, operating at this object-based blob level, several features are extracted, comprising shape-based, radiometric, statistical and morphological values and one context feature based on a segmentation of the HAAs. This segmentation builds on morphological differential attribute profiles (DAPs). Seven context classes are established: urban, infrastructure, rural stable, rural unstable, natural, water and unclassified. A specific HA blob is assigned to one of these classes by analyzing the CovAmCoh time series signature of the surrounding segments. In combination, also surrounding GIS information
Detection of intermittent events in atmospheric time series
NASA Astrophysics Data System (ADS)
Paradisi, P.; Cesari, R.; Palatella, L.; Contini, D.; Donateo, A.
2009-04-01
associated with the occurrence of critical events in the atmospheric dynamics. The critical events are associated with transitions between meta-stable configurations. Consequently, this approach could be useful in the study of extreme events in meteorology and climatology and in weather classification schemes. The renewal approach could also be useful in the modelling of non-Gaussian closures for turbulent fluxes [3]. In the proposed approach the main features that need to be estimated are: (a) the distribution of life-times of a given atmospheric meta-stable structure (waiting times between two critical events); (b) the statistical distribution of fluctuations; (c) the presence of memory in the time series. These features are related to the evaluation of memory content and scaling from the time series. In order to analyze these features, some novel statistical techniques have been developed in recent years. In particular, the analysis of Diffusion Entropy [4] was shown to be a robust method for the determination of the dynamical scaling. This property is related to the power-law behaviour of the life-time statistics and to the memory properties of the time series. The analysis of Renewal Aging [5], based on renewal theory [2], allows one to estimate the memory content of a time series, which is related to the number of critical events in the time series itself. After a brief review of the statistical techniques (Diffusion Entropy and Renewal Aging), an application to experimental atmospheric time series will be illustrated. References [1] Weiss G.H., Rubin R.J., Random Walks: theory and selected applications, Advances in Chemical Physics, 52, 363-505 (1983). [2] D.R. Cox, Renewal Theory, Methuen, London (1962). [3] P. Paradisi, R. Cesari, F. Mainardi, F. Tampieri: The fractional Fick's law for non-local transport processes, Physica A, 293, p. 130-142 (2001). [4] P. Grigolini, L. Palatella, G. Raffaelli, Fractals 9 (2001) 439. [5] P. Allegrini, F. Barbi, P
Girls' Series Books: A View of Times Past.
ERIC Educational Resources Information Center
Schumacher, Mark
The Girls' Books in Series collection at the University of North Carolina at Greensboro's Jackson Library contains over 1850 volumes, with publication dates ranging from the mid-1800s to the 1980s. The library's list currently contains approximately 511 different series. The library owns all the titles for 85 of the series. For 167 of the series,…
Earth's Surface Displacements from the GPS Time Series
NASA Astrophysics Data System (ADS)
Haritonova, D.; Balodis, J.; Janpaule, I.; Morozova, K.
2015-11-01
The GPS observations of both Latvian permanent GNSS networks, EUPOS®-Riga and LatPos, have been collected over a period of 8 years, from 2007 to 2014. Local surface displacements have been derived from the obtained coordinate time series after eliminating different impact sources. The Bernese software is used for data processing, with EUREF Permanent Network (EPN) stations in the surroundings of Latvia selected as fiducial stations. The results have shown a positive tendency of vertical displacements in the western part of Latvia, where station heights are increasing, and negative velocities in the central and eastern parts. Station vertical velocities range within 4 mm/year. In the case of horizontal displacements, site velocities are up to 1 mm/year and mostly oriented to the south. The obtained results have been compared with data from the deformation model NKG_RF03vel. Additionally, the purpose of this study is to analyse GPS time series obtained using two different data processing strategies: Precise Point Positioning (PPP) and estimation of station coordinates relative to the positions of fiducial stations, also known as differential GNSS.
Time series modelling and forecasting of emergency department overcrowding.
Kadri, Farid; Harrou, Fouzi; Chaabane, Sondès; Tahon, Christian
2014-09-01
Efficient management of patient flow (demand) in emergency departments (EDs) has become an urgent issue for many hospital administrations. Today, more and more attention is being paid to hospital management systems to optimally manage patient flow and to improve management strategies, efficiency and safety in such establishments. To this end, EDs require significant human and material resources, but unfortunately these are limited. Within such a framework, the ability to accurately forecast demand in emergency departments has considerable implications for hospitals to improve resource allocation and strategic planning. The aim of this study was to develop models for forecasting daily attendances at the hospital emergency department in Lille, France. The study demonstrates how time-series analysis can be used to forecast, at least in the short term, demand for emergency services in a hospital emergency department. The forecasts were based on daily patient attendances at the paediatric emergency department in Lille regional hospital centre, France, from January 2012 to December 2012. An autoregressive integrated moving average (ARIMA) method was applied separately to each of the two GEMSA categories and total patient attendances. Time-series analysis was shown to provide a useful, readily available tool for forecasting emergency department demand. PMID:25053208
On clustering of non-stationary meteorological time series
NASA Astrophysics Data System (ADS)
Horenko, Illia
2010-04-01
A method for clustering multidimensional non-stationary meteorological time series is presented. The approach is based on optimization of a regularized averaged clustering functional describing the quality of data representation in terms of several regression models and a metastable hidden process switching between them. The proposed numerical clustering algorithm is based on application of the finite element method (FEM) to the problem of non-stationary time series analysis. The main advantage of the presented algorithm compared to hidden Markov models (HMMs) and to finite mixture models is that no a priori assumptions about the probability model for the hidden and observed processes (e.g., Markovianity or stationarity) are necessary. Another attractive numerical feature of the algorithm is the possibility to choose the optimal number of metastable clusters and a natural opportunity to control the fuzziness of the resulting decomposition a posteriori, based on the statistical distinguishability of the resulting persistent cluster states. The resulting FEM-K-trends algorithm is compared with some standard fuzzy clustering methods on toy model examples and on the analysis of multidimensional historical temperature data locally in Europe and on a global temperature data set.
A Markov switching model for annual hydrologic time series
NASA Astrophysics Data System (ADS)
Akıntuğ, B.; Rasmussen, P. F.
2005-09-01
This paper investigates the properties of Markov switching (MS) models (also known as hidden Markov models) for generating annual time series. This type of model has been used in a number of recent studies in the water resources literature. The model considered here assumes that climate is switching between M states and that the state sequence can be described by a Markov chain. Observations are assumed to be drawn from a normal distribution whose parameters depend on the state variable. We present the stochastic properties of this class of models along with procedures for model identification and parameter estimation. Although, at first glance, MS models appear to be quite different from ARMA models, we show that it is possible to find an ARMA model that has the same autocorrelation function and the same marginal distribution as any given MS model. Hence, despite the difference in model structure, there are strong similarities between MS and ARMA models. MS and ARMA models are applied to the time series of mean annual discharge of the Niagara River. Although it is difficult to draw any general conclusion from a single case study, it appears that MS models (and ARMA models derived from MS models) generally have stronger autocorrelation at higher lags than ARMA models estimated by conventional maximum likelihood. This may be an important property if the purpose of the study is the analysis of multiyear droughts.
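The generating mechanism described above (a Markov chain over M climate states, with state-dependent normal observations) can be simulated in a few lines. All transition probabilities and state parameters below are illustrative assumptions, not the Niagara River estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state Markov-switching model: transition matrix, state means and std devs
P = np.array([[0.9, 0.1],      # persistence favours staying in the current state
              [0.2, 0.8]])
mu = np.array([100.0, 140.0])
sigma = np.array([10.0, 15.0])

n = 5000
states = np.zeros(n, dtype=int)
for t in range(1, n):
    states[t] = rng.choice(2, p=P[states[t - 1]])   # hidden climate state
flow = rng.normal(mu[states], sigma[states])        # state-dependent "discharge"
```

The persistence of the hidden chain is what induces the stronger long-lag autocorrelation the abstract notes: runs of the wetter or drier state produce multiyear excursions that a short-memory ARMA fit tends to understate.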
Efficient Bayesian inference for natural time series using ARFIMA processes
NASA Astrophysics Data System (ADS)
Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.
2015-11-01
Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. In this paper we present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators. For CET we also extend our method to seasonal long memory.
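At the core of ARFIMA is the fractional differencing operator (1 - B)^d, whose series expansion has a simple recursion for its coefficients; long memory corresponds to 0 < d < 0.5. A minimal sketch of those weights (the value of d is an arbitrary example):

```python
import numpy as np

def frac_diff_weights(d, n):
    # Coefficients w_k of (1 - B)^d = sum_k w_k B^k, computed by the standard
    # recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

w = frac_diff_weights(d=0.4, n=6)   # slowly decaying weights: the long memory
```

Unlike integer differencing, whose weights cut off after d terms, the fractional weights decay hyperbolically, so every past value retains some influence; this is the mechanism behind the non-trivial temporal memory the abstract describes.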
Financial Time Series Prediction Using Spiking Neural Networks
Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam
2014-01-01
In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two “traditional”, rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618
Data compression to define information content of hydrological time series
NASA Astrophysics Data System (ADS)
Weijs, S. V.; van de Giesen, N.; Parlange, M. B.
2013-08-01
When inferring models from hydrological data or calibrating hydrological models, we are interested in the information content of those data to quantify how much can potentially be learned from them. In this work we take a perspective from (algorithmic) information theory, (A)IT, to discuss some underlying issues regarding this question. In the information-theoretical framework, there is a strong link between information content and data compression. We exploit this by using data compression performance as a time series analysis tool and highlight the analogy to information content, prediction and learning (understanding is compression). The analysis is performed on time series of a set of catchments. We discuss both the deeper foundation from algorithmic information theory, some practical results and the inherent difficulties in answering the following question: "How much information is contained in this data set?". The conclusion is that the answer to this question can only be given once the following counter-questions have been answered: (1) information about which unknown quantities? and (2) what is your current state of knowledge/beliefs about those quantities? Quantifying information content of hydrological data is closely linked to the question of separating aleatoric and epistemic uncertainty and quantifying maximum possible model performance, as addressed in the current hydrological literature. The AIT perspective teaches us that it is impossible to answer this question objectively without specifying prior beliefs.
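The link between compressibility and information content can be demonstrated with an off-the-shelf compressor (a toy illustration of the idea, not the authors' analysis tool; the two synthetic series are assumptions):

```python
import zlib
import numpy as np

def compressed_size(series):
    # Bytes needed to store the zlib-compressed representation of a series:
    # a crude, practical proxy for its (algorithmic) information content.
    data = np.asarray(series, dtype=np.int16).tobytes()
    return len(zlib.compress(data, level=9))

rng = np.random.default_rng(4)
noise = rng.integers(0, 1000, size=2000)   # patternless record: hard to compress
smooth = np.repeat(np.arange(100), 20)     # highly regular record: compresses well
```

The regular record compresses to a small fraction of the size of the random one, mirroring the "understanding is compression" analogy: the more structure a model could exploit in the data, the fewer bits the data need, and the less residual information remains to be learned.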
Data compression to define information content of hydrological time series
NASA Astrophysics Data System (ADS)
Weijs, S. V.; van de Giesen, N.; Parlange, M. B.
2013-02-01
When inferring models from hydrological data or calibrating hydrological models, we might be interested in the information content of those data to quantify how much can potentially be learned from them. In this work we take a perspective from (algorithmic) information theory (AIT) to discuss some underlying issues regarding this question. In the information-theoretical framework, there is a strong link between information content and data compression. We exploit this by using data compression performance as a time series analysis tool and highlight the analogy to information content, prediction, and learning (understanding is compression). The analysis is performed on time series of a set of catchments, searching for the mechanisms behind compressibility. We discuss both the deeper foundation from algorithmic information theory, some practical results and the inherent difficulties in answering the question: "How much information is contained in this data?". The conclusion is that the answer to this question can only be given once the following counter-questions have been answered: (1) Information about which unknown quantities? (2) What is your current state of knowledge/beliefs about those quantities? Quantifying information content of hydrological data is closely linked to the question of separating aleatoric and epistemic uncertainty and quantifying maximum possible model performance, as addressed in current hydrological literature. The AIT perspective teaches us that it is impossible to answer this question objectively, without specifying prior beliefs. These beliefs are related to the maximum complexity one is willing to accept as a law and what is considered as random.
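The analogy the authors draw between compression performance and information content can be sketched with an off-the-shelf compressor. The quantization step, the 16-symbol alphabet, and the use of zlib below are illustrative choices, not the authors' actual procedure:

```python
import zlib

import numpy as np

def compression_ratio(series, levels=16):
    """Quantize a series onto a small alphabet and report the
    compressed/raw size ratio: a rough proxy for information content
    (a lower ratio means more structure a model could learn)."""
    x = np.asarray(series, dtype=float)
    edges = np.linspace(x.min(), x.max(), levels - 1)
    q = np.digitize(x, edges).astype(np.uint8)
    raw = q.tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(0)
noise = rng.normal(size=4000)                      # nothing to learn
signal = np.sin(np.linspace(0, 20 * np.pi, 4000))  # highly regular
r_noise, r_signal = compression_ratio(noise), compression_ratio(signal)
```

On such inputs the regular signal compresses far better than the noise, mirroring the paper's point that compressibility tracks learnable structure, while the absolute numbers depend on the (subjective) choice of quantizer and compressor.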
Time series clustering analysis of health-promoting behavior
NASA Astrophysics Data System (ADS)
Yang, Chi-Ta; Hung, Yu-Shiang; Deng, Guang-Feng
2013-10-01
Health promotion must be emphasized to achieve the World Health Organization goal of health for all. Since the global population is aging rapidly, the ComCare elder health-promoting service was developed by the Taiwan Institute for Information Industry in 2011. Based on the Pender health promotion model, the ComCare service offers five categories of health-promoting functions to address the everyday needs of seniors: nutrition management, social support, exercise management, health responsibility, and stress management. To assess the overall ComCare service and to improve understanding of the health-promoting behavior of elders, this study analyzed health-promoting behavioral data automatically collected by the ComCare monitoring system. In the 30,638 session records collected for 249 elders from January 2012 to March 2013, behavior patterns were identified using a fuzzy c-means time series clustering algorithm combined with autocorrelation-based representation schemes. The analysis showed that the time series data for elder health-promoting behavior can be classified into four clusters. Each cluster reveals different health-promoting needs, frequencies, function numbers and behaviors. The results can assist policymakers, health-care providers, and experts in medicine, public health, nursing and psychology, and have been provided to the Taiwan National Health Insurance Administration to assess elder health-promoting behavior.
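The clustering pipeline described above can be sketched generically: represent each series by its autocorrelation coefficients, then apply fuzzy c-means. The feature length, fuzzifier m, and the synthetic two-regime data below are assumptions for illustration, not the study's settings:

```python
import numpy as np

def acf_features(x, nlags=10):
    """Autocorrelation-based representation of one series."""
    x = np.asarray(x, float) - np.mean(x)
    denom = float(np.dot(x, x))
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Textbook fuzzy c-means; returns the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))   # random initial memberships
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-p)
        U = inv / inv.sum(axis=1, keepdims=True)
    return U

# two behaviour groups: slow versus fast oscillation
t = np.linspace(0, 1, 200)
rng = np.random.default_rng(1)
series = [np.sin(2 * np.pi * 2 * t) + 0.1 * rng.normal(size=t.size) for _ in range(5)]
series += [np.sin(2 * np.pi * 40 * t) + 0.1 * rng.normal(size=t.size) for _ in range(5)]
X = np.array([acf_features(s) for s in series])
labels = fuzzy_cmeans(X).argmax(axis=1)   # hard assignment from memberships
```

The soft memberships U quantify how ambiguous each elder's behavior pattern is; taking the argmax recovers the hard cluster labels.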
Predicting physical time series using dynamic ridge polynomial neural networks.
Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir
2014-01-01
Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950
Financial time series prediction using spiking neural networks.
Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam
2014-01-01
In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks, a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting, and this in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments. PMID:25170618
Coastal Atmosphere and Sea Time Series (CoASTS)
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Berthon, Jean-Francoise; Zibordi, Giuseppe; Doyle, John P.; Grossi, Stefania; vanderLinde, Dirk; Targa, Cristina; McClain, Charles R. (Technical Monitor)
2002-01-01
In this document, the first three years of a time series of bio-optical marine and atmospheric measurements are presented and analyzed. These measurements were performed from an oceanographic tower in the northern Adriatic Sea within the framework of the Coastal Atmosphere and Sea Time Series (CoASTS) project, an ocean color calibration and validation activity. The data set collected includes spectral measurements of the in-water apparent (diffuse attenuation coefficient, reflectance, Q-factor, etc.) and inherent (absorption and scattering coefficients) optical properties, as well as the concentrations of the main optical components (pigment and suspended matter concentrations). Clear seasonal patterns are exhibited by the marine quantities, on which an appreciable short-term variability (on the order of half a day to one day) is superimposed. This short-term variability is well correlated with changes in salinity at the surface resulting from the southward transport of freshwater coming from the northern rivers. Concentrations of chlorophyll a and total suspended matter span more than two orders of magnitude. The bio-optical characteristics of the measurement site pertain to both Case-I (about 64%) and Case-II (about 36%) waters, based on a relationship between the beam attenuation coefficient at 660 nm and the chlorophyll a concentration. Empirical algorithms relating in-water remote sensing reflectance ratios to optical components or properties of interest (chlorophyll a, total suspended matter, and the diffuse attenuation coefficient) are presented.
Diagnosis of nonlinear systems using time series analysis
Hunter, N.F. Jr.
1991-01-01
Diagnosis and analysis techniques for linear systems have been developed and refined to a high degree of precision. In contrast, techniques for the analysis of data from nonlinear systems are in the early stages of development. This paper describes a time series technique for the analysis of data from nonlinear systems. The input and response time series resulting from excitation of the nonlinear system are embedded in a state space. The form of the embedding is optimized using local canonical variate analysis and singular value decomposition techniques. From the state space model, future system responses are estimated. The expected degree of predictability of the system is investigated using the state transition matrix. The degree of nonlinearity present is quantified using the geometry of the transfer function poles in the z plane. Examples of application to a linear single-degree-of-freedom system, a single-degree-of-freedom Duffing oscillator, and linear and nonlinear three-degree-of-freedom oscillators are presented. 11 refs., 9 figs.
Software for detection and correction of inhomogeneities in time series
NASA Astrophysics Data System (ADS)
Stepanek, Petr
2010-05-01
During the last decade, a software package consisting of the AnClim, ProClimDB and LoadData programs for processing climatological data has been created. This software offers a complete solution for processing climatological time series, from loading data from a central database (e.g. Oracle; the LoadData software), through data quality control and homogenization, to time series analysis, extreme value evaluation and model output verification (the ProClimDB and AnClim software). In recent years, tools for the correction of inhomogeneities in daily data were introduced. These include methods already programmed in R (e.g. by Christine Gruber, ZAMG), such as the HOM method of Paul Della-Marta and the SPLIDHOM method of Olivier Mestre, as well as our own methods, some of which can apply a multi-element approach (using e.g. weather types). The available methods can be easily compared and evaluated (here, for both inhomogeneity detection and correction). Comparison of the available correction methods is also a current task of the ongoing COST Action ES0601 (www.homogenisation.org). Further methods, if available under R, can easily be linked with the software, so that the whole processing chain benefits from a user-friendly environment in which the most commonly used functions for data handling and climatological processing are available (read more at www.climahom.eu).
Bayesian Inference of Natural Selection from Allele Frequency Time Series.
Schraiber, Joshua G; Evans, Steven N; Slatkin, Montgomery
2016-05-01
The advent of accessible ancient DNA technology now allows the direct ascertainment of allele frequencies in ancestral populations, thereby enabling the use of allele frequency time series to detect and estimate natural selection. Such direct observations of allele frequency dynamics are expected to be more powerful than inferences made using patterns of linked neutral variation obtained from modern individuals. We developed a Bayesian method to make use of allele frequency time series data and infer the parameters of general diploid selection, along with allele age, in nonequilibrium populations. We introduce a novel path augmentation approach, in which we use Markov chain Monte Carlo to integrate over the space of allele frequency trajectories consistent with the observed data. Using simulations, we show that this approach has good power to estimate selection coefficients and allele age. Moreover, when applying our approach to data on horse coat color, we find that ignoring a relevant demographic history can significantly bias the results of inference. Our approach is made available in a C++ software package. PMID:27010022
Automatising the analysis of stochastic biochemical time-series
2015-01-01
Background Mathematical and computational modelling of biochemical systems has seen a lot of effort devoted to the definition and implementation of high-performance mechanistic simulation frameworks. Within these frameworks it is possible to analyse complex models under a variety of configurations, eventually selecting the best setting of, e.g., parameters for a target system. Motivation This operational pipeline relies on the ability to interpret the predictions of a model, often represented as simulation time-series. Thus, an efficient data analysis pipeline is crucial to automatise time-series analyses, bearing in mind that errors in this phase might mislead the modeller's conclusions. Results For this reason we have developed an intuitive framework-independent Python tool to automate analyses common to a variety of modelling approaches. These include assessment of useful non-trivial statistics for simulation ensembles, e.g., estimation of master equations. Intuitive and domain-independent batch scripts will allow the researcher to automatically prepare reports, thus speeding up the usual model-definition, testing and refinement pipeline. PMID:26051821
Hybrid perturbation methods based on statistical time series models
NASA Astrophysics Data System (ADS)
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered; moreover, mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by the combination of three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
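The additive Holt-Winters predictor used as the statistical component above can be sketched in a few lines; the smoothing constants and the simple initialization below are generic textbook choices, not the authors' tuned values:

```python
import math

def holt_winters_additive(y, period, alpha=0.3, beta=0.05, gamma=0.2, horizon=1):
    """Additive Holt-Winters smoothing: level + trend + additive season.
    Returns forecasts for 1..horizon steps past the end of y."""
    level = sum(y[:period]) / period
    trend = (sum(y[period:2 * period]) - sum(y[:period])) / period ** 2
    season = [y[i] - level for i in range(period)]
    for t in range(period, len(y)):
        s = season[t % period]
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % period] = gamma * (y[t] - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + season[(len(y) + h) % period]
            for h in range(horizon)]

# a deterministic trend + seasonal signal is recovered closely
y = [0.5 * t + 10 * math.sin(2 * math.pi * t / 12) for t in range(120)]
fc = holt_winters_additive(y, period=12, horizon=12)
truth = [0.5 * t + 10 * math.sin(2 * math.pi * t / 12) for t in range(120, 132)]
```

In the hybrid propagator the role of this predictor is to model the residual dynamics the analytical theory misses, so y would be a time series of residuals rather than a raw signal.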
A new complexity measure for time series analysis and classification
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Balasubramanian, Karthi; Dey, Sutirth
2013-07-01
Complexity measures are used in a number of applications, including extraction of information from data such as ecological time series, detection of non-random structure in biomedical signals, testing of random number generators, language recognition and authorship attribution. Different complexity measures proposed in the literature, like Shannon entropy, relative entropy, Lempel-Ziv, Kolmogorov and algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure, ETC, defined as the "Effort To Compress" the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence into a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to have better correlation with the Lyapunov exponent than Shannon entropy, even for relatively short and noisy time series. The measure also has a greater rate of success in automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
Empirical intrinsic geometry for nonlinear modeling and time series filtering
Talmon, Ronen; Coifman, Ronald R.
2013-01-01
In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization. PMID:23847205
Two algorithms to fill cloud gaps in LST time series
NASA Astrophysics Data System (ADS)
Frey, Corinne; Kuenzer, Claudia
2013-04-01
Cloud contamination is a challenge for optical remote sensing. This is especially true for the recording of a fast-changing radiative quantity like land surface temperature (LST). The substitution of cloud-contaminated pixels with estimated values (gap filling) is not straightforward, but is possible to a certain extent, as this research shows for medium-resolution time series of MODIS data. The area of interest is the Upper Mekong Delta (UMD). The background for this work is an analysis of the temporal development of 1-km LST in the context of the WISDOM project. The climate of the UMD is characterized by peak rainfall in the summer months, which is also when cloud contamination is highest in the area. The average number of available daytime observations per pixel can drop below five, for example in June, while in winter it may reach 25 observations a month. This situation is not adequate for the calculation of long-term statistics; an appropriate gap filling method should be applied beforehand. In this research, two different algorithms were tested on an 11-year time series: 1) a gradient-based algorithm and 2) a method based on ECMWF ERA-Interim re-analysis data. The first algorithm searches for stable inter-image gradients from a given environment and for a certain period of time. These gradients are then used to estimate LST for cloud-contaminated pixels in each acquisition. The estimated LSTs are clear-sky LSTs and are based solely on the MODIS LST time series. The second method estimates LST on the basis of adapted ECMWF ERA-Interim skin temperatures and creates a set of expected LSTs. The estimated values were used to fill the gaps in the original dataset, creating two new daily 1-km datasets. The maps filled with the gradient-based method contained more than double the number of valid pixels of the original dataset; the second (ERA-Interim based) method was able to fill all data gaps. From the gap filled data sets then monthly
Satellite image time series simulation for environmental monitoring
NASA Astrophysics Data System (ADS)
Guo, Tao
2014-11-01
The performance of environmental monitoring heavily depends on the availability of consecutive observation data, and there is an increasing demand in the remote sensing community for satellite image data of sufficient resolution in both the spatial and temporal dimensions; these requirements tend to conflict, and the trade-offs are hard to tune. Multiple constellations could be a solution if cost were not a concern, and it is therefore interesting but very challenging to develop a method that can simultaneously improve both spatial and temporal detail. There have been research efforts to deal with the problem from various aspects. One type of approach enhances the spatial resolution using techniques such as super-resolution and pan-sharpening, which can produce good visual effects but mostly cannot preserve spectral signatures, and so lose analytical value. Another type fills temporal gaps by time interpolation, which does not actually add informative content. In this paper we present a novel method to generate satellite images with higher spatial and temporal detail, which further enables satellite image time series simulation. Our method starts with a pair of high/low resolution data sets; a spatial registration is then performed by introducing an LDA model to map high- and low-resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of low-resolution time series data, and the change is projected onto the high-resolution data plane and assigned to each high-resolution pixel, referring to predefined temporal change patterns for each type of ground object, to generate simulated high-resolution data. A preliminary experiment shows that our method can simulate high-resolution data with good accuracy. We consider the contribution of our method to be enabling timely monitoring of temporal changes through analysis of low-resolution image time series only, and usage of
Behavior of road accidents: Structural time series approach
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir; Arsad, Zainudin
2014-12-01
Road accidents are a major issue contributing to the increasing number of deaths. Some researchers suggest that road accidents occur due to road structure and road condition, which may differ according to the area and the traffic volume of the location. Therefore, this paper examines the behavior of road accidents in four main regions of Peninsular Malaysia by employing a structural time series (STS) approach. STS offers the possibility of modelling unobserved components such as the trend and seasonal components, and allows them to vary over time. The results found that the number of road accidents in each region is described by a different model. The results imply that the government, and especially policy makers, should consider implementing different approaches to overcome the increasing number of road accidents.
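As a minimal illustration of the unobserved-components idea behind STS, the simplest such model, the local level model, can be filtered with a scalar Kalman recursion. The variances and simulated data below are assumptions; a full model for accident counts would add trend and seasonal components:

```python
import numpy as np

def local_level_filter(y, var_eps, var_eta, a0=0.0, p0=1e7):
    """Scalar Kalman filter for y_t = mu_t + eps_t, mu_{t+1} = mu_t + eta_t.
    Returns the filtered estimates of the unobserved level mu_t."""
    a, p = a0, p0
    out = []
    for obs in y:
        f = p + var_eps            # one-step prediction variance of y_t
        k = p / f                  # Kalman gain
        a = a + k * (obs - a)      # filtered level estimate
        p = p * (1 - k) + var_eta  # state variance carried to the next step
        out.append(a)
    return np.array(out)

rng = np.random.default_rng(42)
mu = np.cumsum(0.1 * rng.normal(size=500))   # slowly drifting true level
y = mu + rng.normal(size=500)                # noisy observations
est = local_level_filter(y, var_eps=1.0, var_eta=0.01)
```

The diffuse initial variance p0 lets the filter trust the first observations heavily, after which the gain settles to a steady state; the filtered level tracks mu far more closely than the raw observations do.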
A new correlation coefficient for bivariate time-series data
NASA Astrophysics Data System (ADS)
Erdem, Orhan; Ceyhan, Elvan; Varli, Yusuf
2014-11-01
The correlation in time series has received considerable attention in the literature. Its use has attained an important role in the social sciences and finance. For example, pair trading in finance is concerned with the correlation between stock prices, returns, etc. In general, Pearson’s correlation coefficient is employed in these areas although it has many underlying assumptions which restrict its use. Here, we introduce a new correlation coefficient which takes into account the lag difference of data points. We investigate the properties of this new correlation coefficient. We demonstrate that it is more appropriate for showing the direction of the covariation of the two variables over time. We also compare the performance of the new correlation coefficient with Pearson’s correlation coefficient and Detrended Cross-Correlation Analysis (DCCA) via simulated examples.
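The authors' coefficient has its own definition, which the abstract does not give in full. As a generic illustration of building lag-awareness into a correlation measure, one can scan Pearson's correlation across candidate lags:

```python
import numpy as np

def lagged_corr(x, y, max_lag=5):
    """Pearson correlation maximized over integer lags in [-max_lag, max_lag].
    Returns (best_lag, best_corr); a negative best_lag means x leads y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]   # pair x[t+lag] with y[t]
        else:
            a, b = x[:len(x) + lag], y[-lag:]  # pair x[t] with y[t-lag]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

s = np.sin(2 * np.pi * np.arange(200) / 40.0)
x, y = s[3:], s[:-3]            # x runs 3 steps ahead of y
best_lag, best_r = lagged_corr(x, y)
```

Plain Pearson correlation (lag 0) understates the co-movement of these two series; scanning lags recovers both the alignment and the near-perfect correlation.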
Adaptive Sensing of Time Series with Application to Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.; Cabrol, Nathalie A.; Furlong, Michael; Hardgrove, Craig; Low, Bryan K. H.; Moersch, Jeffrey; Wettergreen, David
2013-01-01
We address the problem of adaptive, information-optimal data collection in time series. Here a remote sensor or explorer agent throttles its sampling rate in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility -- all collected data points lie in the past, but its resource allocation decisions require predicting far into the future. Our solution is to continually fit a Gaussian process model to the latest data and optimize the sampling plan on line to maximize information gain. We compare the performance characteristics of stationary and nonstationary Gaussian process models. We also describe an application based on geologic analysis during planetary rover exploration. Here adaptive sampling can improve coverage of localized anomalies and potentially benefit the mission science yield of long autonomous traverses.
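The information-gain criterion described above can be sketched for a fixed-kernel Gaussian process, where predictive variance (and hence the information gained by sampling) depends only on where past samples were taken. The RBF kernel and its hyperparameters below are illustrative assumptions, not the mission system's model:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel matrix between 1-D location vectors."""
    d = np.subtract.outer(a, b)
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

def next_sample_time(t_obs, t_cand, ell=1.0, noise=1e-4):
    """Pick the candidate time with the largest GP predictive variance,
    i.e. where a new sample would be most informative."""
    K = rbf(t_obs, t_obs, ell) + noise * np.eye(len(t_obs))
    Ks = rbf(t_cand, t_obs, ell)
    var = rbf(t_cand, t_cand, ell).diagonal() - np.einsum(
        "ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    return t_cand[np.argmax(var)]

past = np.array([0.0, 1.0, 2.0, 8.0, 9.0, 10.0])  # a gap between t=2 and t=8
cand = np.linspace(0.0, 10.0, 101)
t_next = next_sample_time(past, cand)             # lands inside the gap
```

The greedy rule "sample where posterior variance is highest" is the simplest information-gain heuristic; the paper's setting additionally budgets samples over time and power, which this sketch omits.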
A quasi-global precipitation time series for drought monitoring
Funk, Chris C.; Peterson, Pete J.; Landsfeld, Martin F.; Pedreros, Diego H.; Verdin, James P.; Rowland, James D.; Romero, Bo E.; Husak, Gregory J.; Michaelsen, Joel C.; Verdin, Andrew P.
2014-01-01
Estimating precipitation variations in space and time is an important aspect of drought early warning and environmental monitoring. An evolving drier-than-normal season must be placed in historical context so that the severity of rainfall deficits may quickly be evaluated. To this end, scientists at the U.S. Geological Survey Earth Resources Observation and Science Center, working closely with collaborators at the University of California, Santa Barbara Climate Hazards Group, have developed a quasi-global (50°S–50°N, 180°E–180°W), 0.05° resolution, 1981 to near-present gridded precipitation time series: the Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) data archive.
Time series analysis of molecular dynamics simulation using wavelet
NASA Astrophysics Data System (ADS)
Toda, Mikito
2012-08-01
A new method is presented to extract nonstationary features of slow collective motion from time series data of molecular dynamics simulations of proteins. The method consists of two steps: (1) the wavelet transformation and (2) singular value decomposition (SVD). The wavelet transformation enables us to characterize time-varying features of oscillatory motions, and SVD enables us to reduce the degrees of freedom of the movement. We apply the method to molecular dynamics simulations of various proteins, such as Adenylate Kinase from Escherichia coli (AKE) and Thermomyces lanuginosa lipase (TLL). Moreover, we introduce indexes to characterize the collective motion of proteins. These indexes provide us with information on the nonstationary deformation of protein structures. We discuss future prospects of our study involving "intrinsically disordered proteins".
Detecting Abrupt Changes in a Piecewise Locally Stationary Time Series
Last, Michael; Shumway, Robert
2007-01-01
Non-stationary time series arise in many settings, such as seismology, speech-processing, and finance. In many of these settings we are interested in points where a model of local stationarity is violated. We consider the problem of how to detect these change-points, which we identify by finding sharp changes in the time-varying power spectrum. Several different methods are considered, and we find that the symmetrized Kullback-Leibler information discrimination performs best in simulation studies. We derive asymptotic normality of our test statistic, and consistency of estimated change-point locations. We then demonstrate the technique on the problem of detecting arrival phases in earthquakes. PMID:19190715
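The symmetrized Kullback-Leibler discrimination between local spectra, which the study found to perform best, can be illustrated with normalized periodograms. A real detector would use the windowed, time-varying spectral estimates and the test statistic of the paper, so treat this as a sketch only:

```python
import numpy as np

def spec(x):
    """Periodogram normalized to sum to one (a distribution over frequency)."""
    x = np.asarray(x, float) - np.mean(x)
    p = np.abs(np.fft.rfft(x)) ** 2
    return p / p.sum()

def sym_kl(p, q, eps=1e-12):
    """Symmetrized Kullback-Leibler discrimination J(p, q)."""
    p, q = p + eps, q + eps
    return float(np.sum((p - q) * (np.log(p) - np.log(q))))

n = 256
t = np.arange(n)
low = np.sin(2 * np.pi * 8 * t / n)    # energy concentrated at bin 8
high = np.sin(2 * np.pi * 32 * t / n)  # energy concentrated at bin 32
within = sym_kl(spec(low), spec(low))  # same spectral regime: ~0
across = sym_kl(spec(low), spec(high)) # regime change: large
```

A change-point detector slides a pair of adjacent windows along the series and flags times where this discrimination spikes, which is how sharp changes in the time-varying power spectrum (such as seismic phase arrivals) are localized.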
Estimation of Hurst Exponent for the Financial Time Series
NASA Astrophysics Data System (ADS)
Kumar, J.; Manchanda, P.
2009-07-01
Until recently, statistical methods and Fourier analysis were employed to study fluctuations in stock markets in general and the Indian stock market in particular. The current trend, however, is to apply the concepts of wavelet methodology and the Hurst exponent; see for example the work of Manchanda, J. Kumar and Siddiqi, Journal of the Franklin Institute 144 (2007), 613-636, and the paper of Cajueiro and Tabak, Physica A, 2003, who checked the efficiency of emerging markets by computing the Hurst exponent over a time window of 4 years of data. Our goal in the present paper is to understand the dynamics of the Indian stock market. We look for persistence in the stock market through the Hurst exponent and the fractal dimension of time series data of the BSE 100 and NIFTY 50.
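A basic rescaled-range (R/S) estimator of the Hurst exponent, the quantity used above to probe persistence, can be sketched as follows. The dyadic block sizes, the log-log fit, and the synthetic test signal are generic choices; the BSE 100 / NIFTY 50 data themselves are not reproduced here:

```python
import numpy as np

def hurst_rs(x, min_block=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(mean R/S) versus log(block size) over dyadic block sizes."""
    x = np.asarray(x, float)
    n = len(x)
    sizes, avg_rs = [], []
    size = min_block
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            block = x[start:start + size]
            z = np.cumsum(block - block.mean())   # cumulative deviations
            s = block.std()
            if s > 0:
                vals.append((z.max() - z.min()) / s)
        sizes.append(size)
        avg_rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(avg_rs), 1)
    return slope

rng = np.random.default_rng(0)
h_noise = hurst_rs(rng.normal(size=2048))   # uncorrelated returns: H near 0.5
```

H above 0.5 indicates persistence and below 0.5 anti-persistence; note that small-sample R/S estimates are biased upward for uncorrelated data, which is why bias-corrected estimators are preferred in practice.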
Modeling the evolution of galaxies and massive black holes across cosmic time
NASA Astrophysics Data System (ADS)
Angles-Alcazar, Daniel
I use cosmological hydrodynamic simulations to investigate different aspects of the evolution of galaxies and massive black holes across cosmic time. First, I present high resolution "zoom-in" simulations including various prescriptions for galactic outflows designed to explore the impact of star-formation driven winds on the morphological, dynamical, and structural properties of individual galaxies from early times down to z = 2. Simulations without winds produce massive, compact galaxies with low gas fractions, super-solar metallicities, high bulge fractions, and much of the star formation concentrated within the inner kpc. I show that strong winds are required to suppress early star formation, maintain high gas fractions, redistribute star-forming gas and metals over larger scales, and increase the velocity dispersion of simulated galaxies, more in agreement with the large, extended, turbulent disks typical of high-redshift star-forming galaxies. Next, I combine cosmological simulations with analytic models of black hole growth to investigate the physical mechanisms driving the observed connection between massive black holes and their host galaxies. I describe a plausible model consistent with available observations in which black hole growth is limited by galaxy-scale torques. In this torque-limited growth scenario, black holes and host galaxies evolve on average toward the observed scaling relations, regardless of the initial conditions, and with no need for mass averaging through mergers or additional self-regulation processes. Outflows from the accretion disk play a key role by providing significant mass loss, but there is no need for strong interaction with the inflowing gas in order to regulate black holes in a non-linear feedback loop. I discuss some of the main implications of this scenario in the context of current observations, including the distribution and evolution of Eddington ratios, the connection between major galaxy mergers, star formation, and
Time series power flow analysis for distribution connected PV generation.
Broderick, Robert Joseph; Quiroz, Jimmy Edward; Ellis, Abraham; Reno, Matthew J.; Smith, Jeff; Dugan, Roger
2013-01-01
Distributed photovoltaic (PV) projects must go through an interconnection study process before connecting to the distribution grid. These studies are intended to identify the likely impacts and mitigation alternatives. In the majority of the cases, system impacts can be ruled out or mitigation can be identified without an involved study, through a screening process or a simple supplemental review study. For some proposed projects, expensive and time-consuming interconnection studies are required. The challenges to performing the studies are twofold. First, every study scenario is potentially unique, as the studies are often highly specific to the amount of PV generation capacity that varies greatly from feeder to feeder and is often unevenly distributed along the same feeder. This can cause location-specific impacts and mitigations. The second challenge is the inherent variability in PV power output which can interact with feeder operation in complex ways, by affecting the operation of voltage regulation and protection devices. The typical simulation tools and methods in use today for distribution system planning are often not adequate to accurately assess these potential impacts. This report demonstrates how quasi-static time series (QSTS) simulation and high time-resolution data can be used to assess the potential impacts in a more comprehensive manner. The QSTS simulations are applied to a set of sample feeders with high PV deployment to illustrate the usefulness of the approach. The report describes methods that can help determine how PV affects distribution system operations. The simulation results are focused on enhancing the understanding of the underlying technical issues. The examples also highlight the steps needed to perform QSTS simulation and describe the data needed to drive the simulations. The goal of this report is to make the methodology of time series power flow analysis readily accessible to utilities and others responsible for evaluating
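The quasi-static time series approach described above solves an independent static power-flow snapshot at every time step, driven by load and PV profiles. The sketch below is a deliberately minimal, hypothetical illustration of that loop: a single source feeding one load bus through a linearized per-unit resistance, not a real distribution power-flow engine; the feeder parameters and profiles are invented for illustration.

```python
def qsts_voltage_profile(load_kw, pv_kw, v_src=1.0, r_pu=0.05):
    """Minimal QSTS sketch: one static 'power flow' per time step, here
    linearized so the bus voltage drop is R * net power (per unit, 1 MW base)."""
    volts = []
    for p_load, p_pv in zip(load_kw, pv_kw):
        net_mw = (p_load - p_pv) / 1000.0     # net power drawn at the bus
        volts.append(v_src - r_pu * net_mw)   # reverse flow raises the voltage
    return volts

# one simulated day at hourly resolution: flat 500 kW load, 800 kW midday PV peak
load = [500.0] * 24
pv = [max(0.0, 800.0 * (1.0 - ((h - 12) / 6.0) ** 2)) for h in range(24)]
profile = qsts_voltage_profile(load, pv)
```

Even this toy loop reproduces the qualitative issue the report studies: at midday the PV output exceeds the load, the power flow reverses, and the bus voltage rises above the source voltage.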
The size and mass evolution of the massive galaxies over cosmic time
NASA Astrophysics Data System (ADS)
Trujillo, Ignacio
2013-07-01
Massive galaxies were once understood as the paradigm of passively evolving objects, but the discovery that they experienced an enormous structural evolution over the last ten billion years has opened an active line of research. The most significant pending question in this field is the following: what mechanism has made galaxies grow so much in size without dramatically altering their stellar population properties? The most viable explanation is that massive galaxies have undergone a significant number of minor mergers which have deposited most of their material in the outer regions of the massive galaxies. This scenario, although appealing, is still far from being observationally proven, since the number of satellite galaxies surrounding the massive objects appears insufficient at all redshifts. The presence of a population of nearby massive compact galaxies with mixed stellar properties is another piece of the puzzle that still does not fit nicely within a comprehensive scheme. I will review these and other intriguing properties of the massive galaxies in this contribution.
Seasonal signals in the reprocessed GPS coordinate time series
NASA Astrophysics Data System (ADS)
Kenyeres, A.; van Dam, T.; Figurski, M.; Szafranek, K.
2008-12-01
The global (IGS) and regional (EPN) CGPS time series have already been studied in detail by several authors to analyze the periodic signals and noise present in the long-term displacement series. The comparisons indicated that the amplitude and phase of the CGPS-derived seasonal signals mostly disagree with the surface mass redistribution models. The CGPS results greatly overestimate the seasonal term: only about 40% of the observed annual amplitude can be explained by the joint contribution of the geophysical models (Dong et al. 2002). Additionally, the estimated amplitudes and phases are poorly coherent with the models, especially at sites close to coastal areas (van Dam et al. 2007). The conclusion of these studies was that the GPS results are distorted by analysis artifacts (e.g., ocean tide loading, aliasing of unmodeled short-period tidal signals, antenna PCV models), monument thermal effects, and multipath. Additionally, the GPS series available so far are inhomogeneous in terms of processing strategy, applied models, and reference frames. The introduction of the absolute phase center variation (PCV) models for the satellite and ground antennae in 2006, and the related reprocessing of the GPS precise orbits, provided a solid basis and a strong argument for the complete re-analysis of the GPS observations from the global down to the local network level. This enormous work is in progress within the IGS, and a pilot analysis was already performed for the complete EPN observations from 1996 to 2007 by the MUT group (Military University of Technology, Warsaw). A quick analysis of the results confirmed expectations and the superiority of the reprocessed data. The noise level (weekly coordinate repeatability) was greatly reduced, paving the way for later analysis at the daily-solution level. We also observed a significant decrease of the seasonal term in the residual coordinate time series, which called our attention to perform a repeated comparison of the GPS derived annual periodicity
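The annual and semiannual terms discussed above are conventionally estimated by least-squares fitting of sinusoids (plus bias and linear trend) to a coordinate component. A minimal NumPy sketch on synthetic data; the amplitudes, trend, and noise level below are invented for illustration.

```python
import numpy as np

def fit_seasonal(t_years, y):
    """Least-squares fit of bias, linear trend, and annual + semiannual
    sinusoids; returns the coefficients and the two seasonal amplitudes."""
    w = 2.0 * np.pi                      # one cycle per year
    A = np.column_stack([
        np.ones_like(t_years), t_years,
        np.cos(w * t_years), np.sin(w * t_years),
        np.cos(2 * w * t_years), np.sin(2 * w * t_years),
    ])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    annual = np.hypot(coef[2], coef[3])
    semiannual = np.hypot(coef[4], coef[5])
    return coef, annual, semiannual

# synthetic daily height series: 3 mm/yr trend, 5 mm annual signal, 2 mm noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 1.0 / 365.25)
y = 3.0 * t + 5.0 * np.sin(2.0 * np.pi * t + 0.7) + rng.normal(0.0, 2.0, t.size)
coef, annual, semiannual = fit_seasonal(t, y)
```

Comparing amplitudes and phases estimated this way before and after reprocessing is the kind of repeated comparison the abstract refers to.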
Time perspective as a predictor of massive multiplayer online role-playing game playing.
Lukavska, Katerina
2012-01-01
This article focuses on the relationship between the time perspective (TP) personality trait and massive multiplayer online role-playing game (MMORPG) playing. We investigate the question of frequency of playing. TP was measured with Zimbardo's TP Inventory (ZTPI), which includes five factors: past negative, past positive, present hedonistic, present fatalistic, and future. The study used data from 154 MMORPG players. We demonstrated that TP partially explained differences within a group of players with respect to the frequency of playing. Significant positive correlations were found between the present factors and the amount of time spent playing MMORPGs, and a significant negative correlation was found between the future factor and the time spent playing MMORPGs. Our study also revealed the influence of future-present balance on playing time. Players who scored lower on the future-present balance variable (their present score was relatively high compared with their future score) reported higher values of playing time. In contrast to reference studies on TP, drug abuse, and gambling, the present fatalistic TP was demonstrated to be a stronger predictor of extensive playing than the present hedonistic TP, which opens the question of motivation for playing. The advantage of our study compared with other personality-based studies lies in the fact that TP is a stable but malleable personality trait with a direct link to playing behavior. Therefore, TP is a promising conceptual resource for excessive-playing therapy. PMID:22032796
Computer Program Recognizes Patterns in Time-Series Data
NASA Technical Reports Server (NTRS)
Hand, Charles
2003-01-01
A computer program recognizes selected patterns in time-series data like digitized samples of seismic or electrophysiological signals. The program implements an artificial neural network (ANN) and a set of N clocks for the purpose of determining whether N or more instances of a certain waveform, W, occur within a given time interval, T. The ANN must be trained to recognize W in the incoming stream of data. The first time the ANN recognizes W, it sets clock 1 to count down from T to zero; the second time it recognizes W, it sets clock 2 to count down from T to zero, and so forth through the Nth instance. On the N + 1st instance, the cycle is repeated, starting with clock 1. If any clock has not reached zero when it is reset, then N instances of W have been detected within time T, and the program so indicates. The program can readily be encoded in a field-programmable gate array or an application-specific integrated circuit that could be used, for example, to detect electroencephalographic or electrocardiographic waveforms indicative of epileptic seizures or heart attacks, respectively.
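The clock scheme described above can be sketched directly in software. In this hypothetical Python version the trained ANN is replaced by a boolean per-sample detection stream (an assumption, since the program's actual network is not specified here); note that the clock being reset fires exactly when the detection N events earlier fell inside the window of T samples.

```python
class WaveformCounter:
    """N countdown clocks cycled in order: detection k reuses the clock set at
    detection k - N. If that clock is still running when it is reset, the
    earlier detection occurred less than T samples ago."""
    def __init__(self, n, t):
        self.n, self.t = n, t
        self.clocks = [0] * n      # remaining samples on each clock
        self.next_clock = 0        # index of the clock to set next

    def tick(self, detected):
        """Advance one sample; return True when the window condition is met."""
        self.clocks = [max(c - 1, 0) for c in self.clocks]
        fired = False
        if detected:
            if self.clocks[self.next_clock] > 0:
                fired = True       # clock reset before it reached zero
            self.clocks[self.next_clock] = self.t
            self.next_clock = (self.next_clock + 1) % self.n
        return fired

# three clocks, window of 10 samples; detections at samples 0, 3, 5, and 6
wc = WaveformCounter(3, 10)
stream = [1, 0, 0, 1, 0, 1, 1] + [0] * 20
fires = [wc.tick(bool(d)) for d in stream]
```

As in the article, the same state machine maps naturally onto an FPGA or ASIC, since it needs only N counters and an index register.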
Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J. Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh
2015-01-01
We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into its usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs. PMID:26539566
Adaptive Sampling of Time Series During Remote Exploration
NASA Technical Reports Server (NTRS)
Thompson, David R.
2012-01-01
This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
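A minimal sketch of the information-gain idea: for a Gaussian process, predictive variance is a monotone proxy for expected information gain, so a self-throttling agent can wait as long as possible while the predicted variance at future times stays below a tolerance. The stationary RBF kernel, tolerance, and candidate grid below are illustrative assumptions, not the article's actual model.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Stationary squared-exponential covariance (unit signal variance)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def next_sample_time(t_obs, t_cand, tol, ell=1.0, noise=1e-3):
    """Self-throttling rule: compute GP predictive variance at candidate
    future times and wait as long as the variance stays below tol."""
    K = rbf(t_obs, t_obs, ell) + noise * np.eye(t_obs.size)
    Ks = rbf(t_obs, t_cand, ell)
    alpha = np.linalg.solve(K, Ks)
    var = 1.0 - np.sum(Ks * alpha, axis=0)   # prior variance is 1
    ok = t_cand[var <= tol]
    return ok.max() if ok.size else t_cand.min()

t_obs = np.array([0.0, 1.0, 2.0])            # all datapoints lie in the past
cand = np.arange(2.05, 6.0, 0.05)            # possible next sample times
t_next = next_sample_time(t_obs, cand, tol=0.1)
```

Lowering the tolerance makes the sensor sample sooner, which is the throttling knob: a tight tolerance during anomalies, a loose one during quiet periods.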
Nonlinear time series analysis of epileptic human electroencephalogram (EEG)
NASA Astrophysics Data System (ADS)
Li, Dingzhou
The problem of seizure anticipation in patients with epilepsy has attracted significant attention in the past few years. In this paper we discuss two approaches, using methods of nonlinear time series analysis applied to scalp electrode recordings, which are able to distinguish between epochs temporally distant from, and just prior to, the onset of a seizure in patients with temporal lobe epilepsy. First we describe a method involving a comparison of recordings taken from electrodes adjacent to and remote from the site of the seizure focus. In particular, we define a nonlinear quantity which we call marginal predictability. This quantity is computed using data from remote and from adjacent electrodes. We find that the difference between the marginal predictabilities computed for the remote and adjacent electrodes decreases several tens of minutes prior to seizure onset, compared to its value interictally. We also show that these differences in marginal predictability are independent of the behavioral state of the patient. Next we examine the phase coherence between different electrodes, both in the long range and the short range. When time is distant from seizure onset ("interictally"), epileptic patients have lower long-range phase coherence in the delta (1-4 Hz) and beta (18-30 Hz) frequency bands compared to nonepileptic subjects. When seizures approach ("preictally"), we observe an increase in phase coherence in the beta band. However, interictally there is no difference in short-range phase coherence between this cohort of patients and non-epileptic subjects. Preictally, short-range phase coherence also increases in the alpha (10-13 Hz) and the beta band. Next we apply the quantity marginal predictability to the phase difference time series. Such marginal predictabilities are lower in the patients than in the non-epileptic subjects. However, when seizure approaches, the former moves asymptotically towards the latter.
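Phase coherence between two electrode signals is commonly quantified by the phase-locking value computed from analytic-signal phases. The plain-NumPy sketch below is an assumed, generic formulation rather than the authors' exact one, and it omits the band-pass filtering (e.g., to the beta band) that would precede it in practice.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal built in the frequency domain (a discrete Hilbert
    transform): zero the negative frequencies, double the positive ones."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

def phase_locking_value(x, y):
    """Magnitude of the mean phase-difference phasor:
    1 = perfectly locked phases, ~0 = no phase coherence."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))
```

Two oscillations at the same frequency with a fixed lag score near 1; a signal against unrelated noise scores near the 1/sqrt(N) floor.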
United States Forest Disturbance Trends Observed Using Landsat Time Series
NASA Technical Reports Server (NTRS)
Masek, Jeffrey G.; Goward, Samuel N.; Kennedy, Robert E.; Cohen, Warren B.; Moisen, Gretchen G.; Schleeweis, Karen; Huang, Chengquan
2013-01-01
Disturbance events strongly affect the composition, structure, and function of forest ecosystems; however, existing U.S. land management inventories were not designed to monitor disturbance. To begin addressing this gap, the North American Forest Dynamics (NAFD) project has examined a geographic sample of 50 Landsat satellite image time series to assess trends in forest disturbance across the conterminous United States for 1985-2005. The geographic sample design used a probability-based scheme to encompass major forest types and maximize geographic dispersion. For each sample location disturbance was identified in the Landsat series using the Vegetation Change Tracker (VCT) algorithm. The NAFD analysis indicates that, on average, 2.77 Mha/yr of forests were disturbed annually, representing 1.09%/yr of US forestland. These satellite-based national disturbance rate estimates tend to be lower than those derived from land management inventories, reflecting both methodological and definitional differences. In particular, the VCT approach used with a biennial time step has limited sensitivity to low-intensity disturbances. Unlike prior satellite studies, our biennial forest disturbance rates vary by nearly a factor of two between high and low years. High western US disturbance rates were associated with active fire years and insect activity, while variability in the east is more strongly related to harvest rates in managed forests. We note that generating a geographic sample based on representing forest type and variability may be problematic, since the spatial pattern of disturbance does not necessarily correlate with forest type. We also find that the prevalence of diffuse, non-stand-clearing disturbance in US forests makes the application of a biennial geographic sample problematic. Future satellite-based studies of disturbance at regional and national scales should focus on wall-to-wall analyses with an annual time step for improved accuracy.
Established time series measure occurrence and frequency of episodic events.
NASA Astrophysics Data System (ADS)
Pebody, Corinne; Lampitt, Richard
2015-04-01
Episodic flux events occur in open oceans. Time series making measurements over significant time scales are one of the few methods that can capture these events and compare their impact with 'normal' flux. Seemingly rare events may be significant on local scales, but without the ability to measure the extent of flux on spatial and temporal scales, combined with the frequency of occurrence, it is difficult to constrain their impact. The Porcupine Abyssal Plain Sustained Observatory (PAP-SO) in the Northeast Atlantic (49 °N 16 °W, 5000 m water depth) has measured particle flux since 1989 and zooplankton swimmers since 2000. Sediment traps at 3000 m and 100 metres above bottom collect material year round, and we have identified close links between zooplankton and particle flux. Some of these larger animals, for example Diacria trispinosa, make a significant contribution to carbon flux through episodic flux events. D. trispinosa is a euthecosome mollusc which occurs in the Northeast Atlantic, though the PAP-SO is towards the northern limit of its distribution. Pteropods consist of an aragonite shell containing the soft body parts, except for the muscular foot, which extends beyond the mouth of the living animal. Pteropods, both live-on-entry animals and empty shells, are found year round in the 3000 m trap. Generally the abundance varies with particle flux, but within that general pattern there are episodic events where significant numbers of these animals, containing both organic and inorganic carbon, are captured at depth and therefore could be defined as contributing to export flux. Whether the pulse of animals is a result of the life cycle of D. trispinosa or of the physics of the water column is unclear, but the complexity of the PAP-SO enables us not only to collect these animals but to examine them in parallel with the biogeochemical and physical elements measured by the
Identifying Signatures of Selection in Genetic Time Series
Feder, Alison F.; Kryazhimskiy, Sergey; Plotkin, Joshua B.
2014-01-01
Both genetic drift and natural selection cause the frequencies of alleles in a population to vary over time. Discriminating between these two evolutionary forces, based on a time series of samples from a population, remains an outstanding problem with increasing relevance to modern data sets. Even in the idealized situation when the sampled locus is independent of all other loci, this problem is difficult to solve, especially when the size of the population from which the samples are drawn is unknown. A standard χ2-based likelihood-ratio test was previously proposed to address this problem. Here we show that the χ2-test of selection substantially underestimates the probability of type I error, leading to more false positives than indicated by its P-value, especially at stringent P-values. We introduce two methods to correct this bias. The empirical likelihood-ratio test (ELRT) rejects neutrality when the likelihood-ratio statistic falls in the tail of the empirical distribution obtained under the most likely neutral population size. The frequency increment test (FIT) rejects neutrality if the distribution of normalized allele-frequency increments exhibits a mean that deviates significantly from zero. We characterize the statistical power of these two tests for selection, and we apply them to three experimental data sets. We demonstrate that both ELRT and FIT have power to detect selection in practical parameter regimes, such as those encountered in microbial evolution experiments. Our analysis applies to a single diallelic locus, assumed independent of all other loci, which is most relevant to full-genome selection scans in sexual organisms, and also to evolution experiments in asexual organisms as long as clonal interference is weak. Different techniques will be required to detect selection in time series of cosegregating linked loci. PMID:24318534
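The frequency increment test described above is straightforward to sketch: rescale the allele-frequency increments by the drift variance and apply a one-sample t-test for zero mean. A minimal NumPy version computing the t statistic only (the P-value lookup is omitted), on invented trajectories:

```python
import numpy as np

def fit_statistic(freqs, times):
    """Frequency increment test: under neutral drift the rescaled increments
    Y_i = (v_i - v_{i-1}) / sqrt(2 v_{i-1} (1 - v_{i-1}) (t_i - t_{i-1}))
    have mean ~ 0; return the one-sample t statistic for that mean."""
    v = np.asarray(freqs, dtype=float)
    t = np.asarray(times, dtype=float)
    y = np.diff(v) / np.sqrt(2.0 * v[:-1] * (1.0 - v[:-1]) * np.diff(t))
    return y.mean() / (y.std(ddof=1) / np.sqrt(y.size))
```

A steadily sweeping allele yields a large positive statistic, while a trajectory that wanders symmetrically around a frequency stays near zero, matching the test's intent.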
NASA Astrophysics Data System (ADS)
Visser, H.; Molenaar, J.
1995-05-01
The detection of trends in climatological data has become central to the discussion on climate change due to the enhanced greenhouse effect. To prove detection, a method is needed (i) to make inferences on significant rises or declines in trends, (ii) to take into account natural variability in climate series, and (iii) to compare output from GCMs with the trends in observed climate data. To meet these requirements, flexible mathematical tools are needed. A structural time series model is proposed with which a stochastic trend, a deterministic trend, and regression coefficients can be estimated simultaneously. The stochastic trend component is described using the class of ARIMA models. The regression component is assumed to be linear; however, the regression coefficients corresponding with the explanatory variables may be time dependent in order to validate this assumption. The mathematical technique used to estimate this trend-regression model is the Kalman filter. The main features of the filter are discussed. Examples of trend estimation are given using annual mean temperatures at a single station in the Netherlands (1706-1990) and annual mean temperatures at Northern Hemisphere land stations (1851-1990). The inclusion of explanatory variables is shown by regressing the latter temperature series on four variables: Southern Oscillation index (SOI), volcanic dust index (VDI), sunspot numbers (SSN), and a simulated temperature signal induced by increasing greenhouse gases (GHG). In all analyses, the influence of SSN on global temperatures is found to be negligible. The correlations between temperatures and SOI and VDI appear to be negative. For SOI, this correlation is significant, but for VDI it is not, probably because of a lack of volcanic eruptions during the sample period. The relation between temperatures and GHG is positive, which is in agreement with the hypothesis of a warming climate because of increasing levels of greenhouse gases. The prediction performance of
Assimilation of LAI time-series in crop production models
NASA Astrophysics Data System (ADS)
Kooistra, Lammert; Rijk, Bert; Nannes, Louis
2014-05-01
Agriculture is worldwide a large consumer of freshwater, nutrients and land. Spatial explicit agricultural management activities (e.g., fertilization, irrigation) could significantly improve efficiency in resource use. In previous studies and operational applications, remote sensing has shown to be a powerful method for spatio-temporal monitoring of actual crop status. As a next step, yield forecasting by assimilating remote sensing based plant variables in crop production models would improve agricultural decision support both at the farm and field level. In this study we investigated the potential of remote sensing based Leaf Area Index (LAI) time-series assimilated in the crop production model LINTUL to improve yield forecasting at field level. The effect of assimilation method and amount of assimilated observations was evaluated. The LINTUL-3 crop production model was calibrated and validated for a potato crop on two experimental fields in the south of the Netherlands. A range of data sources (e.g., in-situ soil moisture and weather sensors, destructive crop measurements) was used for calibration of the model for the experimental field in 2010. LAI from cropscan field radiometer measurements and actual LAI measured with the LAI-2000 instrument were used as input for the LAI time-series. The LAI time-series were assimilated in the LINTUL model and validated for a second experimental field on which potatoes were grown in 2011. Yield in 2011 was simulated with an R2 of 0.82 when compared with field measured yield. Furthermore, we analysed the potential of assimilation of LAI into the LINTUL-3 model through the 'updating' assimilation technique. The deviation between measured and simulated yield decreased from 9371 kg/ha to 8729 kg/ha when assimilating weekly LAI measurements in the LINTUL model over the season of 2011. LINTUL-3 furthermore shows the main growth reducing factors, which are useful for farm decision support. The combination of crop models and sensor
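The 'updating' assimilation technique mentioned above simply replaces the simulated state with the observation whenever one is available. The toy light-interception growth loop below illustrates the mechanics; it is a hypothetical stand-in for LINTUL-3, with invented parameter values (RUE, extinction coefficient, specific leaf area, partitioning fraction).

```python
import numpy as np

def run_crop_model(days, par, lai_obs=None, rue=3.0, k=0.6, sla=0.02, part=0.5):
    """Toy LAI-driven growth loop with optional 'updating' assimilation:
    when an observed LAI exists for a day, it replaces the simulated value
    before the growth step."""
    lai, biomass = 0.1, 0.0
    for d in range(days):
        if lai_obs is not None and d in lai_obs:
            lai = lai_obs[d]                   # the updating step
        fint = 1.0 - np.exp(-k * lai)          # fraction of light intercepted
        growth = rue * par[d] * fint           # biomass increment, g m-2 d-1
        biomass += growth
        lai += sla * part * growth             # leaf area from new leaf biomass
    return biomass, lai

par = [10.0] * 60                              # constant daily radiation, MJ m-2
open_loop, _ = run_crop_model(60, par)
assimilated, _ = run_crop_model(60, par, lai_obs={10: 2.0})
```

Because intercepted light drives growth, correcting an underestimated LAI mid-season propagates into a higher simulated yield, which is the mechanism by which weekly LAI updates reduced the yield deviation in the study.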
The Mount Wilson CaK Plage Index Time Series
NASA Astrophysics Data System (ADS)
Bertello, L.; Ulrich, R. K.; Boyden, J. E.; Javaraiah, J.
2008-05-01
The Mount Wilson solar photographic archive digitization project makes available to the scientific community in digital form a selection of the solar images in the archives of the Carnegie Observatories. This archive contains over 150,000 images of the Sun which were acquired over a time span in excess of 100 years. The images include broad-band images called White Light Directs, ionized CaK line spectroheliograms, and Hydrogen Balmer alpha spectroheliograms. This project will digitize essentially all of the CaK and broad-band direct images in the archive with 12 bits of significant precision and up to 3000 by 3000 spatial pixels. The analysis of this data set will permit a variety of retrospective analyses of the state of solar magnetism and provide a temporal baseline of about 100 years for many solar properties. We have already completed the digitization of the CaK series and we are currently working on the broad-band direct images. Solar images have been extracted and identified with original logbook parameters of observation time and scan format, and they are available from the project web site at www.astro.ucla.edu/~ulrich/MW_SPADP. We present preliminary results on a CaK plage index time series derived from the analysis of 70 years of CaK observations, from 1915 to 1985. One of the main problems we encountered during the calibration process of these images is the presence of a vignetting function. This function is linked to the relative position between the pupil and the grating. As a result of this effect, the intensity and its gradient are highly variable from one image to another. We currently remove this effect by using a running median filter to determine the background of the image and dividing the image by this background to obtain a flat image. A plage index value is then computed from the intensity distribution of this flat image. We show that the temporal variability of our CaK plage index agrees very well with the behavior of the international
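The background-division step described above can be sketched with a 2-D running median: estimate the smooth vignetting background, divide it out, and bright plage regions stand out against a flat background of ~1. The window size and the synthetic test image below are illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def flatten_image(img, win=15):
    """Divide an image by a 2-D running-median background estimate,
    removing smooth large-scale structure such as vignetting."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")       # replicate edges for the border
    windows = sliding_window_view(padded, (win, win))
    background = np.median(windows, axis=(2, 3)) # running median, same shape as img
    return img / background
```

Because the median ignores small bright features, a compact plage occupying a minority of any window survives the division at nearly its full contrast, while the vignetting gradient is removed.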
Aerosol Climate Time Series in ESA Aerosol_cci
NASA Astrophysics Data System (ADS)
Popp, Thomas; de Leeuw, Gerrit; Pinnock, Simon
2016-04-01
Within the ESA Climate Change Initiative (CCI), Aerosol_cci (2010 - 2017) conducts intensive work to improve algorithms for the retrieval of aerosol information from European sensors. Meanwhile, full-mission time series of 2 GCOS-required aerosol parameters are completely validated and released: Aerosol Optical Depth (AOD) from the dual-view ATSR-2 / AATSR radiometers (3 algorithms, 1995 - 2012), and stratospheric extinction profiles from the star-occultation GOMOS spectrometer (2002 - 2012). Additionally, a 35-year multi-sensor time series of the qualitative Absorbing Aerosol Index (AAI), together with sensitivity information and an AAI model simulator, is available. Complementary aerosol properties requested by GCOS are in a "round robin" phase, where various algorithms are inter-compared: fine-mode AOD, mineral dust AOD (from the thermal IASI spectrometer, but also from ATSR instruments and the POLDER sensor), absorption information, and aerosol layer height. As a quasi-reference for validation in a few selected regions with sparse ground-based observations, the multi-pixel GRASP algorithm for the POLDER instrument is used. Validation of the first dataset versions (vs. AERONET, MAN) and inter-comparison to other satellite datasets (MODIS, MISR, SeaWiFS) confirmed the high quality of the available datasets, comparable to other satellite retrievals, and revealed needs for algorithm improvement (for example for higher AOD values) which were taken into account for a reprocessing. The datasets contain pixel-level uncertainty estimates which were also validated and improved in the reprocessing. For the three ATSR algorithms the use of an ensemble method was tested. The paper will summarize and discuss the status of dataset reprocessing and validation. The focus will be on the ATSR, GOMOS and IASI datasets. Pixel-level uncertainty validation will be summarized and discussed, including unknown components and their potential usefulness and limitations. Opportunities for time series extension
Metric emerging to massive modes in quantum cosmological space-times
NASA Astrophysics Data System (ADS)
Dapor, Andrea; Lewandowski, Jerzy
2013-03-01
We consider a massive quantum test Klein-Gordon field probing a homogeneous isotropic quantum cosmological space-time in the background. In particular, we derive a semiclassical space-time which emerges to a mode of the field. The method consists of a comparison between quantum field theory on a quantum background and quantum field theory on a classical curved space-time, giving rise to an emergent metric tensor (its components being computed from the equation of propagation of the quantum Klein-Gordon field in the test field approximation). If the field is massless the emergent metric is of the Friedmann-Robertson-Walker form, but if a mass term is considered it turns out that the simplest emergent metric that displays the symmetries of the system is of the Bianchi I type, deformed in the direction of propagation of the particle. This anisotropy is of a quantum nature: it is proportional to ℏ and “dresses” the isotropic classical space-time obtained in the classical limit.
Optimizing Functional Network Representation of Multivariate Time Series
NASA Astrophysics Data System (ADS)
Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco Del; Menasalvas, Ernestina; Boccaletti, Stefano
2012-09-01
By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks.
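Functional network reconstruction from raw multivariate series typically starts by thresholding an absolute correlation matrix into an adjacency matrix. The paper's principled, data-mining-based threshold selection is not reproduced here, so the fixed threshold and synthetic channels below are illustrative assumptions.

```python
import numpy as np

def functional_network(data, threshold):
    """Binarize the absolute correlation matrix of multivariate series
    (rows = channels) into an adjacency matrix with a zero diagonal."""
    c = np.abs(np.corrcoef(data))
    np.fill_diagonal(c, 0.0)
    return (c >= threshold).astype(int)

def link_density(adj):
    """Fraction of possible links present in the (symmetric) network."""
    n = adj.shape[0]
    return adj.sum() / (n * (n - 1))
```

Sweeping the threshold and tracking indicators such as link density is the usual way of probing how sensitive a classification result is to the reconstruction parameters, which is the sensitivity the paper's criterion is designed to remove.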
Optimal estimation of recurrence structures from time series
NASA Astrophysics Data System (ADS)
beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel
2016-05-01
Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for that threshold, reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
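A recurrence plot and its recurrence rate, the quantities whose distance threshold the paper optimizes, can be sketched in a few lines. This sketch uses a scalar series rather than embedded state vectors and does not implement the paper's Markov-model criterion; the signal and thresholds are illustrative assumptions.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Recurrence plot: R[i, j] = 1 where states i and j are closer than eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def recurrence_rate(x, eps):
    """Fraction of recurrent pairs; increases monotonically with eps."""
    return recurrence_matrix(x, eps).mean()
```

The monotone dependence of the recurrence rate on eps is exactly why the threshold choice matters: too small and the plot is empty, too large and everything recurs, which is the ambiguity the proposed utility-function criterion resolves.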
Spitzer IRAC Photometry for Time Series in Crowded Fields
NASA Astrophysics Data System (ADS)
Calchi Novati, S.; Gould, A.; Yee, J. C.; Beichman, C.; Bryden, G.; Carey, S.; Fausnaugh, M.; Gaudi, B. S.; Henderson, C. B.; Pogge, R. W.; Shvartzvald, Y.; Wibking, B.; Zhu, W.; Spitzer Team; Udalski, A.; Poleski, R.; Pawlak, M.; Szymański, M. K.; Skowron, J.; Mróz, P.; Kozłowski, S.; Wyrzykowski, Ł.; Pietrukowicz, P.; Pietrzyński, G.; Soszyński, I.; Ulaczyk, K.; OGLE Group
2015-12-01
We develop a new photometry algorithm that is optimized for the Infrared Array Camera (IRAC) Spitzer time series in crowded fields and that is particularly adapted to faint or heavily blended targets. We apply this to the 170 targets from the 2015 Spitzer microlensing campaign and present the results of three variants of this algorithm in an online catalog. We present detailed accounts of the application of this algorithm to two difficult cases, one very faint and the other very crowded. Several of Spitzer's instrumental characteristics that drive the specific features of this algorithm are shared by Kepler and WFIRST, implying that these features may prove to be a useful starting point for algorithms designed for microlensing campaigns by these other missions.
Binary Time Series Modeling with Application to Adhesion Frequency Experiments
Hung, Ying; Zarnitsyna, Veronika; Zhang, Yan; Zhu, Cheng; Wu, C. F. Jeff
2011-01-01
Repeated adhesion frequency assay is the only published method for measuring the kinetic rates of cell adhesion. Cell adhesion plays an important role in many physiological and pathological processes. Traditional analysis of adhesion frequency experiments assumes that the adhesion test cycles are independent Bernoulli trials. This assumption can often be violated in practice. Motivated by the analysis of repeated adhesion tests, a binary time series model incorporating random effects is developed in this paper. A goodness-of-fit statistic is introduced to assess the adequacy of distribution assumptions on the dependent binary data with random effects. The asymptotic distribution of the goodness-of-fit statistic is derived and its finite-sample performance is examined via a simulation study. Application of the proposed methodology to real data from a T-cell experiment reveals some interesting information, including the dependency between repeated adhesion tests. PMID:22180690
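To see why the independence assumption can fail, a minimal simulation of serially dependent adhesion outcomes helps (a two-state Markov chain; the names and parameters are illustrative, not the paper's random-effects model):

```python
import random

def markov_binary(n, p_stay=0.8, seed=1):
    """Simulate dependent adhesion outcomes: each test repeats the
    previous outcome with probability p_stay, violating the classical
    independent-Bernoulli assumption."""
    rng = random.Random(seed)
    x = [rng.random() < 0.5]
    for _ in range(n - 1):
        x.append(x[-1] if rng.random() < p_stay else not x[-1])
    return [int(v) for v in x]

def lag1_agreement(x):
    """Fraction of consecutive pairs that agree; about 0.5 under
    independence, larger when outcomes carry over between tests."""
    return sum(x[t] == x[t + 1] for t in range(len(x) - 1)) / (len(x) - 1)
```

A lag-1 agreement well above 0.5 in real assay data is exactly the kind of dependency the paper's model is built to capture.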
Incorporating Satellite Time-Series Data into Modeling
NASA Technical Reports Server (NTRS)
Gregg, Watson
2008-01-01
In situ time series observations have provided a multi-decadal view of long-term changes in ocean biology. These observations are sufficiently reliable to enable discernment of even relatively small changes, and provide continuous information on a host of variables. Their key drawback is their limited domain. Satellite observations from ocean color sensors do not suffer the drawback of domain, and simultaneously view the global oceans. This attribute lends credence to their use in global and regional model validation and data assimilation. We focus on these applications using the NASA Ocean Biogeochemical Model. The enhancement of the satellite data using data assimilation is featured, and the limitations of long-term satellite data sets are also discussed.
Predicting chaotic time series with a partial model
NASA Astrophysics Data System (ADS)
Hamilton, Franz; Berry, Tyrus; Sauer, Timothy
2015-07-01
Methods for forecasting time series are a critical aspect of the understanding and control of complex networks. When the model of the network is unknown, nonparametric methods for prediction have been developed, based on concepts of attractor reconstruction pioneered by Takens and others. In this Rapid Communication we consider how to make use of a subset of the system equations, if they are known, to improve the predictive capability of forecasting methods. A counterintuitive implication of the results is that knowledge of the evolution equation of even one variable, if known, can improve forecasting of all variables. The method is illustrated on data from the Lorenz attractor and from a small network with chaotic dynamics.
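The attractor-reconstruction baseline the authors build on can be sketched as delay embedding plus nearest-neighbour ("analogue") forecasting; this is a generic illustration of the nonparametric approach, not the paper's partial-model method, and all names are made up:

```python
def delay_embed(x, dim, tau):
    """Takens-style delay embedding of a scalar series."""
    return [tuple(x[i + k * tau] for k in range(dim))
            for i in range(len(x) - (dim - 1) * tau)]

def analogue_forecast(x, dim=3, tau=1):
    """Predict the next value by finding the nearest past embedded
    state and returning the value that followed it."""
    emb = delay_embed(x, dim, tau)
    target = emb[-1]
    best_i, best_d = None, float("inf")
    for i, e in enumerate(emb[:-1]):
        d = sum((a - b) ** 2 for a, b in zip(e, target))
        if d < best_d:
            best_i, best_d = i, d
    # index in the original series that follows the matched state
    follow = best_i + (dim - 1) * tau + 1
    return x[follow]
```

The paper's contribution is to replace part of this purely data-driven machinery with any known system equations, which improves forecasts even for the variables whose equations remain unknown.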
Directional ocean spectra by three-dimensional displacement time series
Su, T.Z.
1984-01-01
The directionality of ocean waves is considered the most problematic area of today's wave measurement technology. In 1982 the University of Hawaii Ocean Engineering Department began a research project, "Engineering Development of a Directional Wave Spectrum Measurement System for OTEC Applications", to address this problem. A new technology was developed in this research. This technology uses acoustic signals to determine the trajectory of a floating buoy which simulates the movement of a surface water particle. Transfer functions of the three-dimensional displacement time series are used to describe the wave kinematics. The described wave kinematics are directly applied to calculate hydrodynamic loading. Cospectra and quadrature spectra determine the directional distribution function. The resultant directional distribution function is used to predict the directional progression of ocean waves.
Time-series analysis of Campylobacter incidence in Switzerland.
Wei, W; Schüpbach, G; Held, L
2015-07-01
Campylobacteriosis has been the most common food-associated notifiable infectious disease in Switzerland since 1995. Contact with and ingestion of raw or undercooked broilers are considered the dominant risk factors for infection. In this study, we investigated the temporal relationship between the disease incidence in humans and the prevalence of Campylobacter in broilers in Switzerland from 2008 to 2012. We use a time-series approach to describe the pattern of the disease by incorporating seasonal effects and autocorrelation. The analysis shows that prevalence of Campylobacter in broilers, with a 2-week lag, has a significant impact on disease incidence in humans. Therefore Campylobacter cases in humans can be partly explained by contagion through broiler meat. We also found a strong autoregressive effect in human illness, and a significant increase of illness during Christmas and New Year's holidays. In a final analysis, we corrected for the sampling error of prevalence in broilers and the results gave similar conclusions. PMID:25400006
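The core idea of regressing human incidence on lagged broiler prevalence can be sketched with a plain least-squares fit (the paper's model additionally includes seasonal and autoregressive terms; the function and variable names here are made up):

```python
def lagged_ols(y, x, lag):
    """Least-squares fit of y[t] = a + b * x[t - lag]; returns (a, b).
    Sketches regressing incidence on prevalence at a fixed lag
    (the paper uses a 2-week lag)."""
    pairs = [(x[t - lag], y[t]) for t in range(lag, len(y))]
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    sxy = sum((px - mx) * (py - my) for px, py in pairs)
    sxx = sum((px - mx) ** 2 for px, py in pairs)
    b = sxy / sxx
    a = my - b * mx
    return a, b
```

Choosing the lag by comparing fit quality across candidate lags is the simplest version of the temporal-relationship question the study addresses.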
Polynomial harmonic GMDH learning networks for time series modeling.
Nikolaev, Nikolay Y; Iba, Hitoshi
2003-12-01
This paper presents a constructive approach to neural network modeling of polynomial harmonic functions. This is an approach to growing higher-order networks like those built by the multilayer GMDH algorithm using activation polynomials. Two contributions to enhance neural network learning are offered: (1) extending the expressive power of the network representation with another compositional scheme for combining polynomial terms and harmonics obtained analytically from the data; (2) improving the higher-order network performance with a backpropagation algorithm for further gradient descent learning of the weights, initialized by least squares fitting during the growing phase. Empirical results show that the polynomial harmonic version phGMDH outperforms the previous GMDH, a Neurofuzzy GMDH and traditional MLP neural networks on time series modeling tasks. Applying backpropagation training afterwards helps to achieve superior polynomial network performance. PMID:14622880
Forecasting electricity usage using univariate time series models
NASA Astrophysics Data System (ADS)
Hock-Eam, Lim; Chee-Yin, Yip
2014-12-01
Electricity is one of the important energy sources. A sufficient supply of electricity is vital to support a country's development and growth. Due to the changing socio-economic characteristics, increasing competition, and deregulation of the electricity supply industry, electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods. This will provide further insights into the weaknesses and strengths of each method. In the literature, there is mixed evidence on the best forecasting method for electricity demand. This paper aims to compare the predictive performance of univariate time series models for forecasting electricity demand using monthly data on maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance. On the other hand, the Holt-Winters exponential smoothing method is a good forecasting method for in-sample predictive performance.
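Holt's linear-trend smoothing, the non-seasonal core of the Holt-Winters method compared above, can be sketched in a few lines (the smoothing parameters here are illustrative defaults, not fitted values from the study):

```python
def holt_linear(y, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear-trend exponential smoothing: maintain a smoothed
    level and trend, then extrapolate `horizon` steps ahead."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (prev_level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend
```

The full Holt-Winters method adds a third smoothed component for the seasonal pattern, which matters for monthly load data; Box-Jenkins (ARIMA) instead models the autocorrelation structure directly.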
Optimizing Functional Network Representation of Multivariate Time Series
Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco del; Menasalvas, Ernestina; Boccaletti, Stefano
2012-01-01
By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks. PMID:22953051
Practical Measures of Integrated Information for Time-Series Data
Barrett, Adam B.; Seth, Anil K.
2011-01-01
A recent measure of ‘integrated information’, ΦDM, quantifies the extent to which a system generates more information than the sum of its parts as it transitions between states, possibly reflecting levels of consciousness generated by neural systems. However, ΦDM is defined only for discrete Markov systems, which are unusual in biology; as a result, ΦDM can rarely be measured in practice. Here, we describe two new measures, ΦE and ΦAR, that overcome these limitations and are easy to apply to time-series data. We use simulations to demonstrate the in-practice applicability of our measures, and to explore their properties. Our results provide new opportunities for examining information integration in real and model systems and carry implications for relations between integrated information, consciousness, and other neurocognitive processes. However, our findings pose challenges for theories that ascribe physical meaning to the measured quantities. PMID:21283779
An Ontology for the Discovery of Time-series Data
NASA Astrophysics Data System (ADS)
Hooper, R. P.; Choi, Y.; Piasecki, M.; Zaslavsky, I.; Valentine, D. W.; Whitenack, T.
2010-12-01
An ontology was developed to enable a single-dimensional keyword search of time-series data collected at fixed points, such as stream gage records, water quality observations, or repeated biological measurements collected at fixed stations. The hierarchical levels were developed to allow navigation from general concepts to more specific ones, terminating in a leaf concept, which is the specific property measured. For example, the concept “nutrient” has child concepts of “nitrogen”, “phosphorus”, and “carbon”; each of these children concepts are then broken into the actual constituent measured (e.g., “total kjeldahl nitrogen” or “nitrate + nitrite”). In this way, a non-expert user can find all nutrients containing nitrogen without knowing all the species measured, but an expert user can go immediately to the compound of interest. In addition, a property, such as dissolved silica, can appear as a leaf concept under nutrients or weathering products. This flexibility allows users from various disciplines to find properties of interest. The ontology can be viewed at http://water.sdsc.edu/hiscentral/startree.aspx. Properties measured by various data publishers (e.g., universities and government agencies) are tagged with leaf concepts from this ontology. A discovery client, HydroDesktop, creates a search request by defining the spatial and temporal extent of interest and a keyword taken from the discovery ontology. Metadata returned from the catalog describes the time series which meet the specified search criteria. This ontology is considered to be an initial description of physical, chemical and biological properties measured in water and suspended sediment. Future plans call for creating a moderated forum for the scientific community to add to and to modify this ontology. Further information for the Hydrologic Information Systems project, of which this is a part, is available at http://his.cuahsi.org.
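The leaf-concept search the portal performs can be illustrated with a tiny hard-coded fragment of such a hierarchy (this fragment is hypothetical and far smaller than the real ontology; only the example concepts named above are taken from the text):

```python
# Hypothetical fragment of the discovery ontology described above.
ONTOLOGY = {
    "nutrient": {
        "nitrogen": {"total kjeldahl nitrogen": {}, "nitrate + nitrite": {}},
        "phosphorus": {"total phosphorus": {}},
    },
}

def leaves(tree):
    """All leaf concepts (actual measured properties) under a node,
    so a search for 'nutrient' finds every tagged constituent."""
    out = []
    for name, child in tree.items():
        if child:
            out.extend(leaves(child))
        else:
            out.append(name)
    return out
```

Tagging each published time series with a leaf concept and expanding any searched concept to its leaves is what lets a non-expert query at the "nutrient" level while an expert queries the exact constituent.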
Efficient Bayesian inference for natural time series using ARFIMA processes
NASA Astrophysics Data System (ADS)
Graves, Timothy; Gramacy, Robert; Franzke, Christian; Watkins, Nicholas
2016-04-01
Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. We present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators [1]. In addition we show how the method can be used to perform joint inference of the stability exponent and the memory parameter when ARFIMA is extended to allow for alpha-stable innovations. Such models can be used to study systems where heavy tails and long range memory coexist. [1] Graves et al, Nonlin. Processes Geophys., 22, 679-700, 2015; doi:10.5194/npg-22-679-2015.
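The fractional-differencing filter (1-B)^d that gives ARFIMA its long memory has a simple recursive series expansion; a minimal sketch (the function name is mine):

```python
def frac_diff_weights(d, n):
    """First n coefficients of (1 - B)^d expanded as a power series:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k. For non-integer d the
    weights decay hyperbolically, which is the long-memory signature."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w
```

For integer d the weights truncate (d = 1 recovers ordinary differencing), while for 0 < d < 0.5 the slowly decaying tail is what distinguishes long memory from short-memory ARMA effects.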
Controlled, distributed data management of an Antarctic time series
NASA Astrophysics Data System (ADS)
Leadbetter, Adam; Connor, David; Cunningham, Nathan; Reynolds, Sarah
2010-05-01
The Rothera Time Series (RaTS) presents over ten years of oceanographic data collected off the Antarctic Peninsula comprising conductivity, temperature, depth cast data; current meter data; and bottle sample data. The data set has been extensively analysed and is well represented in the scientific literature. However, it has never been available to browse as a coherent entity. Work has been undertaken by both the data collecting organisation (the British Antarctic Survey, BAS) and the associated national data centre (the British Oceanographic Data Centre, BODC) to describe the parameters comprising the dataset in a consistent manner. To this end, each data point in the RaTS dataset has now been ascribed a parameter usage term, selected from the appropriate controlled vocabulary of the Natural Environment Research Council's Data Grid (NDG). By marking up the dataset in this way the semantic richness of the NDG vocabularies is fully accessible, and the dataset can be then explored using the Global Change Master Directory keyword set, the International Standards Organisation topic categories, SeaDataNet disciplines and agreed parameter groups, and the NDG parameter discovery vocabulary. We present a single data discovery and exploration tool, a web portal which allows the user to drill down through the dataset using their chosen keyword set. The spatial coverage of the chosen data is displayed through a Google Earth web plugin. Finally, as the time series data are held at BODC and the discrete sample data held at BAS (which are separate physical locations), a mechanism has been established to provide metadata from one site to another. This takes the form of an Open Geospatial Consortium Web Map Service server at BODC feeding information into the portal hosted at BAS.
Statistical methods of parameter estimation for deterministically chaotic time series.
Pisarenko, V F; Sornette, D
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for parameter estimation) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing an observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x(1) considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to but simpler than the "multiple shooting" method previously proposed, and has smaller bias. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the unique method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically). PMID:15089376
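A crude one-step least-squares estimator of the logistic-map parameter illustrates the estimation problem (this is not the paper's segmented-ML method, which also treats x(1) as unknown and handles noise more carefully; all names are mine):

```python
import random

def logistic_series(r, x1, n, noise=0.0, seed=0):
    """Iterate x -> r*x*(1-x) and record observations with additive
    Gaussian observational noise."""
    rng = random.Random(seed)
    xs, x = [], x1
    for _ in range(n):
        xs.append(x + rng.gauss(0.0, noise))
        x = r * x * (1.0 - x)
    return xs

def estimate_r(xs):
    """One-step least squares: regress x[t+1] on x[t]*(1 - x[t]).
    Unbiased only in the noise-free case, which is exactly why more
    careful ML schemes are needed for noisy chaotic data."""
    num = sum(xs[t] * (1 - xs[t]) * xs[t + 1] for t in range(len(xs) - 1))
    den = sum((xs[t] * (1 - xs[t])) ** 2 for t in range(len(xs) - 1))
    return num / den
```

With observational noise this naive regressor becomes biased because the noisy x[t] enters the regressor nonlinearly, the errors-in-variables effect that motivates the paper's approach.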
Blind source separation problem in GPS time series
NASA Astrophysics Data System (ADS)
Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.
2016-04-01
A critical point in the analysis of ground displacement time series, as those recorded by space geodetic techniques, is the development of data-driven methods that allow the different sources of deformation to be discerned and characterized in the space and time domains. Multivariate statistics includes several approaches that can be considered as a part of data-driven methods. A widely used technique is the principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while retaining most of the explained variance of the dataset. However, PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem, i.e., in recovering and separating the original sources that generate the observed data. This is mainly due to the fact that PCA minimizes the misfit calculated using an L2 norm (χ 2), looking for a new Euclidean space where the projected data are uncorrelated. The independent component analysis (ICA) is a popular technique adopted to approach the BSS problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we test the use of a modified variational Bayesian ICA (vbICA) method to recover the multiple sources of ground deformation even in the presence of missing data. The vbICA method models the probability density function (pdf) of each source signal using a mix of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA, and giving a more reliable estimate of them. Here we present its application to synthetic global positioning system (GPS) position time series, generated by simulating deformation near an active fault, including inter-seismic, co-seismic, and post-seismic signals, plus seasonal signals and noise, and an additional time-dependent volcanic source. We evaluate the ability of the PCA and ICA decomposition
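PCA, the baseline the paper contrasts with vbICA, reduces to an eigen-problem on the sample covariance; a minimal power-iteration sketch for the leading component (pure Python, function name mine):

```python
def first_pc(data, iters=200):
    """Leading principal component of row-wise observations, found by
    power iteration on the sample covariance matrix."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    # center each column
    X = [[row[j] - means[j] for j in range(p)] for row in data]
    # sample covariance matrix
    C = [[sum(X[t][i] * X[t][j] for t in range(n)) / (n - 1)
          for j in range(p)] for i in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Because PCA only enforces uncorrelatedness of the projections, mixed deformation sources generally stay mixed in the components, which is the motivation for moving to ICA-type decompositions.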
Time series inversion of spectra from ground-based radiometers
NASA Astrophysics Data System (ADS)
Christensen, O. M.; Eriksson, P.
2013-07-01
Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the Onsala Space Observatory (OSO) water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
Time series inversion of spectra from ground-based radiometers
NASA Astrophysics Data System (ADS)
Christensen, O. M.; Eriksson, P.
2013-02-01
Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the OSO water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
Streamflow properties from time series of surface velocity and stage
Plant, W.J.; Keller, W.C.; Hayes, K.; Spicer, K.
2005-01-01
Time series of surface velocity and stage have been collected simultaneously. Surface velocity was measured using an array of newly developed continuous-wave microwave sensors. Stage was obtained from the standard U.S. Geological Survey (USGS) measurements. The depth of the river was measured several times during our experiments using sounding weights. The data clearly showed that the point of zero flow was not the bottom at the measurement site, indicating that a downstream control exists. Fathometer measurements confirmed this finding. A model of the surface velocity expected at a site having a downstream control was developed. The model showed that the standard form for the friction velocity does not apply to sites where a downstream control exists. This model fit our measured surface velocity versus stage plots very well with reasonable values of the parameters. Discharges computed using the surface velocities and measured depths matched the USGS rating curve for the site. Values of depth-weighted mean velocities derived from our data did not agree with those expected from Manning's equation due to the downstream control. These results suggest that if real-time surface velocities were available at a gauging station, unstable stream beds could be monitored. Journal of Hydraulic Engineering © ASCE.
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
New insights into time series analysis. I. Correlated observations
NASA Astrophysics Data System (ADS)
Ferreira Lopes, C. E.; Cross, N. J. G.
2016-02-01
Context. The first step when investigating time varying data is the detection of any reliable changes in star brightness. This step is crucial to decreasing the processing time by reducing the number of sources processed in later, slower steps. Variability indices and their combinations have been used to identify variability patterns and to select non-stochastic variations, but the separation of true variables is hindered because of wavelength-correlated systematics of instrumental and atmospheric origin or due to possible data reduction anomalies. Aims: The main aim is to review the current inventory of correlation variability indices and measure the efficiency for selecting non-stochastic variations in photometric data. Methods: We test new and standard data-mining methods for correlated data using public time-domain data from the WFCAM Science Archive (WSA). This archive contains multi-wavelength calibration data (WFCAMCAL) for 216,722 point sources, with at least ten unflagged epochs in any of five filters (YZJHK), against which the different indices were tested. We improve the panchromatic variability indices and introduce a new set of variability indices for preselecting variable star candidates. Using the WFCAMCAL Variable Star Catalogue (WVSC1) we delimit the efficiency of each variability index. Moreover, we test new insights into these indices to improve the efficiency of detection of time-series data dominated by correlated variations. Results: We propose five new variability indices that display high efficiency for the detection of variable stars. We determine the best way to select variable stars with these indices and the current tool inventory. In addition, we propose a universal analytical expression to select likely variables using the fraction of fluctuations on these indices (ffluc). The ffluc can be used as a universal way to analyse photometric data since it displays only a weak dependency on the instrument properties. The variability
Interglacial climate dynamics and advanced time series analysis
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred; Bermejo, Miguel; Köhler, Peter; Lohmann, Gerrit
2013-04-01
Studying the climate dynamics of past interglacials (IGs) helps to better assess the anthropogenically influenced dynamics of the current IG, the Holocene. We select the IG portions from the EPICA Dome C ice core archive, which covers the past 800 ka, to apply methods of statistical time series analysis (Mudelsee 2010). The analysed variables are deuterium/H (indicating temperature) (Jouzel et al. 2007), greenhouse gases (Siegenthaler et al. 2005, Loulergue et al. 2008, Lüthi et al. 2008) and a model-co-derived climate radiative forcing (Köhler et al. 2010). We additionally select high-resolution sea-surface-temperature records from the marine sedimentary archive. The first statistical method, persistence time estimation (Mudelsee 2002), lets us infer the 'climate memory' property of IGs. Second, linear regression informs about long-term climate trends during IGs. Third, ramp function regression (Mudelsee 2000) is adapted to look at abrupt climate changes during IGs. We compare the Holocene with previous IGs in terms of these mathematical approaches, interpret results in a climate context, and assess uncertainties and the requirements on data from old IGs for yielding results of 'acceptable' accuracy. This work receives financial support from the Deutsche Forschungsgemeinschaft (Project ClimSens within the DFG Research Priority Program INTERDYNAMIK) and the European Commission (Marie Curie Initial Training Network LINC, No. 289447, within the 7th Framework Programme). References Jouzel J, Masson-Delmotte V, Cattani O, Dreyfus G, Falourd S, Hoffmann G, Minster B, Nouet J, Barnola JM, Chappellaz J, Fischer H, Gallet JC, Johnsen S, Leuenberger M, Loulergue L, Luethi D, Oerter H, Parrenin F, Raisbeck G, Raynaud D, Schilt A, Schwander J, Selmo E, Souchez R, Spahni R, Stauffer B, Steffensen JP, Stenni B, Stocker TF, Tison JL, Werner M, Wolff EW (2007) Orbital and millennial Antarctic climate variability over the past 800,000 years. Science 317:793. Köhler P, Bintanja R
Massively Parallel and Scalable Implicit Time Integration Algorithms for Structural Dynamics
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1997-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because of the following additional facts: (a) explicit schemes are easier to parallelize than implicit ones, and (b) explicit schemes induce short range interprocessor communications that are relatively inexpensive, while the factorization methods used in most implicit schemes induce long range interprocessor communications that often ruin the sought-after speed-up. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet be offset by the speed of the currently available parallel hardware. Therefore, it is essential to develop efficient alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating the low-frequency dynamics of aerospace structures.
Time series photometry of faint cataclysmic variables with a CCD
NASA Astrophysics Data System (ADS)
Abbott, Timothy Mark Cameron
1992-08-01
I describe a new hardware and software environment for the practice of time-series stellar photometry with the CCD systems available at McDonald Observatory. This instrument runs suitable CCD's in frame transfer mode and permits windowing on the CCD image to maximize the duty cycle of the photometer. Light curves may be extracted and analyzed in real time at the telescope and image data are stored for later, more thorough analysis. I describe a star tracking algorithm, which is optimized for a timeseries of images of the same stellar field. I explore the extraction of stellar brightness measures from these images using circular software apertures and develop a complete description of the noise properties of this technique. I show that scintillation and pixelization noise have a significant effect on high quality observations. I demonstrate that optimal sampling and profile fitting techniques are unnecessarily complex or detrimental methods of obtaining stellar brightness measures under conditions commonly encountered in timeseries CCD photometry. I compare CCD's and photomultiplier tubes as detectors for timeseries photometry using light curves of a variety of stars obtained simultaneously with both detectors and under equivalent conditions. A CCD can produce useful data under conditions when a photomultiplier tube cannot, and a CCD will often produce more reliable results even under photometric conditions. I present studies of the cataclysmic variables (CV's) AL Com, CP Eri, V Per, and DO Leo made using the time series CCD photometer. AL Com is a very faint CV at high Galactic latitude and a bona fide Population II CV. Some of the properties of AL Com are similar to the dwarf nova WZ Sge and others are similar to the intermediate polar EX Hya, but overall AL Com is unlike any other well-studied cataclysmic variable. CP Eri is shown to be the fifth known interacting binary white dwarf. V Per was the first CV found to have an orbital period near the middle of the
Time-series growth in the female labor force.
Smith, J P; Ward, M P
1985-01-01
This paper investigates the reasons for the growth in the female labor force in the US during the 20th century. Female labor force participation rates increased by 50% from 1950 to 1970. Real wages have played a significant but hardly exclusive role both in the long term growth in female employment and in the more accelerated growth after 1950. At the beginning of this century, fewer than 1 woman in 5 was a member of the labor force; by 1981 more than 6 in 10 were. Increases in female participation were slightly larger among younger women during the 1970s; for the next 20 years the age shape tilted toward older women. For US women 25-34 years old, labor force participation rates have been rising by more than 2 percentage points per year. Closely intertwined with decisions regarding women's work are those involving marriage and family formation. 2 demographic factors that would play a part in subsequent developments are: nuclearization of the US family and urbanization. Time-series trends in education are observed because schooling affects female labor supply independently of any influence through wages; increased years of schooling across birth cohorts shows that an increase of 1.33 years of schooling increased labor participation by 6.9 percentage points during the pre-World War II era. The swing in marriage rates also affects timing, especially for younger women. Based on disaggregated time series data across the period 1950-1981, mean values at single years of age of labor supply, education, work experience, weekly wages, and fertility are determined. Profiles indicate that female labor supply varies considerably not only across cohorts but also over life cycles within birth cohorts. Results show that: 1) relative female wages defined over the work force were lower in 1980 than in 1950, 2) children, especially when young, reduce labor supply, 3) large negative elasticities are linked to female wages, and 4) with all fertility induced effects included, real wage
A multiscale approach to InSAR time series analysis
NASA Astrophysics Data System (ADS)
Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y. N.; Dicaprio, C.; Rickerby, A.
2008-12-01
We describe a new technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale analysis of InSAR Time Series), relies on a spatial wavelet decomposition to permit the inclusion of distance based spatial correlations in the observations while maintaining computational tractability. This approach also permits a consistent treatment of all data independent of the presence of localized holes in any given interferogram. In essence, MInTS allows one to consider all data at the same time (as opposed to one pixel at a time), thereby taking advantage of both spatial and temporal characteristics of the deformation field. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows for the temporal parametrization to include a set of general functions (e.g., splines) in order to account for unexpected processes. We allow for various forms of model regularization using a cross-validation approach to select penalty parameters. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate our results by comparing with ground-based observations.
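The temporal parametrization described above can be sketched with a least-squares fit of a small dictionary of time functions to a single displacement history. The function name and design-matrix columns below are illustrative, not the MInTS implementation.

```python
import numpy as np

def fit_time_functions(t, d, t_eq):
    """Solve d ~ G m for m = [offset, secular rate, co-seismic step, cos, sin]."""
    G = np.column_stack([
        np.ones_like(t),                 # static offset
        t,                               # secular rate (t in years)
        (t >= t_eq).astype(float),       # co-seismic step (Heaviside at t_eq)
        np.cos(2 * np.pi * t),           # annual seasonal terms
        np.sin(2 * np.pi * t),
    ])
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m

# Synthetic 6-year displacement history (mm): rate 2 mm/yr, 15 mm step at
# t = 3 yr, 4 mm seasonal amplitude, plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 6.0, 300)
d = (2.0 * t + 15.0 * (t >= 3.0)
     + 4.0 * np.sin(2 * np.pi * t)
     + rng.normal(0.0, 0.5, t.size))
m = fit_time_functions(t, d, t_eq=3.0)   # recovers rate and step closely
```

In the actual method, fits like this are performed in the wavelet domain with regularization selected by cross-validation; the plain least-squares version above only shows the parametrization idea.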
Nonparametric directionality measures for time series and point process data.
Halliday, David M
2015-06-01
The need to determine the directionality of interactions between neural signals is a key requirement for analysis of multichannel recordings. Approaches most commonly used are parametric, typically relying on autoregressive models. A number of concerns have been expressed regarding parametric approaches, thus there is a need to consider alternatives. We present an alternative nonparametric approach for construction of directionality measures for bivariate random processes. The method combines time and frequency domain representations of bivariate data to decompose the correlation by direction. Our framework generates two sets of complementary measures, a set of scalar measures, which decompose the total product moment correlation coefficient summatively into three terms by direction and a set of functions which decompose the coherence summatively at each frequency into three terms by direction: forward direction, reverse direction and instantaneous interaction. It can be undertaken as an addition to a standard bivariate spectral and coherence analysis, and applied to either time series or point-process (spike train) data or mixtures of the two (hybrid data). In this paper, we demonstrate application to spike train data using simulated cortical neurone networks and application to experimental data from isolated muscle spindle sensory endings subject to random efferent stimulation. PMID:25958923
Echoed time series predictions, neural networks and genetic algorithms
NASA Astrophysics Data System (ADS)
Conway, A.
This work aims to illustrate a potentially serious and previously unrecognised problem in using Neural Networks (NNs), and possibly other techniques, to predict Time Series (TS). It also demonstrates how a new training scheme using a genetic algorithm can alleviate this problem. Although it is already established that NNs can predict TS such as Sunspot Number (SSN) with reasonable success, the accuracy of these predictions is often judged solely by an RMS or related error. The use of this type of error overlooks the presence of what we have termed echoing, where the NN outputs its most recent input as its prediction. Therefore, a method of detecting echoed predictions is introduced, called time-shifting. Reasons for the presence of echo are discussed and then related to the choice of TS sampling. Finally, a new specially designed training scheme is described, which is a hybrid of a genetic algorithm search and back propagation. With this method we have successfully trained NNs to predict without any echo.
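The time-shifting check can be sketched as follows (an illustrative reconstruction, not the author's code): shift the prediction against the target and locate the lag of maximum correlation. An echoing predictor, which merely repeats its most recent input, peaks at a nonzero lag.

```python
import math

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def echo_lag(target, prediction, max_shift=5):
    """Shift (in steps) maximizing corr(target[t - shift], prediction[t])."""
    return max(range(max_shift + 1),
               key=lambda s: corr(target[max_shift - s:len(target) - s],
                                  prediction[max_shift:]))

# An echoing "predictor" that just repeats the last observed value of a
# smooth series scores a deceptively low RMS error...
series = [math.sin(0.1 * i) for i in range(200)]
target = series[1:]            # what we claim to predict
prediction = series[:-1]       # echo: yesterday's value
lag = echo_lag(target, prediction)   # ...but time-shifting exposes lag = 1
```

A genuine predictor aligns best at zero shift; `echo_lag(target, target)` returns 0.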
Automatic CCD Imaging Systems for Time-series CCD Photometry
NASA Astrophysics Data System (ADS)
Caton, D. B.; Pollock, J. T.; Davis, S. A.
2004-12-01
CCDs allow precision photometry to be done with small telescopes and at sites with less than ideal seeing conditions. The addition of an automatic observing mode makes it easy to do time-series CCD photometry of variable stars and AGN/QSOs. At Appalachian State University's Dark Sky Observatory (DSO), we have implemented automatic imaging systems for image acquisition, scripted filter changing, data storage and quick-look online photometry on two different telescopes, the 32-inch and the 18-inch. The camera at the 18-inch allows a simple system where the data acquisition PC controls a DFM Engineering filter wheel and Photometrics/Roper camera. The 32-inch system is the more complex, with three computers communicating in order to make good use of its camera's 30-second CCD-read time for filter change. Both telescopes use macros written in the PMIS software (GKR Computer Consulting). Both systems allow automatic data capture with only minimal tending by the observer. Indeed, one observer can easily run both telescopes simultaneously. The efficiency and reliability of these systems also reduces observer errors. The only unresolved problem is an occasional but rare camera-read error (the PC is apparently interrupted). We also sometimes experience a crash of the PMIS software, probably due to its 16-bit code now running in the Windows 2000 32-bit environment. We gratefully acknowledge the support of the National Science Foundation through grant numbers AST-0089248 and AST-9119750, the Dunham Fund for Astrophysical Research, and the ASU Research Council.
Time-series models for border inspection data.
Decrouez, Geoffrey; Robinson, Andrew
2013-12-01
We propose a new modeling approach for inspection data that provides a more useful interpretation of the patterns of detections of invasive pests, using cargo inspection as a motivating example. Methods that are currently in use generally classify shipments according to their likelihood of carrying biosecurity risk material, given available historical and contextual data. Ideally, decisions regarding which cargo containers to inspect should be made in real time, and the models used should be able to focus efforts when the risk is higher. In this study, we propose a dynamic approach that treats the data as a time series in order to detect periods of high risk. A regulatory organization will respond differently to evidence of systematic problems than evidence of random problems, so testing for serial correlation is of major interest. We compare three models that account for various degrees of serial dependence within the data. First is the independence model where the prediction of the arrival of a risky shipment is made solely on the basis of contextual information. We also consider a Markov chain that allows dependence between successive observations, and a hidden Markov model that allows further dependence on past data. The predictive performance of the models is then evaluated using ROC and leakage curves. We illustrate this methodology on two sets of real inspection data. PMID:23682814
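The contrast between the independence model and the Markov chain can be made concrete with a small sketch (names are ours, not the paper's): fit a two-state Markov chain to a 0/1 detection series and compare its transition probabilities. Serial correlation shows up as P(1|1) far exceeding P(1|0), which a single detection rate cannot represent.

```python
def fit_markov(seq):
    """Estimate P(1|0) and P(1|1) from a binary detection sequence."""
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for a, b in zip(seq, seq[1:]):
        counts[(a, b)] += 1
    p10 = counts[(0, 1)] / max(1, counts[(0, 0)] + counts[(0, 1)])
    p11 = counts[(1, 1)] / max(1, counts[(1, 0)] + counts[(1, 1)])
    return p10, p11

# A clustered series: detections arrive in runs, suggesting a systematic
# problem (a period of high risk) rather than random contamination.
seq = [0] * 40 + [1] * 10 + [0] * 40 + [1] * 10
p10, p11 = fit_markov(seq)   # p11 >> p10 flags serial dependence
```

Under the independence model both conditional probabilities would equal the overall detection rate (0.2 here), so the gap between them is the evidence of serial correlation that the abstract highlights.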
Coastal Atmosphere and Sea Time Series (CoASTS)
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Zibordi, Giuseppe; Berthon, Jean-Francoise; Doyle, John P.; Grossi, Stefania; vanderLinde, Dirk; Targa, Cristina; Alberotanza, Luigi; McClain, Charles R. (Technical Monitor)
2002-01-01
The Coastal Atmosphere and Sea Time Series (CoASTS) Project, aimed at supporting ocean color research and applications, has ensured from 1995 up to the time of publication of this document the collection of a comprehensive atmospheric and marine data set from an oceanographic tower located in the northern Adriatic Sea. The instruments and the measurement methodologies used to gather quantities relevant for bio-optical modeling and for the calibration and validation of ocean color sensors are described. Particular emphasis is placed on four items: (1) the evaluation of perturbation effects in radiometric data (i.e., tower-shading, instrument self-shading, and bottom effects); (2) the intercomparison of seawater absorption coefficients from in situ measurements and from laboratory spectrometric analysis on discrete samples; (3) the intercomparison of two filter techniques for in vivo measurement of particulate absorption coefficients; and (4) the analysis of repeatability and reproducibility of the most relevant laboratory measurements carried out on seawater samples (i.e., particulate and yellow substance absorption coefficients, and pigment and total suspended matter concentrations). Sample data are also presented and discussed to illustrate the typical features characterizing the CoASTS measurement site in view of supporting the suitability of the CoASTS data set for bio-optical modeling and ocean color calibration and validation.
Urban Area Monitoring using MODIS Time Series Data
NASA Astrophysics Data System (ADS)
Devadiga, S.; Sarkar, S.; Mauoka, E.
2015-12-01
Growing urban sprawl and its impact on global climate due to urban heat island effects has been an active area of research over recent years. This is especially significant in light of the rapid urbanization that is happening in some of the fast-developing nations across the globe. But so far the study of urban area growth has been largely restricted to local and regional scales, using high- to medium-resolution satellite observations taken at distinct time periods. In this presentation we propose a new approach to detect and monitor urban area expansion using a long time series of MODIS data. This work characterizes data points using a vector of several annual metrics computed from the MODIS 8-day and 16-day composite L3 data products, at 250 m resolution and over several years, and then uses a vector angle mapping classifier to detect and segment the urban area. The classifier is trained using a set of training points obtained from a reference vector point and polygon pre-filtered using the MODIS VI product. This work gains additional significance given that, despite unprecedented urban growth since 2000, the area covered by the urban class in the MODIS Global Land Cover (MCD12Q1, MCDLCHKM and MCDLC1KM) product hasn't changed since the launch of Terra and Aqua. The proposed approach was applied to delineate the urban area around several cities in Asia known to have maximum growth in the last 15 years. Results were verified using high-resolution Landsat data.
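A vector angle mapping classifier of the kind described can be sketched as follows: each pixel's vector of annual metrics is assigned to the class whose training vector subtends the smallest angle. The metric names and training values below are illustrative, not the authors' actual features.

```python
import math

def vector_angle(a, b):
    """Angle (radians) between two metric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify(pixel, training):
    """training: dict mapping class name -> reference metric vector."""
    return min(training, key=lambda c: vector_angle(pixel, training[c]))

# Hypothetical annual metrics, e.g. [mean NDVI, NDVI amplitude, scaled LST]:
training = {"urban": [0.15, 0.05, 0.9], "vegetation": [0.7, 0.3, 0.4]}
label = classify([0.18, 0.07, 0.85], training)   # -> "urban"
```

Because the angle ignores vector magnitude, the classifier is insensitive to overall brightness or scaling differences between years, which is one reason angle-based mappers are popular for multi-temporal metrics.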
Impact of Sensor Degradation on the MODIS NDVI Time Series
NASA Technical Reports Server (NTRS)
Wang, Dongdong; Morton, Douglas; Masek, Jeffrey; Wu, Aisheng; Nagol, Jyoteshwar; Xiong, Xiaoxiong; Levy, Robert; Vermote, Eric; Wolfe, Robert
2011-01-01
Time series of satellite data provide unparalleled information on the response of vegetation to climate variability. Detecting subtle changes in vegetation over time requires consistent satellite-based measurements. Here, we evaluated the impact of sensor degradation on trend detection using Collection 5 data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua platforms. For Terra MODIS, the impact of blue band (Band 3, 470nm) degradation on simulated surface reflectance was most pronounced at near-nadir view angles, leading to a 0.001-0.004/yr decline in Normalized Difference Vegetation Index (NDVI) under a range of simulated aerosol conditions and surface types. Observed trends in MODIS NDVI over North America were consistent with simulated results, with nearly a threefold difference in negative NDVI trends derived from Terra (17.4%) and Aqua (6.7%) MODIS sensors during 2002-2010. Planned adjustments to Terra MODIS calibration for Collection 6 data reprocessing will largely eliminate this negative bias in NDVI trends over vegetation.
Impact of Sensor Degradation on the MODIS NDVI Time Series
NASA Technical Reports Server (NTRS)
Wang, Dongdong; Morton, Douglas Christopher; Masek, Jeffrey; Wu, Aisheng; Nagol, Jyoteshwar; Xiong, Xiaoxiong; Levy, Robert; Vermote, Eric; Wolfe, Robert
2012-01-01
Time series of satellite data provide unparalleled information on the response of vegetation to climate variability. Detecting subtle changes in vegetation over time requires consistent satellite-based measurements. Here, the impact of sensor degradation on trend detection was evaluated using Collection 5 data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua platforms. For Terra MODIS, the impact of blue band (Band 3, 470 nm) degradation on simulated surface reflectance was most pronounced at near-nadir view angles, leading to a 0.001-0.004 yr-1 decline in Normalized Difference Vegetation Index (NDVI) under a range of simulated aerosol conditions and surface types. Observed trends in MODIS NDVI over North America were consistent with simulated results, with nearly a threefold difference in negative NDVI trends derived from Terra (17.4%) and Aqua (6.7%) MODIS sensors during 2002-2010. Planned adjustments to Terra MODIS calibration for Collection 6 data reprocessing will largely eliminate this negative bias in detection of NDVI trends.
Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereb...
Mackenzie River Delta morphological change based on Landsat time series
NASA Astrophysics Data System (ADS)
Vesakoski, Jenni-Mari; Alho, Petteri; Gustafsson, David; Arheimer, Berit; Isberg, Kristina
2015-04-01
Arctic rivers are sensitive and yet largely unexplored river systems on which climate change will have an impact. Research has not focused in detail on the fluvial geomorphology of Arctic rivers, mainly due to the remoteness and vastness of the watersheds, problems with data availability and difficult accessibility. Nowadays, wide collaborative spatial databases in hydrology as well as extensive remote sensing datasets over the Arctic are available, and they enable improved investigation of Arctic watersheds. Thereby, it is also important to develop and improve methods for detecting fluvio-morphological processes from the available data. Furthermore, it is essential to reconstruct and improve the understanding of past fluvial processes in order to better understand prevailing and future fluvial processes. In this study we sum up the fluvial geomorphological change in the Mackenzie River Delta during the last ~30 years. The Mackenzie River Delta (~13 000 km2) is situated in the Northwest Territories, Canada, where the Mackenzie River enters the Beaufort Sea (Arctic Ocean) near the city of Inuvik. The Mackenzie River Delta is a lake-rich, productive ecosystem and an ecologically sensitive environment. The research objective is achieved through two sub-objectives: 1) interpretation of deltaic river channel planform change by applying a Landsat time series; 2) definition of the variables that have impacted the most on the detected changes by applying statistics and long hydrological time series derived from the Arctic-HYPE model (HYdrological Predictions for the Environment) developed by the Swedish Meteorological and Hydrological Institute. According to our satellite interpretation, field observations and statistical analyses, notable spatio-temporal changes have occurred in the morphology of the river channel and delta during the past 30 years. For example, the channels have developed in braiding and sinuosity. In addition, various linkages between the studied
Network-based estimation of time-dependent noise in GPS position time series
NASA Astrophysics Data System (ADS)
Dmitrieva, Ksenia; Segall, Paul; DeMets, Charles
2015-06-01
Some estimates of GPS velocity uncertainties are very low, 0.1 mm/year with 10 years of data. Yet, residual velocities relative to rigid plate models in nominally stable plate interiors can be an order of magnitude larger. This discrepancy could be caused by underestimating low-frequency time-dependent noise in position time series, such as random walk. We show that traditional estimators, based on individual time series, are insensitive to low-amplitude random walk, yet such noise significantly increases GPS velocity uncertainties. Here, we develop a method for determining representative noise parameters in GPS position time series, by analyzing an entire network simultaneously, which we refer to as the network noise estimator (NNE). We analyze data from the aseismic central-eastern USA, assuming that residual motions relative to North America, corrected for glacial isostatic adjustment (GIA), represent noise. The position time series are decomposed into signal (plate rotation and GIA) and noise components. NNE simultaneously processes multiple stations with a Kalman filter and solves for average noise components for the network by maximum likelihood estimation. Synthetic tests show that NNE correctly estimates even low-level random walk, thus providing better estimates of velocity uncertainties than conventional, single station methods. To test NNE on actual data, we analyze a heterogeneous 15 station GPS network from the central-eastern USA, assuming the noise is a sum of random walk, flicker and white noise. For the horizontal time series, NNE finds higher average random walk than the standard individual station-based method, leading to velocity uncertainties a factor of 2 higher than traditional methods.
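A simple simulation, ours rather than the NNE method itself, shows why low-level random walk matters: fitting a linear trend to many synthetic daily position series reveals that the empirical scatter of the velocity estimate grows several-fold once even modest random walk is added to white noise.

```python
import numpy as np

def velocity_scatter(n_days, white_mm, rw_mm_per_sqrt_yr, trials=300, seed=2):
    """Empirical std (mm/yr) of fitted velocities over many noise realizations."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_days) / 365.25              # time in years
    rates = []
    for _ in range(trials):
        # Random walk: cumulative sum of daily steps scaled so the process
        # grows as rw_mm_per_sqrt_yr * sqrt(t in years).
        rw = np.cumsum(rng.normal(0.0, rw_mm_per_sqrt_yr / np.sqrt(365.25),
                                  n_days))
        pos = rng.normal(0.0, white_mm, n_days) + rw   # position (mm)
        rates.append(np.polyfit(t, pos, 1)[0])         # fitted slope (mm/yr)
    return float(np.std(rates))

# 10 years of daily positions, 3 mm white noise:
sig_white = velocity_scatter(3650, white_mm=3.0, rw_mm_per_sqrt_yr=0.0)
sig_rw = velocity_scatter(3650, white_mm=3.0, rw_mm_per_sqrt_yr=1.0)
# White noise alone yields only a few hundredths of mm/yr of scatter;
# 1 mm/sqrt(yr) of random walk inflates it many-fold.
```

Because single-station estimators can miss random walk at this amplitude, the formal white-noise-only uncertainty badly understates the true velocity error, which is the discrepancy the abstract describes.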
Academic Workload and Working Time: Retrospective Perceptions versus Time-Series Data
ERIC Educational Resources Information Center
Kyvik, Svein
2013-01-01
The purpose of this article is to examine the validity of perceptions by academic staff about their past and present workload and working hours. Retrospective assessments are compared with time-series data. The data are drawn from four mail surveys among academic staff in Norwegian universities undertaken in the period 1982-2008. The findings show…
Rivera, Diego; Lillo, Mario; Granda, Stalin
2014-12-01
The concept of time stability has been widely used in the design and assessment of soil moisture monitoring networks, as well as in hydrological studies, because it is a technique that allows the identification of particular locations having the property of representing mean values of soil moisture in the field. In this work, we assess the effect on time stability calculations as new information is added, and how time stability calculations are affected over shorter periods, subsampled from the original time series, containing different amounts of precipitation. In doing so, we defined two experiments to explore the time stability behavior. The first experiment sequentially adds new data to the previous time series to investigate the long-term influence of new data on the results. The second experiment applies a windowing approach, taking sequential subsamples from the entire time series to investigate the influence of short-term changes associated with the precipitation in each window. Our results from an operating network (seven monitoring points equipped with four sensors each in a 2-ha blueberry field) show that as information is added to the time series, there are changes in the location of the most stable point (MSP), and that, taking the moving 21-day windows, it is clear that most of the variability in soil water content changes is associated with both the amount and intensity of rainfall. The changes of the MSP over each window depend on the amount of water entering the soil and the previous state of the soil water content. For our case study, the upper strata are proxies for hourly to daily changes in soil water content, while the deeper strata are proxies for medium-range stored water. Thus, different locations and depths are representative of processes at different time scales. This situation must be taken into account when water management depends on soil water content values from fixed locations. PMID:25249045
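The time-stability ranking used in such studies can be sketched as follows (a generic illustration, not the authors' code): for each location, the mean relative difference (MRD) from the field mean is computed over time, and the most stable point (MSP) is the location whose MRD is nearest zero.

```python
def most_stable_point(series_by_location):
    """series_by_location: dict name -> soil-moisture series over time.
    Returns the most stable point (MSP) and the mean relative differences."""
    names = list(series_by_location)
    n_t = len(next(iter(series_by_location.values())))
    field_mean = [sum(series_by_location[name][t] for name in names) / len(names)
                  for t in range(n_t)]
    mrd = {name: sum((series_by_location[name][t] - field_mean[t]) / field_mean[t]
                     for t in range(n_t)) / n_t
           for name in names}
    return min(mrd, key=lambda name: abs(mrd[name])), mrd

# Hypothetical volumetric soil-moisture series at three locations:
obs = {"A": [0.30, 0.20, 0.25],   # tracks the field mean closely
       "B": [0.40, 0.30, 0.35],   # persistently wet
       "C": [0.20, 0.11, 0.16]}   # persistently dry
msp, mrd = most_stable_point(obs)  # -> "A"
```

Re-running this ranking over moving windows, as the study does, shows how the MSP can migrate as rainfall changes the soil water content distribution.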
Statistical Analysis of Sensor Network Time Series at Multiple Time Scales
NASA Astrophysics Data System (ADS)
Granat, R. A.; Donnellan, A.
2013-12-01
Modern sensor networks often collect data at multiple time scales in order to observe physical phenomena that occur at different scales. Whether collected by heterogeneous or homogeneous sensor networks, measurements at different time scales are usually subject to different dynamics, noise characteristics, and error sources. We explore the impact of these effects on the results of statistical time series analysis methods applied to multi-scale time series data. As a case study, we analyze results from GPS time series position data collected in Japan and the Western United States, which produce raw observations at 1 Hz and orbit-corrected observations at time resolutions of 5 minutes, 30 minutes, and 24 hours. We utilize the GPS analysis package (GAP) software to perform three types of statistical analysis on these observations: hidden Markov modeling, probabilistic principal components analysis, and covariance distance analysis. We compare the results of these methods at the different time scales and discuss the impact on science understanding of earthquake fault systems generally and recent large seismic events specifically, including the Tohoku-Oki earthquake in Japan and the El Mayor-Cucapah earthquake in Mexico.
High-Speed Time-Series CCD Photometry with Agile
NASA Astrophysics Data System (ADS)
Mukadam, Anjum S.; Owen, R.; Mannery, E.; MacDonald, N.; Williams, B.; Stauffer, F.; Miller, C.
2011-12-01
We have assembled a high-speed time-series CCD photometer named Agile for the 3.5 m telescope at Apache Point Observatory, based on the design of a photometer called Argos at McDonald Observatory. Instead of a mechanical shutter, we use the frame-transfer operation of the CCD to end an exposure and initiate the subsequent new exposure. The frame-transfer operation is triggered by the negative edge of a GPS pulse; the instrument timing is controlled directly by hardware, without any software intervention or delays. This is the central pillar in the design of Argos that we have also used in Agile; this feature makes the accuracy of instrument timing better than a millisecond. Agile is based on a Princeton Instruments Acton VersArray camera with a frame-transfer CCD, which has 1K × 1K active pixels, each of size 13 μm × 13 μm. Using a focal reducer at the Nasmyth focus of the 3.5 m telescope at Apache Point Observatory, we obtain a field of view of 2.2 × 2.2 arcmin2 with an unbinned plate scale of 0.13'' pixel-1. The CCD is back-illuminated and thinned for improved blue sensitivity and provides a quantum efficiency >=80% in the wavelength range of 4500-7500 Å. The unbinned full-frame readout time can be as fast as 1.1 s; this is achieved using a low-noise amplifier operating at 1 MHz with an average read noise of the order of 6.6 e rms. At the slow read rate of 100 kHz, to be used for exposure times longer than a few seconds, we determine an average read noise of the order of 3.7 e rms. Agile is optimized to observe variability at short timescales from one-third of a second to several hundred seconds. The variable astronomical sources routinely observed with Agile include pulsating white dwarfs, cataclysmic variables, flare stars, planetary transits, and planetary satellite occultations.
A Multiscale Approach to InSAR Time Series Analysis
NASA Astrophysics Data System (ADS)
Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y.; Dicaprio, C. J.
2009-12-01
We describe progress in the development of MInTS (Multiscale analysis of InSAR Time Series), an approach for constructing self-consistent time-dependent deformation observations from repeated satellite-based InSAR images of a given region. MInTS relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. In essence, MInTS allows one to consider all data at the same time, as opposed to one pixel at a time, thereby taking advantage of both spatial and temporal characteristics of the deformation field. This approach also permits a consistent treatment of all data independent of the presence of localized holes due to unwrapping issues in any given interferogram. Specifically, the presence of holes is accounted for through a weighting scheme that accounts for the extent of actual data versus the area of holes associated with any given wavelet. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows for the temporal parametrization to include a set of general functions in order to account for unexpected processes. We allow for various forms of model regularization using a cross-validation approach to select penalty parameters. We also experiment with the use of sparsity-inducing regularization as a way to select from a large dictionary of time functions. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate
Time series analysis as a tool for karst water management
NASA Astrophysics Data System (ADS)
Fournier, Matthieu; Massei, Nicolas; Duran, Léa
2015-04-01
Karst hydrosystems are well known for their vulnerability to turbidity due to their complex and unique characteristics, which make them very different from other aquifers. Moreover, many parameters can affect their functioning. This makes the characterization of their vulnerability difficult and requires the use of statistical analyses. Time series analyses of turbidity, electrical conductivity and water discharge datasets, such as correlation and spectral analyses, have proven to be useful in improving our understanding of karst systems. However, the loss of information on time localization is a major drawback of those Fourier spectral methods; this problem has been overcome by the development of wavelet analysis (continuous or discrete) for hydrosystems, offering the possibility to better characterize the complex modalities of variation inherent to non-stationary processes. Nevertheless, in the wavelet transform the signal is decomposed into several continuous wavelet components, which may not hold for the local-in-time processes frequently observed in karst aquifers. More recently, a new approach associating empirical mode decomposition and the Hilbert transform was presented for hydrosystems. It allows an orthogonal decomposition of the analyzed signal and provides a more accurate estimation of changing variability scales across time for highly transient signals. This study aims to identify the natural and anthropogenic parameters which control the turbidity released at a well used for drinking water supply. The well is located in the chalk karst aquifer near the Seine river, 40 km from the Seine estuary in the western Paris Basin. At this location, tidal variations greatly affect the level of the water in the Seine. Continuous wavelet analysis of the turbidity dataset has been used to decompose the turbidity released at the well into three components: i) rain event periods, ii) pumping periods and iii) the tidal range of the Seine river. Time-domain reconstruction by inverse wavelet transform allows
Measurement of time-dependent fractal dimension for time series of silicon content in pig iron
NASA Astrophysics Data System (ADS)
Zhou, Zhi-Min
2007-03-01
This work applies rescaled range analysis and the fractal dimension technique to the time series of silicon content in pig iron, in order to detect the inherent mechanism that governs the blast furnace ironmaking process. The results show that there are time-dependent fractal features and a deterministic mechanism in the blast furnace (BF) ironmaking process, which helps to gain a deeper understanding of the BF ironmaking process and also provides a powerful tool for predicting the silicon content in pig iron.
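Rescaled range (R/S) analysis of the kind applied above can be sketched as follows. This is a generic textbook estimator of the Hurst exponent, not the author's implementation; the white-noise test series is our own:

```python
import math
import random

def rescaled_range(series):
    """R/S statistic of one window: range of cumulative deviations over the SD."""
    n = len(series)
    mean = sum(series) / n
    devs, acc = [], 0.0
    for x in series:
        acc += x - mean
        devs.append(acc)
    r = max(devs) - min(devs)
    s = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    return r / s if s > 0 else 0.0

def hurst_exponent(series, window_sizes):
    """Slope of log(R/S) vs log(n): ~0.5 for uncorrelated noise, >0.5 persistent."""
    xs, ys = [], []
    for n in window_sizes:
        rs_vals = [rescaled_range(series[i:i + n])
                   for i in range(0, len(series) - n + 1, n)]
        xs.append(math.log(n))
        ys.append(math.log(sum(rs_vals) / len(rs_vals)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(42)
noise = [random.gauss(0.0, 1.0) for _ in range(2048)]
h_noise = hurst_exponent(noise, [16, 32, 64, 128, 256])
```

An exponent well away from 0.5 for the silicon-content series would be the signature of the time-dependent fractal feature the abstract reports.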
Physiological time-series analysis: what does regularity quantify?
NASA Technical Reports Server (NTRS)
Pincus, S. M.; Goldberger, A. L.
1994-01-01
Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity that appears to have potential application to a wide variety of physiological and clinical time-series data. The focus here is to provide a better understanding of ApEn to facilitate its proper utilization, application, and interpretation. After giving the formal mathematical description of ApEn, we provide a multistep description of the algorithm as applied to two contrasting clinical heart rate data sets. We discuss algorithm implementation and interpretation and introduce a general mathematical hypothesis of the dynamics of a wide class of diseases, indicating the utility of ApEn to test this hypothesis. We indicate the relationship of ApEn to variability measures, the Fourier spectrum, and algorithms motivated by study of chaotic dynamics. We discuss further mathematical properties of ApEn, including the choice of input parameters, statistical issues, and modeling considerations, and we conclude with a section on caveats to ensure correct ApEn utilization.
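The ApEn statistic described above has a compact standard form (Pincus 1991). The sketch below is a direct, unoptimized transcription of that definition; the two demonstration series are our own, chosen so a perfectly regular signal scores near zero and noise scores higher:

```python
import math
import random

def approx_entropy(u, m=2, r=None):
    """Approximate entropy ApEn(m, r): low for regular series, higher for irregular."""
    n = len(u)
    if r is None:
        mean = sum(u) / n
        r = 0.2 * math.sqrt(sum((x - mean) ** 2 for x in u) / n)  # common choice: 0.2 * SD

    def phi(mm):
        templates = [u[i:i + mm] for i in range(n - mm + 1)]
        total = 0.0
        for a in templates:
            matches = sum(1 for b in templates
                          if max(abs(x - y) for x, y in zip(a, b)) <= r)
            total += math.log(matches / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)

random.seed(0)
regular = [1.0, 2.0] * 60                              # perfectly alternating series
irregular = [random.gauss(0.0, 1.0) for _ in range(120)]
```

The input parameters m and r must be fixed before comparing series, which is exactly the parameter-choice caveat the abstract raises.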
Presentations to Emergency Departments for COPD: A Time Series Analysis
Rosychuk, Rhonda J.; Youngson, Erik; Rowe, Brian H.
2016-01-01
Background. Chronic obstructive pulmonary disease (COPD) is a common respiratory condition characterized by progressive dyspnea and acute exacerbations which may result in emergency department (ED) presentations. This study examines monthly rates of presentations to EDs in one Canadian province. Methods. Presentations for COPD made by individuals aged ≥55 years during April 1999 to March 2011 were extracted from provincial databases. Data included age, sex, and health zone of residence (North, Central, South, and urban). Crude rates were calculated. Seasonal autoregressive integrated moving average (SARIMA) time series models were developed. Results. ED presentations for COPD totalled 188,824 and the monthly rate of presentation remained relatively stable (from 197.7 to 232.6 per 100,000). Males and seniors (≥65 years) comprised 52.2% and 73.7% of presentations, respectively. The ARIMA(1,0, 0) × (1,0, 1)12 model was appropriate for the overall rate of presentations and for each sex and seniors. Zone specific models showed relatively stable or decreasing rates; the North zone had an increasing trend. Conclusions. ED presentation rates for COPD have been relatively stable in Alberta during the past decade. However, their increases in northern regions deserve further exploration. The SARIMA models quantified the temporal patterns and can help planning future health care service needs. PMID:27445514
Time-series analysis of offshore-wind-wave groupiness
Liang, H.B.
1988-01-01
This research applies basic time-series-analysis techniques to the complex envelope function, for which the study of offshore wind-wave groupiness is of direct interest. In constructing the complex envelope function, a phase-unwrapping technique is integrated into the algorithm for estimating the carrier frequency and preserving the phase information for further studies. The Gaussian random wave model forms the basis of the wave-group statistics derived from envelope-amplitude crossings. Good agreement between the theory and the analysis of field records is found. Other linear models, such as the individual-waves approach and the energy approach, are compared to the envelope approach by analyzing the same set of records. It is found that the character of the filter used in each approach dominates the wave-group statistics. Analyses indicate that deep offshore wind waves are weakly nonlinear and the Gaussian random assumption remains appropriate for describing the sea state. Wave-group statistics derived from the Gaussian random wave model thus become applicable.
Assessing Spontaneous Combustion Instability with Nonlinear Time Series Analysis
NASA Technical Reports Server (NTRS)
Eberhart, C. J.; Casiano, M. J.
2015-01-01
Considerable interest lies in the ability to characterize the onset of spontaneous instabilities within liquid propellant rocket engine (LPRE) combustion devices. Linear techniques, such as fast Fourier transforms, various correlation parameters, and critical damping parameters, have been used at great length for over fifty years. Recently, nonlinear time series methods have been applied to deduce information pertaining to instability incipiency hidden in seemingly stochastic combustion noise. A technique commonly used in the biological sciences, known as Multifractal Detrended Fluctuation Analysis, has been extended to the combustion dynamics field and is introduced here as a data analysis approach complementary to linear ones. Building on this, a modified technique is leveraged to extract artifacts of impending combustion instability that present themselves prior to growth to limit cycle amplitudes. Analysis is demonstrated on data from J-2X gas generator testing during which a distinct spontaneous instability was observed. Comparisons are made to previous work wherein the data were characterized using linear approaches. Verification of the technique is performed by examining idealized signals and comparing two separate, independently developed tools.
Permutation Entropy Analysis of Geomagnetic Indices Time Series
NASA Astrophysics Data System (ADS)
De Michelis, Paola; Consolini, Giuseppe
2013-04-01
The Earth's magnetospheric dynamics displays a very complex nature in response to solar wind changes, as widely documented in the scientific literature. This complex dynamics manifests in various physical processes occurring in different regions of the Earth's magnetosphere, as clearly revealed by previous analyses of geomagnetic indices (AE indices, Dst, Sym-H, etc.). One of the most interesting features of the geomagnetic indices as proxies of the Earth's magnetospheric dynamics is the multifractional nature of the time series of such indices. This aspect has been interpreted as the occurrence of intermittency and dynamical phase transitions in the Earth's magnetosphere. Here, we investigate the Markovian nature of different geomagnetic indices (AE indices, Sym-H, Asy-H) and their fluctuations by means of Permutation Entropy Analysis. The results clearly show the non-Markovian and distinct nature of the different sets of geomagnetic indices, pointing towards diverse underlying physical processes. A discussion in connection with the nature of the physical processes responsible for each set of indices and their multifractional character is attempted.
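Permutation entropy as used above follows the Bandt-Pompe construction: count the ordinal patterns of short windows and take the normalized Shannon entropy of their distribution. A minimal sketch (the demonstration series are ours, not geomagnetic data):

```python
import math
import random

def permutation_entropy(series, order=3):
    """Normalized Shannon entropy of ordinal patterns (Bandt-Pompe):
    0 for a monotonic series, ~1 for white noise."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))

random.seed(3)
h_trend = permutation_entropy(list(range(200)))            # single ordinal pattern
h_noise = permutation_entropy([random.random() for _ in range(2000)])
```

Because only the ordering of samples matters, the statistic is robust to the amplitude nonstationarity typical of storm-time indices.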
A Monte Carlo Approach to Biomedical Time Series Search
Woodbridge, Jonathan; Mortazavi, Bobak; Sarrafzadeh, Majid; Bui, Alex A.T.
2016-01-01
Time series subsequence matching (or signal searching) has importance in a variety of areas in health care informatics. These areas include case-based diagnosis and treatment as well as the discovery of trends and correlations between data. Much of the traditional research in signal searching has focused on high dimensional R-NN matching. However, the results of R-NN are often small and yield minimal information gain, especially with higher dimensional data. This paper proposes a randomized Monte Carlo sampling method to broaden search criteria such that the query results are an accurate sampling of the complete result set. The proposed method is shown both theoretically and empirically to improve information gain. The number of query results is increased by several orders of magnitude over approximate exact matching schemes and falls within a Gaussian distribution. The proposed method also shows excellent performance, as the majority of overhead added by sampling can be mitigated through parallelization. Experiments are run on both simulated and real-world biomedical datasets.
Chaotic time series analysis of vision evoked EEG
NASA Astrophysics Data System (ADS)
Zhang, Ningning; Wang, Hong
2009-12-01
To investigate human brain activity during aesthetic processing, a beautiful woman face picture and an ugly buffoon face picture were presented. Twelve subjects were assigned the aesthetic processing task while the electroencephalogram (EEG) was recorded. Event-related brain potentials (ERPs) were acquired from the 32 scalp electrodes, and the ugly buffoon picture produced larger amplitudes for the N1, P2, N2, and late slow wave components. Average ERPs from the ugly buffoon picture were larger than those from the beautiful woman picture. The ERP signals show that the ugly buffoon elicits higher emotion waves than the beautiful woman face, because of the expression on the face of the buffoon. Then, chaotic time series analysis was carried out to calculate the largest Lyapunov exponent using the small-data-set method and the correlation dimension using the G-P algorithm. The results show that the largest Lyapunov exponents of the ERP signals are greater than zero, which indicates that the ERP signals may be chaotic. The correlation dimensions from the beautiful woman picture are larger than those from the ugly buffoon picture. The comparison of the correlation dimensions shows that the beautiful face can excite the brain nerve cells. The research in this paper is persuasive support for the opinion that the cerebrum's activity is chaotic under some picture stimuli.
Time series analysis of Monte Carlo neutron transport calculations
NASA Astrophysics Data System (ADS)
Nease, Brian Robert
A time series based approach is applied to the Monte Carlo (MC) fission source distribution to calculate the non-fundamental mode eigenvalues of the system. The approach applies Principal Oscillation Patterns (POPs) to the fission source distribution, transforming the problem into a simple autoregressive order one (AR(1)) process. Proof is provided that the stationary MC process is linear to first order approximation, which is a requirement for the application of POPs. The autocorrelation coefficient of the resulting AR(1) process corresponds to the ratio of the desired mode eigenvalue to the fundamental mode eigenvalue. All modern k-eigenvalue MC codes calculate the fundamental mode eigenvalue, so the desired mode eigenvalue can be easily determined. The strength of this approach is contrasted against the Fission Matrix method (FMM) in terms of accuracy versus computer memory constraints. Multi-dimensional problems are considered since the approach has strong potential for use in reactor analysis, and the implementation of the method into production codes is discussed. Lastly, the appearance of complex eigenvalues is investigated and solutions are provided.
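The reduction to an AR(1) process described above can be illustrated generically: for a stationary AR(1) series, the lag-1 autocorrelation recovers the autoregressive coefficient, which in the method above corresponds to the eigenvalue ratio. The simulation below is our own sketch with an arbitrary true coefficient, not a transport calculation:

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation; estimates the AR(1) coefficient."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

# Simulate AR(1): x_{t+1} = a * x_t + noise; a plays the role of k_i / k_0.
random.seed(1)
a_true = 0.8
x, v = [], 0.0
for _ in range(20000):
    v = a_true * v + random.gauss(0.0, 1.0)
    x.append(v)
a_hat = lag1_autocorr(x)
```

Multiplying the estimated coefficient by the fundamental-mode eigenvalue reported by the Monte Carlo code then yields the desired higher-mode eigenvalue.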
Chaos Time Series Prediction Based on Membrane Optimization Algorithms
Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng
2015-01-01
This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management, and can help decision makers to adopt an optimal action. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show its applicability and superiority, the proposed forecast model is compared with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249
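The phase space reconstruction (τ, m) referred to above is a Takens delay embedding. A minimal sketch (the series and parameter values here are purely illustrative, not the paper's optimized choices):

```python
def delay_embed(series, m, tau):
    """Phase-space reconstruction: delay vectors
    [x_t, x_{t-tau}, ..., x_{t-(m-1)*tau}] for each admissible t."""
    start = (m - 1) * tau
    return [[series[t - k * tau] for k in range(m)]
            for t in range(start, len(series))]

# Toy example: embedding dimension m=3 with delay tau=2.
vectors = delay_embed(list(range(10)), m=3, tau=2)
```

Each delay vector then becomes one training input to the LS-SVM regressor, with the next sample of the series as its target.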
A Time Series Approach for Soil Moisture Estimation
NASA Technical Reports Server (NTRS)
Kim, Yunjin; vanZyl, Jakob
2006-01-01
Soil moisture is a key parameter in understanding the global water cycle and in predicting natural hazards. Polarimetric radar measurements have been used for estimating the soil moisture of bare surfaces. In order to estimate soil moisture accurately, the surface roughness effect must be compensated for properly. In addition, these algorithms will not produce accurate results for vegetated surfaces. It is difficult to retrieve the soil moisture of a vegetated surface since the radar backscattering cross section is sensitive to the vegetation structure and to environmental conditions such as the ground slope. Therefore, it is necessary to develop a method to estimate the effects of surface roughness and vegetation reliably. One way to remove the roughness effect and the vegetation contamination is to take advantage of the temporal variation of soil moisture. In order to understand the global hydrologic cycle, it is desirable to measure soil moisture with a one- to two-day revisit. Using these frequent measurements, a time series approach can be implemented to improve the soil moisture retrieval accuracy.
Optimal model-free prediction from multivariate time series.
Runge, Jakob; Donner, Reik V; Kurths, Jürgen
2015-05-01
Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation. PMID:26066231
On the Fourier and Wavelet Analysis of Coronal Time Series
NASA Astrophysics Data System (ADS)
Auchère, F.; Froment, C.; Bocchialini, K.; Buchlin, E.; Solomon, J.
2016-07-01
Using Fourier and wavelet analysis, we critically re-assess the significance of our detection of periodic pulsations in coronal loops. We show that the proper identification of the frequency dependence and statistical properties of the different components of the power spectra provides a strong argument against the common practice of data detrending, which tends to produce spurious detections around the cut-off frequency of the filter. In addition, the white and red noise models built into the widely used wavelet code of Torrence & Compo cannot, in most cases, adequately represent the power spectra of coronal time series, thus also possibly causing false positives. Both effects suggest that several reports of periodic phenomena should be re-examined. The Torrence & Compo code nonetheless effectively computes rigorous confidence levels if provided with pertinent models of mean power spectra, and we describe the appropriate manner in which to call its core routines. We recall the meaning of the default confidence levels output from the code, and we propose new Monte-Carlo-derived levels that take into account the total number of degrees of freedom in the wavelet spectra. These improvements allow us to confirm that the power peaks that we detected have a very low probability of being caused by noise.
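The abstract's central point, that confidence levels are only meaningful against a pertinent model of the mean power spectrum, can be illustrated with the standard AR(1) red-noise background. This is a generic sketch (not the Torrence & Compo code); the lag-1 autocorrelation and frequencies below are arbitrary illustrative values:

```python
import math

def red_noise_spectrum(rho, freqs, variance=1.0):
    """Mean power spectrum of an AR(1) (red noise) process with lag-1
    autocorrelation rho, at frequencies in cycles per sample."""
    return [variance * (1.0 - rho * rho) /
            (1.0 - 2.0 * rho * math.cos(2.0 * math.pi * f) + rho * rho)
            for f in freqs]

def confidence_level(mean_power, level=0.95):
    """Pointwise confidence level: periodogram ordinates follow
    mean * chi2(2 dof) / 2, and the chi2(2 dof) quantile is -2*ln(1-level)."""
    q = -2.0 * math.log(1.0 - level)
    return [m * q / 2.0 for m in mean_power]

freqs = [0.01, 0.1, 0.4]
background = red_noise_spectrum(0.7, freqs)
threshold = confidence_level(background)  # a peak above this is significant at 95%
```

Detrending the data first distorts this background near the filter cut-off, which is precisely how the spurious detections discussed above arise.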
Imputation of missing data in time series for air pollutants
NASA Astrophysics Data System (ADS)
Junger, W. L.; Ponce de Leon, A.
2015-02-01
Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method that is suitable for multivariate time series data, which uses the EM algorithm under the assumption of a normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas the validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations yielded valid results, even under missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
De-Trending Time Series Data for Variability Surveys
NASA Astrophysics Data System (ADS)
Kim, Dae-Won; Protopapas, Pavlos; Dave, Rahul
2009-02-01
We present an algorithm for the removal of trends in time series data. The trends could be caused by various systematic and random noise sources such as cloud passages, changes of airmass, or CCD noise. In order to determine the trends, we select template stars based on a hierarchical clustering algorithm. The hierarchy tree is constructed using the similarity matrix of the light curves of stars, whose elements are the Pearson correlation values. A new bottom-up merging algorithm is developed to extract clusters of template stars that are highly correlated among themselves, and may thus be used to identify the trends. We then use the multiple linear regression method to de-trend all individual light curves based on these determined trends. Experimental results with simulated light curves which contain artificial trends and events are presented. We also applied our algorithm to TAOS (Taiwan-American Occultation Survey) wide field data observed with a 0.5 m f/1.9 telescope equipped with a 2k by 2k CCD. With our approach, we successfully removed trends and increased the signal-to-noise ratio in TAOS light curves.
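The final regression step above, fitting each light curve against the determined trend and keeping the residuals, can be sketched for a single trend; the multi-trend case is ordinary multiple regression. The toy trend and flux values here are hypothetical:

```python
def detrend(light_curve, trend):
    """Least-squares fit flux = a*trend + b; return the residual light curve."""
    n = len(light_curve)
    mt = sum(trend) / n
    mf = sum(light_curve) / n
    a = (sum((t - mt) * (f - mf) for t, f in zip(trend, light_curve))
         / sum((t - mt) ** 2 for t in trend))
    b = mf - a * mt
    return [f - (a * t + b) for f, t in zip(light_curve, trend)]

# A light curve that is pure trend (plus offset) detrends to ~zero residuals.
trend = [t / 100.0 for t in range(100)]
flux = [3.0 * x + 5.0 for x in trend]
residuals = detrend(flux, trend)
```

Real events such as occultations survive in the residuals because they are uncorrelated with the shared trend.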
From time series measurements to rules of causality
NASA Astrophysics Data System (ADS)
Raivio, K. J.
2012-05-01
A data analysis procedure is described that can be used to extract events and rules of causality from time series measurements of industrial processes. The proposed procedure incorporates well-known data analysis methods that have not been widely used in condition monitoring systems. Here, it is demonstrated how the algorithms can be utilised in mobile work machine condition monitoring. The analysis process starts with the selection of representative measurements which describe the operation of the machine reasonably well. Variables are selected using a clustering method in order to find groups of variables; one measurement from each group is selected for later analysis. Next, data streams are segmented to find intervals over which the operation of the machine continues without any abrupt changes. The segments, or combinations of them, are analysed to extract sequences of operational states or change points. Event sequences are further analysed to extract association rules for the events. The extracted rules contain information about the occurrence probability of a certain event sequence. This facilitates, e.g., identification of fault precursors. Probabilities computed from measurements using this procedure can be used to adjust expert-knowledge-based fault trees.
Scene Context Dependency of Pattern Constancy of Time Series Imagery
NASA Technical Reports Server (NTRS)
Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur
2008-01-01
A fundamental element of future generic pattern recognition technology is the ability to extract similar patterns for the same scene despite wide-ranging extraneous variables, including lighting, turbidity, sensor exposure variations, and signal noise. In the process of demonstrating pattern constancy of this kind for retinex/visual servo (RVS) image enhancement processing, we found that the pattern constancy performance depended somewhat on scene content. Most notably, the scene topography and, in particular, the scale and extent of the topography in an image, affects the pattern constancy the most. This paper explores these effects in more depth and presents experimental data from several time series tests. These results further quantify the impact of topography on pattern constancy. Despite this residual inconstancy, the results of overall pattern constancy testing support the idea that RVS image processing can be a universal front-end for generic visual pattern recognition. While the effects on pattern constancy were significant, the RVS processing still achieves a high degree of pattern constancy over a wide spectrum of scene content diversity and wide-ranging extraneous variations in lighting, turbidity, and sensor exposure.
Innovative techniques to analyze time series of geomagnetic activity indices
NASA Astrophysics Data System (ADS)
Balasis, Georgios; Papadimitriou, Constantinos; Daglis, Ioannis A.; Potirakis, Stelios M.; Eftaxias, Konstantinos
2016-04-01
Magnetic storms are undoubtedly among the most important phenomena in space physics and also a central subject of space weather. The non-extensive Tsallis entropy has recently been introduced as an effective complexity measure for the analysis of the geomagnetic activity Dst index. The Tsallis entropy sensitively shows the complexity dissimilarity among different "physiological" (normal) and "pathological" states (intense magnetic storms). More precisely, the Tsallis entropy implies the emergence of two distinct patterns: (i) a pattern associated with the intense magnetic storms, which is characterized by a higher degree of organization, and (ii) a pattern associated with normal periods, which is characterized by a lower degree of organization. Other entropy measures such as Block Entropy, T-Complexity, Approximate Entropy, Sample Entropy and Fuzzy Entropy verify the above-mentioned result. Importantly, the wavelet spectral analysis in terms of the Hurst exponent, H, also shows the existence of two different patterns: (i) a pattern associated with the intense magnetic storms, which is characterized by fractional Brownian persistent behavior, and (ii) a pattern associated with normal periods, which is characterized by fractional Brownian anti-persistent behavior. Finally, we observe universality in magnetic storm and earthquake dynamics, on the basis of a modified form of the Gutenberg-Richter law for the Tsallis statistics. This finding suggests a common approach to the interpretation of both phenomena in terms of the same driving physical mechanism. Signatures of discrete scale invariance in Dst time series further support the aforementioned proposal.
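The Tsallis entropy referred to above has a simple closed form over a discrete probability distribution. A minimal sketch (the two toy distributions are our own stand-ins for a disorganized "normal" state and a more organized "storm" state):

```python
import math

def tsallis_entropy(probs, q):
    """Non-extensive Tsallis entropy S_q = (1 - sum(p^q)) / (q - 1);
    reduces to the Shannon entropy in the limit q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return -sum(p * math.log(p) for p in probs if p > 0.0)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

uniform = [0.25] * 4            # disorganized state: maximal entropy
peaked = [0.7, 0.1, 0.1, 0.1]   # more organized state: lower entropy
```

For q = 2 the uniform 4-state distribution gives S_2 = 1 - 4*(1/16) = 0.75, while the peaked distribution scores lower, mirroring the higher organization reported for intense storms.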
Detrended fluctuation analysis of laser Doppler flowmetry time series.
Esen, Ferhan; Aydin, Gülsün Sönmez; Esen, Hamza
2009-12-01
Detrended fluctuation analysis (DFA) of laser Doppler flow (LDF) time series appears to yield improved prognostic power in microvascular dysfunction, through calculation of the scaling exponent, alpha. In the present study, the change in microvascular function induced by long-lasting strenuous activity was evaluated by DFA in basketball players compared with sedentary controls. Forearm skin blood flow was measured at rest and during local heating. Three scaling exponents, the slopes of the three regression lines, were identified, corresponding to cardiac, cardio-respiratory and local factors. The local scaling exponent was always approximately one, alpha=1.01+/-0.15, in the control group and did not change with local heating. However, we found a broken line with two scaling exponents (alpha(1)=1.06+/-0.01 and alpha(2)=0.75+/-0.01) in basketball players. The broken line became a single line with one scaling exponent (alpha(T)=0.94+/-0.01) under local heating. Scaling exponents alpha(2) and alpha(T) smaller than 1 indicate reduced long-range correlation in blood flow due to a loss of integration in local mechanisms, and suggest endothelial dysfunction as the most likely candidate. Evaluation of microvascular function from a baseline LDF signal at rest is the advantage of DFA over other methods, spectral or not, that use the amplitude changes of an evoked relative signal. PMID:19660479
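The DFA scaling exponent used above is the slope of log fluctuation versus log box size after linearly detrending the integrated series within each box. A generic first-order DFA sketch (not the authors' code; the white-noise test series is ours):

```python
import math
import random

def dfa_exponent(series, box_sizes):
    """First-order DFA: slope of log F(n) vs log n.
    alpha ~ 0.5 for uncorrelated noise, ~1.0 for 1/f noise."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)          # integrated (cumulative-sum) profile
    xs, ys = [], []
    for n in box_sizes:
        sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            # linear detrend within the box
            t_mean = (n - 1) / 2.0
            s_mean = sum(seg) / n
            slope = (sum((t - t_mean) * (s - s_mean) for t, s in enumerate(seg))
                     / sum((t - t_mean) ** 2 for t in range(n)))
            for t, s in enumerate(seg):
                r = s - (s_mean + slope * (t - t_mean))
                sq += r * r
                count += 1
        xs.append(math.log(n))
        ys.append(math.log(math.sqrt(sq / count)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(7)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
alpha = dfa_exponent(white, [8, 16, 32, 64, 128])
```

A break in the log-log plot, two slopes instead of one, is exactly the "broken line" feature the study reports for the athletes' LDF records.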
Fractal dimensions of laser doppler flowmetry time series.
Carolan-Rees, G; Tweddel, A C; Naka, K K; Griffith, T M
2002-01-01
Laser Doppler flowmetry (LDF) provides a non-invasive method of assessing cutaneous perfusion. As the microvasculature under the probe is not defined the measured flux cannot be given absolute units, but the technique has nevertheless proved valuable for assessing relative changes in perfusion in response to physiological stress. LDF signals normally show pronounced temporal variability, both as a consequence of the pulsatile nature of blood flow and local changes in dynamic vasomotor activity. The aim of the present study was to investigate the use of methods of nonlinear analysis in characterizing temporal fluctuations in LDF signals. Data were collected under standardised conditions from the forearm of 16 normal subjects at rest, during exercise and on recovery. Surrogate data was then generated from the original time series by phase randomization. Dispersional analysis demonstrated that the LDF data was fractal with two distinct scaling regions, thus allowing the calculation of a fractal dimension which decreased significantly from 1.23 +/- 0.09 to 1.04 +/- 0.02 during exercise. By contrast, dispersional analysis of the surrogate data showed no scaling region. PMID:11891142
Time series analysis of the cataclysmic variable V1101 Aquilae
NASA Astrophysics Data System (ADS)
Spahn, Alexander C.
This work reports on the application of various time series analysis techniques to a two-month portion of the light curve of the cataclysmic variable V1101 Aquilae. The system is a Z Cam type dwarf nova with an orbital period of 4.089 hours and an active outburst cycle of 15.15 days due to a high mass transfer rate. The system's light curve also displays higher frequency variations, known as negative superhumps, with a period of 3.891 hours and a period deficit of −5.1%. The amplitude of the negative superhumps varies as an inverse function of system brightness, with an amplitude of 0.70358 during outburst and 0.97718 during quiescence (relative flux units). These variations are believed to be caused by the contrast between the accretion disk and the bright spot. An O−C diagram was constructed and reveals the system's evolution. In general, during the rise to outburst, the disk moment of inertia decreases as mass is lost from the disk, causing the precession period of the tilted disk to increase and with it the negative superhump period. The decline of outburst is associated with the opposite effects. While no standstills were observed in these data, they are present in the AAVSO data, and the results agree with the conditions for Z Cam stars.
Reconstructing Ocean Circulation using Coral Δ14C Time Series
Kashgarian, M; Guilderson, T P
2001-02-23
the invasion of fossil fuel CO2 and bomb 14C into the atmosphere and surface oceans. Therefore, the Δ14C data produced in this study can be used to validate the ocean uptake of fossil fuel CO2 in coupled ocean-atmosphere models. This study takes advantage of the quasi-conservative nature of 14C as a water mass tracer by using Δ14C time series in corals to identify changes in the shallow circulation of the Pacific. Although the data themselves provide fundamental information on surface water mass movement, the true strength is a combined approach that is greater than the individual parts: the data help uncover deficiencies in ocean circulation models, and the model results place long Δ14C time series in a dynamic framework that helps to identify those locations where additional observations are most needed.
3D City Transformations by Time Series of Aerial Images
NASA Astrophysics Data System (ADS)
Adami, A.
2015-02-01
Recent photogrammetric applications based on dense image matching algorithms make it possible to use not only images acquired by digital cameras, amateur or otherwise, but also to recover the vast heritage of analogue photographs. This opens up many possibilities for using and enhancing the existing heritage of photographs. Research into the original appearance of old buildings, the virtual reconstruction of vanished architecture, and the study of urban development are some of the application areas that exploit the great cultural heritage of photography. Nevertheless, there are restrictions on the use of historical images for automatic reconstruction of buildings, such as image quality, availability of camera parameters, and ineffective geometry of image acquisition. These constraints are very hard to overcome, and for these reasons it is difficult to find good datasets for terrestrial close-range photogrammetry. Even the photographic archives of museums and superintendencies, while holding a wealth of documentation, offer no datasets suited to a dense image matching approach. Compared to the vast collection of historical photos, the class of aerial photos meets both criteria stated above. In this paper, historical aerial photographs are used with dense image matching algorithms to build 3D models of a city in different years. The models can be used to study the urban development of the city and its changes through time. The application concerns the city centre of Verona, for which several time series of aerial photographs have been retrieved. The models obtained in this way allowed immediate observation of the urban development of the city, the places of expansion, and new urban areas. A more interesting aspect emerged from the analytical comparison between models: the difference, computed as the Euclidean distance between two models, gives information about new buildings or demolitions. Regarding accuracy, it is necessary to point out that the quality of final
Nonlinear time series analysis of normal and pathological human walking
NASA Astrophysics Data System (ADS)
Dingwell, Jonathan B.; Cusumano, Joseph P.
2000-12-01
Characterizing locomotor dynamics is essential for understanding the neuromuscular control of locomotion. In particular, quantifying dynamic stability during walking is important for assessing people who have a greater risk of falling. However, traditional biomechanical methods of defining stability have not quantified the resistance of the neuromuscular system to perturbations, suggesting that more precise definitions are required. For the present study, average maximum finite-time Lyapunov exponents were estimated to quantify the local dynamic stability of human walking kinematics. Local scaling exponents, defined as the local slopes of the correlation sum curves, were also calculated to quantify the local scaling structure of each embedded time series. Comparisons were made between overground and motorized treadmill walking in young healthy subjects and between diabetic neuropathic (NP) patients and healthy controls (CO) during overground walking. A modification of the method of surrogate data was developed to examine the stochastic nature of the fluctuations overlying the nominally periodic patterns in these data sets. Results demonstrated that having subjects walk on a motorized treadmill artificially stabilized their natural locomotor kinematics by small but statistically significant amounts. Furthermore, a paradox previously present in the biomechanical literature that resulted from mistakenly equating variability with dynamic stability was resolved. By slowing their self-selected walking speeds, NP patients adopted more locally stable gait patterns, even though they simultaneously exhibited greater kinematic variability than CO subjects. Additionally, the loss of peripheral sensation in NP patients was associated with statistically significant differences in the local scaling structure of their walking kinematics at those length scales where it was anticipated that sensory feedback would play the greatest role. Lastly, stride-to-stride fluctuations in the
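The abstract's core quantity, the average maximum finite-time Lyapunov exponent, can be sketched with a simplified Rosenstein-style estimator (an illustrative reconstruction, not the authors' implementation; the parameter values and the logistic-map test signal are assumptions):

```python
import numpy as np

def max_lyapunov(x, m=1, tau=1, min_sep=10, k_steps=5):
    """Rosenstein-style estimate of the largest finite-time Lyapunov
    exponent: pair each embedded state with its nearest non-temporal
    neighbor, track the mean log-separation as both trajectories evolve,
    and fit a line; the slope is the exponent per time step."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * tau
    X = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    usable = n - k_steps
    d = np.linalg.norm(X[:usable, None, :] - X[None, :usable, :], axis=-1)
    idx = np.arange(usable)
    d[np.abs(idx[:, None] - idx[None, :]) < min_sep] = np.inf  # skip temporal neighbors
    nn = d.argmin(axis=1)
    steps = np.arange(k_steps + 1)
    mean_log = []
    for j in steps:
        sep = np.linalg.norm(X[idx + j] - X[nn + j], axis=-1)
        mean_log.append(np.log(sep[sep > 0]).mean())
    slope, _ = np.polyfit(steps, mean_log, 1)
    return slope

# Toy check on the fully chaotic logistic map, whose exponent is ln 2;
# a one-dimensional embedding suffices for a one-dimensional map.
x = np.empty(2000)
x[0] = 0.3
for i in range(len(x) - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
lam = max_lyapunov(x)
```

A positive slope indicates local instability (neighboring trajectories diverge); the gait analysis applies the same idea to delay-embedded kinematic time series.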
Crop growth dynamics modeling using time-series satellite imagery
NASA Astrophysics Data System (ADS)
Zhao, Yu
2014-11-01
In modern agriculture, remote sensing plays an essential role in monitoring crop growth and predicting crop yield. Accurate and timely crop growth information is important, particularly for large-scale farming. Because of the high cost and low data availability of high-resolution satellite images such as RapidEye, we focus on time series of low-resolution satellite imagery. In this research, an NDVI curve retrieved from MODIS 8-day 250 m surface reflectance images was used to monitor soybean yield. Conventional models and vegetation indices for yield prediction have difficulty describing the basic growth processes that drive yield component formation. We develop a novel method to model Crop Growth Dynamics (CGD) and generate a CGD index describing soybean yield component formation. We analyze the standard growth stages of soybean, and the model involves two key calculation steps. The first is normalization of the NDVI-curve coordinates and division of crop growth into the standard development stages using effective accumulated temperature (EAT). The second is modeling biological growth in each development stage by analyzing the factors of yield component formation. Evaluation was performed by predicting soybean yield with the CGD index at the growth stage when the whole modeling dataset is available; we obtained a precision of 88.5%, about 10% higher than the conventional method. The validation results show that prediction using CGD modeling is satisfactory and can be applied in practice for large-scale soybean yield monitoring.
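The EAT-based stage division mentioned in the abstract can be sketched as follows (a hypothetical illustration: the base temperature, stage thresholds, and temperature series are invented, not the paper's crop calibration):

```python
import numpy as np

def growth_stages(daily_mean_temp, t_base, stage_thresholds):
    """Split a season into development stages via effective accumulated
    temperature (EAT): daily degrees above t_base are summed, and a stage
    boundary is placed on the day the running sum crosses each threshold."""
    temps = np.asarray(daily_mean_temp, float)
    eat = np.cumsum(np.maximum(temps - t_base, 0.0))
    bounds = [int(np.searchsorted(eat, th)) for th in stage_thresholds]
    return bounds, eat

# Toy season: 120 days of synthetic temperatures that warm and then cool.
days = np.arange(120)
temp = 18 + 10 * np.sin(np.pi * days / 120)
bounds, eat = growth_stages(temp, t_base=10.0,
                            stage_thresholds=[300, 900, 1500])
```

Each boundary index marks the first day the accumulated heat reaches a threshold; the NDVI curve can then be segmented at those days before stage-wise modeling.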
Optimizing the search for transiting planets in long time series
NASA Astrophysics Data System (ADS)
Ofir, Aviv
2014-01-01
Context. Transit surveys, both ground- and space-based, have already accumulated a large number of light curves that span several years. Aims: The search for transiting planets in these long time series is computationally intensive. We wish to optimize the search for both detection and computational efficiencies. Methods: We assume that the searched systems can be described well by Keplerian orbits. We then propagate the effects of different system parameters to the detection parameters. Results: We show that the frequency information content of the light curve is primarily determined by the duty cycle of the transit signal, and thus the optimal frequency sampling is found to be cubic and not linear. Further optimization is achieved by considering duty-cycle dependent binning of the phased light curve. By using the (standard) BLS, one is either fairly insensitive to long-period planets or less sensitive to short-period planets and computationally slower by a significant factor of ~330 (for a 3 yr long dataset). We also show how the physical system parameters, such as the host star's size and mass, directly affect transit detection. This understanding can then be used to optimize the search for every star individually. Conclusions: By considering Keplerian dynamics explicitly rather than implicitly one can optimally search the BLS parameter space. The presented Optimal BLS enhances the detectability of both very short and very long period planets, while allowing such searches to be done with much reduced resources and time. The Matlab/Octave source code for Optimal BLS is made available. The MATLAB code is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/561/A138
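The "cubic, not linear" frequency sampling can be sketched as a grid that is uniform in the cube root of frequency, motivated by the Keplerian scaling of transit duration with period (d ∝ P^(1/3)); the step-size formula and parameter names below are illustrative assumptions, not the paper's exact prescription:

```python
import numpy as np

def cubic_freq_grid(f_min, f_max, baseline_days, oversample=2.0):
    """Frequency grid with uniform steps in f**(1/3), so frequency is a
    cubic function of the grid index. Motivation (hedged): for Keplerian
    orbits the transit duration scales as P**(1/3), so the tolerable
    frequency step grows as f**(2/3)/T, which integrates to uniform
    spacing in the cube root of frequency."""
    c_lo, c_hi = f_min ** (1 / 3), f_max ** (1 / 3)
    step = 1.0 / (3.0 * oversample * baseline_days)  # step in f**(1/3)
    n = int(np.ceil((c_hi - c_lo) / step)) + 1
    return (c_lo + step * np.arange(n)) ** 3

# Example: periods from 0.5 to 1000 days over a 3-year baseline.
freqs = cubic_freq_grid(1 / 1000.0, 1 / 0.5, baseline_days=1095.0)
# Size a linear grid would need to match the finest (lowest-frequency) step:
lin_equiv = (freqs[-1] - freqs[0]) / np.diff(freqs).min()
```

The grid is dense at low frequencies (long periods, narrow transits in phase) and coarse at high frequencies, which is where the computational saving over a linear grid comes from.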
A time-series approach to dynamical systems from classical and quantum worlds
Fossion, Ruben
2014-01-08
This contribution discusses some recent applications of time-series analysis in Random Matrix Theory (RMT), and applications of RMT in the statistical analysis of eigenspectra of correlation matrices of multivariate time series.
NASA Astrophysics Data System (ADS)
Pitts, J. Brian
2016-02-01
What if gravity satisfied the Klein-Gordon equation? Both particle physics from the 1920-30s and the 1890s Neumann-Seeliger modification of Newtonian gravity with exponential decay suggest considering a "graviton mass term" for gravity, which is algebraic in the potential. Unlike Nordström's "massless" theory, massive scalar gravity is strictly special relativistic in the sense of being invariant under the Poincaré group but not the 15-parameter Bateman-Cunningham conformal group. It therefore exhibits the whole of Minkowski space-time structure, albeit only indirectly concerning volumes. Massive scalar gravity is plausible in terms of relativistic field theory, while violating most interesting versions of Einstein's principles of general covariance, general relativity, equivalence, and Mach. Geometry is a poor guide to understanding massive scalar gravity(s): matter sees a conformally flat metric due to universal coupling, but gravity also sees the rest of the flat metric (barely or on long distances) in the mass term. What is the 'true' geometry, one might wonder, in line with Poincaré's modal conventionality argument? Infinitely many theories exhibit this bimetric 'geometry,' all with the total stress-energy's trace as source; thus geometry does not explain the field equations. The irrelevance of the Ehlers-Pirani-Schild construction to a critique of conventionalism becomes evident when multi-geometry theories are contemplated. Much as Seeliger envisaged, the smooth massless limit indicates underdetermination of theories by data between massless and massive scalar gravities-indeed an unconceived alternative. At least one version easily could have been developed before General Relativity; it then would have motivated thinking of Einstein's equations along the lines of Einstein's newly re-appreciated "physical strategy" and particle physics and would have suggested a rivalry from massive spin 2 variants of General Relativity (massless spin 2, Pauli and Fierz
NASA Astrophysics Data System (ADS)
Lindholm, D. M.; Weigel, R. S.; Wilson, A.; Ware Dewolfe, A.
2009-12-01
Data analysis in the physical sciences is often plagued by the difficulty of acquiring the desired data. A great deal of work has been done in the area of metadata and data discovery; however, many such discoveries simply provide links that lead directly to a data file. Often these files are impractically large, containing more time samples or variables than desired, and are slow to access. Once these files are downloaded, format issues further complicate using the data. Some data servers have begun to address these problems by improving data virtualization and ease of use. However, these services often don't scale to large datasets. Also, the generic nature of the data models used by these servers, while providing greater flexibility, may complicate setting up such a service for data providers and limit the semantics that would otherwise simplify use for clients, machine or human. The Time Series Data Server (TSDS) aims to address these problems within the limited, yet common, domain of time series data. With the simplifying assumption that all data products served are a function of time, the server can optimize for data access based on time subsets, a common use case. The server also supports requests for specific variables, which can be of type scalar, structure, or sequence, as well as data types with higher-level semantics, such as "spectrum." The TSDS is implemented using Java Servlet technology and can be dropped into any servlet container and customized for a data provider's needs. The interface is based on OPeNDAP (http://opendap.org) and conforms to the Data Access Protocol (DAP) 2.0, a NASA standard (ESDS-RFC-004), which defines a simple HTTP request and response paradigm. Thus a TSDS server instance is a compliant OPeNDAP server that can be accessed by any OPeNDAP client or directly via RESTful web service requests. The TSDS reads the data that it serves into a common data model via the NetCDF Markup Language (NcML, http
NASA Astrophysics Data System (ADS)
Osada, Y.; Ohta, Y.; Demachi, T.; Kido, M.; Fujimoto, H.; Azuma, R.; Hino, R.
2013-12-01
Large interplate earthquakes have repeatedly occurred in the Japan Trench. Recently, detailed crustal deformation has been revealed by the nationwide inland GPS network GEONET, operated by GSI. However, the region of maximum displacement for interplate earthquakes is mainly located offshore. GPS/Acoustic seafloor geodetic observation (hereafter GPS/A) is therefore important and useful for understanding the shallower part of the interplate coupling between the subducting and overriding plates. GPS/A is typically conducted in a specific ocean area in repeated campaign style using a research vessel or buoy, so the temporal variation of seafloor crustal deformation cannot be monitored in real time. One technical issue for real-time observation is kinematic GPS analysis, because it relies on both reference and rover data. If precise kinematic GPS analysis becomes possible in the offshore region, it would be a promising method for real-time GPS/A with an unmanned surface vehicle (USV) or a moored buoy. We assessed the stability, precision, and accuracy of the StarFire global satellite-based augmentation system. We first tested StarFire under static conditions. To assess coordinate precision and accuracy, we compared the 1 Hz StarFire time series with post-processed precise point positioning (PPP) 1 Hz time series from the GIPSY-OASIS II processing software ver. 6.1.2 using three product types (ultra-rapid, rapid, and final orbits). We also used different clock-information intervals (30 and 300 seconds) for the post-processed PPP processing. The standard deviation of the real-time StarFire time series is less than 30 mm (horizontal components) and 60 mm (vertical component) over one month of continuous processing. We also assessed the noise spectra of the time series estimated by StarFire and by the post-processed GIPSY PPP results. We found that the noise spectrum of the StarFire time series is similar to that of the GIPSY-OASIS II processing result based on the JPL rapid orbit
ERIC Educational Resources Information Center
St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.
2014-01-01
Researchers are increasingly using comparative interrupted time series (CITS) designs to estimate the effects of programs and policies when randomized controlled trials are not feasible. In a simple interrupted time series design, researchers compare the pre-treatment values of a treatment group time series to post-treatment values in order to…
In aquatic systems, time series of dissolved oxygen (DO) have been used to compute estimates of ecosystem metabolism. Central to this open-water method is the assumption that the DO time series is a Lagrangian specification of the flow field. However, most DO time series are coll...
Investigating MAI's Precision: Single Interferogram and Time Series Filtering
NASA Astrophysics Data System (ADS)
Bechor Ben Dov, N.; Herring, T.
2010-12-01
Multiple aperture InSAR (MAI) is a technique for obtaining along-track displacements from InSAR phase data. Because InSAR measurements are insensitive to along-track displacements, it is only possible to retrieve them using non-interferometric approaches: either pixel-offset tracking, or using data from different orbital configurations and assuming a continuity/displacement model. These approaches are limited by precision and by data acquisition conflicts, respectively. MAI is promising in this respect, as its precision is better than the former and its data are available whether or not additional acquisitions exist. Here we study the MAI noise and develop a filter to reduce it. We test the filtering with empirical noise and simulated signal data. Below we describe the filtered results for single-interferogram precision, and a Kalman filter approach for MAI time series. We use 14 interferograms taken over the greater Los Angeles/San Gabriel Mountains area in California. The interferograms include a variety of decorrelation sources, both terrain-related (topographic variations, vegetation, and agriculture) and imaging-related (spatial and temporal baselines of 200-500 m and 1-12 months, respectively). Most of the pixels are in the low to average coherence range (below 0.7). The data were collected by ESA and made available by the WInSAR consortium. We assume the data contain "zero" along-track signal (less than the theoretical 4 cm for our coherence range), and use the images as 14 dependent realizations of the MAI noise. We find a wide distribution of phase values, σ = 2-3 radians (wrapped). We superimpose a signal on our MAI noise interferograms using along-track displacements (-88 to 143 cm) calculated for the 1812 Wrightwood earthquake. To analyze single MAI interferograms, we design an iterative quantile-based filter and test it on the noise+signal MAI interferograms. The residuals reveal the following MAI noise characteristics: (1) a constant noise term, up to 90 cm (2) a
Comparison of Shipboard Hydrographic Time Series off California and Hawaii
NASA Astrophysics Data System (ADS)
Collins, C.; Margolina, T.; Rago, T.
2012-04-01
Time series of hydrographic measurements off Central California (CalCOFI line 67) are compared with measurements made during the same month to the north of Oahu, Hawaii (Station ALOHA). Off Hawaii, the upper 1000 m were sampled every two or three hours for three days to remove internal tide variability. Along Line 67, an onshore-offshore section of 17 stations spaced 20 km apart was occupied over a period of ~2.5 days (or longer, depending upon weather conditions and sea state); these 17 stations were averaged to obtain a sample comparable to the Hawaii data. The California data were obtained quarterly beginning in 1997. The Hawaii data are monthly and begin in 1988. The inshore portion of the California data was dominated by poleward geostrophic flow. At a distance of 100 km from the coast, the equatorward geostrophic flow of the California Current dominated the surface flow and deepened farther from shore. Using a 1000 dbar reference, the dynamic heights were in good agreement with climatological data and gave an equatorward surface flow of 2.1 cm/s between California and Hawaii. Water properties to the bottom were measured at both locations; off California, this was done at the station farthest from shore, where the water depth is 4500 m. The geostrophic flow between the California (about 30 casts) and Hawaii data (more than 200 casts) was computed relative to 4500 dbar. Between the bottom and about 750 dbar, the flow was poleward, <0.001 m/s, and yielded a total northward transport of about 3.5 Sv for waters below 750 m. Trends of salinity and oxygen on isopycnal surfaces common to both data sets are determined for the period 1997-2010.
Analytical framework for recurrence network analysis of time series
NASA Astrophysics Data System (ADS)
Donges, Jonathan F.; Heitzig, Jobst; Donner, Reik V.; Kurths, Jürgen
2012-04-01
Recurrence networks are a powerful nonlinear tool for time series analysis of complex dynamical systems. While there are already many successful applications ranging from medicine to paleoclimatology, a solid theoretical foundation of the method has still been missing so far. Here, we interpret an ɛ-recurrence network as a discrete subnetwork of a “continuous” graph with uncountably many vertices and edges corresponding to the system's attractor. This step allows us to show that various statistical measures commonly used in complex network analysis can be seen as discrete estimators of newly defined continuous measures of certain complex geometric properties of the attractor on the scale given by ɛ. In particular, we introduce local measures such as the ɛ-clustering coefficient, mesoscopic measures such as ɛ-motif density, path-based measures such as ɛ-betweennesses, and global measures such as ɛ-efficiency. This new analytical basis for the so far heuristically motivated network measures also provides an objective criterion for the choice of ɛ via a percolation threshold, and it shows that estimation can be improved by so-called node splitting invariant versions of the measures. We finally illustrate the framework for a number of archetypical chaotic attractors such as those of the Bernoulli and logistic maps, periodic and two-dimensional quasiperiodic motions, and for hyperballs and hypercubes by deriving analytical expressions for the novel measures and comparing them with data from numerical experiments. More generally, the theoretical framework put forward in this work describes random geometric graphs and other networks with spatial constraints, which appear frequently in disciplines ranging from biology to climate science.
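A minimal sketch of the ε-recurrence network construction that the framework above analyzes (illustrative only: the circle test data and the ε value are assumptions, and a real application would first delay-embed the time series):

```python
import numpy as np

def recurrence_network(X, eps):
    """Adjacency matrix of an epsilon-recurrence network: each state is a
    node, and two nodes are linked when their distance in state space is
    below eps (self-loops removed)."""
    X = np.asarray(X, float)
    if X.ndim == 1:
        X = X[:, None]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = (d < eps).astype(int)
    np.fill_diagonal(A, 0)
    return A

def edge_density(A):
    """Fraction of realized links among all ordered node pairs."""
    n = len(A)
    return A.sum() / (n * (n - 1))

# Toy usage: states on a periodic orbit (a circle) yield a ring-like,
# regular network, one of the archetypal cases treated analytically.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])
A = recurrence_network(X, eps=0.3)
rho = edge_density(A)
```

Network measures such as the ε-clustering coefficient or ε-betweenness are then computed on A; the paper's continuous-graph interpretation explains what these discrete statistics estimate as the number of states grows.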
Evaluating mallard adaptive management models with time series
Conn, P.B.; Kendall, W.L.
2004-01-01
Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models given more weight in the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was one of the predictor models, as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when that model was not used to generate the data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model sets. All of these
The Coral Data Time Series Need To Be Revisited
NASA Astrophysics Data System (ADS)
Juillet-Leclerc, A.
2004-12-01
Coral skeleton is formed under the organism's control, and its geochemical properties are strongly influenced by biological effects that embed the environmental signal. Geochemists have been puzzled by the diversity of geochemical responses shown by colonies grown in the same area. By revisiting the Weber and Woodhead data series (1972), which gathers data from enough colonies developed under similar conditions to provide a statistically representative isotopic value for one site, we demonstrate that for Porites and Acropora the expected isotopic thermometer is revealed when the "vital effect" is removed. On the other hand, using Acropora cultured under controlled conditions with temperature varied over a range between 23 and 29°C, the comparison of oxygen and carbon isotopic values revealed the role played by kinetic fractionation. This apparent paradox of two co-existing fractionations is explained by isotopic analyses of wild and cultured corals performed at micrometer scale, taking into account the microstructures of the skeleton. Two different crystals appear to be the growth units of the skeleton, each corresponding to a specific deposition mechanism. Thus, a measurement performed with a conventional method is a "bulk" measurement that depends on two isotopic fractionations. Some investigations have underlined the discrepancy between the meanings of the inter-annual and seasonal isotopic records, which can be illustrated by the different isotopic calibrations assessed from seasonal or annual data; this has also been explained by micrometer-scale analyses of Porites aragonite. A smoothing at around 400 microns of the isotopic measurements, as well as of Sr/Ca, indicates that at the seasonal time scale the growth unit is the month. This agrees with extensive studies by biologists describing the mechanism governing the formation of the Porites skeleton: every month a framework is deposited and progressively filled in. By combining biologists' and geochemists' knowledge
Time series analysis of diverse extreme phenomena: universal features
NASA Astrophysics Data System (ADS)
Eftaxias, K.; Balasis, G.
2012-04-01
The field of study of complex systems holds that the dynamics of complex systems are founded on universal principles that may be used to describe a great variety of scientific and technological approaches to different types of natural, artificial, and social systems. We suggest that the dynamics of earthquakes, epileptic seizures, solar flares, and magnetic storms can be analyzed within similar mathematical frameworks. A central property of the generation of these extreme events is the occurrence of coherent large-scale collective behavior with very rich structure, resulting from repeated nonlinear interactions among the corresponding constituents. Consequently, we apply Tsallis nonextensive statistical mechanics, which proves an appropriate framework for investigating universal principles of their generation. First, we examine the data in terms of Tsallis entropy, aiming to discover common "pathological" symptoms of the transition to a significant shock. By monitoring the temporal evolution of the degree of organization in the time series, we observe similar distinctive features revealing a significant reduction of complexity during their emergence. Second, a model for earthquake dynamics derived from first principles within the nonextensive Tsallis formalism has recently been introduced. This approach leads to an energy distribution function (a Gutenberg-Richter type law) for the magnitude distribution of earthquakes, providing an excellent fit to seismicities generated in various large geographic areas usually identified as seismic regions. We show that this function is able to describe the energy distribution (with a similar nonextensive q-parameter) of solar flares, magnetic storms, and epileptic and earthquake shocks. The above evidence of a universal statistical behavior suggests the possibility of a common approach for studying space weather, earthquakes, and epileptic seizures.
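The Tsallis entropy used to monitor the "degree of organization" has a simple closed form, sketched below (the q value and the toy distributions are assumptions for illustration, not values from the study):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Nonextensive Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1),
    which recovers the Boltzmann-Gibbs-Shannon entropy as q -> 1."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return -(p * np.log(p)).sum()  # Shannon limit
    return (1.0 - (p ** q).sum()) / (q - 1.0)

# An organized (peaked) distribution has lower S_q than a disorganized
# (uniform) one: a drop in S_q over time is the kind of "reduction of
# complexity" monitored before a significant shock.
uniform = np.full(8, 1 / 8)
peaked = np.array([0.9] + [0.1 / 7] * 7)
s_uni = tsallis_entropy(uniform, q=1.8)
s_pk = tsallis_entropy(peaked, q=1.8)
```

In practice the probabilities p_i would come from a partition of the signal (e.g. symbol or amplitude histograms over sliding windows), and S_q would be tracked through time.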
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyong; Zhang, Zhijie; Chang, Yuguang; Chen, Zhengchao
2015-12-01
Accurate data on the spatial distribution and potential growth of human population play a pivotal role in addressing and mitigating the heavy losses caused by earthquakes. Traditional demographic data are limited in spatial resolution and extremely hard to update. With the availability of massive DMSP/OLS nighttime imagery, it is possible to model population distribution at the county level across China. To improve the continuity and consistency of the time-series DMSP nighttime satellite imagery obtained by different satellites in the same year, or by the same satellite in different years, over 2002-2010, a normalization method was deployed for inter-correction among the images, using the F16 2007 image of Jixi city, whose socio-economic conditions have been relatively stable, as the reference. Through a binomial model, with average R² of 0.90, the correction factor for each year was derived. The normalization clearly improved consistency compared to the previous data, which enhanced the corresponding accuracy of the model. We then built a model of population density against average nighttime light intensity in eight economic districts. From the variation of the two parameters over consecutive years, we established a prediction model for the following years, with R² of the slope and constant typically 0.85 to 0.95 in different regions. To validate the model, taking the year 2005 as an example, we retrieved the population distribution per square kilometer from the model and compared the results with census-based statistical data; the difference is acceptable. In summary, the estimation model facilitates quick estimation and prediction for relieving the damage to people, which is significant for decision-making.
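The "binomial model" inter-calibration reads as a second-order polynomial regression of each year's digital numbers against a stable reference image; here is a hedged sketch with synthetic data (the coefficients, noise level, and function names are invented, not the paper's values):

```python
import numpy as np

def intercalibrate(dn_target, dn_reference):
    """Second-order polynomial inter-calibration of nighttime-light
    digital numbers: fit DN_ref ~ a*DN^2 + b*DN + c over pixels of a
    stable region, returning the correction function and its R^2."""
    coeffs = np.polyfit(dn_target, dn_reference, 2)
    pred = np.polyval(coeffs, dn_target)
    ss_res = ((dn_reference - pred) ** 2).sum()
    ss_tot = ((dn_reference - dn_reference.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return (lambda dn: np.polyval(coeffs, dn)), r2

# Synthetic "stable city" pixels: the target sensor reads systematically
# low with a mild saturation-like curvature plus noise.
rng = np.random.default_rng(1)
dn_ref = rng.uniform(5, 63, 300)
dn_tgt = 0.8 * dn_ref - 0.002 * dn_ref ** 2 + rng.normal(0, 0.5, 300)
correct, r2 = intercalibrate(dn_tgt, dn_ref)
```

The fitted correction function would then be applied to the full image for that satellite-year before building the population-density model.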
Crossing the Digital Divide: Connecting GIS, Time Series and Space-Time Arrays (Invited)
NASA Astrophysics Data System (ADS)
Maidment, D. R.; Salas, F.; Domenico, B.; Nativi, S.
2010-12-01
Hydrologic information science requires several different kinds of information: GIS coverages of water features of the land surface and subsurface; time series of observations of streamflow, water quality, groundwater levels, and climate; and space-time arrays of weather, climate, and remotely sensed information. Increasingly, such information is being published as web services, in standardized data structures that transmit smoothly through the internet. A large "Digital Divide" exists between the world of discrete spatial objects in GIS and associated time series, and the world of continuous space-time arrays as used in weather and climate science. To cross this divide, it should be possible to search for quantities such as "precipitation" and to find the information no matter whether it comprises time series of precipitation at gage sites or space-time arrays of precipitation from Nexrad radar rainfall measurements. This means that servers of discrete space-time hydrologic data, such as the CUAHSI HydroServer, and servers of continuous space-time weather and climate data, such as the Unidata THREDDS server, should be able to be indexed in a unified manner that will permit discovery of common information types across different classes of information services. This paper will explore options for accomplishing this goal using the CUAHSI HydroServer and the Unidata THREDDS server as representative examples of information service providers. Among the options to be explored is GI-cat, a federated, standards-based catalog service developed at the Earth and Space Science Informatics Laboratory of the University of Florence.
Torque-limited Growth of Massive Black Holes in Galaxies across Cosmic Time
NASA Astrophysics Data System (ADS)
Anglés-Alcázar, Daniel; Özel, Feryal; Davé, Romeel; Katz, Neal; Kollmeier, Juna A.; Oppenheimer, Benjamin D.
2015-02-01
We combine cosmological hydrodynamic simulations with analytic models to evaluate the role of galaxy-scale gravitational torques on the evolution of massive black holes at the centers of star-forming galaxies. We confirm and extend our earlier results to show that torque-limited growth yields black holes and host galaxies evolving on average along the MBH-Mbulge relation from early times down to z = 0 and that convergence onto the scaling relation occurs independent of the initial conditions and with no need for mass averaging through mergers or additional self-regulation processes. Smooth accretion dominates the long-term evolution, with black hole mergers with mass ratios ≳1:5 typically representing a small fraction of the total growth. Winds from the accretion disk are required to eject significant mass to suppress black hole growth, but there is no need for coupling this wind to galactic-scale gas to regulate black holes in a nonlinear feedback loop. Torque-limited growth yields a close-to-linear relation ⟨ṀBH⟩ ∝ SFR between the black hole accretion rate averaged over galaxy evolution timescales and the star formation rate (SFR). However, the SFR-AGN connection has significant scatter owing to strong variability of black hole accretion at all resolved timescales. Eddington ratios can be described by a broad lognormal distribution with median value evolving roughly as λMS ∝ (1 + z)^1.9, suggesting a main sequence for black hole growth similar to the cosmic evolution of specific SFRs. Our results offer an attractive scenario consistent with available observations in which cosmological gas infall and transport of angular momentum in the galaxy by gravitational instabilities regulate the long-term co-evolution of black holes and star-forming galaxies.
Detection of cavity migration risks using radar interferometric time series
NASA Astrophysics Data System (ADS)
Chang, L.; Hanssen, R. F.
2012-12-01
, ERS-2, Envisat, and Radarsat-2, to investigate the dynamics (deformation) of the area. In particular we show, for the first time, shear-stress change distribution patterns within the structure of a building, over a period of close to 20 years. Time series analysis shows that deformation rates of ~4 mm/a could be detected for about 18 years, followed by a dramatic increase of up to 20 mm/a in the last period. These results imply that the driving mechanisms of the 2011 catastrophe have a very long lead time and are therefore likely due to a long-lasting gradual motion, such as the upward migration of a cavity. The analysis shows the collocation of the deformation location with relatively shallow near-horizontal mine shafts, suggesting that cavity migration has a high likelihood to be the driving mechanism of the collapse-sinkhole.
NASA Astrophysics Data System (ADS)
Strozzi, Fernanda; Zaldívar, José-Manuel; Zbilut, Joseph P.
2007-03-01
The application of recurrence quantification analysis (RQA) and state space divergence reconstruction for the analysis of financial time series in terms of cross-correlation and forecasting is illustrated using high-frequency time series and random heavy-tailed data sets. The results indicate that these techniques, able to deal with non-stationarity in the time series, may contribute to the understanding of the complex dynamics hidden in financial markets. The results demonstrate that financial time series are highly correlated. Finally, an on-line trading strategy is illustrated and the results shown using high-frequency foreign exchange time series.
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
FALSE DETERMINATIONS OF CHAOS IN SHORT NOISY TIME SERIES. (R828745)
A method (NEMG) proposed in 1992 for diagnosing chaos in noisy time series with 50 or fewer observations entails fitting the time series with an empirical function which predicts an observation in the series from previous observations, and then estimating the rate of divergenc...
Madeira, Sara C; Oliveira, Arlindo L
2009-01-01
Background: The ability to monitor the change in expression patterns over time, and to observe the emergence of coherent temporal responses using gene expression time series, obtained from microarray experiments, is critical to advance our understanding of complex biological processes. In this context, biclustering algorithms have been recognized as an important tool for the discovery of local expression patterns, which are crucial to unravel potential regulatory mechanisms. Although most formulations of the biclustering problem are NP-hard, when working with time series expression data the interesting biclusters can be restricted to those with contiguous columns. This restriction leads to a tractable problem and enables the design of efficient biclustering algorithms able to identify all maximal contiguous column coherent biclusters. Methods: In this work, we propose e-CCC-Biclustering, a biclustering algorithm that finds and reports all maximal contiguous column coherent biclusters with approximate expression patterns in time polynomial in the size of the time series gene expression matrix. This polynomial time complexity is achieved by manipulating a discretized version of the original matrix using efficient string processing techniques. We also propose extensions to deal with missing values, discover anticorrelated and scaled expression patterns, and different ways to compute the errors allowed in the expression patterns. We propose a scoring criterion combining the statistical significance of expression patterns with a similarity measure between overlapping biclusters. Results: We present results in real data showing the effectiveness of e-CCC-Biclustering and its relevance in the discovery of regulatory modules describing the transcriptomic expression patterns occurring in Saccharomyces cerevisiae in response to heat stress. In particular, the results show the advantage of considering approximate patterns when compared to state of the art methods that require
NASA Astrophysics Data System (ADS)
Gómez-Extremera, Manuel; Carpena, Pedro; Ivanov, Plamen Ch.; Bernaola-Galván, Pedro A.
2016-04-01
We systematically study the scaling properties of the magnitude and sign of the fluctuations in correlated time series, which is a simple and useful approach to distinguish between systems with different dynamical properties but the same linear correlations. First, we decompose artificial long-range power-law linearly correlated time series into magnitude and sign series derived from the consecutive increments in the original series, and we study their correlation properties. We find analytical expressions for the correlation exponent of the sign series as a function of the exponent of the original series. Such expressions are necessary for modeling surrogate time series with desired scaling properties. Next, we study linear and nonlinear correlation properties of series composed as products of independent magnitude and sign series. These surrogate series can be considered as a zero-order approximation to the analysis of the coupling of magnitude and sign in real data, a problem still open in many fields. We find analytical results for the scaling behavior of the composed series as a function of the correlation exponents of the magnitude and sign series used in the composition, and we determine the ranges of magnitude and sign correlation exponents leading to either single scaling or to crossover behaviors. Finally, we obtain how the linear and nonlinear properties of the composed series depend on the correlation exponents of their magnitude and sign series. Based on this information we propose a method to generate surrogate series with controlled correlation exponent and multifractal spectrum.
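The decomposition described above (splitting a series into the magnitude and sign of its consecutive increments, whose product recomposes the original increments) can be illustrated with a short sketch; this is an editorial example on a plain random walk, not the authors' code.

```python
import numpy as np

def magnitude_sign_decomposition(x):
    """Split a series into the magnitude and sign of its consecutive increments."""
    increments = np.diff(x)
    return np.abs(increments), np.sign(increments)

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=1000))        # a plain random walk as input
mag, sign = magnitude_sign_decomposition(x)
# The product of magnitude and sign series recovers the original walk
recomposed = np.cumsum(mag * sign) + x[0]
```

Surrogate series with decoupled magnitude and sign are then built by pairing a magnitude series and a sign series generated independently, each with a prescribed correlation exponent.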
Gómez-Extremera, Manuel; Carpena, Pedro; Ivanov, Plamen Ch; Bernaola-Galván, Pedro A
2016-04-01
We systematically study the scaling properties of the magnitude and sign of the fluctuations in correlated time series, which is a simple and useful approach to distinguish between systems with different dynamical properties but the same linear correlations. First, we decompose artificial long-range power-law linearly correlated time series into magnitude and sign series derived from the consecutive increments in the original series, and we study their correlation properties. We find analytical expressions for the correlation exponent of the sign series as a function of the exponent of the original series. Such expressions are necessary for modeling surrogate time series with desired scaling properties. Next, we study linear and nonlinear correlation properties of series composed as products of independent magnitude and sign series. These surrogate series can be considered as a zero-order approximation to the analysis of the coupling of magnitude and sign in real data, a problem still open in many fields. We find analytical results for the scaling behavior of the composed series as a function of the correlation exponents of the magnitude and sign series used in the composition, and we determine the ranges of magnitude and sign correlation exponents leading to either single scaling or to crossover behaviors. Finally, we obtain how the linear and nonlinear properties of the composed series depend on the correlation exponents of their magnitude and sign series. Based on this information we propose a method to generate surrogate series with controlled correlation exponent and multifractal spectrum. PMID:27176287
Sumatriptan and lost productivity time: a time series analysis of diary data.
Miller, D W; Martin, B C; Loo, C M
1996-01-01
Two previously conducted clinical studies assessed lost nonworkplace activity time and lost workplace productivity time due to migraine symptoms in subjects using sumatriptan for 6 months to treat their migraines after a 12- to 18-week period of using their usual therapy without sumatriptan. Although statistically significant differences in lost nonworkplace activity time and lost workplace productivity time between the usual therapy and sumatriptan treatment periods were detected using the Wilcoxon signed-rank test, this test could not determine whether differences were attributable to inherent trends in the data. This current study employed time series analysis, which detects and controls for preexisting trends in data, to further explore the possibility that the observed reductions in lost time in the two clinical studies were related to management of the subjects with sumatriptan. The intercepts and slopes of the computed linear models suggest that the initiation of sumatriptan therapy produced savings of 0.8 hours of nonworkplace activity time and 0.5 hours of workplace productivity time per patient per week. These savings were sustained throughout the sumatriptan treatment period. Preexisting trends in the data were not detected in the models. Thus the productivity gains are not associated with either time effects or the statistical phenomenon of regression to the mean, whereby variables that are extreme in initial measurements tend to be closer to the center of the distribution in subsequent measurements. This strengthens the hypothesis that management of migraine with sumatriptan is associated with reductions in lost productivity time. PMID:9001842
Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo
2014-07-01
Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performances in the processing of empirical data. We study a particular kind of reservoir computers called time-delay reservoirs that are constructed out of the sampling of the solution of a time-delay differential equation and show their good performance in the forecasting of the conditional covariances associated to multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type as well as in the prediction of factual daily market realized volatilities computed with intraday quotes, using as training input daily log-return series of moderate size. We tackle some problems associated to the lack of task-universality for individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs. PMID:24732236
Record statistics of financial time series and geometric random walks.
Sabir, Behlool; Santhanam, M S
2014-09-01
The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the records statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The records statistics of geometric random walk series is in good agreement with that obtained from empirical stock data. PMID:25314414
Record statistics of financial time series and geometric random walks
NASA Astrophysics Data System (ADS)
Sabir, Behlool; Santhanam, M. S.
2014-09-01
The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the records statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The records statistics of geometric random walk series is in good agreement with that obtained from empirical stock data.
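The record statistics described above (record ages as waiting times between successive running maxima of a geometric random walk) can be sketched as follows. This is an illustrative simulation with assumed drift and volatility, not the authors' analysis of stock data.

```python
import numpy as np

def record_ages(series):
    """Ages (waiting times between successive upper records) of a series."""
    record_times = [0]
    current_max = series[0]
    for t, v in enumerate(series[1:], start=1):
        if v > current_max:
            current_max = v
            record_times.append(t)
    return np.diff(record_times)

# Geometric random walk: exponentiated cumulative sum of Gaussian log-returns
rng = np.random.default_rng(2)
log_returns = rng.normal(0.0005, 0.01, 10_000)   # assumed drift and volatility
price = 100.0 * np.exp(np.cumsum(log_returns))
ages = record_ages(price)
```

Fitting a power law to the distribution of `ages`, and an extreme-value (Fréchet) form to the largest ages, would reproduce the kind of analysis the abstract reports.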
Aerosol climate time series from ESA Aerosol_cci (Invited)
NASA Astrophysics Data System (ADS)
Holzer-Popp, T.
2013-12-01
developed further, to evaluate the datasets and their regional and seasonal merits. The validation showed that most datasets have improved significantly and in particular PARASOL (ocean only) provides excellent results. The metrics for AATSR (land and ocean) datasets are similar to those of MODIS and MISR, with AATSR better in some land regions and less good in some others (ocean). However, AATSR coverage is smaller than that of MODIS due to swath width. The MERIS dataset provides better coverage than AATSR but has lower quality (especially over land) than the other datasets. The synergistic AATSR/SCIAMACHY dataset also has lower quality. The evaluation of the pixel uncertainties shows good first results but also reveals that more work needs to be done to provide comprehensive information for data assimilation. Users (MACC/ECMWF, AEROCOM) confirmed the relevance of this additional information and encouraged Aerosol_cci to release the current uncertainties. The paper will summarize and discuss the results of three years of work in Aerosol_cci, extract the lessons learned and conclude with an outlook to the work proposed for the next three years. In this second phase a cyclic effort of algorithm evolution, dataset generation, validation and assessment will be applied to produce and further improve complete time series from all sensors under investigation, new sensors will be added (e.g. IASI), and preparation for the Sentinel missions will be made.
Ramesh, Dwarakaprasad; Setty, Huliyurdurga Srinivasasetty Natraj; Kumarswamy; Kumar, Sunil; Jayanth; Manjunath, Cholenahalli Nanjappa
2016-01-01
Acute massive pulmonary embolism is a life-threatening emergency that must be promptly diagnosed and managed. Over the last several years, the use of computed tomography scanning has improved the clinician's ability to diagnose acute pulmonary embolism. We report two cases of acute massive pulmonary embolism presenting with sudden onset of dyspnea that underwent successful open pulmonary embolectomy. The first case presented with acute onset of dyspnea of 2 days' duration; in view of hemodynamic deterioration, and with two-dimensional echocardiography revealing clot in the right ventricular (RV) apex and right pulmonary artery, the patient underwent cardiopulmonary bypass and open pulmonary embolectomy with RV clot extraction. The second case presented with a sudden onset of dyspnea on the 15th postoperative day after surgery for traumatic rupture of the urinary bladder; in view of the recent surgery, the patient was subjected to surgical embolectomy. Following surgical intervention, both patients made a prompt recovery. PMID:27433070
Studies in astronomical time series analysis: Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1979-01-01
Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
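The AR-to-MA transformation mentioned in the abstract (fit an autoregressive model, then expand it into its equivalent moving-average, i.e. pulse, representation via the impulse response) can be sketched numerically. This is an editorial Python sketch of the standard technique, not Scargle's FORTRAN algorithm.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

def ar_to_ma(a, n_terms=10):
    """Impulse response of the AR filter = coefficients of the equivalent MA model."""
    psi = np.zeros(n_terms)
    psi[0] = 1.0
    for j in range(1, n_terms):
        for k in range(1, min(j, len(a)) + 1):
            psi[j] += a[k - 1] * psi[j - k]
    return psi

# Simulate an AR(1) pulse process and recover its parameters
rng = np.random.default_rng(7)
e = rng.normal(size=5000)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.6 * x[t - 1] + e[t]
a_hat = fit_ar(x, p=1)
psi = ar_to_ma(a_hat)
```

For an AR(1) with coefficient a, the MA weights are simply a^j, so `psi` should decay geometrically from 1 toward 0.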
Study in the natural time domain of the entropy of dichotomic geoelectrical and chaotic time series
NASA Astrophysics Data System (ADS)
Ramírez-Rojas, A.; Telesca, L.; Angulo-Brown, F.
2010-12-01
The so-called seismo-electric signals (SES) have been considered as precursors of great earthquakes. To characterize possible SES activities, the Natural Time Domain (NTD) (Varotsos et al., 2001) was proposed as adequate methodology. In this work we analyze two geoelectric time series measured in a very seismically active area of the South Pacific Mexican coast, and a chaotic time series obtained from the Liebovitch and Thot (LT) chaotic map. The two analyzed geoelectric signals display possible SES activities associated with the earthquakes that occurred on October 24, 1993 (M6.6, epicenter at (16.54N, 98.98W)) and on September 14, 1995 (M7.4, epicenter at (16.31N, 98.88W)). Our monitoring station was located at (16.50N, 99.47W) close to Acapulco city and the experimental set-up was based on the VAN methodology. We found that the correlation degree of the SES geoelectric signals increases before the occurrence of the seismic events, with power spectrum and entropy calculated in NTD in good agreement with analogous studies in the field of earthquake-related phenomena. Such SES activity, analysed in NTD, can be discriminated from the LT chaotic map and from artificial noises. Varotsos P.A., Sarlis N.V., Skordas E.S., Practica of Athens Academy 76, (2001) 294 Liebovitch S.L. and Thot T.I., J. Theor. Biol., 148 (1991), 243-267
A Hybrid Algorithm for Clustering of Time Series Data Based on Affinity Search Technique
Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A.; Shaygan, Mohammad Amin; Jalali, Alireza
2014-01-01
Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped as subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with a low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using syntactic and real-world time series datasets. PMID:24982966
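The second stage of the hybrid algorithm (merging subclusters with k-Medoids under a shape-based similarity) can be illustrated with a minimal sketch. Only the k-Medoids-by-shape step is shown, the correlation distance and farthest-pair initialization are editorial simplifications, and the data are synthetic.

```python
import numpy as np

def k_medoids_2(dist, n_iter=50):
    """Minimal k-Medoids (k=2) on a precomputed distance matrix,
    initialized with the two mutually farthest points."""
    medoids = np.array(np.unravel_index(np.argmax(dist), dist.shape))
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in (0, 1):
            members = np.where(labels == c)[0]
            # New medoid: member minimizing total within-cluster distance
            new_medoids[c] = members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return np.argmin(dist[:, medoids], axis=1)

# Two groups of series with distinct shapes (sine-like vs. cosine-like)
rng = np.random.default_rng(8)
t = np.linspace(0, 2 * np.pi, 100)
group_a = [np.sin(t) * (1 + 0.1 * i) + 0.05 * rng.normal(size=100) for i in range(5)]
group_b = [np.cos(t) * (1 + 0.1 * i) + 0.05 * rng.normal(size=100) for i in range(5)]
X = np.array(group_a + group_b)
shape_dist = 1.0 - np.corrcoef(X)   # correlation distance captures shape, not level
labels = k_medoids_2(shape_dist)
```

The correlation distance is invariant to amplitude scaling, which is why the two amplitude-varied families separate by shape alone.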
Fractional derivatives of random walks: Time series with long-time memory
NASA Astrophysics Data System (ADS)
Roman, H. Eduardo; Porto, Markus
2008-09-01
We review statistical properties of models generated by the application of a (positive and negative order) fractional derivative operator to a standard random walk and show that the resulting stochastic walks display slowly decaying autocorrelation functions. The relation between these correlated walks and the well-known fractionally integrated autoregressive models with conditional heteroskedasticity (FIGARCH), commonly used in econometric studies, is discussed. The application of correlated random walks to simulate empirical financial times series is considered and compared with the predictions from FIGARCH and the simpler FIARCH processes. A comparison with empirical data is performed.
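A minimal numerical sketch of the idea above, fractionally integrating white noise with the binomial (Grünwald-Letnikov) weights of (1 - B)^(-d) to produce slowly decaying autocorrelations, is given below; it is an editorial illustration with an assumed memory parameter d = 0.3, not the authors' model.

```python
import numpy as np

def frac_weights(d, n):
    """Binomial expansion weights of the fractional operator (1 - B)^d."""
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fractionally_integrate(noise, d):
    """Apply (1 - B)^(-d) to white noise, yielding a long-memory series."""
    w = frac_weights(-d, len(noise))
    return np.convolve(noise, w)[: len(noise)]

rng = np.random.default_rng(9)
x = fractionally_integrate(rng.normal(size=4000), d=0.3)
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # slowly decaying positive autocorrelation
```

For 0 < d < 0.5 the resulting series is stationary with autocorrelations decaying as a power law, the defining feature of the FIARCH/FIGARCH family of models discussed in the abstract.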
Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series
Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael
2014-01-01
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and
Efficient transfer entropy analysis of non-stationary neural time series.
Wollstadt, Patricia; Martínez-Zarzuela, Mario; Vicente, Raul; Díaz-Pernas, Francisco J; Wibral, Michael
2014-01-01
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and
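The core of the ensemble method, estimating probabilities at a fixed time point by pooling over trials rather than over time, can be illustrated with a heavily simplified, binned plug-in transfer entropy estimate. This editorial sketch (median-split binarization, one-step history) is far cruder than the nearest-neighbor, GPU-parallel estimator the abstract describes.

```python
import numpy as np
from collections import Counter

def transfer_entropy_ensemble(x_trials, y_trials, t):
    """Binned plug-in estimate of transfer entropy TE(Y -> X) at a single time t,
    pooling observations across trials (the ensemble) instead of across time,
    so stationarity of the processes is not required."""
    med = lambda a: (a > np.median(a)).astype(int)
    xn, x, y = med(x_trials[:, t + 1]), med(x_trials[:, t]), med(y_trials[:, t])
    n = len(xn)
    c_xnxy, c_xy = Counter(zip(xn, x, y)), Counter(zip(x, y))
    c_xnx, c_x = Counter(zip(xn, x)), Counter(x)
    te = 0.0
    for (a, b, c), cnt in c_xnxy.items():
        # p(xn|x,y) / p(xn|x), expressed with raw counts
        te += (cnt / n) * np.log2(cnt * c_x[b] / (c_xy[(b, c)] * c_xnx[(a, b)]))
    return te

# Ensemble of 500 trials in which y drives x with a one-step lag
rng = np.random.default_rng(10)
y = rng.normal(size=(500, 20))
x = np.roll(y, 1, axis=1) + 0.3 * rng.normal(size=(500, 20))
te = transfer_entropy_ensemble(x, y, t=5)
```

Because the driving is lagged, y at time t predicts x at t+1 beyond what x at t already tells us, so the estimate comes out clearly positive.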
Correlated errors in geodetic time series: Implications for time-dependent deformation
Langbein, J.; Johnson, H.
1997-01-01
Analysis of frequent trilateration observations from the two-color electronic distance measuring networks in California demonstrate that the noise power spectra are dominated by white noise at higher frequencies and power law behavior at lower frequencies. In contrast, Earth scientists typically have assumed that only white noise is present in a geodetic time series, since a combination of infrequent measurements and low precision usually preclude identifying the time-correlated signature in such data. After removing a linear trend from the two-color data, it becomes evident that there are primarily two recognizable types of time-correlated noise present in the residuals. The first type is a seasonal variation in displacement which is probably a result of measuring to shallow surface monuments installed in clayey soil which responds to seasonally occurring rainfall; this noise is significant only for a small fraction of the sites analyzed. The second type of correlated noise becomes evident only after spectral analysis of line length changes and shows a functional relation at long periods between power and frequency of the form P ∝ f^(-α), where f is frequency and α ≈ 2. With α = 2, this type of correlated noise is termed random-walk noise, and its source is mainly thought to be small random motions of geodetic monuments with respect to the Earth's crust, though other sources are possible. Because the line length changes in the two-color networks are measured at irregular intervals, power spectral techniques cannot reliably estimate the level of 1/f^α noise. Rather, we also use here a maximum likelihood estimation technique which assumes that there are only two sources of noise in the residual time series (white noise and random-walk noise) and estimates the amount of each. From this analysis we find that the random-walk noise level averages about 1.3 mm/√yr and that our estimates of the white noise component confirm theoretical limitations of the measurement technique. In
Scale and time dependence of serial correlations in word-length time series of written texts
NASA Astrophysics Data System (ADS)
Rodriguez, E.; Aguilar-Cornejo, M.; Femat, R.; Alvarez-Ramirez, J.
2014-11-01
This work considered the quantitative analysis of large written texts. To this end, the text was converted into a time series by taking the sequence of word lengths. The detrended fluctuation analysis (DFA) was used for characterizing long-range serial correlations of the time series. To this end, the DFA was implemented within a rolling window framework for estimating variations of the correlation strength, quantified in terms of the scaling exponent, along the text. Also, a filtering derivative was used to compute the dependence of the scaling exponent on the scale. The analysis was applied to three famous literary narrations written in English; namely, Alice in Wonderland (by Lewis Carroll), Dracula (by Bram Stoker) and Sense and Sensibility (by Jane Austen). The results showed that high correlations appear for scales of about 50-200 words, suggesting that at these scales the text contains the strongest coherence. The scaling exponent was not constant along the text, showing important variations with apparent cyclical behavior. An interesting coincidence between the scaling exponent variations and changes in narrative units (e.g., chapters) was found. This suggests that the scaling exponent obtained from the DFA is able to detect changes in narration structure as expressed by the usage of words of different lengths.
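The DFA procedure underlying the abstract (integrate the series, detrend it linearly in windows of scale s, and read the scaling exponent off the slope of log F(s) versus log s) can be sketched compactly. This is a generic editorial implementation on synthetic "word lengths", not the authors' rolling-window code.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: slope of log F(s) versus log s."""
    profile = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segments = profile[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # RMS of residuals after linear detrending within each segment
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segments]
        flucts.append(np.sqrt(np.mean(np.square(resid))))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# Uncorrelated 'word lengths' should give a scaling exponent near 0.5
rng = np.random.default_rng(11)
word_lengths = rng.integers(1, 12, size=8000).astype(float)
alpha = dfa_exponent(word_lengths, scales=[8, 16, 32, 64, 128])
```

Running `dfa_exponent` inside a sliding window over a real text's word-length sequence would reproduce the exponent-versus-position curves the abstract describes.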
Time series analysis and the analysis of aquatic and riparian ecosystems
Milhous, R.T.
2003-01-01
Time series analysis of physical instream habitat and the riparian zone is not done as frequently as would be beneficial for understanding the fisheries aspects of the aquatic ecosystem. This paper presents two case studies showing how time series analysis may be accomplished. Time series analysis is the analysis of the variation of the physical habitat or of the hydro-period in the riparian zone (in many situations, the floodplain).
NASA Technical Reports Server (NTRS)
Hailperin, Max
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
Modeling PSInSAR time series without phase unwrapping
Zhang, L.; Ding, X.; Lu, Zhiming
2011-01-01
In this paper, we propose a least-squares-based method for multitemporal synthetic aperture radar interferometry that allows one to estimate deformations without the need for phase unwrapping. The method utilizes a series of multimaster wrapped differential interferograms with short baselines and focuses on arcs at which there are no phase ambiguities. An outlier detector is used to identify and remove the arcs with phase ambiguities, and a pseudoinverse of the variance-covariance matrix is used as the weight matrix of the correlated observations. The deformation rates at coherent points are estimated with a least squares model constrained by reference points. The proposed approach is verified with a set of simulated data. © 2006 IEEE.
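The core estimation step, a weighted least-squares fit of a deformation rate with the pseudoinverse of the variance-covariance matrix as the weight matrix, can be sketched as below. This is a simplified single-parameter illustration, not the paper's full multimaster network adjustment.

```python
import numpy as np

def wls_rate(dt, dphi, cov):
    """Weighted LS estimate of a linear deformation rate v.

    dt:   time baselines of the interferograms (design: dphi = v * dt)
    dphi: ambiguity-free phase differences on one arc
    cov:  variance-covariance matrix of the correlated observations;
          its pseudoinverse serves as the weight matrix.
    """
    A = np.asarray(dt, float).reshape(-1, 1)
    W = np.linalg.pinv(np.asarray(cov, float))   # weight = pseudoinverse of VCM
    v = np.linalg.solve(A.T @ W @ A, A.T @ W @ np.asarray(dphi, float))
    return float(v[0])
```

Using the pseudoinverse (rather than a plain inverse) keeps the weighting well defined even when the variance-covariance matrix of the multimaster interferograms is rank deficient.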
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy
1993-01-01
Climate changes traditionally have been detected from long series of observations and long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1, 2, ..., j new observations. Individual-value probability products ('likelihoods') are then calculated which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.
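For the Gaussian-mean case, the likelihood comparison behind the 'no change' probability can be sketched as a log-likelihood ratio: how much worse the new observations are explained by the old mean than by their own mean. This is a minimal sketch assuming a known sigma, not the report's full algebra or its Fortran implementation.

```python
def no_change_loglik_ratio(old, new, sigma=1.0):
    """Log-likelihood ratio for 'new data share the old Gaussian mean'
    versus 'new data have their own mean' (sigma assumed known).

    The value is always <= 0; progressively more negative values as
    observations accumulate signal a change in the mean.
    """
    mu_old = sum(old) / len(old)
    mu_new = sum(new) / len(new)
    return -sum((x - mu_old) ** 2 - (x - mu_new) ** 2 for x in new) / (2 * sigma ** 2)
```

In the inverse sequential scheme this quantity is recomputed as each of the 1, 2, ..., j new observations arrives, and a sustained decrease triggers re-estimation from the new data alone.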
Building a Type Ia Supernova Model with SNfactory Spectrophotometric Time Series
NASA Astrophysics Data System (ADS)
Saunders, Clare; Nearby Supernova Factory
2015-01-01
We present a spectral time series model built using Nearby Supernova Factory (SNfactory) data. The spectrophotometric time series of over one hundred Type Ia supernovae in the data set offer much more information than photometric light curves for use in improving the standardization of Type Ia supernova magnitudes: spectrophotometric observations are interpolated onto a spectral time series surface using Gaussian processes, then Principal Component Analysis (PCA) is used to calculate spectral time series templates. The model is verified using K-fold cross-validation. We discuss the potential for using the PCA coefficients to lower the dispersion in standardized magnitudes on the Hubble diagram.
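The two-stage pipeline described above, Gaussian-process interpolation onto a common grid followed by PCA to extract templates, can be sketched in miniature. This is a toy numpy-only stand-in under an assumed RBF kernel and noise level, not the SNfactory model; the function names are illustrative.

```python
import numpy as np

def gp_interpolate(t_obs, y_obs, t_grid, length=1.0, noise=1e-3):
    """GP regression (RBF kernel): map irregular epochs onto a common grid."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(t_obs, t_obs) + noise * np.eye(len(t_obs))   # kernel + noise jitter
    return k(t_grid, t_obs) @ np.linalg.solve(K, y_obs)

def pca_templates(gridded, n_comp=2):
    """PCA via SVD of the gridded series: mean template + eigen-templates."""
    X = np.asarray(gridded, float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_comp]
```

Each supernova's gridded series becomes one row of the PCA input; the leading components then play the role of spectral time series templates, with per-object coefficients available for standardization studies.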
On the Interpretation of Running Trends as Summary Statistics for Time Series Analysis
NASA Astrophysics Data System (ADS)
Vigo, Isabel M.; Trottini, Mario; Belda, Santiago
2016-04-01
In recent years, running trends analysis (RTA) has been widely used in climate applied research as summary statistics for time series analysis. There is no doubt that RTA might be a useful descriptive tool, but despite its general use in applied research, precisely what it reveals about the underlying time series is unclear and, as a result, its interpretation is unclear too. This work contributes to such interpretation in two ways: 1) an explicit formula is obtained for the set of time series with a given series of running trends, making it possible to show that running trends, alone, perform very poorly as summary statistics for time series analysis; and 2) an equivalence is established between RTA and the estimation of a (possibly nonlinear) trend component of the underlying time series using a weighted moving average filter. Such equivalence provides a solid ground for RTA implementation and interpretation/validation.
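The equivalence in point 2) is easy to see in code: the OLS slope over a sliding window is a fixed dot product, i.e. a weighted moving-average filter applied to the series. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def running_trends(y, L):
    """OLS slope in each length-L sliding window (the RTA summary).

    The slope weights w depend only on L, so RTA is exactly a
    convolution of y with a fixed weight vector: a weighted
    moving-average filter estimating a (possibly nonlinear) trend.
    """
    t = np.arange(L)
    w = (t - t.mean()) / ((t - t.mean()) ** 2).sum()   # OLS slope weights
    y = np.asarray(y, float)
    return np.array([w @ y[i:i + L] for i in range(len(y) - L + 1)])
```

Because the same weights are reused at every position, knowing the running trends alone pins down only this filtered view of the series, which is why they perform poorly as standalone summary statistics.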
Ecological stages of the Venice Lagoon analysed using landing time series data
NASA Astrophysics Data System (ADS)
Libralato, Simone; Pranovi, Fabio; Raicevich, Saša; Da Ponte, Filippo; Giovanardi, Otello; Pastres, Roberto; Torricelli, Patrizia; Mainardi, Danilo
2004-11-01
The time series of landings in the Venice Lagoon from 1945 to 2001 were analysed with the aim of explaining the ecosystem changes that occurred. The comparative analysis of the total landings and mean Trophic Level (mTL) time series allowed us to identify four different stages in the lagoon ecosystem. The first period, from 1945 to 1973, was characterised by increasing trends in the landings and their mTL. The second one, from 1974 to 1989, showed a decrease in the landings but still an increase in the mTL. The third period, from 1990 to 1998, had again a positive trend in the landings, but the mTL showed a sharp decline. After 1998, a slight decreasing trend in both mTL and landings was observed; analyses of the artisanal fishery landings alone date this effect back to 1995. The presence of four distinct periods was also confirmed by the analysis of the trends of other indices estimated using landings data: the Fishing in Balance index (FiB), the Trophic Efficiency (TE) and the Pelagic on Demersal landings ratio (P/D). In the first period, the increasing fishing pressure, along with no evidence of ecosystem crisis, suggested that an increased nutrient discharge was supporting it; analogously, the bottom-up effects had driven the dynamics of the ecosystem also in the second period, when the decrease in nutrient loads caused a shift of the primary producers from planktonic to macrobenthic. The spreading of the Manila clam, a non-native species, and the development of its massive mechanical exploitation have been the main forces driving the ecosystem during the third period, for which, however, no signs of crises were detected. The fourth period showed evidence of the "fishing down the food web" effect. Possible causes of such an effect were investigated and allowed us to conclude that not overfishing, but the effects of mechanical harvesting of the Manila clam had caused relevant impacts on habitat and benthic communities, concluding that the present level of
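The landings-based indices used in this kind of analysis follow standard definitions and are simple to compute. A minimal sketch, assuming conventional formulas (landings-weighted mTL; FiB relative to a baseline year with transfer efficiency TE, commonly taken as 0.1); the species names and values are illustrative.

```python
import math

def mean_trophic_level(landings, trophic_level):
    """Landings-weighted mean trophic level (mTL) for one year.

    landings:       dict species -> landed biomass
    trophic_level:  dict species -> trophic level
    """
    total = sum(landings.values())
    return sum(landings[s] * trophic_level[s] for s in landings) / total

def fib_index(y_k, mtl_k, y_0, mtl_0, te=0.1):
    """Fishing-in-Balance index of year k relative to a baseline year 0.

    Zero means the fishery expanded/contracted in balance with the
    trophic structure; te is the assumed transfer efficiency.
    """
    return math.log10((y_k * (1.0 / te) ** mtl_k) / (y_0 * (1.0 / te) ** mtl_0))
```

A declining mTL with a roughly constant FiB is the classic signature of "fishing down the food web" that the fourth period exhibits.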
NASA Astrophysics Data System (ADS)
Muñoz-Diosdado, A.
2005-01-01
We analyzed databases with gait time series of healthy adults and persons with Parkinson's disease, Huntington's disease, and amyotrophic lateral sclerosis (ALS). We obtained the staircase graphs of accumulated events, which can be bounded by a straight line whose slope can be used to distinguish between gait time series from healthy and ill persons. The global Hurst exponents of these series do not show clear tendencies; we contend that this is because some gait time series have monofractal behavior and others have multifractal behavior, so they cannot be characterized with a single Hurst exponent. We calculated the multifractal spectra, obtained the spectra widths, and found that the spectra of the healthy young persons are almost monofractal. The spectra of ill persons are wider than the spectra of healthy persons. In contrast to interbeat time series, where pathology implies loss of multifractality, in gait time series the multifractal behavior emerges with pathology. Data were collected from healthy and ill subjects as they walked in a roughly circular path with sensors on both feet, so we have one time series for the left foot and another for the right foot. First, we analyzed these time series separately, and then we compared both results, with direct comparison and with a cross-correlation analysis. We tried to find differences between the two time series that can be used as indicators of equilibrium problems.
Anomalous Fano resonance of massive Dirac particle through a time-dependent barrier
NASA Astrophysics Data System (ADS)
Zhang, Cunxi; Liu, Jie; Fu, Libin
2015-06-01
As is well known, Fano resonance arises from the interference between a localized state and a continuum state. Using the standard Floquet theory and the scattering-matrix method, we theoretically study the transmission of a massive Dirac particle over a quantum barrier with an oscillating field. It is found that massive relativistic particles can generate not only a normal Fano resonance in the transmission, due to the interference between a localized state (bound state) and the continuum state, but also an anomalous Fano resonance, due to the interference between a delocalized state (extended state) and the continuum state. The dependence of the line shapes on the driving parameters is quite different for these two kinds of Fano resonances. For the normal Fano resonance, the asymmetry parameter is approximately proportional to a power law of the amplitude of the oscillating field, while for the anomalous Fano resonance the asymmetry parameters change only slightly with different oscillation amplitudes. In practice, the anomalous Fano resonance can be identified by observing the asymmetry parameters in experiment.
Time-series Spectroscopy of the Pulsating Eclipsing Binary XX Cephei
NASA Astrophysics Data System (ADS)
Koo, Jae-Rim; Lee, Jae Woo; Hong, Kyeongsoo; Kim, Seung-Lee; Lee, Chung-Uk
2016-03-01
Oscillating Algol-type eclipsing binaries (oEA) are very interesting objects that show three observational features: eclipses, pulsation, and mass transfer. Direct measurement of their masses and radii from double-lined radial velocity data and photometric light curves is essential for understanding their evolutionary process and for performing asteroseismological studies. We present the physical properties of the oEA star XX Cep from high-resolution time-series spectroscopic data. The effective temperature of the primary star was determined to be 7946 ± 240 K by comparing the observed spectra with the Kurucz models. We detected the absorption lines of the secondary star, which had never been detected in previous studies, and obtained the radial velocities for both components. With the published BVRI light curves, we determined the absolute parameters for the binary via Wilson-Devinney modeling. The masses and radii are M₁ = 2.49 ± 0.06 M⊙, M₂ = 0.38 ± 0.01 M⊙, R₁ = 2.27 ± 0.02 R⊙, and R₂ = 2.43 ± 0.02 R⊙, respectively. The primary star is about 45% more massive and 60% larger than zero-age main sequence stars with the same effective temperature. This is probably because XX Cep has experienced a very different evolutionary process due to mass transfer, in contrast with normal main sequence stars. The primary star is located inside the theoretical instability strip of δ Sct-type stars on the HR diagram. We demonstrated that XX Cep is an oEA star, consisting of a δ Sct-type pulsating primary component and an evolved secondary companion.
Introducing the US Ocean Carbon Biogeochemistry Subcommittee on Ocean Time-Series
NASA Astrophysics Data System (ADS)
Neuer, Susanne; Benway, Heather
2015-04-01
The objective of this presentation is to showcase activities of the Ocean Time-series Committee (OTC), a subcommittee of the scientific steering committee of the US Ocean Carbon & Biogeochemistry (OCB) Program (www.us-ocb.org). OCB is a scientific coordinating body that facilitates collaborative, interdisciplinary research opportunities and initiatives within the U.S. and with international partners. The OTC's focus is to highlight the importance of shipboard time-series as unique observing assets to the oceanographic community, and to encourage synergistic and collaborative technology and methods development, including development and validation of sensors and autonomous devices, and their possible integration into existing time-series observations. A major emphasis of the OTC has been to improve communication and collaboration among U.S. and international scientists engaged in ocean time-series science. For example, in 2012, OCB/OTC and the International Ocean Carbon Coordination Project (IOCCP) co-organized an international time-series workshop in Bermuda focused on biogeochemical time-series methods and data intercomparison. A key outcome of this workshop was a best practices guide for shipboard sampling and analytical protocols used at biogeochemical time-series sites and the development of a global time-series network to improve international coordination and communication among the operators of the >150 marine biogeochemical time-series. We hope that this presentation will stimulate a discussion of common goals and visions for the future of time-series observations and ways to enhance collaboration among the international time-series community.