Science.gov

Sample records for accurate time series

  1. Improvements in Accurate GPS Positioning Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Koyama, Yuichiro; Tanaka, Toshiyuki

    Although the Global Positioning System (GPS) is used widely in car navigation systems, cell phones, surveying, and other areas, several issues still exist. We focus on the continuous data received in public use of GPS, and propose a new positioning algorithm that uses time series analysis. By fitting an autoregressive model to the time series model of the pseudorange, we propose an appropriate state-space model. We apply the Kalman filter to the state-space model and use the pseudorange estimated by the filter in our positioning calculations. The results of the authors' positioning experiment show that the accuracy of the proposed method is much better than that of the standard method. In addition, as we can obtain valid values estimated by time series analysis using the state-space model, the proposed state-space model can be applied to several other fields.
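    The Kalman-filter step in the abstract above is easy to illustrate. The sketch below is not the authors' algorithm: it replaces their fitted autoregressive model with the simplest special case, a random-walk state-space model, and runs a scalar Kalman filter over a synthetic pseudorange-like series; all signal parameters are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "pseudorange-like" series: a slowly varying true range observed
# in heavy measurement noise (all magnitudes are illustrative).
n = 500
truth = np.cumsum(rng.normal(0.0, 0.1, n))   # slowly varying true range
obs = truth + rng.normal(0.0, 1.0, n)        # noisy pseudorange observations

# Scalar Kalman filter for the random-walk state-space model:
#   x_t = x_{t-1} + w_t,  w ~ N(0, q);   y_t = x_t + v_t,  v ~ N(0, r)
q, r = 0.01, 1.0
x_est, p = obs[0], 1.0
filtered = np.empty(n)
for t in range(n):
    p = p + q                                # predict
    k = p / (p + r)                          # Kalman gain
    x_est = x_est + k * (obs[t] - x_est)     # update with the new observation
    p = (1.0 - k) * p
    filtered[t] = x_est

rmse_raw = np.sqrt(np.mean((obs - truth) ** 2))
rmse_filt = np.sqrt(np.mean((filtered - truth) ** 2))
print(rmse_filt < rmse_raw)
```

Because the filter tracks the slowly varying range, the filtered error should come out well below the raw observation error, which is the effect the abstract reports for positioning with filtered pseudoranges.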

  2. Accurate mapping of forest types using dense seasonal Landsat time-series

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaolin; Liu, Desheng

    2014-10-01

    An accurate map of forest types is important for proper usage and management of forestry resources. Medium-resolution satellite images (e.g., Landsat) have been widely used for forest type mapping because they can cover large areas more efficiently than traditional forest inventory. However, the results of detailed forest type classification based on these images are still not satisfactory. To improve forest mapping accuracy, this study proposed an operational method to derive detailed forest types from dense Landsat time-series, with and without topographic information provided by a DEM. The method integrates feature selection and a training-sample-adding procedure into a hierarchical classification framework. The proposed method was tested in Vinton County in southeastern Ohio, where the detailed forest types include pine forest, oak forest, and mixed-mesophytic forest. The method was trained and validated using ground samples from field plots. The three forest types were classified with an overall accuracy of 90.52% using the dense Landsat time-series alone, while adding topographic information only slightly improved the accuracy, to 92.63%. Moreover, a comparison between the results from the Landsat time-series and from a single image reveals that time-series data can greatly improve the accuracy of forest type mapping, indicating the importance of the phenological information contained in multi-seasonal images for discriminating different forest types. Because all input remote sensing datasets are free and the method is easy to implement, this approach has the potential to be applied to map forest types at regional or global scales.
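    The paper's classifier is not reproduced here, but the reason seasonal time-series beat single-date images is simple to demonstrate. The toy sketch below (invented seasonal "NDVI" profiles, a plain nearest-centroid classifier, resubstitution accuracy for illustration only) shows classes that overlap at one date separating cleanly when the full seasonal profile is used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical seasonal "NDVI" profiles (4 seasons) for three forest types:
# pine (evergreen) stays green; oak drops outside the growing season.
profiles = {
    "pine":  np.array([0.7, 0.7, 0.7, 0.7]),
    "oak":   np.array([0.2, 0.8, 0.8, 0.3]),
    "mixed": np.array([0.4, 0.8, 0.7, 0.5]),
}

# Simulate 50 noisy pixels per class.
X, y = [], []
for label, prof in profiles.items():
    X.append(prof + rng.normal(0.0, 0.05, (50, 4)))
    y += [label] * 50
X = np.vstack(X)

# Nearest-centroid classification on the full seasonal profile.
centroids = {c: X[[i for i, l in enumerate(y) if l == c]].mean(axis=0)
             for c in profiles}
pred = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c])) for x in X]
acc_seasonal = np.mean([p == t for p, t in zip(pred, y)])

# Same classifier using only a single summer observation (column 1),
# where oak and mixed forest are nearly indistinguishable.
centroids1 = {c: X[[i for i, l in enumerate(y) if l == c], 1].mean() for c in profiles}
pred1 = [min(centroids1, key=lambda c: abs(x[1] - centroids1[c])) for x in X]
acc_single = np.mean([p == t for p, t in zip(pred1, y)])

print(acc_seasonal, acc_single)
```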

  3. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a timekeeping device more stable than conventional atomic clocks. The areas of application of ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on earth using the GPS.

  4. Time Series Explorer

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas

    The central objectives of the proposed Time Series Explorer project are to develop an organized collection of software tools for analysis of time series data in current and future NASA astrophysics data archives, and to make the tools available in two ways: as a library (the Time Series Toolbox) that individual science users can use to write their own data analysis pipelines, and as an application (the Time Series Automaton) providing an accessible, data-ready interface to many Toolbox algorithms, facilitating rapid exploration and automatic processing of time series databases. A number of time series analysis methods will be implemented, including techniques that range from standard ones to state-of-the-art developments by the proposers and others. Most of the algorithms will be able to handle time series data subject to real-world problems such as data gaps, sampling that is otherwise irregular, asynchronous sampling (in multi-wavelength settings), and data with non-Gaussian measurement errors. The proposed research responds to the ADAP element supporting the development of tools for mining the vast reservoir of information residing in NASA databases. The tools that will be provided to the community of astronomers studying variability of astronomical objects (from nearby stars and extrasolar planets, through galactic and extragalactic sources) will revolutionize the quality of timing analyses that can be carried out, and greatly enhance the scientific throughput of all NASA astrophysics missions past, present, and future. The Automaton will let scientists explore time series - individual records or large databases - with the most informative and useful analysis methods available, without having to develop the tools themselves or understand the computational details. Both elements, the Toolbox and the Automaton, will enable deep but efficient exploratory time series data analysis, which is why we have named the project the Time Series Explorer.

  5. Time Series Database

    2007-11-02

    TSDB is a Python module for storing large volumes of time series data. TSDB stores data in binary files indexed by a timestamp. Aggregation functions (such as rate, sum, avg, etc.) can be performed on the data, but data is never discarded. TSDB is presently best suited for SNMP data but new data types are easily added.
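    TSDB's actual on-disk format and API are not documented in the record above, so the following is a hypothetical sketch of the general idea only: fixed-size (timestamp, value) records packed into a binary file, plus a "rate" aggregation of the kind typically applied to SNMP counter data. The file name and record layout are invented for illustration.

```python
import os
import struct
import tempfile

# One record = (8-byte float timestamp, 8-byte float value), little-endian.
REC = struct.Struct("<dd")

def append(path, ts, value):
    """Append one timestamped sample to the binary store."""
    with open(path, "ab") as f:
        f.write(REC.pack(ts, value))

def read_all(path):
    """Read back every (timestamp, value) record in file order."""
    with open(path, "rb") as f:
        data = f.read()
    return [REC.unpack_from(data, i) for i in range(0, len(data), REC.size)]

def rate(records):
    """Per-second rate between consecutive counter samples (SNMP-style)."""
    return [(t2, (v2 - v1) / (t2 - t1))
            for (t1, v1), (t2, v2) in zip(records, records[1:])]

path = os.path.join(tempfile.mkdtemp(), "ifInOctets.tsdb")
for ts, counter in [(0.0, 100.0), (30.0, 400.0), (60.0, 1000.0)]:
    append(path, ts, counter)

print(rate(read_all(path)))  # [(30.0, 10.0), (60.0, 20.0)]
```

Note that, as the record says of TSDB itself, the raw samples are never discarded; aggregates like `rate` are computed on read.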

  6. GPS Position Time Series @ JPL

    NASA Technical Reports Server (NTRS)

    Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen

    2013-01-01

    Different flavors of GPS time series analysis at JPL all use the same GPS Precise Point Positioning Analysis raw time series; the variations in time series analysis and post-processing are driven by different users. (1) JPL Global Time Series/Velocities: used by researchers studying the reference frame and combining GPS with VLBI/SLR/DORIS. (2) JPL/SOPAC Combined Time Series/Velocities: crustal deformation for tectonic, volcanic, and ground water studies. (3) ARIA Time Series/Coseismic Data Products: focused on hazard monitoring and response. The ARIA data system is designed to integrate GPS and InSAR: GPS tropospheric delay is used for correcting InSAR, and Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR. Zhen Liu will talk tomorrow on InSAR time series analysis.

  7. Permutations and time series analysis.

    PubMed

    Cánovas, Jose S; Guillamón, Antonio

    2009-12-01

    The main aim of this paper is to show how permutations can be useful in the study of time series analysis. In particular, we introduce a test for checking the independence of a time series, based on the number of admissible permutations in it. The main improvement of our test is that we are able to give a theoretical distribution for independent time series. PMID:20059199
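    The object the test is built on, the ordinal (permutation) pattern, is easy to compute. The sketch below is not the paper's test statistic; it only shows the underlying idea: an independent series admits all patterns of a given order with near-equal frequency, while structured series admit far fewer.

```python
from collections import Counter

import numpy as np

def ordinal_patterns(x, order=3):
    """Rank pattern (permutation) of each window of `order` consecutive values."""
    return [tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)]

rng = np.random.default_rng(0)

# An independent series visits all 3! = 6 patterns with near-equal frequency.
counts = Counter(ordinal_patterns(rng.random(10000)))
freqs = np.array(list(counts.values()), float)
freqs /= freqs.sum()
H = -(freqs * np.log(freqs)).sum()   # permutation entropy
print(len(counts), round(H, 3))      # 6 patterns, H close to log 6

# A monotone series admits a single pattern, maximally far from independence.
n_trend = len(Counter(ordinal_patterns(np.arange(100.0))))
print(n_trend)  # 1
```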

  8. FROG: Time-series analysis

    NASA Astrophysics Data System (ADS)

    Allan, Alasdair

    2014-06-01

    FROG performs time series analysis and display. It provides a simple user interface for astronomers wanting to do time-domain astrophysics but still offers the powerful features found in packages such as PERIOD (ascl:1406.005). FROG includes a number of tools for manipulation of time series. Among other things, the user can combine individual time series, detrend series (multiple methods) and perform basic arithmetic functions. The data can also be exported directly into the TOPCAT (ascl:1101.010) application for further manipulation if needed.

  9. Apparatus for statistical time-series analysis of electrical signals

    NASA Technical Reports Server (NTRS)

    Stewart, C. H. (Inventor)

    1973-01-01

    An apparatus for performing statistical time-series analysis of complex electrical signal waveforms, permitting prompt and accurate determination of statistical characteristics of the signal is presented.

  10. Time series with tailored nonlinearities

    NASA Astrophysics Data System (ADS)

    Räth, C.; Laut, I.

    2015-10-01

    It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearities are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions being typical for, e.g., turbulence and financial data can thus be explained in terms of phase correlations.
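    The structured phase constraints of the paper are not reproduced here, but the machinery they act on, the amplitude/phase decomposition of a Gaussian series, can be sketched. The example below builds a classic phase-randomized surrogate: Fourier amplitudes (and hence the power spectrum and autocorrelation) are preserved exactly, while the phases are replaced; the paper's method would impose correlated rather than random phases at this step.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1024)   # an originally linear, uncorrelated Gaussian series

# Keep the Fourier amplitudes, replace the phases.
X = np.fft.rfft(x)
phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
phases[0] = 0.0              # keep the mean (DC bin) real
if len(x) % 2 == 0:
    phases[-1] = 0.0         # keep the Nyquist bin real for even-length series
surrogate = np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

# The power spectrum is preserved exactly; only phase information changed.
print(np.allclose(np.abs(np.fft.rfft(surrogate)), np.abs(X)))  # True
```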

  11. Time series with tailored nonlinearities.

    PubMed

    Räth, C; Laut, I

    2015-10-01

    It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearities are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions being typical for, e.g., turbulence and financial data can thus be explained in terms of phase correlations. PMID:26565155

  12. Economic Time-Series Page.

    ERIC Educational Resources Information Center

    Bos, Theodore; Culver, Sarah E.

    2000-01-01

    Describes the Economagic Web site, a comprehensive site of free economic time-series data that can be used for research and instruction. Explains that it contains 100,000+ economic data series from sources such as the Federal Reserve Banking System, the Census Bureau, and the Department of Commerce. (CMK)

  13. Entropy of electromyography time series

    NASA Astrophysics Data System (ADS)

    Kaufman, Miron; Zurcher, Ulrich; Sung, Paul S.

    2007-12-01

    A nonlinear analysis based on Renyi entropy is applied to electromyography (EMG) time series from back muscles. The time dependence of the entropy of the EMG signal exhibits a crossover from a subdiffusive regime at short times to a plateau at longer times. We argue that this behavior characterizes complex biological systems. The plateau value of the entropy can be used to differentiate between healthy and low back pain individuals.
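    The Renyi entropy underlying the analysis has a compact definition. The sketch below is a generic implementation for a discrete distribution, not the paper's EMG pipeline; the sanity check uses the standard fact that a uniform distribution over k outcomes gives log k at every order q.

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy of order q for a discrete probability distribution p."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return float(-(p * np.log(p)).sum())          # Shannon limit q -> 1
    return float(np.log((p ** q).sum()) / (1.0 - q))

# For a uniform distribution over k outcomes, H_q = log k for every order q.
k = 8
uniform = np.full(k, 1.0 / k)
entropies = [renyi_entropy(uniform, q) for q in (0.5, 1.0, 2.0)]
print([round(h, 6) for h in entropies])  # all equal log 8
```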

  14. CHEMICAL TIME-SERIES SAMPLING

    EPA Science Inventory

    The rationale for chemical time-series sampling has its roots in the same fundamental relationships as govern well hydraulics. Samples of ground water are collected as a function of increasing time of pumpage. The most efficient pattern of collection consists of logarithmically s...

  15. Random time series in astronomy.

    PubMed

    Vaughan, Simon

    2013-02-13

    Progress in astronomy comes from interpreting the signals encoded in the light received from distant objects: the distribution of light over the sky (images), over photon wavelength (spectrum), over polarization angle and over time (usually called light curves by astronomers). In the time domain, we see transient events such as supernovae, gamma-ray bursts and other powerful explosions; we see periodic phenomena such as the orbits of planets around nearby stars, radio pulsars and pulsations of stars in nearby galaxies; and we see persistent aperiodic variations ('noise') from powerful systems such as accreting black holes. I review just a few of the recent and future challenges in the burgeoning area of time domain astrophysics, with particular attention to persistently variable sources, the recovery of reliable noise power spectra from sparsely sampled time series, higher order properties of accreting black holes, and time delays and correlations in multi-variate time series. PMID:23277606
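    One of the challenges named above, recovering spectral information from sparsely and unevenly sampled light curves, can be sketched with a least-squares periodogram: at each trial frequency, fit a sinusoid to the irregular samples and record the variance it explains. This is a simplified stand-in, in the spirit of the Lomb-Scargle periodogram commonly used in time-domain astronomy, not any specific method from the paper; the signal and sampling are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unevenly sampled sinusoid, mimicking gaps in an astronomical light curve.
t = np.sort(rng.uniform(0.0, 50.0, 200))
f_true = 0.7    # cycles per time unit
y = np.sin(2.0 * np.pi * f_true * t) + 0.3 * rng.normal(size=len(t))

# Least-squares periodogram: fit cos/sin at each trial frequency and record
# the variance explained by the fit.
freqs = np.linspace(0.05, 2.0, 400)
power = np.empty(len(freqs))
for i, f in enumerate(freqs):
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    power[i] = np.var(A @ coef)

f_peak = freqs[np.argmax(power)]
print(round(f_peak, 2))  # close to the true frequency 0.7
```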

  16. Pattern Recognition in Time Series

    NASA Astrophysics Data System (ADS)

    Lin, Jessica; Williamson, Sheri; Borne, Kirk D.; DeBarr, David

    2012-03-01

    Perhaps the most commonly encountered data types are time series, touching almost every aspect of human life, including astronomy. One obvious problem of handling time-series databases concerns their typically massive size—gigabytes or even terabytes are common, with more and more databases reaching the petabyte scale. For example, in telecommunication, large companies like AT&T produce several hundred million long-distance records per day [Cort00]. In astronomy, time-domain surveys are relatively new—these are surveys that cover a significant fraction of the sky with many repeat observations, thereby producing time series for millions or billions of objects. Several such time-domain sky surveys are now completed, such as the MACHO [Alco01], OGLE [Szym05], SDSS Stripe 82 [Bram08], SuperMACHO [Garg08], and Berkeley’s Transients Classification Pipeline (TCP) [Star08] projects. The Pan-STARRS project is an active sky survey—it began in 2010, a 3-year survey covering three-fourths of the sky with ˜60 observations of each field [Kais04]. The Large Synoptic Survey Telescope (LSST) project proposes to survey 50% of the visible sky repeatedly approximately 1000 times over a 10-year period, creating a 100-petabyte image archive and a 20-petabyte science database (http://www.lsst.org/). The LSST science database will include time series of over 100 scientific parameters for each of approximately 50 billion astronomical sources—this will be the largest data collection (and certainly the largest time series database) ever assembled in astronomy, and it rivals any other discipline’s massive data collections for sheer size and complexity. More common in astronomy are time series of flux measurements. As a consequence of many decades of observations (and in some cases, hundreds of years), a large variety of flux variations have been detected in astronomical objects, including periodic variations (e.g., pulsating stars, rotators, pulsars, eclipsing binaries

  17. Inductive time series modeling program

    SciTech Connect

    Kirk, B.L.; Rust, B.W.

    1985-10-01

    A number of features that comprise environmental time series share a common mathematical behavior. Analysis of the Mauna Loa carbon dioxide record and other time series is aimed at constructing mathematical functions which describe as many major features of the data as possible. A trend function is fit to the data, removed, and the resulting residuals analyzed for any significant behavior. This is repeated until the residuals are driven to white noise. In the following discussion, the concept of trend will include cyclic components. The mathematical tools and program packages used are VARPRO (Golub and Pereyra 1973), for the least squares fit, and a modified version of our spectral analysis program (Kirk et al. 1979), for spectrum and noise analysis. The program is written in FORTRAN. All computations are done in double precision, except for the plotting calls where the DISSPLA package is used. The core requirement varies between 600 K and 700 K. The program is implemented on the IBM 360/370. Currently, the program can analyze up to five different time series where each series contains no more than 300 points. 12 refs.
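    The VARPRO/DISSPLA program itself is not available here, but its core loop, fit a trend (including cyclic components), remove it, and check whether the residuals are white, can be sketched with ordinary least squares on a synthetic Mauna Loa-style record. All signal parameters below are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic record in the spirit of the Mauna Loa CO2 series:
# linear trend + annual cycle + white noise, monthly for 20 years.
t = np.arange(240) / 12.0
y = 315.0 + 1.5 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, len(t))

# Fit trend-plus-cycle by linear least squares, then inspect the residuals.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef

# Lag-1 autocorrelation of the residuals: near zero once the residuals have
# been driven to white noise, the program's stopping criterion.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(round(coef[1], 2), round(r1, 3))  # recovered slope; near-zero lag-1 autocorrelation
```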

  18. Building Chaotic Model From Incomplete Time Series

    NASA Astrophysics Data System (ADS)

    Siek, Michael; Solomatine, Dimitri

    2010-05-01

    and missing value estimates; and (2) the accuracy of the built chaotic model predictions. The model results indicate that the proposed methods are able to build a chaotic model from incomplete time series and to provide reliable and accurate predictions.

  19. Nonlinear time-series analysis revisited

    NASA Astrophysics Data System (ADS)

    Bradley, Elizabeth; Kantz, Holger

    2015-09-01

    In 1980 and 1981, two pioneering papers laid the foundation for what became known as nonlinear time-series analysis: the analysis of observed data—typically univariate—via dynamical systems theory. Based on the concept of state-space reconstruction, this set of methods allows us to compute characteristic quantities such as Lyapunov exponents and fractal dimensions, to predict the future course of the time series, and even to reconstruct the equations of motion in some cases. In practice, however, there are a number of issues that restrict the power of this approach: whether the signal accurately and thoroughly samples the dynamics, for instance, and whether it contains noise. Moreover, the numerical algorithms that we use to instantiate these ideas are not perfect; they involve approximations, scale parameters, and finite-precision arithmetic, among other things. Even so, nonlinear time-series analysis has been used to great advantage on thousands of real and synthetic data sets from a wide variety of systems ranging from roulette wheels to lasers to the human heart. Even in cases where the data do not meet the mathematical or algorithmic requirements to assure full topological conjugacy, the results of nonlinear time-series analysis can be helpful in understanding, characterizing, and predicting dynamical systems.
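    The state-space reconstruction at the heart of the field is the time-delay embedding. A minimal sketch: for a sine wave, a two-dimensional embedding with a quarter-period lag recovers the circular orbit of the harmonic oscillator in its reconstructed state space (the lag and dimension here are chosen by hand; in practice they are estimated from the data).

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Time-delay embedding: rows are (x[i], x[i+lag], ..., x[i+(dim-1)*lag])."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

# A sine sampled at 100 points per period; a quarter-period lag (25 samples)
# pairs sin(theta) with cos(theta), reconstructing the circular orbit.
t = np.arange(1000)
x = np.sin(2 * np.pi * t / 100.0)
emb = delay_embed(x, dim=2, lag=25)

radii = np.hypot(emb[:, 0], emb[:, 1])
print(emb.shape, np.allclose(radii, 1.0))  # (975, 2) True
```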

  20. Nonlinear time-series analysis revisited.

    PubMed

    Bradley, Elizabeth; Kantz, Holger

    2015-09-01

    In 1980 and 1981, two pioneering papers laid the foundation for what became known as nonlinear time-series analysis: the analysis of observed data-typically univariate-via dynamical systems theory. Based on the concept of state-space reconstruction, this set of methods allows us to compute characteristic quantities such as Lyapunov exponents and fractal dimensions, to predict the future course of the time series, and even to reconstruct the equations of motion in some cases. In practice, however, there are a number of issues that restrict the power of this approach: whether the signal accurately and thoroughly samples the dynamics, for instance, and whether it contains noise. Moreover, the numerical algorithms that we use to instantiate these ideas are not perfect; they involve approximations, scale parameters, and finite-precision arithmetic, among other things. Even so, nonlinear time-series analysis has been used to great advantage on thousands of real and synthetic data sets from a wide variety of systems ranging from roulette wheels to lasers to the human heart. Even in cases where the data do not meet the mathematical or algorithmic requirements to assure full topological conjugacy, the results of nonlinear time-series analysis can be helpful in understanding, characterizing, and predicting dynamical systems. PMID:26428563

  1. Univariate time series forecasting algorithm validation

    NASA Astrophysics Data System (ADS)

    Ismail, Suzilah; Zakaria, Rohaiza; Muda, Tuan Zalizam Tuan

    2014-12-01

    Forecasting is a complex process which requires expert tacit knowledge to produce accurate forecast values. This complexity contributes to the gap between end users and experts. Automating the process with an algorithm can act as a bridge between them. An algorithm is a well-defined rule for solving a problem. In this study a univariate time series forecasting algorithm was developed in JAVA and validated using SPSS and Excel. Two sets of simulated data (yearly and non-yearly), several univariate forecasting techniques (i.e., Moving Average, Decomposition, Exponential Smoothing, Time Series Regression, and ARIMA), and elements of current forecasting practice (such as data partitioning, several error measures, recursive evaluation, etc.) were employed. The results of the algorithm tally with those of SPSS and Excel. This algorithm will benefit not just forecasters but also end users who lack in-depth knowledge of the forecasting process.
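    The validation workflow the abstract describes, apply a standard technique, hold out data, and score with error measures, can be sketched with one of the listed techniques. The example below uses simple exponential smoothing and MAPE on simulated data; it is a generic illustration, not the study's JAVA implementation, and the smoothing constant and split sizes are arbitrary choices.

```python
import numpy as np

def ses(y, alpha):
    """Simple exponential smoothing; returns one-step-ahead forecasts."""
    f = [y[0]]
    for obs in y[:-1]:
        f.append(alpha * obs + (1 - alpha) * f[-1])
    return np.array(f)

def mape(actual, forecast):
    """Mean absolute percentage error."""
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

rng = np.random.default_rng(0)
y = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 60))   # simulated yearly series

# Data partition: fit on the first 48 points, evaluate recursively on the
# last 12 (each forecast uses only data available up to that step).
train, holdout = y[:48], y[48:]
forecasts = ses(y, alpha=0.5)
print(round(mape(holdout, forecasts[48:]), 2), "% MAPE on the holdout")
```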

  2. Introduction to Time Series Analysis

    NASA Technical Reports Server (NTRS)

    Hardin, J. C.

    1986-01-01

    The field of time series analysis is explored from its logical foundations to the most modern data analysis techniques. The presentation is developed, as far as possible, for continuous data, so that the inevitable use of discrete mathematics is postponed until the reader has gained some familiarity with the concepts. The monograph seeks to provide the reader with both the theoretical overview and the practical details necessary to correctly apply the full range of these powerful techniques. In addition, the last chapter introduces many specialized areas where research is currently in progress.

  3. Hydrodynamic analysis of time series

    NASA Astrophysics Data System (ADS)

    Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.

    2003-04-01

    It was proved that balance equations for systems with corpuscular structure can be derived if a kinematic description by piecewise analytic functions is available [1]. For example, the hydrodynamic equations for one-dimensional systems of inelastic particles, derived in [2], were used to prove the inconsistency of the Fourier law of heat with the microscopic structure of the system. The hydrodynamic description is also possible for single particle systems. In this case, averages of physical quantities associated with the particle, over a space-time window, generalizing the usual ``moving averages'' which are performed on time intervals only, were shown to be almost everywhere continuous space-time functions. Moreover, they obey balance partial differential equations (continuity equation for the 'concentration', Navier-Stokes equation, and so on) [3]. Time series can be interpreted as trajectories in the space of the recorded parameter. Their hydrodynamic interpretation is expected to enable deterministic predictions, when closure relations can be obtained for the balance equations. For the time being, a first result is the estimation of the probability density for the occurrence of a given parameter value, by the normalized concentration field from the hydrodynamic description. The method is illustrated by hydrodynamic analysis of three types of time series: white noise, stock prices from financial markets and groundwater levels recorded at the Krauthausen experimental field of Forschungszentrum Jülich (Germany). [1] C. Vamoş, A. Georgescu, N. Suciu, I. Turcu, Physica A 227, 81-92, 1996. [2] C. Vamoş, N. Suciu, A. Georgescu, Phys. Rev. E 55, 5, 6277-6280, 1997. [3] C. Vamoş, N. Suciu, W. Blaj, Physica A, 287, 461-467, 2000.

  4. Singular spectrum analysis and forecasting of hydrological time series

    NASA Astrophysics Data System (ADS)

    Marques, C. A. F.; Ferreira, J. A.; Rocha, A.; Castanheira, J. M.; Melo-Gonçalves, P.; Vaz, N.; Dias, J. M.

    The singular spectrum analysis (SSA) technique is applied to some hydrological univariate time series to assess its ability to uncover important information from those series, and also its forecast skill. The SSA is carried out on annual precipitation, monthly runoff, and hourly water temperature time series. Information is obtained by extracting important components or, when possible, the whole signal from the time series. The extracted components are then subject to forecast by the SSA algorithm. The ability of the SSA to extract a slowly varying component (i.e. the trend) from the precipitation time series, the trend and oscillatory components from the runoff time series, and the whole signal from the water temperature time series is illustrated. The SSA was also able to accurately forecast the extracted components of these time series.
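    The decomposition step of SSA follows standard definitions: embed the series in a trajectory matrix, take its SVD, select components, and return to a series by diagonal (Hankel) averaging. The sketch below is a minimal generic implementation, not the authors' code; window length and component choices are illustrative, and the test case exploits the fact that a noiseless sine has a rank-2 trajectory matrix.

```python
import numpy as np

def ssa_reconstruct(y, window, components):
    """Reconstruct a series from selected SSA components: rank-limited SVD of
    the trajectory matrix followed by diagonal (Hankel) averaging."""
    n = len(y)
    k = n - window + 1
    X = np.column_stack([y[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[c] * np.outer(U[:, c], Vt[c]) for c in components)
    # Average over the antidiagonals to get back a series of length n.
    rec = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            rec[i + j] += Xr[i, j]
            counts[i + j] += 1.0
    return rec / counts

# A noiseless sine has a rank-2 trajectory matrix: two components suffice
# to recover the whole signal, as in the water temperature example above.
t = np.arange(200)
y = np.sin(2 * np.pi * t / 25.0)
rec = ssa_reconstruct(y, window=50, components=[0, 1])
print(np.allclose(rec, y, atol=1e-8))  # True
```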

  5. Multiple Indicator Stationary Time Series Models.

    ERIC Educational Resources Information Center

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  6. Analysis of time series from stochastic processes

    PubMed

    Gradisek; Siegert; Friedrich; Grabec

    2000-09-01

    Analysis of time series from stochastic processes governed by a Langevin equation is discussed. Several applications for the analysis are proposed based on estimates of drift and diffusion coefficients of the Fokker-Planck equation. The coefficients are estimated directly from a time series. The applications are illustrated by examples employing various synthetic time series and experimental time series from metal cutting. PMID:11088809

  7. Multivariate Time Series Similarity Searching

    PubMed Central

    Wang, Jimin; Zhu, Yuelong; Li, Shijin; Wan, Dingsheng; Zhang, Pengcheng

    2014-01-01

    Multivariate time series (MTS) datasets are very common in various financial, multimedia, and hydrological fields. In this paper, a dimension-combination method is proposed to search similar sequences for MTS. Firstly, the similarity of single-dimension series is calculated; then the overall similarity of the MTS is obtained by synthesizing each single-dimension similarity based on the weighted BORDA voting method. The dimension-combination method could use the existing similarity searching method. Several experiments, which used the classification accuracy as a measure, were performed on six datasets from the UCI KDD Archive to validate the method. The results show the advantage of the approach compared to the traditional similarity measures, such as Euclidean distance (ED), dynamic time warping (DTW), point distribution (PD), PCA similarity factor (SPCA), and extended Frobenius norm (Eros), for MTS datasets in some ways. Our experiments also demonstrate that no measure can fit all datasets, and the proposed measure is a choice for similarity searches. PMID:24895665
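    The combination step can be sketched generically: rank the candidate series per dimension by any single-dimension distance, then merge the rankings with weighted Borda counts. This is a simplified illustration of the idea, not the paper's method; here the weights are uniform and the distance is Euclidean, whereas the paper derives both from the data.

```python
import numpy as np

def borda_search(query, candidates, weights):
    """Rank candidate MTS against a query by combining per-dimension rankings
    with weighted Borda counts (higher score = more similar)."""
    n = len(candidates)
    scores = np.zeros(n)
    for d, w in enumerate(weights):
        # Distance of each candidate to the query in this dimension alone.
        dists = [np.linalg.norm(c[:, d] - query[:, d]) for c in candidates]
        order = np.argsort(dists)                  # most similar first
        for rank, idx in enumerate(order):
            scores[idx] += w * (n - 1 - rank)      # Borda points, weighted
    return np.argsort(-scores)                     # best candidate first

rng = np.random.default_rng(0)
query = np.zeros((20, 2))                          # 20 steps, 2 dimensions
near = query + rng.normal(0, 0.1, (20, 2))         # close in both dimensions
far1 = query + rng.normal(0, 1.0, (20, 2))
far2 = query + rng.normal(0, 2.0, (20, 2))

ranking = borda_search(query, [far1, near, far2], weights=[0.5, 0.5])
print(ranking)  # the candidate close in every dimension ranks first
```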

  8. Multivariate time series similarity searching.

    PubMed

    Wang, Jimin; Zhu, Yuelong; Li, Shijin; Wan, Dingsheng; Zhang, Pengcheng

    2014-01-01

    Multivariate time series (MTS) datasets are very common in various financial, multimedia, and hydrological fields. In this paper, a dimension-combination method is proposed to search similar sequences for MTS. Firstly, the similarity of single-dimension series is calculated; then the overall similarity of the MTS is obtained by synthesizing each single-dimension similarity based on the weighted BORDA voting method. The dimension-combination method could use the existing similarity searching method. Several experiments, which used the classification accuracy as a measure, were performed on six datasets from the UCI KDD Archive to validate the method. The results show the advantage of the approach compared to the traditional similarity measures, such as Euclidean distance (ED), dynamic time warping (DTW), point distribution (PD), PCA similarity factor (SPCA), and extended Frobenius norm (Eros), for MTS datasets in some ways. Our experiments also demonstrate that no measure can fit all datasets, and the proposed measure is a choice for similarity searches. PMID:24895665

  9. Multifractal Analysis of Aging and Complexity in Heartbeat Time Series

    NASA Astrophysics Data System (ADS)

    Muñoz D., Alejandro; Almanza V., Victor H.; del Río C., José L.

    2004-09-01

    Recently multifractal analysis has been used intensively in the analysis of physiological time series. In this work we apply the multifractal analysis to the study of heartbeat time series from healthy young subjects and other series obtained from old healthy subjects. We show that this multifractal formalism could be a useful tool to discriminate these two kinds of series. We used the algorithm proposed by Chhabra and Jensen that provides a highly accurate, practical and efficient method for the direct computation of the singularity spectrum. Aging causes loss of multifractality in the heartbeat time series, meaning that heartbeat time series of elderly persons are less complex than those of young persons. This analysis reveals a new level of complexity characterized by the wide range of exponents necessary to characterize the dynamics of young people.
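    The core of the Chhabra-Jensen direct method is the q-weighted measure from which one singularity exponent alpha(q) and its dimension f(q) are computed. The sketch below evaluates a single (alpha, f) point at one scale only; a real application, as in the paper, fits these quantities over a range of box sizes. The sanity check uses a uniform (monofractal) measure, for which the spectrum collapses to alpha = f = 1 at every q.

```python
import numpy as np

def singularity_point(p, q):
    """One (alpha(q), f(q)) point of the singularity spectrum at a single
    scale, following the direct method of Chhabra and Jensen."""
    eps = 1.0 / len(p)                   # box size for an equal partition
    mu = p ** q / np.sum(p ** q)         # q-weighted normalized measure
    alpha = np.sum(mu * np.log(p)) / np.log(eps)
    f = np.sum(mu * np.log(mu)) / np.log(eps)
    return alpha, f

# A uniform (monofractal) measure: the whole spectrum collapses to (1, 1),
# whereas a genuinely multifractal series gives a wide range of exponents.
p = np.full(64, 1.0 / 64)
pts = [singularity_point(p, q) for q in (-2.0, 0.0, 1.0, 3.0)]
print([(round(a, 6), round(fq, 6)) for a, fq in pts])
```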

  10. A Review of Subsequence Time Series Clustering

    PubMed Central

    Teh, Ying Wah

    2014-01-01

    Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies. PMID:25140332

  11. A review of subsequence time series clustering.

    PubMed

    Zolhavarieh, Seyedjamal; Aghabozorgi, Saeed; Teh, Ying Wah

    2014-01-01

    Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies. PMID:25140332

  12. Nonparametric causal inference for bivariate time series

    NASA Astrophysics Data System (ADS)

    McCracken, James M.; Weigel, Robert S.

    2016-02-01

    We introduce new quantities for exploratory causal inference between bivariate time series. The quantities, called penchants and leanings, are computationally straightforward to apply, follow directly from assumptions of probabilistic causality, do not depend on any assumed models for the time series generating process, and do not rely on any embedding procedures; these features may provide a clearer interpretation of the results than those from existing time series causality tools. The penchant and leaning are computed based on a structured method for computing probabilities.

  13. Forecasting Enrollments with Fuzzy Time Series.

    ERIC Educational Resources Information Center

    Song, Qiang; Chissom, Brad S.

    The concept of fuzzy time series is introduced and used to forecast the enrollment of a university. Fuzzy time series, an aspect of fuzzy set theory, forecasts enrollment using a first-order time-invariant model. To evaluate the model, the conventional linear regression technique is applied and the predicted values obtained are compared to the…

  14. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate measurement of different fiber lengths using the time-of-flight technique is performed. A setup is proposed to accurately measure lengths from 1 to 40 km at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of the United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
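The time-of-flight length calculation behind such a setup is simple arithmetic; the sketch below assumes a reflective round-trip configuration and a typical group index of 1.4682 at 1550 nm (assumed values for illustration, not figures from the paper):

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def fiber_length(round_trip_time_s, group_index=1.4682):
    """Length from round-trip time of flight: L = c * t / (2 * n_g).
    The factor 2 accounts for the pulse traversing the fiber twice."""
    return C_VACUUM * round_trip_time_s / (2.0 * group_index)
```

A 1 km fiber at this group index corresponds to a round-trip time of roughly 9.8 microseconds.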

  15. IGS Clock Products for Accurate Geodetic and Timing Applications

    NASA Astrophysics Data System (ADS)

    Senior, K. L.; Ray, J. R.

    2007-12-01

    The performance of any GNSS is intimately related to the characteristics of the satellite clocks, so an understanding of the clock behavior is vital. The accurate products of the IGS enable daily point positions at the sub-cm level and continuous global clock comparisons at the sub-ns level. Time transfer is less accurate than the associated positioning because of 1) difficult-to-measure hardware delays and 2) the limiting pseudorange measurement errors. Both factors arise from characteristics of the pseudorange signals, which are easily degraded by multipath and other effects. The behavior of the satellite clocks is also important. Over sub-daily intervals, IGS products show that approximate power-law stochastic processes govern all GPS clocks. The Block IIA Rb and Cs clocks obey random walk noise, with the Rb clocks up to nearly an order of magnitude more stable. Due to the high-frequency noise of the onboard Time Keeping system in the newer Block IIR and IIR-M satellites, their Rb clocks are dominantly white noise up to a few 1000 s, with standard deviations of 90 to 180 ps. Superposed on this random background, periodic signals are present at four harmonic frequencies, n × (2.0029 ± 0.0005) cycles per day for n = 1, 2, 3, and 4. The equivalent fundamental period is 11.9826 hours, which surprisingly differs from the reported mean GPS orbital period of 11.9659 hours by 60 ± 11 s. We cannot account for this apparent discrepancy but note that a clear relationship between the periodic signals and the orbital dynamics is evidenced for some satellites by modulations of the spectral amplitudes with eclipse season. The Cs clocks are more strongly affected than the Rb clocks. All four harmonics are much smaller for the IIR/IIR-M satellites than for the older blocks. The strong 12- and 6-hour periodics in most GPS clocks dictate that these variations should be modeled in all high-accuracy applications, such as for timescale formation, interpolation of IGS clock products
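The quoted period and 60 s discrepancy follow from straightforward unit conversion of the figures in the abstract; a sketch of the arithmetic:

```python
# Convert the fundamental frequency (cycles per day) quoted in the abstract
# to a period in hours, and compare with the reported mean orbital period.
freq_cpd = 2.0029                    # fundamental frequency, cycles per day
period_h = 24.0 / freq_cpd           # equivalent period, ~11.9826 hours
orbital_h = 11.9659                  # reported mean GPS orbital period, hours
discrepancy_s = (period_h - orbital_h) * 3600.0  # ~60 seconds
```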

  16. Multigrid time-accurate integration of Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.

  17. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than the single-grid algorithm are possible.

  18. Statistical criteria for characterizing irradiance time series.

    SciTech Connect

    Stein, Joshua S.; Ellis, Abraham; Hansen, Clifford W.

    2010-10-01

    We propose and examine several statistical criteria for characterizing time series of solar irradiance. Time series of irradiance are used in analyses that seek to quantify the performance of photovoltaic (PV) power systems over time. Time series of irradiance are either measured or are simulated using models. Simulations of irradiance are often calibrated to or generated from statistics for observed irradiance and simulations are validated by comparing the simulation output to the observed irradiance. Criteria used in this comparison should derive from the context of the analyses in which the simulated irradiance is to be used. We examine three statistics that characterize time series and their use as criteria for comparing time series. We demonstrate these statistics using observed irradiance data recorded in August 2007 in Las Vegas, Nevada, and in June 2009 in Albuquerque, New Mexico.
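The abstract does not name the three statistics it examines; as a hedged illustration only, here are three common choices for characterizing an irradiance series (mean level, variability, and mean absolute ramp rate):

```python
import statistics

def irradiance_criteria(series):
    """Illustrative summary statistics for a time series of irradiance;
    hypothetical choices, not necessarily those used in the report."""
    ramps = [b - a for a, b in zip(series, series[1:])]
    return {
        "mean": statistics.fmean(series),
        "std": statistics.pstdev(series),
        "mean_abs_ramp": statistics.fmean(abs(r) for r in ramps),
    }
```

Comparing such statistics between measured and simulated series is one way to judge whether a simulation reproduces the variability that matters for PV performance analyses.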

  19. Generation of artificial helioseismic time-series

    NASA Technical Reports Server (NTRS)

    Schou, J.; Brown, T. M.

    1993-01-01

    We present an outline of an algorithm to generate artificial helioseismic time-series, taking into account as much as possible of the knowledge we have on solar oscillations. The hope is that it will be possible to find the causes of some of the systematic errors in analysis algorithms by testing them with such artificial time-series.

  20. Salient Segmentation of Medical Time Series Signals

    PubMed Central

    Woodbridge, Jonathan; Lan, Mars; Sarrafzadeh, Majid; Bui, Alex

    2016-01-01

    Searching and mining medical time series databases is extremely challenging due to large, high entropy, and multidimensional datasets. Traditional time series databases are populated using segments extracted by a sliding window. The resulting database index contains an abundance of redundant time series segments with little to no alignment. This paper presents the idea of “salient segmentation”. Salient segmentation is a probabilistic segmentation technique for populating medical time series databases. Segments with the lowest probabilities are considered salient and are inserted into the index. The resulting index has little redundancy and is composed of aligned segments. This approach reduces index sizes by more than 98% over conventional sliding window techniques. Furthermore, salient segmentation can reduce redundancy in motif discovery algorithms by more than 85%, yielding a more succinct representation of a time series signal.
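A minimal sketch of the probabilistic idea, not the authors' implementation: score each sliding window by how improbable its mean is under a Gaussian fitted to all window means, and treat the least probable windows as salient:

```python
import statistics

def salient_windows(series, width, keep=1):
    """Rank sliding-window start positions by how improbable the window
    mean is under a Gaussian fitted to all window means (a hypothetical
    stand-in for the paper's probability model)."""
    means = [statistics.fmean(series[i:i + width])
             for i in range(len(series) - width + 1)]
    mu = statistics.fmean(means)
    sigma = statistics.pstdev(means) or 1.0
    # Larger |z| => less probable under the fitted model => more salient.
    order = sorted(range(len(means)),
                   key=lambda i: -abs((means[i] - mu) / sigma))
    return order[:keep]
```

On a flat series with a single bump, the most salient window is the one covering the bump, while the redundant flat windows are discarded.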

  1. Entropic Analysis of Electromyography Time Series

    NASA Astrophysics Data System (ADS)

    Kaufman, Miron; Sung, Paul

    2005-03-01

    We are in the process of assessing the effectiveness of fractal and entropic measures for the diagnosis of low back pain from surface electromyography (EMG) time series. Surface EMG is used to assess patients with low back pain. In a typical EMG measurement, the voltage is measured every millisecond. We observed back muscle fatiguing during one minute, which results in a time series with 60,000 entries. We characterize the complexity of time series by computing the time dependence of the Shannon entropy. The analysis of the time series from different relevant muscles of healthy and low back pain (LBP) individuals provides evidence that the level of variability of back muscle activities is much larger for healthy individuals than for individuals with LBP. In general the time dependence of the entropy shows a crossover from a diffusive regime to a regime characterized by long-time correlations (self-organization) at about 0.01 s.
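A windowed Shannon-entropy computation of the kind described can be sketched as follows (the histogram binning and bin count are illustrative choices, not the authors'):

```python
import math
from collections import Counter

def shannon_entropy(window, bins=8):
    """Shannon entropy (in bits) of an amplitude histogram of one window
    of samples; applied to successive windows it gives the entropy's
    time dependence."""
    lo, hi = min(window), max(window)
    span = (hi - lo) or 1.0  # avoid division by zero for a flat window
    counts = Counter(min(int((v - lo) / span * bins), bins - 1)
                     for v in window)
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A constant window has zero entropy; a window spread uniformly over all bins attains the maximum log2(bins).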

  2. Biclustering of time series microarray data.

    PubMed

    Meng, Jia; Huang, Yufei

    2012-01-01

    Clustering is a popular data exploration technique widely used in microarray data analysis. In this chapter, we review the ideas and algorithms of biclustering and its applications in time series microarray analysis. We first introduce the concept and importance of biclustering and its different variations. We then focus our discussion on the popular iterative signature algorithm (ISA) for searching for biclusters in a microarray dataset. Next, we discuss in detail the enrichment constraint time-dependent ISA (ECTDISA) for identifying biologically meaningful temporal transcription modules from time series microarray datasets. In the end, we provide an example of ECTDISA application to time series microarray data of Kaposi's Sarcoma-associated Herpesvirus (KSHV) infection. PMID:22130875

  3. Network structure of multivariate time series.

    PubMed

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-01-01

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow one to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail. PMID:26487040

  4. Network structure of multivariate time series

    PubMed Central

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-01-01

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow one to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail. PMID:26487040

  5. Homogenising time series: beliefs, dogmas and facts

    NASA Astrophysics Data System (ADS)

    Domonkos, P.

    2011-06-01

    In recent decades various homogenisation methods have been developed, but the real effects of their application on time series are still not sufficiently known. The ongoing COST action HOME (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than earlier. As a part of the COST activity, a benchmark dataset was built whose characteristics approach well the characteristics of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and analyse the real effects of selected theoretical recommendations. Empirical results show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities often have statistical characteristics similar to natural changes caused by climatic variability, thus the pure application of the classic theory that change-points of observed time series can be found and corrected one-by-one is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in raw time series. Some problems around detecting multiple structures of inhomogeneities, as well as that of time series comparisons within homogenisation procedures, are discussed briefly in the study.

  6. Network structure of multivariate time series

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Nicosia, Vincenzo; Latora, Vito

    2015-10-01

    Our understanding of a variety of phenomena in physics, biology and economics crucially depends on the analysis of multivariate time series. While a wide range of tools and techniques for time series analysis already exists, the increasing availability of massive data structures calls for new approaches for multidimensional signal processing. We present here a non-parametric method to analyse multivariate time series, based on the mapping of a multidimensional time series into a multilayer network, which allows one to extract information on a high-dimensional dynamical system through the analysis of the structure of the associated multiplex network. The method is simple to implement, general, scalable, does not require ad hoc phase space partitioning, and is thus suitable for the analysis of large, heterogeneous and non-stationary time series. We show that simple structural descriptors of the associated multiplex networks allow one to extract and quantify nontrivial properties of coupled chaotic maps, including the transition between different dynamical phases and the onset of various types of synchronization. As a concrete example we then study financial time series, showing that a multiplex network analysis can efficiently discriminate crises from periods of financial stability, where standard methods based on time-series symbolization often fail.

  7. Sensor-Generated Time Series Events: A Definition Language

    PubMed Central

    Anguera, Aurea; Lara, Juan A.; Lizcano, David; Martínez, Maria Aurora; Pazos, Juan

    2012-01-01

    There are now a great many domains where information is recorded by sensors over a limited time period or on a permanent basis. This data flow leads to sequences of data known as time series. In many domains, like seismography or medicine, time series analysis focuses on particular regions of interest, known as events, whereas the remainder of the time series contains hardly any useful information. In these domains, there is a need for mechanisms to identify and locate such events. In this paper, we propose an event definition language that is general enough to be used to easily and naturally define events in time series recorded by sensors in any domain. The proposed language has been applied to the definition of time series events generated within the branch of medicine dealing with balance-related functions in human beings. A device, called a posturograph, is used to study balance-related functions. The platform has four sensors that record the pressure intensity being exerted on the platform, generating four interrelated time series. As opposed to the existing ad hoc proposals, the results confirm that the proposed language is valid, that is, generally applicable and accurate, for identifying the events contained in the time series.

  8. Modeling Time Series Data for Supervised Learning

    ERIC Educational Resources Information Center

    Baydogan, Mustafa Gokce

    2012-01-01

    Temporal data are increasingly prevalent and important in analytics. Time series (TS) data are chronological sequences of observations and an important class of temporal data. Fields such as medicine, finance, learning science and multimedia naturally generate TS data. Each series provides a high-dimensional data vector that challenges the learning…

  9. Developing consistent time series landsat data products

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Landsat satellite series has provided a continuous earth observation data record since the early 1970s. There is increasing demand for a consistent time series of Landsat data products. In this presentation, I will summarize the work supported by the USGS Landsat Science Team project from 20...

  10. Visibility Graph Based Time Series Analysis

    PubMed Central

    Stephen, Mutua; Gu, Changgui; Yang, Huijie

    2015-01-01

    Network-based time series analysis has made considerable achievements in recent years. By mapping mono/multivariate time series into networks, one can investigate both their microscopic and macroscopic behaviors. However, most proposed approaches lead to the construction of static networks, consequently providing limited information on evolutionary behaviors. In the present paper we propose a method called visibility graph based time series analysis, in which series segments are mapped to visibility graphs as descriptions of the corresponding states and the successively occurring states are linked. This procedure converts a time series into a temporal network and at the same time a network of networks. Findings from empirical records for stock markets in the USA (S&P500 and Nasdaq) and artificial series generated by means of fractional Gaussian motions show that the method can provide rich information benefiting short-term and long-term predictions. Theoretically, we propose a method to investigate time series from the viewpoint of a network of networks. PMID:26571115
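The natural visibility criterion underlying such mappings (two samples are linked when no intermediate sample blocks the straight line between them) can be sketched directly; this is a generic implementation of the mapping step, not the paper's network-of-networks pipeline:

```python
def visibility_graph(series):
    """Natural visibility graph: nodes a < b are linked when every
    intermediate sample lies strictly below the straight line joining
    (a, x_a) and (b, x_b). Simple O(n^3) sketch for clarity."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(series[c] < series[b]
                   + (series[a] - series[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges
```

A monotone series yields only nearest-neighbor links, while a local maximum at the start can "see" past a dip to later points.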

  11. Accurate GPS Time-Linked data Acquisition System (ATLAS II) user's manual.

    SciTech Connect

    Jones, Perry L.; Zayas, Jose R.; Ortiz-Moyet, Juan

    2004-02-01

    The Accurate Time-Linked data Acquisition System (ATLAS II) is a small, lightweight, time-synchronized, robust data acquisition system that is capable of acquiring simultaneous long-term time-series data from both a wind turbine rotor and ground-based instrumentation. This document is a user's manual for the ATLAS II hardware and software. It describes the hardware and software components of ATLAS II, and explains how to install and execute the software.

  12. Measuring nonlinear behavior in time series data

    NASA Astrophysics Data System (ADS)

    Wai, Phoong Seuk; Ismail, Mohd Tahir

    2014-12-01

    Stationarity testing is important for characterizing time series behavior, since financial and economic data series often contain missing data, structural changes, and jumps or breaks. Moreover, a nonstationary time series can be transformed into a stationary one by treating it as a difference-stationary or trend-stationary process. Two stationarity hypothesis tests, the Augmented Dickey-Fuller (ADF) test and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test, are examined in this paper to describe the properties of the time series variables in a financial model. The least squares method is used in the Augmented Dickey-Fuller test to detect changes in the series, and the Lagrange multiplier is used in the Kwiatkowski-Phillips-Schmidt-Shin test to examine the properties of the oil price, the gold price, and the Malaysian stock market. Moreover, the Quandt-Andrews, Bai-Perron and Chow tests are also used to detect the existence of breaks in the data series. The monthly index data range from December 1989 until May 2012. Results show that these three series exhibit nonlinear properties but become stationary after taking the first difference.
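The difference-stationary idea can be illustrated without the full ADF/KPSS machinery: a random walk is nonstationary in level, but its first difference recovers the white-noise steps (a minimal sketch, not the tests used in the paper):

```python
import random
import statistics

random.seed(42)

# A random walk is difference-stationary: the level wanders without bound,
# but the first difference is just the white-noise increment series.
steps = [random.gauss(0.0, 1.0) for _ in range(2000)]
walk = [0.0]
for s in steps:
    walk.append(walk[-1] + s)

diff = [b - a for a, b in zip(walk, walk[1:])]

# After differencing, the two halves of the series have comparable variance.
var_first = statistics.pvariance(diff[:1000])
var_second = statistics.pvariance(diff[1000:])
```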

  13. Nonlinear Analysis of Surface EMG Time Series

    NASA Astrophysics Data System (ADS)

    Zurcher, Ulrich; Kaufman, Miron; Sung, Paul

    2004-04-01

    Applications of nonlinear analysis of surface electromyography time series of patients with and without low back pain are presented. Limitations of the standard methods based on the power spectrum are discussed.

  14. Complex network approach to fractional time series

    SciTech Connect

    Manshour, Pouya

    2015-10-15

    In order to extract the correlation information inherent in a stochastic time series, the visibility graph algorithm has recently been proposed, by which a time series can be mapped onto a complex network. We demonstrate that the visibility algorithm is not an appropriate one to study the correlation aspects of a time series. We then employ the horizontal visibility algorithm, as a much simpler one, to map fractional processes onto complex networks. The degree distributions are shown to have parabolic exponential forms with a Hurst-dependent fitting parameter. Further, we take into account other topological properties such as the maximum eigenvalue of the adjacency matrix and the degree assortativity, and show that such topological quantities can also be used to predict the Hurst exponent, with an exception for anti-persistent fractional Gaussian noises. To solve this problem, we take into account the Spearman correlation coefficient between nodes' degrees and their corresponding data values in the original time series.
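The horizontal visibility criterion used here is simple to state and implement: two samples are linked when every sample strictly between them is lower than both endpoints. A generic sketch (not the authors' code):

```python
def horizontal_visibility_graph(series):
    """Horizontal visibility graph: nodes a < b are linked when every
    sample strictly between them is lower than both endpoints."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(series[c] < min(series[a], series[b])
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges
```

Because the criterion compares only amplitudes (not slopes), the horizontal variant is simpler than the natural visibility graph and admits exact analytical results for uncorrelated noise.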

  15. Advanced spectral methods for climatic time series

    USGS Publications Warehouse

    Ghil, M.; Allen, M.R.; Dettinger, M.D.; Ide, K.; Kondrashov, D.; Mann, M.E.; Robertson, A.W.; Saunders, A.; Tian, Y.; Varadi, F.; Yiou, P.

    2002-01-01

    The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict climatic variability. The discovery and implementation of a number of novel methods for extracting useful information from time series has recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this review we describe the connections between time series analysis and nonlinear dynamics, discuss signal-to-noise enhancement, and present some of the novel methods for spectral analysis. The various steps, as well as the advantages and disadvantages of these methods, are illustrated by their application to an important climatic time series, the Southern Oscillation Index. This index captures major features of interannual climate variability and is used extensively in its prediction. Regional and global sea surface temperature data sets are used to illustrate multivariate spectral methods. Open questions and further prospects conclude the review.

  16. Complex network approach to fractional time series

    NASA Astrophysics Data System (ADS)

    Manshour, Pouya

    2015-10-01

    In order to extract correlation information inherited in stochastic time series, the visibility graph algorithm has been recently proposed, by which a time series can be mapped onto a complex network. We demonstrate that the visibility algorithm is not an appropriate one to study the correlation aspects of a time series. We then employ the horizontal visibility algorithm, as a much simpler one, to map fractional processes onto complex networks. The degree distributions are shown to have parabolic exponential forms with Hurst dependent fitting parameter. Further, we take into account other topological properties such as maximum eigenvalue of the adjacency matrix and the degree assortativity, and show that such topological quantities can also be used to predict the Hurst exponent, with an exception for anti-persistent fractional Gaussian noises. To solve this problem, we take into account the Spearman correlation coefficient between nodes' degrees and their corresponding data values in the original time series.

  17. Detecting nonlinear structure in time series

    SciTech Connect

    Theiler, J.

    1991-01-01

    We describe an approach for evaluating the statistical significance of evidence for nonlinearity in a time series. The formal application of our method requires the careful statement of a null hypothesis which characterizes a candidate linear process, the generation of an ensemble of "surrogate" data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. While some data sets very cleanly exhibit low-dimensional chaos, there are many cases where the evidence is sketchy and difficult to evaluate. We hope to provide a framework within which such claims of nonlinearity can be evaluated. 5 refs., 4 figs.
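A minimal sketch of the surrogate idea, using random shuffles as the surrogate ensemble. Shuffling tests the simpler null hypothesis of temporally independent data rather than the constrained linear-process surrogates the abstract describes; lag-1 autocorrelation serves as an illustrative discriminating statistic:

```python
import math
import random
import statistics

def lag1_autocorr(x):
    """Lag-1 autocorrelation, used here as the discriminating statistic."""
    mu = statistics.fmean(x)
    num = sum((a - mu) * (b - mu) for a, b in zip(x, x[1:]))
    den = sum((a - mu) ** 2 for a in x)
    return num / den

def surrogate_rank(series, statistic, n_surrogates=99, seed=0):
    """Count surrogates whose statistic is at least as large as the
    original's; a count of 0 means the original is highly atypical
    under the shuffle (temporal-independence) null."""
    rng = random.Random(seed)
    s0 = statistic(series)
    count = 0
    for _ in range(n_surrogates):
        surr = series[:]
        rng.shuffle(surr)  # destroys temporal structure, keeps amplitudes
        if statistic(surr) >= s0:
            count += 1
    return count

# A smooth sinusoid is strongly autocorrelated; no shuffle should match it.
series = [math.sin(i / 5.0) for i in range(200)]
rank = surrogate_rank(series, lag1_autocorr)
```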

  18. Homogenising time series: Beliefs, dogmas and facts

    NASA Astrophysics Data System (ADS)

    Domonkos, P.

    2010-09-01

    For obtaining reliable information about climate change and climate variability the use of high-quality data series is essential, and one basic tool of quality improvement is the statistical homogenisation of observed time series. In recent decades a large number of homogenisation methods have been developed, but the real effects of their application on time series are still not entirely known. The ongoing COST HOME project (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than earlier. As part of the COST activity, a benchmark dataset was built whose characteristics approach well the characteristics of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods, and analyse the real effects of selected theoretical recommendations. The author believes that several old theoretical rules have to be re-evaluated. Some examples of the open questions: a) Can statistically detected change-points be accepted only with the confirmation of metadata information? b) Do semi-hierarchic algorithms for detecting multiple change-points in time series function effectively in practise? c) Is it good to limit the spatial comparison of candidate series to up to five other series in the neighbourhood? Empirical results - those from the COST benchmark, and other experiments too - show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities seem like part of the climatic variability, thus the pure application of the classic theory that change-points of observed time series can be found and corrected one-by-one is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in raw time series.
The developers and users of homogenisation methods have to bear in mind that

  19. Scale-dependent intrinsic entropies of complex time series.

    PubMed

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractional Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. PMID:26953181
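The coarse-graining and sample-entropy building blocks of MSE (the baseline that the paper extends with empirical mode decomposition) can be sketched as follows; the parameters m = 2 and r = 0.2 are conventional defaults, and this is a compact textbook formulation rather than the authors' code:

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r), with tolerance r given as a fraction
    of the series' standard deviation; higher values mean more irregular."""
    n = len(x)
    mu = sum(x) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    tol = r * sd

    def matches(mm):
        # Count template pairs of length mm within Chebyshev distance tol.
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= tol:
                    c += 1
        return c

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

def coarse_grain(x, scale):
    """Non-overlapping window averages used by multiscale entropy."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]

# A strictly periodic series is far more regular than white noise.
periodic = [0.0, 1.0] * 100
rng = random.Random(1)
noise = [rng.random() for _ in range(200)]
```

MSE evaluates `sample_entropy(coarse_grain(x, s))` for a range of scales s, producing an entropy-versus-scale curve.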

  20. Detecting chaos in irregularly sampled time series.

    PubMed

    Kulp, C W

    2013-09-01

    Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series, but the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars. PMID:24089946
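The classic Lomb-Scargle periodogram at the heart of the modification can be sketched compactly (a generic implementation of the standard formula, not the authors' code):

```python
import math
import random

def lomb_scargle(t, x, freqs):
    """Classic Lomb-Scargle periodogram for irregularly sampled data:
    mean-subtracted signal, per-frequency time offset tau."""
    mu = sum(x) / len(x)
    xc = [v - mu for v in x]
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        tau = math.atan2(sum(math.sin(2.0 * w * ti) for ti in t),
                         sum(math.cos(2.0 * w * ti) for ti in t)) / (2.0 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        ct = sum(v * ci for v, ci in zip(xc, c))
        st = sum(v * si for v, si in zip(xc, s))
        power.append(0.5 * (ct * ct / sum(ci * ci for ci in c)
                            + st * st / sum(si * si for si in s)))
    return power

# Irregular sampling of a 0.1 Hz sine over 100 s: the periodogram
# should peak at the bin nearest 0.1 Hz.
rng = random.Random(3)
times = sorted(rng.uniform(0.0, 100.0) for _ in range(80))
signal = [math.sin(2.0 * math.pi * 0.1 * ti) for ti in times]
freqs = [0.01 * k for k in range(1, 21)]
spectrum = lomb_scargle(times, signal, freqs)
```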

  1. Heuristic segmentation of a nonstationary time series

    NASA Astrophysics Data System (ADS)

    Fukuda, Kensuke; Eugene Stanley, H.; Nunes Amaral, Luís A.

    2004-02-01

    Many phenomena, both natural and human influenced, give rise to signals whose statistical properties change under time translation, i.e., are nonstationary. For some practical purposes, a nonstationary time series can be seen as a concatenation of stationary segments. However, the exact segmentation of a nonstationary time series is a hard computational problem which cannot be solved exactly by existing methods. For this reason, heuristic methods have been proposed. Using one such method, it has been reported that for several cases of interest—e.g., heart beat data and Internet traffic fluctuations—the distribution of durations of these stationary segments decays with a power-law tail. A potential technical difficulty that has not been thoroughly investigated is that a nonstationary time series with a (scale-free) power-law distribution of stationary segments is harder to segment than other nonstationary time series because of the wider range of possible segment lengths. Here, we investigate the validity of a heuristic segmentation algorithm recently proposed by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)] by systematically analyzing surrogate time series with different statistical properties. We find that if a given nonstationary time series has stationary periods whose length is distributed as a power law, the algorithm can split the time series into a set of stationary segments with the correct statistical properties. We also find that the estimated power-law exponent of the distribution of stationary-segment lengths is affected by (i) the minimum segment length and (ii) the ratio R ≡ σ_ε/σ_x̄, where σ_x̄ is the standard deviation of the mean values of the segments and σ_ε is the standard deviation of the fluctuations within a segment. Furthermore, we determine that the performance of the algorithm is generally not affected by uncorrelated noise spikes or by weak long-range temporal correlations of the fluctuations within segments.
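One step of such a heuristic segmentation can be sketched as a search for the cut point maximizing Student's t between the left and right means, which is the criterion of Bernaola-Galván et al.; the full algorithm recurses on both halves while the best split remains statistically significant:

```python
import math
import statistics

def best_split(series, min_len=5):
    """Return (index, t) for the cut maximizing Student's t between the
    means of the left and right parts; one step of the recursive heuristic."""
    best_i, best_t = None, -1.0
    for i in range(min_len, len(series) - min_len):
        left, right = series[:i], series[i:]
        m1, m2 = statistics.fmean(left), statistics.fmean(right)
        v1, v2 = statistics.variance(left), statistics.variance(right)
        pooled = math.sqrt(((len(left) - 1) * v1 + (len(right) - 1) * v2)
                           / (len(series) - 2))
        se = pooled * math.sqrt(1.0 / len(left) + 1.0 / len(right))
        tstat = abs(m1 - m2) / se if se else float("inf")
        if tstat > best_t:
            best_i, best_t = i, tstat
    return best_i, best_t
```

For a clean step function the maximum t falls exactly at the level change.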

  2. Forbidden patterns in financial time series.

    PubMed

    Zanin, Massimiliano

    2008-03-01

    The existence of forbidden patterns, i.e., certain sequences missing from a given time series, is a recently proposed instrument of potential application in the study of time series. Forbidden patterns are related to the permutation entropy, which has the basic properties of classic chaos indicators, such as the Lyapunov exponent or the Kolmogorov entropy, thus allowing one to separate deterministic (usually chaotic) from random series; however, it requires fewer values of the series to be calculated, and it is suitable for use with small datasets. In this paper, the appearance of forbidden patterns is studied in different economic indicators such as stock indices (Dow Jones Industrial Average and Nasdaq Composite), NYSE stocks (IBM and Boeing), and others (ten-year bond interest rate), to find evidence of deterministic behavior in their evolution. Moreover, the rate of appearance of the forbidden patterns is calculated, and some considerations about the underlying dynamics are suggested. PMID:18377070
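
    The counting behind forbidden patterns and permutation entropy can be sketched as follows (a generic ordinal-pattern count, not the paper's code; the order `m` is illustrative). For the fully chaotic logistic map, three strictly decreasing consecutive values can never occur, so the ordinal pattern (2, 1, 0) is forbidden.

```python
import numpy as np
from itertools import permutations
from math import log, factorial

def ordinal_patterns(x, m):
    """Count the order-m ordinal (permutation) patterns occurring in x."""
    x = np.asarray(x)
    counts = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - m + 1):
        counts[tuple(np.argsort(x[i:i + m]))] += 1
    return counts

def forbidden_patterns(x, m):
    """Patterns of order m that never appear in x."""
    return [p for p, c in ordinal_patterns(x, m).items() if c == 0]

def permutation_entropy(x, m):
    """Normalised permutation entropy (0 = fully predictable, 1 = random)."""
    counts = ordinal_patterns(x, m)
    n = sum(counts.values())
    probs = [c / n for c in counts.values() if c > 0]
    return -sum(p * log(p) for p in probs) / log(factorial(m))
```

A deterministic series leaves some patterns empty even for long records, whereas a random series of comparable length visits all of them; this is the separation the abstract describes.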

  3. Predicting road accidents: Structural time series approach

    NASA Astrophysics Data System (ADS)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-07-01

    In this paper, a model for the occurrence of road accidents in Malaysia between 1970 and 2010 was developed, and the number of road accidents was predicted using the structural time series approach. The models are developed using a stepwise method, and the residuals of each step are analyzed. The accuracy of each model is assessed using the mean absolute percentage error (MAPE), and the best model is chosen based on the smallest Akaike information criterion (AIC) value. The structural time series approach found that the local linear trend model best represents the road accidents. This model allows the level and slope components to vary over time. In addition, this approach also provides useful information for improving on the conventional time series method.
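
    The local linear trend model selected here can be filtered with a standard Kalman recursion. A minimal sketch with illustrative (not estimated) variance parameters; in practice the variances are fitted by maximum likelihood:

```python
import numpy as np

def local_linear_trend_filter(y, var_eps, var_level, var_slope):
    """Kalman filter for the local linear trend model:

    y_t     = level_t + eps_t
    level_t = level_{t-1} + slope_{t-1} + xi_t
    slope_t = slope_{t-1} + zeta_t
    """
    T = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    Z = np.array([[1.0, 0.0]])               # observation matrix
    Q = np.diag([var_level, var_slope])      # state-noise covariance
    a = np.array([y[0], 0.0])                # initial state
    P = np.eye(2) * 1e4                      # vague initial covariance
    filtered = []
    for obs in y:
        # predict
        a = T @ a
        P = T @ P @ T.T + Q
        # update
        F = (Z @ P @ Z.T)[0, 0] + var_eps    # innovation variance
        K = (P @ Z.T / F).ravel()            # Kalman gain
        v = obs - (Z @ a)[0]                 # innovation
        a = a + K * v
        P = P - np.outer(K, Z @ P)
        filtered.append(a.copy())
    return np.array(filtered)                # columns: level, slope
```

Because both the level and the slope are part of the state, the filter tracks a trend whose direction changes over time, which is exactly the flexibility the abstract attributes to this model.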

  4. Development of an IUE Time Series Browser

    NASA Technical Reports Server (NTRS)

    Massa, Derck

    2005-01-01

    The International Ultraviolet Explorer (IUE) satellite operated successfully for more than 17 years. Its archive of more than 100,000 science exposures is widely acknowledged as an invaluable scientific resource that will not be duplicated in the foreseeable future. We have searched this archive for objects which were observed 10 or more times with the same spectral dispersion and wavelength coverage over the lifetime of IUE. Using this definition of a time series, we find that roughly half of the science exposures are members of such time series. This paper describes a Web-based IUE time series browser which enables the user to visually inspect the repeated observations for variability and to examine each member spectrum individually. If the researcher determines that a specific data set is worthy of closer study, it can easily be downloaded for detailed analysis.

  5. Learning time series for intelligent monitoring

    NASA Technical Reports Server (NTRS)

    Manganaris, Stefanos; Fisher, Doug

    1994-01-01

    We address the problem of classifying time series according to their morphological features in the time domain. In a supervised machine-learning framework, we induce a classification procedure from a set of preclassified examples. For each class, we infer a model that captures its morphological features using Bayesian model induction and the minimum message length approach to assign priors. In the performance task, we classify a time series in one of the learned classes when there is enough evidence to support that decision. Time series with sufficiently novel features, belonging to classes not present in the training set, are recognized as such. We report results from experiments in a monitoring domain of interest to NASA.

  6. Integrated method for chaotic time series analysis

    DOEpatents

    Hively, L.M.; Ng, E.G.

    1998-09-29

    Methods and apparatus are disclosed for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.

  7. Integrated method for chaotic time series analysis

    DOEpatents

    Hively, Lee M.; Ng, Esmond G.

    1998-01-01

    Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.

  8. Fractal and natural time analysis of geoelectrical time series

    NASA Astrophysics Data System (ADS)

    Ramirez Rojas, A.; Moreno-Torres, L. R.; Cervantes, F.

    2013-05-01

    In this work we show the analysis of geoelectric time series linked with two earthquakes of M=6.6 and M=7.4. These time series were monitored at the South Pacific Mexican coast, which is the most important active seismic subduction zone in México. The geoelectric time series were analyzed using two complementary methods: a fractal analysis, by means of detrended fluctuation analysis (DFA) in conventional time, and the power spectrum defined in the natural time domain (NTD). In conventional time we found long-range correlations prior to the EQ occurrences, and simultaneously in NTD the behavior of the power spectrum suggests the possible existence of seismic electric signals (SES) similar to those previously reported in equivalent time series monitored in Greece prior to earthquakes of relevant magnitude.

  9. Layered Ensemble Architecture for Time Series Forecasting.

    PubMed

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and the diversity of the individual networks in constructing an ensemble. LEA trains the different networks in the ensemble on different training sets with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble, reflecting LEA's emphasis on network accuracy. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 neural network forecasting competitions, as well as on several standard benchmark time series. In terms of forecasting accuracy, our experimental results show clearly that LEA is better than other ensemble and non-ensemble methods. PMID:25751882

  10. Climate Time Series Analysis and Forecasting

    NASA Astrophysics Data System (ADS)

    Young, P. C.; Fildes, R.

    2009-04-01

    This paper will discuss various aspects of climate time series data analysis, modelling and forecasting being carried out at Lancaster. This will include state-dependent parameter, nonlinear, stochastic modelling of globally averaged atmospheric carbon dioxide; the computation of emission strategies based on modern control theory; and extrapolative time series benchmark forecasts of annual average temperature, both global and local. The key to the forecasting evaluation will be the iterative estimation of forecast error based on rolling origin comparisons, as recommended in the forecasting research literature. The presentation will conclude with a comparison of the time series forecasts with forecasts produced from global circulation models and a discussion of the implications for climate modelling research.

  11. Intrinsic superstatistical components of financial time series

    NASA Astrophysics Data System (ADS)

    Vamoş, Călin; Crăciun, Maria

    2014-12-01

    Time series generated by a complex hierarchical system exhibit various types of dynamics at different time scales. A financial time series is an example of such a multiscale structure with time scales ranging from minutes to several years. In this paper we decompose the volatility of financial indices into five intrinsic components and we show that it has a heterogeneous scale structure. The small-scale components have a stochastic nature and they are independent 99% of the time, becoming synchronized during financial crashes and enhancing the heavy tails of the volatility distribution. The deterministic behavior of the large-scale components is related to the nonstationarity of the financial markets evolution. Our decomposition of the financial volatility is a superstatistical model more complex than those usually limited to a superposition of two independent statistics at well-separated time scales.

  12. Characterization of Experimental Chaotic Time Series

    NASA Astrophysics Data System (ADS)

    Tomlin, Brett; Olsen, Thomas; Callan, Kristine; Wiener, Richard

    2004-05-01

    Correlation dimension and Lyapunov dimension are complementary measures of the strength of the chaotic dynamics of a nonlinear system. Long time series were obtained from experiment, both in a modified Taylor-Couette fluid flow apparatus and in a nonlinear electronic circuit. The irregular generation of Taylor vortex pairs in Taylor-Couette flow with hourglass geometry has previously demonstrated low-dimensional chaos [T. Olsen, R. Bjorge, and R. Wiener, Bull. Am. Phys. Soc. 47(10), 76 (2002)]. The nonlinear circuit allows for the acquisition of very large time series and serves as a test case for the numerical procedures. Details of the calculations and results are presented.
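
    The correlation dimension mentioned here is commonly estimated with the Grassberger-Procaccia correlation sum; a minimal sketch (the generic estimator, not the authors' code; the radii are illustrative and must lie in the scaling region):

```python
import numpy as np

def correlation_sum(points, r):
    """C(r): fraction of distinct point pairs closer than r."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.mean(d[iu] < r)

def correlation_dimension(points, radii):
    """Slope of log C(r) versus log r, estimating the correlation dimension."""
    c = np.array([correlation_sum(points, r) for r in radii])
    mask = c > 0                       # drop empty radii before taking logs
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope
```

For an attractor reconstructed from a time series, `points` would be the delay-embedded vectors; the pairwise-distance matrix here is O(n²) in memory, so long experimental series are usually processed in chunks.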

  13. Detecting smoothness in noisy time series

    SciTech Connect

    Cawley, R.; Hsu, G.; Salvino, L.W.

    1996-06-01

    We describe the role of chaotic noise reduction in detecting an underlying smoothness in a dataset. We have described elsewhere a general method for assessing the presence of determinism in a time series, which is to test against the class of datasets producing smoothness (i.e., the null hypothesis is determinism). In order to reduce the likelihood of a false call, we recommend this kind of analysis be applied first to a time series whose deterministic origin is in question. We believe this step should be taken before implementing other methods of dynamical analysis and measurement, such as correlation dimension or Lyapunov spectrum. © 1996 American Institute of Physics.

  14. Clustering Short Time-Series Microarray

    NASA Astrophysics Data System (ADS)

    Ping, Loh Wei; Hasan, Yahya Abu

    2008-01-01

    Most microarray analyses are carried out on static gene expression. However, the dynamical study of microarrays has lately gained more attention. Most research on time-series microarrays emphasizes the bioscience and medical aspects, and few studies approach the numerical aspect. This study attempts to analyze short time-series microarray data mathematically using the STEM clustering tool, which formally preprocesses the data before clustering. We next introduce the Circular Mould Distance (CMD) algorithm with combinations of both preprocessing and clustering analysis. The two methods are then compared in terms of efficiency.

  15. TimeSeer: Scagnostics for high-dimensional time series.

    PubMed

    Dang, Tuan Nhon; Anand, Anushka; Wilkinson, Leland

    2013-03-01

    We introduce a method (Scagnostic time series) and an application (TimeSeer) for organizing multivariate time series and for guiding interactive exploration through high-dimensional data. The method is based on nine characterizations of the 2D distributions of orthogonal pairwise projections of a set of points in multidimensional Euclidean space. These characterizations include measures such as density, skewness, shape, outliers, and texture. Working directly with these Scagnostic measures, we can locate anomalous or interesting subseries for further analysis. Our application is designed to handle the types of doubly multivariate data series that are often found in security, financial, social, and other sectors. PMID:23307611

  16. Multifractal analysis of polyalanines time series

    NASA Astrophysics Data System (ADS)

    Figueirêdo, P. H.; Nogueira, E.; Moret, M. A.; Coutinho, Sérgio

    2010-05-01

    Multifractal properties of the energy time series of short α-helix structures, specifically from a polyalanine family, are investigated through the MF-DFA technique (multifractal detrended fluctuation analysis). Estimates for the generalized Hurst exponent h(q) and its associated multifractal exponents τ(q) are obtained for several series generated by numerical simulations of molecular dynamics in different systems from distinct initial conformations. All simulations were performed using the GROMOS force field, implemented in the program THOR. The main results show that all series exhibit multifractal behavior depending on the number of residues and the temperature. Moreover, the multifractal spectra reveal important aspects of the time evolution of the system and suggest that the nucleation process of the secondary structures during visits to the energy hypersurface is an essential feature of the folding process.
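
    A compact sketch of the MF-DFA estimate of h(q) described above (order-1 detrending only; the scales and q values below are illustrative, and this is not the study's implementation):

```python
import numpy as np

def mfdfa_hq(x, q_list, scales):
    """Generalized Hurst exponents h(q) by multifractal DFA.

    The profile (cumulative sum) is split into windows of each scale s,
    a linear trend is removed per window, and the q-th order fluctuation
    function F_q(s) is formed; h(q) is the slope of log F_q vs log s.
    """
    profile = np.cumsum(x - np.mean(x))
    h = {}
    for q in q_list:
        logF = []
        for s in scales:
            n_win = len(profile) // s
            t = np.arange(s)
            rms = []
            for w in range(n_win):
                seg = profile[w * s:(w + 1) * s]
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                rms.append(np.mean((seg - trend) ** 2))
            rms = np.asarray(rms)
            if q == 0:                      # q = 0 needs the log-average form
                Fq = np.exp(0.5 * np.mean(np.log(rms)))
            else:
                Fq = np.mean(rms ** (q / 2)) ** (1 / q)
            logF.append(np.log(Fq))
        h[q] = np.polyfit(np.log(scales), logF, 1)[0]
    return h
```

A monofractal signal gives an essentially constant h(q); a spread of h(q) across q is the multifractal signature the abstract refers to (for uncorrelated noise, h(2) ≈ 0.5).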

  17. SO2 EMISSIONS AND TIME SERIES MODELS

    EPA Science Inventory

    The paper describes a time series model that permits the estimation of the statistical properties of pounds of SO2 per million Btu in stack emissions. It uses measured values for this quantity provided by coal sampling and analysis (CSA), by a continuous emissions monitor (CEM), ...

  18. Three Analysis Examples for Time Series Data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    With improvements in instrumentation and the automation of data collection, plot level repeated measures and time series data are increasingly available to monitor and assess selected variables throughout the duration of an experiment or project. Records and metadata on variables of interest alone o...

  19. Directionality volatility in electroencephalogram time series

    NASA Astrophysics Data System (ADS)

    Mansor, Mahayaudin M.; Green, David A.; Metcalfe, Andrew V.

    2016-06-01

    We compare time series of electroencephalograms (EEGs) from healthy volunteers with EEGs from subjects diagnosed with epilepsy. The EEG time series from the healthy group are recorded during the awake state with eyes open and eyes closed, and the records from subjects with epilepsy are taken from three different recording regions of pre-surgical diagnosis: hippocampal, epileptogenic and seizure zone. The comparisons for these five categories are in terms of deviations from linear time series models with constant-variance Gaussian white noise error inputs. One feature investigated is directionality, and how this can be modelled by either non-linear threshold autoregressive models or non-Gaussian errors. A second feature is volatility, which is modelled by Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) processes. Other features include the proportion of variability accounted for by time series models, and the skewness and the kurtosis of the residuals. The results suggest these comparisons may have diagnostic potential for epilepsy and provide early warning of seizures.
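
    The GARCH volatility modelling referred to here can be sketched with the standard GARCH(1,1) recursion (the generic model, not the paper's fitted specification; the parameter values are illustrative):

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Simulate GARCH(1,1): sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    rng = np.random.default_rng(seed)
    sigma2 = omega / (1 - alpha - beta)          # unconditional variance
    r = np.empty(n)
    for t in range(n):
        r[t] = rng.normal(0.0, np.sqrt(sigma2))  # return with current volatility
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

def garch11_volatility(r, omega, alpha, beta):
    """Conditional-variance recursion for an observed series r."""
    sigma2 = np.empty(len(r))
    sigma2[0] = omega / (1 - alpha - beta)
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

Even with Gaussian innovations the simulated series has excess kurtosis and volatility clustering, which is what makes the GARCH residual structure a useful discriminating feature between recording categories.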

  20. Topological analysis of chaotic time series

    NASA Astrophysics Data System (ADS)

    Gilmore, Robert

    1997-10-01

    Topological methods have recently been developed for the classification, analysis, and synthesis of chaotic time series. These methods can be applied to time series with a Lyapunov dimension less than three. The procedure determines the stretching and squeezing mechanisms which operate to create a strange attractor and organize all the unstable periodic orbits in the attractor in a unique way. Strange attractors are identified by a set of integers. These are topological invariants for a two-dimensional branched manifold, which is the infinite-dissipation limit of the strange attractor. It is remarkable that this topological information can be extracted from chaotic time series. The data required for this analysis need not be extensive or exceptionally clean. The topological invariants: (1) are subject to validation/invalidation tests; (2) describe how to model the data; and (3) do not change as control parameters change. Topological analysis is the first step in a doubly discrete classification scheme for strange attractors. The second discrete classification involves specification of a 'basis set' of periodic orbits whose presence forces the existence of all other periodic orbits in the strange attractor. The basis set of orbits does change as control parameters change. Quantitative models developed to describe time series data are tested by the methods of entrainment. This analysis procedure has been applied to analyze a number of data sets. Several analyses are described.

  1. Event Discovery in Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Preston, D.; Protopapas, P.; Brodley, C.

    2009-09-01

    The discovery of events in astronomical time series data is a non-trivial problem. Existing methods address the problem by requiring a fixed-size sliding window which, given the varying lengths of events and sampling rates, could overlook important events. In this work, we develop probability models for finding the significance of an arbitrary-sized sliding window, and use these probabilities to find areas of significance. In addition, we present our analyses of major surveys archived at the Time Series Center, part of the Initiative in Innovative Computing at Harvard University. We applied our method to the time series data in order to discover events such as microlensing or any non-periodic events in the MACHO, OGLE and TAOS surveys. The analysis shows that the method is an effective tool for filtering out nearly 99% of noisy and uninteresting time series from a large set of data, while still providing full recovery of all known variable events (microlensing, blue star events, supernovae, etc.). Furthermore, due to its efficiency, this method can be performed on-the-fly and will be used to analyze upcoming surveys, such as Pan-STARRS.

  2. Nonlinear Time Series Analysis via Neural Networks

    NASA Astrophysics Data System (ADS)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to make an effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history to adapt our trading system behaviour based on them.

  3. Delay Differential Analysis of Time Series

    PubMed Central

    Lainscsek, Claudia; Sejnowski, Terrence J.

    2015-01-01

    Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. Together, this time-domain toolbox provides higher temporal resolution, increased frequency and phase coupling information, and it allows an easy and straightforward implementation of higher-order spectra across time
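
    The delay embeddings underlying delay differential analysis are simple to construct; a minimal sketch of both the uniform and the nonuniform variants (generic constructions, not the authors' toolbox):

```python
import numpy as np

def delay_embedding(x, dim, tau):
    """Uniform delay embedding: rows are (x_t, x_{t-tau}, ..., x_{t-(dim-1)tau})."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - k) * tau:(dim - 1 - k) * tau + n]
                            for k in range(dim)])

def nonuniform_embedding(x, delays):
    """Embedding with arbitrary delays, e.g. delays=(0, 3, 7):
    rows are (x_{t-d} for d in delays)."""
    x = np.asarray(x)
    dmax = max(delays)
    n = len(x) - dmax
    return np.column_stack([x[dmax - d:dmax - d + n] for d in delays])
```

Derivative embeddings replace the delayed copies with numerical derivatives of the signal; the functional embeddings used in DDE modelling combine the two ideas.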

  4. Remote Sensing Time Series Product Tool

    NASA Technical Reports Server (NTRS)

    Predos, Don; Ryan, Robert E.; Ross, Kenton W.

    2006-01-01

    The TSPT (Time Series Product Tool) software was custom-designed for NASA to rapidly create and display single-band and band-combination time series, such as NDVI (Normalized Difference Vegetation Index) images, for wide-area crop surveillance and for other time-critical applications. The TSPT, developed in MATLAB, allows users to create and display various MODIS (Moderate Resolution Imaging Spectroradiometer) or simulated VIIRS (Visible/Infrared Imager Radiometer Suite) products as single images, as time series plots at a selected location, or as temporally processed image videos. Manually creating these types of products is extremely labor intensive; however, the TSPT development tool makes the process simplified and efficient. MODIS is ideal for monitoring large crop areas because of its wide swath (2330 km), its relatively small ground sample distance (250 m), and its high temporal revisit time (twice daily). Furthermore, because MODIS imagery is acquired daily, rapid changes in vegetative health can potentially be detected. The new TSPT technology provides users with the ability to temporally process high-revisit-rate satellite imagery, such as that acquired from MODIS and from its successor, the VIIRS. The TSPT features the important capability of fusing data from both MODIS instruments onboard the Terra and Aqua satellites, which drastically improves cloud statistics. With the TSPT, MODIS metadata is used to find and optionally remove bad and suspect data. Noise removal and temporal processing techniques allow users to create low-noise time series plots and image videos and to select settings and thresholds that tailor particular output products. The TSPT GUI (graphical user interface) provides an interactive environment for crafting what-if scenarios by enabling a user to repeat product generation using different settings and thresholds. 
The TSPT Application Programming Interface provides more fine-tuned control of product generation, allowing experienced

  5. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
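
    The blockwise Chebyshev fitting described above can be sketched with NumPy's Chebyshev utilities (a generic illustration, not the flight algorithm; the block length and polynomial degree are illustrative):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(block, degree):
    """Fit a Chebyshev series to one block; the coefficients are the compressed form."""
    t = np.linspace(-1.0, 1.0, len(block))   # map the fitting interval to [-1, 1]
    return C.chebfit(t, block, degree)

def decompress_block(coeffs, n):
    """Evaluate the Chebyshev series back onto n sample points."""
    return C.chebval(np.linspace(-1.0, 1.0, n), coeffs)

def compress(series, block_len=64, degree=7):
    """Blockwise lossy compression: block_len samples -> degree+1 coefficients."""
    blocks = [series[i:i + block_len] for i in range(0, len(series), block_len)]
    return [compress_block(np.asarray(b, float), degree) for b in blocks]
```

With 64 samples reduced to 8 coefficients per block, the compression factor is 8; the near-uniform error distribution of the Chebyshev fit over each fitting interval is the "equal error property" the abstract describes.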

  6. Modelling population change from time series data

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.

    1992-01-01

    Information on change in population size over time is among the most basic inputs for population management. Unfortunately, population changes are generally difficult to identify, and once identified difficult to explain. Sources of variation (patterns) in population data include: changes in environment that affect carrying capacity and produce trend, autocorrelative processes, irregular environmentally induced perturbations, and stochasticity arising from population processes. In addition, populations are almost never censused, and many surveys (e.g., the North American Breeding Bird Survey) produce multiple, incomplete time series of population indices, providing further sampling complications. We suggest that each source of pattern should be used to address specific hypotheses regarding population change, but that failure to correctly model each source can lead to false conclusions about the dynamics of populations. We consider hypothesis tests based on each source of pattern, and the effects of autocorrelated observations and sampling error. We identify important constraints on analyses of time series that limit their use in identifying underlying relationships.

  7. Time series regression studies in environmental epidemiology

    PubMed Central

    Bhaskaran, Krishnan; Gasparrini, Antonio; Hajat, Shakoor; Smeeth, Liam; Armstrong, Ben

    2013-01-01

    Time series regression studies have been widely used in environmental epidemiology, notably in investigating the short-term associations between exposures such as air pollution, weather variables or pollen, and health outcomes such as mortality, myocardial infarction or disease-specific hospital admissions. Typically, for both exposure and outcome, data are available at regular time intervals (e.g. daily pollution levels and daily mortality counts) and the aim is to explore short-term associations between them. In this article, we describe the general features of time series data, and we outline the analysis process, beginning with descriptive analysis, then focusing on issues in time series regression that differ from other regression methods: modelling short-term fluctuations in the presence of seasonal and long-term patterns, dealing with time varying confounding factors and modelling delayed (‘lagged’) associations between exposure and outcome. We finish with advice on model checking and sensitivity analysis, and some common extensions to the basic model. PMID:23760528

  8. Time series regression studies in environmental epidemiology.

    PubMed

    Bhaskaran, Krishnan; Gasparrini, Antonio; Hajat, Shakoor; Smeeth, Liam; Armstrong, Ben

    2013-08-01

    Time series regression studies have been widely used in environmental epidemiology, notably in investigating the short-term associations between exposures such as air pollution, weather variables or pollen, and health outcomes such as mortality, myocardial infarction or disease-specific hospital admissions. Typically, for both exposure and outcome, data are available at regular time intervals (e.g. daily pollution levels and daily mortality counts) and the aim is to explore short-term associations between them. In this article, we describe the general features of time series data, and we outline the analysis process, beginning with descriptive analysis, then focusing on issues in time series regression that differ from other regression methods: modelling short-term fluctuations in the presence of seasonal and long-term patterns, dealing with time varying confounding factors and modelling delayed ('lagged') associations between exposure and outcome. We finish with advice on model checking and sensitivity analysis, and some common extensions to the basic model. PMID:23760528
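
    The lagged-association modelling described here can be sketched with a distributed-lag design matrix. A Poisson GLM with spline terms for season and long-term trend is the usual choice in this literature; plain least squares with a single seasonal harmonic is used below only to keep the sketch self-contained:

```python
import numpy as np

def lagged_design(exposure, max_lag):
    """Design matrix whose columns are the exposure at lags 0..max_lag."""
    n = len(exposure)
    cols = [exposure[max_lag - l:n - l] for l in range(max_lag + 1)]
    return np.column_stack(cols)

def fit_lag_model(outcome, exposure, max_lag, period=365.25):
    """Least-squares fit of outcome on lagged exposure plus one seasonal
    harmonic and an intercept; returns the distributed-lag coefficients."""
    X = lagged_design(exposure, max_lag)
    t = np.arange(max_lag, len(outcome))     # calendar time of each row
    season = np.column_stack([np.sin(2 * np.pi * t / period),
                              np.cos(2 * np.pi * t / period),
                              np.ones_like(t, dtype=float)])
    A = np.hstack([X, season])
    beta, *_ = np.linalg.lstsq(A, outcome[max_lag:], rcond=None)
    return beta[:max_lag + 1]
```

Including the seasonal terms in the same fit is what separates the short-term exposure-outcome association from the shared seasonal pattern, which is the central confounding issue the article discusses.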

  9. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  10. Sliced Inverse Regression for Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Li-Sue

    1995-11-01

    In this thesis, general nonlinear models for time series data are considered. A basic form is x_t = f(β_1^T X_{t-1}, β_2^T X_{t-1}, ..., β_k^T X_{t-1}, ε_t), where x_t is an observed time series, X_{t-1} is the vector of the first d time lags (x_{t-1}, x_{t-2}, ..., x_{t-d}), f is an unknown function, the β_i's are unknown vectors, and the ε_t's are independently distributed. Special cases include AR and TAR models. We investigate the feasibility of applying SIR/PHD (Li 1990, 1991) (the sliced inverse regression and principal Hessian directions methods) to estimate the β_i's. PCA (principal component analysis) is brought in to check one critical condition for SIR/PHD. Through simulation and a study of three well-known data sets, the Canadian lynx, the U.S. unemployment rate, and sunspot numbers, we demonstrate how SIR/PHD can effectively retrieve interesting low-dimensional structures from time series data.

  11. Accurate expressions for solar cell fill factors including series and shunt resistances

    NASA Astrophysics Data System (ADS)

    Green, Martin A.

    2016-02-01

    Together with open-circuit voltage and short-circuit current, fill factor is a key solar cell parameter. In their classic paper on limiting efficiency, Shockley and Queisser first investigated this factor's analytical properties showing, for ideal cells, it could be expressed implicitly in terms of the maximum power point voltage. Subsequently, fill factors usually have been calculated iteratively from such implicit expressions or from analytical approximations. In the absence of detrimental series and shunt resistances, analytical fill factor expressions have recently been published in terms of the Lambert W function available in most mathematical computing software. Using a recently identified perturbative relationship, exact expressions in terms of this function are derived in technically interesting cases when both series and shunt resistances are present but have limited impact, allowing a better understanding of their effect individually and in combination. Approximate expressions for arbitrary shunt and series resistances are then deduced, which are significantly more accurate than any previously published. A method based on the insights developed is also reported for deducing one-diode fits to experimental data.
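
A hedged sketch of the ideal-cell limit mentioned above (zero series resistance, infinite shunt resistance): with voltages normalized by nkT/q, the maximum-power voltage has the closed Lambert W form v_mp = W(e^(v_oc + 1)) - 1. The paper's resistive corrections are not reproduced; W is evaluated here by a small Newton iteration on the log form w + ln w = v_oc + 1 to avoid large exponentials.

```python
# Ideal one-diode fill factor via the Lambert W function.
import numpy as np

def lambert_w_exp(a, iters=50):
    """Return W(e^a) by Newton iteration on w + ln(w) = a (valid for a > 1)."""
    w = a - np.log(a)                         # asymptotic starting guess
    for _ in range(iters):
        w -= (w + np.log(w) - a) / (1.0 + 1.0 / w)
    return w

def ideal_fill_factor(voc):
    """voc normalized by nkT/q; current i(v) = (e^voc - e^v)/(e^voc - 1)."""
    vmp = lambert_w_exp(voc + 1.0) - 1.0      # maximum-power voltage
    imp = (np.exp(voc) - np.exp(vmp)) / (np.exp(voc) - 1.0)
    return vmp * imp / voc, vmp

ff, vmp = ideal_fill_factor(20.0)             # voc = 20 thermal-voltage units
# Stationarity check: vmp must satisfy voc = vmp + ln(1 + vmp).
resid = 20.0 - (vmp + np.log1p(vmp))
```

For v_oc = 20 the fill factor comes out near 0.81, in the usual range for good cells.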

  12. Time-Series Analysis: A Cautionary Tale

    NASA Technical Reports Server (NTRS)

    Damadeo, Robert

    2015-01-01

    Time-series analysis has often been a useful tool in atmospheric science for deriving long-term trends in various atmospherically important parameters (e.g., temperature or the concentration of trace gas species). In particular, time-series analysis has been repeatedly applied to satellite datasets in order to derive the long-term trends in stratospheric ozone, which is a critical atmospheric constituent. However, many of the potential pitfalls relating to the non-uniform sampling of the datasets were often ignored and the results presented by the scientific community have been unknowingly biased. A newly developed and more robust application of this technique is applied to the Stratospheric Aerosol and Gas Experiment (SAGE) II version 7.0 ozone dataset and the previous biases and newly derived trends are presented.

  13. Time Series Analysis Using Geometric Template Matching.

    PubMed

    Frank, Jordan; Mannor, Shie; Pineau, Joelle; Precup, Doina

    2013-03-01

    We present a novel framework for analyzing univariate time series data. At the heart of the approach is a versatile algorithm for measuring the similarity of two segments of time series called geometric template matching (GeTeM). First, we use GeTeM to compute a similarity measure for clustering and nearest-neighbor classification. Next, we present a semi-supervised learning algorithm that uses the similarity measure with hierarchical clustering in order to improve classification performance when unlabeled training data are available. Finally, we present a boosting framework called TDEBOOST, which uses an ensemble of GeTeM classifiers. TDEBOOST augments the traditional boosting approach with an additional step in which the features used as inputs to the classifier are adapted at each step to improve the training error. We empirically evaluate the proposed approaches on several datasets, such as accelerometer data collected from wearable sensors and ECG data. PMID:22641699

  14. Multifractal Analysis of Sunspot Number Time Series

    NASA Astrophysics Data System (ADS)

    Kasde, Satish Kumar; Gwal, Ashok Kumar; Sondhiya, Deepak Kumar

    2016-07-01

Multifractal-analysis-based approaches have recently been developed as an alternative framework to study the complex dynamical fluctuations in sunspot number data, including solar cycles 20 to 23 and the ascending phase of the current solar cycle 24. To reveal the multifractal nature, the time series of monthly sunspot numbers are analyzed by singularity spectrum and multiresolution wavelet analysis. Generally, the multifractality in sunspot numbers generates turbulence with the typical characteristics of the anomalous processes governing the magnetosphere and the interior of the Sun. Our analysis shows that the singularity spectrum of the sunspot data has a well-defined Gaussian shape, which establishes that the monthly sunspot number has multifractal character. The multifractal analysis is able to provide a local and adaptive description of the cyclic components of the sunspot number time series, which are non-stationary and the result of nonlinear processes. Keywords: sunspot numbers, magnetic field, multifractal analysis, wavelet transform techniques.

  15. Aggregated Indexing of Biomedical Time Series Data

    PubMed Central

    Woodbridge, Jonathan; Mortazavi, Bobak; Sarrafzadeh, Majid; Bui, Alex A.T.

    2016-01-01

Remote and wearable medical sensing has the potential to create very large and high dimensional datasets. Medical time series databases must be able to efficiently store, index, and mine these datasets to enable medical professionals to effectively analyze data collected from their patients. Conventional high dimensional indexing methods are a two-stage process. First, a superset of the true matches is efficiently extracted from the database. Second, supersets are pruned by comparing each of their objects to the query object and rejecting any objects falling outside a predetermined radius. This pruning stage heavily dominates the computational complexity of most conventional search algorithms. Therefore, indexing algorithms can be significantly improved by reducing the amount of pruning. This paper presents an online algorithm to aggregate biomedical time series data to significantly reduce the search space (index size) without compromising the quality of search results. This algorithm is built on the observation that biomedical time series signals are composed of cyclical and often similar patterns. This algorithm takes in a stream of segments and groups them into highly concentrated collections. Locality Sensitive Hashing (LSH) is used to reduce the overall complexity of the algorithm, allowing it to run online. The output of this aggregation is used to populate an index. The proposed algorithm yields logarithmic growth of the index (with respect to the total number of objects) while keeping sensitivity and specificity simultaneously above 98%. Both memory and runtime complexities of time series search are improved when using aggregated indexes. In addition, data mining tasks, such as clustering, exhibit runtimes that are orders of magnitudes faster when run on aggregated indexes.
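
An illustrative sketch of the LSH step mentioned above: random-hyperplane signatures send near-identical segments to the same bucket in constant time, so similar cycles can be grouped without pairwise comparison. This is a generic LSH family for cosine similarity, not the paper's full aggregation pipeline; all names are illustrative.

```python
# Random-hyperplane LSH signatures for time-series segments.
import numpy as np

def lsh_signature(segment, planes):
    """One sign bit per random hyperplane projection."""
    return tuple((planes @ segment) > 0)

rng = np.random.default_rng(1)
planes = rng.standard_normal((16, 32))        # 16-bit keys for 32-sample segments

base = np.sin(np.linspace(0, 4 * np.pi, 32))  # one "cycle" of a signal
near_dup = base + 1e-8 * rng.standard_normal(32)   # almost identical cycle
other = rng.standard_normal(32)               # unrelated segment

sig_a = lsh_signature(base, planes)
sig_b = lsh_signature(near_dup, planes)
sig_c = lsh_signature(other, planes)
```

Segments that are nearly identical share a signature with overwhelming probability, while unrelated segments almost always differ in at least one bit; buckets keyed on the signature therefore collect the "highly concentrated collections" the abstract describes.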

  16. Analysis of Polyphonic Musical Time Series

    NASA Astrophysics Data System (ADS)

    Sommer, Katrin; Weihs, Claus

A general model for pitch tracking of polyphonic musical time series will be introduced. Based on a model of Davy and Godsill (Bayesian harmonic models for musical pitch estimation and analysis, Technical Report 431, Cambridge University Engineering Department, 2002), the different pitches of the musical sound are estimated simultaneously with MCMC methods. Additionally, a preprocessing step is designed to improve the estimation of the fundamental frequencies (A comparative study on polyphonic musical time series using MCMC methods. In C. Preisach et al., editors, Data Analysis, Machine Learning, and Applications, Springer, Berlin, 2008). The preprocessing step compares real audio data with an alphabet constructed from the McGill Master Samples (Opolko and Wapnick, McGill University Master Samples [Compact disc], McGill University, Montreal, 1987), which consists of tones of different instruments. The tones with minimal Itakura-Saito distortion (Gray et al., Transactions on Acoustics, Speech, and Signal Processing ASSP-28(4):367-376, 1980) are chosen as first estimates and as starting points for the MCMC algorithms. Furthermore, the implementation of the alphabet is an approach to the recognition of the instruments generating the musical time series. Results are presented for mixed monophonic data from McGill and for self-recorded polyphonic audio data.

  17. Characterization of noisy symbolic time series.

    PubMed

    Kulp, Christopher W; Smith, Suzanne

    2011-02-01

    The 0-1 test for chaos is a recently developed time series characterization algorithm that can determine whether a system is chaotic or nonchaotic. While the 0-1 test was designed for deterministic series, in real-world measurement situations, noise levels may not be known and the 0-1 test may have difficulty distinguishing between chaos and randomness. In this paper, we couple the 0-1 test for chaos with a test for determinism and apply these tests to noisy symbolic series generated from various model systems. We find that the pairing of the 0-1 test with a test for determinism improves the ability to correctly distinguish between chaos and randomness from a noisy series. Furthermore, we explore the modes of failure for the 0-1 test and the test for determinism so that we can better understand the effectiveness of the two tests to handle various levels of noise. We find that while the tests can handle low noise and high noise situations, moderate levels of noise can lead to inconclusive results from the two tests. PMID:21405890
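
A hedged sketch of the basic 0-1 test described above (the Gottwald-Melbourne translation-variable construction; the companion determinism test used in the paper is not reproduced): the series drives a walk (p_n, q_n) whose mean-square displacement grows linearly for chaotic dynamics and stays bounded for regular dynamics, summarized by a statistic K near 1 or 0 respectively.

```python
# Basic 0-1 test for chaos: K ~ 1 for chaotic input, K ~ 0 for regular input.
import numpy as np

def zero_one_test(phi, n_c=20, seed=7):
    rng = np.random.default_rng(seed)
    N = len(phi)
    j = np.arange(1, N + 1)
    n = np.arange(1, N // 10 + 1)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(phi * np.cos(j * c))    # translation variables
        q = np.cumsum(phi * np.sin(j * c))
        M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                      for k in n])            # mean-square displacement
        Ks.append(np.corrcoef(n, M)[0, 1])    # growth-rate statistic
    return np.median(Ks)                      # median over random c

# Chaotic input: logistic map at r = 4; regular input: a period-7 cosine.
x = np.empty(2000); x[0] = 0.3
for t in range(1999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
K_chaos = zero_one_test(x)
K_regular = zero_one_test(np.cos(2 * np.pi * np.arange(2000) / 7.0))
```

The median over random frequencies c guards against isolated resonances; as the abstract notes, noise pushes K toward 1 as well, which is why the paper pairs this test with a determinism test.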

  18. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age.

  19. Time series segmentation with shifting means hidden markov models

    NASA Astrophysics Data System (ADS)

    Kehagias, Ath.; Fortin, V.

    2006-08-01

We present a new family of hidden Markov models and apply these to the segmentation of hydrological and environmental time series. The proposed hidden Markov models have a discrete state space and their structure is inspired from the shifting means models introduced by Chernoff and Zacks and by Salas and Boes. An estimation method inspired from the EM algorithm is proposed, and we show that it can accurately identify multiple change-points in a time series. We also show that the solution obtained using this algorithm can serve as a starting point for a Markov chain Monte Carlo Bayesian estimation method, thus reducing the computing time needed for the Markov chain to converge to a stationary distribution.
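
A minimal sketch of the underlying segmentation idea in its simplest form: a single mean-shift change point located by exhaustive least squares. The paper's HMM structure and EM-style estimator handle multiple change points and are not reproduced here; all names are illustrative.

```python
# One-change-point mean-shift detection by minimizing within-segment SSE.
import numpy as np

def best_split(x):
    """Split index minimizing the total within-segment sum of squares."""
    best, arg = np.inf, None
    for k in range(2, len(x) - 1):
        left, right = x[:k], x[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best:
            best, arg = sse, k
    return arg

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 150),   # mean 0 before the change
                    rng.normal(2.0, 1.0, 100)])  # mean 2 after it
k_hat = best_split(x)
```

For a two-standard-deviation shift the estimated change point lands within a few samples of the true index 150; extending this exhaustive search to many change points is what makes dynamic-programming or HMM formulations necessary.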

  20. Evolutionary factor analysis of replicated time series.

    PubMed

    Motta, Giovanni; Ombao, Hernando

    2012-09-01

In this article, we develop a novel method that explains the dynamic structure of multi-channel electroencephalograms (EEGs) recorded from several trials in a motor-visual task experiment. Preliminary analyses of our data suggest two statistical challenges. First, the variance at each channel and cross-covariance between each pair of channels evolve over time. Moreover, the cross-covariance profiles display a common structure across all pairs, and these features consistently appear across all trials. In the light of these features, we develop a novel evolutionary factor model (EFM) for multi-channel EEG data that systematically integrates information across replicated trials and allows for smoothly time-varying factor loadings. The individual EEG series share common features across trials, suggesting the need to pool information across trials, which motivates the use of the EFM for replicated time series. We explain the common co-movements of EEG signals through the existence of a small number of common factors. These latent factors are primarily responsible for processing the visual-motor task which, through the loadings, drive the behavior of the signals observed at different channels. The estimation of the time-varying loadings is based on the spectral decomposition of the estimated time-varying covariance matrix. PMID:22364516

  1. Homogenization of precipitation time series with ACMANT

    NASA Astrophysics Data System (ADS)

    Domonkos, Peter

    2015-10-01

A new method for the time series homogenization of observed precipitation (PP) totals is presented; it is a unit of the ACMANT software package. ACMANT is a relative homogenization method; a minimum of four time series with adequate spatial correlations is necessary for its use. The detection of inhomogeneities (IHs) is performed by fitting an optimal step function, while the calculation of adjustment terms is based on the minimization of the residual variance in the homogenized datasets. Together with the presentation of PP homogenization with ACMANT, some peculiarities of PP homogenization, such as the frequency and seasonal variation of IHs in observed PP data and their relation to the performance of homogenization methods, are discussed. In climatic regions with snowy winters, ACMANT distinguishes two seasons, namely a rainy season and a snowy season, and seasonal IHs are searched for with bivariate detection. ACMANT is a fully automatic method, is freely downloadable from the internet, and treats either daily or monthly input. Series of observed data in the input dataset may cover different periods, and the occurrence of data gaps is allowed. False zero values instead of the missing-data code, as well as physical outliers, should be corrected before running ACMANT. Efficiency tests indicate that ACMANT belongs among the best-performing methods, although further comparative tests of automatic homogenization methods are needed to confirm or reject this finding.

  2. Fractal fluctuations in cardiac time series

    NASA Technical Reports Server (NTRS)

    West, B. J.; Zhang, R.; Sanders, A. W.; Miniyar, S.; Zuckerman, J. H.; Levine, B. D.; Blomqvist, C. G. (Principal Investigator)

    1999-01-01

Human heart rate, controlled by complex feedback mechanisms, is a vital index of systemic circulation. However, it has been shown that beat-to-beat values of heart rate fluctuate continually over a wide range of time scales. Herein we use the relative dispersion, the ratio of the standard deviation to the mean, to show, by systematically aggregating the data, that the correlation in the beat-to-beat cardiac time series is a modulated inverse power law. This scaling property indicates the existence of long-time memory in the underlying cardiac control process and supports the conclusion that heart rate variability is a temporal fractal. We argue that the cardiac control system has allometric properties that enable it to respond to a dynamical environment through scaling.
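
A hedged sketch of the aggregation procedure described above: compute the relative dispersion (standard deviation over mean) of the series averaged in non-overlapping blocks of increasing size m. For uncorrelated data the relative dispersion falls off as m^(-1/2); the slower decay reported for heartbeat data is the signature of long-time memory. Synthetic white noise stands in for real RR-interval data here.

```python
# Relative dispersion vs. aggregation level; slope -1/2 marks white noise.
import numpy as np

def relative_dispersion(x, m):
    """Std/mean of the series averaged in non-overlapping blocks of m."""
    n = (len(x) // m) * m
    blocks = x[:n].reshape(-1, m).mean(axis=1)
    return blocks.std() / blocks.mean()

rng = np.random.default_rng(2)
x = rng.normal(10.0, 1.0, 65536)              # positive-mean white noise
ms = np.array([1, 2, 4, 8, 16, 32, 64, 128])
rd = np.array([relative_dispersion(x, m) for m in ms])
slope = np.polyfit(np.log(ms), np.log(rd), 1)[0]   # ~ -0.5 for white noise
```

A fractal series with long-range correlations would show a slope between -0.5 and 0, which is the deviation the paper uses as evidence of temporal fractality.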

  3. Nonlinear modeling of chaotic time series: Theory and applications

    NASA Astrophysics Data System (ADS)

    Casdagli, M.; Eubank, S.; Farmer, J. D.; Gibson, J.; Desjardins, D.; Hunter, N.; Theiler, J.

    We review recent developments in the modeling and prediction of nonlinear time series. In some cases, apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases, it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years, methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics, and human speech.
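
A minimal sketch of the reconstruct-then-predict idea summarized above: build delay vectors from the series, then forecast one step ahead with the nearest neighbor in the reconstructed state space. This is the simplest of the function-approximation methods the review covers, shown on the chaotic logistic map; all names are illustrative.

```python
# Delay embedding plus nearest-neighbor one-step forecasting.
import numpy as np

def make_delay_pairs(x, dim):
    """Pairs of (delay vector, next value), oldest coordinate first."""
    n = len(x)
    emb = np.column_stack([x[i:n - dim + i] for i in range(dim)])
    return emb, x[dim:]

def nn_predict(emb, targets, q):
    """Predict the successor of the training delay vector closest to q."""
    return targets[np.argmin(np.sum((emb - q) ** 2, axis=1))]

# Deterministic but random-looking data: the logistic map at r = 4.
x = np.empty(2500); x[0] = 0.31
for t in range(2499):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
train, test = x[:2000], x[2000:]
emb, targets = make_delay_pairs(train, dim=2)

preds = np.array([nn_predict(emb, targets, test[t - 2:t])
                  for t in range(2, len(test))])
rmse = np.sqrt(np.mean((preds - test[2:]) ** 2))
```

The short-term forecast error is far below the series' standard deviation (about 0.35), which is exactly the advantage over linear stochastic models that the review describes; the error grows with forecast horizon at a rate set by the Lyapunov exponent.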

  4. Nonlinear modeling of chaotic time series: Theory and applications

    SciTech Connect

Casdagli, M.; Eubank, S.; Farmer, J.D.; Gibson, J. (Santa Fe Inst., NM); Des Jardins, D.; Hunter, N.; Theiler, J.

    1990-01-01

    We review recent developments in the modeling and prediction of nonlinear time series. In some cases apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics and human speech. 162 refs., 13 figs.

  5. Time series analyses of global change data.

    PubMed

    Lane, L J; Nichols, M H; Osborn, H B

    1994-01-01

The hypothesis that statistical analyses of historical time series data can be used to separate the influences of natural variations from anthropogenic sources on global climate change is tested. Point, regional, national, and global temperature data are analyzed. Trend analyses for the period 1901-1987 suggest mean annual temperatures increased (in degrees C per century) globally at the rate of about 0.5, in the USA at about 0.3, in the south-western USA desert region at about 1.2, and at the Walnut Gulch Experimental Watershed in south-eastern Arizona at about 0.8. However, the rates of temperature change are not constant but vary within the 87-year period. Serial correlation and spectral density analysis of the temperature time series showed weak periodicities at various frequencies. The only common periodicity among the temperature series is an apparent cycle of about 43 years. The temperature time series were correlated with the Wolf sunspot index, atmospheric CO2 concentrations interpolated from the Siple ice core data, and atmospheric CO2 concentration data from Mauna Loa measurements. Correlation analysis of temperature data with concurrent data on atmospheric CO2 concentrations and the Wolf sunspot index supports previously reported significant correlation over the 1901-1987 period. Correlation analysis between temperature, atmospheric CO2 concentration, and the Wolf sunspot index for the shorter period, 1958-1987, when continuous Mauna Loa CO2 data are available, suggests significant correlation between global warming and atmospheric CO2 concentrations but no significant correlation between global warming and the Wolf sunspot index. This may be because the Wolf sunspot index apparently increased from 1901 until about 1960 and then decreased thereafter, while global warming apparently continued to increase through 1987. Correlation of sunspot activity with global warming may be spurious, but additional analyses are required to test this hypothesis.
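
A minimal sketch of the kind of trend computation reported above: an ordinary-least-squares slope on an annual temperature series, expressed in degrees C per century. The data here are synthetic (a 0.5 C/century trend plus noise), not the study's records.

```python
# OLS trend on an annual series, reported per century.
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1901, 1988)                       # the study's 1901-1987 window
temps = 15.0 + 0.005 * (years - 1901) + 0.2 * rng.standard_normal(len(years))

slope_per_year = np.polyfit(years, temps, 1)[0]     # degrees C per year
trend_per_century = 100.0 * slope_per_year          # comparable to the ~0.5 figure
```

With 87 years of data and realistic interannual noise, the fitted trend recovers the built-in 0.5 C/century to within its sampling error.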

  6. Time Series Photometry of KZ Lacertae

    NASA Astrophysics Data System (ADS)

    Joner, Michael D.

    2016-01-01

    We present BVRI time series photometry of the high amplitude delta Scuti star KZ Lacertae secured using the 0.9-meter telescope located at the Brigham Young University West Mountain Observatory. In addition to the multicolor light curves that are presented, the V data from the last six years of observations are used to plot an O-C diagram in order to determine the ephemeris and evaluate evidence for period change. We wish to thank the Brigham Young University College of Physical and Mathematical Sciences as well as the Department of Physics and Astronomy for their continued support of the research activities at the West Mountain Observatory.

  7. Time series analysis of temporal networks

    NASA Astrophysics Data System (ADS)

    Sikdar, Sandipan; Ganguly, Niloy; Mukherjee, Animesh

    2016-01-01

A common but important feature of all real-world networks is that they are temporal in nature, i.e., the network structure changes over time. Due to this dynamic nature, it becomes difficult to propose suitable growth models that can explain the various important characteristic properties of these networks. In fact, in many application-oriented studies only knowing these properties is sufficient. For instance, if one wishes to launch a targeted attack on a network, this can be done even without knowledge of the full network structure; rather, an estimate of some of the properties is sufficient to launch the attack. In this paper we show that even if the network structure at a future time point is not available, one can still manage to estimate its properties. We propose a novel method to map a temporal network to a set of time series instances, analyze them, and, using a standard forecast model of time series, try to predict the properties of the temporal network at a later time instance. To this aim, we consider eight properties, such as the number of active nodes, average degree, and clustering coefficient, and apply our prediction framework to them. We mainly focus on the temporal network of human face-to-face contacts and observe that it represents a stochastic process with memory that can be modeled as an autoregressive integrated moving average (ARIMA) process. We use cross-validation techniques to find the percentage accuracy of our predictions. An important observation is that the frequency-domain properties of the time series obtained from spectrogram analysis can be used to refine the prediction framework by identifying beforehand the cases where the error in prediction is likely to be high. This leads to an improvement of 7.96% (for error level ≤20%) in prediction accuracy on average across all datasets. As an application we show how such a prediction scheme can be used to launch targeted attacks on temporal networks.
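
A hedged sketch of the forecasting step: the paper fits full ARIMA models, but the autoregressive core can be illustrated with a plain AR(p) fit by least squares on a synthetic "network property" series; all names are illustrative.

```python
# Least-squares AR(p) fit and recursive forecasting.
import numpy as np

def fit_ar(x, p):
    """AR(p) by least squares: x[t] ~ c + a1*x[t-1] + ... + ap*x[t-p]."""
    rows = np.array([x[t - p:t][::-1] for t in range(p, len(x))])
    A = np.column_stack([np.ones(len(rows)), rows])
    coef, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return coef                                  # [c, a1, ..., ap]

def ar_forecast(x, coef, steps):
    """Iterate the fitted recursion forward from the end of x."""
    p = len(coef) - 1
    hist = list(x[-p:])
    out = []
    for _ in range(steps):
        nxt = coef[0] + np.dot(coef[1:], hist[::-1][:p])
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

# Synthetic property series from a known AR(2) process.
rng = np.random.default_rng(4)
x = np.zeros(3000)
for t in range(2, 3000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
coef = fit_ar(x, 2)
```

The fitted coefficients recover the generating values (0.6, -0.3) closely; differencing and moving-average terms would turn this into the ARIMA setting the paper uses.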

  8. Time series modelling of surface pressure data

    NASA Astrophysics Data System (ADS)

    Al-Awadhi, Shafeeqah; Jolliffe, Ian

    1998-03-01

In this paper we examine time series modelling of surface pressure data, as measured by a barograph, at Herne Bay, England, during the years 1981-1989. Autoregressive moving average (ARMA) models have been popular in many fields over the past 20 years, although applications in climatology have been rather less widespread than in some disciplines. Some recent examples are Milionis and Davies (Int. J. Climatol., 14, 569-579) and Seleshi et al. (Int. J. Climatol., 14, 911-923). We fit standard ARMA models to the pressure data separately for each of six 2-month natural seasons. Differences between the best-fitting models for different seasons are discussed. Barograph data are recorded continuously, whereas ARMA models are fitted to discretely recorded data. The effect of different spacings between the fitted data on the models chosen is discussed briefly. Often, ARMA models can give a parsimonious and interpretable representation of a time series, but for many series the assumptions underlying such models are not fully satisfied, and more complex models may be considered. A specific feature of surface pressure data in the UK is that its behaviour is different at high and at low pressures: day-to-day changes are typically larger at low pressure levels than at higher levels. This means that standard assumptions used in fitting ARMA models are not valid, and two ways of overcoming this problem are investigated. Transformation of the data to better satisfy the usual assumptions is considered, as is the use of non-linear, specifically threshold autoregressive (TAR), models.
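
A hedged sketch of the TAR idea mentioned above, in its simplest two-regime form: a separate AR(1) coefficient is fitted by least squares on either side of a known threshold (here 0), so the dynamics can differ at low and high levels; the paper's model selection and seasonal fitting are not reproduced.

```python
# Two-regime threshold AR(1) fit with a known threshold.
import numpy as np

def fit_tar(x, r=0.0):
    """Return (a_low, a_high) for x[t] = a * x[t-1] + e, regime by x[t-1] <= r."""
    prev, cur = x[:-1], x[1:]
    low = prev <= r
    a_low = np.dot(prev[low], cur[low]) / np.dot(prev[low], prev[low])
    a_high = np.dot(prev[~low], cur[~low]) / np.dot(prev[~low], prev[~low])
    return a_low, a_high

# Simulate a TAR series with different persistence below and above 0.
rng = np.random.default_rng(6)
x = np.zeros(5000)
for t in range(1, 5000):
    a = 0.3 if x[t - 1] <= 0 else 0.8
    x[t] = a * x[t - 1] + 0.5 * rng.standard_normal()
a_low, a_high = fit_tar(x)
```

The per-regime estimates recover the generating coefficients (0.3 below the threshold, 0.8 above), mirroring the pressure-level asymmetry the paper motivates.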

  9. Ensemble vs. time averages in financial time series analysis

    NASA Astrophysics Data System (ADS)

    Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2012-12-01

Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble-average approaches will yield new insight into the study of financial markets’ dynamics.
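
A hedged sketch of the comparison described above: an ensemble of "trading days" is simulated with a deterministic intraday diffusion pattern sigma(t). Averaging across days at fixed intraday time recovers sigma(t)^2, while averaging along each day washes the pattern out. This is an illustrative toy model, not the paper's.

```python
# Ensemble average vs. time average on a variable-diffusion intraday model.
import numpy as np

rng = np.random.default_rng(8)
T, days = 390, 2000                       # 390 one-minute increments per day
t = np.arange(T)
sigma = 1.0 + 0.8 * np.sin(2 * np.pi * t / T)        # intraday volatility pattern
increments = sigma * rng.standard_normal((days, T))  # one row per day

ens_var = increments.var(axis=0)          # ensemble: variance at fixed intraday t
day_var = increments.var(axis=1)          # time average: one number per day
peak, trough = np.argmax(sigma), np.argmin(sigma)
```

The ensemble variance tracks sigma(t)^2 point by point, while the per-day time average is nearly the same constant (the mean of sigma(t)^2) for every day, hiding the non-stationarity entirely.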

  10. Singular spectrum analysis for time series with missing data

    USGS Publications Warehouse

    Schoellhamer, D.H.

    2001-01-01

Geophysical time series often contain missing data, which prevents analysis with many signal processing and multivariate tools. A modification of singular spectrum analysis for time series with missing data is developed and successfully tested with synthetic and actual incomplete time series of suspended-sediment concentration from San Francisco Bay. This method can also be used to low-pass filter incomplete time series.
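
A hedged sketch of basic SSA on complete data (trajectory matrix, SVD, and anti-diagonal averaging back to a series); the paper's modification for gappy series is not reproduced. All names are illustrative.

```python
# Basic singular spectrum analysis: rank-k denoising of a time series.
import numpy as np

def ssa_reconstruct(x, window, rank):
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # rank-k approximation
    rec = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                        # anti-diagonal averaging
        rec[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return rec / counts

rng = np.random.default_rng(9)
n = 400
clean = np.sin(2 * np.pi * np.arange(n) / 25.0)
noisy = clean + 0.5 * rng.standard_normal(n)
rec = ssa_reconstruct(noisy, window=50, rank=2)
rmse_rec = np.sqrt(np.mean((rec - clean) ** 2))
rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
```

A sinusoid occupies two singular components, so a rank-2 reconstruction removes most of the noise; using only low-rank components in this way is also how SSA acts as the low-pass filter mentioned in the abstract.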

  11. Alignment of Noisy and Uniformly Scaled Time Series

    NASA Astrophysics Data System (ADS)

    Lipowsky, Constanze; Dranischnikow, Egor; Göttler, Herbert; Gottron, Thomas; Kemeter, Mathias; Schömer, Elmar

    The alignment of noisy and uniformly scaled time series is an important but difficult task. Given two time series, one of which is a uniformly stretched subsequence of the other, we want to determine the stretching factor and the offset of the second time series within the first one. We adapted and enhanced different methods to address this problem: classical FFT-based approaches to determine the offset combined with a naïve search for the stretching factor or its direct computation in the frequency domain, bounded dynamic time warping and a new approach called shotgun analysis, which is inspired by sequencing and reassembling of genomes in bioinformatics. We thoroughly examined the strengths and weaknesses of the different methods on synthetic and real data sets. The FFT-based approaches are very accurate on high quality data, the shotgun approach is especially suitable for data with outliers. Dynamic time warping is a candidate for non-linear stretching or compression. We successfully applied the presented methods to identify steel coils via their thickness profiles.
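
A hedged sketch of the FFT-based offset search described above, for the unstretched case: the lag maximizing the cross-correlation is found in the frequency domain, with zero-padding to avoid circular wrap-around. The stretching-factor search and the shotgun approach are not reproduced; all names are illustrative.

```python
# Offset of a noisy subsequence within a longer series via FFT correlation.
import numpy as np

def find_offset(long_series, short_series):
    """Lag maximizing the linear cross-correlation, computed by FFT."""
    n = len(long_series) + len(short_series)   # pad to avoid circular wrap
    L = np.fft.rfft(long_series, n)
    S = np.fft.rfft(short_series, n)
    xcorr = np.fft.irfft(L * np.conj(S), n)
    return int(np.argmax(xcorr[:len(long_series)]))

rng = np.random.default_rng(10)
long_series = rng.standard_normal(5000)
true_offset = 1234
short_series = (long_series[true_offset:true_offset + 800]
                + 0.1 * rng.standard_normal(800))   # noisy excerpt
offset = find_offset(long_series, short_series)
```

The correlation peak stands far above the noise floor even with the added measurement noise, so the offset is recovered exactly; handling an unknown uniform stretch requires the additional search the abstract describes.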

  12. Nonparametric, nonnegative deconvolution of large time series

    NASA Astrophysics Data System (ADS)

    Cirpka, O. A.

    2006-12-01

    There is a long tradition of characterizing hydrologic systems by linear models, in which the response of the system to a time-varying stimulus is computed by convolution of a system-specific transfer function with the input signal. Despite its limitations, the transfer-function concept has been shown valuable for many situations such as the precipitation/run-off relationships of catchments and solute transport in agricultural soils and aquifers. A practical difficulty lies in the identification of the transfer function. A common approach is to fit a parametric function, enforcing a particular shape of the transfer function, which may be in contradiction to the real behavior (e.g., multimodal transfer functions, long tails, etc.). In our nonparametric deconvolution, the transfer function is assumed an auto-correlated random time function, which is conditioned on the data by a Bayesian approach. Nonnegativity, which is a vital constraint for solute-transport applications, is enforced by the method of Lagrange multipliers. This makes the inverse problem nonlinear. In nonparametric deconvolution, identifying the auto-correlation parameters is crucial. Enforcing too much smoothness prohibits the identification of important features, whereas insufficient smoothing leads to physically meaningless transfer functions, mapping noise components in the two data series onto each other. We identify optimal smoothness parameters by the expectation-maximization method, which requires the repeated generation of many conditional realizations. The overall approach, however, is still significantly faster than Markov-Chain Monte-Carlo methods presented recently. We apply our approach to electric-conductivity time series measured in a river and monitoring wells in the adjacent aquifer. The data cover 1.5 years with a temporal resolution of 1h. The identified transfer functions have lengths of up to 60 days, making up 1440 parameters. We believe that nonparametric deconvolution is an
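
A hedged sketch of the nonnegativity-constrained identification problem described above, solved here by simple projected gradient descent on the least-squares objective (a much simpler stand-in for the paper's Bayesian conditioning and Lagrange-multiplier treatment); all names are illustrative.

```python
# Nonnegative transfer-function identification: min ||conv(x, g) - y||^2, g >= 0.
import numpy as np

def nonneg_deconvolve(x, y, m, iters=2000):
    """Estimate a nonnegative length-m transfer function g with y = x * g."""
    A = np.column_stack([np.concatenate([np.zeros(j), x, np.zeros(m - 1 - j)])
                         for j in range(m)])        # full-convolution matrix
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    g = np.zeros(m)
    for _ in range(iters):
        g -= step * (A.T @ (A @ g - y))             # gradient step
        g = np.maximum(g, 0.0)                      # project onto g >= 0
    return g

rng = np.random.default_rng(11)
x = rng.random(300)                                 # positive input signal
g_true = np.array([0.5, 1.0, 0.7, 0.2, 0.05])       # nonnegative transfer function
y = np.convolve(x, g_true)                          # noise-free output
g_hat = nonneg_deconvolve(x, y, m=5)
err = np.max(np.abs(g_hat - g_true))
```

On noise-free data the iteration recovers the true kernel; with real, noisy series the smoothness prior the paper describes becomes essential to keep the estimate physically meaningful.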

  13. Assessing burn severity using satellite time series

    NASA Astrophysics Data System (ADS)

    Veraverbeke, Sander; Lhermitte, Stefaan; Verstraeten, Willem; Goossens, Rudi

    2010-05-01

    In this study a multi-temporal differenced Normalized Burn Ratio (dNBRMT) is presented to assess burn severity of the 2007 Peloponnese (Greece) wildfires. 8-day composites were created using the daily near infrared (NIR) and mid infrared (MIR) reflectance products of the Moderate Resolution Imaging Spectroradiometer (MODIS). Prior to the calculation of the dNBRMT a pixel-based control plot selection procedure was initiated for each burned pixel based on time series similarity of the pre-fire year 2006 to estimate the spatio-temporal NBR dynamics in the case that no fire event would have occurred. The dNBRMT is defined as the one-year post-fire integrated difference between the NBR values of the control and focal pixels. Results reveal the temporal dependency of the absolute values of bi-temporal dNBR maps as the mean temporal standard deviation of the one-year post-fire bi-temporal dNBR time series equaled 0.14 (standard deviation of 0.04). The dNBRMT's integration of temporal variability into one value potentially enhances the comparability of fires across space and time. In addition, the dNBRMT is robust to random noise thanks to the averaging effect. The dNBRMT, based on coarse resolution imagery with high temporal frequency, has the potential to become either a valuable complement to fine resolution Landsat dNBR mapping or an imperative option for assessing burn severity at a continental to global scale.
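
A minimal sketch of the core index arithmetic used above: NBR from NIR and MIR reflectance, and a bi-temporal dNBR from pre- and post-fire values (the multi-temporal dNBRMT integrates one such difference per 8-day composite over the post-fire year). The reflectance values below are illustrative, not MODIS data.

```python
# Normalized Burn Ratio and bi-temporal dNBR for one pixel.
def nbr(nir, mir):
    """Normalized Burn Ratio from near- and mid-infrared reflectance."""
    return (nir - mir) / (nir + mir)

# Toy reflectances: healthy vegetation pre-fire, the same pixel burned post-fire.
pre_nir, pre_mir = 0.45, 0.15      # high NIR, low MIR: NBR = 0.5
post_nir, post_mir = 0.20, 0.30    # NIR drops, MIR rises: NBR = -0.2

dnbr = nbr(pre_nir, pre_mir) - nbr(post_nir, post_mir)   # large positive = severe
```

Higher dNBR indicates more severe burning; the dNBRMT replaces the single post-fire term with a control-pixel-based time-series integral, which is what removes the temporal dependency noted in the abstract.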

  14. A New SBUV Ozone Profile Time Series

    NASA Technical Reports Server (NTRS)

    McPeters, Richard

    2011-01-01

    Under NASA's MEaSUREs program for creating long-term multi-instrument data sets, our group at Goddard has re-processed ozone profile data from a series of SBUV instruments. We have processed data from the Nimbus 7 SBUV instrument (1979-1990) and data from SBUV/2 instruments on NOAA-9 (1985-1998), NOAA-11 (1989-1995), NOAA-16 (2001-2010), NOAA-17 (2002-2010), and NOAA-18 (2005-2010). This reprocessing uses the version 8 ozone profile algorithm but now uses the Brion, Daumont, and Malicet (BMD) ozone cross sections instead of the Bass and Paur cross sections. The new cross sections have much better resolution, an extended wavelength range, and a more consistent temperature dependence. The re-processing also uses an improved cloud height climatology based on the Raman cloud retrievals of OMI. Finally, the instrument-to-instrument calibration is set using matched scenes so that ozone diurnal variation in the upper stratosphere does not alias into the ozone trends. Where there is no instrument overlap, SAGE and MLS are used to estimate calibration offsets. Preliminary analysis shows a more coherent time series as a function of altitude. The net effect on profile total column ozone is on average an absolute reduction of about one percent. Comparisons with ground-based systems are significantly better at high latitudes.

  15. Periodograms for multiband astronomical time series

    NASA Astrophysics Data System (ADS)

    Ivezic, Z.; VanderPlas, J. T.

    2016-05-01

    We summarize the multiband periodogram, a general extension of the well-known Lomb-Scargle approach for detecting periodic signals in time-domain data, developed by VanderPlas & Ivezic (2015). A Python implementation of this method is available on GitHub. The multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES, and LSST), and can treat non-uniform sampling and heteroscedastic errors. The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization, which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. We use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature, and find that it will be able to efficiently determine the correct period for the majority of LSST's bright RR Lyrae stars with as little as six months of LSST data.
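
    For a single band, the classical Lomb-Scargle periodogram that this method extends can be sketched with `scipy.signal.lombscargle` (synthetic, irregularly sampled data; the multiband version itself lives in the authors' GitHub package and is not reproduced here):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Irregularly sampled sinusoid with period 2.5 (arbitrary time units)
t = np.sort(rng.uniform(0, 50, 300))
period = 2.5
y = np.sin(2 * np.pi * t / period) + 0.1 * rng.standard_normal(t.size)

# Lomb-Scargle power on a grid of trial frequencies (cycles per time unit);
# scipy expects angular frequencies, hence the 2*pi factor
freqs = np.linspace(0.1, 10.0, 5000)
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)

best_period = 1.0 / freqs[np.argmax(power)]   # should land near 2.5
```

The multiband extension replaces the single sinusoid model with per-band truncated Fourier series sharing one period, regularized toward a common base model.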

  16. Volatility modeling of rainfall time series

    NASA Astrophysics Data System (ADS)

    Yusof, Fadhilah; Kane, Ibrahim Lawal

    2013-07-01

    Networks of rain gauges can provide a better insight into the spatial and temporal variability of rainfall, but they tend to be too widely spaced for accurate estimates. One way to estimate the spatial variability of rainfall between gauge points is to interpolate between them. This paper evaluates the spatial autocorrelation of rainfall data at several locations in Peninsular Malaysia using a geostatistical technique. The results give insight into the spatial variability of rainfall in the area; on this basis, two rain gauges were selected for an in-depth study of the temporal dependence of the rainfall data-generating process. It could be shown that rainfall data are affected by nonlinear characteristics of the variance, often referred to as variance clustering or volatility, where large changes tend to follow large changes and small changes tend to follow small changes. The autocorrelation structure of the residuals and the squared residuals derived from autoregressive integrated moving average (ARIMA) models was inspected: the residuals are uncorrelated, but the squared residuals show autocorrelation, and the Ljung-Box test confirmed these results. A test based on the Lagrange multiplier principle was applied to the squared residuals from the ARIMA models. The results of this auxiliary test show clear evidence to reject the null hypothesis of no autoregressive conditional heteroskedasticity (ARCH) effect, indicating that generalized ARCH (GARCH) modeling is necessary. An ARIMA error model is proposed to capture the mean behavior, and a GARCH model to capture the heteroskedasticity (variance behavior) of the residuals from the ARIMA model. The composite ARIMA-GARCH model thus captures the dynamics of daily rainfall in the study area. A seasonal ARIMA model, by contrast, proved suitable for the monthly average rainfall series of the same locations.
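
    The diagnostic chain described above, residuals that look uncorrelated while their squares do not, can be sketched with a hand-rolled Ljung-Box statistic applied to a simulated ARCH(1) series (a toy stand-in for the ARIMA residuals; all parameters are illustrative):

```python
import numpy as np

def ljung_box(x, h):
    """Ljung-Box Q statistic over the first h autocorrelations of x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    denom = np.dot(x, x)
    rho = np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, h + 1)])
    return n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, h + 1)))

rng = np.random.default_rng(2)

# ARCH(1): uncorrelated levels, strongly autocorrelated squared values
n = 2000
e = np.empty(n)
sigma2 = 1.0
for t in range(n):
    e[t] = rng.standard_normal() * np.sqrt(sigma2)
    sigma2 = 0.2 + 0.7 * e[t] ** 2

q_levels = ljung_box(e, 10)        # near its chi-square(10) null distribution
q_squares = ljung_box(e ** 2, 10)  # far in the tail: volatility clustering
```

Comparing both statistics against a chi-square(10) critical value (about 18.3 at the 5% level) reproduces the paper's conclusion that a GARCH-type variance model is needed on top of the ARIMA mean model.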

  17. Removing atmosphere loading effect from GPS time series

    NASA Astrophysics Data System (ADS)

    Tiampo, K. F.; Samadi Alinia, H.; Samsonov, S. V.; Gonzalez, P. J.

    2015-12-01

    The GPS time series of site position are contaminated by various sources of noise; in particular, the ionospheric and tropospheric path delays are significant [Gray et al., 2000; Meyer et al., 2006]. The GPS path delay in the ionosphere depends largely on the wave frequency, whereas the delay in the troposphere depends on the length of the travel path and therefore on site elevation. The various approaches available for compensating the ionospheric path delay cannot be used to remove the tropospheric component. Quantifying the tropospheric delay plays an important role in determining the precision of the vertical GPS component, as tropospheric parameters over a large distance have very little correlation with each other. Several methods have been proposed for eliminating the tropospheric signal from GPS vertical time series. Here we utilize surface temperature fluctuations and seasonal variations in water vapour and air pressure data for various spatial and temporal profiles in order to more accurately remove the atmospheric path delay [Samsonov et al., 2014]. In this paper, we model the atmospheric path delay of vertical position time series by analyzing the signal in the frequency domain and study its dependency on topography in eastern Ontario for the period from January 2008 to December 2012. Characterizing the systematic dependency of the amplitude of the atmospheric path delay on height, and its temporal variations, through a new physics-based model relating tropospheric effects to topography can help in determining the most accurate GPS positions.

  18. Normalizing the causality between time series.

    PubMed

    Liang, X San

    2015-08-01

    Recently, a rigorous yet concise formula was derived to evaluate information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing a Lyapunov exponent-like, one-dimensional phase-space stretching rate and a noise-to-signal ratio from the rate of information flow in the balance of the marginal entropy evolution of the flow recipient. It is verified with autoregressive models and applied to a real financial analysis problem. An unusually strong one-way causality is identified from IBM (International Business Machines Corporation) to GE (General Electric Company) in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a giant for the mainframe computer market. PMID:26382363
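
    As a sketch of the quantity being normalized, the linear maximum-likelihood estimator of the information flow T_{2→1} from Liang's earlier work can be written in a few lines. The formula and the VAR toy system below are our own reconstruction from memory and should be treated as an assumption; the normalization step itself is not reproduced.

```python
import numpy as np

def liang_info_flow(x1, x2, dt=1.0):
    """Sketch of the linear ML estimator of the information flow T_{2->1}
    (after Liang 2014); nonzero values indicate that x2 influences x1."""
    d1 = (x1[1:] - x1[:-1]) / dt            # Euler forward difference of x1
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(np.vstack([x1, x2, d1]))
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d1, c2d1 = C[0, 2], C[1, 2]
    return (c11 * c12 * c2d1 - c12 ** 2 * c1d1) / (c11 ** 2 * c22 - c11 * c12 ** 2)

rng = np.random.default_rng(3)

# VAR(1) toy system in which x2 drives x1 but not vice versa
n = 20000
x1, x2 = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x1[t] = 0.5 * x1[t - 1] + 0.4 * x2[t - 1] + rng.standard_normal()
    x2[t] = 0.6 * x2[t - 1] + rng.standard_normal()

t21 = liang_info_flow(x1, x2)   # substantial: x2 -> x1 coupling exists
t12 = liang_info_flow(x2, x1)   # near zero: no x1 -> x2 coupling
```

The normalization discussed in the abstract then rescales such raw flows so their relative importance can be compared across series.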

  19. Scaling laws from geomagnetic time series

    USGS Publications Warehouse

    Voros, Z.; Kovacs, P.; Juhasz, A.; Kormendi, A.; Green, A.W.

    1998-01-01

    The notion of extended self-similarity (ESS) is applied here to the X-component time series of geomagnetic field fluctuations. Plotting nth-order structure functions against the fourth-order structure function, we show that low-frequency geomagnetic fluctuations up to the order n = 10 follow the same scaling laws as MHD fluctuations in the solar wind; however, for higher frequencies (f > 1/(5 h)) a clear departure from the expected universality is observed for n > 6. ESS does not allow us to make an unambiguous statement about the non-triviality of scaling laws in "geomagnetic" turbulence. However, we suggest using higher-order moments as promising diagnostic tools for mapping the contributions of various remote magnetospheric sources to local observatory data. Copyright 1998 by the American Geophysical Union.
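
    The ESS recipe, regressing log S_n against log S_4, can be sketched on a Brownian surrogate series, for which the relative exponents should approach n/4 (toy data of our own construction, not geomagnetic observations):

```python
import numpy as np

rng = np.random.default_rng(4)

# Surrogate scaling signal: integrated white noise (Brownian motion),
# for which S_n(tau) scales as tau**(n/2)
x = np.cumsum(rng.standard_normal(20000))

def structure_function(x, n, taus):
    """S_n(tau) = <|x(t+tau) - x(t)|**n> for each lag tau."""
    return np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** n) for tau in taus])

taus = np.arange(1, 50)
s4 = structure_function(x, 4, taus)

# ESS: regress log S_n on log S_4 to read off the relative exponents n/4
ess_slopes = {}
for n in (1, 2, 3, 6):
    sn = structure_function(x, n, taus)
    ess_slopes[n] = np.polyfit(np.log(s4), np.log(sn), 1)[0]
```

Departures of the measured slopes from the monofractal line n/4 are exactly the kind of higher-order diagnostic the abstract proposes for separating magnetospheric source contributions.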

  20. Using entropy to cut complex time series

    NASA Astrophysics Data System (ADS)

    Mertens, David; Poncela Casasnovas, Julia; Spring, Bonnie; Amaral, L. A. N.

    2013-03-01

    Using techniques from statistical physics, physicists have modeled and analyzed human phenomena ranging from academic citation rates to disease spreading to vehicular traffic jams. The last decade's explosion of digital information and the growing ubiquity of smartphones have led to a wealth of human self-reported data. This wealth of data comes at a cost, including non-uniform sampling and statistically significant but physically insignificant correlations. In this talk I present our work using entropy to identify stationary sub-sequences of self-reported human weight from a weight-management web site. Our entropic approach, inspired by the infomap network community detection algorithm, is far less biased by rare fluctuations than more traditional time series segmentation techniques. Supported by the Howard Hughes Medical Institute.
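
    A crude version of using entropy to locate a stationarity break can be sketched as follows (a single mean shift and an exhaustive scan; the infomap-inspired method in the talk is more sophisticated, and all values here are hypothetical):

```python
import numpy as np

def shannon_entropy(x, bins=10, value_range=(0.0, 1.0)):
    """Shannon entropy (nats) of the histogram of x."""
    counts, _ = np.histogram(x, bins=bins, range=value_range)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(5)

# Self-reported-weight-like signal with a mean shift halfway through
x = np.clip(np.concatenate([0.30 + 0.02 * rng.standard_normal(500),
                            0.60 + 0.02 * rng.standard_normal(500)]), 0, 1)

def split_gain(x, i):
    """Entropy reduction from cutting the series at index i."""
    n = len(x)
    return shannon_entropy(x) - (i * shannon_entropy(x[:i])
                                 + (n - i) * shannon_entropy(x[i:])) / n

gains = [split_gain(x, i) for i in range(50, 950)]
best_cut = 50 + int(np.argmax(gains))   # lands near the true break at 500
```

The gain is largest where each resulting segment is internally homogeneous, which is why an entropy criterion is robust to single rare fluctuations that would trip threshold-based segmentation.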

  1. Normalizing the causality between time series

    NASA Astrophysics Data System (ADS)

    Liang, X. San

    2015-08-01

    Recently, a rigorous yet concise formula was derived to evaluate information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing a Lyapunov exponent-like, one-dimensional phase-space stretching rate and a noise-to-signal ratio from the rate of information flow in the balance of the marginal entropy evolution of the flow recipient. It is verified with autoregressive models and applied to a real financial analysis problem. An unusually strong one-way causality is identified from IBM (International Business Machines Corporation) to GE (General Electric Company) in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a giant for the mainframe computer market.

  2. Periodograms for Multiband Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    VanderPlas, Jacob T.; Ivezić, Željko

    2015-10-01

    This paper introduces the multiband periodogram, a general extension of the well-known Lomb-Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb-Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES, and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. This decrease in the effective model complexity is the main reason for improved performance. After a pedagogical development of the formalism of least-squares spectral analysis, which motivates the essential features of the multiband model, we use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature and find that this method will be able to efficiently determine the correct period in the majority of LSST’s bright RR Lyrae stars with as little as six months of LSST data, a vast improvement over the years of data reported to be required by previous studies. A Python implementation of this method, along with code to fully reproduce the results reported here, is available on GitHub.

  3. Detecting and characterising ramp events in wind power time series

    NASA Astrophysics Data System (ADS)

    Gallego, Cristóbal; Cuerva, Álvaro; Costa, Alexandre

    2014-12-01

    In order to implement accurate models for wind power ramp forecasting, ramps need to be characterised beforehand. This issue has typically been addressed by performing binary ramp/non-ramp classifications based on ad hoc thresholds. However, recent works question this approach. This paper presents the ramp function, an innovative wavelet-based tool which detects and characterises ramp events in wind power time series. The underlying idea is to assess a continuous index related to the ramp intensity at each time step, obtained by considering large power output gradients evaluated over different time scales (up to typical ramp durations). The ramp function overcomes some of the drawbacks of the aforementioned binary classification and permits forecasters to easily reveal specific features of the ramp behaviour observed at a wind farm. As an example, the daily profiles of ramp-up and ramp-down intensities are obtained for a wind farm located in Spain.
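
    The core idea, a continuous ramp index built from power gradients evaluated over several time scales, can be sketched as a simplified max-over-scales variant (not the authors' wavelet formulation; the signal and scales are illustrative):

```python
import numpy as np

def ramp_function(p, scales):
    """Continuous ramp index: at each time step, the largest mean absolute
    power gradient observed over the given time scales (in samples)."""
    r = np.zeros(p.size)
    for s in scales:
        grad = np.abs(p[s:] - p[:-s]) / s   # mean gradient over scale s
        padded = np.zeros(p.size)
        padded[s:] = grad
        r = np.maximum(r, padded)
    return r

rng = np.random.default_rng(6)

# Flat production with one steep ramp-up event in the middle
p = 0.2 + 0.01 * rng.standard_normal(200)
p[100:110] += np.linspace(0.0, 0.6, 10)
p[110:] += 0.6

r = ramp_function(p, scales=[1, 3, 6, 12])
peak = int(np.argmax(r))        # falls inside the ramp event
```

Because the index is continuous, no ramp/non-ramp threshold has to be fixed in advance; thresholds can instead be chosen per application from the index distribution.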

  4. Multiscale Stochastic Generator of Multivariate Met-Ocean Time Series

    NASA Astrophysics Data System (ADS)

    Guanche, Yanira; Mínguez, Roberto; Méndez, Fernando J.

    2013-04-01

    The design of maritime structures requires information on the sea state conditions that influence their behavior during their life cycle. In recent decades there has been an increasing development of sea databases (buoys, reanalysis, satellite) that allow an accurate description of the marine climate and its interaction with a given structure in terms of functionality and stability. However, these databases span a limited time, and their use entails an associated uncertainty. To overcome this limitation, engineers sample statistically consistent, synthetically generated time series, which allow the simulation of longer time periods. The present work proposes a hybrid methodology to deal with this issue, based on the combination of a clustering algorithm (k-means) and an autoregressive logistic regression model (logit). Since the marine climate is directly related to the atmospheric conditions at a synoptic scale, the proposed methodology takes both systems into account, simultaneously generating time series of circulation patterns (weather types) and the related sea states. The generation of these time series can be summarized in three steps: (1) by applying the k-means clustering technique, the atmospheric conditions are classified into a representative number of synoptic patterns; (2) the autoregressive logistic model is fitted, taking into account the different covariates involved (such as seasonality, interannual variability, trends, or an autoregressive term); (3) once the model is able to simulate weather-type time series, the last step is to generate multivariate hourly met-ocean parameters related to these weather types. This is done by an autoregressive model (ARMA) for each variable, including the cross-correlation between them. To show the goodness of the proposed method, the following data have been used: Sea Level Pressure (SLP) databases from NCEP-NCAR and the Global Ocean Wave (GOW) reanalysis from IH Cantabria. The synthetic met-ocean hourly
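
    Step (1) can be sketched with a small hand-rolled k-means on synthetic sea-level-pressure anomaly fields (farthest-point seeding is our own choice to keep the toy example stable; steps (2) and (3) are only indicated in comments):

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy daily SLP anomaly fields: 600 days, 20 grid points, three underlying
# circulation patterns plus noise (all values hypothetical)
patterns = rng.standard_normal((3, 20))
days = rng.integers(0, 3, size=600)
slp = patterns[days] + 0.3 * rng.standard_normal((600, 20))

# Step (1): k-means classification of days into weather types.
# Farthest-point seeding avoids two seeds landing in one pattern.
k = 3
centroids = [slp[0]]
for _ in range(k - 1):
    d2 = np.min([np.sum((slp - c) ** 2, axis=1) for c in centroids], axis=0)
    centroids.append(slp[np.argmax(d2)])
centroids = np.array(centroids)

# Standard Lloyd iterations: assign days, then recompute type centroids
for _ in range(20):
    dist = np.linalg.norm(slp[:, None, :] - centroids[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    centroids = np.array([slp[labels == j].mean(axis=0) for j in range(k)])

# Step (2) would fit the autoregressive logistic model to the label sequence,
# and step (3) an ARMA simulator of met-ocean variables per weather type.
```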

  5. Timing calibration and spectral cleaning of LOFAR time series data

    NASA Astrophysics Data System (ADS)

    Corstanje, A.; Buitink, S.; Enriquez, J. E.; Falcke, H.; Hörandel, J. R.; Krause, M.; Nelles, A.; Rachen, J. P.; Schellart, P.; Scholten, O.; ter Veen, S.; Thoudam, S.; Trinh, T. N. G.

    2016-05-01

    We describe a method for spectral cleaning and timing calibration of short time series data of the voltage in individual radio interferometer receivers. It makes use of phase differences in fast Fourier transform (FFT) spectra across antenna pairs. For strong, localized terrestrial sources these are stable over time, while being approximately uniform-random for a sum over many sources or for noise. Using only milliseconds-long datasets, the method finds the strongest interfering transmitters, a first-order solution for relative timing calibrations, and faulty data channels. No knowledge of gain response or quiescent noise levels of the receivers is required. With relatively small data volumes, this approach is suitable for use in an online monitoring setup for interferometric arrays. We have applied the method to our cosmic-ray data set, a collection of measurements of short pulses from extensive air showers recorded by the LOFAR radio telescope. Per air shower, we have collected 2 ms of raw time series data for each receiver. The spectral cleaning has a calculated optimal sensitivity corresponding to a power signal-to-noise ratio of 0.08 (or -11 dB) in a spectral window of 25 kHz, for 2 ms of data in 48 antennas. This is sufficient for our application. Timing calibration across individual antenna pairs has been performed at 0.4 ns precision; for calibration of signal clocks across stations of 48 antennas the precision is 0.1 ns. Monitoring differences in timing calibration per antenna pair over the period 2011 to 2015 shows a precision of 0.08 ns, which is useful for monitoring and correcting drifts in signal path synchronizations. A cross-check method for timing calibration is presented, using a pulse transmitter carried by a drone flying over the array. Its timing precision is similar, 0.3 ns, but is limited by transmitter position measurements, while requiring dedicated flights.
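
    The core trick, per-frequency phase differences that are stable across an antenna pair for a terrestrial transmitter but uniform-random for noise, can be sketched for a single pair (toy signals; the sample rate, frequency, and block sizes are our own assumptions, not LOFAR's):

```python
import numpy as np

rng = np.random.default_rng(7)

block, n_blocks = 1024, 100
fs = 200e6                           # assumed sample rate (Hz)
k_rfi = 154                          # transmitter placed exactly on FFT bin 154
f_rfi = k_rfi * fs / block
t = np.arange(block) / fs

# Accumulate unit phasors of the per-bin phase difference between two antennas
acc = np.zeros(block // 2, dtype=complex)
for _ in range(n_blocks):
    phi = rng.uniform(0, 2 * np.pi)  # transmitter phase varies block to block...
    x1 = np.sin(2 * np.pi * f_rfi * t + phi) + rng.standard_normal(block)
    x2 = np.sin(2 * np.pi * f_rfi * t + phi + 0.7) + rng.standard_normal(block)
    s1 = np.fft.rfft(x1)[:block // 2]
    s2 = np.fft.rfft(x2)[:block // 2]
    acc += np.exp(1j * (np.angle(s1) - np.angle(s2)))  # ...but the difference is stable

stability = np.abs(acc) / n_blocks   # ~1 at the RFI bin, ~1/sqrt(n_blocks) elsewhere
found_bin = int(np.argmax(stability))
```

The stable phase offset recovered at the flagged bin (here 0.7 rad) is exactly the kind of quantity that, accumulated over many transmitters, yields the relative timing calibration between receivers.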

  6. `Geologic time series' of earth surface deformation

    NASA Astrophysics Data System (ADS)

    Friedrich, A. M.

    2004-12-01

    The debate over whether the earth has evolved gradually or by catastrophic change has dominated the geological sciences for many centuries. On a human timescale, the earth appears to be changing slowly except for a few sudden events (singularities) such as earthquakes, floods, or landslides. While these singularities dramatically affect lives and habitat locally, they have little effect on the global population growth rate or the evolution of the earth's surface. It is also unclear to what degree such events leave their traces in the geologic record. Yet the earth's surface is changing! For example, rocks that equilibrated at depths of > 30 km below the surface are exposed at high elevations in mountain belts, indicating vertical motion (uplift) of tens of kilometers; and rocks that acquired a signature of the earth's magnetic field are found up to hundreds of kilometers from their origin, indicating significant horizontal transport along great faults. Whether such long-term motion occurs at the rate indicated by the recurrence interval of singular events, and whether singularities also operate at a higher-order scale ("mega-singularities"), are open questions. Attempts to address these questions require time series significantly longer than several recurrence intervals of singularities. For example, for surface-rupturing earthquakes (magnitude > 7) with recurrence intervals ranging from tens to tens of thousands of years, observation periods on the order of thousands of years to a million years would be needed. However, few if any of the presently available measurement methods provide both the necessary resolution and "recording duration." While paleoseismic methods have the appropriate spatial and temporal resolution, data collection along most faults has been limited to the last one or two earthquakes. Geologic and geomorphic measurements may record long-term changes in fault slip, but only provide rates averaged over many recurrence

  7. Generation of accurate integral surfaces in time-dependent vector fields.

    PubMed

    Garth, Christoph; Krishnan, Han; Tricoche, Xavier; Bobach, Tom; Joy, Kenneth I

    2008-01-01

    We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces. PMID:18988990

  8. Peat conditions mapping using MODIS time series

    NASA Astrophysics Data System (ADS)

    Poggio, Laura; Gimona, Alessandro; Bruneau, Patricia; Johnson, Sally; McBride, Andrew; Artz, Rebekka

    2016-04-01

    Large areas of Scotland are covered by peatlands, which provide an important carbon sink in their near-natural state but act as a potential source of gaseous and dissolved carbon emissions if they are not in good condition. Data on the condition of most peatlands in Scotland are, however, scarce and largely confined to sites under nature protection designations, often biased towards sites in better condition. The best information available at present is derived from labour-intensive field-based monitoring of relatively few designated sites (the Common Standard Monitoring dataset). In order to provide a national dataset of peat condition, the available point information from the CSM data was modelled with morphological features and information derived from the MODIS sensor. In particular, we used time series of indices describing vegetation greenness (Enhanced Vegetation Index), water availability (Normalised Difference Water Index), Land Surface Temperature, and vegetation productivity (Gross Primary Productivity). A scorpan-kriging approach was used, with Generalised Additive Models describing the trend. The model provided the probability of a site being in favourable condition, and the uncertainty of the predictions was taken into account. Internal validation (leave-one-out) gave a misclassification error of around 0.25. The derived dataset was then used, among others, in the decision-making process for selecting sites for restoration.

  9. A Hybrid Algorithm for Clustering of Time Series Data Based on Affinity Search Technique

    PubMed Central

    Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A.; Shaygan, Mohammad Amin; Jalali, Alireza

    2014-01-01

    Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped into subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using synthetic and real-world time series datasets. PMID:24982966

  10. Transmission of linear regression patterns between time series: From relationship in time series to complex networks

    NASA Astrophysics Data System (ADS)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can differ under different lengths of the observation period. If we study the whole period through a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression pattern transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results for weighted out-degree and betweenness centrality are mapped onto timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
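
    A minimal sketch of the scheme, discretizing sliding-window regression slopes into pattern nodes and counting transitions between adjacent windows as weighted edges (one window size and a crude slope binning; the paper also combines parameter intervals with significance tests):

```python
import numpy as np

rng = np.random.default_rng(8)

# Two related series: y tracks x with a slope that changes regime halfway
n = 1000
x = np.cumsum(rng.standard_normal(n))
slope = np.where(np.arange(n) < n // 2, 0.5, 2.0)
y = slope * x + 0.5 * rng.standard_normal(n)

def pattern(xw, yw):
    """Discretize the window's regression slope into a pattern label."""
    b = np.polyfit(xw, yw, 1)[0]
    return 0 if b < 0 else (1 if b < 1 else 2)

# Directed, weighted network: transitions between patterns of adjacent windows
window = 50
edges = {}
prev = None
for start in range(n - window):
    node = pattern(x[start:start + window], y[start:start + window])
    if prev is not None:
        edges[(prev, node)] = edges.get((prev, node), 0) + 1
    prev = node
```

Self-loop weights dominate within each regime, and the off-diagonal edges trace the regime change, which is the kind of structure the out-degree and betweenness statistics then summarize.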

  11. A New Method for Accurate Treatment of Flow Equations in Cylindrical Coordinates Using Series Expansions

    NASA Technical Reports Server (NTRS)

    Constantinescu, G.S.; Lele, S. K.

    2000-01-01

    using these schemes is especially sensitive to the type of equation treatment at the singularity axis. The objective of this work is to develop a generally applicable numerical method for treating the singularities present at the polar axis, which is particularly suitable for highly accurate finite-difference schemes (e.g., Pade schemes) on non-staggered grids. The main idea is to reinterpret the regularity conditions developed in the context of pseudo-spectral methods. A set of exact equations at the singularity axis is derived using the appropriate series expansions for the variables in the original set of equations. The present treatment of the equations preserves the same level of accuracy as the interior scheme. We also want to point out the wider utility of the method, proposed here in the context of the compressible flow equations: its extension to incompressible flows, or to any other set of equations solved on a non-staggered mesh in cylindrical coordinates with finite-difference schemes of various levels of accuracy, is straightforward. The robustness and accuracy of the proposed technique are assessed by comparing results from simulations of laminar forced jets and turbulent compressible jets using LES with similar calculations in which the equations are solved in Cartesian coordinates at the polar axis, or in which the singularity is removed by employing a staggered mesh in the radial direction without a mesh point at r = 0.

  12. Combination of TWSTFT and GNSS for accurate UTC time transfer

    NASA Astrophysics Data System (ADS)

    Jiang, Z.; Petit, G.

    2009-06-01

    The international UTC/TAI time and frequency transfer network is based on two independent space techniques: Two-Way Satellite Time and Frequency Transfer (TWSTFT) and Global Navigation Satellite System (GNSS). The network is highly redundant. In fact, 28% of the national time laboratories, which contribute 88% of the total atomic clock weight and all the primary frequency standards to UTC/TAI, operate both techniques. This redundancy is not fully used in UTC/TAI generation. We propose a combination that keeps the advantages of TWSTFT and GNSS and offers a new and effective strategy to improve UTC/TAI in terms of accuracy, stability and robustness. We focus on the combination of two BIPM routine products, TWSTFT and GPS PPP (time transfer using the precise point positioning technique), but the proposed method can be used for any carrier phase-based GNSS product.

  13. Time-accurate Navier-Stokes calculations with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Melson, N. Duane; Atkins, Harold L.; Sanetrik, Mark D.

    1993-01-01

    A numerical scheme to solve the unsteady Navier-Stokes equations is described. The scheme is implemented by modifying the multigrid-multiblock version of the steady Navier-Stokes equations solver, TLNS3D. The scheme is fully implicit in time and uses TLNS3D to iteratively invert the equations at each physical time step. The design objective of the scheme is unconditional stability (at least for first- and second-order discretizations of the physical time derivatives). With unconditional stability, the choice of the time step is based on the physical phenomena to be resolved rather than limited by numerical stability, which is especially important for high-Reynolds-number viscous flows, where the spatial variation of grid cell size can be as much as six orders of magnitude. An analysis of the iterative procedure and its implementation in TLNS3D are discussed. Numerical results are presented to show both the capabilities of the scheme and its speedup relative to the use of global minimum time stepping. Reductions in computational time of an order of magnitude are demonstrated.

  14. Intercomparison of six Mediterranean zooplankton time series

    NASA Astrophysics Data System (ADS)

    Berline, Léo; Siokou-Frangou, Ioanna; Marasović, Ivona; Vidjak, Olja; Fernández de Puelles, M.a. Luz; Mazzocchi, Maria Grazia; Assimakopoulou, Georgia; Zervoudaki, Soultana; Fonda-Umani, Serena; Conversi, Alessandra; Garcia-Comas, Carmen; Ibanez, Frédéric; Gasparini, Stéphane; Stemmann, Lars; Gorsky, Gabriel

    2012-05-01

    We analyzed and compared Mediterranean mesozooplankton time series spanning 1957-2006 from six coastal stations in the Balearic, Ligurian, Tyrrhenian, North and Middle Adriatic and Aegean Seas. Our analysis focused on fluctuations of major zooplankton taxonomic groups and their relation with environmental and climatic variability. Average seasonal cycles and interannual trends were derived. Stations spanned a large range of trophic status, from oligotrophic to moderately eutrophic. Intra-station analyses showed (1) coherent multi-taxa trends off Villefranche-sur-Mer that diverge from previous results found at the species level, (2) in the Baleares, covariation of zooplankton and water masses as a consequence of the boundary hydrographic regime in the middle Western Mediterranean, (3) a decrease in trophic status and in the abundance of some taxonomic groups off Naples, and (4) off Athens, an increase of zooplankton abundance and a decrease in chlorophyll, possibly caused by reduction of anthropogenic nutrient input, increase of microbial components, and more efficient grazing control on phytoplankton. (5) At the basin scale, the analysis of temperature revealed significant positive correlations between Villefranche, Trieste and Naples for annual and/or winter averages, and synchronous abrupt cooling and warming events centered in 1987 at the same three sites. After correction for multiple comparisons, we found no significant correlations between climate indices and local temperature or zooplankton abundance, nor between stations for zooplankton abundance; we therefore suggest that for these coastal stations local drivers (climatic, anthropogenic) are dominant, and that the link between local and larger scales of climate should be investigated further if we are to understand zooplankton fluctuations.

  15. Heart rate variability helps tracking time more accurately.

    PubMed

    Cellini, Nicola; Mioni, Giovanna; Levorato, Ilenia; Grondin, Simon; Stablum, Franca; Sarlo, Michela

    2015-12-01

Adequate temporal abilities are crucial for adaptive behavior. In time processing, variations in the rate at which the pacemaker emits pulses are often reported to be an important cause of temporal errors. These variations are often associated with physiological changes, and it has recently been proposed that physiological changes may not merely modulate the emission of pulses but may themselves act as a timekeeper. In the present study we further explore the relationship of temporal abilities with autonomic activity and interoceptive awareness in a group of thirty healthy young adults (mean age 24.18 years; SD=2.1). Using electrocardiogram, impedance cardiography and skin conductance measures, we assessed the relationship between the autonomic profile at rest and temporal abilities in two temporal tasks (time bisection and finger tapping). Results showed that heart rate variability affects time perception: increased heart rate variability (HRV) was associated with higher temporal accuracy. More specifically, we found that higher vagal control was associated with lower error in producing a 1-s tempo, whereas higher overall HRV was related to lower error (measured by the constant error) in the time bisection task. Our results support the idea that bodily signals may shape our perception of time. PMID:26507899
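
    The two HRV quantities discussed above can be computed directly from a recorded RR-interval series. The sketch below is a generic textbook computation in plain Python, not the authors' pipeline; the function name and the RR values are invented. It computes SDNN as a proxy for overall HRV and RMSSD as a time-domain proxy for vagal control:

```python
import math

def hrv_indices(rr_ms):
    """Compute two standard time-domain HRV indices from RR intervals (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: standard deviation of all RR intervals (overall HRV)
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive differences (vagally mediated HRV)
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

# Example: a short synthetic RR series around 800 ms
sdnn, rmssd = hrv_indices([800, 810, 790, 805, 795, 820, 785])
```

    In a study like this one, such indices computed at rest would then be correlated with the per-subject timing errors from the bisection and tapping tasks.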

  16. Time-accurate Navier-Stokes calculations with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Sanetrik, Mark D.; Atkins, Harold L.

    1993-01-01

An efficient method for calculating unsteady flows is presented, with emphasis on a modified version of the thin-layer Navier-Stokes equations. Fourier stability analysis is used to illustrate the effect of treating the source term implicitly instead of explicitly, as well as to illustrate other algorithmic choices. Unsteady flow past a 2D circular cylinder (Reynolds number 1200, Mach number 0.3) is calculated. The present scheme requires only about 10 percent of the computer time required by global minimum time stepping.

  17. A fast, time-accurate unsteady full potential scheme

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Ide, H.; Gorski, J.; Osher, S.

    1985-01-01

The unsteady form of the full potential equation is solved in conservation form by an implicit method based on approximate factorization. At each time level, internal Newton iterations are performed to achieve time accuracy and computational efficiency. A local time linearization procedure is introduced to provide a good initial guess for the Newton iteration. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi, obtained by imposing the density to be continuous across the wake. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. The resulting unsteady method performs well, requiring fewer than 100 time steps per cycle at transonic Mach numbers even at low reduced frequencies of 0.1 or less. The code is fully vectorized for the CRAY-XMP and the VPS-32 computers.

  18. Quantifying evolutionary dynamics from variant-frequency time series.

    PubMed

    Khatri, Bhavin S

    2016-01-01

From Kimura's neutral theory of protein evolution to Hubbell's neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the 2-variant case, we exploit Fisher's angular transformation, which, despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, combined with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time series. PMID:27616332
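
    The tractability of Fisher's angular transformation comes from a standard variance-stabilizing property, sketched here in generic notation (this is the textbook result, not the paper's own derivation):

```latex
% Wright--Fisher diffusion for a variant frequency x_t (generic notation):
\[
  dx_t = \mu(x_t)\,dt + \sqrt{\frac{x_t(1-x_t)}{2N}}\,dW_t .
\]
% Substituting Fisher's angular variable \theta = \arcsin\sqrt{x}, with
\[
  \frac{d\theta}{dx} = \frac{1}{2\sqrt{x(1-x)}},
\]
% Ito's lemma rescales the diffusion coefficient to
\[
  \left(\frac{d\theta}{dx}\right)^{2}\frac{x(1-x)}{2N} = \frac{1}{8N},
\]
% so in the transformed coordinate the noise is additive and independent of
% frequency, which is what makes short-time Gaussian approximations of the
% transition probability density tractable.
```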

  19. Accurate Monotonicity - Preserving Schemes With Runge-Kutta Time Stepping

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Huynh, H. T.

    1997-01-01

A new class of high-order monotonicity-preserving schemes for the numerical solution of conservation laws is presented. The interface value in these schemes is obtained by limiting a higher-order polynomial reconstruction. The limiting is designed to preserve accuracy near extrema and to work well with Runge-Kutta time stepping. Computational efficiency is enhanced by a simple test that determines whether the limiting procedure is needed. Numerical experiments for linear advection in one dimension as well as for the Euler equations confirm the schemes' high accuracy, good shock resolution, and computational efficiency.
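
    The combination of limited reconstruction with Runge-Kutta time stepping can be illustrated with a much simpler relative of these schemes: a minmod-limited MUSCL discretization of 1D linear advection advanced with SSP-RK2. This is a generic sketch, not the authors' monotonicity-preserving limiter; the CFL number 0.3 is an illustrative safe choice. It advects a square pulse and checks that the total variation does not grow:

```python
def minmod(a, b):
    # Minmod limiter: zero slope at extrema, smallest-magnitude slope otherwise
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect_rhs(u, c):
    """Scaled update for u_t + a u_x = 0 (a > 0, periodic boundaries):
    MUSCL reconstruction with minmod slopes and upwind interface fluxes.
    c = a*dt/dx is folded in, so this returns dt * du/dt."""
    n = len(u)
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    flux = [u[i] + 0.5 * slope[i] for i in range(n)]   # value at face i+1/2
    return [-c * (flux[i] - flux[i - 1]) for i in range(n)]

def ssp_rk2_step(u, c):
    """One strong-stability-preserving RK2 (Heun) step: a convex combination
    of forward-Euler steps, so TVD per stage implies TVD overall."""
    u1 = [a + b for a, b in zip(u, advect_rhs(u, c))]
    u2 = [a + b for a, b in zip(u1, advect_rhs(u1, c))]
    return [0.5 * a + 0.5 * b for a, b in zip(u, u2)]

# Advect a square pulse; total variation should not increase (TVD property)
u = [1.0 if 4 <= i < 8 else 0.0 for i in range(16)]
tv0 = sum(abs(u[i] - u[i - 1]) for i in range(16))
for _ in range(20):
    u = ssp_rk2_step(u, 0.3)
tv = sum(abs(u[i] - u[i - 1]) for i in range(16))
```

    The schemes in the abstract replace the minmod slope with a higher-order reconstruction whose limiter is carefully built not to clip smooth extrema.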

  20. Stochastic PArallel Rarefied-gas Time-accurate Analyzer

    SciTech Connect

    Michael Gallis, Steve Plimpton

    2014-01-24

    The SPARTA package is software for simulating low-density fluids via the Direct Simulation Monte Carlo (DSMC) method, which is a particle-based method for tracking particle trajectories and collisions as a model of a multi-species gas. The main component of SPARTA is a simulation code which allows the user to specify a simulation domain, populate it with particles, embed triangulated surfaces as boundary conditions for the flow, overlay a grid for finding pairs of collision partners, and evolve the system in time via explicit timestepping. The package also includes various pre- and post-processing tools, useful for setting up simulations and analyzing the results. The simulation code runs either in serial on a single processor or desktop machine, or can be run in parallel using the MPI message-passing library, to enable faster performance on large problems.

  2. Time series modelling and forecasting of emergency department overcrowding.

    PubMed

    Kadri, Farid; Harrou, Fouzi; Chaabane, Sondès; Tahon, Christian

    2014-09-01

    Efficient management of patient flow (demand) in emergency departments (EDs) has become an urgent issue for many hospital administrations. Today, more and more attention is being paid to hospital management systems to optimally manage patient flow and to improve management strategies, efficiency and safety in such establishments. To this end, EDs require significant human and material resources, but unfortunately these are limited. Within such a framework, the ability to accurately forecast demand in emergency departments has considerable implications for hospitals to improve resource allocation and strategic planning. The aim of this study was to develop models for forecasting daily attendances at the hospital emergency department in Lille, France. The study demonstrates how time-series analysis can be used to forecast, at least in the short term, demand for emergency services in a hospital emergency department. The forecasts were based on daily patient attendances at the paediatric emergency department in Lille regional hospital centre, France, from January 2012 to December 2012. An autoregressive integrated moving average (ARIMA) method was applied separately to each of the two GEMSA categories and total patient attendances. Time-series analysis was shown to provide a useful, readily available tool for forecasting emergency department demand. PMID:25053208
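
    As a minimal sketch of the forecasting idea, the snippet below fits a plain AR(1) model by least squares and iterates it forward, standing in for the study's full ARIMA treatment of the GEMSA categories. The attendance numbers are synthetic, not the Lille data:

```python
def fit_ar1(series):
    """Least-squares AR(1) fit on a mean-centered series:
    x_t - m = phi * (x_{t-1} - m) + noise."""
    m = sum(series) / len(series)
    x = [v - m for v in series]
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return m, num / den

def forecast(series, steps):
    """Iterate the fitted AR(1) recursion forward from the last observation;
    forecasts decay geometrically toward the sample mean when |phi| < 1."""
    m, phi = fit_ar1(series)
    out, last = [], series[-1] - m
    for _ in range(steps):
        last = phi * last
        out.append(m + last)
    return out

# Daily attendance counts (synthetic illustration only)
daily = [120, 132, 118, 141, 125, 137, 122, 139, 128, 135]
next3 = forecast(daily, 3)
```

    A full ARIMA(p,d,q) fit adds differencing and moving-average terms and is usually selected by information criteria, but the recursion above is the autoregressive core.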

  3. A Fully Implicit Time Accurate Method for Hypersonic Combustion: Application to Shock-induced Combustion Instability

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Radhakrishnan, Krishnan

    1994-01-01

A new fully implicit, time accurate algorithm suitable for chemically reacting, viscous flows in the transonic-to-hypersonic regime is described. The method is based on a class of Total Variation Diminishing (TVD) schemes and uses successive Gauss-Seidel relaxation sweeps. The inversion of large matrices is avoided by partitioning the system into reacting and nonreacting parts, while still maintaining a fully coupled interaction. As a result, the matrices that have to be inverted are of the same size as those obtained with the commonly used point implicit methods. In this paper we illustrate the applicability of the new algorithm to hypervelocity unsteady combustion applications. We present a series of numerical simulations of the periodic combustion instabilities observed in ballistic-range experiments of blunt projectiles flying at subdetonative speeds through hydrogen-air mixtures. The computed frequencies of oscillation are in excellent agreement with experimental data.

  4. It's About Time: How Accurate Can Geochronology Become?

    NASA Astrophysics Data System (ADS)

    Harrison, M.; Baldwin, S.; Caffee, M. W.; Gehrels, G. E.; Schoene, B.; Shuster, D. L.; Singer, B. S.

    2015-12-01

As isotope ratio precisions have improved to as low as ±1 ppm, geochronologic precision has remained essentially unchanged. This largely reflects the nature of radioactivity, whereby the parent decays into a different chemical species, thus putting as much emphasis on determining inter-element ratios as on isotopic ones. Even the best current accuracy grows into errors of >0.6 m.y. during the Paleozoic - a span of time equal to ¼ of the Pleistocene. If we are to understand the nature of Paleozoic species variation and climate change at anything like the resolution achieved for the Cenozoic, we need a 10x improvement in accuracy. The good news is that there is no physical impediment to realizing this. There are enough Pb* atoms in the outer few μm of a Paleozoic zircon grown moments before eruption to permit ±0.01% accuracy in the U-Pb system. What we need are the resources to synthesize the spikes, enhance ionization yields, exploit microscale sampling, and improve knowledge of λ correspondingly. Despite advances in geochronology over the past 40 years (multicollection, multi-isotope spikes, in situ dating), our ability to translate a daughter atom into a detected ion has remained at the level of 1% or so. This means that a ~10^2 increase in signal can be achieved before we approach a physical limit. Perhaps the most promising approach is the use of broad-spectrum lasers that can ionize all neutrals. Radical new approaches to providing mass separation of such signals are emerging, including trapped ion cyclotron resonance and multi-turn, sputtered-neutral TOF spectrometers capable of mass resolutions in excess of 10^5. These innovations hold great promise in geochronology but are largely being developed for cosmochemistry. This may make sense at first glance, as cosmochemists are classically atom-limited (IDPs, stardust), but can be a misperception, as the outer few μm of a zircon may represent no more mass than a stardust mote. To reach the fundamental limits of geochronologic signals we need to

  5. Approximate Entropies for Stochastic Time Series and EKG Time Series of Patients with Epilepsy and Pseudoseizures

    NASA Astrophysics Data System (ADS)

    Vyhnalek, Brian; Zurcher, Ulrich; O'Dwyer, Rebecca; Kaufman, Miron

    2009-10-01

A wide range of heart rate irregularities have been reported in small studies of patients with temporal lobe epilepsy [TLE]. We hypothesize that patients with TLE display cardiac dysautonomia in either a subclinical or clinical manner. In a small study, we have retrospectively identified (2003-8) two groups of patients from the epilepsy monitoring unit [EMU] at the Cleveland Clinic. No patients were diagnosed with cardiovascular morbidities. The control group consisted of patients with confirmed pseudoseizures and the experimental group had confirmed right temporal lobe epilepsy through a seizure-free outcome after temporal lobectomy. We quantified the heart rate variability using the approximate entropy [ApEn]. We found similar values of the ApEn in all three states of consciousness (awake, sleep, and preceding seizure onset). In the TLE group, there is some evidence for greater variability in the awake state than in either sleep or the state preceding seizure onset. Here we present results for mathematically generated time series: the heart rate fluctuations ξ follow gamma statistics, i.e., p(ξ) = Γ(k)^(-1) ξ^(k-1) exp(-ξ). This probability function has well-known properties and its Shannon entropy can be expressed in terms of the Γ-function. The parameter k allows us to generate a family of heart rate time series with different statistics. The ApEn calculated for the generated time series for different values of k mimics the properties found for the TLE and pseudoseizure groups. Our results suggest that the ApEn is an effective tool to probe differences in statistics of heart rate fluctuations.
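
    The approximate entropy statistic used above can be sketched in a few lines. This is the standard Pincus formulation; the defaults m = 2 and r = 0.2 are conventional choices, not parameters taken from this study:

```python
import math

def apen(series, m=2, r=0.2):
    """Approximate entropy (Pincus): regularity of a time series.
    Lower values indicate a more regular, more predictable signal."""
    def phi(mm):
        n = len(series) - mm + 1
        templates = [series[i:i + mm] for i in range(n)]
        total = 0.0
        for ti in templates:
            # count templates within Chebyshev distance r (self-match included)
            c = sum(1 for tj in templates
                    if max(abs(a - b) for a, b in zip(ti, tj)) <= r)
            total += math.log(c / n)
        return total / n
    return phi(m) - phi(m + 1)

# A strictly periodic series is highly regular, so its ApEn is near zero
periodic = [0.0, 1.0] * 30
```

    In practice r is usually set relative to the standard deviation of the series (e.g., 0.2 times SD), so that the tolerance adapts to the signal's amplitude.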

  6. Multiscale entropy to distinguish physiologic and synthetic RR time series.

    PubMed

    Costa, M; Goldberger, A L; Peng, C-K

    2002-01-01

    We address the challenge of distinguishing physiologic interbeat interval time series from those generated by synthetic algorithms via a newly developed multiscale entropy method. Traditional measures of time series complexity only quantify the degree of regularity on a single time scale. However, many physiologic variables, such as heart rate, fluctuate in a very complex manner and present correlations over multiple time scales. We have proposed a new method to calculate multiscale entropy from complex signals. In order to distinguish between physiologic and synthetic time series, we first applied the method to a learning set of RR time series derived from healthy subjects. We empirically established selected criteria characterizing the entropy dependence on scale factor for these datasets. We then applied this algorithm to the CinC 2002 test datasets. Using only the multiscale entropy method, we correctly classified 48 of 50 (96%) time series. In combination with Fourier spectral analysis, we correctly classified all time series. PMID:14686448
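
    The multiscale entropy procedure has two steps: coarse-grain the series by non-overlapping window averages at each scale, then compute sample entropy of each coarse-grained series. The sketch below is a generic minimal formulation (the m and r defaults are conventional, not the paper's exact settings, and no correction is made for the short-series case where no length-(m+1) matches exist):

```python
import math

def sampen(series, m=2, r=0.2):
    """Sample entropy: like ApEn but excluding self-matches.
    Undefined (log of zero) if no template pairs match at length m+1."""
    def count(mm):
        t = [series[i:i + mm] for i in range(len(series) - mm + 1)]
        # count template pairs i < j within Chebyshev distance r
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r)
    b, a = count(m), count(m + 1)
    return -math.log(a / b)

def coarse_grain(series, scale):
    """Step 1 of multiscale entropy: non-overlapping window averages."""
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def mse(series, scales=(1, 2, 3), m=2, r=0.2):
    """Sample entropy of the coarse-grained series at each scale."""
    return [sampen(coarse_grain(series, s), m, r) for s in scales]

curve = mse([0.0, 1.0] * 60)
```

    The classification criterion in the paper rests on how this entropy-versus-scale curve behaves: physiologic RR series tend to keep high entropy across scales, while uncorrelated synthetic noise loses entropy as the scale grows.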

  7. Efficient Algorithms for Segmentation of Item-Set Time Series

    NASA Astrophysics Data System (ADS)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
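
    The segmentation machinery described above can be sketched as a dynamic program. The measure function chosen here (the segment's item set is the union of its time points' sets, and the segment difference is the summed symmetric differences against that union) is one illustrative choice among the measure functions the paper formalizes, and the cubic-time DP is a plain version of the scheme, without the paper's efficient segment-difference computations:

```python
def segment_cost(sets, i, j):
    """Segment difference for one segment covering time points i..j (inclusive),
    under the union measure function."""
    seg = set().union(*sets[i:j + 1])
    return sum(len(seg ^ s) for s in sets[i:j + 1])

def optimal_segmentation(sets, k):
    """best[j][p] = minimum cost of splitting sets[0..j] into p segments."""
    n = len(sets)
    INF = float("inf")
    best = [[INF] * (k + 1) for _ in range(n)]
    back = [[0] * (k + 1) for _ in range(n)]
    for j in range(n):
        best[j][1] = segment_cost(sets, 0, j)
    for p in range(2, k + 1):
        for j in range(p - 1, n):
            for i in range(p - 1, j + 1):      # last segment is i..j
                c = best[i - 1][p - 1] + segment_cost(sets, i, j)
                if c < best[j][p]:
                    best[j][p], back[j][p] = c, i
    # recover the start indices of segments 2..k
    bounds, j = [], n - 1
    for p in range(k, 1, -1):
        i = back[j][p]
        bounds.append(i)
        j = i - 1
    return best[n - 1][k], sorted(bounds)

# Two clearly different regimes should split exactly at the regime change
data = [{"a"}, {"a"}, {"a"}, {"b", "c"}, {"b", "c"}]
cost, bounds = optimal_segmentation(data, 2)
```

    On this toy input the optimal 2-segmentation has zero total difference, splitting at the point where the item sets change.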

  8. A Time Series Approach for Soil Moisture Estimation

    NASA Technical Reports Server (NTRS)

    Kim, Yunjin; vanZyl, Jakob

    2006-01-01

Soil moisture is a key parameter in understanding the global water cycle and in predicting natural hazards. Polarimetric radar measurements have been used for estimating soil moisture of bare surfaces. In order to estimate soil moisture accurately, the surface roughness effect must be compensated properly. In addition, these algorithms will not produce accurate results for vegetated surfaces. It is difficult to retrieve soil moisture of a vegetated surface since the radar backscattering cross section is sensitive to the vegetation structure and environmental conditions such as the ground slope. Therefore, it is necessary to develop a method to estimate the effect of the surface roughness and vegetation reliably. One way to remove the roughness effect and the vegetation contamination is to take advantage of the temporal variation of soil moisture. In order to understand the global hydrologic cycle, it is desirable to measure soil moisture with a one- to two-day revisit. Using these frequent measurements, a time series approach can be implemented to improve the soil moisture retrieval accuracy.

  9. Time series power flow analysis for distribution connected PV generation.

    SciTech Connect

    Broderick, Robert Joseph; Quiroz, Jimmy Edward; Ellis, Abraham; Reno, Matthew J.; Smith, Jeff; Dugan, Roger

    2013-01-01

    Distributed photovoltaic (PV) projects must go through an interconnection study process before connecting to the distribution grid. These studies are intended to identify the likely impacts and mitigation alternatives. In the majority of the cases, system impacts can be ruled out or mitigation can be identified without an involved study, through a screening process or a simple supplemental review study. For some proposed projects, expensive and time-consuming interconnection studies are required. The challenges to performing the studies are twofold. First, every study scenario is potentially unique, as the studies are often highly specific to the amount of PV generation capacity that varies greatly from feeder to feeder and is often unevenly distributed along the same feeder. This can cause location-specific impacts and mitigations. The second challenge is the inherent variability in PV power output which can interact with feeder operation in complex ways, by affecting the operation of voltage regulation and protection devices. The typical simulation tools and methods in use today for distribution system planning are often not adequate to accurately assess these potential impacts. This report demonstrates how quasi-static time series (QSTS) simulation and high time-resolution data can be used to assess the potential impacts in a more comprehensive manner. The QSTS simulations are applied to a set of sample feeders with high PV deployment to illustrate the usefulness of the approach. The report describes methods that can help determine how PV affects distribution system operations. The simulation results are focused on enhancing the understanding of the underlying technical issues. The examples also highlight the steps needed to perform QSTS simulation and describe the data needed to drive the simulations. The goal of this report is to make the methodology of time series power flow analysis readily accessible to utilities and others responsible for evaluating

  10. Visibility graph network analysis of gold price time series

    NASA Astrophysics Data System (ADS)

    Long, Yu

    2013-08-01

Mapping time series into a visibility graph network, the characteristics of the gold price time series and return temporal series, and the mechanism underlying the gold price fluctuation have been explored from the perspective of complex network theory. The network degree distribution characters, which change from power law to exponential law when the series was shuffled from the original sequence, and the average path length characters, which change from L∼lnN into lnL∼lnN as the sequence was shuffled, demonstrate that the price series and return series are both long-range dependent fractal series. The relations of the Hurst exponent to the power-law exponent of the degree distribution demonstrate that the logarithmic price series is a fractal Brownian series and the logarithmic return series is a fractal Gaussian series. Power-law exponents of the degree distribution in a time window changing as the window moves demonstrate that the logarithmic gold price series is a multifractal series. The power-law average clustering coefficient demonstrates that the gold price visibility graph is a hierarchy network. The hierarchy character, in light of the correspondence of the graph to price fluctuation, means that gold price fluctuation has a hierarchy structure, which appears to be in agreement with Elliott's empirical Wave Theory on stock price fluctuation, and the local-rule growth theory of a hierarchy network means that the hierarchy structure of gold price fluctuation originates from persistent, short-term factors, such as short-term speculation.
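
    The mapping underlying this analysis is the natural visibility criterion: two time points are linked when the straight line between them passes above every intermediate point. A brute-force O(n^2) sketch with invented data (not the gold price series):

```python
def visibility_edges(series):
    """Natural visibility graph: nodes are time points i; i and j (i < j)
    are linked if every intermediate point (k, y_k) lies strictly below the
    straight line joining (i, y_i) and (j, y_j)."""
    n, edges = len(series), set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                for k in range(i + 1, j))
            if visible:
                edges.add((i, j))
    return edges

# Small example: consecutive points are always mutually visible,
# and the two tallest points see each other over the dips between them
edges = visibility_edges([3.0, 1.0, 2.0, 0.5, 4.0])
```

    Properties of the resulting graph (degree distribution, path lengths, clustering) then stand in for properties of the original series, which is how the fractal and hierarchy claims above are obtained.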

  11. Proteome Analyses Using Accurate Mass and Elution Time Peptide Tags with Capillary LC Time-of-Flight Mass Spectrometry

    SciTech Connect

    Strittmatter, Eric F.; Ferguson, Patrick L.; Tang, Keqi; Smith, Richard D.

    2003-09-01

We describe the application of capillary liquid chromatography (LC) time-of-flight (TOF) mass spectrometric instrumentation for the rapid characterization of microbial proteomes. Previously (Lipton et al., Proc. Natl. Acad. Sci. USA, 99, 2002, 11049) the peptides from a series of growth conditions of Deinococcus radiodurans were characterized using capillary LC MS/MS and accurate mass measurements, which are logged in an accurate mass and time (AMT) tag database. Using this AMT tag database, detected peptides can be assigned from measurements obtained on a TOF thanks to the additional use of elution time data as a constraint. When peptide matches are obtained using AMT tags (i.e., using both constraints), unique matches of a mass spectral peak occur 88% of the time. Not only are AMT tag matches unique in most cases, the coverage of the proteome is high; ~3500 unique peptide AMT tags are found on average per capillary LC run. From the results of the AMT tag database search, ~900 ORFs were detected using LC-TOFMS, with ~500 ORFs covered by at least two AMT tags. These results indicate that AMT tag database searches with modest mass and elution time criteria can provide proteomic information for approximately one thousand proteins in a single run of <3 hours. The advantage of this method over MS/MS-based techniques is the large number of identifications that occur in a single experiment as well as the basis for improved quantitation. For MS/MS experiments, the number of peptide identifications is severely restricted because of the time required to dissociate the peptides individually. These results demonstrate the utility of the AMT tag approach using capillary LC-TOF MS instruments, and also show that AMT tags developed using other instrumentation can be effectively utilized.
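
    The two-constraint matching step can be sketched as a simple filter over a tag database. Everything concrete below is hypothetical: the tag names, masses, normalized elution times, and the 10 ppm / 0.02 NET tolerances are illustrative values, not the study's:

```python
def match_amt(observed, tags, ppm_tol=10.0, net_tol=0.02):
    """Match observed (mass, normalized elution time) features to AMT tags.
    A tag matches only if BOTH constraints hold: mass within ppm_tol
    parts-per-million and normalized elution time (NET) within net_tol.
    Returns, for each observed feature, the list of matching tag names."""
    results = []
    for mass, net in observed:
        hits = [name for name, (tmass, tnet) in tags.items()
                if abs(mass - tmass) / tmass * 1e6 <= ppm_tol
                and abs(net - tnet) <= net_tol]
        results.append(hits)
    return results

# Hypothetical database: name -> (monoisotopic mass, normalized elution time)
tags = {"PEPTIDER": (1018.512, 0.31), "SAMPLEK": (1018.505, 0.62)}
# Two features with nearly equal mass: elution time disambiguates them,
# which is the point of the second constraint
hits = match_amt([(1018.510, 0.30), (1018.510, 0.61)], tags)
```

    On mass alone both features would match both tags; adding the elution-time constraint yields a unique assignment for each, mirroring the 88% unique-match figure's logic.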

  12. Interpretable Early Classification of Multivariate Time Series

    ERIC Educational Resources Information Center

    Ghalwash, Mohamed F.

    2013-01-01

    Recent advances in technology have led to an explosion in data collection over time rather than in a single snapshot. For example, microarray technology allows us to measure gene expression levels in different conditions over time. Such temporal data grants the opportunity for data miners to develop algorithms to address domain-related problems,…

  13. Simulation of Ground Winds Time Series

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    2008-01-01

A simulation process has been developed for generation of the longitudinal and lateral components of ground wind atmospheric turbulence as a function of mean wind speed, elevation, temporal frequency range and distance between locations. The distance between locations influences the spectral coherence between the simulated series at adjacent locations. Short distances reduce correlation only at high frequencies; as distances increase, correlation is reduced over a wider range of frequencies. The choice of values for the constants d1 and d3 in the PSD model is the subject of work in progress. An improved knowledge of the values for z0 as a function of wind direction at the ARES-1 launch pads is necessary for definition of d1. Results of other studies at other locations may be helpful, as summarized in Fichtl's recent correspondence. Ideally, further research is needed based on measurements of ground wind turbulence with high-resolution anemometers at a number of altitudes at a new KSC tower located closer to the ARES-1 launch pad. The proposed research would be based on turbulence measurements that may be influenced by surface terrain roughness significantly different from the roughness prior to 1970 in Fichtl's measurements. Significant improvements in instrumentation, data storage and processing will greatly enhance the capability to model ground wind profiles and ground wind turbulence.

  14. How to analyse irregularly sampled geophysical time series?

    NASA Astrophysics Data System (ADS)

    Eroglu, Deniz; Ozken, Ibrahim; Stemler, Thomas; Marwan, Norbert; Wyrwoll, Karl-Heinz; Kurths, Juergen

    2015-04-01

One of the challenges of time series analysis is to detect changes in the dynamics of the underlying system. There are numerous methods that can be used to detect such regime changes in regularly sampled time series. Here we present a new approach that can be applied when the time series is irregularly sampled. Such data sets occur frequently in real-world applications, as in paleoclimate proxy records. The basic idea follows Victor and Purpura [1] and considers segments of the time series. For each segment we compute the cost of transforming the segment into the following one. If the time series is from one dynamical regime, the cost of transformation should be similar for each segment of the data. Dramatic changes in the cost time series indicate a change in the underlying dynamics. Any kind of analysis can be applied to the cost time series, since it is a regularly sampled time series. While recurrence plots are not the best choice for irregularly sampled data with some measurement noise component, we show that a recurrence plot analysis based on the cost time series can successfully identify the changes in the dynamics of the system. We tested this method using synthetically created time series and use these results to highlight the performance of our method. Furthermore, we present our analysis of a suite of calcite and aragonite stalagmites located in the eastern Kimberley region of tropical Western Australia. This oxygen isotopic data is a proxy for monsoon activity over the last 8,000 years. In this time series our method picks up several previously undetected changes from wet to dry in the monsoon system and therefore enables us to get a better understanding of the monsoon dynamics in the north-east of Australia over the last couple of thousand years. [1] J. D. Victor and K. P. Purpura, Network: Computation in Neural Systems 8, 127 (1997)
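
    The transformation-cost idea traces back to Victor and Purpura's edit distance between event sequences, which can be sketched for two segments of event times. Deleting or inserting an event costs 1 and shifting an event by dt costs q*|dt|; the cost parameter q and the example times below are illustrative, not values from the paper:

```python
def transform_cost(seg_a, seg_b, q=1.0):
    """Victor-Purpura-style edit distance between two sorted lists of event
    times, computed by dynamic programming: d[i][j] is the minimum cost of
    transforming the first i events of seg_a into the first j of seg_b."""
    na, nb = len(seg_a), len(seg_b)
    d = [[0.0] * (nb + 1) for _ in range(na + 1)]
    for i in range(1, na + 1):
        d[i][0] = float(i)            # delete all i events
    for j in range(1, nb + 1):
        d[0][j] = float(j)            # insert all j events
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            d[i][j] = min(
                d[i - 1][j] + 1.0,    # delete event i of seg_a
                d[i][j - 1] + 1.0,    # insert event j of seg_b
                d[i - 1][j - 1] + q * abs(seg_a[i - 1] - seg_b[j - 1]))  # shift
    return d[na][nb]

# Identical segments cost 0; one slightly moved event costs q * |dt|
c = transform_cost([0.1, 0.5, 0.9], [0.1, 0.6, 0.9], q=1.0)
```

    Sliding this cost over successive segment pairs yields the regularly sampled cost series on which the recurrence plot analysis then operates.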

  15. Common trends in northeast Atlantic squid time series

    NASA Astrophysics Data System (ADS)

    Zuur, A. F.; Pierce, G. J.

    2004-06-01

    In this paper, dynamic factor analysis is used to estimate common trends in time series of squid catch per unit effort in Scottish (UK) waters. Results indicated that time series of most months were related to sea surface temperature measured at Millport (UK) and a few series were related to the NAO index. The DFA methodology identified three common trends in the squid time series not revealed by traditional approaches, which suggest a possible shift in relative abundance of summer- and winter-spawning populations.

  16. Time series analysis of air pollutants in Beirut, Lebanon.

    PubMed

    Farah, Wehbeh; Nakhlé, Myriam Mrad; Abboud, Maher; Annesi-Maesano, Isabella; Zaarour, Rita; Saliba, Nada; Germanos, Georges; Gerard, Jocelyne

    2014-12-01

    This study reports for the first time a time series analysis of daily urban air pollutant levels (CO, NO, NO2, O3, PM10, and SO2) in Beirut, Lebanon. The study examines data obtained between September 2005 and July 2006, and their descriptive analysis shows long-term variations of daily levels of air pollution concentrations. Strong persistence of these daily levels is identified in the time series using an autocorrelation function, except for SO2. Time series of standardized residual values (SRVs) are also calculated to compare fluctuations of the time series with different levels. Time series plots of the SRVs indicate that NO and NO2 had similar temporal fluctuations. However, NO2 and O3 had opposite temporal fluctuations, attributable to weather conditions and the accumulation of vehicular emissions. The effects of both desert dust storms and airborne particulate matter resulting from the Lebanon War in July 2006 are also discernible in the SRV plots. PMID:25150052
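
    The two descriptive tools used in this analysis, the sample autocorrelation function and standardized residual values, are simple to compute. A generic sketch with invented data (not the Beirut measurements):

```python
import math

def acf(series, max_lag):
    """Sample autocorrelation function for lags 1..max_lag; values near 1
    at low lags indicate the strong day-to-day persistence discussed above."""
    n = len(series)
    m = sum(series) / n
    var = sum((x - m) ** 2 for x in series)
    return [sum((series[t] - m) * (series[t + k] - m) for t in range(n - k)) / var
            for k in range(1, max_lag + 1)]

def standardized_residuals(series):
    """SRVs: (x - mean) / std, so series measured on different scales
    (e.g., different pollutants) can be compared on one plot."""
    n = len(series)
    m = sum(series) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in series) / (n - 1))
    return [(x - m) / sd for x in series]

# A slowly varying (persistent) series has a large lag-1 autocorrelation
persistent = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2]
r1 = acf(persistent, 1)[0]
```

    Plotting the SRVs of two pollutants on the same axes is what reveals the matching NO/NO2 fluctuations and the opposed NO2/O3 fluctuations reported above.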

  17. Horizontal visibility graphs: exact results for random time series.

    PubMed

    Luque, B; Lacasa, L; Ballesteros, F; Luque, J

    2009-10-01

The visibility algorithm has recently been introduced as a mapping between time series and complex networks. This procedure allows us to apply methods of complex network theory for characterizing time series. In this work we present the horizontal visibility algorithm, a geometrically simpler and analytically solvable version of our former algorithm, focusing on the mapping of random series (series of independent identically distributed random variables). After presenting some properties of the algorithm, we present exact results on the topological properties of graphs associated with random series, namely, the degree distribution, the clustering coefficient, and the mean path length. We show that the horizontal visibility algorithm stands as a simple method to discriminate randomness in time series, since any random series maps to a graph with an exponential degree distribution of the shape P(k) = (1/3)(2/3)^(k-2), independent of the probability distribution from which the series was generated. Accordingly, visibility graphs with other P(k) are related to nonrandom series. Numerical simulations confirm the accuracy of the theorems for finite series. In a second part, we show that the method is able to distinguish chaotic series from independent and identically distributed (i.i.d.) series, studying the following situations: (i) noise-free low-dimensional chaotic series, (ii) low-dimensional noisy chaotic series, even in the presence of large amounts of noise, and (iii) high-dimensional chaotic series (coupled map lattice), without the need for additional techniques such as surrogate data or noise reduction methods. Finally, heuristic arguments are given to explain the topological properties of chaotic series, and several sequences that are conjectured to be random are analyzed. PMID:19905386
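
    The exponential law P(k) = (1/3)(2/3)^(k-2) implies a mean degree of 4 for i.i.d. series (summing k*P(k) over k >= 2 gives exactly 4), which a brute-force horizontal visibility construction can check numerically. An illustrative sketch; the sample size and seed are arbitrary:

```python
import random

def hvg_degrees(series):
    """Horizontal visibility graph: i and j (i < j) are linked when every
    intermediate value lies strictly below both endpoint values."""
    n = len(series)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg

# For i.i.d. data the theory predicts mean degree 4 and P(k=2) = 1/3,
# up to finite-size boundary effects
random.seed(42)
deg = hvg_degrees([random.random() for _ in range(1000)])
mean_deg = sum(deg) / len(deg)
```

    Because the predicted P(k) is the same for any i.i.d. distribution, a measured degree distribution that deviates from it flags the series as nonrandom, which is the discrimination criterion the abstract describes.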

  18. Analysis of Time-Series Quasi-Experiments. Final Report.

    ERIC Educational Resources Information Center

    Glass, Gene V.; Maguire, Thomas O.

    The objective of this project was to investigate the adequacy of statistical models developed by G. E. P. Box and G. C. Tiao for the analysis of time-series quasi-experiments: (1) The basic model developed by Box and Tiao is applied to actual time-series experiment data from two separate experiments, one in psychology and one in educational…

  19. Spectral Procedures Enhance the Analysis of Three Agricultural Time Series

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Many agricultural and environmental variables are influenced by cyclic processes that occur naturally. Consequently their time series often have cyclic behavior. This study developed times series models for three different phenomenon: (1) a 60 year-long state average crop yield record, (2) a four ...

  20. A Computer Evolution in Teaching Undergraduate Time Series

    ERIC Educational Resources Information Center

    Hodgess, Erin M.

    2004-01-01

    In teaching undergraduate time series courses, we have used a mixture of various statistical packages. We have finally been able to teach all of the applied concepts within one statistical package; R. This article describes the process that we use to conduct a thorough analysis of a time series. An example with a data set is provided. We compare…

  1. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    ERIC Educational Resources Information Center

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  2. Nonlinear parametric model for Granger causality of time series

    NASA Astrophysics Data System (ADS)

    Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano

    2006-06-01

The notion of Granger causality between two time series examines whether the prediction of one series could be improved by incorporating information from the other. In particular, if the prediction error of the first time series is reduced by including measurements from the second time series, then the second time series is said to have a causal influence on the first one. We propose a radial basis function approach to nonlinear Granger causality. The proposed model is not constrained to be additive in variables from the two time series and can approximate any function of these variables, while remaining suitable for evaluating causality. The usefulness of this measure of causality is shown in two applications. In the first application, a physiological one, we consider time series of heart rate and blood pressure in congestive heart failure patients and patients affected by sepsis: we find that sepsis patients, unlike congestive heart failure patients, show symmetric causal relationships between the two time series. In the second application, we consider the feedback loop in a model of excitatory and inhibitory neurons: we find that in this system causality measures the combined influence of couplings and membrane time constants.
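The prediction-error criterion in the abstract can be illustrated with an ordinary linear least-squares predictor; note that this is only the linear version of the idea (the paper itself uses a radial-basis-function model), and the coupled series below are hypothetical:

```python
import numpy as np

def granger_improvement(x, y, p=2):
    """Relative reduction in x's one-step prediction error when lags of y are added.

    Plain linear least-squares version of the idea; the paper itself uses a
    radial-basis-function (nonlinear) predictor.
    """
    n = len(x)
    X_own = np.array([x[t - p:t] for t in range(p, n)])
    X_joint = np.array([np.concatenate((x[t - p:t], y[t - p:t])) for t in range(p, n)])
    target = x[p:]

    def rss(A):
        A = np.column_stack([A, np.ones(len(A))])
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        r = target - A @ coef
        return float(r @ r)

    e_own, e_joint = rss(X_own), rss(X_joint)
    return (e_own - e_joint) / e_own       # > 0 suggests y Granger-causes x

rng = np.random.default_rng(2)
n = 2000
y = rng.normal(size=n)                     # driver series
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.normal()

gi_yx = granger_improvement(x, y)          # y -> x: large improvement
gi_xy = granger_improvement(y, x)          # x -> y: essentially none
```

The asymmetry of the two improvement scores is exactly what a causality analysis like the sepsis/heart-failure comparison above inspects.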

  3. Measurements of spatial population synchrony: influence of time series transformations.

    PubMed

    Chevalier, Mathieu; Laffaille, Pascal; Ferdy, Jean-Baptiste; Grenouillet, Gaël

    2015-09-01

    Two mechanisms have been proposed to explain spatial population synchrony: dispersal among populations, and the spatial correlation of density-independent factors (the "Moran effect"). To identify which of these two mechanisms is driving spatial population synchrony, time series transformations (TSTs) of abundance data have been used to remove the signature of one mechanism, and highlight the effect of the other. However, several issues with TSTs remain, and to date no consensus has emerged about how population time series should be handled in synchrony studies. Here, by using 3131 time series involving 34 fish species found in French rivers, we computed several metrics commonly used in synchrony studies to determine whether a large-scale climatic factor (temperature) influenced fish population dynamics at the regional scale, and to test the effect of three commonly used TSTs (detrending, prewhitening and a combination of both) on these metrics. We also tested whether the influence of TSTs on time series and population synchrony levels was related to the features of the time series using both empirical and simulated time series. For several species, and regardless of the TST used, we evidenced a Moran effect on freshwater fish populations. However, these results were globally biased downward by TSTs which reduced our ability to detect significant signals. Depending on the species and the features of the time series, we found that TSTs could lead to contradictory results, regardless of the metric considered. Finally, we suggest guidelines on how population time series should be processed in synchrony studies. PMID:25953116
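Two of the transformations named above, detrending and prewhitening, can be sketched as follows. The abundance-like series, shared signal, and parameter values are hypothetical, and the study's own implementations may differ:

```python
import numpy as np

def detrend(x):
    """Remove a least-squares linear trend (one common TST)."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def prewhiten(x):
    """Take AR(1) residuals to remove lag-1 autocorrelation (another common TST)."""
    x = x - np.mean(x)
    phi = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
    return x[1:] - phi * x[:-1]

rng = np.random.default_rng(3)
t = np.arange(500)
# Two hypothetical abundance series sharing an environmental signal and a common trend
shared = rng.normal(size=500)
a = 0.05 * t + shared + 0.3 * rng.normal(size=500)
b = 0.05 * t + shared + 0.3 * rng.normal(size=500)

raw_sync = np.corrcoef(a, b)[0, 1]                    # inflated by the shared trend
tst_sync = np.corrcoef(detrend(a), detrend(b))[0, 1]  # synchrony after detrending
```

The drop from the raw to the detrended correlation illustrates how a TST changes the synchrony estimate, which is exactly the sensitivity the study quantifies.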

  4. Using Time-Series Regression to Predict Academic Library Circulations.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1984-01-01

    Four methods were used to forecast monthly circulation totals in 15 midwestern academic libraries: dummy time-series regression, lagged time-series regression, simple average (straight-line forecasting), monthly average (naive forecasting). In tests of forecasting accuracy, dummy regression method and monthly mean method exhibited smallest average…

  5. Sunspot Time Series: Passive and Active Intervals

    NASA Astrophysics Data System (ADS)

    Zięba, S.; Nieckarz, Z.

    2014-07-01

Solar activity slowly and irregularly decreases from the first spotless day (FSD) in the declining phase of the old sunspot cycle and systematically, but also in an irregular way, increases to the new cycle maximum after the last spotless day (LSD). The time interval between the first and the last spotless day can be called the passive interval (PI), while the time interval from the last spotless day to the first one after the new cycle maximum is the related active interval (AI). Minima of solar cycles are inside PIs, while maxima are inside AIs. In this article, we study the properties of passive and active intervals to determine the relation between them. We have found that some properties of PIs, and related AIs, differ significantly between two groups of solar cycles; this has allowed us to classify Cycles 8 - 15 as passive cycles, and Cycles 17 - 23 as active ones. We conclude that the solar activity in the PI declining phase (the descending phase of the previous cycle) determines the strength of the approaching maximum in the case of active cycles, while the activity of the PI rising phase (the early-growth phase of the ongoing cycle) determines the strength of passive cycles. This can have implications for solar dynamo models. Our approach indicates the important role of solar activity during the declining and the rising phases of the solar-cycle minimum.

  6. Functional and stochastic models estimation for GNSS coordinates time series

    NASA Astrophysics Data System (ADS)

    Galera Monico, J. F.; Silva, H. A.; Marques, H. A.

    2014-12-01

GNSS has been largely used in Geodesy and correlated areas for positioning. The position and velocity of terrestrial stations have been estimated from GNSS data based on daily solutions, so it is currently possible to analyse GNSS coordinate time series with the aim of improving the functional and stochastic models, which can help in understanding geodynamic phenomena. Several sources of error are mathematically modelled or estimated in the GNSS data processing to obtain precise coordinates, which in general is carried out using scientific software. However, owing to the impossibility of modelling all errors, some noise remains and contaminates the coordinate time series, especially noise related to seasonal effects. The noise affecting GNSS coordinate time series can be composed of white and coloured noise, which can be characterized by the Variance Component Estimation technique through the least-squares method. The methodology to characterize noise in GNSS coordinate time series is presented in this paper, so that the estimated variances can be used to reconstruct the stochastic and functional models of the time series, providing more realistic and reliable modelling. Experiments were carried out using GNSS time series for a few Brazilian stations, considering almost ten years of daily solutions. The noise components were characterized as white, flicker and random walk noise and were used to estimate the functional model of the time series, considering semiannual and annual effects. The results show that the adoption of an adequate stochastic model, accounting for the noise variances of the time series, can produce a more realistic and reliable functional model for GNSS coordinate time series. Such results may be applied in the context of the realization of the Brazilian Geodetic System.
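The functional model with annual and semiannual effects mentioned above is typically a harmonic regression on top of an intercept and a velocity term. A minimal sketch with simulated daily coordinate solutions, assuming white noise only (a full analysis would weight the fit by the estimated white/flicker/random-walk noise covariance, and the units and amplitudes below are hypothetical):

```python
import numpy as np

def seasonal_design(t_years):
    """Design matrix: intercept, velocity, and annual + semiannual harmonics."""
    w = 2 * np.pi * t_years
    return np.column_stack([np.ones_like(t_years), t_years,
                            np.sin(w), np.cos(w), np.sin(2 * w), np.cos(2 * w)])

rng = np.random.default_rng(4)
t = np.arange(0, 10, 1 / 365.25)              # ~10 years of daily solutions
truth = 2.0 + 3.5 * t + 4.0 * np.sin(2 * np.pi * t) + 1.5 * np.cos(4 * np.pi * t)
coords = truth + rng.normal(0, 2.0, t.size)   # hypothetical coordinate (mm), white noise

A = seasonal_design(t)
params, *_ = np.linalg.lstsq(A, coords, rcond=None)
velocity = params[1]                          # recovered station velocity (mm/yr)
```

Over full years the trend and harmonic columns are nearly orthogonal, so the velocity and seasonal amplitudes are recovered almost independently.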

  7. Application of cross-sectional time series modeling for the prediction of energy expenditure from heart rate and accelerometry

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Accurate estimation of energy expenditure (EE) in children and adolescents is required for a better understanding of physiological, behavioral, and environmental factors affecting energy balance. Cross-sectional time series (CSTS) models, which account for correlation structure of repeated observati...

  8. Comparison of New and Old Sunspot Number Time Series

    NASA Astrophysics Data System (ADS)

    Cliver, E. W.

    2016-06-01

    Four new sunspot number time series have been published in this Topical Issue: a backbone-based group number in Svalgaard and Schatten (Solar Phys., 2016; referred to here as SS, 1610 - present), a group number series in Usoskin et al. (Solar Phys., 2016; UEA, 1749 - present) that employs active day fractions from which it derives an observational threshold in group spot area as a measure of observer merit, a provisional group number series in Cliver and Ling (Solar Phys., 2016; CL, 1841 - 1976) that removed flaws in the Hoyt and Schatten (Solar Phys. 179, 189, 1998a; 181, 491, 1998b) normalization scheme for the original relative group sunspot number ( RG, 1610 - 1995), and a corrected Wolf (international, RI) number in Clette and Lefèvre (Solar Phys., 2016; SN, 1700 - present). Despite quite different construction methods, the four new series agree well after about 1900. Before 1900, however, the UEA time series is lower than SS, CL, and SN, particularly so before about 1885. Overall, the UEA series most closely resembles the original RG series. Comparison of the UEA and SS series with a new solar wind B time series (Owens et al. in J. Geophys. Res., 2016; 1845 - present) indicates that the UEA time series is too low before 1900. We point out incongruities in the Usoskin et al. (Solar Phys., 2016) observer normalization scheme and present evidence that this method under-estimates group counts before 1900. In general, a correction factor time series, obtained by dividing an annual group count series by the corresponding yearly averages of raw group counts for all observers, can be used to assess the reliability of new sunspot number reconstructions.

  9. Time series photometry and starspot properties

    NASA Astrophysics Data System (ADS)

    Oláh, Katalin

    2011-08-01

Systematic efforts of monitoring starspots from the middle of the 20th century, and the results obtained from the datasets, are summarized with special focus on the observations made by automated telescopes. Multicolour photometry shows correlations between colour indices and brightness, indicating spotted regions with different average temperatures originating from spots and faculae. Long-term monitoring of spotted stars reveals variability on different timescales. On the rotational timescale, new spot appearances and starspot proper motions are followed through continuous changes of light curves during subsequent rotations. A sudden interchange of the more and less active hemispheres on the stellar surface is the so-called flip-flop phenomenon. The existence and strength of differential rotation is seen from the rotational signals of spots at different stellar latitudes. Long datasets, with only short, annual interruptions, shed light on the nature of stellar activity cycles and multiple cycles. The systematic and/or random changes of the spot cycle lengths are discovered and described using various time-frequency analysis tools. Positions and sizes of spotted regions on stellar surfaces are calculated from photometric data by various software packages. From spot positions derived over decades, active longitudes on the stellar surfaces are found, which, in the case of synchronized eclipsing binaries, can be well positioned in the orbital frame, with respect to, and affected by, the companion stars.

  10. Learning a Mahalanobis Distance-Based Dynamic Time Warping Measure for Multivariate Time Series Classification.

    PubMed

    Mei, Jiangyuan; Liu, Meizhu; Wang, Yuan-Fang; Gao, Huijun

    2016-06-01

Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or have different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning model with triplet constraints, which can learn the Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied to nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach. PMID:25966490
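Using a Mahalanobis distance as the local distance inside DTW can be sketched as below. Here the metric matrix M is an identity placeholder rather than one learned by the paper's LogDet-divergence metric learning, and the two-variable series are hypothetical:

```python
import numpy as np

def dtw_mahalanobis(A, B, M):
    """DTW distance between MTS A (n x d) and B (m x d) with a Mahalanobis local distance.

    M must be symmetric positive definite; M = I recovers Euclidean DTW.
    """
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = A[i - 1] - B[j - 1]
            local = np.sqrt(diff @ M @ diff)              # Mahalanobis local distance
            D[i, j] = local + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Two 2-variable series that are out of synchronization (B is a delayed copy of A)
t = np.linspace(0, 2 * np.pi, 50)
A = np.column_stack([np.sin(t), np.cos(t)])
B = np.column_stack([np.sin(t - 0.5), np.cos(t - 0.5)])
M = np.eye(2)        # placeholder metric; the paper learns M by LogDet metric learning

d_warped = dtw_mahalanobis(A, B, M)
d_lockstep = float(sum(np.linalg.norm(a - b) for a, b in zip(A, B)))
# The warping path absorbs the delay, so d_warped is well below d_lockstep
```

Since the diagonal path is one admissible warping, the DTW distance can never exceed the lock-step distance; learning M then reshapes the local geometry per variable.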

  11. High Performance Biomedical Time Series Indexes Using Salient Segmentation

    PubMed Central

    Woodbridge, Jonathan; Mortazavi, Bobak; Bui, Alex A.T.; Sarrafzadeh, Majid

    2016-01-01

    The advent of remote and wearable medical sensing has created a dire need for efficient medical time series databases. Wearable medical sensing devices provide continuous patient monitoring by various types of sensors and have the potential to create massive amounts of data. Therefore, time series databases must utilize highly optimized indexes in order to efficiently search and analyze stored data. This paper presents a highly efficient technique for indexing medical time series signals using Locality Sensitive Hashing (LSH). Unlike previous work, only salient (or interesting) segments are inserted into the index. This technique reduces search times by up to 95% while yielding near identical search results. PMID:23367072

  12. From time series to complex networks: The visibility graph

    PubMed Central

    Lacasa, Lucas; Luque, Bartolo; Ballesteros, Fernando; Luque, Jordi; Nuño, Juan Carlos

    2008-01-01

    In this work we present a simple and fast computational method, the visibility algorithm, that converts a time series into a graph. The constructed graph inherits several properties of the series in its structure. Thereby, periodic series convert into regular graphs, and random series do so into random graphs. Moreover, fractal series convert into scale-free networks, enhancing the fact that power law degree distributions are related to fractality, something highly discussed recently. Some remarkable examples and analytical tools are outlined to test the method's reliability. Many different measures, recently developed in the complex network theory, could by means of this new approach characterize time series from a new point of view. PMID:18362361
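The visibility criterion behind the algorithm, two data points see each other if the straight line joining the tops of their bars passes above every intermediate point, can be coded directly. A quadratic-time sketch on illustrative periodic and random series:

```python
import numpy as np

def visibility_edges(x):
    """Edges of the (natural) visibility graph of series x (O(n^2) demonstration)."""
    n = len(x)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            # (i, j) see each other iff every bar between them lies strictly
            # below the straight line joining (i, x[i]) and (j, x[j])
            if all(x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

rng = np.random.default_rng(5)
periodic = np.tile([1.0, 3.0, 2.0, 4.0], 25)   # periodic series -> (near-)regular graph
random_series = rng.random(100)                # random series -> random graph

edges_random = visibility_edges(random_series)
edge_set = set(edges_random)
mean_degree = 2 * len(edges_random) / len(random_series)
# Bulk nodes of the periodic series take only a few distinct degree values,
# while the degrees of the random series spread out
deg_periodic = np.bincount(np.ravel(visibility_edges(periodic)), minlength=100)
```

Consecutive points are always mutually visible, so the graph always contains the time-order chain as a subgraph.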

  13. From time series to complex networks: the visibility graph.

    PubMed

    Lacasa, Lucas; Luque, Bartolo; Ballesteros, Fernando; Luque, Jordi; Nuño, Juan Carlos

    2008-04-01

    In this work we present a simple and fast computational method, the visibility algorithm, that converts a time series into a graph. The constructed graph inherits several properties of the series in its structure. Thereby, periodic series convert into regular graphs, and random series do so into random graphs. Moreover, fractal series convert into scale-free networks, enhancing the fact that power law degree distributions are related to fractality, something highly discussed recently. Some remarkable examples and analytical tools are outlined to test the method's reliability. Many different measures, recently developed in the complex network theory, could by means of this new approach characterize time series from a new point of view. PMID:18362361

  14. DEM time series of an agricultural watershed

    NASA Astrophysics Data System (ADS)

    Pineux, Nathalie; Lisein, Jonathan; Swerts, Gilles; Degré, Aurore

    2014-05-01

In agricultural landscapes, the soil surface evolves notably due to erosion and deposition phenomena. Even though most field data come from plot-scale studies, the watershed scale seems more appropriate for understanding these processes. Currently, small unmanned aircraft systems and image processing techniques are improving rapidly. In this way, 3D models are built from multiple overlapping shots. Where techniques suited to large areas would be too expensive for a watershed-level study, and techniques for small areas too time-consuming, unmanned aerial systems seem a promising solution to quantify erosion and deposition patterns. The ongoing technical improvements in this growing field allow us to obtain very good data quality and very high spatial resolution with high Z accuracy. In the centre of Belgium, we equipped an agricultural watershed of 124 ha. For three years (2011-2013), we have been monitoring weather (including rainfall erosivity, using a spectropluviograph), discharge at three different locations, sediment in runoff water, and watershed microtopography through unmanned airborne imagery (Gatewing X100). We also collected all available historical data to try to capture the "long-term" changes in watershed morphology during the last decades: old topographic maps, historical soil descriptions, etc. An erosion model (LANDSOIL) is also used to assess the evolution of the relief. Short-term evolution of the surface is now observed through flights flown at 200 m height, with pictures taken at a side overlap of 80%. To precisely georeference the DEM produced, ground control points are placed on the study site and surveyed using a Leica GPS1200 (accuracy of 1 cm for the x and y coordinates and 1.5 cm for the z coordinate). Flights are made each year in December so that the ground surface is as bare as possible. Specific treatments are developed to counteract the effect of vegetation, which is known as a key source of error in DEMs produced by small unmanned aircraft

  15. Performance of multifractal detrended fluctuation analysis on short time series

    NASA Astrophysics Data System (ADS)

    López, Juan Luis; Contreras, Jesús Guillermo

    2013-02-01

    The performance of the multifractal detrended analysis on short time series is evaluated for synthetic samples of several mono- and multifractal models. The reconstruction of the generalized Hurst exponents is used to determine the range of applicability of the method and the precision of its results as a function of the decreasing length of the series. As an application the series of the daily exchange rate between the U.S. dollar and the euro is studied.
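The core of (MF-)DFA, regressing the log of the detrended fluctuation against the log of the window size, can be sketched in its monofractal q = 2 form. The scale range and series lengths below are illustrative; the paper's multifractal analysis generalizes this over a range of moment orders q:

```python
import numpy as np

def dfa_hurst(x, scales):
    """Monofractal (q = 2) DFA estimate of the Hurst exponent.

    Profile -> windowed linear detrending -> RMS fluctuation F(s) -> slope of
    log F against log s. MF-DFA generalizes the q = 2 average over a range of q.
    """
    y = np.cumsum(x - np.mean(x))                 # profile of the series
    F = []
    for s in scales:
        n_seg = len(y) // s
        var = []
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            var.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(var)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope                                  # ~0.5 for white noise

rng = np.random.default_rng(6)
scales = np.array([8, 16, 32, 64])
h_short = dfa_hurst(rng.normal(size=500), scales)     # short series: noisier estimate
h_long = dfa_hurst(rng.normal(size=50000), scales)    # long series: close to 0.5
```

Comparing the short- and long-series estimates mirrors the paper's question: how far the exponents can be trusted as the series length shrinks.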

  16. Time series modeling of system self-assessment of survival

    SciTech Connect

    Lu, H.; Kolarik, W.J.

    1999-06-01

    Self-assessment of survival for a system, subsystem or component is implemented by assessing conditional performance reliability in real-time, which includes modeling and analysis of physical performance data. This paper proposes a time series analysis approach to system self-assessment (prediction) of survival. In the approach, physical performance data are modeled in a time series. The performance forecast is based on the model developed and is converted to the reliability of system survival. In contrast to a standard regression model, a time series model, using on-line data, is suitable for the real-time performance prediction. This paper illustrates an example of time series modeling and survival assessment, regarding an excessive tool edge wear failure mode for a twist drill operation.

  17. A Monte Carlo Approach to Biomedical Time Series Search

    PubMed Central

    Woodbridge, Jonathan; Mortazavi, Bobak; Sarrafzadeh, Majid; Bui, Alex A.T.

    2016-01-01

Time series subsequence matching (or signal searching) has importance in a variety of areas in health care informatics. These areas include case-based diagnosis and treatment as well as the discovery of trends and correlations between data. Much of the traditional research in signal searching has focused on high dimensional R-NN matching. However, the results of R-NN are often small and yield minimal information gain, especially with higher dimensional data. This paper proposes a randomized Monte Carlo sampling method to broaden search criteria such that the query results are an accurate sampling of the complete result set. The proposed method is shown both theoretically and empirically to improve information gain. The number of query results is increased by several orders of magnitude over approximate exact matching schemes and the results fall within a Gaussian distribution. The proposed method also shows excellent performance, as the majority of the overhead added by sampling can be mitigated through parallelization. Experiments are run on both simulated and real-world biomedical datasets.

  18. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    PubMed Central

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng

    2015-01-01

This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the trends of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt optimal actions. The model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, we compare it with similar conventional models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249

  19. Chaos time series prediction based on membrane optimization algorithms.

    PubMed

    Li, Meng; Yi, Liangzhong; Pei, Zheng; Gao, Zhisheng; Peng, Hong

    2015-01-01

This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the trends of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt optimal actions. The model presented in this paper is used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, we compare it with similar conventional models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE). PMID:25874249

  20. Database for Hydrological Time Series of Inland Waters (DAHITI)

    NASA Astrophysics Data System (ADS)

    Schwatke, Christian; Dettmering, Denise

    2016-04-01

Satellite altimetry was designed for ocean applications. However, for some years now, satellite altimetry has also been used over inland water to estimate water level time series of lakes, rivers and wetlands. The resulting water level time series can help in understanding the water cycle of the Earth system, which makes altimetry a very useful instrument for hydrological applications. In this poster, we introduce the "Database for Hydrological Time Series of Inland Waters" (DAHITI). Currently, the database contains about 350 water level time series of lakes, reservoirs, rivers, and wetlands, which are freely available after a short registration process via http://dahiti.dgfi.tum.de. We present the products of DAHITI and the functionality of the DAHITI web service, and selected examples of inland water targets are discussed in detail. DAHITI provides time series of water level heights of inland water bodies and their formal errors. These time series are available within the period 1992-2015 and have varying temporal resolutions depending on the data coverage of the investigated water body. The accuracies of the water level time series depend mainly on the extent of the investigated water body and the quality of the altimeter measurements. An external validation with in-situ data reveals RMS differences of between 5 cm and 40 cm for lakes and between 10 cm and 140 cm for rivers, respectively.

  1. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, the batting average of a baseball player, and the sales volume of home electronics.

  2. Detecting temporal and spatial correlations in pseudoperiodic time series

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Luo, Xiaodong; Nakamura, Tomomichi; Sun, Junfeng; Small, Michael

    2007-01-01

Recently there has been much attention devoted to exploring the complicated, possibly chaotic dynamics in pseudoperiodic time series. Two methods [Zhang, Phys. Rev. E 73, 016216 (2006); Zhang and Small, Phys. Rev. Lett. 96, 238701 (2006)] have been put forward to reveal the chaotic temporal and spatial correlations, respectively, among the cycles in the time series. Both these methods treat the cycle as the basic unit and design specific statistics that indicate the presence of chaotic dynamics. In this paper, we verify the validity of these statistics to capture the chaotic correlation among cycles by using the surrogate data method. In particular, the statistics computed for the original time series are compared with those from its surrogates. The surrogate data we generate is of pseudoperiodic type (PPS), which preserves the inherent periodic components while destroying the subtle nonlinear (chaotic) structure. Since the inherent chaotic correlations among cycles, either spatial or temporal (which are suitably characterized by the proposed statistics), are eliminated through the surrogate generation process, we expect the statistics from the surrogates to take significantly different values from those of the original time series. Hence the ability of the statistics to capture the chaotic correlation in the time series can be validated. Application of this procedure to both chaotic time series and real world data clearly demonstrates the effectiveness of the statistics. We have found clear evidence of chaotic correlations among cycles in human electrocardiogram and vowel time series. Furthermore, we show that this framework is more sensitive to subtle changes in the dynamics of the time series due to the match between the PPS surrogates and the statistics adopted. It offers a more reliable tool to reveal the possible correlations among cycles intrinsic to the chaotic nature of pseudoperiodic time series.

  3. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

The objective of this paper was to design a human bio-signal data prediction system that decreases prediction error, using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm have been widely applied in industry for time series prediction. However, a residual error remains between the real value and the prediction result. Therefore, we designed a two-state neural network model that compensates for the residual error, which could be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. We determined that most of the simulation cases were handled satisfactorily by the two-states-mapping-based time series prediction model. In particular, predictions for small-sample time series were more accurate than those of the standard MLP model.
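The two-stage idea, train a second model on the residual error of the first and add its output back, can be sketched with linear AR predictors standing in for the paper's back-propagation networks; the signal, model orders, and improvement below are hypothetical:

```python
import numpy as np

def fit_predict_ar(series, p):
    """In-sample one-step AR(p) predictions via least squares.

    A linear stand-in for the paper's back-propagation networks; the point is
    the two-stage structure, not the particular predictor.
    """
    X = np.array([series[t - p:t] for t in range(p, len(series))])
    X = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return X @ coef

rng = np.random.default_rng(8)
t = np.arange(800)
bio = np.sin(0.1 * t) + 0.3 * np.sin(0.31 * t) + 0.05 * rng.normal(size=800)

# First state: a deliberately weak predictor leaves a structured residual error
pred1 = fit_predict_ar(bio, 1)
resid = bio[1:] - pred1

# Second state: a model trained on the residual series compensates the first
pred2 = fit_predict_ar(resid, 4)
final = pred1[4:] + pred2

mse1 = float(np.mean(resid ** 2))
mse2 = float(np.mean((bio[5:] - final) ** 2))   # compensated error is smaller
```

The compensation only helps when the first-stage residual still contains predictable structure; on a residual that is already white, the second stage adds nothing.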

  4. Modelling road accidents: An approach using structural time series

    NASA Astrophysics Data System (ADS)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
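Selection by smallest AIC, as used above, can be illustrated with simple regression stand-ins for the structural components. The actual structural time series models are state-space models fitted by maximum likelihood; the candidates, counts, and seasonal pattern below are simplified and hypothetical:

```python
import numpy as np

def fit_aic(X, y):
    """Gaussian least-squares fit; AIC = 2k - 2 log-likelihood (k includes sigma^2)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (k + 1) - 2 * loglik

rng = np.random.default_rng(7)
n = 144                                     # 12 hypothetical years of monthly counts
t = np.arange(n)
season = np.tile(np.sin(2 * np.pi * np.arange(12) / 12), 12)
y = 50 + 5 * season + rng.normal(0, 1, n)   # level + seasonal pattern, no trend

ones = np.ones(n)
month = np.column_stack([(t % 12 == m).astype(float) for m in range(11)])
candidates = {
    "level": np.column_stack([ones]),
    "level+trend": np.column_stack([ones, t]),
    "level+seasonal": np.column_stack([ones, month]),
}
aic = {name: fit_aic(X, y) for name, X in candidates.items()}
best = min(aic, key=aic.get)                # the seasonal specification wins here
```

The penalty term in the AIC is what stops the richer seasonal specification from winning by default: it must buy its extra parameters with a large enough likelihood gain.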

  5. Wavelet analysis and scaling properties of time series

    NASA Astrophysics Data System (ADS)

    Manimaran, P.; Panigrahi, Prasanta K.; Parikh, Jitendra C.

    2005-10-01

    We propose a wavelet based method for the characterization of the scaling behavior of nonstationary time series. It makes use of the built-in ability of the wavelets for capturing the trends in a data set, in variable window sizes. Discrete wavelets from the Daubechies family are used to illustrate the efficacy of this procedure. After studying binomial multifractal time series with the present and earlier approaches of detrending for comparison, we analyze the time series of averaged spin density in the 2D Ising model at the critical temperature, along with several experimental data sets possessing multifractal behavior.

  6. Estimation of connectivity measures in gappy time series

    NASA Astrophysics Data System (ADS)

    Papadopoulos, G.; Kugiumtzis, D.

    2015-10-01

    A new method is proposed to compute connectivity measures on multivariate time series with gaps. Rather than removing or filling the gaps, the rows of the joint data matrix containing empty entries are removed and the calculations are done on the remainder matrix. The method, called measure adapted gap removal (MAGR), can be applied to any connectivity measure that uses a joint data matrix, such as cross correlation, cross mutual information and transfer entropy. MAGR is favorably compared using these three measures to a number of known gap-filling techniques, as well as the gap closure. The superiority of MAGR is illustrated on time series from synthetic systems and financial time series.
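For cross correlation, the MAGR idea reduces to a few lines: stack the two series into a joint data matrix, drop every row containing a gap, and compute the statistic on the remaining rows. The toy implementation below is our own sketch with NaN-coded gaps, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two coupled series with gaps encoded as NaN.
n = 500
x = rng.standard_normal(n)
y = 0.8 * x + 0.6 * rng.standard_normal(n)
x[rng.choice(n, 60, replace=False)] = np.nan   # gaps in x
y[rng.choice(n, 60, replace=False)] = np.nan   # gaps in y

def magr_cross_correlation(a, b):
    """Measure-adapted gap removal for cross correlation:
    drop every row of the joint data matrix [a, b] that contains a gap,
    then compute the statistic on the remaining rows."""
    joint = np.column_stack([a, b])
    keep = ~np.isnan(joint).any(axis=1)
    a_kept, b_kept = joint[keep].T
    return np.corrcoef(a_kept, b_kept)[0, 1], int(keep.sum())

r, n_used = magr_cross_correlation(x, y)
print(f"cross correlation on {n_used} complete rows: {r:.3f}")
```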

  7. Wavelet analysis and scaling properties of time series.

    PubMed

    Manimaran, P; Panigrahi, Prasanta K; Parikh, Jitendra C

    2005-10-01

    We propose a wavelet based method for the characterization of the scaling behavior of nonstationary time series. It makes use of the built-in ability of the wavelets for capturing the trends in a data set, in variable window sizes. Discrete wavelets from the Daubechies family are used to illustrate the efficacy of this procedure. After studying binomial multifractal time series with the present and earlier approaches of detrending for comparison, we analyze the time series of averaged spin density in the 2D Ising model at the critical temperature, along with several experimental data sets possessing multifractal behavior. PMID:16383481

  8. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, Max

    1993-01-01

    This thesis provides the design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. The thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance relative not only to static load-balancing schemes but also to many adaptive methods.

  9. Quantifying Memory in Complex Physiological Time-Series

    PubMed Central

    Shirazi, Amir H.; Raoufy, Mohammad R.; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R.; Amodio, Piero; Jafari, G. Reza; Montagnese, Sara; Mani, Ali R.

    2013-01-01

    In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of “memory length” was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empirical evidence that rare fluctuations in cardio-respiratory time-series are ‘forgotten’ quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations. PMID:24039811

  10. Nonstationary time series prediction combined with slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, G.; Chen, X.

    2015-07-01

    Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when constructing the climate dynamics. This paper presents a new technique of obtaining the driving forces of a time series from the slow feature analysis (SFA) approach, and then introduces them into a predictive model to predict nonstationary time series. The basic theory of the technique is to consider the driving forces as state variables and to incorporate them into the predictive model. Experiments using a modified logistic time series and winter ozone data in Arosa, Switzerland, were conducted to test the model. The results showed improved prediction skills.
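The SFA step at the core of the technique can be sketched in a few lines of linear algebra: center and whiten the observations, then take the direction whose time derivative has the smallest variance. The example below is a simplified linear-SFA sketch on a synthetic two-channel mixture, not the authors' implementation; it recovers a slow hidden driver up to sign and scale.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hidden sources: a slow driving force and a fast oscillation.
n = 2000
t = np.arange(n)
slow_src = np.sin(2 * np.pi * t / n)       # slow driving force
fast_src = np.sin(2 * np.pi * t / 20)      # fast component
S = np.column_stack([slow_src, fast_src])

# Observed series: an unknown linear mixture of the sources.
A = np.array([[1.0, 0.7], [0.5, -1.2]])
X = S @ A

# Linear SFA: center, whiten, then pick the direction whose
# time derivative has the smallest variance (the "slowest" feature).
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(Xc.T @ Xc / len(Xc))
Z = Xc @ (evecs / np.sqrt(evals))          # whitened data
dZ = np.diff(Z, axis=0)
devals, devecs = np.linalg.eigh(dZ.T @ dZ / len(dZ))
slow_feature = Z @ devecs[:, 0]            # smallest derivative variance

# Up to sign and scale, the slowest feature tracks the hidden driver.
corr = abs(np.corrcoef(slow_feature, slow_src)[0, 1])
print(f"|correlation| with hidden driver: {corr:.3f}")
```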

  11. A mixed time series model of binomial counts

    NASA Astrophysics Data System (ADS)

    Khoo, Wooi Chen; Ong, Seng Huat

    2015-10-01

    Continuous-valued time series modelling has been an active research area in the past few decades. However, time series data in the form of correlated counts appear in many situations, such as counts of rainy days and download accesses. Therefore, the study of count data has recently become popular in time series modelling. This article introduces a new mixture model: a univariate non-negative stationary time series model with binomial marginal distribution, arising from the combination of the well-known binomial thinning and Pegram's operators. A brief review of important properties is given and the EM algorithm is applied for parameter estimation. A numerical study is presented to show the performance of the model. Finally, a potential real application is presented to illustrate the advantage of the new mixture model.
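A simplified special case conveys the flavour of the construction. The chain below uses only Pegram's operator (the paper's full model also incorporates binomial thinning, which we omit here): with probability phi the previous count is carried over, otherwise a fresh draw from the Binomial(m, p) marginal is taken, which preserves the marginal exactly and gives lag-1 autocorrelation phi.

```python
import numpy as np

rng = np.random.default_rng(11)

m, p, phi = 10, 0.3, 0.6   # Binomial(m, p) marginal, mixing weight phi
n = 20000

# Pegram's operator mixes the previous value with an independent draw
# from the marginal; this simplified chain keeps the Binomial(m, p)
# marginal exactly and has lag-1 autocorrelation phi.
x = np.empty(n, dtype=int)
x[0] = rng.binomial(m, p)
for t in range(1, n):
    if rng.random() < phi:
        x[t] = x[t - 1]                 # carry over (Pegram mixing)
    else:
        x[t] = rng.binomial(m, p)       # fresh draw from the marginal

print("sample mean:", x.mean(), "(theory:", m * p, ")")
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print("lag-1 autocorrelation:", round(acf1, 3), "(theory:", phi, ")")
```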

  12. The use of synthetic input sequences in time series modeling

    NASA Astrophysics Data System (ADS)

    de Oliveira, Dair José; Letellier, Christophe; Gomes, Murilo E. D.; Aguirre, Luis A.

    2008-08-01

    In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input, that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.

  13. Crop growth dynamics modeling using time-series satellite imagery

    NASA Astrophysics Data System (ADS)

    Zhao, Yu

    2014-11-01

    In modern agriculture, remote sensing technology plays an essential role in monitoring crop growth and predicting crop yield. Accurate and timely crop growth information is essential, particularly for large-scale farming. Because of the high cost and low availability of high-resolution satellite images such as RapidEye, we focus on time series of low-resolution satellite imagery. In this research, the NDVI curve, retrieved from MODIS 8-day 250 m surface reflectance images, was applied to monitor soybean yield. Conventional models and vegetation indices for yield prediction have difficulty describing the basic growth processes affecting yield component formation. In our research, a novel method is developed to model the Crop Growth Dynamics (CGD) and generate a CGD index describing soybean yield component formation. We analyze the standard growth stages of soybean, and the modelling involves two key calculation steps. The first is normalization of the NDVI-curve coordinates and division of the crop growth into the standard development stages using EAT (effective accumulated temperature). The second is modelling the biological growth at each development stage by analyzing the factors of yield component formation. The evaluation was performed through soybean yield prediction using the CGD index in the growth stage at which the whole dataset for modelling is available, and we obtained a precision of 88.5%, about 10% higher than the conventional method. The validation results showed that the prediction accuracy of our CGD modelling is satisfactory and that the method can be applied in practice for large-scale soybean yield monitoring.

  14. Time Series Analysis of Insar Data: Methods and Trends

    NASA Technical Reports Server (NTRS)

    Osmanoglu, Batuhan; Sunar, Filiz; Wdowinski, Shimon; Cano-Cabral, Enrique

    2015-01-01

    Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to solve for this ambiguity. These algorithms may employ different models for time series analysis, but they all generate first-order deformation rates, which can be compared to each other. However, there is no single algorithm that can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data using an example set of results for measuring subsidence rates in Mexico City.
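The wrap-around can be made concrete with a few lines of NumPy: interferometric phase is only observed modulo 2π, so displacements beyond half a wavelength are ambiguous until the phase is unwrapped. The wavelength below is an illustrative C-band value, and real spatio-temporal unwrapping of InSAR stacks is far harder than this 1-D toy.

```python
import numpy as np

wavelength = 0.056  # metres, an illustrative C-band radar wavelength

# True line-of-sight displacement ramp, larger than half a wavelength.
true_disp = np.linspace(0.0, 0.08, 200)

# Interferometric phase is proportional to displacement but is only
# observed wrapped into (-pi, pi].
true_phase = 4 * np.pi * true_disp / wavelength
wrapped = np.angle(np.exp(1j * true_phase))

# A naive conversion of wrapped phase back to displacement is ambiguous...
naive_disp = wrapped * wavelength / (4 * np.pi)

# ...while a simple 1-D unwrapping restores the ramp.
unwrapped = np.unwrap(wrapped)
recovered = unwrapped * wavelength / (4 * np.pi)

print("max naive error:", np.max(np.abs(naive_disp - true_disp)))
print("max unwrapped error:", np.max(np.abs(recovered - true_disp)))
```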

  15. Comparison of New and Old Sunspot Number Time Series

    NASA Astrophysics Data System (ADS)

    Cliver, Edward W.; Clette, Frédéric; Lefèvre, Laure; Svalgaard, Leif

    2016-05-01

    As a result of the Sunspot Number Workshops, five new sunspot series have recently been proposed: a revision of the original Wolf or international sunspot number (Lockwood et al., 2014), a backbone-based group sunspot number (Svalgaard and Schatten, 2016), a revised group number series that employs active day fractions (Usoskin et al., 2016), a provisional group sunspot number series (Cliver and Ling, 2016) that removes flaws in the normalization scheme for the original group sunspot number (Hoyt and Schatten, 1998), and a revised Wolf or international number (termed SN) published on the SILSO website as a replacement for the original Wolf number (Clette and Lefèvre, 2016; http://www.sidc.be/silso/datafiles). Despite quite different construction methods, the five new series agree reasonably well after about 1900. From 1750 to ~1875, however, the Lockwood et al. and Usoskin et al. time series are lower than the other three series. Analysis of the Hoyt and Schatten normalization factors used to scale secondary observers to their Royal Greenwich Observatory primary observer reveals a significant inhomogeneity spanning the divergence in ~1885 of the group number from the original Wolf number. In general, a correction factor time series, obtained by dividing an annual group count series by the corresponding yearly averages of raw group counts for all observers, can be used to assess the reliability of new sunspot number reconstructions.

  16. A method for detecting changes in long time series

    SciTech Connect

    Downing, D.J.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1995-09-01

    Modern scientific activities, both physical and computational, can result in time series of many thousands or even millions of data values. Here the authors describe a statistically motivated algorithm for quick screening of very long time series for the presence of potentially interesting but arbitrary changes. The basic data model is a stationary Gaussian stochastic process, and the approach to detecting a change is the comparison of two predictions of the series at a time point or contiguous collection of time points. One prediction is a "forecast," based on data from earlier times, while the other is a "backcast," based on data from later times. The statistic is the absolute value of the log-likelihood ratio for these two predictions, evaluated at the observed data. A conservative procedure is suggested for specifying critical values for the statistic under the null hypothesis of "no change".
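The forecast/backcast comparison can be sketched with window means standing in for the Gaussian-process predictions of the paper (a simplification of our own): predict each point once from earlier data and once from later data, and take the absolute log-likelihood ratio of the two Gaussian predictions at the observed value. The change point is where the two predictions disagree most.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stationary Gaussian series with an abrupt mean shift at t = 500.
n, sigma = 1000, 1.0
x = sigma * rng.standard_normal(n)
x[500:] += 4.0

# "Forecast" each point from the w samples before it, "backcast" it from
# the w samples after it (window means as simple Gaussian predictors),
# and take the absolute log-likelihood ratio of the two predictions.
w = 30
stat = np.zeros(n)
var_pred = sigma**2 * (1 + 1 / w)   # predictive variance with a window-mean predictor
for t in range(w, n - w):
    fore = x[t - w:t].mean()
    back = x[t + 1:t + 1 + w].mean()
    stat[t] = abs((x[t] - back) ** 2 - (x[t] - fore) ** 2) / (2 * var_pred)

detected = int(np.argmax(stat))
print("largest forecast/backcast disagreement at t =", detected)
```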

  17. Symplectic geometry spectrum regression for prediction of noisy time series

    NASA Astrophysics Data System (ADS)

    Xie, Hong-Bo; Dokos, Socrates; Sivakumar, Bellie; Mengersen, Kerrie

    2016-05-01

    We present the symplectic geometry spectrum regression (SGSR) technique as well as a regularized method based on SGSR for prediction of nonlinear time series. The main tool of analysis is the symplectic geometry spectrum analysis, which decomposes a time series into the sum of a small number of independent and interpretable components. The key to successful regularization is to damp higher order symplectic geometry spectrum components. The effectiveness of SGSR and its superiority over local approximation using ordinary least squares are demonstrated through prediction of two noisy synthetic chaotic time series (Lorenz and Rössler series), and then tested for prediction of three real-world data sets (Mississippi River flow data and electromyographic and mechanomyographic signals recorded from the human body).

  18. Symplectic geometry spectrum regression for prediction of noisy time series.

    PubMed

    Xie, Hong-Bo; Dokos, Socrates; Sivakumar, Bellie; Mengersen, Kerrie

    2016-05-01

    We present the symplectic geometry spectrum regression (SGSR) technique as well as a regularized method based on SGSR for prediction of nonlinear time series. The main tool of analysis is the symplectic geometry spectrum analysis, which decomposes a time series into the sum of a small number of independent and interpretable components. The key to successful regularization is to damp higher order symplectic geometry spectrum components. The effectiveness of SGSR and its superiority over local approximation using ordinary least squares are demonstrated through prediction of two noisy synthetic chaotic time series (Lorenz and Rössler series), and then tested for prediction of three real-world data sets (Mississippi River flow data and electromyographic and mechanomyographic signals recorded from the human body). PMID:27300890

  19. Time series analysis as a tool for karst water management

    NASA Astrophysics Data System (ADS)

    Fournier, Matthieu; Massei, Nicolas; Duran, Léa

    2015-04-01

    Karst hydrosystems are well known for their vulnerability to turbidity due to the complex and unique characteristics which make them very different from other aquifers. Moreover, many parameters can affect their functioning, which makes the characterization of their vulnerability difficult and requires statistical analyses. Time series analyses of turbidity, electrical conductivity and water discharge datasets, such as correlation and spectral analyses, have proven useful in improving our understanding of karst systems. However, the loss of information on time localization is a major drawback of these Fourier spectral methods; this problem has been overcome by the development of wavelet analysis (continuous or discrete) for hydrosystems, offering the possibility to better characterize the complex modalities of variation inherent to nonstationary processes. Nevertheless, in the wavelet transform the signal is decomposed into several continuous wavelet components, an assumption which may not hold for the local-in-time processes frequently observed in karst aquifers. More recently, a new approach associating empirical mode decomposition and the Hilbert transform was presented for hydrosystems. It allows an orthogonal decomposition of the analyzed signal and provides a more accurate estimation of changing variability scales across time for highly transient signals. This study aims to identify the natural and anthropogenic parameters which control the turbidity released at a well for drinking water supply. The well is located in the chalk karst aquifer near the Seine river, 40 km from the Seine estuary in the western Paris Basin. At this location, tidal variations greatly affect the level of the water in the Seine. Continuous wavelet analysis of the turbidity dataset has been used to decompose the turbidity released at the well into three components: i) the rain event periods, ii) the pumping periods and iii) the tidal range of the Seine river.
Time-domain reconstruction by inverse wavelet transform allows

  20. Exploratory Causal Analysis in Bivariate Time Series Data

    NASA Astrophysics Data System (ADS)

    McCracken, James M.

    Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, and data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools, including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and that the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between "voltage drives current" and "current drives voltage" as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools. A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data.

  1. Change-point detection in time-series data by relative density-ratio estimation.

    PubMed

    Liu, Song; Yamada, Makoto; Collier, Nigel; Sugiyama, Masashi

    2013-07-01

    The objective of change-point detection is to discover abrupt property changes lying behind time-series data. In this paper, we present a novel statistical change-point detection algorithm based on non-parametric divergence estimation between time-series samples from two retrospective segments. Our method uses the relative Pearson divergence as a divergence measure, and it is accurately and efficiently estimated by a method of direct density-ratio estimation. Through experiments on artificial and real-world datasets including human-activity sensing, speech, and Twitter messages, we demonstrate the usefulness of the proposed method. PMID:23500502

  2. Evaluation of Scaling Invariance Embedded in Short Time Series

    PubMed Central

    Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping

    2014-01-01

    Scaling invariance of time series has made great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length . Calculations with specified Hurst exponent values of show that by using the standard central moving average detrending procedure this method can evaluate the scaling exponents for short time series with negligible bias () and sharp confidence interval (standard deviation ). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. We emphasize that our core contribution is that, by means of the proposed method, one can precisely estimate Shannon entropy from limited records. PMID:25549356

  3. Detection of flood events in hydrological discharge time series

    NASA Astrophysics Data System (ADS)

    Seibert, S. P.; Ehret, U.

    2012-04-01

    The shortcomings of mean-squared-error (MSE) based distance metrics are well known (Beran 1999, Schaeffli & Gupta 2007), and the development of novel distance metrics (Pappenberger & Beven 2004, Ehret & Zehe 2011) and multi-criteria approaches enjoys increasing popularity (Reusser 2009, Gupta et al. 2009). Nevertheless, the hydrological community still lacks metrics which identify, and thus allow signature-based evaluations of, hydrological discharge time series. Signature-based information/evaluations are required wherever specific time series features, such as flood events, are of special concern. Calculation of event-based runoff coefficients or precise knowledge of flood event characteristics (such as the onset or duration of the rising limb, or the volume of the falling limb) are possible applications. The same applies to flood forecasting/simulation models. Directly comparing simulated and observed flood event features may reveal thorough insights into model dynamics. Compared to continuous space-and-time-aggregated distance metrics, event-based evaluations may provide answers such as the distributions of event characteristics or the percentage of the events which were actually reproduced by a hydrological model. They may also help to provide information on the simulation accuracy of small, medium and/or large events in terms of timing and magnitude. However, the number of approaches which expose time series features is small and their usage is limited to very specific questions (Merz & Blöschl 2009, Norbiato et al. 2009). We believe this is due to the following reasons: i) a generally accepted definition of the signature of interest is missing or difficult to obtain (in our case: what makes a flood event a flood event?) and/or ii) it is difficult to translate such a definition into an equation or (graphical) procedure which exposes the feature of interest in the discharge time series.
We reviewed approaches which detect event starts and/or ends in hydrological discharge time

  4. Homogenization of snow depth time series of the Trentino Province (North-East Italy)

    NASA Astrophysics Data System (ADS)

    Marcolini, Giorgia; Bellin, Alberto; Trenti, Alberto; Chiogna, Gabriele

    2015-04-01

    Snow depth and duration are significantly affected by small variations in temperature and atmospheric pressure, and hence represent valuable metrics to detect and quantify the ongoing effects of climate change in Alpine regions. However, long and accurate time series of snow depth measurements are rare. In this work, we present the snow depth dataset collected for the Trentino Province (North-East Italy). It consists of 65 station time series located between 900 m a.s.l. and 2900 m a.s.l., with temporal extents ranging from 2 to 70 years. The time series have been constructed by merging data from different sources, collected with different measurement instruments. The dataset has been homogenized using the Standard Normal Homogeneity Test (SNHT) in order to detect and correct breakpoints caused by changes in instrument location and other external effects unrelated to climate change. A few approaches have been selected for the construction of the reference time series used to detect the breakpoints in the tested time series (Alexandersson and Moberg 1997, Peterson and Easterling 1994). The selected methodologies provide consistent results and agree on the identification of the breakpoints, which are to a great extent independent of the methodology applied to construct the reference time series. The test detected 16 breakpoints in the time series, 14 of which have been confirmed by metadata. In summary, the SNHT was able to identify breakpoints in long-term snow depth time series, and the subsequent homogenization provided useful datasets to investigate the impact of climate change in the Southern Alps. References: Alexandersson, Hans, and Anders Moberg. "Homogenization of Swedish temperature data. Part I: Homogeneity test for linear trends." International Journal of Climatology 17.1 (1997): 25-34. Peterson, Thomas C., and David R. Easterling. "Creation of homogeneous composite climatological reference series
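The SNHT itself is compact: standardize the difference (or ratio) between candidate and reference series, then for each candidate breakpoint compare the mean level before and after it. Below is a minimal sketch on synthetic data with illustrative parameters, not the Trentino series.

```python
import numpy as np

rng = np.random.default_rng(9)

# Difference between a candidate snow-depth series and its reference,
# with a step introduced at year 40 (e.g., a station relocation).
n = 70
q = rng.standard_normal(n)
q[40:] += 1.5
z = (q - q.mean()) / q.std()   # standardized series

# SNHT statistic: for each candidate breakpoint k, weight the squared
# mean of the standardized series before and after k by segment length.
T = np.array([
    k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
    for k in range(1, n)
])
k_hat = 1 + int(np.argmax(T))
print("most likely breakpoint after year", k_hat, "; T_max =", round(T.max(), 1))
```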

  5. Statistical modelling of agrometeorological time series by exponential smoothing

    NASA Astrophysics Data System (ADS)

    Murat, Małgorzata; Malinowska, Iwona; Hoffmann, Holger; Baranowski, Piotr

    2016-01-01

    Meteorological time series are used in modelling agrophysical processes of the soil-plant-atmosphere system which determine plant growth and yield. Additionally, long-term meteorological series are used in climate change scenarios. Such studies often require forecasting or projection of meteorological variables, e.g., the projected occurrence of extreme events. The aim of the article was to determine the most suitable exponential smoothing models for forecasting air temperature, wind speed, and precipitation time series in Jokioinen (Finland), Dikopshof (Germany), Lleida (Spain), and Lublin (Poland). These series exhibit regular additive seasonality or non-seasonality without any trend, which is confirmed by their autocorrelation and partial autocorrelation functions. The most suitable models were identified by the smallest mean absolute error and the smallest root mean squared error.

  6. Stochastic modeling of hourly rainfall times series in Campania (Italy)

    NASA Astrophysics Data System (ADS)

    Giorgio, M.; Greco, R.

    2009-04-01

    Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which allows larger lead times. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX, and ARMAX (e.g. Salas [1992]). Such models give their best results when applied to autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, as point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this purpose are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool for implementing an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of the Campania Region civil
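The alternating-renewal structure can be simulated directly: dry intervals and storms alternate, each storm being a rectangular intensity pulse of random duration. The sketch below uses exponential durations and illustrative parameter values; it is a toy version of the idea, not the calibrated DRIP model for Campania.

```python
import numpy as np

rng = np.random.default_rng(13)

# DRIP-style alternating renewal sketch: dry spells and storms alternate,
# each storm a rectangular pulse of random duration and intensity.
mean_dry_h, mean_storm_h, mean_intensity = 60.0, 8.0, 2.0  # hours, hours, mm/h
hours = 5000
rain = np.zeros(hours)

t = 0
while t < hours:
    t += int(rng.exponential(mean_dry_h)) + 1          # dry interval
    dur = int(rng.exponential(mean_storm_h)) + 1       # storm duration
    intensity = rng.exponential(mean_intensity)        # pulse intensity (mm/h)
    rain[t:t + dur] = intensity                        # rectangular pulse
    t += dur

wet_fraction = float(np.mean(rain > 0))
print(f"wet fraction: {wet_fraction:.2f}, total rain: {rain.sum():.0f} mm")
```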

  7. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances. PMID:26764768

  8. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
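The empirical half of the compounding approach is easy to sketch: slice the signal into windows, estimate the local variance of each, and inspect their distribution; that same variance mixing is what pushes the long-horizon statistics away from Gaussian (kurtosis above 3). Below is a minimal NumPy illustration with an assumed sinusoidal volatility profile, not the fan-turbulence or FX data of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Nonstationary series: locally Gaussian, but the variance drifts in time.
n, win = 100_000, 500
local_sd = 0.5 + np.abs(np.sin(2 * np.pi * np.arange(n) * 3 / n))  # slow drift
x = local_sd * rng.standard_normal(n)

# Empirical side of the compounding approach: split the signal into
# windows and estimate the distribution of the local variances.
local_vars = x.reshape(-1, win).var(axis=1)
print("mean local variance:", round(float(local_vars.mean()), 3))

# The long-horizon sample kurtosis exceeds the Gaussian value of 3 because
# the sample statistics average over the time-dependent variances.
kurtosis = np.mean(x**4) / np.mean(x**2) ** 2
print("long-horizon kurtosis:", round(float(kurtosis), 2))
```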

  9. Liquid propellant rocket engine combustion simulation with a time-accurate CFD method

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Shang, H. M.; Liaw, Paul; Hutt, J.

    1993-01-01

    Time-accurate computational fluid dynamics (CFD) algorithms are among the basic requirements as an engineering or research tool for realistic simulations of transient combustion phenomena, such as combustion instability, transient start-up, etc., inside the rocket engine combustion chamber. A time-accurate pressure based method is employed in the FDNS code for combustion model development. This is in connection with other program development activities such as spray combustion model development and efficient finite-rate chemistry solution method implementation. In the present study, a second-order time-accurate time-marching scheme is employed. For better spatial resolutions near discontinuities (e.g., shocks, contact discontinuities), a 3rd-order accurate TVD scheme for modeling the convection terms is implemented in the FDNS code. Necessary modification to the predictor/multi-corrector solution algorithm in order to maintain time-accurate wave propagation is also investigated. Benchmark 1-D and multidimensional test cases, which include the classical shock tube wave propagation problems, resonant pipe test case, unsteady flow development of a blast tube test case, and H2/O2 rocket engine chamber combustion start-up transient simulation, etc., are investigated to validate and demonstrate the accuracy and robustness of the present numerical scheme and solution algorithm.

  10. Generalized Dynamic Factor Models for Mixed-Measurement Time Series

    PubMed Central

    Cui, Kai; Dunson, David B.

    2013-01-01

    In this article, we propose generalized Bayesian dynamic factor models for jointly modeling mixed-measurement time series. The framework allows mixed-scale measurements associated with each time series, with different measurements having different distributions in the exponential family conditionally on time-varying latent factor(s). Efficient Bayesian computational algorithms are developed for posterior inference on both the latent factors and model parameters, based on a Metropolis Hastings algorithm with adaptive proposals. The algorithm relies on a Greedy Density Kernel Approximation (GDKA) and parameter expansion with latent factor normalization. We tested the framework and algorithms in simulated studies and applied them to the analysis of intertwined credit and recovery risk for Moody’s rated firms from 1982–2008, illustrating the importance of jointly modeling mixed-measurement time series. The article has supplemental materials available online. PMID:24791133

  12. A refined fuzzy time series model for stock market forecasting

    NASA Astrophysics Data System (ADS)

    Jilani, Tahseen Ahmed; Burney, Syed Muhammad Aqil

    2008-05-01

    Time series models have been used to make predictions of stock prices, academic enrollments, weather, road accident casualties, etc. In this paper we present a simple time-variant fuzzy time series forecasting method. The proposed method uses a heuristic approach to define frequency-density-based partitions of the universe of discourse. We propose a fuzzy metric that exploits the frequency-density-based partitioning and uses a trend predictor to calculate the forecast. The new method is applied to forecasting the TAIEX and enrollments of the University of Alabama. It is shown that the proposed method works with higher accuracy than other fuzzy time series methods developed for forecasting the TAIEX and enrollments of the University of Alabama.

  13. Multiscale entropy analysis of complex physiologic time series.

    PubMed

    Costa, Madalena; Goldberger, Ary L; Peng, C-K

    2002-08-01

    There has been considerable interest in quantifying the complexity of physiologic time series, such as heart rate. However, traditional algorithms indicate higher complexity for certain pathologic processes associated with random outputs than for healthy dynamics exhibiting long-range correlations. This paradox may be due to the fact that conventional algorithms fail to account for the multiple time scales inherent in healthy physiologic dynamics. We introduce a method to calculate multiscale entropy (MSE) for complex time series. We find that MSE robustly separates healthy and pathologic groups and consistently yields higher values for simulated long-range correlated noise compared to uncorrelated noise. PMID:12190613
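    The MSE construction rests on two steps: coarse-graining the series at increasing scales, then computing sample entropy (SampEn) at each scale. Below is a minimal sketch of both steps; the tolerance r is held fixed at 0.15 times the original standard deviation, one common convention, and the result for white noise (entropy falling with scale) illustrates the behavior that distinguishes it from long-range correlated noise.

```python
import numpy as np

def coarse_grain(x, scale):
    """MSE coarse-graining: average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """SampEn: -log of the conditional probability that sequences matching
    for m points (Chebyshev distance < r) also match for m + 1 points."""
    def n_matches(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        return (d < r).sum() - len(templ)   # exclude self-matches
    return -np.log(n_matches(m + 1) / n_matches(m))

rng = np.random.default_rng(1)
white = rng.normal(size=1600)
r = 0.15 * white.std()   # tolerance fixed at scale 1, the usual MSE convention

# Coarse-grained white noise looks increasingly regular, so its SampEn
# falls with scale.
mse = [sample_entropy(coarse_grain(white, s), r=r) for s in (1, 2, 4, 8)]
```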

  14. Minimum entropy density method for the time series analysis

    NASA Astrophysics Data System (ADS)

    Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae

    2009-01-01

    The entropy density is an intuitive and powerful concept for studying the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, defined as the scale at which the uncertainty is minimized and hence the pattern is most clearly revealed. The MEDM is applied to the financial time series of the Standard and Poor's 500 index from February 1983 to April 2006. The temporal behavior of the structure scale is then obtained and analyzed in relation to the information delivery time and the efficient market hypothesis.

  15. Fuzzy time-series based on Fibonacci sequence for stock price forecasting

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia

    2007-07-01

    Time-series models have been utilized to make reasonably accurate predictions in the areas of stock price movements, academic enrollments, weather, etc. To improve the forecasting performance of fuzzy time-series models, this paper proposes a new model, which incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model and the weighted method of Yu's model. This paper employs 5 years of TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and 13 years of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. By comparing our forecasting performance with Chen's (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491) models, we conclude that the proposed model surpasses these conventional fuzzy time-series models in accuracy.

  16. Fast and Accurate Fourier Series Solutions to Gravitational Lensing by a General Family of Two-Power-Law Mass Distributions

    NASA Astrophysics Data System (ADS)

    Chae, Kyu-Hyun

    2002-04-01

    Fourier series solutions to the deflection and magnification by a family of three-dimensional cusped two-power-law ellipsoidal mass distributions are presented. The cusped two-power-law ellipsoidal mass distributions are characterized by inner and outer power-law radial indices and a break (or transition) radius. The model family includes mass models mimicking Jaffe, Hernquist, and η models and dark matter halo profiles from numerical simulations. The Fourier series solutions for the cusped two-power-law mass distributions are relatively simple and allow a very fast calculation, even for a chosen small fractional calculational error (e.g., 10^-5). These results will be particularly useful for studying lensed systems that provide a number of accurate lensing constraints and for systematic analyses of large numbers of lenses. Subroutines employing these results for the two-power-law model and the results by Chae, Khersonsky, & Turnshek for the generalized single-power-law mass model are made publicly available.

  17. Wavelet analysis for non-stationary, nonlinear time series

    NASA Astrophysics Data System (ADS)

    Schulte, Justin A.

    2016-08-01

    Methods for detecting and quantifying nonlinearities in nonstationary time series are introduced and developed. In particular, higher-order wavelet analysis was applied to an ideal time series and the quasi-biennial oscillation (QBO) time series. Multiple-testing problems inherent in wavelet analysis were addressed by controlling the false discovery rate. A new local autobicoherence spectrum facilitated the detection of local nonlinearities and the quantification of cycle geometry. The local autobicoherence spectrum of the QBO time series showed that the QBO time series contained a mode with a period of 28 months that was phase coupled to a harmonic with a period of 14 months. An additional nonlinearly interacting triad was found among modes with periods of 10, 16 and 26 months. Local biphase spectra determined that the nonlinear interactions were not quadratic and that the effect of the nonlinearities was to produce non-smoothly varying oscillations. The oscillations were found to be skewed so that negative QBO regimes were preferred, and also asymmetric in the sense that phase transitions between the easterly and westerly phases occurred more rapidly than those from westerly to easterly regimes.

  18. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A sensitive and objective time series analysis method based on the calculation of Mann-Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
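    The abstract is truncated before the normalization step is specified. A minimal sketch of the moving-window idea, with the standard large-sample normal approximation standing in for the Monte Carlo normalization, might look like:

```python
import numpy as np

def running_mw_z(x, win, base):
    """Mann-Whitney U of each moving window against a baseline sample,
    normalized to Z with the large-sample normal approximation
    (standing in for the abstract's Monte Carlo normalization)."""
    n1, n2 = win, len(base)
    mu = n1 * n2 / 2.0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = np.empty(len(x) - win + 1)
    for i in range(len(z)):
        w = x[i:i + win]
        # U = number of (window, baseline) pairs where the window value is larger
        u = (w[:, None] > base[None, :]).sum() + 0.5 * (w[:, None] == base[None, :]).sum()
        z[i] = (u - mu) / sigma
    return z

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(1, 1, 300)])  # level shift at 300
z = running_mw_z(x, win=50, base=x[:300])
```

    Windows drawn from the baseline regime give Z near zero, while windows after the level shift give strongly positive Z, which is the change signal such a running statistic is designed to expose.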

  19. Nonlinear Analysis of Surface EMG Time Series of Back Muscles

    NASA Astrophysics Data System (ADS)

    Dolton, Donald C.; Zurcher, Ulrich; Kaufman, Miron; Sung, Paul

    2004-10-01

    A nonlinear analysis of surface electromyography time series of subjects with and without low back pain is presented. The mean-square displacement and entropy show anomalous diffusive behavior in the intermediate time range 10 ms < t < 1 s. This behavior implies the presence of correlations in the signal. We discuss the shape of the power spectrum of the signal.

  20. Long-range correlations in time series generated by time-fractional diffusion: A numerical study

    NASA Astrophysics Data System (ADS)

    Barbieri, Davide; Vivoli, Alessandro

    2005-09-01

    Time series models showing power law tails in autocorrelation functions are common in econometrics. A special non-Markovian model for such kind of time series is provided by the random walk introduced by Gorenflo et al. as a discretization of time fractional diffusion. The time series so obtained are analyzed here from a numerical point of view in terms of autocorrelations and covariance matrices.

  1. MODIS Vegetation Indices time series improvement considering real acquisition dates

    NASA Astrophysics Data System (ADS)

    Testa, S.; Borgogno Mondino, E.

    2013-12-01

    Satellite Vegetation Indices (VI) time series images are widely used for the characterization of phenology, which requires high temporal accuracy of the satellite data. The present work is based on the MODerate resolution Imaging Spectroradiometer (MODIS) MOD13Q1 product - Vegetation Indices 16-Day L3 Global 250m, which is generated through a maximum value compositing process that reduces the number of cloudy pixels and excludes, when possible, off-nadir ones. Because of its 16-day compositing period, the distance between two adjacent-in-time values within each pixel NDVI time series can range from 1 to 32 days, which is not acceptable for phenological studies. Moreover, most of the available smoothing algorithms, which are widely used for phenology characterization, assume that data points are equidistant in time and contemporary over the image. The objective of this work was to assess the temporal features of NDVI time series over a test area, composed of Castanea sativa (chestnut) and Fagus sylvatica (beech) pure pixels within the Piemonte region in Northwestern Italy. Firstly, NDVI, Pixel Reliability (PR) and Composite Day of the Year (CDOY) data ranging from 2000 to 2011 were extracted from MOD13Q1 and the corresponding time series were generated (in further computations, 2000 was not considered since it is not complete because acquisition began in February and calibration is unreliable until October). Analysis of the CDOY time series (containing the actual reference date of each NDVI value) over the selected study areas showed NDVI values to be prevalently generated from data acquired at the centre of each 16-day period (the 9th day), at least constantly along the year. This leads to considering each original NDVI value as nominally placed at the centre of its 16-day reference period. 
Then, a new NDVI time series was generated: a) moving each NDVI value to its actual "acquisition" date, b) interpolating the obtained temporary time series through SPLINE functions, c) sampling such
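    Steps a)-c) can be sketched for a single hypothetical pixel. Everything below is synthetic (the dates, offsets and seasonal curve are invented for illustration), and linear interpolation stands in for the SPLINE functions used in the paper.

```python
import numpy as np

# Hypothetical single-pixel series: nominal 16-day composite periods with
# the actual acquisition day (CDOY) scattered inside each period, and a
# synthetic seasonal NDVI curve.
rng = np.random.default_rng(7)
nominal = np.arange(9, 365, 16)                        # centres of 16-day periods
cdoy = nominal + rng.integers(-7, 8, nominal.size)     # actual acquisition DOY
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * (cdoy - 120) / 365.0)

# a) place each NDVI value at its actual acquisition date,
# b) interpolate the irregular series, c) resample on the regular grid
order = np.argsort(cdoy)
ndvi_regular = np.interp(nominal, cdoy[order], ndvi[order])
```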

  2. Mining approximate periodic pattern in hydrological time series

    NASA Astrophysics Data System (ADS)

    Zhu, Y. L.; Li, S. J.; Bao, N. N.; Wan, D. S.

    2012-04-01

    There is a lot of information about the hidden laws of nature's evolution and the influence of human activities on the earth surface in long sequences of hydrological time series. Data mining technology can help find those hidden laws, such as flood frequency and abrupt change, which is useful for the decision support of hydrological prediction and flood control scheduling. The periodic nature of hydrological time series is important for trend forecasting of drought and flood and for hydraulic engineering planning. In hydrology, the full period analysis of hydrological time series has attracted a lot of attention, with methods such as the discrete periodogram, the simple partial wave method, Fourier analysis, maximum entropy spectral analysis and wavelet analysis. In fact, the hydrological process is influenced both by deterministic factors and by stochastic ones. For example, the tidal level is also affected by the moon circling the Earth, in addition to the Earth's revolution and rotation. Hence, there is some kind of approximate period hidden in the hydrological time series, sometimes also called the cryptic period. Recently, partial period mining, which originated in the data mining domain, has become a remedy for the traditional period analysis methods in hydrology, as it has a loose requirement on data integrity and continuity and can find partial periods in the time series. This paper is focused on partial period mining in hydrological time series. Based on asynchronous periodic patterns and partial period mining with a suffix tree, this paper proposes to mine multi-event asynchronous periodic patterns based on a modified suffix tree representation and traversal, and introduces a dynamic method for adjusting candidate period intervals, which avoids period omissions and waste of time and space. The experimental results on synthetic data and real water level data of the Yangtze River at Nanjing station indicate that this algorithm can discover hydrological

  3. Finding unstable periodic orbits from chaotic time series

    NASA Astrophysics Data System (ADS)

    Buhl, Michael

    Contained within a chaotic attractor is an infinite number of unstable periodic orbits (UPOs). Although these orbits have zero measure, they form a skeleton of the dynamics. However, they are difficult to find from an observed time series. In this thesis I present several methods to find UPOs from measured time series. In Chapter 2 I look at data measured from the stomatogastric system of the California spiny lobster as an example to find unstable periodic orbits. With this time series I use two methods. The first creates a local linear model of the dynamics and finds the periodic orbits of the model, and the second applies a linear transform to the model such that unstable orbits are stable. In addition, in this chapter I describe methods of filtering and embedding the chaotic time series. In Chapter 3 I look at a more complicated model system where the dynamics are described by delay differential equations. Now the future state of the system depends on both the current state and the state a time tau earlier. This makes the phase space of the system infinite dimensional. I present a method for modeling systems such as this and finding UPOs in the infinite dimensional phase space. In Chapters 4 and 5 I describe a new method to find UPOs using symbolic dynamics. This has many advantages over the methods described in Chapter 2; more orbits can be found using a smaller time series---even in the presence of noise. First in Chapter 4 I describe how the phase space can be partitioned so that we can use symbolic dynamics. Then in Chapter 5 I describe how the UPOs can be found from the symbolic time series. Here, I model the symbolic dynamics with a Markov chain, represented by a graph, and then the symbolic UPOs are found from the graph. These symbolic cycles can then be localized back in phase space.

  4. Entropy measure of stepwise component in GPS time series

    NASA Astrophysics Data System (ADS)

    Lyubushin, A. A.; Yakovlev, P. V.

    2016-01-01

    A new method for estimating the stepwise component in a time series is suggested. The method is based on the application of a pseudo-derivative. The advantage of this method lies in the simplicity of its practical implementation compared to the more common methods for identifying peculiarities in a time series against the noise. The need for automatic detection of jumps in a noised signal, and for introducing a quantitative measure of the stepwise behavior of the signal, arises in problems of GPS time series analysis. The interest in jumps in the mean level of the GPS signal is associated with the fact that they may reflect typical earthquakes or the so-called silent earthquakes. In this paper, we offer criteria for quantifying the degree of stepwise behavior of a noised time series. These criteria are based on calculating the entropy of an auxiliary series of averaged stepwise approximations, which are constructed with the use of pseudo-derivatives.
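    The paper's exact pseudo-derivative is not given in the abstract. One plausible reading, the difference between the right-hand and left-hand window means, is enough to sketch how a jump in the mean level becomes a sharp peak:

```python
import numpy as np

def pseudo_derivative(x, w):
    """Right-window mean minus left-window mean at each point -- one simple
    reading of a pseudo-derivative; jumps in the mean level show up as peaks
    while smooth noise averages out."""
    c = np.convolve(x, np.ones(w) / w, mode="valid")   # c[i] = mean of x[i:i+w]
    return c[w:] - c[:-w]

rng = np.random.default_rng(3)
# GPS-like signal: noise around a mean level that jumps at sample 500
x = rng.normal(0, 0.3, 1000)
x[500:] += 2.0

d = pseudo_derivative(x, w=30)
jump = int(np.argmax(np.abs(d))) + 30   # map the peak back to an index in x
```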

  5. Time series, correlation matrices and random matrix models

    SciTech Connect

    Vinayak; Seligman, Thomas H.

    2014-01-08

    In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role in describing a null hypothesis, or a minimum information hypothesis, for the description of a quantum system or subsystem. In the former case we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. Consequently, random correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.

  6. On fractal analysis of cardiac interbeat time series

    NASA Astrophysics Data System (ADS)

    Guzmán-Vargas, L.; Calleja-Quevedo, E.; Angulo-Brown, F.

    2003-09-01

    In recent years the complexity of cardiac beat-to-beat time series has been taken as an auxiliary tool to identify the health status of human hearts. Several methods have been employed to characterize the time series complexity. In this work we calculate the fractal dimension of interbeat time series arising from three groups: 10 young healthy persons, 8 elderly healthy persons and 10 patients with congestive heart failure. Our numerical results reflect evident differences in the dynamic behavior corresponding to each group. We discuss these results within the context of the neuroautonomic control of heart rate dynamics. We also propose a numerical simulation that reproduces aging effects of heart rate behavior.

  7. A multidisciplinary database for geophysical time series management

    NASA Astrophysics Data System (ADS)

    Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.

    2013-12-01

    The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of the period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), in order to acquire time series from different data sources and standardize them within a relational database. The operation of standardization provides the ability to perform operations, such as query and visualization, on many measures, synchronizing them using a common time scale. The proposed architecture follows a multiple layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the loader layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series on a specified time range, or to follow the real-time signal acquisition, according to a data access policy for the users.

  8. Dynamic Modeling of time series using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Nair, A. D.; Principe, Jose C.

    1995-12-01

    Because Artificial Neural Networks (ANN) have the ability to adapt to and learn complex topologies, they represent a new technology with which to explore dynamical systems. Multi-step prediction is used to capture the dynamics of the system that produced the time series. Multi-step prediction is implemented by a recurrent ANN trained with trajectory learning. Two separate memories are employed in training the ANN: the common tapped delay-line memory and the new gamma memory. This methodology has been applied to the time series of a white dwarf and to the quasar 3C 345.

  9. Application of nonlinear time series models to driven systems

    SciTech Connect

    Hunter, N.F. Jr.

    1990-01-01

    In our laboratory we have been engaged in an effort to model nonlinear systems using time series methods. Our objectives have been, first, to understand how the time series response of a nonlinear system unfolds as a function of the underlying state variables, second, to model the evolution of the state variables, and finally, to predict nonlinear system responses. We hope to address the relationship between model parameters and system parameters in the near future. Control of nonlinear systems based on experimentally derived parameters is also a planned topic of future research. 28 refs., 15 figs., 2 tabs.

  10. Scale dependence of the directional relationships between coupled time series

    NASA Astrophysics Data System (ADS)

    Shirazi, Amir Hossein; Aghamohammadi, Cina; Anvari, Mehrnaz; Bahraminasab, Alireza; Rahimi Tabar, M. Reza; Peinke, Joachim; Sahimi, Muhammad; Marsili, Matteo

    2013-02-01

    Using the cross-correlation of the wavelet transformation, we propose a general method of studying the scale dependence of the direction of coupling for coupled time series. The method is first demonstrated by applying it to coupled van der Pol forced oscillators and coupled nonlinear stochastic equations. We then apply the method to the analysis of the log-return time series of the stock values of the IBM and General Electric (GE) companies. Our analysis indicates that, on average, IBM stocks react earlier to possible common sector price movements than those of GE.
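    A simplified sketch of the idea, with a crude difference-of-moving-averages band-pass standing in for the wavelet transform (so the scale dependence is only approximate), recovers the lead-lag direction from synthetic coupled series:

```python
import numpy as np

def detail(x, s):
    """Crude band-pass at scale s (difference of moving averages), a
    stand-in for wavelet detail coefficients."""
    def ma(v, w):
        return np.convolve(v, np.ones(w) / w, mode="same")
    return ma(x, s) - ma(x, 2 * s)

def lead_lag(x, y, s, max_lag=20):
    """Lag maximizing corr(detail(x)[t], detail(y)[t+k]); a positive
    result means x leads y at scale s."""
    dx, dy = detail(x, s), detail(y, s)
    dx = (dx - dx.mean()) / dx.std()
    dy = (dy - dy.mean()) / dy.std()
    lags = list(range(-max_lag, max_lag + 1))
    cc = [np.mean(dx[:len(dx) - k] * dy[k:]) if k >= 0
          else np.mean(dx[-k:] * dy[:len(dy) + k]) for k in lags]
    return lags[int(np.argmax(cc))]

rng = np.random.default_rng(4)
driver = np.cumsum(rng.normal(size=3000))            # common price-like driver
x = driver + rng.normal(0, 0.5, 3000)                # reacts immediately
y = np.roll(driver, 5) + rng.normal(0, 0.5, 3000)    # reacts 5 samples later

lag = lead_lag(x, y, s=8)
```

    Here `lag` comes out close to +5: the series that reacts first to the common driver is identified as the leader, which is the kind of conclusion the paper draws for IBM versus GE.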

  11. Scaling analysis of multi-variate intermittent time series

    NASA Astrophysics Data System (ADS)

    Kitt, Robert; Kalda, Jaan

    2005-08-01

    The scaling properties of the time series of asset prices and trading volumes of stock markets are analysed. It is shown that, similar to the asset prices, the trading volume data obey a multi-scaling length distribution of low-variability periods. In the case of asset prices, such scaling behaviour can be used for risk forecasts: the probability of observing a large price movement the next day is (super-universally) inversely proportional to the length of the ongoing low-variability period. Finally, a method is devised for a multi-factor scaling analysis. We apply the simplest, two-factor model to equity index and trading volume time series.

  12. Adaptive median filtering for preprocessing of time series measurements

    NASA Technical Reports Server (NTRS)

    Paunonen, Matti

    1993-01-01

    A median (L1-norm) filtering program using polynomials was developed. This program was used in automatic recycling data screening. Additionally, a special adaptive program to work with asymmetric distributions was developed. Examples of adaptive median filtering of satellite laser range observations and TV satellite time measurements are given. The program proved to be versatile and time saving in data screening of time series measurements.
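    A minimal sketch of recycled screening in this spirit is shown below; note it uses ordinary least-squares polynomial fits with a MAD-based rejection threshold, where the original program used an L1 (median) fit, and all data are synthetic.

```python
import numpy as np

def median_screen(t, y, deg=2, k=3.0, iters=5):
    """Recycled screening: fit a polynomial (ordinary least squares here;
    the original used an L1/median fit), flag points whose residual exceeds
    k robust sigmas (MAD-based), refit on the kept points, and repeat."""
    keep = np.ones(len(y), dtype=bool)
    for _ in range(iters):
        coef = np.polyfit(t[keep], y[keep], deg)
        resid = y - np.polyval(coef, t)
        mad = np.median(np.abs(resid[keep] - np.median(resid[keep])))
        sigma = 1.4826 * mad            # MAD -> sigma for Gaussian noise
        new_keep = np.abs(resid) < k * sigma
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return keep

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 400)
y = 0.5 * t**2 - t + rng.normal(0, 0.2, 400)          # smooth trend + noise
bad = rng.choice(400, 20, replace=False)              # inject 20 spikes
y[bad] += rng.choice([-1.0, 1.0], 20) * rng.uniform(3, 6, 20)

keep = median_screen(t, y)
```

    The robust (MAD-based) scale estimate is what makes the screen usable with the asymmetric, outlier-contaminated residual distributions the abstract mentions, since the outliers barely move the median.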

  13. Kālī: Time series data modeler

    NASA Astrophysics Data System (ADS)

    Kasliwal, Vishal P.

    2016-07-01

    The fully parallelized and vectorized software package Kālī models time series data using various stochastic processes such as continuous-time ARMA (C-ARMA) processes and uses Bayesian Markov Chain Monte-Carlo (MCMC) for inference on a stochastic light curve. Kālī is written in C++ with Python language bindings for ease of use. Kālī is named jointly after the Hindu goddess of time, change, and power and also as an acronym for KArma LIbrary.

  14. Estimating the largest Lyapunov exponent and noise level from chaotic time series.

    PubMed

    Yao, Tian-Liang; Liu, Hai-Feng; Xu, Jian-Liang; Li, Wei-Feng

    2012-09-01

    A novel method for simultaneously estimating the largest Lyapunov exponent (LLE) and noise level (NL) from a noisy chaotic time series is presented in this paper. We investigate the influence of noise on the average distance between pairs of points in an embedding phase space and provide a rescaled formula for calculating the LLE when the time series is contaminated with noise. Our algorithm is based on this formula and on the invariance of the LLE across embedding phase spaces of different dimension. With numerical simulation, we find that the proposed method provides a reasonable estimate of the LLE and NL when the NL is less than 10% of the signal content. The comparison with the Kantz algorithm shows that our method gives more accurate LLE results for noisy time series. Furthermore, our method is not sensitive to the distribution of the noise. PMID:23020441
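    The authors' rescaled-distance formula is not reproduced in the abstract. A standard Rosenstein-style baseline for the LLE, the kind of estimator such methods are compared against, can be sketched as follows (noise-free case, logistic map):

```python
import numpy as np

def rosenstein_lle(x, m=3, tau=1, theiler=10, t_max=12):
    """Rosenstein-style largest-Lyapunov-exponent estimate: follow each
    point's nearest neighbor forward in time and fit the initial slope of
    the mean log separation."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    for i in range(n):                      # exclude temporal neighbors
        d[i, max(0, i - theiler):i + theiler + 1] = np.inf
    nn = d.argmin(axis=1)
    div = []
    for t in range(t_max):
        sep = [np.linalg.norm(emb[i + t] - emb[nn[i] + t])
               for i in range(n - t) if nn[i] < n - t]
        sep = [s for s in sep if s > 0]
        div.append(np.mean(np.log(sep)))    # mean log separation after t steps
    steps = np.arange(5)                    # fit the initial, pre-saturation region
    return np.polyfit(steps, np.array(div)[:5], 1)[0]

# Logistic map at r = 4, whose largest Lyapunov exponent is ln 2 ~ 0.693
x = np.empty(1200)
x[0] = 0.3
for i in range(len(x) - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

lle = rosenstein_lle(x)
```

    With additive noise this baseline degrades, because the smallest neighbor separations become noise-dominated; that is precisely the regime the paper's rescaled formula is designed to handle.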

  15. Ultrasound-guided characterization of interstitial ablated tissue using RF time series: feasibility study.

    PubMed

    Imani, Farhad; Abolmaesumi, Purang; Wu, Mark Z; Lasso, Andras; Burdette, Everett C; Ghoshal, Goutam; Heffter, Tamas; Williams, Emery; Neubauer, Paul; Fichtinger, Gabor; Mousavi, Parvin

    2013-06-01

    This paper presents the results of a feasibility study to demonstrate the application of ultrasound RF time series imaging to accurately differentiate ablated and nonablated tissue. For 12 ex vivo and two in situ tissue samples, RF ultrasound signals are acquired prior to, and following, high-intensity ultrasound ablation. Spatial and temporal features of these signals are used to characterize ablated and nonablated tissue in a supervised-learning framework. In cross-validation evaluation, a subset of four features extracted from RF time series produce a classification accuracy of 84.5%, an area under ROC curve of 0.91 for ex vivo data, and an accuracy of 85% for in situ data. Ultrasound RF time series is a promising approach for characterizing ablated tissue. PMID:23335657

  16. The study of coastal groundwater depth and salinity variation using time-series analysis

    SciTech Connect

    Tularam, G.A. . E-mail: a.tularam@griffith.edu.au; Keeler, H.P. . E-mail: p.keeler@ms.unimelb.edu.au

    2006-10-15

    A time-series approach is applied to study and model tidal intrusion into coastal aquifers. The authors examine the effect of tidal behaviour on groundwater level and salinity intrusion for the coastal Brisbane region using auto-correlation and spectral analyses. The results show a close relationship between tidal behaviour, groundwater depth and salinity levels for the Brisbane coast. The known effect can be quantified and incorporated into new models in order to more accurately map salinity intrusion into coastal groundwater table.

  17. Learning time series evolution by unsupervised extraction of correlations

    SciTech Connect

    Deco, G.; Schuermann, B. )

    1995-03-01

    As a consequence, we are able to model chaotic and nonchaotic time series. Furthermore, one critical point in modeling time series is the determination of the dimension of the embedding vector used, i.e., the number of components of the past that are needed to predict the future. With this method we can detect the embedding dimension by extracting the influence of the past on the future, i.e., the correlation of remote past and future. Optimal embedding dimensions are obtained for the Henon map and the Mackey-Glass series. When noisy data corrupted by colored noise are used, a model is still possible. The noise will then be decorrelated by the network. In the case of modeling a chemical reaction, the most natural architecture that conserves the volume is a symplectic network which describes a system that conserves the entropy and therefore the transmitted information.

  18. A multiscale statistical model for time series forecasting

    NASA Astrophysics Data System (ADS)

    Wang, W.; Pollak, I.

    2007-02-01

    We propose a stochastic grammar model for random-walk-like time series that has features at several temporal scales. We use a tree structure to model these multiscale features. The inside-outside algorithm is used to estimate the model parameters. We develop an algorithm to forecast the sign of the first difference of a time series. We illustrate the algorithm using log-price series of several stocks and compare with linear prediction and a neural network approach. We furthermore illustrate our algorithm using synthetic data and show that it significantly outperforms both the linear predictor and the neural network. The construction of our synthetic data indicates what types of signals our algorithm is well suited for.

  19. Segmentation of time series with long-range fractal correlations

    PubMed Central

    Bernaola-Galván, P.; Oliver, J.L.; Hackenberg, M.; Coronado, A.V.; Ivanov, P.Ch.; Carpena, P.

    2012-01-01

    Segmentation is a standard method of data analysis to identify change-points dividing a nonstationary time series into homogeneous segments. However, for long-range fractal correlated series, most of the segmentation techniques detect spurious change-points which are simply due to the heterogeneities induced by the correlations and not to real nonstationarities. To avoid this oversegmentation, we present a segmentation algorithm which takes as a reference for homogeneity, instead of a random i.i.d. series, a correlated series modeled by a fractional noise with the same degree of correlations as the series to be segmented. We apply our algorithm to artificial series with long-range correlations and show that it systematically detects only the change-points produced by real nonstationarities and not those created by the correlations of the signal. Further, we apply the method to the sequence of the long arm of human chromosome 21, which is known to have long-range fractal correlations. We obtain only three segments that clearly correspond to the three regions of different G + C composition revealed by means of a multi-scale wavelet plot. Similar results have been obtained when segmenting all human chromosome sequences, showing the existence of previously unknown huge compositional superstructures in the human genome. PMID:23645997
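    The segmentation idea can be sketched with the classical maximum-t recursive split. Note that the hard-coded significance threshold below is a placeholder for the paper's key contribution: calibrating that threshold against correlated (fractional) noise instead of i.i.d. noise, to avoid oversegmentation.

```python
import numpy as np

def max_t_split(x, margin=25):
    """Scan every split position and return the one maximizing the t
    statistic between the left and right means."""
    n = len(x)
    best_i, best_t = 0, 0.0
    for i in range(margin, n - margin):
        l, r = x[:i], x[i:]
        pooled = np.sqrt(((i - 1) * l.var(ddof=1) + (n - i - 1) * r.var(ddof=1)) / (n - 2))
        t = abs(l.mean() - r.mean()) / (pooled * np.sqrt(1.0 / i + 1.0 / (n - i)))
        if t > best_t:
            best_i, best_t = i, t
    return best_i, best_t

def segment(x, offset=0, t_crit=10.0, out=None):
    """Recursive maximum-t segmentation.  The fixed t_crit is a placeholder
    for the paper's reference distribution under fractional noise."""
    if out is None:
        out = []
    if len(x) < 60:
        return out
    i, t = max_t_split(x)
    if t > t_crit:
        out.append(offset + i)
        segment(x[:i], offset, t_crit, out)
        segment(x[i:], offset + i, t_crit, out)
    return out

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0.0, 1, 400),
                    rng.normal(1.5, 1, 400),
                    rng.normal(-0.5, 1, 400)])
change_points = sorted(segment(x))
```

    On this i.i.d. example the two true change-points are recovered; on a long-range correlated signal the same fixed threshold would fire on spurious heterogeneities, which is the failure mode the paper's fractional-noise reference eliminates.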

  20. A time-accurate implicit method for chemical non-equilibrium flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

A new time accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from M ≤ 10^(-10) to supersonic speeds.

  1. Ocean time-series near Bermuda: Hydrostation S and the US JGOFS Bermuda Atlantic time-series study

    NASA Technical Reports Server (NTRS)

    Michaels, Anthony F.; Knap, Anthony H.

    1992-01-01

    Bermuda is the site of two ocean time-series programs. At Hydrostation S, the ongoing biweekly profiles of temperature, salinity and oxygen now span 37 years. This is one of the longest open-ocean time-series data sets and provides a view of decadal scale variability in ocean processes. In 1988, the U.S. JGOFS Bermuda Atlantic Time-series Study began a wide range of measurements at a frequency of 14-18 cruises each year to understand temporal variability in ocean biogeochemistry. On each cruise, the data range from chemical analyses of discrete water samples to data from electronic packages of hydrographic and optics sensors. In addition, a range of biological and geochemical rate measurements are conducted that integrate over time-periods of minutes to days. This sampling strategy yields a reasonable resolution of the major seasonal patterns and of decadal scale variability. The Sargasso Sea also has a variety of episodic production events on scales of days to weeks and these are only poorly resolved. In addition, there is a substantial amount of mesoscale variability in this region and some of the perceived temporal patterns are caused by the intersection of the biweekly sampling with the natural spatial variability. In the Bermuda time-series programs, we have added a series of additional cruises to begin to assess these other sources of variation and their impacts on the interpretation of the main time-series record. However, the adequate resolution of higher frequency temporal patterns will probably require the introduction of new sampling strategies and some emerging technologies such as biogeochemical moorings and autonomous underwater vehicles.

  2. Complexity analysis of the turbulent environmental fluid flow time series

    NASA Astrophysics Data System (ADS)

    Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.

    2014-02-01

    We have used the Kolmogorov complexities, sample and permutation entropies to quantify the randomness degree in river flow time series of two mountain rivers in Bosnia and Herzegovina, representing the turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series from two rivers (the Miljacka and the Bosnia) in the mountain part of their flow and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv Algorithm (LZA) (lower-KLL and upper-KLU), sample entropy (SE) and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE and PE values in two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have explored the sensitivity of considered measures in dependence on the length of time series. In addition, we have divided the period 1926-1990 into three subintervals: (a) 1926-1945, (b) 1946-1965, (c) 1966-1990, and calculated the KLL, KLU, SE, PE values for the various time series in these subintervals. It is found that during the period 1946-1965, there is a decrease in their complexities, and corresponding changes in the SE and PE, in comparison to the period 1926-1990. This complexity loss may be primarily attributed to (i) human interventions, after the Second World War, on these two rivers because of their use for water consumption and (ii) climate change in recent times.
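Of the complexity measures named in this abstract, permutation entropy is the simplest to reproduce. A minimal NumPy sketch (not the authors' code; window and order choices are illustrative assumptions):

```python
import math

import numpy as np

def permutation_entropy(series, order=3, normalize=True):
    """Permutation entropy (PE): Shannon entropy of ordinal patterns."""
    x = np.asarray(series, dtype=float)
    n = len(x) - order + 1
    counts = {}
    for i in range(n):
        # ordinal pattern = ranking of the values in the sliding window
        pattern = tuple(np.argsort(x[i:i + order]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n
    h = -np.sum(probs * np.log(probs))
    if normalize:
        h /= math.log(math.factorial(order))  # max entropy is log(order!)
    return float(h)
```

A strictly monotonic series produces a single ordinal pattern and hence PE = 0, while a random series approaches the maximum of 1, which is the qualitative contrast the abstract exploits between river-flow regimes.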

  3. Estimating The Seasonal Components In Hydrological Time Series

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Montanari, A.

The hydrological safety of dams is usually evaluated by analysing historical data of river flows into the reservoir. When only short observed records are available, one is often forced to generate synthetic flow series in order to verify the safety of the dam with respect to more equally likely hydrological scenarios. To this end, stochastic processes are frequently applied and a key point of many of the simulation procedures which can be used is the estimation of the seasonal periodicities that may be present in the analysed time series. Such seasonalities often have to be removed from the historical record before performing the estimation of the parameters of the simulation model. A usual procedure is to estimate and subsequently eliminate the periodicities which may be present in the mean and variance of the considered time series. This study analyses the performances of various techniques for the estimation of the seasonal components which may affect the statistics of hydrological time series observed at fine time step. The scientific literature proposed different approaches to this end, but nevertheless their application to records collected at fine time step is often difficult, due to the high variability of the data and the major significance of measurement errors which may occur during extreme events. This study aims at comparing some of the techniques proposed by the literature with a simple approach, that is obtained by modifying the well known STL method. The proposed approach is tested by estimating the periodical components of some synthetic time series and applied by analysing the daily river flows of two major rivers located in Italy.

  4. Handbook for Using the Intensive Time-Series Design.

    ERIC Educational Resources Information Center

    Mayer, Victor J.; Monk, John S.

    Work on the development of the intensive time-series design was initiated because of the dissatisfaction with existing research designs. This dissatisfaction resulted from the paucity of data obtained from designs such as the pre-post and randomized posttest-only designs. All have the common characteristic of yielding data from only one or two…

  5. IDENTIFICATION OF REGIME SHIFTS IN TIME SERIES USING NEIGHBORHOOD STATISTICS

    EPA Science Inventory

    The identification of alternative dynamic regimes in ecological systems requires several lines of evidence. Previous work on time series analysis of dynamic regimes includes mainly model-fitting methods. We introduce two methods that do not use models. These approaches use state-...

  6. Time Series Data Visualization in World Wide Telescope

    NASA Astrophysics Data System (ADS)

    Fay, J.

WorldWide Telescope provides a rich set of time series visualizations for both archival and real-time data. WWT consists of both interactive desktop tools for immersive visualization and HTML5 web-based controls that can be utilized in customized web pages. WWT supports a range of display options including full dome, power walls, stereo and virtual reality headsets.

  7. The Design of Time-Series Comparisons under Resource Constraints.

    ERIC Educational Resources Information Center

    Willemain, Thomas R.; Hartunian, Nelson S.

    1982-01-01

    Two methods for dividing an interrupted time-series study between baseline and experimental phases when study resources are limited are compared. In fixed designs, the baseline duration is predetermined. In flexible designs the baseline duration is contingent on remaining resources and the match of results to prior expectations of the evaluator.…

  8. Synchronization-based parameter estimation from time series

    NASA Astrophysics Data System (ADS)

    Parlitz, U.; Junge, L.; Kocarev, L.

    1996-12-01

    The parameters of a given (chaotic) dynamical model are estimated from scalar time series by adapting a computer model until it synchronizes with the given data. This parameter identification method is applied to numerically generated and experimental data from Chua's circuit.
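The abstract does not specify the adaptation rule, so the following sketch substitutes a simple grid search over a logistic-map parameter for the authors' adaptive scheme: a response copy of the model is driven by the observed series, and the parameter minimizing the synchronization error is selected. All numbers here are invented for illustration.

```python
import numpy as np

def logistic(x, r):
    return r * x * (1.0 - x)

def sync_error(data, r, eps=0.8, transient=100):
    """Mean error between a driven response copy and the observed series.
    The response is fed a blend of its own state and the observation;
    it synchronizes (error -> 0) only when r matches the true parameter."""
    y, errs = 0.3, []
    for i, x in enumerate(data[:-1]):
        y = logistic((1.0 - eps) * y + eps * x, r)
        if i >= transient:
            errs.append(abs(y - data[i + 1]))
    return float(np.mean(errs))

# "observed" scalar time series from a logistic map with unknown r = 3.9
x, data = 0.123, []
for _ in range(1200):
    x = logistic(x, 3.9)
    data.append(x)

rs = np.arange(3.5, 4.0, 0.005)
r_hat = rs[np.argmin([sync_error(data, r) for r in rs])]
```

With the coupling strength eps = 0.8 the response map is a contraction toward the data, so the synchronization error vanishes only at the true parameter value.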

  9. Ultrasound RF time series for classification of breast lesions.

    PubMed

    Uniyal, Nishant; Eskandari, Hani; Abolmaesumi, Purang; Sojoudi, Samira; Gordon, Paula; Warren, Linda; Rohling, Robert N; Salcudean, Septimiu E; Moradi, Mehdi

    2015-02-01

This work reports the use of ultrasound radio frequency (RF) time series analysis as a method for ultrasound-based classification of malignant breast lesions. The RF time series method is versatile and requires only a few seconds of raw ultrasound data with no need for additional instrumentation. Using the RF time series features and a machine learning framework, we have generated malignancy maps, from the estimated cancer likelihood, for decision support in biopsy recommendation. These maps depict the likelihood of malignancy for regions of size 1 mm^2 within the suspicious lesions. We report an area under the receiver operating characteristic curve of 0.86 (95% confidence interval [CI]: 0.84-0.90) using support vector machines and 0.81 (95% CI: 0.78-0.85) using the Random Forests classification algorithm, on 22 subjects with leave-one-subject-out cross-validation. Changing the classification method yielded consistent results, which indicates the robustness of this tissue typing method. The findings of this report suggest that ultrasound RF time series, along with the developed machine learning framework, can help in differentiating malignant from benign breast lesions, subsequently reducing the number of unnecessary biopsies after mammography screening. PMID:25350925
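The evaluation protocol described above (leave-one-subject-out cross-validation with an SVM and pooled AUC) can be sketched with scikit-learn. The features below are synthetic stand-ins, since the RF time-series features themselves are not given in the abstract:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

# Synthetic stand-in: one row per region of interest, a subject id per
# row, and a benign(0)/malignant(1) label; 22 subjects as in the paper.
rng = np.random.default_rng(0)
n = 220
subjects = np.repeat(np.arange(22), 10)          # 10 regions per subject
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 6)) + y[:, None] * 1.5   # class-separated features

# Leave-one-subject-out: every region of the held-out subject is scored
# by a model trained on the remaining 21 subjects.
scores = np.zeros(n)
for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = SVC(probability=True).fit(X[train], y[train])
    scores[test] = clf.predict_proba(X[test])[:, 1]

auc = roc_auc_score(y, scores)
```

Grouping the split by subject, rather than by region, is what prevents optimistic bias from correlated samples of the same patient appearing in both folds.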

  10. The Relationship of Negative Affect and Thought: Time Series Analyses.

    ERIC Educational Resources Information Center

    Rubin, Amy; And Others

    In recent years, the relationship between moods and thoughts has been the focus of much theorizing and some empirical work. A study was undertaken to examine the intraindividual relationship between negative affect and negative thoughts using a Box-Jenkins time series analysis. College students (N=33) completed a measure of negative mood and…

  11. Analysis of Complex Intervention Effects in Time-Series Experiments.

    ERIC Educational Resources Information Center

    Bower, Cathleen

    An iterative least squares procedure for analyzing the effect of various kinds of intervention in time-series data is described. There are numerous applications of this design in economics, education, and psychology, although until recently, no appropriate analysis techniques had been developed to deal with the model adequately. This paper…

  12. ADAPTIVE DATA ANALYSIS OF COMPLEX FLUCTUATIONS IN PHYSIOLOGIC TIME SERIES

    PubMed Central

    PENG, C.-K.; COSTA, MADALENA; GOLDBERGER, ARY L.

    2009-01-01

    We introduce a generic framework of dynamical complexity to understand and quantify fluctuations of physiologic time series. In particular, we discuss the importance of applying adaptive data analysis techniques, such as the empirical mode decomposition algorithm, to address the challenges of nonlinearity and nonstationarity that are typically exhibited in biological fluctuations. PMID:20041035

  13. A Time-Series Analysis of Hispanic Unemployment.

    ERIC Educational Resources Information Center

    Defreitas, Gregory

    1986-01-01

    This study undertakes the first systematic time-series research on the cyclical patterns and principal determinants of Hispanic joblessness in the United States. The principal findings indicate that Hispanics tend to bear a disproportionate share of increases in unemployment during recessions. (Author/CT)

  14. What Makes a Coursebook Series Stand the Test of Time?

    ERIC Educational Resources Information Center

    Illes, Eva

    2009-01-01

    Intriguingly, at a time when the ELT market is inundated with state-of-the-art coursebooks teaching modern-day English, a 30-year-old series enjoys continuing popularity in some secondary schools in Hungary. Why would teachers, several of whom are school-based teacher-mentors in the vanguard of the profession, purposefully choose materials which…

  15. A Method for Comparing Multivariate Time Series with Different Dimensions

    PubMed Central

    Tapinos, Avraam; Mendes, Pedro

    2013-01-01

    In many situations it is desirable to compare dynamical systems based on their behavior. Similarity of behavior often implies similarity of internal mechanisms or dependency on common extrinsic factors. While there are widely used methods for comparing univariate time series, most dynamical systems are characterized by multivariate time series. Yet, comparison of multivariate time series has been limited to cases where they share a common dimensionality. A semi-metric is a distance function that has the properties of non-negativity, symmetry and reflexivity, but not sub-additivity. Here we develop a semi-metric – SMETS – that can be used for comparing groups of time series that may have different dimensions. To demonstrate its utility, the method is applied to dynamic models of biochemical networks and to portfolios of shares. The former is an example of a case where the dependencies between system variables are known, while in the latter the system is treated (and behaves) as a black box. PMID:23393554

  16. Daily time series evapotranspiration maps for Oklahoma and Texas panhandle

    Technology Transfer Automated Retrieval System (TEKTRAN)

Evapotranspiration (ET) is an important process in an ecosystem's water budget and is closely linked to its productivity. Therefore, regional scale daily time series ET maps developed at high and medium resolutions have large utility in studying the carbon-energy-water nexus and managing water resources. ...

  17. The application of the transfer entropy to gappy time series

    NASA Astrophysics Data System (ADS)

    Kulp, C. W.; Tracy, E. R.

    2009-03-01

    The application of the transfer entropy to gappy symbolic time series is discussed. Although the transfer entropy can fail to correctly identify the drive-response relationship, it is able to robustly detect phase relationships. Hence, it might still be of use in applications requiring the detection of changes in these relationships.

  18. Identification of human operator performance models utilizing time series analysis

    NASA Technical Reports Server (NTRS)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.

  19. Time Series Analysis for the Drac River Basin (france)

    NASA Astrophysics Data System (ADS)

    Parra-Castro, K.; Donado-Garzon, L. D.; Rodriguez, E.

    2013-12-01

This research analyzes discharge time-series at four stream flow gage stations located in the Drac River basin in France: (i) Guinguette Naturelle, (ii) Infernet, (iii) Parassat and (iv) Villard Loubière. Time-series models, namely linear regression (single and multiple) and the MORDOR model, were implemented to analyze the behavior of the Drac River from 1969 to 2010. Twelve different models were implemented to assess the daily and monthly discharge time-series for the four gage stations. Moreover, five selection criteria were used to analyze the models: average division, variance division, the coefficient R2, the Kling-Gupta Efficiency (KGE) and the Nash number. The models were selected to obtain the strongest models with an important confidence level, according to the best correlation between the gage-station time-series and the best-fitting models. Four of the twelve models were selected: two for the station Guinguette Naturelle, one for the station Infernet and one for the station Villard Loubière; the R2 coefficients achieved were 0.87, 0.95, 0.85 and 0.87 respectively. Consequently, both confidence levels (the modeled and the empirical) were tested on the selected models, the best fit between the discharge time-series and the models being obtained with the empirical confidence interval. Additionally, the models were validated using data for the year 2011, in which extreme hydrologic events and changes in hydrologic regime were identified. Furthermore, two ways of estimating uncertainty through confidence levels were studied: the modeled and the empirical confidence levels. This research was useful to update the procedures used by the company Électricité de France and to validate the time-series at the four stream flow gage stations. Additionally, coefficients for both the models and

  20. Multiple imputation for time series data with Amelia package.

    PubMed

    Zhang, Zhongheng

    2016-02-01

    Time series data are common in medical researches. Many laboratory variables or study endpoints could be measured repeatedly over time. Multiple imputation (MI) without considering time trend of a variable may cause it to be unreliable. The article illustrates how to perform MI by using Amelia package in a clinical scenario. Amelia package is powerful in that it allows for MI for time series data. External information on the variable of interest can also be incorporated by using prior or bound argument. Such information may be based on previous published observations, academic consensus, and personal experience. Diagnostics of imputation model can be performed by examining the distributions of imputed and observed values, or by using over-imputation technique. PMID:26904578
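Amelia is an R package, so it is not reproduced here; but the article's central point, that imputation ignoring the time trend is unreliable, can be illustrated in Python by comparing naive mean imputation with time-aware interpolation in pandas (the variable values below are invented):

```python
import numpy as np
import pandas as pd

# A laboratory variable with a clear rising trend, measured daily
t = pd.date_range("2016-01-01", periods=10, freq="D")
true_vals = np.linspace(100.0, 145.0, 10)
obs = pd.Series(true_vals, index=t)
obs.iloc[[3, 6]] = np.nan                     # two missing measurements

mean_imputed = obs.fillna(obs.mean())         # ignores the time trend
time_imputed = obs.interpolate(method="time") # respects the time trend
```

On this trending series the mean-imputed values miss the truth by several units, while time interpolation recovers them exactly; Amelia generalizes this idea to multiple imputation with lags, leads, and prior information.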

  1. [Anomaly Detection of Multivariate Time Series Based on Riemannian Manifolds].

    PubMed

Xu, Yonghong; Hou, Xiaoying; Li, Shuting; Cui, Jie

    2015-06-01

Multivariate time series problems exist widely in production and daily life. Anomaly detection has provided valuable information in finance, hydrology, meteorology, seismology, video surveillance, medicine and other fields. In order to find anomalies in time sequences quickly and efficiently, and to present them intuitively, in this study we combined Riemannian manifolds with statistical process control charts, using the covariance matrix over a sliding window as the descriptor of the time sequence, to achieve anomaly detection in multivariate time series together with its visualization. We used simulated MA data streams and abnormal electrocardiogram data from MIT-BIH as experimental objects to verify the anomaly detection method. The results showed that the method is reasonable and effective. PMID:26485975

  2. Irreversibility of financial time series: A graph-theoretical approach

    NASA Astrophysics Data System (ADS)

    Flanagan, Ryan; Lacasa, Lucas

    2016-04-01

    The relation between time series irreversibility and entropy production has been recently investigated in thermodynamic systems operating away from equilibrium. In this work we explore this concept in the context of financial time series. We make use of visibility algorithms to quantify, in graph-theoretical terms, time irreversibility of 35 financial indices evolving over the period 1998-2012. We show that this metric is complementary to standard measures based on volatility and exploit it to both classify periods of financial stress and to rank companies accordingly. We then validate this approach by finding that a projection in principal components space of financial years, based on time irreversibility features, clusters together periods of financial stress from stable periods. Relations between irreversibility, efficiency and predictability are briefly discussed.
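The natural visibility graph itself is easy to state: two samples are linked if the straight line between them clears every intermediate sample. A minimal sketch follows, with a crude degree-based asymmetry standing in for the paper's actual irreversibility measure (which compares the full in- and out-degree distributions):

```python
import numpy as np

def visibility_edges(y):
    """Natural visibility graph: (i, j) are connected iff every
    intermediate sample lies strictly below the segment joining them."""
    n = len(y)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if all(y[k] < y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

def degree_asymmetry(y):
    """Crude irreversibility proxy: mean gap between out-degree (links
    to the future) and in-degree (links from the past) per node."""
    edges = visibility_edges(y)
    n = len(y)
    k_out = np.bincount([i for i, _ in edges], minlength=n)
    k_in = np.bincount([j for _, j in edges], minlength=n)
    return float(np.mean(np.abs(k_out - k_in)))
```

Orienting every edge forward in time turns the graph into a directed one; a statistically reversible series yields matching in- and out-degree statistics, while an irreversible one does not.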

  3. Classification of time series patterns from complex dynamic systems

    SciTech Connect

    Schryver, J.C.; Rao, N.

    1998-07-01

    An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.

  4. Mixed Spectrum Analysis on fMRI Time-Series.

    PubMed

    Kumar, Arun; Lin, Feng; Rajapakse, Jagath C

    2016-06-01

Temporal autocorrelation present in functional magnetic resonance image (fMRI) data poses challenges to its analysis. The existing approaches handling autocorrelation in fMRI time-series often presume a specific model of autocorrelation such as an auto-regressive model. The main limitation here is that the correlation structure of voxels is generally unknown and varies in different brain regions because of different levels of neurogenic noises and pulsatile effects. Enforcing a universal model on all brain regions leads to bias and loss of efficiency in the analysis. In this paper, we propose the mixed spectrum analysis of the voxel time-series to separate the discrete component corresponding to input stimuli and the continuous component carrying temporal autocorrelation. A mixed spectral analysis technique based on the M-spectral estimator is proposed, which effectively removes autocorrelation effects from voxel time-series and identifies significant peaks of the spectrum. As the proposed method does not assume any prior model for the autocorrelation effect in voxel time-series, varying correlation structure among the brain regions does not affect its performance. We have modified the standard M-spectral method for application to a spatial set of time-series by incorporating contextual information related to the continuous spectrum of neighborhood voxels, thus reducing the computation cost considerably. Likelihood of activation is predicted by comparing the amplitude of the discrete component at the stimulus frequency across the brain, using a normal distribution and modeling spatial correlations among the likelihoods with a conditional random field. We also demonstrate the application of the proposed method in detecting other desired frequencies. PMID:26800533

  5. An Introductory Overview of Statistical Methods for Discrete Time Series

    NASA Astrophysics Data System (ADS)

    Meng, X.-L.; California-Harvard AstroStat Collaboration

    2004-08-01

A number of statistical problems encountered in astrophysics are concerned with discrete time series, such as photon counts with variation in source intensity over time. This talk provides an introductory overview of the current state-of-the-art methods in statistics, including Bayesian methods aided by Markov chain Monte Carlo, for modeling and analyzing such data. These methods have also been successfully applied in other fields, such as economics.

  6. Normalization methods in time series of platelet function assays

    PubMed Central

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

    Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented in multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized discussing the most suited approach for platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation demonstrated the correlation as calculated by the Spearman correlation test, when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts as was the case when using all data as 1 dataset for normalization. PMID:27428217
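The four normalization methods named in the abstract can be sketched directly; the exact conventions (sample standard deviation, median centering for the interquartile variant) are our assumptions, not the article's:

```python
import numpy as np

def z_transform(x):
    """Center to zero mean, scale to unit (sample) standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

def range_transform(x):
    """Rescale linearly onto [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def proportion_transform(x):
    """Express each value as a fraction of the series total."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def iqr_transform(x):
    """Center on the median, scale by the interquartile range."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    return (x - np.median(x)) / (q3 - q1)
```

As in the article, each transform can be applied either per assay across all time points or per time point across all assays; the choice of axis, not the formula, is what determines whether correlations survive normalization.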

  7. Accuracy enhancement of GPS time series using principal component analysis and block spatial filtering

    NASA Astrophysics Data System (ADS)

    He, Xiaoxing; Hua, Xianghong; Yu, Kegen; Xuan, Wei; Lu, Tieding; Zhang, W.; Chen, X.

    2015-03-01

This paper focuses on performance analysis and accuracy enhancement of long-term position time series of a regional network of GPS stations with two nearby sub-blocks, one block of 8 stations in the Cascadia region and another of 14 stations in Southern California. We have analyzed the seasonal variations of the 22 IGS site positions between 2004 and 2011. The Green's function is used to calculate the station-site displacements induced by environmental loading due to atmospheric pressure, soil moisture, snow depth and nontidal ocean. The analysis has revealed that these loading factors can result in centimeter-level position shifts and that the displacement time series exhibit a periodic pattern, which can explain about 12.70-21.78% of the seasonal amplitude in the vertical GPS time series; the loading effect also differs significantly between the two nearby geographical regions. After the loading effect is corrected, principal component analysis (PCA)-based block spatial filtering is proposed to filter out the remaining common mode error (CME) of the GPS time series. The results show that the PCA-based block spatial filtering can extract the CME more accurately and effectively than the conventional overall filtering method, reducing more of the uncertainty. With the loading correction and block spatial filtering, about 68.34-73.20% of the vertical GPS seasonal power can be separated and removed, improving the reliability of the GPS time series and hence enabling better deformation analysis and higher precision geodetic applications.
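The core of PCA-based spatial filtering, removing the leading principal component of a block's residual matrix as the common mode error, can be sketched via SVD on synthetic data (the paper's block-selection details are not reproduced here):

```python
import numpy as np

def remove_common_mode(residuals, n_modes=1):
    """Subtract the leading principal component(s), interpreted as the
    common mode error (CME) shared by all stations in one block.
    residuals: array of shape (n_epochs, n_stations), detrended."""
    X = residuals - residuals.mean(axis=0)
    # SVD-based PCA: U holds temporal modes, Vt the spatial responses
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    cme = (U[:, :n_modes] * S[:n_modes]) @ Vt[:n_modes, :]
    return X - cme

# synthetic block: 8 stations sharing one common signal plus local noise
rng = np.random.default_rng(1)
common = np.sin(np.linspace(0, 20, 500))[:, None]          # shared CME
data = common * rng.uniform(0.5, 1.5, 8) + 0.1 * rng.normal(size=(500, 8))
filtered = remove_common_mode(data)
```

Because the shared signal dominates the first singular value, subtracting that single mode leaves essentially only the station-local noise, which is the uncertainty reduction the paper quantifies per block.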

  8. National Ignition Campaign (NIC) Precision Tuning Series Shock Timing Experiments

    SciTech Connect

    Robey, H F; Celliers, P M

    2011-07-19

A series of precision shock timing experiments have been performed on NIF. These experiments continue to adjust the laser pulse shape and employ the adjusted cone fraction (CF) in the picket (first 2 ns of the laser pulse) as determined from the re-emit experiment series. The NIF ignition laser pulse is precisely shaped and consists of a series of four impulses, which drive a corresponding series of shock waves of increasing strength to accelerate and compress the capsule ablator and fuel layer. To optimize the implosion, not only the strength (or power) but also the timing of the shock waves must be tuned, the latter to sub-nanosecond accuracy. In a well-tuned implosion, the shock waves work together to compress and heat the fuel. For the shock timing experiments, a re-entrant cone is inserted through both the hohlraum wall and the capsule ablator, allowing a direct optical view of the propagating shocks in the capsule interior using the VISAR (Velocity Interferometer System for Any Reflector) diagnostic from outside the hohlraum. To emulate the DT ice of an ignition capsule, the inside of the cone and the capsule are filled with liquid deuterium.

  9. Fast computation of recurrences in long time series

    NASA Astrophysics Data System (ADS)

    Rawald, Tobias; Sips, Mike; Marwan, Norbert; Dransch, Doris

    2014-05-01

The quadratic time complexity of calculating basic RQA measures (doubling the size of the input time series quadruples the number of operations) impairs the fast computation of RQA in many application scenarios. As an example, we analyze the Potsdamer Reihe, an ongoing uninterrupted hourly temperature profile maintained since 1893 and consisting of 1,043,112 data points. Using an optimized single-threaded CPU implementation, this analysis requires about six hours; our approach conducts RQA for the Potsdamer Reihe in five minutes. We automatically split a long time series into smaller chunks (Divide) and distribute the computation of RQA measures across multiple GPU devices. To guarantee valid RQA results, we employ carryover buffers that allow sharing information between pairs of chunks (Recombine). We demonstrate the capabilities of our Divide and Recombine approach to process long time series by comparing the runtime of our implementation to existing RQA tools. We support a variety of platforms by employing the computing framework OpenCL. Our current implementation supports the computation of standard RQA measures (recurrence rate, determinism, laminarity, ratio, average diagonal line length, trapping time, longest diagonal line, longest vertical line, divergence, entropy, trend) and also calculates recurrence times. To realize the potential of our approach for a number of applications, we plan to release our implementation under an Open Source software license. It will be available at http://www.gfz-potsdam.de/fast-rqa/. Since our approach computes RQA measures for long time series quickly, we plan to extend our implementation to support multi-scale RQA.
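For reference, the baseline O(n^2) definitions of two of the listed measures, recurrence rate and determinism, can be written directly (a single-threaded sketch of the standard definitions, not the authors' chunked GPU implementation):

```python
import numpy as np

def recurrence_matrix(x, eps):
    """R[i, j] = 1 iff samples i and j are closer than eps (1-D case)."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])   # all pairwise distances
    return (d < eps).astype(int)

def recurrence_rate(R):
    """Density of recurrent points in the matrix."""
    return R.mean()

def determinism(R, lmin=2):
    """Fraction of recurrent points on diagonal lines of length >= lmin."""
    n = R.shape[0]
    diag_points = 0
    for k in range(-(n - 1), n):
        run = 0
        for v in list(np.diagonal(R, k)) + [0]:   # sentinel flushes runs
            if v:
                run += 1
            else:
                if run >= lmin:
                    diag_points += run
                run = 0
    return diag_points / R.sum()
```

The n-by-n distance matrix is exactly what makes the cost quadratic; the Divide step of the paper amounts to evaluating such blocks chunk by chunk while the carryover buffers keep diagonal runs intact across chunk boundaries.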

  10. The extraction of multiple cropping index of China based on NDVI time-series

    NASA Astrophysics Data System (ADS)

    Huang, Haitao; Gao, Zhiqiang

    2011-09-01

Multiple cropping index reflects the intensity with which arable land is used under a certain planting system. The link between multiple cropping index and NDVI time-series is the crop cycle, which governs the process of seeding, jointing, tasseling, ripening, harvesting and so on. This cycle can be retrieved from NDVI time-series because peaks and valleys on the time-series curve correspond to different periods of crop growth. In this paper, we aim to extract the multiple cropping index of China from NDVI time-series. Because of cloud contamination, some NDVI values are depressed. MVC (Maximum Value Composite) synthesis is applied to the SPOT-VGT data to remove this noise, but does not work sufficiently well. To extract the multiple cropping index accurately, the HANTS (Harmonic Analysis of Time Series) algorithm is employed to remove the cloud contamination. The reconstructed NDVI time-series explicitly characterizes the biophysical process of planting, seedling, elongating, heading and harvesting of crops. Based on the reconstructed curve, we calculate the multiple cropping index of arable land by counting the peaks of the curve, since each peak represents one crop season. This paper presents a method for extracting the multiple cropping index from remote sensing imagery; the multiple cropping index of China is then extracted from VEGETATION decadal-composite NDVI time series for the years 2000 and 2009. From the processed data, we obtain the spatial distribution of tillage systems in China, followed by a discussion of cropping index change over the 10 years.
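Counting growing-season peaks on a reconstructed NDVI curve can be sketched as follows; the height and separation thresholds are illustrative assumptions, not values from the paper:

```python
import numpy as np

def cropping_index(ndvi, min_height=0.3, min_separation=5):
    """Count growing-season peaks in a (smoothed) annual NDVI curve.
    Each retained peak is taken as one crop season."""
    peaks = []
    for i in range(1, len(ndvi) - 1):
        is_peak = ndvi[i] > ndvi[i - 1] and ndvi[i] >= ndvi[i + 1]
        if is_peak and ndvi[i] >= min_height:
            # suppress spurious peaks closer than min_separation samples
            if not peaks or i - peaks[-1] >= min_separation:
                peaks.append(i)
    return len(peaks)

# synthetic double-cropping profile: two green-up cycles in one year
t = np.linspace(0, 1, 36)                 # 36 decadal composites
ndvi = 0.35 * np.exp(-((t - 0.3) / 0.08) ** 2) + \
       0.45 * np.exp(-((t - 0.7) / 0.08) ** 2) + 0.1
```

On this two-season profile the function returns 2, i.e., a double-cropping pixel; in practice the curve would first be cleaned with HANTS so that cloud-depressed values do not split one season into several false peaks.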

  11. Efficient spectral estimation for time series with intermittent gaps

    NASA Astrophysics Data System (ADS)

    Smith, L. T.; Constable, C.

    2009-12-01

    Data from magnetic satellites like CHAMP, Ørsted, and Swarm can be used to study electromagnetic induction in Earth’s mantle. Time series of internal and external spherical harmonic coefficients (usually those associated with the predominantly dipolar structure of ring current variations) are used to determine Earth’s electromagnetic response as a function of frequency of the external variations. Inversion of this response can yield information about electrical conductivity variations in Earth’s mantle. The inductive response depends on frequency through skin depth, so it is desirable to work with the longest time series possible. Intermittent gaps in available data complicate attempts to estimate the power or cross spectra, and thus the electromagnetic response, for satellite records. Complete data series are most effectively analyzed using direct multi-taper spectral estimation, either with prolate multitapers that efficiently minimize broadband bias, or with a set designed to minimize local bias. The latter group has frequently been approximated by sine tapers. Intermittent gaps in data may be patched over using custom-designed interpolation. We focus on a different approach, using sets of multitapers explicitly designed to accommodate gaps in the data. The optimization problems for the prolate and minimum bias tapers are altered to allow a specific arrangement of data samples, producing a modified eigenvalue-eigenfunction problem. We have shown that the prolate tapers with gaps and the minimum bias tapers with gaps provide higher resolution spectral estimates with less leakage than spectral averaging of data sections bounded by gaps. Current work is focused on producing efficient algorithms for spectral estimation of data series with gaps. A major limitation is the time and memory needed for the solution of large eigenvalue problems used to calculate the tapers for long time series. Fortunately only a limited set of the largest eigenvalues are needed, and
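
For a gap-free series, the sine-taper (minimum local bias) estimate mentioned above can be sketched directly; the taper count below is an arbitrary choice, and the gap-aware modified tapers of the abstract are not attempted here:

```python
import numpy as np

def sine_multitaper_psd(x, k=5):
    """Direct multitaper spectral estimate averaging over the first k sine
    tapers v_j(t) = sqrt(2/(N+1)) * sin(pi * j * t / (N+1)), which are
    orthonormal on t = 1..N.  Sketch for a gap-free series only."""
    n = len(x)
    t = np.arange(1, n + 1)
    psd = np.zeros(n // 2 + 1)
    for j in range(1, k + 1):
        taper = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * j * t / (n + 1))
        psd += np.abs(np.fft.rfft(taper * x)) ** 2   # one eigenspectrum
    return psd / k
```

Averaging the k eigenspectra trades a modest loss of resolution for a large reduction in variance, which is the point of the multitaper family.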

  12. Wavelet transform approach for fitting financial time series data

    NASA Astrophysics Data System (ADS)

    Ahmed, Amel Abdoullah; Ismail, Mohd Tahir

    2015-10-01

    This study investigates a newly developed technique, combining wavelet filtering with a VEC model, to study the dynamic relationships among financial time series. A wavelet filter is used to remove noise from daily data for the NASDAQ stock market of the US and three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover 6/29/2001 to 5/5/2009. The returns of the wavelet-filtered series and of the original series are then analyzed by a cointegration test and a VEC model. The cointegration test affirms the existence of cointegration between the studied series: there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed and traditional models demonstrates that the proposed model (DWT with VEC) outperforms the traditional VEC model in fitting the financial stock market series and reveals real information about the relationships among the stock markets.
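
As a hedged sketch of the wavelet filtering step (the paper's actual wavelet basis and threshold rule are not specified here), a one-level Haar shrinkage can be written in a few lines:

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet shrinkage: transform, soft-threshold the
    detail coefficients, invert.  A minimal stand-in for the paper's
    wavelet filter; x must have even length."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                 # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

With threshold 0 the series is reconstructed exactly; as the threshold grows, high-frequency noise is progressively removed before the cointegration/VEC analysis.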

  13. Segmentation of biological multivariate time-series data

    NASA Astrophysics Data System (ADS)

    Omranian, Nooshin; Mueller-Roeber, Bernd; Nikoloski, Zoran

    2015-03-01

    Time-series data from multicomponent systems capture the dynamics of the ongoing processes and reflect the interactions between the components. The progression of processes in such systems usually involves check-points and events at which the relationships between the components are altered in response to stimuli. Detecting these events together with the implicated components can help understand the temporal aspects of complex biological systems. Here we propose a regularized regression-based approach for identifying breakpoints and corresponding segments from multivariate time-series data. In combination with techniques from clustering, the approach also allows estimating the significance of the determined breakpoints as well as the key components implicated in the emergence of the breakpoints. Comparative analysis with the existing alternatives demonstrates the power of the approach to identify biologically meaningful breakpoints in diverse time-resolved transcriptomics data sets from the yeast Saccharomyces cerevisiae and the diatom Thalassiosira pseudonana.
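
The paper's detector is regularized-regression based; as a simpler classical stand-in that illustrates breakpoint recovery from a univariate series, a least-squares segmentation by dynamic programming might look like this (all names are illustrative):

```python
import numpy as np

def segment(y, n_breaks):
    """Optimal least-squares segmentation of a univariate series into
    n_breaks + 1 constant-mean segments, by dynamic programming.
    Returns the sorted start indices of the segments after each break."""
    n = len(y)
    csum = np.concatenate(([0.0], np.cumsum(y)))
    csum2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def sse(i, j):  # SSE of fitting one mean to y[i..j] inclusive
        s = csum[j + 1] - csum[i]
        s2 = csum2[j + 1] - csum2[i]
        return s2 - s * s / (j - i + 1)

    K = n_breaks + 1
    INF = float("inf")
    best = [[INF] * n for _ in range(K)]   # best[k][j]: cost of y[0..j] in k+1 segments
    back = [[-1] * n for _ in range(K)]
    for j in range(n):
        best[0][j] = sse(0, j)
    for k in range(1, K):
        for j in range(k, n):
            for i in range(k, j + 1):
                c = best[k - 1][i - 1] + sse(i, j)
                if c < best[k][j]:
                    best[k][j] = c
                    back[k][j] = i
    breaks, j = [], n - 1                   # backtrack the breakpoints
    for k in range(K - 1, 0, -1):
        i = back[k][j]
        breaks.append(i)
        j = i - 1
    return sorted(breaks)
```

The regularized-regression approach of the paper additionally handles multivariate data and estimates breakpoint significance, which this sketch does not attempt.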

  14. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, M.

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.

  15. A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Hsu, Andrew T.

    1989-01-01

    A time accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to third-order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time-accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, within the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.

  16. Supplementing environmental isotopes with time series methods to date groundwater

    NASA Astrophysics Data System (ADS)

    Farlin, Julien

    2015-04-01

    A popular method to estimate the transit time of groundwater is to fit the predictions of a lumped parameter model (LPM) to environmental isotope measurements. The fitting, or inverse modeling, procedure consists in rejecting all parameters (or parameter combinations for more complex LPMs) that exceed a given error threshold. In the many usual cases where this does not lead to a single acceptable solution, additional and independent data can prove useful to further eliminate some of the remaining solutions. In the case study presented here, groundwater transit times have been estimated by combining tritium, temperature, and discharge measurements. Tritium measurements from a series of contact springs draining the Luxembourg Sandstone aquifer were used to estimate the two parameters of an exponential piston flow model. The piston flow parameter gives the transit time of tritium through the thick unsaturated zone of the aquifer, while the exponential component corresponds to its mean transit time in the saturated zone. Because of the limited extent of the tritium time series and the fact that tritium activity has nearly returned to its background concentration, the solution of the inverse modeling was not unique. The discharge measurements were then used to reduce the number of retained parameter combinations by estimating, independently from tritium, the transit times through the unsaturated and saturated zones. The former was calculated from the time lag between a time series of net annual recharge over ten years and the fluctuations in discharge over that same period, while the latter was calculated from the discharge recession during the dry season. Although both methods necessitate relatively long time series of at least a few years, they dramatically reduce the range of estimated transit times. Another possibility is to use the temperature signal measured in spring water. The amplitude damping and its shift relative to air temperature (which we used as proxy for the
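
The lumped-parameter convolution idea can be sketched as below, using the plain exponential model rather than the exponential piston flow model of the abstract to keep the code short; the function names, defaults, and discretization are illustrative assumptions:

```python
import numpy as np

TRITIUM_LAMBDA = np.log(2) / 12.32   # tritium decay constant, 1/years

def lpm_output(c_in, mean_tt, dt=1.0, lam=TRITIUM_LAMBDA):
    """Convolve an input tracer series with an exponential-model
    transit-time distribution g(tau) = exp(-tau/mean_tt)/mean_tt,
    weighting each transit time by radioactive decay.  The exponential
    piston flow model would additionally delay g by the piston-flow
    transit time; that refinement is omitted in this sketch."""
    n = len(c_in)
    tau = np.arange(n) * dt
    g = np.exp(-tau / mean_tt)
    g = g / g.sum()                          # discrete normalization of the TTD
    w = g * np.exp(-lam * tau)               # decay during transit
    return np.convolve(c_in, w)[:n]
```

Inverse modeling then amounts to scanning `mean_tt` (and, for the full model, the piston-flow parameter) and rejecting values whose predicted output misfits the measured spring tritium series.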

  17. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-Averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time up to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location of peak RMS levels, and within 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating the frequency content of the unsteady pressures and the oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled the development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.

  18. The utility of accurate mass and LC elution time information in the analysis of complex proteomes

    SciTech Connect

    Norbeck, Angela D.; Monroe, Matthew E.; Adkins, Joshua N.; Anderson, Kevin K.; Daly, Don S.; Smith, Richard D.

    2005-08-01

    Theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for the specificity and possible utility of combined peptide accurate mass and predicted LC normalized elution time (NET) information. The uniqueness of each peptide was evaluated using its combined mass (+/- 5 ppm and 1 ppm) and NET value (no constraint, +/- 0.05 and 0.01 on a 0-1 NET scale). The set of peptides both underestimates actual biological complexity, due to the lack of specific modifications, and overestimates the expected complexity, since many proteins will not be present in the sample or observable on the mass spectrometer because of dynamic range limitations. Once a peptide is identified from an LC-MS/MS experiment, its mass and elution time represent a unique fingerprint for that peptide. The uniqueness of that fingerprint in comparison to those of the other peptides present is indicative of the ability to confidently identify that peptide based on accurate mass and NET measurements. These measurements can be made using HPLC coupled with high-resolution MS in a high-throughput manner. Results show that for organisms with comparatively small proteomes, such as Deinococcus radiodurans, modest mass and elution time accuracies are generally adequate for peptide identifications. For more complex proteomes, increasingly accurate measurements are required. However, the majority of proteins should be uniquely identifiable by using LC-MS with mass accuracies within +/- 1 ppm and elution time measurements within +/- 0.01 NET.
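
The uniqueness evaluation can be sketched as a brute-force count of peptides whose (mass, NET) pair has no neighbor within the stated tolerances; this is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def unique_fraction(masses, nets, ppm=1.0, net_tol=0.01):
    """Fraction of peptides whose (accurate mass, NET) fingerprint is unique
    within a +/- ppm mass window and +/- net_tol NET window.  O(n^2) brute
    force; a real digest of a whole proteome would use sorted masses."""
    masses = np.asarray(masses, dtype=float)
    nets = np.asarray(nets, dtype=float)
    n = len(masses)
    unique = 0
    for i in range(n):
        tol = masses[i] * ppm * 1e-6          # ppm window scales with mass
        close = (np.abs(masses - masses[i]) <= tol) & \
                (np.abs(nets - nets[i]) <= net_tol)
        if np.count_nonzero(close) == 1:       # only the peptide itself
            unique += 1
    return unique / n
```

Tightening `ppm` from 5 to 1 and `net_tol` from 0.05 to 0.01 reproduces the trend the abstract describes: the unique fraction grows as the measurement windows shrink.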

  19. The Puoko-nui CCD Time-Series Photometer

    NASA Astrophysics Data System (ADS)

    Chote, P.; Sullivan, D. J.

    2013-01-01

    Puoko-nui (te reo Maori for ‘big eye’) is a precision time series photometer developed at Victoria University of Wellington, primarily for use with the 1m McLellan telescope at Mt John University Observatory (MJUO) at Lake Tekapo, New Zealand. GPS-based timing provides excellent timing accuracy, and online reduction software processes frames as they are acquired. The user is presented with a simple interface that includes instrument control and an up-to-date light curve and Fourier amplitude spectrum of the target star. Puoko-nui has been operating in its current form since early 2011, and is primarily used to monitor pulsating white dwarf stars.

  20. Rényi’s information transfer between financial time series

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Kleinert, Hagen; Shefaat, Mohammad

    2012-05-01

    In this paper, we quantify the statistical coherence between financial time series by means of the Rényi entropy. With the help of Campbell’s coding theorem, we show that the Rényi entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppressing others. This accentuation is controlled with Rényi’s parameter q. To tackle the issue of the information flow between time series, we formulate the concept of Rényi’s transfer entropy as a measure of information that is transferred only between certain parts of underlying distributions. This is particularly pertinent in financial time series, where the knowledge of marginal events such as spikes or sudden jumps is of crucial importance. We apply the Rényian information flow to stock market time series from 11 world stock indices as sampled at a daily rate in the time period 02.01.1990-31.12.2009. Corresponding heat maps and net information flows are represented graphically. A detailed discussion of the transfer entropy between the DAX and S&P500 indices based on minute tick data gathered in the period 02.04.2008-11.09.2009 is also provided. Our analysis shows that the bivariate information flow between world markets is strongly asymmetric with a distinct information surplus flowing from the Asia-Pacific region to both European and US markets. An important yet less dramatic excess of information also flows from Europe to the US. This is particularly clearly seen from a careful analysis of Rényi information flow between the DAX and S&P500 indices.
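
The Rényi entropy underlying the transfer measure can be sketched directly; here q < 1 accentuates the rare (tail) events and q > 1 the central part of the distribution, which is how the parameter q selects sectors of the empirical distribution:

```python
import numpy as np

def renyi_entropy(p, q):
    """Rényi entropy H_q(p) = log(sum_i p_i^q) / (1 - q) of a discrete
    distribution; the q -> 1 limit recovers the Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                  # 0 * log 0 convention
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))       # Shannon limit
    return float(np.log(np.sum(p ** q)) / (1.0 - q))
```

The transfer entropy of the paper is built from differences of such entropies evaluated on conditional distributions estimated from the two series.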

  1. FTSPlot: Fast Time Series Visualization for Large Datasets

    PubMed Central

    Riss, Michael

    2014-01-01

    The analysis of electrophysiological recordings often involves visual inspection of time series data to locate specific experiment epochs, mask artifacts, and verify the results of signal processing steps, such as filtering or spike detection. Long-term experiments with continuous data acquisition generate large amounts of data. Rapid browsing through these massive datasets poses a challenge to conventional data plotting software because the plotting time increases proportionately to the increase in the volume of data. This paper presents FTSPlot, which is a visualization concept for large-scale time series datasets using techniques from the field of high performance computer graphics, such as hierarchic level of detail and out-of-core data handling. In a preprocessing step, time series data, event, and interval annotations are converted into an optimized data format, which then permits fast, interactive visualization. The preprocessing step has a computational complexity of ; the visualization itself can be done with a complexity of and is therefore independent of the amount of data. A demonstration prototype has been implemented and benchmarks show that the technology is capable of displaying large amounts of time series data, event, and interval annotations lag-free with ms. The current 64-bit implementation theoretically supports datasets with up to bytes, on the x86_64 architecture currently up to bytes are supported, and benchmarks have been conducted with bytes/1 TiB or double precision samples. The presented software is freely available and can be included as a Qt GUI component in future software projects, providing a standard visualization method for long-term electrophysiological experiments. PMID:24732865

  2. Assessing spatial covariance among time series of abundance.

    PubMed

    Jorgensen, Jeffrey C; Ward, Eric J; Scheuerell, Mark D; Zabel, Richard W

    2016-04-01

    For species of conservation concern, an essential part of the recovery planning process is identifying discrete population units and their location with respect to one another. A common feature among geographically proximate populations is that the number of organisms tends to covary through time as a consequence of similar responses to exogenous influences. In turn, high covariation among populations can threaten the persistence of the larger metapopulation. Historically, explorations of the covariance in population size of species with many (>10) time series have been computationally difficult. Here, we illustrate how dynamic factor analysis (DFA) can be used to characterize diversity among time series of population abundances and the degree to which all populations can be represented by a few common signals. Our application focuses on anadromous Chinook salmon (Oncorhynchus tshawytscha), a species listed under the US Endangered Species Act, that is impacted by a variety of natural and anthropogenic factors. Specifically, we fit DFA models to 24 time series of population abundance and used model selection to identify the minimum number of latent variables that explained the most temporal variation after accounting for the effects of environmental covariates. We found support for grouping the time series according to 5 common latent variables. The top model included two covariates: the Pacific Decadal Oscillation in spring and summer. The assignment of populations to the latent variables matched the currently established population structure at a broad spatial scale. At a finer scale, there was more population grouping complexity. Some relatively distant populations were grouped together, and some relatively close populations - considered to be more aligned with each other - were more associated with populations further away. 
These coarse- and fine-grained examinations of spatial structure are important because they reveal different structural patterns not evident

  3. Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2006-01-01

    Application of the newly emerged space-time conservation element solution element (CESE) method to compressible Navier-Stokes equations is studied. In contrast to Euler equations solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigations and validations. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the standpoint of acoustic wave propagation. Validations of the numerical solutions are performed by comparing with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.

  4. Financial time series analysis based on information categorization method

    NASA Astrophysics Data System (ADS)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper mainly applies the information categorization method to analyze financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply this method to quantify the similarity of different stock markets, and we report results on the similarity of the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between the two stock markets differs across time periods and that it becomes larger after these two crises. We also obtain similarity results for 10 stock indices in three regions, showing that the method can distinguish markets from different regions in the phylogenetic trees. These results demonstrate that satisfactory information can be obtained from financial markets by this method. The information categorization method can be used not only on physiologic time series, but also on financial time series.

  5. Dynamical Analysis and Visualization of Tornadoes Time Series

    PubMed Central

    2015-01-01

    In this paper we analyze the behavior of tornado time-series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns. PMID:25790281

  6. Dynamical analysis and visualization of tornadoes time series.

    PubMed

    Lopes, António M; Tenreiro Machado, J A

    2015-01-01

    In this paper we analyze the behavior of tornado time-series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns. PMID:25790281

  7. Satellite time series analysis using Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Pannimpullath, R. Renosh; Doolaeghe, Diane; Loisel, Hubert; Vantrepotte, Vincent; Schmitt, Francois G.

    2016-04-01

    Geophysical fields possess large fluctuations over many spatial and temporal scales. Successive satellite images provide an interesting sampling of this spatio-temporal multiscale variability. Here we propose to study such variability by performing satellite time series analysis, pixel by pixel, using Empirical Mode Decomposition (EMD). EMD is a time series analysis technique able to decompose an original time series into a sum of modes, each one having a different mean frequency. It can be used to smooth signals and to extract trends. It is built in a data-adaptive way and is able to extract information from nonlinear signals. Here we use MERIS Suspended Particulate Matter (SPM) data, on a weekly basis, over 10 years, giving 458 successive time steps. We have selected 5 different regions of coastal waters for the present study: Vietnam coastal waters, the Brahmaputra region, the St. Lawrence, the English Channel, and the McKenzie. These regions have high SPM concentrations due to large-scale river runoff. Trend and Hurst exponents are derived for each pixel in each region. The energy is also extracted using Hilbert Spectral Analysis (HSA) along with the EMD method, and the energy of each mode in each region is normalized by the total energy computed over all modes.

  8. Accurate time-of-flight measurement of particle based on ECL-TTL Timer

    NASA Astrophysics Data System (ADS)

    Li, Deping; Liu, Jianguo; Huang, Shuhua; Gui, Huaqiao; Cheng, Yin; Wang, Jie; Lu, Yihuai

    2014-11-01

    Depending on their aerodynamic diameter, aerosol particles are deposited in different parts of the human respiratory system, thus affecting human health, so continuous and effective monitoring of aerosol particles is of growing concern. Time-of-flight aerosol beam spectrometry is the typical method for measuring atmospheric aerosol particle size distributions and particle concentrations, and accurate measurement of the particle flight time is the key to accurate particle size spectra. To achieve accurate time-of-flight measurements of aerosol particles, this paper presents a high-speed ECL-TTL timer combining an ECL counter with a TTL counter. The timer comprises a clock generation module, a high-speed counting module, and a control module. The clock generation module uses a crystal oscillator plus a frequency multiplier, taking advantage of the stability of the crystal to provide a stable 500 MHz clock signal to the high-speed counter. The counting module mixes ECL and TTL counters, maintaining timing accuracy while extending the timing range and simplifying the circuit design. The control module starts, stops, and resets the high-speed counter according to the particle's time of flight and is the key part of the high-speed counting. The timer achieves a resolution of 4 ns over a full scale of 4096 ns and has been successfully applied in an Aerodynamic Particle Sizer, meeting the requirements for precise measurement of aerosol particle time-of-flight.

  9. Learning time series evolution by unsupervised extraction of correlations

    NASA Astrophysics Data System (ADS)

    Deco, Gustavo; Schürmann, Bernd

    1995-03-01

    We focus on the problem of modeling time series by learning statistical correlations between the past and present elements of the series in an unsupervised fashion. This kind of correlation is, in general, nonlinear, especially in the chaotic domain. Therefore the learning algorithm should be able to extract statistical correlations, i.e., higher-order correlations between the elements of the time signal. This problem can be viewed as a special case of factorial learning. Factorial learning may be formulated as an unsupervised redundancy reduction between the output components of a transformation that conserves the transmitted information. An information-theoretic-based architecture and learning paradigm are introduced. The neural architecture has only one layer and a triangular structure in order to transform elements by observing only the past and to conserve the volume. In this fashion, a transformation that guarantees transmission of information without loss is formulated. The learning rule decorrelates the output components of the network. Two methods are used: higher-order decorrelation by explicit evaluation of higher-order cumulants of the output distributions, and minimization of the sum of entropies of each output component in order to minimize the mutual information between them, assuming that the entropies have an upper bound given by Gibbs second theorem. After decorrelation between the output components, the correlation between the elements of the time series can be extracted by analyzing the trained neural architecture. As a consequence, we are able to model chaotic and nonchaotic time series. Furthermore, one critical point in modeling time series is the determination of the dimension of the embedding vector used, i.e., the number of components of the past that are needed to predict the future. 
With this method we can detect the embedding dimension by extracting the influence of the past on the future, i.e., the correlation of remote past and future

  10. Modeling Philippine Stock Exchange Composite Index Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Gayo, W. S.; Urrutia, J. D.; Temple, J. M. F.; Sandoval, J. R. D.; Sanglay, J. E. A.

    2015-06-01

    This study was conducted to develop a time series model of the Philippine Stock Exchange Composite Index (PSEi) and its volatility, using a finite mixture of ARIMA models with conditional variance equations such as the ARCH, GARCH, EGARCH, TARCH, and PARCH models. The study also aimed to find out which of the economic variables - Consumer Price Index, crude oil price, foreign exchange rate, gold price, interest rate, money supply, price-earnings ratio, Producers’ Price Index, and terms of trade - can be used to project future values of the PSEi; this was examined using the Granger Causality Test. The findings showed that the best time series model for the Philippine Stock Exchange Composite Index is ARIMA(1,1,5)-ARCH(1). Consumer Price Index, crude oil price, and foreign exchange rate were found to Granger-cause the Philippine Stock Exchange Composite Index.
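
The Granger causality step can be sketched as the standard two-regression F-test (restricted model: lags of y only; unrestricted: plus lags of x). The implementation below is illustrative, not the software used in the study:

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic for 'x Granger-causes y' at a given lag order, from the
    change in residual sum of squares between the restricted and
    unrestricted OLS fits."""
    n = len(y)
    rows = n - lags
    Y = y[lags:]
    # lagged regressor matrices: column k holds the series shifted by k
    ylags = np.column_stack([y[lags - k:n - k] for k in range(1, lags + 1)])
    xlags = np.column_stack([x[lags - k:n - k] for k in range(1, lags + 1)])
    ones = np.ones((rows, 1))
    Xr = np.hstack([ones, ylags])              # restricted design
    Xu = np.hstack([ones, ylags, xlags])       # unrestricted design
    ssr_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    ssr_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]) ** 2)
    dof = rows - Xu.shape[1]
    return ((ssr_r - ssr_u) / lags) / (ssr_u / dof)
```

A large F (compared with the F(lags, dof) critical value) rejects the null of no Granger causality, which is how CPI, crude oil price, and the exchange rate were singled out in the study.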

  11. Time Series Analysis of 3D Coordinates Using Nonstochastic Observations

    NASA Astrophysics Data System (ADS)

    Velsink, Hiddo

    2016-03-01

    Adjustment and testing of a combination of stochastic and nonstochastic observations is applied to the deformation analysis of a time series of 3D coordinates. Nonstochastic observations are constant values that are treated as if they were observations. They are used to formulate constraints on the unknown parameters of the adjustment problem. Thus they describe deformation patterns. If deformation is absent, the epochs of the time series are supposed to be related via affine, similarity or congruence transformations. S-basis invariant testing of deformation patterns is treated. The model is experimentally validated by showing the procedure for a point set of 3D coordinates, determined from total station measurements during five epochs. The modelling of two patterns, the movement of just one point in several epochs, and of several points, is shown. Full, rank deficient covariance matrices of the 3D coordinates, resulting from free network adjustments of the total station measurements of each epoch, are used in the analysis.

  12. Fast Nonparametric Clustering of Structured Time-Series.

    PubMed

    Hensman, James; Rattray, Magnus; Lawrence, Neil D

    2015-02-01

    In this publication, we combine two Bayesian nonparametric models: the Gaussian Process (GP) and the Dirichlet Process (DP). Our innovation in the GP model is to introduce a variation on the GP prior which enables us to model structured time-series data, i.e., data containing groups where we wish to model inter- and intra-group variability. Our innovation in the DP model is an implementation of a new fast collapsed variational inference procedure which enables us to optimize our variational approximation significantly faster than standard VB approaches. In a biological time series application we show how our model better captures salient features of the data, leading to better consistency with existing biological classifications, while the associated inference algorithm provides a significant speed-up over EM-based variational inference. PMID:26353249

  13. Deviations from uniform power law scaling in nonstationary time series

    NASA Technical Reports Server (NTRS)

    Viswanathan, G. M.; Peng, C. K.; Stanley, H. E.; Goldberger, A. L.

    1997-01-01

    A classic problem in physics is the analysis of highly nonstationary time series that typically exhibit long-range correlations. Here we test the hypothesis that the scaling properties of the dynamics of healthy physiological systems are more stable than those of pathological systems by studying beat-to-beat fluctuations in the human heart rate. We develop techniques based on the Fano factor and Allan factor functions, as well as on detrended fluctuation analysis, for quantifying deviations from uniform power-law scaling in nonstationary time series. By analyzing extremely long data sets of up to N = 10^5 beats for 11 healthy subjects, we find that the fluctuations in the heart rate scale approximately uniformly over several temporal orders of magnitude. By contrast, we find that in data sets of comparable length for 14 subjects with heart disease, the fluctuations grow erratically, indicating a loss of scaling stability.
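    Detrended fluctuation analysis, as used here to quantify power-law scaling, can be sketched as follows on synthetic noise (a minimal version; the paper also employs Fano and Allan factor methods):

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: slope of log F(n) vs log n."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[:m * n].reshape(m, n)
        t = np.arange(n)
        res = []
        for s in segs:                         # linearly detrend each window
            c = np.polyfit(t, s, 1)
            res.append(np.mean((s - np.polyval(c, t)) ** 2))
        F.append(np.sqrt(np.mean(res)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)
alpha = dfa_exponent(white, [8, 16, 32, 64, 128, 256])
print(alpha)   # ~0.5 for uncorrelated noise; ~1.5 for Brownian motion
```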

  14. Simple Patterns in Fluctuations of Time Series of Economic Interest

    NASA Astrophysics Data System (ADS)

    Fanchiotti, H.; García Canal, C. A.; García Zúñiga, H.

    Time series of nominal exchange rates between the US dollar and the currencies of Argentina, Brazil and the European Economic Community; financial indexes including the Industrial Dow Jones, the British Footsie, the German DAX Composite, the Australian Share Price and the Nikkei Cash; and different Argentine local tax revenues are analyzed, looking for the appearance of simple patterns and the possible definition of forecast evaluators. In every case, the statistical fractal dimensions are obtained from the behavior of the corresponding variance of increments at a given lag. The detrended fluctuation analysis of the data, in terms of the exponent of the resulting power law, is carried out. Finally, the frequency power spectra of all the time series considered are computed and compared.
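    The variance-of-increments step can be sketched on a synthetic random walk, for which Var(x[t+lag] - x[t]) grows like lag^(2H) with Hurst exponent H = 0.5 (illustrative; the paper applies this to the financial series above):

```python
import numpy as np

def hurst_from_increments(x, lags):
    """Var(x[t+lag] - x[t]) ~ lag^(2H); H is half the log-log slope."""
    v = [np.var(x[lag:] - x[:-lag]) for lag in lags]
    slope = np.polyfit(np.log(lags), np.log(v), 1)[0]
    return slope / 2.0

rng = np.random.default_rng(2)
walk = np.cumsum(rng.standard_normal(20000))   # ordinary random walk: H = 0.5
H = hurst_from_increments(walk, [1, 2, 4, 8, 16, 32, 64])
print(H)
```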

  15. Analyzing Single-Molecule Time Series via Nonparametric Bayesian Inference

    PubMed Central

    Hines, Keegan E.; Bankston, John R.; Aldrich, Richard W.

    2015-01-01

    The ability to measure the properties of proteins at the single-molecule level offers an unparalleled glimpse into biological systems at the molecular scale. The interpretation of single-molecule time series has often been rooted in statistical mechanics and the theory of Markov processes. While existing analysis methods have been useful, they are not without significant limitations including problems of model selection and parameter nonidentifiability. To address these challenges, we introduce the use of nonparametric Bayesian inference for the analysis of single-molecule time series. These methods provide a flexible way to extract structure from data instead of assuming models beforehand. We demonstrate these methods with applications to several diverse settings in single-molecule biophysics. This approach provides a well-constrained and rigorously grounded method for determining the number of biophysical states underlying single-molecule data. PMID:25650922

  16. The multiscale analysis between stock market time series

    NASA Astrophysics Data System (ADS)

    Shi, Wenbin; Shang, Pengjian

    2015-11-01

    This paper is devoted to multiscale cross-correlation analysis of stock market time series, applying both the multiscale DCCA cross-correlation coefficient and multiscale cross-sample entropy (MSCE). The multiscale DCCA cross-correlation coefficient is a realization of the DCCA cross-correlation coefficient on multiple scales. The results of this method present a good scaling characterization and, more significantly, the method is able to group stock markets by geographic area. Compared to the multiscale DCCA cross-correlation coefficient, MSCE presents a more remarkable scaling characterization, and its value for each log-return financial time series decreases as the scale factor increases; however, its grouping results are not as good as those of the multiscale DCCA cross-correlation coefficient.
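    A minimal sketch of the DCCA cross-correlation coefficient at a single window size, applied to two synthetic series sharing a common component (illustrative; the paper evaluates it across multiple scales on real stock indexes):

```python
import numpy as np

def dcca_coefficient(x, y, n):
    """DCCA cross-correlation coefficient rho_DCCA at window size n."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    m = len(X) // n
    t = np.arange(n)
    f2xy = f2xx = f2yy = 0.0
    for i in range(m):
        xs, ys = X[i*n:(i+1)*n], Y[i*n:(i+1)*n]
        # residuals after removing the local linear trend in each window
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2xy += np.mean(rx * ry)
        f2xx += np.mean(rx ** 2)
        f2yy += np.mean(ry ** 2)
    return f2xy / np.sqrt(f2xx * f2yy)

rng = np.random.default_rng(3)
common = rng.standard_normal(5000)
a = common + 0.3 * rng.standard_normal(5000)
b = common + 0.3 * rng.standard_normal(5000)
rho = dcca_coefficient(a, b, 32)
print(rho)   # close to 1 for strongly coupled series
```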

  17. The Connected Scatterplot for Presenting Paired Time Series.

    PubMed

    Haroz, Steve; Kosara, Robert; Franconeri, Steven L

    2016-09-01

    The connected scatterplot visualizes two related time series in a scatterplot and connects the points with a line in temporal sequence. News media are increasingly using this technique to present data under the intuition that it is understandable and engaging. To explore these intuitions, we (1) describe how paired time series relationships appear in a connected scatterplot, (2) qualitatively evaluate how well people understand trends depicted in this format, (3) quantitatively measure the types and frequency of misinterpretations, and (4) empirically evaluate whether viewers will preferentially view graphs in this format over the more traditional format. The results suggest that low-complexity connected scatterplots can be understood with little explanation, and that viewers are biased towards inspecting connected scatterplots over the more traditional format. We also describe misinterpretations of connected scatterplots and propose further research into mitigating these mistakes for viewers unfamiliar with the technique. PMID:26600062

  18. A Multiscale Approach to InSAR Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Hetland, E. A.; Muse, P.; Simons, M.; Lin, N.; Dicaprio, C. J.

    2010-12-01

    We present a technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale InSAR Time Series analysis), relies on a spatial wavelet decomposition to permit the inclusion of distance based spatial correlations in the observations while maintaining computational tractability. As opposed to single pixel InSAR time series techniques, MInTS takes advantage of both spatial and temporal characteristics of the deformation field. We use a weighting scheme which accounts for the presence of localized holes due to decorrelation or unwrapping errors in any given interferogram. We represent time-dependent deformation using a dictionary of general basis functions, capable of detecting both steady and transient processes. The estimation is regularized using a model resolution based smoothing so as to be able to capture rapid deformation where there are temporally dense radar acquisitions and to avoid oscillations during time periods devoid of acquisitions. MInTS also has the flexibility to explicitly parametrize known time-dependent processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). We use cross validation to choose the regularization penalty parameter in the inversion for the time-dependent deformation field. We demonstrate MInTS using a set of 63 ERS-1/2 and 29 Envisat interferograms for Long Valley Caldera.

  19. Multifractal analysis of time series generated by discrete Ito equations

    SciTech Connect

    Telesca, Luciano; Czechowski, Zbigniew; Lovallo, Michele

    2015-06-15

    In this study, we show that discrete Ito equations with short-tail Gaussian marginal distribution function generate multifractal time series. The multifractality is due to the nonlinear correlations, which are hidden in Markov processes and are generated by the interrelation between the drift and the multiplicative stochastic forces in the Ito equation. A link between the range of the generalized Hurst exponents and the mean of the squares of all averaged net forces is suggested.
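    A discrete Ito equation of the kind studied can be simulated with an Euler-Maruyama scheme; the sketch below uses a plain linear drift (an Ornstein-Uhlenbeck process) rather than the paper's drift-plus-multiplicative-noise forms:

```python
import numpy as np

# Euler-Maruyama simulation of dx = -theta * x dt + sigma dW,
# a simple stand-in for the discrete Ito equations of the study.
rng = np.random.default_rng(4)
theta, sigma, dt, n = 1.0, 1.0, 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
noise = rng.standard_normal(n - 1)
for t in range(n - 1):
    x[t + 1] = x[t] - theta * x[t] * dt + sigma * np.sqrt(dt) * noise[t]
print(x.var())   # stationary variance is sigma^2 / (2 * theta) = 0.5
```

The multifractal analysis in the paper would then be run on series like `x`; the simulation step itself is the only part sketched here.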

  20. Identification of neutral biochemical network models from time series data

    PubMed Central

    Vilela, Marco; Vinga, Susana; Maia, Marco A Grivet Mattoso; Voit, Eberhard O; Almeida, Jonas S

    2009-01-01

    Background The major difficulty in modeling biological systems from multivariate time series is the identification of parameter sets that endow a model with dynamical behaviors sufficiently similar to the experimental data. Directly related to this parameter estimation issue is the task of identifying the structure and regulation of ill-characterized systems. Both tasks are simplified if the mathematical model is canonical, i.e., if it is constructed according to strict guidelines. Results In this report, we propose a method for the identification of admissible parameter sets of canonical S-systems from biological time series. The method is based on a Monte Carlo process that is combined with an improved version of our previous parameter optimization algorithm. The method maps the parameter space into the network space, which characterizes the connectivity among components, by creating an ensemble of decoupled S-system models that imitate the dynamical behavior of the time series with sufficient accuracy. The concept of sloppiness is revisited in the context of these S-system models with an exploration not only of different parameter sets that produce similar dynamical behaviors but also different network topologies that yield dynamical similarity. Conclusion The proposed parameter estimation methodology was applied to actual time series data from the glycolytic pathway of the bacterium Lactococcus lactis and led to ensembles of models with different network topologies. In parallel, the parameter optimization algorithm was applied to the same dynamical data upon imposing a pre-specified network topology derived from prior biological knowledge, and the results from both strategies were compared. The results suggest that the proposed method may serve as a powerful exploration tool for testing hypotheses and the design of new experiments. PMID:19416537

  1. Time series prediction using a rational fraction neural network

    SciTech Connect

    Lee, K.; Lee, Y.C.; Barnes, C.; Aldrich, C.H.; Kindel, J.

    1988-01-01

    An efficient neural network based on a rational fraction representation has been trained to perform time series prediction. The network is a generalization of the Volterra-Wiener network while still retaining the computational efficiency of the latter. Because of the second order convergent nature of the learning algorithm, the rational net is computationally far more efficient than multilayer networks. The rational fraction representation is, however, more restrictive than that of multilayer networks.
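    The computational appeal of a rational fraction representation can be illustrated with a first-order rational function fitted by linearized least squares (a toy example, not the Volterra-Wiener-style network of the paper):

```python
import numpy as np

# Fit y = (a0 + a1*x) / (1 + b1*x) by linearizing:
# y * (1 + b1*x) = a0 + a1*x  =>  y = a0 + a1*x - b1*(x*y),
# i.e. an ordinary linear regression of y on [1, x, -x*y].
rng = np.random.default_rng(5)
x = rng.uniform(0.0, 1.0, 200)
a0, a1, b1 = 0.5, 2.0, 0.8
y = (a0 + a1 * x) / (1.0 + b1 * x)
A = np.column_stack([np.ones_like(x), x, -x * y])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # recovers [a0, a1, b1] for noise-free data
```

The linearization is what makes fitting fast; a rational network stacks such forms with learned coefficients.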

  2. Stratospheric ozone time series analysis using dynamical linear models

    NASA Astrophysics Data System (ADS)

    Laine, Marko; Kyrölä, Erkki

    2013-04-01

    We describe a hierarchical statistical state space model for ozone profile time series. The time series are from satellite measurements by the SAGE II and GOMOS instruments spanning the years 1984-2012. The original data sets are combined and gridded monthly using 10 degree latitude bands, covering 20-60 km with 1 km vertical spacing. Model components include level, trend, and seasonal effect, with solar activity and quasi-biennial oscillations as proxy variables. A typical feature of atmospheric time series is that they are not stationary but exhibit both slowly varying and abrupt changes in their distributional properties, caused by external forcing such as changes in solar activity or volcanic eruptions. Further, the data sampling is often nonuniform, there are data gaps, and the uncertainty of the observations can vary. When observations are combined from various sources there will be instrument- and retrieval-method-related biases, and the differences in sampling lead to additional uncertainties. Standard ARIMA-type statistical time series methods are largely unsuitable for such atmospheric data. A more general approach makes use of dynamical linear models and Kalman-filter-type sequential algorithms. These state space models assume a linear relationship between the unknown state of the system and the observations, and a linear evolution of the hidden states; they are still flexible enough to model both smooth trends and sudden changes. The above methodological challenges are discussed, together with an analysis of change points in trends related to the recovery of stratospheric ozone. This work is part of the ESA SPIN and ozone CCI projects.
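    The Kalman-filter machinery behind such dynamical linear models can be sketched with the simplest state-space form, a local level (random-walk-plus-noise) model; the ozone model adds trend, seasonal and proxy components within the same framework:

```python
import numpy as np

def kalman_local_level(y, q, r):
    """Filtered level estimates for y[t] = mu[t] + noise, mu a random walk.

    q, r are the state and observation noise variances.
    """
    mu, P = 0.0, 1e6                  # diffuse initial state
    out = []
    for obs in y:
        P = P + q                     # predict: level variance grows
        if not np.isnan(obs):         # data gaps are skipped naturally
            K = P / (P + r)           # Kalman gain
            mu = mu + K * (obs - mu)  # update toward the observation
            P = (1 - K) * P
        out.append(mu)
    return np.array(out)

rng = np.random.default_rng(6)
level = np.cumsum(0.05 * rng.standard_normal(300)) + 10.0   # true slow drift
y = level + 0.5 * rng.standard_normal(300)                  # noisy observations
est = kalman_local_level(y, q=0.05 ** 2, r=0.5 ** 2)
print(abs(est[-1] - level[-1]))      # filtered level tracks the true level
```

The NaN check shows why state-space filtering handles the nonuniform sampling and gaps described above without any special preprocessing.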

  3. A data-fitting procedure for chaotic time series

    SciTech Connect

    McDonough, J.M.; Mukerji, S.; Chung, S.

    1998-10-01

    In this paper the authors introduce data characterizations for fitting chaotic data to linear combinations of one-dimensional maps (say, of the unit interval) for use in subgrid-scale turbulence models. They test the efficacy of these characterizations on data generated by a chaotically-forced Burgers' equation and demonstrate very satisfactory results in terms of modeled time series, power spectra and delay maps.

  4. An online novel adaptive filter for denoising time series measurements.

    PubMed

    Willis, Andrew J

    2006-04-01

    A nonstationary form of the Wiener filter based on a principal components analysis is described for filtering time series data possibly derived from noisy instrumentation. The theory of the filter is developed, implementation details are presented and two examples are given. The filter operates online, approximating the maximum a posteriori optimal Bayes reconstruction of a signal with arbitrarily distributed and nonstationary statistics. PMID:16649562
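    A batch sketch of principal-components denoising in a similar spirit (basic singular spectrum analysis: embed the series in a trajectory matrix, truncate the SVD, reconstruct by anti-diagonal averaging); the paper's filter is an online Wiener/PCA formulation, which this does not reproduce:

```python
import numpy as np

def pca_denoise(x, window, rank):
    """Denoise a series by truncated SVD of its trajectory (Hankel) matrix."""
    n = len(x)
    H = np.column_stack([x[i:i + window] for i in range(n - window + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-truncated reconstruction
    out = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(Hr.shape[1]):                # average over anti-diagonals
        out[j:j + window] += Hr[:, j]
        cnt[j:j + window] += 1
    return out / cnt

rng = np.random.default_rng(7)
t = np.arange(1000)
clean = np.sin(2 * np.pi * t / 50)
noisy = clean + 0.5 * rng.standard_normal(1000)
den = pca_denoise(noisy, window=100, rank=2)    # a sinusoid needs 2 components
print(np.mean((den - clean) ** 2), np.mean((noisy - clean) ** 2))
```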

  5. One nanosecond time synchronization using SERIES and GPS

    NASA Technical Reports Server (NTRS)

    Buennagel, A. A.; Spitzmesser, D. J.; Young, L. E.

    1983-01-01

    Subnanosecond time synchronization between two remote rubidium frequency standards is verified by a traveling clock comparison. Using a novel, code-ignorant Global Positioning System (GPS) receiver developed at JPL, the SERIES geodetic baseline measurement system is applied to establish the offset between the 1 Hz outputs of the remote standards. Results of the two intercomparison experiments to date are presented as well as experimental details.

  6. Time series regression model for infectious disease and weather.

    PubMed

    Imai, Chisato; Armstrong, Ben; Chalabi, Zaid; Mangtani, Punam; Hashizume, Masahiro

    2015-10-01

    Time series regression has been developed and long used to evaluate the short-term associations of air pollution and weather with mortality or morbidity of non-infectious diseases. The application of the regression approaches from this tradition to infectious diseases, however, is less well explored and raises some new issues. We discuss and present potential solutions for five issues often arising in such analyses: changes in immune population, strong autocorrelations, a wide range of plausible lag structures and association patterns, seasonality adjustments, and large overdispersion. The potential approaches are illustrated with datasets of cholera cases and rainfall from Bangladesh and influenza and temperature in Tokyo. Though this article focuses on the application of the traditional time series regression to infectious diseases and weather factors, we also briefly introduce alternative approaches, including mathematical modeling, wavelet analysis, and autoregressive integrated moving average (ARIMA) models. Modifications proposed to standard time series regression practice include using sums of past cases as proxies for the immune population, and using the logarithm of lagged disease counts to control autocorrelation due to true contagion, both of which are motivated from "susceptible-infectious-recovered" (SIR) models. The complexity of lag structures and association patterns can often be informed by biological mechanisms and explored by using distributed lag non-linear models. For overdispersed models, alternative distribution models such as quasi-Poisson and negative binomial should be considered. Time series regression can be used to investigate dependence of infectious diseases on weather, but may need modifying to allow for features specific to this context. PMID:26188633
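    The proposed modification, regressing on the log of lagged counts plus a weather term, can be sketched on hypothetical simulated data (a real analysis would use quasi-Poisson or negative binomial GLMs with distributed lag non-linear terms):

```python
import numpy as np

# Hypothetical weekly data: counts driven by their own lag (contagion) and a
# temperature proxy, mimicking the SIR-motivated autocorrelation control.
rng = np.random.default_rng(8)
n = 400
temp = 20 + 5 * np.sin(2 * np.pi * np.arange(n) / 52)   # seasonal temperature
y = np.empty(n)
y[0] = 10
for t in range(1, n):
    mu = np.exp(0.5 + 0.6 * np.log(y[t - 1] + 1) + 0.02 * temp[t])
    y[t] = rng.poisson(mu)

# Log-scale regression with the lagged log-count term as autocorrelation control
Y = np.log(y[1:] + 1)
X = np.column_stack([np.ones(n - 1), np.log(y[:-1] + 1), temp[1:]])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta)   # lag and temperature coefficients are both positive here
```

The variable names, effect sizes and weekly structure are all assumptions for illustration, not values from the article's cholera or influenza datasets.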

  7. New Comprehensive System to Construct Speleothem Fabrics Time Series

    NASA Astrophysics Data System (ADS)

    Frisia, S.; Borsato, A.

    2014-12-01

    Speleothem fabrics record processes that influence the way geochemical proxy data are encoded in speleothems, yet, there has been little advance in the use of fabrics as a complement to palaeo-proxy datasets since the fabric classification proposed by us in 2010. The systematic use of fabrics documentation in speleothem science has been limited by the absence of a comprehensive, numerical system that would allow constructing fabric time series comparable with the widely used geochemical time series. Documentation of speleothem fabrics is fundamental for a robust interpretation of speleothem time series where stable isotopes and trace elements are used as proxy, because fabrics highlight depositional as well as post-depositional processes whose understanding complements reconstructions based on geochemistry. Here we propose a logic system allowing transformation of microscope observations into numbers tied to acronyms that specify each fabric type and subtype. The rationale for ascribing progressive numbers to fabrics is based on the most up-to-date growth models. In this conceptual framework, the progression reflects hydrological conditions, bio-mediation and diagenesis. The lowest numbers are given to calcite fabrics formed at relatively constant drip rates: the columnar types (compact and open). Higher numbers are ascribed to columnar fabrics characterized by presence of impurities that cause elongation or lattice distortion (Elongated, Fascicular Optic and Radiaxial calcites). The sequence progresses with the dendritic fabrics, followed by micrite (M), which has been observed in association with microbial films. Microsparite (Ms) and mosaic calcite (Mc) have the highest numbers, being considered as diagenetic. Acronyms and suffixes are intended to become universally acknowledged. Thus, fabrics can be plotted vs. age to yield time series, where numbers are replaced by the acronyms. This will result in a visual representation of climate- or environmental

  8. Mode Analysis with Autocorrelation Method (Single Time Series) in Tokamak

    NASA Astrophysics Data System (ADS)

    Saadat, Shervin; Salem, Mohammad K.; Goranneviss, Mahmoud; Khorshid, Pejman

    2010-08-01

    In this paper, plasma modes are analyzed with a statistical method based on the autocorrelation function. The autocorrelation function is computed from a single time series, so only one Mirnov coil is needed. After autocorrelation analysis of the Mirnov coil data, the spectral density diagram is plotted; from its symmetries and trends the plasma modes can be analyzed. The effects of RHF fields are investigated with this method in the IR-T1 tokamak, and the results correspond with those of multichannel methods such as SVD and FFT.
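    The single-signal idea, reading the dominant mode off the spectral density of one coil's autocorrelation (via the Wiener-Khinchin relation), can be sketched on a synthetic noisy oscillation:

```python
import numpy as np

# One noisy oscillating signal stands in for a single Mirnov coil trace.
rng = np.random.default_rng(9)
n, f0 = 2048, 0.1                          # sampling rate normalized to 1
x = np.sin(2 * np.pi * f0 * np.arange(n)) + 0.5 * rng.standard_normal(n)
x = x - x.mean()

# Wiener-Khinchin: the spectral density is the Fourier transform of the ACF.
acf = np.correlate(x, x, mode="full")[n - 1:] / n   # biased autocorrelation
psd = np.abs(np.fft.rfft(acf))
freqs = np.fft.rfftfreq(len(acf))
fpeak = freqs[np.argmax(psd)]
print(fpeak)                               # peak near the mode frequency 0.1
```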

  9. Geodetic Time Series: An Overview of UNAVCO Community Resources and Examples of Time Series Analysis Using GPS and Strainmeter Data

    NASA Astrophysics Data System (ADS)

    Phillips, D. A.; Meertens, C. M.; Hodgkinson, K. M.; Puskas, C. M.; Boler, F. M.; Snett, L.; Mattioli, G. S.

    2013-12-01

    We present an overview of time series data, tools and services available from UNAVCO along with two specific and compelling examples of geodetic time series analysis. UNAVCO provides a diverse suite of geodetic data products and cyberinfrastructure services to support community research and education. The UNAVCO archive includes data from 2500+ continuous GPS stations, borehole geophysics instruments (strainmeters, seismometers, tiltmeters, pore pressure sensors), and long baseline laser strainmeters. These data span temporal scales from seconds to decades and provide global spatial coverage with regionally focused networks including the EarthScope Plate Boundary Observatory (PBO) and COCONet. This rich, open access dataset is a tremendous resource that enables the exploration, identification and analysis of time varying signals associated with crustal deformation, reference frame determinations, isostatic adjustments, atmospheric phenomena, hydrologic processes and more. UNAVCO provides a suite of time series exploration and analysis resources including static plots, dynamic plotting tools, and data products and services designed to enhance time series analysis. The PBO GPS network allows for identification of ~1 mm level deformation signals. At some GPS stations seasonal signals and longer-term trends in both the vertical and horizontal components can be dominated by effects of hydrological loading from natural and anthropogenic sources. Modeling of hydrologic deformation using GLDAS and a variety of land surface models (NOAH, MOSAIC, VIC and CLM) shows promise for independently modeling hydrologic effects and separating them from tectonic deformation as well as anthropogenic loading sources. A major challenge is to identify where loading is dominant and corrections from GLDAS can be applied, and where pumping is the dominant signal and corrections are not possible without some other data.
In another arena, the PBO strainmeter network was designed to capture small short

  10. Data visualization in interactive maps and time series

    NASA Astrophysics Data System (ADS)

    Maigne, Vanessa; Evano, Pascal; Brockmann, Patrick; Peylin, Philippe; Ciais, Philippe

    2014-05-01

    State-of-the-art data visualization has little in common with the plots and maps we used only a few years ago. Many open-source tools are now available to provide access to scientific data and implement accessible, interactive, and flexible web applications. Here we will present a web site opened in November 2013 to create custom global and regional maps and time series from research models and datasets. For maps, we explore and access data sources from a THREDDS Data Server (TDS) with the OGC WMS protocol (using the ncWMS implementation), then create interactive maps with the OpenLayers javascript library and extra information layers from a GeoServer. Maps become dynamic, zoomable, synchronously connected to each other, and exportable to Google Earth. For time series, we extract data from a TDS with the Netcdf Subset Service (NCSS), then display interactive graphs with a custom library based on the Data Driven Documents javascript library (D3.js). This time series application provides dynamic functionalities such as interpolation, interactive zoom on different axes, display of point values, and export to different formats. These tools were implemented for the Global Carbon Atlas (http://www.globalcarbonatlas.org): a web portal to explore, visualize, and interpret global and regional carbon fluxes from various model simulations arising from both human activities and natural processes, a work led by the Global Carbon Project.

  11. Characterization of aggressive prostate cancer using ultrasound RF time series

    NASA Astrophysics Data System (ADS)

    Khojaste, Amir; Imani, Farhad; Moradi, Mehdi; Berman, David; Siemens, D. Robert; Sauerberi, Eric E.; Boag, Alexander H.; Abolmaesumi, Purang; Mousavi, Parvin

    2015-03-01

    Prostate cancer is the most frequently diagnosed cancer and the second leading cause of cancer-related death in North American men. Several approaches have been proposed to augment detection of prostate cancer using different imaging modalities. Due to the advantages of ultrasound imaging, these approaches have been the subject of several recent studies. This paper presents the results of a feasibility study on differentiating between lower and higher grade prostate cancer using ultrasound RF time series data. We also propose new spectral features of RF time series to highlight aggressive prostate cancer in small ROIs of size 1 mm × 1 mm in a cohort of 19 ex vivo specimens of human prostate tissue. In a leave-one-patient-out cross-validation strategy, an area under the accumulated ROC curve of 0.8 has been achieved, with overall sensitivity and specificity of 81% and 80%, respectively. The current method shows promising results for differentiating between lower and higher grades of prostate cancer using ultrasound RF time series.

  12. Time series analysis for psychological research: examining and forecasting change

    PubMed Central

    Jebb, Andrew T.; Tay, Louis; Wang, Wei; Huang, Qiming

    2015-01-01

    Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials. PMID:26106341
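    The forecasting idea surveyed here can be sketched with a toy AR(1) model fit by least squares (the tutorial itself is in R and uses real job-search data; the series below is simulated):

```python
import numpy as np

# Simulate an AR(1) "pattern of change": y_t = phi * y_{t-1} + noise.
rng = np.random.default_rng(10)
n, phi = 300, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Least-squares AR(1) estimate of the autoregressive coefficient
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

# h-step forecasts decay geometrically toward the series mean (zero here)
forecasts = [phi_hat ** h * y[-1] for h in range(1, 11)]
print(phi_hat, forecasts[0], forecasts[-1])
```

The geometric decay of the forecasts illustrates why, for stationary processes, long-horizon predictions revert to the mean.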

  13. Time series analysis for psychological research: examining and forecasting change.

    PubMed

    Jebb, Andrew T; Tay, Louis; Wang, Wei; Huang, Qiming

    2015-01-01

    Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials. PMID:26106341

  14. Hydroxyl time series and recirculation in turbulent nonpremixed swirling flames

    SciTech Connect

    Guttenfelder, Walter A.; Laurendeau, Normand M.; Ji, Jun; King, Galen B.; Gore, Jay P.; Renfro, Michael W.

    2006-10-15

    Time-series measurements of OH, as related to accompanying flow structures, are reported using picosecond time-resolved laser-induced fluorescence (PITLIF) and particle-imaging velocimetry (PIV) for turbulent, swirling, nonpremixed methane-air flames. The [OH] data portray a primary reaction zone surrounding the internal recirculation zone, with residual OH in the recirculation zone approaching chemical equilibrium. Modeling of the OH electronic quenching environment, when compared to fluorescence lifetime measurements, offers additional evidence that the reaction zone burns as a partially premixed flame. A time-series analysis affirms the presence of thin flamelet-like regions based on the relation between swirl-induced turbulence and fluctuations of [OH] in the reaction and recirculation zones. The OH integral time-scales are found to correspond qualitatively to local mean velocities. Furthermore, quantitative dependencies can be established with respect to axial position, Reynolds number, and global equivalence ratio. Given these relationships, the OH time-scales, and thus the primary reaction zone, appear to be dominated by convection-driven fluctuations. Surprisingly, the OH time-scales for these nominally swirling flames demonstrate significant similarities to previous PITLIF results in nonpremixed jet flames.
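    An integral time-scale of the kind reported can be estimated by summing the autocorrelation function up to its first zero crossing; the sketch below uses AR(1) data in place of the [OH] fluctuation series:

```python
import numpy as np

def integral_timescale(x):
    """Sum the normalized ACF up to its first zero crossing (in samples)."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    k = np.argmax(acf < 0) if np.any(acf < 0) else len(acf)
    return acf[:k].sum() - 0.5   # trapezoid-style correction at lag 0

rng = np.random.default_rng(11)
phi, n = 0.9, 10_000
x = np.zeros(n)
for t in range(1, n):             # AR(1) surrogate for a fluctuating signal
    x[t] = phi * x[t - 1] + rng.standard_normal()
T = integral_timescale(x)
print(T)   # theory for AR(1): 1/(1-phi) - 1/2 = 9.5 samples
```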

  15. A method for generating high resolution satellite image time series

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-10-01

    There is an increasing demand for satellite remote sensing data with both high spatial and temporal resolution in many applications. But it still is a challenge to simultaneously improve spatial resolution and temporal frequency due to the technical limits of current satellite observation systems. To this end, much R&D efforts have been ongoing for years and lead to some successes roughly in two aspects, one includes super resolution, pan-sharpen etc. methods which can effectively enhance the spatial resolution and generate good visual effects, but hardly preserve spectral signatures and result in inadequate analytical value, on the other hand, time interpolation is a straight forward method to increase temporal frequency, however it increase little informative contents in fact. In this paper we presented a novel method to simulate high resolution time series data by combing low resolution time series data and a very small number of high resolution data only. Our method starts with a pair of high and low resolution data set, and then a spatial registration is done by introducing LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of low resolution time series data, and then projected onto the high resolution data plane and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground objects. Finally the simulated high resolution data is generated. A preliminary experiment shows that our method can simulate a high resolution data with a reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of time sequence of low resolution images only, and usage of costly high resolution data can be reduces as much as possible, and it presents a highly effective way to build up an economically operational monitoring solution for agriculture, forest, land use investigation

  16. Estimating the Lyapunov spectrum of time delay feedback systems from scalar time series

    NASA Astrophysics Data System (ADS)

    Hegger, Rainer

    1999-08-01

    On the basis of a recently developed method for modeling time delay systems, we propose a procedure to estimate the spectrum of Lyapunov exponents from a scalar time series. It turns out that the spectrum is approximated very well and allows for good estimates of the Lyapunov dimension even if the sampling rate of the time series is so low that the infinite dimensional tangent space is spanned quite sparsely.
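
The delay-system method above is specialized, but the core quantity is easy to illustrate: for a one-dimensional map with a known derivative, the largest Lyapunov exponent is the orbit average of log |f'(x)|. A minimal sketch for the logistic map (an illustrative example of Lyapunov-exponent estimation, not the paper's delay-system algorithm):

```python
import math

def lyapunov_logistic(r, x0=0.1, n_transient=100, n_iter=50000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| with f'(x) = r*(1-2x)."""
    x = x0
    for _ in range(n_transient):              # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n_iter

lam = lyapunov_logistic(4.0)   # for r = 4 the exact value is ln 2 ~ 0.693
```

For time series from an unknown system one would instead reconstruct the tangent dynamics from the data, which is exactly the harder problem the abstract addresses.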

  17. Estimating the Lyapunov spectrum of time delay feedback systems from scalar time series.

    PubMed

    Hegger, R

    1999-08-01

    On the basis of a recently developed method for modeling time delay systems, we propose a procedure to estimate the spectrum of Lyapunov exponents from a scalar time series. It turns out that the spectrum is approximated very well and allows for good estimates of the Lyapunov dimension even if the sampling rate of the time series is so low that the infinite dimensional tangent space is spanned quite sparsely. PMID:11969918

  18. Time Series Models for Salinity and Other Environmental Factors in the Apalachicola Estuarine System

    NASA Astrophysics Data System (ADS)

    Niu, X.-F.; Edmiston, H. L.; Bailey, G. O.

    1998-04-01

    In this study, statistical structures of several key environmental variables in the Apalachicola Bay area are investigated based on daily averages over a 45-month sampling period. Univariate time series models are established for Apalachicola River discharge, local rainfall, transformations of wind speed and wind direction, Apalachicola Bay salinity and water-level fluctuations measured at two stations. Rational form transfer function models, which take into account time-lagged effects of environmental variables and serial correlations of the salinity series, are developed to describe the relationship between salinity and other factors. Compared with time-lagged multiple regression models, transfer function models explain a much larger proportion of salinity variability and give more accurate estimates of the environmental effects. Copyright 1998 Academic Press Limited.

  19. Robust, automatic GPS station velocities and velocity time series

    NASA Astrophysics Data System (ADS)

    Blewitt, G.; Kreemer, C.; Hammond, W. C.

    2014-12-01

    Automation in GPS coordinate time series analysis makes results more objective and reproducible, but not necessarily as robust as the human eye to detect problems. Moreover, it is not a realistic option to manually scan our current load of >20,000 time series per day. This motivates us to find an automatic way to estimate station velocities that is robust to outliers, discontinuities, seasonality, and noise characteristics (e.g., heteroscedasticity). Here we present a non-parametric method based on the Theil-Sen estimator, defined as the median of velocities vij=(xj-xi)/(tj-ti) computed between all pairs (i, j). Theil-Sen estimators produce statistically identical solutions to ordinary least squares for normally distributed data, but they can tolerate up to 29% of data being problematic. To mitigate seasonality, our proposed estimator only uses pairs approximately separated by an integer number of years, (N-δt) < (tj-ti) < (N+δt), where δt is chosen to be small enough to capture seasonality, yet large enough to reduce random error. We fix N=1 to maximally protect against discontinuities. In addition to estimating an overall velocity, we also use these pairs to estimate velocity time series. To test our methods, we process real data sets that have already been used with velocities published in the NA12 reference frame. Accuracy can be tested by the scatter of horizontal velocities in the North American plate interior, which is known to be stable to ~0.3 mm/yr. This presents new opportunities for time series interpretation. For example, the pattern of velocity variations at the interannual scale can help separate tectonic from hydrological processes. Without any step detection, velocity estimates prove to be robust for stations affected by the Mw7.2 2010 El Mayor-Cucapah earthquake, and velocity time series show a clear change after the earthquake, without any of the usual parametric constraints, such as relaxation of postseismic velocities to their preseismic values.
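
The pair-selection idea is easy to sketch: restricting pairs to a separation of roughly one year makes any annual signal cancel in each pairwise slope before the median is taken. A simplified illustration (times in years, N = 1; the function name and tolerance are illustrative choices, not the authors' code):

```python
import numpy as np

def seasonal_pair_median_velocity(t, x, dt_tol=2 / 52):
    """Median slope over pairs separated by roughly one year (t in years).

    Pairs with (1 - dt_tol) < tj - ti < (1 + dt_tol) are kept, so annual
    seasonal terms largely cancel in each pairwise velocity.
    """
    t = np.asarray(t, float)
    x = np.asarray(x, float)
    dt = t[None, :] - t[:, None]          # all pairwise time separations
    dx = x[None, :] - x[:, None]
    mask = np.abs(dt - 1.0) < dt_tol      # keep pairs ~1 year apart
    return np.median(dx[mask] / dt[mask])

# weekly samples over 3 years: trend of 2 mm/yr plus an annual cycle
t = np.arange(0, 3, 1 / 52)
x = 2.0 * t + 0.5 * np.sin(2 * np.pi * t)
v = seasonal_pair_median_velocity(t, x)   # ~2.0; the annual term cancels
```

The same pairwise slopes, time-tagged by their midpoints, give the velocity time series mentioned in the abstract.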

  20. Detecting hidden nodes in complex networks from time series.

    PubMed

    Su, Ri-Qi; Wang, Wen-Xu; Lai, Ying-Cheng

    2012-06-01

    We develop a general method to detect hidden nodes in complex networks, using only time series from nodes that are accessible to external observation. Our method is based on compressive sensing and we formulate a general framework encompassing continuous- and discrete-time and the evolutionary-game type of dynamical systems as well. For concrete demonstration, we present an example of detecting hidden nodes from an experimental social network. Our paradigm for detecting hidden nodes is expected to find applications in a variety of fields where identifying hidden or black-boxed objects based on a limited amount of data is of interest. PMID:23005153

  1. Studies in Astronomical Time Series Analysis. VI. Bayesian Block Representations

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James

    2013-01-01

    This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations, at the same time suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it-an improved and generalized version of Bayesian Blocks [Scargle 1998]-that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode, or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by [Arias-Castro, Donoho and Huo 2003]. In the spirit of Reproducible Research [Donoho et al. (2008)] all of the code and data necessary to reproduce all of the figures in this paper are included as auxiliary material.

  2. STUDIES IN ASTRONOMICAL TIME SERIES ANALYSIS. VI. BAYESIAN BLOCK REPRESENTATIONS

    SciTech Connect

    Scargle, Jeffrey D.; Norris, Jay P.; Jackson, Brad; Chiang, James

    2013-02-20

    This paper addresses the problem of detecting and characterizing local variability in time series and other forms of sequential data. The goal is to identify and characterize statistically significant variations, at the same time suppressing the inevitable corrupting observational errors. We present a simple nonparametric modeling technique and an algorithm implementing it-an improved and generalized version of Bayesian Blocks-that finds the optimal segmentation of the data in the observation interval. The structure of the algorithm allows it to be used in either a real-time trigger mode, or a retrospective mode. Maximum likelihood or marginal posterior functions to measure model fitness are presented for events, binned counts, and measurements at arbitrary times with known error distributions. Problems addressed include those connected with data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multivariate time series data, analysis of variance, data on the circle, other data modes, and dispersed data. Simulations provide evidence that the detection efficiency for weak signals is close to a theoretical asymptotic limit derived by Arias-Castro et al. In the spirit of Reproducible Research all of the code and data necessary to reproduce all of the figures in this paper are included as supplementary material.
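
The segmentation itself is a compact dynamic program. A minimal sketch for unit-width Poisson-binned counts, in the style of the algorithm described above (the per-change-point prior penalty ncp_prior = 4 is an illustrative choice, not the paper's calibrated value):

```python
import numpy as np

def bayesian_blocks_binned(counts, ncp_prior=4.0):
    """Optimal piecewise-constant segmentation of unit-width Poisson bins.

    Dynamic program: best[r] is the fitness of the optimal partition of
    bins 0..r, and last[r] is the start index of that partition's final
    block. Block fitness is the Poisson log-likelihood N*log(N/T).
    """
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    best = np.zeros(n)
    last = np.zeros(n, dtype=int)
    for r in range(n):
        widths = np.arange(r + 1, 0, -1, dtype=float)   # length of block k..r
        nk = np.cumsum(counts[:r + 1][::-1])[::-1]      # counts in block k..r
        fit = np.where(nk > 0, nk * np.log(np.maximum(nk, 1e-300) / widths), 0.0)
        gain = fit - ncp_prior
        gain[1:] += best[:r]                 # add best partition of bins < k
        last[r] = int(np.argmax(gain))
        best[r] = gain[last[r]]
    starts, i = [], n                        # backtrack the block starts
    while i > 0:
        i = int(last[i - 1])
        starts.append(i)
    return sorted(starts)

# two constant-rate segments: expect a single change point at bin 50
cps = bayesian_blocks_binned([1] * 50 + [10] * 50)   # [0, 50]
```

The real algorithm also handles event data, variable exposure, and measurement errors; this sketch covers only the binned-count fitness.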

  3. Nonlinear time-series-based adaptive control applications

    NASA Technical Reports Server (NTRS)

    Mohler, R. R.; Rajkumar, V.; Zakrzewski, R. R.

    1991-01-01

    A control design methodology based on a nonlinear time-series reference model is presented. It is indicated by highly nonlinear simulations that such designs successfully stabilize troublesome aircraft maneuvers undergoing large changes in angle of attack as well as large electric power transients due to line faults. In both applications, the nonlinear controller was significantly better than the corresponding linear adaptive controller. For the electric power network, a flexible AC transmission system with series capacitor power feedback control is studied. A bilinear autoregressive moving average reference model is identified from system data, and the feedback control is manipulated according to a desired reference state. The control is optimized according to a predictive one-step quadratic performance index. A similar algorithm is derived for control of rapid changes in aircraft angle of attack over a normally unstable flight regime. In the latter case, however, a generalization of a bilinear time-series model reference includes quadratic and cubic terms in angle of attack.

  4. Unraveling the cause-effect relation between time series

    NASA Astrophysics Data System (ADS)

    Liang, X. San

    2014-11-01

    Given two time series, can one faithfully tell, in a rigorous and quantitative way, the cause and effect between them? Based on a recently rigorized physical notion, namely, information flow, we solve an inverse problem and give this important and challenging question, which is of interest in a wide variety of disciplines, a positive answer. Here causality is measured by the time rate of information flowing from one series to the other. The resulting formula is tight in form, involving only commonly used statistics, namely, sample covariances; an immediate corollary is that causation implies correlation, but correlation does not imply causation. It has been validated with touchstone linear and nonlinear series, purportedly generated with one-way causality that evades the traditional approaches. It has also been applied successfully to the investigation of real-world problems; an example presented here is the cause-and-effect relation between the two climate modes, El Niño and the Indian Ocean Dipole (IOD), which have been linked to hazards in far-flung regions of the globe. In general, the two modes are mutually causal, but the causality is asymmetric: El Niño tends to stabilize IOD, while IOD functions to make El Niño more uncertain. To El Niño, the information flowing from IOD manifests itself as a propagation of uncertainty from the Indian Ocean.
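
For readers who want to experiment, the covariance-based estimator can be sketched in a few lines. The formula below is one reading of Liang's linear estimator under a unit time step, with the tendency of the target series approximated by a forward difference; treat it as illustrative rather than a faithful reproduction of the paper:

```python
import numpy as np

def liang_info_flow(x1, x2):
    """Estimate the rate of information flow T_{2->1} (series 2 to series 1).

    Sample-covariance formula (assumed form, after Liang 2014):
    T_{2->1} = (C11*C12*C_{2,d1} - C12^2*C_{1,d1}) / (C11^2*C22 - C11*C12^2),
    where C_{j,d1} is the covariance of series j with the forward
    difference of series 1.
    """
    x1 = np.asarray(x1, float)
    x2 = np.asarray(x2, float)
    dx1 = x1[1:] - x1[:-1]                 # tendency of series 1
    x1, x2 = x1[:-1], x2[:-1]
    cov = np.cov(np.vstack([x1, x2]))
    c11, c12, c22 = cov[0, 0], cov[0, 1], cov[1, 1]
    c1d = np.cov(x1, dx1)[0, 1]
    c2d = np.cov(x2, dx1)[0, 1]
    return (c11 * c12 * c2d - c12 ** 2 * c1d) / (c11 ** 2 * c22 - c11 * c12 ** 2)

# one-way coupled test system: x2 drives x1, but not vice versa
rng = np.random.default_rng(0)
n = 20000
x1, x2 = np.zeros(n), np.zeros(n)
for k in range(n - 1):
    x2[k + 1] = 0.7 * x2[k] + rng.standard_normal()
    x1[k + 1] = 0.5 * x1[k] + 0.4 * x2[k] + rng.standard_normal()
t21 = liang_info_flow(x1, x2)   # expect a clearly nonzero flow 2 -> 1
t12 = liang_info_flow(x2, x1)   # expect a flow near zero 1 -> 2
```

The asymmetry |T_{2->1}| > |T_{1->2}| is what distinguishes this measure from plain correlation, which is symmetric by construction.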

  5. Time-series animation techniques for visualizing urban growth

    USGS Publications Warehouse

    Acevedo, W.; Masuoka, P.

    1997-01-01

    Time-series animation is a visually intuitive way to display urban growth. Animations of land-use change for the Baltimore-Washington region were generated by showing a series of images one after the other in sequential order. Before creating an animation, various issues which will affect the appearance of the animation should be considered, including the number of original data frames to use, the optimal animation display speed, the number of intermediate frames to create between the known frames, and the output media on which the animations will be displayed. To create new frames between the known years of data, the change in each theme (i.e. urban development, water bodies, transportation routes) must be characterized and an algorithm developed to create the in-between frames. Example time-series animations were created using a temporal GIS database of the Baltimore-Washington area. Creating the animations involved generating raster images of the urban development, water bodies, and principal transportation routes; overlaying the raster images on a background image; and importing the frames to a movie file. Three-dimensional perspective animations were created by draping each image over digital elevation data prior to importing the frames to a movie file. © 1997 Elsevier Science Ltd.

  6. Unraveling the cause-effect relation between time series.

    PubMed

    Liang, X San

    2014-11-01

    Given two time series, can one faithfully tell, in a rigorous and quantitative way, the cause and effect between them? Based on a recently rigorized physical notion, namely, information flow, we solve an inverse problem and give this important and challenging question, which is of interest in a wide variety of disciplines, a positive answer. Here causality is measured by the time rate of information flowing from one series to the other. The resulting formula is tight in form, involving only commonly used statistics, namely, sample covariances; an immediate corollary is that causation implies correlation, but correlation does not imply causation. It has been validated with touchstone linear and nonlinear series, purportedly generated with one-way causality that evades the traditional approaches. It has also been applied successfully to the investigation of real-world problems; an example presented here is the cause-and-effect relation between the two climate modes, El Niño and the Indian Ocean Dipole (IOD), which have been linked to hazards in far-flung regions of the globe. In general, the two modes are mutually causal, but the causality is asymmetric: El Niño tends to stabilize IOD, while IOD functions to make El Niño more uncertain. To El Niño, the information flowing from IOD manifests itself as a propagation of uncertainty from the Indian Ocean. PMID:25493782

  7. Deriving crop calendar using NDVI time-series

    NASA Astrophysics Data System (ADS)

    Patel, J. H.; Oza, M. P.

    2014-11-01

    Agricultural intensification is defined in terms of cropping intensity, i.e. the number of crops (single, double or triple) grown per year in a unit of cropland. Information about the crop calendar (i.e. the number of crops in a parcel of land, their planting and harvesting dates, and the date of the peak vegetative stage) is essential for proper management of agriculture. Remote sensing sensors provide regular, consistent and reliable measurements of vegetation response at the various growth stages of a crop, and are therefore ideally suited for monitoring. The spectral response of vegetation, as measured by the Normalized Difference Vegetation Index (NDVI) and its profiles, can provide a new dimension for describing the vegetation growth cycle. Analysis of NDVI values at regular time intervals provides useful information about crop growth stages and the performance of a crop in a season. However, the NDVI data series contains a considerable amount of local fluctuation in the time domain and needs to be smoothed so that the dominant seasonal behaviour is enhanced. Based on temporal analysis of the smoothed NDVI series, it is possible to extract the number of crop cycles per year and their crop calendar. In the present study, a methodology is developed to extract the key elements of the crop growth cycle (i.e. the number of crops per year and their planting, peak and harvesting dates). This is illustrated by analysing the MODIS-NDVI data series of one agricultural year (June 2012 to May 2013) over Gujarat. Such an analysis is very useful for analysing the dynamics of kharif and rabi crops.
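
The smooth-then-count-peaks pipeline described above can be sketched simply (the moving-average smoother, window length, and NDVI threshold here are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

def crop_cycles(ndvi, window=5, threshold=0.4):
    """Locate crop-cycle peaks in an NDVI time series.

    Smooths with a centered moving average, then flags local maxima that
    exceed a vegetation threshold; each surviving peak is taken as the
    peak vegetative stage of one crop cycle.
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(ndvi, kernel, mode="same")
    return [i for i in range(1, len(smooth) - 1)
            if smooth[i - 1] < smooth[i] >= smooth[i + 1]
            and smooth[i] > threshold]

# synthetic double-cropping year: two green-up/senescence bumps
t = np.arange(0, 365, 8.0)                        # 8-day composites
ndvi = (0.2 + 0.5 * np.exp(-((t - 100) / 30) ** 2)
            + 0.5 * np.exp(-((t - 280) / 30) ** 2))
peaks = crop_cycles(ndvi)                         # two peaks -> two crops
```

Planting and harvesting dates would then be read off as the green-up and senescence crossings on either side of each detected peak.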

  8. A Physiological Time Series Dynamics-Based Approach to Patient Monitoring and Outcome Prediction

    PubMed Central

    Lehman, Li-Wei H.; Adams, Ryan P.; Mayaud, Louis; Moody, George B.; Malhotra, Atul; Mark, Roger G.; Nemati, Shamim

    2015-01-01

    Cardiovascular variables such as heart rate (HR) and blood pressure (BP) are regulated by an underlying control system, and therefore, the time series of these vital signs exhibit rich dynamical patterns of interaction in response to external perturbations (e.g., drug administration), as well as pathological states (e.g., onset of sepsis and hypotension). A question of interest is whether “similar” dynamical patterns can be identified across a heterogeneous patient cohort, and be used for prognosis of patients’ health and progress. In this paper, we used a switching vector autoregressive framework to systematically learn and identify a collection of vital sign time series dynamics, which are possibly recurrent within the same patient and may be shared across the entire cohort. We show that these dynamical behaviors can be used to characterize the physiological “state” of a patient. We validate our technique using simulated time series of the cardiovascular system, and human recordings of HR and BP time series from an orthostatic stress study with known postural states. Using the HR and BP dynamics of an intensive care unit (ICU) cohort of over 450 patients from the MIMIC II database, we demonstrate that the discovered cardiovascular dynamics are significantly associated with hospital mortality (dynamic modes 3 and 9, p = 0.001, p = 0.006 from logistic regression after adjusting for the APACHE scores). Combining the dynamics of BP time series and SAPS-I or APACHE-III provided a more accurate assessment of patient survival/mortality in the hospital than using SAPS-I and APACHE-III alone (p = 0.005 and p = 0.045). Our results suggest that the discovered dynamics of vital sign time series may contain additional prognostic value beyond that of the baseline acuity measures, and can potentially be used as an independent predictor of outcomes in the ICU. PMID:25014976

  9. Measuring frequency domain granger causality for multiple blocks of interacting time series.

    PubMed

    Faes, Luca; Nollo, Giandomenico

    2013-04-01

    In the past years, several frequency-domain causality measures based on vector autoregressive time series modeling have been suggested to assess directional connectivity in neural systems. The most followed approaches are based on representing the considered set of multiple time series as a realization of two or three vector-valued processes, yielding the so-called Geweke linear feedback measures, or as a realization of multiple scalar-valued processes, yielding popular measures like the directed coherence (DC) and the partial DC (PDC). In the present study, these two approaches are unified and generalized by proposing novel frequency-domain causality measures which extend the existing measures to the analysis of multiple blocks of time series. Specifically, the block DC (bDC) and block PDC (bPDC) extend DC and PDC to vector-valued processes, while their logarithmic counterparts, denoted as multivariate total feedback [Formula: see text] and direct feedback [Formula: see text], represent Geweke's measures in a full multivariate framework. Theoretical analysis of the proposed measures shows that they: (i) possess desirable properties of causality measures; (ii) are able to reflect either direct causality (bPDC, [Formula: see text]) or total (direct + indirect) causality (bDC, [Formula: see text]) between time series blocks; (iii) reduce to the DC and PDC measures for scalar-valued processes, and to Geweke's measures for pairs of processes; (iv) are able to capture internal dependencies between the scalar constituents of the analyzed vector processes. Numerical analysis showed that the proposed measures can be efficiently estimated from short time series, allow representing, in an objective and compact way, the information derived from the causal analysis of several pairs of time series, and may detect frequency domain causality more accurately than existing measures. The proposed measures find their natural application in the evaluation of directional

  10. Time-Series Analysis of Supergranule Characteristics at Solar Minimum

    NASA Technical Reports Server (NTRS)

    Williams, Peter E.; Pesnell, W. Dean

    2013-01-01

    Sixty days of Doppler images from the Solar and Heliospheric Observatory (SOHO) / Michelson Doppler Imager (MDI) investigation during the 1996 and 2008 solar minima have been analyzed to show that certain supergranule characteristics (size, size range, and horizontal velocity) exhibit fluctuations of three to five days. Cross-correlating parameters showed a good, positive correlation between supergranulation size and size range, and a moderate, negative correlation between size range and velocity. The size and velocity do exhibit a moderate, negative correlation, but with a small time lag (less than 12 hours). Supergranule sizes during five days of co-temporal data from MDI and the Solar Dynamics Observatory (SDO) / Helioseismic Magnetic Imager (HMI) exhibit similar fluctuations with a high level of correlation between them. This verifies the solar origin of the fluctuations, which cannot be caused by instrumental artifacts according to these observations. Similar fluctuations are also observed in data simulations that model the evolution of the MDI Doppler pattern over a 60-day period. Correlations between the supergranule size and size range time-series derived from the simulated data are similar to those seen in MDI data. A simple toy-model using cumulative, uncorrelated exponential growth and decay patterns at random emergence times produces a time-series similar to the data simulations. The qualitative similarities between the simulated and the observed time-series suggest that the fluctuations arise from stochastic processes occurring within the solar convection zone. This behavior, propagating to surface manifestations of supergranulation, may assist our understanding of magnetic-field-line advection, evolution, and interaction.

  11. Application of the accurate mass and time tag approach in studies of the human blood lipidome

    SciTech Connect

    Ding, Jie; Sorensen, Christina M.; Jaitly, Navdeep; Jiang, Hongliang; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Metz, Thomas O.

    2008-08-15

    We report a preliminary demonstration of the accurate mass and time (AMT) tag approach for lipidomics. Initial data-dependent LC-MS/MS analyses of human plasma, erythrocyte, and lymphocyte lipids were performed in order to identify lipid molecular species in conjunction with complementary accurate mass and isotopic distribution information. Identified lipids were used to populate initial lipid AMT tag databases containing 250 and 45 entries for those species detected in positive and negative electrospray ionization (ESI) modes, respectively. The positive ESI database was then utilized to identify human plasma, erythrocyte, and lymphocyte lipids in high-throughput quantitative LC-MS analyses based on the AMT tag approach. We were able to define the lipid profiles of human plasma, erythrocytes, and lymphocytes based on qualitative and quantitative differences in lipid abundance. In addition, we also report on the optimization of a reversed-phase LC method for the separation of lipids in these sample types.

  12. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1995-01-01

    When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index (CI)' is developed as a quantitative indicator that the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code 'Sequitor'.

  13. Quantile-oriented nonlinear time series modelling of river flows

    NASA Astrophysics Data System (ADS)

    Elek, P.; Márkus, L.

    2003-04-01

    Daily river flows of the Tisza River in Hungary are investigated. Various now-classical methods suggest that the series exhibits substantial long memory. Thus, as a first step, a fractional ARIMA model may be fitted to the appropriately deseasonalised data. Synthetic streamflow series can then be generated easily from the bootstrapped innovations. (This approach has recently been used by Montanari et al., Water Resources Res. 33, 1035-1044, 1997.) However, simulating flows for the Tisza River this way, we experience a significant difference between the empirical and the synthetic density functions as well as the quantiles. This draws attention to the fact that the innovations are not independent: their squares and their absolute values are autocorrelated. Furthermore, they display nonseasonal periods of high and low variances. We propose to fit a smooth transition generalised autoregressive conditional heteroscedastic (GARCH) process to the innovations. Similar models are frequently used in mathematical finance to analyse uncorrelated series with time-varying variance. However, as hydrologic time series are less heavy-tailed than financial ones, the models must differ as well. In a standard GARCH model the dependence of the variance on the lagged innovation is quadratic, whereas in the model we intend to present in detail at the conference it is a bounded function. The new model is superior to the previously mentioned ones in approximating the probability density, the high quantiles and the extremal behaviour of the empirical river flows. Acknowledgement: This research was supported by Hungarian Research Dev. Programme NKFP, grant No. 3/067/2001 and by Nat. Sci. Research Fund OTKA, grant No. T 032725.
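
A bounded news-impact term is simple to simulate. The sketch below uses a hard cap, min(eps^2, c), as a stand-in for the smooth bounded function the abstract refers to (all parameter values are arbitrary illustrations):

```python
import numpy as np

def simulate_bounded_garch(n, omega=0.05, alpha=0.3, beta=0.6, cap=2.0, seed=1):
    """Simulate a GARCH(1,1)-type process with a bounded news-impact term:

        sigma2_t = omega + alpha * min(eps_{t-1}^2, cap) + beta * sigma2_{t-1}

    Capping the squared innovation limits the influence of extreme values,
    giving lighter tails than standard GARCH, in the spirit (though not
    the exact form) of the model described above.
    """
    rng = np.random.default_rng(seed)
    eps = np.zeros(n)
    sigma2 = np.full(n, omega / (1.0 - alpha - beta))   # typical starting level
    for k in range(1, n):
        sigma2[k] = omega + alpha * min(eps[k - 1] ** 2, cap) + beta * sigma2[k - 1]
        eps[k] = np.sqrt(sigma2[k]) * rng.standard_normal()
    return eps, sigma2

eps, sigma2 = simulate_bounded_garch(2000)
```

Note the cap gives a hard upper bound on the conditional variance, (omega + alpha*cap)/(1 - beta), which a quadratic GARCH recursion does not have.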

  14. Monitoring Forest Regrowth Using a Multi-Platform Time Series

    NASA Technical Reports Server (NTRS)

    Sabol, Donald E., Jr.; Smith, Milton O.; Adams, John B.; Gillespie, Alan R.; Tucker, Compton J.

    1996-01-01

    Over the past 50 years, the forests of western Washington and Oregon have been extensively harvested for timber. This has resulted in a heterogeneous mosaic of remaining mature forests, clear-cuts, new plantations, and second-growth stands that now occur in areas that formerly were dominated by extensive old-growth forests and younger forests resulting from fire disturbance. Traditionally, determination of seral stage and stand condition have been made using aerial photography and spot field observations, a methodology that is not only time- and resource-intensive, but falls short of providing current information on a regional scale. These limitations may be solved, in part, through the use of multispectral images which can cover large areas at spatial resolutions in the order of tens of meters. The use of multiple images comprising a time series potentially can be used to monitor land use (e.g. cutting and replanting), and to observe natural processes such as regeneration, maturation and phenologic change. These processes are more likely to be spectrally observed in a time series composed of images taken during different seasons over a long period of time. Therefore, for many areas, it may be necessary to use a variety of images taken with different imaging systems. A common framework for interpretation is needed that reduces topographic, atmospheric, instrumental, effects as well as differences in lighting geometry between images. The present state of remote-sensing technology in general use does not realize the full potential of the multispectral data in areas of high topographic relief. For example, the primary method for analyzing images of forested landscapes in the Northwest has been with statistical classifiers (e.g. parallelepiped, nearest-neighbor, maximum likelihood, etc.), often applied to uncalibrated multispectral data. Although this approach has produced useful information from individual images in some areas, landcover classes defined by these

  15. Remote-Sensing Time Series Analysis, a Vegetation Monitoring Tool

    NASA Technical Reports Server (NTRS)

    McKellip, Rodney; Prados, Donald; Ryan, Robert; Ross, Kenton; Spruce, Joseph; Gasser, Gerald; Greer, Randall

    2008-01-01

    The Time Series Product Tool (TSPT) is software, developed in MATLAB, which creates and displays high signal-to-noise Vegetation Indices imagery and other higher-level products derived from remotely sensed data. This tool enables automated, rapid, large-scale regional surveillance of crops, forests, and other vegetation. TSPT temporally processes high-revisit-rate satellite imagery produced by the Moderate Resolution Imaging Spectroradiometer (MODIS) and by other remote-sensing systems. Although MODIS imagery is acquired daily, cloudiness and other sources of noise can greatly reduce the effective temporal resolution. To improve cloud statistics, the TSPT combines MODIS data from multiple satellites (Aqua and Terra). The TSPT produces MODIS products as single time-frame and multitemporal change images, as time-series plots at a selected location, or as temporally processed image videos. Using the TSPT program, MODIS metadata is used to remove and/or correct bad and suspect data. Bad pixel removal, multiple satellite data fusion, and temporal processing techniques create high-quality plots and animated image video sequences that depict changes in vegetation greenness. This tool provides several temporal processing options not found in other comparable imaging software tools. Because the framework to generate and use other algorithms is established, small modifications to this tool will enable the use of a large range of remotely sensed data types. An effective remote-sensing crop monitoring system must be able to detect subtle changes in plant health in the earliest stages, before the effects of a disease outbreak or other adverse environmental conditions can become widespread and devastating. The integration of the time series analysis tool with ground-based information, soil types, crop types, meteorological data, and crop growth models in a Geographic Information System could provide the foundation for a large-area crop-surveillance system that could identify
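
The TSPT's internal algorithms are not given here, but the cloud problem it addresses is classically handled by maximum-value compositing, sketched below (the window length and values are assumptions, not TSPT's actual processing chain):

```python
import numpy as np

def max_value_composite(ndvi_daily, period=16):
    """Reduce cloud noise by keeping the maximum NDVI in each period.

    Clouds depress NDVI, so the per-window maximum is the observation
    most likely to be cloud-free (the classic maximum-value composite).
    """
    n = (len(ndvi_daily) // period) * period        # drop a ragged tail
    windows = np.asarray(ndvi_daily[:n], float).reshape(-1, period)
    return windows.max(axis=1)

# a constant "true" NDVI of 0.8 with cloud hits pushed down to 0.1
daily = np.full(64, 0.8)
daily[::5] = 0.1                                    # every 5th day cloudy
composites = max_value_composite(daily)             # four windows, all 0.8
```

Combining Aqua and Terra, as the abstract describes, effectively doubles the number of candidate observations per window before this kind of compositing is applied.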

  16. Dynamical recurrent neural networks--towards environmental time series prediction.

    PubMed

    Aussem, A; Murtagh, F; Sarazin, M

    1995-06-01

    Dynamical Recurrent Neural Networks (DRNN) (Aussem 1995a) are a class of fully recurrent networks obtained by modeling synapses as autoregressive filters. By virtue of their internal dynamics, these networks approximate the underlying law governing the time series by a system of nonlinear difference equations of internal variables. They therefore provide history-sensitive forecasts without having to be explicitly fed with external memory. The model is trained by a local and recursive error propagation algorithm called temporal-recurrent-backpropagation. The efficiency of the procedure benefits from the exponential decay of the gradient terms backpropagated through the adjoint network. We assess the predictive ability of the DRNN model with meteorological and astronomical time series recorded around the candidate observation sites for the future VLT telescope. The hope is that reliable environmental forecasts provided with the model will allow modern telescopes to be preset, a few hours in advance, in the most suited instrumental mode. In this perspective, the model is first appraised on precipitation measurements with traditional nonlinear AR and ARMA techniques using feedforward networks. Then we tackle a complex problem, namely the prediction of astronomical seeing, known to be a very erratic time series. A fuzzy coding approach is used to reduce the complexity of the underlying laws governing the seeing. Then, a fuzzy correspondence analysis is carried out to explore the internal relationships in the data. Based on a carefully selected set of meteorological variables at the same time-point, a nonlinear multiple regression, termed nowcasting (Murtagh et al. 1993, 1995), is carried out on the fuzzily coded seeing records. The DRNN is shown to outperform the fuzzy k-nearest neighbors method. PMID:7496587

  17. Loading effects in GPS vertical displacement time series

    NASA Astrophysics Data System (ADS)

    Memin, A.; Boy, J. P.; Santamaría-Gómez, A.; Watson, C.; Gravelle, M.; Tregoning, P.

    2015-12-01

    Surface deformations due to loading, which still lack a comprehensive representation, account for a significant part of the variability in geodetic time series. We assess the effects of loading in GPS vertical displacement time series at several frequency bands. We compare displacements derived from up-to-date loading models to two global sets of positioning time series, and investigate how the series are reduced at interannual periods (> 2 months), intermediate periods (> 7 days) and over the whole spectrum (> 1 day). We assess the impact of interannual loading on estimated velocities. We compute atmospheric loading effects using surface pressure fields from the ECMWF. We use the inverted barometer (IB) hypothesis, valid for periods exceeding a week, to describe the ocean response to the pressure forcing. We use general circulation ocean models (ECCO and GLORYS) to account for wind, heat and freshwater fluxes. We separately use the Toulouse Unstructured Grid Ocean model (TUGO-m), forced by air pressure and winds, to represent the dynamics of the ocean response at high frequencies. Continental water storage is described using the GLDAS/Noah and MERRA-land models. Non-hydrology loading reduces the variability of the observed vertical displacement differently according to the frequency band. Hydrology loading leads to a further reduction, mostly at annual periods. ECMWF+TUGO-m agrees better with vertical surface motion than the ECMWF+IB model at all frequencies. The interannual deformation is time-correlated at most locations and is adequately described by a power-law process with spectral index varying from -1.5 to -0.2. Depending on the power-law parameters, the predicted non-linear deformation due to mass loading variations leads to vertical velocity biases of up to 0.7 mm/yr when estimated from 5 years of continuous observations. The maximum velocity bias can reach 1 mm/yr in regions around the southern Tropical band.
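    The velocity-bias effect of power-law noise can be illustrated by spectral synthesis: generate noise with a power-law spectrum, fit a linear trend to five years of daily samples, and look at the scatter of the fitted slopes (a generic sketch; the spectral index, unit variance, and sample counts are illustrative, not the paper's values):

```python
import numpy as np

def powerlaw_noise(n, kappa, rng):
    """Spectral synthesis of noise with power spectrum ~ f**kappa (kappa < 0)."""
    f = np.fft.rfftfreq(n)
    amp = np.zeros(len(f))
    amp[1:] = f[1:] ** (kappa / 2.0)            # amplitude = sqrt(power)
    spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, len(f)))
    x = np.fft.irfft(spec, n)
    return x / np.std(x)                        # normalize to unit variance

rng = np.random.default_rng(2)
t = np.arange(5 * 365) / 365.25                 # five years of daily epochs
vels = []
for _ in range(200):
    noise = powerlaw_noise(len(t), -1.0, rng)   # flicker-like spectrum
    slope, _ = np.polyfit(t, noise, 1)          # "velocity" of pure noise
    vels.append(slope)
print(round(float(np.std(vels)), 3))  # nonzero scatter in the fitted velocities
```

For white noise the slope scatter shrinks rapidly with record length; for long-memory noise it decays much more slowly, which is why correlated loading signals bias velocity estimates.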

  18. Identifying multiple periodicities in sparse photon event time series

    NASA Astrophysics Data System (ADS)

    Koen, Chris

    2016-07-01

    The data considered are event times (e.g. photon arrival times, or the occurrence of sharp pulses). The source is multiperiodic, or the data could be multiperiodic because several unresolved sources contribute to the time series. Most events may be unobserved, either because the source is intermittent, or because some events are below the detection limit. The data may also be contaminated by spurious pulses. The problem considered is the determination of the periods in the data. A two-step procedure is proposed: in the first, a likely period is identified; in the second, events associated with this periodicity are removed from the time series. The steps are repeated until the remaining events do not exhibit any periodicity. A number of period-finding methods from the literature are reviewed, and a new maximum likelihood statistic is also introduced. It is shown that the latter is competitive compared to other techniques. The proposed methodology is tested on simulated data. Observations of two rotating radio transients are discussed, but contrary to claims in the literature, no evidence for multiperiodicity could be found.
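    The iterative find-and-remove procedure rests on a per-period clustering statistic; a minimal sketch using a Rayleigh test on event phases (not the paper's maximum likelihood statistic; the periodic source and trial-period grid below are invented for illustration):

```python
import numpy as np

def rayleigh_power(event_times, period):
    """Rayleigh statistic: large when event phases cluster at this trial period."""
    phases = 2 * np.pi * (np.asarray(event_times) % period) / period
    n = len(event_times)
    return (np.cos(phases).sum() ** 2 + np.sin(phases).sum() ** 2) / n

def best_period(event_times, trial_periods):
    """Scan a grid of trial periods and return the one with maximal clustering."""
    powers = [rayleigh_power(event_times, p) for p in trial_periods]
    return trial_periods[int(np.argmax(powers))]

# Strictly periodic source (period 3.7) with most cycles unobserved.
rng = np.random.default_rng(0)
cycles = np.sort(rng.choice(2000, size=300, replace=False))
events = cycles * 3.7
trials = np.linspace(3.0, 4.5, 1501)
print(best_period(events, trials))  # ~3.7
```

In the two-step scheme described above, events falling near the identified phase would then be removed and the scan repeated on the remainder.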

  19. Identifying Multiple Periodicities in Sparse Photon Event Time Series

    NASA Astrophysics Data System (ADS)

    Koen, Chris

    2016-04-01

    The data considered are event times (e.g. photon arrival times, or the occurrence of sharp pulses). The source is multiperiodic, or the data could be multiperiodic because several unresolved sources contribute to the time series. Most events may be unobserved, either because the source is intermittent, or because some events are below the detection limit. The data may also be contaminated by spurious pulses. The problem considered is the determination of the periods in the data. A two-step procedure is proposed: in the first, a likely period is identified; in the second, events associated with this periodicity are removed from the time series. The steps are repeated until the remaining events do not exhibit any periodicity. A number of period-finding methods from the literature are reviewed, and a new maximum likelihood statistic is also introduced. It is shown that the latter is competitive compared to other techniques. The proposed methodology is tested on simulated data. Observations of two rotating radio transients are discussed, but contrary to claims in the literature, no evidence for multiperiodicity could be found.

  20. Long-term time series prediction using OP-ELM.

    PubMed

    Grigorievskiy, Alexander; Miche, Yoan; Ventelä, Anne-Mari; Séverin, Eric; Lendasse, Amaury

    2014-03-01

    In this paper, an Optimally Pruned Extreme Learning Machine (OP-ELM) is applied to the problem of long-term time series prediction. Three known strategies for long-term time series prediction, i.e., Recursive, Direct and DirRec, are considered in combination with OP-ELM and compared with a baseline linear least squares model and Least-Squares Support Vector Machines (LS-SVM). Among these three strategies, DirRec is the most time-consuming, and its usage with nonlinear models like LS-SVM, where several hyperparameters need to be adjusted, leads to relatively heavy computations. It is shown that OP-ELM, being also a nonlinear model, allows reasonable computational time for the DirRec strategy. In all our experiments except one, OP-ELM with the DirRec strategy outperforms the linear model with any strategy. In contrast to the proposed algorithm, LS-SVM behaves unstably without variable selection. It is also shown that there is no superior strategy for OP-ELM: any of the three can be the best. In addition, the prediction accuracy of an ensemble of OP-ELMs is studied, and it is shown that averaging predictions of the ensemble can improve the accuracy (Mean Square Error) dramatically. PMID:24365536
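    The Recursive and Direct strategies compared here can be sketched with a plain linear least-squares learner standing in for OP-ELM (the toy sine series is invented; DirRec, which retrains with each new predicted input, is omitted for brevity):

```python
import numpy as np

def embed(series, lag):
    """Build (window, next-value) training pairs from a 1-D series."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

def fit_linear(X, y):
    # Least-squares linear model with intercept.
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, window):
    return np.dot(coef[:-1], window) + coef[-1]

def forecast_recursive(series, lag, horizon):
    """One 1-step-ahead model, fed back on its own outputs."""
    coef = fit_linear(*embed(series, lag))
    window, out = list(series[-lag:]), []
    for _ in range(horizon):
        nxt = predict(coef, window)
        out.append(nxt)
        window = window[1:] + [nxt]
    return out

def forecast_direct(series, lag, horizon):
    """A separate model per horizon h, each trained to predict t+h directly."""
    out = []
    for h in range(1, horizon + 1):
        X = np.array([series[i:i + lag] for i in range(len(series) - lag - h + 1)])
        coef = fit_linear(X, series[lag + h - 1:])
        out.append(predict(coef, series[-lag:]))
    return out

t = np.arange(200, dtype=float)
s = np.sin(0.3 * t)
rec = forecast_recursive(s, 4, 5)
dir_ = forecast_direct(s, 4, 5)
true = np.sin(0.3 * (t[-1] + np.arange(1, 6)))
print(np.max(np.abs(np.array(rec) - true)))  # both recover the sinusoid almost exactly
```

On noisy data the two strategies trade off differently: Recursive compounds its own prediction errors, while Direct needs one model per horizon.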

  1. Periodicity detection method for small-sample time series datasets.

    PubMed

    Tominaga, Daisuke

    2010-01-01

    Time series of gene expression often exhibit periodic behavior under the influence of multiple signal pathways, and are represented by a model that incorporates multiple harmonics and noise. Most of these data, which are observed using DNA microarrays, consist of few sampling points in time, but most periodicity detection methods require a relatively large number of sampling points. We have previously developed a detection algorithm based on the discrete Fourier transform and Akaike's information criterion. Here we demonstrate the performance of the algorithm for small-sample time series data through a comparison with conventional and newly proposed periodicity detection methods based on a statistical analysis of the power of harmonics. We show that this method has higher sensitivity for data consisting of multiple harmonics, and is more robust against noise than other methods. Although "combinatorial explosion" occurs for large datasets, the computational time is not a problem for small-sample datasets. The MATLAB/GNU Octave script of the algorithm is available on the author's web site: http://www.cbrc.jp/%7Etominaga/piccolo/. PMID:21151841
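    A rough sketch of a DFT-plus-AIC periodicity detector in the spirit described here (a generic reconstruction, not the authors' released MATLAB/Octave script; the Parseval-based residual update assumes a mean-removed real series and treats the Nyquist bin loosely):

```python
import numpy as np

def aic_periodicity(x):
    """Choose the number of Fourier harmonics by AIC; 0 harmonics = aperiodic.
    Returns (n_harmonics, period_of_strongest_harmonic_in_samples or None)."""
    n = len(x)
    x = np.asarray(x, dtype=float) - np.mean(x)
    power = np.abs(np.fft.rfft(x)[1:]) ** 2   # skip the DC bin
    order = np.argsort(power)[::-1]           # strongest harmonics first
    total = np.sum(x ** 2)
    best_k, best_aic = 0, n * np.log(total / n) + 2
    for k in range(1, len(order) + 1):
        # Parseval: fitting harmonic j removes ~2|X_j|^2/n of the residual sum.
        rss = max(total - 2.0 / n * np.sum(power[order[:k]]), 1e-12)
        aic = n * np.log(rss / n) + 2 * (2 * k + 1)
        if aic < best_aic:
            best_k, best_aic = k, aic
    return (best_k, n / (order[0] + 1)) if best_k else (0, None)

k, p = aic_periodicity(np.sin(2 * np.pi * 5 * np.arange(64) / 64))
print(k, p)  # 1 12.8
```

The AIC penalty (two parameters per harmonic plus a variance term) is what keeps the model from fitting noise bins on short series.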

  2. An accurate assay for HCV based on real-time fluorescence detection of isothermal RNA amplification.

    PubMed

    Wu, Xuping; Wang, Jianfang; Song, Jinyun; Li, Jiayan; Yang, Yongfeng

    2016-09-01

    Hepatitis C virus (HCV) is one of the common causes of liver fibrosis and hepatocellular carcinoma (HCC). Early, rapid and accurate HCV RNA detection is important to prevent and control liver disease. A simultaneous amplification and testing (SAT) assay, which is based on isothermal amplification of RNA and real-time fluorescence detection, was designed to optimize routine HCV RNA detection. In this study, HCV RNA and an internal control (IC) were amplified and analyzed simultaneously by the SAT assay, with fluorescence detected using routine real-time PCR equipment. The assay detected as few as 10 copies of HCV RNA transcripts. We tested 705 serum samples with SAT, of which 96.4% (680/705) showed results consistent with routine real-time PCR. About 92% (23/25) of the discordant samples were confirmed by a second real-time PCR to give the same results as the SAT-HCV assay. The sensitivity and specificity of the SAT-HCV assay were 99.6% (461/463) and 100% (242/242), respectively. In conclusion, the SAT assay is an accurate test with high specificity and sensitivity which may increase the detection rate of HCV. It is therefore a promising tool to diagnose HCV infection. PMID:27283884

  3. Moored instrument for time series studies of primary production and other microbial rate processes

    SciTech Connect

    Taylor, C.D.; Doherty, K.W.

    1993-01-20

    The goal of this project is to build and test a Time Series Submersible Incubation Device (TS-SID) capable of the autonomous in situ measurement of phytoplankton production and other rate processes for a period of at least three months. The instrument is conceptually based on a recently constructed Submersible Incubation Device (SID). The TS-SID is to possess the ability to periodically incubate samples in the presence of an appropriate tracer, and to store 94 chemically fixed subsamples for later analysis. The TS-SID has been designed to accurately simulate the natural environment, and to avoid trace metal contamination and physical damage to cells. Devices for biofouling control of internal and external surfaces are to be incorporated into the instrument. After the time series capabilities of the instrument have been successfully evaluated by medium-term coastal time series studies (up to one month), longer-term coastal time series studies (2-3 months) will be conducted to evaluate the biofouling prevention measures that have been used with the instrument.

  4. Moored instrument for time series studies of primary production and other microbial rate processes. Progress report

    SciTech Connect

    Taylor, C.D.; Doherty, K.W.

    1993-01-20

    The goal of this project is to build and test a Time Series Submersible Incubation Device (TS-SID) capable of the autonomous in situ measurement of phytoplankton production and other rate processes for a period of at least three months. The instrument is conceptually based on a recently constructed Submersible Incubation Device (SID). The TS-SID is to possess the ability to periodically incubate samples in the presence of an appropriate tracer, and to store 94 chemically fixed subsamples for later analysis. The TS-SID has been designed to accurately simulate the natural environment, and to avoid trace metal contamination and physical damage to cells. Devices for biofouling control of internal and external surfaces are to be incorporated into the instrument. After the time series capabilities of the instrument have been successfully evaluated by medium-term coastal time series studies (up to one month), longer-term coastal time series studies (2-3 months) will be conducted to evaluate the biofouling prevention measures that have been used with the instrument.

  5. Investigating fractal property and respiratory modulation of human heartbeat time series using empirical mode decomposition.

    PubMed

    Yeh, Jia-Rong; Sun, Wei-Zen; Shieh, Jiann-Shing; Huang, Norden E

    2010-06-01

    The human heartbeat interval reflects a complicated composition of different underlying modulations and reactions to environmental inputs. As a result, the human heartbeat interval is a complex time series and its complexity can be scaled using various physical quantifications, such as the property of long-term correlation in detrended fluctuation analysis (DFA). Recently, empirical mode decomposition (EMD) has been shown to be a dyadic filter bank resembling those involved in wavelet decomposition. Moreover, the hierarchy of the extracted modes may be exploited to access the Hurst exponent, which also reflects the property of long-term correlation for a stochastic time series. In this paper, we present significant findings on the dynamic properties of human heartbeat time series obtained by EMD. According to our results, EMD provides more accurate access to long-term correlation than the Hurst exponent does. Moreover, the first intrinsic mode function (IMF 1) is an indicator of orderliness, which reflects the modulation of respiratory sinus arrhythmia (RSA) for healthy subjects, or exhibits a characteristic component similar to one decomposed from a stochastic time series for subjects with congestive heart failure (CHF) and atrial fibrillation (AF). In addition, the averaged amplitude of IMF 1 acts as a parameter of RSA modulation, which shows a significant negative correlation with aging. These findings lead us to a better understanding of the cardiac system. PMID:20338798
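    The DFA scaling exponent mentioned here can be estimated with a short routine (a standard textbook sketch, not the paper's code; the window scales and the white-noise test signal are arbitrary choices):

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: slope of log F(s) versus log s."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        rms = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # linear detrend per window
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(1)
alpha = dfa_exponent(rng.standard_normal(4096), [8, 16, 32, 64, 128])
print(round(alpha, 2))  # ~0.5 for uncorrelated noise
```

Exponents near 0.5 indicate no long-term correlation; healthy heartbeat series typically yield values closer to 1.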

  6. A Framework and Algorithms for Multivariate Time Series Analytics (MTSA): Learning, Monitoring, and Recommendation

    ERIC Educational Resources Information Center

    Ngan, Chun-Kit

    2013-01-01

    Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…

  7. Time accurate application of the MacCormack 2-4 scheme on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Hudson, Dale A.; Long, Lyle N.

    1995-01-01

    Many recent computational efforts in turbulence and acoustics research have used higher order numerical algorithms. One popular method has been the explicit MacCormack 2-4 scheme. The MacCormack 2-4 scheme is second order accurate in time and fourth order accurate in space, and is stable for CFL numbers below 2/3. Current research has shown that the method can give accurate results but does exhibit significant Gibbs phenomena at sharp discontinuities. The impact of adding Jameson-type second, third, and fourth order artificial viscosity was examined here. Category 2 problems, the nonlinear traveling wave and the Riemann problem, were computed using a CFL number of 0.25. This research has found that dispersion errors can be significantly reduced or nearly eliminated by using a combination of second and third order terms in the damping. Use of second and fourth order terms reduced the magnitude of dispersion errors but not as effectively as the second and third order combination. The program was coded using Thinking Machine's CM Fortran, a variant of Fortran 90/High Performance Fortran, and was executed on a 2K CM-200. Simple extrapolation boundary conditions were used for both problems.

  8. Synthesis of rainfall time series in a high temporal resolution

    NASA Astrophysics Data System (ADS)

    Callau Poduje, Ana Claudia; Haberlandt, Uwe

    2014-05-01

    In order to optimize the design and operation of urban drainage systems, long and continuous rain series in a high temporal resolution are essential. As the length of the rainfall records is often short, particularly the data available with the temporal and regional resolutions required for urban hydrology, it is necessary to find some numerical representation of the precipitation phenomenon to generate long synthetic rainfall series. An Alternating Renewal Model (ARM) is applied for this purpose, which consists of two structures: external and internal. The former is the sequence of wet and dry spells, described by their durations which are simulated stochastically. The internal structure is characterized by the amount of rain corresponding to each wet spell and its distribution within the spell. A multivariate frequency analysis is applied to analyze the internal structure of the wet spells and to generate synthetic events. The stochastic time series must reproduce the statistical characteristics of observed high resolution precipitation measurements used to generate them. The spatio-temporal interdependencies between stations are addressed by resampling the continuous synthetic series based on the Simulated Annealing (SA) procedure. The state of Lower-Saxony and surrounding areas, located in the north-west of Germany is used to develop the ARM. A total of 26 rainfall stations with high temporal resolution records, i.e. rainfall data every 5 minutes, are used to define the events, find the most suitable probability distributions, calibrate the corresponding parameters, simulate long synthetic series and evaluate the results. The length of the available data ranges from 10 to 20 years. The rainfall series involved in the different steps of calculation are compared using a rainfall-runoff model to simulate the runoff behavior in urban areas. The EPA Storm Water Management Model (SWMM) is applied for this evaluation. The results show a good representation of the
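    The external structure of an Alternating Renewal Model can be sketched by drawing alternating wet- and dry-spell durations from fitted distributions; a minimal generator (the exponential/gamma choices and parameters are placeholders, not the calibrated Lower-Saxony values):

```python
import numpy as np

def alternating_renewal(n_spells, rng):
    """Generate (start_time, wet_duration, rain_depth) tuples: dry gaps and
    wet spells alternate, with a total depth attached to each wet spell."""
    t, spells = 0.0, []
    for _ in range(n_spells):
        t += rng.exponential(30.0)      # dry-spell duration, hours (placeholder)
        wet = rng.gamma(2.0, 2.0)       # wet-spell duration, hours (placeholder)
        depth = rng.gamma(2.0, 1.5)     # rain depth for the spell, mm (placeholder)
        spells.append((t, wet, depth))
        t += wet
    return spells

spells = alternating_renewal(5, np.random.default_rng(3))
for start, wet, depth in spells:
    print(round(start, 1), round(wet, 1), round(depth, 1))
```

The internal structure (how the depth is distributed within each wet spell) would be layered on top of this event skeleton, as the abstract describes.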

  9. Estimation of coupling between time-delay systems from time series

    NASA Astrophysics Data System (ADS)

    Prokhorov, M. D.; Ponomarenko, V. I.

    2005-07-01

    We propose a method for estimation of coupling between the systems governed by scalar time-delay differential equations of the Mackey-Glass type from the observed time series data. The method allows one to detect the presence of certain types of linear coupling between two time-delay systems, to define the type, strength, and direction of coupling, and to recover the model equations of coupled time-delay systems from chaotic time series corrupted by noise. We verify our method using both numerical and experimental data.
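    Mackey-Glass-type delay equations like those treated here are easy to simulate for testing; a minimal Euler-discretized sketch (the parameter values are the conventional chaotic-regime choices, not taken from the paper):

```python
import numpy as np

def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, x0=1.2):
    """Euler step (dt = 1) of dx/dt = beta*x(t-tau)/(1 + x(t-tau)**10) - gamma*x(t)."""
    x = np.full(n + tau, x0)
    for t in range(tau, n + tau - 1):
        xd = x[t - tau]                        # the delayed state drives the dynamics
        x[t + 1] = x[t] + beta * xd / (1.0 + xd ** 10) - gamma * x[t]
    return x[tau:]

series = mackey_glass(2000)
print(len(series), round(float(series.min()), 2), round(float(series.max()), 2))
```

Coupling two such systems (e.g. by adding a scaled term from one series into the other's update) gives test data for coupling-detection methods of the kind the abstract describes.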

  10. Characterizability of metabolic pathway systems from time series data.

    PubMed

    Voit, Eberhard O

    2013-12-01

    Over the past decade, the biomathematical community has devoted substantial effort to the complicated challenge of estimating parameter values for biological systems models. An even more difficult issue is the characterization of functional forms for the processes that govern these systems. Most parameter estimation approaches tacitly assume that these forms are known or can be assumed with some validity. However, this assumption is not always true. The recently proposed method of Dynamic Flux Estimation (DFE) addresses this problem in a genuinely novel fashion for metabolic pathway systems. Specifically, DFE allows the characterization of fluxes within such systems through an analysis of metabolic time series data. Its main drawback is the fact that DFE can only directly be applied if the pathway system contains as many metabolites as unknown fluxes. This situation is unfortunately rare. To overcome this roadblock, earlier work in this field had proposed strategies for augmenting the set of unknown fluxes with independent kinetic information, which however is not always available. Employing Moore-Penrose pseudo-inverse methods of linear algebra, the present article discusses an approach for characterizing fluxes from metabolic time series data that is applicable even if the pathway system is underdetermined and contains more fluxes than metabolites. Intriguingly, this approach is independent of a specific modeling framework and unaffected by noise in the experimental time series data. The results reveal whether any fluxes may be characterized and, if so, which subset is characterizable. They also help with the identification of fluxes that, if they could be determined independently, would allow the application of DFE. PMID:23391489
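    The Moore-Penrose step for an underdetermined pathway can be illustrated on a toy system (the stoichiometry below is invented; note the pseudo-inverse returns the minimum-norm flux vector, which reproduces the data exactly but need not equal the true fluxes):

```python
import numpy as np

# Toy pathway: 2 metabolites, 3 unknown fluxes, so S @ v = dX/dt is underdetermined.
S = np.array([[1.0, -1.0,  0.0],    # metabolite 1: produced by v1, consumed by v2
              [0.0,  1.0, -1.0]])   # metabolite 2: produced by v2, consumed by v3
v_true = np.array([2.0, 1.5, 0.5])
dxdt = S @ v_true                    # "observed" slopes of the metabolite time series

v_est = np.linalg.pinv(S) @ dxdt     # minimum-norm flux vector
print(np.allclose(S @ v_est, dxdt))  # True: the data are reproduced exactly
print(v_est)                         # generally differs from v_true (non-unique)
```

The null space of S characterizes exactly which flux combinations remain undetermined, which mirrors the article's question of which subset of fluxes is characterizable.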

  11. Analysis of Multispectral Time Series for supporting Forest Management Plans

    NASA Astrophysics Data System (ADS)

    Simoniello, T.; Carone, M. T.; Costantini, G.; Frattegiani, M.; Lanfredi, M.; Macchiato, M.

    2010-05-01

    Adequate forest management requires specific plans based on updated and detailed mapping. Multispectral satellite time series have been widely applied to forest monitoring and studies at different scales thanks to their capability of providing synoptic information on some basic parameters descriptive of vegetation distribution and status. As a low-cost tool for supporting forest management plans in an operational context, we tested the use of Landsat-TM/ETM time series (1987-2006) in the high Agri Valley (Southern Italy) for planning field surveys as well as for the integration of existing cartography. As a preliminary step, no-change regression normalization was applied to the time series to make all scenes radiometrically consistent; then all the data concerning available forest maps, municipal boundaries, water basins, rivers, and roads were overlaid in a GIS environment. From the 2006 image we elaborated the NDVI map and analyzed the distribution for each land cover class. To separate the physiological variability and identify the anomalous areas, a threshold on the distributions was applied. To label the non-homogeneous areas, a multitemporal analysis was performed by separating heterogeneity due to cover changes from that linked to basic unit mapping and classification labelling aggregations. Then a map of priority areas was produced to support the field survey plan. To analyze the territorial evolution, historical land cover maps were elaborated by adopting a hybrid classification approach based on a preliminary segmentation, the identification of training areas, and a subsequent maximum likelihood categorization. Such an analysis was fundamental for the general assessment of the territorial dynamics and in particular for the evaluation of the efficacy of past intervention activities.

  12. Assemblage time series reveal biodiversity change but not systematic loss.

    PubMed

    Dornelas, Maria; Gotelli, Nicholas J; McGill, Brian; Shimadzu, Hideyasu; Moyes, Faye; Sievers, Caya; Magurran, Anne E

    2014-04-18

    The extent to which biodiversity change in local assemblages contributes to global biodiversity loss is poorly understood. We analyzed 100 time series from biomes across Earth to ask how diversity within assemblages is changing through time. We quantified patterns of temporal α diversity, measured as change in local diversity, and temporal β diversity, measured as change in community composition. Contrary to our expectations, we did not detect systematic loss of α diversity. However, community composition changed systematically through time, in excess of predictions from null models. Heterogeneous rates of environmental change, species range shifts associated with climate change, and biotic homogenization may explain the different patterns of temporal α and β diversity. Monitoring and understanding change in species composition should be a conservation priority. PMID:24744374
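    Temporal beta diversity of the kind measured here can be illustrated with a set-based dissimilarity (Jaccard is one common choice; the species lists are invented), showing unchanged alpha diversity alongside complete compositional turnover:

```python
def jaccard_dissimilarity(a, b):
    """Temporal beta diversity between two assemblage snapshots (species sets)."""
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b)

# Same alpha diversity (3 species at both times) but complete turnover:
t1 = {"sp1", "sp2", "sp3"}
t2 = {"sp4", "sp5", "sp6"}
print(len(t1), len(t2), jaccard_dissimilarity(t1, t2))  # 3 3 1.0
```

This is exactly the pattern the study reports at scale: local richness holds steady while who is present changes systematically.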

  13. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    PubMed Central

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing it with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research tests the predictive effects on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices. PMID:27293423

  14. Time series analysis of electron flux at geostationary orbit

    SciTech Connect

    Szita, S.; Rodgers, D.J.; Johnstone, A.D.

    1996-07-01

    Time series of energetic (42.9–300 keV) electron flux data from the geostationary satellite Meteosat-3 show variability over various timescales. Of particular interest are the strong local time dependence of the flux data and the large flux peaks associated with particle injection events which occur over a timescale of a few hours. Fourier analysis has shown that for this energy range, the average electron flux diurnal variation can be approximated by a combination of two sine waves with periods of 12 and 24 hours. The data have been further examined using wavelet analysis, which shows how the diurnal variation changes and where it appears most significant. The injection events have a characteristic appearance but do not occur in phase with one another and therefore do not show up in a Fourier spectrum. Wavelet analysis has been used to look for characteristic time scales for these events. © 1996 American Institute of Physics.
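    Fitting the 12- and 24-hour sine-wave combination described here reduces to linear least squares on a harmonic basis (a generic sketch with synthetic flux values, not the Meteosat-3 data):

```python
import numpy as np

def fit_diurnal(t_hours, flux):
    """Least-squares fit of a constant plus 24 h and 12 h sinusoids."""
    cols = [np.ones_like(t_hours)]
    for period in (24.0, 12.0):
        w = 2 * np.pi / period
        cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return coef, A @ coef

t = np.arange(0, 240, 0.5)                       # ten days, half-hour sampling
flux = 5 + 2 * np.sin(2 * np.pi * t / 24) + 0.7 * np.cos(2 * np.pi * t / 12)
coef, fitted = fit_diurnal(t, flux)
print(round(coef[0], 3))  # 5.0 (the mean level)
```

Because injection events are transient and out of phase, they survive in the residual flux - fitted, which is where the wavelet analysis takes over.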

  15. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    PubMed

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods and taking the model compared with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices. PMID:27293423

  16. Time series predictions with neural nets: Application to airborne pollen forecasting

    NASA Astrophysics Data System (ADS)

    Arizmendi, C. M.; Sanchez, J. R.; Ramos, N. E.; Ramos, G. I.

    1993-09-01

    Pollen allergy is a common disease causing rhinoconjunctivitis (hay fever) in 5-10% of the population. Medical studies have indicated that pollen-related diseases could be greatly reduced if future pollen contents in the air could be predicted. In this work we have developed a new forecasting method that applies the ability of neural nets to predict the future behaviour of chaotic systems in order to make accurate predictions of the airborne pollen concentration. The method requires that the neural net be fed with non-zero values, which restricts the method's predictions to the period following the start of pollen flight. The operational method outlined here constitutes a different point of view with respect to the more generally used forecasts of time series analysis, which require input of many meteorological parameters. Excellent forecasts were obtained by training a neural net using only the time series of pollen concentration values.

  17. On the maximum-entropy/autoregressive modeling of time series

    NASA Technical Reports Server (NTRS)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain) on the one hand, to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is a convenient but ambiguous visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and that the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
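    The pole-to-complex-frequency reading of an AR model can be demonstrated directly (a generic sketch: fit AR coefficients by least squares, then take the roots of the characteristic polynomial; the test sinusoid is invented):

```python
import numpy as np

def ar_frequencies(x, order, dt=1.0):
    """Least-squares AR fit; each pole z maps to a complex frequency via
    z = exp((damping + 1j*2*pi*freq) * dt)."""
    X = np.array([x[i:i + order][::-1] for i in range(len(x) - order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    poles = np.roots(np.r_[1.0, -a])          # zeros of z^p - a1 z^(p-1) - ... - ap
    freqs = np.angle(poles) / (2 * np.pi * dt)
    damping = np.log(np.abs(poles)) / dt
    return freqs, damping

x = np.sin(2 * np.pi * 0.05 * np.arange(400))
freqs, damping = ar_frequencies(x, 2)
print(np.sort(np.round(freqs, 4)))  # the sinusoid appears as a +/-0.05 pole pair
```

A pure sinusoid lands its pole pair exactly on the unit circle (zero damping); poles inside the circle correspond to decaying exponentials.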

  18. Time series analysis of waterfowl species number change

    NASA Astrophysics Data System (ADS)

    Mengjung Chou, Caroline; Da-Wei Tsai, David; Honglay Chen, Paris

    2014-05-01

    The objective of this study is to analyze the time series of waterfowl species numbers in the Da-du estuary, which was designated an Important Bird Area (IBA) by BirdLife International in 2004. The multiplicative decomposition method was adopted to determine the species variations, including long-term (T), seasonal (S), cyclical (C), and irregular (I) components. The results indicated: (1) The long-term trend decreased with time from 1989 to 2012. (2) There were two seasonal high peaks, in April and November each year, with the lowest point in June. Moreover, since winter visitors dominated the total species numbers, the seasonal changes depended mainly on the winter birds' migration. (3) The waterfowl numbers gradually recovered from their lowest point in 1996, but the difference between 1989 and 2003 indicated that an irreversible effect already existed. (4) The irregular variation was shown to be randomly distributed by several statistical tests, including a normality test, homogeneity of variance, an independence test and the variation probability method, which portray the characteristics of the distributions and demonstrate their randomness. Consequently, this study showed that time series analysis methods can represent the waterfowl species changes numerically reasonably well, and the results could provide valuable data for research on ecosystem succession and anthropogenic impacts in the estuary.
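    The multiplicative decomposition used here follows the classical trend x seasonal x irregular scheme; a compact sketch with a centered moving-average trend (the monthly toy series is invented, and edge samples where the moving average is undefined are left as NaN):

```python
import numpy as np

def multiplicative_decompose(x, period):
    """Classical multiplicative decomposition: x = trend * seasonal * irregular."""
    kernel = np.ones(period) / period
    if period % 2 == 0:                     # even period: 2 x (period)-term centered MA
        kernel = np.convolve(kernel, [0.5, 0.5])
    trend = np.convolve(x, kernel, mode="same")
    half = len(kernel) // 2
    ratio = np.full(len(x), np.nan)
    ratio[half:len(x) - half] = x[half:len(x) - half] / trend[half:len(x) - half]
    seasonal = np.array([np.nanmean(ratio[i::period]) for i in range(period)])
    seasonal *= period / seasonal.sum()     # normalize indices to average 1
    s_full = np.tile(seasonal, len(x) // period + 1)[:len(x)]
    return trend, seasonal, ratio / s_full  # irregular is NaN at the edges

t = np.arange(120, dtype=float)
seas = 1 + 0.3 * np.sin(2 * np.pi * np.arange(12) / 12)
x = (10 + 0.05 * t) * np.tile(seas, 10)
trend, s_idx, irr = multiplicative_decompose(x, 12)
print(np.round(s_idx, 2))   # close to the planted seasonal indices
```

The residual irregular component is then what the study's randomness tests (normality, independence, and so on) would be applied to.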

  19. Exploratory joint and separate tracking of geographically related time series

    NASA Astrophysics Data System (ADS)

    Balasingam, Balakumar; Willett, Peter; Levchuk, Georgiy; Freeman, Jared

    2012-05-01

    Target tracking techniques have usually been applied to physical systems via radar, sonar or imaging modalities. But the same techniques - filtering, association, classification, track management - can be applied to nontraditional data such as one might find in other fields such as economics, business and national defense. In this paper we explore a particular data set. The measurements are time series collected at various sites; but other than that, little is known about them. We shall refer to the data as representing the Megawatt hour (MWH) output of various power plants located in Afghanistan. We pose such questions as: 1. Which power plants seem to have a common model? 2. Do any power plants change their models with time? 3. Can power plant behavior be predicted, and if so, how far into the future? 4. Are some of the power plants stochastically linked? That is, does a lack of power demand at one power plant imply a surfeit of demand elsewhere? The observations seem well modeled as hidden Markov. This HMM modeling is compared to other approaches, and tests are extended to other (albeit self-generated) data sets with similar characteristics. Keywords: time-series analysis, hidden Markov models, statistical similarity, weighted clustering.
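    The HMM likelihoods such a comparison relies on can be computed with the scaled forward algorithm; a minimal sketch (the two "plant regime" states and all probabilities below are invented for illustration):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm; pi: initial probs, A: transitions, B: emissions)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    alpha = alpha / c
    loglik = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        c = alpha.sum()
        alpha = alpha / c               # rescale to avoid underflow
        loglik += np.log(c)
    return loglik

# Hypothetical "plant regimes": steady (state 0) vs intermittent (state 1),
# emitting discretized output levels 0 (low) and 1 (high).
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.1, 0.9],   # steady regime mostly emits "high"
              [0.7, 0.3]])  # intermittent regime mostly emits "low"
steady_like = [1, 1, 1, 0, 1, 1]
print(forward_loglik(steady_like, pi, A, B))
```

Sites whose series score highly under the same fitted HMM would be candidates for sharing a common model, which is question 1 above.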

  20. Albedo Pattern Recognition and Time-Series Analyses in Malaysia

    NASA Astrophysics Data System (ADS)

    Salleh, S. A.; Abd Latif, Z.; Mohd, W. M. N. Wan; Chan, A.

    2012-07-01

    Pattern recognition and time-series analyses enable one to evaluate and generate predictions of specific phenomena. Albedo pattern and time-series analyses are particularly useful for climate-condition monitoring. This study was conducted to identify changes in Malaysia's albedo pattern. The recognized patterns and changes will be useful for a variety of environmental and climate monitoring studies, such as carbon budgeting and aerosol mapping. Ten years (2000-2009) of MODIS satellite images were used for the analyses and interpretation. The images were processed using ERDAS Imagine remote sensing software, ArcGIS 9.3, the 6S code for atmospheric calibration, and several MODIS tools (MRT, HDF2GIS, albedo tools). Several methods for time-series analysis were explored; this paper demonstrates trend and seasonal time-series analyses using the MODIS MCD43A3 albedo land product converted from HDF format. The results revealed significant changes in albedo percentages over the past 10 years, and patterns related to Malaysia's nebulosity index (NI) and aerosol optical depth (AOD). A noticeable trend can be identified in the maximum and minimum albedo values. The rises and falls of the line graph show a similar trend across the daily observations; the difference lies in the magnitude of the rises and falls of albedo. Thus, it can be concluded that the temporal behavior of land surface albedo in Malaysia shows uniform behavior and effects with respect to the local monsoons. 
However, although the average albedo shows a linear trend with the nebulosity index, the pattern of albedo changes with respect to the nebulosity index indicates that external factors also influence the albedo values, as the plotted sky conditions and diffusion do not show a uniform trend over the years, especially when the trend over 5-year intervals is examined; 2000 shows a high negative linear

  1. Best linear forecast of volatility in financial time series

    NASA Astrophysics Data System (ADS)

    Krivoruchenko, M. I.

    2004-09-01

    The autocorrelation function of volatility in financial time series is fitted well by a superposition of several exponents. This case admits an explicit analytical solution of the problem of constructing the best linear forecast of a stationary stochastic process. We describe and apply the proposed analytical method for forecasting volatility. The leverage effect and volatility clustering are taken into account. Parameters of the predictor function are determined numerically for the Dow Jones 30 Industrial Average. Connection of the proposed method to the popular autoregressive conditional heteroskedasticity models is discussed.
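The construction above can be sketched concretely: given an autocorrelation modelled as a superposition of exponentials, the best linear one-step predictor follows from the normal (Yule-Walker/Toeplitz) equations. The parameter values below are illustrative, not those fitted to the Dow Jones data:

```python
import numpy as np

def acf_exp_mix(lags, weights, taus):
    """Autocorrelation modelled as a superposition of exponentials,
    normalized so that rho(0) = 1."""
    lags = np.asarray(lags, dtype=float)
    rho = sum(w * np.exp(-lags / tau) for w, tau in zip(weights, taus))
    return rho / sum(weights)

def best_linear_predictor(weights, taus, p):
    """Weights c_k of the best linear one-step forecast
    xhat_{t+1} = sum_k c_k x_{t-k} for a stationary process with the
    above autocorrelation, from the normal equations R c = r."""
    rho = acf_exp_mix(np.arange(p + 1), weights, taus)
    idx = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    R = rho[idx]                 # Toeplitz autocorrelation matrix
    r = rho[1:p + 1]             # right-hand side: rho(1), ..., rho(p)
    return np.linalg.solve(R, r)
```

For a single exponential the process is Markovian and the predictor collapses to one lag; a genuine superposition spreads weight over several lags, which is what makes the explicit multi-exponential solution useful.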

  2. Time series analysis using semiparametric regression on oil palm production

    NASA Astrophysics Data System (ADS)

    Yundari, Pasaribu, U. S.; Mukhaiyar, U.

    2016-04-01

    This paper presents a semiparametric kernel regression method, which has shown its flexibility and ease of mathematical calculation, especially in estimating density and regression functions. The kernel function is continuous, and it produces a smooth estimate. The classical kernel density estimator is constructed by completely nonparametric analysis and works reasonably well for functions of any form. Here, we discuss parameter estimation in time series analysis. First, we assume that parametric components exist; we then combine them with nonparametric estimation, an approach called semiparametric. The optimum bandwidth is selected by minimizing an approximation of the Mean Integrated Squared Error (MISE).
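A minimal sketch of the kernel regression machinery referred to above: a Nadaraya-Watson estimator with a Gaussian kernel, with the bandwidth chosen by leave-one-out cross-validation as a practical surrogate for minimizing the MISE approximation. The function names and data are illustrative, not the authors' implementation:

```python
import numpy as np

def nw_regression(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    d = (x_eval[:, None] - x_train[None, :]) / h
    K = np.exp(-0.5 * d**2)
    return (K @ y_train) / K.sum(axis=1)

def loo_cv_bandwidth(x, y, candidates):
    """Pick the bandwidth minimizing leave-one-out squared error."""
    best_h, best_err = None, np.inf
    for h in candidates:
        d = (x[:, None] - x[None, :]) / h
        K = np.exp(-0.5 * d**2)
        np.fill_diagonal(K, 0.0)          # leave each point out of its own fit
        yhat = (K @ y) / K.sum(axis=1)
        err = np.mean((y - yhat) ** 2)
        if err < best_err:
            best_h, best_err = h, err
    return best_h
```

A semiparametric variant would first fit the assumed parametric part and apply this smoother to the residuals.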

  3. Time series ARIMA models for daily price of palm oil

    NASA Astrophysics Data System (ADS)

    Ariff, Noratiqah Mohd; Zamhawari, Nor Hashimah; Bakar, Mohd Aftar Abu

    2015-02-01

    Palm oil is deemed one of the most important commodities forming the economic backbone of Malaysia. Modeling and forecasting the daily price of palm oil is of great interest for Malaysia's economic growth. In this study, time series ARIMA models are used to fit the daily price of palm oil. The Akaike Information Criterion (AIC), the Akaike Information Criterion with a correction for finite sample sizes (AICc), and the Bayesian Information Criterion (BIC) are used to compare the different ARIMA models being considered. It is found that the ARIMA(1,2,1) model is suitable for the daily price of crude palm oil in Malaysia for the years 2010 to 2012.
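The information-criterion comparison can be illustrated with a small numpy sketch. For brevity this selects the order of a pure AR model by AIC via conditional least squares, rather than fitting full ARIMA(p,d,q) models as in the study:

```python
import numpy as np

def fit_ar_aic(x, max_p):
    """Fit AR(p) by conditional least squares for p = 1..max_p and
    return the order with minimal AIC (Gaussian likelihood)."""
    n = len(x)
    aics = {}
    for p in range(1, max_p + 1):
        # Condition on the first max_p observations so that every
        # candidate order is scored on the same sample.
        Y = x[max_p:]
        X = np.column_stack([x[max_p - k:n - k] for k in range(1, p + 1)])
        X = np.column_stack([np.ones(len(Y)), X])
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ coef
        sigma2 = resid @ resid / len(Y)
        # Parameter count: p AR coefficients + intercept + noise variance.
        aics[p] = len(Y) * np.log(sigma2) + 2 * (p + 2)
    return min(aics, key=aics.get), aics
```

The same scoring loop extends directly to AICc and BIC by swapping the penalty term.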

  4. Chaotic time series analysis in economics: Balance and perspectives

    SciTech Connect

    Faggini, Marisa

    2014-12-15

    The aim of the paper is not to review the large body of work concerning nonlinear time series analysis in economics, about which much has been written, but rather to focus on the new techniques developed to detect chaotic behaviours in economic data. More specifically, our attention will be devoted to reviewing some of these techniques and their application to economic and financial data in order to understand why chaos theory, after a period of growing interest, appears now not to be such an interesting and promising research area.

  5. Time series as a diagnostic tool for EKG

    NASA Astrophysics Data System (ADS)

    Erkal, Cahit; Cecen, Aydin

    2007-11-01

    A preliminary analysis of heart rate variability (peak-to-peak intervals based on EKG) will be presented using the tools of nonlinear dynamics and chaos. We show that determining the uncertainty of the most commonly used invariant, the correlation dimension, and properly implementing time series analysis tools are necessary to differentiate between the healthy and unhealthy states of the heart. We present an example analysis based on normal and atrial fibrillation EKGs and point out some pitfalls that may give rise to misleading conclusions.
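The correlation dimension mentioned above is conventionally estimated with the Grassberger-Procaccia correlation sum on a time-delay embedding of the scalar series. The sketch below is a generic textbook version, not the authors' implementation; embedding parameters would have to be chosen per data set:

```python
import numpy as np

def delay_embed(x, m, tau):
    """Time-delay embedding of a scalar series into m dimensions."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[k * tau:k * tau + n] for k in range(m)])

def correlation_sum(Y, r):
    """Grassberger-Procaccia C(r): fraction of point pairs within r."""
    d2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(Y), k=1)
    return np.mean(d2[iu] < r * r)

def correlation_dimension(Y, r1, r2):
    """Two-point slope of log C(r) as a crude dimension estimate."""
    return np.log(correlation_sum(Y, r2) / correlation_sum(Y, r1)) / np.log(r2 / r1)
```

In practice the slope is fitted over a scaling region of several radii, and its uncertainty is exactly the quantity the abstract stresses must be reported.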

  6. Multifractal Time Series Analysis Based on Detrended Fluctuation Analysis

    NASA Astrophysics Data System (ADS)

    Kantelhardt, Jan; Stanley, H. Eugene; Zschiegner, Stephan; Bunde, Armin; Koscielny-Bunde, Eva; Havlin, Shlomo

    2002-03-01

    In order to develop an easily applicable method for the multifractal characterization of non-stationary time series, we generalize detrended fluctuation analysis (DFA), a well-established method for determining monofractal scaling properties and detecting long-range correlations. We relate the new multifractal DFA method to the standard partition function-based multifractal formalism, and compare it to the wavelet transform modulus maxima (WTMM) method, a well-established but more involved procedure for this purpose. We employ the multifractal DFA method to determine whether the heart rhythm during different sleep stages is characterized by different multifractal properties.
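A minimal sketch of the monofractal DFA that the paper generalizes (the multifractal version replaces the quadratic mean of segment variances with q-th order moments):

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis: fluctuation function F(s)."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Mean squared residual around a polynomial fit in each segment.
        var = [np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
               for seg in segs]
        F.append(np.sqrt(np.mean(var)))
    return np.array(F)

def dfa_exponent(x, scales, order=1):
    """Scaling exponent alpha: slope of log F(s) versus log s."""
    F = dfa(x, scales, order)
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Uncorrelated noise gives alpha near 0.5 and an integrated random walk gives alpha near 1.5, the two benchmark cases usually checked before applying the method to data such as heartbeat intervals.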

  7. Time series analysis of transient chaos: Theory and experiment

    SciTech Connect

    Janosi, I.M.; Tel, T.

    1996-06-01

    A simple method is described for reconstructing nonattracting chaotic sets from time series by gluing together those pieces of many transiently chaotic signals that come close to the invariant set. The method is illustrated both with a map of well-known dynamics, the Hénon map, and with a signal obtained from an experiment, the NMR laser. The strange saddle responsible for the transient chaotic behavior is reconstructed and its characteristics, such as dimension, Lyapunov exponent, and correlation function, are determined. © 1996 American Institute of Physics.

  8. Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam

    NASA Astrophysics Data System (ADS)

    Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad

    2015-05-01

    Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam were estimated for the entire top surface of the beam and at the attached witness plates. The results were assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.

  9. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

    An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and yields a time synchronization accuracy of better than 1 ns for long transmission lines of several tens of kilometers. The method accurately measures the difference in delay between the two wavelength signals caused by the chromatic dispersion of the fiber, which limits conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this delay difference and then show that a delay measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 micrometers and 1.55 micrometers along a 50 km fiber using the proposed method. Sub-nanosecond delay measurement using simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 micrometer range is also shown.

  10. Nonlinear Aeroelastic Analysis Using a Time-Accurate Navier-Stokes Equations Solver

    NASA Technical Reports Server (NTRS)

    Kuruvila, Geojoe; Bartels, Robert E.; Hong, Moeljo S.; Bhatia, G.

    2007-01-01

    A method to simulate limit cycle oscillation (LCO) due to control surface freeplay using a modified CFL3D, a time-accurate Navier-Stokes computational fluid dynamics (CFD) analysis code with structural modeling capability, is presented. This approach can be used to analyze the aeroelastic response of aircraft with structural behavior characterized by nonlinearity in the force versus displacement curve. A limited validation of the method, using very low Mach number experimental data for a three-degree-of-freedom (pitch/plunge/flap deflection) airfoil model with flap freeplay, is also presented.

  11. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Maloszewski, P.

    2012-12-01

    Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and were used to solve entirely different problems. We show that by combining two classical models, namely the Boussinesq equation describing spring baseflow recession and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can in some cases be estimated far more accurately than with the latter alone. Under the assumption that the aquifer base is sub-horizontal, the mean residence time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater residence time that can refine those obtained from tritium measurements. This approach is demonstrated in a case study predicting atrazine concentration trends in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005, with different rates of decrease. For some of the springs, the best agreement between observed and predicted times of trend reversal was reached for the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was, however, poorer for the springs displaying the most gentle recessions, possibly indicating the stronger influence of continuous groundwater recharge during the dry period.

  12. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    NASA Astrophysics Data System (ADS)

    Farlin, J.; Maloszewski, P.

    2013-05-01

    Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and have been used to solve entirely different problems. We show that by combining two classical models, namely the Boussinesq equation describing spring baseflow recession and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can in some cases be estimated far more accurately than with the latter alone. Under the assumption that the aquifer base is sub-horizontal, the mean transit time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater transit time that can refine those obtained from tritium measurements. The approach is illustrated in a case study predicting atrazine concentration trends in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005, with different rates of decrease. For some of the springs, the actual time of trend reversal and the rate of change agreed extremely well with the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was, however, poorer for the springs displaying the most gentle recessions, possibly indicating a stronger influence of continuous groundwater recharge during the summer months.
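The exponential piston-flow model used in the dating step has a closed-form transit time distribution, which makes the combination with recession analysis easy to check numerically. A small sketch with illustrative parameter values (not those of the Luxembourg Sandstone springs), using the standard form of the density with mean transit time T and ratio eta of total to exponential-flow volume:

```python
import numpy as np

def epm_density(t, T, eta):
    """Exponential piston-flow model transit time distribution.

    T: mean transit time; eta: ratio of total volume to the volume
    with exponential flow. Water younger than the piston-flow delay
    T*(1 - 1/eta) cannot reach the spring, hence the cutoff.
    """
    t = np.asarray(t, dtype=float)
    t0 = T * (1.0 - 1.0 / eta)             # piston-flow delay
    return np.where(t >= t0,
                    (eta / T) * np.exp(-eta * t / T + eta - 1.0),
                    0.0)
```

The density integrates to one and its first moment equals T, which is the quantity that spring baseflow recession provides independently of the tritium data.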

  13. An Accurate Timing Alignment Method with Time-to-Digital Converter Linearity Calibration for High-Resolution TOF PET

    PubMed Central

    Li, Hongdi; Wang, Chao; An, Shaohui; Lu, Xingyu; Dong, Yun; Liu, Shitao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Wong, Wai-Hoi

    2015-01-01

    Accurate PET system timing alignment minimizes the coincidence time window and therefore reduces random events and improves image quality. It is also critical for time-of-flight (TOF) image reconstruction. Here, we use a thin annular cylinder (shell) phantom filled with a radioactive source and located axially and centrally in a PET camera for the timing alignment of a TOF PET system. This timing alignment method involves measuring the time differences between the selected coincidence detector pairs, calibrating the differential and integral nonlinearity of the time-to-digital converter (TDC) with the same raw data and deriving the intrinsic time biases for each detector using an iterative algorithm. The raw time bias for each detector is downloaded to the front-end electronics and the residual fine time bias can be applied during the TOF list-mode reconstruction. Our results showed that a timing alignment accuracy of better than ±25 ps can be achieved, and a preliminary timing resolution of 473 ps (full width at half maximum) was measured in our prototype TOF PET/CT system. PMID:26543243

  14. Agile: A Time-series CCD Photometer To Observe Blue Variables

    NASA Astrophysics Data System (ADS)

    Mukadam, Anjum S.; Owen, R.; Mannery, E. J.

    2007-05-01

    We use differential time-series photometry to study phenomena variable at short timescales. A good time-series photometer not only requires an accurate measurement of the start time of an exposure, but also the exposure duration. Elements that cause a jitter in these measurements are undesirable, such as an undisciplined clock used for timing, a mechanical shutter, and an unregulated time-share data acquisition system. Besides accuracy in timing, a good time-series photometer must be able to provide sufficient time resolution to sample the variable phenomena well. This requires that the photometer allow a short exposure time and also introduce an insignificant dead time between consecutive exposures. Frame transfer CCDs are ideal for time-series photometry as they can provide back-to-back exposures with no dead time. We have assembled a time-series photometer called Agile, optimized to observe the rapid oscillations of blue variables such as pulsating white dwarfs and subdwarfs, cataclysmic variables, and flare stars. Agile is based on the design of a time-series photometer called Argos at McDonald Observatory (Nather & Mukadam 2004), and utilizes a commercial frame transfer CCD camera from Princeton Instruments with 1024x1024 active pixels. This instrument mounts at the Nasmyth focus of the 3.5m telescope at Apache Point Observatory, where we expect a field-of-view of 2.6x2.6 arcmin and a platescale of 0.15 arcsec/pixel using a focal reducer. We use a GPS-based programmable timer card to generate pulses that initiate frame transfer in the CCD camera, giving us complete control over both the exposure start time and its duration to high precision. We expect to read an unbinned full frame in 1.1s using a low noise amplifier operating at 1MHz with a read noise of order <8 electrons RMS. The CCD is back-illuminated and thinned for improved blue sensitivity and provides a quantum efficiency >80% in the wavelength range 4500-7500A.

  15. Application of the G-JF discrete-time thermostat for fast and accurate molecular simulations

    NASA Astrophysics Data System (ADS)

    Grønbech-Jensen, Niels; Hayre, Natha Robert; Farago, Oded

    2014-02-01

    A new Langevin-Verlet thermostat that preserves the fluctuation-dissipation relationship for discrete time steps is applied to molecular modeling and tested against several popular suites (AMBER, GROMACS, LAMMPS) using a small molecule as an example that can be easily simulated by all three packages. Contrary to existing methods, the new thermostat exhibits no detectable changes in the sampling statistics as the time step is varied in the entire numerical stability range. The simple form of the method, which we express in the three common forms (Velocity-Explicit, Störmer-Verlet, and Leap-Frog), allows for easy implementation within existing molecular simulation packages to achieve faster and more accurate results with no cost in either computing time or programming complexity.
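The G-JF update has a compact closed form; below is a hedged 1D sketch for a harmonic oscillator in reduced units (parameters are illustrative, and production use would go through the cited MD packages). The property tested is the method's selling point stated above: configurational statistics, here Var(x) = kT/k for a harmonic potential, remain correct even at large time steps:

```python
import numpy as np

def gjf_langevin(x0, v0, force, m, gamma, kT, dt, steps, rng):
    """G-JF Langevin-Verlet integrator (Gronbech-Jensen & Farago), 1D.

    b and a are the damping factors of the discrete-time scheme;
    beta is the integrated Gaussian noise with variance 2*gamma*kT*dt.
    """
    b = 1.0 / (1.0 + gamma * dt / (2.0 * m))
    a = (1.0 - gamma * dt / (2.0 * m)) * b
    x, v = x0, v0
    f = force(x)
    xs = np.empty(steps)
    for n in range(steps):
        beta = rng.normal(0.0, np.sqrt(2.0 * gamma * kT * dt))
        x = x + b * dt * v + b * dt**2 / (2.0 * m) * f + b * dt / (2.0 * m) * beta
        f_new = force(x)
        v = a * v + dt / (2.0 * m) * (a * f + f_new) + b / m * beta
        f = f_new
        xs[n] = x
    return xs
```

This is the Velocity-Explicit form; the Störmer-Verlet and Leap-Frog forms mentioned in the abstract are algebraic rearrangements of the same update.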

  16. A comparative study of shallow groundwater level simulation with three time series models in a coastal aquifer of South China

    NASA Astrophysics Data System (ADS)

    Yang, Q.; Wang, Y.; Zhang, J.; Delgado, J.

    2015-04-01

    Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, three time series analysis methods, Holt-Winters (HW), integrated time series (ITS), and seasonal autoregressive integrated moving average (SARIMA), are explored for simulating groundwater levels in a coastal aquifer in South China. Monthly groundwater table depth data collected over a long period, from 2000 to 2011, are simulated and compared using the three time series models. The error criteria are the coefficient of determination (R2), the Nash-Sutcliffe model efficiency coefficient (E), and the root-mean-squared error. The results indicate that all three models accurately reproduce the historical time series of groundwater levels, and the comparison shows that the HW model predicts the groundwater levels more accurately than the SARIMA and ITS models. It is recommended that additional studies explore the proposed method, which can in turn facilitate the development and implementation of more effective and sustainable groundwater management strategies.
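Of the three methods, Holt-Winters is the simplest to sketch. Below is a minimal additive variant with hand-picked smoothing constants; the study's series and the ITS/SARIMA competitors are not reproduced, and the multiplicative variant differs only in dividing rather than subtracting the seasonal term:

```python
import numpy as np

def holt_winters_additive(x, period, alpha, beta, gamma):
    """Additive Holt-Winters smoothing.

    Returns one-step-ahead fitted values; alpha, beta, gamma are the
    level, trend, and seasonal smoothing constants.
    """
    level = np.mean(x[:period])
    trend = (np.mean(x[period:2 * period]) - np.mean(x[:period])) / period
    season = list(x[:period] - level)
    fitted = []
    for t, obs in enumerate(x):
        s = season[t % period]
        fitted.append(level + trend + s)
        new_level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        new_trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % period] = gamma * (obs - new_level) + (1 - gamma) * s
        level, trend = new_level, new_trend
    return np.array(fitted)
```

In practice the three constants are themselves optimized against an error criterion such as the RMSE used in the study.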

  17. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  18. Comparison of time series models for predicting campylobacteriosis risk in New Zealand.

    PubMed

    Al-Sakkaf, A; Jones, G

    2014-05-01

    Predicting campylobacteriosis cases is a matter of considerable concern in New Zealand, where the number of notified cases was the highest among developed countries in 2006. There is thus a need for a model or tool that accurately predicts the number of campylobacteriosis cases, as the Microbial Risk Assessment Model previously used for this purpose failed to predict the actual number of cases accurately. We explore the appropriateness of classical time series modelling approaches for predicting campylobacteriosis. Finding the most appropriate time series model for New Zealand data involves additional practical considerations given a possible structural change, that is, a specific and sudden change in response to the implemented interventions. A univariate methodological approach was used to predict monthly disease cases using New Zealand surveillance data of campylobacteriosis incidence from 1998 to 2009. The data from 1998 to 2008 were used to model the time series, with the year 2009 held out of the data set for model validation. The best two models were then fitted to the full 1998-2009 data and used to predict each month of 2010. The Holt-Winters (multiplicative) and ARIMA (additive) intervention models were considered the best models for predicting campylobacteriosis in New Zealand. The prediction of the annual total for 2010 by the additive ARIMA model with intervention was slightly better than that of the multiplicative Holt-Winters method, the former predicting only 23 cases fewer than the actual reported cases. It is confirmed that classical time series techniques such as ARIMA with intervention and Holt-Winters can provide good prediction performance for campylobacteriosis risk in New Zealand. The results reported by this study are useful to the New Zealand Health and Safety Authority's efforts in addressing the problem of the campylobacteriosis epidemic. PMID:23551848

  19. Feature extraction for change analysis in SAR time series

    NASA Astrophysics Data System (ADS)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2015-10-01

    In remote sensing, change detection represents a broad field of research. If time series data are available, change detection can be used for monitoring applications. These applications require regular image acquisitions at an identical time of day over a defined period. Among remote sensing sensors, radar is especially well suited to applications requiring regularity, since it is independent of most weather and atmospheric influences; furthermore, the time of day of the acquisitions plays no role due to the independence from daylight. Since 2007, the German SAR (Synthetic Aperture Radar) satellite TerraSAR-X (TSX) has permitted the acquisition of high resolution radar images suitable for the analysis of dense built-up areas. In a former study, we presented the change analysis of the Stuttgart (Germany) airport. The aim of this study is the categorization of the changes detected in the time series. This categorization is motivated by the fact that it is insufficient to state only where and when a specific area has changed; at least as important is what caused the change. The focus is set on the analysis of so-called high activity areas (HAAs), representing areas that changed at least four times during the investigated period. As a first step in categorizing these HAAs, the matching HAA changes (blobs) have to be identified. Afterwards, operating at this object-based blob level, several features are extracted, comprising shape-based, radiometric, statistical, and morphological values and one context feature based on a segmentation of the HAAs. This segmentation builds on morphological differential attribute profiles (DAPs). Seven context classes are established: urban, infrastructure, rural stable, rural unstable, natural, water, and unclassified. A specific HA blob is assigned to one of these classes by analyzing the CovAmCoh time series signature of the surrounding segments. 
In combination, also surrounding GIS information

  20. Land Cover Change Detection from MODIS Vegetation Index Time Series Data

    NASA Astrophysics Data System (ADS)

    Mithal, V.; O'Connor, Z.; Steinhaeuser, K.; Boriah, S.; Kumar, V.; Potter, C. S.; Klooster, S. A.

    2012-12-01

    Quantifiable knowledge about changes occurring in land cover and land use at a global scale is key to effective planning for sustainable use of diminishing natural resources such as forest cover and agricultural land. Accurate and timely information about land cover and land use changes is therefore of significant interest to earth and climate scientists as well as policy and decision makers. Recently, global time series data sets, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI), have become publicly available and have been used to identify changes in vegetation cover. In this talk, we will discuss our work analyzing the MODIS EVI time series data sets for global land cover change detection. Our group has developed a suite of time series change detection methods that are used to identify EVI time series with patterns indicative of land cover disturbance, such as abrupt or gradual change, or changes in the recurring annual vegetation pattern. These algorithms can successfully identify different land cover change events, such as deforestation, forest fires, agricultural conversions, and degradation due to insect damage, at a global scale. In the context of land cover monitoring, one of the significant challenges is posed by the differences in inter-annual variability and noise characteristics of different land cover types. These data characteristics can significantly impact change detection performance, especially in land cover types such as farms, grasslands, and tropical forests. We will discuss our recent work that incorporates a bootstrap-based normalization of change detection scores to account for the natural variability present in vegetation time series data. We studied the strengths and weaknesses of our proposed normalization approaches in the context of characteristics of land cover data such as seasonality and noise, and showed that the relative performance of normalization approaches varies significantly depending on the

  1. Detection of intermittent events in atmospheric time series

    NASA Astrophysics Data System (ADS)

    Paradisi, P.; Cesari, R.; Palatella, L.; Contini, D.; Donateo, A.

    2009-04-01

    associated with the occurrence of critical events in the atmospheric dynamics. The critical events are associated with transitions between meta-stable configurations. Consequently, this approach could give some effort in the study of Extreme Events in meteorology and climatology and in weather classification schemes. Then, the renewal approach could give some effort in the modelling of non-Gaussian closures for turbulent fluxes [3]. In the proposed approach the main features that need to be estimated are: (a) the distribution of life-times of a given atmospheric meta-stable structure (Waiting Times between two critical events); (b) the statistical distribution of fluctuations; (c) the presence of memory in the time series. These features are related to the evaluation of memory content and scaling from the time series. In order to analyze these features, in recent years some novel statistical techniques have been developed. In particular, the analysis of Diffusion Entropy [4] was shown to be a robust method for the determination of the dynamical scaling. This property is related to the power-law behaviour of the life-time statistics and to the memory properties of the time series. The analysis of Renewal Aging [5], based on renewal theory [2], allows to estimate the content of memory in a time series that is related to the amount of critical events in the time series itself. After a brief review of the statistical techniques (Diffusion Entropy and Renewal Aging), an application to experimental atmospheric time series will be illustrated. References [1] Weiss G.H., Rubin R.J., Random Walks: theory and selected applications, Advances in Chemical Physics,1983, 52, 363-505 (1983). [2] D.R. Cox, Renewal Theory, Methuen, London (1962). [3] P. Paradisi, R. Cesari, F. Mainardi, F. Tampieri: The fractional Fick's law for non-local transport processes, Physica A, 293, p. 130-142 (2001). [4] P. Grigolini, L. Palatella, G. Raffaelli, Fractals 9 (2001) 439. [5] P. 
Allegrini, F. Barbi, P
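
The Diffusion Entropy analysis cited above admits a compact numerical sketch: sum the series over windows of length t to build diffusion trajectories, estimate the Shannon entropy S(t) of the displacement distribution from a histogram, and read the scaling δ from the slope of S(t) versus ln t. The code below is only a minimal illustration of that idea (the window sizes, bin count, and histogram entropy estimator are illustrative assumptions, not the authors' implementation); for uncorrelated Gaussian noise the expected scaling is δ ≈ 0.5.

```python
import numpy as np

def dea_scaling(xi, windows=(2, 4, 8, 16, 32, 64), bins=40):
    """Estimate the Diffusion Entropy scaling exponent delta.

    For each window length t, the sums over all overlapping windows form
    the diffusion variable; S(t) is the Shannon entropy of its histogram,
    and delta is the slope of S(t) against ln t.
    """
    xi = np.asarray(xi, dtype=float)
    entropies = []
    for t in windows:
        disp = np.convolve(xi, np.ones(t), mode="valid")  # window sums
        p, edges = np.histogram(disp, bins=bins, density=True)
        dx = edges[1] - edges[0]
        nz = p > 0
        entropies.append(-np.sum(p[nz] * np.log(p[nz])) * dx)
    delta, _ = np.polyfit(np.log(windows), entropies, 1)
    return delta
```

For a renewal process with power-law waiting times, δ deviates from 0.5; that deviation is how the method detects anomalous scaling.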

  2. Girls' Series Books: A View of Times Past.

    ERIC Educational Resources Information Center

    Schumacher, Mark

    The Girls' Books in Series collection at the University of North Carolina at Greensboro's Jackson Library contains over 1850 volumes, with publication dates ranging from the mid-1800s to the 1980s. The library's list currently contains approximately 511 different series. The library owns all the titles for 85 of the series. For 167 of the series,…

  3. Prediction of altimetric sea level anomalies using time series models based on spatial correlation

    NASA Astrophysics Data System (ADS)

    Miziński, Bartłomiej; Niedzielski, Tomasz

    2014-05-01

    Sea level anomaly (SLA) time series, which are time-varying gridded data, can be modelled and predicted using time series methods. This approach has been shown to provide accurate forecasts within the Prognocean system, the novel infrastructure for anticipating sea level change designed and built at the University of Wrocław (Poland), which utilizes the real-time SLA data from Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO). The system runs a few models concurrently, and our ocean prediction experiment includes both uni- and multivariate time series methods. The univariate ones are: extrapolation of the polynomial-harmonic model (PH), extrapolation of the polynomial-harmonic model and autoregressive prediction (PH+AR), and extrapolation of the polynomial-harmonic model and self-exciting threshold autoregressive prediction (PH+SETAR). The following multivariate methods are used: extrapolation of the polynomial-harmonic model and vector autoregressive prediction (PH+VAR), and extrapolation of the polynomial-harmonic model and generalized space-time autoregressive prediction (PH+GSTAR). Because the aforementioned models and the corresponding forecasts are computed in real time, independently and in the same computational setting, we can compare the accuracies offered by the models. The objective of this work is to verify the hypothesis that the multivariate prediction techniques, which make use of cross-correlation and spatial correlation, perform better than the univariate ones. The analysis is based on the daily-fitted and updated time series models predicting the SLA data (lead time of two weeks) over several months when El Niño/Southern Oscillation (ENSO) was in its neutral state.
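
The univariate PH+AR scheme can be sketched as follows: fit a polynomial-harmonic (trend plus annual cycle) model by least squares, fit an autoregressive model to its residuals, and extrapolate both components over the forecast horizon. This is a minimal illustration, not the Prognocean implementation; the linear trend, single annual harmonic, AR order, and two-week horizon are illustrative assumptions.

```python
import numpy as np

def ph_ar_forecast(y, t, period=365.25, ar_order=3, horizon=14):
    """Polynomial-harmonic extrapolation plus AR prediction of the residuals."""
    # Deterministic part: intercept, linear trend, one annual harmonic.
    def design(tt):
        return np.column_stack([np.ones_like(tt), tt,
                                np.sin(2 * np.pi * tt / period),
                                np.cos(2 * np.pi * tt / period)])
    X = design(t)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    # AR(p) on the residuals, fitted by ordinary least squares.
    p, n = ar_order, len(resid)
    lags = np.column_stack([resid[p - 1 - k:n - 1 - k] for k in range(p)])
    phi, *_ = np.linalg.lstsq(lags, resid[p:], rcond=None)

    # Iterate the AR recursion forward to forecast the residuals.
    hist = list(resid[-p:])
    res_fc = []
    for _ in range(horizon):
        nxt = float(np.dot(phi, hist[:-p - 1:-1]))  # most recent lag first
        res_fc.append(nxt)
        hist.append(nxt)

    t_fut = t[-1] + np.arange(1, horizon + 1)
    return design(t_fut) @ beta + np.array(res_fc)
```

PH+SETAR and the multivariate variants replace the AR step with threshold or vector/space-time autoregressions on the same residuals.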

  4. Hardware and Software Developments for the Accurate Time-Linked Data Acquisition System

    SciTech Connect

    BERG,DALE E.; RUMSEY,MARK A.; ZAYAS,JOSE R.

    1999-11-09

    Wind-energy researchers at Sandia National Laboratories have developed a new, light-weight, modular data acquisition system capable of acquiring long-term, continuous, multi-channel time-series data from operating wind turbines. New hardware features have been added to this system to make it more flexible and permit programming via telemetry. User-friendly Windows-based software has been developed for programming the hardware and acquiring, storing, analyzing, and archiving the data. This paper briefly reviews the major components of the system, summarizes the recent hardware enhancements and operating experiences, and discusses the features and capabilities of the software programs that have been developed.

  5. Earth's Surface Displacements from the GPS Time Series

    NASA Astrophysics Data System (ADS)

    Haritonova, D.; Balodis, J.; Janpaule, I.; Morozova, K.

    2015-11-01

    The GPS observations of both Latvian permanent GNSS networks - EUPOS®-Riga and LatPos - have been collected for a period of 8 years, from 2007 to 2014. Local surface displacements have been derived from the obtained coordinate time series after eliminating different impact sources. The Bernese software is used for data processing. The EUREF Permanent Network (EPN) stations in the surroundings of Latvia are selected as fiducial stations. The results have shown a positive tendency of vertical displacements in the western part of Latvia - station heights are increasing - and negative velocities are observed in the central and eastern parts. Station vertical velocities range within about 4 mm/year. In the case of horizontal displacements, site velocities are up to 1 mm/year and mostly oriented to the south. The obtained results have been compared with data from the deformation model NKG_RF03vel. Additionally, the purpose of this study is to analyse GPS time series obtained using two different data processing strategies: Precise Point Positioning (PPP) and estimation of station coordinates relative to the positions of fiducial stations, also known as Differential GNSS.

  6. On clustering of non-stationary meteorological time series

    NASA Astrophysics Data System (ADS)

    Horenko, Illia

    2010-04-01

    A method for clustering of multidimensional non-stationary meteorological time series is presented. The approach is based on optimization of a regularized averaged clustering functional describing the quality of data representation in terms of several regression models and a metastable hidden process switching between them. The proposed numerical clustering algorithm is based on application of the finite element method (FEM) to the problem of non-stationary time series analysis. The main advantage of the presented algorithm compared to Hidden Markov Models (HMMs) and to finite mixture models is that no a priori assumptions about the probability model for the hidden and observed processes (e.g., Markovianity or stationarity) are necessary for the proposed method. Another attractive numerical feature of the discussed algorithm is the possibility to choose the optimal number of metastable clusters and a natural opportunity to control the fuzziness of the resulting decomposition a posteriori, based on the statistical distinguishability of the resulting persistent cluster states. The resulting FEM-K-trends algorithm is compared with some standard fuzzy clustering methods on toy model examples, on analysis of multidimensional historical temperature data locally in Europe, and on the global temperature data set.

  7. A Markov switching model for annual hydrologic time series

    NASA Astrophysics Data System (ADS)

    Akıntuğ, B.; Rasmussen, P. F.

    2005-09-01

    This paper investigates the properties of Markov switching (MS) models (also known as hidden Markov models) for generating annual time series. This type of model has been used in a number of recent studies in the water resources literature. The model considered here assumes that climate is switching between M states and that the state sequence can be described by a Markov chain. Observations are assumed to be drawn from a normal distribution whose parameters depend on the state variable. We present the stochastic properties of this class of models along with procedures for model identification and parameter estimation. Although, at first glance, MS models appear to be quite different from ARMA models, we show that it is possible to find an ARMA model that has the same autocorrelation function and the same marginal distribution as any given MS model. Hence, despite the difference in model structure, there are strong similarities between MS and ARMA models. MS and ARMA models are applied to the time series of mean annual discharge of the Niagara River. Although it is difficult to draw any general conclusion from a single case study, it appears that MS models (and ARMA models derived from MS models) generally have stronger autocorrelation at higher lags than ARMA models estimated by conventional maximum likelihood. This may be an important property if the purpose of the study is the analysis of multiyear droughts.
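
The generating mechanism described here is easy to sketch: a Markov chain selects the climate state for each year, and the observation is drawn from the normal distribution attached to that state. The two-state parameters in the example are illustrative, not the paper's Niagara River fit.

```python
import numpy as np

def simulate_ms(n, P, mu, sigma, rng=None):
    """Generate n values from an M-state Markov switching model.

    P is the MxM transition matrix of the hidden state chain; mu and
    sigma hold the state-dependent normal parameters.
    """
    rng = np.random.default_rng(rng)
    M = len(mu)
    states = np.empty(n, dtype=int)
    states[0] = rng.integers(M)
    for t in range(1, n):
        states[t] = rng.choice(M, p=P[states[t - 1]])
    return states, rng.normal(mu[states], sigma[states])

# Illustrative two-state "dry/wet" climate with persistent states.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
states, flow = simulate_ms(500, P, mu=np.array([0.0, 5.0]),
                           sigma=np.array([1.0, 1.0]), rng=42)
```

Persistence of the hidden states (diagonal entries of P close to 1) is what produces the stronger high-lag autocorrelation noted in the abstract.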

  8. Efficient Bayesian inference for natural time series using ARFIMA processes

    NASA Astrophysics Data System (ADS)

    Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.

    2015-11-01

    Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. In this paper we present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators. For CET we also extend our method to seasonal long memory.
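
In ARFIMA, long memory enters through the fractional differencing operator (1 - B)^d, whose binomial weights obey a one-line recursion; applying them removes (or, with negative d, injects) long memory. The sketch below covers just this ingredient, not the paper's Bayesian machinery.

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n binomial weights of (1 - B)^d: w[k] = w[k-1] * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply fractional differencing of order d to a series."""
    x = np.asarray(x, dtype=float)
    w = frac_diff_weights(d, len(x))
    # Convolve each point with the weights over its own past.
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])
```

For d = 1 the weights collapse to (1, -1, 0, ...), i.e. ordinary first differences; long memory corresponds to fractional 0 < d < 0.5.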

  9. Financial Time Series Prediction Using Spiking Neural Networks

    PubMed Central

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two “traditional”, rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  10. Data compression to define information content of hydrological time series

    NASA Astrophysics Data System (ADS)

    Weijs, S. V.; van de Giesen, N.; Parlange, M. B.

    2013-08-01

    When inferring models from hydrological data or calibrating hydrological models, we are interested in the information content of those data to quantify how much can potentially be learned from them. In this work we take a perspective from (algorithmic) information theory, (A)IT, to discuss some underlying issues regarding this question. In the information-theoretical framework, there is a strong link between information content and data compression. We exploit this by using data compression performance as a time series analysis tool and highlight the analogy to information content, prediction and learning (understanding is compression). The analysis is performed on time series of a set of catchments. We discuss both the deeper foundation from algorithmic information theory, some practical results and the inherent difficulties in answering the following question: "How much information is contained in this data set?". The conclusion is that the answer to this question can only be given once the following counter-questions have been answered: (1) information about which unknown quantities? and (2) what is your current state of knowledge/beliefs about those quantities? Quantifying information content of hydrological data is closely linked to the question of separating aleatoric and epistemic uncertainty and quantifying maximum possible model performance, as addressed in the current hydrological literature. The AIT perspective teaches us that it is impossible to answer this question objectively without specifying prior beliefs.

  11. Data compression to define information content of hydrological time series

    NASA Astrophysics Data System (ADS)

    Weijs, S. V.; van de Giesen, N.; Parlange, M. B.

    2013-02-01

    When inferring models from hydrological data or calibrating hydrological models, we might be interested in the information content of those data to quantify how much can potentially be learned from them. In this work we take a perspective from (algorithmic) information theory (AIT) to discuss some underlying issues regarding this question. In the information-theoretical framework, there is a strong link between information content and data compression. We exploit this by using data compression performance as a time series analysis tool and highlight the analogy to information content, prediction, and learning (understanding is compression). The analysis is performed on time series of a set of catchments, searching for the mechanisms behind compressibility. We discuss both the deeper foundation from algorithmic information theory, some practical results and the inherent difficulties in answering the question: "How much information is contained in this data?". The conclusion is that the answer to this question can only be given once the following counter-questions have been answered: (1) Information about which unknown quantities? (2) What is your current state of knowledge/beliefs about those quantities? Quantifying information content of hydrological data is closely linked to the question of separating aleatoric and epistemic uncertainty and quantifying maximum possible model performance, as addressed in current hydrological literature. The AIT perspective teaches us that it is impossible to answer this question objectively, without specifying prior beliefs. These beliefs are related to the maximum complexity one is willing to accept as a law and what is considered as random.
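
The link between information content and compression can be made concrete with a general-purpose compressor: quantize the series to text and compare compressed size to raw size. zlib here is an illustrative stand-in for whatever compressors a study might use, and the quantization precision is an assumption; the measured "information" depends on exactly such choices, which is the abstract's point about prior beliefs.

```python
import math
import random
import zlib

def compression_ratio(values, precision=2):
    """Compressed/raw size of a quantized series; a lower ratio means the
    compressor found more structure (less information per sample)."""
    raw = ",".join(f"{v:.{precision}f}" for v in values).encode()
    return len(zlib.compress(raw, level=9)) / len(raw)

random.seed(0)
n = 5000
noise = [random.gauss(0.0, 1.0) for _ in range(n)]              # little structure
seasonal = [math.sin(2 * math.pi * t / 365) for t in range(n)]  # strongly structured

print(compression_ratio(noise), compression_ratio(seasonal))
```

The structured (seasonal) series compresses far better than the noise, mirroring the "understanding is compression" analogy.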

  12. Time series clustering analysis of health-promoting behavior

    NASA Astrophysics Data System (ADS)

    Yang, Chi-Ta; Hung, Yu-Shiang; Deng, Guang-Feng

    2013-10-01

    Health promotion must be emphasized to achieve the World Health Organization goal of health for all. Since the global population is aging rapidly, the ComCare elder health-promoting service was developed by the Taiwan Institute for Information Industry in 2011. Based on the Pender health promotion model, the ComCare service offers five categories of health-promoting functions to address the everyday needs of seniors: nutrition management, social support, exercise management, health responsibility, and stress management. To assess the overall ComCare service and to improve understanding of the health-promoting behavior of elders, this study analyzed health-promoting behavioral data automatically collected by the ComCare monitoring system. In the 30,638 session records collected for 249 elders from January 2012 to March 2013, behavior patterns were identified by a fuzzy c-means time series clustering algorithm combined with autocorrelation-based representation schemes. The analysis showed that time series data for elder health-promoting behavior can be classified into four different clusters. Each type reveals different health-promoting needs, frequencies, function numbers and behaviors. The data analysis results can assist policymakers, health-care providers, and experts in medicine, public health, nursing and psychology, and have been provided to the Taiwan National Health Insurance Administration to assess elder health-promoting behavior.
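
The clustering step described above - fuzzy c-means over autocorrelation-based representations - can be sketched generically: map each series to its first autocorrelation coefficients and run the standard fuzzy c-means updates on those feature vectors. The number of lags, fuzzifier m, and iteration count are illustrative assumptions, not the ComCare configuration.

```python
import numpy as np

def acf_features(x, nlags=10):
    """Represent a series by its first autocorrelation coefficients."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate weighted centroid updates and
    inverse-distance membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Each row of U gives a series' graded membership in every behavior cluster, which is what makes the decomposition "fuzzy" rather than hard.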

  13. Predicting physical time series using dynamic ridge polynomial neural networks.

    PubMed

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950

  14. Financial time series prediction using spiking neural networks.

    PubMed

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  15. Coastal Atmosphere and Sea Time Series (CoASTS)

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Berthon, Jean-Francoise; Zibordi, Giuseppe; Doyle, John P.; Grossi, Stefania; vanderLinde, Dirk; Targa, Cristina; McClain, Charles R. (Technical Monitor)

    2002-01-01

    In this document, the first three years of a time series of bio-optical marine and atmospheric measurements are presented and analyzed. These measurements were performed from an oceanographic tower in the northern Adriatic Sea within the framework of the Coastal Atmosphere and Sea Time Series (CoASTS) project, an ocean color calibration and validation activity. The data set collected includes spectral measurements of the in-water apparent (diffuse attenuation coefficient, reflectance, Q-factor, etc.) and inherent (absorption and scattering coefficients) optical properties, as well as the concentrations of the main optical components (pigment and suspended matter concentrations). Clear seasonal patterns are exhibited by the marine quantities on which an appreciable short-term variability (on the order of a half day to one day) is superimposed. This short-term variability is well correlated with the changes in salinity at the surface resulting from the southward transport of freshwater coming from the northern rivers. Concentrations of chlorophyll alpha and total suspended matter span more than two orders of magnitude. The bio-optical characteristics of the measurement site pertain to both Case-I (about 64%) and Case-II (about 36%) waters, based on a relationship between the beam attenuation coefficient at 660 nm and the chlorophyll alpha concentration. Empirical algorithms relating in-water remote sensing reflectance ratios and optical components or properties of interest (chlorophyll alpha, total suspended matter, and the diffuse attenuation coefficient) are presented.

  16. Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks

    PubMed Central

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950

  17. Diagnosis of nonlinear systems using time series analysis

    SciTech Connect

    Hunter, N.F. Jr.

    1991-01-01

    Diagnosis and analysis techniques for linear systems have been developed and refined to a high degree of precision. In contrast, techniques for the analysis of data from nonlinear systems are in the early stages of development. This paper describes a time series technique for the analysis of data from nonlinear systems. The input and response time series resulting from excitation of the nonlinear system are embedded in a state space. The form of the embedding is optimized using local canonical variate analysis and singular value decomposition techniques. From the state space model, future system responses are estimated. The expected degree of predictability of the system is investigated using the state transition matrix. The degree of nonlinearity present is quantified using the geometry of the transfer function poles in the z plane. Examples of application to a linear single-degree-of-freedom system, a single-degree-of-freedom Duffing Oscillator, and linear and nonlinear three degree of freedom oscillators are presented. 11 refs., 9 figs.
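
The embedding step can be illustrated with plain delay coordinates plus a nearest-neighbour predictor; this is a simplified stand-in for the local canonical variate analysis and singular value decomposition described in the paper, with embedding dimension and delay chosen by hand.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Embed a scalar series in a dim-dimensional state space using
    delayed copies (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[k * tau:k * tau + n] for k in range(dim)])

def nn_predict(x, dim=3, tau=1):
    """Predict the next value by following the nearest neighbour of the
    current state one step forward in the embedded space."""
    x = np.asarray(x, dtype=float)
    X = delay_embed(x, dim, tau)
    cur = X[-1]
    j = int(np.argmin(np.linalg.norm(X[:-1] - cur, axis=1)))
    return x[j + (dim - 1) * tau + 1]  # successor of the neighbour state
```

On a deterministic signal the nearest neighbour's successor is an accurate forecast; for chaotic systems the growth of this prediction error over longer horizons reflects the expected degree of predictability.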

  18. Software for detection and correction of inhomogeneities in time series

    NASA Astrophysics Data System (ADS)

    Stepanek, Petr

    2010-05-01

    During the last decade, a software package consisting of the AnClim, ProClimDB and LoadData programs for processing climatological data has been created. This software offers a complex solution for processing climatological time series, starting from loading data from a central database (e.g. Oracle, via the LoadData software), through data quality control and homogenization, to time series analysis, extreme value evaluation and model output verification (ProClimDB and AnClim software). In recent years, tools for the correction of inhomogeneities in daily data were introduced. Methods already programmed in R (e.g. by Christine Gruber, ZAMG), such as HOM by Paul Della-Marta and the SPLIDHOM method of Olivier Mestre, as well as our own methods, are available, some of them able to apply a multi-element approach (using e.g. weather types). The available methods can be easily compared and evaluated (for both inhomogeneity detection and correction in this case). Comparison of the available correction methods is also a current task of the ongoing COST Action ES0601 (www.homogenisation.org). Further methods, if available under R, can easily be linked with the software, and the whole processing can then benefit from a user-friendly environment in which all the most commonly used functions for data handling and climatological processing are available (read more at www.climahom.eu).

  19. Bayesian Inference of Natural Selection from Allele Frequency Time Series.

    PubMed

    Schraiber, Joshua G; Evans, Steven N; Slatkin, Montgomery

    2016-05-01

    The advent of accessible ancient DNA technology now allows the direct ascertainment of allele frequencies in ancestral populations, thereby enabling the use of allele frequency time series to detect and estimate natural selection. Such direct observations of allele frequency dynamics are expected to be more powerful than inferences made using patterns of linked neutral variation obtained from modern individuals. We developed a Bayesian method to make use of allele frequency time series data and infer the parameters of general diploid selection, along with allele age, in nonequilibrium populations. We introduce a novel path augmentation approach, in which we use Markov chain Monte Carlo to integrate over the space of allele frequency trajectories consistent with the observed data. Using simulations, we show that this approach has good power to estimate selection coefficients and allele age. Moreover, when applying our approach to data on horse coat color, we find that ignoring a relevant demographic history can significantly bias the results of inference. Our approach is made available in a C++ software package. PMID:27010022

  20. Automatising the analysis of stochastic biochemical time-series

    PubMed Central

    2015-01-01

    Background Mathematical and computational modelling of biochemical systems has seen a lot of effort devoted to the definition and implementation of high-performance mechanistic simulation frameworks. Within these frameworks it is possible to analyse complex models under a variety of configurations, eventually selecting the best setting of, e.g., parameters for a target system. Motivation This operational pipeline relies on the ability to interpret the predictions of a model, often represented as simulation time-series. Thus, an efficient data analysis pipeline is crucial to automatise time-series analyses, bearing in mind that errors in this phase might mislead the modeller's conclusions. Results For this reason we have developed an intuitive framework-independent Python tool to automate analyses common to a variety of modelling approaches. These include assessment of useful non-trivial statistics for simulation ensembles, e.g., estimation of master equations. Intuitive and domain-independent batch scripts will allow the researcher to automatically prepare reports, thus speeding up the usual model-definition, testing and refinement pipeline. PMID:26051821

  1. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered; moreover, mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
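
The additive Holt-Winters prediction technique keeps a level, a trend, and a seasonal term, each updated by exponential smoothing; the forecast is the level plus the extrapolated trend plus the matching seasonal index. A minimal sketch follows (the smoothing constants and the first-two-seasons initialisation are illustrative choices, not the authors' calibration):

```python
def holt_winters_additive(x, period, alpha=0.3, beta=0.1, gamma=0.2, horizon=5):
    """Additive Holt-Winters forecasting for a seasonal series."""
    # Initialise level, trend and seasonal indices from the first two seasons.
    level = sum(x[:period]) / period
    trend = (sum(x[period:2 * period]) - sum(x[:period])) / period ** 2
    season = [x[i] - level for i in range(period)]
    for t in range(period, len(x)):
        prev_level = level
        level = alpha * (x[t] - season[t % period]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % period] = (gamma * (x[t] - level)
                              + (1 - gamma) * season[t % period])
    return [level + (h + 1) * trend + season[(len(x) + h) % period]
            for h in range(horizon)]
```

In the hybrid propagator the model of this kind is fitted to the residual series left by the analytical theory, so its forecast supplies the "missing dynamics" correction.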

  2. A new complexity measure for time series analysis and classification

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Balasubramanian, Karthi; Dey, Sutirth

    2013-07-01

    Complexity measures are used in a number of applications, including extraction of information from data such as ecological time series, detection of non-random structure in biomedical signals, testing of random number generators, language recognition, and authorship attribution. Different complexity measures proposed in the literature, like Shannon entropy, relative entropy, Lempel-Ziv, Kolmogorov and algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure, ETC, and define it as the "Effort To Compress" the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence to a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to have better correlation with the Lyapunov exponent than Shannon entropy, even with relatively short and noisy time series. The measure also has a greater rate of success in automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
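
The ETC measure can be sketched directly from its definition: repeatedly replace the most frequent adjacent pair of symbols with a new symbol, and count the iterations until the sequence becomes constant. One caveat of the sketch: the pair count below uses a simple overlapping scan, whereas the original NSRPS counts non-overlapping occurrences, so values can differ slightly from the published ones on sequences with runs of repeated symbols.

```python
from collections import Counter

def etc(seq):
    """Effort To Compress: NSRPS iterations needed to reach a constant sequence."""
    s = list(seq)
    steps = 0
    while len(s) > 1 and len(set(s)) > 1:
        # Most frequent adjacent pair (overlapping scan; see caveat above).
        pairs = Counter(zip(s, s[1:]))
        best = pairs.most_common(1)[0][0]
        new_sym = object()  # fresh symbol guaranteed absent from s
        out, i = [], 0
        while i < len(s):
            if i < len(s) - 1 and (s[i], s[i + 1]) == best:
                out.append(new_sym)  # substitute the pair, non-overlapping
                i += 2
            else:
                out.append(s[i])
                i += 1
        s = out
        steps += 1
    return steps
```

A constant sequence needs zero steps and "ab" needs one; longer, less regular sequences need more, which is the quantity that correlates with the Lyapunov exponent in the paper's experiments.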

  3. Empirical intrinsic geometry for nonlinear modeling and time series filtering

    PubMed Central

    Talmon, Ronen; Coifman, Ronald R.

    2013-01-01

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization. PMID:23847205

  4. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    PubMed

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization. PMID:23847205

  5. Two algorithms to fill cloud gaps in LST time series

    NASA Astrophysics Data System (ADS)

    Frey, Corinne; Kuenzer, Claudia

    2013-04-01

    Cloud contamination is a challenge for optical remote sensing. This is especially true for recording a fast-changing radiative quantity like land surface temperature (LST). The substitution of cloud-contaminated pixels with estimated values - gap filling - is not straightforward, but it is possible to a certain extent, as this research shows for medium-resolution time series of MODIS data. The area of interest is the Upper Mekong Delta (UMD). The background for this work is an analysis of the temporal development of 1-km LST in the context of the WISDOM project. The climate of the UMD is characterized by peak rainfall in the summer months, which is also when cloud contamination in the area is highest. The average number of available daytime observations per pixel can drop below five, for example in June, whereas in winter it may reach 25 observations a month. This situation is not suitable for the calculation of long-term statistics; an adequate gap filling method should be applied beforehand. In this research, two different algorithms were tested on an 11-year time series: 1) a gradient-based algorithm and 2) a method based on ECMWF ERA-Interim re-analysis data. The first algorithm searches for stable inter-image gradients within a given spatial environment and over a certain period of time. These gradients are then used to estimate LST for cloud-contaminated pixels in each acquisition. The estimated LSTs are clear-sky LSTs and are based solely on the MODIS LST time series. The second method estimates LST on the basis of adapted ECMWF ERA-Interim skin temperatures and creates a set of expected LSTs. The estimated values were used to fill the gaps in the original dataset, creating two new daily 1-km datasets. The maps filled with the gradient-based method had more than twice as many valid pixels as the original dataset, while the second (ECMWF ERA-Interim based) method was able to fill all data gaps. From the gap filled data sets then monthly
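    The gradient-based idea can be sketched as follows: for a cloud-contaminated pixel, estimate a temporally stable LST gradient (here, the median difference) against a clear neighbouring pixel, then transfer the neighbour's clear-sky value through that gradient. This is an illustrative simplification of the paper's algorithm, assuming NaN-masked clouds and 4-neighbour candidates only:

```python
import numpy as np

def fill_with_gradient(lst, min_obs=3):
    """lst: (T, H, W) array with np.nan marking cloud-contaminated pixels.
    Fill each gap from a clear 4-neighbour plus the temporally stable
    gradient (median difference) between the two pixels."""
    T, H, W = lst.shape
    filled = lst.copy()
    neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for y in range(H):
        for x in range(W):
            gaps = np.where(np.isnan(lst[:, y, x]))[0]
            if gaps.size == 0:
                continue
            for dy, dx in neighbours:
                ny, nx = y + dy, x + dx
                if not (0 <= ny < H and 0 <= nx < W):
                    continue
                both = ~np.isnan(lst[:, y, x]) & ~np.isnan(lst[:, ny, nx])
                if both.sum() < min_obs:
                    continue  # gradient not reliably estimable
                grad = np.median(lst[both, y, x] - lst[both, ny, nx])
                for t in gaps:
                    if np.isnan(filled[t, y, x]) and not np.isnan(lst[t, ny, nx]):
                        filled[t, y, x] = lst[t, ny, nx] + grad
    return filled
```

    Because only clear-sky observations feed the gradient, the filled values are clear-sky estimates, matching the behaviour described for the first algorithm.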

  6. Satellite image time series simulation for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-11-01

    The performance of environmental monitoring depends heavily on the availability of consecutive observations, and there is growing demand in the remote sensing community for satellite imagery of sufficient resolution in both the spatial and the temporal dimension; these requirements tend to conflict, making the tradeoffs hard to tune. Multiple constellations could be a solution if cost were no concern, so it remains interesting but very challenging to develop a method that improves both spatial and temporal detail simultaneously. Existing research efforts approach the problem from various angles. One class of approaches enhances spatial resolution using super-resolution, pan-sharpening, and related techniques; these can produce good visual effects, but mostly cannot preserve spectral signatures and therefore lose analytical value. Another class fills temporal gaps by time interpolation, which does not add informative content at all. In this paper we present a novel method to generate satellite images with greater spatial and temporal detail, which in turn enables satellite image time series simulation. Our method starts with a pair of high- and low-resolution data sets, and performs spatial registration by introducing an LDA model to map high- and low-resolution pixels to one another. Temporal change information is then captured by comparing the low-resolution time series data; the temporal change is projected onto the high-resolution data plane and assigned to each high-resolution pixel according to predefined temporal change patterns for each type of ground object, generating a simulated high-resolution image. A preliminary experiment shows that our method can simulate high-resolution data with good accuracy. We consider the contribution of our method to be enabling timely monitoring of temporal changes through analysis of low-resolution image time series only, and usage of

  7. Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK.

    PubMed

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting

    2016-01-01

    This study proposes a new behavioral simulator that uses SIMULINK to model all-digital CMOS time-domain smart temperature sensors (TDSTSs) for rapid and accurate simulations. Inverter-based TDSTSs, which offer low cost and a simple structure for temperature-to-digital conversion, have been developed. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluation. However, such tools require extremely long simulation times and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) with which to evaluate the temperature behavior of TDSTSs. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in the SIMULINK environment was developed that substantially accelerates the simulation and simplifies the evaluation procedures. Experiments demonstrated that the results of the proposed simulator agree favorably with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator addressing the rapid simulation of TDSTSs. PMID:27509507
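    The abstract does not give the TDM equations, but the flavour of such a behavioral model can be conveyed with a toy sketch: a temperature-dependent inverter delay accumulated over a delay line and digitized against a reference clock. The linear delay law and every constant below are illustrative assumptions, not the paper's model:

```python
def delay_ps(temp_c, d0=20.0, k=0.05):
    """Single-gate delay (ps), assumed to grow linearly with temperature.
    d0 and k are purely illustrative constants."""
    return d0 + k * temp_c

def sensor_code(temp_c, stages=10000, ref_clk_ps=100.0):
    """Digital output: reference-clock cycles elapsed while a pulse
    traverses a `stages`-gate delay line at temperature `temp_c`."""
    total = stages * delay_ps(temp_c)
    return int(total // ref_clk_ps)
```

    A behavioral simulator of this kind only needs to evaluate such closed-form expressions per time step, which is why it runs orders of magnitude faster than transistor-level HSPICE.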

  8. Behavior of road accidents: Structural time series approach

    NASA Astrophysics Data System (ADS)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir; Arsad, Zainudin

    2014-12-01

    Road accidents have become a major contributor to the increasing number of deaths. Some researchers suggest that road accidents occur due to road structure and road condition, which may differ according to the area and the traffic volume of the location. Therefore, this paper examines the behavior of road accidents in four main regions of Peninsular Malaysia by employing a structural time series (STS) approach. STS offers the possibility of modelling unobserved components, such as the trend and seasonal components, and allows them to vary over time. The results show that the number of road accidents in each region is described by a different model. The results imply that the government, and policy makers in particular, should consider implementing a different approach in each region to curb the increasing number of road accidents.
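    Structural time series models are estimated with the Kalman filter. As a minimal, self-contained illustration (not the paper's full trend-plus-seasonal specification), here is the simplest STS member, a local level model, filtered in plain NumPy; the variance parameters are assumed rather than estimated:

```python
import numpy as np

def local_level_filter(y, sigma_eps2=1.0, sigma_eta2=0.1, a0=0.0, p0=1e6):
    """Kalman filter for the local level model
       y_t = mu_t + eps_t,   mu_{t+1} = mu_t + eta_t,
    returning the filtered estimates of the unobserved level mu_t."""
    a, p = a0, p0          # diffuse-ish prior on the initial level
    levels = []
    for obs in y:
        v = obs - a                     # one-step prediction error
        f = p + sigma_eps2              # prediction-error variance
        k = p / f                       # Kalman gain
        a = a + k * v                   # filtered level
        p = p * (1 - k) + sigma_eta2    # variance after update + evolution
        levels.append(a)
    return np.array(levels)
```

    A full STS fit would add a time-varying seasonal component to the state vector and estimate the variances by maximum likelihood; the recursion keeps exactly this shape.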

  9. A new correlation coefficient for bivariate time-series data

    NASA Astrophysics Data System (ADS)

    Erdem, Orhan; Ceyhan, Elvan; Varli, Yusuf

    2014-11-01

    The correlation in time series has received considerable attention in the literature. Its use has attained an important role in the social sciences and finance. For example, pair trading in finance is concerned with the correlation between stock prices, returns, etc. In general, Pearson’s correlation coefficient is employed in these areas although it has many underlying assumptions which restrict its use. Here, we introduce a new correlation coefficient which takes into account the lag difference of data points. We investigate the properties of this new correlation coefficient. We demonstrate that it is more appropriate for showing the direction of the covariation of the two variables over time. We also compare the performance of the new correlation coefficient with Pearson’s correlation coefficient and Detrended Cross-Correlation Analysis (DCCA) via simulated examples.
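    The paper's coefficient is not reproduced here, but the idea of building association from lagged co-movement rather than raw values can be sketched with a hypothetical stand-in measure: the average agreement in direction of the lag-k increments of the two series, which lies in [-1, 1] like a correlation:

```python
import numpy as np

def lagged_sign_correlation(x, y, lag=1):
    """Hypothetical lag-aware association measure (NOT the paper's
    coefficient): mean directional agreement of the lag-`lag`
    increments of x and y, in [-1, 1]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx = np.sign(x[lag:] - x[:-lag])
    dy = np.sign(y[lag:] - y[:-lag])
    return float(np.mean(dx * dy))
```

    Unlike Pearson's coefficient, a measure of this form responds only to the direction of co-variation over time, which is the property the abstract emphasizes.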

  10. Adaptive Sensing of Time Series with Application to Remote Exploration

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Cabrol, Nathalie A.; Furlong, Michael; Hardgrove, Craig; Low, Bryan K. H.; Moersch, Jeffrey; Wettergreen, David

    2013-01-01

    We address the problem of adaptive, information-optimal data collection in time series. Here a remote sensor or explorer agent throttles its sampling rate in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility: all collected datapoints lie in the past, but its resource allocation decisions require predicting far into the future. Our solution is to continually fit a Gaussian process model to the latest data and optimize the sampling plan online to maximize information gain. We compare the performance characteristics of stationary and nonstationary Gaussian process models. We also describe an application based on geologic analysis during planetary rover exploration, where adaptive sampling can improve coverage of localized anomalies and potentially benefit the mission science yield of long autonomous traverses.
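    A minimal version of the variance-driven part of such a scheme can be written directly: for a Gaussian process, posterior variance does not depend on the observed values, so the most informative next sample time is simply the candidate with the largest posterior variance. This sketch assumes a unit-variance RBF kernel and noise-free observations; the paper's information-gain objective and nonstationary models are richer than this:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Unit-variance squared-exponential kernel between 1-D time arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def next_sample_time(t_obs, t_cand, ell=1.0, jitter=1e-9):
    """Candidate time with the largest GP posterior variance, i.e. the
    most informative next measurement under the model."""
    K = rbf(t_obs, t_obs, ell) + jitter * np.eye(len(t_obs))
    Ks = rbf(t_obs, t_cand, ell)
    # posterior variance: k(x*, x*) - k*^T K^{-1} k*
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return float(t_cand[np.argmax(var)])
```

    In a full planner this greedy step would be embedded in a horizon-limited optimization subject to the time and power budgets mentioned in the abstract.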

  11. A quasi-global precipitation time series for drought monitoring

    USGS Publications Warehouse

    Funk, Chris C.; Peterson, Pete J.; Landsfeld, Martin F.; Pedreros, Diego H.; Verdin, James P.; Rowland, James D.; Romero, Bo E.; Husak, Gregory J.; Michaelsen, Joel C.; Verdin, Andrew P.

    2014-01-01

    Estimating precipitation variations in space and time is an important aspect of drought early warning and environmental monitoring. An evolving drier-than-normal season must be placed in historical context so that the severity of rainfall deficits may quickly be evaluated. To this end, scientists at the U.S. Geological Survey Earth Resources Observation and Science Center, working closely with collaborators at the University of California, Santa Barbara Climate Hazards Group, have developed a quasi-global (50°S–50°N, 180°E–180°W), 0.05° resolution, 1981 to near-present gridded precipitation time series: the Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) data archive.

  12. Time series analysis of molecular dynamics simulation using wavelet

    NASA Astrophysics Data System (ADS)

    Toda, Mikito

    2012-08-01

    A new method is presented to extract nonstationary features of slow collective motion from time series data of molecular dynamics simulations of proteins. The method consists of two steps: (1) the wavelet transformation and (2) singular value decomposition (SVD). The wavelet transformation enables us to characterize the time-varying features of oscillatory motions, and SVD enables us to reduce the degrees of freedom of the movement. We apply the method to molecular dynamics simulations of various proteins such as adenylate kinase from Escherichia coli (AKE) and Thermomyces lanuginosa lipase (TLL). Moreover, we introduce indexes to characterize the collective motion of proteins. These indexes provide us with information on the nonstationary deformation of protein structures. We discuss future prospects of our study involving "intrinsically disordered proteins".
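    The two-step pipeline (a continuous wavelet transform, then SVD to compress the scale-time matrix into a few dominant modes) can be sketched in NumPy. The Morlet wavelet and the synthetic frequency-switching signal below are illustrative choices, not the paper's protein trajectories:

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet,
    one row of coefficients per scale."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(psi)[::-1], mode="same")
    return out

# toy nonstationary signal: the oscillation period doubles halfway through
t = np.arange(1024)
x = np.where(t < 512, np.sin(2 * np.pi * t / 32), np.sin(2 * np.pi * t / 64))

coeffs = np.abs(morlet_cwt(x, scales=np.arange(4, 40, 4)))
# SVD reduces the scale-time matrix to a few dominant modes
U, s, Vt = np.linalg.svd(coeffs, full_matrices=False)
```

    The leading singular vectors play the role of the reduced collective coordinates; their time courses (rows of Vt) carry the nonstationary information the indexes are built from.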

  13. Detecting Abrupt Changes in a Piecewise Locally Stationary Time Series

    PubMed Central

    Last, Michael; Shumway, Robert

    2007-01-01

    Non-stationary time series arise in many settings, such as seismology, speech processing, and finance. In many of these settings we are interested in points where a model of local stationarity is violated. We consider the problem of how to detect these change-points, which we identify by finding sharp changes in the time-varying power spectrum. Several different methods are considered, and we find that the symmetrized Kullback-Leibler information discrimination performs best in simulation studies. We derive asymptotic normality of our test statistic, and consistency of estimated change-point locations. We then demonstrate the technique on the problem of detecting arrival phases in earthquakes. PMID:19190715
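    The core detector can be sketched as: estimate a power spectrum in each window, then score adjacent windows with the symmetrized Kullback-Leibler divergence; sharp peaks mark candidate change-points. A NumPy sketch under simplifying assumptions (non-overlapping windows, raw periodograms, no significance testing):

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetrized KL divergence between two (unnormalized) spectra."""
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum((p - q) * (np.log(p + eps) - np.log(q + eps))))

def spectral_change_scores(x, win=128):
    """Score the spectral change between each pair of adjacent windows;
    peaks flag candidate change-points in the time-varying spectrum."""
    n = len(x) // win
    spectra = [np.abs(np.fft.rfft(x[i * win:(i + 1) * win])) ** 2 + 1e-12
               for i in range(n)]
    return [sym_kl(spectra[i], spectra[i + 1]) for i in range(n - 1)]
```

    On a signal whose dominant frequency switches once, the score is near zero everywhere except at the window boundary containing the switch.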

  14. Estimation of Hurst Exponent for the Financial Time Series

    NASA Astrophysics Data System (ADS)

    Kumar, J.; Manchanda, P.

    2009-07-01

    Until recently, statistical methods and Fourier analysis were employed to study fluctuations in stock markets in general, and the Indian stock market in particular. The current trend, however, is to apply the concepts of wavelet methodology and the Hurst exponent; see, for example, the work of Manchanda, J. Kumar and Siddiqi, Journal of the Franklin Institute 144 (2007), 613-636, and the papers of Cajueiro and B. M. Tabak. Cajueiro and Tabak (Physica A, 2003) checked the efficiency of emerging markets by computing the Hurst exponent over a time window of 4 years of data. Our goal in the present paper is to understand the dynamics of the Indian stock market. We look for persistency in the stock market through the Hurst exponent and the fractal dimension of time series data of the BSE 100 and NIFTY 50.
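    The classical rescaled-range (R/S) estimator behind such studies fits a power law R/S ∝ n^H over increasing window sizes n. A compact NumPy sketch (dyadic window sizes, simple averaging; careful estimators add small-sample bias corrections):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
    the slope of log(R/S) against log(n) over chunk sizes n."""
    x = np.asarray(x, float)
    N = len(x)
    sizes, rs = [], []
    n = min_chunk
    while n <= N // 2:
        vals = []
        for start in range(0, N - n + 1, n):
            chunk = x[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range
            s = chunk.std()                         # scale
            if s > 0:
                vals.append(r / s)
        sizes.append(n)
        rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope
```

    H near 0.5 indicates no long-range persistence, H above 0.5 persistence, and H below 0.5 anti-persistence, which is the reading applied to the BSE 100 and NIFTY 50 series.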

  15. A 40 Year Time Series of SBUV Observations: the Version 8.6 Processing

    NASA Technical Reports Server (NTRS)

    McPeters, Richard; Bhartia, P. K.; Flynn, L.

    2012-01-01

    Under a NASA program to produce long-term data records from instruments on multiple satellites (MEaSUREs), data from a series of eight SBUV and SBUV/2 instruments have been reprocessed to create a 40-year ozone time series. Data from the Nimbus 4 BUV, Nimbus 7 SBUV, and the SBUV/2 instruments on NOAA 9, 11, 14, 16, 17, and 18 were used, covering the periods 1970 to 1972 and 1979 to the present. In past analyses an ozone time series was created from these instruments by adjusting ozone itself, instrument by instrument, for consistency during overlap periods. In the version 8.6 processing, adjustments were instead made to the radiance calibration of each instrument to maintain a consistent calibration over the entire time series. Data for all eight instruments were then reprocessed using the adjusted radiances. Reprocessing is necessary to produce an accurate latitude dependence. Other improvements incorporated in version 8.6 include the use of the ozone cross sections of Brion, Daumont, and Malicet, and the use of a cloud height climatology derived from Aura OMI measurements. The new cross sections have a more accurate temperature dependence than those previously used, and the OMI-based cloud heights account for the penetration of UV into the upper layers of clouds. The consistency of the version 8.6 time series was evaluated by inter-instrument comparisons during overlap periods, comparisons with ground-based instruments, and comparisons with measurements made by instruments on other satellites such as SAGE II and UARS MLS. These comparisons show that for the instruments on NOAA 16, 17, and 18, the instrument calibrations were remarkably stable and consistent from instrument to instrument. The data record from the Nimbus 7 SBUV was also very stable, and SAGE and ground-based comparisons show that its calibration was consistent with measurements made years later by the NOAA 16 instrument. The calibrations of the SBUV/2 instruments on NOAA 9, 11, and 14 were more of

  16. MDSINE: Microbial Dynamical Systems INference Engine for microbiome time-series analyses.

    PubMed

    Bucci, Vanni; Tzen, Belinda; Li, Ning; Simmons, Matt; Tanoue, Takeshi; Bogart, Elijah; Deng, Luxue; Yeliseyev, Vladimir; Delaney, Mary L; Liu, Qing; Olle, Bernat; Stein, Richard R; Honda, Kenya; Bry, Lynn; Gerber, Georg K

    2016-01-01

    Predicting dynamics of host-microbial ecosystems is crucial for the rational design of bacteriotherapies. We present MDSINE, a suite of algorithms for inferring dynamical systems models from microbiome time-series data and predicting temporal behaviors. Using simulated data, we demonstrate that MDSINE significantly outperforms the existing inference method. We then show MDSINE's utility on two new gnotobiotic mice datasets, investigating infection with Clostridium difficile and an immune-modulatory probiotic. Using these datasets, we demonstrate new capabilities, including accurate forecasting of microbial dynamics, prediction of stable sub-communities that inhibit pathogen growth, and identification of bacteria most crucial to community integrity in response to perturbations. PMID:27259475
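    Dynamical systems models of microbial communities in this area are typically generalized Lotka-Volterra (gLV) equations; leaving aside whether that matches MDSINE's exact formulation, a forward simulation sketch shows the shape of the inference target. Euler integration with illustrative parameters:

```python
import numpy as np

def simulate_glv(x0, growth, interactions, dt=0.01, steps=1000):
    """Euler integration of generalized Lotka-Volterra dynamics:
       dx_i/dt = x_i * (growth_i + sum_j A_ij * x_j)
    where A is the species-interaction matrix."""
    x = np.array(x0, float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * x * (growth + interactions @ x)
        x = np.maximum(x, 0.0)   # abundances stay non-negative
        traj.append(x.copy())
    return np.array(traj)
```

    Inference runs this in reverse: given observed trajectories, estimate the growth rates and the interaction matrix, whose signs then identify sub-communities that inhibit a pathogen's growth.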

  17. Fast and Flexible Multivariate Time Series Subsequence Search

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Oza, Nikunj C.; Zhu, Qiang; Srivastava, Ashok N.

    2010-01-01

    Multivariate Time-Series (MTS) are ubiquitous, and are generated in areas as disparate as sensor recordings in aerospace systems, music and video streams, medical monitoring, and financial systems. Domain experts are often interested in searching for interesting multivariate patterns in these MTS databases, which often contain several gigabytes of data. Surprisingly, research on MTS search is very limited. Most of the existing work only supports queries with the same length of data, or queries on a fixed set of variables. In this paper, we propose an efficient and flexible subsequence search framework for massive MTS databases that, for the first time, enables querying on any subset of variables with arbitrary time delays between them. We propose two algorithms to solve this problem: (1) a List Based Search (LBS) algorithm which uses sorted lists for indexing, and (2) an R*-tree Based Search (RBS) which uses Minimum Bounding Rectangles (MBR) to organize the subsequences. Both algorithms guarantee that all matching patterns within the specified thresholds will be returned (no false dismissals). The very few false alarms can be removed by a post-processing step. Since our framework is also capable of Univariate Time-Series (UTS) subsequence search, we first demonstrate the efficiency of our algorithms on several UTS datasets previously used in the literature. We follow this up with experiments using two large MTS databases from the aviation domain, each containing several millions of observations. Both these tests show that our algorithms have very high prune rates (>99%), thus requiring actual disk access for less than 1% of the observations. To the best of our knowledge, MTS subsequence search has never been attempted on datasets of the size we have used in this paper.
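    A brute-force reference version of the query semantics (any subset of variables, per-variable time delays up to a bound, a distance threshold per variable) helps pin down what the indexed LBS/RBS algorithms must return; the index structures themselves, and the paper's exact distance definition, are beyond this sketch, which assumes a Chebyshev threshold:

```python
import numpy as np

def mts_search(db, queries, tol=0.5, max_delay=2):
    """db: dict var -> 1-D series; queries: dict var -> 1-D pattern.
    Return base offsets t at which every queried variable matches its
    pattern within `tol` (Chebyshev distance) at some delay <= max_delay."""
    db = {k: np.asarray(v, float) for k, v in db.items()}
    n = min(len(v) for v in db.values())
    hits = []
    for t in range(n):
        def matches(var, pat):
            pat = np.asarray(pat, float)
            m = len(pat)
            return any(
                t + d + m <= len(db[var]) and
                np.max(np.abs(db[var][t + d:t + d + m] - pat)) <= tol
                for d in range(max_delay + 1)
            )
        if all(matches(var, pat) for var, pat in queries.items()):
            hits.append(t)
    return hits
```

    An exhaustive scan like this is what the pruning indexes are measured against: any hit it finds must also survive LBS/RBS filtering (no false dismissals), and the few extra index candidates are removed in post-processing.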

  18. Time-Accurate Unsteady Pressure Loads Simulated for the Space Launch System at Wind Tunnel Conditions

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.

    2015-01-01

    A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed, including an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.

  19. In-Band Asymmetry Compensation for Accurate Time/Phase Transport over Optical Transport Network

    PubMed Central

    Siu, Sammy; Hu, Hsiu-fang; Lin, Shinn-Yan; Liao, Chia-Shu; Lai, Yi-Liang

    2014-01-01

    The demands for precise time/phase synchronization have been increasing recently due to next-generation telecommunication networks. This paper studies the issues relevant to distributing accurate time/phase over an optical transport network (OTN). Each node and link can introduce asymmetry, which degrades the achievable time/phase accuracy over the network. In order to achieve better accuracy, protocol-level full timing support is used (e.g., a Telecom-Boundary clock). Due to chromatic dispersion, the use of different wavelengths causes fiber link delay asymmetry. The analytical result indicates that this introduces a significant time error (i.e., phase offset) of 0.3397 ns/km in C-band or 0.3943 ns/km in L-band, depending on the wavelength spacing. With the scheme proposed in this paper, the fiber link delay asymmetry can be compensated using the mean fiber link delay estimated by the Telecom-Boundary clock, while the OTN control plane is responsible for processing the fiber link delay asymmetry to determine the asymmetry compensation in the timing chain. PMID:24982948
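    The quoted per-kilometre figures translate directly into link budgets. A trivial back-of-the-envelope helper (the function itself is an illustrative assumption; only the 0.3397 and 0.3943 ns/km coefficients come from the abstract):

```python
def phase_offset_ns(length_km, per_km_ns):
    """Accumulated time error over a fiber link of `length_km` km,
    given a per-kilometre delay-asymmetry offset in nanoseconds."""
    return length_km * per_km_ns

# 50 km links at the abstract's quoted per-km offsets
c_band = phase_offset_ns(50, 0.3397)   # C-band wavelength spacing
l_band = phase_offset_ns(50, 0.3943)   # L-band wavelength spacing
```

    Uncompensated, a 50 km link would thus accumulate roughly 17-20 ns of offset, far beyond the budget of precise time/phase transport, which is why the control-plane compensation matters.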

  20. Rapid and Accurate Identification of Coagulase-Negative Staphylococci by Real-Time PCR

    PubMed Central

    Edwards, K. J.; Kaufmann, M. E.; Saunders, N. A.

    2001-01-01

    Biprobe identification assays based on real-time PCR were designed for 15 species of coagulase-negative staphylococci (CNS). Three sets of primers and four biprobes were designed from two variable regions of the 16S rRNA gene. An identification scheme was developed based on the pattern of melting peaks observed with the four biprobes that had been tested on 24 type strains. This scheme was then tested on 100 previously identified clinical isolates and 42 blindly tested isolates. For 125 of the 142 clinical isolates there was a perfect correlation between the biprobe identification and the result of the ID 32 Staph phenotypic tests and PCR. For 12 of the other isolates a 300-bp portion of the 16S rRNA gene was sequenced to determine identity. The remaining five isolates could not be fully identified. LightCycler real-time PCR allowed rapid and accurate identification of the important CNS implicated in infection. PMID:11526126

  1. Time Accurate CFD Simulations of the Orion Launch Abort Vehicle in the Transonic Regime

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph; Rojahn, Josh

    2011-01-01

    Significant asymmetries in the fluid dynamics were calculated for some cases in the CFD simulations of the Orion Launch Abort Vehicle through its abort trajectories, even though the simulations were performed steady-state with symmetric boundary conditions and geometries. The trajectory points at issue were in the transonic regime, at 0- and 5-degree angles of attack, with the abort motors firing, both with and without the Attitude Control Motors (ACMs) firing. In some of the cases the asymmetric fluid dynamics resulted in aerodynamic side forces large enough to overcome the control authority of the ACMs. MSFC's Fluid Dynamics Group supported the investigation into the cause of the flow asymmetries with time-accurate CFD simulations utilizing a hybrid RANS-LES turbulence model. The results show that the flow over the vehicle, and its subsequent interaction with the abort motor and ACM plumes, was unsteady. The resulting instantaneous aerodynamic forces were oscillatory with fairly large magnitudes, but the time-averaged aerodynamic forces were essentially symmetric.

  2. Seasonal signals in the reprocessed GPS coordinate time series

    NASA Astrophysics Data System (ADS)

    Kenyeres, A.; van Dam, T.; Figurski, M.; Szafranek, K.

    2008-12-01

    The global (IGS) and regional (EPN) CGPS time series have already been studied in detail by several authors to analyze the periodic signals and noise present in the long-term displacement series. The comparisons indicated that the amplitude and phase of the CGPS-derived seasonal signals mostly disagree with surface mass redistribution models. The CGPS results greatly overestimate the seasonal term; only about 40% of the observed annual amplitude can be explained by the joint contribution of the geophysical models (Dong et al. 2002). Additionally, the estimated amplitudes and phases cohere poorly with the models, especially at sites close to coastal areas (van Dam et al., 2007). The conclusion of these studies was that the GPS results are distorted by analysis artifacts (e.g., ocean tide loading, aliasing of unmodeled short-period tidal signals, antenna PCV models), monument thermal effects, and multipath. Additionally, the GPS series available so far are inhomogeneous in terms of processing strategy, applied models, and reference frames. The introduction of absolute phase center variation (PCV) models for the satellite and ground antennas in 2006, and the related reprocessing of the GPS precise orbits, provided a firm basis and a strong argument for a complete re-analysis of the GPS observations from the global down to the local network level. This enormous work is in progress within the IGS, and a pilot analysis has already been carried out for the complete EPN observation set from 1996 to 2007 by the MUT group (Military University of Technology, Warsaw). A quick analysis of the results confirmed expectations and the superiority of the reprocessed data: the noise level (weekly coordinate repeatability) was greatly reduced, laying the groundwork for later analysis at the daily-solution level. We also observed a significant decrease of the seasonal term in the residual coordinate time series, which prompted us to perform a repeated comparison of the GPS-derived annual periodicity

  3. A solution for measuring accurate reaction time to visual stimuli realized with a programmable microcontroller.

    PubMed

    Ohyanagi, Toshio; Sengoku, Yasuhito

    2010-02-01

    This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work. PMID:20160303

  4. Salt marsh mapping based on a short-time interval NDVI time-series from HJ-1 CCD imagery

    NASA Astrophysics Data System (ADS)

    SUN, C.

    2015-12-01

    Salt marshes are regarded as one of the most dynamic and valuable ecosystems in the coastal zone. It is crucial to obtain accurate information on the species composition and spatial distribution of salt marshes in a timely manner, since they are experiencing tremendous replacement and disappearance. However, discriminating various types of salt marshes is a rather difficult task because of their strong spectral similarities. In previous studies, salt marsh mapping mainly relied on high-spatial-resolution and hyperspectral imagery combined with auxiliary information, but this method can hardly extend to a large region. With high temporal and moderate spatial resolution, Chinese HJ-1 CCD imagery not only allows monitoring phenological changes of salt marsh vegetation at short time intervals, but also covers large areas of salt marshes. Taking the middle coast of Jiangsu (east China) as an example, our study first constructed a monthly NDVI time-series to classify various types of salt marshes. We then tested the idea of a compressed time-series to broaden the applicability and portability of this particular approach. The results showed that (1) the overall accuracy of salt marsh mapping based on the monthly NDVI time-series reached 90.3%, an increase of approximately 16.0% over a single-phase classification strategy; (2) a compressed time-series, including NDVI from six key months (April, June to September, and November), showed very little decline (2.3%) in overall accuracy but obvious improvements in unstable regions; (3) Spartina alterniflora identification could be achieved with only a single NDVI scene from November, which could provide an effective way to regularly monitor its distribution. Besides, by comparing the calibrated performance of HJ-1 CCD against other sensors (i.e., Landsat TM/ETM+, OLI), we confirmed the reliability of HJ-1 CCD imagery, which is expected to pave the way for wider use of this imagery.
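    The classification over monthly NDVI profiles can be any supervised scheme; as an illustrative stand-in (the paper's actual classifier is not specified in the abstract), a nearest-centroid rule over 12-month NDVI vectors:

```python
import numpy as np

def classify_ndvi_profiles(pixels, class_means):
    """pixels: (N, 12) monthly NDVI profiles; class_means: dict
    label -> (12,) mean profile per class. Returns nearest-centroid
    (Euclidean) labels, exploiting phenological differences."""
    labels = list(class_means)
    protos = np.stack([class_means[k] for k in labels])          # (C, 12)
    d = np.linalg.norm(pixels[:, None, :] - protos[None, :, :], axis=2)
    return [labels[i] for i in np.argmin(d, axis=1)]
```

    The phenological signal is what carries the discrimination: spectrally similar marsh types separate once their seasonal NDVI trajectories, rather than single-date values, are compared.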

  5. Time Series Analysis of the Blazar OJ 287

    NASA Astrophysics Data System (ADS)

    Gamel, Ellen; Ryle, W. T.; Carini, M. T.

    2013-06-01

    Blazars are a subset of active galactic nuclei (AGN) in which the light is viewed along the jet of radiation produced by the central supermassive black hole. These very luminous objects vary in brightness and are associated with the cores of distant galaxies. The blazar OJ 287 has been monitored and its brightness tracked over time. From these light curves, the relationship between the characteristic "break frequency" and black hole mass can be determined through the use of power density spectra. In order to obtain a well-sampled light curve, this blazar was observed over a wide range of timescales. Long time scales were obtained using archived light curves from the published literature. Medium time scales were obtained through a combination of data provided by Western Kentucky University and data collected at The Bank of Kentucky Observatory. Short time scales were achieved via a single night of observation at the 72" Perkins Telescope at Lowell Observatory in Flagstaff, AZ. Using time series analysis, we present a revised mass estimate for the supermassive black hole of OJ 287. This object is of particular interest because it may harbor a binary black hole at its center.

  6. Computer Program Recognizes Patterns in Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    A computer program recognizes selected patterns in time-series data such as digitized samples of seismic or electrophysiological signals. The program implements an artificial neural network (ANN) and a set of N clocks for the purpose of determining whether N or more instances of a certain waveform, W, occur within a given time interval, T. The ANN must be trained to recognize W in the incoming stream of data. The first time the ANN recognizes W, it sets clock 1 to count down from T to zero; the second time it recognizes W, it sets clock 2 to count down from T to zero, and so forth through the Nth instance. On the (N + 1)st instance, the cycle is repeated, starting with clock 1. If any clock has not reached zero when it is reset, then N instances of W have been detected within time T, and the program so indicates. The program can readily be encoded in a field-programmable gate array or an application-specific integrated circuit that could be used, for example, to detect electroencephalographic or electrocardiographic waveforms indicative of epileptic seizures or heart attacks, respectively.
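    The clock logic can be sketched in software. This is a simplified stand-in that tracks expiry times of the most recent recognitions rather than literal countdown hardware, and the ANN recognizer itself is outside the sketch:

```python
from collections import deque

class WaveformRateDetector:
    """Alarm when N recognized instances of waveform W fall within time T."""

    def __init__(self, n, t):
        self.n, self.t = n, t
        self.expiries = deque()   # "clocks": expiry times of recent instances

    def recognize(self, now):
        """Call when the ANN recognizes W at time `now`; True means N within T."""
        alarm = False
        if len(self.expiries) == self.n - 1:
            oldest = self.expiries.popleft()   # reset the oldest clock
            alarm = now < oldest               # it had not yet counted down to zero
        self.expiries.append(now + self.t)     # start a clock for this instance
        return alarm

det = WaveformRateDetector(n=3, t=10.0)
alarms = [det.recognize(ts) for ts in (0.0, 2.0, 4.0, 20.0, 21.0, 22.0)]
# alarms -> [False, False, True, False, False, True]
```

The two alarms correspond to the bursts at times 0-4 and 20-22, each containing three recognitions within T = 10.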

  7. Accurate measurement of the rise and decay times of fast scintillators with solid state photon counters

    NASA Astrophysics Data System (ADS)

    Seifert, S.; Steenbergen, J. H. L.; van Dam, H. T.; Schaart, D. R.

    2012-09-01

    In this work we present a measurement setup for the determination of scintillation pulse shapes of fast scintillators. It is based on a time-correlated single photon counting approach that utilizes the correlation between 511 keV annihilation photons to produce start and stop signals in two separate crystals. The setup is potentially cost-effective and simple to build while maintaining an excellent system timing resolution of 125 ps. As a proof of concept, the scintillation photon arrival time histograms were recorded for two well-known, fast scintillators: LYSO:Ce and LaBr3:5%Ce. The scintillation pulse shapes were modeled as a linear combination of exponentially distributed charge transfer and photon emission processes. Correcting for the system timing resolution, the exponential time constants were extracted from the recorded histograms. A decay time of 43 ns and a rise time of 72 ps were determined for LYSO:Ce, thus demonstrating the capability of the system to accurately measure very fast rise times. In the case of LaBr3:5%Ce two processes were observed to contribute to the rising edge of the scintillation pulse. The faster component (270 ps) contributes 72% of the rising edge of the scintillation pulse while the second, slower component (2.0 ns) contributes 27%. The decay of the LaBr3:5%Ce scintillation pulse was measured to be 15.4 ns with a small contribution (2%) of a component with a larger time constant (130 ns).
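    For a single rise/decay pair, the pulse-shape model described above reduces to the familiar bi-exponential form, sketched here with the LYSO:Ce constants reported in the abstract (the grid and normalization check are illustrative):

```python
import numpy as np

def pulse_shape(t, tau_rise, tau_decay):
    """Normalized single rise/decay scintillation pulse:
    f(t) = (exp(-t/tau_decay) - exp(-t/tau_rise)) / (tau_decay - tau_rise),
    which integrates to 1 over t >= 0."""
    t = np.asarray(t, dtype=float)
    f = (np.exp(-t / tau_decay) - np.exp(-t / tau_rise)) / (tau_decay - tau_rise)
    return np.where(t >= 0, f, 0.0)

# LYSO:Ce values reported in the abstract: 72 ps rise, 43 ns decay.
t = np.linspace(0, 400e-9, 400001)        # 0-400 ns grid, 1 ps steps (seconds)
f = pulse_shape(t, tau_rise=72e-12, tau_decay=43e-9)
area = f.sum() * (t[1] - t[0])            # Riemann sum, ~1 (unit normalization)
```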

  8. Adaptive Sampling of Time Series During Remote Exploration

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches.
Most common GP models
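    The information-gain criterion can be sketched with a stationary GP: take the next sample where the posterior predictive variance (and hence, for a Gaussian model, the expected information gain) is largest. The kernel choice, hyperparameters, and candidate grid are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between time vectors a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def next_sample_time(t_obs, t_candidates, noise=1e-4):
    """Pick the candidate time with maximum GP predictive variance
    (observed values do not affect the variance for a fixed kernel)."""
    K = rbf_kernel(t_obs, t_obs) + noise * np.eye(len(t_obs))
    Ks = rbf_kernel(t_candidates, t_obs)
    # Predictive variance: k(x,x) - k(x,X) K^-1 k(X,x); prior k(x,x) = 1 here.
    v = np.ones(len(t_candidates)) - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return t_candidates[int(np.argmax(v))]

t_obs = np.array([0.0, 1.0, 2.0, 8.0])       # times already sampled
cand = np.linspace(0.0, 10.0, 101)
t_next = next_sample_time(t_obs, cand)       # lands in the unsampled 2-8 gap
```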

  9. Evaluation of the Time-Derivative Coupling for Accurate Electronic State Transition Probabilities from Numerical Simulations.

    PubMed

    Meek, Garrett A; Levine, Benjamin G

    2014-07-01

    Spikes in the time-derivative coupling (TDC) near surface crossings make the accurate integration of the time-dependent Schrödinger equation in nonadiabatic molecular dynamics simulations a challenge. To address this issue, we present an approximation to the TDC based on a norm-preserving interpolation (NPI) of the adiabatic electronic wave functions within each time step. We apply NPI and two other schemes for computing the TDC in numerical simulations of the Landau-Zener model, comparing the simulated transfer probabilities to the exact solution. Though NPI does not require the analytical calculation of nonadiabatic coupling matrix elements, it consistently yields unsigned population transfer probability errors of ∼0.001, whereas analytical calculation of the TDC yields errors of 0.0-1.0 depending on the time step, the offset of the maximum in the TDC from the beginning of the time step, and the coupling strength. The approximation of Hammes-Schiffer and Tully yields errors intermediate between NPI and the analytical scheme. PMID:26279558
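    The Landau-Zener benchmark can be reproduced numerically. The sketch below is not the NPI scheme itself: it simply integrates the two-level time-dependent Schrödinger equation through a linear crossing and compares the final diabatic population with the exact Landau-Zener formula; the slope v and coupling a are illustrative values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-level Landau-Zener model (hbar = 1): diabatic energies -v*t and +v*t
# with constant coupling a. The exact asymptotic probability of remaining in
# the initial diabatic state is P = exp(-pi * a**2 / v).
v, a = 1.0, 0.5

def schrodinger(t, c):
    """Real-valued packing of d(psi)/dt = -i H(t) psi."""
    psi = c[:2] + 1j * c[2:]
    H = np.array([[-v * t, a], [a, v * t]])
    dpsi = -1j * H @ psi
    return np.concatenate([dpsi.real, dpsi.imag])

T = 60.0
c0 = np.array([1.0, 0.0, 0.0, 0.0])            # start in diabatic state 1
sol = solve_ivp(schrodinger, (-T, T), c0, rtol=1e-9, atol=1e-10, max_step=0.05)
psi_T = sol.y[:2, -1] + 1j * sol.y[2:, -1]
p_diabatic = abs(psi_T[0]) ** 2
p_exact = np.exp(-np.pi * a ** 2 / v)          # ~0.456 for these parameters
```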

  10. Statistical processing and convergence of finite-record-length time-series measurements from turbulent flows

    NASA Astrophysics Data System (ADS)

    Papageorge, Michael; Sutton, Jeffrey A.

    2016-08-01

    In this manuscript, we investigate the statistical convergence of turbulent flow statistics from finite-record-length time-series measurements. Analytical solutions of the convergence rate of the mean, variance, and autocorrelation function as a function of record length are presented based on using mean-squared error analysis and the consideration of turbulent flows as random processes. Experimental assessment of the statistical convergence theory is presented using 20-kHz laser Rayleigh scattering measurements of a conserved scalar (ξ) in a turbulent free jet. Excellent agreement between experiments and theory is noted, providing validation of the statistical convergence analysis. To the authors' knowledge, this is the first reported assessment and verification of statistical convergence theory as applied to turbulent flows. The verified theory provides a practitioner a method for a priori determining the necessary temporal record length for a desired statistical accuracy or, conversely, accurately estimating the uncertainty of a measurement for a given temporal record length. Furthermore, we propose a new hybrid "multi-burst" data processing scheme based on combined independent ensemble and time-series statistics targeted for shorter-duration time-series measurements. The new methodology is based on taking the ensemble mean of derived statistical moments from many individual finite-duration time-series measurements. This approach is used to systematically converge toward the "expected" value of any statistical moment at a rate of √M, where M is the number of individual time-series measurements. The proposed multi-burst methodology is assessed experimentally, and excellent agreement between measurements and theory is observed. A key outcome of the implementation of the multi-burst processing method is noted in the estimation of the autocorrelation function. Specifically, an unbiased estimator of the autocorrelation function can be used with much less
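    The multi-burst idea can be checked numerically: average the means of M independent finite-record (correlated) time series and watch the spread of that ensemble mean shrink as 1/√M. The AR(1) surrogate below is an illustrative stand-in for a turbulent time-series record:

```python
import numpy as np

rng = np.random.default_rng(1)

def burst_means(num, n=100, rho=0.9):
    """Means of `num` independent finite-record AR(1) series ('bursts')."""
    eps = rng.normal(size=(num, n))
    x = np.empty((num, n))
    x[:, 0] = eps[:, 0]
    for i in range(1, n):
        x[:, i] = rho * x[:, i - 1] + np.sqrt(1 - rho ** 2) * eps[:, i]
    return x.mean(axis=1)

def multi_burst_std(m, trials=400):
    """Spread of the M-burst ensemble mean across repeated experiments."""
    means = burst_means(m * trials).reshape(trials, m)
    return means.mean(axis=1).std()

e4, e16 = multi_burst_std(4), multi_burst_std(16)
ratio = e4 / e16   # expected ~ sqrt(16/4) = 2: error shrinks as 1/sqrt(M)
```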

  11. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  12. Nonlinear times series analysis of epileptic human electroencephalogram (EEG)

    NASA Astrophysics Data System (ADS)

    Li, Dingzhou

    The problem of seizure anticipation in patients with epilepsy has attracted significant attention in the past few years. In this paper we discuss two approaches, using methods of nonlinear time series analysis applied to scalp electrode recordings, which are able to distinguish between epochs temporally distant from, and just prior to, the onset of a seizure in patients with temporal lobe epilepsy. First we describe a method involving a comparison of recordings taken from electrodes adjacent to and remote from the site of the seizure focus. In particular, we define a nonlinear quantity which we call marginal predictability. This quantity is computed using data from remote and from adjacent electrodes. We find that the difference between the marginal predictabilities computed for the remote and adjacent electrodes decreases several tens of minutes prior to seizure onset, compared to its value interictally. We also show that these differences in marginal predictability are independent of the behavioral state of the patient. Next we examine the phase coherence between different electrodes at both long and short ranges. Temporally distant from seizure onset ("interictally"), epileptic patients have lower long-range phase coherence in the delta (1-4 Hz) and beta (18-30 Hz) frequency bands compared to nonepileptic subjects. When seizures approach ("preictally"), we observe an increase in phase coherence in the beta band. However, interictally there is no difference in short-range phase coherence between this cohort of patients and non-epileptic subjects. Preictally, short-range phase coherence also increases in the alpha (10-13 Hz) and beta bands. Next we apply the marginal predictability to the phase-difference time series. These marginal predictabilities are lower in the patients than in the non-epileptic subjects. However, as seizures approach, the former moves asymptotically towards the latter.
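    Phase coherence of the kind described can be estimated with a Hilbert-transform phase-locking value; the signals below are synthetic stand-ins for EEG channels, and the sampling rate and frequencies are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase coherence of two signals: |mean(exp(i*(phi_x - phi_y)))|.
    Near 1 for phase-locked signals, near 0 for unrelated phases."""
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

fs, dur = 256.0, 8.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)

# Two beta-band (20 Hz) signals with a fixed phase lag: high coherence.
locked = phase_locking_value(np.sin(2 * np.pi * 20 * t),
                             np.sin(2 * np.pi * 20 * t + 0.8))
# Two independent noise channels: low coherence.
unlocked = phase_locking_value(rng.normal(size=t.size),
                               rng.normal(size=t.size))
```

In practice the channels would be band-pass filtered (e.g., to the beta band) before the Hilbert transform.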

  13. United States Forest Disturbance Trends Observed Using Landsat Time Series

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.; Goward, Samuel N.; Kennedy, Robert E.; Cohen, Warren B.; Moisen, Gretchen G.; Schleeweis, Karen; Huang, Chengquan

    2013-01-01

    Disturbance events strongly affect the composition, structure, and function of forest ecosystems; however, existing U.S. land management inventories were not designed to monitor disturbance. To begin addressing this gap, the North American Forest Dynamics (NAFD) project has examined a geographic sample of 50 Landsat satellite image time series to assess trends in forest disturbance across the conterminous United States for 1985-2005. The geographic sample design used a probability-based scheme to encompass major forest types and maximize geographic dispersion. For each sample location disturbance was identified in the Landsat series using the Vegetation Change Tracker (VCT) algorithm. The NAFD analysis indicates that, on average, 2.77 Mha/yr of forests were disturbed annually, representing 1.09%/yr of US forestland. These satellite-based national disturbance rate estimates tend to be lower than those derived from land management inventories, reflecting both methodological and definitional differences. In particular the VCT approach used with a biennial time step has limited sensitivity to low-intensity disturbances. Unlike prior satellite studies, our biennial forest disturbance rates vary by nearly a factor of two between high and low years. High western US disturbance rates were associated with active fire years and insect activity, while variability in the east is more strongly related to harvest rates in managed forests. We note that generating a geographic sample based on representing forest type and variability may be problematic since the spatial pattern of disturbance does not necessarily correlate with forest type. We also find that the prevalence of diffuse, non-stand-clearing disturbance in US forests makes the application of a biennial geographic sample problematic. Future satellite-based studies of disturbance at regional and national scales should focus on wall-to-wall analyses with an annual time step for improved accuracy.

  14. Established time series measure occurrence and frequency of episodic events.

    NASA Astrophysics Data System (ADS)

    Pebody, Corinne; Lampitt, Richard

    2015-04-01

    Episodic flux events occur in the open ocean. Time series making measurements over significant time scales are one of the few methods that can capture these events and compare their impact with 'normal' flux. Seemingly rare events may be significant on local scales, but without the ability to measure the extent of flux on spatial and temporal scales, combined with the frequency of occurrence, it is difficult to constrain their impact. The Porcupine Abyssal Plain Sustained Observatory (PAP-SO) in the Northeast Atlantic (49 °N 16 °W, 5000 m water depth) has measured particle flux since 1989 and zooplankton swimmers since 2000. Sediment traps at 3000 m and 100 m above bottom collect material year round, and we have identified close links between zooplankton and particle flux. Some of these larger animals, for example Diacria trispinosa, make a significant contribution to carbon flux through episodic flux events. D. trispinosa is a euthecosome mollusc which occurs in the Northeast Atlantic, though the PAP-SO is towards the northern limit of its distribution. Pteropods comprise an aragonite shell containing the soft body parts, except for the muscular foot, which extends beyond the mouth of the shell in the living animal. Both live-on-entry animals and empty shells are found year round in the 3000 m trap. Generally their abundance varies with particle flux, but within that general pattern there are episodic events in which significant numbers of these animals, containing both organic and inorganic carbon, are captured at depth and could therefore be defined as contributing to export flux. Whether the pulse of animals is a result of the life cycle of D. trispinosa or of the physics of the water column is unclear, but the capability of the PAP-SO enables us not only to collect these animals but to examine them in parallel with the biogeochemical and physical elements measured by the

  15. Identifying Signatures of Selection in Genetic Time Series

    PubMed Central

    Feder, Alison F.; Kryazhimskiy, Sergey; Plotkin, Joshua B.

    2014-01-01

    Both genetic drift and natural selection cause the frequencies of alleles in a population to vary over time. Discriminating between these two evolutionary forces, based on a time series of samples from a population, remains an outstanding problem with increasing relevance to modern data sets. Even in the idealized situation when the sampled locus is independent of all other loci, this problem is difficult to solve, especially when the size of the population from which the samples are drawn is unknown. A standard χ2-based likelihood-ratio test was previously proposed to address this problem. Here we show that the χ2-test of selection substantially underestimates the probability of type I error, leading to more false positives than indicated by its P-value, especially at stringent P-values. We introduce two methods to correct this bias. The empirical likelihood-ratio test (ELRT) rejects neutrality when the likelihood-ratio statistic falls in the tail of the empirical distribution obtained under the most likely neutral population size. The frequency increment test (FIT) rejects neutrality if the distribution of normalized allele-frequency increments exhibits a mean that deviates significantly from zero. We characterize the statistical power of these two tests for selection, and we apply them to three experimental data sets. We demonstrate that both ELRT and FIT have power to detect selection in practical parameter regimes, such as those encountered in microbial evolution experiments. Our analysis applies to a single diallelic locus, assumed independent of all other loci, which is most relevant to full-genome selection scans in sexual organisms, and also to evolution experiments in asexual organisms as long as clonal interference is weak. Different techniques will be required to detect selection in time series of cosegregating linked loci. PMID:24318534
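    The frequency increment test lends itself to a compact sketch. The rescaling below follows the normalization described above (increments divided by √(2ν(1−ν)Δt)), with SciPy's one-sample t-test standing in for the full test machinery; the Wright-Fisher simulator, population size, and selection coefficient are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def frequency_increment_test(freqs, times):
    """FIT: rescale allele-frequency increments so that, under pure drift,
    they have mean zero; then t-test whether their mean deviates from 0."""
    freqs, times = np.asarray(freqs, float), np.asarray(times, float)
    dt = np.diff(times)
    y = np.diff(freqs) / np.sqrt(2 * freqs[:-1] * (1 - freqs[:-1]) * dt)
    t_stat, p_value = stats.ttest_1samp(y, 0.0)
    return t_stat, p_value

rng = np.random.default_rng(3)
N, gens = 1000, np.arange(0, 60, 5)   # population size, sampling generations

def wright_fisher(s_coef):
    """Sampled allele-frequency trajectory from a Wright-Fisher population."""
    f, traj = 0.2, []
    for g in range(gens[-1] + 1):
        if g in gens:
            traj.append(f)
        w = f * (1 + s_coef) / (f * (1 + s_coef) + (1 - f))
        f = rng.binomial(N, w) / N
    return np.clip(np.array(traj), 1 / N, 1 - 1 / N)

t_sel, p_sel = frequency_increment_test(wright_fisher(0.2), gens)  # selection
t_neu, p_neu = frequency_increment_test(wright_fisher(0.0), gens)  # drift only
```

Under strong selection the rescaled increments have a clearly positive mean; under drift they scatter around zero.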

  16. Trend Estimation and Regression Analysis in Climatological Time Series: An Application of Structural Time Series Models and the Kalman Filter.

    NASA Astrophysics Data System (ADS)

    Visser, H.; Molenaar, J.

    1995-05-01

    The detection of trends in climatological data has become central to the discussion on climate change due to the enhanced greenhouse effect. To prove detection, a method is needed (i) to make inferences on significant rises or declines in trends, (ii) to take into account natural variability in climate series, and (iii) to compare output from GCMs with the trends in observed climate data. To meet these requirements, flexible mathematical tools are needed. A structural time series model is proposed with which a stochastic trend, a deterministic trend, and regression coefficients can be estimated simultaneously. The stochastic trend component is described using the class of ARIMA models. The regression component is assumed to be linear; however, the regression coefficients corresponding to the explanatory variables may be allowed to be time dependent in order to validate this assumption. The mathematical technique used to estimate this trend-regression model is the Kalman filter. The main features of the filter are discussed. Examples of trend estimation are given using annual mean temperatures at a single station in the Netherlands (1706-1990) and annual mean temperatures at Northern Hemisphere land stations (1851-1990). The inclusion of explanatory variables is shown by regressing the latter temperature series on four variables: Southern Oscillation index (SOI), volcanic dust index (VDI), sunspot numbers (SSN), and a simulated temperature signal induced by increasing greenhouse gases (GHG). In all analyses, the influence of SSN on global temperatures is found to be negligible. The correlations between temperatures and SOI and VDI appear to be negative. For SOI, this correlation is significant, but for VDI it is not, probably because of a lack of volcanic eruptions during the sample period. The relation between temperatures and GHG is positive, which is in agreement with the hypothesis of a warming climate because of increasing levels of greenhouse gases.
The prediction performance of
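    A minimal local-linear-trend Kalman filter illustrates the trend-estimation machinery; the state-space form, variance parameters, and simulated series below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def local_linear_trend_filter(y, q_level=1e-3, q_slope=1e-5, r=1.0):
    """Kalman filter for a stochastic-trend state [level, slope]:
        level_t = level_{t-1} + slope_{t-1} + noise,   y_t = level_t + noise."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])               # observation matrix
    Q = np.diag([q_level, q_slope])          # state noise covariance
    x, P = np.zeros(2), np.eye(2) * 10.0     # vague initialization
    states = []
    for obs in y:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + r                           # innovation variance
        K = (P @ H.T) / S                             # Kalman gain (2x1)
        x = x + (K * (obs - H @ x)).ravel()           # update state
        P = (np.eye(2) - K @ H) @ P                   # update covariance
        states.append(x.copy())
    return np.array(states)

rng = np.random.default_rng(4)
t = np.arange(300)
y = 0.05 * t + rng.normal(0, 1.0, t.size)   # linear trend buried in noise
states = local_linear_trend_filter(y)
slope_est = states[-1, 1]                   # close to the true slope 0.05
```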

  17. Assimilation of LAI time-series in crop production models

    NASA Astrophysics Data System (ADS)

    Kooistra, Lammert; Rijk, Bert; Nannes, Louis

    2014-05-01

    Agriculture is worldwide a large consumer of freshwater, nutrients and land. Spatially explicit agricultural management activities (e.g., fertilization, irrigation) could significantly improve efficiency in resource use. In previous studies and operational applications, remote sensing has been shown to be a powerful method for spatio-temporal monitoring of actual crop status. As a next step, yield forecasting by assimilating remote sensing based plant variables in crop production models would improve agricultural decision support at both the farm and field level. In this study we investigated the potential of remote sensing based Leaf Area Index (LAI) time-series assimilated in the crop production model LINTUL to improve yield forecasting at field level. The effects of the assimilation method and the number of assimilated observations were evaluated. The LINTUL-3 crop production model was calibrated and validated for a potato crop on two experimental fields in the south of the Netherlands. A range of data sources (e.g., in-situ soil moisture and weather sensors, destructive crop measurements) was used for calibration of the model for the experimental field in 2010. LAI from Cropscan field radiometer measurements and actual LAI measured with the LAI-2000 instrument were used as input for the LAI time-series. The LAI time-series were assimilated in the LINTUL model and validated for a second experimental field on which potatoes were grown in 2011. Yield in 2011 was simulated with an R2 of 0.82 when compared with field measured yield. Furthermore, we analysed the potential of assimilation of LAI into the LINTUL-3 model through the 'updating' assimilation technique. The deviation between measured and simulated yield decreased from 9371 kg/ha to 8729 kg/ha when assimilating weekly LAI measurements in the LINTUL model over the season of 2011. LINTUL-3 furthermore shows the main growth-reducing factors, which are useful for farm decision support.
The combination of crop models and sensor
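    The 'updating' (direct-insertion) assimilation idea can be sketched with a toy logistic LAI model; the growth rates, observation schedule, and the model itself are illustrative stand-ins for LINTUL-3:

```python
import numpy as np

def simulate_lai(days, r, lai0=0.1, lai_max=6.0, obs=None):
    """Logistic LAI growth; when an observation exists for a day, replace
    ('update') the simulated state with it before the next growth step."""
    lai = np.empty(days)
    lai[0] = lai0
    for d in range(1, days):
        prev = obs.get(d - 1, lai[d - 1]) if obs else lai[d - 1]
        lai[d] = prev + r * prev * (1 - prev / lai_max)
    return lai

days = 60
truth = simulate_lai(days, r=0.12)            # "true" crop trajectory
free_run = simulate_lai(days, r=0.09)         # biased model, no updating
weekly_obs = {d: truth[d] for d in range(0, days, 7)}
assimilated = simulate_lai(days, r=0.09, obs=weekly_obs)

err_free = abs(truth[-1] - free_run[-1])
err_assim = abs(truth[-1] - assimilated[-1])  # smaller than err_free
```

Even with a biased growth rate, the weekly updates keep the assimilated run close to the truth, mirroring the reduced yield deviation reported above.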

  18. The Mount Wilson CaK Plage Index Time Series

    NASA Astrophysics Data System (ADS)

    Bertello, L.; Ulrich, R. K.; Boyden, J. E.; Javaraiah, J.

    2008-05-01

    The Mount Wilson solar photographic archive digitization project makes available to the scientific community in digital form a selection of the solar images in the archives of the Carnegie Observatories. This archive contains over 150,000 images of the Sun which were acquired over a time span in excess of 100 years. The images include broad-band images called White Light Directs, ionized CaK line spectroheliograms and Hydrogen Balmer alpha spectroheliograms. This project will digitize essentially all of the CaK and broad-band direct images out of the archive with 12 bits of significant precision and up to 3000 by 3000 spatial pixels. The analysis of this data set will permit a variety of retrospective analyses of the state of solar magnetism and provide a temporal baseline of about 100 years for many solar properties. We have already completed the digitization of the CaK series and we are currently working on the broad-band direct images. Solar images have been extracted and identified with original logbook parameters of observation time and scan format, and they are available from the project web site at www.astro.ucla.edu/~ulrich/MW_SPADP. We present preliminary results on a CaK plage index time series derived from the analysis of 70 years of CaK observations, from 1915 to 1985. One of the main problems we encountered during the calibration process of these images is the presence of a vignetting function. This function is linked to the relative position between the pupil and the grating. As a result of this effect, the intensity and its gradient are highly variable from one image to another. We currently remove this effect by using a running median filter to determine the background of the image and dividing the image by this background to obtain a flat image. A plage index value is then computed from the intensity distribution of this flat image. 
We show that the temporal variability of our CaK plage index agrees very well with the behavior of the international
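    The running-median flattening step can be sketched in one dimension; the window size and the synthetic vignetting profile are illustrative assumptions:

```python
import numpy as np

def flatten(intensity, window=101):
    """Divide a 1-D intensity profile by its running median to remove a
    slowly varying vignetting background (sketch of the step described above)."""
    pad = window // 2
    padded = np.pad(intensity, pad, mode="edge")
    background = np.array([np.median(padded[i:i + window])
                           for i in range(intensity.size)])
    return intensity / background

x = np.linspace(0, 1, 1000)
vignette = 1.0 + 0.5 * x              # smooth synthetic intensity gradient
row = vignette.copy()
row[400:420] *= 1.8                   # a bright, narrow "plage" feature
flat = flatten(row)                   # background ~1 everywhere, plage stands out
```

Because the plage is much narrower than the median window, it survives the division while the gradient is removed.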

  19. Aerosol Climate Time Series in ESA Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, Thomas; de Leeuw, Gerrit; Pinnock, Simon

    2016-04-01

    Within the ESA Climate Change Initiative (CCI), Aerosol_cci (2010-2017) conducts intensive work to improve algorithms for the retrieval of aerosol information from European sensors. Meanwhile, full mission time series of two GCOS-required aerosol parameters are completely validated and released: Aerosol Optical Depth (AOD) from dual view ATSR-2 / AATSR radiometers (3 algorithms, 1995-2012), and stratospheric extinction profiles from the star occultation GOMOS spectrometer (2002-2012). Additionally, a 35-year multi-sensor time series of the qualitative Absorbing Aerosol Index (AAI), together with sensitivity information and an AAI model simulator, is available. Complementary aerosol properties requested by GCOS are in a "round robin" phase, in which various algorithms are inter-compared: fine mode AOD, mineral dust AOD (from the thermal IASI spectrometer, but also from ATSR instruments and the POLDER sensor), absorption information and aerosol layer height. As a quasi-reference for validation in a few selected regions with sparse ground-based observations, the multi-pixel GRASP algorithm for the POLDER instrument is used. Validation of first dataset versions (vs. AERONET, MAN) and inter-comparison to other satellite datasets (MODIS, MISR, SeaWIFS) demonstrated that the quality of the available datasets is comparable to that of other satellite retrievals, and revealed needs for algorithm improvement (for example, for higher AOD values), which were taken into account in a reprocessing. The datasets contain pixel-level uncertainty estimates, which were also validated and improved in the reprocessing. For the three ATSR algorithms the use of an ensemble method was tested. The paper will summarize and discuss the status of dataset reprocessing and validation. The focus will be on the ATSR, GOMOS and IASI datasets. Validation of the pixel-level uncertainties will be summarized and discussed, including unknown components and their potential usefulness and limitations. 
Opportunities for time series extension

  20. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID... Penalties associated with the failure to submit timely and accurate ASP data. Section 1847A(d)(4) specifies the penalties associated with misrepresentations associated with ASP data. If the Secretary...

  1. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID... Penalties associated with the failure to submit timely and accurate ASP data. Section 1847A(d)(4) specifies the penalties associated with misrepresentations associated with ASP data. If the Secretary...

  2. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID... Penalties associated with the failure to submit timely and accurate ASP data. Section 1847A(d)(4) specifies the penalties associated with misrepresentations associated with ASP data. If the Secretary...

  3. 42 CFR 414.806 - Penalties associated with the failure to submit timely and accurate ASP data.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... timely and accurate ASP data. 414.806 Section 414.806 Public Health CENTERS FOR MEDICARE & MEDICAID... associated with the failure to submit timely and accurate ASP data. Section 1847A(d)(4) specifies the penalties associated with misrepresentations associated with ASP data. If the Secretary determines that...

  4. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

    Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat-cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  5. Optical remotely sensed time series data for land cover classification: A review

    NASA Astrophysics Data System (ADS)

    Gómez, Cristina; White, Joanne C.; Wulder, Michael A.

    2016-06-01

    Accurate land cover information is required for science, monitoring, and reporting. Land cover changes naturally over time, as well as a result of anthropogenic activities. Monitoring and mapping of land cover and land cover change in a consistent and robust manner over large areas is made possible with Earth Observation (EO) data. Land cover products satisfying a range of science and policy information needs are currently produced periodically at different spatial and temporal scales. The increased availability of EO data, particularly from the Landsat archive (soon to be augmented with Sentinel-2 data), coupled with improved computing and storage capacity and novel image compositing approaches, has resulted in the availability of annual, large-area, gap-free, surface reflectance data products. In turn, these data products support the development of annual land cover products that can be both informed and constrained by change detection outputs. The inclusion of time series change in the land cover mapping process provides information on class stability and informs on logical class transitions (both temporally and categorically). In this review, we present the issues and opportunities associated with generating and validating time-series informed annual, large-area, land cover products, and identify methods suited to incorporating time series information and other novel inputs for land cover characterization.

  6. Weighted permutation entropy based on different symbolic approaches for financial time series

    NASA Astrophysics Data System (ADS)

    Yin, Yi; Shang, Pengjian

    2016-02-01

    In this paper, we introduce weighted permutation entropy (WPE) and three different symbolic approaches to investigate the complexities of stock time series containing amplitude-coded information, and we explore the influence of the choice of symbolic approach on the resulting WPE. We apply WPE based on these symbolic approaches to the US and Chinese stock markets and compare the results for the two markets. All three symbolic approaches lower the complexity of the stock time series as measured by WPE, regardless of the embedding dimension. The similarity between these stock markets can be detected by WPE based on the Binary Δ-coding-method, while the differences between them can be revealed by WPE based on the σ-method and the Max-min-method. Combining the WPE method with the σ-method or the Max-min-method reflects the multiscale structure of complexity through different time delays, and resolves the differences between the complexities of stock time series in more detail and more accurately. Furthermore, the correlations between stock markets in the same region, and the similarities hidden in the S&P500 and DJI, ShangZheng and ShenCheng, are uncovered by comparing the WPE based on the Binary Δ-coding-method across the six stock markets.
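The weighted variant of permutation entropy used above can be sketched in a few lines. The sketch below follows the common variance-weighting scheme; the record's specific symbolizations (Binary Δ-coding, σ-method, Max-min-method) are not reproduced, and the function name and defaults are illustrative.

```python
import math
import numpy as np

def weighted_permutation_entropy(x, m=3, tau=1):
    """Normalized weighted permutation entropy.

    Each length-m ordinal pattern is weighted by the variance of its
    embedding vector, so amplitude information is retained alongside
    the ordinal structure.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    weights = {}
    for i in range(n):
        v = x[i:i + (m - 1) * tau + 1:tau]        # delay-embedding vector
        pattern = tuple(np.argsort(v))            # ordinal pattern
        weights[pattern] = weights.get(pattern, 0.0) + np.var(v)
    total = sum(weights.values())
    if total == 0.0:                              # constant series
        return 0.0
    p = np.array(list(weights.values())) / total
    h = -np.sum(p * np.log2(p))
    return float(h / np.log2(math.factorial(m)))  # normalize to [0, 1]
```

White noise gives values near 1, while a monotonic ramp collapses to a single ordinal pattern and gives 0; the embedding dimension `m` and delay `tau` play the roles discussed in the abstract.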

  7. [Fast and accurate extraction of ring-down time in cavity ring-down spectroscopy].

    PubMed

    Wang, Dan; Hu, Ren-Zhi; Xie, Pin-Hua; Qin, Min; Ling, Liu-Yi; Duan, Jun

    2014-10-01

    Research is conducted on accurate and efficient algorithms for extracting the ring-down time (τ) in cavity ring-down spectroscopy (CRDS), which is used to measure the NO3 radical in the atmosphere. Fast and accurate extraction of the ring-down time guarantees more precise and faster measurement. In this research, five commonly used algorithms are selected to extract the ring-down time: the fast Fourier transform (FFT) algorithm, the discrete Fourier transform (DFT) algorithm, the linear regression of the sum (LRS) algorithm, the Levenberg-Marquardt (LM) algorithm, and the least squares (LS) algorithm. Simulated ring-down signals with various amplitude levels of white noise are fitted using these five algorithms, and their fitting results are compared and analyzed in four respects: vulnerability to noise, accuracy and precision of the fit, fitting speed, and the preferable length of the fitted ring-down waveform. The results show that the Levenberg-Marquardt algorithm and the linear regression of the sum algorithm provide more precise results and prove to have higher noise immunity, although by comparison the fitting speed of the Levenberg-Marquardt algorithm turns out to be slower. In addition, analysis of the simulated ring-down signals shows that five to ten times the ring-down time is the best fitting waveform length, because in this case the standard deviation of the fitting results of all five algorithms is at its minimum. An external-modulation diode laser and a cavity consisting of two high-reflectivity mirrors are used to construct a cavity ring-down spectroscopy detection system. Under our experimental conditions, in which the noise level is 0.2%, the linear regression of the sum algorithm and the Levenberg-Marquardt algorithm are selected to process the experimental data. 
The experimental results show that the accuracy and precision of linear regression of
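As a hedged illustration of the fitting step, the sketch below extracts τ from a simulated ring-down with a Levenberg-Marquardt fit (SciPy's `curve_fit` default for unbounded problems), recording roughly eight ring-down times in line with the 5-10 τ window identified above. All numerical values are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def ring_down(t, a, tau, b):
    """Single-exponential ring-down model: a * exp(-t / tau) + b."""
    return a * np.exp(-t / tau) + b

# Simulated signal: tau = 30 us, 0.2% noise (the level quoted above),
# sampled over ~8 ring-down times, inside the recommended 5-10 tau window.
rng = np.random.default_rng(1)
tau_true = 30e-6
t = np.linspace(0.0, 8 * tau_true, 1000)
y = ring_down(t, 1.0, tau_true, 0.01) + 0.002 * rng.standard_normal(t.size)

# curve_fit uses the Levenberg-Marquardt algorithm by default when no
# parameter bounds are supplied.
(a_fit, tau_fit, b_fit), _ = curve_fit(ring_down, t, y, p0=(0.5, 10e-6, 0.0))
```

With a thousand samples at this noise level the fitted τ typically lands well within a percent of the true value; the LRS algorithm trades some of that robustness for speed, as the comparison above notes.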

  8. Optimizing Functional Network Representation of Multivariate Time Series

    NASA Astrophysics Data System (ADS)

    Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco Del; Menasalvas, Ernestina; Boccaletti, Stefano

    2012-09-01

    By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks.
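The reconstruction step that the threshold-selection criterion operates on can be sketched as follows. The paper's principled selection uses classification performance; this sketch merely scans thresholds against a target link density to show where a criterion is needed, and all names and values are illustrative.

```python
import numpy as np

def functional_network(data, threshold):
    """Adjacency matrix from a (channels x samples) array: link two
    channels when their absolute Pearson correlation reaches threshold."""
    r = np.abs(np.corrcoef(data))
    adj = (r >= threshold).astype(int)
    np.fill_diagonal(adj, 0)          # no self-loops
    return adj

def link_density(adj):
    """Fraction of possible undirected links that are present."""
    n = adj.shape[0]
    return adj.sum() / (n * (n - 1))

rng = np.random.default_rng(2)
signals = rng.standard_normal((10, 500))
signals[1] = signals[0] + 0.1 * rng.standard_normal(500)   # one strong link

# Illustrative criterion: the largest threshold that keeps at least
# 2% link density.  The paper replaces this with an objective based on
# discriminative power for classification.
for thr in np.arange(0.95, 0.0, -0.05):
    if link_density(functional_network(signals, thr)) >= 0.02:
        break
```

The choice of threshold is the crux: too high and the network fragments, too low and spurious links swamp the discriminative structure.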

  9. Optimal estimation of recurrence structures from time series

    NASA Astrophysics Data System (ADS)

    beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel

    2016-05-01

    Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters a pertinent unsolved problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for that threshold, reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
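The object whose distance threshold ε the paper optimizes is the recurrence matrix; a minimal construction for a scalar series is sketched below (the Markov model and utility function themselves are beyond this sketch). Scanning the recurrence rate over ε shows the sensitivity that any selection criterion has to resolve.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """R[i, j] = 1 when |x_i - x_j| <= eps.  For an embedded
    multivariate series, replace the absolute difference with a
    vector norm."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

def recurrence_rate(x, eps):
    """Fraction of recurrent pairs, excluding the trivial diagonal."""
    r = recurrence_matrix(x, eps)
    n = len(x)
    return (r.sum() - n) / (n * (n - 1))

# The recurrence rate grows monotonically with the threshold, which is
# why an external criterion is needed to pick eps.
x = np.sin(np.linspace(0, 6 * np.pi, 300))
rates = [recurrence_rate(x, eps) for eps in (0.01, 0.1, 0.5)]
```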

  10. Spitzer IRAC Photometry for Time Series in Crowded Fields

    NASA Astrophysics Data System (ADS)

    Calchi Novati, S.; Gould, A.; Yee, J. C.; Beichman, C.; Bryden, G.; Carey, S.; Fausnaugh, M.; Gaudi, B. S.; Henderson, C. B.; Pogge, R. W.; Shvartzvald, Y.; Wibking, B.; Zhu, W.; Spitzer Team; Udalski, A.; Poleski, R.; Pawlak, M.; Szymański, M. K.; Skowron, J.; Mróz, P.; Kozłowski, S.; Wyrzykowski, Ł.; Pietrukowicz, P.; Pietrzyński, G.; Soszyński, I.; Ulaczyk, K.; OGLE Group

    2015-12-01

    We develop a new photometry algorithm that is optimized for the Infrared Array Camera (IRAC) Spitzer time series in crowded fields and that is particularly adapted to faint or heavily blended targets. We apply this to the 170 targets from the 2015 Spitzer microlensing campaign and present the results of three variants of this algorithm in an online catalog. We present detailed accounts of the application of this algorithm to two difficult cases, one very faint and the other very crowded. Several of Spitzer's instrumental characteristics that drive the specific features of this algorithm are shared by Kepler and WFIRST, implying that these features may prove to be a useful starting point for algorithms designed for microlensing campaigns by these other missions.

  11. Binary Time Series Modeling with Application to Adhesion Frequency Experiments

    PubMed Central

    Hung, Ying; Zarnitsyna, Veronika; Zhang, Yan; Zhu, Cheng; Wu, C. F. Jeff

    2011-01-01

    Repeated adhesion frequency assay is the only published method for measuring the kinetic rates of cell adhesion. Cell adhesion plays an important role in many physiological and pathological processes. Traditional analysis of adhesion frequency experiments assumes that the adhesion test cycles are independent Bernoulli trials. This assumption can often be violated in practice. Motivated by the analysis of repeated adhesion tests, a binary time series model incorporating random effects is developed in this paper. A goodness-of-fit statistic is introduced to assess the adequacy of distribution assumptions on the dependent binary data with random effects. The asymptotic distribution of the goodness-of-fit statistic is derived and its finite-sample performance is examined via a simulation study. Application of the proposed methodology to real data from a T-cell experiment reveals some interesting information, including the dependency between repeated adhesion tests. PMID:22180690

  12. Incorporating Satellite Time-Series Data into Modeling

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2008-01-01

    In situ time series observations have provided a multi-decadal view of long-term changes in ocean biology. These observations are sufficiently reliable to enable discernment of even relatively small changes, and provide continuous information on a host of variables. Their key drawback is their limited domain. Satellite observations from ocean color sensors do not suffer the drawback of domain, and simultaneously view the global oceans. This attribute lends credence to their use in global and regional model validation and data assimilation. We focus on these applications using the NASA Ocean Biogeochemical Model. The enhancement of the satellite data using data assimilation is featured, and the limitations of long-term satellite data sets are also discussed.

  13. Predicting chaotic time series with a partial model

    NASA Astrophysics Data System (ADS)

    Hamilton, Franz; Berry, Tyrus; Sauer, Timothy

    2015-07-01

    Methods for forecasting time series are a critical aspect of the understanding and control of complex networks. When the model of the network is unknown, nonparametric methods for prediction have been developed, based on concepts of attractor reconstruction pioneered by Takens and others. In this Rapid Communication we consider how to make use of a subset of the system equations, if they are known, to improve the predictive capability of forecasting methods. A counterintuitive implication of the results is that knowledge of the evolution equation of even one variable, if known, can improve forecasting of all variables. The method is illustrated on data from the Lorenz attractor and from a small network with chaotic dynamics.
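The attractor-reconstruction baseline that the partial-model method improves upon can be illustrated with a nearest-neighbour ("analog") forecast in a Takens delay embedding. The partial-model coupling itself is beyond this sketch; the embedding parameters and the periodic demonstration series are illustrative choices, not the paper's.

```python
import numpy as np

def analog_forecast(x, m=3, tau=8, steps=1):
    """Forecast the value `steps` samples beyond the end of x by
    finding the historical delay vector closest to the most recent
    one and reading off its future value (Takens-style prediction)."""
    x = np.asarray(x, dtype=float)
    span = (m - 1) * tau
    n_lib = len(x) - span - steps              # usable library vectors
    lib = np.array([x[i:i + span + 1:tau] for i in range(n_lib)])
    query = x[len(x) - 1 - span::tau]          # most recent delay vector
    j = np.argmin(np.linalg.norm(lib - query, axis=1))
    return x[j + span + steps]

# On a noise-free periodic series the analog forecast is near-exact,
# because the library contains vectors from identical phases.
t = np.linspace(0, 40 * np.pi, 2001)
x = np.sin(t)
pred = analog_forecast(x[:-1], m=3, tau=8, steps=1)
```

On chaotic data such as the Lorenz x-coordinate the same procedure works but degrades with the forecast horizon, which is exactly the gap that injecting known system equations (the paper's contribution) narrows.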

  14. Directional ocean spectra by three-dimensional displacement time series

    SciTech Connect

    Su, T.Z.

    1984-01-01

    The directionality of ocean waves is considered the most problematic area of today's wave measurement technology. In 1982 the University of Hawaii Ocean Engineering Department began a research project, ''Engineering Development of a Directional Wave Spectrum Measurement System for OTEC Applications,'' to address this problem. A new technology was developed in this research. This technology uses acoustic signals to determine the trajectory of a floating buoy which simulates the movement of a surface water particle. Transfer functions of the three-dimensional displacement time series are used to describe the wave kinematics. The described wave kinematics are directly applied to calculate hydrodynamic loading. Cospectra and quadrature spectra determine the directional distribution function. The resultant directional distribution function is used to predict the directional progression of ocean waves.

  15. Time-series analysis of Campylobacter incidence in Switzerland.

    PubMed

    Wei, W; Schüpbach, G; Held, L

    2015-07-01

    Campylobacteriosis has been the most common food-associated notifiable infectious disease in Switzerland since 1995. Contact with and ingestion of raw or undercooked broilers are considered the dominant risk factors for infection. In this study, we investigated the temporal relationship between the disease incidence in humans and the prevalence of Campylobacter in broilers in Switzerland from 2008 to 2012. We use a time-series approach to describe the pattern of the disease by incorporating seasonal effects and autocorrelation. The analysis shows that prevalence of Campylobacter in broilers, with a 2-week lag, has a significant impact on disease incidence in humans. Therefore Campylobacter cases in humans can be partly explained by contagion through broiler meat. We also found a strong autoregressive effect in human illness, and a significant increase of illness during Christmas and New Year's holidays. In a final analysis, we corrected for the sampling error of prevalence in broilers and the results gave similar conclusions. PMID:25400006

  16. Polynomial harmonic GMDH learning networks for time series modeling.

    PubMed

    Nikolaev, Nikolay Y; Iba, Hitoshi

    2003-12-01

    This paper presents a constructive approach to neural network modeling of polynomial harmonic functions. This is an approach to growing higher-order networks, like those built by the multilayer GMDH algorithm, using activation polynomials. Two contributions for enhancing neural network learning are offered: (1) extending the expressive power of the network representation with another compositional scheme for combining polynomial terms and harmonics obtained analytically from the data; (2) improving the higher-order network performance with a backpropagation algorithm for further gradient descent learning of the weights, initialized by least squares fitting during the growing phase. Empirical results show that the polynomial harmonic version phGMDH outperforms the previous GMDH, a Neurofuzzy GMDH and traditional MLP neural networks on time series modeling tasks. Applying backpropagation training next helps to achieve superior polynomial network performance. PMID:14622880

  17. Forecasting electricity usage using univariate time series models

    NASA Astrophysics Data System (ADS)

    Hock-Eam, Lim; Chee-Yin, Yip

    2014-12-01

    Electricity is one of the important energy sources. A sufficient supply of electricity is vital to support a country's development and growth. Due to the changing of socio-economic characteristics, increasing competition and deregulation of electricity supply industry, the electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods. This will provide further insights on the weakness and strengths of each method. In literature, there are mixed evidences on the best forecasting methods of electricity demand. This paper aims to compare the predictive performance of univariate time series models for forecasting the electricity demand using a monthly data of maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance. On the other hand, Holt-Winters exponential smoothing method is a good forecasting method for in-sample predictive performance.
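As a sketch of one of the compared methods, here is a textbook additive Holt-Winters smoother with an h-step-ahead forecast. A real evaluation like the paper's would use a library implementation with optimized smoothing parameters; the constants and the synthetic "monthly load" series below are illustrative.

```python
import numpy as np

def holt_winters_additive(y, season, alpha=0.3, beta=0.05, gamma=0.2, h=12):
    """Additive Holt-Winters smoothing with an h-step-ahead forecast.

    Level, trend, and seasonal components are updated recursively;
    the forecast extrapolates the trend and reuses the last fitted
    seasonal cycle.
    """
    y = np.asarray(y, dtype=float)
    level = y[:season].mean()
    trend = (y[season:2 * season].mean() - y[:season].mean()) / season
    seasonal = list(y[:season] - level)
    for t in range(season, len(y)):
        s = seasonal[t - season]
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        seasonal.append(gamma * (y[t] - new_level) + (1 - gamma) * s)
        level = new_level
    return np.array([level + (k + 1) * trend
                     + seasonal[len(y) - season + k % season]
                     for k in range(h)])

# Synthetic monthly demand: linear growth plus a 12-month cycle.
t = np.arange(120)
y = 100.0 + 0.5 * t + 10.0 * np.sin(2 * np.pi * t / 12)
fc = holt_winters_additive(y, season=12, h=12)
```

Comparing such forecasts against a held-out tail (in-sample vs out-of-sample, as the paper does) is what separates the Holt-Winters and Box-Jenkins conclusions above.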

  18. Optimizing Functional Network Representation of Multivariate Time Series

    PubMed Central

    Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco del; Menasalvas, Ernestina; Boccaletti, Stefano

    2012-01-01

    By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks. PMID:22953051

  19. Practical Measures of Integrated Information for Time-Series Data

    PubMed Central

    Barrett, Adam B.; Seth, Anil K.

    2011-01-01

    A recent measure of ‘integrated information’, ΦDM, quantifies the extent to which a system generates more information than the sum of its parts as it transitions between states, possibly reflecting levels of consciousness generated by neural systems. However, ΦDM is defined only for discrete Markov systems, which are unusual in biology; as a result, ΦDM can rarely be measured in practice. Here, we describe two new measures, ΦE and ΦAR, that overcome these limitations and are easy to apply to time-series data. We use simulations to demonstrate the in-practice applicability of our measures, and to explore their properties. Our results provide new opportunities for examining information integration in real and model systems and carry implications for relations between integrated information, consciousness, and other neurocognitive processes. However, our findings pose challenges for theories that ascribe physical meaning to the measured quantities. PMID:21283779

  20. An Ontology for the Discovery of Time-series Data

    NASA Astrophysics Data System (ADS)

    Hooper, R. P.; Choi, Y.; Piasecki, M.; Zaslavsky, I.; Valentine, D. W.; Whitenack, T.

    2010-12-01

    An ontology was developed to enable a single-dimensional keyword search of time-series data collected at fixed points, such as stream gage records, water quality observations, or repeated biological measurements collected at fixed stations. The hierarchical levels were developed to allow navigation from general concepts to more specific ones, terminating in a leaf concept, which is the specific property measured. For example, the concept “nutrient” has child concepts of “nitrogen”, “phosphorus”, and “carbon”; each of these child concepts is then broken into the actual constituent measured (e.g., “total kjeldahl nitrogen” or “nitrate + nitrite”). In this way, a non-expert user can find all nutrients containing nitrogen without knowing all the species measured, but an expert user can go immediately to the compound of interest. In addition, a property, such as dissolved silica, can appear as a leaf concept under nutrients or weathering products. This flexibility allows users from various disciplines to find properties of interest. The ontology can be viewed at http://water.sdsc.edu/hiscentral/startree.aspx. Properties measured by various data publishers (e.g., universities and government agencies) are tagged with leaf concepts from this ontology. A discovery client, HydroDesktop, creates a search request by defining the spatial and temporal extent of interest and a keyword taken from the discovery ontology. Metadata returned from the catalog describes the time series which meet the specified search criteria. This ontology is considered to be an initial description of physical, chemical and biological properties measured in water and suspended sediment. Future plans call for creating a moderated forum for the scientific community to add to and to modify this ontology. Further information for the Hydrologic Information Systems project, of which this is a part, is available at http://his.cuahsi.org.

  1. Efficient Bayesian inference for natural time series using ARFIMA processes

    NASA Astrophysics Data System (ADS)

    Graves, Timothy; Gramacy, Robert; Franzke, Christian; Watkins, Nicholas

    2016-04-01

    Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. We present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators [1]. In addition we show how the method can be used to perform joint inference of the stability exponent and the memory parameter when ARFIMA is extended to allow for alpha-stable innovations. Such models can be used to study systems where heavy tails and long range memory coexist. [1] Graves et al, Nonlin. Processes Geophys., 22, 679-700, 2015; doi:10.5194/npg-22-679-2015.
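The long-memory core of ARFIMA is the fractional differencing operator (1 - B)^d, whose weights follow a simple recursion. Applying (1 - B)^{-d} to white noise and then (1 - B)^{d} (both with zero pre-sample values) recovers the innovations exactly, which makes a convenient self-check; the paper's Bayesian inference machinery is beyond this sketch, and the value of d below is illustrative.

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of (1 - B)^d, via the recursion
    w_0 = 1,  w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def apply_filter(w, x):
    """y_t = sum_k w_k * x_{t-k}, assuming zero pre-sample values."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        y[t] = w[:t + 1] @ x[t::-1]
    return y

# Simulate ARFIMA(0, d, 0): integrate white noise with (1 - B)^{-d},
# then difference with (1 - B)^{d} to recover the innovations.
rng = np.random.default_rng(4)
d = 0.3
eps = rng.standard_normal(300)
y = apply_filter(frac_diff_weights(-d, 300), eps)   # long-memory series
rec = apply_filter(frac_diff_weights(d, 300), y)
```

The slow hyperbolic decay of the (1 - B)^{-d} weights is exactly the long-memory behaviour that distinguishes ARFIMA from short-memory ARMA models.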

  2. Controlled, distributed data management of an Antarctic time series

    NASA Astrophysics Data System (ADS)

    Leadbetter, Adam; Connor, David; Cunningham, Nathan; Reynolds, Sarah

    2010-05-01

    The Rothera Time Series (RaTS) presents over ten years of oceanographic data collected off the Antarctic Peninsula comprising conductivity, temperature, depth cast data; current meter data; and bottle sample data. The data set has been extensively analysed and is well represented in the scientific literature. However, it has never been available to browse as a coherent entity. Work has been undertaken by both the data collecting organisation (the British Antarctic Survey, BAS) and the associated national data centre (the British Oceanographic Data Centre, BODC) to describe the parameters comprising the dataset in a consistent manner. To this end, each data point in the RaTS dataset has now been ascribed a parameter usage term, selected from the appropriate controlled vocabulary of the Natural Environment Research Council's Data Grid (NDG). By marking up the dataset in this way the semantic richness of the NDG vocabularies is fully accessible, and the dataset can be then explored using the Global Change Master Directory keyword set, the International Standards Organisation topic categories, SeaDataNet disciplines and agreed parameter groups, and the NDG parameter discovery vocabulary. We present a single data discovery and exploration tool, a web portal which allows the user to drill down through the dataset using their chosen keyword set. The spatial coverage of the chosen data is displayed through a Google Earth web plugin. Finally, as the time series data are held at BODC and the discrete sample data held at BAS (which are separate physical locations), a mechanism has been established to provide metadata from one site to another. This takes the form of an Open Geospatial Consortium Web Map Service server at BODC feeding information into the portal hosted at BAS.

  3. Statistical methods of parameter estimation for deterministically chaotic time series.

    PubMed

    Pisarenko, V F; Sornette, D

    2004-03-01

    We discuss the possibility of applying some standard statistical methods (the least-square method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamical system (the logistic map) containing an observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x(1), considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to, but simpler and with smaller bias than, the previously proposed "multiple shooting" method. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This appears to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically). PMID:15089376
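A naive conditional least-squares estimate of the logistic-map parameter illustrates the baseline that the segmentation-fitting ML method improves on. Under observational noise this regression estimator is biased (one of the difficulties the paper discusses); with the small noise level simulated here the bias is modest. All values are illustrative.

```python
import numpy as np

def logistic_orbit(r, x1, n):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    x = np.empty(n)
    x[0] = x1
    for t in range(n - 1):
        x[t + 1] = r * x[t] * (1 - x[t])
    return x

def naive_r_estimate(y):
    """Least-squares r from consecutive observation pairs, treating the
    noisy observations as if they were the true states.  Biased when
    observational noise is present, which motivates the ML approach."""
    f = y[:-1] * (1 - y[:-1])
    return float((y[1:] @ f) / (f @ f))

rng = np.random.default_rng(5)
true_r = 3.9
y = logistic_orbit(true_r, 0.3, 500) + 0.01 * rng.standard_normal(500)
r_hat = naive_r_estimate(y)
```

Increasing the noise level inflates the bias of this estimator, whereas lengthening the fitted segment runs into the exponential sensitivity to the initial condition noted above: the trade-off the piece-wise ML method is designed to manage.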

  4. A time-accurate algorithm for chemical non-equilibrium viscous flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, J.-S.; Chen, K.-H.; Choi, Y.

    1992-01-01

    A time-accurate, coupled solution procedure is described for the chemical nonequilibrium Navier-Stokes equations over a wide range of Mach numbers. This method employs the strong conservation form of the governing equations, but uses primitive variables as unknowns. Real gas properties and equilibrium chemistry are considered. Numerical tests include steady convergent-divergent nozzle flows with air dissociation/recombination chemistry, dump combustor flows with n-pentane-air chemistry, nonreacting flow in a model double annular combustor, and nonreacting unsteady driven cavity flows. Numerical results for both the steady and unsteady flows demonstrate the efficiency and robustness of the present algorithm for Mach numbers ranging from the incompressible limit to supersonic speeds.

  5. Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.

  6. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  7. Accurate mass - time tag library for LC/MS-based metabolite profiling of medicinal plants

    PubMed Central

    Cuthbertson, Daniel J.; Johnson, Sean R.; Piljac-Žegarac, Jasenka; Kappel, Julia; Schäfer, Sarah; Wüst, Matthias; Ketchum, Raymond E. B.; Croteau, Rodney B.; Marques, Joaquim V.; Davin, Laurence B.; Lewis, Norman G.; Rolf, Megan; Kutchan, Toni M.; Soejarto, D. Doel; Lange, B. Markus

    2013-01-01

    We report the development and testing of an accurate mass – time (AMT) tag approach for the LC/MS-based identification of plant natural products (PNPs) in complex extracts. An AMT tag library was developed for approximately 500 PNPs with diverse chemical structures, detected in electrospray and atmospheric pressure chemical ionization modes (both positive and negative polarities). In addition, to enable peak annotations with high confidence, MS/MS spectra were acquired with three different fragmentation energies. The LC/MS and MS/MS data sets were integrated into online spectral search tools and repositories (Spektraris and MassBank), thus allowing users to interrogate their own data sets for the potential presence of PNPs. The utility of the AMT tag library approach is demonstrated by the detection and annotation of active principles in 27 different medicinal plant species with diverse chemical constituents. PMID:23597491

  8. Disambiguating past events: Accurate source memory for time and context depends on different retrieval processes.

    PubMed

    Persson, Bjorn M; Ainge, James A; O'Connor, Akira R

    2016-07-01

    Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory, which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. Recently, though, it has been suggested that a more accurate model of human episodic memory would include contextual rather than temporal source information, as humans' memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual process retrieval processes, using an immersive virtual environment paired with a 'Remember-Know' memory task. Experiment 1 (n=28) showed that contextual information could only be retrieved accurately using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n=24), which used a more difficult task, resulting in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of results from Experiment 1. Dual process theory predicts that it should only be possible to retrieve source context from an event using recollection, and our results are consistent with this prediction. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of the memory trace at retrieval: a measure ideally suited to trace-strength interrogation using familiarity, as typically conceptualised within the dual process framework. PMID:27174312

  9. Blind source separation problem in GPS time series

    NASA Astrophysics Data System (ADS)

    Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.

    2016-04-01

A critical point in the analysis of ground displacement time series, such as those recorded by space geodetic techniques, is the development of data-driven methods that allow the different sources of deformation to be discerned and characterized in the space and time domains. Multivariate statistics includes several approaches that can be considered part of the data-driven methods. A widely used technique is the principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while retaining most of the explained variance of the dataset. However, PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem, i.e., in recovering and separating the original sources that generate the observed data. This is mainly due to the fact that PCA minimizes the misfit calculated using an L2 norm (χ²), looking for a new Euclidean space where the projected data are uncorrelated. The independent component analysis (ICA) is a popular technique adopted to approach the BSS problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we test the use of a modified variational Bayesian ICA (vbICA) method to recover the multiple sources of ground deformation even in the presence of missing data. The vbICA method models the probability density function (pdf) of each source signal using a mix of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA, and giving a more reliable estimate of them. Here we present its application to synthetic global positioning system (GPS) position time series, generated by simulating deformation near an active fault, including inter-seismic, co-seismic, and post-seismic signals, plus seasonal signals and noise, and an additional time-dependent volcanic source. We evaluate the ability of the PCA and ICA decomposition
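The PCA-versus-ICA distinction described above can be illustrated with a small numerical experiment. The sketch below (plain NumPy; not the paper's vbICA method) whitens two mixed synthetic sources with a PCA step and then applies a basic FastICA deflation with a tanh nonlinearity to solve the BSS problem. The sources, mixing matrix, and nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * t)                  # smooth periodic source
s2 = np.sign(np.sin(3 * np.pi * t))         # square-wave source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.5, 1.0]])      # assumed mixing matrix
X = S @ A.T                                 # observed mixtures

# PCA step: whiten the data (uncorrelated, unit variance) -- this alone
# does not separate the sources, it only rotates and rescales them
Xc = X - X.mean(axis=0)
d, E = np.linalg.eigh(Xc.T @ Xc / len(Xc))
Z = Xc @ E / np.sqrt(d)

# FastICA with a tanh nonlinearity and deflation to finish the separation
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    for _ in range(200):
        wx = Z @ w
        g, gp = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (Z * g[:, None]).mean(axis=0) - gp.mean() * w
        for j in range(i):                  # deflate against found components
            w_new -= (w_new @ W[j]) * W[j]
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-10
        w = w_new
        if converged:
            break
    W[i] = w

S_hat = Z @ W.T                             # recovered sources (up to sign/order)
```

After whitening, the data are uncorrelated but still mixed; the ICA rotation is what recovers each original source up to sign, scale, and ordering.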

  10. Time series inversion of spectra from ground-based radiometers

    NASA Astrophysics Data System (ADS)

    Christensen, O. M.; Eriksson, P.

    2013-07-01

    Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the Onsala Space Observatory (OSO) water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
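The core idea above can be sketched in a few lines: stack measurements from several times into one state vector, encode the temporal correlations in the a priori covariance, and apply the standard MAP (optimal estimation) solution. The trivial identity forward model, exponential correlation kernel, and parameter values below are illustrative assumptions, not the paper's retrieval setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60                                   # number of measurement times
t = np.arange(n, dtype=float)
x_true = np.sin(2 * np.pi * t / n)       # slowly varying true state
tau, sigma_a, sigma_e = 8.0, 1.0, 0.5    # correlation time, prior/noise std (assumed)

K = np.eye(n)                            # toy forward model: y = K x + noise
y = x_true + rng.normal(0.0, sigma_e, n)

# a priori covariance with explicit exponential temporal correlation
Sa = sigma_a**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
Se = sigma_e**2 * np.eye(n)
xa = np.zeros(n)                         # a priori state

# standard MAP / optimal-estimation solution
G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)   # gain matrix
x_hat = xa + G @ (y - K @ xa)
A = G @ K                                # averaging kernel matrix
```

Because the prior correlates neighbouring times, the estimate is automatically smoothed in time, with the degree of smoothing set by the covariances rather than by pre-averaging the spectra; the trace of the averaging kernel matrix gives the effective number of independent values retrieved.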

  11. Time series inversion of spectra from ground-based radiometers

    NASA Astrophysics Data System (ADS)

    Christensen, O. M.; Eriksson, P.

    2013-02-01

    Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the OSO water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.

  12. Streamflow properties from time series of surface velocity and stage

    USGS Publications Warehouse

    Plant, W.J.; Keller, W.C.; Hayes, K.; Spicer, K.

    2005-01-01

Time series of surface velocity and stage have been collected simultaneously. Surface velocity was measured using an array of newly developed continuous-wave microwave sensors. Stage was obtained from the standard U.S. Geological Survey (USGS) measurements. The depth of the river was measured several times during our experiments using sounding weights. The data clearly showed that the point of zero flow was not the bottom at the measurement site, indicating that a downstream control exists. Fathometer measurements confirmed this finding. A model of the surface velocity expected at a site having a downstream control was developed. The model showed that the standard form for the friction velocity does not apply to sites where a downstream control exists. This model fit our measured surface velocity versus stage plots very well with reasonable values of the parameters. Discharges computed using the surface velocities and measured depths matched the USGS rating curve for the site. Values of depth-weighted mean velocities derived from our data did not agree with those expected from Manning's equation due to the downstream control. These results suggest that if real-time surface velocities were available at a gauging station, unstable stream beds could be monitored. Journal of Hydraulic Engineering © ASCE.
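For reference, the comparison above is against Manning's equation, Q = (1/n) A R^(2/3) S^(1/2) in SI units. A minimal helper, with purely illustrative channel values (roughness, area, hydraulic radius, and slope are assumptions, not data from the study):

```python
def manning_discharge(n_rough, area, hyd_radius, slope):
    """Discharge Q (m^3/s) from Manning's equation in SI units:
    Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return (1.0 / n_rough) * area * hyd_radius ** (2.0 / 3.0) * slope ** 0.5

# illustrative channel: n = 0.035, A = 50 m^2, R = 2 m, S = 0.001
q = manning_discharge(0.035, 50.0, 2.0, 0.001)
```

Note the square-root dependence on slope: quadrupling the energy slope doubles the predicted discharge, which is one reason a downstream control (which alters the effective slope) breaks the expected agreement.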

  13. A conservative finite volume scheme with time-accurate local time stepping for scalar transport on unstructured grids

    NASA Astrophysics Data System (ADS)

    Cavalcanti, José Rafael; Dumbser, Michael; Motta-Marques, David da; Fragoso Junior, Carlos Ruberto

    2015-12-01

    In this article we propose a new conservative high resolution TVD (total variation diminishing) finite volume scheme with time-accurate local time stepping (LTS) on unstructured grids for the solution of scalar transport problems, which are typical in the context of water quality simulations. To keep the presentation of the new method as simple as possible, the algorithm is only derived in two space dimensions and for purely convective transport problems, hence neglecting diffusion and reaction terms. The new numerical method for the solution of the scalar transport is directly coupled to the hydrodynamic model of Casulli and Walters (2000) that provides the dynamics of the free surface and the velocity vector field based on a semi-implicit discretization of the shallow water equations. Wetting and drying is handled rigorously by the nonlinear algorithm proposed by Casulli (2009). The new time-accurate LTS algorithm allows a different time step size for each element of the unstructured grid, based on an element-local Courant-Friedrichs-Lewy (CFL) stability condition. The proposed method does not need any synchronization between different time steps of different elements and is by construction locally and globally conservative. The LTS scheme is based on a piecewise linear polynomial reconstruction in space-time using the MUSCL-Hancock method, to obtain second order of accuracy in both space and time. The new algorithm is first validated on some classical test cases for pure advection problems, for which exact solutions are known. In all cases we obtain a very good level of accuracy, showing also numerical convergence results; we furthermore confirm mass conservation up to machine precision and observe an improved computational efficiency compared to a standard second order TVD scheme for scalar transport with global time stepping (GTS). 
Then, the new LTS method is applied to some more complex problems, where the new scalar transport scheme has also been coupled to
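A reduced illustration of the ingredients named above: a conservative, second-order TVD finite-volume advection scheme with a minmod-limited MUSCL reconstruction, on a 1-D periodic grid with global (not local) time stepping. Grid size, CFL number, and the initial square pulse are arbitrary choices; this is a sketch of the scheme family, not the authors' coupled 2-D LTS algorithm.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: smallest-magnitude slope, zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_advect(q, u, dx, dt, steps):
    """Second-order MUSCL finite-volume advection on a periodic 1-D grid (u > 0)."""
    c = u * dt / dx                                      # Courant number
    for _ in range(steps):
        dq = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)   # limited slopes
        q_face = q + 0.5 * (1.0 - c) * dq                # upwind face value with
        flux = u * q_face                                # half-step correction
        q = q - (dt / dx) * (flux - np.roll(flux, 1))    # conservative update
    return q

nx, u = 200, 1.0
dx = 1.0 / nx
dt = 0.4 * dx / u                        # CFL = 0.4
x = (np.arange(nx) + 0.5) * dx
q0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square pulse
steps = int(round(1.0 / (u * dt)))       # advect for exactly one period
q1 = tvd_advect(q0.copy(), u, dx, dt, steps)
```

Because the update is in flux form, the total mass is conserved to machine precision, and the minmod limiter keeps the total variation from growing, i.e., no spurious oscillations appear at the discontinuities.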

  14. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  15. New insights into time series analysis. I. Correlated observations

    NASA Astrophysics Data System (ADS)

    Ferreira Lopes, C. E.; Cross, N. J. G.

    2016-02-01

Context. The first step when investigating time-varying data is the detection of any reliable changes in star brightness. This step is crucial to decreasing the processing time by reducing the number of sources processed in later, slower steps. Variability indices and their combinations have been used to identify variability patterns and to select non-stochastic variations, but the separation of true variables is hindered because of wavelength-correlated systematics of instrumental and atmospheric origin or due to possible data reduction anomalies. Aims: The main aim is to review the current inventory of correlation variability indices and measure the efficiency for selecting non-stochastic variations in photometric data. Methods: We test new and standard data-mining methods for correlated data using public time-domain data from the WFCAM Science Archive (WSA). This archive contains multi-wavelength calibration data (WFCAMCAL) for 216,722 point sources, with at least ten unflagged epochs in any of five filters (YZJHK), against which the different indices were tested. We improve the panchromatic variability indices and introduce a new set of variability indices for preselecting variable star candidates. Using the WFCAMCAL Variable Star Catalogue (WVSC1) we delimit the efficiency of each variability index. Moreover, we test new insights into these indices to improve the efficiency of detection of time-series data dominated by correlated variations. Results: We propose five new variability indices that display high efficiency for the detection of variable stars. We determine the best way to select variable stars with these indices and the current tool inventory. In addition, we propose a universal analytical expression to select likely variables using the fraction of fluctuations on these indices (ffluc). The ffluc can be used as a universal way to analyse photometric data since it displays only a weak dependency on the instrument properties. 
The variability

  16. Toward an Accurate Prediction of the Arrival Time of Geomagnetic-Effective Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Shi, T.; Wang, Y.; Wan, L.; Cheng, X.; Ding, M.; Zhang, J.

    2015-12-01

    Accurately predicting the arrival of coronal mass ejections (CMEs) to the Earth based on remote images is of critical significance for the study of space weather. Here we make a statistical study of 21 Earth-directed CMEs, specifically exploring the relationship between CME initial speeds and transit times. The initial speed of a CME is obtained by fitting the CME with the Graduated Cylindrical Shell model and is thus free of projection effects. We then use the drag force model to fit results of the transit time versus the initial speed. By adopting different drag regimes, i.e., the viscous, aerodynamics, and hybrid regimes, we get similar results, with a least mean estimation error of the hybrid model of 12.9 hr. CMEs with a propagation angle (the angle between the propagation direction and the Sun-Earth line) larger than their half-angular widths arrive at the Earth with an angular deviation caused by factors other than the radial solar wind drag. The drag force model cannot be reliably applied to such events. If we exclude these events in the sample, the prediction accuracy can be improved, i.e., the estimation error reduces to 6.8 hr. This work suggests that it is viable to predict the arrival time of CMEs to the Earth based on the initial parameters with fairly good accuracy. Thus, it provides a method of forecasting space weather 1-5 days following the occurrence of CMEs.
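The drag-based propagation referred to above can be sketched numerically: integrate dv/dt = -γ(v - w)|v - w| from an assumed starting distance out to 1 au and record the elapsed time. The drag parameter γ, ambient solar-wind speed w, and start radius below are typical illustrative values, not the fitted parameters of this study.

```python
AU_KM = 1.496e8                       # 1 au in km
RSUN_KM = 6.957e5                     # solar radius in km

def transit_time_hr(v0, w=450.0, gamma=2e-7, r0=20.0 * RSUN_KM, dt=60.0):
    """Transit time (hours) from r0 to 1 au under the drag-based model
    dv/dt = -gamma * (v - w) * |v - w|  (v, w in km/s, gamma in km^-1).
    Simple forward-Euler integration with step dt (seconds)."""
    r, v, t = r0, float(v0), 0.0
    while r < AU_KM:
        dv = v - w
        v -= gamma * dv * abs(dv) * dt    # drag toward the solar-wind speed
        r += v * dt
        t += dt
    return t / 3600.0
```

Fast CMEs are decelerated toward w and slow ones accelerated, so transit times converge toward the roughly 3.5-day travel time of the ambient wind, consistent with the 1-5 day forecasting window mentioned in the abstract.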

  17. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing

    NASA Astrophysics Data System (ADS)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2016-08-01

Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture—for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments—as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series—daily Poaceae pollen concentrations over the period 2006-2014—was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
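As a rough stand-in for STL (which fits each component with LOESS rather than moving averages), the sketch below performs a classical additive decomposition: a centred moving-average trend, period-wise seasonal means, and a residual. The synthetic series, period, and edge handling are simplifying assumptions.

```python
import numpy as np

def classical_decompose(y, period):
    """Additive seasonal-trend decomposition via moving averages -- a simplified
    stand-in for STL, which instead fits each component with LOESS."""
    n = len(y)
    if period % 2 == 0:                  # centred moving average for even periods
        w = np.r_[0.5, np.ones(period - 1), 0.5] / period
    else:
        w = np.ones(period) / period
    k = len(w) // 2
    trend = np.convolve(y, w, mode="same")
    trend[:k] = trend[k]                 # crude fill of boundary-affected values
    trend[-k:] = trend[-k - 1]
    detrended = y - trend
    seas = np.array([detrended[i::period].mean() for i in range(period)])
    seas -= seas.mean()                  # seasonal component sums to zero
    seasonal = np.resize(seas, n)        # tile one seasonal cycle over the series
    return trend, seasonal, y - trend - seasonal

# synthetic series: linear trend + seasonal cycle + noise
rng = np.random.default_rng(2)
t = np.arange(240)
seas_true = np.sin(2 * np.pi * t / 12)
y = 0.05 * t + seas_true + rng.normal(0.0, 0.1, len(t))
trend, seasonal, resid = classical_decompose(y, 12)
```

The residual component, once trend and seasonality are stripped out, is the part that would be modelled against meteorological predictors, as the abstract describes doing with PLSR.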

  18. Analyzing a stochastic time series obeying a second-order differential equation

    NASA Astrophysics Data System (ADS)

    Lehle, B.; Peinke, J.

    2015-06-01

The stochastic properties of a Langevin-type Markov process can be extracted from a given time series by a Markov analysis. Processes that obey a stochastically forced second-order differential equation can also be analyzed this way by employing a particular embedding approach: To obtain a Markovian process in 2N dimensions from a non-Markovian signal in N dimensions, the system is described in a phase space that is extended by the temporal derivative of the signal. For a discrete time series, however, this derivative can only be calculated by a differencing scheme, which introduces an error. If the effects of this error are not accounted for, this leads to systematic errors in the estimation of the drift and diffusion functions of the process. In this paper we analyze these errors and propose an approach that correctly accounts for them. This approach allows an accurate parameter estimation and, additionally, is able to cope with weak measurement noise, which may be superimposed to a given time series.
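The Markov analysis mentioned above estimates drift and diffusion functions from conditional moments of the increments (the first two Kramers-Moyal coefficients). A minimal sketch for a first-order Langevin process is shown below; it uses the naive differencing estimator and does not implement the paper's correction for differencing errors, and the process parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000

# simulate an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]

# binned conditional moments -> drift D1(x) and diffusion D2(x)
dx = np.diff(x)
bins = np.linspace(-1.0, 1.0, 21)
idx = np.digitize(x[:-1], bins)
D1 = np.full(len(bins) - 1, np.nan)
D2 = np.full(len(bins) - 1, np.nan)
for b in range(1, len(bins)):
    m = idx == b
    if m.sum() > 100:                       # skip sparsely populated bins
        D1[b - 1] = dx[m].mean() / dt       # drift estimate
        D2[b - 1] = (dx[m] ** 2).mean() / (2 * dt)   # diffusion estimate
centers = 0.5 * (bins[:-1] + bins[1:])
```

For this process the estimated drift should be approximately linear with slope -theta and the diffusion approximately constant at sigma^2/2; the small-but-systematic finite-dt biases of these naive estimators are exactly the kind of error the paper sets out to correct.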

  19. Analyzing a stochastic time series obeying a second-order differential equation.

    PubMed

    Lehle, B; Peinke, J

    2015-06-01

The stochastic properties of a Langevin-type Markov process can be extracted from a given time series by a Markov analysis. Processes that obey a stochastically forced second-order differential equation can also be analyzed this way by employing a particular embedding approach: To obtain a Markovian process in 2N dimensions from a non-Markovian signal in N dimensions, the system is described in a phase space that is extended by the temporal derivative of the signal. For a discrete time series, however, this derivative can only be calculated by a differencing scheme, which introduces an error. If the effects of this error are not accounted for, this leads to systematic errors in the estimation of the drift and diffusion functions of the process. In this paper we analyze these errors and propose an approach that correctly accounts for them. This approach allows an accurate parameter estimation and, additionally, is able to cope with weak measurement noise, which may be superimposed to a given time series. PMID:26172667

  20. Interglacial climate dynamics and advanced time series analysis

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bermejo, Miguel; Köhler, Peter; Lohmann, Gerrit

    2013-04-01

Studying the climate dynamics of past interglacials (IGs) helps to better assess the anthropogenically influenced dynamics of the current IG, the Holocene. We select the IG portions from the EPICA Dome C ice core archive, which covers the past 800 ka, to apply methods of statistical time series analysis (Mudelsee 2010). The analysed variables are deuterium/H (indicating temperature) (Jouzel et al. 2007), greenhouse gases (Siegenthaler et al. 2005, Loulergue et al. 2008, Lüthi et al. 2008) and a model-co-derived climate radiative forcing (Köhler et al. 2010). We select additionally high-resolution sea-surface-temperature records from the marine sedimentary archive. The first statistical method, persistence time estimation (Mudelsee 2002), lets us infer the 'climate memory' property of IGs. Second, linear regression informs about long-term climate trends during IGs. Third, ramp function regression (Mudelsee 2000) is adapted to look at abrupt climate changes during IGs. We compare the Holocene with previous IGs in terms of these mathematical approaches, interpret results in a climate context, assess uncertainties and the requirements on data from old IGs for yielding results of 'acceptable' accuracy. This work receives financial support from the Deutsche Forschungsgemeinschaft (Project ClimSens within the DFG Research Priority Program INTERDYNAMIK) and the European Commission (Marie Curie Initial Training Network LINC, No. 289447, within the 7th Framework Programme). References Jouzel J, Masson-Delmotte V, Cattani O, Dreyfus G, Falourd S, Hoffmann G, Minster B, Nouet J, Barnola JM, Chappellaz J, Fischer H, Gallet JC, Johnsen S, Leuenberger M, Loulergue L, Luethi D, Oerter H, Parrenin F, Raisbeck G, Raynaud D, Schilt A, Schwander J, Selmo E, Souchez R, Spahni R, Stauffer B, Steffensen JP, Stenni B, Stocker TF, Tison JL, Werner M, Wolff EW (2007) Orbital and millennial Antarctic climate variability over the past 800,000 years. Science 317:793. Köhler P, Bintanja R
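Persistence time estimation in the AR(1) sense can be sketched simply: for a process with lag-1 autocorrelation a sampled at spacing Δt, the persistence time is τ = -Δt / ln(a). The process parameters below are synthetic, not values from the ice-core records.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, tau_true, n = 1.0, 10.0, 50_000
a = np.exp(-dt / tau_true)             # AR(1) coefficient for persistence time tau

x = np.empty(n)
x[0] = 0.0
eps = rng.normal(size=n)
for i in range(1, n):                  # simulate the AR(1) 'climate memory' process
    x[i] = a * x[i - 1] + eps[i]

r1 = np.corrcoef(x[:-1], x[1:])[0, 1]  # lag-1 autocorrelation
tau_hat = -dt / np.log(r1)             # estimated persistence time
```

For unevenly spaced records such as ice-core series, the estimator must account for the variable spacing (as the cited persistence-time method does); the sketch above assumes even sampling for clarity.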

  1. Time series photometry of faint cataclysmic variables with a CCD

    NASA Astrophysics Data System (ADS)

    Abbott, Timothy Mark Cameron

    1992-08-01

I describe a new hardware and software environment for the practice of time-series stellar photometry with the CCD systems available at McDonald Observatory. This instrument runs suitable CCDs in frame transfer mode and permits windowing on the CCD image to maximize the duty cycle of the photometer. Light curves may be extracted and analyzed in real time at the telescope and image data are stored for later, more thorough analysis. I describe a star tracking algorithm, which is optimized for a time series of images of the same stellar field. I explore the extraction of stellar brightness measures from these images using circular software apertures and develop a complete description of the noise properties of this technique. I show that scintillation and pixelization noise have a significant effect on high quality observations. I demonstrate that optimal sampling and profile fitting techniques are unnecessarily complex or detrimental methods of obtaining stellar brightness measures under conditions commonly encountered in time-series CCD photometry. I compare CCDs and photomultiplier tubes as detectors for time-series photometry using light curves of a variety of stars obtained simultaneously with both detectors and under equivalent conditions. A CCD can produce useful data under conditions when a photomultiplier tube cannot, and a CCD will often produce more reliable results even under photometric conditions. I present studies of the cataclysmic variables (CV's) AL Com, CP Eri, V Per, and DO Leo made using the time series CCD photometer. AL Com is a very faint CV at high Galactic latitude and a bona fide Population II CV. Some of the properties of AL Com are similar to the dwarf nova WZ Sge and others are similar to the intermediate polar EX Hya, but overall AL Com is unlike any other well-studied cataclysmic variable. CP Eri is shown to be the fifth known interacting binary white dwarf. V Per was the first CV found to have an orbital period near the middle of the

  2. Time-series growth in the female labor force.

    PubMed

    Smith, J P; Ward, M P

    1985-01-01

    This paper investigates the reasons for the growth in the female labor force in the US during the 20th century. Female labor force participation rates increased by 50% from 1950 to 1970. Real wages have played a significant but hardly exclusive role both in the long term growth in female employment and in the more accelerated growth after 1950. At the beginning of this century, fewer than 1 woman in 5 was a member of the labor force; by 1981 more than 6 in 10 were. Increases in female participation were slightly larger among younger women during the 1970s; for the next 20 years the age shape tilted toward older women. For US women 25-34 years old, labor force participation rates have been rising by more than 2 percentage points per year. Closely intertwined with decisions regarding women's work are those involving marriage and family formation. 2 demographic factors that would play a part in subsequent developments are: nuclearization of the US family and urbanization. Time-series trends in education are observed because schooling affects female labor supply independently of any influence through wages; increased years of schooling across birth cohorts shows that an increase of 1.33 years of schooling increased labor participation by 6.9 percentage points during the pre-World War II era. The swing in marriage rates also affects timing, especially for younger women. Based on disaggregated time series data across the period 1950-1981, mean values at single years of age of labor supply, education, work experience, weekly wages, and fertility are determined. Profiles indicate that female labor supply varies considerably not only across cohorts but also over life cycles within birth cohorts. 
Results show that: 1) relative female wages defined over the work force were lower in 1980 than in 1950, 2) children, especially when young, reduce labor supply, 3) large negative elasticities are linked to female wages, and 4) with all fertility induced effects included, real wage

  3. A multiscale approach to InSAR time series analysis

    NASA Astrophysics Data System (ADS)

    Simons, M.; Hetland, E. A.; Muse, P.; Lin, Y. N.; Dicaprio, C.; Rickerby, A.

    2008-12-01

We describe a new technique to constrain time-dependent deformation from repeated satellite-based InSAR observations of a given region. This approach, which we call MInTS (Multiscale analysis of InSAR Time Series), relies on a spatial wavelet decomposition to permit the inclusion of distance-based spatial correlations in the observations while maintaining computational tractability. This approach also permits a consistent treatment of all data independent of the presence of localized holes in any given interferogram. In essence, MInTS allows one to consider all data at the same time (as opposed to one pixel at a time), thereby taking advantage of both spatial and temporal characteristics of the deformation field. In terms of the temporal representation, we have the flexibility to explicitly parametrize known processes that are expected to contribute to a given set of observations (e.g., co-seismic steps and post-seismic transients, secular variations, seasonal oscillations, etc.). Our approach also allows the temporal parametrization to include a set of general functions (e.g., splines) in order to account for unexpected processes. We allow for various forms of model regularization using a cross-validation approach to select penalty parameters. The multiscale analysis allows us to consider various contributions (e.g., orbit errors) that may affect specific scales but not others. The methods described here are all embarrassingly parallel and suitable for implementation on a cluster computer. We demonstrate the use of MInTS using a large suite of ERS-1/2 and Envisat interferograms for Long Valley Caldera, and validate our results by comparing with ground-based observations.

  4. Nonparametric directionality measures for time series and point process data.

    PubMed

    Halliday, David M

    2015-06-01

    The need to determine the directionality of interactions between neural signals is a key requirement for analysis of multichannel recordings. Approaches most commonly used are parametric, typically relying on autoregressive models. A number of concerns have been expressed regarding parametric approaches, thus there is a need to consider alternatives. We present an alternative nonparametric approach for construction of directionality measures for bivariate random processes. The method combines time and frequency domain representations of bivariate data to decompose the correlation by direction. Our framework generates two sets of complementary measures, a set of scalar measures, which decompose the total product moment correlation coefficient summatively into three terms by direction and a set of functions which decompose the coherence summatively at each frequency into three terms by direction: forward direction, reverse direction and instantaneous interaction. It can be undertaken as an addition to a standard bivariate spectral and coherence analysis, and applied to either time series or point-process (spike train) data or mixtures of the two (hybrid data). In this paper, we demonstrate application to spike train data using simulated cortical neurone networks and application to experimental data from isolated muscle spindle sensory endings subject to random efferent stimulation. PMID:25958923

  5. Echoed time series predictions, neural networks and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Conway, A.

    This work aims to illustrate a potentially serious and previously unrecognised problem in using Neural Networks (NNs), and possibly other techniques, to predict Time Series (TS). It also demonstrates how a new training scheme using a genetic algorithm can alleviate this problem. Although it is already established that NNs can predict TS such as Sunspot Number (SSN) with reasonable success, the accuracy of these predictions is often judged solely by an RMS or related error. The use of this type of error overlooks the presence of what we have termed echoing, where the NN outputs its most recent input as its prediction. Therefore, a method of detecting echoed predictions is introduced, called time-shifting. Reasons for the presence of echo are discussed and then related to the choice of TS sampling. Finally, a new specially designed training scheme is described, which is a hybrid of a genetic algorithm search and back propagation. With this method we have successfully trained NNs to predict without any echo.
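The time-shifting test described above can be sketched directly: compare the RMS error of a prediction against the target with the RMS error obtained after shifting the prediction back by one step. A pure persistence "predictor" (which simply echoes its most recent input) scores near zero on the shifted comparison, revealing the echo. The function name and the sine-wave series below are illustrative assumptions.

```python
import numpy as np

def echo_score(y_true, y_pred):
    """Time-shift test for echoing: if the prediction matches the PREVIOUS
    target value far better than the current one, the model is likely just
    repeating its most recent input rather than genuinely predicting."""
    rms = lambda e: np.sqrt(np.mean(e ** 2))
    return rms(y_pred - y_true), rms(y_pred[1:] - y_true[:-1])

t = np.arange(500)
series = np.sin(2 * np.pi * t / 37)
target = series[1:]                    # one-step-ahead prediction task
persistence = series[:-1]              # 'echo' predictor: repeat the last value
rms_plain, rms_shifted = echo_score(target, persistence)
```

The plain RMS error of the persistence predictor looks respectably small on a slowly varying series, which is exactly why an RMS-only evaluation can miss echoing; the shifted RMS exposes it immediately.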

  6. Automatic CCD Imaging Systems for Time-series CCD Photometry

    NASA Astrophysics Data System (ADS)

    Caton, D. B.; Pollock, J. T.; Davis, S. A.

    2004-12-01

CCDs allow precision photometry to be done with small telescopes and at sites with less than ideal seeing conditions. The addition of an automatic observing mode makes it easy to do time-series CCD photometry of variable stars and AGN/QSOs. At Appalachian State University's Dark Sky Observatory (DSO), we have implemented automatic imaging systems for image acquisition, scripted filter changing, data storage, and quick-look online photometry on two different telescopes: the 32-inch and the 18-inch. The camera at the 18-inch allows a simple system where the data acquisition PC controls a DFM Engineering filter wheel and Photometrics/Roper camera. The 32-inch system is the more complex, with three computers communicating in order to make good use of its camera's 30-second CCD-read time for filter change. Both telescopes use macros written in the PMIS software (GKR Computer Consulting). Both systems allow automatic data capture with only occasional tending by the observer. Indeed, one observer can easily run both telescopes simultaneously. The efficiency and reliability of these systems also reduces observer errors. The only unresolved problem is an occasional but rare camera-read error (the PC is apparently interrupted). We also sometimes experience a crash of the PMIS software, probably due to its 16-bit code now running in the Windows 2000 32-bit environment. We gratefully acknowledge the support of the National Science Foundation through grant numbers AST-0089248 and AST-9119750, the Dunham Fund for Astrophysical Research, and the ASU Research Council.

  7. Time-series models for border inspection data.

    PubMed

    Decrouez, Geoffrey; Robinson, Andrew

    2013-12-01

    We propose a new modeling approach for inspection data that provides a more useful interpretation of the patterns of detections of invasive pests, using cargo inspection as a motivating example. Methods that are currently in use generally classify shipments according to their likelihood of carrying biosecurity risk material, given available historical and contextual data. Ideally, decisions regarding which cargo containers to inspect should be made in real time, and the models used should be able to focus efforts when the risk is higher. In this study, we propose a dynamic approach that treats the data as a time series in order to detect periods of high risk. A regulatory organization will respond differently to evidence of systematic problems than evidence of random problems, so testing for serial correlation is of major interest. We compare three models that account for various degrees of serial dependence within the data. First is the independence model where the prediction of the arrival of a risky shipment is made solely on the basis of contextual information. We also consider a Markov chain that allows dependence between successive observations, and a hidden Markov model that allows further dependence on past data. The predictive performance of the models is then evaluated using ROC and leakage curves. We illustrate this methodology on two sets of real inspection data. PMID:23682814
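The Markov-chain alternative to the independence model can be sketched by fitting maximum-likelihood transition probabilities to a binary risky/not-risky sequence: serial dependence shows up as a self-transition probability for risky shipments well above the marginal risk rate. The transition matrix and sequence below are simulated, not inspection data.

```python
import numpy as np

def fit_markov(seq):
    """Maximum-likelihood transition matrix for a binary 0/1 sequence."""
    P = np.zeros((2, 2))
    for a, b in zip(seq[:-1], seq[1:]):
        P[a, b] += 1                       # count observed transitions
    return P / P.sum(axis=1, keepdims=True)

rng = np.random.default_rng(4)
# simulate a persistent 'risk' process: risky shipments cluster in time
P_true = np.array([[0.95, 0.05],
                   [0.30, 0.70]])
seq = np.empty(20_000, dtype=int)
seq[0] = 0
for i in range(1, len(seq)):
    seq[i] = rng.random() < P_true[seq[i - 1], 1]
P_hat = fit_markov(seq)
```

Here P_hat[1, 1] is far above the marginal risk rate seq.mean(), which is the signature of a systematic (clustered) problem rather than random arrivals; under the independence model the two would coincide.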

  8. Spectrophotometric Time Series of η Carinae's Great Eruption

    NASA Astrophysics Data System (ADS)

    Rest, Armin; Bianco, Federica; Chornock, Ryan; Clocchiatti, Alejandro; James, David; Margheim, Steve; Matheson, Thomas; Prieto, Jose Luis; Smith, Chris; Smith, Nathan; Walborn, Nolan; Welch, Doug; Zenteno, Alfredo

    2014-08-01

    η Car serves as our most important template for understanding non-SN transients from massive stars in external galaxies. However, until recently, no spectra were available because its historic ``Great Eruption'' (GE) occurred from 1838-1858, before the invention of the astronomical spectrograph, and only visual estimates of its brightness were recorded. Now we can also obtain a spectral sequence of the eruption through the light echoes we discovered, which will be of great value since spectra are our most important tool for inferring physical properties of extragalactic transients. Subsequent spectroscopic follow-up revealed that its outburst was most similar to those of G-type supergiants, rather than the reported LBV outburst spectral types of F-type (or earlier). These differences between the GE and the extragalactic transients presumed to be its analogues raise questions about traditional scenarios for the outburst. We propose to obtain a spectrophotometric time series of the GE from different directions, allowing the original eruption of η Car to be studied as a function of time as well as latitude, something only possible with light echoes. This unique detailed spectroscopic study of the light echoes of η Car will help us understand (episodic) mass-loss in the most massive evolved stars and their connection to the most energetic core-collapse SNe.

  9. Coastal Atmosphere and Sea Time Series (CoASTS)

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Zibordi, Giuseppe; Berthon, Jean-Francoise; Doyle, John P.; Grossi, Stefania; vanderLinde, Dirk; Targa, Cristina; Alberotanza, Luigi; McClain, Charles R. (Technical Monitor)

    2002-01-01

    The Coastal Atmosphere and Sea Time Series (CoASTS) Project, aimed at supporting ocean color research and applications, has ensured the collection of a comprehensive atmospheric and marine data set from an oceanographic tower located in the northern Adriatic Sea, from 1995 up to the time of publication of this document. The instruments and the measurement methodologies used to gather quantities relevant for bio-optical modeling and for the calibration and validation of ocean color sensors are described. Particular emphasis is placed on four items: (1) the evaluation of perturbation effects in radiometric data (i.e., tower-shading, instrument self-shading, and bottom effects); (2) the intercomparison of seawater absorption coefficients from in situ measurements and from laboratory spectrometric analysis of discrete samples; (3) the intercomparison of two filter techniques for in vivo measurement of particulate absorption coefficients; and (4) the analysis of repeatability and reproducibility of the most relevant laboratory measurements carried out on seawater samples (i.e., particulate and yellow substance absorption coefficients, and pigment and total suspended matter concentrations). Sample data are also presented and discussed to illustrate the typical features characterizing the CoASTS measurement site, in view of supporting the suitability of the CoASTS data set for bio-optical modeling and ocean color calibration and validation.

  10. Urban Area Monitoring using MODIS Time Series Data

    NASA Astrophysics Data System (ADS)

    Devadiga, S.; Sarkar, S.; Mauoka, E.

    2015-12-01

    Growing urban sprawl and its impact on global climate through urban heat island effects has been an active area of research in recent years. This is especially significant in light of the rapid urbanization happening in some of the fast-developing nations across the globe. So far, however, the study of urban area growth has been largely restricted to local and regional scales, using high- to medium-resolution satellite observations taken at distinct time periods. In this presentation we propose a new approach to detect and monitor urban area expansion using a long time series of MODIS data. This work characterizes data points using a vector of several annual metrics computed from the MODIS 8-day and 16-day composite L3 data products, at 250 m resolution and over several years, and then uses a vector angle mapping classifier to detect and segment the urban area. The classifier is trained using a set of training points obtained from a reference vector point-and-polygon layer pre-filtered using the MODIS VI product. This work gains additional significance given that, despite unprecedented urban growth since 2000, the area covered by the urban class in the MODIS Global Land Cover (MCD12Q1, MCDLCHKM and MCDLC1KM) product hasn't changed since the launch of Terra and Aqua. The proposed approach was applied to delineate the urban area around several cities in Asia known to have experienced maximum growth in the last 15 years. Results were verified using high resolution Landsat data.
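    The vector-angle classification step can be sketched as follows: each pixel's vector of annual metrics is compared with class reference vectors, and the pixel takes the class with the smallest angle, provided it falls under a threshold. The metric values, class names, and threshold below are illustrative assumptions, not MODIS data or the authors' parameters.

    ```python
    # Minimal vector-angle (spectral-angle) classifier over annual metrics.
    import math

    def vector_angle(u, v):
        """Angle in radians between two metric vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return math.acos(max(-1.0, min(1.0, dot / norm)))

    def classify(pixel, references, max_angle=0.2):
        """Assign the class whose reference vector makes the smallest angle."""
        best = min(references, key=lambda name: vector_angle(pixel, references[name]))
        if vector_angle(pixel, references[best]) <= max_angle:
            return best
        return "unclassified"

    # Illustrative annual-metric vectors (e.g. seasonal NDVI summaries):
    # urban pixels stay low and flat, forest shows strong seasonality.
    references = {"urban": [0.20, 0.25, 0.20], "forest": [0.30, 0.80, 0.40]}
    print(classify([0.22, 0.24, 0.21], references))  # urban
    ```

    Because the angle is scale-invariant, the classes must differ in the *shape* of their metric vectors, which is why multi-seasonal metrics matter here.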

  11. Impact of Sensor Degradation on the MODIS NDVI Time Series

    NASA Technical Reports Server (NTRS)

    Wang, Dongdong; Morton, Douglas; Masek, Jeffrey; Wu, Aisheng; Nagol, Jyoteshwar; Xiong, Xiaoxiong; Levy, Robert; Vermote, Eric; Wolfe, Robert

    2011-01-01

    Time series of satellite data provide unparalleled information on the response of vegetation to climate variability. Detecting subtle changes in vegetation over time requires consistent satellite-based measurements. Here, we evaluated the impact of sensor degradation on trend detection using Collection 5 data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua platforms. For Terra MODIS, the impact of blue band (Band 3, 470 nm) degradation on simulated surface reflectance was most pronounced at near-nadir view angles, leading to a 0.001-0.004/yr decline in Normalized Difference Vegetation Index (NDVI) under a range of simulated aerosol conditions and surface types. Observed trends in MODIS NDVI over North America were consistent with simulated results, with nearly a threefold difference in negative NDVI trends derived from Terra (17.4%) and Aqua (6.7%) MODIS sensors during 2002-2010. Planned adjustments to Terra MODIS calibration for Collection 6 data reprocessing will largely eliminate this negative bias in NDVI trends over vegetation.
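    The magnitude of the bias described above can be made concrete with a small sketch: compute NDVI from red and near-infrared reflectance, impose a drift of the reported size, and recover it as a least-squares slope. The reflectance values are illustrative, not MODIS measurements.

    ```python
    # NDVI and a least-squares trend per year, illustrating how a
    # 0.001-0.004 NDVI/yr calibration drift would appear in a time series.
    def ndvi(red, nir):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    def trend_per_year(years, values):
        """Ordinary least-squares slope of values against years."""
        n = len(years)
        my = sum(years) / n
        mv = sum(values) / n
        num = sum((y - my) * (v - mv) for y, v in zip(years, values))
        den = sum((y - my) ** 2 for y in years)
        return num / den

    years = list(range(2002, 2011))
    # A stable vegetated surface with an imposed -0.002/yr drift.
    series = [ndvi(0.05, 0.40) - 0.002 * (y - 2002) for y in years]
    slope = trend_per_year(years, series)
    print(round(slope, 4))  # -0.002
    ```

    A drift this small is invisible in any single image but accumulates into spurious "browning" trends over a decade, which is the paper's point.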

  12. Impact of Sensor Degradation on the MODIS NDVI Time Series

    NASA Technical Reports Server (NTRS)

    Wang, Dongdong; Morton, Douglas Christopher; Masek, Jeffrey; Wu, Aisheng; Nagol, Jyoteshwar; Xiong, Xiaoxiong; Levy, Robert; Vermote, Eric; Wolfe, Robert

    2012-01-01

    Time series of satellite data provide unparalleled information on the response of vegetation to climate variability. Detecting subtle changes in vegetation over time requires consistent satellite-based measurements. Here, the impact of sensor degradation on trend detection was evaluated using Collection 5 data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua platforms. For Terra MODIS, the impact of blue band (Band 3, 470 nm) degradation on simulated surface reflectance was most pronounced at near-nadir view angles, leading to a 0.001-0.004 yr-1 decline in Normalized Difference Vegetation Index (NDVI) under a range of simulated aerosol conditions and surface types. Observed trends in MODIS NDVI over North America were consistent with simulated results, with nearly a threefold difference in negative NDVI trends derived from Terra (17.4%) and Aqua (6.7%) MODIS sensors during 2002-2010. Planned adjustments to Terra MODIS calibration for Collection 6 data reprocessing will largely eliminate this negative bias in detection of NDVI trends.

  13. Providing community-based health practitioners with timely and accurate discharge medicines information

    PubMed Central

    2012-01-01

    Background Accurate and timely medication information at the point of discharge is essential for continuity of care. There are scarce data on the clinical significance when poor quality medicines information is passed to the next episode of care. This study aimed to compare the number and clinical significance of medication errors and omissions in discharge medicines information, and the timeliness of delivery of this information to community-based health practitioners, between the existing Hospital Discharge Summary (HDS) and a pharmacist-prepared Medicines Information Transfer Fax (MITF). Method The study used a sample of 80 hospital patients who were at high risk of medication misadventure, and who had a MITF completed in the study period June - October 2009 at a tertiary referral hospital. The medicines information in participating patients' MITFs was validated against their Discharge Prescriptions (DP). Medicines information in each patient's HDS was then compared with their validated MITF. An expert clinical panel reviewed identified medication errors and omissions to determine their clinical significance. The time between patient discharge and the dispatching of the MITF and the HDS to each patient's community-based practitioners was calculated from hospital records. Results DPs for 77 of the 80 patients were available for comparison with their MITFs. Medicines information in 71 (92%) of the MITFs matched that of the DP. Comparison of the HDS against the MITF revealed that no HDS was prepared for 16 (21%) patients. Of the remaining 61 patients, 33 (54%) had required medications omitted and 38 (62%) had medication errors in their HDS. The Clinical Panel rated the significance of errors or omissions for 70 patients (16 with no HDS prepared and 54 whose HDS was inconsistent with the validated MITF). In 17 patients the error or omission was rated as insignificant to minor; 23 minor to moderate; 24 moderate to major; and 6 major to catastrophic. 28 (35

  14. TEMPORAL SIGNATURES OF AIR QUALITY OBSERVATIONS AND MODEL OUTPUTS: DO TIME SERIES DECOMPOSITION METHODS CAPTURE RELEVANT TIME SCALES?

    EPA Science Inventory

    Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereb...
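    The decomposition idea named in this (truncated) abstract, expressing a series as a sum of modes, can be sketched with a simple additive decomposition: a centered moving average extracts the trend, phase means extract the seasonal mode, and the remainder is residual. The period and synthetic data below are illustrative assumptions, not the EPA study's method.

    ```python
    # Additive decomposition: series ≈ trend + seasonal + residual.
    def decompose(series, period):
        """Split a series into trend, per-phase seasonal means, and residuals."""
        assert period % 2 == 1, "odd period keeps the moving average centered"
        half = period // 2
        n = len(series)
        trend = [None] * n
        for i in range(half, n - half):
            trend[i] = sum(series[i - half:i + half + 1]) / period
        phase_vals = [[] for _ in range(period)]
        for i in range(n):
            if trend[i] is not None:
                phase_vals[i % period].append(series[i] - trend[i])
        seasonal = [sum(v) / len(v) for v in phase_vals]
        resid = [series[i] - trend[i] - seasonal[i % period]
                 for i in range(n) if trend[i] is not None]
        return trend, seasonal, resid

    # Synthetic series: linear trend plus an exact period-5 cycle.
    series = [0.1 * i + [0, 1, 2, 1, 0][i % 5] for i in range(25)]
    trend, seasonal, resid = decompose(series, 5)
    print(max(abs(r) for r in resid) < 1e-9)  # True: modes recover the construction
    ```

    On real air quality data the residual would carry the short-time-scale variability whose capture the abstract questions.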

  15. Time-of-flight accurate mass spectrometry identification of quinoline alkaloids in honey.

    PubMed

    Rodríguez-Cabo, Tamara; Moniruzzaman, Mohammed; Rodríguez, Isaac; Ramil, María; Cela, Rafael; Gan, Siew Hua

    2015-08-01

    Time-of-flight accurate mass spectrometry (TOF-MS), following a previous chromatographic (gas or liquid chromatography) separation step, is applied to the identification and structural elucidation of quinoline-like alkaloids in honey. Both electron ionization (EI) MS and positive electrospray (ESI+) MS spectra afforded the molecular ions (M(.+) and M+H(+), respectively) of target compounds with mass errors below 5 mDa. Scan EI-MS and product ion scan ESI-MS/MS spectra permitted confirmation of the existence of a quinoline ring in the structures of the candidate compounds. Also, the observed fragmentation patterns were useful to discriminate between quinoline derivatives having the same empirical formula but different functionalities, such as aldoximes and amides. In the particular case of phenylquinolines, ESI-MS/MS spectra provided valuable clues regarding the position of the phenyl moiety attached to the quinoline ring. The aforementioned spectral information, combined with retention time matching, led to the identification of quinoline and five quinoline derivatives, substituted at carbon number 4, in honey samples. An isomer of phenylquinoline was also noticed; however, its exact structure could not be established. Liquid-liquid microextraction and gas chromatography (GC) TOF-MS were applied to the screening of the aforementioned compounds in a total of 62 honeys. Species displaying higher occurrence frequencies were 4-quinolinecarbonitrile, 4-quinolinecarboxaldehyde, 4-quinolinealdoxime, and the phenylquinoline isomer. The Pearson test revealed strong correlations among the first three compounds. PMID:26041455
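    The "<5 mDa" acceptance criterion above amounts to comparing a measured m/z against the theoretical monoisotopic mass of a candidate formula. A sketch, using standard monoisotopic masses; the measured value below is illustrative, not data from the study.

    ```python
    # Mass-error check for a candidate formula, as in the <5 mDa criterion.
    MASSES = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}
    PROTON = 1.007276  # mass of H+ for [M+H]+ ions

    def mono_mass(formula):
        """Monoisotopic mass from a {element: count} dict."""
        return sum(MASSES[el] * n for el, n in formula.items())

    # Quinoline, C9H7N: theoretical [M+H]+ vs an illustrative measured m/z.
    quinoline = {"C": 9, "H": 7, "N": 1}
    mh = mono_mass(quinoline) + PROTON
    measured = 130.0657
    error_mda = abs(measured - mh) * 1000
    print(error_mda < 5)  # True: within the acceptance window
    ```

    Isomers with the same empirical formula give identical errors here, which is why the abstract leans on fragmentation patterns to tell aldoximes from amides.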

  16. Accurate estimation of airborne ultrasonic time-of-flight for overlapping echoes.

    PubMed

    Sarabia, Esther G; Llata, Jose R; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P

    2013-01-01

    In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774

  17. Accurate Estimation of Airborne Ultrasonic Time-of-Flight for Overlapping Echoes

    PubMed Central

    Sarabia, Esther G.; Llata, Jose R.; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P.

    2013-01-01

    In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774
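    The "correlation" baseline both versions of this abstract compare against can be sketched directly: the time of flight is the lag that maximizes the cross-correlation between the emitted pulse and the received echo. The signals, sampling rate, and pulse shape below are illustrative assumptions.

    ```python
    # Correlation-based time-of-flight: find the lag that best aligns
    # the emitted pulse with the received echo.
    import math

    def cross_correlation_lag(pulse, echo):
        """Return the lag (in samples) maximizing the cross-correlation."""
        best_lag, best_val = 0, float("-inf")
        for lag in range(len(echo) - len(pulse) + 1):
            val = sum(p * echo[lag + i] for i, p in enumerate(pulse))
            if val > best_val:
                best_lag, best_val = lag, val
        return best_lag

    fs = 100_000  # sampling rate, Hz (assumed)
    # A 40 kHz burst, typical of airborne piezoelectric sensors.
    pulse = [math.sin(2 * math.pi * 40_000 * t / fs) for t in range(50)]
    # Attenuated echo arriving 300 samples (3 ms) after emission.
    echo = [0.0] * 300 + [0.5 * s for s in pulse] + [0.0] * 150
    lag = cross_correlation_lag(pulse, echo)
    print(lag / fs)  # 0.003 s time of flight
    ```

    This works well for a single clean echo; the paper's contribution is precisely the harder case, where several overlapping echoes blur the single correlation peak into a composite shape that must be decomposed with the sensor-response model.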

  18. Accurate mass tag retention time database for urine proteome analysis by chromatography--mass spectrometry.

    PubMed

    Agron, I A; Avtonomov, D M; Kononikhin, A S; Popov, I A; Moshkovskii, S A; Nikolaev, E N

    2010-05-01

    Information about peptides and proteins in urine can be used to search for biomarkers of early stages of various diseases. The main technology currently used for identification of peptides and proteins is tandem mass spectrometry, in which peptides are identified by the mass spectra of their fragmentation products. However, the fragmentation stage decreases the sensitivity of the analysis and increases its duration. We have developed a method for identification of human urinary proteins and peptides. This method, based on the accurate mass and time tag (AMT) approach, does not use tandem mass spectrometry. A database containing more than 1381 peptide AMT tags has been constructed. Software for filling the database with AMT tags, normalizing the chromatograms, and applying the database to the identification of proteins and peptides and their quantitative estimation has been developed. New procedures for peptide identification using tandem mass spectra and the AMT tag database are proposed. The paper also lists novel proteins that have been identified in human urine for the first time. PMID:20632944
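    The AMT lookup that replaces the fragmentation stage can be sketched as a two-dimensional match: an observed (mass, normalized retention time) pair identifies a peptide if it falls within a mass tolerance and an RT window of a database tag. The tag entries and tolerances below are illustrative, not values from the paper.

    ```python
    # AMT-style identification: match an observation against a tag database.
    def match_amt(mass, rt, tags, ppm_tol=10.0, rt_tol=0.5):
        """Return names of tags consistent with the observed (mass, RT) pair."""
        hits = []
        for name, (tag_mass, tag_rt) in tags.items():
            ppm = abs(mass - tag_mass) / tag_mass * 1e6
            if ppm <= ppm_tol and abs(rt - tag_rt) <= rt_tol:
                hits.append(name)
        return hits

    tags = {
        "PEPTIDE_A": (1296.6848, 24.3),  # (monoisotopic mass, normalized RT, min)
        "PEPTIDE_B": (1296.7012, 31.9),
    }
    print(match_amt(1296.6851, 24.1, tags))  # ['PEPTIDE_A']
    ```

    The two nearly isobaric tags illustrate why the retention-time dimension (and careful chromatogram normalization) is essential: mass alone cannot separate them at looser tolerances.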

  19. Explicit off-line criteria for stable accurate time filtering of strongly unstable spatially extended systems.

    PubMed

    Majda, Andrew J; Grote, Marcus J

    2007-01-23

    Many contemporary problems in science involve making predictions based on partial observation of extremely complicated spatially extended systems with many degrees of freedom and physical instabilities on both large and small scales. Various new ensemble filtering strategies have been developed recently for these applications, and new mathematical issues arise. Here, explicit off-line test criteria for stable accurate discrete filtering are developed for use in the above context; they mimic the classical stability analysis for finite difference schemes. First, constant coefficient partial differential equations, which are randomly forced and damped to mimic mesh-scale energy spectra in the above problems, are developed as off-line filtering test problems. Then mathematical analysis is used to show that, under suitable natural hypotheses, the time filtering algorithms for general finite difference discrete approximations to an s x s partial differential equation system with suitable observations decompose into much simpler independent s-dimensional filtering problems for each spatial wave number separately; in other test problems, such block diagonal models rigorously provide upper and lower bounds on the filtering algorithm. In this fashion, elementary off-line filtering criteria can be developed for complex spatially extended systems. The theory is illustrated for time filters by using both unstable and implicit difference scheme approximations to the stochastically forced heat equation, where the combined effects of filter stability and model error are analyzed through the simpler off-line criteria. PMID:17227864
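    The decomposition by spatial wave number means each mode reduces to a low-dimensional filtering problem. For a single scalar mode, that is an ordinary Kalman filter; the sketch below uses an assumed damped, stochastically forced mode with noisy direct observations (illustrative parameters, not the paper's test problems).

    ```python
    # Scalar Kalman filter for one wave-number mode:
    # x_{k+1} = a*x_k + w,  y_k = x_k + v.
    import random

    def scalar_kalman(ys, a, q, r, x0=0.0, p0=1.0):
        """Return filtered state estimates for observations ys."""
        x, p = x0, p0
        estimates = []
        for y in ys:
            # Predict.
            x, p = a * x, a * a * p + q
            # Update.
            k = p / (p + r)  # Kalman gain
            x = x + k * (y - x)
            p = (1 - k) * p
            estimates.append(x)
        return estimates

    random.seed(1)
    a, q, r = 0.9, 0.04, 0.25  # stable damped mode, forcing, observation noise
    truth, ys = [], []
    x = 0.0
    for _ in range(500):
        x = a * x + random.gauss(0, q ** 0.5)
        truth.append(x)
        ys.append(x + random.gauss(0, r ** 0.5))
    est = scalar_kalman(ys, a, q, r)
    mse_filter = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
    mse_raw = sum((y - t) ** 2 for y, t in zip(ys, truth)) / len(truth)
    print(mse_filter < mse_raw)  # True: filtering beats raw observations
    ```

    The paper's off-line criteria ask when such per-mode filters remain stable and accurate once |a| > 1 (an unstable mode) or once the discrete model carries error, conditions under which naive filters can diverge.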

  20. Time Accurate CFD Simulations of the Orion Launch Abort Vehicle in the Transonic Regime

    NASA Technical Reports Server (NTRS)

    Rojahn, Josh; Ruf, Joe

    2011-01-01

    Significant asymmetries in the fluid dynamics were calculated for some cases in the CFD simulations of the Orion Launch Abort Vehicle through its abort trajectories. The CFD simulations were performed steady-state and in three dimensions, with symmetric geometries, no freestream sideslip angle, and motors firing. The trajectory points at issue were in the transonic regime, at 0 and +/- 5 degrees angle of attack, with the Abort Motors firing both with and without the Attitude Control Motors (ACM). In some of the cases the asymmetric fluid dynamics resulted in aerodynamic side forces large enough to overcome the control authority of the ACMs. MSFC's Fluid Dynamics Group supported the investigation into the cause of the flow asymmetries with time-accurate CFD simulations, utilizing a hybrid RANS-LES turbulence model. The results show that the flow over the vehicle and its subsequent interaction with the abort and ACM motor plumes were unsteady. The resulting instantaneous aerodynamic forces were oscillatory with fairly large magnitudes. Time-averaged aerodynamic forces were essentially symmetric.