Sample records for parameter time series

  1. Estimating rainfall time series and model parameter distributions using model data reduction and inversion techniques

    NASA Astrophysics Data System (ADS)

    Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.

    2017-08-01

    Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated with an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of the entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates for poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when no rainfall was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
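    The dimensionality-reduction step described above can be sketched in a few lines. The sketch below is illustrative only: the wavelet family, decomposition level, and number of retained coefficients are assumptions rather than the authors' settings, and the synthetic series stands in for observed rainfall. It uses the PyWavelets package.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic daily "rainfall": intermittent positive values (stand-in data).
rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 8.0, size=512) * (rng.random(512) < 0.3)

# Multi-level discrete wavelet transform (wavelet and level are assumptions).
coeffs = pywt.wavedec(rain, "db4", level=4)
flat, slices = pywt.coeffs_to_array(coeffs)

# Dimensionality reduction: keep only the K largest-magnitude coefficients;
# in an inversion these K values are the rainfall unknowns to be estimated.
K = 64
keep = np.argsort(np.abs(flat))[::-1][:K]
reduced = np.zeros_like(flat)
reduced[keep] = flat[keep]

# Reconstruct an approximate rainfall series from the reduced coefficients.
rain_hat = pywt.waverec(
    pywt.array_to_coeffs(reduced, slices, output_format="wavedec"), "db4")[: rain.size]
print("kept", K, "of", flat.size, "coefficients;",
      "relative L2 error:",
      round(float(np.linalg.norm(rain - rain_hat) / np.linalg.norm(rain)), 3))
```

    In an inversion, the K retained coefficients, rather than the full daily series, would be estimated jointly with the hydrologic model parameters.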

  2. Influence of characteristics of time series on short-term forecasting error parameter changes in real time

    NASA Astrophysics Data System (ADS)

    Klevtsov, S. I.

    2018-05-01

    The impact of physical factors, such as temperature, leads to changes in the parameters of a technical object. Monitoring these parameter changes is necessary to prevent dangerous situations, and the monitoring is carried out in real time. In this paper, a time series is used to predict the change in a parameter. Forecasting makes it possible to detect a dangerous change in a parameter before the moment when this change occurs, so the control system has more time to prevent a dangerous situation. A simple time series model was chosen; in this case the algorithm is simple and is executed in the microprocessor module in the background. The efficiency of using the time series is affected by its characteristics, which must be adjusted. In this work, the influence of these characteristics on the prediction error of the controlled parameter was studied, taking into account the behavior of the parameter. The values of the forecast lag are determined. The results of this research, if applied, will improve the efficiency of monitoring a technical object during its operation.

  3. Parameter motivated mutual correlation analysis: Application to the study of currency exchange rates based on intermittency parameter and Hurst exponent

    NASA Astrophysics Data System (ADS)

    Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.

    2012-04-01

    We present a novel method for the parameter-oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding window technique, defines a new type of correlation measure and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that can characterize the time series is computed for each window and a cross correlation analysis is carried out on the set of values obtained for the time series under investigation. We apply this method to the study of some currency daily exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
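    A minimal sketch of the sliding-window idea follows: compute one characterizing parameter per window for each series, then cross-correlate the two parameter series. The Hurst estimator here is a crude diffusion-scaling stand-in for the paper's estimator, and the window settings are assumptions.

```python
import numpy as np

def hurst_diffusion(x, max_lag=20):
    """Crude Hurst-exponent estimate from increment scaling:
    std(x[t+lag] - x[t]) ~ lag**H. A stand-in for the paper's estimator."""
    lags = np.arange(2, max_lag)
    tau = [np.std(x[lag:] - x[:-lag]) for lag in lags]
    return np.polyfit(np.log(lags), np.log(tau), 1)[0]

rng = np.random.default_rng(1)
a = np.cumsum(rng.standard_normal(3000))  # stand-ins for two exchange-rate series
b = np.cumsum(rng.standard_normal(3000))

# Sliding window: one parameter value per window, then cross-correlate the
# two parameter series -- the "parameter-oriented mutual correlation" idea.
win, step = 250, 25
ha = [hurst_diffusion(a[i:i + win]) for i in range(0, len(a) - win, step)]
hb = [hurst_diffusion(b[i:i + win]) for i in range(0, len(b) - win, step)]
print("correlation of windowed Hurst exponents:", round(np.corrcoef(ha, hb)[0, 1], 3))
```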

  4. Quantifying Selection with Pool-Seq Time Series Data.

    PubMed

    Taus, Thomas; Futschik, Andreas; Schlötterer, Christian

    2017-11-01

    Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations, several studies have proposed experimental parameters to optimize the identification of selection targets. No such recommendations are available for the underlying parameters, selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates for selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without a major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  5. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

    For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, the batting average of a baseball player, and sales volumes of home electronics.
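    The abstract does not give the formulation, so the following is only a generic stand-in: a windowed maximum-likelihood estimate of a time-dependent Poisson rate, a natural model for small-count events outside the Gaussian regime.

```python
import numpy as np

rng = np.random.default_rng(2)

# Small-count events with a slowly drifting rate (e.g., accidents per month).
t = np.arange(240)
true_rate = 1.5 + 1.0 * np.sin(2 * np.pi * t / 120)
counts = rng.poisson(true_rate)

# Windowed maximum-likelihood estimate of a time-dependent Poisson rate:
# for Poisson counts, the MLE over a window is simply the window mean.
win = 24
rate_hat = np.array([counts[max(0, i - win + 1): i + 1].mean() for i in t])

for i in (60, 120, 180):
    print(f"t={i:3d}  true rate={true_rate[i]:.2f}  estimate={rate_hat[i]:.2f}")
```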

  6. Response to ``Comment on `Adaptive Q-S (lag, anticipated, and complete) time-varying synchronization and parameters identification of uncertain delayed neural networks''' [Chaos 17, 038101 (2007)]

    NASA Astrophysics Data System (ADS)

    Yu, Wenwu; Cao, Jinde

    2007-09-01

    Parameter identification of dynamical systems from time series has received increasing interest due to its wide applications in secure communication, pattern recognition, neural networks, and so on. Given the driving system, parameters can be estimated from the time series by using an adaptive control algorithm. Recently, it has been reported that for some stable systems the parameters are difficult to identify [Li et al., Phys. Lett. A 333, 269-270 (2004); Remark 5 in Yu and Cao, Physica A 375, 467-482 (2007); and Li et al., Chaos 17, 038101 (2007)]. In this paper, a brief discussion of whether parameters can be identified from time series is presented. Through some detailed analyses, the question of why parameters of stable systems can hardly be estimated is discussed. Some interesting examples are given to verify the proposed analysis.

  7. Using NASA's Giovanni System to Simulate Time-Series Stations in the Outflow Region of California's Eel River

    NASA Technical Reports Server (NTRS)

    Acker, James G.; Shen, Suhung; Leptoukh, Gregory G.; Lee, Zhongping

    2012-01-01

    Oceanographic time-series stations provide vital data for the monitoring of oceanic processes, particularly those associated with trends over time and interannual variability. There are likely numerous locations where the establishment of a time-series station would be desirable, but for reasons of funding or logistics, such establishment may not be feasible. An alternative to an operational time-series station is monitoring of sites via remote sensing. In this study, the NASA Giovanni data system is employed to simulate the establishment of two time-series stations near the outflow region of California's Eel River, which carries a high sediment load. Previous time-series analysis of this location (Acker et al. 2009) indicated that remotely-sensed chl a exhibits a statistically significant increasing trend during summer (low flow) months, but no apparent trend during winter (high flow) months. Examination of several newly-available ocean data parameters in Giovanni, including 8-day resolution data, demonstrates the differences in ocean parameter trends at the two locations compared to regionally-averaged time-series. The hypothesis that the increased summer chl a values are related to increasing SST is evaluated, and the signature of the Eel River plume is defined with ocean optical parameters.

  8. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  9. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
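    The windowing step common to both records above is easy to sketch: decompose the signal into windows, collect the local variances, and fit a candidate parameter distribution. The log-normal family below is an illustrative choice, not the paper's empirical result.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Nonstationary series: locally Gaussian, but with a slowly drifting variance.
n, win = 20000, 200
local_sigma = np.exp(0.5 * np.sin(2 * np.pi * np.arange(n) / 5000))
x = rng.standard_normal(n) * local_sigma

# Decompose into windows and collect local variances, as in the abstract.
variances = np.array([x[i:i + win].var() for i in range(0, n - win, win)])

# Empirically fit a candidate parameter distribution for the compounding
# (a log-normal fit is an illustrative choice, not the paper's result).
shape, loc, scale = stats.lognorm.fit(variances, floc=0)
print(f"log-normal fit to local variances: shape={shape:.3f}, scale={scale:.3f}")
```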

  10. Appropriate use of the increment entropy for electrophysiological time series.

    PubMed

    Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin

    2018-04-01

    The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q should take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications. Copyright © 2018 Elsevier Ltd. All rights reserved.
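    For orientation, here is a sketch of an increment-entropy-style measure with the three parameters N, m, and q exposed. The quantization and normalization below are our reading of the construction and may differ in detail from the published IncrEn definition.

```python
import numpy as np
from collections import Counter

def increment_entropy(x, m=2, q=4):
    """Increment-entropy-style measure (illustrative reading, not the exact
    published definition): encode each increment by its sign and a quantized
    magnitude, form words of m consecutive codes, then take the Shannon
    entropy of the word distribution."""
    v = np.diff(np.asarray(x, float))
    sigma = v.std() or 1.0                       # guard against constant input
    signs = np.sign(v).astype(int)
    mags = np.minimum(q, np.floor(np.abs(v) * q / sigma)).astype(int)
    codes = list(zip(signs, mags))
    words = [tuple(codes[i:i + m]) for i in range(len(codes) - m + 1)]
    freqs = np.array(list(Counter(words).values()), dtype=float)
    p = freqs / freqs.sum()
    h = -(p * np.log2(p)).sum()
    return h / (m - 1) if m > 1 else h

rng = np.random.default_rng(4)
print("white noise:", round(increment_entropy(rng.standard_normal(1000)), 3))
print("sine wave  :", round(increment_entropy(np.sin(np.arange(1000) * 0.1)), 3))
```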

  11. Superstatistical fluctuations in time series: Applications to share-price dynamics and turbulence

    NASA Astrophysics Data System (ADS)

    van der Straeten, Erik; Beck, Christian

    2009-09-01

    We report a general technique to study a given experimental time series with superstatistics. Crucial for the applicability of the superstatistics concept is the existence of a parameter β that fluctuates on a large time scale as compared to the other time scales of the complex system under consideration. The proposed method extracts the main superstatistical parameters out of a given data set and examines the validity of the superstatistical model assumptions. We test the method thoroughly with surrogate data sets. Then the applicability of the superstatistical approach is illustrated using real experimental data. We study two examples, velocity time series measured in turbulent Taylor-Couette flows and time series of log returns of the closing prices of some stock market indices.

  12. Sensitivity analysis of machine-learning models of hydrologic time series

    NASA Astrophysics Data System (ADS)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.

  13. The method of trend analysis of parameters time series of gas-turbine engine state

    NASA Astrophysics Data System (ADS)

    Hvozdeva, I.; Myrhorod, V.; Derenh, Y.

    2017-10-01

    This research substantiates an approach to interval estimation of time series trend component. The well-known methods of spectral and trend analysis are used for multidimensional data arrays. The interval estimation of trend component is proposed for the time series whose autocorrelation matrix possesses a prevailing eigenvalue. The properties of time series autocorrelation matrix are identified.

  14. Crop Identification Using Time Series of Landsat-8 and Radarsat-2 Images: Application in a Groundwater Irrigated Region, South India

    NASA Astrophysics Data System (ADS)

    Sharma, A. K.; Hubert-Moy, L.; Betbederet, J.; Ruiz, L.; Sekhar, M.; Corgne, S.

    2016-08-01

    Monitoring land use and land cover, and more particularly irrigated cropland dynamics, is of great importance for water resources management and land use planning. The objective of this study was to evaluate the combined use of multi-temporal optical and radar data with a high spatial resolution in order to improve the precision of irrigated crop identification by taking into account information on crop phenological stages. SAR and optical parameters were derived from time-series of seven quad-pol RADARSAT-2 and four Landsat-8 images which were acquired on the Berambadi catchment, South India, during the monsoon crop season at the growth stages of turmeric crop. To select the best parameter to discriminate turmeric crops, an analysis of covariance (ANCOVA) was applied on all the time-series parameters and the most discriminant ones were classified using the Support Vector Machine (SVM) technique. Results show that in the absence of optical images, polarimetric parameters derived from SAR time-series can be used for the turmeric area estimates and that the combined use of SAR and optical parameters can improve the classification accuracy to identify turmeric.

  15. Stochastic modelling of non-stationary financial assets

    NASA Astrophysics Data System (ADS)

    Estevens, Joana; Rocha, Paulo; Boto, João P.; Lind, Pedro G.

    2017-11-01

    We model non-stationary volume-price distributions with a log-normal distribution and collect the time series of its two parameters. The time series of the two parameters are shown to be stationary and Markov-like and consequently can be modelled with Langevin equations, which are derived directly from their series of values. Having the evolution equations of the log-normal parameters, we reconstruct the statistics of the first moments of volume-price distributions which fit well the empirical data. Finally, the proposed framework is general enough to study other non-stationary stochastic variables in other research fields, namely, biology, medicine, and geology.

  16. Modified DTW for a quantitative estimation of the similarity between rainfall time series

    NASA Astrophysics Data System (ADS)

    Djallel Dilmi, Mohamed; Barthès, Laurent; Mallet, Cécile; Chazottes, Aymeric

    2017-04-01

    Precipitation is due to complex meteorological phenomena and can be described as an intermittent process. The spatial and temporal variability of this phenomenon is significant and covers large scales. To analyze and model this variability and/or structure, several studies use a network of rain gauges providing several time series of precipitation measurements. To compare these different time series, the authors compute for each time series some parameters (PDF, rain peak intensity, occurrence, amount, duration, intensity, ...). However, despite the calculation of these parameters, the comparison of the parameters between two series of measurements remains qualitative. Due to advection processes, when different sensors of an observation network measure precipitation time series that are identical in terms of intermittency or intensities, there is a time lag between the different measured series. Analyzing and extracting relevant information on physical phenomena from these precipitation time series implies the development of automatic analytical methods capable of comparing two time series of precipitation measured by different sensors or at two different locations, and thus of quantifying their difference/similarity. The limits of the Euclidean distance as a measure of similarity between precipitation time series have been well demonstrated and explained (the Euclidean distance is indeed very sensitive to phase-shift effects: between two identical but slightly shifted time series, this distance is not negligible). To quantify and analyze these time lags, correlation functions are well established, normalized, and commonly used to measure the spatial dependences required by many applications. However, authors generally observe a considerable scatter in the inter-rain-gauge correlation coefficients obtained from individual pairs of rain gauges. Because of the substantial dispersion of the estimated time lags, the interpretation of this inter-correlation is not straightforward. We propose here to use an improvement of the Euclidean distance that integrates the global complexity of the rainfall series. Dynamic Time Warping (DTW), used in speech recognition, allows the matching of two time series that differ by local time shifts and provides the most probable time lag. However, the original formulation of DTW suffers from some limitations; in particular, it is not adequate for rain intermittency. In this study we present an adaptation of DTW for the analysis of rainfall time series, using time series from the "Météo France" rain gauge network observed between January 1st, 2007 and December 31st, 2015 at 25 stations located in the Île de France area. We then analyze the results (e.g., the distance, and the relationship between the time lag detected by our method and other measured parameters such as wind speed and direction) to show the ability of the proposed similarity measure to provide useful information on the rain structure. The possibility of using this similarity measure to define a quality indicator for a sensor integrated into an observation network is also envisaged.
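    For reference, the core recursion the authors adapt is the classic dynamic-programming DTW below; their rain-specific modifications for intermittency are not reproduced. The example shows why DTW is preferred over the Euclidean distance for time-shifted but otherwise identical bursts.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two identical rain bursts, one shifted in time: the Euclidean distance is
# large, while DTW re-aligns the bursts and stays small.
t = np.arange(100)
r1 = np.exp(-0.5 * ((t - 40) / 5.0) ** 2)
r2 = np.exp(-0.5 * ((t - 55) / 5.0) ** 2)
print("Euclidean:", round(float(np.linalg.norm(r1 - r2)), 3))
print("DTW      :", round(float(dtw_distance(r1, r2)), 3))
```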

  17. Transmission of linear regression patterns between time series: From relationship in time series to complex networks

    NASA Astrophysics Data System (ADS)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can differ for different lengths of the observation period. If we study the whole period with a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.

  18. Transmission of linear regression patterns between time series: from relationship in time series to complex networks.

    PubMed

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can differ for different lengths of the observation period. If we study the whole period with a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
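    A minimal sketch of the pattern-transmission construction described in both records above: windowed regressions define pattern nodes, and transitions between adjacent windows define weighted directed edges. The slope-bin edges and significance threshold are illustrative assumptions; the graph is built with networkx.

```python
import numpy as np
import networkx as nx
from scipy import stats

rng = np.random.default_rng(5)
x = np.cumsum(rng.standard_normal(2000))
y = 0.5 * x + np.cumsum(rng.standard_normal(2000))   # two related series

# Slide a window, fit a regression in each, and label a "pattern" by the
# slope interval and significance result (bin edges are illustrative).
win, step = 100, 20
bins = [-np.inf, -0.5, 0.0, 0.5, np.inf]
patterns = []
for i in range(0, len(x) - win, step):
    res = stats.linregress(x[i:i + win], y[i:i + win])
    patterns.append((int(np.digitize(res.slope, bins)), bool(res.pvalue < 0.05)))

# Transitions between adjacent patterns become weighted directed edges.
G = nx.DiGraph()
for u, v in zip(patterns[:-1], patterns[1:]):
    w = G.get_edge_data(u, v, {"weight": 0})["weight"]
    G.add_edge(u, v, weight=w + 1)
print(G.number_of_nodes(), "patterns;", G.number_of_edges(), "transitions")
```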

  19. Introduction and application of the multiscale coefficient of variation analysis.

    PubMed

    Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh

    2017-10-01

    Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
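    A compact sketch of the MSCV idea: at each window size, compare local coefficient-of-variation estimates against the overall CV. The absolute-difference distance used here is an illustrative reading, not necessarily the exact published definition.

```python
import numpy as np

def mscv(x, window_sizes=(5, 10, 20, 40)):
    """Multiscale coefficient of variation (illustrative reading): at each
    scale, average the distance between windowed CV estimates and the
    overall CV. Requires positive-valued data (e.g., inter-onset intervals)."""
    x = np.asarray(x, float)
    overall_cv = x.std() / x.mean()
    out = {}
    for w in window_sizes:
        local_cv = np.array([x[i:i + w].std() / x[i:i + w].mean()
                             for i in range(0, len(x) - w + 1, w)])
        out[w] = float(np.mean(np.abs(local_cv - overall_cv)))
    return out

rng = np.random.default_rng(6)
intervals = rng.gamma(4.0, 0.25, size=200)   # short series of intervals
print(mscv(intervals))
```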

  20. A modified temporal criterion to meta-optimize the extended Kalman filter for land cover classification of remotely sensed time series

    NASA Astrophysics Data System (ADS)

    Salmon, B. P.; Kleynhans, W.; Olivier, J. C.; van den Bergh, F.; Wessels, K. J.

    2018-05-01

    Humans are transforming land cover at an ever-increasing rate. Accurate geographical maps of land cover, especially rural and urban settlements, are essential to planning sustainable development. Time series extracted from MODerate resolution Imaging Spectroradiometer (MODIS) land surface reflectance products have been used to differentiate land cover classes by analyzing the seasonal patterns in reflectance values. The proper fitting of a parametric model to these time series usually requires several adjustments to the regression method. To reduce the workload, a global setting of parameters is applied to the regression method for a geographical area. In this work we have modified a meta-optimization approach to setting a regression method to extract the parameters on a per-time-series basis. The standard deviation of the model parameters and the magnitude of the residuals are used as the scoring function. We successfully fitted a triply modulated model to the seasonal patterns of our study area using a non-linear extended Kalman filter (EKF). The approach uses temporal information, which significantly reduces the processing time and storage requirements to process each time series. It also derives reliability metrics for each time series individually. The features extracted using the proposed method are classified with a support vector machine, and the performance of the method is compared to the original approach on our ground truth data.

  1. Methods for developing time-series climate surfaces to drive topographically distributed energy- and water-balance models

    USGS Publications Warehouse

    Susong, D.; Marks, D.; Garen, D.

    1999-01-01

    Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.

  2. A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades

    PubMed Central

    Dai, Weiwei; Selesnick, Ivan; Rizzo, John-Ross; Rucker, Janet; Hudson, Todd

    2017-01-01

    The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter. PMID:28813566

  3. A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades.

    PubMed

    Dai, Weiwei; Selesnick, Ivan; Rizzo, John-Ross; Rucker, Janet; Hudson, Todd

    2017-08-01

    The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter.
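    The underestimation that motivates both records above is easy to reproduce with the conventional SG filter from SciPy; the saccade-like velocity profile and the filter settings below are assumptions chosen to make the clipping visible.

```python
import numpy as np
from scipy.signal import savgol_filter

# Saccade-like velocity profile: a sharp, narrow peak that smoothing clips.
t = np.linspace(0, 1, 201)
velocity = 400 * np.exp(-0.5 * ((t - 0.5) / 0.02) ** 2)   # deg/s, synthetic
noisy = velocity + np.random.default_rng(7).normal(0, 10, t.size)

# Conventional SG filter (window length and polynomial order are assumptions).
smoothed = savgol_filter(noisy, window_length=31, polyorder=3)

print("true peak velocity     :", round(float(velocity.max()), 1))
print("SG-filtered peak value :", round(float(smoothed.max()), 1),
      "(the underestimation the generalized filter aims to fix)")
```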

  4. PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting

    PubMed Central

    Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the field of data mining. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on the issue of ameliorating precision instead of paying much attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for the users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which could segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor networks data stream. PMID:23956693

  5. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    PubMed

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the field of data mining. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on the issue of ameliorating precision instead of paying much attention to efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for the users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which could segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets, improving segmenting speed by nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor networks data stream.

  6. A new parametric method to smooth time-series data of metabolites in metabolic networks.

    PubMed

    Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide

    2016-12-01

    Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Modeling of Engine Parameters for Condition-Based Maintenance of the MTU Series 2000 Diesel Engine

    DTIC Science & Technology

    2016-09-01

    To model the behavior of the engine, an autoregressive distributed lag (ARDL) time series model of engine speed and exhaust gas temperature is derived. The lag length for the ARDL model is determined by whitening of residuals.

  8. Information and Complexity Measures Applied to Observed and Simulated Soil Moisture Time Series

    USDA-ARS?s Scientific Manuscript database

    Time series of soil moisture-related parameters provide important insights into the functioning of soil water systems. Analysis of patterns within these time series has been used in several studies. The objective of this work was to compare patterns in observed and simulated soil moisture contents to u...

  9. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    ERIC Educational Resources Information Center

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  10. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history

    EPA Science Inventory

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...

  11. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.

  12. Modelling fourier regression for time series data- a case study: modelling inflation in foods sector in Indonesia

    NASA Astrophysics Data System (ADS)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis is an analysis to model the relationship between response variables and predictor variables. The parametric approach to the regression model is very strict with its assumptions, whereas the nonparametric regression model does not require assumptions about the form of the model. Time series data are observations of a variable recorded over time, so if time series data are to be modeled by regression, the response and predictor variables must first be determined. The response variable in a time series is the value at time t (y_t), while the predictor variables are significant lags. In nonparametric regression modeling, one developing approach is the Fourier series approach. One of the advantages of the nonparametric regression approach using Fourier series is its ability to handle data with a trigonometric (periodic) pattern. Modeling with a Fourier series requires the parameter K; the number of terms K can be determined using the Generalized Cross Validation (GCV) method. In inflation modeling for the transportation, communication, and financial services sector, the Fourier series yields an optimal K of 120 parameters with an R-squared of 99%, whereas multiple linear regression yields an R-squared of 90%.
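    A minimal sketch of Fourier-series regression with GCV-based selection of K follows; the synthetic seasonal series and the candidate range of K are assumptions, not the paper's data.

```python
import numpy as np

def fourier_design(t, K, period):
    """Design matrix with an intercept and K cosine/sine harmonic pairs."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols += [np.cos(2 * np.pi * k * t / period),
                 np.sin(2 * np.pi * k * t / period)]
    return np.column_stack(cols)

def gcv_score(y, X):
    """Generalized cross-validation score for an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    return float((resid @ resid / n) / (1 - p / n) ** 2)

# Synthetic monthly series with an annual seasonal pattern (illustrative).
rng = np.random.default_rng(8)
t = np.arange(120, dtype=float)
y = (5 + 2 * np.sin(2 * np.pi * t / 12)
     + 0.8 * np.cos(2 * np.pi * 2 * t / 12) + rng.normal(0, 0.3, t.size))

scores = {K: gcv_score(y, fourier_design(t, K, period=12)) for K in range(1, 6)}
print("GCV-optimal K:", min(scores, key=scores.get))
```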

  13. Accuracy of time-domain and frequency-domain methods used to characterize catchment transit time distributions

    NASA Astrophysics Data System (ADS)

    Godsey, S. E.; Kirchner, J. W.

    2008-12-01

    The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
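    The forward model at the heart of both estimation approaches is a convolution of the travel-time distribution with the tracer input. The sketch below uses an exponential distribution with an assumed mean transit time to show the damping the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic weekly tracer concentration in precipitation (input signal).
n = 520
c_in = 10 + rng.normal(0, 3, n)

# Exponential travel-time distribution with a 20-week mean residence time
# (the distribution family and parameter are illustrative assumptions).
tau = np.arange(200)
mtt = 20.0
g = np.exp(-tau / mtt) / mtt
g /= g.sum()

# Stream tracer output = convolution of the TTD with the precipitation input;
# the input variability is strongly damped, as the abstract describes.
c_out = np.convolve(c_in, g)[:n]
print("input std :", round(float(c_in[200:].std()), 2))
print("output std:", round(float(c_out[200:].std()), 2))
```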

  14. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

    This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  15. A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series

    NASA Astrophysics Data System (ADS)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2016-02-01

    Singular spectral analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters, which are difficult to determine and to whose values the method is very sensitive. Since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or correlated noise. Therefore, we introduce a novel method to handle these problems. It is based on the prediction of non-decimated wavelet (NDW) signals by SSA, followed by prediction of the residuals by wavelet regression. The advantages of our method are the automatic determination of parameters and the account taken of the stochastic structure of the time series. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.

  16. The application of time series models to cloud field morphology analysis

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.

    1987-01-01

    A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering, and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.

  17. Phenological Parameters Estimation Tool

    NASA Technical Reports Server (NTRS)

    McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.

    2010-01-01

    The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites is processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE, or an equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE.

  18. Time series analyses of hydrological parameter variations and their correlations at a coastal area in Busan, South Korea

    NASA Astrophysics Data System (ADS)

    Chung, Sang Yong; Senapathi, Venkatramanan; Sekar, Selvam; Kim, Tae Hyung

    2018-02-01

    Monitoring and time-series analysis of the hydrological parameters electrical conductivity (EC), water pressure, precipitation and tide were carried out, to understand the characteristics of the parameter variations and their correlations at a coastal area in Busan, South Korea. The monitoring data were collected at a sharp interface between freshwater and saline water at a depth of 25 m below ground. Two well-logging profiles showed that seawater intrusion has largely expanded (progressed inland), and has greatly affected the groundwater quality in a coastal aquifer of tuffaceous sedimentary rock over a 9-year period. According to the time series analyses, the periodograms of the hydrological parameters present very similar trends to the power spectral densities (PSD) of the hydrological parameters. Autocorrelation functions (ACF) and partial autocorrelation functions (PACF) of the hydrological parameters were produced to evaluate their self-correlations. The ACFs of all hydrologic parameters showed very good correlation over the entire time lag, but the PACF revealed that the correlations were good only at time lag 1. Cross-correlation functions (CCF) were used to evaluate the correlations between the hydrological parameters and the characteristics of seawater intrusion in the coastal aquifer system. The CCFs showed that EC had a close relationship with water pressure and precipitation rather than tide. The CCFs of water pressure with tide and precipitation were in inverse proportion, and the CCF of water pressure with precipitation was larger than that with tide.
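    The ACF and CCF diagnostics used in this record are standard; a minimal NumPy sketch on stand-in series is given below (the synthetic precipitation/EC link is an assumption for illustration only).

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation function for lags 0..nlags."""
    x = np.asarray(x, float) - np.mean(x)
    denom = x @ x
    return np.array([1.0] + [x[k:] @ x[:-k] / denom for k in range(1, nlags + 1)])

def ccf(x, y, lag):
    """Correlation between x(t) and y(t + lag), for a single lag >= 0."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    if lag:
        x, y = x[:-lag], y[lag:]
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(10)
precip = rng.gamma(2.0, 1.0, 500)                       # stand-in precipitation
ec = np.convolve(precip, np.ones(5) / 5, mode="same") \
     + rng.normal(0, 0.1, 500)                          # stand-in EC response

print("ACF of EC, lags 1-3     :", np.round(acf(ec, 3)[1:], 2))
print("CCF precip->EC at lag 2 :", round(ccf(precip, ec, 2), 2))
```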

  19. A time series model: First-order integer-valued autoregressive (INAR(1))

    NASA Astrophysics Data System (ADS)

    Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.

    2017-07-01

    Nonnegative integer-valued time series arise in many applications. A time series model, the first-order Integer-valued AutoRegressive (INAR(1)) model, is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on the process one period before. The parameters of the model can be estimated by Conditional Least Squares (CLS). The specification of INAR(1) follows that of AR(1). Forecasting in INAR(1) uses median or Bayesian forecasting methodology. The median forecasting methodology finds the least integer s for which the cumulative distribution function (CDF) up to s is greater than or equal to 0.5. The Bayesian forecasting methodology forecasts h steps ahead by generating the model parameter and the innovation-term parameter using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s where the CDF up to s is greater than or equal to u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 until April 2016.
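    A short simulation makes the binomial-thinning construction and CLS estimation concrete. Because E[X_t | X_{t-1}] = alpha*X_{t-1} + lambda is linear, CLS reduces to a least-squares regression of X_t on X_{t-1}; the parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulate INAR(1): X_t = alpha ∘ X_{t-1} + e_t, where ∘ is binomial
# thinning and the innovations e_t are Poisson(lam).
alpha, lam, n = 0.6, 2.0, 500
x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))          # start near the stationary mean
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

# Conditional least squares: regress X_t on X_{t-1};
# the slope estimates alpha and the intercept estimates lam.
X = np.column_stack([np.ones(n - 1), x[:-1]])
(b0, b1), *_ = np.linalg.lstsq(X, x[1:], rcond=None)
print(f"alpha_hat={b1:.3f} (true {alpha}),  lam_hat={b0:.3f} (true {lam})")
```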

  20. Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.

    PubMed

    Kis, Maria

    2005-01-01

    In this paper we demonstrate the application of time series models to medical research. Hungarian mortality rates were analysed by autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This method is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: estimation based on the standard normal distribution, estimation based on White's theory, and the continuous-time case estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we may conclude that the continuous-time estimation model yields much narrower confidence intervals than the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia: we decompose the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This demonstrates the seasonal occurrence of childhood leukaemia in Hungary.

  1. EO-1 Hyperion reflectance time series at calibration and validation sites: stability and sensitivity to seasonal dynamics

    Treesearch

    Petya K. Entcheva Campbell; Elizabeth M. Middleton; Kurt J. Thome; Raymond F. Kokaly; Karl Fred Huemmrich; David Lagomasino; Kimberly A. Novick; Nathaniel A. Brunsell

    2013-01-01

    This study evaluated Earth Observing 1 (EO-1) Hyperion reflectance time series at established calibration sites to assess the instrument stability and suitability for monitoring vegetation functional parameters. Our analysis using three pseudo-invariant calibration sites in North America indicated that the reflectance time series are devoid of apparent spectral trends...

  2. Determination of fundamental asteroseismic parameters using the Hilbert transform

    NASA Astrophysics Data System (ADS)

    Kiefer, René; Schad, Ariane; Herzberg, Wiebke; Roth, Markus

    2015-06-01

    Context. Solar-like oscillations exhibit a regular pattern of frequencies. This pattern is dominated by the small and large frequency separations between modes. The accurate determination of these parameters is of great interest, because they give information about e.g. the evolutionary state and the mass of a star. Aims: We want to develop a robust method to determine the large and small frequency separations for time series with low signal-to-noise ratio. For this purpose, we analyse a time series of the Sun from the GOLF instrument aboard SOHO and a time series of the star KIC 5184732 from the NASA Kepler satellite by employing a combination of Fourier and Hilbert transform. Methods: We use the analytic signal of filtered stellar oscillation time series to compute the signal envelope. Spectral analysis of the signal envelope then reveals frequency differences of dominant modes in the periodogram of the stellar time series. Results: With the described method the large frequency separation Δν can be extracted from the envelope spectrum even for data of poor signal-to-noise ratio. A modification of the method allows for an overview of the regularities in the periodogram of the time series.
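
    The envelope trick can be reproduced in a few lines. Assuming two closely spaced modes with illustrative frequencies (not GOLF or Kepler values), the sketch below forms the analytic signal with scipy, takes the envelope, and locates the frequency separation in the envelope spectrum.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(2)
    fs, T = 10.0, 2000.0                       # sample rate and duration (units illustrative)
    t = np.arange(0, T, 1 / fs)
    f1, f2 = 1.00, 1.05                        # two close mode frequencies; separation 0.05
    x = (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
         + 0.2 * rng.standard_normal(t.size))

    # Analytic signal -> envelope; two close modes make the envelope beat at f2 - f1.
    envelope = np.abs(hilbert(x))

    # The spectrum of the (mean-removed) envelope reveals the separation directly.
    spec = np.abs(np.fft.rfft(envelope - envelope.mean())) ** 2
    freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)
    band = freqs > 0.01                        # skip the lowest bins dominated by noise
    print(f"envelope peak at {freqs[band][spec[band].argmax()]:.3f} "
          f"(expected {f2 - f1:.3f})")
    ```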

  3. Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments.

    PubMed

    Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn

    2015-01-01

    Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body points' time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body points' time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body points' time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters walking speed, cadence, step length, stride length, step width, step time and stride time (all obtained for the intermediate 6 meters) and for the time to walk 10 meters, complemented by Bland-Altman's bias and limits of agreement. Body points' time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body points' time series and gait parameters obtained with a multi-Kinect v2 set-up match well in 3D measurement accuracy with those derived with a gold standard. Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
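
    Bland-Altman agreement statistics reduce to the mean difference and its ±1.96 SD band. A minimal sketch with hypothetical paired walking-speed measurements (not the study's data):

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between paired measurements a and b."""
        diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Hypothetical paired walking speeds (m/s) from Kinect and a reference system.
    kinect  = np.array([1.21, 1.35, 1.18, 1.42, 1.30, 1.25])
    optical = np.array([1.19, 1.36, 1.20, 1.40, 1.31, 1.27])
    bias, (lo, hi) = bland_altman(kinect, optical)
    print(f"bias={bias:+.3f} m/s, limits of agreement=({lo:+.3f}, {hi:+.3f})")
    ```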

  4. VizieR Online Data Catalog: Activity cycles in 3203 Kepler stars (Reinhold+, 2017)

    NASA Astrophysics Data System (ADS)

    Reinhold, T.; Cameron, R. H.; Gizon, L.

    2017-05-01

    Rvar time series, sine fit parameters, mean rotation periods, and false alarm probabilities of all 3203 Kepler stars are presented. For simplicity, the KIC number and the fit parameters of a certain star are repeated in each line. The fit function to the Rvar(t) time series equals y_fit=Acyc*sin(2*pi/(Pcyc*365)*(t-t0))+Offset. (2 data files).
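
    Given the stated fit function, recovering the parameters from an Rvar(t) series is a standard nonlinear least-squares problem; a sketch with synthetic data (illustrative values, not catalog entries) using scipy.optimize.curve_fit:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cycle_model(t, A_cyc, P_cyc, t0, offset):
        """Catalog fit function: A_cyc*sin(2*pi/(P_cyc*365)*(t - t0)) + offset
        (t in days, P_cyc in years)."""
        return A_cyc * np.sin(2 * np.pi / (P_cyc * 365.0) * (t - t0)) + offset

    # Synthetic Rvar(t) series (values illustrative, not from the catalog).
    rng = np.random.default_rng(3)
    t = np.arange(0, 1400, 30.0)     # Kepler-length baseline, 30-day sampling
    rvar = cycle_model(t, 0.8, 2.5, 100.0, 2.0) + 0.1 * rng.standard_normal(t.size)

    popt, pcov = curve_fit(cycle_model, t, rvar, p0=[1.0, 2.0, 0.0, 2.0])
    print("A_cyc, P_cyc [yr], t0 [d], offset =", np.round(popt, 3))
    ```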

  5. Aggregate R-R-V Analysis

    EPA Pesticide Factsheets

    The Excel file contains time series data of flow rates and concentrations of alachlor, atrazine, ammonia, total phosphorus, and total suspended solids observed in two watersheds in Indiana from 2002 to 2007. The aggregate time series data representative of all these parameters were obtained using a specialized, data-driven technique. The aggregate data are hypothesized in the published paper to represent the overall health of both watersheds with respect to various potential water quality impairments. The time series data for each of the individual water quality parameters were used to compute corresponding risk measures (Rel, Res, and Vul) that are reported in Tables 4 and 5. The aggregation of the risk measures, which is computed from the aggregate time series and the water quality standards in Table 1, is also reported in Tables 4 and 5 of the published paper. Values under the column heading 'uncertainty' report the uncertainties associated with the reconstruction of missing records of the water quality parameters. Long-term records of the water quality parameters were reconstructed in order to estimate the R-R-V and corresponding aggregate risk measures. This dataset is associated with the following publication: Hoque, Y., S. Tripathi, M. Hantush, and R. Govindaraju. Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty. Ed. Gregorich. JOURNAL OF ENVIRONMENTAL QUALITY. American Society of Agronomy, MADISON, WI,

  6. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows switching smoothly or abruptly among different polynomial regression models. The model parameters are estimated by the maximum likelihood method, performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed against two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  7. Analysis of Nonstationary Time Series for Biological Rhythms Research.

    PubMed

    Leise, Tanya L

    2017-06-01

    This article is part of a Journal of Biological Rhythms series exploring analysis and statistics topics relevant to researchers in biological rhythms and sleep research. The goal is to provide an overview of the most common issues that arise in the analysis and interpretation of data in these fields. In this article on time series analysis for biological rhythms, we describe some methods for assessing the rhythmic properties of time series, including tests of whether a time series is indeed rhythmic. Because biological rhythms can exhibit significant fluctuations in their period, phase, and amplitude, their analysis may require methods appropriate for nonstationary time series, such as wavelet transforms, which can measure how these rhythmic parameters change over time. We illustrate these methods using simulated and real time series.

  8. The computation of dynamic fractional difference parameter for S&P500 index

    NASA Astrophysics Data System (ADS)

    Pei, Tan Pei; Cheong, Chin Wen; Galagedera, Don U. A.

    2015-10-01

    This study evaluates the time-varying long memory behavior of the S&P500 volatility index using dynamic fractional difference parameters. The time-varying fractional difference parameter shows the dynamics of long memory in the volatility series before and after the subprime mortgage crisis triggered in the U.S. The results show an increasing trend in the S&P500 long memory volatility for the pre-crisis period. However, the onset of the Lehman Brothers event reduced the predictability of the volatility series, followed by a slight fluctuation of the fractional differencing parameters. After that, the U.S. financial market became more informationally efficient and followed a non-stationary random process.
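
    The fractional difference parameter d acts through the operator (1 - B)^d, whose binomial-expansion weights obey a simple recursion. The sketch below (illustrative d and synthetic series; not the study's estimation procedure) computes the weights and applies the operator:

    ```python
    import numpy as np

    def frac_diff_weights(d, n_weights):
        """Binomial-expansion weights of the fractional difference operator (1 - B)^d:
        w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
        w = np.empty(n_weights)
        w[0] = 1.0
        for k in range(1, n_weights):
            w[k] = w[k - 1] * (k - 1 - d) / k
        return w

    def frac_diff(x, d):
        """Apply (1 - B)^d to a series (truncated at the series start)."""
        w = frac_diff_weights(d, len(x))
        return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])

    # For long-memory volatility, 0 < d < 0.5; the value of d here is illustrative.
    x = np.cumsum(np.random.default_rng(4).standard_normal(500)) * 0.01
    print(frac_diff(x, d=0.35)[:5])
    ```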

  9. Shortened acquisition protocols for the quantitative assessment of the 2-tissue-compartment model using dynamic PET/CT 18F-FDG studies.

    PubMed

    Strauss, Ludwig G; Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2011-03-01

    (18)F-FDG kinetics are quantified by a 2-tissue-compartment model. The routine use of dynamic PET is limited because of this modality's 1-h acquisition time. We evaluated shortened acquisition protocols of up to 0-30 min regarding their accuracy for data analysis with the 2-tissue-compartment model. Full dynamic series for 0-60 min were analyzed using a 2-tissue-compartment model. The time-activity curves and the resulting model parameters were stored in a database. Shortened acquisition data were generated from the database using the following time intervals: 0-10, 0-16, 0-20, 0-25, and 0-30 min. Furthermore, the impact of adding a 60-min uptake value to the dynamic series was evaluated. The datasets were analyzed using dedicated software to predict the results of the full dynamic series. The software is based on a modified support vector machines (SVM) algorithm and predicts the compartment parameters of the full dynamic series. The SVM-based software provides user-independent results and was accurate at predicting the compartment parameters of the full dynamic series. If a squared correlation coefficient of 0.8 (corresponding to 80% explained variance of the data) was used as a limit, a shortened acquisition of 0-16 min was accurate at predicting the 60-min 2-tissue-compartment parameters. If a limit of 0.9 (90% explained variance) was used, a dynamic series of at least 0-20 min together with the 60-min uptake value was required. Shortened acquisition protocols can thus be used to predict the parameters of the 2-tissue-compartment model: either a dynamic PET series of 0-16 min or a combination of a dynamic PET/CT series of 0-20 min and a 60-min uptake value is accurate for analysis with a 2-tissue-compartment model.

  10. Empirical method to measure stochasticity and multifractality in nonlinear time series

    NASA Astrophysics Data System (ADS)

    Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping

    2013-12-01

    An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of the time series from a Wiener process so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Ito process, the emergent markets are far from efficient. Differences about the multifractal structures and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.

  11. Streamflow Prediction based on Chaos Theory

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, X.; Babovic, V. M.

    2015-12-01

    Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory utilizes time-delay embedding to reconstruct the phase-space diagram. The efficacy of this method depends on the embedding parameters, i.e., embedding dimension, time lag, and number of nearest neighbors. The optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately. This may lead to a local optimum and thus limit the prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find the global optimum of the embedding parameters. It is also compared with another global optimization approach, a Genetic Algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization can help the local model provide more accurate predictions than local optimization. The LM combined with SA shows additional advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields, such as prediction of hydro-climatic time series, error correction, etc.
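
    A minimal sketch of the underlying local model, with fixed illustrative embedding parameters rather than AMI/FNN- or SA/GA-optimized ones: time-delay embedding followed by a k-nearest-neighbor forecast on a logistic-map series standing in for streamflow.

    ```python
    import numpy as np

    def embed(x, dim, lag):
        """Time-delay embedding: row i is [x_i, x_{i+lag}, ..., x_{i+(dim-1)*lag}]."""
        n = len(x) - (dim - 1) * lag
        return np.column_stack([x[j * lag : j * lag + n] for j in range(dim)])

    def local_forecast(x, dim=3, lag=1, k=5):
        """One-step forecast with a zeroth-order local model (k nearest neighbors)."""
        Y = embed(x, dim, lag)
        horizon = (dim - 1) * lag + 1              # index offset of the "next" value
        dists = np.linalg.norm(Y[:-1] - Y[-1], axis=1)
        nn = np.argsort(dists)[:k]                 # nearest delay vectors in phase space
        return x[nn + horizon].mean()              # average of their successors

    # Logistic-map series standing in for a streamflow record (illustrative).
    x = np.empty(2000)
    x[0] = 0.3
    for t in range(1, x.size):
        x[t] = 3.9 * x[t - 1] * (1.0 - x[t - 1])

    print(f"forecast {local_forecast(x[:-1]):.4f} vs actual {x[-1]:.4f}")
    ```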

  12. Methods for estimating confidence intervals in interrupted time series analyses of health interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis

    2009-02-01

    Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
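
    A sketch of the bootstrap variant for a level-change-only segmented regression (synthetic data; the trend-change term and the autocorrelation correction used in the paper are omitted for brevity):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_pre, n_post = 24, 24
    t = np.arange(n_pre + n_post, dtype=float)
    post = (t >= n_pre).astype(float)
    X = np.column_stack([np.ones_like(t), t, post])  # intercept, baseline trend, level change

    # Synthetic monthly outcome with a trend and an intervention-related level drop.
    y = 50 + 0.2 * t - 8 * post + rng.normal(0, 2, t.size)

    def rel_level_change(yy):
        b = np.linalg.lstsq(X, yy, rcond=None)[0]
        counterfactual = b[0] + b[1] * n_pre         # expected level at the interruption, no change
        return b[2] / counterfactual                 # relative change in level

    est = rel_level_change(y)

    # Residual bootstrap for a percentile CI of the relative level change.
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    fitted, resid = X @ b, y - X @ b
    boot = [rel_level_change(fitted + rng.choice(resid, resid.size, replace=True))
            for _ in range(2000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"relative level change {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
    ```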

  13. Time series smoother for effect detection.

    PubMed

    You, Cheng; Lin, Dennis K J; Young, S Stanley

    2018-01-01

    In environmental epidemiology, it is often encountered that multiple time series data with a long-term trend, including seasonality, cannot be fully adjusted by the observed covariates. The long-term trend is difficult to separate from abnormal short-term signals of interest. This paper addresses how to estimate the long-term trend in order to recover short-term signals. Our case study demonstrates that the current spline smoothing methods can result in significant positive and negative cross-correlations from the same dataset, depending on how the smoothing parameters are chosen. To circumvent this dilemma, three classes of time series smoothers are proposed to detrend time series data. These smoothers do not require fine tuning of parameters and can be applied to recover short-term signals. The properties of these smoothers are shown with both a case study using a factorial design and a simulation study using datasets generated from the original dataset. General guidelines are provided on how to discover short-term signals from time series with a long-term trend. The benefit of this research is that a problem is identified and characteristics of possible solutions are determined.

  14. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1995-01-01

    When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index' (CI) is developed as a quantitative indicator that the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code 'Sequitor'.

  15. The application of neural networks to myoelectric signal analysis: a preliminary study.

    PubMed

    Kelly, M F; Parker, P A; Scott, R N

    1990-03-01

    Two neural network implementations are applied to myoelectric signal (MES) analysis tasks. The motivation behind this research is to explore more reliable methods of deriving control for multidegree of freedom arm prostheses. A discrete Hopfield network is used to calculate the time series parameters for a moving average MES model. It is demonstrated that the Hopfield network is capable of generating the same time series parameters as those produced by the conventional sequential least squares (SLS) algorithm. Furthermore, it can be extended to applications utilizing larger amounts of data, and possibly to higher order time series models, without significant degradation in computational efficiency. The second neural network implementation involves using a two-layer perceptron for classifying a single site MES based on two features, specifically the first time series parameter, and the signal power. Using these features, the perceptron is trained to distinguish between four separate arm functions. The two-dimensional decision boundaries used by the perceptron classifier are delineated. It is also demonstrated that the perceptron is able to rapidly compensate for variations when new data are incorporated into the training set. This adaptive quality suggests that perceptrons may provide a useful tool for future MES analysis.

  16. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    NASA Astrophysics Data System (ADS)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations to both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.

  17. Reconstruction of ensembles of coupled time-delay systems from time series.

    PubMed

    Sysoev, I V; Prokhorov, M D; Ponomarenko, V I; Bezruchko, B P

    2014-06-01

    We propose a method to recover from time series the parameters of coupled time-delay systems and the architecture of couplings between them. The method is based on a reconstruction of model delay-differential equations and estimation of statistical significance of couplings. It can be applied to networks composed of nonidentical nodes with an arbitrary number of unidirectional and bidirectional couplings. We test our method on chaotic and periodic time series produced by model equations of ensembles of diffusively coupled time-delay systems in the presence of noise, and apply it to experimental time series obtained from electronic oscillators with delayed feedback coupled by resistors.

  18. How does the host population's network structure affect the estimation accuracy of epidemic parameters?

    NASA Astrophysics Data System (ADS)

    Yashima, Kenta; Ito, Kana; Nakamura, Kazuyuki

    2013-03-01

    When an infectious disease prevails throughout a population, epidemic parameters, such as the basic reproduction ratio and the initial point of infection, are estimated from time series data of the infected population. However, it is unclear how the structure of the host population affects this estimation accuracy. In other words, for what kind of city is it difficult to estimate the epidemic parameters? To answer this question, epidemic data are simulated by constructing commuting networks with different network structures and running the infection process over these networks. From the given time series data for each network structure, we analyze the estimation accuracy of the epidemic parameters.

  19. Dynamic Factor Analysis Models with Time-Varying Parameters

    ERIC Educational Resources Information Center

    Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian

    2011-01-01

    Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor…

  1. Pseudo-random bit generator based on lag time series

    NASA Astrophysics Data System (ADS)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we use a delay in the generation of the time series. When these new series are mapped as xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.
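
    A simplified sketch of this kind of generator (the lag, parameter value and thresholding rule are illustrative assumptions, not necessarily the authors' choices): iterate the logistic map, keep only every lag-th iterate after a burn-in so the underlying map is hidden, and threshold to bits.

    ```python
    import numpy as np

    def logistic_lag_bits(n_bits, r=3.99, x0=0.61, lag=7, burn_in=1000):
        """Pseudo-random bits from a lagged logistic-map orbit (illustrative construction)."""
        n_iter = burn_in + lag * n_bits
        x = x0
        samples = []
        for i in range(n_iter):
            x = r * x * (1.0 - x)
            # Keep only every `lag`-th iterate after burn-in, hiding the underlying map:
            if i >= burn_in and (i - burn_in) % lag == 0:
                samples.append(x)
        return (np.array(samples) > 0.5).astype(np.uint8)

    bits = logistic_lag_bits(10000)
    print("ones fraction:", bits.mean())   # should be near 0.5 for a usable generator
    ```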

  2. Global Mass Flux Solutions from GRACE: A Comparison of Parameter Estimation Strategies - Mass Concentrations Versus Stokes Coefficients

    NASA Technical Reports Server (NTRS)

    Rowlands, D. D.; Luthcke, S. B.; McCarthy J. J.; Klosko, S. M.; Chinn, D. S.; Lemoine, F. G.; Boy, J.-P.; Sabaka, T. J.

    2010-01-01

    The differences between mass concentration (mascon) parameters and standard Stokes coefficient parameters in the recovery of gravity information from Gravity Recovery and Climate Experiment (GRACE) intersatellite K-band range rate data are investigated. First, mascons are decomposed into their Stokes coefficient representations to gauge the range of solutions available using each of the two types of parameters. Next, a direct comparison is made between two time series of unconstrained gravity solutions, one based on a set of global equal-area mascon parameters (equivalent to 4° x 4° at the equator), and the other based on standard Stokes coefficients, with each time series using the same fundamental processing of the GRACE tracking data. It is shown that in unconstrained solutions, the type of gravity parameter being estimated does not qualitatively affect the estimated gravity field. It is also shown that many of the differences in mass flux derivations from GRACE gravity solutions arise from the type of smoothing being used, and that the type of smoothing that can be embedded in mascon solutions has distinct advantages over postsolution smoothing. Finally, a 1-year time series based on global 2° equal-area mascons estimated every 10 days is presented.

  3. Influence of ionospheric disturbances onto long-baseline relative positioning in kinematic mode

    NASA Astrophysics Data System (ADS)

    Wezka, Kinga; Herrera, Ivan; Cokrlic, Marija; Galas, Roman

    2013-04-01

    Ionospheric disturbances are fast and random variations in the ionosphere, and they are difficult to detect and model. Some strong disturbances can cause, among others, interruption of the GNSS signal or even lead to loss of signal lock. These phenomena are especially harmful for kinematic real-time applications, where system availability is one of the most important parameters influencing positioning reliability. Our investigations were conducted using long time series of GNSS observations gathered at high latitude, where ionospheric disturbances occur more frequently. A selected processing strategy was used to monitor ionospheric signatures in the time series of the coordinates. The quality of the input data and of the processing results was examined and described by a set of proposed parameters. Variations in the coordinates were compared with available information about the state of the ionosphere derived from the Neustrelitz TEC Model (NTCM) and with the time series of raw observations. Some selected parameters were also calculated with the 'iono-tools' module of the TUB-NavSolutions software developed by the Precise Navigation and Positioning Group at Technische Universitaet Berlin. The paper presents the first results of an evaluation of the robustness of positioning algorithms with respect to ionospheric anomalies using the NTCM model and our calculated ionospheric parameters.

  4. Applying "domino" model to study dipolar geomagnetic field reversals and secular variation

    NASA Astrophysics Data System (ADS)

    Peqini, Klaudio; Duka, Bejo

    2014-05-01

    Aiming to understand the physical processes underlying geomagnetic field reversal events, different numerical models have been conceived. We considered the so-called "domino" model, an Ising-Heisenberg model of interacting magnetic spins aligned along a ring [Mazaud and Laj, EPSL, 1989; Mori et al., arXiv:1110.5062v2, 2012]. We present here some results that are slightly different from the already published results and give our interpretation of the differences. Following empirical studies of long series of the axial magnetic moment (dipolar moment or "magnetization") generated by the model while varying all model parameters, we defined the set of parameters that yields the longest mean time between reversals. Using this set of parameters, a short time series (about 10,000 years) of the axial magnetic moment was generated. After de-noising the fluctuations of this time series, we compared it with the series of dipolar magnetic moment values supplied by the CALS10K.1b model for the last 10,000 years. We found similar behavior in both series, even if the "domino" model cannot supply a full explanation of the geomagnetic field secular variation. In a similar way, we will compare a 14,000-year-long series with the dipolar magnetic moment obtained from the model SHA.DIF.14k [Pavón-Carrasco et al., EPSL, 2014].

  5. Multivariate stochastic analysis for Monthly hydrological time series at Cuyahoga River Basin

    NASA Astrophysics Data System (ADS)

    zhang, L.

    2011-12-01

    Copulas have become a very powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas, e.g., the Gumbel-Hougaard copula, Cook-Johnson copula and Frank copula, and the meta-elliptical copulas, e.g., the Gaussian copula and Student-t copula, have been applied in multivariate hydrological analyses, e.g., multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copulas have also been applied in flood frequency analysis at the confluences of river systems by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been treated as stationary signals whose values are assumed to be independent identically distributed (i.i.d.) random variables. But in reality, hydrological time series, especially daily and monthly hydrological time series, cannot be considered as i.i.d. random variables due to the periodicity present in the data structure. Also, the stationarity assumption is under question due to climate change and land use and land cover (LULC) change in the past years. To this end, it is necessary to re-evaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption through a nonstationary approach. Also, for the study of the dependence structure of hydrological time series, the assumption of the same type of univariate distribution needs to be relaxed by adopting the copula theory. In this paper, the univariate monthly hydrological time series will be studied through a nonstationary time series analysis approach, and the dependence structure of the multivariate monthly hydrological time series will be studied through the copula theory. For parameter estimation, maximum likelihood estimation (MLE) will be applied. To illustrate the method, the univariate time series model and the dependence structure will be determined and tested using the monthly discharge time series of the Cuyahoga River Basin.
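
    As a small illustration of one-parameter copula fitting (using the Kendall's-tau inversion as a moment-style shortcut rather than the full MLE described above; data are synthetic), a Gumbel-Hougaard copula can be fitted to pseudo-observations as follows:

    ```python
    import numpy as np
    from scipy.stats import kendalltau, rankdata

    rng = np.random.default_rng(6)
    # Synthetic positively dependent monthly series (e.g., discharge at two gauges).
    z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=600)
    x, y = np.exp(z[:, 0]), np.exp(z[:, 1])

    # Probability-integral transform to pseudo-observations on (0, 1).
    u = rankdata(x) / (len(x) + 1)
    v = rankdata(y) / (len(y) + 1)

    # Gumbel-Hougaard copula: Kendall's tau = 1 - 1/theta, so theta = 1/(1 - tau).
    tau, _ = kendalltau(u, v)
    theta = 1.0 / (1.0 - tau)
    print(f"tau={tau:.3f}, Gumbel-Hougaard theta={theta:.3f}")

    # Copula CDF: C(u,v) = exp(-((-ln u)^theta + (-ln v)^theta)^(1/theta)).
    C = lambda a, b: np.exp(-(((-np.log(a))**theta
                               + (-np.log(b))**theta) ** (1.0 / theta)))
    print("C(0.5, 0.5) =", round(C(0.5, 0.5), 3))
    ```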

  6. Comparison of detrending methods for fluctuation analysis in hydrology

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Chen, Yongqin David

    2011-03-01

    Trends within a hydrologic time series can significantly influence the scaling results of fluctuation analysis, such as rescaled range (RS) analysis and (multifractal) detrended fluctuation analysis (MF-DFA). Therefore, removal of trends is important in the study of scaling properties of the time series. In this study, three detrending methods, including the adaptive detrending algorithm (ADA), the Fourier-based method, and the average removing technique, were evaluated by analyzing numerically generated series and observed streamflow series with an obvious, relatively regular periodic trend. Results indicated that: (1) the Fourier-based detrending method and ADA are similar in detrending practice and, given proper parameters, these two methods can produce similarly satisfactory results; (2) series detrended by the Fourier-based method and ADA lose the fluctuation information at larger time scales, and the location of crossover points is heavily impacted by the chosen parameters of these two methods; and (3) the average removing method has an advantage over the other two methods, i.e., the fluctuation information at larger time scales is kept well, an indication of relatively reliable performance in detrending. In addition, the average removing method performed reasonably well in detrending a time series with regular periods or trends. In this sense, the average removing method should be preferred in the study of scaling properties of hydrometeorological series with a relatively regular periodic trend using MF-DFA.
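
    A sketch of the average removing technique as commonly understood for periodic series (subtract the long-term mean of all values sharing the same phase within the period); the series and period are illustrative assumptions:

    ```python
    import numpy as np

    def average_removing_detrend(x, period):
        """Remove a periodic trend by subtracting the mean over all points at the same phase."""
        x = np.asarray(x, dtype=float)
        phase = np.arange(x.size) % period
        phase_means = np.array([x[phase == p].mean() for p in range(period)])
        return x - phase_means[phase]

    # Monthly streamflow-like series: annual cycle plus noise (illustrative).
    rng = np.random.default_rng(7)
    t = np.arange(480)                       # 40 years of monthly values
    x = 10 + 5 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(t.size)

    resid = average_removing_detrend(x, period=12)
    print(resid.std(), x.std())              # fluctuations kept, annual cycle removed
    ```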

  7. Frequency domain system identification of helicopter rotor dynamics incorporating models with time periodic coefficients

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan

    1997-08-01

    One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response characteristics of such a linear time periodic system exhibits sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because the linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions of the identified harmonic transfer function were then formulated using the spectral density functions both with and without additive noise processes at input and/or output. A procedure was developed to identify parameters of a model to match the frequency response characteristics between measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link. The linear time periodic technique results were compared with the linear time invariant technique results. Also, the effect of noise processes and initial parameter guess on the identification procedure were investigated. To study the effect of elastic modes, a rigid blade with a trailing edge flap excited by a smart actuator was selected and system parameters were successfully identified, but with some expense of computational storage and time. Conclusively, the linear time periodic technique substantially improved the identified parameter accuracy compared to the linear time invariant technique. Also, the linear time periodic technique was robust to noises and initial guess of parameters. However, an elastic mode of higher frequency relative to the system pumping frequency tends to increase the computer storage requirement and computing time.

  8. Time series pCO2 at a coastal mooring: Internal consistency, seasonal cycles, and interannual variability

    NASA Astrophysics Data System (ADS)

    Reimer, Janet J.; Cai, Wei-Jun; Xue, Liang; Vargas, Rodrigo; Noakes, Scott; Hu, Xinping; Signorini, Sergio R.; Mathis, Jeremy T.; Feely, Richard A.; Sutton, Adrienne J.; Sabine, Christopher; Musielewicz, Sylvia; Chen, Baoshan; Wanninkhof, Rik

    2017-08-01

    Marine carbonate system monitoring programs often consist of multiple observational methods that include underway cruise data, moored autonomous time series, and discrete water bottle samples. Monitored parameters include all, or some of the following: partial pressure of CO2 of the water (pCO2w) and air, dissolved inorganic carbon (DIC), total alkalinity (TA), and pH. Any combination of at least two of the aforementioned parameters can be used to calculate the others. In this study at the Gray's Reef (GR) mooring in the South Atlantic Bight (SAB) we: examine the internal consistency of pCO2w from underway cruise, moored autonomous time series, and calculated from bottle samples (DIC-TA pairing); describe the seasonal to interannual pCO2w time series variability and air-sea flux (FCO2), as well as describe the potential sources of pCO2w variability; and determine the source/sink for atmospheric pCO2. Over the 8.5 years of GR mooring time series, mooring-underway and mooring-bottle calculated-pCO2w strongly correlate with r-values > 0.90. pCO2w and FCO2 time series follow seasonal thermal patterns; however, seasonal non-thermal processes, such as terrestrial export, net biological production, and air-sea exchange also influence variability. The linear slope of time series pCO2w increases by 5.2 ± 1.4 μatm y-1 with FCO2 increasing 51-70 mmol m-2 y-1. The net FCO2 sign can switch interannually with the magnitude varying greatly. Non-thermal pCO2w is also increasing over the time series, likely indicating that terrestrial export and net biological processes drive the long term pCO2w increase.

  9. Projection of landfill stabilization period by time series analysis of leachate quality and transformation trends of VOCs.

    PubMed

    Sizirici, Banu; Tansel, Berrin

    2010-01-01

    The purpose of this study was to evaluate the suitability of using time series analysis of selected leachate quantity and quality parameters to forecast the duration of the post-closure period of a closed landfill. Selected leachate quality parameters (i.e., sodium, chloride, iron, bicarbonate, total dissolved solids (TDS), and ammonium as N) and volatile organic compounds (VOCs) (i.e., vinyl chloride, 1,4-dichlorobenzene, chlorobenzene, benzene, toluene, ethyl benzene, xylenes, total BTEX) were analyzed by the time series multiplicative decomposition model to estimate the projected levels of the parameters. These parameters were selected based on their detection levels and the consistency of their detection in leachate samples. In addition, VOCs detected in leachate and their chemical transformations were considered in view of the decomposition stage of the landfill. Projected leachate quality trends were analyzed and compared with the maximum contaminant level (MCL) for the respective parameters. Conditions that lead to specific trends (i.e., increasing, decreasing, or steady) and interactions of leachate quality parameters were evaluated. Decreasing trends were projected for leachate quantity and for concentrations of sodium, chloride, TDS, ammonia as N, vinyl chloride, 1,4-dichlorobenzene, benzene, toluene, ethyl benzene, xylenes, and total BTEX. Increasing trends were projected for concentrations of iron, bicarbonate, and chlorobenzene. Anaerobic conditions in the landfill provide favorable conditions for corrosion of iron, resulting in higher concentrations over time. Bicarbonate formation as a byproduct of bacterial respiration during waste decomposition and the lime rock cap system of the landfill contribute to the increasing levels of bicarbonate in leachate. Chlorobenzene is produced during anaerobic biodegradation of 1,4-dichlorobenzene; hence, the increasing trend of chlorobenzene may be due to the declining trend of 1,4-dichlorobenzene. The time series multiplicative decomposition model in general provides an adequate forecast for future planning purposes for the parameters monitored in leachate. The model projections for 1,4-dichlorobenzene were relatively less accurate in comparison to the projections for vinyl chloride and chlorobenzene. Based on the trends observed, future monitoring needs for the selected leachate parameters were identified.
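
    A minimal sketch of a multiplicative decomposition forecast (centered moving-average trend, normalized seasonal indices, log-linear trend projection) on a synthetic decaying leachate-like series; this illustrates the general technique, not the study's exact model:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    period = 12
    t = np.arange(120)                                # 10 years of monthly data
    season = 1 + 0.2 * np.sin(2 * np.pi * t / period)
    y = (200 * np.exp(-0.01 * t)) * season * rng.normal(1, 0.03, t.size)

    # Trend: moving average over one full period (approximately centered).
    kernel = np.ones(period) / period
    trend = np.convolve(y, kernel, mode="valid")      # length len(y) - period + 1
    offset = period // 2

    # Seasonal indices: mean detrended ratio per calendar month, normalized to mean 1.
    ratio = y[offset : offset + trend.size] / trend
    phase = (np.arange(trend.size) + offset) % period
    idx = np.array([ratio[phase == p].mean() for p in range(period)])
    idx /= idx.mean()

    # Project the trend forward with a log-linear fit, then reapply seasonal indices.
    tt = np.arange(offset, offset + trend.size)
    slope, intercept = np.polyfit(tt, np.log(trend), 1)
    t_future = np.arange(t.size, t.size + 24)
    forecast = np.exp(intercept + slope * t_future) * idx[t_future % period]
    print(forecast[:6].round(1))
    ```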

  10. Capturing Context-Related Change in Emotional Dynamics via Fixed Moderated Time Series Analysis.

    PubMed

    Adolf, Janne K; Voelkle, Manuel C; Brose, Annette; Schmiedek, Florian

    2017-01-01

    Much of recent affect research relies on intensive longitudinal studies to assess daily emotional experiences. The resulting data are analyzed with dynamic models to capture regulatory processes involved in emotional functioning. Daily contexts, however, are commonly ignored. This may not only result in biased parameter estimates and wrong conclusions, but also ignores the opportunity to investigate contextual effects on emotional dynamics. With fixed moderated time series analysis, we present an approach that resolves this problem by estimating context-dependent change in dynamic parameters in single-subject time series models. The approach examines parameter changes of known shape and thus addresses the problem of observed intra-individual heterogeneity (e.g., changes in emotional dynamics due to observed changes in daily stress). In comparison to existing approaches to unobserved heterogeneity, model estimation is facilitated and different forms of change can readily be accommodated. We demonstrate the approach's viability given relatively short time series by means of a simulation study. In addition, we present an empirical application, targeting the joint dynamics of affect and stress and how these co-vary with daily events. We discuss potentials and limitations of the approach and close with an outlook on the broader implications for understanding emotional adaption and development.

  11. Non-linear motions in reprocessed GPS station position time series

    NASA Astrophysics Data System (ADS)

    Rudenko, Sergei; Gendt, Gerd

    2010-05-01

    Global Positioning System (GPS) data of about 400 globally distributed stations covering the time span from 1998 to 2007 were reprocessed using GFZ Potsdam's EPOS (Earth Parameter and Orbit System) software within the International GNSS Service (IGS) Tide Gauge Benchmark Monitoring (TIGA) Pilot Project and the IGS Data Reprocessing Campaign, with the purpose of determining weekly precise coordinates of GPS stations located at or near tide gauges. Vertical motions of these stations are used to correct the vertical motions of tide gauges for local motions and to tie tide gauge measurements to the geocentric reference frame. Other estimated parameters include daily values of the Earth rotation parameters and their rates, as well as satellite antenna offsets. The derived solution GT1 is based on an absolute phase center variation model, ITRF2005 as the a priori reference frame, and other new models. The solution also contributed to ITRF2008. The time series of station positions are analyzed to identify non-linear motions caused by different effects. The paper presents the time series of GPS station coordinates and investigates apparent non-linear motions and their influence on GPS station height rates.

  12. PENDISC: a simple method for constructing a mathematical model from time-series data of metabolite concentrations.

    PubMed

    Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide

    2014-06-01

    The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the Arabidopsis thaliana plant. The result confirms that the constructed mathematical model satisfactorily agrees with the time-series datasets of seven metabolite concentrations.

  13. Using multifractal analysis of ultra-weak photon emission from germinating wheat seedlings to differentiate between two grades of intoxication with potassium dichromate

    NASA Astrophysics Data System (ADS)

    Scholkmann, Felix; Cifra, Michal; Alexandre Moraes, Thiago; de Mello Gallep, Cristiano

    2011-12-01

    The aim of the present study was to test whether the multifractal properties of ultra-weak photon emission (UPE) from germinating wheat seedlings (Triticum aestivum) change when the seedlings are treated with different concentrations of the toxin potassium dichromate (PD). To this end, UPE was measured (50 seedlings in one Petri dish, duration: approx. 16.6-28 h) from samples of three groups: (i) control (group C, N = 9), (ii) treated with 25 ppm of PD (group G25, N = 32), and (iii) treated with 150 ppm of PD (group G150, N = 23). For the multifractal analysis, the following steps were performed: (i) each UPE time series was trimmed to a final length of 1000 min; (ii) each UPE time series was filtered, linearly detrended and normalized; (iii) the multifractal spectrum (f(α)) was calculated for every UPE time series using the backward multifractal detrended moving average (MFDMA) method; (iv) each multifractal spectrum was characterized by calculating the mode (αmode) of the spectrum and the degree of multifractality (Δα); (v) for every UPE time series its mean, skewness and kurtosis were also calculated; finally, (vi) all obtained parameters were analyzed to determine their ability to differentiate between the three groups. This was based on Fisher's discriminant ratio (FDR), which was calculated for each parameter combination. Additionally, a non-parametric test was used to test whether the parameter values were significantly different or not. The analysis showed that when comparing all three groups, FDR had the highest values for the multifractal parameters (αmode, Δα). Furthermore, the differences in these parameters between the groups were statistically significant (p < 0.05). The classical parameters (mean, skewness and kurtosis) had lower FDR values than the multifractal parameters in all cases and showed no significant difference between the groups (except for the skewness between groups C and G150). In conclusion, multifractal analysis enables changes in UPE time series to be detected even when they are hidden from normal linear signal analysis methods. The analysis of changes in the multifractal properties might be a basis for designing a classification system enabling the intoxication of cell cultures to be quantified based on UPE measurements.
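
    Fisher's discriminant ratio for a single parameter and two groups is simply (mean1 - mean2)^2 / (var1 + var2). A sketch with hypothetical parameter samples (not the measured UPE values):

    ```python
    import numpy as np

    def fisher_discriminant_ratio(a, b):
        """FDR = (mean_a - mean_b)^2 / (var_a + var_b) for one feature, two groups."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return (a.mean() - b.mean()) ** 2 / (a.var(ddof=1) + b.var(ddof=1))

    # Hypothetical per-sample parameter values for control vs. treated groups.
    rng = np.random.default_rng(9)
    alpha_mode_C   = rng.normal(1.00, 0.05, 9)    # e.g., spectrum mode, control group
    alpha_mode_150 = rng.normal(1.15, 0.05, 23)   # e.g., 150 ppm group
    print("FDR(alpha_mode):",
          round(fisher_discriminant_ratio(alpha_mode_C, alpha_mode_150), 2))
    ```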

  14. Implicit assimilation for marine ecological models

    NASA Astrophysics Data System (ADS)

    Weir, B.; Miller, R.; Spitz, Y. H.

    2012-12-01

    We use a new data assimilation method to estimate the parameters of a marine ecological model. At a given point in the ocean, the estimated values of the parameters determine the behaviors of the modeled planktonic groups, and thus indicate which species are dominant. To begin, we assimilate in situ observations, e.g., the Bermuda Atlantic Time-series Study, the Hawaii Ocean Time-series, and Ocean Weather Station Papa. From there, we estimate the parameters at surrounding points in space based on satellite observations of ocean color. Given the variation of the estimated parameters, we divide the ocean into regions meant to represent distinct ecosystems. An important feature of the data assimilation approach is that it refines the confidence limits of the optimal Gaussian approximation to the distribution of the parameters. This enables us to determine the ecological divisions with greater accuracy.

  15. Evaluating the temporal stability of synthetically generated time-series for crop types in central Germany

    USDA-ARS?s Scientific Manuscript database

    Synthetically generated Landsat time-series based on the STARFM algorithm are increasingly used for applications in forestry or agriculture. Although successes in classification and derivation of phenological or biomass parameters are evident, a thorough evaluation of the limits of the method is stil...

  16. KALREF—A Kalman filter and time series approach to the International Terrestrial Reference Frame realization

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoping; Abbondanza, Claudio; Altamimi, Zuheir; Chin, T. Mike; Collilieux, Xavier; Gross, Richard S.; Heflin, Michael B.; Jiang, Yan; Parker, Jay W.

    2015-05-01

    The current International Terrestrial Reference Frame is based on a piecewise linear site motion model and realized by reference epoch coordinates and velocities for a global set of stations. Although linear motions due to tectonic plates and glacial isostatic adjustment dominate geodetic signals, at today's millimeter precisions, nonlinear motions due to earthquakes, volcanic activities, ice mass losses, sea level rise, hydrological changes, and other processes become significant. Monitoring these (sometimes rapid) changes desires consistent and precise realization of the terrestrial reference frame (TRF) quasi-instantaneously. Here, we use a Kalman filter and smoother approach to combine time series from four space geodetic techniques to realize an experimental TRF through weekly time series of geocentric coordinates. In addition to secular, periodic, and stochastic components for station coordinates, the Kalman filter state variables also include daily Earth orientation parameters and transformation parameters from input data frames to the combined TRF. Local tie measurements among colocated stations are used at their known or nominal epochs of observation, with comotion constraints applied to almost all colocated stations. The filter/smoother approach unifies different geodetic time series in a single geocentric frame. Fragmented and multitechnique tracking records at colocation sites are bridged together to form longer and coherent motion time series. While the time series approach to TRF reflects the reality of a changing Earth more closely than the linear approximation model, the filter/smoother is computationally powerful and flexible to facilitate incorporation of other data types and more advanced characterization of stochastic behavior of geodetic time series.
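
    In the spirit of the filter described above (though far simpler than the KALREF implementation), a minimal Kalman filter over a weekly coordinate series with a position-velocity state looks as follows; all noise levels and signals are illustrative.

    ```python
    import numpy as np

    # State: [position, velocity]; weekly coordinate observations of one station.
    dt = 1.0                                    # one week
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity transition
    H = np.array([[1.0, 0.0]])                  # only position is observed
    Q = np.diag([1e-4, 1e-6])                   # process noise (allows nonlinear motion)
    R = np.array([[4.0]])                       # observation variance (mm^2), illustrative

    rng = np.random.default_rng(10)
    weeks = np.arange(520)                      # ten years of weekly solutions
    truth = 3.0 * weeks / 52 + 2.0 * np.sin(2 * np.pi * weeks / 52)  # trend + annual, mm
    z = truth + rng.normal(0, 2.0, weeks.size)

    x = np.array([0.0, 0.0])                    # initial state
    P = np.eye(2) * 100.0                       # diffuse initial covariance
    for obs in z:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        innov = obs - (H @ x)[0]                # innovation
        S = (H @ P @ H.T + R)[0, 0]             # innovation variance
        K = (P @ H.T / S).ravel()               # Kalman gain
        x = x + K * innov
        P = (np.eye(2) - np.outer(K, H)) @ P

    print("final velocity estimate [mm/week]:", round(x[1], 4))
    ```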

  17. A new approach for measuring power spectra and reconstructing time series in active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Li, Yan-Rong; Wang, Jian-Min

    2018-05-01

    We provide a new approach to measure power spectra and reconstruct time series in active galactic nuclei (AGNs) based on the fact that the Fourier transform of AGN stochastic variations is a series of complex Gaussian random variables. The approach parametrizes a stochastic series in frequency domain and transforms it back to time domain to fit the observed data. The parameters and their uncertainties are derived in a Bayesian framework, which also allows us to compare the relative merits of different power spectral density models. The well-developed fast Fourier transform algorithm together with parallel computation enables an acceptable time complexity for the approach.

  19. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

    Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages in such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent more complex mixing, combining water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and therefore difficult to constrain fully. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model. We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its currently steep input gradient. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination of the drinking water from the surface.
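
    A minimal sketch of the convolution idea, under our own assumptions (exponential age distributions for both components and a made-up tracer input; real LPMs use tracer-specific input functions and decay): the modeled output concentration is the input history convolved with a binary mixture of age distributions.

      import numpy as np

      # Binary lumped parameter model sketch: the simulated output tracer
      # concentration is the input history convolved with a mixture of two
      # exponential age distributions. Mean ages and the mixing fraction
      # below are invented for illustration.

      def exp_age_pdf(tau, mean_age):
          return np.exp(-tau / mean_age) / mean_age

      def binary_lpm(c_in, dt, mean1, mean2, frac1):
          tau = np.arange(len(c_in)) * dt
          g = frac1 * exp_age_pdf(tau, mean1) + (1 - frac1) * exp_age_pdf(tau, mean2)
          # c_out(t) = sum over tau of g(tau) * c_in(t - tau) * dt
          return np.convolve(c_in, g)[: len(c_in)] * dt

      years = np.arange(1950, 2018, dtype=float)
      c_in = np.exp(-((years - 1965.0) ** 2) / 50.0)   # toy bomb-peak-like input
      print(binary_lpm(c_in, 1.0, 5.0, 80.0, 0.4)[-5:])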

  20. Multivariate Time Series Decomposition into Oscillation Components.

    PubMed

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-08-01

    Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.

  1. Rapid determination of thermodynamic parameters from one-dimensional programmed-temperature gas chromatography for use in retention time prediction in comprehensive multidimensional chromatography.

    PubMed

    McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J

    2014-01-17

    A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCP for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. This new method allows for precise predictions of retention time, with the average error being only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, with an average relative error of 0.37% for (1)tr and 2.1% for (2)tr.
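
    To illustrate the optimization step only, the sketch below fits a three-parameter model to synthetic temperature-programmed retention times with the Nelder-Mead simplex via scipy.optimize.minimize; the model form, parameter values and data are invented stand-ins, not the thermodynamic model of the paper.

      import numpy as np
      from scipy.optimize import minimize

      # Sketch only: fit a three-parameter retention model to synthetic
      # temperature-programmed retention times with the Nelder-Mead simplex.
      # The model form and data are invented; the paper fits thermodynamic
      # parameters (dH, dS, dCp) to real GC retention data.

      def model_rt(params, heating_rates):
          a, b, c = params
          return a + b / heating_rates + c * np.log(heating_rates)

      rng = np.random.default_rng(2)
      rates = np.array([5.0, 10.0, 15.0, 20.0])          # K/min programs
      observed = model_rt((120.0, 300.0, -8.0), rates) + rng.normal(0, 0.2, 4)

      def sse(params):
          return np.sum((model_rt(params, rates) - observed) ** 2)

      fit = minimize(sse, x0=(100.0, 200.0, 0.0), method="Nelder-Mead")
      print(fit.x)                                       # ~ (120, 300, -8)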

  2. About the Modeling of Radio Source Time Series as Linear Splines

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-12-01

    Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.

  3. Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series

    PubMed Central

    Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe

    2017-01-01

    Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important as it affects other parameter estimates, it modulates the degree of unpredictability of an epidemic, and it needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov Chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
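
    For intuition about the dispersion parameter k, here is a small simulation (ours, not the authors' inference framework) drawing offspring counts from a negative binomial with mean R0 and dispersion k; smaller k means more heterogeneous transmission, with most cases causing no secondary infections.

      import numpy as np

      # Offspring distribution with mean R0 and dispersion k: negative
      # binomial, variance = R0 * (1 + R0 / k). Values are illustrative.
      rng = np.random.default_rng(3)
      R0, k, n_cases = 2.0, 0.16, 100000
      p = k / (k + R0)                       # numpy's NB parametrization
      offspring = rng.negative_binomial(k, p, size=n_cases)
      print(offspring.mean())                # ~ R0
      print(np.mean(offspring == 0))         # fraction causing no infections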

  4. Prediction of mortality rates using a model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

    Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution was applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.

  5. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    NASA Astrophysics Data System (ADS)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model, one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and the model output are caused by an inappropriate model, by bad estimates of the parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and, if they are rejected, to suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit, where we obtain an extremely accurate reconstruction of the observed attractor.

  6. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE, and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, demonstrating their potential for upscaling. However, the simulation results also indicate that the estimated parameters possibly mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.

  7. JWST NIRCam Time Series Observations

    NASA Technical Reports Server (NTRS)

    Greene, Tom; Schlawin, E.

    2017-01-01

    We explain how to make time-series observations with the Near-Infrared camera (NIRCam) science instrument of the James Webb Space Telescope. Both photometric and spectroscopic observations are described. We present the basic capabilities and performance of NIRCam and show examples of how to set its observing parameters using the Space Telescope Science Institute's Astronomer's Proposal Tool (APT).

  8. Stochastic optimization for modeling physiological time series: application to the heart rate response to exercise

    NASA Astrophysics Data System (ADS)

    Zakynthinaki, M. S.; Stirling, J. R.

    2007-01-01

    Stochastic optimization is applied to the problem of optimizing the fit of a model to the time series of raw physiological (heart rate) data. The physiological response to exercise has been recently modeled as a dynamical system. Fitting the model to a set of raw physiological time series data is, however, not a trivial task. For this reason and in order to calculate the optimal values of the parameters of the model, the present study implements the powerful stochastic optimization method ALOPEX IV, an algorithm that has been proven to be fast, effective and easy to implement. The optimal parameters of the model, calculated by the optimization method for the particular athlete, are very important as they characterize the athlete's current condition. The present study applies the ALOPEX IV stochastic optimization to the modeling of a set of heart rate time series data corresponding to different exercises of constant intensity. An analysis of the optimization algorithm, together with an analytic proof of its convergence (in the absence of noise), is also presented.

  9. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.

  10. Polymeric Materials Models in the Warrior Injury Assessment Manikin (WIAMan) Anthropomorphic Test Device (ATD) Tech Demonstrator

    DTIC Science & Technology

    2017-01-01

    are the shear relaxation moduli and relaxation times, which make up the classical Prony series. A Prony-series expansion is a relaxation function...approximation for modeling time-dependent damping. The scalar parameters 1 and 2 control the nonlinearity of the Prony series. Under the...Velodyne that best fit the experimental stress-strain data. To do so, the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA
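
    For reference, the classical Prony series relaxation function has the form G(t) = G_inf + sum_i G_i exp(-t/tau_i). The short sketch below evaluates such a series; the moduli and relaxation times are invented examples, not the WIAMan material parameters.

      import numpy as np

      # Classical Prony series: G(t) = G_inf + sum_i G_i * exp(-t / tau_i).
      # Moduli G_i and relaxation times tau_i below are invented examples.

      def prony(t, g_inf, g, tau):
          t = np.asarray(t)[:, None]                  # broadcast over terms
          return g_inf + np.sum(g * np.exp(-t / tau), axis=1)

      g_inf = 0.5                                     # long-term modulus (MPa)
      g = np.array([2.0, 1.0, 0.5])                   # Prony moduli (MPa)
      tau = np.array([0.01, 0.1, 1.0])                # relaxation times (s)
      t = np.logspace(-3, 1, 9)
      print(prony(t, g_inf, g, tau))                  # stress-relaxation curve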

  11. Power law cross-correlations between price change and volume change of Indian stocks

    NASA Astrophysics Data System (ADS)

    Hasan, Rashid; Mohammed Salim, M.

    2017-05-01

    We study multifractal long-range correlations and cross-correlations of the daily price change and volume change of 50 stocks that comprise the Nifty index of the National Stock Exchange, Mumbai, using the MF-DFA and MF-DCCA methods. We find that the time series of price change are uncorrelated, whereas anti-persistent long-range multifractal correlations are found in the volume change series. We also find anti-persistent long-range multifractal cross-correlations between the time series of price change and volume change. As multifractality is a signature of complexity, we estimate complexity parameters of the time series of price change, volume change, and cross-correlated price-volume change by fitting fourth-degree polynomials to their multifractal spectra. Our results indicate that the time series of price change display high complexity, whereas the time series of volume change and cross-correlated price-volume change display low complexity.

  12. Nonlinear Prediction As A Tool For Determining Parameters For Phase Space Reconstruction In Meteorology

    NASA Astrophysics Data System (ADS)

    Miksovsky, J.; Raidl, A.

    Time-delay phase space reconstruction represents one of the useful tools of nonlinear time series analysis, enabling a number of applications. Its utilization requires the value of the time delay to be known, as well as the value of the embedding dimension. There are several methods to estimate both of these parameters. Typically, the time delay is computed first, followed by the embedding dimension. Our presented approach is slightly different: we reconstructed the phase space for various combinations of the mentioned parameters and used it for prediction by means of the nearest neighbours in the phase space. Then some measure of the prediction's success was computed (e.g., correlation or RMSE). The position of its global maximum (minimum) should indicate a suitable combination of time delay and embedding dimension. Several meteorological (particularly climatological) time series were used for the computations. We have also created an MS-Windows based program in order to implement this approach; its basic features will be presented as well.
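
    A minimal sketch of the described scan (our illustration, with a logistic-map test signal instead of meteorological data): embed the series for each (embedding dimension, time delay) pair, predict one step ahead with the nearest neighbour, and pick the pair with the lowest RMSE.

      import numpy as np

      # Scan (delay, dimension) pairs: embed the series, predict one step
      # ahead with the nearest neighbour, score by RMSE. Illustrative only.

      def embed(x, dim, delay):
          n = len(x) - (dim - 1) * delay
          return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

      def nn_rmse(x, dim, delay, n_test=200):
          emb = embed(x, dim, delay)[:-1]            # drop last (no target)
          target = x[(dim - 1) * delay + 1:]         # next value of each state
          errs = []
          for i in range(len(emb) - n_test, len(emb)):
              # exclude the most recent points so a state cannot match itself
              d = np.linalg.norm(emb[:i - 10] - emb[i], axis=1)
              errs.append(target[np.argmin(d)] - target[i])
          return np.sqrt(np.mean(np.square(errs)))

      x = np.zeros(3000); x[0] = 0.3
      for i in range(2999):                          # logistic map test signal
          x[i + 1] = 3.9 * x[i] * (1 - x[i])
      scores = {(dim, delay): nn_rmse(x, dim, delay)
                for dim in (1, 2, 3) for delay in (1, 2, 3)}
      print(min(scores, key=scores.get))             # best (dim, delay) pair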

  13. Time series behaviour of the number of Air Asia passengers: A distributional approach

    NASA Astrophysics Data System (ADS)

    Asrah, Norhaidah Mohd; Djauhari, Maman Abdurachman

    2013-09-01

    The common practice in time series analysis is to fit a model and then conduct further analysis on the residuals. However, if we know the distributional behaviour of a time series, the analyses of model identification, parameter estimation, and model checking are more straightforward. In this paper, we show that the number of Air Asia passengers can be represented as a geometric Brownian motion process. Therefore, instead of using the standard approach to model fitting, we use an appropriate transformation to come up with a stationary, normally distributed and even independent time series. An example of forecasting the number of Air Asia passengers is given to illustrate the advantages of the method.

  14. Estimating soil hydraulic properties from soil moisture time series by inversion of a dual-permeability model

    NASA Astrophysics Data System (ADS)

    Dalla Valle, Nicolas; Wutzler, Thomas; Meyer, Stefanie; Potthast, Karin; Michalzik, Beate

    2017-04-01

    Dual-permeability type models are widely used to simulate water fluxes and solute transport in structured soils. These models contain two spatially overlapping flow domains with different parameterizations or even entirely different conceptual descriptions of flow processes. They are usually able to capture preferential flow phenomena, but a large set of parameters is needed, which are very laborious to obtain or cannot be measured at all. Therefore, model inversions are often used to derive the necessary parameters. Although these require sufficient input data themselves, they can use measurements of state variables instead, which are often easier to obtain and can be monitored by automated measurement systems. In this work we show a method to estimate soil hydraulic parameters from high frequency soil moisture time series data gathered at two different measurement depths by inversion of a simple one dimensional dual-permeability model. The model uses an advection equation based on the kinematic wave theory to describe the flow in the fracture domain and a Richards equation for the flow in the matrix domain. The soil moisture time series data were measured in mesocosms during sprinkling experiments. The inversion consists of three consecutive steps: First, the parameters of the water retention function were assessed using vertical soil moisture profiles in hydraulic equilibrium. This was done using two different exponential retention functions and the Campbell function. Second, the soil sorptivity and diffusivity functions were estimated from Boltzmann-transformed soil moisture data, which allowed the calculation of the hydraulic conductivity function. Third, the parameters governing flow in the fracture domain were determined using the whole soil moisture time series. The resulting retention functions were within the range of values predicted by pedotransfer functions apart from very dry conditions, where all retention functions predicted lower matrix potentials. The diffusivity function predicted values of a similar range as shown in other studies. Overall, the model was able to emulate soil moisture time series for low measurement depths, but deviated increasingly at larger depths. This indicates that some of the model parameters are not constant throughout the profile. However, overall seepage fluxes were still predicted correctly. In the near future we will apply the inversion method to lower frequency soil moisture data from different sites to evaluate the model's ability to predict preferential flow seepage fluxes at the field scale.

  15. Production and Uses of Multi-Decade Geodetic Earth Science Data Records

    NASA Astrophysics Data System (ADS)

    Bock, Y.; Kedar, S.; Moore, A. W.; Fang, P.; Liu, Z.; Sullivan, A.; Argus, D. F.; Jiang, S.; Marshall, S. T.

    2017-12-01

    The Solid Earth Science ESDR System (SESES) project funded under the NASA MEaSUREs program produces and disseminates mature, long-term, calibrated and validated, GNSS-based Earth Science Data Records (ESDRs) that encompass multiple diverse areas of interest in Earth Science, such as tectonic motion, transient slip and earthquake dynamics, as well as meteorology, climate, and hydrology. The ESDRs now span twenty-five years for the earliest stations and today are available for thousands of global and regional stations. Using a unified metadata database and a combination of GNSS solutions generated by two independent analysis centers, the project currently produces four long-term ESDRs. Geodetic Displacement Time Series: daily, combined, cleaned and filtered GIPSY and GAMIT long-term time series of continuous GPS station positions (global and regional) in the latest version of the ITRF, automatically updated weekly. Geodetic Velocities: weekly updated velocity field and velocity field histories in various reference frames; a compendium of all model parameters including an earthquake catalog, coseismic offsets, and postseismic model parameters (exponential or logarithmic). Troposphere Delay Time Series: long-term time series of troposphere delay (30-min resolution) at geodetic stations, necessarily estimated during position time series production and automatically updated weekly. Seismogeodetic records for historic earthquakes: high-rate broadband displacement and seismic velocity time series combining 1 Hz GPS displacements and 100 Hz accelerometer data for select large earthquakes and collocated cGPS and seismic instruments from regional networks. We present several recent notable examples of ESDR usage: a transient slip study that uses the combined position time series to unravel "tremor-less" slow tectonic transient events; fault geometry determination from geodetic slip rates; changes in water resources across California's physiographic provinces at a spatial resolution of 75 km; and a retrospective study of a southern California summer monsoon event.

  16. Parameter estimation methods for gene circuit modeling from time-series mRNA data: a comparative study.

    PubMed

    Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin

    2015-11-01

    Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to a level on par with the best solution obtained from the population-based methods, while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the parameter search space vastly large.

  17. Kalman filters for assimilating near-surface observations in the Richards equation - Part 2: A dual filter approach for simultaneous retrieval of states and parameters

    NASA Astrophysics Data System (ADS)

    Medina, H.; Romano, N.; Chirico, G. B.

    2012-12-01

    We present a dual Kalman filter (KF) approach for retrieving states and parameters controlling soil water dynamics in a homogeneous soil column by using near-surface state observations. The dual Kalman filter couples a standard KF algorithm for retrieving the states and an unscented KF algorithm for retrieving the parameters. We examine the performance of the dual Kalman filter applied to two alternative state-space formulations of the Richards equation, differentiated by the type of variable employed for representing the states: either the soil water content (θ) or the soil matric pressure head (h). We use a synthetic time series of true states and noise-corrupted observations, and a synthetic time series of meteorological forcing. The performance analyses account for the effect of the input parameters, the observation depth and the assimilation frequency, as well as the relationship between the retrieved states and the assimilated variables. We show that the identifiability of the parameters is strongly conditioned by several factors, such as the initial guess of the unknown parameters, the wet or dry range of the retrieved states, the boundary conditions, and the form (h-based or θ-based) of the state-space formulation. State retrieval is instead efficient even with a relatively coarse time resolution of the assimilated observations. The accuracy of the retrieved states exhibits limited sensitivity to the observation depth and the assimilation frequency.

  18. A low free-parameter stochastic model of daily Forbush decrease indices

    NASA Astrophysics Data System (ADS)

    Patra, Sankar Narayan; Bhattacharya, Gautam; Panja, Subhash Chandra; Ghosh, Koushik

    2014-01-01

    Forbush decrease is a rapid decrease in the observed galactic cosmic ray intensity occurring after a coronal mass ejection. In the present paper we have analyzed the daily Forbush decrease indices from January 1967 to December 2003 generated at IZMIRAN, Russia. First the entire series of indices was smoothed, and then we attempted to fit a suitable stochastic model to the time series by means of the necessary number of process parameters. The study reveals that the present time series is governed by a stationary autoregressive process of order 2 with a trace of white noise. Under the present model we have shown that chaos is not expected in this time series, which opens up the possibility of validating its forecasting (both short-term and long-term) as well as its multi-periodic behavior.
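
    As a sketch of the model class (not the authors' fitting procedure), the following fits a stationary AR(2)-plus-white-noise description to a synthetic series by least squares on lagged values; the synthetic series stands in for the smoothed indices.

      import numpy as np

      # Fit an AR(2) model by ordinary least squares on lags 1 and 2.
      # The simulated series below has known coefficients for checking.
      rng = np.random.default_rng(5)
      n = 5000
      x = np.zeros(n)
      for t in range(2, n):                       # simulate a known AR(2)
          x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(0, 1.0)

      X = np.column_stack([x[1:-1], x[:-2]])      # lag-1 and lag-2 regressors
      y = x[2:]
      phi, *_ = np.linalg.lstsq(X, y, rcond=None)
      resid = y - X @ phi
      print(phi)                                  # ~ (0.6, -0.3)
      print(resid.std())                          # ~ 1.0, the white-noise level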

  19. Cosinor-based rhythmometry

    PubMed Central

    2014-01-01

    A brief overview is provided of cosinor-based techniques for the analysis of time series in chronobiology. Conceived as a regression problem, the method is applicable to non-equidistant data, a major advantage. Another dividend is the feasibility of deriving confidence intervals for parameters of rhythmic components of known periods, readily drawn from the least squares procedure, stressing the importance of prior (external) information. Originally developed for the analysis of short and sparse data series, the extended cosinor has been further developed for the analysis of long time series, focusing both on rhythm detection and parameter estimation. Attention is given to the assumptions underlying the use of the cosinor and ways to determine whether they are satisfied. In particular, ways of dealing with non-stationary data are presented. Examples illustrate the use of the different cosinor-based methods, extending their application from the study of circadian rhythms to the mapping of broad time structures (chronomes). PMID:24725531
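
    A single-component cosinor with known period reduces to ordinary least squares on cosine and sine terms. The sketch below (our illustration, with an invented, non-equidistantly sampled circadian series) recovers the MESOR, amplitude and acrophase.

      import numpy as np

      # Single-component cosinor: y = M + A*cos(2*pi*t/tau + phi), rewritten
      # as y = M + b*cos(w t) + c*sin(w t) and solved by least squares.
      # Data are invented, with non-equidistant sampling (a stated strength).
      rng = np.random.default_rng(6)
      tau = 24.0                                  # known period (hours)
      t = np.sort(rng.uniform(0, 96, 60))         # irregular sampling times
      y = 10 + 3 * np.cos(2 * np.pi * t / tau + 1.0) + rng.normal(0, 0.5, t.size)

      w = 2 * np.pi / tau
      X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
      m, b, c = np.linalg.lstsq(X, y, rcond=None)[0]
      amplitude = np.hypot(b, c)
      acrophase = np.arctan2(-c, b)               # phase of the cosine term
      print(m, amplitude, acrophase)              # ~ (10, 3, 1.0)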

  20. Parallel optimization of signal detection in active magnetospheric signal injection experiments

    NASA Astrophysics Data System (ADS)

    Gowanlock, Michael; Li, Justin D.; Rude, Cody M.; Pankratius, Victor

    2018-05-01

    Signal detection and extraction requires substantial manual parameter tuning at different stages in the processing pipeline. Time-series data depends on domain-specific signal properties, necessitating unique parameter selection for a given problem. The large potential search space makes this parameter selection process time-consuming and subject to variability. We introduce a technique to search and prune such parameter search spaces in parallel and select parameters for time series filters using breadth- and depth-first search strategies to increase the likelihood of detecting signals of interest in the field of magnetospheric physics. We focus on studying geomagnetic activity in the extremely and very low frequency ranges (ELF/VLF) using ELF/VLF transmissions from Siple Station, Antarctica, received at Québec, Canada. Our technique successfully detects amplified transmissions and achieves substantial speedup performance gains as compared to an exhaustive parameter search. We present examples where our algorithmic approach reduces the search from hundreds of seconds down to less than 1 s, with a ranked signal detection in the top 99th percentile, thus making it valuable for real-time monitoring. We also present empirical performance models quantifying the trade-off between the quality of signal recovered and the algorithm response time required for signal extraction. In the future, improved signal extraction in scenarios like the Siple experiment will enable better real-time diagnostics of conditions of the Earth's magnetosphere for monitoring space weather activity.

  1. Weighted statistical parameters for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.

  2. Dealing with Non-stationarity in Intensity-Frequency-Duration Curve

    NASA Astrophysics Data System (ADS)

    Rengaraju, S.; Rajendran, V.; C T, D.

    2017-12-01

    Extremes like floods and droughts are becoming more frequent and more severe in recent times, generally attributed to climate change. One of the main concerns is whether the present infrastructures like dams, storm water drainage networks, etc., which were designed following the so-called `stationary' assumption, are capable of withstanding the expected severe extremes. The stationary assumption considers that extremes are not changing with respect to time. However, recent studies have proved that climate change has altered climate extremes both temporally and spatially. Traditionally, the observed non-stationarity in extreme precipitation is incorporated in the extreme value distributions in terms of changing parameters. Nevertheless, this raises the question of which parameter needs to change, i.e. location or scale or shape, since one or more of these parameters may vary at a given location. Hence, this study aims to detect the changing parameters to reduce the complexity involved in the development of non-stationary IDF curves and to provide the uncertainty bound of the estimated return level using a Bayesian Differential Evolutionary Monte Carlo (DE-MC) algorithm. Firstly, the extreme precipitation series is extracted using Peaks Over Threshold. Then, the time-varying parameter(s) is(are) detected for the extracted series using Generalized Additive Models for Location Scale and Shape (GAMLSS). Then, the IDF curve is constructed using the Generalized Pareto Distribution, incorporating non-stationarity only if the parameter(s) is(are) changing with respect to time; otherwise the IDF curve follows the stationary assumption. Finally, the posterior probability intervals of the estimated return level are computed through the Bayesian DE-MC approach and the non-stationarity-based IDF curve is compared with the stationarity-based IDF curve. The results of this study emphasize that the time-varying parameters also change spatially and that the IDF curves should incorporate non-stationarity only if there is a change in the parameters, even though there may be significant change in the extreme rainfall series. Our results evoke the importance of updating infrastructure design strategies for the changing climate by adopting non-stationarity-based IDF curves.
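
    As a sketch of the stationary building block only (our illustration, with an invented rainfall series, threshold and return period), the fragment below extracts peaks over a threshold and fits a Generalized Pareto Distribution to the excesses with scipy; the GAMLSS and Bayesian DE-MC steps of the study are not reproduced.

      import numpy as np
      from scipy import stats

      # Peaks-over-threshold with a Generalized Pareto tail fit; all
      # numbers below are invented for illustration.
      rng = np.random.default_rng(7)
      rain = rng.gamma(shape=0.4, scale=12.0, size=20000)  # toy daily rainfall
      threshold = np.quantile(rain, 0.98)
      excess = rain[rain > threshold] - threshold

      shape, loc, scale = stats.genpareto.fit(excess, floc=0.0)
      p_exceed = excess.size / rain.size             # exceedance probability
      m = 10000                                      # return period (observations)
      ret_level = threshold + stats.genpareto.ppf(1.0 - 1.0 / (m * p_exceed),
                                                  shape, loc=0.0, scale=scale)
      print(shape, scale, ret_level)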

  3. Improvements in Obstacle Clearance Parameters and Reaction Time Over a Series of Obstacles Revealed After Five Repeated Testing Sessions in Older Adults.

    PubMed

    Jehu, Deborah A; Lajoie, Yves; Paquet, Nicole

    2017-12-21

    The purpose of this study was to investigate obstacle clearance and reaction time parameters when crossing a series of six obstacles in older adults. A second aim was to examine the repeated exposure of this testing protocol once per week for 5 weeks. In total, 10 older adults (five females; age: 67.0 ± 6.9 years) walked onto and over six obstacles of varying heights (range: 100-200 mm) while completing no reaction time, simple reaction time, and choice reaction time tasks once per week for 5 weeks. The highest obstacles elicited the lowest toe clearance, and the first three obstacles revealed smaller heel clearance compared with the last three obstacles. Dual tasking negatively impacted obstacle clearance parameters when information processing demands were high. Longer and less consistent time to completion was observed in Session 1 compared with Sessions 2-5. Finally, improvements in simple reaction time were displayed after Session 2, but choice reaction time gradually improved and did not reach a plateau after repeated testing.

  4. Analysis and generation of groundwater concentration time series

    NASA Astrophysics Data System (ADS)

    Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae

    2018-01-01

    Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude modulated autoregressive noise of order one with time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction by exchange with the mean mixing model is a special case consisting of a linear regression with constant coefficients.

  5. Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time

    NASA Astrophysics Data System (ADS)

    Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.

    2017-12-01

    We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize statistically such time-series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time for different assumptions in knowledge of parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.

  6. Optical signal processing using photonic reservoir computing

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Dehyadegari, Louiza

    2014-10-01

    As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency and high speed. In this paper, a photonic structure is proposed for reservoir computing and investigated using a simple, yet non-partial, noisy time series prediction task. This study includes the application of a suitable topology with self-feedback in a network of SOAs - which lends the system a strong memory - and leads to adjusting adequate parameters resulting in perfect recognition accuracy (100%) for noise-free time series, a 3% improvement over previous results. For the classification of noisy time series, the rate of accuracy showed a 4% increase and amounted to 96%. Furthermore, an analytical approach is suggested for solving the rate equations, which leads to a substantial decrease in the simulation time - an important factor in the classification of large signals such as those in speech recognition - and yields better results than previous works.

  7. Crustal Movements and Gravity Variations in the Southeastern Po Plain, Italy

    NASA Astrophysics Data System (ADS)

    Zerbini, S.; Bruni, S.; Errico, M.; Santi, E.; Wilmes, H.; Wziontek, H.

    2014-12-01

    At the Medicina observatory, in the southeastern Po Plain in Italy, we started a project of continuous GPS and gravity observations in mid-1996. The experiment, focused on a comparison between height and gravity variations, is still ongoing; these uninterrupted time series constitute a most valuable database for observing and reliably estimating long-period behaviors, and also for deriving deeper insights into the nature of the crustal deformation. Almost two decades of continuous GPS observations from two closely located receivers have shown that the coordinate time series are characterized by linear and non-linear variations as well as by sudden jumps. Over both long- and short-period time scales, the GPS height series show signals induced by different phenomena, for example those related to mass transport in the Earth system. Seasonal effects are clearly recognizable and are mainly associated with the seasonal behavior of the water table. Understanding and separating the contributions of different forcings is not an easy task; to this end, the information provided by the superconducting gravimeter observations, and also by absolute gravity measurements, offers a most important means of detecting and understanding mass contributions. In addition to the GPS and gravity data, a number of environmental parameter time series are also regularly acquired at Medicina, among them water table levels. We present the results of a study investigating correlations between the height, gravity and environmental parameter time series.

  8. Multichannel biomedical time series clustering via hierarchical probabilistic latent semantic analysis.

    PubMed

    Wang, Jin; Sun, Xiangping; Nahavandi, Saeid; Kouzani, Abbas; Wu, Yuchuan; She, Mary

    2014-11-01

    Biomedical time series clustering that automatically groups a collection of time series according to their internal similarity is of importance for medical record management and inspection such as bio-signals archiving and retrieval. In this paper, a novel framework that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity is proposed. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, i.e., the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), which was originally developed for visual motion analysis to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSA are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel Electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to the variations of parameters including length of local segments and dictionary size. Although the experimental evaluation used the multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for multichannel biomedical time series clustering according to their structural similarity, which has many applications in biomedical time series management.

  9. Complex network approach to fractional time series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manshour, Pouya

    In order to extract correlation information inherited in stochastic time series, the visibility graph algorithm has been recently proposed, by which a time series can be mapped onto a complex network. We demonstrate that the visibility algorithm is not an appropriate one to study the correlation aspects of a time series. We then employ the horizontal visibility algorithm, as a much simpler one, to map fractional processes onto complex networks. The degree distributions are shown to have parabolic exponential forms with Hurst-dependent fitting parameter. Further, we take into account other topological properties such as the maximum eigenvalue of the adjacency matrix and the degree assortativity, and show that such topological quantities can also be used to predict the Hurst exponent, with an exception for anti-persistent fractional Gaussian noises. To solve this problem, we take into account the Spearman correlation coefficient between nodes' degrees and their corresponding data values in the original time series.
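
    The horizontal visibility algorithm itself is simple: two time points are linked whenever every sample strictly between them lies below both of their values. Below is a minimal O(n^2) sketch (ours, on a white-noise test series) that computes the degree sequence; for uncorrelated noise the HVG degree distribution is known to be exponential, P(k) = (1/3)(2/3)**(k-2), with mean degree 4.

      import numpy as np

      # Horizontal visibility graph: nodes are time points; i and j are
      # linked if every sample strictly between them lies below both x[i]
      # and x[j]. Quadratic scan for clarity; test series is illustrative.

      def hvg_degrees(x):
          n = len(x)
          deg = np.zeros(n, dtype=int)
          for i in range(n - 1):
              top = -np.inf                      # max of intermediate values
              for j in range(i + 1, n):
                  if top < min(x[i], x[j]):      # all intermediates are lower
                      deg[i] += 1
                      deg[j] += 1
                  top = max(top, x[j])
                  if top >= x[i]:                # nothing further can see i
                      break
          return deg

      rng = np.random.default_rng(8)
      deg = hvg_degrees(rng.normal(size=2000))   # white-noise test series
      print(deg.mean())                          # ~ 4 for uncorrelated noise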

  10. Clustering of financial time series

    NASA Astrophysics Data System (ADS)

    D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo

    2013-05-01

    This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models both based on GARCH models. In general, clustering of financial time series, due to their peculiar features, requires the definition of suitable distance measures. To this aim, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning around medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning around medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances, which takes into account the information about the volatility structure of the time series. In order to illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp versions.

  11. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  12. Computing the multifractal spectrum from time series: an algorithmic approach.

    PubMed

    Harikrishnan, K P; Misra, R; Ambika, G; Amritkar, R E

    2009-12-01

    We show that the existing methods for computing the f(alpha) spectrum from a time series can be improved by using a new algorithmic scheme. The scheme relies on the basic idea that the smooth convex profile of a typical f(alpha) spectrum can be fitted with an analytic function involving a set of four independent parameters. While the standard existing schemes [P. Grassberger et al., J. Stat. Phys. 51, 135 (1988); A. Chhabra and R. V. Jensen, Phys. Rev. Lett. 62, 1327 (1989)] generally compute only an incomplete f(alpha) spectrum (usually the top portion), we show that this can be overcome by an algorithmic approach, which is automated to compute the D(q) and f(alpha) spectra from a time series for any embedding dimension. The scheme is first tested with the logistic attractor with known f(alpha) curve and subsequently applied to higher-dimensional cases. We also show that the scheme can be effectively adapted for analyzing practical time series involving noise, with examples from two widely different real world systems. Moreover, some preliminary results indicating that the set of four independent parameters may be used as diagnostic measures are also included.

  13. Comparative Fatigue Lives of Rubber and PVC Wiper Cylindrical Coatings

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.; Savage, Michael

    2002-01-01

    Three coating materials for rotating cylindrical-coated wiping rollers were fatigue tested in two Intaglio printing presses. The coatings were a hard, cross-linked, plasticized PVC thermoset (P-series); a plasticized PVC (A-series); and a hard nitryl rubber (R-series). Both 2- and 3-parameter Weibull analyses as well as a cost-benefit analysis were performed. The mean life of the R-series coating is 24 and 9 times longer than that of the P- and A-series coatings, respectively. Both the cost and the replacement rate for the R-series coating were significantly less than those for the P- and A-series coatings. At a very high probability of survival, the R-series coating lasts approximately 2 and 6 times as long as the P- and A-series coatings, respectively, before the first failure occurs. Where all coatings are run to failure, using the mean (life) time between removal (MTBR) for each coating to calculate the number of replacements and costs provides qualitatively similar results to those from a Weibull analysis.

  14. Simple Example of Backtest Overfitting (SEBO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
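
    The experiment is easy to reproduce in miniature. The sketch below (ours, not the SEBO tool) tunes a moving-average crossover strategy exhaustively on one random walk and then evaluates the "optimal" parameters on a second, independent random walk; the strategy form and parameter ranges are invented.

      import numpy as np

      # Toy backtest-overfitting demonstration: tune a moving-average
      # crossover on one random walk, test the winner on a fresh one.
      rng = np.random.default_rng(9)

      def pnl(prices, fast, slow):
          ma_f = np.convolve(prices, np.ones(fast) / fast, "valid")
          ma_s = np.convolve(prices, np.ones(slow) / slow, "valid")
          ma_f = ma_f[len(ma_f) - len(ma_s):]        # align the two averages
          pos = np.sign(ma_f - ma_s)                 # long/short signal
          return np.sum(pos[:-1] * np.diff(prices[slow - 1:]))

      def best_params(prices):
          grid = [(f, s) for f in range(2, 20) for s in range(f + 1, 60)]
          return max(grid, key=lambda p: pnl(prices, *p))

      train = np.cumsum(rng.normal(size=2000))       # random walk 1
      test = np.cumsum(rng.normal(size=2000))        # random walk 2
      f, s = best_params(train)
      print("in-sample PnL:", pnl(train, f, s))      # looks great (overfit)
      print("out-of-sample PnL:", pnl(test, f, s))   # typically near zero or worse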

  15. Effects of ageing on the electrical characteristics of Zn/ZnS/n-GaAs/In structure

    NASA Astrophysics Data System (ADS)

    Güzeldir, B.; Sağlam, M.

    2016-04-01

    A Zn/ZnS/n-GaAs/In structure has been fabricated by the Successive Ionic Layer Adsorption and Reaction (SILAR) method and the influence of time, or ageing, on its characteristic parameters is examined. The current-voltage (I-V) characteristics of the structure were measured immediately after fabrication and then 1, 3, 5, 15, 30, 45, 60, 75, 90, 105, 120, 135, 150 and 165 days later. Characteristic parameters of the structure such as the barrier height, ideality factor and series resistance are calculated from the I-V measurements. It is seen that the barrier height, ideality factor and series resistance of the Zn/ZnS/n-GaAs/In structure change slightly with increasing ageing time.

  16. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology comprises dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage error for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.

  17. A seasonal Bartlett-Lewis Rectangular Pulse model

    NASA Astrophysics Data System (ADS)

    Ritschel, Christoph; Agbéko Kpogo-Nuwoklo, Komlan; Rust, Henning; Ulbrich, Uwe; Névir, Peter

    2016-04-01

    Precipitation time series with a high temporal resolution are needed as input for several hydrological applications, e.g. river runoff or sewer system models. As adequate observational data sets are often not available, simulated precipitation series come to use. Poisson-cluster models are commonly applied to generate these series, and it has been shown that this class of stochastic precipitation models reproduces important characteristics of observed rainfall well. For the gauge-based case study presented here, the Bartlett-Lewis rectangular pulse model (BLRPM) has been chosen. As certain model parameters vary with season in a midlatitude moderate climate, owing to the different rainfall mechanisms dominating in winter and summer, model parameters are typically estimated separately for individual seasons or individual months. Here, we suggest a simultaneous parameter estimation for the whole year under the assumption that the seasonal variation of parameters can be described with harmonic functions. We use an observational precipitation series from Berlin with a high temporal resolution to exemplify the approach. We estimate BLRPM parameters with and without this seasonal extension and compare the results in terms of model performance and robustness of the estimation.

  18. Visualization of synchronization of the uterine contraction signals: running cross-correlation and wavelet running cross-correlation methods.

    PubMed

    Oczeretko, Edward; Swiatecka, Jolanta; Kitlas, Agnieszka; Laudanski, Tadeusz; Pierzynski, Piotr

    2006-01-01

    In physiological research, we often study multivariate data sets containing two or more simultaneously recorded time series. The aim of this paper is to present the cross-correlation and the wavelet cross-correlation methods to assess synchronization between contractions in different topographic regions of the uterus. From a medical point of view, it is important to identify time delays between contractions, which may be of potential diagnostic significance in various pathologies. The cross-correlation was computed in a moving window with a width corresponding to approximately two or three contractions. As a result, the running cross-correlation function was obtained. The propagation% parameter assessed from this function allows quantitative description of synchronization in bivariate time series. In general, the uterine contraction signals are very complicated. Wavelet transforms provide insight into the structure of the time series at various frequencies (scales). To show the changes of the propagation% parameter along scales, a wavelet running cross-correlation was used: first, the continuous wavelet transforms of the uterine contraction signals were obtained, and afterwards a running cross-correlation analysis was conducted for each pair of transformed time series. The findings show that running functions are very useful in the analysis of uterine contractions.
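
    A minimal sketch of a running cross-correlation with delay extraction, assuming evenly sampled signals: the window slides along the pair of series and, in each window, the lag maximizing the normalized cross-correlation is taken as the local delay (the paper's propagation% parameter is derived from such a function; it is not computed here). Function and variable names are illustrative.

        import numpy as np

        def running_delay(x, y, win, max_lag, step=1):
            # For each window position, return the lag (in samples) that maximizes
            # the normalized cross-correlation between the two windowed segments.
            # Positive lag means y is a delayed copy of x.
            delays = []
            for start in range(0, len(x) - win + 1, step):
                xs = x[start:start + win] - x[start:start + win].mean()
                ys = y[start:start + win] - y[start:start + win].mean()
                best_lag, best_r = 0, -np.inf
                for lag in range(-max_lag, max_lag + 1):
                    if lag >= 0:
                        a, b = xs[:win - lag], ys[lag:]
                    else:
                        a, b = xs[-lag:], ys[:win + lag]
                    denom = a.std() * b.std() * len(a)
                    r = np.dot(a, b) / denom if denom > 0 else 0.0
                    if r > best_r:
                        best_r, best_lag = r, lag
                delays.append(best_lag)
            return np.array(delays)

        # toy example: y lags x by 5 samples
        t = np.arange(2000)
        x = np.sin(2 * np.pi * t / 100) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
        y = np.roll(x, 5)
        print(running_delay(x, y, win=300, max_lag=20, step=200))   # ~5 everywhere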

  19. A new data-driven model for post-transplant antibody dynamics in high risk kidney transplantation.

    PubMed

    Zhang, Yan; Briggs, David; Lowe, David; Mitchell, Daniel; Daga, Sunil; Krishnan, Nithya; Higgins, Robert; Khovanova, Natasha

    2017-02-01

    The dynamics of donor-specific human leukocyte antigen antibodies during the early stage after kidney transplantation are of great clinical interest, as these antibodies are considered to be associated with short- and long-term clinical outcomes. The limited number of antibody time series and their diverse patterns have made the task of modelling difficult. Focusing on one typical post-transplant dynamic pattern with rapid falls and stable settling levels, a novel data-driven model has been developed for the first time. A variational Bayesian inference method has been applied to select the best model and learn its parameters for 39 time series from two groups of graft recipients, i.e. patients with and without acute antibody-mediated rejection (AMR) episodes. Linear and nonlinear dynamic models of different order were attempted to fit the time series, and the third-order linear model provided the best description of the common features in both groups. Both deterministic and stochastic parameters are found to be significantly different in the AMR and no-AMR groups, showing that the time series in the AMR group have a significantly higher frequency of oscillations and faster dissipation rates. This research may potentially lead to better understanding of the immunological mechanisms involved in kidney transplantation. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    NASA Astrophysics Data System (ADS)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
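
    A minimal peaks-over-threshold sketch with scipy, assuming synthetic gamma-distributed "precipitation": it fits a GPD to threshold excesses by maximum likelihood (the study itself estimates parameters in a Bayesian framework and adds the scaling relationship) and computes one illustrative return level.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        precip = rng.gamma(shape=0.6, scale=8.0, size=20000)   # synthetic non-zero precipitation

        threshold = np.quantile(precip, 0.95)        # threshold from a high quantile
        excesses = precip[precip > threshold] - threshold

        # fit the GPD to the excesses (location fixed at 0, as in peaks over threshold)
        c, loc, scale = stats.genpareto.fit(excesses, floc=0)

        # exceedance rate (events per observation) and an example return level
        rate = excesses.size / precip.size
        p = 0.999                                     # quantile of the excess distribution
        return_level = threshold + stats.genpareto.ppf(p, c, loc=0, scale=scale)
        print(f"shape={c:.3f}, scale={scale:.3f}, rate={rate:.4f}, level={return_level:.1f}")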

  1. Efficient Bayesian inference for natural time series using ARFIMA processes

    NASA Astrophysics Data System (ADS)

    Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.

    2015-11-01

    Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. In this paper we present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators. For CET we also extend our method to seasonal long memory.
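
    For readers unfamiliar with ARFIMA, the fragment below simulates its simplest long-memory member, ARFIMA(0, d, 0), by truncating the MA(infinity) expansion of the fractional integration operator; this is a textbook construction for illustration, not the approximate likelihood proposed in the paper.

        import numpy as np

        def arfima_0d0(n, d, rng=None):
            # Fractionally integrated white noise via the MA expansion of
            # (1 - B)^(-d), with coefficients computed by the recursion
            # psi_k = psi_{k-1} * (k - 1 + d) / k, truncated at n terms.
            rng = np.random.default_rng(rng)
            psi = np.empty(n)
            psi[0] = 1.0
            for k in range(1, n):
                psi[k] = psi[k - 1] * (k - 1 + d) / k
            eps = rng.standard_normal(n)
            return np.convolve(psi, eps)[:n]   # x_t = sum_k psi_k * eps_{t-k}

        x = arfima_0d0(5000, d=0.3, rng=1)
        # the sample ACF decays hyperbolically (long memory), not exponentially
        acf = [np.corrcoef(x[:-k], x[k:])[0, 1] for k in (1, 10, 50, 100)]
        print(np.round(acf, 3))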

  2. Off-line tracking of series parameters in distribution systems using AMI data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Tess L.; Sun, Yannan; Schneider, Kevin

    2016-05-01

    Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal change to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.

  3. State and parameter estimation of spatiotemporally chaotic systems illustrated by an application to Rayleigh-Bénard convection.

    PubMed

    Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F

    2009-03-01

    Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.

  4. Hybrid intelligent methodology to design translation invariant morphological operators for Brazilian stock market prediction.

    PubMed

    Araújo, Ricardo de A

    2010-12-01

    This paper presents a hybrid intelligent methodology to design increasing translation invariant morphological operators applied to Brazilian stock market prediction (overcoming the random walk dilemma). The proposed Translation Invariant Morphological Robust Automatic phase-Adjustment (TIMRAA) method consists of a hybrid intelligent model composed of a Modular Morphological Neural Network (MMNN) with a Quantum-Inspired Evolutionary Algorithm (QIEA), which searches for the best time lags to reconstruct the phase space of the time series generator phenomenon and determines the initial (sub-optimal) parameters of the MMNN. Each individual of the QIEA population is further trained by the Back Propagation (BP) algorithm to improve the MMNN parameters supplied by the QIEA. Also, for each prediction model generated, it uses a behavioral statistical test and a phase fix procedure to adjust time phase distortions observed in stock market time series. Furthermore, an experimental analysis is conducted with the proposed method through four Brazilian stock market time series, and the achieved results are discussed and compared to results found with random walk models and the previously introduced Time-delay Added Evolutionary Forecasting (TAEF) and Morphological-Rank-Linear Time-lag Added Evolutionary Forecasting (MRLTAEF) methods. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Using simulations and data to evaluate mean sensitivity (ζ) as a useful statistic in dendrochronology

    Treesearch

    Andrew G. Bunn; Esther Jansma; Mikko Korpela; Robert D. Westfall; James Baldwin

    2013-01-01

    Mean sensitivity (ζ) continues to be used in dendrochronology despite a literature that shows it to be of questionable value in describing the properties of a time series. We simulate first-order autoregressive models with known parameters and show that ζ is a function of variance and autocorrelation of a time series. We then use 500 random tree-ring...

  6. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating the signal-to-noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase-modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
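
    A hedged sketch of the idea, assuming ideal +/-A matched filter outputs in AWGN; it uses the common absolute-moment approximation to the joint signal/noise-power estimate rather than the exact algorithm of the report, and all values are synthetic.

        import numpy as np

        def snr_estimate(y):
            # Amplitude ~ mean(|y|), noise power ~ mean(y^2) - amplitude^2.
            # The absolute-moment step is biased at low SNR.
            amp = np.mean(np.abs(y))
            total = np.mean(y ** 2)
            noise = max(total - amp ** 2, 1e-12)
            return amp ** 2 / noise

        rng = np.random.default_rng(0)
        symbols = rng.choice([-1.0, 1.0], size=100000)
        for true_snr_db in (0.0, 5.0, 10.0):
            sigma = 10 ** (-true_snr_db / 20)   # unit amplitude, so SNR = 1/sigma^2
            y = symbols + sigma * rng.standard_normal(symbols.size)
            print(true_snr_db, 10 * np.log10(snr_estimate(y)))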

  7. Improving the precision of dynamic forest parameter estimates using Landsat

    Treesearch

    Evan B. Brooks; John W. Coulston; Randolph H. Wynne; Valerie A. Thomas

    2016-01-01

    The use of satellite-derived classification maps to improve post-stratified forest parameter estimates is well established. When reducing the variance of post-stratification estimates for forest change parameters such as forest growth, it is logical to use a change-related strata map. At the stand level, a time series of Landsat images is

  8. State space model approach for forecasting the use of electrical energy (a case study on: PT. PLN (Persero) district of Kroya)

    NASA Astrophysics Data System (ADS)

    Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik

    2018-05-01

    Time series data are data taken or measured at regular time intervals, and time series analysis is used to analyze such data while accounting for the effect of time. The purpose of time series analysis is to identify the characteristics and patterns of the data and to predict future values based on data from the past. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal Autoregressive (AR) order selection, determination of the state vector through canonical correlation analysis, parameter estimation, and forecasting. The results of this research show that modeling electric energy consumption with a state space model of order 4 yields a Mean Absolute Percentage Error (MAPE) of 3.655%, placing the model in the very good forecasting category.
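
    The state space modeling itself is involved, but the order-selection-plus-MAPE flavor can be sketched with statsmodels' AutoReg on synthetic monthly consumption data; the use of ar_select_order, the lag choice, and the data below are illustrative assumptions, not the authors' canonical-correlation procedure.

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg, ar_select_order

        rng = np.random.default_rng(3)
        # synthetic monthly consumption: trend + seasonality + AR(1) noise
        n = 120
        t = np.arange(n)
        e = np.zeros(n)
        for i in range(1, n):
            e[i] = 0.7 * e[i - 1] + rng.standard_normal()
        y = 100 + 0.3 * t + 10 * np.sin(2 * np.pi * t / 12) + e

        train, test = y[:-12], y[-12:]
        sel = ar_select_order(train, maxlag=8, ic="aic", trend="ct")
        model = AutoReg(train, lags=sel.ar_lags, trend="ct").fit()
        pred = model.predict(start=len(train), end=len(train) + 11)

        mape = np.mean(np.abs((test - pred) / test)) * 100
        print(f"selected lags: {sel.ar_lags}, MAPE = {mape:.2f}%")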

  9. Diffusive and subdiffusive dynamics of indoor microclimate: a time series modeling.

    PubMed

    Maciejewska, Monika; Szczurek, Andrzej; Sikora, Grzegorz; Wyłomańska, Agnieszka

    2012-09-01

    The indoor microclimate is an issue in modern society, where people spend about 90% of their time indoors. Temperature and relative humidity are commonly used for its evaluation. In this context, the two parameters are usually considered as behaving in the same manner, just inversely correlated. This opinion comes from observation of the deterministic components of temperature and humidity time series. We focus on the dynamics and the dependency structure of the time series of these parameters, without deterministic components. Here we apply the mean square displacement, the autoregressive integrated moving average (ARIMA), and the methodology for studying anomalous diffusion. The analyzed data originated from five monitoring locations inside a modern office building, covering a period of nearly one week. It was found that the temperature data exhibited a transition between diffusive and subdiffusive behavior, when the building occupancy pattern changed from the weekday to the weekend pattern. At the same time the relative humidity consistently showed diffusive character. Also the structures of the dependencies of the temperature and humidity data sets were different, as shown by the different structures of the ARIMA models which were found appropriate. In the space domain, the dynamics and dependency structure of the particular parameter were preserved. This work proposes an approach to describe the very complex conditions of indoor air and it contributes to the improvement of the representative character of microclimate monitoring.
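
    As a pointer to the anomalous-diffusion part of the analysis, here is a minimal mean-square-displacement estimator with a log-log slope fit; a slope near 1 indicates ordinary diffusion and below 1 subdiffusion. The random-walk input is synthetic, not building data.

        import numpy as np

        def msd(x, max_lag):
            # Mean square displacement as a function of lag.
            lags = np.arange(1, max_lag + 1)
            return lags, np.array([np.mean((x[k:] - x[:-k]) ** 2) for k in lags])

        rng = np.random.default_rng(7)
        walk = np.cumsum(rng.standard_normal(20000))        # ordinary diffusion
        lags, m = msd(walk, 200)
        alpha = np.polyfit(np.log(lags), np.log(m), 1)[0]   # scaling exponent
        print(f"estimated exponent: {alpha:.2f} (close to 1 for diffusion)")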

  10. Efficient Maize and Sunflower Multi-year Mapping with NDVI Time Series of HJ-1A/1B in Hetao Irrigation District of Inner Mongolia, China

    NASA Astrophysics Data System (ADS)

    Yu, B.; Shang, S.

    2016-12-01

    Food shortage is one of the major challenges that human beings are facing, and it is urgent to improve the monitoring of the planted area and distribution of the main crops to address the associated economic and social issues. The extensive use of remote sensing satellite data has recently provided favorable conditions for crop identification in large irrigation districts with complex planting structures. Differences in crop phenology are the main basis for crop identification, and normalized difference vegetation index (NDVI) time series can delineate the crop phenology cycle well; the key to crop identification is therefore to obtain high-quality NDVI time series. MODIS and Landsat TM satellite images are the most frequently used; however, neither can guarantee both high temporal and high spatial resolution at once. Accordingly, this paper makes use of NDVI time series extracted from China Environment Satellite (HJ-1A/1B) data, which have a two-day repeat cycle and 30 m spatial resolution. The NDVI time series are fitted with an asymmetric logistic curve; the fit is good, with correlation coefficients greater than 0.9. Phenological parameters are derived from the fitted NDVI curves, and crop identification is carried out using the different relation ellipses between NDVI and its phenological parameters for different crops. Taking the Hetao Irrigation District of Inner Mongolia as an example, multi-year maize and sunflower are identified in the district with good results: compared with the official statistics, the relative errors are both lower than 5%. The results show that the NDVI time-series dataset derived from HJ-1A/1B CCD can delineate the crop phenology cycle accurately and demonstrate its application to crop identification in irrigated districts.

  11. Visibility graph analysis of heart rate time series and bio-marker of congestive heart failure

    NASA Astrophysics Data System (ADS)

    Bhaduri, Anirban; Bhaduri, Susmita; Ghosh, Dipak

    2017-09-01

    The study of RR-interval time series in Congestive Heart Failure has been an active area of research employing a variety of methods, including non-linear ones. In this article the cardiac dynamics of the heart beat are explored in the light of complex network analysis, viz. the visibility graph method. Heart beat (RR-interval) time series data taken from the Physionet database [46, 47], belonging to two groups of subjects, diseased (congestive heart failure; 29 in number) and normal (54 in number), are analyzed with the technique. The overall results show that a quantitative parameter can significantly differentiate between the diseased and the normal subjects as well as between different stages of the disease. Further, when the data are split into periods of around 1 hour each and analyzed separately, the same consistent differences are observed. This quantitative parameter obtained using visibility graph analysis can thereby be used as a potential bio-marker as well as a subsequent alarm-generation mechanism for predicting the onset of Congestive Heart Failure.
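
    A compact natural-visibility-graph construction, assuming the O(n^3) brute-force check is acceptable for short demo series; the paper's bio-marker is a specific quantitative parameter of the resulting network, which is not reproduced here; only the mean degree is printed, and the input is a synthetic stand-in for an RR-interval series.

        import numpy as np

        def visibility_graph(x):
            # Nodes are samples; i and j are linked if the straight line between
            # (i, x[i]) and (j, x[j]) passes above every intermediate sample.
            n = len(x)
            adj = np.zeros((n, n), dtype=bool)
            for i in range(n - 1):
                for j in range(i + 1, n):
                    ks = np.arange(i + 1, j)
                    line = x[j] + (x[i] - x[j]) * (j - ks) / (j - i)
                    if ks.size == 0 or np.all(x[ks] < line):
                        adj[i, j] = adj[j, i] = True
            return adj

        rng = np.random.default_rng(0)
        series = rng.standard_normal(300)
        degrees = visibility_graph(series).sum(axis=0)
        print("mean degree:", degrees.mean())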

  12. Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling

    DOE PAGES

    Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...

    2014-07-14

    Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies are orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models. The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation; detection of neoclassical tearing modes, including locked mode precursors; automatic clustering of modes; and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.
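
    Full SSI is beyond a short example, but the core idea of reading frequencies off eigenvalues of a localized time-series model can be sketched with an ordinary least-squares AR fit and its companion matrix; this simplification and all signal parameters below are assumptions of the demo, not the paper's algorithm.

        import numpy as np

        def ar_modal_frequencies(x, order, dt):
            # Fit an AR model over a short window and read oscillation
            # frequencies off the companion-matrix eigenvalues:
            # f = |angle(lambda)| / (2*pi*dt).
            X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
            a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
            companion = np.zeros((order, order))
            companion[0, :] = a
            companion[1:, :-1] = np.eye(order - 1)
            lam = np.linalg.eigvals(companion)
            return np.unique(np.round(np.abs(np.angle(lam)) / (2 * np.pi * dt), 2))

        dt = 1e-3
        t = np.arange(0, 0.5, dt)
        sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
        sig += 0.05 * np.random.default_rng(1).standard_normal(t.size)
        print(ar_modal_frequencies(sig, order=8, dt=dt))   # ~50 and ~120 Hz among the modes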

  13. Trend analysis for daily rainfall series of Barcelona

    NASA Astrophysics Data System (ADS)

    Ortego, M. I.; Gibergans-Báguena, J.; Tolosana-Delgado, R.; Egozcue, J. J.; Llasat, M. C.

    2009-09-01

    Frequency analysis of hydrological series is key to acquiring an in-depth understanding of the behaviour of hydrologic events. The occurrence of extreme hydrologic events in an area may have great social and economic impacts, and a good understanding of hazardous events improves the planning of human activities. A useful model for hazard assessment of extreme hydrologic events in an area is the peaks-over-threshold (POT) model. The time-occurrence of events is assumed to be Poisson distributed, and the magnitude X of each event is modeled as an arbitrary random variable whose excesses over the threshold x0, Y = X - x0 given X > x0, follow a Generalized Pareto Distribution (GPD), F_Y(y | β, ξ) = 1 - (1 + ξy/β)^(-1/ξ), for 0 ≤ y < y_sup, where y_sup = +∞ if ξ ≥ 0 and y_sup = -β/ξ if ξ < 0; the limiting distribution for ξ = 0 is an exponential one. Independence between this magnitude and occurrence in time is assumed, as well as independence from event to event. In order to account for the uncertainty of the estimation of the GPD parameters, a Bayesian approach is chosen. This approach allows necessary conditions on the parameters of the distribution for our particular phenomena to be included, and adequately propagates the uncertainty of the estimates to hazard parameters such as return periods. A common concern is whether the magnitudes of hazardous events have changed in the last decades, and long data series are highly valued for properly studying such issues. The series of daily rainfall in Barcelona (1854-2006), one of the longest European daily rainfall series available, has been selected. Daily rainfall is better described on a relative scale and is therefore suitably treated in log-scale; accordingly, log-precipitation is identified with X. Excesses over a threshold are modeled by a GPD with a limited maximum value, i.e. the distribution of the excesses Y is assumed to have a limited upper tail and, therefore, ξ < 0 and y_sup = -β/ξ. Such a long data series provides valuable information about the phenomenon at hand, so a first step is to examine its reliability. The first part of the work focuses on the possible existence of abrupt changes in the parameters of the GPD; such abrupt changes may be due to changes in the location of the observatories and/or technological advances introduced in the measuring instruments. The second part of the work examines the possible existence of trends, with the parameters of the model considered as functions of time. A new parameterisation of the GPD is suggested in order to deal parsimoniously with this climate variation: the classical scale and shape parameters of the GPD (β, ξ) are reformulated as a location parameter μ linked to the upper limit of the distribution and a shape parameter ν. In this reparameterisation, the parsimonious choice is to consider the shape as a linear function of time, ν(t) = ν0 + ν1·t, while keeping the location fixed, μ(t) = μ0; climate change is then assessed by testing the hypothesis ν1 = 0. Results show no significant abrupt changes in the excess distribution of the Barcelona daily rainfall series, but suggest a significant change in the parameters, and therefore the existence of a trend in daily rainfall for this period.

  14. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time series in order to construct the auxiliary model for the time-evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  15. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series

    PubMed Central

    2011-01-01

    Background: Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results: Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions: The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html. PMID:21851598

  16. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series.

    PubMed

    Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp

    2011-08-18

    Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
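
    For reference, the core DTW recursion that DTW-S builds on can be written in a few lines; the significance-estimation layer (the simulation-based FPR), which is the paper's contribution, is not shown. A minimal sketch on synthetic series:

        import numpy as np

        def dtw_distance(a, b):
            # Cumulative cost matrix D with the classic recursion
            # D[i, j] = |a[i] - b[j]| + min(D[i-1, j], D[i, j-1], D[i-1, j-1]).
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        t = np.linspace(0, 2 * np.pi, 80)
        x = np.sin(t)
        y = np.sin(t - 0.6)             # time-shifted copy
        print(dtw_distance(x, y))       # much smaller than the Euclidean mismatch
        print(np.sqrt(np.sum((x - y) ** 2)))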

  17. Comparison of Nomothetic versus Idiographic-Oriented Methods for Making Predictions about Distal Outcomes from Time Series Data

    ERIC Educational Resources Information Center

    Castro-Schilo, Laura; Ferrer, Emilio

    2013-01-01

    We illustrate the idiographic/nomothetic debate by comparing 3 approaches to using daily self-report data on affect for predicting relationship quality and breakup. The 3 approaches included (a) the first day in the series of daily data; (b) the mean and variability of the daily series; and (c) parameters from dynamic factor analysis, a…

  18. Investigating parameters participating in the infant respiratory control system attractor.

    PubMed

    Terrill, Philip I; Wilson, Stephen J; Suresh, Sadasivam; Cooper, David M; Dakin, Carolyn

    2008-01-01

    Theoretically, any participating parameter in a non-linear system represents the dynamics of the whole system. Takens' time-delay embedding theorem provides the fundamental basis for performing non-linear analysis on physiological time-series data: in practice, only one parameter of the system needs to be measured to convey an accurate representation of the system dynamics. In this paper, the infant respiratory control system is represented using three variables: a digitally sampled respiratory inductive plethysmography waveform, and the derived tidal volume and inter-breath interval (IBI) time series. For 14 healthy infants, these data streams were analysed using recurrence plot analysis across one night of sleep. The measured attractor sizes of these variables followed the same qualitative trends across the night's study. The results suggest that the attractor size measures of the derived IBI and tidal volume series are representative surrogates for the raw respiratory waveform. The extent to which the relative attractor sizes of IBI and tidal volume remain constant through changing sleep states could potentially be used to quantify pathology, or maturation of breathing control.
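
    A minimal sketch of the two steps the analysis rests on, namely Takens-style delay embedding and a recurrence matrix whose density serves as one crude "attractor size" measure; the embedding dimension, delay and threshold below are arbitrary demo choices, not values from the study.

        import numpy as np

        def delay_embed(x, dim, tau):
            # Map a scalar series to vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)].
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

        def recurrence_matrix(emb, eps):
            # R[i, j] = 1 where embedded states are closer than eps.
            d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
            return (d < eps).astype(int)

        x = np.sin(0.2 * np.arange(1000)) + 0.05 * np.random.default_rng(5).standard_normal(1000)
        emb = delay_embed(x, dim=3, tau=8)
        R = recurrence_matrix(emb, eps=0.3)
        print("recurrence rate:", R.mean())   # one simple size measure of the attractor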

  19. Comparison of the performance of tracer kinetic model-driven registration for dynamic contrast enhanced MRI using different models of contrast enhancement.

    PubMed

    Buonaccorsi, Giovanni A; Roberts, Caleb; Cheung, Sue; Watson, Yvonne; O'Connor, James P B; Davies, Karen; Jackson, Alan; Jayson, Gordon C; Parker, Geoff J M

    2006-09-01

    The quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data is subject to model fitting errors caused by motion during the time-series data acquisition. However, the time-varying features that occur as a result of contrast enhancement can confound motion correction techniques based on conventional registration similarity measures. We have therefore developed a heuristic, locally controlled tracer kinetic model-driven registration procedure, in which the model accounts for contrast enhancement, and applied it to the registration of abdominal DCE-MRI data at high temporal resolution. Using severely motion-corrupted data sets that had been excluded from analysis in a clinical trial of an antiangiogenic agent, we compared the results obtained when using different models to drive the tracer kinetic model-driven registration with those obtained when using a conventional registration against the time series mean image volume. Using tracer kinetic model-driven registration, it was possible to improve model fitting by reducing the sum of squared errors but the improvement was only realized when using a model that adequately described the features of the time series data. The registration against the time series mean significantly distorted the time series data, as did tracer kinetic model-driven registration using a simpler model of contrast enhancement. When an appropriate model is used, tracer kinetic model-driven registration influences motion-corrupted model fit parameter estimates and provides significant improvements in localization in three-dimensional parameter maps. This has positive implications for the use of quantitative DCE-MRI for example in clinical trials of antiangiogenic or antivascular agents.

  20. Modeling multivariate time series on manifolds with skew radial basis functions.

    PubMed

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.

  1. Consistent Long-Time Series of GPS Satellite Antenna Phase Center Corrections

    NASA Astrophysics Data System (ADS)

    Steigenberger, P.; Schmid, R.; Rothacher, M.

    2004-12-01

    The current IGS processing strategy disregards satellite antenna phase center variations (pcvs) depending on the nadir angle and applies block-specific phase center offsets only. However, the transition from relative to absolute receiver antenna corrections presently under discussion necessitates the consideration of satellite antenna pcvs. Moreover, studies by several groups have shown that the offsets are not homogeneous within a satellite block, and manufacturer specifications seem to confirm this assumption. In order to get the best possible antenna corrections, consistent ten-year time series (1994-2004) of satellite-specific pcvs and offsets were generated. This challenging effort became possible as part of the reprocessing of a global GPS network currently performed by the Technical Universities of Munich and Dresden. The data of about 160 stations since the official start of the IGS in 1994 have been reprocessed, as today's GPS time series are mostly inhomogeneous and inconsistent due to continuous improvements in the processing strategies and modeling of global GPS solutions. An analysis of the signals contained in the time series of the phase center offsets demonstrates amplitudes on the decimeter level, at least one order of magnitude worse than the desired accuracy. The periods partly arise from the GPS orbit configuration, as the orientation of the orbit planes with regard to the inertial system repeats after about 350 days due to the rotation of the ascending nodes. In addition, the rms values of the X- and Y-offsets show a high correlation with the angle between the orbit plane and the direction to the sun. The time series of the pcvs mainly point at the correlation with the global terrestrial scale. Solutions with relative and absolute phase center corrections, and with block- and satellite-specific satellite antenna corrections, demonstrate the effect of this parameter group on other global GPS parameters such as the terrestrial scale, station velocities, the geocenter position or the tropospheric delays. Thus, deeper insight into the so-called `Bermuda triangle' of several highly correlated parameters is given.

  2. Fourier Magnitude-Based Privacy-Preserving Clustering on Time-Series Data

    NASA Astrophysics Data System (ADS)

    Kim, Hea-Suk; Moon, Yang-Sae

    Privacy-preserving clustering (PPC for short) is important in publishing sensitive time-series data. Previous PPC solutions, however, either fail to preserve distance orders or incur privacy breaches. To solve this problem, we propose a new PPC approach that exploits the Fourier magnitudes of time series. Our magnitude-based method does not cause a privacy breach even when its techniques or related parameters are publicly revealed. Using magnitudes only, however, incurs the distance-order problem, and we thus present magnitude selection strategies to preserve as many Euclidean distance orders as possible. Through extensive experiments, we showcase the superiority of our magnitude-based approach.

  3. GrammarViz 3.0: Interactive Discovery of Variable-Length Time Series Patterns

    DOE PAGES

    Senin, Pavel; Lin, Jessica; Wang, Xing; ...

    2018-02-23

    The problems of recurrent and anomalous pattern discovery in time series, e.g., motifs and discords, respectively, have received a lot of attention from researchers in the past decade. However, since the pattern search space is usually intractable, most existing detection algorithms require that the patterns have discriminative characteristics and have their length known in advance and provided as input, which is an unreasonable requirement for many real-world problems. In addition, patterns of similar structure but of different lengths may co-exist in a time series. In order to address these issues, we have developed algorithms for variable-length time series pattern discovery that are based on symbolic discretization and grammar inference—two techniques whose combination enables the structured reduction of the search space and discovery of the candidate patterns in linear time. In this work, we present GrammarViz 3.0—a software package that provides implementations of the proposed algorithms and a graphical user interface for interactive variable-length time series pattern discovery. The current version of the software provides an alternative grammar inference algorithm that improves the time series motif discovery workflow, and introduces an experimental procedure for automated discretization parameter selection that builds upon the minimum cardinality maximum cover principle and aids time series recurrent and anomalous pattern discovery.

  4. GrammarViz 3.0: Interactive Discovery of Variable-Length Time Series Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senin, Pavel; Lin, Jessica; Wang, Xing

    The problems of recurrent and anomalous pattern discovery in time series, e.g., motifs and discords, respectively, have received a lot of attention from researchers in the past decade. However, since the pattern search space is usually intractable, most existing detection algorithms require that the patterns have discriminative characteristics and have their length known in advance and provided as input, which is an unreasonable requirement for many real-world problems. In addition, patterns of similar structure but of different lengths may co-exist in a time series. In order to address these issues, we have developed algorithms for variable-length time series pattern discovery that are based on symbolic discretization and grammar inference—two techniques whose combination enables the structured reduction of the search space and discovery of the candidate patterns in linear time. In this work, we present GrammarViz 3.0—a software package that provides implementations of the proposed algorithms and a graphical user interface for interactive variable-length time series pattern discovery. The current version of the software provides an alternative grammar inference algorithm that improves the time series motif discovery workflow, and introduces an experimental procedure for automated discretization parameter selection that builds upon the minimum cardinality maximum cover principle and aids time series recurrent and anomalous pattern discovery.

  5. Functional Covariance Networks: Obtaining Resting-State Networks from Intersubject Variability

    PubMed Central

    Gohel, Suril; Di, Xin; Walter, Martin; Biswal, Bharat B.

    2012-01-01

    In this study, we investigate a new approach for examining the separation of the brain into resting-state networks (RSNs) on a group level using resting-state parameters (amplitude of low-frequency fluctuation [ALFF], fractional ALFF [fALFF], the Hurst exponent, and signal standard deviation). Spatial independent component analysis is used to reveal covariance patterns of the relevant resting-state parameters (not the time series) across subjects that are shown to be related to known, standard RSNs. As part of the analysis, non-resting-state parameters are also investigated, such as the mean of the blood oxygen level-dependent time series and gray matter volume from anatomical scans. We hypothesize that meaningful RSNs will primarily be elucidated by analysis of the resting-state functional connectivity (RSFC) parameters and not by non-RSFC parameters. First, this shows the presence of a common influence underlying individual RSFC networks revealed through low-frequency fluctuation (LFF) parameter properties. Second, this suggests that the LFFs and RSFC networks have neurophysiological origins. Several of the components determined from resting-state parameters in this manner correlate strongly with known resting-state functional maps, and we term these “functional covariance networks”. PMID:22765879

  6. Real-time flutter boundary prediction based on time series models

    NASA Astrophysics Data System (ADS)

    Gu, Wenjing; Zhou, Li

    2018-03-01

    For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models, each with a corresponding stability criterion, are adopted in this paper. The first method treats a long nonstationary response signal as many contiguous intervals, each of which is considered stationary, and a traditional AR model is established to represent each interval of the signal sequence. The second employs a time-varying AR model to characterize signals measured in a flutter test with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods are effective in predicting the flutter boundary at lower speed levels. A comparison between the two methods is also given in this paper.
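
    A toy version of the stability-parameter idea, assuming a root-modulus check on a least-squares AR fit (equivalent in effect to applying Jury's criterion to the characteristic polynomial); the AR(2) test signals with poles approaching the unit circle mimic decreasing damping toward flutter, and all values are invented.

        import numpy as np

        def ar_stability_margin(x, order=4):
            # Least-squares AR fit; margin = 1 - max|root| of the characteristic
            # polynomial. Roots approaching the unit circle (margin -> 0) signal
            # loss of stability.
            X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
            a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
            roots = np.roots(np.r_[1.0, -a])
            return 1.0 - np.abs(roots).max()

        rng = np.random.default_rng(2)
        theta = 2 * np.pi * 0.1                   # oscillation at 0.1 cycles/sample
        for r in (0.90, 0.95, 0.99):              # pole radius: damping shrinks as r -> 1
            x = np.zeros(3000)
            e = rng.standard_normal(3000)
            for i in range(2, 3000):
                x[i] = 2 * r * np.cos(theta) * x[i - 1] - r ** 2 * x[i - 2] + e[i]
            print(r, round(ar_stability_margin(x[-1000:]), 3))   # margin ~ 1 - r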

  7. Aerial Refueling Simulator Validation Using Operational Experimentation and Response Surface Methods with Time Series Responses

    DTIC Science & Technology

    2013-03-21

  8. Toward automatic time-series forecasting using neural networks.

    PubMed

    Yan, Weizhong

    2012-07-01

    Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, consistent ANN performance over different studies has not been achieved. Many factors contribute to this inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, which does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition and was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.
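
    Since the GRNN is central to the scheme, a bare-bones version helps: it is a Gaussian-kernel weighted average of training targets, with the spread sigma as its single design parameter (the Nadaraya-Watson form). The lagged-input forecasting setup and all values below are illustrative assumptions, not the paper's NN3 configuration.

        import numpy as np

        def grnn_predict(X_train, y_train, X_query, sigma):
            # Gaussian-kernel weighted average of the training targets.
            d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return (w @ y_train) / w.sum(axis=1)

        rng = np.random.default_rng(4)
        # one-step-ahead forecasting with lagged values as inputs
        series = np.sin(0.1 * np.arange(500)) + 0.1 * rng.standard_normal(500)
        p = 6
        X = np.column_stack([series[k:len(series) - p + k] for k in range(p)])
        y = series[p:]
        X_tr, y_tr, X_te, y_te = X[:-50], y[:-50], X[-50:], y[-50:]
        pred = grnn_predict(X_tr, y_tr, X_te, sigma=0.5)
        print("RMSE:", np.sqrt(np.mean((pred - y_te) ** 2)))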

  9. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics provide useful information only about overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components, and the reconstruction back to time series, provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is due to the quality of the input data or to the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the too

  10. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    NASA Astrophysics Data System (ADS)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.

  11. Deconvolution of mixing time series on a graph

    PubMed Central

    Blocker, Alexander W.; Airoldi, Edoardo M.

    2013-01-01

    In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, yt = Axt, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135

  12. Investigation of the 16-year and 18-year ZTD Time Series Derived from GPS Data Processing

    NASA Astrophysics Data System (ADS)

    Bałdysz, Zofia; Nykiel, Grzegorz; Figurski, Mariusz; Szafranek, Karolina; KroszczyńSki, Krzysztof

    2015-08-01

    The GPS system can play an important role in activities related to the monitoring of climate. The long time series, coherent strategy, and very high quality of the tropospheric parameter Zenith Tropospheric Delay (ZTD) estimated on the basis of GPS data analysis allow its usefulness for climate research as a direct GPS product to be investigated. This paper presents results of an analysis of 16-year time series derived from the EUREF Permanent Network (EPN) reprocessing performed by the Military University of Technology. For 58 stations, Lomb-Scargle periodograms were computed in order to obtain information about the oscillations in the ZTD time series. Seasonal components and a linear trend were estimated using Least Squares Estimation (LSE), and the Mann-Kendall trend test was used to confirm the presence of the linear trend designated by the LSE method. In order to verify the impact of the length of the time series on the trend value, a comparison between 16- and 18-year series was performed.
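
    A small sketch of the periodogram step, assuming an irregularly sampled synthetic ZTD-like series with an annual oscillation; scipy's lombscargle takes angular frequencies, and the amplitudes, sampling pattern and period grid here are invented.

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(6)
        # 16 years of daily samples with gaps (irregular sampling)
        t = np.sort(rng.choice(np.arange(16 * 365), size=4000, replace=False)).astype(float)
        ztd = 2.4 + 0.05 * np.sin(2 * np.pi * t / 365.25) + 0.01 * rng.standard_normal(t.size)

        periods = np.linspace(30, 500, 2000)     # candidate periods in days
        ang_freqs = 2 * np.pi / periods
        power = lombscargle(t, ztd - ztd.mean(), ang_freqs, normalize=True)
        print("dominant period (days):", periods[np.argmax(power)])   # ~365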

  13. RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.

    PubMed

    Stránský, V; Thinová, L

    2017-11-01

    In 2010, continual radon measurement was established at Mladeč Caves in the Czech Republic using a continual radon monitor RADIM3A. In order to model the radon time series in the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal autoregressive integrated moving average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the time series seasonality, previously acquired values and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes which occurred during the RC measurement. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
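
    A generic statsmodels SARIMAX sketch in the same spirit, assuming synthetic hourly radon with a temperature covariate and a daily seasonal cycle; the order used is illustrative and deliberately smaller than the paper's regARIMA(5,1,3), and the "future" exogenous values are simply reused in-sample temperatures.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(8)
        n = 500
        temperature = 10 + 5 * np.sin(2 * np.pi * np.arange(n) / 24) + rng.standard_normal(n)
        radon = 50 - 1.5 * temperature + np.cumsum(0.3 * rng.standard_normal(n))

        model = SARIMAX(radon, exog=temperature,
                        order=(2, 1, 1),               # illustrative, not the paper's (5,1,3)
                        seasonal_order=(1, 0, 0, 24))  # daily cycle at hourly sampling
        res = model.fit(disp=False)
        print(res.params)
        # forecasting requires future exog; in-sample values serve as a stand-in here
        print(res.forecast(steps=24, exog=temperature[-24:]))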

  14. Nonstationary frequency analysis for the trivariate flood series of the Weihe River

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Xiong, Lihua

    2016-04-01

    Intensive human activities such as water-soil conservation can significantly alter the natural hydrological processes of rivers. In this study, the effect of water-soil conservation on the trivariate flood series of the Weihe River, located in Northwest China, is investigated. The annual maximum daily discharge, annual maximum 3-day flood volume and annual maximum 5-day flood volume are chosen as the study data and used to compose the trivariate flood series. The nonstationarities in both the individual univariate flood series and the corresponding antecedent precipitation series generating the flood events are examined with the Mann-Kendall trend test. It is found that all individual univariate flood series present a significant decreasing trend, while the antecedent precipitation series can be treated as stationary. This indicates that the increase of the water-soil conservation land area has altered the rainfall-runoff relationship of the Weihe basin and induced the nonstationarities in the three individual univariate flood series. The time-varying moments model based on the Pearson type III distribution is applied to capture the nonstationarities in the flood frequency distribution, with the water-soil conservation land area introduced as the explanatory variable of the flood distribution parameters. Building on the analysis of each individual univariate flood series, the dependence structure among the three univariate flood series is investigated with a time-varying copula model, also with the water-soil conservation land area as the explanatory variable of the copula parameters. The results indicate that the dependence among the trivariate flood series is enhanced by the increase of water-soil conservation land area.
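
    The Mann-Kendall test used for the trend screening is easy to state: count concordant minus discordant pairs and normalize. A minimal version without tie correction, run on invented annual-maximum data:

        import numpy as np
        from scipy import stats

        def mann_kendall(x):
            # Returns the S statistic and a two-sided p-value from the
            # normal approximation (no correction for ties).
            n = len(x)
            s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0
            z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
            p = 2 * (1 - stats.norm.cdf(abs(z)))
            return s, p

        rng = np.random.default_rng(9)
        flood = 800 - 4.0 * np.arange(60) + 60 * rng.standard_normal(60)  # decreasing maxima
        print(mann_kendall(flood))   # large negative S, small p-value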

  15. Transition Icons for Time-Series Visualization and Exploratory Analysis.

    PubMed

    Nickerson, Paul V; Baharloo, Raheleh; Wanigatunga, Amal A; Manini, Todd M; Tighe, Patrick J; Rashidi, Parisa

    2018-03-01

    The modern healthcare landscape has seen the rapid emergence of techniques and devices that temporally monitor and record physiological signals. The prevalence of time-series data within the healthcare field necessitates the development of methods that can analyze the data in order to draw meaningful conclusions. Time-series behavior is notoriously difficult to intuitively understand due to its intrinsic high-dimensionality, which is compounded in the case of analyzing groups of time series collected from different patients. Our framework, which we call transition icons, renders common patterns in a visual format useful for understanding the shared behavior within groups of time series. Transition icons are adept at detecting and displaying subtle differences and similarities, e.g., between measurements taken from patients receiving different treatment strategies or stratified by demographics. We introduce various methods that collectively allow for exploratory analysis of groups of time series, while being free of distribution assumptions and including simple heuristics for parameter determination. Our technique extracts discrete transition patterns from symbolic aggregate approXimation representations, and compiles transition frequencies into a bag of patterns constructed for each group. These transition frequencies are normalized and aligned in icon form to intuitively display the underlying patterns. We demonstrate the transition icon technique for two time-series datasets: postoperative pain scores and hip-worn accelerometer activity counts. We believe transition icons can be an important tool for researchers approaching time-series data, as they give rich and intuitive information about collective time-series behaviors.
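
    The transition-pattern idea can be sketched as follows: discretize each series with SAX-style equiprobable breakpoints and accumulate normalized symbol-transition frequencies for a group; the alphabet size and the synthetic group below are assumptions, not the authors' settings:

```python
# A rough sketch of transition-frequency extraction from SAX-style symbols.
import numpy as np
from scipy.stats import norm

def sax_symbols(x, alphabet=4):
    z = (x - x.mean()) / x.std()                     # z-normalize
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet + 1)[1:-1])
    return np.searchsorted(breakpoints, z)           # symbol index per sample

def transition_frequencies(group, alphabet=4):
    counts = np.zeros((alphabet, alphabet))
    for x in group:
        s = sax_symbols(x, alphabet)
        for a, b in zip(s[:-1], s[1:]):
            counts[a, b] += 1
    return counts / counts.sum()                     # normalized "bag of patterns"

rng = np.random.default_rng(3)
group = [np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.2 * rng.standard_normal(200)
         for _ in range(10)]
print(transition_frequencies(group).round(3))
```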

  16. Convergent Cross Mapping: Basic concept, influence of estimation parameters and practical application.

    PubMed

    Schiecke, Karin; Pester, Britta; Feucht, Martha; Leistritz, Lutz; Witte, Herbert

    2015-01-01

    In neuroscience, data are typically generated from neural network activity. Complex interactions between the measured time series are involved, and nothing or only little is known about the underlying dynamic system. Convergent Cross Mapping (CCM) provides the possibility to investigate nonlinear causal interactions between time series by using nonlinear state space reconstruction. The aim of this study is to investigate the general applicability of CCM and to show its potentials and limitations. The influence of estimation parameters is demonstrated by means of simulated data, whereas an interval-based application of CCM to real data is adapted for the investigation of interactions between heart rate and specific EEG components of children with temporal lobe epilepsy.
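
    A compact sketch of the cross-mapping estimate is given below, with simplex-style neighbor weighting on a pair of coupled logistic maps; the embedding dimension, lag and coupling are assumptions, and this is not the authors' CCM code:

```python
# An illustrative CCM sketch: cross-map one variable from the shadow
# manifold of the other and score the prediction by Pearson correlation.
import numpy as np

def embed(x, E, tau):
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(E)])

def ccm_skill(x, y, E=3, tau=1):
    """Cross-map y from the shadow manifold of x; returns the Pearson skill."""
    Mx = embed(x, E, tau)
    y_aligned = y[(E - 1) * tau:]
    preds = np.empty(len(Mx))
    for i, point in enumerate(Mx):
        dist = np.linalg.norm(Mx - point, axis=1)
        dist[i] = np.inf                             # exclude the point itself
        nn = np.argsort(dist)[:E + 1]                # E+1 nearest neighbors (simplex)
        w = np.exp(-dist[nn] / max(dist[nn][0], 1e-12))
        preds[i] = np.sum(w * y_aligned[nn]) / w.sum()
    return np.corrcoef(preds, y_aligned)[0, 1]

# Coupled logistic maps: x drives y, so cross-mapping x from M_y succeeds.
n = 500
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])
print("skill of cross-mapping x from M_y:", round(ccm_skill(y, x), 3))
```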

  17. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history.

    PubMed

    Lee, E Henry; Wickham, Charlotte; Beedlow, Peter A; Waschmann, Ronald S; Tingey, David T

    2017-10-01

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for climate and forest disturbances (i.e., pests, diseases, fire). The statistical method is illustrated with a tree-ring width time series for a mature closed-canopy Douglas-fir stand on the west slopes of the Cascade Mountains of Oregon, USA that is impacted by Swiss needle cast disease caused by the foliar fungus Phaeocryptopus gaeumannii (Rohde) Petrak. The likelihood-based TSIA method is proposed for the field of dendrochronology to understand the interaction of temperature, water, and forest disturbances that are important in forest ecology and climate change studies.

  18. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    NASA Astrophysics Data System (ADS)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.

  19. Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.

    PubMed

    Bühler, Jonas; von Lieres, Eric; Huber, Gregor J

    2018-01-01

    Studies of long-distance transport of tracer isotopes in plants offer a high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h⁻¹. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h⁻¹. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase of sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule depending on the required statistical reliability of data acquired by future experiments.

  20. Results of meteorological monitoring in Gorny Altai before and after the Chuya earthquake in 2003

    NASA Astrophysics Data System (ADS)

    Aptikaeva, O. I.; Shitov, A. V.

    2014-12-01

    We consider the dynamics of some meteorological parameters in Gorny Altai from 2000 to 2011. We analyzed the variations in the meteorological parameters related to the strong Chuya earthquake (September 27, 2003). A number of anomalies were revealed in the time series. Before this strong earthquake, the winter temperatures at the meteorological station nearest to the earthquake source increased by 8-10°C (by 2009 they had returned to the mean values), while the air humidity in winter decreased. In the winter of 2002, we observed a long negative anomaly in the time series of the atmospheric pressure. At the same time, the decrease in the released seismic energy was replaced by a tendency toward its increase. Using wavelet analysis we revealed synchronism in the dynamics of the atmospheric parameters, variations in the solar and geomagnetic activities, and geodynamic processes. We also discuss the relationship between the atmospheric and geodynamic processes and the comfort conditions of the population in the analyzed climate.

  1. A systematic review on the use of time series data in the study of antimicrobial consumption and Pseudomonas aeruginosa resistance.

    PubMed

    Athanasiou, Christos I; Kopsini, Angeliki

    2018-06-12

    In the field of antimicrobial resistance, the number of studies that use time series data has increased recently. The purpose of this study is the systematic review of all studies on antibacterial consumption and on Pseudomonas aeruginosa resistance in healthcare settings that have used time series data. A systematic review of the literature through June 2017 was conducted. All studies that used time series data and examined in-hospital antibiotic consumption and Ps. aeruginosa resistance rates or incidence were eligible. No other exclusion criteria were applied. Data on the structure, terminology, methods and results of each article were recorded and analyzed as far as possible. A total of thirty-six studies were retrieved, twenty-three of which were in accordance with our criteria. Thirteen of them were quasi-experimental studies and ten were ecological observational studies. Eighteen studies collected time series data for both parameters, and the statistical methodology of "time series analysis" was applied in nine studies. Most of the studies were published in the last eight years. The Interrupted Time Series design was the most widespread. As expected, there was high heterogeneity with regard to study design, terminology and statistical methods.

  2. Distilling allometric and environmental information from time series of conduit size: the standardization issue and its relationship to tree hydraulic architecture.

    PubMed

    Carrer, Marco; von Arx, Georg; Castagneri, Daniele; Petit, Giai

    2015-01-01

    Trees are among the best natural archives of past environmental information. Xylem anatomy preserves information related to tree allometry and ecophysiological performance, which is not available from the more customary ring-width or wood-density proxy parameters. Recent technological advances make tree-ring anatomy very attractive because time frames of many centuries can now be covered. This calls for the proper treatment of time series of xylem anatomical attributes. In this article, we synthesize current knowledge on the biophysical and physiological mechanisms influencing the short- to long-term variation in the most widely used wood-anatomical feature, namely conduit size. We also clarify the strong mechanistic link between conduit-lumen size, tree hydraulic architecture and height growth. Among the key consequences of these biophysical constraints is the pervasive, increasing trend of conduit size during ontogeny. Such knowledge is required to process time series of anatomical parameters correctly in order to obtain the information of interest. An appropriate standardization procedure is fundamental when analysing long tree-ring-related chronologies. When dealing with wood-anatomical parameters, this is even more critical. Only an interdisciplinary approach involving ecophysiology, wood anatomy and dendrochronology will help to distill the valuable information about tree height growth and past environmental variability correctly.

  3. The study on the parallel processing based time series correlation analysis of RBC membrane flickering in quantitative phase imaging

    NASA Astrophysics Data System (ADS)

    Lee, Minsuk; Won, Youngjae; Park, Byungjun; Lee, Seungrag

    2017-02-01

    Not only the static characteristics but also the dynamic characteristics of the red blood cell (RBC) contain useful information for blood diagnosis. Quantitative phase imaging (QPI) can capture sample images with subnanometer-scale depth resolution and millisecond-scale temporal resolution. Various studies have used QPI for RBC diagnosis, and parallel computing algorithms have recently been developed to decrease the processing time of RBC information extraction with QPI; however, previous studies have focused on static parameters such as cell morphology or simple dynamic parameters such as the root mean square (RMS) of the membrane fluctuations. Previously, we presented a practical blood test method using time series correlation analysis of RBC membrane flickering with QPI. However, this method was limited for clinical application by its long computation time. In this study, we present an accelerated time series correlation analysis of RBC membrane flickering using a parallel computing algorithm. This method showed fractal scaling exponent results for the surrounding medium and normal RBCs consistent with our previous research.

  4. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

    All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. All considered prediction methods make assumptions that the time series data must conform to for the predictions to be accurate. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters with the exception of population size conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
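
    The forecasting step can be sketched minimally: fit a least-squares line to a window of recent performance observations and project one step ahead; the window length and history values below are assumptions:

```python
# A minimal sketch of the prediction step in predictive parameter control.
import numpy as np

def predict_next(performance, window=10):
    y = np.asarray(performance[-window:], dtype=float)
    t = np.arange(y.size)
    slope, intercept = np.polyfit(t, y, 1)           # ordinary least-squares line
    return slope * y.size + intercept                # one-step-ahead projection

history = [0.42, 0.45, 0.43, 0.48, 0.50, 0.49, 0.53, 0.55, 0.54, 0.58]
print("projected parameter performance:", round(predict_next(history), 3))
```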

  5. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
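
    The filtering idea behind this construction can be sketched by generating power-law noise through convolution of white noise with fractional-integration coefficients and then adding white noise; the value of alpha and the series length below are assumptions:

```python
# A sketch of building a white-plus-power-law series by filtering, the
# building block behind the covariance construction discussed above.
import numpy as np

def powerlaw_noise(n, alpha, rng):
    # Impulse response of (1 - B)^(-alpha/2), via Hosking's recursion
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return np.convolve(rng.standard_normal(n), h)[:n]

rng = np.random.default_rng(4)
series = powerlaw_noise(2048, alpha=1.0, rng=rng) + 0.5 * rng.standard_normal(2048)
print("first values:", np.round(series[:4], 3))      # flicker noise plus white noise
```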

  6. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  7. Non-linear dynamical classification of short time series of the rössler system in high noise regimes.

    PubMed

    Lainscsek, Claudia; Weyhenmeyer, Jonathan; Hernandez, Manuel E; Poizner, Howard; Sejnowski, Terrence J

    2013-01-01

    Time series analysis with delay differential equations (DDEs) reveals non-linear properties of the underlying dynamical system and can serve as a non-linear time-domain classification tool. Here global DDE models were used to analyze short segments of simulated time series from a known dynamical system, the Rössler system, in high noise regimes. In a companion paper, we apply the DDE model developed here to classify short segments of encephalographic (EEG) data recorded from patients with Parkinson's disease and healthy subjects. Nine simulated subjects in each of two distinct classes were generated by varying the bifurcation parameter b and keeping the other two parameters (a and c) of the Rössler system fixed. All choices of b were in the chaotic parameter range. We diluted the simulated data using white noise ranging from 10 to -30 dB signal-to-noise ratios (SNR). Structure selection was supervised by selecting the number of terms, delays, and order of non-linearity of the DDE model that best linearly separated the two classes of data. The distances d from the linear dividing hyperplane were then used to assess the classification performance by computing the area A' under the ROC curve. The selected model was tested on untrained data using repeated random sub-sampling validation. DDEs were able to accurately distinguish the two dynamical conditions, and moreover, to quantify the changes in the dynamics. There was a significant correlation between the dynamical bifurcation parameter b of the simulated data and the classification parameter d from our analysis. This correlation still held for new simulated subjects with new dynamical parameters selected from each of the two dynamical regimes. Furthermore, the correlation was robust to added noise, being significant even when the noise was greater than the signal. We conclude that DDE models may be used as a generalizable and reliable classification tool for even small segments of noisy data.

  8. Non-Linear Dynamical Classification of Short Time Series of the Rössler System in High Noise Regimes

    PubMed Central

    Lainscsek, Claudia; Weyhenmeyer, Jonathan; Hernandez, Manuel E.; Poizner, Howard; Sejnowski, Terrence J.

    2013-01-01

    Time series analysis with delay differential equations (DDEs) reveals non-linear properties of the underlying dynamical system and can serve as a non-linear time-domain classification tool. Here global DDE models were used to analyze short segments of simulated time series from a known dynamical system, the Rössler system, in high noise regimes. In a companion paper, we apply the DDE model developed here to classify short segments of encephalographic (EEG) data recorded from patients with Parkinson’s disease and healthy subjects. Nine simulated subjects in each of two distinct classes were generated by varying the bifurcation parameter b and keeping the other two parameters (a and c) of the Rössler system fixed. All choices of b were in the chaotic parameter range. We diluted the simulated data using white noise ranging from 10 to −30 dB signal-to-noise ratios (SNR). Structure selection was supervised by selecting the number of terms, delays, and order of non-linearity of the DDE model that best linearly separated the two classes of data. The distances d from the linear dividing hyperplane were then used to assess the classification performance by computing the area A′ under the ROC curve. The selected model was tested on untrained data using repeated random sub-sampling validation. DDEs were able to accurately distinguish the two dynamical conditions, and moreover, to quantify the changes in the dynamics. There was a significant correlation between the dynamical bifurcation parameter b of the simulated data and the classification parameter d from our analysis. This correlation still held for new simulated subjects with new dynamical parameters selected from each of the two dynamical regimes. Furthermore, the correlation was robust to added noise, being significant even when the noise was greater than the signal. We conclude that DDE models may be used as a generalizable and reliable classification tool for even small segments of noisy data.

  9. Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.

    2014-04-14

    To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority (BA) locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to model these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.

  10. Comparative assessment of five water infiltration models into the soil

    NASA Astrophysics Data System (ADS)

    Shahsavaramir, M.

    2009-04-01

    Knowledge of soil hydraulic conditions, particularly soil permeability, is an important issue in hydrological and climatic studies. Because of its high spatial and temporal variability, a soil infiltration monitoring scheme was investigated with a view to its application in infiltration modelling. Several models of infiltration into the soil have been developed; in this paper we assess the capability of five such models and select the best one. We first implemented a program in Quick Basic encoding the algorithms of five models: Kostiakov, Modified Kostiakov, Philip, S.C.S. and Horton. We then obtained measured infiltration data by the double-ring method for 12 soil series of the Saveh plain, situated in Markazi province, Iran. After the model coefficients were obtained, the equations were regenerated in Excel, which was also used to compare model accuracy against the observations and to produce the related graphs. Infiltration parameters such as cumulative infiltration and infiltration rate were obtained from the models and compared with the observed values. The results show that the Kostiakov and Modified Kostiakov models could quantify cumulative infiltration and infiltration rate over short, middle and long time periods. In three soil series, the Horton model determined infiltration better than the others across the three time treatments. The Philip model showed relatively good fitness for determining infiltration parameters in seven series; however, in five soil series its curve bent downward over time, indicating that the sorptivity coefficient (s) was less than zero. Overall, the S.C.S. model had the least capability for determining infiltration parameters.
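
    For illustration, the Kostiakov model I(t) = k·t^a can be fitted to double-ring data in a few lines; the measurements below are invented, not the study's Saveh plain observations:

```python
# An illustrative least-squares fit of the Kostiakov cumulative-infiltration model.
import numpy as np
from scipy.optimize import curve_fit

def kostiakov(t, k, a):
    return k * np.power(t, a)

t = np.array([5, 10, 20, 30, 60, 90, 120], dtype=float)   # elapsed time (min)
I = np.array([12, 19, 30, 38, 57, 70, 80], dtype=float)   # cumulative infiltration (mm)
(k, a), _ = curve_fit(kostiakov, t, I, p0=(1.0, 0.5))
rate = k * a * np.power(t, a - 1)                          # infiltration rate dI/dt
print(f"k = {k:.2f}, a = {a:.2f}, final rate = {rate[-1]:.3f} mm/min")
```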

  11. Decoupled ARX and RBF Neural Network Modeling Using PCA and GA Optimization for Nonlinear Distributed Parameter Systems.

    PubMed

    Zhang, Ridong; Tao, Jili; Lu, Renquan; Jin, Qibing

    2018-02-01

    Modeling of distributed parameter systems is difficult because of their nonlinearity and infinite-dimensional characteristics. Based on principal component analysis (PCA), a hybrid modeling strategy that consists of a decoupled linear autoregressive exogenous (ARX) model and a nonlinear radial basis function (RBF) neural network model is proposed. The spatial-temporal output is first divided into a few dominant spatial basis functions and finite-dimensional temporal series by PCA. Then, a decoupled ARX model is designed to model the linear dynamics of the dominant modes of the time series. The nonlinear residual part is subsequently parameterized by RBFs, where a genetic algorithm is utilized to optimize their hidden layer structure and parameters. Finally, the nonlinear spatial-temporal dynamic system is obtained after the time/space reconstruction. Simulation results of a catalytic rod and a heat conduction equation demonstrate the effectiveness of the proposed strategy compared to several other methods.
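
    The PCA step alone can be sketched as follows: factor a synthetic spatio-temporal field into dominant spatial basis functions and finite-dimensional temporal series, the inputs to the ARX and RBF stages; the field construction is an assumption:

```python
# A sketch of PCA-based model reduction of a spatio-temporal field via SVD.
import numpy as np

rng = np.random.default_rng(5)
space = np.linspace(0.0, 1.0, 50)
time = np.arange(300)
field = (np.outer(np.sin(0.05 * time), np.sin(np.pi * space))
         + 0.3 * np.outer(np.cos(0.02 * time), np.sin(2 * np.pi * space))
         + 0.01 * rng.standard_normal((300, 50)))

field_c = field - field.mean(axis=0)                     # center in time
U, s, Vt = np.linalg.svd(field_c, full_matrices=False)   # PCA via SVD
modes = Vt[:2]                                           # dominant spatial modes
coeffs = field_c @ modes.T                               # temporal coefficient series
print("variance captured by 2 modes:", round((s[:2] ** 2).sum() / (s ** 2).sum(), 4))
```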

  12. Bayesian analysis of time-series data under case-crossover designs: posterior equivalence and inference.

    PubMed

    Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay

    2013-12-01

    Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations.

  13. Efficient Bayesian inference for natural time series using ARFIMA processes

    NASA Astrophysics Data System (ADS)

    Graves, Timothy; Gramacy, Robert; Franzke, Christian; Watkins, Nicholas

    2016-04-01

    Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. We present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators [1]. In addition we show how the method can be used to perform joint inference of the stability exponent and the memory parameter when ARFIMA is extended to allow for alpha-stable innovations. Such models can be used to study systems where heavy tails and long range memory coexist. [1] Graves et al, Nonlin. Processes Geophys., 22, 679-700, 2015; doi:10.5194/npg-22-679-2015.
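
    For contrast with the Bayesian treatment, a standard frequentist long-memory estimator, the GPH log-periodogram regression, can be sketched in a few lines; the bandwidth choice m = sqrt(n) is a conventional assumption:

```python
# A hedged sketch of the GPH estimator of the memory parameter d:
# regress the log periodogram on log(4 sin^2(lambda/2)) at low frequencies.
import numpy as np

def gph_estimate(x, m=None):
    n = len(x)
    m = m or int(np.sqrt(n))                         # number of low frequencies used
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    pgram = np.abs(np.fft.fft(x - np.mean(x))[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(lam / 2) ** 2)
    slope = np.polyfit(regressor, np.log(pgram), 1)[0]
    return -slope                                    # memory parameter d

rng = np.random.default_rng(6)
print("d for white noise (true 0):", round(gph_estimate(rng.standard_normal(4096)), 2))
```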

  14. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future water quality parameter values have been estimated. It is observed that the predictive model is useful at 95% confidence limits and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, indicating a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture and industrial use.

  15. Modeling global vector fields of chaotic systems from noisy time series with the aid of structure-selection techniques.

    PubMed

    Xu, Daolin; Lu, Fangfang

    2006-12-01

    We address the problem of reconstructing a set of nonlinear differential equations from chaotic time series. A method that combines the implicit Adams integration and the structure-selection technique of an error reduction ratio is proposed for system identification and corresponding parameter estimation of the model. The structure-selection technique identifies the significant terms from a pool of candidate functional basis terms and determines the optimal model through orthogonal characteristics on data. The technique, with the Adams integration algorithm, makes reconstruction feasible for data sampled at large time intervals. Numerical experiments on the Lorenz and Rössler systems show that the proposed strategy is effective in global vector field reconstruction from noisy time series.

  16. Polarization-resolved time-delay signatures of chaos induced by FBG-feedback in VCSEL.

    PubMed

    Zhong, Zhu-Qiang; Li, Song-Sui; Chan, Sze-Chun; Xia, Guang-Qiong; Wu, Zheng-Mao

    2015-06-15

    Polarization-resolved chaotic emission intensities from a vertical-cavity surface-emitting laser (VCSEL) subject to feedback from a fiber Bragg grating (FBG) are numerically investigated. Time-delay (TD) signatures of the feedback are examined through various means, including self-correlations of the intensity time series of individual polarizations, cross-correlation of the intensity time series between the two polarizations, and permutation entropies calculated for the individual polarizations. The results show that the TD signatures can be clearly suppressed by selecting suitable operation parameters such as the feedback strength, FBG bandwidth, and Bragg frequency. Also, in the operational parameter space, numerical maps of TD signatures and effective bandwidths are obtained, which show regions of chaotic signals with both wide bandwidths and weak TD signatures. Finally, by comparing with a VCSEL subject to feedback from a mirror, the VCSEL subject to feedback from the FBG generally shows better concealment of the TD signatures with similar, or even wider, bandwidths.
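
    The self-correlation route to a TD signature can be sketched with a linear autoregression containing a delayed feedback term as a stand-in for the simulated laser intensity (an assumption, not the paper's model); the autocorrelation peaks near the feedback delay:

```python
# A small sketch of time-delay signature extraction from self-correlation.
import numpy as np

rng = np.random.default_rng(7)
delay, n = 120, 6000
x = np.zeros(n)
for t in range(n):
    x[t] = 0.35 * x[t - 1] + 0.55 * x[t - delay] + rng.standard_normal()

x = x[delay:] - x[delay:].mean()                     # drop the transient, center
acf = np.correlate(x, x, mode='full')[x.size - 1:]   # one-sided self-correlation
acf /= acf[0]
lag = 20 + np.argmax(acf[20:300])                    # skip the zero-lag region
print("recovered TD signature at lag:", lag)         # expected near 120
```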

  17. Multiparametric evaluation of hindlimb ischemia using time-series indocyanine green fluorescence imaging.

    PubMed

    Guang, Huizhi; Cai, Chuangjian; Zuo, Simin; Cai, Wenjuan; Zhang, Jiulou; Luo, Jianwen

    2017-03-01

    Peripheral arterial disease (PAD) can cause lower limb ischemia. Quantitative evaluation of the vascular perfusion in the ischemic limb contributes to the diagnosis of PAD and the preclinical development of new drugs. In vivo time-series indocyanine green (ICG) fluorescence imaging can noninvasively monitor blood flow and has deep tissue penetration. The perfusion rate estimated from the time-series ICG images is not sufficient for the evaluation of hindlimb ischemia. Information relevant to the vascular density is also important, because angiogenesis is an essential mechanism for post-ischemic recovery. In this paper, a multiparametric evaluation method is proposed for the simultaneous estimation of multiple vascular perfusion parameters, including not only the perfusion rate but also the vascular perfusion density and the time-varying ICG concentration in veins. The method is based on a mathematical model of ICG pharmacokinetics in the mouse hindlimb. Regression analysis was performed on the time-series ICG images obtained from a dynamic reflectance fluorescence imaging system. The results demonstrate that the estimated parameters are effective for quantitatively evaluating vascular perfusion and distinguishing hypo-perfused from well-perfused tissues in the mouse hindlimb. The proposed multiparametric evaluation method could be useful for PAD diagnosis. (Graphical abstract: estimated perfusion rate and vascular perfusion density maps, and the time-varying ICG concentration in veins of the ankle region, for normal and ischemic hindlimbs.)

  18. Online analysis: Deeper insights into water quality dynamics in spring water.

    PubMed

    Page, Rebecca M; Besmer, Michael D; Epting, Jannis; Sigrist, Jürg A; Hammes, Frederik; Huggenberger, Peter

    2017-12-01

    We have studied the dynamics of water quality in three karst springs, taking advantage of new technological developments that enable high-resolution measurements of bacterial load (total cell concentration, TCC) as well as online measurements of abiotic parameters. We developed a novel data analysis approach, using self-organizing maps and non-linear projection methods, to approximate the TCC dynamics using multivariate data sets of abiotic parameter time series, thus providing a method that could be implemented in an online water quality management system for water suppliers. The TCC data, obtained over several months, provided a good basis to study the microbiological dynamics in detail. Alongside the TCC measurements, online abiotic parameter time series, including spring discharge, turbidity, spectral absorption coefficient at 254 nm (SAC254) and electrical conductivity, were obtained. High-density sampling over an extended period of time, i.e. every 45 min for 3 months, allowed a detailed analysis of the dynamics of karst spring water quality. Substantial increases in both the TCC and the abiotic parameters followed precipitation events in the catchment area. Differences between the parameter fluctuations were only apparent when analyzed at a high temporal scale. Spring discharge was always the first to react to precipitation events in the catchment area. Lag times between the onset of precipitation and a change in discharge varied between 0.2 and 6.7 h, depending on the spring and event. TCC mostly reacted second, or approximately concurrently with turbidity and SAC254, with the fastest observed reaction in the TCC time series occurring after 2.3 h. The methodological approach described here enables a better understanding of bacterial dynamics in karst springs, which can be used to estimate risks and management options to avoid contamination of the drinking water.

  19. Insights on correlation dimension from dynamics mapping of three experimental nonlinear laser systems.

    PubMed

    McMahon, Christopher J; Toomey, Joshua P; Kane, Deb M

    2017-01-01

    We have analysed large data sets consisting of tens of thousands of time series from three Type B laser systems: a semiconductor laser in a photonic integrated chip, a semiconductor laser subject to optical feedback from a long free-space-external-cavity, and a solid-state laser subject to optical injection from a master laser. The lasers can deliver either constant, periodic, pulsed, or chaotic outputs when parameters such as the injection current and the level of external perturbation are varied. The systems represent examples of experimental nonlinear systems more generally and cover a broad range of complexity including systematically varying complexity in some regions. In this work we have introduced a new procedure for semi-automatically interrogating experimental laser system output power time series to calculate the correlation dimension (CD) using the commonly adopted Grassberger-Procaccia algorithm. The new CD procedure is called the 'minimum gradient detection algorithm'. A value of minimum gradient is returned for all time series in a data set. In some cases this can be identified as a CD, with uncertainty. Applying the new 'minimum gradient detection algorithm' CD procedure, we obtained robust measurements of the correlation dimension for many of the time series measured from each laser system. By mapping the results across an extended parameter space for operation of each laser system, we were able to confidently identify regions of low CD (CD < 3) and assign these robust values for the correlation dimension. However, in all three laser systems, we were not able to measure the correlation dimension at all parts of the parameter space. Nevertheless, by mapping the staged progress of the algorithm, we were able to broadly classify the dynamical output of the lasers at all parts of their respective parameter spaces. For two of the laser systems this included displaying regions of high-complexity chaos and dynamic noise. These high-complexity regions are differentiated from regions where the time series are dominated by technical noise. This is the first time such differentiation has been achieved using a CD analysis approach. More can be known of the CD for a system when it is interrogated in a mapping context than from calculations using isolated time series. This has been shown for three laser systems and the approach is expected to be useful in other areas of nonlinear science where large data sets are available and need to be semi-automatically analysed to provide real dimensional information about the complex dynamics. The CD/minimum gradient algorithm measure provides additional information that complements other measures of complexity and relative complexity, such as the permutation entropy, and conventional physical measurements.
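
    The correlation-sum core of the Grassberger-Procaccia algorithm can be sketched as below; the embedding parameters and slope-fit range are assumptions, and the paper's 'minimum gradient detection algorithm' is considerably more careful than this:

```python
# A bare-bones Grassberger-Procaccia sketch: embed, compute C(r), fit the
# slope of log C(r) versus log r as a crude correlation-dimension estimate.
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(x, E=5, tau=2):
    n = len(x) - (E - 1) * tau
    points = np.column_stack([x[i * tau: i * tau + n] for i in range(E)])
    d = pdist(points)                                # all pairwise distances
    radii = np.logspace(np.log10(np.percentile(d, 2)),
                        np.log10(np.percentile(d, 50)), 20)
    C = np.array([np.mean(d < r) for r in radii])    # correlation sum C(r)
    return np.polyfit(np.log(radii), np.log(C), 1)[0]

t = np.linspace(0, 100, 2000)
print("CD of a sine (a closed curve, so ~1):", round(correlation_dimension(np.sin(t)), 2))
```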

  20. Insights on correlation dimension from dynamics mapping of three experimental nonlinear laser systems

    PubMed Central

    McMahon, Christopher J.; Toomey, Joshua P.

    2017-01-01

    Background We have analysed large data sets consisting of tens of thousands of time series from three Type B laser systems: a semiconductor laser in a photonic integrated chip, a semiconductor laser subject to optical feedback from a long free-space-external-cavity, and a solid-state laser subject to optical injection from a master laser. The lasers can deliver either constant, periodic, pulsed, or chaotic outputs when parameters such as the injection current and the level of external perturbation are varied. The systems represent examples of experimental nonlinear systems more generally and cover a broad range of complexity including systematically varying complexity in some regions. Methods In this work we have introduced a new procedure for semi-automatically interrogating experimental laser system output power time series to calculate the correlation dimension (CD) using the commonly adopted Grassberger-Procaccia algorithm. The new CD procedure is called the ‘minimum gradient detection algorithm’. A value of minimum gradient is returned for all time series in a data set. In some cases this can be identified as a CD, with uncertainty. Findings Applying the new ‘minimum gradient detection algorithm’ CD procedure, we obtained robust measurements of the correlation dimension for many of the time series measured from each laser system. By mapping the results across an extended parameter space for operation of each laser system, we were able to confidently identify regions of low CD (CD < 3) and assign these robust values for the correlation dimension. However, in all three laser systems, we were not able to measure the correlation dimension at all parts of the parameter space. Nevertheless, by mapping the staged progress of the algorithm, we were able to broadly classify the dynamical output of the lasers at all parts of their respective parameter spaces. For two of the laser systems this included displaying regions of high-complexity chaos and dynamic noise. These high-complexity regions are differentiated from regions where the time series are dominated by technical noise. This is the first time such differentiation has been achieved using a CD analysis approach. Conclusions More can be known of the CD for a system when it is interrogated in a mapping context than from calculations using isolated time series. This has been shown for three laser systems and the approach is expected to be useful in other areas of nonlinear science where large data sets are available and need to be semi-automatically analysed to provide real dimensional information about the complex dynamics. The CD/minimum gradient algorithm measure provides additional information that complements other measures of complexity and relative complexity, such as the permutation entropy, and conventional physical measurements.

  1. Bayesian dynamic modeling of time series of dengue disease case counts

    PubMed Central

    López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-01-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model’s short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period, using the mean absolute percentage error. The results showed that the best model includes first-order random walk time-varying coefficients for the calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.

  2. The Earth Observation Monitor - Automated monitoring and alerting for spatial time-series data based on OGC web services

    NASA Astrophysics Data System (ADS)

    Eberle, J.; Hüttich, C.; Schmullius, C.

    2014-12-01

    Spatial time series data have been freely available around the globe from earth observation satellites and meteorological stations for many years. They provide useful and important information to detect ongoing changes of the environment, but for end-users it is often too complex to extract this information from the original time series datasets. This issue led to the development of the Earth Observation Monitor (EOM), an operational framework and research project to provide simple access, analysis and monitoring tools for global spatial time series data. A multi-source data processing middleware in the backend is linked to MODIS data from the Land Processes Distributed Archive Center (LP DAAC) and Google Earth Engine as well as daily climate station data from the NOAA National Climatic Data Center. OGC Web Processing Services are used to integrate datasets from linked data providers or external OGC-compliant interfaces to the EOM. Users can either use the web portal (webEOM) or the mobile application (mobileEOM) to execute these processing services and to retrieve the requested data for a given point or polygon in user-friendly file formats (CSV, GeoTiff). Besides providing data access tools, users can also run further time series analyses such as trend calculation, breakpoint detection, or the derivation of phenological parameters from vegetation time series data. Furthermore, data from climate stations can be aggregated over a given time interval. Calculated results can be visualized in the client and downloaded for offline usage. Automated monitoring and alerting of the time series data integrated by the user is provided by an OGC Sensor Observation Service with a coupled OGC Web Notification Service. Users can decide which datasets and parameters are monitored with a given filter expression (e.g., precipitation value higher than x millimeters per day, occurrence of a MODIS Fire point, detection of a time series anomaly). Datasets integrated in the SOS service are updated in near-real time based on the linked data providers mentioned above. An alert is automatically pushed to the user if the new data meet the conditions of the registered filter expression. This monitoring service is available on the web portal with alerting by email, and within the mobile app with alerting by email and push notification.

  3. Neutron monitors and muon detectors for solar modulation studies: 2. ϕ time series

    NASA Astrophysics Data System (ADS)

    Ghelfi, A.; Maurin, D.; Cheminet, A.; Derome, L.; Hubert, G.; Melot, F.

    2017-08-01

    The level of solar modulation at different times (related to the solar activity) is a central question of solar and galactic cosmic-ray physics. In the first paper of this series, we established a correspondence between the uncertainties on ground-based detector count rates and the parameter ϕ (modulation level in the force-field approximation) reconstructed from these count rates. In this second paper, we detail a procedure to obtain a reference ϕ time series from neutron monitor data. We show that we can have an unbiased and accurate ϕ reconstruction (Δϕ / ϕ ≃ 10 %). We also discuss the potential of Bonner sphere spectrometers and muon detectors to provide ϕ time series. Two by-products of this calculation are updated ϕ values for the cosmic-ray database and a web interface to retrieve and plot ϕ from the 1950s to today (http://lpsc.in2p3.fr/crdb).

  4. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  5. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    PubMed

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
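
    The core idea can be sketched as a small linear program: express a query delay vector as a convex combination of stored delay vectors with explicit L1 approximation errors, then predict with the same weights; all settings below are assumptions, not the authors' implementation:

```python
# A toy barycentric-coordinates predictor via scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

def barycentric_predict(library, targets, query):
    """Find w >= 0, sum(w) = 1 with library.T @ w close to query; return targets @ w."""
    m, dim = library.shape
    # Variables: [w (m weights), s (dim slack errors)]; minimize the sum of slacks.
    c = np.concatenate([np.zeros(m), np.ones(dim)])
    A_ub = np.block([[library.T, -np.eye(dim)],      # library.T @ w - s <= query
                     [-library.T, -np.eye(dim)]])    # -library.T @ w - s <= -query
    b_ub = np.concatenate([query, -query])
    A_eq = np.concatenate([np.ones(m), np.zeros(dim)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + dim))
    return targets @ res.x[:m]

rng = np.random.default_rng(8)
x = np.sin(np.linspace(0, 40, 400)) + 0.05 * rng.standard_normal(400)
library = np.column_stack([x[:-3], x[1:-2], x[2:-1]])    # 3-step delay vectors
targets = x[3:]                                          # value following each vector
print("one-step prediction:", round(barycentric_predict(library, targets, x[-3:]), 3))
```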

  6. Application of Time-series Model to Predict Groundwater Quality Parameters for Agriculture: (Plain Mehran Case Study)

    NASA Astrophysics Data System (ADS)

    Mehrdad Mirsanjari, Mir; Mohammadyari, Fatemeh

    2018-03-01

    Groundwater is an important water source, particularly in arid and semi-arid regions with deficient surface water. Forecasting of hydrological variables is a suitable tool in water resources management, and time series models are an efficient means in this forecasting process. In this study, data on qualitative parameters (electrical conductivity and sodium adsorption ratio) of 17 groundwater wells in the Mehran Plain were used to model the trend of parameter changes over time. Using the selected models, the qualitative parameters of the groundwater are predicted for the next seven years. Data from 2003 to 2016 were collected and fitted with AR, MA, ARMA, ARIMA and SARIMA models. Afterward, the best model was determined using the Akaike information criterion (AIC) and the correlation coefficient. After modeling the parameters, maps of agricultural land use in 2016 and 2023 were generated and the changes between these years were studied. Based on the results, the average predicted SAR (Sodium Adsorption Ratio) in all wells will increase by 2023 compared to 2016. The average EC (Electrical Conductivity) will increase in the ninth and fifteenth wells and decrease in the other wells. The results indicate that the quality of groundwater for agriculture in the Mehran Plain will decline over the next seven years.
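
    The order-selection step can be sketched as a loop over candidate ARIMA orders keeping the lowest-AIC fit; the candidate orders and the synthetic EC series below are assumptions:

```python
# A hedged sketch of AIC-based ARIMA order selection and a 7-year forecast.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(9)
ec = 1500 + np.cumsum(rng.standard_normal(168))      # monthly EC-like series

best = None
for order in [(1, 1, 0), (0, 1, 1), (1, 1, 1), (2, 1, 2)]:
    res = ARIMA(ec, order=order).fit()
    if best is None or res.aic < best[1]:
        best = (order, res.aic, res)

order, aic, res = best
print("selected order:", order, "AIC:", round(aic, 1))
forecast = res.forecast(steps=84)                    # seven years of monthly steps
```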

  7. Cluster analysis of dynamic contrast enhanced MRI reveals tumor subregions related to locoregional relapse for cervical cancer patients.

    PubMed

    Torheim, Turid; Groendahl, Aurora R; Andersen, Erlend K F; Lyng, Heidi; Malinen, Eirik; Kvaal, Knut; Futsaether, Cecilia M

    2016-11-01

    Solid tumors are known to be spatially heterogeneous. Detection of treatment-resistant tumor regions can improve clinical outcome, by enabling implementation of strategies targeting such regions. In this study, K-means clustering was used to group voxels in dynamic contrast enhanced magnetic resonance images (DCE-MRI) of cervical cancers. The aim was to identify clusters reflecting treatment resistance that could be used for targeted radiotherapy with a dose-painting approach. Eighty-one patients with locally advanced cervical cancer underwent DCE-MRI prior to chemoradiotherapy. The resulting image time series were fitted to two pharmacokinetic models, the Tofts model (yielding parameters K_trans and ν_e) and the Brix model (A_Brix, k_ep and k_el). K-means clustering was used to group similar voxels based on either the pharmacokinetic parameter maps or the relative signal increase (RSI) time series. The associations between voxel clusters and treatment outcome (measured as locoregional control) were evaluated using the volume fraction or the spatial distribution of each cluster. One voxel cluster based on the RSI time series was significantly related to locoregional control (adjusted p-value 0.048). This cluster consisted of low-enhancing voxels. We found that tumors with poor prognosis had this RSI-based cluster gathered into few patches, making this cluster a potential candidate for targeted radiotherapy. None of the voxel clusters based on Tofts or Brix parameter maps were significantly related to treatment outcome. We identified one group of tumor voxels significantly associated with locoregional relapse that could potentially be used for dose painting. This tumor voxel cluster was identified using the raw MRI time series rather than the pharmacokinetic maps.
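
    The voxel-clustering step can be sketched with scikit-learn by stacking each voxel's RSI time series as a row and grouping the rows with K-means; the array shapes, uptake curves and number of clusters below are assumptions:

```python
# A schematic of K-means clustering of synthetic voxel-wise RSI time series.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(10)
n_voxels, n_timepoints = 2000, 60
uptake = 1 - np.exp(-np.linspace(0, 5, n_timepoints))        # generic enhancement shape
rsi = rng.random((n_voxels, 1)) * uptake                     # voxel-specific amplitude
rsi += 0.02 * rng.standard_normal((n_voxels, n_timepoints))  # acquisition noise

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(rsi)
fractions = np.bincount(kmeans.labels_) / n_voxels           # per-cluster volume fraction
print("cluster volume fractions:", fractions.round(3))
```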

  8. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  9. Assessment of MODIS BRDF/Albedo Model Parameters (MCD43A1 Collection 6) for directional reflectance retrieval

    NASA Astrophysics Data System (ADS)

    Che, X.; Feng, M.; Sexton, J. O.; Channan, S.; Yang, Y.; Song, J.

    2017-12-01

    Reflection of solar radiation from Earth's surface is the basis for retrieving many higher-level terrestrial attributes such as vegetation indices and albedo. However, reflectance varies with the illumination and viewing geometry of observation (the Bi-directional Reflectance Distribution Function (BRDF)) even with constant surface properties, and correcting for these artifacts increases the precision of comparisons of images and time series acquired from satellites with different illumination and observation geometries. The operational MODIS processing inverts MODIS BRDF/Albedo Model Parameters (MCD43A1) to retrieve directional reflectance at any solar and view angles, and recently MCD43A1 Collection 6 was updated and distributed. We quantified the ability of MCD43A1 Collection 6 to retrieve directional reflectance compared to Collection 5 and tested whether land surface changes over a 16-day composite period affect time series of directional reflectance. Correcting the Terra MODIS daily Surface Reflectance (MOD09GA) to the illumination and view geometries of the coincident Aqua MODIS daily Surface Reflectance (MYD09GA), MCD43A4 Collection 6 and Landsat-5 TM imagery shows that the BRDF-corrected results using MCD43A1 Collection 6 are more consistent, with higher R2 (0.63-0.955), slopes closer to unity (0.718-0.955), and lower RMSD (0.422-3.142) and MAE (0.282-1.735), reduced by about 10% relative to Collection 5. A simple parameter calibration to evaluate the variability of the roughness (R) and volumetric (V) BRDF parameters for MCD43A1 Collection 6 shows that the assumption of stable land surface characteristics over the 16-day composite period, used for the BRDF parameter inversion, is plausible, in spite of only small improvements in the directional reflectance and BRDF parameter time series. The larger fluctuations in MCD43A1 Collection 6 do not have a discernible impact on the reflectance time series. All of these results show that the MCD43A1 Collection 6 product, with its daily temporal resolution, is a valuable product representing the anisotropy of surface features and is more accurate for directional reflectance derivation at any solar and view geometry than Collection 5, which holds great potential for Earth science research.

  10. Time Series Decomposition into Oscillation Components and Phase Estimation.

    PubMed

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates due to noise. Time series decomposition is accomplished with this model in a manner analogous to the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and frequencies of the oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
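
    A single oscillation component of this type can be written as a two-dimensional Gaussian linear state-space model and extracted with a Kalman filter; a minimal sketch follows, with the damping a, frequency f, and noise levels assumed rather than estimated by empirical Bayes as in the paper.

```python
# One stochastic-oscillator component: a damped rotation of a 2-D state plus
# process noise, observed in the first coordinate; filtered with a Kalman filter.
import numpy as np

rng = np.random.default_rng(0)

def kalman_oscillator(y, a=0.98, f=0.1, q=0.05, r=0.5):
    th = 2 * np.pi * f
    F = a * np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
    H = np.array([[1.0, 0.0]])
    Q, R = q * np.eye(2), np.array([[r]])
    x, P, states = np.zeros(2), np.eye(2), []
    for yt in y:
        x, P = F @ x, F @ P @ F.T + Q            # predict
        S = H @ P @ H.T + R                      # innovation variance (1x1)
        K = P @ H.T / S                          # Kalman gain (2x1)
        x = x + (K * (yt - H @ x)).ravel()       # update mean
        P = P - K @ H @ P                        # update covariance
        states.append(x.copy())
    return np.array(states)

t = np.arange(500)
phase_noise = 0.1 * np.cumsum(rng.standard_normal(500))   # random frequency modulation
y = np.sin(2 * np.pi * 0.1 * t + phase_noise) + 0.5 * rng.standard_normal(500)
xs = kalman_oscillator(y)
phase = np.arctan2(xs[:, 1], xs[:, 0])           # data-driven phase estimate
```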

  11. Multifractal analysis and the NYHA index

    NASA Astrophysics Data System (ADS)

    Muñoz-Diosdado, A.; Ramírez-Hernández, L.; Aguilar-Molina, A. M.; Zamora-Justo, J. A.; Gutiérrez-Calleja, R. A.; Virgilio-González, C. D.

    2014-11-01

    We performed multifractal analysis of heartbeat interval time series of healthy persons and patients with congestive heart failure (CHF). To analyze circadian rhythm variations, we examined 24-hour records as well as six-hour segments recorded while the subject was asleep and six-hour segments recorded while awake. A decrease in the degree of multifractality occurs in the heartbeat interval time series of CHF patients. This loss of multifractality is associated with a narrowing of the spectrum and a loss of complexity in the signal. Multifractal spectra of healthy persons are right-skewed, but the spectra of CHF patients tend to be symmetrical and in some cases are left-skewed. To determine the therapy for CHF patients, cardiologists use an index proposed by the NYHA (New York Heart Association). There is a correlation between this index and the multifractal parameters: the higher the NYHA index, the narrower and more symmetrical the multifractal spectrum. In contrast, patients with an NYHA index equal to 1 have multifractal parameters similar to those of healthy subjects.

  12. Investigation of relationships between parameters of solar nano-flares and solar activity

    NASA Astrophysics Data System (ADS)

    Safari, Hossein; Javaherian, Mohsen; Kaki, Bardia

    2016-07-01

    Solar flares are important coronal events that originate in solar magnetic activity. They release large amounts of energy right after being triggered. Flare prediction can play a major role in avoiding potential damage on Earth. Here, to interpret solar large-scale events (e.g., flares), we investigate relationships between small-scale events (nano-flares) and large-scale events. In our method, the intensity time series of nano-flares are simulated using a Monte Carlo method. Then, full-disk solar images taken at 171 angstroms, recorded by SDO/AIA, are employed. Parts of the solar disk (quiet Sun (QS), coronal holes (CHs), and active regions (ARs)) are cropped and the time series of these regions are extracted. To compare the simulated intensity time series of nano-flares with the intensity time series extracted from different parts of the Sun, artificial neural networks are employed. In this way, we are able to extract physical parameters of nano-flares, such as the kick and decay-rate lifetimes and the index of their power-law distributions. The variation of the power-law index within the QS and CHs is similar to that within ARs. Thus, by observing a small part of the Sun, we can follow the course of solar activity.
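
    One plausible reading of the simulation step is sketched below: nano-flares as Poisson-arriving pulses with a short rise ("kick"), an exponential decay, and power-law amplitudes. All rates, lifetimes, and the power-law index are placeholders, not fitted SDO/AIA values.

```python
# Illustrative Monte Carlo simulation of a nano-flare intensity time series.
import numpy as np

rng = np.random.default_rng(2)
n, rate, tau_rise, tau_decay, alpha = 10_000, 0.02, 3.0, 20.0, 2.0

events = rng.random(n) < rate                       # Poisson-like arrivals
u = rng.random(events.sum())
amps = (1 - u) ** (-1 / (alpha - 1))                # power-law amplitudes, x >= 1

t = np.arange(200)
kernel = np.where(t < tau_rise, t / tau_rise,       # linear "kick" phase
                  np.exp(-(t - tau_rise) / tau_decay))  # exponential decay

impulses = np.zeros(n)
impulses[np.where(events)[0]] = amps
intensity = np.convolve(impulses, kernel)[:n]       # simulated light curve
```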

  13. Integrating a Linear Signal Model with Groundwater and Rainfall time-series on the Characteristic Identification of Groundwater Systems

    NASA Astrophysics Data System (ADS)

    Chen, Yu-Wen; Wang, Yetmen; Chang, Liang-Cheng

    2017-04-01

    Groundwater resources play a vital role in regional water supply. To avoid irreversible environmental impacts such as land subsidence, characteristic identification of the groundwater system is crucial before sustainable management of the groundwater resource. This study proposes a signal-processing approach to identify the character of groundwater systems based on long-term hydrologic observations, including groundwater levels and rainfall. The study process contains two steps. First, a linear signal model (LSM) is constructed and calibrated to simulate the variation of underground hydrology based on the time series of groundwater levels and rainfall. The mass balance equation of the proposed LSM contains three major terms: the net rate of horizontal exchange, the rate of rainfall recharge, and the rate of pumpage; four parameters require calibration. Because reliable pumpage records are rare, the time-variant groundwater amplitudes at daily frequency (P), calculated by the short-time Fourier transform (STFT), are used as linear indicators of pumpage instead of pumpage records. Time series obtained from 39 observation wells and 50 rainfall stations in and around the study area, the Pingtung Plain, are paired for model construction. Second, the well-calibrated parameters of the linear signal model can be used to interpret the characteristics of the groundwater system. For example, the rainfall recharge coefficient (γ) represents the transformation ratio between rainfall intensity and groundwater level rise. An area around an observation well with higher γ indicates that the saturated zone there is easily affected by rainfall events and that the material of the unsaturated zone might be gravel or coarse sand with a high infiltration ratio. Considering the spatial distribution of γ, its values decrease from the upstream to the downstream reaches of the major rivers and are also correlated with the spatial distribution of the grain size of the surface soil. Via the time series of groundwater levels and rainfall, the well-calibrated parameters of the LSM are able to identify the characteristics of the aquifer.

  14. TaiWan Ionospheric Model (TWIM) prediction based on time series autoregressive analysis

    NASA Astrophysics Data System (ADS)

    Tsai, L. C.; Macalalad, Ernest P.; Liu, C. H.

    2014-10-01

    As described in a previous paper, a three-dimensional ionospheric electron density (Ne) model has been constructed from vertical Ne profiles retrieved from the FormoSat3/Constellation Observing System for Meteorology, Ionosphere, and Climate GPS radio occultation measurements and worldwide ionosonde foF2 and foE data, and named the TaiWan Ionospheric Model (TWIM). The TWIM exhibits vertically fitted α-Chapman-type layers with distinct F2, F1, E, and D layers, and surface spherical harmonic approaches for the fitted layer parameters including peak density, peak density height, and scale height. To improve the TWIM into a real-time model, we have developed a time series autoregressive model to forecast short-term TWIM coefficients. The time series of TWIM coefficients are considered as realizations of stationary stochastic processes within a processing window of 30 days. The autocorrelation coefficients computed within this window are used to derive the autoregressive parameters and then forecast the TWIM coefficients, based on the least squares method and the Lagrange multiplier technique. The forecast root-mean-square relative TWIM coefficient errors are generally <30% for 1 day predictions. The forecast TWIM values of foE and foF2 are also compared and evaluated using worldwide ionosonde data.
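
    A generic sketch of the forecasting step follows: fit an AR(p) model to a 30-day window of one coefficient series by least squares and predict one day ahead. The window, order, and series are assumptions, and the Lagrange-multiplier refinement used in the paper is omitted.

```python
# Least-squares AR(p) fit and multi-step forecast for a short coefficient window.
import numpy as np

def ar_forecast(x, p=3, steps=1):
    """Fit AR(p) by least squares and forecast `steps` values ahead."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    phi, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    hist = list(x)
    for _ in range(steps):
        hist.append(phi @ hist[-1:-p - 1:-1])    # most recent values first
    return np.array(hist[len(x):])

window = np.sin(np.linspace(0, 6, 30)) + 0.1 * np.random.randn(30)  # 30 days
print(ar_forecast(window, p=3, steps=1))         # 1-day-ahead coefficient
```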

  15. Frequency Analysis of Modis Ndvi Time Series for Determining Hotspot of Land Degradation in Mongolia

    NASA Astrophysics Data System (ADS)

    Nasanbat, E.; Sharav, S.; Sanjaa, T.; Lkhamjav, O.; Magsar, E.; Tuvdendorj, B.

    2018-04-01

    This study examines whether MODIS NDVI satellite image time series can be used to determine hotspots of land degradation across Mongolia. The Mann-Kendall trend test was applied to a 16-year record of 16-day MODIS NDVI composites covering the growing seasons (May to September) from 2000 to 2016. A frequency analysis of the resulting NDVI residual trend pattern enabled the determination of negative and positive changes in photosynthetically healthy vegetation. From the negative and positive values we generated a map of significant trends. We also examined long-term meteorological parameters for the same period. Positive and negative NDVI trends concurred with changes in land cover types representing an improvement or a degradation in vegetation, respectively. In addition, changes in the climate parameters precipitation and air temperature over the same period appear to have affected large areas showing NDVI trends. The time series trend analysis applied here successfully determined hotspots of improvement and of degradation due to land degradation and desertification.
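
    A minimal per-pixel Mann-Kendall test is sketched below (ties are ignored in the variance term for brevity); the NDVI values are invented.

```python
# Mann-Kendall trend test: S statistic, normal approximation, two-sided p-value.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0          # no-ties variance
    z = (s - np.sign(s)) / np.sqrt(var) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

ndvi = np.array([0.42, 0.40, 0.41, 0.38, 0.37, 0.35, 0.36, 0.33,
                 0.34, 0.31, 0.30, 0.29, 0.28, 0.27, 0.26, 0.25])  # 16 seasons
print(mann_kendall(ndvi))   # a significant negative z flags a degradation pixel
```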

  16. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    PubMed

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN using experimental time series having limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using an RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining, by a weighted majority voting rule, the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.

  17. Monte Carlo Method for Determining Earthquake Recurrence Parameters from Short Paleoseismic Catalogs: Example Calculations for California

    USGS Publications Warehouse

    Parsons, Tom

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.

  18. Monte Carlo method for determining earthquake recurrence parameters from short paleoseismic catalogs: Example calculations for California

    USGS Publications Warehouse

    Parsons, T.

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
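
    A hedged sketch of the Monte Carlo idea described in these two records: draw recurrence-PDF parameters over wide ranges, score each draw by the likelihood of a short interval series, and keep a ranked set. A lognormal recurrence PDF and synthetic intervals are used purely for illustration; the papers treat several time-independent and time-dependent forms.

```python
# Rank wide-ranging (mean, cov) draws by the likelihood of observed intervals.
import numpy as np
from scipy.stats import lognorm

intervals = np.array([110.0, 85.0, 140.0, 95.0, 120.0])  # synthetic, years

rng = np.random.default_rng(3)
draws = []
for _ in range(20_000):
    mean = rng.uniform(50, 300)            # mean recurrence interval, years
    cov = rng.uniform(0.1, 1.5)            # coefficient of variation
    sigma = np.sqrt(np.log(1 + cov**2))    # lognormal shape from (mean, cov)
    scale = mean / np.exp(sigma**2 / 2)    # lognormal scale (median)
    loglike = lognorm.logpdf(intervals, s=sigma, scale=scale).sum()
    draws.append((loglike, mean, cov))

draws.sort(reverse=True)                   # ranked parameter distribution
print(draws[:5])                           # most likely (mean, cov) pairs
```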

  19. On the forecast of runoff based on the harmonic analysis of time series of precipitation in the catchment area

    NASA Astrophysics Data System (ADS)

    Cherednichenko, A. V.; Cherednichenko, A. V.; Cherednichenko, V. S.

    2018-01-01

    It is shown that a significant connection exists between the most important harmonics, extracted through harmonic analysis of the time series of precipitation over the catchment areas of rivers, and the amount of runoff. This allowed us to predict the flow volume for a period of up to 20 years, assuming that the main parameters of the harmonics are preserved over the forecast interval. The results of such a forecast for three river basins of Kazakhstan are presented.
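
    The extrapolation idea can be sketched with a discrete Fourier transform: keep the strongest harmonics of the precipitation series and extend them beyond the record, assuming, as the study does, that their parameters persist. The series and harmonic count are synthetic placeholders.

```python
# Extract dominant harmonics via FFT and extrapolate them past the record end.
import numpy as np

rng = np.random.default_rng(4)
n, horizon, n_harm = 60, 20, 3                 # 60-yr record, 20-yr forecast
t = np.arange(n)
precip = 500 + 80 * np.sin(2 * np.pi * t / 11) \
             + 40 * np.sin(2 * np.pi * t / 4.5) + 25 * rng.standard_normal(n)

spec = np.fft.rfft(precip)
keep = np.argsort(np.abs(spec[1:]))[::-1][:n_harm] + 1   # strongest harmonics

tt = np.arange(n + horizon)
forecast = np.full(tt.shape, spec[0].real / n)           # mean level
for k in keep:
    amp, phase = 2 * np.abs(spec[k]) / n, np.angle(spec[k])
    forecast += amp * np.cos(2 * np.pi * k * tt / n + phase)
print(forecast[n:])                                      # extrapolated driver
```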

  20. Graphic analysis and multifractal on percolation-based return interval series

    NASA Astrophysics Data System (ADS)

    Pei, A. Q.; Wang, J.

    2015-05-01

    A financial time series model based on an oriented percolation system (one of the statistical physics systems) is developed and investigated. The nonlinear and statistical behaviors of the return-interval time series are studied for the proposed model and for the real stock market by applying the visibility graph (VG) and multifractal detrended fluctuation analysis (MF-DFA). We investigate the fluctuation behaviors of the return intervals of the model for different parameter settings, and comparatively study these fluctuation patterns with those of real financial data for different threshold values. The empirical research in this work exhibits multifractal features for the corresponding financial time series. Further, the VGs derived from both the simulated data and the real data show small-world, hierarchical, and highly clustered behavior, with power-law tails in the degree distributions.
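
    A minimal natural visibility graph construction (brute force, for clarity): nodes are time points, and two points are linked when the straight line between them stays above all intermediate samples.

```python
# Natural visibility graph of a time series; degree sequence as output.
import numpy as np

def visibility_graph(x):
    n = len(x)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            slope = (x[j] - x[i]) / (j - i)
            # link i-j if every intermediate sample lies below the i-j line
            if all(x[k] < x[i] + slope * (k - i) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

series = np.random.default_rng(5).pareto(3.0, 200)   # heavy-tailed toy series
edges = visibility_graph(series)
deg = np.bincount(np.array(edges).ravel(), minlength=len(series))
# A power-law tail in the distribution of `deg` is the signature reported above.
```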

  1. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy

    1993-01-01

    Climate changes traditionally have been detected from long series of observations and long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1, 2, ..., j new observations. Individual-value probability products ('likelihoods') are then calculated which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.

  2. Automated Land Cover Change Detection and Mapping from Hidden Parameter Estimates of Normalized Difference Vegetation Index (NDVI) Time-Series

    NASA Astrophysics Data System (ADS)

    Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.

    2017-12-01

    Multitemporal observations acquired frequently by satellites with short revisit periods, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Due to the inherent seasonality of the land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use eight-day MODIS composites to create a Normalized Difference Vegetation Index (NDVI) time series spanning ten years. Improved hidden-parameter estimates of the nonlinear harmonic NDVI model are obtained using the particle filter (PF), a sequential Monte Carlo estimator. The nonlinear estimation based on the PF is shown to improve parameter estimation for different land cover types compared to existing techniques that use the Extended Kalman Filter (EKF) and therefore require linearization of the harmonic model. As these parameters are representative of a given land cover, their applicability to near-real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by treating change as a rare-class problem, and this approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated against high-spatial-resolution change maps of the given regions.
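
    A hedged particle-filter sketch for a seasonal NDVI model of the form y_t = m + A sin(2*pi*t/46 + phi) + noise (46 eight-day composites per year); the harmonic form, noise levels, and particle count are assumptions, and the study's hidden-parameter model is richer.

```python
# Bootstrap particle filter estimating the hidden (m, A, phi) of a noisy harmonic.
import numpy as np

rng = np.random.default_rng(6)
T, N = 460, 2000                                  # ten years, 2000 particles
t = np.arange(T)
truth = 0.4 + 0.25 * np.sin(2 * np.pi * t / 46 + 1.0)
y = truth + 0.03 * rng.standard_normal(T)

# particles: columns (m, A, phi), initialized from broad priors
p = np.column_stack([rng.uniform(0, 1, N), rng.uniform(0, 0.5, N),
                     rng.uniform(-np.pi, np.pi, N)])
w = np.full(N, 1.0 / N)
for k in range(T):
    p += 0.002 * rng.standard_normal(p.shape)     # random-walk parameter drift
    pred = p[:, 0] + p[:, 1] * np.sin(2 * np.pi * k / 46 + p[:, 2])
    w = np.maximum(w * np.exp(-0.5 * ((y[k] - pred) / 0.03) ** 2), 1e-300)
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < N / 2:              # effective sample size low
        idx = rng.choice(N, size=N, p=w)          # multinomial resampling
        p, w = p[idx], np.full(N, 1.0 / N)

print(p.mean(axis=0))   # posterior mean of the hidden parameters (m, A, phi)
```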

  3. Impacts of GNSS position offsets on global frame stability

    NASA Astrophysics Data System (ADS)

    Griffiths, Jake; Ray, Jim

    2015-04-01

    Positional offsets appear in Global Navigation Satellite System (GNSS) time series for a variety of reasons. Antenna or radome changes are the most common cause of these discontinuities. Many others are from earthquakes, receiver changes, and different anthropogenic modifications at or near the stations. Some jumps appear for unknown or undocumented reasons. Accurate determination of station velocities, and therefore of geophysical parameters and terrestrial reference frames, requires that positional offsets be correctly found and compensated. Williams (2003) found that undetected offsets introduce a random walk error component in individual station time series. The topic of detecting positional offsets has received considerable attention in recent years (e.g., the Detection of Offsets in GPS Experiment; DOGEx), and most research groups using GNSS have adopted a mix of manual and automated methods for finding them. The removal of a positional offset from a time series is usually handled by estimating the average station position on both sides of the discontinuity. Except for large earthquake events, the velocity is usually assumed constant and continuous across the positional jump. This approach is sufficient in the absence of time-correlated errors. However, GNSS time series contain periodic and power-law (flicker) errors. In this paper, we evaluate the impact on individual station results and on the overall stability of the global reference frame of adding increasing numbers of positional discontinuities. We use the International GNSS Service (IGS) weekly SINEX files and iteratively insert positional offset parameters. Each iteration includes a restacking of the modified SINEX files using the CATREF software from the Institut National de l'Information Géographique et Forestière (IGN). Comparisons of successive stacked solutions are used to assess the impacts on the time series of x-pole and y-pole offsets, along with changes in regularized position and secular velocity for stations with more than 2.5 years of data. Our preliminary results indicate that the change in polar motion scatter is logarithmic with increasing numbers of discontinuities. The best-fit natural logarithm to the changes in scatter for the x-pole has R² = 0.58; the fit for the y-pole series has R² = 0.99. From these empirical functions, we find that polar motion scatter increases from zero when the total rate of discontinuities exceeds 0.2 (x-pole) and 1.3 (y-pole) per station, on average (the IGS has 0.65 per station). Thus, the presence of position offsets in GNSS station time series is likely already a contributor to IGS polar motion inaccuracy and global frame instability. Impacts on station position and velocity estimates depend on the noise features found in that station's positional time series. For instance, larger changes in velocity occur for stations with shorter and noisier data spans. This is because an added discontinuity parameter for an individual station time series can induce changes in the average position on both sides of the break. We will expand on these results and consider remaining questions about the role of velocity discontinuities and the effects caused by non-core reference frame stations.
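
    The standard offset handling described above reduces to a small least-squares problem: estimate a mean position on each side of a known break while keeping a single velocity across it. Epochs, noise, and the break date below are synthetic.

```python
# Estimate intercept, velocity, and an offset step at a documented discontinuity.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(2010.0, 2015.0, 260)              # weekly positions, years
t_break = 2012.5                                  # documented antenna change
pos = 3.0 * (t - 2010) + 12.0 * (t >= t_break) + rng.normal(0, 1.5, t.size)

# design matrix columns: intercept, velocity, offset step at the break
A = np.column_stack([np.ones_like(t), t - 2010, (t >= t_break).astype(float)])
(intercept, velocity, offset), *_ = np.linalg.lstsq(A, pos, rcond=None)
print(velocity, offset)   # units depend on the series (e.g., mm/yr and mm)
```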

  4. Alpine Grassland Phenology as Seen in AVHRR, VEGETATION, and MODIS NDVI Time Series - a Comparison with In Situ Measurements

    PubMed Central

    Fontana, Fabio; Rixen, Christian; Jonas, Tobias; Aberegg, Gabriel; Wunderle, Stefan

    2008-01-01

    This study evaluates the ability to track grassland growth phenology in the Swiss Alps with NOAA-16 Advanced Very High Resolution Radiometer (AVHRR) Normalized Difference Vegetation Index (NDVI) time series. Three growth parameters from 15 alpine and subalpine grassland sites were investigated between 2001 and 2005: Melt-Out (MO), Start Of Growth (SOG), and End Of Growth (EOG). We tried to estimate these phenological dates from yearly NDVI time series by identifying the dates where certain fractions (thresholds) of the maximum annual NDVI amplitude were crossed for the first time. For this purpose, the NDVI time series were smoothed using two commonly used approaches (Fourier adjustment or, alternatively, Savitzky-Golay filtering). Moreover, the AVHRR NDVI time series were compared against data from the newer generation sensors SPOT VEGETATION and TERRA MODIS. All remote sensing NDVI time series were highly correlated with single-point ground measurements and therefore accurately represented the growth dynamics of alpine grassland. The newer generation sensors VGT and MODIS performed better than AVHRR; however, the differences were minor. Thresholds for the determination of MO, SOG, and EOG were similar across sensors and smoothing methods, which demonstrates the robustness of the results. For our purpose, the Fourier adjustment algorithm created better NDVI time series than the Savitzky-Golay filter, since the latter appeared to be more sensitive to noisy NDVI time series. The findings show that the application of various thresholds to NDVI time series allows the observation of the temporal progression of vegetation growth at the selected sites with high consistency. Hence, we believe that our study helps to better understand large-scale vegetation growth dynamics above the tree line in the European Alps. PMID:27879852

  5. Alpine Grassland Phenology as Seen in AVHRR, VEGETATION, and MODIS NDVI Time Series - a Comparison with In Situ Measurements.

    PubMed

    Fontana, Fabio; Rixen, Christian; Jonas, Tobias; Aberegg, Gabriel; Wunderle, Stefan

    2008-04-23

    This study evaluates the ability to track grassland growth phenology in the Swiss Alps with NOAA-16 Advanced Very High Resolution Radiometer (AVHRR) Normalized Difference Vegetation Index (NDVI) time series. Three growth parameters from 15 alpine and subalpine grassland sites were investigated between 2001 and 2005: Melt-Out (MO), Start Of Growth (SOG), and End Of Growth (EOG). We tried to estimate these phenological dates from yearly NDVI time series by identifying the dates where certain fractions (thresholds) of the maximum annual NDVI amplitude were crossed for the first time. For this purpose, the NDVI time series were smoothed using two commonly used approaches (Fourier adjustment or, alternatively, Savitzky-Golay filtering). Moreover, the AVHRR NDVI time series were compared against data from the newer generation sensors SPOT VEGETATION and TERRA MODIS. All remote sensing NDVI time series were highly correlated with single-point ground measurements and therefore accurately represented the growth dynamics of alpine grassland. The newer generation sensors VGT and MODIS performed better than AVHRR; however, the differences were minor. Thresholds for the determination of MO, SOG, and EOG were similar across sensors and smoothing methods, which demonstrates the robustness of the results. For our purpose, the Fourier adjustment algorithm created better NDVI time series than the Savitzky-Golay filter, since the latter appeared to be more sensitive to noisy NDVI time series. The findings show that the application of various thresholds to NDVI time series allows the observation of the temporal progression of vegetation growth at the selected sites with high consistency. Hence, we believe that our study helps to better understand large-scale vegetation growth dynamics above the tree line in the European Alps.
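
    The threshold logic used in these two records can be sketched as follows, assuming one year of 8-day NDVI composites; the smoothing window and the MO/SOG/EOG fractions are placeholders, not the calibrated values of the study.

```python
# Smooth one season of NDVI and find first crossings of amplitude fractions.
import numpy as np
from scipy.signal import savgol_filter

doy = np.arange(1, 366, 8)                              # 8-day composites
ndvi = 0.2 + 0.5 * np.exp(-((doy - 200) / 60.0) ** 2) \
       + 0.05 * np.random.default_rng(8).standard_normal(doy.size)
smooth = savgol_filter(ndvi, window_length=9, polyorder=2)

def first_crossing(frac):
    """Day of year when `frac` of the annual NDVI amplitude is first crossed."""
    level = smooth.min() + frac * (smooth.max() - smooth.min())
    return int(doy[np.argmax(smooth >= level)])         # first True index

for name, frac in [("MO", 0.1), ("SOG", 0.25), ("EOG", 0.9)]:
    print(name, first_crossing(frac))
```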

  6. Parameters estimation using the first passage times method in a jump-diffusion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khaldi, K., E-mail: kkhaldi@umbb.dz; LIMOSE Laboratory, Boumerdes University, 35000; Meddahi, S., E-mail: samia.meddahi@gmail.com

    2016-06-02

    This paper makes two contributions: (1) it presents a new method, the first passage time (FPT) method generalized to all passage times (the GPT method), for estimating the parameters of a stochastic jump-diffusion process; (2) using a time series of gold share prices, it compares the empirical estimation and forecasting results obtained with the GPT method against those obtained with the method of moments and the FPT method applied to the Merton jump-diffusion (MJD) model.
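
    For reference, a standard Monte Carlo simulation of the Merton jump-diffusion model mentioned above; all parameter values are illustrative, not the paper's gold-price estimates.

```python
# Simulate one MJD path: geometric Brownian motion plus compound Poisson jumps.
import numpy as np

rng = np.random.default_rng(9)
S0, mu, sigma = 1200.0, 0.05, 0.15          # initial price, drift, volatility
lam, mj, sj = 1.5, -0.02, 0.08              # jump rate and log-jump moments
n, dt = 252, 1.0 / 252                      # one year of daily steps

z = rng.standard_normal(n)
n_jumps = rng.poisson(lam * dt, n)          # jumps per step
jump = n_jumps * mj + np.sqrt(n_jumps) * sj * rng.standard_normal(n)

# drift compensated so that jumps do not bias the expected return
log_ret = (mu - 0.5 * sigma**2 - lam * (np.exp(mj + 0.5 * sj**2) - 1)) * dt \
          + sigma * np.sqrt(dt) * z + jump
prices = S0 * np.exp(np.cumsum(log_ret))
```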

  7. Bayesian Analysis of Non-Gaussian Long-Range Dependent Processes

    NASA Astrophysics Data System (ADS)

    Graves, T.; Franzke, C.; Gramacy, R. B.; Watkins, N. W.

    2012-12-01

    Recent studies have strongly suggested that surface temperatures exhibit long-range dependence (LRD). The presence of LRD would hamper the identification of deterministic trends and the quantification of their significance. It is well established that LRD processes exhibit stochastic trends over rather long periods of time. Thus, accurate methods for discriminating between physical processes that possess long memory and those that do not are an important adjunct to climate modeling. We have used Markov Chain Monte Carlo algorithms to perform a Bayesian analysis of Auto-Regressive Fractionally-Integrated Moving-Average (ARFIMA) processes, which are capable of modeling LRD. Our principal aim is to obtain inference about the long memory parameter, d, with secondary interest in the scale and location parameters. We have developed a reversible-jump method enabling us to integrate over different model forms for the short memory component. We initially assume Gaussianity, and have tested the method on both synthetic and physical time series such as the Central England Temperature. Many physical processes, for example the Faraday time series from Antarctica, are highly non-Gaussian. We have therefore extended this work by weakening the Gaussianity assumption. Specifically, we assume a symmetric α-stable distribution for the innovations. Such processes provide good, flexible, initial models for non-Gaussian processes with long memory. We will present a study of the dependence of the posterior variance σ_d² of the memory parameter d on the length of the time series considered. This will be compared with equivalent error diagnostics for other measures of d.
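
    For context, an ARFIMA(0, d, 0) series can be simulated by truncated fractional integration of white noise using the binomial expansion of (1 - B)^(-d); a small sketch with Gaussian innovations follows (the paper additionally treats symmetric alpha-stable innovations).

```python
# Simulate ARFIMA(0, d, 0) via the MA(infinity) weights of (1 - B)^(-d).
import numpy as np

def arfima_0d0(n, d, rng):
    k = np.arange(1, n)
    psi = np.concatenate([[1.0], np.cumprod((k - 1 + d) / k)])  # MA weights
    eps = rng.standard_normal(2 * n)
    return np.convolve(eps, psi)[n:2 * n]       # discard the warm-up segment

x = arfima_0d0(5000, d=0.3, rng=np.random.default_rng(10))  # 0 < d < 0.5: LRD
```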

  8. Recursive Bayesian recurrent neural networks for time-series modeling.

    PubMed

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  9. Dynamical Analysis and Visualization of Tornadoes Time Series

    PubMed Central

    2015-01-01

    In this paper we analyze the behavior of tornado time series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long-range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitudes proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns. PMID:25790281

  10. Dynamical analysis and visualization of tornadoes time series.

    PubMed

    Lopes, António M; Tenreiro Machado, J A

    2015-01-01

    In this paper we analyze the behavior of tornado time series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long-range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitudes proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.
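
    The first analysis step of these two records can be sketched as follows, with synthetic events standing in for the tornado record: encode events as impulses with amplitudes proportional to size, take the Fourier amplitude spectrum, and read the power-law slope from a log-log fit.

```python
# Impulse encoding, amplitude spectrum, and log-log power-law fit.
import numpy as np

rng = np.random.default_rng(11)
days = 64 * 365
signal = np.zeros(days)
event_days = rng.choice(days, 8000, replace=False)
signal[event_days] = rng.pareto(2.5, event_days.size) + 1   # event "sizes"

freq = np.fft.rfftfreq(days)[1:]                 # drop the zero frequency
amp = np.abs(np.fft.rfft(signal))[1:]

# fit |X(f)| ~ f^(-beta) by linear regression in log-log coordinates
beta = -np.polyfit(np.log(freq), np.log(amp), 1)[0]
print(beta)   # the slope read as a signature of the underlying dynamics
```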

  11. Timescales for determining temperature and dissolved oxygen trends in the Long Island Sound (LIS) estuary

    NASA Astrophysics Data System (ADS)

    Staniec, Allison; Vlahos, Penny

    2017-12-01

    Long-term time series represent a critical part of the oceanographic community's efforts to discern natural and anthropogenically forced variations in the environment. They provide regular measurements of climate-relevant indicators including temperature, oxygen concentrations, and salinity. When evaluating time series, it is essential to isolate long-term trends from autocorrelation in the data and noise due to natural variability. Herein we apply a statistical approach, well established in atmospheric time series, to key parameters in the U.S. east coast's Long Island Sound estuary (LIS). Analysis shows that the LIS time series (established in the early 1990s) is sufficiently long to detect significant trends in physical-chemical parameters including temperature (T) and dissolved oxygen (DO). Over the last two decades, overall (combined surface and deep) LIS T has increased at an average rate of 0.08 ± 0.03 °C yr⁻¹ while overall DO has dropped at an average rate of 0.03 ± 0.01 mg L⁻¹ yr⁻¹ since 1994 at the 95% confidence level. This trend is notably faster than the global open ocean T trend (0.01 °C yr⁻¹), as might be expected for a shallower estuarine system. T and DO trends were always significant for the existing time series using four-month data increments. Rates of change of DO and T in LIS are strongly correlated, and the rate of decrease of DO concentrations is consistent with the expected reduced solubility of DO at these higher temperatures. Thus, changes in T alone across decadal timescales can account for between 33 and 100% of the observed decrease in DO. This has significant implications for other dissolved gases and the long-term management of LIS hypoxia.
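
    The atmospheric-statistics approach alluded to above is commonly the Tiao/Weatherhead formulation, in which the number of years n* needed to detect a trend omega at 95% confidence, given AR(1) noise with standard deviation sigma_N and lag-1 autocorrelation phi, is approximately n* = [(3.3 sigma_N / |omega|) sqrt((1 + phi) / (1 - phi))]^(2/3). A one-line sketch with placeholder values (not the LIS estimates):

```python
# Years required to detect a trend in autocorrelated data (Weatherhead et al.).
import numpy as np

def years_to_detect(omega, sigma_n, phi):
    return ((3.3 * sigma_n / abs(omega))
            * np.sqrt((1 + phi) / (1 - phi))) ** (2 / 3)

print(years_to_detect(omega=0.08, sigma_n=0.5, phi=0.3))  # deg C / yr trend
```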

  12. Study of Glycemic Variability Through Time Series Analyses (Detrended Fluctuation Analysis and Poincaré Plot) in Children and Adolescents with Type 1 Diabetes.

    PubMed

    García Maset, Leonor; González, Lidia Blasco; Furquet, Gonzalo Llop; Suay, Francisco Montes; Marco, Roberto Hernández

    2016-11-01

    Time series analysis provides information on blood glucose dynamics that is unattainable with conventional glycemic variability (GV) indices. To date, no studies have been published on these parameters in pediatric patients with type 1 diabetes. Our aim is to evaluate the relationship between time series analyses, conventional GV indices, and glycosylated hemoglobin (HbA1c) levels. This is a cross-sectional study of 41 children and adolescents with type 1 diabetes. Glucose monitoring was carried out continuously for 72 h to study the following GV indices: standard deviation (SD) of glucose levels (mg/dL), coefficient of variation (%), interquartile range (IQR; mg/dL), mean amplitude of the largest glycemic excursions (MAGE), and continuous overlapping net glycemic action (CONGA). The time series analysis was conducted by means of detrended fluctuation analysis (DFA) and the Poincaré plot. Time series parameters (the DFA alpha coefficient and the elements of the ellipse of the Poincaré plot) correlated well with the more conventional GV indices. Patients were grouped according to the terciles of these indices, the terciles of eccentricity (1: 12.56-16.98, 2: 16.99-21.91, 3: 21.92-41.03), and the value of the DFA alpha coefficient (> or ≤ 1.5). No differences were observed in the HbA1c of patients grouped by GV index criteria; however, significant differences were found in patients grouped by alpha coefficient and eccentricity, not only in HbA1c but also in the SD of glucose, the IQR, and the CONGA index. The loss of complexity in glycemic homeostasis is accompanied by an increase in variability.
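
    A compact detrended fluctuation analysis (DFA) sketch: integrate the series, fit a linear trend in each window of size s, and regress log F(s) on log s; the slope is the alpha exponent used above. The glucose values are synthetic.

```python
# First-order DFA returning the scaling exponent alpha.
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    fluct = []
    for s in scales:
        f2 = []
        for w in range(len(y) // s):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))  # detrended variance
        fluct.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

glucose = np.random.default_rng(12).normal(120, 25, 864)  # 72 h at 5-min steps
print(dfa_alpha(glucose))   # about 0.5 for uncorrelated values
```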

  13. Identification of spikes associated with local sources in continuous time series of atmospheric CO, CO2 and CH4

    NASA Astrophysics Data System (ADS)

    El Yazidi, Abdelhadi; Ramonet, Michel; Ciais, Philippe; Broquet, Gregoire; Pison, Isabelle; Abbaris, Amara; Brunner, Dominik; Conil, Sebastien; Delmotte, Marc; Gheusi, Francois; Guerin, Frederic; Hazan, Lynn; Kachroudi, Nesrine; Kouvarakis, Giorgos; Mihalopoulos, Nikolaos; Rivier, Leonard; Serça, Dominique

    2018-03-01

    This study deals with the problem of identifying atmospheric data influenced by local emissions, which can result in spikes in time series of greenhouse gas and long-lived tracer measurements. We considered three spike detection methods, known as coefficient of variation (COV), robust extraction of baseline signal (REBS) and standard deviation of the background (SD), to detect and filter positive spikes in continuous greenhouse gas time series from four monitoring stations representative of the European ICOS (Integrated Carbon Observation System) Research Infrastructure network. The results of the different methods are compared to each other and against a manual detection performed by station managers. Four stations were selected as test cases to apply the spike detection methods: a continental rural tower of 100 m height in eastern France (OPE), a high-mountain observatory in the south-west of France (PDM), a regional marine background site in Crete (FKL) and a marine clean-air background site in the Southern Hemisphere on Amsterdam Island (AMS). This selection allows us to address spike detection problems in time series with different variability. Two years of continuous measurements of CO2, CH4 and CO were analysed. All methods were found to be able to detect short-term spikes (lasting from a few seconds to a few minutes) in the time series. Analysis of the results of each method led us to exclude the COV method, due to the requirement to arbitrarily specify an a priori percentage of rejected data in the time series, which may over- or underestimate the actual number of spikes. The two other methods freely determine the number of spikes for a given set of parameters, and the values of these parameters were calibrated to provide the best match with spikes known to reflect local emission episodes that are well documented by the station managers. More than 96 % of the spikes manually identified by station managers were successfully detected by both the SD and the REBS methods after the best adjustment of parameter values. At PDM, measurements made by two analyzers located 200 m from each other allowed us to confirm that the CH4 spikes identified in one of the time series but not in the other correspond to a local source from a sewage treatment facility in one of the observatory buildings. From this experiment, we also found that the REBS method underestimates the number of positive anomalies in the CH4 data caused by local sewage emissions. In conclusion, we recommend the use of the SD method, which also appears to be the easiest to implement in automatic data processing, for the operational filtering of spikes in greenhouse gas time series at global and regional monitoring stations of networks like the ICOS atmosphere network.
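
    An illustrative reading of the SD approach follows (the exact algorithm and calibrated parameter values are the authors'; this is only a generic moving-background sketch): flag points that exceed a moving background by k standard deviations of that background.

```python
# Generic moving-background spike flagging for a concentration series.
import numpy as np

def sd_spikes(x, window=120, k=4.0):
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        bg = x[i - window:i][~flags[i - window:i]]   # exclude earlier spikes
        if bg.size > 10:
            flags[i] = x[i] > bg.mean() + k * bg.std()
    return flags

rng = np.random.default_rng(13)
co2 = 410 + rng.normal(0, 0.3, 5000)                 # quiet background, ppm
co2[rng.choice(5000, 15, replace=False)] += rng.uniform(3, 10, 15)  # local events
print(np.where(sd_spikes(co2))[0])                   # indices of flagged spikes
```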

  14. Quantifying the Uncertainty in Discharge Data Using Hydraulic Knowledge and Uncertain Gaugings

    NASA Astrophysics Data System (ADS)

    Renard, B.; Le Coz, J.; Bonnifait, L.; Branger, F.; Le Boursicaud, R.; Horner, I.; Mansanarez, V.; Lang, M.

    2014-12-01

    River discharge is a crucial variable for hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, the discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we describe a Bayesian approach to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden changes in the geometry of the section, etc.).
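
    Step (2) in a simplified, non-Bayesian form might look as follows: fitting the common hydraulic rating equation Q = a(h - h0)^b to a handful of gaugings by nonlinear least squares. The gaugings and starting values are made up; the presented approach additionally uses priors from the hydraulic analysis and gauging-specific error models.

```python
# Nonlinear least-squares fit of a power-law rating curve to gaugings.
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    return a * np.clip(h - h0, 1e-6, None) ** b   # clip avoids negative bases

h_gauged = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.3])      # stage, m
q_gauged = np.array([2.1, 5.0, 11.0, 24.0, 47.0, 85.0])  # discharge, m3/s

popt, pcov = curve_fit(rating, h_gauged, q_gauged, p0=(10.0, 0.5, 1.7))
print(popt)         # (a, h0, b); pcov gives a first idea of the uncertainty
```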

  15. Using a neural network approach and time series data from an international monitoring station in the Yellow Sea for modeling marine ecosystems.

    PubMed

    Zhang, Yingying; Wang, Juncheng; Vorontsov, A M; Hou, Guangli; Nikanorova, M N; Wang, Hongliang

    2014-01-01

    The international marine ecological safety monitoring demonstration station in the Yellow Sea was developed as a collaborative project between China and Russia. It is a nonprofit technical workstation designed as a facility for marine scientific research for public welfare. By undertaking long-term monitoring of the marine environment and automatic data collection, this station will provide valuable information for marine ecological protection and disaster prevention and reduction. The results of some initial research by scientists at the station into predictive modeling of marine ecological environments and early warning are described in this paper. Marine ecological processes are influenced by many factors, including hydrological and meteorological conditions, biological factors, and human activities. Consequently, it is very difficult to incorporate all these influences and their interactions in a deterministic or analytical model. A prediction model integrating a time series prediction approach with neural network nonlinear modeling is proposed for marine ecological parameters. The model captures the natural fluctuations in marine ecological parameters by learning automatically from the latest observed data, and then predicts future values of the parameter. The model is updated in a "rolling" fashion with new observed data from the monitoring station. Prediction experiments showed that the neural network prediction model based on time series data is effective for marine ecological prediction and can be used for the development of early warning systems.

  16. Fractional Brownian motion time-changed by gamma and inverse gamma process

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Wyłomańska, A.; Połoczański, R.; Sundar, S.

    2017-02-01

    Many real time series exhibit behavior consistent with long-range dependent data. Very often these time series also contain periods of constant values and have characteristics similar to Gaussian processes although they are not Gaussian. Therefore, there is a need to consider new classes of systems to model these kinds of empirical behavior. Motivated by this fact, in this paper we analyze two processes which exhibit the long-range dependence property and have additional interesting characteristics that may be observed in real phenomena. Both of them are constructed as a superposition of fractional Brownian motion (FBM) and another process. In the first case the internal process, which plays the role of time, is the gamma process, while in the second case the internal process is its inverse. We present their main properties in detail, paying particular attention to the long-range dependence property. Moreover, we show how to simulate these processes and estimate their parameters. We propose a novel method based on the rescaled modified cumulative distribution function for the estimation of the parameters of the second process. This method is very useful in the description of rounded data, like the waiting times of subordinated processes delayed by inverse subordinators. Using the Monte Carlo method, we show the effectiveness of the proposed estimation procedures. Finally, we present applications of the proposed models to real time series.
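
    A minimal simulation of the first construction: draw the random gamma "clock" and sample FBM at those times from its covariance Cov(B_H(s), B_H(t)) = (s^2H + t^2H - |t - s|^2H) / 2 via a Cholesky factor. H and the gamma parameters are illustrative.

```python
# FBM time-changed by a gamma subordinator, sampled exactly from its covariance.
import numpy as np

rng = np.random.default_rng(14)
n, H = 500, 0.8                                    # H > 0.5: long memory

clock = np.cumsum(rng.gamma(shape=1.0, scale=1.0, size=n))  # gamma "clock"
s, t = np.meshgrid(clock, clock)
cov = 0.5 * (s ** (2 * H) + t ** (2 * H) - np.abs(t - s) ** (2 * H))
path = np.linalg.cholesky(cov + 1e-10 * np.eye(n)) @ rng.standard_normal(n)
# `path` realizes B_H(G(t)); using hitting times of the gamma process instead
# of `clock` would give the inverse-gamma-subordinated variant.
```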

  17. Correlation tests of the engine performance parameter by using the detrended cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Dong, Keqiang; Gao, You; Jing, Liming

    2015-02-01

    The presence of cross-correlation in complex systems has long been noted and studied in a broad range of physical applications. We here focus on an aero-engine system as an example of a complex system. By applying the detrended cross-correlation (DCCA) coefficient method to aero-engine time series, we investigate the effects of the data length and the time scale on the detrended cross-correlation coefficients ρ_DCCA(T, s). We then show, for a twin-engine aircraft, that the engine fuel flow time series derived from the left and right engines exhibit much stronger cross-correlations than the engine exhaust-gas temperature series derived from the left and right engines do.

  18. Time-Series Analysis: A Cautionary Tale

    NASA Technical Reports Server (NTRS)

    Damadeo, Robert

    2015-01-01

    Time-series analysis has often been a useful tool in atmospheric science for deriving long-term trends in various atmospherically important parameters (e.g., temperature or the concentration of trace gas species). In particular, time-series analysis has been repeatedly applied to satellite datasets in order to derive the long-term trends in stratospheric ozone, which is a critical atmospheric constituent. However, many of the potential pitfalls relating to the non-uniform sampling of the datasets were often ignored and the results presented by the scientific community have been unknowingly biased. A newly developed and more robust application of this technique is applied to the Stratospheric Aerosol and Gas Experiment (SAGE) II version 7.0 ozone dataset and the previous biases and newly derived trends are presented.

  19. Effects of U.S. Navy Diver Training on Physiological Parameters, Time of Useful Consciousness and Cognitive Performance During Periods of Normobaric Hypoxia

    DTIC Science & Technology

    2014-04-01

    from the pulse oximeter were integrated, digitized, and displayed graphically in real time in LabView (National Instruments) and logged at 20 Hz...Peripheral oxygenation monitoring: Fg-SpO2 levels were measured using a pulse oximeter placed on the left index finger (ROBD-2; Series 6202, Environics...Tolland, CT). Heart rate monitoring: HR was measured using a pulse oximeter placed on the left index finger (ROBD-2; Series 6202, Environics

  20. Effects of U.S. Navy Diver Training on Physiological Parameters, Time of Useful Consciousness, and Cognitive Performance During Periods of Normobaric Hypoxia

    DTIC Science & Technology

    2014-04-01

    from the pulse oximeter were integrated, digitized, and displayed graphically in real time in LabView (National Instruments) and logged at 20 Hz...Peripheral oxygenation monitoring: Fg-SpO2 levels were measured using a pulse oximeter placed on the left index finger (ROBD-2; Series 6202, Environics...Tolland, CT). Heart rate monitoring: HR was measured using a pulse oximeter placed on the left index finger (ROBD-2; Series 6202, Environics

  1. Development and Evaluation of New Algorithms for the Retrieval of Wind and Internal Wave Parameters from Shipborne Marine Radar Data

    DTIC Science & Technology

    2012-12-01

    traditional buoy measurements , which are based on the analysis of the buoy motion using accelerometer and tilt sensors, is the capacity to detect multi-modal...the radar- based estimates slightly overestimate the measured data. Fig. 6.33 shows a time series of p-p-distances and corresponding water depths as...32 3.5 Time series of wind speed and direction measurements from ship anemome- ters 1 and 2 from

  2. Second-degree Stokes coefficients from multi-satellite SLR

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Müller, Horst; Gerstl, Michael; Štefka, Vojtěch; Bouman, Johannes; Göttl, Franziska; Horwath, Martin

    2015-09-01

    The long-wavelength part of the Earth's gravity field can be determined, with varying accuracy, from satellite laser ranging (SLR). In this study, we investigate the combination of up to ten geodetic SLR satellites using iterative variance component estimation. SLR observations to different satellites are combined in order to identify the impact of each satellite on the estimated Stokes coefficients. The combination of satellite-specific weekly or monthly arcs makes it possible to reduce parameter correlations of the single-satellite solutions and leads to alternative estimates of the second-degree Stokes coefficients. This alternative time series might be helpful for assessing the uncertainty in the impact of the low-degree Stokes coefficients on geophysical investigations. In order to validate the obtained time series of second-degree Stokes coefficients, a comparison with the SLR RL05 time series of the Center for Space Research (CSR) is performed. This investigation shows that all time series are comparable to the CSR time series. The precision of the weekly/monthly coefficients is analyzed by comparing mass-related equatorial excitation functions with geophysical model results and reduced geodetic excitation functions. For one of the coefficients, the annual amplitude and phase of the DGFI solution agree better with three of four geophysical model combinations than the other time series; for the other, all time series agree very well with each other. The impact on ice mass trend estimates for Antarctica is compared on the basis of CSR GRACE RL05 solutions in which different monthly SLR time series are used as a replacement. We find differences in the long-term Antarctic ice loss between the GRACE solutions induced by the different SLR time series of CSR and DGFI amounting to about 13 % of the total ice loss of Antarctica. This result shows that Antarctic ice mass loss quantifications must be carefully interpreted.

  3. The CACAO Method for Smoothing, Gap Filling, and Characterizing Seasonal Anomalies in Satellite Time Series

    NASA Technical Reports Server (NTRS)

    Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.

    2013-01-01

    Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performances were analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performance for smoothing AVHRR time series characterized by a high level of noise and frequent missing observations. The resulting smoothed time series captures the vegetation dynamics well and shows no gaps, as compared to the 50-60% of data still missing after AG or SG reconstruction. Results of the simulation experiments as well as the confrontation with actual AVHRR time series indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
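
    A hedged sketch of the CACAO adjustment for one season: grid-search the shift dt and scale a that best fit y(t) ≈ a·c(t - dt) to the valid observations of a noisy, gappy series. The data are toys and the operational method is more elaborate.

```python
# Shift-and-scale a seasonal climatology to sparse observations, then fill gaps.
import numpy as np

doy = np.arange(0, 365, 10.0)
clim = 2 + 2 * np.exp(-((doy - 180) / 50.0) ** 2)          # LAI climatology
rng = np.random.default_rng(16)
obs = 1.2 * (2 + 2 * np.exp(-((doy - 195) / 50.0) ** 2))   # shifted, scaled year
obs[rng.random(doy.size) < 0.4] = np.nan                   # gaps (e.g., clouds)

valid = ~np.isnan(obs)
best = min((np.sum((obs[valid]
            - a * np.interp(doy[valid] - dt, doy, clim)) ** 2), dt, a)
           for dt in np.arange(-40, 41, 5.0)
           for a in np.arange(0.6, 1.6, 0.05))
_, dt_hat, a_hat = best
reconstruction = a_hat * np.interp(doy - dt_hat, doy, clim)  # gap-free series
```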

  4. Documentation of a spreadsheet for time-series analysis and drawdown estimation

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    Drawdowns during aquifer tests can be obscured by barometric pressure changes, earth tides, regional pumping, and recharge events in the water-level record. These stresses can create water-level fluctuations that should be removed from observed water levels prior to estimating drawdowns. Simple models have been developed for estimating unpumped water levels during aquifer tests; these estimates are referred to as synthetic water levels. These models sum multiple time series such as barometric pressure, tidal potential, and background water levels to simulate non-pumping water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function. Root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were greater than or equal to four times prediction periods. The proposed drawdown estimation approach has been implemented in a spreadsheet application. Measured time series are independent so that collection frequencies can differ and sampling times can be asynchronous. Time series can be viewed selectively and magnified easily. Fitting and prediction periods can be defined graphically or entered directly. Synthetic water levels for each observation well are created with earth tides, measured time series, moving averages of time series, and differences between measured and moving averages of time series. Selected series and fitting parameters for synthetic water levels are stored and drawdowns are estimated for prediction periods. Drawdowns can be viewed independently and adjusted visually if an anomaly skews initial drawdowns away from 0. The number of observations in a drawdown time series can be reduced by averaging across user-defined periods. Raw or reduced drawdown estimates can be copied from the spreadsheet application or written to tab-delimited ASCII files.
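
    Because each component series enters the synthetic water level linearly (a sine/cosine pair is equivalent to adjusting amplitude and phase), the fit reduces to linear least squares over the unaffected period. A minimal sketch with synthetic stand-ins for the spreadsheet's input series:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(0.0, 20.0, 1.0 / 24.0)                 # 20 days of hourly data
    baro = 0.3 * np.sin(2 * np.pi * t / 5.0) + 0.05 * rng.standard_normal(t.size)
    truth = 1.2 * baro + 0.4 * np.sin(2 * np.pi * t) + 0.2 * np.cos(2 * np.pi * t)
    measured = truth + 0.02 * rng.standard_normal(t.size)

    # Design matrix: barometric series plus a sine/cosine pair for one tidal
    # frequency (fitting the pair is equivalent to fitting amplitude and phase),
    # plus an intercept and a slow background trend.
    X = np.column_stack([baro,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                         np.ones_like(t), t])
    fit_mask = t < 10.0                                  # period unaffected by pumping
    coef, *_ = np.linalg.lstsq(X[fit_mask], measured[fit_mask], rcond=None)
    synthetic = X @ coef
    drawdown_estimate = measured - synthetic             # ~0 here (no pumping simulated)
    ```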

  5. PolyWaTT: A polynomial water travel time estimator based on Derivative Dynamic Time Warping and Perceptually Important Points

    NASA Astrophysics Data System (ADS)

    Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano

    2018-03-01

    Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct: the locations in which the equations are used should have characteristics comparable to the locations for which the equations were derived. To overcome this barrier, in this work we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data of the location itself, without the need to adapt or use empirical formulas from other locations. The proposal uses only one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream). The recorded data from each location generate two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). From the aligned timing data, a polynomial function is fitted, yielding a polynomial water travel time estimator called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as the Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than that of the empirical formulas.
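
    The alignment step can be illustrated with a bare-bones derivative dynamic time warping: estimate derivatives (the Keogh-Pazzani form is shown, an assumption about the exact variant used), run plain DTW on them, and read lags off the warping path. The PIP reduction and the polynomial fitting stage are omitted.

    ```python
    import numpy as np

    def keogh_derivative(x):
        """Derivative estimate commonly used in DDTW (Keogh & Pazzani)."""
        d = np.empty(len(x))
        d[1:-1] = ((x[1:-1] - x[:-2]) + (x[2:] - x[:-2]) / 2.0) / 2.0
        d[0], d[-1] = d[1], d[-2]
        return d

    def ddtw_path(x, y):
        """O(n*m) DTW on derivative features; returns the warping path."""
        dx, dy = keogh_derivative(x), keogh_derivative(y)
        n, m = len(dx), len(dy)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (dx[i - 1] - dy[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        path, i, j = [], n, m                 # backtrack the optimal alignment
        while (i, j) != (0, 0):
            path.append((i - 1, j - 1))
            step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
            i, j = (i - 1, j - 1) if step == 0 else ((i - 1, j) if step == 1 else (i, j - 1))
        return path[::-1]

    # Toy upstream/downstream levels: the downstream series is a lagged copy.
    t = np.linspace(0.0, 10.0, 200)
    up, down = np.sin(t), np.sin(t - 0.8)
    lags = [j - i for i, j in ddtw_path(up, down)]   # index offsets ~ travel time
    ```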

  6. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  7. Evaluation of Bayesian estimation of a hidden continuous-time Markov chain model with application to threshold violation in water-quality indicators

    USGS Publications Warehouse

    Deviney, Frank A.; Rice, Karen; Brown, Donald E.

    2012-01-01

    Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given a constant threshold-crossing rate, the accuracy and precision of parameter estimates increased with longer total monitoring periods and more-frequent observations. Given a fixed monitoring period and observation frequency, the accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased regardless of sampling frequency; however, the expected duration and return period were overestimated using the sub-sampled time series relative to the full quasi-weekly time series.

  8. Non-invasive breast biopsy method using GD-DTPA contrast enhanced MRI series and F-18-FDG PET/CT dynamic image series

    NASA Astrophysics Data System (ADS)

    Magri, Alphonso William

    This study was undertaken to develop a nonsurgical breast biopsy method from Gd-DTPA Contrast Enhanced Magnetic Resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using the similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartment Patlak model. Three parameters of this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters of this model were fitted: the rate of Gd exiting the compartment representing the extracellular space of a lesion; the rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities. Curve fitting for the PET/CT and MR series was accomplished by application of the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling was evaluated on the ability of parameter values to differentiate between tissue types; this evaluation was applied to registered and unregistered image series and found that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD- and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to nonsurgical breast biopsy in which each step was executed successfully. Comparison of this method with biopsy still needs to be done with a larger set of subject data.
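
    Steps (3) and (4) are voxelwise nonlinear least-squares fits via Levenberg-Marquardt. A minimal sketch with scipy follows; the model below is a simplified illustrative uptake curve, not the actual Patlak or Brix parameterization used in the study.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def model(p, t):
        """Illustrative two-compartment-style curve: a decaying blood term plus
        a cellular compartment that fills and washes out at rate kw."""
        a_blood, a_cell, kw = p
        return a_blood * np.exp(-t) + a_cell * (1.0 - np.exp(-kw * t))

    def fit_voxel(t, activity, p0=(1.0, 1.0, 0.5)):
        resid = lambda p: model(p, t) - activity
        return least_squares(resid, p0, method="lm").x   # Levenberg-Marquardt

    def parametric_images(volume, t):
        """Fit every voxel of a (nx, ny, nz, nt) dynamic volume and return
        a (nx, ny, nz, 3) array of best-fit parameters (a 3D parametric image)."""
        nx, ny, nz, _ = volume.shape
        params = np.zeros((nx, ny, nz, 3))
        for idx in np.ndindex(nx, ny, nz):
            params[idx] = fit_voxel(t, volume[idx])
        return params
    ```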

  9. Operational use of open satellite data for marine water quality monitoring

    NASA Astrophysics Data System (ADS)

    Symeonidis, Panagiotis; Vakkas, Theodoros

    2017-09-01

    The purpose of this study was to develop an operational platform for marine water quality monitoring using near-real-time satellite data. The developed platform utilizes free and open satellite data available from different sources, such as COPERNICUS, the European Earth Observation Initiative, or NASA, and from different satellites and instruments. The quality of the marine environment is operationally evaluated using parameters like chlorophyll-a concentration, water color, and Sea Surface Temperature (SST). For each parameter, more than one dataset is available, from different data sources or satellites, allowing users to select the most appropriate dataset for their area or time of interest. These datasets are automatically downloaded from the data providers' services and ingested into the central spatial engine. The spatial data platform uses the PostgreSQL database with the PostGIS extension for spatial data storage and GeoServer for the provision of the spatial data services. The system provides daily, 10-day, and monthly maps and time series of the above parameters. The information is provided through a web client based on the GET SDI PORTAL, an easy-to-use and feature-rich geospatial visualization and analysis platform. Users can examine the temporal variation of the parameters using a simple time animation tool. In addition, with just one click on the map, the system provides an interactive time series chart for any of the parameters of the available datasets. The platform can be offered as Software as a Service (SaaS) for any area in the Mediterranean region.

  10. Evaluation of an S-system root-finding method for estimating parameters in a metabolic reaction model.

    PubMed

    Iwata, Michio; Miyawaki-Kuwakado, Atsuko; Yoshida, Erika; Komori, Soichiro; Shiraishi, Fumihide

    2018-02-02

    In a mathematical model, estimation of parameters from time-series data of metabolic concentrations in cells is a challenging task. However, it seems that a promising approach for such estimation has not yet been established. Biochemical Systems Theory (BST) is a powerful methodology to construct a power-law type model for a given metabolic reaction system and to then characterize it efficiently. In this paper, we discuss the use of an S-system root-finding method (S-system method) to estimate parameters from time-series data of metabolite concentrations. We demonstrate that the S-system method is superior to the Newton-Raphson method in terms of the convergence region and iteration number. We also investigate the usefulness of a translocation technique and a complex-step differentiation method toward the practical application of the S-system method. The results indicate that the S-system method is useful to construct mathematical models for a variety of metabolic reaction networks. Copyright © 2018 Elsevier Inc. All rights reserved.
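
    The S-system property that makes root finding tractable is that a power-law steady state becomes linear in logarithmic space: setting the production term alpha_i * prod_j X_j**g_ij equal to the degradation term beta_i * prod_j X_j**h_ij and taking logarithms gives (G - H) y = log(beta/alpha) with y = log(X). A tiny sketch of that log-linear solve for a hypothetical two-variable system (the paper's method embeds such solves in an iterative scheme for general systems, which is not reproduced here):

    ```python
    import numpy as np

    # Hypothetical rate constants and kinetic orders of a 2-variable S-system:
    alpha = np.array([2.0, 1.0])          # production rate constants
    beta = np.array([1.0, 3.0])           # degradation rate constants
    G = np.array([[0.0, -0.5],            # kinetic orders of production terms
                  [0.8,  0.0]])
    H = np.array([[0.5,  0.0],            # kinetic orders of degradation terms
                  [0.0,  0.6]])

    # Steady state: (G - H) log(X) = log(beta / alpha), a plain linear solve.
    y = np.linalg.solve(G - H, np.log(beta / alpha))
    X_steady = np.exp(y)                  # steady-state metabolite concentrations
    ```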

  11. Modelling of Vortex-Induced Loading on a Single-Blade Installation Setup

    NASA Astrophysics Data System (ADS)

    Skrzypiński, Witold; Gaunaa, Mac; Heinz, Joachim

    2016-09-01

    Vortex-induced integral loading fluctuations on a single suspended blade at various inflow angles were modelled in the present work by means of stochastic modelling methods. The reference time series were obtained by 3D DES CFD computations carried out on the DTU 10MW reference wind turbine blade. In the reference time series, the flapwise force component, Fx, showed both higher absolute values and greater variation than the chordwise force component, Fz, for every inflow angle considered. For this reason, the present paper focused on modelling Fx rather than Fz, although Fz could be modelled using exactly the same procedure. The reference time series differed significantly depending on the inflow angle, which made modelling all the time series with a single, relatively simple engineering model challenging. In order to find model parameters, optimizations were carried out based on the root-mean-square error between the Single-Sided Amplitude Spectra of the reference and modelled time series. In order to model the well-defined frequency peaks present at certain inflow angles, optimized sine functions were superposed on the stochastically modelled time series. The results showed that the modelling accuracy varied depending on the inflow angle. Nonetheless, the modelled and reference time series showed a satisfactory general agreement in terms of their visual and frequency characteristics, indicating that the proposed method is suitable for modelling loading fluctuations on suspended blades.
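
    A toy version of this modelling chain, an AR(1)-type stochastic series with a sine superposed, compared through its Single-Sided Amplitude Spectrum, can be sketched as follows. All parameter values are placeholders, not the optimized model constants from the paper.

    ```python
    import numpy as np

    def ssa_spectrum(x, dt):
        """Single-Sided Amplitude Spectrum of a real, even-length time series."""
        n = len(x)
        amp = np.abs(np.fft.rfft(x)) / n
        amp[1:-1] *= 2.0                  # fold negative frequencies (not DC/Nyquist)
        return np.fft.rfftfreq(n, dt), amp

    def modelled_series(n, dt, sigma, phi, amp, freq, rng):
        """AR(1) stochastic loading with a superposed sine; sigma, phi, amp and
        freq stand in for parameters found by the spectral-RMSE optimization."""
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = phi * x[i - 1] + sigma * rng.standard_normal()
        t = np.arange(n) * dt
        return x + amp * np.sin(2.0 * np.pi * freq * t)

    rng = np.random.default_rng(0)
    series = modelled_series(4096, 0.01, 0.1, 0.95, 0.5, 3.0, rng)
    f, a = ssa_spectrum(series, 0.01)     # objective: RMSE between such spectra
    ```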

  12. Changes in the Hurst exponent of heartbeat intervals during physical activity

    NASA Astrophysics Data System (ADS)

    Martinis, M.; Knežević, A.; Krstačić, G.; Vargović, E.

    2004-07-01

    The fractal scaling properties of the heartbeat time series are studied in different controlled ergometric regimes using both the improved Hurst rescaled range (R/S) analysis and the detrended fluctuation analysis (DFA). The long-time “memory effect” quantified by the value of the Hurst exponent H>0.5 is found to increase during progressive physical activity in healthy subjects, in contrast to those having stable angina pectoris, where it decreases. The results are also supported by the detrended fluctuation analysis. We argue that this finding may be used as a useful new diagnostic parameter for short heartbeat time series.
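
    As a reference for the R/S technique named above, here is a compact rescaled-range estimator of the Hurst exponent (a plain implementation, not the improved variant used in the paper): the slope of log(R/S) against log(window size).

    ```python
    import numpy as np

    def hurst_rs(x, min_chunk=8):
        """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        sizes, rs = [], []
        size = min_chunk
        while size <= n // 2:
            vals = []
            for start in range(0, n - size + 1, size):
                seg = x[start:start + size]
                dev = np.cumsum(seg - seg.mean())    # cumulative deviation profile
                r = dev.max() - dev.min()            # range of the profile
                s = seg.std()                        # segment standard deviation
                if s > 0:
                    vals.append(r / s)
            sizes.append(size)
            rs.append(np.mean(vals))
            size *= 2
        h, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
        return h

    # White noise should give H near 0.5; persistent series give H > 0.5.
    print(hurst_rs(np.random.default_rng(2).standard_normal(4096)))
    ```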

  13. Alteration of Box-Jenkins methodology by implementing genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad

    2015-02-01

    A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking, and using integrated autoregressive moving average (ARIMA) time series models for forecasting. The Box-Jenkins method is appropriate for time series of medium to long length (at least 50 observations). When modeling such series, the difficulty lies in choosing the correct model order at the identification stage and in finding the right parameter estimates. This paper presents the development of a genetic algorithm heuristic to solve the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated from the proposed model outperformed the single traditional Box-Jenkins model.
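
    One way to cast order identification as a search problem is to let a small genetic algorithm evolve (p, q) orders scored by AIC. The sketch below uses statsmodels on any 1-D series, with a deliberately tiny mutation-only GA standing in for the paper's algorithm, whose operators are not described here.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def aic_of(series, p, q, d=0):
        """Fitness: AIC of an ARIMA(p, d, q) fit; failed fits score as infinity."""
        try:
            return ARIMA(series, order=(p, d, q)).fit().aic
        except Exception:
            return np.inf

    def ga_identify(series, pop_size=8, generations=10, max_order=5, seed=0):
        """Mutation-only GA over (p, q): keep the best half, perturb it by +/-1."""
        rng = np.random.default_rng(seed)
        pop = [(int(rng.integers(0, max_order + 1)), int(rng.integers(0, max_order + 1)))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda pq: aic_of(series, *pq))      # rank by fitness
            parents = pop[: pop_size // 2]
            children = [(int(np.clip(p + rng.integers(-1, 2), 0, max_order)),
                         int(np.clip(q + rng.integers(-1, 2), 0, max_order)))
                        for p, q in parents]                  # mutate survivors
            pop = parents + children
        return min(pop, key=lambda pq: aic_of(series, *pq))
    ```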

  14. Analyzing a stochastic time series obeying a second-order differential equation.

    PubMed

    Lehle, B; Peinke, J

    2015-06-01

    The stochastic properties of a Langevin-type Markov process can be extracted from a given time series by a Markov analysis. Processes that obey a stochastically forced second-order differential equation can also be analyzed this way by employing a particular embedding approach: to obtain a Markovian process in 2N dimensions from a non-Markovian signal in N dimensions, the system is described in a phase space that is extended by the temporal derivative of the signal. For a discrete time series, however, this derivative can only be calculated by a differencing scheme, which introduces an error. If the effects of this error are not accounted for, systematic errors arise in the estimation of the drift and diffusion functions of the process. In this paper we analyze these errors and propose an approach that correctly accounts for them. This approach allows an accurate parameter estimation and, additionally, is able to cope with weak measurement noise, which may be superimposed on a given time series.
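
    To make the embedding concrete, the sketch below simulates a stochastically forced oscillator, reconstructs the velocity from the scalar signal by differencing, and forms a naive Kramers-Moyal drift estimate. It deliberately stops at the naive estimator, whose differencing bias is exactly what the paper corrects; all model constants are arbitrary.

    ```python
    import numpy as np

    # Stochastically forced linear oscillator: dx = v dt, dv = (-k x - g v) dt + s dW
    rng = np.random.default_rng(3)
    dt, n, k, g, s = 1e-3, 200_000, 1.0, 0.5, 0.3
    x, v = np.zeros(n), np.zeros(n)
    for i in range(n - 1):
        x[i + 1] = x[i] + v[i] * dt
        v[i + 1] = v[i] + (-k * x[i] - g * v[i]) * dt + s * np.sqrt(dt) * rng.standard_normal()

    # Embedding: reconstruct the unobserved velocity from the scalar signal x
    # by a differencing scheme (the step whose error the paper analyzes).
    v_hat = np.gradient(x, dt)

    # Naive Kramers-Moyal drift estimate of the velocity equation near x ~ 0, v ~ 0.3:
    sel = (np.abs(x[:-1]) < 0.05) & (np.abs(v_hat[:-1] - 0.3) < 0.1)
    drift_est = np.mean((v_hat[1:][sel] - v_hat[:-1][sel]) / dt)
    print(drift_est, "vs true drift", -g * 0.3)   # biased without the correction
    ```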

  15. The mass and age of the first SONG target: the red giant 46 LMi

    NASA Astrophysics Data System (ADS)

    Frandsen, S.; Fredslund Andersen, M.; Brogaard, K.; Jiang, C.; Arentoft, T.; Grundahl, F.; Kjeldsen, H.; Christensen-Dalsgaard, J.; Weiss, E.; Pallé, P.; Antoci, V.; Kjærgaard, P.; Sørensen, A. N.; Skottfelt, J.; Jørgensen, U. G.

    2018-05-01

    Context. The Stellar Observation Network Group (SONG) is an initiative to build a worldwide network of 1m telescopes with high-precision radial-velocity spectrographs. Here we analyse the first radial-velocity time series of a red-giant star measured by the SONG telescope at Tenerife. The asteroseismic results demonstrate a major increase in the achievable precision of the parameters for red-giant stars obtainable from ground-based observations. Reliable tests of the validity of these results are needed, however, before the accuracy of the parameters can be trusted. Aims: We analyse the first SONG time series for the star 46 LMi, which has a precise parallax and an angular diameter measured from interferometry, and therefore a good determination of the stellar radius. We use asteroseismic scaling relations to obtain an accurate mass, and modelling to determine the age. Methods: A 55-day time series of high-resolution, high-S/N spectra was obtained with the first SONG telescope. We derive the asteroseismic parameters by analysing the power spectrum. To obtain a best guess of the large separation of modes in the power spectrum, we applied a new method which uses the scaling of Kepler red-giant stars to 46 LMi. Results: Several methods have been applied: classical estimates, seismic methods using the observed time series, and model calculations to derive the fundamental parameters of 46 LMi. Parameters determined using the different methods are consistent within the uncertainties. We find the following values for the mass M (scaling), radius R (classical), age (modelling), and surface gravity (combining mass and radius): M = 1.09 ± 0.04 M⊙, R = 7.95 ± 0.11 R⊙, age t = 8.2 ± 1.9 Gyr, and log g = 2.674 ± 0.013. Conclusions: The exciting possibilities for ground-based asteroseismology of solar-like oscillations with a fully robotic network have been illustrated with the results obtained from just a single site of the SONG network. The window function is still a severe problem, which will be solved when there are more nodes in the network. Based on observations made with the Hertzsprung SONG telescope operated at the Spanish Observatorio del Teide on the island of Tenerife by the Aarhus and Copenhagen Universities and by the Instituto de Astrofísica de Canarias.
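
    The mass "(scaling)" refers to the standard solar-referenced asteroseismic scaling relations. A sketch with illustrative input values (placeholders chosen only to land near a low-luminosity red giant, not the measured parameters of 46 LMi):

    ```python
    import numpy as np

    # Solar reference values commonly used with the scaling relations:
    NUMAX_SUN, DNU_SUN, TEFF_SUN = 3090.0, 135.1, 5777.0   # muHz, muHz, K

    # Hypothetical observables for a low-luminosity red giant:
    numax, dnu, teff = 59.0, 6.3, 4690.0                   # muHz, muHz, K

    # Scaling relations: M ~ numax^3 * dnu^-4 * Teff^1.5, R ~ numax * dnu^-2 * Teff^0.5
    mass = (numax / NUMAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    radius = (numax / NUMAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    logg = 4.438 + np.log10(mass / radius ** 2)            # cgs, from M and R
    print(f"M = {mass:.2f} Msun, R = {radius:.2f} Rsun, log g = {logg:.3f}")
    ```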

  16. Compressed Sensing for Metrics Development

    NASA Astrophysics Data System (ADS)

    McGraw, R. L.; Giangrande, S. E.; Liu, Y.

    2012-12-01

    Models by their very nature tend to be sparse in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing very rapid development (see, for example, Candes et al., 2006), to FASTER needs for new approaches to model evaluation and metrics development. The CS approach is illustrated for a time series generated using a few-parameter (i.e., sparse) model. A seemingly incomplete set of measurements, taken at just a few random sampling times, is then used to recover the hidden model parameters. Remarkably, there is a sharp transition in the number of required measurements, beyond which both the model parameters and the time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and the development of metrics for model evaluation by comparison with observation (e.g., evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in the context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
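
    The recovery experiment described above can be reproduced with any standard sparse solver; below is a sketch using orthogonal matching pursuit (chosen here for brevity, the paper's own solver is not specified) on a cosine dictionary. A few random time samples recover the sparse coefficients, and recovery typically becomes exact once the sample count crosses a threshold, mirroring the sharp transition described above.

    ```python
    import numpy as np

    def omp(Phi, y, k):
        """Orthogonal matching pursuit: greedy recovery of a k-sparse x
        from underdetermined measurements y = Phi @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x = np.zeros(Phi.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(4)
    N, m, k = 512, 40, 3                                 # length, samples, sparsity
    t = np.arange(N)
    F = np.cos(2 * np.pi * np.outer(t, np.arange(1, N // 2)) / N)  # cosine dictionary
    coeffs = np.zeros(N // 2 - 1)
    coeffs[rng.choice(N // 2 - 1, size=k, replace=False)] = rng.uniform(1.0, 2.0, k)
    signal = F @ coeffs                                  # sparse "model" time series
    sample = np.sort(rng.choice(N, size=m, replace=False))  # few random sample times
    recovered = omp(F[sample, :], signal[sample], k)
    print(np.nonzero(recovered)[0], np.nonzero(coeffs)[0])  # supports should match
    ```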

  17. Curve Number Application in Continuous Runoff Models: An Exercise in Futility?

    NASA Astrophysics Data System (ADS)

    Lamont, S. J.; Eli, R. N.

    2006-12-01

    The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.
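
    For reference, the NRCS relation at the center of the back-calculation gives direct runoff from rainfall depth, curve number, and the initial abstraction ratio; inverting it for CN given a rainfall/runoff pair, as in the numerical procedures above, is a one-dimensional root-finding problem. A sketch (depths in inches):

    ```python
    from scipy.optimize import brentq

    def scs_runoff(P, CN, ia_ratio=0.2):
        """NRCS curve-number direct runoff: Q = (P - Ia)^2 / (P - Ia + S),
        with retention S = 1000/CN - 10 and initial abstraction Ia = ia_ratio * S."""
        S = 1000.0 / CN - 10.0
        Ia = ia_ratio * S
        return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

    def cn_from_runoff(P, Q, ia_ratio=0.2):
        """Back-calculate the CN that reproduces runoff Q for rainfall P
        (valid for 0 < Q < P), as done here with HSPF-simulated runoff."""
        return brentq(lambda cn: scs_runoff(P, cn, ia_ratio) - Q, 1.0, 99.9)

    print(cn_from_runoff(P=3.0, Q=1.0))   # CN consistent with this event
    ```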

  18. Intercomparison of Recent Anomaly Time-Series of OLR as Observed by CERES and Computed Using AIRS Products

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Molnar, Gyula; Iredell, Lena; Loeb, Norman G.

    2011-01-01

    This paper compares recent spatial and temporal anomaly time series of OLR as observed by CERES and computed based on AIRS retrieved surface and atmospheric geophysical parameters over the 7 year time period September 2002 through February 2010. This time period is marked by a substantial decrease of OLR, on the order of +/-0.1 W/sq m/yr, averaged over the globe, and very large spatial variations of changes in OLR in the tropics, with local values ranging from -2.8 W/sq m/yr to +3.1 W/sq m/yr. Global and Tropical OLR both began to decrease significantly at the onset of a strong La Niña in mid-2007. Late 2009 is characterized by a strong El Niño, with a corresponding change in sign of both Tropical and Global OLR anomalies. The spatial patterns of the 7 year short term changes in AIRS and CERES OLR have a spatial correlation of 0.97 and slopes of the linear least squares fits of anomaly time series averaged over different spatial regions agree on the order of +/-0.01 W/sq m/yr. This essentially perfect agreement of OLR anomaly time series derived from observations by two different instruments, determined in totally independent and different manners, implies that both sets of results must be highly stable. This agreement also validates the anomaly time series of the AIRS derived products used to compute OLR and furthermore indicates that anomaly time series of AIRS derived products can be used to explain the factors contributing to anomaly time series of OLR.

  19. NASA Satellite Monitoring of Water Clarity in Mobile Bay for Nutrient Criteria Development

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir; Holekamp, Kara; Spiering, Bruce A.

    2009-01-01

    This project demonstrated the feasibility of deriving, from daily MODIS measurements, time series of water clarity parameters that cover a specific location or area of interest on 30-50% of days. Time series derived for estuarine and coastal waters display much higher variability than time series of ecological parameters (such as vegetation indices) derived for land areas; the temporal filtering often applied in terrestrial studies cannot be used effectively in ocean color processing. IOP-based algorithms for retrieval of the diffuse light attenuation coefficient and TSS concentration perform well for the Mobile Bay environment: only a minor adjustment was needed in the TSS algorithm, despite the generally recognized dependence of such algorithms on local conditions. The current IOP-based algorithm for retrieval of chlorophyll a concentration has not performed as well: a more reliable algorithm is needed that may be based on IOPs at additional wavelengths or on remote sensing reflectance from multiple spectral bands. The CDOM algorithm also needs improvement to provide better separation between the effects of gilvin (gelbstoff) and detritus; identification or development of such an algorithm requires more data from in situ measurements of CDOM concentration in Gulf of Mexico coastal waters (an ongoing collaboration with the EPA Gulf Ecology Division).

  20. Cyclical Changes in the Pleistocene Climate from an Analysis of Biogenic Silica in a Bottom Sediment Core Sample of Lake Baikal

    NASA Astrophysics Data System (ADS)

    Dergachev, V. A.; Dmitriev, P. B.

    2017-12-01

    An inhomogeneous time series of measurements of the percentage content of biogenic silica in the samples of joint cores BDP-96-1 and BDP-96-2 from the bottom of Lake Baikal, drilled at a depth of 321 m under water, has been analyzed. The composite depth of the cores is 77 m, which covers the Pleistocene Epoch to 1.8 Ma. The time series was reduced to a regular form with a time step of 1 kyr, which allowed 16 distinct quasi-periodic components with periods from 19 to 251 kyr to be revealed in this series at a significance level of their amplitudes exceeding 4σ. For this, the combined spectral periodogram (a modification of the spectral analysis method) was used. Some of the revealed quasi-harmonics are related to the characteristic cyclical oscillations of the Earth's orbital parameters. Special focus was paid to the temporal change in the parameters of the revealed quasi-harmonic components over the Pleistocene Epoch, which was studied by constructing the spectral density of the analyzed data in running windows of 201 and 701 kyr.

  1. Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI

    NASA Astrophysics Data System (ADS)

    Chen, Kho Chia; Bahar, Arifah; Kane, Ibrahim Lawal; Ting, Chee-Ming; Rahman, Haliza Abd

    2015-02-01

    In recent years, modeling of long memory properties, or fractionally integrated processes, in stochastic volatility has been applied to financial time series. A time series with structural breaks can generate strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers structural breaks in the data in order to identify true long memory time series. Unlike the usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into one. This makes likelihood evaluation and parameter estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Uhlenbeck model are estimated separately using the least squares estimator (LSE) and the quadratic generalized variations (QGV) method, respectively. Finally, the empirical distribution of unobserved volatility is estimated using particle filtering with the sequential importance sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the performance of the model on the index prices of FTSE Bursa Malaysia KLCI is fairly good.

  2. Inter-annual cascade effect on marine food web: A benthic pathway lagging nutrient supply to pelagic fish stock

    PubMed Central

    Fernandes, Lohengrin Dias de Almeida; Fagundes Netto, Eduardo Barros; Coutinho, Ricardo

    2017-01-01

    Currently, spatial and temporal changes in nutrient availability, marine planktonic communities, and fish communities are best described on a shorter-than-inter-annual (seasonal) scale, primarily because the simultaneous year-to-year variations in physical, chemical, and biological parameters are very complex. The limited availability of time series datasets furnishing simultaneous evaluations of temperature, nutrients, plankton, and fish has limited our ability to describe and to predict variability related to short-term processes, such as species-specific phenology and environmental seasonality. In the present study, we combine a computational time series analysis of a 15-year (1995–2009) weekly-sampled time series (high-resolution long-term time series, 780 weeks) with an Autoregressive Distributed Lag Model to track non-seasonal changes in 10 potentially related parameters: sea surface temperature, nutrient concentrations (NO2, NO3, NH4 and PO4), phytoplankton biomass (as in situ chlorophyll a biomass), meroplankton (barnacle and mussel larvae), and fish abundance (Mugil liza and Caranx latus). Our data demonstrate for the first time that years with highly intense and frequent upwelling initiate a huge energy flux that is not fully transmitted through the classical size-structured food web by bottom-up stimulus but rather through additional ontogenetic steps. A delayed inter-annual sequential effect from phytoplankton up to top predators such as carnivorous fishes is expected if most of the energy is trapped in benthic filter-feeding organisms and their larval forms. These sequential events can explain major changes in the ecosystem food web that were not predicted by previous short-term models. PMID:28886162

  3. A synergic simulation-optimization approach for analyzing biomolecular dynamics in living organisms.

    PubMed

    Sadegh Zadeh, Kouroush

    2011-01-01

    A synergic simulation-optimization approach was developed and implemented to study protein-substrate dynamics and binding kinetics in living organisms. The forward problem is a system of several coupled nonlinear partial differential equations which, with a given set of kinetics and diffusion parameters, can provide not only the commonly used bleached-area-averaged time series in fluorescence microscopy experiments but also more informative full biomolecular/drug space-time series, and it can be successfully used to study the dynamics of both Dirac and Gaussian fluorescence-labeled biomacromolecules in vivo. The incomplete Cholesky preconditioner was coupled with the finite difference discretization scheme and an adaptive time-stepping strategy to solve the forward problem. The proposed approach was validated against analytical as well as reference solutions and used to simulate the dynamics of GFP-tagged glucocorticoid receptor (GFP-GR) in mouse cancer cells during a fluorescence recovery after photobleaching experiment. Model analysis indicates that the commonly practiced bleach-spot-averaged time series is not an efficient approach to extract physiological information from fluorescence microscopy protocols. It was recommended that experimental biophysicists use the full space-time series resulting from experimental protocols to study the dynamics of biomacromolecules and drugs in living organisms. It was also concluded that, in the parameterization of biological mass transfer processes, setting the norm of the gradient of the penalty function at the solution to zero is not an efficient stopping rule for ending the inverse algorithm; theoreticians should use multi-criteria stopping rules to quantify model parameters by optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Speleothem stable isotope records for east-central Europe: resampling sedimentary proxy records to obtain evenly spaced time series with spectral guidance

    NASA Astrophysics Data System (ADS)

    Gábor Hatvani, István; Kern, Zoltán; Leél-Őssy, Szabolcs; Demény, Attila

    2018-01-01

    Uneven spacing is a common feature of sedimentary paleoclimate records, in many cases causing difficulties in the application of classical statistical and time series methods. Although special statistical tools do exist to assess unevenly spaced data directly, the transformation of such data into a temporally equidistant time series, which may then be examined using commonly employed statistical tools, has remained an unachieved goal. The present paper therefore introduces an approach to obtain evenly spaced time series (using cubic spline fitting) from unevenly spaced speleothem records, with spectral guidance to avoid the spectral bias caused by interpolation and to retain the original spectral characteristics of the data. The methodology was applied to stable carbon and oxygen isotope records derived from two stalagmites from the Baradla Cave (NE Hungary) dating back to the late 18th century. To show the benefit of the equally spaced records for climate studies, their coherence with climate parameters is explored using wavelet transform coherence and discussed. The obtained equally spaced time series are available at https://doi.org/10.1594/PANGAEA.875917.
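
    The resampling core, a cubic spline evaluated on an equidistant grid, is a one-liner with scipy. The spectral-guidance step that preserves the record's original spectral character is the paper's contribution and is omitted from this plain sketch; the data below are synthetic.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Unevenly spaced proxy record (ages in years, hypothetical values):
    rng = np.random.default_rng(5)
    age = np.sort(rng.uniform(1790.0, 2000.0, size=80))
    d18o = np.sin(2 * np.pi * age / 60.0) + 0.1 * rng.standard_normal(80)

    spline = CubicSpline(age, d18o)                  # fit through the uneven samples
    age_even = np.arange(age[0], age[-1], 1.0)       # equidistant 1-year grid
    series_even = spline(age_even)                   # evenly spaced time series
    # The paper additionally constrains this step spectrally so the evenly
    # spaced series keeps the original record's spectral characteristics.
    ```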

  5. Analysis of Vlbi, Slr and GPS Site Position Time Series

    NASA Astrophysics Data System (ADS)

    Angermann, D.; Krügel, M.; Meisel, B.; Müller, H.; Tesmer, V.

    Conventionally, the IERS terrestrial reference frame (ITRF) is realized by the adoption of a set of epoch coordinates and linear velocities for a set of global tracking stations. Due to the remarkable progress of the space geodetic observation techniques (e.g., VLBI, SLR, GPS), the accuracy and consistency of the ITRF have increased continuously. The accuracy achieved today is mainly limited by technique-related systematic errors, which are often poorly characterized or quantified. Therefore it is essential to analyze the individual techniques' solutions with respect to systematic differences, models, parameters, datum definition, etc. The main subject of this presentation is the analysis of GPS, SLR, and VLBI time series of site positions. The investigations are based on SLR and VLBI solutions computed at DGFI with the software systems DOGS (SLR) and OCCAM (VLBI). The GPS time series are based on weekly IGS station coordinate solutions. We analyze the time series with respect to the issues mentioned above. In particular, we characterize the noise in the time series, identify periodic signals, and investigate non-linear effects that complicate the assignment of linear velocities to global tracking sites. One important aspect is the comparison of results obtained by different techniques at colocation sites.

  6. Effect of spatial averaging on multifractal properties of meteorological time series

    NASA Astrophysics Data System (ADS)

    Hoffmann, Holger; Baranowski, Piotr; Krzyszczak, Jaromir; Zubik, Monika

    2016-04-01

    Introduction The process-based models for large-scale simulations require input of agro-meteorological quantities that are often in the form of time series of coarse spatial resolution. Therefore, knowledge about their scaling properties is fundamental for transferring locally measured fluctuations to larger scales and vice versa. However, the scaling analysis of these quantities is complicated by the presence of localized trends and non-stationarities. Here we assess how spatially aggregating meteorological data to coarser resolutions affects the data's temporal scaling properties. While it is known that spatial aggregation may affect spatial data properties (Hoffmann et al., 2015), it is unknown how it affects temporal data properties. The objective of this study was therefore to characterize the aggregation effect (AE) with regard to both temporal and spatial input data properties, considering the scaling properties (i.e., statistical self-similarity) of the chosen agro-meteorological time series through multifractal detrended fluctuation analysis (MFDFA). Materials and Methods Time series covering the years 1982-2011 were spatially averaged from 1 km to 10, 25, 50 and 100 km resolution to assess the impact of spatial aggregation. Daily minimum, mean and maximum air temperature (2 m), precipitation, global radiation, wind speed and relative humidity (Zhao et al., 2015) were used. To reveal the multifractal structure of the time series, we used the procedure described in Baranowski et al. (2015). The diversity of the studied multifractals was evaluated by the parameters of the time series spectra. In order to analyse differences in multifractal properties relative to the 1 km resolution grids, data at coarser resolutions were disaggregated to 1 km. Results and Conclusions Analysing the effect of spatial averaging on multifractal properties, we observed that the spatial patterns of the multifractal spectrum (MS) of all meteorological variables differed from the 1 km grids, and MS parameters were biased by -29.1 % (precipitation; width of MS) up to >4 % (min. temperature, radiation; asymmetry of MS). Also, the spatial variability of MS parameters was strongly affected at the highest aggregation (100 km). The obtained results confirm that spatial data aggregation may strongly affect temporal scaling properties; this should be taken into account when upscaling for large-scale studies. Acknowledgements The study was conducted within FACCE MACSUR. Please see Baranowski et al. (2015) for details on funding. References Baranowski, P., Krzyszczak, J., Sławiński, C. et al. (2015). Climate Research 65, 39-52. Hoffmann, H., G. Zhao, L.G.J. Van Bussel et al. (2015). Climate Research 65, 53-69. Zhao, G., Siebert, S., Rezaei E. et al. (2015). Agricultural and Forest Meteorology 200, 156-171.

  7. Disentangling Time-series Spectra with Gaussian Processes: Applications to Radial Velocity Analysis

    NASA Astrophysics Data System (ADS)

    Czekala, Ian; Mandel, Kaisey S.; Andrews, Sean M.; Dittmann, Jason A.; Ghosh, Sujit K.; Montet, Benjamin T.; Newton, Elisabeth R.

    2017-05-01

    Measurements of radial velocity variations from the spectroscopic monitoring of stars and their companions are essential for a broad swath of astrophysics; these measurements provide access to the fundamental physical properties that dictate all phases of stellar evolution and facilitate the quantitative study of planetary systems. The conversion of those measurements into both constraints on the orbital architecture and individual component spectra can be a serious challenge, however, especially for extreme flux ratio systems and observations with relatively low sensitivity. Gaussian processes define sampling distributions of flexible, continuous functions that are well-motivated for modeling stellar spectra, enabling proficient searches for companion lines in time-series spectra. We introduce a new technique for spectral disentangling, where the posterior distributions of the orbital parameters and intrinsic, rest-frame stellar spectra are explored simultaneously without needing to invoke cross-correlation templates. To demonstrate its potential, this technique is deployed on red-optical time-series spectra of the mid-M-dwarf binary LP661-13. We report orbital parameters with improved precision compared to traditional radial velocity analysis and successfully reconstruct the primary and secondary spectra. We discuss potential applications for other stellar and exoplanet radial velocity techniques and extensions to time-variable spectra. The code used in this analysis is freely available as an open-source Python package.

  8. Model calibration criteria for estimating ecological flow characteristics

    USGS Publications Warehouse

    Vis, Marc; Knight, Rodney; Poole, Sandra; Wolfe, William J.; Seibert, Jan; Breuer, Lutz; Kraft, Philipp

    2016-01-01

    Quantification of streamflow characteristics in ungauged catchments remains a challenge. Hydrological modeling is often used to derive flow time series and to calculate streamflow characteristics for subsequent applications that may differ from those envisioned by the modelers. While the estimation of model parameters for ungauged catchments is a challenging research task in itself, it is important to evaluate whether simulated time series preserve critical aspects of the streamflow hydrograph. To address this question, seven calibration objective functions were evaluated for their ability to preserve ecologically relevant streamflow characteristics of the average annual hydrograph using a runoff model, HBV-light, at 27 catchments in the southeastern United States. Calibration trials were repeated 100 times to reduce parameter uncertainty effects on the results, and 12 ecological flow characteristics were computed for comparison. Our results showed that the most suitable calibration strategy varied according to streamflow characteristic. Combined objective functions generally gave the best results, though a clear underprediction bias was observed. The occurrence of low prediction errors for certain combinations of objective function and flow characteristic suggests that (1) incorporating multiple ecological flow characteristics into a single objective function would increase model accuracy, potentially benefitting decision-making processes; and (2) there may be a need to have different objective functions available to address specific applications of the predicted time series.

  9. Improving GNSS time series for volcano monitoring: application to Canary Islands (Spain)

    NASA Astrophysics Data System (ADS)

    García-Cañada, Laura; Sevilla, Miguel J.; Pereda de Pablo, Jorge; Domínguez Cerdeña, Itahiza

    2017-04-01

    The number of permanent GNSS stations has increased significantly in recent years for different geodetic applications, such as volcano monitoring, which require high precision. Recently, coordinate time series have become long enough that different analyses and filters can be applied to improve the GNSS coordinate results. Following this idea, we have processed data from the GNSS permanent stations used by the Spanish Instituto Geográfico Nacional (IGN) for volcano monitoring in the Canary Islands, obtaining time series by the double-difference processing method with Bernese v5.0 for the period 2007-2014. We have identified the characteristics of these time series and obtained models to estimate velocities with greater accuracy and more realistic uncertainties. In order to improve the results, we have used two kinds of filters. The first, a spatial filter, was computed using the series of residuals of all stations in the Canary Islands without anomalous behaviour, after removing a linear trend; applying this filter to all sets of coordinates of the permanent stations reduces their dispersion. The second filter accounts for the temporal correlation in the coordinate time series of each station individually. An investigation of how the estimated velocity evolves with the series length was carried out and demonstrated the need for time series of at least four years. Therefore, for those stations with more than four years of data, we calculated the velocity and the characteristic parameters in order to obtain time series of residuals. This methodology was applied to the GNSS data network in El Hierro (Canary Islands) during the 2011-2012 eruption and the subsequent magmatic intrusions (2012-2014). The results show that anomalous behaviour in the coordinates is easier to detect in the new series, making them more useful for detecting crustal deformation in volcano monitoring.
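
    A bare-bones version of the first (spatial) filter: detrend each station's component series, stack the residuals, and subtract the epoch-wise mean as the common mode. The real processing additionally excludes anomalous stations and handles data gaps, which this sketch ignores.

    ```python
    import numpy as np

    def spatial_filter(coords):
        """Common-mode (spatial) filtering sketch for one coordinate component.
        coords: array of shape (n_stations, n_epochs)."""
        t = np.arange(coords.shape[1], dtype=float)
        resid = np.empty_like(coords)
        for i, series in enumerate(coords):
            slope, icept = np.polyfit(t, series, 1)   # per-station linear trend
            resid[i] = series - (slope * t + icept)   # detrended residuals
        common_mode = resid.mean(axis=0)              # stacked residual series
        return coords - common_mode                   # filtered coordinates

    # Hypothetical network: a shared daily disturbance on top of station noise.
    rng = np.random.default_rng(6)
    common = 0.003 * rng.standard_normal(365)
    raw = common + 0.001 * rng.standard_normal((5, 365))
    filtered = spatial_filter(raw)                    # dispersion clearly reduced
    ```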

  10. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine whether the system is currently stable and, if not, the time before loss of control. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable-length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  11. Time-series analysis of delta13C from tree rings. I. Time trends and autocorrelation.

    PubMed

    Monserud, R A; Marshall, J D

    2001-09-01

    Univariate time-series analyses were conducted on stable carbon isotope ratios obtained from tree-ring cellulose. We looked for the presence and structure of autocorrelation. Significant autocorrelation violates the statistical independence assumption and biases hypothesis tests. Its presence would indicate the existence of lagged physiological effects that persist for longer than the current year. We analyzed data from 28 trees (60-85 years old; mean = 73 years) of western white pine (Pinus monticola Dougl.), ponderosa pine (Pinus ponderosa Laws.), and Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco var. glauca) growing in northern Idaho. Material was obtained by the stem analysis method from rings laid down in the upper portion of the crown throughout each tree's life. The sampling protocol minimized variation caused by changing light regimes within each tree. Autoregressive moving average (ARMA) models were used to describe the autocorrelation structure over time. Three time series were analyzed for each tree: the stable carbon isotope ratio (delta(13)C); discrimination (delta); and the difference between ambient and internal CO(2) concentrations (c(a) - c(i)). Converting from ring cellulose to whole-leaf tissue did not affect the analysis because its effect was almost completely removed by the detrending that precedes time-series analysis. A simple linear or quadratic model adequately described the time trend. The residuals from the trend had a constant mean and variance, thus ensuring stationarity, a requirement for autocorrelation analysis. The trend over time for c(a) - c(i) was particularly strong (R(2) = 0.29-0.84). Autoregressive moving average analyses of the residuals from these trends indicated that two-thirds of the individual tree series contained significant autocorrelation, whereas the remaining third were random (white noise) over time. We were unable to distinguish beforehand between individuals with and without significant autocorrelation. Significant ARMA models were all of low order, with either first- or second-order (i.e., lagged 1 or 2 years, respectively) models performing well. A simple autoregressive model, AR(1), was the most common. The most useful generalization was that the same ARMA model holds for each of the three series (delta(13)C, delta, c(a) - c(i)) for an individual tree, if the time trend has been properly removed for each series. The mean series for the two pine species were described by first-order ARMA models (1-year lags), whereas the Douglas-fir mean series were described by second-order models (2-year lags) with negligible first-order effects. Apparently, the process of constructing a mean time series for a species preserves an underlying signal related to delta(13)C while canceling some of the random individual tree variation. Furthermore, the best model for the overall mean series (e.g., for a species) cannot be inferred from a consensus of the individual tree model forms, nor can its parameters be estimated reliably from the mean of the individual tree parameters. Because two-thirds of the individual tree time series contained significant autocorrelation, the normal assumption of a random structure over time is unwarranted, even after accounting for the time trend. The residuals of an appropriate ARMA model satisfy the independence assumption, and can be used to make hypothesis tests.

  12. tsiR: An R package for time-series Susceptible-Infected-Recovered models of epidemics.

    PubMed

    Becker, Alexander D; Grenfell, Bryan T

    2017-01-01

    tsiR is an open source software package implemented in the R programming language designed to analyze infectious disease time-series data. The software extends a well-studied and widely-applied algorithm, the time-series Susceptible-Infected-Recovered (TSIR) model, to infer parameters from incidence data, such as contact seasonality, and to forward simulate the underlying mechanistic model. The tsiR package aggregates a number of different fitting features previously described in the literature in a user-friendly way, providing support for their broader adoption in infectious disease research. Also included in tsiR are a number of diagnostic tools to assess the fit of the TSIR model. This package should be useful for researchers analyzing incidence data for fully-immunizing infectious diseases.
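
    As context for what the TSIR machinery computes, here is a minimal forward simulation of the model (written in Python for consistency with the other sketches, although tsiR itself is an R package; all parameter values are arbitrary):

    ```python
    import numpy as np

    def tsir_simulate(beta, alpha, N, S0, I0, births, generations, seed=0):
        """Forward-simulate a TSIR model: expected incidence
        lambda_t = beta[t mod len(beta)] * S_t * I_t**alpha / N, with Poisson
        noise; susceptibles are replenished by births and depleted by cases."""
        rng = np.random.default_rng(seed)
        S, I = [float(S0)], [float(I0)]
        for t in range(generations):
            lam = beta[t % len(beta)] * S[-1] * max(I[-1], 0.0) ** alpha / N
            new_I = min(rng.poisson(lam), int(S[-1]))   # cannot exceed susceptibles
            I.append(float(new_I))
            S.append(S[-1] + births - new_I)
        return np.array(S), np.array(I)

    # 26 biweekly contact rates with school-term-like seasonality (illustrative):
    beta = 30.0 * (1.0 + 0.2 * np.cos(2.0 * np.pi * np.arange(26) / 26.0))
    S, I = tsir_simulate(beta, alpha=0.97, N=1e6, S0=5e4, I0=100, births=300,
                         generations=520)              # 20 years of biweeks
    ```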

  13. Inferring rate and state friction parameters from a rupture model of the 1995 Hyogo-ken Nanbu (Kobe) Japan earthquake

    USGS Publications Warehouse

    Guatteri, Mariagiovanna; Spudich, P.; Beroza, G.C.

    2001-01-01

    We consider the applicability of laboratory-derived rate- and state-variable friction laws to the dynamic rupture of the 1995 Kobe earthquake. We analyze the shear stress and slip evolution of Ide and Takeo's [1997] dislocation model, fitting the inferred stress change time histories by calculating the dynamic load and the instantaneous friction at a series of points within the rupture area. For points exhibiting a fast-weakening behavior, the Dieterich-Ruina friction law, with values of dc = 0.01-0.05 m for the critical slip, fits the stress change time series well. This range of dc is 10-20 times smaller than the slip distance over which the stress is released, Dc, which previous studies have equated with the slip-weakening distance. The limited resolution and low-pass character of the strong motion inversion degrade the resolution of the frictional parameters and suggest that the actual dc is less than this value. Stress time series at points characterized by a slow-weakening behavior are well fitted by the Dieterich-Ruina friction law with values of dc ≥ 0.01-0.05 m. The apparent fracture energy Gc can be estimated from waveform inversions more stably than the other friction parameters. We obtain Gc = 1.5×10^6 J m^-2 for the 1995 Kobe earthquake, in agreement with estimates for previous earthquakes. From this estimate and a plausible upper bound for the local rock strength we infer a lower bound for Dc of about 0.008 m. Copyright 2001 by the American Geophysical Union.

  14. A 15-Year Time-series Study of Tooth Extraction in Brazil

    PubMed Central

    Cunha, Maria Aparecida Goncalves de Melo; Lino, Patrícia Azevedo; dos Santos, Thiago Rezende; Vasconcelos, Mara; Lucas, Simone Dutra; de Abreu, Mauro Henrique Nogueira Guimarães

    2015-01-01

    Tooth loss is considered to be a public health problem. Time-series studies that assess the influence of social conditions and access to health services on tooth loss are scarce. This study aimed to examine the time series of permanent tooth extraction in Brazil between 1998 and 2012 and to compare these series in municipalities with different Human Development Index (HDI) scores and with different access to primary and secondary care. The study used data from the Brazilian National Health Information System for 1998-2012. Two annual rates of tooth extraction were calculated and evaluated separately according to 3 parameters: the HDI, the presence of a Dental Specialty Center, and coverage by Oral Health Teams. The time series were analyzed using a linear regression model. An overall decrease in tooth-loss tendencies during this period was observed, particularly in the tooth-extraction rate during primary care procedures. In the municipalities with an HDI lower than the median, the average tooth-loss rates were higher than in the municipalities with a higher HDI. The municipalities with lower rates of Oral Health Team coverage also showed lower extraction rates than the municipalities with higher coverage rates. In general, Brazil has shown a decrease in the trend to extract permanent teeth during these 15 years. Increased human development and access to dental services have influenced tooth-extraction rates. PMID:26632688

  15. A probabilistic method for constructing wave time-series at inshore locations using model scenarios

    USGS Publications Warehouse

    Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.

    2014-01-01

    Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.

  16. BGFit: management and automated fitting of biological growth curves.

    PubMed

    Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana

    2013-09-25

    Existing tools to model cell growth curves do not offer a flexible integrative approach to manage large datasets and automatically estimate parameters. With the increase of experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, and the results can be efficiently managed in a structured and hierarchical way. The data management system allows users to organize projects, experiments and measurement data, and to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and the user can easily add new models, thus expanding the current set. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is also applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, and is fully scalable to high numbers of projects, data and model complexity.
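
    As a rough illustration of the kind of fitting BGFit automates, a Gompertz growth model can be fitted to a time series with standard tools. This is a minimal sketch using scipy, not BGFit's own code; the modified Gompertz parameterization (Zwietering et al., 1990) and the synthetic data are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, A, mu, lam):
        """Modified Gompertz model: A = asymptote, mu = maximum growth rate,
        lam = lag time (Zwietering et al., 1990)."""
        return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1))

    # Synthetic growth curve (e.g. log OD readings over 24 h) with noise.
    t = np.linspace(0, 24, 50)
    y = gompertz(t, 2.0, 0.4, 3.0) + np.random.default_rng(1).normal(0, 0.03, t.size)

    popt, pcov = curve_fit(gompertz, t, y, p0=[y.max(), 0.1, 1.0])
    perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties of A, mu, lam
    print(dict(zip(["A", "mu", "lam"], popt)))
    ```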

  17. Impacting the effect of fMRI noise through hardware and acquisition choices - Implications for controlling false positive rates.

    PubMed

    Wald, Lawrence L; Polimeni, Jonathan R

    2017-07-01

    We review the components of time-series noise in fMRI experiments and the effect of image acquisition parameters on the noise. In addition to helping determine the total amount of signal and noise (and thus temporal SNR), the acquisition parameters have been shown to be critical in determining the ratio of thermal to physiologically induced noise components in the time series. Although limited attention has been given to this latter metric, we show that it determines the degree of spatial correlation seen in the time-series noise. The spatial correlations of the physiological noise component are well known, but recent studies have shown that they can lead to a higher than expected false-positive rate in cluster-wise inference based on the parametric statistical methods used by many researchers. Based on an understanding of the effect of acquisition parameters on the noise mixture, we propose several acquisition strategies that might be helpful in reducing this elevated false-positive rate, such as moving to high spatial resolution or using highly accelerated acquisitions where thermal sources dominate. We suggest that the spatial noise correlations at the root of the inflated false-positive rate problem can be limited with these strategies, and that the well-behaved spatial auto-correlation functions (ACFs) assumed by the conventional statistical methods are retained if the high-resolution data are smoothed to conventional resolutions. Copyright © 2017 Elsevier Inc. All rights reserved.
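
    A central quantity in this discussion is the temporal SNR of the voxel time series, which reflects the combined thermal and physiological noise. A minimal sketch of that computation (array shapes and values are illustrative):

    ```python
    import numpy as np

    def temporal_snr(bold):
        """Temporal SNR of a 4-D fMRI array (x, y, z, time):
        mean over time divided by standard deviation over time, per voxel."""
        mean = bold.mean(axis=-1)
        std = bold.std(axis=-1, ddof=1)
        return np.where(std > 0, mean / std, 0.0)

    # Toy data: a 10x10x10 volume with 200 time points and pure thermal noise.
    bold = 1000 + np.random.default_rng(2).normal(0, 20, (10, 10, 10, 200))
    tsnr = temporal_snr(bold)
    print(tsnr.mean())   # ~50 for this noise level
    ```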

  18. Earth Core and Inner Core: What Can We Learn From a Bayesian Inversion of Combined Nutation and Surface Gravimetry Data?

    NASA Astrophysics Data System (ADS)

    Lambert, S. B.; Ziegler, Y.; Rosat, S.; Bizouard, C.

    2017-12-01

    Nutation time series derived from very long baseline interferometry (VLBI) and time-varying surface gravity data recorded by superconducting gravimeters (SG) have long been used separately to assess the Earth's interior via the estimation of the free core and inner core resonance effects on nutation or tidal gravity. The results obtained from these two techniques have recently been shown to be consistent, making it relevant to combine VLBI and SG observables and estimate the Earth's interior parameters in a single inversion. We present here the results of combining nutation and surface gravity time series to improve estimates of the Earth's core and inner core resonant frequencies. We use VLBI nutation time series spanning 1984-2016, derived by several analysis centers affiliated with the International VLBI Service for Geodesy and Astrometry, together with surface gravity data from about 15 SG stations. We address the resonance model used for describing the Earth's interior response to tidal excitation, and the data preparation, consisting of error recalibration and amplitude fitting for the nutation data and of processing the SG time-varying gravity to remove gaps, spikes, steps and other disturbances, followed by tidal analysis with the ETERNA 3.4 software package. New estimates of the resonant periods are proposed and correlations between the parameters are investigated.

  19. Combining nutation and surface gravity observations to estimate the Earth's core and inner core resonant frequencies

    NASA Astrophysics Data System (ADS)

    Ziegler, Yann; Lambert, Sébastien; Rosat, Séverine; Nurul Huda, Ibnu; Bizouard, Christian

    2017-04-01

    Nutation time series derived from very long baseline interferometry (VLBI) and time-varying surface gravity data recorded by superconducting gravimeters (SG) have long been used separately to assess the Earth's interior via the estimation of the free core and inner core resonance effects on nutation or tidal gravity. The results obtained from these two techniques have recently been shown to be consistent, making it relevant to combine VLBI and SG observables and estimate the Earth's interior parameters in a single inversion. We present here the intermediate results of the ongoing project of combining nutation and surface gravity time series to improve estimates of the Earth's core and inner core resonant frequencies. We use VLBI nutation time series spanning 1984-2016, derived by the International VLBI Service for Geodesy and Astrometry (IVS) as a combination of inputs from various IVS analysis centers, and surface gravity data from about 15 SG stations. We address here the resonance model used for describing the Earth's interior response to tidal excitation; the data preparation, consisting of error recalibration and amplitude fitting for the nutation data and of processing the SG time-varying gravity to remove gaps, spikes, steps and other disturbances, followed by tidal analysis with the ETERNA 3.4 software package; the preliminary estimates of the resonant periods; and the correlations between parameters.

  20. Segmentation algorithm for non-stationary compound Poisson processes. With an application to inventory time series of market members in a financial market

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Lillo, F.; Farmer, J. D.

    2010-11-01

    We introduce an algorithm for the segmentation of a class of regime-switching processes. The segmentation algorithm is a non-parametric statistical method able to identify the regimes (patches) of a time series. The process is composed of consecutive patches of variable length. In each patch the process is described by a stationary compound Poisson process, i.e. a Poisson process where each count is associated with a fluctuating signal. The parameters of the process are different in each patch, and therefore the time series is non-stationary. Our method is a generalization of the algorithm introduced by Bernaola-Galván et al. [Phys. Rev. Lett. 87, 168105 (2001)]. We show that the new algorithm outperforms the original one for regime-switching models of compound Poisson processes. As an application we use the algorithm to segment the time series of the inventory of market members of the London Stock Exchange, and we observe that our method finds almost three times more patches than the original one.
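
    The core step of this family of algorithms is finding the split point that maximizes a Student's t statistic between the means of the two candidate sub-segments; the full method then tests that split for significance and recurses on each side. A minimal sketch of the split search only (segment lengths and data are illustrative; the significance test and recursion are omitted):

    ```python
    import numpy as np
    from scipy import stats

    def best_split(x, min_seg=10):
        """Find the split point maximizing Student's t statistic between the
        means of the left and right sub-segments (the core step of
        Bernaola-Galvan-type segmentation)."""
        n = len(x)
        best_t, best_i = 0.0, None
        for i in range(min_seg, n - min_seg):
            t = abs(stats.ttest_ind(x[:i], x[i:], equal_var=False).statistic)
            if t > best_t:
                best_t, best_i = t, i
        return best_i, best_t

    # Two regimes with different Poisson rates: the split should land near 300.
    rng = np.random.default_rng(3)
    x = np.concatenate([rng.poisson(5, 300), rng.poisson(9, 200)]).astype(float)
    print(best_split(x))
    ```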

  1. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
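
    One simple frequency-domain route to transfer function identification, in the same spirit although without the orthogonal modeling functions of the paper, is to form an empirical frequency response from the Fourier transforms of the input and output series and fit model coefficients by linear least squares (Levy's method). A sketch for a first-order system, with simulated data as a stand-in for flight test records; it approximately recovers the true parameters:

    ```python
    import numpy as np

    def fit_first_order(u, y, dt):
        """Fit G(s) = b / (s + a) to input/output time series by linear least
        squares on the empirical frequency response (Levy's method)."""
        n = len(u)
        w = 2 * np.pi * np.fft.rfftfreq(n, dt)[1:]            # drop DC
        H = np.fft.rfft(y)[1:] / np.fft.rfft(u)[1:]           # empirical response
        mask = w < 0.1 / dt      # keep band where the continuous-time fit is valid
        w, H = w[mask], H[mask]
        # b - a*H(jw) = jw*H(jw)  ->  linear in the unknowns (a, b)
        A = np.column_stack([-H, np.ones_like(H)])
        rhs = 1j * w * H
        # Stack real and imaginary parts for a real-valued least squares.
        A_ri = np.vstack([A.real, A.imag])
        rhs_ri = np.concatenate([rhs.real, rhs.imag])
        a, b = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)[0]
        return a, b

    # Simulate y' = -2y + 3u with random input, then recover a ~ 2, b ~ 3.
    rng = np.random.default_rng(4)
    dt, n = 0.01, 4096
    u = rng.normal(size=n)
    y = np.zeros(n)
    for k in range(n - 1):
        y[k + 1] = y[k] + dt * (-2.0 * y[k] + 3.0 * u[k])   # Euler integration
    print(fit_first_order(u, y, dt))
    ```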

  2. Deep Learning for real-time gravitational wave detection and parameter estimation: Results with Advanced LIGO data

    NASA Astrophysics Data System (ADS)

    George, Daniel; Huerta, E. A.

    2018-03-01

    The recent Nobel-prize-winning detections of gravitational waves from merging black holes and the subsequent detection of the collision of two neutron stars in coincidence with electromagnetic observations have inaugurated a new era of multimessenger astrophysics. To enhance the scope of this emergent field of science, we pioneered the use of deep learning with convolutional neural networks that take time-series inputs for rapid detection and characterization of gravitational wave signals. This approach, Deep Filtering, was initially demonstrated using simulated LIGO noise. In this article, we present the extension of Deep Filtering using real data from LIGO, for both detection and parameter estimation of gravitational waves from binary black hole mergers using continuous data streams from multiple LIGO detectors. We demonstrate for the first time that machine learning can detect and estimate the true parameters of real events observed by LIGO. Our results show that Deep Filtering achieves similar sensitivities and lower errors compared to matched-filtering while being far more computationally efficient and more resilient to glitches, allowing real-time processing of weak time-series signals in non-stationary non-Gaussian noise with minimal resources, and also enables the detection of new classes of gravitational wave sources that may go unnoticed with existing detection algorithms. This unified framework for data analysis is ideally suited to enable coincident detection campaigns of gravitational waves and their multimessenger counterparts in real-time.
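
    A minimal PyTorch sketch of a 1-D convolutional classifier of the general kind described; the layer sizes and input length are illustrative, not those of Deep Filtering:

    ```python
    import torch
    import torch.nn as nn

    class TimeSeriesCNN(nn.Module):
        """Minimal 1-D CNN: convolutional feature extractor over a raw time
        series followed by a dense head (signal vs. noise logits)."""
        def __init__(self, n_samples=2048):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=16), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
            )
            with torch.no_grad():   # infer the flattened feature size
                n_flat = self.features(torch.zeros(1, 1, n_samples)).numel()
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(n_flat, 64),
                                      nn.ReLU(), nn.Linear(64, 2))

        def forward(self, x):        # x: (batch, 1, n_samples)
            return self.head(self.features(x))

    model = TimeSeriesCNN()
    logits = model(torch.randn(8, 1, 2048))   # batch of 8 strain segments
    print(logits.shape)                       # torch.Size([8, 2])
    ```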

  3. Hemodynamics assessed via approximate entropy analysis of impedance cardiography time series: effect of metabolic syndrome.

    PubMed

    Guerra, Stefania; Boscari, Federico; Avogaro, Angelo; Di Camillo, Barbara; Sparacino, Giovanni; de Kreutzenberg, Saula Vigili

    2011-08-01

    The metabolic syndrome (MS), a predisposing condition for cardiovascular disease, presents disturbances in hemodynamics; impedance cardiography (ICG) can assess these alterations. In subjects with MS, the morphology of the pulses present in the ICG time series is more irregular/complex than in normal subjects. Therefore, the aim of the present study was to quantitatively assess the complexity of ICG time series in 53 patients, with or without MS, through a nonlinear analysis algorithm, the approximate entropy, a method employed in recent years for the study of several biological signals, which provides a scalar index, ApEn. We correlated ApEn computed from ICG time-series data during the fasting and postprandial phases with the presence of alterations in the parameters defining MS [Adult Treatment Panel (ATP) III (Grundy SM, Brewer HB Jr, Cleeman JI, Smith SC Jr, Lenfant C; National Heart, Lung, and Blood Institute; American Heart Association. Circulation 109: 433-438, 2004) and the International Diabetes Federation (IDF) definition]. Results show that ApEn was significantly higher in subjects with MS compared with those without (1.81 ± 0.09 vs. 1.65 ± 0.13; means ± SD; P = 0.0013, with the ATP III definition; 1.82 ± 0.09 vs. 1.67 ± 0.12; P = 0.00006, with the IDF definition). We also demonstrated that the ApEn increase parallels the number of components of MS. ApEn was then correlated to each MS component: mean ApEn values of subjects belonging to the first and fourth quartiles of the distribution of MS parameters were statistically different for all parameters but HDL cholesterol. No difference was observed between ApEn values evaluated in fasting and postprandial states. In conclusion, we found that MS is characterized by an increased complexity of ICG signals; this may have prognostic relevance in subjects with this condition.
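
    Approximate entropy itself is straightforward to compute: compare the regularity of m-length templates with that of (m+1)-length templates. A minimal sketch following Pincus's definition (the tolerance r = 0.2 standard deviations is a common default, not necessarily the value used in this study):

    ```python
    import numpy as np

    def approximate_entropy(x, m=2, r=None):
        """Approximate entropy ApEn(m, r) of a 1-D series (Pincus, 1991).
        m: embedding dimension; r: tolerance (default 0.2 * std of x)."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()

        def phi(m):
            n = len(x) - m + 1
            emb = np.array([x[i:i + m] for i in range(n)])
            # Chebyshev distance between all pairs of m-length templates.
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            c = (d <= r).mean(axis=1)   # fraction of templates within r
            return np.mean(np.log(c))

        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(5)
    print(approximate_entropy(np.sin(np.arange(500) * 0.1)))   # regular: low ApEn
    print(approximate_entropy(rng.normal(size=500)))           # irregular: higher
    ```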

  4. Complex time series analysis of PM10 and PM2.5 for a coastal site using artificial neural network modelling and k-means clustering

    NASA Astrophysics Data System (ADS)

    Elangasinghe, M. A.; Singhal, N.; Dirks, K. N.; Salmond, J. A.; Samarasinghe, S.

    2014-09-01

    This paper uses artificial neural networks (ANN), combined with k-means clustering, to understand the complex time series of PM10 and PM2.5 concentrations at a coastal location of New Zealand based on data from a single site. Out of available meteorological parameters from the network (wind speed, wind direction, solar radiation, temperature, relative humidity), key factors governing the pattern of the time series concentrations were identified through input sensitivity analysis performed on the trained neural network model. The transport pathways of particulate matter under these key meteorological parameters were further analysed through bivariate concentration polar plots and k-means clustering techniques. The analysis shows that the external sources such as marine aerosols and local sources such as traffic and biomass burning contribute equally to the particulate matter concentrations at the study site. These results are in agreement with the results of receptor modelling by the Auckland Council based on Positive Matrix Factorization (PMF). Our findings also show that contrasting concentration-wind speed relationships exist between marine aerosols and local traffic sources resulting in very noisy and seemingly large random PM10 concentrations. The inclusion of cluster rankings as an input parameter to the ANN model showed a statistically significant (p < 0.005) improvement in the performance of the ANN time series model and also showed better performance in picking up high concentrations. For the presented case study, the correlation coefficient between observed and predicted concentrations improved from 0.77 to 0.79 for PM2.5 and from 0.63 to 0.69 for PM10 and reduced the root mean squared error (RMSE) from 5.00 to 4.74 for PM2.5 and from 6.77 to 6.34 for PM10. The techniques presented here enable the user to obtain an understanding of potential sources and their transport characteristics prior to the implementation of costly chemical analysis techniques or advanced air dispersion models.
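
    The clustering step, grouping observations by their meteorological signature and feeding the cluster label back to the ANN as an additional input, can be sketched with scikit-learn. The features below and the sin/cos encoding of wind direction are illustrative assumptions, not the authors' exact setup:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(6)
    n = 1000
    # Toy hourly meteorology: wind speed, wind direction, temperature.
    ws = rng.gamma(2.0, 2.0, n)
    wd = rng.uniform(0, 2 * np.pi, n)
    temp = rng.normal(15, 5, n)
    # Encode the circular wind direction as (sin, cos) so 359 deg ~ 1 deg.
    X = np.column_stack([ws, np.sin(wd), np.cos(wd), temp])

    # Standardize, cluster, and append the label as an extra ANN input feature.
    Xs = StandardScaler().fit_transform(X)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xs)
    X_ann = np.column_stack([Xs, labels])   # augmented input matrix for the ANN
    print(np.bincount(labels))
    ```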

  5. Identification of AR(I)MA processes for modelling temporal correlations of GPS observations

    NASA Astrophysics Data System (ADS)

    Luo, X.; Mayer, M.; Heck, B.

    2009-04-01

    In many geodetic applications, observations of the Global Positioning System (GPS) are routinely processed by means of the least-squares method. However, this algorithm delivers reliable estimates of unknown parameters and realistic accuracy measures only if both the functional and stochastic models are appropriately defined within GPS data processing. One deficiency of the stochastic model used in many GPS software products consists in neglecting temporal correlations of GPS observations. In practice, knowledge of the temporal stochastic behaviour of GPS observations can be improved by analysing time series of residuals resulting from the least-squares evaluation. This paper presents an approach based on the theory of autoregressive (integrated) moving average (AR(I)MA) processes to model temporal correlations of GPS observations using time series of observation residuals. A practicable integration of AR(I)MA models in GPS data processing first requires the determination of the order parameters of the AR(I)MA processes. In the case of GPS, the identification of AR(I)MA processes can be affected by various factors impacting GPS positioning results, e.g. baseline length, multipath effects, observation weighting, or weather variations. The influences of these factors on AR(I)MA identification are empirically analysed based on a large set of representative residual time series resulting from differential GPS post-processing using 1-Hz observation data collected within the permanent SAPOS® (Satellite Positioning Service of the German State Survey) network. Both short and long time series are modelled by means of AR(I)MA processes. The final order parameters are determined based on the whole residual database; the corresponding empirical distribution functions illustrate that multipath and weather variations seem to affect the identification of AR(I)MA processes much more significantly than baseline length and observation weighting. Additionally, the results of modelling temporal correlations using high-order AR(I)MA processes are compared with those obtained by means of first-order autoregressive (AR(1)) processes and empirically estimated autocorrelation functions.
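
    The order-determination step can be illustrated with standard tools: fit candidate ARMA orders to a residual series and keep the one minimizing an information criterion. A minimal sketch with statsmodels; the AR(1)-like toy series and the AIC criterion are assumptions, not the SAPOS processing chain:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Toy residual series with AR(1)-like temporal correlation.
    rng = np.random.default_rng(7)
    e = np.zeros(2000)
    for k in range(1, 2000):
        e[k] = 0.7 * e[k - 1] + rng.normal(scale=0.5)

    # Identify the order by minimizing an information criterion (AIC here).
    best = min(
        ((p, q) for p in range(4) for q in range(3)),
        key=lambda pq: ARIMA(e, order=(pq[0], 0, pq[1])).fit().aic,
    )
    print("selected ARMA order:", best)   # expected near (1, 0)
    ```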

  6. An a priori model for the reduction of nutation observations: KSV(1994.3) nutation series

    NASA Technical Reports Server (NTRS)

    Herring, T. A.

    1995-01-01

    We discuss the formulation of a new nutation series to be used in the reduction of modern space geodetic data. The motivation is to develop a nutation series that has smaller short-period errors than the IAU 1980 nutation series and that can be used with techniques, such as the Global Positioning System (GPS), that are sensitive to nutations but can directly separate the effects of nutations from errors in the dynamical force models that affect the satellite orbits. A modern nutation series should allow the errors in the force models for GPS to be better understood. The series is constructed by convolving the Kinoshita and Souchay rigid Earth nutation series with an Earth response function whose parameters are partly based on geophysical models of the Earth and partly estimated from a long series (1979-1993) of very long baseline interferometry (VLBI) estimates of nutation angles. Secular rates of change of the nutation angles, representing corrections to the precession constant and a secular change of the obliquity of the ecliptic, are included in the theory. Time-dependent amplitudes of the Free Core Nutation (FCN), which is most likely excited by variations in atmospheric pressure, are included when the geophysical parameters are estimated. The complex components of the prograde annual nutation are estimated simultaneously with the geophysical parameters because of the large contribution to the nutation from the S(sub 1) atmospheric tide. The weighted root mean square (WRMS) scatter of the nutation angle estimates about this new model is 0.32 mas, and the largest correction to the series when the amplitudes of the ten largest nutations are estimated is 0.18 ± 0.03 mas for the in-phase component of the prograde 18.6-year nutation.

  7. The Potential of Tropospheric Gradients for Regional Precipitation Prediction

    NASA Astrophysics Data System (ADS)

    Boisits, Janina; Möller, Gregor; Wittmann, Christoph; Weber, Robert

    2017-04-01

    Changes of temperature and humidity in the neutral atmosphere cause variations in tropospheric path delays and tropospheric gradients. By estimating zenith wet delays (ZWD) and gradients using a GNSS reference station network, the obtained time series provide information about spatial and temporal variations of water vapour in the atmosphere. Thus, GNSS-based tropospheric parameters can contribute to the forecast of regional precipitation events. In a recently finalized master thesis at TU Wien, the potential of tropospheric gradients for weather prediction was investigated. ZWD and gradient time series at selected GNSS reference stations were compared to precipitation data over a period of six months (April to September 2014). The selected GNSS stations form two test areas within Austria. All required meteorological data were provided by the Central Institution for Meteorology and Geodynamics (ZAMG). Two characteristics in the ZWD and gradient time series can be anticipated in case of an approaching weather front. First, an induced asymmetry in tropospheric delays results in both an increased magnitude of the gradient and in gradients pointing towards the weather front. Second, an increase in ZWD reflects the increased water vapour concentration right before a precipitation event. To investigate these characteristics, exemplary test events were processed. On the one hand, the sequence of the anticipated ZWD increases at the GNSS stations, obtained by cross-correlation of the time series, indicates the direction of the approaching weather front. On the other hand, the corresponding peak in the gradient time series also allows the direction of movement to be deduced. To verify the results, precipitation data from ZAMG were used. It can be deduced that tropospheric gradients show high potential for predicting precipitation events. While ZWD time series rather indicate the orientation of the air mass boundary, gradients rather indicate the direction of movement of an approaching weather front. Additionally, our investigations have shown that gradients are able to capture the characteristics of an approaching weather front twenty to thirty hours before the precipitation event, which allows a first indication well in advance. In conclusion, the utilization of GNSS tropospheric parameters, in particular tropospheric gradients, has the potential to contribute substantially to weather forecasting models.
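
    The cross-correlation step used to sequence the ZWD increases across stations can be sketched as follows; the synthetic "moisture pulse" series and the 6-sample delay are illustrative:

    ```python
    import numpy as np

    def lag_of_max_correlation(a, b):
        """Lag (in samples) at which series a best matches series b, from the
        peak of the normalized cross-correlation. A positive lag means a
        lags behind b."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        cc = np.correlate(a, b, mode="full") / len(a)
        lags = np.arange(-len(a) + 1, len(a))
        return lags[np.argmax(cc)]

    # Toy ZWD-like signals: the same moisture pulse arrives 6 samples later at B.
    rng = np.random.default_rng(8)
    t = np.arange(400)
    pulse = np.exp(-0.5 * ((t - 200) / 15.0) ** 2)
    zwd_a = pulse + 0.05 * rng.normal(size=t.size)
    zwd_b = np.roll(pulse, 6) + 0.05 * rng.normal(size=t.size)
    print(lag_of_max_correlation(zwd_b, zwd_a))   # ~6 samples
    ```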

  8. Solutions for transients in arbitrarily branching cables: III. Voltage clamp problems.

    PubMed

    Major, G

    1993-07-01

    Branched cable voltage recording and voltage clamp analytical solutions derived in two previous papers are used to explore practical issues concerning voltage clamp. Single exponentials can be fitted reasonably well to the decay phase of clamped synaptic currents, although they contain many underlying components. The effective time constant depends on the fit interval. The smoothing effects on synaptic clamp currents of dendritic cables and series resistance are explored with a single cylinder + soma model, for inputs with different time courses. "Soma" and "cable" charging currents cannot be separated easily when the soma is much smaller than the dendrites. Subtractive soma capacitance compensation and series resistance compensation are discussed. In a hippocampal CA1 pyramidal neurone model, voltage control at most dendritic sites is extremely poor. Parameter dependencies are illustrated. The effects of series resistance compound those of dendritic cables and depend on the "effective capacitance" of the cell. Plausible combinations of parameters can cause order-of-magnitude distortions to clamp current waveform measures of simulated Schaeffer collateral inputs. These voltage clamp problems are unlikely to be solved by the use of switch clamp methods.

  9. Reduction of the dimension of neural network models in problems of pattern recognition and forecasting

    NASA Astrophysics Data System (ADS)

    Nasertdinova, A. D.; Bochkarev, V. V.

    2017-11-01

    Deep neural networks with a large number of parameters are a powerful tool for solving problems of pattern recognition, prediction and classification. Nevertheless, overfitting remains a serious problem in the use of such networks. A method for solving the problem of overfitting is proposed in this article. The method is based on reducing the number of independent parameters of a neural network model using principal component analysis, and can be implemented using existing libraries for neural computing. The algorithm was tested on the problem of recognizing handwritten digits from the MNIST database, as well as on the task of predicting time series (series of the average monthly number of sunspots and series from the Lorenz system were used). It is shown that applying principal component analysis makes it possible to reduce the number of parameters of the neural network model while preserving good results. The average error rate for the recognition of handwritten digits from the MNIST database was 1.12% (comparable to the results obtained using deep learning methods), while the number of parameters of the neural network can be reduced by a factor of up to 130.
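
    The article applies principal component analysis to the parameters of the network itself; a closely related and simpler variant, compressing the input space so that the first layer needs far fewer weights, can be sketched with scikit-learn. The digits dataset stands in for MNIST, and all sizes are illustrative:

    ```python
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # 8x8 digit images (a small stand-in for MNIST).
    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Project inputs onto the leading principal components; the first layer
    # then has 16 inputs instead of 64, shrinking its weight matrix fourfold.
    pca = PCA(n_components=16).fit(X_tr)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    clf.fit(pca.transform(X_tr), y_tr)
    print("test accuracy:", clf.score(pca.transform(X_te), y_te))
    ```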

  10. A Multipixel Time Series Analysis Method Accounting for Ground Motion, Atmospheric Noise, and Orbital Errors

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2018-02-01

    Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent on pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example application to the northern Chilean subduction zone highlights the potential of this method.
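
    The exponential spatial covariance used here is simple to construct explicitly; a sketch with assumed values for the variance and e-folding length:

    ```python
    import numpy as np

    def exponential_covariance(coords, sigma=1.0, lam=10.0):
        """Spatial covariance C_ij = sigma^2 * exp(-d_ij / lam) between pixels,
        with d_ij the pixel-to-pixel distance and lam the e-folding length
        (both in the same units, e.g. km)."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
        return sigma**2 * np.exp(-d / lam)

    # Toy 20x20 pixel grid with 1 km spacing.
    xy = np.array([(i, j) for i in range(20) for j in range(20)], dtype=float)
    C = exponential_covariance(xy, sigma=2.0, lam=5.0)
    # C (and its inverse) would weight the joint inversion over all pixels.
    print(C.shape, C[0, 0], C[0, 1])
    ```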

  11. The international performance of healthcare systems in population health: capabilities of pooled cross-sectional time series methods.

    PubMed

    Reibling, Nadine

    2013-09-01

    This paper outlines the capabilities of pooled cross-sectional time series methodology for the international comparison of health system performance in population health. It shows how common model specifications can be improved so that they not only better address the specific nature of time series data on population health but are also more closely aligned with our theoretical expectations of the effect of healthcare systems. Three methodological innovations for this field of applied research are discussed: (1) how dynamic models help us understand the timing of effects, (2) how parameter heterogeneity can be used to compare performance across countries, and (3) how multiple imputation can be used to deal with incomplete data. We illustrate these methodological strategies with an analysis of infant mortality rates in 21 OECD countries between 1960 and 2008 using OECD Health Data. Copyright © 2013 The Author. Published by Elsevier Ireland Ltd. All rights reserved.

  12. Time series models of environmental exposures: Good predictions or good understanding.

    PubMed

    Barnett, Adrian G; Stephen, Dimity; Huang, Cunrui; Wolkewitz, Martin

    2017-04-01

    Time series data are popular in environmental epidemiology as they make use of the natural experiment of how changes in exposure over time might impact on disease. Many published time series papers have used parameter-heavy models that fully explained the second order patterns in disease to give residuals that have no short-term autocorrelation or seasonality. This is often achieved by including predictors of past disease counts (autoregression) or seasonal splines with many degrees of freedom. These approaches give great residuals, but add little to our understanding of cause and effect. We argue that modelling approaches should rely more on good epidemiology and less on statistical tests. This includes thinking about causal pathways, making potential confounders explicit, fitting a limited number of models, and not over-fitting at the cost of under-estimating the true association between exposure and disease. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. AGARD Flight Test Instrumentation Series. Volume 14. The Analysis of Random Data

    DTIC Science & Technology

    1981-11-01

    obtained at arbitrary times during a number of flights. No constraints have been placed upon the controlling parameters, so that the process is non ... "noisy" environment controlling a non-linear system (the aircraft) using a redundant net of control parameters ... when aircraft were flown manually with ... structure. Case 2. Non-Stationary Measurements. When the RMS value of a random signal varies with parameters which cannot be controlled, then the method

  14. Optimizing Use of Water Management Systems during Changes of Hydrological Conditions

    NASA Astrophysics Data System (ADS)

    Výleta, Roman; Škrinár, Andrej; Danáčová, Michaela; Valent, Peter

    2017-10-01

    When designing water management systems and their components, there is a need for more detailed research on the hydrological conditions of the river basin whose runoff forms the main source of water for the reservoir. Over the lifetime of a water management system, the hydrological time series are never repeated in the same form as those which served as input for the design of the system components. The design assumes the observed time series to be representative at the time of the system use, which is a rather unrealistic assumption, because the hydrological past will not be exactly repeated over the design lifetime. When designing water management systems, specialists may therefore face undersized or oversized capacity designs, or possibly wrongly specified management rules, which may lead to non-optimal use. It is therefore necessary to establish a comprehensive approach to simulating the fluctuations of interannual runoff (taking into account the recurring dry and wet periods) by means of stochastic modelling techniques in water management practice. The paper deals with the methodological procedure of modelling mean monthly flows using the stochastic Thomas-Fiering model, modified by a Wilson-Hilferty transformation of the independent random numbers. This transformation is usually applied in the event of significant asymmetry in the observed time series. The procedure was applied to data acquired at the gauging station of Horné Orešany on the Parná Stream. Observed mean monthly flows for the period 1.11.1980 - 31.10.2012 served as the model input. After estimating the model parameters and the Wilson-Hilferty transformation parameters, synthetic time series of mean monthly flows were simulated. These were compared with the observed hydrological time series using basic statistical characteristics (e.g. mean, standard deviation and skewness) to test the quality of the model simulation. The synthetic hydrological series of monthly flows had the same statistical properties as the time series observed in the past. The compiled model was able to take into account the diversity of extreme hydrological situations in the form of synthetic series of mean monthly flows, confirming the occurrence of sets of flows which could occur in the future. The results of stochastic modelling in the form of synthetic time series of mean monthly flows, which take into account the seasonal fluctuation of runoff within the year, are applicable in engineering hydrology (e.g. for optimizing the use of an existing water management system in connection with reassessing the economic risks of the system).
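
    The Thomas-Fiering recursion generates each month's flow from the previous month via a lag-1 regression plus a random component. A minimal sketch is given below; it uses a plain standard normal deviate where the paper applies a Wilson-Hilferty transformation to handle skewness, and the toy gamma-distributed "observations" are an assumption:

    ```python
    import numpy as np

    def thomas_fiering(q_obs, n_years, rng=None):
        """Generate synthetic mean monthly flows with the Thomas-Fiering model:
        month-to-month lag-1 regression plus a random component.
        q_obs: observed flows, shape (years, 12)."""
        rng = rng or np.random.default_rng()
        mean = q_obs.mean(axis=0)
        std = q_obs.std(axis=0, ddof=1)
        # Lag-1 correlation between month j and month j+1
        # (December wraps to January within the year; a simplification).
        r = np.array([np.corrcoef(q_obs[:, j],
                                  np.roll(q_obs, -1, axis=1)[:, j])[0, 1]
                      for j in range(12)])
        q = np.empty((n_years, 12))
        prev, prev_j = mean[11], 11
        for i in range(n_years):
            for j in range(12):
                b = r[prev_j] * std[j] / std[prev_j]      # regression slope
                eps = rng.normal()   # the paper transforms this (Wilson-Hilferty)
                q[i, j] = (mean[j] + b * (prev - mean[prev_j])
                           + eps * std[j] * np.sqrt(1.0 - r[prev_j] ** 2))
                prev, prev_j = q[i, j], j
        return q

    q_obs = np.random.default_rng(9).gamma(3.0, 2.0, size=(32, 12))  # toy record
    q_syn = thomas_fiering(q_obs, n_years=100, rng=np.random.default_rng(10))
    print(q_syn.mean(axis=0).round(2))   # should track the monthly means of q_obs
    ```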

  15. On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman

    2016-04-01

    The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland, artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres, or any combination thereof. In this study we have simulated 20 time series of 23 years in length with different stochastic characteristics, such as white, flicker or random walk noise. The noise amplitude was assumed to be 1 mm/y^(-κ/4). Then, we added the deterministic part consisting of a linear trend of 20 mm/y (representing the average horizontal velocity) and accelerations ranging from -0.6 to +0.6 mm/y². For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package, without taking the non-linear term into account. In this way we set the benchmark for investigating how the noise properties and velocity uncertainty may be affected by any un-modelled non-linear term. The velocities and their uncertainties versus the accelerations for different types of noise are determined. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of -1.8±1.2 mm/y² and -4.5±3.3 mm/y² for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
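
    The deterministic part of the extended model is just a quadratic polynomial fit; a minimal sketch on synthetic data with white noise only (the study's flicker and random walk cases require MLE tools such as Hector), with illustrative velocity and acceleration values:

    ```python
    import numpy as np

    # Synthetic daily position series (mm): trend + acceleration + white noise.
    rng = np.random.default_rng(11)
    t = np.arange(0, 23, 1 / 365.25)           # 23 years, in years
    truth = 20.0 * t + 0.5 * 0.3 * t**2        # 20 mm/y velocity, 0.3 mm/y^2 accel.
    x = truth + rng.normal(0, 2.0, t.size)

    # Linear-only model vs. model including the quadratic acceleration term.
    v_lin = np.polyfit(t, x, 1)[0]
    coef = np.polyfit(t, x, 2)
    v_quad, acc = coef[1], 2 * coef[0]         # 0.5*a*t^2 convention: a = 2*c2
    print(f"velocity (linear fit): {v_lin:.2f} mm/y")
    print(f"velocity (quadratic fit): {v_quad:.2f} mm/y, accel: {acc:.2f} mm/y^2")
    ```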

  16. A Bayesian inverse modeling approach to estimate soil hydraulic properties of a toposequence in southeastern Amazonia.

    NASA Astrophysics Data System (ADS)

    Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel

    2016-04-01

    Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is the knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the one most commonly used for modeling soil water retention curves (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages compared to the widely used global optimization techniques: the Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allows for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with soils pedologically similar to those used for the estimation. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period at pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme for 10,000 iterations, of which only the last 2,500 were used to calculate the posterior distributions of each hydraulic parameter, along with 95% confidence intervals (CI) of the volumetric water content and tension time series. The posterior distributions were then used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture. The parameters of the nearby site were accepted when the predicted tension time series were within the 95% CI derived from the calibration site using the DREAM scheme.

  17. Ranking streamflow model performance based on Information theory metrics

    NASA Astrophysics Data System (ADS)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to test whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: watersheds served as information filters, so that streamflow time series were less random and more complex than those of precipitation, reflecting the hydrologic conversion process from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as model complexity increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
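
    As an illustration of the symbolization and one of the three metrics, mean information gain can be computed as the entropy of symbol pairs minus the entropy of single symbols. A sketch under assumed choices (a four-symbol quantile alphabet and lag-1 words; the alphabet size actually used in the study is not specified here):

    ```python
    import numpy as np

    def symbolize(x, n_symbols=4):
        """Map a series onto symbols 0..n_symbols-1 by quantile bins."""
        edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
        return np.digitize(x, edges)

    def mean_information_gain(sym, n_symbols=4):
        """H(next symbol | current symbol): entropy of symbol pairs minus
        entropy of single symbols (in bits)."""
        def entropy(counts):
            p = counts / counts.sum()
            p = p[p > 0]
            return -(p * np.log2(p)).sum()
        pairs = sym[:-1] * n_symbols + sym[1:]
        h1 = entropy(np.bincount(sym, minlength=n_symbols))
        h2 = entropy(np.bincount(pairs, minlength=n_symbols**2))
        return h2 - h1

    rng = np.random.default_rng(12)
    noise = rng.normal(size=2000)                        # random: MIG near 2 bits
    flow = np.convolve(noise, np.ones(30) / 30, "same")  # smoothed: lower MIG
    print(mean_information_gain(symbolize(noise)),
          mean_information_gain(symbolize(flow)))
    ```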

  18. Daily water and sediment discharges from selected rivers of the eastern United States; a time-series modeling approach

    USGS Publications Warehouse

    Fitzgerald, Michael G.; Karlinger, Michael R.

    1983-01-01

    Time-series models were constructed for the analysis of daily runoff and sediment discharge data from selected rivers of the Eastern United States. Logarithmic transformation and first-order differencing of the data sets were necessary to produce second-order stationary time series and remove seasonal trends. Cyclic models accounted for less than 42 percent of the variance in the water series and 31 percent in the sediment series. Analysis of the apparent oscillations of given frequencies occurring in the data indicates that frequently occurring storms can account for as much as 50 percent of the variation in sediment discharge. Components of the frequency analysis indicate that a linear representation is reasonable for the water-sediment system. Models that incorporate lagged water discharge as input prove superior to univariate techniques in the modeling and prediction of sediment discharges. The random component of the models includes errors in measurement and model hypothesis and shows no serial correlation. An index of sediment production within or between drainage basins can be calculated from the model parameters.

  19. Modified superposition: A simple time series approach to closed-loop manual controller identification

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.

    1986-01-01

    Single-channel pilot manual control output in closed-loop tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time-series generation technique. A Levinson-Durbin algorithm is used to determine the filter which prewhitens the input, and a projective (least squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the source data are relatively short records, approximately 25 seconds long. Time delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time-series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for the best time-domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
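
    The Levinson-Durbin step solves the Yule-Walker equations recursively to obtain the prewhitening filter. A minimal sketch, with an AR(2) toy signal standing in for pilot control output:

    ```python
    import numpy as np

    def levinson_durbin(r, order):
        """Solve the Yule-Walker equations for AR coefficients via the
        Levinson-Durbin recursion. r: autocorrelation at lags 0..order."""
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for k in range(1, order + 1):
            acc = r[k] + a[1:k] @ r[1:k][::-1]
            ref = -acc / err                    # reflection coefficient
            a[1:k] = a[1:k] + ref * a[1:k][::-1]
            a[k] = ref
            err *= (1.0 - ref**2)
        return a, err   # prewhitening filter: e[n] = sum_i a[i] x[n-i]

    # AR(2) test signal; the recursion should recover a = [1, -0.6, 0.2].
    rng = np.random.default_rng(13)
    x = np.zeros(20000)
    for n in range(2, x.size):
        x[n] = 0.6 * x[n - 1] - 0.2 * x[n - 2] + rng.normal()
    r = np.array([np.dot(x[:x.size - k], x[k:]) / x.size for k in range(3)])
    print(levinson_durbin(r, 2)[0])
    ```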

  20. Fractal dynamics of heartbeat time series of young persons with metabolic syndrome

    NASA Astrophysics Data System (ADS)

    Muñoz-Diosdado, A.; Alonso-Martínez, A.; Ramírez-Hernández, L.; Martínez-Hernández, G.

    2012-10-01

    In recent years, many physiological systems have been quantitatively characterized using fractal analysis. We applied it to study the heart rate variability of young subjects with metabolic syndrome (MS); we examined the RR time series (time between two R waves in the ECG) with the detrended fluctuation analysis (DFA) method, Higuchi's fractal dimension method and multifractal analysis to detect the possible presence of heart problems. The results show that although the young persons have MS, the majority do not present alterations in heart dynamics. However, there were cases where the fractal parameter values differed significantly from the values for healthy people.
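
    The DFA exponent referred to here is obtained by integrating the series, detrending it in windows of increasing size, and reading the slope of the fluctuation function on log-log axes. A minimal sketch (window sizes and test signals are illustrative):

    ```python
    import numpy as np

    def dfa(x, scales):
        """Detrended fluctuation analysis: returns the scaling exponent alpha
        from F(s) ~ s^alpha (alpha ~ 0.5 white noise, ~ 1 flicker noise)."""
        y = np.cumsum(x - np.mean(x))        # integrated profile
        F = []
        for s in scales:
            n_seg = len(y) // s
            segs = y[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            f2 = []
            for seg in segs:                 # detrend each window linearly
                c = np.polyfit(t, seg, 1)
                f2.append(np.mean((seg - np.polyval(c, t)) ** 2))
            F.append(np.sqrt(np.mean(f2)))
        return np.polyfit(np.log(scales), np.log(F), 1)[0]

    rng = np.random.default_rng(14)
    scales = np.array([8, 16, 32, 64, 128])
    print(dfa(rng.normal(size=4096), scales))              # ~0.5 (white noise)
    print(dfa(np.cumsum(rng.normal(size=4096)), scales))   # ~1.5 (Brownian)
    ```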

  1. Beyond linear methods of data analysis: time series analysis and its applications in renal research.

    PubMed

    Gupta, Ashwani K; Udrea, Andreea

    2013-01-01

    Analysis of temporal trends in medicine is needed to understand normal physiology and to study the evolution of disease processes. It is also useful for monitoring response to drugs and interventions, and for accountability and tracking of health care resources. In this review, we discuss what makes time series analysis unique for the purposes of renal research and its limitations. We also introduce nonlinear time series analysis methods and provide examples where these have advantages over linear methods. We review areas where these computational methods have found applications in nephrology ranging from basic physiology to health services research. Some examples include noninvasive assessment of autonomic function in patients with chronic kidney disease, dialysis-dependent renal failure and renal transplantation. Time series models and analysis methods have been utilized in the characterization of mechanisms of renal autoregulation and to identify the interaction between different rhythms of nephron pressure flow regulation. They have also been used in the study of trends in health care delivery. Time series are everywhere in nephrology and analyzing them can lead to valuable knowledge discovery. The study of time trends of vital signs, laboratory parameters and the health status of patients is inherent to our everyday clinical practice, yet formal models and methods for time series analysis are not fully utilized. With this review, we hope to familiarize the reader with these techniques in order to assist in their proper use where appropriate.

  2. Detecting trend on ecological river status - how to deal with short incomplete bioindicator time series? Methodological and operational issues

    NASA Astrophysics Data System (ADS)

    Cernesson, Flavie; Tournoud, Marie-George; Lalande, Nathalie

    2018-06-01

    Among the various parameters monitored in river monitoring networks, bioindicators provide very informative data. Analysing time variations in bioindicator data is tricky for water managers because the data sets are often short, irregular, and non-normally distributed. It is also a challenging methodological issue for scientists, as in the Saône basin (30 000 km², France), where, among 812 IBGN (French macroinvertebrate bioindicator) monitoring stations operating between 1998 and 2010, only 71 time series had more than 10 data values and were studied here. Combining various analytical tools (three parametric and non-parametric statistical tests plus a graphical analysis), 45 IBGN time series were classified as stationary and 26 as non-stationary (only one of which showed a degradation). Series from sampling stations located within the same hydroecoregion showed similar trends, while river size classes appeared non-significant in explaining temporal trends. So, from a methodological point of view, combining statistical tests and graphical analysis is a relevant option when striving to improve trend detection. Moreover, it was possible to propose a way to summarise the series in order to analyse links between ecological river quality indicators and land use stressors.
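
    For short, non-normally distributed series like these, the non-parametric Mann-Kendall test is a typical choice among the statistical tests mentioned. The sketch below shows the basic computation with invented data; it omits the tie correction that integer-valued IBGN scores would warrant in practice:

    ```python
    import numpy as np
    from scipy import stats

    def mann_kendall(x):
        """Mann-Kendall trend test (no tie correction): returns the
        normalized statistic Z and the two-sided p-value."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        s = np.sum([np.sign(x[j] - x[i]) for i in range(n - 1)
                    for j in range(i + 1, n)])
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        p = 2 * (1 - stats.norm.cdf(abs(z)))
        return z, p

    # Short, irregular bioindicator-like series with a weak upward trend.
    rng = np.random.default_rng(15)
    ibgn = np.clip(10 + 0.3 * np.arange(12) + rng.normal(0, 2, 12), 0, 20)
    print(mann_kendall(ibgn))
    ```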

  3. Multifractal detrended fluctuation analysis of sheep livestock prices in origin

    NASA Astrophysics Data System (ADS)

    Pavón-Domínguez, P.; Serrano, S.; Jiménez-Hornero, F. J.; Jiménez-Hornero, J. E.; Gutiérrez de Ravé, E.; Ariza-Villaverde, A. B.

    2013-10-01

    The multifractal detrended fluctuation analysis (MF-DFA) is used to verify whether or not the returns of time series of prices paid to farmers in origin markets can be described by the multifractal approach. By way of example, 5 weekly time series of prices for different breeds, slaughter weights and market differentiations from 2000 to 2012 are analyzed. Results obtained from the multifractal parameters and multifractal spectra show that the price series of livestock products are of a multifractal nature. The Hurst exponent shows that these time series are stationary signals, some of which exhibit long memory (Merino milk-fed in Seville and Segureña paschal in Jaen), short memory (Merino paschal in Cordoba and Segureña milk-fed in Jaen) or are even close to uncorrelated signals (Merino paschal in Seville). MF-DFA is able to discern the different underlying dynamics that play an important role in different types of sheep livestock markets, such as the degree and source of multifractality. In addition, the main source of multifractality in these time series is the broadness of the probability function, rather than the long-range correlation properties between small and large fluctuations, which play a clearly secondary role.

  4. Optimized Design and Analysis of Sparse-Sampling fMRI Experiments

    PubMed Central

    Perrachione, Tyler K.; Ghosh, Satrajit S.

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power. PMID:23616742

  6. Which homogenisation method is appropriate for daily time series of relative humidity?

    NASA Astrophysics Data System (ADS)

    Chimani, Barbara; Nemec, Johanna; Auer, Ingeborg; Venema, Victor

    2014-05-01

    Data homogenisation is an essential part of reliable climate data analyses. Different tools for detecting and adjusting breaks in daily extreme temperatures (Tmin, Tmax) and daily precipitation sums have been developed in recent years. Due to its influence on health, plants and construction, relative humidity is another parameter of great importance. On the basis of 6 networks of measured (and homogenised with respect to the monthly means) relative humidity data, which cover different climatic areas in Austria, a synthetic data set for testing and validating homogenisation methods was built. Each network consists of 4 to 6 station time series with a minimum length of 5 years. The so-called surrogate networks resemble the statistical properties (e.g. distribution of the parameter, auto- and cross-correlation within the network) of the measured time series, but are extended to 100-year long time series, which are in a first step assumed to be homogeneous. For creating the best possible surrogate dataset of relative humidity, detailed statistical information on potential inhomogeneities is decisive. Information on the potential breaks was taken from parallel measurements available for some Austrian locations, mostly representing changes in instrumentation and/or station relocation. Besides changes in the distribution of the parameter, the analyses include an estimation of changes in the number of missing data and of global and local biases, both on a seasonal and annual basis. An additional break is to be expected in the Austrian time series due to a change in observation time in 1970/1971. Since this change occurred simultaneously at all Austrian climate stations, standard homogenisation methods, which rely on a comparison with reference stations, are not able to detect or correct this shift. Therefore, an independent correction method for this type of break, to be applied before homogenisation, was developed. This type of change point was not included in the surrogate network. Artificial inhomogeneities were introduced to the dataset in three steps: (1) deterministic change points: within one homogeneous sub-period (HSP) a constant perturbation is added to each relative humidity value; (2) deterministic + random changes: random changes do not change the mean of the HSP but can affect the distribution of the parameter; (3) in addition, realistic changes in break frequency and missing data. In order to test the efficiency of homogenisation methods, the procedure was separated into break detection and adjustment of inhomogeneities. The methods MASH (Szentimrey, 1999), ACMANT (Domonkos, 2011), PRODIGE (Caussinus and Mestre, 2004), SNHT (Alexandersson, 1986), Vincent (Vincent, 1998), the E-P method (Easterling and Peterson, 1995) and the Bivariate test (Maronna and Yohai, 1978) were selected for break detection. Break detection is in all methods restricted to monthly, seasonal or annual data. Since we are dealing with daily data, the number of applicable correction methods is reduced, and we concentrate on the following: MASH, Vincent, SPLIDHOM (Mestre et al., 2011) and the percentile method (Stepanek, 2009). Information on the statistical characteristics of breaks in relative humidity series, the correction method concerning the changed observation times, and first results concerning break detection will be presented.

  7. Appraisal of long term groundwater quality of peninsular India using water quality index and fractal dimension

    NASA Astrophysics Data System (ADS)

    Rawat, Kishan Singh; Singh, Sudhir Kumar; Jacintha, T. German Amali; Nemčić-Jurec, Jasna; Tripathi, Vinod Kumar

    2017-12-01

    A review has been made to understand the hydrogeochemical behaviour of groundwater through statistical analysis of long-term water quality data (years 2005-2013). The Water Quality Index (WQI), descriptive statistics, Hurst exponent, fractal dimension and predictability index were estimated for each water parameter. The WQI results showed that the majority of samples fall in the moderate category during 2005-2013, but monitoring site four falls under the severe category (water unfit for domestic use). Brownian time series behaviour (a true random walk nature) exists between calcium (Ca^{2+}) and electric conductivity (EC); magnesium (Mg^{2+}) and EC; sodium (Na+) and EC; sulphate (SO4^{2-}) and EC; and total dissolved solids (TDS) and chloride (Cl-) during the pre- (2005-2013) and post- (2006-2013) monsoon seasons. These parameters have Hurst exponent (H) values close to the Brownian time series behaviour condition (H = 0.5). The time series analysis of the water quality data shows a persistent behaviour (a positive autocorrelation) between Cl- and Mg^{2+}, Cl- and Ca^{2+}, TDS and Na+, TDS and SO4^{2-}, and TDS and Ca^{2+} in the pre- and post-monsoon time series, because of the higher values of H (>1), whereas an anti-persistent behaviour (a negative autocorrelation) was found between Cl- and EC, and TDS and EC during pre- and post-monsoon, due to low values of H. The results show that the groundwater of a few areas needs treatment before direct consumption, and that it also needs to be protected from contamination.

  8. Characterizability of metabolic pathway systems from time series data.

    PubMed

    Voit, Eberhard O

    2013-12-01

    Over the past decade, the biomathematical community has devoted substantial effort to the complicated challenge of estimating parameter values for biological systems models. An even more difficult issue is the characterization of functional forms for the processes that govern these systems. Most parameter estimation approaches tacitly assume that these forms are known or can be assumed with some validity. However, this assumption is not always true. The recently proposed method of Dynamic Flux Estimation (DFE) addresses this problem in a genuinely novel fashion for metabolic pathway systems. Specifically, DFE allows the characterization of fluxes within such systems through an analysis of metabolic time series data. Its main drawback is the fact that DFE can only directly be applied if the pathway system contains as many metabolites as unknown fluxes. This situation is unfortunately rare. To overcome this roadblock, earlier work in this field had proposed strategies for augmenting the set of unknown fluxes with independent kinetic information, which however is not always available. Employing Moore-Penrose pseudo-inverse methods of linear algebra, the present article discusses an approach for characterizing fluxes from metabolic time series data that is applicable even if the pathway system is underdetermined and contains more fluxes than metabolites. Intriguingly, this approach is independent of a specific modeling framework and unaffected by noise in the experimental time series data. The results reveal whether any fluxes may be characterized and, if so, which subset is characterizable. They also help with the identification of fluxes that, if they could be determined independently, would allow the application of DFE. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Characterizability of Metabolic Pathway Systems from Time Series Data

    PubMed Central

    Voit, Eberhard O.

    2013-01-01

    Over the past decade, the biomathematical community has devoted substantial effort to the complicated challenge of estimating parameter values for biological systems models. An even more difficult issue is the characterization of functional forms for the processes that govern these systems. Most parameter estimation approaches tacitly assume that these forms are known or can be assumed with some validity. However, this assumption is not always true. The recently proposed method of Dynamic Flux Estimation (DFE) addresses this problem in a genuinely novel fashion for metabolic pathway systems. Specifically, DFE allows the characterization of fluxes within such systems through an analysis of metabolic time series data. Its main drawback is the fact that DFE can only directly be applied if the pathway system contains as many metabolites as unknown fluxes. This situation is unfortunately rare. To overcome this roadblock, earlier work in this field had proposed strategies for augmenting the set of unknown fluxes with independent kinetic information, which however is not always available. Employing Moore-Penrose pseudo-inverse methods of linear algebra, the present article discusses an approach for characterizing fluxes from metabolic time series data that is applicable even if the pathway system is underdetermined and contains more fluxes than metabolites. Intriguingly, this approach is independent of a specific modeling framework and unaffected by noise in the experimental time series data. The results reveal whether any fluxes may be characterized and, if so, which subset is characterizable. They also help with the identification of fluxes that, if they could be determined independently, would allow the application of DFE. PMID:23391489
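
    The core linear-algebra step is compact enough to sketch. Below, a hypothetical underdetermined balance (more fluxes than metabolites, the setting the paper addresses) is solved with the Moore-Penrose pseudo-inverse; the stoichiometric matrix and slope values are invented for illustration, and the null-space basis indicates which flux combinations remain uncharacterizable.

```python
import numpy as np

# Illustrative stoichiometry: 2 metabolites, 3 unknown fluxes (underdetermined)
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

# Slopes dX/dt estimated from smoothed metabolite time series at one time point
dXdt = np.array([0.4, -0.1])

# Minimum-norm flux vector consistent with the balance S @ v = dX/dt
v_min_norm = np.linalg.pinv(S) @ dXdt

# Directions in the null space of S are the flux combinations the data
# cannot characterize; determining one of them independently would make
# the system amenable to standard DFE.
null_basis = np.linalg.svd(S)[2][np.linalg.matrix_rank(S):]
print(v_min_norm, null_basis)
```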

  10. Analyzing Single-Molecule Time Series via Nonparametric Bayesian Inference

    PubMed Central

    Hines, Keegan E.; Bankston, John R.; Aldrich, Richard W.

    2015-01-01

    The ability to measure the properties of proteins at the single-molecule level offers an unparalleled glimpse into biological systems at the molecular scale. The interpretation of single-molecule time series has often been rooted in statistical mechanics and the theory of Markov processes. While existing analysis methods have been useful, they are not without significant limitations including problems of model selection and parameter nonidentifiability. To address these challenges, we introduce the use of nonparametric Bayesian inference for the analysis of single-molecule time series. These methods provide a flexible way to extract structure from data instead of assuming models beforehand. We demonstrate these methods with applications to several diverse settings in single-molecule biophysics. This approach provides a well-constrained and rigorously grounded method for determining the number of biophysical states underlying single-molecule data. PMID:25650922

  11. Time Series Modeling of Nano-Gold Immunochromatographic Assay via Expectation Maximization Algorithm.

    PubMed

    Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui

    2013-12-01

    In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as a stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, and the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.
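
    As a compact stand-in for the EM iteration, the sketch below fits the same first-order autoregressive state plus noisy-measurement model by maximizing the Kalman-filter likelihood directly with scipy; proper EM would instead alternate a smoothing pass with closed-form parameter updates. Data, starting values and the parametrization are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, y):
    """Scalar Kalman-filter likelihood for x_t = a*x_{t-1} + w, y_t = x_t + v."""
    a, log_q, log_r = params
    q, r = np.exp(log_q), np.exp(log_r)   # log-parametrized variances stay positive
    x, p = y[0], 1.0                      # crude initialisation
    nll = 0.0
    for t in range(1, len(y)):
        x_pred, p_pred = a * x, a * a * p + q          # predict
        s = p_pred + r                                  # innovation variance
        innov = y[t] - x_pred
        nll += 0.5 * (np.log(2 * np.pi * s) + innov ** 2 / s)
        k = p_pred / s                                  # update
        x, p = x_pred + k * innov, (1 - k) * p_pred
    return nll

# synthetic "signal intensity" record: latent AR(1) observed in noise
rng = np.random.default_rng(0)
x_true = np.zeros(300)
for t in range(1, 300):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, 0.5)
y = x_true + rng.normal(0, 1.0, size=300)

fit = minimize(neg_log_lik, x0=[0.5, 0.0, 0.0], args=(y,))
a_hat, q_hat, r_hat = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
```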

  12. On system behaviour using complex networks of a compression algorithm

    NASA Astrophysics Data System (ADS)

    Walker, David M.; Correa, Debora C.; Small, Michael

    2018-01-01

    We construct complex networks of scalar time series using a data compression algorithm. The structure and statistics of the resulting networks can be used to help characterize complex systems, and one property in particular appears to be a useful discriminating statistic in surrogate data hypothesis tests. We demonstrate these ideas on systems with known dynamical behaviour and also show that our approach is capable of identifying behavioural transitions within electroencephalogram recordings as well as changes due to a bifurcation parameter of a chaotic system. The technique we propose depends on a coarse-grained quantization of the original time series and therefore offers the potential for a spatial scale-dependent characterization of the data. Finally, the method is as computationally efficient as the underlying compression algorithm and provides a compression of the salient features of long time series.

  13. Effects of linear trends on estimation of noise in GNSS position time-series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dmitrieva, K.; Segall, P.; Bradley, A. M.

    A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this study, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.

  14. Effects of linear trends on estimation of noise in GNSS position time-series

    NASA Astrophysics Data System (ADS)

    Dmitrieva, K.; Segall, P.; Bradley, A. M.

    2017-01-01

    A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this paper, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.

  15. Effects of linear trends on estimation of noise in GNSS position time-series

    DOE PAGES

    Dmitrieva, K.; Segall, P.; Bradley, A. M.

    2016-10-20

    A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this study, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.
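
    The headline effect is easy to reproduce without the paper's maximum-likelihood machinery. A Monte Carlo sketch (step variance, series length and trial count are assumptions): fitting and removing a linear trend from a pure random walk absorbs much of its low-frequency variance, which is why de-trended RW amplitude estimates come out biased low.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1000, 2000
t = np.arange(n)

slopes, absorbed = [], []
for _ in range(trials):
    walk = np.cumsum(rng.normal(0.0, 1.0, n))    # pure RW, no trend present
    coef = np.polyfit(t, walk, 1)                # OLS "trend" of the walk
    resid = walk - np.polyval(coef, t)
    slopes.append(coef[0])
    absorbed.append(1.0 - resid.var() / walk.var())

print(f"spread of spurious trends: {np.std(slopes):.4f} per epoch")
print(f"median RW variance absorbed by de-trending: {np.median(absorbed):.2f}")
```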

  16. Time Series Observations of the 2015 Eclipse of b Persei (not beta Persei) (Abstract)

    NASA Astrophysics Data System (ADS)

    Collins, D. F.

    2016-06-01

    (Abstract only) The bright (V = 4.6) ellipsoidal variable b Persei consists of a close, non-eclipsing binary pair that shows a nearly sinusoidal light curve with a ~1.5-day period. The system also contains a third star that orbits the binary pair every 702 days. AAVSO observers recently detected the first-ever optical eclipse of the A-B binary pair by the third star as a series of snapshots (D. Collins, R. Zavala, J. Sanborn, AAVSO Spring Meeting, 2013); the abstract was published in Collins, JAAVSO, 41, 2, 391 (2013), with b Per misprinted as β Per therein. A follow-up eclipse campaign in mid-January 2015 recorded time-series observations. These new time-series observations clearly show multiple ingresses and egresses of each component of the binary system by the third star over the eclipse duration of 2 to 3 days. A simulation of the eclipse was created, and orbital and some astrophysical parameters were adjusted within constraints to give a reasonable fit to the observed light curve.

  17. Parameters of Higuchi's method to characterize primary waves in some seismograms from the Mexican subduction zone

    NASA Astrophysics Data System (ADS)

    Gálvez-Coyt, Gonzalo; Muñoz-Diosdado, Alejandro; Peralta, José; Balderas-López, José; Angulo-Brown, Fernando

    2012-06-01

    Higuchi's method is a procedure that, if applied appropriately, can determine in a reliable way the fractal dimension D of time series; this fractal dimension characterizes the degree of correlation of the series. However, when analyzing some time series with Higuchi's method, oscillations appear at the right-hand side of the log-log graph, which can lead to a mistaken determination of the fractal dimension. In this work, an appropriate explanation is given for this type of behaviour. Treating the seismogram as a time series and exploiting the properties of the P and S waves, it is possible to use Higuchi's method to detect the arrival of the earthquake shaking stage some seconds in advance, approximately 30-35 s in the case of Mexico City. Thus, we propose Higuchi's method to characterize and detect the P waves in order to estimate the strength of the forthcoming S waves.
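
    For reference, a minimal implementation of Higuchi's curve-length construction is sketched below. kmax and the 0-based normalisation are implementation choices; the right-hand-side oscillations discussed above appear at the largest k, so restricting kmax is the usual safeguard.

```python
import numpy as np

def higuchi_fd(x, kmax=16):
    """Fractal dimension D of a time series by Higuchi's method.

    The mean curve length L(k) scales as k**(-D); D is the slope of
    log L(k) against log(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_invk, log_L = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                           # k offset sub-series
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)    # Higuchi normalisation
            lengths.append(dist * norm / k)
        log_invk.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    D, _ = np.polyfit(log_invk, log_L, 1)
    return D

print(higuchi_fd(np.random.default_rng(0).standard_normal(5000)))  # ~2 for white noise
```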

  18. Pesticide nonextractable residue formation in soil: insights from inverse modeling of degradation time series.

    PubMed

    Loos, Martin; Krauss, Martin; Fenner, Kathrin

    2012-09-18

    Formation of soil nonextractable residues (NER) is central to the fate and persistence of pesticides. To investigate the pools and extent of NER formation, an established inverse modeling approach for pesticide soil degradation time series was evaluated with a Markov chain Monte Carlo (MCMC) sampling procedure. It was found that only half of 73 pesticide degradation time series from a homogeneous soil source allowed for well-behaved identification of kinetic parameters with a four-pool model containing a parent compound, a metabolite, a volatile, and a NER pool. A subsequent simulation indeed confirmed distinct parameter combinations of low identifiability. Taking the resulting uncertainties into account, several conclusions regarding NER formation and its impact on persistence assessment could nonetheless be drawn. First, rate constants for transformation of parent compounds to metabolites were correlated with those for transformation of parent compounds to NER, leading to degradation half-lives (DegT50) typically not exceeding disappearance half-lives (DT50) by more than a factor of 2. Second, estimated rate constants were used to evaluate NER formation over time. This showed that NER formation, particularly through the metabolite pool, may be grossly underestimated when using standard incubation periods. It further showed that the amounts and uncertainties in (i) total NER, (ii) NER formed from the parent pool, and (iii) NER formed from the metabolite pool vary considerably among data sets at t→∞, with no clear dominance between (ii) and (iii). However, compounds containing aromatic amine moieties were found to form significantly more total NER when extrapolating to t→∞ than the other compounds studied. Overall, our study stresses the general need for assessing uncertainties, identifiability issues, and resulting biases when using inverse modeling of degradation time series to evaluate persistence and NER formation.
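
    The four-pool model is a small linear ODE system. The sketch below uses assumed first-order rate constants (the paper's exact structure and values may differ) and shows why NER formed via the metabolite pool keeps accumulating well past standard incubation periods.

```python
import numpy as np
from scipy.integrate import odeint

def four_pool(y, t, k_pm, k_pn, k_pv, k_mn):
    """Parent (P), metabolite (M), volatile (V), nonextractable residue (N);
    all transformations first order."""
    P, M, V, N = y
    dP = -(k_pm + k_pn + k_pv) * P
    dM = k_pm * P - k_mn * M
    dV = k_pv * P
    dN = k_pn * P + k_mn * M
    return [dP, dM, dV, dN]

t = np.linspace(0, 360, 721)                       # days, beyond standard incubation
traj = odeint(four_pool, [100.0, 0.0, 0.0, 0.0], t,
              args=(0.05, 0.02, 0.005, 0.03))      # illustrative rate constants
P, M, V, N = traj.T
# N keeps growing long after P is gone, fed through the metabolite pool
```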

  19. A framework for relating the structures and recovery statistics in pressure time-series surveys for dust devils

    NASA Astrophysics Data System (ADS)

    Jackson, Brian; Lorenz, Ralph; Davis, Karan

    2018-01-01

    Dust devils are likely the dominant source of dust for the martian atmosphere, but the amount and frequency of dust-lifting depend on the statistical distribution of dust devil parameters. Dust devils exhibit pressure perturbations and, if they pass near a barometric sensor, they may register as a discernible dip in a pressure time-series. Leveraging this fact, several surveys using barometric sensors on landed spacecraft have revealed dust devil structures and occurrence rates. However powerful they are, such surveys suffer from non-trivial biases that skew the inferred dust devil properties. For example, such surveys are most sensitive to dust devils with the widest and deepest pressure profiles, but the recovered profiles will be distorted, broader and shallower than the actual profiles. In addition, such surveys often do not provide wind speed measurements alongside the pressure time series, so the durations of the dust devil signals in the time series cannot be directly converted to profile widths. Fortunately, simple statistical and geometric considerations can de-bias these surveys, allowing conversion of the durations of dust devil signals into physical widths, given only a distribution of likely translation velocities, and recovery of the underlying distributions of physical parameters. In this study, we develop a scheme for de-biasing such surveys. Applying our model to an in-situ survey using data from the Phoenix lander suggests a larger dust flux and a dust devil occurrence rate about ten times larger than previously inferred. Comparing our results to dust devil track surveys suggests only about one in five low-pressure cells lifts sufficient dust to leave a visible track.
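
    A toy version of the conversion step: without wind measurements, dip durations can only be turned into widths by drawing translation velocities from an assumed distribution, and a chord-to-diameter factor corrects for off-center encounters (the mean chord of a circle over uniform miss distances is pi/4 of its diameter). All distributions and numbers below are illustrative, not Phoenix values, and the paper's full scheme also de-biases the recovered parameter distributions.

```python
import numpy as np

rng = np.random.default_rng(7)

# durations (s) of recovered pressure dips -- illustrative survey output
durations = rng.lognormal(mean=2.5, sigma=0.5, size=200)

# assumed ambient translation-velocity distribution (m/s)
v_samples = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=durations.size)

widths = durations * v_samples            # chord lengths swept past the sensor
widths_debiased = widths / (np.pi / 4)    # mean chord -> diameter correction
print(np.percentile(widths_debiased, [16, 50, 84]))
```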

  20. Estimation of confidence limits for descriptive indexes derived from autoregressive analysis of time series: Methods and application to heart rate variability.

    PubMed

    Beda, Alessandro; Simpson, David M; Faes, Luca

    2017-01-01

    The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested on a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and are thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need for individual-by-individual assessment of HRV features. Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings.

  1. Estimation of confidence limits for descriptive indexes derived from autoregressive analysis of time series: Methods and application to heart rate variability

    PubMed Central

    2017-01-01

    The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested on a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and are thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need for individual-by-individual assessment of HRV features. Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings. PMID:28968394
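
    A minimal numpy sketch of the parametric-bootstrap idea: fit an AR(2) model, choose a nonlinear function of its parameters as the index (here the spectral density at zero frequency, an assumed stand-in for the paper's HRV indexes), and re-estimate the index on simulated replicates to obtain confidence limits.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_ar2(y):
    """Least-squares AR(2) fit; returns coefficients and residual variance."""
    X = np.column_stack([y[1:-1], y[:-2]])
    a, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    return a, np.var(y[2:] - X @ a)

def dc_power(a, s2):
    """AR spectrum at zero frequency: nonlinear in the AR parameters,
    which is what makes analytical confidence limits awkward."""
    return s2 / (1.0 - a[0] - a[1]) ** 2

def simulate_ar2(a, s2, n, burn=100):
    y = np.zeros(n + burn)
    for t in range(2, len(y)):
        y[t] = a[0] * y[t - 1] + a[1] * y[t - 2] + rng.normal(0, np.sqrt(s2))
    return y[burn:]                                 # drop burn-in

y = simulate_ar2(np.array([0.8, -0.2]), 1.0, 300)   # stand-in for a heart-period series
a_hat, s2_hat = fit_ar2(y)

boot = [dc_power(*fit_ar2(simulate_ar2(a_hat, s2_hat, len(y))))
        for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])           # 95% confidence limits
```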

  2. Eisenstein series for infinite-dimensional U-duality groups

    NASA Astrophysics Data System (ADS)

    Fleig, Philipp; Kleinschmidt, Axel

    2012-06-01

    We consider Eisenstein series appearing as coefficients of curvature corrections in the low-energy expansion of type II string theory four-graviton scattering amplitudes. We define these Eisenstein series over all groups in the E_n series of string duality groups, and in particular for the infinite-dimensional Kac-Moody groups E_9, E_10 and E_11. We show that, remarkably, the so-called constant term of Kac-Moody Eisenstein series contains only a finite number of terms for particular choices of a parameter appearing in the definition of the series. This resonates with the idea that the constant term of the Eisenstein series encodes perturbative string corrections in BPS-protected sectors, allowing only a finite number of corrections. We underpin our findings with an extensive discussion of physical degeneration limits in D < 3 space-time dimensions.

  3. A Deep Machine Learning Method for Classifying Cyclic Time Series of Biological Signals Using Time-Growing Neural Network.

    PubMed

    Gharehbaghi, Arash; Linden, Maria

    2017-10-12

    This paper presents a novel method for learning the cyclic content of stochastic time series: the deep time-growing neural network (DTGNN). The DTGNN combines supervised and unsupervised methods at different levels of learning for enhanced performance. It is employed within a multiscale learning structure to classify cyclic time series (CTS), in which the dynamic content of the time series is preserved in an efficient manner. The paper suggests a systematic procedure for finding the design parameters of the classification method for one-versus-multiple class applications. A novel validation method is also suggested for evaluating the structural risk, both quantitatively and qualitatively. The effect of the DTGNN on the performance of the classifier is statistically validated through repeated random subsampling using different sets of CTS from different medical applications. The validation involves four medical databases, comprising 108 recordings of the electroencephalogram signal, 90 recordings of the electromyogram signal, 130 recordings of the heart sound signal, and 50 recordings of the respiratory sound signal. Results of the statistical validations show that the DTGNN significantly improves the performance of the classification and also exhibits an optimal structural risk.

  4. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm that minimizes atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, the image contrast is first examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection; the regions to be correlated should not have changed considerably over time. A few examples using the proposed procedure are presented.

  5. Time-varying spatial data integration and visualization: 4 Dimensions Environmental Observations Platform (4-DEOS)

    NASA Astrophysics Data System (ADS)

    Paciello, Rossana; Coviello, Irina; Filizzola, Carolina; Genzano, Nicola; Lisi, Mariano; Mazzeo, Giuseppe; Pergola, Nicola; Sileo, Giancanio; Tramutoli, Valerio

    2014-05-01

    In environmental studies the integration of heterogeneous, time-varying data is a very common requirement for investigating, and possibly visualizing, correlations among the physical parameters underlying the dynamics of complex phenomena. The datasets used in such applications often have different spatial and temporal resolutions, and in some cases the superimposition of asynchronous layers is required. Traditionally, the platforms used for spatio-temporal visual data analysis allow spatial data to be overlaid but manage time with a 'snapshot' data model, each stack of layers being labeled with a different time. This kind of architecture incorporates neither temporal indexing nor the third spatial dimension, which is usually given as an independent additional layer. Conversely, the full representation of a generic environmental parameter P(x,y,z,t) in the 4D space-time domain makes it possible to handle asynchronous datasets as well as less traditional data products (e.g. vertical sections, punctual time series, etc.). In this paper we present the 4 Dimensions Environmental Observation Platform (4-DEOS), a system based on a Client-Broker-Server web services architecture. This platform is a new open source solution for both timely access and easy integration and visualization of heterogeneous (maps, vertical profiles or sections, punctual time series, etc.) asynchronous geospatial products. The innovative aspect of the 4-DEOS system is that users can analyze data/products individually while moving through time, with the possibility of pausing the display of some data/products to focus on other parameters and better study their temporal evolution. The platform offers two distinct display modes: for a time interval or for a single instant. Users can choose to visualize data/products in two ways: (i) showing each parameter in a dedicated window or (ii) visualizing all parameters overlapped in a single window. A sliding time bar allows users to follow the temporal evolution of the selected data/product. With this software, users can identify events partially correlated with each other not only in the spatial dimension but also in the time domain, even at different time lags.

  6. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    PubMed

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, are appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses a hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition, so normalization side effects, e.g. reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yields a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to the modeling and prediction of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and older studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
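
    The LSSVM building block reduces to a single linear solve in its dual form. The sketch below (RBF kernel, assumed hyperparameters) is plain numpy; in the LNF network many such local models would be trained on the subdomains produced by the HBT partitioning.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """LSSVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# one-step-ahead prediction from a delay embedding of a series x:
# rows of X are [x_{t-1}, x_{t-2}], targets y are x_t (illustrative setup)
```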

  7. Complex Rotation Quantum Dynamic Neural Networks (CRQDNN) using Complex Quantum Neuron (CQN): Applications to time series prediction.

    PubMed

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2015-11-01

    Quantum Neural Networks (QNN) models have attracted great attention since they introduce a new neural computing scheme based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deep quantum entanglement. Based on the CQN, a novel hybrid network model, the Complex Rotation Quantum Dynamic Neural Network (CRQDNN), is also proposed. The CRQDNN is a three-layer model with both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network to provide the memory needed to process time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time series predictions. Two application studies are carried out in this paper: chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Identification of pests and diseases of Dalbergia hainanensis based on EVI time series and classification of decision tree

    NASA Astrophysics Data System (ADS)

    Luo, Qiu; Xin, Wu; Qiming, Xiong

    2017-06-01

    In vegetation remote sensing information extraction, phenological features are often not taken into account and the analysis algorithms perform poorly. To solve this problem, a method for extracting vegetation information from EVI time series with multi-source decision-tree classification based on branch similarity is proposed. Firstly, to improve the time-series stability of the recognition accuracy, the seasonal features of vegetation are extracted from the fitting span of the time series. Secondly, decision-tree similarity is assessed through adaptive selection of the path or probability parameters of component prediction; used as an index, it evaluates the degree of task association, decides whether to perform migration of the multi-source decision tree, and ensures the speed of migration. Finally, the classification and recognition accuracy for pests and diseases of commercial Dalbergia hainanensis forest reaches 87%-98%, significantly better than the MODIS coverage accuracy of 80%-96% in this area, which verifies the validity of the proposed method.

  9. Attributes characterizing spontaneous ultra-weak photon signals of human subjects.

    PubMed

    Bajpai, Rajendra P; Van Wijk, Eduard P A; Van Wijk, Roeland; van der Greef, Jan

    2013-12-05

    Sixty visible-range photon signals spontaneously emitted from the dorsal side of both hands of fifteen human subjects are analyzed with the aim of finding their attributes. The signals are of 30 min duration and detected in bins of 50 ms by two synchronized photomultipliers sensitive in the range 290-630 nm. Each signal is a time series of 36,000 elements. The attributes of a signal are determined from the statistical properties of its time series. The mean and variance of the time series determine the attributes signal strength, intercept (p₀) and slope (p₁) of the Fano factor curve. The photon count distribution of the time series determines the squeezed state parameters |α|, r, θ and ϕ, the squeezed state index (SSI), and the sum of the squares of residues (SSR). The correlation between simultaneously detected signals determines the intercept (c₀) and slope (c₁) of their correlation curve. The variability of the attributes is studied by calculating them in smaller intervals covering the entire signal. The profile of attributes at 12 sites in a subject is more informative and biologically relevant. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Optimal HRF and smoothing parameters for fMRI time series within an autoregressive modeling framework.

    PubMed

    Galka, Andreas; Siniatchkin, Michael; Stephani, Ulrich; Groening, Kristina; Wolff, Stephan; Bosch-Bayard, Jorge; Ozaki, Tohru

    2010-12-01

    The analysis of time series obtained by functional magnetic resonance imaging (fMRI) may be approached by fitting predictive parametric models, such as nearest-neighbor autoregressive models with exogenous input (NNARX). As part of the modeling procedure, it is possible to apply instantaneous linear transformations to the data. Spatial smoothing, a common preprocessing step, may be interpreted as such a transformation. The autoregressive parameters may be constrained such that they provide a response behavior that corresponds to the canonical haemodynamic response function (HRF). We present an algorithm for estimating the parameters of the linear transformations and of the HRF within a rigorous maximum-likelihood framework. Using this approach, the optimal amount of spatial smoothing and the optimal HRF can be estimated simultaneously for a given fMRI data set. An example from a motor-task experiment is discussed. It is found that, for this data set, weak, but non-zero, spatial smoothing is optimal. Furthermore, it is demonstrated that activated regions can be estimated within the maximum-likelihood framework.

  11. Atmospheric mold spore counts in relation to meteorological parameters

    NASA Astrophysics Data System (ADS)

    Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.

    Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied over 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season with a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated (P<0.02) with average daily temperature and relative humidity, and negatively correlated with precipitation. Alternaria and Epicoccum did not show increased predictability with weather variables. A mathematical model was derived for Cladosporium spore counts using the annual seasonal cycle and the significant weather variables; the model for Alternaria and Epicoccum incorporated the annual seasonal cycle only. Fungal spore counts can thus be modeled by time series analysis and related to meteorological parameters while controlling for seasonality; this modeling can provide estimates of exposure to fungal aeroallergens.
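
    The regression-with-ARMA-errors analysis maps naturally onto statsmodels' SARIMAX. The data frame below is synthetic with hypothetical column names, not the Denver counts, and ARMA(1,1) errors are an assumed order.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(5)
n = 365
weather = pd.DataFrame({
    "temp": 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n),
    "rh": np.clip(60 + rng.normal(0, 10, n), 0, 100),
    "precip": rng.exponential(1.0, n),
})

# autocorrelated errors, as assumed by the ARMA error structure
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal(0, 5)
spores = 50 + 3.0 * weather["temp"] + 0.5 * weather["rh"] - 4.0 * weather["precip"] + e

# regress counts on weather with ARMA(1,1) errors
res = SARIMAX(spores, exog=weather, order=(1, 0, 1)).fit(disp=False)
print(res.params)   # regression betas plus AR/MA/noise parameters
```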

  12. Discovery of Potent, Orally Bioavailable Inhibitors of Human Cytomegalovirus

    PubMed Central

    2016-01-01

    A high-throughput screen based on a viral replication assay was used to identify inhibitors of the human cytomegalovirus. Using this approach, hit compound 1 was identified as a 4 μM inhibitor of HCMV that was specific and selective over other herpes viruses. Time of addition studies indicated compound 1 exerted its antiviral effect early in the viral life cycle. Mechanism of action studies also revealed that this series inhibited infection of MRC-5 and ARPE19 cells by free virus and via direct cell-to-cell spread from infected to uninfected cells. Preliminary structure–activity relationships demonstrated that the potency of compound 1 could be improved to a low nanomolar level, but metabolic stability was a key optimization parameter for this series. A strategy focused on minimizing metabolic hydrolysis of the N1-amide led to an alternative scaffold in this series with improved metabolic stability and good pharmacokinetic parameters in rat. PMID:27190604

  13. Single event time series analysis in a binary karst catchment evaluated using a groundwater model (Lurbach system, Austria).

    PubMed

    Mayaud, C; Wagner, T; Benischke, R; Birk, S

    2014-04-16

    The Lurbach karst system (Styria, Austria) is drained by two major springs and replenished by both autogenic recharge from the karst massif itself and a sinking stream that originates in low-permeability schists (allogenic recharge). Detailed data from two events recorded during a tracer experiment in 2008 demonstrate that an overflow from one of the sub-catchments to the other is activated if the discharge of the main spring exceeds a certain threshold. Time series analysis (autocorrelation and cross-correlation) was applied to examine to what extent the various available methods support the identification of the transient inter-catchment flow observed in this binary karst system. As inter-catchment flow is found to be intermittent, the evaluation focused on single events. In order to support the interpretation of the results from the time series analysis, a simplified groundwater flow model was built using MODFLOW. The groundwater model is based on the current conceptual understanding of the karst system and represents a synthetic karst aquifer to which the same methods were applied. Using the wetting capability package of MODFLOW, the model simulated an overflow similar to what was observed during the tracer experiment. Various intensities of allogenic recharge were employed to generate synthetic discharge data for the time series analysis. In addition, geometric and hydraulic properties of the karst system were varied in several model scenarios. This approach helps to identify the effects of allogenic recharge and aquifer properties in the results of the time series analysis. Comparing the results of the time series analysis of the observed data with those of the synthetic data, a good agreement was found. For instance, the cross-correlograms show similar patterns with respect to time lags and maximum cross-correlation coefficients if appropriate hydraulic parameters are assigned to the groundwater model. The comparable behaviors of the real and the synthetic system allow us to deduce that similar aquifer properties are relevant in both systems. In particular, the heterogeneity of aquifer parameters appears to be a controlling factor. Moreover, the location of the overflow connecting the sub-catchments of the two springs is found to be of primary importance regarding the occurrence of inter-catchment flow. This further supports our current understanding of an overflow zone located in the upper part of the Lurbach karst aquifer. Thus, time series analysis of single events can potentially be used to characterize the transient inter-catchment flow behavior of karst systems.

  14. Single event time series analysis in a binary karst catchment evaluated using a groundwater model (Lurbach system, Austria)

    PubMed Central

    Mayaud, C.; Wagner, T.; Benischke, R.; Birk, S.

    2014-01-01

    Summary The Lurbach karst system (Styria, Austria) is drained by two major springs and replenished by both autogenic recharge from the karst massif itself and a sinking stream that originates in low-permeability schists (allogenic recharge). Detailed data from two events recorded during a tracer experiment in 2008 demonstrate that an overflow from one of the sub-catchments to the other is activated if the discharge of the main spring exceeds a certain threshold. Time series analysis (autocorrelation and cross-correlation) was applied to examine to what extent the various available methods support the identification of the transient inter-catchment flow observed in this binary karst system. As inter-catchment flow is found to be intermittent, the evaluation focused on single events. In order to support the interpretation of the results from the time series analysis, a simplified groundwater flow model was built using MODFLOW. The groundwater model is based on the current conceptual understanding of the karst system and represents a synthetic karst aquifer to which the same methods were applied. Using the wetting capability package of MODFLOW, the model simulated an overflow similar to what was observed during the tracer experiment. Various intensities of allogenic recharge were employed to generate synthetic discharge data for the time series analysis. In addition, geometric and hydraulic properties of the karst system were varied in several model scenarios. This approach helps to identify the effects of allogenic recharge and aquifer properties in the results of the time series analysis. Comparing the results of the time series analysis of the observed data with those of the synthetic data, a good agreement was found. For instance, the cross-correlograms show similar patterns with respect to time lags and maximum cross-correlation coefficients if appropriate hydraulic parameters are assigned to the groundwater model. The comparable behaviors of the real and the synthetic system allow us to deduce that similar aquifer properties are relevant in both systems. In particular, the heterogeneity of aquifer parameters appears to be a controlling factor. Moreover, the location of the overflow connecting the sub-catchments of the two springs is found to be of primary importance regarding the occurrence of inter-catchment flow. This further supports our current understanding of an overflow zone located in the upper part of the Lurbach karst aquifer. Thus, time series analysis of single events can potentially be used to characterize the transient inter-catchment flow behavior of karst systems. PMID:24748687
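
    The cross-correlation tool used above is compact enough to sketch: a normalised cross-correlogram between an input (for instance sinking-stream recharge) and spring discharge, where the lag and height of the peak summarise the system response. The lag range and the interpretation comments are generic, not fitted to the Lurbach data.

```python
import numpy as np

def cross_correlogram(x, y, max_lag=48):
    """r_xy(lag) between input series x and output series y; positive
    lags mean x leads y."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    r = [np.mean(x[:n - k] * y[k:]) if k >= 0 else np.mean(x[-k:] * y[:n + k])
         for k in lags]
    return lags, np.array(r)

# peak lag ~ response time of the spring to recharge; a high, narrow peak
# suggests direct conduit transmission, a low, broad one diffuse storage
```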

  15. Disentangling Time-series Spectra with Gaussian Processes: Applications to Radial Velocity Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czekala, Ian; Mandel, Kaisey S.; Andrews, Sean M.

    Measurements of radial velocity variations from the spectroscopic monitoring of stars and their companions are essential for a broad swath of astrophysics; these measurements provide access to the fundamental physical properties that dictate all phases of stellar evolution and facilitate the quantitative study of planetary systems. The conversion of those measurements into both constraints on the orbital architecture and individual component spectra can be a serious challenge, however, especially for extreme flux ratio systems and observations with relatively low sensitivity. Gaussian processes define sampling distributions of flexible, continuous functions that are well-motivated for modeling stellar spectra, enabling proficient searches for companion lines in time-series spectra. We introduce a new technique for spectral disentangling, where the posterior distributions of the orbital parameters and intrinsic, rest-frame stellar spectra are explored simultaneously without needing to invoke cross-correlation templates. To demonstrate its potential, this technique is deployed on red-optical time-series spectra of the mid-M-dwarf binary LP661-13. We report orbital parameters with improved precision compared to traditional radial velocity analysis and successfully reconstruct the primary and secondary spectra. We discuss potential applications for other stellar and exoplanet radial velocity techniques and extensions to time-variable spectra. The code used in this analysis is freely available as an open-source Python package.

  16. Using diurnal temperature signals to infer vertical groundwater-surface water exchange

    USGS Publications Warehouse

    Irvine, Dylan J.; Briggs, Martin A.; Lautz, Laura K.; Gordon, Ryan P.; McKenzie, Jeffrey M.; Cartwright, Ian

    2017-01-01

    Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer.

  17. Microforms in gravel bed rivers: Formation, disintegration, and effects on bedload transport

    USGS Publications Warehouse

    Strom, K.; Papanicolaou, A.N.; Evangelopoulos, N.; Odeh, M.

    2004-01-01

    This research aims to advance current knowledge on cluster formation and evolution by tackling some of the aspects associated with cluster microtopography and the effects of clusters on bedload transport. The specific objectives of the study are (1) to identify the bed shear stress range in which clusters form and disintegrate, (2) to quantitatively describe the spacing characteristics and orientation of clusters with respect to flow characteristics, (3) to quantify the effects clusters have on the mean bedload rate, and (4) to assess the effects of clusters on the pulsating nature of bedload. In order to meet the objectives of this study, two main experimental scenarios, namely, Test Series A and B (20 experiments overall), are considered in a laboratory flume under well-controlled conditions. Series A tests are performed to address objectives (1) and (2), while Series B is designed to meet objectives (3) and (4). Results show that cluster microforms develop in uniform sediment at 1.25 to 2 times the Shields parameter of an individual particle and start disintegrating at about 2.25 times the Shields parameter. It is found that during an unsteady flow event the effects of clusters on the bedload transport rate can be classified into three phases: a sink phase, where clusters absorb incoming sediment; a neutral phase, where clusters do not affect bedload; and a source phase, where clusters release particles. Clusters also increase the magnitude of the fluctuations in the bedload transport rate, showing that clusters amplify the unsteady nature of bedload transport. A fourth-order autoregressive integrated moving average (ARIMA) model is employed to describe the bedload time series and provide a formula for predicting bedload at different periods. Finally, a change-point analysis enhanced with a binary segmentation procedure is performed to identify abrupt changes in the bedload statistics due to the effects of clusters and to detect the different phases in the bedload time series using probability theory. The analysis verifies the experimental finding that three phases are present in the structure of the bedload rate time series, namely, sink, neutral, and source. © ASCE / JUNE 2004.

  18. Dynamic and Regression Modeling of Ocean Variability in the Tide-Gauge Record at Seasonal and Longer Periods

    NASA Technical Reports Server (NTRS)

    Hill, Emma M.; Ponte, Rui M.; Davis, James L.

    2007-01-01

    Comparison of monthly mean tide-gauge time series to corresponding model time series based on a static inverted barometer (IB) for pressure-driven fluctuations and an ocean general circulation model (OM) reveals that the combined model successfully reproduces seasonal and interannual changes in relative sea level at many stations. Removal of the OM and IB from the tide-gauge record produces residual time series with a mean global variance reduction of 53%. The OM is mis-scaled for certain regions, and 68% of the residual time series contain significant seasonal variability after removal of the OM and IB from the tide-gauge data. Including OM admittance parameters and seasonal coefficients in a regression model for each station, with the IB also removed, produces residual time series with a mean global variance reduction of 71%. Examination of the regional improvement in variance caused by scaling the OM, including seasonal terms, or both, indicates weaknesses in the model's prediction of sea-level variation for constricted ocean regions. The model is particularly effective at reproducing sea-level variation for stations in North America, Europe, and Japan; the RMS residual for many stations in these areas is 25-35 mm. The production of "cleaner" tide-gauge time series, with oceanographic variability removed, is important for future analysis of nonsecular and regionally differing sea-level variations. Understanding the ocean model's strengths and weaknesses will allow for future improvements of the model.

  19. Coronal Mass Ejection Data Clustering and Visualization of Decision Trees

    NASA Astrophysics Data System (ADS)

    Ma, Ruizhe; Angryk, Rafal A.; Riley, Pete; Filali Boubrahimi, Soukaina

    2018-05-01

    Coronal mass ejections (CMEs) can be categorized as either “magnetic clouds” (MCs) or non-MCs. Features such as a large magnetic field, low plasma-beta, and low proton temperature suggest that a CME event is also an MC event; however, so far there is neither a definitive method nor an automatic process to distinguish the two. Human labeling is time-consuming, and results can fluctuate owing to the imprecise definition of such events. In this study, we approach the problem of distinguishing MCs from non-MCs from a time series data analysis perspective and show how clustering can shed some light on it. Although many algorithms exist for traditional data clustering in Euclidean space, they are not well suited to time series data: problems such as inadequate distance measures, inaccurate cluster center description, and the lack of intuitive cluster representations need to be addressed for effective time series clustering. Our data analysis in this work is twofold: clustering and visualization. For clustering, we compared the results of the popular hierarchical agglomerative clustering technique with those of a distance density clustering heuristic we developed previously for time series data; in both cases, dynamic time warping is used as the similarity measure. For classification as well as visualization, we use decision trees to aggregate single-dimensional clustering results into a multidimensional time series decision tree, with averaged time series presenting each decision. In this study, we achieved modest accuracy and, more importantly, an intuitive interpretation of how different parameters contribute to an MC event.
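
    Dynamic time warping, the similarity measure used above, is a short dynamic program. The sketch below is the textbook unconstrained variant (no warping window); the pairwise distances it produces can feed hierarchical agglomerative clustering.

```python
import numpy as np

def dtw_distance(a, b):
    """Unconstrained dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# a condensed matrix of pairwise DTW distances over all events can then
# be passed to scipy.cluster.hierarchy.linkage for agglomerative clustering
```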

  20. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    NASA Astrophysics Data System (ADS)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, and vibration control of operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying a time-varying system than for a time-invariant one. This paper presents a modal parameter estimator for linear time-varying structural systems based on a vector time-dependent autoregressive model and least squares support vector machines, for the case of output-only measurements. To reduce the computational cost, Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adapted for the proposed estimator, replacing the time-consuming n-fold cross-validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment further validates the proposed estimator.
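
    Wendland's compactly supported radial basis function is what buys the sparsity of the Gram matrix: it vanishes identically beyond the support radius, so distant time instants do not interact. A sketch with an assumed support radius and time grid:

```python
import numpy as np
from scipy.sparse import csr_matrix

def wendland_c2(r):
    """Wendland's C2 compactly supported RBF: exactly zero for r >= 1."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def sparse_gram(t, support=0.05):
    """Gram matrix over time instants t; compact support makes it sparse."""
    r = np.abs(t[:, None] - t[None, :]) / support
    return csr_matrix(wendland_c2(r))

t = np.linspace(0.0, 1.0, 2000)
G = sparse_gram(t)
print(f"nonzero fraction: {G.nnz / len(t) ** 2:.3f}")   # ~0.1 instead of 1.0
```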

  1. Automated Bayesian model development for frequency detection in biological time series.

    PubMed

    Granqvist, Emma; Oldroyd, Giles E D; Morris, Richard J

    2011-06-24

    A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlight the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure.

  2. Automated Bayesian model development for frequency detection in biological time series

    PubMed Central

    2011-01-01

    Background A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. Results In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlight the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Conclusions Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure. PMID:21702910
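
    A minimal sketch of the core computation in this style of Bayesian frequency detection, reduced to a one-sinusoid model with the amplitudes and noise level marginalized out (the function below is illustrative, not the authors' code; the weak det(X'X) factor in the evidence is ignored, since it is nearly constant for sinusoids):

        import numpy as np

        def log_posterior_frequency(t, y, freqs):
            """Unnormalized log-posterior over a grid of candidate
            frequencies for y(t) = A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + noise,
            with A, B and the noise level marginalized out."""
            y = y - y.mean()                  # remove a constant background
            N = len(y)
            d2 = np.sum(y**2)
            logp = np.empty(len(freqs))
            for i, f in enumerate(freqs):
                X = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)])
                coef, *_ = np.linalg.lstsq(X, y, rcond=None)
                h2 = np.sum((X @ coef)**2)    # power captured by the model
                # Student-t form of the marginal likelihood
                logp[i] = (2 - N) / 2 * np.log(1 - h2 / d2)
            return logp

    Sharp peaks of this log-posterior mark the probable oscillation frequencies; comparing the peak evidence of, say, one- and two-frequency models is the model-comparison step that automates the procedure.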

  3. VizieR Online Data Catalog: AR Sco VLA radio observations (Stanway+, 2018)

    NASA Astrophysics Data System (ADS)

    Stanway, E. R.; Marsh, T. R.; Chote, P.; Gaensicke, B. T.; Steeghs, D.; Wheatley, P. J.

    2018-02-01

    Time series VLA radio observations were undertaken of the highly variable white dwarf binary AR Scorpii. These were analysed for periodicity, spectral behaviour and other characteristics. Here we present time series data in the Stokes I parameter at three frequencies. These were centred at 1.5GHz (1GHz bandwidth), 5GHz (2GHz bandwidth) and 9GHz (2GHz bandwidth). The AR Sco binary is unresolved at these frequencies. In the case of the 1.5GHz data, fluxes have been deconvolved with those of a neighbouring object. (3 data files).

  4. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.

  5. Separation of Trend and Chaotic Components of Time Series and Estimation of Their Characteristics by Linear Splines

    NASA Astrophysics Data System (ADS)

    Kryanev, A. V.; Ivanov, V. V.; Romanova, A. O.; Sevastyanov, L. A.; Udumyan, D. K.

    2018-03-01

    This paper considers the problem of separating the trend and the chaotic component of chaotic time series in the absence of information on the characteristics of the chaotic component. Such a problem arises in nuclear physics, biomedicine, and many other applied fields. The scheme has two stages. At the first stage, smoothing linear splines with different values of the smoothing parameter are used to separate the "trend component". At the second stage, the method of least squares is used to find the unknown variance σ² of the noise component.
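
    A minimal sketch of the two-stage scheme using SciPy's smoothing splines (k=1 gives a linear spline; the variance estimator below is a plain least-squares stand-in for the authors' formulation):

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def separate_trend(t, y, smoothing=None):
            """Stage 1: fit a smoothing linear spline as the trend;
            stage 2: estimate the noise variance from the residuals.
            t must be increasing; 'smoothing' is the spline's s parameter."""
            spline = UnivariateSpline(t, y, k=1, s=smoothing)
            trend = spline(t)
            residual = y - trend
            sigma2 = np.sum(residual**2) / (len(y) - 1)   # least-squares variance
            return trend, residual, sigma2

    Sweeping the smoothing parameter and watching where sigma2 stabilizes is one practical way to pick the separation point between trend and chaotic component.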

  6. Detecting the sampling rate through observations

    NASA Astrophysics Data System (ADS)

    Shoji, Isao

    2018-09-01

    This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show a good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis, and the detected sampling rate turns out to differ from the conventional rates.

  7. Discriminating the Mediterranean Pinus spp. using the land surface phenology extracted from the whole MODIS NDVI time series and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Rodriguez-Galiano, Victor; Aragones, David; Caparros-Santiago, Jose A.; Navarro-Cerrillo, Rafael M.

    2017-10-01

    Land surface phenology (LSP) can improve the characterisation of forest areas and their change processes. The aim of this work was: i) to characterise the temporal dynamics in Mediterranean Pinus forests, and ii) to evaluate the potential of LSP for species discrimination. The different experiments were based on 679 mono-specific plots for the 5 native species on the Iberian Peninsula: P. sylvestris, P. pinea, P. halepensis, P. nigra and P. pinaster. The entire MODIS NDVI time series (2000-2016) of the MOD13Q1 product was used to characterise phenology. The following phenological parameters were extracted: the start, end and median days of the season, and the length of the season in days, as well as the base value, maximum value, amplitude and integrated value. Multi-temporal metrics were calculated to synthesise the inter-annual variability of the phenological parameters. The species were discriminated by the application of Random Forest (RF) classifiers from different subsets of variables: model 1) NDVI-smoothed time series, model 2) multi-temporal metrics of the phenological parameters, and model 3) multi-temporal metrics and the auxiliary physical variables (altitude, slope, aspect and distance to the coastline). Model 3 was the best, with an overall accuracy of 82% and a kappa coefficient of 0.77; its most important variables were elevation, coast distance, and the end and start days of the growing season. The species that presented the largest errors was P. nigra (kappa = 0.45), having locations with a similar behaviour to P. sylvestris or P. pinaster.
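
    A minimal sketch of the model 3 setup with scikit-learn, assuming a feature table of per-plot multi-temporal phenological metrics plus the auxiliary physical variables (all names are hypothetical):

        from sklearn.ensemble import RandomForestClassifier

        # X: one row per mono-specific plot; columns hold the multi-temporal
        # metrics of the phenological parameters (start/end/median day of
        # season, amplitude, integral, ...) plus elevation, slope, aspect
        # and distance to the coastline. y: the Pinus species label.
        def classify_species(X, y):
            rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                        random_state=0)
            rf.fit(X, y)
            # Out-of-bag accuracy approximates the overall accuracy; the
            # impurity-based importances rank variables such as elevation
            # or the start day of the growing season.
            return rf.oob_score_, rf.feature_importances_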

  8. Contamination Event Detection with Multivariate Time-Series Data in Agricultural Water Monitoring †

    PubMed Central

    Mao, Yingchi; Qi, Hai; Ping, Ping; Li, Xiaofang

    2017-01-01

    Time series data of multiple water quality parameters are obtained from the water sensor networks deployed in the agricultural water supply network. When pollution occurs, the accurate and efficient detection of the contamination event, and warning to prevent the pollution from spreading, is one of the most important issues. In order to comprehensively reduce the event detection deviation, a spatial–temporal-based event detection approach with multivariate time-series data for water quality monitoring (M-STED) was proposed. The M-STED approach includes three parts. The first part is that M-STED adopts a Rule K algorithm to select backbone nodes as the nodes in the connected dominating set (CDS) and forward the sensed data of multiple water parameters. The second part is to determine the state of each backbone node with back propagation neural network models and sequential Bayesian analysis in the current timestamp. The third part is to establish a spatial model with Bayesian networks to estimate the state of the backbones in the next timestamp and trace the “outlier” node to its neighborhoods to detect a contamination event. The experimental results indicate that the average detection rate with M-STED is more than 80% and the false detection rate is lower than 9%. The M-STED approach can improve the rate of detection by about 40% and reduce the false alarm rate by about 45%, compared with S-STED, an event detection algorithm that uses a single water parameter. Moreover, the proposed M-STED can exhibit better performance in terms of detection delay and scalability. PMID:29207535

  9. Evaluation of artificial time series microarray data for dynamic gene regulatory network inference.

    PubMed

    Xenitidis, P; Seimenis, I; Kakolyris, S; Adamopoulos, A

    2017-08-07

    High-throughput technology like microarrays is widely used in the inference of gene regulatory networks (GRNs). We focused on time series data since we are interested in the dynamics of GRNs and the identification of dynamic networks. We evaluated the amount of information that exists in artificial time series microarray data and the ability of an inference process to produce accurate models based on them. We used dynamic artificial gene regulatory networks in order to create artificial microarray data. Key features that characterize microarray data such as the time separation of directly triggered genes, the percentage of directly triggered genes and the triggering function type were altered in order to reveal the limits that are imposed by the nature of microarray data on the inference process. We examined the effect of various factors on the inference performance such as the network size, the presence of noise in microarray data, and the network sparseness. We used a system theory approach and examined the relationship between the pole placement of the inferred system and the inference performance. We examined the relationship between the inference performance in the time domain and the true system parameter identification. Simulation results indicated that time separation and the percentage of directly triggered genes are crucial factors. Also, network sparseness, the triggering function type and noise in input data affect the inference performance. When two factors were simultaneously varied, it was found that variation of one parameter significantly affects the dynamic response of the other. Crucial factors were also examined using a real GRN and acquired results confirmed simulation findings with artificial data. Different initial conditions were also used as an alternative triggering approach. Relevant results confirmed that the number of datasets constitutes the most significant parameter with regard to the inference performance. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Rules of parameter variation in homotype series of birdsong can indicate a 'sollwert' significance.

    PubMed

    Hultsch, H; Todt, D

    1996-11-01

    Various bird species produce songs which include homotype pattern series, i.e. segments composed of a number of repeated vocal units. We compared such units and analyzed the variation of their parameters, especially in the time and the frequency domain. In addition, we examined whether and how serial changes of both the range and the trend of variation were related to the song constituents following the repetitions. Data evaluation showed that the variation of specific serial parameters (e.g., unit pitch or unit duration) occurring in the whistle song-types of nightingales (Luscinia megarhynchos) converged towards a distinct terminal value. Although song-types differed in this terminal value, it was found to play the role of a key cue ('sollwert'). The continuation of a song depended on the preceding attainment of its specific 'sollwert'. Our results suggest that the study of signal parameters and the rules of their variation makes a useful tool for behavioral access to the properties of the control systems mediating serial signal performances.

  11. Early-time solution of the horizontal unconfined aquifer in the build-up phase

    NASA Astrophysics Data System (ADS)

    Gravanis, Elias; Akylas, Evangelos

    2017-04-01

    The Boussinesq equation is a dynamical equation for the free surface of saturated subsurface flows over an impervious bed. The Boussinesq equation is non-linear. The non-linearity comes from the reduction of the dimensionality of the problem: the flow is assumed to be vertically homogeneous, therefore the flow rate through a cross section of the flow is proportional to the free surface height times the hydraulic gradient, which is assumed to be equal to the slope of the free surface (Dupuit approximation). In general, 'vertically' means normally to the bed; combining the Dupuit approximation with the continuity equation leads to the Boussinesq equation. There are very few transient exact solutions. Self-similar solutions have been constructed in the past by various authors. A power series type of solution was derived for a self-similar Boussinesq equation by Barenblatt in 1990, and that type of solution has generated a certain amount of literature. For the unconfined flow case with zero recharge rate, Boussinesq derived an exact solution for the horizontal aquifer assuming separation of variables. This is actually an exact asymptotic solution of the horizontal aquifer recession phase for late times. The kinematic wave is an interesting solution obtained by dropping the non-linear term in the Boussinesq equation. Although it is an approximate solution, and holds well only for small values of the Henderson and Wooding λ parameter (that is, for steep slopes, high conductivity or small recharge rate), it is asymptotically exact with respect to that parameter. In the present work we consider the case of unconfined subsurface flow over a horizontal bed in the build-up phase under a constant recharge rate. This is a case with an infinite Henderson and Wooding parameter, that is, it is the limiting case where the non-linear term is present in the Boussinesq equation while the linear spatial derivative term goes away. Nonetheless, no analogue of the kinematic wave or the Boussinesq separable solution exists in this case. The late-time state of the build-up phase under a constant recharge rate is simply the steady state solution. Our aim is to construct the early-time asymptotic solution of this problem. The solution is expressed as a power series in a suitable similarity variable, which is constructed so as to satisfy the boundary conditions at both ends of the aquifer; that is, it is a polynomial approximation of the exact solution. The series turns out to be asymptotic, and it is regularized by the re-summation techniques that are used to define divergent series. The outflow rate in this regime is linear in time, and the (dimensionless) coefficient is calculated to eight significant figures. The local error of the series is quantified by its deviation from satisfying the self-similar Boussinesq equation at every point. The local error turns out to be everywhere positive, hence so is the integrated error, which in turn quantifies the degree of convergence of the series to the exact solution.
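
    For reference, the Boussinesq equation for a one-dimensional horizontal aquifer under recharge can be written, in one common notation (a standard choice of symbols, not necessarily the authors'), as

        f \frac{\partial h}{\partial t} = K \frac{\partial}{\partial x} \left( h \frac{\partial h}{\partial x} \right) + N,

    where h is the free-surface height, K the hydraulic conductivity, f the drainable porosity and N the recharge rate. On a sloping bed an additional linear advection term proportional to \partial h / \partial x appears; the kinematic-wave approximation keeps that linear term and drops the non-linear one, whereas the horizontal build-up problem studied here is the opposite limit, retaining only the non-linear term.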

  12. Land Surface Phenologies and Seasonalities of Croplands and Grasslands in the US Prairie Pothole Region Using Passive Microwave Data (2003-2015)

    NASA Astrophysics Data System (ADS)

    Alemu, W. G.; Henebry, G. M.

    2017-12-01

    Grasslands and wetlands in the Prairie Pothole Region (PPR) have been converted to croplands in recent years. The crops cultivated in the PPR are also changing: spring wheat and alfalfa/hay are being switched to corn and soybean due to biofuel demand. According to the USDA Cropland Data Layer (CDL) from 2003 to 2015, spring wheat significantly decreased (r2 = 0.74), while corn and soybeans significantly increased (r2 = 0.86). We characterized land surface phenologies and land surface seasonalities across the PPR using the finer temporal (twice daily) but much lower spatial (25 km) resolution Advanced Microwave Scanning Radiometer (AMSR: blended from AMSR-E and AMSR2) enhanced land surface parameters for 2003-2015 (DOY 91-330 annual cycles). We tracked the temporal development of these land surface parameters as a function of accumulated growing degree-days (AGDD) based on the AMSR-retrieved air temperature data. Growing degree-days (GDD) revealed distinct seasonality typical of temperate grasslands and croplands; GDD peaked at 23°C, at about 1700°C AGDD. Precipitable water vapor (V) displayed seasonality comparable to GDD. Vegetation optical depth (VOD) revealed distinct land surface phenologies for grasslands versus croplands. We explored the changing crop fractions within the 25 km AMSR pixels using the CDL. At crop-dominated sites, the VOD time series caught the early spring growth, ploughing, and crop growth dynamics. In contrast, the VOD time series at grass-dominated sites exhibited a lower but more extended amplitude throughout the non-frozen season. VOD peaked at 1.6 for croplands and 1.3 for grasslands, with croplands peaking about a month later than grasslands (2200°C AGDD vs. 1600°C AGDD). The other parameters available from the AMSR dataset—soil moisture (sm) and fractional open water (fw)—together with the AGDD time series constructed from the AMSR air temperature data revealed the passage of storm systems during the growing season. Soil moisture and fractional water were slightly higher in early spring compared to the main growing season; however, neither parameter exhibited distinct seasonality. The VOD and fw time series displayed significant interannual variation that can augment drought-monitoring efforts.

  13. Ordinary kriging as a tool to estimate historical daily streamflow records

    USGS Publications Warehouse

    Farmer, William H.

    2016-01-01

    Efficient and responsible management of water resources relies on accurate streamflow records. However, many watersheds are ungaged, limiting the ability to assess and understand local hydrology. Several tools have been developed to alleviate this data scarcity, but few provide continuous daily streamflow records at individual streamgages within an entire region. Building on the history of hydrologic mapping, ordinary kriging was extended to predict daily streamflow time series on a regional basis. Pooling parameters to estimate a single, time-invariant characterization of spatial semivariance structure is shown to produce accurate reproduction of streamflow. This approach is contrasted with a time-varying series of variograms, representing the temporal evolution and behavior of the spatial semivariance structure. Furthermore, the ordinary kriging approach is shown to produce more accurate time series than more common, single-index hydrologic transfers. A comparison between topological kriging and ordinary kriging is less definitive, showing the ordinary kriging approach to be significantly inferior in terms of Nash–Sutcliffe model efficiencies while maintaining significantly superior performance measured by root mean squared errors. Given the similarity of performance and the computational efficiency of ordinary kriging, it is concluded that ordinary kriging is useful for first-order approximation of daily streamflow time series in ungaged watersheds.
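
    A minimal sketch of ordinary kriging with a pooled, time-invariant exponential variogram, as a stand-in for the paper's regional setup (the variogram model and its parameters are assumptions):

        import numpy as np

        def exp_variogram(h, nugget=0.1, sill=1.0, rng=100.0):
            """Exponential semivariogram gamma(h) for lag distance h."""
            return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))

        def ordinary_kriging(xy_gaged, z, xy_target):
            """Predict z at xy_target (shape (2,)) from values z observed
            at the gaged locations xy_gaged (shape (n, 2))."""
            n = len(z)
            d = np.linalg.norm(xy_gaged[:, None, :] - xy_gaged[None, :, :], axis=-1)
            G = exp_variogram(d)
            np.fill_diagonal(G, 0.0)   # gamma(0) = 0; the nugget acts for h > 0
            # Kriging system with a Lagrange row/column that forces the
            # weights to sum to one (the 'ordinary' constraint).
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = G
            A[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = exp_variogram(np.linalg.norm(xy_gaged - xy_target, axis=-1))
            w = np.linalg.solve(A, b)
            return w[:n] @ z           # weighted average of the gaged records

    Applied independently at each time step (typically to transformed daily streamflow, then back-transformed), this reproduces the pooled-parameter approach in outline.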

  14. Improved Analysis of Time Series with Temporally Correlated Errors: An Algorithm that Reduces the Computation Time.

    NASA Astrophysics Data System (ADS)

    Langbein, J. O.

    2016-12-01

    Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimation (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
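
    A minimal sketch of the "add the noise processes" idea: generate 1/f^n noise by filtering white noise with the fractional-integration impulse response, then add an independent white component (the amplitudes and index here are illustrative):

        import numpy as np

        def powerlaw_noise(n_obs, index, rng):
            """Power-law noise with spectrum ~ 1/f**index, built by fractional
            integration of white noise (Hosking-type coefficient recursion)."""
            d = index / 2.0
            h = np.empty(n_obs)
            h[0] = 1.0
            for k in range(1, n_obs):
                h[k] = h[k - 1] * (d + k - 1) / k   # psi_k of (1-B)**(-d)
            w = rng.standard_normal(n_obs)
            return np.convolve(h, w)[:n_obs]

        rng = np.random.default_rng(0)
        n = 2048
        series = 1.0 * rng.standard_normal(n) + 0.5 * powerlaw_noise(n, 2.0, rng)

    Because the two processes are added, the covariance of the summed series is the sum of the two covariance matrices, which is what keeps the corresponding data filter simple to construct and invert.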

  15. Intermittent electron density and temperature fluctuations and associated fluxes in the Alcator C-Mod scrape-off layer

    NASA Astrophysics Data System (ADS)

    Kube, R.; Garcia, O. E.; Theodorsen, A.; Brunner, D.; Kuang, A. Q.; LaBombard, B.; Terry, J. L.

    2018-06-01

    The Alcator C-Mod mirror Langmuir probe system has been used to sample data time series of fluctuating plasma parameters in the outboard mid-plane far scrape-off layer. We present a statistical analysis of one-second-long time series of electron density, temperature, radial electric drift velocity and the corresponding particle and electron heat fluxes. These are sampled during stationary plasma conditions in an ohmically heated, lower single null diverted discharge. The electron density and temperature are strongly correlated and feature fluctuation statistics similar to the ion saturation current. Both electron density and temperature time series are dominated by intermittent, large-amplitude bursts with an exponential distribution of both burst amplitudes and waiting times between them. The characteristic time scale of the large-amplitude bursts is approximately 15 μs. Large-amplitude velocity fluctuations feature a slightly faster characteristic time scale and appear at a faster rate than electron density and temperature fluctuations. Describing these time series as a superposition of uncorrelated exponential pulses, we find that probability distribution functions, power spectral densities as well as auto-correlation functions of the data time series agree well with predictions from the stochastic model. The electron particle and heat fluxes present large-amplitude fluctuations. For this low-density plasma, the radial electron heat flux is dominated by convection, that is, correlations of fluctuations in the electron density and radial velocity. Hot and dense blobs contribute only a minute fraction of the total fluctuation-driven heat flux.

  16. Approximate scaling properties of RNA free energy landscapes

    NASA Technical Reports Server (NTRS)

    Baskaran, S.; Stadler, P. F.; Schuster, P.

    1996-01-01

    RNA free energy landscapes are analysed by means of "time-series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yardstick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.
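
    A minimal sketch of the observation about AR(1) series, checking the self-affine scaling of increments below the correlation time (the estimator is a generic structure-function fit, not the paper's exact yardstick method):

        import numpy as np

        def ar1(n, rho, rng):
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = rho * x[t - 1] + rng.standard_normal()
            return x

        def self_affinity_exponent(x, lags):
            """For a self-affine process, std(x[t+tau] - x[t]) ~ tau**H;
            fit H on a log-log scale."""
            s = [np.std(x[l:] - x[:-l]) for l in lags]
            H, _ = np.polyfit(np.log(lags), np.log(s), 1)
            return H

        rng = np.random.default_rng(1)
        x = ar1(100_000, rho=0.99, rng=rng)
        lags = np.array([1, 2, 4, 8, 16, 32])   # below the correlation
        H = self_affinity_exponent(x, lags)     # time, roughly 1/(1 - rho)

    For lags well below the correlation time the fitted H stays near 1/2, consistent with approximate self-affinity on those scales.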

  17. Time-Series Analysis of Intermittent Velocity Fluctuations in Turbulent Boundary Layers

    NASA Astrophysics Data System (ADS)

    Zayernouri, Mohsen; Samiee, Mehdi; Meerschaert, Mark M.; Klewicki, Joseph

    2017-11-01

    Classical turbulence theory is modified under the inhomogeneities produced by the presence of a wall. In this regard, we propose a new time series model for the streamwise velocity fluctuations in the inertial sub-layer of turbulent boundary layers. The new model employs tempered fractional calculus and seamlessly extends the classical 5/3 spectral model of Kolmogorov in the inertial subrange to the whole spectrum from large to small scales. Moreover, the proposed time-series model allows the quantification of data uncertainties in the underlying stochastic cascade of turbulent kinetic energy. The model is tested using well-resolved streamwise velocity measurements up to friction Reynolds numbers of about 20,000. The physics of the energy cascade are briefly described within the context of the determined model parameters. This work was supported by the AFOSR Young Investigator Program (YIP) award (FA9550-17-1-0150) and partially by MURI/ARO (W911NF-15-1-0562).

  18. A Python-based interface to examine motions in time series of solar images

    NASA Astrophysics Data System (ADS)

    Campos-Rozo, J. I.; Vargas Domínguez, S.

    2017-10-01

    Python is considered to be a mature programming language and is widely accepted as an engaging option for scientific analysis in multiple areas, as will be presented in this work for the particular case of solar physics research. SunPy is an open-source library based on Python that has been recently developed to furnish software tools for solar data analysis and visualization. In this work we present a graphical user interface (GUI) based on Python and Qt to effectively compute proper motions for the analysis of time series of solar data. This user-friendly computing interface, which is intended to be incorporated into the SunPy library, uses a local correlation tracking technique and some extra tools that allow the selection of different parameters to calculate, visualize and analyze vector velocity fields of solar data, i.e. time series of solar filtergrams and magnetograms.
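
    A minimal sketch of the correlation-tracking kernel that such an interface wraps: estimating the displacement between two co-aligned image patches from the peak of their FFT-based cross-correlation (this is illustrative and not the SunPy or GUI API):

        import numpy as np

        def patch_displacement(a, b):
            """Integer-pixel shift of patch b relative to patch a, from
            the peak of their cross-correlation computed via FFTs."""
            fa = np.fft.fft2(a - a.mean())
            fb = np.fft.fft2(b - b.mean())
            cc = np.fft.ifft2(np.conj(fa) * fb).real
            iy, ix = np.unravel_index(np.argmax(cc), cc.shape)
            ny, nx = a.shape
            dy = iy - ny if iy > ny // 2 else iy   # unwrap FFT indices
            dx = ix - nx if ix > nx // 2 else ix   # to signed shifts
            return dx, dy

    Sliding a windowed version of this over pairs of consecutive filtergrams and dividing the shifts by the frame cadence yields the local vector velocity field.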

  19. Hybrid grammar-based approach to nonlinear dynamical system identification from biological time series

    NASA Astrophysics Data System (ADS)

    McKinney, B. A.; Crowe, J. E., Jr.; Voss, H. U.; Crooke, P. S.; Barney, N.; Moore, J. H.

    2006-02-01

    We introduce a grammar-based hybrid approach to reverse engineering nonlinear ordinary differential equation models from observed time series. This hybrid approach combines a genetic algorithm to search the space of model architectures with a Kalman filter to estimate the model parameters. Domain-specific knowledge is used in a context-free grammar to restrict the search space for the functional form of the target model. We find that the hybrid approach outperforms a pure evolutionary algorithm method, and we observe features in the evolution of the dynamical models that correspond with the emergence of favorable model components. We apply the hybrid method to both artificially generated time series and experimentally observed protein levels from subjects who received the smallpox vaccine. From the observed data, we infer a cytokine protein interaction network for an individual’s response to the smallpox vaccine.

  20. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, the specialized initialization resulted in a significant reduction of the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction obtained was between one and two orders of magnitude, compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.

  2. A new estimator method for GARCH models

    NASA Astrophysics Data System (ADS)

    Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.

    2007-06-01

    The GARCH (p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH (1, 1) model has only 3 control parameters, and a much discussed question is how to estimate them when a time series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. Although these models show almost identical performances with respect to the final probability density function and the time autocorrelation function, their scaling properties are very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits better the FTSE scaling exponent.
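
    A minimal sketch of the moment ingredients: simulate a Gaussian GARCH(1, 1) and compute the sample variance, kurtosis, and standardized 6th moment that a moment-matching estimator would compare with their closed-form counterparts (parameter values are illustrative):

        import numpy as np

        def simulate_garch11(n, omega, alpha, beta, rng):
            """r_t = sigma_t z_t with sigma_t^2 = omega + alpha r_{t-1}^2
            + beta sigma_{t-1}^2; alpha + beta < 1 for stationarity."""
            r = np.zeros(n)
            var = omega / (1.0 - alpha - beta)   # start at stationary variance
            for t in range(1, n):
                var = omega + alpha * r[t - 1]**2 + beta * var
                r[t] = np.sqrt(var) * rng.standard_normal()
            return r

        rng = np.random.default_rng(42)
        r = simulate_garch11(200_000, omega=1e-5, alpha=0.09, beta=0.90, rng=rng)
        m2 = np.mean(r**2)
        kurt = np.mean(r**4) / m2**2    # standardized 4th moment
        m6 = np.mean(r**6) / m2**3      # standardized 6th moment

    Matching m2, kurt, and m6 (or the autocorrelation time of r_t^2) against their analytic GARCH(1, 1) expressions fixes the three control parameters.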

  3. Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger

    2018-05-01

    In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if the time series data are independent and identically distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.

  4. Signatures of ecological processes in microbial community time series.

    PubMed

    Faust, Karoline; Bauchinger, Franziska; Laroche, Béatrice; de Buyl, Sophie; Lahti, Leo; Washburne, Alex D; Gonze, Didier; Widder, Stefanie

    2018-06-28

    Growth rates, interactions between community members, stochasticity, and immigration are important drivers of microbial community dynamics. In sequencing data analysis, such as network construction and community model parameterization, we make implicit assumptions about the nature of these drivers and thereby restrict model outcome. Despite apparent risk of methodological bias, the validity of the assumptions is rarely tested, as comprehensive procedures are lacking. Here, we propose a classification scheme to determine the processes that gave rise to the observed time series and to enable better model selection. We implemented a three-step classification scheme in R that first determines whether dependence between successive time steps (temporal structure) is present in the time series and then assesses with a recently developed neutrality test whether interactions between species are required for the dynamics. If the first and second tests confirm the presence of temporal structure and interactions, then parameters for interaction models are estimated. To quantify the importance of temporal structure, we compute the noise-type profile of the community, which ranges from black in case of strong dependency to white in the absence of any dependency. We applied this scheme to simulated time series generated with the Dirichlet-multinomial (DM) distribution, Hubbell's neutral model, the generalized Lotka-Volterra model and its discrete variant (the Ricker model), and a self-organized instability model, as well as to human stool microbiota time series. The noise-type profiles for all but DM data clearly indicated distinctive structures. The neutrality test correctly classified all but DM and neutral time series as non-neutral. The procedure reliably identified time series for which interaction inference was suitable. Both tests were required, as we demonstrated that all structured time series, including those generated with the neutral model, achieved a moderate to high goodness of fit to the Ricker model. We present a fast and robust scheme to classify community structure and to assess the prevalence of interactions directly from microbial time series data. The procedure not only serves to determine ecological drivers of microbial dynamics, but also to guide selection of appropriate community models for prediction and follow-up analysis.

  5. Analysis of Parametric Adaptive Signal Detection with Applications to Radars and Hyperspectral Imaging

    DTIC Science & Technology

    2010-02-01

    Several important issues associated with the proposed parametric model are discussed, including model order selection, training screening, and time-series-based whitening. In addition, model order selection, training screening, and time-series-based whitening procedures are developed for the parameters associated with the NS-AR model.

  6. The comparison study among several data transformations in autoregressive modeling

    NASA Astrophysics Data System (ADS)

    Setiyowati, Susi; Waluyo, Ramdhani Try

    2015-12-01

    In finance, the adjusted close of stocks is used to observe the performance of a company. Extreme prices, which may increase or decrease drastically, are of particular concern since they can lead to bankruptcy. As a preventive action, investors have to forecast future stock prices comprehensively. For that purpose, time series analysis is one of the statistical methods that can be applied, for both stationary and non-stationary processes. Since the variability of stock prices tends to be large and extreme values are frequently present, it is necessary to transform the data so that time series models, such as the autoregressive model, can be applied appropriately. One popular data transformation in finance is the return, along with the log ratio and other Tukey ladder transformations. In this paper these transformations are applied to stationary AR models and to non-stationary ARCH and GARCH models through simulations with varying parameters. As a result, this work presents a table suggesting how the transformations behave under various conditions of parameters and models. It is confirmed that which transformation works best depends on the type of data distribution; the parameter conditions also have a significant influence.
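
    A minimal sketch of the transformations under comparison, for a price series p (the lambda in the Tukey ladder is illustrative):

        import numpy as np

        def simple_return(p):
            return p[1:] / p[:-1] - 1.0      # (p_t - p_{t-1}) / p_{t-1}

        def log_return(p):
            return np.diff(np.log(p))        # log(p_t / p_{t-1})

        def tukey_ladder(p, lam=0.5):
            """Tukey's ladder of powers: p**lam, with log(p) at lam == 0."""
            return np.log(p) if lam == 0 else p**lam

    Each transformed series can then be fed to the AR, ARCH, or GARCH fit, and the residual diagnostics compared across transformations.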

  7. An assessment of the ability of Bartlett-Lewis type of rainfall models to reproduce drought statistics

    NASA Astrophysics Data System (ADS)

    Pham, M. T.; Vanhaute, W. J.; Vandenberghe, S.; De Baets, B.; Verhoest, N. E. C.

    2013-12-01

    Of all natural disasters, the economic and environmental consequences of droughts are among the highest because of their longevity and widespread spatial extent. Because of their extreme behaviour, studying droughts generally requires long time series of historical climate data. Rainfall is a very important variable for calculating drought statistics, for quantifying historical droughts or for assessing the impact on other hydrological (e.g. water stage in rivers) or agricultural (e.g. irrigation requirements) variables. Unfortunately, time series of historical observations are often too short for such assessments. To circumvent this, one may rely on the synthetic rainfall time series from stochastic point process rainfall models, such as Bartlett-Lewis models. The present study investigates whether drought statistics are preserved when simulating rainfall with Bartlett-Lewis models. Therefore, a 105 yr 10 min rainfall time series obtained at Uccle, Belgium is used as a test case. First, drought events were identified on the basis of the Effective Drought Index (EDI), and each event was characterized by two variables, i.e. drought duration (D) and drought severity (S). As both parameters are interdependent, a multivariate distribution function, which makes use of a copula, was fitted. Based on the copula, four types of drought return periods are calculated for observed as well as simulated droughts and are used to evaluate the ability of the rainfall models to simulate drought events with the appropriate characteristics. Overall, all Bartlett-Lewis model types studied fail to preserve extreme drought statistics, which is attributed to the model structure and to the model stationarity caused by maintaining the same parameter set during the whole simulation period.

  8. A copula-based assessment of Bartlett-Lewis type of rainfall models for preserving drought statistics

    NASA Astrophysics Data System (ADS)

    Pham, M. T.; Vanhaute, W. J.; Vandenberghe, S.; De Baets, B.; Verhoest, N. E. C.

    2013-06-01

    Of all natural disasters, the economic and environmental consequences of droughts are among the highest because of their longevity and widespread spatial extent. Because of their extreme behaviour, studying droughts generally requires long time series of historical climate data. Rainfall is a very important variable for calculating drought statistics, for quantifying historical droughts or for assessing the impact on other hydrological (e.g. water stage in rivers) or agricultural (e.g. irrigation requirements) variables. Unfortunately, time series of historical observations are often too short for such assessments. To circumvent this, one may rely on the synthetic rainfall time series from stochastic point process rainfall models, such as Bartlett-Lewis models. The present study investigates whether drought statistics are preserved when simulating rainfall with Bartlett-Lewis models. Therefore, a 105 yr 10 min rainfall time series obtained at Uccle, Belgium is used as a test case. First, drought events were identified on the basis of the Effective Drought Index (EDI), and each event was characterized by two variables, i.e. drought duration (D) and drought severity (S). As both parameters are interdependent, a multivariate distribution function, which makes use of a copula, was fitted. Based on the copula, four types of drought return periods are calculated for observed as well as simulated droughts and are used to evaluate the ability of the rainfall models to simulate drought events with the appropriate characteristics. Overall, all Bartlett-Lewis model types studied fail to preserve extreme drought statistics, which is attributed to the model structure and to the model stationarity caused by maintaining the same parameter set during the whole simulation period.

  9. EO-1 Hyperion Reflectance Time Series at Calibration and Validation Sites: Stability and Sensitivity to Seasonal Dynamics

    NASA Technical Reports Server (NTRS)

    Campbell, Petya K. Entcheva; Middleton, Elizabeth M.; Thome, Kurt J.; Kokaly, Raymond F.; Huemmrich, Karl Fred; Lagomasino, David; Novick, Kimberly A.; Brunsell, Nathaniel A.

    2013-01-01

    This study evaluated Earth Observing 1 (EO-1) Hyperion reflectance time series at established calibration sites to assess the instrument stability and suitability for monitoring vegetation functional parameters. Our analysis using three pseudo-invariant calibration sites in North America indicated that the reflectance time series are devoid of apparent spectral trends and their stability consistently is within 2.5-5 percent throughout most of the spectral range spanning the 12-plus year data record. Using three vegetated sites instrumented with eddy covariance towers, the Hyperion reflectance time series were evaluated for their ability to determine important variables of ecosystem function. A number of narrowband and derivative vegetation indices (VI) closely described the seasonal profiles in vegetation function and ecosystem carbon exchange (e.g., net and gross ecosystem productivity) in three very different ecosystems, including a hardwood forest and tallgrass prairie in North America, and a Miombo woodland in Africa. Our results demonstrate the potential for scaling the carbon flux tower measurements to local and regional landscape levels. The VIs with stronger relationships to the CO2 parameters were derived using continuous reflectance spectra and included wavelengths associated with chlorophyll content and/or chlorophyll fluorescence. Since these indices cannot be calculated from broadband multispectral instrument data, the opportunity to exploit these spectrometer-based VIs in the future will depend on the launch of satellites such as EnMAP and HyspIRI. This study highlights the practical utility of space-borne spectrometers for characterization of the spectral stability and uniformity of the calibration sites in support of sensor cross-comparisons, and demonstrates the potential of narrowband VIs to track and spatially extend ecosystem functional status as well as carbon processes measured at flux towers.

  10. EO-1 Hyperion reflectance time series at calibration and validation sites: stability and sensitivity to seasonal dynamics

    USGS Publications Warehouse

    Campbell, P.K.E.; Middleton, E.M.; Thome, K.J.; Kokaly, Raymond F.; Huemmrich, K.F.; Novick, K.A.; Brunsell, N.A.

    2013-01-01

    This study evaluated Earth Observing 1 (EO-1) Hyperion reflectance time series at established calibration sites to assess the instrument stability and suitability for monitoring vegetation functional parameters. Our analysis using three pseudo-invariant calibration sites in North America indicated that the reflectance time series are devoid of apparent spectral trends and their stability consistently is within 2.5-5 percent throughout most of the spectral range spanning the 12+ year data record. Using three vegetated sites instrumented with eddy covariance towers, the Hyperion reflectance time series were evaluated for their ability to determine important variables of ecosystem function. A number of narrowband and derivative vegetation indices (VI) closely described the seasonal profiles in vegetation function and ecosystem carbon exchange (e.g., net and gross ecosystem productivity) in three very different ecosystems, including a hardwood forest and tallgrass prairie in North America, and a Miombo woodland in Africa. Our results demonstrate the potential for scaling the carbon flux tower measurements to local and regional landscape levels. The VIs with stronger relationships to the CO2 parameters were derived using continuous reflectance spectra and included wavelengths associated with chlorophyll content and/or chlorophyll fluorescence. Since these indices cannot be calculated from broadband multispectral instrument data, the opportunity to exploit these spectrometer-based VIs in the future will depend on the launch of satellites such as EnMAP and HyspIRI. This study highlights the practical utility of space-borne spectrometers for characterization of the spectral stability and uniformity of the calibration sites in support of sensor cross-comparisons, and demonstrates the potential of narrowband VIs to track and spatially extend ecosystem functional status as well as carbon processes measured at flux towers.

  11. Hazard function theory for nonstationary natural hazards

    NASA Astrophysics Data System (ADS)

    Read, L.; Vogel, R. M.

    2015-12-01

    Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series x, with corresponding failure time series t, should have application to a wide class of natural hazards.
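
    For context, the hazard function of a 2-parameter Generalized Pareto distribution with scale \sigma and shape \xi has a simple reciprocal-linear form (a standard result, stated in common notation rather than necessarily the authors'):

        F(x) = 1 - \left( 1 + \frac{\xi x}{\sigma} \right)^{-1/\xi}, \qquad
        h(x) = \frac{f(x)}{1 - F(x)} = \frac{1}{\sigma + \xi x}.

    Nonstationarity can then be introduced by letting \sigma (or \xi) vary with time, which propagates directly into time-varying reliabilities and average return periods.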

  12. Rotor design for maneuver performance

    NASA Technical Reports Server (NTRS)

    Berry, John D.; Schrage, Daniel

    1986-01-01

    A method of determining the sensitivity of helicopter maneuver performance to changes in basic rotor design parameters is developed. Maneuver performance is measured by the time required, based on a simplified rotor/helicopter performance model, to perform a series of specified maneuvers. Because of the inherent simplicity of the rotor performance model used, the method quickly identifies parameter values which result in minimum time. For the specific case studied, the method predicts that the minimum time is obtained with a low disk loading and a relatively high rotor solidity. The method was developed as part of the winning design effort for the American Helicopter Society student design competition for 1984/1985.

  13. Status, trends, and changes in freshwater inflows to bay systems in the Corpus Christi Bay National Estuary Program study area

    USGS Publications Warehouse

    Asquith, W.H.; Mosier, J. G.; Bush, P.W.

    1997-01-01

    The watershed simulation model Hydrologic Simulation Program—Fortran (HSPF) was used to generate simulated flow (runoff) from the 13 watersheds to the six bay systems because adequate gaged streamflow data from which to estimate freshwater inflows are not available; only about 23 percent of the adjacent contributing watershed area is gaged. The model was calibrated for the gaged parts of three watersheds—that is, selected input parameters (meteorologic and hydrologic properties and conditions) that control runoff were adjusted in a series of simulations until an adequate match between model-generated flows and a set (time series) of gaged flows was achieved. The primary model input is rainfall and evaporation data and the model output is a time series of runoff volumes. After calibration, simulations driven by daily rainfall for a 26-year period (1968–93) were done for the 13 watersheds to obtain runoff under current (1983–93), predevelopment (pre-1940 streamflow and pre-urbanization), and future (2010) land-use conditions for estimating freshwater inflows and for comparing runoff under the three land-use conditions; and to obtain time series of runoff from which to estimate time series of freshwater inflows for trend analysis.

  14. Patterns of time series of numbers of emergency hospitalizations in mental hospitals in Moscow and Kazan (common features and differences)

    NASA Astrophysics Data System (ADS)

    Aptikaeva, O. I.; Gamburtsev, A. G.; Martyushov, A. N.

    2012-12-01

    We have investigated the numbers of emergency hospitalizations in mental and drug-treatment hospitals in Kazan in 1996-2006 and in Moscow in 1984-1996. Samples were analyzed by disease type, sex, age, and place of residence (city or village). This study aims to discover differences and common traits in the structures of the series of hospitalizations in these samples and their possible relationships with changing parameters of the environment. We have found similar structures in series of samples of the same type both in Moscow and in Kazan. In some cases, the cyclic structures of the series of numbers of hospitalizations change simultaneously with the series of changes in solar activity and in the rate of rotation of the Earth.

  15. How one might miss early warning signals of critical transitions in time series data: A systematic study of two major currency pairs

    PubMed Central

    Wen, Haoyu; Ciamarra, Massimo Pica; Cheong, Siew Ann

    2018-01-01

    There is growing interest in the use of critical slowing down and critical fluctuations as early warning signals for critical transitions in different complex systems. However, while some studies found them effective, others found the opposite. In this paper, we investigated why this might be so, by testing three commonly used indicators: lag-1 autocorrelation, variance, and low-frequency power spectrum at anticipating critical transitions in the very-high-frequency time series data of the Australian Dollar-Japanese Yen and Swiss Franc-Japanese Yen exchange rates. Besides testing rising trends in these indicators at a strict level of confidence using the Kendall-tau test, we also required statistically significant early warning signals to be concurrent in the three indicators, which must rise to appreciable values. We then found for our data set the optimum parameters for discovering critical transitions, and showed that the set of critical transitions found is generally insensitive to variations in the parameters. Suspecting that negative results in the literature are the results of low data frequencies, we created time series with time intervals over three orders of magnitude from the raw data, and tested them for early warning signals. Early warning signals can be reliably found only if the time interval of the data is shorter than the time scale of critical transitions in our complex system of interest. Finally, we compared the set of time windows with statistically significant early warning signals with the set of time windows followed by large movements, to conclude that the early warning signals indeed provide reliable information on impending critical transitions. This reliability becomes more compelling statistically the more events we test. PMID:29538373

  16. How one might miss early warning signals of critical transitions in time series data: A systematic study of two major currency pairs.

    PubMed

    Wen, Haoyu; Ciamarra, Massimo Pica; Cheong, Siew Ann

    2018-01-01

    There is growing interest in the use of critical slowing down and critical fluctuations as early warning signals for critical transitions in different complex systems. However, while some studies found them effective, others found the opposite. In this paper, we investigated why this might be so, by testing three commonly used indicators: lag-1 autocorrelation, variance, and low-frequency power spectrum at anticipating critical transitions in the very-high-frequency time series data of the Australian Dollar-Japanese Yen and Swiss Franc-Japanese Yen exchange rates. Besides testing rising trends in these indicators at a strict level of confidence using the Kendall-tau test, we also required statistically significant early warning signals to be concurrent in the three indicators, which must rise to appreciable values. We then found for our data set the optimum parameters for discovering critical transitions, and showed that the set of critical transitions found is generally insensitive to variations in the parameters. Suspecting that negative results in the literature are the results of low data frequencies, we created time series with time intervals over three orders of magnitude from the raw data, and tested them for early warning signals. Early warning signals can be reliably found only if the time interval of the data is shorter than the time scale of critical transitions in our complex system of interest. Finally, we compared the set of time windows with statistically significant early warning signals with the set of time windows followed by large movements, to conclude that the early warning signals indeed provide reliable information on impending critical transitions. This reliability becomes more compelling statistically the more events we test.
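
    A minimal sketch of the three indicators on rolling windows plus the Kendall-tau trend test (the window length, low-frequency band fraction, and significance level are illustrative, not the paper's tuned values):

        import numpy as np
        from scipy.stats import kendalltau

        def rolling_indicators(x, w):
            """Lag-1 autocorrelation, variance and low-frequency spectral
            power fraction in sliding windows of length w."""
            ac1, var, lfp = [], [], []
            for i in range(len(x) - w + 1):
                seg = x[i:i + w] - np.mean(x[i:i + w])
                ac1.append(np.corrcoef(seg[:-1], seg[1:])[0, 1])
                var.append(np.var(seg))
                p = np.abs(np.fft.rfft(seg))**2
                lfp.append(p[1:len(p) // 8].sum() / p[1:].sum())
            return np.array(ac1), np.array(var), np.array(lfp)

        def rising_trend(indicator, alpha=0.05):
            """Kendall-tau test for a statistically significant rise."""
            tau, pval = kendalltau(np.arange(len(indicator)), indicator)
            return tau > 0 and pval < alpha

    An early warning is flagged only when all three indicators rise concurrently, and to appreciable values, mirroring the concurrence requirement described above.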

  17. Parameters Design of Series Resonant Inverter Circuit

    NASA Astrophysics Data System (ADS)

    Qi, Xingkun; Peng, Yonglong; Li, Yabin

    This paper analyzes the main circuit structure of the series resonant inverter and designs the component parameters of the main circuit. That provides a theoretical method for the design of series resonant inverters.

  18. Toward the Real-Time Tsunami Parameters Prediction

    NASA Astrophysics Data System (ADS)

    Lavrentyev, Mikhail; Romanenko, Alexey; Marchuk, Andrey

    2013-04-01

    Today, a wide, well-developed system of deep-ocean tsunami detectors operates over the Pacific, and direct measurements of tsunami-wave time series are available. However, tsunami-warning systems still fail to predict the basic parameters of tsunami waves on time; dozens of examples could be provided. In our view, the lack of computational power is the main reason for these failures. At the same time, modern computer technologies, such as GPUs (graphics processing units) and FPGAs (field-programmable gate arrays), can dramatically improve data processing performance, which may enable timely tsunami warnings. Thus, it is possible to address the challenge of real-time tsunami forecasting for selected geographic regions. We propose to use three new techniques in the existing tsunami warning systems to achieve real-time calculation of tsunami wave parameters. First of all, the measurement system (e.g., the DART buoy locations) should be optimized, both in terms of wave arrival time and the amplitude parameter. The corresponding software application exists today and is ready for use [1]. We consider the example of the coastal line of Japan. Numerical tests show that optimal installation of only 4 DART buoys (taking into account the existing seabed cable) would reduce the tsunami wave detection time to only 10 min after an underwater earthquake. Secondly, as was shown by the authors of this paper, the use of GPU/FPGA technologies accelerates the execution of the MOST (method of splitting tsunami) code by 100 times [2]. Therefore, tsunami wave propagation over an ocean area of 2000 x 2000 km (wave propagation simulation: time step 10 s, recording every 4th spatial point and every 4th time step) could be calculated in: 3 s with a 4' mesh, 50 s with a 1' mesh, and 5 min with a 0.5' mesh. An algorithm to switch from the coarse mesh to the fine-grained one is also available. Finally, we propose a new algorithm for determining tsunami source parameters by real-time processing of the time series obtained at DART buoys. It is possible to approximate the measured time series by a linear combination of synthetic marigrams, with the coefficients of the linear combination calculated with the help of an orthogonal decomposition. The algorithm is very fast and demonstrates good accuracy. Summing up, using the example of the coastal line of Japan, the wave height evaluation will be available 12-14 minutes after the earthquake, even before the wave approaches the nearest shore point (which usually takes place in about 20 minutes). [1] Astrakova, A.S., Bannikov, D.V., Cherny, S.G., Lavrentiev, M.M.: The determination of the optimal sensors' location using genetic algorithm. 3rd Nordic EMW Summer School, Turku, Finland, June 2009: proceedings. TUSC General Publications, N 53, pp. 5-22 (2009). [2] Lavrentiev Jr., M., Romanenko, A.: Modern Hardware Solutions to Speed Up Tsunami Simulation Codes. Geophysical Research Abstracts, Vol. 12, EGU2010-3835, 2010.
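
    A minimal sketch of the final step: fitting the measured DART record as a linear combination of precomputed synthetic marigrams by an orthogonal decomposition (the names and shapes are hypothetical):

        import numpy as np

        def source_coefficients(measured, synthetic):
            """measured: observed time series, shape (n_t,);
            synthetic: unit-source marigrams, shape (n_t, n_src).
            Returns the least-squares coefficients of the combination."""
            Q, R = np.linalg.qr(synthetic)    # orthogonalize the basis
            return np.linalg.solve(R, Q.T @ measured)

    The fitted coefficients scale the unit sources, from which wave heights along the coast can be composed without rerunning the propagation model.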

  19. The reliable solution and computation time of variable parameters logistic model

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed Tc) obtained by applying double-precision computation to a variable-parameters logistic map (VPLM). First, using the proposed method, we obtain reliable solutions for the logistic map. Second, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameters VPLM and calculate the mean Tc. The results indicate that, for each different initial value, the Tc of the VPLM is generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of Tc are also obtained, which can help identify the robustness of applying nonlinear time series theory to forecasting with VPLM output. In addition, the Tc of fixed-parameter experiments with the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
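    A sketch of the reliable-computation-time idea for the fixed-parameter case: iterate the logistic map in double precision and against a high-precision reference (assuming the mpmath package), and record the step at which the two diverge beyond a tolerance. The variable-parameter case would replace `r` by a time-dependent sequence r_n.

    ```python
    from mpmath import mp, mpf

    mp.dps = 100                      # 100-digit reference computation

    def reliable_time(x0, r=4.0, tol=1e-6, nmax=200):
        """Steps until the double-precision orbit departs from the
        high-precision reference by more than `tol`."""
        xd, xm = x0, mpf(x0)
        for n in range(nmax):
            xd = r * xd * (1.0 - xd)          # double precision
            xm = mpf(r) * xm * (1 - xm)       # high precision
            if abs(xd - float(xm)) > tol:
                return n                      # Tc for this initial value
        return nmax

    print("Tc for x0=0.2:", reliable_time(0.2))
    ```

    Averaging `reliable_time` over many initial values reproduces the mean-Tc experiment described in the abstract.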

  20. Quantifying evolutionary dynamics from variant-frequency time series

    NASA Astrophysics Data System (ADS)

    Khatri, Bhavin S.

    2016-09-01

    From Kimura’s neutral theory of protein evolution to Hubbell’s neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the two-variant case, we exploit Fisher’s angular transformation, which, despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, together with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time series.
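    Fisher's angular (arcsine square-root) transformation referenced above, shown as a quick numerical check that it approximately stabilizes the variance of binomially sampled variant frequencies; the sample sizes are illustrative.

    ```python
    import numpy as np

    N, reps = 200, 100000                 # sequencing depth, replicates
    for f in (0.1, 0.3, 0.5, 0.8):
        counts = np.random.binomial(N, f, reps)
        freq = counts / N
        ang = np.arcsin(np.sqrt(freq))    # the angular transformation
        print(f, np.var(freq).round(5), np.var(ang).round(5))
    # var(freq) depends strongly on f, while var(ang) stays near
    # the constant 1/(4N) = 0.00125, which is what makes the
    # transformed diffusion tractable at short times
    ```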

  1. Quantifying evolutionary dynamics from variant-frequency time series.

    PubMed

    Khatri, Bhavin S

    2016-09-12

    From Kimura's neutral theory of protein evolution to Hubbell's neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the two-variant case, we exploit Fisher's angular transformation, which, despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, together with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time series.

  2. Solar signals detected within neutral atmospheric and ionospheric parameters

    NASA Astrophysics Data System (ADS)

    Koucka Knizova, Petra; Georgieva, Katya; Mosna, Zbysek; Kozubek, Michal; Kouba, Daniel; Kirov, Boian; Potuzníkova, Katerina; Boska, Josef

    2018-06-01

    We have analyzed time series of solar data together with atmospheric and ionospheric measurements for solar cycles 19 to 23, according to data availability. For the analyses we have used long-term data with 1-day sampling. By means of the Continuous Wavelet Transform (CWT) we have found common spectral domains within the solar, atmospheric, and ionospheric time series. Further, applying Wavelet Transform Coherence (WTC), we have identified the periods during which particular pairs of signals show high coherence. Despite the wide oscillation ranges detected in the individual CWT spectra, WTC reveals only limited domains of high coherence. Wavelet Transform Coherence shows significant high-power domains with a stable phase difference for periods of 1 month, 2 months, 6 months, 1 year, 2 years, and 3-4 years between pairs of solar and atmospheric or ionospheric data. The occurrence of the detected domains varies significantly within a particular solar cycle (SC) and from one cycle to the next, indicating changing solar forcing and/or changing atmospheric sensitivity with time.
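    A minimal sketch (assuming the PyWavelets package) of the first step described above: CWT spectra of a solar index and an ionospheric parameter, from which common high-power period bands can be read off. Full WTC requires smoothing in time and scale and is left out here; the series are synthetic stand-ins.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    days = np.arange(3650)                        # ten years, 1-day sampling
    solar = np.sin(2*np.pi*days/365) + 0.5*rng.standard_normal(days.size)
    foF2  = np.sin(2*np.pi*days/365 + 0.4) + 0.5*rng.standard_normal(days.size)

    scales = np.arange(2, 512)
    c1, freqs = pywt.cwt(solar, scales, 'morl', sampling_period=1.0)
    c2, _     = pywt.cwt(foF2,  scales, 'morl', sampling_period=1.0)

    power_product = np.abs(c1) * np.abs(c2)       # crude co-spectrum proxy
    band = power_product.mean(axis=1)
    print("period (days) of strongest common power:",
          round(1.0 / freqs[np.argmax(band)], 1))
    ```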

  3. Ares I-X In-Flight Modal Identification

    NASA Technical Reports Server (NTRS)

    Bartkowicz, Theodore J.; James, George H., III

    2011-01-01

    Operational modal analysis is a procedure that allows the extraction of modal parameters of a structure in its operating environment. It is based on the idealized premise that the input to the structure is white noise. In some cases, when free-decay responses are corrupted by unmeasured random disturbances, the response data can be processed into cross-correlation functions that approximate free-decay responses. Modal parameters can be computed from these functions by time-domain identification methods such as the Eigensystem Realization Algorithm (ERA). The extracted modal parameters have the same characteristics as impulse response functions of the original system. Operational modal analysis is performed on Ares I-X in-flight data. Since the dynamic system is not stationary due to propellant mass loss, modal identification is only possible by analyzing the system as a series of linearized models via a sliding time window of short intervals. A time-domain zooming technique was also employed to enhance the modal parameter extraction. Results of this study demonstrate that free-decay time-domain modal identification methods can be successfully employed for in-flight launch vehicle modal extraction.
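    A sketch of the correlation step described above (often called NExT in the operational modal analysis literature): cross-correlation functions between response channels under broadband excitation approximate free-decay responses, which a time-domain identifier such as ERA can then process. The two channels here are simulated, sharing one lightly damped 5 Hz mode.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs, T = 200.0, 60.0
    t = np.arange(0, T, 1/fs)
    noise = rng.standard_normal(t.size)
    # broadband-excited response carrying a 5 Hz, lightly damped mode
    mode = np.convolve(noise, np.exp(-0.3*t)*np.sin(2*np.pi*5*t), 'full')[:t.size]
    y1 = mode + 0.2*rng.standard_normal(t.size)
    y2 = 0.8*mode + 0.2*rng.standard_normal(t.size)

    nlag = int(5*fs)                               # keep 5 s of positive lags
    xcorr = np.correlate(y1, y2, 'full')[y2.size-1 : y2.size-1+nlag]
    xcorr /= xcorr[0]
    # `xcorr` now decays like a free response and carries the 5 Hz mode,
    # ready to be fed to an ERA-style realization step
    print("first samples of the pseudo free decay:", np.round(xcorr[:5], 3))
    ```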

  4. Progress Report on the Airborne Composition Standard Variable Name and Time Series Working Groups of the 2017 ESDSWG

    NASA Astrophysics Data System (ADS)

    Evans, K. D.; Early, A. B.; Northup, E. A.; Ames, D. P.; Teng, W. L.; Olding, S. W.; Krotkov, N. A.; Arctur, D. K.; Beach, A. L., III; Silverman, M. L.

    2017-12-01

    The role of NASA's Earth Science Data Systems Working Groups (ESDSWG) is to make recommendations relevant to NASA's Earth science data systems from users' experiences and community insight. Each group works independently, focusing on a unique topic. Progress of two of the 2017 Working Groups will be presented. In a single airborne field campaign, there can be several different instruments and techniques that measure the same parameter on one or more aircraft platforms. Many of these same parameters are measured during different airborne campaigns using similar or different instruments and techniques. The Airborne Composition Standard Variable Name Working Group is working to create a list of variable standard names that can be used across all airborne field campaigns in order to assist in the transition to the ICARTT Version 2.0 file format. The overall goal is to enhance the usability of ICARTT files and the search ability of airborne field campaign data. The Time Series Working Group (TSWG) is a continuation of the 2015 and 2016 Time Series Working Groups. In 2015, we started TSWG with the intention of exploring the new OGC (Open Geospatial Consortium) WaterML 2 standards as a means for encoding point-based time series data from NASA satellites. In this working group, we realized that WaterML 2 might not be the best solution for this type of data, for a number of reasons. Our discussion with experts from other agencies, who have worked on similar issues, identified several challenges that we would need to address. As a result, we made the recommendation to study the new TimeseriesML 1.0 standard of OGC as a potential NASA time series standard. The 2016 TSWG examined closely the TimeseriesML 1.0 and, in coordination with the OGC TimeseriesML Standards Working Group, identified certain gaps in TimeseriesML 1.0 that would need to be addressed for the standard to be applicable to NASA time series data. An engineering report was drafted based on the OGC Engineering Report template, describing recommended changes to TimeseriesML 1.0, in the form of use cases. In 2017, we are conducting interoperability experiments to implement the use cases and demonstrate the feasibility and suitability of these modifications for NASA and related user communities. The results will be incorporated into the existing draft engineering report.

  5. Progress Report on the Airborne Composition Standard Variable Name and Time Series Working Groups of the 2017 ESDSWG

    NASA Technical Reports Server (NTRS)

    Evans, Keith D.; Early, Amanda; Northup, Emily; Ames, Dan; Teng, William; Arctur, David; Beach, Aubrey; Olding, Steve; Krotkov, Nickolay A.

    2017-01-01

    The role of NASA's Earth Science Data Systems Working Groups (ESDSWG) is to make recommendations relevant to NASA's Earth science data systems from users' experiences and community insight. Each group works independently, focusing on a unique topic. Progress of two of the 2017 Working Groups will be presented. In a single airborne field campaign, there can be several different instruments and techniques that measure the same parameter on one or more aircraft platforms. Many of these same parameters are measured during different airborne campaigns using similar or different instruments and techniques. The Airborne Composition Standard Variable Name Working Group is working to create a list of variable standard names that can be used across all airborne field campaigns in order to assist in the transition to the ICARTT Version 2.0 file format. The overall goal is to enhance the usability of ICARTT files and the search ability of airborne field campaign data. The Time Series Working Group (TSWG) is a continuation of the 2015 and 2016 Time Series Working Groups. In 2015, we started TSWG with the intention of exploring the new OGC (Open Geospatial Consortium) WaterML 2 standards as a means for encoding point-based time series data from NASA satellites. In this working group, we realized that WaterML 2 might not be the best solution for this type of data, for a number of reasons. Our discussion with experts from other agencies, who have worked on similar issues, identified several challenges that we would need to address. As a result, we made the recommendation to study the new TimeseriesML 1.0 standard of OGC as a potential NASA time series standard. The 2016 TSWG examined closely the TimeseriesML 1.0 and, in coordination with the OGC TimeseriesML Standards Working Group, identified certain gaps in TimeseriesML 1.0 that would need to be addressed for the standard to be applicable to NASA time series data. An engineering report was drafted based on the OGC Engineering Report template, describing recommended changes to TimeseriesML 1.0, in the form of use cases. In 2017, we are conducting interoperability experiments to implement the use cases and demonstrate the feasibility and suitability of these modifications for NASA and related user communities. The results will be incorporated into the existing draft engineering report.

  6. The Combined Effect of Periodic Signals and Noise on the Dilution of Precision of GNSS Station Velocity Uncertainties

    NASA Astrophysics Data System (ADS)

    Klos, Anna; Olivares, German; Teferle, Felix Norman; Bogusz, Janusz

    2016-04-01

    Station velocity uncertainties determined from a series of Global Navigation Satellite System (GNSS) position estimates depend on both the deterministic and the stochastic models applied to the time series. While the deterministic model generally includes parameters for a linear trend and several periodic terms, the stochastic model represents the noise character of the time series in the form of a power-law process. For both models the optimal choice may vary from one time series to another, and the two models also depend, to some degree, on each other. In the past, various power-law processes have been shown to fit such time series, and the sources of the apparent temporally correlated noise were attributed to, for example, mismodelling of satellite orbits, antenna phase centre variations, the troposphere, Earth Orientation Parameters, mass loading effects, and monument instabilities. Blewitt and Lavallée (2002) demonstrated how improperly modelled seasonal signals affect the estimates of station velocity uncertainties. However, in their study they assumed that the time series followed a white noise process, with no consideration of additional temporally correlated noise. Bos et al. (2010) showed empirically, for a small number of stations, that the noise character was much more important for the reliable estimation of station velocity uncertainties than the seasonal signals. In this presentation we pick up from Blewitt and Lavallée (2002) and Bos et al. (2010) and derive formulas for the computation of the General Dilution of Precision (GDP) in the presence of periodic signals and temporally correlated noise in the time series. We show, based on simulated and real time series from globally distributed IGS (International GNSS Service) stations processed by the Jet Propulsion Laboratory (JPL), that periodic signals dominate the effect on the velocity uncertainties at short time scales, while beyond four years the type of noise becomes much more important. In other words, for time series long enough, the assumed periodic signals do not affect the velocity uncertainties as much as the assumed noise model. We calculated the GDP as the ratio between two velocity errors: without and with the inclusion of seasonal terms with periods of one year and its overtones up to the third. Power-law processes of white, flicker, and random-walk noise were added separately to all these cases. A few oscillations in the GDP can be noticed at integer years, arising from the added periodic terms. Their amplitudes in the GDP increase with increasing spectral index. Strong oscillation peaks in the GDP occur at short time scales, especially for random-walk processes, meaning that badly monumented stations are affected the most. Local minima and maxima in the GDP are also enlarged as the noise approaches random walk. We noticed that the semi-annual signal increases the local GDP minimum for white noise. This suggests that adding power-law noise to a deterministic model with an annual term, or adding a semi-annual term to white noise, increases the velocity uncertainty even at the points where the determined velocity is not biased.
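    A sketch of the dilution-of-precision idea for the white-noise case only: the formal velocity uncertainty follows from the least-squares covariance sigma^2 (X^T X)^{-1}, computed with and without annual-plus-overtone columns in the design matrix. The flicker and random-walk cases in the abstract need a full noise covariance and are not reproduced here.

    ```python
    import numpy as np

    def velocity_sigma(years, seasonal, fs=365.25):
        """Formal sigma of the trend term for daily data with unit-variance
        white noise, optionally co-estimating seasonal terms."""
        t = np.arange(int(years*fs)) / fs
        cols = [np.ones_like(t), t]
        if seasonal:
            for k in (1, 2, 3, 4):            # annual plus overtones to the 3rd
                w = 2*np.pi*k
                cols += [np.sin(w*t), np.cos(w*t)]
        X = np.column_stack(cols)
        C = np.linalg.inv(X.T @ X)            # covariance up to sigma^2
        return np.sqrt(C[1, 1])               # uncertainty of the velocity

    for yrs in (1.5, 2.5, 4.0, 8.0):
        gdp = velocity_sigma(yrs, True) / velocity_sigma(yrs, False)
        print(f"{yrs:4.1f} yr  GDP = {gdp:.3f}")
    # the ratio approaches 1 as the series grows, consistent with the
    # claim that seasonal terms matter mainly at short time scales
    ```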

  7. Bayesian Analysis of Non-Gaussian Long-Range Dependent Processes

    NASA Astrophysics Data System (ADS)

    Graves, Timothy; Watkins, Nicholas; Franzke, Christian; Gramacy, Robert

    2013-04-01

    Recent studies [e.g. the Antarctic study of Franzke, J. Climate, 2010] have strongly suggested that surface temperatures exhibit long-range dependence (LRD). The presence of LRD would hamper the identification of deterministic trends and the quantification of their significance. It is well established that LRD processes exhibit stochastic trends over rather long periods of time. Thus, accurate methods for discriminating between physical processes that possess long memory and those that do not are an important adjunct to climate modeling. As we briefly review, the LRD idea originated at the same time as H-self-similarity, so it is often not realised that a model does not have to be H-self-similar to show LRD [e.g. Watkins, GRL Frontiers, 2013]. We have used Markov chain Monte Carlo algorithms to perform a Bayesian analysis of Auto-Regressive Fractionally Integrated Moving-Average ARFIMA(p,d,q) processes, which are capable of modeling LRD. Our principal aim is to obtain inference about the long memory parameter, d, with secondary interest in the scale and location parameters. We have developed a reversible-jump method enabling us to integrate over different model forms for the short memory component. We initially assume Gaussianity, and have tested the method on both synthetic and physical time series. Many physical processes, for example the Faraday Antarctic time series, are significantly non-Gaussian. We have therefore extended this work by weakening the Gaussianity assumption, assuming an alpha-stable distribution for the innovations, and performing joint inference on d and alpha. Such a modified ARFIMA(p,d,q) process is a flexible initial model for non-Gaussian processes with long memory. We will present a study of the dependence of the posterior variance of the memory parameter d on the length of the time series considered, and compare it with equivalent error diagnostics for other measures of d.
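    A sketch of the LRD building block behind ARFIMA(p,d,q): generating an ARFIMA(0,d,0) series by truncated fractional integration of white innovations. Swapping the Gaussian innovations for alpha-stable draws (e.g. scipy.stats.levy_stable) gives the non-Gaussian variant discussed above; the truncation length is an illustrative choice.

    ```python
    import numpy as np

    def arfima_0d0(n, d, ntrunc=1000, seed=0):
        """ARFIMA(0,d,0) via the MA(inf) weights of (1-B)^{-d},
        psi_k = psi_{k-1} * (k-1+d)/k, truncated at ntrunc terms."""
        rng = np.random.default_rng(seed)
        k = np.arange(1, ntrunc)
        psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
        eps = rng.standard_normal(n + ntrunc)
        return np.convolve(eps, psi, 'full')[ntrunc:ntrunc + n]

    x = arfima_0d0(10000, d=0.3)
    # the sample ACF decays hyperbolically rather than exponentially
    acf = [np.corrcoef(x[:-lag], x[lag:])[0, 1] for lag in (1, 10, 100)]
    print("ACF at lags 1, 10, 100:", np.round(acf, 3))
    ```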

  8. Road safety forecasts in five European countries using structural time series models.

    PubMed

    Antoniou, Constantinos; Papadimitriou, Eleonora; Yannis, George

    2014-01-01

    Modeling road safety development is a complex task that needs to consider both the quantifiable impact of specific parameters and the underlying trends that cannot always be measured or observed. The objective of this research is to apply structural time series models for obtaining reliable medium- to long-term forecasts of road traffic fatality risk, using data from five countries with different characteristics from all over Europe (Cyprus, Greece, Hungary, Norway, and Switzerland). Two structural time series models are considered: (1) the local linear trend model and (2) the latent risk time series model. Furthermore, a structured decision tree for the selection of the applicable model in each situation (developed within the Road Safety Data, Collection, Transfer and Analysis [DaCoTA] research project, cofunded by the European Commission) is outlined. First, the fatality and exposure data used for the development of the models are presented and explored. Then, the modeling process is presented, including the model selection process, the introduction of intervention variables, and the development of mobility scenarios. The forecasts from the developed models appear to be realistic and within acceptable confidence intervals. The proposed methodology proves to be very efficient for handling different cases of data availability and quality, providing an appropriate alternative from the family of structural time series models in each country. A concluding section provides perspectives and directions for future research.
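    A minimal sketch (assuming statsmodels) of the first model named above: a local linear trend fitted to an annual fatality series, with a multi-year forecast and confidence interval. The data here are synthetic, not the paper's.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    # fake annual fatality counts with a slowly declining trend
    y = 1200*np.exp(-0.04*np.arange(30)) + rng.normal(0, 30, 30)

    model = sm.tsa.UnobservedComponents(np.log(y), 'local linear trend')
    res = model.fit(disp=False)
    fc = res.get_forecast(10)
    print("point forecast, year +10:", np.exp(fc.predicted_mean[-1]).round(0))
    print("95% interval:", np.exp(fc.conf_int()[-1]).round(0))
    ```

    Modeling the log of the counts keeps forecasts positive; the latent risk model would add exposure as a second observed series.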

  9. Evaluation of the effects of climate and man intervention on ground waters and their dependent ecosystems using time series analysis

    NASA Astrophysics Data System (ADS)

    Gemitzi, Alexandra; Stefanopoulos, Kyriakos

    2011-06-01

    Groundwaters and their dependent ecosystems are affected both by meteorological conditions and by human interventions, mainly in the form of groundwater abstractions for irrigation needs. This work aims at investigating the quantitative effects of meteorological conditions and man intervention on groundwater resources and their dependent ecosystems. Various seasonal Auto-Regressive Integrated Moving Average (ARIMA) models with external predictor variables were used in order to model the influence of meteorological conditions and man intervention on the groundwater level time series. Initially, a seasonal ARIMA model that simulates the abstraction time series, using temperature (T) as an external predictor variable, was prepared. Thereafter, seasonal ARIMA models were developed in order to simulate the groundwater level time series at 8 monitoring locations, using the appropriate predictor variables determined for each individual case. The spatial component was introduced through the use of Geographical Information Systems (GIS). The proposed methodology was applied to the Neon Sidirochorion alluvial aquifer (Northern Greece), for which a 7-year time series (2003-2010) of piezometric and groundwater abstraction data exists. According to the developed ARIMA models, three distinct groups of groundwater level time series exist: the first proves to be dependent only on the meteorological parameters; the second demonstrates a mixed dependence on both meteorological conditions and human intervention; and the third shows a clear influence of man intervention. Moreover, there is evidence that groundwater abstraction has affected an important protected ecosystem.
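    A sketch (assuming statsmodels) of a seasonal ARIMA model with an external predictor, of the kind described above: a monthly groundwater level series with temperature as the exogenous variable. The orders and data are illustrative, not the paper's.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    months = np.arange(84)                                 # 7 years, monthly
    temp = 15 + 10*np.sin(2*np.pi*months/12) + rng.normal(0, 1, 84)
    level = 50 - 0.08*temp + 0.1*rng.normal(0, 0.3, 84).cumsum()

    # seasonal ARIMA with temperature as exogenous regressor
    model = sm.tsa.SARIMAX(level, exog=temp, order=(1, 1, 0),
                           seasonal_order=(1, 0, 0, 12))
    res = model.fit(disp=False)
    print(res.params)        # includes the fitted temperature coefficient
    ```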

  10. Spatially detailed retrievals of spring phenology from single-season high-resolution image time series

    NASA Astrophysics Data System (ADS)

    Vrieling, Anton; Skidmore, Andrew K.; Wang, Tiejun; Meroni, Michele; Ens, Bruno J.; Oosterbeek, Kees; O'Connor, Brian; Darvishzadeh, Roshanak; Heurich, Marco; Shepherd, Anita; Paganini, Marc

    2017-07-01

    Vegetation indices derived from satellite image time series have been extensively used to estimate the timing of phenological events such as season onset. Medium spatial resolution (≥250 m) satellite sensors with daily revisit capability are typically employed for this purpose. In recent years, phenology is being retrieved at higher resolution (≤30 m) in response to the increasing availability of high-resolution satellite data. To overcome the reduced acquisition frequency of such data, previous attempts involved fusion between high- and medium-resolution data, or combinations of multi-year acquisitions in a single phenological reconstruction. The objectives of this study are to demonstrate that phenological parameters can now be retrieved from single-season high-resolution time series, and to compare these retrievals against those derived from multi-year high-resolution and single-season medium-resolution satellite data. The study focuses on the island of Schiermonnikoog, the Netherlands, which comprises a highly dynamic saltmarsh, dune vegetation, and agricultural land. Combining NDVI series derived from atmospherically corrected images from RapidEye (5 m resolution) and the SPOT5 Take5 experiment (10 m resolution) acquired between March and August 2015, phenological parameters were estimated using a function-fitting approach. We then compared the results with phenology retrieved from four years of 30 m Landsat 8 OLI data, and from single-year 100 m Proba-V and 250 m MODIS temporal composites of the same period. Retrieved phenological parameters from the combined RapidEye/SPOT5 series displayed spatially consistent results and a large spatial variability, providing complementary information to existing vegetation community maps. Retrievals that combined four years of Landsat observations into a single synthetic year were affected by the inclusion of years with warmer spring temperatures, whereas adjustment of the average phenology to the 2015 observations was feasible only for a few pixels, due to cloud cover around the phenological transition dates. The Proba-V and MODIS phenology retrievals scaled poorly relative to their high-resolution equivalents, indicating that medium-resolution phenology retrievals need to be interpreted with care, particularly in landscapes with fine-scale land cover variability.
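    A sketch of the function-fitting step described above: a double-logistic curve fitted to one season of NDVI samples, with season onset read from the first inflection parameter. The functional form and start values are common illustrative choices, not necessarily the study's.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def double_logistic(t, base, amp, sos, r1, eos, r2):
        """Greening and senescence as two opposing logistic ramps."""
        return base + amp*(1/(1 + np.exp(-r1*(t - sos)))
                           - 1/(1 + np.exp(-r2*(t - eos))))

    rng = np.random.default_rng(5)
    doy = np.arange(60, 240, 10)                # sparse single-season sampling
    ndvi = double_logistic(doy, 0.2, 0.6, 120, 0.1, 210, 0.08) \
           + 0.03*rng.standard_normal(doy.size)

    p0 = (0.2, 0.5, 130, 0.1, 200, 0.1)         # rough starting guesses
    popt, _ = curve_fit(double_logistic, doy, ndvi, p0=p0, maxfev=10000)
    print("estimated season onset (DOY):", round(popt[2], 1))
    ```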

  11. A black box optimization approach to parameter estimation in a model for long/short term variations dynamics of commodity prices

    NASA Astrophysics Data System (ADS)

    De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano

    2012-11-01

    In this paper we investigate the estimation problem for a model of commodity prices. The model is a stochastic state-space dynamical model whose unknowns are the state variables and the system parameters. Data are represented by commodity spot prices; time series of futures contracts are very seldom freely available. Both the joint likelihood function of the system (state variables and parameters) and the marginal likelihood function (with the state variables eliminated) are addressed.

  12. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here, a novel approach is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.

  13. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE PAGES

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    2017-10-31

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here, a novel approach is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
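    A compact sketch of the augmented-state idea described above: an ensemble Kalman filter whose forecast integrates an Ornstein-Uhlenbeck (time-correlated) model for the mechanical input power alongside a toy scalar state. The real application uses full network swing dynamics; this is only the filtering pattern.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    Ne, nsteps, dt, tau = 100, 200, 0.1, 5.0   # tau: noise correlation time
    xe = rng.normal(0.0, 0.1, Ne)              # ensemble of toy states
    pe = rng.normal(1.0, 0.2, Ne)              # ensemble of input powers
    x_true, p_true = 0.0, 1.2

    for _ in range(nsteps):
        # forecast: OU update for p, relaxation dynamics for x
        pe += -(pe - 1.0)/tau*dt + 0.1*np.sqrt(dt)*rng.standard_normal(Ne)
        xe += (pe - xe)*dt
        p_true += -(p_true - 1.0)/tau*dt + 0.1*np.sqrt(dt)*rng.standard_normal()
        x_true += (p_true - x_true)*dt
        y = x_true + 0.05*rng.standard_normal()     # PMU-like observation of x
        # analysis: perturbed-observation EnKF update of the pair (x, p)
        var_x = np.var(xe, ddof=1)
        cov_px = np.cov(pe, xe)[0, 1]
        gain = 1.0/(var_x + 0.05**2)
        innov = y + 0.05*rng.standard_normal(Ne) - xe
        xe += var_x*gain*innov
        pe += cov_px*gain*innov                     # p corrected via its
                                                    # covariance with x
    print("estimated input power:", round(pe.mean(), 3),
          "truth:", round(p_true, 3))
    ```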

  14. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    NASA Astrophysics Data System (ADS)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    In this study, we introduce an alternative approach for the analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference from conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount, instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters, and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km² to 238 km² in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are most ubiquitous in flow time series, IAT sampling gives relatively more weight to high flow values, for which a given flow amount accumulates in a shorter time. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
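    A sketch of the inter-amount-time sampling described above: the times at which the cumulative flow volume crosses successive multiples of a fixed amount are interpolated, and their differences form the IAT series.

    ```python
    import numpy as np

    def inter_amount_times(t, q, amount):
        """t: sample times, q: flow rates (>0); returns IATs for a fixed
        flow `amount` by inverting the cumulative volume curve."""
        vol = np.concatenate(([0.0],
                              np.cumsum(0.5*(q[1:] + q[:-1])*np.diff(t))))
        levels = np.arange(amount, vol[-1], amount)
        crossing_times = np.interp(levels, vol, t)   # invert cumulative volume
        return np.diff(crossing_times)

    t = np.arange(0, 1000.0)
    q = 1.0 + 0.9*np.sin(2*np.pi*t/200)              # synthetic streamflow
    iat = inter_amount_times(t, q, amount=50.0)
    print("IATs are short in high flow, long in low flow:",
          round(iat.min(), 1), round(iat.max(), 1))
    ```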

  15. FISHERY-ORIENTED MODEL OF MARYLAND OYSTER POPULATIONS

    EPA Science Inventory

    We used time series data to calibrate a model of oyster population dynamics for Maryland's Chesapeake Bay. Model parameters were fishing mortality, natural mortality, recruitment, and carrying capacity. We calibrated for the Maryland bay as a whole and separately for 3 salinity z...

  16. LONG-TERM PROJECTIONS OF EASTERN OYSTER POPULATIONS UNDER VARIOUS MANAGEMENT SCENARIOS

    EPA Science Inventory

    Time series of fishery-dependent and fishery-independent data were used to parameterize a model of oyster population dynamics for Maryland's Chesapeake Bay. Model parameters are (1) fishing mortality, estimated from differences between predicted and reported landings scaled to a ...

  17. Occurrence of CPPopt Values in Uncorrelated ICP and ABP Time Series.

    PubMed

    Cabeleira, M; Czosnyka, M; Liu, X; Donnelly, J; Smielewski, P

    2018-01-01

    Optimal cerebral perfusion pressure (CPPopt) is a concept that uses the pressure reactivity (PRx)-CPP relationship over a given period to find the CPP value at which PRx indicates the best autoregulation. It has been proposed that this relationship be modelled by a U-shaped curve, whose minimum is interpreted as the CPP value corresponding to the strongest autoregulation. Owing to the nature of the calculation and of the signals involved, CPPopt curves can be generated by non-physiological variations of intracranial pressure (ICP) and arterial blood pressure (ABP); these are termed here "false positives". Such random occurrences would artificially increase the yield of CPPopt values and decrease the reliability of the methodology. In this work, we studied the probability of the random occurrence of false positives and compared the effect of the parameters used in the CPPopt calculation on this probability. To simulate the occurrence of false positives, uncorrelated ICP and ABP time series were generated by destroying the relationship between the waves in real recordings. The CPPopt algorithm was then applied to these new series, and the number of false positives was counted for different values of the algorithm's parameters. The percentage of CPPopt curves generated from uncorrelated data was 11.5%. This value can be minimised by tuning some of the calculation parameters, such as increasing the calculation window and increasing the minimum accepted PRx span on the curve.
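    A sketch of the curve-fitting core of the CPPopt calculation described above: PRx values are binned by CPP, a quadratic (U-shaped) curve is fitted to the bin means, and CPPopt is the CPP at its minimum. Window lengths, bin width, and the acceptance checks the paper tunes are simplified away.

    ```python
    import numpy as np

    def cppopt(cpp, prx, bin_width=5.0):
        edges = np.arange(cpp.min(), cpp.max() + bin_width, bin_width)
        centers, means = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (cpp >= lo) & (cpp < hi)
            if sel.sum() >= 10:                  # require enough samples per bin
                centers.append(0.5*(lo + hi))
                means.append(prx[sel].mean())
        a, b, c = np.polyfit(centers, means, 2)
        if a <= 0:                               # not U-shaped: reject the window
            return None
        return -b / (2*a)                        # vertex of the parabola

    rng = np.random.default_rng(7)
    cpp = rng.uniform(55, 95, 2000)
    prx = 0.002*(cpp - 72)**2 - 0.3 + 0.1*rng.standard_normal(2000)
    print("CPPopt:", round(cppopt(cpp, prx), 1), "mmHg")
    ```

    Requiring the fitted parabola to open upwards and span a minimum PRx range is exactly the kind of acceptance criterion the abstract reports as reducing false positives.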

  18. Using Diurnal Temperature Signals to Infer Vertical Groundwater-Surface Water Exchange.

    PubMed

    Irvine, Dylan J; Briggs, Martin A; Lautz, Laura K; Gordon, Ryan P; McKenzie, Jeffrey M; Cartwright, Ian

    2017-01-01

    Heat is a powerful tracer for quantifying fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important for obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering of the temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, and model parameterization, and gives an overview of available computing tools. Recent advances in diurnal temperature methods also provide the opportunity to determine the local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling, and the sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to assess the reliability of flux estimates obtained from the use of heat as a tracer.

  19. Potential assessment of the "support vector machine" method in forecasting ambient air pollutant trends.

    PubMed

    Lu, Wei-Zhen; Wang, Wen-Jian

    2005-04-01

    Monitoring and forecasting of air quality parameters are popular and important topics of atmospheric and environmental research today, due to the health impact of exposure to the air pollutants present in urban air. Accurate models for air pollutant prediction are needed because such models allow forecasting and diagnosing potential compliance or non-compliance in both short- and long-term aspects. Artificial neural networks (ANN) are regarded as a reliable and cost-effective method for such tasks and have produced some promising results to date. Although ANN has attracted much attention from environmental researchers, its inherent drawbacks, e.g., local minima, over-fitting in training, poor generalization performance, and the determination of an appropriate network architecture, impede its practical application. The support vector machine (SVM), a novel type of learning machine based on statistical learning theory, can be used for regression and time series prediction and has been reported to perform well, with some promising results. The work presented in this paper examines the feasibility of applying SVM to predict air pollutant levels in advancing time series, based on the monitored air pollutant database of the Hong Kong downtown area. At the same time, the functional characteristics of SVM are investigated. Experimental comparisons between the SVM model and the classical radial basis function (RBF) network demonstrate that SVM is superior to the conventional RBF network in predicting air quality parameters with different time series, and offers better generalization performance than the RBF model.
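    A sketch (assuming scikit-learn) of SVM regression for one-step-ahead pollutant forecasting of the kind discussed above: lagged concentrations form the feature vectors and an RBF-kernel SVR is trained on the history. The data, lag depth, and hyperparameters are illustrative.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(8)
    t = np.arange(2000)                                  # hourly samples
    conc = 40 + 15*np.sin(2*np.pi*t/24) + 5*rng.standard_normal(t.size)

    lags = 24                                            # one day of history
    X = np.column_stack([conc[i:i - lags] for i in range(lags)])
    y = conc[lags:]                                      # next-hour target

    model = make_pipeline(StandardScaler(),
                          SVR(kernel='rbf', C=10.0, epsilon=0.5))
    model.fit(X[:-200], y[:-200])                        # hold out the last 200 h
    rmse = np.sqrt(np.mean((model.predict(X[-200:]) - y[-200:])**2))
    print("held-out RMSE:", round(rmse, 2))
    ```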

  20. Observatory geoelectric fields induced in a two-layer lithosphere during magnetic storms

    USGS Publications Warehouse

    Love, Jeffrey J.; Swidinsky, Andrei

    2015-01-01

    We report on the development and validation of an algorithm for estimating geoelectric fields induced in the lithosphere beneath an observatory during a magnetic storm. To accommodate induction in three-dimensional lithospheric electrical conductivity, we analyze a simple nine-parameter model: two horizontal layers, each with uniform electrical conductivity properties given by independent distortion tensors. With Laplace transformation of the induction equations into the complex frequency domain, we obtain a transfer function describing induction of observatory geoelectric fields having frequency-dependent polarization. Upon inverse transformation back to the time domain, the convolution of the corresponding impulse-response function with a geomagnetic time series yields an estimated geoelectric time series. We obtain an optimized set of conductivity parameters using 1-s resolution geomagnetic and geoelectric field data collected at the Kakioka, Japan, observatory for five different intense magnetic storms, including the October 2003 Halloween storm; our estimated geoelectric field accounts for 93% of that measured during the Halloween storm. This work demonstrates the need for detailed modeling of the Earth’s lithospheric conductivity structure and the utility of co-located geomagnetic and geoelectric monitoring.

  1. Comprehensive cosmographic analysis by Markov chain method

    NASA Astrophysics Data System (ADS)

    Capozziello, S.; Lazkoz, R.; Salzano, V.

    2011-12-01

    We study the possibility of extracting model independent information about the dynamics of the Universe by using cosmography. We intend to explore it systematically, to learn about its limitations and its real possibilities. Here we are sticking to the series expansion approach on which cosmography is based. We apply it to different data sets: Supernovae type Ia (SNeIa), Hubble parameter extracted from differential galaxy ages, gamma ray bursts, and the baryon acoustic oscillations data. We go beyond past results in the literature extending the series expansion up to the fourth order in the scale factor, which implies the analysis of the deceleration q0, the jerk j0, and the snap s0. We use the Markov chain Monte Carlo method (MCMC) to analyze the data statistically. We also try to relate direct results from cosmography to dark energy (DE) dynamical models parametrized by the Chevallier-Polarski-Linder model, extracting clues about the matter content and the dark energy parameters. The main results are: (a) even if relying on a mathematical approximate assumption such as the scale factor series expansion in terms of time, cosmography can be extremely useful in assessing dynamical properties of the Universe; (b) the deceleration parameter clearly confirms the present acceleration phase; (c) the MCMC method can help giving narrower constraints in parameter estimation, in particular for higher order cosmographic parameters (the jerk and the snap), with respect to the literature; and (d) both the estimation of the jerk and the DE parameters reflect the possibility of a deviation from the ΛCDM cosmological model.

  2. Acoustic emission monitoring and critical failure identification of bridge cable damage

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Ou, Jinping

    2008-03-01

    Acoustic emission (AE) characteristic parameters of bridge cable damage were obtained in tensile tests. The test results show that the AE parameter analysis method, based on correlation figures of count, energy, duration, amplitude, and time, can describe the whole damage course and can correctly distinguish the signals of broken wires from those of unbroken wires. The AE characteristics of the bridge cable are not pronounced before yield deformation; they increase after yielding and reach a maximum at the moment of wire breakage. Finally, the damage evolution law of the bridge cable is studied by applying fractal theory to the AE characteristic parameter time series. In the initial and middle stages of loading, the AE fractal value of the bridge cable is unsteady. The fractal value reaches its minimum at the critical point of failure. According to this law, an approach to the dynamic assessment and estimation of damage degree is proposed.

  3. Fractality of profit landscapes and validation of time series models for stock prices

    NASA Astrophysics Data System (ADS)

    Yi, Il Gu; Oh, Gabjin; Kim, Beom Jun

    2013-08-01

    We apply a simple trading strategy to various time series of real and artificial stock prices to understand the origin of the fractality observed in the resulting profit landscapes. The strategy contains only two parameters p and q, and the sell (buy) decision is made when the log return is larger (smaller) than p (-q). We discretize the unit square (p,q) ∈ [0,1] × [0,1] into an N × N grid and calculate the profit Π(p,q) at the center of each cell. We confirm the previous finding that local maxima in profit landscapes are scattered in a fractal-like fashion: the number M of local maxima follows the power-law form M ~ N^a, but the scaling exponent a is found to differ for different time series. From comparisons of real and artificial stock prices, we find that the fat-tailed return distribution is closely related to the exponent a ≈ 1.6 observed for real stock markets. We suggest that the fractality of the profit landscape characterized by a ≈ 1.6 can be a useful measure to validate time series models for stock prices.
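    A sketch of the two-parameter strategy and landscape scan described above: sell when the log return since the reference price exceeds p, buy back when it falls below -q, then count local maxima of profit on the (p,q) grid. The threshold interpretation and grid range are simplifying assumptions.

    ```python
    import numpy as np

    def profit(logp, p, q):
        """Final wealth of the threshold strategy on a log-price series."""
        cash, held, ref = 1.0, True, logp[0]   # start invested at the first price
        for lp in logp[1:]:
            r = lp - ref
            if held and r > p:                 # log return exceeds p: sell
                cash *= np.exp(r)
                held, ref = False, lp
            elif not held and r < -q:          # log return below -q: buy back
                held, ref = True, lp
        return cash*np.exp(logp[-1] - ref) if held else cash

    rng = np.random.default_rng(9)
    logp = np.cumsum(0.01*rng.standard_normal(3000))    # artificial log price
    N = 30
    vals = (np.arange(N) + 0.5)/N*0.2                   # cell-center p, q values
    grid = np.array([[profit(logp, p, q) for q in vals] for p in vals])
    inner = grid[1:-1, 1:-1]                            # count 4-neighbour maxima
    is_max = ((inner > grid[:-2, 1:-1]) & (inner > grid[2:, 1:-1]) &
              (inner > grid[1:-1, :-2]) & (inner > grid[1:-1, 2:]))
    print("number of local maxima M on the", N, "x", N, "grid:", is_max.sum())
    ```

    Repeating the count for several N and fitting log M against log N would give the scaling exponent a that the paper uses as a validation measure.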

  4. Investigation of prospects for forecasting non-linear time series by example of drilling oil and gas wells

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Sizonenko, A. B.; Zhdanov, A. A.

    2018-05-01

    Discrete time series or mappings are proposed for describing the dynamics of a nonlinear system. The article considers the problem of forecasting the dynamics of a system from the time series it generates. In particular, the commercial rate of drilling oil and gas wells can be considered as a series in which each next value depends on the previous one; the main parameter here is the technical drilling speed. To eliminate measurement error and to represent the commercial speed of the object with good accuracy at the current, a future, or any elapsed time point, the use of the Kalman filter is suggested. For the transition from a deterministic model to a probabilistic one, ensemble modeling is suggested. Ensemble systems can provide a wide range of visual output, which helps the user to evaluate the measure of confidence in the model. In particular, the availability of information on the estimated calendar duration of the construction of oil and gas wells will allow drilling companies to optimize production planning by rationalizing the approach to loading drilling rigs, which ultimately leads to maximization of profit and an increase in their competitiveness.

  5. Dual-Pol X-Band Pol-InSAR Time Series of a Greenland Outlet Glacier

    NASA Astrophysics Data System (ADS)

    Fischer, Georg; Hajnsek, Irena

    2015-04-01

    This study investigates X-band (TanDEM-X) polarimetric and interferometric SAR (Pol-InSAR) data in order to retrieve information about the temporal and spatial variations of surface and subsurface parameters of the Helheim Glacier in south east Greenland. In particular, it will be indicated that the copolar phase difference between HH and VV could be a suitable proxy for snow accumulation, when Pol-InSAR techniques are used to assess the underlying scattering mechanism. By applying a two-phase mixing formula, this approach shows potential to reveal the temporal and spatial snow accumulation patterns in time series of TanDEM-X data.

  6. A comparative simulation study of AR(1) estimators in short time series.

    PubMed

    Krone, Tanja; Albers, Casper J; Timmerman, Marieke E

    2017-01-01

    Various estimators of the autoregressive model exist. We compare their performance in estimating the autocorrelation in short time series. In Study 1, under correct model specification, we compare the frequentist r1 estimator, the C-statistic, the ordinary least squares estimator (OLS), and the maximum likelihood estimator (MLE), and a Bayesian method, considering flat (Bf) and symmetrized reference (Bsr) priors. In a completely crossed experimental design we vary the length of the time series (T = 10, 25, 40, 50, and 100) and the autocorrelation (from -0.90 to 0.90 in steps of 0.10). The results show the lowest bias for Bsr and the lowest variability for r1. The power in different conditions is highest for Bsr and OLS. For T = 10, the absolute performance of all methods is poor, as expected. In Study 2, we study the robustness of the methods to misspecification by generating the data according to an ARMA(1,1) model but still analysing the data with an AR(1) model. We use the two methods with the lowest bias for this study, i.e., Bsr and MLE. The bias gets larger as the non-modelled moving average parameter becomes larger. Both the variability and the power depend on the non-modelled parameter. The differences between the two estimation methods are negligible for all measurements.
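    A sketch of two of the frequentist estimators compared above, applied to simulated short AR(1) series: the lag-1 sample autocorrelation r1 and the OLS slope of x_t regressed on x_{t-1}. The simulation settings mirror one cell of the study's design.

    ```python
    import numpy as np

    def simulate_ar1(T, phi, rng):
        x = np.zeros(T)
        for t in range(1, T):
            x[t] = phi*x[t-1] + rng.standard_normal()
        return x

    def r1(x):
        """Lag-1 sample autocorrelation."""
        xc = x - x.mean()
        return (xc[:-1]*xc[1:]).sum()/(xc**2).sum()

    def ols(x):
        """OLS slope of x_t on x_{t-1}."""
        return np.polyfit(x[:-1], x[1:], 1)[0]

    rng = np.random.default_rng(10)
    sims = [simulate_ar1(25, 0.5, rng) for _ in range(5000)]
    est = np.array([(r1(s), ols(s)) for s in sims])
    print("mean r1 and OLS estimates for T=25, phi=0.5:",
          est.mean(axis=0).round(3))
    # both show the well-known downward bias of autocorrelation
    # estimates in short series (E[r1] is roughly phi - (1 + 3*phi)/T)
    ```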

  7. Uncovering the genetic signature of quantitative trait evolution with replicated time series data.

    PubMed

    Franssen, S U; Kofler, R; Schlötterer, C

    2017-01-01

    The genetic architecture of adaptation in natural populations has not yet been resolved: it is not clear to what extent the spread of beneficial mutations (selective sweeps) or the response of many quantitative trait loci drives adaptation to environmental changes. Although much attention has been given to the genomic footprint of selective sweeps, the importance of selection on quantitative traits is still not well studied, as the associated genomic signature is extremely difficult to detect. We propose 'Evolve and Resequence' as a promising tool to study polygenic adaptation of quantitative traits in evolving populations. Simulating replicated time series data, we show that adaptation to a new intermediate trait optimum has three characteristic phases that are reflected on the genomic level: (1) directional frequency changes towards the new trait optimum, (2) plateauing of allele frequencies when the new trait optimum has been reached, and (3) subsequent divergence between replicated trajectories, ultimately leading to the loss or fixation of alleles while the trait value does not change. We explore the characteristics of these three phases for relevant population genetic parameters to provide expectations for various experimental evolution designs. Remarkably, over a broad range of parameters the trajectories of selected alleles display a pattern across replicates which differs both from neutrality and from directional selection. We conclude that replicated time series data from experimental evolution studies provide a promising framework to study polygenic adaptation from whole-genome population genetics data.

  8. Precipitation extremes on multiple timescales - Bartlett-Lewis rectangular pulse model and intensity-duration-frequency curves

    NASA Astrophysics Data System (ADS)

    Ritschel, Christoph; Ulbrich, Uwe; Névir, Peter; Rust, Henning W.

    2017-12-01

    For several hydrological modelling tasks, precipitation time series with a high (i.e. sub-daily) resolution are indispensable. Such data are, however, not always available, and model simulations are used to compensate. A canonical class of stochastic models for sub-daily precipitation is the family of Poisson cluster processes, with the original Bartlett-Lewis (OBL) model as a prominent representative. The OBL model has been shown to reproduce well certain characteristics found in observations. Our focus is on intensity-duration-frequency (IDF) relationships, which are of particular interest in risk assessment. Based on a high-resolution precipitation time series (5 min) from Berlin-Dahlem, OBL model parameters are estimated and IDF curves are obtained on the one hand directly from the observations and on the other hand from OBL model simulations. Comparing the resulting IDF curves suggests that the OBL model is able to reproduce the main features of IDF statistics across several durations but cannot capture rare events (here an event with a return period longer than 1000 years on the hourly timescale). In this paper, IDF curves are estimated based on a parametric model for the duration dependence of the scale parameter in the generalized extreme value distribution; this allows us to obtain a consistent set of curves over all durations. We use the OBL model to investigate the validity of this approach based on simulated long time series.
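    A sketch of the per-duration building block behind IDF curves: annual maxima of mean intensity are computed for several durations, a GEV is fitted per duration (scipy.stats.genextreme), and return levels give one IDF point per (duration, return period). The paper's consistent model instead ties the GEV scale parameter to duration parametrically; the rainfall here is synthetic.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(11)
    # 20 years of 5-min totals (365*24*12 = 105120 steps per year)
    rain = rng.gamma(0.08, 2.0, size=(20, 105120))

    for d in (1, 6, 12, 24):                    # duration in hours
        win = int(d*12)                         # 5-min steps per window
        n = rain.shape[1]//win
        intensity = rain[:, :n*win].reshape(20, n, win).mean(axis=2)
        ann_max = intensity.max(axis=1)         # annual maximum intensity
        c, loc, scale = genextreme.fit(ann_max)
        level = genextreme.ppf(0.99, c, loc, scale)   # 100-yr return level
        print(f"{d:2d} h duration, 100-yr intensity: {level:.3f}")
    ```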

  9. The risk characteristics of solar and geomagnetic activity

    NASA Astrophysics Data System (ADS)

    Podolska, Katerina

    2016-04-01

    The main aim of this contribution is a deeper analysis of the influence of solar activity, which is expected to have an impact on human health and therefore on mortality, in particular from civilization and degenerative diseases. We have constructed characteristics that represent the risk of solar and geomagnetic activity to human health, on the basis of our previous analysis of the association between the daily numbers of deaths from diseases of the nervous system and diseases of the circulatory system and solar and geomagnetic activity in the Czech Republic during the years 1994-2013. We used long-period daily time series of numbers of deaths by cause, long-period time series of solar activity indices (namely R and F10.7), geomagnetic indices (the Kp planetary index and Dst), and ionospheric parameters (foF2 and TEC). The ionospheric parameters were related to the geographic location of the Czech Republic and adjusted for middle geographic latitudes. The risk characteristics were composed by cluster analysis of the time series according to the phases of the solar cycle, the seasonal insolation at mid-latitudes, or the daily period, according to the impact of solar and geomagnetic activity on mortality from cause groups VI (Diseases of the nervous system) and IX (Diseases of the circulatory system) of the 10th Revision of the WHO International Classification of Diseases (ICD-10).

  10. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, Fancois; Chen, Jie

    2013-04-01

    Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, due to its simplicity and good performance. However, various probability distributions have been used to simulate precipitation amounts, and spatiotemporal differences exist in the applicability of different distribution models. Assessing the applicability of different distribution models is therefore necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto) are evaluated, directly and indirectly, on their ability to reproduce the observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (the Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution, and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the others, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is then not nearly as obvious, the mixed exponential distribution appears nonetheless to be the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.

  11. Biological Indicators in Studies of Earthquake Precursors

    NASA Astrophysics Data System (ADS)

    Sidorin, A. Ya.; Deshcherevskii, A. V.

    2012-04-01

    Time series of data on variations in the electric activity (EA) of four specimens of the weakly electric fish Gnathonemus leopoldianus and the moving activity (MA) of two catfishes Hoplosternum thoracatum and two groups of Columbian cockroaches Blaberus craniifer were analyzed. The observations were carried out in the Garm region of Tajikistan within the framework of experiments aimed at searching for earthquake precursors. An automatic recording system continuously recorded EA and MA over a period of several years, and hourly mean EA and MA values were processed. Approximately 100 different parameters were calculated on the basis of the six initial EA and MA time series, characterizing different aspects of the EA and MA structure: the amplitude of the signal and of activity fluctuations, parameters of diurnal rhythms, correlated changes in the activity of different biological indicators, and others. A detailed analysis of the statistical structure of the total array of parametric time series obtained in the experiment showed that the behavior of all the animals exhibits strong temporal variability; all calculated parameters are unstable and subject to frequent changes. A comparison of the data obtained with seismicity allows us to make the following conclusions: (1) The structure of variations in the studied parameters is represented by flicker noise, or even a more complex process with permanently changing characteristics. Significant statistics are required to prove a cause-and-effect relationship between the specific features of such time series and seismicity. (2) The calculation of the statistics of reconstructions in the EA and MA series structure demonstrated an increase in their frequency in the last hours or few days before an earthquake, if the hypocentral distance is comparable to the source size. Sufficiently dramatic anomalies in the behavior of the catfishes and cockroaches (changes in the amplitude of activity variations, distortions of diurnal rhythms, increased mismatch of coordination between the activity dynamics of one type of biological indicator) were observed in one case before the November 12, 1987, event at a hypocentral distance of 8 km from the observation point (i.e., the animals were located within the source zone). (3) Changes observed before the earthquakes do not have any specific features and correspond quite well to the variations permanently observed without any relation to earthquakes. (4) The activity of individual specimens has specific features, which hampers the practical application of biological monitoring. (5) The conclusions made here should not be considered absolute or extrapolated to all cases of observation of animal behavior, because the animals were kept under experimental (laboratory) conditions and could have been screened from the influence of stimuli of some modalities.

  12. Decadal trends in Indian Ocean ambient sound.

    PubMed

    Miksis-Olds, Jennifer L; Bradley, David L; Niu, Xiaoyue Maggie

    2013-11-01

    The increase of ocean noise documented in the North Pacific has sparked concern on whether the observed increases are a global or regional phenomenon. This work provides evidence of low frequency sound increases in the Indian Ocean. A decade (2002-2012) of recordings made off the island of Diego Garcia, UK in the Indian Ocean was parsed into time series according to frequency band and sound level. Quarterly sound level comparisons between the first and last years were also performed. The combination of time series and temporal comparison analyses over multiple measurement parameters produced results beyond those obtainable from a single parameter analysis. The ocean sound floor has increased over the past decade in the Indian Ocean. Increases were most prominent in recordings made south of Diego Garcia in the 85-105 Hz band. The highest sound level trends differed between the two sides of the island; the highest sound levels decreased in the north and increased in the south. Rate, direction, and magnitude of changes among the multiple parameters supported interpretation of source functions driving the trends. The observed sound floor increases are consistent with concurrent increases in shipping, wind speed, wave height, and blue whale abundance in the Indian Ocean.

  13. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

    NASA Astrophysics Data System (ADS)

    Lychak, Oleh V.; Holyns'kiy, Ivan S.

    2016-03-01

The use of the Williams' series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is to develop a method for estimating the standard deviation of the random errors of the Williams' series parameters obtained from measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams' series, such that the parameters are derived with minimal errors, is also proposed. The method was used for the evaluation of the Williams' parameters obtained from data measured by the digital image correlation technique while testing a three-point bending specimen.

  14. Fiber laser platform for highest flexibility and reliability in industrial femtosecond micromachining: TruMicro Series 2000

    NASA Astrophysics Data System (ADS)

    Jansen, Florian; Kanal, Florian; Kahmann, Max; Tan, Chuong; Diekamp, Holger; Scelle, Raphael; Budnicki, Aleksander; Sutter, Dirk

    2018-02-01

In this work we present an ultrafast laser system distinguished by its industry-ready reliability and an outstanding flexibility that allows for real-time, process-inherent parameter adjustment. The robust system design and linear amplifier architecture make the all-fiber TruMicro Series 2000 ideally suited for passive coupling to hollow-core delivery fibers. In addition to details on the laser system itself, various application examples are shown, including welding of different glasses and ablation of silicon carbide and silicon.

  15. Analysis on Difference of Forest Phenology Extracted from EVI and LAI Based on PhenoCams

    NASA Astrophysics Data System (ADS)

    Wang, C.; Jing, L.; Qinhuo, L.

    2017-12-01

Land surface phenology can make up for the deficiencies of field observation, with the advantage of capturing the continuous expression of phenology on a large scale. However, there is some variability in phenological metrics derived from different satellite time series of vegetation parameters. This paper aims to assess the difference in phenology information extracted from EVI and LAI time series. To achieve this, several web-camera (PhenoCam) sites were selected to analyze the characteristics of MODIS-EVI and MODIS-LAI time series from 2010 to 2014 for different forest types, including evergreen coniferous forest, evergreen broadleaf forest, deciduous coniferous forest and deciduous broadleaf forest. At the same time, satellite-based phenological metrics were extracted by a logistic algorithm and compared with camera-based phenological metrics. Results show that the SOS and EOS extracted from LAI are close to bud burst and leaf defoliation, respectively, while the SOS and EOS extracted from EVI are close to leaf unfolding and leaf coloring, respectively. Thus the SOS extracted from LAI is earlier than that from EVI, while the EOS extracted from LAI is later than that from EVI at deciduous forest sites. Although the seasonal variation characteristics of evergreen forests are not apparent, significant discrepancies exist between the LAI and EVI time series. In addition, satellite- and camera-based phenological metrics generally agree well, but EVI has a higher correlation than LAI with the camera-based canopy greenness (green chromatic coordinate, gcc).

  16. Observation of daily and annual variations in the 214Po half-life

    NASA Astrophysics Data System (ADS)

    Alexeev, E. N.; Gavrilyuk, Yu. M.; Gangapshev, A. M.; Gezhaev, A. M.; Kazalov, V. V.; Kuzminov, V. V.; Panasenko, S. I.; Ratkevich, S. S.

    2017-11-01

Results of the analysis of a time series of values of the half-life (τ) of the 214Po nucleus, obtained with different time steps from the TAU-1 (354 days) and TAU-2 (973 days) installations, are presented. An annual variation with an amplitude of (9.8 ± 0.6) × 10⁻⁴ and daily variations in solar, lunar, and sidereal time with amplitudes of (5.3 ± 0.3) × 10⁻⁴, (6.9 ± 2.0) × 10⁻⁴, and (7.2 ± 1.2) × 10⁻⁴, respectively, are found in the series of τ values. It is shown that variations in microclimatic parameters cannot be a cause of the τ variations.

  17. Transient deformation from daily GPS displacement time series: postseismic deformation, ETS and evolving strain rates

    NASA Astrophysics Data System (ADS)

    Bock, Y.; Fang, P.; Moore, A. W.; Kedar, S.; Liu, Z.; Owen, S. E.; Glasscoe, M. T.

    2016-12-01

    Detection of time-dependent crustal deformation relies on the availability of accurate surface displacements, proper time series analysis to correct for secular motion, coseismic and non-tectonic instrument offsets, periodic signatures at different frequencies, and a realistic estimate of uncertainties for the parameters of interest. As part of the NASA Solid Earth Science ESDR System (SESES) project, daily displacement time series are estimated for about 2500 stations, focused on tectonic plate boundaries and having a global distribution for accessing the terrestrial reference frame. The "combined" time series are optimally estimated from independent JPL GIPSY and SIO GAMIT solutions, using a consistent set of input epoch-date coordinates and metadata. The longest time series began in 1992; more than 30% of the stations have experienced one or more of 35 major earthquakes with significant postseismic deformation. Here we present three examples of time-dependent deformation that have been detected in the SESES displacement time series. (1) Postseismic deformation is a fundamental time-dependent signal that indicates a viscoelastic response of the crust/mantle lithosphere, afterslip, or poroelastic effects at different spatial and temporal scales. It is critical to identify and estimate the extent of postseismic deformation in both space and time not only for insight into the crustal deformation and earthquake cycles and their underlying physical processes, but also to reveal other time-dependent signals. We report on our database of characterized postseismic motions using a principal component analysis to isolate different postseismic processes. (2) Starting with the SESES combined time series and applying a time-dependent Kalman filter, we examine episodic tremor and slow slip (ETS) in the Cascadia subduction zone. We report on subtle slip details, allowing investigation of the spatiotemporal relationship between slow slip transients and tremor and their underlying physical mechanisms. (3) We present evolving strain dilatation and shear rates based on the SESES velocities for regional subnetworks as a metric for assigning earthquake probabilities and detection of possible time-dependent deformation related to underlying physical processes.
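
    As a concrete illustration of the trajectory-model fitting that such time series analysis involves, the sketch below builds a synthetic daily displacement series (secular rate, annual cycle, coseismic step, logarithmic postseismic transient) and recovers the parameters in a single least squares step. It is a generic sketch, not the SESES processing chain; the event epoch, decay time, and coefficients are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(15)
    t = np.arange(0.0, 10.0, 1 / 365.25)   # 10 years of daily solutions, in years
    t_eq = 4.2                             # epoch of a coseismic offset (assumed)

    # Synthetic displacement: secular rate + annual cycle + coseismic step +
    # logarithmic postseismic transient + noise (all coefficients illustrative)
    dt_eq = np.clip(t - t_eq, 0.0, None)   # time since the event (0 before it)
    step = (t >= t_eq).astype(float)
    d = (5.0 * t + 3.0 * np.sin(2 * np.pi * t) + 2.0 * np.cos(2 * np.pi * t)
         + 20.0 * step + 8.0 * np.log(1 + dt_eq / 0.1)
         + rng.normal(0.0, 1.0, t.size))

    # Trajectory model, linear in its parameters (the transient decay time is
    # held fixed), solved in a single least squares step
    H = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                         step, np.log(1 + dt_eq / 0.1)])
    m, *_ = np.linalg.lstsq(H, d, rcond=None)
    print(m.round(2))   # should recover ~[0, 5, 3, 2, 20, 8]
    ```

    Estimating the transient decay time as well would make the problem nonlinear, which is where time-dependent Kalman-filter approaches such as the one applied to the ETS analysis come in.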

  18. Green-wave control of an unbalanced two-route traffic system with signals

    NASA Astrophysics Data System (ADS)

    Tobita, Kazuhiro; Nagatani, Takashi

    2013-11-01

We introduce a preference parameter into the two-route dynamic model proposed by Wahle et al. The parameter represents the driver's preference in the route choice. When drivers prefer one route, the traffic flow on route A does not balance with that on route B. We study signal control for the unbalanced two-route traffic flow under the tour-time feedback strategy, in which vehicles move ahead through a series of signals. The traffic signals are controlled by both the cycle time and the phase shift (offset time). We find that the mean tour time can be balanced by a suitable choice of the offset time. We derive the relationship between the mean tour time and the offset time (phase shift). The dependences of the mean density and mean current on the offset time are also derived.

  19. Detecting and modelling delayed density-dependence in abundance time series of a small mammal (Didelphis aurita)

    NASA Astrophysics Data System (ADS)

    Brigatti, E.; Vieira, M. V.; Kajin, M.; Almeida, P. J. A. L.; de Menezes, M. A.; Cerqueira, R.

    2016-02-01

We study the population size time series of a Neotropical small mammal with the intent of detecting and modelling population regulation processes generated by density-dependent factors and their possible delayed effects. The application of analysis tools based on principles of statistical generality is nowadays common practice for describing these phenomena, but in general such tools are better at producing a clear diagnosis than at supporting valuable modelling. For this reason, in our approach we detect the principal temporal structures on the basis of different correlation measures, and from these results we build an ad hoc minimalist autoregressive model that incorporates the main drivers of the dynamics. Surprisingly, our model reproduces the time patterns of the empirical series very well and, for the first time, clearly outlines the importance of the time of attaining sexual maturity as a central temporal scale for the dynamics of this species. In fact, an important advantage of this analysis scheme is that all the model parameters are directly biologically interpretable and potentially measurable, allowing a consistency check between model outputs and independent measurements.

  20. Monitoring of seismic time-series with advanced parallel computational tools and complex networks

    NASA Astrophysics Data System (ADS)

    Kechaidou, M.; Sirakoulis, G. Ch.; Scordilis, E. M.

    2012-04-01

Earthquakes have been a focus of human and research interest for several centuries due to their catastrophic effects on everyday life; they occur almost all over the world and exhibit behaviour that is unpredictable and hard to model. On the other hand, their monitoring with more or less technologically updated instruments has been almost continuous, and thanks to this, several mathematical models have been presented and proposed to describe possible connections and patterns found in the resulting seismological time series. In Greece, one of the most seismically active territories on Earth, detailed instrumental seismological data have been available since the beginning of the past century, providing researchers with valuable knowledge about seismicity levels all over the country. Using powerful parallel computational tools, such as Cellular Automata, these data can be further analysed and, most importantly, modelled to reveal possible connections between different parameters of the seismic time series under study. More specifically, Cellular Automata have proven very effective for composing and modelling nonlinear complex systems, leading to several corresponding models proposed as possible analogues of earthquake fault dynamics. In this work, preliminary results of modelling seismic time series with the help of Cellular Automata, so as to compose and develop the corresponding complex networks, are presented. The proposed methodology aims to reveal hidden relations in the examined time series and to distinguish their intrinsic characteristics, in an effort to transform the examined time series into complex networks and graphically represent their evolution in time-space. Consequently, based on the presented results, the proposed model could eventually serve as an efficient and flexible computational tool providing a generic understanding of the possible triggering mechanisms, as derived from adequate monitoring and modelling of regional earthquake phenomena.

  1. Nonlinear ARMA models for the D(st) index and their physical interpretation

    NASA Technical Reports Server (NTRS)

    Vassiliadis, D.; Klimas, A. J.; Baker, D. N.

    1996-01-01

Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear autoregressive moving average (ARMA) model, into a physical model, the nonlinear damped oscillator. The oscillator parameters (the growth and decay rates, the oscillation frequencies, and the coupling strength to the input) are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated with the model parameters derived as a function of the phase space.
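
    To make the filter-to-oscillator conversion tangible, the sketch below simulates a damped oscillator driven by a noisy input, fits the equivalent AR(2)-plus-input filter by least squares, and recovers the damping rate and natural frequency from the filter coefficients. This is a minimal finite-difference version of the idea under an assumed unit time step, not the authors' actual procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt, gamma, omega0, c = 1.0, 0.05, 0.3, 1.0
    n = 5000
    u = rng.standard_normal(n)            # the driver (e.g. solar wind input)
    x = np.zeros(n)
    for t in range(1, n - 1):             # finite-difference damped oscillator
        x[t + 1] = ((2 - 2 * gamma * dt - (omega0 * dt) ** 2) * x[t]
                    + (2 * gamma * dt - 1) * x[t - 1]
                    + c * dt ** 2 * u[t])

    # Least squares fit of the filter x[t+1] = a1*x[t] + a2*x[t-1] + b*u[t]
    X = np.column_stack([x[1:-1], x[:-2], u[1:-1]])
    a1, a2, b = np.linalg.lstsq(X, x[2:], rcond=None)[0]

    # Convert the filter coefficients back to physical oscillator parameters
    gamma_hat = (1 + a2) / (2 * dt)            # damping (decay) rate
    omega0_hat = np.sqrt(1 - a1 - a2) / dt     # natural frequency
    print(round(gamma_hat, 3), round(omega0_hat, 3))   # ~0.05 and ~0.3
    ```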

  2. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.

  3. Temporal rainfall estimation using input data reduction and model inversion

    NASA Astrophysics Data System (ADS)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall in poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow errors allows model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions, and all of these rainfall estimates simulated streamflow better than the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The range and variance of rainfall time series able to simulate streamflow better than a traditional calibration approach demonstrate equifinality. Combining a likelihood function that considers both rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
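
    The dimensionality-reduction step can be sketched with the PyWavelets package: decompose a rainfall series with the multi-level DWT and retain only the approximation coefficients as the reduced set of unknowns that the MCMC inversion would sample. The wavelet (db4), decomposition level, and synthetic series here are assumptions for illustration, not the study's settings.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    rain = rng.gamma(shape=0.3, scale=5.0, size=512)   # synthetic daily rainfall

    # Multi-level DWT: the level-4 approximation coefficients summarize the
    # series with far fewer unknowns than the 512 daily values
    coeffs = pywt.wavedec(rain, 'db4', level=4)
    print(len(coeffs[0]), "approximation coefficients instead of", rain.size)

    # Zero the detail coefficients and invert: a reduced-dimension rainfall
    # estimate of the kind the MCMC inversion would adjust
    reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    rain_approx = pywt.waverec(reduced, 'db4')[:rain.size]
    ```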

  4. Comparison of Clustering Techniques for Residential Energy Behavior using Smart Meter Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Ling; Lee, Doris; Sim, Alex

Current practice in whole time series clustering of residential meter data focuses on aggregated or subsampled load data at the customer level, which ignores day-to-day differences within customers. This information is critical to determine each customer's suitability to various demand side management strategies that support intelligent power grids and smart energy management. Clustering daily load shapes provides fine-grained information on customer attributes and sources of variation for subsequent models and customer segmentation. In this paper, we apply 11 clustering methods to daily residential meter data. We evaluate their parameter settings and suitability based on 6 generic performance metrics and post-checking of resulting clusters. Finally, we recommend suitable techniques and parameters based on the goal of discovering diverse daily load patterns among residential customers. To the authors' knowledge, this paper is the first robust comparative review of clustering techniques applied to daily residential load shape time series in the power systems' literature.
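
    As a small, hypothetical illustration of daily load-shape clustering (k-means is one of the method families such comparisons typically include), the sketch below normalizes each day's profile to unit energy so that clusters reflect shape rather than magnitude:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    hours = np.arange(24)
    # Toy stand-in for a year of hourly smart-meter data: an evening peak whose
    # timing jitters from day to day, plus measurement noise
    days = np.array([np.roll(np.exp(-0.5 * ((hours - 18) / 3.0) ** 2),
                             rng.integers(-2, 3))
                     + 0.1 * rng.standard_normal(24) for _ in range(365)])

    # Normalize each day to unit energy so clusters reflect shape, not magnitude
    shapes = days / days.sum(axis=1, keepdims=True)

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(shapes)
    print(np.bincount(km.labels_))   # number of days assigned to each load shape
    ```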

  5. Phase transition in the parametric natural visibility graph.

    PubMed

    Snarskii, A A; Bezsudnov, I V

    2016-10-01

We investigate time series by mapping them to complex networks using the parametric natural visibility graph (PNVG) algorithm, which generates graphs depending on an arbitrary continuous parameter, the angle of view. We study the behavior of the relative number of clusters in the PNVG near the critical value of the angle of view. Artificial and experimental time series of different nature are used in numerical PNVG investigations to find the critical exponents above and below the critical point, as well as the exponent in the finite-size scaling regime. Together, these allow us to find the critical exponent of the correlation length for the PNVG. The set of calculated critical exponents satisfies the basic Widom relation, and the PNVG is found to demonstrate scaling behavior. Our results reveal the similarity between the behavior of the relative number of clusters in the PNVG and the order parameter in the theory of second-order phase transitions. We show that the PNVG is another example of a system (in addition to magnetic, percolation, superconducting, etc.) exhibiting a second-order phase transition.
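
    A minimal sketch of the construction is given below: the standard natural visibility graph links two samples when no intermediate sample blocks the straight line between them, and a parametric variant then filters edges by an angle threshold. The exact angle-of-view convention of the PNVG paper may differ from the elevation angle used here, which is an assumption for illustration.

    ```python
    import numpy as np

    def visibility_edges(y):
        """Standard natural visibility graph: samples a and b are linked if
        every intermediate sample lies below the straight line joining them."""
        n = len(y)
        edges = []
        for a in range(n - 1):
            for b in range(a + 1, n):
                t = np.arange(a + 1, b)
                line = y[a] + (y[b] - y[a]) * (t - a) / (b - a)
                if np.all(y[t] < line):
                    edges.append((a, b))
        return edges

    def pnvg(y, alpha):
        """Parametric NVG (sketch): keep an NVG edge only if its elevation
        angle exceeds the 'angle of view' alpha."""
        return [(a, b) for a, b in visibility_edges(y)
                if np.arctan2(y[b] - y[a], b - a) >= alpha]

    rng = np.random.default_rng(3)
    y = rng.random(200)
    for alpha in (-np.pi / 2, 0.0, np.pi / 4):
        print(alpha, len(pnvg(y, alpha)))   # edge count falls as alpha grows
    ```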

  6. Cardiorespiratory and cardiovascular interactions in cardiomyopathy patients using joint symbolic dynamic analysis.

    PubMed

    Giraldo, Beatriz F; Rodriguez, Javier; Caminal, Pere; Bayes-Genis, Antonio; Voss, Andreas

    2015-01-01

Cardiovascular diseases are the leading cause of death in developed countries. Using electrocardiographic (ECG), blood pressure (BP) and respiratory flow signals, we obtained parameters for classifying cardiomyopathy patients. A total of 42 patients with ischemic (ICM) and dilated (DCM) cardiomyopathies were studied. The left ventricular ejection fraction (LVEF) was used to stratify patients into low risk (LR: LVEF > 35%, 14 patients) and high risk (HR: LVEF ≤ 35%, 28 patients) of heart attack. RR, SBP and TTot time series were extracted from the ECG, BP and respiratory flow signals, respectively. The time series were transformed to a binary space and then analyzed using joint symbolic dynamics with a word length of three, characterizing them by the probability of occurrence of each word. The extracted parameters were then reduced using correlation and statistical analysis. Principal component analysis and support vector machine methods were applied to characterize the cardiorespiratory and cardiovascular interactions in ICM and DCM cardiomyopathies, obtaining an accuracy of 85.7%.
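
    The symbolization step can be sketched as follows: each series is reduced to a binary symbol sequence (increase vs. decrease), words of length three are formed, and the joint word-occurrence probabilities of two series form an 8 x 8 matrix from which classification features are drawn. The data and coupling below are synthetic stand-ins, not patient signals, and the binary rule is an assumed simple variant.

    ```python
    import numpy as np

    def symbolize(x):
        """Binary symbolization: 1 where the series increases, else 0."""
        return (np.diff(x) > 0).astype(int)

    def joint_word_probs(sa, sb, wlen=3):
        """Joint probabilities of simultaneous words of length wlen from two
        binary symbol sequences (an 8 x 8 matrix for wlen = 3)."""
        n = min(len(sa), len(sb)) - wlen + 1
        p = np.zeros((2 ** wlen, 2 ** wlen))
        for i in range(n):
            wa = int("".join(map(str, sa[i:i + wlen])), 2)
            wb = int("".join(map(str, sb[i:i + wlen])), 2)
            p[wa, wb] += 1
        return p / n

    rng = np.random.default_rng(7)
    rr = np.cumsum(rng.standard_normal(600))    # stand-in for an RR-interval series
    sbp = 0.5 * rr + rng.standard_normal(600)   # partially coupled "SBP" series
    p = joint_word_probs(symbolize(rr), symbolize(sbp))
    print(p.shape, round(p.sum(), 6))           # (8, 8); probabilities sum to 1
    ```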

  7. Generalised Pareto distribution: impact of rounding on parameter estimation

    NASA Astrophysics Data System (ADS)

    Pasarić, Z.; Cindrić, K.

    2018-05-01

Problems that occur when common methods (e.g. maximum likelihood and L-moments) for fitting a generalised Pareto (GP) distribution are applied to discrete (rounded) data sets are revealed by analysing real dry spell duration series. The analysis is subsequently performed on generalised Pareto time series obtained by systematic Monte Carlo (MC) simulations. The solution depends on the following: (1) the actual amount of rounding, as determined by the actual data range (measured by the scale parameter, σ) vs. the rounding increment (Δx); combined with (2) applying a certain (sufficiently high) threshold and considering the series of excesses instead of the original series. For a moderate amount of rounding (e.g. σ/Δx ≥ 4), which is commonly met in practice (at least regarding the dry spell data), and where no threshold is applied, the classical methods work reasonably well. If cutting at a threshold is applied to rounded data (which is actually essential when dealing with a GP distribution), then classical methods applied in the standard way can lead to erroneous estimates, even if the rounding itself is moderate. In this case, it is necessary to adjust the theoretical location parameter for the series of excesses. The other solution is to add appropriate uniform noise to the rounded data (so-called "jittering"). This, in a sense, reverses the process of rounding; thereafter, it is straightforward to apply the common methods. Finally, if the rounding is too coarse (e.g. σ/Δx ≤ 1), then none of the above recipes works, and specific methods for rounded data should be applied.
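
    The jittering recipe is straightforward to reproduce with scipy: round a GP sample, take excesses over a threshold, add uniform noise of width Δx, and compare maximum-likelihood fits. The parameter values below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(4)
    shape_true, sigma = 0.2, 4.0
    x = genpareto.rvs(shape_true, scale=sigma, size=5000, random_state=rng)

    dx = 1.0                                  # rounding increment: sigma/dx = 4
    x_round = np.round(x / dx) * dx           # discretized ("rounded") sample

    u = 2.0                                   # threshold; work with excesses
    exc = x_round[x_round > u] - u
    exc_jit = exc + rng.uniform(-dx / 2, dx / 2, size=exc.size)  # jittering

    # Maximum-likelihood fits (location fixed at 0 for the excess series)
    print(genpareto.fit(exc, floc=0.0))       # naive fit on rounded excesses
    print(genpareto.fit(exc_jit, floc=0.0))   # fit after de-rounding by jitter
    ```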

  8. Principal components and iterative regression analysis of geophysical series: Application to Sunspot number (1750-2004)

    NASA Astrophysics Data System (ADS)

    Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.

    2008-11-01

We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the analyzed series, in decreasing order of significance: from the most important ones, likely to represent the physical processes involved in the generation of the series, down to the less important ones that represent noise components. Taking into account the need for deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
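
    A greedy variant of such an iterative sine regression is easy to sketch: repeatedly fit a single sine (seeded by the residual's FFT peak) and subtract it. This is a simplified stand-in for the principal-components-plus-regression procedure described above, with synthetic data loosely mimicking an 11-year and a longer cycle.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sine(t, a, f, ph):
        return a * np.sin(2 * np.pi * f * t + ph)

    def iterative_sine_fit(t, y, n_components):
        """Greedy least squares extraction of sine components: at each step,
        fit one sine to the residual and subtract it."""
        resid = y - y.mean()
        comps = []
        for _ in range(n_components):
            # crude frequency guess from the residual's FFT peak (skip DC)
            freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
            f0 = freqs[np.argmax(np.abs(np.fft.rfft(resid))[1:]) + 1]
            p, _ = curve_fit(sine, t, resid, p0=[resid.std(), f0, 0.0])
            comps.append(p)
            resid = resid - sine(t, *p)
        return comps, resid

    t = np.arange(0.0, 255.0)   # yearly samples, e.g. 1750-2004
    y = (50 * np.sin(2 * np.pi * t / 11 + 0.3)
         + 20 * np.sin(2 * np.pi * t / 80)
         + 5 * np.random.default_rng(14).standard_normal(t.size))
    comps, resid = iterative_sine_fit(t, y, 2)
    print([f"period {1 / p[1]:.1f} yr, amplitude {abs(p[0]):.1f}" for p in comps])
    ```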

  9. Computation of type curves for flow to partially penetrating wells in water-table aquifers

    USGS Publications Warehouse

    Moench, Allen F.

    1993-01-01

    Evaluation of Neuman's analytical solution for flow to a well in a homogeneous, anisotropic, water-table aquifer commonly requires large amounts of computation time and can produce inaccurate results for selected combinations of parameters. Large computation times occur because the integrand of a semi-infinite integral involves the summation of an infinite series. Each term of the series requires evaluation of the roots of equations, and the series itself is sometimes slowly convergent. Inaccuracies can result from lack of computer precision or from the use of improper methods of numerical integration. In this paper it is proposed to use a method of numerical inversion of the Laplace transform solution, provided by Neuman, to overcome these difficulties. The solution in Laplace space is simpler in form than the real-time solution; that is, the integrand of the semi-infinite integral does not involve an infinite series or the need to evaluate roots of equations. Because the integrand is evaluated rapidly, advanced methods of numerical integration can be used to improve accuracy with an overall reduction in computation time. The proposed method of computing type curves, for which a partially documented computer program (WTAQ1) was written, was found to reduce computation time by factors of 2 to 20 over the time needed to evaluate the closed-form, real-time solution.
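
    Numerical inversion of a Laplace-space solution of this kind is commonly done with the Stehfest algorithm (which, to our understanding, is the method behind WTAQ-type programs; treat that attribution as an assumption). A compact sketch, checked against a transform pair with a known inverse:

    ```python
    import math
    import numpy as np

    def stehfest_weights(N=12):
        """Stehfest coefficients V_k for an even number of terms N."""
        V = np.zeros(N)
        for k in range(1, N + 1):
            s = 0.0
            for j in range((k + 1) // 2, min(k, N // 2) + 1):
                s += (j ** (N // 2) * math.factorial(2 * j)
                      / (math.factorial(N // 2 - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
            V[k - 1] = (-1) ** (k + N // 2) * s
        return V

    def invert(F, t, N=12):
        """Approximate f(t) from its Laplace transform F(s) by Stehfest."""
        V = stehfest_weights(N)
        terms = [V[k - 1] * F(k * math.log(2) / t) for k in range(1, N + 1)]
        return math.log(2) / t * sum(terms)

    # Sanity check on a pair with a known inverse: 1/(s + 1) <-> exp(-t)
    for t in (0.5, 1.0, 2.0):
        print(t, invert(lambda s: 1 / (s + 1), t), math.exp(-t))
    ```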

  10. WE-FG-206-05: New Arterial Spin Labeling Method for Simultaneous Estimation of Arterial Cerebral Blood Volume, Cerebral Blood Flow and Arterial Transit Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, M; Whitlow, C; Jung, Y

Purpose: To demonstrate the feasibility of a novel Arterial Spin Labeling (ASL) method for simultaneously measuring cerebral blood flow (CBF), arterial transit time (ATT), and arterial cerebral blood volume (aCBV) without the use of a contrast agent. Methods: A series of multi-TI ASL images were acquired from one healthy subject on a 3T Siemens Skyra, with the following parameters: PCASL labeling with variable TI [300, 400, 500, 600, 700, 800, 900, 1000, 1500, 2000, 2500, 3000, 3500, 4000] ms, labeling bolus 1400 ms when TI allows, otherwise 100 ms less than TI, TR was minimized for each TI, two sinc-shaped pre-saturation pulses were applied in the imaging plane immediately before 2D EPI acquisition. 64×64×24 voxels, 5 mm slice thickness, 1 mm gap, full brain coverage, 6 averages per TI, no crusher gradients, 11 ms TE, scan time of 4:56. The perfusion weighted time-series was created for each voxel and fit to a novel model. The model has two components: 1) the traditional model developed by Buxton et al., accounting for CBF and ATT, and 2) a box car function characterizing the width of the labeling bolus, with variable timing and height in proportion to the aCBV. All three parameters were fit using a nonlinear fitting routine that constrained all parameters to be positive. The main purpose of the high-temporal resolution TI sampling for the first second of data acquisition was to precisely estimate the blood volume component for better detection of arrival time and magnitude of signal. Results: Whole brain maps of CBF, ATT, and aCBV were produced, and all three parameters maps are consistent with similar maps described in the literature. Conclusion: Simultaneous mapping of CBF, ATT, and aCBV is feasible with a clinically tractable scan time (under 5 minutes).

  11. Comparative analysis of time-scaling properties about water pH in Poyang Lake Inlet and Outlet on the basis of fractal methods.

    PubMed

    Shi, K; Liu, C Q; Huang, Z W; Zhang, B; Su, Y

    2010-01-01

Detrended fluctuation analysis (DFA) and multifractal methods are applied to analyse the time-scaling properties of water pH series at the Poyang Lake Inlet and Outlet in China. The results show that these pH series are characterised by long-term memory and multifractal scaling, and that these characteristics differ clearly between the Lake Inlet and Outlet. The comparison suggests that monofractal and multifractal parameters can serve as quantitative dynamical indexes reflecting the anti-acidification capability of Poyang Lake. Furthermore, we investigated the frequency-size distribution of the pH series at the Poyang Lake Inlet and Outlet. Our findings suggest that water pH is an example of a self-organised criticality (SOC) process, and that different SOC behaviours account for the differences in the power-law relations between the pH series at the Inlet and Outlet. This work can contribute to improved modelling of lake water quality.
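
    For reference, DFA itself is compact: integrate the centred series, detrend it in windows of size n, and track the RMS fluctuation F(n) versus n; the slope of log F(n) against log n is the scaling exponent. A generic sketch, not the study's exact implementation:

    ```python
    import numpy as np

    def dfa(x, scales):
        """Detrended fluctuation analysis: returns F(n) for each window size n."""
        y = np.cumsum(x - np.mean(x))           # integrated profile
        F = []
        for n in scales:
            n_seg = len(y) // n
            segs = y[:n_seg * n].reshape(n_seg, n)
            t = np.arange(n)
            rms = []
            for seg in segs:
                coef = np.polyfit(t, seg, 1)    # local linear trend
                rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(rms)))
        return np.array(F)

    rng = np.random.default_rng(5)
    x = rng.standard_normal(4096)               # white noise: expect alpha ~ 0.5
    scales = np.unique(np.logspace(2, 9, 15, base=2).astype(int))
    F = dfa(x, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print(round(alpha, 2))
    ```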

  12. Evaluation of physiologic complexity in time series using generalized sample entropy and surrogate data analysis

    NASA Astrophysics Data System (ADS)

    Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz

    2012-12-01

Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on a generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the parameter q (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) dynamics, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series from stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. The values of q where the maximum occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10 × 10⁻³, 1.11 × 10⁻⁷, and 5.50 × 10⁻⁷ for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity, and suggesting a potential use for chaotic system analysis.
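
    One plausible reading of the q-generalization is to replace the natural logarithm in SampEn with the Tsallis q-logarithm, as sketched below; the paper's precise definition of qSampEn may differ, so treat this as an illustration of the idea rather than a reimplementation.

    ```python
    import numpy as np

    def match_counts(x, m, r):
        """Template-match counts A (length m+1) and B (length m), as in SampEn."""
        def count(mm):
            templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            c = 0
            for i in range(len(templ) - 1):
                d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
                c += int(np.sum(d <= r))
            return c
        return count(m + 1), count(m)

    def q_log(z, q):
        """Tsallis q-logarithm; reduces to ln(z) as q -> 1."""
        return np.log(z) if abs(q - 1) < 1e-12 else (z ** (1 - q) - 1) / (1 - q)

    def q_sampen(x, m=2, r=None, q=1.0):
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * np.std(x)
        A, B = match_counts(x, m, r)
        return -q_log(A / B, q)     # q = 1 gives ordinary SampEn

    rng = np.random.default_rng(6)
    x = rng.standard_normal(1000)
    print([round(q_sampen(x, q=q), 3) for q in (0.5, 1.0, 1.5)])
    ```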

  13. Changes in electrical and thermal parameters of led packages under different current and heating stresses

    NASA Astrophysics Data System (ADS)

    Jayawardena, Adikaramge Asiri

The goal of this dissertation is to identify electrical and thermal parameters of an LED package that can be used to predict catastrophic failure in real time in an application. Through an experimental study, the series electrical resistance and thermal resistance were identified as good indicators of contact failure in LED packages. This study investigated the long-term changes in the series electrical resistance and thermal resistance of LED packages at three different current and junction temperature stress conditions. Experimental results showed that the series electrical resistance went through four phases of change, including periods of latency, rapid increase, saturation, and finally a sharp decline just before failure. Formation of voids in the contact metallization was identified as the underlying mechanism for the series resistance increase. The rate of series resistance change was linked to void growth using the theory of electromigration. The rate of increase of series resistance is dependent on temperature and current density. The results indicate that void growth occurred in the cap (Au) layer and was constrained by the contact metal (Ni) layer, preventing open-circuit failure of the contact metal layer. Short-circuit failure occurred due to electromigration-induced metal diffusion along dislocations in GaN. The increase in ideality factor and reverse leakage current with time provided further evidence of the presence of metal in the semiconductor. An empirical model was derived for estimating the LED package failure time due to metal diffusion. The model is based on the experimental results and the theories of electromigration and diffusion. Furthermore, the experimental results showed that the thermal resistance of LED packages increased with aging time. A relationship between the rate of thermal resistance change and the case temperature and temperature gradient within the LED package was developed. The results showed that dislocation creep is responsible for creep-induced plastic deformation in the die-attach solder. The temperature inside the LED package reached the melting point of the die-attach solder due to delamination just before catastrophic open-circuit failure. A combined model that can estimate the life of LED packages based on catastrophic failure of thermal and electrical contacts is presented for the first time. This model can be used to make a priori or real-time estimates of LED package life based on catastrophic failure. Finally, to illustrate the usefulness of the findings of this thesis, two different implementations of real-time life prediction using prognostics and health monitoring techniques are discussed.

  14. Variations in the Parameters of Background Seismic Noise during the Preparation Stages of Strong Earthquakes in the Kamchatka Region

    NASA Astrophysics Data System (ADS)

    Kasimova, V. A.; Kopylova, G. N.; Lyubushin, A. A.

    2018-03-01

The results of the long (2011-2016) investigation of background seismic noise (BSN) in Kamchatka by the method suggested by Doct. Sci. (Phys.-Math.) A.A. Lyubushin with the use of the data from the network of broadband seismic stations of the Geophysical Survey of the Russian Academy of Sciences are presented. For characterizing the BSN field and its variability, continuous time series of the statistical parameters of the multifractal singularity spectra and wavelet expansion calculated from the records at each station are used. These parameters include the generalized Hurst exponent α*, singularity spectrum support width Δα, wavelet spectral exponent β, minimal normalized entropy of wavelet coefficients En, and spectral measure of their coherent behavior. The peculiarities in the spatiotemporal distribution of the BSN parameters as a probable response to the earthquakes with Mw = 6.8-8.3 that occurred in Kamchatka in 2013 and 2016 are considered. It is established that these seismic events were preceded by regular variations in the BSN parameters, which lasted for a few months and consisted in the reduction of the median and mean α*, Δα, and β values estimated over all the stations and in the increase of the En values. Based on the increase in the spectral measure of the coherent behavior of the four-variate time series of the median and mean values of the considered statistics, the effect of the enhancement of the synchronism in the joint (collective) behavior of these parameters during a certain period prior to the mantle earthquake in the Sea of Okhotsk (May 24, 2013, Mw = 8.3) is diagnosed. The procedures for revealing the precursory effects in the variations of the BSN parameters are described and the examples of these effects are presented.

  15. LATERAL CONTROL IN A DRIVING SIMULATOR: CORRELATIONS WITH NEUROPSYCHOLOGICAL TESTS AND ON-ROAD SAFETY ERRORS

    PubMed Central

    Johnson, Amy; Dawson, Jeffrey; Rizzo, Matthew

    2012-01-01

Driving simulators provide precise information on vehicular position at high capture rates. To analyze such data, we have previously proposed a time series model that reduces lateral position data into several parameters for measuring lateral control, and have shown that these parameters can detect differences between neurologically impaired and healthy drivers (Dawson et al., 2010a). In this paper, we focus on the "re-centering" parameter of this model, and test whether the parameter estimates are associated with off-road neuropsychological tests and/or with on-road safety errors. We assessed such correlations in 127 neurologically healthy drivers, ages 40 to 89. We found that our re-centering parameter had significant correlations with five neuropsychological tests: Judgment of Line Orientation (r = 0.38), Block Design (r = 0.27), Contrast Sensitivity (r = 0.31), Near Visual Acuity (r = -0.26), and Grooved Pegboard (r = -0.25). We also found that our re-centering parameter was associated with on-road safety errors at stop signs (r = -0.34) and during turns (r = -0.22). These results suggest that our re-centering parameter may be a useful tool for measuring and monitoring the ability to maintain vehicular lateral control. As GPS-based technology continues to improve in precision and reliability for measuring vehicular position, our time-series model may potentially be applied as an automated index of driver performance in real-world settings that is sensitive to cognitive decline. This work was supported by NIH/NIA awards AG17177, AG15071, and NS044930, and by a scholarship from Nissan Motor Company. PMID:24273756

  16. A statistical approach for generating synthetic tip stress data from limited CPT soundings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basalams, M.K.

CPT tip stress data obtained from a Uranium mill tailings impoundment are treated as time series. A statistical class of models that was developed to model time series is explored to investigate its applicability in modeling the tip stress series. These models were developed by Box and Jenkins (1970) and are known as Autoregressive Moving Average (ARMA) models. This research demonstrates how to apply the ARMA models to tip stress series. Generation of synthetic tip stress series that preserve the main statistical characteristics of the measured series is also investigated. Multiple regression analysis is used to model the regional variation of the ARMA model parameters as well as the regional variation of the mean and the standard deviation of the measured tip stress series. The reliability of the generated series is investigated from a geotechnical point of view as well as from a statistical point of view. Estimation of the total settlement using the measured and the generated series subjected to the same loading condition are performed. The variation of friction angle with depth of the impoundment materials is also investigated. This research shows that these series can be modeled by the Box and Jenkins ARMA models. A third degree Autoregressive model AR(3) is selected to represent these series. A theoretical double exponential density function is fitted to the AR(3) model residuals. Synthetic tip stress series are generated at nearby locations. The generated series are shown to be reliable in estimating the total settlement and the friction angle variation with depth for this particular site.
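
    The fit-then-regenerate idea can be sketched in a few lines: estimate AR(3) coefficients by least squares and re-drive the fitted model with resampled residuals to obtain a synthetic series. Note the dissertation fits a double-exponential density to the residuals, whereas this sketch simply bootstraps them; all numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 400
    phi_true = np.array([0.6, -0.2, 0.1])
    series = np.zeros(n)                       # toy detrended tip-stress profile
    for t in range(3, n):
        series[t] = phi_true @ series[t - 3:t][::-1] + rng.standard_normal()

    # AR(3) coefficients by least squares on lagged columns
    X = np.column_stack([series[2:-1], series[1:-2], series[:-3]])
    phi_hat, *_ = np.linalg.lstsq(X, series[3:], rcond=None)
    resid = series[3:] - X @ phi_hat

    # Re-drive the fitted model with bootstrap-resampled residuals to obtain a
    # synthetic series sharing the measured series' correlation structure
    synth = np.zeros(n)
    for t in range(3, n):
        synth[t] = phi_hat @ synth[t - 3:t][::-1] + rng.choice(resid)
    print(phi_hat.round(2))
    ```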

  17. Time Series Analysis of Remote Sensing Observations for Citrus Crop Growth Stage and Evapotranspiration Estimation

    NASA Astrophysics Data System (ADS)

    Sawant, S. A.; Chakraborty, M.; Suradhaniwar, S.; Adinarayana, J.; Durbha, S. S.

    2016-06-01

Satellite-based earth observation (EO) platforms have proven capable of spatio-temporally monitoring changes on the earth's surface. Long-term satellite missions have provided a huge repository of optical remote sensing datasets, and the United States Geological Survey (USGS) Landsat program is one of the oldest sources of optical EO datasets. This historical and near-real-time EO archive is a rich source of information for understanding seasonal changes in horticultural crops. Citrus (Mandarin / Nagpur Orange) is one of the major horticultural crops cultivated in central India. Erratic rainfall behaviour and dependency on groundwater for irrigation have a wide impact on citrus crop yield. Wide variations are also reported in temperature and relative humidity, causing early fruit onset and an increase in crop water requirement. There is therefore a need to study crop growth stages and crop evapotranspiration at spatio-temporal scale for managing these scarce resources. In this study, an attempt has been made to understand citrus crop growth stages using Normalized Difference Vegetation Index (NDVI) time series data obtained from the Landsat archives (http://earthexplorer.usgs.gov/). A total of 388 Landsat 4, 5, 7 and 8 scenes (from 1990 to Aug. 2015) for Worldwide Reference System (WRS) 2, path 145 and row 45 were selected to understand seasonal variations in citrus crop growth. Considering the 30 m spatial resolution of Landsat, orchards with crop cover larger than 2 hectares were selected to obtain homogeneous pixels. To account for the change in wavelength bandwidth (radiometric resolution) across the Landsat sensors (i.e. 4, 5, 7 and 8), NDVI was selected to obtain a continuous, sensor-independent time series. The obtained crop growth stage information has been used to estimate the citrus basal crop coefficient (Kcb). Satellite-based Kcb estimates were combined with relevant weather parameters observed by a proximal agrometeorological sensing system for crop ET estimation. The results show that time series EO-based crop growth stage estimates provide better information about geographically separated citrus orchards. Attempts are being made to estimate regional variations in citrus crop water requirement for effective irrigation planning. In future, high-resolution Sentinel-2 observations from the European Space Agency (ESA) will be used to fill the time gaps and to better understand citrus crop canopy parameters.
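
    The growth-stage extraction can be sketched as fitting a logistic curve to the green-up segment of an NDVI time series, where NDVI = (NIR - red) / (NIR + red), and reading the start of season from the fitted inflection point. The curve form and synthetic data below are assumptions for illustration; the study's exact metric definitions may differ.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, vmin, vamp, k, t0):
        """Logistic green-up curve; SOS is often read off at the inflection t0."""
        return vmin + vamp / (1 + np.exp(-k * (t - t0)))

    # Synthetic NDVI green-up segment (16-day composites, day of year vs. NDVI);
    # NDVI itself would come from Landsat bands as (NIR - red) / (NIR + red)
    doy = np.arange(1.0, 181.0, 16.0)
    rng = np.random.default_rng(9)
    ndvi = logistic(doy, 0.25, 0.45, 0.08, 90.0) + 0.02 * rng.standard_normal(doy.size)

    popt, _ = curve_fit(logistic, doy, ndvi, p0=[0.2, 0.5, 0.05, 80.0])
    print("estimated start of season (DOY):", round(popt[3], 1))
    ```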

  18. Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer.

    PubMed

    Vogel, Sven C; Biwer, Chris M; Rogers, David H; Ahrens, James P; Hackenberg, Robert E; Onken, Drew; Zhang, Jianzhong

    2018-06-01

    A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U-Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr 3 . A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download.

  19. Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer

    PubMed Central

    Biwer, Chris M.; Rogers, David H.; Ahrens, James P.; Hackenberg, Robert E.; Onken, Drew; Zhang, Jianzhong

    2018-01-01

    A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U–Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download. PMID:29896062

  20. A comparative analysis of spectral exponent estimation techniques for 1/fβ processes with applications to the analysis of stride interval time series

    PubMed Central

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

Background: The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self similarity over long time scales by a power law in the frequency spectrum S(f) = 1/fβ. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis is to complement previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509

  1. A comparative analysis of spectral exponent estimation techniques for 1/f(β) processes with applications to the analysis of stride interval time series.

    PubMed

    Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2014-01-30

The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self similarity over long time scales by a power law in the frequency spectrum S(f)=1/f(β). The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis is to complement previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.

    PubMed

    Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva

    2011-06-01

The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal) over a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed, and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit and bottom). The model with the highest average correlation value presented correlations of 0.991, 0.843 and 0.978 for the training, verification and test series. This model had the three series independent in time: first the test series, then the verification series and, finally, the training series. Only six input variables, physical and chemical parameters referring to the three depths of the reservoir, were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH and water evaporation. These variables are common to the next four best models produced and, although those included other input variables, their performance was not better than that of the selected best model.

  3. Relations among soil radon, environmental parameters, volcanic and seismic events at Mt. Etna (Italy)

    NASA Astrophysics Data System (ADS)

    Giammanco, S.; Ferrera, E.; Cannata, A.; Montalto, P.; Neri, M.

    2013-12-01

From November 2009 to April 2011, soil radon activity was continuously monitored using a Barasol probe located on the upper NE flank of Mt. Etna volcano (Italy), close both to the Piano Provenzana fault and to the NE-Rift. Seismic, volcanological and radon data were analysed together with data on environmental parameters, such as air and soil temperature, barometric pressure, snow and rainfall. In order to find possible correlations among the above parameters, and hence to reveal possible anomalous trends in the radon time series, we used different statistical methods: (i) multivariate linear regression; (ii) cross-correlation; (iii) coherence analysis through the wavelet transform. Multivariate regression indicated a modest influence of environmental parameters on soil radon (R2 = 0.31). When using 100-day time windows, the R2 values showed wide variations in time, reaching their maxima (~0.63-0.66) during summer. Cross-correlation analysis over 100-day moving averages showed that, similar to the multivariate linear regression analysis, the summer period was characterised by the best correlation between radon data and environmental parameters. Lastly, wavelet coherence provided a multi-resolution coherence analysis of the acquired time series. This approach made it possible to study the relations among the different signals in either the time or the frequency domain. It confirmed the results of the previous methods, but also revealed correlations between radon and environmental parameters at different observation scales (e.g., radon activity changed during strong precipitation, but also during anomalous variations of soil temperature uncorrelated with seasonal fluctuations). Using the above analysis, two periods were recognized in which radon variations were significantly correlated with marked soil temperature changes and also with local seismic or volcanic activity. This allowed us to produce two different physical models of soil gas transport that explain the observed anomalies. Our work suggests that an accurate analysis of the relations among different signals requires different techniques providing complementary analytical information. In particular, the wavelet analysis proved to be the most effective in discriminating radon changes due to environmental influences from those correlated with impending seismic or volcanic events.
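
    Of the three methods, lagged cross-correlation is the simplest to sketch: normalize both series and scan the correlation over a window of lags. The synthetic example below plants a 5-day lag of radon behind soil temperature and recovers it; all data and numbers are illustrative.

    ```python
    import numpy as np

    def norm_xcorr(a, b, max_lag):
        """Normalized cross-correlation of a and b for lags -max_lag..max_lag;
        a positive peak lag means b trails a by that many samples."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        n = len(a)
        lags = np.arange(-max_lag, max_lag + 1)
        cc = [np.mean(a[max(0, -L):n - max(0, L)] * b[max(0, L):n - max(0, -L)])
              for L in lags]
        return lags, np.array(cc)

    rng = np.random.default_rng(10)
    t = np.arange(1000)
    soil_temp = np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(1000)
    radon = np.roll(soil_temp, 5) + 0.5 * rng.standard_normal(1000)  # 5-day lag

    lags, cc = norm_xcorr(soil_temp, radon, 30)
    print("peak correlation at lag:", lags[np.argmax(cc)])   # expect +5
    ```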

  4. A time series analysis performed on a 25-year period of kidney transplantation activity in a single center.

    PubMed

    Santori, G; Fontana, I; Bertocchi, M; Gasloli, G; Valente, U

    2010-05-01

Following the example of many Western countries, where a "minimum volume rule" policy has been adopted as a quality parameter for complex surgical procedures, the Italian National Transplant Centre set the minimum number of kidney transplantation procedures per year at 30 per center. The number of procedures performed in a single center over a long period may be treated as a time series to evaluate trends, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1983, and December 31, 2007, we performed 1376 procedures in adult or pediatric recipients from living or cadaveric donors. The greatest numbers of cases per year were performed in 1998 (n = 86), followed by 2004 (n = 82), 1996 (n = 75), and 2003 (n = 73). A time series analysis performed using R (R Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an overall incremental trend after exponential smoothing as well as after seasonal decomposition. However, starting from 2005, we observed a decreasing trend in the series. Holt-Winters exponential smoothing applied to the period 1983 to 2007 suggested 58 procedures for 2008, while 52 were actually performed in that year. The time series approach may be helpful to establish a minimum volume per year at the single-center level. Copyright (c) 2010 Elsevier Inc. All rights reserved.
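
    With statsmodels, a Holt-Winters forecast of annual procedure counts takes a few lines; the toy counts below are invented stand-ins for the 1983-2007 series, and the trend settings are assumptions (the study's seasonal decomposition operated on within-year periods).

    ```python
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(11)
    # Toy stand-in for yearly transplant counts, 1983-2007 (25 values)
    counts = 30 + 1.5 * np.arange(25) + rng.normal(0, 6, 25)

    # Holt-Winters exponential smoothing with an additive trend (no seasonal
    # term for annual totals); forecast one step ahead, i.e. the 2008 count
    fit = ExponentialSmoothing(counts, trend="add").fit()
    print(round(float(fit.forecast(1)[0])))
    ```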

  5. Time to enhancement derived from ultrafast breast MRI as a novel parameter to discriminate benign from malignant breast lesions.

    PubMed

    Mus, Roel D; Borelli, Cristina; Bult, Peter; Weiland, Elisabeth; Karssemeijer, Nico; Barentsz, Jelle O; Gubern-Mérida, Albert; Platel, Bram; Mann, Ritse M

    2017-04-01

To investigate time to enhancement (TTE) as a novel dynamic parameter for lesion classification in breast magnetic resonance imaging (MRI). In this retrospective study, 157 women with 195 enhancing abnormalities (99 malignant and 96 benign) were included. All patients underwent a bi-temporal MRI protocol that included ultrafast time-resolved angiography with stochastic trajectory (TWIST) acquisitions (1.0 × 0.9 × 2.5 mm, temporal resolution 4.32 s) during the inflow of contrast agent. TTE derived from the TWIST series and the relative enhancement versus time curve type derived from the volumetric interpolated breath-hold examination (VIBE) series were assessed and combined with basic morphological information to differentiate benign from malignant lesions. Receiver operating characteristic analysis and kappa statistics were applied. TTE had a significantly better discriminative ability than curve type (p < 0.001 and p = 0.026 for readers 1 and 2, respectively). Including morphology, the sensitivity of TWIST and VIBE assessment was equivalent (p = 0.549 and p = 0.344, respectively). Specificity and diagnostic accuracy were significantly higher for TWIST than for VIBE assessment (p < 0.001). Inter-reader agreement in differentiating malignant from benign lesions was almost perfect for TWIST evaluation (κ = 0.86) and substantial for conventional assessment (κ = 0.75). TTE derived from ultrafast TWIST acquisitions is a valuable parameter that allows robust differentiation between malignant and benign breast lesions with high accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. How Does Variability in Aragonite Saturation Proxies Impact Our Estimates of the Intensity and Duration of Exposure to Aragonite Corrosive Conditions in a Coastal Upwelling System?

    NASA Astrophysics Data System (ADS)

    Abell, J. T.; Jacobsen, J.; Bjorkstedt, E.

    2016-02-01

Determining the aragonite saturation state (Ω) in seawater requires measurement of two parameters of the carbonate system: most commonly dissolved inorganic carbon (DIC) and total alkalinity (TA). Routine measurement of DIC and TA is not always possible on frequently repeated hydrographic lines or at moored time-series stations that collect hydrographic data at short time intervals. In such cases a proxy can be developed that relates the saturation state derived from one-time or infrequent DIC and TA measurements (Ωmeas) to more frequently measured parameters such as dissolved oxygen (DO) and temperature (Temp). These proxies are generally based on best-fit parameterizations that use reference values of DO and Temp and adjust linear coefficients until the error between the proxy-derived saturation state (Ωproxy) and Ωmeas is minimized. Proxies have been used to infer Ω from moored hydrographic sensors and gliders, which routinely collect DO and Temp data but do not include carbonate parameter measurements. Proxies can also be used to calculate Ω in regional oceanographic models that do not explicitly include carbonate parameters. Here we examine the variability and accuracy of Ωproxy along a near-shore hydrographic line and at a moored time-series station at Trinidad Head, CA. The saturation state is determined using proxies from different coastal regions of the California Current Large Marine Ecosystem and from different years of sampling along the hydrographic line. We then calculate the variability and error associated with the use of different proxy coefficients, the sensitivity to reference values, and the inclusion of additional variables. We demonstrate how this variability affects estimates of the intensity and duration of exposure to aragonite-corrosive conditions on the near-shore shelf and in the water column.
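
    The proxy construction reduces to a small linear least squares problem: regress Ωmeas on DO and temperature anomalies about chosen reference values. The sketch below uses synthetic bottle data and assumed reference values purely to illustrate the fitting and error calculation.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)

    # Toy hydrographic data: Omega from DIC/TA on a few bottle samples, plus
    # frequently measured dissolved oxygen (umol/kg) and temperature (deg C)
    DO = rng.uniform(80, 260, 200)
    T = rng.uniform(7, 14, 200)
    omega_meas = (1.0 + 0.006 * (DO - 140) + 0.12 * (T - 10)
                  + 0.05 * rng.standard_normal(200))

    # Proxy of the common linear form:
    #   Omega = a0 + a1*(DO - DO_ref) + a2*(T - T_ref)
    DO_ref, T_ref = 140.0, 10.0            # reference values (assumed here)
    A = np.column_stack([np.ones_like(DO), DO - DO_ref, T - T_ref])
    coef, *_ = np.linalg.lstsq(A, omega_meas, rcond=None)

    omega_proxy = A @ coef
    rmse = np.sqrt(np.mean((omega_proxy - omega_meas) ** 2))
    print(coef.round(3), round(rmse, 3))
    ```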

  7. Investigation on Insar Time Series Deformation Model Considering Rheological Parameters for Soft Clay Subgrade Monitoring

    NASA Astrophysics Data System (ADS)

    Xing, X.; Yuan, Z.; Chen, L. F.; Yu, X. Y.; Xiao, L.

    2018-04-01

    Stability control is one of the major technical difficulties in highway subgrade construction engineering. Building a deformation model is a crucial step in InSAR time series deformation monitoring. Most InSAR deformation models used for monitoring are purely empirical mathematical models that do not consider the physical mechanism of the monitored object. In this study, we take rheology into consideration, introducing rheological parameters into traditional InSAR deformation models. To assess the feasibility and accuracy of our new model, both simulated data and real deformation data over the Lungui highway (a typical highway built on a soft clay subgrade in Guangdong province, China) are investigated with TerraSAR-X satellite imagery. To solve for the unknowns of the nonlinear rheological model, three algorithms, Gauss-Newton (GN), Levenberg-Marquardt (LM), and a genetic algorithm (GA), are compared for estimating the unknown parameters. Considering both computational efficiency and accuracy, GA is chosen for the new model in our case study. A preliminary real-data experiment is conducted using 17 TerraSAR-X Stripmap images (with a 3-m resolution). With the new deformation model and the GA described above, the unknown rheological parameters over all the high-coherence points are obtained and the LOS deformation (low-pass component) sequences are generated.
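
    To illustrate the parameter-estimation step, the sketch below fits a generic exponential settlement curve (not the authors' rheological model) using SciPy's differential evolution as an evolutionary stand-in for the GA; the data, model form, and bounds are all hypothetical:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical LOS deformation observations d(t) at one high-coherence point (mm).
t = np.linspace(0.1, 3.0, 17)            # years, one value per SAR acquisition
d_obs = -12.0 * (1 - np.exp(-t / 0.8)) + np.random.default_rng(0).normal(0, 0.3, t.size)

# Generic settlement model (assumed form, not the paper's exact rheology):
#   d(t) = d_inf * (1 - exp(-t / tau))
def misfit(p):
    d_inf, tau = p
    return np.sum((d_inf * (1 - np.exp(-t / tau)) - d_obs) ** 2)

# Evolutionary global search over parameter bounds, standing in for the GA.
res = differential_evolution(misfit, bounds=[(-50.0, 0.0), (0.05, 5.0)], seed=1)
print("estimated d_inf, tau:", res.x)
```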

  8. Exploratory Causal Analysis in Bivariate Time Series Data

    NASA Astrophysics Data System (ADS)

    McCracken, James M.

    Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, so data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools, including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between "voltage drives current" and "current drives voltage" as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools. A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data sets, but little research exists on how these tools compare to each other in practice. This work introduces and defines exploratory causal analysis (ECA) to address this issue, along with the concept of data causality in the taxonomy of causal studies introduced in this work. The motivation is to provide a framework for exploring potential causal structures in time series data sets. ECA is used on several synthetic and empirical data sets, and it is found that all of the tested time series causality tools agree with each other (and intuitive notions of causality) for many simple systems but can provide conflicting causal inferences for more complicated systems. It is proposed that such disagreements between different time series causality tools during ECA might provide deeper insight into the data than could be found otherwise.

  9. One year of real-time radon monitoring at Stromboli volcano and the effect of environmental parameters on 222Rn concentrations

    NASA Astrophysics Data System (ADS)

    Cigolini, C.; Laiolo, M.; Coppola, D.; Piscopo, D.; Bertolino, S.

    2009-12-01

    Real-time radon monitoring at Stromboli volcano has been operational for the last two years. In this contribution we discuss the recent one-year-long time series analyses in the light of environmental parameters. Two sites for real-time monitoring were identified by means of a network of periodic radon surveys, in order to locate the areas with the most efficient response to seismic transients and/or volcanic degassing. Two real-time stations are positioned at Stromboli: one at the summit, located along a fracture zone where the gas flux is concentrated, and a second one at lower altitude in a sector of diffuse degassing. The signals of the two time series are essentially concordant, but radon concentrations are considerably higher at the summit station. Raw data show a negative correlation between radon emissions and seasonal temperature variations, whereas the correlation with atmospheric pressure is negative for the site of diffuse degassing and slightly positive for the station located along the summit fracture zone. These data and the previously collected ones show that SW winds may substantially decrease radon concentrations at the summit station. Multivariate regression statistics on the radon signals, in the light of the above environmental parameters and tidal forces, may help to better identify the correlation between radon emissions and variations in volcanic activity. [Fig. 1 caption: Radon monitoring stations at Stromboli and the two major summit faults. Stars identify the real-time monitoring sites LSC and PZZ; the diamond is the location of the automated Labronzo station; full dots are stations for periodic measurements using alpha track-etch detectors and E-PERM® electrets. Inset: the location of Stromboli and the major structures of the Aeolian arc.]

  10. Nonequilibrium transitions driven by external dichotomous noise

    NASA Astrophysics Data System (ADS)

    Behn, U.; Schiele, K.; Teubel, A.; Kühnel, A.

    1987-06-01

    The stationary probability density P_s for a class of nonlinear one-dimensional models driven by a dichotomous Markovian process (DMP) I_t can be calculated explicitly. For the specific case of the Stratonovich model, ẋ = ax − x³ + I_t x, the qualitative shape of P_s and its support are discussed over the whole parameter region. The location of the maxima of P_s shows behavior similar to order parameters in continuous phase transitions. The possibility of a noise-induced change from a continuous to a discontinuous transition in an extended model, in which the DMP also couples to the cubic term, is discussed. The time-dependent moments can be represented as an infinite series of terms, which are determined by a recursion formula. For negative even moments the series terminates and the long-time behavior can be obtained analytically. As a function of the physical parameters, qualitative changes of this behavior may occur, which can be partially related to the behavior of P_s. All results reproduce those for Gaussian white noise in the corresponding limit. The influence of the finite correlation time and of the discreteness of the space of states of the DMP are discussed. An extensive list of references is contained in U. Behn, K. Schiele, and A. Teubel, Wiss. Z. Karl-Marx-Univ. Leipzig, Mathem.-Naturwiss. R. 34:602 (1985).

  11. Arterial spin labeling in combination with a look-locker sampling strategy: inflow turbo-sampling EPI-FAIR (ITS-FAIR).

    PubMed

    Günther, M; Bock, M; Schad, L R

    2001-11-01

    Arterial spin labeling (ASL) permits quantification of tissue perfusion without the use of MR contrast agents. With standard ASL techniques such as flow-sensitive alternating inversion recovery (FAIR), the signal from arterial blood is measured at a fixed inversion delay after magnetic labeling. As no image information is sampled during this delay, FAIR measurements are inefficient and time-consuming. In this work the FAIR preparation was combined with a Look-Locker acquisition to sample not one but a series of images after each labeling pulse. This new method allows monitoring of the temporal dynamics of blood inflow. To quantify perfusion, a theoretical model for the signal dynamics during the Look-Locker readout was developed and applied. Also, the imaging parameters of the new ITS-FAIR technique were optimized using an expression for the variance of the calculated perfusion. For the given scanner hardware the parameters were: temporal resolution 100 ms, 23 images, flip angle 25.4°. In a normal volunteer experiment with these parameters, an average perfusion value of 48.2 ± 12.1 ml/100 g/min was measured in the brain. With the ability to obtain ITS-FAIR time series with high temporal resolution, arterial transit times in the range of −138 to 1054 ms were measured, where nonphysical negative values were found in voxels containing large vessels. Copyright 2001 Wiley-Liss, Inc.

  12. Optimizing the general linear model for functional near-infrared spectroscopy: an adaptive hemodynamic response function approach

    PubMed Central

    Uga, Minako; Dan, Ippeita; Sano, Toshifumi; Dan, Haruka; Watanabe, Eiju

    2014-01-01

    An increasing number of functional near-infrared spectroscopy (fNIRS) studies utilize a general linear model (GLM) approach, which serves as a standard statistical method for functional magnetic resonance imaging (fMRI) data analysis. While fMRI solely measures the blood oxygen level dependent (BOLD) signal, fNIRS measures the changes of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) signals at a temporal resolution severalfold higher. This suggests the necessity of adjusting the temporal parameters of a GLM for fNIRS signals. Thus, we devised a GLM-based method utilizing an adaptive hemodynamic response function (HRF). We sought the optimum temporal parameters to best explain the observed time series data during verbal fluency and naming tasks. The peak delay of the HRF was systematically changed to achieve the best-fit model for the observed oxy- and deoxy-Hb time series data. The optimized peak delay showed different values for each Hb signal and task. When the optimized peak delays were adopted, the deoxy-Hb data yielded comparable activations with similar statistical power and spatial patterns to the oxy-Hb data. The adaptive HRF method could suitably explain the behaviors of both Hb parameters during tasks with different cognitive loads over the time course, and thus would serve as an objective method to fully utilize the temporal structures of all fNIRS data. PMID:26157973
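
    A minimal sketch of the adaptive-HRF idea: grid-search the peak delay of a single-gamma HRF and keep the delay whose GLM best fits a synthetic oxy-Hb series. The sampling rate, task block, and HRF shape are assumptions for illustration, not the study's exact model:

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical fNIRS setup: 10 Hz sampling, one task block, oxy-Hb series y.
fs, n = 10.0, 600
t = np.arange(n) / fs
box = ((t > 10) & (t < 30)).astype(float)        # task regressor
rng = np.random.default_rng(0)

def hrf(peak_delay, dt=0.1, length=30.0):
    # Single-gamma HRF; mode of gamma(a, scale=1) is (a - 1), so a = delay + 1.
    tt = np.arange(0, length, dt)
    return gamma.pdf(tt, a=peak_delay + 1.0, scale=1.0)

y = np.convolve(box, hrf(6.0), mode="full")[:n] + rng.normal(0, 0.05, n)

best = None
for delay in np.arange(3.0, 10.1, 0.5):          # grid over peak delays
    x = np.convolve(box, hrf(delay), mode="full")[:n]
    X = np.column_stack([x, np.ones(n)])          # GLM: regressor + constant
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    if best is None or res[0] < best[1]:
        best = (delay, res[0])                    # keep delay with least misfit
print("best-fitting peak delay (s):", best[0])
```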

  13. Data series embedding and scale invariant statistics.

    PubMed

    Michieli, I; Medved, B; Ristov, S

    2010-06-01

    Data sequences acquired from bio-systems such as human gait data, heart rate interbeat data, or DNA sequences exhibit complex dynamics that are frequently described by a long-memory or power-law decay of the autocorrelation function. One way of characterizing these dynamics is through scale invariant statistics or "fractal-like" behavior. Several methods have been proposed for quantifying scale invariant parameters of physiological signals. Among them the most common are detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and, more recently in the realm of the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale invariant statistics in a simple fashion. The procedure is applied to different stride interval data sets from human gait measurement time series (PhysioBank data library). Results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed and scale-free trends for limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and with varying noise content. The possibility of the method falsely detecting long-range dependence in artificially generated short-range-dependent series was also investigated. (c) 2009 Elsevier B.V. All rights reserved.

  14. Cross-visit tumor sub-segmentation and registration with outlier rejection for dynamic contrast-enhanced MRI time series data.

    PubMed

    Buonaccorsi, G A; Rose, C J; O'Connor, J P B; Roberts, C; Watson, Y; Jackson, A; Jayson, G C; Parker, G J M

    2010-01-01

    Clinical trials of anti-angiogenic and vascular-disrupting agents often use biomarkers derived from DCE-MRI, typically reporting whole-tumor summary statistics and so overlooking spatial parameter variations caused by tissue heterogeneity. We present a data-driven segmentation method comprising tracer-kinetic model-driven registration for motion correction, conversion from MR signal intensity to contrast agent concentration for cross-visit normalization, iterative principal components analysis for imputation of missing data and dimensionality reduction, and statistical outlier detection using the minimum covariance determinant to obtain a robust Mahalanobis distance. After applying these techniques we cluster in the principal components space using k-means. We present results from a clinical trial of a VEGF inhibitor, using time-series data selected because of problems due to motion and outlier time series. We obtained spatially contiguous clusters that map to regions with distinct microvascular characteristics. This methodology has the potential to uncover localized effects in trials using DCE-MRI-based biomarkers.
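
    A compressed sketch of the outlier-rejection and clustering stages using scikit-learn; plain PCA replaces the paper's iterative PCA with imputation, and the voxel curves are synthetic stand-ins:

```python
import numpy as np
from sklearn.covariance import MinCovDet
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical data: one concentration-time curve (20 time points) per voxel.
rng = np.random.default_rng(0)
curves = rng.normal(size=(500, 20)).cumsum(axis=1)   # stand-in for C(t) data

# Dimensionality reduction (plain PCA here, assuming no missing samples).
scores = PCA(n_components=5).fit_transform(curves)

# Robust Mahalanobis distances via the minimum covariance determinant,
# used to flag outlier voxels before clustering.
mcd = MinCovDet(random_state=0).fit(scores)
d2 = mcd.mahalanobis(scores)                         # squared distances
inliers = scores[d2 < np.quantile(d2, 0.95)]

# k-means in the principal-components space.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(inliers)
print("cluster sizes:", np.bincount(labels))
```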

  15. Time Series Forecasting of the Number of Malaysia Airlines and AirAsia Passengers

    NASA Astrophysics Data System (ADS)

    Asrah, N. M.; Nor, M. E.; Rahim, S. N. A.; Leng, W. K.

    2018-04-01

    The standard practice in the forecasting process involves fitting a model and then analysing the residuals. Knowing the distributional behaviour of the time series data helps us to directly address model identification, parameter estimation, and model checking. In this paper, we compare the distributional behaviour of the numbers of Malaysia Airlines (MAS) and AirAsia passengers. Previous research found that the number of AirAsia passengers is governed by geometric Brownian motion (GBM): the data were normally distributed, stationary and independent. GBM was then used to forecast the number of AirAsia passengers. The same methods were applied to the MAS data and the results were compared. The MAS data, however, turned out not to be governed by GBM, so the standard approach to time series forecasting was applied to them instead. From this comparison, we conclude that AirAsia passenger numbers remain at peak-season levels more consistently than those of MAS.
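
    A minimal sketch of GBM-based forecasting of a passenger series, assuming synthetic monthly counts; drift and volatility are estimated from log returns and forecast paths are simulated:

```python
import numpy as np

# Hypothetical monthly passenger counts (thousands); GBM assumed, as the
# paper reports for the AirAsia series.
rng = np.random.default_rng(0)
x = 1000 * np.exp(np.cumsum(rng.normal(0.01, 0.05, 60)))  # synthetic series

# Estimate drift and volatility from log returns (dt = 1 month).
r = np.diff(np.log(x))
sigma = r.std(ddof=1)
mu = r.mean() + 0.5 * sigma**2        # GBM: E[log-return] = (mu - sigma^2/2)dt

# Simulate 1000 forecast paths 12 months ahead.
h, n_paths = 12, 1000
z = rng.normal(size=(n_paths, h))
paths = x[-1] * np.exp(np.cumsum((mu - 0.5 * sigma**2) + sigma * z, axis=1))
print("median 12-month-ahead forecast:", np.median(paths[:, -1]))
```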

  16. Automatising the analysis of stochastic biochemical time-series

    PubMed Central

    2015-01-01

    Background Mathematical and computational modelling of biochemical systems has seen a lot of effort devoted to the definition and implementation of high-performance mechanistic simulation frameworks. Within these frameworks it is possible to analyse complex models under a variety of configurations, eventually selecting the best setting of, e.g., parameters for a target system. Motivation This operational pipeline relies on the ability to interpret the predictions of a model, often represented as simulation time-series. Thus, an efficient data analysis pipeline is crucial to automatise time-series analyses, bearing in mind that errors in this phase might mislead the modeller's conclusions. Results For this reason we have developed an intuitive framework-independent Python tool to automate analyses common to a variety of modelling approaches. These include assessment of useful non-trivial statistics for simulation ensembles, e.g., estimation of master equations. Intuitive and domain-independent batch scripts will allow the researcher to automatically prepare reports, thus speeding up the usual model-definition, testing and refinement pipeline. PMID:26051821

  17. Linear time-varying models can reveal non-linear interactions of biomolecular regulatory networks using multiple time-series data.

    PubMed

    Kim, Jongrae; Bates, Declan G; Postlethwaite, Ian; Heslop-Harrison, Pat; Cho, Kwang-Hyun

    2008-05-15

    Inherent non-linearities in biomolecular interactions make the identification of network interactions difficult. One of the principal problems is that all methods based on the use of linear time-invariant models will have fundamental limitations in their capability to infer certain non-linear network interactions. Another difficulty is the multiplicity of possible solutions, since, for a given dataset, there may be many different possible networks which generate the same time-series expression profiles. A novel algorithm for the inference of biomolecular interaction networks from temporal expression data is presented. Linear time-varying models, which can represent a much wider class of time-series data than linear time-invariant models, are employed in the algorithm. From time-series expression profiles, the model parameters are identified by solving a non-linear optimization problem. In order to systematically reduce the set of possible solutions for the optimization problem, a filtering process is performed using a phase-portrait analysis with random numerical perturbations. The proposed approach has the advantages of not requiring the system to be in a stable steady state, of using time-series profiles which have been generated by a single experiment, and of allowing non-linear network interactions to be identified. The ability of the proposed algorithm to correctly infer network interactions is illustrated by its application to three examples: a non-linear model for cAMP oscillations in Dictyostelium discoideum, the cell-cycle data for Saccharomyces cerevisiae and a large-scale non-linear model of a group of synchronized Dictyostelium cells. The software used in this article is available from http://sbie.kaist.ac.kr/software

  18. New Tools for Comparing Beliefs about the Timing of Recurrent Events with Climate Time Series Datasets

    NASA Astrophysics Data System (ADS)

    Stiller-Reeve, Mathew; Stephenson, David; Spengler, Thomas

    2017-04-01

    For climate services to be relevant and informative for users, scientific data definitions need to match users' perceptions or beliefs. This study proposes and tests novel yet simple methods to compare beliefs about the timing of recurrent climatic events with empirical evidence from multiple historical time series. The methods are tested by applying them to the onset date of the monsoon in Bangladesh, where several scientific monsoon definitions can be applied, yielding different results for monsoon onset dates. It is a challenge to know which monsoon definition compares best with people's beliefs. Time series from eight different scientific monsoon definitions in six regions are compared with respondent beliefs from a previously completed survey concerning the monsoon onset. Beliefs about the timing of the monsoon onset are represented probabilistically for each respondent by constructing a probability mass function (PMF) from elicited responses about the earliest, normal, and latest dates for the event. A three-parameter circular modified triangular distribution (CMTD) is used to allow for the possibility (albeit small) of the onset at any time of the year. These distributions are then compared to the historical time series using two approaches: likelihood scores, and the mean and standard deviation of time series of dates simulated from each belief distribution. The proposed methods give a basis for further iterative discussion with decision-makers in the development of eventual climate services. This study uses Jessore, Bangladesh, as an example and finds that a rainfall definition, applying a 10 mm day⁻¹ threshold to NCEP-NCAR reanalysis (Reanalysis-1) data, best matches the survey respondents' beliefs about monsoon onset.
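
    A rough sketch of the belief-scoring idea: build a (non-circular) triangular PMF from one respondent's elicited earliest/normal/latest dates and compute a likelihood score against hypothetical historical onset dates. The paper's CMTD additionally handles the circular wrap around the year boundary; the small floor below mimics its allowance for onset on any day:

```python
import numpy as np

# One respondent's elicited beliefs (day of year): earliest, normal, latest.
early, normal, late = 140, 160, 185

# Triangular density over days, normalized to a PMF on integer days.
days = np.arange(1, 366)
pdf = np.where(
    (days >= early) & (days <= normal),
    (days - early) / max(normal - early, 1),
    np.where((days > normal) & (days <= late),
             (late - days) / max(late - normal, 1), 0.0),
)
pmf = (pdf + 1e-6) / (pdf + 1e-6).sum()   # small floor: any day is possible

# Historical onset dates from one scientific monsoon definition (hypothetical).
onsets = np.array([152, 158, 147, 171, 163, 155, 166, 149])

log_score = np.log(pmf[onsets - 1]).sum()  # likelihood score for this definition
print("log-likelihood score:", log_score)
```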

  19. A travel time forecasting model based on change-point detection method

    NASA Astrophysics Data System (ADS)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model for urban road traffic sensor data is proposed based on change-point detection. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large sequence of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. In computer simulation, different control parameters are chosen for the adaptive change-point search, dividing the travel time series into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
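
    A toy sketch of the preprocessing and segmentation steps, with a deliberately naive threshold detector standing in for the paper's change-point algorithm; the data and threshold are hypothetical:

```python
import numpy as np

# Hypothetical travel-time sequence (s) with a regime shift midway.
rng = np.random.default_rng(0)
tt = np.concatenate([rng.normal(60, 2, 100), rng.normal(75, 2, 100)])

# Step 1: first-order differencing, as in the paper's preprocessing.
d = np.diff(tt)

# Step 2: naive change-point detection (a stand-in for the paper's
# algorithm): flag points where the difference exceeds k robust sigmas.
k = 4.0
sigma = 1.4826 * np.median(np.abs(d - np.median(d)))  # MAD-based scale
change_points = np.flatnonzero(np.abs(d) > k * sigma) + 1
print("detected change points at indices:", change_points)

# Step 3 (not shown): fit a separate ARIMA model to each detected segment
# and forecast with linearly weighted recent observations.
```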

  20. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction.

    PubMed

    Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo

    2017-01-01

    To predict the output power of photovoltaic systems, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day are built with 15-minute intervals. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode function components IMFn and a trend component Res, at different scales using EMD. A corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system are obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than both the single SVM prediction model and the EMD-SVM prediction model without optimization.
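
    A sketch of the per-component prediction step, assuming the EMD decomposition has already produced the components; a scikit-learn grid search over SVR hyperparameters stands in for the ABC optimization, and the component series is synthetic:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Assume the EMD step has already produced components (IMFs + trend);
# here a single hypothetical component is forecast one step ahead.
rng = np.random.default_rng(0)
comp = np.sin(np.linspace(0, 20, 400)) + rng.normal(0, 0.1, 400)

# Lagged-feature regression: predict comp[t] from the previous p values.
p = 8
X = np.column_stack([comp[i:len(comp) - p + i] for i in range(p)])
y = comp[p:]

# Grid search over (C, gamma) stands in for the paper's artificial bee
# colony optimization of the SVM hyperparameters.
grid = GridSearchCV(SVR(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]}, cv=5)
grid.fit(X[:-50], y[:-50])
print("best params:", grid.best_params_,
      "test R^2:", grid.best_estimator_.score(X[-50:], y[-50:]))
```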

  2. Trend analysis of Arctic sea ice extent

    NASA Astrophysics Data System (ADS)

    Silva, M. E.; Barbosa, S. M.; Antunes, Luís; Rocha, Conceição

    2009-04-01

    The extent of Arctic sea ice is a fundamental parameter of Arctic climate variability. In the context of climate change, the area covered by ice in the Arctic is a particularly useful indicator of recent changes in the Arctic environment. Climate models are in near universal agreement that Arctic sea ice extent will decline through the 21st century as a consequence of global warming, and many studies predict an ice-free Arctic as soon as 2012. Time series of satellite passive microwave observations make it possible to assess the temporal changes in the extent of Arctic sea ice. Much of the analysis of the ice extent time series, as in most climate studies from observational data, has focused on the computation of deterministic linear trends by ordinary least squares. However, many different processes, including deterministic, unit-root, and long-range dependent processes, can engender trend-like features in a time series. Several parametric tests have been developed, mainly in econometrics, to discriminate between stationarity (no trend), deterministic trends, and stochastic trends. Here, these tests are applied in the trend analysis of the sea ice extent time series available at the National Snow and Ice Data Center. The parametric stationarity tests, Augmented Dickey-Fuller (ADF), Phillips-Perron (PP), and KPSS, do not support an overall deterministic trend in the time series of Arctic sea ice extent. Therefore, alternative parametrizations such as long-range dependence should be considered for characterising long-term Arctic sea ice variability.
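
    Two of the tests named above are available in statsmodels (the PP test is not; it can be found in third-party packages such as `arch`). A minimal sketch on a synthetic unit-root series standing in for the extent anomalies:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

# Hypothetical stand-in for a monthly sea ice extent anomaly series.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 1, 360))      # a unit-root (stochastic-trend) series

adf_stat, adf_p, *_ = adfuller(x)         # H0: unit root
kpss_stat, kpss_p, *_ = kpss(x, regression="ct", nlags="auto")  # H0: stationary around trend

print(f"ADF  p={adf_p:.3f} (small p -> reject unit root)")
print(f"KPSS p={kpss_p:.3f} (small p -> reject deterministic-trend stationarity)")
```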

  3. Mapping Cropland and Crop-type Distribution Using Time Series MODIS Data

    NASA Astrophysics Data System (ADS)

    Lu, D.; Chen, Y.; Moran, E. F.; Batistella, M.; Luo, L.; Pokhrel, Y.; Deb, K.

    2016-12-01

    Mapping regional and global cropland distribution has attracted great attention in the past decade, but separating crop types is challenging due to spectral confusion and cloud cover during the growing season in Brazil. The objective of this study is to develop a new approach to identify crop types (soybean, cotton, and maize) and planting patterns (soybean-maize, soybean-cotton, and single crop) in Mato Grosso, Goias and Tocantins States, Brazil. Time series of Moderate Resolution Imaging Spectroradiometer (MODIS) normalized difference vegetation index (NDVI) data (MOD13Q1) from 2015/2016 were used in this research, and field survey data were collected in May 2016. The major steps include: (1) reconstruct the time series NDVI data contaminated by noise and clouds using a temporal interpolation algorithm; (2) identify the best periods and develop temporal indices and phenology parameters to distinguish cropland from other land cover types based on the time series NDVI data; (3) develop a crop temporal difference index (CTDI) to extract crop types and patterns using the time series NDVI data. This research shows that (1) cropland occupied approximately 16.85% of total land in these three states, and (2) soybean-maize and soybean-cotton were the two major crop patterns, occupying 54.80% and 19.30% of the total cropland area. These results indicate that the proposed approach is promising for accurately and rapidly mapping cropland and crop-type distribution in these three states of Brazil.

  4. A User's Version View of a Robustified, Bayesian Weighted Least-Squares Recursive Algorithm for Interpolating AVHRR-NDVI Data: Applications to an Animated Visualization of the Phenology of a Semi-Arid Study Area

    NASA Astrophysics Data System (ADS)

    Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.

    2005-12-01

    In studying vegetation patterns remotely, the objective is to draw inferences on the development of specific or general land surface phenology (LSP) as a function of space and time by determining the behavior of a parameter (in our case NDVI), when the parameter estimate may be biased by noise, data dropouts and obfuscations from atmospheric and other effects. We describe the underpinning concepts of a procedure for a robust interpolation of NDVI data that does not have the limitations of other mathematical approaches which require orthonormal basis functions (e.g. Fourier analysis). In this approach, data need not be uniformly sampled in time, nor do we expect noise to be Gaussian-distributed. Our approach is intuitive and straightforward, and is applied here to the refined modeling of LSP using 7 years of weekly and biweekly AVHRR NDVI data for a 150 x 150 km study area in central Nevada. This site is a microcosm of a broad range of vegetation classes, from irrigated agriculture with annual NDVI values of up to 0.7 to playas and alkali salt flats with annual NDVI values of only 0.07. Our procedure involves a form of parameter estimation employing Bayesian statistics. In utilitarian terms, the latter procedure is a method of statistical analysis (in our case, robustified, weighted least-squares recursive curve-fitting) that incorporates a variety of prior knowledge when forming current estimates of a particular process or parameter. In addition to the standard Bayesian approach, we account for outliers due to data dropouts or obfuscations because of clouds and snow cover. An initial "starting model" for the average annual cycle and long-term (7 year) trend is determined by jointly fitting a common set of complex annual harmonics and a low-order polynomial to the entire multi-year time series in one step. This is not a formal Fourier series in the conventional sense, but rather a set of 4 cosine and 4 sine coefficients with fundamental periods of 12, 6, 3 and 1.5 months. Instabilities during large time gaps in the data are suppressed by introducing an expectation of minimum roughness on the fitted time series. Our next significant computational step involves a constrained least squares fit to the observed NDVI data. Residuals between the observed NDVI value and the predicted starting model are computed, and the inverse of these residuals provides the weights for a weighted least squares analysis whereby a set of annual eighth-order splines is fit to the 7 years of NDVI data. Although a series of independent eighth-order annual functionals over a period of 7 years is intrinsically unstable when there are significant data gaps, the splined versions for this specific application are quite stable due to explicit continuity conditions on the values and derivatives of the functionals across contiguous years, as well as a priori constraints on the predicted values vis-a-vis the assumed initial model. Our procedure allows us to robustly interpolate original unequally-spaced NDVI data with a new time series having the most appropriate, user-defined time base. We apply this approach to the temporal behavior of vegetation in our 150 x 150 km study area. Such a small area, being so rich in vegetation diversity, is particularly useful to view in map form and by animated annual and multi-year time sequences, since the interrelation between phenology, topography and specific usage patterns becomes clear.
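
    A condensed sketch of the starting-model fit: weighted least squares on a design matrix of a linear trend plus harmonics at 12, 6, 3, and 1.5 months, applied to synthetic NDVI data. The minimum-roughness constraint and the annual-spline stage are omitted, and the uniform weights here would be replaced by inverse-residual weights in the robustified procedure:

```python
import numpy as np

# Hypothetical biweekly NDVI series over 7 years, with noise.
rng = np.random.default_rng(0)
t = np.arange(0, 7 * 12, 0.5)                      # time in months
ndvi = (0.3 + 0.001 * t + 0.15 * np.cos(2 * np.pi * t / 12 - 2.0)
        + rng.normal(0, 0.03, t.size))
w = np.ones_like(ndvi)                             # robust weights (all 1 here)

# Design matrix: intercept, linear trend, and 4 cosine/sine pairs at
# periods of 12, 6, 3 and 1.5 months, as in the starting model above.
cols = [np.ones_like(t), t]
for period in (12.0, 6.0, 3.0, 1.5):
    cols += [np.cos(2 * np.pi * t / period), np.sin(2 * np.pi * t / period)]
A = np.column_stack(cols)

# Weighted least squares via row scaling of the design matrix.
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], ndvi * sw, rcond=None)
fitted = A @ coef
print("estimated trend (NDVI/month):", coef[1])
```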

  5. Hangar Fire Suppression Utilizing Novec 1230

    DTIC Science & Technology

    2018-01-01

    …fuel fires in aircraft hangars. A 30×30×8-ft concrete-and-steel test structure was constructed for this test series. Four discharge assemblies…structure. System discharge parameters (discharge time, discharge rate, and quantity of agent discharged) were adjusted to produce the desired Novec 1230…

  6. An empirical comparison of SPM preprocessing parameters to the analysis of fMRI data.

    PubMed

    Della-Maggiore, Valeria; Chau, Wilkin; Peres-Neto, Pedro R; McIntosh, Anthony R

    2002-09-01

    We present the results from two sets of Monte Carlo simulations aimed at evaluating the robustness of some preprocessing parameters of SPM99 for the analysis of functional magnetic resonance imaging (fMRI) data. Statistical robustness was estimated by implementing parametric and nonparametric simulation approaches based on the images obtained from an event-related fMRI experiment. Simulated datasets were tested for combinations of the following parameters: basis function, global scaling, low-pass filter, high-pass filter and autoregressive modeling of serial autocorrelation. Based on single-subject SPM analysis, we derived the following conclusions that may serve as a guide for initial analysis of fMRI data using SPM99: (1) The canonical hemodynamic response function is a more reliable basis function to model the fMRI time series than the HRF with time derivative. (2) Global scaling should be avoided since it may significantly decrease the power, depending on the experimental design. (3) The use of a high-pass filter may be beneficial for event-related designs with fixed interstimulus intervals. (4) When dealing with fMRI time series with short interstimulus intervals (<8 s), the use of a first-order autoregressive model is recommended over a low-pass filter (HRF) because it reduces the risk of inferential bias while providing relatively good power. For datasets with interstimulus intervals longer than 8 s, temporal smoothing is not recommended since it decreases power. While the generalizability of our results may be limited, the methods we employed can be easily implemented by other scientists to determine the best parameter combination to analyze their data.

  7. Kumaraswamy autoregressive moving average models for double bounded environmental data

    NASA Astrophysics Data System (ADS)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the double-bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.

  8. Decadal-Scale Crustal Deformation Transients in Japan Prior to the March 11, 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Mavrommatis, A. P.; Segall, P.; Miyazaki, S.; Owen, S. E.; Moore, A. W.

    2012-12-01

    Excluding postseismic transients and slow-slip events, interseismic deformation is generally believed to accumulate linearly in time. We test this assumption using data from Japan's GPS Earth Observation Network System (GEONET), which provides high-precision time series spanning over 10 years. Here we report regional signals of decadal transients that in some cases appear to be unrelated to any known source of deformation. We analyze GPS position time series processed independently, using the BERNESE and GIPSY-PPP software, provided by the Geospatial Information Authority of Japan (GSI) and a collaborative effort of Jet Propulsion Laboratory (JPL) and Dr. Mark Simons (Caltech), respectively. We use time series from 891 GEONET stations, spanning an average of ~14 years prior to the Mw 9.0 March 11, 2011 Tohoku earthquake. We assume a time series model that includes a linear term representing constant velocity, as well as a quadratic term representing constant acceleration. Postseismic transients, where observed, are modeled by A log(1 + t/tc). We also model seasonal terms and antenna offsets, and solve for the best-fitting parameters using standard nonlinear least squares. Uncertainties in model parameters are determined by linear propagation of errors. Noise parameters are inferred from time series that lack obvious transients using maximum-likelihood estimation and assuming a combination of power-law and white noise. Resulting velocity uncertainties are on the order of 1.0 to 1.5 mm/yr. Excluding stations with high misfit to the time series model, our results reveal several spatially coherent patterns of statistically significant (at as much as 5σ) apparent crustal acceleration in various regions of Japan. The signal exhibits similar patterns in both the GSI and JPL solutions and is not coherent across the entire network, which indicates that the pattern is not a reference frame artifact. We interpret most of the accelerations to represent transient deformation due to known sources, including slow-slip events (e.g., the post-2000 Tokai event) or postseismic transients due to large earthquakes prior to 1996 (e.g., the M 7.7 1993 Hokkaido-Nansei-Oki and M 7.7 1994 Sanriku-Oki earthquakes). Viscoelastic modeling will be required to confirm the influence of past earthquakes on the acceleration field. In addition to these signals, we find spatially coherent accelerations in the Tohoku and Kyushu regions. Specifically, we observe generally southward acceleration extending for ~400 km near the west coast of Tohoku, east-southeastward acceleration covering ~200 km along the southeast coast of Tohoku, and west-northwestward acceleration spanning ~100 km across the south coast of Kyushu. Interestingly, the eastward acceleration field in Tohoku is spatially correlated with the extent of the March 11, 2011 Mw 9.0 rupture area. We note that the inferred acceleration is present prior to the sequence of M 7+ earthquakes beginning in 2003, and that short-term transients following these events have been accounted for in the analysis. A possible, although non-unique, cause of the acceleration is increased slip rate on the Japan Trench. However, such widespread changes would not be predicted by standard earthquake nucleation models.
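
    A minimal sketch of fitting such a time series model by nonlinear least squares, assuming a synthetic daily series, a known earthquake epoch, and white noise only (the study infers power-law plus white noise and propagates errors accordingly); all values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical daily GPS east-component series (mm), 14 years, with an
# earthquake at t_eq followed by logarithmic postseismic decay.
rng = np.random.default_rng(0)
t = np.arange(0, 14, 1 / 365.25)                   # time in years
t_eq = 8.0
post_true = np.log1p(np.maximum(t - t_eq, 0.0) / 0.1)
y = (2.0 + 3.0 * t + 0.5 * 0.2 * t**2 + 1.5 * np.sin(2 * np.pi * t)
     + 4.0 * post_true + rng.normal(0, 1.0, t.size))

def model(t, a, v, c, s1, s2, amp, tc):
    # Constant velocity + constant acceleration + annual seasonal terms
    # + A*log(1 + (t - t_eq)/tc) after the event.
    post = np.log1p(np.maximum(t - t_eq, 0.0) / tc)
    return (a + v * t + 0.5 * c * t**2
            + s1 * np.sin(2 * np.pi * t) + s2 * np.cos(2 * np.pi * t)
            + amp * post)

lb = [-np.inf] * 6 + [0.01]                        # keep tc positive
p, cov = curve_fit(model, t, y, p0=[0, 1, 0, 1, 1, 1, 0.5],
                   bounds=(lb, np.inf))
sig = np.sqrt(np.diag(cov))                        # linearized 1-sigma errors
print(f"acceleration c = {p[2]:.3f} +/- {sig[2]:.3f} mm/yr^2")
```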

  9. Reconstruction method for data protection in telemedicine systems

    NASA Astrophysics Data System (ADS)

    Buldakova, T. I.; Suyatinov, S. I.

    2015-03-01

    In this report, an approach to protecting transmitted data by creating paired symmetric keys for the sensor and the receiver is offered. Since biosignals are unique to each person, suitable processing of them yields the information needed to create cryptographic keys. The processing is based on reconstructing a mathematical model that generates time series diagnostically equivalent to the original biosignals. Information about the model is transmitted to the receiver, where the physiological time series are restored using the reconstructed model. Thus, the information about the structure and parameters of the biosystem model obtained in the reconstruction process can be used not only for diagnostics, but also for protecting transmitted data in telemedicine complexes.

  10. Asymmetry in power-law magnitude correlations.

    PubMed

    Podobnik, Boris; Horvatić, Davor; Tenenbaum, Joel N; Stanley, H Eugene

    2009-07-01

    Time series of increments can be created in a number of different ways from a variety of physical phenomena. For example, in the phenomenon of volatility clustering, well known in finance, the magnitudes of adjacent increments are correlated. Moreover, in some time series, magnitude correlations display asymmetry with respect to an increment's sign: the magnitude |x_i| depends on the sign of the previous increment x_{i-1}. Here we define a model-independent test to measure the statistical significance of any observed asymmetry. We propose a simple stochastic process characterized by an asymmetry parameter λ and a method for estimating λ. We illustrate both the test and the process by analyzing physiological data.

  11. An M-estimator for reduced-rank system identification.

    PubMed

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T

    2017-01-15

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.

  13. Generalized sample entropy analysis for traffic signals based on similarity measure

    NASA Astrophysics Data System (ADS)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on similarity distance, presents a different way of matching signal patterns that reveals distinct behaviors of complexity. Simulations are conducted on synthetic data and traffic signals to provide a comparative study demonstrating the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation imposed by the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.
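
    For reference, a baseline implementation of classical sample entropy (the paper's generalization replaces the Chebyshev match criterion with a similarity distance); parameters m and r follow common defaults:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Classical sample entropy SampEn(m, r) of a 1-D series."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def count_matches(mm):
        # Embed the series in dimension mm and count vector pairs within r
        # under the Chebyshev (max-abs) distance, excluding self-matches.
        emb = np.array([x[i:len(x) - mm + i + 1] for i in range(mm)]).T
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        return ((d <= r).sum() - len(emb)) / 2     # unordered pairs
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
print("white noise :", sample_entropy(rng.normal(size=500)))
print("sine wave   :", sample_entropy(np.sin(np.linspace(0, 30, 500))))
```

    As expected, the regular sine wave yields a much lower entropy than white noise of the same length.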

  14. Time series models for prediction the total and dissolved heavy metals concentration in road runoff and soil solution of roadside embankments

    NASA Astrophysics Data System (ADS)

    Aljoumani, Basem; Kluge, Björn; sanchez, Josep; Wessolek, Gerd

    2017-04-01

    Highways and main roads are potential sources of contamination for the surrounding environment. High traffic volumes result in elevated heavy metal concentrations in road runoff, soil, and water seepage, which has attracted much attention in the recent past. Predicting the transfer of heavy metals from the roadside into deeper soil layers is very important for preventing groundwater pollution. This study was carried out on data from a number of lysimeters installed along the A115 highway (Germany), which carries a mean daily traffic of 90,000 vehicles per day. Three polyethylene (PE) lysimeters were installed at the A115 highway, with the following dimensions: length 150 cm, width 100 cm, height 60 cm. The lysimeters were filled with different soil materials recently used for embankment construction in Germany. With the obtained data, we will develop a time series analysis model to predict total and dissolved metal concentrations in road runoff and in the soil solution of the roadside embankments. The time series consists of monthly measurements of heavy metals and was transformed to stationarity. Subsequently, the transformed data will be used to conduct analyses in the time domain in order to obtain the parameters of a seasonal autoregressive integrated moving average (ARIMA) model. A four-phase approach to identifying and fitting ARIMA models will be used: identification, parameter estimation, diagnostic checking, and forecasting. An automatic selection criterion, such as the Akaike information criterion, will be used to enhance this flexible approach to model building.
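
    A minimal sketch of the four-phase ARIMA workflow with AIC-based order selection, on synthetic monthly concentrations; the order grid is illustrative and residual diagnostic checking is left out for brevity:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly metal concentrations in soil solution (ug/L).
rng = np.random.default_rng(0)
n = 48
y = 50 + 10 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 3, n)

# Identification + estimation: small grid of seasonal ARIMA orders,
# ranked by AIC (the automatic criterion mentioned above).
best = None
for p in range(3):
    for q in range(3):
        res = ARIMA(y, order=(p, 1, q),
                    seasonal_order=(1, 0, 0, 12)).fit()
        if best is None or res.aic < best[0]:
            best = (res.aic, (p, 1, q), res)

aic, order, res = best
print("selected order:", order, "AIC:", round(aic, 1))
print("6-month forecast:", res.forecast(steps=6).round(1))  # forecasting phase
```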

  15. Inferring epidemiological dynamics of infectious diseases using Tajima's D statistic on nucleotide sequences of pathogens.

    PubMed

    Kim, Kiyeon; Omori, Ryosuke; Ito, Kimihito

    2017-12-01

    The estimation of the basic reproduction number is essential to understand epidemic dynamics, and time series data of infected individuals are usually used for the estimation. However, such data are not always available. Methods to estimate the basic reproduction number using genealogies constructed from nucleotide sequences of pathogens have been proposed so far. Here, we propose a new method to estimate epidemiological parameters of outbreaks using the time series change of Tajima's D statistic on the nucleotide sequences of pathogens. To relate the time evolution of Tajima's D to the number of infected individuals, we constructed a parsimonious mathematical model describing both the transmission process of pathogens among hosts and the evolutionary process of the pathogens. As a case study we applied this method to field data of nucleotide sequences of pandemic influenza A (H1N1) 2009 viruses collected in Argentina. The Tajima's D-based method estimated the basic reproduction number to be 1.55, with a 95% highest posterior density (HPD) between 1.31 and 2.05, and the date of the epidemic peak to be 10th July, with a 95% HPD between 22nd June and 9th August. The estimated basic reproduction number was consistent with the estimate from a birth-death skyline plot and with estimation using the time series of the number of infected individuals. These results suggest that Tajima's D statistic on nucleotide sequences of pathogens can be useful for estimating epidemiological parameters of outbreaks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
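
    For reference, a direct implementation of Tajima's D from standard summary statistics (the constants follow Tajima's 1989 formulas); the transmission-evolution model that couples the time series of D to incidence is not reproduced here, and the input values are hypothetical:

```python
import numpy as np

def tajimas_d(n, S, pi):
    """Tajima's D from the number of sequences n, the number of
    segregating sites S, and the mean pairwise differences pi."""
    i = np.arange(1, n)
    a1, a2 = np.sum(1.0 / i), np.sum(1.0 / i**2)
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    # D = (pi - S/a1) / sqrt(e1*S + e2*S*(S-1))
    return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))

# Hypothetical snapshot of sequences sampled during an outbreak:
print(tajimas_d(n=30, S=45, pi=7.1))
```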

  16. Modeling the dynamics of metabolism in montane streams using continuous dissolved oxygen measurements

    NASA Astrophysics Data System (ADS)

    Birkel, Christian; Soulsby, Chris; Malcolm, Iain; Tetzlaff, Doerthe

    2013-09-01

    We inferred in-stream ecosystem processes in terms of photosynthetic productivity (P), system respiration (R), and reaeration capacity (RC) from a five-parameter numerical oxygen mass balance model driven by radiation, stream and air temperature, and stream depth. This was calibrated to high-resolution (15 min), long-term (2.5 years) dissolved oxygen (DO) time series for moorland and forest reaches of a third-order montane stream in Scotland. The model was multicriteria calibrated to continuous 24 h periods within the time series to identify behavioral simulations representative of ecosystem functioning. Results were evaluated using a seasonal regional sensitivity analysis and a colinearity index for parameter sensitivity. This showed that >95% of the behavioral models for the moorland and forest sites were identifiable and able to infer in-stream processes from the DO time series for around 40% and 32% of the time period, respectively. Monthly P/R ratios <1 indicate a heterotrophic system, with both sites exhibiting similar temporal patterns: a maximum in February and a second peak during summer months. However, the estimated net ecosystem productivity suggests that the moorland reach without riparian tree cover is likely to be a much larger source of carbon to the atmosphere (122 mmol C m-2 d-1) compared to the forested reach (64 mmol C m-2 d-1). We conclude that such process-based oxygen mass balance models may be transferable tools for investigating other systems; specifically, well-oxygenated upland channels with high hydraulic roughness and lacking reaeration measurements.
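
    A simplified stand-in for such an oxygen mass-balance model (fewer parameters than the five described above), integrated with SciPy; the forcing and all parameter values are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal single-station DO mass balance: photosynthesis scaled by
# daytime radiation, constant respiration, first-order reaeration.
P_max, R, k, O_sat = 0.8, 0.5, 0.15, 10.0   # hypothetical parameter values

def light(t_hours):
    # Half-sinusoid daylight forcing between 06:00 and 18:00 each day.
    h = t_hours % 24.0
    return np.where((h > 6) & (h < 18), np.sin(np.pi * (h - 6) / 12), 0.0)

def dOdt(t, O):
    # dO/dt = photosynthesis - respiration + reaeration toward saturation
    return P_max * light(t) - R + k * (O_sat - O)

t_eval = np.linspace(0, 72, 72 * 4 + 1)       # 15-min output over 3 days
sol = solve_ivp(dOdt, (0, 72), [9.0], t_eval=t_eval, max_step=0.25)
print("DO range (mg/L): %.2f to %.2f" % (sol.y[0].min(), sol.y[0].max()))
```

    Calibrating the parameters against an observed 15-min DO series, as in the study, would then amount to minimizing the misfit between `sol.y[0]` and the measurements over each 24 h window.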

  17. Volatility measurement with directional change in Chinese stock market: Statistical property and investment strategy

    NASA Astrophysics Data System (ADS)

    Ma, Junjun; Xiong, Xiong; He, Feng; Zhang, Wei

    2017-04-01

    Stock price fluctuations are studied in this paper from an intrinsic time perspective. Events, directional changes (DC) and overshoots, are taken as the time scale of the price series. The statistical properties of this directional change law and the corresponding parameter estimation are tested on the Chinese stock market. Furthermore, a directional change trading strategy is proposed for investing in the market portfolio in the Chinese stock market, and both in-sample and out-of-sample performance are compared among the different methods of model parameter estimation. We conclude that the DC method can capture important fluctuations in the Chinese stock market and gain profit due to the statistical property that the average upturn overshoot size is bigger than the average downturn directional change size. The optimal parameter of the DC method is not fixed, and we obtained a 1.8% annual excess return with this DC-based trading strategy.
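
    A minimal sketch of directional-change event detection at a fixed threshold; the threshold value and the price series are hypothetical, and in the paper the optimal threshold is itself estimated rather than fixed:

```python
import numpy as np

def directional_changes(prices, theta=0.02):
    """Detect directional-change (DC) events at relative threshold theta.
    Returns the indices at which a trend reversal is confirmed."""
    events, mode = [], "up"          # initial trend assumption
    ext = prices[0]                  # running extreme (max in up, min in down)
    for i, p in enumerate(prices):
        if mode == "up":
            ext = max(ext, p)
            if p <= ext * (1 - theta):      # downturn DC confirmed
                events.append(i); mode, ext = "down", p
        else:
            ext = min(ext, p)
            if p >= ext * (1 + theta):      # upturn DC confirmed
                events.append(i); mode, ext = "up", p
    return events

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))
print("number of DC events:", len(directional_changes(prices)))
```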

  18. Significance testing of clinical data using virus dynamics models with a Markov chain Monte Carlo method: application to emergence of lamivudine-resistant hepatitis B virus.

    PubMed Central

    Burroughs, N J; Pillay, D; Mutimer, D

    1999-01-01

    Bayesian analysis using a virus dynamics model is demonstrated to facilitate hypothesis testing of patterns in clinical time-series. Our Markov chain Monte Carlo implementation demonstrates that the viraemia time-series observed in two sets of hepatitis B patients on antiviral (lamivudine) therapy, chronic carriers and liver transplant patients, are significantly different, overcoming clinical trial design differences that question the validity of non-parametric tests. We show that lamivudine-resistant mutants grow faster in transplant patients than in chronic carriers, which probably explains the differences in emergence times and failure rates between these two sets of patients. Incorporation of dynamic models into Bayesian parameter analysis is of general applicability in medical statistics. PMID:10643081

  19. Geophysical parameters from the analysis of laser ranging to Starlette

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Shum, C. K.; Tapley, B. D.

    1991-01-01

    The University of Texas Center for Space Research (UT/CSR) research efforts covering the time period from August 1, 1990 through January 31, 1991 have concentrated on the following areas: (1) Laser Data Processing (more than 15 years of Starlette data (1975-90) have been processed and cataloged); (2) Seasonal Variation of Zonal Tides (the observed Starlette time series has been compared with a time series derived from meteorological data); (3) Ocean Tide Solutions (error analysis has been performed using Starlette and other tide solutions); and (4) Lunar Deceleration (the formulation to compute theoretical lunar deceleration has been verified and applied to several tidal solutions). Concise descriptions of the research achievements for each of the above areas are given. Copies of abstracts for some of the publications and conference presentations are included in the appendices.

  20. Rényi’s information transfer between financial time series

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Kleinert, Hagen; Shefaat, Mohammad

    2012-05-01

    In this paper, we quantify the statistical coherence between financial time series by means of the Rényi entropy. With the help of Campbell’s coding theorem, we show that the Rényi entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppressing others. This accentuation is controlled with Rényi’s parameter q. To tackle the issue of the information flow between time series, we formulate the concept of Rényi’s transfer entropy as a measure of information that is transferred only between certain parts of underlying distributions. This is particularly pertinent in financial time series, where the knowledge of marginal events such as spikes or sudden jumps is of a crucial importance. We apply the Rényian information flow to stock market time series from 11 world stock indices as sampled at a daily rate in the time period 02.01.1990-31.12.2009. Corresponding heat maps and net information flows are represented graphically. A detailed discussion of the transfer entropy between the DAX and S&P500 indices based on minute tick data gathered in the period 02.04.2008-11.09.2009 is also provided. Our analysis shows that the bivariate information flow between world markets is strongly asymmetric with a distinct information surplus flowing from the Asia-Pacific region to both European and US markets. An important yet less dramatic excess of information also flows from Europe to the US. This is particularly clearly seen from a careful analysis of Rényi information flow between the DAX and S&P500 indices.
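
    A small sketch of the Rényi entropy estimate underlying this construction, showing how the parameter q reweights the empirical distribution; the histogram estimator and the heavy-tailed synthetic returns are assumptions for illustration:

```python
import numpy as np

def renyi_entropy(samples, q, bins=50):
    """Rényi entropy H_q = log2(sum p^q) / (1 - q) of a histogram estimate;
    q < 1 emphasizes rare (tail) events, q > 1 the central part."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log2(p))          # Shannon limit as q -> 1
    return np.log2(np.sum(p ** q)) / (1.0 - q)

# Hypothetical daily log-returns with heavy tails (Student-t).
rng = np.random.default_rng(0)
r = rng.standard_t(df=3, size=5000) * 0.01
for q in (0.5, 1.0, 2.0):
    print(f"H_{q} = {renyi_entropy(r, q):.3f} bits")
```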

  1. Wavelet application to the time series analysis of DORIS station coordinates

    NASA Astrophysics Data System (ADS)

    Bessissi, Zahia; Terbeche, Mekki; Ghezali, Boualem

    2009-06-01

    This article addresses the analysis of residual time series of DORIS station coordinates using the wavelet transform. Several analysis techniques, already developed in other disciplines, were employed in the statistical study of the geodetic time series of stations. The wavelet transform makes it possible, on the one hand, to characterize the residual signals in both time and frequency and, on the other hand, to determine and quantify systematic signals such as periodicity and tendency. Tendency is the short- or long-term change in the signal; it is an average curve that represents the general pace of the signal's evolution. Periodicity, on the other hand, is a process that repeats, identical to itself, after a time interval called the period. In this context, this article consists, on the one hand, in determining the systematic signals by wavelet analysis of the time series of DORIS station coordinates, and on the other hand, in applying wavelet-packet denoising, which makes it possible to obtain a well-filtered signal, smoother than the original one. The DORIS data used are a set of weekly residual time series from 1993 to 2004 for eight stations: DIOA, COLA, FAIB, KRAB, SAKA, SODB, THUB and SYPB. This is the ign03wd01 solution expressed in stcd format, derived by the IGN/JPL analysis center. Although these data are not very recent, the goal of this study is to assess the contribution of the wavelet analysis method on DORIS data, compared to the other analysis methods already studied.
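
    A brief sketch of wavelet denoising with PyWavelets on a synthetic stand-in for a weekly residual coordinate series; plain multilevel DWT thresholding is shown here, whereas the study uses wavelet-packet denoising:

```python
import numpy as np
import pywt

# Hypothetical weekly residual coordinate series (mm) for one station:
# an annual periodicity plus a slow tendency, buried in noise.
rng = np.random.default_rng(0)
t = np.arange(600)                                    # weeks
x = (3.0 * np.sin(2 * np.pi * t / 52.0) + 0.005 * t
     + rng.normal(0, 2.0, t.size))

# Multilevel DWT, soft-thresholding of detail coefficients, reconstruction.
coeffs = pywt.wavedec(x, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale estimate
thr = sigma * np.sqrt(2 * np.log(x.size))             # universal threshold
den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
x_denoised = pywt.waverec(den_coeffs, "db4")[: x.size]
print("series std / removed-noise std: %.2f / %.2f"
      % (x.std(), (x - x_denoised).std()))
```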

  2. Data Reorganization for Optimal Time Series Data Access, Analysis, and Visualization

    NASA Astrophysics Data System (ADS)

    Rui, H.; Teng, W. L.; Strub, R.; Vollmer, B.

    2012-12-01

    The way data are archived is often not optimal for their access by many user communities (e.g., hydrological), particularly if the data volumes and/or number of data files are large. The number of data records of a non-static data set generally increases with time. Therefore, most data sets are commonly archived by time steps, one step per file, often containing multiple variables. However, many research and application efforts need time series data for a given geographical location or area, i.e., a data organization that is orthogonal to the way the data are archived. The retrieval of a time series of the entire temporal coverage of a data set for a single variable at a single data point, in an optimal way, is an important and longstanding challenge, especially for large science data sets (i.e., with volumes greater than 100 GB). Two examples of such large data sets are the North American Land Data Assimilation System (NLDAS) and Global Land Data Assimilation System (GLDAS), archived at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC; Hydrology Data Holdings Portal, http://disc.sci.gsfc.nasa.gov/hydrology/data-holdings). To date, the NLDAS data set, hourly at 0.125° x 0.125° from Jan. 1, 1979 to present, has a total volume greater than 3 TB (compressed). The GLDAS data set, 3-hourly and monthly at 0.25° x 0.25° and 1.0° x 1.0° from Jan. 1948 to present, has a total volume greater than 1 TB (compressed). Both data sets are accessible, in the archived time step format, via several convenient methods, including Mirador search and download (http://mirador.gsfc.nasa.gov/), GrADS Data Server (GDS; http://hydro1.sci.gsfc.nasa.gov/dods/), direct FTP (ftp://hydro1.sci.gsfc.nasa.gov/data/s4pa/), and Giovanni Online Visualization and Analysis (http://disc.sci.gsfc.nasa.gov/giovanni). However, users who need long time series currently have no efficient way to retrieve them. Continuing a longstanding tradition of facilitating data access, analysis, and visualization that contribute to knowledge discovery from large science data sets, the GES DISC recently began a NASA ACCESS-funded project to, in part, optimally reorganize selected large data sets for access and use by the hydrological user community. This presentation discusses the following aspects of the project: (1) explorations of approaches, such as database and file system; (2) findings for each approach, such as limitations and concerns, and pros and cons; (3) implementation of reorganizing data via the file system approach, including data processing (parameter and spatial subsetting), metadata and file structure of reorganized time series data (true "Data Rod," single variable, single grid point, and entire data range per file), and production and quality control. The reorganized time series data will be integrated into several broadly used data tools, such as NASA Giovanni and those provided by CUAHSI HIS (http://his.cuahsi.org/) and EPA BASINS (http://water.epa.gov/scitech/datait/models/basins/), as well as accessible via direct FTP, along with documentation and sample reading software. The data reorganization is initially, as part of the project, applied to selected popular hydrology-related parameters, with other parameters to be added, as resources permit.
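
    The core of the reorganization can be pictured as a transposition from time-major to point-major storage. The toy Python sketch below makes the idea concrete for small arrays; the real NLDAS/GLDAS holdings are far too large to stack in memory, so the production system necessarily works in chunks, and the file names and shapes here are invented for illustration.

        import numpy as np

        def build_data_rods(step_files, out_dir):
            # One input file per time step, each a 2-D grid of one variable.
            grids = [np.load(f) for f in step_files]   # T arrays of shape (ny, nx)
            cube = np.stack(grids)                     # (T, ny, nx): time-major
            rods = cube.transpose(1, 2, 0)             # (ny, nx, T): point-major
            for j in range(rods.shape[0]):
                for i in range(rods.shape[1]):
                    # one "data rod": the full time series of one variable
                    # at one grid point, in one small file
                    np.save(f"{out_dir}/rod_{j}_{i}.npy", rods[j, i])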

  3. Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method.

    PubMed

    Jia, Gengjie; Stephanopoulos, Gregory N; Gunawan, Rudiyanto

    2011-07-15

    Time-series measurements of metabolite concentration have become increasingly common, providing data for building kinetic models of metabolic networks using ordinary differential equations (ODEs). In practice, however, such time-course data are usually incomplete and noisy, and the estimation of kinetic parameters from these data is challenging. Practical limitations due to data and computational aspects, such as solving stiff ODEs and finding the global optimum of the estimation problem, motivate the development of a new estimation procedure that can circumvent some of these constraints. In this work, an incremental and iterative parameter estimation method is proposed that combines and iterates between two estimation phases. One phase involves a decoupling method, in which the subset of model parameters associated with measured metabolites is estimated by minimizing slope errors. Another phase follows, in which the ODE model is solved one equation at a time and the remaining model parameters are obtained by minimizing concentration errors. The performance of this two-phase method was tested on a generic branched metabolic pathway and the glycolytic pathway of Lactococcus lactis. The results showed that the method is efficient in obtaining accurate parameter estimates, even when some information is missing.
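
    A minimal sketch of the two phases, for a single measured metabolite obeying a toy kinetic equation (the real method handles networks of coupled ODEs; the model, names and data below are illustrative only):

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def rhs(t, x, p):                        # toy kinetics: x' = p0 - p1*x
            return p[0] - p[1] * x

        def phase1_slope_fit(t, x_meas, p0):
            dxdt = np.gradient(x_meas, t)        # empirical slopes from the data
            return least_squares(lambda p: rhs(t, x_meas, p) - dxdt, p0).x

        def phase2_concentration_fit(t, x_meas, p0):
            def res(p):                          # solve one ODE, compare concentrations
                sol = solve_ivp(rhs, (t[0], t[-1]), [x_meas[0]],
                                t_eval=t, args=(p,))
                return sol.y[0] - x_meas
            return least_squares(res, p0).x

        t = np.linspace(0.0, 5.0, 40)
        x = 2.0 - 1.5 * np.exp(-1.2 * t) + 0.02 * np.random.randn(t.size)
        p = phase1_slope_fit(t, x, np.ones(2))   # cheap decoupled estimate
        p = phase2_concentration_fit(t, x, p)    # refined by integration

    The slope phase avoids integrating stiff ODEs during the initial search; the concentration phase then removes the bias that numerical differentiation introduces.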

  4. Connectionist Architectures for Time Series Prediction of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Weigend, Andreas Sebastian

    We investigate the effectiveness of connectionist networks for predicting the future continuation of temporal sequences. The problem of overfitting, particularly serious for short records of noisy data, is addressed by the method of weight-elimination: a term penalizing network complexity is added to the usual cost function in back-propagation. We describe the dynamics of the procedure and clarify the meaning of the parameters involved. From a Bayesian perspective, the complexity term can be usefully interpreted as an assumption about the prior distribution of the weights. We analyze three time series. On the benchmark sunspot series, the networks outperform traditional statistical approaches. We show that the network performance does not deteriorate when there are more input units than needed. In the second example, the notoriously noisy foreign exchange rate series, we pick one weekday and one currency pair (DM vs. US dollar). Given exchange rate information up to and including a Monday, the task is to predict the rate for the following Tuesday. Weight-elimination manages to extract a significant part of the dynamics and makes the solution interpretable. In the third example, the networks predict the resource utilization of a chaotic computational ecosystem for hundreds of steps forward in time.
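
    The weight-elimination penalty has a simple closed form, lambda * sum_i (w_i/w0)^2 / (1 + (w_i/w0)^2): it grows like w^2 for small weights but saturates near 1 for large ones, so it effectively counts, and prunes, superfluous weights. Below is a hedged PyTorch sketch, with modern autograd standing in for the original back-propagation code and a fixed lambda standing in for the adaptive schedule described in the thesis.

        import torch

        def weight_elimination(params, w0=1.0):
            # sum over all parameters of (w/w0)^2 / (1 + (w/w0)^2);
            # biases are included here only for simplicity
            return sum(((p / w0) ** 2 / (1 + (p / w0) ** 2)).sum() for p in params)

        net = torch.nn.Sequential(torch.nn.Linear(12, 8), torch.nn.Tanh(),
                                  torch.nn.Linear(8, 1))
        opt = torch.optim.SGD(net.parameters(), lr=1e-2)
        lam = 1e-4                                   # illustrative complexity weight
        x, y = torch.randn(64, 12), torch.randn(64, 1)
        for _ in range(200):
            opt.zero_grad()
            loss = (torch.mean((net(x) - y) ** 2)
                    + lam * weight_elimination(net.parameters()))
            loss.backward()
            opt.step()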

  5. Quantitative Assessment of Arrhythmia Using Non-linear Approach: A Non-invasive Prognostic Tool

    NASA Astrophysics Data System (ADS)

    Chakraborty, Monisha; Ghosh, Dipak

    2017-12-01

    An accurate prognostic tool to identify the severity of Arrhythmia is yet to be investigated, owing to the complexity of the ECG signal. In this paper, we have shown that quantitative assessment of Arrhythmia is possible using a non-linear technique based on "Hurst Rescaled Range Analysis". Although the concept of applying "non-linearity" to the study of various cardiac dysfunctions is not entirely new, the novel objective of this paper is to identify the severity of the disease, to monitor different medicines and their doses, and to assess the efficiency of different medicines. The approach presented in this work is simple, which in turn will help doctors in efficient disease management. In this work, Arrhythmia ECG time series are collected from the MIT-BIH database. Normal ECG time series are acquired using the POLYPARA system. Both time series are analyzed in the light of the non-linear approach following the method of "Rescaled Range Analysis". The quantitative parameter, "Fractal Dimension" (D), is obtained from both types of time series. The major finding is that Arrhythmia ECG exhibits lower values of D as compared to normal ECG. This information can be used to assess the severity of Arrhythmia quantitatively, which is a new direction of prognosis, and adequate software may be developed for use in medical practice.
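
    A compact sketch of the rescaled-range computation, with the fractal dimension obtained as D = 2 - H for a self-affine record. The random signal below is only a placeholder for an ECG series, and the chunk sizes and fit range are choices of this sketch rather than of the paper.

        import numpy as np

        def hurst_rs(x, min_chunk=8):
            n = len(x)
            sizes = np.unique(np.logspace(np.log10(min_chunk),
                                          np.log10(n // 2), 12).astype(int))
            rs = []
            for s in sizes:
                chunks = x[: n - n % s].reshape(-1, s)
                dev = chunks - chunks.mean(axis=1, keepdims=True)
                z = np.cumsum(dev, axis=1)
                r = z.max(axis=1) - z.min(axis=1)     # range of cumulative deviation
                sd = chunks.std(axis=1)
                rs.append(np.mean(r[sd > 0] / sd[sd > 0]))
            slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
            return slope                               # Hurst exponent H

        ecg = np.random.randn(4096)                    # placeholder for an ECG record
        H = hurst_rs(ecg)
        D = 2.0 - H                                    # fractal dimension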

  6. Coastline detection with time series of SAR images

    NASA Astrophysics Data System (ADS)

    Ao, Dongyang; Dumitru, Octavian; Schwarz, Gottfried; Datcu, Mihai

    2017-10-01

    For maritime remote sensing, coastline detection is a vital task. With continuous coastline detection results from satellite image time series, the actual shoreline, the sea level, and environmental parameters can be observed to support coastal management and disaster warning. Established coastline detection methods are often based on SAR images and well-known image processing approaches. These methods involve complicated processing chains, which become a serious burden for remote sensing time series. Additionally, a number of SAR satellites with polarimetric capabilities have been launched in recent years, and many investigations of target characteristics in radar polarization have been performed. In this paper, a fast and efficient coastline detection method is proposed which comprises three steps. First, we calculate a modified correlation coefficient of two SAR images of different polarization; this coefficient differs from the traditional computation in that no normalization is needed. Through this modified approach, the separation between sea and land becomes more prominent. Second, we set a histogram-based threshold to distinguish between sea and land within the given image. The histogram is derived from the statistical distribution of the polarized SAR image pixel amplitudes. Third, we extract continuous coastlines using a Canny image edge detector that is rather immune to speckle noise. Finally, the individual coastlines derived from time series of SAR images can be checked for changes.
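
    The three steps map naturally onto a few lines of Python. In the sketch below, a local mean of the cross-polarization product stands in for the modified (un-normalized) correlation coefficient, and Otsu's method stands in for the histogram-based threshold; both substitutions are assumptions of this illustration, as is every name in it.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from skimage import feature, filters

        def detect_coastline(vv, vh, win=5):
            # 1) un-normalized local correlation of the two polarization channels
            corr = uniform_filter(vv * vh, size=win)
            # 2) histogram threshold separating sea from land amplitudes
            land = corr > filters.threshold_otsu(corr)
            # 3) edges of the sea/land mask approximate the coastline
            return feature.canny(land.astype(float), sigma=2.0)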

  7. Monitoring cotton root rot by synthetic Sentinel-2 NDVI time series using improved spatial and temporal data fusion.

    PubMed

    Wu, Mingquan; Yang, Chenghai; Song, Xiaoyu; Hoffmann, Wesley Clint; Huang, Wenjiang; Niu, Zheng; Wang, Changyao; Li, Wang; Yu, Bo

    2018-01-31

    To better understand the progression of cotton root rot within the season, time series monitoring is required. In this study, an improved spatial and temporal data fusion approach (ISTDFA) was employed to combine 250-m Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) and 10-m Sentinel-2 NDVI data to generate a synthetic Sentinel-2 NDVI time series for monitoring this disease. Then, the phenology of healthy cotton and infected cotton was modeled using a logistic model. Finally, several phenology parameters, including the onset day of greenness minimum (OGM), growing season length (GSL), onset of greenness increase (OGI), maximum NDVI value, and integral area of the phenology curve, were calculated. The results showed that ISTDFA could be used to combine time series MODIS and Sentinel-2 NDVI data with a correlation coefficient of 0.893. The logistic model could describe the phenology curves with R-squared values from 0.791 to 0.969. Moreover, the phenology curve of infected cotton showed a significant difference from that of healthy cotton. The maximum NDVI value, OGM, GSL and the integral area of the phenology curve for infected cotton were reduced by 0.045, 30 days, 22 days, and 18.54%, respectively, compared with those for healthy cotton.
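
    A sketch of the phenology-fitting step. The paper fits a logistic model; the double-logistic variant below is a common NDVI choice and is used here only to make the extracted parameters (OGI, OGM, GSL, maximum NDVI, curve integral) concrete. All names, starting values and the synthetic series are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def pheno(t, base, amp, t_up, r_up, t_down, r_down):
            up = 1.0 / (1.0 + np.exp(-r_up * (t - t_up)))
            down = 1.0 / (1.0 + np.exp(r_down * (t - t_down)))
            return base + amp * up * down

        doy = np.arange(1.0, 366.0, 10.0)              # synthetic 10-day NDVI series
        ndvi = (pheno(doy, 0.2, 0.6, 120, 0.08, 280, 0.07)
                + 0.02 * np.random.randn(doy.size))
        p, _ = curve_fit(pheno, doy, ndvi,
                         p0=[0.2, 0.5, 110, 0.05, 270, 0.05], maxfev=20000)
        ogi, ogm = p[2], p[4]                          # onsets of increase / minimum
        gsl = ogm - ogi                                # growing season length
        ndvi_max = pheno(doy, *p).max()
        area = np.trapz(pheno(doy, *p), doy)           # integral of phenology curve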

  8. On the Selection of Models for Runtime Prediction of System Resources

    NASA Astrophysics Data System (ADS)

    Casolari, Sara; Colajanni, Michele

    Applications and services delivered through large Internet Data Centers are now feasible thanks to network and server improvements, but also to virtualization, dynamic allocation of resources and dynamic migration. The large number of servers and resources involved in these systems requires autonomic management strategies, because no number of human administrators would be capable of cloning and migrating virtual machines in time, or of re-distributing and re-mapping the underlying hardware. At the basis of most autonomic management decisions is the need for systems to evaluate their own global behavior and change it when the evaluation indicates that they are not accomplishing what they were intended to do, or that relevant anomalies are occurring. Decision algorithms have to satisfy constraints on different time scales. In this chapter we are interested in short-term contexts, where runtime prediction models work on time series of samples of monitored system resources, such as disk, CPU and network utilization. In such environments, we have to address two main issues. First, the original time series have limited predictability, because measurements are affected by noise due to system instability, variable offered load, heavy-tailed distributions, and hardware and software interactions. Second, there are no existing criteria that can help us choose a suitable prediction model and its parameters so as to guarantee adequate prediction quality. In this chapter, we evaluate the impact that different choices of prediction model have on different time series, and we suggest how to treat input data and whether it is convenient to choose the parameters of a prediction model in a static or a dynamic way. Our conclusions are supported by a large set of analyses on realistic and synthetic data traces.

  9. Interactive Learning Modules: Enabling Near Real-Time Oceanographic Data Use In Undergraduate Education

    NASA Astrophysics Data System (ADS)

    Kilb, D. L.; Fundis, A. T.; Risien, C. M.

    2012-12-01

    The focus of the Education and Public Engagement (EPE) component of the NSF's Ocean Observatories Initiative (OOI) is to provide a new layer of cyber-interactivity for undergraduate educators to bring near real-time data from the global ocean into learning environments. To accomplish this, we are designing six online services including: 1) visualization tools, 2) a lesson builder, 3) a concept map builder, 4) educational web services (middleware), 5) collaboration tools and 6) an educational resource database. Here, we report on our Fall 2012 release that includes the first four of these services: 1) Interactive visualization tools allow users to interactively select data of interest, display the data in various views (e.g., maps, time-series and scatter plots) and obtain statistical measures such as mean, standard deviation and a regression line fit to select data. Specific visualization tools include a tool to compare different months of data, a time series explorer tool to investigate the temporal evolution of select data parameters (e.g., sea water temperature or salinity), a glider profile tool that displays ocean glider tracks and associated transects, and a data comparison tool that allows users to view the data either in scatter plot view comparing one parameter with another, or in time series view. 2) Our interactive lesson builder tool allows users to develop a library of online lesson units, which are collaboratively editable and sharable and provides starter templates designed from learning theory knowledge. 3) Our interactive concept map tool allows the user to build and use concept maps, a graphical interface to map the connection between concepts and ideas. This tool also provides semantic-based recommendations, and allows for embedding of associated resources such as movies, images and blogs. 4) Education web services (middleware) will provide an educational resource database API.

  10. Time series analysis as input for clinical predictive modeling: Modeling cardiac arrest in a pediatric ICU

    PubMed Central

    2011-01-01

    Background: Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods: We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results: Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions: We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778

  11. Time series analysis as input for clinical predictive modeling: modeling cardiac arrest in a pediatric ICU.

    PubMed

    Kennedy, Curtis E; Turley, James P

    2011-10-24

    Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.

  12. Simulation Exploration through Immersive Parallel Planes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  13. Fractal analyses reveal independent complexity and predictability of gait

    PubMed Central

    Dierick, Frédéric; Nivard, Anne-Laure

    2017-01-01

    Locomotion is a natural task that has been assessed for decades and used as a proxy to highlight impairments of various origins. So far, most studies adopted classical linear analyses of spatio-temporal gait parameters. Here, we use more advanced, yet not less practical, non-linear techniques to analyse gait time series of healthy subjects. We aimed at finding more sensitive indexes related to spatio-temporal gait parameters than those previously used, with the hope of better identifying abnormal locomotion. We analysed large-scale stride interval time series and mean step width in 34 participants while altering walking direction (forward vs. backward walking) and with or without galvanic vestibular stimulation. The Hurst exponent α and the Minkowski fractal dimension D were computed and interpreted as indexes expressing predictability and complexity of stride interval time series, respectively. These holistic indexes can easily be interpreted in the framework of optimal movement complexity. We show that α and D accurately capture stride interval changes as a function of the experimental condition. Walking forward exhibited maximal complexity (D) and hence, adaptability. In contrast, walking backward and/or stimulation of the vestibular system decreased D. Furthermore, walking backward increased predictability (α) through a more stereotyped pattern of the stride interval, and galvanic vestibular stimulation reduced predictability. The present study demonstrates the complementary power of the Hurst exponent and the fractal dimension to improve walking classification. Our developments may have immediate applications in rehabilitation, diagnosis, and classification procedures. PMID:29182659

  14. Frequency analysis via the method of moment functionals

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.; Pan, J. Q.

    1990-01-01

    Several variants of a linear-in-parameters least squares formulation are presented for determining the transfer function of a stable linear system at specified frequencies, given a finite set of Fourier series coefficients calculated from transient nonstationary input-output data. The basis of the technique is Shinbrot's classical method of moment functionals, using complex Fourier-based modulating functions to convert a differential equation model on a finite time interval into an algebraic equation which depends linearly on frequency-related parameters.

  15. Patching C2n Time Series Data Holes using Principal Component Analysis

    DTIC Science & Technology

    2007-01-01

    characteristic local scale exponent, regardless of dilation of the length examined. THE HURST PARAMETER: There are a slew of methods [13] available to ... fractal dimension D0, which characterises the roughness of the data, and the Hurst parameter, H, which is a measure of the long-range dependence (LRD) ... estimate H. For simplicity, we have opted to use the well-known Hurst-Mandelbrot R/S technique, which is also the most elementary. The fitting curve

  16. Analysis of temporal decay of diffuse broadband sound fields in enclosures by decomposition in powers of an absorption parameter

    NASA Astrophysics Data System (ADS)

    Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben

    2005-09-01

    An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.

  17. Land science with Sentinel-2 and Sentinel-3 data series synergy

    NASA Astrophysics Data System (ADS)

    Moreno, Jose; Guanter, Luis; Alonso, Luis; Gomez, Luis; Amoros, Julia; Camps, Gustavo; Delegido, Jesus

    2010-05-01

    Although the GMES/Sentinel satellite series were primarily designed to provide observations for operational services and routine applications, there is a growing interest in the scientific community towards the usage of Sentinel data for more advanced and innovative science. Apart from the improved spatial and spectral capabilities, the availability of consistent time series covering a period of over 20 years opens possibilities never explored before, such as systematic data assimilation approaches exploiting the time-series concept, or the incorporation in the modelling approaches of processes covering time scales from weeks to decades. Sentinel-3 will provide continuity to current ENVISAT MERIS/AATSR capabilities. The results already derived from MERIS/AATSR will be more systematically exploited by using OLCI in synergy with SLSTR. Particularly innovative is the case of Sentinel-2, which is specifically designed for land applications. Built on a constellation of two satellites operating simultaneously to provide a 5-day geometric revisit time, the Sentinel-2 system will provide global and systematic acquisitions with high spatial resolution and with a high revisit time tailored towards the needs of land monitoring. Apart from providing continuity to the Landsat and SPOT time series, the Sentinel-2 Multi-Spectral Instrument (MSI) incorporates new narrow bands around the red edge for improved retrievals of biophysical parameters. The limitations imposed by the need for proper cloud screening and atmospheric correction have represented a serious constraint for optical data in the past. The fact that both Sentinel-2 and Sentinel-3 have dedicated bands to allow such corrections for optical data represents an important step towards proper exploitation, guaranteeing consistent time series showing the actual variability in land surface conditions without the artefacts introduced by the atmosphere. Expected operational products (such as Land Cover maps, Leaf Area Index, Fractional Vegetation Cover, Fraction of Absorbed Photosynthetically Active Radiation, and Leaf Chlorophyll and Water Contents) will be enhanced with new scientific applications. Higher level products will also be provided, by means of mosaicking, averaging, synthesising or compositing of spatially and temporally resampled data. A key element in the exploitation of the Sentinel series will be the adequate use of data synergy, which will open new possibilities for improved land models. This paper analyses in particular the possibilities offered by mosaicking and compositing information derived from Sentinel-2 observations at high spatial resolution to complement dense time series derived from Sentinel-3 data with more frequent coverage. Interpolation of gaps in high spatial resolution time series (from Sentinel-2 data) by using medium/low resolution data from Sentinel-3 (OLCI and SLSTR) is also a way of making the series more temporally consistent at high spatial resolution. The primary goal of such temporal interpolation / spatial mosaicking techniques is to derive consistent surface reflectance data virtually for every date and geographical location, no matter the initial spatial/temporal coverage of the original data used to produce the composite. As a result, biophysical products can be derived in a more consistent way from the spectral information of Sentinel-3 data by making use of a description of surface heterogeneity derived from Sentinel-2 data.
Using data from dedicated experiments (SEN2FLEX, CEFLES2, SEN3EXP), which include a large dataset of satellite and airborne data and of ground-based measurements of atmospheric and vegetation parameters, different techniques are tested, ranging from empirical/statistical approaches that build nonlinear regressions by mapping spectra to a high-dimensional space, up to model inversion / data assimilation scenarios. Exploitation of the temporal domain and the spatial multi-scale domain then becomes a driver for the systematic exploitation of GMES/Sentinels data time series. This paper reviews the current status and identifies research priorities in this direction.

  18. Correction of 3D rigid body motion in fMRI time series by independent estimation of rotational and translational effects in k-space.

    PubMed

    Costagli, Mauro; Waggoner, R Allen; Ueno, Kenichi; Tanaka, Keiji; Cheng, Kang

    2009-04-15

    In functional magnetic resonance imaging (fMRI), even subvoxel motion dramatically corrupts the blood oxygenation level-dependent (BOLD) signal, invalidating the assumption that intensity variation in time is primarily due to neuronal activity. Thus, correction of the subject's head movements is a fundamental step to be performed prior to data analysis. Most motion correction techniques register a series of volumes assuming that rigid body motion, characterized by rotational and translational parameters, occurs. Unlike the most widely used applications for fMRI data processing, which correct motion in the image domain by numerically estimating rotational and translational components simultaneously, the algorithm presented here operates in a three-dimensional k-space, to decouple and correct rotations and translations independently, offering new ways and more flexible procedures to estimate the parameters of interest. We developed an implementation of this method in MATLAB, and tested it on both simulated and experimental data. Its performance was quantified in terms of square differences and center of mass stability across time. Our data show that the algorithm proposed here successfully corrects for rigid-body motion, and its employment in future fMRI studies is feasible and promising.
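
    The decoupling rests on a basic Fourier property: a rigid translation of the object only adds a linear phase ramp in k-space, while a rotation rotates the k-space magnitude, so the two effects can be estimated independently. The sketch below estimates the translational part by 3-D phase correlation; it is a simplified stand-in for the authors' algorithm, recovers integer-voxel shifts only (subvoxel motion would need a subsequent local refinement), and all names are invented here.

        import numpy as np

        def estimate_translation(vol_ref, vol_mov):
            F1, F2 = np.fft.fftn(vol_ref), np.fft.fftn(vol_mov)
            cross = F1 * np.conj(F2)
            cross /= np.abs(cross) + 1e-12             # keep only the phase ramp
            corr = np.abs(np.fft.ifftn(cross))         # phase-correlation surface
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # indices above n/2 correspond to negative shifts
            return tuple(s - n if s > n // 2 else s
                         for s, n in zip(peak, corr.shape))

        vol = np.random.rand(32, 32, 16)
        moved = np.roll(vol, (2, -3, 1), axis=(0, 1, 2))
        print(estimate_translation(moved, vol))        # -> (2, -3, 1)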

  1. Emperor penguins and climate change.

    PubMed

    Barbraud, C; Weimerskirch, H

    2001-05-10

    Variations in ocean-atmosphere coupling over time in the Southern Ocean have dominant effects on sea-ice extent and ecosystem structure, but the ultimate consequences of such environmental changes for large marine predators cannot be accurately predicted because of the absence of long-term data series on key demographic parameters. Here, we use the longest time series available on demographic parameters of an Antarctic large predator breeding on fast ice and relying on food resources from the Southern Ocean. We show that over the past 50 years, the population of emperor penguins (Aptenodytes forsteri) in Terre Adélie has declined by 50% because of a decrease in adult survival during the late 1970s. At this time there was a prolonged abnormally warm period with reduced sea-ice extent. Mortality rates increased when warm sea-surface temperatures occurred in the foraging area and when annual sea-ice extent was reduced, and were higher for males than for females. In contrast with survival, emperor penguins hatched fewer eggs when winter sea-ice was extended. These results indicate strong and contrasting effects of large-scale oceanographic processes and sea-ice extent on the demography of emperor penguins, and their potential high susceptibility to climate change.

  2. Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system

    NASA Astrophysics Data System (ADS)

    Duran, Ahmet; Tuncel, Mehmet

    2016-10-01

    It is important to have a scalable parallel numerical parameter optimization algorithm for a dynamical system used in financial applications where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series on runs of up to 512 cores (see [10]). Unlike [10], in this work we consider more extensive financial market situations, for example the presence of low volatility, high volatility, and stock market prices at a discount or premium to net asset value of varying magnitude. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least squares error, and the maximum improvement factor to quantify the success of the optimization process depending on the number of initial parameter vectors.

  3. A Methodology for the Parametric Reconstruction of Non-Steady and Noisy Meteorological Time Series

    NASA Astrophysics Data System (ADS)

    Rovira, F.; Palau, J. L.; Millán, M.

    2009-09-01

    Climatic and meteorological time series often show some persistence (in time) in the variability of certain features. One could regard annual, seasonal and diurnal time variability as trivial persistence in the variability of some meteorological magnitudes (e.g., global radiation, air temperature above the surface, etc.). In these cases, the traditional Fourier transform into frequency space will show the principal harmonics as the components with the largest amplitude. Nevertheless, meteorological measurements often show other, non-steady (in time) variability. Some fluctuations in measurements (at different time scales) are driven by processes that prevail on some days (or months) of the year but disappear on others. By decomposing a time series into time-frequency space through the continuous wavelet transformation, one is able to determine both the dominant modes of variability and how those modes vary in time. This study is based on a numerical methodology to analyse non-steady principal harmonics in noisy meteorological time series. The methodology combines the continuous wavelet transform with the development of a parametric model that includes the time evolution of the principal and most statistically significant harmonics of the original time series. The parameterisation scheme proposed in this study consists of reproducing the original time series by means of a statistically significant finite sum of sinusoidal signals (waves), each defined by the three usual parameters: amplitude, frequency and phase. To ensure the statistical significance of the parametric reconstruction of the original signal, we propose a standard Student's t analysis of the confidence level of the amplitude in the parametric spectrum for the different wave components. Once the level of significance of the different waves composing the parametric model is assured, we can obtain the statistically significant principal harmonics (in time) of the original time series by using the Fourier transform of the modelled signal. Acknowledgements: The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (València, Spain). This study has been partially funded by the European Commission (FP VI, Integrated Project CIRCE - No. 036961) and by the Ministerio de Ciencia e Innovación, research projects "TRANSREG" (CGL2007-65359/CLI) and "GRACCIE" (CSD2007-00067, Program CONSOLIDER-INGENIO 2010).
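
    The final parametric model is just a finite sum of sinusoids. The Python sketch below fits such a sum by nonlinear least squares to a synthetic hourly record with diurnal and semi-diurnal components; the wavelet pre-selection of candidate frequencies and the Student's-t significance screening from the methodology are omitted, and all names and values are illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        def model(p, t, k):
            a, f, ph = p[:k], p[k:2 * k], p[2 * k:]
            return sum(a[i] * np.sin(2 * np.pi * f[i] * t + ph[i])
                       for i in range(k))

        t = np.arange(0.0, 720.0)                      # hourly samples, 30 days
        y = (8 * np.sin(2 * np.pi * t / 24 + 0.5)      # diurnal wave
             + 2 * np.sin(2 * np.pi * t / 12 + 1.0)    # semi-diurnal wave
             + np.random.randn(t.size))
        k = 2
        p0 = np.r_[[5.0, 1.0], [1 / 24, 1 / 12], [0.0, 0.0]]  # amp, freq, phase
        fit = least_squares(lambda p: model(p, t, k) - y, p0)
        amp, freq, phase = fit.x[:k], fit.x[k:2 * k], fit.x[2 * k:]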

  4. Retrieving the optical parameters of biological tissues using diffuse reflectance spectroscopy and Fourier series expansions. I. theory and application.

    PubMed

    Muñoz Morales, Aarón A; Vázquez Y Montiel, Sergio

    2012-10-01

    The determination of optical parameters of biological tissues is essential for the application of optical techniques in the diagnosis and treatment of diseases. Diffuse reflectance spectroscopy is a widely used technique to analyze the optical characteristics of biological tissues. In this paper we show that, by using diffuse reflectance spectra and a new mathematical model, we can retrieve the optical parameters by fitting the data with nonlinear least squares. In our model we represent the spectra using a Fourier series expansion, finding mathematical relations between the polynomial coefficients and the optical parameters. In this first paper we use spectra generated by the Monte Carlo Multilayered Technique to simulate the propagation of photons in turbid media. Using these spectra, we determine the behavior of the Fourier series coefficients when varying the optical parameters of the medium under study. With this procedure we find mathematical relations between the Fourier series coefficients and the optical parameters. Finally, the results show that our method can retrieve the optical parameters of biological tissues with accuracy that is adequate for medical applications.

  5. Using GRACE and climate model simulations to predict mass loss of Alaskan glaciers through 2100

    DOE PAGES

    Wahr, John; Burgess, Evan; Swenson, Sean

    2016-05-30

    Glaciers in Alaska are currently losing mass at a rate of ~–50 Gt a-1, one of the largest ice loss rates of any regional collection of mountain glaciers on Earth. Existing projections of Alaska's future sea-level contributions tend to be divergent and are not tied directly to regional observations. Here we develop a simple, regional observation-based projection of Alaska's future sea-level contribution. We compute a time series of recent Alaska glacier mass variability using monthly GRACE gravity fields from August 2002 through December 2014. We also construct a three-parameter model of Alaska glacier mass variability based on monthly ERA-Interim snowfall and temperature fields. When these three model parameters are fitted to the GRACE time series, the model explains 94% of the variance of the GRACE data. Using these parameter values, we then apply the model to simulated fields of monthly temperature and snowfall from the Community Earth System Model to obtain predictions of mass variations through 2100. We conclude that mass loss rates may increase to between –80 and –110 Gt a-1 by 2100, with a total sea-level rise contribution of 19 ± 4 mm during the 21st century.

  6. Mapping Brazilian savanna vegetation gradients with Landsat time series

    NASA Astrophysics Data System (ADS)

    Schwieder, Marcel; Leitão, Pedro J.; da Cunha Bustamante, Mercedes Maria; Ferreira, Laerte Guimarães; Rabe, Andreas; Hostert, Patrick

    2016-10-01

    Global change has tremendous impacts on savanna systems around the world. Processes related to climate change or agricultural expansion threaten the ecosystem's state, function and the services it provides. A prominent example is the Brazilian Cerrado, which has an extent of around 2 million km2 and features high biodiversity with many endemic species. It is characterized by landscape patterns from open grasslands to dense forests, defining a heterogeneous gradient in vegetation structure throughout the biome. While it is undisputed that the Cerrado provides a multitude of valuable ecosystem services, it is exposed to changes, e.g. through large scale land conversions or climatic changes. Monitoring of the Cerrado is thus urgently needed to assess the state of the system as well as to analyze and further understand ecosystem responses and adaptations to ongoing changes. We therefore explored the potential of dense Landsat time series to derive phenological information for mapping vegetation gradients in the Cerrado. Frequent data gaps, e.g. due to cloud contamination, impose a serious challenge for such time series analyses. We synthetically filled data gaps based on Radial Basis Function convolution filters to derive continuous pixel-wise temporal profiles capable of representing Land Surface Phenology (LSP). Derived phenological parameters revealed differences in the seasonal cycle between the main Cerrado physiognomies and could thus be used to calibrate a Support Vector Classification model to map their spatial distribution. Our results show that it is possible to map the main spatial patterns of the observed physiognomies based on their phenological differences, although inaccuracies occurred, especially between similar classes and in data-scarce areas. The outcome emphasizes the need for remote sensing based time series analyses at fine scales. Mapping heterogeneous ecosystems such as savannas requires spatial detail, as well as the ability to derive important phenological parameters for monitoring habitats or ecosystem responses to climate change. The open Landsat and Sentinel-2 archives provide the satellite data needed for improved analyses of savanna ecosystems globally.

  7. Variational models for discontinuity detection

    NASA Astrophysics Data System (ADS)

    Vitti, Alfonso; Battista Benciolini, G.

    2010-05-01

    The Mumford-Shah variational model produces a smooth approximation of the data and detects data discontinuities by solving a minimum problem involving an energy functional. The Blake-Zisserman model also permits the detection of discontinuities in the first derivative of the approximation. This model can result in a quasi piece-wise linear approximation, whereas the Mumford-Shah model can result in a quasi piece-wise constant approximation. The two models are well known in the mathematical literature and are widely adopted in computer vision for image segmentation. In geodesy, the Blake-Zisserman model has been applied successfully to the detection of cycle-slips in linear combinations of GPS measurements. Few attempts to apply the model to time series of coordinates have been made so far. The problem of detecting discontinuities in time series of GNSS coordinates is well known, and its relevance increases as the quality of geodetic measurements, analysis techniques, models and products improves. The application of the Blake-Zisserman model appears reasonable and promising owing to the model's ability to detect both position and velocity discontinuities in the same time series. The detection of position and velocity changes is of great interest in geophysics, where the discontinuity itself can be the very relevant object. In the work for the realization of reference frames, detecting position and velocity discontinuities may help to define models that can handle non-linear motions. In this work the Mumford-Shah and the Blake-Zisserman models are briefly presented; the treatment is carried out from a practical viewpoint rather than a theoretical one. A set of time series of GNSS coordinates has been processed and the results are presented in order to highlight the capabilities and the weaknesses of the variational approach. A first attempt to derive some indications for the automatic set-up of the model parameters has been made, and the underlying relation that could link the parameter values to the statistical properties of the data has been investigated.

  8. Seasonal variability and degradation investigation of iodocarbons in a coastal fjord

    NASA Astrophysics Data System (ADS)

    Shi, Qiang; Wallace, Douglas

    2016-04-01

    Methyl iodide (CH3I) is considered an important carrier of iodine atoms from sea to air. The importance of other volatile iodinated compounds, such as very short-lived iodocarbons (e.g. CH2ClI, CH2I2), has also been demonstrated [McFiggans, 2005; O'Dowd and Hoffmann, 2005; Carpenter et al., 2013]. The production pathways of iodocarbons, and controls on their sea-to-air flux can be investigated by in-situ studies (e.g. surface layer mass balance from time-series studies) and by incubation experiments. Shi et al., [2014] reported previously unrecognised large, night-time losses of CH3I observed during incubation experiments with coastal waters. These losses were significant for controlling the sea-to-air flux but are not yet understood. As part of a study to further investigate sources and sinks of CH3I and other iodocarbons in coastal waters, samples have been analysed weekly since April 2015 at 4 depths (5 to 60 m) in the Bedford Basin, Halifax, Canada. The time-series study was part of a broader study that included measurement of other, potentially related parameters (temperature, salinity, Chlorophyll a etc.). A set of repeated degradation experiments was conducted, in the context of this time-series, including incubations within a solar simulator using 13C labelled CH3I. Results of the time-series sampling and incubation experiments will be presented.

  9. 76 FR 36610 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-22

    ... rather than by series; (b) adopt a $5 quotation spread parameter; and (c) amend the quoting requirement... additional time to adapt to the enhancements. \\4\\ The Exchange has also filed other proposed rule changes in... additional time to adapt to the enhancements. \\8\\ 15 U.S.C. 78f(b). \\9\\ 15 U.S.C. 78f(b)(5). B. Self...

  10. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    PubMed

    Durdu, Omer Faruk

    2010-10-01

    In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict the boron content in the Büyük Menderes catchment. Initially, Box-Whisker plots and Kendall's tau test are used to identify the trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. Comparison of the mean and variance of the 3-year (2002-2004) observed data with the predictions of the selected best models shows that the ARIMA boron models can be used safely, since the predicted values preserve the basic statistics of the observed data in terms of the mean. The ARIMA modeling approach is recommended for predicting the boron concentration series of a river.
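
    The identification step, a minimum-AIC search over a small grid of orders, is easy to reproduce with statsmodels; the series below is synthetic, and the grid bounds are choices of this sketch rather than values from the study.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        y = np.cumsum(np.random.randn(108)) + 5.0      # placeholder monthly series
        aic = {}
        for p in range(3):
            for d in range(2):
                for q in range(3):
                    aic[(p, d, q)] = ARIMA(y, order=(p, d, q)).fit().aic
        order = min(aic, key=aic.get)                  # best-fit order by AIC
        best = ARIMA(y, order=order).fit()
        resid = best.resid                             # input to diagnostic checks
        pred = best.forecast(steps=12)                 # 12-step-ahead prediction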

  11. Phenomenological analysis of medical time series with regular and stochastic components

    NASA Astrophysics Data System (ADS)

    Timashev, Serge F.; Polyakov, Yuriy S.

    2007-06-01

    Flicker-Noise Spectroscopy (FNS), a general approach to the extraction and parameterization of resonant and stochastic components contained in medical time series, is presented. The basic idea of FNS is to treat the correlation links present in sequences of different irregularities, such as spikes, "jumps", and discontinuities in derivatives of different orders, on all levels of the spatiotemporal hierarchy of the system under study, as the main information carriers. The tools to extract and analyze the information are power spectra and difference moments (structural functions), which complement each other. The stochastic component of the structural function is formed exclusively by "jumps" of the dynamic variable, while the stochastic component of the power spectrum is formed by both spikes and "jumps" on every level of the hierarchy. The information "passport" characteristics, determined by fitting the derived expressions to the experimental variations of the stochastic components of the power spectra and structural functions, are interpreted as the correlation times and the parameters that describe the rate of "memory loss" on these correlation time intervals for the different irregularities. The number of extracted parameters is determined by the requirements of the problem under study. Application of this approach to the analysis of tremor velocity signals for a Parkinsonian patient is discussed.
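
    The two complementary FNS tools have direct sample estimators: the power spectrum, and the second-order difference moment (structural function) Phi(tau) = <[x(t+tau) - x(t)]^2>. A minimal sketch follows; the parameter-extraction step, i.e. fitting the FNS interpolation expressions to these curves, is omitted, and the signal is a placeholder.

        import numpy as np

        def structure_function(x, max_lag):
            # second-order difference moment, sensitive to "jumps"
            return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                             for lag in range(1, max_lag + 1)])

        def power_spectrum(x, dt=1.0):
            # periodogram, sensitive to both spikes and "jumps"
            X = np.fft.rfft(x - x.mean())
            f = np.fft.rfftfreq(len(x), dt)
            return f, (np.abs(X) ** 2) * dt / len(x)

        x = np.cumsum(np.random.randn(8192))           # placeholder signal
        phi = structure_function(x, 200)
        f, S = power_spectrum(x)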

  12. Standard deviation of the mean and other time series properties of voltages measured with a digital lock-in amplifier

    NASA Astrophysics Data System (ADS)

    Witt, Thomas J.; Fletcher, N. E.

    2010-10-01

    We investigate some statistical properties of ac voltages from a white noise source measured with a digital lock-in amplifier equipped with finite impulse response output filters which introduce correlations between successive voltage values. The main goal of this work is to propose simple solutions to account for correlations when calculating the standard deviation of the mean (SDM) for a sequence of measurement data acquired using such an instrument. The problem is treated by time series analysis based on a moving average model of the filtering process. Theoretical expressions are derived for the power spectral density (PSD), the autocorrelation function, the equivalent noise bandwidth and the Allan variance; all are related to the SDM. At most three parameters suffice to specify any of the above quantities: the filter time constant, the time between successive measurements (both set by the lock-in operator) and the PSD of the white noise input, h0. Our white noise source is a resistor so that the PSD is easily calculated; there are no free parameters. Theoretical expressions are checked against their respective sample estimates and, with the exception of two of the bandwidth estimates, agreement to within 11% or better is found.
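
    The practical consequence is that the naive SDM, s/sqrt(N), underestimates the uncertainty when the output filter correlates successive readings. The sketch below computes the SDM from the sample autocorrelation via Var(mean) = (c0/N) * [1 + 2 * sum_k (1 - k/N) * rho_k] and compares it with the naive value for white noise passed through a length-m moving average, the moving-average model used in this paper (the filter length and sample sizes here are invented).

        import numpy as np

        def sdm_correlated(x, max_lag=100):
            x = np.asarray(x, float) - np.mean(x)
            n = len(x)
            c0 = np.mean(x * x)
            k = np.arange(1, max_lag + 1)
            rho = np.array([np.mean(x[lag:] * x[:-lag]) / c0 for lag in k])
            return np.sqrt(c0 / n * (1 + 2 * np.sum((1 - k / n) * rho)))

        m = 16                                         # filter length (in samples)
        v = np.convolve(np.random.randn(20000), np.ones(m) / m, mode="valid")
        print(sdm_correlated(v),                       # accounts for correlations
              np.std(v) / np.sqrt(len(v)))             # naive SDM, ~4x too small here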

  13. Automatic Multi-sensor Data Quality Checking and Event Detection for Environmental Sensing

    NASA Astrophysics Data System (ADS)

    LIU, Q.; Zhang, Y.; Zhao, Y.; Gao, D.; Gallaher, D. W.; Lv, Q.; Shang, L.

    2017-12-01

    With the advances in sensing technologies, large-scale environmental sensing infrastructures are pervasively deployed to continuously collect data for various research and application fields, such as air quality studies and weather monitoring. In such infrastructures, many sensor nodes are distributed over a specific area, and each individual node can measure several parameters (e.g., humidity, temperature, and pressure), providing massive amounts of data for natural event detection and analysis. However, owing to the dynamics of the ambient environment, sensor data can be contaminated by errors or noise, so data quality remains a primary concern that must be addressed before reliable scientific conclusions can be drawn. To help researchers identify potential data quality issues and detect meaningful natural events, this work proposes a novel algorithm that automatically identifies and ranks anomalous time windows in multiple sensor data streams. More specifically, the algorithm (1) adaptively learns the characteristics of normal evolving time series and (2) models the spatial-temporal relationships among multiple sensor nodes to infer the anomaly likelihood of a time series window for a particular parameter at a sensor node. Case studies using different data sets are presented, and the experimental results demonstrate that the proposed algorithm can effectively identify anomalous time windows, which may result from data quality issues or natural events.
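
    As a loose illustration only (the paper's algorithm learns normal behaviour adaptively, which the toy below does not), one can rank windows of a sensor's series by their deviation from the consensus of neighbouring sensors measuring the same parameter. All names and the scoring rule are our own assumptions.

```python
# Toy spatial-consensus ranking of anomalous windows (not the authors' method).
import numpy as np

def rank_anomalous_windows(readings, window=24):
    """readings: 2-D array (n_sensors, n_samples) of one parameter."""
    n_sensors, n_samples = readings.shape
    scores = []
    for start in range(0, n_samples - window + 1, window):
        block = readings[:, start:start + window]
        ref = np.median(block, axis=0)                       # network consensus
        spread = np.median(np.abs(block - ref), axis=0) + 1e-9
        for s in range(n_sensors):
            z = float(np.mean(np.abs(block[s] - ref) / spread))
            scores.append((z, s, start))                     # (score, sensor, offset)
    return sorted(scores, reverse=True)                      # most anomalous first
```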

  14. Reconstructions of parameters of radiophysical chaotic generator with delayed feedback from short time series

    NASA Astrophysics Data System (ADS)

    Ishbulatov, Yu. M.; Karavaev, A. S.; Kiselev, A. R.; Semyachkina-Glushkovskaya, O. V.; Postnov, D. E.; Bezruchko, B. P.

    2018-04-01

    A method for the reconstruction of time-delayed feedback systems is investigated. It is based on detecting the synchronous response of a slave time-delay system driven by the master system under study. The structure of the driven system is similar to that of the studied time-delay system, but the feedback circuit in the driven system is broken. The method's efficiency is tested on short and noisy data acquired from an electronic chaotic oscillator with time-delayed feedback.
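
    The core idea, driving an open-loop copy of the system with the observed signal and scanning the trial delay until the copy synchronizes, can be sketched on a standard chaotic delay equation. The Mackey-Glass nonlinearity and Euler integration below are our stand-ins for the paper's radiophysical oscillator.

```python
# Delay reconstruction by synchronous response of an open-loop slave (sketch).
import numpy as np

BETA, GAMMA, DT = 0.2, 0.1, 0.1  # Mackey-Glass parameters (chaotic for tau = 17)

def simulate_master(tau, n=6000):
    h = int(round(tau / DT))
    x = np.full(n + h, 1.2)
    for t in range(h, n + h - 1):
        x[t + 1] = x[t] + DT * (BETA * x[t - h] / (1 + x[t - h]**10) - GAMMA * x[t])
    return x[h:]

def response_error(x, tau_trial):
    """Drive a copy with broken feedback using the observed x; measure mismatch."""
    h = int(round(tau_trial / DT))
    y = np.empty_like(x)
    y[:h + 1] = x[0]
    for t in range(h, len(x) - 1):
        y[t + 1] = y[t] + DT * (BETA * x[t - h] / (1 + x[t - h]**10) - GAMMA * y[t])
    return np.mean((y[2 * h:] - x[2 * h:]) ** 2)

x = simulate_master(tau=17.0)
errs = {round(t, 1): response_error(x, t) for t in np.arange(10.0, 24.1, 0.5)}
print(min(errs, key=errs.get))  # expected to sit near the true delay, 17.0
```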

  15. TSPP - A Collection of FORTRAN Programs for Processing and Manipulating Time Series

    USGS Publications Warehouse

    Boore, David M.

    2008-01-01

    This report lists a number of FORTRAN programs that I have developed over the years for processing and manipulating strong-motion accelerograms. The collection is titled TSPP, which stands for Time Series Processing Programs. I have excluded 'strong-motion accelerograms' from the title, however, as the boundary between 'strong' and 'weak' motion has become blurred with the advent of broadband sensors and high-dynamic-range dataloggers, and many of the programs can be used with any evenly spaced time series, not just acceleration time series. This version of the report is relatively brief, consisting primarily of an annotated list of the programs, with two examples of processing and a few comments on usage. I do not include a parameter-by-parameter guide to the programs. Future versions might include more examples of processing, illustrating the various parameter choices in the programs. Although these programs have been used by the U.S. Geological Survey, no warranty, expressed or implied, is made by the USGS as to the accuracy or functioning of the programs and related program material, nor shall the fact of distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith. The programs are distributed on an 'as is' basis, with no warranty of support from me. These programs were written for my use and are being publicly distributed in the hope that others might find them as useful as I have. I would, however, appreciate being informed about bugs, and I always welcome suggestions for improvements to the codes. Please note that I have made little effort to optimize the coding of the programs or to include a user-friendly interface (many of the programs in this collection have been included in the software usdp (Utility Software for Data Processing), being developed by Akkar et al. (personal communication, 2008); usdp includes a graphical user interface). Speed of execution has been sacrificed in favor of code that is intended to be easy to understand, although on modern computers speed of execution is rarely a problem. I will be pleased if users incorporate portions of my programs into their own applications; I only ask that reference be made to this report as the source of the programs.

  16. Two-pass imputation algorithm for missing value estimation in gene expression time series.

    PubMed

    Tsiporkova, Elena; Boeva, Veselka

    2007-10-01

    Gene expression microarray experiments frequently generate datasets with multiple missing values. However, most analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. The accurate estimation of missing values in such datasets has therefore been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm specifically suited to the estimation of missing values in gene expression time series data. The algorithm uses the Dynamic Time Warping (DTW) distance to measure the similarity between temporal expression profiles, and subsequently selects, for each gene expression profile with missing values, a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These were initially prototyped in Perl, and their accuracy was evaluated on yeast expression time series data using several different parameter settings. The experiments showed that the two-pass algorithm consistently outperforms the neighborhood-wise and position-wise algorithms, particularly for datasets with a higher proportion of missing entries. The performance of the two-pass DTWimpute algorithm was further benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community, and the former proved superior. Motivated by these findings, which clearly indicate the added value of DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides a choice among three different initial rough imputation methods.
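
    The similarity measure at the heart of DTWimpute is the classic dynamic-programming DTW distance, sketched below in its generic textbook form (not the authors' optimized C++ code).

```python
# Textbook dynamic time warping distance between two 1-D profiles.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])          # local mismatch
            D[i, j] = cost + min(D[i - 1, j],        # insertion
                                 D[i, j - 1],        # deletion
                                 D[i - 1, j - 1])    # match
    return D[n, m]
```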

  17. Minimum of the order parameter fluctuations of seismicity before major earthquakes in Japan.

    PubMed

    Sarlis, Nicholas V; Skordas, Efthimios S; Varotsos, Panayiotis A; Nagao, Toshiyasu; Kamogawa, Masashi; Tanaka, Haruo; Uyeda, Seiya

    2013-08-20

    It has been shown that some dynamic features hidden in the time series of complex systems can be uncovered if we analyze them in a time domain called natural time χ. The order parameter of seismicity introduced in this time domain is the variance of χ weighted by the normalized energy of each earthquake. Here, we analyze the Japan seismic catalog in natural time from January 1, 1984 to March 11, 2011, the day of the M9 Tohoku earthquake, by considering a sliding natural time window of fixed length comprising the number of events that typically occur within a few months. We find that the fluctuations of the order parameter of seismicity exhibit distinct minima a few months before all of the shallow earthquakes of magnitude 7.6 or larger that occurred during this 27-y period in the Japanese area. Among these minima, the minimum before the M9 Tohoku earthquake was the deepest. It appears that there are two kinds of minima: those that are precursory to large earthquakes and those that are not.
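
    The order parameter itself is compact enough to state in code: for N consecutive events with released energies E_k, natural time is χ_k = k/N, and the order parameter is the variance of χ computed with weights p_k = E_k / ΣE. The sketch below follows that published definition; the sliding-window bookkeeping is omitted.

```python
# Order parameter of seismicity in natural time: kappa_1 = <chi^2> - <chi>^2.
import numpy as np

def kappa1(energies):
    e = np.asarray(energies, dtype=float)
    p = e / e.sum()                          # normalized energies p_k
    chi = np.arange(1, len(e) + 1) / len(e)  # natural time chi_k = k / N
    return np.sum(p * chi**2) - np.sum(p * chi) ** 2
```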

  18. The Temporal Dynamics of Spatial Patterns of Observed Soil Moisture Interpreted Using the Hydrus 1-D Model

    NASA Astrophysics Data System (ADS)

    Chen, M.; Willgoose, G. R.; Saco, P. M.

    2009-12-01

    This paper investigates soil moisture dynamics over two subcatchments (Stanley and Krui) of the Goulburn River in NSW during a three-year period (2005-2007) using the Hydrus 1-D unsaturated soil water flow model. The model was calibrated to the seven Stanley microcatchment sites (a 1 km² site) using continuous-time measurements of surface (top 30 cm) and full-profile soil moisture. Soil type, leaf area index, and soil depth were found to be the key parameters governing model fit to the soil moisture time series: they respectively shifted the time series up or down, changed the steepness of dry-down recessions, or determined the lowest point of the soil moisture dry-down. Good correlations were obtained between observed and simulated soil water storage (R = 0.8-0.9) when parameters calibrated for one site were applied to the other sites. Soil type was also found to be the main determinant (after rainfall) of the mean of the modelled soil moisture time series. Simulations of the top 30 cm were better than those of the whole soil profile. Within the Stanley microcatchment, excellent soil moisture matches could be generated simply by adjusting the mean of the soil moisture up or down slightly, and only minor modifications of soil properties from site to site enabled good fits for all of the Stanley sites. We extended the soil moisture predictions to the larger spatial scale of the Krui catchment (sites up to 30 km distant from Stanley) using soil and vegetation parameters from Stanley but the rainfall recorded locally at each soil moisture measurement site. The results were encouraging (R = 0.7-0.8). These results show that it is possible to use a calibrated soil moisture model to extrapolate soil moisture to other sites in a catchment with an area of up to 1000 km². This paper demonstrates the potential usefulness of continuous-time, point-scale soil moisture measurements (typical of those made by permanently installed TDR probes) in predicting soil wetness status over a catchment of significant size.

  19. Wavelet analysis for the study of the relations among soil radon anomalies, volcanic and seismic events: the case of Mt. Etna (Italy)

    NASA Astrophysics Data System (ADS)

    Ferrera, Elisabetta; Giammanco, Salvatore; Cannata, Andrea; Montalto, Placido

    2013-04-01

    From November 2009 to April 2011, soil radon activity was continuously monitored using a Barasol® probe located on the upper NE flank of Mt. Etna volcano, close to both the Piano Provenzana fault and the NE-Rift. Seismic and volcanological data were analyzed together with the radon data, as were air and soil temperature, barometric pressure, and snowfall and rainfall data. In order to find possible correlations among the above parameters, and hence to reveal possible anomalies in the radon time series, we used different statistical methods: i) multivariate linear regression; ii) cross-correlation; iii) coherence analysis through the wavelet transform. Multivariate regression indicated a modest influence of environmental parameters on soil radon (R² = 0.31). When 100-day time windows were used, the R² values varied widely in time, reaching their maxima (~0.63-0.66) during summer. Cross-correlation analysis over 100-day moving averages showed that, as in the multivariate linear regression analysis, the summer period is characterised by the best correlation between radon data and environmental parameters. Lastly, wavelet coherence analysis allowed a multi-resolution coherence analysis of the acquired time series. This approach makes it possible to study the relations among different signals in both the time and frequency domains. It confirmed the results of the previous methods, but also revealed correlations between radon and environmental parameters at different observation scales (e.g., radon activity changed during strong precipitation, but also during anomalous variations of soil temperature uncorrelated with seasonal fluctuations). Our work suggests that an accurate analysis of the relations among distinct signals requires different techniques that provide complementary analytical information. In particular, wavelet analysis proved very effective in discriminating radon changes due to environmental influences from those correlated with impending seismic or volcanic events.
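
    Of the three methods listed, the moving-window cross-correlation is the simplest to illustrate. The sketch below slides a 100-day window over two daily-sampled series and records the local Pearson correlation; the array names are hypothetical.

```python
# Sliding-window cross-correlation between radon and an environmental driver.
import numpy as np

def sliding_correlation(a, b, window=100):
    """Pearson correlation of a and b over each sliding window (daily samples)."""
    out = np.full(len(a), np.nan)
    for i in range(len(a) - window + 1):
        wa, wb = a[i:i + window], b[i:i + window]
        out[i + window // 2] = np.corrcoef(wa, wb)[0, 1]  # centred on the window
    return out
```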

  20. Variability of rainfall over Lake Kariba catchment area in the Zambezi river basin, Zimbabwe

    NASA Astrophysics Data System (ADS)

    Muchuru, Shepherd; Botai, Joel O.; Botai, Christina M.; Landman, Willem A.; Adeola, Abiodun M.

    2016-04-01

    In this study, average monthly and annual rainfall totals recorded over the period 1970 to 2010 by a network of 13 stations across the Lake Kariba catchment area of the Zambezi river basin were analyzed in order to characterize the spatio-temporal variability of rainfall across the catchment. The data were subjected to intervention and homogeneity analysis using the cumulative summation (CUSUM) technique, and to step-change analysis using the rank-sum test. Furthermore, rainfall variability was characterized by trend analysis using the non-parametric Mann-Kendall statistic. Additionally, the rainfall series were decomposed and their spectral characteristics derived using Cross Wavelet Transform (CWT) and Wavelet Coherence (WC) analysis. The advantage of wavelet-based parameters is that they vary in time and can therefore be used to quantitatively detect time-scale-dependent correlations and phase shifts between rainfall time series at various localized time-frequency scales. The annual and seasonal rainfall series were homogeneous and showed no significant shifts. According to the inhomogeneity classification, the rainfall series recorded across the Lake Kariba catchment area belonged to categories A (useful) and B (doubtful), i.e., for each series at most one or exactly two absolute tests, respectively, rejected the null hypothesis at the 5 % significance level. Lastly, the long-term variability of the rainfall series across the Lake Kariba catchment area exhibited non-significant positive and negative trends, with coherent oscillatory modes that are constantly locked in phase in the Morlet wavelet space.
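
    The trend test used here is straightforward to reproduce; the sketch below implements the textbook Mann-Kendall statistic with the normal approximation and continuity correction, omitting the tie correction for brevity.

```python
# Non-parametric Mann-Kendall trend test (no tie correction).
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return s, z, p  # S statistic, z-score, two-sided p-value
```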
