Sample records for simulated time series

  1. Atmospheric turbulence simulation for Shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1979-01-01

    An improved non-recursive model for atmospheric turbulence along the flight path of the Shuttle Orbiter is developed which provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients are generated and stored on a series of magnetic tapes. Section 2 provides a description of the various technical considerations associated with the turbulence simulation model, including descriptions of the digital filter simulation model, the von Karman spectra with finite upper limits, and the final non-recursive turbulence simulation model which was used to generate the time series. Section 3 provides a description of the time series as currently recorded on magnetic tape. Conclusions and recommendations are presented in Section 4.

  2. Simulation of Ground Winds Time Series for the NASA Crew Launch Vehicle (CLV)

    NASA Technical Reports Server (NTRS)

    Adelfang, Stanley I.

    2008-01-01

    Simulation of wind time series based on power spectral density (PSD) and spectral coherence models for ground wind turbulence is described. The wind models, originally developed for the Shuttle program, are based on wind measurements at the NASA 150-m meteorological tower at Cape Canaveral, FL. The current application is the design and/or protection of the CLV from wind effects during on-pad exposure, over periods ranging from days prior to launch to the seconds or minutes just before launch and the seconds after launch. The evaluation of vehicle response to wind will influence the design and operation of constraint systems for support of the on-pad vehicle. Longitudinal and lateral wind component time series are simulated at critical vehicle locations. The PSD model for wind turbulence is a function of mean wind speed, elevation and temporal frequency. Integration of the PSD equation over a selected frequency range yields the variance of the time series to be simulated. The square root of the PSD defines a low-pass filter that is applied to adjust the components of the Fast Fourier Transform (FFT) of Gaussian white noise. The first simulated time series, near the top of the launch vehicle, is the inverse transform of the adjusted FFT. Simulation of the wind component time series at the nearest adjacent location (and all succeeding next nearest locations) is based on a model for the coherence between winds at two locations as a function of frequency and separation distance, where the adjacent locations are separated vertically and/or horizontally. The coherence function is used to calculate a coherence-weighted FFT of the wind at the next nearest location, given the FFT of the simulated time series at the previous location and the essentially incoherent FFT of the wind at the selected location derived a priori from the PSD model. The simulated time series at each adjacent location is the inverse Fourier transform of the coherence-weighted FFT. For a selected design case, the equations, the process and the simulated time series at multiple vehicle stations are presented.
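
    The core recipe here (shape the FFT of Gaussian white noise by the square root of a PSD, then invert) is general enough to sketch in a few lines. The Python fragment below is a minimal illustration under an assumed, made-up spectrum; it is not Adelfang's ground-wind PSD model, and the coherence-weighting step for adjacent locations is omitted.

    ```python
    import numpy as np

    def simulate_from_psd(psd_func, n, dt, rng=None):
        """Gaussian series whose spectrum follows psd_func: the square root of
        the PSD acts as a filter on the FFT of white noise, as described above."""
        rng = np.random.default_rng() if rng is None else rng
        freqs = np.fft.rfftfreq(n, d=dt)
        white = np.fft.rfft(rng.standard_normal(n))
        filt = np.sqrt(psd_func(freqs))
        filt[0] = 0.0                      # drop the mean (zero-frequency) term
        return np.fft.irfft(white * filt, n=n)

    # Illustrative turbulence-like spectrum (assumed shape, not the CLV model).
    psd = lambda f: 1.0 / (1.0 + (f / 0.05) ** (5.0 / 3.0))
    gust = simulate_from_psd(psd, n=4096, dt=0.1)
    ```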

  3. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method based on Bayesian analysis for classifying time series data from the international emissions trading market generated by agent-based simulation, and compares it with a discrete Fourier transform analytical method. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods have revealed the following results: (1) the classification methods express the time series data as distances in a mapping, which makes understanding and inference easier than working with the raw time series; (2) these methods can analyze uncertain time series data, including stationary and non-stationary processes, using these distances via agent-based simulation; and (3) the Bayesian analytical method can distinguish a 1% difference in the emission reduction targets of agents.

  4. Information and Complexity Measures Applied to Observed and Simulated Soil Moisture Time Series

    USDA-ARS's Scientific Manuscript database

    Time series of soil moisture-related parameters provide important insights into the functioning of soil water systems. Analysis of patterns within these time series has been used in several studies. The objective of this work was to compare patterns in observed and simulated soil moisture contents to u...

  5. Gridded Surface Subsurface Hydrologic Analysis Modeling for Analysis of Flood Design Features at the Picayune Strand Restoration Project

    DTIC Science & Technology

    2016-08-01

    In C2, the Miller Canal pump system was implicitly simulated by a time series of outflows assigned to model cells. This flow time series was representative of how the pump system would operate during the storm events simulated in this work (USACE 2004). (Figure 9 of the report shows the discharge time series for the Miller pump system.)

  6. Forecasting Nonlinear Chaotic Time Series with Function Expression Method Based on an Improved Genetic-Simulated Annealing Algorithm

    PubMed Central

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation which has the strong local search ability into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior. PMID:26000011

  7. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    PubMed

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of time series. In order to deal with the weakness associated with the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation which has the strong local search ability into the genetic algorithm to enhance the performance of optimization; besides, the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision with certain noise is also satisfactory. It can be concluded that the IGSA algorithm is energy-efficient and superior.

  8. Power estimation using simulations for air pollution time-series studies

    PubMed Central

    2012-01-01

    Background: Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods: Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results: In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions: These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
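
    The simulation loop the authors describe is easy to reproduce in outline. Below is a hedged Python sketch with synthetic data standing in for the Atlanta series: the pollutant distribution, effect size, and baseline rate are invented assumptions, and the covariate adjustment for time trends and meteorology is omitted.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_days, n_sims, alpha = 365, 200, 0.05    # fewer than the paper's 2000 sets
    pollutant = rng.gamma(4.0, 2.0, n_days)   # synthetic daily pollutant levels
    beta_true = 0.02                          # assumed log-rate effect per unit
    baseline = np.log(10.0)                   # ~10 expected visits per day

    X = sm.add_constant(pollutant)
    hits = 0
    for _ in range(n_sims):
        y = rng.poisson(np.exp(baseline + beta_true * pollutant))
        res = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        hits += res.pvalues[1] < alpha        # association detected?

    print(f"estimated power: {hits / n_sims:.2f}")
    ```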

  9. Power estimation using simulations for air pollution time-series studies.

    PubMed

    Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt

    2012-09-20

    Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.

  10. Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.

    2014-04-14

    To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority (BA) locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to model these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.
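
    As a rough illustration of the kind of series such a model produces, the sketch below simulates an ARMA(1,1) error process with an added deterministic time-of-day bias. All parameters are invented for illustration, not the fitted values from the paper, and the full seasonal, jointly simulated multi-BA structure is omitted.

    ```python
    import numpy as np
    from statsmodels.tsa.arima_process import arma_generate_sample

    phi, theta, sigma = 0.7, 0.3, 1.0        # illustrative ARMA(1,1) parameters
    n_hours = 24 * 30
    ar = np.array([1.0, -phi])               # statsmodels sign convention
    ma = np.array([1.0, theta])
    noise = arma_generate_sample(ar, ma, nsample=n_hours, scale=sigma)

    # Deterministic time-of-day bias, mimicking the diurnal structure the
    # authors report in day-ahead load forecast errors.
    hour = np.arange(n_hours) % 24
    simulated_error = 0.5 * np.sin(2 * np.pi * hour / 24) + noise
    ```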

  11. Process fault detection and nonlinear time series analysis for anomaly detection in safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, T.L.; Mullen, M.F.; Wangen, L.E.

    In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model-predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material from two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.

  12. A comparative study of shallow groundwater level simulation with three time series models in a coastal aquifer of South China

    NASA Astrophysics Data System (ADS)

    Yang, Q.; Wang, Y.; Zhang, J.; Delgado, J.

    2017-05-01

    Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, three time series analysis methods, Holt-Winters (HW), integrated time series (ITS), and seasonal autoregressive integrated moving average (SARIMA), are explored to simulate the groundwater level in a coastal aquifer in China. Monthly groundwater table depth data collected over a long time series, from 2000 to 2011, are simulated and compared using the three models. The error criteria are estimated using the coefficient of determination (R2), the Nash-Sutcliffe model efficiency coefficient (E), and the root-mean-squared error. The results indicate that all three models are accurate in reproducing the historical time series of groundwater levels. The comparisons show that the HW model is more accurate in predicting groundwater levels than the SARIMA and ITS models. It is recommended that additional studies explore this proposed method, which can in turn facilitate the development and implementation of more effective and sustainable groundwater management strategies.
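
    The three error criteria used in this comparison have standard definitions; a small Python helper, assuming observed and simulated arrays of equal length:

    ```python
    import numpy as np

    def r_squared(obs, sim):
        """Coefficient of determination (R^2)."""
        return np.corrcoef(obs, sim)[0, 1] ** 2

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe model efficiency coefficient (E)."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def rmse(obs, sim):
        """Root-mean-squared error."""
        return np.sqrt(np.mean((obs - sim) ** 2))
    ```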

  13. Scale-dependent intrinsic entropies of complex time series.

    PubMed

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
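
    For readers unfamiliar with the MSE machinery this paper builds on, the sketch below shows the standard coarse-graining plus sample-entropy core in Python. The EMD detrending step that is the paper's actual contribution is omitted, and the tolerance convention (r as a fraction of the standard deviation) is the usual default rather than anything specific to this study.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_frac=0.2):
        """Sample entropy: irregularity of a series at a single scale."""
        x = np.asarray(x, dtype=float)
        r = r_frac * np.std(x)
        def pairs(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
            return (np.sum(d <= r) - len(t)) / 2   # exclude self-matches
        b, a = pairs(m), pairs(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def coarse_grain(x, scale):
        """Non-overlapping averages: the standard MSE coarse-graining step."""
        n = len(x) // scale
        return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

    rng = np.random.default_rng(1)
    signal = rng.standard_normal(1024)
    mse_curve = [sample_entropy(coarse_grain(signal, s)) for s in range(1, 6)]
    ```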

  14. Inference of scale-free networks from gene expression time series.

    PubMed

    Daisuke, Tominaga; Horton, Paul

    2006-04-01

    Quantitative time-series observation of gene expression is becoming possible, for example by cell array technology. However, there are no practical methods with which to infer network structures using only observed time-series data. As most computational models of biological networks for continuous time-series data have a high degree of freedom, it is almost impossible to infer the correct structures. On the other hand, it has been reported that some kinds of biological networks, such as gene networks and metabolic pathways, may have scale-free properties. We hypothesize that the architecture of inferred biological network models can be restricted to scale-free networks. We developed an inference algorithm for biological networks using only time-series data by introducing such a restriction. We adopt the S-system as the network model, and a distributed genetic algorithm to optimize models to fit their simulated results to observed time-series data. We have tested our algorithm on a case study (simulated data). We compared optimization under no restriction, which allows for a fully connected network, and under the restriction that the total number of links must equal that expected from a scale-free network. The restriction reduced both false positive and false negative estimation of the links and also the differences between model simulation and the given time-series data.

  15. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.
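
    One simple flavor of wavelet-based surrogate, shown below, shuffles detail coefficients within each decomposition level, preserving the scale-wise energy distribution while randomizing fine-scale detail. This is an illustrative stand-in, not necessarily the encoding scheme the authors use; the wavelet choice is arbitrary.

    ```python
    import numpy as np
    import pywt

    def wavelet_surrogate(x, wavelet="db4", rng=None):
        """Randomized surrogate preserving the scale-wise energy of x by
        permuting detail coefficients within each level (illustrative)."""
        rng = np.random.default_rng() if rng is None else rng
        coeffs = pywt.wavedec(x, wavelet)
        shuffled = [coeffs[0]] + [rng.permutation(c) for c in coeffs[1:]]
        return pywt.waverec(shuffled, wavelet)[: len(x)]
    ```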

  16. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE PAGES

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; ...

    2016-01-28

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.

  17. Wavelet-based time series bootstrap model for multidecadal streamflow simulation using climate indicators

    NASA Astrophysics Data System (ADS)

    Erkyihun, Solomon Tassew; Rajagopalan, Balaji; Zagona, Edith; Lall, Upmanu; Nowak, Kenneth

    2016-05-01

    A model to generate stochastic streamflow projections conditioned on quasi-oscillatory climate indices such as the Pacific Decadal Oscillation (PDO) and the Atlantic Multi-decadal Oscillation (AMO) is presented. Recognizing that each climate index has underlying band-limited components that contribute most of the energy of the signals, we first pursue a wavelet decomposition of the signals to identify and reconstruct these features from annually resolved historical data and proxy-based paleoreconstructions of each climate index covering the period from 1650 to 2012. A K-nearest neighbor block bootstrap approach is then developed to simulate the total signal of each of these climate index series while preserving its time-frequency structure and marginal distributions. Finally, given the simulated climate signal time series, a K-nearest neighbor bootstrap is used to simulate annual streamflow series conditional on the joint state space defined by the simulated climate indices for each year. We demonstrate this method by applying it to simulation of streamflow at the Lees Ferry gauge on the Colorado River using indices of two large-scale climate forcings, PDO and AMO, which are known to modulate Colorado River Basin (CRB) hydrology at multidecadal time scales. Skill in stochastic simulation of multidecadal projections of flow using this approach is demonstrated.
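
    The conditional resampling step can be sketched compactly. Below is a hedged Python illustration of K-nearest-neighbor bootstrap of annual flows conditioned on a simulated climate-index state; the rank-based kernel weights are the common Lall-Sharma choice, and all names and shapes are illustrative rather than the authors' implementation.

    ```python
    import numpy as np

    def knn_conditional_bootstrap(flows, states, sim_states, k=5, rng=None):
        """Resample historical flows whose climate states (e.g., PDO, AMO
        pairs) are nearest to each simulated state. Illustrative sketch."""
        rng = np.random.default_rng() if rng is None else rng
        w = 1.0 / np.arange(1, k + 1)               # rank-based kernel weights
        w /= w.sum()
        out = np.empty(len(sim_states))
        for t, s in enumerate(sim_states):
            d = np.linalg.norm(states - s, axis=1)  # distance in index space
            nn = np.argsort(d)[:k]                  # k nearest historical years
            out[t] = flows[rng.choice(nn, p=w)]
        return out
    ```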

  18. Status, trends, and changes in freshwater inflows to bay systems in the Corpus Christi Bay National Estuary Program study area

    USGS Publications Warehouse

    Asquith, W.H.; Mosier, J. G.; Bush, P.W.

    1997-01-01

    The watershed simulation model Hydrologic Simulation Program—Fortran (HSPF) was used to generate simulated flow (runoff) from the 13 watersheds to the six bay systems because adequate gaged streamflow data from which to estimate freshwater inflows are not available; only about 23 percent of the adjacent contributing watershed area is gaged. The model was calibrated for the gaged parts of three watersheds—that is, selected input parameters (meteorologic and hydrologic properties and conditions) that control runoff were adjusted in a series of simulations until an adequate match between model-generated flows and a set (time series) of gaged flows was achieved. The primary model input is rainfall and evaporation data and the model output is a time series of runoff volumes. After calibration, simulations driven by daily rainfall for a 26-year period (1968–93) were done for the 13 watersheds to obtain runoff under current (1983–93), predevelopment (pre-1940 streamflow and pre-urbanization), and future (2010) land-use conditions for estimating freshwater inflows and for comparing runoff under the three land-use conditions; and to obtain time series of runoff from which to estimate time series of freshwater inflows for trend analysis.

  19. Simulation of time series by distorted Gaussian processes

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1977-01-01

    A distorted stationary Gaussian process can be used to provide computer-generated imitations of experimental time series. A method of analyzing a source time series and synthesizing an imitation is shown, and an example using X-band radiometer data is given.
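
    The idea of a distorted Gaussian process (pass a correlated Gaussian series through a memoryless nonlinearity so the marginal matches a target distribution) can be sketched as follows. The AR(1) driver and lognormal target are assumptions for illustration, not the radiometer statistics from the report.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Correlated (AR(1)) stationary Gaussian driver.
    n, phi = 5000, 0.9
    g = np.empty(n)
    g[0] = rng.standard_normal()
    for t in range(1, n):
        g[t] = phi * g[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

    # Memoryless distortion: Gaussian -> uniform -> target marginal.
    u = stats.norm.cdf(g)
    imitation = stats.lognorm(s=0.5).ppf(u)
    ```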

  20. Over-fitting Time Series Models of Air Pollution Health Effects: Smoothing Tends to Bias Non-Null Associations Towards the Null.

    EPA Science Inventory

    Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...

  1. A Nonlinear Dynamical Systems based Model for Stochastic Simulation of Streamflow

    NASA Astrophysics Data System (ADS)

    Erkyihun, S. T.; Rajagopalan, B.; Zagona, E. A.

    2014-12-01

    Traditional time series methods model the evolution of the underlying process as a linear or nonlinear function of the autocorrelation. These methods capture the distributional statistics but are incapable of providing insights into the dynamics of the process, the potential regimes, and predictability. This work develops a nonlinear dynamical model for stochastic simulation of streamflows. First, a wavelet spectral analysis is employed on the flow series to isolate dominant orthogonal quasi-periodic time series components. The periodic bands are added to form the 'signal' component of the time series, the residual being the 'noise' component. Next, the underlying nonlinear dynamics of this combined band time series is recovered. For this, the univariate time series is embedded in a d-dimensional space with an appropriate lag T to recover the state space in which the dynamics unfolds. Predictability is assessed by quantifying the divergence of trajectories in the state space with time, as Lyapunov exponents. The nonlinear dynamics, in conjunction with a K-nearest neighbor time resampling, is used to simulate the combined band, to which the noise component is added to simulate the time series. We demonstrate this method by applying it to the data at Lees Ferry that comprise both the paleo-reconstructed and naturalized historic annual flows spanning 1490-2010. We identify interesting dynamics of the signal in the flow series and epochal behavior of predictability. These will be of immense use for water resources planning and management.
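
    The embedding step is standard Takens delay-coordinate reconstruction; a minimal Python helper follows. In practice d and T would be chosen with false-nearest-neighbor and mutual-information diagnostics, which this sketch assumes away.

    ```python
    import numpy as np

    def delay_embed(x, d, tau):
        """Embed a univariate series in d dimensions with lag tau,
        reconstructing the state space in which the dynamics unfolds."""
        n = len(x) - (d - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(d)])

    rng = np.random.default_rng(3)
    x = np.sin(0.1 * np.arange(1000)) + 0.1 * rng.standard_normal(1000)
    states = delay_embed(x, d=3, tau=8)   # illustrative d and tau
    ```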

  2. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    EPA Science Inventory

    Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...

  3. Advanced Space Shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1982-01-01

    A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided, and the results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, and the minimum frequency simulated is discussed. The results of spectral and statistical analyses of the SSTT are presented.

  4. Introduction and application of the multiscale coefficient of variation analysis.

    PubMed

    Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh

    2017-10-01

    Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
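
    From the verbal description above, the MSCV statistic can be sketched directly. This is a reading of the description, not the authors' MATLAB implementation, and assumes a positive-valued series such as inter-onset intervals.

    ```python
    import numpy as np

    def mscv(x, window_sizes):
        """Mean absolute distance between windowed CV estimates and the
        global CV, computed per window size."""
        x = np.asarray(x, dtype=float)
        cv_global = x.std() / x.mean()
        out = []
        for w in window_sizes:
            n = len(x) // w
            wins = x[:n * w].reshape(n, w)
            cv_local = wins.std(axis=1) / wins.mean(axis=1)
            out.append(np.mean(np.abs(cv_local - cv_global)))
        return np.array(out)

    rng = np.random.default_rng(4)
    intervals = rng.gamma(5.0, 0.1, 300)      # short positive-valued series
    print(mscv(intervals, window_sizes=[5, 10, 20]))
    ```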

  5. Normalization of time-series satellite reflectance data to a standard sun-target-sensor geometry using a semi-empirical model

    NASA Astrophysics Data System (ADS)

    Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang

    2017-10-01

    Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics and study climate change. However, several sensors with wide spatial coverage and high observation frequency are designed to have a large field of view (FOV), which causes variations in the sun-target-sensor geometry of time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectance under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model was used. The semi-empirical model was first fitted using all simulated bidirectional reflectance. Experimental results showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in the time-series reflectance data were also reduced after the sun-target-sensor normalization process.

  6. Modeling Geodetic Processes with Lévy α-Stable Distribution and FARIMA

    NASA Astrophysics Data System (ADS)

    Montillet, Jean-Philippe; Yu, Kegen

    2015-04-01

    In recent years the scientific community has been using the autoregressive moving average (ARMA) model to model the noise in global positioning system (GPS) time series (daily solutions). This work starts by investigating the limits of the ARMA model, which is widely used in signal processing when the measurement noise is white. Since a typical GPS time series consists of geophysical signals (e.g., seasonal signals) and stochastic processes (e.g., coloured and white noise), the ARMA model may be inappropriate. Therefore, the application of the fractional autoregressive integrated moving average (FARIMA) model is investigated. The simulation results, using simulated time series as well as real GPS time series from a few selected stations around Australia, show that the FARIMA model fits the time series better than other models when the coloured noise is larger than the white noise. The second part of this work focuses on fitting the GPS time series with the family of Lévy α-stable distributions. Using this distribution, a hypothesis test is developed to eliminate coarse outliers from GPS time series effectively, achieving better performance than the rule of thumb of n standard deviations (with n chosen empirically).

  7. A large-signal dynamic simulation for the series resonant converter

    NASA Technical Reports Server (NTRS)

    King, R. J.; Stuart, T. A.

    1983-01-01

    A simple nonlinear discrete-time dynamic model for the series resonant dc-dc converter is derived using approximations appropriate to most power converters. This model is useful for the dynamic simulation of a series resonant converter using only a desktop calculator. The model is compared with a laboratory converter for a large transient event.

  8. Time irreversibility of financial time series based on higher moments and multiscale Kullback-Leibler divergence

    NASA Astrophysics Data System (ADS)

    Li, Jinyang; Shang, Pengjian

    2018-07-01

    Irreversibility is an important property of time series. In this paper, we propose the higher moments and multiscale Kullback-Leibler divergence to analyze time series irreversibility. The higher moments Kullback-Leibler divergence (HMKLD) can amplify irreversibility and make variations in irreversibility more obvious, so that variations can be seen even in time series whose irreversibility is otherwise hard to detect. We employ simulated data and financial stock data to test and verify this method, and find that the HMKLD of stock data grows in the form of fluctuations. As for the multiscale Kullback-Leibler divergence (MKLD), the dynamics are complex, so that exploring the laws of the simulated and stock systems is difficult. In the conventional multiscale entropy method the coarse-graining process is non-overlapping; here, however, we apply a different coarse-graining process and obtain a surprising discovery. The result shows that when the scales are 4 and 5, their entropy is nearly similar, which demonstrates that MKLD is efficient in displaying the characteristics of time series irreversibility.

  9. On Digital Simulation of Multicorrelated Random Processes and Its Applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sinha, A. K.

    1973-01-01

    Two methods are described to simulate, on a digital computer, a set of correlated, stationary, and Gaussian time series with zero mean from the given matrix of power spectral densities and cross spectral densities. The first method is based upon trigonometric series with random amplitudes and deterministic phase angles. The random amplitudes are generated by using a standard random number generator subroutine. An example is given which corresponds to three components of wind velocities at two different spatial locations for a total of six correlated time series. In the second method, the whole process is carried out using the Fast Fourier Transform approach. This method gives more accurate results and works about twenty times faster for a set of six correlated time series.
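
    The second (FFT) method generalizes a simple two-series trick: mix independent white spectra with frequency-dependent weights so the pair has a prescribed coherence, then inverse-transform. The sketch below shows that two-series special case with an assumed coherence curve; the thesis's full method factorizes the whole cross-spectral matrix at each frequency.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, dt = 4096, 1.0
    freqs = np.fft.rfftfreq(n, dt)

    # White spectra from real noise (Hermitian symmetry comes for free).
    W1 = np.fft.rfft(rng.standard_normal(n))
    W2 = np.fft.rfft(rng.standard_normal(n))

    coh = np.exp(-freqs / 0.1)          # assumed magnitude-squared coherence
    Y2 = np.sqrt(coh) * W1 + np.sqrt(1.0 - coh) * W2

    x1 = np.fft.irfft(W1, n=n)          # pair of correlated time series
    x2 = np.fft.irfft(Y2, n=n)
    ```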

  10. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling.

    PubMed

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-12-07

    The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD underground often keeps working for several days; collecting the gyro data aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance (improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulation data. The simulation results show that when the length of the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the length of the time series is long (6 × 10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with the FOG-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The results of the experiment testify that the improved fast DAVAR not only shortens the computation time, but can also analyze discontinuous time series.
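
    The DAVAR itself is conceptually simple: an Allan variance evaluated inside a sliding window. The sketch below is the direct, brute-force baseline that the fast and improved-fast algorithms accelerate; it makes no attempt at their speedups or at handling gaps, and the window and step values are illustrative.

    ```python
    import numpy as np

    def allan_variance(y, m):
        """Overlapping Allan variance of rate data y at cluster size m."""
        ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # cluster means
        d = ybar[m:] - ybar[:-m]
        return 0.5 * np.mean(d**2)

    def davar(y, window, m_list, step=100):
        """Allan variance inside a sliding window: the direct DAVAR."""
        starts = range(0, len(y) - window, step)
        return np.array([[allan_variance(y[s:s + window], m) for m in m_list]
                         for s in starts])
    ```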

  11. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling

    PubMed Central

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-01-01

    The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD underground often keeps working for several days; collecting the gyro data aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance (improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulation data. The simulation results show that when the length of the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the length of the time series is long (6 × 10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR can successfully deal with discontinuous data. In the end, a vibration experiment with the FOG-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The results of the experiment testify that the improved fast DAVAR not only shortens the computation time, but can also analyze discontinuous time series. PMID:27941600

  12. Ranking streamflow model performance based on Information theory metrics

    NASA Astrophysics Data System (ADS)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow, forming the symbol alphabet. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: watersheds served as information filters, and streamflow time series were less random and more complex than those of precipitation. This reflects the fact that the watershed acts as an information filter in the hydrologic conversion process from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as model complexity increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters in simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
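
    Of the three string metrics, mean information gain is the most compact to illustrate: symbolize the series by quantiles, then take the entropy of the next symbol given the preceding word. The sketch below uses a standard definition of the statistic; the paper's exact estimator and alphabet size may differ.

    ```python
    import numpy as np
    from collections import Counter

    def symbolize(x, n_symbols=4):
        """Map a series to symbols by quantiles of its own distribution."""
        edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
        return np.digitize(x, edges)

    def mean_information_gain(sym, order=1):
        """H(word of length order+1) - H(word of length order), in bits."""
        words = [tuple(sym[i:i + order + 1]) for i in range(len(sym) - order)]
        def entropy(counter):
            p = np.array(list(counter.values()), dtype=float)
            p /= p.sum()
            return -np.sum(p * np.log2(p))
        return entropy(Counter(words)) - entropy(Counter(w[:-1] for w in words))

    flow = np.random.default_rng(6).lognormal(size=3650)  # synthetic "streamflow"
    print(mean_information_gain(symbolize(flow)))
    ```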

  13. Comparison of different synthetic 5-min rainfall time series regarding their suitability for urban drainage modelling

    NASA Astrophysics Data System (ADS)

    van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András

    2015-04-01

    For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, design concepts often rely on uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different precipitation modelling approaches regarding their ability to reproduce rainfall and runoff characteristics: one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach based on a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event-based statistics such as mean dry spell and wet spell duration, wet spell amount and intensity, long-term means of precipitation sum and number of events, and extreme value distributions for different durations. The series are then compared regarding simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show that all rainfall models are suitable in principle, but with different strengths and weaknesses regarding the different rainfall and runoff characteristics considered.

  14. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series

    PubMed Central

    2011-01-01

    Background: Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results: Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions: The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html. PMID:21851598
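
    For orientation, the recurrence at the heart of plain DTW, on which DTW-S layers interpolation and a significance test, is just a dynamic program over pairwise costs:

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic DTW distance between two series (no significance test)."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]
    ```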

  15. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series.

    PubMed

    Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp

    2011-08-18

    Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.

  16. Improvements of the two-dimensional FDTD method for the simulation of normal- and superconducting planar waveguides using time series analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofschen, S.; Wolff, I.

    1996-08-01

    Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using the Fourier transform. The introduced method of time series analysis to extract propagation and attenuation constants reduces the required computation time drastically. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.

  17. A Multitaper, Causal Decomposition for Stochastic, Multivariate Time Series: Application to High-Frequency Calcium Imaging Data.

    PubMed

    Sornborger, Andrew T; Lauderdale, James D

    2016-11-01

    Neural data analysis has increasingly incorporated causal information to study circuit connectivity. Dimensional reduction forms the basis of most analyses of large multivariate time series. Here, we present a new, multitaper-based decomposition for stochastic, multivariate time series that acts on the covariance of the time series at all lags, C(τ), as opposed to standard methods that decompose the time series, X(t), using only information at zero lag. In both simulated and neural imaging examples, we demonstrate that methods that neglect the full causal structure may be discarding important dynamical information in a time series.

  18. The detection of local irreversibility in time series based on segmentation

    NASA Astrophysics Data System (ADS)

    Teng, Yue; Shang, Pengjian

    2018-06-01

    We propose a strategy for the detection of local irreversibility in stationary time series based on multiple scales. The detection is beneficial for evaluating the displacement of irreversibility toward local skewness. By means of this method, we can usefully discuss the local irreversible fluctuations of a time series as the scale changes. The method was applied to simulated nonlinear signals generated by the ARFIMA process and the logistic map to show how the irreversibility functions react to increasing scale. The method was also applied to financial market series, i.e., the American, Chinese and European markets. The local irreversibility for different markets demonstrates distinct characteristics. Simulations and real data support the need to explore local irreversibility.

  19. Using NASA's Giovanni System to Simulate Time-Series Stations in the Outflow Region of California's Eel River

    NASA Technical Reports Server (NTRS)

    Acker, James G.; Shen, Suhung; Leptoukh, Gregory G.; Lee, Zhongping

    2012-01-01

    Oceanographic time-series stations provide vital data for the monitoring of oceanic processes, particularly those associated with trends over time and interannual variability. There are likely numerous locations where the establishment of a time-series station would be desirable, but for reasons of funding or logistics, such establishment may not be feasible. An alternative to an operational time-series station is monitoring of sites via remote sensing. In this study, the NASA Giovanni data system is employed to simulate the establishment of two time-series stations near the outflow region of California's Eel River, which carries a high sediment load. Previous time-series analysis of this location (Acker et al. 2009) indicated that remotely-sensed chl a exhibits a statistically significant increasing trend during summer (low flow) months, but no apparent trend during winter (high flow) months. Examination of several newly-available ocean data parameters in Giovanni, including 8-day resolution data, demonstrates the differences in ocean parameter trends at the two locations compared to regionally-averaged time series. The hypothesis that the increased summer chl a values are related to increasing SST is evaluated, and the signature of the Eel River plume is defined with ocean optical parameters.

  20. Local normalization: Uncovering correlations in non-stationary financial time series

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Guhr, Thomas

    2010-09-01

    The measurement of correlations between financial time series is of vital importance for risk management. In this paper we address an estimation error that stems from the non-stationarity of the time series. We put forward a method to rid the time series of local trends and variable volatility, while preserving cross-correlations. We test this method in a Monte Carlo simulation, and apply it to empirical data for the S&P 500 stocks.
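
    The recipe reads directly as code: subtract a moving local mean and divide by a moving local standard deviation. The window length below is an illustrative assumption, not the value the authors calibrate, and the one-sided trailing window is one of several reasonable conventions.

    ```python
    import numpy as np

    def local_normalize(x, window=13):
        """Remove local trend and local volatility from a return series
        while preserving cross-correlations (trailing-window sketch)."""
        x = np.asarray(x, dtype=float)
        out = np.full_like(x, np.nan)
        for t in range(window, len(x)):
            seg = x[t - window:t + 1]
            out[t] = (x[t] - seg.mean()) / seg.std()
        return out
    ```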

  1. Using simulations and data to evaluate mean sensitivity (ζ) as a useful statistic in dendrochronology

    Treesearch

    Andrew G. Bunn; Esther Jansma; Mikko Korpela; Robert D. Westfall; James Baldwin

    2013-01-01

    Mean sensitivity (ζ) continues to be used in dendrochronology despite a literature that shows it to be of questionable value in describing the properties of a time series. We simulate first-order autoregressive models with known parameters and show that ζ is a function of variance and autocorrelation of a time series. We then use 500 random tree-ring...
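
    Mean sensitivity has a one-line definition, and the study's basic experiment, that ζ moves with the autocorrelation of an otherwise identical series, is quick to reproduce. A sketch with invented AR(1) parameters:

    ```python
    import numpy as np

    def mean_sensitivity(x):
        """zeta: mean of 2|x_t - x_{t-1}| / (x_t + x_{t-1})."""
        x = np.asarray(x, dtype=float)
        return np.mean(2.0 * np.abs(np.diff(x)) / (x[1:] + x[:-1]))

    rng = np.random.default_rng(7)
    n, mu = 500, 10.0
    for phi in (0.0, 0.4, 0.8):               # increasing AR(1) persistence
        x = np.empty(n)
        x[0] = mu
        for t in range(1, n):
            x[t] = mu + phi * (x[t - 1] - mu) + rng.standard_normal()
        print(phi, round(mean_sensitivity(x), 3))
    ```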

  2. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov state model (MSM) is constructed from MD simulation trajectories, and the MSM parameters are then refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data can provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states more robustly than estimation from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements, including single-molecule time-series trajectories.

  3. Validation of a national hydrological model

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Booker, D. J.; Cattoën, C.

    2016-10-01

    Nationwide predictions of flow time-series are valuable for development of policies relating to environmental flows, calculating reliability of supply to water users, or assessing risk of floods or droughts. This breadth of model utility is possible because various hydrological signatures can be derived from simulated flow time-series. However, producing national hydrological simulations can be challenging due to strong environmental diversity across catchments and a lack of data available to aid model parameterisation. A comprehensive and consistent suite of test procedures to quantify spatial and temporal patterns in performance across various parts of the hydrograph is described and applied to quantify the performance of an uncalibrated national rainfall-runoff model of New Zealand. Flow time-series observed at 485 gauging stations were used to calculate Nash-Sutcliffe efficiency and percent bias when simulating between-site differences in daily series, between-year differences in annual series, and between-site differences in hydrological signatures. The procedures were used to assess the benefit of applying a correction to the modelled flow duration curve based on an independent statistical analysis. They were used to aid understanding of climatological, hydrological and model-based causes of differences in predictive performance by assessing multiple hypotheses that describe where and when the model was expected to perform best. As the procedures produce quantitative measures of performance, they provide an objective basis for model assessment that could be applied when comparing observed daily flow series with competing simulated flow series from any region-wide or nationwide hydrological model. Model performance varied in space and time with better scores in larger and medium-wet catchments, and in catchments with smaller seasonal variations. Surprisingly, model performance was not sensitive to aquifer fraction or rain gauge density.
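    The two headline scores are standard and compact to compute. A sketch using the textbook definitions (the paper's exact implementation details, such as any transforms applied before scoring, are not specified here):

        import numpy as np

        def nash_sutcliffe(obs, sim):
            # NSE = 1 - SSE / variance of observations: 1 is perfect,
            # 0 means no better than predicting the observed mean.
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def percent_bias(obs, sim):
            # With this sign convention, positive values indicate
            # overestimation of total flow volume (conventions vary).
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 100.0 * np.sum(sim - obs) / np.sum(obs)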

  4. Development of a Design Tool for Planning Aqueous Amendment Injection Systems Permanganate Design Tool

    DTIC Science & Technology

    2010-08-01

    [Excerpt from the report's abbreviations list and table of contents] CSTR: continuously stirred tank reactors; CT: contact time; EDB: ethylene dibromide; ESTCP: Environmental Security Technology Certification Program... Section 6.2, Simulating Oxidant Distribution Using a Series of CSTRs; Section 6.2.1, Model Development: the transport and consumption of permanganate are simulated within the...

  5. Space Object Classification Using Fused Features of Time Series Data

    NASA Astrophysics Data System (ADS)

    Jia, B.; Pham, K. D.; Blasch, E.; Shen, D.; Wang, Z.; Chen, G.

    In this paper, a fused feature vector consisting of raw time series and texture feature information is proposed for space object classification. The time series data includes historical orbit trajectories and asteroid light curves. The texture feature is derived from recurrence plots using Gabor filters for both unsupervised learning and supervised learning algorithms. The simulation results show that the classification algorithms using the fused feature vector achieve better performance than those using raw time series or texture features only.
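    A sketch of the texture-feature front end: building the recurrence plot that the Gabor filters are then applied to. The threshold choice is an assumption, and the Gabor filtering and classifier stages are omitted:

        import numpy as np

        def recurrence_plot(x, eps=None):
            # Binary recurrence matrix R[i, j] = 1 where |x_i - x_j| < eps;
            # eps defaults to 10% of the series range (an assumed threshold).
            x = np.asarray(x, dtype=float)
            if eps is None:
                eps = 0.1 * (x.max() - x.min())
            d = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix
            return (d < eps).astype(np.uint8)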

  6. The conditional resampling model STARS: weaknesses of the modeling concept and development

    NASA Astrophysics Data System (ADS)

    Menz, Christoph

    2016-04-01

    The Statistical Analogue Resampling Scheme (STARS) is based on a modeling concept of Werner and Gerstengarbe (1997). The model uses a conditional resampling technique to create a simulation time series from daily observations. Unlike other time series generators (such as stochastic weather generators), STARS only needs a linear regression specification of a single variable as the target condition for the resampling. Since its first implementation the algorithm has been extended to allow for a spatially distributed trend signal and to preserve the seasonal cycle and the autocorrelation of the observation time series (Orlovsky, 2007; Orlovsky et al., 2008). This evolved version was successfully used in several climate impact studies. However, a detailed evaluation of the simulations revealed two fundamental weaknesses of the utilized resampling technique. 1. The restriction of the resampling condition to a single variable can lead to a misinterpretation of the change signal of other variables when the model is applied to a multivariate time series (F. Wechsung and M. Wechsung, 2014). As one example, the short-term correlations between precipitation and temperature (cooling of the near-surface air layer after a rainfall event) can be misinterpreted as a climatic change signal in the simulation series. 2. The model restricts the linear regression specification to the annual mean time series, precluding the specification of seasonally varying trends. To overcome these fundamental weaknesses, the whole algorithm was redeveloped. The poster discusses the main weaknesses of the earlier model implementation and the methods applied to overcome them in the new version. Based on the new model, idealized simulations were conducted to illustrate the enhancement.

  7. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    NASA Astrophysics Data System (ADS)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems have been performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been extensively used. The difficulty common to all methods is the determination of sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series, combining the wavelet transform with a nonlinear model, and drawing on the merits of both. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand, and prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results indicate that the wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
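    A compact sketch of the decomposition half of such a hybrid, using the PyWavelets package; the db4 wavelet, the decomposition level, and the stand-in AR(1) component forecaster are illustrative assumptions (the paper pairs the wavelet transform with a nonlinear component model):

        import numpy as np
        import pywt  # PyWavelets

        def wavelet_components(x, wavelet="db4", level=3):
            # Split a series into mono-component signals: reconstruct each
            # set of DWT coefficients separately, with the others zeroed out.
            coeffs = pywt.wavedec(x, wavelet, level=level)
            comps = []
            for k in range(len(coeffs)):
                kept = [c if i == k else np.zeros_like(c)
                        for i, c in enumerate(coeffs)]
                comps.append(pywt.waverec(kept, wavelet)[: len(x)])
            return comps

        def ar1_forecast(c):
            # One-step AR(1) forecast per component; a linear stand-in
            # for the paper's nonlinear component model.
            phi = np.dot(c[:-1], c[1:]) / np.dot(c[:-1], c[:-1])
            return phi * c[-1]

        x = np.sin(np.linspace(0.0, 20.0, 256)) + 0.3 * np.random.randn(256)
        one_step = sum(ar1_forecast(c) for c in wavelet_components(x))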

  8. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors. The performance measures include error analysis, coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool...
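    The separation idea can be sketched in a few lines. This is an illustrative reading of the magnitude/sequence split; the report's exact decomposition and reconstruction may differ:

        import numpy as np

        def magnitude_and_sequence(x):
            # Magnitude component: the sorted values (duration-curve-like);
            # sequence component: the rank of each time step. Comparing the
            # two components of observed vs. simulated series attributes
            # errors to magnitude or to ordering.
            x = np.asarray(x, dtype=float)
            order = np.argsort(x)
            magnitude = x[order]
            sequence = np.argsort(order)      # rank series
            return magnitude, sequence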

  9. Radiocarbon dating uncertainty and the reliability of the PEWMA method of time-series analysis for research on long-term human-environment interaction

    PubMed Central

    Carleton, W. Christopher; Campbell, David

    2018-01-01

    Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating, the most common chronometric technique in archaeological and palaeoenvironmental research, creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEWMA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20–30% of the time, provided the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence. PMID:29351329

  10. Radiocarbon dating uncertainty and the reliability of the PEWMA method of time-series analysis for research on long-term human-environment interaction.

    PubMed

    Carleton, W Christopher; Campbell, David; Collard, Mark

    2018-01-01

    Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating, the most common chronometric technique in archaeological and palaeoenvironmental research, creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEWMA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20-30% of the time, provided the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence.

  11. Analysis of Nonstationary Time Series for Biological Rhythms Research.

    PubMed

    Leise, Tanya L

    2017-06-01

    This article is part of a Journal of Biological Rhythms series exploring analysis and statistics topics relevant to researchers in biological rhythms and sleep research. The goal is to provide an overview of the most common issues that arise in the analysis and interpretation of data in these fields. In this article on time series analysis for biological rhythms, we describe some methods for assessing the rhythmic properties of time series, including tests of whether a time series is indeed rhythmic. Because biological rhythms can exhibit significant fluctuations in their period, phase, and amplitude, their analysis may require methods appropriate for nonstationary time series, such as wavelet transforms, which can measure how these rhythmic parameters change over time. We illustrate these methods using simulated and real time series.
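    A sketch of the wavelet approach for a rhythm with a drifting period, using the continuous wavelet transform from the PyWavelets package; the Morlet wavelet, the scale range, and the synthetic signal are illustrative choices:

        import numpy as np
        import pywt  # PyWavelets

        # A synthetic rhythm whose period drifts from ~20 to ~28 samples,
        # mimicking a nonstationary biological rhythm.
        n = 2000
        period = np.linspace(20.0, 28.0, n)
        x = np.sin(2.0 * np.pi * np.cumsum(1.0 / period))

        scales = np.arange(4, 64)
        coefs, freqs = pywt.cwt(x, scales, "morl")      # continuous wavelet transform
        power = np.abs(coefs) ** 2
        est_period = 1.0 / freqs[power.argmax(axis=0)]  # dominant period vs. time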

  12. deltaGseg: macrostate estimation via molecular dynamics simulations and multiscale time series analysis.

    PubMed

    Low, Diana H P; Motakis, Efthymios

    2013-10-01

    Binding free energy calculations obtained through molecular dynamics simulations reflect intermolecular interaction states through a series of independent snapshots. Typically, the free energies of multiple simulated series (each with slightly different starting conditions) need to be estimated. Previous approaches carry out this task by moving averages at certain decorrelation times, assuming that the system comes from a single conformation description of binding events. Here, we discuss a more general approach that uses statistical modeling, wavelets denoising and hierarchical clustering to estimate the significance of multiple statistically distinct subpopulations, reflecting potential macrostates of the system. We present the deltaGseg R package that performs macrostate estimation from multiple replicated series and allows molecular biologists/chemists to gain physical insight into the molecular details that are not easily accessible by experimental techniques. deltaGseg is a Bioconductor R package available at http://bioconductor.org/packages/release/bioc/html/deltaGseg.html.

  13. Modeling climate change impacts on combined sewer overflow using synthetic precipitation time series.

    PubMed

    Bendel, David; Beck, Ferdinand; Dittmer, Ulrich

    2013-01-01

    In the presented study climate change impacts on combined sewer overflows (CSOs) in Baden-Wuerttemberg, Southern Germany, were assessed based on continuous long-term rainfall-runoff simulations. As input data, synthetic rainfall time series were used. The applied precipitation generator NiedSim-Klima accounts for climate change effects on precipitation patterns. Time series for the past (1961-1990) and future (2041-2050) were generated for various locations. Comparing the simulated CSO activity of both periods we observe significantly higher overflow frequencies for the future. Changes in overflow volume and overflow duration depend on the type of overflow structure. Both values will increase at simple CSO structures that merely divide the flow, whereas they will decrease when the CSO structure is combined with a storage tank. However, there is a wide variation between the results of different precipitation time series (representative for different locations).

  14. A Note on Verification of Computer Simulation Models

    ERIC Educational Resources Information Center

    Aigner, Dennis J.

    1972-01-01

    Establishes an argument that questions the validity of one test of goodness-of-fit (the extent to which a series of obtained measures agrees with a series of theoretical measures) for the simulated time path of a simple endogenous (internally developed) variable in a simultaneous, perhaps dynamic, econometric model. (Author)

  15. Design Tool for Planning Permanganate Injection Systems

    DTIC Science & Technology

    2010-08-01

    [Excerpt from the report's abbreviations list and table of contents] Chemical Spill 10; CSTR: continuously stirred tank reactors; CT: contact time; EDB: ethylene dibromide; ESTCP: Environmental Security Technology... Section 6.2, Simulating Oxidant Distribution Using a Series of CSTRs... Section 6.2.1, Model Development: the transport and consumption of permanganate...

  16. Pan-European stochastic flood event set

    NASA Astrophysics Data System (ADS)

    Kadlec, Martin; Pinto, Joaquim G.; He, Yi; Punčochář, Petr; Kelemen, Fanni D.; Manful, Desmond; Palán, Ladislav

    2017-04-01

    Impact Forecasting (IF), the model development center of Aon Benfield, has been developing a large suite of catastrophe flood models on a probabilistic basis for individual countries in Europe. Such natural catastrophes do not follow national boundaries: for example, the major flood in 2016 was responsible for Europe's largest insured loss of USD 3.4bn and affected Germany, France, Belgium, Austria and parts of several other countries. Reflecting such needs, IF initiated a pan-European flood event set development which combines cross-country exposures with country-based loss distributions to provide more insightful data to re/insurers. Because observed discharge data are not available across the whole of Europe in sufficient quantity and quality for detailed loss evaluation, a top-down approach was chosen. This approach is based on simulating precipitation from a GCM/RCM model chain followed by a calculation of discharges using rainfall-runoff modelling. IF set up this project in close collaboration with Karlsruhe Institute of Technology (KIT) regarding the precipitation estimates and with the University of East Anglia (UEA) in terms of the rainfall-runoff modelling. KIT's main objective is to provide high-resolution daily historical and stochastic time series of key meteorological variables. A purely dynamical downscaling approach with the regional climate model COSMO-CLM (CCLM) is used to generate the historical time series, using re-analysis data as boundary conditions. The resulting time series are validated against the gridded observational dataset E-OBS, and different bias-correction methods are employed. The generation of the stochastic time series requires transfer functions between large-scale atmospheric variables and regional temperature and precipitation fields. These transfer functions are developed for the historical time series using reanalysis data as predictors and bias-corrected CCLM simulated precipitation and temperature as predictands. Finally, the transfer functions are applied to a large ensemble of GCM simulations with forcing corresponding to present-day climate conditions to generate highly resolved stochastic time series of precipitation and temperature for several thousand years. These time series form the input for the rainfall-runoff model developed by the UEA team. It is a spatially distributed model adapted from the HBV model and will be calibrated for individual basins using historical discharge data. The calibrated model will be driven by the precipitation time series generated by the KIT team to simulate discharges at a daily time step. The uncertainties in the simulated discharges will be analysed using multiple model parameter sets. A number of statistical methods will be used to assess return periods, changes in the magnitudes, changes in the characteristics of floods such as time base and time to peak, and spatial correlations of large flood events. The pan-European flood stochastic event set will permit a better view of flood risk for market applications.

  17. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    NASA Astrophysics Data System (ADS)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows preserving complex statistical dependencies at different scales with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model. Softw., vol. 86, pp. 264-276, http://dx.doi.org/10.1016/j.envsoft.2016.10.002
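    A bare-bones sketch of one simulation step for a univariate series; the distance measure, acceptance threshold, and scan strategy are simplifications of the published algorithm:

        import numpy as np

        rng = np.random.default_rng(0)

        def direct_sampling_step(training, pattern, threshold=0.05):
            # Scan training positions in random order; where the values
            # preceding a position resemble the current simulation pattern
            # closely enough, sample the value that followed there.
            m = len(pattern)
            span = training.max() - training.min()
            idx = rng.permutation(np.arange(m, len(training)))
            best_i, best_d = idx[0], np.inf
            for i in idx:
                d = np.mean(np.abs(training[i - m:i] - pattern)) / span
                if d < threshold:
                    return training[i]       # first sufficiently similar match
                if d < best_d:
                    best_i, best_d = i, d
            return training[best_i]          # otherwise use the best match found

        training = np.sin(np.arange(500) * 0.3) + 0.1 * rng.standard_normal(500)
        sim = list(training[:3])             # seed with the first few values
        for _ in range(50):
            sim.append(direct_sampling_step(training, np.array(sim[-3:])))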

  18. Space shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1980-01-01

    The effects of atmospheric turbulence in both horizontal and near horizontal flight, during the return of the space shuttle, are important for determining design, control, and 'pilot-in-the-loop' effects. A nonrecursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed which provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes which are entitled shuttle simulation turbulence tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 10,000 meters. The turbulence generation procedure is described as well as the results of validating the simulated turbulence. Conclusions and recommendations are presented and references cited. The tabulated one dimensional von Karman spectra and the results of spectral and statistical analyses of the SSTT are contained in the appendix.

  19. Forecasting Jakarta composite index (IHSG) based on chen fuzzy time series and firefly clustering algorithm

    NASA Astrophysics Data System (ADS)

    Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.

    2018-03-01

    This paper proposes the combination of the Firefly Algorithm (FA) and Chen fuzzy time series forecasting. Most existing fuzzy forecasting methods based on fuzzy time series use a static interval length. Therefore, we apply an artificial intelligence technique, the Firefly Algorithm (FA), to set non-stationary interval lengths for each cluster in the Chen method. The method is evaluated by applying it to the Jakarta Composite Index (IHSG) and comparing it with classical Chen fuzzy time series forecasting. Its performance is verified through simulation using Matlab.

  20. Characterizing time series via complexity-entropy curves

    NASA Astrophysics Data System (ADS)

    Ribeiro, Haroldo V.; Jauregui, Max; Zunino, Luciano; Lenzi, Ervin K.

    2017-06-01

    The search for patterns in time series is a very common task when dealing with complex systems. This is usually accomplished by employing a complexity measure such as entropies and fractal dimensions. However, such measures usually only capture a single aspect of the system dynamics. Here, we propose a family of complexity measures for time series based on a generalization of the complexity-entropy causality plane. By replacing the Shannon entropy by a monoparametric entropy (Tsallis q entropy) and after considering the proper generalization of the statistical complexity (q complexity), we build up a parametric curve (the q -complexity-entropy curve) that is used for characterizing and classifying time series. Based on simple exact results and numerical simulations of stochastic processes, we show that these curves can distinguish among different long-range, short-range, and oscillating correlated behaviors. Also, we verify that simulated chaotic and stochastic time series can be distinguished based on whether these curves are open or closed. We further test this technique in experimental scenarios related to chaotic laser intensity, stock price, sunspot, and geomagnetic dynamics, confirming its usefulness. Finally, we prove that these curves enhance the automatic classification of time series with long-range correlations and interbeat intervals of healthy subjects and patients with heart disease.
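    The entropy half of the construction can be sketched with Bandt-Pompe ordinal patterns and the normalized Tsallis q-entropy; the companion q-complexity, which requires a Jensen-type divergence from the uniform distribution, is omitted here, and the embedding dimension and q grid are illustrative assumptions:

        import numpy as np
        from itertools import permutations
        from math import factorial

        def ordinal_probs(x, d=4):
            # Bandt-Pompe probabilities of the d! ordinal patterns.
            counts = {p: 0 for p in permutations(range(d))}
            for i in range(len(x) - d + 1):
                counts[tuple(np.argsort(x[i:i + d]))] += 1
            p = np.array(list(counts.values()), dtype=float)
            return p / p.sum()

        def tsallis_entropy(p, q, n):
            # Normalized Tsallis q-entropy; q -> 1 recovers Shannon entropy.
            p = p[p > 0]
            if abs(q - 1.0) < 1e-9:
                return -np.sum(p * np.log(p)) / np.log(n)
            smax = (1.0 - n ** (1.0 - q)) / (q - 1.0)
            return (1.0 - np.sum(p ** q)) / ((q - 1.0) * smax)

        x = np.random.default_rng(0).standard_normal(5000)
        p = ordinal_probs(x, d=4)
        qs = np.logspace(-1.0, 1.0, 21)
        curve = [tsallis_entropy(p, q, factorial(4)) for q in qs]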

  1. Simulation-based power calculation for designing interrupted time series analyses of health policy interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Ross-Degnan, Dennis

    2011-11-01

    Interrupted time series is a strong quasi-experimental research design for evaluating the impacts of health policy interventions. Using simulation methods, we estimated the power requirements for interrupted time series studies under various scenarios. Simulations were conducted to estimate the power of segmented autoregressive (AR) error models when autocorrelation ranged from -0.9 to 0.9 and effect size was 0.5, 1.0, and 2.0, investigating balanced and unbalanced numbers of time periods before and after an intervention. Simple scenarios of autoregressive conditional heteroskedasticity (ARCH) models were also explored. For AR models, power increased when sample size or effect size increased, and tended to decrease when autocorrelation increased. Compared with a balanced number of study periods before and after an intervention, designs with unbalanced numbers of periods had less power, although that was not the case for ARCH models. The power to detect an effect size of 1.0 appeared to be reasonable for many practical applications with a moderate or large number of time points equally divided around the intervention. Investigators should be cautious when the expected effect size is small or the number of time points is small. We recommend conducting such simulations before an investigation.
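    A sketch of such a power simulation for a level change in a segmented regression with AR(1) errors. For simplicity this fits by OLS rather than the segmented autoregressive models of the paper, so it will be somewhat optimistic when autocorrelation is strong; all parameter defaults are illustrative:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def its_power(n_pre=24, n_post=24, effect=1.0, rho=0.3,
                      sigma=1.0, alpha=0.05, n_sim=2000):
            # Segmented regression design: intercept, pre-trend,
            # level change at the intervention, and post-trend change.
            n = n_pre + n_post
            t = np.arange(n)
            step = (t >= n_pre).astype(float)
            X = np.column_stack([np.ones(n), t, step,
                                 np.clip(t - n_pre, 0, None)])
            XtXinv = np.linalg.inv(X.T @ X)
            hits = 0
            for _ in range(n_sim):
                # AR(1) errors with stationary standard deviation sigma
                e = np.empty(n)
                e[0] = rng.normal(0.0, sigma)
                for i in range(1, n):
                    e[i] = rho * e[i - 1] + rng.normal(
                        0.0, sigma * np.sqrt(1.0 - rho ** 2))
                y = effect * step + e
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                resid = y - X @ beta
                s2 = resid @ resid / (n - X.shape[1])
                se = np.sqrt(s2 * XtXinv[2, 2])
                pval = 2.0 * stats.t.sf(abs(beta[2] / se), df=n - X.shape[1])
                hits += pval < alpha
            return hits / n_sim

        print(its_power(effect=1.0, rho=0.3))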

  2. The CACAO Method for Smoothing, Gap Filling, and Characterizing Seasonal Anomalies in Satellite Time Series

    NASA Technical Reports Server (NTRS)

    Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.

    2013-01-01

    Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performance was analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performance for smoothing AVHRR time series characterized by a high level of noise and frequent missing observations. The resulting smoothed time series captures the vegetation dynamics well and shows no gaps, as compared to the 50-60% of data still missing after AG or SG reconstructions. Results of simulation experiments as well as comparison with actual AVHRR time series indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
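    A single-season sketch of the core adjustment, fitting an integer shift s and a scale a so that a * clim(t - s) best matches the valid observations; the published method works season by season with additional weighting, and the function name and defaults are assumptions:

        import numpy as np

        def cacao_fit(obs, clim, max_shift=10):
            # Scan candidate shifts; for each, the least-squares scale has a
            # closed form. Return the best (scale, shift) and the resulting
            # smoothed, gap-free reconstruction.
            valid = ~np.isnan(obs)
            best = (np.inf, 1.0, 0)
            for s in range(-max_shift, max_shift + 1):
                c = np.roll(clim, s)
                a = np.sum(obs[valid] * c[valid]) / np.sum(c[valid] ** 2)
                err = np.sum((obs[valid] - a * c[valid]) ** 2)
                if err < best[0]:
                    best = (err, a, s)
            _, a, s = best
            return a, s, a * np.roll(clim, s)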

  3. Methods for developing time-series climate surfaces to drive topographically distributed energy- and water-balance models

    USGS Publications Warehouse

    Susong, D.; Marks, D.; Garen, D.

    1999-01-01

    Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.

  4. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE PAGES

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...

    2016-10-20

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.

  5. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.

  6. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    PubMed Central

    Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187

  7. Comparison of different synthetic 5-min rainfall time series on the results of rainfall runoff simulations in urban drainage modelling

    NASA Astrophysics Data System (ADS)

    Krämer, Stefan; Rohde, Sophia; Schröder, Kai; Belli, Aslan; Maßmann, Stefanie; Schönfeld, Martin; Henkel, Erik; Fuchs, Lothar

    2015-04-01

    The design of urban drainage systems with numerical simulation models requires long, continuous rainfall time series with high temporal resolution. However, suitable observed time series are rare. As a result, usual design concepts often rely on uncertain or unsuitable rainfall data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic rainfall data as input for urban drainage modelling are advanced, tested, and compared. Synthetic rainfall time series from three different precipitation modelling approaches (a parametric stochastic model based on an alternating renewal approach, a non-parametric stochastic model based on resampling, and a downscaling approach from a regional climate model) are provided for three catchments with different sewer system characteristics in different climate regions in Germany: Hamburg (northern Germany), maritime climate, mean annual rainfall 770 mm, combined sewer system length 1,729 km (city center of Hamburg), storm water sewer system length (Hamburg-Harburg) 168 km; Brunswick (Lower Saxony, northern Germany), transitional climate from maritime to continental, mean annual rainfall 618 mm, sewer system length 278 km, connected impervious area 379 ha, height difference 27 m; and Freiburg im Breisgau (southern Germany), Central European transitional climate, mean annual rainfall 908 mm, sewer system length 794 km, connected impervious area 1,546 ha, height difference 284 m. Hydrodynamic models are set up for each catchment to simulate rainfall runoff processes in the sewer systems. Long-term event time series are extracted from the three different synthetic rainfall time series (comprising up to 600 years of continuous rainfall) provided for each catchment and from observed gauge rainfall (reference rainfall) according to national hydraulic design standards. The synthetic and reference long-term event time series are used as rainfall input for the hydrodynamic sewer models. For comparison of the synthetic rainfall time series against the reference rainfall and against each other, the number of surcharged manholes, the number of surcharges per manhole, and the average surcharge volume per manhole are applied as hydraulic performance criteria. The results are discussed and assessed to answer the following questions: Are the synthetic rainfall approaches suitable for generating high-resolution rainfall series, and do they, in combination with numerical rainfall-runoff models, produce valid results for the design of urban drainage systems? What are the bounds of uncertainty in the runoff results depending on the synthetic rainfall model and on the climate region? The work is carried out within the SYNOPSE project, funded by the German Federal Ministry of Education and Research (BMBF).

  8. Pearson correlation estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

    Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
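    A sketch of the zero-lag kernel estimator described in the abstract; the function name and bandwidth default are assumptions (a reasonable bandwidth is some fraction of the mean sampling interval), and a lagged estimate would replace dt with dt minus the lag:

        import numpy as np

        def gaussian_kernel_corr(tx, x, ty, y, h):
            # Average products of standardized observations over all pairs,
            # weighted by a Gaussian kernel in the time separation
            # (bandwidth h), instead of interpolating onto a regular grid.
            x, y = np.asarray(x, float), np.asarray(y, float)
            xs = (x - x.mean()) / x.std()
            ys = (y - y.mean()) / y.std()
            dt = np.asarray(tx, float)[:, None] - np.asarray(ty, float)[None, :]
            w = np.exp(-0.5 * (dt / h) ** 2)
            return np.sum(w * np.outer(xs, ys)) / np.sum(w)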

  9. Capabilities of stochastic rainfall models as data providers for urban hydrology

    NASA Astrophysics Data System (ADS)

    Haberlandt, Uwe

    2017-04-01

    For planning of urban drainage systems using hydrological models, long, continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5-minute continuous rainfall time series for a) sizing of drainage networks for urban flood protection and b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single-site and multi-site generators. The models are applied with regionalised parameters assuming that there is no station at the target location. Rainfall and discharge characteristics are utilised for evaluation of the model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models, good capabilities for single-site simulations but low skill for multi-site simulations. Remarkably, there is no significant difference in simulation performance between the flood protection and pollution reduction tasks, so the models are able to simulate both the extremes and the long-term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Water Resour. Res., 34(7): 1737-1744. Berg, P., Wagner, S., Kunstmann, H., Schädler, G., 2013. High resolution regional climate model simulations for Germany: part I — validation. Climate Dynamics, 40(1): 401-414. Haberlandt, U., Ebner von Eschenbach, A.-D., Buchwald, I., 2008. A space-time hybrid hourly rainfall model for derived flood frequency analysis. Hydrol. Earth Syst. Sci., 12: 1353-1367.

  10. Time-Series Analysis of Supergranule Characteristics at Solar Minimum

    NASA Technical Reports Server (NTRS)

    Williams, Peter E.; Pesnell, W. Dean

    2013-01-01

    Sixty days of Doppler images from the Solar and Heliospheric Observatory (SOHO) / Michelson Doppler Imager (MDI) investigation during the 1996 and 2008 solar minima have been analyzed to show that certain supergranule characteristics (size, size range, and horizontal velocity) exhibit fluctuations of three to five days. Cross-correlating parameters showed a good, positive correlation between supergranulation size and size range, and a moderate, negative correlation between size range and velocity. The size and velocity do exhibit a moderate, negative correlation, but with a small time lag (less than 12 hours). Supergranule sizes during five days of co-temporal data from MDI and the Solar Dynamics Observatory (SDO) / Helioseismic Magnetic Imager (HMI) exhibit similar fluctuations with a high level of correlation between them. This verifies the solar origin of the fluctuations, which cannot be caused by instrumental artifacts according to these observations. Similar fluctuations are also observed in data simulations that model the evolution of the MDI Doppler pattern over a 60-day period. Correlations between the supergranule size and size range time-series derived from the simulated data are similar to those seen in MDI data. A simple toy-model using cumulative, uncorrelated exponential growth and decay patterns at random emergence times produces a time-series similar to the data simulations. The qualitative similarities between the simulated and the observed time-series suggest that the fluctuations arise from stochastic processes occurring within the solar convection zone. This behavior, propagating to surface manifestations of supergranulation, may assist our understanding of magnetic-field-line advection, evolution, and interaction.

  11. 75 FR 22779 - FIFRA Scientific Advisory Panel; Notice of Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-30

    ... exposure factors to generate time series of exposure for simulated individuals. One-stage or two-stage..., built-in simple source-to-concentration module, user-entered time series from other models or field... FOR FURTHER INFORMATION CONTACT at least 10 days prior to the meeting to give EPA as much time as...

  12. Distribution Feeder Modeling for Time-Series Simulation of Voltage Management Strategies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giraldez Miner, Julieta I; Gotseff, Peter; Nagarajan, Adarsh

    This paper presents techniques to create baseline distribution models using a utility feeder from Hawaiian Electric Company. It describes the software-to-software conversion, steady-state, and time-series validations of a utility feeder model. It also presents a methodology for adding secondary low-voltage circuit models to accurately capture the voltage at the customer meter level. This enables preparing models for studies that simulate how customer-sited resources integrate into legacy utility distribution system operations.

  13. Automatising the analysis of stochastic biochemical time-series

    PubMed Central

    2015-01-01

    Background: Mathematical and computational modelling of biochemical systems has seen a lot of effort devoted to the definition and implementation of high-performance mechanistic simulation frameworks. Within these frameworks it is possible to analyse complex models under a variety of configurations, eventually selecting the best setting of, e.g., parameters for a target system. Motivation: This operational pipeline relies on the ability to interpret the predictions of a model, often represented as simulation time-series. Thus, an efficient data analysis pipeline is crucial to automatise time-series analyses, bearing in mind that errors in this phase might mislead the modeller's conclusions. Results: For this reason we have developed an intuitive framework-independent Python tool to automate analyses common to a variety of modelling approaches. These include assessment of useful non-trivial statistics for simulation ensembles, e.g., estimation of master equations. Intuitive and domain-independent batch scripts will allow the researcher to automatically prepare reports, thus speeding up the usual model-definition, testing and refinement pipeline. PMID:26051821
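    Since the tool itself is Python, a minimal sketch of the kind of ensemble statistic such a pipeline automates; the function name, the [n_runs, n_times] layout, and the chosen quantiles are illustrative assumptions:

        import numpy as np

        def ensemble_summary(runs):
            # Per-time-point mean, standard deviation, and empirical
            # quantile band for an ensemble of simulation time-series.
            runs = np.asarray(runs, dtype=float)
            return {
                "mean": runs.mean(axis=0),
                "std": runs.std(axis=0, ddof=1),
                "q05": np.quantile(runs, 0.05, axis=0),
                "q95": np.quantile(runs, 0.95, axis=0),
            }

        runs = np.random.default_rng(0).normal(size=(100, 50)).cumsum(axis=1)
        summary = ensemble_summary(runs)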

  14. Simulation Exploration through Immersive Parallel Planes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  15. Simulation Exploration through Immersive Parallel Planes: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas; Bush, Brian W.; Gruchalla, Kenny

    We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selection, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.

  16. A probabilistic method for constructing wave time-series at inshore locations using model scenarios

    USGS Publications Warehouse

    Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.

    2014-01-01

    Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.

  17. Volatility Behaviors of Financial Time Series by Percolation System on Sierpinski Carpet Lattice

    NASA Astrophysics Data System (ADS)

    Pei, Anqi; Wang, Jun

    2015-01-01

    The financial time series is simulated and investigated by a percolation system on the Sierpinski carpet lattice, where percolation is usually employed to describe the behavior of connected clusters in a random graph, and the Sierpinski carpet lattice is a graph corresponding to the Sierpinski carpet fractal. To study the fluctuation behavior of returns for the financial model and the Shanghai Composite Index, we establish a daily volatility measure, the multifractal volatility (MFV) measure, to obtain MFV series, which have long-range cross-correlations with the squared daily return series. The autoregressive fractionally integrated moving average (ARFIMA) model is used to analyze the MFV series, which perform better than other volatility series in this analysis. In a comparative study of the multifractality and volatility of the data, the simulation data of the proposed model exhibit very similar behaviors to those of the real stock index, which suggests the model is to some extent reasonable for market applications.

  18. Ensemble Bayesian forecasting system Part I: Theory and algorithms

    NASA Astrophysics Data System (ADS)

    Herr, Henry D.; Krzysztofowicz, Roman

    2015-05-01

    The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
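
    A minimal Monte Carlo sketch of the three-component pipeline described above, assuming numpy only; the toy bucket model and the multiplicative lognormal perturbation standing in for the HUP are invented placeholders, not the EBFS's meta-Gaussian processor.

```python
import numpy as np

rng = np.random.default_rng(0)

def hydrologic_model(precip, k=0.2, storage=10.0):
    """Toy deterministic bucket model: routes precipitation through storage."""
    runoff = np.empty_like(precip)
    for t, p in enumerate(precip):
        storage += p
        runoff[t] = k * storage
        storage -= runoff[t]
    return runoff

n_members, n_steps = 200, 96
# Stand-in for the input ensemble forecaster (IEF): plausible precipitation series.
inputs = rng.gamma(shape=0.4, scale=5.0, size=(n_members, n_steps))
# Deterministic transformation of each input member into an output series.
outputs = np.array([hydrologic_model(member) for member in inputs])
# Stand-in for the HUP: a multiplicative perturbation for hydrologic uncertainty.
predictands = outputs * rng.lognormal(mean=0.0, sigma=0.1, size=outputs.shape)

# Ensemble quantiles quantify the total uncertainty at each time step.
q10, q50, q90 = np.quantile(predictands, [0.1, 0.5, 0.9], axis=0)
```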

  19. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
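
    A rough sketch of the two-step idea, assuming scikit-learn's FastICA and a simple rank-based quantile mapping; fitting the unmixing on observations and the synthetic grids below are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)

# Hypothetical stand-ins: rows are time steps, columns are grid cells.
obs = rng.normal(15.0, 3.0, size=(3650, 6))
model = obs + rng.normal(2.0, 1.0, size=obs.shape)   # biased simulation

# Step 1: transform to statistically independent univariate signals.
ica = FastICA(n_components=6, random_state=0)
obs_sig = ica.fit_transform(obs)      # unmixing fitted on observations
mod_sig = ica.transform(model)        # model data in the same signal space

def quantile_map(x, ref):
    """Map each value of x onto the quantile of ref at the same rank."""
    ranks = np.argsort(np.argsort(x)) / (len(x) - 1)
    return np.quantile(ref, ranks)

corrected_sig = np.column_stack(
    [quantile_map(mod_sig[:, j], obs_sig[:, j]) for j in range(mod_sig.shape[1])]
)

# Back-transform: spatial dependence of the observations is now imprinted.
corrected = ica.inverse_transform(corrected_sig)
# (A second, grid-by-grid quantile mapping would follow, as in the paper's step 2.)
```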

  20. Validating the Modeling and Simulation of a Generic Tracking Radar

    DTIC Science & Technology

    2009-07-28

    order Gauss-Markov time series with σ_GM = 250 units and τ_GM = 10 s is shown in the top panel of Figure 1. The time series, r̃_k, can represent any...are shared among the sensors. The total position and velocity estimation errors valid at time index k are given by Δr_k|k = r̂_k|k − r_k and

  1. Development and evaluation of a data-adaptive alerting algorithm for univariate temporal biosurveillance data.

    PubMed

    Elbert, Yevgeniy; Burkom, Howard S

    2009-11-20

    This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
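
    As a loose illustration (not the paper's initialization or parameterization), a seasonal Holt-Winters model from statsmodels can be fitted to a training window and a control-chart style alert raised whenever a count exceeds the forecast by a multiple of the residual standard deviation:

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(2)

# Hypothetical daily syndromic counts with a weekly cycle.
days = 365
base = 50 + 10 * np.sin(2 * np.pi * np.arange(days) / 7)
counts = rng.poisson(base).astype(float)

train, test = counts[:300], counts[300:]
fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                           seasonal_periods=7).fit()
forecast = fit.forecast(len(test))

# Control-chart style alerting on forecast residuals.
resid_sd = np.std(train - fit.fittedvalues)
threshold = forecast + 3 * resid_sd
alerts = np.where(test > threshold)[0]
print("alert days:", alerts)
```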

  2. Correlation and Stacking of Relative Paleointensity and Oxygen Isotope Data

    NASA Astrophysics Data System (ADS)

    Lurcock, P. C.; Channell, J. E.; Lee, D.

    2012-12-01

    The transformation of a depth-series into a time-series is routinely implemented in the geological sciences. This transformation often involves correlation of a depth-series to an astronomically calibrated time-series. Eyeball tie-points with linear interpolation are still regularly used, although these have the disadvantages of being non-repeatable and not based on firm correlation criteria. Two automated correlation methods are compared: the simulated annealing algorithm (Huybers and Wunsch, 2004) and the Match protocol (Lisiecki and Lisiecki, 2002). Simulated annealing seeks to minimize energy (cross-correlation) as "temperature" is slowly decreased. The Match protocol divides records into intervals, applies penalty functions that constrain accumulation rates, and minimizes the sum of the squares of the differences between two series while maintaining the data sequence in each series. Paired relative paleointensity (RPI) and oxygen isotope records, such as those from IODP Site U1308 and/or reference stacks such as LR04 and PISO, are warped using known warping functions, and then the un-warped and warped time-series are correlated to evaluate the efficiency of the correlation methods. Correlations are performed in tandem to simultaneously optimize RPI and oxygen isotope data. Noise spectra are introduced at differing levels to determine correlation efficiency as noise levels change. A third potential method, known as dynamic time warping, involves minimizing the sum of distances between correlated point pairs across the whole series. A "cost matrix" between the two series is analyzed to find a least-cost path through the matrix. This least-cost path is used to nonlinearly map the time/depth of one record onto the depth/time of another. Dynamic time warping can be expanded to more than two dimensions and used to stack multiple time-series. This procedure can improve on arithmetic stacks, which often lose coherent high-frequency content during the stacking process.
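
    The dynamic time warping variant mentioned at the end can be sketched generically: fill a cost matrix with the classic recursion, then backtrack the least-cost path that nonlinearly maps one record's indices onto the other's. This is a textbook implementation, not the authors' code.

```python
import numpy as np

def dtw_path(a, b):
    """Least-cost alignment of two series via the classic DTW cost matrix."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the least-cost path: it nonlinearly maps indices of a onto b.
    path, i, j = [(n - 1, m - 1)], n, m
    while (i, j) != (1, 1):
        candidates = []
        if i > 1 and j > 1:
            candidates.append((cost[i - 1, j - 1], i - 1, j - 1))
        if i > 1:
            candidates.append((cost[i - 1, j], i - 1, j))
        if j > 1:
            candidates.append((cost[i, j - 1], i, j - 1))
        _, i, j = min(candidates)
        path.append((i - 1, j - 1))
    return cost[n, m], path[::-1]

dist, path = dtw_path([0.0, 1.0, 2.0, 1.0], [0.0, 2.0, 1.0])
```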

  3. Multiscale Poincaré plots for visualizing the structure of heartbeat time series.

    PubMed

    Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L

    2016-02-09

    Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
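
    A minimal sketch of the coarse-graining and Poincaré construction, using the standard SD1/SD2 ellipse area as a simple proxy for the phase-space area discussed above (the optional frequency-based coloring is omitted):

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (standard coarse-graining)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def poincare_points(x):
    """Pairs (x_n, x_{n+1}) that make up a delay-1 Poincare plot."""
    return np.column_stack((x[:-1], x[1:]))

rng = np.random.default_rng(3)
white = rng.normal(size=4096)          # stand-in for an RR-interval series

for scale in (1, 2, 4, 8):
    pts = poincare_points(coarse_grain(white, scale))
    # Ellipse-area proxy from dispersions along the two plot diagonals.
    sd1 = np.std((pts[:, 1] - pts[:, 0]) / np.sqrt(2))
    sd2 = np.std((pts[:, 1] + pts[:, 0]) / np.sqrt(2))
    print(scale, np.pi * sd1 * sd2)    # area shrinks with scale for white noise
```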

  4. A multiple-fan active control wind tunnel for outdoor wind speed and direction simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jia-Ying; Meng, Qing-Hao; Luo, Bing; Zeng, Ming

    2018-03-01

    This article presents a new type of actively controlled multiple-fan wind tunnel. The wind tunnel consists of swivel plates and arrays of direct current fans, and the rotation speed of each fan and the shaft angle of each swivel plate can be controlled independently for simulating different kinds of outdoor wind fields. To measure the similarity between the simulated wind field and the outdoor wind field, wind speed and direction time series of two kinds of wind fields are recorded by nine two-dimensional ultrasonic anemometers, and then statistical properties of the wind signals in different time scales are analyzed based on the empirical mode decomposition. In addition, the complexity of wind speed and direction time series is also investigated using multiscale entropy and multivariate multiscale entropy. Results suggest that the simulated wind field in the multiple-fan wind tunnel has a high degree of similarity with the outdoor wind field.
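
    The multiscale entropy analysis mentioned above can be approximated with a compact, unoptimized sample entropy routine; parameter choices below are common defaults, not necessarily those of the article:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.15):
    """SampEn(m, r): -ln of the ratio of (m+1)-point to m-point template matches.
    Compact O(n^2) version; match counts are approximate at the boundaries."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)
    def matches(length):
        tpl = np.lib.stride_tricks.sliding_window_view(x, length)
        d = np.max(np.abs(tpl[:, None, :] - tpl[None, :, :]), axis=-1)
        iu = np.triu_indices(len(tpl), k=1)   # exclude self-matches
        return np.count_nonzero(d[iu] <= r)
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        out.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1)))
    return out

rng = np.random.default_rng(4)
print(multiscale_entropy(rng.normal(size=1000)))   # falls with scale for white noise
```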

  5. Sub- and Quasi-Centurial Cycles in Solar and Geomagnetic Activity Data Series

    NASA Astrophysics Data System (ADS)

    Komitov, B.; Sello, S.; Duchlev, P.; Dechev, M.; Penev, K.; Koleva, K.

    2016-07-01

    The subject of this paper is the existence and stability of solar cycles with durations in the range of 20-250 years. Five types of data series are used: 1) the Zurich series (1749-2009 AD), the mean annual International sunspot number Ri; 2) the Group sunspot number series Rh (1610-1995 AD); 3) the simulated extended sunspot number from the Extended time series of Solar Activity Indices (ESAI) (1090-2002 AD); 4) the simulated extended geomagnetic aa-index from ESAI (1099-2002 AD); 5) the Meudon filament series (1919-1991 AD). Two principally independent methods of time series analysis are used: T-R periodogram analysis (both in standard and "scanning window" regimes) and wavelet analysis. The obtained results are very similar. A strong cycle with a mean duration of 55-60 years is found to exist in all series. On the other hand, strong and stable quasi-110-120-year and ~200-year cycles are obtained in all of these series except the Ri one. The high importance of long-term solar activity dynamics for the aims of solar dynamo modeling and prediction is especially noted.

  6. Volatility behavior of visibility graph EMD financial time series from Ising interacting system

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Wang, Jun; Fang, Wen

    2015-08-01

    A financial market dynamics model is developed and investigated using a stochastic Ising system, the most popular ferromagnetic model in statistical physics. Applying two graph-based analyses and the multiscale entropy method, we investigate and compare the statistical volatility behavior of the return time series and the corresponding IMF series derived from the empirical mode decomposition (EMD) method, and real stock market indices are comparatively studied against the simulation data of the proposed model. Further, we find that the degree distribution of the visibility graph for the simulation series has power-law tails, and the assortative network exhibits the mixing pattern property. All these features are in agreement with the real market data; the research confirms that the financial model established on the Ising system is reasonable.
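
    The natural visibility graph construction behind the degree-distribution result is easy to sketch; the brute-force loop below on synthetic data is a generic reference implementation, not the authors' code:

```python
import numpy as np

def visibility_edges(y):
    """Natural visibility graph: points a and b are linked if no intermediate
    point rises above the straight line connecting them."""
    n, edges = len(y), []
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.append((a, b))
    return edges

rng = np.random.default_rng(5)
series = rng.normal(size=200)
edges = visibility_edges(series)
degree = np.bincount(np.ravel(edges), minlength=len(series))
# Power-law tails in this degree distribution are the signature discussed above.
```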

  7. Physical habitat simulation system reference manual: version II

    USGS Publications Warehouse

    Milhous, Robert T.; Updike, Marlys A.; Schneider, Diane M.

    1989-01-01

    There are four major components of a stream system that determine the productivity of the fishery (Karr and Dudley 1978). These are: (1) flow regime, (2) physical habitat structure (channel form, substrate distribution, and riparian vegetation), (3) water quality (including temperature), and (4) energy inputs from the watershed (sediments, nutrients, and organic matter). The complex interaction of these components determines the primary production, secondary production, and fish population of the stream reach. The basic components and interactions needed to simulate fish populations as a function of management alternatives are illustrated in Figure I.1. The assessment process utilizes a hierarchical and modular approach combined with computer simulation techniques. The modular components represent the "building blocks" for the simulation. The quality of the physical habitat is a function of flow and, therefore, varies in quality and quantity over the range of the flow regime. The conceptual framework of the Incremental Methodology and guidelines for its application are described in "A Guide to Stream Habitat Analysis Using the Instream Flow Incremental Methodology" (Bovee 1982). Simulation of physical habitat is accomplished using the physical structure of the stream and streamflow. The modification of physical habitat by temperature and water quality is analyzed separately from physical habitat simulation. Temperature in a stream varies with the seasons, local meteorological conditions, stream network configuration, and the flow regime; thus, the temperature influences on habitat must be analyzed on a stream system basis. Water quality under natural conditions is strongly influenced by climate and the geological materials, with the result that there is considerable natural variation in water quality. When the activities of man are added, the range of possible water quality conditions becomes rather large. Consequently, water quality must also be analyzed on a stream system basis. Such analysis is outside the scope of this manual, which concentrates on simulation of physical habitat based on depth, velocity, and a channel index. The results from PHABSIM can be used alone or with a series of habitat time series programs that have been developed to generate monthly or daily habitat time series from the Weighted Usable Area versus streamflow table resulting from the habitat simulation programs and streamflow time series data. Monthly and daily streamflow time series may be obtained from USGS gages near the study site or as the output of river system management models.
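
    The habitat time series step reduces to a table lookup: interpolate a Weighted Usable Area versus streamflow table at each value of a streamflow series. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical Weighted Usable Area vs. streamflow table (from habitat simulation).
flow_table = np.array([1.0, 5.0, 10.0, 20.0, 40.0])        # discharge, m^3/s
wua_table = np.array([120.0, 480.0, 900.0, 700.0, 300.0])  # WUA, m^2 per 1000 m

# Daily streamflow series (e.g., from a USGS gage near the study site).
daily_flow = np.array([3.2, 8.1, 15.0, 22.4, 12.7, 6.3])

# Habitat time series by table lookup with linear interpolation.
habitat = np.interp(daily_flow, flow_table, wua_table)
print(habitat)
```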

  8. Effect of real-time boundary wind conditions on the air flow and pollutant dispersion in an urban street canyon—Large eddy simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Yun-Wei; Gu, Zhao-Lin; Cheng, Yan; Lee, Shun-Cheng

    2011-07-01

    Air flow and pollutant dispersion characteristics in an urban street canyon are studied under real-time boundary conditions. A new scheme for realizing real-time boundary conditions in simulations is proposed, to keep the upper boundary wind conditions consistent with the measured time series of wind data. The air flow structure and its evolution under real-time boundary wind conditions are simulated using this new scheme. The induced effect of the time series of ambient wind conditions on the flow structures inside and above the street canyon is investigated. The flow shows an obvious intermittent feature in the street canyon, and flapping of the shear layer forms near the roof level under real-time wind conditions, resulting in the expansion or compression of the air mass in the canyon. The simulations of pollutant dispersion show that the pollutants inside and above the street canyon are transported by different dispersion mechanisms, depending on the time series of air flow structures. Large-scale air movements in the processes of air mass expansion or compression in the canyon exhibit obvious effects on pollutant dispersion. The simulations also show that the transport of pollutants from the canyon to the upper air flow is dominated by the shear layer turbulence near the roof level and the expansion or compression of the air mass in the street canyon under real-time boundary wind conditions. In particular, the expansion of the air mass, which features large-scale air movement, makes the greater contribution to pollutant dispersion in this study. Comparisons of simulated results under different boundary wind conditions indicate that real-time boundary wind conditions produce better conditions for pollutant dispersion than artificially designed steady boundary wind conditions.

  9. Evaluation of nonlinearity and validity of nonlinear modeling for complex time series.

    PubMed

    Suzuki, Tomoya; Ikeguchi, Tohru; Suzuki, Masuo

    2007-10-01

    Even if an original time series exhibits nonlinearity, it is not always effective to approximate the time series by a nonlinear model because such nonlinear models have high complexity from the viewpoint of information criteria. Therefore, we propose two measures to evaluate both the nonlinearity of a time series and validity of nonlinear modeling applied to it by nonlinear predictability and information criteria. Through numerical simulations, we confirm that the proposed measures effectively detect the nonlinearity of an observed time series and evaluate the validity of the nonlinear model. The measures are also robust against observational noises. We also analyze some real time series: the difference of the number of chickenpox and measles patients, the number of sunspots, five Japanese vowels, and the chaotic laser. We can confirm that the nonlinear model is effective for the Japanese vowel /a/, the difference of the number of measles patients, and the chaotic laser.

  10. Evaluation of nonlinearity and validity of nonlinear modeling for complex time series

    NASA Astrophysics Data System (ADS)

    Suzuki, Tomoya; Ikeguchi, Tohru; Suzuki, Masuo

    2007-10-01

    Even if an original time series exhibits nonlinearity, it is not always effective to approximate the time series by a nonlinear model because such nonlinear models have high complexity from the viewpoint of information criteria. Therefore, we propose two measures to evaluate both the nonlinearity of a time series and validity of nonlinear modeling applied to it by nonlinear predictability and information criteria. Through numerical simulations, we confirm that the proposed measures effectively detect the nonlinearity of an observed time series and evaluate the validity of the nonlinear model. The measures are also robust against observational noises. We also analyze some real time series: the difference of the number of chickenpox and measles patients, the number of sunspots, five Japanese vowels, and the chaotic laser. We can confirm that the nonlinear model is effective for the Japanese vowel /a/, the difference of the number of measles patients, and the chaotic laser.

  11. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    NASA Astrophysics Data System (ADS)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
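
    A simplified reading of the piecewise Markov idea, joining random-length slices of the record so that each new slice starts in the discrete state where the series currently ends; the wave-height data, state bins, and slice lengths are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical hourly significant-wave-height record and discrete weather states.
hist = np.abs(rng.normal(2.0, 0.8, size=20000))
levels = [1.0, 2.0, 3.0]
states = np.digitize(hist, levels)

def generate(n_hours, min_len=24, max_len=240):
    """Join random-length slices of the record; each new slice starts in the
    same discrete state as the current end of the synthetic series."""
    pieces, length = [hist[:min_len]], min_len
    while length < n_hours:
        state = np.digitize([pieces[-1][-1]], levels)[0]
        candidates = np.where(states[:-max_len] == state)[0]
        start = rng.choice(candidates)
        piece = hist[start: start + rng.integers(min_len, max_len)]
        pieces.append(piece)
        length += len(piece)
    return np.concatenate(pieces)[:n_hours]

synthetic_year = generate(8760)
```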

  12. Recurrence plots revisited

    NASA Astrophysics Data System (ADS)

    Casdagli, M. C.

    1997-09-01

    We show that recurrence plots (RPs) give detailed characterizations of time series generated by dynamical systems driven by slowly varying external forces. For deterministic systems we show that RPs of the time series can be used to reconstruct the RP of the driving force if it varies sufficiently slowly. If the driving force is one-dimensional, its functional form can then be inferred up to an invertible coordinate transformation. The same results hold for stochastic systems if the RP of the time series is suitably averaged and transformed. These results are used to investigate the nonlinear prediction of time series generated by dynamical systems driven by slowly varying external forces. We also consider the problem of detecting a small change in the driving force, and propose a surrogate data technique for assessing statistical significance. Numerically simulated time series and a time series of respiration rates recorded from a subject with sleep apnea are used as illustrative examples.
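
    A recurrence plot itself is straightforward to compute from a delay embedding; the sketch below builds one for a synthetic oscillator with a slowly varying drive, loosely mirroring the setting above (embedding parameters are arbitrary choices):

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=2, eps=0.5):
    """Binary recurrence plot R_ij = [||x_i - x_j|| < eps] after delay embedding."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).astype(np.uint8)

rng = np.random.default_rng(7)
t = np.arange(1000) * 0.05
drive = 0.5 * np.sin(0.02 * t)                 # slowly varying external force
series = np.sin(3.0 * t + drive) + 0.05 * rng.normal(size=t.size)
rp = recurrence_matrix(series)                 # e.g. plt.imshow(rp) to inspect
```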

  13. A general statistical test for correlations in a finite-length time series.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

    The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
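
    Both estimators compared above can be reproduced in outline: a zero-padded FFT route (Wiener-Khinchin) versus a direct moving-average sum; for an i.i.d. series the nonzero lags should stay within roughly ±1.96/sqrt(n):

```python
import numpy as np

def acf_fft(x):
    """Autocorrelation via the Fourier route: FFT -> power spectrum -> inverse FFT
    (zero-padded to avoid circular wrap-around)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, n=2 * n)
    acov = np.fft.irfft(f * np.conj(f))[:n] / n
    return acov / acov[0]

def acf_direct(x, max_lag):
    """Direct moving-average style estimate, for comparison."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.sum(x[: n - k] * x[k:]) / np.sum(x * x)
                     for k in range(max_lag)])

rng = np.random.default_rng(8)
iid = rng.normal(size=2048)                     # i.i.d. null case
print(acf_fft(iid)[:4])
print(acf_direct(iid, 4))
```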

  14. The promise of the state space approach to time series analysis for nursing research.

    PubMed

    Levy, Janet A; Elser, Heather E; Knobel, Robin B

    2012-01-01

    Nursing research, particularly related to physiological development, often depends on the collection of time series data. The state space approach to time series analysis has great potential to answer exploratory questions relevant to physiological development but has not been used extensively in nursing. The aim of the study was to introduce the state space approach to time series analysis and demonstrate potential applicability to neonatal monitoring and physiology. We present a set of univariate state space models; each one describing a process that generates a variable of interest over time. Each model is presented algebraically and a realization of the process is presented graphically from simulated data. This is followed by a discussion of how the model has been or may be used in two nursing projects on neonatal physiological development. The defining feature of the state space approach is the decomposition of the series into components that are functions of time; specifically, slowly varying level, faster varying periodic, and irregular components. State space models potentially simulate developmental processes where a phenomenon emerges and disappears before stabilizing, where the periodic component may become more regular with time, or where the developmental trajectory of a phenomenon is irregular. The ultimate contribution of this approach to nursing science will require close collaboration and cross-disciplinary education between nurses and statisticians.

  15. Motion Artifact Reduction in Ultrasound Based Thermal Strain Imaging of Atherosclerotic Plaques Using Time Series Analysis

    PubMed Central

    Dutta, Debaditya; Mahmoud, Ahmed M.; Leers, Steven A.; Kim, Kang

    2013-01-01

    Large lipid pools in vulnerable plaques can, in principle, be detected using US-based thermal strain imaging (US-TSI). One practical challenge for in vivo cardiovascular application of US-TSI is that the thermal strain is masked by the mechanical strain caused by cardiac pulsation. ECG gating is a widely adopted method for cardiac motion compensation, but it is often susceptible to electrical and physiological noise. In this paper, we present an alternative time series analysis approach to separate thermal strain from mechanical strain without using ECG. The performance and feasibility of the time-series analysis technique were tested via numerical simulation as well as in vitro water tank experiments using a vessel-mimicking phantom and an excised human atherosclerotic artery in which the cardiac pulsation is simulated by a pulsatile pump. PMID:24808628

  16. Computer Simulation of the Neuronal Action Potential.

    ERIC Educational Resources Information Center

    Solomon, Paul R.; And Others

    1988-01-01

    A series of computer simulations of the neuronal resting and action potentials are described. Discusses the use of simulations to overcome the difficulties of traditional instruction, such as blackboard illustration, which can only illustrate these events at one point in time. Describes systems requirements necessary to run the simulations.…

  17. Potential of VIIRS Data for Regional Monitoring of Gypsy Moth Defoliation: Implications for Forest Threat Early Warning System

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph P.; Ryan, Robert E.; Smoot, James C.; Prados, Donald; McKellip, Rodney; Sader, Steven A.; Gasser, Jerry; May, George; Hargrove, William

    2007-01-01

    A NASA RPC (Rapid Prototyping Capability) experiment was conducted to assess the potential of VIIRS (Visible/Infrared Imager/Radiometer Suite) data for monitoring non-native gypsy moth (Lymantria dispar) defoliation of forests. This experiment compares defoliation detection products computed from simulated VIIRS and from MODIS (Moderate Resolution Imaging Spectroradiometer) time series products as potential inputs to a forest threat EWS (Early Warning System) being developed for the USFS (USDA Forest Service). Gypsy moth causes extensive defoliation of broadleaved forests in the United States and is specifically identified in the Healthy Forest Restoration Act (HFRA) of 2003. The HFRA mandates development of a national forest threat EWS. This system is being built by the USFS and NASA is aiding integration of needed satellite data products into this system, including MODIS products. This RPC experiment enabled the MODIS follow-on, VIIRS, to be evaluated as a data source for EWS forest monitoring products. The experiment included 1) assessment of MODIS-simulated VIIRS NDVI products, and 2) evaluation of gypsy moth defoliation mapping products from MODIS-simulated VIIRS and from MODIS NDVI time series data. This experiment employed MODIS data collected over the approximately 15 million acre mid-Appalachian Highlands during the annual peak defoliation time frame (approximately June 10 through July 27) during 2000-2006. NASA Stennis Application Research Toolbox software was used to produce MODIS-simulated VIIRS data and NASA Stennis Time Series Product Tool software was employed to process MODIS and MODIS-simulated VIIRS time series data scaled to planetary reflectance. MODIS-simulated VIIRS data was assessed through comparison to Hyperion-simulated VIIRS data using data collected during gypsy moth defoliation. Hyperion-simulated MODIS data showed a high correlation with actual MODIS data (NDVI R2 of 0.877 and RMSE of 0.023). MODIS-simulated VIIRS data for the same date showed moderately high correlation with Hyperion-simulated VIIRS data (NDVI R2 of 0.62 and RMSE of 0.035), even though the datasets were collected about a half an hour apart during changing weather conditions. MODIS products (MOD02, MOD09, and MOD13) and MOD02-simulated VIIRS time series data were used to generate defoliation mapping products based on image classification and image differencing change detection techniques. Accuracy of final defoliation mapping products was assessed by image interpreting over 170 randomly sampled locations found on Landsat and ASTER data in conjunction with defoliation map data from the USFS. The MOD02-simulated VIIRS 400-meter NDVI classification produced a similar overall accuracy (87.28 percent with 0.72 Kappa) to the MOD02 250-meter NDVI classification (86.71 percent with 0.71 Kappa). In addition, the VIIRS 400-meter NDVI, MOD02 250-meter NDVI, and MOD02 500-meter NDVI showed good user and producer accuracies for the defoliated forest class (70 percent) and acceptable Kappa values (0.66). MOD02 and MOD02-simulated VIIRS data both showed promise as data sources for regional monitoring of forest disturbance due to insect defoliation.

  18. Fast and unbiased estimator of the time-dependent Hurst exponent.

    PubMed

    Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria

    2018-03-01

    We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.

  19. Fast and unbiased estimator of the time-dependent Hurst exponent

    NASA Astrophysics Data System (ADS)

    Pianese, Augusto; Bianchi, Sergio; Palazzo, Anna Maria

    2018-03-01

    We combine two existing estimators of the local Hurst exponent to improve both the goodness of fit and the computational speed of the algorithm. An application with simulated time series is implemented, and a Monte Carlo simulation is performed to provide evidence of the improvement.

  20. Adaptive time-variant models for fuzzy-time-series forecasting.

    PubMed

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.

  1. Terrain Dynamics Analysis Using Space-Time Domain Hypersurfaces and Gradient Trajectories Derived From Time Series of 3D Point Clouds

    DTIC Science & Technology

    2015-08-01

    optimized space-time interpolation method. Tangible geospatial modeling system was further developed to support the analysis of changing elevation surfaces...Evolution Mapped by Terrestrial Laser Scanning, talk, AGU Fall 2012 *Hardin E, Mitas L, Mitasova H., Simulation of Wind-Blown Sand for...Geomorphological Applications: A Smoothed Particle Hydrodynamics Approach, GSA 2012 *Russ, E., Mitasova, H., Time series and space-time cube analyses on

  2. A user-defined data type for the storage of time series data allowing efficient similarity screening.

    PubMed

    Sorokin, Anatoly; Selkov, Gene; Goryanin, Igor

    2012-07-16

    The volume of the experimentally measured time series data is rapidly growing, while storage solutions offering better data types than simple arrays of numbers or opaque blobs for keeping series data are sorely lacking. A number of indexing methods have been proposed to provide efficient access to time series data, but none has so far been integrated into a tried-and-proven database system. To explore the possibility of such integration, we have developed a data type for time series storage in PostgreSQL, an object-relational database system, and equipped it with an access method based on SAX (Symbolic Aggregate approXimation). This new data type has been successfully tested in a database supporting a large-scale plant gene expression experiment, and it was additionally tested on a very large set of simulated time series data. Copyright © 2011 Elsevier B.V. All rights reserved.
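
    The access method's internals are not spelled out above, but the SAX symbolization it builds on (z-normalize, piecewise aggregate approximation, equiprobable Gaussian breakpoints) is standard and can be sketched briefly; segment and alphabet sizes are arbitrary here:

```python
import numpy as np
from scipy.stats import norm

def sax(x, n_segments=8, alphabet_size=4):
    """Symbolic Aggregate approXimation: z-normalize, piecewise-aggregate,
    then discretize against equiprobable Gaussian breakpoints."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    # PAA: mean of each of n_segments equal chunks.
    paa = np.array([seg.mean() for seg in np.array_split(z, n_segments)])
    # Breakpoints that cut N(0, 1) into equiprobable regions.
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.digitize(paa, breakpoints)
    return "".join(chr(ord("a") + s) for s in symbols)

rng = np.random.default_rng(9)
print(sax(np.cumsum(rng.normal(size=256))))    # e.g. an 'abbccdda'-style word
```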

  3. Spatiotemporal Visualization of Time-Series Satellite-Derived CO2 Flux Data Using Volume Rendering and GPU-Based Interpolation on a Cloud-Driven Digital Earth

    NASA Astrophysics Data System (ADS)

    Wu, S.; Yan, Y.; Du, Z.; Zhang, F.; Liu, R.

    2017-10-01

    The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.

  4. Investigation of relationships between parameters of solar nano-flares and solar activity

    NASA Astrophysics Data System (ADS)

    Safari, Hossein; Javaherian, Mohsen; Kaki, Bardia

    2016-07-01

    Solar flares are important coronal events originating in solar magnetic activity. They release large amounts of energy into the interstellar medium right after the trigger. Flare prediction can play a major role in avoiding eventual damage on Earth. Here, to interpret large-scale solar events, we investigate relationships between small-scale events (nano-flares) and large-scale events (e.g., flares). In our method, the intensity time series of nano-flares are simulated using a Monte Carlo approach. Then, solar full-disk images taken at 171 angstrom recorded by SDO/AIA are employed. Some parts of the solar disk (quiet Sun (QS), coronal holes (CHs), and active regions (ARs)) are cropped and the time series of these regions are extracted. To compare the simulated intensity time series of nano-flares with the intensity time series of real data extracted from different parts of the Sun, artificial neural networks are employed. We are thereby able to extract physical parameters of nano-flares, such as the kick and decay-rate lifetimes and the exponent of their power-law distributions. The variation of the power-law exponent within the QS and CHs is similar to that within ARs. Thus, by observing a small part of the Sun, we can follow the course of solar activity.

  5. Drunk driving detection based on classification of multivariate time series.

    PubMed

    Li, Zhenlong; Jin, Xue; Zhao, Xiaohua

    2015-09-01

    This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent multivariate time series. A bottom-up algorithm was then employed to separate multivariate time series. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective. The approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
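
    The bottom-up segmentation step can be sketched generically: start from minimal segments and repeatedly merge the adjacent pair whose joint straight-line fit has the smallest residual error, then take each segment's slope and time interval as features. The threshold and data below are illustrative, not the study's settings:

```python
import numpy as np

def bottom_up_plr(y, max_error=0.5):
    """Bottom-up piecewise linear representation: repeatedly merge the adjacent
    segment pair whose joint least-squares line has the smallest residual error."""
    y = np.asarray(y, dtype=float)
    def fit_error(lo, hi):
        x = np.arange(lo, hi + 1)
        coef = np.polyfit(x, y[lo:hi + 1], 1)
        return float(np.sum((np.polyval(coef, x) - y[lo:hi + 1]) ** 2))
    bounds = list(range(0, len(y) - 1, 2)) + [len(y) - 1]
    segs = [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
    while len(segs) > 1:
        costs = [fit_error(segs[i][0], segs[i + 1][1]) for i in range(len(segs) - 1)]
        i = int(np.argmin(costs))
        if costs[i] > max_error:
            break
        segs[i:i + 2] = [(segs[i][0], segs[i + 1][1])]
    # Slope and time interval of each segment: the classification features.
    feats = [(np.polyfit(np.arange(a, b + 1), y[a:b + 1], 1)[0], b - a)
             for a, b in segs]
    return segs, feats

rng = np.random.default_rng(10)
lateral_position = np.cumsum(rng.normal(0, 0.05, 300))   # stand-in driving signal
segments, features = bottom_up_plr(lateral_position, max_error=0.2)
```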

  6. Effective precipitation duration for runoff peaks based on catchment modelling

    NASA Astrophysics Data System (ADS)

    Sikorska, A. E.; Viviroli, D.; Seibert, J.

    2018-01-01

    Although precipitation intensities may vary greatly during a flood event, detailed information about these intensities may not be required to accurately simulate floods with a hydrological model, which reacts rather to cumulative precipitation sums. This raises two questions: to what extent is it important to preserve sub-daily precipitation intensities, and how long does it effectively rain from the hydrological point of view? Both questions might seem straightforward to answer with a direct analysis of past precipitation events, but such an analysis requires arbitrary choices regarding the length of a precipitation event. To avoid these arbitrary decisions, here we present an alternative approach to characterize the effective length of a precipitation event which is based on runoff simulations with respect to large floods. More precisely, we quantify the fraction of a day over which the daily precipitation has to be distributed to faithfully reproduce the large annual and seasonal floods which were generated by the hourly precipitation rate time series. New precipitation time series were generated by first aggregating the hourly observed data into daily totals and then evenly distributing them over sub-daily periods (n hours). These simulated time series were used as input to a hydrological bucket-type model and the resulting runoff flood peaks were compared to those obtained when using the original precipitation time series. We then define the effective daily precipitation duration as the number of hours n for which the largest peaks are simulated best. For nine mesoscale Swiss catchments this effective daily precipitation duration was about half a day, which indicates that detailed information on precipitation intensities is not necessarily required to accurately estimate the peaks of the largest annual and seasonal floods. These findings support the use of simple disaggregation approaches to make use of past daily precipitation observations or daily precipitation simulations (e.g. from climate models) for hydrological modeling at an hourly time step.
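
    The core manipulation is simple to state in code: aggregate hourly observations to daily totals, then spread each total evenly over the first n hours of its day; each candidate n yields one synthetic forcing series for the rainfall-runoff model (the model itself is omitted here):

```python
import numpy as np

def redistribute(daily_totals, n_hours):
    """Spread each daily total evenly over the first n_hours of its day,
    leaving the remaining hours dry."""
    out = np.zeros((len(daily_totals), 24))
    out[:, :n_hours] = daily_totals[:, None] / n_hours
    return out.ravel()

rng = np.random.default_rng(11)
hourly_obs = rng.gamma(0.3, 2.0, size=24 * 365)          # stand-in hourly rainfall
daily = hourly_obs.reshape(-1, 24).sum(axis=1)

# Each candidate duration n gives one synthetic forcing series; flood peaks from
# a model driven by each would be compared with peaks from the hourly run.
candidates = {n: redistribute(daily, n) for n in (3, 6, 12, 24)}
```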

  7. An evaluation of dynamic mutuality measurements and methods in cyclic time series

    NASA Astrophysics Data System (ADS)

    Xia, Xiaohua; Huang, Guitian; Duan, Na

    2010-12-01

    Several measurements and techniques have been developed to detect dynamic mutuality and synchronicity of time series in econometrics. This study aims to compare the performances of five methods, i.e., linear regression, dynamic correlation, Markov switching models, concordance index and recurrence quantification analysis, through numerical simulations. We evaluate the abilities of these methods to capture structure changing and cyclicity in time series and the findings of this paper would offer guidance to both academic and empirical researchers. Illustration examples are also provided to demonstrate the subtle differences of these techniques.

  8. Quantitative evaluation of cross correlation between two finite-length time series with applications to single-molecule FRET.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-11-06

    The statistical properties of the cross correlation between two time series have been studied. An analytical expression for the cross correlation function's variance has been derived. On the basis of these results, a statistically robust method has been proposed to detect the existence and determine the direction of cross correlation between two time series. The proposed method has been characterized by computer simulations. Applications to single-molecule fluorescence spectroscopy are discussed. The results may also find immediate applications in fluorescence correlation spectroscopy (FCS) and its variants.

  9. Taurus II Stage Test Simulations: Using Large-Scale CFD Simulations to Provide Critical Insight into Plume Induced Environments During Design

    NASA Technical Reports Server (NTRS)

    Struzenberg, L. L.; West, J. S.

    2011-01-01

    This paper describes the use of targeted Loci/CHEM CFD simulations to evaluate the effects of a dual-engine first-stage hot-fire test on an evolving integrated launch pad/test article design. This effort was undertaken as part of the NESC Independent Assessment of the Taurus II Stage Test Series. The underlying conceptual model included development of a series of computational models and simulations to analyze the plume-induced environments on the pad, facility structures, and test article. A pathfinder simulation was first developed, capable of providing quick-turnaround evaluation of plume impingement pressures on the flame deflector. Results from this simulation were available in time to provide data for an ongoing structural assessment of the deflector. The resulting recommendation was available in a timely manner and was incorporated into the construction schedule for the new launch stand under construction at Wallops Flight Facility. A series of Reynolds-Averaged Navier-Stokes (RANS) quasi-steady simulations representative of various key elements of the test profile was performed to identify potential concerns with the test configuration and test profile. As required, unsteady hybrid-RANS/LES simulations were performed to provide additional insight into critical aspects of the test sequence. Modifications to the test-specific hardware and the thermal protection of facility structures, as well as modifications to the planned hot-fire test profile, were implemented based on these simulation results.

  10. A dataset of future daily weather data for crop modelling over Europe derived from climate change scenarios

    NASA Astrophysics Data System (ADS)

    Duveiller, G.; Donatelli, M.; Fumagalli, D.; Zucchini, A.; Nelson, R.; Baruth, B.

    2017-02-01

    Coupled atmosphere-ocean general circulation models (GCMs) simulate different realizations of possible future climates at global scale under contrasting scenarios of land use and greenhouse gas emissions. Such data require several additional processing steps before they can be used to drive impact models. Spatial downscaling, typically by regional climate models (RCMs), and bias correction are two such steps that have already been addressed for Europe. Yet the errors in the resulting daily meteorological variables may be too large for specific model applications. Crop simulation models are particularly sensitive to these inconsistencies and thus require further processing of GCM-RCM outputs. Moreover, crop models are often run in a stochastic manner by using various plausible weather time series (often generated using stochastic weather generators) to represent the climate time scale for a period of interest (e.g. 2000 ± 15 years), while GCM simulations typically provide a single time series for a given emission scenario. To inform agricultural policy-making, data on near- and medium-term decadal time scales are most often requested, e.g. 2020 or 2030. Taking a sample of multiple years from these unique time series to represent time horizons in the near future is particularly problematic because selecting overlapping years may lead to spurious trends, creating artefacts in the results of the impact model simulations. This paper presents a database of consolidated and coherent future daily weather data for Europe that addresses these problems. Input data consist of daily temperature and precipitation from three dynamically downscaled and bias-corrected regional climate simulations of the IPCC A1B emission scenario created within the ENSEMBLES project. Solar radiation is estimated from temperature based on an auto-calibration procedure. Wind speed and relative air humidity are collected from historical series. From these variables, reference evapotranspiration and vapour pressure deficit are estimated, ensuring consistency within daily records. The weather generator ClimGen is then used to create 30 synthetic years of all variables to characterize the time horizons of 2000, 2020 and 2030, which can readily be used for crop modelling studies.

  11. Analysis and generation of groundwater concentration time series

    NASA Astrophysics Data System (ADS)

    Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae

    2018-01-01

    Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude modulated autoregressive noise of order one with time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction by exchange with the mean mixing model is a special case consisting of a linear regression with constant coefficients.
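
    In the spirit of the generation algorithm sketched above, a nonstationary trend plus an amplitude-modulated AR(1) noise with a time-varying parameter can be produced in a few lines; all functional forms and parameter values are invented placeholders:

```python
import numpy as np

rng = np.random.default_rng(12)
n = 2000
t = np.arange(n)

# Nonstationary trend and modulation envelope (placeholder functional forms).
trend = 1.0 - np.exp(-t / 400.0)
phi = 0.98 - 0.2 * t / n                      # time-varying AR(1) parameter
amplitude = 0.05 * (1 + np.sin(2 * np.pi * t / n))

# Autoregressive noise of order one with the time-varying parameter.
noise = np.zeros(n)
for k in range(1, n):
    noise[k] = phi[k] * noise[k - 1] + rng.normal()

series = trend + amplitude * noise            # synthetic concentration record
```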

  12. Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time

    NASA Astrophysics Data System (ADS)

    Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.

    2017-12-01

    We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize statistically such time-series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time for different assumptions in knowledge of parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.

  13. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model

    NASA Astrophysics Data System (ADS)

    Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

    2016-04-01

    Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model in umbilical cord blood in an electronic waste (e-waste) contaminated area. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the proposed scheme's validity was further verified by numerical simulation experiments. Experiment results show: 1) seven kinds of PCB congeners negatively correlate with five different indices of birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) the prenatal PCB exposed group is at greater risk compared to the reference group; 3) PCBs increasingly accumulate with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future is greater. The desirable numerical simulation results demonstrate the feasibility of applying mathematical models in the environmental toxicology field.

  14. Chaotic time series prediction for prenatal exposure to polychlorinated biphenyls in umbilical cord blood using the least squares SEATR model

    PubMed Central

    Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia

    2016-01-01

    Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SEATR) model in umbilical cord blood in an electronic waste (e-waste) contaminated area. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the proposed scheme's validity was further verified by numerical simulation experiments. Experiment results show: 1) seven kinds of PCB congeners negatively correlate with five different indices of birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) the prenatal PCB exposed group is at greater risk compared to the reference group; 3) PCBs increasingly accumulate with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future is greater. The desirable numerical simulation results demonstrate the feasibility of applying mathematical models in the environmental toxicology field. PMID:27118260

  15. Forecasting the Relative and Cumulative Effects of Multiple Stressors on At-risk Populations

    DTIC Science & Technology

    2011-08-01

    Vitals (observed vital rates), Movement, Ranges, Barriers (barrier interactions), Stochasticity (a time series of stochasticity indices...Simulation Viewer are themselves stochastic. They can change each time it is run. Analysis: If multiple Census events are present in the life...30-year period. A monthly time series was generated for the 20th century using monthly anomalies for temperature, precipitation, and percent

  16. Evaluation of the effects of climate and man intervention on ground waters and their dependent ecosystems using time series analysis

    NASA Astrophysics Data System (ADS)

    Gemitzi, Alexandra; Stefanopoulos, Kyriakos

    2011-06-01

    Groundwaters and their dependent ecosystems are affected both by meteorological conditions and by human interventions, mainly in the form of groundwater abstractions for irrigation needs. This work aims at investigating the quantitative effects of meteorological conditions and man intervention on groundwater resources and their dependent ecosystems. Various seasonal Auto-Regressive Integrated Moving Average (ARIMA) models with external predictor variables were used in order to model the influence of meteorological conditions and man intervention on the groundwater level time series. Initially, a seasonal ARIMA model that simulates the abstraction time series using temperature (T) as the external predictor variable was prepared. Thereafter, seasonal ARIMA models were developed in order to simulate groundwater level time series at 8 monitoring locations, using the appropriate predictor variables determined for each individual case. The spatial component was introduced through the use of Geographical Information Systems (GIS). Application of the proposed methodology took place in the Neon Sidirochorion alluvial aquifer (Northern Greece), for which a 7-year long time series (i.e., 2003-2010) of piezometric and groundwater abstraction data exists. According to the developed ARIMA models, three distinct groups of groundwater level time series exist; the first proves to be dependent only on the meteorological parameters, the second group demonstrates a mixed dependence both on meteorological conditions and on human intervention, whereas the third group shows a clear influence from man intervention. Moreover, there is evidence that groundwater abstraction has affected an important protected ecosystem.

  17. Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal

    Missing values in time series data are a well-known and important problem which many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation in comparison to other methods.

  18. Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error

    NASA Astrophysics Data System (ADS)

    Jung, Insung; Koo, Lockjo; Wang, Gi-Nam

    2008-11-01

    The objective of this paper was to design a human bio-signal prediction system that decreases prediction error, using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction, yet a residual error remains between the real value and the prediction. We therefore designed a two-state neural network model that compensates for this residual error, which could be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. Most of the simulation cases were satisfied by the two-states-mapping-based time series prediction model. In particular, for small-sample time series it was more accurate than the standard MLP model.
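
    The two-stage residual-compensation idea can be sketched as follows; the architectures, lag structure and toy data are assumptions for illustration, not the paper's BP network.

    ```python
    # Hedged sketch: a primary network predicts the series, a second network
    # learns the primary model's residual error, and the final prediction
    # sums both.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    y = np.sin(2 * np.pi * np.arange(600) / 50) + 0.1 * rng.standard_normal(600)

    # Lagged inputs: predict y[t] from the previous 5 values
    p = 5
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    target = y[p:]
    X_tr, X_te, y_tr, y_te = X[:450], X[450:], target[:450], target[450:]

    primary = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(X_tr, y_tr)
    residual = y_tr - primary.predict(X_tr)
    # Second stage: map the same inputs to the primary model's residual error
    compensator = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                               random_state=0).fit(X_tr, residual)

    pred = primary.predict(X_te) + compensator.predict(X_te)
    print("RMSE primary:    ", np.sqrt(np.mean((primary.predict(X_te) - y_te) ** 2)))
    print("RMSE compensated:", np.sqrt(np.mean((pred - y_te) ** 2)))
    ```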

  19. An assessment of the ability of Bartlett-Lewis type of rainfall models to reproduce drought statistics

    NASA Astrophysics Data System (ADS)

    Pham, M. T.; Vanhaute, W. J.; Vandenberghe, S.; De Baets, B.; Verhoest, N. E. C.

    2013-12-01

    Of all natural disasters, the economic and environmental consequences of droughts are among the highest because of their longevity and widespread spatial extent. Because of their extreme behaviour, studying droughts generally requires long time series of historical climate data. Rainfall is a very important variable for calculating drought statistics, for quantifying historical droughts or for assessing the impact on other hydrological (e.g. water stage in rivers) or agricultural (e.g. irrigation requirements) variables. Unfortunately, time series of historical observations are often too short for such assessments. To circumvent this, one may rely on the synthetic rainfall time series from stochastic point process rainfall models, such as Bartlett-Lewis models. The present study investigates whether drought statistics are preserved when simulating rainfall with Bartlett-Lewis models. Therefore, a 105 yr 10 min rainfall time series obtained at Uccle, Belgium is used as a test case. First, drought events were identified on the basis of the Effective Drought Index (EDI), and each event was characterized by two variables, i.e. drought duration (D) and drought severity (S). As both parameters are interdependent, a multivariate distribution function, which makes use of a copula, was fitted. Based on the copula, four types of drought return periods are calculated for observed as well as simulated droughts and are used to evaluate the ability of the rainfall models to simulate drought events with the appropriate characteristics. Overall, all Bartlett-Lewis model types studied fail to preserve extreme drought statistics, which is attributed to the model structure and to the model stationarity caused by maintaining the same parameter set during the whole simulation period.

  20. A copula-based assessment of Bartlett-Lewis type of rainfall models for preserving drought statistics

    NASA Astrophysics Data System (ADS)

    Pham, M. T.; Vanhaute, W. J.; Vandenberghe, S.; De Baets, B.; Verhoest, N. E. C.

    2013-06-01

    Of all natural disasters, the economic and environmental consequences of droughts are among the highest because of their longevity and widespread spatial extent. Because of their extreme behaviour, studying droughts generally requires long time series of historical climate data. Rainfall is a very important variable for calculating drought statistics, for quantifying historical droughts or for assessing the impact on other hydrological (e.g. water stage in rivers) or agricultural (e.g. irrigation requirements) variables. Unfortunately, time series of historical observations are often too short for such assessments. To circumvent this, one may rely on the synthetic rainfall time series from stochastic point process rainfall models, such as Bartlett-Lewis models. The present study investigates whether drought statistics are preserved when simulating rainfall with Bartlett-Lewis models. Therefore, a 105 yr 10 min rainfall time series obtained at Uccle, Belgium is used as a test case. First, drought events were identified on the basis of the Effective Drought Index (EDI), and each event was characterized by two variables, i.e. drought duration (D) and drought severity (S). As both parameters are interdependent, a multivariate distribution function, which makes use of a copula, was fitted. Based on the copula, four types of drought return periods are calculated for observed as well as simulated droughts and are used to evaluate the ability of the rainfall models to simulate drought events with the appropriate characteristics. Overall, all Bartlett-Lewis model types studied fail to preserve extreme drought statistics, which is attributed to the model structure and to the model stationarity caused by maintaining the same parameter set during the whole simulation period.

  1. Getting It Right Matters: Climate Spectra and Their Estimation

    NASA Astrophysics Data System (ADS)

    Privalsky, Victor; Yushkov, Vladislav

    2018-06-01

    In many recent publications, climate spectra estimated with different methods from observed, GCM-simulated, and reconstructed time series contain many peaks at time scales from a few years to many decades and even centuries. However, respective spectral estimates obtained with the autoregressive (AR) and multitaper (MTM) methods showed that spectra of climate time series are smooth and contain no evidence of periodic or quasi-periodic behavior. Four order selection criteria for the autoregressive models were studied and proven sufficiently reliable for 25 time series of climate observations at individual locations or spatially averaged at local-to-global scales. As time series of climate observations are short, Thomson's MTM is a reliable nonparametric alternative. These results agree with both the earlier climate spectral analyses and the Markovian stochastic model of climate.
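
    A minimal example of the smooth autoregressive spectrum estimate this entry advocates, assuming a toy red-noise (AR(1)) series and an arbitrary AR order:

    ```python
    # Hedged sketch: AR (maximum entropy) spectrum via Yule-Walker estimation.
    import numpy as np
    from statsmodels.regression.linear_model import yule_walker

    rng = np.random.default_rng(3)
    x = np.zeros(512)          # toy "climate" series: red noise, smooth spectrum
    for t in range(1, 512):
        x[t] = 0.7 * x[t - 1] + rng.standard_normal()

    order = 3
    rho, sigma = yule_walker(x, order=order, method="mle")
    freqs = np.linspace(0, 0.5, 256)   # cycles per time step
    # AR spectrum: S(f) = sigma^2 / |1 - sum_k rho_k exp(-i 2 pi f k)|^2
    denom = np.abs(1 - sum(rho[k] * np.exp(-2j * np.pi * freqs * (k + 1))
                           for k in range(order))) ** 2
    spectrum = sigma ** 2 / denom
    print(spectrum[:5])        # smooth and monotonically decreasing for red noise
    ```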

  2. A KST framework for correlation network construction from time series signals

    NASA Astrophysics Data System (ADS)

    Qi, Jin-Peng; Gu, Quan; Zhu, Ying; Zhang, Ping

    2018-04-01

    A KST (Kolmogorov-Smirnov test and T statistic) method is used for construction of a correlation network based on the fluctuation of each time series within the multivariate time signals. In this method, each time series is divided equally into multiple segments, and the maximal data fluctuation in each segment is calculated by a KST change detection procedure. Connections between each time series are derived from the data fluctuation matrix, and are used for construction of the fluctuation correlation network (FCN). The method was tested with synthetic simulations, and the results were compared with those obtained using the KS test or T statistic alone for detection of data fluctuation. The novelty of this study is that the correlation analysis is based on the data fluctuation in each segment of each time series rather than on the original time signals, which would be more meaningful for many real-world applications and for the analysis of large-scale time signals where prior knowledge is uncertain.
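
    A rough sketch of the segment-wise fluctuation idea, using a two-sample Kolmogorov-Smirnov statistic per segment as a stand-in for the full KST procedure; the channel count, segment number and threshold below are illustrative assumptions.

    ```python
    # Hedged sketch: segment-wise fluctuation profiles and their correlations.
    import numpy as np
    from scipy.stats import ks_2samp

    def fluctuation_profile(x, n_segments=10):
        """Fluctuation per segment: KS statistic between the segment's halves."""
        prof = []
        for s in np.array_split(x, n_segments):
            half = len(s) // 2
            prof.append(ks_2samp(s[:half], s[half:]).statistic)
        return np.array(prof)

    rng = np.random.default_rng(4)
    signals = rng.standard_normal((5, 1000))                    # 5 channels
    envelope = 1 + 0.8 * np.sin(2 * np.pi * np.arange(1000) / 250)
    signals[0] *= envelope          # channels 0 and 1 share a fluctuation pattern
    signals[1] *= envelope

    profiles = np.array([fluctuation_profile(s) for s in signals])
    adj = np.corrcoef(profiles)     # adjacency of the fluctuation correlation network
    print((np.abs(adj) > 0.5).astype(int))
    ```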

  3. Inhomogeneities detection in annual precipitation time series in Portugal using direct sequential simulation

    NASA Astrophysics Data System (ADS)

    Caineta, Júlio; Ribeiro, Sara; Costa, Ana Cristina; Henriques, Roberto; Soares, Amílcar

    2014-05-01

    Climate data homogenisation is of major importance in the monitoring of climate change, the validation of weather forecasting, general circulation and regional atmospheric models, the modelling of erosion, and drought monitoring, among other studies of hydrological and environmental impacts. This is because non-climatic factors can cause time series discontinuities which may hide the true climatic signal and patterns, thus potentially biasing the conclusions of those studies. In the last two decades, many methods have been developed to identify and remove these inhomogeneities. One of them is based on geostatistical simulation (DSS, direct sequential simulation), where local probability density functions (pdf) are calculated at candidate monitoring stations, using spatial and temporal neighbouring observations, and are then used for the detection of inhomogeneities. This approach has previously been applied to detect inhomogeneities in four precipitation series (wet day count) from a network with 66 monitoring stations located in the southern region of Portugal (1980-2001). That study revealed promising results and the potential advantages of geostatistical techniques for inhomogeneity detection in climate time series. The present work extends the earlier case study and investigates the application of the geostatistical stochastic approach to ten precipitation series that were previously classified as inhomogeneous by one of six absolute homogeneity tests (Mann-Kendall test, Wald-Wolfowitz runs test, Von Neumann ratio test, Standard normal homogeneity test (SNHT) for a single break, Pettit test, and Buishand range test). Moreover, a sensitivity analysis is implemented to investigate the number of simulated realisations that should be used to accurately infer the local pdfs. Accordingly, the number of simulations per iteration is increased from 50 to 500, which results in a more representative local pdf. A set of default and recommended settings is provided, which will help other users to implement this method. The need for user intervention is reduced to a minimum through the use of a cross-platform script. Finally, as in the previous study, the results are compared with those from the SNHT, Pettit and Buishand range tests, which were applied to composite (ratio) reference series. Acknowledgements: The authors gratefully acknowledge the financial support of "Fundação para a Ciência e Tecnologia" (FCT), Portugal, through the research project PTDC/GEO-MET/4026/2012 ("GSIMCLI - Geostatistical simulation with local distributions for the homogenization and interpolation of climate data").

  4. A data mining framework for time series estimation.

    PubMed

    Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin

    2010-04-01

    Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define the target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between the true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address the biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. Copyright © 2009 Elsevier Inc. All rights reserved.

  5. iVAR: a program for imputing missing data in multivariate time series using vector autoregressive models.

    PubMed

    Liu, Siwei; Molenaar, Peter C M

    2014-12-01

    This article introduces iVAR, an R program for imputing missing data in multivariate time series on the basis of vector autoregressive (VAR) models. We conducted a simulation study to compare iVAR with three methods for handling missing data: listwise deletion, imputation with sample means and variances, and multiple imputation ignoring time dependency. The results showed that iVAR produces better estimates for the cross-lagged coefficients than do the other three methods. We demonstrate the use of iVAR with an empirical example of time series electrodermal activity data and discuss the advantages and limitations of the program.

  6. Stochastic series expansion simulation of the t -V model

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Ye-Hua; Troyer, Matthias

    2016-04-01

    We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.

  7. A synergic simulation-optimization approach for analyzing biomolecular dynamics in living organisms.

    PubMed

    Sadegh Zadeh, Kouroush

    2011-01-01

    A synergic simulation-optimization approach was developed and implemented to study protein-substrate dynamics and binding kinetics in living organisms. The forward problem is a system of several coupled nonlinear partial differential equations which, with a given set of kinetics and diffusion parameters, can provide not only the commonly used bleached area-averaged time series in fluorescence microscopy experiments but also more informative full biomolecular/drug space-time series, and can be successfully used to study the dynamics of both Dirac and Gaussian fluorescence-labeled biomacromolecules in vivo. The incomplete Cholesky preconditioner was coupled with a finite difference discretization scheme and an adaptive time-stepping strategy to solve the forward problem. The proposed approach was validated against analytical as well as reference solutions and used to simulate the dynamics of the GFP-tagged glucocorticoid receptor (GFP-GR) in a mouse cancer cell during a fluorescence recovery after photobleaching experiment. Model analysis indicates that the commonly practiced bleach spot-averaged time series is not an efficient approach to extracting physiological information from fluorescence microscopy protocols. It is recommended that experimental biophysicists use the full space-time series resulting from experimental protocols to study the dynamics of biomacromolecules and drugs in living organisms. It is also concluded that, in the parameterization of biological mass transfer processes, setting the norm of the gradient of the penalty function at the solution to zero is not an efficient stopping rule for ending the inverse algorithm; theoreticians should use multi-criteria stopping rules to quantify model parameters by optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. AIRS-Observed Interrelationships of Anomaly Time-Series of Moist Process-Related Parameters and Inferred Feedback Values on Various Spatial Scales

    NASA Technical Reports Server (NTRS)

    Molnar, Gyula I.; Susskind, Joel; Iredell, Lena

    2011-01-01

    In the beginning, a good measure of a GCM's performance was its ability to simulate the observed mean seasonal cycle. That is, a reasonable simulation of the means (i.e., small biases) and standard deviations of TODAY'S climate would suffice. Here, we argue that coupled GCM (CGCM for short) simulations of FUTURE climates should be evaluated in much more detail, both spatially and temporally. Arguably, it is not the bias, but rather the reliability of the model-generated anomaly time series, even down to the [C]GCM grid scale, that really matters. This statement is underlined by the societal need to address potential REGIONAL climate variability and climate drifts/changes in a manner suitable for policy decisions.

  9. An improved portmanteau test for autocorrelated errors in interrupted time-series regression models.

    PubMed

    Huitema, Bradley E; McKean, Joseph W

    2007-08-01

    A new portmanteau test for autocorrelation among the errors of interrupted time-series regression models is proposed. Simulation results demonstrate that the inferential properties of the proposed Q(H-M) test statistic are considerably more satisfactory than those of the well-known Ljung-Box test and moderately better than those of the Box-Pierce test. These conclusions generally hold for a wide variety of autoregressive (AR), moving average (MA), and ARMA error processes that are associated with time-series regression models of the form described in Huitema and McKean (2000a, 2000b).

  10. Testing for intracycle determinism in pseudoperiodic time series.

    PubMed

    Coelho, Mara C S; Mendes, Eduardo M A M; Aguirre, Luis A

    2008-06-01

    A determinism test is proposed based on the well-known surrogate data method. Assuming predictability to be a signature of determinism, the proposed method checks for intracycle (e.g., short-term) determinism in pseudoperiodic time series, for which standard methods of surrogate analysis do not apply. The approach presented is composed of two steps. First, the data are preprocessed to reduce the effects of seasonal and trend components. Second, standard tests of surrogate analysis can then be used. The determinism test is applied to simulated and experimental pseudoperiodic time series, and the results show the applicability of the proposed test.

  11. Influence analysis for high-dimensional time series with an application to epileptic seizure onset zone detection

    PubMed Central

    Flamm, Christoph; Graef, Andreas; Pirker, Susanne; Baumgartner, Christoph; Deistler, Manfred

    2013-01-01

    Granger causality is a useful concept for studying causal relations in networks. However, numerical problems occur when applying the corresponding methodology to high-dimensional time series showing co-movement, e.g. EEG recordings or economic data. In order to deal with these shortcomings, we propose a novel method for the causal analysis of such multivariate time series based on Granger causality and factor models. We present the theoretical background, successfully assess our methodology with the help of simulated data and show a potential application in EEG analysis of epileptic seizures. PMID:23354014
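
    The underlying pairwise Granger test can be illustrated with statsmodels; the paper's factor-model extension for high-dimensional, co-moving series is not reproduced here.

    ```python
    # Hedged sketch: bivariate Granger causality test on toy data where x drives y.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(5)
    n = 500
    x, y = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        x[t] = 0.5 * x[t - 1] + rng.standard_normal()
        y[t] = 0.4 * y[t - 1] + 0.6 * x[t - 1] + rng.standard_normal()  # x drives y

    # Test whether x Granger-causes y: columns are (effect, cause)
    res = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
    print(res[1][0]["ssr_ftest"])  # small p-value: past x improves prediction of y
    ```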

  12. Rainfall Stochastic models

    NASA Astrophysics Data System (ADS)

    Campo, M. A.; Lopez, J. J.; Rebole, J. P.

    2012-04-01

    This work was carried out in the north of Spain. The San Sebastian meteorological station, where precipitation records are available every ten minutes, was selected. The precipitation data cover October 1927 to September 1997. Pulse models describe the temporal process of rainfall as a succession of rainy cells: a main storm process, whose origins are distributed in time according to a Poisson process, and a secondary process that generates a random number of rain cells within each storm. Among the different pulse models, the Bartlett-Lewis model was used. On the other hand, alternative renewal processes and Markov chains describe the way in which the process will evolve in the future depending only on the current state; they are therefore not dependent on past events. Two basic processes are considered when describing the occurrence of rain: the alternation of wet and dry periods, and the temporal distribution of rainfall in each rain event, which determines the rainwater collected in each of the intervals that make up the rain. This allows the introduction of alternative renewal processes and three-state Markov chains, where the interstorm time is given by either of the two dry states, short or long. Thus, the stochastic Markov chain model tries to reproduce the basis of pulse models, namely the succession of storms, each composed of a series of rain cells separated by short intervals of time, without their theoretical complexity. In a first step, we analyzed all the variables involved in the sequential process of rain: rain event duration, duration of non-rain events, average rainfall intensity in rain events, and, finally, the temporal distribution of rainfall within the rain event. Additionally, for calibration of the Bartlett-Lewis pulse model, the main descriptive statistics were calculated for each month, taking into account the seasonality of rainfall. In a second step, both models were calibrated. Finally, synthetic series were simulated with the calibrated parameters; the series were generated at a ten-minute resolution and aggregated to hourly values. Preliminary results show adequate simulation of the main features of rain. The main variables are well simulated for the ten-minute time series, and also for the hourly precipitation time series, which are those that generate the higher rainfall values used in hydrologic design. At time scales finer than one hour, however, simulated rainfall durations are not adequate. A hypothesis is an excessive number of simulated events, which causes further fragmentation of storms, resulting in an excess of short rain events (less than 1 hour), and therefore also of short inter-event times, compared with those that occur in the actual series.

  13. Ultrascalable Techniques Applied to the Global Intelligence Community Information Awareness Common Operating Picture (IA COP)

    DTIC Science & Technology

    2005-11-01

    more random. Autonomous systems can exchange entropy statistics for packet streams with no confidentiality concerns, potentially enabling timely and... analysis began with simulation results, which were validated by analysis of actual data from an Autonomous System (AS). A scale-free network is one...traffic—for example, time series of flux at given nodes and mean path length Outputs the time series from any node queried Calculates

  14. Time series analysis of personal exposure to ambient air pollution and mortality using an exposure simulator.

    PubMed

    Chang, Howard H; Fuentes, Montserrat; Frey, H Christopher

    2012-09-01

    This paper describes a modeling framework for estimating the acute effects of personal exposure to ambient air pollution in a time series design. First, a spatial hierarchical model is used to relate Census tract-level daily ambient concentrations and simulated exposures for a subset of the study period. The complete exposure time series is then imputed for risk estimation. Modeling exposure via a statistical model considerably reduces the computational burden associated with simulating personal exposures. This allows us to consider personal exposures at a finer spatial resolution, to improve exposure assessment, and for a longer study period. The proposed approach is applied to an analysis of fine particulate matter of <2.5 μm in aerodynamic diameter (PM(2.5)) and daily mortality in the New York City metropolitan area during the period 2001-2005. Personal PM(2.5) exposures were simulated from the Stochastic Human Exposure and Dose Simulation. Accounting for exposure uncertainty, the authors estimated a 2.32% (95% posterior interval: 0.68, 3.94) increase in mortality per 10 μg/m(3) increase in personal exposure to PM(2.5) from outdoor sources on the previous day. The corresponding estimate per 10 μg/m(3) increase in PM(2.5) ambient concentration was 1.13% (95% confidence interval: 0.27, 2.00). The risks of mortality associated with PM(2.5) were also higher during the summer months.

  15. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, Fancois; Chen, Jie

    2013-04-01

    Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually represented by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been reported for simulating precipitation amounts, and spatiotemporal differences exist in the applicability of different distribution models. Assessing the applicability of different distribution models is therefore necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model; while the benefit of functions with more parameters is not nearly as obvious there, the mixed exponential distribution nonetheless appears to be the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
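
    A minimal sketch of the two-component generator described above, with first-order Markov occurrence and mixed exponential amounts; all parameter values are illustrative, not values fitted to the Quebec stations.

    ```python
    # Hedged sketch: Markov-chain occurrence + mixed exponential amounts.
    import numpy as np

    rng = np.random.default_rng(6)

    # Occurrence: first-order Markov chain, P(wet | dry) and P(wet | wet)
    p01, p11 = 0.25, 0.65
    # Amounts: mixed exponential with mixing weight alpha and two mean intensities
    alpha, mu1, mu2 = 0.7, 2.0, 12.0   # mm/day

    def simulate_daily_precip(n_days):
        precip = np.zeros(n_days)
        wet = False
        for day in range(n_days):
            wet = rng.random() < (p11 if wet else p01)
            if wet:
                mu = mu1 if rng.random() < alpha else mu2
                precip[day] = rng.exponential(mu)
        return precip

    series = simulate_daily_precip(365 * 30)
    wet_days = series > 0
    print("wet-day frequency:", wet_days.mean())
    # Mean wet-day amount should approach alpha*mu1 + (1-alpha)*mu2 = 5.0 mm
    print("mean wet-day amount:", series[wet_days].mean())
    ```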

  16. Webinar Presentation: Particle-Resolved Simulations for Quantifying Black Carbon Climate Impact and Model Uncertainty

    EPA Pesticide Factsheets

    This presentation, Particle-Resolved Simulations for Quantifying Black Carbon Climate Impact and Model Uncertainty, was given at the STAR Black Carbon 2016 Webinar Series: Changing Chemistry over Time held on Oct. 31, 2016.

  17. Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.

    PubMed

    Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo

    2013-11-13

    Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban ln(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background ln(NO2) and 38% for rural ln(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural ln(NO2) but more marked for urban ln(NO2). Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.

  18. Reduced rank models for travel time estimation of low order mode pulses.

    PubMed

    Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M

    2013-10-01

    Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data show that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.

  19. Design considerations for case series models with exposure onset measurement error.

    PubMed

    Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V

    2013-02-28

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Hybrid Forecasting of Daily River Discharges Considering Autoregressive Heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Szolgayová, Elena Peksová; Danačová, Michaela; Komorniková, Magda; Szolgay, Ján

    2017-06-01

    It is widely acknowledged in the hydrological and meteorological communities that there is a continuing need to improve the quality of quantitative rainfall and river flow forecasts. A hybrid (combined deterministic-stochastic) modelling approach is proposed here that combines the advantages offered by modelling the system dynamics with a deterministic model and the deterministic forecasting error series with a data-driven model in parallel. Since the processes to be modelled are generally nonlinear and the model error series may exhibit nonstationarity and heteroscedasticity, GARCH-type nonlinear time series models are considered here. The fitting, forecasting and simulation performance of such models has to be explored on a case-by-case basis. The goal of this paper is to test and develop an appropriate methodology for model fitting and forecasting applicable to daily river discharge forecast error data from the GARCH family of time series models. We concentrated on verifying whether the use of a GARCH-type model is suitable for modelling and forecasting a hydrological model error time series on the Hron and Morava Rivers in Slovakia. For this purpose, we verified the presence of heteroscedasticity in the simulation error series of the KLN multilinear flow routing model; we then fitted GARCH-type models to the data and compared their fit with that of an ARMA-type model. We produced one-step-ahead forecasts from the fitted models and again compared the models' performance.
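
    A minimal sketch of fitting a GARCH(1,1) model to a forecast-error series with the Python arch package; the synthetic heteroscedastic error series stands in for the Hron and Morava data, and all parameter values are illustrative.

    ```python
    # Hedged sketch: GARCH(1,1) fit and one-step-ahead variance forecast.
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(8)
    n = 1000
    omega, a1, b1 = 0.1, 0.15, 0.8          # illustrative GARCH(1,1) parameters
    sigma2, err = np.empty(n), np.empty(n)
    sigma2[0], err[0] = omega / (1 - a1 - b1), 0.0
    for t in range(1, n):
        sigma2[t] = omega + a1 * err[t - 1] ** 2 + b1 * sigma2[t - 1]
        err[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

    am = arch_model(err, mean="Zero", vol="GARCH", p=1, q=1)
    res = am.fit(disp="off")
    print(res.params)                                  # omega, alpha[1], beta[1]
    print(res.forecast(horizon=1).variance.iloc[-1])   # one-step-ahead variance
    ```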

  1. Time irreversibility and intrinsics revealing of series with complex network approach

    NASA Astrophysics Data System (ADS)

    Xiong, Hui; Shang, Pengjian; Xia, Jianan; Wang, Jing

    2018-06-01

    In this work, we analyze time series on the basis of the visibility graph algorithm that maps the original series into a graph. By taking into account the all-round information carried by the signals, the time irreversibility and fractal behavior of series are evaluated from a complex network perspective, and considered signals are further classified from different aspects. The reliability of the proposed analysis is supported by numerical simulations on synthesized uncorrelated random noise, short-term correlated chaotic systems and long-term correlated fractal processes, and by the empirical analysis on daily closing prices of eleven worldwide stock indices. Obtained results suggest that finite size has a significant effect on the evaluation, and that there might be no direct relation between the time irreversibility and long-range correlation of series. Similarity and dissimilarity between stock indices are also indicated from respective regional and global perspectives, showing the existence of multiple features of underlying systems.
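
    The natural visibility graph mapping used above can be sketched directly; this O(n^2) version is written for clarity, and faster algorithms exist.

    ```python
    # Hedged sketch: natural visibility graph of a time series. Two time points
    # are connected when the straight line between them stays above every
    # intermediate sample.
    import numpy as np

    def visibility_graph(y):
        """Return the edge list of the natural visibility graph of series y."""
        n = len(y)
        edges = []
        for a in range(n):
            for b in range(a + 1, n):
                ks = np.arange(a + 1, b)
                # Visibility criterion: y[k] < y[a] + (y[b]-y[a])*(k-a)/(b-a)
                line = y[a] + (y[b] - y[a]) * (ks - a) / (b - a)
                if np.all(y[ks] < line):
                    edges.append((a, b))
        return edges

    rng = np.random.default_rng(9)
    series = rng.standard_normal(50)
    edges = visibility_graph(series)
    degree = np.bincount(np.ravel(edges), minlength=len(series))
    print(len(edges), "edges; max degree:", degree.max())
    ```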

  2. Optimizing Use of Water Management Systems during Changes of Hydrological Conditions

    NASA Astrophysics Data System (ADS)

    Výleta, Roman; Škrinár, Andrej; Danáčová, Michaela; Valent, Peter

    2017-10-01

    When designing water management systems and their components, more detailed research on the hydrological conditions of the river basin, whose runoff creates the main source of water for the reservoir, is needed. Over the lifetime of a water management system, the hydrological time series that served as the input for the design of the system components is never repeated in the same form. The design assumes the observed time series to be representative for the period of the system's use. This is, however, a rather unrealistic assumption, because the hydrological past will not be exactly repeated over the design lifetime. When designing water management systems, specialists may therefore face insufficient or oversized capacity designs, or possibly a wrong specification of the management rules, which may lead to non-optimal use. It is therefore necessary to establish a comprehensive approach to simulating the fluctuations of interannual runoff (taking into account the current dry and wet periods) in the form of stochastic modelling techniques in water management practice. The paper deals with a methodological procedure for modelling mean monthly flows using the stochastic Thomas-Fiering model, modified by the Wilson-Hilferty transformation of the independent random component, which is usually applied when there is significant asymmetry in the observed time series. The procedure was applied to data acquired at the gauging station of Horné Orešany on the Parná Stream. Observed mean monthly flows for the period 1.11.1980 - 31.10.2012 served as the model input. After estimating the model parameters and the Wilson-Hilferty transformation parameters, synthetic time series of mean monthly flows were simulated. These were compared with the observed hydrological time series using basic statistical characteristics (e.g., mean, standard deviation and skewness) to test the quality of the model simulation. The synthetic hydrological series of monthly flows have the same statistical properties as the time series observed in the past. The compiled model was able to take into account the diversity of extreme hydrological situations in the form of synthetic series of mean monthly flows, confirming the occurrence of sets of flows which could occur in the future. The results of the stochastic modelling, in the form of synthetic time series of mean monthly flows that take into account the seasonal fluctuations of runoff within the year, are applicable in engineering hydrology (e.g., for the optimal use of an existing water management system in connection with the reassessment of the economic risks of the system).
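
    A sketch of a Thomas-Fiering monthly generator with the Wilson-Hilferty transformation of the random component, run here on toy "observed" flows; the Horné Orešany record is not reproduced, and the December-to-January pairing is simplified.

    ```python
    # Hedged sketch: Thomas-Fiering monthly flow generation with a
    # Wilson-Hilferty transformed random component.
    import numpy as np

    rng = np.random.default_rng(10)

    def wilson_hilferty(z, cs):
        """Transform standard normal z to a skewed variate with skewness cs."""
        if abs(cs) < 1e-6:
            return z
        return (2.0 / cs) * ((1 + cs * z / 6 - cs ** 2 / 36) ** 3 - 1)

    def thomas_fiering(obs, n_years):
        """obs: array (years, 12) of mean monthly flows; returns synthetic flows."""
        mean = obs.mean(axis=0)
        std = obs.std(axis=0, ddof=1)
        skew = ((obs - mean) ** 3).mean(axis=0) / std ** 3
        # Lag-1 correlation between month j and month j+1 (simplified wrap)
        r = np.array([np.corrcoef(obs[:, j], np.roll(obs, -1, axis=1)[:, j])[0, 1]
                      for j in range(12)])
        q = np.empty(n_years * 12)
        q[0] = mean[0]
        for i in range(1, len(q)):
            j, jn = (i - 1) % 12, i % 12
            b = r[j] * std[jn] / std[j]
            eps = wilson_hilferty(rng.standard_normal(), skew[jn])
            q[i] = max(mean[jn] + b * (q[i - 1] - mean[j])
                       + eps * std[jn] * np.sqrt(1 - r[j] ** 2), 0.0)
        return q.reshape(n_years, 12)

    obs = rng.gamma(2.0, 1.5, size=(32, 12))       # toy observed record
    synth = thomas_fiering(obs, n_years=100)
    print(obs.mean(axis=0).round(2), synth.mean(axis=0).round(2), sep="\n")
    ```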

  3. Diagnosis of inconsistencies in multi-year gridded precipitation data over mountainous areas and related impacts on hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Mizukami, N.; Smith, M. B.

    2010-12-01

    It is common for the error characteristics of long-term precipitation data to change over time due to various factors such as gauge relocation and changes in data processing methods. The temporal consistency of precipitation data error characteristics is as important as data accuracy itself for hydrologic model calibration and subsequent use of the calibrated model for streamflow prediction. In mountainous areas, the generation of precipitation grids relies on sparse gage networks, the makeup of which often varies over time. This causes a change in error characteristics of the long-term precipitation data record. We will discuss the diagnostic analysis of the consistency of gridded precipitation time series and illustrate the adverse effect of inconsistent precipitation data on a hydrologic model simulation. We used hourly 4 km gridded precipitation time series over a mountainous basin in the Sierra Nevada Mountains of California from October 1988 through September 2006. The basin is part of the broader study area that served as the focus of the second phase of the Distributed Model Intercomparison Project (DMIP-2), organized by the U.S. National Weather Service (NWS) of the National Oceanographic and Atmospheric Administration (NOAA). To check the consistency of the gridded precipitation time series, double mass analysis was performed using single pixel and basin mean areal precipitation (MAP) values derived from gridded DMIP-2 and Parameter-Elevation Regressions on Independent Slopes Model (PRISM) precipitation data. The analysis leads to the conclusion that over the entire study time period, a clear change in error characteristics in the DMIP-2 data occurred in the beginning of 2003. This matches the timing of one of the major gage network changes. The inconsistency of two MAP time series computed from the gridded precipitation fields over two elevation zones was corrected by adjusting hourly values based on the double mass analysis. We show that model simulations using the adjusted MAP data produce improved stream flow compared to simulations using the inconsistent MAP input data.
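
    The double mass analysis used for this diagnosis can be sketched as follows; the breakpoint location and adjustment factor come from toy data, not from the DMIP-2 grids.

    ```python
    # Hedged sketch: double mass analysis. Plot cumulative candidate vs.
    # cumulative reference precipitation; a stable ratio gives a straight line,
    # and a slope break marks a change in error characteristics.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 2000                       # e.g., days
    reference = rng.gamma(0.6, 5.0, n)                    # consistent reference
    candidate = reference * 0.95 + rng.normal(0, 0.4, n).clip(0)
    candidate[1200:] *= 1.3        # simulated gage-network change after day 1200

    cum_ref, cum_cand = np.cumsum(reference), np.cumsum(candidate)
    # Slope of the double mass curve before and after a suspected breakpoint
    slope_before = cum_cand[1199] / cum_ref[1199]
    slope_after = (cum_cand[-1] - cum_cand[1199]) / (cum_ref[-1] - cum_ref[1199])
    print("slopes:", slope_before.round(3), slope_after.round(3))
    # Adjust the later period so the whole record has consistent error behaviour
    candidate[1200:] *= slope_before / slope_after
    ```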

  4. Testing for nonlinearity in non-stationary physiological time series.

    PubMed

    Guarín, Diego; Delgado, Edilson; Orozco, Álvaro

    2011-01-01

    Testing for nonlinearity is one of the most important preprocessing steps in nonlinear time series analysis. Typically, this is done by means of the linear surrogate data methods. But it is a known fact that the validity of the results heavily depends on the stationarity of the time series. Since most physiological signals are non-stationary, it is easy to falsely detect nonlinearity using the linear surrogate data methods. In this document, we propose a methodology to extend the procedure for generating constrained surrogate time series in order to assess nonlinearity in non-stationary data. The method is based on band-phase-randomized surrogates, which consist (contrary to the linear surrogate data methods) in randomizing only a portion of the Fourier phases, in the high-frequency domain. Analysis of simulated time series showed that, in comparison to the linear surrogate data method, our method is able to discriminate between linear stationary, linear non-stationary and nonlinear time series. Applying our methodology to heart rate variability (HRV) records of five healthy patients, we found that nonlinear correlations are present in these non-stationary physiological signals.
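
    A minimal sketch of band-phase-randomized surrogates as described: only the Fourier phases above a cutoff frequency are randomized, preserving the slow non-stationary trend while scrambling the high-frequency structure. The cutoff fraction is an illustrative assumption.

    ```python
    # Hedged sketch: band-phase-randomized surrogate of a non-stationary series.
    import numpy as np

    def band_phase_surrogate(x, cutoff_frac=0.1, rng=None):
        """Randomize phases of Fourier components above cutoff_frac of Nyquist."""
        rng = np.random.default_rng() if rng is None else rng
        X = np.fft.rfft(x)
        k0 = int(cutoff_frac * len(X))          # keep phases below this index
        phases = rng.uniform(0, 2 * np.pi, len(X) - k0)
        X[k0:] = np.abs(X[k0:]) * np.exp(1j * phases)
        return np.fft.irfft(X, n=len(x))

    rng = np.random.default_rng(12)
    t = np.linspace(0, 10, 2000)
    x = np.sin(0.5 * t) + 0.3 * rng.standard_normal(2000)   # slow trend + noise
    s = band_phase_surrogate(x, cutoff_frac=0.05, rng=rng)
    # The surrogate keeps the slow trend but scrambles the fast structure
    print(np.corrcoef(x, s)[0, 1])
    ```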

  5. Complex-valued time-series correlation increases sensitivity in FMRI analysis.

    PubMed

    Kociuba, Mary C; Rowe, Daniel B

    2016-07-01

    To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI data sets with high noise variance, and avoid excessive processing induced correlation. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. The incorporation of focused history in checklist for early recognition and treatment of acute illness and injury.

    PubMed

    Jayaprakash, Namita; Ali, Rashid; Kashyap, Rahul; Bennett, Courtney; Kogan, Alexander; Gajic, Ognjen

    2016-08-31

    Diagnostic error and delay are critical impediments to the safety of critically ill patients. The Checklist for Early Recognition and Treatment of Acute Illness and Injury (CERTAIN) has been developed as a tool that facilitates timely and error-free evaluation of critically ill patients. While the focused history is an essential part of the CERTAIN framework, it is not clear how best to choreograph this step in the process of evaluating and treating the acutely decompensating patient. An unblinded crossover clinical simulation study was designed in which volunteer critical care clinicians (fellows and attendings) were randomly assigned to start with obtaining a focused history choreographed either in series with (after) or in parallel to the primary survey. The focused history was obtained using the standardized SAMPLE model that is incorporated into Advanced Trauma Life Support (ATLS) and Pediatric Advanced Life Support (PALS). Clinicians were asked to assess six acutely decompensating patients using pre-determined clinical scenarios (three in series choreography, three in parallel). Once the initial choreography was completed, the clinician crossed over to the alternative choreography. The primary outcome was the cognitive burden assessed through the NASA task load index. The secondary outcome was time to completion of a focused history. A total of 84 simulated cases (42 in parallel, 42 in series) were tested on 14 clinicians. Both the overall cognitive load and time to completion improved with each successive practice scenario; however, no difference was observed between the series and parallel choreographies. The median (IQR) overall NASA TLX task load index was 39 (17-58) for series and 43 (27-52) for parallel, p = 0.57. The median (IQR) time to completion of the tasks was 125 (112-158) seconds in series and 122 (108-158) seconds in parallel, p = 0.92. In this clinical simulation study assessing the incorporation of a focused history into the primary survey of a non-trauma critically ill patient, there was no difference in cognitive burden or time to task completion when using series choreography (after the primary survey physical exam) compared to parallel choreography (concurrent with it). However, with repetition of the task, both overall task load and time to completion improved in each of the choreographies.

  7. Scaling and efficiency determine the irreversible evolution of a market

    PubMed Central

    Baldovin, F.; Stella, A. L.

    2007-01-01

    In setting up a stochastic description of the time evolution of a financial index, the challenge consists in devising a model compatible with all stylized facts emerging from the analysis of financial time series and providing a reliable basis for simulating such series. Based on constraints imposed by market efficiency and on an inhomogeneous-time generalization of standard simple scaling, we propose an analytical model which accounts simultaneously for empirical results like the linear decorrelation of successive returns, the power law dependence on time of the volatility autocorrelation function, and the multiscaling associated to this dependence. In addition, our approach gives a justification and a quantitative assessment of the irreversible character of the index dynamics. This irreversibility enters as a key ingredient in a novel simulation strategy of index evolution which demonstrates the predictive potential of the model.

  8. Influence of Time-Series Normalization, Number of Nodes, Connectivity and Graph Measure Selection on Seizure-Onset Zone Localization from Intracranial EEG.

    PubMed

    van Mierlo, Pieter; Lie, Octavian; Staljanssens, Willeke; Coito, Ana; Vulliémoz, Serge

    2018-04-26

    We investigated the influence of processing steps in the estimation of multivariate directed functional connectivity during seizures recorded with intracranial EEG (iEEG) on seizure-onset zone (SOZ) localization. We studied the effect of (i) the number of nodes, (ii) time-series normalization, (iii) the choice of multivariate time-varying connectivity measure: the Adaptive Directed Transfer Function (ADTF) or the Adaptive Partial Directed Coherence (APDC), and (iv) the graph theory measure: outdegree or shortest path length. First, simulations were performed to quantify the influence of the various processing steps on the accuracy of SOZ localization. Afterwards, the SOZ was estimated from a 113-electrode iEEG seizure recording and compared with the resection that rendered the patient seizure-free. The simulations revealed that the ADTF is preferred over the APDC for localizing the SOZ from ictal iEEG recordings. Normalizing the time series before analysis resulted in an increase of 25-35% in correctly localized SOZs, while adding more nodes to the connectivity analysis led to a moderate decrease of 10% when comparing 128 with 32 input nodes. The real-seizure connectivity estimates localized the SOZ inside the resection area using the ADTF coupled with outdegree or shortest path length. Our study showed that normalizing the time series is an important pre-processing step, while adding nodes to the analysis only marginally affected SOZ localization. The study shows that directed multivariate Granger-based connectivity analysis is feasible with many input nodes (> 100) and that normalization of the time series before connectivity analysis is preferred.

  9. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated, and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov Chain Monte Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models and can provide probabilistic forecasting in a more computationally efficient manner than the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.

  10. Phase unwrapping in three dimensions with application to InSAR time series.

    PubMed

    Hooper, Andrew; Zebker, Howard A

    2007-09-01

    The problem of phase unwrapping in two dimensions has been studied extensively in the past two decades, but the three-dimensional (3D) problem has so far received relatively little attention. We develop here a theoretical framework for 3D phase unwrapping and also describe two algorithms for implementation, both of which can be applied to synthetic aperture radar interferometry (InSAR) time series. We test the algorithms on simulated data and find both give more accurate results than a two-dimensional algorithm. When applied to actual InSAR time series, we find good agreement both between the algorithms and with ground truth.

  11. Dynamically downscaled climate simulations over North America: Methods, evaluation, and supporting documentation for users

    USGS Publications Warehouse

    Hostetler, S.W.; Alder, J.R.; Allan, A.M.

    2011-01-01

    We have completed an array of high-resolution simulations of present and future climate over Western North America (WNA) and Eastern North America (ENA) by dynamically downscaling global climate simulations using a regional climate model, RegCM3. The simulations are intended to provide long time series of internally consistent surface and atmospheric variables for use in climate-related research. In addition to providing high-resolution weather and climate data for the past, present, and future, we have developed an integrated data flow and methodology for processing, summarizing, viewing, and delivering the climate datasets to a wide range of potential users. Our simulations were run over 50- and 15-kilometer model grids in an attempt to capture more of the climatic detail associated with processes such as topographic forcing than can be captured by general circulation models (GCMs). The simulations were run using output from four GCMs. All simulations span the present (for example, 1968-1999), common periods of the future (2040-2069), and two simulations continuously cover 2010-2099. The trace gas concentrations in our simulations were the same as those of the GCMs: the IPCC 20th century time series for 1968-1999 and the A2 time series for simulations of the future. We demonstrate that RegCM3 is capable of producing present day annual and seasonal climatologies of air temperature and precipitation that are in good agreement with observations. Important features of the high-resolution climatology of temperature, precipitation, snow water equivalent (SWE), and soil moisture are consistently reproduced in all model runs over WNA and ENA. The simulations provide a potential range of future climate change for selected decades and display common patterns of the direction and magnitude of changes. As expected, there are some model to model differences that limit interpretability and give rise to uncertainties. Here, we provide background information about the GCMs and the RegCM3, a basic evaluation of the model output and examples of simulated future climate. We also provide information needed to access the web applications for visualizing and downloading the data, and give complete metadata that describe the variables in the datasets.

  12. Aerial Refueling Simulator Validation Using Operational Experimentation and Response Surface Methods with Time Series Responses

    DTIC Science & Technology

    2013-03-21

    [Fragmentary OCR excerpt from the report's front matter and body: a table of contents listing sections on time series response data and comparison of responses; a note that the D-optimality criterion was historically the most popular basis for response surface designs; and a remark that a model can refer to almost any physical, mathematical, or logical representation in mathematics, statistics, or computer science.]

  13. Tracking the Momentum Flux of a CME and Quantifying Its Influence on Geomagnetically Induced Currents at Earth

    NASA Technical Reports Server (NTRS)

    Savani, N. P.; Vourlidas, A.; Pulkkinen, A.; Nieves-Chinchilla, T.; Lavraud, B.; Owens, M. J.

    2013-01-01

    We investigate a coronal mass ejection (CME) propagating toward Earth on 29 March 2011. This event is specifically chosen for its predominately northward directed magnetic field, so that the influence from the momentum flux onto Earth can be isolated. We focus our study on understanding how a small Earth-directed segment propagates. Mass images are created from the white-light cameras onboard STEREO, which are also converted into mass height-time maps (mass J-maps). The mass tracks on these J-maps correspond to the sheath region between the CME and its associated shock front as detected by in situ measurements at L1. A time series of mass measurements from the STEREO COR-2A instrument is made along the Earth propagation direction. Qualitatively, this mass time series shows a remarkable resemblance to the L1 in situ density series. The in situ measurements are used as inputs into a three-dimensional (3-D) magnetospheric space weather simulation from the Community Coordinated Modeling Center. These simulations display a sudden compression of the magnetosphere from the large momentum flux at the leading edge of the CME, and predictions are made for the time derivative of the magnetic field (dB/dt) on the ground. The predicted dB/dt values were then compared with observations from specific equatorially located ground stations and showed notable similarity. This study of the momentum of a CME from the Sun down to its influence on magnetic ground stations on Earth is presented as a preliminary proof of concept, such that future attempts may try to use remote sensing to create density and velocity time series as inputs to magnetospheric simulations.

  14. Methods for Detecting Early Warnings of Critical Transitions in Time Series Illustrated Using Simulated Ecological Data

    PubMed Central

    Dakos, Vasilis; Carpenter, Stephen R.; Brock, William A.; Ellison, Aaron M.; Guttal, Vishwesha; Ives, Anthony R.; Kéfi, Sonia; Livina, Valerie; Seekell, David A.; van Nes, Egbert H.; Scheffer, Marten

    2012-01-01

    Many dynamical systems, including lakes, organisms, ocean circulation patterns, or financial markets, are now thought to have tipping points where critical transitions to a contrasting state can happen. Because critical transitions can occur unexpectedly and are difficult to manage, there is a need for methods that can be used to identify when a critical transition is approaching. Recent theory shows that we can identify the proximity of a system to a critical transition using a variety of so-called ‘early warning signals’, and successful empirical examples suggest a potential for practical applicability. However, while the range of proposed methods for predicting critical transitions is rapidly expanding, opinions on their practical use differ widely, and there is no comparative study that tests the limitations of the different methods to identify approaching critical transitions using time-series data. Here, we summarize a range of currently available early warning methods and apply them to two simulated time series that are typical of systems undergoing a critical transition. In addition to a methodological guide, our work offers a practical toolbox that may be used in a wide range of fields to help detect early warning signals of critical transitions in time series data. PMID:22815897
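
    Two of the generic indicators surveyed in such toolboxes, rolling-window variance and lag-1 autocorrelation, can be sketched as follows; the AR(1) toy series with drifting memory is an assumption chosen to mimic critical slowing down, not data from the paper.

      import numpy as np

      def rolling_ews(x, win):
          # rolling-window variance and lag-1 autocorrelation,
          # two common early warning indicators of critical slowing down
          var, ac1 = [], []
          for i in range(len(x) - win + 1):
              w = x[i:i + win]
              var.append(w.var())
              ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
          return np.array(var), np.array(ac1)

      # toy series approaching a transition: AR(1) memory drifts toward 1
      rng = np.random.default_rng(42)
      n = 1000
      phi = np.linspace(0.2, 0.99, n)
      x = np.zeros(n)
      for t in range(1, n):
          x[t] = phi[t] * x[t - 1] + rng.standard_normal()

      v, a = rolling_ews(x, win=200)
      print(a[0], a[-1])    # autocorrelation rises as the transition nears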

  15. An Observation Analysis Tool for time-series analysis and sensor management in the FREEWAT GIS environment for water resources management

    NASA Astrophysics Data System (ADS)

    Cannata, Massimiliano; Neumann, Jakob; Cardoso, Mirko; Rossetto, Rudy; Foglia, Laura; Borsi, Iacopo

    2017-04-01

    In situ time series are an important aspect of environmental modelling, especially with the advancement of numerical simulation techniques and increased model complexity. In order to make use of the increasing data available through the requirements of the EU Water Framework Directive, the FREEWAT GIS environment incorporates the newly developed Observation Analysis Tool (OAT) for time series analysis. The tool is used to import time series data into QGIS from local CSV files, online sensors using the istSOS service, or MODFLOW model result files, and enables visualisation, pre-processing of data for model development, and post-processing of model results. OAT can be used as a pre-processor for calibration observations, integrating the creation of observations for calibration directly from sensor time series. The tool consists of an expandable Python library of processing methods and an interface integrated in the QGIS FREEWAT plug-in, which includes a large number of modelling capabilities, data management tools and calibration capacity.

  16. Statistical power of intervention analyses: simulation and empirical application to treated lumber prices

    Treesearch

    Jeffrey P. Prestemon

    2009-01-01

    Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...

  17. Measurements of spatial population synchrony: influence of time series transformations.

    PubMed

    Chevalier, Mathieu; Laffaille, Pascal; Ferdy, Jean-Baptiste; Grenouillet, Gaël

    2015-09-01

    Two mechanisms have been proposed to explain spatial population synchrony: dispersal among populations, and the spatial correlation of density-independent factors (the "Moran effect"). To identify which of these two mechanisms is driving spatial population synchrony, time series transformations (TSTs) of abundance data have been used to remove the signature of one mechanism and highlight the effect of the other. However, several issues with TSTs remain, and to date no consensus has emerged about how population time series should be handled in synchrony studies. Here, by using 3131 time series involving 34 fish species found in French rivers, we computed several metrics commonly used in synchrony studies to determine whether a large-scale climatic factor (temperature) influenced fish population dynamics at the regional scale, and to test the effect of three commonly used TSTs (detrending, prewhitening and a combination of both) on these metrics. We also tested whether the influence of TSTs on time series and population synchrony levels was related to the features of the time series, using both empirical and simulated time series. For several species, and regardless of the TST used, we found evidence of a Moran effect on freshwater fish populations. However, these results were biased downward overall by TSTs, which reduced our ability to detect significant signals. Depending on the species and the features of the time series, we found that TSTs could lead to contradictory results, regardless of the metric considered. Finally, we suggest guidelines on how population time series should be processed in synchrony studies.
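
    A minimal sketch of two of the TSTs discussed, linear detrending and AR(1) prewhitening, applied to a pair of toy series with a shared driver (all values are illustrative):

      import numpy as np

      def detrend(x):
          # remove a linear trend, one common TST
          t = np.arange(len(x))
          slope, intercept = np.polyfit(t, x, 1)
          return x - (slope * t + intercept)

      def prewhiten(x):
          # remove lag-1 autocorrelation by taking AR(1) residuals
          phi = np.corrcoef(x[:-1], x[1:])[0, 1]
          return x[1:] - phi * x[:-1]

      rng = np.random.default_rng(0)
      common = rng.standard_normal(300)       # shared "Moran" driver
      a = np.cumsum(common + rng.standard_normal(300))
      b = np.cumsum(common + rng.standard_normal(300))

      raw = np.corrcoef(a, b)[0, 1]
      tst = np.corrcoef(prewhiten(detrend(a)), prewhiten(detrend(b)))[0, 1]
      print(raw, tst)    # TSTs typically lower the measured synchrony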

  18. The short time Fourier transform and local signals

    NASA Astrophysics Data System (ADS)

    Okumura, Shuhei

    In this thesis, I examine the theoretical properties of the short time discrete Fourier transform (STFT). The STFT is obtained by applying the Fourier transform over a fixed-size moving window to the input series. We move the window by one time point at a time, so the windows overlap. I present several theoretical properties of the STFT, applied to various types of complex-valued, univariate time series inputs, and give their outputs in closed form. In particular, just like the discrete Fourier transform, the STFT's modulus time series takes large positive values when the input is a periodic signal. One main result is that a white noise input yields an STFT output that is a complex-valued stationary time series, for which we can derive the time and time-frequency dependency structure, such as the cross-covariance functions. Our primary focus is the detection of local periodic signals. I present a method to detect local signals by computing the probability that the squared-modulus STFT time series has a run of consecutive values exceeding some threshold immediately following an observation below the threshold. We discuss a method to reduce the computation of such probabilities using the Box-Cox transformation and the delta method, and show that it works well in comparison to the Monte Carlo simulation method.
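
    A bare-bones version of the construction, a fixed-size window moved one point at a time with the squared modulus retained, might look like this (the toy signal and window length are assumptions):

      import numpy as np

      def stft_sq_modulus(x, win):
          # squared-modulus STFT with maximally overlapping windows
          n = len(x) - win + 1
          out = np.empty((n, win // 2 + 1))
          for i in range(n):
              out[i] = np.abs(np.fft.rfft(x[i:i + win]))**2
          return out

      # a periodic signal active only locally, buried in white noise
      rng = np.random.default_rng(1)
      x = rng.standard_normal(1000)
      t = np.arange(300, 500)
      x[t] += 2.0 * np.sin(2 * np.pi * 0.1 * t)

      S = stft_sq_modulus(x, win=64)
      k = round(0.1 * 64)          # bin nearest the signal frequency
      print(S[:, k].argmax())      # a window start inside the active interval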

  19. A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series

    NASA Astrophysics Data System (ADS)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2016-02-01

    Singular spectrum analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters, which are difficult to determine and to whose values the results are very sensitive. Since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or correlated noise. We therefore introduce a novel method to handle these problems. It is based on the prediction of non-decimated wavelet (NDW) signals by SSA, followed by prediction of the residuals by wavelet regression. The advantages of our method are the automatic determination of the parameters and its accounting for the stochastic structure of the time series. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.

  20. Implemented Lomb-Scargle periodogram: a valuable tool for improving cyclostratigraphic research on unevenly sampled deep-sea stratigraphic sequences

    NASA Astrophysics Data System (ADS)

    Pardo-Iguzquiza, Eulogio; Rodríguez-Tovar, Francisco J.

    2011-12-01

    One important handicap when working with stratigraphic sequences is the discontinuous character of the sedimentary record, which is especially relevant in cyclostratigraphic analysis. Uneven palaeoclimatic/palaeoceanographic time series are common, and their cyclostratigraphic analysis is comparatively difficult because most spectral methodologies are appropriate only for even sampling. As a means to solve this problem, a program for calculating the smoothed Lomb-Scargle periodogram and cross-periodogram, which additionally evaluates the statistical confidence of the estimated power spectrum through a Monte Carlo procedure (the permutation test), has been developed. The spectral analysis of a short uneven time series calls for assessment of the statistical significance of the spectral peaks, since a periodogram can always be calculated but the main challenge resides in identifying true spectral features. To demonstrate the effectiveness of this program, two case studies are presented: one deals with synthetic data and the other with palaeoceanographic/palaeoclimatic proxies. From a simulated time series of 500 data points, two uneven time series (with 100 and 25 points) were generated by selecting data at random. Comparative analysis between the power spectra from the simulated series and from the two uneven time series demonstrates the usefulness of the smoothed Lomb-Scargle periodogram for uneven sequences, making it possible to distinguish between statistically significant and spurious spectral peaks. Fragmentary time series of Cd/Ca ratios and δ18O from core AII107-131 of SPECMAP were analysed as a real case study. The efficiency of the direct and cross Lomb-Scargle periodograms in recognizing Milankovitch and sub-Milankovitch signals related to palaeoclimatic/palaeoceanographic changes is demonstrated. As implemented, the Lomb-Scargle periodogram may be applied to any palaeoclimatic/palaeoceanographic proxies, including those usually recovered from contourites, and it holds special interest in the context of centennial- to millennial-scale climatic changes affecting contouritic currents.
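
    The basic periodogram is available in SciPy; a small sketch mirroring the synthetic experiment (an evenly sampled series thinned at random to an uneven one) could read as follows, noting that scipy.signal.lombscargle is the plain estimator, not the smoothed, permutation-tested implementation the authors describe:

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(7)
      t_full = np.linspace(0.0, 50.0, 500)
      keep = np.sort(rng.choice(500, size=100, replace=False))
      t = t_full[keep]                              # uneven sampling
      y = np.sin(2 * np.pi * 0.5 * t) + 0.5 * rng.standard_normal(100)

      omega = np.linspace(0.1, 10.0, 2000)          # angular test frequencies
      power = lombscargle(t, y - y.mean(), omega)
      print(omega[power.argmax()] / (2 * np.pi))    # close to the true 0.5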

  1. Using Animated Computer Simulation to Determine the Optimal Resource Support for the Endodontic Specialty Practice at Fort Lewis.

    DTIC Science & Technology

    1998-03-01

    [Fragmentary OCR excerpt: entity and scheduling tables from the animated simulation model (patient endodontic and periodontic exam and treatment entities with time series attributes), together with a quoted definition of endodontics as the "prevention, diagnosis, and treatment of diseases and injuries that affect the dental pulp, tooth root, and periapical tissue" (Jablonski, 1982).]

  2. Near Real Time Change-Point detection in Optical and Thermal Infrared Time Series Using Bayesian Inference over the Dry Chaco Forest

    NASA Astrophysics Data System (ADS)

    Barraza Bernadas, V.; Grings, F.; Roitberg, E.; Perna, P.; Karszenbaum, H.

    2017-12-01

    The Dry Chaco forest (DCF) region has the highest absolute deforestation rates of all Argentinian forests; the most recent report indicates a current deforestation rate of 200,000 ha per year. In order to better monitor this process, the DCF was chosen to implement an early warning program for illegal deforestation. Although the area is intensively studied using medium resolution imagery (Landsat), the products obtained are updated at a yearly pace and are therefore unsuited for an early warning program. In this paper, we evaluated the performance of an online Bayesian change-point detection algorithm for MODIS Enhanced Vegetation Index (EVI) and Land Surface Temperature (LST) datasets. The goal was to monitor the abrupt changes in vegetation dynamics associated with deforestation events. We tested this model by simulating 16-day EVI and 8-day LST time series with varying amounts of seasonality and noise, and by adding abrupt changes with different magnitudes. The model was then tested on real satellite time series available through Google Earth Engine, over a pilot area in the DCF where deforestation was common in the 2004-2016 period. A comparison with yearly benchmark products based on Landsat images (the REDAF dataset) is also presented. The results show the advantage of using an automatic model to detect change points in the time series over relying on visual inspection alone. Simulating time series with varying amounts of seasonality and noise, and adding abrupt changes at different times and magnitudes, revealed that the model is robust against noise and is not influenced by changes in the amplitude of the seasonal component. Furthermore, the results compared favorably with the REDAF dataset (nearly 65% agreement). These results show the potential of combining LST and EVI to identify deforestation events. This work is being developed within the frame of the national Forest Law for the protection and sustainable development of native forest in Argentina, in agreement with international legislation (REDD+).
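
    As a toy version of the simulation setup (deliberately not the authors' online Bayesian algorithm), one can synthesize a seasonal EVI-like series with noise and an abrupt drop, and flag the change with a naive difference of adjacent annual-window means; all magnitudes below are illustrative:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 23 * 12                     # ~12 years of 16-day composites (23/yr)
      t = np.arange(n)
      evi = 0.45 + 0.15 * np.sin(2 * np.pi * t / 23)   # annual cycle
      evi += 0.03 * rng.standard_normal(n)             # observation noise
      evi[150:] -= 0.25                                # abrupt deforestation drop

      # crude detector: compare means of adjacent one-year windows
      score = np.array([evi[i:i + 23].mean() - evi[i + 23:i + 46].mean()
                        for i in range(n - 46)])
      print(score.argmax() + 23)      # boundary estimate near the true break at 150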

  3. ELECTRONIC ANALOG COMPUTER FOR DETERMINING RADIOACTIVE DISINTEGRATION

    DOEpatents

    Robinson, H.P.

    1959-07-14

    A computer is presented for determining growth and decay curves for elements in a radioactive disintegration series wherein one unstable element decays to form a second unstable element or isotope, which in turn forms a third element, etc. The growth and decay curves of radioactive elements are simulated by the charge and discharge curves of a resistance-capacitance network. Several such networks having readily adjustable values are connected in series with an amplifier between each successive pair. The time constant of each of the various networks is set proportional to the half-life of a corresponding element in the series represented and the charge and discharge curves of each of the networks simulates the element growth and decay curve.
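
    The decay-chain dynamics the networks emulate are easy to check numerically; below, a two-stage chain is integrated with an explicit Euler step (the discrete analog of the charging and discharging network) and compared against Bateman's closed form, with half-lives chosen arbitrarily for illustration:

      import numpy as np

      # chain A -> B -> C: dNa/dt = -la*Na ; dNb/dt = la*Na - lb*Nb
      la, lb = np.log(2) / 5.0, np.log(2) / 20.0   # half-lives 5 and 20 (a.u.)
      dt, n = 0.01, 10000
      Na, Nb = np.empty(n), np.empty(n)
      Na[0], Nb[0] = 1.0, 0.0
      for i in range(1, n):
          Na[i] = Na[i - 1] - la * Na[i - 1] * dt
          Nb[i] = Nb[i - 1] + (la * Na[i - 1] - lb * Nb[i - 1]) * dt

      # Bateman's closed form for the daughter, as a check on the "analog"
      t = dt * np.arange(n)
      Nb_exact = la / (lb - la) * (np.exp(-la * t) - np.exp(-lb * t))
      print(np.abs(Nb - Nb_exact).max())           # small discretization error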

  4. POLYNOMIAL-BASED DISAGGREGATION OF HOURLY RAINFALL FOR CONTINUOUS HYDROLOGIC SIMULATION

    EPA Science Inventory

    Hydrologic modeling of urban watersheds for designs and analyses of stormwater conveyance facilities can be performed in either an event-based or continuous fashion. Continuous simulation requires, among other things, the use of a time series of rainfall amounts. However, for urb...

  5. Analysis of PV Advanced Inverter Functions and Setpoints under Time Series Simulation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph

    Utilities are increasingly concerned about the potential negative impacts distributed PV may have on the operational integrity of their distribution feeders. Some have proposed novel methods for controlling a PV system's grid-tie inverter to mitigate potential PV-induced problems. This report investigates the effectiveness of several of these PV advanced inverter controls in improving distribution feeder operational metrics. The controls are simulated on a large PV system interconnected at several locations within two realistic distribution feeder models. Due to the time-domain nature of the advanced inverter controls, quasi-static time series simulations are performed under one week of representative variable irradiance and load data for each feeder. A parametric study is performed on each control type to determine how well certain measurable network metrics improve as a function of the control parameters. This methodology is used to determine appropriate advanced inverter settings for each location on the feeder, and overall for any interconnection location on the feeder.

  6. Impact of Sensor Degradation on the MODIS NDVI Time Series

    NASA Technical Reports Server (NTRS)

    Wang, Dongdong; Morton, Douglas Christopher; Masek, Jeffrey; Wu, Aisheng; Nagol, Jyoteshwar; Xiong, Xiaoxiong; Levy, Robert; Vermote, Eric; Wolfe, Robert

    2012-01-01

    Time series of satellite data provide unparalleled information on the response of vegetation to climate variability. Detecting subtle changes in vegetation over time requires consistent satellite-based measurements. Here, the impact of sensor degradation on trend detection was evaluated using Collection 5 data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua platforms. For Terra MODIS, the impact of blue band (Band 3, 470 nm) degradation on simulated surface reflectance was most pronounced at near-nadir view angles, leading to a 0.001-0.004 yr-1 decline in Normalized Difference Vegetation Index (NDVI) under a range of simulated aerosol conditions and surface types. Observed trends in MODIS NDVI over North America were consistent with simulated results, with nearly a threefold difference in negative NDVI trends derived from Terra (17.4%) and Aqua (6.7%) MODIS sensors during 2002-2010. Planned adjustments to Terra MODIS calibration for Collection 6 data reprocessing will largely eliminate this negative bias in detection of NDVI trends.

  7. Impact of Sensor Degradation on the MODIS NDVI Time Series

    NASA Technical Reports Server (NTRS)

    Wang, Dongdong; Morton, Douglas; Masek, Jeffrey; Wu, Aisheng; Nagol, Jyoteshwar; Xiong, Xiaoxiong; Levy, Robert; Vermote, Eric; Wolfe, Robert

    2011-01-01

    Time series of satellite data provide unparalleled information on the response of vegetation to climate variability. Detecting subtle changes in vegetation over time requires consistent satellite-based measurements. Here, we evaluated the impact of sensor degradation on trend detection using Collection 5 data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua platforms. For Terra MODIS, the impact of blue band (Band 3, 470 nm) degradation on simulated surface reflectance was most pronounced at near-nadir view angles, leading to a 0.001-0.004/yr decline in Normalized Difference Vegetation Index (NDVI) under a range of simulated aerosol conditions and surface types. Observed trends in MODIS NDVI over North America were consistent with simulated results, with nearly a threefold difference in negative NDVI trends derived from Terra (17.4%) and Aqua (6.7%) MODIS sensors during 2002-2010. Planned adjustments to Terra MODIS calibration for Collection 6 data reprocessing will largely eliminate this negative bias in NDVI trends over vegetation.

  8. Statistical test for ΔρDCCA cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Guedes, E. F.; Brito, A. A.; Oliveira Filho, F. M.; Fernandez, B. F.; de Castro, A. P. N.; da Silva Filho, A. M.; Zebende, G. F.

    2018-07-01

    In this paper we propose a new statistical test for ΔρDCCA, the Detrended Cross-Correlation Coefficient Difference, a tool to measure the contagion/interdependence effect between time series of size N at different time scales n. To this end, we analyzed simulated and real time series. The results show that the statistical significance of ΔρDCCA depends on the size N and the time scale n, and we define critical values for this dependence at the 90%, 95%, and 99% confidence levels, as will be shown in this paper.
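
    For reference, one common variant of the underlying ρDCCA coefficient (non-overlapping boxes, linear detrending of the integrated profiles) can be computed as below; box-handling details differ across the literature, so this is a sketch rather than the authors' exact estimator:

      import numpy as np

      def rho_dcca(x, y, n):
          # detrended cross-correlation coefficient at time scale n
          X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
          t = np.arange(n)
          fx2 = fy2 = fxy = 0.0
          for i in range(0, len(x) - n + 1, n):     # non-overlapping boxes
              rx = X[i:i + n] - np.polyval(np.polyfit(t, X[i:i + n], 1), t)
              ry = Y[i:i + n] - np.polyval(np.polyfit(t, Y[i:i + n], 1), t)
              fx2 += (rx**2).mean()
              fy2 += (ry**2).mean()
              fxy += (rx * ry).mean()
          return fxy / np.sqrt(fx2 * fy2)

      rng = np.random.default_rng(0)
      z = rng.standard_normal(2000)                 # common component
      x = z + rng.standard_normal(2000)
      y = z + rng.standard_normal(2000)
      print(rho_dcca(x, y, 16), rho_dcca(x, y, 128))   # both near 0.5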

  9. Quantifying the consequences of changing hydroclimatic extremes on protection levels for the Rhine

    NASA Astrophysics Data System (ADS)

    Sperna Weiland, Frederiek; Hegnauer, Mark; Buiteveld, Hendrik; Lammersen, Rita; van den Boogaard, Henk; Beersma, Jules

    2017-04-01

    The Dutch method for quantifying the magnitude and frequency of occurrence of discharge extremes in the Rhine basin, and the potential influence of climate change hereon, is presented. In the Netherlands, flood protection design requires estimates of discharge extremes for return periods of 1,000 up to 100,000 years. Observed discharge records are too short to derive such extreme return discharges; therefore, extreme value assessment is based on very long synthetic discharge time series generated with the Generator of Rainfall And Discharge Extremes (GRADE). The GRADE instrument consists of (1) a stochastic weather generator based on time series resampling of historical rainfall and temperature, (2) a hydrological model optimized following the GLUE methodology, and (3) a hydrodynamic model to simulate the propagation of flood waves based on the generated hydrological time series. To assess the potential influence of climate change, the four KNMI'14 climate scenarios are applied. These four scenarios represent a large part of the uncertainty spanned by the GCMs used for the IPCC 5th assessment report (the CMIP5 GCM simulations under different climate forcings) and are for this purpose tailored to the Rhine and Meuse river basins. To derive the probability distributions of extreme discharges under climate change, the historical synthetic rainfall and temperature series simulated with the weather generator are transformed to the future following the KNMI'14 scenarios. For this transformation, the Advanced Delta Change method, which allows the changes in the extremes to differ from those in the means, is used. Subsequently, the hydrological model is forced with the historical and future (i.e., transformed) synthetic time series, after which the propagation of the flood waves is simulated with the hydrodynamic model to obtain the extreme discharge statistics for both current and future climate conditions. The study shows that for both 2050 and 2085, increases in discharge extremes for the river Rhine at Lobith are projected by all four KNMI'14 climate scenarios. This poses increased requirements for flood protection design in order to prepare for changing climate conditions.

  10. A web-based Tamsui River flood early-warning system with correction of real-time water stage using monitoring data

    NASA Astrophysics Data System (ADS)

    Liao, H. Y.; Lin, Y. J.; Chang, H. K.; Shang, R. K.; Kuo, H. C.; Lai, J. S.; Tan, Y. C.

    2017-12-01

    Taiwan encounters heavy rainfall frequently, with three to four typhoons striking the island every year. To provide lead time for reducing flood damage, this study attempts to build a flood early-warning system (FEWS) for the Tamsui River using time series correction techniques. The predicted rainfall is used as the input for the rainfall-runoff model, and the discharges calculated by the rainfall-runoff model are passed to the 1-D river routing model. The 1-D river routing model outputs the simulated water stages at 487 cross sections for the next 48 hours. The downstream water stage at the estuary in the 1-D river routing model is provided by a storm surge simulation. Next, the water stages at the 487 cross sections are corrected by a time series model, such as an autoregressive (AR) model, using real-time water stage measurements to improve prediction accuracy. The results of the simulated water stages are displayed on a web-based platform. In addition, the models can be run remotely by any user with a web browser through a user interface. On-line video surveillance images, real-time monitored water stages, and rainfalls can also be shown on this platform. If a simulated water stage exceeds the embankments of the Tamsui River, the alerting lights of the FEWS flash on the screen. This platform runs periodically and automatically to generate graphics of simulated flood water stages for flood disaster prevention and decision making.
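
    The correction step can be sketched with a first-order autoregressive model of the recent stage errors; the residuals below are hypothetical values, and an operational system would refit the coefficient as each new measurement arrives:

      import numpy as np

      def ar1_error_correction(errors, horizon):
          # project recent observed-minus-simulated errors forward with AR(1)
          e = np.asarray(errors, dtype=float)
          phi = np.corrcoef(e[:-1], e[1:])[0, 1]    # lag-1 coefficient
          return e[-1] * phi ** np.arange(1, horizon + 1)

      resid = np.array([0.05, 0.08, 0.12, 0.15, 0.17, 0.20])  # meters, hypothetical
      correction = ar1_error_correction(resid, horizon=6)
      print(correction)   # added to the raw simulated stages; decays with lead time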

  11. Evaluation and comparison of statistical methods for early temporal detection of outbreaks: A simulation-based study

    PubMed Central

    Le Strat, Yann

    2017-01-01

    The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performances. PMID:28715489

  12. Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach

    NASA Astrophysics Data System (ADS)

    Thomas, C.; Lark, R. M.

    2013-12-01

    Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine the sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current state to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test the sensitivity of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, with the mean modelled as a function of a fixed effect, plus two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, with the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect: in one, the correlation decays exponentially with time; in the second (spherical) model, it cuts off at a temporal range. Having fitted the model, multiple realisations were generated; the random effects were simulated by specifying a covariance matrix for the simulated values, with the estimated parameters. The Cholesky factorisation of the covariance matrix was computed, and realizations of the random component of the model were generated by pre-multiplying a vector of iid standard Gaussian variables by the lower triangular factor. The resulting random variate was added to the mean value computed from the fixed effects, and the result back-transformed to the original scale of the measurement. Realistic simulations result from the approach described above. Background exploratory data analysis was undertaken on 20-day sets of 30-minute buoy data, selected from days 5-24 of January, April, July and October 2011, to elucidate daily to weekly variations and to keep the numerical analysis computationally tractable. Work remains to be undertaken to develop suitable models for synthetic directional data. We suggest that the general principles of the method will have applications in other geomorphological modelling endeavours requiring time series of stochastically variable environmental parameters.
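
    A condensed sketch of the simulation step described above: a periodic fixed effect plus a temporally correlated random effect drawn via Cholesky factorisation of an exponential covariance, plus an iid component. All parameter values are illustrative rather than fitted, and the Box-Cox back-transformation is omitted:

      import numpy as np

      n, sigma2, corr_range = 500, 1.0, 30.0        # length, sill, range (steps)
      t = np.arange(n)
      C = sigma2 * np.exp(-np.abs(t[:, None] - t[None, :]) / corr_range)
      C += 1e-10 * np.eye(n)                        # numerical jitter

      L = np.linalg.cholesky(C)
      rng = np.random.default_rng(5)
      z = L @ rng.standard_normal(n)                # correlated random effect

      mean = 2.0 + 0.8 * np.sin(2 * np.pi * t / 365)    # periodic fixed effect
      hs = mean + z + 0.1 * rng.standard_normal(n)      # plus iid component
      print(hs[:5])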

  13. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.

    PubMed

    Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang

    2017-01-01

    Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  14. Using permutations to detect dependence between time series

    NASA Astrophysics Data System (ADS)

    Cánovas, Jose S.; Guillamón, Antonio; Ruíz, María del Carmen

    2011-07-01

    In this paper, we propose an independence test between two time series which is based on permutations. The proposed test can be carried out by means of different common statistics such as Pearson’s chi-square or the likelihood ratio. We also point out why an exact test is necessary. Simulated and real data (return exchange rates between several currencies) reveal the capacity of this test to detect linear and nonlinear dependences.
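
    The ingredients can be sketched as follows: both series are symbolized by ordinal permutation patterns, and a chi-square statistic is computed on the joint pattern counts. Note that this sketch uses the asymptotic chi-square p-value, whereas the paper points out why an exact test is necessary for short series; the pattern length m = 3 is an assumption.

      import numpy as np
      from itertools import permutations
      from scipy.stats import chi2_contingency

      def ordinal_patterns(x, m=3):
          # map each length-m window to the permutation that sorts it
          pats = {p: i for i, p in enumerate(permutations(range(m)))}
          return np.array([pats[tuple(np.argsort(x[i:i + m]))]
                           for i in range(len(x) - m + 1)])

      rng = np.random.default_rng(1)
      x = rng.standard_normal(1000)
      y = 0.5 * x + rng.standard_normal(1000)       # dependent pair
      px, py = ordinal_patterns(x), ordinal_patterns(y)

      table = np.zeros((6, 6))
      for a, b in zip(px, py):
          table[a, b] += 1
      table = table[table.sum(1) > 0][:, table.sum(0) > 0]  # drop empty margins
      chi2, p, dof, _ = chi2_contingency(table)
      print(p)    # small p-value indicates dependence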

  15. Satellite image time series simulation for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-11-01

    The performance of environmental monitoring heavily depends on the availability of consecutive observation data, and there is an increasing demand in the remote sensing community for satellite image data of sufficient resolution in both space and time; these two requirements tend to conflict, making the tradeoff hard to tune. Multiple constellations could be a solution if cost were no concern, so it remains interesting but very challenging to develop a method that can simultaneously improve both spatial and temporal detail. There have been research efforts addressing the problem from various angles. One type of approach enhances the spatial resolution using techniques such as super resolution and pan-sharpening, which can produce good visual effects but mostly cannot preserve spectral signatures, and so lose analytical value. Another type fills temporal gaps by time interpolation, which does not actually add informative content. In this paper we present a novel method to generate satellite images with higher spatial and temporal detail, which further enables satellite image time series simulation. Our method starts with a pair of high-low resolution data sets, and a spatial registration is done by introducing an LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of low resolution time series data; the temporal change is then projected onto the high resolution data plane and assigned to each high resolution pixel, referring to the predefined temporal change patterns of each type of ground object, to generate simulated high resolution data. A preliminary experiment shows that our method can simulate high resolution data with good accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of low resolution image time series only, so that the use of costly high resolution data can be reduced as much as possible; this presents a cost-effective way to build an economically operational monitoring service for environment, agriculture, forest, land use investigation, and other applications.

  16. Risk of portfolio with simulated returns based on copula model

    NASA Astrophysics Data System (ADS)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2015-02-01

    The commonly used tool for measuring the risk of a portfolio with equally weighted stocks is the variance-covariance method. Under extreme circumstances, this method leads to significant underestimation of actual risk due to its assumption of multivariate normality for the joint distribution of stocks. The purpose of this research is to compare the actual risk of a portfolio with the simulated risk of a portfolio in which the joint distribution of two return series is predetermined. The data used are daily stock prices from the ASEAN market for the period January 2000 to December 2012. The copula approach is applied to capture the time-varying dependence among the return series. The results show that the chosen copula families are not suitable for representing the dependence structures of most of the bivariate returns; the exception is the Philippines-Thailand pair, for which the t copula appears to be the appropriate choice to depict the dependence. Assuming that the t copula is the joint distribution of each paired series, simulated returns are generated and value-at-risk (VaR) is then applied to evaluate the risk of each portfolio consisting of two simulated return series. The VaR estimates were found to be symmetrical due to the simulation of returns via the elliptical copula-GARCH approach. By comparison, it is found that the actual risks are underestimated for all pairs of portfolios except Philippines-Thailand. This study shows that disregarding the non-normal dependence structure of two series results in underestimation of the actual risk of the portfolio.

  17. A method for generating high resolution satellite image time series

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-10-01

    There is an increasing demand for satellite remote sensing data with both high spatial and high temporal resolution in many applications, but it remains a challenge to improve spatial resolution and temporal frequency simultaneously, given the technical limits of current satellite observation systems. Much R&D effort over the years has led to successes in roughly two directions. One includes super resolution, pan-sharpening and similar methods, which can effectively enhance spatial resolution and generate good visual effects but hardly preserve spectral signatures, resulting in limited analytical value. The other, time interpolation, is a straightforward way to increase temporal frequency, but it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution scenes. Our method starts with a pair of high and low resolution data sets, and a spatial registration is done by introducing an LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of low resolution time series data, projected onto the high resolution data plane, and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally, the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of time sequences of low resolution images only, so that the use of costly high resolution data can be reduced as much as possible; it presents a highly effective way to build an economically operational monitoring solution for agriculture, forest, land use investigation, environment and other applications.

  18. Extension of classical hydrological risk analysis to non-stationary conditions due to climate change - application to the Fulda catchment, Germany

    NASA Astrophysics Data System (ADS)

    Fink, G.; Koch, M.

    2010-12-01

    An important aspect of water resources and hydrological engineering is the assessment of hydrological risk due to the occurrence of extreme events, e.g. droughts or floods. When dealing with the latter, as is the focus here, the classical methods of flood frequency analysis (FFA) are usually used for the proper dimensioning of a hydraulic structure, for the purpose of bringing the flood risk down to an acceptable level. FFA is based on extreme value statistics. Despite the progress of methods in this scientific branch, the development, choice, and fitting of an appropriate distribution function still remains a challenge, particularly when certain underlying assumptions of the theory are not met in real applications. This is, for example, the case when the stationarity condition for a random flood time series is no longer satisfied, as could be the situation when long-term hydrological impacts of future climate change are to be considered. The objective here is to verify the applicability of classical (stationary) FFA to predicted flood time series in the Fulda catchment in central Germany, as they may occur in the wake of climate change during the 21st century. These discharge time series at the outlet of the Fulda basin have been simulated with a distributed hydrological model (SWAT) forced by predicted climate variables of a regional climate model for Germany (REMO). From the simulated future daily time series, annual maximum (extreme) values are computed and analyzed for the purpose of risk evaluation. Although the estimated extreme flood series of the Fulda river for the 21st century turn out to be only mildly non-stationary, alleviating the need for further action and concern at first sight, a more detailed analysis of the risk, as quantified, for example, by the return period, shows non-negligible differences in the calculated risk levels. This could be verified by employing a new method, the so-called flood series maximum analysis (FSMA) method, which consists of the stochastic simulation of numerous trajectories of a stochastic process with a given GEV distribution over a certain length of time (larger than the desired return period). The maximum value of each trajectory is then computed, and all maxima are used to determine the empirical distribution of this maximum series. Through graphical inversion of this distribution function, the size of the design flood for a given risk (quantile) and given life duration can be inferred. The results of numerous simulations show that for stationary flood series, the new FSMA method gives, as expected, nearly identical risk values to the classical FFA approach. However, once the flood time series becomes slightly non-stationary, for the reasons discussed, and regardless of whether the trend is increasing or decreasing, large differences in the computed risk values for a given design flood occur. In other words, for the same risk, the new FSMA method would lead to different design flood values for a hydraulic structure than the classical FFA method. This, in turn, could lead to cost savings in the realization of a hydraulic project.
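
    The FSMA idea itself is easy to sketch with SciPy's GEV distribution; the parameter values and design life below are purely illustrative:

      import numpy as np
      from scipy.stats import genextreme

      shape, loc, scale = -0.1, 1500.0, 400.0       # illustrative GEV parameters
      life, n_traj = 100, 10000                     # design life (years), paths

      rng = np.random.default_rng(11)
      annual_max = genextreme.rvs(shape, loc=loc, scale=scale,
                                  size=(n_traj, life), random_state=rng)
      traj_max = annual_max.max(axis=1)             # maximum over each design life

      # e.g. the flood not exceeded with 90% probability over the design life
      print(np.quantile(traj_max, 0.90))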

  19. A novel encoding Lempel-Ziv complexity algorithm for quantifying the irregularity of physiological time series.

    PubMed

    Zhang, Yatao; Wei, Shoushui; Liu, Hai; Zhao, Lina; Liu, Chengyu

    2016-09-01

    The Lempel-Ziv (LZ) complexity and its variants have been extensively used to analyze the irregularity of physiological time series. To date, these measures cannot explicitly discern between the irregularity and the chaotic characteristics of physiological time series. Our study compared the performance of an encoding LZ (ELZ) complexity algorithm, a novel variant of the LZ complexity algorithm, with those of the classic LZ (CLZ) and multistate LZ (MLZ) complexity algorithms. Simulation experiments on Gaussian noise, logistic chaotic, and periodic time series showed that only the ELZ algorithm declined monotonically with the reduction in irregularity in the time series, whereas the CLZ and MLZ approaches yielded overlapping values for chaotic time series and time series mixed with Gaussian noise, demonstrating the accuracy of the proposed ELZ algorithm in capturing the irregularity, rather than the complexity, of physiological time series. In addition, the ELZ algorithm was more stable with respect to sequence length than CLZ and MLZ, especially when the sequence length exceeded 300. A sensitivity analysis of all three LZ algorithms revealed that both the MLZ and the ELZ algorithms could respond to changes in time sequences, whereas the CLZ approach could not. Cardiac interbeat (RR) interval time series from the MIT-BIH database were also evaluated, and the results showed that the ELZ algorithm could accurately measure the inherent irregularity of the RR interval time series, as indicated by lower LZ values yielded from a congestive heart failure group versus those yielded from a normal sinus rhythm group (p < 0.01). Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
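
    For orientation, the classic (CLZ) complexity that the paper takes as a baseline can be computed with the standard Kaspar-Schuster phrase-counting procedure on a symbolized sequence; the ELZ encoding itself is not reproduced here:

      import numpy as np

      def lz76(s):
          # classic Lempel-Ziv (LZ76) phrase count, Kaspar-Schuster formulation
          n = len(s)
          if n < 2:
              return n
          i, k, l = 0, 1, 1
          c, k_max = 1, 1
          while True:
              if s[i + k - 1] == s[l + k - 1]:
                  k += 1
                  if l + k > n:
                      c += 1
                      break
              else:
                  k_max = max(k, k_max)
                  i += 1
                  if i == l:        # no earlier reproduction found: new phrase
                      c += 1
                      l += k_max
                      if l + 1 > n:
                          break
                      i, k, k_max = 0, 1, 1
                  else:
                      k = 1
          return c

      rng = np.random.default_rng(2)
      noise = ''.join(rng.choice(list('01'), size=1000))
      periodic = '01' * 500
      print(lz76(noise), lz76(periodic))    # irregular sequence scores far higher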

  20. Dimension reduction of frequency-based direct Granger causality measures on short time series.

    PubMed

    Siggiridou, Elsa; Kimiskidis, Vasilios K; Kugiumtzis, Dimitris

    2017-09-01

    The mainstream in the estimation of effective brain connectivity relies on Granger causality measures in the frequency domain. If the measure is meant to capture direct causal effects, accounting for the presence of other observed variables, as in multi-channel electroencephalograms (EEG), typically the fit of a vector autoregressive (VAR) model on the multivariate time series is required. For short time series of many variables, the estimation of VAR may not be stable, requiring dimension reduction resulting in restricted or sparse VAR models. The restricted VAR obtained by the modified backward-in-time selection method (mBTS) is adapted to the generalized partial directed coherence (GPDC), termed restricted GPDC (RGPDC). Dimension reduction on other frequency-based measures, such as the direct directed transfer function (dDTF), is straightforward. First, a simulation study using linear stochastic multivariate systems is conducted and RGPDC is favorably compared to GPDC on short time series in terms of sensitivity and specificity. Then the two measures are tested for their ability to detect changes in brain connectivity during an epileptiform discharge (ED) from multi-channel scalp EEG. It is shown that RGPDC identifies the connectivity structure of the simulated systems, as well as changes in brain connectivity, better than GPDC, and is less dependent on the free parameter of VAR order. The proposed dimension reduction in frequency measures based on VAR constitutes an appropriate strategy for estimating brain networks reliably within short time windows. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.

  2. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
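
    A minimal sketch of how such benchmark series can be built, a clean seasonal series plus a Poisson number of break points with normally distributed step sizes (magnitudes are illustrative; outliers, gaps and local trends are omitted):

      import numpy as np

      rng = np.random.default_rng(8)
      n_months = 100 * 12                            # a century of monthly values
      t = np.arange(n_months)
      clean = 10.0 + 5.0 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(n_months)

      n_breaks = max(1, rng.poisson(5))              # Poisson number of breaks
      points = np.sort(rng.choice(n_months, size=n_breaks, replace=False))
      sizes = 0.8 * rng.standard_normal(n_breaks)    # normal breakpoint sizes

      inhom = clean.copy()
      for p, s in zip(points, sizes):
          inhom[p:] += s                             # step change from break onward
      print(points, np.round(sizes, 2))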

  3. Evapotranspiration Calculator Desktop Tool

    EPA Pesticide Factsheets

    The Evapotranspiration Calculator estimates evapotranspiration time series data for hydrological and water quality models for the Hydrologic Simulation Program - Fortran (HSPF) and the Stormwater Management Model (SWMM).

  4. Multiscale Analysis of Time Irreversibility Based on Phase-Space Reconstruction and Horizontal Visibility Graph Approach

    NASA Astrophysics Data System (ADS)

    Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan

    Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed, and this approach is generally effective for measuring the time irreversibility of time series. However, its results may be unreliable when dealing with high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve this visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate of the irreversibility of time series and is more effective at distinguishing irreversible from reversible stochastic processes. We also use this approach to extract multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing times of financial crisis from plateau periods. In addition, Asian stock indexes separate clearly from the other indexes at higher time scales. Simulations and real data support the effectiveness of the improved approach in detecting time irreversibility.
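
    The basic (unimproved) measure, a Kullback-Leibler divergence between the out- and in-degree distributions of the directed horizontal visibility graph, can be sketched as below; the phase-space reconstruction refinement proposed in the paper is not included, and the toy series are assumptions:

      import numpy as np

      def hvg_degrees(x):
          # directed HVG: i "sees" j > i iff every value strictly between
          # them is lower than both x[i] and x[j]
          n = len(x)
          k_out = np.zeros(n, dtype=int)
          k_in = np.zeros(n, dtype=int)
          for i in range(n):
              m = -np.inf                  # running max between i and j
              for j in range(i + 1, n):
                  if m < min(x[i], x[j]):
                      k_out[i] += 1
                      k_in[j] += 1
                  m = max(m, x[j])
                  if m >= x[i]:            # i is blocked from here on
                      break
          return k_out, k_in

      def degree_kld(k_out, k_in):
          # crude irreversibility index: KL divergence between the two
          # degree distributions (near zero for reversible series)
          kmax = max(k_out.max(), k_in.max()) + 1
          p = np.bincount(k_out, minlength=kmax) / len(k_out)
          q = np.bincount(k_in, minlength=kmax) / len(k_in)
          mask = (p > 0) & (q > 0)         # crude handling of empty bins
          return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

      rng = np.random.default_rng(4)
      reversible = rng.standard_normal(3000)                   # white noise
      saw = np.tile(np.r_[np.linspace(0.0, 1.0, 9), 0.0], 300)
      irreversible = saw + 0.05 * rng.standard_normal(3000)    # slow rise, fast drop

      print(degree_kld(*hvg_degrees(reversible)))      # near 0
      print(degree_kld(*hvg_degrees(irreversible)))    # clearly larger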

  5. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of the mean, variance, and ACFs of the continuous and discrete components, respectively. To achieve full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to real-world rainfall time series is shown as a proof of concept.

  6. Rainfall disaggregation for urban hydrology: Effects of spatial consistence

    NASA Astrophysics Data System (ADS)

    Müller, Hannes; Haberlandt, Uwe

    2015-04-01

    For urban hydrology, rainfall time series with a high temporal resolution are crucial. Observed time series of this kind are in most cases too short to be used, whereas time series with lower temporal resolution (daily measurements) exist for much longer periods. The objective is to derive time series with a long duration and a high resolution by disaggregating the time series of the non-recording stations with information from the time series of the recording stations. The multiplicative random cascade model is a well-known disaggregation model for daily time series. For urban hydrology it is often assumed that a day consists of only 1280 minutes in total, as a starting point for the disaggregation process. We introduce a new variant of the cascade model, which works without this assumption and also outperforms the existing approach regarding time series characteristics such as wet and dry spell duration, average intensity, fraction of dry intervals and extreme value representation. However, in both approaches rainfall time series of different stations are disaggregated without consideration of the surrounding stations, which results in unrealistic spatial patterns of rainfall. We apply a simulated annealing algorithm that has previously been used successfully for hourly values. Relative diurnal cycles of the disaggregated time series are resampled to reproduce the spatial dependence of rainfall. To describe spatial dependence we use bivariate characteristics such as probability of occurrence, continuity ratio and coefficient of correlation. The investigation area is a sewage system in northern Germany. We show that the algorithm has the capability to improve spatial dependence. The influence of the chosen disaggregation routine and the spatial dependence on overflow occurrences and volumes of the sewage system will be analyzed.
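
    The core cascade step is simple to sketch: each interval's depth is split between two halves with a random weight, recursively, so that eight levels take a 1280-minute day down to 5-minute steps while conserving mass exactly. The branching probabilities below are illustrative placeholders, not calibrated values:

      import numpy as np

      def cascade_disaggregate(daily, levels=8, rng=None):
          # multiplicative random cascade: split each value into two halves
          # with weights (w, 1 - w); 8 levels: 1280 min -> 5 min
          if rng is None:
              rng = np.random.default_rng()
          series = np.asarray(daily, dtype=float)
          for _ in range(levels):
              w = rng.uniform(0.0, 1.0, size=len(series))
              p = rng.uniform(size=len(series))
              # sometimes give all mass to one half, creating dry intervals
              w = np.where(p < 0.2, 0.0, np.where(p > 0.8, 1.0, w))
              series = np.column_stack([series * w, series * (1.0 - w)]).ravel()
          return series

      daily = np.array([12.0, 0.0, 3.5])            # mm per 1280-minute "day"
      fine = cascade_disaggregate(daily, rng=np.random.default_rng(9))
      print(fine.size, fine.sum())                  # 3 * 256 values; mass preserved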

  7. Detection of "noisy" chaos in a time series

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Kanters, J. K.; Cohen, R. J.; Holstein-Rathlou, N. H.

    1997-01-01

    Time series from biological systems often display fluctuations in the measured variables. Much effort has been directed at determining whether this variability reflects deterministic chaos, or whether it is merely "noise". The output from most biological systems is probably the result of both the internal dynamics of the system and the input to the system from the surroundings. This implies that the system should be viewed as a mixed system with both stochastic and deterministic components. We present a method that appears to be useful in deciding whether determinism is present in a time series, and whether this determinism has chaotic attributes. The method relies on fitting a nonlinear autoregressive model to the time series, followed by an estimation of the characteristic exponents of the model over the observed probability distribution of states for the system. The method is tested by computer simulations, and applied to heart rate variability data.

  8. Parameter motivated mutual correlation analysis: Application to the study of currency exchange rates based on intermittency parameter and Hurst exponent

    NASA Astrophysics Data System (ADS)

    Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.

    2012-04-01

    We present a novel method for the parameter-oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding-window technique, defines a new type of correlation measure, and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that characterizes the time series is computed for each window, and a cross-correlation analysis is carried out on the sets of values obtained for the time series under investigation. We apply this method to the study of some daily currency exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
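
    A small sketch of the sliding-window idea under stated assumptions: the window-wise parameter here is simply the standard deviation as a stand-in; a Hurst-exponent or intermittency estimator would plug into `param` the same way.

    ```python
    import numpy as np

    def window_parameter(x, win, step, param=np.std):
        """Evaluate a characteristic parameter over sliding windows."""
        return np.array([param(x[s:s + win])
                         for s in range(0, len(x) - win + 1, step)])

    def mutual_correlation(x, y, win=250, step=25, param=np.std):
        """Parameter-motivated mutual correlation: correlate the window-wise
        parameter series of two independent time series."""
        px = window_parameter(x, win, step, param)
        py = window_parameter(y, win, step, param)
        return np.corrcoef(px, py)[0, 1]

    rng = np.random.default_rng(1)
    a = np.diff(rng.standard_normal(5000).cumsum())   # stand-ins for two
    b = np.diff(rng.standard_normal(5000).cumsum())   # exchange-rate returns
    print(mutual_correlation(a, b))  # near zero for unrelated series
    ```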

  9. Detecting dynamical changes in time series by using the Jensen Shannon divergence

    NASA Astrophysics Data System (ADS)

    Mateos, D. M.; Riveaud, L. E.; Lamberti, P. W.

    2017-08-01

    Most time series in nature are a mixture of signals with deterministic and random dynamics, so the distinction between these two characteristics becomes important. Distinguishing between chaotic and aleatory signals is difficult because they share a wide-band power spectrum, a delta-like autocorrelation function, and other features as well. In general, signals are presented as continuous records and must be discretized before being analyzed. In this work, we introduce different schemes for discretizing and for detecting dynamical changes in time series. One of the main motivations is to detect transitions between the chaotic and random regimes. The tools used here originate from information theory. The proposed schemes are applied to simulated and real-life signals, showing in all cases a high proficiency for detecting changes in the dynamics of the associated time series.
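
    As an illustration of one such information-theoretic tool (assuming equal-width binning as the discretization scheme, which is only one of several options), the Jensen-Shannon divergence between the symbol distributions of a chaotic segment and a stochastic segment can be computed directly:

    ```python
    import numpy as np

    def symbolize(x, n_bins=8):
        """Discretize a continuous record into equal-width bins (one scheme)."""
        edges = np.linspace(x.min(), x.max(), n_bins + 1)
        return np.digitize(x, edges[1:-1])

    def jsd(p, q, eps=1e-12):
        """Jensen-Shannon divergence between two histograms, in bits."""
        p, q = p / p.sum(), q / q.sum()
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Two regimes: logistic-map chaos versus uniform noise
    rng = np.random.default_rng(0)
    x = np.empty(4000)
    x[0] = 0.3
    for t in range(x.size - 1):
        x[t + 1] = 3.99 * x[t] * (1.0 - x[t])     # chaotic segment
    y = rng.uniform(0, 1, x.size)                 # stochastic segment

    n_bins = 8
    hx = np.bincount(symbolize(x, n_bins), minlength=n_bins).astype(float)
    hy = np.bincount(symbolize(y, n_bins), minlength=n_bins).astype(float)
    print(f"JSD between regimes: {jsd(hx, hy):.3f} bits")
    ```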

  10. Monitoring Farmland Loss Caused by Urbanization in Beijing from Modis Time Series Using Hierarchical Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.

    2018-04-01

    In this study, we proposed a method to map urban encroachment onto farmland using satellite image time series (SITS) based on the hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels, i.e., the land cover level, the vegetation phenology level, and the SITS level. Then a three-level HHMM is constructed to model the multi-level semantic structure of farmland change process. Once the HHMM is established, a change from farmland to built-up could be detected by inferring the underlying state sequence that is most likely to generate the input time series. The performance of the method is evaluated on MODIS time series in Beijing. Results on both simulated and real datasets demonstrate that our method improves the change detection accuracy compared with the HMM-based method.
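
    The inference step described above, finding the state sequence most likely to generate the input series, is classically done with Viterbi-style dynamic programming. Below is a generic sketch on a toy two-state (farmland/built-up) chain with invented probabilities, not the paper's three-level HHMM:

    ```python
    import numpy as np

    def viterbi(obs, log_A, log_B, log_pi):
        """Most likely hidden-state path for observation indices `obs`.
        log_A: (S, S) transition log-probs; log_B: (S, V) emission log-probs."""
        S, T = log_A.shape[0], len(obs)
        delta = np.full((T, S), -np.inf)
        back = np.zeros((T, S), dtype=int)
        delta[0] = log_pi + log_B[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_A       # (from-state, to-state)
            back[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
        path = np.empty(T, dtype=int)
        path[-1] = delta[-1].argmax()
        for t in range(T - 2, -1, -1):
            path[t] = back[t + 1, path[t + 1]]
        return path

    # Toy states: 0 = farmland, 1 = built-up; observations: 0 = strong crop
    # phenology signal, 1 = weak/flat signal (all probabilities invented)
    A = np.log([[0.95, 0.05],      # farmland occasionally converts
                [0.001, 0.999]])   # built-up is effectively absorbing
    B = np.log([[0.8, 0.2],
                [0.1, 0.9]])
    pi = np.log([0.9, 0.1])
    obs = np.array([0, 0, 0, 1, 1, 1, 1])
    print(viterbi(obs, A, B, pi))  # state flip marks the change point
    ```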

  11. Multiple GISS AGCM Hindcasts and MSU Versions of 1979-1998

    NASA Technical Reports Server (NTRS)

    Shah, Kathryn Pierce; Rind, David; Druyan, Leonard; Lonergan, Patrick; Chandler, Mark

    1998-01-01

    Multiple realizations of the 1979-1998 time period have been simulated by the Goddard Institute for Space Studies Atmospheric General Circulation Model (GISS AGCM) to explore its responsiveness to accumulated forcings, particularly over sensitive agricultural regions. A microwave radiative transfer postprocessor has produced the AGCM's lower tropospheric, tropospheric and lower stratospheric brightness temperature (Tb) time series for correlations with the various Microwave Sounding Unit (MSU) time series available. MSU maps of monthly means and anomalies were also used to assess the AGCM's mean annual cycle and regional variability. Seven realizations by the AGCM were forced by observed sea surface temperatures (sst) through 1992 to gather rough standard deviations associated with internal model variability. Subsequent runs hindcast January 1979 through April 1998 with an accumulation of forcings: observed ssts, greenhouse gases, stratospheric volcanic aerosols, stratospheric and tropospheric ozone, and tropospheric sulfate and black carbon aerosols. The goal of narrowing gaps between AGCM and MSU time series was complicated by differences among MSU time series versions, by Tb simulation concerns and by unforced climatic variability in the AGCM and in the real world. Lower stratospheric Tb correlations between the AGCM and MSU for 1979-1998 reached as high as 0.91 +/- 0.16 globally with sst, greenhouse gas, volcanic aerosol, stratospheric ozone and tropospheric aerosol forcings. Mid-tropospheric Tb correlations reached as high as 0.66 +/- 0.04 globally and 0.84 +/- 0.02 in the tropics. Oceanic lower tropospheric Tb correlations similarly reached 0.61 +/- 0.06 globally and 0.79 +/- 0.02 in the tropics. Of the sensitive agricultural areas considered, Nordeste in northeastern Brazil was simulated best, with mid-tropospheric Tb correlations up to 0.75 +/- 0.03. The two other agricultural regions, in Africa and in the northern mid-latitudes, suffered from higher levels of non-sst variability. Zimbabwe had a maximum mid-tropospheric correlation of 0.54 +/- 0.11 while the U.S. Cornbelt had only 0.25 +/- 0.10. Precipitation and surface temperature performance are also examined over these regions. Correlations of MSU and AGCM time series mostly improved with the addition of explicit atmospheric forcings in zonal bands, but not in agricultural regional bins each encompassing only six AGCM gridcells.

  12. Phase walk analysis of leptokurtic time series.

    PubMed

    Schreiber, Korbinian; Modest, Heike I; Räth, Christoph

    2018-06-01

    The Fourier phase information plays a key role in the quantitative description of nonlinear data. We present a novel tool for time series analysis that identifies nonlinearities by sensitively detecting correlations among the Fourier phases. The method, called phase walk analysis, is based on well-established measures from random walk analysis, which are here applied to the unwrapped Fourier phases of time series. We provide an analytical description of its functionality and demonstrate its capabilities on systematically controlled leptokurtic noise. We thereby investigate the properties of leptokurtic time series and their influence on the Fourier phases. The phase walk analysis is applied to measured and simulated intermittent time series whose probability density distributions are approximated by power laws: the day-to-day returns of the Dow Jones industrial average, a synthetic time series with tailored nonlinearities mimicking the power-law behavior of the Dow Jones, and the acceleration of the wind at an Atlantic offshore site. Testing for nonlinearities by means of surrogates shows that the new method yields strong significances for nonlinear behavior. Owing to the drastically decreased computing time compared with embedding-space methods, the number of surrogate realizations can be increased by orders of magnitude. Thereby, the probability distribution of the test statistic can be derived and parameterized very accurately, which allows for much more precise tests of nonlinearities.
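
    A rough sketch of the ingredients, under stated assumptions: a walk-type statistic on the unwrapped Fourier phases, tested against amplitude-preserving phase-randomized surrogates. The excursion statistic and the test signal are illustrative choices; whether they match the sensitivity of the published method is untested.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def phase_walk_stat(x):
        """Walk statistic on the unwrapped Fourier phases: maximum excursion of
        the drift-removed phase sequence, in the spirit of random-walk analysis."""
        phases = np.unwrap(np.angle(np.fft.rfft(x))[1:-1])   # drop DC and Nyquist
        t = np.arange(phases.size)
        drift = np.polyval(np.polyfit(t, phases, 1), t)      # remove linear drift
        return np.max(np.abs(phases - drift))

    def surrogate_pvalue(x, n_sur=200):
        """Rank the data statistic within amplitude-preserving, phase-randomized
        surrogates; a small p-value indicates phase correlations (nonlinearity)."""
        stat = phase_walk_stat(x)
        amp = np.abs(np.fft.rfft(x))
        exceed = 0
        for _ in range(n_sur):
            ph = rng.uniform(0, 2 * np.pi, amp.size)
            ph[0] = 0.0                                      # keep DC real
            sur = np.fft.irfft(amp * np.exp(1j * ph), n=len(x))
            exceed += phase_walk_stat(sur) >= stat
        return (exceed + 1) / (n_sur + 1)

    # Leptokurtic test signal: sign-preserving square of an AR(1) process
    z = rng.standard_normal(4096)
    for t in range(1, z.size):
        z[t] += 0.8 * z[t - 1]
    x = np.sign(z) * z**2
    print(f"surrogate p-value: {surrogate_pvalue(x):.3f}")
    ```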

  13. Short-term versus long-term rainfall time series in the assessment of potable water savings by using rainwater in houses.

    PubMed

    Ghisi, Enedir; Cardoso, Karla Albino; Rupp, Ricardo Forgiarini

    2012-06-15

    The main objective of this article is to assess the possibility of using short-term instead of long-term rainfall time series to evaluate the potential for potable water savings by using rainwater in houses. The analysis was performed considering rainfall data from 1960 to 1995 for the city of Santa Bárbara do Oeste, located in the state of São Paulo, southeastern Brazil. The influence of the rainfall time series, roof area, potable water demand and percentage rainwater demand on the potential for potable water savings was evaluated. The potential for potable water savings was estimated using computer simulations considering a set of long-term rainfall time series and different sets of short-term rainfall time series. The ideal rainwater tank capacity was also assessed for some cases. It was observed that the higher the percentage rainwater demand and the shorter the rainfall time series, the larger the difference in the estimated potential for potable water savings and the greater the variation in the ideal rainwater tank size. The sets of short-term rainfall time series considered adequate for different scenarios ranged from 1 to 13 years depending on the roof area, percentage rainwater demand and potable water demand. The main finding of the research is that sets of short-term rainfall time series can be used to assess the potential for potable water savings by using rainwater, as the results obtained are similar to those obtained from the long-term rainfall time series.

  14. Time-Domain Simulation of Along-Track Interferometric SAR for Moving Ocean Surfaces.

    PubMed

    Yoshida, Takero; Rheem, Chang-Kyu

    2015-06-10

    A time-domain simulation of along-track interferometric synthetic aperture radar (AT-InSAR) has been developed to support ocean observations. The simulation is in the time domain and based on Bragg scattering to be applicable for moving ocean surfaces. The time-domain simulation is suitable for examining velocities of moving objects. The simulation obtains the time series of microwave backscattering as raw signals for movements of ocean surfaces. In terms of realizing Bragg scattering, the computational grid elements for generating the numerical ocean surface are set to be smaller than the wavelength of the Bragg resonant wave. In this paper, the simulation was conducted for a Bragg resonant wave and irregular waves with currents. As a result, the phases of the received signals from two antennas differ due to the movement of the numerical ocean surfaces. The phase differences shifted by currents were in good agreement with the theoretical values. Therefore, the adaptability of the simulation to observe velocities of ocean surfaces with AT-InSAR was confirmed.

  15. Time-Domain Simulation of Along-Track Interferometric SAR for Moving Ocean Surfaces

    PubMed Central

    Yoshida, Takero; Rheem, Chang-Kyu

    2015-01-01

    A time-domain simulation of along-track interferometric synthetic aperture radar (AT-InSAR) has been developed to support ocean observations. The simulation is in the time domain and based on Bragg scattering to be applicable for moving ocean surfaces. The time-domain simulation is suitable for examining velocities of moving objects. The simulation obtains the time series of microwave backscattering as raw signals for movements of ocean surfaces. In terms of realizing Bragg scattering, the computational grid elements for generating the numerical ocean surface are set to be smaller than the wavelength of the Bragg resonant wave. In this paper, the simulation was conducted for a Bragg resonant wave and irregular waves with currents. As a result, the phases of the received signals from two antennas differ due to the movement of the numerical ocean surfaces. The phase differences shifted by currents were in good agreement with the theoretical values. Therefore, the adaptability of the simulation to observe velocities of ocean surfaces with AT-InSAR was confirmed. PMID:26067197

  16. An evaluation of Dynamic TOPMODEL for low flow simulation

    NASA Astrophysics Data System (ADS)

    Coxon, G.; Freer, J. E.; Quinn, N.; Woods, R. A.; Wagener, T.; Howden, N. J. K.

    2015-12-01

    Hydrological models are essential tools for drought risk management, often providing input to water resource system models, aiding our understanding of low-flow processes within catchments and providing low-flow predictions. However, simulating low flows and droughts is challenging, as hydrological systems often demonstrate threshold effects in connectivity, non-linear groundwater contributions and a greater influence of water resource system elements during low-flow periods. These dynamic processes are typically not well represented in commonly used hydrological models due to data and model limitations. Furthermore, calibrated or behavioural models may not be effectively evaluated during more extreme drought periods. A better understanding of the processes that occur during low flows, and of how these are represented within models, is thus required if we want to provide robust and reliable predictions of future drought events. In this study, we assess the performance of Dynamic TOPMODEL for low-flow simulation. Dynamic TOPMODEL was applied to a number of UK catchments in the Thames region using time series of observed rainfall and potential evapotranspiration data that captured multiple historic droughts over a period of several years. The model performance was assessed against the observed discharge time series using a limits-of-acceptability framework, which included uncertainty in the discharge time series. We evaluate the models against multiple signatures of catchment low-flow behaviour and investigate differences in model performance across catchments, model diagnostics and low-flow periods. We also consider the impact of surface water and groundwater abstractions and discharges on the observed discharge time series and how this affects the model evaluation. From this analysis of model performance, we suggest future improvements to Dynamic TOPMODEL to improve the representation of low-flow processes within the model structure.

  17. Understanding the source of multifractality in financial markets

    NASA Astrophysics Data System (ADS)

    Barunik, Jozef; Aste, Tomaso; Di Matteo, T.; Liu, Ruipeng

    2012-09-01

    In this paper, we use the generalized Hurst exponent approach to study the multi-scaling behavior of different financial time series. We show that this approach is robust and powerful in detecting different types of multi-scaling. We observe a puzzling phenomenon in which an apparent increase in multifractality is measured in time series generated from shuffled returns, where all time-correlations are destroyed while the return distributions are conserved. This effect is robust and is reproduced in several real financial datasets including stock market indices, exchange rates and interest rates. In order to understand the origin of this effect we investigate different simulated time series by means of the Markov switching multifractal model, autoregressive fractionally integrated moving average processes with stable innovations, fractional Brownian motion and Lévy flights. Overall we conclude that the multifractality observed in financial time series is mainly a consequence of the characteristic fat-tailed distribution of the returns, and that time-correlations have the effect of decreasing the measured multifractality.
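
    A compact sketch of the generalized Hurst exponent approach, estimating H(q) from the scaling of the q-th order moments of absolute increments; the tau range and the test signal are arbitrary choices.

    ```python
    import numpy as np

    def generalized_hurst(x, q=2, taus=range(1, 20)):
        """H(q) from the scaling of q-th order moments of absolute increments:
        E|x(t + tau) - x(t)|**q ~ tau**(q * H(q))."""
        taus = np.asarray(list(taus))
        moments = np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q)
                            for tau in taus])
        slope = np.polyfit(np.log(taus), np.log(moments), 1)[0]
        return slope / q

    rng = np.random.default_rng(3)
    bm = np.cumsum(rng.standard_normal(100_000))   # Brownian motion: H = 0.5
    for q in (1, 2, 3):
        print(q, round(generalized_hurst(bm, q), 3))
    # A multifractal series shows H(q) varying with q; for Brownian motion the
    # estimates stay near 0.5 at all q.
    ```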

  18. Time reversibility from visibility graphs of nonstationary processes

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Flanagan, Ryan

    2015-08-01

    Visibility algorithms are a family of methods to map time series into networks, with the aim of describing the structure of time series and their underlying dynamical properties in graph-theoretical terms. Here we explore some properties of both natural and horizontal visibility graphs associated with several nonstationary processes, and we pay particular attention to their capacity to assess time irreversibility. Nonstationary signals are (infinitely) irreversible by definition (independently of whether the process is Markovian or producing entropy at a positive rate), and thus the link between entropy production and time series irreversibility has only been explored in nonequilibrium stationary states. Here we show that the visibility formalism naturally induces a new working definition of time irreversibility, which allows us to quantify several degrees of irreversibility for stationary and nonstationary series, yielding finite values that can be used to efficiently assess the presence of memory and off-equilibrium dynamics in nonstationary processes without the need to differentiate or detrend them. We provide rigorous results complemented by extensive numerical simulations on several classes of stochastic processes.
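
    For reference, the horizontal visibility criterion can be implemented in a few lines; this is a naive sketch with an early exit, and the closing comment relies on the known result that the HVG mean degree for i.i.d. series approaches 4.

    ```python
    import numpy as np

    def hvg_edges(x):
        """Horizontal visibility graph: nodes i < j are linked iff every sample
        strictly between them lies below min(x[i], x[j])."""
        n, edges = len(x), []
        for i in range(n - 1):
            edges.append((i, i + 1))          # consecutive points always connect
            top = x[i + 1]                    # running max of intermediate values
            for j in range(i + 2, n):
                if min(x[i], x[j]) > top:
                    edges.append((i, j))
                top = max(top, x[j])
                if top >= x[i]:               # nothing further can see node i
                    break
        return edges

    rng = np.random.default_rng(5)
    series = rng.random(2000)
    deg = np.zeros(len(series), dtype=int)
    for a, b in hvg_edges(series):
        deg[a] += 1
        deg[b] += 1
    print(deg.mean())  # for i.i.d. series the HVG mean degree approaches 4
    ```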

  19. Testing the structure of earthquake networks from multivariate time series of successive main shocks in Greece

    NASA Astrophysics Data System (ADS)

    Chorozoglou, D.; Kugiumtzis, D.; Papadimitriou, E.

    2018-06-01

    Seismic hazard assessment in the area of Greece is attempted by studying the earthquake network structure, e.g., whether it is small-world or random. In this network, a node represents a seismic zone in the study area, and a connection between two nodes is given by the correlation of the seismic activity of the two zones. To investigate the network structure, and particularly the small-world property, the earthquake correlation network is compared with randomized ones. Simulations on multivariate time series of different lengths and numbers of variables show that, for the construction of randomized networks, the method that randomizes the time series performs better than methods that directly randomize the original network connections. Based on the appropriate randomization method, the network approach is applied to time series of earthquakes that occurred between main shocks in the territory of Greece spanning the period 1999-2015. The characterization of networks on sliding time windows revealed that a small-world structure emerges in the last time interval, shortly before the main shock.

  20. 21 CFR 1020.31 - Radiographic equipment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... time, but means may be provided to permit completion of any single exposure of the series in process.... Radiation therapy simulation systems are exempt from this requirement. (iii) The edge of the light field at..., except when the spot-film device is provided for use with a radiation therapy simulation system: (1...

  1. Impact of number of realizations on the suitability of simulated weather data for hydrologic and environmental applications

    USDA-ARS?s Scientific Manuscript database

    Stochastic weather generators are widely used in hydrological, environmental, and agricultural applications to simulate and forecast weather time series. However, such stochastic processes usually produce random outputs hence the question on how representative the generated data are if obtained fro...

  2. Wavelet analysis of near-resonant series RLC circuit with time-dependent forcing frequency

    NASA Astrophysics Data System (ADS)

    Caccamo, M. T.; Cannuli, A.; Magazù, S.

    2018-07-01

    In this work, we report the results of an analysis of the response of a near-resonant series resistance‑inductance‑capacitance (RLC) electric circuit with time-dependent forcing frequency by means of a wavelet cross-correlation approach. In particular, it is shown how the wavelet approach enables frequency and time analysis of the circuit response to be carried out simultaneously, a procedure not possible with the Fourier transform, since the frequency is not stationary in time. A series RLC circuit simulation is performed using the Simulation Program with Integrated Circuit Emphasis (SPICE), in which an oscillatory sinusoidal voltage drive signal of constant amplitude is swept through the resonant condition by progressively increasing the frequency over a 20-second time window, linearly, from 0.32 Hz to 6.69 Hz. It is shown that the wavelet cross-correlation procedure quantifies the common power between the input signal (represented by the electromotive force) and the output signal, which in the present case is a current, highlighting not only which frequencies are present but also when they occur, i.e. providing a simultaneous time-frequency analysis. The work is directed toward graduate Physics, Engineering and Mathematics students, with the main intention of introducing wavelet analysis into their data analysis toolkit.
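
    A hedged sketch of a wavelet cross-correlation between a swept drive and a response, assuming the PyWavelets package (pywt) for the continuous wavelet transform; the "current" below is a synthetic stand-in that peaks near resonance, not a SPICE output.

    ```python
    import numpy as np
    import pywt  # assumes the PyWavelets package is installed

    fs, T = 200.0, 20.0
    t = np.arange(0, T, 1 / fs)
    f0, f1 = 0.32, 6.69
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T))  # linear sweep
    emf = np.sin(phase)                                # input: swept EMF drive
    current = np.sin(phase - np.pi / 4) * np.exp(-((t - 10.0) / 6.0) ** 2)
    # `current` is a hypothetical output that peaks near resonance

    scales = np.geomspace(20, 500, 64)                 # covers roughly 0.3-8 Hz
    w_in, freqs = pywt.cwt(emf, scales, "morl", sampling_period=1 / fs)
    w_out, _ = pywt.cwt(current, scales, "morl", sampling_period=1 / fs)
    xwt = w_in * np.conj(w_out)                        # cross-wavelet spectrum
    dom = freqs[np.abs(xwt).sum(axis=1).argmax()]
    print(f"dominant common frequency ~ {dom:.2f} Hz")
    ```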

  3. [Wave-type time series variation of the correlation between NDVI and climatic factors].

    PubMed

    Bi, Xiaoli; Wang, Hui; Ge, Jianping

    2005-02-01

    Based on the 1992-1996 data of 1-km monthly NDVI and of the monthly precipitation and mean temperature collected by 400 standard meteorological stations in China, this paper analyzed the temporal and spatial dynamics of the correlation between NDVI and climatic factors in different climate districts of the country. The results showed a significant correlation between monthly precipitation and NDVI. The wave-type time series model could simulate the temporal dynamics of the correlation between NDVI and climatic factors well, and the simulated results for the correlation between NDVI and precipitation were better than those for the correlation between NDVI and temperature; the correlation coefficients (R²) were 0.91 and 0.86, respectively, for the whole country.

  4. Time Series Analysis for Spatial Node Selection in Environment Monitoring Sensor Networks

    PubMed Central

    Bhandari, Siddhartha; Jurdak, Raja; Kusy, Branislav

    2017-01-01

    Wireless sensor networks are widely used in environmental monitoring. The number of sensor nodes to be deployed will vary depending on the desired spatio-temporal resolution. Selecting an optimal number, position and sampling rate for an array of sensor nodes in environmental monitoring is a challenging question. Most of the current solutions are either theoretical or simulation-based, where the problems are tackled using random field theory, computational geometry or computer simulations, limiting their specificity to a given sensor deployment. Using an empirical dataset from a mine rehabilitation monitoring sensor network, this work proposes a data-driven approach in which co-integrated time series analysis is used to select the number of sensors from a short-term deployment of a larger set of potential node positions. Analyses conducted on temperature time series show that 75% of sensors are co-integrated. Using only 25% of the original nodes, a complete dataset can be generated within a 0.5 °C average error bound. Our data-driven approach to sensor position selection is applicable for spatiotemporal monitoring of spatially correlated environmental parameters to minimize deployment cost without compromising data resolution. PMID:29271880
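
    A tiny illustration of the co-integration criterion under stated assumptions (statsmodels available; synthetic random-walk "temperatures" standing in for sensor data): a node whose series is cointegrated with a kept node is redundant, since its readings can be reconstructed from the cointegrating relation.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import coint  # assumes statsmodels installed

    rng = np.random.default_rng(11)
    common = rng.standard_normal(2000).cumsum()        # shared temperature driver
    node_a = common + 0.3 * rng.standard_normal(2000)
    node_b = 0.8 * common + 5.0 + 0.3 * rng.standard_normal(2000)

    tstat, pvalue, _ = coint(node_a, node_b)
    print(f"cointegration p-value: {pvalue:.4f}")
    # A small p-value marks node_b as redundant: its series can be reconstructed
    # from node_a through the cointegrating linear relation, so the node can be
    # dropped from the long-term deployment.
    ```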

  5. Fitting Flux Ropes to a Global MHD Solution: A Comparison of Techniques. Appendix 1

    NASA Technical Reports Server (NTRS)

    Riley, Pete; Linker, J. A.; Lionello, R.; Mikic, Z.; Odstrcil, D.; Hidalgo, M. A.; Cid, C.; Hu, Q.; Lepping, R. P.; Lynch, B. J.

    2004-01-01

    Flux rope fitting (FRF) techniques are an invaluable tool for extracting information about the properties of a subclass of CMEs in the solar wind. However, it has proven difficult to assess their accuracy since the underlying global structure of the CME cannot be independently determined from the data. In contrast, large-scale MHD simulations of CME evolution can provide both a global view as well as localized time series at specific points in space. In this study we apply 5 different fitting techniques to 2 hypothetical time series derived from MHD simulation results. Independent teams performed the analysis of the events in "blind tests", for which no information, other than the time series, was provided. From the results, we infer the following: (1) Accuracy decreases markedly with increasingly glancing encounters; (2) Correct identification of the boundaries of the flux rope can be a significant limiter; and (3) Results from techniques that infer global morphology must be viewed with caution. In spite of these limitations, FRF techniques remain a useful tool for describing in situ observations of flux rope CMEs.

  6. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-04-24

    HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.

  7. Interpretable Categorization of Heterogeneous Time Series Data

    NASA Technical Reports Server (NTRS)

    Lee, Ritchie; Kochenderfer, Mykel J.; Mengshoel, Ole J.; Silbermann, Joshua

    2017-01-01

    We analyze data from simulated aircraft encounters to validate and inform the development of a prototype aircraft collision avoidance system. The high-dimensional and heterogeneous time series dataset is analyzed to discover properties of near mid-air collisions (NMACs) and categorize the NMAC encounters. Domain experts use these properties to better organize and understand NMAC occurrences. Existing solutions either are not capable of handling high-dimensional and heterogeneous time series datasets or do not provide explanations that are interpretable by a domain expert. The latter is critical to the acceptance and deployment of safety-critical systems. To address this gap, we propose grammar-based decision trees along with a learning algorithm. Our approach extends decision trees with a grammar framework for classifying heterogeneous time series data. A context-free grammar is used to derive decision expressions that are interpretable, application-specific, and support heterogeneous data types. In addition to classification, we show how grammar-based decision trees can also be used for categorization, which is a combination of clustering and generating interpretable explanations for each cluster. We apply grammar-based decision trees to a simulated aircraft encounter dataset and evaluate the performance of four variants of our learning algorithm. The best algorithm is used to analyze and categorize near mid-air collisions in the aircraft encounter dataset. We describe each discovered category in detail and discuss its relevance to aircraft collision avoidance.

  8. Time series measurements of transient tracers and tracer-derived transport in the Deep Western Boundary Current between the Labrador Sea and the subtropical Atlantic Ocean at Line W

    NASA Astrophysics Data System (ADS)

    Smith, John N.; Smethie, William M.; Yashayev, Igor; Curry, Ruth; Azetsu-Scott, Kumiko

    2016-11-01

    Time series measurements of the nuclear fuel reprocessing tracer 129I and the gas ventilation tracer CFC-11 were undertaken on the AR7W section in the Labrador Sea (1997-2014) and on Line W (2004-2014), located over the US continental slope off Cape Cod, to determine advection and mixing time scales for the transport of Denmark Strait Overflow Water (DSOW) within the Deep Western Boundary Current (DWBC). Tracer measurements were also conducted in 2010 over the continental rise southeast of Bermuda to intercept the equatorward flow of DSOW by interior pathways. The Labrador Sea tracer and hydrographic time series data were used as input functions in a boundary current model that employs transit time distributions to simulate the effects of mixing and advection on downstream tracer distributions. Model simulations of tracer levels in the boundary current core and adjacent interior (shoulder) region with which mixing occurs were compared with the Line W time series measurements to determine boundary current model parameters. These results indicate that DSOW is transported from the Labrador Sea to Line W via the DWBC on a time scale of 5-6 years corresponding to a mean flow velocity of 2.7 cm/s while mixing between the core and interior regions occurs with a time constant of 2.6 years. A tracer section over the southern flank of the Bermuda rise indicates that the flow of DSOW that separated from the DWBC had undergone transport through interior pathways on a time scale of 9 years with a mixing time constant of 4 years.

  9. Multivariate multiscale entropy of financial markets

    NASA Astrophysics Data System (ADS)

    Lu, Yunfan; Wang, Jun

    2017-11-01

    In quantifying the dynamical properties of complex phenomena in financial market systems, multivariate financial time series have attracted wide attention. In this work, considering the shortcomings and limitations of univariate multiscale entropy in analyzing multivariate time series, the multivariate multiscale sample entropy (MMSE), which can evaluate the complexity in multiple data channels over different timescales, is applied to quantify the complexity of financial markets. Its effectiveness and advantages are demonstrated with numerical simulations on two well-known synthetic noise signals. For the first time, the complexity of four generated trivariate return series for each stock trading hour in the Chinese stock markets is quantified through the interdisciplinary application of this method. We find that the complexity of the trivariate return series in each hour shows a significant decreasing trend as the trading day progresses. Further, the shuffled multivariate return series and the absolute multivariate return series are also analyzed. As another new attempt, the complexity of the global stock markets (Asia, Europe and America) is quantified by analyzing their multivariate returns. Finally, we utilize the multivariate multiscale entropy to assess the relative complexity of normalized multivariate return volatility series with different degrees.
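
    For orientation, a univariate simplification is sketched below: coarse-graining plus sample entropy, with the tolerance r fixed from the original series as is conventional. The full MMSE additionally builds composite delay vectors across data channels, which this sketch omits.

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def sample_entropy(x, m=2, r=0.2):
        """SampEn(m, r): -ln(A/B), with A and B the counts of (m+1)- and
        m-length template matches within tolerance r (Chebyshev distance)."""
        def matches(mm):
            emb = sliding_window_view(x, mm)
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
            n = emb.shape[0]
            return (np.sum(d <= r) - n) / 2.0   # unordered pairs, no self-matches
        b, a = matches(m), matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def multiscale_entropy(x, scales=range(1, 6), m=2):
        """Coarse-grain at each scale, then compute SampEn with r fixed from
        the original series (the usual multiscale-entropy convention)."""
        r = 0.15 * np.std(x)
        out = []
        for s in scales:
            n = len(x) // s
            cg = x[: n * s].reshape(n, s).mean(axis=1)
            out.append(sample_entropy(cg, m, r))
        return out

    rng = np.random.default_rng(2)
    white = rng.standard_normal(1500)
    print([round(v, 2) for v in multiscale_entropy(white)])
    # White noise loses complexity under coarse-graining; 1/f-like series do not.
    ```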

  10. Multivariable Time Series Prediction for the Icing Process on Overhead Power Transmission Line

    PubMed Central

    Li, Peng; Zhao, Na; Zhou, Donghua; Cao, Min; Li, Jingjie; Shi, Xinling

    2014-01-01

    The design of monitoring and predictive alarm systems is necessary for successfully managing overhead power transmission line icing. Given the complexity, nonlinearity, and fitfulness of the line icing process, a model based on multivariable time series is presented here to predict the icing load of a transmission line. In this model, the time effects of micrometeorological parameters on the icing process are analyzed. Phase-space reconstruction theory and a machine learning method are then applied to establish the prediction model, which fully utilizes the history of multivariable time series data in local monitoring systems to represent the mapping relationship between icing load and micrometeorological factors. Given the fitful character of line icing, simulations were carried out within the same icing process and across different processes to test the model's prediction precision and robustness. According to the simulation results for the Tao-Luo-Xiong Transmission Line, the model demonstrates good prediction accuracy across different processes, provided the prediction length is less than two hours, and would be helpful for power grid departments when deciding to take action in advance to address potential icing disasters. PMID:25136653

  11. Using simulated historical time series to prioritize fuel treatments on landscapes across the United States: The LANDFIRE prototype project

    USGS Publications Warehouse

    Keane, Robert E.; Rollins, Matthew; Zhu, Zhi-Liang

    2007-01-01

    Canopy and surface fuels in many fire-prone forests of the United States have increased over the last 70 years as a result of modern fire exclusion policies, grazing, and other land management activities. The Healthy Forest Restoration Act and National Fire Plan establish a national commitment to reduce fire hazard and restore fire-adapted ecosystems across the USA. The primary index used to prioritize treatment areas across the nation is the Fire Regime Condition Class (FRCC), computed as the departure of current conditions from historical fire and landscape conditions. This paper describes a process that uses an extensive set of ecological models to map FRCC from a departure statistic computed from simulated time series of historical landscape composition. This mapping process uses a data-driven, biophysical approach in which georeferenced field data, biogeochemical simulation models, and spatial data libraries are integrated using spatial statistical modeling to map environmental gradients, which are then used to predict vegetation and fuel characteristics over space. These characteristics are then fed into a landscape fire and succession simulation model to simulate a time series of historical landscape compositions, which are compared to the composition of current landscapes to compute departure and FRCC values. Intermediate products from this process are then used to create ancillary vegetation, fuels, and fire regime layers that are useful in the eventual planning and implementation of fuel and restoration treatments at local scales. The complex integration of varied ecological models at different scales is described, and problems encountered during the implementation of this process in the LANDFIRE prototype project are addressed.

  12. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    PubMed

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes have been proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: an edge rank assignment step and a network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in the two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using an RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. When the networks derived by RE_RMLP-RNN with different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.

  13. Graphic analysis and multifractal on percolation-based return interval series

    NASA Astrophysics Data System (ADS)

    Pei, A. Q.; Wang, J.

    2015-05-01

    A financial time series model is developed and investigated using the oriented percolation system (a statistical physics model). The nonlinear and statistical behaviors of the return-interval time series are studied for the proposed model and for the real stock market by applying the visibility graph (VG) and multifractal detrended fluctuation analysis (MF-DFA). We investigate the fluctuation behaviors of the return intervals of the model for different parameter settings, and comparatively study these fluctuation patterns with those of real financial data for different threshold values. The empirical research in this work exhibits multifractal features for the corresponding financial time series. Further, the VGs derived from both the simulated data and the real data show small-world behavior, hierarchy, high clustering, and power-law tails in the degree distributions.

  14. Quasi-static time-series simulation using OpenDSS in IEEE distribution feeder model with high PV penetration and its impact on solar forecasting

    NASA Astrophysics Data System (ADS)

    Mohammed, Touseef Ahmed Faisal

    Since 2000, renewable electricity installations in the United States (excluding hydropower) have more than tripled. Renewable electricity grew at a compounded annual average of nearly 14% per year from 2000-2010. Wind, Concentrated Solar Power (CSP) and solar photovoltaics (PV) are the fastest growing renewable energy sectors. In 2010 in the U.S., solar PV grew over 71% and CSP grew by 18% from the previous year. Globally, renewable electricity installations more than quadrupled from 2000-2010, and solar PV generation grew by a factor of more than 28 between 2000 and 2010. The amount of CSP and solar PV installed on the distribution grid is increasing. These PV installations send electrical current from the load centers back toward the generating stations, but the transmission and distribution grid has been designed for uni-directional flow of electrical energy from generating stations to load centers. This can cause voltage imbalances and stress on the switchgear of the electrical circuitry. With the continuous rise in PV installations, analysis of voltage profiles and penetration levels remains an active area of research. Standard distributed photovoltaic (PV) generators represented in simulation studies do not reflect the exact location and variability properties, such as the distance between interconnection points and substations, voltage regulators, solar irradiance and other environmental factors. Quasi-static simulations assist in hour- and day-ahead peak load planning, as they provide a time-sequence analysis to help in generation allocation. Simulation models can be daily, hourly or yearly depending on the duty cycle and dynamics of the system. High penetration of PV into the power grid changes the voltage profile and power flow dynamically in the distribution circuits due to the inherent variability of PV. There are a number of modeling and simulation tools available for the study of such high-penetration PV scenarios. This thesis specifically utilizes OpenDSS, an open-source Distribution System Simulator developed by the Electric Power Research Institute, to simulate grid voltage profiles with a large-scale PV system under quasi-static time series, considering variations of PV output in seconds and minutes as well as the average daily load variations. A 13-bus IEEE distribution feeder model is utilized, with distributed residential- and commercial-scale PV at different buses, for simulation studies. Time series simulations are discussed for various modes of operation considering dynamic PV penetration at different time periods in a day. In addition, this thesis demonstrates simulations taking into account the presence of moving clouds for solar forecasting studies.

  15. Time series smoother for effect detection.

    PubMed

    You, Cheng; Lin, Dennis K J; Young, S Stanley

    2018-01-01

    In environmental epidemiology, one often encounters multiple time series with a long-term trend, including seasonality, that cannot be fully adjusted for by the observed covariates. The long-term trend is difficult to separate from abnormal short-term signals of interest. This paper addresses how to estimate the long-term trend in order to recover the short-term signals. Our case study demonstrates that current spline smoothing methods can yield significant positive and negative cross-correlations from the same dataset, depending on how the smoothing parameters are chosen. To circumvent this dilemma, three classes of time series smoothers are proposed to detrend time series data. These smoothers do not require fine tuning of parameters and can be applied to recover short-term signals. The properties of these smoothers are shown with both a case study using a factorial design and a simulation study using datasets generated from the original dataset. General guidelines are provided on how to discover short-term signals in time series with a long-term trend. The benefit of this research is that a problem is identified and the characteristics of possible solutions are determined.
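
    A minimal sketch of the smoother idea, assuming a running median as the tuning-light trend estimate (one plausible member of the smoother classes discussed, not necessarily the authors'): detrend with a robust long-window smoother, then read the short-term signal off the residual.

    ```python
    import numpy as np

    def moving_median(x, window):
        """Running-median trend estimate; robust and nearly tuning-free."""
        half = window // 2
        pad = np.pad(x, half, mode="edge")
        return np.array([np.median(pad[i:i + window]) for i in range(len(x))])

    # Seasonal trend, a short-term pulse (the "effect"), and noise
    rng = np.random.default_rng(8)
    t = np.arange(1000)
    trend = 5 * np.sin(2 * np.pi * t / 365)
    signal = np.where((t >= 500) & (t < 520), 2.0, 0.0)
    x = trend + signal + 0.5 * rng.standard_normal(t.size)

    residual = x - moving_median(x, 61)   # window short vs. the annual cycle
    print(residual[500:520].mean(), residual[:480].mean())  # pulse near +2
    ```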

  16. A combinatorial filtering method for magnetotelluric time-series based on Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Cai, Jianhua

    2014-11-01

    Magnetotelluric (MT) time-series are often contaminated with noise from natural or man-made processes, and a substantial improvement is possible when the time-series are made as clean as possible before further processing. A combinatorial method is described for filtering MT time-series based on the Hilbert-Huang transform that requires a minimum of human intervention and leaves good data sections unchanged. Good data sections are preserved because, after empirical mode decomposition, the data are analysed through hierarchies, morphological filtering, adaptive thresholding and multi-point smoothing, allowing the separation of noise from signal. The combinatorial method can be carried out without any assumption about the data distribution. Simulated data and real measured MT time-series from three different regions, with noise caused by baseline drift, high-frequency noise and power-line contributions, are processed to demonstrate the application of the proposed method. The results highlight the ability of the combinatorial method to pick out useful signals, and the noise is suppressed greatly so that its deleterious influence on MT transfer function estimation is eliminated.
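
    A bare-bones illustration of the first step only, assuming a fixed-iteration sifting loop to extract one intrinsic mode function; a real implementation would use a full EMD with proper stopping criteria plus the morphological filtering and adaptive thresholding described above.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    def sift_imf(x, n_sift=10):
        """Extract one intrinsic mode function by a fixed number of sifting
        iterations: subtract the mean of cubic-spline envelopes of the extrema."""
        t = np.arange(len(x))
        h = x.astype(float).copy()
        for _ in range(n_sift):
            maxi = argrelextrema(h, np.greater)[0]
            mini = argrelextrema(h, np.less)[0]
            if len(maxi) < 4 or len(mini) < 4:
                break
            upper = CubicSpline(maxi, h[maxi])(t)
            lower = CubicSpline(mini, h[mini])(t)
            h -= (upper + lower) / 2.0
        return h

    rng = np.random.default_rng(4)
    t = np.linspace(0, 10, 2000)
    clean = np.sin(2 * np.pi * 0.5 * t)
    noisy = clean + 0.4 * rng.standard_normal(t.size)  # high-frequency noise
    imf1 = sift_imf(noisy)                             # mostly noise content
    denoised = noisy - imf1                            # crude noise suppression
    print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
    ```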

  17. Crossing trend analysis methodology and application for Turkish rainfall records

    NASA Astrophysics Data System (ADS)

    Şen, Zekâi

    2018-01-01

    Trend analyses are necessary tools for depicting possible general increases or decreases in a given time series. There are many trend identification methodologies, such as the Mann-Kendall trend test, Spearman's tau, Sen's slope, the regression line, and Şen's innovative trend analysis. The literature has many papers about the use, cons and pros, and comparisons of these methodologies. In this paper, a completely new approach is proposed based on the crossing properties of a time series. It is suggested that the suitable trend, passing through the centroid of the given time series, should have the maximum number of crossings (the total number of up-crossings or down-crossings). This approach is applicable whether the time series has a dependent or independent structure, and it does not depend on the type of the probability distribution function. The validity of this method is demonstrated through an extensive Monte Carlo simulation and a comparison with other existing trend identification methodologies. The methodology is applied to a set of annual daily-extreme rainfall time series from different parts of Turkey, which have a physically independent structure.
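
    The crossing criterion is easy to sketch: rotate candidate trend lines through the series centroid and keep the slope whose residuals change sign most often. The slope grid below is an arbitrary choice.

    ```python
    import numpy as np

    def crossing_trend_slope(x, n_slopes=801):
        """Crossing-based trend: among candidate lines through the centroid,
        return the slope whose residual series changes sign most often."""
        t = np.arange(len(x), dtype=float)
        tc, xc = t.mean(), x.mean()                      # series centroid
        span = 2.0 * (x.max() - x.min()) / len(x)        # generous slope range
        best_slope, best_crossings = 0.0, -1
        for s in np.linspace(-span, span, n_slopes):
            resid = x - (xc + s * (t - tc))
            crossings = int(np.sum(np.sign(resid[:-1]) != np.sign(resid[1:])))
            if crossings > best_crossings:
                best_slope, best_crossings = s, crossings
        return best_slope, best_crossings

    rng = np.random.default_rng(9)
    series = 0.02 * np.arange(500) + rng.standard_normal(500)
    print(crossing_trend_slope(series))  # slope close to the embedded 0.02
    ```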

  18. Time series smoother for effect detection

    PubMed Central

    Lin, Dennis K. J.; Young, S. Stanley

    2018-01-01

    In environmental epidemiology, one often encounters multiple time series with a long-term trend, including seasonality, that cannot be fully adjusted for by the observed covariates. The long-term trend is difficult to separate from abnormal short-term signals of interest. This paper addresses how to estimate the long-term trend in order to recover the short-term signals. Our case study demonstrates that current spline smoothing methods can yield significant positive and negative cross-correlations from the same dataset, depending on how the smoothing parameters are chosen. To circumvent this dilemma, three classes of time series smoothers are proposed to detrend time series data. These smoothers do not require fine tuning of parameters and can be applied to recover short-term signals. The properties of these smoothers are shown with both a case study using a factorial design and a simulation study using datasets generated from the original dataset. General guidelines are provided on how to discover short-term signals in time series with a long-term trend. The benefit of this research is that a problem is identified and the characteristics of possible solutions are determined. PMID:29684033

  19. Statistical tools for analysis and modeling of cosmic populations and astronomical time series: CUDAHM and TSE

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas; Budavari, Tamas; Scargle, Jeffrey D.

    2018-01-01

    This presentation provides an overview of open-source software packages addressing two challenging classes of astrostatistics problems. (1) CUDAHM is a C++ framework for hierarchical Bayesian modeling of cosmic populations, leveraging graphics processing units (GPUs) to enable applying this computationally challenging paradigm to large datasets. CUDAHM is motivated by measurement error problems in astronomy, where density estimation and linear and nonlinear regression must be addressed for populations of thousands to millions of objects whose features are measured with possibly complex uncertainties, potentially including selection effects. An example calculation demonstrates accurate GPU-accelerated luminosity function estimation for simulated populations of 10^6 objects in about two hours using a single NVIDIA Tesla K40c GPU. (2) Time Series Explorer (TSE) is a collection of software in Python and MATLAB for exploratory analysis and statistical modeling of astronomical time series. It comprises a library of stand-alone functions and classes, as well as an application environment for interactive exploration of time series data. The presentation will summarize key capabilities of this emerging project, including new algorithms for the analysis of irregularly-sampled time series.

  20. Detection of chaotic determinism in time series from randomly forced maps

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Kanters, J. K.; Cohen, R. J.; Holstein-Rathlou, N. H.

    1997-01-01

    Time series from biological systems often display fluctuations in the measured variables. Much effort has been directed at determining whether this variability reflects deterministic chaos, or whether it is merely "noise". Despite this effort, it has been difficult to establish the presence of chaos in time series from biological systems. The output from a biological system is probably the result of both its internal dynamics and the input to the system from the surroundings. This implies that the system should be viewed as a mixed system with both stochastic and deterministic components. We present a method that appears to be useful in deciding whether determinism is present in a time series, and whether this determinism has chaotic attributes, i.e., a positive characteristic exponent that leads to sensitivity to initial conditions. The method relies on fitting a nonlinear autoregressive model to the time series, followed by an estimation of the characteristic exponents of the model over the observed probability distribution of states for the system. The method is tested by computer simulations, and applied to heart rate variability data.
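
    A small sketch of this recipe under simplifying assumptions: fit a polynomial nonlinear AR(1) map by least squares to a noisy logistic-map series, then average the log absolute derivative of the fitted map over the observed states to estimate the characteristic exponent.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Mixed system: a chaotic logistic map driven by small dynamical noise
    n = 5000
    x = np.empty(n)
    x[0] = 0.3
    for t in range(n - 1):
        x[t + 1] = np.clip(3.9 * x[t] * (1 - x[t]) + 0.01 * rng.standard_normal(),
                           0.0, 1.0)

    # Fit a polynomial nonlinear AR(1) model x[t+1] ~ f(x[t]) by least squares
    coef = np.polyfit(x[:-1], x[1:], 5)
    dcoef = np.polyder(coef)

    # Characteristic exponent of the fitted map, averaged over the observed
    # distribution of states: lambda = <ln |f'(x_t)|>
    lam = np.mean(np.log(np.abs(np.polyval(dcoef, x[:-1])) + 1e-12))
    print(f"estimated characteristic exponent: {lam:.3f}")  # > 0 suggests chaos
    ```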

  1. Classification of biosensor time series using dynamic time warping: applications in screening cancer cells with characteristic biomarkers.

    PubMed

    Rai, Shesh N; Trainor, Patrick J; Khosravi, Farhad; Kloecker, Goetz; Panchapakesan, Balaji

    2016-01-01

    The development of biosensors that produce time series data will facilitate improvements in biomedical diagnostics and in personalized medicine. The time series produced by these devices often contain characteristic features arising from biochemical interactions between the sample and the sensor. To use such characteristic features for determining sample class, similarity-based classifiers can be utilized. However, the construction of such classifiers is complicated by the variability in the time domains of such series, which renders traditional distance metrics such as the Euclidean distance ineffective in distinguishing between biological variance and time-domain variance. The dynamic time warping (DTW) algorithm is a sequence alignment algorithm that can be used to align two or more series to facilitate quantifying similarity. In this article, we evaluated the performance of DTW distance-based similarity classifiers for classifying time series that mimic electrical signals produced by nanotube biosensors. Simulation studies demonstrated the positive performance of such classifiers in discriminating between time series containing characteristic features that are obscured by noise in the intensity and time domains. We then applied a DTW distance-based k-nearest neighbors classifier to distinguish the presence/absence of a mesenchymal biomarker in cancer cells in buffy coats in a blinded test. Using a train-test approach, we find that the classifier had high sensitivity (90.9%) and specificity (81.8%) in differentiating between EpCAM-positive MCF7 cells spiked into buffy coats and plain buffy coats.
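
    For concreteness, a textbook DTW distance and a k-nearest-neighbor vote are sketched below on toy "sensor" signals whose characteristic bump shifts in time; the signal model and parameters are invented for illustration.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic O(len(a) * len(b)) dynamic time warping distance."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def knn_predict(query, train_series, train_labels, k=3):
        """k-nearest-neighbor vote under the DTW distance."""
        d = np.array([dtw_distance(query, s) for s in train_series])
        votes = np.asarray(train_labels)[np.argsort(d)[:k]]
        return np.bincount(votes).argmax()

    rng = np.random.default_rng(10)

    def make(label):
        """Toy sensor trace: a binding-event bump at a random time if label=1."""
        t = np.linspace(0, 1, 120)
        shift = rng.uniform(0.25, 0.75)
        bump = np.exp(-((t - shift) / 0.05) ** 2) if label else 0.0
        return bump + 0.1 * rng.standard_normal(t.size)

    train = [make(lab) for lab in (0, 1) * 10]
    labels = [0, 1] * 10
    print(knn_predict(make(1), train, labels))  # expected: 1
    ```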

  2. Temporal downscaling of decadal sediment load estimates to a daily interval for use in hindcast simulations

    USGS Publications Warehouse

    Ganju, N.K.; Knowles, N.; Schoellhamer, D.H.

    2008-01-01

    In this study we used hydrologic proxies to develop a daily sediment load time-series that, when integrated, agrees with decadal sediment load estimates. Hindcast simulations of bathymetric change in estuaries require daily sediment loads from major tributary rivers to capture the episodic delivery of sediment during multi-day freshwater flow pulses. Two independent decadal sediment load estimates are available for the Sacramento/San Joaquin River Delta, California prior to 1959, but they must be downscaled to a daily interval for use in hindcast models. Daily flow and sediment load data for the Delta are available after 1930 and 1959, respectively, but bathymetric change simulations for San Francisco Bay prior to this require a method to generate daily sediment load estimates into the Delta. We used two historical proxies, monthly rainfall and unimpaired flow magnitudes, to generate monthly unimpaired flows to the Sacramento/San Joaquin Delta for the 1851-1929 period. This step generated the shape of the monthly hydrograph. These historical monthly flows were compared to unimpaired monthly flows from the modern era (1967-1987), and a least-squares metric selected a modern water-year analogue for each historical water year. The daily hydrograph for the modern analogue was then assigned to the historical year and scaled to match the flow volume estimated by dendrochronology methods, providing the correct total flow for the year. We applied a sediment rating curve to this time-series of daily flows to generate daily sediment loads for 1851-1958. The rating curve was calibrated with the two independent decadal sediment load estimates over two distinct periods. This novel technique retained the timing and magnitude of freshwater flows and sediment loads, without damping variability or net sediment loads to San Francisco Bay. The time-series represents the hydraulic mining period with sustained periods of increased sediment loads, and a dramatic decrease after 1910, corresponding to a reduction in available mining debris. The analogue selection procedure also permits exploration of the morphological hydrograph concept, in which a limited set of hydrographs is used to simulate the same bathymetric change as the actual set of hydrographs. The final daily sediment load time-series and the morphological hydrograph concept will be applied as landward boundary conditions for hindcast simulations of bathymetric change in San Francisco Bay.
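
    A schematic of the analogue-year selection and scaling steps, with hypothetical helper names and synthetic monthly/daily flows standing in for the real records:

    ```python
    import numpy as np

    def pick_analogue(historical_monthly, modern_monthly_by_year):
        """Select the modern water year whose normalized monthly hydrograph is
        closest, in a least-squares sense, to the historical year's hydrograph."""
        h = historical_monthly / historical_monthly.sum()
        best_year, best_sse = None, np.inf
        for year, flows in modern_monthly_by_year.items():
            sse = np.sum((h - flows / flows.sum()) ** 2)
            if sse < best_sse:
                best_year, best_sse = year, sse
        return best_year

    def scale_daily(daily_analogue, annual_volume):
        """Assign the analogue's daily hydrograph, rescaled so it matches the
        reconstructed annual flow volume (e.g., from dendrochronology)."""
        return daily_analogue * (annual_volume / daily_analogue.sum())

    rng = np.random.default_rng(14)
    modern = {1967 + i: rng.gamma(2.0, 50.0, 12) for i in range(21)}  # 1967-1987
    hist_monthly = rng.gamma(2.0, 50.0, 12)    # one reconstructed historical year
    year = pick_analogue(hist_monthly, modern)
    daily = scale_daily(rng.gamma(1.5, 2.0, 365), annual_volume=hist_monthly.sum())
    print(year, round(daily.sum(), 1), round(hist_monthly.sum(), 1))  # volumes match
    ```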

  3. Creating historical range of variation (HRV) time series using landscape modeling: Overview and issues [Chapter 8

    Treesearch

    Robert E. Keane

    2012-01-01

    Simulation modeling can be a powerful tool for generating information about historical range of variation (HRV) in landscape conditions. In this chapter, I will discuss several aspects of the use of simulation modeling to generate landscape HRV data, including (1) the advantages and disadvantages of using simulation, (2) a brief review of possible landscape models, and...

  4. Assimilating a synthetic Kalman filter leaf area index series into the WOFOST model to improve regional winter wheat yield estimation

    USDA-ARS?s Scientific Manuscript database

    The scale mismatch between remotely sensed observations and crop growth models simulated state variables decreases the reliability of crop yield estimates. To overcome this problem, we used a two-step data assimilation phases: first we generated a complete leaf area index (LAI) time series by combin...

  5. Using Coupled Groundwater-Surface Water Models to Simulate Eco-Regional Differences in Climate Change Impacts on Hydrological Drought Regimes in British Columbia

    NASA Astrophysics Data System (ADS)

    Dierauer, J. R.; Allen, D. M.

    2016-12-01

    Climate change is expected to lead to an increase in extremes, including daily maximum temperatures, heat waves, and meteorological droughts, which will likely result in shifts in the hydrological drought regime (i.e. the frequency, timing, duration, and severity of drought events). While many studies have used hydrologic models to simulate climate change impacts on water resources, only a small portion of these studies have analyzed impacts on low flows and/or hydrological drought. This study is the first to use a fully coupled groundwater-surface water (gw-sw) model to study climate change impacts on hydrological drought. Generic catchment-scale gw-sw models were created for each of the six major eco-regions in British Columbia using the MIKE-SHE/MIKE-11 modelling code. Daily precipitation and temperature time series, downscaled using bias-correction spatial disaggregation for the simulated period of 1950-2100, were obtained from the Pacific Climate Impacts Consortium (PCIC). Streamflow and groundwater drought events were identified from the simulated time series for each catchment model using a moving-window quantile threshold. The frequency, timing, duration, and severity of drought events were compared between the reference period (1961-2000) and two future time periods (2031-2060, 2071-2100). Results show how hydrological drought regimes across the different British Columbia eco-regions will be impacted by climate change.

  6. Analyzing time-ordered event data with missed observations.

    PubMed

    Dokter, Adriaan M; van Loon, E Emiel; Fokkema, Wimke; Lameris, Thomas K; Nolet, Bart A; van der Jeugd, Henk P

    2017-09-01

    A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can be any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences on the process of interest, and statistical analyses are needed that can identify and correct the compromised detection processes. Missed observations in time series lead to observed time intervals between events at multiples of the true inter-event time, which conveys information on their detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for analysis of event data with potential observation bias and its removal. The methodology was applied to simulation data and a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance to miss arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor of 3, while parameter values corrected for missed observations were within 1% of their true simulated value. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and spurious rate differences between sites, which are introduced by differences in observational conditions. These results show that the derived methodology can be used to effectively remove observational biases in time-ordered event data.
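
    The key identity, that missed detections turn observed intervals into sums of consecutive true intervals, yields a mixture density that can be written down directly. The sketch below assumes Gaussian true intervals purely for illustration; the paper derives the general form.

    ```python
    import numpy as np
    from scipy import stats

    def observed_interval_pdf(t, mu, sigma, p, kmax=10):
        """Density of observed inter-event intervals when each event is missed
        independently with probability p. An observed interval is the sum of
        k+1 true intervals (k missed events in between), k ~ Geometric(1-p).
        True intervals are taken as Gaussian(mu, sigma) for illustration."""
        pdf = np.zeros_like(np.asarray(t, dtype=float))
        for k in range(kmax + 1):
            w = (1 - p) * p ** k                     # P(k events missed)
            # the sum of k+1 iid Gaussians is Gaussian with scaled moments
            pdf += w * stats.norm.pdf(t, loc=(k + 1) * mu,
                                      scale=np.sqrt(k + 1) * sigma)
        return pdf
    ```

    Maximizing the summed log of this density over (mu, sigma, p) then recovers both the true inter-event time and the detection probability from the observed intervals alone.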

  7. Fading channel simulator

    DOEpatents

    Argo, Paul E.; Fitzgerald, T. Joseph

    1993-01-01

    Fading channel effects on a transmitted communication signal are simulated with both frequency and time variations using a channel scattering function to affect the transmitted signal. A conventional channel scattering function is converted to a series of channel realizations by multiplying the square root of the channel scattering function by a complex number of which the real and imaginary parts are each independent variables. The two-dimensional inverse-FFT of this complex-valued channel realization yields a matrix of channel coefficients that provide a complete frequency-time description of the channel. The transmitted radio signal is segmented to provide a series of transmitted signal segments, and each segment is subjected to an FFT to generate a series of signal coefficient matrices. The channel coefficient matrices and signal coefficient matrices are then multiplied and subjected to inverse-FFT to output a signal representing the received affected radio signal. A variety of channel scattering functions can be used to characterize the response of a transmitter-receiver system to such atmospheric effects.
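
    A minimal NumPy sketch of the procedure as described in the record; the array-shape convention (rows as frequency bins, columns as time steps) is an assumption made for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def channel_coefficients(scattering):
        """One channel realization: multiply the square root of the scattering
        function by unit-variance complex Gaussian noise, then take the 2-D
        inverse FFT to obtain frequency-time channel coefficients."""
        noise = (rng.standard_normal(scattering.shape)
                 + 1j * rng.standard_normal(scattering.shape)) / np.sqrt(2)
        return np.fft.ifft2(np.sqrt(scattering) * noise)

    def apply_channel(signal, coeffs):
        """Segment the signal, FFT each segment, multiply by the channel
        coefficients for that time step, and inverse-FFT back."""
        nfreq, ntime = coeffs.shape            # assumed (frequency, time) layout
        segments = signal[: nfreq * ntime].reshape(ntime, nfreq)
        out = np.fft.ifft(np.fft.fft(segments, axis=1) * coeffs.T, axis=1)
        return out.ravel()
    ```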

  8. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  9. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  10. An approach to checking case-crossover analyses based on equivalence with time-series methods.

    PubMed

    Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L

    2008-03-01

    The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.

  11. Estimating rainfall time series and model parameter distributions using model data reduction and inversion techniques

    NASA Astrophysics Data System (ADS)

    Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.

    2017-08-01

    Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated within an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
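
    As a rough illustration of the DWT as a model data reduction device, the sketch below (using the PyWavelets package) keeps only the largest-magnitude wavelet coefficients of a rainfall series; the wavelet choice, retention fraction, and function names are assumptions, not the authors' configuration. In an inversion scheme, the retained coefficients would be the (much smaller) parameter vector to estimate.

    ```python
    import numpy as np
    import pywt

    def reduce_rainfall(rain, wavelet="db4", keep=0.1):
        """Represent a daily rainfall series by its largest discrete wavelet
        coefficients and reconstruct the reduced-dimension series."""
        coeffs = pywt.wavedec(rain, wavelet)
        flat, slices = pywt.coeffs_to_array(coeffs)
        # keep only the largest-magnitude fraction of coefficients
        n_keep = max(1, int(keep * flat.size))
        thresh = np.sort(np.abs(flat))[-n_keep]
        flat[np.abs(flat) < thresh] = 0.0
        rec = pywt.waverec(
            pywt.array_to_coeffs(flat, slices, output_format="wavedec"), wavelet)
        return rec[: len(rain)]   # waverec can pad odd-length inputs by one sample
    ```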

  12. Precipitation extremes on multiple timescales - Bartlett-Lewis rectangular pulse model and intensity-duration-frequency curves

    NASA Astrophysics Data System (ADS)

    Ritschel, Christoph; Ulbrich, Uwe; Névir, Peter; Rust, Henning W.

    2017-12-01

    For several hydrological modelling tasks, precipitation time series with a high (i.e. sub-daily) resolution are indispensable. The data are, however, not always available, and thus model simulations are used to compensate. A canonical class of stochastic models for sub-daily precipitation is Poisson cluster processes, with the original Bartlett-Lewis (OBL) model as a prominent representative. The OBL model has been shown to reproduce well certain characteristics found in observations. Our focus is on intensity-duration-frequency (IDF) relationships, which are of particular interest in risk assessment. Based on a high-resolution precipitation time series (5 min) from Berlin-Dahlem, OBL model parameters are estimated and IDF curves are obtained on the one hand directly from the observations and on the other hand from OBL model simulations. Comparing the resulting IDF curves suggests that the OBL model is able to reproduce the main features of IDF statistics across several durations but cannot capture rare events (here an event with a return period larger than 1000 years on the hourly timescale). In this paper, IDF curves are estimated based on a parametric model for the duration dependence of the scale parameter in the generalized extreme value distribution; this allows us to obtain a consistent set of curves over all durations. We use the OBL model to investigate the validity of this approach based on simulated long time series.
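
    The duration-dependent GEV idea can be made concrete with one common parametric form (a Koutsoyiannis-type scale relation); the exact parametrization used in the paper may differ, so treat this as a sketch under that assumption.

    ```python
    import numpy as np

    def idf_intensity(T, d, mu0, sigma0, xi, eta, theta=0.0):
        """T-year quantile of mean intensity for duration d (hours), with the
        GEV location/scale tied across durations via an assumed relation
        sigma(d) = sigma0 / (d + theta)**eta and mu(d) = mu0 * sigma(d)."""
        sigma = sigma0 / (d + theta) ** eta
        mu = mu0 * sigma
        y = -np.log(1.0 - 1.0 / T)          # Gumbel reduced variate argument
        return mu + sigma / xi * (y ** (-xi) - 1.0)
    ```

    Because all durations share one parameter set, the resulting family of curves cannot cross, which is what makes the estimates consistent across durations.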

  13. How to Run FAST Simulations.

    PubMed

    Zimmerman, M I; Bowman, G R

    2016-01-01

    Molecular dynamics (MD) simulations are a powerful tool for understanding enzymes' structures and functions with full atomistic detail. These physics-based simulations model the dynamics of a protein in solution and store snapshots of its atomic coordinates at discrete time intervals. Analysis of the snapshots from these trajectories provides thermodynamic and kinetic properties such as conformational free energies, binding free energies, and transition times. Unfortunately, simulating biologically relevant timescales with brute force MD simulations requires enormous computing resources. In this chapter we detail a goal-oriented sampling algorithm, called fluctuation amplification of specific traits, that quickly generates pertinent thermodynamic and kinetic information by using an iterative series of short MD simulations to explore the vast depths of conformational space. © 2016 Elsevier Inc. All rights reserved.

  14. Exploring total cardiac variability in healthy and pathophysiological subjects using improved refined multiscale entropy.

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2017-02-01

    Multiscale entropy (MSE) and refined multiscale entropy (RMSE) techniques are being widely used to evaluate the complexity of a time series across multiple time scales 't'. Both these techniques, at certain time scales (sometimes for the entire range of time scales, in the case of RMSE), assign higher entropy to the HRV time series of certain pathologies than to those of healthy subjects, and to their corresponding randomized surrogate time series. This incorrect assessment of signal complexity may be due to the fact that these techniques suffer from the following limitations: (1) the threshold value 'r' is updated as a function of long-term standard deviation and hence is unable to capture the short-term variability as well as the substantial variability inherent in beat-to-beat fluctuations of long-term HRV time series; (2) in RMSE, the entropy values assigned to different filtered scaled time series are the result of changes in variance, but do not completely reflect the real structural organization inherent in the original time series. In the present work, we propose an improved RMSE (I-RMSE) technique by introducing a new procedure to set the threshold value that takes into account the period-to-period variability inherent in a signal, and evaluated it on simulated and real HRV databases. The proposed I-RMSE assigns higher entropy to the age-matched healthy subjects than to patients suffering from atrial fibrillation, congestive heart failure, sudden cardiac death and diabetes mellitus, for the entire range of time scales. The results strongly support a reduction in complexity of HRV time series in the female group, in old-aged subjects, in patients suffering from severe cardiovascular and non-cardiovascular diseases, and in their corresponding surrogate time series.

  15. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    ERIC Educational Resources Information Center

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…

  16. A method for ensemble wildland fire simulation

    Treesearch

    Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain

    2011-01-01

    An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...

  17. Identification of market trends with string and D2-brane maps

    NASA Astrophysics Data System (ADS)

    Bartoš, Erik; Pinčák, Richard

    2017-08-01

    The multidimensional string objects are introduced as a new alternative for the application of string models to time series forecasting in trading on financial markets. The objects are represented by an open string with 2 endpoints and a D2-brane, which are continuous enhancements of the 1-endpoint open string model. We show how the new object properties can change the statistics of the predictors, which makes them candidates for modeling a wide range of time series systems. String angular momentum is proposed as another tool, besides the historical volatility, to analyze the stability of currency rates. To show the reliability of our approach with the application of string models to time series forecasting, we present the results of real demo simulations for four currency exchange pairs.

  18. Hierarchical Regularity in Multi-Basin Dynamics on Protein Landscapes

    NASA Astrophysics Data System (ADS)

    Matsunaga, Yasuhiro; Kostov, Konstantin S.; Komatsuzaki, Tamiki

    2004-04-01

    We analyze time series of potential energy fluctuations and principal components at several temperatures for two kinds of off-lattice 46-bead models that have two distinctive energy landscapes. The less-frustrated "funnel" energy landscape brings about stronger nonstationary behavior of the potential energy fluctuations at the folding temperature than the other, rather frustrated energy landscape at the collapse temperature. By combining principal component analysis with an embedding nonlinear time-series analysis, it is shown that the fast fluctuations with small amplitudes of 70-80% of the principal components cause the time series to become almost "random" in only 100 simulation steps. However, the stochastic feature of the principal components tends to be suppressed through a wide range of degrees of freedom at the transition temperature.

  19. Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Hou, Zhangshuan; Meng, Da

    2016-07-17

    In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas. These approaches are applied to cross-correlated load time series as well as their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
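
    One way to preserve inter-area dependency when generating realizations is to combine a cross-area Cholesky factor with temporal autocorrelation. The sketch below uses an AR(1) recursion as a simplified stand-in for the paper's ARIMA/sequential-Gaussian machinery; all names and parameters are illustrative.

    ```python
    import numpy as np

    def correlated_ar1_realizations(corr, phi, sigma, n_steps, n_real, seed=0):
        """Generate forecast-error realizations for several areas that are
        cross-correlated at each time step (via a Cholesky factor) and
        autocorrelated in time (via an AR(1) recursion)."""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(corr)          # imposes the inter-area correlation
        n_area = corr.shape[0]
        out = np.zeros((n_real, n_steps, n_area))
        for r in range(n_real):
            e = np.zeros(n_area)
            for t in range(n_steps):
                shock = L @ rng.standard_normal(n_area)
                # scaling keeps the stationary marginal std equal to sigma
                e = phi * e + sigma * np.sqrt(1 - phi ** 2) * shock
                out[r, t] = e
        return out
    ```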

  20. Non-linear dynamical classification of short time series of the Rössler system in high noise regimes.

    PubMed

    Lainscsek, Claudia; Weyhenmeyer, Jonathan; Hernandez, Manuel E; Poizner, Howard; Sejnowski, Terrence J

    2013-01-01

    Time series analysis with delay differential equations (DDEs) reveals non-linear properties of the underlying dynamical system and can serve as a non-linear time-domain classification tool. Here global DDE models were used to analyze short segments of simulated time series from a known dynamical system, the Rössler system, in high noise regimes. In a companion paper, we apply the DDE model developed here to classify short segments of encephalographic (EEG) data recorded from patients with Parkinson's disease and healthy subjects. Nine simulated subjects in each of two distinct classes were generated by varying the bifurcation parameter b and keeping the other two parameters (a and c) of the Rössler system fixed. All choices of b were in the chaotic parameter range. We diluted the simulated data with white noise at signal-to-noise ratios (SNR) ranging from 10 to -30 dB. Structure selection was supervised by selecting the number of terms, delays, and order of non-linearity of the DDE model that best linearly separated the two classes of data. The distance d from the linear dividing hyperplane was then used to assess the classification performance by computing the area A' under the ROC curve. The selected model was tested on untrained data using repeated random sub-sampling validation. DDEs were able to accurately distinguish the two dynamical conditions, and moreover, to quantify the changes in the dynamics. There was a significant correlation between the dynamical bifurcation parameter b of the simulated data and the classification parameter d from our analysis. This correlation still held for new simulated subjects with new dynamical parameters selected from each of the two dynamical regimes. Furthermore, the correlation was robust to added noise, being significant even when the noise was greater than the signal. We conclude that DDE models may be used as a generalizable and reliable classification tool for even small segments of noisy data.
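
    A hedged sketch of how DDE coefficients become classification features: fit a small global DDE to each segment by least squares, then feed the coefficient vectors to a linear classifier. The three-term quadratic form, the delays, and the derivative estimate are illustrative; the paper's structure selection chooses these automatically.

    ```python
    import numpy as np

    def dde_features(x, dt, tau1, tau2):
        """Fit dx/dt ~ a1*x(t-tau1) + a2*x(t-tau2) + a3*x(t-tau1)*x(t-tau2)
        by least squares; the coefficients (a1, a2, a3) are the features."""
        d1, d2 = int(tau1 / dt), int(tau2 / dt)
        lag = max(d1, d2)
        dxdt = np.gradient(x, dt)[lag:]              # numerical derivative
        x1 = x[lag - d1: len(x) - d1]                # x(t - tau1)
        x2 = x[lag - d2: len(x) - d2]                # x(t - tau2)
        A = np.column_stack([x1, x2, x1 * x2])
        a, *_ = np.linalg.lstsq(A, dxdt, rcond=None)
        return a
    ```

    Coefficient vectors from many segments can then be separated with a linear classifier, and the signed distance d to its hyperplane used as the classification statistic, as in the record above.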

  1. Non-Linear Dynamical Classification of Short Time Series of the Rössler System in High Noise Regimes

    PubMed Central

    Lainscsek, Claudia; Weyhenmeyer, Jonathan; Hernandez, Manuel E.; Poizner, Howard; Sejnowski, Terrence J.

    2013-01-01

    Time series analysis with delay differential equations (DDEs) reveals non-linear properties of the underlying dynamical system and can serve as a non-linear time-domain classification tool. Here global DDE models were used to analyze short segments of simulated time series from a known dynamical system, the Rössler system, in high noise regimes. In a companion paper, we apply the DDE model developed here to classify short segments of encephalographic (EEG) data recorded from patients with Parkinson’s disease and healthy subjects. Nine simulated subjects in each of two distinct classes were generated by varying the bifurcation parameter b and keeping the other two parameters (a and c) of the Rössler system fixed. All choices of b were in the chaotic parameter range. We diluted the simulated data with white noise at signal-to-noise ratios (SNR) ranging from 10 to −30 dB. Structure selection was supervised by selecting the number of terms, delays, and order of non-linearity of the DDE model that best linearly separated the two classes of data. The distance d from the linear dividing hyperplane was then used to assess the classification performance by computing the area A′ under the ROC curve. The selected model was tested on untrained data using repeated random sub-sampling validation. DDEs were able to accurately distinguish the two dynamical conditions, and moreover, to quantify the changes in the dynamics. There was a significant correlation between the dynamical bifurcation parameter b of the simulated data and the classification parameter d from our analysis. This correlation still held for new simulated subjects with new dynamical parameters selected from each of the two dynamical regimes. Furthermore, the correlation was robust to added noise, being significant even when the noise was greater than the signal. We conclude that DDE models may be used as a generalizable and reliable classification tool for even small segments of noisy data. PMID:24379798

  2. A hybrid-domain approach for modeling climate data time series

    NASA Astrophysics Data System (ADS)

    Wen, Qiuzi H.; Wang, Xiaolan L.; Wong, Augustine

    2011-09-01

    In order to model climate data time series that often contain periodic variations, trends, and sudden changes in mean (mean shifts, mostly artificial), this study proposes a hybrid-domain (HD) algorithm, which incorporates a time domain test and a newly developed frequency domain test through an iterative procedure that is analogous to the well-known backfitting algorithm. A two-phase competition procedure is developed to address the confounding issue between modeling periodic variations and mean shifts. A variety of distinctive features of climate data time series, including trends, periodic variations, mean shifts, and a dependent noise structure, can be modeled in tandem using the HD algorithm. This is particularly important for homogenization of climate data from a low density observing network in which reference series are not available to help preserve climatic trends and long-term periodic variations, preventing them from being mistaken for artificial shifts. The HD algorithm is also powerful in estimating trend and periodicity in a homogeneous data time series (i.e., in the absence of any mean shift). The performance of the HD algorithm (in terms of false alarm rate and hit rate in detecting shifts/cycles, and estimation accuracy) is assessed via a simulation study. Its power is further illustrated through its application to a few climate data time series.

  3. Microscopic Spin Model for the STOCK Market with Attractor Bubbling on Regular and Small-World Lattices

    NASA Astrophysics Data System (ADS)

    Krawiecki, A.

    A multi-agent spin model for changes of prices in the stock market based on the Ising-like cellular automaton with interactions between traders randomly varying in time is investigated by means of Monte Carlo simulations. The structure of interactions has topology of a small-world network obtained from regular two-dimensional square lattices with various coordination numbers by randomly cutting and rewiring edges. Simulations of the model on regular lattices do not yield time series of logarithmic price returns with statistical properties comparable with the empirical ones. In contrast, in the case of networks with a certain degree of randomness for a wide range of parameters the time series of the logarithmic price returns exhibit intermittent bursting typical of volatility clustering. Also the tails of distributions of returns obey a power scaling law with exponents comparable to those obtained from the empirical data.

  4. Real-time liquid-crystal atmosphere turbulence simulator with graphic processing unit.

    PubMed

    Hu, Lifa; Xuan, Li; Li, Dayu; Cao, Zhaoliang; Mu, Quanquan; Liu, Yonggang; Peng, Zenghui; Lu, Xinghai

    2009-04-27

    To generate time-evolving atmosphere turbulence in real time, a phase-generating method for our liquid-crystal (LC) atmosphere turbulence simulator (ATS) is derived based on the Fourier series (FS) method. A real matrix expression for generating turbulence phases is given and calculated with a graphic processing unit (GPU), the GeForce 8800 Ultra. A liquid crystal on silicon (LCOS) with 256x256 pixels is used as the turbulence simulator. The total time to generate a turbulence phase is about 7.8 ms for calculation and readout with the GPU. A parallel processing method of calculating and sending a picture to the LCOS is used to improve the simulating speed of our LC ATS. Therefore, the real-time turbulence phase-generation frequency of our LC ATS is up to 128 Hz. To our knowledge, it is the highest speed used to generate a turbulence phase in real time.
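
    For readers unfamiliar with the Fourier-based construction of turbulence phases, the classic FFT phase-screen recipe is sketched below with a Kolmogorov-type spectrum. The spectrum and normalization are illustrative textbook choices, not taken from the paper, which derives its own real-matrix formulation for GPU evaluation.

    ```python
    import numpy as np

    def kolmogorov_phase_screen(n, dx, r0, seed=0):
        """FFT construction of a random turbulence phase screen: complex
        Gaussian noise is filtered by the square root of a Kolmogorov-type
        phase spectrum and inverse-transformed. Only the structure of the
        recipe matters here; amplitude scaling is illustrative."""
        rng = np.random.default_rng(seed)
        fx = np.fft.fftfreq(n, d=dx)
        f = np.hypot(*np.meshgrid(fx, fx, indexing="ij"))
        f[0, 0] = np.inf                      # suppress the undefined piston term
        psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
        noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        return np.real(np.fft.ifft2(np.sqrt(psd) * noise)) * n  # scale illustrative
    ```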

  5. A NEW METHOD OF LONGITUDINAL DIARY ASSEMBLY FOR HUMAN EXPOSURE MODELING

    EPA Science Inventory

    Human exposure time-series modeling requires longitudinal time-activity diaries to evaluate the sequence of concentrations encountered, and hence, pollutant exposure for the simulated individuals. However, most of the available data on human activities are from cross-sectional su...

  6. Quantifying Selection with Pool-Seq Time Series Data.

    PubMed

    Taus, Thomas; Futschik, Andreas; Schlötterer, Christian

    2017-11-01

    Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations, several studies have proposed experimental parameters to optimize the identification of selection targets. No such recommendations are available for the underlying parameters, selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates for selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without a major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  7. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series

    PubMed Central

    Fransson, Peter

    2016-01-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box–Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed. PMID:27784176
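
    The combined Fisher-then-Box-Cox pipeline is easy to sketch with SciPy; the window length, the positivity shift, and the function name below are assumptions for illustration, not the authors' settings.

    ```python
    import numpy as np
    from scipy import stats

    def stabilized_connectivity(ts1, ts2, win=30):
        """Sliding-window correlation time series, Fisher z-transformed and
        then Box-Cox transformed (after shifting to positive values)."""
        n = len(ts1) - win + 1
        r = np.array([np.corrcoef(ts1[i:i + win], ts2[i:i + win])[0, 1]
                      for i in range(n)])
        z = np.arctanh(r)                      # Fisher transformation
        shifted = z - z.min() + 1e-6           # Box-Cox requires positive input
        bc, lam = stats.boxcox(shifted)        # lam is the fitted BC exponent
        return bc, lam
    ```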

  8. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    PubMed

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.

  9. Time series inversion of spectra from ground-based radiometers

    NASA Astrophysics Data System (ADS)

    Christensen, O. M.; Eriksson, P.

    2013-02-01

    Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is expanded to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the OSO water vapour microwave radiometer, confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
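
    A minimal sketch of the stacked-state MAP retrieval, assuming an exponential temporal correlation and a Kronecker-structured a priori covariance; the correlation time `tau` and the structure are illustrative choices, not necessarily those of the paper.

    ```python
    import numpy as np

    def map_retrieval(y, K, xa, Se, Sa_spatial, times, tau):
        """MAP retrieval over a stacked time period. The a priori covariance
        is the Kronecker product of a per-profile covariance and an
        exponential temporal correlation, so each altitude is effectively
        smoothed over its own time span."""
        t = np.asarray(times, dtype=float)
        Ct = np.exp(-np.abs(t[:, None] - t[None, :]) / tau)  # temporal correlation
        Sa = np.kron(Ct, Sa_spatial)                         # stacked prior covariance
        G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)      # gain matrix
        return xa + G @ (y - K @ xa)                         # standard MAP update
    ```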

  10. Documentation of a spreadsheet for time-series analysis and drawdown estimation

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    Drawdowns during aquifer tests can be obscured by barometric pressure changes, earth tides, regional pumping, and recharge events in the water-level record. These stresses can create water-level fluctuations that should be removed from observed water levels prior to estimating drawdowns. Simple models have been developed for estimating unpumped water levels during aquifer tests that are referred to as synthetic water levels. These models sum multiple time series such as barometric pressure, tidal potential, and background water levels to simulate non-pumping water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function. Root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were greater than or equal to four times prediction periods. The proposed drawdown estimation approach has been implemented in a spreadsheet application. Measured time series are independent so that collection frequencies can differ and sampling times can be asynchronous. Time series can be viewed selectively and magnified easily. Fitting and prediction periods can be defined graphically or entered directly. Synthetic water levels for each observation well are created with earth tides, measured time series, moving averages of time series, and differences between measured and moving averages of time series. Selected series and fitting parameters for synthetic water levels are stored and drawdowns are estimated for prediction periods. Drawdowns can be viewed independently and adjusted visually if an anomaly skews initial drawdowns away from 0. The number of observations in a drawdown time series can be reduced by averaging across user-defined periods. Raw or reduced drawdown estimates can be copied from the spreadsheet application or written to tab-delimited ASCII files.
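
    The synthetic-water-level idea reduces to a least-squares fit over a pumping-free period. The sketch below fits only amplitudes; the spreadsheet described above also adjusts phases, which in this simplified form could be approximated by including time-lagged copies of a series among the regressors. All names are illustrative.

    ```python
    import numpy as np

    def synthetic_water_level(measured, regressors, fit_mask):
        """Fit amplitudes of candidate series (barometric pressure, tidal
        potential, background levels, moving averages of these) to the
        measured record over an unaffected fitting period, then predict
        unpumped levels for the whole record.

        measured   : 1-D array of observed water levels
        regressors : list of 1-D arrays, one per candidate time series
        fit_mask   : boolean array marking the pumping-free fitting period
        """
        X = np.column_stack(regressors + [np.ones_like(measured)])
        beta, *_ = np.linalg.lstsq(X[fit_mask], measured[fit_mask], rcond=None)
        synthetic = X @ beta
        return synthetic, measured - synthetic   # drawdown = measured - synthetic
    ```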

  11. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun

    Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace’s law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  12. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    NASA Astrophysics Data System (ADS)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; Derome, Dominique; Carmeliet, Jan

    2018-03-01

    An entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace's law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  13. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    DOE PAGES

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; ...

    2018-03-22

    Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace’s law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  14. Software Comparison for Renewable Energy Deployment in a Distribution Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian

    The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open-source packages, have the capability to simulate networks with fluctuating data values; they allow a simulation to be run at each time instant by iterating only the main script file. CYMDIST, a commercial software package, allows for time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are based on command-line programs, which increase the time necessary to become familiar with the software packages.

  15. Climate change impact and potential adaptation strategies under alternate realizations of climate scenarios for three major crops in Europe

    NASA Astrophysics Data System (ADS)

    Donatelli, Marcello; Srivastava, Amit Kumar; Duveiller, Gregory; Niemeyer, Stefan; Fumagalli, Davide

    2015-07-01

    This study presents an estimate of the effects of climate variables and CO2 on three major crops, namely wheat, rapeseed and sunflower, in EU27 Member States. We also investigated some technical adaptation options which could offset climate change impacts. The time slices 2000, 2020 and 2030 were chosen to represent the baseline and two future time horizons, respectively. Furthermore, two realizations within the A1B emission scenario proposed by the Special Report on Emissions Scenarios (SRES), from the ECHAM5 and HadCM3 GCM, were selected. A time series of 30 years for each GCM and time slice was used as input weather data for simulation. The time series were generated with a stochastic weather generator trained over GCM-RCM time series (downscaled simulations from the ENSEMBLES project which were statistically bias-corrected prior to the use of the weather generator). GCM-RCM simulations differed primarily in rainfall patterns across Europe, whereas the temperature increase was similar in the time horizons considered. Simulations based on the model CropSyst v. 3 were used to estimate crop responses; CropSyst was re-implemented in the modelling framework BioMA. The results presented in this paper refer to an abstraction of crop growth with respect to its production system, considering growth as limited by weather and soil water and as responding to CO2 concentrations; pests, diseases, and nutrient limitations were not accounted for in the simulations. The results show primarily that different realizations of the emission scenario lead to noticeably different crop performance projections in the same time slice. Simple adaptation techniques such as changing sowing dates and the use of different varieties, the latter in terms of duration of the crop cycle, may be effective in alleviating the adverse effects of climate change in most areas, although the response to best adaptation (within the techniques tested) differed across crops. Although a negative impact of climate scenarios is evident in most areas, the combination of rainfall patterns and increased photosynthesis efficiency due to CO2 concentrations showed possible improvements of production patterns in some areas, including Southern Europe. The uncertainty deriving from GCM realizations with respect to rainfall suggests that articulated and detailed testing of adaptation techniques would be redundant. Using ensemble simulations would allow for the identification of areas where adaptation, like those simulated, may be run autonomously by farmers, hence not requiring specific intervention in terms of support policies.

  16. Atmospheric extinction in simulation tools for solar tower plants

    NASA Astrophysics Data System (ADS)

    Hanrieder, Natalie; Wilbert, Stefan; Schroedter-Homscheidt, Marion; Schnell, Franziska; Guevara, Diana Mancera; Buck, Reiner; Giuliano, Stefano; Pitz-Paal, Robert

    2017-06-01

    Atmospheric extinction causes significant radiation losses between the heliostat field and the receiver in solar tower plants. These losses vary with site and time. The state of the art is that, in ray-tracing and plant optimization tools, atmospheric extinction is included by choosing among a few constant standard atmospheric conditions. Even though some tools allow the consideration of site- and time-dependent extinction data, such data sets are nearly never available. This paper summarizes and compares the most common model equations implemented in several ray-tracing tools. Several methods have already been developed and published to measure extinction on-site; an overview of the existing methods is also given here. Ray-tracing simulations of one exemplary tower plant at the Plataforma Solar de Almería (PSA) are presented to estimate the plant yield deviations between simulations using standard model equations instead of extinction time series. For PSA, the effect of atmospheric extinction accounts for losses between 1.6 and 7%, the range depending on whether overload dumping is considered. Applying standard clear or hazy model equations instead of extinction time series leads to an underestimation of the annual plant yield at PSA. The discussion of the effect of extinction in tower plants has to include overload dumping. Situations in which overload dumping occurs are mostly connected to high radiation levels and low atmospheric extinction. Therefore it can be recommended that project developers consider site- and time-dependent extinction data, especially at hazy sites. A reduced uncertainty of the plant yield prediction can significantly reduce costs due to smaller risk margins for financing and EPCs. The generation of extinction data for several locations in the form of representative yearly time series or geographical maps should be further elaborated.

  17. Linking time-series of single-molecule experiments with molecular dynamics simulations by machine learning

    PubMed Central

    Matsunaga, Yasuhiro

    2018-01-01

    Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. PMID:29723137

  18. Linking time-series of single-molecule experiments with molecular dynamics simulations by machine learning.

    PubMed

    Matsunaga, Yasuhiro; Sugita, Yuji

    2018-05-03

    Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. © 2018, Matsunaga et al.

  19. Quantifying Uncertainty from Computational Factors in Simulations of a Model Ballistic System

    DTIC Science & Technology

    2017-08-01

    Comparison of runs 6–9 with the corresponding simulations from the stop time study (Tables 22 and 23) shows that the restart series produces...

  20. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.

    PubMed

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-07-17

    Continuity, real-time capability, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays by virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve fitting based on the least-squares method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted based on the principle of eliminating a periodic oscillation signal by averaging it with a half-wave-delayed copy. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS.
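
    The half-wave-delay cancellation principle is compact enough to show directly: averaging a signal with a copy of itself delayed by half the oscillation period cancels that periodic component exactly, while slowly varying errors pass through nearly unchanged. A minimal sketch, with uniform sampling assumed:

    ```python
    import numpy as np

    def suppress_oscillation(x, t, period):
        """Average the signal with a half-period-delayed copy of itself:
        a sinusoid of the given period cancels exactly, since
        sin(w*t) + sin(w*(t - T/2)) = 0 for w = 2*pi/T."""
        dt = t[1] - t[0]
        k = int(round(0.5 * period / dt))    # samples in half a period
        return 0.5 * (x[k:] + x[:-k]), t[k:]

    # e.g. the ~84.4-minute Schuler oscillation:
    # err_clean, t_clean = suppress_oscillation(err, t, 84.4 * 60.0)
    ```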

  1. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series

    PubMed Central

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-01-01

    Continuity, real-time capability, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly cut down the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on forecasted time series is proposed by analyzing the characteristics of periodic oscillation errors. The innovative method gains multiple sets of navigation solutions with different phase delays by virtue of the forecasted time series acquired through the measurement data of the inertial measurement unit (IMU). With the help of curve fitting based on the least-squares method, the forecasted time series is obtained while distinguishing and removing small angular motion interference in the process of initial alignment. Finally, the periodic oscillation errors are restricted based on the principle of eliminating a periodic oscillation signal by averaging it with a half-wave-delayed copy. Simulation and test results show that the method has good performance in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283

  2. Detecting population-environmental interactions with mismatched time series data.

    PubMed

    Ferguson, Jake M; Reichert, Brian E; Fletcher, Robert J; Jager, Henriëtte I

    2017-11-01

    Time series analysis is an essential method for decomposing the influences of density and exogenous factors such as weather and climate on population regulation. However, there has been little work focused on understanding how well commonly collected data can reconstruct the effects of environmental factors on population dynamics. We show that, analogous to similar scale issues in spatial data analysis, coarsely sampled temporal data can fail to detect covariate effects when interactions occur on timescales that are fast relative to the survey period. We propose a method for modeling mismatched time series data that couples high-resolution environmental data to low-resolution abundance data. We illustrate our approach with simulations and by applying it to Florida's southern Snail kite population. Our simulation results show that our method can reliably detect linear environmental effects and that detecting nonlinear effects requires high-resolution covariate data even when the population turnover rate is slow. In the Snail kite analysis, our approach performed among the best in a suite of previously used environmental covariates explaining Snail kite dynamics and was able to detect a potential phenological shift in the environmental dependence of Snail kites. Our work provides a statistical framework for reliably detecting population-environment interactions from coarsely surveyed time series. An important implication of this work is that the low predictability of animal population growth by weather variables found in previous studies may be due, in part, to how these data are utilized as covariates. © 2017 by the Ecological Society of America.

  3. Application of time series analysis on molecular dynamics simulations of proteins: a study of different conformational spaces by principal component analysis.

    PubMed

    Alakent, Burak; Doruker, Pemra; Camurdan, Mehmet C

    2004-09-08

    Time series analysis is applied on the collective coordinates obtained from principal component analysis of independent molecular dynamics simulations of alpha-amylase inhibitor tendamistat and immunity protein of colicin E7 based on the Calpha coordinates history. Even though the principal component directions obtained for each run are considerably different, the dynamics information obtained from these runs are surprisingly similar in terms of time series models and parameters. There are two main differences in the dynamics of the two proteins: the higher density of low frequencies and the larger step sizes for the interminima motions of colicin E7 than those of alpha-amylase inhibitor, which may be attributed to the higher number of residues of colicin E7 and/or the structural differences of the two proteins. The cumulative density function of the low frequencies in each run conforms to the expectations from the normal mode analysis. When different runs of alpha-amylase inhibitor are projected on the same set of eigenvectors, it is found that principal components obtained from a certain conformational region of a protein have moderate explanatory power in other conformational regions and the local minima are similar to a certain extent, while the height of the energy barriers between the minima changes significantly. As a final remark, time series analysis tools are further exploited in this study with the motive of explaining the equilibrium fluctuations of proteins. Copyright 2004 American Institute of Physics.

  4. Application of time series analysis on molecular dynamics simulations of proteins: A study of different conformational spaces by principal component analysis

    NASA Astrophysics Data System (ADS)

    Alakent, Burak; Doruker, Pemra; Camurdan, Mehmet C.

    2004-09-01

    Time series analysis is applied on the collective coordinates obtained from principal component analysis of independent molecular dynamics simulations of α-amylase inhibitor tendamistat and the immunity protein of colicin E7, based on the Cα coordinate history. Even though the principal component directions obtained for each run are considerably different, the dynamics information obtained from these runs is surprisingly similar in terms of time series models and parameters. There are two main differences in the dynamics of the two proteins: the higher density of low frequencies and the larger step sizes for the interminima motions of colicin E7 compared with those of α-amylase inhibitor, which may be attributed to the higher number of residues of colicin E7 and/or the structural differences of the two proteins. The cumulative density function of the low frequencies in each run conforms to the expectations from normal mode analysis. When different runs of α-amylase inhibitor are projected on the same set of eigenvectors, it is found that principal components obtained from a certain conformational region of a protein have moderate explanatory power in other conformational regions and that the local minima are similar to a certain extent, while the heights of the energy barriers between the minima change significantly. As a final remark, time series analysis tools are further exploited in this study with the motive of explaining the equilibrium fluctuations of proteins.
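
    A minimal Python sketch of this kind of pipeline follows: PCA on a mean-centered coordinate history via SVD, then a low-order autoregressive fit to the leading collective coordinate. The trajectory is a synthetic random-walk stand-in (frame count and residue number are illustrative, not taken from the simulations above), and statsmodels' AutoReg stands in for whichever time series estimator was actually used.

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg

        # Hypothetical stand-in for a Calpha coordinate history: n_frames x (3 * n_residues)
        rng = np.random.default_rng(0)
        n_frames, n_dof = 2000, 3 * 74
        traj = 0.01 * np.cumsum(rng.normal(size=(n_frames, n_dof)), axis=0)

        X = traj - traj.mean(axis=0)              # remove the mean structure
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        explained = S**2 / np.sum(S**2)           # fluctuation fraction per principal component
        pc1 = X @ Vt[0]                           # collective coordinate along the first PC

        ar = AutoReg(pc1, lags=2).fit()           # low-order AR model of the collective coordinate
        print("PC1 variance fraction:", explained[0])
        print("AR(2) parameters:", ar.params)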

  5. Detecting population–environmental interactions with mismatched time series data

    PubMed Central

    Ferguson, Jake M.; Reichert, Brian E.; Fletcher, Robert J.; Jager, Henriëtte I.

    2017-01-01

    Time series analysis is an essential method for decomposing the influences of density and exogenous factors such as weather and climate on population regulation. However, there has been little work focused on understanding how well commonly collected data can reconstruct the effects of environmental factors on population dynamics. We show that, analogous to similar scale issues in spatial data analysis, coarsely sampled temporal data can fail to detect covariate effects when interactions occur on timescales that are fast relative to the survey period. We propose a method for modeling mismatched time series data that couples high-resolution environmental data to low-resolution abundance data. We illustrate our approach with simulations and by applying it to Florida’s southern Snail kite population. Our simulation results show that our method can reliably detect linear environmental effects and that detecting nonlinear effects requires high-resolution covariate data even when the population turnover rate is slow. In the Snail kite analysis, our approach performed among the best in a suite of previously used environmental covariates explaining Snail kite dynamics and was able to detect a potential phenological shift in the environmental dependence of Snail kites. Our work provides a statistical framework for reliably detecting population–environment interactions from coarsely surveyed time series. An important implication of this work is that the low predictability of animal population growth by weather variables found in previous studies may be due, in part, to how these data are utilized as covariates. PMID:28759123
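
    The scale mismatch the authors describe can be reproduced in a few lines. The toy model below (not the authors' estimator; every rate is invented) drives daily log-abundance with a nonlinear covariate effect and surveys the population only once per year: the annual mean of the covariate correlates hardly at all with observed growth, while the integrated daily effect does.

        import numpy as np

        rng = np.random.default_rng(1)
        n_surveys, days = 30, 365
        z = rng.normal(size=(n_surveys, days))      # high-resolution daily covariate

        # Daily log-abundance dynamics with weak mean reversion and a nonlinear effect
        logN = np.empty(n_surveys + 1)
        logN[0] = x = np.log(100.0)
        for t in range(n_surveys):
            for d in range(days):
                x += 0.001 * (np.log(100.0) - x) - 0.002 * z[t, d] ** 2 + rng.normal(scale=0.001)
            logN[t + 1] = x
        growth = np.diff(logN)                      # low-resolution survey-to-survey growth

        # The coarse covariate (annual mean) misses the nonlinear effect entirely,
        # while the integrated daily effect recovers it:
        print(np.corrcoef(z.mean(axis=1), growth)[0, 1])                   # near 0
        print(np.corrcoef((-0.002 * z ** 2).sum(axis=1), growth)[0, 1])    # near 1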

  6. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    PubMed

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and remains under active research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages over the two currently dominant strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction. The strategy has been validated with simulated and real datasets.
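
    The divide-and-model idea underlying MISMO can be sketched without the PSO layer: split the H-step horizon into blocks and fit one multi-output model per block. The block boundaries and the linear learner below are illustrative choices only; PSO-MISMO instead searches over variable-sized divides with neural network sub-models.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        x = np.sin(np.arange(1200) * 0.1) + 0.1 * rng.normal(size=1200)

        lags, H = 24, 12
        X = np.array([x[i:i + lags] for i in range(len(x) - lags - H)])
        Y = np.array([x[i + lags:i + lags + H] for i in range(len(x) - lags - H)])

        # Equal-sized divides here; PSO-MISMO adapts the number and size of divides.
        blocks = [(0, 4), (4, 8), (8, 12)]
        models = [LinearRegression().fit(X, Y[:, a:b]) for a, b in blocks]
        forecast = np.concatenate([m.predict(x[-lags:][None, :])[0] for m in models])
        print(forecast.shape)   # (12,) -- the stitched multistep-ahead forecast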

  7. Spatial and Temporal Uncertainty of Crop Yield Aggregations

    NASA Technical Reports Server (NTRS)

    Porwollik, Vera; Mueller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Iizumi, Toshichika; Ray, Deepak K.; Ruane, Alex C.; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe

    2016-01-01

    The aggregation of simulated gridded crop yields to national or regional scale requires information on temporal and spatial patterns of crop-specific harvested areas. This analysis estimates the uncertainty of simulated gridded yield time series related to the aggregation with four different harvested area data sets. We compare aggregated yield time series from the Global Gridded Crop Model Inter-comparison project for four crop types from 14 models at global, national, and regional scale to determine aggregation-driven differences in mean yields and temporal patterns as measures of uncertainty. The quantity and spatial patterns of harvested areas differ for individual crops among the four datasets applied for the aggregation. Simulated spatial yield patterns also differ among the 14 models. These differences in harvested areas and simulated yield patterns lead to differences in aggregated productivity estimates, both in mean yield and in the temporal dynamics. Among the four investigated crops, wheat yield (17% relative difference) is most affected by the uncertainty introduced by the aggregation at the global scale. The correlation of temporal patterns of globally aggregated yield time series can be as low as r = 0.28 (soybean). For the majority of countries, mean relative differences of nationally aggregated yields account for 10% or less. The spatial and temporal differences can be substantially higher for individual countries. For the top-10 crop producers, the aggregated national multi-annual mean relative difference of yields can be up to 67% (maize, South Africa), 43% (wheat, Pakistan), 51% (rice, Japan), and 427% (soybean, Bolivia). Correlations of differently aggregated yield time series can be as low as r = 0.56 (maize, India), r = 0.05 (wheat, Russia), r = 0.13 (rice, Vietnam), and r = -0.01 (soybean, Uruguay). The aggregation to sub-national scale in comparison to country scale shows that spatial uncertainties can cancel out in countries with large harvested areas per crop type. We conclude that the aggregation uncertainty can be substantial for crop productivity and production estimates in the context of food security, impact assessment, and model evaluation exercises.
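
    The aggregation step itself is a simple area-weighted mean, which makes the source of the uncertainty easy to see: the same simulated yield grid aggregates to different national values under different harvested-area weights. The toy grid below is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        sim_yield = rng.uniform(1.0, 6.0, size=(10, 10))        # t/ha on a toy grid
        area_a = rng.uniform(0.0, 1.0, size=(10, 10))           # two alternative
        area_b = area_a * rng.uniform(0.5, 1.5, size=(10, 10))  # harvested-area datasets

        agg = lambda y, a: (y * a).sum() / a.sum()              # area-weighted mean yield
        ya, yb = agg(sim_yield, area_a), agg(sim_yield, area_b)
        print(f"relative difference: {abs(ya - yb) / ((ya + yb) / 2):.1%}")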

  8. Refined composite multiscale weighted-permutation entropy of financial time series

    NASA Astrophysics Data System (ADS)

    Zhang, Yongping; Shang, Pengjian

    2018-04-01

    For quantifying the complexity of nonlinear systems, multiscale weighted-permutation entropy (MWPE) has recently been proposed. MWPE incorporates amplitude information and has been applied to account for the multiple inherent dynamics of time series. However, MWPE may be unreliable, because its estimated values fluctuate strongly under slight shifts of the data locations and differ significantly only for time series of different lengths. Therefore, we propose the refined composite multiscale weighted-permutation entropy (RCMWPE). Comparing RCMWPE results with those of other methods on both synthetic data and financial time series shows that RCMWPE not only retains the advantages inherited from MWPE but is also less sensitive to data locations, more stable, and much less dependent on the length of the time series. Moreover, we present and discuss the results of the RCMWPE method on daily price return series from Asian and European stock markets. There are significant differences between Asian and European markets, and the entropy values of the Hang Seng Index (HSI) are close to but higher than those of the European markets. The reliability of the proposed RCMWPE method is supported by simulations on generated and real data. It could be applied in a variety of fields to quantify the complexity of systems over multiple scales more accurately.
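
    A compact sketch of the refined composite construction is given below, under the usual reading of RCMWPE (analogous to refined composite multiscale entropy): for each scale, the weighted ordinal-pattern distributions of all coarse-grained series (one per starting offset) are averaged before the entropy is taken. Window variance serves as the amplitude weight; embedding dimension, scales and the heavy-tailed toy series are illustrative.

        import numpy as np
        from math import factorial

        def pattern_distribution(x, m=3, tau=1):
            # Weighted ordinal-pattern distribution: each window's pattern is its
            # argsort; its weight is the window variance (the amplitude information).
            n = len(x) - (m - 1) * tau
            windows = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
            weights = windows.var(axis=1)
            dist = {}
            for p, w in zip(map(tuple, np.argsort(windows, axis=1)), weights):
                dist[p] = dist.get(p, 0.0) + w
            total = sum(dist.values())
            return {p: w / total for p, w in dist.items()}

        def rcmwpe(x, scale, m=3):
            # Refined composite step: average the distributions of all `scale`
            # coarse-grained series (one per offset), then take the entropy.
            combined = {}
            for k in range(scale):
                usable = (len(x) - k) // scale * scale
                y = x[k:k + usable].reshape(-1, scale).mean(axis=1)
                for p, w in pattern_distribution(y, m).items():
                    combined[p] = combined.get(p, 0.0) + w / scale
            pvals = np.array(list(combined.values()))
            return -np.sum(pvals * np.log(pvals)) / np.log(factorial(m))

        rng = np.random.default_rng(0)
        returns = rng.standard_t(df=4, size=5000)    # heavy-tailed toy "return" series
        print([round(rcmwpe(returns, s), 3) for s in (1, 2, 5, 10)])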

  9. Rainfall-induced release of microbes from manure: model development, parameter estimation, and uncertainty evaluation on small plots

    USDA-ARS's Scientific Manuscript database

    A series of simulated rainfall-runoff experiments with applications of different manure types (cattle solid pats, poultry dry litter, swine slurry) were conducted across four seasons on a field containing 36 plots (0.75 × 2 m each), resulting in 144 rainfall-runoff events. Simulating time-varying re...

  10. Climate change effects on historical range and variability of two large landscapes in western Montana, USA

    Treesearch

    Robert E. Keane; Lisa M. Holsinger; Russell A. Parsons; Kathy Gray

    2008-01-01

    Quantifying the historical range and variability of landscape composition and structure using simulation modeling is becoming an important means of assessing current landscape condition and prioritizing landscapes for ecosystem restoration. However, most simulated time series are generated using static climate conditions which fail to account for the predicted major...

  11. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.

  12. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.

  13. Discriminant Analysis of Time Series in the Presence of Within-Group Spectral Variability.

    PubMed

    Krafty, Robert T

    2016-07-01

    Many studies record replicated time series epochs from different groups with the goal of using frequency domain properties to discriminate between the groups. In many applications, there exists variation in cyclical patterns from time series in the same group. Although a number of frequency domain methods for the discriminant analysis of time series have been explored, there is a dearth of models and methods that account for within-group spectral variability. This article proposes a model for groups of time series in which transfer functions are modeled as stochastic variables that can account for both between-group and within-group differences in spectra that are identified from individual replicates. An ensuing discriminant analysis of stochastic cepstra under this model is developed to obtain parsimonious measures of relative power that optimally separate groups in the presence of within-group spectral variability. The approach possesses favorable properties in classifying new observations and can be consistently estimated through a simple discriminant analysis of a finite number of estimated cepstral coefficients. Benefits in accounting for within-group spectral variability are empirically illustrated in a simulation study and through an analysis of gait variability.

  14. Examining deterrence of adult sex crimes: A semi-parametric intervention time series approach.

    PubMed

    Park, Jin-Hong; Bandyopadhyay, Dipankar; Letourneau, Elizabeth

    2014-01-01

    Motivated by recent developments in dimension reduction (DR) techniques for time series data, the association of a general deterrent effect with South Carolina (SC)'s sex offender registration and notification (SORN) policy for preventing sex crimes was examined. Using adult sex crime arrestee data from 1990 to 2005, the idea of the Central Mean Subspace (CMS) is extended to intervention time series analysis (CMS-ITS) to model the sequential intervention effects of 1995 (the year SC's SORN policy was initially implemented) and 1999 (the year the policy was revised to include online notification) on the time series spectrum. The CMS-ITS model estimation was achieved via kernel smoothing techniques and compared to interrupted autoregressive integrated moving average (ARIMA) models. Simulation studies and application to the real data underscore our model's ability to achieve parsimony and to detect intervention effects not previously determined via traditional ARIMA models. From a public health perspective, findings from this study draw attention to the potential general deterrent effects of SC's SORN policy. These findings are considered in light of the overall body of research on sex crime arrestee registration and notification policies, which remain controversial.

  15. Research of laser echo signal simulator

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Shi, Rui; Wang, Xin; Li, Zhou

    2015-11-01

    Laser echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR. A system model and a time series model of the laser echo signal simulator are established. Influential factors that can induce fixed and random errors in the simulated return signals are identified, and these system insertion errors are then analyzed quantitatively. Using this theoretical model, the simulation system is investigated experimentally. The results, corrected by subtracting the fixed error, indicate that the range error of the simulated laser return signal is less than 0.25 m and that the distance range the system can simulate extends from 50 m to 20 km.

  16. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting

    PubMed Central

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network. PMID:27959927

  17. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    PubMed

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  18. Data imputation analysis for Cosmic Rays time series

    NASA Astrophysics Data System (ADS)

    Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.

    2017-05-01

    The occurrence of missing data in Galactic Cosmic Ray (GCR) time series is inevitable, since losses arise from mechanical and human failure, technical problems, and the different periods of operation of GCR stations. The aim of this study was to perform multiple dataset imputation in order to depict the observational dataset. The study used the monthly time series of GCR Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90% missing data relative to the observed ROME series, with 50 replicates; the CLMX station was then used as a proxy for allocating these scenarios. Three different methods for monthly dataset imputation were selected: Amelia II, which runs a bootstrap Expectation-Maximization algorithm; MICE, which runs an algorithm via Multivariate Imputation by Chained Equations; and MTSDI, an Expectation-Maximization-based method for imputing missing values in multivariate normal time series. The synthetic time series were evaluated against the observed ROME series using several skill measures, such as RMSE, NRMSE, the Agreement Index, R, R2, the F-test and the t-test. The results showed that for CLMX and ROME, the R2 and R statistics were equal to 0.98 and 0.96, respectively. Increasing the number of gaps was observed to degrade the quality of the time series. Data imputation was most efficient with the MTSDI method, with negligible errors and the best skill coefficients. The results suggest an upper limit of about 60% missing data for the imputation of monthly averages. It is noteworthy that the CLMX, ROME and KIEL stations present no missing data in the target period. This methodology allowed 43 time series to be reconstructed.
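
    The gap-scenario design can be reproduced with any imputer; the sketch below uses plain interpolation in pandas as a stand-in for the R packages named above (Amelia II, MICE, MTSDI), simply to show how skill degrades as the missing fraction grows. The synthetic monthly series and its 11-year-like cycle are invented for illustration.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(4)
        t = np.arange(540)                                   # 45 years of monthly values
        obs = pd.Series(100 + 10 * np.sin(2 * np.pi * t / 132) + rng.normal(0, 1, t.size))

        for frac in (0.1, 0.3, 0.6, 0.9):
            gappy = obs.copy()
            gappy[rng.choice(t.size, int(frac * t.size), replace=False)] = np.nan
            filled = gappy.interpolate(limit_direction="both")   # stand-in imputer
            rmse = np.sqrt(((filled - obs) ** 2).mean())
            print(f"{frac:.0%} missing -> RMSE {rmse:.2f}")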

  19. tsiR: An R package for time-series Susceptible-Infected-Recovered models of epidemics.

    PubMed

    Becker, Alexander D; Grenfell, Bryan T

    2017-01-01

    tsiR is an open source software package implemented in the R programming language designed to analyze infectious disease time-series data. The software extends a well-studied and widely-applied algorithm, the time-series Susceptible-Infected-Recovered (TSIR) model, to infer parameters from incidence data, such as contact seasonality, and to forward simulate the underlying mechanistic model. The tsiR package aggregates a number of different fitting features previously described in the literature in a user-friendly way, providing support for their broader adoption in infectious disease research. Also included in tsiR are a number of diagnostic tools to assess the fit of the TSIR model. This package should be useful for researchers analyzing incidence data for fully-immunizing infectious diseases.
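
    The underlying TSIR recursion that tsiR fits is compact enough to state directly. The Python sketch below forward-simulates it with an assumed seasonal transmission profile, constant births and a small import term; all parameter values are illustrative, not tsiR defaults.

        import numpy as np

        rng = np.random.default_rng(5)
        N, alpha = 3_300_000, 0.97
        periods = 26 * 20                                   # biweekly steps, 20 years
        beta = 30 * (1 + 0.25 * np.cos(2 * np.pi * np.arange(26) / 26))  # seasonal contact
        births = 2000                                       # per biweek, assumed constant

        S = np.empty(periods)
        I = np.empty(periods)
        S[0], I[0] = 0.05 * N, 100
        for t in range(periods - 1):
            # TSIR: I[t+1] ~ beta[t] * S[t] * I[t]**alpha / N; the +1 is a small
            # import term that keeps this sketch from permanent fadeout.
            lam = beta[t % 26] * S[t] * (I[t] + 1) ** alpha / N
            I[t + 1] = min(rng.poisson(lam), S[t])          # new cases capped by susceptibles
            S[t + 1] = S[t] + births - I[t + 1]
        print(I[-26:].round())                              # final year of simulated incidence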

  20. Model Performance Evaluation and Scenario Analysis (MPESA)

    EPA Pesticide Factsheets

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  1. Impact of Optimization Strategy and Adoption Rate on V2X Technology on Environmental Impact

    DOT National Transportation Integrated Search

    2018-12-31

    This research evaluated the effects of automated vehicle control strategies on system level emissions, travel time and wait time through a series of traffic lights. The study was conducted using traffic simulation and a realistic vehicle mix. Two con...

  2. GPU-based real-time soft tissue deformation with cutting and haptic feedback.

    PubMed

    Courtecuisse, Hadrien; Jung, Hoeryong; Allard, Jérémie; Duriez, Christian; Lee, Doo Yong; Cotin, Stéphane

    2010-12-01

    This article describes a series of contributions in the field of real-time simulation of soft tissue biomechanics. These contributions address various requirements for interactive simulation of complex surgical procedures. In particular, this article presents results in the areas of soft tissue deformation, contact modelling, simulation of cutting, and haptic rendering, which are all relevant to a variety of medical interventions. The contributions described in this article share a common underlying model of deformation and rely on GPU implementations to significantly improve computation times. This consistency in the modelling technique and computational approach ensures coherent results as well as efficient, robust and flexible solutions. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  4. Comparing groundwater recharge and storage variability from GRACE satellite observations with observed water levels and recharge model simulations

    NASA Astrophysics Data System (ADS)

    Allen, D. M.; Henry, C.; Demon, H.; Kirste, D. M.; Huang, J.

    2011-12-01

    Sustainable management of groundwater resources, particularly in water-stressed regions, requires estimates of groundwater recharge. This study in southern Mali, Africa, compares approaches for estimating groundwater recharge and understanding recharge processes using a variety of methods encompassing groundwater level-climate data analysis, GRACE satellite data analysis, and recharge modelling for current and future climate conditions. Time series data for GRACE (2002-2006) and observed groundwater level data (1982-2001) do not overlap. To overcome this problem, GRACE time series data were appended to the observed historical time series data, and the records compared. Terrestrial water storage anomalies from GRACE were corrected for soil moisture (SM) using the Global Land Data Assimilation System (GLDAS) to obtain monthly groundwater storage anomalies (GRACE-SM) and monthly recharge estimates. Historical groundwater storage anomalies and recharge were determined using the water table fluctuation method with observation data from 15 wells. Historical annual recharge averaged 145.0 mm (or 15.9% of annual rainfall) and compared favourably with the GRACE-SM estimate of 149.7 mm (or 14.8% of annual rainfall). Both records show lows in May; the historical record peaks in September, whereas the GRACE-SM peak is shifted later in the year, to November, suggesting that GLDAS may poorly predict the timing of soil water storage in this region. Recharge simulation results show good agreement between the timing and magnitude of the mean monthly simulated recharge and the regional mean monthly storage anomaly hydrograph generated from all monitoring wells. Under future climate conditions, annual recharge is projected to decrease by 8% for areas with luvisols and by 11% for areas with nitosols. Given this potential reduction in groundwater recharge, there may be added stress placed on an already stressed resource.
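
    The water table fluctuation method mentioned above reduces to recharge = Sy × Δh summed over head-rise events. A minimal sketch, with an assumed specific yield and an invented head record:

        import numpy as np

        Sy = 0.02                                     # assumed specific yield
        head = np.array([12.4, 12.3, 12.6, 13.1, 13.0, 12.8, 13.4, 13.2])  # water levels, m

        rises = np.clip(np.diff(head), 0, None)      # keep only water-table rises
        recharge_mm = 1000 * Sy * rises.sum()        # recharge over the record
        print(f"recharge estimate: {recharge_mm:.0f} mm")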

  5. Shaping ability of ProFile.04 Taper Series 29 rotary nickel-titanium instruments in simulated root canals. Part 1.

    PubMed

    Thompson, S A; Dummer, P M

    1997-01-01

    The aim of this study was to determine the shaping ability of ProFile.04 Taper Series 29 nickel-titanium instruments in simulated canals. A total of 40 simulated root canals, comprising four different shapes in terms of angle and position of curvature, were prepared with ProFile instruments using a step-down approach. Part 1 of this two-part report describes the efficacy of the instruments in terms of preparation time, instrument failure, canal blockages, loss of canal length and three-dimensional canal form. The time necessary for canal preparation was not significantly influenced by canal shape. No instrument fractures occurred, but a total of 52 instruments deformed. Size 6 instruments deformed most often, followed by sizes 5, 3 and 4. Canal shape did not significantly influence instrument deformation. None of the canals became blocked with debris, and loss of working distance was on average 0.5 mm or less. Intracanal impressions of canal form demonstrated that most canals had definite apical stops, smooth canal walls and good flow and taper. Under the conditions of this study, ProFile.04 Taper Series 29 rotary nickel-titanium instruments prepared simulated canals rapidly and created good three-dimensional form. A substantial number of instruments deformed, but it was not possible to determine whether this phenomenon occurred because of the nature of the experimental model or through an inherent design weakness in the instruments.

  6. CFAVC scheme for high frequency series resonant inverter-fed domestic induction heating system

    NASA Astrophysics Data System (ADS)

    Nagarajan, Booma; Reddy Sathi, Rama

    2016-01-01

    This article presents investigations of constant frequency asymmetric voltage cancellation (CFAVC) control in an AC-AC resonant converter-fed domestic induction heating system. Conventional fixed-frequency control techniques used in high-frequency converters lead to non-zero-voltage switching operation and reduced output power. The proposed control technique produces higher output power than conventional fixed-frequency control strategies. In this technique, zero-voltage-switching operation is maintained across different duty cycles to reduce switching losses. A complete analysis of the induction heating power supply system with asymmetric voltage cancellation control is presented. A simulation and experimental study of the CFAVC-controlled full-bridge series resonant inverter is performed. Time domain simulation results for the open and closed loop of the system are obtained using the MATLAB simulation tool. The simulation results demonstrate control of voltage and power over a wide range. A PID controller-based closed-loop control system achieves voltage regulation of the proposed system under step changes in load. Hardware implementation of the system under CFAVC control is done using an embedded controller. The simulation and experimental results validate the performance of the CFAVC control technique for a series resonant induction cooking system.

  7. Simulation of financial market via nonlinear Ising model

    NASA Astrophysics Data System (ADS)

    Ko, Bonggyun; Song, Jae Wook; Chang, Woojin

    2016-09-01

    In this research, we propose a practical method for simulating financial return series whose distributions have a specified tail heaviness. We employ the Ising model to generate financial return series analogous to real series. The similarity between real financial return series and the simulated ones is statistically verified on the basis of their stylized facts, including the power-law behavior of the tail distribution. We also suggest a scheme for setting the parameters in order to simulate financial return series with specific tail behavior. The simulation method introduced in this paper is expected to be applicable to other financial products whose price return distributions are fat-tailed.
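
    A minimal Metropolis sketch of the idea is shown below: simulate a 2-D Ising lattice near its critical temperature and take changes in magnetization as a proxy for log-returns. The lattice size, temperature and return mapping are illustrative; the paper's nonlinear Ising variant and its parameter-setting scheme are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(6)
        L, T, sweeps = 24, 2.3, 1000                # near the critical temperature ~2.27
        spins = rng.choice([-1, 1], size=(L, L))

        returns = []
        m_prev = spins.mean()
        for _ in range(sweeps):
            for _ in range(L * L):                  # one Metropolis sweep
                i, j = rng.integers(L, size=2)
                nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2 * spins[i, j] * nb
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    spins[i, j] *= -1
            m = spins.mean()
            returns.append(m - m_prev)              # magnetization change as "return"
            m_prev = m

        r = np.array(returns)
        print("excess kurtosis:", ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3)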

  8. GAPPARD: a computationally efficient method of approximating gap-scale disturbance in vegetation models

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.

    2013-09-01

    Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, we developed a new method for simulating stand-replacing disturbances that is both accurate and faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model by deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing (e.g., as a result of climate change), GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the vegetation model LPJ-GUESS, and evaluated it in a series of simulations along an altitudinal transect of an inner-Alpine valley. We obtained results very similar to the output of the original LPJ-GUESS model that uses 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results; it opens the way for future studies over large spatial domains and allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and the results of other forest models.
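
    The postprocessing step at the heart of GAPPARD can be stated in a few lines: under a constant annual stand-replacing disturbance probability p, patch ages follow a geometric distribution, and the landscape expectation of any output variable is the age-distribution-weighted average of the undisturbed run. The biomass curve and the value of p below are hypothetical.

        import numpy as np

        p = 0.01                                   # assumed annual disturbance probability
        ages = np.arange(300)
        f = 250 * (1 - np.exp(-ages / 60.0))       # hypothetical biomass vs patch age

        w = p * (1 - p) ** ages                    # P(patch age = a), geometric
        w /= w.sum()                               # renormalize over the truncated range
        expected_biomass = np.sum(w * f)           # landscape-scale expectation
        print(f"landscape expectation: {expected_biomass:.0f} vs undisturbed max {f.max():.0f}")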

  9. Wavelet analysis in ecology and epidemiology: impact of statistical tests

    PubMed Central

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-01-01

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892

  10. Wavelet analysis in ecology and epidemiology: impact of statistical tests.

    PubMed

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-02-06

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the 'beta-surrogate' method.
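
    The distinction the authors draw can be seen directly by comparing two resampling schemes: a plain shuffle (a white-noise null, which destroys autocorrelation) against phase-randomized Fourier surrogates, which preserve the spectrum. A minimal sketch:

        import numpy as np

        rng = np.random.default_rng(7)
        x = np.cumsum(rng.normal(size=512))        # a red-noise-like series

        def fourier_surrogate(x, rng):
            # Randomize phases, keep the amplitude spectrum (and hence autocorrelation)
            X = np.fft.rfft(x)
            phases = rng.uniform(0, 2 * np.pi, X.size)
            phases[0] = 0                          # keep the mean
            phases[-1] = 0                         # keep the Nyquist bin (even length)
            return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

        shuffled = rng.permutation(x)
        surrogate = fourier_surrogate(x, rng)
        lag1 = lambda y: np.corrcoef(y[:-1], y[1:])[0, 1]
        print(f"lag-1 autocorr: original {lag1(x):.2f}, "
              f"shuffle {lag1(shuffled):.2f}, surrogate {lag1(surrogate):.2f}")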

  11. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    PubMed

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
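
    The attenuation mechanism is easy to reproduce in a single-pollutant Monte Carlo: generate a Poisson health series from true exposure, then fit against exposure contaminated with classical measurement error. The sketch below (illustrative sample size, baseline rate and error levels) uses statsmodels:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        n, true_rr_iqr = 1460, 1.05                       # 4 years of days, RR per IQR
        z_true = rng.normal(size=n)                       # standardized true exposure
        iqr = np.subtract(*np.percentile(z_true, [75, 25]))
        beta = np.log(true_rr_iqr) / iqr

        y = rng.poisson(np.exp(np.log(20) + beta * z_true))   # ~20 daily ED visits

        for err_sd in (0.0, 0.5, 1.0):
            z_obs = z_true + rng.normal(scale=err_sd, size=n)  # classical error
            fit = sm.GLM(y, sm.add_constant(z_obs), family=sm.families.Poisson()).fit()
            print(f"error sd {err_sd}: estimated RR per IQR = {np.exp(fit.params[1] * iqr):.3f}")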

  12. Generation of synthetic influent data to perform (micro)pollutant wastewater treatment modelling studies.

    PubMed

    Snip, L J P; Flores-Alsina, X; Aymerich, I; Rodríguez-Mozaz, S; Barceló, D; Plósz, B G; Corominas, Ll; Rodriguez-Roda, I; Jeppsson, U; Gernaey, K V

    2016-11-01

    The use of process models to simulate the fate of micropollutants in wastewater treatment plants is constantly growing. However, due to the high workload and cost of measuring campaigns, many simulation studies lack sufficiently long time series representing realistic wastewater influent dynamics. In this paper, the feasibility of the Benchmark Simulation Model No. 2 (BSM2) influent generator is tested for creating realistic dynamic influent (micro)pollutant disturbance scenarios. The presented set of models is adjusted to describe the occurrence of three pharmaceutical compounds and one metabolite of each, with samples taken every 2-4 h: the anti-inflammatory drug ibuprofen (IBU), the antibiotic sulfamethoxazole (SMX) and the psychoactive drug carbamazepine (CMZ). Information about the type of excretion and total consumption rates forms the basis for creating the data-defined profiles used to generate the dynamic time series. In addition, the traditional influent characteristics such as flow rate, ammonium, particulate chemical oxygen demand and temperature are also modelled within the same framework using high frequency data. The calibration is performed semi-automatically with two different methods depending on data availability: the 'traditional' variables are calibrated with the Bootstrap method, while the pharmaceutical loads are estimated with a least squares approach. The simulation results demonstrate that the BSM2 influent generator can describe the dynamics of both traditional variables and pharmaceuticals. Lastly, the study is complemented with: 1) the generation of longer time series for IBU following the same catchment principles; 2) the study of the impact of in-sewer SMX biotransformation when estimating the average daily load; and 3) a critical discussion of the results and of the future opportunities of the presented approach, balancing model structure/calibration procedure complexity against predictive capabilities. Copyright © 2016. Published by Elsevier B.V.

  13. 14 CFR 121.907 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... (LOE) means a simulated line environment, the scenario content of which is designed to test integrating... intended purpose in an AQP. Planned hours means the estimated amount of time (as specified in a curriculum..., and series. ...

  14. Most suitable mother wavelet for the analysis of fractal properties of stride interval time series via the average wavelet coefficient

    PubMed Central

    Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan

    2016-01-01

    Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker of relative diseases. A previous study showed that the average wavelet method provides the most accurate estimates of this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents that span a range of physiologically conceivable fractal signals. Five candidate wavelets were chosen due to their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother function without the burden of considering the signal length. PMID:27960102
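
    A sketch of the average wavelet coefficient idea with a symlet 2 basis is given below, using PyWavelets. For a self-similar signal, the log of the mean absolute detail coefficient grows roughly linearly with decomposition level, and the slope estimates the scaling exponent; the random-walk input is a stand-in for a stride interval series, not gait data.

        import numpy as np
        import pywt

        rng = np.random.default_rng(9)
        stride = np.cumsum(rng.normal(size=4096))        # fBm-like stand-in, H ~ 0.5

        coeffs = pywt.wavedec(stride, "sym2", level=8)
        levels = np.arange(1, 9)
        # coeffs[-j] is the detail band at level j (fine -> coarse)
        avg = np.array([np.mean(np.abs(coeffs[-j])) for j in levels])
        slope = np.polyfit(levels, np.log2(avg), 1)[0]
        print(f"AWC slope: {slope:.2f}  (about H + 1/2 for fractional Brownian motion)")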

  15. Analysis of the precipitation and streamflow extremes in Northern Italy using high resolution reanalysis dataset Express-Hydro

    NASA Astrophysics Data System (ADS)

    Silvestro, Francesco; Parodi, Antonio; Campo, Lorenzo

    2017-04-01

    The characterization of hydrometeorological extremes, in terms of both rainfall and streamflow, in a given region plays a key role in the environmental monitoring provided by flood alert services. In recent years, meteorological simulations (both near-real-time and historical reanalyses) have become available at increasing spatial and temporal resolutions, making possible long-period hydrological reanalyses in which the meteorological dataset is used as input to distributed hydrological models. In this work, a very high resolution meteorological reanalysis dataset, namely Express-Hydro (CIMA, ISAC-CNR, GAUSS Special Project PR45DE), was employed as input to the hydrological model Continuum in order to produce long time series of streamflows for the Liguria territory, located in northern Italy. The original dataset covers the whole of Europe for the period 1979-2008, at 4 km spatial resolution and 3-hour temporal resolution. Comparisons between the rainfall estimated from the dataset and observations (available from the local raingauge network) were carried out, and a bias correction was also performed in order to better match the observed climatology. An extreme-value analysis was then carried out on the streamflow time series obtained from the simulations, comparing them with the results of the same hydrological model fed with the observed rainfall time series. The results of the analysis are shown and discussed.

  16. A Bayesian nonparametric approach to dynamical noise reduction

    NASA Astrophysics Data System (ADS)

    Kaloudis, Konstantinos; Hatjispyros, Spyridon J.

    2018-06-01

    We propose a Bayesian nonparametric approach for the noise reduction of a given chaotic time series contaminated by dynamical noise, based on Markov chain Monte Carlo methods. The underlying unknown noise process may exhibit heavy-tailed behavior. We introduce the Dynamic Noise Reduction Replicator model, with which we reconstruct the unknown dynamical equations and, in parallel, replicate the dynamics under dynamical perturbations with a reduced noise level. The dynamic noise reduction procedure is demonstrated specifically for polynomial maps. Simulations based on synthetic time series are presented.

  17. Point-Process Models of Social Network Interactions: Parameter Estimation and Missing Data Recovery

    DTIC Science & Technology

    2014-08-01

    treating them as zero will have a de minimis impact on the results, but avoiding computing them (and computing with them) saves tremendous time. ... We test the methods on simulated time series on artificial social networks, including some toy networks and some meant to resemble IkeNet. We conclude the section by discussing the results in detail. In each of our tests we begin with a complete data set, whether it is real (IkeNet) or simulated. Then ...

  18. Simulated runoff at many stream locations in the Methow River Basin, Washington

    USGS Publications Warehouse

    Mastin, Mark C.

    2015-01-01

    Comparisons of the simulated runoff with observed runoff at six selected long-term streamflow-gaging stations showed that the simulated annual runoff was within +15.4 to -9.6 percent of the annual observed runoff. The simulated runoff generally matched the seasonal flow patterns, with bias at some stations indicated by over-simulation of the October–November late autumn season and under-simulation of the snowmelt runoff months of May and June. Sixty-one time series of daily runoff for a 26-year period representative of the long-term runoff pattern, water years 1988–2013, were simulated and provided to the trophic modeling team.

  19. Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Hunsberger, Randolph J

    This paper documents the first phase of an investigation into reducing the runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify, and further mitigate, the errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run time of a complex distribution-system-level quasi-static time series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems, representing a mock complex PV impact study. We are able to reduce the induced transition errors through the addition of controls initialization, though small errors persist. The time savings from parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
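
    A generic sketch of the temporal-decomposition idea (not the authors' OpenDSS tooling) is shown below: split the yearlong run into chunks, prepend a warm-up window so controller state can settle before each chunk's first reported step, solve the chunks in parallel, and stitch. The step function, chunk count and warm-up length are all placeholders.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        STEPS, CHUNKS, WARMUP = 35040, 8, 96     # 15-min steps for a year, 1-day warm-up

        def simulate_step(t):                    # hypothetical single-step solve
            return np.sin(2 * np.pi * t / 96)

        def run_chunk(bounds):
            start, stop = bounds
            t0 = max(0, start - WARMUP)          # warm-up window before the chunk
            out = [simulate_step(t) for t in range(t0, stop)]
            return out[start - t0:]              # discard the warm-up slice

        if __name__ == "__main__":
            edges = np.linspace(0, STEPS, CHUNKS + 1, dtype=int)
            jobs = list(zip(edges[:-1], edges[1:]))
            with ProcessPoolExecutor() as pool:
                series = np.concatenate(list(pool.map(run_chunk, jobs)))
            print(series.shape)                  # (35040,) stitched yearlong result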

  20. Multiple Solutions of Real-time Tsunami Forecasting Using Short-term Inundation Forecasting for Tsunamis Tool

    NASA Astrophysics Data System (ADS)

    Gica, E.

    2016-12-01

    The Short-term Inundation Forecasting for Tsunamis (SIFT) tool, developed by NOAA Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL), is used in forecast operations at the Tsunami Warning Centers in Alaska and Hawaii. The SIFT tool relies on a pre-computed tsunami propagation database, real-time DART buoy data, and an inversion algorithm to define the tsunami source. The tsunami propagation database is composed of 50×100km unit sources, simulated basin-wide for at least 24 hours. Different combinations of unit sources, DART buoys, and length of real-time DART buoy data can generate a wide range of results within the defined tsunami source. For an inexperienced SIFT user, the primary challenge is to determine which solution, among multiple solutions for a single tsunami event, would provide the best forecast in real time. This study investigates how the use of different tsunami sources affects simulated tsunamis at tide gauge locations. Using the tide gauge at Hilo, Hawaii, a total of 50 possible solutions for the 2011 Tohoku tsunami are considered. Maximum tsunami wave amplitude and root mean square error results are used to compare tide gauge data and the simulated tsunami time series. Results of this study will facilitate SIFT users' efforts to determine if the simulated tide gauge tsunami time series from a specific tsunami source solution would be within the range of possible solutions. This study will serve as the basis for investigating more historical tsunami events and tide gauge locations.

  1. Molecular Simulations in Astrobiology

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Wilson, Michael A.; Schweighofer, Karl; Chipot, Christophe; New, Michael H.

    2000-01-01

    One of the main goals of astrobiology is to understand the origin of cellular life. The most direct approach to this problem is to construct laboratory models of protocells. Such efforts, currently underway in the NASA Astrobiology Program, are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures that are capable of performing protocellular functions. Many of these functions, such as importing nutrients, capturing energy and responding to changes in the environment, are carried out by proteins bound to membranes. We use computer simulations to address the following questions about these proteins: (1) How do small proteins self-organize into ordered structures at water-membrane interfaces and insert into membranes? (2) How do peptides form membrane-spanning structures (e.g. channels)? (3) By what mechanisms do such structures perform their functions? The simulations are performed using the molecular dynamics method. In this method, Newton's equations of motion for each atom in the system are solved iteratively. At each time step, the forces exerted on each atom by the remaining atoms are evaluated by dividing them into two parts. Short-range forces are calculated in real space while long-range forces are evaluated in reciprocal space, using a particle-mesh algorithm of order O(N ln N). With a time step of 2 femtoseconds, problems occurring on multi-nanosecond time scales (10^6-10^8 time steps) are accessible. To address a broader range of problems, simulations need to be extended by three orders of magnitude, which requires algorithmic improvements and codes scalable to a large number of processors. Work in this direction is in progress. Two series of simulations are discussed. In one series, it is shown that nonpolar peptides, disordered in water, translocate to the nonpolar interior of the membrane and fold into helical structures. Once in the membrane, the peptides exhibit orientational flexibility with changing conditions, which may have provided a mechanism of transmitting signals between the protocell and its environment. In another series of simulations, the mechanism by which a simple protein channel efficiently mediates proton transport across membranes was investigated. This process is a key step in cellular bioenergetics. In the channel under study, proton transport is gated by four histidines that occlude the channel pore. The simulations identify the mechanisms by which protons move through the gate.
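
    The iterative solution of Newton's equations described above is, in its simplest form, a velocity Verlet update. The sketch below integrates a toy harmonic force with the 2 fs time step quoted in the text; real MD replaces the force routine with the short-range/reciprocal-space machinery described above, so the force model and all values here are illustrative.

        import numpy as np

        def forces(x, k=1.0):
            return -k * x                       # hypothetical harmonic restoring force (N)

        dt = 2e-15                              # 2 fs time step, as in the text
        x = np.array([0.1e-10, -0.2e-10])       # positions (m)
        v = np.zeros_like(x)                    # velocities (m/s)
        m = 1.67e-27                            # ~proton mass (kg)

        f = forces(x)
        for step in range(1000):                # velocity Verlet iteration
            x = x + v * dt + 0.5 * (f / m) * dt**2
            f_new = forces(x)
            v = v + 0.5 * (f + f_new) / m * dt
            f = f_new
        print(x)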

  2. A Computer Program for the Generation of ARIMA Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Noles, Keith O.

    1977-01-01

    The autoregressive integrated moving average (ARIMA) model has been applied to time series data in psychological and educational research. A program is described that generates ARIMA data of a known order. The program enables researchers to explore the statistical properties of ARIMA data and to simulate systems producing time-dependent observations.…
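
    A modern equivalent of such a generator takes only a few lines with statsmodels: draw an ARMA sample of known order, then integrate it d times. The coefficients below are arbitrary examples.

        import numpy as np
        from statsmodels.tsa.arima_process import arma_generate_sample

        np.random.seed(10)
        ar = [1, -0.6]                 # AR polynomial (sign convention: 1 - 0.6 L)
        ma = [1, 0.4]                  # MA polynomial
        d = 1                          # order of integration

        series = arma_generate_sample(ar, ma, nsample=200)
        for _ in range(d):
            series = np.cumsum(series) # integrate once per unit of d
        print(series[:5])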

  3. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers 10 to 120 hours of computation when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: the number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.

  4. An Experimental Design of a Foundational Framework for the Application of Affective Computing to Soaring Flight Simulation and Training

    ERIC Educational Resources Information Center

    Moon, Shannon

    2017-01-01

    In the absence of tools for intelligent tutoring systems for soaring flight simulation training, this study evaluated a framework foundation to measure pilot performance, affect, and physiological response to training in real-time. Volunteers were asked to perform a series of flight tasks selected from Federal Aviation Administration Practical…

  5. FBST for Cointegration Problems

    NASA Astrophysics Data System (ADS)

    Diniz, M.; Pereira, C. A. B.; Stern, J. M.

    2008-11-01

    In order to estimate causal relations, time series econometrics has to be aware of spurious correlation, a problem first mentioned by Yule [21]. To address it, one can work with differenced series or use multivariate models such as VAR or VEC models. In the latter case, the analysed series will present a long-run relation, i.e. a cointegration relation. Even though the Bayesian literature on inference for VAR/VEC models is quite advanced, Bauwens et al. [2] highlight that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results." This paper presents the Full Bayesian Significance Test applied to cointegration rank selection tests in multivariate (VAR/VEC) time series models and shows how to implement it using data sets from the literature as well as simulated ones. A standard non-informative prior is assumed.
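
    The FBST itself is not sketched here, but the rank-selection problem it addresses is easy to set up: simulate a pair of series sharing one stochastic trend and inspect the classical Johansen trace statistics (via statsmodels) that the FBST would replace. All parameters are illustrative.

        import numpy as np
        from statsmodels.tsa.vector_ar.vecm import coint_johansen

        rng = np.random.default_rng(11)
        n = 500
        common = np.cumsum(rng.normal(size=n))            # shared stochastic trend
        y1 = common + rng.normal(scale=0.5, size=n)
        y2 = 0.8 * common + rng.normal(scale=0.5, size=n) # cointegrated with y1

        res = coint_johansen(np.column_stack([y1, y2]), det_order=0, k_ar_diff=1)
        print("trace statistics:", res.lr1)               # compare with res.cvt critical values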

  6. Simulation of Combustion Systems with Realistic g-jitter

    NASA Technical Reports Server (NTRS)

    Mell, William E.; McGrattan, Kevin B.; Baum, Howard R.

    2003-01-01

    In this project a transient, fully three-dimensional computer simulation code was developed to simulate the effects of realistic g-jitter on a number of combustion systems. The simulation code is capable of simulating flame spread on a solid and nonpremixed or premixed gaseous combustion in nonturbulent flow with simple combustion models. Simple combustion models were used to preserve computational efficiency, since this is meant to be an engineering code. The use of sophisticated turbulence models was also not pursued (a simple Smagorinsky-type model can be implemented if deemed appropriate), because if flow velocities are large enough for turbulence to develop in a reduced-gravity combustion scenario, it is unlikely that g-jitter disturbances (in NASA's reduced-gravity facilities) will play an important role in the flame dynamics. Acceleration disturbances of realistic orientation, magnitude, and time dependence can be easily included in the simulation. The simulation algorithm was based on techniques used in an existing large eddy simulation code which has successfully simulated fire dynamics in complex domains. A series of simulations with measured and predicted acceleration disturbances on the International Space Station (ISS) are presented. The results of this series of simulations suggested that a passive isolation system and appropriate scheduling of crew activity would provide a sufficiently "quiet" acceleration environment for spherical diffusion flames.

  7. Dynamical Interpolation of Mesoscale Flows in the TOPEX/Poseidon Diamond Surrounding the U.S. Joint Global Ocean Flux Study Bermuda Atlantic Time-Series Study Site

    NASA Technical Reports Server (NTRS)

    McGillicuddy, Dennis J., Jr.; Kosnyrev, V. K.

    2001-01-01

    An open boundary ocean model is configured in a domain bounded by the four TOPEX/Poseidon (T/P) ground tracks surrounding the US Joint Global Ocean Flux Study Bermuda Atlantic Time-Series Study (BATS) site. This implementation facilitates prescription of model boundary conditions directly from altimetric measurements (both T/P and ERS-2). The expected error characteristics for a domain of this size with periodically updated boundary conditions are established with idealized numerical experiments using simulated data. A hindcast simulation is then constructed using actual altimetric observations during the period October 1992 through September 1998. Quantitative evaluation of the simulation suggests significant skill. The correlation coefficient between predicted sea level anomaly and ERS observations in the model interior is 0.89; that for predicted versus observed dynamic height anomaly based on hydrography at the BATS site is 0.73. Comparison with the idealized experiments suggests that the main source of error in the hindcast is temporal undersampling of the boundary conditions. The hindcast simulation described herein provides a basis for retrospective analysis of BATS observations in the context of the mesoscale eddy field.

  8. Dynamical Interpolation of Mesoscale Flows in the TOPEX/ Poseidon Diamond Surrounding the U.S. Joint Global Ocean Flux Study Bermuda Atlantic Time-Series Study Site

    NASA Technical Reports Server (NTRS)

    McGillicuddy, D. J.; Kosnyrev, V. K.

    2001-01-01

    An open boundary ocean model is configured in a domain bounded by the four TOPEX/Poseidon (T/P) ground tracks surrounding the U.S. Joint Global Ocean Flux Study Bermuda Atlantic Time-series Study (BATS) site. This implementation facilitates prescription of model boundary conditions directly from altimetric measurements (both T/P and ERS-2). The expected error characteristics for a domain of this size with periodically updated boundary conditions are established with idealized numerical experiments using simulated data. A hindcast simulation is then constructed using actual altimetric observations during the period October 1992 through September 1998. Quantitative evaluation of the simulation suggests significant skill. The correlation coefficient between predicted sea level anomaly and ERS observations in the model interior is 0.89; that for predicted versus observed dynamic height anomaly based on hydrography at the BATS site is 0.73. Comparison with the idealized experiments suggests that the main source of error in the hindcast is temporal undersampling of the boundary conditions. The hindcast simulation described herein provides a basis for retrospective analysis of BATS observations in the context of the mesoscale eddy field.

  9. A model-free characterization of recurrences in stationary time series

    NASA Astrophysics Data System (ADS)

    Chicheportiche, Rémy; Chakraborti, Anirban

    2017-05-01

    Study of recurrences in earthquakes, climate, financial time series, etc., is crucial to better forecast disasters and limit their consequences. Most previous phenomenological studies of recurrences have involved only a long-ranged autocorrelation function and ignored the multi-scaling properties induced by potential higher-order dependencies. We argue that copulas provide a natural model-free framework to study non-linear dependencies in time series and related concepts such as recurrences. We find that (i) non-linear dependences do impact both the statistics and dynamics of recurrence times, and (ii) the scaling arguments for the unconditional distribution may not be applicable. Hence, fitting and/or simulating the intertemporal distribution of recurrence intervals is very much system-specific and cannot rely on universal features, in contrast to previous claims. This has important implications for epilepsy prognosis and financial risk management applications.
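
    The raw statistic under study can be extracted in a few lines; the sketch below (illustrative data and threshold, not the authors' analysis) computes recurrence intervals of threshold exceedances in a stationary series:

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.standard_normal(100_000)  # stand-in for returns, temperatures, ...
        q = np.quantile(x, 0.95)          # 95th-percentile threshold
        events = np.flatnonzero(x > q)    # times of exceedances
        intervals = np.diff(events)       # recurrence times between events
        print(intervals.mean())           # ~20 for i.i.d. data at this level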

  10. Stability Estimation of ABWR on the Basis of Noise Analysis

    NASA Astrophysics Data System (ADS)

    Furuya, Masahiro; Fukahori, Takanori; Mizokami, Shinya; Yokoya, Jun

    In order to investigate the stability of a nuclear reactor core with an oxide mixture of uranium and plutonium (MOX) fuel installed, channel stability and regional stability tests were conducted with the SIRIUS-F facility. The SIRIUS-F facility was designed and constructed to provide a highly accurate simulation of thermal-hydraulic (channel) instabilities and coupled thermal-hydraulics-neutronics instabilities of Advanced Boiling Water Reactors (ABWRs). A real-time simulation was performed by modal point kinetics of reactor neutronics and fuel-rod thermal conduction on the basis of a measured void fraction in a reactor core section of the facility. A time series analysis was performed to calculate the decay ratio and resonance frequency from a dominant pole of a transfer function by applying autoregressive (AR) methods to the time series of the core inlet flow rate. Experiments were conducted with the SIRIUS-F facility, which simulates an ABWR with MOX fuel installed. The variations in the decay ratio and resonance frequency among the five common AR methods are within 0.03 and 0.01 Hz, respectively. For this system, the appropriate decay ratio and resonance frequency can be estimated on the basis of the Yule-Walker method with a model order of 30.
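
    The noise-analysis step can be sketched as follows (Python; the data, sampling interval, and pole-selection rule are illustrative assumptions, not the SIRIUS-F analysis code): fit an AR(30) model by the Yule-Walker method, then read the decay ratio and resonance frequency off the dominant oscillatory pole.

        import numpy as np
        from statsmodels.regression.linear_model import yule_walker

        dt = 0.1                                # sampling interval [s], assumed
        rng = np.random.default_rng(2)
        x = rng.standard_normal(20_000)         # stand-in for core inlet flow rate

        rho, sigma = yule_walker(x, order=30)   # AR(30), as in the abstract
        poles = np.roots(np.r_[1, -rho])        # roots of z^30 - rho1*z^29 - ...
        osc = poles[np.abs(poles.imag) > 1e-8]  # keep oscillatory (complex) poles
        z = osc[np.argmax(np.abs(osc))]         # dominant pole
        decay_ratio = np.abs(z) ** (2 * np.pi / abs(np.angle(z)))
        frequency = abs(np.angle(z)) / (2 * np.pi * dt)  # resonance freq. [Hz]
        print(decay_ratio, frequency)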

  11. Using Monte Carlo Simulation to Prioritize Key Maritime Environmental Impacts of Port Infrastructure

    NASA Astrophysics Data System (ADS)

    Perez Lespier, L. M.; Long, S.; Shoberg, T.

    2016-12-01

    This study creates a Monte Carlo simulation model to prioritize key indicators of environmental impacts resulting from maritime port infrastructure. Data inputs derived from Landsat imagery, government databases, and industry reports are used to create the simulation. Results are validated by subject matter experts and compared with those returned from time-series regression to determine goodness of fit. The Port of Prince Rupert, Canada, is used as the location for the study.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan S; Bugbee, Bruce; Gotseff, Peter

    Capturing technical and economic impacts of solar photovoltaics (PV) and other distributed energy resources (DERs) on electric distribution systems can require high-time-resolution (e.g. 1 minute), long-duration (e.g. 1 year) simulations. However, such simulations can be computationally prohibitive, particularly when including complex control schemes in quasi-steady-state time series (QSTS) simulation. Various approaches have been used in the literature to down-select representative time segments (e.g. days), but typically these are best suited for lower time resolutions or consider only a single data stream (e.g. PV production) for selection. We present a statistical approach that combines stratified sampling and bootstrapping to select representative days while also providing a simple method to reassemble annual results. We describe the approach in the context of a recent study with a utility partner. This approach enables much faster QSTS analysis by simulating only a subset of days, while maintaining accurate annual estimates.
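
    The idea can be sketched in a few lines (Python; the strata, sample sizes, and daily driver variable are invented, and the authors' actual procedure is more elaborate): stratify days by a driver variable, sample a few days per stratum, and weight stratum results back up to an annual estimate.

        import numpy as np

        rng = np.random.default_rng(3)
        daily_pv = rng.gamma(4.0, 1.5, size=365)   # illustrative daily PV energy
        edges = np.quantile(daily_pv, [0, .25, .5, .75, 1])
        strata = np.clip(np.digitize(daily_pv, edges) - 1, 0, 3)

        chosen = []                                # representative days
        for s in range(4):
            days = np.flatnonzero(strata == s)
            chosen.extend(rng.choice(days, size=3, replace=False))

        # Reassemble an annual total: weight each sampled day by stratum size.
        annual = sum(daily_pv[d] * (strata == strata[d]).sum() / 3 for d in chosen)
        print(annual, daily_pv.sum())              # estimate vs. true annual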

  13. Time series inversion of spectra from ground-based radiometers

    NASA Astrophysics Data System (ADS)

    Christensen, O. M.; Eriksson, P.

    2013-07-01

    Retrieving time series of atmospheric constituents from ground-based spectrometers often requires different temporal averaging depending on the altitude region in focus. This can lead to several datasets existing for one instrument, which complicates validation and comparisons between instruments. This paper puts forth a possible solution by incorporating the temporal domain into the maximum a posteriori (MAP) retrieval algorithm. The state vector is increased to include measurements spanning a time period, and the temporal correlations between the true atmospheric states are explicitly specified in the a priori uncertainty matrix. This allows the MAP method to effectively select the best temporal smoothing for each altitude, removing the need for several datasets to cover different altitudes. The method is compared to traditional averaging of spectra using a simulated retrieval of water vapour in the mesosphere. The simulations show that the method offers a significant advantage compared to the traditional method, extending the sensitivity an additional 10 km upwards without reducing the temporal resolution at lower altitudes. The method is also tested on the Onsala Space Observatory (OSO) water vapour microwave radiometer confirming the advantages found in the simulation. Additionally, it is shown how the method can interpolate data in time and provide diagnostic values to evaluate the interpolated data.
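
    The key idea is to stack the states for several measurement times into one state vector and give the a priori covariance an explicit temporal correlation, so that the standard MAP gain G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 smooths in time. A minimal sketch with invented dimensions (not the OSO retrieval code):

        import numpy as np

        n_alt, n_t = 20, 6                 # altitudes x time steps (illustrative)
        t = np.arange(n_t)
        corr_t = np.exp(-np.abs(t[:, None] - t[None, :]) / 2.0)  # temporal corr.
        Sa = np.kron(corr_t, np.eye(n_alt))   # joint a priori covariance

        rng = np.random.default_rng(4)
        K = 0.1 * rng.standard_normal((n_alt * n_t, n_alt * n_t))  # toy Jacobian
        Se_inv = np.eye(n_alt * n_t) / 0.5**2   # measurement-noise precision
        G = np.linalg.solve(K.T @ Se_inv @ K + np.linalg.inv(Sa), K.T @ Se_inv)
        # MAP update: x_hat = xa + G @ (y - K @ xa)
        print(G.shape)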

  14. Sound field simulation and acoustic animation in urban squares

    NASA Astrophysics Data System (ADS)

    Kang, Jian; Meng, Yan

    2005-04-01

    Urban squares are important components of cities, and the acoustic environment is important for their usability. While models and formulae for predicting the sound field in urban squares are important for their soundscape design and improvement, acoustic animation tools would be of great importance for designers as well as for the public participation process, given that below a certain sound level, the soundscape evaluation depends mainly on the type of sounds rather than the loudness. This paper first briefly introduces acoustic simulation models developed for urban squares, as well as empirical formulae derived from a series of simulations. It then presents an acoustic animation tool currently being developed. In urban squares there are multiple dynamic sound sources, so that the computation time becomes a main concern. Nevertheless, the requirements for acoustic animation in urban squares are relatively low compared to auditoria. As a result, it is important to simplify the simulation process and algorithms. Based on a series of subjective tests in a virtual reality environment with various simulation parameters, a fast simulation method with acceptable accuracy has been explored. [Work supported by the European Commission.]

  15. Detecting Abrupt Changes in a Piecewise Locally Stationary Time Series

    PubMed Central

    Last, Michael; Shumway, Robert

    2007-01-01

    Non-stationary time series arise in many settings, such as seismology, speech-processing, and finance. In many of these settings we are interested in points where a model of local stationarity is violated. We consider the problem of how to detect these change-points, which we identify by finding sharp changes in the time-varying power spectrum. Several different methods are considered, and we find that the symmetrized Kullback-Leibler information discrimination performs best in simulation studies. We derive asymptotic normality of our test statistic, and consistency of estimated change-point locations. We then demonstrate the technique on the problem of detecting arrival phases in earthquakes. PMID:19190715
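
    The discrimination statistic can be sketched as follows (Python; the spectral estimator, window length, and test series are illustrative, not the authors' exact estimator): compare spectral estimates from two adjacent windows with a symmetrized Kullback-Leibler measure, which peaks when the windows straddle a change-point.

        import numpy as np
        from scipy.signal import welch, lfilter

        def sym_kl(a, b):
            _, p1 = welch(a, nperseg=128)
            _, p2 = welch(b, nperseg=128)
            r = p1[1:] / p2[1:]              # skip the DC bin
            return np.mean(r + 1.0 / r - 2.0)

        rng = np.random.default_rng(5)
        white = rng.standard_normal(1000)                              # regime 1
        red = lfilter([1.0], [1.0, -0.9], rng.standard_normal(1000))   # regime 2
        x = np.r_[white, red]
        print(sym_kl(x[:1000], x[1000:]))    # large value flags the change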

  16. Flight investigation of a four-dimensional terminal area guidance system for STOL aircraft

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Hardy, G. H.

    1981-01-01

    A series of flight tests and fast-time simulations were conducted, using the augmentor wing jet STOL research aircraft and the STOLAND 4D-RNAV system, to add to the growing data base of 4D-RNAV system performance capabilities. To obtain statistically meaningful results, the limited amount of flight data was supplemented by a statistically significant amount of data obtained from fast-time simulation. The results of these tests are reported. Included are comparisons of the 4D-RNAV estimated winds with actual winds encountered in flight, as well as data on along-track navigation and guidance errors, and time-of-arrival errors at the final approach waypoint. In addition, a slight improvement to the STOLAND 4D-RNAV system is proposed and demonstrated using the fast-time simulation.

  17. In and out of glacial extremes by way of dust-climate feedbacks.

    PubMed

    Shaffer, Gary; Lambert, Fabrice

    2018-02-27

    Mineral dust aerosols cool Earth directly by scattering incoming solar radiation and indirectly by affecting clouds and biogeochemical cycles. Recent Earth history has featured quasi-100,000-y, glacial-interglacial climate cycles with lower/higher temperatures and greenhouse gas concentrations during glacials/interglacials. Global average, glacial maxima dust levels were more than 3 times higher than during interglacials, thereby contributing to glacial cooling. However, the timing, strength, and overall role of dust-climate feedbacks over these cycles remain unclear. Here we use dust deposition data and temperature reconstructions from ice sheet, ocean sediment, and land archives to construct dust-climate relationships. Although absolute dust deposition rates vary greatly among these archives, they all exhibit striking, nonlinear increases toward coldest glacial conditions. From these relationships and reconstructed temperature time series, we diagnose glacial-interglacial time series of dust radiative forcing and iron fertilization of ocean biota, and use these time series to force Earth system model simulations. The results of these simulations show that dust-climate feedbacks, perhaps set off by orbital forcing, push the system in and out of extreme cold conditions such as glacial maxima. Without these dust effects, glacial temperature and atmospheric CO2 concentrations would have been much more stable at higher, intermediate glacial levels. The structure of residual anomalies over the glacial-interglacial climate cycles after subtraction of dust effects provides constraints for the strength and timing of other processes governing these cycles. Copyright © 2018 the Author(s). Published by PNAS.

  18. In and out of glacial extremes by way of dust‑climate feedbacks

    NASA Astrophysics Data System (ADS)

    Shaffer, Gary; Lambert, Fabrice

    2018-03-01

    Mineral dust aerosols cool Earth directly by scattering incoming solar radiation and indirectly by affecting clouds and biogeochemical cycles. Recent Earth history has featured quasi-100,000-y, glacial‑interglacial climate cycles with lower/higher temperatures and greenhouse gas concentrations during glacials/interglacials. Global average, glacial maxima dust levels were more than 3 times higher than during interglacials, thereby contributing to glacial cooling. However, the timing, strength, and overall role of dust‑climate feedbacks over these cycles remain unclear. Here we use dust deposition data and temperature reconstructions from ice sheet, ocean sediment, and land archives to construct dust‑climate relationships. Although absolute dust deposition rates vary greatly among these archives, they all exhibit striking, nonlinear increases toward coldest glacial conditions. From these relationships and reconstructed temperature time series, we diagnose glacial‑interglacial time series of dust radiative forcing and iron fertilization of ocean biota, and use these time series to force Earth system model simulations. The results of these simulations show that dust‑climate feedbacks, perhaps set off by orbital forcing, push the system in and out of extreme cold conditions such as glacial maxima. Without these dust effects, glacial temperature and atmospheric CO2 concentrations would have been much more stable at higher, intermediate glacial levels. The structure of residual anomalies over the glacial‑interglacial climate cycles after subtraction of dust effects provides constraints for the strength and timing of other processes governing these cycles.

  19. In and out of glacial extremes by way of dust−climate feedbacks

    PubMed Central

    Shaffer, Gary; Lambert, Fabrice

    2018-01-01

    Mineral dust aerosols cool Earth directly by scattering incoming solar radiation and indirectly by affecting clouds and biogeochemical cycles. Recent Earth history has featured quasi-100,000-y, glacial−interglacial climate cycles with lower/higher temperatures and greenhouse gas concentrations during glacials/interglacials. Global average, glacial maxima dust levels were more than 3 times higher than during interglacials, thereby contributing to glacial cooling. However, the timing, strength, and overall role of dust−climate feedbacks over these cycles remain unclear. Here we use dust deposition data and temperature reconstructions from ice sheet, ocean sediment, and land archives to construct dust−climate relationships. Although absolute dust deposition rates vary greatly among these archives, they all exhibit striking, nonlinear increases toward coldest glacial conditions. From these relationships and reconstructed temperature time series, we diagnose glacial−interglacial time series of dust radiative forcing and iron fertilization of ocean biota, and use these time series to force Earth system model simulations. The results of these simulations show that dust−climate feedbacks, perhaps set off by orbital forcing, push the system in and out of extreme cold conditions such as glacial maxima. Without these dust effects, glacial temperature and atmospheric CO2 concentrations would have been much more stable at higher, intermediate glacial levels. The structure of residual anomalies over the glacial−interglacial climate cycles after subtraction of dust effects provides constraints for the strength and timing of other processes governing these cycles. PMID:29440407

  20. Memory interface simulator: A computer design aid

    NASA Technical Reports Server (NTRS)

    Taylor, D. S.; Williams, T.; Weatherbee, J. E.

    1972-01-01

    Results are presented of a study conducted with a digital simulation model being used in the design of the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. The model simulates the activity involved as instructions are fetched from random access memory for execution in one of the system central processing units. A series of model runs measured instruction execution time under various assumptions pertaining to the CPUs and the interface between the CPUs and RAM. Design tradeoffs are presented in the following areas: bus widths, CPU microprogram read-only memory cycle time, multiple instruction fetch, and instruction mix.

  1. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.

  2. Assessment of start-up mechanisms for anaerobic fluidized bed reactor in series based on mathematical simulation

    NASA Astrophysics Data System (ADS)

    Sudibyo, Hanifrahmawan; Guntama, Dody; Budhijanto, Wiratni

    2017-05-01

    Anaerobic digestion is associated with long hydraulic residence times and hence huge reactor volumes, especially for a high-rate input to the reactor. To overcome this major drawback, one possibility is to optimize the reactor configuration and start-up mechanism. This study aimed to determine the most promising start-up mechanism for anaerobic digesters in series, i.e. the one with the shortest hydraulic residence time to reach the highest biogas production rate. The reactor studied is the anaerobic fluidized bed reactor (AFBR), known as the most efficient reactor for high organic loading rates; the case studied is landfill leachate digestion. Although reactor optimization can be conducted experimentally, it is expensive and time-consuming, so this study proposes using mathematical modeling to screen the possibilities and identify the best options to be verified experimentally. A kinetic study of landfill leachate anaerobic digestion was first conducted to describe the rates of microbial growth and substrate consumption. The kinetic constants obtained from this batch experiment were then used in the mathematical model representing the AFBR. Several start-up mechanisms were simulated. In the first mechanism, all digesters were started simultaneously. In the second, each digester was run to steady state before the next digester was started. The third was a start-up scenario for a single reactor, as opposed to the previous two mechanisms. All three mechanisms were simulated both for a once-through stream and for recycle of a portion of the reactor effluent. The simulation results were used to evaluate each mechanism based on the hydraulic residence time required for all digesters in series to reach steady state, the extent of pollutant removal, and the rate of biogas production. When high sCOD removal is required, the second mechanism emerged as the best.

  3. 3D Simulations of the ``Keyhole'' Hohlraum for Shock Timing on NIF

    NASA Astrophysics Data System (ADS)

    Robey, H. F.; Marinak, M. M.; Munro, D. H.; Jones, O. S.

    2007-11-01

    Ignition implosions planned for the National Ignition Facility (NIF) require a pulse shape with a carefully designed series of steps, which launch a series of shocks through the ablator and DT fuel. The relative timing of these shocks must be tuned to better than +/- 100 ps to maintain the DT fuel on a sufficiently low adiabat. To meet these requirements, pre-ignition tuning experiments using a modified hohlraum geometry are being planned. This modified geometry, known as the ``keyhole'' hohlraum, adds a re-entrant gold cone, which passes through the hohlraum and capsule walls, to provide an optical line-of-sight to directly measure the shocks as they break out of the ablator. In order to assess the surrogacy of this modified geometry, 3D simulations using HYDRA [1] have been performed. The drive conditions and the resulting effect on shock timing in the keyhole hohlraum will be compared with the corresponding results for the standard ignition hohlraum. [1] M.M. Marinak, et al., Phys. Plasmas 8, 2275 (2001).

  4. An open Markov chain scheme model for a credit consumption portfolio fed by ARIMA and SARMA processes

    NASA Astrophysics Data System (ADS)

    Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.

    2016-06-01

    We introduce a schematic formalism for the time evolution of a random population entering some set of classes, such that each member of the population evolves among these classes according to a scheme based on a Markov chain model. We consider that the flow of incoming members is modeled by a time series, and we detail the time series structure of the elements in each of the classes. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways, namely as an ARIMA process and as a deterministic sigmoid-type trend plus a SARMA process for the residuals, we simulate the behavior of the population and compare the results. We find that the second method describes the behavior of the population more accurately when compared to the observed values in a direct simulation of the Markov chain.
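
    A toy version of the scheme (dimensions, classes, and parameters invented) is easy to write down: new members arrive according to a fitted time-series model and then migrate between classes following a Markov transition matrix.

        import numpy as np

        P = np.array([[0.90, 0.08, 0.02],   # monthly transitions between three
                      [0.10, 0.80, 0.10],   # credit classes (e.g. current /
                      [0.00, 0.05, 0.95]])  # late / default); rows sum to 1

        rng = np.random.default_rng(6)
        arrivals = np.maximum(100 + 10 * rng.standard_normal(24), 0)
        # 'arrivals' stands in for the output of the fitted ARIMA/SARMA model.
        pop, history = np.zeros(3), []
        for a in arrivals:
            pop = pop @ P                   # existing members migrate
            pop[0] += a                     # newcomers enter the first class
            history.append(pop.copy())
        print(pop.round(1))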

  5. Tuning the Voices of a Choir: Detecting Ecological Gradients in Time-Series Populations.

    PubMed

    Buras, Allan; van der Maaten-Theunissen, Marieke; van der Maaten, Ernst; Ahlgrimm, Svenja; Hermann, Philipp; Simard, Sonia; Heinrich, Ingo; Helle, Gerd; Unterseher, Martin; Schnittler, Martin; Eusemann, Pascal; Wilmking, Martin

    2016-01-01

    This paper introduces a new approach, the Principal Component Gradient Analysis (PCGA), to detect ecological gradients in time-series populations, i.e. several time series originating from different individuals of a population. Detection of ecological gradients is of particular importance when dealing with time series from heterogeneous populations which express differing trends. PCGA makes use of the polar coordinates of loadings from the first two axes obtained by principal component analysis (PCA) to define groups of similar trends. Based on the mean inter-series correlation (rbar), the gain in a common underlying signal obtained from PCGA groups is quantified using Monte Carlo simulations. For validation, PCGA is compared to three other existing approaches. Focusing on dendrochronological examples, PCGA is shown to correctly determine population gradients and, in particular cases, to be advantageous over the other methods considered. Furthermore, PCGA groups in each example enhanced the strength of a common underlying signal comparably well to hierarchical cluster analysis. Our results indicate that PCGA potentially allows for a better understanding of the mechanisms causing time-series population gradients, as well as for objectively enhancing the performance of climate transfer functions in dendroclimatology. While our examples highlight the relevance of PCGA to the field of dendrochronology, we believe that other disciplines working with data of comparable structure may also benefit from PCGA.
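
    The core step can be sketched as follows (Python; the population, the grouping rule, and the number of groups are invented, and PCGA's actual grouping is more refined): compute each series' loadings on the first two principal components and group series by the polar angle of those loadings.

        import numpy as np

        rng = np.random.default_rng(7)
        n_series, n_years = 30, 100
        common = np.cumsum(rng.standard_normal(n_years))     # shared signal
        X = np.array([np.sin(i / 10) * common + rng.standard_normal(n_years)
                      for i in range(n_series)])             # built-in gradient

        Xc = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)
        _, _, Vt = np.linalg.svd(Xc.T, full_matrices=False)  # PCA via SVD
        load = Vt[:2].T                                      # loadings on PC1, PC2
        theta = np.arctan2(load[:, 1], load[:, 0])           # polar angle
        groups = np.clip(np.digitize(theta,
                         np.linspace(theta.min(), theta.max(), 4)), 1, 3)
        print(np.bincount(groups))                           # group sizes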

  6. Tuning the Voices of a Choir: Detecting Ecological Gradients in Time-Series Populations

    PubMed Central

    Buras, Allan; van der Maaten-Theunissen, Marieke; van der Maaten, Ernst; Ahlgrimm, Svenja; Hermann, Philipp; Simard, Sonia; Heinrich, Ingo; Helle, Gerd; Unterseher, Martin; Schnittler, Martin; Eusemann, Pascal; Wilmking, Martin

    2016-01-01

    This paper introduces a new approach, the Principal Component Gradient Analysis (PCGA), to detect ecological gradients in time-series populations, i.e. several time series originating from different individuals of a population. Detection of ecological gradients is of particular importance when dealing with time series from heterogeneous populations which express differing trends. PCGA makes use of the polar coordinates of loadings from the first two axes obtained by principal component analysis (PCA) to define groups of similar trends. Based on the mean inter-series correlation (rbar), the gain in a common underlying signal obtained from PCGA groups is quantified using Monte Carlo simulations. For validation, PCGA is compared to three other existing approaches. Focusing on dendrochronological examples, PCGA is shown to correctly determine population gradients and, in particular cases, to be advantageous over the other methods considered. Furthermore, PCGA groups in each example enhanced the strength of a common underlying signal comparably well to hierarchical cluster analysis. Our results indicate that PCGA potentially allows for a better understanding of the mechanisms causing time-series population gradients, as well as for objectively enhancing the performance of climate transfer functions in dendroclimatology. While our examples highlight the relevance of PCGA to the field of dendrochronology, we believe that other disciplines working with data of comparable structure may also benefit from PCGA. PMID:27467508

  7. Forward Period Analysis Method of the Periodic Hamiltonian System.

    PubMed

    Wang, Pengfei

    2016-01-01

    Using the forward period analysis (FPA), we obtain the periods of a Morse oscillator and of a mathematical pendulum system to an accuracy of 100 significant digits. From these results, the long-term solutions on [0, 10^60] (time units), ranging from the Planck time to the age of the universe, are computed reliably and quickly with a parallel multiple-precision Taylor series (PMT) scheme. The application of FPA to periodic systems can greatly reduce the computation time of long-term reliable simulations. This scheme provides an efficient way to generate reference solutions against which long-term simulations using other schemes can be tested.

  8. Distinguishing time-delayed causal interactions using convergent cross mapping

    PubMed Central

    Ye, Hao; Deyle, Ethan R.; Gilarranz, Luis J.; Sugihara, George

    2015-01-01

    An important problem across many scientific fields is the identification of causal effects from observational data alone. Recent methods (convergent cross mapping, CCM) have made substantial progress on this problem by applying the idea of nonlinear attractor reconstruction to time series data. Here, we expand upon the technique of CCM by explicitly considering time lags. Applying this extended method to representative examples (model simulations, a laboratory predator-prey experiment, temperature and greenhouse gas reconstructions from the Vostok ice core, and long-term ecological time series collected in the Southern California Bight), we demonstrate the ability to identify different time-delayed interactions, distinguish between synchrony induced by strong unidirectional forcing and true bidirectional causality, and resolve transitive causal chains. PMID:26435402

  9. Nonlinear time-series-based adaptive control applications

    NASA Technical Reports Server (NTRS)

    Mohler, R. R.; Rajkumar, V.; Zakrzewski, R. R.

    1991-01-01

    A control design methodology based on a nonlinear time-series reference model is presented. Highly nonlinear simulations indicate that such designs successfully stabilize both troublesome aircraft maneuvers undergoing large changes in angle of attack and large electric power transients due to line faults. In both applications, the nonlinear controller was significantly better than the corresponding linear adaptive controller. For the electric power network, a flexible AC transmission system with series capacitor power feedback control is studied. A bilinear autoregressive moving average reference model is identified from system data, and the feedback control is manipulated according to a desired reference state. The control is optimized according to a predictive one-step quadratic performance index. A similar algorithm is derived for control of rapid changes in aircraft angle of attack over a normally unstable flight regime. In the latter case, however, a generalization of a bilinear time-series model reference includes quadratic and cubic terms in angle of attack.

  10. Time Series Observations of the 2015 Eclipse of b Persei (not beta Persei) (Abstract)

    NASA Astrophysics Data System (ADS)

    Collins, D. F.

    2016-06-01

    (Abstract only) The bright (V = 4.6) ellipsoidal variable b Persei consists of a close non-eclipsing binary pair that shows a nearly sinusoidal light curve with a ~1.5 day period. The system also contains a third star that orbits the binary pair every 702 days. AAVSO observers recently detected the first-ever optical eclipse of the A-B binary pair by the third star as a series of snapshots (D. Collins, R. Zavala, J. Sanborn, AAVSO Spring Meeting, 2013); the abstract is published in Collins, JAAVSO, 41, 2, 391 (2013), with b Per misprinted as β Per therein. A follow-up eclipse campaign in mid-January 2015 recorded time-series observations. These new time-series observations clearly show multiple ingresses and egresses of each component of the binary system by the third star over the eclipse duration of 2 to 3 days. A simulation of the eclipse was created, and orbital and some astrophysical parameters were adjusted within constraints to give a reasonable fit to the observed light curve.

  11. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most often used statistical methods in climate science. Various methods are used to estimate a confidence interval to support the correlation point estimate; many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of the confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, with a coverage error that is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models which simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables at a lag of 10 years, which is roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
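
    The building block can be sketched as follows (Python; the calibration loop is omitted, a percentile interval stands in for the Student's t construction used in PearsonT3, and the block length and data are illustrative): a pairwise moving block bootstrap preserves serial correlation while resampling.

        import numpy as np

        rng = np.random.default_rng(8)
        n, L = 200, 10                        # sample size, block length
        t = np.arange(n)
        x = np.sin(t / 20) + rng.standard_normal(n)
        y = np.sin(t / 20) + rng.standard_normal(n)

        def mbb_corr_ci(x, y, L, B=2000):
            n = len(x)
            starts = np.arange(n - L + 1)
            reps = np.empty(B)
            for b in range(B):
                s = rng.choice(starts, size=int(np.ceil(n / L)))
                idx = (s[:, None] + np.arange(L)).ravel()[:n]  # paired blocks
                reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]
            return np.quantile(reps, [0.025, 0.975])

        print(np.corrcoef(x, y)[0, 1], mbb_corr_ci(x, y, L))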

  12. Towards seasonal forecasting of malaria in India.

    PubMed

    Lauderdale, Jonathan M; Caminade, Cyril; Heath, Andrew E; Jones, Anne E; MacLeod, David A; Gouda, Krushna C; Murty, Upadhyayula Suryanarayana; Goswami, Prashant; Mutheneni, Srinivasa R; Morse, Andrew P

    2014-08-10

    Malaria presents a public health challenge despite extensive intervention campaigns. A 30-year hindcast of the climatic suitability for malaria transmission in India is presented, using meteorological variables from a state-of-the-art seasonal forecast model to drive a process-based, dynamic disease model. The spatial distribution and seasonal cycles of temperature and precipitation from the forecast model are compared to three observationally based meteorological datasets. These time series are then used to drive the disease model, producing a simulated forecast of malaria and three synthetic malaria time series that are qualitatively compared to contemporary and pre-intervention malaria estimates. The area under the Relative Operating Characteristic (ROC) curve is calculated as a quantitative metric of forecast skill, comparing the forecast to the meteorologically driven synthetic malaria time series. The forecast shows probabilistic skill in predicting the spatial distribution of Plasmodium falciparum incidence when compared to the simulated meteorologically driven malaria time series, particularly where modelled incidence shows high seasonal and interannual variability, such as in Orissa, West Bengal, and Jharkhand (north-east India), and Gujarat, Rajasthan, Madhya Pradesh and Maharashtra (north-west India). Focusing on these two regions, the malaria forecast is able to distinguish between years of "high", "above average" and "low" malaria incidence in the peak malaria transmission seasons, with more than 70% sensitivity and a statistically significant area under the ROC curve. These results are encouraging given that the three-month forecast lead time used is well in excess of the target for early warning systems adopted by the World Health Organization. This approach could form the basis of an operational system to identify the probability of regional malaria epidemics, allowing advanced and targeted allocation of resources for combatting malaria in India.

  13. Evenly spaced Detrended Fluctuation Analysis: Selecting the number of points for the diffusion plot

    NASA Astrophysics Data System (ADS)

    Liddy, Joshua J.; Haddad, Jeffrey M.

    2018-02-01

    Detrended Fluctuation Analysis (DFA) has become a widely used tool to examine the correlation structure of a time series and has provided insights into neuromuscular health and disease states. As the popularity of utilizing DFA in the human behavioral sciences has grown, understanding its limitations and how to properly determine parameters is becoming increasingly important. DFA examines the correlation structure of variability in a time series by computing α, the slope of the log SD vs. log n diffusion plot. When using the traditional DFA algorithm, the timescales, n, are often selected as a set of integers between a minimum and maximum length based on the number of data points in the time series. This produces non-uniformly distributed values of n in logarithmic scale, which influences the estimation of α due to a disproportionate weighting of the long-timescale regions of the diffusion plot. Recently, the evenly spaced DFA and evenly spaced average DFA algorithms were introduced. Both algorithms compute α by selecting k points for the diffusion plot based on the minimum and maximum timescales of interest, and improve the consistency of α estimates for simulated fractional Gaussian noise and fractional Brownian motion time series. Two issues that remain unaddressed are (1) how to select k and (2) whether the evenly spaced DFA algorithms show similar benefits when assessing human behavioral data. We manipulated k and examined its effects on the accuracy, consistency, and confidence limits of α in simulated and experimental time series. We demonstrate that the accuracy and consistency of α are relatively unaffected by the selection of k. However, the confidence limits of α narrow as k increases, dramatically reducing measurement uncertainty for single trials. We provide guidelines for selecting k and discuss potential uses of the evenly spaced DFA algorithms when assessing human behavioral data.
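
    A minimal sketch of DFA with k evenly log-spaced timescales follows (Python; the scale range, k, and linear detrending are the user's choices here, not prescriptions from the paper):

        import numpy as np

        def dfa_alpha(x, n_min=4, n_max=None, k=20):
            x = np.asarray(x, float)
            N = len(x)
            n_max = n_max or N // 4
            y = np.cumsum(x - x.mean())                   # integrated profile
            scales = np.unique(np.geomspace(n_min, n_max, k).astype(int))
            F = []
            for n in scales:
                m = N // n
                segs = y[:m * n].reshape(m, n)
                t = np.arange(n)
                coef = np.polyfit(t, segs.T, 1)           # one line per window
                resid = segs - (coef[0][:, None] * t + coef[1][:, None])
                F.append(np.sqrt((resid ** 2).mean()))    # fluctuation F(n)
            return np.polyfit(np.log(scales), np.log(F), 1)[0]  # slope = alpha

        print(dfa_alpha(np.random.default_rng(9).standard_normal(4096)))  # ~0.5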

  14. Controlling for seasonal patterns and time varying confounders in time-series epidemiological models: a simulation study.

    PubMed

    Perrakis, Konstantinos; Gryparis, Alexandros; Schwartz, Joel; Le Tertre, Alain; Katsouyanni, Klea; Forastiere, Francesco; Stafoggia, Massimo; Samoli, Evangelia

    2014-12-10

    An important topic when estimating the effect of air pollutants on human health is choosing the best method to control for seasonal patterns and time varying confounders, such as temperature and humidity. Semi-parametric Poisson time-series models include smooth functions of calendar time and weather effects to control for potential confounders. Case-crossover (CC) approaches are considered efficient alternatives that control seasonal confounding by design and allow inclusion of smooth functions of weather confounders through their equivalent Poisson representations. We evaluate both methodological designs with respect to seasonal control and compare spline-based approaches, using natural splines and penalized splines, and two time-stratified CC approaches. For the spline-based methods, we consider fixed degrees of freedom, minimization of the partial autocorrelation function, and general cross-validation as smoothing criteria. Issues of model misspecification with respect to weather confounding are investigated under simulation scenarios, which allow quantifying omitted, misspecified, and irrelevant-variable bias. The simulations are based on fully parametric mechanisms designed to replicate two datasets with different mortality and atmospheric patterns. Overall, minimum partial autocorrelation function approaches provide more stable results for high mortality counts and strong seasonal trends, whereas natural splines with fixed degrees of freedom perform better for low mortality counts and weak seasonal trends followed by the time-season-stratified CC model, which performs equally well in terms of bias but yields higher standard errors. Copyright © 2014 John Wiley & Sons, Ltd.

  15. Analysis of financial time series using multiscale entropy based on skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Shang, Pengjian

    2018-01-01

    There is great interest in studying the dynamic characteristics of financial time series of daily stock closing prices in different regions. Multiscale entropy (MSE) is effective mainly in quantifying the complexity of time series on different time scales. This paper applies a new method for assessing financial stability from the perspective of MSE based on skewness and kurtosis. To better understand the superior coarse-graining method for different kinds of stock indexes, we take into account the developmental characteristics of the stock markets of three continents: Asia, North America and Europe. We study the volatility of different financial time series and analyze the similarities and differences of the coarse-grained time series from the perspective of skewness and kurtosis. A correspondence between the entropy values of stock sequences and the degree of stability of financial markets was observed. Three stocks with particular characteristics among the eight stock sequences are discussed, and their behavior matches the graphical results of applying the MSE method. A comparative study is conducted by simulation over synthetic and real-world data. Results show that the modified method is more responsive to changes in dynamics and carries more valuable information, and at the same time the skewness- and kurtosis-based discrimination proves both clear and more stable.
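
    One plausible reading of the coarse-graining idea can be sketched as follows (Python; the window statistic, scales, and tolerance are illustrative assumptions, not the authors' exact definitions): replace the usual window mean in multiscale entropy with a higher-moment statistic such as skewness before computing sample entropy.

        import numpy as np
        from scipy.stats import skew

        def coarse_grain(x, scale, stat=skew):
            m = len(x) // scale
            return stat(x[:m * scale].reshape(m, scale), axis=1)

        def sample_entropy(x, m=2, r=0.2):
            x = (x - x.mean()) / x.std()
            def matches(mm):
                w = np.lib.stride_tricks.sliding_window_view(x, mm)
                d = np.abs(w[:, None] - w[None, :]).max(-1)
                return (d < r).sum() - len(w)         # exclude self-matches
            return -np.log(matches(m + 1) / matches(m))

        x = np.random.default_rng(10).standard_normal(6_000)
        for scale in (5, 10, 20):                     # skewness needs several
            print(scale, sample_entropy(coarse_grain(x, scale)))  # points/window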

  16. Fractal analysis of the short time series in a visibility graph method

    NASA Astrophysics Data System (ADS)

    Li, Ruixue; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Chen, Yingyuan

    2016-05-01

    The aim of this study is to evaluate the performance of the visibility graph (VG) method on short fractal time series. In this paper, time series of fractional Brownian motion (fBm), characterized by different Hurst exponents H, are simulated and then mapped into scale-free visibility graphs, whose degree distributions show the power-law form. Maximum likelihood estimation (MLE) is applied to estimate the power-law index of the degree distribution, and in this process the Kolmogorov-Smirnov (KS) statistic is used to test the performance of the estimation, aiming to avoid the influence of the droop head and heavy tail of the degree distribution. As a result, we find that MLE gives an optimal estimation of the power-law index when the KS statistic reaches its first local minimum. Based on the results from the KS statistic, the relationship between the power-law index and the Hurst exponent is reexamined and then amended to suit short time series. Thus, a method combining VG, MLE and the KS statistic is proposed to estimate Hurst exponents from short time series. Lastly, this paper offers an example to verify the effectiveness of the combined method. In addition, the corresponding results show that the VG can provide a reliable estimation of Hurst exponents.
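
    The pipeline can be sketched on a short series as follows (Python; k_min is fixed by hand here, whereas the paper selects it with the KS criterion, and the continuous-approximation MLE stands in for the discrete one):

        import numpy as np

        def visibility_degrees(x):
            n, deg = len(x), np.zeros(len(x), int)
            for a in range(n):
                for b in range(a + 1, n):
                    t = np.arange(a + 1, b)
                    line = x[a] + (x[b] - x[a]) * (t - a) / (b - a)
                    if b == a + 1 or np.all(x[t] < line):  # unobstructed view
                        deg[a] += 1
                        deg[b] += 1
            return deg

        x = np.cumsum(np.random.default_rng(11).standard_normal(300))  # fBm-like
        k = visibility_degrees(x)
        kmin = 4                             # by hand; KS-selected in the paper
        tail = k[k >= kmin]
        alpha = 1 + len(tail) / np.log(tail / kmin).sum()  # continuous MLE
        print(alpha)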

  17. Integration of Steady-State and Temporal Gene Expression Data for the Inference of Gene Regulatory Networks

    PubMed Central

    Wang, Yi Kan; Hurley, Daniel G.; Schnell, Santiago; Print, Cristin G.; Crampin, Edmund J.

    2013-01-01

    We develop a new regression algorithm, cMIKANA, for inference of gene regulatory networks from combinations of steady-state and time-series gene expression data. Using simulated gene expression datasets to assess the accuracy of reconstructing gene regulatory networks, we show that steady-state and time-series data sets can successfully be combined to identify gene regulatory interactions using the new algorithm. Inferring gene networks from combined data sets was found to be advantageous when using noisy measurements collected with either lower sampling rates or a limited number of experimental replicates. We illustrate our method by applying it to a microarray gene expression dataset from human umbilical vein endothelial cells (HUVECs) which combines time series data from treatment with growth factor TNF and steady state data from siRNA knockdown treatments. Our results suggest that the combination of steady-state and time-series datasets may provide better prediction of RNA-to-RNA interactions, and may also reveal biological features that cannot be identified from dynamic or steady state information alone. Finally, we consider the experimental design of genomics experiments for gene regulatory network inference and show that network inference can be improved by incorporating steady-state measurements with time-series data. PMID:23967277

  18. Hybrid model for forecasting time series with trend, seasonal and calendar variation patterns

    NASA Astrophysics Data System (ADS)

    Suhartono; Rahayu, S. P.; Prastyo, D. D.; Wijayanti, D. G. P.; Juliyanto

    2017-09-01

    Most monthly time series data in economics and business in Indonesia and other Moslem countries not only contain trend and seasonal patterns, but are also affected by two types of calendar variation effects, i.e. the effect of the number of working or trading days, and holiday effects. The purpose of this research is to develop a hybrid model, i.e. a combination of several forecasting models, to predict time series that contain trend, seasonal and calendar variation patterns. This hybrid model combines classical models (namely time series regression and ARIMA models) and/or modern methods (an artificial intelligence method, i.e. Artificial Neural Networks). A simulation study was used to show that the proposed procedure for building the hybrid model works well for forecasting time series with trend, seasonal and calendar variation patterns. Furthermore, the proposed hybrid model is applied to forecasting real data, i.e. monthly data on the inflow and outflow of currency at Bank Indonesia. The results show that the hybrid model tends to provide more accurate forecasts than the individual forecasting models. Moreover, this result is in line with the results of the M3 competition, i.e. that hybrid models on average provide more accurate forecasts than individual models.
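
    One hybrid variant named above, time series regression followed by ARIMA on its residuals, can be sketched as follows (Python; the data, dummies, and model orders are invented, and the holiday indicator merely stands in for the working-day and holiday regressors described in the abstract):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(12)
        n = 120                                    # ten years of monthly data
        t = np.arange(n)
        month = pd.get_dummies(t % 12, drop_first=True).to_numpy(float)
        holiday = (rng.random(n) < 0.1).astype(float)  # stand-in calendar dummy
        X = sm.add_constant(np.column_stack([t, month, holiday]))
        y = (0.5 * t + 5 * np.sin(2 * np.pi * t / 12)
             + 3 * holiday + rng.standard_normal(n))

        ols = sm.OLS(y, X).fit()                   # trend/seasonal/calendar part
        arima = ARIMA(ols.resid, order=(1, 0, 0)).fit()  # stochastic residuals
        # One-step illustration (a real forecast would use future regressors):
        print(ols.predict(X[-1:]) + arima.forecast(1))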

  19. Sensitivity analysis of machine-learning models of hydrologic time series

    NASA Astrophysics Data System (ADS)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.

  20. SIMULATING ATMOSPHERIC EXPOSURE USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME

    EPA Science Inventory

    Multimedia Risk assessments require the temporal integration of atmospheric concentration and deposition estimates with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-ter...

  1. Investigation of Time Series Representations and Similarity Measures for Structural Damage Pattern Recognition

    PubMed Central

    Swartz, R. Andrew

    2013-01-01

    This paper investigates time series representation methods and similarity measures for sensor data feature extraction and structural damage pattern recognition. Both model-based time series representation and dimensionality reduction methods are studied to compare the effectiveness of feature extraction for damage pattern recognition. The evaluation of feature extraction methods is performed by examining the separation of feature vectors among different damage patterns and the pattern recognition success rate. In addition, the impact of similarity measures on the pattern recognition success rate and the metrics for damage localization are also investigated. The test data used in this study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case datasets and damage test data with different damage modalities are used. The simulation results show that both the time series representation methods and the similarity measures have a significant impact on the pattern recognition success rate. PMID:24191136

  2. Price of gasoline: forecasting comparisons. [Box-Jenkins, econometric, and regression methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bopp, A.E.; Neri, J.A.

    Gasoline prices are simulated using three popular forecasting methodologies: a Box-Jenkins type method, an econometric method, and a regression method. One-period-ahead and 18-period-ahead comparisons are made. For the one-period-ahead case, a Box-Jenkins type time-series model simulates best, although all do well. However, for the 18-period simulation, the econometric and regression methods perform substantially better than the Box-Jenkins formulation. A rationale for and implications of these results are discussed. 11 references.

  3. Time-dependent Reliability of Dynamic Systems using Subset Simulation with Splitting over a Series of Correlated Time Intervals

    DTIC Science & Technology

    2013-08-01

    …cost due to potential warranty costs, repairs, and loss of market share. Reliability is the probability that the system will perform its intended… MCMC and splitting sampling schemes. Our proposed SS/STP method is presented in Section 4, including accuracy bounds and computational effort…

  4. Feeder Voltage Regulation with High-Penetration PV Using Advanced Inverters and a Distribution Management System: A Duke Energy Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Giraldez, Julieta; Gruchalla, Kenny

    2016-11-01

    Duke Energy, Alstom Grid, and the National Renewable Energy Laboratory teamed up to better understand the impacts of solar photovoltaics (PV) on distribution system operations. The core goal of the project is to compare the operational (specifically, voltage regulation) impacts of three classes of PV inverter operations: 1.) Active power only (Baseline); 2.) Local inverter control (e.g., PF ≠ 1, Q(V), etc.); and 3.) Integrated volt-VAR control (centralized through the distribution management system). These comparisons were made using multiple approaches, each of which represents an important research-and-development effort on its own: a) Quasi-steady-state time-series modeling for approximately 1 year of operations using the Alstom eTerra (DOTS) system as a simulation engine, augmented by Python scripting for scenario and time-series control and using external models for an advanced inverter; b) Power-hardware-in-the-loop (PHIL) testing of a 500-kVA-class advanced inverter and traditional voltage regulating equipment. This PHIL testing used cosimulation to link full-scale feeder simulation using DOTS in real time to hardware testing; c) Advanced visualization to provide improved insights into time-series results and other PV operational impacts; and d) Cost-benefit analysis to compare the financial and business-model impacts of each integration approach.

  5. Catalog of Wargaming and Military Simulation Models.

    DTIC Science & Technology

    1982-05-01

    [Fragmentary catalog excerpt] The record preserves table-of-contents entries ("War Games and Simulations"; "Functional Index", Appendix A; "Data Collection Sheet") and part of one catalog entry: "AGTM (An Air and Ground Theatre Model); User's Guide and Program Description," Jan 1974. A surviving passage notes that facilities can be provided in any series to collect and output data on any specific subject, appropriate to the level of the game.

  6. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, M.

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the author's techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The author's method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.

  7. Functional magnetic resonance imaging activation detection: fuzzy cluster analysis in wavelet and multiwavelet domains.

    PubMed

    Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali

    2005-09-01

    To present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and their poor signal-to-noise ratio (SNR) that limits accurate classification of fMRI time series according to their activation contents. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare different methods, several points of the receiver operating characteristics (ROC) curves, using simulated data, are estimated and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to the conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features. (c) 2005 Wiley-Liss, Inc.
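
    A simplified illustration of multiscale feature extraction of this kind, using the PyWavelets package; selecting the largest-magnitude detail coefficients is a stand-in for the paper's randomization-based selection:

        import numpy as np
        import pywt

        def wavelet_features(ts, wavelet="db4", level=3, keep=10):
            """Decompose one voxel time series and keep the largest detail coefficients."""
            coeffs = pywt.wavedec(ts, wavelet, level=level)
            detail = np.concatenate(coeffs[1:])        # detail coefficients, all scales
            top = np.argsort(np.abs(detail))[::-1][:keep]
            return detail[top]

        rng = np.random.default_rng(1)
        ts = np.sin(np.linspace(0, 8 * np.pi, 128)) + rng.normal(0, 1, 128)  # noisy "activation"
        print(wavelet_features(ts))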

  8. Simulating transient dynamics of the time-dependent time fractional Fokker-Planck systems

    NASA Astrophysics Data System (ADS)

    Kang, Yan-Mei

    2016-09-01

    For a physically realistic type of time-dependent time fractional Fokker-Planck (FP) equation, derived as the continuous limit of the continuous time random walk with time-modulated Boltzmann jumping weight, a semi-analytic iteration scheme based on the truncated (generalized) Fourier series is presented to simulate the resultant transient dynamics when the external time modulation is a piecewise-constant signal. At first, the iteration scheme is demonstrated with a simple time-dependent time fractional FP equation on a finite interval with two absorbing boundaries, and then it is extended to the more general time-dependent Smoluchowski-type time fractional Fokker-Planck equation. The numerical examples verify the efficiency and accuracy of the iteration method, and some novel dynamical phenomena, including polarized motion orientations and periodic response death, are discussed.

  9. Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory

    NASA Astrophysics Data System (ADS)

    Wang, Na; Li, Dong; Wang, Qiwen

    2012-12-01

    The visibility graph approach and complex network theory provide a new insight into time series analysis. The inheritance of the visibility graph from the original time series is further explored in this paper. We found that degree distributions of visibility graphs extracted from pseudo Brownian motion series obtained by the frequency domain algorithm exhibit exponential behaviors, in which the exponential exponent is a binomial function of the Hurst index inherited from the time series. Our simulations show that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function differ between series, and that the visibility graph inherits some important features of the original time series. Further, we convert several quarterly macroeconomic series of China, including the growth rates of value-added of three industry series and the growth rates of Gross Domestic Product (GDP) series, to graphs by the visibility algorithm and explore the topological properties of the graphs associated with the four macroeconomic series, namely the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis, we find that the degree distributions of the networks associated with the growth rates of value-added of the three industry series are almost exponential, while the degree distributions of the networks associated with the growth rates of GDP series are scale free. We also discuss the assortativity and disassortativity of the four associated networks as they relate to the evolutionary process of the original macroeconomic series. All the constructed networks have “small-world” features. The community structures of the associated networks suggest dynamic changes in the original macroeconomic series. We also detected relationships among government policy changes, community structures of the associated networks, and macroeconomic dynamics, and find that government policies in China strongly influence the dynamics of GDP and the adjustment of the three industries. The work in this paper provides a new way to understand the dynamics of economic development.
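
    The natural visibility criterion itself is compact: samples a and b see each other when every intermediate sample lies below the straight line joining them. A minimal sketch (illustrative, not the authors' code):

        import numpy as np

        def visibility_edges(y):
            """Natural visibility graph: nodes are time points; (a, b) is an edge when
            every intermediate sample lies strictly below the line joining a and b."""
            n, edges = len(y), []
            for a in range(n):
                for b in range(a + 1, n):
                    t = np.arange(a + 1, b)            # empty when b = a + 1 (always visible)
                    if np.all(y[t] < y[a] + (y[b] - y[a]) * (t - a) / (b - a)):
                        edges.append((a, b))
            return edges

        y = np.random.default_rng(2).random(50)
        degrees = np.bincount(np.ravel(visibility_edges(y)), minlength=len(y))
        print(degrees)                                 # input to the degree distribution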

  10. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. Parameter uncertainty is fully and systematically accounted for. The technique is generic and independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to the prediction of Arctic sea-ice extent.

  11. Uncertainty estimates of altimetric Global Mean Sea Level timeseries

    NASA Astrophysics Data System (ADS)

    Scharffenberg, Martin; Hemming, Michael; Stammer, Detlef

    2016-04-01

    We present an attempt to provide uncertainty measures for global mean sea level (GMSL) time series. For this purpose, sea surface height (SSH) fields simulated by the high-resolution STORM/NCEP model for the period 1993-2010 were subsampled along altimeter tracks and processed with techniques similar to those used by five working groups to estimate GMSL. Results suggest that the spatial and temporal resolution have a substantial impact on GMSL estimates. Major impacts can result especially from the interpolation technique or the treatment of SSH outliers, and can easily lead to artificial temporal variability in the resulting time series.

  12. Kernel canonical-correlation Granger causality for multiple time series

    NASA Astrophysics Data System (ADS)

    Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu

    2011-04-01

    Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.

  13. Detecting the sampling rate through observations

    NASA Astrophysics Data System (ADS)

    Shoji, Isao

    2018-09-01

    This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rate from data; the results show good detection performance. In addition, the method is applied to a financial time series sampled on a daily basis, and the detected sampling rate differs from the conventional rates.

  14. A Composite View of Ozone Evolution in the 1995-1996 Northern Winter Polar Vortex Developed from Airborne Lidar and Satellite Observations

    NASA Technical Reports Server (NTRS)

    Douglass, A. R.; Schoeberl, M. R.; Kawa, S. R.; Browell, E. V.

    2000-01-01

    The processes which contribute to the ozone evolution in the high latitude northern lower stratosphere are evaluated using a three dimensional model simulation and ozone observations. The model uses winds and temperatures from the Goddard Earth Observing System Data Assimilation System. The simulation results are compared with ozone observations from three platforms: the differential absorption lidar (DIAL) which was flown on the NASA DC-8 as part of the Vortex Ozone Transport Experiment; the Microwave Limb Sounder (MLS); the Polar Ozone and Aerosol Measurement (POAM II) solar occultation instrument. Time series for the different data sets are consistent with each other, and diverge from model time series during December and January. The model ozone in December and January is shown to be much less sensitive to the model photochemistry than to the model vertical transport, which depends on the model vertical motion as well as the model vertical gradient. We evaluate the dependence of model ozone evolution on the model ozone gradient by comparing simulations with different initial conditions for ozone. The modeled ozone throughout December and January most closely resembles observed ozone when the vertical profiles between 12 and 20 km within the polar vortex closely match December DIAL observations. We make a quantitative estimate of the uncertainty in the vertical advection using diabatic trajectory calculations. The net transport uncertainty is significant, and should be accounted for when comparing observations with model ozone. The observed and modeled ozone time series during December and January are consistent when these transport uncertainties are taken into account.

  15. Simulation Study Using a New Type of Sample Variance

    NASA Technical Reports Server (NTRS)

    Howe, D. A.; Lainson, K. J.

    1996-01-01

    We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
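
    For context, a minimal sketch of the conventional overlapping Allan deviation against which TOTALDEV is compared (this is the standard statistic, not the wrapped TOTALVAR computation):

        import numpy as np

        def allan_deviation(y, m):
            """Overlapping Allan deviation of fractional-frequency data y at
            averaging factor m (tau = m * tau0)."""
            ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # m-sample running means
            d = ybar[m:] - ybar[:-m]                             # adjacent-average differences
            return np.sqrt(0.5 * np.mean(d ** 2))

        y = np.random.default_rng(3).normal(size=4096)           # white frequency noise
        for m in (1, 4, 16, 64):
            print(m, allan_deviation(y, m))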

  16. Variability simulations with a steady, linearized primitive equations model

    NASA Technical Reports Server (NTRS)

    Kinter, J. L., III; Nigam, S.

    1985-01-01

    Solutions of the steady primitive equations on a sphere, linearized about a zonally symmetric basic state, are computed for the purpose of simulating monthly mean variability in the troposphere. The basic states are observed winter monthly-mean, zonal means of zonal and meridional velocities, temperatures, and surface pressures computed from the 15-year NMC time series. A least squares fit to a series of Legendre polynomials is used to compute the basic states between 20 N and the equator, and the hemispheres are assumed symmetric. The model is spectral in the zonal direction, and centered differences are employed in the meridional and vertical directions. Since the model is steady and linear, the solution is obtained by inversion of a block, penta-diagonal matrix. The model simulates the climatology of the GFDL nine-level spectral general circulation model quite closely, particularly in middle latitudes above the boundary layer. This experiment is an extension of that simulation to examine variability of the steady, linear solution.

  17. Extended local similarity analysis (eLSA) of microbial community and other time series data with replicates.

    PubMed

    Xia, Li C; Steele, Joshua A; Cram, Jacob A; Cardon, Zoe G; Simmons, Sheri L; Vallino, Joseph J; Fuhrman, Jed A; Sun, Fengzhu

    2011-01-01

    The increasing availability of time series microbial community data from metagenomics and other molecular biological studies has enabled the analysis of large-scale microbial co-occurrence and association networks. Among the many analytical techniques available, the Local Similarity Analysis (LSA) method is unique in that it captures local and potentially time-delayed co-occurrence and association patterns in time series data that cannot otherwise be identified by ordinary correlation analysis. However LSA, as originally developed, does not consider time series data with replicates, which hinders the full exploitation of available information. With replicates, it is possible to understand the variability of local similarity (LS) score and to obtain its confidence interval. We extended our LSA technique to time series data with replicates and termed it extended LSA, or eLSA. Simulations showed the capability of eLSA to capture subinterval and time-delayed associations. We implemented the eLSA technique into an easy-to-use analytic software package. The software pipeline integrates data normalization, statistical correlation calculation, statistical significance evaluation, and association network construction steps. We applied the eLSA technique to microbial community and gene expression datasets, where unique time-dependent associations were identified. The extended LSA analysis technique was demonstrated to reveal statistically significant local and potentially time-delayed association patterns in replicated time series data beyond that of ordinary correlation analysis. These statistically significant associations can provide insights to the real dynamics of biological systems. The newly designed eLSA software efficiently streamlines the analysis and is freely available from the eLSA homepage, which can be accessed at http://meta.usc.edu/softs/lsa.
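
    A much-simplified sketch of the local similarity idea, scanning a small set of time delays and finding the best contiguous run of products of the z-scored series with Kadane's algorithm (positive associations only; the published eLSA handles replicates, permutation significance, and more):

        import numpy as np

        def _best_run(p):
            """Kadane's algorithm: largest sum of a contiguous subarray (at least 0)."""
            best = cur = 0.0
            for v in p:
                cur = max(0.0, cur + v)
                best = max(best, cur)
            return best

        def local_similarity(x, y, max_delay=3):
            zx, zy = (x - x.mean()) / x.std(), (y - y.mean()) / y.std()
            n, scores = len(zx), []
            for d in range(-max_delay, max_delay + 1):
                if d >= 0:
                    p = zx[d:] * zy[:n - d]            # x lags y by d steps
                else:
                    p = zx[:n + d] * zy[-d:]           # y lags x by -d steps
                scores.append(_best_run(p) / n)
            return max(scores)

        rng = np.random.default_rng(4)
        x = rng.normal(size=100)
        y = np.roll(x, 2) + 0.5 * rng.normal(size=100) # y echoes x with a 2-step delay
        print(local_similarity(x, y))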

  18. Extended local similarity analysis (eLSA) of microbial community and other time series data with replicates

    PubMed Central

    2011-01-01

    Background The increasing availability of time series microbial community data from metagenomics and other molecular biological studies has enabled the analysis of large-scale microbial co-occurrence and association networks. Among the many analytical techniques available, the Local Similarity Analysis (LSA) method is unique in that it captures local and potentially time-delayed co-occurrence and association patterns in time series data that cannot otherwise be identified by ordinary correlation analysis. However LSA, as originally developed, does not consider time series data with replicates, which hinders the full exploitation of available information. With replicates, it is possible to understand the variability of local similarity (LS) score and to obtain its confidence interval. Results We extended our LSA technique to time series data with replicates and termed it extended LSA, or eLSA. Simulations showed the capability of eLSA to capture subinterval and time-delayed associations. We implemented the eLSA technique into an easy-to-use analytic software package. The software pipeline integrates data normalization, statistical correlation calculation, statistical significance evaluation, and association network construction steps. We applied the eLSA technique to microbial community and gene expression datasets, where unique time-dependent associations were identified. Conclusions The extended LSA analysis technique was demonstrated to reveal statistically significant local and potentially time-delayed association patterns in replicated time series data beyond that of ordinary correlation analysis. These statistically significant associations can provide insights to the real dynamics of biological systems. The newly designed eLSA software efficiently streamlines the analysis and is freely available from the eLSA homepage, which can be accessed at http://meta.usc.edu/softs/lsa. PMID:22784572

  19. POD Model Reconstruction for Gray-Box Fault Detection

    NASA Technical Reports Server (NTRS)

    Park, Han; Zak, Michail

    2007-01-01

    Proper orthogonal decomposition (POD) is the mathematical basis of a method of constructing low-order mathematical models for the "gray-box" fault-detection algorithm that is a component of a diagnostic system known as beacon-based exception analysis for multi-missions (BEAM). POD has been successfully applied in reducing computational complexity by generating simple models that can be used for control and simulation for complex systems such as fluid flows. In the present application to BEAM, POD brings the same benefits to automated diagnosis. BEAM is a method of real-time or offline, automated diagnosis of a complex dynamic system. The gray-box approach makes it possible to utilize incomplete or approximate knowledge of the dynamics of the system that one seeks to diagnose. In the gray-box approach, a deterministic model of the system is used to filter a time series of system sensor data to remove the deterministic components of the time series from further examination. What is left after the filtering operation is a time series of residual quantities that represent the unknown (or at least unmodeled) aspects of the behavior of the system. Stochastic modeling techniques are then applied to the residual time series. The procedure for detecting abnormal behavior of the system then becomes one of looking for statistical differences between the residual time series and the predictions of the stochastic model.
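
    A minimal sketch of the POD step, computing an orthogonal basis from a snapshot matrix by singular value decomposition and projecting it out to leave a residual time series (illustrative; BEAM's deterministic filtering is more elaborate):

        import numpy as np

        rng = np.random.default_rng(5)
        t = np.linspace(0, 10, 200)
        # Snapshot matrix: rows are sensors, columns are sample times.
        snapshots = np.outer(rng.normal(size=30), np.sin(t)) + 0.01 * rng.normal(size=(30, 200))

        U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
        k = np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), 0.99) + 1  # modes for 99% energy
        modes = U[:, :k]                                                   # low-order POD basis

        # Gray-box style filtering: project out the dominant deterministic structure;
        # the residual time series is what stochastic modeling would then examine.
        residual = snapshots - modes @ (modes.T @ snapshots)
        print(k, np.linalg.norm(residual) / np.linalg.norm(snapshots))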

  20. Distributions-per-level: a means of testing level detectors and models of patch-clamp data.

    PubMed

    Schröder, I; Huth, T; Suitchmezian, V; Jarosik, J; Schnell, S; Hansen, U P

    2004-01-01

    Level or jump detectors generate the reconstructed time series from a noisy record of patch-clamp current. The reconstructed time series is used to create dwell-time histograms for the kinetic analysis of the Markov model of the investigated ion channel. It is shown here that some additional lines in the software of such a detector can provide a powerful new means of patch-clamp analysis. For each current level that can be recognized by the detector, an array is declared. The new software assigns every data point of the original time series to the array that belongs to the actual state of the detector. From the data sets in these arrays distributions-per-level are generated. Simulated and experimental time series analyzed by Hinkley detectors are used to demonstrate the benefits of these distributions-per-level. First, they can serve as a test of the reliability of jump and level detectors. Second, they can reveal beta distributions as resulting from fast gating that would usually be hidden in the overall amplitude histogram. Probably the most valuable feature is that the malfunctions of the Hinkley detectors turn out to depend on the Markov model of the ion channel. Thus, the errors revealed by the distributions-per-level can be used to distinguish between different putative Markov models of the measured time series.
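
    The bookkeeping behind distributions-per-level is simple; a toy sketch that uses the true level sequence as a stand-in for a Hinkley-detector reconstruction:

        import numpy as np

        def distributions_per_level(raw, reconstructed, levels, bins=50):
            """One amplitude histogram per detected level: each raw point goes to the
            array of the level the detector reports at that instant."""
            return {lev: np.histogram(raw[reconstructed == lev], bins=bins)
                    for lev in levels}

        rng = np.random.default_rng(6)
        ideal = rng.integers(0, 2, 2000).astype(float)     # two-level channel (closed/open)
        raw = ideal + rng.normal(0, 0.3, ideal.size)       # noisy patch-clamp current
        hists = distributions_per_level(raw, ideal, levels=[0.0, 1.0])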

  1. Mapping and spatial-temporal modeling of Bromus tectorum invasion in central Utah

    NASA Astrophysics Data System (ADS)

    Jin, Zhenyu

    Cheatgrass, or Downy Brome, is an exotic winter annual weed native to the Mediterranean region. Since its introduction to the U.S., it has become a significant weed and aggressive invader of sagebrush, pinyon-juniper, and other shrub communities, where it can completely out-compete native grasses and shrubs. In this research, remotely sensed data combined with field-collected data are used to investigate the distribution of cheatgrass in central Utah, to characterize the trend of the NDVI time-series of cheatgrass, and to construct a spatially explicit population-based model to simulate the spatial-temporal dynamics of cheatgrass. This research proposes a method for mapping the canopy closure of invasive species using remotely sensed data acquired at different dates. Different invasive species have their own distinguishing phenologies, and satellite images from different dates can be used to capture the phenology. The results of cheatgrass abundance prediction fit the field data well for both linear regression and regression tree models, although the regression tree model performs better than the linear regression model. To characterize the trend of the NDVI time-series of cheatgrass, a novel smoothing algorithm named RMMEH is presented in this research to overcome some drawbacks of many other algorithms. By comparing the performance of RMMEH in smoothing a 16-day composite of the MODIS NDVI time-series with that of two other methods, the 4253EH, twice and the MVI, we found that RMMEH not only keeps the original valid NDVI points, but also effectively removes spurious spikes. The reconstructed NDVI time-series of different land covers are of higher quality and have a smoother temporal trend. To simulate the spatial-temporal dynamics of cheatgrass, a spatially explicit population-based model is built using remotely sensed data. The comparison between the model output and the ground truth of cheatgrass closure demonstrates that the model can successfully simulate the spatial-temporal dynamics of cheatgrass in a simple cheatgrass-dominant environment. The simulation of the functional response to different prescribed fire rates also shows that this model is helpful for answering management questions such as, "What are the effects of prescribed fire on invasive species?" It demonstrates that a medium fire rate of 10% can successfully prevent cheatgrass invasion.

  2. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  3. Simulated hydrologic response to climate change during the 21st century in New Hampshire

    USGS Publications Warehouse

    Bjerklie, David M.; Sturtevant, Luke P.

    2018-01-24

    The U.S. Geological Survey, in cooperation with the New Hampshire Department of Environmental Services and the Department of Health and Human Services, has developed a hydrologic model to assess the effects of short- and long-term climate change on hydrology in New Hampshire. This report documents the model and datasets developed by using the model to predict how climate change will affect the hydrologic cycle and provide data that can be used by State and local agencies to identify locations that are vulnerable to the effects of climate change in areas across New Hampshire. Future hydrologic projections were developed from the output of five general circulation models for two future climate scenarios. The scenarios are based on projected future greenhouse gas emissions and estimates of land-use and land-cover change within a projected global economic framework. An evaluation of the possible effect of projected future temperature on modeling of evapotranspiration is summarized to address concerns regarding the implications of the future climate on model parameters that are based on climate variables. The results of the model simulations are hydrologic projections indicating increasing streamflow across the State with large increases in streamflow during winter and early spring and general decreases during late spring and summer. Wide spatial variability in changes to groundwater recharge is projected, with general decreases in the Connecticut River Valley and at high elevations in the northern part of the State and general increases in coastal and lowland areas of the State. In general, total winter snowfall is projected to decrease across the State, but there is a possibility of increasing snow in some locations, particularly during November, February, and March. The simulated future changes in recharge and snowfall vary by watershed across the State. This means that each area of the State could experience very different changes, depending on topography or other factors. Therefore, planning for infrastructure and public safety needs to be flexible in order to address the range of possible outcomes indicated by the various model simulations. The absolute magnitude and timing of the daily streamflows, especially the larger floods, are not considered to be reliably simulated compared to changes in frequency and duration of daily streamflows and changes in accumulated monthly and seasonal streamflow volumes. Simulated current and future streamflow, groundwater recharge, and snowfall datasets include simulated data derived from the five general circulation models used in this study for a current reference time period and two future time periods. Average monthly streamflow time series datasets are provided for 27 streamgages in New Hampshire. Fourteen of the 27 streamgages associated with daily streamflow time series showed a good calibration. Average monthly groundwater recharge and snowfall time series for the same reference time period and two future time periods are also provided for each of the 467 hydrologic response units that compose the model.

  4. Numerical Investigations of Capabilities and Limits of Photospheric Data Driven Magnetic Flux Emergence

    NASA Astrophysics Data System (ADS)

    Linton, M.; Leake, J. E.; Schuck, P. W.

    2016-12-01

    The magnetic field of the solar atmosphere is the primary driver of solar activity. Understanding the magnetic state of the solar atmosphere is therefore of key importance to predicting solar activity. One promising means of studying the magnetic atmosphere is to dynamically build up and evolve this atmosphere from the time evolution of emerging magnetic field at the photosphere, where it can be measured with current solar vector magnetograms at high temporal and spatial resolution. We report here on a series of numerical experiments investigating the capabilities and limits of magnetohydrodynamical simulations of such a process, where a magnetic corona is dynamically built up and evolved from a time series of synthetic photospheric data. These synthetic data are composed of photospheric slices taken from self-consistent convection zone to corona simulations of flux emergence. The driven coronae are then quantitatively compared against the coronae of the original simulations. We investigate and report on the fidelity of these driven simulations, both as a function of the emergence timescale of the magnetic flux, and as a function of the driving cadence of the input data. These investigations will then be used to outline future prospects and challenges for using observed photospheric data to drive such solar atmospheric simulations. This work was supported by the Chief of Naval Research and the NASA Living with a Star and Heliophysics Supporting Research programs.

  5. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    PubMed

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.

  6. Relation between delayed feedback and delay-coupled systems and its application to chaotic lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soriano, Miguel C., E-mail: miguel@ifisc.uib-csic.es; Flunkert, Valentin; Fischer, Ingo

    2013-12-15

    We present a systematic approach to identify the similarities and differences between a chaotic system with delayed feedback and two mutually delay-coupled systems. We consider the general case in which the coupled systems are either unsynchronized or in a generally synchronized state, in contrast to the mostly studied case of identical synchronization. We construct a new time series for each of the two coupling schemes and present analytic evidence and numerical confirmation that these two constructed time series are statistically equivalent. From the construction, it then follows that the distribution of time-series segments that are small compared to the overall delay in the system is independent of the value of the delay and of the coupling scheme. By focusing on numerical simulations of delay-coupled chaotic lasers, we present a practical example of our findings.

  7. A hybrid clustering approach for multivariate time series - A case study applied to failure analysis in a gas turbine.

    PubMed

    Fontes, Cristiano Hora; Budman, Hector

    2017-11-01

    A clustering problem involving multivariate time series (MTS) requires the selection of similarity metrics. This paper shows the limitations of the PCA similarity factor (SPCA) as a single metric in nonlinear problems where there are differences in magnitude of the same process variables due to expected changes in operation conditions. A novel method for clustering MTS based on a combination between SPCA and the average-based Euclidean distance (AED) within a fuzzy clustering approach is proposed. Case studies involving either simulated or real industrial data collected from a large scale gas turbine are used to illustrate that the hybrid approach enhances the ability to recognize normal and fault operating patterns. This paper also proposes an oversampling procedure to create synthetic multivariate time series that can be useful in commonly occurring situations involving unbalanced data sets. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
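
    A minimal sketch of the two ingredients of the hybrid metric, assuming SPCA as the mean of squared cosines between leading principal directions and AED as the Euclidean distance between variable-wise means (a simplified reading of the paper's definitions):

        import numpy as np

        def pca_similarity(X, Y, k=2):
            """S_PCA: mean squared cosine between the top-k principal directions of
            two multivariate time series (rows = time, columns = variables)."""
            Lx = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][:k].T
            Ly = np.linalg.svd(Y - Y.mean(0), full_matrices=False)[2][:k].T
            return np.sum((Lx.T @ Ly) ** 2) / k

        def avg_euclidean(X, Y):
            """Average-based Euclidean distance between variable-wise means."""
            return np.linalg.norm(X.mean(0) - Y.mean(0))

        rng = np.random.default_rng(7)
        A = rng.normal(size=(300, 5))
        B = 2.0 * A + 1.0 + 0.1 * rng.normal(size=(300, 5))   # same shape, shifted magnitude
        print(pca_similarity(A, B), avg_euclidean(A, B))      # S_PCA near 1, AED clearly nonzero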

  8. Error-based Extraction of States and Energy Landscapes from Experimental Single-Molecule Time-Series

    NASA Astrophysics Data System (ADS)

    Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki

    2015-03-01

    Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM time-series. Taking into account empirical error and the finite sampling of the time-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system.

  9. Vesicle Motion during Sustained Exocytosis in Chromaffin Cells: Numerical Model Based on Amperometric Measurements.

    PubMed

    Jarukanont, Daungruthai; Bonifas Arredondo, Imelda; Femat, Ricardo; Garcia, Martin E

    2015-01-01

    Chromaffin cells release catecholamines by exocytosis, a process that includes vesicle docking, priming and fusion. Although all these steps have been intensively studied, some aspects of their mechanisms, particularly those regarding vesicle transport to the active sites situated at the membrane, are still unclear. In this work, we show that it is possible to extract information on vesicle motion in Chromaffin cells from the combination of Langevin simulations and amperometric measurements. We developed a numerical model based on Langevin simulations of vesicle motion towards the cell membrane and on the statistical analysis of vesicle arrival times. We also performed amperometric experiments in bovine-adrenal Chromaffin cells under Ba2+ stimulation to capture neurotransmitter releases during sustained exocytosis. In the sustained phase, each amperometric peak can be related to a single release from a new vesicle arriving at the active site. The amperometric signal can then be mapped into a spike-series of release events. We normalized the spike-series resulting from the current peaks using a time-rescaling transformation, thus making signals coming from different cells comparable. We discuss why the obtained spike-series may contain information about the motion of all vesicles leading to release of catecholamines. We show that the release statistics in our experiments considerably deviate from Poisson processes. Moreover, the interspike-time probability is reasonably well described by two-parameter gamma distributions. In order to interpret this result we computed the vesicles' arrival statistics from our Langevin simulations. As expected, assuming purely diffusive vesicle motion we obtain Poisson statistics. However, if we assume that all vesicles are guided toward the membrane by an attractive harmonic potential, simulations also lead to gamma distributions of the interspike-time probability, in remarkably good agreement with experiment. We also show that including the fusion-time statistics in our model does not produce any significant changes on the results. These findings indicate that the motion of the whole ensemble of vesicles towards the membrane is directed and reflected in the amperometric signals. Our results confirm the conclusions of previous imaging studies performed on single vesicles that vesicles' motion underneath plasma membranes is not purely random, but biased towards the membrane.
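
    A toy version of the modeling idea, simulating overdamped Langevin motion toward the membrane under a harmonic pull and fitting a gamma distribution to the interspike-like intervals (parameter values are arbitrary illustrations):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)

        def arrival_time(k=1.0, D=0.05, x0=1.0, dt=1e-3):
            """Overdamped Langevin motion toward the membrane at x = 0 under a
            harmonic pull -k*x; returns the first-passage (arrival) time."""
            x, t = x0, 0.0
            while x > 0.0:
                x += -k * x * dt + np.sqrt(2 * D * dt) * rng.normal()
                t += dt
            return t

        arrivals = np.sort([arrival_time() for _ in range(500)])
        inter = np.diff(arrivals)                        # interspike-like intervals
        shape, _, scale = stats.gamma.fit(inter, floc=0) # two-parameter gamma fit
        print(shape, scale)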

  10. Vesicle Motion during Sustained Exocytosis in Chromaffin Cells: Numerical Model Based on Amperometric Measurements

    PubMed Central

    Jarukanont, Daungruthai; Bonifas Arredondo, Imelda; Femat, Ricardo; Garcia, Martin E.

    2015-01-01

    Chromaffin cells release catecholamines by exocytosis, a process that includes vesicle docking, priming and fusion. Although all these steps have been intensively studied, some aspects of their mechanisms, particularly those regarding vesicle transport to the active sites situated at the membrane, are still unclear. In this work, we show that it is possible to extract information on vesicle motion in Chromaffin cells from the combination of Langevin simulations and amperometric measurements. We developed a numerical model based on Langevin simulations of vesicle motion towards the cell membrane and on the statistical analysis of vesicle arrival times. We also performed amperometric experiments in bovine-adrenal Chromaffin cells under Ba2+ stimulation to capture neurotransmitter releases during sustained exocytosis. In the sustained phase, each amperometric peak can be related to a single release from a new vesicle arriving at the active site. The amperometric signal can then be mapped into a spike-series of release events. We normalized the spike-series resulting from the current peaks using a time-rescaling transformation, thus making signals coming from different cells comparable. We discuss why the obtained spike-series may contain information about the motion of all vesicles leading to release of catecholamines. We show that the release statistics in our experiments considerably deviate from Poisson processes. Moreover, the interspike-time probability is reasonably well described by two-parameter gamma distributions. In order to interpret this result we computed the vesicles’ arrival statistics from our Langevin simulations. As expected, assuming purely diffusive vesicle motion we obtain Poisson statistics. However, if we assume that all vesicles are guided toward the membrane by an attractive harmonic potential, simulations also lead to gamma distributions of the interspike-time probability, in remarkably good agreement with experiment. We also show that including the fusion-time statistics in our model does not produce any significant changes on the results. These findings indicate that the motion of the whole ensemble of vesicles towards the membrane is directed and reflected in the amperometric signals. Our results confirm the conclusions of previous imaging studies performed on single vesicles that vesicles’ motion underneath plasma membranes is not purely random, but biased towards the membrane. PMID:26675312

  11. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables -called the Markov Blanket-- sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allow, so called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: Short time series of geographically proximate climate variables predicting agricultural effects in California, and longer duration climate measurements of temperature teleconnections.

  12. Numerical simulation of transonic compressor under circumferential inlet distortion and rotor/stator interference using harmonic balance method

    NASA Astrophysics Data System (ADS)

    Wang, Ziwei; Jiang, Xiong; Chen, Ti; Hao, Yan; Qiu, Min

    2018-05-01

    Simulating the unsteady flow of a compressor under circumferential inlet distortion and rotor/stator interference would require a full-annulus grid with a dual-time method. This process is time consuming and needs a large amount of computational resources. The harmonic balance method simulates the unsteady flow in a compressor on a single-passage grid with a series of steady simulations, which greatly increases computational efficiency in comparison with the dual-time method. However, most simulations with the harmonic balance method have been conducted on flow under either circumferential inlet distortion or rotor/stator interference alone. Based on an in-house CFD code, the harmonic balance method is applied here to the simulation of flow in the NASA Stage 35 under both circumferential inlet distortion and rotor/stator interference. As the unsteady flow is influenced by two different unsteady disturbances, computational instability can arise; the instability is avoided by coupling the harmonic balance method with an optimizing algorithm. The computational result of the harmonic balance method is compared with the result of a full-annulus simulation, which shows that the harmonic balance method simulates the flow under circumferential inlet distortion and rotor/stator interference as precisely as the full-annulus simulation, with a speed-up of about 8 times.
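
    The core idea of harmonic balance, replacing time marching by one coupled steady problem at a few time instances per period, can be shown on a scalar toy equation (this is the generic time-spectral form, not the in-house CFD code):

        import numpy as np

        def hb_matrix(N, T):
            """Time-spectral derivative matrix for N equispaced samples (N odd) of a
            T-periodic function; exact for trigonometric polynomials."""
            D = np.zeros((N, N))
            for i in range(N):
                for j in range(N):
                    if i != j:
                        D[i, j] = (np.pi / T) * (-1.0) ** (i - j) / np.tan(np.pi * (i - j) / N)
            return D

        # Toy "unsteady" problem du/dt + u = cos(t): harmonic balance turns the time
        # march into one coupled steady linear system (D + I) u = f.
        N, T = 7, 2 * np.pi
        t = np.arange(N) * T / N
        u = np.linalg.solve(hb_matrix(N, T) + np.eye(N), np.cos(t))
        print(np.max(np.abs(u - 0.5 * (np.cos(t) + np.sin(t)))))  # ~0: exact periodic solution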

  13. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
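
    A skeletal QSTS loop showing why the computational burden scales with the number of time steps; solve_load_flow below is a hypothetical stand-in for an actual distribution load-flow solver:

        import numpy as np

        def solve_load_flow(load_mw, pv_mw):
            """Hypothetical stand-in for a load-flow solver; returns a bus voltage (pu)."""
            return 1.0 - 0.03 * load_mw + 0.01 * pv_mw

        def qsts(load_series, pv_series):
            """Quasi-static time series: one independent steady solve per time step."""
            return np.array([solve_load_flow(l, p) for l, p in zip(load_series, pv_series)])

        hours = np.arange(24)
        load = 0.6 + 0.4 * np.exp(-((hours - 18) / 3.0) ** 2)     # evening load peak
        pv = np.clip(np.sin((hours - 6) * np.pi / 12), 0, None)   # daytime PV output
        voltages = qsts(load, pv)
        print(voltages.min(), voltages.max())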

  14. Towards pattern generation and chaotic series prediction with photonic reservoir computers

    NASA Astrophysics Data System (ADS)

    Antonik, Piotr; Hermans, Michiel; Duport, François; Haelterman, Marc; Massar, Serge

    2016-03-01

    Reservoir Computing is a bio-inspired computing paradigm for processing time dependent signals that is particularly well suited for analog implementations. Our team has demonstrated several photonic reservoir computers with performance comparable to digital algorithms on a series of benchmark tasks such as channel equalisation and speech recognition. Recently, we showed that our opto-electronic reservoir computer could be trained online with a simple gradient descent algorithm programmed on an FPGA chip. This setup makes it in principle possible to feed the output signal back into the reservoir, and thus highly enrich the dynamics of the system. This will allow complex prediction tasks to be tackled in hardware, such as pattern generation and chaotic and financial series prediction, which have so far only been studied in digital implementations. Here we report simulation results of our opto-electronic setup with an FPGA chip and output feedback applied to pattern generation and Mackey-Glass chaotic series prediction. The simulations take into account the major aspects of our experimental setup. We find that pattern generation can be easily implemented on the current setup with very good results. The Mackey-Glass series prediction task is more complex and requires a large reservoir and a more elaborate training algorithm. With these adjustments promising results are obtained, and we now know what improvements are needed to match previously reported numerical results. These simulation results will serve as a basis of comparison for experiments we will carry out in the coming months.
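
    A minimal digital echo state network of the kind such hardware emulates: a random recurrent reservoir driven by the input, with only the linear readout trained by ridge regression (in the feedback configuration described above, the readout's prediction would be fed back as the next input):

        import numpy as np

        rng = np.random.default_rng(9)
        N, steps = 200, 3000
        Win = rng.uniform(-0.5, 0.5, N)                        # input weights
        W = rng.normal(0, 1, (N, N))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # spectral radius 0.9

        u = np.sin(np.linspace(0, 60 * np.pi, steps))          # driving signal
        X, x = np.zeros((steps, N)), np.zeros(N)
        for k in range(steps):
            x = np.tanh(Win * u[k] + W @ x)                    # reservoir state update
            X[k] = x

        target = np.roll(u, -1)                                # one-step-ahead prediction
        Wout = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ target)  # ridge readout
        print(np.sqrt(np.mean((X @ Wout - target) ** 2)))      # training error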

  15. A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2018-04-01

    For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove Common Mode Error (CME) without interpolating missing values. We used data from the International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series, and CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset share fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed), which, compared with the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of stations' velocities estimated from filtered residuals: 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analyzed in the context of environmental mass loading influences on the filtering results. Subtracting the environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
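
    A sketch of the complete-data pPCA fit in closed form (Tipping and Bishop's maximum likelihood solution); the paper's contribution is the probabilistic treatment of missing epochs, which this illustration omits:

        import numpy as np

        def ppca(R, q=1):
            """Closed-form ML fit of probabilistic PCA to a complete residual matrix R
            (rows = epochs, columns = stations)."""
            vals, vecs = np.linalg.eigh(np.cov(R, rowvar=False))
            vals, vecs = vals[::-1], vecs[:, ::-1]             # descending eigenvalues
            sigma2 = vals[q:].mean()                           # isotropic noise variance
            W = vecs[:, :q] * np.sqrt(vals[:q] - sigma2)       # ML loading matrix
            return W, sigma2

        rng = np.random.default_rng(10)
        common = rng.normal(size=500)                          # a synthetic "CME" signal
        R = np.outer(common, np.ones(25)) + 0.5 * rng.normal(size=(500, 25))
        W, sigma2 = ppca(R, q=1)
        M = W.T @ W + sigma2 * np.eye(1)
        latent = (R - R.mean(0)) @ W @ np.linalg.inv(M).T      # posterior mean of the mode
        print(abs(np.corrcoef(latent[:, 0], common)[0, 1]))    # close to 1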

  16. The use of Tcl and Tk to improve design and code reutilization

    NASA Technical Reports Server (NTRS)

    Rodriguez, Lisbet; Reinholtz, Kirk

    1995-01-01

    Tcl and Tk facilitate design and code reuse in the ZIPSIM series of high-performance, high-fidelity spacecraft simulators. Tcl and Tk provide a framework for the construction of the Graphical User Interfaces for the simulators. The interfaces are architected such that a large proportion of the design and code is used for several applications, which has reduced design time and life-cycle costs.

  17. Signatures of ecological processes in microbial community time series.

    PubMed

    Faust, Karoline; Bauchinger, Franziska; Laroche, Béatrice; de Buyl, Sophie; Lahti, Leo; Washburne, Alex D; Gonze, Didier; Widder, Stefanie

    2018-06-28

    Growth rates, interactions between community members, stochasticity, and immigration are important drivers of microbial community dynamics. In sequencing data analysis, such as network construction and community model parameterization, we make implicit assumptions about the nature of these drivers and thereby restrict model outcome. Despite apparent risk of methodological bias, the validity of the assumptions is rarely tested, as comprehensive procedures are lacking. Here, we propose a classification scheme to determine the processes that gave rise to the observed time series and to enable better model selection. We implemented a three-step classification scheme in R that first determines whether dependence between successive time steps (temporal structure) is present in the time series and then assesses with a recently developed neutrality test whether interactions between species are required for the dynamics. If the first and second tests confirm the presence of temporal structure and interactions, then parameters for interaction models are estimated. To quantify the importance of temporal structure, we compute the noise-type profile of the community, which ranges from black in case of strong dependency to white in the absence of any dependency. We applied this scheme to simulated time series generated with the Dirichlet-multinomial (DM) distribution, Hubbell's neutral model, the generalized Lotka-Volterra model and its discrete variant (the Ricker model), and a self-organized instability model, as well as to human stool microbiota time series. The noise-type profiles for all but DM data clearly indicated distinctive structures. The neutrality test correctly classified all but DM and neutral time series as non-neutral. The procedure reliably identified time series for which interaction inference was suitable. Both tests were required, as we demonstrated that all structured time series, including those generated with the neutral model, achieved a moderate to high goodness of fit to the Ricker model. We present a fast and robust scheme to classify community structure and to assess the prevalence of interactions directly from microbial time series data. The procedure not only serves to determine ecological drivers of microbial dynamics, but also to guide selection of appropriate community models for prediction and follow-up analysis.
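
    The noise-type profile rests on the spectral exponent of a series; a minimal sketch for a single series (the published scheme computes such profiles per taxon and adds neutrality testing):

        import numpy as np

        def spectral_exponent(ts):
            """Slope of the periodogram in log-log space: ~0 for white noise,
            increasingly negative toward 'black' (strongly structured) noise."""
            ts = ts - ts.mean()
            psd = np.abs(np.fft.rfft(ts)) ** 2
            f = np.fft.rfftfreq(len(ts))
            return np.polyfit(np.log(f[1:]), np.log(psd[1:]), 1)[0]

        rng = np.random.default_rng(11)
        print(spectral_exponent(rng.normal(size=2048)))            # ~0 (white)
        print(spectral_exponent(np.cumsum(rng.normal(size=2048)))) # ~-2 (brown/black)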

  18. Numerical Investigations of Capabilities and Limits of Photospheric Data Driven Magnetic Flux Emergence

    NASA Astrophysics Data System (ADS)

    Linton, Mark; Leake, James; Schuck, Peter W.

    2016-05-01

    The magnetic field of the solar atmosphere is the primary driver of solar activity. Understanding the magnetic state of the solar atmosphere is therefore of key importance to predicting solar activity. One promising means of studying the magnetic atmosphere is to dynamically build up and evolve this atmosphere from the time evolution of the magnetic field at the photosphere, where it can be measured with current solar vector magnetograms at high temporal and spatial resolution. We report here on a series of numerical experiments investigating the capabilities and limits of magnetohydrodynamical simulations of such a process, where a magnetic corona is dynamically built up and evolved from a time series of synthetic photospheric data. These synthetic data are composed of photospheric slices taken from self-consistent convection zone to corona simulations of flux emergence. The driven coronae are then quantitatively compared against the coronae of the original simulations. We investigate and report on the fidelity of these driven simulations, both as a function of the emergence timescale of the magnetic flux, and as a function of the driving cadence of the input data. This work was supported by the Chief of Naval Research and the NASA Living with a Star and Heliophysics Supporting Research programs.

  19. Applying downscaled global climate model data to a hydrodynamic surface-water and groundwater model

    USGS Publications Warehouse

    Swain, Eric; Stefanova, Lydia; Smith, Thomas

    2014-01-01

    Precipitation data from Global Climate Models have been downscaled to smaller regions. Adapting this downscaled precipitation data to a coupled hydrodynamic surface-water/groundwater model of southern Florida allows an examination of future conditions and their effect on groundwater levels, inundation patterns, surface-water stage and flows, and salinity. The downscaled rainfall data include the 1996-2001 time series from the European Center for Medium-Range Weather Forecasting ERA-40 simulation and both the 1996-1999 and 2038-2057 time series from two global climate models: the Community Climate System Model (CCSM) and the Geophysical Fluid Dynamic Laboratory (GFDL). Synthesized surface-water inflow datasets were developed for the 2038-2057 simulations. The resulting hydrologic simulations, with and without a 30-cm sea-level rise, were compared with each other and field data to analyze a range of projected conditions. Simulations predicted generally higher future stage and groundwater levels and surface-water flows, with sea-level rise inducing higher coastal salinities. A coincident rise in sea level, precipitation and surface-water flows resulted in a narrower inland saline/fresh transition zone. The inland areas were affected more by the rainfall difference than the sea-level rise, and the rainfall differences make little difference in coastal inundation, but a larger difference in coastal salinities.

  20. GAPPARD: a computationally efficient method of approximating gap-scale disturbance in vegetation models

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.

    2013-02-01

    Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, and to explore patterns of spatial scaling in forests, we developed a new method for simulating stand-replacing disturbances that is both accurate and 10-50x faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model, deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing, e.g., as a result of climate change, GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method into the forest models LPJ-GUESS and TreeM-LPJ, and evaluated these in a series of simulations along an altitudinal transect of an inner-alpine valley. With GAPPARD applied to LPJ-GUESS, results were not significantly different from the output of the original LPJ-GUESS model using 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited to rapidly approximating LPJ-GUESS results, and it provides the opportunity for future studies over large spatial domains, allows easier parameterization of tree species, faster identification of areas of interesting simulation results, and comparisons with large-scale datasets and forest models.
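
    The postprocessing step can be illustrated in a few lines, assuming a constant annual disturbance probability p so that patch age follows a geometric distribution (a simplification of GAPPARD's treatment of time-varying forcing):

        import numpy as np

        def gappard_expectation(undisturbed, p):
            """Expected landscape value of an output variable from one undisturbed run,
            with patch age geometrically distributed under constant annual
            stand-replacing disturbance probability p."""
            ages = np.arange(len(undisturbed))
            w = p * (1.0 - p) ** ages
            w /= w.sum()                       # renormalize over the simulated horizon
            return np.sum(w * undisturbed)

        biomass = np.minimum(np.arange(300) * 1.5, 250.0)   # toy undisturbed biomass curve
        print(gappard_expectation(biomass, p=0.01))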

  1. Prototype software model for designing intruder detection systems with simulation

    NASA Astrophysics Data System (ADS)

    Smith, Jeffrey S.; Peters, Brett A.; Curry, James C.; Gupta, Dinesh

    1998-08-01

    This article explores using discrete-event simulation for the design and control of defence-oriented, fixed-sensor-based detection systems in a facility housing items of significant interest to enemy forces. The key issues discussed include software development, simulation-based optimization within a modeling framework, and the expansion of the framework to create real-time control tools and training simulations. The software discussed in this article is a flexible simulation environment in which the data for the simulation are stored in an external database and the simulation logic is implemented using a commercial simulation package. The simulation assesses the overall security level of a building against various intruder scenarios. A series of simulation runs with different inputs can determine the change in security level with changes in the sensor configuration, building layout, and intruder/guard strategies. In addition, the simulation model developed for the design stage of the project can be modified to produce a control tool for the testing, training, and real-time control of systems with humans and sensor hardware in the loop.

  2. Power in the bucket and angle of arrival modelling in the presence of an airborne platform-induced turbulence

    NASA Astrophysics Data System (ADS)

    Velluet, Marie-Thérèse

    2017-10-01

    In the framework of a European collaborative research project called ALWS (Airborne platform effects on Lasers and Warning Sensors), the effects of platform-related turbulence on MAWS (missile approach warning systems) and DIRCM (directed infrared countermeasures) performance are investigated. Field trials have been conducted to study the turbulence effects around a hovering helicopter and behind a turboprop aircraft on the ground, with engines running. The time dependence of the power in the bucket and the amplitude of the angle of arrival were characterized during the trials. Temporal spectra of these two parameters present an asymptotic behavior typical of optical beams propagating through developed (Kolmogorov) turbulence. Based on the formalism developed for propagation through atmospheric turbulence, we first estimated the turbulence strength and wind velocity inside the plume for different flight conditions. We then proposed an approach to simulate time series of these two quantities under the same conditions. These simulated time series have been compared with the recorded data to assess their validity domain. This model will be integrated in a simulator to estimate the impact of the turbulence induced by the platform and calculate the system performance. In this model, which is dedicated to plume and downwash effects, aero-optical effects are not taken into account.

  3. Using time series structural characteristics to analyze grain prices in food insecure countries

    USGS Publications Warehouse

    Davenport, Frank; Funk, Chris

    2015-01-01

    Two components of food security monitoring are accurate forecasts of local grain prices and the ability to identify unusual price behavior. We evaluated a method that can both facilitate forecasts of cross-country grain price data and identify dissimilarities in price behavior across multiple markets. This method, characteristic based clustering (CBC), identifies similarities in multiple time series based on structural characteristics in the data. Here, we conducted a simulation experiment to determine whether CBC can be used to improve the accuracy of maize price forecasts. We then compared forecast accuracies among clustered and non-clustered price series over a rolling time horizon. We found that the accuracy of forecasts on clusters of time series was equal to or worse than that of forecasts based on individual time series. However, in the following experiment we found that CBC was still useful for price analysis. We used the clusters to explore the similarity of price behavior among Kenyan maize markets. We found that price behavior in the isolated markets of Mandera and Marsabit has become increasingly dissimilar from markets in other Kenyan cities, and that these dissimilarities could not be explained solely by geographic distance. The structural isolation of Mandera and Marsabit that we find in this paper is supported by field studies on food security and market integration in Kenya. Our results suggest that a market with a unique price series (as measured by structural characteristics that differ from neighboring markets) may lack market integration and food security.
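    The abstract does not list the exact structural characteristics used, so the sketch below illustrates the general CBC idea with an assumed feature set (linear trend slope, lag-1 autocorrelation, and moments of the differenced series) and standard Ward hierarchical clustering; it is a sketch of the technique, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.stats import skew, kurtosis

    def features(x):
        """Structural characteristics of one price series (illustrative subset)."""
        x = np.asarray(x, dtype=float)
        t = np.arange(len(x))
        slope = np.polyfit(t, x, 1)[0]                  # linear trend
        acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]         # lag-1 autocorrelation
        d = np.diff(x)
        return np.array([slope, acf1, skew(d), kurtosis(d), np.std(d)])

    def cbc_clusters(series_list, k):
        F = np.array([features(s) for s in series_list])
        F = (F - F.mean(axis=0)) / F.std(axis=0)        # standardize features
        Z = linkage(F, method="ward")                   # cluster on features,
        return fcluster(Z, t=k, criterion="maxclust")   # not on raw series

    # Toy example: two trending markets and one mean-reverting one
    rng = np.random.default_rng(0)
    s1 = np.cumsum(rng.normal(0.5, 1, 200))
    s2 = np.cumsum(rng.normal(0.5, 1, 200))
    s3 = rng.normal(0, 1, 200)
    print(cbc_clusters([s1, s2, s3], k=2))              # e.g. [1 1 2]
    ```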

  4. Computational simulation of the creep-rupture process in filamentary composite materials

    NASA Technical Reports Server (NTRS)

    Slattery, Kerry T.; Hackett, Robert M.

    1991-01-01

    A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.

  5. Modified cross sample entropy and surrogate data analysis method for financial time series

    NASA Astrophysics Data System (ADS)

    Yin, Yi; Shang, Pengjian

    2015-09-01

    To research multiscale behaviors from the angle of entropy, we propose a modified cross sample entropy (MCSE) and combine it with surrogate data analysis in order to compute entropy differences between the original dynamics and surrogate series (MCSDiff). MCSDiff is applied to simulated signals to demonstrate its accuracy and then employed on US and Chinese stock markets. We illustrate the presence of multiscale behavior in the MCSDiff results and reveal that there is synchrony in the original financial time series, reflecting intrinsic relations between them, which is destroyed by surrogate data analysis. Furthermore, the multifractal behaviors of the cross-correlations between these financial time series are investigated by the multifractal detrended cross-correlation analysis (MF-DCCA) method, since multifractal analysis is a multiscale analysis. We explore the multifractal properties of the cross-correlations between these US and Chinese markets and show the distinctiveness of NQCI and HSI among the markets in their own regions. It can be concluded that the weaker cross-correlation among US markets is evidence of a better inner mechanism in the US stock markets than in the Chinese stock markets. Studying the multiscale features and properties of financial time series can provide valuable information for understanding the inner mechanism of financial markets.
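    For orientation, the sketch below implements the standard cross sample entropy (in the style of Richman and Moorman) on which MCSE builds; the paper's modification and the surrogate-data step are not reproduced. Template length m and tolerance r are the usual parameters, and the series are standardized first.

    ```python
    import numpy as np

    def cross_sample_entropy(u, v, m=2, r=0.2):
        """Standard cross sample entropy between two standardized series:
        -ln(A/B), where B and A count template matches (Chebyshev distance
        within r) of lengths m and m+1 respectively."""
        u = (u - np.mean(u)) / np.std(u)
        v = (v - np.mean(v)) / np.std(v)

        def count_matches(mm):
            U = np.array([u[i:i + mm] for i in range(len(u) - mm + 1)])
            V = np.array([v[j:j + mm] for j in range(len(v) - mm + 1)])
            d = np.max(np.abs(U[:, None, :] - V[None, :, :]), axis=2)
            return np.sum(d <= r)

        B, A = count_matches(m), count_matches(m + 1)
        return -np.log(A / B)

    rng = np.random.default_rng(1)
    x = rng.normal(size=500)
    y = 0.8 * x + 0.2 * rng.normal(size=500)   # synchronized pair -> lower entropy
    z = rng.normal(size=500)                   # independent pair  -> higher entropy
    print(cross_sample_entropy(x, y), cross_sample_entropy(x, z))
    ```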

  6. [Gene method for inconsistent hydrological frequency calculation. I: Inheritance, variability and evolution principles of hydrological genes].

    PubMed

    Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie

    2018-04-01

    A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheritance characteristics, but also deterministic components reflecting variability characteristics, such as jumps, trends, periods, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in inconsistency studies. Then, we proposed a new concept of hydrological genes, originating from the concept of biological genes, to describe inconsistent hydrological processes. The hydrological genes were constructed using moment methods, such as general moments, weight-function moments, probability-weighted moments and L-moments. Meanwhile, the five components of a stochastic hydrological process (jump, trend, periodic, dependence and pure random components) were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were synthetically considered, and the inheritance, variability and evolution principles were fully described. Our study contributes to revealing the inheritance, variability and evolution principles of the probability distributions of hydrological elements.
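    Of the moment families named in the abstract, L-moments are convenient to illustrate. The sketch below computes the first four sample L-moments from probability-weighted moments using the standard Hosking relations; the gamma-distributed sample is synthetic.

    ```python
    import numpy as np

    def l_moments(x):
        """First four sample L-moments via probability-weighted moments:
        l1 = b0, l2 = 2*b1 - b0, l3 = 6*b2 - 6*b1 + b0,
        l4 = 20*b3 - 30*b2 + 12*b1 - b0."""
        x = np.sort(np.asarray(x, dtype=float))       # ascending order statistics
        n = len(x)
        j = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((j - 1) * x) / (n * (n - 1))
        b2 = np.sum((j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2))
        b3 = np.sum((j - 1) * (j - 2) * (j - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
        l1 = b0
        l2 = 2 * b1 - b0
        l3 = 6 * b2 - 6 * b1 + b0
        l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
        return l1, l2, l3 / l2, l4 / l2   # mean, L-scale, L-skewness, L-kurtosis

    rng = np.random.default_rng(2)
    print(l_moments(rng.gamma(shape=2.0, scale=10.0, size=5000)))
    ```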

  7. A rapid learning and dynamic stepwise updating algorithm for flat neural networks and the application to time-series prediction.

    PubMed

    Chen, C P; Wan, J Z

    1999-01-01

    A fast learning algorithm is proposed to find the optimal weights of flat neural networks (especially the functional-link network). Although flat networks are used for nonlinear function approximation, they can be formulated as linear systems. Thus, the weights of the networks can be solved easily using a linear least-squares method. This formulation makes it easy to update the weights instantly for both a newly added pattern and a newly added enhancement node. A dynamic stepwise updating algorithm is proposed to update the weights of the system on the fly. The model is tested on several time-series datasets, including an infrared laser data set, a chaotic time series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared to existing models that require more complex architectures and more costly training. The results indicate that the proposed model is very attractive for real-time processes.
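    Because a flat network is linear in its output weights once the inputs are expanded, it can be solved in one shot by least squares and then updated recursively. The sketch below illustrates this with an assumed tanh random-enhancement expansion and a textbook recursive-least-squares update for one newly added pattern; the paper's update for newly added enhancement nodes is omitted.

    ```python
    import numpy as np

    def expand(X, W_enh, b_enh):
        """Functional-link style expansion: raw inputs plus tanh enhancement nodes."""
        return np.hstack([X, np.tanh(X @ W_enh + b_enh)])

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(200, 1))              # toy nonlinear regression task
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)

    W_enh = rng.normal(size=(1, 20))                   # fixed random enhancement weights
    b_enh = rng.normal(size=20)
    H = expand(X, W_enh, b_enh)

    # Batch solution: the flat network is linear in its output weights
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    P = np.linalg.inv(H.T @ H)                         # kept for incremental updates

    # Stepwise update for one newly added pattern (recursive least squares)
    x_new, y_new = np.array([[0.3]]), np.sin(0.9)
    h = expand(x_new, W_enh, b_enh)[0]
    K = P @ h / (1.0 + h @ P @ h)
    w = w + K * (y_new - h @ w)
    P = P - np.outer(K, h @ P)
    print("prediction at 0.3:", expand(np.array([[0.3]]), W_enh, b_enh) @ w)
    ```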

  8. Quantifying evolutionary dynamics from variant-frequency time series

    NASA Astrophysics Data System (ADS)

    Khatri, Bhavin S.

    2016-09-01

    From Kimura’s neutral theory of protein evolution to Hubbell’s neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the 2-variant case, we exploit Fisher’s angular transformation, which, despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, together with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time series.
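    Fisher's angular transformation phi = arcsin(sqrt(f)) is variance-stabilizing: under neutral Wright-Fisher sampling, the per-generation variance of phi is approximately 1/(8N) regardless of the current frequency, whereas the variance of f itself is f(1-f)/(2N). A quick numerical check of this property (toy parameters, not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N = 500   # population size; binomial sampling of 2N gene copies

    for f0 in (0.1, 0.3, 0.5):
        # One generation of neutral Wright-Fisher drift starting from frequency f0
        f1 = rng.binomial(2 * N, f0, size=20000) / (2 * N)
        phi1 = np.arcsin(np.sqrt(f1))
        # Var(f1) = f0*(1-f0)/(2N) depends on f0; Var(phi1) is ~1/(8N) for all f0
        print(f0, round(np.var(f1), 6), round(np.var(phi1), 6), 1 / (8 * N))
    ```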

  9. Quantifying evolutionary dynamics from variant-frequency time series.

    PubMed

    Khatri, Bhavin S

    2016-09-12

    From Kimura's neutral theory of protein evolution to Hubbell's neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the 2-variant case, we exploit Fisher's angular transformation, which, despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, together with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time series.

  10. Calculation of power spectrums from digital time series with missing data points

    NASA Technical Reports Server (NTRS)

    Murray, C. W., Jr.

    1980-01-01

    Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
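    A sketch of the modified Blackman-Tukey idea under a simplifying assumption: the missing points sit on an otherwise regular grid and are marked NaN, lagged products are averaged over the available pairs only, and the windowed autocorrelation is Fourier transformed. The paper's use of an average sampling interval for fully arbitrary spacing is not reproduced here.

    ```python
    import numpy as np

    def acf_missing(x, max_lag):
        """Mean lagged products over pairs where both samples are present."""
        xm = np.asarray(x, float) - np.nanmean(x)
        n = len(xm)
        r = np.empty(max_lag + 1)
        for k in range(max_lag + 1):
            a, b = xm[:n - k], xm[k:]
            ok = ~np.isnan(a) & ~np.isnan(b)
            r[k] = np.mean(a[ok] * b[ok])
        return r

    def bt_spectrum(x, max_lag):
        """Blackman-Tukey style estimate: transform of the windowed mean
        lagged products."""
        rw = acf_missing(x, max_lag) * np.hanning(2 * max_lag + 1)[max_lag:]
        rw = np.concatenate([rw, rw[-2:0:-1]])        # even extension of the ACF
        return np.fft.rfftfreq(len(rw)), np.abs(np.fft.rfft(rw))

    # Sinusoid at 0.1 cycles/sample with 20% of the samples missing
    rng = np.random.default_rng(5)
    t = np.arange(1000)
    x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.normal(size=1000)
    x[rng.random(1000) < 0.2] = np.nan
    freqs, S = bt_spectrum(x, max_lag=200)
    print("spectral peak near", freqs[np.argmax(S)])   # ~0.1 cycles/sample
    ```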

  11. Lead-lag cross-sectional structure and detection of correlated anticorrelated regime shifts: Application to the volatilities of inflation and economic growth rates

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Sornette, Didier

    2007-07-01

    We have recently introduced the “thermal optimal path” (TOP) method to investigate the real-time lead-lag structure between two time series. The TOP method consists in searching for a robust noise-averaged optimal path of the distance matrix along which the two time series have the greatest similarity. Here, we generalize the TOP method by introducing a more general definition of distance which takes into account possible regime shifts between positive and negative correlations. This generalization to track possible changes of correlation signs is able to identify possible transitions from one convention (or consensus) to another. Numerical simulations on synthetic time series verify that the new TOP method performs as expected even in the presence of substantial noise. We then apply it to investigate changes of convention in the dependence structure between the historical volatilities of the USA inflation rate and economic growth rate. Several measures show that the new TOP method significantly outperforms standard cross-correlation methods.

  12. Noise-assisted data processing with empirical mode decomposition in biomedical signals.

    PubMed

    Karagiannis, Alexandros; Constantinou, Philip

    2011-01-01

    In this paper, a methodology is described for investigating the performance of empirical mode decomposition (EMD) on biomedical signals, especially the electrocardiogram (ECG). Synthetic ECG signals corrupted with white Gaussian noise are employed, and time series of various lengths are processed with EMD in order to extract the intrinsic mode functions (IMFs). A statistical significance test is implemented for the identification of IMFs with high-level noise components and their exclusion from denoising procedures. Simulation campaign results reveal that a decrease in processing time is accomplished by the introduction of a preprocessing stage prior to the application of EMD to biomedical time series. Furthermore, the variation in the number of IMFs according to the type of the preprocessing stage is studied as a function of SNR and time-series length. The application of the methodology to MIT-BIH ECG records is also presented in order to verify the findings on real ECG signals.

  13. Optical signal processing using photonic reservoir computing

    NASA Astrophysics Data System (ADS)

    Salehi, Mohammad Reza; Dehyadegari, Louiza

    2014-10-01

    As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency, and high speed. In this paper, a photonic structure is proposed for reservoir computing, which is investigated using a simple, yet non-partial, noisy time series prediction task. This study includes the application of a suitable topology with self-feedbacks in a network of SOAs, which lends the system a strong memory, and leads to adjusting adequate parameters resulting in perfect recognition accuracy (100%) for noise-free time series, a 3% improvement over previous results. For the classification of noisy time series, the rate of accuracy increased by 4%, to 96%. Furthermore, an analytical approach is suggested for solving the rate equations, which leads to a substantial decrease in simulation time (an important parameter in the classification of large signals, such as in speech recognition) and yields better results than previous works.

  14. Model for the heart beat-to-beat time series during meditation

    NASA Astrophysics Data System (ADS)

    Capurro, A.; Diambra, L.; Malta, C. P.

    2003-09-01

    We present a model for the respiratory modulation of the heart beat-to-beat interval series. The model consists of a pacemaker, that simulates the membrane potential of the sinoatrial node, modulated by a periodic input signal plus correlated noise that simulates the respiratory input. The model was used to assess the waveshape of the respiratory signals needed to reproduce in the phase space the trajectory of experimental heart beat-to-beat interval data. The data sets were recorded during meditation practices of the Chi and Kundalini Yoga techniques. Our study indicates that in the first case the respiratory signal has the shape of a smoothed square wave, and in the second case it has the shape of a smoothed triangular wave.

  15. Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm

    NASA Astrophysics Data System (ADS)

    Mathai, J.; Mujumdar, P.

    2017-12-01

    A key focus of this study is to develop a method that is physically consistent with the hydrologic processes and can capture short-term characteristics of the daily hydrograph as well as the correlation of streamflow in the temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales, such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of streamflow time series. The method has two steps: in step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising limb increments randomly sampled from a Gamma distribution and the falling limb modelled as exponential recession; in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. By contrast, daily flow generated using the Markov chain approach is capable of producing a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and the improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
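    A minimal sketch of step 1 of the generator (the KNN resampling of step 2 is omitted); the state transition probabilities, Gamma parameters and recession constant below are illustrative, not calibrated to the Godavari data.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def generate_daily_flow(n_days, p_wet=0.3, p_stay_wet=0.7,
                            gamma_shape=2.0, gamma_scale=5.0, recession=0.05):
        """Two-state (rising/falling) Markov chain: Gamma-distributed increments
        on the rising limb, exponential recession on the falling limb."""
        q = np.empty(n_days)
        q[0], wet = 10.0, True
        for t in range(1, n_days):
            p = p_stay_wet if wet else p_wet
            wet = rng.random() < p
            if wet:   # rising limb: add a Gamma-distributed increment
                q[t] = q[t - 1] + rng.gamma(gamma_shape, gamma_scale)
            else:     # falling limb: exponential recession
                q[t] = q[t - 1] * np.exp(-recession)
        return q

    flow = generate_daily_flow(365)
    print(flow.min(), flow.max())
    ```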

  16. Detection of long term persistence in time series of the Neuquen River (Argentina)

    NASA Astrophysics Data System (ADS)

    Seoane, Rafael; Paz González, Antonio

    2014-05-01

    In the Patagonian region (Argentina), previous hydrometeorological studies developed using general circulation models show variations in annual mean flows. Future climate scenarios obtained from high-resolution models indicate decreases in total annual precipitation, and these scenarios are most important in the Neuquén river basin (23,000 km²). The aim of this study was the estimation of long term persistence in the Neuquén River basin (Argentina). The detection of variations in long range dependence and long memory of the time series was evaluated with the Hurst exponent. We applied rescaled adjusted range (R/S) analysis to time series of river discharges measured from 1903 to 2011, divided into two subperiods: the first from 1903 to 1970 and the second from 1970 to 2011. Results show a small increase in persistence for the second period. Our results are consistent with those obtained by Koch and Markovic (2007), who observed and estimated an increase of the H exponent for the period 1960-2000 in the Elbe River (Germany). References: Hurst, H. (1951). "Long-term storage capacity of reservoirs". Trans. Am. Soc. Civil Engrs., 116:776-808. Koch and Markovic (2007). "Evidences for climate change in Germany over the 20th century from the stochastic analysis of hydro-meteorological time series". MODSIM07, International Congress on Modelling and Simulation, Christchurch, New Zealand.
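    For reference, a compact implementation of the rescaled adjusted range (R/S) estimate of the Hurst exponent of the kind used in such studies; the window sizes and fitting range are arbitrary choices, and small samples bias the estimate slightly upward.

    ```python
    import numpy as np

    def hurst_rs(x, min_win=8):
        """Hurst exponent from R/S analysis: slope of log(R/S) vs log(window)."""
        x = np.asarray(x, float)
        n = len(x)
        sizes = np.unique(np.logspace(np.log10(min_win), np.log10(n // 2), 15).astype(int))
        rs = []
        for s in sizes:
            vals = []
            for start in range(0, n - s + 1, s):
                w = x[start:start + s]
                z = np.cumsum(w - w.mean())        # adjusted cumulative departures
                r, sd = z.max() - z.min(), w.std()
                if sd > 0:
                    vals.append(r / sd)            # rescaled range for this window
            rs.append(np.mean(vals))
        h, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
        return h

    rng = np.random.default_rng(7)
    print(hurst_rs(rng.normal(size=4096)))   # ~0.5 for white noise
    ```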

  17. Model for the respiratory modulation of the heart beat-to-beat time interval series

    NASA Astrophysics Data System (ADS)

    Capurro, Alberto; Diambra, Luis; Malta, C. P.

    2005-09-01

    In this study we present a model for the respiratory modulation of the heart beat-to-beat interval series. The model consists of a set of differential equations used to simulate the membrane potential of a single rabbit sinoatrial node cell, excited with a periodic input signal with added correlated noise. This signal, which simulates the input from the autonomic nervous system to the sinoatrial node, was included in the pacemaker equations as a modulation of the iNaK current pump and the potassium current iK. We focus on modeling the heart beat-to-beat time interval series from normal subjects during meditation using the Kundalini Yoga and Chi techniques. The analysis of the experimental data indicates that while the embedding of the pre-meditation and control cases has a roughly circular shape, it acquires a polygonal shape during meditation: triangular for the Kundalini Yoga data and quadrangular for the Chi data. The model was used to assess the waveshape of the respiratory signals needed to reproduce the trajectory of the experimental data in the phase space. The embedding of the Chi data could be reproduced using a periodic signal obtained by smoothing a square wave. In the case of the Kundalini Yoga data, the embedding was reproduced with a periodic signal obtained by smoothing a triangular wave having a rising branch of longer duration than the decreasing branch. Our study provides an estimation of the respiratory signal using only the heart beat-to-beat time interval series.

  18. Piecewise multivariate modelling of sequential metabolic profiling data.

    PubMed

    Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan

    2008-02-19

    Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective to model the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.

  19. Simulations of 6-DOF Motion with a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.

  20. SIMULATING ATMOSPHERIC EXPOSURE IN A NATIONAL RISK ASSESSMENT USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME

    EPA Science Inventory

    Multimedia risk assessments require the temporal integration of atmospheric concentration and deposition with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-term average a...

  1. Oculometric indices of simulator and aircraft motion

    NASA Technical Reports Server (NTRS)

    Comstock, J. R.

    1984-01-01

    The effects of both simulator and aircraft motion on eye scan behavior, and the sensitivity of an oculometric measure to motion effects, were demonstrated. It was found that fixation time is sensitive to motion effects. Differences between simulator motion and no-motion conditions during a series of simulated ILS approaches were studied. The mean fixation time for the no-motion condition was found to be significantly longer than for the motion conditions. Eye scan parameters based on data collected in flight and in fixed-base simulation were investigated. Motion effects were evident when the subject was viewing a display supplying attitude and flight path information. The nature of the information provided by motion was examined. The mean fixation times for the no-motion condition were significantly longer than for either motion condition, while the two motion conditions did not differ. It is shown that motion serves an alerting function, providing a cue to the pilot that something has happened. It is suggested that simulation without motion cues may represent an understatement of the true capacity of the pilot.

  2. Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram

    2016-05-01

    Uncertainties in the field parameters, noise in the observed data and unknown boundary conditions are the main factors involved in groundwater level (GL) time series which limit the modeling and simulation of GL. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. First, the GL time series observed at different piezometers were de-noised using a threshold-based wavelet method, and the impact of de-noised versus noisy data on temporal GL modeling was compared for an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). In the second step, both ANN and ANFIS models were calibrated and verified using the GL data of each piezometer, rainfall and runoff, considering various input scenarios, to predict the GL one month ahead. In the final step, the GLs simulated in the second step were used as interior conditions for the multiquadric radial basis function (RBF) based solution of the governing partial differential equation of groundwater flow, to estimate the GL at any desired point within the plain where there is no observation. In order to evaluate and compare the GL pattern at different time scales, cross-wavelet coherence was also applied to the GL time series of the piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling by up to 13.4%. It was also found that the ANFIS-RBF model is more reliable than the ANN-RBF model in both the calibration and validation steps.

  3. Simulating Responses of Gravitational-Wave Instrumentation

    NASA Technical Reports Server (NTRS)

    Armstrong, John; Edlund, Jeffrey; Vallisneri, Michele

    2006-01-01

    Synthetic LISA is a computer program for simulating the responses of the instrumentation of the NASA/ESA Laser Interferometer Space Antenna (LISA) mission, the purpose of which is to detect and study gravitational waves. Synthetic LISA generates synthetic time series of the LISA fundamental noises, as filtered through all the time-delay-interferometry (TDI) observables. (TDI is a method of canceling phase noise in temporally varying unequal-arm interferometers.) Synthetic LISA provides a streamlined module to compute the TDI responses to gravitational waves, according to a full model of TDI (including the motion of the LISA array and the temporal and directional dependence of the arm lengths). Synthetic LISA is written in the C++ programming language as a modular package that accommodates the addition of code for specific gravitational wave sources or for new noise models. In addition, time series for waves and noises can be easily loaded from disk storage or electronic memory. The package includes a Python-language interface for easy, interactive steering and scripting. Through Python, Synthetic LISA can read and write data files in Flexible Image Transport System (FITS), which is a commonly used astronomical data format.

  4. A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations

    DOE PAGES

    Motl, Patrick M.; Frank, Juhan; Staff, Jan; ...

    2017-03-29

    There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms—a finite-volume "grid" code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code are chosen to match as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. Here, we also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.

  5. The Gaussian Graphical Model in Cross-Sectional and Time-Series Data.

    PubMed

    Epskamp, Sacha; Waldorp, Lourens J; Mõttus, René; Borsboom, Denny

    2018-04-16

    We discuss the Gaussian graphical model (GGM; an undirected network of partial correlation coefficients) and detail its utility as an exploratory data analysis tool. The GGM shows which variables predict one another, allows for sparse modeling of covariance structures, and may highlight potential causal relationships between observed variables. We describe its utility in three kinds of psychological data sets: data sets in which consecutive cases are assumed independent (e.g., cross-sectional data), temporally ordered data sets (e.g., n = 1 time series), and a mixture of the two (e.g., n > 1 time series). In time-series analysis, the GGM can be used to model the residual structure of a vector-autoregression analysis (VAR), also termed graphical VAR. Two network models can then be obtained: a temporal network and a contemporaneous network. When analyzing data from multiple subjects, a GGM can also be formed on the covariance structure of stationary means: the between-subjects network. We discuss the interpretation of these models and propose estimation methods to obtain these networks, which we implement in the R packages graphicalVAR and mlVAR. The methods are showcased in two empirical examples, and simulation studies on these methods are included in the supplementary materials.
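    The GGM edge weights are partial correlations, which follow directly from standardizing the inverse covariance (precision) matrix. The sketch below shows the unregularized version of this step; the cited packages use regularized or model-based estimators on top of it.

    ```python
    import numpy as np

    def partial_correlations(X):
        """GGM edge weights: partial correlations from the precision matrix,
        rho_ij = -K_ij / sqrt(K_ii * K_jj)."""
        K = np.linalg.inv(np.cov(X, rowvar=False))     # precision matrix
        d = np.sqrt(np.diag(K))
        P = -K / np.outer(d, d)
        np.fill_diagonal(P, 0.0)
        return P

    # Chain structure x0 -> x1 -> x2: x0 and x2 are marginally correlated but
    # conditionally independent given x1, so the (0, 2) edge should be ~0.
    rng = np.random.default_rng(8)
    x0 = rng.normal(size=5000)
    x1 = x0 + rng.normal(size=5000)
    x2 = x1 + rng.normal(size=5000)
    print(np.round(partial_correlations(np.column_stack([x0, x1, x2])), 2))
    ```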

  6. Spectral Unmixing Analysis of Time Series Landsat 8 Images

    NASA Astrophysics Data System (ADS)

    Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.

    2018-05-01

    Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Nevertheless, using the temporal information can provide improved unmixing performance when compared to independent image analyses. Moreover, different land cover types may demonstrate different temporal patterns, which can aid their discrimination. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used to extract endmembers for endmember initialization. First, nonnegative least squares (NNLS) is used to estimate abundance maps using the endmembers. Then, the estimated endmember is the mean value of the "purified" pixels, i.e., the residual of the mixed pixel after excluding the contribution of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework yields the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimates than the "separate unmixing" approach.
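    The abundance-estimation step can be illustrated with scipy's NNLS solver. The endmember matrix below is synthetic, and the iterative endmember update of time series K-P-Means is not shown.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Endmember signatures (columns) for a hypothetical 6-band sensor
    E = np.array([[0.10, 0.60, 0.30],
                  [0.15, 0.55, 0.35],
                  [0.20, 0.50, 0.45],
                  [0.45, 0.40, 0.60],
                  [0.60, 0.35, 0.55],
                  [0.65, 0.30, 0.50]])

    true_a = np.array([0.2, 0.5, 0.3])                 # true abundances, sum to 1
    pixel = E @ true_a + 0.005 * np.random.default_rng(9).normal(size=6)

    a_hat, resid = nnls(E, pixel)                      # nonnegative least squares
    a_hat /= a_hat.sum()                               # renormalize to sum-to-one
    print(np.round(a_hat, 3))                          # ~[0.2, 0.5, 0.3]
    ```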

  7. Clustering Multivariate Time Series Using Hidden Markov Models

    PubMed Central

    Ghassempour, Shima; Girosi, Federico; Maeder, Anthony

    2014-01-01

    In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistic knowledge, and therefore are accessible to a wide range of researchers. PMID:24662996

  8. Characterization of autoregressive processes using entropic quantifiers

    NASA Astrophysics Data System (ADS)

    Traversaro, Francisco; Redelico, Francisco O.

    2018-01-01

    The aim of this contribution is to introduce a novel information plane, the causal-amplitude informational plane. As previous works seem to indicate, the Bandt and Pompe methodology for estimating entropy does not allow one to distinguish between probability distributions, which could be fundamental for simulation or probability analysis purposes. Once a time series is identified as stochastic by the causal complexity-entropy informational plane, the novel causal-amplitude plane gives a deeper understanding of the time series, quantifying both the autocorrelation strength and the probability distribution of the data extracted from the generating processes. Two examples are presented, one from a climate change model and the other from financial markets.
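    For context, the Bandt-Pompe estimator computes entropy from ordinal patterns, which is precisely why it is insensitive to the amplitude distribution, the gap the causal-amplitude plane addresses. A standard implementation of normalized permutation entropy:

    ```python
    import numpy as np
    from itertools import permutations
    from math import factorial, log

    def permutation_entropy(x, order=3, delay=1):
        """Normalized Bandt-Pompe permutation entropy: Shannon entropy of the
        distribution of ordinal patterns of length `order`."""
        x = np.asarray(x, float)
        n = len(x) - (order - 1) * delay
        counts = {p: 0 for p in permutations(range(order))}
        for i in range(n):
            window = x[i:i + order * delay:delay]
            counts[tuple(np.argsort(window))] += 1     # ordinal pattern of window
        probs = np.array([c for c in counts.values() if c > 0]) / n
        return -np.sum(probs * np.log(probs)) / log(factorial(order))

    rng = np.random.default_rng(10)
    print(permutation_entropy(rng.normal(size=2000)))   # ~1 for white noise
    t = np.arange(2000)
    print(permutation_entropy(np.sin(0.2 * t)))         # low for a regular signal
    ```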

  9. Global Gridded Crop Model Evaluation: Benchmarking, Skills, Deficiencies and Implications.

    NASA Technical Reports Server (NTRS)

    Muller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Folberth, Christian; Glotter, Michael; Hoek, Steven; ...

    2017-01-01

    Crop models are increasingly used to simulate crop yields at the global scale, but so far there is no general framework on how to assess model performance. Here we evaluate the simulation results of 14 global gridded crop modeling groups that have contributed historic crop yield simulations for maize, wheat, rice and soybean to the Global Gridded Crop Model Intercomparison (GGCMI) of the Agricultural Model Intercomparison and Improvement Project (AgMIP). Simulation results are compared to reference data at global, national and grid cell scales and we evaluate model performance with respect to time series correlation, spatial correlation and mean bias. We find that global gridded crop models (GGCMs) show mixed skill in reproducing time series correlations or spatial patterns at the different spatial scales. Generally, maize, wheat and soybean simulations of many GGCMs are capable of reproducing larger parts of observed temporal variability (time series correlation coefficients (r) of up to 0.888 for maize, 0.673 for wheat and 0.643 for soybean at the global scale) but rice yield variability cannot be well reproduced by most models. Yield variability can be well reproduced for most major producing countries by many GGCMs and for all countries by at least some. A comparison with gridded yield data and a statistical analysis of the effects of weather variability on yield variability shows that the ensemble of GGCMs can explain more of the yield variability than an ensemble of regression models for maize and soybean, but not for wheat and rice. We identify future research needs in global gridded crop modeling and for all individual crop modeling groups. In the absence of a purely observation-based benchmark for model evaluation, we propose that the best performing crop model per crop and region establishes the benchmark for all others, and modelers are encouraged to investigate how crop model performance can be increased. We make our evaluation system accessible to all crop modelers so that other modeling groups can also test their model performance against the reference data and the GGCMI benchmark.

  10. Temporal rainfall disaggregation using a multiplicative cascade model for spatial application in urban hydrology

    NASA Astrophysics Data System (ADS)

    Müller, H.; Haberlandt, U.

    2018-01-01

    Rainfall time series of high temporal resolution and spatial density are crucial for urban hydrology. The multiplicative random cascade model can be used for temporal rainfall disaggregation of daily data to generate such time series. Here, the uniform splitting approach with a branching number of 3 in the first disaggregation step is applied. To achieve a final resolution of 5 min, subsequent steps after disaggregation are necessary. Three modifications at different disaggregation levels are tested in this investigation (uniform splitting at Δt = 15 min, linear interpolation at Δt = 7.5 min and Δt = 3.75 min). Results are compared both with observations and with an often-used approach based on the assumption that time steps of Δt = 5.625 min, as result if a branching number of 2 is applied throughout, can be replaced with Δt = 5 min (called the 1280 min approach). Spatial consistence is implemented in the disaggregated time series using a resampling algorithm. In total, 24 recording stations in Lower Saxony, Northern Germany, with a 5 min resolution have been used for the validation of the disaggregation procedure. The urban-hydrological suitability is tested with an artificial combined sewer system of about 170 hectares. The results show that all three variations outperform the 1280 min approach regarding reproduction of wet spell duration, average intensity, fraction of dry intervals and lag-1 autocorrelation. Extreme values with durations of 5 min are also better represented. For durations of 1 h, all approaches show only slight deviations from the observed extremes. The applied resampling algorithm is capable of achieving sufficient spatial consistence. The effects on the urban hydrological simulations are significant. Without spatial consistence, flood volumes of manholes and combined sewer overflow are strongly underestimated. After resampling, results using disaggregated time series as input are in the range of those using observed time series. The best overall performance regarding rainfall statistics is obtained by the method in which the disaggregation process ends at time steps of 7.5 min duration, deriving the 5 min time steps by linear interpolation. With subsequent resampling, this method leads to a good representation of manhole flooding and combined sewer overflow volume in the hydrological simulations and outperforms the 1280 min approach.

  11. Short-arc measurement and fitting based on the bidirectional prediction of observed data

    NASA Astrophysics Data System (ADS)

    Fei, Zhigen; Xu, Xiaojie; Georgiadis, Anthimos

    2016-02-01

    To measure a short arc is a notoriously difficult problem. In this study, the bidirectional prediction method based on the Radial Basis Function Neural Network (RBFNN) is applied to observed data distributed along a short arc to increase the corresponding arc length, and thus improve its fitting accuracy. Firstly, the rationality of regarding observed data as a time series is discussed in accordance with the definition of a time series. Secondly, the RBFNN is constructed to predict the observed data, where the interpolation method is used for enlarging the size of training examples in order to improve the learning accuracy of the RBFNN’s parameters. Finally, in the numerical simulation section, we focus on simulating how the size of the training sample and noise level influence the learning error and prediction error of the built RBFNN. Typically, the observed data coming from a 5° short arc are used to evaluate the performance of the Hyper method known as the ‘unbiased fitting method of circle’ with a different noise level before and after prediction. A number of simulation experiments reveal that the fitting stability and accuracy of the Hyper method after prediction are far superior to the ones before prediction.

  12. Optimal heavy tail estimation - Part 1: Order selection

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred; Bermejo, Miguel A.

    2017-12-01

    The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
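    For orientation, the classical Hill estimator that underlies such order-selection problems is sketched below; the paper's data-adaptive, simulation-based selector of the order k is not reproduced, so k is supplied by hand.

    ```python
    import numpy as np

    def hill_alpha(x, k):
        """Hill estimator of the tail exponent alpha from the k largest values:
        alpha = 1 / mean(log X_(i) - log X_(k+1)), i = 1..k."""
        xs = np.sort(np.abs(np.asarray(x, float)))[::-1]     # descending order
        gamma = np.mean(np.log(xs[:k]) - np.log(xs[k]))      # mean log excess
        return 1.0 / gamma

    # Pareto sample with alpha = 1.5: estimates should be near 1.5, and their
    # sensitivity to k illustrates why order selection matters.
    rng = np.random.default_rng(11)
    x = rng.pareto(1.5, size=20000) + 1.0
    for k in (100, 500, 2000):
        print(k, round(hill_alpha(x, k), 2))
    ```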

  13. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.

    2002-01-01

    The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.

  14. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity.

  15. Evolution of record-breaking high and low monthly mean temperatures

    NASA Astrophysics Data System (ADS)

    Anderson, A. L.; Kostinski, A. B.

    2011-12-01

    We examine the ratio of record-breaking highs to record-breaking lows with respect to extent of time-series for monthly mean temperatures within the continental United States (1900-2006) and ask the following question. How are record-breaking high and low surface temperatures in the United States affected by time period? We find that the ratio of record-breaking highs to lows in 2006 increases as the time-series extend further into the past. For example: in 2006, the ratio of record-breaking highs to record-breaking lows is ≈ 13 : 1 with 1950 as the first year and ≈ 25 : 1 with 1900 as the first year; both ratios are an order of magnitude greater than 3-σ for stationary simulations. We also find record-breaking events are more sensitive to trends in time-series of monthly averages than time-series of corresponding daily values. When we consider the ratio as it evolves with respect to a fixed start year, we find it is strongly correlated with the ensemble mean. Correlation coefficients are 0.76 and 0.82 for 1900-2006 and 1950-2006 respectively; 3-σ = 0.3 for pairs of uncorrelated stationary time-series. We find similar values for globally distributed time-series: 0.87 and 0.92 for 1900-2006 and 1950-2006 respectively. However, the ratios evolve differently: global ratios increase throughout (1920-2006) while continental United States ratios decrease from about 1940 to 1970. (Based on Anderson and Kostinski (2011), Evolution and distribution of record-breaking high and low monthly mean temperatures. Journal of Applied Meteorology and Climatology. doi: 10.1175/JAMC-D-10-05025.1)
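    A toy single-station analogue of the record-counting statistic (not the authors' station-ensemble analysis): for a stationary iid series, the probability of a record at step t is 1/t for both highs and lows, so the ratio hovers near one, while a warming trend inflates record highs relative to record lows. All parameters are illustrative.

    ```python
    import numpy as np

    def record_counts(x):
        """Numbers of record highs and lows; the first value counts as both."""
        hi, lo, n_hi, n_lo = x[0], x[0], 1, 1
        for v in x[1:]:
            if v > hi:
                hi, n_hi = v, n_hi + 1
            if v < lo:
                lo, n_lo = v, n_lo + 1
        return n_hi, n_lo

    rng = np.random.default_rng(12)
    years, trials = 107, 2000                 # e.g. a 1900-2006 record length
    for trend in (0.0, 0.01):                 # trend in units of noise std per year
        ratios = []
        for _ in range(trials):
            x = trend * np.arange(years) + rng.normal(size=years)
            h, l = record_counts(x)
            ratios.append(h / l)
        print(trend, round(np.mean(ratios), 2))   # stationary ~1; trend inflates it
    ```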

  16. A Point Rainfall Generator With Internal Storm Structure

    NASA Astrophysics Data System (ADS)

    Marien, J. L.; Vandewiele, G. L.

    1986-04-01

    A point rainfall generator is a probabilistic model for the time series of rainfall as observed in one geographical point. The main purpose of such a model is to generate long synthetic sequences of rainfall for simulation studies. The present generator is a continuous time model based on 13.5 years of 10-min point rainfalls observed in Belgium and digitized with a resolution of 0.1 mm. The present generator attempts to model all features of the rainfall time series which are important for flood studies as accurately as possible. The original aspects of the model are on the one hand the way in which storms are defined and on the other hand the theoretical model for the internal storm characteristics. The storm definition has the advantage that the important characteristics of successive storms are fully independent and very precisely modelled, even on time bases as small as 10 min. The model of the internal storm characteristics has a strong theoretical structure. This fact justifies better the extrapolation of this model to severe storms for which the data are very sparse. This can be important when using the model to simulate severe flood events.

  17. Strain System for the Motion Base Shuttle Mission Simulator

    NASA Technical Reports Server (NTRS)

    Huber, David C.; Van Vossen, Karl G.; Kunkel, Glenn W.; Wells, Larry W.

    2010-01-01

    The Motion Base Shuttle Mission Simulator (MBSMS) Strain System is an innovative engineering tool used to monitor the stresses applied to the MBSMS motion platform tilt pivot frames during motion simulations in real time. The Strain System comprises hardware and software produced by several different companies. The system utilizes a series of strain gages, accelerometers, orientation sensor, rotational meter, scanners, computer, and software packages working in unison. By monitoring and recording the inputs applied to the simulator, data can be analyzed if weld cracks or other problems are found during routine simulator inspections. This will help engineers diagnose problems as well as aid in repair solutions for both current as well as potential problems.

  18. Role of Boundary Conditions in Monte Carlo Simulation of MEMS Devices

    NASA Technical Reports Server (NTRS)

    Nance, Robert P.; Hash, David B.; Hassan, H. A.

    1997-01-01

    A study is made of the issues surrounding prediction of microchannel flows using the direct simulation Monte Carlo method. This investigation includes the introduction and use of new inflow and outflow boundary conditions suitable for subsonic flows. A series of test simulations for a moderate-size microchannel indicates that a high degree of grid under-resolution in the streamwise direction may be tolerated without loss of accuracy. In addition, the results demonstrate the importance of physically correct boundary conditions, as well as possibilities for reducing the time associated with the transient phase of a simulation. These results imply that simulations of longer ducts may be more feasible than previously envisioned.

  19. The matrix exponential in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon

    1987-01-01

    The primary usefulness of the presented theory lies in the ability to represent the effects of high-frequency linear response accurately, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential-matrix solution is in the time domain. Truncating the series solution for the matrix exponential makes the solution inaccurate after a certain time; yet, up to that time, the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential-matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free-vibration response of multi-degree-of-freedom models of cantilever beams.
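    A minimal example of exponential-matrix time stepping for a linear system, using scipy's expm (a Padé approximation) rather than the truncated series discussed in the abstract; for a linear model the one-step propagator exp(A*dt) is exact for any step size, so high-frequency content survives large steps.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Undamped 1-DOF oscillator in first-order form x' = A x,
    # x = [displacement, velocity], natural frequency 10 rad/s.
    omega = 10.0
    A = np.array([[0.0, 1.0],
                  [-omega**2, 0.0]])

    dt = 0.05                              # coarse relative to the period, yet exact
    Phi = expm(A * dt)                     # one-step propagator for the linear system
    x = np.array([1.0, 0.0])               # free vibration from unit displacement
    for _ in range(int(1.0 / dt)):
        x = Phi @ x                        # each step is one matrix-vector product

    print(x[0], np.cos(omega * 1.0))       # displacement matches cos(omega * t) at t = 1
    ```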

  20. Autoregressive-model-based missing value estimation for DNA microarray time series data.

    PubMed

    Choong, Miew Keen; Charbit, Maurice; Yan, Hong

    2009-01-01

    Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
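
    A greatly simplified sketch of the autoregressive idea (not the authors' ARLSimpute, which also exploits local similarity across genes): fit AR coefficients by least squares on the current series, predict the missing entries, and iterate. The order and iteration count are arbitrary choices.

      import numpy as np

      def ar_impute(x, p=3, n_iter=20):
          """Fill NaNs in a 1-D series by iterated AR(p) least-squares prediction."""
          x = np.asarray(x, float).copy()
          miss = np.isnan(x)
          x[miss] = np.nanmean(x)                        # crude initialization
          for _ in range(n_iter):
              # lagged design matrix: columns are x[t-1], ..., x[t-p]
              X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
              y = x[p:]
              a, *_ = np.linalg.lstsq(X, y, rcond=None)  # AR coefficients
              pred = X @ a
              x[p:][miss[p:]] = pred[miss[p:]]           # refresh only missing entries
          return x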

  1. Time Series Expression Analyses Using RNA-seq: A Statistical Approach

    PubMed Central

    Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P.

    2013-01-01

    RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulations of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model timecourse RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis. PMID:23586021

  2. Time series expression analyses using RNA-seq: a statistical approach.

    PubMed

    Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P

    2013-01-01

    RNA-seq is becoming the de facto standard approach for transcriptome analysis with ever-reducing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulations of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model timecourse RNA-seq data, including statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis.

  3. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models with constant or time-varying coefficients. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for both the calendar trend and the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
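
    A generative sketch of this model class may help: a Poisson log-link with first-order random-walk coefficients for the calendar trend and one meteorological covariate. All numbers (series length, noise scales, synthetic covariate) are invented for illustration; the authors' fitted model is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      T = 400                                                  # weeks (assumed)
      temp = 25 + 3 * np.sin(2 * np.pi * np.arange(T) / 52)    # synthetic temperature
      trend = np.cumsum(rng.normal(0, 0.02, T))                # RW(1) calendar trend
      beta = np.cumsum(rng.normal(0, 0.01, T))                 # RW(1) covariate coefficient
      lam = np.exp(1.5 + trend + beta * (temp - temp.mean()))  # Poisson log-link mean
      cases = rng.poisson(lam)                                 # weekly dengue-like counts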

  4. Finite-element time-domain modeling of electromagnetic data in general dispersive medium using adaptive Padé series

    NASA Astrophysics Data System (ADS)

    Cai, Hongzhu; Hu, Xiangyun; Xiong, Bin; Zhdanov, Michael S.

    2017-12-01

    The induced polarization (IP) method has been widely used in geophysical exploration to identify chargeable targets such as mineral deposits. The inversion of the IP data requires modeling the IP response of 3D dispersive conductive structures. We have developed an edge-based finite-element time-domain (FETD) modeling method to simulate the electromagnetic (EM) fields in 3D dispersive medium. We solve the vector Helmholtz equation for total electric field using the edge-based finite-element method with an unstructured tetrahedral mesh. We adopt the backward Euler method, which is unconditionally stable, with semi-adaptive time stepping for the time domain discretization. We use a direct solver based on a sparse LU decomposition to solve the system of equations. We consider the Cole-Cole model in order to take into account the frequency-dependent conductivity dispersion. The Cole-Cole conductivity model in frequency domain is expanded using a truncated Padé series with adaptive selection of the center frequency of the series for early and late time. This approach can significantly increase the accuracy of FETD modeling.
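
    For reference, the frequency-domain Cole-Cole conductivity that the Padé series approximates has a simple closed form. The sketch below uses the common Pelton-style parameterization (DC conductivity sigma0, chargeability eta, time constant tau, exponent c); the paper's exact convention and the adaptive Padé expansion itself are not reproduced.

      import numpy as np

      def cole_cole_sigma(omega, sigma0, eta, tau, c):
          """Cole-Cole frequency-domain conductivity (Pelton-style form, assumed)."""
          return sigma0 * (1.0 - eta / (1.0 + (1j * omega * tau) ** c))

      freqs = np.logspace(-2, 4, 200)                    # Hz
      sigma = cole_cole_sigma(2 * np.pi * freqs,
                              sigma0=0.01, eta=0.3, tau=0.5, c=0.6)  # illustrative values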

  5. Nonlinear complexity of random visibility graph and Lempel-Ziv on multitype range-intensity interacting financial dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Yali; Wang, Jun

    2017-09-01

    In an attempt to investigate the nonlinear complex evolution of financial dynamics, a new financial price model, the multitype range-intensity contact (MRIC) financial model, is developed based on the multitype range-intensity interacting contact system, in which the interaction and transmission of different types of investment attitudes in a stock market are modeled as the spreading of viruses. Two new random visibility graph (VG) based analyses and Lempel-Ziv complexity (LZC) are applied to study the complex behaviors of return time series and the corresponding randomly sorted series. The VG method comes from complex network theory, and LZC is a non-parametric measure of complexity reflecting the rate at which a series generates new patterns. In this work, real stock market indices are studied comparatively with simulation data from the proposed model. The numerical empirical study shows similar complexity behaviors between the model and the real markets, confirming that the financial model is reasonable to some extent.
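
    Of the two measures, LZC is the easier to make concrete. A standard LZ76 phrase-counting implementation on a binarized return series might look as follows; the binarization rule and the lack of normalization are common choices, not necessarily the paper's.

      import numpy as np

      def lz_complexity(s):
          """LZ76 complexity: number of distinct phrases parsed from a symbol string."""
          i, c, n = 0, 0, len(s)
          while i < n:
              l = 1
              # grow the current phrase while it already occurs in the preceding text
              while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                  l += 1
              c += 1
              i += l
          return c

      rng = np.random.default_rng(0)
      returns = rng.normal(size=1000)                        # stand-in for model returns
      bits = ''.join('1' if r > 0 else '0' for r in returns) # sign binarization
      print(lz_complexity(bits))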

  6. A Series of Computational Neuroscience Labs Increases Comfort with MATLAB.

    PubMed

    Nichols, David F

    2015-01-01

    Computational simulations allow for a low-cost, reliable means to demonstrate complex and oftentimes inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an Integrate-and-Fire model, and a Hopfield Memory network was used in the undergraduate laboratory component of an introductory-level neuroscience course. Short focused surveys before and after each lab showed that student comfort levels increased drastically, from most students feeling uncomfortable or neutral about working in the MATLAB environment to a vast majority being comfortable in it. Though change was reported within each lab, a series of labs was necessary to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring the computational skills required to address many questions within neuroscience.

  7. A Series of Computational Neuroscience Labs Increases Comfort with MATLAB

    PubMed Central

    Nichols, David F.

    2015-01-01

    Computational simulations allow for a low-cost, reliable means to demonstrate complex and oftentimes inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an Integrate-and-Fire model, and a Hopfield Memory network was used in the undergraduate laboratory component of an introductory-level neuroscience course. Short focused surveys before and after each lab showed that student comfort levels increased drastically, from most students feeling uncomfortable or neutral about working in the MATLAB environment to a vast majority being comfortable in it. Though change was reported within each lab, a series of labs was necessary to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring the computational skills required to address many questions within neuroscience. PMID:26557798

  8. Cartesian-Grid Simulations of a Canard-Controlled Missile with a Free-Spinning Tail

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This paper presents a series of simulations of a geometrically complex, canard-controlled, supersonic missile with free-spinning tail fins. Time-dependent simulations were performed using an inviscid Cartesian-grid-based method with results compared to both experimental data and high-resolution Navier-Stokes computations. At fixed freestream conditions and canard deflections, the tail spin rate was iteratively determined such that the net rolling moment on the empennage is zero. This rate corresponds to the time-asymptotic rate of the free-to-spin fin system. After obtaining spin-averaged aerodynamic coefficients for the missile, the investigation seeks a fixed-tail approximation to the spin-averaged aerodynamic coefficients, and examines the validity of this approximation over a variety of freestream conditions.

  9. i-SVOC -- A simulation program for indoor SVOCs (Version 1.0)

    EPA Science Inventory

    Program i-SVOC estimates the emissions, transport, and sorption of semivolatile organic compounds (SVOCs) in the indoor environment as functions of time when a series of initial conditions is given. This program implements a framework for dynamic modeling of indoor SVOCs develope...

  10. Simulation Program i-SVOC User’s Guide

    EPA Science Inventory

    This document is the User’s Guide for computer program i-SVOC, which estimates the emissions, transport, and sorption of semivolatile organic compounds (SVOCs) in the indoor environment as a function of time when a series of initial conditions is given. This program implements a ...

  11. Validation of Broadband Ground Motion Simulations for Japanese Crustal Earthquakes by the Recipe

    NASA Astrophysics Data System (ADS)

    Iwaki, A.; Maeda, T.; Morikawa, N.; Miyake, H.; Fujiwara, H.

    2015-12-01

    The Headquarters for Earthquake Research Promotion (HERP) of Japan has organized the broadband ground motion simulation method into a standard procedure called the "recipe" (HERP, 2009). In the recipe, the source rupture is represented by the characterized source model (Irikura and Miyake, 2011). The broadband ground motion time histories are computed by a hybrid approach: the 3-D finite-difference method (Aoi et al. 2004) and the stochastic Green's function method (Dan and Sato, 1998; Dan et al. 2000) for the long- (> 1 s) and short-period (< 1 s) components, respectively, using the 3-D velocity structure model. As the engineering significance of scenario earthquake ground motion prediction increases, thorough verification and validation are required for the simulation methods. This study presents a self-validation of the recipe for two MW6.6 crustal events in Japan, the 2000 Tottori and 2004 Chuetsu (Niigata) earthquakes. We first compare the simulated velocity time series with the observations. Main features of the velocity waveforms, such as the near-fault pulses and the large later phases on deep sediment sites, are well reproduced by the simulations. Then we evaluate 5% damped pseudo acceleration spectra (PSA) in the framework of the SCEC Broadband Platform (BBP) validation (Dreger et al. 2015). The validation results are generally acceptable in the period range 0.1 - 10 s, whereas those in the shortest period range (0.01-0.1 s) are less satisfactory. We also evaluate the simulations with the 1-D velocity structure models used in the SCEC BBP validation exercise. Although the goodness-of-fit parameters for PSA do not significantly differ from those for the 3-D velocity structure model, noticeable differences in velocity waveforms are observed. Our results suggest the importance of 1) a well-constrained 3-D velocity structure model for broadband ground motion simulations and 2) evaluating time series of ground motion as well as response spectra.

  12. Imputation of missing data in time series for air pollutants

    NASA Astrophysics Data System (ADS)

    Junger, W. L.; Ponce de Leon, A.

    2015-02-01

    Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method suitable for multivariate time series data, which uses the EM algorithm under the assumption of normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas the validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations yielded valid results, even under missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.

  13. A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.

    PubMed

    Tian, Siyu; Huang, Xiaoxia; Li, Hongga

    2017-03-15

    Since Lagrangian model coefficients vary with conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper proposes a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time series of images form a detected trajectory of a specific oil slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation measures. These two parameters are used to evaluate the performance of the Lagrangian transport model with different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with related satellite observations, suggesting that the new method is effective for calibrating Lagrangian models. Copyright © 2016 Elsevier Ltd. All rights reserved.
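
    The SDE comparison is straightforward to sketch: the mean centre comes from the point coordinates, and the ellipse orientation is the principal axis of their covariance. A hedged Python sketch follows (the paper's exact SDE convention may differ; the x_obs/y_obs and x_sim/y_sim names in the usage comment are hypothetical).

      import numpy as np

      def sde_center_angle(x, y):
          """Mean centre and major-axis orientation of the standard deviational ellipse."""
          center = np.array([x.mean(), y.mean()])
          evals, evecs = np.linalg.eigh(np.cov(x, y))
          major = evecs[:, np.argmax(evals)]              # principal (major) axis
          return center, np.arctan2(major[1], major[0])

      # MCPD and RD between a detected slick and a simulated particle cloud:
      #   c1, a1 = sde_center_angle(x_obs, y_obs)
      #   c2, a2 = sde_center_angle(x_sim, y_sim)
      #   mcpd = np.linalg.norm(c1 - c2); rd = abs(a1 - a2)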

  14. Application of computational mechanics to the analysis of natural data: an example in geomagnetism.

    PubMed

    Clarke, Richard W; Freeman, Mervyn P; Watkins, Nicholas W

    2003-01-01

    We discuss how the ideal formalism of computational mechanics can be adapted to apply to a noninfinite series of corrupted and correlated data, as is typical of most observed natural time series. Specifically, a simple filter that removes the corruption that creates rare unphysical causal states is demonstrated, and the concept of effective soficity is introduced. We believe that computational mechanics cannot be applied to a noisy and finite data series without invoking an argument based upon effective soficity. A related distinction between noise and unresolved structure is also defined: Noise can only be eliminated by increasing the length of the time series, whereas the resolution of previously unresolved structure only requires the finite memory of the analysis to be increased. The benefits of these concepts are demonstrated on simulated time series by (a) the effective elimination of white noise corruption from a periodic signal using the expletive filter and (b) the appearance of an effectively sofic region in the statistical complexity of a biased Poisson switch time series that is insensitive to changes in the word length (memory) used in the analysis. The new algorithm is then applied to an analysis of a real geomagnetic time series measured at Halley, Antarctica. Two principal components in the structure are detected that are interpreted as the diurnal variation due to the rotation of the Earth-based station under an electrical current pattern that is fixed with respect to the Sun-Earth axis and the random occurrence of a signature likely to be that of the magnetic substorm. In conclusion, some useful terminology for the discussion of model construction in general is introduced.

  15. The Fourier decomposition method for nonlinear and non-stationary time series analysis.

    PubMed

    Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik

    2017-03-01

    For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion of a time series, with variable amplitudes and variable frequencies, obtained by the Fourier method itself. We propose a zero-phase filter bank-based multivariate FDM (MFDM) for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain the cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend, and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparisons are made with the empirical mode decomposition algorithms.

  16. Effects of Uncertainties in Electric Field Boundary Conditions for Ring Current Simulations

    NASA Astrophysics Data System (ADS)

    Chen, Margaret W.; O'Brien, T. Paul; Lemon, Colby L.; Guild, Timothy B.

    2018-01-01

    Physics-based simulation results can vary widely depending on the applied boundary conditions. As a first step toward assessing the effect of boundary conditions on ring current simulations, we analyze the effect of uncertainty in the cross-polar cap potential (CPCP) used to set the electric field boundary conditions of the Rice Convection Model-Equilibrium (RCM-E). The empirical Weimer model of CPCP is chosen as the reference model and Defense Meteorological Satellite Program CPCP measurements as the reference data. Using temporal correlations from a statistical analysis of the "errors" between the reference model and data, we construct a Monte Carlo CPCP discrete time series model that can be generalized to other model boundary conditions. RCM-E simulations using electric field boundary conditions from the reference model and from 20 randomly generated Monte Carlo discrete time series of CPCP are performed for two large storms. During the 10 August 2000 storm main phase, the proton density at 10 RE at midnight was observed to be low (< 1.4 cm-3), and the observed disturbance Dst index is bounded by the simulated Dst values. In contrast, the simulated Dst values during the recovery phases of the 10 August 2000 and 31 August 2005 storms tend to systematically underestimate the observed late Dst recovery. This suggests a need to improve the accuracy of particle loss calculations in the RCM-E model. Application of this technique can help modelers choose between investing more effort in improving the specification of boundary conditions and improving descriptions of physical processes.
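
    One simple way to realize "temporally correlated errors around a reference model" is an AR(1) perturbation, sketched below. The paper's actual error statistics are not reproduced, so the correlation, amplitude, and synthetic reference series are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)

      def cpcp_draw(reference, rho=0.8, sigma=10.0):
          """One Monte Carlo CPCP series: reference plus AR(1)-correlated error (kV)."""
          e = np.empty(len(reference))
          e[0] = rng.normal(0.0, sigma)
          for t in range(1, len(reference)):
              e[t] = rho * e[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
          return reference + e

      reference = 40 + 20 * np.sin(np.linspace(0, 6 * np.pi, 500))   # synthetic reference CPCP
      ensemble = np.array([cpcp_draw(reference) for _ in range(20)]) # 20 boundary-condition draws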

  17. Simulating Pre-Asymptotic, Non-Fickian Transport Although Doing Simple Random Walks - Supported By Empirical Pore-Scale Velocity Distributions and Memory Effects

    NASA Astrophysics Data System (ADS)

    Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.

    2016-12-01

    Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. For the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are skewed velocity distribution, statistical dependencies between subsequent increments of particle positions (memory) and dependence between the x, y and z-components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time-series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values between the three velocity components in x, y and z. We will show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation and we analyze the approach to asymptotic behavior.
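
    The memory mechanism described, a random walk on the CDF axis with velocities drawn by inverse-CDF lookup, can be sketched compactly. Everything numerical here (the lognormal stand-in for the empirical pore-scale velocity distribution, the step scale d_u, the reflecting boundaries) is an assumption for illustration, and only the 1-D advective part is shown, without the diffusion term or cross-dependence.

      import numpy as np

      rng = np.random.default_rng(0)
      v_sample = np.sort(rng.lognormal(0.0, 1.0, 10_000))  # stand-in empirical velocity CDF

      def ptrw(n_steps=5000, dt=1.0, d_u=0.02):
          """1-D particle track: a random walk on CDF values u in [0, 1] supplies memory."""
          x, u = 0.0, rng.random()
          xs = [x]
          for _ in range(n_steps):
              u += rng.normal(0.0, d_u)                    # small change in CDF value
              u = abs(u)                                   # reflect at 0 ...
              if u > 1.0:
                  u = 2.0 - u                              # ... and at 1
              v = v_sample[int(u * (len(v_sample) - 1))]   # inverse-CDF velocity lookup
              x += v * dt                                  # advective displacement
              xs.append(x)
          return np.array(xs)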

  18. Solar Magnetic Carpet III: Coronal Modelling of Synthetic Magnetograms

    NASA Astrophysics Data System (ADS)

    Meyer, K. A.; Mackay, D. H.; van Ballegooijen, A. A.; Parnell, C. E.

    2013-09-01

    This article is the third in a series working towards the construction of a realistic, evolving, non-linear force-free coronal-field model for the solar magnetic carpet. Here, we present preliminary results of 3D time-dependent simulations of the small-scale coronal field of the magnetic carpet. Four simulations are considered, each with the same evolving photospheric boundary condition: a 48-hour time series of synthetic magnetograms produced from the model of Meyer et al. (Solar Phys. 272, 29, 2011). Three simulations include a uniform, overlying coronal magnetic field of differing strength; the fourth simulation includes no overlying field. The build-up, storage, and dissipation of magnetic energy within the simulations is studied. In particular, we study their dependence upon the evolution of the photospheric magnetic field and the strength of the overlying coronal field. We also consider where energy is stored and dissipated within the coronal field. The free magnetic energy built up is found to be more than sufficient to power small-scale, transient phenomena such as nanoflares and X-ray bright points, with the bulk of the free energy found to be stored low down, between 0.5 and 0.8 Mm. The energy dissipated is currently found to be too small to account for the heating of the entire quiet-Sun corona. However, the form and location of energy-dissipation regions qualitatively agree with what is observed on small scales on the Sun. Future MHD modelling using the same synthetic magnetograms may lead to a higher energy release.

  19. Continuous hydrologic simulation of runoff for the Middle Fork and South Fork of the Beargrass Creek basin in Jefferson County, Kentucky

    USGS Publications Warehouse

    Jarrett, G. Lynn; Downs, Aimee C.; Grace-Jarrett, Patricia A.

    1998-01-01

    The Hydrological Simulation Program-FORTRAN (HSPF) was applied to an urban drainage basin in Jefferson County, Ky., to integrate the large amounts of information being collected on water quantity and quality into an analytical framework that could be used as a management and planning tool. Hydrologic response units were developed using geographic data and a K-means analysis to characterize important hydrologic and physical factors in the basin. The Hydrological Simulation Program-FORTRAN Expert System (HSPEXP) was used to calibrate the model parameters for the Middle Fork Beargrass Creek Basin for 3 years (June 1, 1991, to May 31, 1994) of 5-minute streamflow and precipitation time series, and 3 years of hourly pan-evaporation time series. The calibrated model parameters were applied to the South Fork Beargrass Creek Basin for confirmation. The model confirmation results indicated that the model simulated the system within acceptable tolerances. The coefficient of determination and coefficient of model-fit efficiency between simulated and observed daily flows were 0.91 and 0.82, respectively, for model calibration and 0.88 and 0.77, respectively, for model confirmation. The model is most sensitive to estimates of the area of effective impervious land in the basin; the spatial distribution of rainfall; and the lower-zone evapotranspiration, lower-zone nominal storage, and infiltration-capacity parameters during recession and low-flow periods. The error contribution from these sources varies with season and antecedent conditions.

  20. Probabilistic modelling of overflow, surcharge and flooding in urban drainage using the first-order reliability method and parameterization of local rain series.

    PubMed

    Thorndahl, S; Willems, P

    2008-01-01

    Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic hyetograph of Gaussian shape parameterized by rainstorm depth, duration, and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure-probability estimation, together with a hydrodynamic simulation model that determines the failure conditions for each parameter set. The method takes into account the uncertainties involved in the rainstorm parameterization. The failure probability results of the FORM method are compared with the standard method using long-term simulations and with alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
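
    A small sketch of the Gaussian rainstorm parameterization: given depth, duration, and peak intensity, one consistent choice is a Gaussian profile centred mid-storm whose standard deviation is set so the profile integrates to the storm depth. The exact parameterization details are assumptions, since the abstract does not fix them.

      import numpy as np

      def gaussian_hyetograph(depth_mm, duration_min, peak_mm_per_min, dt=1.0):
          """Synthetic rainstorm intensity series with given depth, duration, and peak."""
          t = np.arange(0.0, duration_min, dt)
          # std chosen so the (untruncated) Gaussian area equals the storm depth;
          # the truncated profile matches the depth closely when duration >> sigma
          sigma = depth_mm / (peak_mm_per_min * np.sqrt(2 * np.pi))
          return peak_mm_per_min * np.exp(-0.5 * ((t - duration_min / 2) / sigma) ** 2)

      intensity = gaussian_hyetograph(depth_mm=20.0, duration_min=120.0,
                                      peak_mm_per_min=0.5)   # illustrative storm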

  1. Design and analysis considerations for deployment mechanisms in a space environment

    NASA Technical Reports Server (NTRS)

    Vorlicek, P. L.; Gore, J. V.; Plescia, C. T.

    1982-01-01

    On the second flight of the INTELSAT V spacecraft, the time required for successful deployment of the north solar array was longer than originally predicted; the south solar array deployed as predicted. As a result of the difference in deployment times, a series of experiments was conducted to locate its cause. Deployment rate sensitivity to hinge friction and temperature levels was investigated. A digital computer simulation of the deployment was created to evaluate the effects of parameter changes on deployment. The hinge design was optimized for nominal solar array deployment time for future INTELSAT V satellites. The nominal deployment times of both solar arrays on the third flight of INTELSAT V confirm the validity of the simulation and design optimization.

  2. Evaluation of physiologic complexity in time series using generalized sample entropy and surrogate data analysis

    NASA Astrophysics Data System (ADS)

    Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz

    2012-12-01

    Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by Tsallis generalized entropy, as a function of the parameter q (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) series, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series from stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. The values of q at which the maximum occurs and at which qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10×10-3, 1.11×10-7, and 5.50×10-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity, and suggest a potential use for chaotic system analysis.
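
    For orientation, ordinary sample entropy, which qSampEn generalizes, counts template matches of length m and m+1 under a tolerance r. One common convention is sketched below in a compact (memory-hungry but simple) form; qSampEn itself replaces the logarithm with its Tsallis q-analogue and is not shown.

      import numpy as np

      def sampen(x, m=2, r=0.2):
          """Sample entropy: -log of the ratio of (m+1)- to m-length template matches."""
          x = np.asarray(x, float)
          tol = r * x.std()

          def matches(m):
              templ = np.array([x[i:i + m] for i in range(len(x) - m)])
              # pairwise Chebyshev distances between all templates
              d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
              return np.sum(d <= tol) - len(templ)         # drop self-matches
          return -np.log(matches(m + 1) / matches(m))

      rng = np.random.default_rng(0)
      print(sampen(rng.normal(size=500)))                  # white noise: high SampEn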

  3. Time-Series Modeling and Simulation for Comparative Cost-Effective Analysis in Cancer Chemotherapy: An Application to Platinum-Based Regimens for Advanced Non-small Cell Lung Cancer.

    PubMed

    Chisaki, Yugo; Nakamura, Nobuhiko; Yano, Yoshitaka

    2017-01-01

    The purpose of this study was to propose a time-series modeling and simulation (M&S) strategy for probabilistic cost-effectiveness analysis in cancer chemotherapy, using a Monte Carlo method based on data available from the literature. The simulation included the cost of chemotherapy, of pharmaceutical care for adverse events (AEs), and other medical costs. As an application example, we describe an analysis comparing four regimens for advanced non-small cell lung cancer: cisplatin plus irinotecan, carboplatin plus paclitaxel, cisplatin plus gemcitabine (GP), and cisplatin plus vinorelbine. The model included drug efficacy, expressed as overall survival or time to treatment failure; the frequency and severity of AEs; the utility values of AEs used to determine QOL; and the drug and other medical costs in Japan. The simulation was performed, and quality-adjusted life years (QALY) and incremental cost-effectiveness ratios (ICER) were calculated. An index, the percentage of superiority (%SUP), defined as the proportion of cost-vs.-QALY-gained plots that fall within the region of positive QALY gain and below a given ICER threshold, was calculated as a function of the ICER threshold. An M&S process was developed; in the simulation example, the GP regimen was the most cost-effective, and at an ICER threshold of $70,000/year the %SUP for GP exceeded 50%. The proposed M&S process for probabilistic cost-effectiveness analysis should be useful for decision-making in choosing a cancer chemotherapy regimen on pharmacoeconomic grounds.

  4. Spatial independent component analysis of functional MRI time-series: to what extent do results depend on the algorithm used?

    PubMed

    Esposito, Fabrizio; Formisano, Elia; Seifritz, Erich; Goebel, Rainer; Morrone, Renato; Tedeschi, Gioacchino; Di Salle, Francesco

    2002-07-01

    Independent component analysis (ICA) has been successfully employed to decompose functional MRI (fMRI) time-series into sets of activation maps and associated time-courses. Several ICA algorithms have been proposed in the neural network literature. Applied to fMRI, these algorithms might lead to different spatial or temporal readouts of brain activation. We compared the two ICA algorithms that have been used so far for spatial ICA (sICA) of fMRI time-series: the Infomax (Bell and Sejnowski [1995]: Neural Comput 7:1004-1034) and the Fixed-Point (Hyvärinen [1999]: Adv Neural Inf Proc Syst 10:273-279) algorithms. We evaluated the Infomax- and Fixed Point-based sICA decompositions of simulated motor, and real motor and visual activation fMRI time-series using an ensemble of measures. Log-likelihood (McKeown et al. [1998]: Hum Brain Mapp 6:160-188) was used as a measure of how significantly the estimated independent sources fit the statistical structure of the data; receiver operating characteristics (ROC) and linear correlation analyses were used to evaluate the algorithms' accuracy of estimating the spatial layout and the temporal dynamics of simulated and real activations; cluster sizing calculations and an estimation of a residual gaussian noise term within the components were used to examine the anatomic structure of ICA components and for the assessment of noise reduction capabilities. Whereas both algorithms produced highly accurate results, the Fixed-Point outperformed the Infomax in terms of spatial and temporal accuracy as long as inferential statistics were employed as benchmarks. Conversely, the Infomax sICA was superior in terms of global estimation of the ICA model and noise reduction capabilities. Because of its adaptive nature, the Infomax approach appears to be better suited to investigate activation phenomena that are not predictable or adequately modelled by inferential techniques. Copyright 2002 Wiley-Liss, Inc.
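
    The Fixed-Point algorithm compared here is available as FastICA in scikit-learn, which makes a toy spatial-ICA run easy to sketch. The simulated maps and time-courses below are invented stand-ins, not the paper's simulated motor data; note that for spatial ICA the voxels play the role of samples.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      n_vox, n_t = 2000, 120
      # two sparse "activation" maps and their time-courses (assumed for illustration)
      maps = rng.exponential(1.0, (2, n_vox)) * (rng.random((2, n_vox)) < 0.05)
      tcs = np.stack([np.sin(np.linspace(0, 8 * np.pi, n_t)),
                      np.sign(np.sin(np.linspace(0, 3 * np.pi, n_t)))])
      X = tcs.T @ maps + 0.1 * rng.normal(size=(n_t, n_vox))   # time x voxels data

      ica = FastICA(n_components=2, random_state=0)
      est_maps = ica.fit_transform(X.T).T    # spatial ICA: voxels are the samples
      est_tcs = ica.mixing_                  # associated time-courses, shape (n_t, 2)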

  5. Evaluation of Bayesian estimation of a hidden continuous-time Markov chain model with application to threshold violation in water-quality indicators

    USGS Publications Warehouse

    Deviney, Frank A.; Rice, Karen; Brown, Donald E.

    2012-01-01

    Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given a constant threshold crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased regardless of sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.

  6. A Spiral-Based Downscaling Method for Generating 30 m Time Series Image Data

    NASA Astrophysics Data System (ADS)

    Liu, B.; Chen, J.; Xing, H.; Wu, H.; Zhang, J.

    2017-09-01

    The spatial detail and updating frequency of land cover data are important factors in land-surface dynamic monitoring at high spatial resolution. However, the fragmented patches and seasonal variability of some land cover types (e.g. small crop fields, wetlands) make the generation of land cover data labor-intensive and difficult. Utilizing high spatial resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolution of available remote sensing data such as the Landsat or MODIS datasets can hardly satisfy the minimum mapping unit and the update frequency of current land cover mapping at the same time. Generating high resolution time series may compensate for this shortage in the land cover updating process. One popular approach is to downscale multi-temporal MODIS data with high spatial resolution auxiliary data such as Landsat. But the usual window-based downscaling of pixels can lead to an underdetermined problem in heterogeneous areas, resulting in uncertainty for some high spatial resolution pixels, so the downscaled multi-temporal data can hardly reach the spatial resolution of Landsat data. A spiral-based method is therefore introduced to downscale imagery of low spatial but high temporal resolution to high spatial and high temporal resolution. By searching for similar pixels in the adjacent region along a spiral, a pixel set is built up pixel by pixel, and adopting this pixel set largely prevents the linear system from becoming underdetermined. The endmember values of the linear system are inverted by ordinary least squares. The high spatial resolution image is reconstructed band by band on the basis of the high spatial resolution class map and the endmember values, and the high spatial resolution time series is then formed image by image. A simulated experiment and a remote sensing image downscaling experiment were conducted. In the simulated experiment, the 30 m class map dataset Globeland30 was adopted to investigate how the method avoids the underdetermined problem in the downscaling procedure, and the spiral was compared with the window. Further, MODIS NDVI and Landsat image data were adopted to generate a 30 m NDVI time series in the remote sensing image downscaling experiment. The simulated experiment results showed that the proposed method downscales pixels robustly in heterogeneous regions and is superior to the traditional window-based methods. The high resolution time series generated may benefit the mapping and updating of land cover data.

  7. The application of computational mechanics to the analysis of natural data: An example in geomagnetism.

    NASA Astrophysics Data System (ADS)

    Watkins, Nicholas; Clarke, Richard; Freeman, Mervyn

    2002-11-01

    We discuss how the ideal formalism of Computational Mechanics can be adapted to apply to a non-infinite series of corrupted and correlated data, as is typical of most observed natural time series. Specifically, a simple filter that removes the corruption that creates rare unphysical causal states is demonstrated, and the new concept of effective soficity is introduced. The benefits of these new concepts are demonstrated on simulated time series by (a) the effective elimination of white noise corruption from a periodic signal using the expletive filter and (b) the appearance of an effectively sofic region in the statistical complexity of a biased Poisson switch time series that is insensitive to changes in the word length (memory) used in the analysis. The new algorithm is then applied to the analysis of a real geomagnetic time series measured at Halley, Antarctica. Two principal components in the structure are detected that are interpreted as the diurnal variation due to the rotation of the Earth-based station under an electrical current pattern that is fixed with respect to the Sun-Earth axis and the random occurrence of a signature likely to be that of the magnetic substorm. In conclusion, a hypothesis is advanced about model construction in general (see also Clarke et al.; arXiv:cond-mat/0110228).

  8. An at-site flood estimation method in the context of nonstationarity I. A simulation study

    NASA Astrophysics Data System (ADS)

    Gado, Tamer A.; Nguyen, Van-Thanh-Van

    2016-04-01

    The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimation. In this study, an innovative method for nonstationary flood frequency analysis was presented. Here, the new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter, this is called the LM-NS). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and linear dependence in both the mean and log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is to avoid the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters for small data samples.

  9. Application of Lidar Data to the Performance Evaluations of ...

    EPA Pesticide Factsheets

    The Tropospheric Ozone (O3) Lidar Network (TOLNet) provides time/height O3 measurements from near the surface to the top of the troposphere, describing spatial-temporal distributions in high fidelity, which is uniquely useful for evaluating the temporal evolution of O3 profiles in air quality models. This presentation describes the application of the Lidar data to the performance evaluation of CMAQ-simulated O3 vertical profiles during the summer of 2014. Two-way coupled WRF-CMAQ simulations with 12 km and 4 km domains centered over Boulder, Colorado were performed during this time period. The analysis of the time series of observed and modeled O3 mixing ratios at different vertical layers indicates that the model frequently underestimated the observed values, and the underestimation was amplified in the middle model layers (~1 km above the ground). When the lightning strikes detected by the National Lightning Detection Network (NLDN) were analyzed along with the observed O3 time series, it was found that the daily maximum O3 mixing ratios correlated well with the lightning strikes in the vicinity of the Lidar station. The analysis of temporal vertical profiles of both observed and modeled O3 mixing ratios on episodic days suggests that the model resolutions (12 km and 4 km) do not make any significant difference for this analysis (at this specific location and simulation period), but high O3 levels in the middle layers were linked to lightning activity that occurred in t

  10. MCNP modelling of scintillation-detector gamma-ray spectra from natural radionuclides.

    PubMed

    Hendriks, P H G M; Maucec, M; de Meijer, R J

    2002-09-01

    Gamma-ray spectra of natural radionuclides are simulated for a BGO detector in a borehole geometry using the Monte Carlo code MCNP. All gamma-ray emissions of the decay of 40K and of the 232Th and 238U series are used to describe the source. A procedure is proposed which excludes the time-consuming electron tracking in less relevant areas of the geometry. The simulated gamma-ray spectra are benchmarked against laboratory data.

  11. Predicting Long-term Temperature Increase for Time-Dependent SAR Levels with a Single Short-term Temperature Response

    PubMed Central

    Carluccio, Giuseppe; Bruno, Mary; Collins, Christopher M.

    2015-01-01

    Purpose Present a novel method for rapid prediction of temperature in vivo for a series of pulse sequences with differing levels and distributions of specific energy absorption rate (SAR). Methods After the temperature response to a brief period of heating is characterized, a rapid estimate of temperature during a series of periods at different heating levels is made using a linear heat equation and Impulse-Response (IR) concepts. Here the initial characterization and long-term prediction for a complete spine exam are made with the Pennes’ bioheat equation where, at first, core body temperature is allowed to increase and local perfusion is not. Then corrections through time allowing variation in local perfusion are introduced. Results The fast IR-based method predicted maximum temperature increase within 1% of that with a full finite difference simulation, but required less than 3.5% of the computation time. Even higher accelerations are possible depending on the time step size chosen, with loss in temporal resolution. Correction for temperature-dependent perfusion requires negligible additional time, and can be adjusted to be more or less conservative than the corresponding finite difference simulation. Conclusion With appropriate methods, it is possible to rapidly predict temperature increase throughout the body for actual MR examinations. PMID:26096947

  12. Predicting long-term temperature increase for time-dependent SAR levels with a single short-term temperature response.

    PubMed

    Carluccio, Giuseppe; Bruno, Mary; Collins, Christopher M

    2016-05-01

    Present a novel method for rapid prediction of temperature in vivo for a series of pulse sequences with differing levels and distributions of specific energy absorption rate (SAR). After the temperature response to a brief period of heating is characterized, a rapid estimate of temperature during a series of periods at different heating levels is made using a linear heat equation and impulse-response (IR) concepts. Here the initial characterization and long-term prediction for a complete spine exam are made with the Pennes' bioheat equation where, at first, core body temperature is allowed to increase and local perfusion is not. Then corrections through time allowing variation in local perfusion are introduced. The fast IR-based method predicted maximum temperature increase within 1% of that with a full finite difference simulation, but required less than 3.5% of the computation time. Even higher accelerations are possible depending on the time step size chosen, with loss in temporal resolution. Correction for temperature-dependent perfusion requires negligible additional time and can be adjusted to be more or less conservative than the corresponding finite difference simulation. With appropriate methods, it is possible to rapidly predict temperature increase throughout the body for actual MR examinations. © 2015 Wiley Periodicals, Inc.
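
    Since the predictor is linear, the long-term estimate is essentially a discrete convolution of the SAR time course with the characterized impulse response. The sketch below makes that concrete with an assumed exponential impulse response and made-up SAR levels; in the authors' method the response would instead come from a short full thermal (Pennes' equation) simulation.

      import numpy as np

      dt = 1.0                                     # s
      t = np.arange(0.0, 1800.0, dt)
      h = 0.001 * np.exp(-t / 240.0)               # assumed impulse response, K per (J/kg)
      # three pulse sequences at different whole-exam SAR levels (W/kg, invented)
      sar = np.concatenate([np.full(600, 2.0), np.full(600, 0.5), np.full(600, 3.0)])
      dT = np.convolve(sar, h)[:len(sar)] * dt     # predicted temperature rise (K)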

  13. A simulation study to quantify the impacts of exposure ...

    EPA Pesticide Factsheets

    Background Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. Methods ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Results Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3–85% for population error, and 31–85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copoll

  14. Cell Phones and Cancer Risk

    MedlinePlus

    ... cell phone use in this country during that time (24). An analysis of incidence data from Denmark, Finland, Norway, and Sweden for the period 1974–2008 similarly revealed no increase in age-adjusted incidence of brain tumors (25). A series of studies testing different scenarios (called simulations by ...

  15. Storm Water Management Model User’s Manual Version 5.1 - manual

    EPA Science Inventory

    SWMM 5 provides an integrated environment for editing study area input data, running hydrologic, hydraulic and water quality simulations, and viewing the results in a variety of formats. These include color-coded drainage area and conveyance system maps, time series graphs and ta...

  16. Andy Walker | NREL

    Science.gov Websites

    Profile page (fragmentary): energy efficiency and renewable energy projects; a patent on the Renewable Energy Optimization (REO) method; distribution function for time-series simulation; analytical and numerical optimization; project delivery; System Operations and Maintenance: 2nd Edition, 2016, NREL/Sandia/Sunspec Alliance SuNLaMP PV O&M.

  17. Evaluating hydrological model performance using information theory-based metrics

    USDA-ARS?s Scientific Manuscript database

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

  18. Information and complexity measures for hydrologic model evaluation

    USDA-ARS?s Scientific Manuscript database

    Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...

  19. Using a generalized additive model with autoregressive terms to study the effects of daily temperature on mortality.

    PubMed

    Yang, Lei; Qin, Guoyou; Zhao, Naiqing; Wang, Chunfang; Song, Guixiang

    2012-10-30

    Generalized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time-series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated in adjacent time points. Here, a GAM with Autoregressive terms (GAMAR) is introduced to fill this gap. Parameters in GAMAR are estimated by maximum partial likelihood using modified Newton's method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. GAMM is also compared to GAMAR in simulation study 1. In the simulation studies, the bias of the mean estimates from GAM and GAMAR are similar but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to GAMAR, the estimation procedure of GAMM is much slower than GAMAR. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects are different between GAM and GAMAR. GAMAR incorporates both explanatory variables and AR terms so it can quantify the nonlinear impact of environmental factors on health outcome as well as the serial correlation between the observations. It can be a useful tool in environmental epidemiological studies.

  20. Using a generalized additive model with autoregressive terms to study the effects of daily temperature on mortality

    PubMed Central

    2012-01-01

    Background Generalized Additive Model (GAM) provides a flexible and effective technique for modelling nonlinear time-series in studies of the health effects of environmental factors. However, GAM assumes that errors are mutually independent, while time series can be correlated in adjacent time points. Here, a GAM with Autoregressive terms (GAMAR) is introduced to fill this gap. Methods Parameters in GAMAR are estimated by maximum partial likelihood using modified Newton’s method, and the difference between GAM and GAMAR is demonstrated using two simulation studies and a real data example. GAMM is also compared to GAMAR in simulation study 1. Results In the simulation studies, the bias of the mean estimates from GAM and GAMAR are similar but GAMAR has better coverage and smaller relative error. While the results from GAMM are similar to GAMAR, the estimation procedure of GAMM is much slower than GAMAR. In the case study, the Pearson residuals from the GAM are correlated, while those from GAMAR are quite close to white noise. In addition, the estimates of the temperature effects are different between GAM and GAMAR. Conclusions GAMAR incorporates both explanatory variables and AR terms so it can quantify the nonlinear impact of environmental factors on health outcome as well as the serial correlation between the observations. It can be a useful tool in environmental epidemiological studies. PMID:23110601

  1. Comparison of data transformation procedures to enhance topographical accuracy in time-series analysis of the human EEG.

    PubMed

    Hauk, O; Keil, A; Elbert, T; Müller, M M

    2002-01-30

    We describe a methodology to apply current source density (CSD) and minimum norm (MN) estimation as pre-processing tools for time-series analysis of single trial EEG data. The performance of these methods is compared for the case of wavelet time-frequency analysis of simulated gamma-band activity. A reasonable comparison of CSD and MN on the single trial level requires regularization such that the corresponding transformed data sets have similar signal-to-noise ratios (SNRs). For region-of-interest approaches, it should be possible to optimize the SNR for single estimates rather than for the whole distributed solution. An effective implementation of the MN method is described. Simulated data sets were created by modulating the strengths of a radial and a tangential test dipole with wavelets in the frequency range of the gamma band, superimposed with simulated spatially uncorrelated noise. The MN and CSD transformed data sets as well as the average reference (AR) representation were subjected to wavelet frequency-domain analysis, and power spectra were mapped for relevant frequency bands. For both CSD and MN, the influence of noise can be sufficiently suppressed by regularization to yield meaningful information, but only MN represents both radial and tangential dipole sources appropriately as single peaks. Therefore, when relating wavelet power spectrum topographies to their neuronal generators, MN should be preferred.
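
    A minimal sketch of the regularized minimum-norm estimate described above, x = L^T (L L^T + lambda*I)^(-1) y, with a random matrix standing in for a real leadfield (all sizes and the regularization value are illustrative assumptions):

      import numpy as np

      def minimum_norm(L, y, lam):
          """Regularized minimum-norm estimate: x = L^T (L L^T + lam I)^-1 y."""
          return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(L.shape[0]), y)

      rng = np.random.default_rng(1)
      L = rng.normal(size=(32, 200))      # toy leadfield: 32 sensors, 200 sources
      x_true = np.zeros(200)
      x_true[50] = 1.0                    # a single active source
      y = L @ x_true + rng.normal(scale=0.1, size=32)
      x_hat = minimum_norm(L, y, lam=1.0)
      print(np.argmax(np.abs(x_hat)))     # typically recovers index 50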

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, E.A.; Smed, P.F.; Bryndum, M.B.

    The paper describes the numerical program, PIPESIN, that simulates the behavior of a pipeline placed on an erodible seabed. PIPEline Seabed INteraction from installation until a stable pipeline seabed configuration has occurred is simulated in the time domain, including all important physical processes. The program is the result of the joint research project, "Free Span Development and Self-lowering of Offshore Pipelines", sponsored by the EU and a group of companies and carried out by the Danish Hydraulic Institute and Delft Hydraulics. The basic modules of PIPESIN are described. The description of the scouring processes has been based on and verified through physical model tests carried out as part of the research project. The program simulates a section of the pipeline (typically 500 m) in the time domain, the main input being time series of the waves and current. The main results include predictions of the onset of free spans, their length distribution, their variation in time, and the lowering of the pipeline as a function of time.

  3. A Physiological Time Series Dynamics-Based Approach to Patient Monitoring and Outcome Prediction

    PubMed Central

    Lehman, Li-Wei H.; Adams, Ryan P.; Mayaud, Louis; Moody, George B.; Malhotra, Atul; Mark, Roger G.; Nemati, Shamim

    2015-01-01

    Cardiovascular variables such as heart rate (HR) and blood pressure (BP) are regulated by an underlying control system, and therefore, the time series of these vital signs exhibit rich dynamical patterns of interaction in response to external perturbations (e.g., drug administration), as well as pathological states (e.g., onset of sepsis and hypotension). A question of interest is whether “similar” dynamical patterns can be identified across a heterogeneous patient cohort, and be used for prognosis of patients’ health and progress. In this paper, we used a switching vector autoregressive framework to systematically learn and identify a collection of vital sign time series dynamics, which are possibly recurrent within the same patient and may be shared across the entire cohort. We show that these dynamical behaviors can be used to characterize the physiological “state” of a patient. We validate our technique using simulated time series of the cardiovascular system, and human recordings of HR and BP time series from an orthostatic stress study with known postural states. Using the HR and BP dynamics of an intensive care unit (ICU) cohort of over 450 patients from the MIMIC II database, we demonstrate that the discovered cardiovascular dynamics are significantly associated with hospital mortality (dynamic modes 3 and 9, p = 0.001, p = 0.006 from logistic regression after adjusting for the APACHE scores). Combining the dynamics of BP time series and SAPS-I or APACHE-III provided a more accurate assessment of patient survival/mortality in the hospital than using SAPS-I and APACHE-III alone (p = 0.005 and p = 0.045). Our results suggest that the discovered dynamics of vital sign time series may contain additional prognostic value beyond that of the baseline acuity measures, and can potentially be used as an independent predictor of outcomes in the ICU. PMID:25014976
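
    The mode-assignment idea behind a switching autoregressive analysis can be sketched with two scalar AR(1) candidate dynamics; this toy (with assumed coefficients and window length) assigns each window to the mode with the smallest one-step prediction error, a simplification of the full switching VAR with learned, shared modes:

      import numpy as np

      rng = np.random.default_rng(2)
      modes = [0.95, 0.2]                        # two candidate AR(1) dynamics
      x = np.zeros(400)                          # switch dynamics halfway through
      for t in range(1, 400):
          a = modes[0] if t < 200 else modes[1]
          x[t] = a * x[t - 1] + rng.normal(scale=0.1)

      def window_mode(seg, modes):
          """Assign a window to the AR(1) mode with the lowest one-step error."""
          return int(np.argmin([np.mean((seg[1:] - a * seg[:-1]) ** 2)
                                for a in modes]))

      print([window_mode(x[i:i + 50], modes) for i in range(0, 400, 50)])
      # expect mode 0 for early windows, mode 1 later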

  4. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE PAGES

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...

    2017-09-20

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
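
    Schematically, a QSTS simulation is a load-flow solve repeated over a time series of loads; in the following sketch, the model object and `solve_load_flow` are hypothetical placeholders, not an API from the paper:

      # Schematic QSTS loop; the model object and solver are hypothetical.
      def run_qsts(model, load_series, solve_load_flow):
          """One load-flow solve per time step; a small step and a long
          duration make this loop expensive, motivating model simplification."""
          results = []
          for loads in load_series:          # e.g. hourly loads for a year
              model.set_loads(loads)
              results.append(solve_load_flow(model))
          return results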

  5. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  6. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    PubMed

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
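
    The two basic strategies can be contrasted in a few lines of numpy; this sketch uses least-squares AR models on a toy series (order, horizon, and data are illustrative):

      import numpy as np

      def lagged(x, p, horizon=1):
          """Design matrix of p lags with targets `horizon` steps ahead."""
          rows = [x[t - p:t][::-1] for t in range(p, len(x) - horizon + 1)]
          return np.array(rows), x[p + horizon - 1:]

      def recursive_forecast(x, p, H):
          """Recursive strategy: one-step AR model iterated H times."""
          X, y = lagged(x, p)
          coef = np.linalg.lstsq(X, y, rcond=None)[0]
          hist, preds = list(x[-p:]), []
          for _ in range(H):
              preds.append(float(np.dot(coef, hist[::-1])))
              hist = hist[1:] + [preds[-1]]      # feed the forecast back in
          return preds

      def direct_forecast(x, p, H):
          """Direct strategy: a separate model per forecast horizon."""
          preds = []
          for h in range(1, H + 1):
              X, y = lagged(x, p, horizon=h)
              coef = np.linalg.lstsq(X, y, rcond=None)[0]
              preds.append(float(np.dot(coef, x[-p:][::-1])))
          return preds

      rng = np.random.default_rng(3)
      x = np.sin(0.2 * np.arange(300)) + 0.1 * rng.normal(size=300)
      print(recursive_forecast(x, p=5, H=4))
      print(direct_forecast(x, p=5, H=4))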

  7. Spectral decompositions of multiple time series: a Bayesian non-parametric approach.

    PubMed

    Macaro, Christian; Prado, Raquel

    2014-01-01

    We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.

  8. Detrended fluctuation analysis based on higher-order moments of financial time series

    NASA Astrophysics Data System (ADS)

    Teng, Yue; Shang, Pengjian

    2018-01-01

    In this paper, a generalized method of detrended fluctuation analysis (DFA) is proposed as a new measure to assess the complexity of a complex dynamical system such as the stock market. We extend DFA and local scaling DFA to higher moments such as skewness and kurtosis (labeled SMDFA and KMDFA) so as to investigate the volatility scaling properties of financial time series. Simulations are conducted over synthetic and financial data to provide a comparative study. We further report the volatility behaviors of three American, three Chinese, and three European stock markets using the DFA and LSDFA methods based on higher moments. These reveal the dynamic behaviors of the time series in different aspects, can quantify the changes of complexity in stock market data, and provide more meaningful information than a single exponent. The results also reveal some higher-moment volatility and higher-moment multiscale volatility details that cannot be obtained using the traditional DFA method.
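
    For reference, the standard second-moment DFA that the paper generalizes can be sketched as follows (the SMDFA/KMDFA variants would replace the mean square of the detrended segments with skewness- or kurtosis-type moments); scales and data here are illustrative:

      import numpy as np

      def dfa(x, scales):
          """Second-moment DFA: slope of log F(s) vs log s is the exponent."""
          y = np.cumsum(x - np.mean(x))          # integrated profile
          F = []
          for s in scales:
              segs = [y[i * s:(i + 1) * s] for i in range(len(y) // s)]
              t = np.arange(s)
              ms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                    for seg in segs]             # detrend each window linearly
              F.append(np.sqrt(np.mean(ms)))     # RMS detrended fluctuation
          return np.polyfit(np.log(scales), np.log(F), 1)[0]

      rng = np.random.default_rng(4)
      print(dfa(rng.normal(size=4096), [16, 32, 64, 128, 256]))  # ~0.5 for white noise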

  9. New Tools for Comparing Beliefs about the Timing of Recurrent Events with Climate Time Series Datasets

    NASA Astrophysics Data System (ADS)

    Stiller-Reeve, Mathew; Stephenson, David; Spengler, Thomas

    2017-04-01

    For climate services to be relevant and informative for users, scientific data definitions need to match users' perceptions or beliefs. This study proposes and tests novel yet simple methods to compare beliefs of timing of recurrent climatic events with empirical evidence from multiple historical time series. The methods are tested by applying them to the onset date of the monsoon in Bangladesh, where several scientific monsoon definitions can be applied, yielding different results for monsoon onset dates. It is a challenge to know which monsoon definition compares best with people's beliefs. Time series from eight different scientific monsoon definitions in six regions are compared with respondent beliefs from a previously completed survey concerning the monsoon onset. Beliefs about the timing of the monsoon onset are represented probabilistically for each respondent by constructing a probability mass function (PMF) from elicited responses about the earliest, normal, and latest dates for the event. A three-parameter circular modified triangular distribution (CMTD) is used to allow for the possibility (albeit small) of the onset at any time of the year. These distributions are then compared to the historical time series using two approaches: likelihood scores, and the mean and standard deviation of time series of dates simulated from each belief distribution. The methods proposed give the basis for further iterative discussion with decision-makers in the development of eventual climate services. This study uses Jessore, Bangladesh, as an example and finds that a rainfall definition, applying a 10 mm day⁻¹ threshold to NCEP-NCAR reanalysis (Reanalysis-1) data, best matches the survey respondents' beliefs about monsoon onset.
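
    The elicitation-to-PMF step can be sketched as follows; this linear triangular version omits the circular wrap-around of the paper's CMTD and the small year-round probability it allows, and all dates below are invented for illustration:

      import numpy as np

      def triangular_pmf(earliest, normal, latest, n_days=365):
          """Triangular belief PMF over day-of-year (no circular wrap-around)."""
          days = np.arange(n_days)
          pmf = np.interp(days, [earliest, normal, latest], [0.0, 1.0, 0.0])
          return pmf / pmf.sum()

      pmf = triangular_pmf(earliest=135, normal=152, latest=175)
      onsets = [148, 155, 160, 151, 170]     # onset days from one definition
      print(np.sum(np.log(pmf[onsets])))     # likelihood score: higher = closer match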

  10. The application of a shift theorem analysis technique to multipoint measurements

    NASA Astrophysics Data System (ADS)

    Dieckmann, M. E.; Chapman, S. C.

    1999-03-01

    A Fourier domain technique has been proposed previously which, in principle, quantifies the extent to which multipoint in-situ measurements can identify whether or not an observed structure is time stationary in its rest frame. Once a structure, sampled for example by four spacecraft, is shown to be quasi-stationary in its rest frame, the structure's velocity vector can be determined with respect to the sampling spacecraft. We investigate the properties of this technique, which we will refer to as a stationarity test, by applying it to two-point measurements of a simulated boundary layer. The boundary layer was evolved using a PIC (particle in cell) electromagnetic code. Initial and boundary conditions were chosen such that two cases could be considered, i.e. a spacecraft pair moving through (1) a time stationary boundary structure and (2) a boundary structure which is evolving (expanding) in time. The code also introduces noise in the simulated data time series which is uncorrelated between the two spacecraft. We demonstrate that, provided that the time series is Hanning windowed, the test is effective in determining the relative velocity between the boundary layer and spacecraft and in determining the range of frequencies over which the data can be treated as time stationary or time evolving. This work presents a first step towards understanding the effectiveness of this technique, as required in order for it to be applied to multispacecraft data.
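
    The core of such a test is that, for a time-stationary structure, the cross-spectral phase between the two (Hanning-windowed) time series grows linearly with frequency, the slope being the relative delay; a numpy sketch with a pure lag between two synthetic records:

      import numpy as np

      rng = np.random.default_rng(5)
      n, delay = 1024, 12                    # samples; x2 lags x1 by 12 samples
      s = rng.normal(size=n + delay)
      x1, x2 = s[delay:], s[:n]              # x2[t] = x1[t - delay]

      w = np.hanning(n)                      # Hanning window suppresses leakage
      X1, X2 = np.fft.rfft(x1 * w), np.fft.rfft(x2 * w)
      f = np.fft.rfftfreq(n, d=1.0)
      phase = np.unwrap(np.angle(X1 * np.conj(X2)))

      # Slope of phase against angular frequency estimates the time delay
      band = (f > 0) & (f < 0.25)
      slope = np.polyfit(2 * np.pi * f[band], phase[band], 1)[0]
      print(f"estimated delay: {slope:.1f} samples (true {delay})")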

  11. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems

    PubMed Central

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-01-01

    PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs computation on server-side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems. PMID:27174940

  12. Appropriate use of the increment entropy for electrophysiological time series.

    PubMed

    Liu, Xiaofeng; Wang, Xue; Zhou, Xu; Jiang, Aimin

    2018-04-01

    The increment entropy (IncrEn) is a new measure for quantifying the complexity of a time series. There are three critical parameters in the IncrEn calculation: N (length of the time series), m (dimensionality), and q (quantifying precision). However, the question of how to choose the most appropriate combination of IncrEn parameters for short datasets has not been extensively explored. The purpose of this research was to provide guidance on choosing suitable IncrEn parameters for short datasets by exploring the effects of varying the parameter values. We used simulated data, epileptic EEG data and cardiac interbeat (RR) data to investigate the effects of the parameters on the calculated IncrEn values. The results reveal that IncrEn is sensitive to changes in m, q and N for short datasets (N≤500). However, IncrEn reaches stability at a data length of N=1000 with m=2 and q=2, and for short datasets (N=100), it shows better relative consistency with 2≤m≤6 and 2≤q≤8. We suggest that the value of N should be no less than 100. To enable a clear distinction between different classes based on IncrEn, we recommend that m and q should take values between 2 and 4. With appropriate parameters, IncrEn enables the effective detection of complexity variations in physiological time series, suggesting that IncrEn should be useful for the analysis of physiological time series in clinical applications.
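
    A sketch of an increment-entropy-style calculation showing the roles of m (word length) and q (quantifying precision); the exact quantization rule below is one plausible choice, not necessarily the authors':

      import numpy as np
      from collections import Counter

      def increment_entropy(x, m=2, q=2):
          """Entropy of words of m signed, q-level-quantized increments."""
          v = np.diff(x)
          mag = np.minimum(q, np.floor(np.abs(v) * q / v.std())).astype(int)
          code = np.sign(v).astype(int) * mag           # signed magnitude code
          words = Counter(tuple(code[i:i + m]) for i in range(len(code) - m + 1))
          p = np.array(list(words.values()), dtype=float)
          p /= p.sum()
          return -np.sum(p * np.log2(p)) / m

      rng = np.random.default_rng(6)
      x = rng.normal(size=1000)
      # Stability check in the spirit of the paper: vary m and q
      print([round(increment_entropy(x, m, q), 3) for m in (2, 3) for q in (2, 4)])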

  13. An Iterative Time Windowed Signature Algorithm for Time Dependent Transcription Module Discovery

    PubMed Central

    Meng, Jia; Gao, Shou-Jiang; Huang, Yufei

    2010-01-01

    An algorithm for the discovery of time-varying modules using genome-wide expression data is presented here. When applied to large-scale time series data, our method is designed to discover not only the transcription modules but also their timing information, which is rarely annotated by existing approaches. Rather than assuming the commonly defined time-constant transcription modules, a module is depicted as a set of genes that are co-regulated during a specific period of time, i.e., a time dependent transcription module (TDTM). A rigorous mathematical definition of TDTM is provided, which serves as an objective function for retrieving modules. Based on the definition, an effective signature algorithm is proposed that iteratively searches for transcription modules in the time series data. The proposed method was tested on simulated systems and applied to human time series microarray data during Kaposi's sarcoma-associated herpesvirus (KSHV) infection. The result has been verified by Expression Analysis Systematic Explorer. PMID:21552463

  14. Approaches to quantifying long-term continental shelf sediment transport with an example from the Northern California STRESS mid-shelf site

    NASA Astrophysics Data System (ADS)

    Harris, Courtney K.; Wiberg, Patricia L.

    1997-09-01

    Modeling shelf sediment transport rates and bed reworking depths is problematic when the wave and current forcing conditions are not precisely known, as is usually the case when long-term sedimentation patterns are of interest. Two approaches to modeling sediment transport under such circumstances are considered. The first relies on measured or simulated time series of flow conditions to drive model calculations. The second approach uses as model input probability distribution functions of bottom boundary layer flow conditions developed from wave and current measurements. Sediment transport rates, frequency of bed resuspension by waves and currents, and bed reworking calculated using the two methods are compared at the mid-shelf STRESS (Sediment TRansport on Shelves and Slopes) site on the northern California continental shelf. Current, wave and resuspension measurements at the site are used to generate model inputs and test model results. An 11-year record of bottom wave orbital velocity, calculated from surface wave spectra measured by the National Data Buoy Center (NDBC) Buoy 46013 and verified against bottom tripod measurements, is used to characterize the frequency and duration of wave-driven transport events and to estimate the joint probability distribution of wave orbital velocity and period. A 109-day record of hourly current measurements 10 m above bottom is used to estimate the probability distribution of bottom boundary layer current velocity at this site and to develop an auto-regressive model to simulate current velocities for times when direct measurements of currents are not available. Frequency of transport, the maximum volume of suspended sediment, and average flux calculated using measured wave and simulated current time series agree well with values calculated using measured time series. A probabilistic approach is more amenable to calculations over time scales longer than existing wave records, but it tends to underestimate net transport because it does not capture the episodic nature of transport events. Both methods enable estimates to be made of the uncertainty in transport quantities that arise from an incomplete knowledge of the specific timing of wave and current conditions.
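
    The two approaches can be contrasted on synthetic data: summing a transport rating over a time series versus integrating it against a probability distribution of flow conditions. The rating and distribution below are invented for illustration, and by construction the two totals agree here, whereas with real, episodic forcing the probabilistic total can fall short:

      import numpy as np

      rng = np.random.default_rng(14)
      u = rng.gamma(2.0, 0.15, size=100_000)      # synthetic near-bed speeds (m/s)
      transport = lambda s: np.where(s > 0.25, (s - 0.25) ** 3, 0.0)  # toy rating

      direct = transport(u).mean()                # time-series approach
      hist, edges = np.histogram(u, bins=100, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      pdf_based = np.sum(hist * np.diff(edges) * transport(centers))  # PDF approach
      print(direct, pdf_based)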

  15. From blackbirds to black holes: Investigating capture-recapture methods for time domain astronomy

    NASA Astrophysics Data System (ADS)

    Laycock, Silas G. T.

    2017-07-01

    In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biassed view of the underlying population due to limitations on observing time, visibility and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence, must be estimated from sparse and incomplete data. The class of methods termed Capture-Recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of Capture-Recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (neutron star or black hole accreting from a stellar companion) under a range of observing strategies. We first generate realistic light-curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical capture-recapture methods, Lincoln-Peterson, Schnabel estimators, related techniques, and newer methods implemented in the Rcapture package are compared. A general exponential model based on the radioactive decay law is introduced which is demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle, in a fraction of the observing visits (10-50%) required to discover all the sources in the simulation. Capture-Recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostats community can be readily called from within python.
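
    The simplest of these estimators is easily stated in code: with n1 sources detected in one survey, n2 in another, and m2 detected in both, Lincoln-Petersen estimates the total population (Chapman is the usual bias-corrected variant); the counts below are invented for illustration:

      def lincoln_petersen(n1, n2, m2):
          """Classical two-visit abundance estimate: N = n1*n2/m2."""
          return n1 * n2 / m2

      def chapman(n1, n2, m2):
          """Bias-corrected variant, usable even when m2 = 0."""
          return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

      # e.g. 30 transients seen in survey 1, 25 in survey 2, 10 in both:
      print(lincoln_petersen(30, 25, 10), chapman(30, 25, 10))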

  16. Estimating long-term evolution of fine sediment budget in the Iffezheim reservoir using a simplified method based on classification of boundary conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Qing; Hillebrand, Gudrun; Hoffmann, Thomas; Hinkelmann, Reinhard

    2017-04-01

    The Iffezheim reservoir is the last of a series of reservoirs on the Upper Rhine in Germany. Since its construction in 1977, approximately 115,000 m3 of fine sediments accumulate annually in the weir channel (WSA Freiburg, 2011). In order to obtain detailed information about the space-time development of the topography, the riverbed evolution was measured using echo sounding by the German Federal Waterways and Shipping Administration (WSV). 37 sets of sounding data, which have been obtained between July 2000 and February 2011, were used in this research. In a previous work, the morphodynamic processes in the Iffezheim reservoir were investigated using a high-resolution 3D model. The 3D computational fluid dynamic software SSIIM II (Olsen, 2014) was used for this purpose (Zhang et al., 2015). The model was calibrated using field measurements. A computational time of 14.5 hours, using 24 cores of a 2.4 GHz reference computer, was needed for simulating a period of three months on a grid of 238,013 cells. Thus, the long-term (e.g. 30 years) simulation of morphodynamics of the fine sediment budget in the Iffezheim reservoir with this model is not feasible. A low complexity approach of "classification of the boundary conditions of discharge and suspended sediment concentration" was applied in this research for a long-term numerical simulation. The basic idea of the approach is to replace instationary or quasi-steady simulations of deposition by a limited series of stationary ones. For these, daily volume changes were calculated considering representative discharge and concentration. Representative boundary conditions were determined by subdividing time series of discharge and concentration into classes and using central values per class. The amount of the deposition in the reservoir for a certain period can then be obtained by adding up the calculated daily depositions. This approach was applied to 10 short-term periods, between two successive echo sounding measurements, and 2 longer ones, which include several short-term periods. Short-term periods spread from 1 to 3 months, whereas long-term periods indicate 2 and 5 years. The simulation results showed an acceptable agreement with the measurements. It was also found that the long-term periods had less deviation to the measurements than the short ones. This simplified method exhibited clear savings in computational time compared to the instationary simulations; in this case only 3 hours of computational time were needed for 5 years simulation period using the reference computer mentioned above. Further research is needed with respect to the limits of this linear approach, i.e. with respect to the frequency with which the set of steady simulations has to be updated due to significant changes in morphology and in turn in hydraulics. Yet, the preliminary results are promising, suggesting that the developed approach is very suitable for a long-term simulation of riverbed evolution. REFERENCES Olsen, N.R.B. 2014. A three-dimensional numerical model for simulation of sediment movements in water intakes with multiblock option. Version 1 and 2. User's manual. Department of Hydraulic and Environmental Engineering. The Norwegian University of Science and Technology, Trondheim, Norway. Wasser- und Schifffahrtsamt (WSA) Freiburg. 2011. Sachstandsbericht oberer Wehrkanal Staustufe Iffezheim. Technical report - Upper weir channel of the Iffezheim hydropower reservoir. Zhang, Q., Hillebrand, G. Moser, H. & Hinkelmann, R. 2015. 
Simulation of non-uniform sediment transport in a German Reservoir with the SSIIM Model and sensitivity analysis. Proceedings of the 36th IAHR World Congress. The Hague, The Netherlands.
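
    The classification idea reduces to binning the forcing time series, running one steady case per class, and weighting by class occupancy; in this sketch `daily_deposition` is a purely hypothetical stand-in for a steady-state 3D model result, and all data are synthetic:

      import numpy as np

      def total_deposition(Q, C, n_classes, daily_deposition):
          """Sum steady per-class results weighted by days spent in each class."""
          q_edges = np.quantile(Q, np.linspace(0, 1, n_classes + 1))
          c_edges = np.quantile(C, np.linspace(0, 1, n_classes + 1))
          qi = np.clip(np.digitize(Q, q_edges) - 1, 0, n_classes - 1)
          ci = np.clip(np.digitize(C, c_edges) - 1, 0, n_classes - 1)
          total = 0.0
          for i in range(n_classes):
              for j in range(n_classes):
                  n_days = int(np.sum((qi == i) & (ci == j)))
                  if n_days:                      # central values represent the class
                      q_rep = 0.5 * (q_edges[i] + q_edges[i + 1])
                      c_rep = 0.5 * (c_edges[j] + c_edges[j + 1])
                      total += n_days * daily_deposition(q_rep, c_rep)
          return total

      rng = np.random.default_rng(15)
      Q = rng.gamma(3.0, 400.0, size=1825)        # five years of daily discharge
      C = 20 + 0.02 * Q                           # toy concentration series
      print(total_deposition(Q, C, 5, lambda q, c: 1e-6 * q * c))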

  17. WUVS simulator: detectability of spectral lines with the WSO-UV spectrographs

    NASA Astrophysics Data System (ADS)

    Marcos-Arenal, Pablo; de Castro, Ana I. Gómez; Abarca, Belén Perea; Sachkov, Mikhail

    2017-04-01

    The World Space Observatory Ultraviolet telescope is equipped with high dispersion (55,000) spectrographs working in the 1150 to 3100 Å spectral range. To evaluate the impact of the design on the scientific objectives of the mission, a simulation software tool has been developed. This simulator builds on the development made for the PLATO space mission and it is designed to generate synthetic time-series of images by including models of all important noise sources. We describe its design and performance. Moreover, its application to the detectability of important spectral features for star formation and exoplanetary research is addressed.

  18. Near-road air pollutant concentrations of CO and PM2.5: A comparison of MOBILE6.2/CALINE4 and generalized additive models

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Batterman, Stuart

    2010-05-01

    The contribution of vehicular traffic to air pollutant concentrations is often difficult to establish. This paper utilizes both time-series and simulation models to estimate vehicle contributions to pollutant levels near roadways. The time-series model used generalized additive models (GAMs) and fitted pollutant observations to traffic counts and meteorological variables. A one year period (2004) was analyzed on a seasonal basis using hourly measurements of carbon monoxide (CO) and particulate matter less than 2.5 μm in diameter (PM2.5) monitored near a major highway in Detroit, Michigan, along with hourly traffic counts and local meteorological data. Traffic counts showed statistically significant and approximately linear relationships with CO concentrations in fall, and piecewise linear relationships in spring, summer and winter. The same period was simulated using emission and dispersion models (Motor Vehicle Emissions Factor Model/MOBILE6.2; California Line Source Dispersion Model/CALINE4). CO emissions derived from the GAM were similar, on average, to those estimated by MOBILE6.2. The same analyses for PM2.5 showed that GAM emission estimates were much higher (by 4-5 times) than the dispersion model results, and that the traffic-PM2.5 relationship varied seasonally. This analysis suggests that the simulation model performed reasonably well for CO, but it significantly underestimated PM2.5 concentrations, a likely result of underestimating PM2.5 emission factors. Comparisons between statistical and simulation models can help identify model deficiencies and improve estimates of vehicle emissions and near-road air quality.

  19. Inference for local autocorrelations in locally stationary models.

    PubMed

    Zhao, Zhibiao

    2015-04-01

    For non-stationary processes, the time-varying correlation structure provides useful insights into the underlying model dynamics. We study estimation and inferences for local autocorrelation process in locally stationary time series. Our constructed simultaneous confidence band can be used to address important hypothesis testing problems, such as whether the local autocorrelation process is indeed time-varying and whether the local autocorrelation is zero. In particular, our result provides an important generalization of the R function acf() to locally stationary Gaussian processes. Simulation studies and two empirical applications are developed. For the global temperature series, we find that the local autocorrelations are time-varying and have a "V" shape during 1910-1960. For the S&P 500 index, we conclude that the returns satisfy the efficient-market hypothesis whereas the magnitudes of returns show significant local autocorrelations.
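
    A crude point estimate of the local autocorrelation process can be formed with a rolling window; the paper's key contribution, a simultaneous confidence band for this curve, is not reproduced in this sketch:

      import numpy as np

      def local_acf1(x, window=101):
          """Rolling lag-1 autocorrelation as a local-autocorrelation estimate."""
          half = window // 2
          out = np.full(len(x), np.nan)
          for t in range(half, len(x) - half):
              seg = x[t - half:t + half + 1]
              out[t] = np.corrcoef(seg[:-1], seg[1:])[0, 1]
          return out

      rng = np.random.default_rng(16)
      phi = np.linspace(0.8, -0.2, 2000)     # AR(1) coefficient drifting in time
      x = np.zeros(2000)
      for t in range(1, 2000):
          x[t] = phi[t] * x[t - 1] + rng.normal()
      r = local_acf1(x)
      print(np.nanmean(r[:500]), np.nanmean(r[-500:]))   # high early, negative late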

  20. Power-balancing instantaneous optimization energy management for a novel series-parallel hybrid electric bus

    NASA Astrophysics Data System (ADS)

    Sun, Dongye; Lin, Xinyou; Qin, Datong; Deng, Tao

    2012-11-01

    Energy management (EM) is a core technique of the hybrid electric bus (HEB) for optimizing fuel economy, and it is unique to the corresponding configuration. Existing control strategy algorithms seldom consider battery power management together with internal combustion engine power management. In this paper, a power-balancing instantaneous optimization (PBIO) energy management control strategy is proposed for a novel series-parallel hybrid electric bus. According to the characteristics of the novel series-parallel architecture, the switching boundary condition between series and parallel modes as well as the control rules of the power-balancing strategy are developed. An equivalent fuel model of the battery is implemented and combined with the engine fuel model to constitute the objective function, which minimizes fuel consumption at each sampled time and coordinates the power distribution between the engine and battery in real time. To validate that the proposed strategy is effective and reasonable, a forward model is built in Matlab/Simulink for simulation, and a dSPACE AutoBox is applied as a controller for hardware-in-the-loop bench tests. Both the simulation and hardware-in-the-loop results demonstrate that the proposed strategy sustains the battery SOC within its operational range and keeps the engine operating point in the peak efficiency region; the fuel economy of the series-parallel hybrid electric bus (SPHEB) improves by up to 30.73% relative to the prototype bus, and the PBIO strategy reduces fuel consumption by up to 12.38% relative to a rule-based strategy. The proposed research shows that the PBIO algorithm is applicable in real time, improves the efficiency of the SPHEB system, and suits complicated configurations well.
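
    The instantaneous-optimization step can be sketched as a one-sample search over the engine/battery power split, minimizing engine fuel plus battery-equivalent fuel; the cost maps below are purely illustrative stand-ins, not the paper's models:

      import numpy as np

      def pbio_split(p_demand, soc, engine_fuel, batt_equiv_fuel, p_eng_max):
          """Choose the engine power (battery covers the rest) minimizing total
          equivalent fuel at one sample time."""
          candidates = np.linspace(0.0, p_eng_max, 101)     # engine power grid (kW)
          costs = [engine_fuel(p) + batt_equiv_fuel(p_demand - p, soc)
                   for p in candidates]
          return candidates[int(np.argmin(costs))]

      # Toy cost maps (purely illustrative):
      eng = lambda p: 0.02 * p + 0.0001 * p ** 2
      bat = lambda p, soc: 0.025 * abs(p) * (1.2 - soc)     # penalize low SOC
      print(pbio_split(p_demand=80.0, soc=0.6, engine_fuel=eng,
                       batt_equiv_fuel=bat, p_eng_max=120.0))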

  1. Traveltime delay relative to the maximum energy of the wave train for dispersive tsunamis propagating across the Pacific Ocean: the case of 2010 and 2015 Chilean Tsunamis

    NASA Astrophysics Data System (ADS)

    Poupardin, A.; Heinrich, P.; Hébert, H.; Schindelé, F.; Jamelot, A.; Reymond, D.; Sugioka, H.

    2018-05-01

    This paper evaluates the importance of frequency dispersion in the propagation of recent trans-Pacific tsunamis. Frequency dispersion induces a time delay for the most energetic waves, which increases for long propagation distances and short source dimensions. To calculate this time delay, propagation of tsunamis is simulated and analyzed from spectrograms of time-series at specific gauges in the Pacific Ocean. One- and two-dimensional simulations are performed by solving either shallow water or Boussinesq equations and by considering realistic seismic sources. One-dimensional sensitivity tests are first performed in a constant-depth channel to study the influence of the source width. Two-dimensional tests are then performed in a simulated Pacific Ocean with a 4000-m constant depth and by considering tectonic sources of 2010 and 2015 Chilean earthquakes. For these sources, both the azimuth and the distance play a major role in the frequency dispersion of tsunamis. Finally, simulations are performed considering the real bathymetry of the Pacific Ocean. Multiple reflections, refractions as well as shoaling of waves result in much more complex time series for which the effects of the frequency dispersion are hardly discernible. The main point of this study is to evaluate frequency dispersion in terms of traveltime delays by calculating spectrograms for a time window of 6 hours after the arrival of the first wave. Results of the spectral analysis show that the wave packets recorded by pressure and tide sensors in the Pacific Ocean seem to be better reproduced by the Boussinesq model than the shallow water model and approximately follow the theoretical dispersion relationship linking wave arrival times and frequencies. Additionally, a traveltime delay is determined above which effects of frequency dispersion are considered to be significant in terms of maximum surface elevations.
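
    The frequency-dependent traveltime underlying these delays follows from the linear dispersion relation omega^2 = g*k*tanh(k*h); a sketch with scipy for an assumed 4000-m ocean and a 10,000-km path shows higher frequencies arriving later:

      import numpy as np
      from scipy.optimize import brentq

      def group_velocity(f, h, g=9.81):
          """Solve omega^2 = g k tanh(k h) for k, then cg = n * omega / k."""
          w = 2 * np.pi * f
          k = brentq(lambda k: w ** 2 - g * k * np.tanh(k * h), 1e-8, 10.0)
          n = 0.5 * (1 + 2 * k * h / np.sinh(2 * k * h))
          return n * w / k

      h, d = 4000.0, 1.0e7                  # 4000 m depth, 10,000 km path
      for f in (5e-4, 1e-3, 2e-3):          # tsunami-band frequencies (Hz)
          print(f, d / group_velocity(f, h) / 3600, "hours")  # later at high f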

  2. A Comparison of Grid-based and SPH Binary Mass-transfer and Merger Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motl, Patrick M.; Frank, Juhan; Clayton, Geoffrey C.

    2017-04-01

    There is currently a great amount of interest in the outcomes and astrophysical implications of mergers of double degenerate binaries. In a commonly adopted approximation, the components of such binaries are represented by polytropes with an index of n  = 3/2. We present detailed comparisons of stellar mass-transfer and merger simulations of polytropic binaries that have been carried out using two very different numerical algorithms—a finite-volume “grid” code and a smoothed-particle hydrodynamics (SPH) code. We find that there is agreement in both the ultimate outcomes of the evolutions and the intermediate stages if the initial conditions for each code are chosen to matchmore » as closely as possible. We find that even with closely matching initial setups, the time it takes to reach a concordant evolution differs between the two codes because the initial depth of contact cannot be matched exactly. There is a general tendency for SPH to yield higher mass transfer rates and faster evolution to the final outcome. We also present comparisons of simulations calculated from two different energy equations: in one series, we assume a polytropic equation of state and in the other series an ideal gas equation of state. In the latter series of simulations, an atmosphere forms around the accretor, which can exchange angular momentum and cause a more rapid loss of orbital angular momentum. In the simulations presented here, the effect of the ideal equation of state is to de-stabilize the binary in both SPH and grid simulations, but the effect is more pronounced in the grid code.« less

  3. Flight Tests of a Remaining Flying Time Prediction System for Small Electric Aircraft in the Presence of Faults

    NASA Technical Reports Server (NTRS)

    Hogge, Edward F.; Kulkarni, Chetan S.; Vazquez, Sixto L.; Smalling, Kyle M.; Strom, Thomas H.; Hill, Boyd L.; Quach, Cuong C.

    2017-01-01

    This paper addresses the problem of building trust in the online prediction of a battery powered aircraft's remaining flying time. A series of flight tests is described that make use of a small electric powered unmanned aerial vehicle (eUAV) to verify the performance of the remaining flying time prediction algorithm. The estimate of remaining flying time is used to activate an alarm when the predicted remaining time is two minutes. This notifies the pilot to transition to the landing phase of the flight. A second alarm is activated when the battery charge falls below a specified limit threshold. This threshold is the point at which the battery energy reserve would no longer safely support two repeated aborted landing attempts. During the test series, the motor system is operated with the same predefined timed airspeed profile for each test. To test the robustness of the prediction, half of the tests were performed with, and half were performed without, a simulated powertrain fault. The pilot remotely engages a resistor bank at a specified time during the test flight to simulate a partial powertrain fault. The flying time prediction system is agnostic of the pilot's activation of the fault and must adapt to the vehicle's state. The time at which the limit threshold on battery charge is reached is then used to measure the accuracy of the remaining flying time predictions. Accuracy requirements for the alarms are considered and the results discussed.

  4. Ensemble reconstruction of severe low flow events in France since 1871

    NASA Astrophysics Data System (ADS)

    Caillouet, Laurie; Vidal, Jean-Philippe; Sauquet, Eric; Devers, Alexandre; Graff, Benjamin

    2016-04-01

    This work presents a study of severe low flow events that occurred from 1871 onwards for a large number of near-natural catchments in France. It aims at assessing and comparing their characteristics to improve our knowledge on historical events and to provide a selection of benchmark events for climate change adaptation purposes. The historical depth of streamflow observations is generally limited to the last 50 years and therefore offers too small a sample of severe low flow events to properly explore the long-term evolution of their characteristics and associated impacts. In order to overcome this limit, this work takes advantage of a 140-year ensemble hydrometeorological dataset over France based on: (1) a probabilistic precipitation and temperature downscaling of the Twentieth Century Reanalysis over France (Caillouet et al., 2015), and (2) a continuous hydrological modelling that uses the high-resolution meteorological reconstructions as forcings over the whole period. This dataset provides an ensemble of 25 equally plausible daily streamflow time series for a reference network of stations in France over the whole 1871-2012 period. Severe low flow events are identified based on a combination of a fixed threshold and a daily variable threshold. Each event is characterized by its deficit, duration and timing by applying the Sequent Peak Algorithm. The procedure is applied to the 25 simulated time series as well as to the observed time series in order to compare observed and simulated events over the recent period, and to characterize in a probabilistic way unrecorded historical events. The ensemble aspect of the reconstruction raises specific issues in properly defining events across ensemble simulations, as well as in adequately comparing the simulated characteristics to the observed ones. This study brings forward the outstanding 1921 and 1940s events but also older and less known ones that occurred during the last decade of the 19th century. For the first time, severe low flow events are qualified in a homogeneous way over 140 years on a large set of near-natural French catchments, allowing for detailed analyses of the effect of climate variability and anthropogenic climate change on low flow hydrology. Caillouet, L., Vidal, J.-P., Sauquet, E., and Graff, B. (2015) Probabilistic precipitation and temperature downscaling of the Twentieth Century Reanalysis over France, Clim. Past Discuss., 11, 4425-4482, doi:10.5194/cpd-11-4425-2015
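
    Deficit-based event extraction in the spirit of the Sequent Peak Algorithm can be sketched as follows (the threshold and synthetic flow series are illustrative, and the study's combination of fixed and daily varying thresholds is not reproduced):

      import numpy as np

      def low_flow_events(q, threshold):
          """Accumulate deficits below a threshold; an event closes when the
          deficit is paid back (or the record ends)."""
          events, deficit, start = [], 0.0, None
          for t, flow in enumerate(q):
              deficit = max(0.0, deficit + threshold - flow)
              if deficit > 0 and start is None:
                  start = t
              elif deficit == 0 and start is not None:
                  events.append((start, t - start))   # (onset, duration in days)
                  start = None
          if start is not None:
              events.append((start, len(q) - start))  # flush an open event
          return events

      rng = np.random.default_rng(7)
      t = np.arange(365)
      q = 10 + 2 * np.sin(2 * np.pi * t / 365) + 0.2 * rng.normal(size=365)
      print(low_flow_events(q, threshold=9.0))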

  5. How does the host population's network structure affect the estimation accuracy of epidemic parameters?

    NASA Astrophysics Data System (ADS)

    Yashima, Kenta; Ito, Kana; Nakamura, Kazuyuki

    2013-03-01

    When an infectious disease prevails throughout a population, epidemic parameters such as the basic reproduction ratio and the initial point of infection are estimated from time series data of the infected population. However, it is unclear how the structure of the host population affects this estimation accuracy. In other words, what kind of city makes it difficult to estimate the epidemic parameters? To answer this question, epidemic data are simulated by constructing commuting networks with different network structures and running the infection process over each network. From the time series data generated for each network structure, we analyze the estimation accuracy of the epidemic parameters.

  6. Towards Solving the Mixing Problem in the Decomposition of Geophysical Time Series by Independent Component Analysis

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)

    2000-01-01

    The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that the Independent Component Analysis (ICA), a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
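
    The decorrelation-versus-independence distinction is easy to demonstrate with scikit-learn on a toy linear mixture (sources, mixing matrix, and sizes are illustrative):

      import numpy as np
      from sklearn.decomposition import PCA, FastICA

      t = np.linspace(0, 8, 2000)
      s1, s2 = np.sign(np.sin(3 * t)), np.sin(5 * t)           # independent sources
      X = np.c_[s1, s2] @ np.array([[1.0, 0.6], [0.4, 1.0]])   # linear mixture

      pca = PCA(n_components=2).fit_transform(X)               # decorrelated, still mixed
      ica = FastICA(n_components=2, random_state=0).fit_transform(X)

      for name, comps in (("PCA", pca), ("ICA", ica)):
          c = np.corrcoef(np.c_[comps, s1, s2].T)[:2, 2:]      # components vs sources
          print(name, np.round(np.abs(c), 2))                  # ICA rows ~ one source each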

  7. Convergent Cross Mapping: Basic concept, influence of estimation parameters and practical application.

    PubMed

    Schiecke, Karin; Pester, Britta; Feucht, Martha; Leistritz, Lutz; Witte, Herbert

    2015-01-01

    In neuroscience, data are typically generated from neural network activity. Complex interactions between measured time series are involved, and nothing or only little is known about the underlying dynamic system. Convergent Cross Mapping (CCM) provides the possibility to investigate nonlinear causal interactions between time series by using nonlinear state space reconstruction. The aim of this study is to investigate the general applicability of CCM and to show its potential and limitations. The influence of estimation parameters is demonstrated by means of simulated data, whereas interval-based application of CCM to real data is adapted for the investigation of interactions between heart rate and specific EEG components of children with temporal lobe epilepsy.
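
    A compact sketch of the CCM idea, cross-mapping one series from the delay embedding of the other (fixed library size; the convergence-with-library-size check that is central to full CCM is omitted):

      import numpy as np

      def embed(x, E, tau):
          """Delay embedding: rows (x[t], x[t-tau], ..., x[t-(E-1)tau])."""
          n = len(x) - (E - 1) * tau
          return np.column_stack(
              [x[(E - 1 - k) * tau:(E - 1 - k) * tau + n] for k in range(E)])

      def ccm_skill(x, y, E=3, tau=1):
          """Predict x from y's shadow manifold; skill = correlation."""
          My = embed(y, E, tau)
          xs = x[(E - 1) * tau:]
          preds = np.empty(len(xs))
          for i in range(len(xs)):
              d = np.linalg.norm(My - My[i], axis=1)
              d[i] = np.inf                       # exclude the point itself
              nn = np.argsort(d)[:E + 1]          # E+1 nearest neighbours
              w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
              preds[i] = np.dot(w, xs[nn]) / w.sum()
          return np.corrcoef(xs, preds)[0, 1]

      # Unidirectionally coupled logistic maps: x drives y.
      x, y = np.empty(500), np.empty(500)
      x[0], y[0] = 0.4, 0.2
      for t in range(499):
          x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
          y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])
      print(ccm_skill(x, y), ccm_skill(y, x))     # x is recoverable from y's manifold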

  8. Real-time flutter boundary prediction based on time series models

    NASA Astrophysics Data System (ADS)

    Gu, Wenjing; Zhou, Li

    2018-03-01

    For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models, accompanied by corresponding stability criteria, are adopted in this paper. The first method simplifies a long nonstationary response signal as many contiguous intervals, each considered to be stationary. The traditional AR model is then established to represent each interval of the signal sequence. The second employs a time-varying AR model to characterize actual measured signals in the flutter test with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods show significant effectiveness in predicting the flutter boundary at lower speed levels. A comparison between the two methods is also given in this paper.
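
    The stability assessment reduces to asking how close the fitted AR model's characteristic roots are to the unit circle, which is what the Jury criterion tests algebraically; a numpy sketch on a lightly damped toy signal (order and data are illustrative):

      import numpy as np

      def ar_stability_margin(x, p=4):
          """Fit AR(p) by least squares and return the largest characteristic-root
          magnitude; values approaching 1 indicate loss of damping."""
          rows = np.array([x[t - p:t][::-1] for t in range(p, len(x))])
          coef = np.linalg.lstsq(rows, x[p:], rcond=None)[0]
          roots = np.roots(np.concatenate(([1.0], -coef)))   # z^p - a1 z^(p-1) - ...
          return np.max(np.abs(roots))

      rng = np.random.default_rng(10)
      t = np.arange(2000)
      sig = np.exp(-0.001 * t) * np.sin(0.3 * t) + 0.05 * rng.normal(size=2000)
      print(ar_stability_margin(sig))     # expect a margin close to (but below) 1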

  9. Temporal asymmetry in precipitation time series and its influence on flow simulations in combined sewer systems

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Schütze, Manfred; Bárdossy, András

    2017-09-01

    A property of natural processes is temporal irreversibility. However, this property cannot be reflected by most statistics used to describe precipitation time series and, consequently, is not considered in most precipitation models. In this paper, a new statistic, the asymmetry measure, is introduced and applied to precipitation, making it possible to detect and quantify irreversibility. It is used to analyze two different data sets from Singapore and Germany. The data of both locations show a significant asymmetry at high temporal resolutions. The asymmetry is more pronounced for Singapore, where the climate is dominated by convective precipitation events. The impact of irreversibility on applications is analyzed with two different hydrological sewer system models. The results show that the effect of the irreversibility can lead to biases in combined sewer overflow statistics. This bias is of the same order as the effect that can be achieved by real-time control of sewer systems. Consequently, wrong conclusions can be drawn if synthetic time series are used for sewer system modelling when asymmetry is present but not considered in precipitation modeling.
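
    A simple statistic of this kind is the skewness of lagged increments, which changes sign under time reversal and so vanishes in expectation for reversible series; this illustrates the concept and is not necessarily the paper's exact asymmetry measure:

      import numpy as np

      def asymmetry(x, lag=1):
          """Skewness of lagged increments: zero in expectation for a
          time-reversible series, nonzero for asymmetric rise/decay shapes."""
          d = x[lag:] - x[:-lag]
          return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

      rng = np.random.default_rng(11)
      print("white noise:", asymmetry(rng.normal(size=5000)))   # ~0
      # Rain-like pulses: fast rise, slow decay -> asymmetric in time
      pulse = np.concatenate([np.linspace(0, 1, 3), np.linspace(1, 0, 20)])
      series = np.concatenate([pulse] * 200) + 0.01 * rng.normal(size=23 * 200)
      print("pulsed series:", asymmetry(series))                # clearly nonzero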

  10. Hydrologic data for water years 1933-97 used in the River and Reservoir Operations Model, Truckee River basin, California and Nevada

    USGS Publications Warehouse

    Berris, Steven N.; Hess, Glen W.; Bohman, Larry R.

    2000-01-01

    Title II of Public Law 101-618, the Truckee-Carson-Pyramid Lake Water Rights Settlement Act of 1990, provides direction, authority, and a mechanism for resolving conflicts over water rights in the Truckee and Carson River Basins. The Truckee-Carson Program of the U.S. Geological Survey, to support implementation of Public Law 101-618, has developed an operations model to simulate lake/reservoir and river operations for the Truckee River Basin, including diversion of Truckee River water to the Truckee Canal for transport to the Carson River Basin. Several types of hydrologic data, formatted in chronological order with a daily time interval called 'time series,' are described in this report. Time series from water years 1933 to 1997 can be used to run the operations model. Auxiliary hydrologic data not currently used by the model are also described. The time series of hydrologic data consist of flow, lake/reservoir elevation and storage, precipitation, evaporation, evapotranspiration, municipal and industrial (M&I) demand, and streamflow and lake/reservoir level forecast data.

  11. An M-estimator for reduced-rank system identification.

    PubMed

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T

    2017-01-15

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.
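
    The flavor of the approach can be sketched with ridge least squares followed by SVD truncation to the target rank (MR. SID additionally uses an ℓ1 penalty and scales to far larger problems; everything below is an illustrative simplification):

      import numpy as np

      def reduced_rank_var(X, rank, ridge=1.0):
          """For x[t+1] = A x[t] + e: ridge least squares, then truncate the
          SVD of A to the target rank."""
          Z, Y = X[:-1], X[1:]                          # rows are time points
          p = Z.shape[1]
          A = np.linalg.solve(Z.T @ Z + ridge * np.eye(p), Z.T @ Y).T
          U, s, Vt = np.linalg.svd(A)
          s[rank:] = 0.0                                # keep only `rank` modes
          return U @ np.diag(s) @ Vt

      rng = np.random.default_rng(12)
      A_true = 0.5 * np.outer(rng.normal(size=20), rng.normal(size=20)) / 20  # rank 1
      X = np.zeros((500, 20))
      for t in range(499):
          X[t + 1] = X[t] @ A_true.T + 0.1 * rng.normal(size=20)
      A_hat = reduced_rank_var(X, rank=1)
      print(np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))  # relative error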

  12. An M-estimator for reduced-rank system identification

    PubMed Central

    Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S.; Vogelstein, Joshua T.

    2018-01-01

    High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models. PMID:29391659

  13. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    DOE PAGES

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-01-01

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  14. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-06-23

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  15. Freshening of the Labrador Sea Surface Waters in the 1990s: Another Great Salinity Anomaly

    NASA Technical Reports Server (NTRS)

    Hakkinen, Sirpa; Koblinsky, Chester J. (Technical Monitor)

    2002-01-01

    Both the observed and simulated time series of Labrador Sea surface salinities show a major freshening event since the mid-1990s. It continues the series of decadal events of the 1970s and 1980s, from which the freshening in the early 1970s was named the Great Salinity Anomaly (GSA). These events are especially distinguishable in the late summer (August and September) time series. The observed data suggest that the 1990s freshening may equal the GSA in magnitude. This recent event is associated with a large reduction in the overturning rate between the early and latter part of the 1990s. Both the observations and model results indicate that the surface salinity conditions appear to be returning towards normal during 1999 and 2000 in the coastal area, but offshore, the model predicts the freshening to linger on after peaking in 1997.

  16. Generalized sample entropy analysis for traffic signals based on similarity measure

    NASA Astrophysics Data System (ADS)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper, a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on a similarity distance, presents a different way of matching signal patterns and reveals distinct complexity behaviors. Simulations are conducted over synthetic data and traffic signals to provide a comparative study that shows the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation concerning the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.
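
    For reference, the classical sample entropy that the paper modifies, with the usual Chebyshev template match (the proposed method replaces this match with a similarity-distance criterion); parameters and data below are illustrative:

      import numpy as np

      def sample_entropy(x, m=2, r_frac=0.2):
          """SampEn = -log(A/B): A and B count (m+1)- and m-point template
          matches under the Chebyshev (max) distance with tolerance r."""
          r = r_frac * np.std(x)

          def count(mm):
              templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
              total = 0
              for i in range(len(templ) - 1):
                  d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
                  total += int(np.sum(d <= r))
              return total

          return -np.log(count(m + 1) / count(m))

      rng = np.random.default_rng(13)
      print(sample_entropy(rng.normal(size=1000)))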

  17. Topological properties of the limited penetrable horizontal visibility graph family

    NASA Astrophysics Data System (ADS)

    Wang, Minggang; Vilela, André L. M.; Du, Ruijin; Zhao, Longfeng; Dong, Gaogao; Tian, Lixin; Stanley, H. Eugene

    2018-05-01

    The limited penetrable horizontal visibility graph algorithm was recently introduced to map time series into complex networks. In this work, we extend this algorithm to create a directed-limited penetrable horizontal visibility graph and an image-limited penetrable horizontal visibility graph. We define two algorithms and provide theoretical results on the topological properties of these graphs associated with different types of real-valued series. We perform several numerical simulations to check the accuracy of our theoretical results. Finally, we present an application of the directed-limited penetrable horizontal visibility graph to measure the irreversibility of real-valued time series and an application of the image-limited penetrable horizontal visibility graph that discriminates noise from chaos. We also propose a method to measure the systematic risk using the image-limited penetrable horizontal visibility graph, and the empirical results show the effectiveness of our proposed algorithms.
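
    The construction itself is short: nodes i and j are linked when at most L intermediate points rise to min(x[i], x[j]), with L = 0 recovering the ordinary horizontal visibility graph; a brute-force sketch on a toy series:

      import numpy as np

      def lphvg_edges(x, L=1):
          """Limited penetrable horizontal visibility graph: link i and j when
          at most L intermediate points block the horizontal line of sight."""
          edges = []
          n = len(x)
          for i in range(n - 1):
              for j in range(i + 1, n):
                  blockers = sum(x[k] >= min(x[i], x[j]) for k in range(i + 1, j))
                  if blockers <= L:
                      edges.append((i, j))
          return edges

      x = [3, 1, 2, 5, 2, 4, 1]
      print(lphvg_edges(x, L=0))   # ordinary HVG
      print(lphvg_edges(x, L=1))   # one penetration allowed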

  18. Potential of VIIRS Time Series Data for Aiding the USDA Forest Service Early Warning System for Forest Health Threats: A Gypsy Moth Defoliation Case Study

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph P.; Ryan, Robert E.; Smoot, James; Kuper, Phillip; Prados, Donald; Russell, Jeffrey; Ross, Kenton; Gasser, Gerald; Sader, Steven; McKellip, Rodney

    2007-01-01

    This report details one of three experiments performed during FY 2007 for the NASA RPC (Rapid Prototyping Capability) at Stennis Space Center. This RPC experiment assesses the potential of VIIRS (Visible/Infrared Imager/Radiometer Suite) and MODIS (Moderate Resolution Imaging Spectroradiometer) data for detecting and monitoring forest defoliation from the non-native Eurasian gypsy moth (Lymantria dispar). The intent of the RPC experiment was to assess the degree to which VIIRS data can provide forest disturbance monitoring information as an input to a forest threat EWS (Early Warning System), as compared to the level of information that can be obtained from MODIS data. The USDA Forest Service (USFS) plans to use MODIS products for generating broad-scaled, regional monitoring products as input to an EWS for forest health threat assessment. NASA SSC is helping the USFS to evaluate and integrate currently available satellite remote sensing technologies and data products for the EWS, including the use of MODIS products for regional monitoring of forest disturbance. Gypsy moth defoliation of the mid-Appalachian highland region was selected as a case study. The gypsy moth is one of eight major forest insect threats listed in the Healthy Forest Restoration Act (HFRA) of 2003; it threatens eastern U.S. hardwood forests, which are also a concern highlighted in the HFRA. This region was selected for the project because extensive gypsy moth defoliation occurred there over multiple years during the MODIS operational period. This RPC experiment is relevant to several nationally important mapping applications, including agricultural efficiency, coastal management, ecological forecasting, disaster management, and carbon management. In this experiment, MODIS data and VIIRS data simulated from MODIS were assessed for their ability to contribute broad, regional geospatial information on gypsy moth defoliation. Landsat and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) data were used to assess the quality of gypsy moth defoliation mapping products derived from MODIS data and from simulated VIIRS data. The project focused on use of data from MODIS Terra as opposed to MODIS Aqua mainly because only MODIS Terra data was collected during 2000 and 2001, years with comparatively high amounts of gypsy moth defoliation within the study area. The project assessed the quality of VIIRS data simulation products. Hyperion data was employed to assess the quality of MODIS-based VIIRS simulation datasets using image correlation analysis techniques. The ART (Application Research Toolbox) software was used for data simulation. Correlation analysis between MODIS-simulated VIIRS data and Hyperion-simulated VIIRS data for red, NIR (near-infrared), and NDVI (Normalized Difference Vegetation Index) image data products collectively indicates that useful, effective VIIRS simulations can be produced using Hyperion and MODIS data sources. The r^2 values for the red, NIR, and NDVI products were 0.56, 0.63, and 0.62, respectively, indicating a moderately high correlation between the two data sources. Temporal decorrelation from different data acquisition times and image misregistration may have lowered the correlation results. The RPC experiment also generated MODIS-based time series data products using the TSPT (Time Series Product Tool) software.
Time series of simulated VIIRS NDVI products were produced at approximately 400-meter GSD (Ground Sampling Distance) at nadir for comparison to MODIS NDVI products at either 250- or 500-meter GSD. The project also computed MODIS (MOD02) NDMI (Normalized Difference Moisture Index) products at 500-meter GSD for comparison to NDVI-based products. For each year during 2000-2006, MODIS and VIIRS (simulated from MOD02) time series were computed during the peak gypsy moth defoliation time frame in the study area (approximately June 10 through July 27). Gypsy moth defoliation mapping products from simulated VIIRS and MOD02 time series were produced using multiple methods, including image classification and change detection via image differencing. The latter enabled an automated defoliation detection product computed using percent change in maximum NDVI for the peak defoliation period during 2001 compared to maximum NDVI across the entire 2000-2006 time frame. Final gypsy moth defoliation mapping products were assessed for accuracy using randomly sampled locations found on available geospatial reference data (Landsat and ASTER data in conjunction with defoliation map data from the USFS). Extensive gypsy moth defoliation patches were evident on screen displays of multitemporal color composites derived from MODIS data and from simulated VIIRS vegetation index data. Such defoliation was particularly evident for 2001, although widespread denuded forests were also seen for 2000 and 2003. These visualizations were validated using the aforementioned reference data. Defoliation patches were visible on displays of MODIS-based NDVI and NDMI data. Viewing apparent defoliation patches on all of these products necessitated a specialized temporal data processing method (e.g., maximum NDVI during the peak defoliation time frame), an approach made necessary by the frequency of cloud cover. Multitemporal simulated VIIRS and MODIS Terra data both produced effective general classifications of defoliated forest versus other land cover. For 2001, the MOD02-simulated VIIRS 400-meter NDVI classification produced a similar yet slightly lower overall accuracy (87.28 percent with 0.72 Kappa) than the MOD02 250-meter NDVI classification (88.44 percent with 0.75 Kappa). The MOD13 250-meter NDVI classification had a lower overall accuracy (79.13 percent) and a much lower Kappa (0.46). The report discusses accuracy assessment results in much more detail, comparing overall classification and individual class accuracy statistics for the simulated VIIRS 400-meter NDVI, MOD02 250-meter NDVI, MOD02 500-meter NDVI, MOD13 250-meter NDVI, and MOD02 500-meter NDMI classifications. Automated defoliation detection products from simulated VIIRS and MOD02 data for 2001 also yielded similar, relatively high overall classification accuracies (85.55 percent for the VIIRS 400-meter NDVI versus 87.28 percent for the MOD02 250-meter NDVI). In contrast, the USFS aerial sketch map of gypsy moth defoliation showed a lower overall classification accuracy of 73.64 percent. The overall classification Kappa values were also similar for the VIIRS (approximately 0.67) versus the MOD02 (approximately 0.72) automated defoliation detection products, both much higher than the value exhibited by the USFS sketch map product (overall Kappa of approximately 0.47). The report provides additional details on the accuracy of automated gypsy moth defoliation detection products compared with USFS sketch maps.
The results suggest that VIIRS data can be effectively simulated from MODIS data and that VIIRS data will produce gypsy moth defoliation mapping products similar to MODIS-based products. The results of the RPC experiment indicate that VIIRS and MODIS data products have good potential for integration into the forest threat EWS. The accuracy assessment was performed only for 2001 because of time constraints and a relative scarcity of cloud-free Landsat and ASTER data for the peak defoliation period of the other years in the 2000-2006 time series. Additional work should be performed to assess the accuracy of gypsy moth defoliation detection products for additional years. The study area (mid-Appalachian highlands) and application (gypsy moth forest defoliation) are not necessarily representative of all forested regions and of all forest threat disturbance agents. Additional work should be performed on other inland and coastal regions as well as for other major forest threats.
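
The change-detection-by-differencing method described above reduces to a simple per-pixel computation: composite each stack to its temporal maximum NDVI, then flag pixels whose 2001 peak falls a chosen fraction below the 2000-2006 peak. The Python sketch below illustrates this; the 0.2 threshold and the epsilon guards are illustrative assumptions, not values from the report.

    import numpy as np

    def ndvi(nir, red):
        # Normalized Difference Vegetation Index; epsilon avoids 0/0.
        return (nir - red) / (nir + red + 1e-9)

    def defoliation_mask(ndvi_2001, ndvi_2000_2006, drop=0.2):
        # Inputs are (time, rows, cols) stacks of cloud-screened NDVI.
        # The temporal maximum is the compositing step used to suppress
        # residual cloud contamination during the peak defoliation window.
        peak_2001 = np.nanmax(ndvi_2001, axis=0)
        peak_all = np.nanmax(ndvi_2000_2006, axis=0)
        # Fractional drop of the 2001 peak below the 2000-2006 peak.
        change = (peak_all - peak_2001) / np.maximum(peak_all, 1e-9)
        return change > drop   # True where defoliation is flagged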

  19. Historical gaseous and primary aerosol emissions in the United States from 1990-2010

    EPA Science Inventory

    An accurate description of emissions is crucial for model simulations to reproduce and interpret observed phenomena over extended time periods. In this study, we used an approach based on activity data to develop a consistent series of spatially resolved emissions in the United S...

  20. The Macro - Games Course Package.

    ERIC Educational Resources Information Center

    Heriot-Watt Univ., Edinburgh (Scotland). Esmee Fairbairn Economics Research Centre.

    Part of an Economic Education Series, the course package is designed to teach basic concepts and fundamental principles of macroeconomics and how they can be applied to various world problems. For use with college students, learning is gained through lectures, discussion, simulation games, programmed learning, and text. Time allotment is a 15-week…
