NASA Astrophysics Data System (ADS)
Hermosilla, Txomin; Wulder, Michael A.; White, Joanne C.; Coops, Nicholas C.; Hobart, Geordie W.
2017-12-01
The use of time series satellite data allows for the temporally dense, systematic, transparent, and synoptic capture of land dynamics over time. Subsequent to the opening of the Landsat archive, several time series approaches for characterizing landscape change have been developed, often representing a particular analytical time window. The information richness and widespread utility of these time series data have created a need to maintain the currency of time series information via the addition of new data, as it becomes available. When an existing time series is temporally extended, it is critical that previously generated change information remains consistent, thereby not altering reported change statistics or science outcomes based on that change information. In this research, we investigate the impacts and implications of adding additional years to an existing 29-year annual Landsat time series for forest change. To do so, we undertook a spatially explicit comparison of the 29 overlapping years of a time series representing 1984-2012, with a time series representing 1984-2016. Surface reflectance values, and presence, year, and type of change were compared. We found that the addition of years to extend the time series had minimal effect on the annual surface reflectance composites, with slight band-specific differences (r ≥ 0.1) in the final years of the original time series being updated. The area of stand replacing disturbances and determination of change year are virtually unchanged for the overlapping period between the two time-series products. Over the overlapping temporal period (1984-2012), the total area of change differs by 0.53%, equating to an annual difference in change area of 0.019%. Overall, the spatial and temporal agreement of the changes detected by both time series was 96%. Further, our findings suggest that the entire pre-existing historic time series does not need to be re-processed during the update process. 
Critically, given the time series change detection and update approach followed here, science outcomes or reports representing one temporal epoch can be considered stable and will not be altered when a time series is updated with newly available data.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
...''). The specified time period will commence for an option when a transaction occurs in any series in such... contracts executed among all series during the specified time period that represents an issue percentage... executed among all series during the specified time period that represents an issue percentage equal to the...
Multidimensional stock network analysis: An Escoufier's RV coefficient approach
NASA Astrophysics Data System (ADS)
Lee, Gan Siew; Djauhari, Maman A.
2013-09-01
The current practice of stock network analysis is based on the assumption that the time series of closing stock prices can represent the behaviour of each stock. This assumption leads to considering the minimal spanning tree (MST) and sub-dominant ultrametric (SDU) as indispensable tools to filter the economic information contained in the network. Recently, there have been attempts where researchers represent a stock not only as a univariate time series of closing prices but as a bivariate time series of closing price and volume. In this case, they developed the so-called multidimensional MST to filter the important economic information. However, in this paper, we show that their approach is applicable only to that bivariate time series. This leads us to introduce a new methodology to construct an MST where each stock is represented by a multivariate time series. An example from the Malaysian stock exchange is presented and discussed to illustrate the advantages of the method.
Volatility of linear and nonlinear time series
NASA Astrophysics Data System (ADS)
Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo
2005-07-01
Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, ui, can be detected and quantified by studying the correlations in the magnitude series ∣ui∣, the “volatility.” However, the origin of this empirical observation remains unclear, and the exact relation between the correlations in ui and the correlations in ∣ui∣ is still unknown. Here we develop analytical relations between the scaling exponent of a linear series ui and that of its magnitude series ∣ui∣. Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long-range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series (that represents the magnitude series) with an uncorrelated time series (that represents the sign series sgn(ui)). We apply our techniques to daily deep ocean temperature records from the equatorial Pacific, the region of the El-Niño phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) a broad multifractal spectrum.
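The magnitude-times-sign construction described in this abstract can be sketched in a few lines. This is an illustrative sketch only: the spectral (Fourier-filtering) synthesis of the long-range correlated component and the function name `long_range_correlated` are assumptions, not the authors' exact generation procedure.

```python
import numpy as np

def long_range_correlated(n, beta, rng):
    """Generate a Gaussian series with an approximate 1/f^beta power
    spectrum via spectral (Fourier-filtering) synthesis."""
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                      # avoid division by zero at DC
    amplitudes = freqs ** (-beta / 2.0)
    phases = rng.uniform(0.0, 2 * np.pi, len(freqs))
    x = np.fft.irfft(amplitudes * np.exp(1j * phases), n)
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(0)
n = 4096
# Long-range correlated magnitude multiplied by an uncorrelated sign series
magnitude = np.abs(long_range_correlated(n, beta=0.8, rng=rng))
signs = rng.choice([-1.0, 1.0], size=n)
series = magnitude * signs                   # nonlinear (multifractal-like) series
```

The key point is that the two-point correlations come from the magnitude while the sign sequence stays uncorrelated, which is what makes the volatility correlations a nonlinearity measure.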
Multiscale Poincaré plots for visualizing the structure of heartbeat time series.
Henriques, Teresa S; Mariani, Sara; Burykin, Anton; Rodrigues, Filipa; Silva, Tiago F; Goldberger, Ary L
2016-02-09
Poincaré delay maps are widely used in the analysis of cardiac interbeat interval (RR) dynamics. To facilitate visualization of the structure of these time series, we introduce multiscale Poincaré (MSP) plots. Starting with the original RR time series, the method employs a coarse-graining procedure to create a family of time series, each of which represents the system's dynamics in a different time scale. Next, the Poincaré plots are constructed for the original and the coarse-grained time series. Finally, as an optional adjunct, color can be added to each point to represent its normalized frequency. We illustrate the MSP method on simulated Gaussian white and 1/f noise time series. The MSP plots of 1/f noise time series reveal relative conservation of the phase space area over multiple time scales, while those of white noise show a marked reduction in area. We also show how MSP plots can be used to illustrate the loss of complexity when heartbeat time series from healthy subjects are compared with those from patients with chronic (congestive) heart failure syndrome or with atrial fibrillation. This generalized multiscale approach to Poincaré plots may be useful in visualizing other types of time series.
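The two ingredients of the MSP method, coarse-graining and lag-1 Poincaré plotting, are simple to state in code. A minimal sketch on synthetic RR intervals (the variable names and the synthetic data are illustrative, not from the paper):

```python
import numpy as np

def coarse_grain(x, scale):
    """Coarse-graining: averages over non-overlapping windows of length `scale`."""
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def poincare_points(x):
    """(x_i, x_{i+1}) pairs for a standard lag-1 Poincare plot."""
    return np.column_stack([x[:-1], x[1:]])

rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * rng.standard_normal(1000)  # synthetic RR intervals (seconds)
# One Poincare point cloud per time scale; plotting these side by side
# gives the multiscale view described in the abstract.
plots = {scale: poincare_points(coarse_grain(rr, scale)) for scale in (1, 2, 4)}
```

For white noise such as this synthetic series, the point clouds shrink with scale, the behavior the abstract contrasts with 1/f noise.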
Sensitivity analysis of machine-learning models of hydrologic time series
NASA Astrophysics Data System (ADS)
O'Reilly, A. M.
2017-12-01
Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two characteristics generally common among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater use during significant drought periods.
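The two building blocks named above, moving window averages as frequency-band inputs and perturbation-based sensitivities, can be illustrated with a toy sketch. Here `toy_model` is a hypothetical stand-in for a trained MWA-ANN (a fixed linear map), not the study's model; the window lengths and coefficients are assumptions.

```python
import numpy as np

def mwa(x, window):
    """Trailing moving-window average ('valid' alignment)."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

def toy_model(rainfall):
    """Hypothetical stand-in for a trained MWA-ANN: a fixed linear map of
    two frequency-band inputs (7- and 30-step moving averages)."""
    return 2.0 * mwa(rainfall, 7)[-1] + 5.0 * mwa(rainfall, 30)[-1]

rng = np.random.default_rng(3)
rain = rng.exponential(3.0, 365)             # synthetic daily rainfall

# Finite-difference sensitivity: change in response per unit perturbation
eps = 0.1
sens = (toy_model(rain + eps) - toy_model(rain)) / eps
```

Because a uniform perturbation shifts each moving average by exactly `eps`, the sensitivity of this linear toy model is the sum of its coefficients (7.0), which makes the perturbation procedure easy to verify before applying it to an actual trained model.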
Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereb...
Rotation in the Dynamic Factor Modeling of Multivariate Stationary Time Series.
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
2001-01-01
Proposes a special rotation procedure for the exploratory dynamic factor model for stationary multivariate time series. The rotation procedure applies separately to each univariate component series of a q-variate latent factor series and transforms such a component, initially represented as white noise, into a univariate moving-average.…
2016-08-01
the POI. ... Figure 9. Discharge time series for the Miller pump system... 2. In C2, the Miller Canal pump system was implicitly simulated by a time series of outflows assigned to model cells. This flow time series was representative of how the pump system would operate during the storm events simulated in this work (USACE 2004). The outflow time series for the Miller
Watanabe, Hayafumi; Sano, Yukie; Takayasu, Hideki; Takayasu, Misako
2016-11-01
To elucidate the nontrivial empirical statistical properties of fluctuations of a typical nonsteady time series representing the appearance of words in blogs, we investigated approximately 3×10^{9} Japanese blog articles over a period of six years and analyzed some corresponding mathematical models. First, we introduce a solvable nonsteady extension of the random diffusion model, which can be deduced by modeling the behavior of heterogeneous random bloggers. Next, we deduce theoretical expressions for both the temporal and ensemble fluctuation scalings of this model, and demonstrate that these expressions can reproduce all empirical scalings over eight orders of magnitude. Furthermore, we show that the model can reproduce other statistical properties of time series representing the appearance of words in blogs, such as functional forms of the probability density and correlations in the total number of blogs. As an application, we quantify the abnormality of special nationwide events by measuring the fluctuation scalings of 1771 basic adjectives.
Analysing the Image Building Effects of TV Advertisements Using Internet Community Data
NASA Astrophysics Data System (ADS)
Uehara, Hiroshi; Sato, Tadahiko; Yoshida, Kenichi
This paper proposes a method to measure the effects of TV advertisements on Internet bulletin boards. It aims to clarify how viewers' interest in TV advertisements is reflected in their images of the promoted products. Two kinds of time series data are generated by the proposed method. The first represents the time series fluctuation of interest in the TV advertisements. The second represents the time series fluctuation of the images of the products. By analysing the correlations between these two time series, we try to clarify the implicit relationship between viewers' interest in the TV advertisements and their images of the promoted products. By applying the proposed method to an Internet bulletin board that deals with a certain cosmetic brand, we show that the images of the products vary depending on differences in the interest in each TV advertisement.
Defect-Repairable Latent Feature Extraction of Driving Behavior via a Deep Sparse Autoencoder
Taniguchi, Tadahiro; Takenaka, Kazuhito; Bando, Takashi
2018-01-01
Data representing driving behavior, as measured by various sensors installed in a vehicle, are collected as multi-dimensional sensor time-series data. These data often include redundant information, e.g., both the speed of wheels and the engine speed represent the velocity of the vehicle. Redundant information can be expected to complicate the data analysis, e.g., more factors need to be analyzed; even varying the levels of redundancy can influence the results of the analysis. We assume that the measured multi-dimensional sensor time-series data of driving behavior are generated from low-dimensional data shared by the many types of one-dimensional data of which multi-dimensional time-series data are composed. Meanwhile, sensor time-series data may be defective because of sensor failure. Therefore, another important function is to reduce the negative effect of defective data when extracting low-dimensional time-series data. This study proposes a defect-repairable feature extraction method based on a deep sparse autoencoder (DSAE) to extract low-dimensional time-series data. In the experiments, we show that DSAE provides high-performance latent feature extraction for driving behavior, even for defective sensor time-series data. In addition, we show that the negative effect of defects on the driving behavior segmentation task could be reduced using the latent features extracted by DSAE. PMID:29462931
Sun, Tao; Liu, Hongbo; Yu, Hong; Chen, C L Philip
2016-06-28
The central time series crystallizes the common patterns of the set it represents. In this paper, we propose a global constrained degree-pruning dynamic programming (g(dp)²) approach to obtain the central time series through minimizing the dynamic time warping (DTW) distance between two time series. The DTW matching path theory with global constraints is proved theoretically for our degree-pruning strategy, which helps to reduce the time complexity and computational cost. Our approach can achieve the optimal solution between two time series. An approximate method for the central time series of multiple time series [called m_g(dp)²] is presented based on DTW barycenter averaging and our g(dp)² approach, using a hierarchical merging strategy. As illustrated by the experimental results, our approaches provide better within-group sum of squares and robustness than other relevant algorithms.
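The DTW distance that the approach above minimizes follows the standard dynamic-programming recurrence. A minimal Python sketch, without the global constraints or degree-pruning that are the paper's contribution:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW distance
    with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: warping absorbs the repeat
```

Global constraints (e.g., a Sakoe-Chiba band) restrict which `(i, j)` cells are filled, which is what makes the pruning strategies in the paper worthwhile for long series.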
Featureless classification of light curves
NASA Astrophysics Data System (ADS)
Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.
2015-08-01
In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization of the static features, which can be derived directly from the density, e.g. as moments. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs on par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less-complete description by features. The density representation is an upper boundary in terms of information made available to the classifier. Consequently, the predictive power of the proposed classification depends only on the choice of similarity measure and classifier. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.
Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application
NASA Astrophysics Data System (ADS)
Chen, Jinduan; Boccelli, Dominic L.
2018-02-01
Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regionally aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties, with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate with real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
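The double-seasonal idea, short-lag plus daily and weekly seasonal lags for hourly data, can be illustrated with an ordinary least-squares autoregression. This is a hedged stand-in for the study's model: the lag set (1, 24, 168), the synthetic demand series, and the function names are all assumptions for illustration.

```python
import numpy as np

def fit_seasonal_ar(y, lags=(1, 24, 168)):
    """Least-squares AR with a short lag plus daily (24 h) and weekly
    (168 h) seasonal lags; returns [intercept, coef_per_lag...]."""
    p = max(lags)
    X = np.column_stack([y[p - L: len(y) - L] for L in lags])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def forecast_one(y, coef, lags=(1, 24, 168)):
    """One-step-ahead forecast from the most recent lagged values."""
    x = np.concatenate([[1.0], [y[-L] for L in lags]])
    return float(x @ coef)

# Synthetic hourly demand with daily and weekly cycles plus noise
t = np.arange(24 * 7 * 20)
rng = np.random.default_rng(4)
y = (100 + 10 * np.sin(2 * np.pi * t / 24)
     + 5 * np.sin(2 * np.pi * t / 168)
     + rng.standard_normal(len(t)))

coef = fit_seasonal_ar(y)
yhat = forecast_one(y, coef)
```

A full double-seasonal ARIMA additionally models the error structure and yields prediction intervals; the sketch only shows why the two seasonal lags carry most of the forecasting signal for hourly demands.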
Lara, Juan A; Lizcano, David; Pérez, Aurora; Valente, Juan P
2014-10-01
There are now domains where information is recorded over a period of time, leading to sequences of data known as time series. In many domains, like medicine, time series analysis requires focusing on certain regions of interest, known as events, rather than analyzing the whole time series. In this paper, we propose a framework for knowledge discovery in both one-dimensional and multidimensional time series containing events. We show how our approach can be used to classify medical time series by means of a process that identifies events in time series, generates time series reference models of representative events and compares two time series by analyzing the events they have in common. We have applied our framework to time series generated in the areas of electroencephalography (EEG) and stabilometry. Framework performance was evaluated in terms of classification accuracy, and the results confirmed that the proposed schema has potential for classifying EEG and stabilometric signals. The proposed framework is useful for discovering knowledge from medical time series containing events, such as stabilometric and electroencephalographic time series. These results would be equally applicable to other medical domains generating iconographic time series, such as, for example, electrocardiography (ECG). Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.
2008-11-01
We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for the quantitative periodicity analysis of non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is to obtain the set of sine functions embedded in the series analyzed in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for a deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
40 CFR 75.41 - Precision criteria.
Code of Federal Regulations, 2011 CFR
2011-07-01
... for the entire 30- to 90-day period. (9) The owner or operator shall provide two separate time series... time where the vertical axis represents the percentage difference between each paired hourly reading... monitoring system (or reference method) readings versus time where the vertical axis represents hourly...
40 CFR 75.41 - Precision criteria.
Code of Federal Regulations, 2014 CFR
2014-07-01
... for the entire 30- to 90-day period. (9) The owner or operator shall provide two separate time series... time where the vertical axis represents the percentage difference between each paired hourly reading... monitoring system (or reference method) readings versus time where the vertical axis represents hourly...
40 CFR 75.41 - Precision criteria.
Code of Federal Regulations, 2010 CFR
2010-07-01
... for the entire 30- to 90-day period. (9) The owner or operator shall provide two separate time series... time where the vertical axis represents the percentage difference between each paired hourly reading... monitoring system (or reference method) readings versus time where the vertical axis represents hourly...
40 CFR 75.41 - Precision criteria.
Code of Federal Regulations, 2013 CFR
2013-07-01
... for the entire 30- to 90-day period. (9) The owner or operator shall provide two separate time series... time where the vertical axis represents the percentage difference between each paired hourly reading... monitoring system (or reference method) readings versus time where the vertical axis represents hourly...
40 CFR 75.41 - Precision criteria.
Code of Federal Regulations, 2012 CFR
2012-07-01
... for the entire 30- to 90-day period. (9) The owner or operator shall provide two separate time series... time where the vertical axis represents the percentage difference between each paired hourly reading... monitoring system (or reference method) readings versus time where the vertical axis represents hourly...
Characterizing time series: when Granger causality triggers complex networks
NASA Astrophysics Data System (ADS)
Ge, Tian; Cui, Yindong; Lin, Wei; Kurths, Jürgen; Liu, Chong
2012-08-01
In this paper, we propose a new approach to characterize time series with noise perturbations in both the time and frequency domains by combining Granger causality and complex networks. We construct directed and weighted complex networks from time series and use representative network measures to describe their physical and topological properties. Through analyzing the typical dynamical behaviors of some physical models and the MIT-BIH (Massachusetts Institute of Technology-Beth Israel Hospital) human electrocardiogram data sets, we show that the proposed approach is able to capture and characterize various dynamics and has much potential for analyzing real-world time series of rather short length.
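The pairwise building block of such networks is a directed Granger weight between two series. A compact sketch under the common lag-regression formulation (the function name and the log-variance-ratio weight are one standard choice, not necessarily the paper's exact statistic):

```python
import numpy as np

def granger_weight(x, y, lag=2):
    """Directed Granger weight x -> y: log ratio of residual sum of squares
    when predicting y from its own past (restricted) versus its own past
    plus x's past (full). Non-negative in sample; larger means stronger
    directed influence."""
    n = len(y)
    Y = y[lag:]
    own = np.column_stack([y[lag - k: n - k] for k in range(1, lag + 1)])
    full = np.column_stack([own] + [x[lag - k: n - k] for k in range(1, lag + 1)])

    def rss(X):
        X1 = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        r = Y - X1 @ beta
        return float(r @ r)

    return float(np.log(rss(own) / rss(full)))

rng = np.random.default_rng(5)
x = rng.standard_normal(2000)
y = np.empty(2000)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + rng.standard_normal(1999)  # x drives y with lag 1

w_xy = granger_weight(x, y)   # strong directed influence
w_yx = granger_weight(y, x)   # near zero
```

Computing this weight for every ordered pair of series gives the adjacency matrix of a directed, weighted network, to which the node and edge measures mentioned in the abstract can then be applied.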
Bispectral Inversion: The Construction of a Time Series from Its Bispectrum
1988-04-13
take the inverse transform. Since the goal is to compute a time series given its bispectrum, it would also be nice to stay entirely in the frequency...domain and be able to go directly from the bispectrum to the Fourier transform of the time series without the need to inverse transform continuous...the picture. The approximations arise from representing the bicovariance, which is the inverse transform of a continuous function, by the inverse discrete
Validating the Modeling and Simulation of a Generic Tracking Radar
2009-07-28
order Gauss-Markov time series with σGM = 250 units and τGM = 10 s is shown in the top panel of Figure 1. The time series, r̃, can represent any...are shared among the sensors. The total position and velocity estimation errors valid at time index k are given by r̃k|k = r̂k|k − rk and
Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation
NASA Astrophysics Data System (ADS)
Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.
2017-12-01
When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with a spread of 100-600 km) was found. R2 from correlations between point observations and colocated grid box area means determined from satellite data were found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m2 for monthly (annual) time series. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
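An e-folding decorrelation length of the kind reported above can be extracted from sampled R² values by interpolation. A hedged sketch: the synthetic decay curve only roughly mimics the reported numbers (R² ≈ 0.85 at 25 km down to ≈ 0.25 at 500 km), and the function name is illustrative.

```python
import numpy as np

def decorrelation_length(dist_km, r2):
    """Distance at which R^2 falls to 1/e of its value at the shortest
    sampled distance, via linear interpolation. Assumes r2 decreases
    monotonically with distance."""
    target = r2[0] / np.e
    # np.interp needs increasing x, so interpolate distance as a function
    # of the R^2 values reversed into increasing order
    return float(np.interp(target, r2[::-1], dist_km[::-1]))

# Synthetic exponential decay with a 400 km scale
dist = np.array([25.0, 100.0, 200.0, 300.0, 400.0, 500.0])
r2 = 0.9 * np.exp(-dist / 400.0)
length = decorrelation_length(dist, r2)
```

With only six sample distances, the linear interpolation recovers the 400 km e-folding scale (measured from the shortest distance) to within a few kilometres.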
Recurrence plots and recurrence quantification analysis of human motion data
NASA Astrophysics Data System (ADS)
Josiński, Henryk; Michalczuk, Agnieszka; Świtoński, Adam; Szczesna, Agnieszka; Wojciechowski, Konrad
2016-06-01
The authors present exemplary application of recurrence plots, cross recurrence plots and recurrence quantification analysis for the purpose of exploration of experimental time series describing selected aspects of human motion. Time series were extracted from treadmill gait sequences which were recorded in the Human Motion Laboratory (HML) of the Polish-Japanese Academy of Information Technology in Bytom, Poland by means of the Vicon system. Analysis was focused on the time series representing movements of hip, knee, ankle and wrist joints in the sagittal plane.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-20
... survey from 2008-2010 (0.245 kg/tow) is the lowest in the time series. The petitioners state that the... represents the all-time low in the time series for thorny skate, the biomass survey index increased modestly.... Accordingly, we will not initiate a review of the status of thorny skate at this time. FOR FURTHER INFORMATION...
Time series analysis of the developed financial markets' integration using visibility graphs
NASA Astrophysics Data System (ADS)
Zhuang, Enyu; Small, Michael; Feng, Gang
2014-09-01
A time series representing the developed financial markets' segmentation from 1973 to 2012 is studied. The time series reveals an obvious market integration trend. To further uncover the features of this time series, we divide it into seven windows and generate seven visibility graphs. The measuring capabilities of the visibility graphs provide means to quantitatively analyze the original time series. It is found that the important historical incidents that influenced market integration coincide with variations in the measured graphical node degree. Through the measure of neighborhood span, the frequencies of the historical incidents are disclosed. Moreover, it is also found that large "cycles" and significant noise in the time series are linked to large and small communities in the generated visibility graphs. For large cycles, how historical incidents significantly affected market integration is distinguished by density and compactness of the corresponding communities.
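The natural visibility criterion that maps a time series onto a graph can be stated directly in code. This is the textbook O(n³) construction, shown for illustration; the paper then applies network measures (node degree, communities) to such graphs.

```python
def visibility_edges(series):
    """Natural visibility graph: samples i and j are connected if every
    intermediate sample lies strictly below the straight line joining
    (i, series[i]) and (j, series[j])."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i]
                + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

edges = visibility_edges([3.0, 1.0, 2.0, 0.5, 4.0])
```

Adjacent samples are always mutually visible, so the graph contains the path 0-1-2-3-4 plus the long-range edges created by peaks, which is why node degree tracks the large "cycles" mentioned in the abstract.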
Rivera, Diego; Lillo, Mario; Granda, Stalin
2014-12-01
The concept of time stability has been widely used in the design and assessment of soil moisture monitoring networks, as well as in hydrological studies, because it is a technique that allows the identification of particular locations that represent the mean soil moisture of the field. In this work, we assess how time stability calculations change as new information is added, and how they are affected over shorter periods, subsampled from the original time series, containing different amounts of precipitation. In doing so, we defined two experiments to explore the time stability behavior. The first experiment sequentially adds new data to the previous time series to investigate the long-term influence of new data on the results. The second experiment applies a windowing approach, taking sequential subsamples from the entire time series to investigate the influence of short-term changes associated with the precipitation in each window. Our results from an operating network (seven monitoring points, each equipped with four sensors, in a 2-ha blueberry field) show that as information is added to the time series, the location of the most stable point (MSP) changes, and that over moving 21-day windows, most of the variability in soil water content is associated with both the amount and intensity of rainfall. The changes of the MSP over each window depend on the amount of water entering the soil and the previous state of the soil water content. For our case study, the upper strata are proxies for hourly to daily changes in soil water content, while the deeper strata are proxies for medium-range stored water. Thus, different locations and depths are representative of processes at different time scales. This situation must be taken into account when water management depends on soil water content values from fixed locations.
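The time-stability ranking behind the most stable point is commonly computed from relative differences to the field mean. A hedged sketch on synthetic data: the ranking statistic (standard deviation of relative differences) is the usual choice in this literature, while the site layout, biases, and noise level are illustrative assumptions, not the paper's network.

```python
import numpy as np

def most_stable_point(moisture):
    """Time-stability ranking: the most stable location minimizes the
    standard deviation over time of its relative difference from the
    field-mean soil moisture. `moisture` has shape (n_times, n_sites)."""
    field_mean = moisture.mean(axis=1, keepdims=True)
    rel_diff = (moisture - field_mean) / field_mean
    return int(np.argmin(rel_diff.std(axis=0)))

rng = np.random.default_rng(2)
base = 0.25 + 0.05 * np.sin(np.linspace(0, 6, 200))[:, None]  # field wetting/drying
offsets = np.array([0.01, 0.05, -0.03, 0.03])                 # per-site wet/dry bias
data = base + offsets + 0.0005 * rng.standard_normal((200, 4))

msp = most_stable_point(data)  # site whose bias best tracks the field mean
```

Site 0's bias sits closest to the mean bias of the network, so its relative difference barely varies as the field wets and dries; that constancy, not a small bias per se, is what time stability rewards.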
In-situ fault detection apparatus and method for an encased energy storing device
Hagen, Ronald A.; Comte, Christophe; Knudson, Orlin B.; Rosenthal, Brian; Rouillard, Jean
2000-01-01
An apparatus and method for detecting a breach in an electrically insulating surface of an electrically conductive power system enclosure within which a number of series connected energy storing devices are disposed. The energy storing devices disposed in the enclosure are connected to a series power connection. A detector is coupled to the series connection and detects a change of state in a test signal derived from the series connected energy storing devices. The detector detects a breach in the insulating layer of the enclosure by detecting a state change in the test signal from a nominal state to a non-nominal state. A voltage detector detects a state change of the test signals from a nominal state, represented by a voltage of a selected end energy storing device, to a non-nominal state, represented by a voltage that substantially exceeds the voltage of the selected opposing end energy storing device. Alternatively, the detector may comprise a signal generator that produces the test signal as a time-varying or modulated test signal and injects the test signal into the series connection. The detector detects the state change of the time-varying or modulated test signal from a nominal state, represented by a signal substantially equivalent to the test signal, to a non-nominal state, represented by an absence of the test signal.
Highly comparative time-series analysis: the empirical structure of time series and their methods.
Fulcher, Ben D; Little, Max A; Jones, Nick S
2013-06-06
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
Statistical fingerprinting for malware detection and classification
Prowell, Stacy J.; Rathgeb, Christopher T.
2015-09-15
A system detects malware in a computing architecture with an unknown pedigree. The system includes a first computing device having a known pedigree and operating free of malware. The first computing device executes a series of instrumented functions that, when executed, provide a statistical baseline representative of the time it takes a known software application to run on a computing device having a known pedigree. A second computing device executes a second series of instrumented functions that, when executed, provides an actual time representative of the time the known software application runs on the second computing device. The system detects malware when there is a difference in execution times between the first and the second computing devices.
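A minimal sketch of this timing-fingerprint idea: flag a run whose execution time deviates too far from a clean baseline. The numbers and the single z-score statistic are illustrative assumptions; a real system would profile many instrumented functions:

```python
import statistics

def timing_anomaly(baseline, observed, z_thresh=3.0):
    """Flag a run whose execution time deviates from a clean baseline.

    baseline: timings (e.g. ms) gathered on a known-clean machine.
    observed: one timing of the same workload on the device under test.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = (observed - mu) / sigma          # standardized deviation
    return abs(z) > z_thresh, z

# hypothetical baseline timings from the known-pedigree device
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
flag_clean, _ = timing_anomaly(baseline, 10.05)   # consistent with baseline
flag_bad, _ = timing_anomaly(baseline, 12.5)      # suspiciously slow run
```

A consistent timing passes; a markedly slower run (as instrumented malware might cause) is flagged.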
Multivariate time series clustering on geophysical data recorded at Mt. Etna from 1996 to 2003
NASA Astrophysics Data System (ADS)
Di Salvo, Roberto; Montalto, Placido; Nunnari, Giuseppe; Neri, Marco; Puglisi, Giuseppe
2013-02-01
Time series clustering is an important task in data analysis in order to extract implicit, previously unknown, and potentially useful information from a large collection of data. Finding useful similar trends in multivariate time series represents a challenge in several areas, including geophysical and environmental research. While traditional time series analysis methods deal only with univariate time series, multivariate time series analysis is a more suitable approach in research fields where different kinds of data are available. Moreover, conventional time series clustering techniques do not provide the desired results for geophysical datasets, due to the huge amount of data whose sampling rates differ according to the nature of the signal. In this paper, a novel approach to geophysical multivariate time series clustering is proposed using dynamic time series segmentation and Self Organizing Maps techniques. This method allows finding coupling among trends of different geophysical data recorded from monitoring networks at Mt. Etna spanning from 1996 to 2003, when the transition from summit eruptions to flank eruptions occurred. This information can be used to carry out a more careful evaluation of the state of the volcano and to define potential hazard assessment at Mt. Etna.
A harmonic linear dynamical system for prominent ECG feature extraction.
Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc
2014-01-01
Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted during preprocessing of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology effectively support the accuracy and reliability of the clustering results. In particular, the empirical evaluation results of the proposed method demonstrate improved clustering performance compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.
Barry, Dwight; McDonald, Shea
2013-01-01
Climate change could significantly influence seasonal streamflow and water availability in the snowpack-fed watersheds of Washington, USA. Descriptions of snowpack decline often use linear ordinary least squares (OLS) models to quantify this change. However, the region's precipitation is known to be related to climate cycles. If snowpack decline is more closely related to these cycles, an OLS model cannot account for this effect, and thus both descriptions of trends and estimates of decline could be inaccurate. We used intervention analysis to determine whether snow water equivalent (SWE) records from 25 long-term snow courses within the Olympic and Cascade Mountains are more accurately described by OLS (to represent gradual change), stationary (to represent no change), or step-stationary (to represent climate cycling) models. We used Bayesian information-theoretic methods to determine these models' relative likelihood, and we found 90 models that could plausibly describe the statistical structure of the 25 snow courses' time series. Posterior model probabilities of the 29 "most plausible" models ranged from 0.33 to 0.91 (mean = 0.58, s = 0.15). The majority of these time series (55%) were best represented as step-stationary models with a single breakpoint at 1976/77, coinciding with a major shift in the Pacific Decadal Oscillation. However, estimates of SWE decline differed by as much as 35% between statistically plausible models of a single time series. This ambiguity is a critical problem for water management policy. Approaches such as intervention analysis should become part of the basic analytical toolkit for snowpack or other climatic time series data.
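The model comparison above can be sketched in miniature: fit stationary, linear-trend, and step-stationary candidates to a series and compare them with BIC. This is a simplification of the paper's Bayesian information-theoretic machinery, on synthetic data with an assumed known breakpoint:

```python
import numpy as np

def bic(y, yhat, k):
    """Gaussian BIC up to a constant: n*log(RSS/n) + k*log(n)."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def compare_models(y, breakpoint):
    t = np.arange(len(y))
    # stationary: a single mean
    m_stat = np.full(len(y), y.mean())
    # gradual change: OLS linear trend
    b, a = np.polyfit(t, y, 1)
    m_ols = a + b * t
    # step-stationary: separate means before/after the breakpoint
    m_step = np.where(t < breakpoint, y[:breakpoint].mean(), y[breakpoint:].mean())
    return {'stationary': bic(y, m_stat, 2),
            'linear': bic(y, m_ols, 3),
            'step': bic(y, m_step, 3)}

# synthetic "SWE" series with an abrupt regime shift (illustrative only)
rng = np.random.default_rng(1)
y = np.concatenate([np.full(30, 100.0), np.full(30, 80.0)]) + 3 * rng.standard_normal(60)
scores = compare_models(y, 30)
best = min(scores, key=scores.get)
```

For a series generated by a step change, the step-stationary model achieves the lowest BIC, even though an OLS fit would also report a "decline".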
Modelling road accidents: An approach using structural time series
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level model with a seasonal component.
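The local level component of such a structural model can be sketched with a hand-rolled Kalman filter. This shows only the level part (no seasonal component), with the two variances assumed known rather than estimated, and synthetic data in place of the accident counts:

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2):
    """Kalman filter for the local level model:
       y_t = mu_t + eps_t,   mu_{t+1} = mu_t + eta_t.
    Returns the filtered level estimates."""
    n = len(y)
    mu = np.zeros(n)
    a, p = y[0], sigma_eps2 * 1e4          # diffuse-ish initialization (assumption)
    for t in range(n):
        v = y[t] - a                        # one-step prediction error
        f = p + sigma_eps2                  # its variance
        k = p / f                           # Kalman gain
        a = a + k * v                       # filtered level
        mu[t] = a
        p = p * (1 - k) + sigma_eta2        # next-period state variance
    return mu

# synthetic series: slowly drifting level observed with noise (illustrative)
rng = np.random.default_rng(5)
level = np.cumsum(0.1 * rng.standard_normal(300)) + 10
y = level + 0.5 * rng.standard_normal(300)
mu_hat = local_level_filter(y, sigma_eps2=0.25, sigma_eta2=0.01)
```

The filtered level tracks the underlying drift much more closely than the raw observations do; the paper's full model additionally estimates the variances and adds a seasonal term.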
Time series behaviour of the number of Air Asia passengers: A distributional approach
NASA Astrophysics Data System (ADS)
Asrah, Norhaidah Mohd; Djauhari, Maman Abdurachman
2013-09-01
The common practice in time series analysis is to fit a model and then conduct further analysis on the residuals. However, if we know the distributional behavior of the time series, the analyses in model identification, parameter estimation, and model checking are more straightforward. In this paper, we show that the number of Air Asia passengers can be represented as a geometric Brownian motion process. Therefore, instead of using the standard approach in model fitting, we use an appropriate transformation to come up with a stationary, normally distributed and even independent time series. An example in forecasting the number of Air Asia passengers will be given to illustrate the advantages of the method.
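The transformation in question follows from the geometric Brownian motion model itself: if a level series follows GBM, its log-returns are i.i.d. normal, so differencing the logs yields a stationary Gaussian series. A sketch on simulated data (parameters are arbitrary illustrations):

```python
import numpy as np

# GBM: S_t = S_0 * exp((mu - sigma^2/2) t + sigma W_t)
# => log(S_{t+1} / S_t) ~ Normal(mu - sigma^2/2, sigma^2), i.i.d.

rng = np.random.default_rng(2)
n, mu, sigma = 2000, 0.001, 0.02
log_returns_true = rng.normal(mu - sigma**2 / 2, sigma, n)
series = 100 * np.exp(np.cumsum(log_returns_true))   # simulated GBM level series

log_returns = np.diff(np.log(series))                # the stationarizing transform
```

The transformed series recovers the i.i.d. normal increments exactly, so standard Gaussian methods apply directly instead of ARIMA-style residual modelling.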
Self-organising mixture autoregressive model for non-stationary time series modelling.
Ni, He; Yin, Hujun
2008-12-01
Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In such a way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-03
... for certificates representing units of fractional undivided interest in the Series' portfolio (``Units... offering price plus a front-end sales charge. If such a market is not maintained at any time for any Series... Equity Series will not restrict their portfolio investments to ``eligible trust securities.'' D. Capital...
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows smooth or abrupt switching among different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
Drunk driving detection based on classification of multivariate time series.
Li, Zhenlong; Jin, Xue; Zhao, Xiaohua
2015-09-01
This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent multivariate time series. A bottom-up algorithm was then employed to separate multivariate time series. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective. The approach has implications for drunk driving detection.
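The piecewise linear representation via bottom-up segmentation can be sketched as follows: start from many tiny segments and greedily merge the adjacent pair whose combined least-squares line has the smallest error, stopping when every remaining merge would exceed an error budget. The error threshold and toy signal are illustrative assumptions:

```python
import numpy as np

def fit_cost(y, i, j):
    """Squared error of a least-squares line fitted over y[i:j]."""
    if j - i < 3:
        return 0.0
    t = np.arange(i, j)
    b, a = np.polyfit(t, y[i:j], 1)
    return float(np.sum((y[i:j] - (a + b * t)) ** 2))

def bottom_up(y, max_error):
    """Bottom-up piecewise linear segmentation.

    Returns segment boundaries; segment k covers y[bounds[k]:bounds[k+1]].
    """
    bounds = list(range(0, len(y), 2)) + [len(y)]    # fine initial segmentation
    while len(bounds) > 2:
        # cost of merging each adjacent pair of segments
        costs = [fit_cost(y, bounds[k], bounds[k + 2]) for k in range(len(bounds) - 2)]
        k = int(np.argmin(costs))
        if costs[k] > max_error:
            break                                     # no affordable merge remains
        del bounds[k + 1]                             # merge segments k and k+1
    return bounds

# toy "steering-like" signal: a linear rise followed by a linear fall
y = np.concatenate([np.linspace(0, 10, 20), np.linspace(10, 0, 20)])
segments = bottom_up(y, max_error=0.1)
```

The two underlying linear pieces are recovered; in the paper, each segment's slope and duration then become classifier features.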
The time series approach to short term load forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagan, M.T.; Behr, S.M.
The application of time series analysis methods to load forecasting is reviewed. It is shown that Box and Jenkins time series models, in particular, are well suited to this application. The logical and organized procedures for model development using the autocorrelation function make these models particularly attractive. One of the drawbacks of these models is the inability to accurately represent the nonlinear relationship between load and temperature. A simple procedure for overcoming this difficulty is introduced, and several Box and Jenkins models are compared with a forecasting procedure currently used by a utility company.
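The identification step mentioned above rests on the autocorrelation function; a minimal sketch fits an AR model by solving the Yule-Walker equations built from sample autocovariances (synthetic AR(1) data, not load data):

```python
import numpy as np

def yule_walker(y, order):
    """Estimate AR(p) coefficients from sample autocovariances
    (the Box-Jenkins identification/estimation step, in miniature)."""
    y = y - y.mean()
    n = len(y)
    r = np.array([np.dot(y[:n - k], y[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# simulate an AR(1) process: y_t = 0.8 * y_{t-1} + e_t
rng = np.random.default_rng(3)
e = rng.standard_normal(5000)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.8 * y[t - 1] + e[t]

phi = yule_walker(y, 1)   # recovered AR coefficient, close to 0.8
```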
A Comparison of Alternative Approaches to the Analysis of Interrupted Time-Series.
ERIC Educational Resources Information Center
Harrop, John W.; Velicer, Wayne F.
1985-01-01
Computer generated data representative of 16 Auto Regressive Integrated Moving Averages (ARIMA) models were used to compare the results of interrupted time-series analysis using: (1) the known model identification, (2) an assumed (1,0,0) model, and (3) an assumed (3,0,0) model as an approximation to the General Transformation approach. (Author/BW)
Studies in astronomical time series analysis: Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1979-01-01
Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-02
... trading platform at a given time, not both. What trading platform an individual series trades on is... remove) series from the Hybrid Trading Platform we plan to revert back to the general approach of... quotes which represent the aggregate Market-Maker quoting interest in the series for the trading crowd...
Characterization of time series via Rényi complexity-entropy curves
NASA Astrophysics Data System (ADS)
Jauregui, M.; Zunino, L.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.
2018-05-01
One of the most useful tools for distinguishing between chaotic and stochastic time series is the so-called complexity-entropy causality plane. This diagram involves two complexity measures: the Shannon entropy and the statistical complexity. Recently, this idea has been generalized by considering the Tsallis monoparametric generalization of the Shannon entropy, yielding complexity-entropy curves. These curves have proven to enhance the discrimination among different time series related to stochastic and chaotic processes of numerical and experimental nature. Here we further explore these complexity-entropy curves in the context of the Rényi entropy, which is another monoparametric generalization of the Shannon entropy. By combining the Rényi entropy with the proper generalization of the statistical complexity, we associate a parametric curve (the Rényi complexity-entropy curve) with a given time series. We explore this approach in a series of numerical and experimental applications, demonstrating the usefulness of this new technique for time series analysis. We show that the Rényi complexity-entropy curves enable the differentiation among time series of chaotic, stochastic, and periodic nature. In particular, time series of stochastic nature are associated with curves displaying positive curvature in a neighborhood of their initial points, whereas curves related to chaotic phenomena have a negative curvature; finally, periodic time series are represented by vertical straight lines.
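A single point of such a curve can be sketched by computing the Rényi entropy of the ordinal-pattern (permutation) distribution of a series; sweeping the Rényi parameter traces the curve. This is a simplified sketch (it omits the paired statistical-complexity measure), with embedding dimension and test signals chosen for illustration:

```python
import itertools
import math
import numpy as np

def renyi_perm_entropy(x, d=3, alpha=2.0):
    """Normalized Renyi entropy (alpha != 1) of the ordinal-pattern
    distribution of series x with embedding dimension d."""
    counts = {p: 0 for p in itertools.permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(np.argsort(x[i:i + d]))] += 1    # ordinal pattern of window
    total = sum(counts.values())
    p = np.array([c / total for c in counts.values() if c > 0])
    h = math.log(float(np.sum(p ** alpha))) / (1 - alpha)
    return h / math.log(math.factorial(d))            # normalize to [0, 1]

rng = np.random.default_rng(7)
noise = rng.standard_normal(5000)                      # stochastic
periodic = np.sin(2 * np.pi * np.arange(5000) / 20)    # periodic
x = np.empty(5000); x[0] = 0.1
for i in range(1, 5000):
    x[i] = 4 * x[i - 1] * (1 - x[i - 1])               # chaotic logistic map

h_noise = renyi_perm_entropy(noise)
h_chaos = renyi_perm_entropy(x)
h_periodic = renyi_perm_entropy(periodic)
```

As the abstract's classification suggests, stochastic, chaotic, and periodic series separate cleanly: noise is near the maximum, the chaotic map sits below it (it has forbidden ordinal patterns), and the periodic signal is lowest.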
Use of a Principal Components Analysis for the Generation of Daily Time Series.
NASA Astrophysics Data System (ADS)
Dreveton, Christine; Guillou, Yann
2004-07-01
A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges of a generator when used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed to represent correctly the present climate. The results obtained with the generator show that it represents correctly the interannual variance of the observed climate; this is the main result of the work, because one of the main discrepancies of other generators is their inability to represent accurately the observed interannual climate variance, a discrepancy that is not acceptable for an application to weather derivatives. The generator was also tested to calculate conditional probabilities: for example, the knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability of reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.
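The generation scheme can be sketched as: project the observed years onto principal components, simulate each (now approximately independent) component score with a Gaussian of matching variance, and transform back. The toy "years x days" temperature panel below is a hypothetical stand-in for the Paris-Montsouris data:

```python
import numpy as np

def pca_generator(data, n_sims, rng):
    """Generate synthetic daily series via PCA: simulate independent
    component scores, then reconstruct in the original space."""
    mean = data.mean(axis=0)
    X = data - mean
    # principal axes via economy SVD of the centered data
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores_std = S / np.sqrt(len(data) - 1)     # std of each component score
    sims = rng.standard_normal((n_sims, len(scores_std))) * scores_std
    return sims @ Vt + mean

rng = np.random.default_rng(6)
days = np.arange(365)
# 40 hypothetical years with a seasonal cycle plus day-to-day noise
data = 15 + 10 * np.sin(2 * np.pi * days / 365) + 2 * rng.standard_normal((40, 365))
synth = pca_generator(data, 1000, rng)
```

The simulated panel reproduces the observed mean and overall variance, which is the property the paper emphasizes for pricing weather derivatives.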
Metric projection for dynamic multiplex networks.
Jurman, Giuseppe
2016-08-01
Evolving multiplex networks are a powerful model for representing the dynamics over time of different phenomena, such as social networks, power grids, biological pathways. However, exploring the structure of the multiplex network time series is still an open problem. Here we propose a two-step strategy to tackle this problem based on the concept of distance (metric) between networks. Given a multiplex graph, first a network of networks is built for each time step, and then a real-valued time series is obtained from the sequence of (simple) networks by evaluating the distance of each from the first element of the series. The effectiveness of this approach in detecting the occurring changes along the original time series is shown on a synthetic example first, and then on the Gulf dataset of political events.
Ryberg, Karen R.; Vecchia, Aldo V.
2012-01-01
Hydrologic time series data and associated anomalies (multiple components of the original time series representing variability at longer-term and shorter-term time scales) are useful for modeling trends in hydrologic variables, such as streamflow, and for modeling water-quality constituents. An R package, called waterData, has been developed for importing daily hydrologic time series data from U.S. Geological Survey streamgages into the R programming environment. In addition to streamflow, data retrieval may include gage height and continuous physical property data, such as specific conductance, pH, water temperature, turbidity, and dissolved oxygen. The package allows for importing daily hydrologic data into R, plotting the data, fixing common data problems, summarizing the data, and the calculation and graphical presentation of anomalies.
Computer models of social processes: the case of migration.
Beshers, J M
1967-06-01
The demographic model is a program for representing births, deaths, migration, and social mobility as social processes in a non-stationary stochastic process (Markovian). Transition probabilities for each age group are stored and then retrieved at the next appearance of that age cohort. In this way new transition probabilities can be calculated as a function of the old transition probabilities and of two successive distribution vectors. Transition probabilities can be calculated to represent effects of the whole age-by-state distribution at any given time period, too. Such effects as saturation or queuing may be represented by a market mechanism; for example, migration between metropolitan areas can be represented as depending upon job supplies and labor markets. Within metropolitan areas, migration can be represented as invasion and succession processes with tipping points (acceleration curves), and the market device has been extended to represent this phenomenon. Thus, the demographic model makes possible the representation of alternative classes of models of demographic processes. With each class of model one can deduce implied time series (varying parameters within the class) and the output of the several classes can be compared to each other and to outside criteria, such as empirical time series.
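The core mechanism, projecting a population vector through stored transition probabilities, can be sketched with a two-region migration chain. The regions, rates, and fixed (rather than state-dependent) probabilities are illustrative simplifications of the model described:

```python
import numpy as np

# Two-region migration as a Markov chain: rows = current region,
# columns = destination; entries are annual transition probabilities.
P = np.array([[0.9, 0.1],    # stay in metro A / move A -> B
              [0.2, 0.8]])   # move B -> A / stay in metro B

pop = np.array([1000.0, 1000.0])      # initial distribution vector
history = [pop]
for _ in range(50):
    pop = pop @ P                      # one projection step
    history.append(pop)

# With fixed P the distribution converges to the stationary vector of P;
# the paper's model instead recalculates P each period from the evolving
# distribution (market mechanisms, tipping points), making it non-stationary.
```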
ERIC Educational Resources Information Center
St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.
2016-01-01
We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to…
Higher-Order Hurst Signatures: Dynamical Information in Time Series
NASA Astrophysics Data System (ADS)
Ferenbaugh, Willis
2005-10-01
Understanding and comparing time series from different systems requires characteristic measures of the dynamics embedded in the series. The Hurst exponent is a second-order dynamical measure of a time series which grew up within the blossoming fractal world of Mandelbrot. This characteristic measure is directly related to the behavior of the autocorrelation, the power spectrum, and other second-order statistics. And as with these other measures, the Hurst exponent captures and quantifies some but not all of the intrinsic nature of a series. The more elusive characteristics live in the phase spectrum and the higher-order spectra. This research is a continuing quest to (more) fully characterize the dynamical information in time series produced by plasma experiments or models. The goal is to supplement the series information which can be represented by a Hurst exponent, and we would like to develop supplemental techniques in analogy with Hurst's original R/S analysis. These techniques should be another way to plumb the higher-order dynamics.
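Hurst's original R/S analysis, which the abstract takes as its starting point, can be sketched directly: compute the rescaled range of chunks at several sizes and read the exponent off a log-log slope (chunk sizes and test signals are illustrative choices):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent by classical rescaled-range (R/S) analysis."""
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation
            r = dev.max() - dev.min()               # range
            s = chunk.std(ddof=1)                   # scale
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    # R/S ~ size^H, so H is the log-log regression slope
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(4)
h_white = hurst_rs(rng.standard_normal(4096))          # uncorrelated noise, H near 0.5
h_rw = hurst_rs(np.cumsum(rng.standard_normal(4096)))  # strongly persistent series
```

White noise lands near 0.5 (with the well-known small-sample upward bias), while a persistent integrated series scores much higher, which is exactly the second-order distinction the abstract says the Hurst exponent captures.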
Coherence and Chaos Phenomena in Josephson Oscillators for Superconducting Electronics.
1989-01-25
… represents dissipation due to the surface resistance of the superconducting films; y is the spatially uniform bias current normalized to the … represents series loss due to surface resistance of the superconducting films. This approach provides a series of time-dependent Fourier spatial components. … The simplest case is that in which there is no external magnetic field applied to the junction. In this
Makowiec, Danuta; Struzik, Zbigniew; Graff, Beata; Wdowczyk-Szulc, Joanna; Zarczynska-Buchnowiecka, Marta; Gruchala, Marcin; Rynkiewicz, Andrzej
2013-01-01
Network models have been used to capture, represent and analyse characteristics of living organisms and general properties of complex systems. The use of network representations in the characterization of time series complexity is a relatively new but quickly developing branch of time series analysis. In particular, beat-to-beat heart rate variability can be mapped out in a network of RR-increments, which is a directed and weighted graph with vertices representing RR-increments and the edges of which correspond to subsequent increments. We evaluate entropy measures selected from these network representations in records of healthy subjects and heart transplant patients, and provide an interpretation of the results.
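The mapping from a beat-to-beat series to a network of increments can be sketched as follows: symbolize each RR-increment, treat symbols as vertices and successive pairs as weighted directed edges, and take an entropy over the edge weights. The binning threshold and sample RR values are illustrative assumptions:

```python
import math
from collections import Counter

def increment_network_entropy(rr, thresh=0.01):
    """Map an RR-interval series (seconds) to a directed graph of
    symbolized increments and return the Shannon entropy of its
    edge-weight distribution."""
    inc = [b - a for a, b in zip(rr, rr[1:])]
    # symbolize increments: 0 = decrease, 1 = no change, 2 = increase
    sym = [0 if d < -thresh else (2 if d > thresh else 1) for d in inc]
    edges = Counter(zip(sym, sym[1:]))        # vertices: increments; edges: successions
    total = sum(edges.values())
    probs = [c / total for c in edges.values()]
    return -sum(p * math.log(p) for p in probs)

rr = [0.80, 0.82, 0.81, 0.79, 0.80, 0.83, 0.82, 0.80, 0.81, 0.79]
h = increment_network_entropy(rr)
```

A varying rhythm spreads weight over many edges and yields positive entropy, while a perfectly constant series collapses to a single self-loop with zero entropy; the paper evaluates such entropies to separate healthy subjects from transplant patients.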
Analysis of brain patterns using temporal measures
Georgopoulos, Apostolos
2015-08-11
A set of brain data representing a time series of neurophysiologic activity acquired by spatially distributed sensors arranged to detect neural signaling of a brain (such as by the use of magnetoencephalography) is obtained. The set of brain data is processed to obtain a dynamic brain model based on a set of statistically-independent temporal measures, such as partial cross correlations, among groupings of different time series within the set of brain data. The dynamic brain model represents interactions between neural populations of the brain occurring close in time, such as with zero lag, for example. The dynamic brain model can be analyzed to obtain the neurophysiologic assessment of the brain. Data processing techniques may be used to assess structural or neurochemical brain pathologies.
Evaluation of human dynamic balance in Grassmann manifold
NASA Astrophysics Data System (ADS)
Michalczuk, Agnieszka; Wereszczyński, Kamil; Mucha, Romualda; Świtoński, Adam; Josiński, Henryk; Wojciechowski, Konrad
2017-07-01
The authors present an application of Grassmann manifold to the evaluation of human dynamic balance based on the time series representing movements of hip, knee and ankle joints in the sagittal, frontal and transverse planes. Time series were extracted from gait sequences which were recorded in the Human Motion Laboratory (HML) of the Polish-Japanese Academy of Information Technology in Bytom, Poland using the Vicon system.
ImpulseDE: detection of differentially expressed genes in time series data using impulse models.
Sander, Jil; Schultze, Joachim L; Yosef, Nir
2017-03-01
Perturbations in the environment lead to distinctive gene expression changes within a cell. Observed over time, those variations can be characterized by single impulse-like progression patterns. ImpulseDE is an R package suited to capture these patterns in high-throughput time series datasets. By fitting a representative impulse model to each gene, it reports differentially expressed genes across time points from a single time course, or between two time courses from two experiments. To optimize running time, the code uses clustering and multi-threading. By applying ImpulseDE, we demonstrate its power to represent the underlying biology of gene expression in microarray and RNA-Seq data. ImpulseDE is available on Bioconductor (https://bioconductor.org/packages/ImpulseDE/). Supplementary data are available at Bioinformatics online.
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
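The dynamic-programming recursion that DTW-S extends can be sketched as plain DTW; the interpolation, open-end alignment, and significance/FPR machinery described in the abstract are omitted:

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping (DTW) distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])        # local mismatch
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# a time-shifted copy still aligns perfectly, so its DTW distance is zero
d_same = dtw([0.0, 1.0, 2.0], [0.0, 0.0, 1.0, 2.0])
d_diff = dtw([0.0, 1.0], [2.0, 3.0])
```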
The Excel file contains time series data of flow rates and concentrations of alachlor, atrazine, ammonia, total phosphorus, and total suspended solids observed in two watersheds in Indiana from 2002 to 2007. The aggregate time series data corresponding to, or representative of, all these parameters were obtained using a specialized, data-driven technique. The aggregate data are hypothesized in the published paper to represent the overall health of both watersheds with respect to various potential water quality impairments. The time series data for each of the individual water quality parameters were used to compute corresponding risk measures (reliability, resilience, and vulnerability: Rel, Res, and Vul) that are reported in Tables 4 and 5. The aggregation of the risk measures, computed from the aggregate time series and the water quality standards in Table 1, is also reported in Tables 4 and 5 of the published paper. Values under the column heading "uncertainty" report the uncertainties associated with reconstruction of missing records of the water quality parameters; long-term records of the water quality parameters were reconstructed in order to estimate the R-R-V and corresponding aggregate risk measures. This dataset is associated with the following publication: Hoque, Y., S. Tripathi, M. Hantush, and R. Govindaraju. Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty. Ed. Gregorich. Journal of Environmental Quality. American Society of Agronomy, Madison, WI,
Using Evolved Fuzzy Neural Networks for Injury Detection from Isokinetic Curves
NASA Astrophysics Data System (ADS)
Couchet, Jorge; Font, José María; Manrique, Daniel
In this paper we propose an evolutionary fuzzy neural network system for extracting knowledge from a set of time series containing medical information. The series represent isokinetic curves obtained from a group of patients exercising the knee joint on an isokinetic dynamometer. The system has two parts: i) it analyses the time series input in order to generate a simplified model of an isokinetic curve; ii) it applies a grammar-guided genetic program to obtain a knowledge base represented by a fuzzy neural network. Once the knowledge base has been generated, the system is able to perform knee injury detection. The results suggest that evolved fuzzy neural networks perform better than non-evolutionary approaches and have a high accuracy rate during both the training and testing phases. Additionally, they are robust, as the system is able to self-adapt to changes in the problem without human intervention.
Change Point Detection in Correlation Networks
NASA Astrophysics Data System (ADS)
Barnett, Ian; Onnela, Jukka-Pekka
2016-01-01
Many systems of interacting elements can be conceptualized as networks, where network nodes represent the elements and network ties represent interactions between the elements. In systems where the underlying network evolves, it is useful to determine the points in time where the network structure changes significantly as these may correspond to functional change points. We propose a method for detecting change points in correlation networks that, unlike previous change point detection methods designed for time series data, requires minimal distributional assumptions. We investigate the difficulty of change point detection near the boundaries of the time series in correlation networks and study the power of our method and competing methods through simulation. We also show the generalizable nature of the method by applying it to stock price data as well as fMRI data.
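A hedged illustration of the general idea, not the paper's statistic: score each candidate split point by the distance between correlation matrices estimated in windows before and after it, and take the highest-scoring split as the candidate change point.

```python
import numpy as np

def correlation_change_points(X, w):
    """Candidate change point in a multivariate series X (T x p):
    each split t is scored by the Frobenius distance between the
    correlation matrices of the windows just before and just after it."""
    T = X.shape[0]
    scores = {}
    for t in range(w, T - w):
        C1 = np.corrcoef(X[t - w:t].T)   # correlation network "before"
        C2 = np.corrcoef(X[t:t + w].T)   # correlation network "after"
        scores[t] = np.linalg.norm(C1 - C2, ord="fro")
    return max(scores, key=scores.get), scores

# toy example: the correlation between two series flips sign at t = 100
rng = np.random.default_rng(0)
z = rng.standard_normal(200)
x = z + 0.1 * rng.standard_normal(200)
y = np.concatenate([z[:100], -z[100:]]) + 0.1 * rng.standard_normal(200)
t_hat, scores = correlation_change_points(np.column_stack([x, y]), w=40)
```

Note how the window half-width w limits detectability near the boundaries of the series, the difficulty the abstract highlights.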
Model-based Clustering of Categorical Time Series with Multinomial Logit Classification
NASA Astrophysics Data System (ADS)
Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea
2010-09-01
A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
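The building block of Markov chain clustering, the maximum-likelihood estimate of a first-order transition matrix from one categorical series, can be sketched as follows; the mixture and MCMC machinery of the paper are omitted:

```python
import numpy as np

def transition_matrix(seq, k):
    """ML estimate of a first-order transition matrix from a
    categorical series with states 0..k-1: normalized pair counts."""
    counts = np.zeros((k, k))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # rows with no observed transitions stay all-zero
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

P = transition_matrix([0, 1, 0, 1, 1, 2, 0, 1], 3)
```

In the clustering setting, each mixture component has its own transition matrix of this kind, and group membership is inferred from which matrix best explains a series.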
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2016-04-01
Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and, more recently, independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i).
(iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5
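Step (i), forming the complex series via a Hilbert transformation, can be sketched with the standard FFT-based analytic signal; this is a generic construction, not the authors' exact preprocessing:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: the real part is the data, the imaginary
    part its Hilbert transform (the phase/rate information CICA exploits)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)          # spectral weights for the analytic signal
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0    # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

t = np.linspace(0.0, 1.0, 256, endpoint=False)
z = analytic_signal(np.cos(2 * np.pi * 4 * t))
# for a pure cosine at an exact FFT bin, the imaginary part is the
# 90-degree-shifted signal, i.e. sin(2*pi*4*t)
```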
Code of Federal Regulations, 2011 CFR
2011-10-01
... recipient in the market. Present value means the value at the time of calculation of a future payment, or series of future payments discounted by the time value of money as represented by an interest rate or...
On power series representing solutions of the one-dimensional time-independent Schrödinger equation
NASA Astrophysics Data System (ADS)
Trotsenko, N. P.
2017-06-01
For the equation χ″(x) = u(x)χ(x) with infinitely smooth u(x), the general solution χ(x) is found in the form of a power series. The coefficients of the series are expressed via all derivatives u^(m)(y) of the function u(x) at a fixed point y. Examples of solutions for particular functions u(x) are considered.
Mesoscopic Community Structure of Financial Markets Revealed by Price and Sign Fluctuations.
Almog, Assaf; Besamusca, Ferry; MacMahon, Mel; Garlaschelli, Diego
2015-01-01
The mesoscopic organization of complex systems, from financial markets to the brain, is an intermediate between the microscopic dynamics of individual units (stocks or neurons, in the mentioned cases), and the macroscopic dynamics of the system as a whole. The organization is determined by "communities" of units whose dynamics, represented by time series of activity, is more strongly correlated internally than with the rest of the system. Recent studies have shown that the binary projections of various financial and neural time series exhibit nontrivial dynamical features that resemble those of the original data. This implies that a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. Here, we explore whether the binary signatures of multiple time series can replicate the same complex community organization of the financial market, as the original weighted time series. We adopt a method that has been specifically designed to detect communities from cross-correlation matrices of time series data. Our analysis shows that the simpler binary representation leads to a community structure that is almost identical with that obtained using the full weighted representation. These results confirm that binary projections of financial time series contain significant structural information.
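A toy illustration of the sign-projection idea under an assumed one-factor model of returns (not the authors' community-detection method): the cross-correlation structure of the binary sign series tracks that of the full weighted series.

```python
import numpy as np

rng = np.random.default_rng(1)
market = rng.standard_normal(500)                       # common factor
community = [market + rng.standard_normal(500) for _ in range(6)]
lone = [rng.standard_normal(500)]                       # unrelated "stock"
returns = np.column_stack(community + lone)             # shape (500, 7)

signs = np.sign(returns)                                # binary projection
C_full = np.corrcoef(returns.T)
C_sign = np.corrcoef(signs.T)

# the off-diagonal entries of the two matrices rank the same pairs together
iu = np.triu_indices(7, 1)
similarity = np.corrcoef(C_full[iu], C_sign[iu])[0, 1]
```

For jointly Gaussian pairs with correlation rho, the sign correlation is (2/pi)·arcsin(rho), so the community block remains clearly separated from the unrelated series even after binarization.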
A Langevin equation for the rates of currency exchange based on the Markov analysis
NASA Astrophysics Data System (ADS)
Farahpour, F.; Eskandari, Z.; Bahraminasab, A.; Jafari, G. R.; Ghasemi, F.; Sahimi, Muhammad; Reza Rahimi Tabar, M.
2007-11-01
We propose a method for analyzing the data for the rates of exchange of various currencies versus the U.S. dollar. The method analyzes the return time series of the data as a Markov process, and develops an effective equation which reconstructs it. We find that the Markov time scale, i.e., the time scale over which the data are Markov-correlated, is one day for the majority of the daily exchange rates that we analyze. We derive an effective Langevin equation to describe the fluctuations in the rates. The equation contains two quantities, D^(1) and D^(2), representing the drift and diffusion coefficients, respectively. We demonstrate how the two coefficients are estimated directly from the data, without using any assumptions or models for the underlying stochastic time series that represent the daily rates of exchange of various currencies versus the U.S. dollar.
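A minimal sketch of estimating drift and diffusion coefficients directly from data via conditional moments of the increments, assuming the standard Kramers-Moyal definitions rather than the authors' exact procedure. It is tested on a simulated Ornstein-Uhlenbeck process, for which D^(1)(x) = -x and D^(2)(x) = 1/2:

```python
import numpy as np

def kramers_moyal(x, dt, bins=20):
    """Estimate drift D1(x) and diffusion D2(x) from conditional moments
    of the increments, binned on the state variable x."""
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    D1 = np.full(bins, np.nan)
    D2 = np.full(bins, np.nan)
    for b in range(bins):
        m = idx == b
        if m.sum() > 500:                      # only well-populated bins
            D1[b] = dx[m].mean() / dt          # first conditional moment
            D2[b] = (dx[m] ** 2).mean() / (2 * dt)  # second moment
    return centers, D1, D2

# Ornstein-Uhlenbeck: dx = -x dt + dW  =>  D1(x) = -x, D2(x) = 1/2
rng = np.random.default_rng(2)
dt, n = 0.01, 500_000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] - x[i - 1] * dt + np.sqrt(dt) * rng.standard_normal()
centers, D1, D2 = kramers_moyal(x, dt)
```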
Lee, Hyokyeong; Moody-Davis, Asher; Saha, Utsab; Suzuki, Brian M; Asarnow, Daniel; Chen, Steven; Arkin, Michelle; Caffrey, Conor R; Singh, Rahul
2012-01-01
Neglected tropical diseases, especially those caused by helminths, constitute some of the most common infections of the world's poorest people. Development of techniques for automated, high-throughput drug screening against these diseases, especially in whole-organism settings, constitutes one of the great challenges of modern drug discovery. We present a method for enabling high-throughput phenotypic drug screening against diseases caused by helminths with a focus on schistosomiasis. The proposed method allows for a quantitative analysis of the systemic impact of a drug molecule on the pathogen as exhibited by the complex continuum of its phenotypic responses. This method consists of two key parts: first, biological image analysis is employed to automatically monitor and quantify shape-, appearance-, and motion-based phenotypes of the parasites. Next, we represent these phenotypes as time-series and show how to compare, cluster, and quantitatively reason about them using techniques of time-series analysis. We present results on a number of algorithmic issues pertinent to the time-series representation of phenotypes. These include results on appropriate representation of phenotypic time-series, analysis of different time-series similarity measures for comparing phenotypic responses over time, and techniques for clustering such responses by similarity. Finally, we show how these algorithmic techniques can be used for quantifying the complex continuum of phenotypic responses of parasites. An important corollary is the ability of our method to recognize and rigorously group parasites based on the variability of their phenotypic response to different drugs. The methods and results presented in this paper enable automatic and quantitative scoring of high-throughput phenotypic screens focused on helmintic diseases. Furthermore, these methods allow us to analyze and stratify parasites based on their phenotypic response to drugs. 
Together, these advancements represent a significant breakthrough for the process of drug discovery against schistosomiasis in particular and can be extended to other helmintic diseases which together afflict a large part of humankind.
PMID:22369037
Multiscale synchrony behaviors of paired financial time series by 3D multi-continuum percolation
NASA Astrophysics Data System (ADS)
Wang, M.; Wang, J.; Wang, B. T.
2018-02-01
Multiscale synchrony behaviors and nonlinear dynamics of paired financial time series are investigated, in an attempt to study the cross-correlation relationships between two stock markets. A random stock price model is developed based on a new system, the three-dimensional (3D) multi-continuum percolation system, which is utilized to imitate the formation mechanism of price dynamics and explain the nonlinear behaviors found in financial time series. We assume that price fluctuations are caused by the spread of investment information. A cluster of the 3D multi-continuum percolation represents a cluster of investors who share the same investment attitude. In this paper, we focus on the paired return series, the paired volatility series, and the paired intrinsic mode functions decomposed by empirical mode decomposition. A new cross recurrence quantification analysis is put forward, combined with multiscale cross-sample entropy, to investigate the multiscale synchrony of these paired series from the proposed model. The corresponding analysis is also carried out for two Chinese stock markets for comparison.
Wavelet-based tracking of bacteria in unreconstructed off-axis holograms.
Marin, Zach; Wallace, J Kent; Nadeau, Jay; Khalil, Andre
2018-03-01
We propose an automated wavelet-based method of tracking particles in unreconstructed off-axis holograms to provide rough estimates of the presence of motion and particle trajectories in digital holographic microscopy (DHM) time series. The wavelet transform modulus maxima segmentation method is adapted and tailored to extract Airy-like diffraction disks, which represent bacteria, from DHM time series. In this exploratory analysis, the method shows potential for estimating bacterial tracks in low-particle-density time series, based on a preliminary analysis of both living and dead Serratia marcescens, and for rapidly providing a single-bit answer to whether a sample chamber contains living or dead microbes or is empty.
Modeling turbidity and flow at daily steps in karst using ARIMA/ARFIMA-GARCH error models
NASA Astrophysics Data System (ADS)
Massei, N.
2013-12-01
Hydrological and physico-chemical variations recorded at karst springs usually reflect highly non-linear processes, and the corresponding time series are then very often also highly non-linear. Among others, turbidity, an important parameter regarding water quality and management, is a very complex response of karst systems to rain events, involving direct transfer of particles from point-source recharge as well as resuspension of particles previously deposited and stored within the system. For those reasons, turbidity has not been well handled in karst hydrological models so far. Most of the time, the modeling approaches involve stochastic linear models such as ARIMA-type models and their derivatives (ARMA, ARMAX, ARIMAX, ARFIMA...). Yet, linear models usually fail to represent the whole (stochastic) process variability well, and their residuals still contain useful information that can be used either to understand the whole variability or to enhance short-term predictability and forecasting. Model residuals are actually not i.i.d., which can be identified by the fact that squared residuals still present clear and significant serial correlation. Indeed, high (low) amplitudes are followed in time by high (low) amplitudes, which can be seen in residual time series as periods during which amplitudes are higher (lower) than the mean amplitude. This is known as the ARCH effect (AutoRegressive Conditional Heteroskedasticity), and the corresponding non-linear process affecting the residuals of a linear model can be modeled using ARCH or generalized ARCH (GARCH) non-linear modeling, approaches that are very well known in econometrics. Here we investigated the capability of ARIMA-GARCH error models to represent a ~20-yr daily turbidity time series recorded at a karst spring used for the water supply of the city of Le Havre (Upper Normandy, France).
ARIMA and ARFIMA models were used to represent the mean behavior of the time series, and the residuals clearly appeared to present a pronounced ARCH effect, as confirmed by Ljung-Box and McLeod-Li tests. We then identified and fitted GARCH models to the residuals of the ARIMA and ARFIMA models in order to model the conditional variance and volatility of the turbidity time series. The results eventually showed that serial correlation was successfully removed in the standardized residuals of the GARCH model, and hence that the ARIMA-GARCH error model appeared consistent for modeling such time series. The approach finally improved short-term (e.g., a few steps ahead) turbidity forecasting.
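The ARCH-effect screen mentioned above, a Ljung-Box test on squared residuals, can be sketched as follows. The Q statistic is the standard one; the toy ARCH(1) residual process and its parameter values are illustrative assumptions:

```python
import numpy as np

def ljung_box(x, lags):
    """Ljung-Box Q statistic: large values reject 'no serial correlation
    up to the given lag'. Applied to squared residuals, this is the
    standard screen for an ARCH effect."""
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    q = 0.0
    for k in range(1, lags + 1):
        rk = np.sum(xc[k:] * xc[:-k]) / denom  # lag-k autocorrelation
        q += rk ** 2 / (n - k)
    return n * (n + 2) * q

# toy ARCH(1) residuals: the variance depends on the previous shock
# (alpha = 0.5 keeps the fourth moment finite, since 3*alpha**2 < 1)
rng = np.random.default_rng(3)
e = np.empty(2000)
e[0] = 0.0
for t in range(1, 2000):
    e[t] = np.sqrt(0.3 + 0.5 * e[t - 1] ** 2) * rng.standard_normal()

q_levels = ljung_box(e, 10)        # raw residuals: little correlation
q_squares = ljung_box(e ** 2, 10)  # squared residuals: strong correlation
```

The contrast between the two statistics is exactly the diagnostic pattern the abstract describes: uncorrelated residuals whose squares are strongly serially correlated.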
Bendel, David; Beck, Ferdinand; Dittmer, Ulrich
2013-01-01
In the presented study, climate change impacts on combined sewer overflows (CSOs) in Baden-Wuerttemberg, Southern Germany, were assessed based on continuous long-term rainfall-runoff simulations. As input data, synthetic rainfall time series were used. The applied precipitation generator NiedSim-Klima accounts for climate change effects on precipitation patterns. Time series for the past (1961-1990) and future (2041-2050) were generated for various locations. Comparing the simulated CSO activity of the two periods, we observe significantly higher overflow frequencies for the future. Changes in overflow volume and overflow duration depend on the type of overflow structure. Both values will increase at simple CSO structures that merely divide the flow, whereas they will decrease when the CSO structure is combined with a storage tank. However, there is a wide variation between the results of different precipitation time series (representative of different locations).
Ratio-based lengths of intervals to improve fuzzy time series forecasting.
Huarng, Kunhuang; Yu, Tiffany Hui-Kuang
2006-04-01
The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
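A minimal sketch of ratio-based interval boundaries, assuming the simplest variant in which each interval's length is a fixed percentage of its lower bound (the paper's exact percentile-based calibration is omitted):

```python
def ratio_intervals(lo, hi, ratio):
    """Interval boundaries whose lengths grow by a fixed ratio, so that
    larger observations (e.g. under algebraic or exponential growth)
    fall into proportionally wider intervals."""
    bounds = [lo]
    while bounds[-1] < hi:
        bounds.append(bounds[-1] * (1 + ratio))
    return bounds

# e.g. 10% intervals spanning an enrollment-like range of values
b = ratio_intervals(10_000, 20_000, 0.10)
```

Contrast with equal-length intervals: here the relative resolution is constant across the range, which is the property the abstract argues better represents the spacing among observations.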
5 CFR 330.205 - Agency RPL applications.
Code of Federal Regulations, 2013 CFR
2013-01-01
... register for positions at the same representative rate and work schedule (full-time, part-time, seasonal... grades or pay levels, appointment type (permanent or time-limited), occupations (e.g., position classification series or career groups), and minimum number of hours of work per week, as applicable. ...
Spatio-temporal representativeness of ground-based downward solar radiation measurements
NASA Astrophysics Data System (ADS)
Schwarz, Matthias; Wild, Martin; Folini, Doris
2017-04-01
Surface solar radiation (SSR) is most directly observed with ground-based pyranometer measurements. Besides measurement uncertainties arising from the pyranometer instrument itself, errors attributed to the limited spatial representativeness of single-site observations for their large-scale surroundings also have to be taken into account when using such measurements for energy balance studies. In this study, the spatial representativeness of 157 homogeneous European downward surface solar radiation time series from the Global Energy Balance Archive (GEBA) and the Baseline Surface Radiation Network (BSRN) was examined for the period 1983-2015 by using the high resolution (0.05°) surface solar radiation data set from the Satellite Application Facility on Climate Monitoring (CM-SAF SARAH) as a proxy for the spatiotemporal variability of SSR. By correlating deseasonalized monthly SSR time series from surface observations against single collocated satellite-derived SSR time series, a mean spatial correlation pattern was calculated and validated against purely observational patterns. Correlations generally decrease with increasing distance from the station, with high correlations (R2 = 0.7) in proximity to the observational sites (±0.5°). When correlating surface observations against time series from spatially averaged satellite-derived SSR data (thereby simulating coarser and coarser grids), very high correspondence between sites and the collocated pixels was found for pixel sizes up to several degrees. Moreover, special focus was put on the quantification of errors which arise in conjunction with spatial sampling when estimating the temporal variability and trends for a larger region from a single surface observation site. For 15-year trends on a 1° grid, errors due to spatial sampling on the order of half of the measurement uncertainty for monthly mean values were found.
NASA Astrophysics Data System (ADS)
Chorozoglou, D.; Kugiumtzis, D.; Papadimitriou, E.
2018-06-01
Seismic hazard assessment in the area of Greece is attempted by studying the structure of the earthquake network, e.g., whether it is small-world or random. In this network, a node represents a seismic zone in the study area and a connection between two nodes is given by the correlation of the seismic activity of the two zones. To investigate the network structure, and particularly the small-world property, the earthquake correlation network is compared with randomized ones. Simulations on multivariate time series of different lengths and numbers of variables show that, for the construction of randomized networks, randomizing the time series performs better than methods that directly randomize the original network connections. Based on the appropriate randomization method, the network approach is applied to time series of earthquakes that occurred between main shocks in the territory of Greece spanning the period 1999-2015. The characterization of networks on sliding time windows revealed that small-world structure emerges in the last time interval, shortly before the main shock.
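A hedged sketch of comparing an observed correlation network against time-series-randomized surrogates. Edge counts serve as a simple stand-in for the paper's network measures, and the threshold, series length, and factor structure are illustrative assumptions:

```python
import numpy as np

def corr_network(X, thresh):
    """Adjacency matrix: nodes i and j are linked when the correlation
    of their activity series exceeds the threshold in magnitude."""
    C = np.corrcoef(X.T)
    A = (np.abs(C) > thresh).astype(int)
    np.fill_diagonal(A, 0)
    return A

def surrogate_edges(X, thresh, n_surr, rng):
    """Edge counts after independently shuffling each series:
    the time-series randomization used as the null model."""
    counts = []
    for _ in range(n_surr):
        Xs = np.column_stack([rng.permutation(X[:, j])
                              for j in range(X.shape[1])])
        counts.append(corr_network(Xs, thresh).sum() // 2)
    return counts

rng = np.random.default_rng(4)
common = rng.standard_normal(300)                      # shared activity
X = np.column_stack([common + 0.8 * rng.standard_normal(300)
                     for _ in range(8)])               # 8 correlated "zones"
edges = corr_network(X, 0.3).sum() // 2
null = surrogate_edges(X, 0.3, 50, rng)
```

An observed edge count far outside the surrogate distribution indicates structure that cannot be explained by the marginal properties of the individual series.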
Visualizing Rank Time Series of Wikipedia Top-Viewed Pages.
Xia, Jing; Hou, Yumeng; Chen, Yingjie Victor; Qian, Zhenyu Cheryl; Ebert, David S; Chen, Wei
2017-01-01
Visual clutter is a common challenge when visualizing large rank time series data. WikiTopReader, a reader of Wikipedia page rank, lets users explore connections among top-viewed pages by connecting page-rank behaviors with page-link relations. Such a combination enhances the unweighted Wikipedia page-link network and focuses attention on the page of interest. A set of user evaluations shows that the system effectively represents evolving ranking patterns and page-wise correlation.
On the Prony series representation of stretched exponential relaxation
NASA Astrophysics Data System (ADS)
Mauro, John C.; Mauro, Yihong Z.
2018-09-01
Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
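A minimal sketch of fitting a Prony series to the stretched exponential by linear least squares over log-spaced relaxation times. The time grid, the number of terms, and the unconstrained least-squares step are illustrative choices, and β = 3/7 is one of the critical exponents discussed for the diffusion-trap model:

```python
import numpy as np

def prony_fit(beta, n_terms=8, t=None):
    """Least-squares Prony weights w_k such that
    sum_k w_k * exp(-t / tau_k) approximates exp(-t**beta)
    on the sampled grid, with log-spaced relaxation times tau_k."""
    if t is None:
        t = np.logspace(-2, 2, 400)
    target = np.exp(-t ** beta)                 # stretched exponential
    taus = np.logspace(-2, 2, n_terms)          # relaxation-time grid
    A = np.exp(-t[:, None] / taus[None, :])     # simple-exponential basis
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return taus, w, np.max(np.abs(A @ w - target))

taus, w, err = prony_fit(beta=3 / 7)
```

With roughly two terms per decade the series captures the "fat tail" well on the sampled range, though (as the abstract notes) no finite Prony series can reproduce the divergent first derivative at zero time.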
NASA Astrophysics Data System (ADS)
Donges, Jonathan F.; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik V.; Marwan, Norbert; Dijkstra, Henk A.; Kurths, Jürgen
2015-11-01
We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology.
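One of the constructions pyunicorn provides, the natural visibility graph, can be sketched directly; this is the generic algorithm, not pyunicorn's API:

```python
def visibility_graph(x):
    """Natural visibility graph: nodes are time points; i and j are linked
    when the straight line between (i, x_i) and (j, x_j) passes strictly
    above every sample in between."""
    n = len(x)
    edges = set()
    for i in range(n - 1):
        for j in range(i + 1, n):
            visible = all(
                x[k] < x[i] + (x[j] - x[i]) * (k - i) / (j - i)
                for k in range(i + 1, j))
            if visible:
                edges.add((i, j))
    return edges

# the peak at index 1 "sees" past the dip at index 2 to the peak at index 3
E = visibility_graph([1.0, 3.0, 2.0, 4.0])
```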
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Xiaoran, E-mail: sxr0806@gmail.com; School of Mathematics and Statistics, The University of Western Australia, Crawley WA 6009; Small, Michael, E-mail: michael.small@uwa.edu.au
In this work, we propose a novel method to transform a time series into a weighted and directed network. For a given time series, we first generate a set of segments via a sliding window, and then use a doubly symbolic scheme to characterize every windowed segment by combining absolute amplitude information with an ordinal pattern characterization. Based on this construction, a network can be directly constructed from the given time series: segments corresponding to different symbol-pairs are mapped to network nodes and the temporal succession between nodes is represented by directed links. With this conversion, the dynamics underlying the time series are encoded into the network structure. We illustrate the potential of our networks with a well-studied dynamical model as a benchmark example. Results show that network measures for characterizing global properties can detect the dynamical transitions in the underlying system. Moreover, we employ a random walk algorithm to sample loops in our networks, and find that time series with different dynamics exhibit distinct cycle structure. That is, the relative prevalence of loops with different lengths can be used to identify the underlying dynamics.
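A minimal sketch of the doubly symbolic mapping follows; the window length, the number of amplitude bins, and the choice of the window mean as the amplitude summary are illustrative assumptions, not the paper's exact scheme.

```python
from collections import Counter

def ordinal_pattern(window):
    """Rank permutation of the window, e.g. (0, 1, 2) for a monotone rise."""
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def amplitude_bin(window, lo, hi, nbins):
    """Coarse absolute-amplitude class of the window mean."""
    k = int((sum(window) / len(window) - lo) / (hi - lo + 1e-12) * nbins)
    return min(max(k, 0), nbins - 1)

def series_to_network(series, w=3, nbins=2):
    """Map a series to directed, weighted edges between (pattern, bin) nodes."""
    lo, hi = min(series), max(series)
    wins = [series[i:i + w] for i in range(len(series) - w + 1)]
    nodes = [(ordinal_pattern(win), amplitude_bin(win, lo, hi, nbins))
             for win in wins]
    # Temporal succession between consecutive symbols becomes a directed link.
    edges = Counter(zip(nodes, nodes[1:]))
    return nodes, edges

nodes, edges = series_to_network([1, 2, 3, 4, 5, 6])
```

For the monotone toy series every window carries the same ordinal pattern, so the network collapses onto a few amplitude-distinguished nodes — exactly the kind of degeneracy the amplitude component is meant to break.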
Fontana, Fabio; Rixen, Christian; Jonas, Tobias; Aberegg, Gabriel; Wunderle, Stefan
2008-01-01
This study evaluates the ability to track grassland growth phenology in the Swiss Alps with NOAA-16 Advanced Very High Resolution Radiometer (AVHRR) Normalized Difference Vegetation Index (NDVI) time series. Three growth parameters from 15 alpine and subalpine grassland sites were investigated between 2001 and 2005: Melt-Out (MO), Start Of Growth (SOG), and End Of Growth (EOG). We tried to estimate these phenological dates from yearly NDVI time series by identifying the dates at which certain fractions (thresholds) of the maximum annual NDVI amplitude were first crossed. For this purpose, the NDVI time series were smoothed using two commonly used approaches (Fourier adjustment or, alternatively, Savitzky-Golay filtering). Moreover, AVHRR NDVI time series were compared against data from the newer generation sensors SPOT VEGETATION and TERRA MODIS. All remote sensing NDVI time series were highly correlated with single point ground measurements and therefore accurately represented growth dynamics of alpine grassland. The newer generation sensors VGT and MODIS performed better than AVHRR; however, the differences were minor. Thresholds for the determination of MO, SOG, and EOG were similar across sensors and smoothing methods, which demonstrated the robustness of the results. For our purpose, the Fourier adjustment algorithm created better NDVI time series than the Savitzky-Golay filter, since the latter appeared to be more sensitive to noisy NDVI time series. Findings show that the application of various thresholds to NDVI time series allows the observation of the temporal progression of vegetation growth at the selected sites with high consistency. Hence, we believe that our study helps to better understand large-scale vegetation growth dynamics above the tree line in the European Alps. PMID:27879852
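The threshold approach can be sketched in a few lines. A crude moving average stands in here for the Fourier/Savitzky-Golay smoothing used in the study, and the NDVI numbers are synthetic:

```python
def moving_average(series, k=3):
    """Crude centered smoother standing in for the Savitzky-Golay filter."""
    half = k // 2
    return [sum(series[max(0, i - half):i + half + 1]) /
            len(series[max(0, i - half):i + half + 1])
            for i in range(len(series))]

def threshold_crossing_day(days, ndvi, frac):
    """First day on which NDVI exceeds a fraction of its annual amplitude."""
    lo, hi = min(ndvi), max(ndvi)
    thr = lo + frac * (hi - lo)
    for d, v in zip(days, ndvi):
        if v >= thr:
            return d
    return None

# Synthetic green-up curve: amplitude 0.7, so the 50% threshold is 0.45.
ndvi = [0.10, 0.10, 0.20, 0.40, 0.60, 0.80, 0.80]
days = list(range(1, 8))
sog = threshold_crossing_day(days, moving_average(ndvi, 1), 0.5)
```

Different fractions of the amplitude give MO-, SOG-, or EOG-like dates in the same way.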
2017-09-01
via visual sensors onboard the UAV. Both the hardware and software architecture design are discussed at length. Then, a series of tests that were conducted... and representing the change in time. Horn and Schunck (1981) further simplified this equation by taking the Taylor series
Recursive Bayesian recurrent neural networks for time-series modeling.
Mirikitani, Derrick T; Nikolaev, Nikolay
2010-02-01
This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.
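The full recursive Bayesian Levenberg-Marquardt algorithm is beyond a short sketch, but its core pattern — sequentially updating both the weights and a covariance matrix as each observation arrives — is shared by ordinary recursive least squares for a linear model. The following is a hedged linear analogue, not the paper's method:

```python
def rls_step(w, P, x, y, lam=0.99):
    """One recursive least-squares update of weights w and covariance P.

    lam is a forgetting factor; lam = 1 weights all past data equally.
    """
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [Px[i] / denom for i in range(n)]            # gain vector
    err = y - sum(w[i] * x[i] for i in range(n))     # innovation
    w = [w[i] + k[i] * err for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return w, P

# Stream of noiseless (x, y) pairs with y = 2x; w should converge to 2.
w, P = [0.0], [[100.0]]
for t in range(1, 21):
    w, P = rls_step(w, P, [float(t)], 2.0 * t)
```

The RNN algorithm in the paper replaces the linear regressor with the network Jacobian and adds hyperparameter adaptation, but the recursive update skeleton is the same.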
An approach to checking case-crossover analyses based on equivalence with time-series methods.
Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L
2008-03-01
The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, whether time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.
Vis-A-Plan /visualize a plan/ management technique provides performance-time scale
NASA Technical Reports Server (NTRS)
Ranck, N. H.
1967-01-01
Vis-A-Plan is a bar-charting technique for representing and evaluating project activities on a performance-time basis. This rectilinear method presents the logic diagram of a project as a series of horizontal time bars. It may be used supplementary to PERT or independently.
Investigating parameters participating in the infant respiratory control system attractor.
Terrill, Philip I; Wilson, Stephen J; Suresh, Sadasivam; Cooper, David M; Dakin, Carolyn
2008-01-01
Theoretically, any participating parameter in a non-linear system represents the dynamics of the whole system. Takens' time-delay embedding theorem provides the fundamental basis for performing non-linear analysis on physiological time-series data. In practice, only one measurable parameter is required to convey an accurate representation of the system dynamics. In this paper, the infant respiratory control system is represented using three variables: a digitally sampled respiratory inductive plethysmography waveform, and the derived parameters tidal volume and inter-breath interval (IBI) time series data. For 14 healthy infants, these data streams were analysed using recurrence plot analysis across one night of sleep. The measured attractor sizes of these variables followed the same qualitative trends across the night's study. Results suggest that the attractor size measures of the derived IBI and tidal volume are representative surrogates for the raw respiratory waveform. The extent to which the relative attractor sizes of IBI and tidal volume remain constant through changing sleep state could potentially be used to quantify pathology, or maturation of breathing control.
Identification of market trends with string and D2-brane maps
NASA Astrophysics Data System (ADS)
Bartoš, Erik; Pinčák, Richard
2017-08-01
The multidimensional string objects are introduced as a new alternative for the application of string models to time series forecasting in trading on financial markets. The objects are represented by an open string with 2 endpoints and a D2-brane, which are continuous enhancements of the 1-endpoint open string model. We show how the new object properties can change the statistics of the predictors, which makes them candidates for modeling a wide range of time series systems. String angular momentum is proposed as another tool, in addition to the historical volatility, for analyzing the stability of currency rates. To show the reliability of our approach to time series forecasting with string models, we present the results of real demo simulations for four currency exchange pairs.
Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Hou, Zhangshuan; Meng, Da
2016-07-17
In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches mainly focus on maintaining the inter-dependency between multiple geographically related areas. These approaches are applied to cross-correlated load time series as well as their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
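The simplest ingredient of the ARIMA component above is an autoregressive fit. As a hedged sketch (zero-mean AR(1) only, estimated by conditional least squares on synthetic data — far simpler than the paper's multi-area pipeline):

```python
def fit_ar1(series):
    """Least-squares estimate of phi in x_t = phi * x_{t-1} + e_t (zero mean)."""
    num = sum(series[t - 1] * series[t] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def forecast_ar1(last, phi, steps):
    """Iterated point forecast from the last observed value."""
    out = []
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return out

# Noiseless geometric-decay series: the estimate recovers phi = 0.5 exactly.
x = [8.0 * 0.5 ** t for t in range(10)]
phi = fit_ar1(x)
```

On real load data the residuals of such a fit, not the fit itself, carry the uncertainty that the study models with sequential Gaussian simulation.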
User's manual for the Graphical Constituent Loading Analysis System (GCLAS)
Koltun, G.F.; Eberle, Michael; Gray, J.R.; Glysson, G.D.
2006-01-01
This manual describes the Graphical Constituent Loading Analysis System (GCLAS), an interactive cross-platform program for computing the mass (load) and average concentration of a constituent that is transported in stream water over a period of time. GCLAS computes loads as a function of an equal-interval streamflow time series and an equal- or unequal-interval time series of constituent concentrations. The constituent-concentration time series may be composed of measured concentrations or a combination of measured and estimated concentrations. GCLAS is not intended for use in situations where concentration data (or an appropriate surrogate) are collected infrequently or where an appreciable number of the concentration values are censored. It is assumed that the constituent-concentration time series used by GCLAS adequately represents the true time-varying concentration. Commonly, measured constituent concentrations are collected at a frequency that is less than ideal (from a load-computation standpoint), so estimated concentrations must be inserted in the time series to better approximate the expected chemograph. GCLAS provides tools to facilitate estimation and entry of instantaneous concentrations for that purpose. Water-quality samples collected for load computation frequently are collected in a single vertical or at a single point in a stream cross section. Several factors, some of which may vary as a function of time and (or) streamflow, can affect whether the sample concentrations are representative of the mean concentration in the cross section. GCLAS provides tools to aid the analyst in assessing whether concentrations in samples collected in a single vertical or at a single point in a stream cross section exhibit systematic bias with respect to the mean concentrations. In cases where bias is evident, the analyst can construct coefficient relations in GCLAS to reduce or eliminate the observed bias.
GCLAS can export load and concentration data in formats suitable for entry into the U.S. Geological Survey's National Water Information System. GCLAS can also import and export data in formats that are compatible with various commonly used spreadsheet and statistics programs.
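At its core, a load computation of the kind GCLAS performs is a time integral of streamflow times concentration. A minimal sketch (trapezoidal rule, unit conversion deliberately left to the caller, unlike the real program):

```python
def constituent_load(times, flow, conc):
    """Trapezoidal integral of Q(t) * C(t) over the record.

    times, flow, and conc are parallel lists; units are the caller's problem
    (e.g. seconds, m^3/s, and mg/L would need a mg -> kg factor).
    """
    total = 0.0
    for i in range(1, len(times)):
        f0 = flow[i - 1] * conc[i - 1]
        f1 = flow[i] * conc[i]
        total += 0.5 * (f0 + f1) * (times[i] - times[i - 1])
    return total

# Constant flow 2 and concentration 3 over 10 time units: load = 6 * 10.
load = constituent_load([0.0, 5.0, 10.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0])
```

GCLAS layers interactive chemograph estimation and bias coefficients on top of exactly this kind of integration.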
Mesoscopic Community Structure of Financial Markets Revealed by Price and Sign Fluctuations
Almog, Assaf; Besamusca, Ferry; MacMahon, Mel; Garlaschelli, Diego
2015-01-01
The mesoscopic organization of complex systems, from financial markets to the brain, is intermediate between the microscopic dynamics of individual units (stocks or neurons, in the mentioned cases), and the macroscopic dynamics of the system as a whole. The organization is determined by “communities” of units whose dynamics, represented by time series of activity, are more strongly correlated internally than with the rest of the system. Recent studies have shown that the binary projections of various financial and neural time series exhibit nontrivial dynamical features that resemble those of the original data. This implies that a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. Here, we explore whether the binary signatures of multiple time series can replicate the same complex community organization of the financial market as the original weighted time series. We adopt a method that has been specifically designed to detect communities from cross-correlation matrices of time series data. Our analysis shows that the simpler binary representation leads to a community structure that is almost identical with that obtained using the full weighted representation. These results confirm that binary projections of financial time series contain significant structural information. PMID:26226226
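The binary projection itself is a one-liner: each increment is reduced to its sign. A sketch (the tie-breaking convention for zero increments is an assumption):

```python
def binary_projection(series):
    """Map a price series to the +/-1 signs of its increments.

    Zero increments are mapped to +1 by convention here; the community
    detection step in the paper would then act on correlations of these
    sign series rather than of the raw returns.
    """
    return [1 if b - a >= 0 else -1 for a, b in zip(series, series[1:])]

signs = binary_projection([1.0, 3.0, 2.0, 2.0, 5.0])
```

Cross-correlating such sign series across many assets yields the binary correlation matrix whose community structure the study compares against the weighted one.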
Acosta-Mesa, Héctor-Gabriel; Rechy-Ramírez, Fernando; Mezura-Montes, Efrén; Cruz-Ramírez, Nicandro; Hernández Jiménez, Rodolfo
2014-06-01
In this work, we present a novel application of time series discretization using evolutionary programming for the classification of precancerous cervical lesions. The approach optimizes the number of intervals in which the length and amplitude of the time series should be compressed, preserving the important information for classification purposes. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy regarding the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. This discretization approach is evaluated using a time series data based on temporal patterns observed during a classical test used in cervical cancer detection; the classification accuracy reached by our method is compared with the well-known time series discretization algorithm SAX and the dimensionality reduction method PCA. Statistical analysis of the classification accuracy shows that the discrete representation is as efficient as the complete raw representation for the present application, reducing the dimensionality of the time series length by 97%. This representation is also very competitive in terms of classification accuracy when compared with similar approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
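For reference, the SAX baseline mentioned above discretizes a series by z-normalizing it, reducing it to segment means (piecewise aggregate approximation), and mapping each mean to a symbol via breakpoints. A simplified sketch for a 4-letter alphabet (the breakpoints below approximate the standard Gaussian quartiles SAX uses):

```python
import statistics

def sax(series, segments, breakpoints=(-0.67, 0.0, 0.67), alphabet="abcd"):
    """Simplified SAX: z-normalize, PAA to `segments` means, then symbolize."""
    mu, sd = statistics.mean(series), statistics.pstdev(series)
    z = [(v - mu) / sd for v in series]
    n = len(z)
    paa = [statistics.mean(z[i * n // segments:(i + 1) * n // segments])
           for i in range(segments)]
    word = ""
    for m in paa:
        # Symbol index = number of breakpoints the segment mean exceeds.
        word += alphabet[sum(1 for b in breakpoints if m > b)]
    return word

# A low plateau followed by a high plateau compresses to "ad".
word = sax([0, 0, 0, 0, 10, 10, 10, 10], segments=2)
```

The evolutionary approach in the paper instead searches for the interval structure, rather than fixing equal-width segments and Gaussian breakpoints.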
Wu, Zi Yi; Xie, Ping; Sang, Yan Fang; Gu, Hai Ting
2018-04-01
The phenomenon of jump is one of the important external forms of hydrological variability under environmental changes, representing the adaptation of hydrological nonlinear systems to the influence of external disturbances. Presently, the related studies mainly focus on methods for identifying the jump positions and jump times in hydrological time series. In contrast, few studies have focused on the quantitative description and classification of jump degree in hydrological time series, which makes it difficult to understand the environmental changes and evaluate their potential impacts. Here, we proposed a theoretically reliable and easy-to-apply method for the classification of jump degree in hydrological time series, using the correlation coefficient as a basic index. Statistical tests verified the accuracy, reasonability, and applicability of this method. The relationship between the correlation coefficient and the jump degree of a series was described mathematically by derivation. After that, several thresholds of correlation coefficients under different statistical significance levels were chosen, based on which the jump degree could be classified into five levels: no, weak, moderate, strong and very strong. Finally, our method was applied to five different observed hydrological time series, with diverse geographic and hydrological conditions in China. The results of the classification of jump degrees in those series closely accorded with their physical hydrological mechanisms, indicating the practicability of our method.
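The correlation-coefficient index can be illustrated as the Pearson correlation between the series and a 0/1 step indicator at the candidate jump position — a hedged sketch of the idea, not the paper's exact derivation or its significance thresholds:

```python
import math

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def jump_correlation(series, pos):
    """|r| between the series and a unit step at pos; larger = stronger jump."""
    step = [0.0] * pos + [1.0] * (len(series) - pos)
    return abs(pearson(series, step))

# A clean level shift at index 3 correlates perfectly with the step.
r = jump_correlation([0.0, 0.0, 0.0, 1.0, 1.0, 1.0], 3)
```

Binning |r| against significance-level thresholds then yields the no/weak/moderate/strong/very strong classification described in the abstract.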
Normalization methods in time series of platelet function assays
Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham
2016-01-01
Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data represented as multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, discussing the most suitable approach for platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation preserved the correlation, as calculated by the Spearman correlation test, when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be extracted from the charts, as was the case when using all data as one dataset for normalization. PMID:27428217
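Three of the normalizations listed above can be sketched directly; these are the textbook definitions, not the article's code:

```python
import statistics

def z_transform(xs):
    """Center on the mean, scale by the (population) standard deviation."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def range_transform(xs):
    """Rescale linearly onto [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def iqr_transform(xs):
    """Center on the median, scale by the interquartile range."""
    q1, med, q3 = statistics.quantiles(xs, n=4)
    return [(x - med) / (q3 - q1) for x in xs]
```

Applying any of these per assay across all time points preserves each series' shape, which is why the within-assay correlations survive, as the article reports.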
NASA Astrophysics Data System (ADS)
Piburn, J.; Stewart, R.; Morton, A.
2017-10-01
Identifying erratic or unstable time-series is an area of interest to many fields. Recently, there have been successful developments towards this goal. These newly developed methodologies, however, come from domains where it is typical to have several thousand or more temporal observations. This creates a challenge when attempting to apply them to time-series with far fewer temporal observations, such as in socio-cultural understanding, a domain where a typical time series of interest might only consist of 20-30 annual observations. Most existing methodologies simply cannot say anything interesting with so few data points, yet researchers are still tasked to work within the confines of the data. Recently a method for characterizing instability in a time series with limited temporal observations was published. This method, the Attribute Stability Index (ASI), uses an approximate-entropy-based approach to characterize a time series' instability. In this paper we propose an explicitly spatially weighted extension of the Attribute Stability Index. By including a mechanism to account for spatial autocorrelation, this work represents a novel approach for the characterization of space-time instability. As a case study we explore national youth male unemployment across the world from 1991-2014.
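The approximate entropy underlying the ASI is a standard construction; a textbook sketch follows (the ASI itself adds normalization and, in this paper, spatial weighting, neither of which is shown here):

```python
import math

def approximate_entropy(series, m=2, r=0.2):
    """Pincus' ApEn: near 0 for regular series, larger for erratic ones."""
    def phi(mm):
        pats = [series[i:i + mm] for i in range(len(series) - mm + 1)]
        total = 0.0
        for p in pats:
            # Count patterns within tolerance r (self-match included, so c >= 1).
            c = sum(1 for q in pats
                    if max(abs(a - b) for a, b in zip(p, q)) <= r)
            total += math.log(c / len(pats))
        return total / len(pats)
    return phi(m) - phi(m + 1)

# A perfectly regular series has zero approximate entropy.
apen_flat = approximate_entropy([5.0] * 30)
```

With only 20-30 annual observations, the choice of m and r dominates the estimate, which is precisely the small-sample regime the ASI was designed for.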
NASA Technical Reports Server (NTRS)
Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.
2013-01-01
Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performances were analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performances for smoothing AVHRR time series characterized by a high level of noise and frequent missing observations. The resulting smoothed time series captures well the vegetation dynamics and shows no gaps, as compared to the 50-60% of data still missing after AG or SG reconstructions. Results of simulation experiments as well as comparison with actual AVHRR time series indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
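The shift-and-scale idea can be prototyped as a grid search over integer shifts with a least-squares scale per shift. This is a deliberately simplified sketch — CACAO adjusts the parameters per season and handles gaps and noise models that are omitted here:

```python
def climatology_fit(clim, obs, max_shift=4):
    """Find the integer shift and scale that best map clim onto obs.

    For each candidate shift, the optimal scale has a closed least-squares
    form; the (shift, scale) pair with the lowest MSE wins.
    """
    best = None
    for s in range(-max_shift, max_shift + 1):
        pairs = [(clim[t - s], obs[t]) for t in range(len(obs))
                 if 0 <= t - s < len(clim)]
        den = sum(c * c for c, _ in pairs)
        if den == 0:
            continue
        a = sum(c * o for c, o in pairs) / den
        mse = sum((o - a * c) ** 2 for c, o in pairs) / len(pairs)
        if best is None or mse < best[0]:
            best = (mse, s, a)
    return best[1], best[2]

# Observations = climatology delayed by 2 steps and amplified by 1.5.
clim = [0, 1, 2, 3, 4, 3, 2, 1, 0, 0, 0, 0]
obs = [1.5 * clim[t - 2] if t >= 2 else 0.0 for t in range(len(clim))]
shift, scale = climatology_fit(clim, obs)
```

The recovered shift and scale are exactly the phenological-timing and magnitude anomalies CACAO reports relative to the average climatology.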
Statistical analysis of low level atmospheric turbulence
NASA Technical Reports Server (NTRS)
Tieleman, H. W.; Chen, W. W. L.
1974-01-01
The statistical properties of low-level wind-turbulence data were obtained with the model 1080 total vector anemometer and the model 1296 dual split-film anemometer, both manufactured by Thermo Systems Incorporated. The data obtained from the above fast-response probes were compared with the results obtained from a pair of Gill propeller anemometers. The digitized time series representing the three velocity components and the temperature were each divided into a number of blocks, the length of which depended on the lowest frequency of interest and also on the storage capacity of the available computer. A moving-average and differencing high-pass filter was used to remove the trend and the low frequency components in the time series. The calculated results for each of the anemometers used are represented in graphical or tabulated form.
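The moving-average-and-differencing high-pass filter described above amounts to subtracting a local mean from each sample. A minimal sketch (window length illustrative):

```python
def high_pass(series, k=5):
    """Detrend by subtracting a centered k-point moving average.

    Removes the trend and low-frequency components, leaving the
    turbulent fluctuations, as in the anemometer preprocessing above.
    """
    half = k // 2
    out = []
    for i in range(len(series)):
        window = series[max(0, i - half):i + half + 1]
        out.append(series[i] - sum(window) / len(window))
    return out
```

A constant (pure-trend) input is mapped to zero, confirming the filter passes only fluctuations.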
Empirical forecast of quiet time ionospheric Total Electron Content maps over Europe
NASA Astrophysics Data System (ADS)
Badeke, Ronny; Borries, Claudia; Hoque, Mainul M.; Minkwitz, David
2018-06-01
An accurate forecast of the atmospheric Total Electron Content (TEC) is helpful to investigate space weather influences on the ionosphere and technical applications like satellite-receiver radio links. The purpose of this work is to compare four empirical methods for a 24-h forecast of vertical TEC maps over Europe under geomagnetically quiet conditions. TEC map data are obtained from the Space Weather Application Center Ionosphere (SWACI) and the Universitat Politècnica de Catalunya (UPC). The time-series methods Standard Persistence Model (SPM), a 27-day median model (MediMod) and a Fourier Series Expansion are compared to maps for the entire year of 2015. As a representative of the climatological coefficient models, the forecast performance of the Global Neustrelitz TEC model (NTCM-GL) is also investigated. Time periods of magnetic storms, which are identified with the Dst index, are excluded from the validation. By calculating the TEC values with the most recent maps, the time-series methods perform slightly better than the coefficient model NTCM-GL. The benefit of NTCM-GL is its independence from observational TEC data. Amongst the time-series methods mentioned, MediMod delivers the best overall performance regarding accuracy and data gap handling. Quiet-time SWACI maps can be forecasted accurately and in real-time by the MediMod time-series approach.
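A 27-day median model of the MediMod type can be sketched as a cell-wise median over the most recent daily maps — a hedged illustration of the idea, not the SWACI implementation:

```python
import statistics

def median_map_forecast(daily_maps, window=27):
    """Forecast tomorrow's map as the cell-wise median of the last `window` maps.

    daily_maps is a list of equal-length lists (one flattened TEC map per day);
    the median is robust to occasional outlier days within the window.
    """
    recent = daily_maps[-window:]
    return [statistics.median(day[i] for day in recent)
            for i in range(len(recent[0]))]

# 30 identical synthetic 3-cell maps: the forecast reproduces the map.
history = [[1.0, 2.0, 3.0]] * 30
forecast = median_map_forecast(history)
```

The 27-day window matches the solar rotation period, which is why a plain median captures much of the quiet-time variability.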
Frequency-phase analysis of resting-state functional MRI
Goelman, Gadi; Dan, Rotem; Růžička, Filip; Bezdicek, Ondrej; Růžička, Evžen; Roth, Jan; Vymazal, Josef; Jech, Robert
2017-01-01
We describe an analysis method that characterizes the correlation between coupled time-series functions by their frequencies and phases. It provides a unified framework for simultaneous assessment of frequency and latency of a coupled time-series. The analysis is demonstrated on resting-state functional MRI data of 34 healthy subjects. Interactions between fMRI time-series are represented by cross-correlation (with time-lag) functions. A general linear model is used on the cross-correlation functions to obtain the frequencies and phase-differences of the original time-series. We define symmetric, antisymmetric and asymmetric cross-correlation functions that correspond respectively to in-phase, 90° out-of-phase and any phase difference between a pair of time-series, of which the last two have not been introduced before. Seed maps of the motor system were calculated to demonstrate the strength and capabilities of the analysis. Unique types of functional connections, their dominant frequencies and phase-differences have been identified. The relation between phase-differences and time-delays is shown. The phase-differences are speculated to inform transfer-time and/or to reflect a difference in the hemodynamic response between regions that are modulated by neurotransmitter concentration. The analysis can be used with any coupled functions in many disciplines including electrophysiology, EEG or MEG in neuroscience. PMID:28272522
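The elementary step underneath such an analysis is locating the lag that maximizes the normalized cross-correlation of two series. A sketch on synthetic sinusoids (the paper goes further, fitting a GLM to the whole cross-correlation function rather than taking its peak):

```python
import math

def best_lag(x, y, max_lag):
    """Lag (in samples) maximizing the normalized cross-correlation of x, y."""
    def score(lag):
        pairs = [(x[t], y[t + lag]) for t in range(len(x))
                 if 0 <= t + lag < len(y)]
        mx = sum(a for a, _ in pairs) / len(pairs)
        my = sum(b for _, b in pairs) / len(pairs)
        num = sum((a - mx) * (b - my) for a, b in pairs)
        dx = sum((a - mx) ** 2 for a, _ in pairs) ** 0.5
        dy = sum((b - my) ** 2 for _, b in pairs) ** 0.5
        return num / (dx * dy)
    return max(range(-max_lag, max_lag + 1), key=score)

# y is x delayed by 3 samples (both cut from one long sinusoid).
base = [math.sin(2 * math.pi * t / 25) for t in range(120)]
x, y = base[10:90], base[7:87]
lag = best_lag(x, y, 10)
```

For a narrowband signal of known period, such a lag converts directly into the phase difference the method reports.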
A cluster merging method for time series microarray with production values.
Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio
2014-09-01
A challenging task in time-course microarray data analysis is to cluster genes meaningfully combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal obtaining groups with highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured in the same time points) and merging them by taking into account the frequency by which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim to find co-expressed genes related to the production and growth of a certain bacteria. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevant measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of time series and the same shape-based algorithm.
NASA Astrophysics Data System (ADS)
Diao, Chunyuan
In today's big data era, the increasing availability of satellite and airborne platforms at various spatial and temporal scales creates unprecedented opportunities to understand the complex and dynamic systems (e.g., plant invasion). Time series remote sensing is becoming more and more important to monitor the earth system dynamics and interactions. To date, most of the time series remote sensing studies have been conducted with the images acquired at coarse spatial scale, due to their relatively high temporal resolution. The construction of time series at fine spatial scale, however, is limited to few or discrete images acquired within or across years. The objective of this research is to advance the time series remote sensing at fine spatial scale, particularly to shift from discrete time series remote sensing to continuous time series remote sensing. The objective will be achieved through the following aims: 1) Advance intra-annual time series remote sensing under the pure-pixel assumption; 2) Advance intra-annual time series remote sensing under the mixed-pixel assumption; 3) Advance inter-annual time series remote sensing in monitoring the land surface dynamics; and 4) Advance the species distribution model with time series remote sensing. Taking invasive saltcedar as an example, four methods (i.e., phenological time series remote sensing model, temporal partial unmixing method, multiyear spectral angle clustering model, and time series remote sensing-based spatially explicit species distribution model) were developed to achieve the objectives. Results indicated that the phenological time series remote sensing model could effectively map saltcedar distributions through characterizing the seasonal phenological dynamics of plant species throughout the year. 
The proposed temporal partial unmixing method, compared to conventional unmixing methods, could more accurately estimate saltcedar abundance within a pixel by exploiting the adequate temporal signatures of saltcedar. The multiyear spectral angle clustering model could guide the selection of the most representative remotely sensed image for repetitive saltcedar mapping over space and time. Through incorporating spatial autocorrelation, the species distribution model developed in the study could identify the suitable habitats of saltcedar at a fine spatial scale and locate appropriate areas at high risk of saltcedar infestation. Among 10 environmental variables, the distance to the river and the phenological attributes summarized by the time series remote sensing were regarded as the most important. These methods developed in the study provide new perspectives on how the continuous time series can be leveraged under various conditions to investigate the plant invasion dynamics.
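The spectral angle at the heart of the multiyear clustering model above is a standard similarity measure: the angle between two spectra viewed as vectors, which is insensitive to overall brightness scaling. A minimal sketch:

```python
import math

def spectral_angle(a, b):
    """Angle in radians between two spectra; 0 means identical shape.

    Scaling either spectrum leaves the angle unchanged, which is what makes
    the measure robust to illumination differences between acquisition dates.
    """
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp for floating-point safety before acos.
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))
```

Clustering years by pairwise spectral angle then identifies the most representative image, as the dissertation describes.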
Low-dimensional chaos in magnetospheric activity from AE time series
NASA Technical Reports Server (NTRS)
Vassiliadis, D. V.; Sharma, A. S.; Eastman, T. E.; Papadopoulos, K.
1990-01-01
The magnetospheric response to the solar-wind input, as represented by time-series measurements of the auroral electrojet (AE) index, has been examined using phase-space reconstruction techniques. The system was found to behave as a low-dimensional chaotic system with a fractal dimension of 3.6 and a Kolmogorov entropy of less than 0.2/min. These results indicate that the dynamics of the system can be adequately described by four independent variables, and that the corresponding intrinsic time scale is of the order of 5 min. The relevance of these results to magnetospheric modeling is discussed.
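A fractal (correlation) dimension of this kind is conventionally estimated with the Grassberger-Procaccia approach: delay-embed the series, then measure how the fraction of close pairs scales with distance. The sketch below is our own simplification; real analyses fit the slope over a full scaling region rather than two radii:

```python
import numpy as np

def delay_embed(x, m, tau):
    """Build m-dimensional delay vectors with lag tau."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def correlation_sum(x, m, tau, r):
    """Fraction of embedded point pairs closer than radius r."""
    X = delay_embed(np.asarray(x, float), m, tau)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return np.mean(d[iu] < r)

def correlation_dimension(x, m, tau, r1, r2):
    """Two-point slope of log C(r) vs log r (crude D2 estimate)."""
    c1, c2 = (correlation_sum(x, m, tau, r) for r in (r1, r2))
    return np.log(c2 / c1) / np.log(r2 / r1)
```

A sanity check: points embedded along a line should give a dimension near 1.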
NASA Astrophysics Data System (ADS)
Rehfeld, Kira; Goswami, Bedartha; Marwan, Norbert; Breitenbach, Sebastian; Kurths, Jürgen
2013-04-01
Statistical analysis of dependencies among paleoclimate data helps infer the climatic processes they reflect. Three key challenges have to be addressed, however: the datasets are heterogeneous in (i) time and (ii) space, and furthermore time itself is a variable that needs to be reconstructed, which (iii) introduces additional uncertainties. To address these issues in a flexible way we developed the paleoclimate network framework, inspired by the increasing application of complex networks in climate research. Each node in a paleoclimate network represents a paleoclimate archive and its associated time series. Links between nodes are assigned if their time series are significantly similar. The basis of the paleoclimate network is thus formed by linear and nonlinear estimators of Pearson correlation, mutual information, and event synchronization, which quantify similarity from irregularly sampled time series. Age uncertainties are propagated into the final network analysis using time series ensembles that reflect the uncertainty. We discuss how spatial heterogeneity influences the results obtained from network measures, and demonstrate the power of the approach by inferring teleconnection variability of the Asian summer monsoon for the past 1000 years.
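Quantifying similarity between irregularly sampled series, as the framework requires, can be done with kernel-weighted estimators. The sketch below is a simplified Gaussian-kernel Pearson correlation in the spirit of such estimators; the function name and bandwidth handling are ours:

```python
import numpy as np

def gaussian_kernel_corr(tx, x, ty, y, h):
    """Kernel-weighted correlation for two irregularly sampled series:
    standardize each series, then average products of observation pairs
    weighted by a Gaussian kernel of their time difference (bandwidth h).
    A simplified stand-in for proper irregular-sampling estimators."""
    tx, x = np.asarray(tx, float), np.asarray(x, float)
    ty, y = np.asarray(ty, float), np.asarray(y, float)
    xa = (x - x.mean()) / x.std()
    ya = (y - y.mean()) / y.std()
    dt = tx[:, None] - ty[None, :]
    w = np.exp(-0.5 * (dt / h) ** 2)
    return np.sum(w * xa[:, None] * ya[None, :]) / np.sum(w)
```

In a network construction, a link would be assigned when this similarity passes a significance test against surrogate data.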
West Africa land use and land cover time series
Cotillon, Suzanne E.
2017-02-16
Started in 1999, the West Africa Land Use Dynamics project represents an effort to map land use and land cover, characterize the trends in time and space, and understand their effects on the environment across West Africa. The outcome of the West Africa Land Use Dynamics project is the production of a three-time period (1975, 2000, and 2013) land use and land cover dataset for the Sub-Saharan region of West Africa, including the Cabo Verde archipelago. The West Africa Land Use Land Cover Time Series dataset offers a unique basis for characterizing and analyzing land changes across the region, systematically and at an unprecedented level of detail.
Valdés, Julio J; Bonham-Carter, Graeme
2006-03-01
A computational intelligence approach is used to explore the problem of detecting internal state changes in time-dependent processes described by heterogeneous, multivariate time series with imprecise data and missing values. Such processes are approximated by collections of time-dependent nonlinear autoregressive models represented by a special kind of neuro-fuzzy neural network. Grid and high-throughput computing model-mining procedures based on neuro-fuzzy networks and genetic algorithms generate: (i) collections of models composed of sets of time-lag terms from the time series, and (ii) prediction functions represented by neuro-fuzzy networks. The composition of the models and their prediction capabilities allow the identification of changes in the internal structure of the process. These changes are associated with the alternation of steady and transient states, zones of abnormal behavior, instability, and other situations. The approach is general, and its sensitivity for detecting subtle changes of state is revealed by simulation experiments. Its potential in the study of complex processes in earth sciences and astrophysics is illustrated with applications using paleoclimate and solar data.
NASA Astrophysics Data System (ADS)
Radhakrishnan, Srinivasan; Duvvuru, Arjun; Sultornsanee, Sivarit; Kamarthi, Sagar
2016-02-01
The cross correlation coefficient has been widely applied in financial time series analysis, in particular for understanding chaotic behaviour in terms of stock price and index movements during crisis periods. To better understand time series correlation dynamics, the cross correlation matrices are represented as networks, in which a node stands for an individual time series and a link indicates cross correlation between a pair of nodes. These networks are converted into simpler trees using different schemes. In this context, Minimum Spanning Trees (MSTs) are the most favoured tree structures because of their ability to preserve all the nodes and thereby retain essential information embedded in the network. Although cross correlations underlying MSTs capture essential information, they do not faithfully capture the dynamic behaviour embedded in the time series data of financial systems, because cross correlation is a reliable measure only if the relationship between the time series is linear. To address this issue, this work investigates a new measure called phase synchronization (PS) for establishing correlations among time series that relate to one another linearly or nonlinearly. In this approach the strength of a link between a pair of time series (nodes) is determined by the level of phase synchronization between them. We compare the performance of the phase synchronization based MST with the cross correlation based MST along selected network measures across a temporal frame that includes economically good and crisis periods. We observe agreement in the directionality of the results across the two methods: they show similar trends, upward or downward, when comparing selected network measures.
Although both methods give similar trends, the phase synchronization based MST is a more reliable representation of the dynamic behaviour of financial systems than the cross correlation based MST because of the former's ability to quantify nonlinear relationships among time series and relations among phase-shifted time series.
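The correlation-to-MST pipeline both methods share can be sketched as follows: map correlations to distances via the standard d = sqrt(2(1 - ρ)) transform, then build the MST (here with a naive Prim's algorithm; the phase synchronization variant would simply substitute a different similarity matrix):

```python
import numpy as np

def corr_distance(R):
    """Map a correlation matrix to the standard metric distance."""
    return np.sqrt(2.0 * (1.0 - np.asarray(R, float)))

def mst_edges(D):
    """Prim's algorithm on a dense distance matrix; returns an edge list.
    O(n^3) naive version -- fine for a sketch, not for large markets."""
    n = len(D)
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or D[i, j] < best[2]):
                    best = (i, j, D[i, j])
        edges.append(best[:2])
        in_tree.append(best[1])
    return edges
```

Highly correlated series (small distances) end up adjacent in the tree, which is what preserves the essential cluster structure of the network.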
NASA Astrophysics Data System (ADS)
WANG, D.; Wang, Y.; Zeng, X.
2017-12-01
Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, Wavelet De-noising (WD) combined with Rank-Set Pair Analysis (RSPA), that takes full advantage of both techniques to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influence at representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method; Artificial Neural Networks (ANNs), including error Back Propagation (BP), Multilayer Perceptron (MLP), and Radial Basis Function (RBF) variants; and RSPA alone. Nine error metrics are used to evaluate model performance. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the generic methods compared in this paper, even when extreme events are included within a time series.
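The WD step can be illustrated with a one-level Haar decomposition and soft thresholding. This is a minimal stand-in, not the paper's actual wavelet basis, decomposition depth, or threshold rule:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoising.
    Decompose into approximation/detail coefficients, shrink the
    detail (where white noise concentrates), then reconstruct."""
    x = np.asarray(x, float)
    n = len(x) - len(x) % 2                      # even length only
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)       # approximation coeffs
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)       # detail coeffs
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

With a zero threshold the transform reconstructs the input exactly; with a threshold near the noise scale it reduces the error on a smooth signal.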
Observing climate change trends in ocean biogeochemistry: when and where.
Henson, Stephanie A; Beaulieu, Claudie; Lampitt, Richard
2016-04-01
Understanding the influence of anthropogenic forcing on the marine biosphere is a high priority. Climate change-driven trends need to be accurately assessed and detected in a timely manner. As part of the effort towards detection of long-term trends, a network of ocean observatories and time series stations provides high quality data for a number of key parameters, such as pH, oxygen concentration or primary production (PP). Here, we use an ensemble of global coupled climate models to assess the temporal and spatial scales over which observations of eight biogeochemically relevant variables must be made to robustly detect a long-term trend. We find that, as a global average, continuous time series are required for between 14 (pH) and 32 (PP) years to distinguish a climate change trend from natural variability. Regional differences are extensive, with low latitudes and the Arctic generally needing shorter time series (<~30 years) to detect trends than other areas. In addition, we quantify the 'footprint' of existing and planned time series stations, that is, the area over which a station is representative of a broader region. Footprints are generally largest for pH and sea surface temperature, but nevertheless the existing network of observatories only represents 9-15% of the global ocean surface. Our results present a quantitative framework for assessing the adequacy of current and future ocean observing networks for detection and monitoring of climate change-driven responses in the marine ecosystem. © 2016 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
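Trend-detection times of this kind are often approximated with the Weatherhead et al. (1998) formula relating trend magnitude, noise variance, and lag-1 autocorrelation. The paper uses model ensembles rather than this formula; the sketch below is only a back-of-envelope analogue of the idea:

```python
import numpy as np

def years_to_detect(trend_per_yr, noise_sd, ar1):
    """Approximate years of continuous observations needed to detect a
    linear trend at ~95% confidence, following the Weatherhead-style
    estimate n* = [3.3 (sigma/|omega|) sqrt((1+phi)/(1-phi))]^(2/3).
    This is an assumption of the sketch, not the paper's exact method."""
    return (3.3 * noise_sd / abs(trend_per_yr)
            * np.sqrt((1 + ar1) / (1 - ar1))) ** (2.0 / 3.0)
```

The formula captures the qualitative behavior reported above: noisier variables and more autocorrelated (redder) variability both lengthen the record needed, while a stronger trend shortens it.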
The Living Planet Index: using species population time series to track trends in biodiversity
Loh, Jonathan; Green, Rhys E; Ricketts, Taylor; Lamoreux, John; Jenkins, Martin; Kapos, Valerie; Randers, Jorgen
2005-01-01
The Living Planet Index was developed to measure the changing state of the world's biodiversity over time. It uses time-series data to calculate average rates of change in a large number of populations of terrestrial, freshwater and marine vertebrate species. The dataset contains about 3000 population time series for over 1100 species. Two methods of calculating the index are outlined: the chain method and a method based on linear modelling of log-transformed data. The dataset is analysed to compare the relative representation of biogeographic realms, ecoregional biomes, threat status and taxonomic groups among species contributing to the index. The two methods show very similar results: terrestrial species declined on average by 25% from 1970 to 2000. Birds and mammals are over-represented in comparison with other vertebrate classes, and temperate species are over-represented compared with tropical species, but there is little difference in representation between threatened and non-threatened species. Some of the problems arising from over-representation are reduced by the way in which the index is calculated. It may be possible to reduce this further by post-stratification and weighting, but new information would first need to be collected for data-poor classes, realms and biomes. PMID:15814346
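The chain method mentioned above can be sketched directly: each year's index multiplies the previous year's by the geometric mean of the populations' interannual rates of change. Function and variable names are ours:

```python
import math

def living_planet_index(populations):
    """Chain-method index: for each year, average log10 interannual
    rates of change across populations, then chain the antilogs into
    an index with the first year set to 1.0."""
    n_years = len(populations[0])
    index = [1.0]
    for t in range(1, n_years):
        rates = [math.log10(p[t] / p[t - 1]) for p in populations]
        index.append(index[-1] * 10 ** (sum(rates) / len(rates)))
    return index
```

Because the averaging is on the log scale, one population doubling and another halving cancel exactly, which is the intended behavior of a geometric-mean index.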
Measurement of cardiac output from dynamic pulmonary circulation time CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yee, Seonghwan, E-mail: Seonghwan.Yee@Beaumont.edu; Scalzetti, Ernest M.
Purpose: To introduce a method of estimating cardiac output from the dynamic pulmonary circulation time CT that is primarily used to determine the optimal time window of CT pulmonary angiography (CTPA). Methods: Dynamic pulmonary circulation time CT series, acquired for eight patients, were retrospectively analyzed. The dynamic CT series was acquired, prior to the main CTPA, in cine mode (1 frame/s) for a single slice at the level of the main pulmonary artery covering the cross sections of the ascending aorta (AA) and descending aorta (DA) during the infusion of iodinated contrast. The time series of contrast changes obtained for DA, which is downstream of AA, was assumed to be related to the time series for AA by convolution with a delay function. The delay time constant in the delay function, representing the average time interval between the cross sections of AA and DA, was determined by least square error fitting between the convolved AA time series and the DA time series. The cardiac output was then calculated by dividing the volume of the aortic arch between the cross sections of AA and DA (estimated from the single-slice CT image) by the average time interval, and multiplying the result by a correction factor. Results: The mean cardiac output value for the six patients was 5.11 l/min (with a standard deviation of 1.57 l/min), which is in good agreement with literature values; the data for the other two patients were too noisy for processing. Conclusions: The dynamic single-slice pulmonary circulation time CT series also can be used to estimate cardiac output.
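The fitting step can be sketched as follows, with the convolution with a delay function simplified to a pure time shift (an assumption of this sketch, not the paper's method), followed by the volume-over-transit-time cardiac output estimate:

```python
import numpy as np

def fit_transit_time(aa, da, dt=1.0, max_shift=20):
    """Find the delay (in seconds) that best maps the AA enhancement
    curve onto the DA curve by grid search over integer frame shifts,
    minimizing mean squared error. A pure shift stands in for the
    paper's convolution with a delay function."""
    aa, da = np.asarray(aa, float), np.asarray(da, float)
    errs = []
    for k in range(1, max_shift):
        e = np.mean((aa[:-k] - da[k:]) ** 2)
        errs.append((e, k * dt))
    return min(errs)[1]

def cardiac_output(arch_volume_ml, transit_time_s, correction=1.0):
    """Cardiac output in l/min: aortic-arch volume between the AA and
    DA cross sections divided by the transit time, times a correction."""
    return correction * arch_volume_ml / transit_time_s * 60.0 / 1000.0
```

For example, a 100 ml arch segment traversed in 1.2 s gives 5.0 l/min, in the physiological range the abstract reports.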
NASA Astrophysics Data System (ADS)
Ramli, Nazirah; Mutalib, Siti Musleha Ab; Mohamad, Daud
2017-08-01
Fuzzy time series forecasting models have been proposed since 1993 to cater for data in linguistic values. Many improvements and modifications have been made to the model, such as enhancements to the interval length and the types of fuzzy logical relations. However, most of the improved models represent the linguistic terms in the form of discrete fuzzy sets. In this paper, a fuzzy time series model with data in the form of trapezoidal fuzzy numbers and a natural partitioning length approach is introduced for predicting the unemployment rate. Two types of fuzzy relations are used in this study: first order and second order. The proposed model can produce forecasted values under different degrees of confidence.
Code of Federal Regulations, 2010 CFR
2010-10-01
... eligible for capital assistance. Capital assistance means Federal financial assistance for capital projects... recipient in the market. Present value means the value at the time of calculation of a future payment, or series of future payments discounted by the time value of money as represented by an interest rate or...
Called to a Higher Duty: 1945-1961. Dwight D. Eisenhower.
ERIC Educational Resources Information Center
Barbieri, Kim E.
This document, the fourth part of a five-part curriculum series on the life and times of Dwight D. Eisenhower, covers Eisenhower's life from the end of World War II through the series of events that led him to accept the people's call to the presidency. This curriculum highlights Eisenhower's two terms of office as the 34th U.S. President.…
Wavelet analysis in ecology and epidemiology: impact of statistical tests
Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario
2014-01-01
Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892
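A red-noise null model of the kind the authors evaluate is typically built from AR(1) surrogates matched to the series' lag-1 autocorrelation and variance. A minimal sketch, with our own parameter-matching choices:

```python
import numpy as np

def ar1_surrogates(x, n_surr, seed=0):
    """Generate red-noise (AR(1)) surrogate series matching the lag-1
    autocorrelation, variance, and mean of x. Wavelet statistics
    computed on such surrogates form a null distribution against which
    the original series' statistics can be tested."""
    x = np.asarray(x, float)
    xc = x - x.mean()
    phi = np.corrcoef(xc[:-1], xc[1:])[0, 1]       # lag-1 autocorrelation
    sigma = xc.std() * np.sqrt(1 - phi ** 2)       # innovation std dev
    rng = np.random.default_rng(seed)
    out = np.empty((n_surr, len(x)))
    for s in range(n_surr):
        e = rng.normal(0, sigma, len(x))
        y = np.empty(len(x))
        y[0] = xc[0]
        for t in range(1, len(x)):
            y[t] = phi * y[t - 1] + e[t]
        out[s] = y + x.mean()
    return out
```

As the abstract argues, such spectrally blind surrogates can be inadequate for strongly periodic series, which is why data-driven resampling is recommended instead.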
31 CFR 360.61 - Payment after death.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STATES SAVINGS BONDS, SERIES I Minors, Incompetents, Aged Persons, Absentees, et al. § 360.61 Payment after death. After the death of the ward, and at any time prior to the representative's discharge, the...
31 CFR 360.61 - Payment after death.
Code of Federal Regulations, 2012 CFR
2012-07-01
... STATES SAVINGS BONDS, SERIES I Minors, Incompetents, Aged Persons, Absentees, et al. § 360.61 Payment after death. After the death of the ward, and at any time prior to the representative's discharge, the...
31 CFR 360.61 - Payment after death.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STATES SAVINGS BONDS, SERIES I Minors, Incompetents, Aged Persons, Absentees, et al. § 360.61 Payment after death. After the death of the ward, and at any time prior to the representative's discharge, the...
31 CFR 360.61 - Payment after death.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STATES SAVINGS BONDS, SERIES I Minors, Incompetents, Aged Persons, Absentees, et al. § 360.61 Payment after death. After the death of the ward, and at any time prior to the representative's discharge, the...
31 CFR 360.61 - Payment after death.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STATES SAVINGS BONDS, SERIES I Minors, Incompetents, Aged Persons, Absentees, et al. § 360.61 Payment after death. After the death of the ward, and at any time prior to the representative's discharge, the...
From fuzzy recurrence plots to scalable recurrence networks of time series
NASA Astrophysics Data System (ADS)
Pham, Tuan D.
2017-04-01
Recurrence networks, which are derived from recurrence plots of nonlinear time series, enable the extraction of hidden features of complex dynamical systems. Because fuzzy recurrence plots are represented as grayscale images, this paper presents a variety of texture features that can be extracted from fuzzy recurrence plots. Based on the notion of fuzzy recurrence plots, defuzzified, undirected, and unweighted recurrence networks are introduced. Network measures can be computed for defuzzified recurrence networks that are scalable to meet the demand for the network-based analysis of big data.
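The progression from a fuzzy (grayscale) recurrence plot to a defuzzified, undirected, unweighted network can be sketched as below; the Gaussian membership function here is our simplification of the paper's FCM-based memberships:

```python
import numpy as np

def fuzzy_recurrence_plot(x, sigma):
    """Grayscale recurrence matrix: membership values in [0, 1]
    from a Gaussian of pairwise distance (a simplified stand-in
    for fuzzy-c-means-derived memberships)."""
    x = np.asarray(x, float)
    d = np.abs(x[:, None] - x[None, :])
    return np.exp(-0.5 * (d / sigma) ** 2)

def defuzzified_network(F, alpha):
    """Alpha-cut turns the fuzzy plot into the adjacency matrix of an
    unweighted, undirected recurrence network (self-loops removed)."""
    A = (F >= alpha).astype(int)
    np.fill_diagonal(A, 0)
    return A
```

Standard network measures (degree, clustering, path lengths) can then be computed on the adjacency matrix, which is what makes the construction scalable.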
Strakova, Eva; Zikova, Alice; Vohradsky, Jiri
2014-01-01
A computational model of gene expression was applied to a novel test set of microarray time series measurements to reveal regulatory interactions between transcriptional regulators, represented by 45 sigma factors, and the genes expressed during germination of the prokaryote Streptomyces coelicolor. Using microarrays, the first 5.5 h of the process were recorded at 13 time points, providing a database of gene expression time series on a genome-wide scale. Computational modeling of the kinetic relations between the sigma factors, individual genes, and genes clustered according to the similarity of their expression kinetics identified kinetically plausible sigma factor-controlled networks. Using genome sequence annotations, functional groups of genes that were predominantly controlled by specific sigma factors were identified. Using external binding data complementing the modeling approach, specific genes involved in the control of the studied process were identified and their functions suggested.
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow, so streamflow series were less random and more complex than those of precipitation. The Nash-Sutcliffe efficiency metric increased as model complexity increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters in simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
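The symbolization and one of the three metrics, mean information gain, can be sketched as follows; the bin count and block length are our illustrative choices:

```python
import numpy as np
from collections import Counter

def symbolize(x, n_bins=2):
    """Map a series to symbols by quantile thresholds."""
    qs = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return tuple(int(np.searchsorted(qs, v, side='right')) for v in x)

def block_entropy(symbol_blocks):
    """Shannon entropy (bits) of a list of symbol blocks."""
    c = Counter(symbol_blocks)
    n = sum(c.values())
    p = np.array([v / n for v in c.values()])
    return float(-(p * np.log2(p)).sum())

def mean_information_gain(sym, L=1):
    """H(blocks of length L+1) - H(blocks of length L): the average
    new information per symbol, i.e., the randomness of the string."""
    sym = tuple(sym)
    blocks = lambda k: [sym[i:i + k] for i in range(len(sym) - k + 1)]
    return block_entropy(blocks(L + 1)) - block_entropy(blocks(L))
```

A perfectly predictable alternating string has near-zero gain, while a fair coin-flip string gains close to one bit per symbol, which illustrates why streamflow (filtered, predictable) scores lower than precipitation.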
NASA Astrophysics Data System (ADS)
Jia, Duo; Wang, Cangjiao; Lei, Shaogang
2018-01-01
Mapping vegetation dynamic types in mining areas is significant for revealing the mechanisms of environmental damage and for guiding ecological construction. Dynamic types of vegetation can be identified by applying interannual normalized difference vegetation index (NDVI) time series. However, phase differences and time shifts in interannual time series decrease mapping accuracy in mining regions. To overcome these problems and to increase the accuracy of mapping vegetation dynamics, an interannual Landsat time series for optimum vegetation growing status was constructed first by using the enhanced spatial and temporal adaptive reflectance fusion model algorithm. We then proposed a Markov random field optimized semisupervised Gaussian dynamic time warping kernel-based fuzzy c-means (FCM) cluster algorithm for interannual NDVI time series to map dynamic vegetation types in mining regions. The proposed algorithm has been tested in the Shengli mining region and Shendong mining region, which are typical representatives of China's open-pit and underground mining regions, respectively. Experiments show that the proposed algorithm can solve the problems of phase differences and time shifts to achieve better performance when mapping vegetation dynamic types. The overall accuracies for the Shengli and Shendong mining regions were 93.32% and 89.60%, respectively, with improvements of 7.32% and 25.84% when compared with the original semisupervised FCM algorithm.
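The dynamic time warping at the heart of the Gaussian DTW kernel handles exactly the phase differences and time shifts the authors describe. A classic dynamic-programming sketch follows; the kernel itself and the semisupervised FCM wrapper are omitted:

```python
def dtw_distance(a, b):
    """Classic DTW: minimum cumulative |a_i - b_j| cost over monotone
    warping paths, letting one series stretch or shift against the
    other. A Gaussian kernel exp(-dtw^2 / (2 s^2)) can then turn this
    alignment cost into a similarity for clustering."""
    INF = float('inf')
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Two NDVI trajectories that are identical up to a one-step shift have zero DTW distance but a nonzero pointwise (Euclidean-style) distance, which is why DTW-based clustering tolerates interannual phase differences.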
31 CFR 353.61 - Payment after death.
Code of Federal Regulations, 2012 CFR
2012-07-01
... STATES SAVINGS BONDS, SERIES EE AND HH Minors, Incompetents, Aged Persons, Absentees, et al. § 353.61 Payment after death. After the death of the ward, and at any time prior to the representative's discharge...
31 CFR 353.61 - Payment after death.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STATES SAVINGS BONDS, SERIES EE AND HH Minors, Incompetents, Aged Persons, Absentees, et al. § 353.61 Payment after death. After the death of the ward, and at any time prior to the representative's discharge...
31 CFR 353.61 - Payment after death.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STATES SAVINGS BONDS, SERIES EE AND HH Minors, Incompetents, Aged Persons, Absentees, et al. § 353.61 Payment after death. After the death of the ward, and at any time prior to the representative's discharge...
31 CFR 353.61 - Payment after death.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STATES SAVINGS BONDS, SERIES EE AND HH Minors, Incompetents, Aged Persons, Absentees, et al. § 353.61 Payment after death. After the death of the ward, and at any time prior to the representative's discharge...
31 CFR 353.61 - Payment after death.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STATES SAVINGS BONDS, SERIES EE AND HH Minors, Incompetents, Aged Persons, Absentees, et al. § 353.61 Payment after death. After the death of the ward, and at any time prior to the representative's discharge...
InSAR Deformation Time Series Processed On-Demand in the Cloud
NASA Astrophysics Data System (ADS)
Horn, W. B.; Weeden, R.; Dimarchi, H.; Arko, S. A.; Hogenson, K.
2017-12-01
During this past year, ASF has developed a cloud-based on-demand processing system known as HyP3 (http://hyp3.asf.alaska.edu/), the Hybrid Pluggable Processing Pipeline, for Synthetic Aperture Radar (SAR) data. The system makes it easy for a user who doesn't have the time or inclination to install and use complex SAR processing software to leverage SAR data in their research or operations. One such processing algorithm is generation of a deformation time series product, which is a series of images representing ground displacements over time, computed from a time series of interferometric SAR (InSAR) products. The set of software tools necessary to generate this useful product is difficult to install, configure, and use. Moreover, for a long time series with many images, the processing of just the interferograms can take days. Principally built by three undergraduate students at the ASF DAAC, the deformation time series processing relies on the new Amazon Batch service, which enables processing of jobs with complex interconnected dependencies in a straightforward and efficient manner. In the case of generating a deformation time series product from a stack of single-look complex SAR images, the system uses Batch to serialize the up-front processing, interferogram generation, optional tropospheric correction, and deformation time series generation. The most time consuming portion is the interferogram generation, because even for a fairly small stack of images many interferograms need to be processed. By using AWS Batch, the interferograms are all generated in parallel; the entire process completes in hours rather than days. Additionally, the individual interferograms are saved in Amazon's cloud storage, so that when new data is acquired in the stack, an updated time series product can be generated with minimal additional processing.
This presentation will focus on the development techniques and enabling technologies that were used in developing the time series processing in the ASF HyP3 system. Data and process flow from job submission through to order completion will be shown, highlighting the benefits of the cloud for each step.
OceanSITES: Sustained Ocean Time Series Observations in the Global Ocean.
NASA Astrophysics Data System (ADS)
Weller, R. A.; Gallage, C.; Send, U.; Lampitt, R. S.; Lukas, R.
2016-02-01
Time series observations at critical or representative locations are an essential element of a global ocean observing system that is unique and complements other approaches to sustained observing. OceanSITES is an international group of oceanographers associated with such time series sites. OceanSITES exists to promote the continuation and extension of ocean time series sites around the globe. It also exists to plan and oversee the global array of sites in order to address the needs of research, climate change detection, operational applications, and policy makers. OceanSITES is a voluntary group that sits as an Action Group of the JCOMM-OPS Data Buoy Cooperation Panel, where JCOMM-OPS is the operational ocean observing oversight group of the Joint Commission on Oceanography and Marine Meteorology of the International Oceanographic Commission and the World Meteorological Organization. The way forward includes working to complete the global array, moving toward multidisciplinary instrumentation on a subset of the sites, and increasing utilization of the time series data, which are freely available from two Global Data Assembly Centers, one at the National Data Buoy Center and one at Coriolis at IFREMER. One recent OceanSITES initiative and several results from OceanSITES time series sites are presented. The recent initiative was the assembly of a pool of temperature/conductivity recorders for provision to OceanSITES sites in order to provide deep ocean temperature and salinity time series. Examples from specific sites include: a 15-year record of surface meteorology and air-sea fluxes from off northern Chile that shows evidence of long-term trends in surface forcing; changes in upper ocean salinity and stratification associated with regional change in the hydrological cycle that can be seen at the Hawaii time series site; results from monitoring Atlantic meridional transport; and results from a European multidisciplinary time series site.
Voltage and Current Clamp Transients with Membrane Dielectric Loss
Fitzhugh, R.; Cole, K. S.
1973-01-01
Transient responses of a space-clamped squid axon membrane to step changes of voltage or current are often approximated by exponential functions of time, corresponding to a series resistance and a membrane capacity of 1.0 μF/cm2. Curtis and Cole (1938, J. Gen. Physiol. 21:757) found, however, that the membrane had a constant phase angle impedance z = z1(jωτ)-α, with a mean α = 0.85. (α = 1.0 for an ideal capacitor; α < 1.0 may represent dielectric loss.) This result is supported by more recently published experimental data. For comparison with experiments, we have computed functions expressing voltage and current transients with constant phase angle capacitance, a parallel leakage conductance, and a series resistance, at nine values of α from 0.5 to 1.0. A series in powers of tα provided a good approximation for short times; one in powers of t-α, for long times; for intermediate times, a rational approximation matching both series for a finite number of terms was used. These computations may help in determining experimental series resistances and parallel leakage conductances from membrane voltage or current clamp data. PMID:4754194
Analysis of crude oil markets with improved multiscale weighted permutation entropy
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun; Liu, Cheng
2018-03-01
Entropy measures have recently been used extensively to study the complexity of nonlinear systems. Weighted permutation entropy (WPE) overcomes the ignorance of amplitude information in plain permutation entropy (PE) and shows a distinctive ability to extract complexity information from data having abrupt changes in magnitude. The improved (sometimes called composite) multiscale (MS) method has the advantage of reducing errors and improving accuracy when used to evaluate multiscale entropy values of time series that are not sufficiently long. In this paper, we combine the merits of WPE and the improved MS method to propose the improved multiscale weighted permutation entropy (IMWPE) method for investigating the complexity of a time series. It is then validated on artificial data (white noise and 1/f noise) and on real market data for Brent and Daqing crude oil. Meanwhile, the complexity properties of crude oil markets are explored for the return series, for volatility series with multiple exponents, and for EEMD-produced intrinsic mode functions (IMFs), which represent different frequency components of the return series. Moreover, the instantaneous amplitude and frequency of Brent and Daqing crude oil are analyzed via the Hilbert transform applied to each IMF.
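The WPE core can be sketched in a few lines; the sketch below is a minimal, illustrative implementation (the embedding dimension, delay, and variance weighting follow the usual WPE definition, not parameters reported by the authors):

```python
import numpy as np
from math import factorial

def weighted_permutation_entropy(x, m=3, tau=1):
    """Normalized weighted permutation entropy of a 1-D series.

    Each length-m embedded vector contributes its ordinal pattern,
    weighted by the vector's variance, so large-amplitude excursions
    count more than in plain permutation entropy."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    weights = {}
    for i in range(n):
        v = x[i:i + m * tau:tau]
        key = tuple(np.argsort(v))              # ordinal pattern of the window
        weights[key] = weights.get(key, 0.0) + np.var(v)
    total = sum(weights.values())
    if total == 0.0:
        return 0.0                              # constant series: no information
    p = np.array(list(weights.values())) / total
    return float(-np.sum(p * np.log(p)) / np.log(factorial(m)))
```

For the improved multiscale variant, the same function is applied to composite coarse-grained copies of the series at each scale and the resulting values are averaged.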
Distinguishing time-delayed causal interactions using convergent cross mapping
Ye, Hao; Deyle, Ethan R.; Gilarranz, Luis J.; Sugihara, George
2015-01-01
An important problem across many scientific fields is the identification of causal effects from observational data alone. Recent methods (convergent cross mapping, CCM) have made substantial progress on this problem by applying the idea of nonlinear attractor reconstruction to time series data. Here, we expand upon the technique of CCM by explicitly considering time lags. Applying this extended method to representative examples (model simulations, a laboratory predator-prey experiment, temperature and greenhouse gas reconstructions from the Vostok ice core, and long-term ecological time series collected in the Southern California Bight), we demonstrate the ability to identify different time-delayed interactions, distinguish between synchrony induced by strong unidirectional-forcing and true bidirectional causality, and resolve transitive causal chains. PMID:26435402
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1983-10-04
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20-800. 6 figs.
Reduction in maximum time uncertainty of paired time signals
Theodosiou, George E.; Dawson, John W.
1983-01-01
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20-800.
Monitoring vegetation phenology using MODIS
Zhang, Xiayong; Friedl, Mark A.; Schaaf, Crystal B.; Strahler, Alan H.; Hodges, John C.F.; Gao, Feng; Reed, Bradley C.; Huete, Alfredo
2003-01-01
Accurate measurements of regional to global scale vegetation dynamics (phenology) are required to improve models and understanding of inter-annual variability in terrestrial ecosystem carbon exchange and climate–biosphere interactions. Since the mid-1980s, satellite data have been used to study these processes. In this paper, a new methodology to monitor global vegetation phenology from time series of satellite data is presented. The method uses a series of piecewise logistic functions, which are fit to remotely sensed vegetation index (VI) data, to represent intra-annual vegetation dynamics. Using this approach, transition dates for vegetation activity within annual time series of VI data can be determined from satellite data. The method allows vegetation dynamics to be monitored at large scales in a fashion that is ecologically meaningful and does not require pre-smoothing of data or the use of user-defined thresholds. Preliminary results based on an annual time series of Moderate Resolution Imaging Spectroradiometer (MODIS) data for the northeastern United States demonstrate that the method is able to monitor vegetation phenology with good success.
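The building block of the method, a logistic function of time fit to a greenup or senescence segment, can be illustrated on synthetic data; the parameter values below are hypothetical, not those of the MODIS study:

```python
import numpy as np

def logistic_vi(t, a, b, c, d):
    """One piecewise-logistic segment: VI(t) = c / (1 + exp(a + b*t)) + d,
    where c is the amplitude of the seasonal cycle and d the background VI."""
    return c / (1.0 + np.exp(a + b * t)) + d

# synthetic greenup segment with hypothetical parameters; the logistic
# inflection (mid-greenup transition date) sits analytically at t = -a/b
a, b, c, d = 12.0, -0.1, 0.6, 0.2
t = np.arange(0.0, 240.0)                  # day of year
vi = logistic_vi(t, a, b, c, d)

# a transition date is where the fitted VI curve changes fastest
rate = np.gradient(vi, t)
t_mid = t[np.argmax(rate)]                 # recovers -a/b = 120 days
```

In the full method the segment parameters are estimated from the VI observations themselves, and transition dates are taken from extrema in the rate of change of curvature of the fitted curves.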
Resolution Enhancement of MODIS-derived Water Indices for Studying Persistent Flooding
NASA Astrophysics Data System (ADS)
Underwood, L. W.; Kalcic, M. T.; Fletcher, R. M.
2012-12-01
Monitoring coastal marshes for persistent flooding and salinity stress is a high-priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offer timely and relatively accurate information for aiding wetland vegetation management. Monitoring accuracy is often limited by mixed pixels, which occur when the area represented by a pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within a 250 m MODIS pixel. Fusion of MODIS and Landsat can improve both spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, a method for estimating sub-pixel values from a MODIS time series of a Normalized Difference Water Index (NDWI), using temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily, NDWI computed from daily 250 m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30 m data. Each MODIS pixel was disaggregated into a mixture of the eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. 
A new daily time series was produced using both the reconstructed 250 m MODIS data, with enhanced features, and the approximated daily 30 m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike, in order to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering flooding events caused by Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High-resolution images were formed for all days in 2008 between the first and the last cloud-free Landsat scenes. To refine and validate flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels adjusted to marsh to optimize thresholds for MODIS-derived time series of NDWI. Seasonal fluctuations were adjusted by subtracting the ten-year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and a combination of indices were compared. Flooding persistence mapped with the higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is that improved mapping of the extent and duration of inundation can be provided.
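The exponential date weighting of the bracketing Landsat scenes can be sketched as follows; the e-folding scale tau is a hypothetical choice, since the abstract does not give the weighting constant:

```python
import numpy as np

def blend_landsat_means(mean_before, mean_after, days_before, days_after, tau=16.0):
    """Blend the per-cover-type Landsat means from the scenes bracketing a
    MODIS date, weighting each scene exponentially by closeness in time.
    tau (days) is a hypothetical e-folding scale, not a value from the study."""
    w_before = np.exp(-days_before / tau)
    w_after = np.exp(-days_after / tau)
    total = w_before + w_after
    return (w_before * np.asarray(mean_before)
            + w_after * np.asarray(mean_after)) / total
```

An equidistant pair reduces to a simple average, while the nearer scene dominates otherwise.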
Resolution Enhancement of MODIS-Derived Water Indices for Studying Persistent Flooding
NASA Technical Reports Server (NTRS)
Underwood, L. W.; Kalcic, Maria; Fletcher, Rose
2012-01-01
Monitoring coastal marshes for persistent flooding and salinity stress is a high-priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offer timely and relatively accurate information for aiding wetland vegetation management. Monitoring accuracy is often limited by mixed pixels, which occur when the area represented by a pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within a 250 m MODIS pixel. Fusion of MODIS and Landsat can improve both spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, a method for estimating sub-pixel values from a MODIS time series of a Normalized Difference Water Index (NDWI), using temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily, NDWI computed from daily 250 m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30 m data. Each MODIS pixel was disaggregated into a mixture of the eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. 
A new daily time series was produced using both the reconstructed 250 m MODIS data, with enhanced features, and the approximated daily 30 m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike, in order to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering flooding events caused by Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High-resolution images were formed for all days in 2008 between the first and the last cloud-free Landsat scenes. To refine and validate flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels adjusted to marsh to optimize thresholds for MODIS-derived time series of NDWI. Seasonal fluctuations were adjusted by subtracting the ten-year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and a combination of indices were compared. Flooding persistence mapped with the higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is that improved mapping of the extent and duration of inundation can be provided.
Time series analysis of infrared satellite data for detecting thermal anomalies: a hybrid approach
NASA Astrophysics Data System (ADS)
Koeppen, W. C.; Pilger, E.; Wright, R.
2011-07-01
We developed and tested an automated algorithm that analyzes thermal infrared satellite time series data to detect and quantify the excess energy radiated from thermal anomalies such as active volcanoes. Our algorithm enhances the previously developed MODVOLC approach, a simple point operation, by adding a more complex time series component based on the methods of the Robust Satellite Techniques (RST) algorithm. Using test sites at Anatahan and Kīlauea volcanoes, the hybrid time series approach detected ~15% more thermal anomalies than MODVOLC with very few, if any, known false detections. We also tested gas flares in the Cantarell oil field in the Gulf of Mexico as an end-member scenario representing very persistent thermal anomalies. At Cantarell, the hybrid algorithm showed only a slight improvement, but it did identify flares that were undetected by MODVOLC. We estimate that at least 80 MODIS images for each calendar month are required to create good reference images necessary for the time series analysis of the hybrid algorithm. The improved performance of the new algorithm over MODVOLC will result in the detection of low temperature thermal anomalies that will be useful in improving our ability to document Earth's volcanic eruptions, as well as detecting low temperature thermal precursors to larger eruptions.
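A minimal sketch of the reference-image idea underlying RST-style detection: a pixel is flagged when it sits many reference-period standard deviations above its own historical mean. This illustrates the general approach, not the MODVOLC or hybrid algorithm itself:

```python
import numpy as np

def thermal_anomaly_index(scene, reference_stack):
    """RST-style local variation index: how many reference-period standard
    deviations a pixel sits above its own historical mean. reference_stack
    holds co-located cloud-free scenes, shape (n_images, rows, cols)."""
    mu = reference_stack.mean(axis=0)
    sigma = reference_stack.std(axis=0)
    # guard against zero spread so constant pixels map to index 0, not NaN
    return (scene - mu) / np.where(sigma > 0, sigma, np.inf)
```

Pixels whose index exceeds a chosen threshold are flagged; building stable per-month reference means is why the authors estimate that at least 80 MODIS images per calendar month are needed.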
Predicting long-term catchment nutrient export: the use of nonlinear time series models
NASA Astrophysics Data System (ADS)
Valent, Peter; Howden, Nicholas J. K.; Szolgay, Jan; Komornikova, Magda
2010-05-01
After the Second World War, nitrate concentrations in European water bodies changed significantly as a result of increased nitrogen fertilizer use and changes in land use. In recent decades, however, as a consequence of the implementation of nitrate-reducing measures in Europe, nitrate concentrations in water bodies have slowly decreased. As a result, the mean and variance of the observed time series also change with time (nonstationarity and heteroscedasticity). In order to detect changes and properly describe the behaviour of such time series, linear models (such as autoregressive (AR), moving average (MA) and autoregressive moving average (ARMA) models) are no longer suitable. Time series with sudden changes in statistical characteristics can cause various problems in the calibration of traditional water quality models and thus give biased predictions. Proper statistical analysis of these non-stationary and heteroscedastic time series, with the aim of detecting and subsequently explaining the variations in their statistical characteristics, requires the use of nonlinear time series models. This information can then be used to improve the building and calibration of conceptual water quality models, or to select the right calibration periods in order to produce reliable predictions. The objective of this contribution is to analyze two long time series of nitrate concentrations of the rivers Ouse and Stour with advanced nonlinear statistical modelling techniques and to compare their performance with traditional linear models of the ARMA class in order to identify changes in the time series characteristics. The time series were analysed with nonlinear models with multiple regimes, represented by self-exciting threshold autoregressive (SETAR) and Markov-switching (MSW) models. 
The analysis showed that, based on the value of the residual sum of squares (RSS) in both datasets, SETAR and MSW models described the time series better than models of the ARMA class. In most cases the relative improvement of SETAR models against first-order AR models was low, ranging between 1% and 4%, with the exception of the three-regime model for the River Stour time series, where the improvement was 48.9%. In comparison, the relative improvement of MSW models was between 44.6% and 52.5% for two-regime models and from 60.4% to 75% for three-regime models. However, the visual assessment of models plotted against the original datasets showed that, despite a higher value of RSS, some ARMA models could describe the analyzed time series better than AR, MA and SETAR models with lower values of RSS. In both datasets MSW models provided a very good visual fit, describing most of the extreme values.
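A two-regime SETAR(1) fit reduces to ordinary least squares on each side of a candidate threshold, with the threshold chosen by grid search on RSS; the sketch below demonstrates this on simulated data (all parameters illustrative) and shows the RSS improvement over a single AR(1):

```python
import numpy as np

rng = np.random.default_rng(42)

# simulate a two-regime SETAR(1) series (all parameters illustrative)
n, r_true = 2000, 0.0
x = np.zeros(n)
for t in range(1, n):
    a = 0.8 if x[t - 1] <= r_true else -0.5
    x[t] = a * x[t - 1] + rng.normal(scale=0.5)

def fit_ar1(y_lag, y):
    """OLS AR(1) slope through the origin and its residual sum of squares."""
    a = (y_lag @ y) / (y_lag @ y_lag)
    res = y - a * y_lag
    return a, res @ res

def fit_setar(x, thresholds, min_obs=10):
    """Grid-search the threshold; fit one AR(1) per regime, keep min total RSS."""
    y_lag, y = x[:-1], x[1:]
    best_rss, best_r = np.inf, None
    for r in thresholds:
        lo = y_lag <= r
        if lo.sum() < min_obs or (~lo).sum() < min_obs:
            continue
        rss = fit_ar1(y_lag[lo], y[lo])[1] + fit_ar1(y_lag[~lo], y[~lo])[1]
        if rss < best_rss:
            best_rss, best_r = rss, r
    return best_rss, best_r

rss_setar, r_hat = fit_setar(x, np.linspace(-1.0, 1.0, 81))
rss_ar = fit_ar1(x[:-1], x[1:])[1]   # single-regime AR(1) benchmark
```

Because the per-regime fits each minimize their own RSS, the SETAR total can never exceed the single AR(1) RSS; the interesting question, as in the paper, is whether the improvement is large enough to matter.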
NASA Astrophysics Data System (ADS)
Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal
2018-06-01
Prediction of water amount that will enter the reservoirs in the following month is of vital importance especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey was used for training the method, which was then applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
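The chaotic-approach input matrix is a time-delay (phase-space) embedding of the flow series; a minimal sketch, with an arbitrary dimension and delay rather than values estimated from the Kızılırmak record:

```python
import numpy as np

def embed(series, dim, delay):
    """Time-delay (phase-space) embedding: row i collects
    series[i], series[i+delay], ..., series[i+(dim-1)*delay]."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[k * delay:k * delay + n] for k in range(dim)])

flow = np.sin(np.linspace(0.0, 20.0, 200))   # stand-in for monthly river flow
dim, delay = 4, 2                            # illustrative, not estimated values
X = embed(flow, dim, delay)
X, y = X[:-1], flow[(dim - 1) * delay + 1:]  # row i predicts flow[i + 7]
```

Each row of X is one training input for the regression stage; in practice the embedding dimension and delay would be chosen by standard phase-space diagnostics rather than fixed by hand.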
USDA-ARS?s Scientific Manuscript database
We conduct a novel comprehensive investigation that seeks to prove the connection between spatial and time scales in surface soil moisture (SM) within the satellite footprint (~50 km). Modeled and measured point series at Yanco and Little Washita in situ networks are first decomposed into anomalies ...
NASA Astrophysics Data System (ADS)
Donges, Jonathan; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik; Marwan, Norbert; Dijkstra, Henk; Kurths, Jürgen
2016-04-01
We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience, representing the structure of statistical interrelationships in large data sets of time series, and, subsequently, for the investigation of this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology. pyunicorn is available online at https://github.com/pik-copan/pyunicorn. Reference: J.F. Donges, J. Heitzig, B. Beronov, M. Wiedermann, J. Runge, Q.-Y. Feng, L. Tupikina, V. Stolbova, R.V. Donner, N. Marwan, H.A. Dijkstra, and J. Kurths, Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package, Chaos 25, 113101 (2015), DOI: 10.1063/1.4934554, Preprint: arxiv.org:1507.01571 [physics.data-an].
A Markovian Entropy Measure for the Analysis of Calcium Activity Time Series.
Marken, John P; Halleran, Andrew D; Rahman, Atiqur; Odorizzi, Laura; LeFew, Michael C; Golino, Caroline A; Kemper, Peter; Saha, Margaret S
2016-01-01
Methods to analyze the dynamics of calcium activity often rely on visually distinguishable features in time series data such as spikes, waves, or oscillations. However, systems such as the developing nervous system display a complex, irregular type of calcium activity which makes the use of such methods less appropriate. Instead, for such systems there exists a class of methods (including information theoretic, power spectral, and fractal analysis approaches) which use more fundamental properties of the time series to analyze the observed calcium dynamics. We present a new analysis method in this class, the Markovian Entropy measure, an easily implementable calcium time series analysis method that represents the observed calcium activity as a realization of a Markov process and describes its dynamics in terms of the level of predictability underlying the transitions between the states of the process. We applied our measure, along with other commonly used calcium analysis methods, to a dataset from Xenopus laevis neural progenitors, which displays irregular calcium activity, and a dataset from murine synaptic neurons, which displays activity time series that are well described by visually distinguishable features. We find that the Markovian Entropy measure is able to distinguish between biologically distinct populations in both datasets, and that it can separate biologically distinct populations to a greater extent than other methods in the dataset exhibiting irregular calcium activity. These results support the benefit of using the Markovian Entropy measure to analyze calcium dynamics, particularly for studies using time series data which do not exhibit easily distinguishable features.
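A minimal sketch of such a measure, assuming equal-width amplitude binning and an entropy rate weighted by state occupancy; the authors' exact discretization may differ:

```python
import numpy as np

def markovian_entropy(series, n_states=4):
    """Entropy rate of the series viewed as a realization of a Markov chain.

    The trace is discretized into n_states equal-width amplitude states,
    transition probabilities are estimated by counting, and the entropy
    rate averages the per-state transition entropies weighted by state
    occupancy. Bin count and binning scheme are illustrative choices."""
    s = np.asarray(series, dtype=float)
    edges = np.linspace(s.min(), s.max(), n_states + 1)
    states = np.clip(np.digitize(s, edges) - 1, 0, n_states - 1)
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1.0
    occupancy = counts.sum(axis=1)
    p_state = occupancy / occupancy.sum()
    h = 0.0
    for i in range(n_states):
        if occupancy[i] == 0:
            continue
        p = counts[i] / occupancy[i]
        p = p[p > 0]
        h -= p_state[i] * (p * np.log2(p)).sum()
    return h
```

A perfectly predictable trace (e.g. a strict alternation) scores 0 bits, while transitions that look random score close to log2(n_states).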
GPS Time Series and Geodynamic Implications for the Hellenic Arc Area, Greece
NASA Astrophysics Data System (ADS)
Hollenstein, Ch.; Heller, O.; Geiger, A.; Kahle, H.-G.; Veis, G.
The quantification of crustal deformation and its temporal behavior is an important contribution to earthquake hazard assessment. With GPS measurements, especially from continuous operating stations, pre-, co-, post- and interseismic movements can be recorded and monitored. We present results of a continuous GPS network which has been operated in the Hellenic Arc area, Greece, since 1995. In order to obtain coordinate time series of high precision which are representative for crustal deformation, a main goal was to eliminate effects which are not of tectonic origin. By applying different steps of improvement, non-tectonic irregularities were reduced significantly, and the precision could be improved by an average of 40%. The improved time series are used to study the crustal movements in space and time. They serve as a base for the estimation of velocities and for the visualization of the movements in terms of trajectories. Special attention is given to large earthquakes (M>6), which occurred near GPS sites during the measuring time span.
Arbitrary-order corrections for finite-time drift and diffusion coefficients
NASA Astrophysics Data System (ADS)
Anteneodo, C.; Riera, R.
2009-09-01
We address a standard class of diffusion processes with linear drift and quadratic diffusion coefficients. These contributions to dynamic equations can be directly drawn from data time series. However, real data are constrained to finite sampling rates, and therefore it is crucial to establish a suitable mathematical description of the required finite-time corrections. Based on Itô-Taylor expansions, we present the exact corrections to the finite-time drift and diffusion coefficients. These results allow the real hidden coefficients to be reconstructed from the empirical estimates. We also derive higher-order finite-time expressions for the third and fourth conditional moments that furnish extra theoretical checks for this class of diffusion models. The analytical predictions are compared with the numerical outcomes of representative artificial time series.
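The naive finite-time estimators that the exact corrections are meant to repair can be sketched on a simulated Ornstein-Uhlenbeck process (linear drift, constant diffusion; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ornstein-Uhlenbeck toy model: drift D1(x) = -g*x, diffusion D2(x) = q
g, q, dt, n = 1.0, 0.5, 0.01, 200_000
noise = np.sqrt(2.0 * q * dt) * rng.normal(size=n - 1)
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = x[t] - g * x[t] * dt + noise[t]

def finite_time_coefficients(x, dt, bins=20, min_obs=200):
    """Naive finite-time estimates: conditional mean and half conditional
    variance of the increments, per bin of the starting value. At finite
    dt these carry biases, which is what the exact corrections repair."""
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), bins + 1)[1:-1]
    idx = np.digitize(x[:-1], edges)
    centers, d1, d2 = [], [], []
    for i in range(bins):
        m = idx == i
        if m.sum() < min_obs:
            continue
        centers.append(x[:-1][m].mean())
        d1.append(dx[m].mean() / dt)
        d2.append(dx[m].var() / (2.0 * dt))
    return np.array(centers), np.array(d1), np.array(d2)

centers, d1, d2 = finite_time_coefficients(x, dt)
slope = np.polyfit(centers, d1, 1)[0]   # recovers the drift slope, close to -g
```

With exact (non-Euler) sampling of a real process, d1 and d2 would be systematically biased at finite dt, and the paper's corrections map such estimates back to the true coefficients.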
van Aart, C; Boshuizen, H; Dekkers, A; Korthals Altes, H
2017-05-01
In low-incidence countries, most tuberculosis (TB) cases are foreign-born. We explored the temporal relationship between immigration and TB in first-generation immigrants between 1995 and 2012 to assess whether immigration can be a predictor for TB in immigrants from high-incidence countries. We obtained monthly data on immigrant TB cases and immigration for the three countries of origin most frequently represented among TB cases in the Netherlands: Morocco, Somalia and Turkey. The best-fitting seasonal autoregressive integrated moving average (SARIMA) model for the immigration time series was used to prewhiten the TB time series. The cross-correlation function (CCF) was then computed on the residual time series to detect time lags between immigration and TB rates. We identified a 17-month lag between Somali immigration and Somali immigrant TB cases, but no time lag for immigrants from Morocco and Turkey. The absence of a lag in the Moroccan and Turkish population may be attributed to the relatively low TB prevalence in the countries of origin and an increased likelihood of reactivation TB in an ageing immigrant population. Understanding the time lag between Somali immigration and TB disease would benefit from a closer epidemiological analysis of cohorts of Somali cases diagnosed within the first years after entry.
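The prewhitening-plus-CCF step can be sketched on synthetic monthly data with a built-in 17-month echo; an AR(1) filter stands in for the best-fitting SARIMA model, so this is an illustration of the logic, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(7)

# monthly immigration as an AR(1); TB cases echo immigration 17 months later
n, lag_true = 216, 17
imm = np.zeros(n)
for t in range(1, n):
    imm[t] = 0.7 * imm[t - 1] + rng.normal()
tb = 0.6 * np.roll(imm, lag_true) + 0.3 * rng.normal(size=n)
tb[:lag_true] = 0.3 * rng.normal(size=lag_true)   # months before any echo

# prewhiten: fit an AR(1) to immigration and filter BOTH series with it
phi = (imm[:-1] @ imm[1:]) / (imm[:-1] @ imm[:-1])
imm_w = imm[1:] - phi * imm[:-1]
tb_w = tb[1:] - phi * tb[:-1]

def ccf_peak(x, y, max_lag=36):
    """Lag at which y (here TB) best trails x (here immigration)."""
    cc = [np.corrcoef(x[:len(x) - k], y[k:])[0, 1] for k in range(max_lag + 1)]
    return int(np.argmax(cc))

lag_hat = ccf_peak(imm_w, tb_w)   # recovers the built-in delay
```

Filtering both series with the model fitted to the input removes the input's own autocorrelation, so a peak in the residual CCF points at a genuine lead-lag relationship rather than shared persistence.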
Groundwater similarity across a watershed derived from time-warped and flow-corrected time series
NASA Astrophysics Data System (ADS)
Rinderer, M.; McGlynn, B. L.; van Meerveld, H. J.
2017-05-01
Information about catchment-scale groundwater dynamics is necessary to understand how catchments store and release water and why water quantity and quality varies in streams. However, groundwater level monitoring is often restricted to a limited number of sites. Knowledge of the factors that determine similarity between monitoring sites can be used to predict catchment-scale groundwater storage and connectivity of different runoff source areas. We used distance-based and correlation-based similarity measures to quantify the spatial and temporal differences in shallow groundwater similarity for 51 monitoring sites in a Swiss prealpine catchment. The 41-month-long time series were preprocessed using Dynamic Time-Warping and a Flow-corrected Time Transformation to account for small timing differences and bias toward low-flow periods. The mean distance-based groundwater similarity was correlated to topographic indices, such as upslope contributing area, topographic wetness index, and local slope. Correlation-based similarity was less related to landscape position but instead revealed differences between seasons. Analysis of variance and partial Mantel tests showed that landscape position, represented by the topographic wetness index, explained 52% of the variability in mean distance-based groundwater similarity, while spatial distance, represented by the Euclidean distance, explained only 5%. The variability in distance-based similarity and correlation-based similarity between groundwater and streamflow time series was significantly larger for midslope locations than for other landscape positions. This suggests that groundwater dynamics at these midslope sites, which are important to understand runoff source areas and hydrological connectivity at the catchment scale, are most difficult to predict.
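Dynamic time warping, used here to absorb small timing differences before computing distance-based similarity, can be sketched with the classic dynamic program (a plain O(nm) version without windowing):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance: aligns the two series with small local
    time shifts before accumulating pointwise |a_i - b_j| costs, so
    groundwater responses peaking a few steps apart still count as similar."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0.0, 6.0, 60)
rise = np.sin(t)
shifted = np.sin(t - 0.4)   # same response shape, slightly delayed
```

Because warping can absorb the small delay, dtw_distance(rise, shifted) comes out smaller than the lock-step L1 distance np.abs(rise - shifted).sum().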
Turbulence Time Series Data Hole Filling using Karhunen-Loeve and ARIMA methods
2007-01-01
Memory is represented by higher values of d. An ARIMA(0,d,0) model was applied to predict the behaviour of the final section of the series ... a simplified ARIMA(0,d,0) model performed better than the linear interpolant but less effectively than the KL algorithm, disregarding edge effects. arXiv:physics/0701238v1, 22 Jan 2007.
POD Model Reconstruction for Gray-Box Fault Detection
NASA Technical Reports Server (NTRS)
Park, Han; Zak, Michail
2007-01-01
Proper orthogonal decomposition (POD) is the mathematical basis of a method of constructing low-order mathematical models for the "gray-box" fault-detection algorithm that is a component of a diagnostic system known as beacon-based exception analysis for multi-missions (BEAM). POD has been successfully applied in reducing computational complexity by generating simple models that can be used for control and simulation for complex systems such as fluid flows. In the present application to BEAM, POD brings the same benefits to automated diagnosis. BEAM is a method of real-time or offline, automated diagnosis of a complex dynamic system. The gray-box approach makes it possible to utilize incomplete or approximate knowledge of the dynamics of the system that one seeks to diagnose. In the gray-box approach, a deterministic model of the system is used to filter a time series of system sensor data to remove the deterministic components of the time series from further examination. What is left after the filtering operation is a time series of residual quantities that represent the unknown (or at least unmodeled) aspects of the behavior of the system. Stochastic modeling techniques are then applied to the residual time series. The procedure for detecting abnormal behavior of the system then becomes one of looking for statistical differences between the residual time series and the predictions of the stochastic model.
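The POD step, projecting snapshot data onto leading singular vectors and handing the residual on to stochastic modeling, can be sketched as follows (the snapshot matrix is synthetic and illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# snapshot matrix: each column is the system state at one time step, built
# here from two coherent spatial modes plus small sensor noise (synthetic)
t = np.linspace(0.0, 10.0, 200)
space = np.linspace(0.0, 1.0, 50)
snapshots = (np.outer(np.sin(np.pi * space), np.sin(t))
             + 0.3 * np.outer(np.sin(2.0 * np.pi * space), np.cos(3.0 * t))
             + 0.01 * rng.normal(size=(50, 200)))

def pod_filter(snapshots, rank):
    """Project the data onto the leading POD modes (left singular vectors);
    return the low-order deterministic reconstruction and the residual
    that the gray-box step hands on to stochastic modeling."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :rank]
    recon = modes @ (modes.T @ snapshots)
    return recon, snapshots - recon

recon, residual = pod_filter(snapshots, rank=2)
```

Here two modes capture nearly all of the deterministic structure, leaving a residual dominated by the unmodeled noise, which is exactly the quantity the stochastic-modeling stage would then characterize.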
Describing temporal variability of the mean Estonian precipitation series in climate time scale
NASA Astrophysics Data System (ADS)
Post, P.; Kärner, O.
2009-04-01
Applicability of random walk type models to represent the temporal variability of various atmospheric temperature series has been successfully demonstrated recently (e.g. Kärner, 2002). The main problem in temperature modeling is connected to the scale break in the generally self-similar air temperature anomaly series (Kärner, 2005). The break separates short-range strong non-stationarity from nearly stationary longer-range variability. This is an indication of the fact that several geophysical time series show short-range non-stationary behaviour and stationary behaviour in the longer range (Davis et al., 1996). In order to model series like that, the choice of time step appears to be crucial. To characterize the long-range variability, we can neglect the short-range non-stationary fluctuations, provided that we are able to model the long-range tendencies properly. The structure function (Monin and Yaglom, 1975) was used to determine an approximate segregation line between the short and the long scale in terms of modeling. The longer scale can be called the climate scale, because such models are applicable on scales over some decades. In order to get rid of the short-range fluctuations in daily series, the variability can be examined using a sufficiently long time step. In the present paper, we show that the same philosophy is useful for finding a model to represent the climate-scale temporal variability of the Estonian daily mean precipitation amount series over 45 years (1961-2005). Temporal variability of the obtained daily time series is examined by means of an autoregressive integrated moving average (ARIMA) family model of type (0,1,1). This model is applicable for simulating daily precipitation if an appropriate time step is selected that enables us to neglect the short-range non-stationary fluctuations. A considerably longer time step than one day (30 days) is used in the current paper to model the precipitation time series variability. 
Each ARIMA(0,1,1) model can be interpreted as consisting of a random walk in a noisy environment (Box and Jenkins, 1976). The fitted model appears to be weakly non-stationary, which gives us the possibility of using a stationary approximation if only the noise component of the white-noise plus random-walk sum is exploited. We obtain a convenient routine for generating a stationary precipitation climatology with reasonable accuracy, since the noise component variance is much larger than the dispersion of the random walk generator. This interpretation emphasizes the dominating role of the random component in the precipitation series. The result is understandable given the small territory of Estonia, which is situated in the mid-latitude cyclone track. References: Box, G.E.P. and G. Jenkins 1976: Time Series Analysis, Forecasting and Control (revised edn.), Holden Day, San Francisco, CA, 575 pp. Davis, A., Marshak, A., Wiscombe, W. and R. Cahalan 1996: Multifractal characterizations of intermittency in nonstationary geophysical signals and fields. In G. Trevino et al. (eds) Current Topics in Nonstationarity Analysis. World Scientific, Singapore, 97-158. Kärner, O. 2002: On nonstationarity and antipersistency in global temperature series. J. Geophys. Res. D107; doi:10.1029/2001JD002024. Kärner, O. 2005: Some examples on negative feedback in the Earth climate system. Centr. European J. Phys. 3; 190-208. Monin, A.S. and A.M. Yaglom 1975: Statistical Fluid Mechanics, Vol. 2: Mechanics of Turbulence, MIT Press, Boston, Mass., 886 pp.
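The ARIMA(0,1,1) structure can be sketched numerically: the differenced series is MA(1), whose lag-1 autocorrelation satisfies r1 = -theta/(1 + theta^2), so the parameter can be recovered by a moment estimate. A minimal stdlib-only sketch (the 30-day aggregation and the Estonian data are not reproduced here):

```python
import math
import random

def simulate_arima011(n, theta, sigma=1.0, seed=1):
    """ARIMA(0,1,1): y_t = y_{t-1} + a_t - theta*a_{t-1},
    i.e. a random walk observed through MA(1)-filtered noise."""
    rng = random.Random(seed)
    y, level, a_prev = [], 0.0, 0.0
    for _ in range(n):
        a = rng.gauss(0.0, sigma)
        level += a - theta * a_prev
        y.append(level)
        a_prev = a
    return y

def estimate_theta(y):
    """Moment estimate of the MA(1) parameter from the differenced series,
    using r1 = -theta / (1 + theta**2) for the lag-1 autocorrelation."""
    d = [y[i] - y[i - 1] for i in range(1, len(y))]
    mean = sum(d) / len(d)
    c0 = sum((x - mean) ** 2 for x in d) / len(d)
    c1 = sum((d[i] - mean) * (d[i + 1] - mean) for i in range(len(d) - 1)) / len(d)
    r1 = c1 / c0
    disc = max(1.0 - 4.0 * r1 * r1, 0.0)           # clip for sampling noise
    return (-1.0 + math.sqrt(disc)) / (2.0 * r1)   # invertible root of the quadratic
```

On a long simulated series the moment estimate recovers the generating parameter to within sampling error.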
Ordinary kriging as a tool to estimate historical daily streamflow records
Farmer, William H.
2016-01-01
Efficient and responsible management of water resources relies on accurate streamflow records. However, many watersheds are ungaged, limiting the ability to assess and understand local hydrology. Several tools have been developed to alleviate this data scarcity, but few provide continuous daily streamflow records at individual streamgages within an entire region. Building on the history of hydrologic mapping, ordinary kriging was extended to predict daily streamflow time series on a regional basis. Pooling parameters to estimate a single, time-invariant characterization of spatial semivariance structure is shown to produce accurate reproduction of streamflow. This approach is contrasted with a time-varying series of variograms, representing the temporal evolution and behavior of the spatial semivariance structure. Furthermore, the ordinary kriging approach is shown to produce more accurate time series than more common, single-index hydrologic transfers. A comparison between topological kriging and ordinary kriging is less definitive, showing the ordinary kriging approach to be significantly inferior in terms of Nash–Sutcliffe model efficiencies while maintaining significantly superior performance measured by root mean squared errors. Given the similarity of performance and the computational efficiency of ordinary kriging, it is concluded that ordinary kriging is useful for first-order approximation of daily streamflow time series in ungaged watersheds.
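The ordinary kriging step can be illustrated compactly: solve the kriging system with a unit-sum (Lagrange multiplier) constraint, given a semivariogram. A sketch with a hypothetical exponential variogram; the paper's pooled, time-invariant variogram fit is not reproduced:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, variogram):
    """Ordinary kriging prediction at xy0 from samples (xy, z).
    `variogram(h)` returns the semivariance at lag distance h."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)               # sample-to-sample semivariances
    A[n, n] = 0.0                          # Lagrange-multiplier corner
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z), w[:n]         # prediction and kriging weights
```

Because ordinary kriging is an exact interpolator, predicting at a sample location returns that sample value and the weights sum to one.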
Feature extraction for change analysis in SAR time series
NASA Astrophysics Data System (ADS)
Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan
2015-10-01
In remote sensing, change detection represents a broad field of research. If time series data are available, change detection can be used for monitoring applications. These applications require regular image acquisitions at an identical time of day over a defined period. Among remote sensing sensors, radar is especially well suited for applications requiring regularity, since it is independent of most weather and atmospheric influences. Furthermore, the time of day of image acquisition plays no role, owing to the independence from daylight. Since 2007, the German SAR (Synthetic Aperture Radar) satellite TerraSAR-X (TSX) has permitted the acquisition of high resolution radar images suitable for the analysis of dense built-up areas. In a former study, we presented the change analysis of the Stuttgart (Germany) airport. The aim of this study is the categorization of the changes detected in the time series. This categorization is motivated by the fact that it is of limited value only to state where and when a specific area has changed; at least as important is a statement of what caused the change. The focus is set on the analysis of so-called high activity areas (HAA), representing areas that change at least four times over the investigated period. As a first step in categorizing these HAAs, the matching HAA changes (blobs) have to be identified. Afterwards, operating at this object-based blob level, several features are extracted, comprising shape-based, radiometric, statistical and morphological values, and one context feature based on a segmentation of the HAAs. This segmentation builds on morphological differential attribute profiles (DAPs). Seven context classes are established: urban, infrastructure, rural stable, rural unstable, natural, water and unclassified. A specific HA blob is assigned to one of these classes by analyzing the CovAmCoh time series signature of the surrounding segments. 
In addition, surrounding GIS information is included to verify the CovAmCoh-based context assignment. In this paper, the focus is set on the features extracted for a later change categorization procedure.
Machine learning for cardiac ultrasound time series data
NASA Astrophysics Data System (ADS)
Yuan, Baichuan; Chitturi, Sathya R.; Iyer, Geoffrey; Li, Nuoyu; Xu, Xiaochuan; Zhan, Ruohan; Llerena, Rafael; Yen, Jesse T.; Bertozzi, Andrea L.
2017-03-01
We consider the problem of identifying frames in a cardiac ultrasound video associated with left ventricular chamber end-systolic (ES, contraction) and end-diastolic (ED, expansion) phases of the cardiac cycle. Our procedure involves a simple application of non-negative matrix factorization (NMF) to a series of frames of a video from a single patient. Rank-2 NMF is performed to compute two end-members. The end members are shown to be close representations of the actual heart morphology at the end of each phase of the heart function. Moreover, the entire time series can be represented as a linear combination of these two end-member states thus providing a very low dimensional representation of the time dynamics of the heart. Unlike previous work, our methods do not require any electrocardiogram (ECG) information in order to select the end-diastolic frame. Results are presented for a data set of 99 patients including both healthy and diseased examples.
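The rank-2 factorization at the core of this approach can be sketched with the classic Lee-Seung multiplicative updates (a generic NMF sketch, not the authors' implementation; each column of V would hold one vectorized ultrasound frame, and the two columns of W play the role of the ES/ED end-members):

```python
import numpy as np

def nmf(V, r=2, iters=500, seed=0):
    """Rank-r NMF via Lee-Seung multiplicative updates: V ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1   # nonnegative init
    H = rng.random((r, m)) + 0.1
    eps = 1e-9                     # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

On data that is genuinely (near) rank-2 and nonnegative, the reconstruction error becomes small and both factors remain nonnegative, which is what makes the rows of H interpretable as mixing coefficients of the two end-member states.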
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1981-02-11
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal-selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.
A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis
NASA Astrophysics Data System (ADS)
Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz
2018-04-01
For the first time, we introduced probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove the Common Mode Error (CME) without interpolation of missing values. We used data from International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series; CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when the dataset contains two time series having fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed), which, compared to the variances of the other PCs (less than 8%), means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of station velocities estimated from the filtered residuals, by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analysed in the context of the influence of environmental mass loading on the filtering results. Subtraction of the environmental loading models from the GNSS residuals leads to a reduction of the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
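The common-mode-removal idea can be sketched for the complete-data case: stack station residuals into a matrix, take the first principal component as the CME field, and subtract it. This is an ordinary-PCA sketch only; the paper's contribution is precisely that pPCA additionally handles missing epochs (via an EM formulation), which is not reproduced here:

```python
import numpy as np

def remove_cme(residuals):
    """Estimate the Common Mode Error as the first principal component of
    the station-residual matrix (epochs in rows, stations in columns) and
    subtract it. Complete-data sketch; pPCA would handle gaps via EM."""
    X = residuals - residuals.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    cme = np.outer(U[:, 0] * s[0], Vt[0])   # rank-1 common-mode field
    explained = s[0] ** 2 / (s ** 2).sum()  # variance fraction of the 1st PC
    return X - cme, cme, explained
```

Injecting a shared signal into several synthetic stations, the first PC captures it and filtering sharply reduces the residual variance.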
NASA Astrophysics Data System (ADS)
Van Uytven, Els; Willems, Patrick
2017-04-01
Current trends in the hydro-meteorological variables indicate the potential impact of climate change on hydrological extremes, and therefore trigger an increased importance of climate adaptation strategies in water management. The impact of climate change on hydro-meteorological and hydrological extremes is, however, highly uncertain, due to uncertainties introduced by the climate models, the internal variability inherent to the climate system, the greenhouse gas scenarios and the statistical downscaling methods. Defining sustainable climate adaptation strategies therefore requires assessing these uncertainties, which is commonly done by means of ensemble approaches. As more and more climate models and statistical downscaling methods become available, there is a need to facilitate the climate impact and uncertainty analysis. A Climate Perturbation Tool has been developed for that purpose, which combines a set of statistical downscaling methods including weather typing, weather generator, transfer function and advanced perturbation based approaches. By use of an interactive interface, climate impact modelers can apply these statistical downscaling methods in a semi-automatic way to an ensemble of climate model runs. The tool is applicable to any region, but has so far been demonstrated on cases in Belgium, Suriname, Vietnam and Bangladesh. Time series representing future local-scale precipitation, temperature and potential evapotranspiration (PET) conditions were obtained, starting from time series of historical observations. Uncertainties on the future meteorological conditions are represented in two different ways: through an ensemble of time series, and through a reduced set of synthetic scenarios. Both aim to span the full uncertainty range as assessed from the ensemble of climate model runs and downscaling methods. 
For Belgium, for instance, use was made of 100-year time series of 10-minute precipitation observations and daily temperature and PET observations at Uccle, together with a large ensemble of 160 global climate model runs (CMIP5). These cover all four representative concentration pathway based greenhouse gas scenarios. While evaluating the downscaled meteorological series, particular attention was given to the performance of extreme value metrics (e.g., for precipitation, by means of intensity-duration-frequency statistics). Moreover, the total uncertainty was decomposed into the fractional uncertainties for each of the uncertainty sources considered. Research assessing the additional uncertainty due to parameter and structural uncertainties of the hydrological impact model is ongoing.
ERIC Educational Resources Information Center
Yang, Manshu; Chow, Sy-Miin
2010-01-01
Facial electromyography (EMG) is a useful physiological measure for detecting subtle affective changes in real time. A time series of EMG data contains bursts of electrical activity that increase in magnitude when the pertinent facial muscles are activated. Whereas previous methods for detecting EMG activation are often based on deterministic or…
An evaluation of Dynamic TOPMODEL for low flow simulation
NASA Astrophysics Data System (ADS)
Coxon, G.; Freer, J. E.; Quinn, N.; Woods, R. A.; Wagener, T.; Howden, N. J. K.
2015-12-01
Hydrological models are essential tools for drought risk management, often providing input to water resource system models, aiding our understanding of low flow processes within catchments and providing low flow predictions. However, simulating low flows and droughts is challenging, as hydrological systems often demonstrate threshold effects in connectivity, non-linear groundwater contributions and a greater influence of water resource system elements during low flow periods. These dynamic processes are typically not well represented in commonly used hydrological models due to data and model limitations. Furthermore, calibrated or behavioural models may not be effectively evaluated during more extreme drought periods. A better understanding of the processes that occur during low flows, and of how these are represented within models, is thus required if we are to provide robust and reliable predictions of future drought events. In this study, we assess the performance of Dynamic TOPMODEL for low flow simulation. Dynamic TOPMODEL was applied to a number of UK catchments in the Thames region using time series of observed rainfall and potential evapotranspiration data that captured multiple historic droughts over a period of several years. Model performance was assessed against the observed discharge time series using a limits of acceptability framework, which included uncertainty in the discharge time series. We evaluate the models against multiple signatures of catchment low flow behaviour and investigate differences in model performance between catchments, between model diagnostics, and across different low flow periods. We also considered the impact of surface water and groundwater abstractions and discharges on the observed discharge time series, and how this affected the model evaluation. From this analysis of model performance, we suggest future improvements to Dynamic TOPMODEL to improve the representation of low flow processes within the model structure.
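The limits-of-acceptability evaluation reduces to checking, time step by time step, whether the simulated discharge falls inside observation-uncertainty bounds. A minimal sketch (in practice the bounds would come from a rating-curve uncertainty analysis, not be supplied as plain lists):

```python
def loa_score(sim, lower, upper):
    """Fraction of time steps where the simulated discharge lies within
    the observation-uncertainty limits of acceptability."""
    inside = sum(1 for s, lo, hi in zip(sim, lower, upper) if lo <= s <= hi)
    return inside / len(sim)
```

A model run is then accepted or rejected by comparing this fraction (or per-step scores) against a threshold, optionally computed separately for low flow periods.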
NASA Astrophysics Data System (ADS)
Gutowska, Dorota; Piskozub, Jacek
2017-04-01
There is a vast body of literature on climate indices and the processes they represent. A large part of it deals with "teleconnections", or causal relations between them. Until recently, time-lagged correlation was the best available tool for studying causation; however, no correlation (even lagged) proves causation. We use a recently developed method for studying causal relations between short time series, Convergent Cross Mapping (CCM), to search for causation between the atmospheric (AO and NAO) and oceanic (AMO) indices. The version we have chosen (available as the R language package rEDM) allows for comparing time series with time lags. This work builds on a previous one, which showed with time-lagged correlations that AO/NAO precedes AMO by about 15 years and at the same time is preceded by AMO (but with an inverted sign), also by the same amount of time. This behaviour is identical to the relationship of a sine and cosine with the same period. This may suggest that the multidecadal oscillatory parts of the atmospheric and oceanic indices represent the same global-scale set of processes; in other words, they may be symptoms of the same oscillation. The aim of the present study is to test this hypothesis with a tool created specially for discovering causal relationships in dynamic systems.
Johnson, Timothy C.; Slater, Lee D.; Ntarlagiannis, Dimitris; Day-Lewis, Frederick D.; Elwaseif, Mehrez
2012-01-01
Time-lapse resistivity imaging is increasingly used to monitor hydrologic processes. Compared to conventional hydrologic measurements, surface time-lapse resistivity provides superior spatial coverage in two or three dimensions, potentially high-resolution information in time, and information in the absence of wells. However, interpretation of time-lapse electrical tomograms is complicated by the ever-increasing size and complexity of long-term, three-dimensional (3-D) time series conductivity data sets. Here we use 3-D surface time-lapse electrical imaging to monitor subsurface electrical conductivity variations associated with stage-driven groundwater-surface water interactions along a stretch of the Columbia River adjacent to the Hanford 300 near Richland, Washington, USA. We reduce the resulting 3-D conductivity time series using both time-series and time-frequency analyses to isolate a paleochannel causing enhanced groundwater-surface water interactions. Correlation analysis on the time-lapse imaging results concisely represents enhanced groundwater-surface water interactions within the paleochannel, and provides information concerning groundwater flow velocities. Time-frequency analysis using the Stockwell (S) transform provides additional information by identifying the stage periodicities driving groundwater-surface water interactions due to upstream dam operations, and identifying segments in time-frequency space when these interactions are most active. These results provide new insight into the distribution and timing of river water intrusion into the Hanford 300 Area, which has a governing influence on the behavior of a uranium plume left over from historical nuclear fuel processing operations.
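The Stockwell (S) transform used for the time-frequency analysis admits a compact FFT-based implementation: each frequency voice is the inverse FFT of the shifted spectrum windowed by a frequency-scaled Gaussian. This is a textbook sketch of the discrete S-transform, not the authors' code:

```python
import numpy as np

def stockwell(x, fmax=None):
    """Minimal discrete Stockwell (S) transform.
    Returns S[n, t] for voices (frequency bins) n = 1..fmax."""
    N = len(x)
    if fmax is None:
        fmax = N // 2
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                 # frequency offsets -N/2..N/2-1
    S = np.zeros((fmax + 1, N), dtype=complex)
    for n in range(1, fmax + 1):
        gauss = np.exp(-2.0 * np.pi ** 2 * m ** 2 / n ** 2)  # voice window
        S[n] = np.fft.ifft(np.roll(X, -n) * gauss)
    return S
```

For a stationary sinusoid the energy concentrates in the voice at its frequency bin; for stage-driven data the time axis additionally localizes when each periodicity is active.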
Detecting and interpreting distortions in hierarchical organization of complex time series
NASA Astrophysics Data System (ADS)
Drożdż, Stanisław; Oświęcimka, Paweł
2015-03-01
Hierarchical organization is a cornerstone of complexity and multifractality constitutes its central quantifying concept. For model uniform cascades the corresponding singularity spectra are symmetric while those extracted from empirical data are often asymmetric. Using selected time series representing such diverse phenomena as price changes and intertransaction times in financial markets, sentence length variability in narrative texts, Missouri River discharge, and sunspot number variability as examples, we show that the resulting singularity spectra appear strongly asymmetric, more often left sided but in some cases also right sided. We present a unified view on the origin of such effects and indicate that they may be crucially informative for identifying the composition of the time series. One particularly intriguing case of this latter kind of asymmetry is detected in the daily reported sunspot number variability. This signals that either the commonly used famous Wolf formula distorts the real dynamics in expressing the largest sunspot numbers or, if not, that their dynamics is governed by a somewhat different mechanism.
NASA Astrophysics Data System (ADS)
Gudmundsson, Lukas; Do, Hong Xuan; Leonard, Michael; Westra, Seth
2018-04-01
This is Part 2 of a two-paper series presenting the Global Streamflow Indices and Metadata Archive (GSIM), which is a collection of daily streamflow observations at more than 30 000 stations around the world. While Part 1 (Do et al., 2018a) describes the data collection process as well as the generation of auxiliary catchment data (e.g. catchment boundary, land cover, mean climate), Part 2 introduces a set of quality-controlled time-series indices representing (i) the water balance, (ii) the seasonal cycle, (iii) low flows and (iv) floods. To this end we first consider the quality of individual daily records using a combination of quality flags from data providers and automated screening methods. Subsequently, streamflow time-series indices are computed at yearly, seasonal and monthly resolution. The paper provides a generalized assessment of the homogeneity of all generated streamflow time-series indices, which can be used to select time series that are suitable for a specific task. The newly generated global set of streamflow time-series indices is made freely available with a digital object identifier at https://doi.pangaea.de/10.1594/PANGAEA.887470 and is expected to foster global freshwater research by acting as a ground truth for model validation or as a basis for assessing the role of human impacts on the terrestrial water cycle. It is hoped that a renewed interest in streamflow data at the global scale will foster efforts in the systematic assessment of data quality and provide momentum to overcome administrative barriers that lead to inconsistencies in global collections of relevant hydrological observations.
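The kinds of indices described (water balance, low flows, floods) at yearly resolution can be sketched from a daily series with the standard library. The index definitions below are illustrative, e.g. the minimum 7-day mean flow as a low-flow index, and are not GSIM's exact set:

```python
from collections import defaultdict
from datetime import date, timedelta

def yearly_indices(start, flows):
    """Yearly mean (water balance), maximum (floods) and minimum 7-day
    mean (low flows) from a daily streamflow series starting at `start`."""
    by_year = defaultdict(list)
    for i, q in enumerate(flows):
        by_year[(start + timedelta(days=i)).year].append(q)
    out = {}
    for year, q in sorted(by_year.items()):
        if len(q) >= 7:
            min7 = min(sum(q[i:i + 7]) / 7.0 for i in range(len(q) - 6))
        else:
            min7 = sum(q) / len(q)
        out[year] = {"mean": sum(q) / len(q), "max": max(q), "min7": min7}
    return out
```

A real pipeline would additionally apply the quality flags and homogeneity screening before trusting any yearly value.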
The construction of a Central Netherlands temperature
NASA Astrophysics Data System (ADS)
van der Schrier, G.; van Ulden, A.; van Oldenborgh, G. J.
2011-05-01
The Central Netherlands Temperature (CNT) is a monthly series of daily mean temperatures constructed from homogenized time series from the centre of the Netherlands. The purpose of this series is to offer a homogeneous time series representative of a larger area in order to study large-scale temperature changes. It will also facilitate comparison with climate models, which resolve similar scales. From 1906 onwards, temperature measurements in the Netherlands have been sufficiently standardized to construct a high-quality series. Long time series have been constructed by merging nearby stations and using the overlap to calibrate the differences. These long time series, and a few time series of only a few decades in length, have been subjected to a homogeneity analysis in which significant breaks and artificial trends have been corrected. Many of the detected breaks correspond to changes in the observations that are documented in the station metadata. This version of the CNT, to which we attach the version number 1.1, is constructed as the unweighted average of four stations (De Bilt, Winterswijk/Hupsel, Oudenbosch/Gilze-Rijen and Gemert/Volkel), with the stations Eindhoven and Deelen added from 1951 and 1958 onwards, respectively. The global gridded datasets used for detecting and attributing climate change are based on raw observational data. Although some homogeneity adjustments are made, these are based not on knowledge of local circumstances but only on statistical evidence. Despite this handicap, and the fact that these datasets use grid boxes far larger than the area associated with the Central Netherlands Temperature, the temperature interpolated to the CNT region shows a warming trend that is broadly consistent with the CNT trend in all of these datasets. The actual trends differ from the CNT trend by up to 30%, which highlights the need to base future global gridded temperature datasets on homogenized time series.
A Markovian Entropy Measure for the Analysis of Calcium Activity Time Series
Rahman, Atiqur; Odorizzi, Laura; LeFew, Michael C.; Golino, Caroline A.; Kemper, Peter; Saha, Margaret S.
2016-01-01
Methods to analyze the dynamics of calcium activity often rely on visually distinguishable features in time series data such as spikes, waves, or oscillations. However, systems such as the developing nervous system display a complex, irregular type of calcium activity which makes the use of such methods less appropriate. Instead, for such systems there exists a class of methods (including information theoretic, power spectral, and fractal analysis approaches) which use more fundamental properties of the time series to analyze the observed calcium dynamics. We present a new analysis method in this class, the Markovian Entropy measure: an easily implementable method that represents the observed calcium activity as a realization of a Markov process and describes its dynamics in terms of the level of predictability underlying the transitions between the states of the process. We applied our measure and other commonly used calcium analysis methods to a dataset from Xenopus laevis neural progenitors, which displays irregular calcium activity, and a dataset from murine synaptic neurons, which displays activity time series that are well described by visually distinguishable features. We find that the Markovian Entropy measure is able to distinguish between biologically distinct populations in both datasets, and that it can separate biologically distinct populations to a greater extent than other methods in the dataset exhibiting irregular calcium activity. These results support the benefit of using the Markovian Entropy measure to analyze calcium dynamics, particularly for studies using time series data which do not exhibit easily distinguishable features. PMID:27977764
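A measure of this kind, the entropy rate of a first-order Markov chain fitted to a discretized trace, can be sketched as follows. The equal-width binning and the state count are illustrative choices, not necessarily the authors':

```python
import math

def markov_entropy(series, n_states=4):
    """Entropy rate (bits/step) of a first-order Markov chain fitted to a
    discretized time series: H = -sum_i pi_i sum_j p_ij * log2(p_ij)."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_states or 1.0           # guard for constant input
    states = [min(int((x - lo) / width), n_states - 1) for x in series]
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1                          # transition counts
    total = len(states) - 1
    h = 0.0
    for i in range(n_states):
        row = sum(counts[i])
        if row == 0:
            continue
        pi = row / total                           # empirical state frequency
        for c in counts[i]:
            if c:
                p = c / row                        # transition probability
                h -= pi * p * math.log2(p)
    return h
```

Perfectly predictable transitions give zero entropy; irregular activity with near-uniform transitions approaches log2(n_states).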
HydroClimATe: hydrologic and climatic analysis toolkit
Dickinson, Jesse; Hanson, Randall T.; Predmore, Steven K.
2014-01-01
The potential consequences of climate variability and climate change have been identified as major issues for the sustainability and availability of the worldwide water resources. Unlike global climate change, climate variability represents deviations from the long-term state of the climate over periods of a few years to several decades. Currently, rich hydrologic time-series data are available, but the combination of data preparation and statistical methods developed by the U.S. Geological Survey as part of the Groundwater Resources Program is relatively unavailable to hydrologists and engineers who could benefit from estimates of climate variability and its effects on periodic recharge and water-resource availability. This report documents HydroClimATe, a computer program for assessing the relations between variable climatic and hydrologic time-series data. HydroClimATe was developed for a Windows operating system. The software includes statistical tools for (1) time-series preprocessing, (2) spectral analysis, (3) spatial and temporal analysis, (4) correlation analysis, and (5) projections. The time-series preprocessing tools include spline fitting, standardization using a normal or gamma distribution, and transformation by a cumulative departure. The spectral analysis tools include discrete Fourier transform, maximum entropy method, and singular spectrum analysis. The spatial and temporal analysis tool is empirical orthogonal function analysis. The correlation analysis tools are linear regression and lag correlation. The projection tools include autoregressive time-series modeling and generation of many realizations. These tools are demonstrated in four examples that use stream-flow discharge data, groundwater-level records, gridded time series of precipitation data, and the Multivariate ENSO Index.
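Two of the toolkit's steps, standardization and discrete-Fourier-transform spectral analysis, compose naturally. A sketch (of the general technique, not HydroClimATe's implementation) that standardizes a series and reads its dominant period off the periodogram:

```python
import numpy as np

def dominant_period(series):
    """Standardize a series and locate its dominant period (in time steps)
    from the DFT periodogram."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()       # standardization step
    power = np.abs(np.fft.rfft(x)) ** 2
    power[0] = 0.0                     # ignore the mean term
    k = int(np.argmax(power))
    return len(x) / k                  # period corresponding to peak bin k
```

For climate-variability work the same periodogram would feed the maximum entropy or singular spectrum steps listed in the report.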
Multiprocessor Real-Time Locking Protocols for Replicated Resources
2016-07-01
…circular buffer of slots, each representing a discrete segment of time. For example, the maintenance of a timing wheel occurs after an interrupt… To evaluate Algs. 2, 3, and 4, we conducted a series of experiments in which we measured relevant overheads and blocking times… Catherine E. Jarrett, Kecheng Yang, Ming Yang, Pontus Ekberg, and James H…
Acoustical Applications of the HHT Method
NASA Technical Reports Server (NTRS)
Huang, Norden E.
2003-01-01
A document discusses applications of a method based on the Huang-Hilbert transform (HHT). The method was described, without the HHT name, in Analyzing Time Series Using EMD and Hilbert Spectra (GSC-13817), NASA Tech Briefs, Vol. 24, No. 10 (October 2000), page 63. To recapitulate: The method is especially suitable for analyzing time-series data that represent nonstationary and nonlinear physical phenomena. The method involves the empirical mode decomposition (EMD), in which a complicated signal is decomposed into a finite number of functions, called intrinsic mode functions (IMFs), that admit well-behaved Hilbert transforms. The HHT consists of the combination of EMD and Hilbert spectral analysis.
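The Hilbert spectral-analysis half of the HHT can be sketched directly (the EMD sifting of IMFs is omitted here): the analytic signal is built in the frequency domain, and instantaneous frequency follows from the unwrapped phase. A generic sketch of the standard construction, not NASA's implementation:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT construction (equivalent to adding
    i times the Hilbert transform): zero negative frequencies, double
    positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def inst_freq(x, fs=1.0):
    """Instantaneous frequency from the unwrapped analytic phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)
```

In the full HHT each IMF from the EMD step is passed through this pair of functions, and the instantaneous amplitude and frequency together form the Hilbert spectrum.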
Re-analysis of Alaskan benchmark glacier mass-balance data using the index method
Van Beusekom, Ashley E.; O'Neel, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.
2010-01-01
At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 to accomplish our goal of making more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly-site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
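The degree-day model at the heart of the re-analysis is compact enough to state directly: ablation is an empirical degree-day factor times the positive degree-day sum. The factor value below is a placeholder, not one of the values derived for Gulkana or Wolverine:

```python
def degree_day_ablation(daily_mean_temp, ddf=4.0):
    """Ablation (mm w.e.) as an empirical degree-day factor
    (mm w.e. per degC per day) times the positive degree-day sum."""
    pdd = sum(max(t, 0.0) for t in daily_mean_temp)
    return ddf * pdd
```

The "modernized" model in the study refines this basic form, e.g. by re-deriving site-specific factors so the modeled ablation matches the observed balance series more closely.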
Clustering Multivariate Time Series Using Hidden Markov Models
Ghassempour, Shima; Girosi, Federico; Maeder, Anthony
2014-01-01
In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistic knowledge, and therefore are accessible to a wide range of researchers. PMID:24662996
About the relationships among variables observed in the real world
NASA Astrophysics Data System (ADS)
Petkov, Boyan H.
2018-06-01
Since a stationary chaotic system is determined by nonlinear equations connecting its components, the appurtenance of two variables to such a system has been considered a sign of nontrivial relationships between them, possibly involving other quantities as well. These relationships could remain hidden from the approach usually employed in research analyses, which is based on the extent of the correlation that characterises the dependence of one variable on the other. Appurtenance to the same system can be hypothesized if the topological features of the attractors reconstructed from two time series representing the evolution of the corresponding variables are close to each other. However, the possibility that both attractors represent different systems with similar behaviour cannot be excluded. For that reason, an approach allowing the reconstruction of the attractor by jointly using two time series is proposed; the conclusion that the variables under study have a common origin can be made if this attractor is topologically similar to those built separately from the two time series. In the present study, the features of the attractors are characterized by the correlation dimension and the largest Lyapunov exponent, and the proposed algorithm has been tested on numerically generated sequences obtained from various maps. It is believed that this approach could be used to reveal connections among the variables observed in experiments or field measurements.
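Attractor reconstruction from a time series starts with time-delay embedding (Takens' construction). A minimal sketch; the paper's joint two-series reconstruction would, in the same spirit, concatenate delay coordinates drawn from both series:

```python
def delay_embed(x, dim, tau):
    """Time-delay embedding: attractor vectors
    [x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}]."""
    span = (dim - 1) * tau
    return [[x[t + k * tau] for k in range(dim)] for t in range(len(x) - span)]
```

The correlation dimension and largest Lyapunov exponent mentioned in the abstract are then estimated from distances between these embedded vectors.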
Langbein, John O.
2012-01-01
Recent studies have documented that global positioning system (GPS) time series of position estimates have temporal correlations, which have been modeled as a combination of power-law and white noise processes. When estimating quantities such as a constant rate from GPS time series data, the estimated uncertainties on these quantities are more realistic when using a noise model that includes temporal correlations than when simply assuming temporally uncorrelated noise. However, the choice of the specific representation of correlated noise can affect the estimate of uncertainty. For many GPS time series, the background noise can be represented either (1) as a sum of flicker and random-walk noise or (2) as a power-law noise model that represents an average of the flicker and random-walk noise. For instance, if the underlying noise model is a combination of flicker and random-walk noise, then incorrectly choosing the power-law model could underestimate the rate uncertainty by a factor of two. Distinguishing between the two alternate noise models is difficult since the flicker component can dominate the assessment of the noise properties because it is spread over a significant portion of the measurable frequency band. But, although not necessarily detectable, the random-walk component can be a major constituent of the estimated rate uncertainty. Nonetheless, it is possible to determine the upper bound on the random-walk noise.
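The two candidate noise models can be made concrete with a spectral-synthesis sketch. The following is a minimal numpy illustration (not Langbein's estimation code) of generating flicker (spectral index 1) and random-walk (index 2) noise; the series length, component amplitudes, and seed are arbitrary assumptions:

```python
import numpy as np

def power_law_noise(n, exponent, rng):
    """Generate noise whose power spectrum falls off as f**(-exponent).

    exponent = 1 gives flicker noise, exponent = 2 gives random-walk-like noise.
    """
    white = rng.standard_normal(n)
    f = np.fft.rfftfreq(n, d=1.0)
    f[0] = f[1]                      # avoid division by zero at the DC bin
    shaped = np.fft.rfft(white) * f ** (-exponent / 2.0)
    return np.fft.irfft(shaped, n)

rng = np.random.default_rng(0)
n = 4096
# A GPS-like position series: white + flicker + a weaker random-walk component.
series = (rng.standard_normal(n)
          + power_law_noise(n, 1.0, rng)
          + 0.1 * power_law_noise(n, 2.0, rng))
print(series.shape)
```

Because the flicker term dominates most of the band, the weak random-walk term is hard to detect in such a series, which is exactly the ambiguity the abstract describes.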
Cross-entropy clustering framework for catchment classification
NASA Astrophysics Data System (ADS)
Tongal, Hakan; Sivakumar, Bellie
2017-09-01
There is an increasing interest in catchment classification and regionalization in hydrology, as they are useful for identification of appropriate model complexity and transfer of information from gauged catchments to ungauged ones, among others. This study introduces a nonlinear cross-entropy clustering (CEC) method for classification of catchments. The method specifically considers embedding dimension (m), sample entropy (SampEn), and coefficient of variation (CV) to represent dimensionality, complexity, and variability of the time series, respectively. The method is applied to daily streamflow time series from 217 gauging stations across Australia. The results suggest that a combination of linear and nonlinear parameters (i.e. m, SampEn, and CV), representing different aspects of the underlying dynamics of streamflows, could be useful for determining distinct patterns of flow generation mechanisms within a nonlinear clustering framework. For the 217 streamflow time series, nine hydrologically homogeneous clusters that have distinct patterns of flow regime characteristics and specific dominant hydrological attributes with different climatic features are obtained. Comparison of the results with those obtained using the widely employed k-means clustering method (which results in five clusters, with the loss of some information about the features of the clusters) suggests the superiority of the cross-entropy clustering method. The outcomes from this study provide a useful guideline for employing the nonlinear dynamic approaches based on hydrologic signatures and for gaining an improved understanding of streamflow variability at a large scale.
ARIMA representation for daily solar irradiance and surface air temperature time series
NASA Astrophysics Data System (ADS)
Kärner, Olavi
2009-06-01
Autoregressive integrated moving average (ARIMA) models are used to compare the long-range temporal variability of the total solar irradiance (TSI) at the top of the atmosphere (TOA) and of surface air temperature series. The comparison shows that one and the same type of model is applicable to represent both the TSI and the air temperature series: in terms of model type, surface air temperature closely imitates the TSI. This may mean that currently no other forcing of the climate system is capable of changing the random-walk-type variability established by the varying activity of the rotating Sun. The result should inspire more detailed examination of the dependence of various climate series on short-range fluctuations of the TSI.
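The random-walk-type variability mentioned above has a simple diagnostic: for an ARIMA(0,1,0) process, the variance of increments grows linearly with lag, so the log-log slope of increment variance versus lag is close to 1. A small numpy sketch on a synthetic walk (illustrative only, not the paper's model fit):

```python
import numpy as np

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(8192))   # ARIMA(0,1,0): a pure random walk

# Var[x(t+k) - x(t)] ~ k for a random walk, so the fitted slope should be near 1.
lags = np.array([1, 2, 4, 8, 16, 32, 64])
variances = np.array([np.var(walk[k:] - walk[:-k]) for k in lags])
slope = np.polyfit(np.log(lags), np.log(variances), 1)[0]
print(round(slope, 2))
```

A slope well below 1 would instead indicate stationary (e.g., white or flicker) behaviour.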
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael
2017-12-01
In recent decades, decomposition techniques have enabled a growing range of applications for dimension reduction, as well as for the extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and, more recently, independent component analysis (ICA) have been applied to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of time series, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based, respectively, on decomposing the autocovariance matrix and on diagonalizing higher (than second) order statistical tensors from centered time series. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part and their Hilbert-transformed series in its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex dataset in (a), and finally (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework with known truth.
Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016), and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans are used with the aim of demonstrating signal separations of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from the Gravity Recovery And Climate Experiment Terrestrial Water Storage (GRACE TWS) with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of errors in extracting NAO or AMO from SST data using the complex EOF (CEOF) approach reaches up to 50% of the signal itself, while it is reduced to 16% when applying CICA. Larger errors with magnitudes of 100% and 30% of the signal itself are found while separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that the CICA is more effective than CEOF in separating non-stationary patterns.
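Step (a) of the CICA construction, forming a complex dataset whose imaginary part is the Hilbert transform of the observed series, can be sketched with an FFT-based analytic signal. This is a generic illustration, not the authors' code; the test signal is an arbitrary 5 Hz cosine:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: real part is x, imaginary part its Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0          # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0              # Nyquist bin kept as-is for even n
    return np.fft.ifft(X * h)

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.cos(2 * np.pi * 5 * t)        # a single 5 Hz mode
z = analytic_signal(x)               # complex series: x + i * H(x)

# For a pure cosine the analytic signal is exp(i*2*pi*f*t), so |z| is 1 everywhere.
print(np.allclose(np.abs(z), 1.0, atol=1e-6))
```

The envelope |z| and phase angle of z then carry the amplitude and phase information that CICA decomposes into complex modes.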
The nature of turbulence in a triangular lattice gas automaton
NASA Astrophysics Data System (ADS)
Duong-Van, Minh; Feit, M. D.; Keller, P.; Pound, M.
1986-12-01
Power spectra calculated from the coarse-graining of a simple lattice gas automaton, and those obtained by time averaging other stochastic time series that we have investigated, have exponents in the range -1.6 to -2, consistent with observations of fully developed turbulence. This power spectrum is a natural consequence of coarse-graining; the exponent -2 represents the continuum limit.
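The spectral exponent of such a series can be estimated by a log-log fit to the periodogram. A minimal numpy sketch, using a random walk (whose spectrum falls as f^-2, the continuum-limit exponent quoted above) as a stand-in for the coarse-grained automaton output; the band limits are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(16384))     # random walk: spectrum ~ f**-2

# Periodogram, then a log-log slope over mid-band frequencies
# (avoiding the DC bin and the Nyquist region).
X = np.fft.rfft(x - x.mean())
psd = np.abs(X) ** 2
f = np.fft.rfftfreq(len(x))
band = (f > 1e-3) & (f < 1e-1)
exponent = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)[0]
print(round(exponent, 1))
```

For the automaton data a fitted exponent between -1.6 and -2 would match the range reported in the abstract.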
Velocimetry system was then used to acquire flow field data across a series of three horizontal planes spanning from 0.25 to 1.5 times the ship hangar height...included six separate data points at gust-frequency referenced Strouhal numbers ranging from 0.430 to 1.474. A 725-Hertz time-resolved Particle Image
Complexity analysis of the turbulent environmental fluid flow time series
NASA Astrophysics Data System (ADS)
Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.
2014-02-01
We have used Kolmogorov complexities, sample entropy and permutation entropy to quantify the degree of randomness in the river flow time series of two mountain rivers in Bosnia and Herzegovina, representing the turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series from two rivers (the Miljacka and the Bosnia) in the mountain part of their flow and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv Algorithm (LZA) (lower, KLL, and upper, KLU), sample entropy (SE) and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE and PE values of the two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have also explored the sensitivity of the considered measures to the length of the time series. In addition, we have divided the period 1926-1990 into three subintervals, (a) 1926-1945, (b) 1946-1965 and (c) 1966-1990, and calculated the KLL, KLU, SE and PE values for the time series in these subintervals. It is found that during the period 1946-1965 there is a decrease in complexity, with corresponding changes in the SE and PE, in comparison to the period 1926-1990. This complexity loss may be attributed primarily to (i) human interventions on these two rivers after the Second World War, because of their use for water consumption, and (ii) climate change in recent times.
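The Lempel-Ziv-based Kolmogorov complexity used in such analyses can be sketched as a phrase-counting parse of a binarized series. The following is a generic LZ76-style implementation in Python, with synthetic data standing in for the monthly flow records; thresholding at the median is one common binarization choice, not necessarily the authors':

```python
import numpy as np

def lempel_ziv_complexity(s):
    """Count of distinct phrases in a Lempel-Ziv (LZ76-style) parsing of string s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        length = 1
        # Extend the current phrase while it still occurs in the preceding text.
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        c += 1
        i += length
    return c

rng = np.random.default_rng(3)
flow = rng.standard_normal(2000)        # stand-in for a monthly flow record
med = np.median(flow)
binary = ''.join('1' if v > med else '0' for v in flow)

kl_random = lempel_ziv_complexity(binary)
kl_constant = lempel_ziv_complexity('0' * 2000)
print(kl_random > kl_constant)
```

A fully random series parses into many phrases, while a constant series parses into just two, so the count tracks the degree of randomness the abstract quantifies.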
A data mining framework for time series estimation.
Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin
2010-04-01
Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. 2009 Elsevier Inc. All rights reserved.
The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory is its ability to represent the effects of high-frequency linear response accurately without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential matrix solution is in the time domain. Truncating the series for the matrix exponential makes the solution inaccurate after a certain time; yet, up to that time, the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the matrix exponential in structural dynamics is demonstrated by simulating the free vibration response of multi-degree-of-freedom models of cantilever beams.
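The idea of a truncated series for the matrix exponential, applied over finite time increments, can be illustrated for a single-mode oscillator. A numpy sketch under assumed parameters (20 series terms, a step of dt = 0.01 s, and a 2 rad/s undamped oscillator), not the report's multi-degree-of-freedom beam model:

```python
import numpy as np

def expm_series(A, terms=20):
    """Matrix exponential by truncating the series sum of A**k / k!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # next series term A**k / k!
        result = result + term
    return result

# Undamped oscillator x'' + w^2 x = 0 written as a first-order system.
w = 2.0
A = np.array([[0.0, 1.0], [-w**2, 0.0]])
dt = 0.01
Phi = expm_series(A * dt)            # one-step state transition matrix

x = np.array([1.0, 0.0])             # initial displacement 1, velocity 0
for _ in range(int(1.0 / dt)):       # march to t = 1 s in fixed increments
    x = Phi @ x

print(round(x[0], 4), round(np.cos(w * 1.0), 4))
```

Because each increment is small, the truncated series is accurate within the step, and repeated multiplication by the transition matrix propagates the response without the step-size restriction of explicit integration.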
Advanced Space Shuttle simulation model
NASA Technical Reports Server (NTRS)
Tatom, F. B.; Smith, S. R.
1982-01-01
A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center of gravity, and also for simulation of instantaneous gust gradients. Based on this model, time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided. The results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, and the minimum frequency simulated is discussed. The results of spectral and statistical analyses of the SSTT are presented.
Monitoring of seismic time-series with advanced parallel computational tools and complex networks
NASA Astrophysics Data System (ADS)
Kechaidou, M.; Sirakoulis, G. Ch.; Scordilis, E. M.
2012-04-01
Earthquakes have been a focus of human and research interest for several centuries due to their catastrophic effects on everyday life; they occur almost all over the world and exhibit behaviour that is unpredictable and hard to model. On the other hand, their monitoring with more or less up-to-date instruments has been almost continuous, and thanks to this several mathematical models have been presented and proposed to describe possible connections and patterns found in the resulting seismological time-series. Greece, one of the most seismically active territories on Earth, has detailed instrumental seismological data available from the beginning of the past century, providing researchers with valuable knowledge about seismicity levels all over the country. With powerful parallel computational tools such as Cellular Automata, these data can be further analysed and, most importantly, modelled to reveal possible connections between different parameters of the seismic time-series under study. More specifically, Cellular Automata have proven very effective for composing and modelling nonlinear complex systems, leading to several models proposed as analogues of earthquake fault dynamics. In this work, preliminary results of modelling seismic time-series with Cellular Automata, so as to compose and develop the corresponding complex networks, are presented. The proposed methodology aims to reveal hidden relations in the examined time-series and to distinguish their intrinsic characteristics, transforming the time-series into complex networks and graphically representing their evolution in time and space.
Consequently, based on the presented results, the proposed model should eventually serve as an efficient and flexible computational tool providing a generic understanding of possible triggering mechanisms, as derived from adequate monitoring and modelling of regional earthquake phenomena.
10 CFR 431.17 - Determination of efficiency.
Code of Federal Regulations, 2011 CFR
2011-01-01
... different horsepowers without duplication; (C) The basic models should be of different frame number series... be produced over a reasonable period of time (approximately 180 days), then each unit shall be tested... design may be substituted without requiring additional testing if the represented measures of energy...
SEPARATING DIFFERENT SCALES OF MOTION IN TIME SERIES OF METEOROLOGICAL VARIABLES. (R825260)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Multi-frequency complex network from time series for uncovering oil-water flow structure.
Gao, Zhong-Ke; Yang, Yu-Xuan; Fang, Peng-Cheng; Jin, Ning-De; Xia, Cheng-Yi; Hu, Li-Dan
2015-02-04
Uncovering complex oil-water flow structure represents a challenge in diverse scientific disciplines. This challenge stimulated us to develop a new distributed conductance sensor for measuring local flow signals at different positions, and then to propose a novel approach based on multi-frequency complex networks to uncover the flow structures from experimental multivariate measurements. In particular, based on the fast Fourier transform, we demonstrate how to derive a multi-frequency complex network from multivariate time series. We construct complex networks at different frequencies and then detect community structures. Our results indicate that the community structures faithfully represent the structural features of oil-water flow patterns. Furthermore, we investigate the network statistics at different frequencies for each derived network and find that the frequency clustering coefficient makes it possible to uncover the evolution of flow patterns and yields deep insights into the formation of flow structures. These results present a first step towards a network visualization of complex flow patterns from a community-structure perspective.
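The construction of a network at a chosen frequency can be sketched as: bandpass each channel via the FFT, correlate the filtered channels, and threshold the correlation matrix into an adjacency matrix. A toy numpy illustration (the sensor signals, band, and threshold are invented for the example, and community detection is omitted):

```python
import numpy as np

def band_network(signals, fs, f_lo, f_hi, threshold=0.5):
    """Adjacency matrix linking channels whose band-limited signals correlate strongly."""
    n_ch, n = signals.shape
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (f >= f_lo) & (f <= f_hi)
    filtered = np.empty_like(signals)
    for i in range(n_ch):
        spec = np.fft.rfft(signals[i])
        spec[~mask] = 0.0                     # keep only the chosen frequency band
        filtered[i] = np.fft.irfft(spec, n)
    corr = np.corrcoef(filtered)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)                  # no self-links
    return adj

rng = np.random.default_rng(4)
t = np.arange(0, 60, 0.01)                    # 100 Hz sampling, hypothetical sensors
common = np.sin(2 * np.pi * 5 * t)            # shared 5 Hz component in channels 0 and 1
signals = np.vstack([common + 0.1 * rng.standard_normal(t.size),
                     common + 0.1 * rng.standard_normal(t.size),
                     rng.standard_normal(t.size)])
adj = band_network(signals, fs=100.0, f_lo=4.0, f_hi=6.0)
print(adj[0, 1], adj[0, 2])
```

Repeating this over several bands yields one network per frequency, whose statistics (such as the clustering coefficient per band) can then be compared, in the spirit of the abstract.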
NASA Astrophysics Data System (ADS)
Watanabe, T.; Nohara, D.
2017-12-01
The shorter temporal scale variation in the downward solar irradiance at the ground level (DSI) is not well understood, because research into shorter-scale DSI variation is based on ground observations and ground observation stations are sparsely distributed. Use of datasets derived from satellite observations can overcome this limitation. DSI data and the MODIS cloud properties product are analyzed simultaneously. Three metrics, mean, standard deviation and sample entropy, are used to evaluate time-series properties of the DSI. The three metrics are computed from two-hour time series centered at the MODIS observation time over the ground observation stations. We apply regression methods to design prediction models for each of the three metrics from cloud properties. Validation of model accuracy shows that mean and standard deviation are predicted with a higher degree of accuracy, while the accuracy of prediction of sample entropy, which represents the complexity of the time series, is not high. One of the causes of the lower prediction skill for sample entropy is the resolution of the MODIS cloud properties: higher sample entropy corresponds to rapid fluctuations caused by small, disordered clouds, and it seems that such clouds are not retrieved well.
Wang, Dong; Borthwick, Alistair G; He, Handan; Wang, Yuankun; Zhu, Jieyu; Lu, Yuan; Xu, Pengcheng; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin
2018-01-01
Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, wavelet de-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influence at some representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP-error Back Propagation, MLP-Multilayer Perceptron and RBF-Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate model performance. Compared to the three other generic methods, the results generated by the WD-RSPA model invariably presented smaller error measures, which means that the forecasting capability of the WD-RSPA model is better than that of the other models. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when extreme events are included within a time series. Copyright © 2017 Elsevier Inc. All rights reserved.
Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu
2016-12-07
The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) can vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that can represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, a FOG-based MWD system underground often keeps working for several days, so collecting the gyro data aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we advance the fast algorithm further (improved fast DAVAR) to extend it to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used to characterize two sets of simulation data, respectively. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time. When the time series is long (6 × 10^5 samples), the improved fast DAVAR reduces the calculation time by 97.09%. Another set of simulation data with missing samples is also characterized by the improved fast DAVAR, and the results prove that it can successfully deal with discontinuous data. In the end, a vibration experiment with a FOG-based MWD system was carried out to validate the good performance of the improved fast DAVAR. The experimental results verify that the improved fast DAVAR not only shortens the computation time but can also analyze discontinuous time series.
Actinomycetal complex of light sierozem on the Kopet-Dag piedmont plain
NASA Astrophysics Data System (ADS)
Zenova, G. M.; Zvyagintsev, D. G.; Manucharova, N. A.; Stepanova, O. A.; Chernov, I. Yu.
2016-10-01
The population density of actinomycetes in samples of light sierozem from the Kopet Dag piedmont plain (75 km from Ashkhabad, Turkmenistan) reaches hundreds of thousands of CFU/g soil. The actinomycetal complex is represented by two genera: Streptomyces and Micromonospora. Representatives of the genus Streptomyces predominate and comprise 73 to 87% of the actinomycetal complex. In one sample, representatives of the genus Micromonospora predominated in the complex (75%). The genus Streptomyces in the studied soil samples is represented by species from several sections and series: the species of section Helvolo-Flavus, series Helvolus, form the dominant component of the streptomycetal complex, and their portion is up to 77% of all isolated actinomycetes. The species of other sections and series are much less abundant. Thus, the percentage of the Cinereus Achromogenes section in the actinomycetal complex does not exceed 28%; representatives of the Albus section, Albus series, the Roseus section, Lavendulae-Roseus series, and the Imperfectus section belong to rare species; they were not isolated from all of the studied samples of light sierozem, and their portion does not exceed 10% of the actinomycetal complex.
A statistical approach for generating synthetic tip stress data from limited CPT soundings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basalams, M.K.
CPT tip stress data obtained from a Uranium mill tailings impoundment are treated as time series. A statistical class of models that was developed to model time series is explored to investigate its applicability in modeling the tip stress series. These models were developed by Box and Jenkins (1970) and are known as Autoregressive Moving Average (ARMA) models. This research demonstrates how to apply the ARMA models to tip stress series. Generation of synthetic tip stress series that preserve the main statistical characteristics of the measured series is also investigated. Multiple regression analysis is used to model the regional variation of the ARMA model parameters as well as the regional variation of the mean and the standard deviation of the measured tip stress series. The reliability of the generated series is investigated from a geotechnical point of view as well as from a statistical point of view. Estimates of the total settlement using the measured and the generated series subjected to the same loading condition are performed. The variation of friction angle with depth of the impoundment materials is also investigated. This research shows that these series can be modeled by the Box and Jenkins ARMA models. A third-degree Autoregressive model AR(3) is selected to represent these series. A theoretical double exponential density function is fitted to the AR(3) model residuals. Synthetic tip stress series are generated at nearby locations. The generated series are shown to be reliable in estimating the total settlement and the friction angle variation with depth for this particular site.
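Fitting an AR(3) model of the kind selected above, and generating a synthetic series from it, can be sketched with the Yule-Walker equations. Note that Box-Jenkins estimation and the double-exponential residual fit are not reproduced here, and the series below is synthetic with invented coefficients rather than CPT data:

```python
import numpy as np

def fit_ar_yule_walker(x, order):
    """AR coefficients from the Yule-Walker equations (biased autocovariances)."""
    x = x - x.mean()
    n = len(x)
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[acov[abs(i - j)] for j in range(order)] for i in range(order)])
    phi = np.linalg.solve(R, acov[1:order + 1])
    sigma2 = acov[0] - phi @ acov[1:order + 1]   # innovation variance
    return phi, sigma2

rng = np.random.default_rng(6)
true_phi = np.array([0.5, -0.3, 0.1])            # a stable AR(3), chosen arbitrarily
x = np.zeros(20000)
for t in range(3, len(x)):
    x[t] = true_phi @ x[t - 3:t][::-1] + rng.standard_normal()

phi, sigma2 = fit_ar_yule_walker(x, order=3)

# Generate a synthetic series from the fitted model, as in the settlement study.
synth = np.zeros(5000)
for t in range(3, len(synth)):
    synth[t] = phi @ synth[t - 3:t][::-1] + np.sqrt(sigma2) * rng.standard_normal()
print(np.round(phi, 2))
```

The synthetic series preserves the autocovariance structure of the fitted model, which is the sense in which generated tip-stress profiles can stand in for measured ones.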
Wavelet-based group and phase velocity measurements: Method
NASA Astrophysics Data System (ADS)
Yang, H. Y.; Wang, W. W.; Hung, S. H.
2016-12-01
Measurements of group and phase velocities of surface waves are often carried out by applying a series of narrow bandpass or stationary Gaussian filters localized at specific frequencies to wave packets and estimating the corresponding arrival times from the peak envelopes and the phases of the Fourier spectra. However, it is known that seismic waves are inherently nonstationary and not well represented by a sum of sinusoids. Alternatively, a continuous wavelet transform (CWT), which decomposes a time series into a family of wavelets (translated and scaled copies of a generally fast-oscillating and decaying function known as the mother wavelet), retains localization in both the time and frequency domains and is well suited for time-frequency analysis of nonstationary signals. Here we develop a wavelet-based method to measure frequency-dependent group and phase velocities, an essential dataset used in crust and mantle tomography. For a given time series, we employ the complex Morlet wavelet to obtain the scalogram of amplitude modulus |Wg| and phase φ on the time-frequency plane. The instantaneous frequency (IF) is then calculated by taking the derivative of phase with respect to time, i.e., (1/2π)dφ(f, t)/dt. Time windows comprising strong energy arrivals to be measured can be identified by those IFs that are close to the frequencies with the maximum modulus and that vary smoothly and monotonically with time. The respective IFs in each selected time window are further interpolated to yield a smooth branch of ridge points, or representative IFs, at which the arrival time, tridge(f), and phase, φridge(f), after unwrapping and correcting cycle skipping based on a priori knowledge of the possible velocity range, are determined for group and phase velocity estimation. We will demonstrate our measurement method using both ambient noise cross-correlation functions and multi-mode surface waves from earthquakes.
The obtained dispersion curves will be compared with those by a conventional narrow bandpass method.
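The scalogram underlying such measurements can be sketched with a complex Morlet CWT implemented by direct convolution. A simplified numpy illustration in which ridge extraction is reduced to taking the peak-modulus time at each frequency; the wavelet parameter w0 = 6, the normalisation, and the two-packet test signal are assumptions, not the authors' settings:

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Scalogram W(f, t) from a complex Morlet wavelet, one row per frequency."""
    n = len(x)
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                   # scale whose carrier matches f
        tw = np.arange(-4 * s, 4 * s, 1.0 / fs)    # wavelet support (+-4 std dev)
        wavelet = np.exp(1j * w0 * tw / s) * np.exp(-tw**2 / (2 * s**2))
        wavelet /= np.sqrt(s)                      # rough scale normalisation
        out[i] = np.convolve(x, wavelet, mode='same')
    return out

fs = 100.0
t = np.arange(0, 4, 1.0 / fs)
# Two arrivals: a 5 Hz packet early, a 15 Hz packet late (a toy dispersed signal).
x = (np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 1.0) / 0.3) ** 2)
     + np.sin(2 * np.pi * 15 * t) * np.exp(-((t - 3.0) / 0.3) ** 2))

freqs = np.array([5.0, 15.0])
W = np.abs(morlet_cwt(x, fs, freqs))
# Peak-|W| times per frequency should track the two packet centres.
print(t[W[0].argmax()], t[W[1].argmax()])
```

The frequency-dependent peak times are the raw material for a group-velocity dispersion curve; the full method additionally tracks instantaneous frequency along the ridge and unwraps phase for phase velocity.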
On Clear-Cut Mapping with Time-Series of Sentinel-1 Data in Boreal Forest
NASA Astrophysics Data System (ADS)
Rauste, Yrjo; Antropov, Oleg; Mutanen, Teemu; Hame, Tuomas
2016-08-01
Clear-cutting is the most drastic and wide-spread change that affects the hydrological and carbon-balance properties of forested land in the Boreal forest zone. A time-series of 36 Sentinel-1 images was used to study the potential for mapping clear-cut areas. The time series covered one and a half years (2014-10-09 ... 2016-03-20) in a 200-km-by-200-km study site in Finland. The Sentinel-1 images were acquired in Interferometric Wide-swath (IW), dual-polarized mode (VV+VH). All scenes were acquired in the same orbit configuration. Amplitude images (GRDH product) were used. The Sentinel-1 scenes were ortho-rectified with in-house software using a digital elevation model (DEM) produced by the Land Survey of Finland. The Sentinel-1 amplitude data were radiometrically corrected for topographic effects. The temporal behaviour of C-band backscatter was studied for areas representing 1) areas clear-cut during the acquisition of the Sentinel-1 time-series, 2) areas remaining forest during the acquisition of the Sentinel-1 time-series, and 3) areas that had been clear-cut before the acquisition of the Sentinel-1 time-series. The following observations were made: 1. The separation between clear-cut areas and forest was generally low; 2. Under certain acquisition conditions, clear-cut areas were well separable from forest; 3. The good scenes were acquired a) in winter during thick snow cover, and b) in late summer towards the end of a warm and dry period; 4. The separation between clear-cut and forest was higher in VH-polarized data than in VV-polarized data; 5. The separation between clear-cut and forest was higher in the winter/snow scenes than in the dry summer scenes.
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
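The two-step model selection described above can be sketched with ordinary least squares in NumPy. This is synthetic data, and the actual USGS procedure applies MSPE criteria and log-space bias corrections not shown here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
turbidity = rng.uniform(5, 500, n)                  # synthetic turbidity (FNU)
flow = rng.uniform(1, 100, n)                       # synthetic streamflow (m^3/s)
# Synthetic SSC with a genuine streamflow contribution (log-linear model)
log_ssc = (0.9 * np.log10(turbidity) + 0.3 * np.log10(flow)
           + 0.2 + rng.normal(0, 0.05, n))

def ols(X, y):
    """Least-squares fit with intercept; returns coefficients and residual std."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return beta, resid.std(ddof=A.shape[1])

# Step 1: simple linear regression on turbidity alone
_, s_simple = ols(np.log10(turbidity)[:, None], log_ssc)
# Step 2: add streamflow and keep it only if the fit improves
_, s_multi = ols(np.column_stack([np.log10(turbidity), np.log10(flow)]), log_ssc)
print(s_multi < s_simple)
```

Because the synthetic series has a real streamflow term, the multiple regression reduces the residual scatter, which mirrors the decision rule in the abstract.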
FINDING A COMMON DATA REPRESENTATION AND INTERCHANGE APPROACH FOR MULTIMEDIA MODELS
Within many disciplines, multiple approaches are used to represent and access very similar data (e.g., a time series of values), often due to the lack of commonly accepted standards. When projects must use data from multiple disciplines, the problems quickly compound. Often sig...
RESOLUTION OF THE DESTRUCTIVE EFFECT OF NOISE ON LINEAR REGRESSION OF TWO TIME SERIES. (R825260)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Complex-valued time-series correlation increases sensitivity in FMRI analysis.
Kociuba, Mary C; Rowe, Daniel B
2016-07-01
To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. 
Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI data sets with high noise variance, and avoid excessive processing induced correlation. Copyright © 2016 Elsevier Inc. All rights reserved.
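A toy version of the magnitude-only versus complex-valued comparison can be written in a few lines. This is a simplified time-domain analogue of the paper's temporal-frequency-domain framework, with invented signal levels: the task response is placed mostly in the phase, so the magnitude-only statistic misses it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
task = np.tile([1.0] * 16 + [-1.0] * 16, n // 32)   # block-design reference

# Simulated voxel: weak magnitude response, clearer phase response, plus noise
mag = 100 + 0.05 * task + rng.normal(0, 0.5, n)
phase = 0.02 * task + rng.normal(0, 0.002, n)
z = mag * np.exp(1j * phase)

def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

r_mo = corr(np.abs(z), task)                        # magnitude-only correlation
# Real-valued isomorphism: stack real and imaginary parts of data and reference
z0 = z - z.mean()
r_cv = corr(np.concatenate([z0.real, z0.imag]),
            np.concatenate([task, task]))
print(abs(r_cv) > abs(r_mo))
```

When the task modulates phase as well as magnitude, the stacked real/imaginary representation recovers correlation that the magnitude-only series cannot.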
Wavelet entropy of BOLD time series: An application to Rolandic epilepsy.
Gupta, Lalit; Jansen, Jacobus F A; Hofman, Paul A M; Besseling, René M H; de Louw, Anton J A; Aldenkamp, Albert P; Backes, Walter H
2017-12-01
To assess the wavelet entropy for the characterization of intrinsic aberrant temporal irregularities in the time series of resting-state blood-oxygen-level-dependent (BOLD) signal fluctuations, and further, to evaluate the temporal irregularities (disorder/order) on a voxel-by-voxel basis in the brains of children with Rolandic epilepsy. The BOLD time series was decomposed using the discrete wavelet transform and the wavelet entropy was calculated. Using a model time series consisting of multiple harmonics and nonstationary components, the wavelet entropy was compared with Shannon and spectral (Fourier-based) entropy. As an application, the wavelet entropy in 22 children with Rolandic epilepsy was compared to that in 22 age-matched healthy controls. The images were obtained by performing resting-state functional magnetic resonance imaging (fMRI) using a 3T system, an 8-element receive-only head coil, and an echo planar imaging pulse sequence (T2*-weighted). The wavelet entropy was also compared to spectral entropy, regional homogeneity, and Shannon entropy. Wavelet entropy was found to identify the nonstationary components of the model time series. In Rolandic epilepsy patients, a significantly elevated wavelet entropy was observed relative to controls for the whole cerebrum (P = 0.03). Spectral entropy (P = 0.41), regional homogeneity (P = 0.52), and Shannon entropy (P = 0.32) did not reveal significant differences. The wavelet entropy measure appeared more sensitive than more conventional measures in detecting abnormalities in cerebral fluctuations represented by nonstationary effects in the BOLD time series. This effect was observed in the model time series as well as in Rolandic epilepsy. These observations suggest that the brains of children with Rolandic epilepsy exhibit stronger nonstationary temporal signal fluctuations than those of controls. Level of Evidence: 2. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2017;46:1728-1737. © 2017 International Society for Magnetic Resonance in Medicine.
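Wavelet entropy — the normalized Shannon entropy of the relative wavelet energies across decomposition levels — can be sketched with a hand-rolled Haar DWT. The study's exact wavelet, level count, and preprocessing are not specified here; this is a generic illustration:

```python
import numpy as np

def haar_dwt_energies(x, levels):
    """Relative energy of the detail coefficients at each Haar DWT level."""
    x = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation, carried down
        energies.append(np.sum(d ** 2))
    p = np.array(energies)
    return p / p.sum()

def wavelet_entropy(x, levels=6):
    """Normalized Shannon entropy of the wavelet energy distribution (0..1)."""
    p = haar_dwt_energies(x, levels)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(levels))

t = np.arange(1024) / 1024
regular = np.sin(2 * np.pi * 8 * t)                  # energy in few scales
rng = np.random.default_rng(3)
irregular = rng.normal(size=1024)                    # energy spread over scales
print(wavelet_entropy(regular) < wavelet_entropy(irregular))
```

A regular oscillation concentrates energy in few scales (low entropy); irregular, noise-like fluctuations spread energy across scales (high entropy), which is the contrast the study exploits.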
NASA Astrophysics Data System (ADS)
Do, Hong Xuan; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth
2018-04-01
This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.
Automatising the analysis of stochastic biochemical time-series
2015-01-01
Background: Mathematical and computational modelling of biochemical systems has seen a lot of effort devoted to the definition and implementation of high-performance mechanistic simulation frameworks. Within these frameworks it is possible to analyse complex models under a variety of configurations, eventually selecting the best setting of, e.g., parameters for a target system. Motivation: This operational pipeline relies on the ability to interpret the predictions of a model, often represented as simulation time-series. Thus, an efficient data analysis pipeline is crucial to automatise time-series analyses, bearing in mind that errors in this phase might mislead the modeller's conclusions. Results: For this reason we have developed an intuitive framework-independent Python tool to automate analyses common to a variety of modelling approaches. These include assessment of useful non-trivial statistics for simulation ensembles, e.g., estimation of master equations. Intuitive and domain-independent batch scripts will allow the researcher to automatically prepare reports, thus speeding up the usual model-definition, testing and refinement pipeline. PMID:26051821
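The kind of ensemble statistics such a pipeline automates can be illustrated with a toy stochastic degradation model (a hypothetical reaction with invented parameters, not the tool's own code):

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_decay(x0=100, k=0.1, dt=0.1, steps=100):
    """One stochastic realization of first-order degradation X -> 0:
    each molecule independently survives a step with probability e^{-k dt}."""
    x = np.empty(steps + 1, dtype=np.int64)
    x[0] = x0
    for t in range(steps):
        x[t + 1] = rng.binomial(x[t], np.exp(-k * dt))
    return x

ensemble = np.array([simulate_decay() for _ in range(200)])

# Non-trivial ensemble statistics: mean trajectory and a 5-95% band
mean_traj = ensemble.mean(axis=0)
lo, hi = np.percentile(ensemble, [5, 95], axis=0)

# The ensemble mean should track the deterministic solution x0 * e^{-k t}
expected = 100 * np.exp(-0.1 * 0.1 * np.arange(101))
print(bool(np.max(np.abs(mean_traj - expected)) < 5))
```

Summaries like the mean trajectory and percentile bands are exactly the per-ensemble statistics a batch report would collect across model configurations.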
Kim, Jongrae; Bates, Declan G; Postlethwaite, Ian; Heslop-Harrison, Pat; Cho, Kwang-Hyun
2008-05-15
Inherent non-linearities in biomolecular interactions make the identification of network interactions difficult. One of the principal problems is that all methods based on the use of linear time-invariant models will have fundamental limitations in their capability to infer certain non-linear network interactions. Another difficulty is the multiplicity of possible solutions, since, for a given dataset, there may be many different possible networks which generate the same time-series expression profiles. A novel algorithm for the inference of biomolecular interaction networks from temporal expression data is presented. Linear time-varying models, which can represent a much wider class of time-series data than linear time-invariant models, are employed in the algorithm. From time-series expression profiles, the model parameters are identified by solving a non-linear optimization problem. In order to systematically reduce the set of possible solutions for the optimization problem, a filtering process is performed using a phase-portrait analysis with random numerical perturbations. The proposed approach has the advantages of not requiring the system to be in a stable steady state, of using time-series profiles which have been generated by a single experiment, and of allowing non-linear network interactions to be identified. The ability of the proposed algorithm to correctly infer network interactions is illustrated by its application to three examples: a non-linear model for cAMP oscillations in Dictyostelium discoideum, the cell-cycle data for Saccharomyces cerevisiae and a large-scale non-linear model of a group of synchronized Dictyostelium cells. The software used in this article is available from http://sbie.kaist.ac.kr/software
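The advantage of linear time-varying over time-invariant models can be sketched with a windowed least-squares fit. This is a toy two-node system with a slowly drifting interaction matrix; the paper's non-linear optimization and phase-portrait filtering are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(5)
T, d = 400, 2

# Slowly time-varying interaction matrix (a damped rotation, drifting angle)
def A_true(t):
    th = 0.1 + 0.05 * np.sin(2 * np.pi * t / T)
    return 0.99 * np.array([[np.cos(th), -np.sin(th)],
                            [np.sin(th),  np.cos(th)]])

X = np.empty((T + 1, d))
X[0] = [1.0, 0.0]
for t in range(T):
    X[t + 1] = A_true(t) @ X[t] + rng.normal(0, 1e-3, d)

def fit_window(X, t0, w):
    """Least-squares estimate of a locally constant A on [t0, t0+w): X[t+1] ≈ A X[t]."""
    Xt, Xn = X[t0:t0 + w], X[t0 + 1:t0 + w + 1]
    B, *_ = np.linalg.lstsq(Xt, Xn, rcond=None)      # solves Xt @ B = Xn
    return B.T                                       # so A = B^T

A_hat = fit_window(X, 100, 40)
print(bool(np.max(np.abs(A_hat - A_true(120))) < 0.05))
```

A single time-invariant fit over all 400 steps would average away the drift; short windows track it, which is the wider model class the abstract argues for.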
Delay differential analysis of time series.
Lainscsek, Claudia; Sejnowski, Terrence J
2015-03-01
Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. 
Together, this time-domain toolbox provides higher temporal resolution, increased frequency and phase coupling information, and it allows an easy and straightforward implementation of higher-order spectra across time compared with frequency-based methods such as the DFT and cross-spectral analysis.
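A minimal example of fitting a small DDE model by least squares: a single-delay linear system with known ground truth, recovered from the time series alone. The paper's model selection over pools of candidate DDEs is not shown:

```python
import numpy as np

# Simulate a linear delay system x'(t) = -a * x(t - tau) by Euler stepping,
# then recover the coefficient by least squares on a finite-difference derivative.
dt, tau, a = 0.01, 1.0, 1.2
d = int(tau / dt)                        # delay in samples
n = 5000
x = np.empty(n)
x[:d + 1] = 1.0                          # constant history
for t in range(d, n - 1):
    x[t + 1] = x[t] - dt * a * x[t - d]

dx = (x[d + 1:] - x[d:-1]) / dt          # forward differences of x for t >= tau
xd = x[:n - d - 1]                       # matching delayed samples x(t - tau)
a_hat = -float(xd @ dx / (xd @ xd))      # one-parameter least squares
print(round(a_hat, 3))
```

Real applications regress the derivative on several delayed terms and their products (the non-linear correlation functions mentioned above); the one-term case shows the mechanics.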
NASA Astrophysics Data System (ADS)
Stiller-Reeve, Mathew; Stephenson, David; Spengler, Thomas
2017-04-01
For climate services to be relevant and informative for users, scientific data definitions need to match users' perceptions or beliefs. This study proposes and tests novel yet simple methods to compare beliefs of timing of recurrent climatic events with empirical evidence from multiple historical time series. The methods are tested by applying them to the onset date of the monsoon in Bangladesh, where several scientific monsoon definitions can be applied, yielding different results for monsoon onset dates. It is a challenge to know which monsoon definition compares best with people's beliefs. Time series from eight different scientific monsoon definitions in six regions are compared with respondent beliefs from a previously completed survey concerning the monsoon onset. Beliefs about the timing of the monsoon onset are represented probabilistically for each respondent by constructing a probability mass function (PMF) from elicited responses about the earliest, normal, and latest dates for the event. A three-parameter circular modified triangular distribution (CMTD) is used to allow for the possibility (albeit small) of the onset at any time of the year. These distributions are then compared to the historical time series using two approaches: likelihood scores, and the mean and standard deviation of time series of dates simulated from each belief distribution. The methods proposed give the basis for further iterative discussion with decision-makers in the development of eventual climate services. This study uses Jessore, Bangladesh, as an example and finds that a rainfall definition, applying a 10 mm day-1 threshold to NCEP-NCAR reanalysis (Reanalysis-1) data, best matches the survey respondents' beliefs about monsoon onset.
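Constructing a belief distribution from elicited (earliest, normal, latest) dates and scoring it against historical onsets can be sketched as follows. A plain triangular PMF stands in for the paper's circular modified triangular distribution, and all dates are hypothetical:

```python
import numpy as np

def triangular_pmf(earliest, mode, latest, n_days=365):
    """Daily PMF for an onset-date belief elicited as (earliest, normal, latest).
    A plain (non-circular) triangle; the CMTD additionally allows a small
    probability on every day of the year."""
    days = np.arange(n_days)
    p = np.zeros(n_days)
    up = (days >= earliest) & (days <= mode)
    down = (days > mode) & (days <= latest)
    p[up] = (days[up] - earliest + 1) / (mode - earliest + 1)
    p[down] = (latest - days[down]) / (latest - mode)
    return p / p.sum()

belief = triangular_pmf(earliest=130, mode=152, latest=175)  # ~May/June onset

# Likelihood score of a historical onset series under this belief
onsets = np.array([148, 155, 150, 160, 145, 157])            # hypothetical years
score = float(np.mean(np.log(belief[onsets])))
print(score > np.mean(np.log(np.full(len(onsets), 1.0 / 365))))
```

A belief whose mass sits where history actually puts onsets beats the uniform baseline; comparing such scores across candidate monsoon definitions is the paper's selection criterion.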
The geometry of chaotic dynamics — a complex network perspective
NASA Astrophysics Data System (ADS)
Donner, R. V.; Heitzig, J.; Donges, J. F.; Zou, Y.; Marwan, N.; Kurths, J.
2011-12-01
Recently, several complex network approaches to time series analysis have been developed and applied to study a wide range of model systems as well as real-world data, e.g., geophysical or financial time series. Among these techniques, recurrence-based concepts, most prominently ɛ-recurrence networks, most faithfully represent the geometrical fine structure of the attractors underlying chaotic (and, less interestingly, non-chaotic) time series. In this paper we demonstrate that the well-known graph-theoretical properties of local clustering coefficient and global (network) transitivity can meaningfully be exploited to define two new local and two new global measures of dimension in phase space: local upper and lower clustering dimension as well as global upper and lower transitivity dimension. Rigorous analytical as well as numerical results for self-similar sets and simple chaotic model systems suggest that these measures are well-behaved in most non-pathological situations and that they can be estimated reasonably well using ɛ-recurrence networks constructed from relatively short time series. Moreover, we study the relationship between clustering and transitivity dimensions on the one hand, and traditional measures like pointwise dimension or local Lyapunov dimension on the other. We also provide further evidence that the local clustering coefficients, or equivalently the local clustering dimensions, are useful for identifying unstable periodic orbits and other dynamically invariant objects from time series. Our results demonstrate that ɛ-recurrence networks exhibit an important link between dynamical systems and graph theory.
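An ε-recurrence network and the clustering/transitivity properties it builds on can be computed directly from the adjacency matrix. The logistic map, the embedding, and ε below are illustrative choices, not the paper's experiments:

```python
import numpy as np

# Logistic-map time series and a 2D delay embedding
n = 1000
x = np.empty(n + 1)
x[0] = 0.4
for i in range(n):
    x[i + 1] = 4.0 * x[i] * (1 - x[i])
emb = np.column_stack([x[:-1], x[1:]])               # points (x_t, x_{t+1})

# ε-recurrence network: nodes are state vectors, edges where distance < ε
dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
eps = 0.05
A = (dists < eps).astype(int)
np.fill_diagonal(A, 0)

deg = A.sum(axis=1)
triangles = np.diag(A @ A @ A) / 2                   # closed triples per node
with np.errstate(divide="ignore", invalid="ignore"):
    # local clustering coefficient C_i, the basis of local clustering dimension
    C = np.where(deg > 1, 2 * triangles / (deg * (deg - 1)), 0.0)

# Global transitivity: ratio of closed to connected triples
trans = triangles.sum() * 2 / (deg * (deg - 1)).sum()
print(0.0 < trans <= 1.0)
```

The dimension estimators in the paper study how these clustering and transitivity values scale with ε, which is why the raw quantities above are the starting point.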
Calibrating binary lumped parameter models
NASA Astrophysics Data System (ADS)
Morgenstern, Uwe; Stewart, Mike
2017-04-01
Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters of different ages for such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent more complex mixing, combining water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and therefore difficult to constrain fully. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model.
We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its steep gradient currently in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
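A binary mixing model is a weighted sum of two single-LPM responses. The sketch below convolves a stylized (not real) tritium input curve with two exponential age distributions; the exponential model is one common LPM choice, and all parameter values are illustrative:

```python
import numpy as np

# Yearly tritium input (TU) — a stylized bomb-peak curve, not real data
years = np.arange(1950, 2018)
inp = 2.0 + 600.0 * np.exp(-((years - 1964) / 4.0) ** 2)

lam = np.log(2) / 12.32                 # tritium decay constant (half-life 12.32 a)

def exponential_lpm(inp, mrt):
    """Output concentration for an exponential age distribution g(τ)=e^{-τ/T}/T,
    including radioactive decay; returns the value at the last input year."""
    tau = np.arange(len(inp))
    g = np.exp(-tau / mrt) / mrt
    g /= g.sum()                        # discrete normalization
    return float(np.sum(inp[::-1] * g * np.exp(-lam * tau)))

# Binary mixture: fraction f of young water (MRT 5 a), the rest old (MRT 80 a)
f = 0.6
c_binary = f * exponential_lpm(inp, 5) + (1 - f) * exponential_lpm(inp, 80)
print(c_binary > 0)
```

Fitting f and the two mean residence times (plus mixing parameters, in the general case) to multi-tracer time series is what makes the five-parameter model data-hungry.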
ERIC Educational Resources Information Center
Mason, Emily
2010-01-01
Research investigating music textbook series is limited and has primarily focused on series no longer in publication, on two grade levels, and/or on limited cultures. The purpose of this study is to examine what countries are and have been represented in current music textbook series. Additional questions in the study pertain to frequency and…
Spectral analysis of hydrological time series of a river basin in southern Spain
NASA Astrophysics Data System (ADS)
Luque-Espinar, Juan Antonio; Pulido-Velazquez, David; Pardo-Igúzquiza, Eulogio; Fernández-Chacón, Francisca; Jiménez-Sánchez, Jorge; Chica-Olmo, Mario
2016-04-01
Spectral analysis has been applied with the aim of determining the presence and statistical significance of climate cycles in data series from different rainfall, piezometric, and gauging stations located in the upper Genil River Basin. This river starts in the Sierra Nevada Range at 3,480 m a.s.l. and is one of the most important rivers of this region. The study area covers more than 2,500 km2, with large topographic differences. For this study, we used more than 30 rainfall data series, 4 piezometric data series, and 3 data series from gauging stations. Considering a monthly temporal unit, the studied period ranges from 1951 to 2015, but most of the data series have gaps. Spectral analysis is a methodology widely used to discover cyclic components in time series. The time series is assumed to be a linear combination of sinusoidal functions of known periods but of unknown amplitude and phase. The amplitude is related to the variance of the time series explained by the oscillation at each frequency (Blackman and Tukey, 1958; Bras and Rodríguez-Iturbe, 1985; Chatfield, 1991; Jenkins and Watts, 1968; among others). The signal component represents the structured part of the time series, made up of a small number of embedded periodicities. We then take into account the known result for the one-sided confidence band of the power spectrum estimator, establishing confidence levels of <90%, 90%, 95%, and 99%. Different climate signals have been identified: ENSO, QBO, NAO, and sunspot cycles, as well as others related to solar activity, but the most powerful signals correspond to the annual cycle, followed by the 6-month and NAO cycles. Nevertheless, significant differences between the rainfall series and the piezometric/flow series were noted: in the piezometric and flow data series, the ENSO and NAO signals can be stronger than higher-frequency ones.
The climatic peaks at lower frequencies in the rainfall data are smaller, and so are their confidence levels. On the other hand, the most important influences on groundwater resources and river flows are the NAO, sunspot, and ENSO signals and the annual cycle. Acknowledgments: This research has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO funds and by Junta de Andalucía (Group RNM122).
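The basic periodogram step of such an analysis — identifying the dominant cycle in a monthly series — can be sketched as follows (synthetic series; the study's confidence-band construction is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(6)
n_months = 64 * 12                                   # ~64 years of monthly data
t = np.arange(n_months)
# Synthetic rainfall anomaly: annual + 6-month cycles plus noise
x = (1.0 * np.sin(2 * np.pi * t / 12)
     + 0.5 * np.sin(2 * np.pi * t / 6)
     + rng.normal(0, 1, n_months))

# Periodogram (power per frequency); frequencies in cycles per month
X = np.fft.rfft(x - x.mean())
power = (np.abs(X) ** 2) / n_months
freqs = np.fft.rfftfreq(n_months, d=1.0)

peak = freqs[np.argmax(power[1:]) + 1]               # skip the zero frequency
print(round(1.0 / peak, 1))                          # period of dominant cycle, months
```

In practice each spectral peak is then tested against the one-sided confidence band of the spectrum estimator before a climate signal is claimed.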
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average model (KARMA), a dynamic class of models for time series taking values in the double-bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters, and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis, and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.
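The median parameterization can be made concrete: with CDF F(x) = 1 − (1 − x^a)^b, fixing the median μ gives b = log(0.5)/log(1 − μ^a). The sketch below simulates a simplified KARMA-like recursion on (0,1) — an AR(1)-type logit link only, not the paper's full ARMA-with-regressors specification:

```python
import numpy as np

rng = np.random.default_rng(7)

def kuma_sample(median, a):
    """Draw from Kumaraswamy(a, b) with b chosen so the median equals `median`."""
    b = np.log(0.5) / np.log(1.0 - median ** a)
    u = rng.uniform()
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)   # inverse CDF

def simulate_karma(n, beta0=-0.5, phi=0.6, a=3.0):
    """Toy recursion: the logit of the conditional median follows an AR(1)
    in the logit of the previous observation."""
    logit = lambda m: np.log(m / (1 - m))
    inv_logit = lambda e: 1.0 / (1.0 + np.exp(-e))
    y = np.empty(n)
    y[0] = 0.5
    for t in range(1, n):
        eta = beta0 + phi * logit(y[t - 1])
        y[t] = kuma_sample(inv_logit(eta), a)
    return y

y = simulate_karma(500)
print(bool(np.all((y > 0) & (y < 1))))
```

Every simulated value stays inside the unit interval by construction, which is the point of modelling double-bounded data with this family rather than with a Gaussian ARMA.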
Flight test experience using advanced airborne equipment in a time-based metered traffic environment
NASA Technical Reports Server (NTRS)
Morello, S. A.
1980-01-01
A series of test flights have demonstrated that time-based metering guidance and control was acceptable to pilots and air traffic controllers. The descent algorithm of the technique, with good representation of aircraft performance and wind modeling, yielded arrival time accuracy within 12 sec. It is expected that this will represent significant fuel savings (1) through a reduction of the time error dispersions at the metering fix for the entire fleet, and (2) for individual aircraft as well, through the presentation of guidance for a fuel-efficient descent. Air traffic controller workloads were also reduced, in keeping with the reduction of required communications resulting from the transfer of navigation responsibilities to pilots. A second series of test flights demonstrated that an existing flight management system could be modified to operate in the new mode.
Decadal-Scale Crustal Deformation Transients in Japan Prior to the March 11, 2011 Tohoku Earthquake
NASA Astrophysics Data System (ADS)
Mavrommatis, A. P.; Segall, P.; Miyazaki, S.; Owen, S. E.; Moore, A. W.
2012-12-01
Excluding postseismic transients and slow-slip events, interseismic deformation is generally believed to accumulate linearly in time. We test this assumption using data from Japan's GPS Earth Observation Network System (GEONET), which provides high-precision time series spanning over 10 years. Here we report regional signals of decadal transients that in some cases appear to be unrelated to any known source of deformation. We analyze GPS position time series processed independently, using the BERNESE and GIPSY-PPP software, provided by the Geospatial Information Authority of Japan (GSI) and a collaborative effort of Jet Propulsion Laboratory (JPL) and Dr. Mark Simons (Caltech), respectively. We use time series from 891 GEONET stations, spanning an average of ~14 years prior to the Mw 9.0 March 11, 2011 Tohoku earthquake. We assume a time series model that includes a linear term representing constant velocity, as well as a quadratic term representing constant acceleration. Postseismic transients, where observed, are modeled by A log(1 + t/tc). We also model seasonal terms and antenna offsets, and solve for the best-fitting parameters using standard nonlinear least squares. Uncertainties in model parameters are determined by linear propagation of errors. Noise parameters are inferred from time series that lack obvious transients using maximum-likelihood estimation and assuming a combination of power-law and white noise. Resulting velocity uncertainties are on the order of 1.0 to 1.5 mm/yr. Excluding stations with high misfit to the time series model, our results reveal several spatially coherent patterns of statistically significant (at as much as 5σ) apparent crustal acceleration in various regions of Japan. The signal exhibits similar patterns in both the GSI and JPL solutions and is not coherent across the entire network, which indicates that the pattern is not a reference frame artifact. 
We interpret most of the accelerations to represent transient deformation due to known sources, including slow-slip events (e.g., the post-2000 Tokai event) or postseismic transients due to large earthquakes prior to 1996 (e.g., the M 7.7 1993 Hokkaido-Nansei-Oki and M 7.7 1994 Sanriku-Oki earthquakes). Viscoelastic modeling will be required to confirm the influence of past earthquakes on the acceleration field. In addition to these signals, we find spatially coherent accelerations in the Tohoku and Kyushu regions. Specifically, we observe generally southward acceleration extending for ~400 km near the west coast of Tohoku, east-southeastward acceleration covering ~200 km along the southeast coast of Tohoku, and west-northwestward acceleration spanning ~100 km across the south coast of Kyushu. Interestingly, the eastward acceleration field in Tohoku is spatially correlated with the extent of the March 11, 2011 Mw 9.0 rupture area. We note that the inferred acceleration is present prior to the sequence of M 7+ earthquakes beginning in 2003, and that short-term transients following these events have been accounted for in the analysis. A possible, although non-unique, cause of the acceleration is increased slip rate on the Japan Trench. However, such widespread changes would not be predicted by standard earthquake nucleation models.
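The trajectory model described above — constant velocity plus constant acceleration plus seasonal terms — and the error propagation reduce to a linear least-squares problem. This sketch uses a synthetic single-component series with white noise only; the postseismic log terms and power-law noise estimation are omitted:

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(0, 14, 14 / 500)                       # ~14 years, 500 epochs (yr)

# Synthetic position (mm): offset, velocity, acceleration term, annual cycle
truth = np.array([2.0, 3.0, 0.15, 4.0, -2.0])
G = np.column_stack([np.ones_like(t), t, t ** 2,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
y = G @ truth + rng.normal(0, 1.5, t.size)           # 1.5 mm white noise

m, *_ = np.linalg.lstsq(G, y, rcond=None)

# Formal uncertainties by linear propagation of errors (white noise assumed)
cov = 1.5 ** 2 * np.linalg.inv(G.T @ G)
sigma = np.sqrt(np.diag(cov))

# Recovered acceleration term should match the truth within its uncertainty
print(bool(abs(m[2] - truth[2]) < 4 * sigma[2]))
```

The significance test in the abstract is then the ratio of the estimated quadratic coefficient to its propagated uncertainty; with temporally correlated noise the covariance must be built from the inferred power-law noise model rather than the white-noise form above.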
Stochastic modeling of hourly rainfall times series in Campania (Italy)
NASA Astrophysics Data System (ADS)
Giorgio, M.; Greco, R.
2009-04-01
The occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. The effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which would allow longer lead times. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models such as AR, ARMA, ARX, and ARMAX (e.g. Salas [1992]). Such models give their best results when applied to autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, as point rainfall height series usually are, especially when recorded with short sampling intervals. More useful for this purpose are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool for implementing an early warning system for hydrogeological risk management.
Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of the Campania Region civil protection agency meteorological warning network. ACKNOWLEDGEMENTS The research was co-financed by the Italian Ministry of University by means of the PRIN 2006 program, within the research project entitled ‘Definition of critical rainfall thresholds for destructive landslides for civil protection purposes'. REFERENCES Cowpertwait, P.S.P., Kilsby, C.G. and O'Connell, P.E., 2002. A space-time Neyman-Scott model of rainfall: Empirical analysis of extremes, Water Resources Research, 38(8):1-14. Salas, J.D., 1992. Analysis and modeling of hydrological time series, in D.R. Maidment, ed., Handbook of Hydrology, McGraw-Hill, New York. Heneker, T.M., Lambert, M.F. and Kuczera, G., 2001. A point rainfall model for risk-based design, Journal of Hydrology, 247(1-2):54-71.
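The alternating renewal structure — dry intervals alternating with rectangular rain pulses — can be sketched as follows. The exponential distributions and parameter values are illustrative placeholders, not the calibrated DRIP model:

```python
import numpy as np

rng = np.random.default_rng(9)

def drip_like_series(n_hours, mean_dry=30.0, mean_wet=6.0, mean_intensity=2.0):
    """Alternating-renewal sketch: exponential dry spells and storm durations,
    each storm a rectangular pulse of exponentially distributed intensity (mm/h)."""
    rain = np.zeros(n_hours)
    t = 0.0
    while t < n_hours:
        t += rng.exponential(mean_dry)               # dry interval
        dur = rng.exponential(mean_wet)              # storm duration
        inten = rng.exponential(mean_intensity)      # storm intensity
        a, b = int(t), min(int(t + dur), n_hours)
        rain[a:b] = inten
        t += dur
    return rain

series = drip_like_series(24 * 365)                  # one synthetic year, hourly
wet_fraction = float((series > 0).mean())
print(0.0 < wet_fraction < 0.5)
```

Calibration then amounts to choosing the renewal and intensity distributions (and their parameters) so that synthetic series reproduce the statistics of the gauge records.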
An Algorithm Framework for Isolating Anomalous Signals in Electromagnetic Data
NASA Astrophysics Data System (ADS)
Kappler, K. N.; Schneider, D.; Bleier, T.; MacLean, L. S.
2016-12-01
QuakeFinder and its international collaborators have installed and currently maintain an array of 165 three-axis induction magnetometer instrument sites in California, Peru, Taiwan, Greece, Chile and Sumatra. Based on research by Bleier et al. (2009), Fraser-Smith et al. (1990), and Freund (2007), the electromagnetic data from these instruments are being analyzed for pre-earthquake signatures. This analysis consists of private research by QuakeFinder and work with institutional collaborators (PUCP in Peru, NCU in Taiwan, NOA in Greece, LASP at the University of Colorado, Stanford, UCLA, NASA-ESI, NASA-AMES and USC-CSEP). QuakeFinder has developed an algorithm framework aimed at isolating anomalous signals (pulses) in the time series. Results are presented from an application of this framework to induction-coil magnetometer data. Our data-driven approach starts with sliding windows applied to uniformly resampled array data with a variety of lengths and overlaps. Data variance (a proxy for energy) is calculated on each window, and a short-term average / long-term average (STA/LTA) filter is applied to the variance time series. Pulses are identified by flagging time intervals in the STA/LTA-filtered time series that exceed a threshold. Flagged time intervals are subsequently fed into a feature extraction program which computes statistical properties of the resampled data. These features are then filtered using a Principal Component Analysis (PCA) based method to cluster similar pulses. We explore the extent to which this approach categorizes pulses with known sources (e.g. cars, lightning, etc.), so that the remaining pulses of unknown origin can be analyzed with respect to their relationship with seismicity. We seek a correlation between these daily pulse counts (with known sources removed) and subsequent (days to weeks) seismic events greater than M5 within a 15 km radius.
Thus we explore functions which map daily pulse-counts to a time series representing the likelihood of a seismic event occurring at some future time. These "pseudo-probabilities" can in turn be represented as Molchan diagrams. The Molchan curve provides an effective cost function for optimization and allows for a rigorous statistical assessment of the validity of pre-earthquake signals in the electromagnetic data.
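The variance-based STA/LTA pulse flagging described above can be sketched in a few lines. This is only an illustration of the principle: the window lengths, detection threshold, and the injected synthetic pulse are arbitrary assumptions, not QuakeFinder's actual parameters.

```python
import numpy as np

def moving_avg(x, n):
    """Centred moving average, normalised by true window coverage at the edges."""
    kernel = np.ones(n)
    return np.convolve(x, kernel, mode="same") / np.convolve(
        np.ones_like(x), kernel, mode="same")

def sta_lta_flags(variance, n_sta, n_lta, threshold):
    """Flag indices where the short-term/long-term average ratio of the
    variance (energy proxy) series exceeds the detection threshold."""
    ratio = moving_avg(variance, n_sta) / np.maximum(
        moving_avg(variance, n_lta), 1e-12)
    return np.flatnonzero(ratio > threshold)

# Quiet magnetometer-like background with one injected anomalous pulse.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 2000)
signal[1000:1010] += 15.0                      # the "pulse"
flags = sta_lta_flags(signal ** 2, n_sta=10, n_lta=200, threshold=5.0)
```

With these settings the flagged indices cluster around the injected pulse, while the stationary background stays below threshold.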
NASA Astrophysics Data System (ADS)
Duveiller, G.; Donatelli, M.; Fumagalli, D.; Zucchini, A.; Nelson, R.; Baruth, B.
2017-02-01
Coupled atmosphere-ocean general circulation models (GCMs) simulate different realizations of possible future climates at global scale under contrasting scenarios of land use and greenhouse gas emissions. Such data require several additional processing steps before they can be used to drive impact models. Spatial downscaling, typically by regional climate models (RCMs), and bias correction are two such steps that have already been addressed for Europe. Yet the errors in the resulting daily meteorological variables may be too large for specific model applications. Crop simulation models are particularly sensitive to these inconsistencies and thus require further processing of GCM-RCM outputs. Moreover, crop models are often run in a stochastic manner, using various plausible weather time series (often generated with stochastic weather generators) to represent the climate of a period of interest (e.g. 2000 ± 15 years), while GCM simulations typically provide a single time series for a given emission scenario. To inform agricultural policy-making, data for near- and medium-term decadal time horizons, e.g. 2020 or 2030, are most often requested. Taking a sample of multiple years from these unique time series to represent time horizons in the near future is particularly problematic because selecting overlapping years may lead to spurious trends, creating artefacts in the results of the impact model simulations. This paper presents a database of consolidated and coherent future daily weather data for Europe that addresses these problems. Input data consist of daily temperature and precipitation from three dynamically downscaled and bias-corrected regional climate simulations of the IPCC A1B emission scenario created within the ENSEMBLES project. Solar radiation is estimated from temperature based on an auto-calibration procedure. Wind speed and relative air humidity are collected from historical series. 
From these variables, reference evapotranspiration and vapour pressure deficit are estimated ensuring consistency within daily records. The weather generator ClimGen is then used to create 30 synthetic years of all variables to characterize the time horizons of 2000, 2020 and 2030, which can readily be used for crop modelling studies.
Real-time flutter boundary prediction based on time series models
NASA Astrophysics Data System (ADS)
Gu, Wenjing; Zhou, Li
2018-03-01
For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models, each with a corresponding stability criterion, are adopted in this paper. The first method divides a long nonstationary response signal into many contiguous intervals, each of which is considered stationary, and a traditional AR model is then established to represent each interval of the signal sequence. The second employs a time-varying AR model to characterize signals measured in flutter tests with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods are effective in predicting the flutter boundary at lower speed levels. A comparison between the two methods is also given in this paper.
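The core of the first method, fitting an AR model to a locally stationary interval and judging stability from the identified coefficients, can be sketched as follows. Checking that all characteristic roots lie inside the unit circle is equivalent to Jury's criterion for a discrete system; the AR order, sample length, and simulated pole locations below are illustrative assumptions, not the paper's test cases.

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares AR(p) fit: x[t] ≈ a[0]*x[t-1] + ... + a[p-1]*x[t-p]."""
    X = np.column_stack(
        [x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def stability_margin(a):
    """1 minus the largest characteristic-root modulus.  A positive margin
    means a decaying (stable) response; the margin shrinking towards zero
    signals the approach of the flutter boundary."""
    roots = np.roots(np.concatenate(([1.0], -a)))
    return 1.0 - float(np.max(np.abs(roots)))

# Simulated stable AR(2) response with known poles at 0.8 ± 0.5i (|pole| ≈ 0.943).
rng = np.random.default_rng(1)
x = np.zeros(2000)
e = rng.normal(size=2000)
for t in range(2, 2000):
    x[t] = 1.6 * x[t - 1] - 0.89 * x[t - 2] + e[t]

margin = stability_margin(fit_ar(x, 2))        # should recover ≈ 1 - 0.943
```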
Memory persistency and nonlinearity in daily mean dew point across India
NASA Astrophysics Data System (ADS)
Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik; Bhattacharjee, Anup Kumar
2016-04-01
This work estimates the memory persistence of the daily mean dew point time series obtained from seven weather stations, viz. Kolkata, Chennai (Madras), New Delhi, Mumbai (Bombay), Bhopal, Agartala and Ahmedabad, representing different geographical zones of India. Hurst exponent values reveal an anti-persistent behaviour of these dew point series. To corroborate the Hurst exponent values, five different scaling methods have been used and the corresponding results compared in order to draw a more reliable conclusion. The analysis also indicates that the variation in daily mean dew point is governed by a non-stationary process with stationary increments. The delay vector variance (DVV) method has been exploited to investigate nonlinearity, and the present calculation confirms the presence of a deterministic nonlinear profile in the daily mean dew point time series of all seven stations.
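Of the five scaling methods the abstract mentions, one of the simplest is rescaled-range (R/S) analysis, sketched below on synthetic white noise (for which H should come out near 0.5; H < 0.5 would indicate the anti-persistence reported for the dew point series). The chunk sizes and series length are illustrative assumptions.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
    regress log(mean R/S) on log(chunk size); the slope is H."""
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(x) // 2:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation
            r = dev.max() - dev.min()               # range
            s = chunk.std()                         # standard deviation
            if s > 0:
                vals.append(r / s)
        sizes.append(n)
        rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return float(slope)

rng = np.random.default_rng(2)
h_white = hurst_rs(rng.normal(size=4096))   # uncorrelated noise -> H near 0.5
```

Note that plain R/S is known to be biased slightly upward on short series, which is one reason to cross-check several scaling estimators as the study does.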
Modeling and clustering water demand patterns from real-world smart meter data
NASA Astrophysics Data System (ADS)
Cheifetz, Nicolas; Noumir, Zineb; Samé, Allou; Sandraz, Anne-Claire; Féliers, Cédric; Heim, Véronique
2017-08-01
Nowadays, drinking water utilities need a detailed understanding of the water demand on their distribution networks in order to optimize resources efficiently, manage billing and propose new customer services. With the emergence of smart grids based on automated meter reading (AMR), a finer-grained understanding of consumption modes is now accessible for smart cities. In this context, this paper evaluates a novel methodology for identifying relevant usage profiles from the water consumption data produced by smart meters. The methodology is fully data-driven, using consumption time series which are seen as functions or curves observed with an hourly time step. First, a Fourier-based additive time series decomposition model is introduced to extract seasonal patterns from the time series. These patterns are intended to represent customer habits in terms of water consumption. Two functional clustering approaches are then used to classify the extracted seasonal patterns: the functional version of K-means, and the Fourier REgression Mixture (FReMix) model. The K-means approach produces a hard segmentation and K representative prototypes. The FReMix model, being generative, also produces K profiles but yields a soft segmentation based on posterior probabilities. The proposed approach is applied to a smart grid deployed on the largest water distribution network (WDN) in France. The two clustering strategies are evaluated and compared. Finally, a realistic interpretation of the consumption habits is given for each cluster. The extensive experiments and the qualitative interpretation of the resulting clusters highlight the effectiveness of the proposed methodology.
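The two-stage pipeline, extracting a Fourier-based seasonal pattern per meter and then clustering the patterns, can be sketched as below. This uses plain K-means (one of the paper's two approaches) with a simple farthest-point initialisation; the synthetic morning-peak/evening-peak demand profiles, harmonic count, and cluster count are illustrative assumptions.

```python
import numpy as np

def seasonal_pattern(series, period, n_harmonics=3):
    """Least-squares fit of a truncated Fourier series; the fitted curve over
    one period stands in for the customer's seasonal consumption pattern."""
    t = np.arange(len(series))
    cols = [np.ones(len(series))]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * t / period),
                 np.sin(2 * np.pi * k * t / period)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, series, rcond=None)
    return A[:period] @ coef

def kmeans2(patterns, n_iter=20):
    """Lloyd's algorithm with k=2, seeded by farthest-point initialisation."""
    c0 = patterns[0]
    c1 = patterns[np.argmax(((patterns - c0) ** 2).sum(axis=1))]
    centers = np.array([c0, c1])
    for _ in range(n_iter):
        d = ((patterns[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        centers = np.array(
            [patterns[labels == j].mean(axis=0) for j in (0, 1)])
    return labels

# Two synthetic household types: morning-peak vs evening-peak hourly demand.
rng = np.random.default_rng(3)
t = np.arange(24 * 28)                                   # four weeks, hourly
def make(peak_shift):
    base = np.maximum(0.0, np.sin(2 * np.pi * (t - peak_shift) / 24))
    return base + 0.1 * rng.normal(size=t.size)

series = [make(6) for _ in range(5)] + [make(18) for _ in range(5)]
patterns = np.array([seasonal_pattern(s, 24) for s in series])
labels = kmeans2(patterns)
```

Because the Fourier fit averages four weeks of noisy data into one 24-hour pattern, the two household types become well separated and the hard segmentation recovers them exactly.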
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Designing Instructor-Led Schools with Rapid Prototyping.
ERIC Educational Resources Information Center
Lange, Steven R.; And Others
1996-01-01
Rapid prototyping involves abandoning many of the linear steps of traditional prototyping; it is instead a series of design iterations representing each major stage. This article describes the development of an instructor-led course for midlevel auditors using the principles and procedures of rapid prototyping, focusing on the savings in time and…
USDA-ARS's Scientific Manuscript database
Stochastic weather generators are widely used in hydrological, environmental, and agricultural applications to simulate and forecast weather time series. However, such stochastic processes usually produce random outputs, raising the question of how representative the generated data are if obtained fro...
Rényi’s information transfer between financial time series
NASA Astrophysics Data System (ADS)
Jizba, Petr; Kleinert, Hagen; Shefaat, Mohammad
2012-05-01
In this paper, we quantify the statistical coherence between financial time series by means of the Rényi entropy. With the help of Campbell's coding theorem, we show that the Rényi entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppressing others. This accentuation is controlled with Rényi's parameter q. To tackle the issue of the information flow between time series, we formulate the concept of Rényi's transfer entropy as a measure of information that is transferred only between certain parts of the underlying distributions. This is particularly pertinent in financial time series, where knowledge of marginal events such as spikes or sudden jumps is of crucial importance. We apply the Rényian information flow to stock market time series from 11 world stock indices sampled at a daily rate over the period 02.01.1990-31.12.2009. Corresponding heat maps and net information flows are represented graphically. A detailed discussion of the transfer entropy between the DAX and S&P500 indices based on minute tick data gathered in the period 02.04.2008-11.09.2009 is also provided. Our analysis shows that the bivariate information flow between world markets is strongly asymmetric, with a distinct information surplus flowing from the Asia-Pacific region to both European and US markets. An important yet less dramatic excess of information also flows from Europe to the US. This is seen particularly clearly in a careful analysis of the Rényi information flow between the DAX and S&P500 indices.
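The q-dependent accentuation the abstract describes is visible already in the plain Rényi entropy of a discrete distribution, sketched below on toy distributions (the transfer entropy builds conditional versions of this quantity; that extension is not shown here).

```python
import numpy as np

def renyi_entropy(p, q):
    """Rényi entropy (bits) of a discrete distribution.  q < 1 emphasises
    rare events in the tails, q > 1 the bulk; q -> 1 recovers Shannon."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-9:
        return float(-np.sum(p * np.log2(p)))       # Shannon limit
    return float(np.log2(np.sum(p ** q)) / (1.0 - q))

uniform = np.full(8, 1.0 / 8.0)
h_uniform = renyi_entropy(uniform, 2.0)             # 3 bits, for any q

# A concentrated distribution: Rényi entropy is non-increasing in q, so a
# tail-weighted reading (q < 1) reports more uncertainty than q > 1.
skewed = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
h_tail = renyi_entropy(skewed, 0.5)
h_bulk = renyi_entropy(skewed, 2.0)
```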
Wavelet application to the time series analysis of DORIS station coordinates
NASA Astrophysics Data System (ADS)
Bessissi, Zahia; Terbeche, Mekki; Ghezali, Boualem
2009-06-01
The topic developed in this article is the analysis of residual time series of DORIS station coordinates using the wavelet transform. Several analysis techniques, already developed in other disciplines, were employed in the statistical study of geodetic station time series. The wavelet transform provides both temporal and frequency-domain descriptions of the residual signals, and allows systematic signals such as trend and periodicity to be determined and quantified. The trend is the change in the signal over the short or long term: an average curve representing the general pace of the signal's evolution. Periodicity is a process which repeats itself, identical in form, after a time interval called the period. In this context, this article first determines the systematic signals by wavelet analysis of the time series of DORIS station coordinates, and then applies wavelet-packet denoising, which yields a well-filtered signal, smoother than the original. The DORIS data used are a set of weekly residual time series from 1993 to 2004 from eight stations: DIOA, COLA, FAIB, KRAB, SAKA, SODB, THUB and SYPB. This is the ign03wd01 solution expressed in stcd format, derived by the IGN/JPL analysis center. Although these data are not very recent, the goal of this study is to assess the contribution of the wavelet analysis method on DORIS data compared to the other analysis methods already studied.
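Wavelet denoising of the kind applied to the DORIS residuals can be illustrated with a one-level Haar transform: small detail coefficients, assumed to be noise, are zeroed before reconstruction. The article uses wavelet packets; this single-level sketch with an arbitrary threshold and synthetic data only conveys the principle.

```python
import numpy as np

def haar_denoise(x, threshold):
    """One-level Haar wavelet shrinkage on an even-length signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail coefficients
    d = np.where(np.abs(d) > threshold, d, 0.0)   # hard thresholding
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)            # inverse transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(4)
t = np.arange(1024)
clean = np.sin(2 * np.pi * t / 128)           # slow periodic "station signal"
noisy = clean + 0.3 * rng.normal(size=1024)
denoised = haar_denoise(noisy, 0.5)
```

The slow periodic component lives almost entirely in the approximation band, so thresholding the detail band removes noise while leaving the systematic signal largely intact.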
Identification of varying time scales in sediment transport using the Hilbert-Huang Transform method
NASA Astrophysics Data System (ADS)
Kuai, Ken Z.; Tsai, Christina W.
2012-02-01
Sediment transport processes vary over a wide range of time scales, from seconds, hours and days to months and years. Multiple time scales exist in the coupled system of flow, sediment transport and bed elevation change. As such, identification and selection of appropriate time scales for flow and sediment processes can assist in formulating a system of flow and sediment governing equations representative of the dynamic interaction of flow and particles at the desired level of detail. Recognizing the importance of these varying time scales in the fluvial processes of sediment transport, we introduce the Hilbert-Huang Transform (HHT) method to the field of sediment transport for time scale analysis. The HHT uses the Empirical Mode Decomposition (EMD) method to decompose a time series into a collection of Intrinsic Mode Functions (IMFs), and uses the Hilbert Spectral Analysis (HSA) to obtain instantaneous frequency data. The EMD extracts the variability of data with different time scales and improves the analysis of data series. The HSA can display the succession of time-varying time scales, which cannot be captured by the often-used Fast Fourier Transform (FFT) method. This study is one of the earlier attempts to introduce this state-of-the-art technique for the multiple time scale analysis of sediment transport processes. Three practical applications of the HHT method to data analysis of both suspended sediment and bedload transport time series are presented. The analysis results show the strong impact of flood waves on the variations of flow and sediment time scales at a large sampling time scale, as well as the impact of flow turbulence on those time scales at a smaller sampling time scale. Our analysis reveals that the existence of multiple time scales in sediment transport processes may be attributed to the fractal nature of sediment transport. 
The HHT analysis demonstrates that the bedload motion time scale is better represented by the ratio of the water depth to the settling velocity, h/w. In the final part, HHT results are compared with an available time scale formula from the literature.
Construction of the Non-Rigid Earth Rotation Series
NASA Astrophysics Data System (ADS)
Pashkevich, V. V.
2007-01-01
In recent years, many attempts have been made to derive a high-precision theory of non-rigid Earth rotation. For this purpose, different transfer functions are used. Usually these transfer functions are applied to the series representing the nutation in longitude and obliquity of the rigid Earth's rotation with respect to the ecliptic of date. The aim of this investigation is the construction of a new high-precision non-rigid Earth rotation series (SN9000), dynamically adequate to the DE404/LE404 ephemeris over a 2000-year time span, presented as functions of the Euler angles Ψ, θ and φ with respect to the fixed ecliptic plane and equinox J2000.0.
ELECTRONIC ANALOG COMPUTER FOR DETERMINING RADIOACTIVE DISINTEGRATION
Robinson, H.P.
1959-07-14
A computer is presented for determining growth and decay curves for elements in a radioactive disintegration series, wherein one unstable element decays to form a second unstable element or isotope, which in turn forms a third element, etc. The growth and decay curves of radioactive elements are simulated by the charge and discharge curves of a resistance-capacitance network. Several such networks having readily adjustable values are connected in series, with an amplifier between each successive pair. The time constant of each network is set proportional to the half-life of the corresponding element in the series represented, and the charge and discharge curves of each network simulate that element's growth and decay curve.
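The analogy works because a decay chain and a cascaded RC network obey the same first-order linear differential equations. A numerical sketch, with arbitrary decay constants, integrates a two-member chain the way the analog circuit does continuously, and checks the result against the analytic (Bateman) solution that the RC voltages reproduce.

```python
import math

# Decay constants (arbitrary units); in the analog computer each lambda
# corresponds to an RC time constant set proportional to the half-life.
lam1, lam2 = 0.5, 0.2
dt, steps = 0.001, 10000                 # integrate to t = 10
n1, n2 = 1.0, 0.0                        # parent starts at unity, daughter empty
for _ in range(steps):
    # Forward Euler: dN1/dt = -lam1*N1 ; dN2/dt = lam1*N1 - lam2*N2
    n1, n2 = n1 + dt * (-lam1 * n1), n2 + dt * (lam1 * n1 - lam2 * n2)

t = dt * steps
n1_exact = math.exp(-lam1 * t)
n2_exact = lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
```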
Koda, Satoru; Onda, Yoshihiko; Matsui, Hidetoshi; Takahagi, Kotaro; Yamaguchi-Uehara, Yukiko; Shimizu, Minami; Inoue, Komaki; Yoshida, Takuhiro; Sakurai, Tetsuya; Honda, Hiroshi; Eguchi, Shinto; Nishii, Ryuei; Mochida, Keiichi
2017-01-01
We report the comprehensive identification of periodic genes and their network inference, based on a gene co-expression analysis and an Auto-Regressive eXogenous (ARX) model with a group smoothly clipped absolute deviation (SCAD) method, using a time-series transcriptome dataset in a model grass, Brachypodium distachyon. To reveal the diurnal changes in the transcriptome of B. distachyon, we performed RNA-seq analysis of its leaves sampled through a diurnal cycle of over 48 h at 4 h intervals with three biological replicates, and identified 3,621 periodic genes through our wavelet analysis. The expression data are suitable for inferring network sparsity based on ARX models. We found that genes involved in biological processes such as transcriptional regulation, protein degradation, post-transcriptional modification and photosynthesis are significantly enriched among the periodic genes, suggesting that these processes might be regulated by circadian rhythm in B. distachyon. On the basis of the time-series expression patterns of the periodic genes, we constructed a chronological gene co-expression network and identified putative transcription factor encoding genes that might be involved in the time-specific regulatory transcriptional network. Moreover, we inferred a transcriptional network composed of the periodic genes in B. distachyon, aiming to identify genes associated with other genes through variable selection by grouping time points for each gene. Based on the ARX model with group SCAD regularization applied to our time-series expression datasets of the periodic genes, we constructed gene networks and found that they exhibit a typical scale-free structure. Our findings demonstrate that the diurnal changes in the transcriptome of B. distachyon leaves have a sparse, spatiotemporal gene regulatory network structure over the cyclic phase transitions of diurnal growth.
Predictive Mining of Time Series Data
NASA Astrophysics Data System (ADS)
Java, A.; Perlman, E. S.
2002-05-01
All-sky monitors are a relatively new development in astronomy, and their data represent a largely untapped resource. Proper utilization of this resource could lead to important discoveries not only in the physics of variable objects, but in how one observes such objects. We discuss the development of a Java toolbox for astronomical time series data. Rather than using methods conventional in astronomy (e.g., power spectrum and cross-correlation analysis), we employ rule discovery techniques commonly used in analyzing stock-market data. By clustering patterns found within the data, rule discovery allows one to build predictive models, forecasting when a given event might occur or whether the occurrence of one event will trigger a second. We have tested the toolbox and accompanying display tool on datasets (representing several classes of objects) from the RXTE All Sky Monitor, and use these datasets to illustrate the methods and functionality of the toolbox. We have found predictive patterns in several ASM datasets. We also discuss problems faced in the development process, particularly the difficulties of dealing with discretized and irregularly sampled data. A possible application would be in scheduling target-of-opportunity observations, where the astronomer wants to observe an object when a certain event or series of events occurs. By combining such a toolbox with an automatic Java query tool which regularly gathers data on objects of interest, the astronomer or telescope operator could use the real-time datastream to efficiently predict the occurrence of (for example) a flare or other event. By combining the toolbox with dynamic time warping data-mining tools, one could predict events which may happen on variable time scales.
Endogenous time-varying risk aversion and asset returns.
Berardi, Michele
2016-01-01
Stylized facts about statistical properties for short horizon returns in financial markets have been identified in the literature, but a satisfactory understanding for their manifestation is yet to be achieved. In this work, we show that a simple asset pricing model with representative agent is able to generate time series of returns that replicate such stylized facts if the risk aversion coefficient is allowed to change endogenously over time in response to unexpected excess returns under evolutionary forces. The same model, under constant risk aversion, would instead generate returns that are essentially Gaussian. We conclude that an endogenous time-varying risk aversion represents a very parsimonious way to make the model match real data on key statistical properties, and therefore deserves careful consideration from economists and practitioners alike.
Muhs, D.R.; Kennedy, G.L.; Rockwell, T.K.
1994-01-01
Few of the marine terraces along the Pacific coast of North America have been dated using uranium-series techniques. Ten terrace sequences from southern Oregon to southern Baja California Sur have yielded fossil corals in quantities suitable for U-series dating by alpha spectrometry. U-series-dated terraces representing the ~80,000 yr sea-level high stand are identified in five areas (Bandon, Oregon; Point Arena, San Nicolas Island, and Point Loma, California; and Punta Banda, Baja California); terraces representing the ~125,000 yr sea-level high stand are identified in eight areas (Cayucos, San Luis Obispo Bay, San Nicolas Island, San Clemente Island, and Point Loma, California; Punta Banda and Isla Guadalupe, Baja California; and Cabo Pulmo, Baja California Sur). On San Nicolas Island, Point Loma, and Punta Banda, both the ~80,000 and the ~125,000 yr terraces are dated. Terraces that may represent the ~105,000 yr sea-level high stand are rarely preserved, and none has yielded corals for U-series dating. The similarity of coral ages from midlatitude, erosional marine terraces with coral ages from emergent, constructional reefs on tropical coastlines suggests a common forcing mechanism, namely glacioeustatically controlled fluctuations in sea level superimposed on steady tectonic uplift. The low marine terrace dated at ~125,000 yr on Isla Guadalupe, Baja California, presumed to be tectonically stable, supports evidence from other localities for a +6-m sea level at that time. Data from the Pacific Coast and a compilation of data from other coasts indicate that sea levels at ~80,000 and ~105,000 yr may have been closer to present sea level (within a few meters) than previous studies have suggested.
Land science with Sentinel-2 and Sentinel-3 data series synergy
NASA Astrophysics Data System (ADS)
Moreno, Jose; Guanter, Luis; Alonso, Luis; Gomez, Luis; Amoros, Julia; Camps, Gustavo; Delegido, Jesus
2010-05-01
Although the GMES/Sentinel satellite series were primarily designed to provide observations for operational services and routine applications, there is growing interest in the scientific community in using Sentinel data for more advanced and innovative science. Apart from the improved spatial and spectral capabilities, the availability of consistent time series covering a period of over 20 years opens possibilities never explored before, such as systematic data assimilation approaches exploiting the time-series concept, or the incorporation into modelling approaches of processes covering time scales from weeks to decades. Sentinel-3 will provide continuity to current ENVISAT MERIS/AATSR capabilities. The results already derived from MERIS/AATSR will be more systematically exploited by using OLCI in synergy with SLSTR. Particularly innovative is the case of Sentinel-2, which is specifically designed for land applications. Built on a constellation of two satellites operating simultaneously to provide a 5-day geometric revisit time, the Sentinel-2 system will provide global and systematic acquisitions at high spatial resolution, with a revisit time tailored to the needs of land monitoring. Apart from providing continuity to the Landsat and SPOT time series, the Sentinel-2 Multi-Spectral Instrument (MSI) incorporates new narrow bands around the red edge for improved retrievals of biophysical parameters. The need for proper cloud screening and atmospheric correction has been a serious constraint for optical data in the past. The fact that both Sentinel-2 and Sentinel-3 have dedicated bands to support these corrections represents an important step towards proper exploitation, guaranteeing consistent time series that show actual variability in land surface conditions without the artefacts introduced by the atmosphere. 
Expected operational products (such as Land Cover maps, Leaf Area Index, Fractional Vegetation Cover, Fraction of Absorbed Photosynthetically Active Radiation, and Leaf Chlorophyll and Water Contents) will be enhanced with new scientific applications. Higher-level products will also be provided by means of mosaicking, averaging, synthesising or compositing of spatially and temporally resampled data. A key element in the exploitation of the Sentinel series will be the adequate use of data synergy, which will open new possibilities for improved land models. This paper analyses in particular the possibilities offered by mosaicking and compositing information derived from high-spatial-resolution Sentinel-2 observations to complement dense time series derived from the more frequent coverage of Sentinel-3. Interpolating gaps in high-spatial-resolution time series (from Sentinel-2 data) using medium/low-resolution data from Sentinel-3 (OLCI and SLSTR) is also a way of making the series more temporally consistent at high spatial resolution. The primary goal of such temporal interpolation / spatial mosaicking techniques is to derive consistent surface reflectance data for virtually every date and geographical location, regardless of the initial spatial/temporal coverage of the original data used to produce the composite. As a result, biophysical products can be derived more consistently from the spectral information of Sentinel-3 data by making use of a description of surface heterogeneity derived from Sentinel-2 data. Using data from dedicated experiments (SEN2FLEX, CEFLES2, SEN3EXP), which include a large dataset of satellite and airborne data and of ground-based measurements of atmospheric and vegetation parameters, different techniques are tested, from empirical / statistical approaches that build nonlinear regressions by mapping spectra to a high-dimensional space, up to model inversion / data assimilation scenarios. 
Exploitation of the temporal domain and the spatial multi-scale domain thus becomes a driver for the systematic exploitation of GMES/Sentinel data time series. This paper reviews the current status and identifies research priorities in this direction.
Shaping low-thrust trajectories with thrust-handling feature
NASA Astrophysics Data System (ADS)
Taheri, Ehsan; Kolmanovsky, Ilya; Atkins, Ella
2018-02-01
Shape-based methods are becoming popular in low-thrust trajectory optimization due to their fast computation speeds. In existing shape-based methods, constraints are treated at the acceleration level but not at the thrust level. These two constraint types are not equivalent since spacecraft mass decreases over time as fuel is expended. This paper develops a shape-based method, based on a Fourier series approximation, that is capable of representing trajectories defined in spherical coordinates and that enforces thrust constraints. An objective function can be incorporated to minimize overall mission cost, i.e., achieve minimum ΔV. A representative mission from Earth to Mars is studied. The proposed Fourier series technique is demonstrated to be capable of generating feasible and near-optimal trajectories. These attributes can facilitate future low-thrust mission designs where different trajectory alternatives must be rapidly constructed and evaluated.
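The basic idea of shaping a trajectory coordinate with a truncated Fourier series can be sketched as a least-squares fit. The radial profile below is a hypothetical, normalised Earth-to-Mars example chosen to lie in the Fourier basis (it is not the paper's trajectory, which additionally enforces dynamics and thrust constraints on the coefficients).

```python
import numpy as np

def fourier_fit(t, y, n_terms, period):
    """Least-squares truncated Fourier approximation of a sampled coordinate:
    y(t) ≈ a0 + Σ_k [a_k cos(2πkt/T) + b_k sin(2πkt/T)]."""
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols += [np.cos(2 * np.pi * k * t / period),
                 np.sin(2 * np.pi * k * t / period)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

# Hypothetical radial profile of an Earth-to-Mars spiral (AU), over a
# normalised transfer time t in [0, 1], with basis period T = 2.
t = np.linspace(0.0, 1.0, 200)
r = 1.26 - 0.26 * np.cos(np.pi * t) + 0.05 * np.sin(4 * np.pi * t)
coef, r_hat = fourier_fit(t, r, 8, 2.0)
max_err = float(np.max(np.abs(r_hat - r)))
```

Once the shape is expressed through a handful of coefficients, derivatives (and hence required acceleration or thrust) follow analytically from the same series, which is what makes these methods fast to evaluate.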
NASA Astrophysics Data System (ADS)
Solazzo, E.; Galmarini, S.
2015-07-01
A more sensible use of monitoring data for the evaluation and development of regional-scale atmospheric models is proposed. The motivation stems from observing current practices in this realm, where the quality of monitoring data is seldom questioned and model-to-data deviation is uniquely attributed to model deficiency. Efforts are spent to quantify the uncertainty intrinsic to the measurement process, but aspects connected to model evaluation and development have recently emerged that remain obscure, such as the spatial representativeness and the homogeneity of signals, the subjects of our investigation. Using time series of hourly ozone records for a whole year (2006) collected by the European AirBase network, the area of representativeness is first analysed, showing, for similar classes of stations (urban, suburban, rural), large heterogeneity and high sensitivity to the density of the network and to the noise of the signal, suggesting that station classification alone is not a suitable criterion for selecting the pool of stations used in model evaluation. A novel, more robust technique is therefore developed, based on the spatial properties of the associativity of the spectral components of the ozone time series, in an attempt to determine the level of homogeneity. The spatial structure of the associativity among stations is informative of the spatial representativeness of a specific component and automatically reveals spatial anisotropy. Time series of ozone data from North American networks have also been analysed to support the methodology. We find that the low-energy components (especially the intra-day signal) suffer from too strong an influence of country-level network set-up in Europe, and of different networks in North America, showing spatial heterogeneity exactly at the administrative borders that separate countries in Europe and at the areas separating different networks in North America. 
For model evaluation purposes these elements should be treated as purely stochastic and discarded, while retaining the portion of the signal useful to the evaluation process. Trans-boundary discontinuity of the intra-day signal along with cross-network grouping has been found to be predominant. Skills of fifteen regional chemical-transport modelling systems have been assessed in light of this result, finding an improved accuracy of up to 5% when the intra-day signal is removed with respect to the case where all components are analysed.
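Separating an ozone record into spectral components such as the intra-day signal can be sketched with a simple FFT band filter. The synthetic "ozone" series, the band limits, and the hourly sampling below are illustrative assumptions; the study's spectral decomposition and associativity analysis are more involved.

```python
import numpy as np

def band_component(x, dt_hours, period_lo, period_hi):
    """Reconstruct the part of a signal whose periods (in hours) fall
    inside [period_lo, period_hi], by zeroing all other FFT bins."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=dt_hours)   # cycles per hour
    mask = np.zeros(freqs.shape, dtype=bool)
    nz = freqs > 0                                # exclude the DC term
    mask[nz] = (1.0 / freqs[nz] >= period_lo) & (1.0 / freqs[nz] <= period_hi)
    return np.fft.irfft(np.where(mask, X, 0.0), n=len(x))

# One synthetic year of hourly "ozone": baseline + diurnal + weekly cycles.
t = np.arange(24 * 365)
x = 40 + 10 * np.sin(2 * np.pi * t / 24) + 5 * np.sin(2 * np.pi * t / (24 * 7))
intra_day = band_component(x, 1.0, 3.0, 36.0)     # intra-day band only
```

The 3-36 h band isolates the diurnal cycle while discarding the baseline and the synoptic-scale weekly component, which is the kind of component-wise treatment the evaluation above relies on.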
Del Sorbo, Maria Rosaria; Balzano, Walter; Donato, Michele; Draghici, Sorin
2013-11-01
Differential expression of genes detected with the analysis of high-throughput genomic experiments is a commonly used intermediate step for the identification of signaling pathways involved in the response to different biological conditions. The impact analysis was the first approach for the analysis of signaling pathways involved in a certain biological process that was able to take into account not only the magnitude of the expression change of the genes but also the topology of signaling pathways, including the type of each interaction between the genes. In the impact analysis, signaling pathways are represented as weighted directed graphs with genes as nodes and the interactions between genes as edges. Edge weights are represented by a β factor, the regulatory efficiency, which is assumed to be equal to 1 for inductive interactions between genes and equal to -1 for repressive interactions. This study presents a similarity analysis between gene expression time series aimed at finding correspondences with the regulatory efficiency, i.e. the β factor as found in a widely used pathway database. Here we focus on correlations among genes directly connected in signaling pathways, assuming that expression variations of upstream genes impact immediately downstream genes within a short time interval and without significant influence from interactions with other genes. Time series were processed using three different similarity metrics. The first metric is based on bit string matching; the second is a specific application of Dynamic Time Warping, able to detect similarities even in the presence of stretching and delays; the third is a quantitative comparative analysis based on the frequency-domain representation of the time series, where the similarity metric is the correlation between dominant spectral components. 
These three approaches are tested on real data and pathways, and a comparison is performed using Information Retrieval benchmark tools, indicating that the frequency approach is the best similarity metric of the three, owing to its ability to detect correlation based on the correspondence of the most significant frequency components. Copyright © 2013. Published by Elsevier Ireland Ltd.
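Of the three similarity metrics, Dynamic Time Warping is the easiest to reproduce. A minimal textbook implementation of the DTW distance (an illustrative sketch, not the authors' code) is:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two numeric sequences.

    DTW tolerates stretching and delays between the series, which is
    why it suits comparisons of upstream/downstream gene expression.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j]: minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

A delayed copy of a series scores zero DTW distance, unlike the Euclidean metric, which is the property exploited when comparing upstream and downstream genes.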
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-26
... and generally requires the Series 7 examination for Registered Representatives,\\4\\ Principals,\\5\\ off....\\11\\ Before their registration can become effective, they must pass the Series 7 examination. The... Rule 1(cc) register with the Exchange as a General Securities Representative and pass the Series 7...
Statistical analysis of CSP plants by simulating extensive meteorological series
NASA Astrophysics Data System (ADS)
Pavón, Manuel; Fernández, Carlos M.; Silva, Manuel; Moreno, Sara; Guisado, María V.; Bernardos, Ana
2017-06-01
The feasibility analysis of any power plant project needs an estimate of the amount of energy it will be able to deliver to the grid during its lifetime. To achieve this, its feasibility study requires precise knowledge of the solar resource over a long-term period. In Concentrating Solar Power (CSP) projects, financing institutions typically require several statistical probability-of-exceedance scenarios for the expected electric energy output. Currently, the industry assumes a correlation between probabilities of exceedance of annual Direct Normal Irradiance (DNI) and energy yield. In this work, this assumption is tested by simulating the energy yield of CSP plants using as input a 34-year series of measured meteorological parameters and solar irradiance. The results of this work show that, even if some correspondence between the probabilities of exceedance of annual DNI values and energy yields is found, the intra-annual distribution of DNI may significantly affect this correlation. This result highlights the need for standardized procedures for the elaboration of DNI time series representative of a given probability of exceedance of annual DNI.
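The probability-of-exceedance scenarios mentioned above (e.g. the P50 and P90 values commonly quoted to financing institutions) can be estimated empirically from an annual DNI series. A simple sketch follows; it is illustrative only, as the paper's exact procedure is not given in the abstract:

```python
def exceedance(values, p):
    """Empirical value exceeded with probability p (p=0.9 gives P90).

    Sort descending; the value at rank p*N is exceeded in roughly a
    fraction p of the observed years. A real resource assessment
    would use an interpolated plotting position or a fitted
    distribution instead of this simple rank estimate.
    """
    s = sorted(values, reverse=True)
    k = min(len(s) - 1, int(p * len(s)))
    return s[k]
```

With a 34-year DNI record, `exceedance(dni, 0.9)` returns the annual DNI surpassed in about 9 years out of 10; the paper's point is that two years with the same annual total can still yield different energy because of intra-annual DNI distribution.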
Faes, Luca; Nollo, Giandomenico
2010-11-01
The Partial Directed Coherence (PDC) and its generalized formulation (gPDC) are popular tools for investigating, in the frequency domain, the concept of Granger causality among multivariate (MV) time series. PDC and gPDC are formalized in terms of the coefficients of an MV autoregressive (MVAR) model which describes only the lagged effects among the time series and neglects instantaneous effects. However, instantaneous effects are known to affect linear parametric modeling, and are likely to occur in experimental time series. In this study, we investigate the impact on the assessment of frequency domain causality of excluding instantaneous effects from the model underlying PDC evaluation. Moreover, we propose the utilization of an extended MVAR model including both instantaneous and lagged effects. This model is used to assess PDC either in accordance with the definition of Granger causality when considering only lagged effects (iPDC), or with an extended form of causality, when we consider both instantaneous and lagged effects (ePDC). The approach is first evaluated on three theoretical examples of MVAR processes, which show that the presence of instantaneous correlations may produce misleading profiles of PDC and gPDC, while ePDC and iPDC derived from the extended model provide a correct interpretation of extended and lagged causality. It is then applied to representative examples of cardiorespiratory and EEG MV time series. They suggest that ePDC and iPDC are more readily interpretable than PDC and gPDC in terms of the known cardiovascular and neural physiology.
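Given fitted MVAR lag matrices, the standard PDC can be computed directly from the coefficients. The sketch below is illustrative (not the authors' code) and omits the noise-covariance weighting that distinguishes gPDC:

```python
import numpy as np

def pdc(A, f):
    """Partial Directed Coherence at normalized frequency f (cycles/sample).

    A : array of shape (p, N, N) holding MVAR lag matrices A_1..A_p for
        the model x_t = sum_k A_k x_{t-k} + e_t.
    Returns an (N, N) matrix whose (i, j) entry is the PDC from
    series j to series i.
    """
    p, N, _ = A.shape
    # A(f) = I - sum_k A_k exp(-i 2 pi f k)
    Af = np.eye(N, dtype=complex)
    for k in range(p):
        Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1))
    # normalize each column by the total outflow from source j
    return np.abs(Af) / np.sqrt((np.abs(Af) ** 2).sum(axis=0))
```

With no cross-coupling terms the matrix A(f) is diagonal, so all off-diagonal PDC values vanish, matching the interpretation of no lagged causality.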
ERIC Educational Resources Information Center
Phan, Huy P.; Ngu, Bing H.
2017-01-01
In social sciences, the use of stringent methodological approaches is gaining increasing emphasis. Researchers have recognized the limitations of cross-sectional, non-manipulative data in the study of causality. True experimental designs, in contrast, are preferred as they represent rigorous standards for achieving causal flows between variables.…
Code of Federal Regulations, 2013 CFR
2013-07-01
... substance or mixture is able to represent or substitute for another in a test or series of tests, and that... submitted to EPA and ends after an amount of time equal to that which had been required to develop data or after five years, whichever is later. Sponsor means the person or persons who design, direct and finance...
Code of Federal Regulations, 2014 CFR
2014-07-01
... represent or substitute for another in a test or series of tests, and that the data from one substance can... and ends after an amount of time equal to that which had been required to develop data or after five years, whichever is later. Sponsor means the person or persons who design, direct and finance the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... substance or mixture is able to represent or substitute for another in a test or series of tests, and that... submitted to EPA and ends after an amount of time equal to that which had been required to develop data or after five years, whichever is later. Sponsor means the person or persons who design, direct and finance...
Code of Federal Regulations, 2011 CFR
2011-07-01
... substance or mixture is able to represent or substitute for another in a test or series of tests, and that... submitted to EPA and ends after an amount of time equal to that which had been required to develop data or after five years, whichever is later. Sponsor means the person or persons who design, direct and finance...
Automatic Dance Lesson Generation
ERIC Educational Resources Information Center
Yang, Yang; Leung, H.; Yue, Lihua; Deng, LiQun
2012-01-01
In this paper, an automatic lesson generation system is presented which is suitable in a learning-by-mimicking scenario where the learning objects can be represented as multiattribute time series data. The dance is used as an example in this paper to illustrate the idea. Given a dance motion sequence as the input, the proposed lesson generation…
Parameter Estimation for Real Filtered Sinusoids
1997-09-01
A Critical Review of Line Graphs in Behavior Analytic Journals
ERIC Educational Resources Information Center
Kubina, Richard M., Jr.; Kostewicz, Douglas E.; Brennan, Kaitlyn M.; King, Seth A.
2017-01-01
Visual displays such as graphs have played an instrumental role in psychology. One discipline relies almost exclusively on graphs in both applied and basic settings, behavior analysis. The most common graphic used in behavior analysis falls under the category of time series. The line graph represents the most frequently used display for visual…
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2005-01-01
Results obtained with interindividual techniques in a representative sample of a population are not necessarily generalizable to the individual members of this population. In this article the specific condition is presented that must be satisfied to generalize from the interindividual level to the intraindividual level. A way to investigate…
Modelling spatiotemporal change using multidimensional arrays
NASA Astrophysics Data System (ADS)
Lu, Meng; Appel, Marius; Pebesma, Edzer
2017-04-01
The large variety of remote sensors, model simulations, and in-situ records provides great opportunities to model environmental change. The massive amount of high-dimensional data calls for methods to integrate data from various sources and to analyse spatiotemporal and thematic information jointly. An array is a collection of elements ordered and indexed in arbitrary dimensions, which naturally represents spatiotemporal phenomena identified by their geographic locations and recording time. In addition, array regridding (e.g., resampling, down-/up-scaling), dimension reduction, and spatiotemporal statistical algorithms are readily applicable to arrays. However, the role of arrays in big geoscientific data analysis has not been systematically studied: How can arrays discretise continuous spatiotemporal phenomena? How can arrays facilitate the extraction of multidimensional information? How can arrays provide a clean, scalable and reproducible change modelling process that is communicable between mathematicians, computer scientists, Earth system scientists and stakeholders? This study focuses on detecting spatiotemporal change using satellite image time series. Current change detection methods using satellite image time series commonly analyse data in separate steps: 1) forming a vegetation index, 2) conducting time series analysis on each pixel, and 3) post-processing and mapping time series analysis results. This stepwise treatment does not consider spatiotemporal correlations and ignores much of the spectral information. Multidimensional information can be better extracted by jointly considering spatial, spectral, and temporal information. To approach this goal, we use principal component analysis to extract multispectral information and spatial autoregressive models to account for spatial correlation in residual-based time series structural change modelling.
We also discuss the potential of multivariate non-parametric time series structural change methods, hierarchical modelling, and extreme event detection methods to model spatiotemporal change. We show how array operations can facilitate expressing these methods, and how the open-source array data management and analytics software SciDB and R can be used to scale the process and make it easily reproducible.
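The spectral step of the pipeline sketched above, extracting multispectral information with principal component analysis, can be illustrated as follows. This is a generic PCA via SVD under the assumption that pixel observations are stacked as a samples-by-bands matrix; it is not the authors' implementation:

```python
import numpy as np

def spectral_pca(X, n_components=1):
    """PCA of an (n_obs, n_bands) matrix of multispectral samples.

    Returns the component scores and the fraction of total variance
    each retained component explains.
    """
    Xc = X - X.mean(axis=0)                      # center each band
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / (s ** 2).sum()
    return Xc @ Vt[:n_components].T, explained[:n_components]
```

The leading scores can then feed a residual-based structural change model, with a spatial autoregressive term handling the remaining spatial correlation, per the approach the abstract describes.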
Operative Treatment of Traumatic Hallux Valgus in Elite Athletes.
Covell, D Jeff; Lareau, Craig R; Anderson, Robert B
2017-06-01
Traumatic hallux valgus is an increasingly common injury in the athletic population and represents a unique variant of turf toe. Failure to appropriately recognize and treat these injuries can lead to continued pain, decreased performance, progressive deformities, and ultimately degeneration of the hallux metatarsophalangeal joint. Limited literature currently exists to assist in the diagnosis, management, and operative treatment. Nineteen patients were reviewed in this series, including 12 National Football League players, 6 college players, and 1 high school player who was a college prospect. The average age for all patients at the time of surgery was 24.4 years (range, 19-33 years). Return to play and complications were evaluated. Overall, good operative results were obtained, with 74% of patients returning to their preinjury level of play at an average recovery time of 3.4 months. The impact of this injury cannot be overstated, as one-quarter of players were unable to return to play. Level IV, case series.
NASA Astrophysics Data System (ADS)
Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.
2013-12-01
A new method for decomposing the Earth's climate system into well separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) generalization of MSSA (Multichannel Singular Spectrum Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOF), which makes allowance for delayed correlations of the processes recorded at spatially separated points; (ii) expanding both real SST data and numerically generated SST data several times longer than the observations in the STEOF basis; (iii) use of the numerically produced STEOF basis for the exclusion of 'too slow' (and thus not correctly represented) processes from the real data. Applying the method to vector time series generated numerically by the INM RAS Coupled Climate Model [2] allows two climatic modes with noticeably different time scales, 3-5 and 9-11 years, to be separated from real SST anomaly data [3]. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic pattern concept to prognosis of climate system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
Empirical analysis of the effects of cyber security incidents.
Davis, Ginger; Garcia, Alfredo; Zhang, Weide
2009-09-01
We analyze the time series associated with web traffic for a representative set of online businesses that have suffered widely reported cyber security incidents. Our working hypothesis is that cyber security incidents may prompt (security conscious) online customers to opt out and conduct their business elsewhere or, at the very least, to refrain from accessing online services. For companies relying almost exclusively on online channels, this presents an important business risk. We test for structural changes in these time series that may have been caused by these cyber security incidents. Our results consistently indicate that cyber security incidents do not affect the structure of web traffic for the set of online businesses studied. We discuss various public policy considerations stemming from our analysis.
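The abstract does not specify which structural change test was used, but a simple mean-shift CUSUM statistic conveys the idea of testing a traffic series for a break around an incident date (an illustrative sketch only):

```python
def cusum_stat(x):
    """Maximum absolute cumulative sum of the demeaned series,
    scaled by sigma * sqrt(n).

    Under the no-break null this approximates the supremum of a
    Brownian bridge; values well above ~1.36 (the 5% Kolmogorov
    critical value) suggest a structural change in the mean, while a
    stable series stays small.
    """
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    if sd == 0.0:
        return 0.0
    c = peak = 0.0
    for v in x:
        c += v - mean
        peak = max(peak, abs(c))
    return peak / (sd * n ** 0.5)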
Cheng, Karen Elizabeth; Crary, David J; Ray, Jaideep; Safta, Cosmin
2013-01-01
Objective: We discuss the use of structural models for the analysis of biosurveillance-related data. Methods and results: Using a combination of real and simulated data, we have constructed a data set that represents a plausible time series resulting from surveillance of a large-scale bioterrorist anthrax attack in Miami. We discuss the performance of anomaly detection with structural models for these data using receiver operating characteristic (ROC) and activity monitoring operating characteristic (AMOC) analysis. In addition, we show that these techniques provide a method for predicting the level of the outbreak valid for approximately 2 weeks, post-alarm. Conclusions: Structural models provide an effective tool for the analysis of biosurveillance data, in particular for time series with noisy, non-stationary background and missing data. PMID:23037798
Wangdi, Kinley; Singhasivanon, Pratap; Silawan, Tassanee; Lawpoolsri, Saranath; White, Nicholas J; Kaewkungwal, Jaranit
2010-09-03
Malaria still remains a public health problem in some districts of Bhutan despite a marked reduction of cases in the last few years. To strengthen the country's prevention and control measures, this study was carried out to develop forecasting and prediction models of malaria incidence in the endemic districts of Bhutan using time series and ARIMAX. This study was carried out retrospectively using the monthly malaria cases reported by health centres to the Vector-borne Disease Control Programme (VDCP) and the meteorological data from the Meteorological Unit, Department of Energy, Ministry of Economic Affairs. Time series analysis was performed on monthly malaria cases, from 1994 to 2008, in seven malaria endemic districts. Time series models derived from a multiplicative seasonal autoregressive integrated moving average (ARIMA) were compared to identify the best model using data from 1994 to 2006. A best-fit model was selected for each individual district and for the overall endemic area, and the monthly cases from January to December 2009 and 2010 were forecasted. In developing the prediction model, the monthly reported malaria cases and the meteorological factors from 1996 to 2008 of the seven districts were analysed. ARIMAX modelling was employed to determine predictors of malaria in the subsequent month. It was found that the ARIMA (p, d, q) (P, D, Q)s model (p and P representing the autoregressive and seasonal autoregressive terms; d and D the non-seasonal and seasonal differencing; q and Q the moving average and seasonal moving average parameters; and s the length of the seasonal period) for the overall endemic districts was (2,1,1)(0,1,1)12; the modelling data from each district revealed two most common ARIMA models, (2,1,1)(0,1,1)12 and (1,1,1)(0,1,1)12.
The forecasted monthly malaria cases varied from 15 to 82 cases in 2009 and from 67 to 149 cases in 2010, where the population in 2009 was 285,375 and the expected population in 2010 was 289,085. The ARIMAX model of monthly cases and climatic factors showed considerable variation among the different districts. In general, the mean maximum temperature lagged at one month was a strong positive predictor of increased malaria cases for four districts. The monthly number of cases of the previous month was also a significant predictor in one district, whereas no variable could predict malaria cases for two districts. The ARIMA time-series models were useful in forecasting the number of cases in the endemic areas of Bhutan. There was no consistency in the predictors of malaria cases when using the ARIMAX model with selected lag times and climatic predictors. The ARIMA forecasting models could be employed for planning and managing the malaria prevention and control programme in Bhutan.
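The differencing implied by the selected (2,1,1)(0,1,1)12 model, one non-seasonal difference and one seasonal difference at lag 12, can be sketched as follows. This is illustrative only; fitting the AR and MA terms themselves would normally be done with a statistics package:

```python
def seasonal_difference(x, d=1, D=1, s=12):
    """Apply the d non-seasonal and D seasonal (period s) differences
    of an ARIMA(p,d,q)(P,D,Q)_s model to a monthly series.

    Each non-seasonal pass shortens the series by 1; each seasonal
    pass shortens it by s.
    """
    for _ in range(d):
        x = [x[i] - x[i - 1] for i in range(1, len(x))]
    for _ in range(D):
        x = [x[i] - x[i - s] for i in range(s, len(x))]
    return x
```

A series consisting of a linear trend plus a fixed monthly seasonal pattern is reduced exactly to zero by this transform, which is why it is applied before the ARMA terms are estimated.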
NASA Astrophysics Data System (ADS)
Miksovsky, J.; Raidl, A.
Time-delay phase space reconstruction represents one of the useful tools of nonlinear time series analysis, enabling a number of applications. Its utilization requires the value of the time delay to be known, as well as the value of the embedding dimension. There are several methods for estimating both parameters. Typically, the time delay is computed first, followed by the embedding dimension. The approach presented here is slightly different: we reconstructed the phase space for various combinations of the two parameters and used it for prediction by means of the nearest neighbours in the phase space. Then some measure of the prediction's success was computed (e.g., correlation or RMSE). The position of its global maximum (minimum) should indicate a suitable combination of time delay and embedding dimension. Several meteorological (particularly climatological) time series were used for the computations. We have also created an MS-Windows based program implementing this approach; its basic features will be presented as well.
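The prediction step described above, forecasting with the nearest neighbour in the reconstructed phase space, can be sketched as follows (a generic one-step predictor, not the authors' MS-Windows program):

```python
import numpy as np

def nn_predict(x, dim, tau):
    """One-step forecast via delay-coordinate embedding.

    Build delay vectors [x_t, x_{t-tau}, ..., x_{t-(dim-1)tau}], find
    the past vector closest to the current one, and return the value
    that followed that neighbour.
    """
    x = np.asarray(x, dtype=float)
    start = (dim - 1) * tau
    vecs = np.array([x[t - tau * np.arange(dim)] for t in range(start, len(x))])
    current, history = vecs[-1], vecs[:-1]        # exclude the query itself
    nn = int(np.argmin(np.linalg.norm(history - current, axis=1)))
    return x[start + nn + 1]                      # successor of the neighbour
```

Scanning (dim, tau) combinations and scoring such forecasts by correlation or RMSE against held-out data reproduces the parameter-selection procedure the abstract describes.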
NASA Technical Reports Server (NTRS)
Hall, Steven R.; Walker, Bruce K.
1990-01-01
A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
Long-term wave measurements in a climate change perspective.
NASA Astrophysics Data System (ADS)
Pomaro, Angela; Bertotti, Luciana; Cavaleri, Luigi; Lionello, Piero; Portilla-Yandun, Jesus
2017-04-01
At present, multi-decadal time series of wave data needed for climate studies are generally provided by long-term model simulations (hindcasts) covering the area of interest. Examples, among many, at different scales are wave hindcasts adopting the wind fields of the ERA-Interim reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF, Reading, U.K.) at the global level and regional re-analyses such as that for the Mediterranean Sea (Lionello and Sanna, 2006). Valuable as they are, these estimates are necessarily affected by the approximations involved, the more so because of the problems encountered when modelling small basins with coarse-resolution wind fields (Cavaleri and Bertotti, 2004). On the contrary, multi-decadal observed time series are rare. They have the evident advantage of representing, to some extent, the real evolution of the waves, without the shortcomings associated with the limitations of models in reproducing the actual processes and the real variability within the wave fields. Obviously, observed wave time series are not exempt from problems. They represent very local information, hence their use to describe wave evolution at large scale is sometimes arguable and, in general, needs the support of model simulations assessing to what extent the local value is representative of a large-scale evolution. Local effects may prevent the identification of trends that are indeed present at large scale. Moreover, regular maintenance, accurate monitoring and metadata information are crucial issues when considering the reliability of a time series for climate applications.
Of course, where available, especially over several decades, measured data are of great value for a number of reasons, and can provide valuable clues for delving further into the physics of the processes of interest. Waves are an integrated product of the local climate; when measured in an area sensitive to even limited changes of the large-scale pattern, they can provide compact and meaningful information on that pattern. In addition, the availability for the area of interest of a 20-year long dataset of directional spectra (in frequency and direction) offers an independent, but theoretically corresponding and significantly long, dataset, allowing the wave problem to be examined from different perspectives. In particular, we investigate the contribution of the individual wave systems that modulate the variability of waves in the Adriatic Sea. A characterization of wave conditions based on wave spectra in fact brings out a more detailed description of the different wave regimes, their associated meteorological conditions, and their variation in time and geographical space.
Traffic congestion and ozone precursor emissions in Bilbao, Spain.
Ibarra-Berastegi, Gabriel; Madariaga, Imanol
2003-01-01
In urban environments, the measured levels of ozone are the result of the interaction between emissions of precursors (mainly VOCs and NOx) and meteorological effects. In this work, time series of daily values of ozone, measured at three locations in Bilbao (Spain), have been built. Then, after removing meteorological effects from them, ozone and traffic data have been analyzed jointly. The goal was to identify traffic situations and link them to ozone levels in the area of Bilbao. To remove meteorological effects from the selected ozone time series, the technique developed by Rao and Zurbenko was used. This is a widely used technique and, after its application, the fraction obtained from a given ozone time series represents an ozone-forming capability attributable to emissions of precursors. This fraction is devoid of any meteorological influence and includes only the contribution of periodicities above 1.7 years. In the case of Bilbao, the ozone fractions obtained at three locations have been compared on that time scale with traffic data from the area. For the 1993-1996 period, a regression analysis of the ozone and traffic fractions due to periodicities above 1.7 years (long-term fractions) shows that traffic is the main explanatory factor for ozone, with R2 ranging from 0.916 to 0.996 at the three locations studied. Analysis of these long-term fractions has made it possible to identify two traffic regimes for the whole area, associated with different profiles of ozone-forming capability. The first favors low ozone-forming capability and is associated with a situation of fluent traffic. The second shows high ozone-forming capability and represents congestion. Joint analysis of raw ozone and traffic data does not show any clear pattern, owing to the strong masking effect that seasonal-meteorological factors (mainly radiation) have on the measured ozone signal.
If only immission data of ozone are available, as in this case, a comparison between ozone and traffic can only be made on the long-term time scale, since that is the only fraction embedded in the ozone time series that can exclusively be attributed to emissions of precursors. This fact stresses the need to study the different fractions embedded in the time series of measured ozone levels separately. Though the coefficients obtained in the regression are only valid for the 1993-1996 period, these traffic regimes represent long-term targets (congestion or fluent traffic) that can inspire policies for joint management of traffic and ozone pollution in the area of Bilbao beyond that period. The results of this work show the need for joint management of ozone and traffic in Bilbao. Since accurate knowledge of traffic was not available, the use of emission factors to relate traffic and actual ozone levels has not been possible. For this reason, this study has focused on the long-term fractions of traffic and ozone. In the future, if more accurate traffic data become available, it will be possible to find relationships between traffic and ozone on all time scales.
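The Rao-Zurbenko technique referenced above is based on the Kolmogorov-Zurbenko (KZ) filter, an iterated moving average. A minimal sketch (illustrative, with simple edge handling) is:

```python
def kz_filter(x, m, k):
    """Kolmogorov-Zurbenko filter KZ(m, k): k passes of a centred
    moving average of odd length m.

    Short-period components (e.g. weather-driven ozone fluctuations)
    are progressively removed, leaving the smoother long-term
    fraction attributable to precursor emissions.
    """
    h = m // 2
    for _ in range(k):
        x = [sum(x[max(0, i - h):i + h + 1]) /
             len(x[max(0, i - h):i + h + 1]) for i in range(len(x))]
    return x
```

Subtracting the filtered series from the original separates the short-term fraction; the window length and iteration count set the effective cut-off period (1.7 years in the study above, for a suitable choice of m and k on daily data).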
The RATIO method for time-resolved Laue crystallography
Coppens, Philip; Pitak, Mateusz; Gembicky, Milan; Messerschmidt, Marc; Scheins, Stephan; Benedict, Jason; Adachi, Shin-ichi; Sato, Tokushi; Nozawa, Shunsuke; Ichiyanagi, Kohei; Chollet, Matthieu; Koshihara, Shin-ya
2009-01-01
A RATIO method for analysis of intensity changes in time-resolved pump–probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam. PMID:19240334
ERIC Educational Resources Information Center
Metz, Allison J. R.
2007-01-01
This brief represents part 2 in a series on fostering the adoption of evidence-based practices in out-of-school time programs. Many practitioners lack information on how to implement evidence-based practice(s) in their own programs or communities. A major reason for this gap is a lack of research on the process for implementing evidence-based…
United States Forest Disturbance Trends Observed Using Landsat Time Series
NASA Technical Reports Server (NTRS)
Masek, Jeffrey G.; Goward, Samuel N.; Kennedy, Robert E.; Cohen, Warren B.; Moisen, Gretchen G.; Schleeweis, Karen; Huang, Chengquan
2013-01-01
Disturbance events strongly affect the composition, structure, and function of forest ecosystems; however, existing U.S. land management inventories were not designed to monitor disturbance. To begin addressing this gap, the North American Forest Dynamics (NAFD) project has examined a geographic sample of 50 Landsat satellite image time series to assess trends in forest disturbance across the conterminous United States for 1985-2005. The geographic sample design used a probability-based scheme to encompass major forest types and maximize geographic dispersion. For each sample location disturbance was identified in the Landsat series using the Vegetation Change Tracker (VCT) algorithm. The NAFD analysis indicates that, on average, 2.77 Mha/yr of forests were disturbed annually, representing 1.09%/yr of US forestland. These satellite-based national disturbance rate estimates tend to be lower than those derived from land management inventories, reflecting both methodological and definitional differences. In particular the VCT approach used with a biennial time step has limited sensitivity to low-intensity disturbances. Unlike prior satellite studies, our biennial forest disturbance rates vary by nearly a factor of two between high and low years. High western US disturbance rates were associated with active fire years and insect activity, while variability in the east is more strongly related to harvest rates in managed forests. We note that generating a geographic sample based on representing forest type and variability may be problematic, since the spatial pattern of disturbance does not necessarily correlate with forest type. We also find that the prevalence of diffuse, non-stand-clearing disturbance in US forests makes the application of a biennial geographic sample problematic. Future satellite-based studies of disturbance at regional and national scales should focus on wall-to-wall analyses with an annual time step for improved accuracy.
Wavelet analysis of near-resonant series RLC circuit with time-dependent forcing frequency
NASA Astrophysics Data System (ADS)
Caccamo, M. T.; Cannuli, A.; Magazù, S.
2018-07-01
In this work, the results of an analysis of the response of a near-resonant series resistance-inductance-capacitance (RLC) electric circuit with time-dependent forcing frequency by means of a wavelet cross-correlation approach are reported. In particular, it is shown how the wavelet approach enables frequency and time analysis of the circuit response to be carried out simultaneously; this is not possible with the Fourier transform, since the frequency is not stationary in time. A series RLC circuit simulation is performed using the Simulation Program with Integrated Circuits Emphasis (SPICE), in which an oscillatory sinusoidal voltage drive signal of constant amplitude is swept through the resonant condition by progressively increasing the frequency over a 20-second time window, linearly, from 0.32 Hz to 6.69 Hz. It is shown that the wavelet cross-correlation procedure quantifies the common power between the input signal (represented by the electromotive force) and the output signal, which in the present case is a current, highlighting not only which frequencies are present but also when they occur, i.e. providing a simultaneous time-frequency analysis. The work is directed toward graduate Physics, Engineering and Mathematics students, with the main intention of introducing wavelet analysis into their data analysis toolkit.
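The wavelet cross-correlation idea can be sketched with a complex Morlet transform. This is a generic numpy implementation, not the article's analysis chain, and it relies on the usual Morlet scale-to-frequency approximation f ≈ w0/(2π s):

```python
import numpy as np

def morlet_cwt(x, scales, dt, w0=6.0):
    """Continuous wavelet transform of x with a complex Morlet wavelet."""
    n = len(x)
    t = (np.arange(n) - n // 2) * dt              # wavelet support centred at 0
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        u = t / s
        psi = np.pi ** -0.25 * np.exp(1j * w0 * u - u ** 2 / 2)
        # correlation with the scaled wavelet = convolution with its
        # time-reversed conjugate
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode="same") * dt / np.sqrt(s)
    return out

def wavelet_cross_power(x, y, scales, dt):
    """|W_x conj(W_y)|: the power common to both signals at each scale
    and time, analogous to comparing drive voltage and current."""
    return np.abs(morlet_cwt(x, scales, dt) * np.conj(morlet_cwt(y, scales, dt)))
```

For a swept-frequency drive, the ridge of maximum cross-power migrates across scales as time advances, which is exactly the time-frequency localization a plain Fourier spectrum cannot supply.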
Abrupt Shift in the Observed Runoff from the Southwest Greenland Ice Sheet?
NASA Astrophysics Data System (ADS)
Ahlstrom, A.; Petersen, D.; Box, J.; Langen, P. P.; Citterio, M.
2016-12-01
Mass loss of the Greenland ice sheet has contributed significantly to sea level rise in recent years and is considered a crucial parameter when estimating the impact of future climate change. Few observational records of sufficient length exist to validate surface mass balance models, especially the estimated runoff. Here we present an observational time series from 1975-2014 of discharge from a large proglacial lake, Tasersiaq, in West Greenland (66.3°N, 50.4°W) with a mainly ice-covered catchment. We argue that the discharge time series is a representative measure of ice sheet runoff, making it the only observational record of runoff to exceed the 30-year period needed to assess the climatological state of the ice sheet. We proceed to isolate the runoff part of the signal from precipitation and from identified glacial lake outburst floods originating in a small sub-catchment. Similarly, the impact from major volcanic eruptions is clearly identified. We examine the trend and annual variability in the annual discharge, relating it to likely atmospheric forcing mechanisms, and compare the observational time series with modelled runoff from the regional climate model HIRHAM.
Quentin, Wilm; Neubauer, Simone; Leidl, Reiner; König, Hans-Helmut
2007-01-01
This paper reviews the international literature that employed time-series analysis to evaluate the effects of advertising bans on aggregate consumption of cigarettes or tobacco. A systematic search of the literature was conducted. Three groups of studies, representing analyses of advertising bans in the U.S.A., in other countries, and in 22 OECD countries, were defined. The estimated effects of advertising bans and their significance were analysed. 24 studies were identified. They used a wide array of explanatory variables, models, estimating methods and data sources. 18 studies found a negative effect of an advertising ban on aggregate consumption, but only ten of these studies found a significant effect. Two studies using data from 22 OECD countries suggested that partial bans would have little or no influence on aggregate consumption, whereas complete bans would significantly reduce consumption. The results imply that advertising bans have a negative, but sometimes only small, impact on consumption, and that complete bans can be expected to be more effective. Because of the methodological limitations of analysing the effects of advertising bans with time-series approaches, complementary approaches should also be used in future research.
NASA Astrophysics Data System (ADS)
Mosier, T. M.; Hill, D. F.; Sharp, K. V.
2013-12-01
High spatial resolution time-series data are critical for many hydrological and earth science studies. Multiple groups have developed historical and forecast datasets of high-resolution monthly time series for regions of the world such as the United States (e.g. PRISM for hindcast data and MACA for long-term forecasts); however, analogous datasets have not been available for most data-scarce regions. The current work fills this data need by producing and freely distributing hindcast and forecast time-series datasets of monthly precipitation and mean temperature for all global land surfaces, gridded at a 30 arc-second resolution. The hindcast data are constructed through a Delta downscaling method, using as inputs 0.5 degree monthly time-series and 30 arc-second climatology global weather datasets developed by Willmott & Matsuura and WorldClim, respectively. The forecast data are formulated using a similar downscaling method, but with an additional step to remove bias from the climate variable's probability distribution over each region of interest. The downscaling package is designed to be compatible with a number of general circulation models (GCMs) (e.g. GCMs developed for the IPCC AR4 report and CMIP5), and is presently implemented using time-series data from the NCAR CESM1 model in conjunction with 30 arc-second future decadal climatologies distributed by the Consultative Group on International Agricultural Research. The resulting downscaled datasets are 30 arc-second time-series forecasts of monthly precipitation and mean temperature available for all global land areas. As an example of these data, historical and forecast 30 arc-second monthly time series from 1950 through 2070 are created and analyzed for the region encompassing Pakistan. For this case study, forecast datasets corresponding to the future Representative Concentration Pathway (RCP) 4.5 and 8.5 scenarios developed by the IPCC are presented and compared. 
This exercise highlights a range of potential meteorological trends for the Pakistan region and more broadly serves to demonstrate the utility of the presented 30 arc-second monthly precipitation and mean temperature datasets for use in data scarce regions.
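The Delta downscaling step described above can be sketched in a few lines: the fine-resolution climatology supplies spatial detail, while the coarse series supplies the temporal anomaly. This is a schematic of the general Delta method (additive for temperature, multiplicative for precipitation) with hypothetical climatology values, not the authors' implementation:

```python
def delta_downscale_temperature(coarse_series, coarse_clim, fine_clim):
    """Delta-method downscaling for temperature (additive anomalies):
    fine(t) = fine climatology + (coarse(t) - coarse climatology).
    Climatologies are 12 long-term monthly means; the series is monthly."""
    return [fine_clim[m % 12] + (coarse_series[m] - coarse_clim[m % 12])
            for m in range(len(coarse_series))]

def delta_downscale_precip(coarse_series, coarse_clim, fine_clim):
    """Multiplicative variant commonly used for precipitation."""
    return [fine_clim[m % 12] * (coarse_series[m] / coarse_clim[m % 12])
            if coarse_clim[m % 12] > 0 else fine_clim[m % 12]
            for m in range(len(coarse_series))]

# Hypothetical example: a 30 arc-second cell about 2 degC cooler than the
# 0.5 degree cell containing it, with a slight warming trend in the coarse data
coarse_clim = [5, 6, 9, 12, 16, 20, 23, 22, 18, 13, 8, 5]
fine_clim = [c - 2 for c in coarse_clim]
coarse_series = [coarse_clim[m % 12] + 0.1 * (m / 12.0) for m in range(24)]
fine_series = delta_downscale_temperature(coarse_series, coarse_clim, fine_clim)
```

By construction the monthly anomalies of the downscaled series match the coarse anomalies exactly; only the climatological baseline is refined.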
A large set of potential past, present and future hydro-meteorological time series for the UK
NASA Astrophysics Data System (ADS)
Guillod, Benoit P.; Jones, Richard G.; Dadson, Simon J.; Coxon, Gemma; Bussi, Gianbattista; Freer, James; Kay, Alison L.; Massey, Neil R.; Sparrow, Sarah N.; Wallom, David C. H.; Allen, Myles R.; Hall, Jim W.
2018-01-01
Hydro-meteorological extremes such as drought and heavy precipitation can have large impacts on society and the economy. With potentially increasing risks associated with such events due to climate change, properly assessing the associated impacts and uncertainties is critical for adequate adaptation. However, the application of risk-based approaches often requires large sets of extreme events, which are not commonly available. Here, we present such a large set of hydro-meteorological time series for recent past and future conditions for the United Kingdom based on weather@home 2, a modelling framework consisting of a global climate model (GCM) driven by observed or projected sea surface temperature (SST) and sea ice, which is downscaled to 25 km over the European domain by a regional climate model (RCM). Sets of 100 time series are generated for each of (i) a historical baseline (1900-2006), (ii) five near-future scenarios (2020-2049) and (iii) five far-future scenarios (2070-2099). The five scenarios in each future time slice all follow the Representative Concentration Pathway 8.5 (RCP8.5) and sample the range of sea surface temperature and sea ice changes from CMIP5 (Coupled Model Intercomparison Project Phase 5) models. Validation of the historical baseline highlights good performance for temperature and potential evaporation, but substantial seasonal biases in mean precipitation, which are corrected using a linear approach. For extremes in low precipitation over a long accumulation period (> 3 months) and shorter-duration high precipitation (1-30 days), the time series generally represent past statistics well. Future projections show small precipitation increases in winter but large decreases in summer on average, leading to an overall drying, consistent with the most recent UK Climate Projections (UKCP09) but larger in magnitude. 
Both drought and high-precipitation events are projected to increase in frequency and intensity in most regions, highlighting the need for appropriate adaptation measures. Overall, the presented dataset is a useful tool for assessing the risk associated with drought and more generally with hydro-meteorological extremes in the UK.
Geostationary Operational Environmental Satellite (GOES-N report)
NASA Technical Reports Server (NTRS)
1991-01-01
The Advanced Missions Analysis Office (AMAO) of GSFC has completed a study of the Geostationary Operational Environmental Satellites (GOES-N) series. The feasibility, risks, schedules, and associated costs of advanced space and ground system concepts responsive to National Oceanic and Atmospheric Administration (NOAA) requirements were evaluated. The study is the first step in a multi-phased procurement effort that is expected to result in launch ready hardware in the post 2000 time frame. This represents the latest activity of GSFC in translating meteorological requirements of NOAA into viable space systems in geosynchronous earth orbits (GEO). GOES-N represents application of the latest spacecraft, sensor, and instrument technologies to enhance NOAA meteorological capabilities via remote and in-situ sensing from GEO. The GOES-N series, if successfully developed, could become another significant step in NOAA weather forecasting space systems, meeting increasingly complex emerging national needs for that agency's services.
Kihara, Daisuke; Sael, Lee; Chikhi, Rayan; Esquivel-Rodriguez, Juan
2011-09-01
The tertiary structures of proteins have been solved at an increasing pace in recent years. To capitalize on the enormous effort invested in accumulating these structure data, efficient and effective computational methods need to be developed for comparing, searching, and investigating interactions of protein structures. We introduce the 3D Zernike descriptor (3DZD), an emerging technique for describing molecular surfaces. The 3DZD is a series expansion of a three-dimensional mathematical function, so a tertiary structure is represented compactly by a vector of the coefficients of the terms in the series. A strong advantage of the 3DZD is that it is invariant to rotation of the target object. These two characteristics of the 3DZD, compactness and rotation invariance, allow rapid comparison of surface shapes, which is sufficient for real-time structure database screening. In this article, we review various applications of the 3DZD that have been recently proposed.
A graph-based approach to detect spatiotemporal dynamics in satellite image time series
NASA Astrophysics Data System (ADS)
Guttler, Fabio; Ienco, Dino; Nin, Jordi; Teisseire, Maguelonne; Poncelet, Pascal
2017-08-01
Enhancing the frequency of satellite acquisitions represents a key issue for the Earth observation community nowadays. Repeated observations are crucial for monitoring purposes, particularly when intra-annual processes should be taken into account. Time series of images constitute a valuable source of information in these cases. The goal of this paper is to propose a new methodological framework to automatically detect and extract spatiotemporal information from satellite image time series (SITS). Existing methods dealing with such data are usually classification-oriented and cannot provide information about evolutions and temporal behaviors. In this paper we propose a graph-based strategy that combines object-based image analysis (OBIA) with data mining techniques. Image objects computed at each individual timestamp are connected across the time series, generating a set of evolution graphs. Each evolution graph is associated with a particular area within the study site and stores information about its temporal evolution. Such information can be explored in depth at the evolution-graph scale or used to compare the graphs and supply a general picture at the study-site scale. We validated our framework on two study sites located in the south of France, involving different types of natural, semi-natural and agricultural areas. The results obtained from a Landsat SITS support the quality of the methodological approach and illustrate how the framework can be employed to extract and characterize spatiotemporal dynamics.
NASA Astrophysics Data System (ADS)
Gibbes, C.; Southworth, J.; Waylen, P. R.
2013-05-01
How do climate variability and climate change influence vegetation cover and vegetation change in savannas? A landscape-scale investigation of the effect of changes in precipitation on vegetation is undertaken through a time series analysis. The multi-national study region is located within the Kavango-Zambezi region and is delineated by the Okavango, Kwando, and Zambezi watersheds. A mean-variance time-series analysis quantifies vegetation dynamics and characterizes vegetation response to climate. The spatially explicit approach used to quantify the persistence of vegetation productivity permits the extraction of information regarding long-term climate-landscape dynamics. Results show a pattern of reduced mean annual precipitation and increased precipitation variability across key social and ecological areas within the study region. Despite decreased mean annual precipitation since the mid-to-late 1970s, vegetation trends predominantly indicate increasing biomass. The limited areas with diminished vegetative cover relate to specific vegetation types and are associated with declines in precipitation variability. Results indicate that in addition to short-term changes in vegetation cover, long-term trends in productive biomass are apparent, relate to spatial differences in precipitation variability, and potentially represent shifts in vegetation composition. This work highlights the importance of time-series analyses for examining climate-vegetation linkages in a spatially explicit manner within a highly vulnerable region of the world.
Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.
Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K
2013-03-01
Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.
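The temporal alignment at the heart of HACA can be illustrated with the classic dynamic time warping (DTW) distance that alignment kernels of this kind build on. This is a minimal sketch of plain DTW, not the paper's generalized dynamic time alignment kernel or its kernel k-means machinery:

```python
def dtw_distance(x, y):
    """Dynamic time warping distance between two 1-D sequences, via the
    classic O(len(x) * len(y)) dynamic program with absolute-difference cost."""
    inf = float("inf")
    n, m = len(x), len(y)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest alignment ending just before (i, j)
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

a = [0, 0, 1, 2, 1, 0, 0]   # a motion "bump"
b = [0, 1, 2, 1, 0, 0, 0]   # the same bump, shifted by one step
c = [3, 3, 3, 3, 3, 3, 3]   # an unrelated flat segment
```

DTW aligns a and b perfectly (distance 0) despite the temporal shift, while segments like c remain far away; it is this tolerance to differing temporal scales that lets a clustering step group motion segments by shape rather than by exact timing.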
Multivariable Time Series Prediction for the Icing Process on Overhead Power Transmission Line
Li, Peng; Zhao, Na; Zhou, Donghua; Cao, Min; Li, Jingjie; Shi, Xinling
2014-01-01
The design of monitoring and predictive alarm systems is necessary for successfully managing overhead power transmission line icing. Given the characteristics of complexity, nonlinearity, and fitfulness in the line icing process, a model based on a multivariable time series is presented here to predict the icing load of a transmission line. In this model, the time effects of micrometeorology parameters on the icing process have been analyzed. Phase-space reconstruction theory and a machine learning method were then applied to establish the prediction model, which fully utilizes the history of multivariable time series data in local monitoring systems to represent the mapping relationship between icing load and micrometeorology factors. Reflecting the fitful character of line icing, simulations were carried out during the same icing process and during different processes to test the model's prediction precision and robustness. According to the simulation results for the Tao-Luo-Xiong Transmission Line, the model demonstrates good prediction accuracy across different processes when the prediction horizon is less than two hours, and it would help power grid departments decide to take action in advance to address potential icing disasters. PMID:25136653
Frequency Analysis of MODIS NDVI Time Series for Determining Hotspots of Land Degradation in Mongolia
NASA Astrophysics Data System (ADS)
Nasanbat, E.; Sharav, S.; Sanjaa, T.; Lkhamjav, O.; Magsar, E.; Tuvdendorj, B.
2018-04-01
This study examines whether MODIS NDVI satellite image time series can be used to determine hotspots of land degradation across Mongolia. The Mann-Kendall statistical trend test was applied to a 16-year MODIS NDVI record, based on 16-day composited temporal data for the growing season (May to September) from 2000 to 2016. We performed a frequency analysis in which the resulting NDVI residual trend pattern enabled negative and positive changes in photosynthetically healthy vegetation to be determined. Our results distinguished negative and positive values and produced a map of significant trends. We also examined long-term meteorological parameters for the same period. The results showed that positive and negative NDVI trends concurred with changes in land cover types, representing an improvement or a degradation in vegetation, respectively. In addition, the integrated climate parameters, precipitation and air temperature changes over the same period, appear to have affected a large portion of the NDVI trend area. The time series trend analysis approach successfully determined hotspots of improved and degraded areas due to land degradation and desertification.
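The Mann-Kendall trend test named above can be sketched in a few lines. This version omits the correction for tied values and uses the normal approximation for the two-sided p-value; the NDVI series is synthetic, standing in for one pixel's 16 growing seasons:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns the S statistic,
    Kendall's tau, and a two-sided p-value from the normal approximation."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    tau = s / (0.5 * n * (n - 1))
    return s, tau, p

# A steadily greening pixel: NDVI rising over 16 growing seasons
ndvi = [0.30 + 0.01 * t + 0.005 * ((-1) ** t) for t in range(16)]
s, tau, p = mann_kendall(ndvi)
```

A monotonic increase yields a large positive S and a small p-value; applying the test per pixel and mapping where p falls below a threshold gives exactly the kind of significant-trend map the abstract describes.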
Selection of Worst-Case Pesticide Leaching Scenarios for Pesticide Registration
NASA Astrophysics Data System (ADS)
Vereecken, H.; Tiktak, A.; Boesten, J.; Vanderborght, J.
2010-12-01
The use of pesticides, fertilizers and manure in intensive agriculture may have a negative impact on the quality of ground- and surface-water resources. Legislative action has been undertaken in many countries to protect surface and groundwater resources from contamination by surface-applied agrochemicals. Of particular concern are pesticides. The registration procedure plays an important role in the regulation of pesticide use in the European Union. In order to register a certain pesticide use, the notifier needs to prove that the use does not entail a risk of groundwater contamination. Therefore, leaching concentrations of the pesticide need to be assessed using model simulations for so-called worst-case scenarios. In the current procedure, a worst-case scenario represents a parameterized pesticide fate model for a certain soil and a certain time series of weather conditions that tries to represent all relevant processes, such as transient water flow, root water uptake, pesticide transport, sorption, decay and volatilisation, as accurately as possible. Since this model has been parameterized for only one soil and weather time series, it is uncertain whether it represents a worst-case condition for a certain pesticide use. We discuss an alternative approach that uses a simpler model requiring less detailed information about the soil and weather conditions but still representing the effect of soil and climate on pesticide leaching, using information that is available for the entire European Union. A comparison between the two approaches demonstrates that the higher precision that the detailed model provides for the prediction of pesticide leaching at a certain site is counteracted by its lower accuracy in representing a worst-case condition. The simpler model predicts leaching concentrations less precisely at a certain site but has complete coverage of the area, so that it selects a worst-case condition more accurately.
Remotely Sensed Based Lake/Reservoir Routing in Congo River Basin
NASA Astrophysics Data System (ADS)
Raoufi, R.; Beighley, E.; Lee, H.
2017-12-01
Lake and reservoir dynamics can influence local to regional water cycles but are often not well represented in hydrologic models. One challenge that limits their inclusion in models is the need for detailed storage-discharge behavior, which can be further complicated in reservoirs where specific operation rules are employed. Here, the Hillslope River Routing (HRR) model is combined with a remotely sensed Reservoir Routing (RR) method and applied to the Congo River Basin. Given that topographic data are often continuous over the entire terrestrial surface (i.e., they do not differentiate between land and open water), the HRR-RR model integrates topographically derived river networks and catchment boundaries (e.g., HydroSHEDS) with water boundary extents (e.g., the Global Lakes and Wetlands Database) to develop the computational framework. The catchments bordering lakes and reservoirs are partitioned into water and land portions, where representative flowpath characteristics are determined, and vertical water balance and lateral routing are performed separately on each partition based on applicable process models (e.g., open-water evaporation vs. evapotranspiration). To enable reservoir routing, remotely sensed water surface elevations and extents are combined to determine the storage change time series. Based on the available time series, representative storage change patterns are determined. Lake/reservoir routing is performed by combining inflows from the HRR-RR model and the representative storage change patterns to determine outflows. In this study, a suite of storage change patterns derived from remotely sensed measurements is used to determine representative patterns for wet, dry and average conditions. The HRR-RR model dynamically selects and uses the optimal storage change pattern for the routing process based on these hydrologic conditions. The HRR-RR model results are presented to highlight the importance of lake attenuation/routing in the Congo Basin.
Developing a New Curriculum for School-Age Learners. TESOL Language Curriculum Development Series
ERIC Educational Resources Information Center
Graves, Kathleen, Ed.; Lopriore, Lucilla, Ed.
2009-01-01
These are exciting and challenging times for English language curriculum development for school-age learners. The global reach of English has spurred a rethinking of its role in education and, consequently, a rethinking of how to teach it. The accounts in this volume represent differences in educational systems, language teaching traditions,…
ERIC Educational Resources Information Center
Minter, W. John; Bowen, Howard R.
The fifth report in an annual series designed to provide timely and reliable information on the condition of the independent sector of American higher education is presented. Information was gathered from a sample of 127 institutions representative of all independent, nonprofit, accredited institutions, except free-standing professional schools…
Power laws reveal phase transitions in landscape controls of fire regimes
Donald McKenzie; Maureen C. Kennedy
2012-01-01
Understanding the environmental controls on historical wildfires, and how they changed across spatial scales, is difficult because there are no surviving explicit records of either weather or vegetation (fuels). Here we show how power laws associated with fire-event time series arise in limited domains of parameters that represent critical transitions in the controls...
Fenn, Daniel J; Porter, Mason A; McDonald, Mark; Williams, Stacy; Johnson, Neil F; Jones, Nick S
2009-09-01
We study the cluster dynamics of multichannel (multivariate) time series by representing their correlations as time-dependent networks and investigating the evolution of network communities. We employ a node-centric approach that allows us to track the effects of the community evolution on the functional roles of individual nodes without having to track entire communities. As an example, we consider a foreign exchange market network in which each node represents an exchange rate and each edge represents a time-dependent correlation between the rates. We study the period 2005-2008, which includes the recent credit and liquidity crisis. Using community detection, we find that exchange rates that are strongly attached to their community are persistently grouped with the same set of rates, whereas exchange rates that are important for the transfer of information tend to be positioned on the edges of communities. Our analysis successfully uncovers major trading changes that occurred in the market during the credit crisis.
Argo, Paul E.; Fitzgerald, T. Joseph
1993-01-01
Fading-channel effects on a transmitted communication signal are simulated with both frequency and time variations, using a channel scattering function to affect the transmitted signal. A conventional channel scattering function is converted to a series of channel realizations by multiplying the square root of the channel scattering function by a complex number whose real and imaginary parts are each independent variables. The two-dimensional inverse FFT of this complex-valued channel realization yields a matrix of channel coefficients that provides a complete frequency-time description of the channel. The transmitted radio signal is segmented to provide a series of transmitted signal segments, and each segment is subjected to an FFT to generate a series of signal coefficient matrices. The channel coefficient matrices and signal coefficient matrices are then multiplied and subjected to an inverse FFT to output a signal representing the received, affected radio signal. A variety of channel scattering functions can be used to characterize the response of a transmitter-receiver system to such atmospheric effects.
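The channel-realization step described above can be sketched as follows, using a direct (slow) two-dimensional inverse DFT on a small toy grid. The exponential scattering function and the grid size are illustrative assumptions, not values from the actual system:

```python
import cmath, math, random

def idft2(F):
    """Two-dimensional inverse DFT by direct summation (O(N^4); fine for
    the small illustrative grid used here)."""
    n, m = len(F), len(F[0])
    out = [[0j] * m for _ in range(n)]
    for a in range(n):
        for b in range(m):
            acc = 0j
            for u in range(n):
                for v in range(m):
                    acc += F[u][v] * cmath.exp(2j * math.pi * (a * u / n + b * v / m))
            out[a][b] = acc / (n * m)
    return out

random.seed(0)
N = 8  # delay bins by Doppler bins (toy size)
# Toy scattering function with power concentrated at small delay/Doppler
S = [[math.exp(-(u + v)) for v in range(N)] for u in range(N)]
# One channel realization: sqrt(S) times unit-variance complex Gaussian numbers
R = [[math.sqrt(S[u][v]) * complex(random.gauss(0, 1), random.gauss(0, 1)) / math.sqrt(2)
      for v in range(N)] for u in range(N)]
H = idft2(R)  # matrix of channel coefficients: a frequency-time description
```

Each fresh draw of the complex Gaussian matrix gives a new statistically consistent channel realization; the inverse transform preserves the total power apart from the 1/(N*N) normalisation.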
Space shuttle simulation model
NASA Technical Reports Server (NTRS)
Tatom, F. B.; Smith, S. R.
1980-01-01
The effects of atmospheric turbulence in both horizontal and near horizontal flight, during the return of the space shuttle, are important for determining design, control, and 'pilot-in-the-loop' effects. A nonrecursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed which provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes which are entitled shuttle simulation turbulence tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 10,000 meters. The turbulence generation procedure is described as well as the results of validating the simulated turbulence. Conclusions and recommendations are presented and references cited. The tabulated one dimensional von Karman spectra and the results of spectral and statistical analyses of the SSTT are contained in the appendix.
Distance-dependent processing of pictures and words.
Amit, Elinor; Algom, Daniel; Trope, Yaacov
2009-08-01
A series of 8 experiments investigated the association between pictorial and verbal representations and the psychological distance of the referent objects from the observer. The results showed that people better process pictures that represent proximal objects and words that represent distal objects than pictures that represent distal objects and words that represent proximal objects. These results were obtained with various psychological distance dimensions (spatial, temporal, and social), different tasks (classification and categorization), and different measures (speed of processing and selective attention). The authors argue that differences in the processing of pictures and words emanate from the physical similarity of pictures, but not words, to the referents. Consequently, perceptual analysis is commonly applied to pictures but not to words. Pictures thus impart a sense of closeness to the referent objects and are preferably used to represent such objects, whereas words do not convey proximity and are preferably used to represent distal objects in space, time, and social perspective.
NASA Astrophysics Data System (ADS)
Staniec, Allison; Vlahos, Penny
2017-12-01
Long-term time series represent a critical part of the oceanographic community's efforts to discern natural and anthropogenically forced variations in the environment. They provide regular measurements of climate-relevant indicators including temperature, oxygen concentrations, and salinity. When evaluating time series, it is essential to isolate long-term trends from autocorrelation in the data and from noise due to natural variability. Herein we apply a statistical approach, well established in atmospheric time series, to key parameters in the U.S. east coast's Long Island Sound estuary (LIS). Analysis shows that the LIS time series (established in the early 1990s) is sufficiently long to detect significant trends in physical-chemical parameters including temperature (T) and dissolved oxygen (DO). Over the last two decades, overall (combined surface and deep) LIS T has increased at an average rate of 0.08 ± 0.03 °C yr-1, while overall DO has dropped at an average rate of 0.03 ± 0.01 mg L-1 yr-1 since 1994 at the 95% confidence level. This trend is notably faster than the global open-ocean T trend (0.01 °C yr-1), as might be expected for a shallower estuarine system. T and DO trends were always significant for the existing time series using four-month data increments. Rates of change of DO and T in LIS are strongly correlated, and the rate of decrease of DO concentrations is consistent with the expected reduced solubility of DO at these higher temperatures. Thus, changes in T alone across decadal timescales can account for between 33 and 100% of the observed decrease in DO. This has significant implications for other dissolved gases and the long-term management of LIS hypoxia.
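Isolating a long-term trend from autocorrelated data, as described above, is commonly done by inflating the trend's standard error with an effective sample size. This is a minimal sketch of that standard device on synthetic AR(1) data; it is the generic approach from atmospheric trend analysis, not the authors' exact procedure or data:

```python
import math, random

def trend_with_ar1(y, dt=1.0):
    """OLS trend whose standard error is inflated for lag-1 autocorrelation
    using the effective sample size n_eff = n * (1 - r1) / (1 + r1)."""
    n = len(y)
    t = [i * dt for i in range(n)]
    tbar, ybar = sum(t) / n, sum(y) / n
    stt = sum((ti - tbar) ** 2 for ti in t)
    slope = sum((t[i] - tbar) * (y[i] - ybar) for i in range(n)) / stt
    resid = [y[i] - ybar - slope * (t[i] - tbar) for i in range(n)]
    ss = sum(r * r for r in resid)
    r1 = sum(resid[i] * resid[i + 1] for i in range(n - 1)) / ss  # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)
    se = math.sqrt(ss / (n - 2) / stt) * math.sqrt(n / max(n_eff, 2.0))
    return slope, se, r1

# Synthetic 20-year monthly series: 0.08 (units/yr) trend plus AR(1) noise
random.seed(1)
noise = [0.0]
for _ in range(239):
    noise.append(0.6 * noise[-1] + random.gauss(0, 0.3))
y = [0.08 * (i / 12.0) + noise[i] for i in range(240)]
slope, se, r1 = trend_with_ar1(y, dt=1.0 / 12.0)
```

Ignoring the autocorrelation term would understate the standard error and overstate significance; with the inflation applied, a trend is reported as significant only when it stands clear of the correlated natural variability.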
Investigation of a long time series of CO2 from a tall tower using WRF-SPA
NASA Astrophysics Data System (ADS)
Smallman, Luke; Williams, Mathew; Moncrieff, John B.
2013-04-01
Atmospheric observations from tall towers are an important source of information about CO2 exchange at the regional scale. Here, we have used a forward running model, WRF-SPA, to generate a time series of CO2 at a tall tower for comparison with observations from Scotland over multiple years (2006-2008). We use this comparison to infer strength and distribution of sources and sinks of carbon and ecosystem process information at the seasonal scale. The specific aim of this research is to combine a high resolution (6 km) forward running meteorological model (WRF) with a modified version of a mechanistic ecosystem model (SPA). SPA provides surface fluxes calculated from coupled energy, hydrological and carbon cycles. This closely coupled representation of the biosphere provides realistic surface exchanges to drive mixing within the planetary boundary layer. The combined model is used to investigate the sources and sinks of CO2 and to explore which land surfaces contribute to a time series of hourly observations of atmospheric CO2 at a tall tower, Angus, Scotland. In addition to comparing the modelled CO2 time series to observations, modelled ecosystem specific (i.e. forest, cropland, grassland) CO2 tracers (e.g., assimilation and respiration) have been compared to the modelled land surface assimilation to investigate how representative tall tower observations are of land surface processes. WRF-SPA modelled CO2 time series compares well to observations (R2 = 0.67, rmse = 3.4 ppm, bias = 0.58 ppm). Through comparison of model-observation residuals, we have found evidence that non-cropped components of agricultural land (e.g., hedgerows and forest patches) likely contribute a significant and observable impact on regional carbon balance.
NASA Astrophysics Data System (ADS)
Kim, Y.; Johnson, M. S.
2017-12-01
Spectral entropy (Hs) is an index which can be used to measure the structural complexity of time series data. When a time series is made up of one periodic function, the Hs value becomes smaller, while Hs becomes larger when a time series is composed of several periodic functions. We hypothesized that this characteristic of the Hs could be used to quantify the water stress history of vegetation. For the ideal condition for which sufficient water is supplied to an agricultural crop or natural vegetation, there should be a single distinct phenological cycle represented in a vegetation index time series (e.g., NDVI and EVI). However, time series data for a vegetation area that repeatedly experiences water stress may include several fluctuations that can be observed in addition to the predominant phenological cycle. This is because the process of experiencing water stress and recovering from it generates small fluctuations in phenological characteristics. Consequently, the value of Hs increases when vegetation experiences several water shortages. Therefore, the Hs could be used as an indicator for water stress history. To test this hypothesis, we analyzed Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) data for a natural area in comparison to a nearby sugarcane area in seasonally-dry western Costa Rica. In this presentation we will illustrate the use of spectral entropy to evaluate the vegetative responses of natural vegetation (dry tropical forest) and sugarcane under three different irrigation techniques (center pivot irrigation, drip irrigation and flood irrigation). Through this comparative analysis, the utility of Hs as an indicator will be tested. Furthermore, crop response to the different irrigation methods will be discussed in terms of Hs, NDVI and yield.
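The spectral entropy Hs described here can be computed directly from the normalized power spectrum. This is a minimal sketch with a direct DFT and synthetic series standing in for the NDVI data; the series length and the added fluctuation frequencies are illustrative assumptions:

```python
import cmath, math

def spectral_entropy(x):
    """Normalised spectral entropy Hs: Shannon entropy of the power spectrum
    (DC term excluded), divided by log(number of bins) so that 0 <= Hs <= 1."""
    n = len(x)
    power = []
    for k in range(1, n // 2 + 1):
        c = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(c) ** 2)
    total = sum(power)
    p = [v / total for v in power]
    h = -sum(v * math.log(v) for v in p if v > 0)
    return h / math.log(len(p))

n = 92  # e.g. four years of 23 sixteen-day composites (illustrative)
# Well-watered vegetation: a single clean annual cycle
single = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
# Water-stressed vegetation: extra sub-annual fluctuations on top of the cycle
stressed = [math.sin(2 * math.pi * 4 * t / n)
            + 0.5 * math.sin(2 * math.pi * 11 * t / n)
            + 0.3 * math.sin(2 * math.pi * 23 * t / n) for t in range(n)]
hs_single, hs_stressed = spectral_entropy(single), spectral_entropy(stressed)
```

The single-cycle series concentrates its power in one spectral bin and yields Hs near zero, while the series carrying stress-and-recovery fluctuations spreads power across several bins and yields a larger Hs, which is the contrast the hypothesis rests on.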
NASA Astrophysics Data System (ADS)
Nickles, C.; Zhao, Y.; Beighley, E.; Durand, M. T.; David, C. H.; Lee, H.
2017-12-01
The Surface Water and Ocean Topography (SWOT) satellite mission is being developed jointly by NASA and the French space agency (CNES), with participation from the Canadian and UK space agencies, to serve both the hydrology and oceanography communities. The SWOT mission will sample global surface water extents and elevations (lakes/reservoirs, rivers, estuaries, oceans, sea and land ice) at a finer spatial resolution than is currently possible, enabling hydrologic discovery, model advancements, and new applications that are not currently possible or likely even conceivable. Although the mission will provide global coverage, analysis and interpolation of the data generated from the irregular space/time sampling represent a significant challenge. In this study, we explore the applicability of the unique space/time sampling for understanding river discharge dynamics throughout the Ohio River Basin. River network topology, SWOT sampling (i.e., orbit and identified SWOT river reaches) and spatial interpolation concepts are used to quantify the fraction of river reaches effectively sampled on each day of the three-year mission. Streamflow statistics for SWOT-generated river discharge time series are compared to continuous daily river discharge series. Relationships are presented to transform SWOT-generated streamflow statistics into equivalent continuous daily discharge time series statistics, intended to support hydrologic applications using low-flow and annual flow duration statistics.
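The effect of irregular sampling on flow-duration statistics can be sketched by subsampling a synthetic daily series; the sampling pattern and discharge model below are invented for illustration, not the actual SWOT orbit or Ohio River data:

```python
import numpy as np

rng = np.random.default_rng(8)
days = np.arange(3 * 365)                       # a three-year mission
# Synthetic daily discharge: seasonal cycle plus skewed runoff noise (m^3/s)
q = 100.0 + 60.0 * np.sin(2 * np.pi * days / 365.0) + rng.gamma(2.0, 10.0, days.size)

# Irregular SWOT-like sampling: on average ~3 observations per 21-day repeat
n_obs = days.size * 3 // 21
obs_days = np.sort(rng.choice(days, size=n_obs, replace=False))
q_obs = q[obs_days]

# Low-flow statistic from the sampled series vs the continuous daily series
q95_true = float(np.percentile(q, 5))           # flow exceeded ~95% of the time
q95_swot = float(np.percentile(q_obs, 5))
```

Comparing `q95_swot` with `q95_true` over many synthetic realizations is one way to build the kind of transform relationships between sampled and continuous statistics that the abstract describes.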
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
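The representation theorem behind the chaotic moving average can be sketched numerically: convolve a linear filter with an uncorrelated chaotic innovation, here logistic-map iterates at r = 4, whose successive values are (up to numerical effects) uncorrelated. The filter h is hypothetical, and the sketch covers only the forward model, not the minimum phase-volume deconvolution itself:

```python
import numpy as np

def logistic_innovation(n, r=4.0, x0=0.3):
    """Chaotic 'innovation': iterates of the logistic map, which for r = 4
    are uncorrelated in time despite being fully deterministic."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x - x.mean()

def lag1_corr(v):
    """Sample lag-1 autocorrelation."""
    v = v - v.mean()
    return float(np.dot(v[:-1], v[1:]) / np.dot(v, v))

# Chaotic moving average: a linear filter convolved with the chaotic innovation
h = np.array([1.0, 0.6, 0.3])          # hypothetical minimum-delay filter
e = logistic_innovation(2000)
y = np.convolve(e, h)[: len(e)]        # correlated series driven by chaos
```

The innovation is near-white, while the filtered series inherits the filter's correlation structure, which is exactly what a deconvolution step can exploit to recover the filter and innovation.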
Permutation entropy with vector embedding delays
NASA Astrophysics Data System (ADS)
Little, Douglas J.; Kane, Deb M.
2017-12-01
Permutation entropy (PE) is a statistic used widely for the detection of structure within a time series. Embedding delay times at which the PE is reduced are characteristic timescales for which such structure exists. Here, a generalized scheme is investigated where embedding delays are represented by vectors rather than scalars, permitting PE to be calculated over a (D − 1)-dimensional space, where D is the embedding dimension. This scheme is applied to numerically generated noise, sine wave and logistic map series, and experimental data sets taken from a vertical-cavity surface-emitting laser exhibiting temporally localized pulse structures within the round-trip time of the laser cavity. Results are visualized as PE maps as a function of embedding delay, with low PE values indicating combinations of embedding delays where correlation structure is present. It is demonstrated that vector embedding delays enable identification of structure that is ambiguous or masked when the embedding delay is constrained to scalar form.
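A sketch of PE with vector embedding delays, under the assumption (not spelled out in the abstract) that the delay vector gives the spacings between successive samples of the embedding; a scalar delay is then the special case of equal components:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, delays):
    """Normalized PE with a vector of embedding delays.

    Embedding dimension is D = len(delays) + 1; a scalar delay tau
    corresponds to delays = [tau] * (D - 1).
    """
    x = np.asarray(x, dtype=float)
    offsets = np.concatenate(([0], np.cumsum(delays)))
    n = len(x) - offsets[-1]
    counts = {}
    for t in range(n):
        pattern = tuple(np.argsort(x[t + offsets]))   # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    D = len(offsets)
    return float(-np.sum(p * np.log(p)) / np.log(factorial(D)))  # in [0, 1]

rng = np.random.default_rng(0)
pe_ramp = permutation_entropy(np.arange(200.0), [3, 5])     # fully ordered
pe_noise = permutation_entropy(rng.normal(size=5000), [3, 5])
```

A monotonic series yields a single ordinal pattern (PE = 0), while white noise visits all patterns nearly uniformly (PE near 1); sweeping the delay vector and mapping PE over the (D − 1)-dimensional delay space reproduces the kind of PE map the paper describes.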
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Bugbee, Bruce; Gotseff, Peter
Capturing the technical and economic impacts of solar photovoltaics (PV) and other distributed energy resources (DERs) on electric distribution systems can require high-time-resolution (e.g., 1-minute), long-duration (e.g., 1-year) simulations. However, such simulations can be computationally prohibitive, particularly when including complex control schemes in quasi-steady-state time series (QSTS) simulation. Various approaches have been used in the literature to down-select representative time segments (e.g., days), but typically these are best suited to lower time resolutions or consider only a single data stream (e.g., PV production) for selection. We present a statistical approach that combines stratified sampling and bootstrapping to select representative days while also providing a simple method to reassemble annual results. We describe the approach in the context of a recent study with a utility partner. This approach enables much faster QSTS analysis by simulating only a subset of days, while maintaining accurate annual estimates.
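A hedged sketch of the day-selection idea (stratified sampling with weights that allow annual reassembly); the strata, sample sizes and PV data below are invented for illustration, and the bootstrap confidence step of the actual method is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
daily_pv = rng.gamma(2.0, 2.0, size=365)      # synthetic daily PV energy (kWh)

# Stratify days by quartile of daily production, then sample 3 days per stratum
edges = np.quantile(daily_pv, [0.25, 0.5, 0.75])
strata = np.digitize(daily_pv, edges)

rep_days, weights = [], []
for s in range(4):
    members = np.flatnonzero(strata == s)
    picks = rng.choice(members, size=3, replace=False)
    rep_days.extend(picks)
    weights.extend([len(members) / 3.0] * 3)  # each pick stands in for 1/3 of its stratum

# Reassemble an annual estimate from only the simulated representative days
annual_estimate = float(np.sum(daily_pv[rep_days] * np.array(weights)))
annual_true = float(daily_pv.sum())
```

Only the 12 representative days would need full QSTS simulation; the weights (which sum to 365) then scale the per-day results back up to an annual figure.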
Statistical inference methods for sparse biological time series data.
Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita
2011-04-25
Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed-effects regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method in order to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and of associating significance levels with these differences. Among the several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with the highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had been, or had not been, preconditioned by heat during growth. Furthermore, pairwise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed-effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles.
The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
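The three-parameter logistic fit can be sketched with `scipy.optimize.curve_fit` on a synthetic sparse profile; the mixed-effects structure and the ANOVA likelihood ratio testing of the paper are beyond this sketch, and the data below are invented stand-ins for the NMR measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic3(t, a, k, t0):
    """Three-parameter logistic: asymptote a, rate k, inflection time t0."""
    return a / (1.0 + np.exp(-k * (t - t0)))

# Sparse, noisy "glucose consumption" profile (synthetic stand-in for NMR data)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 12)                  # only 12 time points
y = logistic3(t, 5.0, 1.2, 4.0) + rng.normal(0.0, 0.1, t.size)

popt, pcov = curve_fit(logistic3, t, y, p0=[4.0, 1.0, 5.0])
a_hat, k_hat, t0_hat = popt
```

With a fitted parametric curve per condition, a likelihood ratio test between a pooled fit and condition-specific fits is then a natural way to test for differences between profiles, as the authors do.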
NASA Astrophysics Data System (ADS)
Iannacone, J.; Berti, M.; Allievi, J.; Del Conte, S.; Corsini, A.
2013-12-01
Space-borne InSAR has proven to be very valuable for landslide detection. In particular, extremely slow landslides (Cruden and Varnes, 1996) can now be clearly identified, thanks to the millimetric precision reached by recent multi-interferometric algorithms. The typical approach to radar interpretation for landslide mapping is based on the average annual velocity of the deformation, calculated over the entire time series. The Hotspot and Cluster Analysis (Lu et al., 2012) and the PSI-based matrix approach (Cigna et al., 2013) are examples of landslide mapping techniques based on average annual velocities. However, slope movements can be affected by non-linear deformation trends (i.e. reactivation of dormant landslides, deceleration due to natural or man-made slope stabilization, seasonal activity, etc.). Therefore, analyzing deformation time series is crucial in order to fully characterize slope dynamics. While this is relatively simple to carry out manually when dealing with small datasets, time series analysis over regional-scale datasets requires automated classification procedures. Berti et al. (2013) developed an automatic procedure for the analysis of InSAR time series based on a sequence of statistical tests. The analysis classifies the time series into six distinctive target trends (0 = uncorrelated; 1 = linear; 2 = quadratic; 3 = bilinear; 4 = discontinuous without constant velocity; 5 = discontinuous with change in velocity) which are likely to represent different slope processes. The analysis also provides a series of descriptive parameters which can be used to characterize the temporal changes of ground motion. All the classification algorithms were integrated into a Graphical User Interface called PSTime. We investigated an area of about 2000 km2 in the Northern Apennines of Italy by using the SqueeSAR™ algorithm (Ferretti et al., 2011).
Two Radarsat-1 data stacks, comprising 112 scenes in descending orbit and 124 scenes in ascending orbit, were processed. The time coverage spans April 2003 to November 2012, with an average temporal frequency of one scene per month. Radar interpretation was carried out by considering average annual velocities as well as the acceleration/deceleration trends evidenced by PSTime. Altogether, from the ascending and descending geometries respectively, this approach allowed the detection of 115 and 112 potential landslides on the basis of average displacement rate, and of 77 and 79 landslides on the basis of acceleration trends. In conclusion, time series analysis proved to be very valuable for landslide mapping. In particular, it highlighted areas with marked acceleration during a specific period that nonetheless show a low average annual velocity over the entire analysis period. On the other hand, even in areas with high average annual velocity, time series analysis was of primary importance for characterizing the slope dynamics in terms of acceleration events.
NASA Astrophysics Data System (ADS)
Usowicz, Jerzy, B.; Marczewski, Wojciech; Usowicz, Boguslaw; Lipiec, Jerzy; Lukowski, Mateusz I.
2010-05-01
This paper presents the results of a time series analysis of the soil moisture observed at two test sites, Podlasie and Polesie, during the Cal/Val AO 3275 campaigns in Poland over the interval 2006-2009. The test sites were selected on the basis of their contrasting hydrological conditions: the Podlasie region (Trzebieszow) is essentially drier than the wetland region of Polesie (Urszulin). It is worth noting that soil moisture variations can be represented as a non-stationary random process, and therefore appropriate analysis methods are required. The Empirical Mode Decomposition (EMD) method was chosen, since it is one of the best methods for the analysis of non-stationary and nonlinear time series. To confirm the results obtained by the EMD we have also used wavelet methods. First, we used EMD (the analysis step) to decompose the original time series into so-called Intrinsic Mode Functions (IMFs), and then, by grouping and adding similar IMFs (the synthesis step), obtained a few signal components with corresponding temporal scales. Such an adaptive procedure makes it possible to decompose the original time series into diurnal, seasonal and trend components. Revealing all the temporal scales operating in the original time series is our main objective, and this approach may prove useful in other studies. Second, we analyzed the soil moisture time series from both sites using cross-wavelet and wavelet coherency methods. These methods allow us to study the degree of spatial coherence, which may vary over different intervals of time. We hope the obtained results provide some hints and guidelines for the validation of ESA SMOS data. References: B. Usowicz, J.B. Usowicz, Spatial and temporal variation of selected physical and chemical properties of soil, Institute of Agrophysics, Polish Academy of Sciences, Lublin 2004, ISBN 83-87385-96-4. Rao, A.R., Hsu, E.-C., Hilbert-Huang Transform Analysis of Hydrological and Environmental Time Series, Springer, 2008, ISBN 978-1-4020-6453-1. Acknowledgements: This work was funded in part by the PECS Programme for European Cooperating States, No. 98084, "SWEX/R - Soil Water and Energy Exchange/Research".
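The synthesis step (grouping IMFs by temporal scale) can be sketched once IMFs are available; here the "IMFs" are synthetic stand-ins rather than the output of an actual EMD, and the grouping thresholds are illustrative:

```python
import numpy as np

def mean_period(imf):
    """Crude time scale of an IMF: mean period from its zero-crossing count."""
    crossings = np.sum(np.diff(np.sign(imf)) != 0)
    return 2.0 * len(imf) / max(crossings, 1)

# Stand-ins for IMFs of an hourly, one-year soil-moisture record
t = np.arange(24 * 365)
imfs = [np.sin(2 * np.pi * t / 24.0),            # diurnal scale
        np.sin(2 * np.pi * t / (24.0 * 91)),     # seasonal scale
        0.001 * t]                               # residual trend

# Synthesis step: group IMFs by their characteristic time scale (in hours)
diurnal = [m for m in imfs if mean_period(m) < 24 * 7]
seasonal = [m for m in imfs if 24 * 7 <= mean_period(m) < 24 * 365]
trend = [m for m in imfs if mean_period(m) >= 24 * 365]
```

Summing the members of each group then yields the diurnal, seasonal and trend components described in the abstract; a real analysis would obtain the IMFs from an EMD implementation rather than construct them by hand.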
Visual Analytics of integrated Data Systems for Space Weather Purposes
NASA Astrophysics Data System (ADS)
Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo
Analysis of information from multiple data sources obtained through high-resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and measurement extension, is key to studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time series generalization introducing the concept of the generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this novel representation approach each generalized numerical lattice carries post-analytical data information. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size and a post-analytical measure (e.g., the autocorrelation, Hurst exponent, etc.) [1]. From this generalized representation, any multi-source database can be reduced to a closed set of classified time series in spatiotemporal generalized dimensions. As a case study, we show a preliminary application to space science data, highlighting the possibility of a real-time analysis expert system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated with X-class flares. The association with geomagnetic activity is also reported. The DFA method is performed in the framework of a radio burst automatic monitoring system. Our results may characterize the evolution of the variability pattern by computing the DFA scaling exponent while scanning the time series with a short window before the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented.
The prototype for visual analytics is implemented in CUDA (Compute Unified Device Architecture), using Nvidia K20 graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al., doi: 10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al., doi: 10.1016/j.jastp.2010.09.030, 2011.
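The DFA computation mentioned above can be sketched directly (standard first-order DFA on a synthetic series; the solar radio burst data themselves are not reproduced here):

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: slope of log F(n) vs log n."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for n in scales:
        segs = len(y) // n
        t = np.arange(n)
        f2 = 0.0
        for i in range(segs):
            seg = y[i * n:(i + 1) * n]
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            f2 += np.mean((seg - trend) ** 2)
        F.append(np.sqrt(f2 / segs))
    return float(np.polyfit(np.log(scales), np.log(F), 1)[0])

rng = np.random.default_rng(2)
alpha_noise = dfa_exponent(rng.normal(size=20000), [16, 32, 64, 128, 256])
```

For uncorrelated noise the scaling exponent is near 0.5; persistent, long-range-correlated bursts push the exponent higher, which is what makes a windowed DFA exponent usable as a precursor statistic.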
NASA Astrophysics Data System (ADS)
Wang, X.; Tu, C. Y.; He, J.; Wang, L.
2017-12-01
The nature of the Elsässer variable z- observed in the Alfvénic solar wind has been a longstanding debate. It is widely believed that z- represents inward-propagating Alfvén waves that undergo non-linear interaction with z+ to produce the energy cascade. However, z- variations sometimes show the nature of convective structures. Here we present a new data analysis of z- autocorrelation functions to obtain some definite information on its nature. We find that there is usually a break point in the z- autocorrelation function when the fluctuations show nearly pure Alfvénicity. The break point observed by the Helios 2 spacecraft near 0.3 AU is at the first time lag (81 s), where the autocorrelation coefficient is lower than the zero-lag value by more than 0.4. The autocorrelation function breaks also appear in WIND observations near 1 AU. The break separates the z- autocorrelation function into two parts, a fast-decreasing part and a slowly decreasing part, which cannot be described as a whole by an exponential formula. The breaks in the z- autocorrelation function may indicate that the z- time series are composed of high-frequency white noise and low-frequency apparent structures, which correspond to the flat and steep parts of the function, respectively. This explanation is supported by a simple test with a superposition of an artificial random data series and a smoothed random data series. Since in many cases the z- autocorrelation functions do not decrease very quickly at large time lags and cannot be considered of Lanczos type, no reliable value for the correlation time can be derived. Our results show that in these cases of high Alfvénicity, z- should not be considered an inward-propagating wave. The power-law spectrum of z+ is then likely produced by a Kolmogorov-type fluid turbulence cascade.
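The authors' simple test, a superposition of a random series and a smoothed random series, is easy to reproduce in spirit (the amplitudes and smoothing width below are arbitrary choices, not values from the paper):

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation function up to max_lag."""
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(3)
n = 20000
noise = rng.normal(size=n)                                   # high-frequency white noise
structures = np.convolve(rng.normal(size=n),
                         np.ones(50) / 50.0, mode="same")    # smoothed "structures"
z = noise + 3.0 * structures

ac = autocorr(z, 100)
```

The autocorrelation drops sharply between lag 0 and the first lag (the white-noise contribution), then decays slowly over the smoothing scale (the structure contribution), reproducing the described break between the two regimes.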
Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm
NASA Astrophysics Data System (ADS)
Mathai, J.; Mujumdar, P.
2017-12-01
A key focus of this study is to develop a method that is physically consistent with the underlying hydrologic processes and can capture the short-term characteristics of the daily hydrograph as well as the correlation of streamflow in the temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales, such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of streamflow time series. The method has two steps: in step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising-limb increments randomly sampled from a Gamma distribution and the falling limb modelled as an exponential recession; in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN-based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. The Markov chain approach, by contrast, is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and its improvement over the traditional KNN approach by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
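Step 1 of the method can be sketched as a two-state chain with gamma-distributed rises and exponential recessions; all parameter values below are invented for illustration, and the KNN resampling of step 2 is not shown:

```python
import numpy as np

rng = np.random.default_rng(4)
P = np.array([[0.7, 0.3],     # transition probabilities: state 0 = recession,
              [0.5, 0.5]])    # state 1 = rising limb

n = 3650                      # ten years of daily flows
flow = np.empty(n)
flow[0] = 10.0
state = 0
for t in range(1, n):
    state = rng.choice(2, p=P[state])
    if state == 1:
        flow[t] = flow[t - 1] + rng.gamma(2.0, 5.0)     # gamma-distributed rise
    else:
        flow[t] = max(flow[t - 1] * np.exp(-0.3), 0.5)  # exponential recession
```

Because the increments are drawn from continuous distributions, the generated hydrographs contain values absent from any historical record, which is precisely the variability the KNN resampler alone cannot produce.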
Inductive Approaches to Improving Diagnosis and Design for Diagnosability
NASA Technical Reports Server (NTRS)
Fisher, Douglas H. (Principal Investigator)
1995-01-01
The first research area under this grant addresses the problem of classifying time series according to their morphological features in the time domain. A supervised learning system called CALCHAS, which induces a classification procedure for signatures from preclassified examples, was developed. For each of several signature classes, the system infers a model that captures the class's morphological features using Bayesian model induction and the minimum message length approach to assign priors. After induction, a time series (signature) is classified in one of the classes when there is enough evidence to support that decision. Time series with sufficiently novel features, belonging to classes not present in the training set, are recognized as such. A second area of research assumes two sources of information about a system: a model or domain theory that encodes aspects of the system under study and data from actual system operations over time. A model, when it exists, represents strong prior expectations about how a system will perform. Our work with a diagnostic model of the RCS (Reaction Control System) of the Space Shuttle motivated the development of SIG, a system which combines information from a model (or domain theory) and data. As it tracks RCS behavior, the model computes quantitative and qualitative values. Induction is then performed over the data represented by both the 'raw' features and the model-computed high-level features. Finally, work on clustering for operating mode discovery motivated some important extensions to the clustering strategy we had used. One modification appends an iterative optimization technique onto the clustering system; this optimization strategy appears to be novel in the clustering literature. A second modification improves the noise tolerance of the clustering system. 
In particular, we adapt resampling-based pruning strategies used by supervised learning systems to the task of simplifying hierarchical clusterings, thus making post-clustering analysis easier.
Using in-situ Glider Data to Improve the Interpretation of Time-Series Data in the San Pedro Channel
NASA Astrophysics Data System (ADS)
Teel, E.; Liu, X.; Seegers, B. N.; Ragan, M. A.; Jones, B. H.; Levine, N. M.
2016-02-01
Oceanic time-series have provided insight into biological, physical, and chemical processes and how these processes change over time. However, time-series data collected near coastal zones have not been used as broadly because of regional features that may prevent extrapolation of local results. Though these sites are inherently more affected by local processes, broadening the application of coastal data is crucial for improved modeling of processes such as total carbon drawdown and the development of oxygen minimum zones. Slocum gliders were deployed off the coast of Los Angeles from February to July of 2013 and 2014 providing high temporal and spatial resolution data of the San Pedro Channel (SPC), which includes the San Pedro Ocean Time Series (SPOT). The data were collapsed onto a standardized grid and primary and secondary characteristics of glider profiles were analyzed by principal component analysis to determine the processes impacting SPC and SPOT. The data fell into four categories: active upwelling, offshore intrusion, subsurface bloom, and surface bloom. Waters across the SPC were most similar to offshore water masses, even during the upwelling season when near-shore blooms are commonly observed. The SPOT site was found to be representative of the SPC 86% of the time, suggesting that the findings from SPOT are applicable for the entire SPC. Subsurface blooms were common in both years with co-located chlorophyll and particle maxima, and results suggested that these subsurface blooms contribute significantly to the local primary production. Satellite estimation of integrated chlorophyll was poor, possibly due to the prevalence of subsurface blooms and shallow optical depths during surface blooms. These results indicate that high resolution in-situ glider deployments can be used to determine the spatial domain of coastal time-series data, allowing for broader application of these datasets and greater integration into modeling efforts.
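The profile-PCA step can be sketched on synthetic profiles built from a surface and a subsurface bloom mode; the glider data and the categorization into the four regimes described above are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)
depth = np.linspace(0.0, 100.0, 50)
# Two hypothetical vertical chlorophyll modes: a surface and a subsurface bloom
surface = np.exp(-((depth - 5.0) / 10.0) ** 2)
subsurface = np.exp(-((depth - 40.0) / 10.0) ** 2)

# 200 synthetic gridded profiles: random mixtures of the two modes plus noise
ab = rng.uniform(0.0, 1.0, size=(200, 2))
profiles = ab @ np.vstack([surface, subsurface]) + rng.normal(0.0, 0.02, (200, 50))

# PCA via SVD of the mean-centred profile matrix
X = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
```

With two underlying modes, the first two principal components capture nearly all the variance; scores on those components (the rows of `U * s`) are what a classification into profile categories would operate on.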
A stochastic model for correlated protein motions
NASA Astrophysics Data System (ADS)
Karain, Wael I.; Qaraeen, Nael I.; Ajarmah, Basem
2006-06-01
A one-dimensional Langevin-type stochastic difference equation is used to find the deterministic and Gaussian contributions of time series representing the projections of a Bovine Pancreatic Trypsin Inhibitor (BPTI) protein molecular dynamics simulation along different eigenvector directions determined using principal component analysis. The deterministic part shows a distinct nonlinear behavior only for eigenvectors contributing significantly to the collective protein motion.
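The deterministic part of a Langevin-type difference equation can be estimated from a series by binning the conditional mean increment; the series below is a synthetic Ornstein-Uhlenbeck-like stand-in for a principal-component projection, not the BPTI trajectory:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200000
dt = 1.0
x = np.empty(n)
x[0] = 0.0
# Surrogate "projection onto an eigenvector": linear drift plus Gaussian noise
for i in range(n - 1):
    x[i + 1] = x[i] - 0.1 * x[i] * dt + 0.2 * np.sqrt(dt) * rng.normal()

# Deterministic part: conditional mean increment D1(x) = <x(t+1) - x(t) | x(t)>
bins = np.linspace(-1.0, 1.0, 21)
centers = 0.5 * (bins[:-1] + bins[1:])
dx = np.diff(x)
idx = np.digitize(x[:-1], bins) - 1
drift = np.array([dx[idx == k].mean() if np.any(idx == k) else np.nan
                  for k in range(20)])

valid = ~np.isnan(drift)
slope = float(np.polyfit(centers[valid], drift[valid], 1)[0])  # expect about -0.1
```

Here the recovered drift is linear in x by construction; a distinctly nonlinear estimated drift, as the authors report for the dominant eigenvectors, would show up as curvature in the binned conditional means.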
ERIC Educational Resources Information Center
Harkins, Judith E., Ed.; Virvan, Barbara M., Ed.
The conference proceedings contains 23 papers on telephone relay service, real-time captioning, and automatic speech recognition, and a glossary. The keynote address, by Representative Major R. Owens, examines current issues in federal legislation. Other papers have the following titles and authors: "Telephone Relay Service: Rationale and…
The use of multiple imputation in the Southern Annual Forest Inventory System
Gregory A. Reams; Joseph M. McCollum
2000-01-01
The Southern Research Station is currently implementing an annual forest survey in 7 of the 13 States that it is responsible for surveying. The Southern Annual Forest Inventory System (SAFIS) sampling design is a systematic sample of five interpenetrating grids, whereby an equal number of plots are measured each year. The area-representative and time-series...
ERIC Educational Resources Information Center
Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.
2011-01-01
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
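The GLS idea can be sketched as a Cochrane-Orcutt-style quasi-differencing on a synthetic two-phase single-case series; this is one simple way to fit a regression with AR(1) errors, not necessarily the exact estimator of the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 60
phase = np.repeat([0.0, 1.0], n // 2)    # baseline phase, then intervention phase
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):                    # AR(1) errors with rho = 0.5
    e[t] = 0.5 * e[t - 1] + rng.normal()
y = 2.0 + 3.0 * phase + e                # true intervention effect of 3.0

# Step 1: OLS fit, then estimate rho from the residuals
X = np.column_stack([np.ones(n), phase])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
r = y - X @ beta_ols
rho = float(np.dot(r[:-1], r[1:]) / np.dot(r[:-1], r[:-1]))

# Step 2: quasi-difference (a simple GLS transform) and re-fit
y_s = y[1:] - rho * y[:-1]
X_s = X[1:] - rho * X[:-1]
beta_gls = np.linalg.lstsq(X_s, y_s, rcond=None)[0]
```

`beta_gls[1]` estimates the level shift between phases after accounting for the autocorrelation; standardizing such a coefficient is the general route to an autocorrelation-aware effect size.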
What Is Evidence-Based Practice? Research-to-Results Brief. Publication #2007-14
ERIC Educational Resources Information Center
Metz, Allison J. R.; Espiritu, Rachele; Moore, Kristin A.
2007-01-01
This brief represents part 1 in a series on fostering the adoption of evidence-based practices in out-of-school time programs. The lag between discovering effective practices and using them "on the ground" can be unnecessarily long, sometimes taking 15 to 20 years! The purpose of this brief is to provide practitioners with a better understanding…
Francaviglia, Natale; Maugeri, Rosario; Odierna Contino, Antonino; Meli, Francesco; Fiorenza, Vito; Costantino, Gabriele; Giammalva, Roberto Giuseppe; Iacopino, Domenico Gerardo
2017-01-01
Cranioplasty represents a challenge in neurosurgery. Its goal is not only plastic reconstruction of the skull but also restoration and preservation of cranial function, improvement of cerebral hemodynamics, and mechanical protection of the neural structures. The ideal material for reconstructive procedures and the surgical timing are still controversial. Many alloplastic materials are available for performing cranioplasty, and among these, titanium still represents a widely proven and accepted choice. The aim of our study was to present our preliminary experience with a "custom-made" cranioplasty, using electron beam melting (EBM) technology, in a series of ten patients. EBM is a new sintering method for shaping titanium powder directly into three-dimensional (3D) implants. To the best of our knowledge this is the first report of a skull reconstruction performed by this technique. At 1-year follow-up, no postoperative complications had been observed and good clinical and esthetic outcomes were achieved. The higher costs compared with other types of titanium mesh, the longer production process, and the greater expertise needed for this technique are offset by the achievement of the most complex skull reconstructions with a shorter operative time.
Fractal dimension and nonlinear dynamical processes
NASA Astrophysics Data System (ADS)
McCarty, Robert C.; Lindley, John P.
1993-11-01
Mandelbrot, Falconer and others have demonstrated the existence of dimensionally invariant geometrical properties of non-linear dynamical processes known as fractals. Barnsley defines fractal geometry as an extension of classical geometry. Such an extension, however, is not mathematically trivial. Of specific interest to those engaged in signal processing is the potential use of fractal geometry to facilitate the analysis of non-linear signal processes, often referred to as non-linear time series. Fractal geometry has been used in the modeling of non-linear time series represented by radar signals in the presence of ground clutter, or of interference generated by spatially distributed reflections around the target of a radar system. It was recognized by Mandelbrot that the fractal geometries of man-made objects have different dimensions than the geometries of the familiar objects that abound in nature, such as leaves, clouds, ferns and trees. The invariant dimensional property of non-linear processes suggests that, in the case of acoustic signals (active or passive) generated within a dispersive medium such as the ocean environment, there exists much rich structure that will aid in the detection and classification of various objects, man-made or natural, within the medium.
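Box counting is the standard way to estimate the fractal dimension the abstract appeals to; a sketch on the middle-third Cantor set, a classic fractal whose dimension is known in closed form (log 2 / log 3 ≈ 0.631):

```python
import numpy as np
from itertools import product

def box_count_dimension(points, eps_list):
    """Box-counting estimate of the fractal dimension of a 1-D point set."""
    counts = []
    for eps in eps_list:
        # tiny offset guards against floating-point under-rounding at box edges
        counts.append(len(set(np.floor(points / eps + 1e-9).astype(np.int64))))
    logs = np.log(1.0 / np.array(eps_list))
    # slope of log N(eps) vs log(1/eps) estimates the dimension
    return float(np.polyfit(logs, np.log(counts), 1)[0])

# Depth-12 approximation of the middle-third Cantor set
depth = 12
digits = np.array(list(product([0, 2], repeat=depth)), dtype=float)
cantor = digits @ (3.0 ** -np.arange(1, depth + 1))      # 2**12 points in [0, 1]

d = box_count_dimension(cantor, [3.0 ** -j for j in range(2, 8)])
```

For a sampled signal, the same count-versus-scale slope applied to the graph or an embedding of the series gives the dimension estimate used to distinguish man-made from natural structure.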
Foreman, Brady Z; Straub, Kyle M
2017-09-01
Terrestrial paleoclimate records rely on proxies hosted in alluvial strata whose beds are deposited by unsteady and nonlinear geomorphic processes. It is broadly assumed that this renders the resultant time series of terrestrial paleoclimatic variability noisy and incomplete. We evaluate this assumption using a model of oscillating climate and the precise topographic evolution of an experimental alluvial system. We find that geomorphic stochasticity can create aliasing in the time series and spurious climate signals, but these issues are eliminated when the period of climate oscillation is longer than a key time scale of internal dynamics in the geomorphic system. This emergent autogenic geomorphic behavior imparts regularity to deposition and represents a natural discretization interval of the continuous climate signal. We propose that this time scale in nature could be in excess of 10^4 years but would still allow assessments of the rates of climate change at resolutions finer than the existing age model techniques in isolation.
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems.
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and the auto-correlation of volatilities for the fast and medium modes exhibit behaviors similar to those of the full time series, i.e., these characteristics are rather robust across time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled at intervals of a couple of days, and by the medium mode when returns are sampled at dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode.
NASA Astrophysics Data System (ADS)
Weber, Juliane; Zachow, Christopher; Witthaut, Dirk
2018-03-01
Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
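The additive binary Markov chain idea can be made concrete in a few lines: the probability of the high-generation state is the base rate plus weighted deviations of recent states from that rate. A minimal sketch, assuming a toy geometrically decaying memory function rather than one derived from an empirical autocorrelation function as in the paper (the function name is ours):

```python
import random

def simulate_additive_binary_markov(n_steps, p_bar, memory, seed=0):
    # Additive binary Markov chain: P(x_t = 1 | past) is the base rate
    # p_bar plus weighted deviations of the last len(memory) states.
    rng = random.Random(seed)
    x = [1 if rng.random() < p_bar else 0 for _ in range(len(memory))]
    for _ in range(n_steps):
        p = p_bar + sum(w * (x[-k] - p_bar)
                        for k, w in enumerate(memory, start=1))
        p = min(max(p, 0.0), 1.0)  # clip to a valid probability
        x.append(1 if rng.random() < p else 0)
    return x

# toy memory function: geometrically decaying positive weights give
# persistent runs of high/low wind states (weights sum to < 1)
memory = [0.4 * 0.5 ** k for k in range(10)]
series = simulate_additive_binary_markov(5000, p_bar=0.3, memory=memory)
frac_high = sum(series) / len(series)
```

Positive memory weights reproduce the persistence (long resting times) of wind states while leaving the long-run fraction of high-wind periods near the base rate.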
New method for solving inductive electric fields in the non-uniformly conducting ionosphere
NASA Astrophysics Data System (ADS)
Vanhamäki, H.; Amm, O.; Viljanen, A.
2006-10-01
We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances, serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing the curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.
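The curl-free/divergence-free split that the CECS basis provides can be illustrated with a simpler spectral projection on a periodic grid. This is an FFT-based Helmholtz decomposition, not the paper's CECS representation, and the function name is ours; it only demonstrates the same decomposition of a 2-D vector field:

```python
import numpy as np

def helmholtz_decompose(vx, vy):
    # Split a periodic 2-D vector field into curl-free and divergence-free
    # parts by projecting each Fourier mode onto its wave vector direction.
    # (The projection k k^T/|k|^2 is scale-invariant, so unnormalized
    # fftfreq values suffice on a square grid with equal spacing.)
    ny, nx = vx.shape
    KX, KY = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny))
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0  # avoid 0/0; the mean mode carries no divergence
    Vx, Vy = np.fft.fft2(vx), np.fft.fft2(vy)
    proj = (KX * Vx + KY * Vy) / k2   # component along the wave vector
    cf_x = np.real(np.fft.ifft2(KX * proj))
    cf_y = np.real(np.fft.ifft2(KY * proj))
    return (cf_x, cf_y), (vx - cf_x, vy - cf_y)

# gradient field (curl-free by construction): v = grad(sin x * cos y)
n = 64
s = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(s, s)
vx, vy = np.cos(X) * np.cos(Y), -np.sin(X) * np.sin(Y)
(cf_x, cf_y), (df_x, df_y) = helmholtz_decompose(vx, vy)
```

For a pure gradient field the divergence-free part vanishes to machine precision, which is the sanity check one would also apply to a CECS expansion.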
Foreman, Brady Z.; Straub, Kyle M.
2017-01-01
Terrestrial paleoclimate records rely on proxies hosted in alluvial strata whose beds are deposited by unsteady and nonlinear geomorphic processes. It is broadly assumed that this renders the resultant time series of terrestrial paleoclimatic variability noisy and incomplete. We evaluate this assumption using a model of oscillating climate and the precise topographic evolution of an experimental alluvial system. We find that geomorphic stochasticity can create aliasing in the time series and spurious climate signals, but these issues are eliminated when the period of climate oscillation is longer than a key time scale of internal dynamics in the geomorphic system. This emergent autogenic geomorphic behavior imparts regularity to deposition and represents a natural discretization interval of the continuous climate signal. We propose that this time scale in nature could be in excess of 10^4 years but would still allow assessments of the rates of climate change at resolutions finer than the existing age model techniques in isolation. PMID:28924607
NASA Astrophysics Data System (ADS)
Rivera, V. A.; Amaya, L. F.
2017-12-01
In 2016, graduate students from Northwestern University's Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) initiated the Science Sonification and Composition Project, which pairs scientists with student composers to create original music inspired by and utilizing the products of scientific research. In 2017, these pieces were performed at Northwestern for a mixed audience of scientists, musicians, and community members. Sonification of data, or the representation of data as sound, is an increasingly popular method of examining data in the geosciences, especially in astrophysics, where sonification of gravitational waves has recently made major news. Numerical time-series data are often excellent candidates for sonification, as the data can be modified by simple algorithmic means to convert numerical values which represent physical measurements to numerical values representing musical "variables" like volume, pitch, or timbre. Our collaboration, a result of the CIERA initiative, explores methods of sonification that do not involve a simple conversion of data to sound, instead attempting to create sound from data by analog methods. The piece uses both time-series groundwater elevation data and physical soil samples from the locations where the water table measurements were collected. The field site from which both data and samples were collected is Gensburg Markham Prairie, an urban nature preserve on Chicago's south side which hosts a long-term study on the collateral benefits of urban greenspace for stormwater management and storage. Our aim was to combine physical, living elements with technology to mirror the research, where we examine flows and cycles in nature by "taking the pulse" of the landscape using sensing networks. Soil samples were placed in metal vessels outfitted with contact microphones and manipulated by hand and with water, using time-series data as a guide, much like sheet music. 
This was repeated for samples and sensors at different physical locations throughout the prairie, with each recording comprising an audible representation of soil properties and water dynamics at each location. The resulting sound was processed to create electronic music, which was paired with live instrumental performance.
2010-01-01
Background Malaria still remains a public health problem in some districts of Bhutan despite a marked reduction of cases in the last few years. To strengthen the country's prevention and control measures, this study was carried out to develop forecasting and prediction models of malaria incidence in the endemic districts of Bhutan using time series and ARIMAX. Methods This study was carried out retrospectively using the monthly reported malaria cases from the health centres to the Vector-borne Disease Control Programme (VDCP) and the meteorological data from the Meteorological Unit, Department of Energy, Ministry of Economic Affairs. Time series analysis was performed on monthly malaria cases, from 1994 to 2008, in seven malaria endemic districts. Time series models derived from a multiplicative seasonal autoregressive integrated moving average (ARIMA) were deployed to identify the best model using data from 1994 to 2006. A best-fit model was selected for each individual district and for the overall endemic area, and the monthly cases from January to December of 2009 and 2010 were forecasted. In developing the prediction model, the monthly reported malaria cases and the meteorological factors from 1996 to 2008 of the seven districts were analysed. The method of ARIMAX modelling was employed to determine predictors of malaria of the subsequent month. Results It was found that the ARIMA (p, d, q) (P, D, Q)s model (p and P representing the autoregressive and seasonal autoregressive parameters; d and D the non-seasonal and seasonal differencing; q and Q the moving average and seasonal moving average parameters; and s the length of the seasonal period) for the overall endemic districts was (2,1,1)(0,1,1)_12; the modelling data from each district revealed two most common ARIMA models, (2,1,1)(0,1,1)_12 and (1,1,1)(0,1,1)_12.
The forecasted monthly malaria cases from January to December varied from 15 to 82 cases in 2009 and 67 to 149 cases in 2010, based on a 2009 population of 285,375 and an expected 2010 population of 289,085. The ARIMAX model of monthly cases and climatic factors showed considerable variations among the different districts. In general, the mean maximum temperature lagged at one month was a strong positive predictor of increased malaria cases in four districts. The monthly number of cases of the previous month was also a significant predictor in one district, whereas no variable could predict malaria cases in two districts. Conclusions The ARIMA models of time-series analysis were useful in forecasting the number of cases in the endemic areas of Bhutan. There was no consistency in the predictors of malaria cases when using the ARIMAX model with selected lag times and climatic predictors. The ARIMA forecasting models could be employed for planning and managing the malaria prevention and control programme in Bhutan. PMID:20813066
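The seasonal-differencing logic behind such models can be sketched directly. The toy below applies seasonal (lag-12) and first differencing to a synthetic monthly series, fits an AR(2) to the differenced series by least squares, and inverts the differencing for a one-month-ahead forecast. This is a simplification: the (1,1,1)(0,1,1)_12-type models above also carry moving-average terms and were fitted with proper ARIMA software, and all data here are synthetic.

```python
import numpy as np

def difference(x, lag):
    return x[lag:] - x[:-lag]

def fit_ar(x, p):
    # least-squares AR(p): x_t ~ a_1 x_{t-1} + ... + a_p x_{t-p}
    X = np.column_stack([x[p - i:len(x) - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

rng = np.random.default_rng(0)
months = np.arange(240)
# synthetic monthly case counts: trend + annual seasonality + noise
cases = 50 + 0.1 * months + 10 * np.sin(2 * np.pi * months / 12) \
        + rng.normal(0, 1, months.size)

w = difference(difference(cases, 12), 1)   # seasonal, then first difference
a = fit_ar(w, 2)
w_next = a[0] * w[-1] + a[1] * w[-2]       # one-step AR forecast
# invert the differencing: x_t = w_t + x_{t-1} + x_{t-12} - x_{t-13}
forecast = w_next + cases[-1] + cases[-12] - cases[-13]
```

The double differencing removes both the trend and the annual cycle, which is exactly what the d = D = 1, s = 12 settings in the abstract encode.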
1999-11-01
[Fragmentary extracted text from RTO Lecture Series 216, "Application of Mathematical Signal Processing Techniques to Mission Systems" (Neuilly-sur-Seine, France): the first term represents the linear time-invariant (LTI) response of the combined analysis/synthesis system, while the second represents the aliasing introduced into the reconstruction; such filter banks can be used to implement voice scrambling systems based on time-frequency permutation.]
Evidence for a fundamental and pervasive shift away from nature-based recreation
Pergams, Oliver R. W.; Zaradic, Patricia A.
2008-01-01
After 50 years of steady increase, per capita visits to U.S. National Parks have declined since 1987. To evaluate whether we are seeing a fundamental shift away from people's interest in nature, we tested for similar longitudinal declines in 16 time series representing four classes of nature participation variables: (i) visitation to various types of public lands in the U.S. and National Parks in Japan and Spain, (ii) number of various types of U.S. game licenses issued, (iii) indicators of time spent camping, and (iv) indicators of time spent backpacking or hiking. The four variables with the greatest per capita participation were visits to Japanese National Parks, U.S. State Parks, U.S. National Parks, and U.S. National Forests, with an average individual participating 0.74–2.75 times per year. All four time series are in downtrends, with linear regressions showing ongoing losses of −1.0% to −3.1% per year. The longest and most complete time series tested suggest that typical declines in per capita nature recreation began between 1981 and 1991, are proceeding at rates of −1.0% to −1.3% per year, and total to date −18% to −25%. Spearman correlation analyses were performed on untransformed time series and on transformed percentage year-to-year changes. Results showed very highly significant correlations between many of the highest per capita participation variables in both untransformed and in difference models, further corroborating the general downtrend in nature recreation. In conclusion, all major lines of evidence point to an ongoing and fundamental shift away from nature-based recreation. PMID:18250312
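The per-capita trend and rank-correlation analyses described above are straightforward to reproduce on a synthetic series. A minimal sketch (the function names and toy data are ours; Spearman's rho is computed as the Pearson correlation of the ranks, which is valid here because the series have no ties):

```python
import numpy as np

def annual_trend_pct(series):
    # OLS slope of a per-capita series, as a percentage of its mean per year
    slope = np.polyfit(np.arange(len(series)), series, 1)[0]
    return 100.0 * slope / series.mean()

def spearman(x, y):
    # Spearman rank correlation: Pearson correlation of the ranks
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

# toy series: 2.0 visits per capita declining about 2% per year for 25 years
visits = 2.0 * 0.98 ** np.arange(25)
trend = annual_trend_pct(visits)           # roughly -2% per year
rho = spearman(visits[:-1], visits[1:])    # lag-1 rank correlation
```

Applying `spearman` to percent year-to-year changes of two participation series, rather than to one lagged series, mirrors the "difference model" correlations reported in the abstract.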
NASA Technical Reports Server (NTRS)
Gao, Feng; DeColstoun, Eric Brown; Ma, Ronghua; Weng, Qihao; Masek, Jeffrey G.; Chen, Jin; Pan, Yaozhong; Song, Conghe
2012-01-01
Cities have been expanding rapidly worldwide, especially over the past few decades. Mapping the dynamic expansion of impervious surface in both space and time is essential for an improved understanding of the urbanization process, land-cover and land-use change, and their impacts on the environment. Landsat and other medium-resolution satellites provide the necessary spatial details and temporal frequency for mapping impervious surface expansion over the past four decades. Since the US Geological Survey opened the historical record of the Landsat image archive for free access in 2008, the decades-old bottleneck of data limitation is gone. Remote-sensing scientists are now rich with data, and the challenge is how to make best use of this precious resource. In this article, we develop an efficient algorithm to map the continuous expansion of impervious surface using a time series of four decades of medium-resolution satellite images. The algorithm is based on a supervised classification of the time-series image stack using a decision tree. Each impervious class represents urbanization starting in a different image. The algorithm also allows us to remove inconsistent training samples, because impervious expansion is not reversible during the study period. The objective is to extract a time series of complete and consistent impervious surface maps from a corresponding time series of images collected from multiple sensors, and with a minimal amount of image preprocessing effort. The approach was tested in the lower Yangtze River Delta region, one of the fastest urban growth areas in China. Results from nearly four decades of medium-resolution satellite data from the Landsat Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper plus (ETM+) and China-Brazil Earth Resources Satellite (CBERS) show an urbanization process that is consistent with economic development plans and policies.
The time-series impervious spatial extent maps derived from this study agree well with an existing urban extent polygon data set that was previously developed independently. The overall mapping accuracy was estimated at about 92.5% with 3% commission error and 12% omission error for the impervious type from all images regardless of image quality and initial spatial resolution.
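The irreversibility constraint used to remove inconsistent training samples can be expressed very compactly: a pixel's label sequence over the years must be non-decreasing (once impervious, always impervious). A small sketch with hypothetical pixel data (names and values are ours):

```python
def filter_inconsistent(labels):
    # Keep only training pixels whose yearly 0/1 impervious labels never
    # revert: a valid sequence is non-decreasing, e.g. 0 0 1 1 1.
    def consistent(seq):
        return all(a <= b for a, b in zip(seq, seq[1:]))
    return {pid: seq for pid, seq in labels.items() if consistent(seq)}

samples = {
    "a": [0, 0, 1, 1, 1],   # urbanized at index 2: valid
    "b": [0, 1, 0, 1, 1],   # reverts to non-impervious: dropped
    "c": [1, 1, 1, 1, 1],   # impervious throughout: valid
}
kept = filter_inconsistent(samples)
# the year of urbanization is simply the first 1 in a valid sequence
urban_year = {pid: seq.index(1) if 1 in seq else None
              for pid, seq in kept.items()}
```

Mapping each valid pixel to the index of its first impervious label is what turns the classified stack into the "urbanization starting in a different image" classes described above.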
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Ranjan; Chelmis, Charalampos; Aman, Saima
The advent of smart meters and advanced communication infrastructures catalyzes numerous smart grid applications such as dynamic demand response, and paves the way to solve challenging research problems in sustainable energy consumption. The space of solution possibilities is restricted primarily by the huge amount of generated data requiring considerable computational resources and efficient algorithms. To overcome this Big Data challenge, data clustering techniques have been proposed. Current approaches however do not scale in the face of the “increasing dimensionality” problem, where a cluster point is represented by the entire customer consumption time series. To overcome this, we first rethink the way cluster points are created and designed, and then design an efficient online clustering technique for demand response (DR) in order to analyze high volume, high dimensional energy consumption time series data at scale, and on the fly. Our online algorithm is randomized in nature, and provides optimal performance guarantees in a computationally efficient manner. Unlike prior work we (i) study the consumption properties of the whole population simultaneously rather than developing individual models for each customer separately, claiming it to be a ‘killer’ approach that breaks the “curse of dimensionality” in online time series clustering, and (ii) provide tight performance guarantees in theory to validate our approach. Our insights are driven by the field of sociology, where collective behavior often emerges as the result of individual patterns and lifestyles.
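One standard way to cluster consumption vectors on the fly, in the spirit (though not the specifics) of the randomized algorithm described, is single-pass online k-means, where each arriving vector nudges its nearest centroid. A minimal sketch with toy 2-D "consumption features" of our own invention:

```python
def online_kmeans(stream, k):
    # Single-pass online k-means: the first k points seed the centroids;
    # each later point moves its nearest centroid with a 1/count rate.
    centroids, counts = [], []
    for x in stream:
        if len(centroids) < k:
            centroids.append(list(x))
            counts.append(1)
            continue
        _, j = min((sum((a - c) ** 2 for a, c in zip(x, cen)), i)
                   for i, cen in enumerate(centroids))
        counts[j] += 1
        eta = 1.0 / counts[j]
        centroids[j] = [c + eta * (a - c) for a, c in zip(x, centroids[j])]
    return centroids

# two well-separated groups of 2-D feature vectors, interleaved
stream = []
for i in range(50):
    stream.append((0.0 + 0.01 * i, 0.0))
    stream.append((10.0 + 0.01 * i, 10.0))
cents = sorted(online_kmeans(stream, 2))
```

Because each point is touched once and only the k centroids are stored, memory stays constant regardless of how many meters stream data in.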
NASA Astrophysics Data System (ADS)
Wang, Y.; Lin, L.; Chen, H.
2015-02-01
Natural disasters have enormous impacts on human society, especially on the development of the economy. To support decision making in mitigation of and adaptation to natural disasters, assessment of economic impacts is fundamental and of great significance. Based on a review of the literature on economic impact evaluation, this paper proposes a new assessment model of the economic impact of drought, using the sugar industry in China as a case study, which focuses on the generation and transfer of economic impacts along a simple value chain involving only sugarcane growers and a sugar producing company. A profit loss rate perspective is applied to scale economic impact, with a model based on cost-and-benefit analysis. Using a "with-and-without" analysis, profit loss is defined as the difference in profits between disaster-hit and disaster-free scenarios. To calculate profit, a time series analysis of sugar prices is applied. With the support of a linear regression model, an endogenous trend in sugar price is identified, and the time series of sugar price "without" disaster is obtained using an autoregressive error model to separate the impact of disasters from the internal trend in sugar price. Unlike the settings in other assessment models, representative sugar prices, which represent the value level in the disaster-free and disaster-hit conditions, are integrated from a long time series that covers the whole period of drought. As a result, it is found that under a rigid farming contract, sugarcane growers suffer far more than the sugar company when impacted by severe drought, which may prompt reflection on economic equality among the various economic actors affected by natural disasters.
NASA Astrophysics Data System (ADS)
Yan, Ying; Zhang, Shen; Tang, Jinjun; Wang, Xiaofei
2017-07-01
Discovering dynamic characteristics in traffic flow is a significant step in designing effective traffic management and control strategies for relieving congestion in urban cities. A new method based on complex network theory is proposed to study multivariate traffic flow time series. The data were collected from loop detectors on a freeway over one year. In order to construct a complex network from the original traffic flow, a weighted Frobenius norm is adopted to estimate the similarity between multivariate time series, and Principal Component Analysis is implemented to determine the weights. We discuss how to select the optimal critical threshold for networks at different hours in terms of the cumulative probability distribution of degree. Furthermore, two statistical properties of the networks, normalized network structure entropy and cumulative probability of degree, are utilized to explore hourly variation in traffic flow. The results demonstrate that these two statistical quantities exhibit patterns similar to traffic flow parameters, with morning and evening peak hours. Accordingly, we detect three traffic states, trough, peak, and transitional hours, according to the correlation between the two aforementioned properties. The resulting state classification captures hourly fluctuation in traffic flow, as shown by analyzing annual average hourly values of traffic volume, occupancy, and speed in the corresponding hours.
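The pipeline of turning a set of time series into a thresholded network and summarizing it with a normalized structure entropy can be sketched as follows. Plain Pearson correlation stands in here for the weighted-Frobenius-norm similarity of the paper, and the entropy is the usual degree-based form; both are simplifications, and the data are synthetic:

```python
import numpy as np

def similarity_network(series, threshold):
    # node = one time series; edge if |correlation| exceeds the threshold
    corr = np.corrcoef(series)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def structure_entropy(adj):
    # normalized network structure entropy of the degree sequence:
    # -sum p_i ln p_i / ln N with p_i = k_i / sum k
    k = adj.sum(axis=1).astype(float)
    p = k / k.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(adj.shape[0]))

rng = np.random.default_rng(1)
t = np.arange(200)
base = np.sin(t / 5.0)
# three correlated series (shared signal + noise) and three independent ones
series = np.vstack([base + 0.1 * rng.normal(size=t.size) for _ in range(3)]
                   + [rng.normal(size=t.size) for _ in range(3)])
adj = similarity_network(series, threshold=0.8)
H = structure_entropy(adj)
```

A low entropy signals that connectivity is concentrated on a few strongly co-varying detectors, which is the kind of hourly signature the abstract exploits.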
Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen
2017-09-25
In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.
A Spatiotemporal Prediction Framework for Air Pollution Based on Deep RNN
NASA Astrophysics Data System (ADS)
Fan, J.; Li, Q.; Hou, J.; Feng, X.; Karimian, H.; Lin, S.
2017-10-01
Time series data in practical applications always contain missing values due to sensor malfunction, network failure, outliers, etc. In order to handle missing values in time series, as well as the lack of consideration of temporal properties in machine learning models, we propose a spatiotemporal prediction framework based on missing value processing algorithms and a deep recurrent neural network (DRNN). By using a missing tag and missing interval to represent time series patterns, we implement three different missing value fixing algorithms, which are further incorporated into a deep neural network that consists of LSTM (Long Short-term Memory) layers and fully connected layers. Real-world air quality and meteorological datasets (Jingjinji area, China) are used for model training and testing. Deep feed forward neural networks (DFNN) and gradient boosting decision trees (GBDT) are trained as baseline models against the proposed DRNN. Performances of the three missing value fixing algorithms, as well as different machine learning models, are evaluated and analysed. Experiments show that the proposed DRNN framework outperforms both DFNN and GBDT, therefore validating the capacity of the proposed framework. Our results also provide useful insights for better understanding of different strategies that handle missing values.
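The missing tag / missing interval representation is simple to make concrete. The sketch below computes both, plus one plausible fixing rule that decays the last observation toward the global mean as the gap grows; this illustrates the general idea, not the paper's specific three algorithms, and the function names are ours:

```python
def missing_tags(values):
    # missing tag m_t (1 = observed, 0 = missing) and missing interval
    # d_t = time elapsed since the last observation
    m, d, last = [], [], None
    for t, v in enumerate(values):
        m.append(0 if v is None else 1)
        d.append(0 if last is None else t - last)
        if v is not None:
            last = t
    return m, d

def decayed_fill(values, gamma=0.5):
    # fill gaps by decaying the last observation toward the global mean:
    # the longer the gap, the closer the filled value is to the mean
    obs = [v for v in values if v is not None]
    mean = sum(obs) / len(obs)
    out, last, gap = [], mean, 0
    for v in values:
        if v is None:
            gap += 1
            w = gamma ** gap
            out.append(w * last + (1 - w) * mean)
        else:
            out.append(v)
            last, gap = v, 0
    return out

series = [10.0, None, None, 14.0, None, 18.0]
m, d = missing_tags(series)
filled = decayed_fill(series)
```

Feeding `m` and `d` alongside the filled values lets a recurrent model learn how much to trust an imputed reading, which is the role the missing tag and interval play in the framework above.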
Data-driven discovery of partial differential equations.
Rudy, Samuel H; Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan
2017-04-01
We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg-de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable.
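The core of the method, sparse regression over a library of candidate terms, fits in a short script. The toy below recovers the heat equation u_t = k u_xx from a two-mode analytic solution via sequential thresholded least squares; this is a simplified stand-in for the paper's procedure, and the threshold and grid sizes are our choices:

```python
import numpy as np

k_true = 0.5
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
t = np.linspace(0, 1, 101)
# analytic solution of u_t = k u_xx with two Fourier modes (two modes keep
# u and u_xx linearly independent in the candidate library)
U = (np.exp(-k_true * t)[:, None] * np.sin(x)[None, :]
     + np.exp(-4 * k_true * t)[:, None] * np.sin(2 * x)[None, :])

dx, dt = x[1] - x[0], t[1] - t[0]
Ut = np.gradient(U, dt, axis=0)
Ux = np.gradient(U, dx, axis=1)
Uxx = np.gradient(Ux, dx, axis=1)
core = (slice(2, -2), slice(2, -2))   # drop less accurate boundary stencils
u, u_t = U[core].ravel(), Ut[core].ravel()
u_x, u_xx = Ux[core].ravel(), Uxx[core].ravel()
library = np.column_stack([u, u_x, u_xx, u ** 2])   # candidate terms

# sequential thresholded least squares: fit, zero small terms, refit
coef, *_ = np.linalg.lstsq(library, u_t, rcond=None)
for _ in range(5):
    small = np.abs(coef) < 0.05
    coef[small] = 0.0
    if (~small).any():
        coef[~small], *_ = np.linalg.lstsq(library[:, ~small], u_t,
                                           rcond=None)
```

With a single Fourier mode, u and u_xx would be exactly collinear and the fit nonunique, which is precisely the ambiguity the paper resolves by using multiple time series with different initial data.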
Feng, Zhujing; Schilling, Keith E; Chan, Kung-Sik
2013-06-01
Nitrate-nitrogen concentrations in rivers represent challenges for water supplies that use surface water sources. Nitrate concentrations are often modeled using time-series approaches, but previous efforts have typically relied on monthly time steps. In this study, we developed a dynamic regression model of daily nitrate concentrations in the Raccoon River, Iowa, that incorporated contemporaneous and lagged values of precipitation and discharge occurring at several locations around the basin. Results suggested that 95% of the variation in daily nitrate concentrations measured at the outlet of a large agricultural watershed can be explained by time-series patterns of precipitation and discharge occurring in the basin. Discharge was found to be a more important regression variable than precipitation in our model, but both regression parameters were strongly correlated with nitrate concentrations. The time-series model was consistent with known patterns of nitrate behavior in the watershed, successfully identifying contemporaneous dilution mechanisms from higher relief and urban areas of the basin while incorporating the delayed contribution of nitrate from tile-drained regions in a lagged response. The first difference of the model errors was modeled as an AR(16) process, suggesting that daily nitrate concentration changes remain temporally correlated for more than 2 weeks, although the temporal correlation was stronger in the first few days before tapering off. Consequently, daily nitrate concentrations are non-stationary, i.e., exhibit strong memory. Using time-series models to reliably forecast daily nitrate concentrations in a river based on patterns of precipitation and discharge occurring in its basin may be of great interest to water suppliers.
Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy
2014-11-01
In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Having intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) at the input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). Principally, by representing these velocities as Zernike polynomial series, we reduce the tomography problem to an ill-posed problem of finding the coefficients of the series, relying on the acquired ultrasonic measurements. Thereupon, this problem is treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity by employing SVD-based filtering in selecting the Zernike polynomials to be included in the series. The first approach, Tikhonov regularization without filtering, turns out to be the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show the relative residual norm and the error in flow estimation to be, respectively, ~0.3% and ~1.6% for the less turbulent flow and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by proximate matching of the derived tomograms with a flow physical model.
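The Tikhonov step can be written down directly: minimize ||Ax - b||^2 + λ||x||^2 via the regularized normal equations (A^T A + λI) x = A^T b. A toy ill-posed setup with more unknown series coefficients than measurements; the random matrix here merely stands in for the geometry-dependent projection from Zernike coefficients to path measurements:

```python
import numpy as np

def tikhonov(A, b, lam):
    # Tikhonov-regularized least squares: solve the normal equations
    # (A^T A + lam I) x = A^T b, which are well-posed for lam > 0
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# ill-posed toy problem: 8 path measurements, 20 unknown coefficients
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -0.5, 0.25]      # a sparse 'velocity field'
b = A @ x_true
x_hat = tikhonov(A, b, lam=1e-3)
residual = float(np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))
```

With a small λ the regularized solution fits the measurements almost exactly while keeping the coefficient norm bounded, which is what makes the underdetermined inversion stable.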
van Bömmel, Alena; Song, Song; Majer, Piotr; Mohr, Peter N C; Heekeren, Hauke R; Härdle, Wolfgang K
2014-07-01
Decision making usually involves uncertainty and risk. Understanding which parts of the human brain are activated during decisions under risk and which neural processes underlie (risky) investment decisions are important goals in neuroeconomics. Here, we analyze functional magnetic resonance imaging (fMRI) data on 17 subjects who were exposed to an investment decision task from Mohr, Biele, Krugel, Li, and Heekeren (in NeuroImage 49, 2556-2563, 2010b). We obtain a time series of three-dimensional images of the blood-oxygen-level dependent (BOLD) fMRI signals. We apply a panel version of the dynamic semiparametric factor model (DSFM) presented in Park, Mammen, Härdle, and Borak (in Journal of the American Statistical Association 104(485), 284-298, 2009) and identify task-related activations in space and dynamics in time. With the panel DSFM (PDSFM) we can capture the dynamic behavior of the specific brain regions common for all subjects and represent the high-dimensional time-series data in easily interpretable low-dimensional dynamic factors without large loss of variability. Further, we classify the risk attitudes of all subjects based on the estimated low-dimensional time series. Our classification analysis successfully confirms the estimated risk attitudes derived directly from subjects' decision behavior.
REVISITING EVIDENCE OF CHAOS IN X-RAY LIGHT CURVES: THE CASE OF GRS 1915+105
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mannattil, Manu; Gupta, Himanshu; Chakraborty, Sagar, E-mail: mmanu@iitk.ac.in, E-mail: hiugupta@iitk.ac.in, E-mail: sagarc@iitk.ac.in
2016-12-20
Nonlinear time series analysis has been widely used to search for signatures of low-dimensional chaos in light curves emanating from astrophysical bodies. A particularly popular example is the microquasar GRS 1915+105, whose irregular but systematic X-ray variability has been well studied using data acquired by the Rossi X-ray Timing Explorer. With a view to building simpler models of X-ray variability, attempts have been made to classify the light curves of GRS 1915+105 as chaotic or stochastic. Contrary to some of the earlier suggestions, after careful analysis, we find no evidence for chaos or determinism in any of the GRS 1915+105 classes. The dearth of long and stationary data sets representing all the different variability classes of GRS 1915+105 makes it a poor candidate for analysis using nonlinear time series techniques. We conclude that either very exhaustive data analysis with sufficiently long and stationary light curves should be performed, keeping all the pitfalls of nonlinear time series analysis in mind, or alternative schemes of classifying the light curves should be adopted. The generic limitations of the techniques that we point out in the context of GRS 1915+105 affect all similar investigations of light curves from other astrophysical sources.
NASA Astrophysics Data System (ADS)
Kaplan, D.; Muñoz-Carpena, R.
2011-02-01
Restoration of degraded floodplain forests requires a robust understanding of surface water, groundwater, and vadose zone hydrology. Soil moisture is of particular importance for seed germination and seedling survival, but is difficult to monitor and often overlooked in wetland restoration studies. This research hypothesizes that the complex effects of surface water and shallow groundwater on the soil moisture dynamics of floodplain wetlands are spatially complementary. To test this hypothesis, 31 long-term (4-year) hydrological time series were collected in the floodplain of the Loxahatchee River (Florida, USA), where watershed modifications have led to reduced freshwater flow, altered hydroperiod and salinity, and a degraded ecosystem. Dynamic factor analysis (DFA), a time series dimension reduction technique, was applied to model temporal and spatial variation in 12 soil moisture time series as linear combinations of common trends (representing shared, but unexplained, variability) and explanatory variables (selected from 19 additional candidate hydrological time series). The resulting dynamic factor models yielded good predictions of observed soil moisture series (overall coefficient of efficiency = 0.90) by identifying surface water elevation, groundwater elevation, and net recharge (cumulative rainfall-cumulative evapotranspiration) as important explanatory variables. Strong and complementary linear relationships were found between floodplain elevation and surface water effects (slope = 0.72, R2 = 0.86, p < 0.001), and between elevation and groundwater effects (slope = -0.71, R2 = 0.71, p = 0.001), while the effect of net recharge was homogenous across the experimental transect (slope = 0.03, R2 = 0.05, p = 0.242).
This study provides a quantitative insight into the spatial structure of groundwater and surface water effects on soil moisture that will be useful for refining monitoring plans and developing ecosystem restoration and management scenarios in degraded coastal floodplains.
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data show that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
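The reduced-rank EOF idea, and the matched-subspace statistic built on it, can be sketched on synthetic pulse arrivals. Everything below is an illustration with invented data, not LOAPEX processing: the "arrivals" are Gaussian pulses whose travel time wanders from reception to reception, the EOFs come from an SVD of the demeaned ensemble, and the detector statistic is the fraction of a reception's energy inside the leading-EOF subspace:

```python
import numpy as np

rng = np.random.default_rng(2)
n_rx, n_t = 40, 128
t = np.arange(n_t)
# pulse whose arrival time wanders (travel time wander) plus noise
shifts = rng.normal(0, 2, n_rx)
arrivals = np.array([np.exp(-0.5 * ((t - 60 - s) / 4.0) ** 2)
                     for s in shifts])
arrivals += 0.02 * rng.normal(size=arrivals.shape)

# EOF analysis: SVD of the demeaned ensemble; few EOFs carry most energy
mean = arrivals.mean(axis=0)
U, s, Vt = np.linalg.svd(arrivals - mean, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
rank = int(np.searchsorted(energy, 0.9)) + 1   # EOFs explaining 90%

# matched-subspace statistic: energy fraction of one reception inside
# the span of the leading EOFs (rows of Vt are orthonormal)
E = Vt[:rank]
y = arrivals[0] - mean
msd_stat = float(np.linalg.norm(E @ y) ** 2 / np.linalg.norm(y) ** 2)
```

That a handful of EOFs capture 90% of the ensemble variability is the reduced-rank property the abstract reports; thresholding `msd_stat` is the detection step.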
Centrality in earthquake multiplex networks
NASA Astrophysics Data System (ADS)
Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.
2018-06-01
Seismic time series have been mapped as complex networks, where a geographical region is divided into square cells that represent the nodes and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single-layer representation of earthquake networks. Our results suggest that the multiplex representation captures earthquake activity better than methods based on single-layer networks. We also verify that the regions with the highest seismological activity in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
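Eigenvector centrality, the measure used above, assigns each node a score proportional to the sum of its neighbours' scores. A single-layer toy sketch (the adjacency matrix is invented; in the multiplex setting each temporal layer would contribute its own adjacency):

```python
import numpy as np

# Toy symmetric adjacency matrix for five cells of a region; an edge
# means consecutive earthquakes occurred in the two cells.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

# Eigenvector centrality by power iteration: the centrality vector is
# the dominant eigenvector of the adjacency matrix.
c = np.ones(A.shape[0])
for _ in range(200):
    c = A @ c
    c /= np.linalg.norm(c)

most_central = int(np.argmax(c))   # node 2 is connected to all others
```

Tracking this vector layer by layer is what characterizes the evolution of the network structure over time.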
Multifractal analysis of the time series of daily means of wind speed in complex regions
NASA Astrophysics Data System (ADS)
Laib, Mohamed; Golay, Jean; Telesca, Luciano; Kanevski, Mikhail
2018-04-01
In this paper, we applied the multifractal detrended fluctuation analysis to the daily means of wind speed measured by 119 weather stations distributed over the territory of Switzerland. The analysis was focused on the inner time fluctuations of wind speed, which could be more closely linked with the local conditions of the highly varying topography of Switzerland. Our findings point to persistent behaviour of all the measured wind speed series (indicated by a Hurst exponent significantly larger than 0.5), and to a high degree of multifractality, indicating a relative dominance of the large fluctuations in the dynamics of wind speed, especially in the Swiss plateau, which lies between the Jura and Alps mountain ranges. The study represents a contribution to the understanding of the dynamical mechanisms of wind speed variability in mountainous regions.
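The q = 2 case of multifractal detrended fluctuation analysis reduces to ordinary DFA, whose fluctuation-function slope estimates the Hurst exponent. A compact sketch on synthetic white noise (an illustration of the estimator only, not the station data or the authors' full MFDFA):

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis (the q = 2 case of MFDFA):
    returns the fluctuation F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        ms = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            u = np.arange(s)
            coef = np.polyfit(u, seg, 1)       # local linear detrend
            ms.append(np.mean((seg - np.polyval(coef, u)) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)                  # white noise: H ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
H = np.polyfit(np.log(scales), np.log(F), 1)[0]   # slope ~ Hurst exponent
```

A persistent series, such as the wind speed records discussed above, would give a slope significantly larger than 0.5.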
Implicit assimilation for marine ecological models
NASA Astrophysics Data System (ADS)
Weir, B.; Miller, R.; Spitz, Y. H.
2012-12-01
We use a new data assimilation method to estimate the parameters of a marine ecological model. At a given point in the ocean, the estimated values of the parameters determine the behaviors of the modeled planktonic groups, and thus indicate which species are dominant. To begin, we assimilate in situ observations, e.g., the Bermuda Atlantic Time-series Study, the Hawaii Ocean Time-series, and Ocean Weather Station Papa. From there, we estimate the parameters at surrounding points in space based on satellite observations of ocean color. Given the variation of the estimated parameters, we divide the ocean into regions meant to represent distinct ecosystems. An important feature of the data assimilation approach is that it refines the confidence limits of the optimal Gaussian approximation to the distribution of the parameters. This enables us to determine the ecological divisions with greater accuracy.
Advanced functional network analysis in the geosciences: The pyunicorn package
NASA Astrophysics Data System (ADS)
Donges, Jonathan F.; Heitzig, Jobst; Runge, Jakob; Schultz, Hanna C. H.; Wiedermann, Marc; Zech, Alraune; Feldhoff, Jan; Rheinwalt, Aljoscha; Kutza, Hannes; Radebach, Alexander; Marwan, Norbert; Kurths, Jürgen
2013-04-01
Functional networks are a powerful tool for analyzing large geoscientific datasets such as global fields of climate time series originating from observations or model simulations. pyunicorn (pythonic unified complex network and recurrence analysis toolbox) is an open-source, fully object-oriented and easily parallelizable package written in the language Python. It allows for constructing functional networks (aka climate networks) representing the structure of statistical interrelationships in large datasets and, subsequently, investigating this structure using advanced methods of complex network theory such as measures for networks of interacting networks, node-weighted statistics or network surrogates. Additionally, pyunicorn allows one to study the complex dynamics of geoscientific systems as recorded by time series by means of recurrence networks and visibility graphs. The range of possible applications of the package is outlined drawing on several examples from climatology.
NASA Astrophysics Data System (ADS)
Godsey, S. E.; Kirchner, J. W.
2008-12-01
The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. 
Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths and convolving these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
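The synthetic-convolution setup described above can be sketched in a few lines; the exponential travel-time distribution, weekly sampling, and error level below are assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic weekly tracer input (e.g., a precipitation isotope signature).
n = 520                                    # 10 years of weekly samples
inputs = rng.standard_normal(n)

# Specified travel-time distribution: exponential, with an assumed mean
# residence time of 20 weeks.
tau = 20.0
h = np.exp(-np.arange(n) / tau)
h /= h.sum()                               # normalize to unit mass

# Stream output = convolution of the input with the travel-time
# distribution, then corrupted with measurement error.
clean = np.convolve(inputs, h)[:n]
output = clean + 0.05 * rng.standard_normal(n)

# Damping: mixing in the catchment reduces the variance of the output
# tracer signal relative to the input.
damping = output.std() / inputs.std()
```

Fitting either the time-domain convolution or the input/output spectral power ratio of such corrupted series recovers an apparent travel-time distribution that can be compared against the known exponential used to generate the data.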
Plassmann, Merle M; Tengstrand, Erik; Åberg, K Magnus; Benskin, Jonathan P
2016-06-01
Non-targeted mass spectrometry-based approaches for detecting novel xenobiotics in biological samples are hampered by the occurrence of naturally fluctuating endogenous substances, which are difficult to distinguish from environmental contaminants. Here, we investigate a data reduction strategy for datasets derived from a biological time series. The objective is to flag reoccurring peaks in the time series based on increasing peak intensities, thereby reducing peak lists to only those which may be associated with emerging bioaccumulative contaminants. As a result, compounds with increasing concentrations are flagged, while compounds displaying random, decreasing, or steady-state time trends are removed. As an initial proof of concept, we created artificial time trends by fortifying human whole blood samples with isotopically labelled standards. Different scenarios were investigated: eight model compounds had a continuously increasing trend in the last two to nine time points, and four model compounds had a trend that reached steady state after an initial increase. Each time series was investigated at three fortification levels, along with one unfortified series. Following extraction, analysis by ultra-performance liquid chromatography coupled to high-resolution mass spectrometry, and data processing, a total of 21,700 aligned peaks were obtained. Peaks displaying an increasing trend were filtered from randomly fluctuating peaks using time trend ratios and Spearman's rank correlation coefficients. The first approach was successful in flagging model compounds spiked at only two to three time points, while the latter approach resulted in all model compounds ranking in the top 11% of the peak lists. Compared to the initial peak lists, a combination of both approaches reduced the size of datasets by 80-85%. Overall, non-target time trend screening represents a promising data reduction strategy for identifying emerging bioaccumulative contaminants in biological samples. 
Graphical abstract Using time trends to filter out emerging contaminants from large peak lists.
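The Spearman-based filtering step can be sketched as follows, with invented peak intensities (a monotone trend floats to the top of the ranked list while randomly fluctuating peaks do not):

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation (the no-ties case suffices here)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(3)
t = np.arange(10)                          # ten time points in the series

# Hypothetical aligned-peak intensities: one "emerging contaminant" with
# an increasing trend among 199 randomly fluctuating endogenous peaks.
peaks = rng.standard_normal((200, t.size))
peaks[0] = np.linspace(1.0, 5.0, t.size) + 0.1 * rng.standard_normal(t.size)

# Rank every peak by the correlation of its intensity with time;
# increasing trends rise toward the top of the list.
rho = np.array([spearman(t, p) for p in peaks])
ranking = np.argsort(-rho)
top_fraction = np.where(ranking == 0)[0][0] / len(peaks)
```

In the study this ranking, combined with time trend ratios, is what shrinks a list of tens of thousands of aligned peaks down to a small candidate set.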
Optimizing Use of Water Management Systems during Changes of Hydrological Conditions
NASA Astrophysics Data System (ADS)
Výleta, Roman; Škrinár, Andrej; Danáčová, Michaela; Valent, Peter
2017-10-01
When designing water management systems and their components, more detailed research on the hydrological conditions of the river basin is needed, since its runoff is the main source of water for the reservoir. Over the lifetime of a water management system, the hydrological time series that served as the input for the design of the system components are never repeated in the same form. The design assumes the observed time series to be representative at the time of the system's use. However, this is a rather unrealistic assumption, because the hydrological past will not be exactly repeated over the design lifetime. When designing water management systems, specialists may therefore occasionally face insufficient or oversized capacity designs, or possibly wrong specification of the management rules, which may lead to non-optimal use. It is therefore necessary to establish a comprehensive approach in water management practice that simulates the fluctuations in interannual runoff (taking into account recurring dry and wet periods) using stochastic modelling techniques. The paper deals with the methodological procedure of modelling the mean monthly flows using the stochastic Thomas-Fiering model, modified by applying the Wilson-Hilferty transformation to the independent random number. This transformation is usually applied when there is significant asymmetry in the observed time series. The methodological procedure was applied to data acquired at the gauging station of Horné Orešany on the Parná Stream. Observed mean monthly flows for the period 1.11.1980 - 31.10.2012 served as the model input. After estimating the model parameters and the Wilson-Hilferty transformation parameters, synthetic time series of mean monthly flows were simulated. These were compared with the observed hydrological time series using basic statistical characteristics (e.g. 
mean, standard deviation, and skewness) to test the quality of the model simulation. The synthetic hydrological series of monthly flows had the same statistical properties as the time series observed in the past. The compiled model was able to account for the diversity of extreme hydrological situations in the form of synthetic series of mean monthly flows, confirming the occurrence of sets of flows that could and may occur in the future. The results of the stochastic modelling, in the form of synthetic time series of mean monthly flows that take into account the seasonal fluctuations of runoff within the year, could be applicable in engineering hydrology (e.g., for optimal use of an existing water management system, in connection with reassessing the economic risks of the system).
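A minimal sketch of a Thomas-Fiering monthly flow generator with the Wilson-Hilferty transformation; the monthly means, standard deviations, lag-1 correlations, and skewness values below are assumed placeholders, not the Horné Orešany statistics:

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed monthly statistics (12 values each); the real model would use
# moments estimated from the observed record.
mean = 2.0 + np.sin(2 * np.pi * np.arange(12) / 12)   # m3/s
std = 0.3 * mean
r = np.full(12, 0.5)        # lag-1 correlation, month j -> j+1
skew = np.full(12, 1.0)     # skewness of the residuals

def wilson_hilferty(xi, g):
    """Transform a standard normal variate xi to an approximately
    gamma-distributed one with skewness g (zero mean, unit variance)."""
    return (2.0 / g) * (1.0 + g * xi / 6.0 - g ** 2 / 36.0) ** 3 - 2.0 / g

def thomas_fiering(n_years):
    """Generate a synthetic monthly flow series month by month."""
    q = np.empty(12 * n_years)
    q[0] = mean[0]
    for k in range(1, q.size):
        j, jn = (k - 1) % 12, k % 12        # current / next month index
        b = r[j] * std[jn] / std[j]         # regression coefficient
        t = wilson_hilferty(rng.standard_normal(), skew[jn])
        q[k] = mean[jn] + b * (q[k - 1] - mean[j]) \
               + t * std[jn] * np.sqrt(1.0 - r[j] ** 2)
    return q

synthetic = thomas_fiering(200)
```

By construction the generator reproduces the monthly means, standard deviations, and lag-1 correlations fed into it, which is exactly the property checked when comparing synthetic against observed series.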
On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman
2016-04-01
The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland; artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres; or any combination thereof. In this study we have simulated 20 time series, each 23 years long, with different stochastic characteristics such as white, flicker or random walk noise. The noise amplitude was assumed to be 1 mm/yr^(-κ/4), where κ is the spectral index of the noise. Then, we added the deterministic part, consisting of a linear trend of 20 mm/y (representing the averaged horizontal velocity) and accelerations ranging from -0.6 to +0.6 mm/y². For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package, without taking the non-linear term into account. In this way we set the benchmark to then investigate how the noise properties and velocity uncertainty may be affected by any un-modelled, non-linear term. 
The velocities and their uncertainties versus the accelerations for different types of noise are determined. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of -1.8±1.2 mm/y² and -4.5±3.3 mm/y² for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
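The effect of an un-modelled acceleration on the fitted velocity can be illustrated with a quick simulation (white noise only; the trend and acceleration values are chosen for illustration). For a uniform time span [0, T], regressing t² on t gives a slope of T, so a quadratic coefficient a biases the linear-only velocity estimate by roughly aT:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated daily position component over 23 years; white noise only
# (the study also considers flicker and random-walk noise).
t = np.arange(0, 23, 1 / 365.25)                 # time in years
truth = 20.0 * t + 0.3 * t ** 2                  # 20 mm/y + 0.3 mm/y^2
y = truth + 1.0 * rng.standard_normal(t.size)    # 1 mm white noise

# Linear-only fit vs. a fit that includes the quadratic acceleration term.
v_lin = np.polyfit(t, y, 1)[0]
a_quad, v_quad, _ = np.polyfit(t, y, 2)

bias = v_lin - 20.0      # velocity bias caused by the un-modelled term
```

With a = 0.3 mm/y² and T = 23 years the linear-only velocity is biased by about 6.9 mm/y, whereas the quadratic fit recovers both the velocity and the acceleration.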
NASA Astrophysics Data System (ADS)
Kappler, Karl N.; Schneider, Daniel D.; MacLean, Laura S.; Bleier, Thomas E.
2017-08-01
A method is described for identifying pulsations in time series of magnetic field data which are simultaneously present in multiple channels of data at one or more sensor locations. Candidate pulsations of interest are first identified in geomagnetic time series by inspection. Time series of these "training events" are represented in matrix form and transpose-multiplied to generate time-domain covariance matrices. The ranked eigenvectors of this matrix are stored as a feature of the pulsation. In the second stage of the algorithm, a sliding window (approximately the width of the training event) is moved across the vector-valued time series comprising the channels on which the training event was observed. At each window position, the data covariance matrix and associated eigenvectors are calculated. We compare the orientation of the dominant eigenvectors of the training data to those from the windowed data and flag windows where the dominant eigenvector directions are similar. This approach was successful in automatically identifying pulses which share polarization and appear to be from the same source process. We apply the method to a case study of continuously sampled (50 Hz) data from six observatories, each equipped with three-component induction coil magnetometers. We examine a 90-day interval of data associated with a cluster of four observatories located within 50 km of Napa, California, together with two remote reference stations: one 100 km to the north of the cluster and the other 350 km to the south. When the training data contain signals present in the remote reference observatories, we are reliably able to identify and extract global geomagnetic signals such as solar-generated noise. When the training data contain pulsations only observed in the cluster of local observatories, we identify several types of non-plane-wave signals having similar polarization.
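A toy version of the two-stage procedure: compute the dominant covariance eigenvector of a training window, then slide a window across the multichannel series and score positions by how well their dominant eigenvector aligns with it (the channel count, pulse shape, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

# Three-channel series with a polarized pulse buried at a known time.
n, width, onset = 2000, 100, 1200
direction = np.array([0.8, 0.5, 0.33])
direction /= np.linalg.norm(direction)            # fixed polarization
pulse = np.sin(np.linspace(0, 4 * np.pi, width))
data = 0.2 * rng.standard_normal((3, n))
data[:, onset:onset + width] += np.outer(direction, pulse)

def dominant_eigvec(x):
    """Dominant eigenvector of the channel covariance of a window."""
    cov = x @ x.T / x.shape[1]
    w, v = np.linalg.eigh(cov)
    return v[:, -1]

# "Training": dominant eigenvector of a window containing the pulse.
train = dominant_eigvec(data[:, onset:onset + width])

# Scan: score each window by alignment (|cosine|) with the training
# eigenvector; well-aligned windows share the pulse's polarization.
grid = np.arange(0, n - width, 10)
scores = np.array([abs(train @ dominant_eigvec(data[:, i:i + width]))
                   for i in grid])
```

In practice alignment would be combined with an energy criterion, since a pure-noise window can occasionally produce an accidentally well-aligned eigenvector.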
Willard, D.A.; Bernhardt, C.E.; Korejwo, D.A.; Meyers, S.R.
2005-01-01
We present paleoclimatic evidence for a series of Holocene millennial-scale cool intervals in eastern North America that occurred every ∼1400 years and lasted ∼300-500 years, based on pollen data from Chesapeake Bay in the mid-Atlantic region of the United States. The cool events are indicated by significant decreases in pine pollen, which we interpret as representing decreases in January temperatures of between 0.2 and 2 °C. These temperature decreases include excursions during the Little Ice Age (∼1300-1600 AD) and the 8 ka cold event. The timing of the pine minima is correlated with a series of quasi-periodic cold intervals documented by various proxies in Greenland, North Atlantic, and Alaskan cores and with solar minima interpreted from cosmogenic isotope records. These events may represent changes in circumpolar vortex size and configuration in response to intervals of decreased solar activity, which altered jet stream patterns to enhance meridional circulation over eastern North America.
Discovery of a Novel Series of CRTH2 (DP2) Receptor Antagonists Devoid of Carboxylic Acids
2011-01-01
Antagonism of the CRTH2 receptor represents a very attractive target for a variety of allergic diseases. Most CRTH2 antagonists known to date possess a carboxylic acid moiety, which is essential for binding. However, the O-acyl glucuronide metabolites potentially formed from these acids might be linked to idiosyncratic toxicity in humans. In this communication, we describe a new series of compounds that lack the carboxylic acid moiety. Compounds with high affinity (Ki < 10 nM) for the receptor have been identified. Subsequent optimization succeeded in reducing the high metabolic clearance of the first compounds in human and rat liver microsomes. At the same time, inhibition of the CYP isoforms was optimized, giving rise to stable compounds with an acceptable CYP inhibition profile (IC50 CYP2C9 and 2C19 > 1 μM). Taken together, these data show that compounds devoid of carboxylic acid groups could represent an interesting alternative to current CRTH2 antagonists in development. PMID:24900284
Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan
2016-01-01
Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker for certain diseases. A previous study showed that the average wavelet method provides the most accurate results for estimating this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents covering a range of physiologically conceivable fractal signals. Five candidate mother wavelets were chosen for their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother wavelet regardless of signal length. PMID:27960102
A spectral analysis of team dynamics and tactics in Brazilian football.
Moura, Felipe Arruda; Martins, Luiz Eduardo Barreto; Anido, Ricardo O; Ruffino, Paulo Régis C; Barros, Ricardo M L; Cunha, Sergio Augusto
2013-01-01
The purposes of this study were to characterise the total space covered and the distances between players within teams over ten Brazilian First Division Championship matches. Filmed recordings, combined with a tracking system, were used to obtain the trajectories of the players (n = 277), before and after half-time. The team surface area (the area of the convex hull formed by the positions of the players) and spread (the Frobenius norm of the distance-between-player matrix) were calculated as functions of time. A Fast Fourier Transform (FFT) was applied to each time series. The median frequency was then calculated. The results of the surface area time series median frequencies for the first half (0.63 ± 0.10 cycles · min⁻¹) were significantly greater (P < 0.01) than the second-half values (0.47 ± 0.14 cycles · min⁻¹). Similarly, the spread variable median frequencies for the first half (0.60 ± 0.14 cycles · min⁻¹) were significantly greater (P < 0.01) than the second-half values (0.46 ± 0.16 cycles · min⁻¹). The median frequencies allowed the characterisation of the time series oscillations that represent the speed at which players distribute and then compact their team formation during a match. This analysis can provide insights that allow coaches to better control the team organisation on the pitch.
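The median frequency of a time series, as used above, is the frequency below which half of the spectral power lies. A short sketch on a synthetic oscillation (the sampling rate and values are illustrative, not match data):

```python
import numpy as np

def median_frequency(x, dt):
    """Frequency below which half of the signal's spectral power lies."""
    x = x - np.mean(x)                       # remove the DC component
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=dt)
    cum = np.cumsum(power)
    return freqs[np.searchsorted(cum, 0.5 * cum[-1])]

# Synthetic "team surface area" series oscillating at 0.6 cycles/min,
# sampled once per second over 45 minutes.
dt = 1.0 / 60.0                              # sampling interval, minutes
t = np.arange(0, 45, dt)
area = 1500 + 100 * np.sin(2 * np.pi * 0.6 * t)

mf = median_frequency(area, dt)              # ~0.6 cycles per minute
```

A drop in this statistic between halves, as reported above, indicates that the expansion-contraction cycles of the team formation slow down.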
NASA Technical Reports Server (NTRS)
Teng, William; Rui, Hualan; Strub, Richard; Vollmer, Bruce
2016-01-01
A long-standing "Digital Divide" in data representation exists between the preferred way of data access by the hydrology community and the common way of data archival by earth science data centers. Typically, in hydrology, earth surface features are expressed as discrete spatial objects (e.g., watersheds), and time-varying data are contained in associated time series. Data in earth science archives, although stored as discrete values (of satellite swath pixels or geographical grids), represent continuous spatial fields, one file per time step. This Divide has been an obstacle, specifically, between the Consortium of Universities for the Advancement of Hydrologic Science, Inc. and NASA earth science data systems. In essence, the way data are archived is conceptually orthogonal to the desired method of access. Our recent work has shown an optimal method of bridging the Divide, by enabling operational access to long time series (e.g., 36 years of hourly data) of selected NASA datasets. These time series, which we have termed "data rods," are pre-generated or generated on-the-fly. This optimal solution was arrived at after extensive investigations of various approaches, including one based on "data curtains." The on-the-fly generation of data rods uses "data cubes," NASA Giovanni, and parallel processing. The optimal reorganization of NASA earth science data has significantly enhanced the access to and use of the data for the hydrology user community.
Diffusive and subdiffusive dynamics of indoor microclimate: a time series modeling.
Maciejewska, Monika; Szczurek, Andrzej; Sikora, Grzegorz; Wyłomańska, Agnieszka
2012-09-01
The indoor microclimate is an issue in modern society, where people spend about 90% of their time indoors. Temperature and relative humidity are commonly used for its evaluation. In this context, the two parameters are usually considered as behaving in the same manner, just inversely correlated. This opinion comes from observation of the deterministic components of temperature and humidity time series. We focus on the dynamics and the dependency structure of the time series of these parameters with their deterministic components removed. Here we apply the mean square displacement, the autoregressive integrated moving average (ARIMA), and the methodology for studying anomalous diffusion. The analyzed data originated from five monitoring locations inside a modern office building, covering a period of nearly one week. It was found that the temperature data exhibited a transition between diffusive and subdiffusive behavior when the building occupancy pattern changed from the weekday to the weekend pattern. At the same time, the relative humidity consistently showed diffusive character. The dependency structures of the temperature and humidity data sets also differed, as shown by the different ARIMA model structures found to be appropriate for each. In the space domain, the dynamics and dependency structure of each parameter were preserved. This work proposes an approach to describing the very complex conditions of indoor air and contributes to improving the representative character of microclimate monitoring.
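Diffusive versus subdiffusive behaviour is typically diagnosed from the scaling of the mean square displacement with lag: MSD ∝ lag^α, with α ≈ 1 diffusive and α < 1 subdiffusive. A sketch on a synthetic Brownian-like series (not the building measurements):

```python
import numpy as np

def mean_squared_displacement(x, max_lag):
    """Sample MSD of a 1-D series for lags 1..max_lag."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(7)

# Brownian-like series: MSD grows linearly in the lag (alpha ~ 1);
# a subdiffusive series would give a log-log slope below 1.
x = np.cumsum(rng.standard_normal(20000))
lags = np.arange(1, 51)
msd = mean_squared_displacement(x, 50)
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
```

Estimating α separately for weekday and weekend segments is one way to expose the kind of occupancy-driven transition reported above.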
Code of Federal Regulations, 2014 CFR
2014-07-01
... series of daily values represents the 98th percentile for that year. Creditable samples include daily... measured (or averaged from hourly measurements in AQS) from midnight to midnight (local standard time) from... design value (DV) or a 24-hour PM2.5 NAAQS DV to determine if those metrics, which are judged to be based...
ERIC Educational Resources Information Center
Barnette, J. Jackson; Wallis, Anne Baber
2005-01-01
We rely a great deal on the schematic descriptions that represent experimental and quasi-experimental design arrangements, as well as the discussions of threats to validity associated with these, provided by Campbell and his associates: Stanley, Cook, and Shadish. Some of these designs include descriptions of treatments removed, removed and then…
Thermophysical Properties of Matter - the TPRC Data Series. Volume 11. Viscosity
1975-01-01
R. Justin DeRose; W. Shih-Yu (Simon) Wang; John D. Shaw
2012-01-01
Increment cores collected as part of the periodic inventory in the Intermountain West were examined for their potential to represent growth and be a proxy for climate (precipitation) over a large region (Utah). Standardized and crossdated time-series created from pinyon pine (n=249) and Douglas-fir (n=274) increment cores displayed spatiotemporal patterns in growth...
The Future of U.S. Doctoral Programs in Physics (May 22-23, 1989). Topical Conference Series.
ERIC Educational Resources Information Center
Neal, Homer A., Ed.; Wilson, Jack M., Ed.
The 1990s represent an unusual period in physics. Some areas are in a state of unusual excitement, while divisions are growing within the discipline over priorities. Another problem facing the field at this time is that few U.S. nationals are going into careers related to physics. In addition, the percentage of females and minorities…
Landsat Based Woody Vegetation Loss Detection in Queensland, Australia Using the Google Earth Engine
NASA Astrophysics Data System (ADS)
Johansen, K.; Phinn, S. R.; Taylor, M.
2014-12-01
Land clearing detection and woody Foliage Projective Cover (FPC) monitoring at the state and national level in Australia has mainly been undertaken by state governments and the Terrestrial Ecosystem Research Network (TERN) because of the considerable expense, expertise, sustained duration of activities and staffing levels needed. Only recently have services become available, providing low budget, generalized access to change detection tools suited to this task. The objective of this research was to examine if a globally available service, Google Earth Engine Beta, could be used to predict woody vegetation loss with accuracies approaching the methods used by TERN and the government of the state of Queensland, Australia. Two change detection approaches were investigated using Landsat Thematic Mapper time series and the Google Earth Engine Application Programming Interface: (1) CART and Random Forest classifiers; and (2) a normalized time series of Foliage Projective Cover (FPC) and NDVI combined with a spectral index. The CART and Random Forest classifiers produced high user's and producer's mapping accuracies of clearing (77-92% and 54-77%, respectively) when detecting change within epochs for which training data were available, but extrapolation to epochs without training data reduced the mapping accuracies. The use of FPC and NDVI time series provided a more robust approach for calculation of a clearing probability, as it did not rely on training data but instead on the difference of the normalized FPC / NDVI mean and standard deviation of a single year at the change point in relation to the remaining time series. However, the FPC and NDVI time series approach represented a trade-off between user's and producer's accuracies. Both change detection approaches explored in this research were sensitive to ephemeral greening and drying of the landscape. 
However, the developed normalized FPC and NDVI time series approach can be tuned to provide automated alerts for large woody vegetation clearing events by selecting suitable thresholds to identify very likely clearing. This research provides a comprehensive foundation to build further capacity to use globally accessible, free, online image datasets and processing tools to accurately detect woody vegetation clearing in an automated and rapid manner.
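The normalized time-series idea, comparing a single year against the statistics of the rest of the record, can be sketched as a simple z-score rule (the threshold, warm-up length, and FPC values below are invented; the operational method is more involved):

```python
import numpy as np

def clearing_flags(series, warmup=5, z_thresh=3.0):
    """Flag years whose value sits far below the mean of the preceding
    record, in units of the preceding record's standard deviation.
    A large negative z-score suggests clearing (a drop in FPC/NDVI)."""
    flags = []
    for i in range(warmup, series.size):
        base = series[:i]
        z = (series[i] - base.mean()) / base.std()
        if z < -z_thresh:
            flags.append(i)
    return np.array(flags)

# Annual FPC-like values (percent cover) with a clearing event at
# year index 10, followed by gradual regrowth.
fpc = np.array([62, 63, 61, 64, 62, 63, 62, 61, 63, 62,
                20, 22, 25, 27, 30], dtype=float)
flagged = clearing_flags(fpc)
```

Tuning the threshold trades off user's against producer's accuracy, which mirrors the trade-off noted for the FPC/NDVI approach above.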
Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hunsberger, Randolph J
This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control- and voltage-errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems, representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
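The temporal decomposition itself can be sketched as splitting the simulation horizon into chunks, each preceded by a short warm-up window used only to initialize controls (the worker count and 48-step warm-up are assumed values, not those of the study):

```python
import numpy as np

def decompose_timeline(n_steps, n_workers, overlap):
    """Split a quasi-static time-series horizon into parallel chunks.
    Each chunk is preceded by `overlap` warm-up steps used only to
    initialize controls; warm-up results are discarded."""
    bounds = np.linspace(0, n_steps, n_workers + 1, dtype=int)
    chunks = []
    for k in range(n_workers):
        start, stop = bounds[k], bounds[k + 1]
        warm = max(0, start - overlap)
        # (warm-up start, first kept step, stop)
        chunks.append((warm, start, stop))
    return chunks

# One year of hourly time steps split across 8 workers with a
# 48-step controls-initialization window.
chunks = decompose_timeline(8760, 8, 48)
```

Each worker simulates its `[warm, stop)` range independently and reports only `[start, stop)`, so the wall-clock time shrinks roughly in proportion to the worker count, at the cost of residual state errors at the chunk boundaries.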
Landsat time series analysis documents beaver migration into permafrost landscapes of arctic Alaska
NASA Astrophysics Data System (ADS)
Jones, B. M.; Tape, K. D.; Nitze, I.; Arp, C. D.; Grosse, G.; Zimmerman, C. E.
2017-12-01
Landscape-scale impacts of climate change in the Arctic include increases in growing season length, shrubby vegetation, winter river discharge, snowfall, summer and winter water temperatures, and decreases in river and lake ice thickness. Combined, these changes may have created conditions that are suitable for beaver colonization of low Arctic tundra regions. We developed a semi-automated workflow that analyzes Landsat imagery time series to determine the extent to which beavers may have colonized permafrost landscapes in arctic Alaska since 1999. We tested this approach on the Lower Noatak, Wulik, and Kivalina river watersheds in northwest Alaska and identified 83 locations representing potential beaver activity. Seventy locations indicated wetting trends and 13 indicated drying trends. Verification of each site using high-resolution satellite imagery showed that 80% of the wetting locations represented beaver activity (damming and pond formation), 11% were unrelated to beavers, and 9% could not readily be distinguished as being beaver related or not. For the drying locations, 31% represented beaver activity (pond drying due to dam abandonment), 62% were unrelated to beavers, and 7% were undetermined. Comparison of the beaver activity database with historic aerial photography from ca. 1950 and ca. 1980 indicates that beavers have recently colonized or recolonized riparian corridors in northwest Alaska. Remote sensing time series observations associated with the migration of beavers in permafrost landscapes in arctic Alaska include thermokarst lake expansion and drainage, thaw slump initiation, ice wedge degradation, thermokarst shore fen development, and possibly development of lake and river taliks. Additionally, beaver colonization in the Arctic may alter channel courses, thermal regimes, hyporheic flow, riparian vegetation, and winter ice regimes that could impact ecosystem structure and function in this region. 
In particular, the combination of beaver activity and permafrost dynamics may play an important role in the formation of habitats conducive to colonization by Pacific salmon. Beaver activity in arctic tundra regions may amplify the effects of climate change on permafrost landscapes and lead to landscape-scale responses not currently being considered in ecosystem models.
Hydrodynamic Tests in the N.A.C.A. Tank of a Model of the Hull of the Short Calcutta Flying Boat
NASA Technical Reports Server (NTRS)
Ward, Kenneth E
1937-01-01
The hydrodynamic characteristics of a model of the hull of the Short Calcutta (N.A.C.A. Model 47) are presented in non-dimensional form. This model represents one of a series of hulls of successful foreign and domestic flying boats the characteristics of which are being obtained under similar test conditions in the N.A.C.A. tank. The take-off distance and time for a flying boat having the hull of the Calcutta are compared at two values of the gross load with the corresponding distances and times for the same flying boat having hulls of two representative American types, the Sikorsky S-40 and the N.A.C.A. 11-A. This comparison indicates that for hulls of the widely different forms compared, the differences in take-off time and distance are negligible.
Penning de Vries, Bas B L; Kolkert, Joé L P; Meerwaldt, Robbert; Groenwold, Rolf H H
2017-10-01
Associations between atmospheric pressure and abdominal aortic aneurysm (AAA) rupture risk have been reported, but empirical evidence is inconclusive and largely derived from studies that did not account for possible nonlinearity, seasonality, and confounding by temperature. Associations between atmospheric pressure and AAA rupture risk were investigated using local meteorological data and a case series of 358 patients admitted to hospital for ruptured AAA during the study period, January 2002 to December 2012. Two analyses were performed: a time series analysis and a case-crossover study. Results from the 2 analyses were similar; neither the time series analysis nor the case-crossover study showed a significant association between atmospheric pressure (P = .627 and P = .625, respectively, for mean daily atmospheric pressure) or atmospheric pressure variation (P = .464 and P = .816, respectively, for 24-hour change in mean daily atmospheric pressure) and AAA rupture risk. This study failed to support claims that atmospheric pressure causally affects AAA rupture risk. In interpreting our results, one should be aware that the range of atmospheric pressure observed in this study is not representative of the atmospheric pressure to which patients with AAA may be exposed, for example, during air travel or travel to high altitudes in the mountains. Making firm claims regarding these conditions in relation to AAA rupture risk is difficult at best. Furthermore, despite the fact that we used one of the largest case series to date to investigate the effect of atmospheric pressure on AAA rupture risk, it is possible that this study is simply too small to demonstrate a causal link.
Hsu, Emory; Phadke, Varun K; Nguyen, Minh Ly T
2016-06-01
We describe an HIV-infected patient initiated on combined antiretroviral therapy (cART) who subsequently developed immune restoration disease (IRD) hyperthyroidism-this case represents one of five such patients seen at our center within the past year. Similar to previous reports of hyperthyroidism due to IRD, all of our patients experienced a rapid early recovery in total CD4 count, but developed symptoms of hyperthyroidism on average 3 years (38 months) after beginning cART, which represents a longer time frame than previously reported. Awareness and recognition of this potential complication of cART, which may occur years after treatment initiation, will allow patients with immune restorative hyperthyroidism to receive timely therapy and avoid the long-term complications associated with undiagnosed thyroid disease.
Neumeister, Veronique M; Anagnostou, Valsamo; Siddiqui, Summar; England, Allison Michal; Zarrella, Elizabeth R; Vassilakopoulou, Maria; Parisi, Fabio; Kluger, Yuval; Hicks, David G; Rimm, David L
2012-12-05
Companion diagnostic tests can depend on accurate measurement of protein expression in tissues. Preanalytic variables, especially cold ischemic time (time from tissue removal to fixation in formalin) can affect the measurement and may cause false-negative results. We examined 23 proteins, including four commonly used breast cancer biomarker proteins, to quantify their sensitivity to cold ischemia in breast cancer tissues. A series of 93 breast cancer specimens with known time-to-fixation represented in a tissue microarray and a second series of 25 matched pairs of core needle biopsies and breast cancer resections were used to evaluate changes in antigenicity as a function of cold ischemic time. Estrogen receptor (ER), progesterone receptor (PgR), HER2, Ki67, and 19 other antigens were tested. Each antigen was measured using the AQUA method of quantitative immunofluorescence on at least one series. All statistical tests were two-sided. We found no evidence for loss of antigenicity with time-to-fixation for ER, PgR, HER2, or Ki67 in a 4-hour time window. However, with a bootstrapping analysis, we observed a trend toward loss for ER and PgR, a statistically significant loss of antigenicity for phosphorylated tyrosine (P = .0048), and trends toward loss for other proteins. There was evidence of increased antigenicity in acetylated lysine, AKAP13 (P = .009), and HIF1A (P = .046), which are proteins known to be expressed in conditions of hypoxia. The loss of antigenicity for phosphorylated tyrosine and the increases in expression of AKAP13 and HIF1A were confirmed in the biopsy/resection series. Key breast cancer biomarkers show no evidence of loss of antigenicity, although this dataset assesses the relatively short time beyond the 1-hour limit in recent guidelines. Other proteins show changes in antigenicity in both directions. 
Future studies that extend the time range and normalize for heterogeneity will provide more comprehensive information on preanalytic variation due to cold ischemic time.
Investigation of multifractality in the Brazilian stock market
NASA Astrophysics Data System (ADS)
Maganini, Natália Diniz; Da Silva Filho, Antônio Carlos; Lima, Fabiano Guasti
2018-05-01
Many studies point to a possible new stylized fact for financial time series: multifractality. Several authors have already detected this characteristic in multiple time series in several countries. With that in mind, and based on the Multifractal Detrended Fluctuation Analysis (MFDFA) method, this paper analyzes the multifractality in the Brazilian market. This analysis is performed with daily data from the IBOVESPA index (the Brazilian stock exchange's main index) and four other highly marketable stocks in the Brazilian market (VALE5, ITUB4, BBDC4 and CIEL3), which represent more than 25% of the index composition, making up 1961 observations for each asset in the period from June 26, 2009 to May 31, 2017. We found that the studied stock prices and the Brazilian index are multifractal, but that the multifractality degree is not the same for all the assets. The use of shuffled and surrogate series indicates that, for the period and the assets considered, long-range correlations do not strongly influence the multifractality, but the distribution (fat tails) exerts a possible influence on IBOVESPA and CIEL3.
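The MFDFA estimate of the generalized Hurst exponent h(q) can be sketched in a few lines: build the profile, detrend it segment by segment, and fit the log-log slope of the fluctuation function. This minimal version uses order-1 detrending and non-overlapping segments only; the published method also averages over reversed segments and scans a denser set of scales:

```python
import numpy as np

def mfdfa_hq(x, scales, q):
    """Generalized Hurst exponent h(q) via a minimal MFDFA."""
    profile = np.cumsum(x - x.mean())
    fq = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        var = np.empty(n_seg)
        for i, seg in enumerate(segs):
            coef = np.polyfit(t, seg, 1)           # linear detrend per segment
            var[i] = np.mean((seg - np.polyval(coef, t)) ** 2)
        if q == 0:
            fq.append(np.exp(0.5 * np.mean(np.log(var))))
        else:
            fq.append(np.mean(var ** (q / 2.0)) ** (1.0 / q))
    slope, intercept = np.polyfit(np.log(scales), np.log(fq), 1)
    return slope

rng = np.random.default_rng(42)
noise = rng.standard_normal(4096)
scales = [16, 32, 64, 128, 256]
h2 = mfdfa_hq(noise, scales, 2.0)                  # ~0.5 for uncorrelated noise
h2_walk = mfdfa_hq(np.cumsum(noise), scales, 2.0)  # ~1.5 for a random walk
```

A spectrum h(q) that varies with q is the signature of multifractality; for the monofractal white noise here, h(2) stays near 0.5, while integrating the noise shifts it near 1.5.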
Cycles in oceanic teleconnections and global temperature change
NASA Astrophysics Data System (ADS)
Seip, Knut L.; Grøn, Øyvind
2018-06-01
Three large ocean currents are represented by proxy time series: the North Atlantic Oscillation (NAO), the Southern Oscillation Index (SOI), and the Pacific Decadal Oscillation (PDO). Here we show how proxies for the currents interact with each other and with the global temperature anomaly (GTA). Our results are obtained by a novel method, which identifies running-average leading-lagging (LL) relations between paired series. We find common cycle times of 6-7 and 25-28 years for paired series and identify the years when the LL relations switch. Switching occurs at 18.4 ± 14.3-year intervals for the short 6-7-year cycles and at 27 ± 15-year intervals for the 25-28-year cycles. During the period 1940-1950, the LL relations for the long cycles were circular (nomenclature x leads y: x → y): GTA → NAO → SOI → PDO → GTA. However, after 1960, the LL relations become more complex and there are indications that GTA leads both NAO and PDO. The switching years are related to ocean current tie points and reversals reported in the literature.
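The core of a leading-lagging analysis is the lag at which the cross-correlation of a paired series peaks: its sign says which series leads. A minimal sketch on synthetic cyclic series (the paper's actual method works on running averages of detrended paired series and tracks the lag through time; this shows only the single-window idea):

```python
import numpy as np

def lead_lag(x, y, max_lag):
    """Lag (in samples) at which the cross-correlation of x and y peaks.
    A positive result means x leads y."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = list(range(-max_lag, max_lag + 1))
    cc = [np.mean(x[:len(x) - k] * y[k:]) if k >= 0
          else np.mean(x[-k:] * y[:len(y) + k]) for k in lags]
    return lags[int(np.argmax(cc))]

t = np.arange(300)
x = np.sin(0.2 * t)           # a GTA-like cycle (illustrative)
y = np.sin(0.2 * (t - 3.0))   # same cycle, lagging x by 3 samples
lag = lead_lag(x, y, max_lag=10)
```

Applying this inside a sliding window, and recording the years when the sign of the lag flips, gives the switching intervals the abstract reports.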
Memetic Approaches for Optimizing Hidden Markov Models: A Case Study in Time Series Prediction
NASA Astrophysics Data System (ADS)
Bui, Lam Thu; Barlow, Michael
We propose a methodology for employing memetics (local search) within the framework of evolutionary algorithms to optimize parameters of hidden Markov models. With this proposal, the rate and frequency of using local search are automatically changed over time, either at a population or at an individual level. At the population level, we allow the rate of using local search to decay over time to zero (at the final generation). At the individual level, each individual is equipped with information on when it will perform local search and for how long. This information evolves over time alongside the main elements of the chromosome representing the individual.
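A minimal sketch of the population-level scheme, assuming a simple mutation-plus-truncation EA on a toy objective and a hill-climbing local search whose invocation rate decays linearly to zero at the final generation (the individual-level, chromosome-encoded control of search length is omitted, as is the HMM objective itself):

```python
import random

def memetic_minimize(f, dim, gens=100, pop_size=20, seed=1):
    """Toy memetic EA: evolution plus hill climbing whose rate decays
    to zero at the last generation (population-level scheme)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

    def hill_climb(x, steps=5):
        for _ in range(steps):
            cand = [v + rng.gauss(0, 0.1) for v in x]
            if f(cand) < f(x):
                x = cand
        return x

    for g in range(gens):
        ls_rate = 1.0 - g / (gens - 1)       # local-search rate: 1 -> 0
        pop.sort(key=f)
        pop = pop[:pop_size // 2]            # truncation selection
        children = [[v + rng.gauss(0, 0.3) for v in p] for p in pop]
        pop = pop + children
        pop = [hill_climb(x) if rng.random() < ls_rate else x for x in pop]
    return min(pop, key=f)

best = memetic_minimize(lambda x: sum(v * v for v in x), dim=3)
```

Early generations do heavy local refinement; late generations rely on the evolved population alone, which is the cost-control rationale behind the decay schedule.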
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Christopher J; Ahrens, James P; Wang, Jun
2010-10-15
Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network often are a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to provide linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing using VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation that showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.
The impact of a national alcohol policy on deaths due to transport accidents in Russia.
Pridemore, William Alex; Chamlin, Mitchell B; Kaylen, Maria T; Andreev, Evgeny
2013-12-01
To determine the impact of a suite of 2006 Russian alcohol control policies on deaths due to traffic accidents in the country. We used autoregressive integrated moving average (ARIMA) interrupted time-series techniques to model the impact of the intervention on the outcome series. The time-series began in January 2000 and ended in December 2010. The alcohol policy was implemented in January 2006, providing 132 monthly observations in the outcome series, with 72 months of pre-intervention data and 60 months of post-intervention data. The outcome variables were the monthly number of male- and female-specific deaths of those aged 15+ years due to transport accidents in Russia. The 2006 set of alcohol policies had no impact on female deaths due to traffic accidents (ω0 = -50.31, P = 0.27). However, the intervention model revealed an immediate and sustained monthly decrease of 203 deaths due to transport accidents for males (ω0 = -203.40, P = 0.04), representing an 11% reduction relative to pre-intervention levels. The implementation of the suite of 2006 Russian alcohol control policies is partially responsible for saving more than 2400 male lives annually that would otherwise have been lost to traffic accidents. © 2013 Society for the Study of Addiction.
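The intervention term ω0 is, in essence, the coefficient of a step regressor that switches on at the policy date. A stripped-down numpy sketch on synthetic monthly counts (ordinary least squares with a level-and-trend model; a real ARIMA intervention analysis, as used in the study, would also model the autocorrelated noise):

```python
import numpy as np

def step_effect(y, t0):
    """OLS estimate of an abrupt, permanent level shift at time t0.

    A simplified stand-in for the ARIMA intervention term omega_0;
    serial correlation in the errors is ignored here.
    """
    n = len(y)
    X = np.column_stack([np.ones(n), np.arange(n), np.arange(n) >= t0])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[2]

rng = np.random.default_rng(0)
months = 132                              # Jan 2000 - Dec 2010, as in the study
deaths = 2000 + rng.normal(0, 30, months) # synthetic monthly male deaths
deaths[72:] -= 203                        # policy takes effect at month 72
omega0 = step_effect(deaths, 72)          # recovers roughly -203
```

The baseline level, noise scale, and the simulated shift of 203 are synthetic, chosen only to echo the magnitudes in the abstract.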
Analysis of PV Advanced Inverter Functions and Setpoints under Time Series Simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph
Utilities are increasingly concerned about the potential negative impacts distributed PV may have on the operational integrity of their distribution feeders. Some have proposed novel methods for controlling a PV system's grid-tie inverter to mitigate potential PV-induced problems. This report investigates the effectiveness of several of these PV advanced inverter controls on improving distribution feeder operational metrics. The controls are simulated on a large PV system interconnected at several locations within two realistic distribution feeder models. Due to the time-domain nature of the advanced inverter controls, quasi-static time series simulations are performed under one week of representative variable irradiance and load data for each feeder. A parametric study is performed on each control type to determine how well certain measurable network metrics improve as a function of the control parameters. This methodology is used to determine appropriate advanced inverter settings for each location on the feeder and overall for any interconnection location on the feeder.
Stuebner, Michael; Haider, Mansoor A
2010-06-18
A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method is illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
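The O(N) update rule exploits the separability of the exponentials: each term of the series carries one internal state that is decayed and incremented per time step, so the full strain history never needs to be revisited. A sketch with a two-term series (the moduli, relaxation times, and step size are illustrative; the paper generates the terms from a Gauss-Legendre quadrature of the continuous spectrum):

```python
import numpy as np

def stress_recursive(d_eps, g, tau, dt):
    """O(N) hereditary integral: one internal variable per exponential term."""
    h = np.zeros(len(g))
    decay = np.exp(-dt / np.asarray(tau))
    out = np.empty(len(d_eps))
    for n, de in enumerate(d_eps):
        h = decay * h + np.asarray(g) * de   # decay old history, add new increment
        out[n] = h.sum()
    return out

def stress_direct(d_eps, g, tau, dt):
    """O(N^2) reference: full convolution sum at every step."""
    n = len(d_eps)
    t = np.arange(n) * dt
    G = sum(gk * np.exp(-t / tk) for gk, tk in zip(g, tau))
    return np.array([np.sum(G[:m + 1][::-1] * d_eps[:m + 1])
                     for m in range(n)])

rng = np.random.default_rng(2)
d_eps = 0.01 * rng.standard_normal(200)   # synthetic strain increments
g, tau, dt = [0.5, 0.3], [0.1, 1.0], 0.01
s_fast = stress_recursive(d_eps, g, tau, dt)
s_ref = stress_direct(d_eps, g, tau, dt)
```

The two routines agree to floating-point precision, but the recursive form needs only the previous step's internal state, which is the source of the O(N) cost quoted in the abstract.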
Memory effects in stock price dynamics: evidences of technical trading
Garzarelli, Federico; Cristelli, Matthieu; Pompa, Gabriele; Zaccaria, Andrea; Pietronero, Luciano
2014-01-01
Technical trading represents a class of investment strategies for financial markets based on the analysis of trends and recurrent patterns in price time series. According to standard economic theories, these strategies should not be used because they cannot be profitable. On the contrary, it is well known that technical traders exist and operate on different time scales. In this paper we investigate whether technical trading produces detectable signals in price time series and whether some kind of memory effect is introduced into the price dynamics. In particular, we focus on a specific figure known as supports and resistances. We first develop a criterion to detect the potential values of supports and resistances. Then we show that memory effects in the price dynamics are associated with these selected values. In fact, we show that prices are more likely to re-bounce off these values than to cross them. Such an effect is quantitative evidence of the so-called self-fulfilling prophecy, that is, the self-reinforcement of agents' belief and sentiment about future stock prices' behavior. PMID:24671011
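The bounce-versus-cross test at a candidate support or resistance level can be sketched directly: among visits to a neighbourhood of the level, count how often the price stays on the same side rather than crossing. The paper's criterion for detecting the levels themselves is not reproduced; the tolerance below is an arbitrary illustration:

```python
def bounce_fraction(price, level, tol):
    """Fraction of visits to the level's neighbourhood where the price
    re-bounces (stays on the same side) rather than crosses."""
    bounces = crosses = 0
    for i in range(1, len(price) - 1):
        if abs(price[i] - level) >= tol:
            continue                      # not near the level
        if (price[i - 1] - level) * (price[i + 1] - level) > 0:
            bounces += 1                  # entered and left on the same side
        else:
            crosses += 1
    total = bounces + crosses
    return bounces / total if total else float("nan")

rebound = bounce_fraction([1.0, 1.9, 1.0, 1.9, 1.2], level=2.0, tol=0.2)
breach = bounce_fraction([1.0, 1.9, 3.0], level=2.0, tol=0.2)
```

A bounce fraction persistently above what a memoryless walk would give is the paper's quantitative signature of the self-fulfilling prophecy.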
McLeod, Michael C; Aubé, Jeffrey; Frankowski, Kevin J
2016-12-01
Analogues of the decahydrobenzoquinolin-5-one class of sigma (σ) receptor ligands were used to probe the structure-activity relationship trends for this recently discovered series of σ ligands. In all, 29 representatives were tested for σ and opioid receptor affinity, leading to the identification of compounds possessing improved σ1 selectivity and, for the first time in this series, examples possessing preferential σ2 affinity. Several structural features associated with these selectivity trends have been identified. Two analogues of improved selectivity were evaluated in a binding panel of 43 CNS-relevant targets to confirm their sigma receptor preference. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Sabaka, T. J.; Rowlands, D. D.; Luthcke, S. B.; Boy, J.-P.
2010-01-01
We describe Earth's mass flux from April 2003 through November 2008 by deriving a time series of mascons on a global 2° × 2° equal-area grid at 10 day intervals. We estimate the mass flux directly from K band range rate (KBRR) data provided by the Gravity Recovery and Climate Experiment (GRACE) mission. Using regularized least squares, we take into account the underlying process dynamics through continuous space- and time-correlated constraints. In addition, we place the mascon approach in the context of other filtering techniques, showing its equivalence to anisotropic, nonsymmetric filtering, least squares collocation, and Kalman smoothing. We produce mascon time series from KBRR data that have and have not been corrected (forward modeled) for hydrological processes and find that the former produce superior results in oceanic areas by minimizing signal leakage from strong sources on land. By exploiting the structure of the spatiotemporal constraints, we are able to use a much more efficient (in storage and computation) inversion algorithm based upon the conjugate gradient method. This allows us to apply continuous rather than piecewise continuous time-correlated constraints, which we show, via global maps and comparisons with ocean-bottom pressure gauges, to produce time series with reduced random variance and full systematic signal. Finally, we present a preferred global model, a hybrid whose oceanic portions are derived using forward modeling of hydrology but whose land portions are not, and thus represent a pure GRACE-derived signal.
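The inversion solves a regularized least-squares system whose operator is applied iteratively rather than factored. A small dense sketch of conjugate gradients on the normal equations; simple Tikhonov damping stands in for the space-time correlation constraints, and the matrix is formed explicitly only for clarity (the paper's storage efficiency comes precisely from never doing so):

```python
import numpy as np

def cg_regularized(A, b, lam, iters=500, tol=1e-12):
    """Conjugate gradients on (A^T A + lam*I) x = A^T b."""
    M = A.T @ A + lam * np.eye(A.shape[1])   # explicit only for this demo
    rhs = A.T @ b
    x = np.zeros_like(rhs)
    r = rhs.copy()                           # residual for x = 0
    p = r.copy()
    rr = r @ r
    for _ in range(iters):
        Mp = M @ p
        alpha = rr / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 20))            # toy observation operator
x_true = rng.standard_normal(20)             # toy mascon parameters
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_cg = cg_regularized(A, b, lam=1e-3)
```

In the mascon setting the matrix-vector products are evaluated against the partials and constraint stencils directly, which is what makes the conjugate gradient route cheap in both storage and computation.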
Reid, Brian J; Papanikolaou, Niki D; Wilcox, Ronah K
2005-02-01
The catabolic activity with respect to the systemic herbicide isoproturon was determined in soil samples by ¹⁴C-radiorespirometry. The first experiment assessed levels of intrinsic catabolic activity in soil samples that represented three dissimilar soil series under arable cultivation. Results showed average extents of isoproturon mineralisation (after 240 h assay time) in the three soil series to be low. A second experiment assessed the impact of addition of isoproturon (0.05 µg kg⁻¹) into these soils on the levels of catabolic activity following 28 days of incubation. Increased catabolic activity was observed in all three soils. A third experiment assessed levels of intrinsic catabolic activity in soil samples representing a single soil series managed under either conventional agricultural practice (including the use of isoproturon) or organic farming practice (with no use of isoproturon). Results showed higher (and more consistent) levels of isoproturon mineralisation in the soil samples collected from conventional land use. The final experiment assessed the impact of isoproturon addition on the levels of inducible catabolic activity in these soils. The results showed no significant difference in the case of the conventional farm soil samples, while the induction of catabolic activity in the organic farm soil samples was significant.
Small-world bias of correlation networks: From brain to climate
NASA Astrophysics Data System (ADS)
Hlinka, Jaroslav; Hartman, David; Jajcay, Nikola; Tomeček, David; Tintěra, Jaroslav; Paluš, Milan
2017-03-01
Complex systems are commonly characterized by the properties of their graph representation. Dynamical complex systems are then typically represented by a graph of temporal dependencies between time series of state variables of their subunits. It has been shown recently that graphs constructed in this way tend to have relatively clustered structure, potentially leading to spurious detection of small-world properties even in the case of systems with no or randomly distributed true interactions. However, the strength of this bias depends heavily on a range of parameters and its relevance for real-world data has not yet been established. In this work, we assess the relevance of the bias using two examples of multivariate time series recorded in natural complex systems. The first is the time series of local brain activity as measured by functional magnetic resonance imaging in resting healthy human subjects, and the second is the time series of average monthly surface air temperature coming from a large reanalysis of climatological data over the period 1948-2012. In both cases, the clustering in the thresholded correlation graph is substantially higher compared with a realization of a density-matched random graph, while the shortest paths are relatively short, thus showing distinguishing features of small-world structure. However, comparable or even stronger small-world properties were reproduced in correlation graphs of model processes with randomly scrambled interconnections. This suggests that the small-world properties of the correlation matrices of these real-world systems indeed do not reflect genuinely the properties of the underlying interaction structure, but rather result from the inherent properties of the correlation matrix.
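The bias can be reproduced in a few lines: threshold the correlation matrix of independent short time series and compare the resulting clustering with that of a density-matched random graph, whose expected clustering equals the edge density. A numpy sketch; the number of series, their length, and the 10% threshold are illustrative choices:

```python
import numpy as np

def transitivity(A):
    """Global clustering coefficient of a symmetric 0/1 adjacency matrix."""
    k = A.sum(axis=1)
    closed = np.trace(A @ A @ A)       # = 6 * number of triangles
    triples = (k * (k - 1)).sum()      # = 2 * number of connected triples
    return closed / triples if triples else 0.0

rng = np.random.default_rng(7)
series = rng.standard_normal((50, 30))       # 50 independent, short series
C = np.abs(np.corrcoef(series))
np.fill_diagonal(C, 0.0)
thr = np.quantile(C[np.triu_indices(50, k=1)], 0.9)  # keep top 10% of edges
A = (C > thr).astype(float)
cc = transitivity(A)
density = A.sum() / (50 * 49)
# A density-matched Erdos-Renyi graph has expected clustering ~= density;
# correlation graphs of even independent series tend to exceed it.
```

Comparing `cc` against `density` (or against clustering from a surrogate with scrambled interconnections, as the paper does) separates genuine network structure from the correlation-matrix artifact.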
Bivariate analysis of floods in climate impact assessments.
Brunner, Manuela Irene; Sikorska, Anna E; Seibert, Jan
2018-03-01
Climate impact studies regarding floods usually focus on peak discharges and a bivariate assessment of peak discharges and hydrograph volumes is not commonly included. A joint consideration of peak discharges and hydrograph volumes, however, is crucial when assessing flood risks for current and future climate conditions. Here, we present a methodology to develop synthetic design hydrographs for future climate conditions that jointly consider peak discharges and hydrograph volumes. First, change factors are derived based on a regional climate model and are applied to observed precipitation and temperature time series. Second, the modified time series are fed into a calibrated hydrological model to simulate runoff time series for future conditions. Third, these time series are used to construct synthetic design hydrographs. The bivariate flood frequency analysis used in the construction of synthetic design hydrographs takes into account the dependence between peak discharges and hydrograph volumes, and represents the shape of the hydrograph. The latter is modeled using a probability density function while the dependence between the design variables peak discharge and hydrograph volume is modeled using a copula. We applied this approach to a set of eight mountainous catchments in Switzerland to construct catchment-specific and season-specific design hydrographs for a control and three scenario climates. Our work demonstrates that projected climate changes have an impact not only on peak discharges but also on hydrograph volumes and on hydrograph shapes both at an annual and at a seasonal scale. These changes are not necessarily proportional which implies that climate impact assessments on future floods should consider more flood characteristics than just flood peaks. Copyright © 2017. Published by Elsevier B.V.
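The dependence step in the bivariate analysis can be sketched with a Gaussian copula: draw correlated standard normals, map them through the normal CDF to uniforms, then through inverse marginal CDFs to the design variables. The exponential margins, the correlation value, and the scale factors below are illustrative placeholders; the study fits the copula and the margins per catchment and season:

```python
import numpy as np
from math import erf, sqrt

def sample_peak_volume(n, rho, seed=0):
    """(peak, volume) pairs from a Gaussian copula with exponential margins."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))  # Phi(z): normals -> uniforms
    peak = -np.log(1.0 - u[:, 0]) * 50.0     # exponential margin, mean 50 (m^3/s)
    volume = -np.log(1.0 - u[:, 1]) * 10.0   # exponential margin, mean 10 (hm^3)
    return peak, volume

peak, vol = sample_peak_volume(5000, rho=0.8)
```

Sampling joint (peak, volume) pairs rather than peaks alone is what lets the synthetic design hydrographs preserve the dependence that a univariate peak-flow analysis discards.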
Burned area detection based on Landsat time series in savannas of southern Burkina Faso
NASA Astrophysics Data System (ADS)
Liu, Jinxiu; Heiskanen, Janne; Maeda, Eduardo Eiji; Pellikka, Petri K. E.
2018-02-01
West African savannas are subject to regular fires, which have impacts on vegetation structure, biodiversity and carbon balance. An efficient and accurate mapping of burned area associated with seasonal fires can greatly benefit decision making in land management. Since coarse resolution burned area products cannot meet the accuracy needed for fire management and climate modelling at local scales, the medium resolution Landsat data is a promising alternative for local scale studies. In this study, we developed an algorithm for continuous monitoring of annual burned areas using Landsat time series. The algorithm is based on burned pixel detection using harmonic model fitting with Landsat time series and breakpoint identification in the time series data. This approach was tested in a savanna area in southern Burkina Faso using 281 images acquired between October 2000 and April 2016. An overall accuracy of 79.2% was obtained with balanced omission and commission errors. This represents a significant improvement in comparison with MODIS burned area product (67.6%), which had more omission errors than commission errors, indicating underestimation of the total burned area. By observing the spatial distribution of burned areas, we found that the Landsat based method misclassified cropland and cloud shadows as burned areas due to the similar spectral response, and MODIS burned area product omitted small and fragmented burned areas. The proposed algorithm is flexible and robust against decreased data availability caused by clouds and Landsat 7 missing lines, therefore having a high potential for being applied in other landscapes in future studies.
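The core of the algorithm, harmonic model fitting plus break detection, can be sketched as follows: fit a seasonal harmonic to an undisturbed training window, then flag the first observation whose residual drops well below the model. The synthetic index series, the residual threshold, and the 16-day revisit below are illustrative assumptions:

```python
import numpy as np

def harmonic_predict(t_train, y_train, t_all, period=365.25):
    """Fit mean + one annual harmonic on the training window,
    then predict for all dates."""
    def design(tt):
        w = 2.0 * np.pi * tt / period
        return np.column_stack([np.ones_like(tt), np.sin(w), np.cos(w)])
    coef, *_ = np.linalg.lstsq(design(t_train), y_train, rcond=None)
    return design(t_all) @ coef

rng = np.random.default_rng(3)
t = np.arange(0.0, 6 * 365, 16.0)            # ~16-day Landsat revisit, 6 years
nbr = (0.5 + 0.1 * np.sin(2 * np.pi * t / 365.25)
       + rng.normal(0.0, 0.02, t.size))      # synthetic NBR-like index
nbr[t > 4 * 365] -= 0.3                      # abrupt post-fire drop

train = t < 3 * 365                          # fit on undisturbed years only
resid = nbr - harmonic_predict(t[train], nbr[train], t)
detected = t[np.argmax(resid < -0.15)]       # first large negative departure
```

In practice the fit is updated continuously and the threshold must also reject the cropland and cloud-shadow departures that the abstract identifies as the main source of commission error.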
Dynamical density delay maps: simple, new method for visualising the behaviour of complex systems
2014-01-01
Background Physiologic signals, such as cardiac interbeat intervals, exhibit complex fluctuations. However, capturing important dynamical properties, including nonstationarities may not be feasible from conventional time series graphical representations. Methods We introduce a simple-to-implement visualisation method, termed dynamical density delay mapping (“D3-Map” technique) that provides an animated representation of a system’s dynamics. The method is based on a generalization of conventional two-dimensional (2D) Poincaré plots, which are scatter plots where each data point, x(n), in a time series is plotted against the adjacent one, x(n + 1). First, we divide the original time series, x(n) (n = 1,…, N), into a sequence of segments (windows). Next, for each segment, a three-dimensional (3D) Poincaré surface plot of x(n), x(n + 1), h[x(n),x(n + 1)] is generated, in which the third dimension, h, represents the relative frequency of occurrence of each (x(n),x(n + 1)) point. This 3D Poincaré surface is then chromatised by mapping the relative frequency h values onto a colour scheme. We also generate a colourised 2D contour plot from each time series segment using the same colourmap scheme as for the 3D Poincaré surface. Finally, the original time series graph, the colourised 3D Poincaré surface plot, and its projection as a colourised 2D contour map for each segment, are animated to create the full “D3-Map.” Results We first exemplify the D3-Map method using the cardiac interbeat interval time series from a healthy subject during sleeping hours. The animations uncover complex dynamical changes, such as transitions between states, and the relative amount of time the system spends in each state. We also illustrate the utility of the method in detecting hidden temporal patterns in the heart rate dynamics of a patient with atrial fibrillation. The videos, as well as the source code, are made publicly available. 
Conclusions Animations based on density delay maps provide a new way of visualising dynamical properties of complex systems not apparent in time series graphs or standard Poincaré plot representations. Trainees in a variety of fields may find the animations useful as illustrations of fundamental but challenging concepts, such as nonstationarity and multistability. For investigators, the method may facilitate data exploration. PMID:24438439
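The windowed delay-pair density at the heart of the D3-Map can be sketched in a few lines (a minimal sketch; the window length, bin count, and random-walk input below are illustrative stand-ins, not the paper's settings):

```python
import numpy as np

def density_delay_map(x, bins=30, window=500, step=500):
    """For each window of the series x, build the 2D histogram of
    (x(n), x(n+1)) delay pairs -- the relative-frequency surface h
    that colourises the Poincare plot in each D3-Map frame."""
    # shared bin edges so successive animation frames are comparable
    edges = np.linspace(x.min(), x.max(), bins + 1)
    frames = []
    for start in range(0, len(x) - window, step):
        seg = x[start:start + window]
        h, _, _ = np.histogram2d(seg[:-1], seg[1:], bins=[edges, edges])
        frames.append(h / h.sum())          # normalise to relative frequency
    return frames, edges

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(2000))    # stand-in for an interbeat series
frames, edges = density_delay_map(x)
```

Animating `frames` (e.g. with matplotlib) as filled contours reproduces the 2D projection of the D3-Map; the 3D surface is the same array drawn as a height field.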
Mayaud, C; Wagner, T; Benischke, R; Birk, S
2014-04-16
The Lurbach karst system (Styria, Austria) is drained by two major springs and replenished by both autogenic recharge from the karst massif itself and a sinking stream that originates in low-permeability schists (allogenic recharge). Detailed data from two events recorded during a tracer experiment in 2008 demonstrate that an overflow from one of the sub-catchments to the other is activated if the discharge of the main spring exceeds a certain threshold. Time series analysis (autocorrelation and cross-correlation) was applied to examine to what extent the various available methods support the identification of the transient inter-catchment flow observed in this binary karst system. As inter-catchment flow is found to be intermittent, the evaluation focused on single events. In order to support the interpretation of the results from the time series analysis, a simplified groundwater flow model was built using MODFLOW. The groundwater model is based on the current conceptual understanding of the karst system and represents a synthetic karst aquifer to which the same methods were applied. Using the wetting capability package of MODFLOW, the model simulated an overflow similar to that observed during the tracer experiment. Various intensities of allogenic recharge were employed to generate synthetic discharge data for the time series analysis. In addition, geometric and hydraulic properties of the karst system were varied in several model scenarios. This approach helps to identify effects of allogenic recharge and aquifer properties in the results from the time series analysis. Comparing the results from the time series analysis of the observed data with those of the synthetic data, good agreement was found. For instance, the cross-correlograms show similar patterns with respect to time lags and maximum cross-correlation coefficients if appropriate hydraulic parameters are assigned to the groundwater model. 
The comparable behaviors of the real and the synthetic system allow us to deduce that similar aquifer properties are relevant in both systems. In particular, the heterogeneity of aquifer parameters appears to be a controlling factor. Moreover, the location of the overflow connecting the sub-catchments of the two springs is found to be of primary importance for the occurrence of inter-catchment flow. This further supports our current understanding of an overflow zone located in the upper part of the Lurbach karst aquifer. Thus, time series analysis of single events can potentially be used to characterize transient inter-catchment flow behavior of karst systems.
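The event-scale cross-correlation analysis described above can be sketched as follows (a minimal illustration on synthetic discharge series; the 5-step delay and smoothing are assumed stand-ins for the spring-to-spring lag, not values from the study):

```python
import numpy as np

def cross_correlogram(a, b, max_lag):
    """Normalised cross-correlation r_ab(k) for lags -max_lag..max_lag.
    A peak at positive k means b lags behind a by k time steps."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.mean(a[:len(a) - k] * b[k:]) if k >= 0
                  else np.mean(a[-k:] * b[:len(b) + k]) for k in lags])
    return lags, r

# synthetic 'spring' responses: q2 is q1 delayed by 5 time steps
rng = np.random.default_rng(1)
q1 = rng.standard_normal(1000)
q1 = np.convolve(q1, np.ones(10) / 10, mode="same")   # smooth, recession-like
q2 = np.roll(q1, 5)
lags, r = cross_correlogram(q1, q2, 20)
```

The lag of the correlogram's maximum is the kind of transfer-time signature the paper compares between observed and MODFLOW-generated discharge.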
Legendre-tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1986-01-01
The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.
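The accuracy of truncated Legendre expansions that underlies the tau method can be illustrated numerically (a sketch of spectral convergence for a smooth function only, not the tau discretisation of the delay equation itself; the test function is arbitrary):

```python
import numpy as np
from numpy.polynomial import legendre

# Approximate a smooth function on [-1, 1] by truncated Legendre series
# of increasing order and record the maximum error at each truncation.
f = lambda t: np.exp(-t) * np.sin(3 * t)
t = np.linspace(-1, 1, 400)

errs = []
for n in (4, 8, 16):                        # truncation orders
    coef = legendre.legfit(t, f(t), n)      # least-squares Legendre coefficients
    errs.append(np.max(np.abs(legendre.legval(t, coef) - f(t))))
```

For analytic functions the error drops faster than any power of the truncation order, which is why a modest number of Legendre modes suffices in tau-type schemes.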
Panorama of acute diarrhoeal diseases in Mexico.
Cifuentes, E; Hernández, J E; Venczel, L; Hurtado, M
1999-09-01
We examined the recent panorama of ADD-related deaths in Mexico in an effort to assess the overall impact of control measures that may vary in space and time. We pay particular attention to mortality rates recorded from 1985 to 1995, that is, before and after the cholera emergency. The aim is to focus on the social groups at risk, using time series data represented in the form of images and produced by a geographic information system (GIS). We show the potential of such methods to define populations at risk and support the decision process.
Legendre-Tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1983-01-01
The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximations is made.
Technology Utilization Conference Series, volume 1
NASA Technical Reports Server (NTRS)
1975-01-01
The design, development, and results of a series of technology utilization conferences are presented. The conference series represents the development of a viable and successful means of encouraging the transfer of technology to the minority business community.
Holocene monsoon variability as resolved in small complex networks from palaeodata
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Breitenbach, S.; Kurths, J.
2012-04-01
To understand the impacts of Holocene precipitation and/or temperature changes in the spatially extensive and complex region of Asia, it is promising to combine the information from palaeo archives, such as stalagmites, tree rings and marine sediment records from India and China. To this end, complex networks present a powerful and increasingly popular tool for the description and analysis of interactions within complex spatially extended systems in the geosciences and therefore appear to be well suited to this task. Such a network is typically constructed by thresholding a similarity matrix which in turn is based on a set of time series representing the (Earth) system dynamics at different locations. Looking into the pre-instrumental past, information about the system's processes and thus its state is available only through the reconstructed time series, which -- most often -- are irregularly sampled in time and space. Interpolation techniques are often used for signal reconstruction, but they introduce additional errors, especially when records have large gaps. We have recently developed and extensively tested methods to quantify linear (Pearson correlation) and non-linear (mutual information) similarity in the presence of heterogeneous and irregular sampling. To illustrate our approach we derive small networks from significantly correlated (linked) time series which are supposed to capture the underlying Asian Monsoon dynamics. We assess and discuss whether and where links and directionalities in these networks from irregularly sampled time series can be soundly detected. Finally, we investigate the role of the Northern Hemispheric temperature with respect to the correlation patterns and find that those derived from warm phases (e.g. Medieval Warm Period) are significantly different from patterns found in cold phases (e.g. Little Ice Age).
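Constructing a network by thresholding a similarity matrix can be sketched as follows (assuming regular, equal-length series for brevity; the paper's estimators additionally handle irregular sampling, which this sketch does not):

```python
import numpy as np

def correlation_network(records, threshold):
    """Build an unweighted network from a set of proxy series:
    link two sites when |Pearson r| exceeds the threshold."""
    sim = np.corrcoef(records)                  # similarity matrix
    adj = (np.abs(sim) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)                    # no self-links
    return adj

# six sites sharing a common 'monsoon' signal plus four pure-noise sites
rng = np.random.default_rng(2)
common = rng.standard_normal(300)
records = np.array([common + 0.4 * rng.standard_normal(300) for _ in range(6)]
                   + [rng.standard_normal(300) for _ in range(4)])
adj = correlation_network(records, threshold=0.5)
```

Sites driven by the shared signal form a densely connected cluster, while the noise sites stay isolated; in the palaeo setting the cluster structure is what carries the monsoon-dynamics interpretation.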
NASA Astrophysics Data System (ADS)
Fatichi, S.; Ivanov, V. Y.; Caporali, E.
2013-04-01
This study extends a stochastic downscaling methodology to generation of an ensemble of hourly time series of meteorological variables that express possible future climate conditions at a point-scale. The stochastic downscaling uses general circulation model (GCM) realizations and an hourly weather generator, the Advanced WEather GENerator (AWE-GEN). Marginal distributions of factors of change are computed for several climate statistics using a Bayesian methodology that can weight GCM realizations based on the model relative performance with respect to a historical climate and a degree of disagreement in projecting future conditions. A Monte Carlo technique is used to sample the factors of change from their respective marginal distributions. As a comparison with traditional approaches, factors of change are also estimated by averaging GCM realizations. With either approach, the derived factors of change are applied to the climate statistics inferred from historical observations to re-evaluate parameters of the weather generator. The re-parameterized generator yields hourly time series of meteorological variables that can be considered to be representative of future climate conditions. In this study, the time series are generated in an ensemble mode to fully reflect the uncertainty of GCM projections, climate stochasticity, as well as uncertainties of the downscaling procedure. Applications of the methodology in reproducing future climate conditions for the periods of 2000-2009, 2046-2065 and 2081-2100, using the period of 1962-1992 as the historical baseline are discussed for the location of Firenze (Italy). The inferences of the methodology for the period of 2000-2009 are tested against observations to assess reliability of the stochastic downscaling procedure in reproducing statistics of meteorological variables at different time scales.
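The factor-of-change sampling step can be sketched as follows (all numbers are illustrative stand-ins, not values from the study; equal GCM weights are assumed here where the paper derives Bayesian weights from model performance and agreement):

```python
import numpy as np

# Each GCM projects a multiplicative change (future/historical ratio) in a
# climate statistic; Monte Carlo sampling of these factors propagates the
# inter-model uncertainty into the re-parameterised weather generator.
rng = np.random.default_rng(3)

hist_mean_precip = 2.4                                    # mm/day (illustrative)
gcm_factors = np.array([1.05, 0.92, 1.10, 0.98, 1.03])    # illustrative ratios
weights = np.ones(len(gcm_factors)) / len(gcm_factors)    # equal weights here

mu = np.sum(weights * gcm_factors)                        # weighted ensemble mean
sigma = np.sqrt(np.sum(weights * (gcm_factors - mu) ** 2))
samples = rng.normal(mu, sigma, size=1000)                # Monte Carlo ensemble
future_means = hist_mean_precip * samples                 # statistic handed to AWE-GEN
```

Each sampled `future_means` value would re-parameterise one member of the ensemble of generated hourly series, so the spread of the ensemble reflects GCM disagreement rather than a single averaged projection.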
NASA Astrophysics Data System (ADS)
Zheng, Jinde; Pan, Haiyang; Cheng, Junsheng
2017-02-01
To detect the incipient failure of rolling bearings in a timely manner and accurately locate the fault, a novel rolling bearing fault diagnosis method is proposed based on the composite multiscale fuzzy entropy (CMFE) and ensemble support vector machines (ESVMs). Fuzzy entropy (FuzzyEn), an improvement of sample entropy (SampEn), is a new nonlinear method for measuring the complexity of time series. Since FuzzyEn (or SampEn) at a single scale cannot reflect the complexity effectively, multiscale fuzzy entropy (MFE) is developed by defining the FuzzyEns of coarse-grained time series, which represent the system dynamics at different scales. However, the MFE values are affected by the data length, especially when the data are not long enough. By combining information from multiple coarse-grained time series at the same scale, the CMFE algorithm is proposed in this paper to enhance MFE, as well as FuzzyEn. Compared with MFE, as the scale factor increases, CMFE obtains much more stable and consistent values for a short-term time series. In this paper CMFE is employed to measure the complexity of vibration signals of rolling bearings and is applied to extract the nonlinear features hidden in the vibration signals. The physical meaning of CMFE, and why it is suitable for rolling bearing fault diagnosis, is also explored. Based on these, to achieve automatic fault diagnosis, an ensemble-SVM-based multi-classifier is constructed for the intelligent classification of fault features. Finally, the proposed fault diagnosis method is applied to experimental data and the results indicate that it can effectively distinguish different fault categories and severities of rolling bearings.
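Coarse-graining and the composite averaging that distinguishes CMFE from MFE can be sketched as follows (a minimal sketch using a common FuzzyEn formulation with a Gaussian membership function; the parameters m = 2 and r = 0.2·SD are conventional choices, not necessarily the paper's):

```python
import numpy as np

def fuzzy_entropy(x, m, r):
    """FuzzyEn: template similarity graded by the fuzzy membership
    exp(-(d/r)**2) instead of SampEn's hard tolerance. r is an
    absolute tolerance here (fixed by the caller)."""
    x = np.asarray(x, dtype=float)

    def phi(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        templ = templ - templ.mean(axis=1, keepdims=True)   # remove local baseline
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        sim = np.exp(-(d / r) ** 2)
        np.fill_diagonal(sim, 0.0)                          # exclude self-matches
        return sim.sum() / (len(templ) * (len(templ) - 1))

    return np.log(phi(m) / phi(m + 1))

def cmfe(x, scale, m=2):
    """Composite MFE: average FuzzyEn over all `scale` coarse-grained
    series (one per starting offset) rather than only the first one."""
    r = 0.2 * np.std(x)            # tolerance fixed from the original series
    ents = []
    for k in range(scale):
        n = (len(x) - k) // scale
        cg = x[k:k + n * scale].reshape(n, scale).mean(axis=1)
        ents.append(fuzzy_entropy(cg, m, r))
    return float(np.mean(ents))

rng = np.random.default_rng(4)
noise = rng.standard_normal(600)
values = [cmfe(noise, s) for s in (1, 2, 3)]   # entropy across scales
```

Averaging over all coarse-graining offsets is what stabilises the estimate for short records: each offset alone is a valid MFE series, and their mean has lower variance.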
Information-theoretical noninvasive damage detection in bridge structures
NASA Astrophysics Data System (ADS)
Sudu Ambegedara, Amila; Sun, Jie; Janoyan, Kerop; Bollt, Erik
2016-11-01
Damage detection of mechanical structures such as bridges is an important research problem in civil engineering. Using spatially distributed sensor time series data collected from a recent experiment on a local bridge in Upstate New York, we study noninvasive damage detection using information-theoretical methods. Several findings emerge. First, the time series data, which represent accelerations measured at the sensors, more closely follow a Laplace distribution than a normal distribution, allowing us to develop parameter estimators for various information-theoretic measures such as entropy and mutual information. Second, as damage is introduced by the removal of bolts of the first diaphragm connection, the interaction between spatially nearby sensors as measured by mutual information becomes weaker, suggesting that the bridge is "loosened." Finally, using a proposed optimal mutual information interaction procedure to prune away indirect interactions, we found that the primary direction of interaction or influence aligns with the traffic direction on the bridge even after damaging the bridge.
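The Laplace parameter and entropy estimators alluded to above can be sketched as follows (standard maximum-likelihood estimates; the simulated "acceleration" data are a stand-in for the sensor records, and the paper's full mutual-information machinery is not shown):

```python
import numpy as np

def laplace_mle(x):
    """ML estimates for a Laplace distribution: location = sample median,
    scale b = mean absolute deviation about the median."""
    mu = np.median(x)
    b = np.mean(np.abs(x - mu))
    return mu, b

def laplace_entropy(b):
    """Differential entropy of Laplace(mu, b): 1 + ln(2b)."""
    return 1.0 + np.log(2.0 * b)

rng = np.random.default_rng(5)
accel = rng.laplace(0.0, 0.5, size=20000)   # stand-in for sensor accelerations
mu, b = laplace_mle(accel)
h = laplace_entropy(b)
```

Because the Laplace family has closed-form entropy, fitting (mu, b) per sensor gives plug-in entropy (and, pairwise, mutual-information) estimates far more cheaply than nonparametric density estimation.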
NASA Astrophysics Data System (ADS)
Jiang, Shi-Mei; Cai, Shi-Min; Zhou, Tao; Zhou, Pei-Ling
2008-06-01
The two-phase behaviour in financial markets corresponds to a bifurcation phenomenon, in which the conditional probability changes from an unimodal to a bimodal distribution. We investigate the bifurcation phenomenon in the Hang Seng index. It is observed that the bifurcation phenomenon in financial indices is not universal, but specific to certain conditions. For the Hang Seng index and randomly generated time series, the phenomenon emerges only when the power-law exponent of the absolute increment distribution is between 1 and 2 with an appropriate period. Simulations on a randomly generated time series suggest that the bifurcation phenomenon itself is governed by the statistics of absolute increments, and thus may not reflect essential financial behaviours. However, even under the same distribution of absolute increments, the range where the bifurcation phenomenon occurs differs greatly between the real market and artificial data, which may reflect certain market information.
Cross over of recurrence networks to random graphs and random geometric graphs
NASA Astrophysics Data System (ADS)
Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.
2017-02-01
Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems where the connection between two nodes is limited by the recurrence threshold. This condition makes the topology of every recurrence network unique, with the degree distribution determined by the probability density variations of the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures and show how the recurrence networks are different from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding a sufficient amount of noise to the time series and into the classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing the small changes in the network characteristics.
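Building a recurrence network from a trajectory can be sketched as follows (the Hénon map and the threshold value are illustrative choices, not necessarily the attractors or settings used in the paper):

```python
import numpy as np

def recurrence_network(traj, eps):
    """Adjacency matrix of a recurrence network: nodes are state vectors,
    linked when their distance falls below the recurrence threshold eps."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=2)
    adj = (d < eps).astype(int)
    np.fill_diagonal(adj, 0)        # recurrence plots exclude the main diagonal
    return adj

# short trajectory of the Henon map as the representative attractor
x, y = 0.1, 0.1
pts = []
for _ in range(1200):
    x, y = 1.0 - 1.4 * x * x + y, 0.3 * x
    pts.append((x, y))
traj = np.array(pts[200:])          # drop transients
adj = recurrence_network(traj, eps=0.1)
```

Standard network measures (degree distribution, path length, clustering) are then computed on `adj`; adding observational noise to `traj` is what drives the crossover towards a random geometric graph, and enlarging `eps` towards the attractor size drives it towards a classical random graph.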
Executive Functions and Prefrontal Cortex: A Matter of Persistence?
Ball, Gareth; Stokes, Paul R.; Rhodes, Rebecca A.; Bose, Subrata K.; Rezek, Iead; Wink, Alle-Meije; Lord, Louis-David; Mehta, Mitul A.; Grasby, Paul M.; Turkheimer, Federico E.
2011-01-01
Executive function is thought to originate from the dynamics of frontal cortical networks. We examined the dynamic properties of the blood oxygen level dependent time-series measured with functional MRI (fMRI) within the prefrontal cortex (PFC) to test the hypothesis that temporally persistent neural activity underlies performance in three tasks of executive function. A numerical estimate of signal persistence, the Hurst exponent, postulated to represent the coherent firing of cortical networks, was determined and correlated with task performance. Increasing persistence in the lateral PFC was shown to correlate with improved performance during an n-back task. Conversely, we observed a correlation between persistence and increasing commission error – indicating a failure to inhibit a prepotent response – during a Go/No-Go task. We propose that persistence within the PFC reflects dynamic network formation and these findings underline the importance of frequency analysis of fMRI time-series in the study of executive functions. PMID:21286223
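A rescaled-range (R/S) estimate of the Hurst exponent, one common way to quantify the persistence analysed above, can be sketched as follows (the paper's exact estimator may differ; the window sizes and test signals are illustrative):

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(R/S) against log(window size)."""
    x = np.asarray(x, dtype=float)
    rs = []
    for w in window_sizes:
        vals = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            z = np.cumsum(seg - seg.mean())     # cumulative deviations
            r = z.max() - z.min()               # range of the deviations
            s = seg.std()
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    h, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return h

rng = np.random.default_rng(6)
h_noise = hurst_rs(rng.standard_normal(4096))            # near 0.5: no memory
h_walk = hurst_rs(np.cumsum(rng.standard_normal(4096)))  # near 1: persistent
```

Uncorrelated noise yields an exponent near 0.5, while a strongly persistent signal yields a value near 1; in the study it is this exponent, computed per voxel time-series, that is correlated with task performance.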
Observations of sediment transport on the Amazon subaqueous delta
Sternberg, R.W.; Cacchione, D.A.; Paulson, B.; Kineke, G.C.; Drake, D.E.
1996-01-01
A 19-day time series of fluid, flow, and suspended-sediment characteristics in the benthic boundary layer is analyzed to identify major sedimentary processes active over the prodelta region of the Amazon subaqueous delta. Measurements were made by the benthic tripod GEOPROBE placed on the seabed at 65 m depth near the base of the deltaic foreset beds from 11 February to 3 March 1990, during the time of rising water and maximum sediment discharge of the Amazon River; the observations included hourly measurements of velocity and suspended-sediment concentration at four levels above the seabed; waves and tides; and seabed elevation. Results of the first 14-day period of the time series record indicate that sediment resuspension occurred as a result of tidal currents (91% of the time) and surface gravity waves (46% of the time). Observations of suspended sediment indicated that particle flux in this region is 0.4-2% of the flux measured on the adjacent topset deposits and is directed to the north and landward relative to the Brazilian coast (268°T). Fortnightly variability is strong, with particle fluxes during spring tides five times greater than during neap tides. On the 15th day of the data record, a rapid sedimentation event was documented in which 44 cm of sediment was deposited at the study site over a 14-h period. Evaluation of various mechanisms of mass sediment movement suggests that this event represents downslope migration of fluid muds from the upper foreset beds that were set in motion by boundary shear stresses generated by waves and currents. This transport mechanism appears to occur episodically and may represent a major source of sediment to the lower foreset-bottomset region of the subaqueous delta.
The Golden Age of Greece: Imperial Democracy 500-400 B.C. A Unit of Study for Grades 6-12.
ERIC Educational Resources Information Center
Cheoros, Peter; And Others
This unit is one of a series that represents specific moments in history from which students focus on the meanings of landmark events. This unit explores Greece's most glorious century, the high point of Athenian culture. Rarely has so much genius been concentrated in one small region over such a short period of time. Students discover in studying…
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
The Origins of the Cold War: A Unit of Study for Grades 9-12.
ERIC Educational Resources Information Center
King, Lisa
This unit is one of a series that represents specific moments in history from which students focus on the meanings of landmark events. The events of 1945 are regarded widely as a turning point in 20th century history, a point when the United States unequivocally took its place as a world power, at a time when Americans had a strong but…
Hypothesis Testing Using Spatially Dependent Heavy Tailed Multisensor Data
2014-12-01
Office of Research, 113 Bowne Hall, Syracuse, NY 13244-1200. ABSTRACT: Hypothesis Testing Using Spatially Dependent Heavy-Tailed Multisensor Data. Report... consistent with the null hypothesis of linearity and can be used to estimate the distribution of a test statistic that can discriminate between the null... Test for nonlinearity: the histogram is generated using the surrogate data; the statistic of the original time series is represented by the solid line.
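Surrogate data for a linearity test of the kind described are typically generated by Fourier phase randomisation; a minimal sketch follows (the report's specific surrogate algorithm is not stated in this record, so this is the standard phase-randomised variant under that assumption):

```python
import numpy as np

def phase_surrogate(x, rng):
    """Fourier-phase-randomised surrogate: same amplitude spectrum (hence
    same autocorrelation) as x, but phases drawn uniformly -- consistent
    with the null hypothesis of a linear Gaussian process."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0                   # DC component must stay real
    if n % 2 == 0:
        phases[-1] = 0              # Nyquist bin must stay real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

rng = np.random.default_rng(7)
x = np.sin(np.linspace(0, 40 * np.pi, 1024)) + 0.3 * rng.standard_normal(1024)
s = phase_surrogate(x, rng)
```

Computing the test statistic on many such surrogates yields the histogram under the null; a statistic for the original series falling in the histogram's tail (the "solid line" in the report's figure) rejects linearity.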
Michael R. Wagner
1991-01-01
Patterns that occur in nature are the result of a complex set of current and historical factors that interact with one another and the adaptive plasticity of plants. Scientists are forced to assess such processes on the basis of a series of "snapshots" over a relatively short time that represent only part of the grand pattern. In the case of insects interacting...
Recent Transport History of Fukushima Radioactivity in the Northeast Pacific Ocean.
Smith, John N; Rossi, Vincent; Buesseler, Ken O; Cullen, Jay T; Cornett, Jack; Nelson, Richard; Macdonald, Alison M; Robert, Marie; Kellogg, Jonathan
2017-09-19
The large inventory of radioactivity released during the March 2011 Fukushima Dai-ichi nuclear reactor accident in Japan spread rapidly across the North Pacific Ocean and was first observed at the westernmost station on Line P, an oceanographic sampling line extending 1500 km westward of British Columbia (BC), Canada, in June 2012. Here, time series measurements of 134Cs and 137Cs in seawater on Line P and on the CLIVAR-P16N 152°W line reveal the recent transport history of the Fukushima radioactivity tracer plume through the northeast Pacific Ocean. During 2013 and 2014 the Fukushima plume spread onto the Canadian continental shelf and by 2015 and early 2016 it reached 137Cs values of 6-8 Bq/m3 in surface water along Line P. Ocean circulation model simulations that are consistent with the time series measurements of Fukushima 137Cs indicate that the 2015-2016 results represent maximum tracer levels on Line P and that they will begin to decline in 2017-2018. The current elevated Fukushima 137Cs levels in seawater in the eastern North Pacific are equivalent to fallout background levels of 137Cs that prevailed during the 1970s and do not represent a radiological threat to human health or the environment.
NASA Astrophysics Data System (ADS)
Clarke, Hannah; Done, Fay; Casadio, Stefano; Mackin, Stephen; Dinelli, Bianca Maria; Castelli, Elisa
2016-08-01
The long time-series of observations made by the Along Track Scanning Radiometers (ATSR) missions represents a valuable resource for a wide range of research and EO applications. With the advent of ESA's Long-Term Data Preservation (LTDP) programme, thought has turned to the preservation and improved understanding of such long time-series, to support their continued exploitation in both existing and new areas of research, bringing the possibility of improving the existing data set and informing and contributing towards future missions. For this reason, the 'Long Term Stability of the ATSR Instrument Series: SWIR Calibration, Cloud Masking and SAA' project, commonly known as the ATSR Long Term Stability (or ALTS) project, is designed to explore the key characteristics of the data set and new and innovative ways of enhancing and exploiting it. Work has focussed on: a new approach to the assessment of Short Wave Infra-Red (SWIR) channel calibration; development of a new method for Total Column Water Vapour (TCWV) retrieval; study of the South Atlantic Anomaly (SAA); Radiative Transfer (RT) modelling for ATSR; providing AATSR observations with their location in the original instrument grid; strategies for the retrieval and archiving of historical ATSR documentation; study of TCWV retrieval over land; and development of new methods for cloud masking. This paper provides an overview of these activities and illustrates the importance of preserving and understanding 'old' data for continued use in the future.
NASA Astrophysics Data System (ADS)
Wang, Dong; Ding, Hao; Singh, Vijay P.; Shang, Xiaosan; Liu, Dengfeng; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing
2015-05-01
For scientific and sustainable management of water resources, hydrologic and meteorologic data series need to be often extended. This paper proposes a hybrid approach, named WA-CM (wavelet analysis-cloud model), for data series extension. Wavelet analysis has time-frequency localization features, known as a "mathematics microscope," that can decompose and reconstruct hydrologic and meteorologic series by wavelet transform. The cloud model is a mathematical representation of fuzziness and randomness and has strong robustness for uncertain data. The WA-CM approach first employs the wavelet transform to decompose the measured nonstationary series and then uses the cloud model to develop an extension model for each decomposition layer series. The final extension is obtained by summing the results of extension of each layer. Two kinds of meteorologic and hydrologic data sets with different characteristics and different influence of human activity from six (three pairs) representative stations are used to illustrate the WA-CM approach. The approach is also compared with four other methods, which are conventional correlation extension method, Kendall-Theil robust line method, artificial neural network method (back propagation, multilayer perceptron, and radial basis function), and single cloud model method. To evaluate the model performance completely and thoroughly, five measures are used, which are relative error, mean relative error, standard deviation of relative error, root mean square error, and Theil inequality coefficient. Results show that the WA-CM approach is effective, feasible, and accurate and is found to be better than the other four methods compared. The theory employed and the approach developed here can be applied to extension of data in other areas as well.
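The wavelet decompose/reconstruct step of WA-CM can be sketched with a one-level Haar transform (Haar is a stand-in here, since the record does not specify the paper's wavelet basis; the cloud-model extension of each layer is not shown):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: split x (even length)
    into a smooth approximation layer and a detail layer."""
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step (perfect reconstruction)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

rng = np.random.default_rng(8)
series = np.cumsum(rng.standard_normal(512))   # stand-in hydrologic record
a, d = haar_step(series)
recon = haar_inverse(a, d)
```

In WA-CM each layer (`a` and `d`, recursed to more levels) would be extended independently by the cloud model, and the layer extensions summed via the inverse transform to give the extended series.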
ERIC Educational Resources Information Center
Stensrud, Robert
2007-01-01
This study describes a series of focus groups conducted with employers. A series of 10 focus groups was conducted in 10 different communities in a midwestern state, with small, medium, and large communities represented. A total of 67 participants, representing human resources offices and direct supervisors, responded to questions regarding…
VPV--The velocity profile viewer user manual
Donovan, John M.
2004-01-01
The Velocity Profile Viewer (VPV) is a tool for visualizing time series of velocity profiles developed by the U.S. Geological Survey (USGS). The USGS uses VPV to preview and present measured velocity data from acoustic Doppler current profilers and simulated velocity data from three-dimensional estuarine, river, and lake hydrodynamic models. The data can be viewed as an animated three-dimensional profile or as a stack of time-series graphs that each represents a location in the water column. The graphically displayed data are shown at each time step like frames of animation. The animation can play at several different speeds or can be suspended on one frame. The viewing angle and time can be manipulated using mouse interaction. A number of options control the appearance of the profile and the graphs. VPV cannot edit or save data, but it can create a PostScript file showing the velocity profile in three dimensions. This user manual describes how to use each of these features. VPV is available and can be downloaded for free from the World Wide Web at http://ca.water.usgs.gov/program/sfbay/vpv.
Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems
Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei
2015-01-01
The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and auto-correlation of volatilities for the fast and medium modes exhibit similar behaviors as those of the full time series, i.e., these characteristics are rather robust across time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled at a couple of days, while by the medium mode when returns are sampled at dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode. PMID:26427063
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Giraldez, Julieta; Gruchalla, Kenny
2016-11-01
Duke Energy, Alstom Grid, and the National Renewable Energy Laboratory teamed up to better understand the impacts of solar photovoltaics (PV) on distribution system operations. The core goal of the project is to compare the operational - specifically, voltage regulation - impacts of three classes of PV inverter operations: 1.) Active power only (Baseline); 2.) Local inverter control (e.g., PF ≠ 1, Q(V), etc.); and 3.) Integrated volt-VAR control (centralized through the distribution management system). These comparisons were made using multiple approaches, each of which represents an important research-and-development effort on its own: a) Quasi-steady-state time-series modeling for approximately 1 year of operations using the Alstom eTerra (DOTS) system as a simulation engine, augmented by Python scripting for scenario and time-series control and using external models for an advanced inverter; b) Power-hardware-in-the-loop (PHIL) testing of a 500-kVA-class advanced inverter and traditional voltage regulating equipment. This PHIL testing used cosimulation to link full-scale feeder simulation using DOTS in real time to hardware testing; c) Advanced visualization to provide improved insights into time-series results and other PV operational impacts; and d) Cost-benefit analysis to compare the financial and business-model impacts of each integration approach.
Data-driven discovery of partial differential equations
Rudy, Samuel H.; Brunton, Steven L.; Proctor, Joshua L.; Kutz, J. Nathan
2017-01-01
We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system from time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains, including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg–de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable. PMID:28508044
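The sparsity-promoting regression can be sketched on the simplest canonical case, the diffusion equation. The candidate library, the threshold value, and the use of plain sequential thresholded least squares (rather than the authors' exact STRidge variant) are simplifying assumptions for illustration.

```python
import numpy as np

# Analytic heat-equation solution u_t = D*u_xx with D = 1 (assumed test case).
D = 1.0
x = np.linspace(-5, 5, 201)
t = np.linspace(0.5, 1.5, 101)
X, T = np.meshgrid(x, t)                       # shapes (nt, nx)
U = np.exp(-X**2 / (4 * D * T)) / np.sqrt(4 * np.pi * D * T)

dx, dt = x[1] - x[0], t[1] - t[0]
U_t = np.gradient(U, dt, axis=0)               # numerical time derivative
U_x = np.gradient(U, dx, axis=1)
U_xx = np.gradient(U_x, dx, axis=1)

# Candidate library Theta and target u_t, flattened over space-time.
names = ["1", "u", "u_x", "u_xx", "u*u_x"]
Theta = np.column_stack([np.ones(U.size), U.ravel(), U_x.ravel(),
                         U_xx.ravel(), (U * U_x).ravel()])
y = U_t.ravel()

# Sequential thresholded least squares: fit, zero small terms, refit.
xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    big = ~small
    if big.any():
        xi[big] = np.linalg.lstsq(Theta[:, big], y, rcond=None)[0]

print(dict(zip(names, np.round(xi, 3))))  # only u_xx survives, coefficient ~ D
```

The same pipeline generalizes by enlarging the library with higher derivatives and nonlinear products, then pruning it to a parsimonious model.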
Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.
2016-01-01
Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from the accelerometers. Several methods have been proposed that use these data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects performing eight different physical activities while wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. The choice of machine learning technique was also found to be important. However, on the energy cost estimation task, the choices of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
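A toy version of the feature-plus-classifier pipeline, assuming synthetic accelerometer windows and a nearest-centroid classifier as a minimal stand-in for the machine learning techniques compared in the paper; the sampling rate, window length, and feature set are illustrative, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(window):
    """Statistical features summarizing one accelerometer window."""
    return np.array([window.mean(), window.std(),
                     np.abs(np.diff(window)).mean()])

# Synthetic stand-ins for two activity types: a slow, clean oscillation
# versus faster, larger, noisier movement (real data would come from the hip sensor).
def make_window(kind):
    t = np.arange(256) / 32.0          # 8 s at an assumed 32 Hz rate
    if kind == 0:
        return np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)
    return 2.5 * np.sin(2 * np.pi * 4.0 * t) + 0.5 * rng.normal(size=t.size)

X = np.array([features(make_window(k)) for k in [0, 1] * 50])
y = np.array([0, 1] * 50)

# Nearest-centroid classifier: a minimal stand-in for the compared techniques.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
print("training accuracy:", (pred == y).mean())
```

Richer feature sets and stronger classifiers slot into the same structure, which is what the paper's comparison varies.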
Data mining on long-term barometric data within the ARISE2 project
NASA Astrophysics Data System (ADS)
Hupe, Patrick; Ceranna, Lars; Pilger, Christoph
2016-04-01
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) led to the implementation of an international infrasound array network. The International Monitoring System (IMS) network includes 48 certified stations, each providing data for up to 15 years. As part of work package 3 of the ARISE2 project (Atmospheric dynamics Research InfraStructure in Europe, phase 2), these data sets will be statistically evaluated with regard to atmospheric dynamics. The current study focuses on fluctuations of absolute air pressure. Time series have been analysed for 17 monitoring stations located around the world, from Greenland to Antarctica, to represent different climate zones and characteristic atmospheric conditions; this enables quantitative comparisons between those regions. Analyses include wavelet power spectra, multi-annual time series of average variances at long-wave scales, and spectral densities used to derive characteristics and special events. The evaluations reveal periodicities in average variances on 2- to 20-day scales, with a maximum in the winter months and a minimum in summer of the respective hemisphere. This applies chiefly to time series from IMS stations outside the tropics, where the dominance of cyclones and anticyclones changes with the seasons. Furthermore, spectral density analyses show striking signals from several dynamic processes within one day, e.g., the semidiurnal tide.
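The kind of spectral-density analysis that reveals the semidiurnal tide can be sketched with a plain FFT periodogram on synthetic hourly pressure data; the amplitudes and the 6-day synoptic component below are made-up stand-ins for real IMS records.

```python
import numpy as np

# One year of hourly surface pressure (hPa): a semidiurnal tide plus slow
# synoptic variability and noise (assumed stand-in for an IMS barometer record).
rng = np.random.default_rng(1)
hours = np.arange(24 * 365)
p = (1013.0
     + 1.2 * np.cos(2 * np.pi * hours / 12.0)      # semidiurnal tide, 12 h
     + 5.0 * np.cos(2 * np.pi * hours / (24 * 6))  # ~6-day synoptic wave
     + 0.3 * rng.normal(size=hours.size))

# Periodogram via FFT; frequencies expressed in cycles per day.
spec = np.abs(np.fft.rfft(p - p.mean())) ** 2
freq = np.fft.rfftfreq(hours.size, d=1 / 24.0)

# The strongest peak above 1 cycle/day sits at 2 cycles/day (the 12 h tide).
hi = freq > 1.0
peak = freq[hi][np.argmax(spec[hi])]
print("dominant high-frequency peak: %.2f cycles/day" % peak)
```

The low-frequency synoptic band (periods of days) and the tidal lines separate cleanly in frequency, which is the basis of the scale-dependent variance analyses described above.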
Assessment of forest degradation in Vietnam using Landsat time series data
Vogelmann, James; Van Khoa, Phung; Xuan Lan, Do; Shermeyer, Jacob S.; Shi, Hua; Wimberly, Michael C.; Tat Duong, Hoang; Van Huong, Le
2017-01-01
Landsat time series data were used to characterize forest degradation in Lam Dong Province, Vietnam. We conducted three types of image change analyses using Landsat time series data to characterize the land cover changes. Our analyses concentrated on the timeframe of 1973–2014, with much emphasis on the latter part of that range. We conducted a field trip through Lam Dong Province to develop a better understanding of the ground conditions of the region, during which we obtained many photographs of representative forest sites with Global Positioning System locations to assist us in our image interpretations. High-resolution Google Earth imagery and Landsat data of the region were used to validate results. In general, our analyses indicated that many land-use changes have occurred throughout Lam Dong Province, including gradual forest to non-forest transitions. Recent changes are most marked along the relatively narrow interfaces between agricultural and forest areas that occur towards the boundaries of the province. One important observation is that the most highly protected national reserves in the region have not changed much over the entire Landsat timeframe (1972–present). Spectral changes within these regions have not occurred at the same levels as those areas adjacent to the reserves.
Iler, Amy M; Inouye, David W; Schmidt, Niels M; Høye, Toke T
2017-03-01
Time series have played a critical role in documenting how phenology responds to climate change. However, regressing phenological responses against climatic predictors involves the risk of finding potentially spurious climate-phenology relationships simply because both variables also change across years. Detrending by year is a way to address this issue. Additionally, detrending isolates interannual variation in phenology and climate, so that detrended climate-phenology relationships can represent statistical evidence of phenotypic plasticity. Using two flowering phenology time series from Colorado, USA and Greenland, we detrend flowering date and two climate predictors known to be important in these ecosystems: temperature and snowmelt date. In Colorado, all climate-phenology relationships persist after detrending. In Greenland, 75% of the temperature-phenology relationships disappear after detrending (three of four species). At both sites, the relationships that persist after detrending suggest that plasticity is a major component of sensitivity of flowering phenology to climate. Finally, simulations that created different strengths of correlations among year, climate, and phenology provide broader support for our two empirical case studies. This study highlights the utility of detrending to determine whether phenology is related to a climate variable in observational data sets. Applying this as a best practice will increase our understanding of phenological responses to climatic variation and change. © 2016 by the Ecological Society of America.
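Detrending by year before regressing phenology on climate can be sketched as follows; the simulated warming trend, plastic response, and noise levels are illustrative assumptions, not the Colorado or Greenland data.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1990, 2020).astype(float)

# Synthetic example: temperature trends upward across years, and flowering
# date responds plastically to it (earlier flowering when warmer).
temp = 0.05 * (years - years[0]) + rng.normal(0, 0.5, years.size)
flower = 150.0 - 3.0 * temp + rng.normal(0, 1.0, years.size)

def detrend(v):
    """Remove the linear trend against year, keeping interannual anomalies."""
    slope, intercept = np.polyfit(years, v, 1)
    return v - (slope * years + intercept)

raw_slope = np.polyfit(temp, flower, 1)[0]
det_slope = np.polyfit(detrend(temp), detrend(flower), 1)[0]
print("raw: %.2f  detrended: %.2f days/degree" % (raw_slope, det_slope))
```

Because the simulated response is genuinely plastic, the relationship persists after detrending; a relationship driven only by shared year trends would instead collapse toward zero in the detrended fit.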
Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali
2005-09-01
To present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, that remedy two problems with functional magnetic resonance imaging (fMRI) time series: their high dimension (when they are used directly in clustering algorithms) and their poor signal-to-noise ratio (SNR), which limits accurate classification of fMRI time series according to their activation content. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare the different methods, several points of the receiver operating characteristic (ROC) curves are estimated from simulated data and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces than with the CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features. (c) 2005 Wiley-Liss, Inc.
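A minimal example of building wavelet-based features from a time series, using a hand-rolled Haar transform as a stand-in for the scalar wavelet/multiwavelet decompositions in the paper; the per-level energy features and the test signals are illustrative, not the authors' randomization-selected coefficients.

```python
import numpy as np

def haar_dwt(x, levels=3):
    """One-dimensional orthonormal Haar decomposition (length must be 2^k).
    Returns detail coefficients per level plus the final approximation."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail coefficients
        a = (even + odd) / np.sqrt(2)              # approximation
    coeffs.append(a)
    return coeffs

def wavelet_features(ts):
    """Feature vector of per-level coefficient energies: a crude stand-in
    for selecting coefficients that carry the activation content."""
    return np.array([np.sum(c ** 2) for c in haar_dwt(ts)])

t = np.arange(64)
active = np.sin(2 * np.pi * t / 16)          # slow "activation-like" signal
noise = np.random.default_rng(3).normal(size=64)
print(wavelet_features(active), wavelet_features(noise))
```

An activation-like signal concentrates its energy in the coarse levels, while noise spreads energy across all levels, which is what makes a low-dimensional wavelet feature space discriminative.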
Empirical wind model for the middle and lower atmosphere. Part 2: Local time variations
NASA Technical Reports Server (NTRS)
Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Clark, R. R.; Franke, S. J.; Fraser, G. J.; Tsuda, T.; Vial, F.
1993-01-01
The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Local time variations in the mesosphere are derived from rocket soundings, incoherent scatter radar, MF radar, and meteor radar. Low-order spherical harmonics and Fourier series are used to describe these variations as a function of latitude and day of year with cubic spline interpolation in altitude. The model represents a smoothed compromise between the original data sources. Although agreement between various data sources is generally good, some systematic differences are noted. Overall root mean square differences between measured and model tidal components are on the order of 5 to 10 m/s.
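The local-time Fourier representation can be sketched by least-squares fitting diurnal and semidiurnal harmonics to hypothetical wind samples; the amplitudes and phases below are invented for illustration, not HWM90 coefficients.

```python
import numpy as np

# Hypothetical meridional-wind samples versus local time (hours): a mean
# flow plus diurnal (24 h) and semidiurnal (12 h) tidal components.
lt = np.arange(0, 24, 1.0)
wind = (5.0 + 10.0 * np.cos(2 * np.pi * (lt - 6) / 24)
        + 4.0 * np.cos(2 * np.pi * (lt - 2) / 12))

# Least-squares fit of a low-order Fourier series in local time.
A = np.column_stack([np.ones_like(lt),
                     np.cos(2 * np.pi * lt / 24), np.sin(2 * np.pi * lt / 24),
                     np.cos(2 * np.pi * lt / 12), np.sin(2 * np.pi * lt / 12)])
c = np.linalg.lstsq(A, wind, rcond=None)[0]

diurnal_amp = np.hypot(c[1], c[2])
semidiurnal_amp = np.hypot(c[3], c[4])
print(diurnal_amp, semidiurnal_amp)  # recovers the 10 and 4 m/s amplitudes
```

In the full model the Fourier coefficients themselves vary with latitude, season, and altitude via spherical harmonics and splines; this sketch shows only the local-time dimension.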
NASA Astrophysics Data System (ADS)
Sharkov, Evgenii; Bogina, Maria; Chistyakov, Alexeii
2017-04-01
One of the most important problems of magmatic petrology over the past century is the "Daly Gap" [Daly, 1914]: the lack of intermediate compositions (i.e., andesite, trachyandesite) in volcanic provinces such as ocean islands, LIPs, and arcs, giving rise to "bimodal" basalt-rhyolite, basalt-trachyte or basanite-phonolite suites (Menzies, 2016). The origin of this bimodal distribution still remains unclear. Models proposed to explain the origin of the bimodal series include liquid immiscibility (Charlier et al., 2011), physico-chemical specifics of the melts (Mungal and Martin, 1995), high water content in the primary melt (Melekhova et al., 2012), the influence of latent heat production (Nelson et al., 2011), and the appearance of differentiated transitional chambers with hawaiites below and trachytes on top (Ferla et al., 2006). In these models, the bimodal series are characterized by similar geochemical and isotopic-geochemical features of the mafic and sialic members. At the same time, some bimodal series are produced by melting of sialic crust above basaltic chambers (Philpotts and Ague, 2009). This results in essentially different isotopic characteristics of the mafic and sialic members, as exemplified by the bimodal rapakivi granite-anorthosite complexes (Ramo, 1991; Sharkov, 2010). In addition, bimodal basalt-trachyte series are widespread on oceanic islands where sialic crust is absent. Thus, it is generally accepted that two contrasting melts were formed in magma chambers beneath the volcanoes. Such chambers survive as intrusions and are available for geological study and for deciphering their role in the formation of bimodal magmatic series. We discuss this problem using the example of alkali Fe-Ti basalts and trachytes, usually developed in LIPs. Transitional magmatic chambers of such series are represented by bimodal syenite-gabbro intrusions, in particular the Elet'ozero intrusion (2086±30 Ma) in Northern Karelia (Russia).
The intrusion intruded Archean granite-gneisses and, like syenite-gabbro intrusive complexes elsewhere, was formed in two intrusive phases. The first phase is represented by a mafic-ultramafic layered intrusion derived from alkali Fe-Ti basalt. The second phase is made up of alkali syenites, which are close in composition to alkali trachyte. At the same time, the syenite and gabbro have close ɛNd(2080) values (2.99 and 3.09, respectively). We are thus faced with an intrusive version of the alkali basalt-trachyte series. We believe that neither crystallization differentiation, nor immiscible splitting, nor other within-chamber processes were responsible for the Daly Gap. Its formation is instead related to the generation of two compositionally different, independent partial melts from the same mantle plume head: (1) alkali Fe-Ti basalts derived from the plume head by adiabatic melting, and (2) trachytes produced by incongruent melting of the cooled upper margin of the head under the influence of fluids that percolated upward from the underlying adiabatic melting zone. The existence of primary trachyte melts is supported by finds of "melt pockets" in mantle xenoliths in basalts.
Liu, Yangfan; Bolton, J Stuart
2016-08-01
The (Cartesian) multipole series, i.e., the series comprising monopole, dipoles, quadrupoles, etc., can be used, as an alternative to the spherical or cylindrical wave series, in representing sound fields in a wide range of problems, such as source radiation, sound scattering, etc. The proofs of the completeness of the spherical and cylindrical wave series in these problems are classical results, and it is also generally agreed that the Cartesian multipole series spans the same space as the spherical waves: a rigorous mathematical proof of that statement has, however, not been presented. In the present work, such a proof of the completeness of the Cartesian multipole series, both in two and three dimensions, is given, and the linear dependence relations among different orders of multipoles are discussed, which then allows one to easily extract a basis from the multipole series. In particular, it is concluded that the multipoles comprising the two highest orders in the series form a basis of the whole series, since the multipoles of all the lower source orders can be expressed as a linear combination of that basis.
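The linear-dependence argument can be stated compactly. Writing each Cartesian multipole as a derivative of the free-space Green's function and invoking the Helmholtz equation gives the relation below; this is a standard identity sketched for orientation, not a quotation from the paper.

```latex
% Each multipole of multi-index order \alpha is a derivative of the
% free-space Green's function G(\mathbf{r}) = e^{ikr}/(4\pi r):
u_\alpha(\mathbf{r}) = \partial^\alpha G(\mathbf{r}).
% Away from the origin G satisfies the Helmholtz equation, so
\left(\partial_x^2 + \partial_y^2 + \partial_z^2\right)\partial^\alpha G
  = -k^2\,\partial^\alpha G,
% i.e. every multipole of order |\alpha| is a linear combination of
% multipoles of order |\alpha|+2:
\partial^\alpha G
  = -\frac{1}{k^2}\left(\partial_x^{2} + \partial_y^{2} + \partial_z^{2}\right)
    \partial^\alpha G.
```

Iterating this identity expresses every lower-order multipole through the two highest orders retained, which is consistent with the basis result stated in the abstract.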
A Maple package for improved global mapping forecast
NASA Astrophysics Data System (ADS)
Carli, H.; Duarte, L. G. S.; da Mota, L. A. C. P.
2014-03-01
We present a Maple implementation of the well-known global approach to time series analysis, together with some further developments designed to improve the computational efficiency of the forecasting capabilities of the approach. This global approach can be summarized as a reconstruction of the phase space based on a time-ordered series of data obtained from the system. Using the reconstructed vectors, a portion of this space is then used to produce a mapping, a polynomial fitting obtained through a minimization procedure, that represents the system and can be employed to forecast further entries for the series. In the present implementation, we introduce a set of commands (tools) to perform all these tasks. For example, the command VecTS deals mainly with the reconstruction of the vectors in the phase space. The command GfiTS deals with producing the minimization and the fitting. ForecasTS uses all these and produces the prediction of the next entries. For the non-standard algorithms, we present two commands: IforecasTS and NiforecasTS, which deal with one-step and N-step forecasting, respectively. Finally, we introduce two further tools to aid the forecasting. The commands GfiTS and AnalysTS perform an analysis of the behavior of each portion of a series with regard to the settings used in the commands just mentioned. Catalogue identifier: AERW_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3001 No. of bytes in distributed program, including test data, etc.: 95018 Distribution format: tar.gz Programming language: Maple 14. Computer: Any capable of running Maple. Operating system: Any capable of running Maple. Tested on Windows ME, Windows XP, Windows 7.
RAM: 128 MB Classification: 4.3, 4.9, 5 Nature of problem: Time series analysis and improving forecast capability. Solution method: The method of solution is partially based on a result published in [1]. Restrictions: If the time series that is being analyzed presents a great amount of noise or if the dynamical system behind the time series is of high dimensionality (Dim≫3), then the method may not work well. Unusual features: Our implementation can, in the cases where the dynamics behind the time series is given by a system of low dimensionality, greatly improve the forecast. Running time: This depends strongly on the command that is being used. References: [1] Barbosa, L.M.C.R., Duarte, L.G.S., Linhares, C.A. and da Mota, L.A.C.P., Improving the global fitting method on nonlinear time series analysis, Phys. Rev. E 74, 026702 (2006).
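The reconstruct-fit-forecast cycle (the roles played by VecTS, GfiTS, and ForecasTS in the package) can be sketched in Python rather than Maple; the embedding dimension, quadratic map, and logistic-map test series are illustrative choices, not the package's algorithm.

```python
import numpy as np

def embed(series, dim):
    """Phase-space reconstruction by delay embedding."""
    return np.array([series[i:i + dim] for i in range(len(series) - dim)])

def fit_poly_map(series, dim=2):
    """Fit next-value = quadratic polynomial of the delay vector, by least squares."""
    V = embed(series, dim)
    target = series[dim:]
    cols = [np.ones(len(V))] + [V[:, j] for j in range(dim)]
    cols += [V[:, i] * V[:, j] for i in range(dim) for j in range(i, dim)]
    Theta = np.column_stack(cols)
    coef = np.linalg.lstsq(Theta, target, rcond=None)[0]
    def step(v):
        feats = [1.0] + list(v)
        feats += [v[i] * v[j] for i in range(dim) for j in range(i, dim)]
        return float(np.dot(coef, feats))
    return step

# Logistic-map data: a quadratic fit can represent the dynamics exactly.
x = [0.3]
for _ in range(400):
    x.append(3.7 * x[-1] * (1 - x[-1]))
x = np.array(x)

step = fit_poly_map(x[:300], dim=2)
pred = step(x[298:300])          # one-step forecast of x[300]
print(abs(pred - x[300]))        # essentially zero for this noiseless series
```

N-step forecasting (the NiforecasTS role) simply iterates `step`, appending each prediction to the delay vector.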
Dental size and shape in the Roman imperial age: two examples from the area of Rome.
Manzi, G; Santandrea, E; Passarello, P
1997-04-01
Different socioeconomic strata of the Roman imperial age are represented by two large dental samples recovered from archaeological excavations near Rome, Italy. The teeth are investigated for crown dimensions and morphological variants. One sample, comprising 1,465 permanent teeth, represents the rural town of Lucus Feroniae (LFR) and is mainly composed of slaves and war veterans. The other, comprising 734 teeth from the Isola Sacra necropolis at Portus Romae (NIS), represents the "middle class" segment of an urban population. Both series show small dental dimensions and fit at the lower end of the trend toward dental reduction in Europe from the Upper Paleolithic to historical times. The urban sample is less variable metrically and less sexually dimorphic than the rural one. The analysis of discrete crown traits shows an absence of rare phenotypic variants in both series. The urban sample is also less variable in this respect, suggesting that the gene pool of this particular "stratum" of the NIS population was more homogeneous than that of LFR. The occurrence of enamel hypoplasia indicates that metabolic stress during growth and development was similar in LFR and NIS. The overall set of available data is evaluated in the light of the history of the two Roman sites and the composition of each population.
Boker, Steven M; Xu, Minquan; Rotondo, Jennifer L; King, Kadijah
2002-09-01
Cross-correlation and most other longitudinal analyses assume that the association between 2 variables is stationary. Thus, a sample of occasions of measurement is expected to be representative of the association between variables regardless of the time of onset or number of occasions in the sample. The authors propose a method to analyze the association between 2 variables when the assumption of stationarity may not be warranted. The method results in estimates of both the strength of peak association and the time lag when the peak association occurred for a range of starting values of elapsed time from the beginning of an experiment.
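A windowed peak cross-correlation of the kind the method estimates can be sketched as follows; the window length, lag range, and the synthetic mid-series lag change are illustrative assumptions.

```python
import numpy as np

def peak_xcorr(a, b, max_lag):
    """Peak correlation of b relative to a, and the lag (samples) where it occurs."""
    best = (0.0, 0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[:len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[:len(b) + lag]
        r = np.corrcoef(x, y)[0, 1]
        if abs(r) > abs(best[0]):
            best = (r, lag)
    return best

# Nonstationary association: b follows a with a lag that changes mid-series.
rng = np.random.default_rng(4)
a = rng.normal(size=400)
b = np.concatenate([np.roll(a[:200], 5), np.roll(a[200:], 12)])

# Estimate the peak lag within successive windows instead of globally.
for start in (0, 200):
    r, lag = peak_xcorr(a[start:start + 200], b[start:start + 200], 20)
    print(start, lag, round(r, 2))
```

A single global cross-correlation would blur the two regimes together; estimating the peak lag per window recovers how the association shifts over elapsed time.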
NASA Astrophysics Data System (ADS)
Langbein, J. O.
2016-12-01
Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f and spectral index n. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
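The filter view of power-law noise can be illustrated with the standard Hosking/Kasdin fractional-difference recursion for 1/f^n noise; this is a generic sketch, not the est_noise implementation. The n = 2 case makes the filter easy to verify: it reduces to a pure random walk.

```python
import numpy as np

def powerlaw_filter(n_index, length):
    """Hosking/Kasdin recursion for the impulse response h whose convolution
    with white noise yields power-law (1/f^n) noise."""
    h = np.empty(length)
    h[0] = 1.0
    for i in range(1, length):
        h[i] = h[i - 1] * (i - 1 + n_index / 2.0) / i
    return h

# n = 2 gives a pure random walk: every filter coefficient equals 1,
# so filtered white noise is just its cumulative sum.
h = powerlaw_filter(2.0, 500)
w = np.random.default_rng(5).normal(size=500)
walk = np.convolve(w, h)[:500]
assert np.allclose(h, 1.0)
assert np.allclose(walk, np.cumsum(w))
```

Because each noise process corresponds to such a filter, adding filtered processes (rather than adding covariances in quadrature) keeps the data covariance easy to invert for any power-law index.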
Empirical Investigation of Critical Transitions in Paleoclimate
NASA Astrophysics Data System (ADS)
Loskutov, E. M.; Mukhin, D.; Gavrilov, A.; Feigin, A.
2016-12-01
In this work we apply a new empirical method for the analysis of complex spatially distributed systems to the analysis of paleoclimate data. The method consists of two general parts: (i) revealing the optimal phase-space variables and (ii) constructing an empirical prognostic model from observed time series. The phase-space variable construction is based on decomposing the data into nonlinear dynamical modes, an approach that was successfully applied to the global SST field and allowed us to clearly separate time scales and reveal a climate shift in the observed data interval [1]. The second part, a Bayesian approach to optimal reconstruction of the evolution operator from time series, is based on representing the evolution operator as a nonlinear stochastic function modeled by artificial neural networks [2,3]. Here we focus on the investigation of critical transitions (abrupt changes in climate dynamics) on much longer time scales. It is well known that a number of critical transitions occurred on different time scales in the past. We demonstrate the first results of applying our empirical methods to the analysis of paleoclimate variability. In particular, we discuss the possibility of detecting, identifying, and predicting such critical transitions by means of nonlinear empirical modeling using paleoclimate record time series. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510 2. Molkov, Ya. I., Mukhin, D. N., Loskutov, E. M., & Feigin, A. M. (2012). Random dynamical models from time series. Phys. Rev. E, 85(3). 3. Mukhin, D., Kondrashov, D., Loskutov, E., Gavrilov, A., Feigin, A., & Ghil, M. (2015). Predicting Critical Transitions in ENSO models. Part II: Spatially Dependent Models. Journal of Climate, 28(5), 1962-1976. http://doi.org/10.1175/JCLI-D-14-00240.1
NASA Astrophysics Data System (ADS)
Wang, Zhuosen; Schaaf, Crystal B.; Sun, Qingsong; Kim, JiHyun; Erb, Angela M.; Gao, Feng; Román, Miguel O.; Yang, Yun; Petroy, Shelley; Taylor, Jeffrey R.; Masek, Jeffrey G.; Morisette, Jeffrey T.; Zhang, Xiaoyang; Papuga, Shirley A.
2017-07-01
Seasonal vegetation phenology can significantly alter surface albedo which in turn affects the global energy balance and the albedo warming/cooling feedbacks that impact climate change. To monitor and quantify the surface dynamics of heterogeneous landscapes, high temporal and spatial resolution synthetic time series of albedo and the enhanced vegetation index (EVI) were generated from the 500 m Moderate Resolution Imaging Spectroradiometer (MODIS) operational Collection V006 daily BRDF/NBAR/albedo products and 30 m Landsat 5 albedo and near-nadir reflectance data through the use of the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM). The traditional Landsat Albedo (Shuai et al., 2011) makes use of the MODIS BRDF/Albedo products (MCD43) by assigning appropriate BRDFs from coincident MODIS products to each Landsat image to generate a 30 m Landsat albedo product for that acquisition date. The available cloud free Landsat 5 albedos (due to clouds, generated every 16 days at best) were used in conjunction with the daily MODIS albedos to determine the appropriate 30 m albedos for the intervening daily time steps in this study. These enhanced daily 30 m spatial resolution synthetic time series were then used to track albedo and vegetation phenology dynamics over three Ameriflux tower sites (Harvard Forest in 2007, Santa Rita in 2011 and Walker Branch in 2005). These Ameriflux sites were chosen as they are all quite nearby new towers coming on line for the National Ecological Observatory Network (NEON), and thus represent locations which will be served by spatially paired albedo measures in the near future. The availability of data from the NEON towers will greatly expand the sources of tower albedometer data available for evaluation of satellite products. 
At these three Ameriflux tower sites the synthetic time series of broadband shortwave albedos were evaluated using the tower albedo measurements, with a Root Mean Square Error (RMSE) of less than 0.013 and a bias within the range of ±0.006. These synthetic time series provide much greater spatial detail than the 500 m gridded MODIS data, especially over more heterogeneous surfaces, which improves efforts to characterize and monitor spatial variation across species and communities. The mean of the difference between the maximum and minimum synthetic albedo time series within the MODIS pixels over a subset of satellite data of Harvard Forest (16 km by 14 km) was as high as 0.2 during the snow-covered period and reduced to around 0.1 during the snow-free period. Similarly, we have used STARFM to couple MODIS Nadir BRDF Adjusted Reflectance (NBAR) values with Landsat 5 reflectances to generate daily synthetic time series of NBAR and thus of the Enhanced Vegetation Index (NBAR-EVI) at a 30 m resolution. While STARFM is normally used with directional reflectances, the use of the view-angle-corrected daily MODIS NBAR values provides more consistent time series. These synthetic time series of EVI are shown to capture seasonal vegetation dynamics with finer spatial and temporal detail, especially over heterogeneous land surfaces.
NASA Technical Reports Server (NTRS)
Wang, Zhuosen; Schaaf, Crystal B.; Sun, Quingsong; Kim, Jihyun; Erb, Angela M.; Gao, Feng; Roman, Miguel O.; Yang, Yun; Petroy, Shelley; Taylor, Jeffrey R.;
2017-01-01
Seasonal vegetation phenology can significantly alter surface albedo which in turn affects the global energy balance and the albedo warmingcooling feedbacks that impact climate change. To monitor and quantify the surface dynamics of heterogeneous landscapes, high temporal and spatial resolution synthetic time series of albedo and the enhanced vegetation index (EVI) were generated from the 500-meter Moderate Resolution Imaging Spectroradiometer (MODIS) operational Collection V006 daily BRDF (Bidirectional Reflectance Distribution Function) / NBAR (Nadir BRDF-Adjusted Reflectance) / albedo products and 30-meter Landsat 5 albedo and near-nadir reflectance data through the use of the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM). The traditional Landsat Albedo (Shuai et al., 2011) makes use of the MODIS BRDFAlbedo products (MCD43) by assigning appropriate BRDFs from coincident MODIS products to each Landsat image to generate a 30-meter Landsat albedo product for that acquisition date. The available cloud free Landsat 5 albedos (due to clouds, generated every 16 days at best) were used in conjunction with the daily MODIS albedos to determine the appropriate 30-meter albedos for the intervening daily time steps in this study. These enhanced daily 30-meter spatial resolution synthetic time series were then used to track albedo and vegetation phenology dynamics over three Ameriflux tower sites (Harvard Forest in 2007, Santa Rita in 2011 and Walker Branch in 2005). These Ameriflux sites were chosen as they are all quite nearby new towers coming on line for the National Ecological Observatory Network (NEON), and thus represent locations which will be served by spatially paired albedo measures in the near future. The availability of data from the NEON towers will greatly expand the sources of tower albedometer data available for evaluation of satellite products. 
At these three AmeriFlux tower sites, the synthetic time series of broadband shortwave albedo were evaluated against the tower albedo measurements, yielding a root mean square error (RMSE) of less than 0.013 and a bias within ±0.006. These synthetic time series provide much greater spatial detail than the 500-meter gridded MODIS data, especially over more heterogeneous surfaces, improving efforts to characterize and monitor spatial variation across species and communities. The mean difference between the maximum and minimum synthetic albedo within the MODIS pixels over a subset of Harvard Forest (16 kilometers by 14 kilometers) was as high as 0.2 during the snow-covered period and dropped to around 0.1 during the snow-free period. Similarly, we used STARFM to couple MODIS Nadir BRDF-Adjusted Reflectance (NBAR) values with Landsat 5 reflectances to generate daily synthetic time series of NBAR, and thus of the enhanced vegetation index (NBAR-EVI), at 30-meter resolution. While STARFM is normally used with directional reflectances, the use of the view-angle-corrected daily MODIS NBAR values provides more consistent time series. These synthetic time series of EVI are shown to capture seasonal vegetation dynamics with finer spatial and temporal detail, especially over heterogeneous land surfaces.
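STARFM itself blends spectrally and spatially similar neighboring pixels with distance- and similarity-based weights. As a minimal sketch of the fusion idea only (not the actual STARFM algorithm), one can assume the fine-minus-coarse offset observed on a Landsat acquisition date persists to the prediction date; the function name and the nearest-neighbor upsampling are illustrative assumptions.

```python
import numpy as np

def fuse_additive(fine_t0, coarse_t0, coarse_t1):
    """Predict a fine-resolution image at time t1 by assuming the
    fine-minus-coarse offset observed at t0 persists (a drastically
    simplified stand-in for STARFM's weighted-neighborhood fusion)."""
    # upsample coarse pixels to the fine grid by nearest-neighbor replication
    scale = fine_t0.shape[0] // coarse_t0.shape[0]
    up0 = np.kron(coarse_t0, np.ones((scale, scale)))
    up1 = np.kron(coarse_t1, np.ones((scale, scale)))
    # carry the coarse-scale temporal change onto the fine-scale image
    return fine_t0 + (up1 - up0)
```

For a homogeneous scene this reduces exactly to adding the coarse-scale albedo change to the Landsat image; the real algorithm additionally screens for land-cover change and cloud contamination.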
NASA Astrophysics Data System (ADS)
Kwon, Jae-Sool; Mayer, Victor J.
Several studies of the validity of the intensive time series design have revealed a post-intervention increase in the level of achievement data. This so-called momentum effect has not previously been demonstrated through the application of an appropriate analysis technique. The purpose of this study was to identify and apply a technique that would adequately represent and describe such an effect, if indeed it does occur, and to use that technique to study the momentum effect as observed in several data sets on the learning of the concept of plate tectonics. After trials of several different analyses, a segmented straight-line regression analysis was chosen and used on three different data sets. Each set revealed similar patterns of inflection points between lines, with similar time intervals between inflections, for data from students with formal cognitive tendencies. These results indicate that this method will be useful in representing and identifying the presence and duration of the momentum effect in time series data on achievement. Since the momentum effect could be described in each of the data sets, and since its presence seems a function of similar circumstances, support is given for its presence in the learning of abstract scientific concepts by students with formal cognitive tendencies. The results indicate that the duration of the momentum effect is related to the level of student understanding tested and the cognitive level of the learners.
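Segmented straight-line regression of the kind described can be implemented as a grid search over candidate breakpoints, fitting a continuous two-segment line at each candidate and keeping the breakpoint with the lowest residual error. This is a generic sketch, not the study's exact procedure; the function name and basis choice are assumptions.

```python
import numpy as np

def segmented_fit(t, y, candidates=None):
    """Two-segment continuous piecewise-linear regression via grid search
    over candidate breakpoints; returns (breakpoint, coefficients, SSE).
    The inflection point is the breakpoint minimizing the SSE."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    if candidates is None:
        candidates = t[2:-2]                      # interior points only
    best_bp, best_beta, best_sse = None, None, np.inf
    for bp in candidates:
        # basis [1, t, max(t - bp, 0)] keeps the fit continuous at bp
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - bp, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if sse < best_sse:
            best_bp, best_beta, best_sse = float(bp), beta, sse
    return best_bp, best_beta, best_sse
```

Repeating the search on the residual segments would locate multiple inflection points, the pattern the abstract uses to characterize the momentum effect's duration.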
Quantifying new water fractions and water age distributions using ensemble hydrograph separation
NASA Astrophysics Data System (ADS)
Kirchner, James
2017-04-01
Catchment transit times are important controls on contaminant transport, weathering rates, and runoff chemistry. Recent theoretical studies have shown that catchment transit time distributions are nonstationary, reflecting the temporal variability in precipitation forcing, the structural heterogeneity of catchments themselves, and the nonlinearity of the mechanisms controlling storage and transport in the subsurface. The challenge of empirically estimating these nonstationary transit time distributions in real-world catchments, however, has only begun to be explored. Long, high-frequency tracer time series are now becoming available, creating new opportunities to study how rainfall becomes streamflow on timescales of minutes to days following the onset of precipitation. Here I show that the conventional formula used for hydrograph separation can be converted into an equivalent linear regression equation that quantifies the fraction of current rainfall in streamflow across ensembles of precipitation events. These ensembles can be selected to represent different discharge ranges, different precipitation intensities, or different levels of antecedent moisture, thus quantifying how the fraction of "new water" in streamflow varies with forcings such as these. I further show how this approach can be generalized to empirically determine the contributions of precipitation inputs to streamflow across a range of time lags. In this way the short-term tail of the transit time distribution can be directly quantified for an ensemble of precipitation events. Benchmark testing with a simple, nonlinear, nonstationary catchment model demonstrates that this approach quantitatively measures the short tail of the transit time distribution for a wide range of catchment response characteristics. In combination with reactive tracer time series, this approach can potentially be extended to measure short-term chemical reaction rates at the catchment scale. 
High-frequency tracer time series from several experimental catchments will be used to demonstrate the utility of the new approach outlined here.
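One way to read the regression equivalence described above: across an ensemble of events, the deviation of the streamflow tracer concentration from its pre-event value is regressed on the corresponding deviation of precipitation, and the slope estimates the average new-water fraction. This is a hedged sketch under that reading; the function name and the use of pre-event streamflow concentration as the "old water" reference are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def new_water_fraction(c_precip, c_stream, c_pre_event):
    """Ensemble estimate of the 'new water' fraction: least-squares slope
    (through the origin) of streamflow tracer deviations against
    precipitation tracer deviations, pooled over many events."""
    x = np.asarray(c_precip, float) - np.asarray(c_pre_event, float)
    y = np.asarray(c_stream, float) - np.asarray(c_pre_event, float)
    return float(np.sum(x * y) / np.sum(x * x))   # no-intercept regression
```

Restricting the ensemble to, say, high-discharge or wet-antecedent events before calling this function reproduces the conditional analysis the abstract describes; lagging the precipitation series generalizes the slope to the short-term tail of the transit time distribution.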
Park, Chihyun; Yun, So Jeong; Ryu, Sung Jin; Lee, Soyoung; Lee, Young-Sam; Yoon, Youngmi; Park, Sang Chul
2017-03-15
Cellular senescence irreversibly arrests the growth of human diploid cells. Recent studies have also indicated that senescence is a multi-step, evolving process related to important complex biological processes. Most studies have analyzed only the genes, and their functions, representing each senescence phase, without considering gene-level interactions and continuously perturbed genes. It is necessary to reveal the genotypic mechanism inferred from the affected genes and their interactions underlying the senescence process. We suggest a novel computational approach to identify an integrative network that profiles an underlying genotypic signature from time-series gene expression data. The relatively perturbed genes were selected for each time point based on a proposed scoring measure, termed the perturbation score. The selected genes were then integrated with protein-protein interactions to construct a time-point-specific network. From these networks, the edges conserved across time points were extracted to form the common network, and a statistical test was performed to demonstrate that the network could explain the phenotypic alteration. As a result, we confirmed that the difference in the average perturbation scores of the common networks between the two time points could explain the phenotypic alteration. We also performed functional enrichment on the common network and identified a strong association with the phenotypic alteration. Remarkably, we observed that the identified cell-cycle-specific common network played an important role in replicative senescence as a key regulator. Heretofore, network analysis of time-series gene expression data has focused on how topological structure changes over time. Conversely, we focused on a structure that is conserved while its context changes over time, and showed that it can explain the phenotypic changes. 
We expect that the proposed method will help to elucidate the biological mechanism unrevealed by the existing approaches.
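The pipeline described (score perturbed genes per time point, induce time-point-specific PPI subnetworks, intersect edges for the common network) can be sketched as below. The perturbation score used here, absolute change from the baseline time point, is an illustrative stand-in for the paper's measure, and all names are hypothetical.

```python
def common_network(expr, ppi_edges, top_k=2):
    """Sketch of the common-network pipeline.
    expr: dict gene -> expression trajectory (time 0 = baseline).
    ppi_edges: iterable of (gene, gene) protein-protein interaction pairs.
    Perturbation score (illustrative, not the paper's exact measure):
    absolute change from baseline at each later time point."""
    genes = sorted(expr)
    n_t = len(next(iter(expr.values())))
    nets = []
    for t in range(1, n_t):
        # rank genes by perturbation at this time point, keep top_k
        scored = sorted(genes, key=lambda g: -abs(expr[g][t] - expr[g][0]))
        sel = set(scored[:top_k])
        # induce the PPI subnetwork on the selected genes
        nets.append({e for e in ppi_edges if e[0] in sel and e[1] in sel})
    # edges conserved across every time point form the common network
    return set.intersection(*nets)
```

The statistical step in the abstract then compares average perturbation scores over this conserved edge set between time points, rather than comparing topologies.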
Measuring and Modeling Shared Visual Attention
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Gontar, Patrick
2016-01-01
Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method using gaze tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs); these are then used to construct a time series of N-dimensional vectors, with each vector component representing one of the AOIs, all set to 0 except for the component corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, with the result that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort, one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test of the hypothesis that the observed gaze proportions are drawn from identical probability distributions. We have evaluated the method using synthetic data sets, in which the behavior was modeled as a series of "activities," each of which was modeled as a first-order Markov process. By tabulating distributions for pairs of identical and disparate activities, we are able to perform a receiver operating characteristic (ROC) analysis, allowing us to choose appropriate criteria and estimate error rates. We have applied the methods to data from airline crews, collected in a high-fidelity flight simulator (Haslbeck, Gontar & Schubert, 2014). We conclude by considering the problem of automatic (blind) discovery of activities, using methods developed for text analysis.
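The one-hot encoding, time averaging, and chi-square comparison described above can be sketched directly; the function names and window handling are illustrative assumptions.

```python
import numpy as np

def aoi_proportions(fixations, n_aoi, win):
    """Turn a sequence of fixated AOI indices into one-hot vectors and
    average over non-overlapping windows of length `win`, giving the
    proportion of time each AOI was fixated per window."""
    onehot = np.eye(n_aoi)[np.asarray(fixations)]
    n_win = len(fixations) // win
    return onehot[:n_win * win].reshape(n_win, win, n_aoi).mean(axis=1)

def chi2_shared(p_a, p_b, win):
    """Chi-square statistic testing whether two crew members' gaze
    proportions for one window come from the same distribution
    (counts recovered as proportion * window length)."""
    obs = np.array([np.asarray(p_a) * win, np.asarray(p_b) * win])
    exp = obs.sum(axis=0, keepdims=True) * obs.sum(axis=1, keepdims=True) / obs.sum()
    mask = exp > 0                      # skip AOIs never fixated by either
    return float(((obs - exp) ** 2 / np.where(mask, exp, 1))[mask].sum())
```

A small statistic indicates shared attention in that window; the correlation-based variant in the abstract would instead correlate the two windows' proportion vectors over time.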
Detection of anomalous signals in temporally correlated data (Invited)
NASA Astrophysics Data System (ADS)
Langbein, J. O.
2010-12-01
Detection of transient tectonic signals in data from large geodetic networks requires the ability to detect signals that are both temporally and spatially coherent. In this report I describe a modification to an existing method that estimates both the coefficients of a temporally correlated noise model and an efficient filter based on that model. This filter, when applied to the original time-series, effectively whitens (or flattens) the power spectrum. The filtered data provide the means to calculate running averages, which are then used to detect deviations from the background trends. For large networks, time-series of signal-to-noise ratio (SNR) can be easily constructed since, by filtering, each of the original time-series has been transformed into one that is closer to a Gaussian distribution with a variance of 1.0. Anomalous intervals may be identified by counting the number of GPS sites for which the SNR exceeds a specified value. For example, if during one time interval there were 5 out of 20 time-series with SNR > 2, this would be considered anomalous; typically, one would expect only about 1 out of 20 time-series to exceed SNR > 2 by chance. For time intervals with an anomalously large number of high-SNR sites, the spatial distribution of the SNR is mapped to identify the location of the anomalous signal(s) and their degree of spatial clustering. Estimating the filter that should be used to whiten the data requires modification of the existing methods, which employ maximum likelihood estimation to determine the temporal covariance of the data. In these methods, it is assumed that the noise components in the data are a combination of white, flicker, and random-walk processes derived from three different and independent sources. 
Instead, in this new method, the covariance matrix is constructed assuming that only one source is responsible for the noise, and that this source can be represented as a white-noise random-number generator convolved with a filter whose spectral properties are frequency (f) independent at the highest frequencies, 1/f at middle frequencies, and 1/f² at the lowest frequencies. For data sets with no gaps in their time-series, construction of the covariance and inverse covariance matrices is extremely efficient. Application of the above algorithm to real data potentially involves several iterations, as the small tectonic signals of interest are often indistinguishable from background noise. Consequently, the time-series of each GPS site are simply plotted to identify the largest outliers and signals, independent of their cause. Any analysis of the background noise levels must factor in these other signals, while the gross outliers need to be removed.
Coastal Acoustic Tomography Data Constraints Applied to a Coastal Ocean Circulation Model
1994-04-01
Postgraduate School, Monterey, CA 93943-5100. Abstract: A direct insertion scheme for assimilating coastal acoustic tomographic (CAT) vertical...days of this control run were taken to represent "actuality." A series of assimilation experiments was carried out in which CAT temperature slices...synthesized from different CAT configurations based on the "true ocean" were inserted into the model at various time steps to examine the convergence of
Long term economic relationships from cointegration maps
NASA Astrophysics Data System (ADS)
Vicente, Renato; Pereira, Carlos de B.; Leite, Vitor B. P.; Caticha, Nestor
2007-07-01
We employ the Bayesian framework to define a cointegration measure aimed to represent long term relationships between time series. For visualization of these relationships we introduce a dissimilarity matrix and a map based on the sorting points into neighborhoods (SPIN) technique, which has been previously used to analyze large data sets from DNA arrays. We exemplify the technique in three data sets: US interest rates (USIR), monthly inflation rates and gross domestic product (GDP) growth rates.
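The dissimilarity-matrix idea can be illustrated with a frequentist stand-in for the paper's Bayesian cointegration measure: regress one series on another and score how strongly the residual mean-reverts, so that pairs sharing a long-run relationship get low dissimilarity. The scoring choice and names below are assumptions, not the paper's method (and SPIN reordering is omitted).

```python
import numpy as np

def dissimilarity_matrix(series):
    """Illustrative pairwise 'long-run relationship' dissimilarity:
    regress series j on series i; if the residual mean-reverts (lag-1
    autocorrelation near 0) the pair behaves as cointegrated (low D);
    a random-walk-like residual (autocorrelation near 1) gives high D."""
    n = len(series)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            x = np.asarray(series[i], float)
            y = np.asarray(series[j], float)
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            r = y - X @ beta
            rho = np.corrcoef(r[:-1], r[1:])[0, 1]    # lag-1 autocorrelation
            D[i, j] = (rho + 1) / 2                   # map to [0, 1]
    return D
```

Sorting the rows and columns of D so that similar series sit next to each other (what SPIN does) then yields the kind of map used in the paper for interest rates, inflation, and GDP growth.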
Intelligence Preparation of the Battlefield (Automated) IPB(A).
1980-02-14
courses of action as the basis for friendly operations planning. The commander's intelligence system must handle timely combat information...through A8 represent the series of snapshots developed for Option A. Snapshots would be developed for each option, and stored in an automated file for ...process throughout a battle situation demands local ability to modify the IPB products stored in the system, and to develop new ones as enemy courses
NASA Astrophysics Data System (ADS)
Martynova, A. I.; Orlov, V. V.
2014-10-01
Numerical simulations have been carried out in the general three-body problem with equal masses and zero initial velocities, to investigate the distribution of the decay times T based on a representative sample of initial conditions. The distribution has a power-law character on long time scales, f(T) ∝ T^(−α), with α = 1.74. Over small times T < 30 T_cr (where T_cr is the mean crossing time for a component of the triple system), a series of local maxima separated by about 1.0 T_cr is observed in the decay-time distribution. These local peaks correspond to zones of decay after one or a few triple encounters. Figures showing the arrangement of these zones in the domain of the initial conditions are presented.
NASA Astrophysics Data System (ADS)
Sahagian, D.; Prentice, C.
2004-12-01
A great deal of time, effort, and resources has been expended on global change research to date, but dissemination and visualization of the key data sets has been problematic. Toward that end, we are constructing an Earth System Atlas, which will serve as a single compendium describing the state of the art in our understanding of the Earth system and how it has responded, and is likely to respond, to natural and anthropogenic perturbations. The Atlas is an interactive web-based system of databases and data manipulation tools, and so is much more than a collection of pre-made maps posted on the web. It represents a tool for assembling, manipulating, and displaying specific data as selected and customized by the user. Maps are created "on the fly" according to user-specified instructions. The information contained in the Atlas represents the growing body of data assembled by the broader Earth system research community, and can be displayed in the form of maps and time series of the various relevant parameters that drive, and are driven by, changes in the Earth system at various time scales. This will serve to provide existing data to the community, but will also help to highlight data gaps that may hinder our understanding of critical components of the Earth system. This new approach to handling Earth system data is unique in several ways. First and foremost, data must be peer-reviewed. Further, the Atlas is designed to draw on the expertise and products of extensive international research networks rather than on a limited number of projects or institutions. It provides explanations targeted to the user's needs, and the display of maps and time series can be customized by the user. 
In general, the Atlas is designed to provide the research community with a new opportunity for data observation and manipulation, enabling new scientific discoveries in the coming years. An initial prototype of the Atlas has been developed and can be manipulated in real time.
Updating stand-level forest inventories using airborne laser scanning and Landsat time series data
NASA Astrophysics Data System (ADS)
Bolton, Douglas K.; White, Joanne C.; Wulder, Michael A.; Coops, Nicholas C.; Hermosilla, Txomin; Yuan, Xiaoping
2018-04-01
Vertical forest structure can be mapped over large areas by combining samples of airborne laser scanning (ALS) data with wall-to-wall spatial data, such as Landsat imagery. Here, we use samples of ALS data and Landsat time-series metrics to produce estimates of top height, basal area, and net stem volume for two timber supply areas near Kamloops, British Columbia, Canada, using an imputation approach. Both single-year and time series metrics were calculated from annual, gap-free Landsat reflectance composites representing 1984-2014. Metrics included long-term means of vegetation indices, as well as measures of the variance and slope of the indices through time. Terrain metrics, generated from a 30 m digital elevation model, were also included as predictors. We found that imputation models improved with the inclusion of Landsat time series metrics when compared to single-year Landsat metrics (relative RMSE decreased from 22.8% to 16.5% for top height, from 32.1% to 23.3% for basal area, and from 45.6% to 34.1% for net stem volume). Landsat metrics that characterized 30-years of stand history resulted in more accurate models (for all three structural attributes) than Landsat metrics that characterized only the most recent 10 or 20 years of stand history. To test model transferability, we compared imputed attributes against ALS-based estimates in nearby forest blocks (>150,000 ha) that were not included in model training or testing. Landsat-imputed attributes correlated strongly to ALS-based estimates in these blocks (R2 = 0.62 and relative RMSE = 13.1% for top height, R2 = 0.75 and relative RMSE = 17.8% for basal area, and R2 = 0.67 and relative RMSE = 26.5% for net stem volume), indicating model transferability. 
These findings suggest that in areas with spatially limited ALS data acquisitions, imputation models based on Landsat time series and terrain metrics can be used effectively to produce wall-to-wall estimates of key inventory attributes, providing an opportunity to update estimates of forest attributes in areas where inventory information is either out of date or non-existent.
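The imputation approach described, predicting ALS-derived attributes from wall-to-wall predictor metrics, can be sketched with a simple k-nearest-neighbour imputer. This generic version (names and distance choice are assumptions) ignores the variable weighting a production imputation model would use.

```python
import numpy as np

def impute_attributes(x_train, y_train, x_target, k=3):
    """Nearest-neighbour imputation: for each target pixel's predictor
    vector (e.g. Landsat time-series and terrain metrics), average the
    forest attributes (e.g. top height, basal area, volume) of its k
    closest ALS-sampled pixels in predictor space."""
    x_train = np.asarray(x_train, float)
    y_train = np.asarray(y_train, float)
    out = []
    for x in np.asarray(x_target, float):
        d = np.linalg.norm(x_train - x, axis=1)   # Euclidean distance
        nn = np.argsort(d)[:k]                    # k nearest ALS samples
        out.append(y_train[nn].mean(axis=0))      # average their attributes
    return np.array(out)
```

Because y_train can hold several columns at once, one call imputes height, basal area, and volume jointly, mirroring how the study transfers ALS structure estimates to unsampled blocks.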
NASA Astrophysics Data System (ADS)
Ai, Jinquan; Gao, Wei; Gao, Zhiqiang; Shi, Runhe; Zhang, Chao
2017-04-01
Spartina alterniflora is an aggressive invasive plant species that replaces native species and changes the structure and function of ecosystems across coastal wetlands in China, and is thus a major conservation concern. Mapping the spread of its invasion is a necessary first step for the implementation of effective ecological management strategies. The performance of a phenology-based approach for S. alterniflora mapping is explored in the coastal wetland of the Yangtze Estuary using a time series of GaoFen satellite no. 1 wide field of view camera (GF-1 WFV) imagery. First, a time series of the normalized difference vegetation index (NDVI) was constructed to evaluate the phenology of S. alterniflora. Two phenological stages (the senescence stage from November to mid-December and the green-up stage from late April to May) were determined to be important for S. alterniflora detection in the study area, based on NDVI temporal profiles, spectral reflectance curves of S. alterniflora and its coexistent species, and field surveys. Three phenology feature sets representing three major phenology-based detection strategies were then compared to map S. alterniflora: (1) single-date imagery acquired within the optimal phenological window, (2) multitemporal imagery, including four images from the two important phenological windows, and (3) monthly NDVI time series imagery. Support vector machine and maximum likelihood classifiers were applied to each phenology feature set at different training sample sizes. For all phenology feature sets, consistently high mapping accuracies were produced with sufficient training sample sizes, although significantly improved classification accuracies (10%) were obtained when the monthly NDVI time series imagery was employed. The optimal single-date imagery had the lowest accuracies of all detection strategies. 
The multitemporal analysis demonstrated little reduction in the overall accuracy compared with the use of monthly NDVI time series imagery. These results show the importance of considering the phenological stage for image selection for mapping S. alterniflora using GF-1 WFV imagery. Furthermore, in light of the better tradeoff between the number of images and classification accuracy when using multitemporal GF-1 WFV imagery, we suggest using multitemporal imagery acquired at appropriate phenological windows for S. alterniflora mapping at regional scales.
Landsat Time-Series Analysis Opens New Approaches for Regional Glacier Mapping
NASA Astrophysics Data System (ADS)
Winsvold, S. H.; Kääb, A.; Nuth, C.; Altena, B.
2016-12-01
The archive of Landsat satellite scenes is important for mapping glaciers, especially as it represents the longest-running continuous satellite record of sufficient resolution to track glacier changes over time. Newly launched optical sensors (Landsat 8 and Sentinel-2A) and those upcoming in the near future (Sentinel-2B) will provide very high temporal resolution of optical satellite images, especially in high-latitude regions. Because of the potential that lies within such near-future dense time series, methods for mapping glaciers from space should be revisited. We present application scenarios that utilize and explore dense time series of optical data for automatic mapping of glacier outlines and glacier facies. Throughout the season, glaciers display a temporal sequence of properties in optical reflection as the seasonal snow melts away, and glacier ice appears in the ablation area and firn in the accumulation area. In one application scenario, we simulated potential future seasonal resolution using several years of Landsat 5 TM/7 ETM+ data, and found a sinusoidal evolution of the spectral reflectance of on-glacier pixels throughout the year. We believe this is because of the shortwave infrared band and its sensitivity to snow grain size. The parameters retrieved from the fitted sine curve can be used for glacier mapping purposes; we found similar results using, for example, the mean of summer band-ratio images. In individual optical mapping scenes, conditions (e.g., snow, ice, and clouds) will vary and will not be equally optimal over the entire scene. Using robust statistics on stacked pixels reveals a potential for synthesizing optimal mapping scenes from a temporal stack, as we present in a further application scenario. The dense time series available from satellite imagery will also promote multi-temporal and multi-sensor analyses. 
The seasonal pattern of snow and ice on a glacier seen in the optical time series can, in the summer season, also be observed using radar backscatter series. Optical sensors reveal the reflective properties of the surface, while radar sensors may penetrate the surface, revealing properties from a certain volume. As an outlook to this contribution, we explore how information from SAR and optical sensor systems can be combined for different purposes.
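Fitting a sine curve with an annual period to a pixel's reflectance time series, as described above, reduces to linear least squares on sine and cosine terms; the retrieved mean, amplitude, and phase are the kind of per-pixel parameters usable for glacier mapping. The function name and the 365-day period are illustrative assumptions.

```python
import numpy as np

def fit_annual_sine(doy, refl):
    """Least-squares fit of refl(t) ≈ m + a·sin(2πt/365) + b·cos(2πt/365).
    Returns (mean level, amplitude, phase) of the fitted annual cycle."""
    t = 2 * np.pi * np.asarray(doy, float) / 365.0
    X = np.column_stack([np.ones_like(t), np.sin(t), np.cos(t)])
    (m, a, b), *_ = np.linalg.lstsq(X, np.asarray(refl, float), rcond=None)
    amplitude = float(np.hypot(a, b))
    phase = float(np.arctan2(b, a))
    return float(m), amplitude, phase
```

Applied per pixel across a multi-year Landsat stack, low-amplitude fits would indicate stable (e.g. off-glacier or firn) surfaces while strong annual cycles flag the seasonal snow/ice transition the abstract exploits.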
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandal, Mihirbaran; Wu, Yusheng; Misiaszek, Jeffrey
2016-04-14
We describe successful efforts to optimize the in vivo profile and address off-target liabilities of a series of BACE1 inhibitors represented by 6, which embodies the recently validated fused pyrrolidine iminopyrimidinone scaffold. Employing structure-based design, we pursued truncation of the cyanophenyl group of 6 that binds in the S3 pocket of BACE1, followed by modification of the thienyl group in S1. Optimization of the pyrimidine substituent that binds in the S2'–S2'' pocket of BACE1 remediated the time-dependent CYP3A4 inhibition of earlier analogues in this series and imparted high BACE1 affinity. These efforts resulted in the discovery of the difluorophenyl analogue 9 (MBi-4), which robustly lowered CSF and cortex Aβ40 in both rats and cynomolgus monkeys following a single oral dose. Compound 9 represents a unique molecular shape among BACE inhibitors reported to potently lower central Aβ in nonrodent preclinical species.
Petroleum supply monthly, April 1991. [Glossary included]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-04-29
Data presented in the PSM (Petroleum Supply Monthly) describe the supply and disposition of petroleum products in the United States and major US geographic regions. The data series describe production, imports and exports, inter-Petroleum Administration for Defense (PAD) District movements, and inventories by the primary suppliers of petroleum products in the United States (50 states and the District of Columbia). The reporting universe includes those petroleum sectors in primary supply: petroleum refiners, motor gasoline blenders, operators of natural gas processing plants and fractionators, inter-PAD transporters, importers, and major inventory holders of petroleum products and crude oil. When aggregated, the data reported by these sectors approximately represent the consumption of petroleum products in the United States. The tables and figures in the Summary Statistics section of the PSM present a time series of selected petroleum data on a US level. Most time series include preliminary estimates for one month. The Detailed Statistics tables of the PSM present statistics for the most current month available as well as year-to-date. Industry terminology and product definitions are listed alphabetically in the Glossary. 14 figs., 65 tabs.
Identifying the scale-dependent motifs in atmospheric surface layer by ordinal pattern analysis
NASA Astrophysics Data System (ADS)
Li, Qinglei; Fu, Zuntao
2018-07-01
Ramp-like structures in various atmospheric surface layer time series have long been studied, but the presence of finer-scale motifs embedded within larger-scale ramp-like structures has largely been overlooked in the reported literature. Here a novel, objective, and well-adapted methodology, ordinal pattern analysis, is adopted to study the finer-scale motifs in atmospheric boundary-layer (ABL) time series. The analysis shows that the motifs represented by different ordinal patterns exhibit clustering properties, and 6 dominant motifs out of the 24 possible account for about 45% of the time series at particular scales, indicating the higher contribution of finer-scale motifs to the series. Further analysis indicates that motif statistics are similar for stable and unstable conditions at larger scales, but large discrepancies are found at smaller scales, where the frequencies of motifs "1234" and/or "4321" are somewhat higher under stable conditions than under unstable conditions. Under stable conditions, the occurrence frequencies of motifs "1234" and "4321" change greatly: the occurrence frequency of motif "1234" decreases from nearly 24% to 4.5% as the scale factor increases, and the occurrence frequency of motif "4321" changes nonlinearly with increasing scale. These large scale-dependent changes of the dominant motifs can be taken as an indicator to quantify changes in flow structure under different stability conditions, and a motif entropy can be defined from only the 6 dominant motifs to quantify this behavior across time scales. All these results suggest that the defining scale of the finer-scale motifs should be carefully taken into consideration in the interpretation of turbulent coherent structures.
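Ordinal pattern analysis of the kind described maps each short window of the series to the permutation of its ranks, so a monotonically increasing window becomes motif "1234" and a decreasing one "4321"; pattern frequencies and an entropy over the dominant motifs follow directly. This is a generic Bandt-Pompe-style sketch; names and defaults are assumptions.

```python
import numpy as np
from math import log

def ordinal_pattern_freqs(x, order=4, lag=1):
    """Relative frequency of each ordinal pattern: every window of
    `order` samples (spaced by `lag`) is labeled by the rank order of
    its values, e.g. increasing values -> '1234'."""
    x = np.asarray(x, float)
    counts = {}
    n = len(x) - (order - 1) * lag
    for i in range(n):
        w = x[i:i + order * lag:lag]
        pat = ''.join(str(r + 1) for r in np.argsort(np.argsort(w)))
        counts[pat] = counts.get(pat, 0) + 1
    return {p: c / n for p, c in counts.items()}

def motif_entropy(freqs, top=6):
    """Shannon entropy of the `top` most frequent motifs (renormalized),
    mirroring the abstract's entropy over the 6 dominant motifs."""
    f = sorted(freqs.values(), reverse=True)[:top]
    s = sum(f)
    return -sum((v / s) * log(v / s) for v in f if v > 0)
```

Varying `lag` plays the role of the scale factor in the abstract, so tracking the "1234"/"4321" frequencies against `lag` reproduces the scale-dependence analysis.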
Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang
2016-01-01
Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM), providing a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using a regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increases, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
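The core BMA arithmetic behind results like these, weighting candidate models and widening the interval by the between-model spread, can be sketched generically. The standard BIC approximation to the posterior model weights is used here as an illustration; it is not necessarily the paper's exact prior or weighting scheme.

```python
import numpy as np

def bma_weights(bics):
    """Posterior model weights from BIC values, w_m ∝ exp(-BIC_m / 2),
    the standard BIC approximation used in Bayesian model averaging."""
    b = np.asarray(bics, float)
    w = np.exp(-(b - b.min()) / 2)     # subtract min for numerical stability
    return w / w.sum()

def bma_combine(estimates, variances, bics):
    """BMA point estimate and variance for one coefficient (e.g. the PM10
    effect) across candidate models. The variance adds between-model
    spread to the within-model variances, which is why BMA intervals
    come out slightly wider than any single model's."""
    w = bma_weights(bics)
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    mean = float(w @ est)
    total_var = float(w @ (var + (est - mean) ** 2))
    return mean, total_var
```

With two equally plausible models disagreeing on the coefficient, the combined variance exceeds either model's own variance, the widening effect reported for the single-pollutant CIs above.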
The IRIS Federator: Accessing Seismological Data Across Data Centers
NASA Astrophysics Data System (ADS)
Trabant, C. M.; Van Fossen, M.; Ahern, T. K.; Weekly, R. T.
2015-12-01
In 2013 the International Federation of Digital Seismograph Networks (FDSN) approved a specification for web service interfaces for accessing seismological station metadata, time series and event parameters. Since then, a number of seismological data centers have implemented FDSN service interfaces, with more implementations in development. We have developed a new system called the IRIS Federator which leverages this standardization and provides the scientific community with a service for easy discovery and access of seismological data across FDSN data centers. These centers are located throughout the world, and this work represents one model of a system for data collection across geographic and political boundaries. The main components of the IRIS Federator are a catalog of time series metadata holdings at each data center and a web service interface for searching the catalog. The service interface is designed to support client-side federated data access, a model in which the client (software run by the user) queries the catalog and then collects the data from each identified center. By default the results are returned in a format suitable for direct submission to those web services, but they can also be formatted in a simple text format for general data discovery purposes. The interface will remove any duplication of time series channels between data centers according to a set of business rules by default; however, a user may request results with all duplicate time series entries included. We will demonstrate how client-side federation is being incorporated into some of the DMC's data access tools. We anticipate further enhancement of the IRIS Federator to improve data discovery in various scenarios and to improve usefulness to communities beyond seismology. Data centers with FDSN web services: http://www.fdsn.org/webservices/ The IRIS Federator query interface: http://service.iris.edu/irisws/fedcatalog/1/
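Client-side federated access of the kind described can start from a plain HTTP query against the fedcatalog endpoint given above. The sketch below only builds a query URL with FDSN-style parameter names (`net`, `sta`, `cha`, `starttime`, `endtime`, `format`); these names follow the usual FDSN web-service conventions and should be checked against the fedcatalog documentation before use:

```python
from urllib.parse import urlencode

# Endpoint taken from the abstract; parameter names are assumed
# to follow FDSN web-service conventions.
FEDCATALOG = "http://service.iris.edu/irisws/fedcatalog/1/query"

def fedcatalog_url(network, station, channel, start, end, fmt="text"):
    """Build a fedcatalog query URL for one channel and time window."""
    params = {
        "net": network, "sta": station, "cha": channel,
        "starttime": start, "endtime": end, "format": fmt,
    }
    return FEDCATALOG + "?" + urlencode(params)

url = fedcatalog_url("IU", "ANMO", "BHZ", "2015-01-01", "2015-01-02")
```

A federating client would fetch this URL, parse the per-data-center request lists in the response, and then retrieve the waveforms from each center's own dataselect service.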
Detecting macroeconomic phases in the Dow Jones Industrial Average time series
NASA Astrophysics Data System (ADS)
Wong, Jian Cheng; Lian, Heng; Cheong, Siew Ann
2009-11-01
In this paper, we perform statistical segmentation and clustering analysis of the Dow Jones Industrial Average (DJI) time series between January 1997 and August 2008. Modeling the index movements and log-index movements as stationary Gaussian processes, we find a total of 116 and 119 statistically stationary segments respectively. These can then be grouped into between five and seven clusters, each representing a different macroeconomic phase. The macroeconomic phases are distinguished primarily by their volatilities. We find that the US economy, as measured by the DJI, spends most of its time in a low-volatility phase and a high-volatility phase. The former can be roughly associated with economic expansion, while the latter contains the economic contraction phase in the standard economic cycle. Both phases are interrupted by a moderate-volatility market correction phase, but extremely-high-volatility market crashes are found mostly within the high-volatility phase. From the temporal distribution of various phases, we see a high-volatility phase from mid-1998 to mid-2003, and another starting mid-2007 (the current global financial crisis). Transitions from the low-volatility phase to the high-volatility phase are preceded by a series of precursor shocks, whereas the transition from the high-volatility phase to the low-volatility phase is preceded by a series of inverted shocks. The time scale for both types of transitions is about a year. We also identify the July 1997 Asian Financial Crisis to be the trigger for the mid-1998 transition, and an unnamed May 2006 market event related to corrections in the Chinese markets to be the trigger for the mid-2007 transition.
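A crude stand-in for the volatility-based phase labelling can be sketched as below. Note this is a hedged simplification: the paper's actual method is statistical segmentation of the index as stationary Gaussian processes followed by clustering, not a fixed rolling-window threshold.

```python
import math

def log_returns(prices):
    """Log-index movements between consecutive closes."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def rolling_volatility(returns, window=5):
    """Standard deviation of returns over a sliding window."""
    vols = []
    for i in range(len(returns) - window + 1):
        chunk = returns[i:i + window]
        mean = sum(chunk) / window
        vols.append(math.sqrt(sum((r - mean) ** 2 for r in chunk) / window))
    return vols

def label_phases(vols, threshold):
    """Two-phase labelling: 'high' above the threshold, else 'low'."""
    return ["high" if v > threshold else "low" for v in vols]
```

Replacing the threshold step with segmentation plus clustering recovers the five-to-seven phase picture described in the abstract.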
Radiance And Irradiance Of The Solar HeII 304 Emission Line
NASA Astrophysics Data System (ADS)
McMullin, D. R.; Floyd, L. E.; Auchère, F.
2013-12-01
For over 17 years, EIT and the later EUVI instruments aboard SoHO and STEREO, respectively, have provided a time series of radiant images in the HeII 30.4 nm transition-region line and three coronal emission lines (FeIX/X, FeXII, and FeXV). While the EIT measurements were gathered from positions approximately on the Earth-Sun axis, EUVI images have been gathered at angles ranging to more than ±90 degrees in solar longitude relative to the Earth-Sun axis. Using a Differential Emission Measure (DEM) model, these measurements provide a basis for estimates of the spectral irradiance for the solar spectrum of wavelengths between 15 and 50 nm at any position in the heliosphere. In particular, we generate the HeII 30.4 nm spectral irradiance in all directions in the heliosphere and examine its time series in selected directions. Such spectra are utilized for two distinct purposes. First, the photoionization rate of neutral He at each position is calculated. Neutral He is of interest because it traverses the heliopause relatively undisturbed and therefore provides a measure of isotopic parameters beyond the heliosphere. Second, we use these to generate a time series of estimates of the solar spectral luminosity in the HeII 30.4 nm emission line extending from the recent solar cycle 23 minimum into the current weak solar cycle 24, enabling an estimate of its variation over the solar cycle. Because this 30.4 nm spectral luminosity is the sum of such radiation in all directions, its time series is devoid of the 27-day solar rotation periodicity present in indices typically used to represent solar activity.
Hidden discriminative features extraction for supervised high-order time series modeling.
Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee
2016-11-01
In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) reduces dimensionality while robustly mining the underlying discriminative features, ii) results in effective interpretable features that lead to improved classification and visualization, and iii) reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue problem at each alternation step. Two real third-order tensor-structured time series datasets (an epilepsy electroencephalogram (EEG) that is modeled as channel×frequency bin×time frame and a microarray dataset that is modeled as gene×sample×time) were used for the evaluation of the TDFE. The experiment results corroborate the advantages of the proposed method with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based, discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
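The unsupervised core of a Tucker decomposition (plain truncated HOSVD, without the between/within-class scatter optimization that TDFE adds on top) can be sketched with NumPy; the ranks and variable names here are illustrative only:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: the chosen mode becomes the rows of a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_multiply(tensor, matrix, mode):
    """n-mode product: contract the given mode with the matrix."""
    moved = np.moveaxis(tensor, mode, 0)
    return np.moveaxis(np.tensordot(matrix, moved, axes=1), 0, mode)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: one factor matrix per mode plus a core.

    Factor matrices are the leading left singular vectors of each
    mode unfolding; the core is the tensor projected onto them.
    """
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = tensor
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)
    return core, factors
```

With full ranks the decomposition is exact; truncating the ranks gives the dimensionality reduction that supervised variants such as TDFE then shape using class labels.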
NASA Astrophysics Data System (ADS)
Wang, Y.; Lin, L.; Chen, H.
2015-07-01
Natural disasters have enormous impacts on human society, especially on the development of the economy. To support decision-making in mitigation and adaptation to natural disasters, assessment of economic impacts is fundamental and of great significance. Based on a review of the literature on economic impact evaluation, this paper proposes a new assessment model of the economic impacts of droughts, using the sugar industry in China as a case study, which focuses on the generation and transfer of economic impacts along a simple value chain involving only sugarcane growers and a sugar-producing company. A profit loss rate perspective is applied to quantify the economic impact. By using "with and without" analysis, profit loss is defined as the difference in profits between disaster-hit and disaster-free scenarios. To calculate profit, a time series analysis of sugar prices is applied. With the support of a linear regression model, an endogenous trend in sugar price is identified and the time series of sugar prices "without" disaster is obtained, using an autoregressive error model to separate the impact of disasters from the internal price trend. Unlike the settings in other assessment models, representative sugar prices, which represent the value levels under disaster-free and disaster-hit conditions, are integrated from a long time series that covers the whole period of drought. As a result, it is found that in a rigid farming contract, sugarcane growers suffer far more than the sugar company when impacted by severe drought, which may prompt reflection among various economic bodies on economic equality related to the occurrence of natural disasters. Further, sensitivity analysis of the model reveals that the sugarcane purchase price has a significant influence on the profit loss rate, which implies that setting a proper sugarcane purchase price would be an effective way of realizing economic equality in future practice of contract farming.
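The "with and without" comparison reduces to two small pieces: an endogenous linear trend fitted to the price series (the disaster-free baseline) and a profit loss rate defined as the relative difference between disaster-free and disaster-hit profits. The sketch below shows only these two pieces and omits the autoregressive error model the paper layers on top:

```python
def linear_trend(prices):
    """Least-squares slope and intercept of price over t = 0, 1, ..., n-1."""
    n = len(prices)
    mean_t = (n - 1) / 2.0
    mean_p = sum(prices) / n
    slope = (sum((t - mean_t) * (p - mean_p) for t, p in enumerate(prices))
             / sum((t - mean_t) ** 2 for t in range(n)))
    return slope, mean_p - slope * mean_t

def profit_loss_rate(profit_without, profit_with):
    """Relative profit loss between disaster-free and disaster-hit scenarios."""
    return (profit_without - profit_with) / profit_without
```

Extrapolating the fitted trend over the drought period gives the "without" price path from which the disaster-free profit is computed.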
Thermoacoustic tomography for an integro-differential wave equation modeling attenuation
NASA Astrophysics Data System (ADS)
Acosta, Sebastián; Palacios, Benjamín
2018-02-01
In this article we study the inverse problem of thermoacoustic tomography (TAT) on a medium with attenuation represented by a time-convolution (or memory) term, and whose consideration is motivated by the modeling of ultrasound waves in heterogeneous tissue via fractional derivatives with spatially dependent parameters. Under the assumption of being able to measure data on the whole boundary, we prove uniqueness and stability, and propose a convergent reconstruction method for a class of smooth variable sound speeds. By a suitable modification of the time reversal technique, we obtain a Neumann series reconstruction formula.
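Schematically, with Λ the measurement operator mapping the initial pressure f to the boundary data g and A a (modified) time-reversal operator, a Neumann series reconstruction of the kind referenced takes the following form. The notation here is assumed for illustration, not taken from the paper:

```latex
% g = \Lambda f : measured boundary data from initial pressure f
% A : (modified) time reversal applied to the data
% Error operator of a single time-reversal pass:
K \;=\; I - A\Lambda, \qquad \|K\| < 1
% Neumann series reconstruction:
f \;=\; \sum_{j=0}^{\infty} K^{j} A g
% equivalently, the fixed-point iteration
f_{k+1} \;=\; f_k + A\,(g - \Lambda f_k)
```

Convergence rests on the contraction property of K, which is what the uniqueness and stability analysis establishes for the attenuating (memory-term) wave model.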
NASA Astrophysics Data System (ADS)
Baisden, W. T.
2011-12-01
Time-series radiocarbon measurements have substantial ability to constrain the size and residence time of the soil C pools commonly represented in ecosystem models. Radiocarbon remains unique in the ability to constrain the large stabilized C pool with decadal residence times. Radiocarbon also contributes usefully to constraining the size and turnover rate of the passive pool, but typically struggles to constrain pools with residence times less than a few years. Overall, the number of pools and associated turnover rates that can be constrained depends upon the number of time-series samples available, the appropriateness of chemical or physical fractions to isolate unequivocal pools, and the utility of additional C flux data to provide additional constraints. In New Zealand pasture soils, we demonstrate the ability to constrain decadal turnover times to within a few years for the stabilized pool and to reasonably constrain the passive fraction. Good constraint is obtained with two time-series samples spaced 10 or more years apart after 1970. Three or more time-series samples further improve the level of constraint. Work within this context shows that a two-pool model does explain soil radiocarbon data for the most detailed profiles available (11 time-series samples), and identifies clear and consistent differences in rates of C turnover and passive fraction in Andisols vs Non-Andisols. Furthermore, samples from multiple horizons can commonly be combined, yielding consistent residence times and passive fraction estimates that are stable with, or increase with, depth at different sites. Radiocarbon generally fails to quantify rapid C turnover, however. Given that the strength of radiocarbon is estimating the size and turnover of the stabilized (decadal) and passive (millennial) pools, the magnitude of fast-cycling pool(s) can be estimated by subtracting the radiocarbon-based estimates of turnover within stabilized and passive pools from total estimates of NPP.
In grazing land, these estimates can be derived primarily from measured aboveground NPP and calculated belowground NPP. Results suggest that only 19-36% of heterotrophic soil respiration is derived from the soil C with rapid turnover times. A final logical step in synthesis is the analysis of temporal variation in NPP, primarily due to climate, as a driver of changes in plant inputs and resulting dynamic changes in rapid and decadal soil C pools. In sites with good time series samples from 1959-1975, we examine the apparent impacts of measured or modelled (Biome-BGC) NPP on soil Δ14C. Ultimately, these approaches have the ability to empirically constrain, and provide limited verification of, the soil C cycle as commonly depicted in ecosystem biogeochemistry models.
2012-01-01
Background Companion diagnostic tests can depend on accurate measurement of protein expression in tissues. Preanalytic variables, especially cold ischemic time (time from tissue removal to fixation in formalin), can affect the measurement and may cause false-negative results. We examined 23 proteins, including four commonly used breast cancer biomarker proteins, to quantify their sensitivity to cold ischemia in breast cancer tissues. Methods A series of 93 breast cancer specimens with known time-to-fixation represented in a tissue microarray and a second series of 25 matched pairs of core needle biopsies and breast cancer resections were used to evaluate changes in antigenicity as a function of cold ischemic time. Estrogen receptor (ER), progesterone receptor (PgR), HER2, Ki67, and 19 other antigens were tested. Each antigen was measured using the AQUA method of quantitative immunofluorescence on at least one series. All statistical tests were two-sided. Results We found no evidence for loss of antigenicity with time-to-fixation for ER, PgR, HER2, or Ki67 in a 4-hour time window. However, with a bootstrapping analysis, we observed a trend toward loss for ER and PgR, a statistically significant loss of antigenicity for phosphorylated tyrosine (P = .0048), and trends toward loss for other proteins. There was evidence of increased antigenicity in acetylated lysine, AKAP13 (P = .009), and HIF1A (P = .046), which are proteins known to be expressed in conditions of hypoxia. The loss of antigenicity for phosphorylated tyrosine and the increases in expression of AKAP13 and HIF1A were confirmed in the biopsy/resection series. Conclusions Key breast cancer biomarkers show no evidence of loss of antigenicity, although this dataset assesses the relatively short time beyond the 1-hour limit in recent guidelines. Other proteins show changes in antigenicity in both directions.
Future studies that extend the time range and normalize for heterogeneity will provide more comprehensive information on preanalytic variation due to cold ischemic time. PMID:23090068
Advanced methods for modeling water-levels and estimating drawdowns with SeriesSEE, an Excel add-in
Halford, Keith; Garcia, C. Amanda; Fenelon, Joe; Mirus, Benjamin B.
2012-12-21
Water-level modeling is used for multiple-well aquifer tests to reliably differentiate pumping responses from natural water-level changes in wells, or “environmental fluctuations.” Synthetic water levels are created during water-level modeling and represent the summation of multiple component fluctuations, including those caused by environmental forcing and pumping. Pumping signals are modeled by transforming step-wise pumping records into water-level changes by using superimposed Theis functions. Water-levels can be modeled robustly with this Theis-transform approach because environmental fluctuations and pumping signals are simulated simultaneously. Water-level modeling with Theis transforms has been implemented in the program SeriesSEE, which is a Microsoft® Excel add-in. Moving average, Theis, pneumatic-lag, and gamma functions transform time series of measured values into water-level model components in SeriesSEE. Earth tides and step transforms are additional computed water-level model components. Water-level models are calibrated by minimizing a sum-of-squares objective function where singular value decomposition and Tikhonov regularization stabilize results. Drawdown estimates from a water-level model are the summation of all Theis transforms minus residual differences between synthetic and measured water levels. The accuracy of drawdown estimates is limited primarily by noise in the data sets, not the Theis-transform approach. Drawdowns much smaller than environmental fluctuations have been detected across major fault structures, at distances of more than 1 mile from the pumping well, and with limited pre-pumping and recovery data at sites across the United States. In addition to water-level modeling, utilities exist in SeriesSEE for viewing, cleaning, manipulating, and analyzing time-series data.
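The Theis transform at the core of this approach converts a step-wise pumping record into a drawdown signal via the well function. A minimal sketch follows, in pure Python with the small-u series expansion of W(u); SeriesSEE itself implements this inside Excel, and the parameter values in the test are illustrative:

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u, terms=30):
    """Theis well function W(u) = E1(u), via the small-u series:
    W(u) = -gamma - ln(u) + sum_{k>=1} (-1)^(k+1) u^k / (k * k!)."""
    total = -EULER_GAMMA - math.log(u)
    for k in range(1, terms + 1):
        total += (-1) ** (k + 1) * u ** k / (k * math.factorial(k))
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q W(u) / (4 pi T), with u = r^2 S / (4 T t).

    Q: pumping rate, T: transmissivity, S: storativity,
    r: distance to the pumping well, t: time since pumping began
    (consistent units assumed).
    """
    u = r * r * S / (4.0 * T * t)
    return Q * well_function(u) / (4.0 * math.pi * T)
```

Superimposing such responses for each pumping step, alongside the environmental-fluctuation components, gives the synthetic water level that is calibrated against measurements.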
NASA Astrophysics Data System (ADS)
Shan, Zhendong; Ling, Daosheng
2018-02-01
This article develops an analytical solution for the transient wave propagation of a cylindrical P-wave line source in a semi-infinite elastic solid with a fluid layer. The analytical solution is presented in a simple closed form in which each term represents a transient physical wave. The Scholte equation is derived, through which the Scholte wave velocity can be determined. The Scholte wave is the wave that propagates along the interface between the fluid and solid. To develop the analytical solution, the wave fields in the fluid and solid are defined, their analytical solutions in the Laplace domain are derived using the boundary and interface conditions, and the solutions are then decomposed into series form according to the power series expansion method. Each term of the series solution has a clear physical meaning and represents a transient wave path. Finally, by applying Cagniard's method and the convolution theorem, the analytical solutions are transformed into the time domain. Numerical examples are provided to illustrate some interesting features in the fluid layer, the interface and the semi-infinite solid. When the P-wave velocity in the fluid is higher than that in the solid, two head waves in the solid, one head wave in the fluid and a Scholte wave at the interface are observed for the cylindrical P-wave line source.
NASA Astrophysics Data System (ADS)
Yin, Yi; Shang, Pengjian
2013-12-01
We use multiscale detrended fluctuation analysis (MSDFA) and multiscale detrended cross-correlation analysis (MSDCCA) to investigate auto-correlation (AC) and cross-correlation (CC) in the US and Chinese stock markets during 1997-2012. The results show that US and Chinese stock indices differ in terms of their multiscale AC structures. Stock indices in the same region also differ with regard to their multiscale AC structures. We analyze AC and CC behaviors among indices for the same region to determine similarity among six stock indices and divide them into four groups accordingly. We choose S&P500, NQCI, HSI, and the Shanghai Composite Index as representative samples for simplicity. MSDFA and MSDCCA results and average MSDFA spectra for local scaling exponents (LSEs) for individual series are presented. We find that the MSDCCA spectrum for LSE CC between two time series generally tends to be greater than the average MSDFA LSE spectrum for individual series. We obtain detailed multiscale structures and relations for CC between the four representatives. MSDFA and MSDCCA with secant rolling windows of different sizes are then applied to reanalyze the AC and CC. Vertical and horizontal comparisons of different window sizes are made. The MSDFA and MSDCCA results for the original window size are confirmed and some new interesting characteristics and conclusions regarding multiscale correlation structures are obtained.
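Plain single-scale DFA, the building block underlying MSDFA, can be sketched as follows. This is a textbook DFA-1 implementation (integrated profile, local linear detrending, root-mean-square fluctuation per scale), not the multiscale local-scaling-exponent machinery used in the paper:

```python
import numpy as np

def dfa(x, scales):
    """DFA-1: fluctuation function F(s) for each window size s."""
    profile = np.cumsum(x - np.mean(x))  # integrated, mean-removed series
    fluctuations = []
    for s in scales:
        n_seg = len(profile) // s
        t = np.arange(s)
        msq = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
            msq.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(msq)))
    return np.array(fluctuations)
```

The slope of log F(s) against log s estimates the scaling exponent; computing it within sliding windows of scales yields the local scaling exponents whose spectra the paper compares across markets.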
Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating
NASA Astrophysics Data System (ADS)
Chen, Liangji; Guo, Guangsong; Li, Huiying
2017-07-01
NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our developing CNC (Computer Numerical Control) system. First, we use two NURBS curves to represent the tool-tip and tool-axis paths respectively. According to the feedrate and a Taylor series expansion, servo-control signals for the 5 axes are obtained in each interpolation cycle. Then, the generation procedure of NC (Numerical Control) code with the presented method is introduced, and the method for integrating the interpolator into our developing CNC system is given. The servo-controlling structure of the CNC system is also introduced. An illustrative example indicates that the proposed method can enhance machining accuracy and that the spline interpolator is feasible for 5-axis CNC systems.
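Evaluating a point on a NURBS curve, the basic operation inside such an interpolator, can be sketched with the Cox-de Boor recursion; the clamped knot vector and control points used in the usage example are illustrative, and this naive recursive form trades speed for clarity (a real-time interpolator would use an iterative de Boor evaluation):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(u).

    Uses the half-open convention, so u must lie strictly below
    the last knot value.
    """
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, degree, knots, ctrl, weights):
    """Rational evaluation: C(u) = sum N_i w_i P_i / sum N_i w_i."""
    num = [0.0] * len(ctrl[0])
    den = 0.0
    for i, (point, w) in enumerate(zip(ctrl, weights)):
        nw = bspline_basis(i, degree, u, knots) * w
        den += nw
        num = [acc + nw * coord for acc, coord in zip(num, point)]
    return [acc / den for acc in num]
```

Sampling `nurbs_point` at parameter values advanced per the feedrate gives the per-cycle tool-tip positions that the interpolator converts into axis commands.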
NASA Astrophysics Data System (ADS)
El Yazidi, Abdelhadi; Ramonet, Michel; Ciais, Philippe; Broquet, Gregoire; Pison, Isabelle; Abbaris, Amara; Brunner, Dominik; Conil, Sebastien; Delmotte, Marc; Gheusi, Francois; Guerin, Frederic; Hazan, Lynn; Kachroudi, Nesrine; Kouvarakis, Giorgos; Mihalopoulos, Nikolaos; Rivier, Leonard; Serça, Dominique
2018-03-01
This study deals with the problem of identifying atmospheric data influenced by local emissions that can result in spikes in time series of greenhouse gases and long-lived tracer measurements. We considered three spike detection methods known as coefficient of variation (COV), robust extraction of baseline signal (REBS) and standard deviation of the background (SD) to detect and filter positive spikes in continuous greenhouse gas time series from four monitoring stations representative of the European ICOS (Integrated Carbon Observation System) Research Infrastructure network. The results of the different methods are compared to each other and against a manual detection performed by station managers. Four stations were selected as test cases to apply the spike detection methods: a continental rural tower of 100 m height in eastern France (OPE), a high-mountain observatory in the south-west of France (PDM), a regional marine background site in Crete (FKL) and a marine clean-air background site in the Southern Hemisphere on Amsterdam Island (AMS). This selection allows us to address spike detection problems in time series with different variability. Two years of continuous measurements of CO2, CH4 and CO were analysed. All methods were found to be able to detect short-term spikes (lasting from a few seconds to a few minutes) in the time series. Analysis of the results of each method leads us to exclude the COV method due to the requirement to arbitrarily specify an a priori percentage of rejected data in the time series, which may over- or underestimate the actual number of spikes. The two other methods freely determine the number of spikes for a given set of parameters, and the values of these parameters were calibrated to provide the best match with spikes known to reflect local emissions episodes that are well documented by the station managers. 
More than 96 % of the spikes manually identified by station managers were successfully detected both in the SD and the REBS methods after the best adjustment of parameter values. At PDM, measurements made by two analyzers located 200 m from each other allow us to confirm that the CH4 spikes identified in one of the time series but not in the other correspond to a local source from a sewage treatment facility in one of the observatory buildings. From this experiment, we also found that the REBS method underestimates the number of positive anomalies in the CH4 data caused by local sewage emissions. As a conclusion, we recommend the use of the SD method, which also appears to be the easiest one to implement in automatic data processing, used for the operational filtering of spikes in greenhouse gases time series at global and regional monitoring stations of networks like that of the ICOS atmosphere network.
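An SD-style filter of the recommended kind can be sketched as a trailing-window threshold test: a value is flagged when it exceeds the background mean by some multiple of the background standard deviation. The window length and multiplier k below are placeholder tuning parameters, standing in for the values the study calibrates against manually flagged spikes:

```python
def sd_spike_filter(series, window=60, k=3.0):
    """Flag positive spikes exceeding the trailing background
    mean by k standard deviations; previously flagged points are
    excluded from the background estimate."""
    flags = [False] * len(series)
    for i in range(window, len(series)):
        bg = [v for v, f in zip(series[i - window:i], flags[i - window:i])
              if not f]
        if len(bg) < 2:
            continue
        mean = sum(bg) / len(bg)
        var = sum((v - mean) ** 2 for v in bg) / (len(bg) - 1)
        if series[i] > mean + k * var ** 0.5:
            flags[i] = True
    return flags
```

Flagged samples would then be removed (or inspected) before computing background mole fractions at the station.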
NASA Astrophysics Data System (ADS)
Sharma, A.; Woldemeskel, F. M.; Sivakumar, B.; Mehrotra, R.
2014-12-01
We outline a new framework for assessing uncertainties in model simulations, be they hydro-ecological simulations for known scenarios or climate simulations for assumed scenarios representing the future. This framework is illustrated here using GCM projections for future climates for hydrologically relevant variables (precipitation and temperature), with the uncertainty segregated into three dominant components - model uncertainty, scenario uncertainty (representing greenhouse gas emission scenarios), and ensemble uncertainty (representing uncertain initial conditions and states). A novel uncertainty metric, the Square Root Error Variance (SREV), is used to quantify the uncertainties involved. The SREV requires: (1) Interpolating raw and corrected GCM outputs to a common grid; (2) Converting these to percentiles; (3) Estimating SREV for model, scenario, initial condition and total uncertainty at each percentile; and (4) Transforming SREV to a time series. The outcome is a spatially varying series of SREVs associated with each model that can be used to assess how uncertain the system is at each simulated point or time. This framework, while illustrated in a climate change context, is applicable to the assessment of uncertainty in any modelling framework. The proposed method is applied to monthly precipitation and temperature from 6 CMIP3 and 13 CMIP5 GCMs across the world. For CMIP3, the B1, A1B and A2 scenarios are considered, whereas for CMIP5, RCP2.6, RCP4.5 and RCP8.5, representing low, medium and high emissions, are used. For both CMIP3 and CMIP5, model structure is the largest source of uncertainty, which reduces significantly after correcting for biases. Scenario uncertainty increases into the future, especially for temperature, due to the divergence of the three emission scenarios analysed. While CMIP5 precipitation simulations exhibit a small reduction in total uncertainty over CMIP3, there is almost no reduction observed for temperature projections.
Estimation of uncertainty in both space and time sheds light on the spatial and temporal patterns of uncertainties in GCM outputs, providing an effective platform for risk-based assessments of any alternate plans or decisions that may be formulated using GCM simulations.
Rainfall height stochastic modelling as a support tool for landslides early warning
NASA Astrophysics Data System (ADS)
Capparelli, G.; Giorgio, M.; Greco, R.; Versace, P.
2009-04-01
Occurrence of landslides is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Although heavy landslides frequently occurred in Campania, southern Italy, during the last decade, no complete data sets are available for natural slopes where landslides occurred. As a consequence, landslide risk assessment procedures and early warning systems in Campania still rely on simple empirical models based on correlation between daily rainfall records and observed landslides, like the FLAIR model [Versace et al., 2003]. Effectiveness of such systems could be improved by reliable quantitative rainfall prediction. In mountainous areas, rainfall spatial and temporal variability are very pronounced due to orographic effects, making predictions even more complicated. Existing rain gauge networks are not dense enough to resolve the small-scale spatial variability, and the same limitation of spatial resolution affects rainfall height maps provided by radar sensors as well as by physically based meteorological models. Therefore, analysis of on-site recorded rainfall height time series still represents the most effective approach for a reliable prediction of local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field, often carried out by means of autoregressive models, such as AR and ARMA [Box and Jenkins, 1976]. Sometimes exogenous information coming from additional series of observations is also taken into account, and the models are called ARX and ARMAX (e.g. Salas [1992]). Such models gave the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series.
Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neymann-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted in conjunction with the FLAIR model to calculate the probability of flowslide occurrence. The final aim of the study is in fact to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of the Campania Region civil protection agency meteorological warning network. So far, the model has been applied only to data series recorded at a single rain gauge. Future extensions will deal with spatial correlation between time series recorded at different gauges. ACKNOWLEDGEMENTS The research was co-financed by the Italian Ministry of University, by means of the PRIN 2006 program, within the research project entitled ‘Definition of critical rainfall thresholds for destructive landslides for civil protection purposes'. REFERENCES Box, G.E.P. and Jenkins, G.M., 1976. Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco. Cowpertwait, P.S.P., Kilsby, C.G. and O'Connell, P.E., 2002. A space-time Neyman-Scott model of rainfall: Empirical analysis of extremes, Water Resources Research, 38(8):1-14. Salas, J.D., 1992. Analysis and modeling of hydrological time series, in D.R. Maidment, ed., Handbook of Hydrology, McGraw-Hill, New York. Heneker, T.M., Lambert, M.F. and Kuczera, G., 2001. A point rainfall model for risk-based design, Journal of Hydrology, 247(1-2):54-71. Versace, P., Sirangelo, B. and Capparelli, G., 2003.
Forewarning model of landslides triggered by rainfall. Proc. 3rd International Conference on Debris-Flow Hazards Mitigation: Mechanics, Prediction and Assessment, Davos.
NASA Astrophysics Data System (ADS)
Müller, H.; Haberlandt, U.
2018-01-01
Rainfall time series of high temporal resolution and spatial density are crucial for urban hydrology. The multiplicative random cascade model can be used for temporal rainfall disaggregation of daily data to generate such time series. Here, the uniform splitting approach with a branching number of 3 in the first disaggregation step is applied. To achieve a final resolution of 5 min, subsequent steps after disaggregation are necessary. Three modifications at different disaggregation levels are tested in this investigation (uniform splitting at Δt = 15 min, linear interpolation at Δt = 7.5 min and Δt = 3.75 min). Results are compared both with observations and with an often-used approach based on the assumption that time steps of Δt = 5.625 min, as result from applying a branching number of 2 throughout, can be replaced with Δt = 5 min (called the 1280 min approach). Spatial consistence is implemented in the disaggregated time series using a resampling algorithm. In total, 24 recording stations in Lower Saxony, Northern Germany, with a 5 min resolution have been used for the validation of the disaggregation procedure. The urban-hydrological suitability is tested with an artificial combined sewer system of about 170 hectares. The results show that all three variations outperform the 1280 min approach regarding reproduction of wet spell duration, average intensity, fraction of dry intervals and lag-1 autocorrelation. Extreme values with durations of 5 min are also better represented. For durations of 1 h, all approaches show only slight deviations from the observed extremes. The applied resampling algorithm is capable of achieving sufficient spatial consistence. The effects on the urban hydrological simulations are significant. Without spatial consistence, flood volumes of manholes and combined sewer overflow are strongly underestimated. After resampling, results using disaggregated time series as input are in the range of those using observed time series.
The best overall performance regarding rainfall statistics is obtained by the method in which the disaggregation process ends at time steps of 7.5 min duration, deriving the 5 min time steps by linear interpolation. With subsequent resampling, this method leads to a good representation of manhole flooding and combined sewer overflow volume in the hydrological simulations and outperforms the 1280 min approach.
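The best-performing scheme above (cascade down to 7.5 min, then linear interpolation to 5 min) can be sketched as follows. This is an illustrative simplification, not the authors' calibrated model: the symmetric Dirichlet weights stand in for the position-dependent splitting probabilities of the real cascade, and the interpolation is done mass-conservingly on the cumulative curve.

```python
import numpy as np

rng = np.random.default_rng(0)

def split(amount, b):
    """Microcanonical multiplicative split: b random weights summing to 1.
    (Dirichlet weights are an assumption, not the fitted cascade generator.)"""
    return amount * rng.dirichlet(np.ones(b))

def disaggregate(daily_mm):
    """Daily -> 7.5 min: one b=3 step (1440 -> 480 min), then six b=2 steps."""
    series = np.array([daily_mm])
    series = np.concatenate([split(x, 3) for x in series])   # 3 boxes of 480 min
    for _ in range(6):                                       # 240, 120, ..., 7.5 min
        series = np.concatenate([split(x, 2) for x in series])
    return series                                            # 192 values of 7.5 min

def to_5min(series_75):
    """Mass-conserving resampling: interpolate the cumulative curve at 5-min points."""
    t_old = np.arange(1, series_75.size + 1) * 7.5
    cum = np.concatenate([[0.0], np.cumsum(series_75)])
    t_new = np.arange(0.0, t_old[-1] + 1e-9, 5.0)
    cum_new = np.interp(t_new, np.concatenate([[0.0], t_old]), cum)
    return np.diff(cum_new)                                  # 288 values of 5 min

rain_75 = disaggregate(20.0)   # disaggregate a 20 mm day
rain_5 = to_5min(rain_75)
print(rain_5.size, round(rain_5.sum(), 6))
```

Both the cascade and the interpolation conserve the daily total exactly, which is the key invariant of the disaggregation chain.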
Nuclear cardiology apparatus and method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Applegate, R.J.; Ionnou, B.N.; Kearns, D.S.
1981-01-20
A nuclear cardiology system for use with a scintillation camera for evaluating cardiac function by real-time measurement of the variation of radiation from the heart of a patient to whom a radioactive tracer has been administered. The camera provides data describing the location of individual counts representing radiation events coming from the patient. The system segregates, in real time, counts corresponding to radiation from an electronically defined region of interest describing an investigated part of the heart, such as the left ventricle. Synchronized by the patient's electrocardiogram, time-gated memory circuitry divides each heartbeat into a series of subintervals, and stores indications of the respective amounts of radiation events emanating from the region of interest during each of the subintervals. Calculating circuitry scans the stored information and, based on the maximum and minimum respective radiation amounts detected in the subintervals, computes the fraction of blood ejected by the heart in each beat. A strip chart recorder provides a permanent representation of the curve of radiation from the region of interest, as defined by the indicated series of subinterval radiation amounts.
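The max/min computation the calculating circuitry performs reduces to a short calculation: the subinterval with maximum counts corresponds to end-diastole (ventricle full), the minimum to end-systole, and their relative difference is the ejection fraction. A minimal sketch with purely illustrative count values (real systems also subtract background counts, which is omitted here):

```python
# Hypothetical gated subinterval counts from the left-ventricle region of
# interest, accumulated over many heartbeats (numbers are illustrative only).
counts = [8200, 7600, 6400, 5100, 4300, 4100, 4800, 6200, 7500, 8100]

end_diastolic = max(counts)   # maximum activity: ventricle full
end_systolic = min(counts)    # minimum activity: ventricle emptied

# Fraction of blood ejected per beat, as computed from the count extremes.
ejection_fraction = (end_diastolic - end_systolic) / end_diastolic
print(f"EF = {ejection_fraction:.2f}")  # 0.50 for these illustrative counts
```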
Liu, Shuyuan; Liu, Xiangnan; Liu, Meiling; Wu, Ling; Ding, Chao; Huang, Zhi
2017-05-30
An effective method to monitor heavy metal stress in crops is of critical importance to assure agricultural production and food security. Phenology, as a sensitive indicator of environmental change, can respond to heavy metal stress in crops, and remote sensing is an effective method to detect plant phenological changes. This study focused on identifying rice phenological differences under varied heavy metal stress using an EVI (enhanced vegetation index) time series, which was obtained from HJ-1A/B CCD images and fitted with asymmetric Gaussian model functions. We extracted three phenological periods using first-derivative analysis: the tillering period, heading period, and maturation period; and constructed two kinds of metrics with phenological characteristics, date intervals and time-integrated EVI, to explore the rice phenological differences under mild and severe stress levels. Results indicated that the values of both metrics were smaller in experimental areas under severe stress than under mild stress. This finding represents a new method for monitoring heavy metal contamination through rice phenology.
Simon, Laurent; Ospina, Juan
2016-07-25
Three-dimensional solute transport was investigated for a spherical device with a release hole. The governing equation was derived from Fick's second law. A mixed Neumann-Dirichlet condition was imposed at the boundary to represent diffusion through a small region on the surface of the device. The cumulative percentage of drug released was calculated in the Laplace domain and represented by the first term of an infinite series of Legendre and modified Bessel functions of the first kind. Application of the Zakian algorithm yielded the time-domain closed-form expression. The first-order solution closely matched a numerical solution generated by Mathematica®. The proposed method allowed computation of the characteristic time. A larger surface pore resulted in a smaller effective time constant. The agreement between the numerical solution and the semi-analytical method improved noticeably as the size of the orifice increased. It took four time constants for the device to release approximately ninety-eight percent of its drug content.
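The "four time constants for ninety-eight percent" statement follows directly from the first-order (single-exponential) approximation of the release curve, F(t) = 1 - exp(-t/τ), since 1 - e⁻⁴ ≈ 0.982. A quick numerical check (this is the generic first-order form, not the paper's full Legendre-Bessel series):

```python
import math

def fraction_released(t, tau):
    """First-order release approximation: F(t) = 1 - exp(-t / tau)."""
    return 1.0 - math.exp(-t / tau)

tau = 1.0  # effective time constant, arbitrary units
for n in (1, 2, 3, 4):
    print(f"t = {n} tau: F = {fraction_released(n * tau, tau):.4f}")
# at t = 4 tau, F is about 0.982, i.e. roughly 98 % of the drug content
```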
NASA Astrophysics Data System (ADS)
Inoue, Y.; Tsuruoka, K.; Arikawa, M.
2014-04-01
In this paper, we propose a user interface that displays visual animations on geographic maps and timelines for depicting historical stories by representing causal relationships among events over time. We have been developing an experimental software system for the spatial-temporal visualization of historical stories on tablet computers. Our proposed system helps people learn historical stories effectively through visual animations based on hierarchical structures of timelines and maps at different scales.
NASA Astrophysics Data System (ADS)
Vernon, F.; Arrott, M.; Orcutt, J. A.; Mueller, C.; Case, J.; De Wardener, G.; Kerfoot, J.; Schofield, O.
2013-12-01
An approach sophisticated enough to handle a variety of data sources and scales, yet easy enough to promote wide use and mainstream adoption, is required to address the following mappings: - From the authored domain of observation to the requested domain of interest; - From the authored spatiotemporal resolution to the requested resolution; and - From the representation of data placed on a wide variety of discrete mesh types to the use of that data as a continuous field with a selectable continuity. The Open Geospatial Consortium's (OGC) Reference Model[1], with its direct association with the ISO 19000 series standards, provides a comprehensive foundation to represent all data on any type of mesh structure, aka "Discrete Coverages". The Reference Model also provides the specification for the core operations required to utilize any Discrete Coverage. The FEniCS Project[2] provides a comprehensive model for how to represent the basis functions on mesh structures as "Degrees of Freedom" to present discrete data as continuous fields with variable continuity. In this talk, we will present the research and development the OOI Cyberinfrastructure Project is pursuing to integrate these approaches into a comprehensive Application Programming Interface (API) to author, acquire and operate on the broad range of data formulations, from time series, trajectories and tables through to time-variant finite difference grids and finite element meshes.
NASA Astrophysics Data System (ADS)
Camarero, Lluís; Bacardit, Montserrat; de Diego, Alberto; Arana, Gorka
2017-10-01
Atmospheric deposition collected at remote, high-elevation stations is representative of long-range transport of elements. Here we present time series of Al, Fe, Ti, Mn, Zn, Ni, Cu, As, Cd and Pb deposition sampled in the Central Pyrenees at 2240 m a.s.l., representative of the fluxes of these elements over South West Europe. Trace element deposition did not show a simple trend. Rather, there was statistical evidence of several underlying factors governing the variability of the time series recorded: seasonal cycles, trends, the effects of the amount of precipitation, climate-controlled export of dust, and changes in anthropogenic emissions. Overall, there were three main modes of variation in deposition. The first mode was related to the North Atlantic Oscillation (NAO), and affected Al, Fe, Ti, Mn and Pb. We interpret this as changes in dust export from Northern Africa under the different meteorological conditions that the NAO index indicates. The second mode was an upward trend related to a rise in the frequency of precipitation events (which also led to an increase in the amount); more frequent events might cause a higher efficiency in the scavenging of aerosols. As, Cu and Ni responded to this mode. Finally, the third mode of variation was related to changes in anthropogenic emissions of Pb and Zn.
Change classification in SAR time series: a functional approach
NASA Astrophysics Data System (ADS)
Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan
2017-10-01
Change detection represents a broad field of research in SAR remote sensing, consisting of many different approaches. Besides the simple recognition of change areas, the analysis of the type, category or class of the change areas is at least as important for creating a comprehensive result. Conventional strategies for change classification are based on supervised or unsupervised land-use/land-cover classifications. The main drawback of such approaches is that the quality of the classification result depends directly on the selection of training and reference data. Additionally, supervised processing methods require an experienced operator who capably selects the training samples. This training step is not necessary when using unsupervised strategies, but meaningful reference data must nevertheless be available for identifying the resulting classes. Consequently, an experienced operator is indispensable. In this study, an innovative concept for the classification of changes in SAR time series data is proposed. Regarding the drawbacks of traditional strategies given above, it operates without any training data. Moreover, the method can be applied by an operator who does not yet have detailed knowledge of the scene; this knowledge is provided by the algorithm. The final step of the procedure, whose main aspect is the iterative optimization of an initial class scheme with respect to the categorized change objects, is the assignment of these objects to the final resulting classes. This assignment step is the subject of this paper.
Why didn't Box-Jenkins win (again)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pack, D.J.; Downing, D.J.
This paper focuses on the forecasting performance of the Box-Jenkins methodology applied to the 111 time series of the Makridakis competition. It considers the influence of the following factors: (1) time series length, (2) time-series information (autocorrelation) content, (3) time-series outliers or structural changes, (4) averaging results over time series, and (5) forecast time origin choice. It is found that the 111 time series contain substantial numbers of very short series, series with obvious structural change, and series whose histories are relatively uninformative. If these series are typical of those that one must face in practice, the real message of the competition is that univariate time series extrapolations will frequently fail regardless of the methodology employed to produce them.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by Ito equations with multiplicative power-law noise. We show that multifractality of the connectivity time series (i.e., the series of the numbers of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility-graph procedure, which, in connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in input time series.
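The connectivity time series studied above is simply the degree sequence of the natural visibility graph. A minimal O(n²) sketch of that construction (a plain Gaussian series stands in here for the Ito-generated data, and the brute-force loop is for clarity, not the fast algorithms used in practice):

```python
import numpy as np

def visibility_degrees(y):
    """Degree (connectivity) series of the natural visibility graph of y.
    Nodes i < j are linked if every sample between them lies strictly
    below the straight line joining (i, y[i]) and (j, y[j])."""
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            line = y[j] + (y[i] - y[j]) * (j - ks) / (j - i)
            if np.all(y[ks] < line):       # vacuously true for adjacent nodes
                deg[i] += 1
                deg[j] += 1
    return deg

rng = np.random.default_rng(1)
y = rng.normal(size=200)          # stand-in for an Ito-generated series
k = visibility_degrees(y)         # this is the "connectivity time series"
print(k[:10], k.mean())
```

The multifractal analysis in the paper is then applied to `k` rather than to `y` itself.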
Green-wave control of an unbalanced two-route traffic system with signals
NASA Astrophysics Data System (ADS)
Tobita, Kazuhiro; Nagatani, Takashi
2013-11-01
We introduce a preference parameter into the two-route dynamic model proposed by Wahle et al. The parameter represents the driver's preference in the route choice. When drivers prefer one route, the traffic flow on route A does not balance with that on route B. We study signal control for the unbalanced two-route traffic flow under the tour-time feedback strategy, in which vehicles move ahead through a series of signals. The traffic signals are controlled by both cycle time and phase shift (offset time). We find that the mean tour time can be balanced by selecting the offset time successfully. We derive the relationship between the mean tour time and the offset time (phase shift). The dependences of the mean density and mean current on the offset time are also derived.
The Transiting Exoplanet Community Early Release Science Program
NASA Astrophysics Data System (ADS)
Batalha, Natalie; Bean, Jacob; Stevenson, Kevin; Alam, M.; Batalha, N.; Benneke, B.; Berta-Thompson, Z.; Blecic, J.; Bruno, G.; Carter, A.; Chapman, J.; Crossfield, I.; Crouzet, N.; Decin, L.; Demory, B.; Desert, J.; Dragomir, D.; Evans, T.; Fortney, J.; Fraine, J.; Gao, P.; Garcia Munoz, A.; Gibson, N.; Goyal, J.; Harrington, J.; Heng, K.; Hu, R.; Kempton, E.; Kendrew, S.; Kilpatrick, B.; Knutson, H.; Kreidberg, L.; Krick, J.; Lagage, P.; Lendl, M.; Line, M.; Lopez-Morales, M.; Louden, T.; Madhusudhan, N.; Mandell, A.; Mansfield, M.; May, E.; Morello, G.; Morley, C.; Moses, J.; Nikolov, N.; Parmentier, V.; Redfield, S.; Roberts, J.; Schlawin, E.; Showman, A.; Sing, D.; Spake, J.; Swain, M.; Todorov, K.; Tsiaras, A.; Venot, O.; Waalkes, W.; Wakeford, H.; Wheatley, P.; Zellem, R.
2017-11-01
JWST presents the opportunity to transform our understanding of planets and the origins of life by revealing the atmospheric compositions, structures, and dynamics of transiting exoplanets in unprecedented detail. However, the high-precision, time-series observations required for such investigations have unique technical challenges, and our prior experience with HST, Spitzer, and Kepler indicates that there will be a steep learning curve when JWST becomes operational. We propose an ERS program to accelerate the acquisition and diffusion of technical expertise for transiting exoplanet observations with JWST. This program will also provide a compelling set of representative datasets, which will enable immediate scientific breakthroughs. We will exercise the time-series modes of all four instruments that have been identified as the consensus highest priority by the community, observe the full suite of transiting planet characterization geometries (transits, eclipses, and phase curves), and target planets with host stars that span an illustrative range of brightnesses. The proposed observations were defined through an inclusive and transparent process that had participation from JWST instrument experts and international leaders in transiting exoplanet studies. The targets have been vetted with previous measurements, will be observable early in the mission, and have exceptional scientific merit. We will engage the community with a two-phase Data Challenge that culminates with the delivery of planetary spectra, time series instrument performance reports, and open-source data analysis toolkits.
NASA Astrophysics Data System (ADS)
Ritschel, Christoph; Ulbrich, Uwe; Névir, Peter; Rust, Henning W.
2017-12-01
For several hydrological modelling tasks, precipitation time series with a high (i.e. sub-daily) resolution are indispensable. Such data are, however, not always available, and thus model simulations are used to compensate. A canonical class of stochastic models for sub-daily precipitation is the Poisson cluster process, with the original Bartlett-Lewis (OBL) model as a prominent representative. The OBL model has been shown to reproduce certain characteristics found in observations well. Our focus is on intensity-duration-frequency (IDF) relationships, which are of particular interest in risk assessment. Based on a high-resolution precipitation time series (5 min) from Berlin-Dahlem, OBL model parameters are estimated and IDF curves are obtained, on the one hand, directly from the observations and, on the other hand, from OBL model simulations. Comparing the resulting IDF curves suggests that the OBL model is able to reproduce the main features of IDF statistics across several durations but cannot capture rare events (here an event with a return period larger than 1000 years on the hourly timescale). In this paper, IDF curves are estimated based on a parametric model for the duration dependence of the scale parameter in the generalized extreme value distribution; this allows us to obtain a consistent set of curves over all durations. We use the OBL model to investigate the validity of this approach based on simulated long time series.
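A duration-consistent IDF curve of the kind described above can be sketched by letting the GEV location and scale shrink with duration through a shared power-law factor, then evaluating the GEV quantile for each return period. The parameter values below are purely illustrative, not the fitted Berlin-Dahlem values:

```python
import numpy as np

def gev_quantile(T, mu, sigma, xi):
    """Return level for return period T (years) from a GEV(mu, sigma, xi != 0)."""
    y = -np.log(1.0 - 1.0 / T)               # exceedance argument
    return mu + sigma / xi * (y ** (-xi) - 1.0)

def idf(T, d, mu0=6.0, sigma0=2.0, xi=0.1, eta=0.7, theta=0.1):
    """Duration-consistent IDF intensity: location and scale both shrink with
    duration d (hours) through the common factor (d + theta)^(-eta).
    All parameter values here are illustrative assumptions."""
    s = (d + theta) ** (-eta)
    return gev_quantile(T, mu0 * s, sigma0 * s, xi)

for d in (1 / 12, 1.0, 24.0):                 # 5 min, 1 h, 1 day
    print(f"d = {d:7.3f} h: 100-yr intensity {idf(100.0, d):6.2f} mm/h")
```

Because all durations share one parametric family, the resulting curves cannot cross, which is the consistency property the paper exploits.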
The risk characteristics of solar and geomagnetic activity
NASA Astrophysics Data System (ADS)
Podolska, Katerina
2016-04-01
The main aim of this contribution is a deeper analysis of the influence of solar activity, which is expected to have an impact on human health, and therefore on mortality, in particular from civilization and degenerative diseases. We have constructed characteristics that represent the risk of solar and geomagnetic activity to human health, on the basis of our previous analysis of the association between the daily numbers of deaths from diseases of the nervous system and diseases of the circulatory system and solar and geomagnetic activity in the Czech Republic during the years 1994-2013. We used long-period daily time series of numbers of deaths by cause, long-period time series of solar activity indices (namely R and F10.7), geomagnetic indices (Kp planetary index, Dst) and ionospheric parameters (foF2 and TEC). The ionospheric parameters were related to the geographic location of the Czech Republic and adjusted for middle geographic latitudes. The risk characteristics were composed by cluster analysis of the time series according to the phases of the solar cycle, the seasonal insolation at mid-latitudes, and the daily period, based on the impact of solar and geomagnetic activity on mortality from cause-of-death groups VI (Diseases of the nervous system) and IX (Diseases of the circulatory system) of the 10th Revision of the WHO International Classification of Diseases (ICD-10).
Systematic comparisons between PRISM version 1.0.0, BAP, and CSMIP ground-motion processing
Kalkan, Erol; Stephens, Christopher
2017-02-23
A series of benchmark tests was run by comparing results of the Processing and Review Interface for Strong Motion data (PRISM) software version 1.0.0 to the Basic Strong-Motion Accelerogram Processing Software (BAP; Converse and Brady, 1992), and to California Strong Motion Instrumentation Program (CSMIP) processing (Shakal and others, 2003, 2004). These tests were performed by using the MATLAB implementation of PRISM, which is equivalent to its public release version in the Java language. Systematic comparisons were made in the time and frequency domains of records processed in PRISM, BAP, and CSMIP, by using a set of representative input motions with varying resolutions, frequency content, and amplitudes. Although the details of strong-motion records vary among the processing procedures, there are only minor differences among the waveforms for each component within the frequency passband common to these procedures. A comprehensive statistical evaluation considering more than 1,800 ground-motion components demonstrates that differences in peak amplitudes of acceleration, velocity, and displacement time series obtained from PRISM and CSMIP processing are equal to or less than 4 percent for 99 percent of the data, and equal to or less than 2 percent for 96 percent of the data. Other statistical measures, including the Euclidean distance (L2 norm) and the windowed root mean square level of processed time series, also indicate that both processing schemes produce statistically similar products.
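The statistical measures named in the benchmark (peak-amplitude differences, L2 norm, windowed RMS of the residual) are straightforward to compute. A small sketch with synthetic signals, not PRISM's actual comparison code (window length and signal shapes are illustrative assumptions):

```python
import numpy as np

def compare(a, b, win=100):
    """Percent difference of peak amplitude, Euclidean (L2) distance, and the
    maximum windowed RMS of the residual between two processed time series."""
    peak_pct = 100.0 * abs(np.max(np.abs(a)) - np.max(np.abs(b))) / np.max(np.abs(a))
    resid = a - b
    l2 = np.linalg.norm(resid)
    n = resid.size // win * win
    rms = np.sqrt((resid[:n].reshape(-1, win) ** 2).mean(axis=1))
    return peak_pct, l2, rms.max()

t = np.linspace(0.0, 10.0, 1000)
a = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)   # synthetic "record"
b = a * 1.01                                          # 1 % amplitude difference
pct, l2, rmax = compare(a, b)
print(f"peak diff = {pct:.2f} %, L2 = {l2:.4f}, max windowed RMS = {rmax:.4f}")
```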
Ade, Serge; Békou, Wilfried; Adjobimey, Mênonli; Adjibode, Omer; Ade, Gabriel; Harries, Anthony D.; Anagonou, Séverin
2016-01-01
Objective. To determine any changes in tuberculosis epidemiology in the last 15 years in Benin, seasonal variations, and forecasted numbers of tuberculosis cases in the next five years. Materials and Methods. Retrospective cohort and time series study of all tuberculosis cases notified between 2000 and 2014. The "R" software version 3.2.1 (Institute for Statistics and Mathematics, Vienna, Austria) and the Box-Jenkins (1976) modeling approach were used for time series analysis. Results. Of 246943 presumptive cases, 54303 (22%) were diagnosed with tuberculosis. Annual notified case numbers increased, with the highest reported in 2011. New pulmonary bacteriologically confirmed tuberculosis (NPBCT) represented 78% ± SD 2%. Retreatment cases decreased from 10% to 6% and new pulmonary clinically diagnosed cases increased from 2% to 8%. NPBCT notification rates decreased in males from 2012, in young people aged 15-34 years, and in the Borgou-Alibori region. There was a seasonal pattern in tuberculosis cases. Over 90% of NPBCT cases were HIV-tested, with a stable HIV prevalence of 13%. The best-fit ARIMA model predicted a decrease in tuberculosis case finding over the next five years. Conclusion. Tuberculosis case notifications are predicted to decrease in the next five years if the current passive case finding is maintained. Additional strategies are needed in the country. PMID:27293887
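The Box-Jenkins style forecast used above boils down to fitting an autoregressive model to differenced counts and iterating it forward. A bare-bones ARIMA(1,1,0)-style sketch on invented numbers (NOT the Benin notification data, and far simpler than the full seasonal ARIMA fitted in R):

```python
import numpy as np

# Illustrative annual notification counts showing a rise then decline
# (values are invented for demonstration purposes).
cases = np.array([3000, 3150, 3300, 3500, 3700, 3900, 4100, 4200,
                  4300, 4350, 4300, 4250, 4150, 4050, 3950], float)

d = np.diff(cases)            # first-difference removes the trend (the "I" in ARIMA)
x, y = d[:-1], d[1:]
phi = (x @ y) / (x @ x)       # least-squares AR(1) coefficient on the differences

# Forecast five years ahead by iterating the fitted recursion.
level, step = cases[-1], d[-1]
forecast = []
for _ in range(5):
    step = phi * step
    level = level + step
    forecast.append(level)
print(np.round(forecast, 1))  # continues the recent decline
```

With the recent differences negative and phi positive, the iterated recursion extrapolates a continuing decrease, mirroring the qualitative conclusion of the study.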
A New Approach to Monitoring Coastal Marshes for Persistent Flooding
NASA Astrophysics Data System (ADS)
Kalcic, M. T.; Underwood, L. W.; Fletcher, R. M.
2012-12-01
Many areas in coastal Louisiana are below sea level and protected from flooding by a system of natural and man-made levees. Flooding is common when the levees are overtopped by storm surge or rising rivers. Many levees in this region are further stressed by erosion and subsidence. The floodwaters can become constricted by levees and trapped, causing prolonged inundation. Vegetative communities in coastal regions, from fresh swamp forest to saline marsh, can be negatively affected by inundation and changes in salinity. As saltwater persists, it can have a toxic effect upon marsh vegetation causing die off and conversion to open water types, destroying valuable species habitats. The length of time the water persists and the average annual salinity are important variables in modeling habitat switching (cover type change). Marsh type habitat switching affects fish, shellfish, and wildlife inhabitants, and can affect the regional ecosystem and economy. There are numerous restoration and revitalization projects underway in the coastal region, and their effects on the entire ecosystem need to be understood. For these reasons, monitoring persistent saltwater intrusion and inundation is important. For this study, persistent flooding in Louisiana coastal marshes was mapped using MODIS (Moderate Resolution Imaging Spectroradiometer) time series of a Normalized Difference Water Index (NDWI). The time series data were derived for 2000 through 2009, including flooding due to Hurricane Rita in 2005 and Hurricane Ike in 2008. Using the NDWI, duration and extent of flooding can be inferred. The Time Series Product Tool (TSPT), developed at NASA SSC, is a suite of software developed in MATLAB® that enables improved-quality time series images to be computed using advanced temporal processing techniques. This software has been used to compute time series for monitoring temporal changes in environmental phenomena, (e.g. 
NDVI time series from MODIS), and was modified and used to compute the NDWI indices and also the Normalized Difference Soil Index (NDSI). Coastwide Reference Monitoring System (CRMS) water levels from various hydrologic monitoring stations and aerial photography were used to optimize thresholds for MODIS-derived time series of NDWI and to validate the resulting flood maps. In most of the profiles produced for post-hurricane assessment, the increase in the NDWI index (from storm surge) is accompanied by a decrease in the vegetation index (NDVI) and then a period of declining water. The NDSI index represents non-green or dead vegetation and increases after the hurricane's destruction of the marsh vegetation. The behavior of these indices over time is indicative of which areas remain flooded, which areas recover to their former levels of vegetative vigor, and which areas are stressed or in transition. Tracking these indices over time shows the recovery rate of vegetation and its relative behavior to inundation persistence. The results from this study demonstrated that identification of persistent marsh flooding, utilizing the tools developed in this study, provided an approximate 70-80 percent accuracy rate when compared to the actual days flooded at the CRMS stations.
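The per-pixel persistence mapping described above reduces to computing NDWI per composite, thresholding, and counting flooded time steps. A minimal sketch (the fixed 0.0 threshold and 8-day compositing step are generic assumptions; the study instead tunes thresholds against CRMS gauge records):

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: (Green - NIR) / (Green + NIR); water pushes it positive."""
    return (green - nir) / (green + nir)

def flooded_days(ndwi_stack, threshold=0.0, days_per_step=8):
    """Per-pixel flood duration from a (time, rows, cols) stack of NDWI
    composites: count composites above threshold, scale by composite length."""
    wet = ndwi_stack > threshold
    return wet.sum(axis=0) * days_per_step

# Tiny illustrative stack: 2x2 pixels over 6 composites of synthetic reflectances.
green = np.random.default_rng(2).uniform(0.05, 0.3, size=(6, 2, 2))
nir = np.random.default_rng(3).uniform(0.05, 0.3, size=(6, 2, 2))
days = flooded_days(ndwi(green, nir))
print(days)
```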
Enhancement to Non-Contacting Stress Measurement of Blade Vibration Frequency
NASA Technical Reports Server (NTRS)
Platt, Michael; Jagodnik, John
2011-01-01
A system for monitoring turbomachinery blade vibration has been developed that combines time-of-arrival sensors for blade vibration amplitude measurement and radar sensors for vibration frequency and mode identification. The enabling technology for this continuous blade monitoring system is the radar sensor, which provides a continuous time series of blade displacement over a portion of a revolution. This allows the data reduction algorithms to directly calculate the blade vibration frequency and to correctly identify the active modes of vibration. The work in this project represents a significant enhancement in the mode identification and stress calculation accuracy of non-contacting stress measurement system (NSMS) technology when compared to time-of-arrival measurements alone.
NASA Astrophysics Data System (ADS)
Gruszczynska, Marta; Rosat, Severine; Klos, Anna; Bogusz, Janusz
2017-04-01
Seasonal oscillations in GPS position time series can arise from real geophysical effects and numerical artefacts. According to Dong et al. (2002), environmental loading effects can account for approximately 40% of the total variance of the annual signals in GPS time series; however, using generally acknowledged methods (e.g. Least Squares Estimation, Wavelet Decomposition, Singular Spectrum Analysis) to model seasonal signals, we are not able to separate real from spurious signals (effects of mismodelling aliased into the annual period, as well as draconitic signals). Therefore, we propose to use Multichannel Singular Spectrum Analysis (MSSA) to determine seasonal oscillations (with annual and semi-annual periods) from GPS position time series and environmental loading displacement models. The MSSA approach is an extension of the classical Karhunen-Loève method and is a special case of SSA for multivariate time series. The main advantage of MSSA is the possibility to extract common seasonal signals for stations from a selected area and to investigate the causality between a set of time series as well. In this research, we explored the ability of MSSA to separate real geophysical effects from spurious effects in GPS time series. For this purpose, we used GPS position changes and environmental loading models. We analysed the topocentric time series from 250 selected stations located worldwide, delivered from the network solution obtained by the International GNSS Service (IGS) as a contribution to the latest realization of the International Terrestrial Reference System (namely ITRF2014; Rebischung et al., 2016). We also used atmospheric, hydrological and non-tidal oceanic loading models provided by the EOST/IPGS Loading Service in the Centre-of-Figure (CF) reference frame. The analysed displacements were estimated from ERA-Interim (surface pressure), MERRA-land (soil moisture and snow) as well as ECCO2 ocean bottom pressure. 
We used Multichannel Singular Spectrum Analysis to determine common seasonal signals in two case studies, adopting a 3-year lag window as the optimal window size. We also inferred the statistical significance of oscillations through the Monte Carlo MSSA method (Allen and Robertson, 1996). In the first case study, we investigated the common spatio-temporal seasonal signals for all stations. For this purpose, we divided the selected stations with respect to the continents. For instance, for stations located in Europe, seasonal oscillations account for approximately 45% of the GPS-derived data variance. A much higher share of variance, about 92%, is explained by seasonal signals in the hydrological loadings, while the non-tidal oceanic loading accounted for 31% of total variance. In the second case study, we analysed the capability of the MSSA method to establish causality between several time series. Each estimated Principal Component represents a pattern of the common signal for all analysed data. For the ZIMM station (Zimmerwald, Switzerland), the 1st, 2nd and 9th, 10th Principal Components, which together account for 35% of the variance, correspond to the annual and semi-annual signals. In this part, we applied the non-parametric MSSA approach to extract the common seasonal signals from the GPS time series and environmental loadings for each of the 250 stations, showing clearly that some part of the seasonal signal reflects real geophysical effects. REFERENCES: 1. Allen, M. and Robertson, A.: 1996, Distinguishing modulated oscillations from coloured noise in multivariate datasets. Climate Dynamics, 12, No. 11, 775-784. DOI: 10.1007/s003820050142. 2. Dong, D., Fang, P., Bock, Y., Cheng, M.K. and Miyazaki, S.: 2002, Anatomy of apparent seasonal variations from GPS-derived site position time series. Journal of Geophysical Research, 107, No. B4, 2075. DOI: 10.1029/2001JB000573. 3. Rebischung, P., Altamimi, Z., Ray, J. and Garayt, B.: 2016, The IGS contribution to ITRF2014. 
Journal of Geodesy, 90, No. 7, 611-630. DOI:10.1007/s00190-016-0897-6.
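The lagged trajectory-matrix decomposition at the heart of (M)SSA can be sketched in a few lines. This is a minimal illustration assuming a multichannel array of monthly values; the window length and component count below are illustrative, not the study's 3-year lag window, and the function name is ours:

```python
import numpy as np

def mssa_reconstruct(series, window, n_components):
    """Minimal MSSA sketch: stack lagged copies of each channel into one
    trajectory matrix, take its SVD, and reconstruct the series from the
    leading components by diagonal (Hankel) averaging."""
    channels, n = series.shape
    k = n - window + 1
    # Trajectory matrix: each channel contributes `window` lagged rows.
    traj = np.vstack([
        np.array([series[c, i:i + k] for i in range(window)])
        for c in range(channels)
    ])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Diagonal averaging turns each channel's block back into a series.
    rec = np.zeros_like(series, dtype=float)
    for c in range(channels):
        block = approx[c * window:(c + 1) * window]
        for i in range(window):
            for j in range(k):
                rec[c, i + j] += block[i, j]
        counts = np.array([min(t + 1, window, k, n - t) for t in range(n)])
        rec[c] /= counts
    return rec
```

With a common annual sine in two noisy channels, the leading pair of components recovers the shared seasonal signal.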
Regenerating time series from ordinal networks.
McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael
2017-03-01
Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then use several distinct quantitative approaches to investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
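The construction described above can be sketched under the usual ordinal-partition assumptions: each length-`order` window maps to the permutation that sorts it, transitions between consecutive patterns become weighted edges, and surrogates are generated by a weighted random walk. Function names are ours:

```python
import numpy as np
from collections import defaultdict

def ordinal_network(x, order=3):
    """Map each embedding window to its ordinal pattern (the permutation
    that sorts it) and count transitions between consecutive patterns."""
    patterns = [tuple(np.argsort(x[i:i + order]))
                for i in range(len(x) - order + 1)]
    edges = defaultdict(int)
    for a, b in zip(patterns, patterns[1:]):
        edges[(a, b)] += 1
    return edges

def random_walk(edges, steps, seed=0):
    """Regenerate a symbolic series by walking the ordinal network,
    choosing each successor with probability proportional to edge weight."""
    rng = np.random.default_rng(seed)
    succ = defaultdict(list)
    for (a, b), w in edges.items():
        succ[a].append((b, w))
    node = next(iter(succ))
    walk = [node]
    for _ in range(steps):
        if node not in succ:   # dead end (only possible at the last pattern)
            break
        nbrs, ws = zip(*succ[node])
        p = np.array(ws, dtype=float)
        node = nbrs[rng.choice(len(nbrs), p=p / p.sum())]
        walk.append(node)
    return walk
```

Applied to a logistic-map series, the walk visits only ordinal patterns actually observed in the data, with transition frequencies matching the original series.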
Propelled microprobes in turbulence
NASA Astrophysics Data System (ADS)
Calzavarini, E.; Huang, Y. X.; Schmitt, F. G.; Wang, L. P.
2018-05-01
The temporal statistics of incompressible fluid velocity and passive scalar fields in developed turbulent conditions are investigated by means of direct numerical simulations along the trajectories of self-propelled pointlike probes drifting in a flow. Such probes are characterized by a propulsion velocity that is fixed in intensity and direction; however, like vessels in a flow, they are continuously deviated from their intended course by the local sweeping of the fluid flow. The time series recorded by these moving probes represent the simplest realization of transect measurements in a fluid flow environment. We investigate the nontrivial combination of Lagrangian and Eulerian statistical properties displayed by the transect time series. We show that, as a result of the homogeneity and isotropy of the flow, the single-point acceleration statistics of the probes follow a predictable trend as the propulsion speed varies, a feature that is also present in the scalar time-derivative fluctuations. Further, by focusing on two-time statistics, we characterize how the Lagrangian-to-Eulerian transition occurs as the propulsion velocity increases. The analysis of the intermittency of temporal increments highlights in a striking way the opposite trends displayed by the fluid velocity and passive scalars.
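The intermittency-of-increments analysis mentioned above is commonly summarized by the flatness of temporal increments, which equals 3 for Gaussian statistics and grows as intermittency strengthens. A minimal sketch (function name is ours):

```python
import numpy as np

def increment_flatness(x, lags):
    """Flatness F(tau) = <dx**4> / <dx**2>**2 of temporal increments
    dx = x(t + tau) - x(t). F = 3 for Gaussian statistics; larger values
    at small lags indicate intermittency."""
    out = []
    for lag in lags:
        dx = x[lag:] - x[:-lag]
        out.append(np.mean(dx**4) / np.mean(dx**2) ** 2)
    return np.array(out)
```

For a Gaussian white-noise series the flatness stays near 3 at every lag, which provides a quick sanity check for the diagnostic.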
Application of the Hilbert-Huang Transform to Financial Data
NASA Technical Reports Server (NTRS)
Huang, Norden
2005-01-01
A paper discusses the application of the Hilbert-Huang transform (HHT) method to time-series financial-market data. The method was described, variously without and with the HHT name, in several prior NASA Tech Briefs articles and supporting documents. To recapitulate: the method is especially suitable for analyzing time-series data that represent nonstationary and nonlinear phenomena, including physical phenomena and, in the present case, financial-market processes. The method involves empirical mode decomposition (EMD), in which a complicated signal is decomposed into a finite number of functions, called "intrinsic mode functions" (IMFs), that admit well-behaved Hilbert transforms. The HHT consists of the combination of EMD and Hilbert spectral analysis. The local energies and instantaneous frequencies derived from the IMFs through Hilbert transforms can be used to construct an energy-frequency-time distribution, denoted a Hilbert spectrum. The present paper begins with a discussion of prior approaches to quantifying market volatility, summarizes the HHT method, and then describes the application of the method in performing time-frequency analysis of mortgage-market data from the years 1972 through 2000. Filtering by use of the EMD is shown to be useful for quantifying market volatility.
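A highly simplified sketch of the EMD sifting loop may help fix ideas. Real EMD uses cubic-spline envelopes and a formal stopping criterion; here we use linear interpolation between extrema and a fixed iteration count, so this is illustrative only:

```python
import numpy as np

def sift(x, n_sift=10):
    """One intrinsic mode function via simplified sifting: repeatedly
    subtract the mean of the upper and lower extrema envelopes
    (linearly interpolated here; real EMD uses cubic splines)."""
    h = x.astype(float)
    t = np.arange(len(x))
    for _ in range(n_sift):
        up = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        lo = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(up) < 2 or len(lo) < 2:   # monotone residue: stop sifting
            break
        upper = np.interp(t, up, h[up])
        lower = np.interp(t, lo, h[lo])
        h = h - (upper + lower) / 2
    return h

def emd(x, n_imfs=3):
    """Decompose x into IMFs plus a residual; by construction
    sum(imfs) + residual == x."""
    imfs, res = [], x.astype(float)
    for _ in range(n_imfs):
        imf = sift(res)
        imfs.append(imf)
        res = res - imf
    return imfs, res
```

On a fast oscillation riding a slow trend, the first IMF captures the oscillation and the trend ends up in the later components, which is the behaviour the volatility-filtering application relies on.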
Ballarin, Antonio; Posteraro, Brunella; Demartis, Giuseppe; Gervasi, Simona; Panzarella, Fabrizio; Torelli, Riccardo; Paroni Sterbini, Francesco; Morandotti, Grazia; Posteraro, Patrizia; Ricciardi, Walter; Gervasi Vidal, Kristian A; Sanguinetti, Maurizio
2014-12-06
Mathematical and statistical tools can provide valid help in improving surveillance systems for healthcare- and non-healthcare-associated bacterial infections. The aim of this work is to evaluate the use of a time-varying auto-adaptive (TVA) algorithm applied to a clinical microbiology laboratory database to forecast medically important drug-resistant bacterial infections. Using the TVA algorithm, six distinct time series were modelled, each representing the number of episodes per single 'ESKAPE' pathogen (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacter species) that occurred monthly between the 2002 and 2011 calendar years at the Università Cattolica del Sacro Cuore general hospital. Monthly moving-averaged numbers of observed and forecasted ESKAPE infectious episodes showed complete overlap of their respective smoothed time-series curves. Overall good forecast accuracy was observed, with percentages ranging from 82.14% for E. faecium infections to 90.36% for S. aureus infections. Our approach may regularly provide physicians with forecasted bacterial infection rates to alert them to the spread of antibiotic-resistant bacterial species, especially when the clinical microbiological results of patients' specimens are delayed.
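The abstract does not specify the TVA algorithm's internals, so as a stand-in, the one-step-ahead monthly forecasting setup and an accuracy score of the kind reported can be illustrated with a plain exponentially weighted moving average:

```python
import numpy as np

def ewma_forecast(counts, alpha=0.3):
    """One-step-ahead forecasts for a monthly count series via an
    exponentially weighted moving average: f[t+1] = a*y[t] + (1-a)*f[t].
    A generic stand-in; the paper's TVA algorithm adapts its own weights."""
    f = np.empty(len(counts))
    f[0] = counts[0]
    for t in range(len(counts) - 1):
        f[t + 1] = alpha * counts[t] + (1 - alpha) * f[t]
    return f

def forecast_accuracy(y, f):
    """Accuracy as 100% minus the mean absolute percentage error
    (denominator floored at 1 to tolerate zero-count months)."""
    y, f = np.asarray(y, float), np.asarray(f, float)
    return 100.0 * (1 - np.mean(np.abs(y - f) / np.maximum(y, 1)))
```

A constant series is forecast essentially perfectly, while a noisy seasonal series still scores well above chance, mirroring the 82-90% accuracies quoted above in spirit only.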
GPS Position Time Series @ JPL
NASA Technical Reports Server (NTRS)
Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen
2013-01-01
Different flavors of GPS time series analysis are produced at JPL. All use the same GPS Precise Point Positioning raw time series; the variations in time series analysis and post-processing are driven by different users:
- JPL Global Time Series/Velocities: for researchers studying the reference frame and combining with VLBI/SLR/DORIS
- JPL/SOPAC Combined Time Series/Velocities: crustal deformation for tectonic, volcanic, and ground-water studies
- ARIA Time Series/Coseismic Data Products: focused on hazard monitoring and response
The ARIA data system is designed to integrate GPS and InSAR: GPS tropospheric delay is used for correcting InSAR, and Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR. (InSAR time series analysis is covered in a companion presentation by Zhen Liu.)
Geocenter Motion Derived from the JTRF2014 Combination
NASA Astrophysics Data System (ADS)
Abbondanza, C.; Chin, T. M.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; van Dam, T. M.; Wu, X.
2016-12-01
JTRF2014 is the JPL Terrestrial Reference Frame (TRF) recently obtained by combining the space-geodetic reprocessed inputs to ITRF2014. Based upon a Kalman filter and smoother approach, JTRF2014 assimilates station positions and Earth Orientation Parameters (EOPs) from GNSS, VLBI, SLR and DORIS and combines them through local tie measurements. JTRF2014 is, in essence, a time-series-based TRF. In JTRF2014 the dynamical evolution of the station positions is formulated by introducing linear and seasonal terms (annual and semi-annual periodic modes). Non-secular and non-seasonal motions of the geodetic sites are included in the smoothed time series by properly defining the station position process noise, whose variance is characterized by analyzing station displacements induced by temporal changes of planetary fluid masses (atmosphere, oceans and continental surface water). With its station position time series output at weekly resolution, JTRF2014 materializes a sub-secular frame whose origin is at the quasi-instantaneous Center of Mass (CM) as sensed by SLR. Both SLR and VLBI contribute to the scale of the combined frame. The sub-secular nature of the frame allows users to directly access the quasi-instantaneous geocenter and scale information. Unlike standard combined TRF products, which only give access to the secular component of the CM-CN motions, JTRF2014 preserves, in addition to the long-term component, the seasonal, non-seasonal and non-secular components of the geocenter motion. In the JTRF2014 assimilation scheme, local tie measurements are used to transfer the geocenter information from SLR to the space-geodetic techniques that are either insensitive to CM (VLBI) or whose geocenter motion is poorly determined (GNSS and DORIS). Properly tied to the CM frame through local ties and co-motion constraints, GNSS, VLBI and DORIS contribute to improving the SLR network geometry. 
In this paper, the determination of the weekly CM-CN time series inferred from the JTRF2014 combination will be presented. Comparisons with geocenter time series derived from global inversions of GPS, GRACE and ocean bottom pressure models show that the JTRF2014-derived geocenter compares favourably with the results of the inversions.
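The Kalman-filter treatment of a single station coordinate with linear plus seasonal terms can be sketched as follows. This is our own minimal illustration, not the JTRF2014 software: the state holds offset, rate and annual cosine/sine amplitudes, with random-walk process noise on the offset standing in for the fluid-loading-derived process noise:

```python
import numpy as np

def kalman_seasonal(y, t, period=1.0, q=1e-4, r=0.01):
    """Linear Kalman filter for one coordinate series: state is
    [offset, rate, annual cosine amp, annual sine amp]; the offset gets
    random-walk process noise q so non-secular motion can be absorbed."""
    w = 2 * np.pi / period
    x = np.zeros(4)
    P = np.eye(4)
    states = []
    for ti, yi in zip(t, y):
        H = np.array([1.0, ti, np.cos(w * ti), np.sin(w * ti)])
        P = P + np.diag([q, 0.0, 0.0, 0.0])   # predict: static dynamics + noise
        S = H @ P @ H + r                     # innovation variance
        K = P @ H / S                         # Kalman gain
        x = x + K * (yi - H @ x)              # measurement update
        P = P - np.outer(K, H @ P)
        states.append(x.copy())
    return np.array(states)
```

Run on a synthetic weekly series with known seasonal amplitudes, the filter converges to those amplitudes, which is the sense in which such a frame "preserves" the seasonal geocenter signal.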
NASA Astrophysics Data System (ADS)
Kiss, Andrea; Wilson, Rob; Bariska, István
2011-07-01
In this paper, we present a 392-year-long preliminary temperature reconstruction for western Hungary. The reconstructed series is based on five vine- and grain-related historical phenological series from the town of Kőszeg. We apply dendrochronological methods both for signal assessment of the phenological series and for the resultant temperature reconstruction. As a proof of concept, the present reconstruction explains 57% of the variance of May-July Budapest mean temperatures and verifies well, with coefficient of efficiency values in excess of 0.45. The developed temperature reconstruction portrays warm conditions during the late seventeenth and early eighteenth centuries, with a period of cooling until the coldest reconstructed period centred around 1815, which was followed by a period of warming until the 1860s. The phenological evidence analysed here represents an important data source from which unbiased estimates of past climate can be derived and may provide information at all possible time-scales.
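The split-period calibration/verification scheme behind such reconstructions can be sketched with ordinary regression and the coefficient of efficiency (CE). This is a generic sketch with invented variable names, not the authors' dendroclimatological toolchain:

```python
import numpy as np

def reconstruct(proxy, target, cal):
    """Regress the instrumental target on the proxy over the calibration
    indices `cal`, apply the fit everywhere, and score the withheld
    (verification) period with the coefficient of efficiency:
    CE = 1 - SSE / sum((obs - verification mean)**2)."""
    b, a = np.polyfit(proxy[cal], target[cal], 1)   # slope, intercept
    est = a + b * proxy
    ver = np.setdiff1d(np.arange(len(proxy)), cal)
    resid = target[ver] - est[ver]
    ce = 1 - np.sum(resid**2) / np.sum((target[ver] - target[ver].mean())**2)
    return est, ce
```

A CE above zero means the reconstruction beats the verification-period mean as a predictor; values in excess of 0.45, as reported above, indicate substantial shared variance.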
Characterizing the Recurrence of Hydrologic Droughts
NASA Astrophysics Data System (ADS)
Cancelliere, A.; Salas, J. D.
2002-12-01
Characterizing periods of deficit and drought has been an important aspect of planning and managing water resources systems for many decades. An extreme drought is a complex phenomenon that evolves through time and space in a random fashion. It may be characterized by its initiation, duration, severity (magnitude or intensity), spatial extent, and termination. These characteristics may be determined by comparing the water supply time series with the corresponding water demand series in the area of consideration. Because water supply quantities such as rainfall and streamflow are stochastic variables, the ensuing drought characteristics are random and must be described in probabilistic terms. Let us consider a periodic stochastic water supply and a variable water demand series. A drought event is defined as a succession of consecutive periods (a run) in which the water supply remains below the water demand. Thus, the drought length L (negative run length) is the number of consecutive time intervals (seasons) in which the water supply remains below the water demand, preceded and followed by at least one season in which the water supply equals or exceeds the demand. Likewise, the difference between the water demand and the supply at time t is the magnitude of the deficit at time t, so that the accumulated deficit D (drought magnitude) is the sum of the deficits over the drought duration L. In the study reported herein, the probability density functions (pdf) of drought length and drought magnitude, and their low-order moments, are derived by assuming that the underlying water supply series, after being clipped by a constant or periodic water demand, results in a periodic, dependent binary series that can be represented by a periodic two-state Markov chain. The derived pdfs allow estimating the occurrence probabilities of droughts of a given length, either for droughts beginning in a given season or regardless of the initial season. 
In addition, the return periods of droughts (based on length and magnitude) are determined. The applicability of the drought formulations is illustrated using several series of precipitation and streamflow in Sicily, Italy and Colorado, USA. The results obtained show an excellent agreement between the observed and theoretical results. In conclusion, the proposed methods appear to be a useful addition for drought analysis and characterization using stochastic methods.
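For the two-state Markov-chain representation, the homogeneous (season-independent) special case gives a geometric drought-length distribution, which makes the construction easy to check numerically. The periodic chain in the paper generalizes this; the sketch below covers the homogeneous case only:

```python
import numpy as np

def drought_length_pmf(p11, k):
    """For a two-state chain of deficit (1) / surplus (0) seasons with
    persistence P(1->1) = p11, drought length is geometric:
    P(L = k) = p11**(k-1) * (1 - p11), so E[L] = 1 / (1 - p11)."""
    return p11 ** (k - 1) * (1 - p11)

def empirical_run_lengths(binary):
    """Lengths of runs of 1s that are terminated by a 0 (an unterminated
    trailing run is discarded, matching the run definition above)."""
    runs, n = [], 0
    for b in binary:
        if b:
            n += 1
        elif n:
            runs.append(n)
            n = 0
    return runs
```

Simulating the chain and comparing the mean run length with 1/(1 - p11) reproduces the agreement between observed and theoretical results noted above, in miniature.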
Late Paleocene Arctic Ocean shallow-marine temperatures from mollusc stable isotopes
Bice, Karen L.; Arthur, Michael A.; Marincovich, Louie
1996-01-01
Late Paleocene high-latitude (80°N) Arctic Ocean shallow-marine temperatures are estimated from molluscan δ18O time series. Sampling of individual growth increments of two specimens of the bivalve Camptochlamys alaskensis provides a high-resolution record of shell stable isotope composition. The heavy carbon isotopic values of the specimens support a late Paleocene age for the youngest marine beds of the Prince Creek Formation exposed near Ocean Point, Alaska. The oxygen isotopic composition of regional freshwater runoff is estimated from the mean δ18O value of two freshwater bivalves collected from approximately coeval fluviatile beds. Over a 30 – 34‰ range of salinity, values assumed to represent the tolerance of C. alaskensis, the mean annual shallow-marine temperature recorded by these individuals is between 11° and 22°C. These values could represent maximum estimates of the mean annual temperature because of a possible warm-month bias imposed on the average δ18O value by slowing or cessation of growth in winter months. The amplitude of the molluscan δ18O time series probably records most of the seasonality in shallow-marine temperature. The annual temperature range indicated is approximately 6°C, suggesting very moderate high-latitude marine temperature seasonality during the late Paleocene. On the basis of analogy with modern Chlamys species, C. alaskensis probably inhabited water depths of 30–50 m. The seasonal temperature range derived from δ18O is therefore likely to be damped relative to the full range of annual sea surface temperatures. High-resolution sampling of molluscan shell material across inferred growth bands represents an important proxy record of seasonality of marine and freshwater conditions applicable at any latitude. If applied to other regions and time periods, the approach used here would contribute substantially to the paleoclimate record of seasonality.
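For orientation, carbonate paleotemperature estimates of this kind come from a quadratic δ18O-temperature equation. The coefficients below are the classic Epstein-type calcite calibration, quoted only as an illustration; the calibration actually used for this aragonitic bivalve may differ:

```python
def paleotemperature(delta_shell, delta_water):
    """Classic quadratic carbonate paleotemperature equation (deg C):
    T = 16.9 - 4.38*(dc - dw) + 0.10*(dc - dw)**2,
    where dc is shell delta-18O and dw is ambient-water delta-18O.
    Coefficients are the Epstein-type calcite calibration, shown only
    as an illustration of the method."""
    d = delta_shell - delta_water
    return 16.9 - 4.38 * d + 0.10 * d * d
```

With this slope (about 4.4 degC per per-mil), a roughly 1.4 per-mil seasonal amplitude in shell δ18O maps to the approximately 6 degC annual temperature range discussed above; lighter (more negative) shell values imply warmer water.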
Performance evaluation of the croissant production line with reparable machines
NASA Astrophysics Data System (ADS)
Tsarouhas, Panagiotis H.
2015-03-01
In this study, analytical probability models were developed for an automated, bufferless serial production system consisting of n machines in series with a common transfer mechanism and control system. Both the time to failure and the time to repair a failure are assumed to follow exponential distributions. Applying these models, the effect of system parameters on system performance was studied for an actual croissant production line. The production line consists of six workstations with different numbers of reparable machines in series. Mathematical models of the croissant production line were developed using a Markov process. The strength of this study lies in the classification of the whole system into states representing failures of different machines. Failure and repair data from the actual production environment were used to estimate reliability and maintainability for each machine, each workstation, and the entire line, based on the analytical models. The analysis provides useful insight into the system's behaviour, helps to find inherent design faults, and suggests optimal modifications to upgrade the system and improve its performance.
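With exponential failure and repair times, the steady-state building blocks of such models are simple: each machine's availability is MTBF/(MTBF + MTTR), and a bufferless series line is up only when every machine is up. A minimal sketch (the paper's Markov models additionally resolve transient behaviour and the per-state structure):

```python
def machine_availability(mtbf, mttr):
    """Steady-state availability of one reparable machine with exponential
    time to failure (mean mtbf) and time to repair (mean mttr)."""
    return mtbf / (mtbf + mttr)

def line_availability(machines):
    """A bufferless series line is up only when every machine is up, so
    the machine availabilities multiply. `machines` = [(mtbf, mttr), ...]."""
    a = 1.0
    for mtbf, mttr in machines:
        a *= machine_availability(mtbf, mttr)
    return a
```

The multiplicative form makes the design insight explicit: adding machines in series without buffers can only reduce line availability.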
Tipping point analysis of ocean acoustic noise
NASA Astrophysics Data System (ADS)
Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen
2018-02-01
We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.
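The potential-analysis step can be sketched directly: estimate the probability density of the fluctuations and take U = -log p, so system states appear as wells (local minima) of U. A minimal histogram-based version, with our own function name:

```python
import numpy as np

def potential_wells(x, bins=15):
    """Potential analysis sketch: estimate the pdf of the fluctuations by
    histogram and take U(z) = -log p(z); the number of local minima of U
    (i.e. local maxima of p) counts the detected system states."""
    p, edges = np.histogram(x, bins=bins, density=True)
    p = np.where(p > 0, p, np.nan)        # empty bins: undefined potential
    u = -np.log(p)
    interior = u[1:-1]
    wells = np.sum((interior < u[:-2]) & (interior < u[2:])
                   & ~np.isnan(interior))
    return int(wells), u
```

A unimodal fluctuation record yields one well; a bimodal record yields two, which is the signature of a second system state of the kind tracked across the 14-year acoustic record.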
Observing and Understanding Tropospheric Ozone Changes
NASA Astrophysics Data System (ADS)
Logan, Jennifer; Schultz, Martin; Oltmans, Samuel
2010-03-01
Tropospheric Ozone Changes Workshop; Boulder, Colorado, 14-16 October 2009; Prompted by the lack of consensus on, and the need to assess current understanding of, long-term changes in tropospheric ozone, a workshop was held in Colorado to (1) evaluate the consistency of data records; (2) assess robust long-term changes; (3) determine how to combine observations and model studies; and (4) define research and observation needs for the future. At the workshop, long-term ozone records from regionally representative surface and mountain sites, ozonesondes, and aircraft were reviewed by region. In western Europe there are several time series of ~15-40 years from all platforms. Overall, they show a rise in ozone into the middle to late 1990s and a leveling off, or in some cases declines, in the 2000s, in general agreement with precursor emission changes. However, significant differences in detail among time series from nearby locations lend less confidence to changes before the late 1990s.
Cosmogenic 36Cl in karst waters: Quantifying contributions from atmospheric and bedrock sources
NASA Astrophysics Data System (ADS)
Johnston, V. E.; McDermott, F.
2009-12-01
Improved reconstructions of cosmogenic isotope production through time are crucial to understand past solar variability. As a preliminary step to derive atmospheric 36Cl/Cl solar proxy time-series from speleothems, we quantify 36Cl sources in cave dripwaters. Atmospheric 36Cl fallout rates are a potential proxy for solar output; however extraneous 36Cl derived from in-situ production in cave host-rocks could complicate the solar signal. Results from numerical modeling and preliminary geochemical data presented here show that the atmospheric 36Cl source dominates in many, but not all cave dripwaters. At favorable low elevation, mid-latitude sites, 36Cl based speleothem solar irradiance reconstructions could extend back to 500 ka, with a possible centennial scale temporal resolution. This would represent a marginal improvement in resolution compared with existing polar ice core records, with the added advantages of a wider geographic range, independent U-series constrained chronology, and the potential for contemporaneous climate signals within the same speleothem material.
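The source apportionment implied above reduces, in its simplest form, to two-endmember mixing of the 36Cl/Cl ratio. The paper's numerical model is more detailed; this sketch only illustrates the linear-mixing arithmetic:

```python
def atmospheric_fraction(r_sample, r_bedrock, r_atmospheric):
    """Two-endmember mixing sketch: the fraction of a dripwater 36Cl/Cl
    ratio attributable to atmospheric fallout, assuming the sample ratio
    is a linear mix of the bedrock (in-situ) and atmospheric endmembers."""
    return (r_sample - r_bedrock) / (r_atmospheric - r_bedrock)
```

When the bedrock endmember is small relative to the atmospheric one, the sample ratio tracks the atmospheric signal closely, which is the condition under which a speleothem 36Cl/Cl record could serve as a solar proxy.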
NASA Astrophysics Data System (ADS)
Cymberknop, L.; Legnani, W.; Pessana, F.; Bia, D.; Zócalo, Y.; Armentano, R. L.
2011-12-01
The development of vascular diseases such as hypertension and atherosclerosis is associated with significant alterations in the physical properties of arterial vessels. Evaluation of arterial biomechanical behaviour relies on the assessment of three representative indices: arterial compliance, arterial distensibility and the arterial stiffness index. Elasticity is the most important mechanical property of the arterial wall, and its nature is strictly non-linear. The contribution of elastin and collagen fibres, the passive constituent elements of the arterial wall, depends on the applied wall stress level. Accordingly, appropriate tools are required to analyse the temporal dynamics of the signals involved in order to characterize the whole phenomenon; fractal geometry is one such technique. The aim of this study was to process arterial pressure and diameter signals by means of nonlinear techniques based on fractal geometry. Time series morphology was related to different arterial stiffness states generated by means of blood flow variations during experiments performed in vitro.
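One standard fractal-geometry statistic for such signals is Higuchi's fractal dimension of the time-series graph. Whether the authors used this particular estimator is not stated; the sketch below is a generic implementation:

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension estimate: average curve lengths L(k)
    over decimated subsequences scale as k**(-D); D is the slope of
    log L(k) against log(1/k)."""
    x = np.asarray(x, float)
    n = len(x)
    ks, ls = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)   # Higuchi normalization
            lengths.append(dist * norm / k)
        ls.append(np.log(np.mean(lengths)))
        ks.append(np.log(1.0 / k))
    d, _ = np.polyfit(ks, ls, 1)
    return d
```

A smooth signal gives D near 1 and white noise gives D near 2, so D between those extremes tracks signal roughness, the property related here to arterial stiffness states.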
Historical evidence for nature disconnection in a 70-year time series of Disney animated films.
Prévot-Julliard, Anne-Caroline; Julliard, Romain; Clayton, Susan
2015-08-01
The assumed ongoing disconnection between humans and nature in Western societies represents a profoundly challenging conservation issue. Here, we demonstrate one manifestation of this nature disconnection, via an examination of the representation of natural settings in a 70-year time series of Disney animated films. We found that natural settings are increasingly less present as a representation of outdoor environments in these films. Moreover, these drawn natural settings tend to be more and more human controlled and are less and less complex in terms of the biodiversity they depict. These results demonstrate the increasing nature disconnection of the filmmaking teams, which we consider as a proxy of the Western relation to nature. Additionally, because nature experience of children is partly based on movies, the depleted representation of biodiversity in outdoor environments of Disney films may amplify the current disconnection from nature for children. This reduction in exposure to nature may hinder the implementation of biodiversity conservation measures. © The Author(s) 2014.
Using "big data" to optimally model hydrology and water quality across expansive regions
Roehl, E.A.; Cook, J.B.; Conrads, P.A.
2009-01-01
This paper describes a new divide-and-conquer approach that leverages big environmental data, utilizing all available categorical and time-series data without subjectivity, to empirically model hydrologic and water-quality behaviors across expansive regions. The approach decomposes large, intractable problems into smaller ones that are optimally solved; decomposes complex signals into behavioral components that are easier to model with "sub-models"; and employs a sequence of numerically optimizing algorithms that include time-series clustering, nonlinear multivariate sensitivity analysis, predictive modeling using multi-layer perceptron artificial neural networks, and classification for selecting the best sub-models to make predictions at new sites. This approach has many advantages over traditional modeling approaches: it is faster and less expensive, more comprehensive in its use of available data, and more accurate in representing a system's physical processes. This paper describes the application of the approach to model groundwater levels in Florida, stream temperatures across western Oregon and Wisconsin, and water depths in the Florida Everglades. © 2009 ASCE.
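The time-series clustering stage of such a pipeline can be sketched with plain k-means on the series themselves (one row per site); each cluster would then get its own sub-model. This is a generic sketch, not the paper's implementation:

```python
import numpy as np

def kmeans_series(series, k, iters=20, seed=0):
    """Plain k-means on raw time series (rows of `series`): assign each
    site's series to the nearest centroid, then move centroids to cluster
    means. A sub-model would then be trained per resulting cluster."""
    rng = np.random.default_rng(seed)
    centroids = series[rng.choice(len(series), k, replace=False)]
    for _ in range(iters):
        # squared Euclidean distance from every series to every centroid
        dists = ((series[:, None, :] - centroids[None]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = series[labels == j].mean(axis=0)
    return labels, centroids
```

In practice one would cluster normalized series or extracted features rather than raw values, and use a smarter initialization (e.g. k-means++); the sketch keeps only the core assign-update loop.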
Robot-assisted laparoscopic radical prostatectomy: perioperative outcomes of 1500 cases.
Patel, Vipul R; Palmer, Kenneth J; Coughlin, Geoff; Samavedi, Srinivas
2008-10-01
Robot-assisted laparoscopic radical prostatectomy (RALP) is an evolving minimally invasive treatment for localized prostate cancer. We present our experience of 1500 consecutive cases with an analysis of perioperative outcomes. Fifteen hundred consecutive RALPs were performed by a single surgeon (VRP). Following Institutional Review Board approval, clinical coordinators performed prospective intraoperative and postoperative data collection. Functional outcomes were assessed using validated self-administered questionnaires. Mean operating room time from skin incision to fascial closure (the time that the surgeon was present) was 105 minutes (range 55-300). Mean estimated blood loss was 111 cc (range 50-500). Ninety-seven percent of patients were discharged home on postoperative day 1. The overall complication rate was 4.3%, with no mortalities. The positive margin rate (PMR) was 9.3% overall: 4% for pathologic stage pT2, 34% for pT3, and 40% for pT4. Our series represents one of the largest published series of perioperative outcomes of robot-assisted prostatectomy. Our data demonstrate the feasibility, safety and efficacy of the procedure.
Fitzmaurice, Gerard J.; Redmond, Karen C.; Fitzpatrick, David A.; Bartosik, Waldemar
2014-01-01
In keeping with international trends, lung cancer incidence and mortality are increasing in the Irish population, with many patients presenting with advanced disease that excludes the potential for curative management. Consequently, palliative treatment options for this patient group are being increasingly explored, with varying degrees of success. Endobronchial stenosis represents a particularly challenging area of management in these patients, and a number of techniques have been described without the identification of a single gold standard. We report our experience of the first use of endobronchial cryotherapy in Ireland with reference to a case series, including an example of its use in the management of benign disease, in order to support patients with borderline lung function and enable definitive palliative treatment. PMID:24791176
ERIC Educational Resources Information Center
Johnston, Lloyd D.; Schulenberg, John E.; O'Malley, Patrick M.; Bachman, Jerald G.; Miech, Richard A.; Patrick, Megan E.
2017-01-01
This occasional paper presents subgroup findings from the Monitoring the Future (MTF) study on levels of, and trends in, the use of a number of substances for nationally representative samples of high school graduates ages 19-30. The data have been gathered in a series of follow-up surveys of representative subsamples of high school seniors who…
ERIC Educational Resources Information Center
Johnston, Lloyd D.; O'Malley, Patrick M.; Bachman, Jerald G.; Schulenberg, John E.; Miech, Richard A.
2016-01-01
This occasional paper presents subgroup findings from the Monitoring the Future (MTF) study on levels of, and trends in, the use of a number of substances for nationally representative samples of high school graduates ages 19-30. The data have been gathered in a series of follow-up surveys of representative subsamples of high school seniors who…
ERIC Educational Resources Information Center
Johnston, Lloyd D.; O'Malley, Patrick M.; Bachman, Jerald G.; Schulenberg, John E.; Miech, Richard A.
2015-01-01
This occasional paper presents subgroup findings from the Monitoring the Future (MTF) study on levels of and trends in the use of a number of substances for nationally representative samples of high school graduates ages 19-30. The data have been gathered in a series of follow-up surveys of representative subsamples of high school seniors who were…
NASA Astrophysics Data System (ADS)
Menezes-Blackburn, Daniel; Sun, Jiahui; Lehto, Niklas; Zhang, Hao; Stutter, Marc; Giles, Courtney D.; Darch, Tegan; George, Timothy S.; Shand, Charles; Lumsdon, David; Blackwell, Martin; Wearing, Catherine; Cooper, Patricia; Wendler, Renate; Brown, Lawrie; Haygarth, Philip M.
2017-04-01
The phosphorus (P) labile pool and desorption kinetics were simultaneously evaluated in ten representative UK soils using the diffusive gradients in thin films (DGT) technique. The DGT-induced fluxes in soils and sediments (DIFS) model was fitted to a time series of DGT deployments (1 h to 240 h). The desorbable (labile) P concentration was obtained by multiplying the fitted Kd by the soil solution P concentration obtained using diffusive equilibration in thin films (DET) devices. The labile P was then compared to several soil P extracts, including Olsen P, resin P, FeO-P and water-extractable P, in order to assess whether these analytical procedures can be used to represent labile P across different soils. Olsen P, commonly used as a representation of the soil labile P pool, overestimated the desorbable P concentration sevenfold. The use of this approach for quantifying soil P desorption kinetics parameters was somewhat imprecise, showing a wide range of equally valid solutions for the system's P equilibration response time (Tc). Additionally, the performance of different DIFS model versions (1D, 2D and 3D) was compared. Although these models fitted the experimental DGT time-series data well, the fitted parameters showed poor agreement between the different model versions. The limitations of the DIFS model family are associated with the assumptions made in the modelling approach; the 3D version is here considered the most precise among them.
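Two of the quantities above follow from simple formulas: the standard DGT equation converts the mass accumulated on the binding layer into a time-averaged interfacial concentration, and the labile pool is the fitted Kd times the DET solution concentration. A sketch with illustrative, unit-agnostic arguments (the numerical values in the test are invented, not the paper's data):

```python
def dgt_concentration(mass, dg, d_coef, area, t):
    """Standard DGT equation: C_DGT = M * dg / (D * A * t), with M the mass
    accumulated on the binding layer, dg the diffusive layer thickness,
    D the diffusion coefficient, A the exposure area and t the deployment
    time. Use consistent units throughout."""
    return mass * dg / (d_coef * area * t)

def labile_p(kd, c_solution):
    """Desorbable (labile) pool as described above: the fitted partition
    coefficient Kd times the solution concentration measured by DET."""
    return kd * c_solution
```

The time series of `dgt_concentration` values over 1-240 h deployments is exactly the kind of input the DIFS model is fitted to.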
Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.
2010-10-01
In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
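Fitting the GPD to threshold exceedances can be illustrated with a method-of-moments estimator (simpler than the likelihood-based fits normally used in extreme value practice) together with inverse-transform sampling:

```python
import numpy as np

def gpd_fit_moments(exceedances):
    """Method-of-moments GPD fit: with sample mean m and variance v,
    shape xi = (1 - m**2/v) / 2 and scale sigma = m * (1 + m**2/v) / 2
    (valid for xi < 1/2)."""
    m, v = np.mean(exceedances), np.var(exceedances)
    xi = 0.5 * (1 - m * m / v)
    sigma = 0.5 * m * (1 + m * m / v)
    return xi, sigma

def gpd_sample(xi, sigma, size, rng):
    """Inverse-transform sampling from the GPD:
    x = sigma * ((1 - u)**(-xi) - 1) / xi, exponential in the xi -> 0 limit."""
    u = rng.random(size)
    if abs(xi) < 1e-12:
        return -sigma * np.log(1 - u)
    return sigma * ((1 - u) ** (-xi) - 1) / xi
```

Round-tripping (sample, then fit) recovers the shape and scale, which is the basic consistency check behind using the GPD to model the frequency of ELOs and EHOs beyond a moving threshold.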
Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.
2010-05-01
In this study, ideas from extreme value theory are applied for the first time in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts that assume a Gaussian distribution of total ozone data (e.g. fixed deviations from mean values) do not adequately capture the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and extreme high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during recent decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails, instead of only a small fraction of days below constant thresholds, provides deeper insight into the time series properties. Fingerprints of dynamical features (e.g. ENSO, NAO), chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions can be identified in the observed frequency of extreme events throughout the time series. Overall, the new approach to the analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
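The threshold-exceedance procedure described above can be sketched numerically. The snippet below fits a GPD to exceedances of a synthetic daily total-ozone series above a day-of-year moving threshold; the synthetic data, the 95th-percentile threshold choice, and all parameter values are illustrative assumptions, not the Arosa record or the authors' exact threshold construction.

```python
# Sketch: fitting a Generalized Pareto Distribution (GPD) to threshold
# exceedances of a synthetic daily total-ozone series. A simple
# day-of-year quantile serves as the daily moving threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
days = np.arange(365 * 30)          # 30 synthetic years, no leap days
doy = days % 365
# Synthetic total ozone (Dobson units): seasonal cycle plus noise.
ozone = 320 + 40 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 15, days.size)

# Daily moving threshold: the 95th percentile for each day of year.
threshold = np.array([np.quantile(ozone[doy == d], 0.95) for d in range(365)])[doy]

# Exceedances above the threshold (candidate extreme-high days, "EHOs").
excess = ozone[ozone > threshold] - threshold[ozone > threshold]

# Fit the GPD to the excesses, with location fixed at zero as is standard
# for peaks-over-threshold modelling.
shape, loc, scale = stats.genpareto.fit(excess, floc=0)
print(f"GPD shape={shape:.3f}, scale={scale:.3f}, n_exceedances={excess.size}")
```

The same procedure applied below the threshold (sign-flipped deficits) would describe the ELOs.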
Arismendi, Ivan; Dunham, Jason B.; Heck, Michael; Schultz, Luke; Hockman-Wert, David
2017-01-01
Intermittent and ephemeral streams represent more than half of the length of the global river network. Dryland freshwater ecosystems are especially vulnerable to changes in human-related water uses as well as shifts in terrestrial climates. Yet, describing and quantifying patterns of flow permanence in these systems is challenging, mostly due to difficulties in instrumentation. Here, we took advantage of existing stream temperature datasets from dryland streams in the northwest Great Basin desert, USA, to extract critical information on climate-sensitive patterns of flow permanence. We used a signal detection technique, Hidden Markov Models (HMMs), to extract information from daily time series of stream temperature to diagnose patterns of stream drying. Specifically, we applied HMMs to time series of the daily standard deviation (SD) of stream temperature (dry stream channels typically display highly variable daily temperature records compared to wet stream channels) between April and August (2015–2016). We used information from paired stream and air temperature data loggers, as well as co-located stream temperature data loggers with electrical resistors, as confirmatory sources of the timing of stream drying. We expanded our approach to an entire stream network to illustrate the utility of the method for detecting patterns of flow permanence over a broader spatial extent. We successfully identified and separated signals characteristic of wet and dry stream conditions and their shifts over time. Most of our study sites within the stream network exhibited a single state over the entire season (80%), but a portion of them showed one or more shifts among states (17%). We provide a series of simple steps as recommendations for applying this approach. Our findings illustrate a successful method for rigorously quantifying flow permanence regimes in streams using existing records of stream temperature.
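The two-state (wet/dry) decoding idea can be sketched with a minimal Gaussian-emission HMM and Viterbi decoding. A library such as hmmlearn would normally fit the parameters from data; the self-contained version below fixes them by hand. The synthetic daily-SD series and all emission and transition parameters are illustrative assumptions, not the authors' fitted values.

```python
# Sketch: a minimal two-state Gaussian HMM decoded with the Viterbi
# algorithm, applied to a synthetic daily-SD-of-stream-temperature
# series. "Wet" channels show low daily SD, "dry" channels high SD.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily SD series: 60 wet days (~1 degC) then 40 dry days (~6 degC).
sd = np.concatenate([rng.normal(1.0, 0.3, 60), rng.normal(6.0, 1.0, 40)])

means, sigmas = np.array([1.0, 6.0]), np.array([0.5, 1.5])  # wet, dry emissions
log_trans = np.log(np.array([[0.99, 0.01], [0.01, 0.99]]))  # sticky states
log_start = np.log(np.array([0.5, 0.5]))

def log_emis(x):
    # Gaussian log-density for each state, up to a shared constant.
    return -0.5 * ((x - means) / sigmas) ** 2 - np.log(sigmas)

# Viterbi decoding: most likely wet(0)/dry(1) state sequence.
T, K = sd.size, 2
delta = np.zeros((T, K))
psi = np.zeros((T, K), dtype=int)
delta[0] = log_start + log_emis(sd[0])
for t in range(1, T):
    scores = delta[t - 1][:, None] + log_trans   # scores[i, j]: i -> j
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + log_emis(sd[t])
states = np.zeros(T, dtype=int)
states[-1] = delta[-1].argmax()
for t in range(T - 2, -1, -1):
    states[t] = psi[t + 1, states[t + 1]]

drying_day = int(np.argmax(states == 1))  # first day decoded as "dry"
print("decoded drying onset at day", drying_day)
```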
Atmospheric extinction in simulation tools for solar tower plants
NASA Astrophysics Data System (ADS)
Hanrieder, Natalie; Wilbert, Stefan; Schroedter-Homscheidt, Marion; Schnell, Franziska; Guevara, Diana Mancera; Buck, Reiner; Giuliano, Stefano; Pitz-Paal, Robert
2017-06-01
Atmospheric extinction causes significant radiation losses between the heliostat field and the receiver in a solar tower plant. These losses vary with site and time. The current state of the art in ray-tracing and plant optimization tools is to include atmospheric extinction by choosing between a few constant standard atmospheric conditions. Even though some tools allow the consideration of site- and time-dependent extinction data, such data sets are rarely available. This paper summarizes and compares the most common model equations implemented in several ray-tracing tools. Several methods to measure extinction on-site have already been developed and published; an overview of these methods is also given here. Ray-tracing simulations of one exemplary tower plant at the Plataforma Solar de Almería (PSA) are presented to estimate the plant yield deviations between simulations using standard model equations and those using extinction time series. For PSA, the effect of atmospheric extinction accounts for losses between 1.6 and 7%; this range depends on whether overload dumping is considered. Applying standard clear or hazy model equations instead of extinction time series leads to an underestimation of the annual plant yield at PSA. The discussion of the effect of extinction in tower plants has to include overload dumping, because situations in which overload dumping occurs are mostly connected to high radiation levels and low atmospheric extinction. It can therefore be recommended that project developers consider site- and time-dependent extinction data, especially at hazy sites. A reduced uncertainty of the plant yield prediction can significantly reduce costs due to smaller risk margins for financing and EPCs. The generation of extinction data for several locations, in the form of representative yearly time series or geographical maps, should be further elaborated.
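The yield impact of assuming a constant clear-sky extinction instead of a time series can be illustrated with a simple exponential (Beer-Lambert-style) attenuation over a mean slant range. The attenuation model, the toy DNI profile, and every coefficient below are illustrative assumptions, not the actual model equations of the ray-tracing tools discussed in the paper.

```python
# Sketch: comparing a constant "clear" extinction coefficient against a
# time-varying one for the power reaching a tower receiver.
import numpy as np

rng = np.random.default_rng(2)
hours = 8760
# Toy hourly DNI profile (W/m2): a half-sine repeated for 365 days.
dni = np.tile(np.clip(900 * np.sin(np.linspace(0, np.pi, 24)), 0, None), hours // 24)
slant_km = 0.8  # assumed mean heliostat-to-receiver path length

beta_const = 0.05  # assumed constant clear-sky extinction coefficient, 1/km
# Assumed hazy-site extinction time series, clipped to stay physical.
beta_ts = np.clip(rng.normal(0.08, 0.04, hours), 0.01, None)

# Exponential attenuation over the slant path.
power_const = dni * np.exp(-beta_const * slant_km)
power_ts = dni * np.exp(-beta_ts * slant_km)

loss_pct = 100 * (power_const.sum() - power_ts.sum()) / power_const.sum()
print(f"annual yield overestimated by {loss_pct:.2f}% when assuming clear skies")
```

A full plant-yield comparison would additionally model overload dumping, which the abstract identifies as decisive for the size of the effect.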
Detection of a sudden change of the field time series based on the Lorenz system.
Da, ChaoJiu; Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan
2017-01-01
We conducted an exploratory study of the detection of a sudden change in a field time series based on the numerical solution of the Lorenz system. First, the times when the Lorenz path jumped between the regions to the left and right of the equilibrium point of the Lorenz system were quantitatively marked, giving the sudden change times of the Lorenz system. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered a vector time series. We transformed the vector time series into a scalar time series using the vector inner product, taking into account the geometric and topological features of the Lorenz path. Third, sudden changes in the resulting time series were detected using the sliding t-test method. Comparing the test results with the quantitatively marked times indicated that the method could detect every sudden change of the Lorenz path, demonstrating that the method is effective. Finally, we used the method to detect sudden changes in pressure field and temperature field time series, and obtained good results for both, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between a field time series and a vector time series; thus, we provide a new method for detecting sudden changes in field time series.
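The sliding t-test step can be sketched on a synthetic scalar series with a known mean shift. The window length and the synthetic data are illustrative choices, not the authors' settings, and the Lorenz inner-product series is replaced here by a generic series with a change point.

```python
# Sketch: sliding t-test for detecting a sudden change (mean shift) in a
# scalar time series, of the kind derived from the Lorenz path via the
# vector inner product.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic series with a mean shift at index 120.
x = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(2.0, 1.0, 80)])

win = 20  # assumed sliding-window half-length
t_stat = np.full(x.size, np.nan)
for i in range(win, x.size - win):
    # Welch's t-test between the windows before and after index i.
    before, after = x[i - win:i], x[i:i + win]
    t_stat[i] = stats.ttest_ind(before, after, equal_var=False).statistic

# The candidate change point maximizes the absolute t statistic.
change = int(np.nanargmax(np.abs(t_stat)))
print("detected change point near index", change)
```

In practice the detected index would be compared against a significance threshold for the t statistic at the chosen confidence level before being accepted as a sudden change.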