NASA Astrophysics Data System (ADS)
Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal
2018-06-01
Prediction of the amount of water that will enter reservoirs in the following month is of vital importance, especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month, based on the time series of observed monthly river flow, with hybrid models of support vector regression (SVR). Monthly river flow observed over the period 1940-2012 for the Kızılırmak River in Turkey has been used for training the method, which has then been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT decomposes the original time series into a series of wavelet components, and SSA decomposes the time series into a trend, an oscillatory component, and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. All three methods for producing the input matrix for the SVR proved successful, with the SVR-WT combination yielding the highest coefficient of determination and the lowest mean absolute error.
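To make the SVR-WT pipeline concrete, here is a minimal sketch in Python: it builds the SVR input matrix from wavelet coefficients of lagged flow windows and holds out the last three years, as in the study. The synthetic flow series, the 12-month lag, the db2 wavelet, and the SVR settings are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(0)
months = np.arange(876)                       # 1940-2012 spans 876 months
flow = 10 + 4 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 876)  # synthetic flow

lag = 12                                      # one year of history per sample (assumed)
X, y = [], []
for t in range(lag, len(flow)):
    coeffs = pywt.wavedec(flow[t - lag:t], 'db2', level=2)  # WT of the lagged window
    X.append(np.concatenate(coeffs))          # wavelet coefficients form the input matrix
    y.append(flow[t])                         # next-month flow is the target
X, y = np.asarray(X), np.asarray(y)

svr = SVR(kernel='rbf', C=10.0).fit(X[:-36], y[:-36])   # hold out the last 3 years
pred = svr.predict(X[-36:])
print(np.corrcoef(pred, y[-36:])[0, 1] ** 2)            # coefficient of determination
```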
The short time Fourier transform and local signals
NASA Astrophysics Data System (ADS)
Okumura, Shuhei
In this thesis, I examine the theoretical properties of the short time discrete Fourier transform (STFT). The STFT is obtained by applying the Fourier transform over a fixed-size moving window to the input series. We move the window by one time point at a time, so the windows overlap. I present several theoretical properties of the STFT, applied to various types of complex-valued, univariate time series inputs, and their outputs in closed form. In particular, just like the discrete Fourier transform, the STFT's modulus time series takes large positive values when the input is a periodic signal. One main point is that a white noise time series input results in the STFT output being a complex-valued stationary time series, and we can derive its time and time-frequency dependency structure, such as the cross-covariance functions. Our primary focus is the detection of local periodic signals. I present a method to detect local signals by computing the probability that the squared-modulus STFT time series has consecutive large values exceeding some threshold, after one exceeding observation that follows an observation below the threshold. We discuss a method to reduce the computation of such probabilities by the Box-Cox transformation and the delta method, and show that it works well in comparison to the Monte Carlo simulation method.
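A minimal sketch of the overlapping-window STFT described here, computed on a white noise series with an embedded local periodic signal; the window width and signal frequency are arbitrary choices, and the threshold-probability machinery of the thesis is not reproduced.

```python
import numpy as np

def stft_mod(x, width):
    """Squared-modulus STFT with a length-`width` window moved one point at a time."""
    n = len(x) - width + 1
    return np.abs([np.fft.fft(x[t:t + width]) for t in range(n)]) ** 2

t = np.arange(1000)
x = np.random.default_rng(1).normal(size=1000)      # white noise background
x[400:520] += np.sin(2 * np.pi * 0.1 * t[400:520])  # local periodic signal

power = stft_mod(x, width=64)
bin_idx = round(0.1 * 64)          # FFT bin nearest the signal frequency
# The squared modulus in that bin is large only while the local signal is present
print(power[:, bin_idx].argmax())  # window index where the local signal dominates
```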
Giassi, Pedro; Okida, Sergio; Oliveira, Maurício G; Moraes, Raimes
2013-11-01
Short-term cardiovascular regulation mediated by the sympathetic and parasympathetic branches of the autonomic nervous system has been investigated by multivariate autoregressive (MVAR) modeling, providing insightful analysis. MVAR models employ, as inputs, heart rate (HR), systolic blood pressure (SBP), and respiratory waveforms. The ECG (from which the HR series is obtained) and the respiratory flow waveform (RFW) can be easily sampled from patients. Nevertheless, the available methods for acquiring beat-to-beat SBP measurements during exams hamper the wider use of MVAR models in clinical research. Recent studies show an inverse correlation between pulse wave transit time (PWTT) series and SBP fluctuations. PWTT is the time interval between the ECG R-wave peak and the photoplethysmography waveform (PPG) base point within the same cardiac cycle. This study investigates the feasibility of using the inverse PWTT (IPWTT) series as an alternative input to SBP for MVAR modeling of cardiovascular regulation. For that, HR, RFW, and IPWTT series acquired from volunteers during postural changes and autonomic blockade were used as input of MVAR models. The obtained results show that IPWTT series can be used as input of MVAR models, replacing SBP measurements in order to overcome practical difficulties related to the continuous sampling of SBP during clinical exams.
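The construction is straightforward to sketch. Assuming hypothetical beat event times, the snippet below derives HR, PWTT, and IPWTT series and fits a multivariate autoregressive model with statsmodels; the respiratory series is a synthetic stand-in, not recorded data.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
# Hypothetical beat event times (seconds): ECG R-peaks and PPG base points
r_peaks = np.cumsum(rng.uniform(0.7, 1.0, 300))
ppg_base = r_peaks + rng.uniform(0.20, 0.30, 300)     # transit delay per beat

hr = 60.0 / np.diff(r_peaks)                          # beat-to-beat heart rate
ipwtt = 1.0 / (ppg_base - r_peaks)[1:]                # inverse pulse wave transit time
rfw = np.sin(2 * np.pi * 0.25 * np.arange(hr.size)) + 0.1 * rng.normal(size=hr.size)

data = np.column_stack([hr, ipwtt, rfw])              # inputs to the MVAR model
results = VAR(data).fit(maxlags=8, ic='aic')          # order chosen by AIC
print(results.k_ar)                                   # selected model order
```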
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
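The hierarchical scheme can be sketched as a two-stage model: classify the flight input, then apply class-specific regressors, with variable-length output curves handled through basis coefficients. In the snippet below, random forests stand in for the Treed Gaussian Processes of the paper and a polynomial basis stands in for their curve basis; all data and variable names are synthetic and hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(15)
n, d, curve_len, n_basis = 400, 5, 50, 6
X = rng.normal(size=(n, d))                        # controller parameter inputs
fail = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic stability outcome
ttf = np.where(fail == 1, 10 + 5 * np.abs(X[:, 2]), np.nan)  # time-to-failure

# Variable-length outputs handled via a fixed basis: learn input -> coefficients
t = np.linspace(0, 1, curve_len)
basis = np.array([t ** k for k in range(n_basis)]).T          # polynomial basis
curves = np.sin(3 * X[:, [0]] * t) + 0.05 * rng.normal(size=(n, curve_len))
coefs, *_ = np.linalg.lstsq(basis, curves.T, rcond=None)      # (n_basis, n)

clf = RandomForestClassifier(random_state=0).fit(X, fail)     # stage 1: success/failure
reg_ttf = RandomForestRegressor(random_state=0).fit(X[fail == 1], ttf[fail == 1])
reg_coef = RandomForestRegressor(random_state=0).fit(X, coefs.T)  # stage 2 models

x_new = rng.normal(size=(1, d))
if clf.predict(x_new)[0] == 1:
    print("predicted time-to-failure:", reg_ttf.predict(x_new)[0])
print("reconstructed curve:", (basis @ reg_coef.predict(x_new).T).ravel()[:5])
```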
A data mining framework for time series estimation.
Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin
2010-04-01
Time series estimation techniques are usually employed in biomedical research to derive variables that are less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define the target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between the true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features.
Sensitivity analysis of machine-learning models of hydrologic time series
NASA Astrophysics Data System (ADS)
O'Reilly, A. M.
2017-12-01
Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models, where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing the forcing time series and computing the change in the response time series per unit change in perturbation. Variations in forcing-response sensitivities are evident between response types (lake water level, groundwater level, or spring flow), spatially (among sites of the same type), and temporally. Two characteristics generally common among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater use during significant drought periods.
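The perturbation-based sensitivity computation can be sketched as follows, with a synthetic rainfall forcing, a single MWA input channel, and a small MLP standing in for the study's MWA-ANN models; the drought proxy used in the last line is an assumption for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
rain = rng.gamma(2.0, 2.0, 2000)                          # daily rainfall forcing
mwa = np.convolve(rain, np.ones(30) / 30, mode='valid')   # 30-day moving window average
level = 10 + 0.5 * mwa + rng.normal(0, 0.05, len(mwa))    # synthetic water-level response

X = mwa.reshape(-1, 1)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, level)

eps = 0.01 * X.std()                         # small perturbation of the forcing
sens = (ann.predict(X + eps) - ann.predict(X)) / eps  # response change per unit forcing
print(sens.mean(), sens[level.argmin()])     # average vs lowest-level (drought proxy) sensitivity
```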
Series resonant converter with auxiliary winding turns: analysis, design and implementation
NASA Astrophysics Data System (ADS)
Lin, Bor-Ren
2018-05-01
Conventional series resonant converters have been researched and applied for high-efficiency power units owing to their low switching losses. The main problems of series resonant converters are wide frequency variation and high circulating current. Thus, resonant converters are limited to a narrow input voltage range, and a large input capacitor is normally adopted in commercial power units to provide the minimum hold-up time requirement when AC power is off. To overcome these problems, a resonant converter with auxiliary secondary windings is presented in this paper to achieve high voltage gain in the low-input-voltage case, such as during the hold-up time when utility power is off. Since the high voltage gain is used only in the low-input-voltage case, the frequency variation of the proposed converter is reduced compared to the conventional resonant converter. Compared to the conventional resonant converter, the hold-up time of the proposed converter is more than 40 ms. A larger magnetising inductance of the transformer is used to reduce the circulating current losses. Finally, a laboratory prototype is constructed and experiments are provided to verify the converter performance.
NASA Astrophysics Data System (ADS)
Suhartono; Lee, Muhammad Hisyam; Rezeki, Sri
2017-05-01
Intervention analysis is a statistical model in the family of time series analysis that is widely used to describe the effect of an intervention caused by external or internal factors. An example of an external factor that often occurs in Indonesia is a disaster, whether natural or man-made. The main purpose of this paper is to provide the results of theoretical studies on the identification step for determining the order of a multi-input intervention analysis, for evaluating the magnitude and duration of the impact of interventions on time series data. The theoretical results showed that the standardized residuals could properly be used as the response function for determining the order of the multi-input intervention model. These results are then applied to evaluate the impact of a disaster in a real case in Indonesia, i.e. the magnitude and duration of the impact of the Lapindo mud on the volume of vehicles on the highway. Moreover, the empirical results showed that the multi-input intervention model can accurately describe and explain the magnitude and duration of the impact of disasters on time series data.
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic® for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
Ensemble Bayesian forecasting system Part I: Theory and algorithms
NASA Astrophysics Data System (ADS)
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate the multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by Ito equations with multiplicative power-law noise. We show that the multifractality of the connectivity time series (i.e., the series of the number of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility graph procedure which, in connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in the input time series.
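The connectivity time series itself is easy to reproduce. Below is an O(n²) sketch of the natural visibility graph's degree series, applied to a random walk as a simple stand-in for a series generated by the Ito equations with multiplicative noise.

```python
import numpy as np

def visibility_degree(x):
    """Degree series of the natural visibility graph of time series x (O(n^2) sketch)."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            # a sees b if every intermediate point lies below the line joining them
            tc = np.arange(a + 1, b)
            line = x[a] + (x[b] - x[a]) * (tc - a) / (b - a)
            if np.all(x[tc] < line):
                deg[a] += 1
                deg[b] += 1
    return deg

walk = np.cumsum(np.random.default_rng(5).normal(size=500))  # stand-in input series
k = visibility_degree(walk)   # connectivity time series analysed for multifractality
print(k[:10])
```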
Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.
Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva
2011-06-01
The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal) over a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit, and bottom). The model that presented the highest average correlation value yielded correlations of 0.991, 0.843, and 0.978 for the training, verification, and test series, respectively. This model had the three series independent in time: first the test series, then the verification series and, finally, the training series. Only six input variables were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH, and water evaporation, physical and chemical parameters referring to the three depths of the reservoir. These variables are common to the next four best models produced and, although these included other input variables, their performance was not better than the selected best model.
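A GRNN is essentially Gaussian-kernel-weighted regression, so a minimal sketch needs only a few lines; the six-variable input matrix, targets, and bandwidth below are synthetic placeholders, not the study's data.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """General regression neural network: Gaussian-kernel weighted average of targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))          # kernel weights per training pattern
    return (w @ y_train) / w.sum(axis=1)        # weighted average prediction

rng = np.random.default_rng(6)
X = rng.normal(size=(120, 6))                   # six inputs (ammonia, phosphates, ...)
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.1, 120)  # stand-in cyanobacteria density
print(grnn_predict(X[:90], y[:90], X[90:], sigma=1.5)[:5])
```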
49 CFR 571.126 - Standard No. 126; Electronic stability control systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... counterclockwise steering, and the other series uses clockwise steering. The maximum time permitted between each... or side slip derivative with respect to time; (4) That has a means to monitor driver steering inputs... dwell steering input (time T0 + 1 in Figure 1) must not exceed 35 percent of the first peak value of yaw...
49 CFR 571.126 - Standard No. 126; Electronic stability control systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... counterclockwise steering, and the other series uses clockwise steering. The maximum time permitted between each... or side slip derivative with respect to time; (4) That has a means to monitor driver steering inputs... dwell steering input (time T0 + 1 in Figure 1) must not exceed 35 percent of the first peak value of yaw...
NASA Satellite Data for Seagrass Health Modeling and Monitoring
NASA Technical Reports Server (NTRS)
Spiering, Bruce A.; Underwood, Lauren; Ross, Kenton
2011-01-01
Time-series-derived information for coastal waters will be used to provide input data for the Fong and Harwell model. The current MODIS land mask limits where the model can be applied; this project will: a) apply MODIS data with resolution higher than the standard products (250-m vs. 1-km); b) seek to refine the land mask; c) explore nearby areas to use as proxies for time series directly over the beds. Novel processing approaches will be leveraged from other NASA projects and customized as inputs for seagrass productivity modeling.
Finite difference time domain grid generation from AMC helicopter models
NASA Technical Reports Server (NTRS)
Cravey, Robin L.
1992-01-01
A simple technique is presented which forms a cubic grid model of a helicopter from an Aircraft Modeling Code (AMC) input file. The AMC input file defines the helicopter fuselage as a series of polygonal cross sections. The cubic grid model is used as an input to a Finite Difference Time Domain (FDTD) code to obtain predictions of antenna performance on a generic helicopter model. The predictions compare reasonably well with measured data.
49 CFR 571.126 - Standard No. 126; Electronic stability control systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... series uses counterclockwise steering, and the other series uses clockwise steering. The maximum time... rate and to estimate its side slip or side slip derivative with respect to time; (4) That has a means... after completion of the sine with dwell steering input (time T0 + 1 in Figure 1) must not exceed 35...
NASA Astrophysics Data System (ADS)
Godsey, S. E.; Kirchner, J. W.
2008-12-01
The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolving these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
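A compressed sketch of the frequency-domain variant of this experiment: generate a synthetic tracer input, convolve it with a known exponential travel time distribution, add measurement noise, and recover the apparent mean residence time from the ratio of input to output spectral power. The exponential form, the noise level, and the fitted frequency band are all assumptions.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)
n = 8192
precip = rng.normal(size=n)                      # synthetic tracer input series
tau = 50.0                                       # true mean residence time (steps)
ttd = np.exp(-np.arange(n) / tau) / tau          # exponential travel time distribution
stream = np.convolve(precip, ttd)[:n] + rng.normal(0, 0.01, n)  # 'corrupted' output

# For an exponential TTD the transfer gain is |H(f)|^2 = 1 / (1 + (2*pi*f*tau)^2),
# so tau can be estimated from the ratio of input to output spectral power.
f, p_in = welch(precip, nperseg=1024)
_, p_out = welch(stream, nperseg=1024)
low = (f > 0.004) & (f < 0.02)                   # fit band: above DC, below noise floor
tau_hat = np.sqrt(np.mean((p_in[low] / p_out[low] - 1) / (2 * np.pi * f[low]) ** 2))
print(tau_hat)                                   # apparent mean residence time vs tau = 50
```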
NASA Astrophysics Data System (ADS)
Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.
2018-03-01
In the development of information systems and software for predicting dynamic series, neural network methods have recently been applied. They are more flexible than existing analogues and are capable of taking into account the nonlinearities of the series. In this paper, we propose a modified algorithm for predicting dynamic series, which includes a method for training neural networks and an approach to describing and presenting input data, based on prediction by the multilayer perceptron method. To construct the neural network, the values of the series at its extremum points and the corresponding time values, formed using the sliding window method, are used as input data. The proposed algorithm can act as an independent approach to predicting dynamic series, or as one part of a forecasting system. The efficiency of predicting the evolution of the dynamic series for a short-term one-step and a long-term multi-step forecast is compared between the classical multilayer perceptron method and the modified algorithm, using synthetic and real data. The result of this modification is the minimization of the iterative error that accumulates when previously predicted values are fed back as inputs to the neural network, as well as an increase in the accuracy of the iterative prediction of the neural network.
Improving Cancer-Related Outcomes with Connected Health - Acknowledgements
The President's Cancer Panel is grateful to all participants who invested their time to take part in the series of workshops on connected health and cancer. A complete list of participants is in Series Information. The Panel is especially appreciative of the series co-chairs, who graciously contributed their time and knowledge on this topic, providing valuable guidance during workshop planning and extensive input on this report.
NASA Astrophysics Data System (ADS)
Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng
2017-05-01
The contribution of this work is twofold: (1) a multimodality prediction method for chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide-and-conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can more precisely fit the chaotic time series. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction, but also the prediction confidence interval. SHC-EM outperforms traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and less susceptible to noise than variational learning. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of the Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.
Forecasting hotspots using predictive visual analytics approach
Maciejewski, Ross; Hafen, Ryan; Rudolph, Stephen; Cleveland, William; Ebert, David
2014-12-30
A method for forecasting hotspots is provided. The method may include the steps of receiving input data at an input of the computational device, generating a temporal prediction based on the input data, generating a geospatial prediction based on the input data, and generating output data based on the temporal and geospatial predictions. The output data may be configured to display at least one user interface at an output of the computational device.
Treatments of Precipitation Inputs to Hydrologic Models
USDA-ARS?s Scientific Manuscript database
Hydrological models are used to assess many water resources problems, from agricultural use and water quality to engineering issues. The success of these models is dependent on correct parameterization; the most sensitive input is the rainfall time series. These records can come from land-based ...
Asquith, W.H.; Mosier, J. G.; Bush, P.W.
1997-01-01
The watershed simulation model Hydrologic Simulation Program—Fortran (HSPF) was used to generate simulated flow (runoff) from the 13 watersheds to the six bay systems because adequate gaged streamflow data from which to estimate freshwater inflows are not available; only about 23 percent of the adjacent contributing watershed area is gaged. The model was calibrated for the gaged parts of three watersheds—that is, selected input parameters (meteorologic and hydrologic properties and conditions) that control runoff were adjusted in a series of simulations until an adequate match between model-generated flows and a set (time series) of gaged flows was achieved. The primary model input is rainfall and evaporation data and the model output is a time series of runoff volumes. After calibration, simulations driven by daily rainfall for a 26-year period (1968–93) were done for the 13 watersheds to obtain runoff under current (1983–93), predevelopment (pre-1940 streamflow and pre-urbanization), and future (2010) land-use conditions for estimating freshwater inflows and for comparing runoff under the three land-use conditions; and to obtain time series of runoff from which to estimate time series of freshwater inflows for trend analysis.
Change point detection of the Persian Gulf sea surface temperature
NASA Astrophysics Data System (ADS)
Shirvani, A.
2017-01-01
In this study, the Student's t parametric and Mann-Whitney nonparametric change point models (CPMs) were applied to detect a change point in the annual Persian Gulf sea surface temperature anomalies (PGSSTA) time series for the period 1951-2013. The PGSSTA time series, which were serially correlated, were transformed to produce an uncorrelated, pre-whitened time series. The pre-whitened PGSSTA time series were utilized as the input file of the change point models. Both the parametric and nonparametric CPMs estimated the change point in the PGSSTA in 1992. The PGSSTA follow a normal distribution both before and after 1992, but with a different mean value after 1992. The estimated slope of the linear trend in the PGSSTA time series for the period 1951-1992 was negative, whereas it was positive after the detected change point. Unlike for the PGSSTA, the applied CPMs suggested no change point in the Niño3.4 SSTA time series.
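A minimal sketch of lag-1 pre-whitening followed by a Student's t change point scan, on synthetic serially correlated anomalies with an assumed mean shift after 1992; the actual CPM machinery differs in detail (e.g., in its significance thresholds).

```python
import numpy as np

rng = np.random.default_rng(8)
years = np.arange(1951, 2014)                  # 63 annual PGSSTA values
n = len(years)
ssta = np.zeros(n)
for t in range(1, n):                          # serially correlated anomalies (AR(1))
    ssta[t] = 0.4 * ssta[t - 1] + rng.normal(0, 0.3)
ssta[years >= 1993] += 0.5                     # assumed mean shift after 1992

r1 = np.corrcoef(ssta[:-1], ssta[1:])[0, 1]    # lag-1 autocorrelation
pw = ssta[1:] - r1 * ssta[:-1]                 # pre-whitened input to the CPM

best_k, best_t = 0, 0.0                        # Student's t CPM: scan split points
for k in range(5, len(pw) - 5):
    a, b = pw[:k], pw[k:]
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    t_stat = abs(a.mean() - b.mean()) / se
    if t_stat > best_t:
        best_k, best_t = k, t_stat
print(years[1 + best_k], best_t)               # estimated change year and statistic
```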
Tormene, Paolo; Giorgino, Toni; Quaglini, Silvana; Stefanelli, Mario
2009-01-01
The purpose of this study was to assess the performance of a real-time ("open-end") version of the dynamic time warping (DTW) algorithm for the recognition of motor exercises. Given a possibly incomplete input stream of data and a reference time series, the open-end DTW algorithm computes both the size of the prefix of the reference that is best matched by the input, and the dissimilarity between the matched portions. The algorithm was used to provide real-time feedback to neurological patients undergoing motor rehabilitation. We acquired a dataset of multivariate time series from a sensorized long-sleeve shirt which contains 29 strain sensors distributed on the upper limb. Seven typical rehabilitation exercises were recorded in several variations, both correctly and incorrectly executed, and at various speeds, totaling a data set of 840 time series. Nearest-neighbour classifiers were built according to the outputs of open-end DTW alignments and their global counterparts on exercise pairs. The classifiers were also tested on well-known public datasets from heterogeneous domains. Nonparametric tests show that (1) on full time series the two algorithms achieve the same classification accuracy (p-value = 0.32); (2) on partial time series, classifiers based on open-end DTW have a far higher accuracy (κ = 0.898 versus κ = 0.447; p < 10⁻⁵); and (3) the prediction of the matched fraction follows the ground truth closely (root mean square error < 10%). The results hold for the motor rehabilitation data and the other datasets tested as well. The open-end variant of the DTW algorithm is suitable for the classification of truncated quantitative time series, even in the presence of noise. Early recognition and accurate class prediction can be achieved, provided that enough variance is available over the time span of the reference. Therefore, the proposed technique expands the use of DTW to a wider range of applications, such as real-time biofeedback systems.
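The open-end modification amounts to freeing the end point of the alignment on the reference axis: instead of reading the cost at the final cell, take the minimum over the last row. A univariate sketch follows (the study's series are 29-dimensional); the sine templates are placeholders for exercise recordings.

```python
import numpy as np

def open_end_dtw(query, reference):
    """Open-end DTW: align the whole (possibly truncated) query against the
    best-matching prefix of the reference; returns (dissimilarity, prefix length)."""
    n, m = len(query), len(reference)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - reference[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    j_star = int(np.argmin(D[n, 1:])) + 1        # free end point on the reference
    return D[n, j_star], j_star

ref = np.sin(np.linspace(0, 4 * np.pi, 200))     # reference exercise template
partial = np.sin(np.linspace(0, 2 * np.pi, 90))  # half-completed, resampled execution
d, matched = open_end_dtw(partial, ref)
print(d, matched / len(ref))                     # dissimilarity and matched fraction ~0.5
```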
NASA Astrophysics Data System (ADS)
Krämer, Stefan; Rohde, Sophia; Schröder, Kai; Belli, Aslan; Maßmann, Stefanie; Schönfeld, Martin; Henkel, Erik; Fuchs, Lothar
2015-04-01
The design of urban drainage systems with numerical simulation models requires long, continuous rainfall time series with high temporal resolution. However, suitable observed time series are rare. As a result, usual design concepts often use uncertain or unsuitable rainfall data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic rainfall data as input for urban drainage modelling are advanced, tested, and compared. Synthetic rainfall time series from three different precipitation model approaches - one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), and one downscaling approach from a regional climate model - are provided for three catchments with different sewer system characteristics in different climate regions of Germany: Hamburg (northern Germany): maritime climate, mean annual rainfall 770 mm; combined sewer system length 1729 km (city centre of Hamburg), storm water sewer system length (Hamburg Harburg) 168 km. Brunswick (Lower Saxony, northern Germany): transitional climate from maritime to continental, mean annual rainfall 618 mm; sewer system length 278 km, connected impervious area 379 ha, height difference 27 m. Freiburg im Breisgau (southern Germany): Central European transitional climate, mean annual rainfall 908 mm; sewer system length 794 km, connected impervious area 1546 ha, height difference 284 m. Hydrodynamic models are set up for each catchment to simulate rainfall-runoff processes in the sewer systems. Long-term event time series are extracted from the three different synthetic rainfall time series (comprising up to 600 years of continuous rainfall) provided for each catchment, and from observed gauge rainfall (reference rainfall), according to national hydraulic design standards. The synthetic and reference long-term event time series are used as rainfall input for the hydrodynamic sewer models. For comparison of the synthetic rainfall time series against the reference rainfall and against each other, the number of surcharged manholes, the number of surcharges per manhole, and the average surcharge volume per manhole are applied as hydraulic performance criteria. The results are discussed and assessed to answer the following questions: Are the synthetic rainfall approaches suitable to generate high-resolution rainfall series, and do they produce - in combination with numerical rainfall-runoff models - valid results for the design of urban drainage systems? What are the bounds of uncertainty in the runoff results, depending on the synthetic rainfall model and on the climate region? The work is carried out within the SYNOPSE project, funded by the German Federal Ministry of Education and Research (BMBF).
FPGA implementation of predictive degradation model for engine oil lifetime
NASA Astrophysics Data System (ADS)
Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.
2018-03-01
This paper presents the implementation of a linear regression model for degradation prediction on Register Transfer Logic (RTL) using Quartus II. A stationary model was identified in the degradation trend of vehicle engine oil using a time series method. For the RTL implementation, the degradation model is written in Verilog HDL and the input data are sampled at fixed times. A clock divider was designed to support the timing sequence of the input data. At every five data points, a regression analysis is applied to determine the slope variation and compute the prediction. Here, only negative slope values are taken into consideration for prediction purposes, to reduce the number of logic gates. The least squares method is adopted to obtain the best linear model based on the mean values of the time series data. The coded algorithm has been implemented on an FPGA for validation purposes. The result shows the predicted time to change the engine oil.
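The regression unit is small enough to sketch in full. The snippet below reproduces the five-sample least-squares slope in floating point (the RTL version works in fixed-point hardware); the oil-quality samples and the threshold of 80 are hypothetical.

```python
import numpy as np

def window_slope(y):
    """Least-squares slope of a five-sample window, as in the RTL regression unit."""
    t = np.arange(len(y), dtype=float)
    return np.sum((t - t.mean()) * (y - y.mean())) / np.sum((t - t.mean()) ** 2)

oil_quality = np.array([100.0, 99.2, 98.1, 97.4, 96.0])   # hypothetical samples
slope = window_slope(oil_quality)
if slope < 0:                                    # only negative slopes drive the prediction
    remaining = (80.0 - oil_quality[-1]) / slope  # steps until an assumed threshold of 80
    print(slope, remaining)
```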
Identification of human operator performance models utilizing time series analysis
NASA Technical Reports Server (NTRS)
Holden, F. M.; Shinners, S. M.
1973-01-01
The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.
Deconvolution of time series in the laboratory
NASA Astrophysics Data System (ADS)
John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian
2016-10-01
In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct the required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
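A sketch of the first application, assuming a first-order high-pass model for the sound card's built-in filter: the recorded signal is divided by the frequency response in Fourier space, with a floor on |H| so the division does not blow up where the filter removed essentially everything.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 2048
true_input = rng.normal(size=n)                      # signal before the high-pass filter

# Assumed first-order high-pass response, |H| -> 0 at low frequencies
f = np.fft.rfftfreq(n)
fc = 0.02                                            # assumed cutoff (cycles/sample)
H = (1j * f / fc) / (1 + 1j * f / fc)
recorded = np.fft.irfft(np.fft.rfft(true_input) * H, n)  # distorted measurement

# Deconvolution in Fourier space, with a floor on |H| to avoid noise blow-up
eps = 1e-3
H_safe = np.where(np.abs(H) < eps, eps, H)
recovered = np.fft.irfft(np.fft.rfft(recorded) / H_safe, n)
print(np.corrcoef(true_input[100:-100], recovered[100:-100])[0, 1])  # near 1
```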
NASA Astrophysics Data System (ADS)
Abrokwah, K.; O'Reilly, A. M.
2017-12-01
Groundwater is an important resource that is extracted every day because of its invaluable use for domestic, industrial, and agricultural purposes. The need for sustaining groundwater resources is clearly indicated by declining water levels and has motivated the modeling and forecasting of groundwater levels. In this study, spectral decomposition of climatic forcing time series was used to develop hybrid wavelet analysis (WA) and moving window average (MWA) artificial neural network (ANN) models. These techniques are explored by modeling historical groundwater levels in order to provide understanding of potential causes of the observed groundwater-level fluctuations. Selection of the appropriate decomposition level for WA and window size for MWA helps in understanding the important time scales of climatic forcing, such as rainfall, that influence water levels. The discrete wavelet transform (DWT) is used to decompose the input time-series data into various levels of approximation and detail wavelet coefficients, whilst the MWA acts as a low-pass filter that removes high-frequency signals from the input data. The variables used to develop and validate the models were daily average rainfall measurements from five National Oceanic and Atmospheric Administration (NOAA) weather stations and daily water-level measurements from two wells recorded from 1978 to 2008 in central Florida, USA. Using different decomposition levels and different window sizes, several WA-ANN and MWA-ANN models for simulating the water levels were created and their relative performances compared against each other. The WA-ANN models performed better than the corresponding MWA-ANN models, and higher decomposition levels of the input signal by the DWT gave the best results. The results obtained show the applicability and feasibility of hybrid WA-ANN and MWA-ANN models for simulating daily water levels using only climatic forcing time series as model inputs.
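The WA input construction can be sketched with PyWavelets: each frequency band of the forcing is reconstructed to full length and becomes one ANN input channel. The rainfall series, wavelet, decomposition level, and water-level response below are synthetic assumptions, not the study's data.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(10)
rain = rng.gamma(2.0, 1.5, 3000)                 # synthetic daily rainfall forcing

# Multilevel DWT of the forcing: approximation + detail coefficients,
# each band reconstructed to full length as one model input channel
coeffs = pywt.wavedec(rain, 'db4', level=3)
channels = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    channels.append(pywt.waverec(keep, 'db4')[:len(rain)])
X = np.column_stack(channels)                    # one column per frequency band

water = 5 + 0.002 * np.convolve(rain, np.ones(60), mode='same')  # synthetic level
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=1000, random_state=0).fit(X, water)
print(ann.score(X, water))                       # in-sample fit of the WA-ANN sketch
```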
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool ...
NASA Astrophysics Data System (ADS)
Eckert, Sandra
2016-08-01
The SPOT-5 Take 5 campaign provided SPOT time series data of an unprecedented spatial and temporal resolution. We analysed 29 scenes acquired between May and September 2015 of a semi-arid region in the foothills of Mount Kenya, with two aims: first, to distinguish rainfed from irrigated cropland and cropland from natural vegetation covers, which show similar reflectance patterns; and second, to identify individual crop types. We tested several input data sets in different combinations: the spectral bands and the normalized difference vegetation index (NDVI) time series, principal components of NDVI time series, and selected NDVI time series statistics. For the classification we used random forests (RF). In the test differentiating rainfed cropland, irrigated cropland, and natural vegetation covers, the best classification accuracies were achieved using spectral bands. For the differentiation of crop types, we analysed the phenology of selected crop types based on NDVI time series. First results are promising.
Directionality volatility in electroencephalogram time series
NASA Astrophysics Data System (ADS)
Mansor, Mahayaudin M.; Green, David A.; Metcalfe, Andrew V.
2016-06-01
We compare time series of electroencephalograms (EEGs) from healthy volunteers with EEGs from subjects diagnosed with epilepsy. The EEG time series from the healthy group are recorded during awake state with their eyes open and eyes closed, and the records from subjects with epilepsy are taken from three different recording regions of pre-surgical diagnosis: hippocampal, epileptogenic and seizure zone. The comparisons for these 5 categories are in terms of deviations from linear time series models with constant variance Gaussian white noise error inputs. One feature investigated is directionality, and how this can be modelled by either non-linear threshold autoregressive models or non-Gaussian errors. A second feature is volatility, which is modelled by Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) processes. Other features include the proportion of variability accounted for by time series models, and the skewness and the kurtosis of the residuals. The results suggest these comparisons may have diagnostic potential for epilepsy and provide early warning of seizures.
Annual land cover change mapping using MODIS time series to improve emissions inventories.
NASA Astrophysics Data System (ADS)
López Saldaña, G.; Quaife, T. L.; Clifford, D.
2014-12-01
Understanding and quantifying land surface changes is necessary for estimating greenhouse gas and ammonia emissions, and for meeting air quality limits and targets. More sophisticated inventory methodologies, at least for key emission sources, are needed to meet policy-driven air quality directives. Quantifying land cover changes on an annual basis requires greater spatial and temporal disaggregation of input data. The main aim of this study is to develop a methodology for using Earth Observations (EO) to identify annual land surface changes that will improve emissions inventories from agriculture and land use/land use change and forestry (LULUCF) in the UK. The first goal is to find the best sets of input features that accurately describe the surface dynamics. In order to identify annual and inter-annual land surface changes, a time series of surface reflectance was used to capture seasonal variability. Daily surface reflectance images from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 500-m resolution were used to invert a Bidirectional Reflectance Distribution Function (BRDF) model to create the seamless time series. Given the limited number of cloud-free observations, a BRDF climatology was used to constrain the model inversion and, where no observations of high scientific quality were available at all, as a gap filler. The Land Cover Map 2007 (LC2007) produced by the Centre for Ecology & Hydrology (CEH) was used for training and testing purposes. A prototype land cover product was created for 2006 to 2008. Several machine learning classifiers were tested, as well as different sets of input features ranging from the BRDF parameters to spectral albedo. We will present the results of the time series development and the first exercises in creating the prototype land cover product.
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 5, Appendix D
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS 5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Average input high current, worst case input high current, output low current, and data setup time are some of the results presented.
van Mierlo, Pieter; Lie, Octavian; Staljanssens, Willeke; Coito, Ana; Vulliémoz, Serge
2018-04-26
We investigated the influence of processing steps in the estimation of multivariate directed functional connectivity during seizures recorded with intracranial EEG (iEEG) on seizure-onset zone (SOZ) localization. We studied the effect of (i) the number of nodes, (ii) time-series normalization, (iii) the choice of multivariate time-varying connectivity measure: the Adaptive Directed Transfer Function (ADTF) or the Adaptive Partial Directed Coherence (APDC), and (iv) the graph theory measure: outdegree or shortest path length. First, simulations were performed to quantify the influence of the various processing steps on the accuracy of SOZ localization. Afterwards, the SOZ was estimated from a 113-electrode iEEG seizure recording and compared with the resection that rendered the patient seizure-free. The simulations revealed that the ADTF is preferred over the APDC for localizing the SOZ from ictal iEEG recordings. Normalizing the time series before analysis resulted in an increase of 25-35% in correctly localized SOZs, while adding more nodes to the connectivity analysis led to a moderate decrease of 10% when comparing 128 with 32 input nodes. The real-seizure connectivity estimates localized the SOZ inside the resection area using the ADTF coupled to outdegree or shortest path length. Our study showed that normalizing the time series is an important pre-processing step, while adding nodes to the analysis only marginally affected SOZ localization. The study shows that directed multivariate Granger-based connectivity analysis is feasible with many input nodes (> 100) and that normalization of the time series before connectivity analysis is preferred.
NASA Astrophysics Data System (ADS)
Lohani, A. K.; Kumar, Rakesh; Singh, R. D.
2012-06-01
Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and a neuro-fuzzy system for monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR) models, artificial neural networks (ANNs), and an adaptive neural-based fuzzy inference system (ANFIS). To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and adaptive neural fuzzy inference system (ANFIS) models, a comparison is made with the autoregressive (AR) models. The ANFIS model trained with an input data vector that includes previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy in comparison with the ANFIS models trained with input vectors of previous inflows only. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflows for the planning and operation of the reservoir.
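Including cyclic terms is a small change to the input vector: alongside the previous inflows, add sine and cosine terms of the month index. A sketch with a synthetic monthly inflow series, and an MLP standing in for ANFIS (which scikit-learn does not provide):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
months = np.arange(360)
inflow = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 360)

# Inputs: previous two inflows plus cyclic terms encoding monthly periodicity
prev1, prev2 = inflow[1:-1], inflow[:-2]
m = months[2:]
X = np.column_stack([prev1, prev2,
                     np.sin(2 * np.pi * m / 12), np.cos(2 * np.pi * m / 12)])
y = inflow[2:]

ann = MLPRegressor(hidden_layer_sizes=(6,), max_iter=3000, random_state=0)
ann.fit(X[:-60], y[:-60])
print(ann.score(X[-60:], y[-60:]))   # hold-out R^2 of the cyclic-term model
```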
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
NASA Astrophysics Data System (ADS)
Du, Kongchang; Zhao, Ying; Lei, Jiaqiang
2017-09-01
In hydrological time series prediction, singular spectrum analysis (SSA) and the discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then obtain a set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction, we found that this usage of SSA and DWT in building hybrid models is incorrect. Because SSA and DWT use 'future' values in their calculations, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore report spuriously 'high' prediction performance and may cause large errors in practice.
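The leakage is easy to demonstrate with a centered moving average as a stand-in for SSA reconstruction or DWT decomposition (both likewise mix 'future' samples into each output value): smoothing the whole series first makes an unpredictable white noise series appear predictable.

```python
import numpy as np

rng = np.random.default_rng(12)
x = rng.normal(size=5000)                 # unpredictable series: true skill is zero

# Incorrect usage: smooth the WHOLE series first, then "predict" x[t+1].
# The centered window sees x[t+1] and x[t+2], leaking the future into the input.
smooth = np.convolve(x, np.ones(5) / 5, mode='same')
r_leaky = np.corrcoef(smooth[:-1], x[1:])[0, 1]

# Correct usage: a causal filter whose window at t covers only x[t-4..t]
causal = np.convolve(x, np.ones(5) / 5, mode='full')[:len(x)]
r_causal = np.corrcoef(causal[:-1], x[1:])[0, 1]

print(r_leaky, r_causal)   # spurious ~0.45 correlation vs ~0 for the causal version
```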
Single flux quantum voltage amplifiers
NASA Astrophysics Data System (ADS)
Golomidov, Vladimir; Kaplunenko, Vsevolod; Khabipov, Marat; Koshelets, Valery; Kaplunenko, Olga
The novel elements of the Rapid Single Flux Quantum (RSFQ) logic family — quasi-digital voltage parallel and series amplifiers (QDVA) — have been computer simulated, designed, and experimentally investigated. The parallel QDVA consists of six stages and provides multiplication of the input voltage by a factor of five. The output resistance of the QDVA is five times larger than the input resistance, so this amplifier seems to be a good matching stage between RSFQ logic and conventional semiconductor electronics. The series QDVA provides a gain factor of four and involves two doublers connected by a transmission line. The proposed parallel QDVA can be integrated on the same chip with a SQUID sensor.
A Report on Applying EEGnet to Discriminate Human State Effects on Task Performance
2018-01-01
whether we could identify what task the participant was performing from differences in the recorded brain time series. We modeled the relationship ... between input data (brain time series) and output labels (task A and task B) as an unknown function, and we found an optimal approximation of that ...
Harmonize input selection for sediment transport prediction
NASA Astrophysics Data System (ADS)
Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed
2017-09-01
In this paper, three modeling approaches, using a neural network (NN), the response surface method (RSM), and a response surface method based on the Global Harmony Search (GHS), are applied to predict the daily time series of suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are manually selected based on the maximum correlations of the input variables in the NN- and RSM-based modeling approaches. Here, the RSM is improved so that the input variables are selected using the error terms of the training data based on the GHS, giving the response surface method with global harmony search (RSM-GHS) modeling method. A second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four, and five input variables in the proposed RSM-GHS. The linear, squared, and cross terms of twenty input variables, comprising antecedent values of suspended sediment load and water discharge, are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM, and proposed RSM-GHS, in terms of both accuracy and simplicity, are compared through several comparative prediction and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the ANN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
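The response surface itself is an ordinary second-order polynomial regression with cross terms, sketched below on synthetic data with three antecedent input variables; the GHS-based input selection and calibration are not reproduced (plain least squares stands in for them).

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(13)
n = 1500
discharge = rng.gamma(3.0, 10.0, n)                        # water discharge series
sediment = 0.05 * discharge ** 1.8 + rng.normal(0, 5, n)   # synthetic sediment load

# Antecedent values of load and discharge as inputs (here: three variables)
X = np.column_stack([sediment[1:-1], sediment[:-2], discharge[1:-1]])
y = sediment[2:]

# Second-order polynomial with cross terms: the response surface of the RSM
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=True), LinearRegression())
rsm.fit(X[:-300], y[:-300])
print(rsm.score(X[-300:], y[-300:]))   # R^2 on the hold-out period
```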
New method for solving inductive electric fields in the non-uniformly conducting ionosphere
NASA Astrophysics Data System (ADS)
Vanhamäki, H.; Amm, O.; Viljanen, A.
2006-10-01
We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances, serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.
Nkiaka, E; Nawaz, N R; Lovett, J C
2016-07-01
Hydro-meteorological data are an important asset that can enhance management of water resources, but existing records often contain gaps, which lead to uncertainties and so compromise their use. Although many methods exist for infilling gaps in hydro-meteorological time series, many of them require inputs from neighbouring stations, which are often not available, while other methods are computationally demanding. Computing techniques such as artificial intelligence can be used to address this challenge. Self-organizing maps (SOMs), a type of artificial neural network, were used for infilling gaps in a hydro-meteorological time series from a Sudano-Sahel catchment. The coefficients of determination obtained were all above 0.75 for rainfall and 0.65 for river discharge, while the average topographic error was 0.008 and 0.02 for the rainfall and river discharge time series, respectively. These results indicate that SOMs are a robust and efficient method for infilling gaps in hydro-meteorological time series.
Time series modeling of human operator dynamics in manual control tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
Time reversibility of intracranial human EEG recordings in mesial temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
van der Heyden, M. J.; Diks, C.; Pijn, J. P. M.; Velis, D. N.
1996-02-01
Intracranial electroencephalograms from patients suffering from mesial temporal lobe epilepsy were tested for time reversibility. If the recorded time series is irreversible, the input of the recording system cannot be a realisation of a linear Gaussian random process. We confirmed experimentally that the measurement equipment did not introduce irreversibility in the recorded output when the input was a realisation of a linear Gaussian random process. In general, the non-seizure recordings are reversible, whereas the seizure recordings are irreversible. These results suggest that time reversibility is a useful property for the characterisation of human intracranial EEG recordings in mesial temporal lobe epilepsy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. The novel approach here is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2013-01-01
When an online runoff model is updated from system measurements, the requirements of the precipitation input change. When rain gauge data are used as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present in the system measurements, the data assimilation scheme might already have updated the model to include the impact of the particular rain cell by the time the rain data are forced upon the model, so the same rain ends up being included twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
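The comparison is easy to reproduce in miniature with a toy time-area (unit-hydrograph) model; the 6-step unit hydrograph, the shift, and the bias values below are arbitrary stand-ins for the paper's setup, not its actual configuration.

```python
import numpy as np

def time_area_runoff(rain, uh):
    """Toy time-area model: runoff is the rain series convolved with a
    causal unit hydrograph, truncated to the input length."""
    return np.convolve(rain, uh)[:len(rain)]

rng = np.random.default_rng(1)
rain = np.maximum(rng.normal(0.0, 1.0, 2000), 0.0)   # synthetic rain series
uh = np.ones(6) / 6                                   # 6-step unit hydrograph

q_true = time_area_runoff(rain, uh)
q_shifted = time_area_runoff(np.roll(rain, 5), uh)    # 5-step displacement (circular shift, for simplicity)
q_biased = time_area_runoff(1.6 * rain, uh)           # constant +60% bias

for name, q in (("5-step displacement", q_shifted), ("+60% bias", q_biased)):
    rmse = float(np.sqrt(np.mean((q - q_true) ** 2)))
    print(name, "RMSE vs true runoff:", round(rmse, 3))
```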
A new complexity measure for time series analysis and classification
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Balasubramanian, Karthi; Dey, Sutirth
2013-07-01
Complexity measures are used in a number of applications, including extraction of information from data such as ecological time series, detection of non-random structure in biomedical signals, testing of random number generators, language recognition and authorship attribution. Different complexity measures proposed in the literature, like Shannon entropy, relative entropy, Lempel-Ziv, Kolmogorov and algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure, ETC, defined as the "Effort To Compress" the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence into a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to have better correlation with the Lyapunov exponent than Shannon entropy, even for relatively short and noisy time series. The measure also has a greater rate of success in automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
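A minimal sketch of the NSRPS iteration described above, assuming an integer-coded sequence. Ties among equally frequent pairs are broken arbitrarily here, and the pair count is a simple adjacent-pair tally; the authors' exact counting convention may differ.

```python
from collections import Counter

def etc(seq):
    """'Effort To Compress': number of NSRPS iterations needed to reduce
    seq to a constant sequence. The most frequent adjacent pair is
    replaced left-to-right, non-overlapping, by a fresh symbol."""
    s = list(seq)
    steps = 0
    while len(s) > 1 and len(set(s)) > 1:
        pair = Counter(zip(s, s[1:])).most_common(1)[0][0]
        new = max(s) + 1              # fresh symbol (integer coding assumed)
        out, i = [], 0
        while i < len(s):
            if i < len(s) - 1 and (s[i], s[i + 1]) == pair:
                out.append(new)
                i += 2                # non-overlapping replacement
            else:
                out.append(s[i])
                i += 1
        s, steps = out, steps + 1
    return steps

print(etc([0, 1, 0, 1, 0, 1, 0, 1]))  # regular sequence: low effort
print(etc([0, 1, 1, 0, 0, 0, 1, 0]))  # irregular sequence: higher effort
```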
GrammarViz 3.0: Interactive Discovery of Variable-Length Time Series Patterns
Senin, Pavel; Lin, Jessica; Wang, Xing; ...
2018-02-23
The problems of recurrent and anomalous pattern discovery in time series, e.g., motifs and discords, respectively, have received a lot of attention from researchers in the past decade. However, since the pattern search space is usually intractable, most existing detection algorithms require that the patterns have discriminative characteristics and that their length be known in advance and provided as input, which is an unreasonable requirement for many real-world problems. In addition, patterns of similar structure but of different lengths may co-exist in a time series. In order to address these issues, we have developed algorithms for variable-length time series pattern discovery that are based on symbolic discretization and grammar inference—two techniques whose combination enables the structured reduction of the search space and discovery of the candidate patterns in linear time. In this work, we present GrammarViz 3.0—a software package that provides implementations of the proposed algorithms and a graphical user interface for interactive variable-length time series pattern discovery. The current version of the software provides an alternative grammar inference algorithm that improves the time series motif discovery workflow, and introduces an experimental procedure for automated discretization parameter selection that builds upon the minimum cardinality maximum cover principle and aids time series recurrent and anomalous pattern discovery.
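As a rough sketch of the symbolic-discretization half of this pipeline, the following implements a SAX-style transform (z-normalize, Piecewise Aggregate Approximation, equiprobable Gaussian breakpoints); the grammar-inference half (e.g., Sequitur-style rule induction) is not shown, and the segment/alphabet sizes are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def sax(series, paa_segments, alphabet_size):
    """SAX-style discretization: z-normalize, reduce with PAA, then map
    segment means to letters via equiprobable Gaussian breakpoints."""
    x = (series - series.mean()) / series.std()
    x = x[: len(x) // paa_segments * paa_segments]     # trim to a multiple
    paa = x.reshape(paa_segments, -1).mean(axis=1)     # segment means
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.searchsorted(breakpoints, paa)
    return "".join(chr(ord("a") + s) for s in symbols)

t = np.linspace(0, 4 * np.pi, 240)
print(sax(np.sin(t), paa_segments=8, alphabet_size=4))  # e.g. 'cddcabba'-like word
```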
Bendel, David; Beck, Ferdinand; Dittmer, Ulrich
2013-01-01
In the presented study, climate change impacts on combined sewer overflows (CSOs) in Baden-Wuerttemberg, Southern Germany, were assessed based on continuous long-term rainfall-runoff simulations. Synthetic rainfall time series were used as input data. The applied precipitation generator NiedSim-Klima accounts for climate change effects on precipitation patterns. Time series for the past (1961-1990) and future (2041-2050) were generated for various locations. Comparing the simulated CSO activity of the two periods, we observe significantly higher overflow frequencies for the future. Changes in overflow volume and overflow duration depend on the type of overflow structure. Both values will increase at simple CSO structures that merely divide the flow, whereas they will decrease when the CSO structure is combined with a storage tank. However, there is a wide variation between the results of the different precipitation time series (representative of different locations).
Detection of "noisy" chaos in a time series
NASA Technical Reports Server (NTRS)
Chon, K. H.; Kanters, J. K.; Cohen, R. J.; Holstein-Rathlou, N. H.
1997-01-01
Time series from biological systems often display fluctuations in the measured variables. Much effort has been directed at determining whether this variability reflects deterministic chaos, or whether it is merely "noise". The output from most biological systems is probably the result of both the internal dynamics of the system and the input to the system from the surroundings. This implies that the system should be viewed as a mixed system with both stochastic and deterministic components. We present a method that appears to be useful in deciding whether determinism is present in a time series, and whether this determinism has chaotic attributes. The method relies on fitting a nonlinear autoregressive model to the time series, followed by an estimation of the characteristic exponents of the model over the observed probability distribution of states for the system. The method is tested by computer simulations, and applied to heart rate variability data.
NASA Astrophysics Data System (ADS)
Francile, C.; Luoni, M. L.
We present a prediction of the time series of the Wolf sunspot number R using time-lagged feed-forward neural networks. We use two types of networks, focused and distributed, trained with the error back-propagation algorithm and the temporal back-propagation algorithm, respectively. As inputs to the neural networks we use the time series of the number R averaged annually and monthly with the IR5 method. As data sets for training and testing we choose certain intervals of the time series similar to those used in other works, in order to compare the results. Finally, we discuss the topology of the networks used, the number of delays, the number of neurons per layer, the number of hidden layers, and the results in the prediction of the series between one and six steps ahead. FULL TEXT IN SPANISH
NASA Astrophysics Data System (ADS)
Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.
2018-04-01
In this study, we proposed a method to map urban encroachment onto farmland using satellite image time series (SITS) based on the hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels, i.e., the land cover level, the vegetation phenology level, and the SITS level. A three-level HHMM is then constructed to model the multi-level semantic structure of the farmland change process. Once the HHMM is established, a change from farmland to built-up land can be detected by inferring the underlying state sequence that is most likely to generate the input time series. The performance of the method is evaluated on MODIS time series in Beijing. Results on both simulated and real datasets demonstrate that our method improves the change detection accuracy compared with the HMM-based method.
Estimating water temperatures in small streams in western Oregon using neural network models
Risley, John C.; Roehl, Edwin A.; Conrads, Paul
2003-01-01
Artificial neural network models were developed to estimate water temperatures in small streams using data collected at 148 sites throughout western Oregon from June to September 1999. The sites were located on 1st-, 2nd-, or 3rd-order streams having undisturbed or minimally disturbed conditions. Data collected at each site for model development included continuous hourly water temperature and a description of riparian habitat. Additional data pertaining to the landscape characteristics of the basins upstream of the sites were assembled using geographic information system (GIS) techniques. Hourly meteorological time series data collected at 25 locations within the study region also were assembled. Clustering analysis was used to partition 142 sites into 3 groups, and separate models were developed for each group. The riparian habitat, basin characteristic, and meteorological time series data were the independent variables of the models, and the water temperature time series were the dependent variables. Approximately one-third of the data vectors were used for model training, and the remaining two-thirds were used for model testing. Critical input variables included riparian shade, site elevation, and percentage of forested area of the basin. The coefficient of determination and root mean square error for the models ranged from 0.88 to 0.99 and 0.05 to 0.59 °C, respectively. The models also were tested and validated using temperature time series, habitat, and basin landscape data from 6 sites that were separate from the 142 sites used to develop the models. The models are capable of estimating water temperatures at locations along 1st-, 2nd-, and 3rd-order streams in western Oregon. The model user must assemble riparian habitat and basin landscape characteristics data for a site of interest; these data, in addition to meteorological data, are the model inputs. Output from the models includes simulated hourly water temperatures for the June to September period. Adjustments can be made to the shade input data to simulate the effects of minimum or maximum shade on water temperatures.
Solutions for transients in arbitrarily branching cables: III. Voltage clamp problems.
Major, G
1993-07-01
Branched cable voltage recording and voltage clamp analytical solutions derived in two previous papers are used to explore practical issues concerning voltage clamp. Single exponentials can be fitted reasonably well to the decay phase of clamped synaptic currents, although they contain many underlying components. The effective time constant depends on the fit interval. The smoothing effects on synaptic clamp currents of dendritic cables and series resistance are explored with a single cylinder + soma model, for inputs with different time courses. "Soma" and "cable" charging currents cannot be separated easily when the soma is much smaller than the dendrites. Subtractive soma capacitance compensation and series resistance compensation are discussed. In a hippocampal CA1 pyramidal neurone model, voltage control at most dendritic sites is extremely poor. Parameter dependencies are illustrated. The effects of series resistance compound those of dendritic cables and depend on the "effective capacitance" of the cell. Plausible combinations of parameters can cause order-of-magnitude distortions to clamp current waveform measures of simulated Schaeffer collateral inputs. These voltage clamp problems are unlikely to be solved by the use of switch clamp methods.
Mapping the structure of the world economy.
Lenzen, Manfred; Kanemoto, Keiichiro; Moran, Daniel; Geschke, Arne
2012-08-07
We have developed a new series of environmentally extended multi-region input-output (MRIO) tables with applications in carbon, water, and ecological footprinting, and Life-Cycle Assessment, as well as trend and key driver analyses. Such applications have recently been at the forefront of global policy debates, such as those about assigning responsibility for emissions embodied in internationally traded products. The new time series was constructed using advanced parallelized supercomputing resources, and significantly advances the previous state of the art because of four innovations. First, it is available as a continuous 20-year time series of MRIO tables. Second, it distinguishes 187 individual countries comprising more than 15,000 industry sectors, and hence offers unsurpassed detail. Third, it provides information only 1-3 years delayed, therefore significantly improving timeliness. Fourth, it presents MRIO elements with accompanying standard deviations in order to allow users to understand the reliability of the data. These advances will lead to material improvements in the capability of applications that rely on input-output tables. The timeliness of information means that analyses are more relevant to current policy questions. The continuity of the time series enables the robust identification of key trends and drivers of global environmental change. The high country and sector detail drastically improves the resolution of Life-Cycle Assessments. Finally, the availability of information on uncertainty allows policy-makers to quantitatively judge the level of confidence that can be placed in the results of analyses.
Simple Example of Backtest Overfitting (SEBO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e., on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. The tool then tests the resulting "optimal" strategy on a second random walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
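The mechanism is easy to reproduce offline. The following is a toy stand-in for the SEBO tool, not the tool itself: a moving-average crossover rule (a hypothetical choice; the abstract only says "a simple strategy" with integer parameters) is exhaustively optimized on one random walk and then evaluated on a second.

```python
import numpy as np

rng = np.random.default_rng(42)

def crossover_pnl(prices, fast, slow):
    """P&L of a moving-average crossover rule on a price series:
    long one unit when fast MA > slow MA, flat otherwise."""
    def ma(x, w):
        return np.convolve(x, np.ones(w) / w, mode="valid")
    n = len(prices) - max(fast, slow)          # align both MAs to the end
    f, s = ma(prices, fast)[-n:], ma(prices, slow)[-n:]
    pos = (f > s).astype(float)[:-1]           # position held into next step
    return float(np.sum(pos * np.diff(prices[-n:])))

train = np.cumsum(rng.normal(0, 1, 2000))      # in-sample random walk
test = np.cumsum(rng.normal(0, 1, 2000))       # out-of-sample random walk

grid = [(f, s) for f in range(2, 20) for s in range(f + 1, 60)]
best = max(grid, key=lambda p: crossover_pnl(train, *p))
print("best in-sample params :", best)
print("in-sample P&L         :", crossover_pnl(train, *best))
print("out-of-sample P&L     :", crossover_pnl(test, *best))   # typically poor
```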
GEsture: an online hand-drawing tool for gene expression pattern search.
Wang, Chunyan; Xu, Yiqing; Wang, Xuelin; Zhang, Li; Wei, Suyun; Ye, Qiaolin; Zhu, Youxiang; Yin, Hengfu; Nainwal, Manoj; Tanon-Reyes, Luis; Cheng, Feng; Yin, Tongming; Ye, Ning
2018-01-01
Gene expression profiling data provide useful information for the investigation of biological function and process. However, identifying a specific expression pattern from extensive time series gene expression data is not an easy task. Clustering, a popular method, is often used to group genes with similar expression; however, genes with a 'desirable' or 'user-defined' pattern cannot be efficiently detected by clustering methods. To address these limitations, we developed an online tool called GEsture. Users can draw, or graph, a curve using a mouse instead of inputting abstract parameters of clustering methods. GEsture explores genes showing similar, opposite and time-delayed expression patterns in time series datasets, taking a gene expression curve as input. We present three examples that illustrate the capacity of GEsture for gene hunting while following users' requirements. GEsture also provides visualization tools (such as expression pattern figures, heat maps and correlation networks) to display the search results. The outputs may provide useful information for researchers to understand the targets, function and biological processes of the genes involved.
Linear and quadratic models of point process systems: contributions of patterned input to output.
Lindsay, K A; Rosenberg, J R
2012-08-01
In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike.
Interactive digital signal processor
NASA Technical Reports Server (NTRS)
Mish, W. H.; Wenger, R. M.; Behannon, K. W.; Byrnes, J. B.
1982-01-01
The Interactive Digital Signal Processor (IDSP) is examined. It consists of a set of time series analysis operators, each of which operates on an input file to produce an output file. The operators can be executed in any order that makes sense, and recursively if desired. The operators are the various algorithms used in digital time series analysis work. User-written operators can be easily interfaced to the system. The system can be operated both interactively and in batch mode. In IDSP, a file can consist of up to n (currently n = 8) simultaneous time series. IDSP currently includes over thirty standard operators that range from Fourier transform operations, design and application of digital filters, and eigenvalue analysis, to operators that provide graphical output, allow batch operation, and support editing and display of information.
Programmable Logic Application Notes
NASA Technical Reports Server (NTRS)
Katz, Richard
2000-01-01
This column will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter continues a series of notes concentrating on analysis techniques, with this issue's section discussing digital timing analysis tools and techniques. Articles in this issue include: SX and SX-A Series Devices Power Sequencing; JTAG and SX/SX-A/SX-S Series Devices; Analysis Techniques (i.e., notes on digital timing analysis tools and techniques); Status of the Radiation Hard Reconfigurable Field Programmable Gate Array Program; Input Transition Times; Apollo Guidance Computer Logic Study; RT54SX32S Prototype Data Sets; A54SX32A - 0.22 micron/UMC Test Results; Ramtron FM1608 FRAM; and Analysis of VHDL Code and Synthesizer Output.
Low-dimensional chaos in magnetospheric activity from AE time series
NASA Technical Reports Server (NTRS)
Vassiliadis, D. V.; Sharma, A. S.; Eastman, T. E.; Papadopoulos, K.
1990-01-01
The magnetospheric response to the solar-wind input, as represented by the time-series measurements of the auroral electrojet (AE) index, has been examined using phase-space reconstruction techniques. The system was found to behave as a low-dimensional chaotic system with a fractal dimension of 3.6 and a Kolmogorov entropy of less than 0.2/min. These results indicate that the dynamics of the system can be adequately described by four independent variables, and that the corresponding intrinsic time scale is of the order of 5 min. The relevance of the results to magnetospheric modeling is discussed.
NASA Astrophysics Data System (ADS)
Silvestro, Francesco; Parodi, Antonio; Campo, Lorenzo
2017-04-01
The characterization of hydrometeorological extremes, in terms of both rainfall and streamflow, in a given region plays a key role in the environmental monitoring provided by flood alert services. In recent years, meteorological simulations (both near-real-time and historical reanalyses) have become available at increasing spatial and temporal resolutions, making possible long-period hydrological reanalyses in which the meteorological dataset is used as input to distributed hydrological models. In this work, a very high resolution meteorological reanalysis dataset, namely Express-Hydro (CIMA, ISAC-CNR, GAUSS Special Project PR45DE), was employed as input to the hydrological model Continuum in order to produce long time series of streamflows for the Liguria territory, located in the northern part of Italy. The original dataset covers the whole of Europe over the 1979-2008 period, at 4 km spatial resolution and 3 hour time resolution. Comparisons between the rainfall estimated by the dataset and the observations (available from the local raingauge network) were carried out, and a bias correction was also performed in order to better match the observed climatology. An extreme-value analysis was eventually carried out on the streamflow time series obtained from the simulations, comparing them with the results of the same hydrological model fed with the observed rainfall time series. The results of the analysis are shown and discussed.
Detection of chaotic determinism in time series from randomly forced maps
NASA Technical Reports Server (NTRS)
Chon, K. H.; Kanters, J. K.; Cohen, R. J.; Holstein-Rathlou, N. H.
1997-01-01
Time series from biological systems often display fluctuations in the measured variables. Much effort has been directed at determining whether this variability reflects deterministic chaos, or whether it is merely "noise". Despite this effort, it has been difficult to establish the presence of chaos in time series from biological systems. The output from a biological system is probably the result of both its internal dynamics and the input to the system from the surroundings. This implies that the system should be viewed as a mixed system with both stochastic and deterministic components. We present a method that appears to be useful in deciding whether determinism is present in a time series, and whether this determinism has chaotic attributes, i.e., a positive characteristic exponent that leads to sensitivity to initial conditions. The method relies on fitting a nonlinear autoregressive model to the time series, followed by an estimation of the characteristic exponents of the model over the observed probability distribution of states for the system. The method is tested by computer simulations, and applied to heart rate variability data.
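A rough sketch of this general recipe, not the authors' exact estimator: fit a polynomial autoregressive model by least squares, then average the log-stretching of a tangent vector under the fitted map's Jacobian along the observed states. The quadratic model, the lag order, and the logistic-map test signal are illustrative choices.

```python
import numpy as np

def fit_quadratic_nar(x, d=1):
    """Least-squares fit of x_t = F(x_{t-1},...,x_{t-d}) with F a full
    quadratic polynomial (a simple nonlinear autoregressive model)."""
    rows = []
    for t in range(d, len(x)):
        z = x[t - d:t][::-1]
        rows.append([1.0, *z, *[z[i] * z[j] for i in range(d) for j in range(i, d)]])
    c, *_ = np.linalg.lstsq(np.array(rows), x[d:], rcond=None)
    return c

def largest_exponent(x, c, d=1):
    """Average log-stretch of a tangent vector under the fitted map,
    evaluated along observed states (companion-matrix Jacobian)."""
    v, acc = np.ones(d) / np.sqrt(d), 0.0
    for t in range(d, len(x)):
        z = x[t - d:t][::-1]
        g, k = c[1:1 + d].copy(), 1 + d          # gradient of F wrt each lag
        for i in range(d):
            for j in range(i, d):
                g[i] += c[k] * z[j]
                g[j] += c[k] * z[i]              # i == j correctly doubles: d(z^2) = 2z
                k += 1
        J = np.zeros((d, d))
        J[0] = g
        J[np.arange(1, d), np.arange(0, d - 1)] = 1.0
        v = J @ v
        nrm = np.linalg.norm(v)
        acc += np.log(nrm)
        v /= nrm
    return acc / (len(x) - d)

# logistic map: a known chaotic series with true exponent ln 2 ~ 0.69
x = np.empty(3000); x[0] = 0.3
for t in range(1, len(x)):
    x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])
c = fit_quadratic_nar(x)
print("estimated characteristic exponent:", largest_exponent(x, c))
```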
Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series
NASA Astrophysics Data System (ADS)
Champion, Nicolas
2016-06-01
Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages are used. They were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted; for that purpose, for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it. The pixels of the input ortho-image are then labelled as seeds if the difference in reflectance (in the blue channel) with the overlapping ortho-images is bigger than a given threshold. Clouds are eventually delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding shadow detection, our method is based on the idea that a shadow pixel is darker when compared to the other images of the time series. The detection is basically composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area; its pixels have a value corresponding to the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled as shadow if the difference in reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Eventually, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during the cloud detection are not used for computing the median value in the first step; additionally, the NIR input data channel is used to perform the shadow detection because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
Two cloud-based cues for estimating scene structure and camera calibration.
Jacobs, Nathan; Abrams, Austin; Pless, Robert
2013-10-01
We describe algorithms that use cloud shadows as a form of stochastically structured light to support 3D scene geometry estimation. Taking video captured from a static outdoor camera as input, we use the relationship of the time series of intensity values between pairs of pixels as the primary input to our algorithms. We describe two cues that relate the 3D distance between a pair of points to the pair of intensity time series. The first cue results from the fact that two pixels that are nearby in the world are more likely to be under a cloud at the same time than two distant points. We describe methods for using this cue to estimate focal length and scene structure. The second cue is based on the motion of cloud shadows across the scene; this cue results in a set of linear constraints on scene structure. These constraints have an inherent ambiguity, which we show how to overcome by combining the cloud motion cue with the spatial cue. We evaluate our method on several time lapses of real outdoor scenes.
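The first cue can be illustrated on synthetic data: pixels that are close together share more of the moving cloud-shadow signal, so their intensity time series correlate more strongly. The 1-D "sky" model, thresholds, and noise level below are invented for the demo and are not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(2)
T, npix = 5000, 40
base = rng.normal(size=T + npix)
# pixel i sees the same moving cloud pattern, delayed by its position
sky = np.array([np.convolve(base[i:i + T], np.ones(25) / 25, mode="same")
                for i in range(npix)])
# intensity: darker under shadow (sky above a threshold), plus sensor noise
pix = 1.0 - 0.5 * (sky > 0.1) + 0.05 * rng.normal(size=(npix, T))

corr = np.corrcoef(pix)
for d in (1, 5, 20):
    print(f"mean correlation at pixel distance {d}:",
          round(float(np.mean(np.diag(corr, k=d))), 3))   # decays with distance
```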
Model for the heart beat-to-beat time series during meditation
NASA Astrophysics Data System (ADS)
Capurro, A.; Diambra, L.; Malta, C. P.
2003-09-01
We present a model for the respiratory modulation of the heart beat-to-beat interval series. The model consists of a pacemaker, that simulates the membrane potential of the sinoatrial node, modulated by a periodic input signal plus correlated noise that simulates the respiratory input. The model was used to assess the waveshape of the respiratory signals needed to reproduce in the phase space the trajectory of experimental heart beat-to-beat interval data. The data sets were recorded during meditation practices of the Chi and Kundalini Yoga techniques. Our study indicates that in the first case the respiratory signal has the shape of a smoothed square wave, and in the second case it has the shape of a smoothed triangular wave.
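A minimal sketch of this model class, with invented parameters: an integrate-and-fire "sinoatrial" potential whose rise rate is modulated by a respiratory waveform plus correlated (AR(1)) noise. The two waveforms mirror the smoothed square and triangular shapes described; all constants are placeholders.

```python
import numpy as np

def beat_intervals(resp_wave, steps=120000, dt=0.001, base=1.0, gain=0.3):
    """Integrate-and-fire pacemaker sketch: the potential rises at a rate
    modulated by a respiratory signal plus AR(1) noise; a beat fires at
    threshold 1 and the potential resets. Returns beat-to-beat intervals."""
    rng = np.random.default_rng(0)
    v, t_last, noise, rr = 0.0, 0.0, 0.0, []
    for k in range(steps):
        t = k * dt
        noise = 0.995 * noise + 0.05 * rng.normal()     # correlated noise
        v += dt * (base + gain * resp_wave(t) + 0.1 * noise)
        if v >= 1.0:
            rr.append(t - t_last)
            t_last, v = t, 0.0
    return np.array(rr)

freq = 0.25  # respiration rate, Hz (assumed)
smoothed_square = lambda t: np.tanh(5 * np.sin(2 * np.pi * freq * t))
triangular = lambda t: (2 / np.pi) * np.arcsin(np.sin(2 * np.pi * freq * t))

rr_chi = beat_intervals(smoothed_square)     # Chi-like respiratory drive
rr_kundalini = beat_intervals(triangular)    # Kundalini-like drive
print(rr_chi.mean(), rr_chi.std(), rr_kundalini.mean(), rr_kundalini.std())
```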
NASA Astrophysics Data System (ADS)
DeWalle, David R.; Boyer, Elizabeth W.; Buda, Anthony R.
2016-12-01
Forecasts of ecosystem changes due to variations in atmospheric emissions policies require a fundamental understanding of lag times between changes in chemical inputs and watershed response. Impacts of changes in atmospheric deposition in the United States have been documented using national and regional long-term environmental monitoring programs beginning several decades ago. Consequently, time series of weekly NADP atmospheric wet deposition and monthly EPA-Long Term Monitoring stream chemistry now exist for much of the Northeast which may provide insights into lag times. In this study of Appalachian forest basins, we estimated lag times for S, N and Cl by cross-correlating monthly data from four pairs of stream and deposition monitoring sites during the period from 1978 to 2012. A systems or impulse response function approach to cross-correlation was used to estimate lag times where the input deposition time series was pre-whitened using regression modeling and the stream response time series was filtered using the deposition regression model prior to cross-correlation. Cross-correlations for S were greatest at annual intervals over a relatively well-defined range of lags with the maximum correlations occurring at mean lags of 48 months. Chloride results were similar but more erratic with a mean lag of 57 months. Few high-correlation lags for N were indicated. Given the growing availability of atmospheric deposition and surface water chemistry monitoring data and our results for four Appalachian basins, further testing of cross-correlation as a method of estimating lag times on other basins appears justified.
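The pre-whitening recipe can be sketched compactly. Here an ordinary least-squares AR fit stands in for the authors' regression modeling; the AR order, maximum lag, and toy data are assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def prewhitened_lag(dep, stream, p=12, max_lag=120):
    """Impulse-response-style lag estimation: fit an AR(p) model to the
    monthly deposition series, filter BOTH series with it (pre-whitening),
    cross-correlate the residuals, and return the lag of max correlation."""
    x, y = dep - dep.mean(), stream - stream.mean()
    A = np.column_stack([x[p - k: len(x) - k] for k in range(1, p + 1)])
    phi, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    b = np.r_[1.0, -phi]                                  # whitening filter
    ex, ey = lfilter(b, [1.0], x)[p:], lfilter(b, [1.0], y)[p:]
    corrs = [np.corrcoef(ex[: len(ex) - L], ey[L:])[0, 1] for L in range(max_lag)]
    return int(np.argmax(corrs)), corrs

# toy check: stream chemistry responds to deposition with a 48-month delay
rng = np.random.default_rng(3)
dep = lfilter([1.0], [1.0, -0.7], rng.normal(size=600))  # autocorrelated input
stream = np.r_[np.zeros(48), dep[:-48]] + 0.5 * rng.normal(size=600)
lag, _ = prewhitened_lag(dep, stream)
print("estimated lag (months):", lag)                    # ~48
```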
Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion
NASA Astrophysics Data System (ADS)
Li, Z.; Ghaith, M.
2017-12-01
Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
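A minimal sketch of the collocation idea with a one-dimensional germ, using NumPy's probabilists' Hermite (HermiteE) basis. The toy model and the expansion order are placeholders for an actual hydrological model; moments then follow from orthogonality of the basis under the standard normal.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def fit_pce(model, order=4, n_colloc=200, seed=0):
    """Probabilistic collocation: sample a standard-normal germ, run the
    (possibly expensive) model there, and least-squares fit the HermiteE
    coefficients of the expansion."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=n_colloc)
    V = He.hermevander(xi, order)        # columns He_0(xi) ... He_order(xi)
    c, *_ = np.linalg.lstsq(V, model(xi), rcond=None)
    return c

# stand-in 'hydrological model': nonlinear response to one uncertain parameter
model = lambda xi: np.exp(0.4 * xi) + 0.1 * xi ** 2

c = fit_pce(model)
mean = c[0]                              # E[He_n] = 0 for n >= 1
var = sum(c[n] ** 2 * factorial(n) for n in range(1, len(c)))  # E[He_n^2] = n!
print("PCE surrogate mean/variance:", mean, var)
```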
NASA Astrophysics Data System (ADS)
Nourani, Vahid; Andalib, Gholamreza; Dąbrowska, Dominika
2017-05-01
Accurate nitrate load predictions can improve decision-making in watershed water quality management, which affects the environment and drinking water. In this paper, two scenarios were considered for Multi-Station (MS) nitrate load modeling of the Little River watershed. In the first scenario, Markovian characteristics of the streamflow-nitrate time series were proposed for the MS modeling. For this purpose, the feature extraction criterion of Mutual Information (MI) was employed for input selection of the artificial intelligence models (Feed Forward Neural Network, FFNN, and least square support vector machine). In the second scenario, to consider seasonality-based characteristics of the time series, the wavelet transform was used to extract multi-scale features of the streamflow-nitrate time series of the watershed's sub-basins to model MS nitrate loads. The Self-Organizing Map (SOM) clustering technique, which finds homogeneous sub-series clusters, was also linked to MI for the proper choice of cluster agents to be imposed on the models for predicting the nitrate loads of the watershed's sub-basins. The proposed MS method not only provides prediction of the outlet nitrate but also covers predictions of the interior sub-basins' nitrate load values. The results indicated that the proposed FFNN model coupled with SOM-MI improved the performance of MS nitrate predictions compared to the Markovian-based models by up to 39%. Overall, accurate selection of dominant inputs that considers the seasonality-based characteristics of the streamflow-nitrate process could enhance the efficiency of nitrate load predictions.
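A rough sketch of MI-based input selection, using a simple histogram estimator of mutual information to rank candidate streamflow lags. The toy data and the bin count are assumptions; the authors' MI estimator and candidate set may differ.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of mutual information (in nats) between two series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(4)
flow = np.sin(np.arange(2000) * 2 * np.pi / 365) + 0.3 * rng.normal(size=2000)
nitrate = 0.8 * np.roll(flow, 2) + 0.2 * rng.normal(size=2000)  # lag-2 driver

# rank candidate streamflow lags as inputs for nitrate prediction
for lag in range(5):
    mi = mutual_info(np.roll(flow, lag)[10:], nitrate[10:])
    print("lag", lag, "MI:", round(mi, 3))   # peaks at the true lag of 2
```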
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is the determination of which sensory inputs a human uses in controlling his tracking task. In the approach presented here, a simple canonical model (PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters which have the greatest effect on significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
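The model-order test can be illustrated with a toy regression: score every subset of the P, I, and D channels by its residual loss and see which inputs are needed to match the full model. The synthetic data are invented, and the paper's statistical validity tests are not reproduced.

```python
import numpy as np
from itertools import combinations

def rss(X, u):
    """Residual sum of squares of the least-squares fit u ~ X b."""
    b, *_ = np.linalg.lstsq(X, u, rcond=None)
    r = u - X @ b
    return float(r @ r)

rng = np.random.default_rng(7)
dt = 0.05
e = np.cumsum(rng.normal(size=1000)) * dt            # tracking error signal
cand = {"P": e,
        "I": np.cumsum(e) * dt,                      # integral of error
        "D": np.gradient(e, dt)}                     # derivative of error
# synthetic operator output: uses error and its derivative, not the integral
u = 2.0 * cand["P"] + 0.5 * cand["D"] + 0.1 * rng.normal(size=1000)

full = rss(np.column_stack(list(cand.values())), u)
for r in (1, 2, 3):
    for names in combinations(cand, r):
        sub = rss(np.column_stack([cand[n] for n in names]), u)
        print(names, "loss vs full model:", round(sub / full, 2))
# subsets containing P and D approach 1.0: those sensory inputs suffice
```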
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominant strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-size prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.
Mapping wildfire and clearcut harvest disturbances in boreal forests with Landsat time series data
Todd Schroeder; Michael A. Wulder; Sean P. Healey; Gretchen G. Moisen
2011-01-01
Information regarding the extent, timing and magnitude of forest disturbance are key inputs required for accurate estimation of the terrestrial carbon balance. Equally important for studying carbon dynamics is the ability to distinguish the cause or type of forest disturbance occurring on the landscape. Wildfire and timber harvesting are common disturbances occurring in...
NASA Technical Reports Server (NTRS)
Savani, N. P.; Vourlidas, A.; Pulkkinen, A.; Nieves-Chinchilla, T.; Lavraud, B.; Owens, M. J.
2013-01-01
We investigate a coronal mass ejection (CME) propagating toward Earth on 29 March 2011. This event is specifically chosen for its predominately northward directed magnetic field, so that the influence of the momentum flux onto Earth can be isolated. We focus our study on understanding how a small Earth-directed segment propagates. Mass images are created from the white-light cameras onboard STEREO, which are also converted into mass height-time maps (mass J-maps). The mass tracks on these J-maps correspond to the sheath region between the CME and its associated shock front as detected by in situ measurements at L1. A time series of mass measurements from the STEREO COR-2A instrument is made along the Earth propagation direction. Qualitatively, this mass time series shows a remarkable resemblance to the L1 in situ density series. The in situ measurements are used as inputs to a three-dimensional (3-D) magnetospheric space weather simulation from the Community Coordinated Modeling Center. These simulations display a sudden compression of the magnetosphere from the large momentum flux at the leading edge of the CME, and predictions are made for the time derivative of the magnetic field (dB/dt) on the ground. The predicted dB/dt values were then compared with observations from specific equatorially located ground stations and showed notable similarity. This study of the momentum of a CME from the Sun down to its influence on magnetic ground stations on Earth is presented as a preliminary proof of concept, such that future attempts may try to use remote sensing to create density and velocity time series as inputs to magnetospheric simulations.
MULTIPLE INPUT BINARY ADDER EMPLOYING MAGNETIC DRUM DIGITAL COMPUTING APPARATUS
Cooke-Yarborough, E.H.
1960-12-01
A digital computing apparatus is described for adding a plurality of multi-digit binary numbers. The apparatus comprises a rotating magnetic drum, a recording head, first and second reading heads disposed adjacent to the first and second recording tracks, and a series of timing signals recorded on the first track. A series of N groups of digit-representing signals is delivered to the recording head at time intervals corresponding to the timing signals, each group consisting of digits of the same significance in the numbers, and the signal series is recorded on the second track of the drum in synchronism with the timing signals on the first track. The multistage registers are stepped cyclically through all positions, and each of the multistage registers is coupled to the control lead of a separate gate circuit to open the corresponding gate at only one selected position in each cycle. One of the gates has its input coupled to the bistable element to receive the sum digit, and the output lead of this gate is coupled to the recording device. The inputs of the other gates receive the digits to be added from the second reading head, and the outputs of these gates are coupled to the adding register. A phase-setting pulse source is connected to each of the multistage registers individually to step the multistage registers to different initial positions in the cycle, and the phase-setting pulse source is actuated every N time intervals to shift a sum digit to the bistable element; the multistage register coupled to the bistable element is operated by the phase-setting pulse source to that position in its cycle N steps before opening the first gate, so that this gate opens in synchronism with each of the shifts to pass the sum digits to the recording head.
Parsimonious Hydrologic and Nitrate Response Models For Silver Springs, Florida
NASA Astrophysics Data System (ADS)
Klammler, Harald; Yaquian-Luna, Jose Antonio; Jawitz, James W.; Annable, Michael D.; Hatfield, Kirk
2014-05-01
Silver Springs with an approximate discharge of 25 m3/sec is one of Florida's first magnitude springs and among the largest springs worldwide. Its 2500-km2 springshed overlies the mostly unconfined Upper Floridan Aquifer. The aquifer is approximately 100 m thick and predominantly consists of porous, fractured and cavernous limestone, which leads to excellent surface drainage properties (no major stream network other than the Silver Springs run) and complex groundwater flow patterns through both rock matrix and fast conduits. Over the past few decades, discharge from Silver Springs has been observed to slowly but continuously decline, while nitrate concentrations in the spring water have increased enormously from a background level of 0.05 mg/l to over 1 mg/l. In combination with concurrent increases in algae growth and turbidity, for example, and despite an otherwise relatively stable water quality, this has given rise to concerns about the ecological equilibrium in and near the spring run as well as possible impacts on tourism. The purpose of the present work is to elaborate parsimonious lumped-parameter models that may be used by resource managers for evaluating the springshed's hydrologic and nitrate transport responses. Instead of attempting to explicitly consider the complex hydrogeologic features of the aquifer in a typical numerical and/or stochastic approach, we use a transfer function approach wherein input signals (i.e., time series of groundwater recharge and nitrate loading) are transformed into output signals (i.e., time series of spring discharge and spring nitrate concentrations) by some linear and time-invariant law. The dynamic response types and parameters are inferred from comparing input and output time series in the frequency domain (e.g., after Fourier transformation). Results are converted into impulse (or step) response functions, which describe at what time and to what magnitude a unitary change in input manifests at the output. For the hydrologic response model, frequency spectra of groundwater recharge and spring discharge suggest an exponential response model, which may explain a significant portion of spring discharge variability with only two fitting parameters (mean response time 2.4 years). For the transport model, direct use of nitrate data is confounded by inconsistent data and a strong trend. Instead, chloride concentrations in rainfall and at the spring are investigated as a surrogate candidate. Preliminary results indicate that the transport response function of the springshed as a whole may be of the gamma type, which possesses both a larger initial peak and a longer tail than the exponential response function. This is consistent with the large range of travel times to be expected between input directly into fast conduits connected to the spring (e.g., through sinkholes) and input or back-diffusion from the rock matrix. The result implies that reductions in nitrate input, especially at remote and hydraulically not well connected locations, will only manifest in a rather delayed and smoothed-out form in concentrations observed at the spring.
History of nutrient inputs to the northeastern United States, 1930-2000
NASA Astrophysics Data System (ADS)
Hale, Rebecca L.; Hoover, Joseph H.; Wollheim, Wilfred M.; Vörösmarty, Charles J.
2013-04-01
Humans have dramatically altered nutrient cycles at local to global scales. We examined changes in anthropogenic nutrient inputs to the northeastern United States (NE) from 1930 to 2000. We created a comprehensive time series of anthropogenic N and P inputs to 437 counties in the NE at 5 year intervals. Inputs included atmospheric N deposition, biological N2 fixation, fertilizer, detergent P, livestock feed, and human food. Exports included exports of feed and food and volatilization of ammonia. N inputs to the NE increased throughout the study period, primarily due to increases in atmospheric deposition and fertilizer. P inputs increased until 1970 and then declined due to decreased fertilizer and detergent inputs. Livestock consistently consumed the majority of nutrient inputs over time and space. The area of crop agriculture declined during the study period but consumed more nutrients as fertilizer. We found that stoichiometry (N:P) of inputs and absolute amounts of N matched nutritional needs (livestock, humans, crops) when atmospheric components (N deposition, N2 fixation) were not included. Differences between N and P led to major changes in N:P stoichiometry over time, consistent with global trends. N:P decreased from 1930 to 1970 due to increased inputs of P, and increased from 1970 to 2000 due to increased N deposition and fertilizer and decreases in P fertilizer and detergent use. We found that nutrient use is a dynamic product of social, economic, political, and environmental interactions. Therefore, future nutrient management must take into account these factors to design successful and effective nutrient reduction measures.
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1983-10-04
Reduction in the maximum time uncertainty (t_max − t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20-800.
NASA Astrophysics Data System (ADS)
Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.
2017-08-01
Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated within an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
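The dimensionality-reduction step can be sketched with PyWavelets (assumed available; the wavelet family, decomposition level, and kept-coefficient split are illustrative). Only the way the DWT shrinks the rainfall search space is shown; the joint inference of model parameters and rainfall from streamflow is not reproduced.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(5)
rain = np.maximum(rng.normal(0.0, 1.0, 1024), 0.0)   # synthetic daily rainfall

# forward DWT: a handful of coarse coefficients stand in for the series
coeffs = pywt.wavedec(rain, "db4", level=5)
n_kept = sum(len(c) for c in coeffs[:3])             # approximation + 2 coarse details
print("parameters to infer:", n_kept, "instead of", len(rain))

# zero the finest detail levels, i.e. search only in the reduced space
reduced = coeffs[:3] + [np.zeros_like(c) for c in coeffs[3:]]
rain_hat = pywt.waverec(reduced, "db4")
rmse = float(np.sqrt(np.mean((rain_hat[:len(rain)] - rain) ** 2)))
print("reconstruction RMSE:", round(rmse, 3))
```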
Combining neural networks and genetic algorithms for hydrological flow forecasting
NASA Astrophysics Data System (ADS)
Neruda, Roman; Srejber, Jan; Neruda, Martin; Pascenko, Petr
2010-05-01
We present a neural network approach to rainfall-runoff modeling for small river basins based on several time series of hourly measured data. Different neural networks are considered for short-term runoff predictions (from one to six hours lead time) based on runoff and rainfall data observed in previous time steps. Correlation analysis shows that runoff data, short-term rainfall history, and aggregated API values are the most significant data for the prediction. Neural models of multilayer perceptron and radial basis function networks with different numbers of units are used and compared with more traditional linear time series predictors. Out of a possible 48 hours of relevant history of all the input variables, the most important ones are selected by means of input filters created by a genetic algorithm, as sketched below. The genetic algorithm works with a population of binary-encoded vectors defining input selection patterns. Standard genetic operators of two-point crossover, random bit-flipping mutation, and tournament selection were used. The evaluation of the objective function of each individual consists of several rounds of building and testing a particular neural network model. The whole procedure is computationally demanding (taking hours to days on a desktop PC), so a high-performance mainframe computer was used for our experiments. Results based on two years' worth of data from the Ploucnice river in Northern Bohemia suggest that the main problems with this approach to modeling are overtraining, which can lead to poor generalization, and the relatively small number of extreme events, which makes it difficult for a model to predict the amplitude of an event. Thus, experiments with both absolute and relative runoff predictions were carried out. In general, the neural models show about 5 per cent improvement in terms of the efficiency coefficient over linear models. Multilayer perceptrons with one hidden layer, trained by the backpropagation algorithm and predicting relative runoff, show the best behavior so far. Utilizing the genetically evolved input filter improves the performance by yet another 5 per cent. In the future we would like to continue with experiments in on-line prediction using real-time data from the Smeda River with a 6-hour lead time forecast. Following operational reality, we will focus on classification of the runoffs into flood alert levels, and reformulation of the time series prediction task as a classification problem. The main goal of all this work is to improve the flood warning system operated by the Czech Hydrometeorological Institute.
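A compact sketch of the genetic input-filter search described above — binary masks over candidate input lags, tournament selection, two-point crossover, and bit-flip mutation. The fitness function here is a cheap least-squares stand-in for the several rounds of neural-network building and testing; all sizes and rates are assumptions:

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(2)
N_LAGS, POP, GENS = 48, 30, 20           # 48 h of candidate history (as in the text)

# Toy data: runoff depends on a few specific lags only
X = rng.normal(size=(500, N_LAGS))
y = X[:, 3] - 0.5 * X[:, 7] + 0.2 * X[:, 24] + 0.1 * rng.normal(size=500)

def fitness(mask):
    """Stand-in for a neural-network build/test round: out-of-sample -MSE."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    coef, *_ = lstsq(Xs[:250], y[:250], rcond=None)
    resid = y[250:] - Xs[250:] @ coef
    return -float(np.mean(resid ** 2))

pop = rng.integers(0, 2, size=(POP, N_LAGS)).astype(bool)
for _ in range(GENS):
    fits = np.array([fitness(m) for m in pop])
    new = []
    while len(new) < POP:
        # tournament selection of two parents
        a, b = (pop[max(rng.choice(POP, 3), key=lambda i: fits[i])] for _ in range(2))
        i, j = sorted(rng.choice(N_LAGS, 2, replace=False))   # two-point crossover
        child = np.concatenate([a[:i], b[i:j], a[j:]])
        flip = rng.random(N_LAGS) < 0.02                      # bit-flip mutation
        new.append(child ^ flip)
    pop = np.array(new)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected lags:", np.flatnonzero(best))
```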
Forecasting air quality time series using deep learning.
Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse
2018-04-13
This paper presents one of the first applications of deep learning (DL) techniques to predict air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis of identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-hr averaged surface ozone (O3) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train and forecast values up to 72 hours ahead with low error rates. The LSTM was able to forecast the duration of continuous O3 exceedances as well. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that averaged gaps of less than eight time steps with incremental steps based on first-order differences of neighboring time periods. Data were then used to train decision trees to evaluate input feature importance over different time prediction horizons. The number of features used to train the LSTM model was reduced from 25 to 5, resulting in improved accuracy as measured by mean absolute error (MAE). Parameter sensitivity analysis identified the look-back nodes associated with the RNN as a significant source of error when not aligned with the prediction horizon. Overall, MAEs of less than 2 were calculated for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured data set were replaced using a new imputation method that generated calculated values closer to the expected value based on the time and season. Decision trees were used to identify the input variables with the greatest importance. The methods presented in this paper allow air managers to forecast long-range air pollution concentrations while monitoring only key parameters and without transforming the data set in its entirety, thus allowing real-time inputs and continuous prediction.
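A minimal sketch of the forecasting setup described — an LSTM over a reduced feature set, with the look-back window aligned to the prediction horizon as the sensitivity analysis recommends; layer sizes, window length, and feature count are assumptions, not values from the paper:

```python
import numpy as np
import tensorflow as tf

LOOKBACK, HORIZON, N_FEATURES = 72, 72, 5    # look-back aligned with horizon

def make_windows(data, target):
    """Slice (time, features) data into supervised (window, target) pairs."""
    X, y = [], []
    for t in range(len(data) - LOOKBACK - HORIZON):
        X.append(data[t:t + LOOKBACK])
        y.append(target[t + LOOKBACK + HORIZON - 1])   # value 72 h ahead
    return np.array(X), np.array(y)

data = np.random.rand(2000, N_FEATURES)      # stand-in hourly feature matrix
X, y = make_windows(data, data[:, 0])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(LOOKBACK, N_FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")  # MAE, the paper's reported metric
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```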
NASA Technical Reports Server (NTRS)
Birchenough, Arthur G.
2003-01-01
Improvements in the efficiency and size of DC-DC converters have resulted from advances in components, primarily semiconductors, and from improved topologies. One topology which has shown very high potential in limited applications is the Series Connected Boost Unit (SCBU), wherein a small DC-DC converter output is connected in series with the input bus to provide an output voltage equal to or greater than the input voltage. Since the DC-DC converter switches only a fraction of the power throughput, the overall system efficiency is very high. But this technique is limited to applications where the output is always greater than the input. The Series Connected Buck Boost Regulator (SCBBR) concept extends the partial power processing technique used in the SCBU to operation when the desired output voltage is higher or lower than the input voltage, and the implementation described can even operate as a conventional buck converter at very low output-to-input voltage ratios. This paper describes the operation and performance of an SCBBR configured as a bus voltage regulator providing a 50 percent voltage regulation range, bus switching, and overload limiting, operating above 98 percent efficiency. The technique does not provide input-output isolation.
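The efficiency leverage of partial power processing follows from a one-line energy balance: if only a fraction f of the throughput passes through a converter of efficiency η_c while the rest flows directly, the overall efficiency is roughly 1 - f(1 - η_c). A tiny sketch with illustrative numbers (not taken from the paper):

```python
def system_efficiency(f_processed, eta_converter):
    """Overall efficiency when only a fraction of the power is converted."""
    return 1.0 - f_processed * (1.0 - eta_converter)

# e.g. a 90%-efficient converter handling 20% of the throughput
print(system_efficiency(0.20, 0.90))   # 0.98 -> consistent with "above 98 percent"
```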
A tool for NDVI time series extraction from wide-swath remotely sensed images
NASA Astrophysics Data System (ADS)
Li, Zhishan; Shi, Runhe; Zhou, Cong
2015-09-01
Normalized Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring vegetation coverage on the land surface. The time series features of NDVI are capable of reflecting dynamic changes in various ecosystems. Calculating NDVI from Moderate Resolution Imaging Spectrometer (MODIS) and other wide-swath remotely sensed images provides an important way to monitor the spatial and temporal characteristics of large-scale NDVI. However, difficulties still exist for ecologists in extracting such information correctly and efficiently because of several specialized processing steps required for the original remote sensing images, including radiometric calibration, geometric correction, multiple data composition and curve smoothing. In this study, we developed an efficient and convenient online toolbox with a friendly graphical user interface for non-remote-sensing professionals who want to extract NDVI time series. Technically, it is based on Java Web and Web GIS, and the Struts, Spring and Hibernate frameworks (SSH) are integrated in the system for easy maintenance and expansion. Latitude, longitude and time period are the key inputs that users need to provide, and the NDVI time series are calculated automatically.
Monitoring Phenology as Indicator for Timing of Nutrient Inputs in Northern Gulf Watersheds
2010-06-01
region and compared to nutrient monitoring data. A. Image Data: This project uses the MODIS normalized difference vegetation index (NDVI) to create a time...series of land vegetation canopies. MODIS provides a near-daily repeat time for the elimination of cloud contamination, and NDVI has been widely adopted...steps and NDVI was calculated by the defined formula NDVI = (near-infrared reflectance - red reflectance) / (near-infrared reflectance + red reflectance).
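The NDVI formula quoted above is a simple band ratio; a minimal sketch over red and near-infrared reflectance arrays (the sample values and the zero-denominator guard are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - red) / (NIR + red), guarded against zero denominators."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

red = np.array([[0.10, 0.08], [0.30, 0.25]])   # stand-in red reflectances
nir = np.array([[0.50, 0.45], [0.35, 0.28]])   # stand-in NIR reflectances
print(ndvi(nir, red))   # vegetated pixels approach 1, bare surfaces stay near 0
```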
New insights into soil temperature time series modeling: linear or nonlinear?
NASA Astrophysics Data System (ADS)
Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram
2018-03-01
Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields including agriculture, because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing on four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables, and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison results signify that the relative error in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Due to these models' relatively inferior performance compared to the proposed methodology, two hybrid models were implemented: the weights and membership functions of MLP and ANFIS (respectively) were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with the individual and hybrid nonlinear models in predicting DST time series indicates the lowest Akaike Information Criterion (AIC) index value, which considers model simplicity and accuracy simultaneously, at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to the complex nonlinear methods that are normally employed to examine DST.
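A brief sketch contrasting the two linear baselines the study compares — seasonal standardization (remove the day-of-year mean and scale by the day-of-year standard deviation) and seasonal differencing (subtract the value one seasonal period earlier); the synthetic series and period are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
PERIOD = 365
t = np.arange(4 * PERIOD)
st = 12 + 10 * np.sin(2 * np.pi * t / PERIOD) + rng.normal(0, 1.5, t.size)

# Seasonal standardization: remove the day-of-year mean, scale by its std
doy = t % PERIOD
means = np.array([st[doy == d].mean() for d in range(PERIOD)])
stds = np.array([st[doy == d].std() for d in range(PERIOD)])
standardized = (st - means[doy]) / stds[doy]

# Seasonal differencing: subtract the value one full period earlier
differenced = st[PERIOD:] - st[:-PERIOD]

print("std of standardized series:", standardized.std())
print("std of differenced series: ", differenced.std())
```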
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.
1986-01-01
Single-channel pilot manual control output in closed-tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time-series generation technique. A Levinson-Durbin algorithm is used to determine the filter which prewhitens the input, and a projective (least squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the data consist of relatively short records, approximately 25 seconds long. Time delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time-domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
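A sketch of the Levinson-Durbin recursion used in the prewhitening step — solving the Yule-Walker equations for AR coefficients from sample autocorrelations, then filtering the series toward white residuals; the AR order and the test signal are assumptions:

```python
import numpy as np
from scipy.signal import lfilter

def levinson_durbin(r, order):
    """AR coefficients a (a[0]=1) and prediction error from autocorrelations r."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, order + 1):
        acc = r[k] + sum(a[j] * r[k - j] for j in range(1, k))
        refl = -acc / err                      # reflection coefficient
        a[1:k] = a[1:k] + refl * a[1:k][::-1]  # update interior coefficients
        a[k] = refl
        err *= 1.0 - refl ** 2
    return a, err

rng = np.random.default_rng(4)
x = lfilter([1.0], [1.0, -0.8, 0.3], rng.normal(size=2048))   # AR(2) test input

r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size     # biased autocorr
a, err = levinson_durbin(r, order=2)
white = lfilter(a, [1.0], x)                                  # prewhitened series
print("AR coefficients:", np.round(a, 3))                     # ~ [1, -0.8, 0.3]
```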
Speech input system for meat inspection and pathological coding used thereby
NASA Astrophysics Data System (ADS)
Abe, Shozo
Meat inspection is one of the exclusive and important jobs of veterinarians, though it is not well known in general. As the inspection should be conducted skillfully during a series of continuous operations in a slaughterhouse, development of automatic inspection systems has been required for a long time. We employed a hands-free speech input system to record the inspection data because inspectors have to use both hands to treat the internals of cattle and check their health conditions by the naked eye. The data collected by the inspectors are transferred to a speech recognizer and then stored as controllable data for each animal inspected. Control of terms such as pathological conditions to be input and their coding are also important in this speech input system, and practical examples are shown.
Conditions affecting boundary response to messages out of awareness.
Fisher, S
1976-05-01
Multiple studies evaluated the role of the following parameters in mediating the effects of auditory subliminal inputs upon the body boundary: being made aware that exposure to subliminal stimuli is occurring, the nature of the priming preliminary to the input, length of exposure, competing sensory input, use of specialized content messages, tolerance for unrealistic experience, and masculinity-femininity. A test-retest design was typically employed that involved measuring the baseline Barrier score with the Holtzman blots and then ascertaining the Barrier change when responding to a second series of Holtzman blots at the same time that subliminal input was occurring. Complex results emerged that defined in considerably new detail what facilitates and blocks the boundary-disrupting effects of subliminal messages in men and, to a lesser degree, in women.
Storm Water Management Model User’s Manual Version 5.1 - manual
SWMM 5 provides an integrated environment for editing study area input data, running hydrologic, hydraulic and water quality simulations, and viewing the results in a variety of formats. These include color-coded drainage area and conveyance system maps, time series graphs and tables.
Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data
NASA Astrophysics Data System (ADS)
Pathak, Jaideep; Lu, Zhixin; Hunt, Brian R.; Girvan, Michelle; Ott, Edward
2017-12-01
We use recent advances in the machine learning area known as "reservoir computing" to formulate a method for model-free estimation from data of the Lyapunov exponents of a chaotic process. The technique uses a limited time series of measurements as input to a high-dimensional dynamical system called a "reservoir." After the reservoir's response to the data is recorded, linear regression is used to learn a large set of parameters, called the "output weights." The learned output weights are then used to form a modified autonomous reservoir designed to be capable of producing an arbitrarily long time series whose ergodic properties approximate those of the input signal. When successful, we say that the autonomous reservoir reproduces the attractor's "climate." Since the reservoir equations and output weights are known, we can compute the derivatives needed to determine the Lyapunov exponents of the autonomous reservoir, which we then use as estimates of the Lyapunov exponents for the original input generating system. We illustrate the effectiveness of our technique with two examples, the Lorenz system and the Kuramoto-Sivashinsky (KS) equation. In the case of the KS equation, we note that the high dimensional nature of the system and the large number of Lyapunov exponents yield a challenging test of our method, which we find the method successfully passes.
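A minimal echo-state-network sketch of the procedure described: drive a fixed random reservoir with the input series, learn linear output weights by ridge (regularized linear) regression, then close the loop and run the reservoir autonomously; reservoir size, spectral radius, regularization, and the stand-in input are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
N, RHO, RIDGE = 300, 0.9, 1e-6             # reservoir size, spectral radius, reg.

# One-dimensional chaotic driver (logistic map) as a stand-in input signal
u = np.empty(3000); u[0] = 0.4
for t in range(1, u.size):
    u[t] = 3.9 * u[t - 1] * (1 - u[t - 1])

W = rng.normal(size=(N, N))
W *= RHO / max(abs(np.linalg.eigvals(W)))  # rescale to spectral radius RHO
W_in = rng.uniform(-0.5, 0.5, size=N)

# Drive the reservoir with the input and record its states
states = np.zeros((u.size, N))
for t in range(1, u.size):
    states[t] = np.tanh(W @ states[t - 1] + W_in * u[t - 1])

# Ridge regression for the output weights: predict u[t] from state[t]
R, Y = states[100:], u[100:]               # drop the initial transient
W_out = np.linalg.solve(R.T @ R + RIDGE * np.eye(N), R.T @ Y)

# Autonomous mode: feed the output back as the next input ("climate" run)
r, y = states[-1], u[-1]
preds = []
for _ in range(20):
    r = np.tanh(W @ r + W_in * y)
    y = W_out @ r
    preds.append(y)
print(np.round(preds[:5], 3))              # free-running continuation
```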
Optimizing Use of Water Management Systems during Changes of Hydrological Conditions
NASA Astrophysics Data System (ADS)
Výleta, Roman; Škrinár, Andrej; Danáčová, Michaela; Valent, Peter
2017-10-01
When designing water management systems and their components, there is a need for more detailed research on the hydrological conditions of the river basin whose runoff creates the main source of water for the reservoir. Over the lifetime of a water management system, the hydrological time series are never repeated in the same form as those which served as the input for the design of the system components. The design assumes the observed time series to be representative at the time of the system's use. However, this is a rather unrealistic assumption, because the hydrological past will not be exactly repeated over the design lifetime. When designing water management systems, specialists may therefore face insufficient or oversized capacity designs, or possibly wrong specification of the management rules, which may lead to non-optimal use. It is therefore necessary to establish a comprehensive approach to simulating the fluctuations in interannual runoff (taking into account current dry and wet periods) in the form of stochastic modelling techniques in water management practice. The paper deals with a methodological procedure for modelling mean monthly flows using the stochastic Thomas-Fiering model, modified by the Wilson-Hilferty transformation of an independent random number. This transformation is usually applied in the event of significant asymmetry in the observed time series. The methodological procedure was applied to data acquired at the gauging station of Horné Orešany on the Parná Stream. Observed mean monthly flows for the period 1.11.1980 - 31.10.2012 served as the model input. After estimating the model parameters and the Wilson-Hilferty transformation parameters, synthetic time series of mean monthly flows were simulated. These were compared with the observed hydrological time series using basic statistical characteristics (e.g. mean, standard deviation and skewness) to test the quality of the model simulation. The synthetic hydrological series of monthly flows had the same statistical properties as the time series observed in the past. The compiled model was able to take into account the diversity of extreme hydrological situations in the form of synthetic series of mean monthly flows, confirming the occurrence of sets of flows which could occur in the future. The results of stochastic modelling in the form of synthetic time series of mean monthly flows, which take into account the seasonal fluctuations of runoff within the year, are applicable in engineering hydrology (e.g. for optimal use of an existing water management system in connection with reassessment of the economic risks of the system).
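A sketch of the Thomas-Fiering monthly generator with the Wilson-Hilferty transformation of the standard normal deviate, as described above; the monthly statistics are synthetic placeholders, not the Horné Orešany data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Placeholder monthly statistics (mean, std, lag-1 correlation, skew)
mean = 5 + 3 * np.sin(2 * np.pi * np.arange(12) / 12)
std = 0.4 * mean
r1 = np.full(12, 0.6)
skew = np.full(12, 1.2)

def wilson_hilferty(z, cs):
    """Approximate skewed (Pearson III) deviate from a standard normal z."""
    return (2.0 / cs) * (1.0 + cs * z / 6.0 - cs**2 / 36.0) ** 3 - 2.0 / cs

def thomas_fiering(n_years):
    q = np.empty(12 * n_years)
    q[0] = mean[0]
    for t in range(1, q.size):
        j, k = (t - 1) % 12, t % 12            # previous month, current month
        b = r1[j] * std[k] / std[j]            # month-to-month regression slope
        xi = wilson_hilferty(rng.standard_normal(), skew[k])
        q[t] = mean[k] + b * (q[t - 1] - mean[j]) + xi * std[k] * np.sqrt(1 - r1[j]**2)
    return np.maximum(q, 0.0)                  # flows cannot be negative

synth = thomas_fiering(50)
print("simulated mean/std:", synth.mean().round(2), synth.std().round(2))
```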
Model for the respiratory modulation of the heart beat-to-beat time interval series
NASA Astrophysics Data System (ADS)
Capurro, Alberto; Diambra, Luis; Malta, C. P.
2005-09-01
In this study we present a model for the respiratory modulation of the heart beat-to-beat interval series. The model consists of a set of differential equations used to simulate the membrane potential of a single rabbit sinoatrial node cell, excited with a periodic input signal with added correlated noise. This signal, which simulates the input from the autonomic nervous system to the sinoatrial node, was included in the pacemaker equations as a modulation of the iNaK current pump and the potassium current iK. We focus on modeling the heart beat-to-beat time interval series from normal subjects during meditation using the Kundalini Yoga and Chi techniques. The analysis of the experimental data indicates that while the embedding of pre-meditation and control cases has a roughly circular shape, it acquires a polygonal shape during meditation: triangular for the Kundalini Yoga data and quadrangular in the case of the Chi data. The model was used to assess the waveshape of the respiratory signals needed to reproduce the trajectory of the experimental data in the phase space. The embedding of the Chi data could be reproduced using a periodic signal obtained by smoothing a square wave. In the case of the Kundalini Yoga data, the embedding was reproduced with a periodic signal obtained by smoothing a triangular wave having a rising branch of longer duration than the decreasing branch. Our study provides an estimation of the respiratory signal using only the heart beat-to-beat time interval series.
Fitzgerald, Michael G.; Karlinger, Michael R.
1983-01-01
Time-series models were constructed for analysis of daily runoff and sediment discharge data from selected rivers of the Eastern United States. Logarithmic transformation and first-order differencing of the data sets were necessary to produce second-order stationary time series and remove seasonal trends. Cyclic models accounted for less than 42 percent of the variance in the water series and 31 percent in the sediment series. Analysis of the apparent oscillations of given frequencies occurring in the data indicates that frequently occurring storms can account for as much as 50 percent of the variation in sediment discharge. Components of the frequency analysis indicate that a linear representation is reasonable for the water-sediment system. Models that incorporate lagged water discharge as input prove superior to univariate techniques in modeling and prediction of sediment discharges. The random component of the models includes errors in measurement and model hypothesis and indicates no serial correlation. An index of sediment production within or between drainage basins can be calculated from the model parameters.
Multiresolution forecasting for futures trading using wavelet decompositions.
Zhang, B L; Coggins, R; Jabri, M A; Dersch, D; Flower, B
2001-01-01
We investigate the effectiveness of a financial time-series forecasting strategy which exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant, scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). We apply the Bayesian method of automatic relevance determination to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation, or by another perceptron which learns the weight of each scale in the prediction of the original time series. The forecast results are then passed to a money management system to generate trades.
Fast Fourier Tranformation Algorithms: Experiments with Microcomputers.
1986-07-01
...functions with a known discrete Fourier transform. Such functions are given in [1]. The functions TF1, TF2, and TF3 were used and are...the IBM PC, all with TF1 (Eq. 1). The compilers provided options to improve performance, as noted, for which a penalty in compiling time has to be...BASIC only. Series I: In this series the procedures were as follows: (i) Calculate the input values for TF1 of a_r and the modulus |a_r| (which is
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1981-02-11
Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.
Fuchs, Erich; Gruber, Christian; Reitmaier, Tobias; Sick, Bernhard
2009-09-01
Neural networks are often used to process temporal information, i.e., any kind of information related to time series. In many cases, time series contain short-term and long-term trends or behavior. This paper presents a new approach to capturing temporal information with various reference periods simultaneously. A least-squares approximation of the time series with orthogonal polynomials is used to describe short-term trends contained in a signal (average, increase, curvature, etc.). Long-term behavior is modeled with the tapped delay lines of a time-delay neural network (TDNN). This network takes the coefficients of the orthogonal expansion of the approximating polynomial as inputs, thus considering short-term and long-term information efficiently. The advantages of the method are demonstrated by means of artificial data and two real-world application examples, the prediction of the user number in a computer network and online tool wear classification in turning.
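A small sketch of the short-term feature extraction described above — a least-squares fit of each sliding window with orthogonal (here Chebyshev) polynomials, whose low-order coefficients summarize average, slope, and curvature and would feed the TDNN's tapped delay lines; window length and degree are assumptions:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(7)
signal = np.cumsum(rng.normal(size=400))       # stand-in sensor series

WIN, DEG = 32, 3                               # window length, polynomial degree
grid = np.linspace(-1.0, 1.0, WIN)             # Chebyshev natural domain

features = np.array([
    C.chebfit(grid, signal[t:t + WIN], DEG)    # least-squares orthogonal fit
    for t in range(0, signal.size - WIN)
])
# Column 0 ~ window average, 1 ~ increase, 2 ~ curvature, ... -> network inputs
print(features.shape, features[0].round(2))
```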
van de Flierdt, T.; Frank, M.; Lee, D.-C.; Halliday, A.N.; Reynolds, B.C.; Hein, J.R.
2004-01-01
The behavior of dissolved Hf in the marine environment is not well understood due to the lack of direct seawater measurements of Hf isotopes and the limited number of Hf isotope time-series obtained from ferromanganese crusts. In order to place better constraints on input sources and develop further applications, a combined Nd-Hf isotope time-series study of five Pacific ferromanganese crusts was carried out. The samples cover the past 38 Myr and their locations range from sites at the margin of the ocean to remote areas, sites from previously unstudied North and South Pacific areas, and water depths corresponding to deep and bottom waters. For most of the samples a broad coupling of Nd and Hf isotopes is observed. In the Equatorial Pacific εNd and εHf both decrease with water depth. Similarly, εNd and εHf both increase from the South to the North Pacific. These data indicate that the Hf isotopic composition is, in general terms, a suitable tracer for ocean circulation, since inflow and progressive admixture of bottom water is clearly identifiable. The time-series data indicate that inputs and outputs have been balanced throughout much of the late Cenozoic. A simple box model can constrain the relative importance of potential input sources to the North Pacific. Assuming steady state, the model implies significant contributions of radiogenic Nd and Hf from young circum-Pacific arcs and a subordinate role of dust inputs from the Asian continent for the dissolved Nd and Hf budget of the North Pacific. Some changes in ocean circulation that are clearly recognizable in Nd isotopes do not appear to be reflected by Hf isotopic compositions. At two locations within the Pacific Ocean a decoupling of Nd and Hf isotopes is found, indicating limited potential for Hf isotopes as a stand-alone oceanographic tracer and providing evidence of additional local processes that govern the Hf isotopic composition of deep water masses. In the case of the Southwest Pacific there is evidence that decoupling may have been the result of changes in weathering style related to the buildup of Antarctic glaciation. Copyright © 2004 Elsevier Ltd.
Time series association learning
Papcun, George J.
1995-01-01
An acoustic input is recognized from inferred articulatory movements output by a learned relationship between training acoustic waveforms and articulatory movements. The inferred movements are compared with template patterns prepared from training movements when the relationship was learned to regenerate an acoustic recognition. In a preferred embodiment, the acoustic articulatory relationships are learned by a neural network. Subsequent input acoustic patterns then generate the inferred articulatory movements for use with the templates. Articulatory movement data may be supplemented with characteristic acoustic information, e.g. relative power and high frequency data, to improve template recognition.
NASA Technical Reports Server (NTRS)
Stroosnijder, L.; Lascano, R. J.; Newton, R. W.; Vanbavel, C. H. M.
1984-01-01
A general method is proposed to use a time series of L-band emissivities as input to a hydrological model for continuously monitoring the net rainfall and evaporation as well as the water content over the entire soil profile. The method requires a sufficiently accurate and general relation between soil emissivity and surface moisture content. A model was developed which requires the soil hydraulic properties as an additional input but does not need any weather data. The method is shown to be numerically consistent.
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow and where peak-flow timing at sub-daily time scales is of high importance. The results suggest that the calibration method can be useful when observation time periods for discharge and model input data do not overlap. The method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
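A sketch of the flow-duration-curve construction and the volume-based selection of evaluation points described above (EPs at equal increments of cumulative water volume along the FDC); the discharge series and the number of EPs are assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
q = rng.lognormal(mean=1.0, sigma=0.9, size=3650)   # stand-in daily discharge

# Flow-duration curve: sorted flows vs exceedance probability
q_sorted = np.sort(q)[::-1]
exceed = (np.arange(q.size) + 0.5) / q.size

# Volume-based evaluation points: split cumulative volume into equal slices
N_EP = 10
cumvol = np.cumsum(q_sorted) / q_sorted.sum()
ep_idx = np.searchsorted(cumvol, (np.arange(N_EP) + 0.5) / N_EP)
for i in ep_idx:
    print(f"EP: exceedance={exceed[i]:.3f}  discharge={q_sorted[i]:.2f}")
```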
Cui, Yiqian; Shi, Junyou; Wang, Zili
2015-11-01
Quantum Neural Network (QNN) models have attracted great attention since they introduce a new neural computing approach based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes a deep quantum entanglement. A novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), is also proposed based on the Complex Quantum Neuron (CQN). CRQDNN is a three-layer model with both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory function needed to process time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time series predictions. Two application studies are presented in this paper, including chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
Delpierre, Nicolas; Berveiller, Daniel; Granda, Elena; Dufrêne, Eric
2016-04-01
Although the analysis of flux data has increased our understanding of the interannual variability of carbon inputs into forest ecosystems, we still know little about the determinants of wood growth. Here, we aimed to identify which drivers control the interannual variability of wood growth in a mesic temperate deciduous forest. We analysed a 9-yr time series of carbon fluxes and aboveground wood growth (AWG), reconstructed at a weekly time-scale through the combination of dendrometer and wood density data. Carbon inputs and AWG anomalies appeared to be uncorrelated from the seasonal to interannual scales. More than 90% of the interannual variability of AWG was explained by a combination of the growth intensity during a first 'critical period' of the wood growing season, occurring close to the seasonal maximum, and the timing of the first summer growth halt. Both atmospheric and soil water stress exerted a strong control on the interannual variability of AWG at the study site, despite its mesic conditions, whilst not affecting carbon inputs. Carbon sink activity, not carbon inputs, determined the interannual variations in wood growth at the study site. Our results provide a functional understanding of the dependence of radial growth on precipitation observed in dendrological studies. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Cluster analysis of word frequency dynamics
NASA Astrophysics Data System (ADS)
Maslennikova, Yu S.; Bochkarev, V. V.; Belashova, I. A.
2015-01-01
This paper describes the analysis and modelling of word usage frequency time series. In a previous study, an assumption was put forward that all word usage frequencies have uniform dynamics approaching the shape of a Gaussian function. This assumption can be checked using the frequency dictionaries of the Google Books Ngram database. This database includes 5.2 million books published between 1500 and 2008. The corpus contains over 500 billion words in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese. We clustered time series of word usage frequencies using a Kohonen neural network. The similarity between input vectors was estimated using several algorithms. As a result of the neural network training procedure, more than ten different forms of time series were found. They describe the dynamics of word usage frequencies from the birth to the death of individual words. Different groups of word forms were found to have different dynamics of word usage frequency variations.
NASA Astrophysics Data System (ADS)
Jothiprakash, V.; Magar, R. B.
2012-07-01
In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Model performance was evaluated using various performance criteria. From the results, it is found that the LGP models are superior to the ANN and ANFIS models, especially in predicting the peak inflows for both daily and hourly time steps. A detailed comparison of the overall performance indicated that the combined input model (combining rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better because, apart from reduced noise in the data, they benefited from the chosen techniques and their training approach, appropriate selection of network architecture, required inputs, and training-testing ratios of the data set. The slightly poorer performance of the distributed data models is due to large variations and a smaller number of observed values.
Data compression using Chebyshev transform
NASA Technical Reports Server (NTRS)
Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)
2007-01-01
The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
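A minimal sketch of the idea behind such compression: approximate fixed-length blocks of a series with low-degree Chebyshev fits and store only the coefficients. Block size, degree, and data are assumptions, not the patented algorithm:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(9)
t = np.linspace(0, 20, 2000)
series = np.sin(t) + 0.2 * np.sin(7 * t) + 0.01 * rng.normal(size=t.size)

BLOCK, DEG = 100, 8
x = np.linspace(-1, 1, BLOCK)
coeffs = [C.chebfit(x, series[i:i + BLOCK], DEG)
          for i in range(0, series.size, BLOCK)]           # stored representation
recon = np.concatenate([C.chebval(x, c) for c in coeffs])  # decompression

ratio = series.size / (len(coeffs) * (DEG + 1))
print(f"compression ratio ~{ratio:.1f}:1, "
      f"max error {np.max(np.abs(series - recon)):.4f}")
```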
Using Time Series Analysis to Predict Cardiac Arrest in a PICU.
Kennedy, Curtis E; Aoki, Noriaki; Mariscalco, Michele; Turley, James P
2015-11-01
To build and test cardiac arrest prediction models in a PICU, using time series analysis as input, and to measure changes in prediction accuracy attributable to different classes of time series data. Retrospective cohort study. Thirty-one bed academic PICU that provides care for medical and general surgical (not congenital heart surgery) patients. Patients experiencing a cardiac arrest in the PICU and requiring external cardiac massage for at least 2 minutes. None. One hundred three cases of cardiac arrest and 109 control cases were used to prepare a baseline dataset that consisted of 1,025 variables in four data classes: multivariate, raw time series, clinical calculations, and time series trend analysis. We trained 20 arrest prediction models using a matrix of five feature sets (combinations of data classes) with four modeling algorithms: linear regression, decision tree, neural network, and support vector machine. The reference model (multivariate data with regression algorithm) had an accuracy of 78% and 87% area under the receiver operating characteristic curve. The best model (multivariate + trend analysis data with support vector machine algorithm) had an accuracy of 94% and 98% area under the receiver operating characteristic curve. Cardiac arrest predictions based on a traditional model built with multivariate data and a regression algorithm misclassified cases 3.7 times more frequently than predictions that included time series trend analysis and built with a support vector machine algorithm. Although the final model lacks the specificity necessary for clinical application, we have demonstrated how information from time series data can be used to increase the accuracy of clinical prediction models.
Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo
2014-07-01
Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performances in the processing of empirical data. We study a particular kind of reservoir computers called time-delay reservoirs that are constructed out of the sampling of the solution of a time-delay differential equation and show their good performance in the forecasting of the conditional covariances associated to multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type as well as in the prediction of factual daily market realized volatilities computed with intraday quotes, using as training input daily log-return series of moderate size. We tackle some problems associated to the lack of task-universality for individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs. Copyright © 2014 Elsevier Ltd. All rights reserved.
Miranian, A; Abdollahzade, M
2013-02-01
Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
NASA Astrophysics Data System (ADS)
Wu, Xiaoping; Abbondanza, Claudio; Altamimi, Zuheir; Chin, T. Mike; Collilieux, Xavier; Gross, Richard S.; Heflin, Michael B.; Jiang, Yan; Parker, Jay W.
2015-05-01
The current International Terrestrial Reference Frame is based on a piecewise linear site motion model and realized by reference epoch coordinates and velocities for a global set of stations. Although linear motions due to tectonic plates and glacial isostatic adjustment dominate geodetic signals, at today's millimeter precisions, nonlinear motions due to earthquakes, volcanic activities, ice mass losses, sea level rise, hydrological changes, and other processes become significant. Monitoring these (sometimes rapid) changes requires consistent and precise realization of the terrestrial reference frame (TRF) quasi-instantaneously. Here, we use a Kalman filter and smoother approach to combine time series from four space geodetic techniques to realize an experimental TRF through weekly time series of geocentric coordinates. In addition to secular, periodic, and stochastic components for station coordinates, the Kalman filter state variables also include daily Earth orientation parameters and transformation parameters from the input data frames to the combined TRF. Local tie measurements among colocated stations are used at their known or nominal epochs of observation, with comotion constraints applied to almost all colocated stations. The filter/smoother approach unifies different geodetic time series in a single geocentric frame. Fragmented and multitechnique tracking records at colocation sites are bridged together to form longer and coherent motion time series. While the time series approach to the TRF reflects the reality of a changing Earth more closely than the linear approximation model, the filter/smoother is computationally powerful and flexible enough to facilitate incorporation of other data types and more advanced characterization of the stochastic behavior of geodetic time series.
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András
2015-04-01
For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, intelligent design concepts often use uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different approaches of precipitation models regarding their ability to reproduce rainfall and runoff characteristics. These include one parametric stochastic model (alternating renewal approach), one non-parametric stochastic model (resampling approach), one downscaling approach from a regional climate model, and one disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event based statistics like mean dry spell and wet spell duration, wet spell amount and intensity, long term means of precipitation sum and number of events, and extreme value distributions for different durations. Then they are compared regarding simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show a principal suitability of all rainfall models but with different strengths and weaknesses regarding the different rainfall and runoff characteristics considered.
Zhang, Bo; Peng, Beihua; Liu, Mingchu
2012-01-01
This paper presents an overview of resource use and the environmental impact of Chinese industry during 1997-2006. For the purpose of this analysis, the thermodynamic concept of exergy has been employed both to quantify and to aggregate the resource inputs and the environmental emissions arising from the sector. The resource inputs and environmental emissions show an increasing trend in this period. Compared with 47568.7 PJ in 1997, resource input in 2006 increased by 75.4% and reached 83437.9 PJ, of which 82.5% came from nonrenewable resources, mainly coal and other energy minerals. Furthermore, the total exergy of environmental emissions was estimated to be 3499.3 PJ in 2006, 1.7 times that in 1997, of which 93.4% was from GHG emissions and only 6.6% from "three wastes" emissions. A rapid increase in nonrenewable resource inputs and GHG emissions over 2002-2006 can be found, owing to the excessive expansion of resource- and energy-intensive subsectors. Exergy intensities in terms of resource input intensity and environmental emission intensity time series are also calculated, and the trends are evidently influenced by the macroeconomic situation, particularly by the investment-driven economic development of recent years. Corresponding policy implications to guide a more sustainable industrial system are addressed.
Sea change: Charting the course for biogeochemical ocean time-series research in a new millennium
NASA Astrophysics Data System (ADS)
Church, Matthew J.; Lomas, Michael W.; Muller-Karger, Frank
2013-09-01
Ocean time-series provide vital information needed for assessing ecosystem change. This paper summarizes the historical context, major program objectives, and future research priorities for three contemporary ocean time-series programs: the Hawaii Ocean Time-series (HOT), the Bermuda Atlantic Time-series Study (BATS), and the CARIACO Ocean Time-Series. These three programs operate in physically and biogeochemically distinct regions of the world's oceans, with HOT and BATS located in the open-ocean waters of the subtropical North Pacific and North Atlantic, respectively, and CARIACO situated in the anoxic Cariaco Basin of the tropical Atlantic. All three programs sustain near-monthly shipboard occupations of their field sampling sites, with HOT and BATS beginning in 1988, and CARIACO initiated in 1996. The resulting data provide some of the only multi-disciplinary, decadal-scale determinations of time-varying ecosystem change in the global ocean. Facilitated by a scoping workshop (September 2010) sponsored by the Ocean Carbon Biogeochemistry (OCB) program, leaders of these time-series programs sought community input on existing program strengths and on future research directions. Themes that emerged from these discussions included: (1) Shipboard time-series programs are key to informing our understanding of the connectivity between changes in ocean-climate and biogeochemistry. (2) The scientific and logistical support provided by shipboard time-series programs forms the backbone for numerous research and education programs; future studies should be encouraged that seek mechanistic understanding of the ecological interactions underlying the biogeochemical dynamics at these sites. (3) Detecting time-varying trends in ocean properties and processes requires consistent, high-quality measurements; time-series must carefully document analytical procedures and, where possible, trace the accuracy of analyses to certified standards and internal reference materials. (4) Leveraged implementation, testing, and validation of autonomous and remote observing technologies at time-series sites provide new insights into the spatiotemporal variability underlying ecosystem changes. (5) The value of existing time-series data for formulating and validating ecosystem models should be promoted. In summary, the scientific underpinnings of ocean time-series programs remain as strong and important today as when these programs were initiated. The emerging data inform our knowledge of the ocean's biogeochemistry and ecology, and improve our predictive capacity about planetary change.
Automatic Dance Lesson Generation
ERIC Educational Resources Information Center
Yang, Yang; Leung, H.; Yue, Lihua; Deng, LiQun
2012-01-01
In this paper, an automatic lesson generation system is presented which is suitable in a learning-by-mimicking scenario where the learning objects can be represented as multiattribute time series data. The dance is used as an example in this paper to illustrate the idea. Given a dance motion sequence as the input, the proposed lesson generation…
Time series study of concentrations of SO4(2-) and H+ in precipitation and soil waters in Norway.
Kvaalen, H; Solberg, S; Clarke, N; Torp, T; Aamlid, D
2002-01-01
Along with a steady reduction in acid inputs during 14 years of intensive forest monitoring in Norway, the influence of acid deposition upon soil water acidity has gradually been reduced in favour of other, internal sources of H+ and sulphate, in particular from processes in the upper soil layer. We used statistical analyses in two steps for precipitation, throughfall and soil water at 5, 15 and 40 cm depths. Firstly, we employed time series analyses to model the temporal variation as a long-term linear trend plus a monthly variation, and thereby filtered out the residual weekly variation. Secondly, we used the parameter estimates and the residuals from this step to show that the long-term, monthly and weekly variation in one layer was correlated with similar temporal variation in the adjacent layer above. This was strongly evident for throughfall correlated with precipitation, but much weaker for soil water. Continued acidification of soil water on many plots suggests that the combined effects of anthropogenic and natural acid inputs exceed in places the buffering capacity of the soil.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halpin, M.P.
This project used a Box and Jenkins time-series analysis of energetic electron fluxes measured at geosynchronous orbit in an effort to derive prediction models for the flux in each of five energy channels. In addition, the technique of transfer function modeling described by Box and Jenkins was used in an attempt to derive input-output relationships between the flux channels (viewed as the output) and the solar-wind speed or interplanetary magnetic field (IMF) north-south component, Bz (viewed as the input). The transfer function modeling was done in order to investigate the theoretical dynamic relationship which is believed to exist between the solar wind, the IMF Bz, and the energetic electron flux in the magnetosphere. The models derived from the transfer-function techniques employed were also intended to be used in the prediction of flux values. The results from this study indicate that the energetic electron flux changes in the various channels are dependent on more than simply the solar-wind speed or the IMF Bz.
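Box-Jenkins transfer-function models are not packaged directly in most Python libraries; a close, commonly used stand-in is an ARIMA model with an exogenous input (here via statsmodels' SARIMAX). A hedged sketch with synthetic stand-ins for the flux (output) and solar-wind speed (input):

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(10)
n = 500
wind = 400 + 50 * np.convolve(rng.normal(size=n), np.ones(5) / 5, "same")
flux = np.empty(n); flux[0] = 0.0
for t in range(1, n):                      # output depends on the lagged input
    flux[t] = 0.7 * flux[t - 1] + 0.01 * wind[t - 1] + rng.normal(0, 0.5)

# ARIMA(1,0,0) with the solar-wind speed (lagged once) as exogenous input
model = SARIMAX(flux[1:], exog=wind[:-1], order=(1, 0, 0))
res = model.fit(disp=False)
print(res.params.round(3))                 # AR(1) term and input gain recovered
```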
NASA Astrophysics Data System (ADS)
Pelt, E.; Chabaux, F. J.; Innocent, C.; Ghaleb, B.
2009-12-01
Analysis of U-series nuclides in weathering profiles is used today to constrain the time scales of soil and weathering profile formation (e.g., Chabaux et al., 2008). These studies require an understanding of the sources and fractionation of U-series nuclides in weathering systems. In most of these studies the impact of aeolian inputs on U-series nuclides in soils is neglected. Here, we propose to discuss this assumption, i.e., to evaluate the impact of dust deposition on U-series nuclides in soils, by working on present and paleo-soils collected on the Mount Cameroon volcano. Recent Sr, Nd, Pb isotopic analyses performed on these samples have indeed documented significant inputs of Saharan dust in these soils (Dia et al., 2006). We have therefore analyzed 238U-234U-230Th nuclides in the same samples. Comparison of the U-Th isotopic data with the Sr-Nd-Pb isotopic data indicates a significant impact of the dust input on the U and Th budget of the soils, around 10% for both U and Th. Using the Sr-Nd-Pb isotopic data of Saharan dusts given by Dia et al. (2006), we estimate U-Th concentrations and U-Th isotope ratios of dusts compatible with U-Th data obtained on Saharan dusts collected in Barbados (Rydell and Prospero, 1972). However, the variations of U/Th ratios along the weathering profiles cannot be explained by a simple mixing scenario between material from basalt and from the defined atmospheric dust pool. A secondary uranium migration associated with chemical weathering has affected the weathering profiles. Mass balance calculation suggests that U in soils from Mount Cameroon is affected at the same order of magnitude by both chemical migration and dust accretion. Nevertheless, Mount Cameroon is a limiting case where large dust inputs from the Saharan continental crust contaminate the basaltic terrain of the volcano. This study therefore suggests that in other contexts, where dust inputs are lower or the bedrock is more concentrated in U and Th, the dust contribution will not significantly influence U-series dating. Chabaux F., Bourdon B., Riotte J. (2008). U-series geochemistry in weathering profiles, river waters and lakes. Radioactivity in the Environment, 13, 49-104. Dia A., Chauvel C., Bulourde M. and Gérard M. (2006). Eolian contribution to soils on Mount Cameroon: Isotopic and trace element records. Chem. Geol. 226, 232-252. Rydell H.S. and Prospero J.M. (1972). Uranium and thorium concentrations in wind-borne Saharan dust over the western equatorial North Atlantic Ocean. EPSL 14, 397-402.
Rio, Daniel E.; Rawlings, Robert R.; Woltz, Lawrence A.; Gilman, Jodi; Hommer, Daniel W.
2013-01-01
A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function. PMID:23840281
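In the Fourier domain the transfer function can be estimated nonparametrically as the ratio of the input-output cross-spectrum to the input auto-spectrum; a minimal sketch with a synthetic stimulus/BOLD pair (scipy's Welch-type estimators; the toy HRF and sampling rate are stand-ins):

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(11)
fs = 0.5                                     # 2 s TR -> 0.5 Hz sampling (assumed)
n = 1024
stim = (rng.random(n) < 0.05).astype(float)  # sparse event train (input)

hrf = np.exp(-np.arange(0, 30, 1 / fs) / 5.0)          # toy exponential "HRF"
bold = np.convolve(stim, hrf)[:n] + 0.1 * rng.normal(size=n)

f, Pxx = welch(stim, fs=fs, nperseg=256)     # input auto-spectrum
_, Pxy = csd(stim, bold, fs=fs, nperseg=256) # input-output cross-spectrum
H = Pxy / Pxx                                # nonparametric transfer function
print("gain at lowest nonzero frequency:", abs(H[1]).round(2))
```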
NASA Astrophysics Data System (ADS)
Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.
2015-04-01
Google Earth's high-resolution imagery basically takes months to process before online updates, a time-consuming and slow process that is especially limiting for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that occur between different time steps; only regions with differences are updated. In our system, aerial images from the Massachusetts roads and buildings open datasets and the Saitama district datasets are used as input images. Semantic segmentation, a pixel-wise classification of images, is then applied to the input images using a deep neural network. A deep neural network is used because it is not only efficient in learning highly discriminative image features such as roads and buildings, but also partially robust to incomplete and poorly registered target maps. Aerial images enriched with this semantic information are then stored as ground truth in a 5D World Map database, a system developed to visualise multimedia data in five dimensions: three spatial dimensions, one temporal dimension, and one degenerated dimension combining semantics and colour. Next, a ground truth image chosen from the 5D World Map database and a new aerial image with the same spatial extent but a different acquisition time are compared via a difference extraction method, and the map is updated only where local changes have occurred. Map updating thus becomes cheaper, faster and more effective, especially for post-disaster applications, by leaving unchanged regions alone and updating only changed regions.
Daily water level forecasting using wavelet decomposition and artificial intelligence techniques
NASA Astrophysics Data System (ADS)
Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.
2015-01-01
Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to be more efficient than the ANN and ANFIS models, and WANFIS7-sym10 yields the best performance among all models tested. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that model performance depends on the input sets and the mother wavelet, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can be more efficient than conventional forecasting models.
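As an illustration of the decomposition step described above, the following minimal sketch (assuming the PyWavelets package; the synthetic series and the decomposition level are invented for illustration) splits an input series with the db10 mother wavelet into approximation and detail sub-series, reconstructed at full length so they can be stacked as inputs for an ANN or ANFIS stage.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.arange(1000, dtype=float)
series = np.sin(2 * np.pi * t / 365.0) + 0.1 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(series, 'db10', level=3)          # [cA3, cD3, cD2, cD1]
# Reconstruct each component at full length so the sub-series can be stacked
# as one input column per component for the ANN/ANFIS stage.
parts = [pywt.upcoef('a', coeffs[0], 'db10', level=3, take=series.size)]
for depth, detail in zip((3, 2, 1), coeffs[1:]):
    parts.append(pywt.upcoef('d', detail, 'db10', level=depth, take=series.size))
X = np.column_stack(parts)
```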
Discovering time-lagged rules from microarray data using gene profile classifiers
2011-01-01
Background: Gene regulatory networks have an essential role in every process of life. In this regard, the amount of genome-wide time series data is becoming increasingly available, providing the opportunity to discover the time-delayed gene regulatory networks that govern the majority of these molecular processes. Results: This paper aims at reconstructing gene regulatory networks from multiple genome-wide microarray time series datasets. In this sense, a new model-free algorithm called GRNCOP2 (Gene Regulatory Network inference by Combinatorial OPtimization 2), which is a significant evolution of the GRNCOP algorithm, was developed using combinatorial optimization of gene profile classifiers. The method is capable of inferring potential time-delay relationships with any span of time between genes from various time series datasets given as input. The proposed algorithm was applied to time series data composed of twenty yeast genes that are highly relevant for the cell-cycle study, and the results were compared against several related approaches. The outcomes have shown that GRNCOP2 outperforms the contrasted methods in terms of the proposed metrics, and that the results are consistent with previous biological knowledge. Additionally, a genome-wide study on multiple publicly available time series data was performed. In this case, the experimentation has exhibited the soundness and scalability of the new method, which inferred highly related, statistically significant gene associations. Conclusions: A novel method for inferring time-delayed gene regulatory networks from genome-wide time series datasets is proposed in this paper. The method was carefully validated with several publicly available data sets. The results have demonstrated that the algorithm constitutes a usable model-free approach capable of predicting meaningful relationships between genes, revealing the time-trends of gene regulation. PMID:21524308
Self-calibrating multiplexer circuit
Wahl, Chris P.
1997-01-01
A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction comprises a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. The computer defines a two-point linear calibration curve, which sets the acceptable multiplexer voltage limits, by measuring the multiplexer's output for accurately known input signals developed by driving the predetermined current levels across the series resistances. The computer detects drift in the multiplexer when the output voltage limits expected during normal operation are exceeded, or when the relationship defined by the calibration curve is invalidated.
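A minimal sketch of the two-point calibration idea described above: known currents through a precision series resistance give two accurately known input voltages, a line through the two measured outputs defines the expected response, and readings outside a tolerance band flag drift. All values are hypothetical.

```python
# Hypothetical values throughout; only the two-point calibration logic matters.
R_CAL = 100.0                            # ohms, series calibration resistance
i_low, i_high = 1.0e-3, 10.0e-3          # two known current levels (A)
v_in = (i_low * R_CAL, i_high * R_CAL)   # accurately known input voltages
v_out = (0.1002, 1.0015)                 # multiplexer readings at those inputs

# Two-point linear calibration curve: output = gain * input + offset.
gain = (v_out[1] - v_out[0]) / (v_in[1] - v_in[0])
offset = v_out[0] - gain * v_in[0]

def within_limits(v_measured, v_true, tol=0.01):
    """Flag drift: True while a reading stays inside the calibrated band."""
    return abs(v_measured - (gain * v_true + offset)) <= tol
```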
Spatiotemporal coding of inputs for a system of globally coupled phase oscillators
NASA Astrophysics Data System (ADS)
Wordsworth, John; Ashwin, Peter
2008-12-01
We investigate the spatiotemporal coding of low-amplitude inputs to a simple system of globally coupled phase oscillators with coupling function g(ϕ)=-sin(ϕ+α)+rsin(2ϕ+β) that has robust heteroclinic cycles (slow switching between cluster states). The inputs correspond to detuning of the oscillators. It was recently noted that globally coupled phase oscillators can encode their frequencies in the form of spatiotemporal codes of a sequence of cluster states [P. Ashwin, G. Orosz, J. Wordsworth, and S. Townley, SIAM J. Appl. Dyn. Syst. 6, 728 (2007)]. Concentrating on the case of N=5 oscillators, we show in detail how the spatiotemporal coding can be used to resolve all of the information that relates the individual inputs to each other, provided that a long enough time series is considered. We investigate robustness to noise and find remarkable stability, especially of the temporal coding, even for noise of a magnitude comparable to the inputs.
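A minimal sketch of the system described above: N = 5 globally coupled phase oscillators with the stated coupling function, integrated by forward Euler. The parameter values, detuning magnitude, and step sizes here are illustrative assumptions, not the paper's.

```python
import numpy as np

N, dt, steps = 5, 0.01, 100_000
alpha, beta, r = 1.8, -2.0, 0.2               # assumed coupling parameters
rng = np.random.default_rng(2)
omega = 1.0 + 1e-3 * rng.standard_normal(N)   # base frequency plus small detuning
theta = rng.uniform(0.0, 2.0 * np.pi, N)

def g(phi):
    return -np.sin(phi + alpha) + r * np.sin(2.0 * phi + beta)

history = np.empty((steps, N))
for k in range(steps):
    pairwise = theta[None, :] - theta[:, None]          # phi_j - phi_i
    theta = theta + dt * (omega + g(pairwise).mean(axis=1))
    history[k] = np.mod(theta, 2.0 * np.pi)
# Groups of nearly equal phases in `history` are the cluster states; the
# visit sequence of clusterings is the spatiotemporal code discussed above.
```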
Tooth enamel mineralization in ungulates: implications for recovering a primary isotopic time-series
NASA Astrophysics Data System (ADS)
Passey, Benjamin H.; Cerling, Thure E.
2002-09-01
Temporal changes in the carbon and oxygen isotopic composition of an animal are an environmental and behavioral input signal that is recorded in the enamel of developing teeth. In this paper, we evaluate changes in phosphorus content and density along the axial lengths of three developing ungulate teeth to illustrate the protracted nature of mineral accumulation in a volume of developing enamel. The least mature enamel in these teeth contains by volume about 25% of the mineral mass of mature enamel, and the remaining 75% of the mineral accumulates during maturation. Using data from one of these teeth (a Hippopotamus amphibius canine), we develop a model for teeth growing at a constant rate that describes how an input signal is recorded into tooth enamel. The model accounts for both the temporal and spatial patterns of amelogenesis (enamel formation) and the sampling geometry. The model shows that input signal attenuation occurs as a result of time-averaging during amelogenesis when the maturation interval is long compared to the duration of features in the input signal. Sampling does not induce significant attenuation, provided that the sampling interval is several times shorter than the maturation interval. We present a detailed δ13C and δ18O record for the H. amphibius canine and suggest possible input isotope signals that may have given rise to the measured isotope signal.
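A minimal sketch of the attenuation mechanism described above: the enamel record is modelled as the input isotope signal averaged over a maturation window. The window length and seasonal input are illustrative assumptions, not the paper's model parameters.

```python
import numpy as np

days = np.arange(730)
input_signal = np.sin(2.0 * np.pi * days / 365.0)   # seasonal input, e.g. d13C
maturation_days = 180                                # assumed averaging window
window = np.ones(maturation_days) / maturation_days
recorded = np.convolve(input_signal, window, mode='valid')  # time-averaged record
# Features shorter than the window are damped: compare peak-to-peak ranges.
print(np.ptp(input_signal), np.ptp(recorded))
```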
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2010-12-01
The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
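A minimal sketch of building a flow-duration curve and selecting evaluation points by equal volume increments, as in the method above. The synthetic flows and the percentage band standing in for the discharge uncertainty estimate are illustrative assumptions.

```python
import numpy as np

def flow_duration_curve(q):
    """Return (exceedance probability, discharge) sorted from high to low flow."""
    q_sorted = np.sort(q)[::-1]
    prob = np.arange(1, q.size + 1) / (q.size + 1.0)   # Weibull plotting position
    return prob, q_sorted

def volume_based_eps(q, n_eps=10):
    """Pick discharges splitting the total flow volume into equal increments."""
    q_sorted = np.sort(q)[::-1]
    cum_vol = np.cumsum(q_sorted) / q_sorted.sum()
    targets = np.linspace(0.0, 1.0, n_eps + 2)[1:-1]   # interior points only
    return q_sorted[np.searchsorted(cum_vol, targets)]

q_obs = np.random.default_rng(3).gamma(2.0, 5.0, 3650)  # synthetic daily flows
eps = volume_based_eps(q_obs)
lower, upper = 0.8 * eps, 1.2 * eps   # stand-in limits of acceptability
```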
Zou, Yi; Chakravarty, Swapnajit; Zhu, Liang; Chen, Ray T.
2014-01-01
We experimentally demonstrate an efficient and robust method for series connection of photonic crystal microcavities that are coupled to photonic crystal waveguides in the slow light transmission regime. We demonstrate that group index taper engineering provides excellent optical impedance matching between the input and output strip waveguides and the photonic crystal waveguide, a nearly flat transmission over the entire guided mode spectrum and clear multi-resonance peaks corresponding to individual microcavities that are connected in series. Series connected photonic crystal microcavities are further multiplexed in parallel using cascaded multimode interference power splitters to generate a high density silicon nanophotonic microarray comprising 64 photonic crystal microcavity sensors, all of which are interrogated simultaneously at the same instant of time. PMID:25316921
Re-analysis of Alaskan benchmark glacier mass-balance data using the index method
Van Beusekom, Ashley E.; O'Neel, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.
2010-01-01
At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 to accomplish our goal of making more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly-site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
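A minimal sketch of the classical degree-day relation referenced above: ablation over a period is a degree-day factor times the sum of positive daily mean temperatures. The factor value and temperatures are illustrative, not the report's.

```python
import numpy as np

def degree_day_ablation(daily_mean_temp_c, ddf):
    """Ablation (mm w.e.) = degree-day factor * sum of positive degree-days."""
    pdd = np.clip(daily_mean_temp_c, 0.0, None).sum()
    return ddf * pdd

temps = np.array([2.1, 4.5, 6.0, -1.0, 3.2, 5.8])   # deg C, example week
print(degree_day_ablation(temps, ddf=4.0))          # ddf in mm w.e. per degree-day
```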
Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.
Komasi, Mehdi; Sharghi, Soroush
2016-01-01
Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has grown rapidly in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and other fields of hydrology. Like other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neural fuzzy inference system, the SVM model is based on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. The main time series of the two variables, rainfall and runoff, were decomposed by wavelet theory into multiple sub-series at different frequency scales; these sub-series were then fed as input data to the SVM model in order to predict the runoff discharge one day ahead. The obtained results show that the wavelet-SVM model can predict both short- and long-term runoff discharges by considering the seasonality effects. The proposed hybrid model is also more appropriate than classical autoregressive ones such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process.
Approaches to Forecasting Demands for Library Network Services. Report No. 10.
ERIC Educational Resources Information Center
Kang, Jong Hoa
The problem of forecasting monthly demands for library network services is considered in terms of using forecasts as inputs to policy analysis models, and in terms of using forecasts to aid in the making of budgeting and staffing decisions. Box-Jenkins time-series methodology, adaptive filtering, and regression approaches are examined and compared…
Automated time series forecasting for biosurveillance.
Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit
2007-09-30
For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
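A minimal sketch, assuming statsmodels and pandas, of the forecast-then-subtract idea above: a Holt-Winters model with weekly seasonality is fit to a synthetic syndromic-style daily count series, and the residuals (observation minus forecast) become the input for a control-chart detector. MedAPE is the median absolute percent error.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(4)
days = pd.date_range("2006-01-01", periods=400, freq="D")
counts = 50 + 10 * np.sin(2 * np.pi * np.arange(400) / 7) + rng.poisson(5, 400)
series = pd.Series(counts.astype(float), index=days)

train, test = series[:365], series[365:]
fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                           seasonal_periods=7).fit()
forecast = fit.forecast(len(test))
residuals = test - forecast.values            # algorithmic input for the detector
medape = float(np.median(np.abs(residuals / test))) * 100.0
```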
User's guide for MAGIC-Meteorologic and hydrologic genscn (generate scenarios) input converter
Ortel, Terry W.; Martin, Angel
2010-01-01
Meteorologic and hydrologic data used in watershed modeling studies are collected by various agencies and organizations, and stored in various formats. Data may be in a raw, un-processed format with little or no quality control, or may be checked for validity before being made available. Flood-simulation systems require data in near real-time so that adequate flood warnings can be made. Additionally, forecasted data are needed to operate flood-control structures to potentially mitigate flood damages. Because real-time data are of a provisional nature, missing data may need to be estimated for use in flood-simulation systems. The Meteorologic and Hydrologic GenScn (Generate Scenarios) Input Converter (MAGIC) can be used to convert data from selected formats into the Hydrological Simulation Program-FORTRAN hourly-observations format for input to a Watershed Data Management database, for use in hydrologic modeling studies. MAGIC also can reformat the data to the Full Equations model time-series format, for use in hydraulic modeling studies. Examples of the application of MAGIC in the flood-simulation system for Salt Creek in northeastern Illinois are presented in this report.
Land cover change mapping using MODIS time series to improve emissions inventories
NASA Astrophysics Data System (ADS)
López-Saldaña, Gerardo; Quaife, Tristan; Clifford, Debbie
2016-04-01
MELODIES is an FP7-funded project to develop innovative and sustainable services, based upon Open Data, for users in research, government, industry and the general public in a broad range of societal and environmental benefit areas. Understanding and quantifying land surface changes is necessary for estimating greenhouse gas and ammonia emissions, and for meeting air quality limits and targets. More sophisticated inventory methodologies, at least for key emission sources, are needed to satisfy policy-driven air quality directives. Quantifying land cover changes on an annual basis requires greater spatial and temporal disaggregation of input data. The main aim of this study is to develop a methodology for using Earth Observation (EO) to identify annual land surface changes that will improve emissions inventories from agriculture and land use/land use change and forestry (LULUCF) in the UK. The first goal is to find the sets of input features that best describe the surface dynamics. In order to identify annual and inter-annual land surface changes, a time series of surface reflectance was used to capture seasonal variability. Daily surface reflectance images from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 500 m resolution were used to invert a Bidirectional Reflectance Distribution Function (BRDF) model to create the seamless time series. Given the limited number of cloud-free observations, a BRDF climatology was used to constrain the model inversion and, where no high-quality observations were available at all, as a gap filler. The Land Cover Map 2007 (LC2007) produced by the Centre for Ecology & Hydrology (CEH) was used for training and testing. A land cover product was created for 2003 to 2015, and a Bayesian approach was developed to identify land cover changes. We will present the results of the time series development and the first exercises in creating the land cover and land cover change products.
NASA Astrophysics Data System (ADS)
Elangasinghe, M. A.; Singhal, N.; Dirks, K. N.; Salmond, J. A.; Samarasinghe, S.
2014-09-01
This paper uses artificial neural networks (ANN), combined with k-means clustering, to understand the complex time series of PM10 and PM2.5 concentrations at a coastal location of New Zealand based on data from a single site. Out of available meteorological parameters from the network (wind speed, wind direction, solar radiation, temperature, relative humidity), key factors governing the pattern of the time series concentrations were identified through input sensitivity analysis performed on the trained neural network model. The transport pathways of particulate matter under these key meteorological parameters were further analysed through bivariate concentration polar plots and k-means clustering techniques. The analysis shows that the external sources such as marine aerosols and local sources such as traffic and biomass burning contribute equally to the particulate matter concentrations at the study site. These results are in agreement with the results of receptor modelling by the Auckland Council based on Positive Matrix Factorization (PMF). Our findings also show that contrasting concentration-wind speed relationships exist between marine aerosols and local traffic sources resulting in very noisy and seemingly large random PM10 concentrations. The inclusion of cluster rankings as an input parameter to the ANN model showed a statistically significant (p < 0.005) improvement in the performance of the ANN time series model and also showed better performance in picking up high concentrations. For the presented case study, the correlation coefficient between observed and predicted concentrations improved from 0.77 to 0.79 for PM2.5 and from 0.63 to 0.69 for PM10 and reduced the root mean squared error (RMSE) from 5.00 to 4.74 for PM2.5 and from 6.77 to 6.34 for PM10. The techniques presented here enable the user to obtain an understanding of potential sources and their transport characteristics prior to the implementation of costly chemical analysis techniques or advanced air dispersion models.
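A minimal sketch, assuming scikit-learn, of the cluster-rank idea above: the meteorological inputs are k-means clustered and the cluster label is appended as an extra input to the neural network. The data shapes, coefficients, and noise are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
met = rng.standard_normal((2000, 5))      # wind speed/direction, radiation, T, RH
pm10 = met @ np.array([2.0, -1.0, 0.5, 1.5, -0.5]) + rng.standard_normal(2000)

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(met)
X = np.column_stack([met, clusters])      # cluster label appended as a sixth input
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                     random_state=0).fit(X, pm10)
```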
Wavelet regression model in forecasting crude oil price
NASA Astrophysics Data System (ADS)
Hamid, Mohd Helmie; Shabri, Ani
2017-05-01
This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series at different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series was used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, the WMLR model performs better than the other forecasting techniques tested in this study.
Using Evolved Fuzzy Neural Networks for Injury Detection from Isokinetic Curves
NASA Astrophysics Data System (ADS)
Couchet, Jorge; Font, José María; Manrique, Daniel
In this paper we propose an evolutionary fuzzy neural network system for extracting knowledge from a set of time series containing medical information. The series represent isokinetic curves obtained from a group of patients exercising the knee joint on an isokinetic dynamometer. The system has two parts: i) it analyses the time series input in order to generate a simplified model of an isokinetic curve; ii) it applies a grammar-guided genetic program to obtain a knowledge base represented by a fuzzy neural network. Once the knowledge base has been generated, the system is able to perform knee injury detection. The results suggest that evolved fuzzy neural networks perform better than non-evolutionary approaches and have a high accuracy rate during both the training and testing phases. Additionally, they are robust, as the system is able to self-adapt to changes in the problem without human intervention.
Zhang, Bo; Peng, Beihua; Liu, Mingchu
2012-01-01
This paper presents an overview of the resource use and environmental impact of Chinese industry during 1997-2006. For the purpose of this analysis, the thermodynamic concept of exergy has been employed both to quantify and to aggregate the resource inputs and the environmental emissions arising from the sector. The resource inputs and environmental emissions show an increasing trend in this period. Compared with 47568.7 PJ in 1997, the resource input in 2006 increased by 75.4% and reached 83437.9 PJ, of which 82.5% came from nonrenewable resources, mainly from coal and other energy minerals. Furthermore, the total exergy of environmental emissions was estimated to be 3499.3 PJ in 2006, 1.7 times that in 1997, of which 93.4% was from GHG emissions and only 6.6% from "three wastes" emissions. A rapid increase in nonrenewable resource inputs and GHG emissions over 2002-2006 can be seen, owing to the excessive expansion of resource- and energy-intensive subsectors. Exergy intensity time series, in terms of resource input intensity and environmental emission intensity, are also calculated; the trends are evidently influenced by the macroeconomic situation, particularly by investment-driven economic development in recent years. Corresponding policy implications to guide a more sustainable industrial system are addressed. PMID:22973176
Complexity and non-commutativity of learning operations on graphs.
Atmanspacher, Harald; Filk, Thomas
2006-07-01
We present results from numerical studies of supervised learning operations in small recurrent networks considered as graphs, leading from a given set of input conditions to predetermined outputs. Graphs that have optimized their output for particular inputs with respect to predetermined outputs are asymptotically stable and can be characterized by attractors, which form a representation space for an associative multiplicative structure of input operations. As the mapping from a series of inputs onto a series of such attractors generally depends on the sequence of inputs, this structure is generally non-commutative. Moreover, the size of the set of attractors, indicating the complexity of learning, is found to behave non-monotonically as learning proceeds. A tentative relation between this complexity and the notion of pragmatic information is indicated.
Note: Characterization and test of a high input impedance RF amplifier for series nanowire detector
NASA Astrophysics Data System (ADS)
Wan, Chao; Pei, Yufeng; Jiang, Zhou; Kang, Lin; Wu, Peiheng
2016-09-01
We designed a high input impedance RF amplifier based on TowerJazz's 0.18 μm SiGe BiCMOS process for series nanowire detectors. The characterization of its gain and input impedance with a vector network analyzer is described in detail because of the amplifier's specificity: the actual gain, 15 dB, is the measured value minus 6 dB, a correction that is easy to overlook. Its input impedance can be modelled as the equivalent 6.7 kΩ ∥ 3.4 pF by fitting the measurement, and the accuracy of this equivalent is verified. The measurement procedure provides a good reference for characterizing similar special-purpose amplifiers with unmatched impedance.
Tutu, Hiroki
2011-06-01
Stochastic resonance (SR) enhanced by time-delayed feedback control is studied. The system in the absence of control is described by a Langevin equation for a bistable system and possesses a usual SR response. The control with the feedback loop, whose delay time equals one-half of the period (2π/Ω) of the input signal, gives rise to a noise-induced oscillatory switching cycle between two states in the output time series, while its average frequency is just below Ω in the small-noise regime. As the noise intensity D approaches an appropriate level, the noise constructively works to adapt the frequency of the switching cycle to Ω, and this changes the dynamics into a state wherein the phase of the output signal is entrained to that of the input signal from its phase-slipped state. The behavior is characterized by the power loss of the external signal, or response function. This paper treats the response function with a dichotomic model. A method of delay-coordinate series expansion, which reduces a non-Markovian transition probability flux to a series of memory fluxes on a discrete delay-coordinate system, is proposed. Its primitive implementation suggests that the method can be a potential tool for a systematic analysis of the SR phenomenon with a delayed feedback loop. We show that the D-dependent behavior of the poles of a finite Laplace transform of the response function qualitatively characterizes the structure of the power loss, and we also show analytical results for the correlation function and the power spectral density.
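A minimal sketch of the controlled bistable Langevin dynamics described above, dx = (x - x³ + A sin(Ωt) + K x(t - τ)) dt + √(2D) dW with the delay τ set to half the input period, integrated by Euler-Maruyama. All parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

dt, steps = 1e-3, 200_000
A, Omega, K, D = 0.1, 1.0, 0.2, 0.3      # illustrative parameters
tau = np.pi / Omega                      # half of the 2*pi/Omega input period
lag = int(round(tau / dt))

rng = np.random.default_rng(6)
x = np.zeros(steps)
for k in range(steps - 1):
    delayed = x[k - lag] if k >= lag else 0.0
    drift = x[k] - x[k] ** 3 + A * np.sin(Omega * k * dt) + K * delayed
    x[k + 1] = x[k] + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
# Near an appropriate D, switching between the wells x ~ -1 and x ~ +1
# entrains to the input frequency, as described above.
```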
Effect of spatial averaging on multifractal properties of meteorological time series
NASA Astrophysics Data System (ADS)
Hoffmann, Holger; Baranowski, Piotr; Krzyszczak, Jaromir; Zubik, Monika
2016-04-01
Introduction: Process-based models for large-scale simulations require input of agro-meteorological quantities that are often in the form of time series of coarse spatial resolution. Knowledge of their scaling properties is therefore fundamental for transferring locally measured fluctuations to larger scales and vice versa. However, the scaling analysis of these quantities is complicated by the presence of localized trends and non-stationarities. Here we assess how spatially aggregating meteorological data to coarser resolutions affects the data's temporal scaling properties. While it is known that spatial aggregation may affect spatial data properties (Hoffmann et al., 2015), it is unknown how it affects temporal data properties. The objective of this study was therefore to characterize the aggregation effect (AE) with regard to both temporal and spatial input data properties, considering scaling properties (i.e. statistical self-similarity) of the chosen agro-meteorological time series through multifractal detrended fluctuation analysis (MFDFA).

Materials and Methods: Time series covering the years 1982-2011 were spatially averaged from 1 km to 10, 25, 50 and 100 km resolution to assess the impact of spatial aggregation. Daily minimum, mean and maximum air temperature (2 m), precipitation, global radiation, wind speed and relative humidity (Zhao et al., 2015) were used. To reveal the multifractal structure of the time series, we used the procedure described in Baranowski et al. (2015). The diversity of the studied multifractals was evaluated by the parameters of the time series spectra. In order to analyse differences in multifractal properties relative to the 1 km resolution grids, data at coarser resolutions were disaggregated to 1 km.

Results and Conclusions: Analysing the effect of spatial averaging on multifractal properties, we observed that the spatial patterns of the multifractal spectrum (MS) of all meteorological variables differed from the 1 km grids, with MS parameters biased by -29.1% (precipitation; width of MS) up to >4% (minimum temperature and radiation; asymmetry of MS). The spatial variability of MS parameters was also strongly affected at the highest aggregation (100 km). These results confirm that spatial data aggregation may strongly affect temporal scaling properties, which should be taken into account when upscaling for large-scale studies.

Acknowledgements: The study was conducted within FACCE MACSUR. See Baranowski et al. (2015) for details on funding.

References: Baranowski, P., Krzyszczak, J., Sławiński, C. et al. (2015). Climate Research 65, 39-52. Hoffmann, H., Zhao, G., Van Bussel, L.G.J. et al. (2015). Climate Research 65, 53-69. Zhao, G., Siebert, S., Rezaei, E. et al. (2015). Agricultural and Forest Meteorology 200, 156-171.
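A minimal, illustration-only sketch of MFDFA as used above: segment-wise polynomial detrending of the cumulative profile, q-order fluctuation functions, and generalized Hurst exponents h(q) from their log-log slopes. The scales, q values, and test signal are illustrative choices.

```python
import numpy as np

def mfdfa(x, scales, q_values, order=1):
    y = np.cumsum(x - np.mean(x))                      # profile of the series
    log_f = np.empty((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(y) // s
        rms = np.empty(n_seg)
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms[v] = np.mean((seg - trend) ** 2)       # F^2(v, s) per segment
        for i, q in enumerate(q_values):
            if q == 0:                                 # logarithmic average for q = 0
                log_f[i, j] = 0.5 * np.mean(np.log(rms))
            else:
                log_f[i, j] = np.log(np.mean(rms ** (q / 2.0))) / q
    # Generalized Hurst exponent h(q): slope of log F_q(s) against log s.
    return {q: np.polyfit(np.log(scales), log_f[i], 1)[0]
            for i, q in enumerate(q_values)}

scales = np.array([16, 32, 64, 128, 256])
h = mfdfa(np.random.default_rng(7).standard_normal(4096), scales, [-2, 0, 2])
```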
New Method for Solving Inductive Electric Fields in the Ionosphere
NASA Astrophysics Data System (ADS)
Vanhamäki, H.
2005-12-01
We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large-scale current systems are generally quite slow, on timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field together with the Hall and Pedersen conductances; the output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing the curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.
Real time wave forecasting using wind time history and numerical model
NASA Astrophysics Data System (ADS)
Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.
Operational activities in the ocean, such as planning for structural repairs or fishing expeditions, require real time prediction of waves over typical time durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for stations where costly wave buoys are not deployed and only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of an artificial neural network (ANN), genetic programming (GP) and model tree (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data were generated using a numerical model. The predicted waves obtained using the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP and MT were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
Stern, Michelle A.; Anderson, Frank A.; Flint, Lorraine E.; Flint, Alan L.
2018-05-03
In situ soil moisture datasets are important inputs used to calibrate and validate watershed, regional, or statewide modeled and satellite-based soil moisture estimates. The soil moisture dataset presented in this report includes hourly time series of the following: soil temperature, volumetric water content, water potential, and total soil water content. Data were collected by the U.S. Geological Survey at five locations in California: three sites in the central Sierra Nevada and two sites in the northern Coast Ranges. This report provides a description of each of the study areas, procedures and equipment used, processing steps, and time series data from each site in the form of comma-separated values (.csv) tables.
NASA Astrophysics Data System (ADS)
Febrian Umbara, Rian; Tarwidi, Dede; Budi Setiawan, Erwin
2018-03-01
The paper discusses the prediction of the Jakarta Composite Index (JCI) on the Indonesia Stock Exchange. The study is based on JCI historical data over 1286 days, used to predict the value of the JCI one day ahead. This paper proposes a prediction done in two stages: the first stage uses Fuzzy Time Series (FTS) to predict the values of ten technical indicators, and the second stage uses Support Vector Regression (SVR) to predict the value of the JCI one day ahead, resulting in the hybrid prediction model FTS-SVR. The performance of this combined prediction model is compared with that of a single-stage prediction model using SVR only. The ten technical indicators are used as input for each model.
Ahmed, Ashik; Al-Amin, Rasheduzzaman; Amin, Ruhul
2014-01-01
This paper proposes the design of a Static Synchronous Series Compensator (SSSC)-based damping controller to enhance the stability of a Single Machine Infinite Bus (SMIB) system by means of the Invasive Weed Optimization (IWO) technique. A conventional PI controller is used as the SSSC damping controller, taking rotor speed deviation as its input. The damping controller parameters are tuned using IWO against a cost function based on the time integral of absolute error. The performance of the IWO-based controller is compared to that of a Particle Swarm Optimization (PSO)-based controller. Time-domain simulation results are presented, and the performance of the controllers under different loading conditions and fault scenarios is studied in order to illustrate the effectiveness of the IWO-based design approach.
Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language
NASA Astrophysics Data System (ADS)
Tanadi, Theo
2018-03-01
Part-of-speech tagging (POS tagging) is an important part of natural language processing. Many methods have been used for this task, including neural networks. This paper models a neural network that performs POS tagging: a time series neural network is modelled to solve the problems that a basic neural network faces when attempting POS tagging. In order to enable the neural network to take text data as input, the text data is first clustered using Brown Clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tags, suffixes, and affixes of previous words are also fed to the neural network.
Enhancements to the Branched Lagrangian Transport Modeling System
Jobson, Harvey E.
1997-01-01
The Branched Lagrangian Transport Model (BLTM) has received wide use within the U.S. Geological Survey over the past 10 years. This report documents the enhancements and modifications that have been made to this modeling system since it was first introduced. The programs in the modeling system are arranged into five levels: programs to generate time series of meteorological data (EQULTMP, SOLAR), programs to process time-series data (INTRP, MRG), programs to build input files for the transport model (BBLTM, BQUAL2E), the model with defined reaction kinetics (BLTM, QUAL2E), and post-processor plotting programs (CTPLT, CXPLT). An example application is presented to illustrate how the modeling system can be used to simulate 10 water-quality constituents in the Chattahoochee River below Atlanta, Georgia.
Enabling Web-Based Analysis of CUAHSI HIS Hydrologic Data Using R and Web Processing Services
NASA Astrophysics Data System (ADS)
Ames, D. P.; Kadlec, J.; Bayles, M.; Seul, M.; Hooper, R. P.; Cummings, B.
2015-12-01
The CUAHSI Hydrologic Information System (CUAHSI HIS) provides open access to a large number of hydrological time series observation and modeled data from many parts of the world. Several software tools have been designed to simplify searching and access to the CUAHSI HIS datasets. These software tools include: Desktop client software (HydroDesktop, HydroExcel), developer libraries (WaterML R Package, OWSLib, ulmo), and the new interactive search website, http://data.cuahsi.org. An issue with using the time series data from CUAHSI HIS for further analysis by hydrologists (for example for verification of hydrological and snowpack models) is the large heterogeneity of the time series data. The time series may be regular or irregular, contain missing data, have different time support, and be recorded in different units. R is a widely used computational environment for statistical analysis of time series and spatio-temporal data that can be used to assess fitness and perform scientific analyses on observation data. R includes the ability to record a data analysis in the form of a reusable script. The R script together with the input time series dataset can be shared with other users, making the analysis more reproducible. The major goal of this study is to examine the use of R as a Web Processing Service for transforming time series data from the CUAHSI HIS and sharing the results on the Internet within HydroShare. HydroShare is an online data repository and social network for sharing large hydrological data sets such as time series, raster datasets, and multi-dimensional data. It can be used as a permanent cloud storage space for saving the time series analysis results. We examine the issues associated with running R scripts online: including code validation, saving of outputs, reporting progress, and provenance management. An explicit goal is that the script which is run locally should produce exactly the same results as the script run on the Internet. Our design can be used as a model for other studies that need to run R scripts on the web.
Estimation of different data compositions for early-season crop type classification.
Hao, Pengyu; Wu, Mingquan; Niu, Zheng; Wang, Li; Zhan, Yulin
2018-01-01
Timely and accurate crop type distribution maps are important inputs for crop yield estimation and production forecasting, as multi-temporal images can observe phenological differences among crops. Therefore, time series remote sensing data are essential for crop type mapping, and image composition has commonly been used to improve the quality of the image time series. However, the optimal composition period is unclear, as long composition periods (such as compositions lasting half a year) are less informative and short composition periods lead to information redundancy and missing pixels. In this study, we initially acquired daily 30 m Normalized Difference Vegetation Index (NDVI) time series by fusing MODIS, Landsat, Gaofen and Huanjing (HJ) NDVI, and then composited the NDVI time series using four strategies (daily, 8-day, 16-day, and 32-day). We used Random Forest to identify crop types and evaluated the classification performance of the NDVI time series generated from the four composition strategies in two study regions in Xinjiang, China. Results indicated that crop classification performance improved as crop separabilities and classification accuracies increased, and classification uncertainties dropped in the green-up stage of the crops. When using the daily NDVI time series, overall accuracies saturated at day 113 and day 116 in Bole and Luntai, with saturated overall accuracies (OAs) of 86.13% and 91.89%, respectively. Cotton could be identified 40-60 days and 35-45 days before harvest in Bole and Luntai when using the daily, 8-day and 16-day composition NDVI time series, since both producer's accuracies (PAs) and user's accuracies (UAs) were higher than 85%. Among the four compositions, the daily NDVI time series generated the highest classification accuracies. Although the 8-day, 16-day and 32-day compositions had similar saturated overall accuracies (around 85% in Bole and 83% in Luntai), the 8-day and 16-day compositions achieved these accuracies around day 155 in Bole and day 133 in Luntai, earlier than the 32-day composition (day 170 in both Bole and Luntai). Therefore, when the daily NDVI time series cannot be acquired, the 16-day composition is recommended.
NASA Astrophysics Data System (ADS)
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2018-06-01
Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
NASA Technical Reports Server (NTRS)
Black, S.; Macdonald, R.; Kelly, M.
1993-01-01
U-series disequilibrium analyses have been conducted on samples from the Olkaria rhyolite centers, with ages available for all but one center using both internal and whole rock isochrons. 67 percent of the rhyolites analyzed show U-Th disequilibrium, ranging from 27 percent excess thorium to 36 percent excess uranium. Internal and whole rock isochrons give crystallization/formation ages between 65 ka and 9 ka; in every case these are substantially older than the eruptive dates. The residence times of the rhyolites (U-Th age minus the eruption date) have decreased almost linearly with time, from 45 ka to 7 ka, suggesting a possible increase of activity within the system related to increased basaltic input. The long residence times are mirrored by large Rn-222 fluxes from the centers which cannot be explained by larger U contents.
Temporal dynamics of catchment transit times from stable isotope data
NASA Astrophysics Data System (ADS)
Klaus, Julian; Chun, Kwok P.; McGuire, Kevin J.; McDonnell, Jeffrey J.
2015-06-01
Time variant catchment transit time distributions are fundamental descriptors of catchment function but are not yet fully understood, characterized, and modeled. Here we present a new approach for use with standard runoff and tracer data sets that is based on tracking of tracer and age information and time variant catchment mixing. Our new approach is able to deal with nonstationarity of flow paths and catchment mixing, and with an irregular shape of the transit time distribution. The approach extracts information on catchment mixing from the stable isotope time series instead of relying on prior assumptions about mixing or the shape of the transit time distribution. We first demonstrate proof of concept of the approach with artificial data; the Nash-Sutcliffe efficiencies in tracer and instantaneous transit times were >0.9. The model provides very accurate estimates of time variant transit times when the boundary conditions and fluxes are fully known. We then tested the model with real rainfall-runoff flow and isotope tracer time series from the H.J. Andrews Watershed 10 (WS10) in Oregon. Model efficiency was 0.37 for the δ18O modeling over a 2 year time series and increased to 0.86 for the second year, underlining the need for long tracer time series with a long overlap of tracer input and output. The approach was able to determine the time variant transit time of WS10 with field data and showed how it follows the storage dynamics and related changes in flow paths, with wet, high-flow periods showing clearly shorter transit times than dry low-flow periods.
NASA Astrophysics Data System (ADS)
de Lautour, Oliver R.; Omenzetter, Piotr
2010-07-01
Developed for studying long sequences of regularly sampled data, time series analysis methods are being increasingly investigated for use in Structural Health Monitoring (SHM). In this research, autoregressive (AR) models were used to fit the acceleration time histories obtained from two experimental structures, a 3-storey bookshelf structure and the ASCE Phase II Experimental SHM Benchmark Structure, in the undamaged state and in a limited number of damaged states. The coefficients of the AR models were considered to be damage-sensitive features and used as input to an Artificial Neural Network (ANN). The ANN was trained to classify damage cases or estimate remaining structural stiffness. The results showed that the combination of AR models and ANNs is an efficient tool for damage classification and estimation, performing well with a small number of damage-sensitive features and a limited number of sensors.
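A minimal sketch, assuming statsmodels and scikit-learn, of the feature pipeline described above: fit an AR model to each acceleration record, take the coefficients as damage-sensitive features, and train a small neural network to classify damage states. The synthetic AR(2) records standing in for measured accelerations are invented for illustration.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)

def ar_features(record, order=10):
    """AR coefficients as damage-sensitive features (intercept dropped)."""
    return AutoReg(record, lags=order).fit().params[1:]

def make_record(damaged, n=1024):
    # Damage is mimicked by a small shift in the simulated system's dynamics.
    a = 1.5 if damaged else 1.6
    x = np.zeros(n)
    for k in range(2, n):
        x[k] = a * x[k - 1] - 0.9 * x[k - 2] + rng.standard_normal()
    return x

labels = np.array([0] * 40 + [1] * 40)
X = np.array([ar_features(make_record(bool(d))) for d in labels])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000,
                    random_state=0).fit(X, labels)
```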
Burau, J.R.; Simpson, M.R.; Cheng, R.T.
1993-01-01
Water-velocity profiles were collected at the west end of Carquinez Strait, San Francisco Bay, California, from March to November 1988, using an acoustic Doppler current profiler (ADCP). These data are a series of 10-minute-averaged water velocities collected at 1-meter vertical intervals (bins) in the 16.8-meter water column, beginning 2.1 meters above the estuary bed. To examine the vertical structure of the horizontal water velocities, the data are separated into individual time-series by bin and then used for time-series plots, harmonic analysis, and as input to digital filters. Three-dimensional graphic renditions of the filtered data are also used in the analysis. Harmonic analysis of the time-series data from each bin indicates that the dominant (12.42 hour or M2) partial tidal currents reverse direction near the bottom, on average, 20 minutes sooner than M2 partial tidal currents near the surface. Residual (nontidal) currents derived from the filtered data indicate that currents near the bottom are predominantly up-estuary during neap tides and down-estuary during the more energetic spring tides.
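A minimal sketch of the harmonic analysis used above: a least-squares fit of the M2 constituent (12.42 h period) to one bin's velocity series yields its amplitude and phase; comparing phases across bins would show the near-bed lead. The amplitude, phase, and noise of the synthetic series are invented for illustration.

```python
import numpy as np

M2_PERIOD_H = 12.42
t_h = np.arange(0, 30 * 24, 1 / 6.0)       # 10-minute samples over 30 days (hours)
omega = 2 * np.pi / M2_PERIOD_H

u = 0.8 * np.cos(omega * t_h - 0.4) \
    + 0.05 * np.random.default_rng(9).standard_normal(t_h.size)

# Solve u ~ a*cos(omega t) + b*sin(omega t) + mean for one bin's series.
G = np.column_stack([np.cos(omega * t_h), np.sin(omega * t_h), np.ones_like(t_h)])
a, b, mean_flow = np.linalg.lstsq(G, u, rcond=None)[0]
amplitude = np.hypot(a, b)
phase = np.arctan2(b, a)                   # radians; convert to a time lag below
lag_minutes = phase / omega * 60.0
```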
Connectionist Architectures for Time Series Prediction of Dynamical Systems
NASA Astrophysics Data System (ADS)
Weigend, Andreas Sebastian
We investigate the effectiveness of connectionist networks for predicting the future continuation of temporal sequences. The problem of overfitting, particularly serious for short records of noisy data, is addressed by the method of weight-elimination: a term penalizing network complexity is added to the usual cost function in back-propagation. We describe the dynamics of the procedure and clarify the meaning of the parameters involved. From a Bayesian perspective, the complexity term can be usefully interpreted as an assumption about prior distribution of the weights. We analyze three time series. On the benchmark sunspot series, the networks outperform traditional statistical approaches. We show that the network performance does not deteriorate when there are more input units than needed. In the second example, the notoriously noisy foreign exchange rates series, we pick one weekday and one currency (DM vs. US). Given exchange rate information up to and including a Monday, the task is to predict the rate for the following Tuesday. Weight-elimination manages to extract a significant part of the dynamics and makes the solution interpretable. In the third example, the networks predict the resource utilization of a chaotic computational ecosystem for hundreds of steps forward in time.
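A minimal sketch of the weight-elimination penalty described above: the cost adds λ Σ (w²/w0²)/(1 + w²/w0²) to the usual error, so small weights are pushed toward zero while large weights incur a near-constant cost. The λ and w0 values are illustrative; shown here as the penalty and its gradient contribution in numpy.

```python
import numpy as np

def weight_elimination(w, lam=1e-4, w0=1.0):
    """Weight-elimination penalty and its gradient for a weight vector."""
    scaled = (w / w0) ** 2
    penalty = lam * np.sum(scaled / (1.0 + scaled))
    grad = lam * (2.0 * w / w0 ** 2) / (1.0 + scaled) ** 2   # d(penalty)/dw
    return penalty, grad

w = np.array([-2.0, -0.1, 0.0, 0.05, 1.5])
p, g = weight_elimination(w)
# In back-propagation, g is simply added to the error gradient of each weight.
```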
Lumped Nonlinear System Analysis with Volterra Series.
1980-04-01
The second-order term of the Volterra functional series relates the output to pairwise products of past inputs, \(y_2(t) = \int_0^\infty \int_0^\infty h_2(\tau_1,\tau_2)\, x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2\). Consider the input signal comprising two unit sinusoidal signals at frequencies \(\omega_a\) and \(\omega_b\). [Only table-of-contents fragments of the scanned report survive beyond this point: Nonlinear System Analysis Methods; Objectives of the Investigation; Organization of the Report; Chapter 2, Volterra Functional Series.]
A Markovian model of evolving world input-output network
Isacchini, Giulio
2017-01-01
The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of structural power of the economies that is comparable to GDP shares as the traditional index of economies' welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of nodes influenced by a shock in the activity of a given node to the total number of nodes, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money. PMID:29065145
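Two of the chain-level quantities used can be sketched compactly for a row-stochastic, ergodic transition matrix `P` (note that some texts define the Kemeny constant with an additional +1, depending on convention):

```python
# Steady-state distribution and Kemeny constant from the eigen-structure of P.
import numpy as np

def kemeny_and_steady_state(P):
    evals, evecs = np.linalg.eig(P.T)           # left eigen-structure of P
    k = int(np.argmin(np.abs(evals - 1.0)))     # the unit eigenvalue
    pi = np.real(evecs[:, k] / evecs[:, k].sum())
    lam = np.delete(evals, k)                   # the n-1 non-unit eigenvalues
    K = float(np.sum(1.0 / (1.0 - lam)).real)   # Kemeny constant
    return pi, K
```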
Genetic programming and serial processing for time series classification.
Alfaro-Cid, Eva; Sharman, Ken; Esparcia-Alcázar, Anna I
2014-01-01
This work describes an approach devised by the authors for time series classification. In our approach genetic programming is used in combination with a serial processing of data, where the last output is the result of the classification. The use of genetic programming for classification, although still a field where more research is needed, is not new. However, the application of genetic programming to classification tasks is normally done by considering the input data as a feature vector. That is, to the best of our knowledge, there are no examples in the genetic programming literature of approaches where the time series data are processed serially and the last output is considered as the classification result. The serial processing approach presented here fills a gap in the existing literature. This approach was tested on three different problems. Two of them are real world problems whose data were gathered for online or conference competitions. Since published results exist for these two problems, this gives us the chance to compare the performance of our approach against top performing methods. The serial processing of data in combination with genetic programming obtained competitive results in both competitions, showing its potential for solving time series classification problems. The main advantage of our serial processing approach is that it can easily handle very large datasets.
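A toy illustration of the serial-processing idea (the leaky-accumulator "program" below is invented; in the paper such programs are evolved by genetic programming):

```python
# Fold a candidate program over the series; the final output gives the class.
import numpy as np

def serial_classify(series, program, state=0.0):
    for x in series:
        state = program(x, state)       # program: (x_t, state) -> new state
    return int(state > 0.0)             # last output thresholded into a label

prog = lambda x, s: 0.9 * s + np.tanh(x)        # an invented candidate program
label = serial_classify(np.random.randn(500), prog)
```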
Can we use Earth Observations to improve monthly water level forecasts?
NASA Astrophysics Data System (ADS)
Slater, L. J.; Villarini, G.
2017-12-01
Dynamical-statistical hydrologic forecasting approaches benefit from different strengths in comparison with traditional hydrologic forecasting systems: they are computationally efficient, can integrate and `learn' from a broad selection of input data (e.g., General Circulation Model (GCM) forecasts, Earth Observation time series, teleconnection patterns), and can take advantage of recent progress in machine learning (e.g. multi-model blending, post-processing and ensembling techniques). Recent efforts to develop a dynamical-statistical ensemble approach for forecasting seasonal streamflow using both GCM forecasts and changing land cover have shown promising results over the U.S. Midwest. Here, we use climate forecasts from several GCMs of the North American Multi Model Ensemble (NMME) alongside 15-minute stage time series from the National River Flow Archive (NRFA) and land cover classes extracted from the European Space Agency's Climate Change Initiative 300 m annual Global Land Cover time series. With these data, we conduct systematic long-range probabilistic forecasting of monthly water levels in UK catchments over timescales ranging from one to twelve months ahead. We evaluate the improvement in model fit and model forecasting skill that comes from using land cover classes as predictors in the models. This work opens up new possibilities for combining Earth Observation time series with GCM forecasts to predict a variety of hazards from space using data science techniques.
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follow a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proven theoretical limit.
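A minimal sketch of a cycle-topology reservoir in this spirit; the paper fixes the input sign pattern deterministically, whereas a random pattern stands in here, and the linear ridge readout is standard reservoir practice rather than anything specific to this abstract:

```python
# Cycle reservoir: state i feeds state i+1 (mod N) with a single weight r.
import numpy as np

N, r, v = 100, 0.9, 0.5
W = np.zeros((N, N))
W[np.arange(1, N), np.arange(N - 1)] = r
W[0, N - 1] = r                                   # close the cycle
w_in = v * np.where(np.random.rand(N) < 0.5, -1.0, 1.0)   # sign pattern (random
# stand-in; the deterministic construction uses a fixed aperiodic sequence)

def run_reservoir(u):
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)           # states then feed a linear ridge readout
```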
NASA Astrophysics Data System (ADS)
Sun, L. Qing; Feng, Feng X.
2014-11-01
In this study, we first built and compared two different climate datasets for the Wuling mountainous area in 2010: one that accounted for topographical effects during the ANUSPLIN interpolation, referred to as the terrain-based climate dataset, and one that did not, called the ordinary climate dataset. Then, we quantified the topographical effects of climatic inputs on NPP estimation by feeding the two climate datasets to the same ecosystem model, the Boreal Ecosystem Productivity Simulator (BEPS), to evaluate the importance of considering relief when estimating NPP. Finally, we identified the primary variables contributing to the topographical effects through a series of experiments given an overall accuracy of the model output for NPP. The results showed that: (1) The terrain-based climate dataset presented more reliable topographic information and agreed more closely with the station dataset than the ordinary climate dataset did, in terms of daily mean values over the 365-day series. (2) On average, the ordinary climate dataset underestimated NPP by 12.5% compared with the terrain-based climate dataset over the whole study area. (3) The primary climate variables contributing to the topographical effects of climatic inputs for the Wuling mountainous area were temperatures, which suggests that it is necessary to correct temperature differences for estimating NPP accurately in such complex terrain.
NASA Astrophysics Data System (ADS)
Van Pelt, S.; Kohfeld, K. E.; Allen, D. M.
2015-12-01
The decline of the Mayan Civilization is thought to have been caused by a series of droughts that affected the Yucatan Peninsula during the Terminal Classic Period (T.C.P.), 800-1000 AD. The goals of this study are two-fold: (a) to compare paleo-model simulations of the past 1000 years with a compilation of multiple proxies of changes in moisture conditions for the Yucatan Peninsula during the T.C.P. and (b) to use this comparison to inform the modeling of groundwater recharge in this region, with a focus on generating the daily climate data series needed as input to a groundwater recharge model. To achieve the first objective, we compiled a dataset of five proxies from seven locations across the Yucatan Peninsula, to be compared with temperature and precipitation output from the Community Climate System Model Version 4 (CCSM4), which is part of the Coupled Model Intercomparison Project Phase 5 (CMIP5) past1000 experiment. The proxy dataset includes oxygen isotopes from speleothems and gastropod/ostracod shells (11 records); and sediment density, mineralogy, and magnetic susceptibility records from lake sediment cores (3 records). The proxy dataset is supplemented by a compilation of reconstructed temperatures using pollen and tree ring records for North America (archived in the PAGES2k global network data). Our preliminary analysis suggests that many of these datasets show evidence of a drier and warmer climate on the Yucatan Peninsula around the T.C.P. when compared to modern conditions, although the amplitude and timing of individual warming and drying events vary between sites. This comparison with modeled output will ultimately be used to inform backward shift factors that will be input to a stochastic weather generator. These shift factors will be based on monthly changes in temperature and precipitation and applied to a modern daily climate time series for the Yucatan Peninsula to produce a daily climate time series for the T.C.P.
NASA Astrophysics Data System (ADS)
Bock, Y.; Fang, P.; Moore, A. W.; Kedar, S.; Liu, Z.; Owen, S. E.; Glasscoe, M. T.
2016-12-01
Detection of time-dependent crustal deformation relies on the availability of accurate surface displacements, proper time series analysis to correct for secular motion, coseismic and non-tectonic instrument offsets, periodic signatures at different frequencies, and a realistic estimate of uncertainties for the parameters of interest. As part of the NASA Solid Earth Science ESDR System (SESES) project, daily displacement time series are estimated for about 2500 stations, focused on tectonic plate boundaries and having a global distribution for accessing the terrestrial reference frame. The "combined" time series are optimally estimated from independent JPL GIPSY and SIO GAMIT solutions, using a consistent set of input epoch-date coordinates and metadata. The longest time series began in 1992; more than 30% of the stations have experienced one or more of 35 major earthquakes with significant postseismic deformation. Here we present three examples of time-dependent deformation that have been detected in the SESES displacement time series. (1) Postseismic deformation is a fundamental time-dependent signal that indicates a viscoelastic response of the crust/mantle lithosphere, afterslip, or poroelastic effects at different spatial and temporal scales. It is critical to identify and estimate the extent of postseismic deformation in both space and time not only for insight into the crustal deformation and earthquake cycles and their underlying physical processes, but also to reveal other time-dependent signals. We report on our database of characterized postseismic motions using a principal component analysis to isolate different postseismic processes. (2) Starting with the SESES combined time series and applying a time-dependent Kalman filter, we examine episodic tremor and slow slip (ETS) in the Cascadia subduction zone. We report on subtle slip details, allowing investigation of the spatiotemporal relationship between slow slip transients and tremor and their underlying physical mechanisms. (3) We present evolving strain dilatation and shear rates based on the SESES velocities for regional subnetworks as a metric for assigning earthquake probabilities and detection of possible time-dependent deformation related to underlying physical processes.
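The kind of trajectory model underlying such series can be sketched as a linear least-squares fit with secular rate, annual and semiannual terms, a coseismic offset, and logarithmic postseismic decay; `t` (decimal years), `y`, `t_eq`, and the decay constant are assumed inputs, and the actual SESES estimation is far more elaborate:

```python
# One-station, one-component trajectory model fitted by least squares.
import numpy as np

def design_matrix(t, t_eq, tau=0.1):
    H = (t >= t_eq).astype(float)                  # Heaviside step at the quake
    return np.column_stack([
        np.ones_like(t), t,                        # offset and secular rate
        np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),    # annual
        np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),    # semiannual
        H,                                         # coseismic offset
        H * np.log1p(np.maximum(t - t_eq, 0.0) / tau),   # log postseismic decay
    ])

params, *_ = np.linalg.lstsq(design_matrix(t, t_eq=2004.74), y, rcond=None)
secular_velocity = params[1]       # the quantity most at risk of postseismic bias
```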
NASA Astrophysics Data System (ADS)
Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.
2012-12-01
In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment, in which the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions of hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSARM framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km2 experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules. MSARMs showed that the dependence of current observations on past inputs, observed by transport models often in the form of long-tailed travel time and residence time distributions, can be efficiently explained by non-stationarity either of the system input (climatic variability) and/or the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short (event) and longer-term (inter-event) and wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool to analyze the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
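For intuition, the simplest member of this family, a two-state Markov-switching AR(1), can be simulated in a few lines (all parameter values invented):

```python
# Simulate a hidden two-state chain and a state-dependent AR(1) observation.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.95, 0.05],                 # hidden-state transition matrix
              [0.10, 0.90]])
phi, mu, sigma = [0.8, 0.3], [0.0, 2.0], [0.5, 1.5]   # per-state AR parameters

s, y = 0, [0.0]
for _ in range(999):
    s = rng.choice(2, p=P[s])               # hidden Markov chain step
    y.append(mu[s] + phi[s] * (y[-1] - mu[s]) + sigma[s] * rng.normal())
```

The Bayesian machinery described in the paper runs this generative logic in reverse, inferring the hidden states and the autoregressive order from the observed series.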
Nonlinear ARMA models for the D(st) index and their physical interpretation
NASA Technical Reports Server (NTRS)
Vassiliadis, D.; Klimas, A. J.; Baker, D. N.
1996-01-01
Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model to the nonlinear damped oscillator physical model. The oscillator parameters, the growth and decay, the oscillation frequencies and the coupling strength to the input are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
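The AR(2)-to-oscillator part of such a mapping can be written down directly: discretizing the damped oscillator \(\ddot{x} + 2\gamma\dot{x} + \omega_0^2 x = u\) with step `dt` gives poles \(e^{(-\gamma\pm i\omega_d)\,dt}\), from which the AR coefficients determine the physical parameters (a sketch of the linear second-order case only; the paper's nonlinear ARMA treatment is richer):

```python
# Recover oscillator parameters from AR(2) coefficients a1, a2
# (valid for complex-conjugate poles, i.e. a2 < 0 and |a1| < 2*sqrt(-a2)).
import numpy as np

def oscillator_params(a1, a2, dt):
    gamma = -np.log(-a2) / (2.0 * dt)               # since a2 = -exp(-2*gamma*dt)
    omega_d = np.arccos(a1 / (2.0 * np.sqrt(-a2))) / dt   # damped frequency
    omega0 = np.hypot(omega_d, gamma)               # undamped natural frequency
    return gamma, omega_d, omega0
```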
Comparison of ITRF2014 station coordinate input time series of DORIS, VLBI and GNSS
NASA Astrophysics Data System (ADS)
Tornatore, Vincenza; Tanır Kayıkçı, Emine; Roggero, Marco
2016-12-01
In this paper station coordinate time series from three space geodesy techniques that have contributed to the realization of the International Terrestrial Reference Frame 2014 (ITRF2014) are compared. In particular the height component time series extracted from official combined intra-technique solutions submitted for ITRF2014 by the DORIS, VLBI and GNSS Combination Centers have been investigated. The main goal of this study is to assess the level of agreement among these three space geodetic techniques. A novel analytic method, modeling time series as discrete-time Markov processes, is presented and applied to the compared time series. The analysis method has proven to be particularly suited to obtaining quasi-cyclostationary residuals, an important property for carrying out a reliable harmonic analysis. We looked for common signatures among the three techniques. Frequencies and amplitudes of the detected signals are reported along with their percentage of incidence. Our comparison shows that two of the estimated signals, with one-year and 14-day periods, are common to all the techniques. Different hypotheses on the nature of the signal with a period of 14 days are presented. As a final check we compared the estimated velocities and their standard deviations (STD) for sites with co-located VLBI, GNSS and DORIS stations, obtaining a good agreement among the three techniques both in the horizontal (1.0 mm/yr mean STD) and in the vertical (0.7 mm/yr mean STD) component, although some sites show larger STDs, mainly due to lack of data, different data spans or noisy observations.
Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng
This paper proposes a new hybrid method for super short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time varying behavior of solar radiance. In addition, solar power dynamics are fast and essentially inertia-free. An accurate super short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), while the RNN architecture is based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved constituent series for prediction. The ARMA model is employed as a linear predictor while NARX is used as a nonlinear pattern recognition tool to estimate and compensate for the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels and the results are compared with some of the common and most recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in prediction precision.
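A hedged sketch of the hybrid's structure, with an MLP on lagged residuals standing in for the NARX network; the wavelet stage assumes PyWavelets' additive multiresolution analysis (`pywt.mra`, available in recent releases, with the series length divisible by 4 for the SWT), and all orders and sizes are invented:

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(power, lags=6):
    power = np.asarray(power, dtype=float)
    # 1) Additive wavelet decomposition of the solar power series.
    comps = pywt.mra(power, pywt.Wavelet("db4"), level=2, transform="swt")
    # 2) Linear stage: one-step ARMA(2,1) forecast per component, summed.
    fits = [ARIMA(c, order=(2, 0, 1)).fit() for c in comps]
    linear_fc = sum(float(f.forecast(1)[0]) for f in fits)
    # 3) Nonlinear stage: an MLP predicts the in-sample error of the linear
    #    stage from its own recent values (a simple stand-in for NARX).
    err = power - sum(f.fittedvalues for f in fits)
    X = np.column_stack([err[i:len(err) - lags + i] for i in range(lags)])
    mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000).fit(X, err[lags:])
    return linear_fc + float(mlp.predict(err[-lags:][None, :])[0])
```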
Swetapadma, Aleena; Yadav, Anamika
2015-01-01
Many schemes are reported for shunt fault location estimation, but fault location estimation of series or open conductor faults has not been dealt with so far. The existing numerical relays only detect the open conductor (series) fault and give an indication of the faulty phase(s), but they are unable to locate the series fault. The repair crew needs to patrol the complete line to find the location of a series fault. In this paper fuzzy-based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series and shunt faults. The fault simulation studies and fault location algorithm have been developed using Matlab/Simulink. Synchronized phasors of voltage and current signals from both ends of the line have been used as input to the proposed fuzzy-based fault location scheme. The percentage error in locating series faults is within 1% and in locating shunt faults within 5% for all the tested fault cases. Validation of the percentage error in location estimation is done using the Chi-square test at both the 1% and 5% levels of significance. PMID:26413088
Dynamic modeling of neuronal responses in fMRI using cubature Kalman filtering
Havlicek, Martin; Friston, Karl J.; Jan, Jiri; Brazdil, Milan; Calhoun, Vince D.
2011-01-01
This paper presents a new approach to inverting (fitting) models of coupled dynamical systems based on state-of-the-art (cubature) Kalman filtering. Crucially, this inversion furnishes posterior estimates of both the hidden states and parameters of a system, including any unknown exogenous input. Because the underlying generative model is formulated in continuous time (with a discrete observation process) it can be applied to a wide variety of models specified with either ordinary or stochastic differential equations. These are an important class of models that are particularly appropriate for biological time-series, where the underlying system is specified in terms of kinetics or dynamics (i.e., dynamic causal models). We provide comparative evaluations with generalized Bayesian filtering (dynamic expectation maximization) and demonstrate marked improvements in accuracy and computational efficiency. We compare the schemes using a series of difficult (nonlinear) toy examples and conclude with a special focus on hemodynamic models of evoked brain responses in fMRI. Our scheme promises to provide a significant advance in characterizing the functional architectures of distributed neuronal systems, even in the absence of known exogenous (experimental) input; e.g., resting state fMRI studies and spontaneous fluctuations in electrophysiological studies. Importantly, unlike current Bayesian filters (e.g. DEM), our scheme provides estimates of time-varying parameters, which we will exploit in future work on the adaptation and enabling of connections in the brain. PMID:21396454
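The third-degree spherical-radial rule at the heart of the cubature Kalman filter is compact enough to sketch: 2n equally weighted points, placed at \(\pm\sqrt{n}\) times the columns of a Cholesky factor of the covariance, are propagated through the dynamics (a generic sketch, not the authors' continuous-discrete implementation):

```python
# Cubature prediction step for dynamics f: R^n -> R^n (noise terms omitted).
import numpy as np

def cubature_predict(x_mean, P, f):
    n = x_mean.size
    S = np.linalg.cholesky(P)
    xi = np.hstack([np.sqrt(n) * S, -np.sqrt(n) * S])   # 2n cubature points
    pts = x_mean[:, None] + xi
    prop = np.column_stack([f(pts[:, j]) for j in range(2 * n)])
    m = prop.mean(axis=1)
    d = prop - m[:, None]
    return m, d @ d.T / (2 * n)        # add process noise Q in a full filter
```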
Surface electric fields for North America during historical geomagnetic storms
Wei, Lisa H.; Homeier, Nichole; Gannon, Jennifer L.
2013-01-01
To better understand the impact of geomagnetic disturbances on the electric grid, we recreate surface electric fields from two historical geomagnetic storms—the 1989 “Quebec” storm and the 2003 “Halloween” storms. Using the Spherical Elementary Current Systems method, we interpolate sparsely distributed magnetometer data across North America. We find good agreement between the measured and interpolated data, with larger RMS deviations at higher latitudes corresponding to larger magnetic field variations. The interpolated magnetic field data are combined with surface impedances for 25 unique physiographic regions from the United States Geological Survey and literature to estimate the horizontal, orthogonal surface electric fields in 1 min time steps. The induced horizontal electric field strongly depends on the local surface impedance, resulting in surprisingly strong electric field amplitudes along the Atlantic and Gulf Coast. The relative peak electric field amplitude of each physiographic region, normalized to the value in the Interior Plains region, varies by a factor of 2 for different input magnetic field time series. The order of peak electric field amplitudes (largest to smallest), however, does not depend much on the input. These results suggest that regions at lower magnetic latitudes with high ground resistivities are also at risk from the effect of geomagnetically induced currents. The historical electric field time series are useful for estimating the flow of the induced currents through long transmission lines to study power flow and grid stability during geomagnetic disturbances.
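The frequency-domain step behind such calculations can be sketched for the simplest case of a uniform half-space (the study used layered-Earth impedances for 25 physiographic regions; the resistivity value here is arbitrary):

```python
# Plane-wave method: E(omega) = Z(omega) * B(omega) / mu0, evaluated by FFT.
import numpy as np

MU0 = 4e-7 * np.pi

def surface_E(b, dt, rho=100.0):
    """b: magnetic variation series (tesla), dt: sample interval (s),
    rho: half-space resistivity (ohm-m). Returns the orthogonal E (V/m)."""
    B = np.fft.rfft(b)
    f = np.fft.rfftfreq(len(b), dt)
    Z = np.sqrt(1j * 2 * np.pi * f * MU0 * rho)   # half-space surface impedance
    Z[0] = 0.0                                    # discard the DC component
    return np.fft.irfft(Z * B / MU0, n=len(b))
```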
Steady-state phase error for a phase-locked loop subjected to periodic Doppler inputs
NASA Technical Reports Server (NTRS)
Chen, C.-C.; Win, M. Z.
1991-01-01
The performance of a carrier phase-locked loop (PLL) driven by a periodic Doppler input is studied. By expanding the Doppler input into a Fourier series and applying the linearized PLL approximations, it is easy to show that, for periodic frequency disturbances, the resulting steady-state phase error is also periodic. Compared to the method of expanding the frequency excursion into a power series, the Fourier expansion method can be used to predict the maximum phase error excursion for a periodic Doppler input. For systems with a large Doppler rate fluctuation, such as an optical transponder aboard an Earth-orbiting spacecraft, the method can be applied to test whether a lower-order tracking loop can provide satisfactory tracking and thereby save the effort of a higher-order loop design.
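Numerically, the linearized calculation weights each Doppler harmonic by the loop's error transfer function; the sketch below assumes a second-order loop with natural frequency `wn` (rad/s) and damping `zeta`, which is one common choice rather than the paper's specific loop:

```python
# Steady-state phase-error amplitude per harmonic of a periodic Doppler profile.
import numpy as np

def phase_error_amplitudes(df_amps, Omega, wn, zeta=0.707):
    """df_amps[k-1]: amplitude (Hz) of the k-th Doppler harmonic at k*Omega."""
    errs = []
    for k, df in enumerate(df_amps, start=1):
        s = 1j * k * Omega
        He = s**2 / (s**2 + 2 * zeta * wn * s + wn**2)   # error transfer function
        errs.append(abs(He) * 2 * np.pi * df / (k * Omega))  # freq -> phase amp
    return np.array(errs)   # the peak phase error is bounded by their sum
```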
NASA Technical Reports Server (NTRS)
Jacob, H. G.
1972-01-01
An optimization method has been developed that computes the optimal open loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and the application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.
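A toy version of the idea with an invented second-order plant and cost (the original application involved STOL aircraft dynamics and noise/fuel/time criteria): the open-loop input is parameterized as a truncated sine series, reducing the problem to a static optimization over its coefficients, judged only from the simulated output:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T = np.linspace(0.0, 5.0, 200)

def u(c, t):                                 # input as a series of functions
    return sum(ck * np.sin((k + 1) * np.pi * t / 5.0) for k, ck in enumerate(c))

def cost(c):
    rhs = lambda t, x: [x[1], -x[0] - 0.4 * x[1] + u(c, t)]   # invented plant
    sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], t_eval=T)
    # Endpoint tracking plus a small control-effort penalty.
    return (sol.y[0, -1] - 1.0) ** 2 + 1e-3 * np.trapz(u(c, T) ** 2, T)

best = minimize(cost, np.zeros(4), method="Nelder-Mead")   # static optimization
```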
The Comparison of Visual Working Memory Representations with Perceptual Inputs
Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew
2008-01-01
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755
Torija, Antonio J; Ruiz, Diego P
2012-10-01
Road traffic has a heavy impact on the urban sound environment, constituting the main source of noise and widely dominating its spectral composition. In this context, our research investigates the use of recorded sound spectra as input data for the development of real-time short-term road traffic flow estimation models. For this, a series of models based on the use of Multilayer Perceptron Neural Networks, multiple linear regression, and the Fisher linear discriminant were implemented to estimate road traffic flow as well as to classify it according to the composition of heavy vehicles and motorcycles/mopeds. In view of the results, the use of the 50-400 Hz and 1-2.5 kHz frequency ranges as input variables in multilayer perceptron-based models successfully estimated urban road traffic flow with an average percentage of explained variance equal to 86%, while the classification of the urban road traffic flow gave an average success rate of 96.1%. Copyright © 2012 Elsevier B.V. All rights reserved.
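A minimal sketch of the estimation setup (file names, feature layout, and network size are assumptions, not the authors' configuration):

```python
# MLP regression from selected one-third-octave band levels to traffic flow.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# X: rows of band levels (dB) in the 50-400 Hz and 1-2.5 kHz ranges;
# y: traffic flow (vehicles/hour). Both are hypothetical input files.
X, y = np.load("band_levels.npy"), np.load("traffic_flow.npy")
model = MLPRegressor(hidden_layer_sizes=(30,), max_iter=5000)
explained_variance = cross_val_score(model, X, y, cv=10, scoring="r2").mean()
```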
The Recalibrated Sunspot Number: Impact on Solar Cycle Predictions
NASA Astrophysics Data System (ADS)
Clette, F.; Lefevre, L.
2017-12-01
Recently and for the first time since their creation, the sunspot number and group number series were entirely revisited, and a first fully recalibrated version was officially released in July 2015 by the World Data Center SILSO (Brussels). These reference long-term series are widely used as input data or as a calibration reference by various solar cycle prediction methods. Therefore, past predictions may now need to be redone using the new sunspot series, and methods already used for predicting cycle 24 will require adaptations before attempting predictions of the next cycles. In order to clarify the nature of the applied changes, we describe the different corrections applied to the sunspot and group number series, which affect extended time periods and can reach up to 40%. While some changes simply involve constant scale factors, other corrections vary with time or follow the solar cycle modulation. Depending on the prediction method and on the selected time interval, this can lead to different responses and biases. Moreover, together with the new series, standard error estimates are progressively being added to the new sunspot numbers, which may help derive more accurate uncertainties for predicted activity indices. We conclude with the new round of recalibration now being undertaken in the framework of a broad multi-team collaboration articulated around upcoming ISSI workshops. We outline the corrections that can still be expected, as part of a permanent upgrading process and quality control. From now on, sunspot-based predictive models should be made more adaptable, and regular updates of predictions should become common practice in order to track periodic upgrades of the sunspot number series, just as is done with other modern solar observational series.
NASA Astrophysics Data System (ADS)
Perera, Kushan C.; Western, Andrew W.; Robertson, David E.; George, Biju; Nawarathna, Bandara
2016-06-01
Irrigation demands fluctuate in response to weather variations and a range of irrigation management decisions, which creates challenges for water supply system operators. This paper develops a method for real-time ensemble forecasting of irrigation demand and applies it to irrigation command areas of various sizes for lead times of 1 to 5 days. The ensemble forecasts are based on a deterministic time series model coupled with ensemble representations of the various inputs to that model. Forecast inputs include past flow, precipitation, and potential evapotranspiration. These inputs are variously derived from flow observations from a modernized irrigation delivery system; short-term weather forecasts derived from numerical weather prediction models and observed weather data available from automatic weather stations. The predictive performance for the ensemble spread of irrigation demand was quantified using rank histograms, the mean continuous rank probability score (CRPS), the mean CRPS reliability and the temporal mean of the ensemble root mean squared error (MRMSE). The mean forecast was evaluated using root mean squared error (RMSE), Nash-Sutcliffe model efficiency (NSE) and bias. The NSE values for evaluation periods ranged between 0.96 (1 day lead time, whole study area) and 0.42 (5 days lead time, smallest command area). Rank histograms and comparison of MRMSE, mean CRPS, mean CRPS reliability and RMSE indicated that the ensemble spread is generally a reliable representation of the forecast uncertainty for short lead times but underestimates the uncertainty for long lead times.
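The sample-based CRPS used in such verification has a compact closed form for an ensemble; this is a generic implementation rather than the authors' code:

```python
# CRPS for one forecast-observation pair: E|X - y| - 0.5 * E|X - X'|.
import numpy as np

def crps_ensemble(ens, obs):
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

# Averaging over all forecast dates gives the mean CRPS reported above.
```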
Influence of ionospheric disturbances onto long-baseline relative positioning in kinematic mode
NASA Astrophysics Data System (ADS)
Wezka, Kinga; Herrera, Ivan; Cokrlic, Marija; Galas, Roman
2013-04-01
Ionospheric disturbances are fast and random variabilities in the ionosphere and are difficult to detect and model. Some strong disturbances can cause, among other effects, interruption of the GNSS signal or even loss of signal lock. These phenomena are especially harmful for kinematic real-time applications, where system availability is one of the most important parameters influencing positioning reliability. Our investigations were conducted using long time series of GNSS observations gathered at high latitude, where ionospheric disturbances occur more frequently. A selected processing strategy was used to monitor ionospheric signatures in time series of the coordinates. The quality of the input data and of the processing results was examined and described by a set of proposed parameters. Variations in the coordinates were compared with available information about the state of the ionosphere derived from the Neustrelitz TEC Model (NTCM) and with the time series of raw observations. Some selected parameters were also calculated with the "iono-tools" module of the TUB-NavSolutions software developed by the Precise Navigation and Positioning Group at Technische Universitaet Berlin. The paper presents the first results of an evaluation of the robustness of positioning algorithms with respect to ionospheric anomalies using the NTCM model and our calculated ionospheric parameters.
Wavelet analysis of near-resonant series RLC circuit with time-dependent forcing frequency
NASA Astrophysics Data System (ADS)
Caccamo, M. T.; Cannuli, A.; Magazù, S.
2018-07-01
In this work, the results of an analysis of the response of a near-resonant series resistance‑inductance‑capacitance (RLC) electric circuit with time-dependent forcing frequency by means of a wavelet cross-correlation approach are reported. In particular, it is shown how the wavelet approach enables frequency and time analysis of the circuit response to be carried out simultaneously—this procedure not being possible by Fourier transform, since the frequency is not stationary in time. A series RLC circuit simulation is performed by using the Simulation Program with Integrated Circuits Emphasis (SPICE), in which an oscillatory sinusoidal voltage drive signal of constant amplitude is swept through the resonant condition by progressively increasing the frequency over a 20-second time window, linearly, from 0.32 Hz to 6.69 Hz. It is shown that the wavelet cross-correlation procedure quantifies the common power between the input signal (represented by the electromotive force) and the output signal, which in the present case is a current, highlighting not only which frequencies are present but also when they occur, i.e. providing a simultaneous time-frequency analysis. The work is directed toward graduate Physics, Engineering and Mathematics students, with the main intention of introducing wavelet analysis into their data analysis toolkit.
Romaguera, Mireia; Vaughan, R. Greg; Ettema, J.; Izquierdo-Verdiguier, E.; Hecker, C. A.; van der Meer, F.D.
2018-01-01
This paper explores for the first time the possibilities of using two land surface temperature (LST) time series of different origins (geostationary Meteosat Second Generation satellite data and Noah land surface modelling, LSM) to detect geothermal anomalies and extract the geothermal component of LST, the LSTgt. We hypothesize that in geothermal areas the LSM time series will underestimate the LST as compared to the remote sensing data, since the former does not account for the geothermal component in its model. In order to extract LSTgt, two approaches of different nature (physically based and data mining) were developed and tested in an area of about 560 × 560 km2 centered on the Kenyan Rift. Pre-dawn data in the study area during the first 45 days of 2012 were analyzed. The results show consistent spatial and temporal LSTgt patterns between the two approaches, and systematic differences of about 2 K. A geothermal area map from surface studies was used to assess LSTgt inside and outside the geothermal boundaries. Spatial means were found to be higher inside the geothermal limits, as was the relative frequency of occurrence of high LSTgt. Results further show that areas with strong topography can produce anomalously high LSTgt values (false positives), which suggests the need for a slope and aspect correction in the inputs to achieve realistic results in those areas. The uncertainty analysis indicates that large uncertainties in the input parameters may limit detection of LSTgt anomalies. To validate the approaches, higher spatial resolution images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) over the Olkaria geothermal field were used. An established method to estimate radiant geothermal flux was applied, providing values between 9 and 24 W/m2 in the geothermal area, which coincides with the LSTgt flux rates obtained with the proposed approaches. The proposed approaches are a first step in estimating LSTgt at large spatial coverage from remote sensing and LSM data series, and provide an innovative framework for future improvements.
Declining spatial efficiency of global cropland nitrogen allocation
NASA Astrophysics Data System (ADS)
Mueller, Nathaniel D.; Lassaletta, Luis; Runck, Bryan C.; Billen, Gilles; Garnier, Josette; Gerber, James S.
2017-02-01
Efficiently allocating nitrogen (N) across space maximizes crop productivity for a given amount of N input and reduces N losses to the environment. Here we quantify changes in the global spatial efficiency of cropland N use by calculating historical trade-off frontiers relating N inputs to possible N yield assuming efficient allocation. Time series cropland N budgets from 1961 to 2009 characterize the evolution of N input-yield response functions across 12 regions and are the basis for constructing trade-off frontiers. Improvements in agronomic technology have substantially increased cropping system yield potentials and expanded N-driven crop production possibilities. However, we find that these gains are compromised by the declining spatial efficiency of N use across regions. Since the start of the Green Revolution, N inputs and yields have moved farther from the optimal frontier over time; in recent years (1994-2009), global N surplus has grown to a value that is 69% greater than what is possible with efficient N allocation between regions. To reflect regional pollution and agricultural development goals, we construct scenarios that restrict reallocation, finding that these changes only slightly decrease potential gains in nitrogen use efficiency. Our results are inherently conservative due to the regional unit of analysis, meaning a larger potential exists than is quantified here for cross-scale policies to promote spatially efficient N use.
NASA Astrophysics Data System (ADS)
Gowda, P. H.
2016-12-01
Evapotranspiration (ET) is an important process in ecosystems' water budget and closely linked to their productivity. Therefore, regional scale daily time series ET maps developed at high and medium resolutions have large utility in studying the carbon-energy-water nexus and managing water resources. There are efforts to develop such datasets on regional to global scales, but they often face the limitations of spatial-temporal resolution tradeoffs in satellite remote sensing technology. In this study, we developed frameworks for generating high and medium resolution daily ET maps from Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer) data, respectively. For developing high resolution (30-m) daily time series ET maps with Landsat TM data, the series version of the Two Source Energy Balance (TSEB) model was used to compute sensible and latent heat fluxes of soil and canopy separately. Landsat 5 (2000-2011) and Landsat 8 (2013-2014) imagery for path/row 28/35 and 27/36 covering central Oklahoma was used. MODIS data (2001-2014) covering Oklahoma and the Texas Panhandle were used to develop medium resolution (250-m) time series daily ET maps with the SEBS (Surface Energy Balance System) model. An extensive network of weather stations managed by the Texas High Plains ET Network and Oklahoma Mesonet was used to generate spatially interpolated inputs of air temperature, relative humidity, wind speed, solar radiation, pressure, and reference ET. A linear interpolation sub-model was used to estimate the daily ET between image acquisition days. Accuracy assessment of the daily ET maps was done against eddy covariance data from two grassland sites at El Reno, OK. Statistical results indicated good performance by the modeling frameworks developed for deriving time series ET maps. Results indicated that the proposed ET mapping framework is suitable for deriving daily time series ET maps at regional scale with Landsat and MODIS data.
MTpy: A Python toolbox for magnetotellurics
Krieger, Lars; Peacock, Jared R.
2014-01-01
In this paper, we introduce the structure and concept of MTpy. Additionally, we show some examples from an everyday work-flow of MT data processing: the generation of standard EDI data files from raw electric (E-) and magnetic flux density (B-) field time series as input, the conversion into MiniSEED data format, as well as the generation of a graphical data representation in the form of a Phase Tensor pseudosection.
Sharma, Ram C; Hara, Keitarou; Hirayama, Hidetake
2017-01-01
This paper presents the performance and evaluation of a number of machine learning classifiers for discriminating between vegetation physiognomic classes using satellite-based time series of surface reflectance data. Discrimination of six vegetation physiognomic classes, Evergreen Coniferous Forest, Evergreen Broadleaf Forest, Deciduous Coniferous Forest, Deciduous Broadleaf Forest, Shrubs, and Herbs, was dealt with in the research. Rich-feature data were prepared from time series of the satellite data for the discrimination and cross-validation of the vegetation physiognomic types using a machine learning approach. A set of machine learning experiments comprising a number of supervised classifiers with different model parameters was conducted to assess how the discrimination of vegetation physiognomic classes varies with classifiers, input features, and ground truth data size. The performance of each experiment was evaluated by using the 10-fold cross-validation method. The experiment using the Random Forests classifier provided the highest overall accuracy (0.81) and kappa coefficient (0.78). However, accuracy metrics did not vary much between experiments. Accuracy metrics were found to be very sensitive to input features and the size of ground truth data. The results obtained in the research are expected to be useful for improving vegetation physiognomic mapping in Japan.
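A sketch matching the evaluation protocol described (estimator settings and file names are assumptions):

```python
# Random Forest discrimination with 10-fold cross-validation, scored by
# overall accuracy and Cohen's kappa as in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import cohen_kappa_score, make_scorer

X = np.load("reflectance_features.npy")   # hypothetical time-series features
y = np.load("physiognomy_labels.npy")     # six-class labels
rf = RandomForestClassifier(n_estimators=500)
accuracy = cross_val_score(rf, X, y, cv=10).mean()
kappa = cross_val_score(rf, X, y, cv=10,
                        scoring=make_scorer(cohen_kappa_score)).mean()
```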
Assessing the performance of eight real-time updating models and procedures for the Brosna River
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.
2005-10-01
The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one- to six-day lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-Input Model (NARXM), also a neural network-type structure, but with wide options for using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF-type, using recent rainfall and observed discharge data; (vi) the Parametric Linear Perturbation Model (PLPM), also of LTF-type, using recent rainfall and observed discharge data; (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naïve form of the NARXM, using only the observed discharge data, excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model. As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, the simulated outflows of this model only were selected for the subsequent exercise of producing updated discharge forecasts. All eight forms of updating models for producing lead-time discharge forecasts were found to be capable of producing relatively good lead-1 (1-day-ahead) forecasts, with R2 values of almost 90% or above. However, for longer lead-time forecasts, only three updating models, viz., NARXM, LTF, and NNU, were found to be suitable, with lead-6 values of R2 about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
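The first of the eight schemes, the AR model on simulation residuals, is simple enough to sketch (order, coefficients, and inputs are assumed):

```python
# Correct a non-updating model's lead-time forecasts by AR-extrapolating
# its recent errors; predicted errors are fed back for longer lead times.
import numpy as np

def ar_updated_forecast(sim_future, recent_errors, a):
    """sim_future: simulated flows at leads 1..L; a: AR coefficients fitted
    to historical residuals; recent_errors: latest observed errors."""
    e, out = list(recent_errors), []
    for q_sim in sim_future:
        e_hat = sum(ak * e[-k - 1] for k, ak in enumerate(a))
        out.append(q_sim + e_hat)
        e.append(e_hat)
    return np.array(out)
```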
NASA Astrophysics Data System (ADS)
Ibarra, Juan G.; Tao, Yang; Xin, Hongwei
2000-11-01
A noninvasive method for the estimation of internal temperature in chicken meat immediately following cooking is proposed. The external temperature from IR images was correlated with measured internal temperature through a multilayer neural network. To provide inputs for the network, time series experiments were conducted to obtain simultaneous observations of internal and external temperatures immediately after cooking, during the cooling process. An IR camera working in the spectral band of 3.4 to 5.0 micrometers registered external temperature distributions without the interference of the close-to-oven environment, while conventional thermocouples registered internal temperatures. For an internal temperature at a given time, simultaneous and lagged external temperature observations were used as the input of the neural network. Based on practical and statistical considerations, a criterion is established to reduce the nodes in the neural network input. The combined method was able to estimate internal temperature for times between 0 and 540 s within a standard error of ±1.01 °C, and within an error of ±1.07 °C for short times after cooking (3 min), with two thermograms at times t and t+30 s. The method has great potential for monitoring doneness of chicken meat in conveyor-belt-type cooking and can be used as a platform for similar studies in other food products.
NASA Astrophysics Data System (ADS)
Murray, J. R.; Svarc, J. L.
2016-12-01
Constant secular velocities estimated from Global Positioning System (GPS)-derived position time series are a central input for modeling interseismic deformation in seismically active regions. Both postseismic motion and temporally correlated noise produce long-period signals that are difficult to separate from secular motion and can bias velocity estimates. For GPS sites installed post-earthquake it is especially challenging to uniquely estimate velocities and postseismic signals and to determine when the postseismic transient has decayed sufficiently to enable use of subsequent data for estimating secular rates. Within 60 km of the 2003 M6.5 San Simeon and 2004 M6 Parkfield earthquakes in California, 16 continuous GPS sites (group 1) were established prior to mid-2001, and 52 stations (group 2) were installed following the events. We use group 1 data to investigate how early in the post-earthquake time period one may reliably begin using group 2 data to estimate velocities. For each group 1 time series, we obtain eight velocity estimates using observation time windows with successively later start dates (2006 - 2013) and a parameterization that includes constant velocity, annual, and semi-annual terms but no postseismic decay. We compare these to velocities estimated using only pre-San Simeon data to find when the pre- and post-earthquake velocities match within uncertainties. To obtain realistic velocity uncertainties, for each time series we optimize a temporally correlated noise model consisting of white, flicker, random walk, and, in some cases, band-pass filtered noise contributions. Preliminary results suggest velocities can be reliably estimated using data from 2011 to the present. Ongoing work will assess velocity bias as a function of epicentral distance and length of post-earthquake time series as well as explore spatio-temporal filtering of detrended group 1 time series to provide empirical corrections for postseismic motion in group 2 time series.
Distortions of Subjective Time Perception Within and Across Senses
van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan
2008-01-01
Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248
Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets the quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the Non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response time series of individual service components; fitting these series with an autoregressive moving average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
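The first two steps of the framework can be sketched as follows; the ARMA order and the input file are assumptions, and the response-time-to-firing-rate mapping uses the usual rate = 1/mean-time convention:

```python
# Fit ARMA to a component's historical response times and derive a firing rate.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

resp_times = np.load("component_response_times.npy")   # hypothetical history (s)
fit = ARIMA(resp_times, order=(2, 0, 1)).fit()         # ARMA(2,1) model
next_rt = float(fit.forecast(1)[0])                    # predicted response time
firing_rate = 1.0 / next_rt     # input to the NMSPN transition for this service
```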
Summation of power series in particle physics
NASA Astrophysics Data System (ADS)
Fischer, Jan
1999-04-01
The large-order behaviour of power series used in quantum theory (perturbation series and the operator-product expansion) is discussed and relevant summation methods are reviewed. It is emphasised that, in most physically interesting situations, the mere knowledge of the expansion coefficients is not sufficient for a unique determination of the function expanded, and the necessity of some additional, extra-perturbative, input is pointed out. Several possible nonperturbative inputs are suggested. Applications to various problems of quantum chromodynamics are considered. This lecture was presented on the special Memorial Day dedicated to Professor Ryszard Rączka at this Workshop. The last section is devoted to my personal recollections of this remarkable personality.
Prediction, scenarios and insight: The uses of an end-to-end model
NASA Astrophysics Data System (ADS)
Steele, John H.
2012-09-01
A major function of ecosystem models is to provide extrapolations from observed data in terms of predictions or scenarios or insight. These models can be at various levels of taxonomic resolution such as total community production, abundance of functional groups, or species composition, depending on the data input as drivers. A 40-year dynamic simulation of end-to-end processes in the Georges Bank food web is used to illustrate the input/output relations and the insights gained at the three levels of food web aggregation. The focus is on the intermediate level and the longer term changes in three functional fish guilds - planktivores, benthivores and piscivores - in terms of three ecosystem-based metrics - nutrient input, relative productivity of plankton and benthos, and food intake by juvenile fish. These simulations can describe the long term constraints imposed on guild structure and productivity by energy fluxes over the 40 years but cannot explain concurrent switches in abundance of individual species within guilds. Comparing time series data for individual species with model output provides insights; but including the data in the model would confer only limited extra information. The advantages and limitations of the three levels of model resolution in relation to ecosystem-based management are: (1) the correlations between primary production and total yield of fish imply a “bottom-up” constraint on end-to-end energy flow through the food web that can provide predictions of such yields; (2) functionally defined metrics such as nutrient input, relative productivity of plankton and benthos, and food intake by juvenile fish represent bottom-up, mid-level and top-down forcing of the food web, and model scenarios using these metrics can demonstrate constraints on the productivity of the functionally defined guilds within the limits set by (1); (3) comparisons of guild simulations with time series of fish species provide insight into the switches in species dominance that accompany changes in guild productivity and can illuminate the top-down aspects of regime shifts.
NASA Astrophysics Data System (ADS)
Lischeid, G.; Hohenbrink, T.; Schindler, U.
2012-04-01
Hydrology is based on the observation that catchments process input signals, e.g., precipitation, in a highly deterministic way. Thus, the Darcy or the Richards equation can be applied to model water fluxes in the saturated or vadose zone, respectively. Soils and aquifers usually exhibit substantial spatial heterogeneities at different scales that can, in principle, be represented by corresponding parameterisations of the models. In practice, however, data are hardly available at the required spatial resolution, and accounting for observed heterogeneities of soil and aquifer structure renders models very time and CPU consuming. We hypothesize that the intrinsic dimensionality of soil hydrological processes, which is induced by spatial heterogeneities, is actually very low and that soil hydrological processes in heterogeneous soils follow approximately the same trajectory. That is, the way the soil transforms hydrological input signals is the same across different soil textures and structures; different soils differ only in the extent of transformation of the input signals. In a first step, we analysed the output of a soil hydrological model, based on the Richards equation, for homogeneous soils down to 5 m depth for different soil textures. A matrix of time series of soil matrix potential and soil water content at 10 cm depth intervals was set up. The intrinsic dimensionality of that matrix was assessed using the Correlation Dimension and a non-linear principal component approach. The latter provided a metric for the extent of transformation ("damping") of the input signal. In a second step, model outputs for heterogeneous soils were analysed. In a last step, the same approaches were applied to 55 time series of observed soil water content from 15 sites and different depths. In all cases, the intrinsic dimensionality was indeed very close to unity, confirming our hypothesis. The metric provided a very efficient tool to quantify the observed behaviour, depending on depth and soil heterogeneity: different soils differed primarily in the extent of damping per depth interval rather than in the kind of damping. We will show how this metric can be used in a very efficient way to represent soil heterogeneities in simulation models.
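A hedged sketch of the dimensionality check, using ordinary linear PCA on a synthetic depth-by-time matrix; the study itself uses the Correlation Dimension and a non-linear principal component approach, so this only illustrates the idea that progressive damping leaves one dominant mode:

```python
# Hedged sketch: estimate the intrinsic dimensionality of a matrix of soil
# moisture series (rows = time, columns = depth) from the PCA spectrum.
# Deeper layers see the same input signal, progressively damped.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000)
input_signal = np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(t.size)

series = []
x = input_signal
for _ in range(10):                                      # ten depth levels
    x = np.convolve(x, np.ones(30) / 30, mode="same")    # damping per layer
    series.append(x)
X = np.column_stack(series)

X = (X - X.mean(axis=0)) / X.std(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance on first component:", explained[0])      # near 1 => dimension ~ 1
```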
Beddows, Andrew V; Kitwiroon, Nutthida; Williams, Martin L; Beevers, Sean D
2017-06-06
Gaussian process emulation techniques have been used with the Community Multiscale Air Quality model, simulating the effects of input uncertainties on ozone and NO2 output, to allow robust global sensitivity analysis (SA). A screening process ranked the effect of perturbations in 223 inputs, isolating the 30 most influential from emissions, boundary conditions (BCs), and reaction rates. Community Multiscale Air Quality (CMAQ) simulations of a July 2006 ozone pollution episode in the UK were made with input values for these variables plus ozone dry deposition velocity chosen according to a 576-point Latin hypercube design. Emulators trained on the output of these runs were used in variance-based SA of the model output to input uncertainties. Performing these analyses for every hour of a 21-day period spanning the episode and several days on either side allowed the results to be presented as a time series of sensitivity coefficients, showing how the influence of different input uncertainties changed during the episode. This is one of the most complex models to which these methods have been applied, and here, they reveal detailed spatiotemporal patterns of model sensitivities, with NO and isoprene emissions, NO2 photolysis, ozone BCs, and deposition velocity being among the most influential input uncertainties.
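A hedged sketch of emulator-based, variance-based SA on a toy three-input function standing in for CMAQ; the design size, kernel and the brute-force estimate of Var(E[Y|Xi])/Var(Y) are illustrative choices, not the study's setup:

```python
# Hedged sketch: train a GP emulator on a small Latin hypercube design, then
# estimate first-order sensitivity indices on the (cheap) emulator by
# conditioning on one input at a time. GP predictive uncertainty is ignored.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def model(x):  # toy stand-in for one CMAQ output (e.g. peak ozone)
    return np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

X = qmc.LatinHypercube(d=3, seed=2).random(96)   # 96-run design on [0, 1]^3
gp = GaussianProcessRegressor(kernel=RBF([0.3, 0.3, 0.3]),
                              normalize_y=True).fit(X, model(X))

rng = np.random.default_rng(3)
base = rng.random((4000, 3))
var_total = gp.predict(base).var()
for i in range(3):
    # E[Y | X_i = v] on a grid of v, averaging over the other inputs
    cond_means = []
    for v in np.linspace(0.0, 1.0, 40):
        pts = base.copy()
        pts[:, i] = v
        cond_means.append(gp.predict(pts).mean())
    print(f"S{i + 1} ~ {np.var(cond_means) / var_total:.2f}")
```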
NASA Astrophysics Data System (ADS)
Weber, Juliane; Zachow, Christopher; Witthaut, Dirk
2018-03-01
Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
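A hedged sketch of the two-state construction, assuming the memory function of an additive binary Markov chain is obtained from the empirical autocovariance via the Yule-Walker-type relation K(r) = sum_s F(s) K(r-s); the autocovariance and memory depth below are synthetic:

```python
# Hedged sketch of an additive binary Markov chain driven by an empirical
# autocovariance K(r). The memory function F solves a Toeplitz system, after
# which the chain is simulated step by step. High/low wind = 1/0.
import numpy as np
from scipy.linalg import solve_toeplitz

p = 0.4                                    # long-run probability of "high wind"
K = p * (1 - p) * 0.8 ** np.arange(30)     # toy empirical autocovariance K(0..29)
N = 20                                     # memory depth
F = solve_toeplitz(K[:N], K[1:N + 1])      # memory function F(1..N)

rng = np.random.default_rng(4)
x = (rng.random(N) < p).astype(int).tolist()
for _ in range(10000):
    # additive update: P(x_t = 1) = p + sum_s F(s) * (x_{t-s} - p)
    prob = p + np.dot(F[::-1], np.asarray(x[-N:]) - p)
    x.append(int(rng.random() < np.clip(prob, 0.0, 1.0)))
print("simulated high-wind share:", np.mean(x))
```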
Estimating the Information Extracted by a Single Spiking Neuron from a Continuous Input Time Series.
Zeldenrust, Fleur; de Knecht, Sicco; Wadman, Wytse J; Denève, Sophie; Gutkin, Boris
2017-01-01
Understanding the relation between (sensory) stimuli and the activity of neurons (i.e., "the neural code") lies at the heart of understanding the computational properties of the brain. However, quantifying the information between a stimulus and a spike train has proven to be challenging. We propose a new (in vitro) method to measure how much information a single neuron transfers from the input it receives to its output spike train. The input is generated by an artificial neural network that responds to a randomly appearing and disappearing "sensory stimulus": the hidden state. The sum of this network activity is injected as current input into the neuron under investigation. The mutual information between the hidden state on the one hand and spike trains of the artificial network or the recorded spike train on the other hand can easily be estimated due to the binary shape of the hidden state. The characteristics of the input current, such as the time constant as a result of the (dis)appearance rate of the hidden state or the amplitude of the input current (the firing frequency of the neurons in the artificial network), can independently be varied. As an example, we apply this method to pyramidal neurons in the CA1 of mouse hippocampi and compare the recorded spike trains to the optimal response of the "Bayesian neuron" (BN). We conclude that, as in the BN, information transfer in hippocampal pyramidal cells is non-linear and amplifying: the information loss between the artificial input and the output spike train is high if the input to the neuron (the firing of the artificial network) is not very informative about the hidden state. If the input to the neuron does contain a lot of information about the hidden state, the information loss is low. Moreover, neurons increase their firing rates when the (dis)appearance rate is high, so that the (relative) amount of transferred information stays constant.
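A hedged sketch of why the binary hidden state makes the estimate easy: a plug-in mutual information between the state and binned spike counts, on toy Poisson data (rates and binning are illustrative):

```python
# Hedged sketch: plug-in estimate of the mutual information between a binary
# hidden state and binned spike counts, the quantity this method exploits.
# Toy data: the state switches on/off and modulates a Poisson firing rate.
import numpy as np

rng = np.random.default_rng(5)
state = (rng.random(20000) < 0.5).astype(int)          # hidden on/off per bin
counts = rng.poisson(np.where(state == 1, 2.0, 0.5))   # spikes per bin

def mutual_information(h, k):
    mi = 0.0
    for hv in np.unique(h):
        for kv in np.unique(k):
            p_joint = np.mean((h == hv) & (k == kv))
            if p_joint > 0:
                mi += p_joint * np.log2(p_joint /
                                        (np.mean(h == hv) * np.mean(k == kv)))
    return mi

print("I(state; count) =", round(mutual_information(state, counts), 3), "bits/bin")
```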
Baksi, Krishanu D; Kuntal, Bhusan K; Mande, Sharmila S
2018-01-01
Realization of the importance of microbiome studies, coupled with the decreasing sequencing cost, has led to the exponential growth of microbiome data. A number of these microbiome studies have focused on understanding changes in the microbial community over time. Such longitudinal microbiome studies have the potential to offer unique insights pertaining to the microbial social networks as well as their responses to perturbations. In this communication, we introduce a web-based framework called 'TIME' (Temporal Insights into Microbial Ecology), developed specifically to obtain meaningful insights from microbiome time series data. The TIME web-server is designed to accept a wide range of popular formats as input with options to preprocess and filter the data. Multiple samples, defined by a series of longitudinal time points along with their metadata information, can be compared in order to interactively visualize the temporal variations. In addition to standard microbiome data analytics, the web server implements popular time series analysis methods like dynamic time warping, Granger causality and the Dickey-Fuller test to generate interactive layouts for facilitating easy biological inferences. Apart from this, a new metric for comparing metagenomic time series data has been introduced to effectively visualize the similarities/differences in the trends of the resident microbial groups. Augmenting the visualizations with the stationarity information pertaining to the microbial groups is utilized to predict the microbial competition as well as community structure. Additionally, the 'causality graph analysis' module incorporated in TIME allows predicting taxa that might have a higher influence on community structure in different conditions. TIME also allows users to easily identify potential taxonomic markers from a longitudinal microbiome analysis. We illustrate the utility of the web-server features on a few published time series microbiome datasets and demonstrate the ease with which they can be used to perform complex analyses.
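Two of the named building blocks, the (augmented) Dickey-Fuller test and Granger causality, are available in statsmodels; a hedged sketch on synthetic abundance series (TIME's own implementation and preprocessing may differ):

```python
# Hedged sketch of two analyses a tool like TIME exposes: an augmented
# Dickey-Fuller stationarity test and a Granger-causality test between two
# taxa; here taxon B lags taxon A by two time points by construction.
import numpy as np
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(6)
a = rng.standard_normal(200).cumsum() * 0.05 + 10      # drifting abundance
b = np.roll(a, 2) + 0.2 * rng.standard_normal(200)     # B follows A

print("ADF p-value (taxon A):", adfuller(a)[1])        # high p => non-stationary
res = grangercausalitytests(np.column_stack([b, a]), maxlag=3, verbose=False)
print("Granger p-value at lag 2:", res[2][0]["ssr_ftest"][1])
```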
Kennedy, Curtis E; Turley, James P
2011-10-24
Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.
Minas, Giorgos; Momiji, Hiroshi; Jenkins, Dafyd J; Costa, Maria J; Rand, David A; Finkenstädt, Bärbel
2017-06-26
Given the development of high-throughput experimental techniques, an increasing number of whole genome transcription profiling time series data sets, with good temporal resolution, are becoming available to researchers. The ReTrOS toolbox (Reconstructing Transcription Open Software) provides MATLAB-based implementations of two related methods, namely ReTrOS-Smooth and ReTrOS-Switch, for reconstructing the temporal transcriptional activity profile of a gene from given mRNA expression time series or protein reporter time series. The methods are based on fitting a differential equation model incorporating the processes of transcription, translation and degradation. The toolbox provides a framework for model fitting along with statistical analyses of the model with a graphical interface and model visualisation. We highlight several applications of the toolbox, including the reconstruction of the temporal cascade of transcriptional activity inferred from mRNA expression data and protein reporter data in the core circadian clock in Arabidopsis thaliana, and how such reconstructed transcription profiles can be used to study the effects of different cell lines and conditions. The ReTrOS toolbox allows users to analyse gene and/or protein expression time series where, with appropriate formulation of prior information about a minimum of kinetic parameters, in particular rates of degradation, users are able to infer timings of changes in transcriptional activity. Data from any organism and obtained from a range of technologies can be used as input due to the flexible and generic nature of the model and implementation. The output from this software provides a useful analysis of time series data and can be incorporated into further modelling approaches or in hypothesis generation.
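A hedged sketch of the underlying idea, not the ReTrOS estimator itself: with mRNA dynamics dm/dt = s(t) - delta * m, a transcription profile s(t) can be back-calculated from a smoothed mRNA series given an assumed degradation rate delta:

```python
# Hedged sketch: back-calculate transcriptional activity from an mRNA series
# via s(t) ~= dm/dt + delta * m, with an assumed degradation rate delta.
import numpy as np

delta = 0.3                                  # assumed mRNA degradation rate (1/h)
t = np.linspace(0, 48, 97)                   # 48 h sampled every 30 min
true_s = 1.0 + np.sin(2 * np.pi * t / 24)    # circadian transcription input

# Forward-simulate mRNA, then add measurement noise.
m = np.zeros_like(t)
for i in range(1, t.size):
    dt = t[i] - t[i - 1]
    m[i] = m[i - 1] + dt * (true_s[i - 1] - delta * m[i - 1])
noisy = m + 0.05 * np.random.default_rng(7).standard_normal(m.size)

# Smooth, then invert the ODE.
smooth = np.convolve(noisy, np.ones(5) / 5, mode="same")
s_hat = np.gradient(smooth, t) + delta * smooth
print("correlation with true input:",
      np.corrcoef(s_hat[5:-5], true_s[5:-5])[0, 1])
```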
Weighted statistical parameters for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rimoldini, Lorenzo
2014-01-01
Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
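A hedged sketch of density-adaptive weighting in its simplest, interval-based (trapezoidal) form; the published scheme also adapts to the noise level, which is omitted here:

```python
# Hedged sketch: weight each observation by the half-interval around it, so
# clumps of measurements do not dominate the mean and variance of an
# irregularly sampled series.
import numpy as np

rng = np.random.default_rng(8)
t = np.sort(rng.random(80)) * 100.0          # clumpy, uneven sampling times
y = np.sin(2 * np.pi * t / 25.0) + 0.1 * rng.standard_normal(t.size)

dt = np.diff(t)
w = np.empty_like(t)
w[1:-1] = 0.5 * (dt[:-1] + dt[1:])           # half-interval on both sides
w[0], w[-1] = dt[0], dt[-1]
w /= w.sum()

mean_w = np.sum(w * y)
var_w = np.sum(w * (y - mean_w) ** 2)
print("weighted mean/variance:  ", mean_w, var_w)
print("unweighted mean/variance:", y.mean(), y.var())
```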
Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios
Banta, Edward R.
2014-01-01
Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.
A method of inversion of satellite magnetic anomaly data
NASA Technical Reports Server (NTRS)
Mayhew, M. A.
1977-01-01
A method of finding a first approximation to a crustal magnetization distribution from inversion of satellite magnetic anomaly data is described. Magnetization is expressed as a Fourier series in a segment of a spherical shell. Input to this procedure is an equivalent source representation of the observed anomaly field. Instability of the inversion occurs when high-frequency noise is present in the input data, or when the series is carried to an excessively high wave number. Preliminary results are given for the United States and adjacent areas.
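A hedged sketch of the same kind of inversion on a flat one-dimensional patch rather than a spherical shell segment; the ridge (damping) term illustrates one standard way to suppress the instability noted above:

```python
# Hedged sketch: fit a truncated Fourier series to noisy "anomaly" samples by
# least squares. With noisy input and a generous truncation the normal
# equations become ill-conditioned; a small ridge term tames them.
import numpy as np

rng = np.random.default_rng(9)
x = np.linspace(0, 1, 120)
field = np.cos(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)
obs = field + 0.1 * rng.standard_normal(x.size)

kmax = 25                                     # deliberately high wave number
cols = [np.ones_like(x)]
for k in range(1, kmax + 1):
    cols += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
A = np.column_stack(cols)

ridge = 1e-2
coef = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ obs)
print("recovered leading coefficients:", np.round(coef[:5], 2))
```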
NASA Astrophysics Data System (ADS)
Smith, John N.; Smethie, William M.; Yashayev, Igor; Curry, Ruth; Azetsu-Scott, Kumiko
2016-11-01
Time series measurements of the nuclear fuel reprocessing tracer 129I and the gas ventilation tracer CFC-11 were undertaken on the AR7W section in the Labrador Sea (1997-2014) and on Line W (2004-2014), located over the US continental slope off Cape Cod, to determine advection and mixing time scales for the transport of Denmark Strait Overflow Water (DSOW) within the Deep Western Boundary Current (DWBC). Tracer measurements were also conducted in 2010 over the continental rise southeast of Bermuda to intercept the equatorward flow of DSOW by interior pathways. The Labrador Sea tracer and hydrographic time series data were used as input functions in a boundary current model that employs transit time distributions to simulate the effects of mixing and advection on downstream tracer distributions. Model simulations of tracer levels in the boundary current core and adjacent interior (shoulder) region with which mixing occurs were compared with the Line W time series measurements to determine boundary current model parameters. These results indicate that DSOW is transported from the Labrador Sea to Line W via the DWBC on a time scale of 5-6 years corresponding to a mean flow velocity of 2.7 cm/s while mixing between the core and interior regions occurs with a time constant of 2.6 years. A tracer section over the southern flank of the Bermuda rise indicates that the flow of DSOW that separated from the DWBC had undergone transport through interior pathways on a time scale of 9 years with a mixing time constant of 4 years.
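A hedged sketch of the transit-time-distribution calculation, assuming the common one-dimensional advection-diffusion (inverse Gaussian) TTD with mean Gamma and width Delta; the input history and parameter values are illustrative, loosely echoing the 5-6 year core transit time:

```python
# Hedged sketch: propagate a Labrador Sea tracer input to a downstream line
# by convolving it with an inverse-Gaussian transit time distribution
# G(t) = sqrt(Gamma^3 / (4 pi Delta t^3)) exp(-Gamma (t - Gamma)^2 / (4 Delta t)).
import numpy as np

def ttd(t, gamma, delta):
    g = np.zeros_like(t)
    pos = t > 0
    g[pos] = np.sqrt(gamma**3 / (4 * np.pi * delta * t[pos] ** 3)) * \
             np.exp(-gamma * (t[pos] - gamma) ** 2 / (4 * delta * t[pos]))
    return g

dt = 0.1                                                       # years
t = np.arange(0, 40, dt)
source = np.interp(t, [0, 10, 20, 40], [0.2, 1.0, 2.5, 3.0])   # toy 129I history

G = ttd(t, gamma=5.5, delta=5.5)                # mean age ~5.5 yr (illustrative)
downstream = np.convolve(source, G)[: t.size] * dt
print("downstream concentration at year 20:", downstream[int(20 / dt)])
```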
Improvement of background solar wind predictions
NASA Astrophysics Data System (ADS)
Dálya, Zsuzsanna; Opitz, Andrea
2016-04-01
In order to estimate solar wind properties at any heliospheric position, propagation tools use solar measurements as input data. The ballistic method extrapolates in-situ solar wind observations to the target position. This works well for undisturbed solar wind, while solar wind disturbances such as Corotating Interaction Regions (CIRs) and Coronal Mass Ejections (CMEs) need more consideration. We are working on dedicated ICME lists to remove these signatures from the input data in order to improve our prediction accuracy. These ICME lists are created from measurements by several heliospheric spacecraft: ACE, WIND, STEREO, SOHO, MEX and VEX. As a result, we are able to filter out these events from the time series. Our corrected predictions contribute to the investigation of the quiet solar wind and to space weather studies.
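A hedged sketch of the ballistic step itself: each in-situ sample is shifted by its own travel time d/v (constant-speed assumption; the distance and speeds below are made up):

```python
# Hedged sketch of ballistic extrapolation: shift each solar wind sample from
# the observer to the target by its own travel time d / v, assuming the
# measured speed stays constant along the way (hence the need to remove
# ICME intervals from the input first).
import numpy as np

AU_KM = 1.495978707e8
d = 0.3 * AU_KM                         # observer-to-target distance, e.g. 0.3 AU

t_obs = np.arange(0.0, 72.0, 1.0)       # hours
v = 400 + 50 * np.sin(t_obs / 12.0)     # measured bulk speed, km/s

t_arrival = t_obs + d / v / 3600.0      # hours; faster wind arrives sooner
order = np.argsort(t_arrival)           # fast wind can overtake slow wind
t_target, v_target = t_arrival[order], v[order]
print("first predicted arrival (h):", round(t_target[0], 1))
```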
Remotely sensed soil moisture input to a hydrologic model
NASA Technical Reports Server (NTRS)
Engman, E. T.; Kustas, W. P.; Wang, J. R.
1989-01-01
The possibility of using detailed spatial soil moisture maps as input to a runoff model was investigated. The water balance of a small drainage basin was simulated using a simple storage model. Aircraft microwave measurements of soil moisture were used to construct two-dimensional maps of the spatial distribution of the soil moisture. Data from overflights on different dates provided the temporal changes resulting from soil drainage and evapotranspiration. The study site and data collection are described, and the soil measurement data are given. The model selection is discussed, and the simulation results are summarized. It is concluded that a time series of soil moisture is a valuable new type of data for verifying model performance and for updating and correcting simulated streamflow.
Scaling Observations of Surface Waves in the Beaufort Sea
2016-04-14
… the treatment of wind input can be improved in partial ice cover using the ice concentration, where wave energy is a function of open water distance … drifting buoys during the 2014 open water season, are interpreted using open water distances determined from satellite ice products and wind forcing time series measured in situ with the buoys. A significant portion of the wave observations was found to be limited by open water distance (fetch) when …
Observing spatio-temporal dynamics of excitable media using reservoir computing
NASA Astrophysics Data System (ADS)
Zimmermann, Roland S.; Parlitz, Ulrich
2018-04-01
We present a dynamical observer for two-dimensional partial differential equation models describing excitable media, where the required cross prediction from observed time series to unmeasured state variables is provided by Echo State Networks receiving input from local regions in space only. The efficacy of this approach is demonstrated for (noisy) data from a (cubic) Barkley model and the Bueno-Orovio-Cherry-Fenton model describing chaotic electrical wave propagation in cardiac tissue.
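A hedged, minimal echo state network for cross prediction on a toy one-dimensional task; the paper's observer feeds local spatial neighbourhoods of a two-dimensional medium into the network, which this sketch does not attempt:

```python
# Hedged minimal echo state network: drive a fixed random reservoir with an
# observed series u(t) and train only a ridge readout to reproduce an
# unmeasured variable y(t).
import numpy as np

rng = np.random.default_rng(10)
T, n = 3000, 200
u = np.sin(np.linspace(0, 60 * np.pi, T)) + 0.05 * rng.standard_normal(T)
y = np.roll(u, 5) ** 2                       # "hidden" variable to reconstruct

W_in = rng.uniform(-0.5, 0.5, n)
W = rng.standard_normal((n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

states = np.zeros((T, n))
x = np.zeros(n)
for t in range(1, T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

warm, lam = 100, 1e-6                        # washout length and ridge strength
S, target = states[warm:], y[warm:]
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ target)
pred = S @ W_out
print("cross-prediction RMSE:", np.sqrt(np.mean((pred - target) ** 2)))
```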
Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M
2017-10-01
Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions. Copyright © 2017 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Hsu, Feng-Hsin; Su, Chih-Chieh; Lin, In-Tain; Huh, Chih-An
2015-04-01
Submarine groundwater discharge (SGD) has been recognized as an important pathway for material exchange between land and sea. Input of SGD carries associated nutrients, trace metals, and inorganic carbon that may have great impacts on ecosystems in the coastal zone. Due to the variability of SGD magnitude, it is difficult to estimate the flux of these associated materials around the world. Even in the same area, SGD magnitude also varies in response to tide fluctuation and seasonal change in hydraulic gradient. Thus, long-term investigation is needed. In Taiwan, SGD studies are rare, and previous studies have emphasized the intrusion of seawater into coastal aquifers. According to information from the Hydrogeological Data Bank (Central Geological Survey, MOEA), some areas still show potential for SGD. Here, we report preliminary results of an SGD investigation at the Gaomei Wildlife Conservation Area, located south of the Da-Chia River mouth. The study area is characterized by a great tidal range and a shallow aquifer with a high groundwater recharge rate. Time-series measurements of short-lived Ra in surface water were made in both dry and wet seasons at a tidal flat site and show different trends of excess Ra-224 between the two seasons. High excess Ra-224 activities (>20 dpm/100L) occurred at high tide in the dry season but at low tide in the wet season. The plot of salinity versus excess Ra-224, showing a non-conservative curve, suggests that high excess Ra-224 activities derive from desorption in the dry season but from SGD input in the wet season.
NASA Technical Reports Server (NTRS)
Hermance, J. F. (Principal Investigator)
1981-01-01
A spherical harmonic analysis program is being tested which takes magnetic data in universal time from a set of arbitrarily spaced observatories and calculates a value for the instantaneous magnetic field at any point on the globe. The calculation is done as a least mean-squares fit to a set of spherical harmonics up to any desired order. The program accepts as input the orbital position of a satellite and coordinates it with ground-based magnetic data for a given time. The output is a predicted time series for the magnetic field on the Earth's surface at the (r, theta) position directly under the hypothetically orbiting satellite for the duration of the time period of the input data set. By tracking the surface magnetic field beneath the satellite, narrow-band averaged crosspowers between the spatially coordinated satellite and ground-based data sets are computed. These crosspowers are used to calculate field transfer coefficients with minimum noise distortion. The application of this technique to calculating the vector response function W is discussed.
Snedden, Gregg
2016-01-01
Estuarine navigation channels have long been recognized as conduits for saltwater intrusion into coastal wetlands. Salt flux decomposition and time series measurements of velocity and salinity were used to examine salt flux components and drivers of baroclinic and barotropic exchange in the Houma Navigation Channel, an estuarine channel located in the Mississippi River delta plain that receives substantial freshwater inputs from the Mississippi-Atchafalaya River system at its inland extent. Two modes of vertical current structure were identified from the time series data. The first mode, accounting for 90% of the total flow field variability, strongly resembled a barotropic current structure and was coherent with alongshelf wind stress over the coastal Gulf of Mexico. The second mode was indicative of gravitational circulation and was linked to variability in tidal stirring and the horizontal salinity gradient along the channel’s length. Tidal oscillatory salt flux was more important than gravitational circulation in transporting salt up-estuary, except over equatorial phases of the fortnightly tidal cycle during times when river inflows were minimal. During all tidal cycles sampled, the advective flux, driven by a combination of freshwater discharge and wind-driven changes in storage, was the dominant transport term, and net flux of salt was always out of the estuary. These findings indicate that although human-made channels can effectively facilitate inland intrusion of saline water, this intrusion can be minimized or even reversed when they are subject to significant freshwater inputs.
Dynamic modeling of neuronal responses in fMRI using cubature Kalman filtering.
Havlicek, Martin; Friston, Karl J; Jan, Jiri; Brazdil, Milan; Calhoun, Vince D
2011-06-15
This paper presents a new approach to inverting (fitting) models of coupled dynamical systems based on state-of-the-art (cubature) Kalman filtering. Crucially, this inversion furnishes posterior estimates of both the hidden states and parameters of a system, including any unknown exogenous input. Because the underlying generative model is formulated in continuous time (with a discrete observation process) it can be applied to a wide variety of models specified with either ordinary or stochastic differential equations. These are an important class of models that are particularly appropriate for biological time-series, where the underlying system is specified in terms of kinetics or dynamics (i.e., dynamic causal models). We provide comparative evaluations with generalized Bayesian filtering (dynamic expectation maximization) and demonstrate marked improvements in accuracy and computational efficiency. We compare the schemes using a series of difficult (nonlinear) toy examples and conclude with a special focus on hemodynamic models of evoked brain responses in fMRI. Our scheme promises to provide a significant advance in characterizing the functional architectures of distributed neuronal systems, even in the absence of known exogenous (experimental) input; e.g., resting state fMRI studies and spontaneous fluctuations in electrophysiological studies. Importantly, unlike current Bayesian filters (e.g. DEM), our scheme provides estimates of time-varying parameters, which we will exploit in future work on the adaptation and enabling of connections in the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
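A hedged sketch of a cubature Kalman filter on a standard scalar nonlinear benchmark rather than the paper's hemodynamic model; it shows the 2n equally weighted cubature points mu +/- sqrt(n) * (columns of chol(P)) used in both the predict and update steps:

```python
# Hedged minimal cubature Kalman filter on a scalar nonlinear benchmark.
import numpy as np

rng = np.random.default_rng(11)
f = lambda x, t: 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)  # state map
h = lambda x: x**2 / 20.0                                             # observation
Q, R = np.array([[1.0]]), np.array([[1.0]])

def cubature_points(mu, P):
    # 2n points mu +/- sqrt(n) * columns of chol(P), each with weight 1/(2n)
    n = mu.size
    S = np.linalg.cholesky(P)
    return np.vstack([mu + np.sqrt(n) * S.T, mu - np.sqrt(n) * S.T])

T = 100
x_true, ys, x = np.zeros(T), np.zeros(T), 0.1
for t in range(T):
    x = f(x, t) + rng.standard_normal()
    x_true[t], ys[t] = x, h(x) + rng.standard_normal()

mu, P, est = np.zeros(1), np.eye(1), np.zeros(T)
for t in range(T):
    Xp = np.array([f(p, t) for p in cubature_points(mu, P)])       # predict
    mu_p = Xp.mean(axis=0)
    P_p = (Xp - mu_p).T @ (Xp - mu_p) / len(Xp) + Q
    Xc = cubature_points(mu_p, P_p)                                # update
    Z = np.array([h(p) for p in Xc])
    z_hat = Z.mean(axis=0)
    Pzz = (Z - z_hat).T @ (Z - z_hat) / len(Z) + R
    Pxz = (Xc - mu_p).T @ (Z - z_hat) / len(Z)
    K = Pxz @ np.linalg.inv(Pzz)
    mu = mu_p + K @ (ys[t] - z_hat)
    P = P_p - K @ Pzz @ K.T
    est[t] = mu[0]
print("filter RMSE:", np.sqrt(np.mean((est - x_true) ** 2)))
```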
Calibrating binary lumped parameter models
NASA Astrophysics Data System (ADS)
Morgenstern, Uwe; Stewart, Mike
2017-04-01
Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different age, usually representing the age distribution with two parameters, the mean residence time and the mixing parameter. Simple lumped parameter models can often match well the measured time-varying age tracer concentrations, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data points (time series and/or multiple tracers) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages for such wells than represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent a more complex mixing, with mixing of water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and therefore difficult to constrain fully. Two or more age tracers with different input functions, with multiple measurements over time, can provide the required information to constrain the parameters of the binary mixing model. We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6, whose input function currently has a steep gradient. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
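A hedged sketch of the forward calculation for a binary mixing model: the tracer output is the input history convolved with a weighted sum of two age distributions, with radioactive decay for tritium. Exponential components and all parameter values are illustrative; published work often uses exponential piston-flow components instead:

```python
# Hedged sketch: tritium output of a binary mixture of a young and an old
# exponential age distribution, including radioactive decay.
import numpy as np

dt = 0.5                                       # years
tau = np.arange(dt, 120, dt)                   # transit times
lam = np.log(2) / 12.32                        # tritium decay constant (1/yr)

def exp_model(tau, mrt):
    return np.exp(-tau / mrt) / mrt

f_young = 0.3                                  # fraction of young water
g = f_young * exp_model(tau, 3.0) + (1 - f_young) * exp_model(tau, 60.0)

years = np.arange(1950, 2020, dt)
tritium_in = 2.0 + 100.0 * np.exp(-0.5 * ((years - 1964) / 4.0) ** 2)  # toy bomb peak

out = np.array([
    np.sum(np.interp(y - tau, years, tritium_in, left=2.0) *
           g * np.exp(-lam * tau)) * dt
    for y in years
])
print("simulated tritium in 2015:", out[np.searchsorted(years, 2015)])
```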
NASA Astrophysics Data System (ADS)
Harris, Courtney K.; Wiberg, Patricia L.
1997-09-01
Modeling shelf sediment transport rates and bed reworking depths is problematic when the wave and current forcing conditions are not precisely known, as is usually the case when long-term sedimentation patterns are of interest. Two approaches to modeling sediment transport under such circumstances are considered. The first relies on measured or simulated time series of flow conditions to drive model calculations. The second approach uses as model input probability distribution functions of bottom boundary layer flow conditions developed from wave and current measurements. Sediment transport rates, frequency of bed resuspension by waves and currents, and bed reworking calculated using the two methods are compared at the mid-shelf STRESS (Sediment TRansport on Shelves and Slopes) site on the northern California continental shelf. Current, wave and resuspension measurements at the site are used to generate model inputs and test model results. An 11-year record of bottom wave orbital velocity, calculated from surface wave spectra measured by the National Data Buoy Center (NDBC) Buoy 46013 and verified against bottom tripod measurements, is used to characterize the frequency and duration of wave-driven transport events and to estimate the joint probability distribution of wave orbital velocity and period. A 109-day record of hourly current measurements 10 m above bottom is used to estimate the probability distribution of bottom boundary layer current velocity at this site and to develop an auto-regressive model to simulate current velocities for times when direct measurements of currents are not available. Frequency of transport, the maximum volume of suspended sediment, and average flux calculated using measured wave and simulated current time series agree well with values calculated using measured time series. A probabilistic approach is more amenable to calculations over time scales longer than existing wave records, but it tends to underestimate net transport because it does not capture the episodic nature of transport events. Both methods enable estimates to be made of the uncertainty in transport quantities that arise from an incomplete knowledge of the specific timing of wave and current conditions. 1997 Elsevier Science Ltd
NASA Astrophysics Data System (ADS)
Czuba, Jonathan A.; Foufoula-Georgiou, Efi; Gran, Karen B.; Belmont, Patrick; Wilcock, Peter R.
2017-05-01
Understanding how sediment moves along source to sink pathways through watersheds—from hillslopes to channels and in and out of floodplains—is a fundamental problem in geomorphology. We contribute to advancing this understanding by modeling the transport and in-channel storage dynamics of bed material sediment on a river network over a 600 year time period. Specifically, we present spatiotemporal changes in bed sediment thickness along an entire river network to elucidate how river networks organize and process sediment supply. We apply our model to sand transport in the agricultural Greater Blue Earth River Basin in Minnesota. By casting the arrival of sediment to links of the network as a Poisson process, we derive analytically (under supply-limited conditions) the time-averaged probability distribution function of bed sediment thickness for each link of the river network for any spatial distribution of inputs. Under transport-limited conditions, the analytical assumptions of the Poisson arrival process are violated (due to in-channel storage dynamics) where we find large fluctuations and periodicity in the time series of bed sediment thickness. The time series of bed sediment thickness is the result of dynamics on a network in propagating, altering, and amalgamating sediment inputs in sometimes unexpected ways. One key insight gleaned from the model is that there can be a small fraction of reaches with relatively low-transport capacity within a nonequilibrium river network acting as "bottlenecks" that control sediment to downstream reaches, whereby fluctuations in bed elevation can dissociate from signals in sediment supply.
NASA Astrophysics Data System (ADS)
Mosier, T. M.; Hill, D. F.; Sharp, K. V.
2013-12-01
High spatial resolution time-series data are critical for many hydrological and earth science studies. Multiple groups have developed historical and forecast datasets of high-resolution monthly time-series for regions of the world such as the United States (e.g. PRISM for hindcast data and MACA for long-term forecasts); however, analogous datasets have not been available for most data scarce regions. The current work fills this data need by producing and freely distributing hindcast and forecast time-series datasets of monthly precipitation and mean temperature for all global land surfaces, gridded at a 30 arc-second resolution. The hindcast data are constructed through a Delta downscaling method, using as inputs 0.5 degree monthly time-series and 30 arc-second climatology global weather datasets developed by Willmott & Matsuura and WorldClim, respectively. The forecast data are formulated using a similar downscaling method, but with an additional step to remove bias from the climate variable's probability distribution over each region of interest. The downscaling package is designed to be compatible with a number of general circulation models (GCM) (e.g. with GCMs developed for the IPCC AR4 report and CMIP5), and is presently implemented using time-series data from the NCAR CESM1 model in conjunction with 30 arc-second future decadal climatologies distributed by the Consultative Group on International Agricultural Research. The resulting downscaled datasets are 30 arc-second time-series forecasts of monthly precipitation and mean temperature available for all global land areas. As an example of these data, historical and forecast 30 arc-second monthly time-series from 1950 through 2070 are created and analyzed for the region encompassing Pakistan. For this case study, forecast datasets corresponding to the future representative concentration pathways 45 and 85 scenarios developed by the IPCC are presented and compared. This exercise highlights a range of potential meteorological trends for the Pakistan region and more broadly serves to demonstrate the utility of the presented 30 arc-second monthly precipitation and mean temperature datasets for use in data scarce regions.
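A hedged sketch of the Delta step for a single month and coarse cell, applying the coarse anomaly to a fine climatology multiplicatively for precipitation and additively for temperature; the numbers are toy stand-ins for the Willmott & Matsuura and WorldClim inputs:

```python
# Hedged sketch of Delta downscaling for one month: express the coarse-grid
# value as an anomaly against the coarse climatology, then apply it to the
# 30 arc-second climatology (ratio for precipitation, offset for temperature).
import numpy as np

coarse_precip, coarse_precip_clim = 80.0, 60.0      # mm, one 0.5-degree cell
coarse_temp, coarse_temp_clim = 14.2, 13.0          # deg C

# 30 arc-second climatology tiles within that coarse cell (e.g. WorldClim)
fine_precip_clim = np.array([[40.0, 55.0], [70.0, 90.0]])
fine_temp_clim = np.array([[12.0, 12.8], [13.5, 14.1]])

precip_down = fine_precip_clim * (coarse_precip / coarse_precip_clim)  # ratio
temp_down = fine_temp_clim + (coarse_temp - coarse_temp_clim)          # offset
print(precip_down)
print(temp_down)
```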
Bellarby, Jessica; Surridge, Ben W J; Haygarth, Philip M; Liu, Kun; Siciliano, Giuseppina; Smith, Laurence; Rahn, Clive; Meng, Fanqiao
2018-04-01
In order to improve the efficiency of nutrient use whilst also meeting projected changes in the demand for food within China, new nutrient management frameworks comprised of policy, practice and the means of delivering change are required. These frameworks should be underpinned by systemic analyses of the stocks and flows of nutrients within agricultural production. In this paper, a 30-year time series of the stocks and flows of nitrogen (N), phosphorus (P) and potassium (K) are reported for Huantai county, an exemplar area of intensive agricultural production in the North China Plain. Substance flow analyses were constructed for the major crop systems in the county across the period 1983-2014. On average across all production systems between 2010 and 2014, total annual nutrient inputs to agricultural land in Huantai county remained high at 18.1 kt N, 2.7 kt P and 7.8 kt K (696 kg N ha-1; 104 kg P ha-1; 300 kg K ha-1). Whilst the application of inorganic fertiliser dominated these inputs, crop residues, atmospheric deposition and livestock manure represented significant, yet largely unrecognised, sources of nutrients, depending on the individual production system and the period of time. Whilst nutrient use efficiency (NUE) increased for N and P between 1983 and 2014, future improvements in NUE will require better alignment of nutrient inputs and crop demand. This is particularly true for high-value fruit and vegetable production, in which appropriate recognition of nutrient supply from sources such as manure and from soil reserves will be required to enhance NUE. Aligned with the structural organisation of the public agricultural extension service at county-scale in China, our analyses highlight key areas for the development of future agricultural policy and farm advice in order to rebalance the management of natural resources from a focus on production and growth towards the aims of efficiency and sustainability. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Worrall, F.; Howden, N. J. K.; Burt, T. P.
2012-10-01
This paper analyses the world's longest fluvial record of water hardness and calcium (Ca) concentration. We used records of permanent and temporary hardness and river flow for the UK's River Thames (catchment area 9998 km2) to estimate annual Ca flux from the river since 1883. The Thames catchment has a mix of agricultural and urban land use; it is dominated by mineral soils with groundwater contributing around 60% of river flow. Since the late 1800s, the catchment has undergone widespread urbanisation and climate warming, but has also been subjected to large-scale land-use change, especially during World War II and agricultural intensification in the 1960s. Here, we use a range of time series methods to explore the relative importance of these drivers in determining catchment-scale biogeochemical response. Ca concentrations in the Thames rose to a peak in the late 1980s (106 mg Ca/l). The flux of Ca peaked in 1916 at 385 ktonnes Ca/yr; the minimum was in 1888 at 34 ktonnes Ca/yr. For both the annual average Ca concentration and the annual flux of Ca, there were significant increases with time; a significant positive memory effect relative to the previous year; and significant correlation with annual water yield. No significant correlation was found with either temperature or land use, but sulphate deposition was found to be significant. It was also possible, for a shorter time series, to show a significant relationship with inorganic nitrogen inputs into the catchment. We suggest that ionic inputs did not acidify the mineral soils of the catchment but did cause the leaching of metals, so we conclude that the decline in river Ca concentrations is caused by the decline in both S and N inputs.
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error combined with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
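A hedged sketch of the data-reduction step, assuming PyWavelets: decompose a rainfall series, keep only the low-order approximation coefficients, and reconstruct; in the study it is these retained coefficients that are inferred jointly with the model parameters via MCMC:

```python
# Hedged sketch of DWT-based dimensionality reduction of a rainfall series:
# zero the detail levels and keep only the approximation coefficients.
import numpy as np
import pywt

rng = np.random.default_rng(12)
rain = np.maximum(0.0, rng.gamma(0.3, 8.0, 256) - 1.0)   # toy hourly rainfall

coeffs = pywt.wavedec(rain, "db4", level=4)              # [cA4, cD4, ..., cD1]
kept = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
print("dimension reduced from", rain.size, "to", coeffs[0].size)

rain_approx = pywt.waverec(kept, "db4")[: rain.size]
print("retained rainfall volume fraction:", rain_approx.sum() / rain.sum())
```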
NASA Astrophysics Data System (ADS)
Lakshmi, K.; Rama Mohan Rao, A.
2014-10-01
In this paper, a novel output-only damage-detection technique based on time-series models for structural health monitoring in the presence of environmental variability and measurement noise is presented. The large amount of data obtained in the form of time-history response is transformed using principal component analysis, in order to reduce the data size and thereby improve the computational efficiency of the proposed algorithm. The time instant of damage is obtained by fitting the acceleration time-history data from the structure using autoregressive (AR) and AR with exogenous inputs time-series prediction models. The probability density functions (PDFs) of damage features obtained from the variances of prediction errors corresponding to references and healthy current data are found to be shifting from each other due to the presence of various uncertainties such as environmental variability and measurement noise. Control limits using novelty index are obtained using the distances of the peaks of the PDF curves in healthy condition and used later for determining the current condition of the structure. Numerical simulation studies have been carried out using a simply supported beam and also validated using an experimental benchmark data corresponding to a three-storey-framed bookshelf structure proposed by Los Alamos National Laboratory. Studies carried out in this paper clearly indicate the efficiency of the proposed algorithm for damage detection in the presence of measurement noise and environmental variability.
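A hedged sketch of the AR part of the damage feature: fit an AR model to reference (healthy) acceleration data, reuse its coefficients on current data, and compare prediction-error variances; the data below are a toy single-mode response, and the exogenous-input (ARX) stage is omitted:

```python
# Hedged sketch: AR prediction-error variance as a damage feature. A ratio
# well above the healthy-state control limit flags damage.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(13)
def response(freq, n=4000):                       # toy single-mode response
    t = np.arange(n) / 100.0
    return np.sin(2 * np.pi * freq * t) + 0.2 * rng.standard_normal(n)

healthy = response(3.0)
damaged = response(2.7)                           # stiffness loss shifts the mode

coef = AutoReg(healthy, lags=10).fit().params     # [const, lag1 .. lag10]

def error_variance(x, c):
    lags = len(c) - 1
    X = np.column_stack([x[lags - k - 1: len(x) - k - 1] for k in range(lags)])
    pred = c[0] + X @ c[1:]                       # one-step-ahead predictions
    return np.var(x[lags:] - pred)

ratio = error_variance(damaged, coef) / error_variance(healthy, coef)
print("prediction-error variance ratio:", round(ratio, 2))
```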
[Neuroendocrine mechanisms of puberty onset].
Teinturier, C
2002-10-01
An increase in pulsatile release of GnRH is essential for the onset of puberty. However, the mechanism controlling the pubertal increase in GnRH release is still unclear. The GnRH neurosecretory system is already active during the neonatal period but subsequently enters a dormant state by central inhibition in the juvenile period. When this central inhibition is removed or diminished, an increase in GnRH release occurs with increase in synthesis and release of gonadotropins and gonadal steroids, followed by the appearance of secondary sexual characteristics. Recent studies suggest that disinhibition of GnRH neurons from GABA (gamma-aminobutyric acid) appears to be a critical factor in female rhesus monkey. After central inhibition is removed, increases in stimulatory input from glutamatergic neurons as well as new stimulatory input from norepinephrine and NPY neurons and inhibitory input from beta endorphin neurons appear to control pulsatile GnRH release as well as gonadal steroids. Nonetheless, the most important question still remains: what determines the timing to remove central inhibition? Because many genes are turned on or turned off to establish a complex series of events occurring during puberty, the timing of puberty must be regulated by a master gene or genes, as a part of developmental events.
Austin, Caitlin M.; Stoy, William; Su, Peter; Harber, Marie C.; Bardill, J. Patrick; Hammer, Brian K.; Forest, Craig R.
2014-01-01
Biosensors exploiting communication within genetically engineered bacteria are becoming increasingly important for monitoring environmental changes. Currently, there are a variety of mathematical models for understanding and predicting how genetically engineered bacteria respond to molecular stimuli in these environments, but as sensors have miniaturized towards microfluidics and are subjected to complex time-varying inputs, the shortcomings of these models have become apparent. The effects of microfluidic environments such as low oxygen concentration, increased biofilm encapsulation, diffusion limited molecular distribution, and higher population densities strongly affect rate constants for gene expression not accounted for in previous models. We report a mathematical model that accurately predicts the biological response of the autoinducer N-acyl homoserine lactone-mediated green fluorescent protein expression in reporter bacteria in microfluidic environments by accommodating these rate constants. This generalized mass action model considers a chain of biomolecular events from input autoinducer chemical to fluorescent protein expression through a series of six chemical species. We have validated this model against experimental data from our own apparatus as well as prior published experimental results. Results indicate accurate prediction of dynamics (e.g., 14% peak time error from a pulse input) and with reduced mean-squared error with pulse or step inputs for a range of concentrations (10 μM–30 μM). This model can help advance the design of genetically engineered bacteria sensors and molecular communication devices. PMID:25379076
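A hedged sketch of a generalized mass-action cascade of six species from autoinducer input to fluorescent protein, integrated with SciPy; the chain topology matches the description above but every rate constant is illustrative, not a fitted value from the paper:

```python
# Hedged sketch: six-species mass-action chain from an AHL pulse to GFP,
# with first-order transfer between species and a common decay rate.
import numpy as np
from scipy.integrate import solve_ivp

k = np.array([0.5, 0.4, 0.3, 0.3, 0.2, 0.1])   # transfer rates (1/min)
d = 0.02                                        # common decay rate (1/min)

def ahl(t):                                     # 60-minute input pulse (uM)
    return 20.0 if 30.0 <= t <= 90.0 else 0.0

def rhs(t, y):
    dy = np.empty(6)
    dy[0] = k[0] * ahl(t) - (k[1] + d) * y[0]
    for i in range(1, 5):
        dy[i] = k[i] * y[i - 1] - (k[i + 1] + d) * y[i]
    dy[5] = k[5] * y[4] - d * y[5]              # y[5]: GFP
    return dy

sol = solve_ivp(rhs, (0.0, 600.0), np.zeros(6), max_step=1.0)
print("GFP peak time (min):", sol.t[np.argmax(sol.y[5])])
```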
Revealing Real-Time Emotional Responses: a Personalized Assessment based on Heartbeat Dynamics
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Lanatá, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2014-05-01
Emotion recognition through computational modeling and analysis of physiological signals has been widely investigated in the last decade. Most of the proposed emotion recognition systems require relatively long-time series of multivariate records and do not provide accurate real-time characterizations using short-time series. To overcome these limitations, we propose a novel personalized probabilistic framework able to characterize the emotional state of a subject through the analysis of heartbeat dynamics exclusively. The study includes thirty subjects presented with a set of standardized images gathered from the international affective picture system, alternating levels of arousal and valence. Due to the intrinsic nonlinearity and nonstationarity of the RR interval series, a specific point-process model was devised for instantaneous identification considering autoregressive nonlinearities up to the third-order according to the Wiener-Volterra representation, thus tracking very fast stimulus-response changes. Features from the instantaneous spectrum and bispectrum, as well as the dominant Lyapunov exponent, were extracted and considered as input features to a support vector machine for classification. Results, estimating emotions each 10 seconds, achieve an overall accuracy in recognizing four emotional states based on the circumplex model of affect of 79.29%, with 79.15% on the valence axis, and 83.55% on the arousal axis.
Version 2 of the IASI NH3 neural network retrieval algorithm: near-real-time and reanalysed datasets
NASA Astrophysics Data System (ADS)
Van Damme, Martin; Whitburn, Simon; Clarisse, Lieven; Clerbaux, Cathy; Hurtmans, Daniel; Coheur, Pierre-François
2017-12-01
Recently, Whitburn et al. (2016) presented a neural-network-based algorithm for retrieving atmospheric ammonia (NH3) columns from Infrared Atmospheric Sounding Interferometer (IASI) satellite observations. In the past year, several improvements have been introduced, and the resulting new baseline version, Artificial Neural Network for IASI (ANNI)-NH3-v2.1, is documented here. One of the main changes to the algorithm is that separate neural networks were trained for land and sea observations, resulting in a better training performance for both groups. By reducing and transforming the input parameter space, performance is now also better for observations associated with favourable sounding conditions (i.e. enhanced thermal contrasts). Other changes relate to the introduction of a bias correction over land and sea and the treatment of the satellite zenith angle. In addition to these algorithmic changes, new recommendations for post-filtering the data and for averaging data in time or space are formulated. We also introduce a second dataset (ANNI-NH3-v2.1R-I) which relies on ERA-Interim ECMWF meteorological input data, along with surface temperature retrieved from a dedicated network, rather than the operationally provided Eumetsat IASI Level 2 (L2) data used for the standard near-real-time version. The need for such a dataset emerged after a series of sharp discontinuities were identified in the NH3 time series, which could be traced back to incremental changes in the IASI L2 algorithms for temperature and clouds. The reanalysed dataset is coherent in time and can therefore be used to study trends. Furthermore, both datasets agree reasonably well in the mean on recent data, after the date when the IASI meteorological L2 version 6 became operational (30 September 2014).
Shabri, Ani; Samsudin, Ruhaidah
2014-01-01
Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose the original time series into several subseries at different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction capability of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
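The decompose-then-regress idea can be sketched as follows, assuming the PyWavelets package (wavelet 'db4', level 3) and a plain one-step linear regression; the PCA and PSO stages are omitted and the data are synthetic:

```python
# Minimal sketch of the wavelet-MLR idea: decompose the price series into
# subseries with PyWavelets, then regress next-day price on the current
# subseries values. Wavelet choice, level, and data are assumptions.
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

prices = np.cumsum(np.random.randn(512)) + 100.0    # stand-in for WTI data

# Mallat-style decomposition, then per-band reconstruction to full length.
coeffs = pywt.wavedec(prices, 'db4', level=3)
bands = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    bands.append(pywt.waverec(keep, 'db4')[:len(prices)])
X = np.column_stack(bands)                           # one column per subseries

# Regress tomorrow's price on today's subseries values (the MLR step).
model = LinearRegression().fit(X[:-1], prices[1:])
print("one-step forecast:", model.predict(X[[-1]])[0])
```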
Flood mapping in ungauged basins using fully continuous hydrologic-hydraulic modeling
NASA Astrophysics Data System (ADS)
Grimaldi, Salvatore; Petroselli, Andrea; Arcangeletti, Ettore; Nardi, Fernando
2013-04-01
In this work, a fully-continuous hydrologic-hydraulic modeling framework for flood mapping is introduced and tested. It is characterized by a simulation of a long rainfall time series at sub-daily resolution that feeds a continuous rainfall-runoff model producing a discharge time series that is directly given as an input to a bi-dimensional hydraulic model. The main advantage of the proposed approach is to avoid the use of the design hyetograph and the design hydrograph that constitute the main source of subjective analysis and uncertainty for standard methods. The proposed procedure is optimized for small and ungauged watersheds where empirical models are commonly applied. Results of a simple real case study confirm that this experimental fully-continuous framework may pave the way for the implementation of a less subjective and potentially automated procedure for flood hazard mapping.
Sectoral risk research about input-output structure of the United States
NASA Astrophysics Data System (ADS)
Zhang, Mao
2018-02-01
Research on economic risk at the sectoral level is rare, even though it is significantly important for risk prewarning. This paper employs a status coefficient to measure the symmetry of the economic subnetwork, which is negatively correlated with sectoral risk. We then conduct empirical research in both the cross-section and time series dimensions. In the cross-section dimension, we study the correlations between the sectoral status coefficient and sectoral volatility, earning rate, and Sharpe ratio, respectively, in the year 2015. Next, from the time series perspective, we first investigate the change in correlation between the sectoral status coefficient and annual total output from 1997 to 2015. Then, we divide the 71 sectors in the United States into agriculture, manufacturing, services, and government, compare the trend terms of the average sectoral status coefficients of the four industries, and illustrate the causes behind them. We also find an obvious abnormality in the housing sector. Finally, this paper puts forward some suggestions for the federal government.
Gangopadhyay, Subhrendu; McCabe, Gregory J.; Woodhouse, Connie A.
2015-01-01
In this paper, we present a methodology to use annual tree-ring chronologies and a monthly water balance model to generate annual reconstructions of water balance variables (e.g., potential evapotranspiration (PET), actual evapotranspiration (AET), snow water equivalent (SWE), soil moisture storage (SMS), and runoff (R)). The method involves resampling monthly temperature and precipitation from the instrumental record directed by variability indicated by the paleoclimate record. The generated time series of monthly temperature and precipitation are subsequently used as inputs to a monthly water balance model. The methodology is applied to the Upper Colorado River Basin, and results indicate that the methodology reliably simulates water-year runoff, maximum snow water equivalent, and seasonal soil moisture storage for the instrumental period. As a final application, the methodology is used to produce time series of PET, AET, SWE, SMS, and R for the 1404–1905 period for the Upper Colorado River Basin.
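A toy version of the resampling idea is sketched below, with a crude monthly bucket model standing in for the calibrated water balance model and synthetic arrays standing in for the instrumental and tree-ring records; every constant is an assumption:

```python
# For each reconstructed (paleo) annual precipitation total, draw the
# monthly T and P pattern of the instrumental year with the closest
# annual total, then run a toy monthly water balance.
import numpy as np

rng = np.random.default_rng(0)
inst_P = rng.gamma(2.0, 30.0, size=(70, 12))          # monthly precip, mm
inst_T = rng.normal(8.0, 6.0, size=(70, 12))          # monthly temp, degC
paleo_totals = rng.gamma(2.0, 360.0, size=500)        # reconstructed annual P

annual_inst = inst_P.sum(axis=1)
runoff = []
for target in paleo_totals:
    yr = np.argmin(np.abs(annual_inst - target))      # nearest-analog year
    storage, swe, R = 100.0, 0.0, 0.0
    for p, tmp in zip(inst_P[yr], inst_T[yr]):
        snow = p if tmp < 0 else 0.0                  # snow below freezing
        swe += snow
        melt = min(swe, max(tmp, 0.0) * 5.0)          # degree-day melt
        swe -= melt
        pet = max(tmp, 0.0) * 10.0                    # crude PET proxy
        storage += (p - snow) + melt - min(pet, storage)
        R += max(storage - 150.0, 0.0)                # overflow is runoff
        storage = min(storage, 150.0)                 # capacity 150 mm
    runoff.append(R)
print("mean reconstructed water-year runoff (mm):", np.mean(runoff))
```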
A novel hybrid ensemble learning paradigm for tourism forecasting
NASA Astrophysics Data System (ADS)
Shabri, Ani
2015-02-01
In this paper, a hybrid forecasting model based on Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed to forecast tourism demand. The methodology first decomposes the original visitor arrival series into several Intrinsic Mode Function (IMF) components and one residual component by the EMD technique. Then, the IMF components and the residual component are forecast separately using GMDH models whose input variables are selected using the Partial Autocorrelation Function (PACF). The final forecast for the tourism series is produced by aggregating all the component forecasts. To evaluate the performance of the proposed EMD-GMDH methodology, the monthly data of tourist arrivals from Singapore to Malaysia are used as an illustrative example. Empirical results show that the proposed EMD-GMDH model outperforms the EMD-ARIMA as well as the GMDH and ARIMA (Autoregressive Integrated Moving Average) models without time series decomposition.
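A compact sketch of the decompose-forecast-aggregate scheme, assuming the PyEMD package and substituting a plain lagged autoregression for the GMDH forecaster and a fixed lag order for PACF-based selection (both swaps are simplifications, not the paper's method):

```python
# EMD decomposition, per-component one-step forecast, then aggregation.
import numpy as np
from PyEMD import EMD
from sklearn.linear_model import LinearRegression

arrivals = 1e5 + 1e4 * np.sin(np.arange(240) * 2 * np.pi / 12) \
           + 500.0 * np.arange(240) + np.random.randn(240) * 2e3

imfs = EMD().emd(arrivals)        # rows: IMFs, with the trend/residue last

def ar_forecast(series, lags=6):
    """One-step forecast from a linear AR(lags) fit (GMDH stand-in)."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    model = LinearRegression().fit(X, y)
    return model.predict(series[-lags:].reshape(1, -1))[0]

forecast = sum(ar_forecast(component) for component in imfs)
print("aggregated one-step tourism forecast:", forecast)
```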
Comparative case study between D3 and highcharts on lustre data visualization
NASA Astrophysics Data System (ADS)
ElTayeby, Omar; John, Dwayne; Patel, Pragnesh; Simmerman, Scott
2013-12-01
One of the challenging tasks in visual analytics is to target clustered time-series data sets, since it is important for data analysts to discover patterns changing over time while keeping their focus on particular subsets. In order to leverage the human ability to quickly perceive these patterns visually, multivariate features should be implemented according to the attributes available. A comparative case study has been done using JavaScript libraries to demonstrate the differences in their capabilities. A web-based application to monitor the Lustre file system for systems administrators and operations teams has been developed using D3 and Highcharts. Lustre file systems are responsible for managing Remote Procedure Calls (RPCs), which include input/output (I/O) requests between clients and Object Storage Targets (OSTs). The objective of this application is to provide time-series visuals of these calls and of the storage patterns of users on Kraken, a University of Tennessee High Performance Computing (HPC) resource at Oak Ridge National Laboratory (ORNL).
Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy
2014-11-01
In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Taking intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) as input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). By representing these velocities as Zernike polynomial series, we reduce the tomography problem to an ill-posed problem of finding the coefficients of the series from the acquired ultrasonic measurements. This problem is treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity by employing SVD-based filtering to select the Zernike polynomials included in the series. The first approach, Tikhonov regularization without filtering, is used because it is the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. The obtained relative residual norm and error in flow estimation are, respectively, ~0.3% and ~1.6% for the less turbulent flow and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by matching the derived tomograms with a physical flow model.
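The Tikhonov inversion step can be illustrated with a toy Zernike system; the geometry, noise level, and regularization weight below are assumptions, not the paper's configuration:

```python
# Represent a field as a low-order Zernike series, generate noisy
# measurements, and recover the coefficients by Tikhonov regularization:
# solve (A^T A + lambda I) x = A^T b.
import numpy as np

def zernike_basis(r, theta):
    """A few low-order Zernike polynomials evaluated at (r, theta)."""
    return np.array([np.ones_like(r),              # piston
                     2 * r * np.cos(theta),        # tilt x
                     2 * r * np.sin(theta),        # tilt y
                     np.sqrt(3) * (2 * r**2 - 1)]) # defocus

rng = np.random.default_rng(1)
n_meas, n_coef = 40, 4
r = rng.uniform(0, 1, n_meas)
theta = rng.uniform(0, 2 * np.pi, n_meas)
A = zernike_basis(r, theta).T                      # measurement matrix
x_true = np.array([0.5, -0.2, 0.1, 0.3])
b = A @ x_true + 0.01 * rng.standard_normal(n_meas)

lam = 1e-2                                         # Tikhonov weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_coef), A.T @ b)
print("recovered coefficients:", np.round(x_hat, 3))
```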
Using Monte Carlo Simulation to Prioritize Key Maritime Environmental Impacts of Port Infrastructure
NASA Astrophysics Data System (ADS)
Perez Lespier, L. M.; Long, S.; Shoberg, T.
2016-12-01
This study creates a Monte Carlo simulation model to prioritize key indicators of environmental impacts resulting from maritime port infrastructure. Data inputs are derived from LandSat imagery, government databases, and industry reports to create the simulation. Results are validated using subject matter experts and compared with those returned from time-series regression to determine goodness of fit. The Port of Prince Rupert, Canada is used as the location for the study.
Partial Granger causality--eliminating exogenous inputs and latent variables.
Guo, Shuixia; Seth, Anil K; Kendrick, Keith M; Zhou, Cong; Feng, Jianfeng
2008-07-15
Attempts to identify causal interactions in multivariable biological time series (e.g., gene data, protein data, physiological data) can be undermined by the confounding influence of environmental (exogenous) inputs. Compounding this problem, we are commonly only able to record a subset of all related variables in a system. These recorded variables are likely to be influenced by unrecorded (latent) variables. To address this problem, we introduce a novel variant of a widely used statistical measure of causality--Granger causality--that is inspired by the definition of partial correlation. Our 'partial Granger causality' measure is extensively tested with toy models, both linear and nonlinear, and is applied to experimental data: in vivo multielectrode array (MEA) local field potentials (LFPs) recorded from the inferotemporal cortex of sheep. Our results demonstrate that partial Granger causality can reveal the underlying interactions among elements in a network in the presence of exogenous inputs and latent variables in many cases where the existing conditional Granger causality fails.
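For intuition, the sketch below implements ordinary (non-partial) Granger causality as a residual-variance comparison; the partial correction for exogenous and latent influences is the paper's contribution and is not reproduced here, and the lag order and data are assumptions:

```python
# x "Granger-causes" y if adding x's past shrinks the residual variance
# of an autoregressive model for y.
import numpy as np

rng = np.random.default_rng(2)
n, lag = 2000, 2
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(lag, n):
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + rng.standard_normal()

def ar_residual_var(y, regressors, lag):
    """Residual variance of y regressed on `lag` past values of each regressor."""
    T = len(y)
    X = np.column_stack([reg[lag - k:T - k] for reg in regressors
                         for k in range(1, lag + 1)])
    target = y[lag:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return (target - X @ beta).var()

var_restricted = ar_residual_var(y, [y], lag)      # y's own past only
var_full = ar_residual_var(y, [y, x], lag)         # add x's past
print("Granger causality x->y:", np.log(var_restricted / var_full))
```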
Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems
Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S.; Agarwal, Dev P.
2015-01-01
A Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed for a class of nonlinear dynamical systems. In this process, the weights connecting the instar and outstar, that is, the input-hidden and hidden-output layers, respectively, are adjusted using Fuzzy Competitive Learning (FCL). The FCL paradigm adopts the principle of competitive learning to calculate the proposed Best Matched Node (BMN). This strategy offers robust control of nonlinear dynamical systems. FCPN is compared with existing networks such as the Dynamic Network (DN) and the Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth, and the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and of multiple-input single-output (MISO) and single-input single-output (SISO) gas furnace Box-Jenkins time series data. PMID:26366169
GPS Imaging of Time-Dependent Seasonal Strain in Central California
NASA Astrophysics Data System (ADS)
Kraner, M.; Hammond, W. C.; Kreemer, C.; Borsa, A. A.; Blewitt, G.
2016-12-01
Recent studies suggest that crustal deformation can be time-dependent and nontectonic. Continuous global positioning system (cGPS) measurements now show how steady long-term deformation can be influenced by factors such as fluctuations in loading and temperature variations. Here we model the seasonal time-dependent dilatational and shear strain in Central California, specifically surrounding the Parkfield region, and try to uncover the sources of these deformation patterns. We use 8 years of cGPS data (2008-2016) processed by the Nevada Geodetic Laboratory and carefully select the cGPS stations for our analysis based on the vertical position of the cGPS time series during the drought period. In building our strain model, we first detrend the selected station time series using a set of velocities from the robust MIDAS trend estimator, an approach that is insensitive to common problems such as step discontinuities, outliers, and seasonality. We use these detrended time series to estimate the median cGPS positions for each month of the 8-year period and filter displacement differences between these monthly median positions using a technique called "GPS Imaging," which improves the overall robustness and spatial resolution of the input displacements for the strain model. We then model the dilatational and shear strain field for each month of the time series. We also test a variety of a priori constraints, which control the style of faulting within the strain model. Upon examining our strain maps, we find that a seasonal strain signal exists in Central California. We investigate how this signal compares to thermoelastic, hydrologic, and atmospheric loading models during the 8-year period. We additionally determine whether the drought played a role in influencing the seasonal signal.
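The MIDAS idea, the median of slopes between pairs of points separated by about one year, can be sketched on synthetic data; the daily sampling and exact pairing rule below are assumptions:

```python
# A MIDAS-like trend estimate: median of one-year-separated slopes,
# insensitive to steps, outliers, and seasonality by construction.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 8 * 365) / 365.25                        # 8 years, in years
pos = 3.0 * t + 2.0 * np.sin(2 * np.pi * t) + rng.standard_normal(t.size)
pos[1200:] += 15.0                                        # step discontinuity

pair = 365                                                # ~1-year separation
slopes = (pos[pair:] - pos[:-pair]) / (t[pair:] - t[:-pair])
print("MIDAS-like velocity:", np.median(slopes))          # ~3, despite step
```

Pairing points one year apart cancels the annual cycle, and the median limits the influence of the step and the outliers, which is why the estimate stays near the true trend of 3.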
Brigode, Pierre; Brissette, Francois; Nicault, Antoine; ...
2016-09-06
Over the last decades, different methods have been used by hydrologists to extend observed hydro-climatic time series, based on other data sources, such as tree rings or sedimentological datasets. For example, tree ring multi-proxies have been studied for the Caniapiscau Reservoir in northern Québec (Canada), leading to the reconstruction of flow time series for the last 150 years. In this paper, we apply a new hydro-climatic reconstruction method to the Caniapiscau Reservoir, compare the obtained streamflow time series against time series derived from dendrohydrology by other authors on the same catchment, and study the natural streamflow variability over the 1881–2011 period in that region. This new reconstruction is based not on natural proxies but on a historical reanalysis of global geopotential height fields, and aims firstly to produce daily climatic time series, which are then used as inputs to a rainfall–runoff model in order to obtain daily streamflow time series. The performance of the hydro-climatic reconstruction was quantified over the observed period and was good in terms of both monthly regimes and interannual variability. The streamflow reconstructions were then compared to two different reconstructions performed on the same catchment using tree ring data series, one focused on mean annual flows and the other on spring floods. In terms of mean annual flows, the interannual variability in the reconstructed flows was similar (except for the 1930–1940 decade), with noteworthy changes seen in wetter and drier years. For spring floods, the reconstructed interannual variabilities were quite similar for the 1955–2011 period, but strongly different between 1880 and 1940. The results emphasize the need to apply different reconstruction methods on the same catchments. Indeed, such comparisons highlight potential differences between available reconstructions and, finally, allow a retrospective analysis of the proposed reconstructions of past hydro-climatological variabilities.
Pohlmeyer, Eric A.; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline W.; Sanchez, Justin C.
2014-01-01
Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with a minimal burden on the user, provide stable control for long periods of time, and can be responsive to fluctuations in the decoder's neural input space (e.g. neurons appearing or being lost amongst electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can successfully adapt to dramatic neural reorganizations, can maintain its performance over long time periods, and does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized using random initial conditions, and it quickly learned to control the robot from brain states using only a binary evaluative feedback regarding whether previously chosen robot actions were good or bad. The RLBMI was able to maintain control over the system throughout sessions spanning multiple weeks. Furthermore, the RLBMI was able to quickly adapt and maintain control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled. PMID:24498055
NASA Astrophysics Data System (ADS)
Ziegler, Yann; Lambert, Sébastien; Rosat, Séverine; Nurul Huda, Ibnu; Bizouard, Christian
2017-04-01
Nutation time series derived from very long baseline interferometry (VLBI) and time-varying surface gravity data recorded by superconducting gravimeters (SG) have long been used separately to assess the Earth's interior via the estimation of the free core and inner core resonance effects on nutation or tidal gravity. The results obtained from these two techniques have been shown recently to be consistent, making relevant the combination of VLBI and SG observables and the estimation of Earth's interior parameters in a single inversion. We present here the intermediate results of the ongoing project of combining nutation and surface gravity time series to improve estimates of the Earth's core and inner core resonant frequencies. We use VLBI nutation time series spanning 1984-2016 derived by the International VLBI Service for geodesy and astrometry (IVS) as the result of a combination of inputs from various IVS analysis centers, and surface gravity data from about 15 SG stations. We address here the resonance model used to describe the Earth's interior response to tidal excitation; the data preparation, consisting of error recalibration and amplitude fitting for the nutation data, and the processing of SG time-varying gravity to remove gaps, spikes, steps and other disturbances, followed by tidal analysis with the ETERNA 3.4 software package; the preliminary estimates of the resonant periods; and the correlations between parameters.
Deriving phenological metrics from NDVI through an open source tool developed in QGIS
NASA Astrophysics Data System (ADS)
Duarte, Lia; Teodoro, A. C.; Gonçalves, Hernãni
2014-10-01
Vegetation indices have been commonly used over the past 30 years for studying vegetation characteristics using images collected by remote sensing satellites. One of the most commonly used is the Normalized Difference Vegetation Index (NDVI). The various stages that green vegetation undergoes during a complete growing season can be summarized through time-series analysis of NDVI data. The analysis of such time-series allows for extracting key phenological variables or metrics of a particular season. These characteristics may not necessarily correspond directly to conventional, ground-based phenological events, but do provide indications of ecosystem dynamics. A complete list of the phenological metrics that can be extracted from smoothed, time-series NDVI data is available in the USGS online resources (http://phenology.cr.usgs.gov/methods_deriving.php). This work aims to develop an open source application to automatically extract these phenological metrics from a set of satellite input data. The main advantage of QGIS for this specific application lies in the ease and speed of developing new plug-ins in Python, based on the experience of the research group in other related works. QGIS has its own application programming interface (API) with functionalities and programs to develop new features. The toolbar developed for this application was implemented using the plug-in NDVIToolbar.py. The user introduces the raster files as input and obtains a plot and a report with the metrics. The report includes the following eight metrics: SOST (Start Of Season - Time) corresponding to the day of the year identified as having a consistent upward trend in the NDVI time series; SOSN (Start Of Season - NDVI) corresponding to the NDVI value associated with SOST; EOST (End of Season - Time) which corresponds to the day of year identified at the end of a consistent downward trend in the NDVI time series; EOSN (End of Season - NDVI) corresponding to the NDVI value associated with EOST; MAXN (Maximum NDVI) which corresponds to the maximum NDVI value; MAXT (Time of Maximum) which is the day associated with MAXN; DUR (Duration) defined as the number of days between SOST and EOST; and AMP (Amplitude) which is the difference between MAXN and SOSN. This application provides all these metrics in a single step. Initially, the data points are interpolated using moving averages with five and three points. The eight metrics previously described are then obtained from the spline using numpy functions. In the present work, the developed toolbar was applied to MODerate resolution Imaging Spectroradiometer (MODIS) data covering a particular region of Portugal, but it can be generally applied to other satellite data and study areas. The code is open and can be modified according to the user's requirements. Another advantage of publishing the plug-in and application code is that other users can improve the application.
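A stripped-down version of the metric extraction might look as follows, with a 5-point moving average for smoothing and simple min/max definitions of season start and end standing in for the toolbar's trend-based logic:

```python
# Compute the eight phenological metrics on a smoothed NDVI series.
# The toy seasonal curve and the simplified SOS/EOS definitions are
# assumptions, not the plug-in's exact algorithm.
import numpy as np

doy = np.arange(1, 366, 8)                              # 8-day composites
ndvi = 0.2 + 0.5 * np.exp(-((doy - 200) / 60.0) ** 2)   # toy seasonal curve

smooth = np.convolve(ndvi, np.ones(5) / 5, mode='same')
peak = int(np.argmax(smooth))
sos = int(np.argmin(smooth[:peak]))          # lowest point before the peak
eos = peak + int(np.argmin(smooth[peak:]))   # lowest point after the peak

metrics = {
    'SOST': doy[sos], 'SOSN': smooth[sos],
    'EOST': doy[eos], 'EOSN': smooth[eos],
    'MAXT': doy[peak], 'MAXN': smooth[peak],
    'DUR': doy[eos] - doy[sos],
    'AMP': smooth[peak] - smooth[sos],
}
print(metrics)
```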
van de Flierdt, T.; Frank, M.; Halliday, A.N.; Hein, J.R.; Hattendorf, B.; Gunther, D.; Kubik, P.W.
2003-01-01
The sources of non-anthropogenic Pb in seawater have been the subject of debate. Here we present Pb isotope time-series that indicate that the non-anthropogenic Pb budget of the northernmost Pacific Ocean has been governed by ocean circulation and riverine inputs, which in turn have ultimately been controlled by tectonic processes. Despite the fact that the investigated locations are situated within the Asian dust plume, and proximal to extensive arc volcanism, eolian contributions have had little impact. We have obtained the first high-resolution and high-precision Pb isotope time-series of North Pacific deep water from two ferromanganese crusts from the Gulf of Alaska in the NE Pacific Ocean, and from the Detroit Seamount in the NW Pacific Ocean. Both crusts were dated applying 10Be/9Be ratios and yield continuous time-series for the past 13.5 and 9.6 Myr, respectively. Lead isotopes show a monotonic evolution in 206Pb/204Pb from low values in the Miocene (~18.57) to high values at present day (~18.84) in both crusts, even though they are separated by more than 3000 km along the Aleutian Arc. The variation exceeds the amplitude found in Equatorial Pacific deep water records by about three-fold. There also is a striking similarity in 207Pb/204Pb and 208Pb/204Pb ratios of the two crusts, indicating the existence of a local circulation cell in the sub-polar North Pacific, where efficient lateral mixing has taken place but only limited exchange (in terms of Pb) with deep water from the Equatorial Pacific has occurred. Both crusts display well-defined trends with age in Pb-Pb isotope mixing plots, which require the involvement of at least four distinct Pb sources for North Pacific deep water. The Pb isotope time-series reveal that eolian supplies (volcanic ash and continent-derived loess) have only been of minor importance for the dissolved Pb budget of marginal sites in the deep North Pacific over the past 6 Myr. The two predominant sources have been young volcanic arcs, one located in the northeastern part and one located in the northwestern part of the Pacific margin, from where material has been eroded and delivered to the ocean, most likely via riverine pathways.
Recognition of predictors for mid-long term runoff prediction based on lasso
NASA Astrophysics Data System (ADS)
Xie, S.; Huang, Y.
2017-12-01
Reliable and accurate mid-long term runoff prediction is of great importance in the integrated management of reservoirs, and many methods have been proposed to model runoff time series. Almost all of these models use a forecast lead time (LT) of 1 month, with previous runoff at different time lags as predictors. However, runoff prediction with increased LT, which is more beneficial, is not popular in current research, because the connection between previous runoff and current runoff weakens as LT increases. Therefore, 74 atmospheric circulation factors (ACFs) together with pre-runoff are used as alternative predictors for mid-long term runoff prediction of the Longyangxia reservoir in this study. Because pre-runoff and the 74 ACFs at different time lags form a very large candidate set, most of whose members are useless, the lasso ('least absolute shrinkage and selection operator') is used to recognize predictors. The results demonstrate that the 74 ACFs are beneficial for runoff prediction in both the validation and test sets when LT is greater than 6. Six factors other than pre-runoff, most with large time lags, are frequently selected as predictors. To verify the effect of the 74 ACFs, 74 stochastic time series generated from the normalized ACFs are used as model input; these stochastic series prove useless, which confirms the effect of the 74 ACFs on mid-long term runoff prediction.
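A sketch of the lasso selection step is given below, with synthetic runoff driven by two lagged factors; the lag window, lead time, penalty weight, and data are all assumptions:

```python
# Stack lagged circulation factors and lagged runoff as candidate
# predictors, then let the L1 penalty zero out the useless ones.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, n_acf, max_lag, LT = 400, 74, 12, 6
acf = rng.standard_normal((n, n_acf))                  # 74 circulation factors
runoff = 0.1 * rng.standard_normal(n)
runoff[8:] += 0.6 * acf[:-8, 10] + 0.3 * acf[:-8, 42]  # true lag-8 drivers

X, y = [], []
for t in range(max_lag + LT, n):
    feats = np.concatenate([acf[t - LT - k] for k in range(max_lag)])
    feats = np.append(feats, runoff[t - LT - np.arange(max_lag)])
    X.append(feats)
    y.append(runoff[t])

model = Lasso(alpha=0.05, max_iter=5000).fit(np.array(X), np.array(y))
print("nonzero predictors:", np.flatnonzero(model.coef_).size,
      "of", len(model.coef_))
```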
NASA Astrophysics Data System (ADS)
Gerber, Christoph; Purtschert, Roland; Hunkeler, Daniel; Hug, Rainer; Sültenfuss, Jürgen
2018-06-01
Groundwater quality in many regions with intense agriculture has deteriorated due to the leaching of nitrate and other agricultural pollutants. Modified agricultural practices can reduce the input of nitrate to groundwater bodies, but it is crucial to determine the time span over which these measures become effective at reducing nitrate levels in pumping wells. Such estimates can be obtained from hydrogeological modeling or lumped-parameter models (LPM) in combination with environmental tracer data. Two challenges in such tracer-based estimates are (i) accounting for the different modes of transport in the unsaturated zone (USZ), and (ii) assessing uncertainties. Here we extend a recently published Bayesian inference scheme for simple LPMs to include an explicit USZ model and apply it to the Dünnerngäu aquifer, Switzerland. Compared to a previous estimate of travel times in the aquifer based on a 2D hydrogeological model, our approach provides a more accurate assessment of the dynamics of nitrate concentrations in the aquifer. We find that including tracer measurements (3H/3He, 85Kr, 39Ar, 4He) reduces uncertainty in nitrate predictions if nitrate time series at wells are not available or short, but does not necessarily lead to better predictions if long nitrate time series are available. Additionally, the combination of tracer data with nitrate time series allows for a separation of the travel times in the unsaturated and saturated zone.
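The lumped-parameter idea can be sketched as a convolution of the nitrate input history with a travel-time distribution; the exponential model, the 25-year mean transit time, and the linear input ramp below are illustrative assumptions only:

```python
# Concentration at a well = input history convolved with a travel-time
# distribution (TTD). Exponential LPM with yearly time steps.
import numpy as np

years = np.arange(1950, 2021)
nitrate_in = np.clip((years - 1960) * 1.0, 0, 40)      # mg/L entering the USZ
tau = 25.0                                             # mean transit time, yr
age = np.arange(0, 200)
g = np.exp(-age / tau)
g /= g.sum()                                           # discretize to unit mass

# c_out(t) = sum_a g(a) * c_in(t - a); inputs before 1950 assumed zero.
c_out = [sum(g[a] * nitrate_in[i - a] for a in range(min(i + 1, len(age))))
         for i in range(len(years))]
print("simulated 2020 concentration (mg/L):", round(c_out[-1], 1))
```

The long tail of the exponential TTD is what delays the response of well concentrations to reduced inputs, which is the question the tracer data help constrain.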
NASA Astrophysics Data System (ADS)
Hawes, D. H.; Langley, R. S.
2018-01-01
Random excitation of mechanical systems occurs in a wide variety of structures and, in some applications, calculation of the power dissipated by such a system will be of interest. In this paper, using the Wiener series, a general methodology is developed for calculating the power dissipated by a general nonlinear multi-degree-of freedom oscillatory system excited by random Gaussian base motion of any spectrum. The Wiener series method is most commonly applied to systems with white noise inputs, but can be extended to encompass a general non-white input. From the extended series a simple expression for the power dissipated can be derived in terms of the first term, or kernel, of the series and the spectrum of the input. Calculation of the first kernel can be performed either via numerical simulations or from experimental data and a useful property of the kernel, namely that the integral over its frequency domain representation is proportional to the oscillating mass, is derived. The resulting equations offer a simple conceptual analysis of the power flow in nonlinear randomly excited systems and hence assist the design of any system where power dissipation is a consideration. The results are validated both numerically and experimentally using a base-excited cantilever beam with a nonlinear restoring force produced by magnets.
A Software Package for Neural Network Applications Development
NASA Technical Reports Server (NTRS)
Baran, Robert H.
1993-01-01
Original Backprop (Version 1.2) is an MS-DOS package of four stand-alone C-language programs that enable users to develop neural network solutions to a variety of practical problems. Original Backprop generates three-layer, feed-forward (series-coupled) networks which map fixed-length input vectors into fixed-length output vectors through an intermediate (hidden) layer of binary threshold units. Version 1.2 can handle up to 200 input vectors at a time, each having up to 128 real-valued components. The first subprogram, TSET, appends a number (up to 16) of classification bits to each input, thus creating a training set of input-output pairs. The second subprogram, BACKPROP, creates a trilayer network to do the prescribed mapping and modifies the weights of its connections incrementally until the training set is learned. The learning algorithm is the 'back-propagating error correction' procedure first described by F. Rosenblatt in 1961. The third subprogram, VIEWNET, lets the trained network be examined, tested, and 'pruned' (by the deletion of unnecessary hidden units). The fourth subprogram, DONET, makes a TSR routine by which the finished product of the neural net design-and-training exercise can be consulted under other MS-DOS applications.
NASA Technical Reports Server (NTRS)
Crosson, William L.; Smith, Eric A.
1992-01-01
The behavior of in situ measurements of surface fluxes obtained during FIFE 1987 is examined by using correlative and spectral techniques in order to assess the significance of fluctuations on various time scales, from subdiurnal up to synoptic, intraseasonal, and annual scales. The objectives of this analysis are: (1) to determine which temporal scales have a significant impact on areally averaged fluxes and (2) to design a procedure for filtering an extended flux time series that preserves the basic diurnal features and longer time scales while removing high-frequency noise that cannot be attributed to site-induced variation. These objectives are accomplished through the use of a two-dimensional cross-time Fourier transform, which serves to separate processes inherently related to diurnal and subdiurnal variability from those which impact flux variations on the longer time scales. A filtering procedure is desirable before the measurements are utilized as input to an experimental biosphere model, to ensure that model-based intercomparisons at multiple sites are uncontaminated by input variance not related to true site behavior. Analysis of the spectral decomposition indicates that subdiurnal time scales having periods shorter than 6 hours have little site-to-site consistency and therefore little impact on areally integrated fluxes.
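The frequency cutoff can be illustrated with a simple FFT filter; the 30-minute sampling interval and synthetic diurnal flux below are assumptions:

```python
# Remove flux fluctuations with periods shorter than 6 hours while
# keeping the diurnal cycle and longer time scales.
import numpy as np

dt_hours = 0.5                                    # 30-min flux samples
n = 48 * 120                                      # 120 days of data
t = np.arange(n) * dt_hours
flux = 200 * np.maximum(np.sin(2 * np.pi * t / 24), 0)    # diurnal signal
flux += 30 * np.random.default_rng(5).standard_normal(n)  # high-freq noise

spec = np.fft.rfft(flux)
freq = np.fft.rfftfreq(n, d=dt_hours)             # cycles per hour
spec[freq > 1 / 6.0] = 0                          # cut periods < 6 h
filtered = np.fft.irfft(spec, n)
print("std before/after:", flux.std().round(1), filtered.std().round(1))
```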
NASA Technical Reports Server (NTRS)
Wong, Gregory L.; Denery, Dallas (Technical Monitor)
2000-01-01
The Dynamic Planner (DP) has been designed, implemented, and integrated into the Center-TRACON Automation System (CTAS) to assist Traffic Management Coordinators (TMCs), in real time, with the task of planning and scheduling arrival traffic approximately 35 to 200 nautical miles from the destination airport. The TMC may input to the DP a series of current and future scheduling constraints that reflect the operational and environmental conditions of the airspace. Under these constraints, the DP uses flight plans, track updates, and Estimated Time of Arrival (ETA) predictions to calculate optimal runway assignments and arrival schedules that help ensure an orderly, efficient, and conflict-free flow of traffic into the terminal area. These runway assignments and schedules can be shown directly to controllers or they can be used by other CTAS tools to generate advisories to the controllers. Additionally, the TMC and controllers may override the decisions made by the DP for tactical considerations. The DP will adapt its computations to accommodate these manual inputs.
Operational modeling system with dynamic-wave routing
Ishii, A.L.; Charlton, T.J.; Ortel, T.W.; Vonnahme, C.C.; ,
1998-01-01
A near real-time streamflow-simulation system utilizing continuous-simulation rainfall-runoff generation with dynamic-wave routing is being developed by the U.S. Geological Survey in cooperation with the Du Page County Department of Environmental Concerns for a 24-kilometer reach of Salt Creek in Du Page County, Illinois. This system is needed in order to more effectively manage the Elmhurst Quarry Flood Control Facility, an off-line stormwater diversion reservoir located along Salt Creek. Near real-time simulation capabilities will enable the testing and evaluation of the effects of potential rainfall, diversion, and return-flow scenarios on water-surface elevations along Salt Creek before implementing diversions or return-flows. The climatological inputs for the continuous-simulation rainfall-runoff model, Hydrologic Simulation Program - FORTRAN (HSPF), are obtained by Internet access and from a network of radio-telemetered precipitation gages reporting to a base-station computer. The unit-area runoff time series generated from HSPF are the input for the dynamic-wave routing model, Full Equations (FEQ). The Generation and Analysis of Model Simulation Scenarios (GENSCN) interface is used as a pre- and post-processor for managing input data and displaying and managing simulation results. The GENSCN interface includes a variety of graphical and analytical tools for evaluation and quick visualization of the results of operational scenario simulations and thereby makes it possible to obtain the full benefit of the fully distributed dynamic routing results.
Frequency analysis via the method of moment functionals
NASA Technical Reports Server (NTRS)
Pearson, A. E.; Pan, J. Q.
1990-01-01
Several variants are presented of a linear-in-parameters least squares formulation for determining the transfer function of a stable linear system at specified frequencies given a finite set of Fourier series coefficients calculated from transient nonstationary input-output data. The basis of the technique is Shinbrot's classical method of moment functionals using complex Fourier based modulating functions to convert a differential equation model on a finite time interval into an algebraic equation which depends linearly on frequency-related parameters.
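As a one-parameter illustration (the paper treats general linear systems), consider the model below with a modulating function vanishing at both endpoints; integration by parts removes the derivative of the measured signal:

```latex
% For \dot y(t) + a\,y(t) = b\,u(t) on [0,T], choose \varphi_k with
% \varphi_k(0) = \varphi_k(T) = 0. Then
\int_0^T \varphi_k \dot y \, dt
  = \big[\varphi_k y\big]_0^T - \int_0^T \dot\varphi_k\, y \, dt
  = -\int_0^T \dot\varphi_k\, y \, dt,
% so the differential equation becomes the algebraic equation
-\int_0^T \dot\varphi_k\, y \, dt + a \int_0^T \varphi_k\, y \, dt
  = b \int_0^T \varphi_k\, u \, dt .
```

This equation is linear in the unknown parameters (a, b) and involves no derivatives of the data; with complex Fourier-based modulating functions, the integrals reduce to combinations of the Fourier coefficients of y and u, and stacking the equations for several k yields the least-squares problem whose solution gives the transfer function at the specified frequencies.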
McEwan, T.E.
1993-12-28
An inexpensive pulse generating circuit is disclosed that generates ultra-short (200 picosecond), high-voltage (100 kW) pulses suitable for wideband radar and other wideband applications. The circuit implements a nonlinear transmission line with series inductors and variable capacitors coupled to ground, made from reverse-biased diodes, to sharpen and increase the amplitude of a high-voltage power MOSFET driver input pulse until it causes non-destructive transit-time breakdown in a final avalanche shock wave diode, which increases and sharpens the pulse even more. 5 figures.
Time-dependent corrosion fatigue crack propagation in 7000 series aluminum alloys. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mason, Mark E.
1995-01-01
The goal of this research is to characterize environmentally assisted subcritical crack growth for the susceptible short-longitudinal orientation of aluminum alloy 7075-T651, immersed in acidified and inhibited NaCl solution. This work is necessary in order to provide a basis for incorporating environmental effects into fatigue crack propagation life prediction codes such as NASA-FLAGRO (NASGRO). This effort concentrates on determining relevant inputs to a superposition model in order to more accurately model environmental fatigue crack propagation.
The Relationship of Temperature to Strength and Power Production in Intact Human Skeletal Muscle.
1979-06-01
Kramer (1961) found that when all knowledge of the warm-up was eliminated with hypnosis, no significant change was found in ergometer ride time. The warm...2.0 to 2.5 centimeters. A Yellow Springs Instrument (YSI) series 500 hypodermic probe was used in conjunction with a YSI Telethermometer model 46TUC... instrument is accomplished by using one of four different capacitance circuits to suppress the oscillatory effects of the input energy. A damping of 0 provides
Status of GDL - GNU Data Language
NASA Astrophysics Data System (ADS)
Coulais, A.; Schellens, M.; Gales, J.; Arabas, S.; Boquien, M.; Chanial, P.; Messmer, P.; Fillmore, D.; Poplawski, O.; Maret, S.; Marchal, G.; Galmiche, N.; Mermet, T.
2010-12-01
Gnu Data Language (GDL) is an open-source interpreted language aimed at numerical data analysis and visualisation. It is a free implementation of the Interactive Data Language (IDL) widely used in Astronomy. GDL has a full syntax compatibility with IDL, and includes a large set of library routines targeting advanced matrix manipulation, plotting, time-series and image analysis, mapping, and data input/output including numerous scientific data formats. We will present the current status of the project, the key accomplishments, and the weaknesses - areas where contributions are welcome!
Market dynamics and stock price volatility
NASA Astrophysics Data System (ADS)
Li, H.; Rosser, J. B., Jr.
2004-06-01
This paper presents a possible explanation for some of the empirical properties of asset returns within a heterogeneous-agents framework. The model shows that even if we assume the input fundamental value follows a simple Gaussian distribution lacking both fat tails and volatility dependence, these features can show up in the time series of asset returns. In this model, profit comparison and switching between heterogeneous agents play key roles, building a connection between endogenous market dynamics and the emergence of stylized facts.
IMPMOT user's manual. [written in FORTRAN 4
NASA Technical Reports Server (NTRS)
Stewart, D. J.; Bishop, M. J.
1974-01-01
This user's manual describes the input and output variables as well as the job control language necessary to utilize the IMP-H apogee motor firing program, IMPMOT. The IMPMOT program can be executed as either a stand-alone program or as a member of the flight dynamics system. This program is used to determine the time and attitude at which to fire the IMP-H apogee boost motor. The IMPMOT program is written in FORTRAN 4 for use on the IBM 360 series computer.
McEwan, Thomas E.
1993-01-01
An inexpensive pulse generating circuit is disclosed that generates ultra-short, 200 picosecond, and high voltage 100 kW, pulses suitable for wideband radar and other wideband applications. The circuit implements a nonlinear transmission line with series inductors and variable capacitors coupled to ground made from reverse biased diodes to sharpen and increase the amplitude of a high-voltage power MOSFET driver input pulse until it causes non-destructive transit time breakdown in a final avalanche shockwave diode, which increases and sharpens the pulse even more.
Trends of the World Input and Output Network of Global Trade
del Río-Chanona, Rita María; Grujić, Jelena; Jeldtoft Jensen, Henrik
2017-01-01
International trade naturally maps onto a complex network. Theoretical analysis of this network gives valuable insights about the global economic system. Although different economic data sets have been investigated from the network perspective, little attention has been paid to their dynamical behaviour. Here we take the World Input Output Data set, which contains the annual transactions between 40 different countries for 35 different sectors over a period of 15 years, and infer the time interdependence between countries and sectors. As a measure of interdependence we use correlations between various time series of the network characteristics. First we form 15 primary networks, one for each year of data, where nodes are countries and links are annual exports from one country to the other. Then we calculate the strength (weighted degree) and PageRank of each country in each of the 15 networks. This leads to sets of time series, and by calculating the correlations between these we form a secondary network where the links are the positive correlations between different countries or sectors. We also form a secondary network where the links are negative correlations, in order to study the competition between countries and sectors. By analysing this secondary network we obtain a clearer picture of the mutual influences between countries. As one might expect, we find that political and geographical circumstances play an important role. However, the derived correlation network reveals surprising aspects which are hidden in the primary network. Sometimes countries which belong to the same community in the original network are found to be competitors in the secondary networks. For example, Spain and Portugal are always in the same trade-flow community; nevertheless, secondary network analysis reveals that they exhibit contrary time evolution. PMID:28125656
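The secondary-network construction can be sketched as follows, using node strength only (PageRank would be handled analogously) and random weights in place of the World Input Output Data:

```python
# Yearly trade networks give each country a strength time series;
# correlating those series yields the positive- and negative-link
# secondary networks. All data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(6)
n_countries, n_years = 40, 15
trade = rng.gamma(2.0, 10.0, size=(n_years, n_countries, n_countries))

strength = trade.sum(axis=2)                 # out-strength per country/year
corr = np.corrcoef(strength.T)               # country-by-country correlations

pos_links = (corr > 0.5) & ~np.eye(n_countries, dtype=bool)
neg_links = corr < -0.5                      # competition candidates
print("positive links:", pos_links.sum() // 2,
      "negative links:", neg_links.sum() // 2)
```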
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Coffey, Victoria N.; Parker, Linda N.; Blackwell, William C., Jr.; Jun, Insoo; Garrett, Henry B.
2007-01-01
The NUMIT 1-dimensional bulk charging model is used as a screening tool for evaluating time-dependent bulk internal (or deep dielectric) charging of dielectrics exposed to penetrating electron environments. The code is modified to accept time-dependent electron flux time series along satellite orbits for the electron environment inputs instead of using the static electron flux environment input originally used by the code and widely adopted in bulk charging models. Application of the screening technique is demonstrated for three cases of spacecraft exposure within the Earth's radiation belts, including a geostationary transfer orbit and an Earth-Moon transit trajectory for a range of orbit inclinations. Electric fields and charge densities are computed for dielectric materials with varying electrical properties exposed to relativistic electron environments along the orbits. Our objective is to demonstrate a preliminary application of the time-dependent environments input to the NUMIT code for evaluating charging risks to exposed dielectrics used on spacecraft when exposed to the Earth's radiation belts. The results demonstrate that the NUMIT electric field values in GTO orbits with multiple encounters with the Earth's radiation belts are consistent with previous studies of charging in GTO orbits and that potential threat conditions for electrostatic discharge exist on lunar transit trajectories depending on the electrical properties of the materials exposed to the radiation environment.
Kamiya, Atsunori; Kawada, Toru; Shimizu, Shuji; Sugimachi, Masaru
2011-01-01
Although the dynamic characteristics of the baroreflex system have been described by baroreflex transfer functions obtained from open-loop analysis, the predictability of time-series output dynamics from input signals, which should confirm the accuracy of system identification, remains to be elucidated. Moreover, despite theoretical concerns over closed-loop system identification, the accuracy and the predictability of the closed-loop spontaneous baroreflex transfer function have not been evaluated compared with the open-loop transfer function. Using urethane and α-chloralose anaesthetized, vagotomized and aortic-denervated rabbits (n = 10), we identified open-loop baroreflex transfer functions by recording renal sympathetic nerve activity (SNA) while varying the vascularly isolated intracarotid sinus pressure (CSP) according to a binary random (white-noise) sequence (operating pressure ± 20 mmHg), and using a simplified equation to calculate closed-loop-spontaneous baroreflex transfer function while matching CSP with systemic arterial pressure (AP). Our results showed that the open-loop baroreflex transfer functions for the neural and peripheral arcs predicted the time-series SNA and AP outputs from measured CSP and SNA inputs, with r2 of 0.8 ± 0.1 and 0.8 ± 0.1, respectively. In contrast, the closed-loop-spontaneous baroreflex transfer function for the neural arc was markedly different from the open-loop transfer function (enhanced gain increase and a phase lead), and did not predict the time-series SNA dynamics (r2, 0.1 ± 0.1). However, the closed-loop-spontaneous baroreflex transfer function of the peripheral arc partially matched the open-loop transfer function in gain and phase functions, and had limited but reasonable predictability of the time-series AP dynamics (r2, 0.7 ± 0.1). A numerical simulation suggested that a noise predominantly in the neural arc under resting conditions might be a possible mechanism responsible for our findings. Furthermore, the predictabilities of the neural arc transfer functions obtained in open-loop and closed-loop conditions were validated by closed-loop pharmacological (phenylephrine and nitroprusside infusions) pressure interventions. Time-series SNA responses to drug-induced AP changes predicted by the open-loop transfer function matched closely the measured responses (r2, 0.9 ± 0.1), whereas SNA responses predicted by the closed-loop-spontaneous transfer function deviated greatly and were the inverse of measured responses (r, −0.8 ± 0.2). These results indicate that although the spontaneous baroreflex transfer function obtained by closed-loop analysis has been believed to represent the neural arc function, it is inappropriate for system identification of the neural arc but is essentially appropriate for the peripheral arc under resting conditions, when compared with open-loop analysis. PMID:21486839
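The open-loop identification step, estimating the transfer function as the ratio of cross- to auto-spectra under a white-noise input, can be sketched as below; the first-order low-pass stand-in for the arc dynamics is an assumption:

```python
# H1 transfer function estimate: H(f) = S_xy(f) / S_xx(f).
import numpy as np
from scipy import signal

fs = 10.0                                         # Hz
x = np.random.default_rng(7).standard_normal(100000)  # white-noise CSP input
b, a = signal.butter(1, 0.1, fs=fs)               # stand-in arc dynamics
y = signal.lfilter(b, a, x)                       # simulated SNA output

f, Sxy = signal.csd(x, y, fs=fs, nperseg=1024)    # cross-spectrum
_, Sxx = signal.welch(x, fs=fs, nperseg=1024)     # input auto-spectrum
H = Sxy / Sxx
print("gain at 0.1 Hz:", np.abs(H[np.argmin(np.abs(f - 0.1))]).round(2))
```

The same estimate computed from closed-loop (spontaneous) data is biased by feedback and noise, which is the distinction the study quantifies.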
Disaggregation of monthly flows into daily flows
NASA Astrophysics Data System (ADS)
Ypou, Tanou Ya Kouassi
A good estimate of the historical natural inflow of a water system allows appropriate management of the reservoirs of hydroelectric plants, which in turn guarantees efficient planning of hydropower production. Hydro-Québec (HQ) seeks to reconstruct the real natural inflows, with known quality, for the periods before and after the impoundment of its reservoirs. Producing good-quality daily historical data from monthly data remains a major concern both for HQ and for the scientific community. Beyond the benefits of mastering simulations of the basins' hydrological behavior, this study supports the establishment of appropriate measures to protect the population and properties located in riparian areas of the water systems. The main objective of the study is the disaggregation of monthly flows into daily flows, in the operational context of HQ. To reconstruct the historical supply of the water systems, the HSAMI and HYDROTEL models are used. Different methods have previously been used by HQ to constitute daily historical flows; so far, analysis of the reconstituted daily data has revealed serious discrepancies and errors in those series. Several previous studies have attempted to reconstruct daily flows from historical monthly series, but, as explained in the report, their results do not represent the reality of HQ's water systems; these methods are not effective in the operational framework of Hydro-Québec. This report presents an optimized approach based on the HSAMI and HYDROTEL models to transform rainfall into runoff for the reconstruction of natural flow series. The approach is applied to the Outardes and Saint-Maurice water systems with the available meteorological and physiographic data. The hydrological input data are validated through analysis of data quality, specific discharge, and evaporation parameters, and the meteorological input data are analyzed against statistical, climatic, and hydrological criteria for the weather series. An automatic calibration of the two models is performed with Matlab, and the calibration results for the Outardes and Saint-Maurice water systems are presented. The modeling of ground conditions to meet the input-data needs of the different models is also presented, in particular for HYDROTEL and PHYSITEL. Historical flows are simulated using meteorological and physiographic data over the period 1965 to 2014. Based on the quality of the available input data and the goal of generating daily historical supply series from monthly series of natural inputs, quality criteria were defined to select a model; the criteria used to compare the two models are NSE and KGE. Analysis of the results leads to the conclusion that the HYDROTEL model is the most appropriate in the operational framework of HQ for disaggregating monthly historical series into daily flow series. The daily discharges simulated for the Beaumont, Vermillion, and La Tuque reservoirs are presented and analyzed in this report. Keywords: disaggregation, natural flow, HYDROTEL, HSAMI, data reconstruction.
Peak expiratory flow profiles delivered by pump systems. Limitations due to wave action.
Miller, M R; Jones, B; Xu, Y; Pedersen, O F; Quanjer, P H
2000-06-01
Pump systems are currently used to test the performance of both spirometers and peak expiratory flow (PEF) meters, but for certain flow profiles the input signal (i.e., requested profile) and the output profile can differ. We developed a mathematical model of wave action within a pump and compared the recorded flow profiles with both the input profiles and the output predicted by the model. Three American Thoracic Society (ATS) flow profiles and four artificial flow-versus-time profiles were delivered by a pump, first to a pneumotachograph (PT) on its own, then to the PT with a 32-cm upstream extension tube (which would favor wave action), and lastly with the PT in series with and immediately downstream to a mini-Wright peak flow meter. With the PT on its own, recorded flow for the seven profiles was 2.4 +/- 1.9% (mean +/- SD) higher than the pump's input flow, and similarly was 2.3 +/- 2.3% higher than the pump's output flow as predicted by the model. With the extension tube in place, the recorded flow was 6.6 +/- 6.4% higher than the input flow (range: 0.1 to 18.4%), but was only 1.2 +/- 2.5% higher than the output flow predicted by the model (range: -0.8 to 5.2%). With the mini-Wright meter in series, the flow recorded by the PT was on average 6.1 +/- 9.1% below the input flow (range: -23.8 to 2.5%), but was only 0.6 +/- 3.3% above the pump's output flow predicted by the model (range: -5.5 to 3.9%). The mini-Wright meter's reading (corrected for its nonlinearity) was on average 1.3 +/- 3.6% below the model's predicted output flow (range: -9.0 to 1.5%). The mini-Wright meter would be deemed outside ATS limits for accuracy for three of the seven profiles when compared with the pump's input PEF, but this would be true for only one profile when compared with the pump's output PEF as predicted by the model. Our study shows that the output flow from pump systems can differ from the input waveform depending on the operating configuration. This effect can be predicted with reasonable accuracy using a model based on nonsteady flow analysis that takes account of pressure wave reflections within pump systems.
Microresonator electrode design
Olsson, III, Roy H.; Wojciechowski, Kenneth; Branch, Darren W.
2016-05-10
A microresonator with an input electrode and an output electrode patterned thereon is described. The input electrode includes a series of stubs that are configured to isolate acoustic waves, such that the waves are not reflected into the microresonator. This design reduces the spurious modes of the microresonator.
Relating interesting quantitative time series patterns with text events and text features
NASA Astrophysics Data System (ADS)
Wanner, Franz; Schreck, Tobias; Jentner, Wolfgang; Sharalieva, Lyubka; Keim, Daniel A.
2013-12-01
In many application areas, the key to successful data analysis is the integrated analysis of heterogeneous data. One example is the financial domain, where time-dependent and highly frequent quantitative data (e.g., trading volume and price information) and textual data (e.g., economic and political news reports) need to be considered jointly. Data analysis tools need to support an integrated analysis, which allows studying the relationships between textual news documents and quantitative properties of the stock market price series. In this paper, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which reflect quantitative phenomena observed in stock data. To support such an analysis, we combine the analysis steps of frequent quantitative and text-oriented data using an existing a-priori method. First, based on heuristics we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a-priori method supports the discovery of such sequential temporal patterns. Then, various text features like the degree of sentence nesting, noun phrase complexity, the vocabulary richness, etc. are extracted from the news to obtain meta patterns. Meta patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time-, cluster- and sequence visualization and analysis functionality. We provide two case studies, showing the effectiveness of our combined quantitative and textual analysis work flow. The workflow can also be generalized to other application domains such as data analysis of smart grids, cyber physical systems or the security of critical infrastructure, where the data consists of a combination of quantitative and textual time series data.
NASA Astrophysics Data System (ADS)
Hentze, Konrad; Thonfeld, Frank; Menz, Gunter
2017-10-01
In the discourse on land reform assessments, a significant lack of spatial and time-series data has been identified, especially with respect to Zimbabwe's "Fast-Track Land Reform Programme" (FTLRP). At the same time, interest persists among land use change scientists in evaluating the causes of land use change and thereby increasing the explanatory power of remote sensing products. This study recognizes these demands and aims to provide input on both levels: evaluating the potential of satellite remote sensing time series to answer questions which evolved after intensive land redistribution efforts in Zimbabwe; and investigating how time-series analysis of the Normalized Difference Vegetation Index (NDVI) can be enhanced to provide information on land use change induced by land reform. To achieve this, two time-series methods are applied to MODIS NDVI data: Seasonal Trend Analysis (STA) and Breakpoint Analysis for Additive Season and Trend (BFAST). In our first analysis, linking agricultural productivity trends to different land tenure regimes shows that regional clustering of trends is more dominant than a relationship between tenure and trend, with a slightly negative slope for all regimes. We demonstrate that clusters of strong negative and positive productivity trends are the result of changing irrigation patterns. To locate emerging and fallow irrigation schemes in semi-arid Zimbabwe, a new multi-method approach is developed which makes it possible to map changes from bimodal seasonal phenological patterns to unimodal ones and vice versa. With an enhanced breakpoint analysis through the combination of STA and BFAST, we are able to provide a technique that can be applied on a large scale to map the status and development of highly productive cropping systems, which are key for food production, national export and local employment. We therefore conclude that the combination of existing and accessible time-series analysis methods is able to achieve both: overcoming demonstrated limitations of MODIS-based trend analysis and enhancing knowledge of Zimbabwe's FTLRP.
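A minimal sketch of the seasonal-harmonic regression idea underlying trend analyses of this kind follows; it is not the STA or BFAST implementation (both exist as R packages), and the NDVI series, sampling and bimodality heuristic below are invented. Comparing the annual and semi-annual harmonic amplitudes is one simple way to flag the unimodal-to-bimodal transitions the paper maps.

```python
import numpy as np

rng = np.random.default_rng(1)
years, per_year = 10, 23                   # MODIS 16-day composites (assumed)
t = np.arange(years * per_year) / per_year
ndvi = (0.45 - 0.010 * t                   # slow browning trend
        + 0.15 * np.sin(2 * np.pi * t)     # annual (unimodal) season
        + 0.02 * rng.standard_normal(t.size))

# Regressors: intercept, trend, annual and semi-annual harmonic pairs
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
c, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
amp1, amp2 = np.hypot(c[2], c[3]), np.hypot(c[4], c[5])
print(f"trend {c[1]:+.4f} NDVI/yr; annual amp {amp1:.3f}; semi-annual amp "
      f"{amp2:.3f} -> {'bimodal' if amp2 > amp1 else 'unimodal'} season")
```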
Constant-Elasticity-of-Substitution Simulation
NASA Technical Reports Server (NTRS)
Reiter, G.
1986-01-01
Program simulates constant elasticity-of-substitution (CES) production function. CES function used by economic analysts to examine production costs as well as uncertainties in production. User provides such input parameters as price of labor, price of capital, and dispersion levels. CES minimizes expected cost to produce capital-uncertainty pair. By varying capital-value input, one obtains series of capital-uncertainty pairs. Capital-uncertainty pairs then used to generate several cost curves. CES program menu driven and features specific print menu for examining selected output curves. Program written in BASIC for interactive execution and implemented on IBM PC-series computer.
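The CES function itself is easy to state; the sketch below (illustrative parameters and prices, not the program's defaults) traces out a cost curve by varying the capital input at fixed output, in the spirit of the capital sweep the record describes.

```python
import numpy as np

A, delta, rho = 1.0, 0.4, 0.5      # illustrative CES parameters

def required_labor(Q, K):
    """Invert Q = A*(delta*K^-rho + (1-delta)*L^-rho)^(-1/rho) for L."""
    rest = (Q / A) ** (-rho) - delta * K ** (-rho)
    return (rest / (1 - delta)) ** (-1 / rho) if rest > 0 else np.inf

def cost(Q, K, pK=1.0, pL=2.0):
    """Total cost of producing output Q with capital K and implied labor."""
    return pK * K + pL * required_labor(Q, K)

# Vary the capital-value input to trace a cost curve, as the program does
Q = 1.0
for K in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"K = {K:4.1f}  cost = {cost(Q, K):7.3f}")
```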
Sequentially Executed Model Evaluation Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time series, where evaluation is based on prior results combined with new data for this iteration. It has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is analyzed for anomalies.
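The record describes the framework only abstractly, so the following is a guessed minimal shape of such an API; every class and method name here is hypothetical, not SeMe's actual interface. A batch controller steps generic input, model and output drivers through a discrete time domain, with the model keeping prior results between iterations.

```python
from typing import Iterator

class ListInputDriver:
    """Hypothetical input driver feeding a stored sequence, one step at a time."""
    def __init__(self, rows): self.rows = rows
    def samples(self) -> Iterator[float]:
        yield from self.rows

class AnomalyModel:
    """Evaluates each new value against prior results (a running mean here)."""
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.threshold = 0, 0.0, threshold
    def step(self, x):
        flag = self.n > 0 and abs(x - self.mean) > self.threshold
        self.n += 1
        self.mean += (x - self.mean) / self.n   # combine new data with history
        return flag

class PrintOutputDriver:
    def emit(self, t, x, flag):
        print(f"t={t:3d} x={x:6.2f} {'ANOMALY' if flag else ''}")

def batch_controller(inp, model, out):
    for t, x in enumerate(inp.samples()):        # step through the time domain
        out.emit(t, x, model.step(x))

batch_controller(ListInputDriver([1.0, 1.2, 0.9, 9.5, 1.1]),
                 AnomalyModel(), PrintOutputDriver())
```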
Age Distribution of Groundwater
NASA Astrophysics Data System (ADS)
Morgenstern, U.; Daughney, C. J.
2012-04-01
Groundwater at the discharge point comprises a mixture of water from different flow lines with different travel times and therefore has no discrete age but an age distribution. The age distribution can be assessed by measuring how a pulse-shaped tracer moves through the groundwater system. Detection of the time delay and the dispersion of the peak in the groundwater compared to the tracer input reveals the mean residence time and the mixing parameter. Tritium from nuclear weapons testing in the early 1960s resulted in a peak-shaped tritium input to the whole hydrologic system on earth. Tritium is the ideal tracer for groundwater because it is an isotope of hydrogen and therefore part of the water molecule. Tritium time series data that encompass the passage of the bomb tritium pulse through the groundwater system in all common hydrogeologic situations in New Zealand demonstrate a semi-systematic pattern between age distribution parameters and hydrologic situation. The data in general indicate a high fraction of mixing, but in some cases also indicate a high degree of piston flow. We will show that still, 45 years after the peak of the bomb tritium, it is possible to accurately assess the parameters of age distributions by measuring the tail of the bomb tritium.
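The lumped-parameter idea behind such tritium interpretations can be sketched as a convolution of the input series with a transit time distribution plus radioactive decay; the exponential (well-mixed) model below is one standard choice, and the input curve is a schematic bomb peak, not New Zealand data (input before 1950 is assumed zero).

```python
import numpy as np

lam = np.log(2) / 12.32                     # tritium decay constant, 1/yr

def exponential_ttd(tau, T):
    """Transit time distribution of the exponential (well-mixed) model."""
    return np.exp(-tau / T) / T

def well_output(c_in, T, dt=1.0):
    """Convolve the input series with the decaying transit time distribution."""
    tau = np.arange(c_in.size) * dt
    g = exponential_ttd(tau, T) * np.exp(-lam * tau) * dt
    return np.convolve(c_in, g)[: c_in.size]

years = np.arange(1950, 2013)
c_in = 2.0 + 60.0 * np.exp(-0.5 * ((years - 1964) / 2.5) ** 2)  # bomb peak, TU
print(f"2012 output for MRT 10 yr: {well_output(c_in, 10.0)[-1]:.2f} TU")
print(f"2012 output for MRT 40 yr: {well_output(c_in, 40.0)[-1]:.2f} TU")
```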
A study of remote sensing as applied to regional and small watersheds. Volume 1: Summary report
NASA Technical Reports Server (NTRS)
Ambaruch, R.
1974-01-01
The accuracy of remotely sensed measurements to provide inputs to hydrologic models of watersheds is studied. A series of sensitivity analyses on continuous simulation models of three watersheds determined: (1) Optimal values and permissible tolerances of inputs to achieve accurate simulation of streamflow from the watersheds; (2) Which model inputs can be quantified from remote sensing, directly, indirectly or by inference; and (3) How accurate remotely sensed measurements (from spacecraft or aircraft) must be to provide a basis for quantifying model inputs within permissible tolerances.
The series product for gaussian quantum input processes
NASA Astrophysics Data System (ADS)
Gough, John E.; James, Matthew R.
2017-02-01
We present a theory for connecting quantum Markov components into a network with quantum input processes in a Gaussian state (including thermal and squeezed). One would expect on physical grounds that the connection rules should be independent of the state of the input to the network. To compute statistical properties, we use a version of Wicks' theorem involving fictitious vacuum fields (Fock space based representation of the fields) and while this aids computation, and gives a rigorous formulation, the various representations need not be unitarily equivalent. In particular, a naive application of the connection rules would lead to the wrong answer. We establish the correct interconnection rules, and show that while the quantum stochastic differential equations of motion display explicitly the covariances (thermal and squeezing parameters) of the Gaussian input fields we introduce the Wick-Stratonovich form which leads to a way of writing these equations that does not depend on these covariances and so corresponds to the universal equations written in terms of formal quantum input processes. We show that a wholly consistent theory of quantum open systems in series can be developed in this way, and as required physically, is universal and in particular representation-free.
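For reference, the series product of two cascaded components G_i = (S_i, L_i, H_i) takes the following form in the general Gough-James theory; this is a sketch for orientation only, and the paper's Gaussian-state corrections and Wick-Stratonovich form are not reproduced here.

```latex
% Series (cascade) product of two quantum Markov components, vacuum theory:
\[
G_2 \triangleleft G_1
  = \Bigl( S_2 S_1,\; L_2 + S_2 L_1,\;
    H_1 + H_2 + \operatorname{Im}\bigl( L_2^{\dagger} S_2 L_1 \bigr) \Bigr).
\]
```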
USING MUSSEL ISOTOPE RATIOS TO ASSESS ANTHROPOGENIC NITROGEN INPUTS TO FRESHWATER ECOSYSTEMS
Stable nitrogen isotope ratios (δ15N) of freshwater mussels from a series of lakes and ponds were related to watershed land use characteristics to assess their utility in determining the source of nitrogen inputs to inland water bodies. Nitrogen isotope ratios measured in freshwa...
Understanding the Behaviour of Infinite Ladder Circuits
ERIC Educational Resources Information Center
Ucak, C.; Yegin, K.
2008-01-01
Infinite ladder circuits are often encountered in undergraduate electrical engineering and physics curricula when dealing with series and parallel combination of impedances, as a part of filter design or wave propagation on transmission lines. The input impedance of such infinite ladder circuits is derived by assuming that the input impedance does…
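The ERIC abstract is truncated, but the standard self-similarity argument it alludes to can be sketched: adding one more section of series impedance Z_1 and shunt impedance Z_2 must leave the input impedance of the infinite ladder unchanged (a sketch of the usual derivation, not necessarily the authors' presentation).

```latex
% Self-similarity of the infinite ladder: one more L-section changes nothing.
\[
Z_{\mathrm{in}} = Z_1 + \frac{Z_2\, Z_{\mathrm{in}}}{Z_2 + Z_{\mathrm{in}}}
\;\Longrightarrow\;
Z_{\mathrm{in}}^{2} - Z_1 Z_{\mathrm{in}} - Z_1 Z_2 = 0
\;\Longrightarrow\;
Z_{\mathrm{in}} = \frac{Z_1 + \sqrt{Z_1^{2} + 4 Z_1 Z_2}}{2}.
\]
```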
Deriving Daily Time Series Evapotranspiration, Evaporation and Transpiration Maps With Landsat Data
NASA Astrophysics Data System (ADS)
Paul, G.; Gowda, P. H.; Marek, T.; Xiao, X.; Basara, J. B.
2014-12-01
Mapping high resolution evapotranspiration (ET) over large regions at a daily time step is complex and computationally intensive. The utility of high resolution daily ET maps is broad, ranging from crop water management to watershed management. The aim of this work is to generate daily time series (10 years) of ET and its components, vegetation transpiration (T) and soil water evaporation (E), using Landsat 5 satellite data for the Southern Great Plains forage-rangeland-winter wheat production system in Oklahoma (OK). The framework for generating these products is built around the two source energy balance (TSEB) algorithm; its other important features are: (a) an atmospheric correction algorithm; (b) spatially interpolated weather inputs; (c) functions for varying the Priestley-Taylor coefficient; and (d) an algorithm for extrapolating ET, E and T using reference ET. An extensive network of 140 weather stations managed by the Oklahoma Mesonet was utilized to generate spatially interpolated inputs of air temperature, relative humidity, wind speed, solar radiation, pressure, and reference ET. Validation of the ET maps against eddy covariance data from two grassland sites at El Reno, OK suggested good performance (Table 1). A daily ET map for a small subset of the 18 July 2006 scene illustrates the distinct differences in ET among land uses such as irrigated cropland, vegetation along drainage, and grassland (Figure 1). Results indicated that the proposed ET mapping framework is suitable for deriving high resolution time series daily ET maps at regional scale with Landsat Thematic Mapper data.
Table 1: Daily actual ET performance statistics for two grassland locations at El Reno, OK for year 2005.
Management Type  Mean obs (mm d-1)  Mean est (mm d-1)  MBE (mm d-1)  MBE (%)  RMSE (mm d-1)  RMSE (%)  MAE (mm d-1)  MAPD (%)  NSE   R2
Control          2.2                1.8                -0.43         -19.4    0.87           38.9      0.65          29.5      0.71  0.79
Burnt            2.0                1.8                -0.15         -7.7     0.80           39.8      0.62          30.7      0.73  0.77
Hydrodynamic measurements in Suisun Bay, California, 1992-93
Gartner, Jeffrey W.; Burau, Jon R.
1999-01-01
Sea level, velocity, temperature, and salinity (conductivity and temperature) data collected in Suisun Bay, California, from December 11, 1992, through May 31, 1993, by the U.S. Geological Survey are documented in this report. Sea-level data were collected at four locations and temperature and salinity data were collected at seven locations. Velocity data were collected at three locations using acoustic Doppler current profilers and at four other locations using point velocity meters. Sea-level and velocity data are presented in three forms: (1) harmonic analysis results, (2) time-series plots (sea level, current speed, and current direction versus time), and (3) time-series plots of the low-pass filtered data. Temperature and salinity data are presented as plots of raw and low-pass filtered time series. The velocity and salinity data collected during this study document a period when the residual current patterns and salt field were significantly altered by large Delta outflow (three peaks in excess of 2,000 cubic meters per second). Residual current profiles were consistently seaward with magnitudes that fluctuated primarily in concert with Delta outflow and secondarily with the spring-neap tide cycle. The freshwater inputs advected salinity seaward of Suisun Bay for most of this study. Except for a 10-day period at the beginning of the study, dynamically significant salinities (>2) were seaward of Suisun Bay, which resulted in little or no gravitational circulation transport.
Rodenbeck, Christopher T.; Tracey, Keith J.; Barkley, Keith R.; ...
2014-08-01
This paper introduces a technique for improving the sensitivity of RF subsamplers in radar and coherent receiver applications. The technique, referred to herein as "delta modulation" (DM), feeds the time-average output of a monobit analog-to-digital converter (ADC) back to the ADC input, but with opposite polarity. Assuming pseudo-stationary modulation statistics on the sampled RF waveform, the feedback signal corrects for aggregate DC offsets present in the ADC that otherwise degrade ADC sensitivity. Two RF integrated circuits (RFICs) are designed to demonstrate the approach. One uses analog DM to create the feedback signal; the other uses digital DM to achieve the same result. A series of tests validates the designs. The dynamic time-domain response confirms the feedback loop's basic operation. Measured output quantization imbalance, under noise-only input drive, significantly improves with the use of the DM circuit, even for large, deliberately induced DC offsets and wide temperature variation from -55 °C to +85 °C. Examination of the corrected vs. uncorrected baseband spectrum under swept input signal-to-noise ratio (SNR) conditions demonstrates the effectiveness of this approach for realistic radar and coherent receiver applications. In conclusion, two-tone testing shows no impact of the DM technique on ADC linearity.
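A rough behavioural sketch of the delta-modulation idea as described in the abstract (not the RFIC design itself): the monobit ADC's time-averaged output is integrated and fed back with opposite polarity at the input, driving the code imbalance caused by a DC offset toward zero. All values below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x = rng.standard_normal(n)          # noise-only drive at the sampler input
dc_offset = 0.3                     # deliberately induced aggregate DC offset

def fraction_positive(use_dm, alpha=1e-3):
    fb, pos = 0.0, 0
    for k in range(n):
        b = 1.0 if (x[k] + dc_offset - fb) >= 0 else -1.0   # monobit decision
        pos += b > 0
        if use_dm:
            fb += alpha * b         # integrate the output; subtracted at input
    return pos / n

print(f"+1 code fraction, DM off: {fraction_positive(False):.3f}")  # ~0.62
print(f"+1 code fraction, DM on:  {fraction_positive(True):.3f}")   # ~0.50
```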
NASA Technical Reports Server (NTRS)
Zak, J. Allen; Rodgers, William G., Jr.
2000-01-01
The quality of the Aircraft Vortex Spacing System (AVOSS) is critically dependent on representative wind profiles in the atmospheric boundary layer. These winds, observed from a number of sensor systems around the Dallas-Fort Worth airport, were combined into single vertical wind profiles by an algorithm developed and implemented by MIT Lincoln Laboratory. This process, called the AVOSS Winds Analysis System (AWAS), is used by AVOSS for wake corridor predictions. During times when AWAS solutions were available, the quality of the resultant wind profiles and variance was judged from a series of plots combining all sensor observations and AWAS profiles during the period 1200 to 0400 UTC daily. First, input data were evaluated for continuity and consistency against established criteria. Next, the degree of agreement among all wind sensor systems was noted and cases of disagreement identified. Finally, the resultant AWAS solution was compared to the quality-assessed input data. When profiles differed by a specified amount from valid sensor consensus winds, times and altitudes were flagged. Volume one documents the process and quality of input sensor data. Volume two documents the data processing/sorting process and provides the resultant flagged files.
Field Research Facility Data Integration Framework Data Management Plan: Survey Lines Dataset
2016-08-01
CHL and its District partners. The beach morphology surveys on which this report focuses provide quantitative measures of the dynamic nature of... topography and volume change.
1.4 Data description. The morphology surveys are conducted over a series of 26 shore-perpendicular profile lines spaced 50... dataset input data and products.
[Table 1, "FRF survey lines dataset input data and products", tabulates input data (e.g., ASCII LARC survey text files) against the FDIF products derived from them.]
Photovoltaic power system tests on an 8-kilowatt single-phase line-commutated inverter
NASA Technical Reports Server (NTRS)
Stover, J. B.
1978-01-01
Efficiency and power factor were measured as functions of solar array voltage and current. The effects of input shunt capacitance and series inductance were determined. Tests were conducted from 15 to 75 percent of the 8 kW rated inverter input power. Measured efficiencies ranged from 76 percent to 88 percent at about 50 percent of rated inverter input power. Power factor ranged from 36 percent to 72 percent.
Nitrogen enrichment and speciation in a coral reef lagoon driven by groundwater inputs of bird guano
NASA Astrophysics Data System (ADS)
McMahon, Ashly; Santos, Isaac R.
2017-09-01
While the influence of river inputs on coral reef biogeochemistry has been investigated, there is limited information on nutrient fluxes related to submarine groundwater discharge (SGD). Here, we investigate whether significant saline groundwater-derived nutrient inputs from bird guano drive coral reef photosynthesis and calcification off Heron Island (Great Barrier Reef, Australia). We used multiple experimental approaches including groundwater sampling, beach face transects, and detailed time series observations to assess the dynamics and speciation of groundwater nutrients as they travel across the island and discharge into the coral reef lagoon. Nitrogen speciation shifted from nitrate-dominated groundwater (>90% of total dissolved nitrogen) to a coral reef lagoon dominated by dissolved organic nitrogen (DON; ˜86%). There was a minimum input of nitrate of 2.1 mmol m-2 d-1 into the lagoon from tidally driven submarine groundwater discharge estimated from a radon mass balance model. An independent approach based on the enrichment of dissolved nutrients during isolation at low tide implied nitrate fluxes of 5.4 mmol m-2 d-1. A correlation was observed between nitrate and daytime net ecosystem production and calcification. We suggest that groundwater nutrients derived from bird guano may offer a significant addition to oligotrophic coral reef lagoons and fuel ecosystem productivity and the coastal carbon cycle near Heron Island. The large input of groundwater nutrients in Heron Island may serve as a natural ecological analogue to other coral reefs subject to large nutrient inputs from anthropogenic sources.
Sensitivity Analysis as a Tool to assess Energy-Water Nexus in India
NASA Astrophysics Data System (ADS)
Priyanka, P.; Banerjee, R.
2017-12-01
Rapid urbanization, population growth and related structural changes within the economy of a developing country act as stressors on energy and water demand, forming a well-established energy-water nexus. The energy-water nexus has been thoroughly studied at various spatial scales, viz. city level, river basin level and national level, to guide different stakeholders in the sustainable management of energy and water. However, temporal dimensions of the energy-water nexus at the national level have not been thoroughly investigated because of the unavailability of relevant time-series data. In this study we investigated the energy-water nexus at the national level using environmentally-extended input-output tables for the Indian economy (2004-2013) as provided by the EORA database. Perturbation-based sensitivity analysis is proposed to highlight the critical nodes of interaction among economic sectors, which is further linked to detecting the synergistic effects of energy and water consumption. Technology changes (interpreted as changes in the values of nodes) result in modification of the interactions among economic sectors, and synergy is affected through direct as well as indirect effects. Indirect effects are not easily understood through preliminary examination of the data; hence, sensitivity analysis within an input-output framework is important to understand them. Furthermore, the time series data help in developing an understanding of the dynamics of the synergistic effects. We identified the key sectors and technology changes for the Indian economy, which will provide better decision support for policy makers regarding the sustainable use of energy-water resources in India.
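A sketch of what perturbation-based sensitivity means in a Leontief input-output framework follows; the three-sector coefficient matrix is made up for illustration and is not the EORA India table. Because the Leontief inverse propagates each perturbation through all supply chains, the resulting sensitivities include the indirect effects the abstract emphasizes.

```python
import numpy as np

A = np.array([[0.10, 0.30, 0.05],      # agriculture (illustrative sectors)
              [0.20, 0.10, 0.25],      # energy
              [0.05, 0.15, 0.10]])     # water services
f = np.array([100.0, 50.0, 80.0])      # final demand by sector

def total_output(A, f):
    L = np.linalg.inv(np.eye(A.shape[0]) - A)   # Leontief inverse
    return L @ f

base = total_output(A, f).sum()
eps = 1e-4
sens = np.zeros_like(A)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        Ap = A.copy()
        Ap[i, j] += eps                # perturb one node (i -> j linkage)
        sens[i, j] = (total_output(Ap, f).sum() - base) / eps

print("total-output sensitivity to each technical coefficient:")
print(sens.round(1))
```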
NASA Astrophysics Data System (ADS)
Zhang, Taiping; Stackhouse, Paul W.; Gupta, Shashi K.; Cox, Stephen J.; Mikovitz, J. Colleen
2017-02-01
Occasionally, a need arises to downscale a time series of data from a coarse temporal resolution to a finer one, a typical example being from monthly means to daily means. For this case, daily means derived as such are used as inputs of climatic or atmospheric models so that the model results may exhibit variance on the daily time scale and retain the monthly mean of the original data set without an abrupt change from the end of one month to the beginning of the next. Different methods have been developed which often need assumptions, free parameters and the solution of simultaneous equations. Here we derive a generalized formulation by means of Fourier transform and inversion so that it can be used to directly compute daily means from a series of an arbitrary number of monthly means. The formulation can be used to transform any coarse temporal resolution to a finer one. From the derived results, the original data can be recovered almost identically. As a real application, we use this method to derive the daily counterpart of the MAC-v1 aerosol climatology that provides monthly mean aerosol properties for 18 shortwave bands and 12 longwave bands for the years from 1860 to 2100. The derived daily means are to be used as inputs of the shortwave and longwave algorithms of the NASA GEWEX SRB project.
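A rough numerical approximation of the idea, not the authors' closed-form derivation: Fourier-interpolate the monthly series to daily resolution, then shift each month by a constant so its mean is preserved. Equal 30-day months are assumed purely for simplicity, and the per-month shifts can reintroduce small month-boundary steps, which the paper's formulation is designed to avoid.

```python
import numpy as np

monthly = np.array([3.1, 3.5, 4.2, 5.0, 6.1, 6.8,
                    6.9, 6.4, 5.3, 4.4, 3.6, 3.2])   # e.g. irradiance means
days_per_month = 30
m, n = monthly.size, monthly.size * days_per_month

# Band-limited (Fourier) interpolation: zero-pad the spectrum and invert
daily = np.fft.irfft(np.fft.rfft(monthly), n=n) * (n / m)

# Re-impose each monthly mean exactly with a per-month constant shift
for k in range(m):
    block = slice(k * days_per_month, (k + 1) * days_per_month)
    daily[block] += monthly[k] - daily[block].mean()

print(np.allclose(daily.reshape(m, -1).mean(axis=1), monthly))  # True
```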
Röhling, Steffi; Dunger, Karsten; Kändler, Gerald; Klatt, Susann; Riedel, Thomas; Stümer, Wolfgang; Brötz, Johannes
2016-12-01
The German greenhouse gas inventory in the land use change sector strongly depends on national forest inventory data. As these data were collected periodically 1987, 2002, 2008 and 2012, the time series on emissions show several "jumps" due to biomass stock change, especially between 2001 and 2002 and between 2007 and 2008 while within the periods the emissions seem to be constant due to the application of periodical average emission factors. This does not reflect inter-annual variability in the time series, which would be assumed as the drivers for the carbon stock changes fluctuate between the years. Therefore additional data, which is available on annual basis, should be introduced into the calculations of the emissions inventories in order to get more plausible time series. This article explores the possibility of introducing an annual rather than periodical approach to calculating emission factors with the given data and thus smoothing the trajectory of time series for emissions from forest biomass. Two approaches are introduced to estimate annual changes derived from periodic data: the so-called logging factor method and the growth factor method. The logging factor method incorporates annual logging data to project annual values from periodic values. This is less complex to implement than the growth factor method, which additionally adds growth data into the calculations. Calculation of the input variables is based on sound statistical methodologies and periodically collected data that cannot be altered. Thus a discontinuous trajectory of the emissions over time remains, even after the adjustments. It is intended to adopt this approach in the German greenhouse gas reporting in order to meet the request for annually adjusted values.
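The logging factor method can be illustrated, roughly, as distributing a periodic biomass stock change over years in proportion to annual logging statistics; the numbers below are invented, and the inventory's actual procedure may differ in detail.

```python
import numpy as np

period_change = -12.0                 # biomass C change over one inventory period (invented)
logging = np.array([54.0, 57.0, 62.0, 77.0, 62.0, 55.0])  # annual logging volumes (invented)

# Annualize the periodic total in proportion to each year's logging
annual = period_change * logging / logging.sum()
for year, a in zip(range(2002, 2008), annual):
    print(f"{year}: {a:+.2f}")
print(f"check: {annual.sum():+.2f} equals the periodic total")
```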
NASA Astrophysics Data System (ADS)
Ritzberger, D.; Jakubek, S.
2017-09-01
In this work, a data-driven identification method, based on polynomial nonlinear autoregressive models with exogenous inputs (NARX) and the Volterra series, is proposed to describe the dynamic and nonlinear voltage and current characteristics of polymer electrolyte membrane fuel cells (PEMFCs). The structure selection and parameter estimation of the NARX model are performed on broad-band voltage/current data. By transforming the time-domain NARX model into a Volterra series representation using the harmonic probing algorithm, a frequency-domain description of the linear and nonlinear dynamics is obtained. With the Volterra kernels corresponding to different operating conditions, information from existing diagnostic tools in the frequency domain, such as electrochemical impedance spectroscopy (EIS) and total harmonic distortion analysis (THDA), is effectively combined. Additionally, the time-domain NARX model can be utilized for fault detection by evaluating the difference between measured and simulated output. To increase the fault detectability, an optimization problem is introduced which maximizes this output residual to obtain proper excitation frequencies. As a possible extension, it is shown that by optimizing the periodic signal shape itself, the fault detectability is further increased.
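A minimal polynomial-NARX sketch (a toy plant and regressor set, not the paper's model structure): build lagged and product regressors from broad-band input/output data, fit the coefficients by least squares, and track the output residual, which is the quantity the fault-detection step works with.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
u = rng.standard_normal(n)                        # broad-band current (input)
y = np.zeros(n)                                   # cell voltage (output)
for k in range(1, n):                             # toy nonlinear plant
    y[k] = (0.6 * y[k - 1] + 0.3 * u[k - 1]
            - 0.05 * y[k - 1] * u[k - 1] + 0.01 * rng.standard_normal())

# Degree-2 polynomial NARX regressors: y[k-1], u[k-1], y[k-1]*u[k-1]
Y = y[1:]
Phi = np.column_stack([y[:-1], u[:-1], y[:-1] * u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print("estimated coefficients:", theta.round(3))  # ~ [0.6, 0.3, -0.05]

# Fault-detection idea: monitor the one-step-ahead output residual
resid = Y - Phi @ theta
print(f"residual RMS: {np.sqrt(np.mean(resid ** 2)):.4f}")
```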
Using tree ring cellulose as a tool to estimate past tritium inputs to the ocean
NASA Astrophysics Data System (ADS)
Stark, S.; Statham, P. J.; Stanley, R.; Jenkins, W. J.
2005-09-01
Tritium (3H) concentrations in tree rings should reflect ambient precipitation. Thus, to improve knowledge of the 3H input to the oceans, we developed a new technique to measure 3H concentrations in annual tree rings. Measurements of 3H were made on cellulose, the primary constituent of wood, as the isotopic signal of its carbon bound hydrogen atoms should be unchanged since biosynthesis. Traditional cellulose extraction techniques from softwoods are slow and were found to not yield reproducibly pure cellulose. Therefore, a new microwave method was developed which reduces extraction times from 3-5 days to approximately 3 h. Potential 3H contamination from the hydroxyl groups of the cellulose molecule was subsequently removed by exchange with 3H-free NaOH, thus avoiding the dangers of working with large amounts of cellulose nitrate. The validity of the technique was tested by presenting a 3H time series from a cedar tree which grew in Tollymore Forest Park, Northern Ireland, for comparison with 3H data from the Valentia weather station. We find that the 3H in the cellulose clearly reflects the 3H in precipitation with no significant smearing of the bomb signal. A simple box model illustrates that the maximum reservoir residence time of source water for the tree is less than 1 yr, suggesting that groundwater is not a major source of water for this tree. In general, however, the groundwater input needs to be quantified for accurate 3H reconstructions to be made. This work demonstrates the potential of using 3H in wood cellulose as a proxy for 3H in precipitation and, thus, opens the door to reconstruction of past 3H inputs to the ocean.
NASA Astrophysics Data System (ADS)
Tian, H.; Lu, C.
2016-12-01
In addition to enhancing agricultural productivity, synthetic nitrogen (N) and phosphorous (P) fertilizer application in croplands has dramatically altered the global nutrient budget, water quality, greenhouse gas balance, and their feedbacks to the climate system. However, due to the lack of geospatial fertilizer input data, current Earth system/land surface modeling studies have to ignore or use over-simplified data (e.g., static, spatially uniform fertilizer use) to characterize agricultural N and P input over decadal or century-long periods. In this study, we therefore develop a global time-series gridded data set of annual synthetic N and P fertilizer use rate in croplands, matched with HYDE 3.2 historical land use maps, at a resolution of 0.5° latitude by longitude during 1900-2013. Our data indicate N and P fertilizer use rates increased by approximately 8 times and 3 times, respectively, since the year 1961, when IFA (International Fertilizer Industry Association) and FAO (Food and Agricultural Organization) surveys of country-level fertilizer input became available. Considering cropland expansion, the increase in total fertilizer consumption is even larger. Hotspots of agricultural N fertilizer use shifted from the U.S. and Western Europe in the 1960s to East Asia in the early 21st century. P fertilizer input shows a similar pattern with an additional hotspot in Brazil. We find a global increase of the fertilizer N/P ratio by 0.8 g N/g P per decade (p < 0.05) during 1961-2013, which may have important global implications for human impacts on agroecosystem functions in the long run. Our data can serve as a critical input driver for regional and global assessments of agricultural productivity, crop yield, agriculture-derived greenhouse gas balance, global nutrient budget, land-to-aquatic nutrient loss, and ecosystem feedbacks to the climate system.
NASA Astrophysics Data System (ADS)
Lu, Chaoqun; Tian, Hanqin
2017-03-01
In addition to enhancing agricultural productivity, synthetic nitrogen (N) and phosphorous (P) fertilizer application in croplands dramatically alters global nutrient budget, water quality, greenhouse gas balance, and their feedback to the climate system. However, due to the lack of geospatial fertilizer input data, current Earth system and land surface modeling studies have to ignore or use oversimplified data (e.g., static, spatially uniform fertilizer use) to characterize agricultural N and P input over decadal or century-long periods. In this study, we therefore develop global time series gridded data of annual synthetic N and P fertilizer use rate in agricultural lands, matched with HYDE 3.2 historical land use maps, at a resolution of 0.5° × 0.5° latitude-longitude during 1961-2013. Our data indicate N and P fertilizer use rates on per unit cropland area increased by approximately 8 times and 3 times, respectively, since the year 1961 when IFA (International Fertilizer Industry Association) and FAO (Food and Agricultural Organization) surveys of country-level fertilizer input became available. Considering cropland expansion, the increase in total fertilizer consumption is even larger. Hotspots of agricultural N fertilizer application shifted from the US and western Europe in the 1960s to eastern Asia in the early 21st century. P fertilizer input shows a similar pattern with an additional current hotspot in Brazil. We found a global increase in fertilizer N / P ratio by 0.8 g N g-1 P per decade (p < 0.05) during 1961-2013, which may have an important global implication for human impacts on agroecosystem functions in the long run. Our data can serve as one of critical input drivers for regional and global models to assess the impacts of nutrient enrichment on climate system, water resources, food security, etc. Datasets available at doi:10.1594/PANGAEA.863323.
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
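Antithetic variates are simple to demonstrate; in the sketch below (a stand-in payoff function, not the UKPDS 68 outcomes equations) each standard normal draw Z is paired with -Z, and the paired estimator's variance drops sharply for a monotone payoff, which is what cuts the number of replications needed.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

def payoff(z):                       # stand-in for one simulated outcome
    return np.exp(0.1 * z)

z = rng.standard_normal(n)
plain = payoff(z)                                            # ordinary MC
anti = 0.5 * (payoff(z[: n // 2]) + payoff(-z[: n // 2]))    # antithetic pairs

print(f"plain MC:      mean {plain.mean():.5f}, var {plain.var():.6f}")
print(f"antithetic MC: mean {anti.mean():.5f}, var {anti.var():.6f}")
```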
A two-dimensional graphing program for the Tektronix 4050-series graphics computers
Kipp, K.L.
1983-01-01
A refined, two-dimensional graph-plotting program was developed for use on Tektronix 4050-series graphics computers. Important features of this program include: any combination of logarithmic and linear axes, optional automatic scaling and numbering of the axes, multiple-curve plots, character or drawn symbol-point plotting, optional cartridge-tape data input and plot-format storage, optional spline fitting for smooth curves, and built-in data-editing options. The program is run while the Tektronix is not connected to any large auxiliary computer, although data from files on an auxiliary computer easily can be transferred to data-cartridge for later plotting. The user is led through the plot-construction process by a series of questions and requests for data input. Five example plots are presented to illustrate program capability and the sequence of program operation. (USGS)
Pan-European stochastic flood event set
NASA Astrophysics Data System (ADS)
Kadlec, Martin; Pinto, Joaquim G.; He, Yi; Punčochář, Petr; Kelemen, Fanni D.; Manful, Desmond; Palán, Ladislav
2017-04-01
Impact Forecasting (IF), the model development center of Aon Benfield, has been developing a large suite of catastrophe flood models on a probabilistic basis for individual countries in Europe. Such natural catastrophes do not follow national boundaries: for example, the major flood in 2016 was responsible for Europe's largest insured loss of USD3.4bn and affected Germany, France, Belgium, Austria and parts of several other countries. Reflecting such needs, IF initiated a pan-European flood event set development which combines cross-country exposures with country-based loss distributions to provide more insightful data to re/insurers. Because the observed discharge data are not available across the whole of Europe in sufficient quantity and quality for detailed loss evaluation purposes, a top-down approach was chosen. This approach is based on simulating precipitation from a GCM/RCM model chain followed by a calculation of discharges using rainfall-runoff modelling. IF set up this project in close collaboration with the Karlsruhe Institute of Technology (KIT) regarding the precipitation estimates and with the University of East Anglia (UEA) in terms of the rainfall-runoff modelling. KIT's main objective is to provide high resolution daily historical and stochastic time series of key meteorological variables. A purely dynamical downscaling approach with the regional climate model COSMO-CLM (CCLM) is used to generate the historical time series, using re-analysis data as boundary conditions. The resulting time series are validated against the gridded observational dataset E-OBS, and different bias-correction methods are employed. The generation of the stochastic time series requires transfer functions between large-scale atmospheric variables and regional temperature and precipitation fields. These transfer functions are developed for the historical time series using reanalysis data as predictors and bias-corrected CCLM simulated precipitation and temperature as predictands. Finally, the transfer functions are applied to a large ensemble of GCM simulations with forcing corresponding to present-day climate conditions to generate highly resolved stochastic time series of precipitation and temperature for several thousand years. These time series form the input for the rainfall-runoff model developed by the UEA team. It is a spatially distributed model adapted from the HBV model and will be calibrated for individual basins using historical discharge data. The calibrated model will be driven by the precipitation time series generated by the KIT team to simulate discharges at a daily time step. The uncertainties in the simulated discharges will be analysed using multiple model parameter sets. A number of statistical methods will be used to assess return periods, changes in the magnitudes, changes in the characteristics of floods such as time base and time to peak, and spatial correlations of large flood events. The pan-European stochastic flood event set will permit a better view of flood risk for market applications.
Statistical Tests of System Linearity Based on the Method of Surrogate Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, N.; Paez, T.; Red-Horse, J.
When dealing with measured data from dynamic systems we often make the tacit assumption that the data are generated by linear dynamics. While some systematic tests for linearity and determinism are available - for example the coherence function, the probability density function, and the bispectrum - further tests that quantify the existence and the degree of nonlinearity are clearly needed. In this paper we demonstrate a statistical test for the nonlinearity exhibited by a dynamic system excited by Gaussian random noise. We perform the usual division of the input and response time series data into blocks as required by the Welch method of spectrum estimation and search for significant relationships between a given input frequency and response at harmonics of the selected input frequency. We argue that systematic tests based on the recently developed statistical method of surrogate data readily detect significant nonlinear relationships. The paper elucidates the method of surrogate data. Typical results are illustrated for a linear single degree-of-freedom system and for a system with polynomial stiffness nonlinearity.
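The surrogate-data logic can be sketched compactly: phase-randomized surrogates share the linear (spectral) properties of the measured response, so a statistic that exceeds the surrogate ensemble indicates nonlinearity. The quadratic system and test statistic below are invented illustrations, not the paper's single-degree-of-freedom example.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4096
x = rng.standard_normal(n)           # Gaussian random excitation
y = x + 0.4 * x ** 2                 # response with a quadratic nonlinearity

def phase_surrogate(s, rng):
    """Randomize the Fourier phases, keep the amplitude spectrum."""
    S = np.fft.rfft(s)
    phases = rng.uniform(0, 2 * np.pi, S.size)
    phases[0] = 0.0                  # keep DC real
    if s.size % 2 == 0:
        phases[-1] = 0.0             # keep Nyquist bin real
    return np.fft.irfft(np.abs(S) * np.exp(1j * phases), n=s.size)

def stat(a, b):
    """Simple nonlinearity statistic: correlation of b with a squared."""
    return abs(np.corrcoef(a ** 2, b)[0, 1])

observed = stat(x, y)
ensemble = [stat(x, phase_surrogate(y, rng)) for _ in range(200)]
print(f"observed {observed:.3f} vs surrogate 95th percentile "
      f"{np.percentile(ensemble, 95):.3f}")
```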
Excess nitrogen inputs to estuaries have been linked to deteriorating water quality and habitat conditions which in turn have direct and indirect impacts on both commercial and recreational fish and shellfish. This paper is the first of a two-part series that applies a previously...
NASA Technical Reports Server (NTRS)
Stankiewicz, N.
1982-01-01
The multiple channel input signal to a soft limiter amplifier, such as a traveling wave tube, is represented as a finite, linear sum of Gaussian functions in the frequency domain. Linear regression is used to fit the channel shapes to a least squares residual error. Distortions in the output signal, namely intermodulation products, are produced by the nonlinear gain characteristic of the amplifier and constitute the principal noise analyzed in this study. The signal to noise ratios are calculated for various input powers from saturation to 10 dB below saturation for two specific distributions of channels. A criterion for the truncation of the series expansion of the nonlinear transfer characteristic is given. It is found that the signal to noise ratios are very sensitive to the coefficients used in this expansion. Improper or incorrect truncation of the series leads to ambiguous results in the signal to noise ratios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, Kenta; Department of Chemistry, Biology, and Biotechnology, University of Perugia, 06123 Perugia; Gotoda, Hiroshi
2016-05-15
The convective motions within a solution of a photochromic spiro-oxazine being irradiated by UV only on the bottom part of its volume give rise to aperiodic spectrophotometric dynamics. In this paper, we study three nonlinear properties of the aperiodic time series: permutation entropy, short-term predictability and long-term unpredictability, and degree distribution of the visibility graph networks. After ascertaining the extracted chaotic features, we show how the aperiodic time series can be exploited to implement all the fundamental two-input binary logic functions (AND, OR, NAND, NOR, XOR, and XNOR) and some basic arithmetic operations (half-adder, full-adder, half-subtractor). This is possible due to the wide range of states a nonlinear system accesses in the course of its evolution. Therefore, the solution of the convective photochemical oscillator results in hardware for chaos-computing alternative to conventional complementary metal-oxide semiconductor-based integrated circuits.
Artificial neural networks for modeling time series of beach litter in the southern North Sea.
Schulz, Marcus; Matthies, Michael
2014-07-01
In European marine waters, existing monitoring programs of beach litter need to be improved concerning the litter items used as indicators of pollution levels, efficiency, and effectiveness. In order to ease and focus future monitoring of beach litter on a few important litter items, feed-forward neural networks consisting of three layers were developed to relate single litter items to general categories of marine litter. The neural networks developed were applied to seven beaches in the southern North Sea and modeled time series of five general categories of marine litter, such as litter from fishing, shipping, and tourism. Results of regression analyses show that general categories were predicted significantly, with moderate to good accuracy. Measured and modeled data were of the same order of magnitude, and minima and maxima overlapped well. Neural networks were found to be eligible tools to deliver reliable predictions of marine litter with low computational effort and little input of information. Copyright © 2014 Elsevier Ltd. All rights reserved.
A hybrid least squares support vector machines and GMDH approach for river flow forecasting
NASA Astrophysics Data System (ADS)
Samsudin, R.; Saad, P.; Shabri, A.
2010-06-01
This paper proposes a novel hybrid forecasting model, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM), known as GLSSVM. The GMDH is used to determine the useful input variables for the LSSVM model, and the LSSVM model serves as the time series forecaster. In this study the application of GLSSVM to monthly river flow forecasting for the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA) model, and the GMDH and LSSVM models, using long-term observations of monthly river flow discharge. Two standard statistical measures, the root mean square error (RMSE) and the coefficient of correlation (R), are employed to evaluate the performance of the various models developed. Experimental results indicate that the hybrid model is a powerful tool for modeling discharge time series and can be applied successfully in complex hydrological modeling.
Physiological time-series analysis: what does regularity quantify?
NASA Technical Reports Server (NTRS)
Pincus, S. M.; Goldberger, A. L.
1994-01-01
Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity that appears to have potential application to a wide variety of physiological and clinical time-series data. The focus here is to provide a better understanding of ApEn to facilitate its proper utilization, application, and interpretation. After giving the formal mathematical description of ApEn, we provide a multistep description of the algorithm as applied to two contrasting clinical heart rate data sets. We discuss algorithm implementation and interpretation and introduce a general mathematical hypothesis of the dynamics of a wide class of diseases, indicating the utility of ApEn to test this hypothesis. We indicate the relationship of ApEn to variability measures, the Fourier spectrum, and algorithms motivated by study of chaotic dynamics. We discuss further mathematical properties of ApEn, including the choice of input parameters, statistical issues, and modeling considerations, and we conclude with a section on caveats to ensure correct ApEn utilization.
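ApEn itself is short enough to state directly; the following is a plain O(N^2) implementation of ApEn(m, r) = Phi_m(r) - Phi_{m+1}(r) with the common tolerance choice r = 0.2 SD, applied to a regular and an irregular series (the test signals are illustrative, not the paper's heart rate data).

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy of a 1-D series (direct definition, O(N^2))."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()                        # common tolerance choice

    def phi(m):
        n = x.size - m + 1
        templ = np.array([x[i:i + m] for i in range(n)])  # length-m templates
        # C_i: fraction of templates within Chebyshev distance r of template i
        c = [(np.max(np.abs(templ - templ[i]), axis=1) <= r).mean()
             for i in range(n)]
        return np.mean(np.log(c))                # self-match keeps log finite

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(6)
regular = np.sin(np.linspace(0, 40 * np.pi, 1000))
noisy = rng.standard_normal(1000)
print(f"ApEn(sine)  = {apen(regular):.3f}")      # low: regular signal
print(f"ApEn(noise) = {apen(noisy):.3f}")        # high: irregular signal
```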
Cordes, Erik E.; Auscavitch, Steven; Baums, Iliana B.; Fisher, Charles R.; Girard, Fanny; Gomez, Carlos; McClain-Counts, Jennifer P.; Mendlovitz, Howard P.; Saunders, Miles; Smith, Styles; Vohsen, Samuel; Weinheimer, Alaina
2016-01-01
The 2015 Ecosystem Impacts of Oil and Gas Inputs to the Gulf (ECOGIG) expedition was a continuation of a three-year partnership between our Gulf of Mexico Research Initiative-funded research consortium and the Ocean Exploration Trust to study the effects of oil and dispersant on corals and closely related communities affected by the 2010 Deepwater Horizon oil spill (White et al., 2012, 2014; Hsing et al., 2013; Fisher et al., 2014a,b; Figure 1A–C). As part of our analysis, we explored a new site to the west of the Macondo well in lease block Mississippi Canyon (MC) 462 where we examined 50 new corals for impact from the spill (Figure 1D). A total of over 250 corals were re-imaged in 2015 for this ongoing time-series study. Another goal was to initiate a study to determine how proximity to natural seeps affects corals and infauna in these communities.
Automatic Detection of Driver Fatigue Using Driving Operation Information for Transportation Safety
Li, Zuojin; Chen, Liukui; Peng, Jun; Wu, Ying
2017-01-01
Fatigued driving is a major cause of road accidents. For this reason, the method in this paper is based on the steering wheel angles (SWA) and yaw angles (YA) information under real driving conditions to detect drivers’ fatigue levels. It analyzes the operation features of SWA and YA under different fatigue statuses, then calculates the approximate entropy (ApEn) features of a short sliding window on time series. Using the nonlinear feature construction theory of dynamic time series, with the fatigue features as input, designs a “2-6-6-3” multi-level back propagation (BP) Neural Networks classifier to realize the fatigue detection. An approximately 15-h experiment is carried out on a real road, and the data retrieved are segmented and labeled with three fatigue levels after expert evaluation, namely “awake”, “drowsy” and “very drowsy”. The average accuracy of 88.02% in fatigue identification was achieved in the experiment, endorsing the value of the proposed method for engineering applications. PMID:28587072
Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.
1993-01-01
The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC computer based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
NASA Astrophysics Data System (ADS)
Machguth, H.; Paul, F.; Kotlarski, S.; Hoelzle, M.
2009-04-01
Climate model output has been applied in several studies on glacier mass balance calculation. To date, mass balance has mostly been computed at the native resolution of the climate model output, or data from individual cells were selected and statistically downscaled. Little attention has been given to the issue of downscaling entire fields of climate model output to a resolution fine enough to compute glacier mass balance in rugged high-mountain terrain. In this study we explore the use of gridded output from a regional climate model (RCM) to drive a distributed mass balance model for the perimeter of the Swiss Alps and the time frame 1979-2003. Our focus lies on the development and testing of downscaling and validation methods. The mass balance model runs at daily steps and 100 m spatial resolution while the RCM REMO provides daily grids (approx. 18 km resolution) of dynamically downscaled re-analysis data. Interpolation techniques and sub-grid parametrizations are combined to bridge the gap in spatial resolution and to obtain daily input fields of air temperature, global radiation and precipitation. The meteorological input fields are compared to measurements at 14 high-elevation weather stations. Computed mass balances are compared to various sets of direct measurements, including stake readings and mass balances for entire glaciers. The validation procedure is performed separately for annual, winter and summer balances. Time series of mass balances for entire glaciers obtained from the model run agree well with observed time series. Summer melt measured at stakes on several glaciers is well reproduced by the model; observed accumulation, on the other hand, is either over- or underestimated. It is shown that these shifts are systematic and correlated to regional biases in the meteorological input fields. We conclude that the gap in spatial resolution is not a large drawback, while biases in RCM output are a major limitation to model performance. The development and testing of methods to reduce regionally variable biases in entire fields of RCM output should be a focus of future studies.
CARE3MENU- A CARE III USER FRIENDLY INTERFACE
NASA Technical Reports Server (NTRS)
Pierce, J. L.
1994-01-01
CARE3MENU generates an input file for the CARE III program. CARE III is used for reliability prediction of complex, redundant, fault-tolerant systems including digital computers, aircraft, nuclear and chemical control systems. The CARE III input file often becomes complicated and is not easily formatted with a text editor. CARE3MENU provides an easy, interactive method of creating an input file by automatically formatting a set of user-supplied inputs for the CARE III system. CARE3MENU provides detailed on-line help for most of its screen formats. The reliability model input process is divided into sections using menu-driven screen displays. Each stage, or set of identical modules comprising the model, must be identified and described in terms of number of modules, minimum number of modules for stage operation, and critical fault threshold. The fault handling and fault occurrence models are detailed in several screens by parameters such as transition rates, propagation and detection densities, Weibull or exponential characteristics, and model accuracy. The system fault tree and critical pairs fault tree screens are used to define the governing logic and to identify modules affected by component failures. Additional CARE3MENU screens prompt the user for output options and run time control values such as mission time and truncation values. There are fourteen major screens, many with default values and HELP options. The documentation includes: (1) a user's guide with several examples of CARE III models, the dialog required to input them to CARE3MENU, and the output files created; and (2) a maintenance manual for assistance in changing the HELP files and modifying any of the menu formats or contents. CARE3MENU is written in FORTRAN 77 for interactive execution and has been implemented on a DEC VAX series computer operating under VMS. This program was developed in 1985.
High dynamic range charge measurements
De Geronimo, Gianluigi
2012-09-04
A charge amplifier for use in radiation sensing includes an amplifier, at least one switch, and at least one capacitor. The switch selectively couples the input of the switch to one of at least two voltages. The capacitor is electrically coupled in series between the input of the amplifier and the input of the switch. The capacitor is electrically coupled to the input of the amplifier without a switch coupled therebetween. A method of measuring charge in radiation sensing includes selectively diverting charge from an input of an amplifier to an input of at least one capacitor by selectively coupling an output of the at least one capacitor to one of at least two voltages. The input of the at least one capacitor is operatively coupled to the input of the amplifier without a switch coupled therebetween. The method also includes calculating a total charge based on a sum of the amplified charge and the diverted charge.
Development of constitutive model for composites exhibiting time dependent properties
NASA Astrophysics Data System (ADS)
Pupure, L.; Joffe, R.; Varna, J.; Nyström, B.
2013-12-01
Regenerated cellulose fibres and their composites exhibit highly nonlinear behaviour. The mechanical response of these materials can be successfully described by the model developed by Schapery for time-dependent materials. However, this model requires input parameters that are determined experimentally via a large number of time-consuming tests on the studied composite material. If, for example, the volume fraction of fibres is changed, we have a different material, and a new series of experiments on this new material is required. The ultimate objective of our studies is therefore to develop a model that determines the composite behaviour from the behaviour of the constituents of the composite. This paper gives an overview of the problems and difficulties associated with the development, implementation and verification of such a model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Luis; Marchante, Ruth; Cony, Marco
2010-10-15
Due to the strong increase of solar power generation, predictions of incoming solar energy are acquiring more importance. Photovoltaic and solar thermal are the main sources of electricity generation from solar energy. In the case of solar thermal energy plants with an energy storage system, their management and operation need reliable predictions of solar irradiance with the same temporal resolution as the temporal capacity of the back-up system. These plants can then work like a conventional power plant and compete in the energy stock market, avoiding intermittence in electricity production. This work presents a comparison of statistical models based on time series applied to predict half-daily values of global solar irradiance with a temporal horizon of 3 days. Half-daily values consist of hourly global solar irradiance accumulated from sunrise to solar noon and from noon until sunset for each day. The ground solar radiation dataset used belongs to stations of the Spanish National Weather Service (AEMet). The models tested are autoregressive, neural network and fuzzy logic models. Because the half-daily solar irradiance time series is non-stationary, it was necessary to transform it into two new stationary variables (clearness index and lost component), which are used as input to the predictive models. Improvement in terms of RMSD of the models tested is compared against a persistence model. The validation process shows that all the models tested improve on persistence. The best approach to forecasting half-daily values of solar irradiance is the neural network model with the lost component as input, except at the Lerida station, where models based on the clearness index have less uncertainty because this magnitude behaves more linearly and is easier for the models to simulate.
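A minimal sketch of the clearness-index transformation the abstract relies on, assuming the usual definition kt = GHI / (extraterrestrial irradiance on a horizontal plane); the function name, the nominal solar constant, and the simple eccentricity correction are illustrative choices, not details taken from the study:

```python
import numpy as np

SOLAR_CONSTANT = 1367.0  # W m-2, nominal value (assumption, not from the study)

def clearness_index(ghi, zenith_deg, day_of_year):
    """Ratio of measured global horizontal irradiance to extraterrestrial
    irradiance on a horizontal plane; removes most of the deterministic
    diurnal and seasonal cycle, leaving a closer-to-stationary series."""
    e0 = 1.0 + 0.033 * np.cos(2.0 * np.pi * day_of_year / 365.0)  # Sun-Earth distance correction
    cos_z = np.cos(np.radians(zenith_deg))
    g0 = SOLAR_CONSTANT * e0 * cos_z
    kt = np.where(g0 > 0.0, ghi / np.maximum(g0, 1e-6), np.nan)   # NaN at night
    return np.clip(kt, 0.0, 1.0)
```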
Gsflow-py: An integrated hydrologic model development tool
NASA Astrophysics Data System (ADS)
Gardner, M.; Niswonger, R. G.; Morton, C.; Henson, W.; Huntington, J. L.
2017-12-01
Integrated hydrologic modeling encompasses a vast number of processes and specifications, variable in time and space, and development of model datasets can be arduous. Model input construction techniques have not been formalized or made easily reproducible. Creating the input files for integrated hydrologic models (IHMs) requires complex GIS processing of raster and vector datasets from various sources. Developing stream network topology that is consistent with the model-resolution digital elevation model is important for robust simulation of surface water and groundwater exchanges. Distribution of meteorological parameters over the model domain is difficult in complex terrain at the model resolution scale, but is necessary to drive realistic simulations. Historically, development of input data for IHMs has required extensive GIS and computer programming expertise, which has restricted the use of IHMs to research groups with available financial, human, and technical resources. Here we present a series of Python scripts that provide a formalized technique for the parameterization and development of integrated hydrologic model inputs for GSFLOW. With some modifications, this process could be applied to any regular-grid hydrologic model. This Python toolkit automates many of the necessary and laborious processes of parameterization, including stream network development and cascade routing, land coverages, and meteorological distribution over the model domain.
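One elementary building block of stream-network development from a gridded DEM is the D8 flow-direction operation. The toolkit's actual implementation is not described in the abstract, so the following is only a hedged illustration of the idea in plain numpy; the direction codes follow the common power-of-two convention:

```python
import numpy as np

# D8 neighbour offsets and conventional power-of-two direction codes.
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
CODES = [64, 128, 1, 2, 4, 8, 16, 32]
DIST = [1.0, 2 ** 0.5, 1.0, 2 ** 0.5, 1.0, 2 ** 0.5, 1.0, 2 ** 0.5]

def d8_flow_direction(dem):
    """Return a D8 flow-direction code for each interior cell of a DEM:
    each cell drains to the neighbour with the steepest downhill slope;
    code 0 marks pits with no downhill neighbour."""
    nrow, ncol = dem.shape
    fdir = np.zeros((nrow, ncol), dtype=np.int32)
    for i in range(1, nrow - 1):
        for j in range(1, ncol - 1):
            best_slope, best_code = 0.0, 0
            for (di, dj), code, dist in zip(OFFSETS, CODES, DIST):
                slope = (dem[i, j] - dem[i + di, j + dj]) / dist
                if slope > best_slope:
                    best_slope, best_code = slope, code
            fdir[i, j] = best_code
    return fdir
```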
A high gain wide dynamic range transimpedance amplifier for optical receivers
NASA Astrophysics Data System (ADS)
Lianxi, Liu; Jiao, Zou; Yunfei, En; Shubin, Liu; Yue, Niu; Zhangming, Zhu; Yintang, Yang
2014-01-01
As the front-end preamplifiers in optical receivers, transimpedance amplifiers (TIAs) are commonly required to have high gain and low input noise to amplify the weak and susceptible input signal. At the same time, TIAs should possess a wide dynamic range (DR) to prevent the circuit from becoming saturated by high input currents. Accordingly, this paper presents a CMOS transimpedance amplifier with high gain and a wide DR for 2.5 Gbit/s communications. The proposed TIA consists of a three-stage cascaded push-pull inverter, an automatic gain control circuit, and a shunt transistor controlled by a resistive divider. The inductive series peaking technique is used to further extend the bandwidth. The proposed TIA displays a maximum transimpedance gain of 88.3 dBΩ with a -3 dB bandwidth of 1.8 GHz, and exhibits an input current dynamic range from 100 nA to 10 mA. The output voltage noise is less than 48.23 nV/√Hz within the -3 dB bandwidth. The circuit is fabricated using an SMIC 0.18 μm 1P6M RFCMOS process and dissipates a dc power of 9.4 mW with a 1.8 V supply voltage.
Monolithic piezoelectric sensor (MPS) for sensing chemical, biochemical and physical measurands
Andle, Jeffrey C.; Lec, Ryszard M.
2000-01-01
A piezoelectric sensor and assembly for measuring chemical, biochemical and physical measurands is disclosed. The piezoelectric sensor comprises a piezoelectric material, preferably a crystal; a common metal layer attached to the top surface of the piezoelectric crystal; and a pair of independent resonators placed in close proximity on the piezoelectric crystal such that an efficacious portion of acoustic energy couples between the resonators. The first independent resonator serves as an input port through which an input signal is converted into mechanical energy within the sensor, and the second independent resonator serves as an output port through which a filtered replica of the input signal is detected as an electrical signal. Both a time delay and an attenuation at a given frequency between the input signal and the filtered replica may be measured as a sensor output. The sensor may be integrated into an assembly with a series feedback oscillator and a radio frequency amplifier to process the desired sensor output. In the preferred embodiment of the invention, a selective film is disposed upon the grounded metal layer of the sensor and the resonators are encapsulated to isolate them from the measuring environment. In an alternative embodiment of the invention, more than two resonators are used in order to increase the resolution of the sensor.
Quantifying new water fractions and water age distributions using ensemble hydrograph separation
NASA Astrophysics Data System (ADS)
Kirchner, James
2017-04-01
Catchment transit times are important controls on contaminant transport, weathering rates, and runoff chemistry. Recent theoretical studies have shown that catchment transit time distributions are nonstationary, reflecting the temporal variability in precipitation forcing, the structural heterogeneity of catchments themselves, and the nonlinearity of the mechanisms controlling storage and transport in the subsurface. The challenge of empirically estimating these nonstationary transit time distributions in real-world catchments, however, has only begun to be explored. Long, high-frequency tracer time series are now becoming available, creating new opportunities to study how rainfall becomes streamflow on timescales of minutes to days following the onset of precipitation. Here I show that the conventional formula used for hydrograph separation can be converted into an equivalent linear regression equation that quantifies the fraction of current rainfall in streamflow across ensembles of precipitation events. These ensembles can be selected to represent different discharge ranges, different precipitation intensities, or different levels of antecedent moisture, thus quantifying how the fraction of "new water" in streamflow varies with forcings such as these. I further show how this approach can be generalized to empirically determine the contributions of precipitation inputs to streamflow across a range of time lags. In this way the short-term tail of the transit time distribution can be directly quantified for an ensemble of precipitation events. Benchmark testing with a simple, nonlinear, nonstationary catchment model demonstrates that this approach quantitatively measures the short tail of the transit time distribution for a wide range of catchment response characteristics. In combination with reactive tracer time series, this approach can potentially be extended to measure short-term chemical reaction rates at the catchment scale. High-frequency tracer time series from several experimental catchments will be used to demonstrate the utility of the new approach outlined here.
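The regression form of hydrograph separation described here can be made concrete in a few lines. The sketch below is a hedged, minimal reading of the idea (a single ensemble, no event weighting or volume corrections; all names illustrative): rearranging the two-component mass balance gives a regression through the origin whose slope is the ensemble-average new water fraction.

```python
import numpy as np

def new_water_fraction(c_stream, c_precip):
    """Ensemble 'new water' fraction from paired tracer time series.

    Rearranging two-component hydrograph separation gives, approximately,
        c_stream[t] - c_stream[t-1] = F_new * (c_precip[t] - c_stream[t-1]),
    so the slope F_new, fitted by least squares across an ensemble of
    events, estimates the average fraction of current precipitation in
    streamflow."""
    y = c_stream[1:] - c_stream[:-1]
    x = c_precip[1:] - c_stream[:-1]
    ok = np.isfinite(x) & np.isfinite(y)          # tracer records are often gappy
    return np.sum(x[ok] * y[ok]) / np.sum(x[ok] ** 2)
```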
Proceedings of the 6th annual Speakeasy conference. [Chicago, August 17-18, 1978
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-01-01
This meeting on the Speakeasy programming language and its applications included papers on the following subjects: graphics (graphics under Speakeasy, Speakeasy on a mini, color graphics), time series (OASIS - a user-oriented system at USDA, writing input-burdened linkules), applications (weather and crop yield analysis system, property investment analysis system), data bases under Speakeasy (relational data base, applications of relational data bases), survey analysis (survey analysis package from Liege, sic and its future under Speakeasy), and new features in Speakeasy (partial differential equations, the Speakeasy compiler and optimization). (RWR)
NASA Astrophysics Data System (ADS)
Katsumata, Hisatoshi; Konishi, Keiji; Hara, Naoyuki
2018-04-01
The present paper proposes a scheme for controlling wave segments in excitable media. This scheme consists of two phases: in the first phase, a simple mathematical model for wave segments is derived using only the time series data of input and output signals for the media; in the second phase, the model derived in the first phase is used in an advanced control technique. We demonstrate with numerical simulations of the Oregonator model that this scheme performs better than a conventional control scheme.
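The first phase, deriving a simple model from input and output time series alone, resembles classical system identification. A hedged stand-in for that step (not the paper's actual model class, which is not specified in the abstract) is a least-squares ARX fit:

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of a simple ARX model
        y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb]
    from recorded input u and output y."""
    u, y = np.asarray(u, float), np.asarray(y, float)
    n = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(n, len(y))]
    phi = np.asarray(rows)                         # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[n:], rcond=None)
    return theta[:na], theta[na:]                  # AR and input coefficients
```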
Magnetic confinement system using charged ammonia targets
Porter, Gary D.; Bogdanoff, Anatoly
1979-01-01
A system for guiding charged laser targets to a predetermined focal spot of a laser along generally arbitrary, and especially horizontal, directions, which comprises a series of electrostatic sensors that provide inputs to a computer for real-time calculation of the position, velocity, and direction of the target along an initial injection trajectory, and a set of electrostatic deflection means, energized according to a calculated output of said computer, to change the target trajectory to intercept the focal spot of the laser, which is triggered so as to illuminate the target at the focal spot.
Guidance system for laser targets
Porter, Gary D.; Bogdanoff, Anatoly
1978-01-01
A system for guiding charged laser targets to a predetermined focal spot of a laser along generally arbitrary, and especially horizontal, directions, which comprises a series of electrostatic sensors that provide inputs to a computer for real-time calculation of the position, velocity, and direction of the target along an initial injection trajectory, and a set of electrostatic deflection means, energized according to a calculated output of said computer, to change the target trajectory to intercept the focal spot of the laser, which is triggered so as to illuminate the target at the focal spot.
Bernal, S; Belillas, C; Ibáñez, J J; Àvila, A
2013-08-01
The aim of this study was to gain insights into the potential hydrological and biogeochemical mechanisms controlling the response of two nested Mediterranean catchments to long-term changes in atmospheric inorganic nitrogen and sulphate deposition. One catchment was steep and fully forested (TM9, 5.9 ha); the other had gentler slopes and heathlands in the upper part, while its side slopes were steep and forested (TM0, 205 ha). Both catchments were highly responsive to the 45% decline in sulphate concentration measured in atmospheric deposition during the 1980s and 1990s, with stream concentrations decreasing by 1.4 to 3.4 μeq L(-1) y(-1). Long-term changes in inorganic nitrogen in both atmospheric deposition and stream water were small compared to sulphate. The quick response to changes in atmospheric inputs could be explained by the small residence time of water (4-5 months) in these catchments (inferred from chloride time series variance analysis), which was attributed to steep slopes and the role of macropore flow bypassing the soil matrix during wet periods. The estimated residence time for sulphate (1.5-3 months) was substantially lower than for chloride, suggesting unaccounted sources of sulphate (i.e., dry deposition, or depletion of soil-adsorbed sulphate). In both catchments, inorganic nitrogen concentration in stream water was strongly damped compared to precipitation, and its residence time was of the order of decades, indicating that this essential nutrient was strongly retained in these catchments. Inorganic nitrogen concentration tended to be higher at TM0 than at TM9, which was attributed to the presence of nitrogen-fixing species in the heathlands. Our results indicate that these Mediterranean catchments react rapidly to environmental changes, which makes them especially vulnerable to changes in atmospheric deposition.
NASA Astrophysics Data System (ADS)
Zhang, X.; Liu, L.; Yan, D.; Moon, M.; Liu, Y.; Henebry, G. M.; Friedl, M. A.; Schaaf, C.
2017-12-01
Land surface phenology (LSP) datasets have been produced from a variety of coarse spatial resolution satellite observations at both regional and global scales, spanning different time periods since 1982. However, the LSP product generated from NASA's MODerate Resolution Imaging Spectroradiometer (MODIS) data at a spatial resolution of 500 m, termed Land Cover Dynamics (MCD12Q2), is the only global product operationally produced and freely accessible at annual time steps from 2001. Because the MODIS instrument is aging and will be replaced by the Visible Infrared Imaging Radiometer Suite (VIIRS), this research focuses on the generation and evaluation of a global LSP product from Suomi-NPP VIIRS time series observations that provides continuity with the MCD12Q2 product. Specifically, we generate 500 m VIIRS global LSP data using daily VIIRS Nadir BRDF (bidirectional reflectance distribution function)-Adjusted Reflectances (NBAR) in combination with land surface temperature, snow cover, and land cover type as inputs. The product provides twelve phenological metrics (seven phenological dates and five phenological greenness magnitudes), along with six quality metrics characterizing the confidence and quality associated with phenology retrievals at each pixel. In this paper, we describe the input data and algorithms used to produce this new product, and investigate the impact of VIIRS data time series quality on phenology detections across various climate regimes and ecosystems. As part of our analysis, the VIIRS LSP is evaluated using PhenoCam imagery in North America and Asia, and using higher spatial resolution satellite observations from Landsat 8 over an agricultural area in the central USA. We also explore the impact of high-frequency cloud cover on the VIIRS LSP product by comparing with phenology detected from the Advanced Himawari Imager (AHI) onboard Himawari-8. AHI is a new geostationary sensor that observes the land surface every 10 minutes, which increases the ability to capture cloud-free observations relative to data collected from polar-orbiting satellites such as Suomi-NPP, thereby improving the quality of daily time series data in regions with heavy cloud cover. Finally, the VIIRS LSP is compared with MCD12Q2 data to investigate the continuity of long-term global LSP data records.
Software system for data management and distributed processing of multichannel biomedical signals.
Franaszczuk, P J; Jouny, C C
2004-01-01
The presented software is designed for efficient utilization of a cluster of PC computers for signal analysis of multichannel physiological data. The system consists of three main components: 1) a library of input and output procedures, 2) a database storing additional information about location in a storage system, and 3) a user interface for selecting data for analysis, choosing programs for analysis, and distributing computing and output data on cluster nodes. The system allows for processing multichannel time series data in multiple binary formats. Descriptions of the data format, channels and time of recording are included in separate text files. Definition and selection of multiple channel montages is possible. Epochs for analysis can be selected both manually and automatically. Implementation of new signal processing procedures is possible with minimal programming overhead for the input/output processing and user interface. The number of cluster nodes used for computation and the amount of storage can be changed with no major modification to the software. Current implementations include the time-frequency analysis of multiday, multichannel recordings of intracranial EEG of epileptic patients as well as evoked response analyses of repeated cognitive tasks.
Metronome LKM: An open source virtual keyboard driver to measure experiment software latencies.
Garaizar, Pablo; Vadillo, Miguel A
2017-10-01
Experiment software is often used to measure reaction times gathered with keyboards or other input devices. In previous studies, the accuracy and precision of time stamps have been assessed through several means: (a) generating accurate square-wave signals from an external device connected to the parallel port of the computer running the experiment software, (b) triggering the typematic repeat feature of some keyboards to get an evenly separated series of keypress events, or (c) using a solenoid handled by a microcontroller to press the input device (keyboard, mouse button, touch screen) that will be used in the experimental setup. Despite the advantages of these approaches in some contexts, none of them can isolate the measurement error caused by the experiment software itself. Metronome LKM provides a virtual keyboard with which to assess experiment software. Using this open source driver, researchers can generate keypress events using high-resolution timers and compare the time stamps collected by the experiment software with those gathered by Metronome LKM (with nanosecond resolution). Our software is highly configurable (in terms of keys pressed, intervals, SysRq activation) and runs on Linux kernels 2.6 through 4.8.
NASA Astrophysics Data System (ADS)
Dubovyk, Olena; Landmann, Tobias; Erasmus, Barend F. N.; Tewes, Andreas; Schellberg, Jürgen
2015-06-01
Currently there is a lack of knowledge on spatio-temporal patterns of land surface dynamics at medium spatial scale in southern Africa, even though this information is essential for a better understanding of ecosystem response to climatic variability and human-induced land transformations. In this study, we analysed vegetation dynamics across a large area in southern Africa using 14 years (2000-2013) of medium spatial resolution (250 m) MODIS-EVI time-series data. Specifically, we investigated temporal changes in the time series of key phenometrics, including overall greenness, peak greenness, and timing of annual greenness, over the monitoring period and study region. In order to capture spatial and per-pixel vegetation changes over time, we calculated trends in these phenometrics using a robust trend analysis method. The results showed that interannual vegetation dynamics followed precipitation patterns with clearly differentiated seasonality. The earliest peak greenness during 2000-2013 occurred at the end of January 2000, and the latest peak greenness was observed in mid-March 2012. Spatial patterns of long-term vegetation trends allowed mapping areas of (i) decrease or increase in overall greenness, (ii) decrease or increase of peak greenness, and (iii) shifts in timing of occurrence of peak greenness over the 14-year monitoring period. The observed vegetation decline in the study area was mainly attributed to human-induced factors. The obtained information is useful to guide selection of field sites for detailed vegetation studies and land rehabilitation interventions, and serves as an input for a range of land surface models.
TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series
NASA Astrophysics Data System (ADS)
Czerwinski, Fabian; Oddershede, Lene B.
2011-02-01
With modern data acquisition devices that are fast and very precise, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series at MHz sampling rates. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: for a photodiode detection system that tracks the position of an optically trapped particle, and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile, as the input can be any type of analog signal. Also, the data streaming software is simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification.
Program summary
Program title: TimeSeriesStreaming.VI
Catalogue identifier: AEHT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 250
No. of bytes in distributed program, including test data, etc.: 63 259
Distribution format: tar.gz
Programming language: LabVIEW (http://www.ni.com/labview/)
Computer: Any machine running LabVIEW 8.6 or higher
Operating system: Windows XP and Windows 7
RAM: 60-360 Mbyte
Classification: 3
Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled with high frequencies and possibly for long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods.
Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken with high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses.
Restrictions: Only tested in Windows-operating LabVIEW environments, must use TDMS format, acquisition cards must be LabVIEW compatible, driver DAQmx installed.
Running time: As desirable: microseconds to hours
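The LabVIEW block diagram itself cannot be reproduced here, but the underlying producer/consumer pattern (acquisition never blocks on slow disk I/O) can be sketched in Python under stated assumptions; `acquire_chunk` is a hypothetical stand-in for the DAQ read call:

```python
import queue
import threading
import numpy as np

def stream_to_disk(acquire_chunk, path, n_chunks, chunk_len=1_000_000):
    """Toy analogue of a producer/consumer streaming loop: one thread
    acquires fixed-size chunks while the main thread appends them to disk,
    so I/O latency never stalls acquisition."""
    q: "queue.Queue" = queue.Queue(maxsize=64)

    def producer():
        for _ in range(n_chunks):
            q.put(acquire_chunk(chunk_len))   # blocks only if disk falls far behind
        q.put(None)                           # sentinel: acquisition finished

    threading.Thread(target=producer, daemon=True).start()
    with open(path, "wb") as f:
        while (chunk := q.get()) is not None:
            np.asarray(chunk, dtype=np.float64).tofile(f)
```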
NASA Astrophysics Data System (ADS)
Saadi, Sameh; Simonneaux, Vincent; Boulet, Gilles; Mougenot, Bernard; Zribi, Mehrez; Lili Chabaane, Zohra
2015-04-01
Water scarcity is one of the main factors limiting agricultural development in semi-arid areas. It is thus of major importance to design tools allowing better management of this resource. Remote sensing has long been used for computing evapotranspiration estimates, which are an input for crop water balance monitoring. Up to now, only medium and low resolution data (e.g. MODIS) have been available on a regular basis to monitor cultivated areas. However, the increasing availability of high-resolution, high-repetitivity VIS-NIR remote sensing, like the forthcoming Sentinel-2 mission to be launched in 2015, offers an unprecedented opportunity to improve this monitoring. In this study, regional crop water consumption was estimated with the SAMIR software (Satellite Monitoring of Irrigation) using the FAO-56 dual crop coefficient water balance model fed with high-resolution NDVI image time series providing estimates of both the actual basal crop coefficient (Kcb) and the vegetation fraction cover. The model includes a soil water model, requiring knowledge of the soil water holding capacity, maximum rooting depth, and water inputs. As irrigations are usually not known over large areas, they are simulated based on rules reproducing the farmers' practices. The main objective of this work is to assess the operationality and accuracy of SAMIR at plot and perimeter scales, when several land use types (winter cereals, summer vegetables…), irrigation and agricultural practices are intertwined in a given landscape, including complex canopies such as sparse orchards. Meteorological ground stations were used to compute the reference evapotranspiration and obtain the rainfall depths. Two time series of ten and fourteen high-resolution SPOT5 images were acquired for the 2008-2009 and 2012-2013 hydrological years over an irrigated area in central Tunisia. They span the various successive crop seasons. The images were radiometrically corrected, first using the SMAC6s algorithm, and second using invariant objects located in the scene, identified by visual inspection of the images. From these time series, a Normalized Difference Vegetation Index (NDVI) profile was generated for each pixel. SAMIR was first calibrated based on ground measurements of evapotranspiration obtained using eddy-correlation devices installed on irrigated wheat and barley plots. After calibration, the model was run to spatialize irrigation over the whole area, and a validation was done using cumulated seasonal water volumes obtained from ground surveys at both plot and perimeter scales. The results show that although determination of model parameters was successful at plot scale, irrigation rules required an additional calibration, which was achieved at perimeter scale.
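At its core, the FAO-56 dual crop coefficient approach evaluates ET = (Ks*Kcb + Ke)*ET0 per pixel and per day. The sketch below states that relation together with a linear NDVI-to-Kcb mapping of the kind typically used with SAMIR-type models; the coefficient values are illustrative assumptions, not the study's calibrated parameters:

```python
def kcb_from_ndvi(ndvi, a=1.44, b=-0.1):
    """Linear NDVI-to-basal-crop-coefficient relation; a and b are
    illustrative values, not the study's calibrated coefficients."""
    return max(0.0, a * ndvi + b)

def daily_et(kcb, ke, et0, ks=1.0):
    """FAO-56 dual crop coefficient evapotranspiration for one day:
    transpiration (Ks*Kcb, reduced under water stress when Ks < 1)
    plus soil evaporation (Ke), both scaled by reference ET."""
    return (ks * kcb + ke) * et0
```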
NASA Astrophysics Data System (ADS)
Stackhouse, P. W., Jr.; Cox, S. J.; Mikovitz, J. C.; Zhang, T.; Gupta, S. K.
2016-12-01
The NASA/GEWEX Surface Radiation Budget (SRB) project produces, validates and analyzes shortwave and longwave surface and top-of-atmosphere radiative fluxes for the 1983-near present time period. The current release 3.0/3.1 consists of 1x1 degree radiative fluxes (available at gewex-srb.larc.nasa.gov) and is produced using the International Satellite Cloud Climatology Project (ISCCP) DX product for pixel-level radiance and cloud information. This ISCCP DX product is subsampled to 30 km. ISCCP is currently recalibrating and reprocessing its entire data series, to be released as the H product series, with a highest pixel resolution of 10 km. The nine-fold increase in the number of pixels will allow SRB to produce a higher resolution gridded product (e.g. 0.5 degree or finer), as well as pixel-level fluxes. Other key input improvements include a detailed aerosol history using the Max Planck Institute Aerosol Climatology (MAC), temperature and moisture profiles from HIRS, and new topography, surface type, and snow/ice maps. Here we present results for the improved GEWEX Shortwave and Longwave algorithms (GSW and GLW) with new ISCCP data (for at least 5 years, 2005-2009), various other improved input data sets and the incorporation of many additional internal SRB model improvements. We assess the radiative fluxes from the new SRB products and contrast these at various resolutions. All these fluxes are compared both to surface measurements and to CERES SYN1Deg and EBAF data products to assess the effect of the improvements. The SRB data produced will be released as part of the Release 4.0 Integrated Product that shares key input and output quantities with other GEWEX global products providing estimates of the Earth's global water and energy cycle (i.e., ISCCP, SeaFlux, LandFlux, NVAP, etc.).
NASA Astrophysics Data System (ADS)
Powley, Helen R.; Krom, Michael D.; Van Cappellen, Philippe
2018-03-01
Human activities have significantly modified the inputs of land-derived phosphorus (P) and nitrogen (N) to the Mediterranean Sea (MS). Here, we reconstruct the external inputs of reactive P and N to the Western Mediterranean Sea (WMS) and Eastern Mediterranean Sea (EMS) over the period 1950-2030. We estimate that during this period the land-derived P and N loads increased by factors of 3 and 2 to the WMS and EMS, respectively, with reactive P inputs peaking in the 1980s but reactive N inputs increasing continuously from 1950 to 2030. The temporal variations in reactive P and N inputs are imposed on a coupled P and N mass balance model of the MS to simulate the accompanying changes in water column nutrient distributions and primary production with time. The key question we address is whether these changes are large enough to be distinguishable from variations caused by confounding factors, specifically the relatively large inter-annual variability in the thermohaline circulation (THC) of the MS. Our analysis indicates that for the intermediate and deep water masses of the MS, the magnitudes of changes in reactive P concentrations due to changes in anthropogenic inputs are relatively small and likely difficult to diagnose because of the noise created by the natural circulation variability. Anthropogenic N enrichment should be more readily detectable in time series concentration data for dissolved organic N (DON) after the 1970s, and for nitrate (NO3) after the 1990s. The DON concentrations in the EMS are predicted to exhibit the largest anthropogenic enrichment signature. Temporal variations in annual primary production over the 1950-2030 period are dominated by variations in deep-water formation rates, followed by changes in riverine P inputs for the WMS and atmospheric P deposition for the EMS. Overall, our analysis indicates that the detection of basin-wide anthropogenic nutrient concentration trends in the MS is rendered difficult by: (1) the Atlantic Ocean contributing the largest reactive P and N inputs to the MS, hence diluting the anthropogenic nutrient signatures; (2) the anti-estuarine circulation removing at least 45% of the anthropogenic nutrient inputs added to both basins of the MS between 1950 and 2030; and (3) variations in intermediate and deep water formation rates that add high natural noise to the P and N concentration trajectories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rembold, Randy Kai; Hart, Darren M.; Harris, James Mark
Sandia National Laboratories has tested, evaluated and reported on the Geotech Smart24 data acquisition system with active Fortezza crypto card data signing and authentication in SAND2008-. One test, Input Terminated Noise, allows us to characterize the self-noise of the Smart24 system. By computing the power spectral density (PSD) of the input terminated noise time series data set and correcting for the instrument response of different seismometers, the resulting spectrum can be compared to the USGS new low noise model (NLNM) of Peterson (1996), to determine the ability of the matched system of seismometer and Smart24 to be quiet enough for any general deployment location. Four seismometer models were evaluated: the Streckeisen STS2-Low and High Gain, Guralp CMG3T and Geotech GS13 models. Each has a unique pass-band, as defined by the frequency band of the instrument-corrected noise spectrum that falls below the new low-noise model.
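The processing chain described (PSD of the input-terminated record, response correction, comparison against the NLNM) can be sketched as follows; treating the response as a flat scalar is a simplifying assumption, since a real correction deconvolves the instrument's full poles-and-zeros response:

```python
import numpy as np
from scipy.signal import welch

def self_noise_psd(counts, fs, counts_to_volts, response_v_per_ms2):
    """PSD of an input-terminated noise record, crudely corrected to
    acceleration units so it can be plotted against Peterson's low-noise
    model.  counts_to_volts and response_v_per_ms2 are assumed scalar
    sensitivities, not a full transfer function."""
    volts = np.asarray(counts, float) * counts_to_volts
    f, pxx = welch(volts, fs=fs, nperseg=2**14)        # V**2 / Hz
    accel_psd = pxx / response_v_per_ms2**2            # (m/s**2)**2 / Hz
    return f, 10.0 * np.log10(accel_psd)               # dB, NLNM convention
```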
A computer program to trace seismic ray distribution in complex two-dimensional geological models
Yacoub, Nazieh K.; Scott, James H.
1970-01-01
A computer program has been developed to trace seismic rays and their amplitudes and energies through complex two-dimensional geological models, for which boundaries between elastic units are defined by a series of digitized X-, Y-coordinate values. Input data for the program include problem identification, control parameters, model coordinates and elastic parameters for the elastic units. The program evaluates the partitioning of ray amplitude and energy at elastic boundaries, and computes the total travel time, total travel distance and other parameters for rays arriving at the earth's surface. Instructions are given for punching program control cards and data cards, and for arranging input card decks. An example of printer output for a simple problem is presented. The program is written in the FORTRAN IV language. The listing of the program is shown in the Appendix, with an example output from a CDC-6600 computer.
NASA Astrophysics Data System (ADS)
Cox, Stephen J.; Stackhouse, Paul W.; Gupta, Shashi K.; Mikovitz, J. Colleen; Zhang, Taiping
2017-02-01
The NASA/GEWEX Surface Radiation Budget (SRB) project produces shortwave and longwave surface and top-of-atmosphere radiative fluxes for the 1983-near present time period. Spatial resolution is 1 degree. The current Release 3.0 (available at gewex-srb.larc.nasa.gov) uses the International Satellite Cloud Climatology Project (ISCCP) DX product for pixel-level radiance and cloud information. This product is subsampled to 30 km. ISCCP is currently recalibrating and recomputing its entire data series, to be released as the H product, at 10 km resolution. The ninefold increase in pixel number will allow SRB to produce a higher resolution gridded product (e.g. 0.5 degree), as well as pixel-level fluxes. Other key input improvements include a detailed aerosol history using the Max Planck Institute Aerosol Climatology (MAC), and temperature and moisture profiles from nnHIRS.
Orchestrating TRANSP Simulations for Interpretative and Predictive Tokamak Modeling with OMFIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grierson, B. A.; Yuan, X.; Gorelenkova, M.
TRANSP simulations are being used in the OMFIT workflow manager to enable a machine-independent means of experimental analysis, postdictive validation, and predictive time-dependent simulations on the DIII-D, NSTX, JET and C-MOD tokamaks. The procedures for preparing the input data from plasma profile diagnostics and equilibrium reconstruction, as well as processing of the time-dependent heating and current drive sources and assumptions about the neutral recycling, vary across machines, but are streamlined by using a common workflow manager. Settings for TRANSP simulation fidelity are incorporated into the OMFIT framework, contrasting between-shot analysis, power balance, and fast-particle simulations. A previously established series of data consistency metrics is computed, such as comparison of the experimental vs. calculated neutron rate, equilibrium stored energy vs. total stored energy from profile and fast-ion pressure, and experimental vs. computed surface loop voltage. Discrepancies between data consistency metrics can indicate errors in input quantities such as the electron density profile or Zeff, or indicate anomalous fast-particle transport. Measures to assess the sensitivity of the verification metrics to input quantities are provided by OMFIT, including scans of the input profiles and standardized post-processing visualizations. For predictive simulations, TRANSP uses GLF23 or TGLF to predict core plasma profiles, with user-defined boundary conditions in the outer region of the plasma. ITPA validation metrics are provided in post-processing to assess the transport model validity. By using OMFIT to orchestrate the steps for experimental data preparation, selection of operating mode, submission, post-processing and visualization, we have streamlined and standardized the usage of TRANSP.
Critical Fluctuations in Cortical Models Near Instability
Aburn, Matthew J.; Holmes, C. A.; Roberts, James A.; Boonstra, Tjeerd W.; Breakspear, Michael
2012-01-01
Computational studies often proceed from the premise that cortical dynamics operate in a linearly stable domain, where fluctuations dissipate quickly and show only short memory. Studies of human electroencephalography (EEG), however, have shown significant autocorrelation at time lags on the scale of minutes, indicating the need to consider regimes where non-linearities influence the dynamics. Statistical properties such as increased autocorrelation length, increased variance, power law scaling, and bistable switching have been suggested as generic indicators of the approach to bifurcation in non-linear dynamical systems. We study temporal fluctuations in a widely-employed computational model (the Jansen–Rit model) of cortical activity, examining the statistical signatures that accompany bifurcations. Approaching supercritical Hopf bifurcations through tuning of the background excitatory input, we find a dramatic increase in the autocorrelation length that depends sensitively on the direction in phase space of the input fluctuations and hence on which neuronal subpopulation is stochastically perturbed. Similar dependence on the input direction is found in the distribution of fluctuation size and duration, which show power law scaling that extends over four orders of magnitude at the Hopf bifurcation. We conjecture that the alignment in phase space between the input noise vector and the center manifold of the Hopf bifurcation is directly linked to these changes. These results are consistent with the possibility of statistical indicators of linear instability being detectable in real EEG time series. However, even in a simple cortical model, we find that these indicators may not necessarily be visible even when bifurcations are present because their expression can depend sensitively on the neuronal pathway of incoming fluctuations. PMID:22952464
An evaluation of Dynamic TOPMODEL for low flow simulation
NASA Astrophysics Data System (ADS)
Coxon, G.; Freer, J. E.; Quinn, N.; Woods, R. A.; Wagener, T.; Howden, N. J. K.
2015-12-01
Hydrological models are essential tools for drought risk management, often providing input to water resource system models, aiding our understanding of low flow processes within catchments and providing low flow predictions. However, simulating low flows and droughts is challenging, as hydrological systems often demonstrate threshold effects in connectivity, non-linear groundwater contributions and a greater influence of water resource system elements during low flow periods. These dynamic processes are typically not well represented in commonly used hydrological models due to data and model limitations. Furthermore, calibrated or behavioural models may not be effectively evaluated during more extreme drought periods. A better understanding of the processes that occur during low flows, and of how these are represented within models, is thus required if we want to be able to provide robust and reliable predictions of future drought events. In this study, we assess the performance of Dynamic TOPMODEL for low flow simulation. Dynamic TOPMODEL was applied to a number of UK catchments in the Thames region using time series of observed rainfall and potential evapotranspiration data that captured multiple historic droughts over a period of several years. The model performance was assessed against the observed discharge time series using a limits-of-acceptability framework, which included uncertainty in the discharge time series. We evaluate the models against multiple signatures of catchment low-flow behaviour and investigate differences in model performance between catchments, model diagnostics and different low flow periods. We also considered the impact of surface water and groundwater abstractions and discharges on the observed discharge time series and how this affected the model evaluation. Based on this analysis of model performance, we suggest future improvements to Dynamic TOPMODEL that would better represent low flow processes within the model structure.
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva
2014-08-01
The purpose of this study was to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation, and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and the CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI.
Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva
2013-01-01
Purpose: Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. The aim was to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods: We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results: The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC, was 0.77 and 0.72 respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion: The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175
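The abstracts do not give the exact LDS formulation, so the following is only a hedged simplification of the idea: fit low-order linear dynamics to each voxel's enhancement curve by least squares and use the fitted coefficients as features on which tumour-like dynamics can be thresholded or clustered. All names are illustrative:

```python
import numpy as np

def dynamics_features(dce, order=2):
    """Per-voxel dynamics features for a DCE-MRI series of shape (T, X, Y):
    least-squares AR coefficients of each voxel's enhancement curve, a
    simplified stand-in for a full linear-dynamic-system fit."""
    nt, nx, ny = dce.shape
    curves = dce.reshape(nt, -1)
    feats = np.zeros((order, nx * ny))
    for v in range(nx * ny):
        y = curves[:, v]
        # Regressors: y[t-1], ..., y[t-order] predicting y[t].
        phi = np.column_stack([y[order - k - 1: nt - k - 1] for k in range(order)])
        coef, *_ = np.linalg.lstsq(phi, y[order:], rcond=None)
        feats[:, v] = coef
    return feats.reshape(order, nx, ny)
```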
Homeostatic plasticity for single node delay-coupled reservoir computing.
Toutounji, Hazem; Schumacher, Johannes; Pipa, Gordon
2015-06-01
Supplementing a differential equation with delays results in an infinite-dimensional dynamical system. This property provides the basis for a reservoir computing architecture, where the recurrent neural network is replaced by a single nonlinear node, delay-coupled to itself. Instead of the spatial topology of a network, subunits in the delay-coupled reservoir are multiplexed in time along one delay span of the system. The computational power of the reservoir is contingent on this temporal multiplexing. Here, we learn optimal temporal multiplexing by means of a biologically inspired homeostatic plasticity mechanism. Plasticity acts locally and changes the distances between the subunits along the delay, depending on how responsive these subunits are to the input. After analytically deriving the learning mechanism, we illustrate its role in improving the reservoir's computational power. To this end, we investigate, first, the increase of the reservoir's memory capacity. Second, we predict a NARMA-10 time series, showing that plasticity reduces the normalized root-mean-square error by more than 20%. Third, we discuss plasticity's influence on the reservoir's input-information capacity, the coupling strength between subunits, and the distribution of the readout coefficients.
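For reference, the NARMA-10 benchmark mentioned above is a standard tenth-order nonlinear system; a common form (the convention assumed here, since the paper's exact one is not quoted) is generated as follows, and forecasting skill is then scored by the normalized root-mean-square error sqrt(mean((y_hat - y)^2) / var(y)):

```python
import numpy as np

def narma10(n, seed=0):
    """Generate the NARMA-10 benchmark: a tenth-order nonlinear
    autoregressive moving-average system driven by i.i.d. input
    u ~ U[0, 0.5]."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, n)
    y = np.zeros(n)
    for t in range(9, n - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y
```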
Foreign currency rate forecasting using neural networks
NASA Astrophysics Data System (ADS)
Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad
2000-03-01
Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates between the USD, GBP and DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecasted solely using past exchange rates. This relies on the belief that past prices and future prices are closely related and interdependent. We present the result of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.
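A minimal sketch of the differenced-input setup the paper favours, using scikit-learn's MLP in place of the paper's (unspecified) network architecture; the lag count, network size and train/test split are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def forecast_rate_differences(rates, n_lags=5):
    """One-step-ahead forecast of daily exchange-rate differences from
    lagged differences (rate differences, not raw levels, as inputs)."""
    d = np.diff(np.asarray(rates, float))
    x = np.column_stack([d[i: len(d) - n_lags + i] for i in range(n_lags)])
    y = d[n_lags:]
    split = int(0.8 * len(y))                 # chronological train/test split
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(x[:split], y[:split])
    return net.predict(x[split:]), y[split:]
```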
NASA Astrophysics Data System (ADS)
Mulligan, Robert F.
2014-06-01
This paper presents Hurst exponent signatures from time series of aggregate price indices for the US over the 1975-2011 time period. Though all highly aggregated, these indices include both broad measures of consumer and producer prices. The constellation of prices evolves as a complex system throughout processes of production and distribution, culminating in the final delivery of output to consumers. Massive feedback characterizes this system, where the demand for consumable output determines the demand for the inputs used to produce it, and supply scarcities for the necessary inputs in turn determine the supply of the final product. Prices in both factor and output markets are jointly determined by interdependent supply and demand conditions. Fractal examination of the interplay among market prices would be of interest regardless, but added interest arises from the consideration of how these markets respond to external shocks over the business cycle, particularly monetary expansion. Because the initial impact of monetary injection is localized in specific sectors, the way the impact on prices diffuses throughout the economy is of special interest.
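As a pointer to how such signatures are computed, a hedged rescaled-range (R/S) estimator is sketched below; this is the classical textbook method, not necessarily the exact estimator used in the paper. Values of H above 0.5 indicate persistence, below 0.5 anti-persistence:

```python
import numpy as np

def hurst_rs(x, min_window=16):
    """Rescaled-range estimate of the Hurst exponent: the slope of
    log(R/S) against log(window size) over dyadic window sizes."""
    x = np.asarray(x, float)
    n = len(x)
    sizes, rs = [], []
    size = min_window
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            dev = np.cumsum(w - w.mean())      # cumulative deviation from mean
            r, s = dev.max() - dev.min(), w.std()
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope
```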
River flow simulation using a multilayer perceptron-firefly algorithm model
NASA Astrophysics Data System (ADS)
Darbandi, Sabereh; Pourhosseini, Fatemeh Akhoni
2018-06-01
River flow estimation from records of past observations is important in water resources engineering and management and is required in hydrologic studies. In the past two decades, approaches based on artificial neural networks (ANN) have been developed. River flow is a highly non-linear process, and model performance is strongly affected by the choice of inputs. In this study, the best input combination was identified using the Gamma test; then an MLP-ANN and a hybrid multilayer perceptron-firefly algorithm (MLP-FFA) model were used to forecast monthly river flow for a set of time intervals using observed data. Measurements from three gauges in the Ajichay watershed, East Azerbaijan, were used to train and test the models for the period from January 2004 to July 2016. Calibration and validation were performed within the same period for the MLP-ANN and MLP-FFA models after preparation of the required data. Two statistics, the root mean square error and the coefficient of determination, were used to compare the outputs of the MLP-ANN and MLP-FFA models. The results show that the MLP-FFA model is satisfactory for monthly river flow simulation in the study area.
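For concreteness, the two verification statistics reduce to a few lines; these are just the standard definitions, with illustrative function names:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

def r_squared(obs, sim):
    """Coefficient of determination of simulated against observed flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```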
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Borak, Jordan S.
2008-01-01
Many earth science modeling applications employ continuous input data fields derived from satellite data. Environmental factors, sensor limitations and algorithmic constraints lead to data products of inherently variable quality. This necessitates interpolation of one form or another in order to produce high quality input fields free of missing data. The present research tests several interpolation techniques as applied to satellite-derived leaf area index, an important quantity in many global climate and ecological models. The study evaluates and applies a variety of interpolation techniques for the Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf-Area Index Product over the time period 2001-2006 for a region containing the conterminous United States. Results indicate that the accuracy of an individual interpolation technique depends upon the underlying land cover. Spatial interpolation provides better results in forested areas, while temporal interpolation performs more effectively over non-forest cover types. Combination of spatial and temporal approaches offers superior interpolative capabilities to any single method, and in fact, generation of continuous data fields requires a hybrid approach such as this.
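As a hedged illustration of the temporal branch of such a hybrid scheme (the spatial branch, and the study's exact interpolators, are not specified here), one pixel's gappy LAI series can be filled by linear interpolation in time:

```python
import numpy as np

def fill_temporal(series):
    """Linearly interpolate gaps (NaNs) in one pixel's LAI time series.
    A spatial fallback, e.g. averaging valid neighbours of the same land
    cover, would be needed for pixels whose whole series is missing."""
    s = np.asarray(series, dtype=float).copy()
    ok = np.isfinite(s)
    if ok.sum() < 2:
        return s                   # nothing to anchor an interpolation on
    idx = np.arange(len(s))
    s[~ok] = np.interp(idx[~ok], idx[ok], s[ok])
    return s
```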
Systematic comparisons between PRISM version 1.0.0, BAP, and CSMIP ground-motion processing
Kalkan, Erol; Stephens, Christopher
2017-02-23
A series of benchmark tests was run comparing results of the Processing and Review Interface for Strong Motion data (PRISM) software version 1.0.0 to the Basic Strong-Motion Accelerogram Processing Software (BAP; Converse and Brady, 1992) and to California Strong Motion Instrumentation Program (CSMIP) processing (Shakal and others, 2003, 2004). These tests were performed using the MATLAB implementation of PRISM, which is equivalent to its public release version in the Java language. Systematic comparisons were made in the time and frequency domains between records processed in PRISM, BAP, and CSMIP, using a set of representative input motions with varying resolutions, frequency content, and amplitudes. Although the details of strong-motion records vary among the processing procedures, there are only minor differences among the waveforms for each component and within the frequency passband common to these procedures. A comprehensive statistical evaluation considering more than 1,800 ground-motion components demonstrates that differences in peak amplitudes of acceleration, velocity, and displacement time series obtained from PRISM and CSMIP processing are equal to or less than 4 percent for 99 percent of the data, and equal to or less than 2 percent for 96 percent of the data. Other statistical measures, including the Euclidean distance (L2 norm) and the windowed root mean square level of processed time series, also indicate that both processing schemes produce statistically similar products.
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
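To make the nested-iteration behaviour concrete, here is a hedged Python analogue of what Iterator::Hash is described as doing (yielding every permutation of per-key value sequences); this is an analogy for illustration, not the Perl module's API:

```python
from itertools import product

def hash_permutations(spec):
    """Given a dict whose values are iterables, yield one dict per element
    of the Cartesian product of the value sequences, i.e. nested iteration
    over all embedded iterators."""
    keys = list(spec)
    for combo in product(*(spec[k] for k in keys)):
        yield dict(zip(keys, combo))

# e.g. hash_permutations({"day": ["mon", "tue"], "hour": [9, 12]}) yields
# {'day': 'mon', 'hour': 9}, {'day': 'mon', 'hour': 12}, and so on.
```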
IDSP- INTERACTIVE DIGITAL SIGNAL PROCESSOR
NASA Technical Reports Server (NTRS)
Mish, W. H.
1994-01-01
The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on the various algorithms commonly used for digital signal analysis work. The processing of a digital time series to extract information is usually achieved by the application of a number of fairly standard operations. However, it is often desirable to "experiment" with various operations and combinations of operations to explore their effect on the results. IDSP is designed to provide an interactive and easy-to-use system for this type of digital time series analysis. The IDSP operators can be applied in any sensible order (even recursively), and can be applied to single time series or to simultaneous time series. IDSP is being used extensively to process data obtained from scientific instruments onboard spacecraft. It is also an excellent teaching tool for demonstrating the application of time series operators to artificially generated signals. IDSP currently includes over 43 standard operators. Processing operators provide for Fourier transformation operations, design and application of digital filters, and eigenvalue analysis. Additional support operators provide for data editing, display of information, graphical output, and batch operation. User-developed operators can be easily interfaced with the system to provide for expansion and experimentation. Each operator application generates one or more output files from an input file. The processing of a file can involve many operators in a complex application. IDSP maintains historical information as an integral part of each file so that the user can display the operator history of the file at any time during an interactive analysis. IDSP is written in VAX FORTRAN 77 for interactive or batch execution and has been implemented on a DEC VAX-11/780 operating under VMS. The IDSP system generates graphics output for a variety of graphics systems. The program requires the use of Versaplot and Template plotting routines and IMSL Math/Library routines. These software packages are not included in IDSP. The virtual memory requirement for the program is approximately 2.36 MB. The IDSP system was developed in 1982 and was last updated in 1986. Versaplot is a registered trademark of Versatec Inc. Template is a registered trademark of Template Graphics Software Inc. IMSL Math/Library is a registered trademark of IMSL Inc.
Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method
NASA Astrophysics Data System (ADS)
Nourani, Vahid; Mousavi, Shahram
2016-05-01
Uncertainties in the field parameters, noise in the observed data, and unknown boundary conditions are the main factors affecting groundwater level (GL) time series that limit the modeling and simulation of GL. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. First, the GL time series observed at different piezometers were de-noised using a threshold-based wavelet method, and the impact of de-noised versus noisy data on temporal GL modeling was compared for artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models. In the second step, both ANN and ANFIS models were calibrated and verified using the GL data of each piezometer, rainfall, and runoff, considering various input scenarios, to predict the GL one month ahead. In the final step, the simulated GLs from the second step were used as interior conditions for a multiquadric radial basis function (RBF) based solution of the governing partial differential equation of groundwater flow, to estimate the GL at any desired point within the plain where no observations are available. In order to evaluate and compare the GL pattern at different time scales, cross-wavelet coherence was also applied to the GL time series of the piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling by up to 13.4%. It was also found that the ANFIS-RBF model is more reliable than the ANN-RBF model in both calibration and validation steps.
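The threshold-based wavelet de-noising step might look like the following PyWavelets sketch; the db4 wavelet, decomposition level, and universal soft threshold are common defaults, not necessarily the study's settings:

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=3):
    """Threshold-based wavelet de-noising: decompose, soft-threshold the
    detail coefficients with the universal threshold, and reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # noise scale estimated from the finest detail level (MAD estimator)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# example: de-noise a noisy sine and report the residual error
t = np.linspace(0, 10, 1024)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.3 * np.random.default_rng(7).normal(size=1024)
print(np.std(wavelet_denoise(noisy) - clean))
```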
Contemporaneous disequilibrium of bio-optical properties in the Southern Ocean
NASA Astrophysics Data System (ADS)
Kahru, Mati; Lee, Zhongping; Mitchell, B. Greg
2017-03-01
Significant changes in satellite-detected net primary production (NPP, mg C m-2 d-1) were observed in the Southern Ocean during 2011-2016: an increase in the Pacific sector and a decrease in the Atlantic sector. While no clear physical forcing was identified, we hypothesize that the changes in NPP were associated with changes in the phytoplankton community and reflected in the concomitant bio-optical properties. Satellite algorithms for chlorophyll a concentration (Chl a, mg m-3) use a combination of estimates of the remote sensing reflectance Rrs(λ) that are statistically fitted to a global reference data set. In any particular region or point in space/time, the estimate produced by the global "mean" algorithm can deviate from the true value. The reflectance anomaly (RA) is intended to remove the first-order variability in Rrs(λ) associated with Chl a and reveal bio-optical properties that are due to the composition of phytoplankton and associated materials. Time series of RA showed variability at multiple scales, including the life span of the sensor, multiyear, and annual scales. Models of plankton functional types using estimated Chl a as input cannot be expected to correctly resolve regional and seasonal anomalies, due to biases in the Chl a estimate on which they are based. While a statistical model using RA(λ) time series can predict the time series of NPP with high accuracy (R2 = 0.82) in both the Pacific and Atlantic regions, the underlying mechanisms in terms of phytoplankton groups and the associated materials remain elusive.
NASA Astrophysics Data System (ADS)
Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim
2013-02-01
Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages, and hence residence times, of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination used and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and determine the individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of α-recoil lost 234Th and the initial (234U/238U) ratio of the source material. In order to directly compare calculated comminution ages produced by different research groups, standardisation of pre-treatment procedures, recoil loss factor estimation, and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
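Monte Carlo propagation of the input uncertainties can be sketched directly from the commonly used comminution-age relation, in which the (234U/238U) activity ratio relaxes toward the steady-state value 1 - f set by the recoil loss factor. All measurement values and uncertainties below are hypothetical, not the paper's data:

```python
import numpy as np

LAMBDA_234 = 2.826e-6  # 234U decay constant, 1/yr

def comminution_age(A_meas, A0, f):
    """Comminution age (yr) from measured and initial (234U/238U) activity
    ratios and recoil loss factor f (DePaolo-style formulation; a sketch,
    not the authors' exact implementation)."""
    return -np.log((A_meas - (1 - f)) / (A0 - (1 - f))) / LAMBDA_234

# Monte Carlo propagation of assumed, illustrative input uncertainties
rng = np.random.default_rng(0)
n = 100_000
A_meas = rng.normal(0.935, 0.003, n)   # hypothetical measured ratio
A0 = rng.normal(1.000, 0.005, n)       # assumed initial ratio
f = rng.normal(0.20, 0.03, n)          # assumed recoil loss factor
ages = comminution_age(A_meas, A0, f)
ages = ages[np.isfinite(ages)]          # drop unphysical draws (log of <= 0)
print(f"age = {ages.mean() / 1e3:.0f} +/- {ages.std() / 1e3:.0f} ka")
```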
Multi-valued logic gates based on ballistic transport in quantum point contacts.
Seo, M; Hong, C; Lee, S-Y; Choi, H K; Kim, N; Chung, Y; Umansky, V; Mahalu, D
2014-01-22
Multi-valued logic gates, which can handle quaternary numbers as inputs, are developed by exploiting the ballistic transport properties of quantum point contacts (QPCs) in series. The principle of a logic gate that finds the minimum of two quaternary number inputs is demonstrated. The device is scalable to allow multiple inputs, which makes it possible to find the minimum of multiple inputs in a single gate operation. The principle of a half-adder for quaternary number inputs is also demonstrated: first, an adder that sums two quaternary numbers and outputs the result; second, a device that expresses the adder's sum as two quaternary digits [Carry (first digit) and Sum (second digit)]. All the logic gates presented in this paper can in principle be extended to allow decimal number inputs with high-quality QPCs.
Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert
2016-08-01
Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments have provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods are dependent on several input parameters, such as the embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physical Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of selecting appropriate input parameters.
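For intuition, here is a univariate, single-channel simplification of multiscale entropy in Python; the study uses the multivariate estimator, and the tolerance and scales below are common defaults, not the optimized parameters the study proposes:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Univariate sample entropy with tolerance r * std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        return (d <= tol).sum() - len(emb)   # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11), m=2, r=0.15):
    """Coarse-grain x by non-overlapping averaging at each scale, then
    compute the sample entropy of each coarse-grained series."""
    out = []
    for s in scales:
        n = len(x) // s
        cg = np.asarray(x[:n * s]).reshape(n, s).mean(axis=1)
        out.append(sample_entropy(cg, m, r))
    return out

print(multiscale_entropy(np.random.default_rng(8).normal(size=600),
                         scales=range(1, 5)))
```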
Early Examples from the Integrated Multi-Satellite Retrievals for GPM (IMERG)
NASA Astrophysics Data System (ADS)
Huffman, George; Bolvin, David; Braithwaite, Daniel; Hsu, Kuolin; Joyce, Robert; Kidd, Christopher; Sorooshian, Soroosh; Xie, Pingping
2014-05-01
The U.S. GPM Science Team's Day-1 algorithm for computing combined precipitation estimates as part of GPM is the Integrated Multi-satellitE Retrievals for GPM (IMERG). The goal is to compute the best time series of (nearly) global precipitation from "all" precipitation-relevant satellites and global surface precipitation gauge analyses. IMERG is being developed as a unified U.S. algorithm drawing on strengths in the three contributing groups, whose previous work includes: 1) the TRMM Multi-satellite Precipitation Analysis (TMPA); 2) the CPC Morphing algorithm with Kalman Filtering (K-CMORPH); and 3) the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks using a Cloud Classification System (PERSIANN-CCS). We review the IMERG design and development, plans for testing, and current status. Some of the lessons learned in running and reprocessing the previous data sets include the importance of quality-controlling input data sets, strategies for coping with transitions in the various input data sets, and practical approaches to retrospective analysis of multiple output products (namely the real- and post-real-time data streams). IMERG output will be illustrated using early test data, including the variety of supporting fields, such as the merged-microwave and infrared estimates, and the precipitation type. We end by considering recent changes in input data specifications, the transition from TRMM-based calibration to GPM-based, and further "Day 2" development.
The series-elastic shock absorber: tendons attenuate muscle power during eccentric actions.
Roberts, Thomas J; Azizi, Emanuel
2010-08-01
Elastic tendons can act as muscle power amplifiers or energy-conserving springs during locomotion. We used an in situ muscle-tendon preparation to examine the mechanical function of tendons during lengthening contractions, when muscles absorb energy. Force, length, and power were measured in the lateral gastrocnemius muscle of wild turkeys. Sonomicrometry was used to measure muscle fascicle length independently from muscle-tendon unit (MTU) length, as measured by a muscle lever system (servomotor). A series of ramp stretches of varying velocities was applied to the MTU in fully activated muscles. Fascicle length changes were decoupled from length changes imposed on the MTU by the servomotor. Under most conditions, muscle fascicles shortened on average, while the MTU lengthened. Energy input to the MTU during the fastest lengthenings was -54.4 J/kg, while estimated work input to the muscle fascicles during this period was only -11.24 J/kg. This discrepancy indicates that energy was first absorbed by elastic elements, then released to do work on muscle fascicles after the lengthening phase of the contraction. The temporary storage of energy by elastic elements also resulted in a significant attenuation of power input to the muscle fascicles. At the fastest lengthening rates, peak instantaneous power input to the MTU reached -2,143.9 W/kg, while peak power input to the fascicles was only -557.6 W/kg. These results demonstrate that tendons may act as mechanical buffers by limiting peak muscle forces, lengthening rates, and power inputs during energy-absorbing contractions.
Designing a 25-kilowatt high frequency series resonant converter
NASA Technical Reports Server (NTRS)
Robson, R. R.
1984-01-01
The feasibility of processing 25 kW of power with a single, transistorized, 20 kHz, series resonant converter stage has been demonstrated by the successful design, development, fabrication, and testing of such a device. It employs four Westinghouse D7ST transistors in a full-bridge configuration and operates from a 250-to-350-Vdc input bus. The unit has an overall worst-case efficiency of 93.5% at its full rated output of 1000 V and 25 A dc. A solid-state dc input circuit breaker and output transient-current limiters are integrated into the design. Circuit details of the converter are presented along with test data.
Series Connected Buck-Boost Regulator
NASA Technical Reports Server (NTRS)
Birchenough, Arthur G. (Inventor)
2006-01-01
A Series Connected Buck-Boost Regulator (SCBBR) switches only a fraction of the input power, resulting in relatively high efficiency. The SCBBR has multiple operating modes, including buck, boost, and current-limiting modes, so that the output voltage of the SCBBR ranges from below the source voltage to above the source voltage.
Solubility of aerosol trace elements: Sources and deposition fluxes in the Canary Region
NASA Astrophysics Data System (ADS)
López-García, Patricia; Gelado-Caballero, María Dolores; Collado-Sánchez, Cayetano; Hernández-Brito, José Joaquín
2017-01-01
African dust inputs have important effects on the climate and marine biogeochemistry of the subtropical North Atlantic Ocean. The impact of dust inputs on oceanic carbon uptake and climate depends on total dust deposition fluxes as well as on the bioavailability of nutrients and metals in the dust. In this work, the solubility of trace metals (Fe, Al, Mn, Co and Cu) and ions (Ca, sulphate, nitrate and phosphate) has been estimated from the analysis of a long time series of 109 samples collected over a 3-year period in the Canary Islands. Solubility is primarily a function of aerosol origin, with higher solubility values corresponding to aerosols with more anthropogenic influence. Using the soluble fractions of trace elements measured in this work, atmospheric deposition fluxes of soluble metals and nutrients have been calculated. Inputs of dissolved nutrients (P, N and Fe) have been estimated for the mixed layer. Considering that P is the limiting factor when ratios of these elements are compared with phytoplankton requirements, an increase of 0.58 nM of P per year in the mixed layer (∼150 m depth) can be estimated, which can support an increase of 0.02 μg Chl a L-1 y-1. These atmospheric inputs of trace metals and nutrients appear to be significant relative to the concentrations reported in this region, especially during the summer months when the water column is more stratified and deep-water nutrient inputs are reduced.
Holcomb, Paul S.; Hoffpauir, Brian K.; Hoyson, Mitchell C.; Jackson, Dakota R.; Deerinck, Thomas J.; Marrs, Glenn S.; Dehoff, Marlin; Wu, Jonathan; Ellisman, Mark H.
2013-01-01
Hallmark features of neural circuit development include early exuberant innervation followed by competition and pruning to mature innervation topography. Several neural systems, including the neuromuscular junction and climbing fiber innervation of Purkinje cells, are models to study neural development in part because they establish a recognizable endpoint of monoinnervation of their targets and because the presynaptic terminals are large and easily monitored. We demonstrate here that calyx of Held (CH) innervation of its target, which forms a key element of auditory brainstem binaural circuitry, exhibits all of these characteristics. To investigate CH development, we made the first application of serial block-face scanning electron microscopy to neural development with fine temporal resolution and thereby accomplished the first time series for 3D ultrastructural analysis of neural circuit formation. This approach revealed a growth spurt of added apposed surface area (ASA) >200 μm2/d centered on a single age at postnatal day 3 in mice and an initial rapid phase of growth and competition that resolved to monoinnervation in two-thirds of cells within 3 d. This rapid growth occurred in parallel with an increase in action potential threshold, which may mediate selection of the strongest input as the winning competitor. ASAs of competing inputs were segregated on the cell body surface. These data suggest mechanisms to select “winning” inputs by regional reinforcement of postsynaptic membrane to mediate size and strength of competing synaptic inputs. PMID:23926251
A seasonal Bartlett-Lewis Rectangular Pulse model
NASA Astrophysics Data System (ADS)
Ritschel, Christoph; Agbéko Kpogo-Nuwoklo, Komlan; Rust, Henning; Ulbrich, Uwe; Névir, Peter
2016-04-01
Precipitation time series with a high temporal resolution are needed as input for several hydrological applications, e.g. river runoff or sewer system models. As adequate observational data sets are often not available, simulated precipitation series are used instead. Poisson-cluster models are commonly applied to generate these series, and it has been shown that this class of stochastic precipitation models reproduces important characteristics of observed rainfall well. For the gauge-based case study presented here, the Bartlett-Lewis rectangular pulse model (BLRPM) has been chosen. Because certain model parameters vary with season in a midlatitude moderate climate, due to the different rainfall mechanisms dominating in winter and summer, model parameters are typically estimated separately for individual seasons or individual months. Here, we suggest a simultaneous parameter estimation for the whole year under the assumption that the seasonal variation of parameters can be described with harmonic functions. We use an observational precipitation series from Berlin with a high temporal resolution to exemplify the approach. We estimate BLRPM parameters with and without this seasonal extension and compare the results in terms of model performance and robustness of the estimation.
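The harmonic description of a seasonally varying parameter can be sketched as follows; the monthly storm-arrival rates, the single-harmonic form, and the exponential link (which keeps the rate positive) are invented for illustration, not Berlin estimates:

```python
import numpy as np
from scipy.optimize import curve_fit

def harmonic(month, a0, a1, b1):
    """One-harmonic annual cycle for a BLRPM rate parameter; the exp(.)
    link keeps the fitted rate strictly positive."""
    w = 2.0 * np.pi * month / 12.0
    return np.exp(a0 + a1 * np.cos(w) + b1 * np.sin(w))

# hypothetical monthly estimates of the storm arrival rate lambda (1/h)
months = np.arange(1, 13)
lam_monthly = np.array([.021, .020, .023, .026, .031, .035,
                        .037, .034, .029, .026, .022, .020])
params, _ = curve_fit(harmonic, months, lam_monthly, p0=(-3.5, 0.1, 0.1))
print("annual cycle coefficients (a0, a1, b1):", params)
```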
NASA Astrophysics Data System (ADS)
Pereyra, Y.; Ma, L.; Sak, P. B.; Gaillardet, J.; Buss, H. L.; Brantley, S. L.
2015-12-01
Dust inputs play an important role in soil formation, especially for thick soils developed on tropical volcanic islands. In these regions, soils are highly depleted due to intensive chemical weathering, and mineral nutrients from dust are known to be important in sustaining soil fertility and productivity. Tropical volcanic soils are an ideal system in which to study the impacts of dust inputs on the ecosystem. Sr and U-series isotopes are excellent tracers for identifying sources of materials in an open system if the end-members have distinctive isotope signatures. These two isotope systems are particularly useful for tracing the origin of atmospheric inputs to soils and for determining rates and timescales of soil formation. This study analyzes major elemental concentrations and Sr and U-series isotope ratios in highly depleted soils on the tropical volcanic island of Basse-Terre in French Guadeloupe to determine atmospheric input sources and identify key soil formation processes. We focus on three soil profiles (8 to 12 m thick) from the Bras-David, Moustique Petit-Bourg, and Deshaies watersheds, and on the rivers adjacent to these sites. Results show a significant depletion of U, Sr, and major elements in the deep profile (12 to 4 m), attributed to rapid chemical weathering. The topsoil profiles (4 m to the surface) all show addition of elements such as Ca, Mg, U, and Sr due to atmospheric dust. More importantly, the topsoil profiles have Sr and U-series isotope compositions distinct from those of the deep soils. Sr and U-series isotope ratios of the topsoils and sequential extraction fractions confirm that the source of the dust is the Saharan desert, via long-distance transport from Africa to the Caribbean region across the Atlantic Ocean. During transport, some dust isotope signatures may also have been modified by local volcanic ashes and marine aerosols. Our study highlights that dust and marine aerosols play important roles in element cycles and nutrient sources in the highly depleted surface soils of tropical oceanic islands.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Azizan; Lasternas, Bertrand; Alschuler, Elena
The American Recovery and Reinvestment Act stimulus funding of 2009 for smart grid projects resulted in the tripling of smart meter deployment. In 2012, the Green Button initiative provided utility customers with access to their real-time energy usage. The availability of finely granular data provides an enormous potential for energy data analytics and energy benchmarking. The sheer volume of time-series utility data from a large number of buildings also poses challenges in data collection, quality control, and database management for rigorous and meaningful analyses. In this paper, we describe a building portfolio-level data analytics tool for operational optimization, business investment, and policy assessment using utility data at 15-minute to monthly intervals. The analytics tool is developed on top of the U.S. Department of Energy's Standard Energy Efficiency Data (SEED) platform, an open source software application that manages energy performance data of large groups of buildings. To support the significantly large volume of granular interval data, we integrated a parallel time-series database with the existing relational database. The time-series database improves on the current utility data input, focusing on real-time data collection, storage, analytics, and data quality control. The fully integrated data platform supports APIs for utility app development by third-party software developers. These apps will provide actionable intelligence for building owners and facilities managers. Unlike a commercial system, this platform is an open source platform funded by the U.S. Government, accessible to the public, researchers, and other developers, to support initiatives in reducing building energy consumption.
Nowcasting influenza outbreaks using open-source media reports.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Brownstein, John S.
We construct and verify a statistical method to nowcast influenza activity from a time series of the frequency of reports concerning influenza-related topics. Such reports are published electronically by both public health organizations and newspapers/media sources, and thus can be harvested easily via web crawlers. Since media reports are timely, whereas reports from public health organizations are delayed by at least two weeks, using timely, open-source data to compensate for the lag in "official" reports can be useful. We use morbidity data from networks of sentinel physicians (both the Centers for Disease Control's ILINet and France's Sentinelles network) as the gold standard of influenza-like illness (ILI) activity. The time series of media reports is obtained from HealthMap (http://healthmap.org). We find that the time series of media reports shows some correlation (about 0.5) with ILI activity; further, this can be leveraged into an autoregressive moving average model with exogenous inputs (ARMAX model) to nowcast ILI activity. We find that the ARMAX models have more predictive skill compared to autoregressive (AR) models fitted to ILI data, i.e., it is possible to exploit the information content in the open-source data. We also find that when the open-source data are non-informative, the ARMAX models reproduce the performance of AR models. The statistical models are tested on data from the 2009 swine-flu outbreak as well as the mild 2011-2012 influenza season in the U.S.A.
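An ARMAX nowcast of this kind can be sketched with statsmodels; the toy series, model orders, and forecast horizon below are assumptions for illustration, not the fitted models from the study:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# toy stand-ins: weekly ILI activity and influenza-related media counts
rng = np.random.default_rng(1)
media = rng.poisson(20, 156).astype(float)
ili = 0.05 * media + rng.normal(0, 0.4, 156) + 2.0

# ARMAX: AR and MA terms for ILI, media counts as the exogenous input
model = SARIMAX(ili[:-4], exog=media[:-4], order=(2, 0, 1))
fit = model.fit(disp=False)
# nowcast the last 4 weeks using the timely media counts alone
nowcast = fit.forecast(steps=4, exog=media[-4:].reshape(-1, 1))
print(nowcast)
```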
Logarithmic circuit with wide dynamic range
NASA Technical Reports Server (NTRS)
Wiley, P. H.; Manus, E. A. (Inventor)
1978-01-01
A circuit deriving an output voltage proportional to the logarithm of a dc input voltage that is subject to wide variations in amplitude includes a constant current source which forward biases a diode, so that the diode operates in the exponential portion of its voltage-versus-current characteristic, above its saturation current. The constant current source includes first and second cascaded-feedback dc operational amplifiers connected in a negative feedback circuit. An input terminal of the first amplifier is responsive to the input voltage. A circuit shunting the first amplifier output terminal includes a resistor in series with the diode. The voltage across the resistor is sensed at the input of the second dc operational feedback amplifier. The current flowing through the resistor is proportional to the input voltage over the wide range of variations in amplitude of the input voltage.
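The logarithmic behavior follows directly from the diode's exponential characteristic. Sketching the algebra (with n the ideality factor, I_s the saturation current, and R the shunt resistor, all generic symbols rather than values from the patent):

```latex
I = I_s\, e^{\,qV_D/nkT}
\;\Longrightarrow\;
V_D = \frac{nkT}{q}\,\ln\frac{I}{I_s},
\qquad
I = \frac{V_{\mathrm{in}}}{R}
\;\Longrightarrow\;
V_D = \frac{nkT}{q}\left(\ln V_{\mathrm{in}} - \ln R I_s\right).
```

The voltage sensed across the diode therefore tracks the logarithm of the input voltage up to a constant offset, over any input range where the current remains well above I_s.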
ERIC Educational Resources Information Center
Stevens, Mary Elizabeth
The series, of which this is the initial report, is intended to give a selective overview of research and development efforts and requirements in the computer and information sciences. The operations of information acquisition, sensing, and input to information processing systems are considered in generalized terms. Specific topics include but are…
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).
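The fuzzy-inference side of ANFIS can be illustrated with a two-rule, first-order Sugeno system; the membership centers, linear consequents, and cost units below are invented for illustration, not trained HACEM parameters:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function centered at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_cost(duration_s, thrust_klbf):
    """Two-rule first-order Sugeno inference: rule firing strengths are
    products of input memberships; the output is their weighted average.
    In ANFIS these memberships and consequents would be tuned by a
    neural-network training pass."""
    w1 = gauss(duration_s, 100, 60) * gauss(thrust_klbf, 50, 30)    # "short, small"
    w2 = gauss(duration_s, 500, 200) * gauss(thrust_klbf, 300, 150)  # "long, large"
    y1 = 0.8 * duration_s + 2.0 * thrust_klbf + 50    # rule-1 cost (k$)
    y2 = 1.5 * duration_s + 3.5 * thrust_klbf + 400   # rule-2 cost (k$)
    return (w1 * y1 + w2 * y2) / (w1 + w2)

print(f"estimated cost: {sugeno_cost(250, 120):.0f} k$")
```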
NASA Astrophysics Data System (ADS)
Lewis, Elizabeth; Kilsby, Chris; Fowler, Hayley
2014-05-01
The impact of climate change on hydrological systems requires further quantification in order to inform water management. This study intends to conduct such analysis using hydrological models. Such models are of varying forms, of which conceptual, lumped parameter models and physically-based models are two important types. The majority of hydrological studies use conceptual models calibrated against measured river flow time series in order to represent catchment behaviour. This method often shows impressive results for specific problems in gauged catchments. However, the results may not be robust under non-stationary conditions such as climate change, as physical processes and relationships amenable to change are not accounted for explicitly. Moreover, conceptual models are less readily applicable to ungauged catchments, in which hydrological predictions are also required. As such, the physically based, spatially distributed model SHETRAN is used in this study to develop a robust and reliable framework for modelling historic and future behaviour of gauged and ungauged catchments across the whole of Great Britain. In order to achieve this, a large array of data completely covering Great Britain for the period 1960-2006 has been collated and efficiently stored ready for model input. The data processed include a DEM, rainfall, PE and maps of geology, soil and land cover. A desire to make the modelling system easy for others to work with led to the development of a user-friendly graphical interface. This allows non-experts to set up and run a catchment model in a few seconds, a process that can normally take weeks or months. The quality and reliability of the extensive dataset for modelling hydrological processes has also been evaluated. One aspect of this has been an assessment of error and uncertainty in rainfall input data, as well as the effects of temporal resolution in precipitation inputs on model calibration. SHETRAN has been updated to accept gridded rainfall inputs, and UKCP09 gridded daily rainfall data has been disaggregated using hourly records to analyse the implications of using realistic sub-daily variability. Furthermore, the development of a comprehensive dataset and computationally efficient means of setting up and running catchment models has allowed for examination of how a robust parameter scheme may be derived. This analysis has been based on collective parameterisation of multiple catchments in contrasting hydrological settings and subject to varied processes. 350 gauged catchments all over the UK have been simulated, and a robust set of parameters is being sought by examining the full range of hydrological processes and calibrating to a highly diverse flow data series. The modelling system will be used to generate flow time series based on historical input data and also downscaled Regional Climate Model (RCM) forecasts using the UKCP09 Weather Generator. This will allow for analysis of flow frequency and associated future changes, which cannot be determined from the instrumental record or from lumped parameter model outputs calibrated only to historical catchment behaviour. This work will be based on the existing and functional modelling system described following some further improvements to calibration, particularly regarding simulation of groundwater-dominated catchments.
Prestressed elastomer for energy storage
Hoppie, Lyle O.; Speranza, Donald
1982-01-01
Disclosed is a regenerative braking device for an automotive vehicle. The device includes a power isolating assembly (14), an infinitely variable transmission (20) interconnecting an input shaft (16) with an output shaft (18), and an energy storage assembly (22). The storage assembly includes a plurality of elastomeric rods (44, 46) mounted for rotation and connected in series between the input and output shafts. The elastomeric rods are prestressed along their rotational or longitudinal axes to inhibit buckling of the rods due to torsional stressing of the rods in response to relative rotation of the input and output shafts.
NASA Technical Reports Server (NTRS)
Smith, W. W.
1973-01-01
A Langley Research Center version of NASTRAN Level 15.1.0 designed to provide the analyst with an added tool for debugging massive NASTRAN input data is described. The program checks all NASTRAN input data cards and displays on a CRT the graphic representation of the undeformed structure. In addition, the program permits the display and alteration of input data and allows reexecution without physically resubmitting the job. Core requirements on the CDC 6000 computer are approximately 77,000 octal words of central memory.
NASA Astrophysics Data System (ADS)
Kim, S.; Arii, M.; Jackson, T. J.
2017-12-01
L-band airborne synthetic aperture radar (SAR) observations at 7-m spatial resolution were made over California shrublands to better understand the effects of soil and vegetation parameters on the backscattering coefficient (σ0). Temporal changes in σ0 of up to 3 dB were highly correlated to surface soil moisture but not to vegetation, even though vegetation water content (VWC) varied seasonally by a factor of two. HH was always greater than VV, suggesting the importance of double-bounce scattering by the woody parts. However, the geometric and dielectric properties of the woody parts did not vary significantly over time. Instead, the changes in VWC occurred primarily in thin leaves that may not meaningfully influence absorption and scattering. A physically based model for single scattering by discrete elements of plants successfully simulated the magnitude of the temporal variations in HH, VV, and HH/VV with a difference of less than 0.9 dB. In order to simulate the observations, the VWC input of the plant to the model was formulated as a function of the plant's dielectric property (water fraction) while the plant geometry remained static in time. In comparison, when the VWC input was characterized by the geometry of a growing plant, the model performed poorly in describing the observed patterns in the σ0 changes. The modeling results offer an explanation of the observation that soil moisture correlated highly with σ0: the dominant mechanisms for HH and VV are double-bounce scattering by the trunk and soil surface scattering, respectively. The time-series inversion of the physical model was able to retrieve soil moisture with a difference of -0.037 m3/m3 (mean), 0.025 m3/m3 (standard deviation), and a correlation of 0.89. Together with previous results over croplands, where the SAR data offered 0.05 m3/m3 retrieval accuracy, we will demonstrate the efficacy of model-based time-series soil moisture retrieval at field scales.
Goldstein, Steven J; Abdel-Fattah, Amr I; Murrell, Michael T; Dobson, Patrick F; Norman, Deborah E; Amato, Ronald S; Nunn, Andrew J
2010-03-01
Uranium-series data for groundwater samples from the Nopal I uranium ore deposit were obtained to place constraints on radionuclide transport and hydrologic processes for a nuclear waste repository located in fractured, unsaturated volcanic tuff. Decreasing uranium concentrations for wells drilled in 2003 are consistent with a simple physical mixing model that indicates that groundwater velocities are low (approximately 10 m/y). Uranium isotopic constraints, well productivities, and radon systematics also suggest limited groundwater mixing and slow flow in the saturated zone. Uranium isotopic systematics for seepage water collected in the mine adit show a spatial dependence which is consistent with longer water-rock interaction times and higher uranium dissolution inputs at the front adit where the deposit is located. Uranium-series disequilibria measurements for mostly unsaturated zone samples indicate that (230)Th/(238)U activity ratios range from 0.005 to 0.48 and (226)Ra/(238)U activity ratios range from 0.006 to 113. (239)Pu/(238)U mass ratios for the saturated zone are <2 × 10^-14, and Pu mobility in the saturated zone is >1000 times lower than the U mobility. Saturated zone mobility decreases in the order (238)U approximately (226)Ra > (230)Th approximately (239)Pu. Radium and thorium appear to have higher mobility in the unsaturated zone based on U-series data from fractures and seepage water near the deposit.
NASA Astrophysics Data System (ADS)
Sun, Hong; Wu, Qian-zhong
2013-09-01
To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time series analysis of random sequences, an AR model of the gyro random error is established, and the gyro output signals are filtered repeatedly with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead-correction and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video module and remote monitoring software (Visual Basic 6.0) monitor the servo motor state in real time: the module gathers video signals and sends them to the host computer, which displays the motor's running state in a Visual Basic 6.0 window. The main error sources are also analyzed in detail; quantitative analysis of the errors contributed by bandwidth and the gyro sensor makes the proportion of each error in the total more transparent and thereby helps reduce the overall system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
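The AR-model-plus-Kalman-filter idea for gyro drift reduces to a few lines in the scalar case. A Python sketch under an assumed AR(1) drift model (all noise parameters are invented for illustration):

```python
import numpy as np

def kalman_ar1(z, phi, q, r):
    """Scalar Kalman filter for an AR(1) drift model
       x_k = phi * x_{k-1} + w_k,   z_k = x_k + v_k,
    with process variance q and measurement variance r."""
    x, p = 0.0, 1.0
    est = np.empty_like(z)
    for k, zk in enumerate(z):
        x, p = phi * x, phi * phi * p + q        # predict
        g = p / (p + r)                           # Kalman gain
        x, p = x + g * (zk - x), (1 - g) * p      # update
        est[k] = x
    return est

# toy gyro-rate record: AR(1) drift plus white measurement noise
rng = np.random.default_rng(2)
drift = np.zeros(500)
for k in range(1, 500):
    drift[k] = 0.98 * drift[k - 1] + rng.normal(0, 0.05)
z = drift + rng.normal(0, 0.3, 500)
print(np.std(z - drift),
      np.std(kalman_ar1(z, 0.98, 0.05**2, 0.3**2) - drift))
```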
Hawaii: 2002 Economic Census. 2002 Educational Services, Geographic Area Series. EC02-61A-HI.
ERIC Educational Resources Information Center
US Department of Commerce, 2005
2005-01-01
The economic census furnishes an important part of the framework for such composite measures as the gross domestic product estimates, input/output measures, production and price indexes, and other statistical series that measure short-term changes in economic conditions. Specific uses of economic census data include the following: Policymaking…
Montana: 2002. 2002 Economic Census. Educational Services, Geographic Area Series. EC02-61A-MT
ERIC Educational Resources Information Center
US Department of Commerce, 2005
2005-01-01
The economic census furnishes an important part of the framework for such composite measures as the gross domestic product estimates, input/output measures, production and price indexes, and other statistical series that measure short-term changes in economic conditions. Specific uses of economic census data include the following: Policymaking…
Unit: Systems, Inspection Pack, National Trial Print.
ERIC Educational Resources Information Center
Australian Science Education Project, Toorak, Victoria.
This unit in the series prepared by the Australian Science Education Project is intended for students capable of abstract reasoning and as an introduction to other high level units in the series. The core activities suggest a number of activities that should lead students to recognize that a system is something that transforms an input into an…
NASA Astrophysics Data System (ADS)
Inatomi, M. I.; Ito, A.
2016-12-01
Nitrous oxide (N2O), with a centennial mean residence time in the atmosphere, is one of the most important greenhouse gases. Because natural and anthropogenic emissions make comparable contributions, we need to account for the different sources of N2O, such as natural soils and fertilizer in croplands, to predict future emission changes and to discuss mitigation. In this study, we conduct a series of simulations of future change in nitrous oxide emission from terrestrial ecosystems using a process-based model, VISIT. We assume several scenarios of future climate change, atmospheric nitrogen deposition, fertilizer input, and land-use change. In particular, we develop a new scenario of cropland fertilizer input on the basis of changes in crop productivity and fertilizer production cost. Expansion of biofuel crop production is considered, but in a simplified manner (e.g., a specific fraction of pasture conversion to biofuel cultivation). Regional and temporal aspects of N2O emission are investigated and compared with previous studies. Finally, we discuss, on the basis of the simulated results, the high end of N2O emissions, mitigation options, and the impact of fertilizer input.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, W. Payton; Hokr, Milan; Shao, Hua
2016-10-19
We investigated the transit time distribution (TTD) of discharge collected from fractures in the Bedrichov Tunnel, Czech Republic, using lumped parameter models and multiple environmental tracers. We utilize time series of δ18O, δ2H and 3H along with CFC measurements from individual fractures to investigate the TTD, and the uncertainty in estimated mean travel time, in several fracture networks of varying length and discharge. We also compare several TTDs, including the dispersion distribution, the exponential distribution, and a developed TTD which includes the effects of matrix diffusion. The effect of seasonal recharge is explored by comparing several seasonal weighting functions used to derive the historical recharge concentration. We identify best-fit mean ages for each TTD by minimizing the error-weighted, multi-tracer χ2 residual for each seasonal weighting function. We use this methodology to test the ability of each TTD and seasonal input function to fit the observed tracer concentrations, and the effect of choosing different TTD and seasonal recharge functions on the mean age estimation. We find that the estimated mean transit time is a function of both the assumed TTD and the seasonal weighting function. Best fits as measured by the χ2 value were achieved for the dispersion model using the seasonal input function developed here for two of the three modeled sites, while at the third site, equally good fits were achieved with the exponential model and with the dispersion model and our seasonal input function. The average mean transit time for all TTDs and seasonal input functions converged to similar values at each location. The sensitivity of the estimated mean transit time to the seasonal weighting function was equal to that of the TTD. These results indicate that understanding the seasonality of recharge is at least as important as the uncertainty in the flow path distribution in fracture networks, and that unique identification of the TTD and mean transit time is difficult given the uncertainty in the recharge function. However, the mean transit time appears to be relatively robust to structural model uncertainty. The results presented here should be applicable to other studies using environmental tracers to constrain flow and transport properties in fractured rock systems.
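The lumped-parameter convolution at the heart of this approach is compact enough to sketch. A minimal Python version, assuming the commonly written Maloszewski-Zuber form of the dispersion-model TTD and an invented tritium recharge history (function names and parameter values are illustrative only):

```python
import numpy as np

def dispersion_ttd(tau, T, Dp):
    """Dispersion-model transit time distribution with mean transit
    time T and dimensionless dispersion parameter Dp (as commonly
    written; a sketch, not the authors' implementation)."""
    tau = np.asarray(tau, dtype=float)
    g = np.zeros_like(tau)
    pos = tau > 0
    t = tau[pos]
    g[pos] = (1.0 / (t * np.sqrt(4.0 * np.pi * Dp * t / T))
              * np.exp(-(1.0 - t / T) ** 2 * T / (4.0 * Dp * t)))
    return g

def convolve_tracer(c_in, T, Dp, dt=1.0, lam=0.0):
    """Outlet concentration: convolution of the recharge history with
    the TTD, including radioactive decay (lam, 1/yr) for 3H."""
    tau = np.arange(len(c_in)) * dt
    g = dispersion_ttd(tau, T, Dp) * np.exp(-lam * tau) * dt
    return np.convolve(c_in, g)[: len(c_in)]

# toy 3H recharge history (TU) at annual steps; 3H half-life 12.32 yr
c_in = np.concatenate([np.full(20, 5.0), np.full(20, 100.0), np.full(30, 10.0)])
c_out = convolve_tracer(c_in, T=25.0, Dp=0.5, dt=1.0, lam=np.log(2) / 12.32)
print(c_out[-5:])
```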
Kirchner, James W.; Neal, Colin
2013-01-01
The chemical dynamics of lakes and streams affect their suitability as aquatic habitats and as water supplies for human needs. Because water quality is typically monitored only weekly or monthly, however, the higher-frequency dynamics of stream chemistry have remained largely invisible. To illuminate a wider spectrum of water quality dynamics, rainfall and streamflow were sampled in two headwater catchments at Plynlimon, Wales, at 7-h intervals for 1–2 y and weekly for over two decades, and were analyzed for 45 solutes spanning the periodic table from H+ to U. Here we show that in streamflow, all 45 of these solutes, including nutrients, trace elements, and toxic metals, exhibit fractal 1/fα scaling on time scales from hours to decades (α = 1.05 ± 0.15, mean ± SD). We show that this fractal scaling can arise through dispersion of random chemical inputs distributed across a catchment. These 1/f time series are non–self-averaging: monthly, yearly, or decadal averages are approximately as variable, one from the next, as individual measurements taken hours or days apart, defying naive statistical expectations. (By contrast, stream discharge itself is nonfractal, and self-averaging on time scales of months and longer.) In the solute time series, statistically significant trends arise much more frequently, on all time scales, than one would expect from conventional t statistics. However, these same trends are poor predictors of future trends—much poorer than one would expect from their calculated uncertainties. Our results illustrate how 1/f time series pose fundamental challenges to trend analysis and change detection in environmental systems. PMID:23842090
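A periodogram slope fit is the simplest way to estimate the 1/f^α exponent described above. A minimal Python sketch (the estimator and the sanity check are illustrative, not the authors' spectral method):

```python
import numpy as np

def spectral_slope(x, dt=1.0):
    """Estimate alpha in a 1/f^alpha power spectrum by a least-squares
    line fit of log-power against log-frequency (raw periodogram)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(x.size, d=dt)
    keep = f > 0                     # drop the zero-frequency bin
    slope, _ = np.polyfit(np.log(f[keep]), np.log(psd[keep]), 1)
    return -slope

# sanity check: white noise (alpha ~ 0) vs integrated white noise (~ 2)
rng = np.random.default_rng(3)
w = rng.normal(size=4096)
print(spectral_slope(w), spectral_slope(np.cumsum(w)))
```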
Continuous Change Detection and Classification (CCDC) of Land Cover Using All Available Landsat Data
NASA Astrophysics Data System (ADS)
Zhu, Z.; Woodcock, C. E.
2012-12-01
A new algorithm for Continuous Change Detection and Classification (CCDC) of land cover using all available Landsat data is developed. This new algorithm is capable of detecting many kinds of land cover change as new images are collected, while at the same time providing land cover maps for any given time. To better identify land cover change, a two-step cloud, cloud shadow, and snow masking algorithm is used to eliminate "noisy" observations. Next, a time series model with components of seasonality, trend, and break estimates the surface reflectance and temperature. The time series model is updated continuously with newly acquired observations. Due to the high variability in spectral response for different kinds of land cover change, the CCDC algorithm uses a data-driven threshold derived from all seven Landsat bands. When the difference between observed and predicted values exceeds the thresholds three consecutive times, a pixel is identified as land cover change. Land cover classification is done after change detection. Coefficients from the time series models and the Root Mean Square Error (RMSE) from model fitting are used as classification inputs for the Random Forest Classifier (RFC). We applied this new algorithm to one Landsat scene (Path 12 Row 31) that includes all of Rhode Island as well as much of Eastern Massachusetts and parts of Connecticut. A total of 532 Landsat images acquired between 1982 and 2011 were processed. During this period, 619,924 pixels were detected as changing once (91% of total changed pixels) and 60,199 pixels were detected as changing twice (8% of total changed pixels). The most frequent land cover change category is from mixed forest to low density residential, which occupies more than 8% of total land cover change pixels.
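The core of the per-pixel test can be sketched as harmonic regression plus a consecutive-exceedance rule. A single-band Python simplification of the idea (CCDC itself uses all seven Landsat bands and a data-driven threshold; the names and constants below are illustrative):

```python
import numpy as np

def harmonic_design(t, period=365.25):
    """Design matrix with intercept, trend, and one annual harmonic,
    mirroring the seasonality-plus-trend time series model."""
    w = 2.0 * np.pi * t / period
    return np.column_stack([np.ones_like(t), t, np.cos(w), np.sin(w)])

def detect_change(t, y, n_consecutive=3, k=3.0):
    """Fit the model to a stable history, then flag change when the
    residual exceeds k * RMSE for n_consecutive observations."""
    n0 = len(t) // 2                                  # stable training period
    X = harmonic_design(t[:n0])
    beta, *_ = np.linalg.lstsq(X, y[:n0], rcond=None)
    rmse = np.sqrt(np.mean((X @ beta - y[:n0]) ** 2))
    resid = np.abs(harmonic_design(t[n0:]) @ beta - y[n0:])
    run = 0
    for i, r in enumerate(resid):
        run = run + 1 if r > k * rmse else 0
        if run >= n_consecutive:
            return t[n0 + i]                          # date of detected change
    return None
```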
Circuit for measuring time differences among events
Romrell, Delwin M.
1977-01-01
An electronic circuit has a plurality of input terminals. Application of a first input signal to any one of the terminals initiates a timing sequence. Later inputs to the same terminal are ignored but a later input to any other terminal of the plurality generates a signal which can be used to measure the time difference between the later input and the first input signal. Also, such time differences may be measured between the first input signal and an input signal to any other terminal of the plurality or the circuit may be reset at any time by an external reset signal.
Reveal, A General Reverse Engineering Algorithm for Inference of Genetic Network Architectures
NASA Technical Reports Server (NTRS)
Liang, Shoudan; Fuhrman, Stefanie; Somogyi, Roland
1998-01-01
Given the imminent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10^15) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields.
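The mutual-information test at the center of REVEAL is easy to sketch: an input set X fully determines a target element Y exactly when M(X;Y) = H(Y). A small Python version for binary state-transition pairs (the function names are ours, and the exhaustive search mirrors the paper's strategy only schematically):

```python
from collections import Counter
from itertools import combinations
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a sequence of discrete labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def infer_inputs(states_t, states_t1, target, k_max=3):
    """REVEAL-style search: the smallest input set whose states fully
    determine the target's next state, i.e. M(X;Y) = H(Y), which is
    equivalent to H(X) = H(X,Y)."""
    n = states_t.shape[1]
    y = states_t1[:, target]
    for k in range(1, k_max + 1):
        for inputs in combinations(range(n), k):
            x = [tuple(row) for row in states_t[:, inputs]]
            if abs(entropy(x) - entropy(list(zip(x, y)))) < 1e-12:
                return inputs
    return None

# toy 3-element network: element 0's next state is XOR of elements 1 and 2
rng = np.random.default_rng(6)
s_t = rng.integers(0, 2, size=(64, 3))
s_t1 = s_t.copy()
s_t1[:, 0] = s_t[:, 1] ^ s_t[:, 2]
print(infer_inputs(s_t, s_t1, target=0))   # -> (1, 2)
```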
Fluctuation behaviors of financial return volatility duration
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun; Lu, Yunfan
2016-04-01
It is crucial to understand the return volatility of financial markets because it helps to quantify investment risk, optimize portfolios, and provide a key input to option pricing models. The characteristics of isolated high volatility events above a certain threshold in price fluctuations, and the distributions of return intervals between these events, have aroused great interest in financial research. In the present work, we introduce a new concept of daily return volatility duration, which is defined as the shortest passage time until the future volatility intensity is above or below the current volatility intensity (without predefining a threshold). The statistical properties of the daily return volatility durations for seven representative stock indices from the world financial markets are investigated. Some useful and interesting empirical results about the probability distributions, memory effects, and multifractal properties of these volatility duration series are obtained. These results also suggest that the proposed volatility duration analysis is a meaningful and beneficial approach.
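One concrete reading of the volatility duration, sketched in Python: for each day, find the shortest passage time until the volatility series crosses above (or below) its current level. The volatility proxy and the toy price series are assumptions for illustration:

```python
import numpy as np

def passage_times(vol, direction="above"):
    """For each day t, the smallest k with vol[t+k] > vol[t] ("above")
    or vol[t+k] < vol[t] ("below"); NaN if no crossing occurs."""
    n = len(vol)
    dur = np.full(n, np.nan)
    for t in range(n - 1):
        fut = vol[t + 1:]
        hit = np.nonzero(fut > vol[t] if direction == "above"
                         else fut < vol[t])[0]
        if hit.size:
            dur[t] = hit[0] + 1
    return dur

# daily volatility proxy: absolute log returns of a toy price series
rng = np.random.default_rng(4)
price = np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))
vol = np.abs(np.diff(np.log(price)))
print(np.nanmean(passage_times(vol, "above")))
```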
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
MULTI-ELECTRODE TUBE PULSE MEMORY CIRCUIT
Gundlach, J.C.; Reeves, J.B.
1958-05-20
Control circuits are described for pulse memory devices for scalers and the like, and more particularly for a driving or energizing circuit for a polycathode gaseous discharge tube having an elongated anode and a successive series of cathodes spaced opposite the anode along its length. The circuit is so arranged as to utilize an arc discharge between the anode and a cathode to count a series of pulses. Upon application of an input pulse the discharge is made to occur between the anode and the next successive cathode, and an output pulse is produced when a particular subsequent cathode is reached. The circuit means for transferring the discharge, by altering the anode potential and the potentials of the cathodes and by interconnecting the cathodes, constitutes the novel aspect of the invention. A low response time and a reduced number of circuit components are the practical advantages of the described circuit.
1995-01-01
Fragmentary proceedings text; the recoverable content notes that a more expensive option is to track the mean and variance of each input feature instead of the min and max, in which case a sigmoid is the natural choice for the mapping, and cites "Scaling Down: Applying Large Vocabulary Hybrid HMM-MLP Methods to Telephone Recognition of Digits and Natural Numbers" by Kristine Ma and Nelson Morgan.
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
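Model-order selection of the kind considered above is commonly automated with an information criterion. A minimal sketch using statsmodels on a synthetic AR(1) series follows; the paper's parameter-constraining technique and the torque/EMG data are not reproduced here.

```python
# Select an ARMA(p, q) order by comparing AIC across candidate fits.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):          # placeholder AR(1) series, not the joint data
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()

best = None
for p in range(1, 4):
    for q in range(0, 3):
        fit = ARIMA(y, order=(p, 0, q)).fit()
        if best is None or fit.aic < best[0]:
            best = (fit.aic, p, q)
print("lowest-AIC ARMA order:", best[1:])
```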
Capstone Depleted Uranium Aerosols: Generation and Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parkhurst, MaryAnn; Szrom, Fran; Guilmette, Ray
2004-10-19
In a study designed to provide an improved scientific basis for assessing possible health effects from inhaling depleted uranium (DU) aerosols, a series of DU penetrators was fired at an Abrams tank and a Bradley fighting vehicle. A robust sampling system was designed to collect aerosols in this difficult environment and continuously monitor the sampler flow rates. Aerosols collected were analyzed for uranium concentration and particle size distribution as a function of time. They were also analyzed for uranium oxide phases, particle morphology, and dissolution in vitro. The resulting data provide input useful in human health risk assessments.
Supercomputer analysis of purine and pyrimidine metabolism leading to DNA synthesis.
Heinmets, F
1989-06-01
A model-system is established to analyze purine and pyrimidine metabolism leading to DNA synthesis. The principal aim is to explore the flow and regulation of terminal deoxynucleoside triphosphates (dNTPs) under various input and parametric conditions. A series of flow equations is established and subsequently converted to differential equations. These are programmed (Fortran) and analyzed on a Cray X-MP/48 supercomputer. The pool concentrations are presented as a function of time under conditions in which various pertinent parameters of the system are modified. The system is formulated by 100 differential equations.
Assessment of CTAS ETA prediction capabilities
NASA Astrophysics Data System (ADS)
Bolender, Michael A.
1994-11-01
This report summarizes the work done to date in assessing the trajectory fidelity and estimated time of arrival (ETA) prediction capability of the NASA Ames Center TRACON Automation System (CTAS) software. The CTAS software suite is a series of computer programs designed to aid air traffic controllers in their tasks of safely scheduling the landing sequence of approaching aircraft. In particular, this report concerns the accuracy of the available measurements (e.g., position, altitude, etc.) that are input to the software, as well as the accuracy of the final data that are made available to the air traffic controllers.
Non-invasive estimation of dissipation from non-equilibrium fluctuations in chemical reactions.
Muy, S; Kundu, A; Lacoste, D
2013-09-28
We show how to extract an estimate of the entropy production from a sufficiently long time series of stationary fluctuations of chemical reactions. This method, which is based on recent work on fluctuation theorems, is direct, non-invasive, does not require any knowledge about the underlying dynamics and is applicable even when only partial information is available. We apply it to simple stochastic models of chemical reactions involving a finite number of states, and for this case, we study how the estimate of dissipation is affected by the degree of coarse-graining present in the input data.
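For a discrete-state trajectory, an estimator in this spirit can be sketched from the asymmetry of empirical transition frequencies; this is a standard fluctuation-theorem-inspired form, and the paper's exact formulation may differ.

```python
# Estimate an entropy production rate from a stationary discrete-state time
# series via the log-ratio of forward and reverse transition frequencies.
import numpy as np

def entropy_production_rate(traj, n_states):
    """traj: 1-D integer array of visited states at equal time steps.
    Returns an estimate in nats per time step."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-1], traj[1:]):
        counts[a, b] += 1
    p = counts / counts.sum()      # empirical joint transition frequencies
    sigma = 0.0
    for i in range(n_states):
        for j in range(n_states):
            if i != j and p[i, j] > 0 and p[j, i] > 0:
                sigma += p[i, j] * np.log(p[i, j] / p[j, i])
    return sigma
```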
Automatic construction of a recurrent neural network based classifier for vehicle passage detection
NASA Astrophysics Data System (ADS)
Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur
2017-03-01
Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detection of a vehicle passage through a checkpoint. As input to the classifier we use multidimensional signals from various sensors installed at the checkpoint. The obtained results demonstrate that the previous approach to handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by automatic RNN training on appropriately labelled data.
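A minimal version of such a classifier can be sketched in PyTorch. The layer sizes, the eight-sensor input, and the training snippet below are illustrative assumptions, not the authors' architecture.

```python
# LSTM over multidimensional sensor windows with one binary "passage" logit.
import torch
import torch.nn as nn

class PassageClassifier(nn.Module):
    def __init__(self, n_sensors, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):          # x: (batch, time, n_sensors)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # one logit per sequence

model = PassageClassifier(n_sensors=8)
x = torch.randn(4, 200, 8)                    # 4 windows of 200 time steps
loss = nn.BCEWithLogitsLoss()(model(x).squeeze(1), torch.ones(4))
```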
A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models
The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
Synthetic wind speed scenarios generation for probabilistic analysis of hybrid energy systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jun; Rabiti, Cristian
2016-11-25
Hybrid energy systems consisting of multiple energy inputs and multiple energy outputs have been proposed to be an effective element to enable ever increasing penetration of clean energy. In order to better understand the dynamic and probabilistic behavior of hybrid energy systems, this paper proposes a model combining Fourier series and autoregressive moving average (ARMA) to characterize historical weather measurements and to generate synthetic weather (e.g., wind speed) data. In particular, Fourier series is used to characterize the seasonal trend in historical data, while ARMA is applied to capture the autocorrelation in residue time series (e.g., measurements minus seasonal trends). The generated synthetic wind speed data is then utilized to perform probabilistic analysis of a particular hybrid energy system configuration, which consists of nuclear power plant, wind farm, battery storage, natural gas boiler, and chemical plant. As a result, requirements on component ramping rate, economic and environmental impacts of hybrid energy systems, and the effects of deploying different sizes of batteries in smoothing renewable variability, are all investigated.
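The trend-plus-residual construction can be sketched as follows; the harmonic count, the ARMA order, and the synthetic input series are illustrative assumptions.

```python
# Fit a truncated Fourier series to the seasonal trend, fit ARMA on the
# residuals, and generate a synthetic series as trend + simulated residuals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_fourier_trend(t, y, n_harmonics=2, period=8760.0):
    """Least-squares Fourier trend for hourly data over one year."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * k * t / period),
                 np.cos(2 * np.pi * k * t / period)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef

t = np.arange(8760.0)                         # one year of hourly steps
wind = (8 + 3 * np.sin(2 * np.pi * t / 8760)  # placeholder "measurements"
        + np.random.default_rng(1).normal(0, 1, t.size))

trend = fit_fourier_trend(t, wind)
arma = ARIMA(wind - trend, order=(2, 0, 1)).fit()
synthetic = trend + arma.simulate(nsimulations=t.size)
```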
Statistical analysis of CSP plants by simulating extensive meteorological series
NASA Astrophysics Data System (ADS)
Pavón, Manuel; Fernández, Carlos M.; Silva, Manuel; Moreno, Sara; Guisado, María V.; Bernardos, Ana
2017-06-01
The feasibility analysis of any power plant project needs the estimation of the amount of energy it will be able to deliver to the grid during its lifetime. To achieve this, its feasibility study requires a precise knowledge of the solar resource over a long-term period. In Concentrating Solar Power (CSP) projects, financing institutions typically require several statistical probability-of-exceedance scenarios of the expected electric energy output. Currently, the industry assumes a correlation between probabilities of exceedance of annual Direct Normal Irradiance (DNI) and energy yield. In this work, this assumption is tested by the simulation of the energy yield of CSP plants using as input a 34-year series of measured meteorological parameters and solar irradiance. The results of this work show that, even if some correspondence between the probabilities of exceedance of annual DNI values and energy yields is found, the intra-annual distribution of DNI may significantly affect this correlation. This result highlights the need for standardized procedures for the elaboration of DNI time series representative of a given probability of exceedance of annual DNI.
The International Satellite Cloud Climatology Project H-Series climate data record product
NASA Astrophysics Data System (ADS)
Young, Alisa H.; Knapp, Kenneth R.; Inamdar, Anand; Hankins, William; Rossow, William B.
2018-03-01
This paper describes the new global long-term International Satellite Cloud Climatology Project (ISCCP) H-series climate data record (CDR). The H-series data contain a suite of level 2 and 3 products for monitoring the distribution and variation of cloud and surface properties to better understand the effects of clouds on climate, the radiation budget, and the global hydrologic cycle. This product is currently available for public use and is derived from both geostationary and polar-orbiting satellite imaging radiometers with common visible and infrared (IR) channels. The H-series data currently span July 1983 to December 2009 with plans for continued production to extend the record to the present with regular updates. The H-series data are the longest combined geostationary and polar orbiter satellite-based CDR of cloud properties. Access to the data is provided in network common data form (netCDF) and archived by NOAA's National Centers for Environmental Information (NCEI) under the satellite Climate Data Record Program (https://doi.org/10.7289/V5QZ281S). The basic characteristics, history, and evolution of the dataset are presented herein with particular emphasis on and discussion of product changes between the H-series and the widely used predecessor D-series product which also spans from July 1983 through December 2009. Key refinements included in the ISCCP H-series CDR are based on improved quality control measures, modified ancillary inputs, higher spatial resolution input and output products, calibration refinements, and updated documentation and metadata to bring the H-series product into compliance with existing standards for climate data records.
Task-Based Language Teaching for Beginner-Level Learners of L2 French: An Exploratory Study
ERIC Educational Resources Information Center
Erlam, Rosemary; Ellis, Rod
2018-01-01
This study investigated the effect of input-based tasks on the acquisition of vocabulary and grammar by beginner-level learners of L2 French and reported the introduction of task-based teaching as an innovation in a state secondary school. The experimental group (n = 19) completed a series of focused input-based language tasks, taught by their…
ERIC Educational Resources Information Center
Andersen, Elaine S., Comp.
Thirty-one papers and reports dealing with recent work on language input to children are listed in this annotated bibliography. The annotations, which are descriptive rather than evaluative, summarize the design of each study, the nature of the data, and some of the results and conclusions. Entries by P. Broen, J. Bynon, L. Cherry, J. M. Crawford,…
An analysis of a nonlinear instability in the implementation of a VTOL control system
NASA Technical Reports Server (NTRS)
Weber, J. M.
1982-01-01
The contributions to nonlinear behavior and unstable response of the model following yaw control system of a VTOL aircraft during hover were determined. The system was designed as a state rate feedback implicit model follower that provided yaw rate command/heading hold capability and used combined full authority parallel and limited authority series servo actuators to generate an input to the yaw reaction control system of the aircraft. Both linear and nonlinear system models, as well as describing function linearization techniques were used to determine the influence on the control system instability of input magnitude and bandwidth, series servo authority, and system bandwidth. Results of the analysis describe stability boundaries as a function of these system design characteristics.
Automated Generation of Technical Documentation and Provenance for Reproducible Research
NASA Astrophysics Data System (ADS)
Jolly, B.; Medyckyj-Scott, D.; Spiekermann, R.; Ausseil, A. G.
2017-12-01
Data provenance and detailed technical documentation are essential components of high-quality reproducible research; however, they are often only partially addressed during a research project. Recording and maintaining this information during the course of a project can be a difficult task to get right, as it is a time-consuming and often boring process for the researchers involved. As a result, provenance records and technical documentation provided alongside research results can be incomplete or may not be completely consistent with the actual processes followed. While providing access to the data and code used by the original researchers goes some way toward enabling reproducibility, this does not count as, or replace, data provenance. Additionally, it can be a poor substitute for good technical documentation and is often more difficult for a third party to understand - particularly if they do not understand the programming language(s) used. We present and discuss a tool built from the ground up for the production of well-documented and reproducible spatial datasets that are created by applying a series of classification rules to a number of input layers. The internal model of the classification rules required by the tool to process the input data is exploited to also produce technical documentation and provenance records with minimal additional user input. Available provenance records that accompany input datasets are incorporated into those that describe the current process. As a result, each time a new iteration of the analysis is performed the documentation and provenance records are re-generated to provide an accurate description of the exact process followed. The generic nature of this tool, and the lessons learned during its creation, have wider application to other fields where the production of derivative datasets must be done in an open, defensible, and reproducible way.
Learning from adaptive neural dynamic surface control of strict-feedback systems.
Wang, Min; Wang, Cong
2015-06-01
Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems based on adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing and doing with learned knowledge. To achieve the learning, this paper first proposes a stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in a finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of NN input variables is easily verified since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, a learning control method using the learned knowledge is proposed to achieve closed-loop stability and improved control performance. Simulation studies are performed to demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of NN input variables.
NASA Astrophysics Data System (ADS)
Luo, Y.; Boudreau, B. P.; Dickens, G. R.; Sluijs, A.; Middelburg, J. J.
2015-12-01
Carbon dioxide (CO2) release during the Paleocene-Eocene Thermal Maximum (PETM, 55.8 Myr BP) acidified the oceans, causing a decrease in calcium carbonate (CaCO3) preservation. During the subsequent recovery from this acidification, the sediment CaCO3 content came to exceed pre-PETM values, known as over-deepening or over-shooting. Past studies claim to explain these trends, but have failed to reproduce quantitatively the time series of CaCO3 preservation. We employ a simple biogeochemical model to recreate the CaCO3 records preserved at Walvis Ridge of the Atlantic Ocean. Replication of the observed changes, both shallowing and the subsequent over-deepening, requires two conditions not previously considered: (1) limited deep-water exchange between the Indo-Atlantic and Pacific oceans and (2) a ~50% reduction in the export of CaCO3 to the deep sea during acidification. Contrary to past theories that attributed over-deepening to increased riverine alkalinity input, we find that over-deepening is an emergent property, generated at constant riverine input when attenuation of CaCO3 export causes an unbalanced alkalinity input to the deep oceans (alkalinization) and the development of deep super-saturation. Restoration of CaCO3 export, particularly in the super-saturated deep Indo-Atlantic ocean, later in the PETM leads to greater accumulation of carbonates, ergo over-shooting, which returns the ocean to pre-PETM conditions over a time scale greater than 200 kyr. While this feedback between carbonate export and the riverine input has not previously been considered, it appears to constitute an important modification of the classic carbonate compensation concept used to explain oceanic response to acidification.
NASA Astrophysics Data System (ADS)
Moulds, S.; Djordjevic, S.; Savic, D.
2017-12-01
The Global Change Assessment Model (GCAM), an integrated assessment model, provides insight into the interactions and feedbacks between physical and human systems. The land system component of GCAM, which simulates land use activities and the production of major crops, produces output at the subregional level which must be spatially downscaled in order to use with gridded impact assessment models. However, existing downscaling routines typically consider cropland as a homogeneous class and do not provide information about land use intensity or specific management practices such as irrigation and multiple cropping. This paper presents a spatial allocation procedure to downscale crop production data from GCAM to a spatial grid, producing a time series of maps which show the spatial distribution of specific crops (e.g. rice, wheat, maize) at four input levels (subsistence, low input rainfed, high input rainfed and high input irrigated). The model algorithm is constrained by available cropland at each time point and therefore implicitly balances extensification and intensification processes in order to meet global food demand. It utilises a stochastic approach such that an increase in production of a particular crop is more likely to occur in grid cells with a high biophysical suitability and neighbourhood influence, while a fall in production will occur more often in cells with lower suitability. User-supplied rules define the order in which specific crops are downscaled as well as allowable transitions. A regional case study demonstrates the ability of the model to reproduce historical trends in India by comparing the model output with district-level agricultural inventory data. Lastly, the model is used to predict the spatial distribution of crops globally under various GCAM scenarios.
O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A
2010-03-01
Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radio-tracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which the information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data in order to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height and gender. The technique is illustrated in the context of (18)F-Fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves are used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. As well as the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetics analyses resulting from these three AIFs are compared with those resulting from arterially sampled AIFs. The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and the segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region of interest (ROI) time-course data based on the extracted AIF matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.
NASA Astrophysics Data System (ADS)
Müller, H.; Haberlandt, U.
2018-01-01
Rainfall time series of high temporal resolution and spatial density are crucial for urban hydrology. The multiplicative random cascade model can be used for temporal rainfall disaggregation of daily data to generate such time series. Here, the uniform splitting approach with a branching number of 3 in the first disaggregation step is applied. To achieve a final resolution of 5 min, subsequent steps after disaggregation are necessary. Three modifications at different disaggregation levels are tested in this investigation (uniform splitting at Δt = 15 min, linear interpolation at Δt = 7.5 min and Δt = 3.75 min). Results are compared both with observations and with an often-used approach based on the assumption that time steps of Δt = 5.625 min, as result when a branching number of 2 is applied throughout, can be replaced with Δt = 5 min (called the 1280 min approach). Spatial consistence is implemented in the disaggregated time series using a resampling algorithm. In total, 24 recording stations in Lower Saxony, Northern Germany, with a 5 min resolution have been used for the validation of the disaggregation procedure. The urban-hydrological suitability is tested with an artificial combined sewer system of about 170 hectares. The results show that all three variations outperform the 1280 min approach regarding reproduction of wet spell duration, average intensity, fraction of dry intervals and lag-1 autocorrelation. Extreme values with durations of 5 min are also better represented. For durations of 1 h, all approaches show only slight deviations from the observed extremes. The applied resampling algorithm is capable of achieving sufficient spatial consistence. The effects on the urban hydrological simulations are significant. Without spatial consistence, flood volumes of manholes and combined sewer overflow are strongly underestimated. After resampling, results using disaggregated time series as input are in the range of those using observed time series. Best overall performance regarding rainfall statistics is obtained by the method in which the disaggregation process ends at time steps with 7.5 min duration, deriving the 5 min time steps by linear interpolation. With subsequent resampling, this method leads to a good representation of manhole flooding and combined sewer overflow volume in terms of hydrological simulations and outperforms the 1280 min approach.
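The cascade itself can be sketched in simplified form: one branching-3 split, six branching-2 splits down to 7.5 min, and interpolation of the cumulative curve to 5 min. The Dirichlet weight generator below is a placeholder; the method estimates cascade weights from observed data.

```python
# Simplified multiplicative random cascade: daily total -> 3 x 480 min ->
# repeated halving to 7.5 min -> mass-conserving interpolation to 5 min.
import numpy as np

rng = np.random.default_rng(0)

def split(amount, branches):
    """Distribute an amount over `branches` children with random weights."""
    w = rng.dirichlet(np.ones(branches))  # placeholder for calibrated weights
    return amount * w

def disaggregate_day(daily_total):
    values = split(daily_total, 3)                 # 3 intervals of 480 min
    for _ in range(6):                             # 480 -> 7.5 min
        values = np.concatenate([split(v, 2) for v in values])
    # resample 192 x 7.5-min depths to 288 x 5-min depths by interpolating
    # the cumulative curve, which conserves the daily total
    edges_in = np.arange(193) * 7.5
    cum = np.concatenate([[0.0], np.cumsum(values)])
    edges_out = np.arange(289) * 5.0
    return np.diff(np.interp(edges_out, edges_in, cum))

five_min = disaggregate_day(24.0)   # 24 mm over one day
assert np.isclose(five_min.sum(), 24.0)
```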
The series-elastic shock absorber: tendons attenuate muscle power during eccentric actions
Azizi, Emanuel
2010-01-01
Elastic tendons can act as muscle power amplifiers or energy-conserving springs during locomotion. We used an in situ muscle-tendon preparation to examine the mechanical function of tendons during lengthening contractions, when muscles absorb energy. Force, length, and power were measured in the lateral gastrocnemius muscle of wild turkeys. Sonomicrometry was used to measure muscle fascicle length independently from muscle-tendon unit (MTU) length, as measured by a muscle lever system (servomotor). A series of ramp stretches of varying velocities was applied to the MTU in fully activated muscles. Fascicle length changes were decoupled from length changes imposed on the MTU by the servomotor. Under most conditions, muscle fascicles shortened on average, while the MTU lengthened. Energy input to the MTU during the fastest lengthenings was −54.4 J/kg, while estimated work input to the muscle fascicles during this period was only −11.24 J/kg. This discrepancy indicates that energy was first absorbed by elastic elements, then released to do work on muscle fascicles after the lengthening phase of the contraction. The temporary storage of energy by elastic elements also resulted in a significant attenuation of power input to the muscle fascicles. At the fastest lengthening rates, peak instantaneous power input to the MTU reached −2,143.9 W/kg, while peak power input to the fascicles was only −557.6 W/kg. These results demonstrate that tendons may act as mechanical buffers by limiting peak muscle forces, lengthening rates, and power inputs during energy-absorbing contractions. PMID:20507964
Neural Networks as a Tool for Constructing Continuous NDVI Time Series from AVHRR and MODIS
NASA Technical Reports Server (NTRS)
Brown, Molly E.; Lary, David J.; Vrieling, Anton; Stathakis, Demetris; Mussa, Hamse
2008-01-01
The long term Advanced Very High Resolution Radiometer-Normalized Difference Vegetation Index (AVHRR-NDVI) record provides a critical historical perspective on vegetation dynamics necessary for global change research. Despite the proliferation of new sources of global, moderate resolution vegetation datasets, the remote sensing community is still struggling to create datasets derived from multiple sensors that allow the simultaneous use of spectral vegetation for time series analysis. To overcome the non-stationary aspect of NDVI, we use an artificial neural network (ANN) to map the NDVI indices from AVHRR to those from MODIS using atmospheric, surface type and sensor-specific inputs to account for the differences between the sensors. The NDVI dynamics and range of MODIS NDVI data at one degree is matched and extended through the AVHRR record. Four years of overlap between the two sensors is used to train a neural network to remove atmospheric and sensor specific effects on the AVHRR NDVI. In this paper, we present the resulting continuous dataset, its relationship to MODIS data, and a validation of the product.
Bivariate autoregressive state-space modeling of psychophysiological time series data.
Smith, Daniel M; Abtahi, Mohammadreza; Amiri, Amir Mohammad; Mankodiya, Kunal
2016-08-01
Heart rate (HR) and electrodermal activity (EDA) are often used as physiological measures of psychological arousal in various neuropsychology experiments. In this exploratory study, we analyze HR and EDA data collected from four participants, each with a history of suicidal tendencies, during a cognitive task known as the Paced Auditory Serial Addition Test (PASAT). A central aim of this investigation is to guide future research by assessing heterogeneity in the population of individuals with suicidal tendencies. Using a state-space modeling approach to time series analysis, we evaluate the effect of an exogenous input, i.e., the stimulus presentation rate which was increased systematically during the experimental task. Participants differed in several parameters characterizing the way in which psychological arousal was experienced during the task. Increasing the stimulus presentation rate was associated with an increase in EDA in participants 2 and 4. The effect on HR was positive for participant 2 and negative for participants 3 and 4. We discuss future directions in light of the heterogeneity in the population indicated by these findings.
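A bivariate autoregressive state-space model with an exogenous stimulus-rate input can be sketched with statsmodels' VARMAX; the data below are random placeholders rather than the PASAT recordings, and the model order is an assumption.

```python
# Bivariate AR(1) state-space model of HR and EDA with an exogenous input.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.varmax import VARMAX

rng = np.random.default_rng(3)
endog = pd.DataFrame({"HR": rng.standard_normal(300),
                      "EDA": rng.standard_normal(300)})
rate = rng.uniform(1.0, 3.0, size=(300, 1))      # stimulus presentation rate

model = VARMAX(endog, exog=rate, order=(1, 0))   # bivariate AR(1) + input
res = model.fit(disp=False)
print(res.summary())                             # exogenous effects per channel
```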
Galka, Andreas; Siniatchkin, Michael; Stephani, Ulrich; Groening, Kristina; Wolff, Stephan; Bosch-Bayard, Jorge; Ozaki, Tohru
2010-12-01
The analysis of time series obtained by functional magnetic resonance imaging (fMRI) may be approached by fitting predictive parametric models, such as nearest-neighbor autoregressive models with exogenous input (NNARX). As a part of the modeling procedure, it is possible to apply instantaneous linear transformations to the data. Spatial smoothing, a common preprocessing step, may be interpreted as such a transformation. The autoregressive parameters may be constrained, such that they provide a response behavior that corresponds to the canonical haemodynamic response function (HRF). We present an algorithm for estimating the parameters of the linear transformations and of the HRF within a rigorous maximum-likelihood framework. Using this approach, an optimal amount of both the spatial smoothing and the HRF can be estimated simultaneously for a given fMRI data set. An example from a motor-task experiment is discussed. It is found that, for this data set, weak, but non-zero, spatial smoothing is optimal. Furthermore, it is demonstrated that activated regions can be estimated within the maximum-likelihood framework.
Users Manual for the Geospatial Stream Flow Model (GeoSFM)
Artan, Guleid A.; Asante, Kwabena; Smith, Jodie; Pervez, Md Shahriar; Entenmann, Debbie; Verdin, James P.; Rowland, James
2008-01-01
The monitoring of wide-area hydrologic events requires the manipulation of large amounts of geospatial and time series data into concise information products that characterize the location and magnitude of the event. To perform these manipulations, scientists at the U.S. Geological Survey Center for Earth Resources Observation and Science (EROS), with the cooperation of the U.S. Agency for International Development, Office of Foreign Disaster Assistance (USAID/OFDA), have implemented a hydrologic modeling system. The system includes a data assimilation component to generate data for a Geospatial Stream Flow Model (GeoSFM) that can be run operationally to identify and map wide-area streamflow anomalies. GeoSFM integrates a geographical information system (GIS) for geospatial preprocessing and postprocessing tasks and hydrologic modeling routines implemented as dynamically linked libraries (DLLs) for time series manipulations. Model results include maps depicting the status of streamflow and soil water conditions. This Users Manual provides step-by-step instructions for running the model and for downloading and processing the input data required for initial model parameterization and daily operation.
Rand, Troy J.; Myers, Sara A.; Kyvelidou, Anastasia; Mukherjee, Mukul
2015-01-01
A healthy biological system is characterized by a temporal structure that exhibits fractal properties and is highly complex. Unhealthy systems demonstrate lowered complexity and either greater or less predictability in the temporal structure of a time series. The purpose of this research was to determine if support surface translations with different temporal structures would affect the temporal structure of the center of pressure (COP) signal. Eight healthy young participants stood on a force platform that was translated in the anteroposterior direction for input conditions of varying complexity: white noise, pink noise, brown noise, and sine wave. Detrended fluctuation analysis was used to characterize the long-range correlations of the COP time series in the AP direction. Repeated measures ANOVA revealed differences among conditions (P < .001). The less complex support surface translations resulted in a less complex COP compared to normal standing. A quadratic trend analysis demonstrated an inverted-u shape across an increasing order of predictability of the conditions (P < .001). The ability to influence the complexity of postural control through support surface translations can have important implications for rehabilitation. PMID:25994281
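Detrended fluctuation analysis, the method used above, is compact enough to sketch; the window sizes and the order-1 (linear) detrending are conventional choices rather than the study's exact settings.

```python
# Detrended fluctuation analysis (DFA): integrate the series, detrend it in
# windows of increasing size, and read the scaling exponent alpha from the
# slope of log F(s) versus log s.
import numpy as np

def dfa_alpha(x, scales=(16, 32, 64, 128, 256)):
    y = np.cumsum(x - np.mean(x))          # integrated (profile) series
    fluct = []
    for s in scales:
        rms = []
        for w in range(len(y) // s):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

# alpha ~ 0.5 for white noise, ~1.0 for pink noise, ~1.5 for brown noise
print(dfa_alpha(np.random.default_rng(0).standard_normal(4096)))
```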
Leinweber, Peter; Bathmann, Ulrich; Buczko, Uwe; Douhaire, Caroline; Eichler-Löbermann, Bettina; Frossard, Emmanuel; Ekardt, Felix; Jarvie, Helen; Krämer, Inga; Kabbe, Christian; Lennartz, Bernd; Mellander, Per-Erik; Nausch, Günther; Ohtake, Hisao; Tränckner, Jens
2018-01-01
This special issue of Ambio compiles a series of contributions made at the 8th International Phosphorus Workshop (IPW8), held in September 2016 in Rostock, Germany. The introductory overview article summarizes major published scientific findings in the time period from IPW7 (2015) until recently, including presentations from IPW8. The P issue was subdivided into four themes along the logical sequence of P utilization in production, environmental, and societal systems: (1) sufficiency and efficiency of P utilization, especially in animal husbandry and crop production; (2) P recycling: technologies and product applications; (3) P fluxes and cycling in the environment; and (4) P governance. The latter two themes had separate sessions for the first time in the International Phosphorus Workshop series; thus, this overview presents a scene-setting rather than an overview of the latest research for these themes. In summary, this paper details new findings in agricultural and environmental P research, which indicate reduced P inputs, improved management options, and provide translations into governance options for a more sustainable P use.
Post-Flight Estimation of Motion of Space Structures: Part 2
NASA Technical Reports Server (NTRS)
Brugarolas, Paul; Breckenridge, William
2008-01-01
A computer program related to the one described in the immediately preceding article estimates the relative position of two space structures that are hinged to each other. The input to the program consists of time-series data on distances, measured by two range finders at different positions on one structure, to a corner-cube retroreflector on the other structure. Given a Cartesian (x,y,z) coordinate system and the known x coordinate of the retroreflector relative to the y,z plane that contains the range finders, the program estimates the y and z coordinates of the retroreflector. The estimation process involves solving for the y,z coordinates of the intersection between (1) the y,z plane that contains the retroreflector and (2) spheres, centered on the range finders, having radii equal to the measured distances. In general, there are two such solutions and the program chooses the one consistent with the design of the structures. The program implements a Kalman filter. The output of the program is a time series of estimates of the relative position of the structures.
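The sphere-plane intersection reduces to intersecting two circles in the retroreflector's plane. The sketch below uses made-up coordinates and omits the program's Kalman filtering layer.

```python
# Intersect the two circles induced in the retroreflector's plane by the
# measured ranges from two range finders at known (y, z) positions.
import numpy as np

def retro_yz(p1, p2, r1, r2, dx):
    """p1, p2: (y, z) of the range finders; r1, r2: measured ranges;
    dx: known x-offset of the retroreflector's plane. Returns both candidate
    (y, z) solutions; the flight code keeps the one consistent with the
    structures' design."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    a1 = np.sqrt(r1**2 - dx**2)           # in-plane circle radii
    a2 = np.sqrt(r2**2 - dx**2)
    d = np.linalg.norm(p2 - p1)           # range-finder separation
    u = (a1**2 - a2**2 + d**2) / (2 * d)  # distance from p1 along baseline
    h = np.sqrt(a1**2 - u**2)             # offset perpendicular to baseline
    base = p1 + u * (p2 - p1) / d
    perp = np.array([p1[1] - p2[1], p2[0] - p1[0]]) / d
    return base + h * perp, base - h * perp

# Example with illustrative geometry: finders 2 m apart, offset dx = 1 m.
sols = retro_yz((0.0, 0.0), (2.0, 0.0), 1.9, 1.6, 1.0)
```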
Position-sensitive proportional counter with low-resistance metal-wire anode
Kopp, Manfred K.
1980-01-01
A position-sensitive proportional counter circuit is provided which allows the use of a conventional (low-resistance, metal-wire anode) proportional counter for spatial resolution of an ionizing event along the anode of the counter. A pair of specially designed active-capacitance preamplifiers is used to terminate the anode ends, wherein the anode is treated as an RC line. The preamplifiers act as stabilized active capacitance loads and each is composed of a series-feedback, low-noise amplifier and a unity-gain, shunt-feedback amplifier whose output is connected through a feedback capacitor to the series-feedback amplifier input. The stabilized capacitance loading of the anode allows distributed RC-line position encoding and subsequent time-difference decoding by sensing the difference in rise times of pulses at the anode ends, where the difference is primarily in response to the distributed capacitance along the anode. This allows the use of lower-resistance wire anodes for spatial radiation detection, which simplifies the counter construction and handling of the anodes, and stabilizes the anode resistivity at high count rates (>10^6 counts/sec).
A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.
Barba, Lida; Rodríguez, Nibaldo
2017-01-01
A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used. They represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an autoregressive neural network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method, MSVD, in comparison with the forecasting models based on SWT. PMID:28261267
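One level of the Hankel-matrix decomposition can be sketched as follows; the multilevel recursion and the MIMO forecasting models of the paper are omitted, and the window length and rank are illustrative.

```python
# Embed the series in a Hankel matrix, keep the leading singular triplet(s)
# as the low-frequency component, and map back by diagonal averaging.
import numpy as np

def hankel_split(x, window, rank=1):
    n = len(x)
    H = np.array([x[i:i + window] for i in range(n - window + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # diagonal averaging (as in SSA) turns the rank-reduced matrix back
    # into a time series
    low = np.zeros(n)
    counts = np.zeros(n)
    for i in range(H_low.shape[0]):
        low[i:i + window] += H_low[i]
        counts[i:i + window] += 1
    low /= counts
    return low, x - low   # low- and high-frequency components
```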
NASA Astrophysics Data System (ADS)
Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.
2012-04-01
The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both observed inputs (precipitation and temperature) and streamflow observations used in the calibration of the hydrological model are explicitly accounted for. To achieve this goal we applied the elevation-distributed HBV model, operating on daily time steps, to a small high-elevation catchment in Southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and possible elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure where the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for, whereas the sampling uncertainty related to network density was neglected. For every day a random sample of precipitation and temperature inputs was drawn to be applied as inputs to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability for rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to make one realisation of a whole time series of streamflow; thus the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM-based MCMC routine. The effects of having less information (e.g. missing one streamflow measurement for defining the rating curve or missing one precipitation station) were also investigated.
Identification and modification of dominant noise sources in diesel engines
NASA Astrophysics Data System (ADS)
Hayward, Michael D.
Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far-fields with data from fewer tests than is currently required. Pre-conditioning and use of numerically robust methods to solve a set of cross-spectral density equations results in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources, that, when scaled and added, result in the input cross spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through use of a percentage contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far-field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources both in the near- and far-fields from one fired test, which significantly reduces the need for extensive fired and motored testing. Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.
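The virtual-source step can be shown directly: at each frequency, a singular value decomposition of the measured cross-spectral matrix yields the power spectra of a set of independent virtual sources. The function below is a sketch; the transfer-path calculation and the percentage-contribution plotting are omitted.

```python
# Singular values of the cross-spectral matrix at each frequency, read as
# the power spectral densities of independent virtual sources.
import numpy as np

def virtual_source_spectra(csd):
    """csd: array (n_freqs, n_mics, n_mics) of measured cross-spectral
    matrices. Returns (n_freqs, n_mics): singular values per frequency,
    strongest first (np.linalg.svd returns them in descending order)."""
    return np.array([np.linalg.svd(G, compute_uv=False) for G in csd])
```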
NASA Astrophysics Data System (ADS)
Manfron, Giacinto; Delmotte, Sylvestre; Busetto, Lorenzo; Hossard, Laure; Ranghetti, Luigi; Brivio, Pietro Alessandro; Boschetti, Mirco
2017-05-01
Crop simulation models are commonly used to forecast the performance of cropping systems under different hypotheses of change. Their use on a regional scale is generally constrained, however, by a lack of information on the spatial and temporal variability of environment-related input variables (e.g., soil) and agricultural practices (e.g., sowing dates) that influence crop yields. Satellite remote sensing data can shed light on such variability by providing timely information on crop dynamics and conditions over large areas. This paper proposes a method for analyzing time series of MODIS satellite data in order to estimate the inter-annual variability of winter wheat sowing dates. A rule-based method was developed to automatically identify a reliable sample of winter wheat field time series, and to infer the corresponding sowing dates. The method was designed for a case study in the Camargue region (France), where winter wheat is characterized by vernalization, as in other temperate regions. The detection criteria were chosen on the grounds of agronomic expertise and by analyzing high-confidence time-series vegetation index profiles for winter wheat. This automatic method identified the target crop on more than 56% (four-year average) of the cultivated areas, with low commission errors (11%). It also captured the seasonal variability in sowing dates with errors of ±8 and ±16 days in 46% and 66% of cases, respectively. Extending the analysis to the years 2002-2012 showed that sowing in the Camargue was usually done on or around November 1st (±4 days). Comparing inter-annual sowing date variability with the main local agro-climatic drivers showed that the type of preceding crop and the weather conditions during the summer season before the wheat sowing had a prominent role in influencing winter wheat sowing dates.
Effective precipitation duration for runoff peaks based on catchment modelling
NASA Astrophysics Data System (ADS)
Sikorska, A. E.; Viviroli, D.; Seibert, J.
2018-01-01
Although precipitation intensities may vary greatly during a flood event, detailed information about these intensities may not be required to accurately simulate floods with a hydrological model, which reacts rather to cumulative precipitation sums. This raises two questions: to what extent is it important to preserve sub-daily precipitation intensities, and how long does it effectively rain from the hydrological point of view? Both questions might seem straightforward to answer with a direct analysis of past precipitation events, but such an analysis requires some arbitrary choices regarding the length of a precipitation event. To avoid these arbitrary decisions, here we present an alternative approach to characterizing the effective length of a precipitation event which is based on runoff simulations with respect to large floods. More precisely, we quantify the fraction of a day over which the daily precipitation has to be distributed to faithfully reproduce the large annual and seasonal floods which were generated by the hourly precipitation rate time series. New precipitation time series were generated by first aggregating the hourly observed data into daily totals and then evenly distributing them over sub-daily periods (n hours). These simulated time series were used as input to a hydrological bucket-type model and the resulting runoff flood peaks were compared to those obtained when using the original precipitation time series. We then define the effective daily precipitation duration as the number of hours n for which the largest peaks are simulated best. For nine mesoscale Swiss catchments this effective daily precipitation duration was about half a day, which indicates that detailed information on precipitation intensities is not necessarily required to accurately estimate peaks of the largest annual and seasonal floods. These findings support the use of simple disaggregation approaches to make use of past daily precipitation observations or daily precipitation simulations (e.g. from climate models) for hydrological modeling at an hourly time step.
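The series construction can be sketched as follows, under the assumption that each day's total is placed uniformly in the first n hours of the day (the text does not fix the placement); the runoff-model comparison that selects n is omitted.

```python
# Aggregate hourly precipitation to daily totals, then spread each total
# evenly over n hours of its day.
import numpy as np

def redistribute_daily(hourly, n_hours):
    """hourly: array of length 24*d. Returns a new hourly series in which
    each day's total falls uniformly in its first n_hours hours."""
    days = hourly.reshape(-1, 24)
    totals = days.sum(axis=1)
    out = np.zeros_like(days)
    out[:, :n_hours] = (totals / n_hours)[:, None]
    return out.ravel()

# e.g. an "effective duration" of half a day:
hourly = np.random.default_rng(2).gamma(0.3, 1.0, size=24 * 10)
half_day = redistribute_daily(hourly, 12)
assert np.isclose(half_day.sum(), hourly.sum())  # daily mass is conserved
```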
NASA Astrophysics Data System (ADS)
Kurtulus, Bedri; Razack, Moumtaz
2010-02-01
This paper compares two methods for modeling karst aquifers, which are heterogeneous, highly non-linear, and hierarchical systems. There is a clear need to model these systems given the crucial role they play in water supply in many countries. In recent years, the main components of soft computing (fuzzy logic (FL) and artificial neural networks (ANNs)) have come to prevail in the modeling of complex non-linear systems in different scientific and technological disciplines. In this study, Artificial Neural Network (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS) methods were used for the prediction of daily discharge of karstic aquifers and their capabilities were compared. The approach was applied to 7 years of daily data of the La Rochefoucauld karst system in south-western France. In order to predict the karst daily discharges, single-input (rainfall, piezometric level) vs. multiple-input (rainfall and piezometric level) series were used. In addition to these inputs, all models used measured or simulated discharges from the previous days with a specified delay. The models were designed in a Matlab™ environment. An automatic procedure was used to select the best calibrated models. Daily discharge predictions were then performed using the calibrated models. Comparing predicted and observed hydrographs indicates that both models (ANN and ANFIS) provide close predictions of the karst daily discharges. The summary statistics of both series (observed and predicted daily discharges) are comparable. The performance of both models improves when the number of inputs is increased from one to two. The root mean square error between the observed and predicted series reaches a minimum for two-input models. However, the ANFIS model demonstrates a better performance than the ANN model in predicting peak flow. The ANFIS approach demonstrates a better generalization capability and slightly higher performance than the ANN, especially for peak discharges.
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2016-12-01
InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between 2 SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair, with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as the model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances. We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
Geocenter Motion Derived from the JTRF2014 Combination
NASA Astrophysics Data System (ADS)
Abbondanza, C.; Chin, T. M.; Gross, R. S.; Heflin, M. B.; Parker, J. W.; van Dam, T. M.; Wu, X.
2016-12-01
JTRF2014 represents the JPL Terrestrial Reference Frame (TRF) recently obtained as a result of the combination of the space-geodetic reprocessed inputs to the ITRF2014. Based upon a Kalman filter and smoother approach, JTRF2014 assimilates station positions and Earth-Orientation Parameters (EOPs) from GNSS, VLBI, SLR and DORIS and combines them through local tie measurements. JTRF2014 is, in essence, a time-series-based TRF. In the JTRF2014 the dynamical evolution of the station positions is formulated by introducing linear and seasonal terms (annual and semi-annual periodic modes). Non-secular and non-seasonal motions of the geodetic sites are included in the smoothed time series by properly defining the station position process noise, whose variance is characterized by analyzing station displacements induced by temporal changes of planetary fluid masses (atmosphere, oceans and continental surface water). With its station position time series output at a weekly resolution, JTRF2014 materializes a sub-secular frame whose origin is at the quasi-instantaneous Center of Mass (CM) as sensed by SLR. Both SLR and VLBI contribute to the scale of the combined frame. The sub-secular nature of the frame allows the users to directly access the quasi-instantaneous geocenter and scale information. Unlike standard combined TRF products, which only give access to the secular component of the CM-CN motions, JTRF2014 is able to preserve -in addition to the long-term- the seasonal, non-seasonal and non-secular components of the geocenter motion. In the JTRF2014 assimilation scheme, local tie measurements are used to transfer the geocenter information from SLR to the space-geodetic techniques which are either insensitive to CM (VLBI) or whose geocenter motion is poorly determined (GNSS and DORIS). Properly tied to the CM frame through local ties and co-motion constraints, GNSS, VLBI and DORIS contribute to improving the SLR network geometry. In this paper, the determination of the weekly (CM-CN) time series as inferred from the JTRF2014 combination will be presented. Comparisons with geocenter time series derived from global inversions of GPS, GRACE and ocean bottom pressure models show that the JTRF2014-derived geocenter compares favourably with the results of the inversion.
Temporal Patterns in Dissolved Organic Carbon Composition in an Urban Lake
NASA Astrophysics Data System (ADS)
Hartnett, H. E.; Palta, M. M.; Grimm, N. B.; Ruhi, A.; van Shaijik, M.
2017-12-01
Tempe Town Lake (TTL) is a hydrologically regulated reservoir in Tempe, Arizona. The lake has high primary production and receives dissolved organic carbon (DOC) from rainfall, storm flow, and upstream river discharge. We applied an ARIMA time-series model to a three-year period for which we have high-frequency chemistry, meteorology, and streamflow data and analyzed external (rainfall, stream flow) and internal (dissolved O2) drivers of DOC content and composition. DOC composition was represented by fluorescence-based indices (fluorescence index, humification index, freshness) related to DOC source (microbially vs. terrestrially derived) and DOC reactivity. Patterns in DOC concentration and composition suggest carbon cycling in the lake responds both to meteorological events and to anthropogenic activity. The fluorescence-derived DOC composition is consistent with seasonally distinct inputs of algal- and terrestrially-derived carbon. For example, Tempe Town Lake is supersaturated in O2 over 70% of the time, suggesting the system is autotrophic, and primary productivity (i.e., O2 saturation state) was the strongest driver of DOC concentration. In contrast, external drivers (rainfall pattern, streamflow) were the strongest determinants of DOC composition. Biological processes (e.g., algal growth) generate carbon in the lake during spring and summer, and high Fluorescence Index and Freshness values at this time are indicative of algal-derived material; these parameters generally decrease with rain or flow, suggesting algal-derived carbon is diluted by external water inputs. During dry periods, carbon builds up on the land surface and subsequent rainfall events deliver terrestrial carbon to the lake. Further evidence that rain and streamflow deliver land-derived material is provided by increases in the Humification Index (an indicator of terrestrial material) following rain/flow events. Our results indicate that Tempe Town Lake generates autochthonous carbon and has the capacity to process allochthonous carbon from the urban environment. Ongoing work is comparing these results to other periods in the 10-year time series to test whether the driver-DOC relationships are robust over longer time scales and to evaluate how changes in lake management and climate have altered DOC over time.
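The driver analysis described here maps naturally onto an ARIMA-type model with exogenous regressors. The following sketch, with entirely synthetic stand-ins for the lake series, shows how such a model could be set up with statsmodels' SARIMAX; the series names, relationships and model order are illustrative, not the study's data or fitted values.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical daily series standing in for the lake data: a fluorescence
# index responding to rainfall and streamflow, the two external drivers.
idx = pd.date_range("2014-01-01", periods=3 * 365, freq="D")
rain = np.random.default_rng(0).gamma(0.3, 5.0, size=idx.size)      # mm/day
flow = 10 + np.convolve(rain, np.ones(5) / 5, mode="same")          # proxy
fi = (1.4 - 0.002 * flow
      + 0.3 * np.sin(2 * np.pi * idx.dayofyear / 365)
      + np.random.default_rng(1).normal(0, 0.05, idx.size))

# ARIMA-family model with exogenous regressors for the external drivers
fit = SARIMAX(pd.Series(fi, index=idx),
              exog=np.column_stack([rain, flow]),
              order=(1, 0, 1)).fit(disp=False)
print(fit.params)   # the exog coefficients quantify driver influence
```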
NASA Astrophysics Data System (ADS)
George, Daniel; Huerta, E. A.
2018-03-01
The recent Nobel-prize-winning detections of gravitational waves from merging black holes and the subsequent detection of the collision of two neutron stars in coincidence with electromagnetic observations have inaugurated a new era of multimessenger astrophysics. To enhance the scope of this emergent field of science, we pioneered the use of deep learning with convolutional neural networks that take time-series inputs for rapid detection and characterization of gravitational wave signals. This approach, Deep Filtering, was initially demonstrated using simulated LIGO noise. In this article, we present the extension of Deep Filtering using real data from LIGO, for both detection and parameter estimation of gravitational waves from binary black hole mergers using continuous data streams from multiple LIGO detectors. We demonstrate for the first time that machine learning can detect and estimate the true parameters of real events observed by LIGO. Our results show that Deep Filtering achieves similar sensitivities and lower errors compared to matched filtering while being far more computationally efficient and more resilient to glitches, allowing real-time processing of weak time-series signals in non-stationary non-Gaussian noise with minimal resources, and also enables the detection of new classes of gravitational wave sources that may go unnoticed with existing detection algorithms. This unified framework for data analysis is ideally suited to enable coincident detection campaigns of gravitational waves and their multimessenger counterparts in real time.
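A minimal sketch of the core architectural idea, a 1-D convolutional network that ingests a raw time-series segment and emits a signal-versus-noise score, is given below. This is not the authors' Deep Filtering network; the layer sizes and the 8192-sample input length are illustrative.

```python
import torch
import torch.nn as nn

# Minimal 1-D CNN taking a whitened strain segment and emitting class logits.
class TimeSeriesCNN(nn.Module):
    def __init__(self, n_samples=8192):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16, stride=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=8, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        with torch.no_grad():  # infer the flattened size from a dummy input
            n_flat = self.features(torch.zeros(1, 1, n_samples)).numel()
        self.head = nn.Linear(n_flat, 2)  # classes: noise, signal

    def forward(self, x):                 # x: (batch, 1, n_samples)
        return self.head(self.features(x).flatten(1))

scores = TimeSeriesCNN()(torch.randn(4, 1, 8192))  # -> (4, 2) logits
```

Because the convolutions slide over the input, the same trained filters can be applied to a continuous data stream by stepping a window along it, which is what makes this style of detector cheap at inference time.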
Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.
Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D
2017-03-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. We discuss how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process, while the logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback, we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series.
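The equivalence of the AND operation to a bimolecular step can be sketched with a toy rate equation: the product of A + B -> C accumulates only when both inputs are present. The rate constant and integration settings below are illustrative.

```python
# AND as a bimolecular step A + B -> C: the product builds up only when
# both inputs are present. Simple Euler integration; k is illustrative.
def and_gate(a0, b0, k=1.0, t_end=5.0, dt=1e-3):
    a, b, c = a0, b0, 0.0
    for _ in range(int(t_end / dt)):
        rate = k * a * b                      # bimolecular rate law
        a, b, c = a - rate * dt, b - rate * dt, c + rate * dt
    return c

for a0, b0 in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(a0, b0, round(and_gate(a0, b0), 3))  # c is high only for (1, 1)
```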
NASA Astrophysics Data System (ADS)
Chowdhury, S.; Sharma, A.
2005-12-01
Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of the uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models calibrated to the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics and an increase in both bias and predictive uncertainty in simulations. This is especially true where the nature of the uncertainty likely in the future is significantly different from that in the past. Possible examples include situations where the accuracy of the catchment-averaged rainfall has increased substantially due to an increase in rain-gauge density, or the accuracy of climatic observations (such as sea surface temperatures) has increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX) [Cook, 1994], operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts by generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. Once several such realisations have been drawn, one can formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in the alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of uncertain distributed sea surface temperature anomalies on an indicator of the El Niño Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of the uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R., Stefanski, L.A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89(428), 1314-1328, 1994.
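A compact sketch of the SIMEX procedure for a simple linear errors-in-variables problem is given below; the quadratic extrapolant and the toy noise levels are illustrative choices, not those of the study.

```python
import numpy as np

# SIMEX sketch for y = beta * x_true + e, where x is observed with known
# additive noise variance s2_u. Extrapolating the noise-inflated estimates
# back to lambda = -1 approximates the error-free estimate (Cook &
# Stefanski, 1994); a quadratic extrapolant is used here.
def simex_slope(x, y, s2_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_rep=200):
    est, lams = [np.polyfit(x, y, 1)[0]], [0.0]   # naive fit at lambda = 0
    rng = np.random.default_rng(0)
    for lam in lambdas:
        reps = [np.polyfit(x + rng.normal(0, np.sqrt(lam * s2_u), x.size),
                           y, 1)[0] for _ in range(n_rep)]
        est.append(np.mean(reps)); lams.append(lam)
    return np.polyval(np.polyfit(lams, est, 2), -1.0)

# toy demo: measurement error attenuates the naive slope (~1.6 here);
# SIMEX extrapolation moves it back toward the true value of 2.0
x_true = np.random.default_rng(1).normal(size=2000)
x_obs = x_true + np.random.default_rng(2).normal(0, 0.5, 2000)
y = 2.0 * x_true + np.random.default_rng(3).normal(0, 0.2, 2000)
print(np.polyfit(x_obs, y, 1)[0], simex_slope(x_obs, y, s2_u=0.25))
```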
Baeza, A; Guillén, J; Ontalba Salamanca, M A; Rodríguez, A; Ager, F J
2009-10-01
The Proserpina dam was built in Roman times to provide drinking water to Emerita Augusta (today's Mérida in SW Spain). During maintenance work, a sediment core was extracted, offering an excellent opportunity to analyze the historical environmental impacts of the dam and its reservoir over the 2000 years since Roman times. In order to establish an accurate chronology, (14)C ages were determined by accelerator mass spectrometry (AMS). Core samples were assayed for their content of uranium- and thorium-series isotopes, (40)K, and the anthropogenic radionuclides (137)Cs, (90)Sr, and (239+240)Pu. Potassium-40 presented the highest activity level, which was not constant with depth. The uranium and thorium series were generally in equilibrium, suggesting there had been no additional input of natural radionuclides. The presence of (137)Cs was found only in association with the global fallout of the early 1960s. Multi-element assays were performed using the PIXE and PIGE techniques. Some variations in the multi-element concentrations were observed with depth, but the sediment core could be considered clean, and no presumptive anthropogenic pollutants were found. Nevertheless, an unusually high Zn content was detected at depths corresponding to pre-Roman times, due to geological anomalies in the area.
NASA Astrophysics Data System (ADS)
Witt, Thomas J.; Fletcher, N. E.
2010-10-01
We investigate some statistical properties of ac voltages from a white noise source measured with a digital lock-in amplifier equipped with finite impulse response output filters which introduce correlations between successive voltage values. The main goal of this work is to propose simple solutions to account for correlations when calculating the standard deviation of the mean (SDM) for a sequence of measurement data acquired using such an instrument. The problem is treated by time series analysis based on a moving average model of the filtering process. Theoretical expressions are derived for the power spectral density (PSD), the autocorrelation function, the equivalent noise bandwidth and the Allan variance; all are related to the SDM. At most three parameters suffice to specify any of the above quantities: the filter time constant, the time between successive measurements (both set by the lock-in operator) and the PSD of the white noise input, h0. Our white noise source is a resistor so that the PSD is easily calculated; there are no free parameters. Theoretical expressions are checked against their respective sample estimates and, with the exception of two of the bandwidth estimates, agreement to within 11% or better is found.
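The practical consequence, that the naive SDM formula understates the uncertainty when successive samples are correlated by the output filter, can be demonstrated with a toy moving-average stand-in for the FIR filter; the filter length and sample count below are hypothetical.

```python
import numpy as np

# White noise through a length-m moving average (a stand-in for the
# lock-in's FIR output filter) yields correlated samples; the naive SDM
# then underestimates the true uncertainty of the mean.
rng = np.random.default_rng(0)
m = 8                                           # filter length (illustrative)
v = np.convolve(rng.normal(size=20000), np.ones(m) / m, mode="valid")

n = v.size
naive_sdm = v.std(ddof=1) / np.sqrt(n)

# MA(m-1) autocorrelation is rho_k = 1 - k/m for lags k < m; the variance
# of the mean of n correlated samples picks up the standard factor
# 1 + 2 * sum_k (1 - k/n) * rho_k, which here approaches m.
rho = 1 - np.arange(1, m) / m
factor = 1 + 2 * np.sum((1 - np.arange(1, m) / n) * rho)
corrected_sdm = naive_sdm * np.sqrt(factor)
print(naive_sdm, corrected_sdm)   # corrected is ~ sqrt(m) times larger
```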
Data assimilation using a GPU accelerated path integral Monte Carlo approach
NASA Astrophysics Data System (ADS)
Quinn, John C.; Abarbanel, Henry D. I.
2011-09-01
The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example using the transmembrane voltage time series of a simulated neuron as input, together with a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300 compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.
Generalized Optoelectronic Model of Series-Connected Multijunction Solar Cells
Geisz, John F.; Steiner, Myles A.; Garcia, Ivan; ...
2015-10-02
The emission of light from each junction in a series-connected multijunction solar cell, we found, both complicates and elucidates the understanding of its performance under arbitrary conditions. Bringing together many recent advances in this understanding, we present a general 1-D model to describe luminescent coupling that arises from both voltage-driven electroluminescence and voltage-independent photoluminescence in nonideal junctions, including effects such as Sah-Noyce-Shockley (SNS) recombination with n ≠ 2, Auger recombination, shunt resistance, reverse-bias breakdown, series resistance, and significant dark area losses. The individual junction voltages and currents are experimentally determined from measured optical and electrical inputs and outputs of the device within the context of the model to fit parameters that describe the device's performance under arbitrary input conditions. Furthermore, our techniques to experimentally fit the model are demonstrated for a four-junction inverted metamorphic solar cell, and the predictions of the model are compared with concentrator flash measurements.
NASA Astrophysics Data System (ADS)
Baisden, W. T.
2011-12-01
Time-series radiocarbon measurements have substantial ability to constrain the size and residence time of the soil C pools commonly represented in ecosystem models. Radiocarbon remains unique in its ability to constrain the large stabilized C pool with decadal residence times. Radiocarbon also contributes usefully to constraining the size and turnover rate of the passive pool, but typically struggles to constrain pools with residence times of less than a few years. Overall, the number of pools and associated turnover rates that can be constrained depends upon the number of time-series samples available, the appropriateness of chemical or physical fractions to isolate unequivocal pools, and the utility of additional C flux data to provide further constraints. In New Zealand pasture soils, we demonstrate the ability to constrain decadal turnover times to within a few years for the stabilized pool and to reasonably constrain the passive fraction. Good constraint is obtained with two time-series samples spaced 10 or more years apart after 1970; three or more time-series samples further improve the level of constraint. Work within this context shows that a two-pool model does explain soil radiocarbon data for the most detailed profiles available (11 time-series samples), and identifies clear and consistent differences in rates of C turnover and passive fraction in Andisols vs non-Andisols. Furthermore, samples from multiple horizons can commonly be combined, yielding consistent residence times and passive fraction estimates that are stable with, or increase with, depth at different sites. Radiocarbon generally fails to quantify rapid C turnover, however. Given that the strength of radiocarbon is estimating the size and turnover of the stabilized (decadal) and passive (millennial) pools, the magnitude of fast-cycling pool(s) can be estimated by subtracting the radiocarbon-based estimates of turnover within the stabilized and passive pools from total estimates of NPP. In grazing land, these estimates can be derived primarily from measured aboveground NPP and calculated belowground NPP. Results suggest that only 19-36% of heterotrophic soil respiration is derived from the soil C with rapid turnover times. A final logical step in synthesis is the analysis of temporal variation in NPP, primarily due to climate, as a driver of changes in plant inputs, resulting in dynamic changes in rapid and decadal soil C pools. In sites with good time-series samples from 1959-1975, we examine the apparent impacts of measured or modelled (Biome-BGC) NPP on soil Δ14C. Ultimately, these approaches have the ability to empirically constrain, and provide limited verification of, the soil C cycle as commonly depicted in ecosystem biogeochemistry models.
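A toy version of the two-pool bookkeeping described here, tracking each pool's fraction modern as it relaxes toward a (crudely idealized) atmospheric bomb-spike curve, might look as follows; all pool sizes, turnover rates and the atmospheric curve are illustrative stand-ins, not fitted values.

```python
import numpy as np

# Two-pool soil C sketch: a stabilized pool (decadal turnover) and a
# passive pool (millennial). Each pool's fraction modern F relaxes toward
# the atmospheric F of the year while decaying radioactively. The
# atmospheric curve is a crude stand-in for the real bomb-spike record.
years = np.arange(1950, 2011)
f_atm = 1.0 + 0.8 * np.exp(-np.abs(years - 1964) / 12.0) * (years >= 1955)

lam = 1.0 / 8267.0                       # 14C decay constant (per year)
k_stab, k_pass = 1 / 25.0, 1 / 1500.0    # turnover rates (illustrative)
F_stab, F_pass = 0.95, 0.80              # pre-bomb initial fractions modern

for fa in f_atm:
    F_stab += k_stab * (fa - F_stab) - lam * F_stab
    F_pass += k_pass * (fa - F_pass) - lam * F_pass

# bulk soil value for, e.g., 70% stabilized / 30% passive carbon
print(0.7 * F_stab + 0.3 * F_pass)
```

Fitting k_stab against repeated samples over decades is what the time-series measurements make possible: the stabilized pool tracks the bomb spike on a decadal lag, while the passive pool barely moves.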
Laboratory Performance Evaluation Report of SEL 421 Phasor Measurement Unit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; faris, Anthony J.; Martin, Kenneth E.
2007-12-01
PNNL and BPA have been in close collaboration on laboratory performance evaluation of phasor measurement units for over ten years. A series of evaluation tests are designed to confirm accuracy and determine measurement performance under a variety of conditions that may be encountered in actual use. Ultimately the testing conducted should provide parameters that can be used to adjust all measurements to a standardized basis. These tests are performed with a standard relay test set using recorded files of precisely generated test signals. The test set provides test signals at a level and in a format suitable for input to a PMU that accurately reproduces the signals in both signal amplitude and timing. Test set outputs are checked to confirm the accuracy of the output signal. The recorded signals include both current and voltage waveforms and a digital timing track used to relate the PMU measured value with the test signal. Test signals include steady-state waveforms to test amplitude, phase, and frequency accuracy, modulated signals to determine measurement and rejection bands, and step tests to determine timing and response accuracy. Additional tests are included as necessary to fully describe the PMU operation. Testing is done with a BPA phasor data concentrator (PDC) which provides communication support and monitors data input for dropouts and data errors.
Nitrogen Isotope Analyses in Mollusk Shell: Applications to Environmental Sciences and Archaeology.
NASA Astrophysics Data System (ADS)
Andrus, C. F. T.; Bassett, C.; Black, H. D.; Payne, T. N.
2017-12-01
Several recent studies demonstrate that nitrogen isotope analysis of the organic fraction of mollusk shells can serve as a proxy for anthropogenic environmental impacts, including sewage input into estuaries. Analysis of δ15N in shells from archaeological sites permits construction of time-series proxy data from the present day to pre-industrial times, yielding insight into the history of some human environmental influences such as waste input and land use changes. Most such studies utilize a single bulk analysis per valve, combining shell material grown over time periods of one or more years. However, large, fast-growing species (e.g. some scallops and abalone) may permit sub-annual sampling, potentially yielding insight into seasonal processes. Such sclerochronological sampling of archaeological shells may enable researchers to detect variation at a finer temporal scale than has been attempted to date, which in turn may facilitate analysis of seasonal resource procurement strategies and related actions. This presentation will incorporate new and published data from the Atlantic, Pacific and Gulf of Mexico coasts of North America to assess how sclerochronological δ15N data can be useful to better understand pre-industrial human-environmental interaction and change, and also address diagenesis and other preservational concerns commonly found in archaeological samples.
VizieR Online Data Catalog: Evolution of solar irradiance during Holocene (Vieira+, 2011)
NASA Astrophysics Data System (ADS)
Vieira, L. E. A.; Solanki, S. K.; Krivova, N. A.; Usoskin, I.
2011-05-01
This is a composite total solar irradiance (TSI) time series for 9495 BC to 2007 AD, constructed as described in Sect. 3.3 of the paper. Since TSI is the main external heat input into the Earth's climate system, a consistent record covering as long a period as possible is needed for climate models. This was our main motivation for constructing this composite TSI time series. In order to produce a representative time series, we divided the Holocene into four periods according to the available data for each period. Table 4 (see below) summarizes the periods considered and the models available for each period. After the end of the Maunder Minimum we compute daily values, while prior to the end of the Maunder Minimum we compute 10-year averages. For the period for which both solar disk magnetograms and continuum images are available (period 1) we employ the SATIRE-S reconstruction (Krivova et al. 2003A&A...399L...1K; Wenzler et al. 2006A&A...460..583W). The SATIRE-T reconstruction (Krivova et al. 2010JGRA..11512112K) is used from the beginning of the Maunder Minimum (approximately 1640 AD) to 1977 AD. Prior to 1640 AD, reconstructions are based on cosmogenic isotopes (this paper). Different models of the Earth's geomagnetic field are available before and after approximately 5000 BC; we therefore treat periods 3 and 4 (before and after 5000 BC) separately. Further details can be found in the paper. We emphasize that the reconstructions based on different proxies have different time resolutions. (1 data file).
NASA Astrophysics Data System (ADS)
Kudomi, Nobuyuki; Watabe, Hiroshi; Hayashi, Takuya; Iida, Hidehiro
2007-04-01
Cerebral metabolic rate of oxygen (CMRO2), oxygen extraction fraction (OEF) and cerebral blood flow (CBF) images can be quantified using positron emission tomography (PET) by administering 15O-labelled water (H215O) and oxygen (15O2). Conventionally, these images are measured with separate scans for three tracers: C15O for CBV, H215O for CBF and 15O2 for CMRO2, with additional waiting times between the scans to minimize the influence of radioactivity from the previous tracer, which results in a relatively long study period. We have proposed a dual tracer autoradiographic (DARG) approach (Kudomi et al 2005), which enables us to measure CBF, OEF and CMRO2 rapidly by sequentially administering H215O and 15O2 within a short time. Because quantitative CBF and CMRO2 values are sensitive to the arterial input function, it is necessary to obtain an accurate input function, and a drawback of this approach is that it requires separation of the measured arterial blood time-activity curve (TAC) into pure water and oxygen input functions in the presence of residual radioactivity from the first injected tracer. For this separation, frequent manual sampling was previously required. The present paper describes two calculation methods, a linear and a model-based method, to separate the measured arterial TAC into its water and oxygen components. To validate these methods, we first generated a blood TAC for the DARG approach by combining the water and oxygen input functions obtained in a series of PET studies on normal human subjects. The combined data were then separated into water and oxygen components by the present methods. CBF and CMRO2 were calculated using the separated input functions and the tissue TAC. The quantitative accuracy of the CBF and CMRO2 values obtained by the DARG approach remained within the acceptable range (errors within 5%) when the area under the curve of the input function of the second tracer was larger than half that of the first. Bias and deviation in those values were also comparable to those of the conventional method when noise was imposed on the arterial TAC. We conclude that the present calculation-based methods could be of use for quantitative calculation of CBF and CMRO2 with the DARG approach.
Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant
2016-05-15
Many studies of Aerosol Optical Depth (AOD) have used Moderate Resolution Imaging Spectroradiometer (MODIS) derived data, but the accuracy of satellite data in comparison with ground data from the AErosol RObotic NETwork (AERONET) has always been questionable. To address this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. A time series model is used for accurate prediction of AOD, and statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error (MaxAPE), normalized Bayesian information criterion (NBIC) and Ljung-Box methods are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in the model performance. It was found that it is possible to predict AOD by statistical modeling using time series of past MODIS and AERONET data as input. Moreover, the results show that MODIS data can be estimated from AERONET data by adding 0.251627 ± 0.133589, and vice versa by subtracting. From the forecasts of AOD for the following years (2013-2017) produced by the developed ARIMA model, it is concluded that the forecasted ground AOD shows an increasing trend.
Simulation Exploration through Immersive Parallel Planes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
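The brushing logic described here reduces, in essence, to a rectangle test on one pair of dimensions combined with a time filter. A minimal sketch, with hypothetical data shapes and a made-up brush region:

```python
import numpy as np

# Each observation is a polyline through a series of planes; a rectangular
# brush on plane p selects observations whose (dim 2p, dim 2p+1) point lies
# inside the rectangle, optionally restricted to a time window.
def brush(data, t, plane, xr, yr, t_range):
    x, y = data[:, 2 * plane], data[:, 2 * plane + 1]
    sel = (xr[0] <= x) & (x <= xr[1]) & (yr[0] <= y) & (y <= yr[1])
    return sel & (t_range[0] <= t) & (t <= t_range[1])

data = np.random.rand(1000, 6)   # 1000 observations across 3 planes
t = np.random.rand(1000)         # time coordinate of each observation
mask = brush(data, t, plane=1, xr=(0.2, 0.5), yr=(0.4, 0.9), t_range=(0, 0.5))
print(mask.sum(), "observations selected")
```

In the actual system the selected mask would both highlight polylines and define the sub-region of parameter space in which new simulations are launched.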
NASA Astrophysics Data System (ADS)
Medina, H.; Romano, N.; Chirico, G. B.
2012-12-01
We present a dual Kalman filter (KF) approach for retrieving the states and parameters controlling soil water dynamics in a homogeneous soil column by using near-surface state observations. The dual Kalman filter couples a standard KF algorithm for retrieving the states with an unscented KF algorithm for retrieving the parameters. We examine the performance of the dual Kalman filter applied to two alternative state-space formulations of the Richards equation, differentiated by the type of variable employed for representing the states: either the soil water content (θ) or the soil matric pressure head (h). We use synthetic time series of true states and noise-corrupted observations, together with a synthetic time series of meteorological forcing. The performance analyses account for the effect of the input parameters, the observation depth and the assimilation frequency, as well as the relationship between the retrieved states and the assimilated variables. We show that the identifiability of the parameters is strongly conditioned by several factors, such as the initial guess of the unknown parameters, the wet or dry range of the retrieved states, the boundary conditions, and the form (h-based or θ-based) of the state-space formulation. State retrieval, by contrast, is effective even with a relatively coarse time resolution of the assimilated observations. The accuracy of the retrieved states exhibits limited sensitivity to the observation depth and the assimilation frequency.
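The dual estimation idea can be illustrated on a scalar toy model. The paper couples a standard KF for the states with an unscented KF for the parameters; in the toy below the parameter enters the observation linearly given the filtered state, so a second linear KF suffices. All values are illustrative.

```python
import numpy as np

# Toy dual Kalman filter for x_t = a * x_{t-1} + w,  y_t = x_t + v.
# Filter 1 tracks the state given the current parameter estimate; filter 2
# treats the parameter a as a random walk observed through y_t ~ a * x_{t-1}.
rng = np.random.default_rng(0)
a_true, q, r, n = 0.9, 0.05, 0.1, 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), n)

x_hat, Px = 0.0, 1.0             # state filter
a_hat, Pa, qa = 0.5, 1.0, 1e-4   # parameter filter (random-walk variance qa)
for t in range(1, n):
    # parameter step: y_t = a * x_hat + noise, linear in a given x_hat
    H = x_hat
    Pa += qa
    K = Pa * H / (H * Pa * H + Px + q + r)   # lumped observation noise
    a_hat += K * (y[t] - H * a_hat)
    Pa *= 1 - K * H
    # state step using the current parameter estimate
    x_pred, P_pred = a_hat * x_hat, a_hat ** 2 * Px + q
    Kx = P_pred / (P_pred + r)
    x_hat = x_pred + Kx * (y[t] - x_pred)
    Px = (1 - Kx) * P_pred

print(a_hat)  # drifts toward a_true = 0.9
```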
NASA Astrophysics Data System (ADS)
Dutta, D.; Das, P. K.; Paul, S.; Sharma, J. R.; Dadhwal, V. K.
2014-11-01
The mangrove ecosystem of the Sundarbans region plays an important ecological and socio-economic role in both India and Bangladesh. Ecological disturbance in these coastal mangrove forests is mainly attributed to periodic cyclones caused by deep depressions formed over the Bay of Bengal. In the present study, three major cyclones in the Sundarbans region were analyzed to establish the cause-and-effect relationship between cyclones and the resultant ecological disturbance. Moderate Resolution Imaging Spectroradiometer (MODIS) time-series data were used to generate the MODIS global disturbance index (MGDI), and its potential was explored for assessing the instantaneous ecological disturbance caused by cyclones with varying landfall intensities and at different stages of mangrove phenology. The time-series MGDI was converted into the percentage change in MGDI using its multi-year mean for each pixel, and its response to several cyclonic events was studied. The affected areas were identified by analyzing Landsat-8 satellite data before and after the cyclone, and the MGDI values of the affected areas were used to develop a threshold for delineating disturbed pixels. The selected threshold was applied to the time-series MGDI images to delineate the disturbed areas for each year individually and so identify the frequently disturbed areas. The resulting disturbance intensity map was able to detect the chronically affected areas, which can serve as a valuable input for modelling the biomigration of invasive species and for efficient forest management.
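A minimal sketch of the thresholding step, expressing MGDI as a percentage change from the pixel-wise multi-year mean and flagging recurrently disturbed pixels, is shown below; the array sizes and threshold value are hypothetical.

```python
import numpy as np

# Disturbance mapping sketch: express each year's MGDI as a percentage
# change from the pixel's multi-year mean, then flag pixels exceeding a
# threshold calibrated on known affected areas (value illustrative).
mgdi = np.random.rand(15, 400, 400)             # years x rows x cols
mean = mgdi.mean(axis=0)
pct_change = 100 * (mgdi - mean) / mean

THRESHOLD = 12.0                                # percent, from calibration
disturbed = pct_change > THRESHOLD              # per-year disturbance masks
frequency = disturbed.sum(axis=0)               # chronically affected pixels
print((frequency >= 3).sum(), "pixels disturbed in 3+ years")
```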
NASA Astrophysics Data System (ADS)
Clark, F. R.; McKee, B. A.; Duncan, D. D.
2002-12-01
Particulate and dissolved phases of a suite of metals and radionuclides were analyzed in fluid mud samples collected during a time series taken during the passage of a winter storm on the Atchafalaya Shelf off the coast of Louisiana. The shelf receives an estimated 30% of the flow of the Mississippi River from its distributary, the Atchafalaya River, which contributes a high sediment load to the shelf. Frequent winter storms provide the shear stress to resuspend sediments and form fluid mud. Samples of fluid mud and overlying water were collected every two hours for 56 hours. Meteorological data as well as turbidity measurements by OBS were collected throughout the study. Bottom sediments were also collected before and after the time series. Partitioning effects were investigated for (7)Be, (234)Th, and (210)Pb by gamma spectroscopy, and for several redox-sensitive metals, including Fe, Mn, Mo, Te, Re, U, Al, Ti, and V, by ICP-MS analysis. Preliminary results indicate a rapid establishment of reducing conditions in fluid mud immediately overlying the seabed. These conditions persist until the suspended sediments in the fluid mud settle and the fluid mud dissipates. The recurrence of storm front passages and subsequent fluid mud formation causes repeated cycling from oxic to suboxic conditions in these coastal bottom waters. This redox cycling could potentially alter the fates of redox-sensitive metals, especially those associated with metal oxide carrier phases.
NASA Astrophysics Data System (ADS)
Mizukami, N.; Smith, M. B.
2010-12-01
It is common for the error characteristics of long-term precipitation data to change over time due to various factors, such as gauge relocation and changes in data processing methods. The temporal consistency of precipitation data error characteristics is as important as data accuracy itself for hydrologic model calibration and subsequent use of the calibrated model for streamflow prediction. In mountainous areas, the generation of precipitation grids relies on sparse gauge networks, the makeup of which often varies over time. This causes a change in the error characteristics of the long-term precipitation data record. We discuss the diagnostic analysis of the consistency of gridded precipitation time series and illustrate the adverse effect of inconsistent precipitation data on a hydrologic model simulation. We used hourly 4 km gridded precipitation time series over a mountainous basin in the Sierra Nevada of California from October 1988 through September 2006. The basin is part of the broader study area that served as the focus of the second phase of the Distributed Model Intercomparison Project (DMIP-2), organized by the U.S. National Weather Service (NWS) of the National Oceanic and Atmospheric Administration (NOAA). To check the consistency of the gridded precipitation time series, double mass analysis was performed using single-pixel and basin mean areal precipitation (MAP) values derived from gridded DMIP-2 and Parameter-Elevation Regressions on Independent Slopes Model (PRISM) precipitation data. The analysis leads to the conclusion that, over the entire study period, a clear change in the error characteristics of the DMIP-2 data occurred at the beginning of 2003, matching the timing of one of the major gauge network changes. The inconsistency of two MAP time series computed from the gridded precipitation fields over two elevation zones was corrected by adjusting hourly values based on the double mass analysis. We show that model simulations using the adjusted MAP data produce improved streamflow compared to simulations using the inconsistent MAP input data.
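Double mass analysis itself is simple to sketch: regress the cumulative test series on the cumulative reference series on either side of a suspected break, and use the slope ratio as the adjustment factor. The example below uses synthetic data with an imposed 15% break, not the DMIP-2 series.

```python
import numpy as np

# Double mass analysis sketch: a persistent change in the slope of
# cumulative(test) vs cumulative(reference) marks an inconsistency, and
# the ratio of slopes gives the adjustment factor for the suspect period.
def double_mass_break(test, ref, t_break):
    ct, cr = np.cumsum(test), np.cumsum(ref)
    s1 = np.polyfit(cr[:t_break], ct[:t_break], 1)[0]   # slope before break
    s2 = np.polyfit(cr[t_break:], ct[t_break:], 1)[0]   # slope after break
    return s1 / s2    # multiply post-break test values by this to adjust

rng = np.random.default_rng(0)
ref = rng.gamma(2.0, 2.0, 216)            # 18 years of monthly MAP (reference)
test = ref * np.where(np.arange(216) < 170, 1.0, 0.85)  # 15% drop after break
print(double_mass_break(test, ref, 170))  # ~1.18 adjustment factor
```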
Schaarup-Jensen, K; Rasmussen, M R; Thorndahl, S
2009-01-01
In urban drainage modelling, long-term extreme statistics have become an important basis for decision-making, e.g. in connection with renovation projects. It is therefore of great importance to minimize the uncertainties in long-term predictions of maximum water levels and combined sewer overflow (CSO) in drainage systems. These uncertainties originate from large uncertainties in rainfall inputs, parameters, and assessment of return periods. This paper investigates how the choice of rainfall time series influences the extreme event statistics of maximum water levels in manholes and CSO volumes. Long-term rainfall series from a local rain gauge are often unavailable. In the present case study, however, long and local rain series are available: two rain gauges have recorded events for approximately 9 years at two locations within the catchment. Besides these two gauges, another seven gauges are located within a distance of 20 kilometers from the catchment. All gauges are included in the Danish national rain gauge system, which was launched in 1976. The paper describes to what extent the extreme event statistics based on these nine series diverge from each other and how this diversity can be handled, e.g. by introducing an "averaging procedure" based on the variability within the set of statistics. All simulations are performed by means of the MOUSE LTS model.
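For the extreme-event statistics in question, empirical return periods can be assigned with a plotting-position formula; a minimal sketch using the Weibull position T = (N + 1)/m on synthetic annual maxima follows (the 9-year record length mirrors the gauges here; the values are made up).

```python
import numpy as np

# Return-period sketch: rank annual-maximum CSO volumes from a simulation
# and assign empirical return periods with the Weibull plotting position
# T = (N + 1) / m, where m is the rank of the event (largest = 1).
def return_periods(annual_maxima):
    x = np.sort(annual_maxima)[::-1]              # descending
    T = (x.size + 1) / np.arange(1, x.size + 1)   # years
    return x, T

vols = np.random.default_rng(0).gumbel(100, 25, size=9)  # 9 years of maxima
for v, T in zip(*return_periods(vols)):
    print(f"volume {v:7.1f} m3  ~ T = {T:4.1f} yr")
```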
Real-time seam tracking control system based on line laser visions
NASA Astrophysics Data System (ADS)
Zou, Yanbiao; Wang, Yanbo; Zhou, Weilin; Chen, Xiangzhi
2018-07-01
A six-degree-of-freedom robotic welding platform with automatic tracking was designed in this study to realize real-time tracking of weld seams, and the feature-point tracking method and adaptive fuzzy control algorithm used in the welding process were studied and analyzed. A laser vision sensor was designed and its measuring principle examined. Before welding, the initial coordinate values of the feature points were obtained using morphological methods. Once welding began, a target tracking method based on a Gaussian kernel was used to extract the real-time feature points of the weld. An adaptive fuzzy controller was designed that takes as inputs the deviation of the feature points and the rate of change of that deviation. The quantization factors, scale factor, and weight function were adjusted in real time, and the input and output domains, fuzzy rules, and membership functions were constantly updated to generate a series of smooth bias voltages for the robot. Three groups of experiments were conducted on different types of curved welds in a strong-arc, high-spatter noise environment using 120 A short-circuit metal active gas (MAG) arc welding. The tracking error was less than 0.32 mm and the sensor's measurement frequency was up to 20 Hz. The torch end ran smoothly during welding, and the weld trajectory was tracked accurately, satisfying the requirements of welding applications.
Impedance Matching Antenna-Integrated High-Efficiency Energy Harvesting Circuit.
Shinki, Yuharu; Shibata, Kyohei; Mansour, Mohamed; Kanaya, Haruichi
2017-08-01
This paper describes the design of a high-efficiency energy harvesting circuit with an integrated antenna. The circuit is composed of series resonance and boost rectifier circuits for converting radio frequency power into boosted direct current (DC) voltage. The measured output DC voltage is 5.67 V for an input of 100 mV at 900 MHz. Antenna input impedance matching is optimized for greater efficiency and miniaturization. The measured efficiency of this antenna-integrated energy harvester is 60% for -4.85 dBm input power and a load resistance equal to 20 kΩ at 905 MHz.
Quasi-Global Precipitation as Depicted in the GPCPV2.2 and TMPA V7
NASA Technical Reports Server (NTRS)
Huffman, George J.; Bolvin, David T.; Nelkin, Eric J.; Adler, Robert F.
2012-01-01
After a lengthy incubation period, the year 2012 saw the release of the Global Precipitation Climatology Project (GPCP) Version 2.2 monthly dataset and the TRMM Multi-satellite Precipitation Analysis (TMPA) Version 7. One primary feature of the new data sets is that DMSP SSMIS data are now used, which entailed a great deal of development work to overcome calibration issues. In addition, the GPCP V2.2 included a slight upgrade to the gauge analysis input datasets, particularly over China, while the TMPA V7 saw more-substantial upgrades: 1) The gauge analysis record in Version 6 used the (older) GPCP monitoring product through April 2005 and the CAMS analysis thereafter, which introduced an inhomogeneity. Version 7 uses the Version 6 GPCC Full analysis, switching to the Version 4 Monitoring analysis thereafter. 2) The inhomogeneously processed AMSU record in Version 6 is uniformly processed in Version 7. 3) The TMI and SSMI input data have been upgraded to the GPROF2010 algorithm. The global-change, water cycle, and other user communities are acutely interested in how these data sets compare, as consistency between differently processed, long-term, quasi-global data sets provides some assurance that the statistics computed from them provide a good representation of the atmosphere's behavior. Within resolution differences, the two data sets agree well over land as the gauge data (which tend to dominate the land results) are the same in both. Over ocean the results differ more because the satellite products used for calibration are based on very different algorithms and the dominant input data sets are different. The time series of tropical (30 N-S) ocean average precipitation shows that the TMPA V7 follows the TMI-PR Combined Product calibrator, although running approximately 5% higher on average. The GPCP and TMPA time series are fairly consistent, although the GPCP runs approximately 10% lower than the TMPA, and has a somewhat larger interannual variation. As well, the GPCP and TMPA interannual variations have an apparent phase shift, with GPCP running a few months later. Additional diagnostics will include mean maps and selected scatter plots.
Bender, David A.; Asher, William E.; Zogorski, John S.
2003-01-01
This report documents LakeVOC, a model to estimate volatile organic compound (VOC) concentrations in lakes and reservoirs. LakeVOC represents the lake or reservoir as a two-layer system and estimates VOC concentrations in both the epilimnion and hypolimnion. The air-water flux of a VOC is characterized in LakeVOC in terms of the two-film model of air-water exchange. LakeVOC solves the system of coupled differential equations for the VOC concentration in the epilimnion, the VOC concentration in the hypolimnion, the total mass of the VOC in the lake, the volume of the epilimnion, and the volume of the hypolimnion. A series of nine simulations was conducted to verify LakeVOC's representation of mixing, dilution, and gas exchange in a hypothetical lake, and two additional estimates of lake volume and MTBE concentrations were made for an actual reservoir under environmental conditions. These 11 simulations showed that LakeVOC correctly handled mixing, dilution, and gas exchange. The model also adequately estimated VOC concentrations within the epilimnion of an actual reservoir given daily input parameters. As the parameter-input time scale increased (from daily to weekly to monthly, for example), the differences between the measured-averaged concentrations and the model-estimated concentrations generally increased, especially for the hypolimnion, likely because averaging the model inputs over longer time scales causes a loss of detail in the model estimates.
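The two-film flux enters the epilimnion mass balance as a first-order loss toward equilibrium with the atmosphere. A one-box sketch of that balance, with all rate constants, volumes and concentrations purely illustrative, is given below.

```python
# One-box epilimnion sketch with two-film air-water exchange:
# flux F = (k_ol / depth) * (C_w - C_air / H'), k_ol the overall transfer
# velocity and H' the dimensionless Henry's law constant. Values illustrative.
k_ol = 1.5 / 86400        # overall transfer velocity (m/s), ~1.5 m/day
Hp = 0.02                 # dimensionless Henry's law constant
depth = 4.0               # epilimnion mean depth (m)
Q_in, V = 0.5, 4.0e5      # inflow (m3/s) and epilimnion volume (m3)
C_in, C_air = 8.0, 0.0    # inflow and air-equivalent concentrations (ug/L)

C = 0.0                   # epilimnion concentration (ug/L)
dt = 3600.0               # 1-hour Euler step
for _ in range(24 * 365):
    volat = (k_ol / depth) * (C - C_air / Hp)   # volatilization loss (1/s * ug/L)
    dCdt = (Q_in / V) * (C_in - C) - volat      # inflow/outflow balance
    C += dCdt * dt
print(C)   # approaches the steady state where loading balances volatilization
```

The steady state, C = (Q/V) C_in / (Q/V + k_ol/depth), makes explicit why the estimated concentration is sensitive to the transfer velocity and stratification depth.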
Combining control input with flight path data to evaluate pilot performance in transport aircraft.
Ebbatson, Matt; Harris, Don; Huddlestone, John; Sears, Rodney
2008-11-01
When deriving an objective assessment of piloting performance from flight data records, it is common to employ metrics that evaluate only errors in flight path parameters: the adequacy of pilot performance is judged from the flight path of the aircraft. However, in large jet transport aircraft these measures may be insensitive and require supplementing with frequency-based measures of control input parameters. Flight path and control input data were collected from pilots undertaking a jet transport aircraft conversion course during a series of symmetric and asymmetric approaches in a flight simulator. The flight path data were analyzed for deviations around the optimum flight path while flying an instrument landing approach. Manipulation of the flight controls was analyzed using a series of power spectral density measures. The flight path metrics showed no significant differences in performance between the symmetric and asymmetric approaches. However, control input frequency domain measures revealed that the pilots employed highly different control strategies in the pitch and yaw axes. The results demonstrate that to evaluate pilot performance fully in large aircraft, it is necessary to employ performance metrics targeted at both the outer control loop (flight path) and the inner control loop (flight control) parameters in parallel, evaluating both the product and process of a pilot's performance.
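A frequency-based control-input measure of the kind used here can be sketched with a standard PSD estimate; the example below uses a synthetic stick-position trace and scipy's Welch estimator, with the sampling rate and band edge chosen arbitrarily.

```python
import numpy as np
from scipy.signal import welch

# PSD sketch of control activity: the spectrum of a control-input channel
# (synthetic stick position sampled at 10 Hz) separates slow outer-loop
# corrections from high-frequency stick activity.
fs = 10.0
t = np.arange(0, 300, 1 / fs)
stick = (0.5 * np.sin(2 * np.pi * 0.05 * t)                 # slow corrections
         + 0.1 * np.random.default_rng(0).normal(size=t.size))  # fast activity

f, pxx = welch(stick, fs=fs, nperseg=512)
band = f >= 0.5                          # "high-frequency workload" band
print(np.trapz(pxx[band], f[band]))      # band power as one scalar metric
```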
On the Selection of Models for Runtime Prediction of System Resources
NASA Astrophysics Data System (ADS)
Casolari, Sara; Colajanni, Michele
Applications and services delivered through large Internet data centers are now feasible thanks to network and server improvements, but also to virtualization, dynamic allocation of resources and dynamic migration. The large number of servers and resources involved in these systems requires autonomic management strategies, because no number of human administrators would be capable of cloning and migrating virtual machines in time, or of re-distributing and re-mapping the underlying hardware. At the basis of most autonomic management decisions is the need to evaluate the system's global behavior and change it when the evaluation indicates that it is not accomplishing what it was intended to do or some relevant anomalies are occurring. Decision algorithms have to satisfy constraints at different time scales. In this chapter we are interested in short-term contexts, where runtime prediction models work on time series coming from samples of monitored system resources, such as disk, CPU and network utilization. In such environments, we have to address two main issues. First, the original time series have limited predictability because the measurements are affected by noise due to system instability, variable offered load, heavy-tailed distributions, and hardware and software interactions. Moreover, no existing criterion can help us choose a suitable prediction model and related parameters so as to guarantee adequate prediction quality. In this chapter, we evaluate the impact that different choices of prediction model have on different time series, and we suggest how to treat input data and whether it is convenient to choose the parameters of a prediction model in a static or dynamic way. Our conclusions are supported by a large set of analyses on realistic and synthetic data traces.
Simulating extreme low-discharge events for the Rhine using a stochastic model
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Mens, Marjolein; Schasfoort, Femke; Diermanse, Ferdinand; Pulido-Velazquez, Manuel
2017-04-01
The specific features of hydrological droughts make them more difficult to analyse than other water-related phenomena: the time scales are longer (months to several years), so fewer historical events are available, and drought severity and the associated damage depend on a combination of variables with no clear prevalence (e.g., total water deficit, maximum deficit and duration). As part of drought risk analysis, which aims to provide insight into the variability of hydrological conditions and associated socio-economic impacts, long synthetic time series should therefore be developed. In this contribution, we increase the length of the available inflow time series using stochastic autoregressive modelling. This enhancement can improve the characterization of the extreme range and can define extreme droughts with similar return periods but different patterns that can lead to distinctly different damages. The methodology consists of: 1) fitting an autoregressive model (AR, ARMA, …) to the available records; 2) generating extended time series (thousands of years); 3) performing a frequency analysis with different characteristic variables (total deficit, maximum deficit and so on); and 4) selecting extreme drought events associated with different characteristic variables and return periods. The methodology was applied to the Rhine river discharge at Lobith, where the Rhine enters The Netherlands. A monthly ARMA(1,1) autoregressive model with seasonally varying parameters was fitted and successfully validated against the historical records available since 1901. The maximum monthly deficit with respect to a threshold value of 1800 m3/s and the average discharge (m3/s) over a given time span were chosen as indicators to identify drought periods. A synthetic series of 10,000 years of discharges was generated using the validated ARMA model. Two time spans were considered in the analysis: the whole calendar year and the half-year period between April and September (the summer half year, when water demands are highest). Frequency analysis was performed for both indicators and time spans for the generated time series and the historical records. The comparison between observed and generated series showed that the ARMA model provides a good reproduction of the maximum deficits and total discharges, especially for the summer half-year period. The resulting synthetic series are therefore considered credible. These synthetic series, with their wealth of information, can then be used as inputs to damage assessment models, together with information on precipitation deficits, in order to estimate the risk that lower inflows pose to the urban, agricultural and shipping sectors, among others. This will help in associating economic losses with return periods, as well as in estimating how droughts with similar return periods but different patterns can lead to different damages. ACKNOWLEDGEMENT This study has been supported by the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811), and by the Climate-KIC Pioneers into Practice Program supported by the European Union's EIT.
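Step 2 of the methodology, generating a long synthetic series from a fitted ARMA(1,1) and extracting deficit statistics, can be sketched as follows. The coefficients and the absence of seasonality below are illustrative simplifications, not the fitted Rhine model.

```python
import numpy as np

# ARMA(1,1) generation sketch: extend a discharge record to thousands of
# synthetic years, then tabulate the maximum monthly deficit below a
# 1800 m3/s threshold for each year. Parameters are illustrative.
rng = np.random.default_rng(0)
phi, theta, mu, sigma = 0.75, -0.3, 2200.0, 450.0
n = 12 * 10000                               # 10,000 years, monthly steps
e = rng.normal(0, sigma, n + 1)
q = np.empty(n)
prev = mu
for t in range(n):
    q[t] = mu + phi * (prev - mu) + e[t + 1] + theta * e[t]
    prev = q[t]

deficit = np.clip(1800.0 - q, 0, None).reshape(-1, 12)   # monthly deficits
annual_max = deficit.max(axis=1)
# empirical return period of, e.g., a 600 m3/s maximum monthly deficit
print(1 / np.mean(annual_max > 600.0), "years")
```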
Climate science and famine early warning
Verdin, James P.; Funk, Chris; Senay, Gabriel B.; Choularton, R.
2005-01-01
Food security assessment in sub-Saharan Africa requires simultaneous consideration of multiple socio-economic and environmental variables. Early identification of populations at risk enables timely and appropriate action. Since large and widely dispersed populations depend on rainfed agriculture and pastoralism, climate monitoring and forecasting are important inputs to food security analysis. Satellite rainfall estimates (RFE) fill in gaps in station observations, and serve as input to drought index maps and crop water balance models. Gridded rainfall time-series give historical context, and provide a basis for quantitative interpretation of seasonal precipitation forecasts. RFE are also used to characterize flood hazards, in both simple indices and stream flow models. In the future, many African countries are likely to see negative impacts on subsistence agriculture due to the effects of global warming. Increased climate variability is forecast, with more frequent extreme events. Ethiopia requires special attention. Already facing a food security emergency, troubling persistent dryness has been observed in some areas, associated with a positive trend in Indian Ocean sea surface temperatures. Increased African capacity for rainfall observation, forecasting, data management and modelling applications is urgently needed. Managing climate change and increased climate variability require these fundamental technical capacities if creative coping strategies are to be devised.
A Novel Estimator for the Rate of Information Transfer by Continuous Signals
Takalo, Jouni; Ignatova, Irina; Weckström, Matti; Vähäsöyrinki, Mikko
2011-01-01
The information transfer rate provides an objective and rigorous way to quantify how much information is being transmitted through a communications channel whose input and output consist of time-varying signals. However, current estimators of information content in continuous signals are typically based on assumptions about the system's linearity and signal statistics, or they require prohibitive amounts of data. Here we present a novel information rate estimator without these limitations that is also optimized for computational efficiency. We validate the method with a simulated Gaussian information channel and demonstrate its performance with two example applications. Information transfer between the input and output signals of a nonlinear system is analyzed using a sensory receptor neuron as the model system. Then, a climate data set is analyzed to demonstrate that the method can be applied to a system based on two outputs generated by interrelated random processes. These analyses also demonstrate that the new method offers consistent performance in situations where classical methods fail. In addition to these examples, the method is applicable to a wide range of continuous time series commonly observed in the natural sciences, economics and engineering. PMID:21494562
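For context, the classical linear estimator that such methods are typically benchmarked against has a closed form for Gaussian channels in terms of the magnitude-squared coherence, R = -∫ log2(1 - γ²(f)) df. A sketch with a simulated additive-noise channel (this is the baseline linear method, not the paper's estimator):

    import numpy as np
    from scipy.signal import coherence

    rng = np.random.default_rng(0)
    fs, n = 1000.0, 1 << 16
    x = rng.normal(size=n)                    # channel input
    y = x + 0.5*rng.normal(size=n)            # additive-Gaussian-noise output (SNR = 4)
    f, g2 = coherence(x, y, fs=fs, nperseg=1024)
    rate = -np.trapz(np.log2(1.0 - g2), f)    # bits per second
    print("linear information-rate estimate: %.0f bit/s" % rate)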
Climate effects on phytoplankton floral composition in Chesapeake Bay
NASA Astrophysics Data System (ADS)
Harding, L. W.; Adolf, J. E.; Mallonee, M. E.; Miller, W. D.; Gallegos, C. L.; Perry, E. S.; Johnson, J. M.; Sellner, K. G.; Paerl, H. W.
2015-09-01
Long-term data on floral composition of phytoplankton are presented to document seasonal and inter-annual variability in Chesapeake Bay related to climate effects on hydrology. Source data consist of the abundances of major taxonomic groups of phytoplankton derived from algal photopigments (1995-2004) and cell counts (1985-2007). Algal photopigments were measured by high-performance liquid chromatography (HPLC) and analyzed using the software CHEMTAX to determine the proportions of chlorophyll-a (chl-a) in major taxonomic groups. Cell counts determined microscopically provided species identifications, enumeration, and dimensions used to obtain proportions of cell volume (CV), plasma volume (PV), and carbon (C) in the same taxonomic groups. We drew upon these two independent data sets to take advantage of the unique strengths of each method, using comparable quantitative measures to express floral composition for the main stem bay. Spatial and temporal variability of floral composition was quantified using data aggregated by season, year, and salinity zone. Both time series were sufficiently long to encompass the drought-flood cycle, with commensurate effects on inputs of freshwater and solutes. Diatoms emerged as the predominant taxonomic group, with significant contributions by dinoflagellates, cryptophytes, and cyanobacteria, depending on salinity zone and season. Our analyses revealed increased abundance of diatoms in wet years compared to long-term average (LTA) or dry years. Results are presented in the context of long-term nutrient over-enrichment of the bay, punctuated by inter-annual variability of freshwater flow that strongly affects nutrient loading, chl-a, and floral composition. Statistical analyses generated flow-adjusted diatom abundance and showed significant trends late in the time series, suggesting that current and future decreases in nutrient inputs may reduce the proportion of biomass contributed by diatoms in an increasingly diverse flora.
The magnitude and effects of extreme solar particle events
NASA Astrophysics Data System (ADS)
Jiggens, Piers; Chavy-Macdonald, Marc-Andre; Santin, Giovanni; Menicucci, Alessandra; Evans, Hugh; Hilgers, Alain
2014-06-01
The solar energetic particle (SEP) radiation environment is an important consideration for spacecraft design, spacecraft mission planning and human spaceflight. Herein is presented an investigation into the likely severity of effects of a very large Solar Particle Event (SPE) on technology and humans in space. Fluences for SPEs derived using statistical models are compared to historical SPEs to verify their appropriateness for use in the analysis which follows. By combining environment tools with tools to model effects behind varying layers of spacecraft shielding, it is possible to predict what impact a large SPE would be likely to have on a spacecraft in Near-Earth interplanetary space or geostationary Earth orbit. Also presented is a comparison of results generated using the traditional method of inputting the environment spectra, determined using a statistical model, into effects tools and a new method developed as part of the ESA SEPEM Project, which allows for the creation of an effect time series on which statistics, previously applied to the flux data, can be run directly. The SPE environment spectrum is determined and presented as energy-integrated proton fluence (cm-2) as a function of particle energy (in MeV). This is input into the SHIELDOSE-2, MULASSIS, NIEL, GRAS and SEU effects tools to provide the output results. In the case of the new method for analysis, the flux time series is fed directly into the MULASSIS and GEMAT tools integrated into the SEPEM system. The output effect quantities include total ionising dose (in rads), non-ionising energy loss (MeV g-1), single event upsets (upsets/bit), and the dose in humans compared to established limits for stochastic (or cancer-causing) effects and tissue reactions (such as acute radiation sickness), given in gray-equivalent and sieverts respectively.
Nonlinear Site Response Validation Studies Using KIK-net Strong Motion Data
NASA Astrophysics Data System (ADS)
Asimaki, D.; Shi, J.
2014-12-01
Earthquake simulations are nowadays producing realistic ground motion time series in the range of engineering design applications. Of particular significance to engineers are simulations of near-field motions and large-magnitude events, for which observations are scarce. With the engineering community slowly adopting the use of simulated ground motions, site response models need to be re-evaluated in terms of their capabilities and limitations to 'translate' the simulated time series from rock-surface output to structural-analysis input. In this talk, we evaluate three one-dimensional site response models: linear viscoelastic, equivalent linear and nonlinear. We evaluate the performance of the models by comparing predictions to observations at 30 downhole stations of the Japanese network KiK-net that have recorded several strong events, including the 2011 Tohoku earthquake. Velocity profiles are used as the only input to all models, while additional parameters such as quality factor, density and nonlinear dynamic soil properties are estimated from empirical correlations. We quantify the differences between ground surface predictions and observations in terms of both seismological and engineering intensity measures, including bias ratios of peak ground response, visual comparisons of elastic spectra, and inelastic-to-elastic deformation ratios for multiple ductility ratios. We observe that PGV/Vs,30, as a measure of strain, is a better predictor of site nonlinearity than PGA, and that incremental nonlinear analyses are necessary to produce reliable estimates of high-frequency ground motion components at soft sites. We finally discuss the implications of our findings for the parameterization of nonlinear amplification factors in GMPEs, and for the extensive use of equivalent linear analyses in probabilistic seismic hazard procedures.
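For the linear viscoelastic case, a useful point of reference is the textbook amplification function of a single uniform damped layer over rigid rock; the layer properties below are hypothetical, not KiK-net values:

    import numpy as np

    def layer_amplification(f, H=30.0, Vs=200.0, xi=0.05):
        # |A(f)| = 1/sqrt(cos^2(kH) + (xi*kH)^2), with kH = 2*pi*f*H/Vs
        kH = 2*np.pi*f*H/Vs
        return 1.0/np.sqrt(np.cos(kH)**2 + (xi*kH)**2)

    f = np.linspace(0.1, 20.0, 2000)
    A = layer_amplification(f)
    print("peak amplification %.1f near %.2f Hz (Vs/4H = %.2f Hz)"
          % (A.max(), f[A.argmax()], 200.0/(4*30.0)))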
U.S. Geological Survey Near Real-Time Dst Index
Gannon, J.L.; Love, J.J.; Friberg, P.A.; Stewart, D.C.; Lisowski, S.W.
2011-01-01
The operational version of the United States Geological Survey one-minute Dst index (a global geomagnetic disturbance-intensity index for scientific studies and definition of space-weather effects) uses either four- or three-station input (including Honolulu, Hawaii; San Juan, Puerto Rico; Hermanus, South Africa; and Kakioka, Japan; or Honolulu, San Juan and Guam) and a method based on the U.S. Geological Survey definitive Dst index, in which Dst is more rigorously calculated. The method uses a combination of time-domain techniques and frequency-space filtering to produce the disturbance time series at an individual observatory. The operational output is compared to the U.S. Geological Survey one-minute Dst index (definitive version) and to the Kyoto (Japan) Final Dst to show that the U.S. Geological Survey operational output matches both definitive indices well.
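A schematic sketch of the multi-station combination step, using one common convention (the averaged disturbance normalized by the average cosine of the stations' geomagnetic latitudes); the disturbance series and latitudes below are hypothetical:

    import numpy as np

    # Hypothetical baseline-removed disturbance series D_i(t) (nT) at four
    # low-latitude observatories; latitudes are rough geomagnetic values.
    lat = np.radians([21.0, 28.0, -34.0, 27.0])
    D = np.random.default_rng(1).normal(-40.0, 15.0, size=(4, 1440))
    # Average disturbance normalized by the average cosine of magnetic latitude:
    dst = D.mean(axis=0)/np.cos(lat).mean()
    print("Dst estimate for the first 3 minutes (nT):", np.round(dst[:3], 1))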
NASA Astrophysics Data System (ADS)
Hwang, Sunghwan
1997-08-01
One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response characteristics of such a linear time periodic system exhibit sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using the modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions for the identified harmonic transfer function were then formulated using the spectral density functions, both with and without additive noise processes at the input and/or output. A procedure was developed to identify parameters of a model to match the frequency response characteristics between measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link. The linear time periodic technique results were compared with the linear time invariant technique results. Also, the effects of noise processes and of the initial parameter guess on the identification procedure were investigated. To study the effect of elastic modes, a rigid blade with a trailing edge flap excited by a smart actuator was selected, and the system parameters were successfully identified, though at some expense of computational storage and time. In conclusion, the linear time periodic technique substantially improved the identified parameter accuracy compared to the linear time invariant technique. The linear time periodic technique was also robust to noise and to the initial parameter guess. However, an elastic mode of higher frequency relative to the system pumping frequency tends to increase the computer storage requirement and computing time.
Off-line tracking of series parameters in distribution systems using AMI data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Tess L.; Sun, Yannan; Schneider, Kevin
2016-05-01
Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal change to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.
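A toy sketch of why combining snapshots increases robustness: with a simple V = Z·I line model and ~2% measurement noise, stacking hundreds of AMI snapshots into one least-squares estimate pins down the series impedance tightly enough to expose a 5% change (an illustration only, not the paper's DSSE formulation):

    import numpy as np

    rng = np.random.default_rng(7)
    Z_true = 0.30 + 0.90j            # ohms, series impedance of a line section
    n_snap = 500                     # measurement snapshots (e.g., 15-min AMI reads)
    I = rng.uniform(20, 120, n_snap) * np.exp(1j*rng.uniform(-0.4, 0, n_snap))
    V_meas = Z_true*I*(1 + 0.02*rng.normal(size=n_snap))   # ~2% measurement error

    # Least-squares estimate of Z from all snapshots combined:
    Z_hat = (np.conj(I) @ V_meas) / (np.conj(I) @ I)
    print("true %.3f%+.3fj  est %.3f%+.3fj" % (Z_true.real, Z_true.imag,
                                               Z_hat.real, Z_hat.imag))
    # With hundreds of snapshots the estimate scatter is small enough that a
    # 5% change in resistance shifts Z_hat well outside its sampling noise.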
Spatio-temporal prediction of daily temperatures using time-series of MODIS LST images
NASA Astrophysics Data System (ADS)
Hengl, Tomislav; Heuvelink, Gerard B. M.; Perčec Tadić, Melita; Pebesma, Edzer J.
2012-01-01
A computational framework to generate daily temperature maps using time series of publicly available MODIS MOD11A2 product Land Surface Temperature (LST) images (1 km resolution; 8-day composites) is illustrated using temperature measurements from the national network of meteorological stations (159) in Croatia. The input data set contains 57,282 ground measurements of daily temperature for the year 2008. Temperature was modeled as a function of latitude, longitude, distance from the sea, elevation, time, insolation, and the MODIS LST images. The original rasters were first converted to principal components to reduce noise and filter missing pixels in the LST images. The residuals were next analyzed for spatio-temporal auto-correlation; sum-metric separable variograms were fitted to account for zonal and geometric space-time anisotropy. The final predictions were generated for time-slices of a 3D space-time cube, constructed in the R environment for statistical computing. The results show that the space-time regression model can explain a significant part of the variation in station data (84%). MODIS LST 8-day (cloud-free) images are an unbiased estimator of daily temperature, but with relatively low precision (±4.1°C); however, their added value is that they systematically improve detection of local changes in land surface temperature due to local meteorological conditions and/or active heat sources (urban areas, land cover classes). The results of 10-fold cross-validation show that the use of spatio-temporal regression-kriging and the incorporation of time series of remote sensing images lead to significantly more accurate maps of temperature than if plain spatial techniques were used. The average (global) accuracy of mapping temperature was ±2.4°C. Regression-kriging explained 91% of the variability in daily temperatures, compared to 44% for ordinary kriging. Further software advancements are anticipated: interactive space-time variogram exploration, and automated retrieval, resampling and filtering of MODIS images.
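A compact sketch of the regression-kriging idea (trend regression on covariates, then kriging of the residuals); the data, covariance model and parameters below are invented stand-ins for the MODIS/station setup:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    X = rng.uniform(0, 100, size=(n, 2))                 # station coordinates (km)
    elev = rng.uniform(0, 1500, n)
    lst = 20 - 0.006*elev + rng.normal(0, 1.5, n)        # LST-like covariate
    temp = 1.0 + 0.9*lst + rng.normal(0, 1.0, n)         # "observed" station temps

    # 1) Regression part: temperature ~ covariates (here just LST).
    A = np.column_stack([np.ones(n), lst])
    beta, *_ = np.linalg.lstsq(A, temp, rcond=None)
    resid = temp - A @ beta

    # 2) Simple kriging of residuals with an assumed exponential covariance.
    def cov(h, sill=1.0, rng_km=25.0):
        return sill*np.exp(-h/rng_km)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    C = cov(d) + 1e-6*np.eye(n)                          # nugget for stability
    x0, lst0 = np.array([50.0, 50.0]), 12.0              # prediction point
    c0 = cov(np.linalg.norm(X - x0, axis=1))
    w = np.linalg.solve(C, c0)
    pred = np.array([1.0, lst0]) @ beta + w @ resid
    print("regression-kriging prediction: %.2f degC" % pred)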
Condon, David E; Tran, Phu V; Lien, Yu-Chin; Schug, Jonathan; Georgieff, Michael K; Simmons, Rebecca A; Won, Kyoung-Jae
2018-02-05
Identification of differentially methylated regions (DMRs) is the initial step towards the study of DNA methylation-mediated gene regulation. Previous approaches to calling DMRs suffer from false predictions, consume excessive computational resources, and/or require library installation and input conversion. We developed a new approach called Defiant to identify DMRs. Employing Weighted Welch Expansion (WWE), Defiant showed superior performance to other predictors in a series of benchmarking tests on artificial and real data. Defiant was subsequently used to investigate DNA methylation changes in iron-deficient rat hippocampus. Defiant identified DMRs close to genes associated with neuronal development and plasticity, which were not identified by its competitor. Importantly, Defiant runs between 5 and 479 times faster than currently available software packages. Defiant also accepts 10 different input formats widely used for DNA methylation data. Defiant effectively identifies DMRs for whole-genome bisulfite sequencing (WGBS), reduced-representation bisulfite sequencing (RRBS), Tet-assisted bisulfite sequencing (TAB-seq), and HpaII tiny fragment enrichment by ligation-mediated PCR-tag (HELP) assays.
Writing and compiling code into biochemistry.
Shea, Adam; Fett, Brian; Riedel, Marc D; Parhi, Keshab
2010-01-01
This paper presents a methodology for translating iterative arithmetic computation, specified as high-level programming constructs, into biochemical reactions. From an input/output specification, we generate biochemical reactions that produce output quantities of proteins as a function of input quantities, performing operations such as addition, subtraction, and scalar multiplication. Iterative constructs such as "while" loops and "for" loops are implemented by transferring quantities between protein types, based on a clocking mechanism. Synthesis is first performed at a conceptual level, in terms of abstract biochemical reactions - a task analogous to high-level program compilation. Then the results are mapped onto specific biochemical reactions selected from libraries - a task analogous to machine language compilation. We demonstrate our approach through the compilation of a variety of standard iterative functions: multiplication, exponentiation, discrete logarithms, raising to a power, and linear transforms on time series. The designs are validated through transient stochastic simulation of the chemical kinetics. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.
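A minimal example of the kind of abstract-reaction construction involved, using the standard mass-action scheme in which reactions A → C and B → C transfer their inputs into C, so the final quantity computes addition (C = A0 + B0); this is a textbook construction, not one of the paper's library mappings:

    import numpy as np

    # Mass-action ODEs for the reactions A -> C and B -> C (both with rate k):
    # at steady state all of A and B has been transferred, so C_final = A0 + B0.
    k, dt, steps = 1.0, 1e-3, 20000
    A, B, C = 3.0, 2.0, 0.0          # initial "input" concentrations
    for _ in range(steps):
        dA = -k*A
        dB = -k*B
        dC = k*A + k*B
        A, B, C = A + dt*dA, B + dt*dB, C + dt*dC
    print("C ~", round(C, 3))        # ~5.0 = 3.0 + 2.0 (addition)
    # Scalar multiplication by 2 works analogously with the reaction A -> 2C.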
Analysis of rainfall distribution in Kelantan river basin, Malaysia
NASA Astrophysics Data System (ADS)
Che Ros, Faizah; Tosaka, Hiroyuki
2018-03-01
Using rain gauges on their own as input carries great uncertainty regarding runoff estimation, especially when the area is large and the rainfall is measured and recorded at irregularly spaced gauging stations. Hence, spatial interpolation is key to obtaining a continuous and orderly rainfall distribution at ungauged points as input to the rainfall-runoff processes of distributed and semi-distributed numerical models. It is crucial to study and predict the behaviour of rainfall and river runoff to reduce flood damage in the affected areas along the Kelantan river. Thus, good knowledge of the rainfall distribution is essential in early flood prediction studies. Forty-six rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces using the inverse-distance weighting (IDW) and inverse-distance and elevation weighting (IDEW) methods, as well as the average rainfall distribution. Sensitivity analyses for the distance and elevation parameters were conducted to see the variation produced. The accuracy of these interpolated datasets was examined using cross-validation assessment.
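A minimal sketch of the IDW step (IDEW additionally folds an elevation-difference term into the weights); gauge locations and values below are hypothetical:

    import numpy as np

    def idw(xy_obs, z_obs, xy_pred, power=2.0):
        """Inverse-distance weighting from gauges to prediction points."""
        d = np.linalg.norm(xy_pred[:, None, :] - xy_obs[None, :, :], axis=2)
        d = np.maximum(d, 1e-9)                 # avoid division by zero at a gauge
        w = 1.0/d**power
        return (w @ z_obs)/w.sum(axis=1)

    # Hypothetical daily rainfall (mm) at 5 gauges, interpolated to one grid cell:
    xy = np.array([[0, 0], [10, 2], [3, 8], [7, 7], [1, 5]], float)
    z = np.array([12.0, 30.0, 18.0, 25.0, 15.0])
    print(idw(xy, z, np.array([[5.0, 5.0]])))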
A silicon central pattern generator controls locomotion in vivo.
Vogelstein, R J; Tenore, F; Guevremont, L; Etienne-Cummings, R; Mushahwar, V K
2008-09-01
We present a neuromorphic silicon chip that emulates the activity of the biological spinal central pattern generator (CPG) and creates locomotor patterns to support walking. The chip implements ten integrate-and-fire silicon neurons and 190 programmable digital-to-analog converters that act as synapses. This architecture allows each neuron to make synaptic connections to any of the other neurons as well as to any of eight external input signals and one tonic bias input. The chip's functionality is confirmed by a series of experiments in which it controls the motor output of a paralyzed animal in real time and enables it to walk along a three-meter platform. The walking is controlled under closed-loop conditions with the aid of sensory feedback that is recorded from the animal's legs and fed into the silicon CPG. Although we and others have previously described biomimetic silicon locomotor control systems for robots, this is the first demonstration of a neuromorphic device that can replace some functions of the central nervous system in vivo.
A Data-driven Approach for Forecasting Next-day River Discharge
NASA Astrophysics Data System (ADS)
Sharif, H. O.; Billah, K. S.
2017-12-01
This study focuses on evaluating the performance of the Soil and Water Assessment Tool (SWAT) eco-hydrological model, a simple Auto-Regressive with eXogenous input (ARX) model, and a Gene Expression Programming (GEP)-based model in one-day-ahead forecasting of discharge of a subtropical basin (the upper Kentucky River Basin). The three models were calibrated with daily flow at a US Geological Survey (USGS) stream gauging station not affected by flow regulation for the period 2002-2005. The calibrated models were then validated at the same gauging station, as well as at another USGS gauge 88 km downstream, for the period 2008-2010. The results suggest that the simple models outperform a sophisticated hydrological model, with GEP having the advantage of generating functional relationships that allow scientific investigation of the complex nonlinear interrelationships among input variables. Unlike SWAT, GEP and, to some extent, ARX are less sensitive to the length of the calibration time series and do not require a spin-up period.
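A minimal sketch of the ARX idea, fitting q[t] ≈ a·q[t-1] + b·rain[t-1] by ordinary least squares on synthetic data and issuing a one-day-ahead forecast (illustrative only; the study's model orders and inputs may differ):

    import numpy as np

    rng = np.random.default_rng(11)
    T = 1000
    rain = rng.gamma(0.6, 8.0, T)                       # exogenous input (mm/day)
    q = np.zeros(T)                                     # synthetic discharge
    for t in range(1, T):
        q[t] = 0.8*q[t-1] + 0.3*rain[t-1] + rng.normal(0, 0.5)

    # ARX(1,1) fit by ordinary least squares:
    A = np.column_stack([q[:-1], rain[:-1]])
    coef, *_ = np.linalg.lstsq(A, q[1:], rcond=None)
    forecast = coef @ np.array([q[-1], rain[-1]])       # next-day discharge
    print("a=%.2f b=%.2f forecast=%.2f" % (coef[0], coef[1], forecast))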
Impact of input mask signals on delay-based photonic reservoir computing with semiconductor lasers.
Kuriki, Yoma; Nakayama, Joma; Takano, Kosuke; Uchida, Atsushi
2018-03-05
We experimentally investigate delay-based photonic reservoir computing using semiconductor lasers with optical feedback and injection. We apply different types of temporal mask signals, such as digital, chaos, and colored-noise mask signals, as the weights between the input signal and the virtual nodes in the reservoir. We evaluate the performance of reservoir computing by using a time-series prediction task for the different mask signals. The chaos mask signal shows performance superior to that of the digital mask signals. However, similar prediction errors can be achieved for the chaos and colored-noise mask signals. Mask signals with larger amplitudes result in better performance for all mask types within the amplitude range accessible in our experiment. The performance of reservoir computing is strongly dependent on the cut-off frequency of the colored-noise mask signals, which is related to the resonance of the relaxation oscillation frequency of the laser used as the reservoir.
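A condensed, echo-state-style sketch of the masking idea, in which a mask vector distributes a scalar input over virtual nodes and a ridge-regression readout is trained for one-step-ahead prediction; this software toy stands in for the experimental photonic delay loop:

    import numpy as np

    rng = np.random.default_rng(5)
    Nv, T = 50, 1000                            # virtual nodes, time steps
    t = np.arange(T)
    u = np.sin(0.2*t) + 0.1*rng.normal(size=T)  # toy predictable input series
    mask = rng.choice([-1.0, 1.0], Nv)          # digital (binary) mask; a chaotic or
                                                # colored-noise waveform would replace it
    x = np.zeros((T, Nv))
    for k in range(1, T):
        x[k] = np.tanh(0.9*x[k-1] + 0.5*mask*u[k])   # masked injection into the nodes

    X, y = x[:-1], u[1:]                        # one-step-ahead prediction task
    W = np.linalg.solve(X.T @ X + 1e-4*np.eye(Nv), X.T @ y)   # ridge readout
    nmse = np.mean((X @ W - y)**2)/np.var(y)
    print("training NMSE: %.3f" % nmse)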
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Altoé, I. L.; Karamitrou, A.; Ishii, M.; Ishii, H.
2015-12-01
DigitSeis is a new open-source, interactive digitization software written in MATLAB that converts digital raster images of analog seismograms into readily usable, discretized time series using image processing algorithms. DigitSeis automatically identifies and corrects for various geometrical distortions of seismogram images that are acquired through the original recording, storage, and scanning procedures. With human supervision, the software further identifies and classifies important features such as time marks and notes, corrects time-mark offsets from the main trace, and digitizes the combined trace with an analysis to obtain timing that is as accurate as possible. Although a large effort has been made to minimize the human input, DigitSeis provides interactive tools for challenging situations such as trace crossings and stains in the paper. The effectiveness of the software is demonstrated with the digitization of seismograms that are over half a century old from the Harvard-Adam Dziewoński observatory, which is still in operation as part of the Global Seismographic Network (station code HRV and network code IU). The spectral analysis of the digitized time series shows no spurious features that may be related to the occurrence of minute and hour marks. They also display signals associated with significant earthquakes, and a comparison of the spectrograms with modern recordings reveals similarities in the background noise.
NASA Astrophysics Data System (ADS)
Duveiller, G.; Donatelli, M.; Fumagalli, D.; Zucchini, A.; Nelson, R.; Baruth, B.
2017-02-01
Coupled atmosphere-ocean general circulation models (GCMs) simulate different realizations of possible future climates at global scale under contrasting scenarios of land use and greenhouse gas emissions. Such data require several additional processing steps before they can be used to drive impact models. Spatial downscaling, typically by regional climate models (RCMs), and bias-correction are two such steps that have already been addressed for Europe. Yet the errors in the resulting daily meteorological variables may be too large for specific model applications. Crop simulation models are particularly sensitive to these inconsistencies and thus require further processing of GCM-RCM outputs. Moreover, crop models are often run in a stochastic manner by using various plausible weather time series (often generated using stochastic weather generators) to represent the climate time scale for a period of interest (e.g. 2000 ± 15 years), while GCM simulations typically provide a single time series for a given emission scenario. To inform agricultural policy-making, data on near- and medium-term decadal time scales are mostly requested, e.g. 2020 or 2030. Taking a sample of multiple years from these unique time series to represent time horizons in the near future is particularly problematic because selecting overlapping years may lead to spurious trends, creating artefacts in the results of the impact model simulations. This paper presents a database of consolidated and coherent future daily weather data for Europe that addresses these problems. Input data consist of daily temperature and precipitation from three dynamically downscaled and bias-corrected regional climate simulations of the IPCC A1B emission scenario created within the ENSEMBLES project. Solar radiation is estimated from temperature based on an auto-calibration procedure. Wind speed and relative air humidity are collected from historical series. From these variables, reference evapotranspiration and vapour pressure deficit are estimated, ensuring consistency within daily records. The weather generator ClimGen is then used to create 30 synthetic years of all variables to characterize the time horizons of 2000, 2020 and 2030, which can readily be used for crop modelling studies.
HONTIOR - HIGHER-ORDER NEURAL NETWORK FOR TRANSFORMATION INVARIANT OBJECT RECOGNITION
NASA Technical Reports Server (NTRS)
Spirkovska, L.
1994-01-01
Neural networks have been applied in numerous fields, including transformation invariant object recognition, wherein an object is recognized despite changes in the object's position in the input field, size, or rotation. One of the more successful neural network methods used in invariant object recognition is the higher-order neural network (HONN) method. With a HONN, known relationships are exploited and the desired invariances are built directly into the architecture of the network, eliminating the need for the network to learn invariance to transformations. This results in a significant reduction in the training time required, since the network needs to be trained on only one view of each object, not on numerous transformed views. Moreover, one hundred percent accuracy is guaranteed for images characterized by the built-in distortions, provided noise is not introduced through pixelation. The program HONTIOR implements a third-order neural network having invariance to translation, scale, and in-plane rotation built directly into the architecture. Thus, for 2-D transformation invariance, the network needs to be trained on just one view of each object. HONTIOR can also be used for 3-D transformation invariant object recognition by training the network only on a set of out-of-plane rotated views. Historically, the major drawback of HONNs has been that the size of the input field was limited to the memory required for the large number of interconnections in a fully connected network. HONTIOR solves this problem by coarse coding the input images (coding an image as a set of overlapping but offset coarser images). Using this scheme, large input fields (4096 x 4096 pixels) can easily be represented using very little virtual memory (30Mb). The HONTIOR distribution consists of three main programs. The first program contains the training and testing routines for a third-order neural network. The second program contains the same training and testing procedures as the first, but it also contains a number of functions to display and edit training and test images. Finally, the third program is an auxiliary program which calculates the included angles for a given input field size. HONTIOR is written in C language, and was originally developed for Sun3 and Sun4 series computers. Both graphic and command line versions of the program are provided. The command line version has been successfully compiled and executed both on computers running the UNIX operating system and on DEC VAX series computers running VMS. The graphic version requires the SunTools windowing environment, and therefore runs only on Sun series computers. The executable for the graphics version of HONTIOR requires 1Mb of RAM. The standard distribution medium for HONTIOR is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The package includes sample input and output data. HONTIOR was developed in 1991. Sun, Sun3 and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation.
NASA Technical Reports Server (NTRS)
Reddy, C. J.; Deshpande, M. D.
1997-01-01
Application of Asymptotic Waveform Evaluation (AWE) is presented in conjunction with a hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique to calculate the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation to compute the electric field distribution of the cavity-backed aperture antenna. The electric field thus obtained is expanded in a Taylor series around the frequency of interest. The coefficients of the Taylor series (called 'moments') are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the moments, the electric field in the cavity is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency band. Numerical results for an open coaxial line, a probe-fed cavity, and cavity-backed microstrip patch antennas are presented. Good agreement between AWE and the exact solution over the frequency range is observed.
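The moment-matching idea can be sketched on a generic parameterized linear system A(f)x = b with A(f) = A0 + f·A1 (a simplification standing in for the FEM/MoM operator): factor A(f0) once, generate Taylor moments recursively, and evaluate cheaply across the band:

    import numpy as np

    # For A(f) = A0 + f*A1, the Taylor coefficients ("moments") of x(f) about f0
    # satisfy A(f0) x_0 = b and A(f0) x_k = -A1 x_{k-1}.
    rng = np.random.default_rng(2)
    n, f0 = 50, 1.0
    A0 = rng.normal(size=(n, n)) + n*np.eye(n)   # well-conditioned toy operator
    A1 = rng.normal(size=(n, n))
    b = rng.normal(size=n)

    Ainv = np.linalg.inv(A0 + f0*A1)             # factor once, reuse for every moment
    moments = [Ainv @ b]
    for k in range(1, 8):
        moments.append(Ainv @ (-A1 @ moments[k-1]))

    def x_awe(f):                                # cheap Taylor evaluation over the band
        df = f - f0
        return sum(m*df**k for k, m in enumerate(moments))

    f = 1.2
    exact = np.linalg.solve(A0 + f*A1, b)
    err = np.linalg.norm(x_awe(f) - exact)/np.linalg.norm(exact)
    print("relative error at f=%.1f: %.2e" % (f, err))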
Forecasting Geomagnetic Activity Using Kalman Filters
NASA Astrophysics Data System (ADS)
Veeramani, T.; Sharma, A.
2006-05-01
The coupling of energy from the solar wind to the magnetosphere leads to geomagnetic activity in the form of storms and substorms, which are characterized by indices such as AL, Dst and Kp. Geomagnetic activity has been predicted in near-real time using local linear filter models of the system dynamics, wherein the time series of the input solar wind and the output magnetospheric response were used to reconstruct the phase space of the system by a time-delay embedding technique. Recently, the radiation belt dynamics have been studied using an adaptive linear state space model [Rigler et al. 2004]. This was achieved by assuming a linear autoregressive equation for the underlying process and an adaptive identification of the model parameters using a Kalman filter approach. We use such a model for predicting the geomagnetic activity. In the case of substorms, the Bargatze et al [1985] data set yields persistence-like behaviour when a time resolution of 2.5 minutes is used to test the model for the prediction of the AL index. Unlike the local linear filters, which are driven by the solar wind input without feedback from the observations, the Kalman filter makes use of the observations as and when available to optimally update the model parameters. The update procedure requires the prediction intervals to be long enough so that the forecasts can be used in practice. The time resolution of the data suitable for such forecasting is studied by taking averages over different durations.
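A minimal sketch of the adaptive scheme: the regression coefficients are treated as a random-walk state and updated by a Kalman filter each time a new observation arrives (synthetic driver and coefficients; not the AL/Dst pipeline itself):

    import numpy as np

    rng = np.random.default_rng(9)
    T = 2000
    a_true = 0.9 + 0.05*np.sin(2*np.pi*np.arange(T)/500)   # slowly drifting AR coefficient
    u = rng.normal(size=T)                                 # solar-wind-like driver
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = a_true[t]*y[t-1] + 0.5*u[t-1] + rng.normal(0, 0.3)

    # State = the two regression coefficients [a, b]; random-walk state model.
    theta, P = np.zeros(2), np.eye(2)
    Q, R = 1e-5*np.eye(2), 0.3**2
    for t in range(1, T):
        H = np.array([y[t-1], u[t-1]])          # observation row: y[t] = H @ theta + noise
        P = P + Q                               # predict step (random-walk state)
        K = P @ H / (H @ P @ H + R)             # Kalman gain
        theta = theta + K*(y[t] - H @ theta)    # update with the new observation
        P = P - np.outer(K, H) @ P
    print("final estimates a=%.3f b=%.3f (true a=%.3f)" % (theta[0], theta[1], a_true[-1]))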
ERIC Educational Resources Information Center
Brandenburg, Sara A., Ed.; Vanderheiden, Gregg C., Ed.
One of a series of three resource guides concerned with communication, control, and computer access for disabled and elderly individuals, the directory focuses on communication aids. The book's six chapters each cover products with the same primary function. Cross reference indexes allow access to listings of products by function, input/output…
Li, Kan; Príncipe, José C.
2018-01-01
This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS) using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing as well as neuromorphic implementations based on spiking neural network (SNN), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to HMM using Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regimes. PMID:29666568
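A minimal sketch of an LIF front-end of the general kind described, converting one continuous channel into spike times (time constants, gain and threshold are illustrative, not the paper's settings):

    import numpy as np

    def lif_spikes(signal, dt=1e-4, tau=5e-3, v_th=1.0, gain=400.0):
        # Leaky integrate-and-fire: leaky membrane driven by rectified input;
        # a spike is emitted and the membrane reset when threshold is crossed.
        v, spikes = 0.0, []
        for i, s in enumerate(signal):
            v += dt*(-v/tau + gain*max(s, 0.0))
            if v >= v_th:
                spikes.append(i*dt)
                v = 0.0
        return np.array(spikes)

    t = np.arange(0.0, 0.2, 1e-4)
    chan = np.sin(2*np.pi*30*t)            # one toy band-limited "audio" channel
    print("%d spikes in 0.2 s" % lif_spikes(chan).size)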
NASA Astrophysics Data System (ADS)
Anwar, Faizan; Bárdossy, András; Seidel, Jochen
2017-04-01
Estimating missing values in a time series of a hydrological variable is an everyday task for a hydrologist. Existing methods such as inverse distance weighting, multivariate regression, and kriging, though simple to apply, provide no indication of the quality of the estimated value and depend mainly on the values of neighboring stations at a given step in the time series. Copulas have the advantage of representing the pure dependence structure between two or more variables (given that the relationship between them is monotonic). They remove the need to transform the data before use or to specify functions that model the relationship between the considered variables. A copula-based approach is suggested to infill discharge, precipitation, and temperature data. As a first step, the normal copula is used; subsequently, the necessity of using non-normal/non-symmetrical dependence structures is investigated. Discharge and temperature are treated as regular continuous variables and can be used without processing for infilling and quality checking. Due to the mixed distribution of precipitation values, precipitation has to be treated differently. This is done by assigning a discrete probability to the zeros and treating the rest as a continuous distribution. Building on the work of others, along with infilling, the normal copula is also utilized to identify values in a time series that might be erroneous. This is done by treating the available value as missing, infilling it using the normal copula, and checking whether it lies within a confidence band (5 to 95% in our case) of the obtained conditional distribution. Hydrological data from two catchments, the Upper Neckar River (Germany) and the Santa River (Peru), are used to demonstrate the application for datasets with different data quality. The Python code used here is also made available on GitHub. The required input is the time series of a given variable at different stations.
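A minimal sketch of normal-copula infilling for a continuous variable: transform both stations to normal scores via ranks, condition, and map the conditional quantiles back through the empirical distribution (synthetic data; the zero-inflation treatment for precipitation is omitted):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n, rho = 2000, 0.8
    # Two correlated "stations": the ranks carry the dependence (normal copula).
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    x = stats.gamma.ppf(stats.norm.cdf(z[:, 0]), a=2.0)   # discharge-like marginals
    y = stats.gamma.ppf(stats.norm.cdf(z[:, 1]), a=2.0)

    def to_scores(v):                                     # empirical normal scores
        return stats.norm.ppf(stats.rankdata(v)/(len(v) + 1.0))
    zx = to_scores(x)
    r = np.corrcoef(zx, to_scores(y))[0, 1]               # estimated copula parameter

    i = 100                                               # pretend y[i] is missing
    cond_mean, cond_sd = r*zx[i], np.sqrt(1 - r**2)       # conditional normal in score space
    lo, hi = stats.norm.ppf([0.05, 0.95], cond_mean, cond_sd)
    # Map the conditional quantiles back through y's empirical distribution:
    est = np.quantile(y, stats.norm.cdf([lo, cond_mean, hi]))
    print("5%%, 50%%, 95%% estimates: %s (true %.2f)" % (np.round(est, 2), y[i]))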
NASA Astrophysics Data System (ADS)
Mukherjee, Amritendu; Ramachandran, Parthasarathy
2018-03-01
Prediction of Ground Water Level (GWL) is extremely important for sustainable use and management of ground water resources. The motivation for this work is to understand the relationship between Gravity Recovery and Climate Experiment (GRACE) derived terrestrial water storage change (ΔTWS) data and GWL, so that ΔTWS could be used as a proxy measurement for GWL. In our study, we have selected five observation wells from different geographic regions in India. The datasets are unevenly spaced time series, which restricts us from applying standard time series methodologies; therefore, in order to model and predict GWL with the help of ΔTWS, we have built a Linear Regression Model (LRM), Support Vector Regression (SVR), and an Artificial Neural Network (ANN). Comparative performances of LRM, SVR and ANN have been evaluated with the help of the correlation coefficient (ρ) and Root Mean Square Error (RMSE) between the actual and fitted (for the training dataset) or predicted (for the test dataset) values of GWL. It has been observed in our study that ΔTWS is a highly significant variable for modelling GWL, and the amount of total variation in GWL that could be explained with the help of ΔTWS varies from 36.48% to 74.28% (0.3648 ≤ R² ≤ 0.7428). We have found that for the model GWL ~ ΔTWS, for both the training and test datasets, the performances of SVR and ANN are better than that of LRM in terms of ρ and RMSE. It has also been found in our study that with the inclusion of meteorological variables along with ΔTWS as input parameters to model GWL, the performance of SVR improves and it performs better than ANN. These results imply that for modelling irregular time series of GWL data, ΔTWS could be very useful.
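A schematic comparison in the spirit of the study, fitting LRM, SVR and a small ANN to synthetic GWL-versus-ΔTWS data with scikit-learn (the data and hyperparameters are invented):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(6)
    n = 300
    dtws = rng.normal(0, 10, n)                              # ΔTWS anomaly (cm)
    gwl = 5 + 0.4*dtws + 0.02*dtws**2 + rng.normal(0, 2, n)  # synthetic GWL (m)
    X = dtws.reshape(-1, 1)
    Xtr, Xte, ytr, yte = X[:200], X[200:], gwl[:200], gwl[200:]

    for name, model in [("LRM", LinearRegression()),
                        ("SVR", SVR(kernel="rbf", C=10.0)),
                        ("ANN", MLPRegressor(hidden_layer_sizes=(16,),
                                             max_iter=5000, random_state=0))]:
        model.fit(Xtr, ytr)
        pred = model.predict(Xte)
        rmse = mean_squared_error(yte, pred)**0.5
        print("%s  RMSE=%.2f  rho=%.2f" % (name, rmse, np.corrcoef(yte, pred)[0, 1]))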
Matejicek, Lubos; Janour, Zbynek; Benes, Ludek; Bodnar, Tomas; Gulikova, Eva
2008-06-06
Projects focusing on spatio-temporal modelling of the living environment need to manage a wide range of terrain measurements, existing spatial data, time series, results of spatial analysis and inputs/outputs from numerical simulations. Thus, GISs are often used to manage data from remote sensors, to provide advanced spatial analysis and to integrate numerical models. In order to demonstrate the integration of spatial data, time series and methods in the framework of the GIS, we present a case study focused on the modelling of dust transport over a surface coal mining area, exploring spatial data from 3D laser scanners, GPS measurements, aerial images, time series of meteorological observations, inputs/outputs from numerical models and existing geographic resources. To achieve this, digital terrain models, layers including GPS thematic mapping, and scenes with simulation of wind flows are created to visualize and interpret coal dust transport over the mine area and a neighbouring residential zone. A temporary coal storage and sorting site, located near the residential zone, is one of the dominant sources of emissions. Using numerical simulations, the possible effects of wind flows are observed over the surface, modified by natural objects and man-made obstacles. The coal dust drifts with the wind in the direction of the residential zone and is partially deposited in this area. The simultaneous display of the digital map layers together with the location of the dominant emission source, wind flows and protected areas enables a risk assessment of the dust deposition in the area of interest to be performed. In order to obtain a more accurate simulation of wind flows over the temporary storage and sorting site, 3D laser scanning and GPS thematic mapping are used to create a more detailed digital terrain model. Thus, visualization of wind flows over the area of interest combined with 3D map layers enables the exploration of the processes of coal dust deposition at a local scale. In general, this project could be used as a template for dust-transport modelling which couples spatial data focused on the construction of digital terrain models and thematic mapping with data generated by numerical simulations based on Reynolds averaged Navier-Stokes equations. PMID:27879911
Magnetic tunnel junction based spintronic logic devices
NASA Astrophysics Data System (ADS)
Lyle, Andrew Paul
The International Technology Roadmap for Semiconductors (ITRS) predicts that complementary metal oxide semiconductor (CMOS) based technologies will hit their last generation on or near the 16 nm node, which we expect to reach by the year 2025. Thus future advances in computational power will not be realized from ever-shrinking device sizes, but rather by 'outside the box' designs and new physics, including molecular or DNA-based computation, organics, magnonics, or spintronics. This dissertation investigates magnetic logic devices for post-CMOS computation. Three different architectures were studied, each relying on a different magnetic mechanism to compute logic functions. Each design has its benefits and challenges that must be overcome. This dissertation focuses on pushing each design from the drawing board to a realistic logic technology. The first logic architecture is based on electrically connected magnetic tunnel junctions (MTJs) that allow direct communication between elements without intermediate sensing amplifiers. Two- and three-input logic gates, which consist of two and three MTJs connected in parallel, respectively, were fabricated and are compared. The direct communication is realized by electrically connecting the output in series with the input and applying voltage across the series connections. The logic gates rely on the fact that a change in resistance at the input modulates the voltage that is needed to supply the critical current for spin transfer torque switching of the output. The change in resistance at the input resulted in a voltage margin of 50-200 mV and 250-300 mV for the closest input states for the three- and two-input designs, respectively. The two-input logic gate realizes the AND, NAND, NOR, and OR logic functions. The three-input logic gate realizes the Majority, AND, NAND, NOR, and OR logic operations. The second logic architecture utilizes magnetostatically coupled nanomagnets to compute logic functions, which is the basis of Magnetic Quantum Cellular Automata (MQCA). MQCA has the potential to be thousands of times more energy efficient than CMOS technology. While interesting, these systems are academic unless they can be interfaced with current technologies. This dissertation pushed past a major hurdle by experimentally demonstrating a spintronic input/output (I/O) interface for the magnetostatically coupled nanomagnets by incorporating MTJs. This spintronic interface allows individual nanomagnets to be programmed using spin transfer torque and read using a magnetoresistance structure. Additionally, the spintronic interface allows statistical data on the reliability of the magnetic coupling utilized for data propagation to be easily measured. The integration of spintronics and MQCA into an electrical interface achieves a low-power magnetic logic device and creates a competitive post-CMOS logic device. The final logic architecture that was studied used MTJs to compute logic functions and magnetic domain walls to communicate between gates. Simulations were used to optimize the design of this architecture. Spin transfer torque was used to compute the logic function at each MTJ gate and to drive the domain walls. The design demonstrated that multiple nanochannels could be connected to each MTJ to realize fan-out from the logic gates. As a result, this logic scheme eliminates the need for intermediate reads and conversions to pass information from one logic gate to another.
Nowicki, Dimitri; Siegelmann, Hava
2010-01-01
This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces. PMID:20552013
A Millennial-length Reconstruction of the Western Pacific Pattern with Associated Paleoclimate
NASA Astrophysics Data System (ADS)
Wright, W. E.; Guan, B. T.; Wei, K.
2010-12-01
The Western Pacific Pattern (WP) is a lesser-known 500 hPa pressure pattern similar to the NAO or PNA. As defined, the poles of the WP index are centered on 60°N over the Kamchatka peninsula and the neighboring Pacific and on 32.5°N over the western North Pacific. However, the area of influence for the southern half of the dipole includes a wide swath from East Asia, across Taiwan, through the Philippine Sea, to the western North Pacific. Tree rings of Taiwanese Chamaecyparis obtusa var. formosana in this extended region show significant correlation with the WP, and with local temperature. The WP is also significantly correlated with atmospheric temperatures over Taiwan, especially at 850 hPa and 700 hPa, pressure levels that bracket the tree site. Spectral analysis indicates that variations in the WP occur at relatively high frequency, with most power at periods of less than 5 years. Simple linear regression against high-frequency variants of the tree-ring chronology yielded the most significant correlation coefficients. Two reconstructions are presented. The first uses a tree-ring time series produced as the first intrinsic mode function (IMF) from an Ensemble Empirical Mode Decomposition (EEMD), based on the Hilbert-Huang Transform. The regression using the EEMD-derived time series was much more significant than regressions using time series produced by traditional high-pass filtering. The second also uses the first IMF of a tree-ring time series, but the dataset was first sorted and partitioned at a specified quantile prior to EEMD decomposition, with the mean of the partitioned data forming the input to the EEMD. The partitioning was done to filter out the less climatically sensitive tree rings, a common problem with shade-tolerant trees. Time series statistics indicate that the first reconstruction is reliable back to 1241 of the Common Era. Reliability of the second reconstruction is dependent on the development of statistics related to the quantile partitioning, and the consequent reduction in sample depth. However, the correlation coefficients from regressions over the instrumental period greatly exceed those from any other method of chronology generation, and so the technique holds promise. Additional atmospheric parameters having significant correlations with the WP and the tree-ring time series, with similar spatial patterns, are also presented. These include vertical wind shear (850 hPa-700 hPa) over the northern Philippines and the Philippine Sea, and surface omega and 850 hPa v-winds over the East China Sea, Japan and Taiwan. Possible links to changes in the subtropical jet stream will also be discussed.
Wave-plate structures, power selective optical filter devices, and optical systems using same
Koplow, Jeffrey P [San Ramon, CA]
2012-07-03
In an embodiment, an optical filter device includes an input polarizer for selectively transmitting an input signal. The device includes a wave-plate structure positioned to receive the input signal, which includes first and second substantially zero-order, zero-wave plates arranged in series with, and oriented at an angle relative to, each other. The first and second zero-wave plates are configured to alter the polarization state of the input signal passing through them in a manner that depends on the power of the input signal. Each zero-wave plate includes an entry and an exit wave plate, each having a fast axis, with the fast axes oriented substantially perpendicular to each other. Each entry wave plate is oriented relative to a transmission axis of the input polarizer at a respective angle. An output polarizer is positioned to receive a signal output from the wave-plate structure and selectively transmits the signal based on the polarization state.
NASA Astrophysics Data System (ADS)
Baisden, W. T.; Canessa, S.
2013-01-01
In 1959, Athol Rafter began a substantial programme of systematically monitoring the flow of 14C produced by atmospheric thermonuclear tests through organic matter in New Zealand soils under stable land use. A database of ∼500 soil radiocarbon measurements spanning 50 years has now been compiled, and is used here to identify optimal approaches for soil C-cycle studies. Our results confirm the potential of 14C to determine residence times by estimating the amount of ‘bomb 14C’ incorporated. High-resolution time series confirm that this approach is appropriate and emphasise that residence times can be calculated routinely from two or more time points as little as 10 years apart. This approach is generally robust to the key assumptions that can create large errors when single time-point 14C measurements are modelled. The three most critical assumptions relate to: (1) the distribution of turnover times, and particularly the proportion of old C (the ‘passive fraction’); (2) the lag time between photosynthesis and C entering the modelled pool; and (3) changes in the rates of C input. When approaches with robust assumptions are applied to time-series samples, multiple soil layers can be aggregated using a mixing equation. Where good archived samples are available, AMS measurements can build understanding useful for calibrating models of the soil C cycle at regional to continental scales with sample numbers on the order of hundreds rather than thousands. Sample preparation laboratories and AMS facilities can play an important role in coordinating the efficient delivery of robust calculated residence times for soil carbon.
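To make the two-time-point idea concrete, the sketch below fits a one-pool turnover model to two hypothetical soil 14C measurements a decade apart. The atmospheric bomb curve is a crude idealized exponential and the observations are invented; a real analysis would use a measured atmospheric record and the compiled database.

```python
# A minimal sketch of a two-time-point residence-time estimate with a
# one-pool model; the bomb curve and observations are assumptions.
import numpy as np

def f_atm(year):
    # Idealized post-bomb atmospheric curve: peak ~1.8 fraction modern
    # in 1964, relaxing toward 1.0 with ~16-year e-folding (assumption).
    return np.where(year < 1964, 1.0,
                    1.0 + 0.8 * np.exp(-(year - 1964) / 16.0))

def soil_fm(tau, years=np.arange(1950, 2011)):
    # One-pool model: a fraction 1/tau of soil C is replaced each year
    # (radioactive decay neglected in this sketch).
    f, out = 1.0, {}
    for y in years:
        f += (f_atm(y) - f) / tau
        out[int(y)] = f
    return out

# Two hypothetical soil measurements ten years apart (fraction modern).
obs = {1975: 1.15, 1985: 1.18}

# Grid-search the turnover time that best matches both observations.
taus = np.arange(2.0, 200.0, 0.5)
misfit = [sum((soil_fm(t)[y] - fm) ** 2 for y, fm in obs.items()) for t in taus]
print(f"best-fit turnover time ~ {taus[int(np.argmin(misfit))]:.1f} years")
```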
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belov, A. S., E-mail: alexis-belov@yandex.ru
2015-10-15
Results of numerical simulations of the near-Earth plasma perturbations induced by powerful HF radio waves from the SURA heating facility are presented. The simulations were performed using a modified version of the SAMI2 ionospheric model for the input parameters corresponding to the series of in-situ SURA–DEMETER experiments. The spatial structure and developmental dynamics of large-scale plasma temperature and density perturbations have been investigated. The characteristic formation and relaxation times of the induced large-scale plasma perturbations at the altitudes of the Earth’s outer ionosphere have been determined.
Stochastic ground motion simulation
Rezaeian, Sanaz; Sun, Xiaodan; Beer, Michael; Kougioumtzoglou, Ioannis A.; Patelli, Edoardo; Au, Ivan Siu-Kui
2014-01-01
Strong earthquake ground motion records are fundamental in engineering applications. Ground motion time series are used in response-history dynamic analysis of structural or geotechnical systems. In such analyses, the validity of predicted responses depends on the validity of the input excitations. Ground motion records are also used to develop ground motion prediction equations (GMPEs) for intensity measures, such as spectral accelerations, that are used in response-spectrum dynamic analysis. Despite the thousands of available strong ground motion records, there remains a shortage of records for large-magnitude earthquakes at short distances or in specific regions, as well as records that sample specific combinations of source, path, and site characteristics.
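One widely used family of stochastic simulation recipes consistent with this description shapes band-limited white noise with a time-modulating envelope. The sketch below is a minimal illustration of that idea with arbitrary, assumed parameters; it does not reproduce any specific published model.

```python
# A minimal sketch: band-limited white noise shaped by a time-varying
# envelope as a synthetic acceleration record. All parameters assumed.
import numpy as np
from scipy import signal

fs, dur = 100.0, 30.0                     # sample rate (Hz), duration (s)
t = np.arange(0, dur, 1.0 / fs)

rng = np.random.default_rng(1)
noise = rng.standard_normal(t.size)

# Band-pass filter standing in for the site/path frequency content.
b, a = signal.butter(4, [0.5, 10.0], btype="bandpass", fs=fs)
filtered = signal.lfilter(b, a, noise)

# Gamma-shaped envelope giving the buildup and decay of shaking intensity.
envelope = (t / 5.0) ** 2 * np.exp(-t / 5.0)
accel = envelope * filtered               # synthetic acceleration series

print(f"peak |a| (arbitrary units): {np.max(np.abs(accel)):.3f}")
```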
Baker, Zachary Kent; Power, John Fredrick; Tripp, Justin Leonard; Dunham, Mark Edward; Stettler, Matthew W; Jones, John Alexander
2014-10-14
Disclosed is a method and system for performing operations on at least one input data vector in order to produce at least one output vector to permit easy, scalable, and fast programming of a petascale-equivalent supercomputer. A PetaFlops Router may comprise one or more PetaFlops Nodes, which may be connected to each other and/or to external data providers/consumers via a programmable crossbar switch external to the PetaFlops Node. Each PetaFlops Node has an FPGA and a programmable intra-FPGA crossbar switch that permits input and output variables to be configurably connected to various physical operators contained in the FPGA as desired by a user. This allows a user to specify the instruction set of the system on a per-application basis. Further, the intra-FPGA crossbar switch permits the output of one operation to be delivered as an input to a second operation. By configuring the external crossbar switch, the output of a first operation on a first PetaFlops Node may be used as the input for a second operation on a second PetaFlops Node. An embodiment may provide the ability for the system to recognize and generate pipelined functions. Streaming operators may be connected together at run time and appropriately staged to allow data to flow through a series of functions. This allows the system to provide high throughput and parallelism when possible. The PetaFlops Router may implement the user-desired instructions by appropriately configuring the intra-FPGA crossbar switch on each PetaFlops Node and the external crossbar switch.
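The crossbar idea can be caricatured in software: a routing table decides which operator's output feeds which operator's input, so the effective instruction set is just the chosen wiring. The toy model below is purely illustrative and does not model the actual FPGA hardware, timing, or parallelism.

```python
# A toy software model of a configurable crossbar routing data through
# a user-chosen series of operators; names and operators are invented.
from typing import Callable, Dict, List

Operator = Callable[[float], float]

class Crossbar:
    def __init__(self, operators: Dict[str, Operator]):
        self.operators = operators
        self.routes: List[str] = []    # ordered wiring of operator ports

    def connect(self, *names: str) -> None:
        # Configure the switch: route data through these operators in order.
        self.routes = list(names)

    def stream(self, values: List[float]) -> List[float]:
        # Data flows through the configured pipeline of functions.
        out = []
        for v in values:
            for name in self.routes:
                v = self.operators[name](v)
            out.append(v)
        return out

xbar = Crossbar({"square": lambda x: x * x, "halve": lambda x: x / 2.0})
xbar.connect("square", "halve")        # output of 'square' feeds 'halve'
print(xbar.stream([1.0, 2.0, 3.0]))    # [0.5, 2.0, 4.5]
```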
Series-Connected Buck Boost Regulators
NASA Technical Reports Server (NTRS)
Birchenough, Arthur G.
2005-01-01
A series-connected buck boost regulator (SCBBR) is an electronic circuit that bucks a power-supply voltage to a lower regulated value or boosts it to a higher regulated value. The concept of the SCBBR is a generalization of the concept of the SCBR, which was reported in "Series-Connected Boost Regulators" (LEW-15918), NASA Tech Briefs, Vol. 23, No. 7 (July 1997), page 42. Relative to prior DC-voltage-regulator concepts, the SCBBR concept can yield significant reductions in weight and increases in power-conversion efficiency in many applications in which input/output voltage ratios are relatively small and isolation is not required, such as solar-array regulation or battery charging with DC-bus regulation. Usually, a DC voltage regulator is designed to include a DC-to-DC converter to reduce its power loss, size, and weight. Advances in components, increases in operating frequencies, and improved circuit topologies have led to continual increases in efficiency and/or decreases in the sizes and weights of DC voltage regulators. The primary source of inefficiency in the DC-to-DC converter portion of a voltage regulator is the conduction loss and, especially at high frequencies, the switching loss. Although improved components and topology can reduce the switching loss, the reduction is limited by the fact that the converter generally switches all the power being regulated. Like the SCBR concept, the SCBBR concept involves a circuit configuration in which only a fraction of the power is switched, so that the switching loss is reduced by an amount that is largely independent of the specific components and circuit topology used. In an SCBBR, the amount of power switched by the DC-to-DC converter is only the amount needed to make up the difference between the input and output bus voltages. The remaining majority of the power passes through the converter without being switched. The weight and power loss of a DC-to-DC converter are determined primarily by the amount of power processed. In the SCBBR, the unswitched majority of the power passes through with very little loss, and little if any increase in the sizes of the converter components is needed to enable them to handle the unswitched power. As a result, the power-conversion efficiency of the regulator can be very high, as shown in the example of Figure 1. A basic SCBBR includes a DC-to-DC converter (see Figure 2). The switches and primary winding of a transformer in the converter are connected across the input bus, while the secondary winding and switches are connected in series with the output bus, so that the output voltage is the sum of the input voltage and the secondary voltage of the converter. In the breadboard SCBBR, the input voltage applied to the primary winding is switched by use of metal-oxide-semiconductor field-effect transistors (MOSFETs) in a full-bridge circuit; the secondary winding is center-tapped, with two MOSFET switches and diode rectifiers connected in opposed series in each leg. The sets of opposed switches and rectifiers are what enable operation in either a boost or a buck mode. In the boost mode, the input voltage and current and the output voltage and current are all positive; that is, the secondary voltage is added to the input voltage, and the net output voltage can be regulated at a value equal to or greater than the input voltage.
In the buck mode, the input voltage is still positive and the current still flows in the same direction in the secondary, but the switches are controlled such that some power flows from the secondary to the primary. The voltage across the secondary and the current into the primary are reversed. The result is that the output voltage is lower than the input voltage, and some power is recirculated from the converter secondary back to the input. Quantitatively, the advantage of an SCBBR is a direct function of the regulation range required. If, for example, a regulation range of 20 percent is required for a 500-W supply, then it suffices to design the DC-to-DC converter in the SCBBR for a power rating of only 100 W. The switching loss and size are much smaller than those of a conventional regulator that must be rated for switching of all 500 W. The reduction in size and the increase in efficiency are not directly proportional to the switched-power ratio of 5:1 because the additional switches contribute some conduction loss and the input and output filters must be larger than those typically required for a 100-W converter. Nevertheless, the power loss and the size can be much smaller than those of a 500-W converter.
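A back-of-the-envelope calculation makes the advantage concrete. The sketch below reproduces the 500-W, 20-percent example; the efficiency figures for the switched and unswitched paths are illustrative assumptions, not measurements from the breadboard unit.

```python
# Back-of-the-envelope sketch of the worked example above: only the
# regulation range is switched through the converter, so losses scale
# with the switched fraction. Efficiency values are assumptions.
P_out = 500.0          # W, supply rating
reg_range = 0.20       # regulation range as a fraction of the bus voltage

P_switched = reg_range * P_out       # converter need only process 100 W
eta_converter = 0.90                 # assumed DC-to-DC converter efficiency
eta_pass = 0.995                     # assumed unswitched-path efficiency

P_loss = ((1 - eta_converter) * P_switched
          + (1 - eta_pass) * (P_out - P_switched))
print(f"converter rating: {P_switched:.0f} W")
print(f"overall efficiency ~ {(P_out - P_loss) / P_out:.1%}")
```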
NASA Astrophysics Data System (ADS)
Sun, W.; Dryer, M.; Fry, C. D.; Deehr, C. S.; Smith, Z.; Akasofu, S.-I.; Kartalev, M. D.; Grigorov, K. G.
2002-04-01
We compare simulation results of real-time shock arrival time prediction with observations by the ACE satellite for a series of solar flares/coronal mass ejections that took place between 28 March and 18 April 2001, on the basis of the Hakamada-Akasofu-Fry version 2 (HAFv.2) model. It is found, via an ex post facto calculation, that the initial speed of the shock waves, used as an input parameter of the model, is crucial for agreement between observation and simulation. The initial speeds determined from metric Type II radio burst observations must be substantially reduced (by 30 percent on average) for most high-speed shock waves.
Introduction of the ASGARD Code
NASA Technical Reports Server (NTRS)
Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian
2017-01-01
ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).
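ASGARD itself is IDL code built on SolarSoft, but the detection-and-grouping step it performs can be illustrated generically: threshold an image cube and label spatio-temporally connected bright voxels as discrete events. The sketch below does this on synthetic data and is in no way a port of the actual code.

```python
# A minimal sketch of event detection and grouping on a synthetic
# (time, y, x) cube; thresholds, sizes, and data are all assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
cube = rng.normal(0.0, 1.0, (60, 64, 64))   # synthetic ROI time series
cube[20:30, 10:18, 12:20] += 5.0             # injected "brightening"

# Detect pixels well above the background and group connected regions
# across x, y, and time into discrete events.
mask = cube > 3.5
labels, n_events = ndimage.label(mask)

sizes = ndimage.sum(mask, labels, index=range(1, n_events + 1))
for i, size in enumerate(sizes, start=1):
    if size < 10:                            # drop single-voxel noise spikes
        continue
    t_idx = np.where(labels == i)[0]
    print(f"event {i}: frames {t_idx.min()}-{t_idx.max()}, {int(size)} voxels")
```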