Analysis of Time-Series Quasi-Experiments. Final Report.
ERIC Educational Resources Information Center
Glass, Gene V.; Maguire, Thomas O.
The objective of this project was to investigate the adequacy of statistical models developed by G. E. P. Box and G. C. Tiao for the analysis of time-series quasi-experiments: (1) The basic model developed by Box and Tiao is applied to actual time-series experiment data from two separate experiments, one in psychology and one in educational…
The Value of Interrupted Time-Series Experiments for Community Intervention Research
Biglan, Anthony; Ary, Dennis; Wagenaar, Alexander C.
2015-01-01
Greater use of interrupted time-series experiments is advocated for community intervention research. Time-series designs enable the development of knowledge about the effects of community interventions and policies in circumstances in which randomized controlled trials are too expensive, premature, or simply impractical. The multiple baseline time-series design typically involves two or more communities that are repeatedly assessed, with the intervention introduced into one community at a time. It is particularly well suited to initial evaluations of community interventions and the refinement of those interventions. This paper describes the main features of multiple baseline designs and related repeated-measures time-series experiments, discusses the threats to internal validity in multiple baseline designs, and outlines techniques for statistical analyses of time-series data. Examples are given of the use of multiple baseline designs in evaluating community interventions and policy changes. PMID:11507793
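Effects in interrupted time-series designs such as these are commonly estimated with segmented regression: a pre-intervention level and slope, plus a level-change and slope-change term at the interruption. The sketch below is a generic illustration of that model on synthetic data, not the authors' own analysis; all variable names are ours.

```python
import numpy as np

def its_fit(y, t0):
    """Segmented regression for an interrupted time series:
    y_t = b0 + b1*t + b2*step_t + b3*(t - t0)*step_t.
    b2 estimates the immediate level change at the intervention time t0,
    b3 the change in slope afterwards."""
    t = np.arange(len(y), dtype=float)
    step = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - t0) * step])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return beta  # [intercept, pre-slope, level change, slope change]

# Synthetic series: slope 1 before t=10, then an abrupt level jump of 5.
y = [t + (5 if t >= 10 else 0) for t in range(20)]
b0, b1, level, slope = its_fit(y, 10)
```

In a multiple baseline design the same model would be fitted per community, with the interruption point `t0` staggered across sites.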
The Consequences of Model Misidentification in the Interrupted Time-Series Experiment.
ERIC Educational Resources Information Center
Padia, William L.
Campbell (1969) argued for the interrupted time-series experiment as a useful methodology for testing intervention effects in the social sciences. The validity of statistical hypothesis testing of time series is, however, dependent upon the proper identification of the underlying stochastic nature of the data. Several types of model…
ERIC Educational Resources Information Center
St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly
2014-01-01
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…
Fulcher, Ben D; Jones, Nick S
2017-11-22
Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Nonstationary time series prediction combined with slow feature analysis
NASA Astrophysics Data System (ADS)
Wang, G.; Chen, X.
2015-07-01
Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when constructing the climate dynamics. This paper presents a new technique of obtaining the driving forces of a time series from the slow feature analysis (SFA) approach, and then introduces them into a predictive model to predict nonstationary time series. The basic theory of the technique is to consider the driving forces as state variables and to incorporate them into the predictive model. Experiments using a modified logistic time series and winter ozone data in Arosa, Switzerland, were conducted to test the model. The results showed improved prediction skills.
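The core idea of extracting a slowly varying driving force can be illustrated with linear slow feature analysis on a time-delay embedding: whiten the embedded data, then keep the direction whose temporal derivative has the smallest variance. This is a hedged sketch of the general SFA principle, not the authors' implementation; the test signal and embedding dimension are our own choices, and the sign of the recovered feature is arbitrary.

```python
import numpy as np

def slow_feature(x, dim=3):
    """Linear SFA: embed a scalar series with `dim` delays, whiten,
    and return the unit-variance component whose successive
    differences have minimal variance (the slowest feature)."""
    x = np.asarray(x, float)
    X = np.column_stack([x[i:len(x) - dim + 1 + i] for i in range(dim)])
    X = X - X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(X.T))      # eigendecomposition for whitening
    Z = X @ V / np.sqrt(w)                  # whitened embedding
    dw, dV = np.linalg.eigh(np.cov(np.diff(Z, axis=0).T))
    return Z @ dV[:, 0]                     # direction of slowest change

t = np.linspace(0, 20, 2000)
drift = np.sin(0.3 * t)                     # slow "driving force"
s = slow_feature(np.sin(25 * t) + drift)    # fast carrier plus slow drift
corr = np.corrcoef(s, drift[1:1 + len(s)])[0, 1]
```

The recovered component `s` tracks the slow drift (up to sign and scale) and could then be fed into a predictive model as an extra state variable, as the abstract describes.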
A Method of Time-Series Change Detection Using Full PolSAR Images from Different Sensors
NASA Astrophysics Data System (ADS)
Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.
2018-04-01
Most existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. First, the overall difference image of the time-series PolSAR data was calculated by an omnibus statistical test. Second, difference images between any two images at different times were acquired by the Rj statistical test. Finally, a generalized Gaussian mixture model (GGMM) was used to obtain the time-series change detection maps. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can detect time-series change across different sensors.
Seasonal variability and degradation investigation of iodocarbons in a coastal fjord
NASA Astrophysics Data System (ADS)
Shi, Qiang; Wallace, Douglas
2016-04-01
Methyl iodide (CH3I) is considered an important carrier of iodine atoms from sea to air. The importance of other volatile iodinated compounds, such as very short-lived iodocarbons (e.g. CH2ClI, CH2I2), has also been demonstrated [McFiggans, 2005; O'Dowd and Hoffmann, 2005; Carpenter et al., 2013]. The production pathways of iodocarbons, and controls on their sea-to-air flux can be investigated by in-situ studies (e.g. surface layer mass balance from time-series studies) and by incubation experiments. Shi et al., [2014] reported previously unrecognised large, night-time losses of CH3I observed during incubation experiments with coastal waters. These losses were significant for controlling the sea-to-air flux but are not yet understood. As part of a study to further investigate sources and sinks of CH3I and other iodocarbons in coastal waters, samples have been analysed weekly since April 2015 at 4 depths (5 to 60 m) in the Bedford Basin, Halifax, Canada. The time-series study was part of a broader study that included measurement of other, potentially related parameters (temperature, salinity, Chlorophyll a etc.). A set of repeated degradation experiments was conducted, in the context of this time-series, including incubations within a solar simulator using 13C labelled CH3I. Results of the time-series sampling and incubation experiments will be presented.
NASA Astrophysics Data System (ADS)
Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia
2016-04-01
Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) in umbilical cord blood in an electronic-waste (e-waste) contaminated area by chaotic time series prediction, using a least squares self-exciting threshold autoregressive (SETAR) model. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the scheme's validity was further verified by numerical simulation experiments. The results show that: 1) seven PCB congeners correlate negatively with five indices of birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) the prenatal PCB-exposed group was at greater risk than the reference group; 3) PCBs increasingly accumulated with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future was greater. The numerical simulation results demonstrate the feasibility of applying mathematical models in the field of environmental toxicology.
Causal strength induction from time series data.
Soo, Kevin W; Rottman, Benjamin M
2018-04-01
One challenge when inferring the strength of cause-effect relations from time series data is that the cause and/or effect can exhibit temporal trends. If temporal trends are not accounted for, a learner could infer that a causal relation exists when it does not, or even infer that a causal relation is positive when it is actually negative, or vice versa. We propose that learners use a simple heuristic to control for temporal trends: they focus not on the states of the cause and effect at a given instant, but on how the cause and effect change from one observation to the next, which we call transitions. Six experiments were conducted to understand how people infer causal strength from time series data. We found that participants indeed use transitions in addition to states, which helps them reach more accurate causal judgments (Experiments 1A and 1B). Participants use transitions more when the stimuli are presented in a naturalistic visual format than in a numerical format (Experiment 2), and the effect of transitions is not driven by primacy or recency effects (Experiment 3). Finally, we found that participants primarily use the direction in which variables change, rather than the magnitude of the change, for estimating causal strength (Experiments 4 and 5). Collectively, these studies provide evidence that people often use a simple yet effective heuristic for inferring causal strength from time series data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
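The transitions heuristic can be made concrete with a toy comparison: correlating raw states of two trending but causally unrelated series inflates apparent causal strength, while correlating their step-to-step changes does not. A minimal sketch under our own assumptions (synthetic data; this is not the authors' stimulus set or scoring model):

```python
import numpy as np

def causal_strength(cause, effect, use_transitions=True):
    """Correlate either the raw states or the step-to-step changes
    (transitions) of two series; differencing removes shared trends."""
    c, e = np.asarray(cause, float), np.asarray(effect, float)
    if use_transitions:
        c, e = np.diff(c), np.diff(e)
    return np.corrcoef(c, e)[0, 1]

rng = np.random.default_rng(0)
trend = np.arange(100.0)
cause = trend + rng.normal(0, 1, 100)
effect = trend + rng.normal(0, 1, 100)  # shared trend, no causal link
states = causal_strength(cause, effect, use_transitions=False)
trans = causal_strength(cause, effect, use_transitions=True)
```

`states` is spuriously near 1 because the common trend dominates, whereas `trans` stays near 0, mirroring why transition-based judgments are more accurate.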
A perturbative approach for enhancing the performance of time series forecasting.
de Mattos Neto, Paulo S G; Ferreira, Tiago A E; Lima, Aranildo R; Vasconcelos, Germano C; Cavalcanti, George D C
2017-04-01
This paper proposes a method to perform time series prediction based on perturbation theory. The approach is based on continuously adjusting an initial forecasting model to asymptotically approximate a desired time series model. First, a predictive model generates an initial forecast for a time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecast. If that residual series is not white noise, it can be used to improve the accuracy of the initial model, and a new predictive model is fitted to the residual series. The whole process is repeated until convergence or until the residual series becomes white noise. The output of the method is then given by summing the outputs of all trained predictive models in a perturbative sense. To test the method, an experimental investigation was conducted on six real-world time series. A comparison was made with six other methods tested here and with ten results found in the literature. Results show that not only is the performance of the initial model significantly improved, but the proposed method also outperforms the previously published results. Copyright © 2017 Elsevier Ltd. All rights reserved.
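The iterate-on-residuals loop described above can be sketched with a simple AR(p) base learner: each stage fits the residual left by the previous stages, and the final output is the sum of all stage outputs. This is our own minimal illustration of the perturbative idea, using in-sample predictions and an AR model we chose for brevity, not the models benchmarked in the paper.

```python
import numpy as np

def fit_ar(y, p=2):
    """Least-squares AR(p); returns coefficients and in-sample predictions
    (the first p points are passed through unchanged)."""
    Y = np.array([y[i - p:i] for i in range(p, len(y))])
    a, *_ = np.linalg.lstsq(Y, y[p:], rcond=None)
    return a, np.concatenate([y[:p], Y @ a])

def perturbative_forecast(y, stages=3, p=2):
    """Stage 0 fits the series; each later stage fits the remaining
    residual; outputs of all stages are summed (perturbative correction)."""
    y = np.asarray(y, float)
    total, resid = np.zeros_like(y), y.copy()
    for _ in range(stages):
        _, pred = fit_ar(resid, p)
        total += pred            # accumulate this stage's contribution
        resid = resid - pred     # next stage models what is left over
    return total

t = np.arange(200)
y = np.sin(0.2 * t) + 0.1 * np.sin(1.3 * t)  # AR(2) alone cannot fit both tones
err1 = np.mean((y - perturbative_forecast(y, stages=1)) ** 2)
err3 = np.mean((y - perturbative_forecast(y, stages=3)) ** 2)
```

Because each stage is a least-squares fit of the current residual, the in-sample error is non-increasing across stages; a practical implementation would instead stop once the residual tests as white noise.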
Learning time series for intelligent monitoring
NASA Technical Reports Server (NTRS)
Manganaris, Stefanos; Fisher, Doug
1994-01-01
We address the problem of classifying time series according to their morphological features in the time domain. In a supervised machine-learning framework, we induce a classification procedure from a set of preclassified examples. For each class, we infer a model that captures its morphological features using Bayesian model induction and the minimum message length approach to assign priors. In the performance task, we classify a time series in one of the learned classes when there is enough evidence to support that decision. Time series with sufficiently novel features, belonging to classes not present in the training set, are recognized as such. We report results from experiments in a monitoring domain of interest to NASA.
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
Fuzzy time series have been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of the universe of discourse, the content of the forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase, and uses heuristic rules to generate forecast values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy compared to other fuzzy-time-series forecasting models.
Sorokin, Anatoly; Selkov, Gene; Goryanin, Igor
2012-07-16
The volume of the experimentally measured time series data is rapidly growing, while storage solutions offering better data types than simple arrays of numbers or opaque blobs for keeping series data are sorely lacking. A number of indexing methods have been proposed to provide efficient access to time series data, but none has so far been integrated into a tried-and-proven database system. To explore the possibility of such integration, we have developed a data type for time series storage in PostgreSQL, an object-relational database system, and equipped it with an access method based on SAX (Symbolic Aggregate approXimation). This new data type has been successfully tested in a database supporting a large-scale plant gene expression experiment, and it was additionally tested on a very large set of simulated time series data. Copyright © 2011 Elsevier B.V. All rights reserved.
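The SAX access method mentioned above rests on a simple symbolization pipeline: z-normalize the series, reduce it with piecewise aggregate approximation (PAA), and map segment means to letters using equiprobable Gaussian breakpoints. A minimal sketch of plain SAX for a 4-letter alphabet (the standard-normal quartile breakpoints are hardcoded); this illustrates the technique only, not the PostgreSQL data type or its index structures.

```python
import numpy as np

def sax(series, n_segments=4, breakpoints=(-0.674, 0.0, 0.674)):
    """Symbolic Aggregate approXimation, 4-letter alphabet:
    z-normalize, PAA-reduce to n_segments means, then assign each mean
    a letter via standard-normal quartile breakpoints, so each letter
    is equally likely for Gaussian data."""
    x = np.asarray(series, float)
    x = (x - x.mean()) / x.std()
    paa = x.reshape(n_segments, -1).mean(axis=1)  # length must divide evenly
    letters = "abcd"
    return "".join(letters[np.searchsorted(breakpoints, v)] for v in paa)

word = sax([0, 1, 2, 3, 4, 5, 6, 7], n_segments=4)
```

Because the resulting words are short strings, they can be indexed with ordinary string indexes, which is what makes SAX attractive inside a relational database.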
Self-organising mixture autoregressive model for non-stationary time series modelling.
Ni, He; Yin, Hujun
2008-12-01
Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks a time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a globally non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. sunspot numbers and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
Variance fluctuations in nonstationary time series: a comparative study of music genres
NASA Astrophysics Data System (ADS)
Jennings, Heather D.; Ivanov, Plamen Ch.; De Martins, Allan M.; da Silva, P. C.; Viswanathan, G. M.
2004-05-01
An important problem in physics concerns the analysis of audio time series generated by transduced acoustic phenomena. Here, we develop a new method to quantify the scaling properties of the local variance of nonstationary time series. We apply this technique to analyze audio signals obtained from selected genres of music. We find quantitative differences in the correlation properties of high art music, popular music, and dance music. We discuss the relevance of these objective findings in relation to the subjective experience of music.
NASA Astrophysics Data System (ADS)
Chandler, Susan; Lukesh, Gordon
2006-11-01
Ground-to-space illumination experiments, such as Floodbeam I (FBE I, 1993), Floodbeam II (FBE II, 1996) and the Active Imaging Testbed (AIT, 1999), fielded by the Imaging Branch of the United States Air Force Research Laboratory at the Starfire Optical Range (SOR) on Kirtland AFB, NM, yielded considerable information. While the experiments were primarily aimed at collecting focal/pupil-plane data, the authors recognized during data reduction that the received time-series signals from the integrated full receiver focal-plane data contain considerable hitherto unexploited information. For more than 10 years the authors have investigated the exploitation of data contained within the time-series signal from ground-to-space experiments. Results have been presented at numerous SPIE and EOS Remote Sensing meetings. In July 2005, the authors were honored as invited speakers at the XIIth Symposium "Atmosphere and Ocean Optics; Atmospheric Physics" in Tomsk, Russia. The authors were invited to return to Tomsk in 2006; however, a serious automobile accident precluded attendance. This paper, requested for publication, provides an important summary of recent results.
Analysis of Complex Intervention Effects in Time-Series Experiments.
ERIC Educational Resources Information Center
Bower, Cathleen
An iterative least squares procedure for analyzing the effect of various kinds of intervention in time-series data is described. There are numerous applications of this design in economics, education, and psychology, although until recently, no appropriate analysis techniques had been developed to deal with the model adequately. This paper…
ERIC Educational Resources Information Center
Kidney, John
This self-instructional module, the eleventh in a series of 16 on techniques for coordinating work experience programs, deals with federal and state employment laws. Addressed in the module are federal and state employment laws pertaining to minimum wage for student learners, minimum wage for full-time students, unemployment insurance, child labor…
Wang, Jin; Sun, Xiangping; Nahavandi, Saeid; Kouzani, Abbas; Wu, Yuchuan; She, Mary
2014-11-01
Biomedical time series clustering, which automatically groups a collection of time series according to their internal similarity, is of importance for medical record management and inspection, such as bio-signal archiving and retrieval. In this paper, a novel framework that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity is proposed. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), originally developed for visual motion analysis, to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSA are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to variations of parameters, including the length of local segments and the dictionary size. Although the experimental evaluation used multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for clustering multichannel biomedical time series according to their structural similarity, which has many applications in biomedical time series management. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Rivera, Diego; Lillo, Mario; Granda, Stalin
2014-12-01
The concept of time stability has been widely used in the design and assessment of soil-moisture monitoring networks, as well as in hydrological studies, because it is a technique that identifies particular locations that represent the mean soil-moisture values of a field. In this work, we assess how time stability calculations change as new information is added, and how they are affected over shorter periods, subsampled from the original time series, containing different amounts of precipitation. To do so, we defined two experiments to explore time stability behavior. The first experiment sequentially adds new data to the previous time series to investigate the long-term influence of new data on the results. The second experiment applies a windowing approach, taking sequential subsamples from the entire time series to investigate the influence of short-term changes associated with the precipitation in each window. Our results from an operating network (seven monitoring points, each equipped with four sensors, in a 2-ha blueberry field) show that as information is added to the time series, the location of the most stable point (MSP) changes, and that over moving 21-day windows most of the variability in soil water content is associated with both the amount and the intensity of rainfall. The changes of the MSP over each window depend on the amount of water entering the soil and the previous state of the soil water content. For our case study, the upper strata are proxies for hourly to daily changes in soil water content, while the deeper strata are proxies for medium-range stored water. Thus, different locations and depths are representative of processes at different time scales. This situation must be taken into account when water management depends on soil water content values from fixed locations.
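Time stability analyses of this kind typically rank locations by their mean relative difference (MRD) from the field mean: the location whose MRD is closest to zero, with low spread, is the most stable point. A hedged sketch with synthetic data (the matrix shape, biases, and noise levels are our illustrative assumptions, not the blueberry-field measurements):

```python
import numpy as np

def mean_relative_difference(theta):
    """theta: (times, locations) soil-moisture matrix. The relative
    difference of location j at time i is (theta_ij - mean_i) / mean_i;
    returns its temporal mean (MRD) and standard deviation per location.
    The location with MRD nearest zero best tracks the field mean."""
    field_mean = theta.mean(axis=1, keepdims=True)
    rel = (theta - field_mean) / field_mean
    return rel.mean(axis=0), rel.std(axis=0)

rng = np.random.default_rng(3)
field = rng.uniform(0.2, 0.4, size=(50, 1))  # field-mean moisture over time
offsets = np.array([0.0, 0.05, -0.06])       # fixed per-location biases
theta = field + offsets + rng.normal(0, 0.005, size=(50, 3))
mrd, srd = mean_relative_difference(theta)
msp = int(np.argmin(np.abs(mrd)))            # index of most stable point
```

Re-running this on a growing record, or on moving windows, is exactly the kind of recalculation the paper examines: the identity of `msp` can shift as new (especially rainy) periods enter the data.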
Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio
2015-12-01
This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.
Higher-Order Hurst Signatures: Dynamical Information in Time Series
NASA Astrophysics Data System (ADS)
Ferenbaugh, Willis
2005-10-01
Understanding and comparing time series from different systems requires characteristic measures of the dynamics embedded in the series. The Hurst exponent is a second-order dynamical measure of a time series which grew up within the blossoming fractal world of Mandelbrot. This characteristic measure is directly related to the behavior of the autocorrelation, the power spectrum, and other second-order quantities. As with these other measures, the Hurst exponent captures and quantifies some, but not all, of the intrinsic nature of a series. The more elusive characteristics live in the phase spectrum and the higher-order spectra. This research is a continuing quest to (more) fully characterize the dynamical information in time series produced by plasma experiments or models. The goal is to supplement the series information that can be represented by a Hurst exponent, and we would like to develop supplemental techniques in analogy with Hurst's original R/S analysis. These techniques should offer another way to plumb the higher-order dynamics.
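Hurst's original R/S analysis, which this abstract takes as its starting point, estimates the exponent from how the rescaled range grows with window size. A minimal sketch (classical, non-overlapping windows; the chunk sizes and test signals are our own choices, and small-sample R/S estimates for white noise are known to bias slightly above 0.5):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent:
    for dyadic window sizes n, average R/S over windows, then fit
    log(R/S) = H*log(n) + c."""
    x = np.asarray(x, float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(x) // 2:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())     # cumulative deviations
            r = dev.max() - dev.min()         # range R
            s = w.std()                       # scale S
            if s > 0:
                vals.append(r / s)
        sizes.append(n)
        rs.append(np.mean(vals))
        n *= 2
    H, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return H

rng = np.random.default_rng(1)
H_noise = hurst_rs(rng.normal(size=4096))            # near 0.5 for white noise
H_walk = hurst_rs(np.cumsum(rng.normal(size=4096)))  # near 1 for a random walk
```

The "higher-order signatures" the author seeks would supplement exactly this second-order slope with phase and higher-spectral information that R/S cannot see.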
NASA Technical Reports Server (NTRS)
Remsberg, Ellis E.
2009-01-01
Fourteen-year time series of mesospheric and upper stratospheric temperatures from the Halogen Occultation Experiment (HALOE) are analyzed and reported. The data have been binned according to ten-degree wide latitude zones from 40S to 40N and at 10 altitudes from 43 to 80 km, a total of 90 separate time series. Multiple linear regression (MLR) analysis techniques have been applied to those time series. This study focuses on resolving their 11-yr solar cycle (or SC-like) responses and their linear trend terms. Findings for T(z) from HALOE are compared directly with published results from ground-based Rayleigh lidar and rocketsonde measurements. SC-like responses from HALOE compare well with those from lidar station data at low latitudes. The cooling trends from HALOE also agree reasonably well with those from the lidar data for the concurrent decade. Cooling trends of the lower mesosphere from HALOE are not as large as those from rocketsondes and from lidar station time series of the previous two decades, presumably because the changes in the upper stratospheric ozone were near zero during the HALOE time period and did not affect those trends.
Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.
Malkin, Zinovy
2016-04-01
The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics of geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, the three station-coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
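The classical AVAR that the review's modifications build on is simple to state: half the mean squared difference of successive averages over an averaging window. A minimal sketch of the unweighted, non-overlapping form (the weighted and multidimensional variants WAVAR/MAVAR/WMAVAR extend this same quantity; the white-noise test signal is our own choice):

```python
import numpy as np

def allan_variance(y, tau=1):
    """Classical non-overlapping Allan variance at averaging factor tau:
    average the series in blocks of length tau, then return half the
    mean squared difference of successive block averages."""
    y = np.asarray(y, float)
    m = len(y) // tau
    means = y[:m * tau].reshape(m, tau).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(2)
white = rng.normal(size=10000)            # unit-variance white noise
a1 = allan_variance(white, tau=1)         # expect about 1
a10 = allan_variance(white, tau=10)       # white noise: AVAR falls as 1/tau
```

The 1/tau decay of `a10` relative to `a1` is the classic white-noise signature; other noise types (flicker, random walk) produce different slopes on a log-log AVAR plot, which is what makes AVAR useful for diagnosing noise in geodetic series.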
Using time series structural characteristics to analyze grain prices in food insecure countries
Davenport, Frank; Funk, Chris
2015-01-01
Two components of food security monitoring are accurate forecasts of local grain prices and the ability to identify unusual price behavior. We evaluated a method that can both facilitate forecasts of cross-country grain price data and identify dissimilarities in price behavior across multiple markets. This method, characteristic based clustering (CBC), identifies similarities in multiple time series based on structural characteristics in the data. Here, we conducted a simulation experiment to determine if CBC can be used to improve the accuracy of maize price forecasts. We then compared forecast accuracies among clustered and non-clustered price series over a rolling time horizon. We found that the accuracy of forecasts on clusters of time series were equal to or worse than forecasts based on individual time series. However, in the following experiment we found that CBC was still useful for price analysis. We used the clusters to explore the similarity of price behavior among Kenyan maize markets. We found that price behavior in the isolated markets of Mandera and Marsabit has become increasingly dissimilar from markets in other Kenyan cities, and that these dissimilarities could not be explained solely by geographic distance. The structural isolation of Mandera and Marsabit that we find in this paper is supported by field studies on food security and market integration in Kenya. Our results suggest that a market with a unique price series (as measured by structural characteristics that differ from neighboring markets) may lack market integration and food security.
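Characteristic-based clustering operates on a vector of global structural features per series rather than on the raw observations. The sketch below computes a small illustrative feature set (trend slope, lag-1 autocorrelation of the detrended series, volatility of changes) and compares synthetic "markets" by feature distance; the published method uses a much richer feature set, and all data here are invented.

```python
import numpy as np

def characteristics(x):
    """A few structural features of the kind used in characteristic-based
    clustering: linear trend slope, lag-1 autocorrelation after
    detrending, and standard deviation of first differences."""
    x = np.asarray(x, float)
    t = np.arange(len(x))
    coef = np.polyfit(t, x, 1)                    # linear trend fit
    detr = x - np.polyval(coef, t)                # detrended residual
    acf1 = np.corrcoef(detr[:-1], detr[1:])[0, 1]
    return np.array([coef[0], acf1, np.std(np.diff(x))])

rng = np.random.default_rng(4)
a = 0.5 * np.arange(300) + rng.normal(size=300)   # trending price series A
b = 0.5 * np.arange(300) + rng.normal(size=300)   # similar trending series B
c = rng.normal(size=300)                          # flat, noisy series C
d_ab = np.linalg.norm(characteristics(a) - characteristics(b))
d_ac = np.linalg.norm(characteristics(a) - characteristics(c))
```

Clustering on such feature distances is what lets the paper flag a market like Mandera as structurally dissimilar: its feature vector sits far from those of the other markets even when raw prices look superficially comparable.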
Effects of propranolol on time of useful function (TUF) in rats.
DOT National Transportation Integrated Search
1979-02-01
To assess the effects of propranolol on tolerance to rapid decompression, a series of experiments was conducted measuring time of useful function (TUF) in rats exposed to a rapid decompression profile in an altitude chamber. In other experiments TUF ...
An improvement of the measurement of time series irreversibility with visibility graph approach
NASA Astrophysics Data System (ADS)
Wu, Zhenyu; Shang, Pengjian; Xiong, Hui
2018-07-01
We propose a method to improve the measurement of real-valued time series irreversibility that combines two tools: the directed horizontal visibility graph and the Kullback-Leibler divergence. The degree of time irreversibility is estimated by the Kullback-Leibler divergence between the in- and out-degree distributions of the associated visibility graph. In our work, we reframe the in- and out-degree distributions by encoding them with the different embedding dimensions used in calculating permutation entropy (PE). With this improved method, we can not only estimate time series irreversibility efficiently, but also detect it across multiple dimensions. We verify the validity of our method and then estimate the amount of time irreversibility of series generated by chaotic maps, as well as of global stock markets over the period 2005-2015. The results show that the amount of time irreversibility peaks at embedding dimension d = 3 for both the simulated and the financial-market data.
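The basic degree-distribution estimate that this abstract builds on can be sketched as follows. This is an illustrative reimplementation of the directed horizontal visibility graph and the Kullback-Leibler estimate, not the authors' multi-dimension encoding; the divergence is restricted to degrees seen in both distributions, a common practical workaround for empty bins.

```python
import numpy as np

def dhvg_degrees(x):
    """Directed horizontal visibility graph: i -> j (i < j) iff every value
    strictly between them is below min(x[i], x[j])."""
    n = len(x)
    k_in, k_out = np.zeros(n, int), np.zeros(n, int)
    for i in range(n):
        top = -np.inf                         # running max between i and j
        for j in range(i + 1, n):
            if x[j] > top:                    # j is visible from i
                k_out[i] += 1
                k_in[j] += 1
            top = max(top, x[j])
            if top >= x[i]:                   # the view from i is now blocked
                break
    return k_in, k_out

def kld(p_deg, q_deg):
    """KL divergence between two empirical degree distributions, summed over
    degrees observed in both (a simple workaround for zero-probability bins)."""
    kmax = max(p_deg.max(), q_deg.max())
    p = np.bincount(p_deg, minlength=kmax + 1).astype(float)
    q = np.bincount(q_deg, minlength=kmax + 1).astype(float)
    p /= p.sum(); q /= q.sum()
    m = (p > 0) & (q > 0)
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

x = np.empty(2000); x[0] = 0.4
for t in range(1999):
    x[t + 1] = 4 * x[t] * (1 - x[t])          # chaotic logistic map: irreversible
noise = np.random.default_rng(0).normal(size=2000)  # statistically reversible

irr = lambda s: kld(*dhvg_degrees(s))
print(irr(x), irr(noise))                     # the chaotic map scores much higher
```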
PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting
Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie
2013-01-01
Time-series streams are among the most common data types in data mining, prevalent in fields such as stock markets, ecology, and medical care. Segmentation is a key step in accelerating time-series stream mining. Previous segmentation algorithms focused mainly on improving precision and paid little attention to efficiency; moreover, their performance depends heavily on parameters that are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmentation. PRESEE is based on the MDL (minimum description length) and MML (minimum message length) principles, which let it segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient on real-time stream datasets, improving segmentation speed by nearly ten times. The novelty of the algorithm is further demonstrated by applying PRESEE to real-time stream datasets from the ChinaFLUX sensor network data stream. PMID:23956693
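The MDL principle behind parameter-free segmentation can be illustrated with a toy piecewise-constant example: a breakpoint is accepted only when the bits it costs to describe are repaid by a shorter description of the residuals. This two-part-code sketch is generic and far simpler than PRESEE itself, which it does not attempt to reproduce.

```python
import numpy as np

def seg_dl(seg):
    """Two-part description length of a segment under a constant-mean model:
    parameter cost plus a Gaussian code length for the residuals."""
    n = len(seg)
    rss = np.sum((seg - seg.mean()) ** 2)
    return 0.5 * np.log2(n) + 0.5 * n * np.log2(rss / n + 1e-12)

def best_split(x, min_len=5):
    """Accept a breakpoint only if it lowers the total description length."""
    whole = seg_dl(x)
    cands = [(seg_dl(x[:t]) + seg_dl(x[t:]) + np.log2(len(x)), t)
             for t in range(min_len, len(x) - min_len)]  # + breakpoint cost
    cost, t = min(cands)
    return t if cost < whole else None        # None: one segment is cheaper

rng = np.random.default_rng(0)
x = np.r_[np.zeros(50), 5 * np.ones(50)] + rng.normal(0, 0.5, 100)
print(best_split(x))                          # breakpoint found near sample 50
```

No threshold or window parameter appears anywhere: the code-length trade-off decides, which is the sense in which MDL/MML methods are "parameter-free".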
Signal processing of anthropometric data
NASA Astrophysics Data System (ADS)
Zimmermann, W. J.
1983-09-01
The Anthropometric Measurements Laboratory has accumulated a large body of data from a number of previous experiments. The data are very noisy and therefore require signal processing. Moreover, the data were regarded not as time series measurements but as positional information; hence, they are stored as coordinate points defined by the motion of the human body. The accumulated data fall into two groups or classes. Some of the data were collected from an experiment designed to measure the flexibility of the limbs, referred to as radial movement. The remaining data were collected from experiments designed to determine the surface of the reach envelope. An interactive signal processing package was designed and implemented. Since the data do not include time, this package does not include a time series element. At present, processing is restricted to data obtained from the experiments designed to measure flexibility.
Everyday Learning about Sleep. Everyday Learning Series. Volume 5, Number 1
ERIC Educational Resources Information Center
Linke, Pam
2007-01-01
The Everyday Learning Series has been developed to focus attention on the everyday life experiences of early childhood and to offer insight into how parents and carers can make the most of these experiences. Having a new baby is wonderful and exciting, and also one of the most trying times in a parent's life. So it is no wonder that anyone caring for…
NASA Astrophysics Data System (ADS)
He, Jiayi; Shang, Pengjian; Xiong, Hui
2018-06-01
Stocks, as a concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we use stock data to recognize patterns through a dissimilarity matrix based on modified cross-sample entropy; three-dimensional perceptual maps of the results are then produced by multidimensional scaling. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed as a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. This implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences among stock indices, caused by country or region and by differing financial policies, are reflected in the irregularity of the data. In the synthetic-data experiments, not only can time series generated by different models be distinguished, but so can series generated under different parameters of the same model. In the financial-data experiment, the stock indices are clearly divided into five groups corresponding to five regions: Europe, North America, South America, Asia-Pacific (excluding mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in these experiments than MDSC.
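The classical MDS step that both proposed variants build on can be sketched directly. In the paper the dissimilarity matrix would come from Kronecker-delta or permutation cross-sample entropy; here Euclidean distances are used instead, purely so the reconstruction can be checked against known coordinates.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: double-centre the squared dissimilarities and embed
    along the leading eigenvectors of the resulting Gram matrix."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (D ** 2) @ J               # Gram matrix of centred points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]           # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Check on a genuinely Euclidean dissimilarity matrix: distances are recovered.
X = np.array([[0, 0], [1, 0], [0, 5], [1, 5], [8, 2]], float)
D = np.linalg.norm(X[:, None] - X[None], axis=-1)
Y = classical_mds(D, dim=2)
D2 = np.linalg.norm(Y[:, None] - Y[None], axis=-1)
print(np.allclose(D, D2))                     # True: exact up to rotation
```

Swapping `D` for an entropy-based dissimilarity matrix (as MDS-KCSE/MDS-PCSE do) leaves the embedding machinery unchanged; only the notion of "distance" differs.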
Quantification of Toxic Effects for Water Concentration-based Aquatic Life Criteria -Part B
Erickson et al. (1991) conducted a series of experiments on the toxicity of pentachloroethane (PCE) to juvenile fathead minnows. These experiments included evaluations of bioaccumulation kinetics, the time-course of mortality under both constant and time-variable exposures, the r...
Collins, Dannie L.; Flynn, Kathleen M.
1979-01-01
This report summarizes and makes available to other investigators the measured hydraulic data collected during a series of experiments designed to study the effect of patterned bed roughness on steady and unsteady open-channel flow. The patterned effect of the roughness was obtained by clear-cut mowing of designated areas of an otherwise fairly dense coverage of coastal Bermuda grass approximately 250 mm high. All experiments were conducted in the Flood Plain Simulation Facility during the period of October 7 through December 12, 1974. Data from 18 steady flow experiments and 10 unsteady flow experiments are summarized. The measured data include ground-surface elevations, grass heights and densities, water-surface elevations, and point velocities for all experiments. Additional tables of water-surface elevations and measured point velocities are included for the clear-cut areas for most experiments. One complete set of average water-surface elevations and one complete set of measured point velocities are tabulated for each steady flow experiment. Time series data, on a 2-minute time interval, are tabulated for both water-surface elevations and point velocities for each unsteady flow experiment. All data collected, including individual records of water-surface elevations for the steady flow experiments, have been stored on computer disk storage and can be retrieved using the computer programs listed in the attachment to this report. (Kosco-USGS)
Statistical modeling of isoform splicing dynamics from RNA-seq time series data.
Huang, Yuanhua; Sanguinetti, Guido
2016-10-01
Isoform quantification is an important goal of RNA-seq experiments, yet it remains problematic for genes with low expression or several isoforms. These difficulties may in principle be ameliorated by exploiting correlated experimental designs, such as time series or dosage response experiments. Time series RNA-seq experiments, in particular, are becoming increasingly popular, yet there are no methods that explicitly leverage the experimental design to improve isoform quantification. Here, we present DICEseq, the first isoform quantification method tailored to correlated RNA-seq experiments. DICEseq explicitly models the correlations between different RNA-seq experiments to aid the quantification of isoforms across experiments. Numerical experiments on simulated datasets show that DICEseq yields more accurate results than state-of-the-art methods, an advantage that can become considerable at low coverage levels. On real datasets, our results show that DICEseq provides substantially more reproducible and robust quantifications, increasing the correlation of estimates from replicate datasets by up to 10% on genes with low or moderate expression levels (bottom third of all genes). Furthermore, DICEseq makes it possible to quantify the trade-off between temporal sampling of RNA and depth of sequencing, frequently an important choice when planning experiments. Our results have strong implications for the design of RNA-seq experiments, and offer a novel tool for improved analysis of such datasets. Python code is freely available at http://diceseq.sf.net. Supplementary data are available at Bioinformatics online.
"Batch" kinetics in flow: online IR analysis and continuous control.
Moore, Jason S; Jensen, Klavs F
2014-01-07
Currently, kinetic data is either collected under steady-state conditions in flow or by generating time-series data in batch. Batch experiments are generally considered to be more suitable for the generation of kinetic data because of the ability to collect data from many time points in a single experiment. Now, a method that rapidly generates time-series reaction data from flow reactors by continuously manipulating the flow rate and reaction temperature has been developed. This approach makes use of inline IR analysis and an automated microreactor system, which allowed for rapid and tight control of the operating conditions. The conversion/residence time profiles at several temperatures were used to fit parameters to a kinetic model. This method requires significantly less time and a smaller amount of starting material compared to one-at-a-time flow experiments, and thus allows for the rapid generation of kinetic data.
NASA Astrophysics Data System (ADS)
Du, Kongchang; Zhao, Ying; Lei, Jiaqiang
2017-09-01
In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models appear to reduce prediction error substantially. In the current literature, researchers apply these techniques to the whole observed time series and then use the set of reconstructed or decomposed series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction we found that this usage of SSA and DWT in building hybrid models is incorrect. Because SSA and DWT use 'future' values in their calculations, the series generated by SSA reconstruction or DWT decomposition contain information from 'future' values. These hybrid models therefore report misleadingly 'high' prediction performance and may cause large errors in practice.
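The leakage the authors identify is easy to reproduce with any non-causal filter. Below, a centred moving average stands in for SSA/DWT reconstruction (an analogy only, not those transforms): smoothing the whole series before evaluation lets 'future' samples into the predictor and makes a random walk look predictable, while a causal trailing average does not.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))           # random walk: no real predictability

def centred_ma(s, w=5):                       # non-causal, like SSA/DWT: uses future
    return np.convolve(s, np.ones(w) / w, mode="same")

def trailing_ma(s, w=5):                      # causal: past samples only
    out = np.full(len(s), np.nan)
    for t in range(w - 1, len(s)):
        out[t] = s[t - w + 1 : t + 1].mean()
    return out

# "Predict" x[t+1] by the smoothed value at time t, scored on the last 199 points.
test = slice(300, 499)
mse = lambda f: np.mean((x[1:][test] - f(x)[:-1][test]) ** 2)
print(mse(centred_ma), mse(trailing_ma))      # the leaky smoother looks far better
```

The centred window at time t contains x[t+1] and x[t+2], so the "forecast" has already seen its target; exactly the mechanism by which whole-series SSA/DWT preprocessing inflates reported skill.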
Featureless classification of light curves
NASA Astrophysics Data System (ADS)
Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.
2015-08-01
In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization of the static features, which can be derived directly from the density, e.g. as moments. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs on par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less complete description by features. The density representation is an upper bound in terms of information made available to the classifier. Consequently, the predictive power of the proposed classification depends only on the choice of similarity measure and classifier. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.
Calculation of Rate Spectra from Noisy Time Series Data
Voelz, Vincent A.; Pande, Vijay S.
2011-01-01
As the resolution of experiments to measure folding kinetics continues to improve, it has become imperative to avoid bias that may come with fitting data to a predetermined mechanistic model. Towards this end, we present a rate spectrum approach to analyze timescales present in kinetic data. Computing rate spectra of noisy time series data via numerical discrete inverse Laplace transform is an ill-conditioned inverse problem, so a regularization procedure must be used to perform the calculation. Here, we show the results of different regularization procedures applied to noisy multi-exponential and stretched exponential time series, as well as data from time-resolved folding kinetics experiments. In each case, the rate spectrum method recapitulates the relevant distribution of timescales present in the data, with different priors on the rate amplitudes naturally corresponding to common biases toward simple phenomenological models. These results suggest an attractive alternative to the “Occam’s razor” philosophy of simply choosing models with the fewest number of relaxation rates. PMID:22095854
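A minimal version of the rate-spectrum calculation: expand the decay on a fixed grid of rates and regularise the ill-conditioned discrete inverse Laplace fit. Plain Tikhonov (ridge) regularisation is used here for brevity; the paper's point is precisely that different priors on the amplitudes (e.g. sparsity or nonnegativity) encode different modelling biases, none of which this sketch claims to reproduce.

```python
import numpy as np

# Synthetic two-exponential decay, the kind of signal folding kinetics produces.
t = np.linspace(0, 5, 200)
rng = np.random.default_rng(0)
y = np.exp(-1.0 * t) + np.exp(-10.0 * t) + rng.normal(0, 0.005, t.size)

# Discrete inverse Laplace transform: represent y(t) as sum_j a_j exp(-k_j t)
# on a log-spaced rate grid, and regularise the ill-conditioned normal equations.
k = np.logspace(-1, 2, 60)                    # candidate rates
A = np.exp(-np.outer(t, k))                   # design matrix A[i, j] = exp(-k_j t_i)
lam = 1e-3                                    # Tikhonov (ridge) strength
a = np.linalg.solve(A.T @ A + lam * np.eye(k.size), A.T @ y)

rms = np.sqrt(np.mean((A @ a - y) ** 2))
print(round(rms, 4), round(a.sum(), 2))       # good fit; a.sum() approximates y(0)
```

Without the `lam` term the normal equations are numerically singular, which is the ill-conditioning the abstract refers to; the choice of regulariser is what shapes the recovered spectrum.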
A hybrid group method of data handling with discrete wavelet transform for GDP forecasting
NASA Astrophysics Data System (ADS)
Isa, Nadira Mohamed; Shabri, Ani
2013-09-01
This study proposes a hybrid model combining the Group Method of Data Handling (GMDH) and the Discrete Wavelet Transform (DWT) for time series forecasting. The objective of this paper is to examine the flexibility of the hybrid GMDH in time series forecasting using Gross Domestic Product (GDP). A time series data set is used to demonstrate the effectiveness of the forecasting model on real-life data. The experiment compares the performance of the hybrid model with that of single models: Wavelet-Linear Regression (WR), an Artificial Neural Network (ANN), and conventional GMDH. It is shown that the proposed model provides a promising alternative technique for GDP forecasting.
Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu
2016-12-07
The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) can vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance and a practical tool for representing the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long on long time series, a fast DAVAR algorithm has been developed to accelerate the computation. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. Worse, a FOG-based MWD instrument often keeps working underground for several days, so collecting the gyro data aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, building on the fast DAVAR algorithm, we advance it further (improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used respectively to characterize two sets of simulation data. The simulation results show that when the time series is short, the improved fast DAVAR saves 78.93% of the calculation time; when the time series is long (6 × 10^5 samples), it reduces calculation time by 97.09%. Another set of simulation data with missing samples is characterized by the improved fast DAVAR, and the results prove that it can successfully deal with discontinuous data. Finally, a vibration experiment with a FOG-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The experimental results confirm that the improved fast DAVAR not only shortens computation time, but can also analyze discontinuous time series.
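The underlying computation can be sketched without the paper's speed-ups. Below is a plain (non-fast) dynamic Allan variance: the ordinary Allan variance evaluated in sliding windows, so a change in the gyro's noise level shows up along the record. The fast and discontinuous-data algorithms of the paper are not reproduced here.

```python
import numpy as np

def avar(y, m):
    """Non-overlapping Allan variance at cluster size m:
    half the mean squared difference of consecutive block averages."""
    nb = len(y) // m
    means = y[: nb * m].reshape(nb, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

def davar(y, m, win, step):
    """Dynamic Allan variance: Allan variance in sliding windows, so a
    non-stationary record shows its noise level changing over time."""
    starts = range(0, len(y) - win + 1, step)
    return np.array([avar(y[s : s + win], m) for s in starts])

rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.1, 4000)              # stable period
noisy = rng.normal(0, 1.0, 4000)              # vibration / temperature upset
d = davar(np.r_[quiet, noisy], m=10, win=1000, step=500)
print(d.round(4))                             # variance steps up mid-record
```

For white noise the Allan variance at cluster size m is roughly the sample variance divided by m, so the hundredfold variance jump between the two halves is clearly visible in the window sequence.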
ERIC Educational Resources Information Center
St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.
2016-01-01
We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to…
'Botanic Man:' Education or Entertainment?
ERIC Educational Resources Information Center
Freeman, Richard
1979-01-01
The experience of Thames Television in presenting an educational series during prime time is described. "The Botanic Man," a series on ecology, is a rating success. Several difficulties encountered in collaboration efforts and follow-up activities, including courses and workbook publications, are identified. (JMF)
Zhang, Yatao; Wei, Shoushui; Liu, Hai; Zhao, Lina; Liu, Chengyu
2016-09-01
The Lempel-Ziv (LZ) complexity and its variants have been extensively used to analyze the irregularity of physiological time series. To date, these measures cannot explicitly discern between the irregularity and the chaotic characteristics of physiological time series. Our study compared the performance of an encoding LZ (ELZ) complexity algorithm, a novel variant of the LZ complexity algorithm, with those of the classic LZ (CLZ) and multistate LZ (MLZ) complexity algorithms. Simulation experiments on Gaussian noise, logistic chaotic, and periodic time series showed that only the ELZ algorithm declined monotonically with the reduction in irregularity of the time series, whereas the CLZ and MLZ approaches yielded overlapping values for chaotic time series and time series mixed with Gaussian noise, demonstrating the accuracy of the proposed ELZ algorithm in capturing the irregularity, rather than the complexity, of physiological time series. In addition, the ELZ algorithm was more stable with respect to sequence length than CLZ and MLZ, especially for sequences longer than 300 samples. A sensitivity analysis of all three LZ algorithms revealed that both the MLZ and the ELZ algorithms could respond to changes in the time sequences, whereas the CLZ approach could not. Cardiac interbeat (RR) interval time series from the MIT-BIH database were also evaluated, and the results showed that the ELZ algorithm could accurately measure the inherent irregularity of the RR interval time series, as indicated by lower LZ values in a congestive heart failure group versus a normal sinus rhythm group (p < 0.01).
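The family of LZ measures rests on phrase-counting over a symbolised series. The sketch below uses a simple LZ78-style parsing of a median-binarised series rather than the CLZ/MLZ/ELZ variants compared in the paper, but it shows the basic behaviour: an irregular series parses into many more phrases than a regular one.

```python
import numpy as np

def lz_complexity(s):
    """LZ78-style parsing: each phrase is a previously unseen string built by
    extending a seen one; irregular sequences yield many phrases."""
    phrases, cur = set(), ""
    for ch in s:
        cur += ch
        if cur not in phrases:                # a new phrase ends here
            phrases.add(cur)
            cur = ""
    return len(phrases) + (1 if cur else 0)   # count any unfinished tail

def binarize(x):
    """0/1 coding about the median, a standard step before LZ analysis."""
    med = np.median(x)
    return "".join("1" if v > med else "0" for v in x)

rng = np.random.default_rng(0)
noise = rng.normal(size=1000)                         # irregular
periodic = np.sin(np.linspace(0, 40 * np.pi, 1000))   # regular
print(lz_complexity(binarize(noise)), lz_complexity(binarize(periodic)))
```

The coarse binary encoding is exactly where the variants differ: multistate and encoding LZ algorithms replace this symbolisation step to better separate irregularity from chaos.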
Recurrent Neural Networks for Multivariate Time Series with Missing Values.
Che, Zhengping; Purushotham, Sanjay; Cho, Kyunghyun; Sontag, David; Liu, Yan
2018-04-17
Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis.
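The two representations named above are simple to construct. The sketch below builds the masking matrix and the time-interval (delta) matrix from a multivariate series with NaNs, following the definitions in the GRU-D paper: delta restarts after an observed value and accumulates across missing ones. The recurrent network itself is omitted.

```python
import numpy as np

def grud_inputs(x, times):
    """Masking and time-interval tensors as used by GRU-D-style models:
    m[t, d] = 1 where x[t, d] is observed; delta[t, d] = elapsed time since
    variable d was last observed (0 at the first step)."""
    m = (~np.isnan(x)).astype(float)
    delta = np.zeros_like(x, dtype=float)
    for t in range(1, len(x)):
        gap = times[t] - times[t - 1]
        # if the previous value was observed the clock restarts, else it accumulates
        delta[t] = np.where(m[t - 1] == 1, gap, gap + delta[t - 1])
    return m, delta

# Two variables sampled at irregular times, with informative missingness.
x = np.array([[0.5, np.nan],
              [np.nan, 1.0],
              [0.7, np.nan],
              [0.9, 2.0]])
times = np.array([0.0, 1.0, 2.0, 4.0])
m, d = grud_inputs(x, times)
print(m)
print(d)
```

These two matrices, concatenated with (imputed) values, are what a GRU-D-style cell consumes; the delta channel is what lets the model decay its memory toward a default when a variable has been unobserved for a long time.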
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance.
Homogenising time series: Beliefs, dogmas and facts
NASA Astrophysics Data System (ADS)
Domonkos, P.
2010-09-01
For obtaining reliable information about climate change and climate variability, the use of high quality data series is essential, and one basic tool for quality improvement is the statistical homogenisation of observed time series. In recent decades a large number of homogenisation methods have been developed, but the real effects of their application on time series are still not entirely known. The ongoing COST HOME project (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics closely approximate those of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and to analyse the real effects of selected theoretical recommendations. The author believes that several old theoretical rules have to be re-evaluated. Some examples of the open questions: (a) Can statistically detected change-points be accepted only when confirmed by metadata? (b) Do semi-hierarchic algorithms for detecting multiple change-points in time series function effectively in practice? (c) Is it good practice to limit the spatial comparison of a candidate series to at most five other series in the neighbourhood? Empirical results, from the COST benchmark and from other experiments, show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities look like part of the climatic variability, so the classic assumption that change-points in observed time series can be found and corrected one by one cannot be applied in its pure form. Nevertheless, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in the raw series.
The developers and users of homogenisation methods must bear in mind that the ultimate purpose of homogenisation is not to find change-points, but to obtain observed time series whose statistical properties characterise climate change and climate variability well.
Identification of human operator performance models utilizing time series analysis
NASA Technical Reports Server (NTRS)
Holden, F. M.; Shinners, S. M.
1973-01-01
The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.
One nanosecond time synchronization using SERIES and GPS
NASA Technical Reports Server (NTRS)
Buennagel, A. A.; Spitzmesser, D. J.; Young, L. E.
1983-01-01
Subnanosecond time synchronization between two remote rubidium frequency standards is verified by a traveling clock comparison. Using a novel, code-ignorant Global Positioning System (GPS) receiver developed at JPL, the SERIES geodetic baseline measurement system is applied to establish the offset between the 1-Hz outputs of the remote standards. Results of the two intercomparison experiments to date are presented, along with experimental details.
ERIC Educational Resources Information Center
Sween, Joyce; Campbell, Donald T.
Computational formulae for the following three tests of significance, useful in the interrupted time series design, are given: (1) a "t" test (Mood, 1950) for the significance of the first post-change observation from a value predicted by a linear fit of the pre-change observations; (2) an "F" test (Walker and Lev, 1953) of the…
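Test (1) above, a t-test of the first post-change observation against a straight-line extrapolation of the pre-change data, can be sketched with the standard regression prediction-interval formula. This is an illustration in the spirit of the cited test, not the report's exact computational formulae, and the data are synthetic.

```python
import numpy as np

def mood_first_point_test(pre, post_value):
    """t statistic (df = n - 2) for the first post-intervention observation
    against a straight-line extrapolation of the pre-change series, using
    the regression prediction-interval formula. A sketch in the spirit of
    the cited Mood (1950) test, not the report's exact formulae."""
    n = len(pre)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, pre, 1)
    resid = pre - (slope * x + intercept)
    s2 = resid @ resid / (n - 2)                   # residual variance
    x0, xbar = float(n), x.mean()                  # first post-change time
    sxx = ((x - xbar) ** 2).sum()
    se = np.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
    return (post_value - (slope * x0 + intercept)) / se

# synthetic pre-change series: linear trend plus noise; true value at t=20 is 12.0
rng = np.random.default_rng(0)
pre = 2.0 + 0.5 * np.arange(20) + rng.normal(0.0, 0.2, 20)
t_jump = mood_first_point_test(pre, 15.0)   # level shift of +3.0
t_null = mood_first_point_test(pre, 12.0)   # trend simply continues
# compare |t| against the two-sided critical value for df = 18 (about 2.10 at alpha = 0.05)
```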
Biogeochemistry from Gliders at the Hawaii Ocean Time-Series
NASA Astrophysics Data System (ADS)
Nicholson, D. P.; Barone, B.; Karl, D. M.
2016-02-01
At the Hawaii Ocean Time-series (HOT), autonomous underwater gliders equipped with biogeochemical sensors observe the ocean for months at a time, sampling spatiotemporal scales missed by ship-based programs. Over the last decade, glider data augmented by a foundation of time-series observations have shed light on biogeochemical dynamics occurring spatially at meso- and submesoscales and temporally on scales from diel to annual. We present insights gained from the synergy between glider observations, time-series measurements, and remote sensing in the subtropical North Pacific. We focus on diel variability observed in dissolved oxygen and bio-optics and on approaches to autonomously quantify net community production and gross primary production (GPP) as developed during the 2012 Hawaii Ocean Experiment - DYnamics of Light And Nutrients (HOE-DYLAN). Glider-based GPP measurements were extended to explore the relationship between GPP and mesoscale context over multiple years of Seaglider deployments.
The Effect of Time of Day on the Reaction to Stress. Final Report.
ERIC Educational Resources Information Center
Osborne, Francis H.
This study obtained evidence for the effect of time of day on learning in a stressful situation. A series of five experiments was performed to assess the effects of this variable on learning, using albino rat subjects. None of the experiments, taken alone, provides overwhelming evidence for the effect of time of day, and each leaves questions…
Fourier Magnitude-Based Privacy-Preserving Clustering on Time-Series Data
NASA Astrophysics Data System (ADS)
Kim, Hea-Suk; Moon, Yang-Sae
Privacy-preserving clustering (PPC for short) is important in publishing sensitive time-series data. Previous PPC solutions, however, either fail to preserve distance orders or incur privacy breaches. To solve this problem, we propose a new PPC approach that exploits the Fourier magnitudes of time series. Our magnitude-based method does not cause a privacy breach even if its techniques or related parameters are publicly revealed. Using magnitudes only, however, raises the distance-order problem, and we therefore present magnitude selection strategies that preserve as many Euclidean distance orders as possible. Through extensive experiments, we show the superiority of our magnitude-based approach.
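The core idea, publishing only Fourier magnitudes so that phases (and hence the raw series) stay private while distances are roughly preserved, can be sketched as follows. The series and the choice of the first ten coefficients are illustrative assumptions, not the paper's selection strategy.

```python
import numpy as np

def magnitude_features(series, k):
    """Publishable features: magnitudes of the first k DFT coefficients.
    Phases are discarded, so the raw series cannot be reconstructed, while
    Euclidean distances are approximately preserved (Parseval's relation)."""
    return np.abs(np.fft.rfft(series))[:k]

# hypothetical series: two near-identical signals and one very different one
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
a = np.sin(3 * t)
b = np.sin(3 * t) + 0.05 * np.cos(5 * t)
c = np.sign(np.sin(8 * t))
fa, fb, fc = (magnitude_features(s, 10) for s in (a, b, c))
```

Distances between the magnitude features mirror distances between the raw series (fa sits far closer to fb than to fc), which is what lets a third party cluster the published features; selecting which magnitudes to publish, the paper's central question, is not addressed in this sketch.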
Spatio-Temporal Mining of PolSAR Satellite Image Time Series
NASA Astrophysics Data System (ADS)
Julea, A.; Meger, N.; Trouve, E.; Bolon, Ph.; Rigotti, C.; Fallourd, R.; Nicolas, J.-M.; Vasile, G.; Gay, M.; Harant, O.; Ferro-Famil, L.
2010-12-01
This paper presents an original data mining approach for describing Satellite Image Time Series (SITS) spatially and temporally. It relies on pixel-based evolution and sub-evolution extraction. These evolutions, namely the frequent grouped sequential patterns, are required to cover a minimum surface and to affect pixels that are sufficiently connected. These spatial constraints are actively used to cope with large data volumes and to select evolutions that are meaningful to end-users. In this paper, a specific application to fully polarimetric SAR image time series is presented. Preliminary experiments performed on a RADARSAT-2 SITS covering the Chamonix Mont-Blanc test site are used to illustrate the proposed approach.
Testing for nonlinearity in time series: The method of surrogate data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theiler, J.; Galdrikian, B.; Longtin, A.
1991-01-01
We describe a statistical approach for identifying nonlinearity in time series; in particular, we want to avoid claims of chaos when simpler models (such as linearly correlated noise) can explain the data. The method requires a careful statement of the null hypothesis which characterizes a candidate linear process, the generation of an ensemble of "surrogate" data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. We present algorithms for generating surrogate data under various null hypotheses, and we show the results of numerical experiments on artificial data using correlation dimension, Lyapunov exponent, and forecasting error as discriminating statistics. Finally, we consider a number of experimental time series, including sunspots, electroencephalogram (EEG) signals, and fluid convection, and evaluate the statistical significance of the evidence for nonlinear structure in each case. 56 refs., 8 figs.
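The surrogate-generation step can be sketched for the simplest null hypothesis (linearly correlated Gaussian noise) by randomizing Fourier phases while keeping the power spectrum. This is a generic phase-randomization sketch on an invented signal, not the authors' exact implementation.

```python
import numpy as np

def phase_surrogate(x, rng):
    """FFT surrogate: keeps the power spectrum (hence the linear
    autocorrelation) of x but randomizes Fourier phases, consistent with
    a linearly correlated Gaussian null hypothesis."""
    n = len(x)
    mags = np.abs(np.fft.rfft(x))
    phases = rng.uniform(0.0, 2.0 * np.pi, len(mags))
    phases[0] = 0.0                    # DC bin must stay real
    if n % 2 == 0:
        phases[-1] = 0.0               # Nyquist bin must stay real
    return np.fft.irfft(mags * np.exp(1j * phases), n)

# noisy sinusoid as a stand-in for an experimental series
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 512)) + 0.3 * rng.normal(size=512)
s = phase_surrogate(x, rng)
```

By construction the surrogate shares the periodogram, and hence the linear autocorrelation, of the original series, so any discriminating statistic that separates the two must reflect structure beyond the linear null.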
Low gravity investigations in suborbital rockets
NASA Technical Reports Server (NTRS)
Wessling, Francis C.; Lundquist, Charles A.
1990-01-01
Two series of suborbital rocket missions are outlined which are intended to support materials and biotechnology investigations under microgravity conditions and to enhance commercial rocket activity. The Consort series of missions employs the two-stage Starfire I rocket and recovery systems, as well as a payload of three sealed or vented cylindrical sections. The Consort 1 and 2 missions, each of which successfully supported six classes of experiments, are described. The Joust program is the second series of rocket missions; it employs the Prospector rocket to provide comparable payload masses with twice as much microgravity time as the Consort series. The Consort and Joust missions provide 6-8 and 13-15 min of microgravity flight, respectively, to support such experiments as polymer processing, scientific apparatus testing, and electrodeposition.
Modelling short time series in metabolomics: a functional data analysis approach.
Montana, Giovanni; Berk, Maurice; Ebbels, Tim
2011-01-01
Metabolomics is the study of the complement of small molecule metabolites in cells, biofluids and tissues. Many metabolomic experiments are designed to compare changes observed over time under two or more experimental conditions (e.g. a control and drug-treated group), thus producing time course data. Models from traditional time series analysis are often unsuitable because, by design, only very few time points are available and there are a high number of missing values. We propose a functional data analysis approach for modelling short time series arising in metabolomic studies which overcomes these obstacles. Our model assumes that each observed time series is a smooth random curve, and we propose a statistical approach for inferring this curve from repeated measurements taken on the experimental units. A test statistic for detecting differences between temporal profiles associated with two experimental conditions is then presented. The methodology has been applied to NMR spectroscopy data collected in a pre-clinical toxicology study.
Liu, Zitao; Hauskrecht, Milos
2017-11-01
Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition, its dynamics, and optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variations are typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, the time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model from the patient's own data alone. To address these problems we propose, develop, and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model switching approach that at any point in time selects the most promising time series model out of a pool of many possible models, and consequently combines the advantages of population, patient-specific, and short-term individualized predictive models. We demonstrate that the adaptive model switching framework is a very promising approach to support personalized time series prediction, and that it is able to outperform predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
Liu, Wensong; Yang, Jie; Zhao, Jinqi; Shi, Hongtao; Yang, Le
2018-02-12
Traditional unsupervised change detection methods based on the pixel level can only detect changes between two different times with the same sensor, and the results are easily affected by speckle noise. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. First, the overall difference image of the time-series PolSAR data is calculated by omnibus test statistics, and difference images between any two images at different times are acquired by Rj test statistics. Second, the difference images are segmented with a Generalized Statistical Region Merging (GSRM) algorithm, which can suppress the effect of speckle noise. A Generalized Gaussian Mixture Model (GGMM) is then used to obtain the time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can not only detect time-series changes from different sensors, but can also better suppress the influence of speckle noise and improve the overall accuracy and Kappa coefficient.
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. 
The software also provides for a choice between three different initial rough imputation methods.
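The DTW similarity measure at the heart of the approach can be sketched with the classic dynamic-programming recurrence; this is a generic DTW distance, not the optimized DTWimpute implementation described above.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D profiles
    (a generic sketch, not the optimized DTWimpute implementation)."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]
```

Warping lets a profile such as [0, 0, 1, 1] match [0, 1] at zero cost, which is why DTW suits expression profiles sampled on differing or stretched time grids better than pointwise Euclidean distance.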
Defect-Repairable Latent Feature Extraction of Driving Behavior via a Deep Sparse Autoencoder
Taniguchi, Tadahiro; Takenaka, Kazuhito; Bando, Takashi
2018-01-01
Data representing driving behavior, as measured by various sensors installed in a vehicle, are collected as multi-dimensional sensor time-series data. These data often include redundant information, e.g., both the speed of wheels and the engine speed represent the velocity of the vehicle. Redundant information can be expected to complicate the data analysis, e.g., more factors need to be analyzed; even varying the levels of redundancy can influence the results of the analysis. We assume that the measured multi-dimensional sensor time-series data of driving behavior are generated from low-dimensional data shared by the many types of one-dimensional data of which multi-dimensional time-series data are composed. Meanwhile, sensor time-series data may be defective because of sensor failure. Therefore, another important function is to reduce the negative effect of defective data when extracting low-dimensional time-series data. This study proposes a defect-repairable feature extraction method based on a deep sparse autoencoder (DSAE) to extract low-dimensional time-series data. In the experiments, we show that DSAE provides high-performance latent feature extraction for driving behavior, even for defective sensor time-series data. In addition, we show that the negative effect of defects on the driving behavior segmentation task could be reduced using the latent features extracted by DSAE. PMID:29462931
Xu, Daolin; Lu, Fangfang
2006-12-01
We address the problem of reconstructing a set of nonlinear differential equations from chaotic time series. A method that combines implicit Adams integration with the structure-selection technique of an error reduction ratio is proposed for system identification and the corresponding parameter estimation of the model. The structure-selection technique identifies the significant terms from a pool of candidate functional basis terms and determines the optimal model through orthogonal characteristics of the data. Combined with the Adams integration algorithm, the technique makes the reconstruction feasible for data sampled at large time intervals. Numerical experiments on the Lorenz and Rössler systems show that the proposed strategy is effective in global vector field reconstruction from noisy time series.
Decay of homogeneous turbulence from a specified state
NASA Technical Reports Server (NTRS)
Deissler, R. G.
1972-01-01
The homogeneous turbulence problem is formulated by first specifying the multipoint velocity correlations or their spectral equivalents at an initial time. Those quantities, together with the correlation or spectral equations, are then used to calculate initial time derivatives of the correlations or spectra. The derivatives in turn are used in time series to calculate the evolution of turbulence quantities with time. When the problem is treated in this way, the correlation equations are closed by the initial specification of the turbulence and no closure assumption is necessary. An exponential series which is an iterative solution of the Navier-Stokes equations gave much better results than a Taylor power series when used with the limited available initial data. In general, the agreement between theory and experiment was good.
NASA Astrophysics Data System (ADS)
Piecuch, Christopher G.; Landerer, Felix W.; Ponte, Rui M.
2018-05-01
Monthly ocean bottom pressure solutions from the Gravity Recovery and Climate Experiment (GRACE), derived using surface spherical cap mass concentration (MC) blocks and spherical harmonics (SH) basis functions, are compared to tide gauge (TG) monthly averaged sea level data over 2003-2015 to evaluate improved gravimetric data processing methods near the coast. MC solutions can explain ≳42% of the monthly variance in TG time series over broad shelf regions and in semi-enclosed marginal seas. MC solutions also generally explain ~5-32% more TG data variance than SH estimates. Applying a coastline resolution improvement algorithm in the GRACE data processing leads to ~31% more variance in TG records explained by the MC solution on average, compared to not using this algorithm. Synthetic observations sampled from an ocean general circulation model exhibit similar patterns of correspondence between modeled TG and MC time series, and similar differences between MC and SH time series in terms of their relationship with TG time series, suggesting that the observational results here are generally consistent with expectations from ocean dynamics. This work demonstrates the improved quality of recent MC solutions compared to earlier SH estimates over the coastal ocean, and suggests that the MC solutions could be a useful tool for understanding contemporary coastal sea level variability and change.
Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.
Bühler, Jonas; von Lieres, Eric; Huber, Gregor J
2018-01-01
Studies of long-distance transport of tracer isotopes in plants offer high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h^-1. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h^-1. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase in sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule, depending on the required statistical reliability of data acquired in future experiments.
Online Conditional Outlier Detection in Nonstationary Time Series
Liu, Siqi; Wright, Adam; Hauskrecht, Milos
2017-01-01
The objective of this work is to develop methods for detecting outliers in time series data. Such methods can become the key component of various monitoring and alerting systems, where an outlier may be equal to some adverse condition that needs human attention. However, real-world time series are often affected by various sources of variability present in the environment that may influence the quality of detection; they may (1) explain some of the changes in the signal that would otherwise lead to false positive detections, as well as, (2) reduce the sensitivity of the detection algorithm leading to increase in false negatives. To alleviate these problems, we propose a new two-layer outlier detection approach that first tries to model and account for the nonstationarity and periodic variation in the time series, and then tries to use other observable variables in the environment to explain any additional signal variation. Our experiments on several data sets in different domains show that our method provides more accurate modeling of the time series, and that it is able to significantly improve outlier detection performance. PMID:29644345
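The two-layer idea, first removing nonstationary periodic structure and then letting an environmental covariate explain residual variation before flagging outliers, can be sketched as below. The seasonal-averaging and linear-regression layers, the variable names, and the injected event are simplifying assumptions, not the paper's models.

```python
import numpy as np

def conditional_outliers(y, season, covariate, z_thresh=4.0):
    """Two-layer sketch: (1) subtract a periodic component estimated by
    seasonal averaging, (2) regress the remainder on an environmental
    covariate, then flag large standardized residuals. The layers and
    names are illustrative, not the paper's models."""
    seasonal = np.array([y[season == s].mean() for s in season])
    r = y - seasonal
    beta = np.polyfit(covariate, r, 1)
    resid = r - np.polyval(beta, covariate)
    z = (resid - resid.mean()) / resid.std(ddof=1)
    return np.abs(z) > z_thresh

# synthetic monitoring signal: seasonal cycle + covariate effect + one injected event
rng = np.random.default_rng(2)
month = np.arange(240) % 12
temp = rng.normal(size=240)                      # hypothetical covariate
y = np.sin(2 * np.pi * month / 12) + 0.8 * temp + 0.1 * rng.normal(size=240)
y[120] += 3.0                                    # the adverse event to detect
flags = conditional_outliers(y, month, temp)
```

Without the two explanatory layers, the seasonal swing and the covariate-driven variation would either mask the injected event or trigger false alarms, which is exactly the trade-off the abstract describes.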
NASA Technical Reports Server (NTRS)
Sanders, Abram F. J.; Verstraeten, Willem W.; Kooreman, Maurits L.; van Leth, Thomas C.; Beringer, Jason; Joiner, Joanna
2016-01-01
A global, monthly averaged time series of Sun-induced Fluorescence (SiF), spanning January 2007 to June 2015, was derived from Metop-A Global Ozone Monitoring Experiment 2 (GOME-2) spectral measurements. Far-red SiF was retrieved using the filling-in of deep solar Fraunhofer lines and atmospheric absorption bands, based on the general methodology described by Joiner et al. (AMT, 2013). A Principal Component (PC) analysis of spectra over non-vegetated areas was performed to describe the effects of atmospheric absorption. Our implementation (SiF KNMI) is an independent algorithm and differs from the latest implementation of Joiner et al. (SiF NASA, v26) because we used desert reference areas for determining PCs (as opposed to cloudy ocean and some desert) and a wider fit window that covers water vapour and oxygen absorption bands (as opposed to only Fraunhofer lines). As a consequence, more PCs were needed (35 as opposed to 12). The two time series (SiF KNMI and SiF NASA, v26) correlate well (overall R of 0.78), except for tropical rain forests. Sensitivity experiments suggest a strong impact of the water vapour absorption band on retrieved SiF values. Furthermore, we evaluated the SiF time series against Gross Primary Productivity (GPP) derived from twelve flux towers in Australia. Correlations for individual towers range from 0.37 to 0.84; they are particularly high for managed biome types. In the de-seasonalized Australian SiF time series, the break of the Millennium Drought during the local summer of 2010/2011 is clearly observed.
Applying dynamic Bayesian networks to perturbed gene expression data.
Dojer, Norbert; Gambin, Anna; Mizera, Andrzej; Wilczyński, Bartek; Tiuryn, Jerzy
2006-05-08
A central goal of molecular biology is to understand the regulatory mechanisms of gene transcription and protein synthesis. Because of their solid basis in statistics, which allows the stochastic aspects of gene expression and noisy measurements to be handled in a natural way, Bayesian networks appear attractive for inferring gene interaction structure from microarray experiment data. However, the basic formalism has some disadvantages, e.g. it is sometimes hard to distinguish between the origin and the target of an interaction. Two kinds of microarray experiments yield data particularly rich in information regarding the direction of interactions: time series and perturbation experiments. In order to handle them correctly, the basic formalism must be modified. For example, dynamic Bayesian networks (DBN) apply to time series microarray data. To our knowledge, the DBN technique has not been applied in the context of perturbation experiments. We extend the framework of dynamic Bayesian networks in order to incorporate perturbations. Moreover, an exact algorithm for inferring an optimal network is proposed, and a discretization method specialized for time series data from perturbation experiments is introduced. We apply our procedure to realistic simulated data. The results are compared with those obtained by standard DBN learning techniques. Moreover, the advantages of using an exact learning algorithm instead of heuristic methods are analyzed. We show that the quality of inferred networks dramatically improves when using data from perturbation experiments. We also conclude that the exact algorithm should be used when possible, i.e. when the considered set of genes is small enough.
A series solution for horizontal infiltration in an initially dry aquifer
NASA Astrophysics Data System (ADS)
Furtak-Cole, Eden; Telyakovskiy, Aleksey S.; Cooper, Clay A.
2018-06-01
The porous medium equation (PME) is a generalization of the traditional Boussinesq equation in which hydraulic conductivity is a power law function of height. We analyze the horizontal recharge of an initially dry unconfined aquifer of semi-infinite extent, as would be found in an aquifer adjacent to a rising river. If the water level can be modeled as a power law function of time, similarity variables can be introduced and the original problem can be reduced to a boundary value problem for a nonlinear ordinary differential equation. The position of the advancing front is not known ahead of time and must be found in the process of solution. We present an analytical solution in the form of a power series, with the coefficients of the series given by a recurrence relation. The analytical solution compares favorably with a highly accurate numerical solution, and only a small number of terms of the series are needed to achieve high accuracy in the scenarios considered here. We also conduct a series of physical experiments in an initially dry wedged Hele-Shaw cell, where flow is modeled by a special form of the PME. Our analytical solution closely matches the hydraulic head profiles in the Hele-Shaw cell experiments.
NASA Technical Reports Server (NTRS)
Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.
2013-01-01
Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to match the actual observations. The shift and scale CACAO parameters, adjusted for each season, quantify shifts in the timing of seasonal phenology and inter-annual variations in magnitude relative to the average climatology. CACAO was first assessed over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Performance was then analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performance for smoothing AVHRR time series characterized by a high level of noise and frequent missing observations. The resulting smoothed time series capture vegetation dynamics well and show no gaps, compared with the 50-60% of data still missing after AG or SG reconstruction. Results of simulation experiments, as well as confrontation with actual AVHRR time series, indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
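The scaling-and-shifting step of a CACAO-style adjustment can be sketched as a grid search over integer shifts of the climatological profile with a least-squares scale at each shift; NaNs mark missing observations. This is a toy version on invented data, not the actual CACAO implementation.

```python
import numpy as np

def fit_scale_shift(obs, clim, max_shift=10):
    """CACAO-style sketch: shift the climatological seasonal profile by an
    integer number of time steps and scale it to best match the (gappy)
    observations. A toy version, not CACAO itself."""
    ok = ~np.isnan(obs)                            # ignore missing data
    best = (np.inf, 0, 1.0)
    for tau in range(-max_shift, max_shift + 1):
        c = np.roll(clim, tau)
        a = (obs[ok] @ c[ok]) / (c[ok] @ c[ok])    # least-squares scale
        sse = ((obs[ok] - a * c[ok]) ** 2).sum()
        if sse < best[0]:
            best = (sse, tau, a)
    return best[1], best[2]                        # (shift, scale)

# invented periodic climatology; observations are a shifted, scaled, gappy copy
clim = 2.0 + np.sin(2 * np.pi * np.arange(72) / 36)
rng = np.random.default_rng(3)
obs = 1.5 * np.roll(clim, 4) + 0.05 * rng.normal(size=72)
obs[10:14] = np.nan
tau, a = fit_scale_shift(obs, clim)
```

The recovered shift and scale are exactly the phenology-timing and magnitude anomalies the abstract describes, and the fitted, shifted climatology fills the gap left by the NaN block.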
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective: Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods: Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results: We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered.
Conclusion: A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
A cluster merging method for time series microarray with production values.
Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio
2014-09-01
A challenging task in time-course microarray data analysis is to cluster genes meaningfully while combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created on the basis of the individual temporal series (representing different biological replicates measured at the same time points) and to merge them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study focuses on a real-world time series microarray task, with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevance measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated from the mean values of the time series using the same shape-based algorithm.
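The merging step, grouping genes by how often they co-occur in the per-replicate clusterings, can be sketched with a simple co-occurrence consensus. This greedy scheme and the tiny example labelings are generic illustrations, not the paper's exact procedure.

```python
import numpy as np

def cooccurrence_merge(labelings, threshold=0.5):
    """Consensus sketch: put genes in the same merged group when they share
    a cluster in at least `threshold` of the per-replicate clusterings.
    A greedy generic scheme, not the paper's exact merging procedure."""
    labelings = np.asarray(labelings)              # shape: (replicates, genes)
    n = labelings.shape[1]
    # co[i, j] = fraction of replicate clusterings placing genes i and j together
    co = (labelings[:, :, None] == labelings[:, None, :]).mean(axis=0)
    groups, assigned = [], np.zeros(n, dtype=bool)
    for g in range(n):
        if assigned[g]:
            continue
        members = [int(i) for i in np.where((co[g] >= threshold) & ~assigned)[0]]
        assigned[members] = True
        groups.append(members)
    return groups

# three hypothetical replicate clusterings of four genes
groups = cooccurrence_merge([[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]])
```

Genes 0 and 1 co-occur in two of the three replicates and so are merged, while genes 2 and 3 agree in all three; the co-occurrence fractions themselves provide the natural agreement-based ranking mentioned in the abstract.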
Prejudiced - How Do People Get That Way.
ERIC Educational Resources Information Center
Van Til, William
Written for the elementary school level, a series of simple stories is presented. These are intended to illustrate concepts in the development and maintenance of the prejudicial and discriminatory attitudes of our time. A series of learning experiences may be developed, using the stories as a tool. The following headings are given--"School…
Alternatives to Pyrotechnic Distress Signals; Additional Signal Evaluation
2017-06-01
conducted a series of laboratory experiments designed to determine the optimal signal color and temporal pattern for identification against a variety of… "practice" trials at approximately 2030 local time and began the actual Test 1 observation trials at approximately 2130. The series of trials finished at… This report is the fourth in a series that details work
Hensman, James; Lawrence, Neil D; Rattray, Magnus
2013-08-20
Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data that are missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
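A minimal sketch of the hierarchical covariance structure described above, assuming a squared-exponential kernel at both layers (a shared gene-level function plus an independent per-replicate deviation). The kernel choices and hyperparameter values are illustrative, not the authors' exact settings:

```python
import numpy as np

def rbf(x1, x2, variance, lengthscale):
    """Squared-exponential (RBF) kernel."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def hierarchical_cov(times, replicate, var_g=1.0, ls_g=2.0,
                     var_r=0.3, ls_r=1.0, noise=0.05):
    """Covariance of a two-layer hierarchical GP: observations from
    different replicates covary only through the shared gene-level
    layer, while same-replicate observations gain an extra deviation
    term.  Because the kernel is evaluated at arbitrary time stamps,
    irregular or mismatched sampling across replicates needs no
    special handling."""
    shared = rbf(times, times, var_g, ls_g)
    same_rep = (replicate[:, None] == replicate[None, :])
    deviation = rbf(times, times, var_r, ls_r) * same_rep
    return shared + deviation + noise * np.eye(len(times))
```

Standard GP regression with this covariance then gives posterior means at unobserved times, which is the basis for the imputation and data-fusion uses mentioned in the abstract.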
Powerful Learning Experiences and Suzuki Music Teachers
ERIC Educational Resources Information Center
Reuning-Hummel, Carrie; Meyer, Allison; Rowland, Gordon
2016-01-01
Powerful Learning Experiences (PLEs) of Suzuki music teachers were examined in this fifth study in a series. The definition of a PLE is: "Experiences that stand out in memory because of their high quality, their impact on one's thoughts and actions over time, and their transfer to a wide range of contexts and circumstances." Ten…
ERIC Educational Resources Information Center
Shipman, Virginia C.; Goldman, Karla S.
The fixation task used in this study measures the amount of time a child fixates or looks at a given picture as it is repeated over six trials and then is followed by a novel picture on the seventh. Two series of slides were used. The first was a redundant nonsocial visual stimulus: six trials of a slide showing 20 chromatic straight lines and a…
NASA Astrophysics Data System (ADS)
Chen, Yonghong; Bressler, Steven L.; Knuth, Kevin H.; Truccolo, Wilson A.; Ding, Mingzhou
2006-06-01
In this article we consider the stochastic modeling of neurobiological time series from cognitive experiments. Our starting point is the variable-signal-plus-ongoing-activity model. From this model a differentially variable component analysis strategy is developed from a Bayesian perspective to estimate event-related signals on a single trial basis. After subtracting out the event-related signal from recorded single trial time series, the residual ongoing activity is treated as a piecewise stationary stochastic process and analyzed by an adaptive multivariate autoregressive modeling strategy which yields power, coherence, and Granger causality spectra. Results from applying these methods to local field potential recordings from monkeys performing cognitive tasks are presented.
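The autoregressive modeling step above can be illustrated in miniature for a univariate series; the paper uses adaptive multivariate AR models to obtain coherence and Granger causality as well, so the following least-squares AR fit and parametric power spectrum are a simplified sketch with hypothetical function names:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model x[t] = sum_k a[k] x[t-k] + e[t].
    Returns the coefficients and the residual (innovation) variance."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    # column k holds the lag-(k+1) values aligned with targets x[p:]
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, np.var(y - X @ a)

def ar_spectrum(a, sigma2, freqs):
    """Parametric power spectrum of the fitted AR model at the given
    normalized frequencies (cycles per sample)."""
    z = np.exp(-2j * np.pi * freqs)
    denom = 1 - sum(a[k] * z ** (k + 1) for k in range(len(a)))
    return sigma2 / np.abs(denom) ** 2
```

In the multivariate case the same idea yields a spectral matrix, from which coherence and Granger causality spectra are derived.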
Time-Series Analysis: A Cautionary Tale
NASA Technical Reports Server (NTRS)
Damadeo, Robert
2015-01-01
Time-series analysis has often been a useful tool in atmospheric science for deriving long-term trends in various atmospherically important parameters (e.g., temperature or the concentration of trace gas species). In particular, time-series analysis has been repeatedly applied to satellite datasets in order to derive the long-term trends in stratospheric ozone, which is a critical atmospheric constituent. However, many of the potential pitfalls relating to the non-uniform sampling of the datasets were often ignored and the results presented by the scientific community have been unknowingly biased. A newly developed and more robust application of this technique is applied to the Stratospheric Aerosol and Gas Experiment (SAGE) II version 7.0 ozone dataset and the previous biases and newly derived trends are presented.
NASA Astrophysics Data System (ADS)
Kim, Y.; Johnson, M. S.
2017-12-01
Spectral entropy (Hs) is an index which can be used to measure the structural complexity of time series data. When a time series is made up of one periodic function, the Hs value becomes smaller, while Hs becomes larger when a time series is composed of several periodic functions. We hypothesized that this characteristic of the Hs could be used to quantify the water stress history of vegetation. For the ideal condition for which sufficient water is supplied to an agricultural crop or natural vegetation, there should be a single distinct phenological cycle represented in a vegetation index time series (e.g., NDVI and EVI). However, time series data for a vegetation area that repeatedly experiences water stress may include several fluctuations that can be observed in addition to the predominant phenological cycle. This is because the process of experiencing water stress and recovering from it generates small fluctuations in phenological characteristics. Consequently, the value of Hs increases when vegetation experiences several water shortages. Therefore, the Hs could be used as an indicator for water stress history. To test this hypothesis, we analyzed Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) data for a natural area in comparison to a nearby sugarcane area in seasonally-dry western Costa Rica. In this presentation we will illustrate the use of spectral entropy to evaluate the vegetative responses of natural vegetation (dry tropical forest) and sugarcane under three different irrigation techniques (center pivot irrigation, drip irrigation and flood irrigation). Through this comparative analysis, the utility of Hs as an indicator will be tested. Furthermore, crop response to the different irrigation methods will be discussed in terms of Hs, NDVI and yield.
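The spectral entropy index Hs described above can be computed directly from the normalized power spectrum. This is a generic sketch; the exact normalization and windowing used by the authors are not specified here:

```python
import numpy as np

def spectral_entropy(x, normalize=True):
    """Shannon entropy of the normalized power spectrum of x.

    A single clean periodic component (one phenological cycle)
    concentrates power in few frequency bins, giving low entropy;
    several superimposed fluctuations (e.g., repeated water-stress
    and recovery episodes) spread power out, giving high entropy."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd = psd[1:]                      # drop the DC bin
    p = psd / psd.sum()
    p = p[p > 0]
    h = -np.sum(p * np.log(p))
    if normalize:
        h /= np.log(len(psd))          # scale to [0, 1]
    return h
```

Applied to an NDVI time series, a larger value would correspond to vegetation that has experienced more water-stress fluctuations around its dominant seasonal cycle.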
Utilization of Historic Information in an Optimisation Task
NASA Technical Reports Server (NTRS)
Boesser, T.
1984-01-01
One of the basic components of a discrete model of motor behavior and decision making, which describes tracking and supervisory control in unitary terms, is assumed to be a filtering mechanism which is tied to the representational principles of human memory for time-series information. In a series of experiments subjects used the time-series information with certain significant limitations: there is a range-effect; asymmetric distributions seem to be recognized, but it does not seem to be possible to optimize performance based on skewed distributions. Thus there is a transformation of the displayed data between the perceptual system and representation in memory involving a loss of information. This rules out a number of representational principles for time-series information in memory and fits very well into the framework of a comprehensive discrete model for control of complex systems, modelling continuous control (tracking), discrete responses, supervisory behavior and learning.
Robust and Adaptive Online Time Series Prediction with Long Short-Term Memory
Yang, Haimin; Pan, Zhisong; Tao, Qing
2017-01-01
Online time series prediction is a mainstream method in a wide range of fields, from speech analysis and noise cancelation to stock market analysis. However, as real-world time series grow longer, the data often contain many outliers. These outliers can mislead the learned model if treated as normal points during prediction. To address this issue, we propose a robust and adaptive online gradient learning method, RoAdam (Robust Adam), for long short-term memory (LSTM) to predict time series with outliers. The method tunes the learning rate of the stochastic gradient algorithm adaptively during prediction, which reduces the adverse effect of outliers. It tracks the relative prediction error of the loss function with a weighted average, through a modification of Adam, a popular stochastic gradient algorithm for training deep neural networks. In our algorithm, a large relative prediction error corresponds to a small learning rate, and vice versa. Experiments on both synthetic data and real time series show that our method achieves better performance than existing LSTM-based methods. PMID:29391864
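The core idea (shrink the learning rate when the current loss is large relative to its running average, so a suspected outlier cannot drag the model away) can be sketched for plain gradient descent. RoAdam itself modifies Adam for LSTM training, so this stripped-down version is illustrative only:

```python
import numpy as np

def adaptive_lr_sgd(grad_fn, loss_fn, w, steps, base_lr=0.1, beta=0.9):
    """Gradient descent whose step size shrinks when the current loss is
    large relative to its exponentially weighted running average.  A
    sample with a large relative prediction error (a likely outlier)
    therefore receives a small update."""
    avg_loss = loss_fn(w)
    for _ in range(steps):
        loss = loss_fn(w)
        rel = loss / (avg_loss + 1e-12)      # relative prediction error
        lr = base_lr / max(rel, 1.0)         # large rel. error -> small lr
        w = w - lr * grad_fn(w)
        avg_loss = beta * avg_loss + (1 - beta) * loss
    return w
```

On clean data the scheme behaves like ordinary gradient descent, since the relative error stays near or below one.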
The J = 1 para levels of the v = 0 to 6 np singlet Rydberg series of molecular hydrogen revisited.
Glass-Maujean, M; Schmoranzer, H; Haar, I; Knie, A; Reiss, P; Ehresmann, A
2012-04-07
The energies and the widths of the J = 1 para levels of the v = 0 to 6 Rydberg np singlet series of molecular hydrogen with absolute intensities of the R(0) and P(2) absorption lines were measured by a high-resolution synchrotron radiation experiment and calculated through a full ab initio multichannel quantum defect theory approach. On the basis of the agreement between theory and experiment, 31 levels were either reassigned or assigned for the first time.
Jones, Luke A; Allely, Clare S; Wearden, John H
2011-02-01
A series of experiments demonstrated that a 5-s train of clicks that have been shown in previous studies to increase the subjective duration of tones they precede (in a manner consistent with "speeding up" timing processes) could also have an effect on information-processing rate. Experiments used studies of simple and choice reaction time (Experiment 1), or mental arithmetic (Experiment 2). In general, preceding trials by clicks made response times significantly shorter than those for trials without clicks, but white noise had no effects on response times. Experiments 3 and 4 investigated the effects of clicks on performance on memory tasks, using variants of two classic experiments of cognitive psychology: Sperling's (1960) iconic memory task and Loftus, Johnson, and Shimamura's (1985) iconic masking task. In both experiments participants were able to recall or recognize significantly more information from stimuli preceded by clicks than those preceded by silence.
NASA Astrophysics Data System (ADS)
Telesca, Luciano; Haro-Pérez, Catalina; Moreno-Torres, L. Rebeca; Ramirez-Rojas, Alejandro
2018-01-01
Some properties of the spatial confinement of tracer colloidal particles within polyacrylamide dispersions are studied by means of the well-known dynamic light scattering (DLS) technique. DLS yields sequences of elapsed times of scattered photons. In this work, the aqueous polyacrylamide dispersion has no crosslinking and the volume fraction occupied by the tracer particles is 0.02%. Our experimental setup provides two sequences of photons scattered by the same scattering volume, corresponding to two simultaneous experiments (Channel A and Channel B). By integration of these sequences, the intensity time series are obtained. We find that both channels are antipersistent, with Hurst exponents H ≈ 0.43 and 0.36, respectively. The antipersistence of the intensity time series indicates a subdiffusive dynamics of the tracers in the polymeric network, which is in agreement with the time dependence of the tracers' mean square displacement.
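A Hurst exponent below 0.5, as reported above, indicates antipersistence. One common estimator is the aggregated-variance method, sketched here under the assumption of a stationary series (the authors' estimator may differ):

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst exponent: the variance
    of block means of size m scales like m**(2H - 2), so H is read off
    the slope of a log-log fit.  H < 0.5 indicates antipersistence,
    H = 0.5 uncorrelated noise, H > 0.5 persistence."""
    x = np.asarray(x, dtype=float)
    logs_m, logs_v = [], []
    for m in block_sizes:
        nblocks = len(x) // m
        means = x[:nblocks * m].reshape(nblocks, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope, _ = np.polyfit(logs_m, logs_v, 1)
    return 1 + slope / 2
```

For uncorrelated noise the block-mean variance falls as 1/m (slope -1), recovering H = 0.5.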
ERIC Educational Resources Information Center
Sunderlin, Lee S.; Ryzhov, Victor; Keller, Lanea M. M.; Gaillard, Elizabeth R.
2005-01-01
An experiment is performed to measure the relative gas-phase basicities of a series of five amino acids to compare the results to literature values. The experiments use the kinetic method for deriving ion thermochemistry and allow students to perform accurate measurements of thermodynamics in a relatively short time.
Parameterizing time in electronic health record studies.
Hripcsak, George; Albers, David J; Perotte, Adler
2015-07-01
Fields like nonlinear physics offer methods for analyzing time series, but many methods require that the time series be stationary, i.e., show no change in properties over time. Medicine is far from stationary, but the challenge may be ameliorated by reparameterizing time, because clinicians tend to measure patients more frequently when they are ill and their values are more likely to vary. We compared time parameterizations, measuring the variability of the rate of change and the magnitude of change, and looking for homogeneity of bins of temporal separation between pairs of time points. We studied four common laboratory tests drawn from 25 years of electronic health records on 4 million patients. We found that sequence time (simply counting the number of measurements from some start) produced more stationary time series, better explained the variation in values, and had more homogeneous bins than either traditional clock time or a recently proposed intermediate parameterization. Sequence time produced more accurate predictions in a single Gaussian process model experiment. Of the three parameterizations, sequence time appeared to produce the most stationary series, possibly because clinicians adjust their sampling to the acuity of the patient. Parameterizing by sequence time may be applicable to association and clustering experiments on electronic health record data. A limitation of this study is that the laboratory data were derived from only one institution. Sequence time appears to be an important potential parameterization.
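Sequence-time reparameterization as described above amounts to replacing each measurement's clock timestamp with its rank in the patient's measurement sequence; a minimal sketch:

```python
import numpy as np

def to_sequence_time(patient_times):
    """Reparameterize one patient's lab measurements from clock time to
    sequence time: the i-th measurement (in chronological order) gets
    index i, regardless of how much clock time separates it from its
    neighbors.  Dense sampling during acute illness and sparse sampling
    when well are thereby placed on a common footing."""
    order = np.argsort(patient_times)
    seq = np.empty(len(patient_times), dtype=int)
    seq[order] = np.arange(len(patient_times))
    return seq
```

A downstream model (e.g., a Gaussian process over lab values) then uses these integer indices, rather than timestamps, as its input axis.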
ERIC Educational Resources Information Center
Liu, Wendy; Aaker, Jennifer
2007-01-01
In this research, we investigate the impact of significant life experiences on intertemporal decisions among young adults. A series of experiments focus specifically on the impact of experiencing the death of a close other by cancer. We show that such an experience, which bears information about time, is associated with making decisions that favor…
A Spiral-Based Downscaling Method for Generating 30 m Time Series Image Data
NASA Astrophysics Data System (ADS)
Liu, B.; Chen, J.; Xing, H.; Wu, H.; Zhang, J.
2017-09-01
The spatial detail and updating frequency of land cover data are important factors influencing land surface dynamic monitoring applications at high spatial resolution. However, the fragmented patches and seasonal variability of some land cover types (e.g., small crop fields, wetlands) make the generation of land cover data labor-intensive and difficult. Utilizing high spatial resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolution of available remote sensing data such as the Landsat or MODIS datasets can hardly satisfy the minimum mapping unit and the update frequency of current land cover mapping at the same time. The generation of high resolution time series may be a compromise to cover this shortage in the land cover updating process. One popular approach is to downscale multi-temporal MODIS data with high spatial resolution auxiliary data such as Landsat. However, the usual manner of downscaling pixels based on a window may lead to an underdetermined problem in heterogeneous areas, resulting in uncertainty for some high spatial resolution pixels. Therefore, the downscaled multi-temporal data can hardly reach the spatial resolution of Landsat data. A spiral-based method is introduced to downscale image data of low spatial and high temporal resolution to high spatial and high temporal resolution. By searching for similar pixels in the adjacent region along a spiral, a pixel set is built up pixel by pixel. Solving the linear system with the constructed pixel set largely avoids the underdetermined problem. With ordinary least squares, the method inverts the endmember values of the linear system. The high spatial resolution image is reconstructed band by band on the basis of a high spatial resolution class map and the endmember values.
The high spatial resolution time series is then formed from these high spatial resolution images, image by image. A simulated experiment and a remote sensing image downscaling experiment were conducted. In the simulated experiment, the 30 m class map dataset GlobeLand30 was adopted to investigate the effect on avoiding the underdetermined problem in the downscaling procedure, and a comparison between the spiral and the window was conducted. Further, MODIS NDVI and Landsat image data were adopted to generate a 30 m NDVI time series in the remote sensing image downscaling experiment. The simulated experiment results showed that the proposed method is robust in downscaling pixels in heterogeneous regions and indicated that it is superior to traditional window-based methods. The high resolution time series generated may benefit the mapping and updating of land cover data.
Miao, Beibei; Dou, Chao; Jin, Xuebo
2016-01-01
The storage volume of an internet data center is a classical time series, and predicting it is very valuable for the business. However, the storage volume series from a data center is always "dirty": it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the storage volume series before future prediction processing. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the "dirty" data; cubic spline interpolation and averaging are then used to reconstruct the main trend. The developed method is applied to the storage volume series of an internet data center. The experimental results show that the developed method can estimate the main trend of the storage volume series accurately and makes a great contribution to predicting future volume values. PMID:28090205
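The trend-extraction step can be sketched with a scalar random-walk Kalman filter that skips missing observations; the state model and noise settings here are illustrative, and the subsequent cubic-spline reconstruction (e.g., scipy.interpolate.CubicSpline over the filtered values) is omitted for brevity:

```python
import numpy as np

def kalman_filter_1d(y, q=0.01, r=1.0):
    """Scalar random-walk Kalman filter: state x[t] = x[t-1] + w (var q),
    observation y[t] = x[t] + v (var r).  NaNs in y mark missing samples:
    only the predict step runs there, so outliers removed beforehand can
    simply be set to NaN and the filter carries the trend across the gap."""
    x = y[np.isfinite(y)][0]   # initialize at the first observed value
    p = 1.0
    out = np.empty(len(y))
    for t in range(len(y)):
        p = p + q                      # predict
        if np.isfinite(y[t]):          # update only when observed
            k = p / (p + r)
            x = x + k * (y[t] - x)
            p = (1 - k) * p
        out[t] = x
    return out
```

With q small relative to r, the filter behaves like a slowly adapting average, which is the desired "main trend" behavior for a noisy storage-volume series.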
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 272 is considered as an example.
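The relationship between the two model families discussed above can be seen from an impulse response: an AR(1) recursion applied to a unit pulse reproduces the geometric weights of the equivalent infinite-order moving average. A small sketch:

```python
import numpy as np

def ar1_impulse_response(a, n):
    """Impulse response of the AR(1) model x[t] = a*x[t-1] + e[t]:
    feeding a unit pulse through the recursion yields the weights a**k
    of the equivalent MA(infinity) representation, which is the formal
    link between the autoregressive and moving average families."""
    x = np.zeros(n)
    e = np.zeros(n)
    e[0] = 1.0                         # unit pulse at t = 0
    for t in range(n):
        x[t] = (a * x[t - 1] if t > 0 else 0.0) + e[t]
    return x
```

For |a| < 1 the weights decay geometrically, so a low-order AR model can stand in for a long moving average.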
NASA Astrophysics Data System (ADS)
Jia, Duo; Wang, Cangjiao; Lei, Shaogang
2018-01-01
Mapping vegetation dynamic types in mining areas is significant for revealing the mechanisms of environmental damage and for guiding ecological construction. Dynamic types of vegetation can be identified by applying interannual normalized difference vegetation index (NDVI) time series. However, phase differences and time shifts in interannual time series decrease mapping accuracy in mining regions. To overcome these problems and to increase the accuracy of mapping vegetation dynamics, an interannual Landsat time series for optimum vegetation growing status was constructed first by using the enhanced spatial and temporal adaptive reflectance fusion model algorithm. We then proposed a Markov random field optimized semisupervised Gaussian dynamic time warping kernel-based fuzzy c-means (FCM) cluster algorithm for interannual NDVI time series to map dynamic vegetation types in mining regions. The proposed algorithm has been tested in the Shengli mining region and Shendong mining region, which are typical representatives of China's open-pit and underground mining regions, respectively. Experiments show that the proposed algorithm can solve the problems of phase differences and time shifts to achieve better performance when mapping vegetation dynamic types. The overall accuracies for the Shengli and Shendong mining regions were 93.32% and 89.60%, respectively, with improvements of 7.32% and 25.84% when compared with the original semisupervised FCM algorithm.
Mobile Visualization and Analysis Tools for Spatial Time-Series Data
NASA Astrophysics Data System (ADS)
Eberle, J.; Hüttich, C.; Schmullius, C.
2013-12-01
The Siberian Earth System Science Cluster (SIB-ESS-C) provides access and analysis services for spatial time-series data built on products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and climate data from meteorological stations. A web portal for data access, visualization, and analysis with standards-compliant web services was previously developed for SIB-ESS-C. As a further enhancement, a mobile app was developed to provide easy access to these time-series data during field campaigns. The app sends the current position from the GPS receiver and a specific dataset (such as land surface temperature or vegetation indices) selected by the user to our SIB-ESS-C web service, and receives the requested time-series data for the identified pixel in real time. The data are then plotted directly in the app. Furthermore, the user can analyze the time-series data for breakpoints and other phenological values. These processing steps are executed on demand on our SIB-ESS-C web server, and the results are transferred to the app. Any processing can also be done at the SIB-ESS-C web portal. The aim of this work is to make spatial time-series data and analysis functions available to end users without the need for data processing. In this presentation the author gives an overview of this new mobile app, its functionality, the technical infrastructure, and technological issues (how the app was developed and the experience we gained).
Altiparmak, Fatih; Ferhatosmanoglu, Hakan; Erdal, Selnur; Trost, Donald C
2006-04-01
An effective analysis of clinical trials data involves analyzing different types of data, such as heterogeneous and high dimensional time series data. Current time series analysis methods generally assume that the series at hand are long enough for statistical techniques to apply. Other ideal-case assumptions are that data are collected at equal-length intervals and that series being compared have equal lengths. However, these assumptions do not hold for many real data sets, especially clinical trials data sets. In addition, the data sources differ from each other, the data are heterogeneous, and the sensitivity of the experiments varies by source. Approaches for mining time series data need to be revisited with this wide range of requirements in mind. In this paper, we propose a novel approach for information mining that involves two major steps: applying a data mining algorithm over homogeneous subsets of data, and identifying common or distinct patterns over the information gathered in the first step. Our approach is implemented specifically for heterogeneous and high dimensional time series clinical trials data. Using this framework, we propose a new way of utilizing frequent itemset mining, as well as clustering and declustering techniques with novel distance metrics for measuring similarity between time series data. By clustering the data, we find groups of analytes (substances in blood) that are most strongly correlated. Most of these relationships are already known and are verified by the clinical panels; in addition, we identify novel groups that need further biomedical analysis. A slight modification to our algorithm results in an effective declustering of high dimensional time series data, which is then used for "feature selection." Using industry-sponsored clinical trials data sets, we are able to identify a small set of analytes that effectively models the state of normal health.
ERIC Educational Resources Information Center
van der Wel, Robrecht P. R. D.; Fleckenstein, Robin M.; Jax, Steven A.; Rosenbaum, David A.
2007-01-01
Previous research suggests that motor equivalence is achieved through reliance on effector-independent spatiotemporal forms. Here the authors report a series of experiments investigating the role of such forms in the production of movement sequences. Participants were asked to complete series of arm movements in time with a metronome and, on some…
ERIC Educational Resources Information Center
Herschbach, Dennis R.; And Others
This student booklet is fifth in an illustrated series of eleven learning activity packets for use in teaching job hunting and application procedures and the management of wages to secondary students. Two units are included in this packet: the first describing the various ways of being paid: salary (including overtime and compensatory time),…
A synergic simulation-optimization approach for analyzing biomolecular dynamics in living organisms.
Sadegh Zadeh, Kouroush
2011-01-01
A synergic simulation-optimization approach was developed and implemented to study protein-substrate dynamics and binding kinetics in living organisms. The forward problem is a system of several coupled nonlinear partial differential equations which, with a given set of kinetics and diffusion parameters, can provide not only the commonly used bleached-area-averaged time series in fluorescence microscopy experiments but more informative full biomolecular/drug space-time series, and can be successfully used to study the dynamics of both Dirac and Gaussian fluorescence-labeled biomacromolecules in vivo. The incomplete Cholesky preconditioner was coupled with a finite difference discretization scheme and an adaptive time-stepping strategy to solve the forward problem. The proposed approach was validated against analytical as well as reference solutions and used to simulate the dynamics of GFP-tagged glucocorticoid receptor (GFP-GR) in a mouse cancer cell during a fluorescence recovery after photobleaching experiment. Model analysis indicates that the commonly practiced bleach-spot-averaged time series is not an efficient approach to extract physiological information from fluorescence microscopy protocols. It is recommended that experimental biophysicists use the full space-time series resulting from experimental protocols to study the dynamics of biomacromolecules and drugs in living organisms. It is also concluded that, in the parameterization of biological mass transfer processes, setting the norm of the gradient of the penalty function at the solution to zero is not an efficient stopping rule for ending the inverse algorithm; theoreticians should use multi-criteria stopping rules to quantify model parameters by optimization.
Evidence for Motor Simulation in Imagined Locomotion
ERIC Educational Resources Information Center
Kunz, Benjamin R.; Creem-Regehr, Sarah H.; Thompson, William B.
2009-01-01
A series of experiments examined the role of the motor system in imagined movement, finding a strong relationship between imagined walking performance and the biomechanical information available during actual walking. Experiments 1 through 4 established the finding that real and imagined locomotion differ in absolute walking time. We then tested…
Hibino, Kei; Yukawa, Shintaro
2004-02-01
This study investigated time series changes in, and relationships among, affects, cognitions, and behaviors immediately, a few days, and a week after anger episodes. Two hundred undergraduates (96 men and 104 women) completed a questionnaire. The results were as follows. Anger was aroused intensely immediately after the anger episodes and calmed rapidly as time passed. Anger and depression were correlated in each period, so depression accompanied the anger experiences. The results of covariance structure analysis showed that aggressive behavior was evoked only by affects (especially anger) immediately after the episode, and only by cognitions (especially inflating) a few days afterward. One week after the episode, aggressive behavior decreased and was not influenced by affects or cognitions. Anger elicited all anger-expressive behaviors, such as aggressive behavior, social sharing, and object-displacement, while the depression accompanying the anger episodes elicited only object-displacement.
ERIC Educational Resources Information Center
Beaulieu, Lionel J.; Barfield, Melissa
This study examines the link between human capital endowments of Southern workers and their labor force experiences over time. Using a national longitudinal survey, the experiences of 4,566 individuals who left high school in 1982 were traced through 1992. Findings show similar patterns of educational attainment between women and men, but African…
Voltage and Current Clamp Transients with Membrane Dielectric Loss
Fitzhugh, R.; Cole, K. S.
1973-01-01
Transient responses of a space-clamped squid axon membrane to step changes of voltage or current are often approximated by exponential functions of time, corresponding to a series resistance and a membrane capacity of 1.0 μF/cm². Curtis and Cole (1938, J. Gen. Physiol. 21:757) found, however, that the membrane had a constant phase angle impedance z = z1(jωτ)^(-α), with a mean α = 0.85. (α = 1.0 for an ideal capacitor; α < 1.0 may represent dielectric loss.) This result is supported by more recently published experimental data. For comparison with experiments, we have computed functions expressing voltage and current transients with constant phase angle capacitance, a parallel leakage conductance, and a series resistance, at nine values of α from 0.5 to 1.0. A series in powers of t^α provided a good approximation for short times; one in powers of t^(-α), for long times; for intermediate times, a rational approximation matching both series for a finite number of terms was used. These computations may help in determining experimental series resistances and parallel leakage conductances from membrane voltage or current clamp data. PMID:4754194
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
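The core low-rank/subspace idea can be sketched with a toy dictionary of exponential decays (all parameters hypothetical; real MRF uses Bloch-simulated dictionaries and undersampled k-space data, not the direct voxel-domain projection shown here):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "dictionary" of magnetization evolutions: 500 exponential decays with
# different time constants, 200 time points each.
T = 200
tau = rng.uniform(10, 80, 500)
D = np.exp(-np.outer(1.0 / tau, np.arange(T)))

# Temporal subspace: leading right singular vectors of the dictionary.
_, s, Vt = np.linalg.svd(D, full_matrices=False)
K = 5
U = Vt[:K].T                                   # T x K temporal basis

# Noisy time-series "images": 100 voxels, each a dictionary atom plus noise.
idx = rng.integers(0, 500, 100)
X = D[idx] + rng.normal(0, 0.05, (100, T))

# Subspace-constrained reconstruction: project each voxel's series onto U
# (a least-squares fit of K subspace coefficients per voxel).
X_hat = (X @ U) @ U.T
err_raw = np.linalg.norm(X - D[idx]) / np.linalg.norm(D[idx])
err_rec = np.linalg.norm(X_hat - D[idx]) / np.linalg.norm(D[idx])
print(f"relative error raw {err_raw:.3f} vs subspace {err_rec:.3f}")
```

Because the dictionary's temporal dynamics are well approximated by a few singular vectors, constraining the reconstruction to that subspace suppresses noise while preserving the signal evolutions used for pattern matching.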
NASA Astrophysics Data System (ADS)
Escalas, M.; Queralt, P.; Ledo, J.; Marcuello, A.
2012-04-01
The magnetotelluric (MT) method is a passive electromagnetic technique currently used to characterize sites for the geological storage of CO2. Such sites are usually located near industrialized, urban, or farming areas, where man-made electromagnetic (EM) signals contaminate the MT data. Identifying and characterizing the artificial EM sources that generate this so-called "cultural noise" is an important challenge in obtaining the most reliable results with the MT method. The polarization attributes of an EM signal (tilt angle, ellipticity, and phase difference between its orthogonal components) are related to the character of its source. In a previous work (Escalas et al. 2011), we proposed a method to distinguish natural signal from cultural noise in raw MT data, based on polarization analysis of the MT time series in the time-frequency domain using a wavelet scheme. We developed an algorithm to implement the method and tested it with both synthetic and field data. In 2010, we carried out a controlled-source electromagnetic (CSEM) experiment at the Hontomín site (the Research Laboratory on Geological Storage of CO2 in Spain). MT time series were contaminated at different frequencies by the signal emitted by a controlled artificial EM source: two electric dipoles (1 km long, arranged in north-south and east-west directions). Analysis of the electric field time series acquired in this experiment with our algorithm was successful: the polarization attributes of both the natural and the artificial signal were obtained in the time-frequency domain, highlighting their differences. In the present work, we have processed the magnetic field time series acquired in the Hontomín experiment. This new analysis of the polarization attributes of the magnetic field data has provided additional information for detecting the contribution of the artificial source to the measured data.
Moreover, the joint analysis of the polarization attributes of the electric and magnetic field has been crucial to fully characterize the properties and the location of the noise source. Escalas, M., Queralt, P., Ledo, J., Marcuello, A., 2011. Identification of cultural noise sources in magnetotelluric data: estimating polarization attributes in the time-frequency domain using wavelet analysis. Geophysical Research Abstracts Vol. 13, EGU2011-6085. EGU General Assembly 2011.
NASA Astrophysics Data System (ADS)
Robey, H. F.; Munro, D. H.; Spears, B. K.; Marinak, M. M.; Jones, O. S.; Patel, M. V.; Haan, S. W.; Salmonson, J. D.; Landen, O. L.; Boehly, T. R.; Nikroo, A.
2008-05-01
Ignition capsule implosions planned for the National Ignition Facility (NIF) require a pulse shape with a carefully designed series of four steps, which launch a corresponding series of shocks through the ablator and DT ice shell. The relative timing of these shocks is critical for maintaining the DT fuel on a low adiabat. The current NIF specification requires that the timing of all four shocks be tuned to an accuracy of ±100 ps. To meet these stringent requirements, dedicated tuning experiments are being planned to measure and adjust the shock timing on NIF. These tuning experiments will be performed in a modified hohlraum geometry, where a re-entrant Au cone is added to the standard NIF hohlraum to provide optical diagnostic (VISAR and SOP) access to the shocks as they break out of the ablator. This modified geometry is referred to as the 'keyhole' hohlraum and introduces a geometric difference between these tuning experiments and the full ignition geometry. In order to assess the surrogacy of this modified geometry, 3D simulations using HYDRA [1] have been performed. The results from simulations of a quarter of the target geometry are presented. The hohlraum drive conditions and the resulting shock timing in the keyhole hohlraum are compared with the corresponding results for the standard ignition hohlraum.
Containment Prospectus for the TRUMPET Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawloski, G A
2004-02-05
TRUMPET is a series of dynamic subcritical experiments planned for execution in the U1a.102D alcove of the U1a Complex at the Nevada Test Site (NTS). The location of LLNL drifts at the U1a Complex is shown in Figure 1. The data from the TRUMPET experiments will be used in the Stockpile Stewardship Program to assess the aging of nuclear weapons components and to better model the long-term performance of weapons in the enduring stockpile. The TRUMPET series of experiments will be conducted in almost the same way as the OBOE series of experiments. Individual TRUMPET experiments will be housed in an experiment vessel, as was done for OBOE. These vessels are the same as those utilized for OBOE. All TRUMPET experiments will occur in the zero room in the U1a.102D alcove, which is on the opposite side of the U1a.102 drift from U1a.102C, which housed the OBOE experiments. The centerlines of these two alcoves are separated by only 10 feet. As with the OBOE experiments, expended TRUMPET experiment vessels will be moved to the back of the alcove and entombed in grout. After the TRUMPET series of experiments is completed, another experiment will be sited in the U1a.102D alcove; it will be the final experiment in the zero room, as was similarly done for the OBOE series of experiments followed by the execution of the PIANO experiment. Each experimental package for TRUMPET will be composed of high explosive (HE) and special nuclear material (SNM) in a subcritical assembly. Each experimental package will be placed in an experiment vessel within the TRUMPET zero room in the U1a.102D alcove. The containment plan for the TRUMPET experiments utilizes a two-nested containment vessel concept, similar to OBOE and other subcritical experiments in the U1a Complex. The first containment vessel is formed by the primary containment barrier that seals the U1a.102D drift. The second containment vessel is formed by the secondary containment barrier in the U1a.100 drift.
While it is likely that the experiment vessel will contain the SNM from the experiment, the containment plan for the TRUMPET experiments only assumes that the experiment vessel provides shock mitigation and serves as a sink for the heat produced by the detonation of the HE. It is possible that one or more of the experiment vessels may seep SNM into the zero room from a failure of a seal on the vessel. This containment plan covers the entire series of TRUMPET experiments. At this time, we do not know exactly how many experiments will actually be conducted in the TRUMPET series; however, the maximum planned number of experiments in the TRUMPET series is 20. This number may be modified on the basis of results obtained from each TRUMPET experiment. After the final experiment in the TRUMPET series is completed, a larger experiment will be conducted in the U1a.102D alcove. A separate containment plan will be developed and presented to the Containment Review Panel (CRP) for that larger experiment. As with OBOE, this containment plan is intended to cover all TRUMPET experiments. We will not develop a separate containment plan for each experiment. Before each experiment we will present a statement to the CRP that the TRUMPET experiment falls within the parameters presented in this document. If an experiment falls outside the parameters in this document, a containment plan for that experiment will be developed and presented to the CRP for a full containment review.
76 FR 20571 - Bidding by Affiliates in Open Seasons for Pipeline Capacity
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-13
... time value of money to determine the present value of a time series of discounted cash flows.\\6\\ The... addressing yesterday's concerns may not address tomorrow's concerns. Over time, however, experience in...:30 a.m. to 5 p.m. Eastern time) at 888 First Street, NE., Room 2A, Washington, DC 20426. 27. From...
Time series models of environmental exposures: Good predictions or good understanding.
Barnett, Adrian G; Stephen, Dimity; Huang, Cunrui; Wolkewitz, Martin
2017-04-01
Time series data are popular in environmental epidemiology as they make use of the natural experiment of how changes in exposure over time might impact on disease. Many published time series papers have used parameter-heavy models that fully explained the second order patterns in disease to give residuals that have no short-term autocorrelation or seasonality. This is often achieved by including predictors of past disease counts (autoregression) or seasonal splines with many degrees of freedom. These approaches give great residuals, but add little to our understanding of cause and effect. We argue that modelling approaches should rely more on good epidemiology and less on statistical tests. This includes thinking about causal pathways, making potential confounders explicit, fitting a limited number of models, and not over-fitting at the cost of under-estimating the true association between exposure and disease. Copyright © 2017 Elsevier Inc. All rights reserved.
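The argument for explicit confounders over parameter-heavy models can be illustrated with a synthetic example (all numbers hypothetical): a seasonal exposure whose naive regression coefficient is inflated by shared seasonality, corrected by adding annual harmonics as explicit terms rather than autoregressive past-disease predictors:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 730  # two years of daily data
t = np.arange(n)
season = np.sin(2 * np.pi * t / 365.25)
temp = 15 + 10 * season + rng.normal(0, 2, n)            # exposure tracks season
y = 50 + 0.5 * temp + 5 * season + rng.normal(0, 3, n)   # outcome shares seasonality

# Naive model: outcome ~ temperature only; seasonality confounds the estimate.
b_naive = np.linalg.lstsq(np.column_stack([np.ones(n), temp]), y, rcond=None)[0]

# Explicit-confounder model: annual harmonics as named terms, keeping the
# causal structure visible instead of absorbing it into autoregression.
Xc = np.column_stack([np.ones(n), temp,
                      np.sin(2 * np.pi * t / 365.25),
                      np.cos(2 * np.pi * t / 365.25)])
b_adj = np.linalg.lstsq(Xc, y, rcond=None)[0]
print(f"temperature effect: naive {b_naive[1]:.2f}, season-adjusted {b_adj[1]:.2f}")
```

The naive coefficient absorbs the seasonal confounding, while the adjusted model recovers an estimate near the true exposure effect of 0.5, making the confounder explicit in the design matrix.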
Grootswagers, Tijl; Wardle, Susan G; Carlson, Thomas A
2017-04-01
Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
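A minimal sketch of time-resolved decoding on synthetic data (a simple nearest-centroid classifier with k-fold cross-validation stands in for the classifier options discussed in the review; data dimensions and effect onset are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_ch, n_t = 80, 10, 50
labels = np.repeat([0, 1], n_trials // 2)
X = rng.normal(0, 1, (n_trials, n_ch, n_t))
# Hypothetical condition difference appearing from time index 20 onward.
X[labels == 1, :, 20:] += 0.8

def decode_timecourse(X, y, n_folds=5):
    # Nearest-centroid decoding at each time point, k-fold cross-validated.
    n = y.size
    folds = np.arange(n) % n_folds
    acc = np.zeros(X.shape[2])
    for t in range(X.shape[2]):
        correct = 0
        for f in range(n_folds):
            tr, te = folds != f, folds == f
            m0 = X[tr & (y == 0), :, t].mean(axis=0)
            m1 = X[tr & (y == 1), :, t].mean(axis=0)
            d0 = np.linalg.norm(X[te, :, t] - m0, axis=1)
            d1 = np.linalg.norm(X[te, :, t] - m1, axis=1)
            correct += np.sum((d1 < d0) == (y[te] == 1))
        acc[t] = correct / n
    return acc

acc = decode_timecourse(X, labels)
print(f"accuracy before onset: {acc[:20].mean():.2f}, after: {acc[20:].mean():.2f}")
```

Decoding accuracy hovers at chance before the simulated effect onset and rises afterwards, which is the basic readout pattern such studies interpret.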
Science, technology and mission design for LATOR experiment
NASA Astrophysics Data System (ADS)
Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth L.
2017-11-01
The Laser Astrometric Test of Relativity (LATOR) is a Michelson-Morley-type experiment designed to test Einstein's general theory of relativity in the most intense gravitational environment available in the solar system: the close proximity to the Sun. By using independent time series of highly accurate measurements of the Shapiro time delay (laser ranging accurate to 1 cm) and interferometric astrometry (accurate to 0.1 picoradian), LATOR will measure the gravitational deflection of light by solar gravity to an accuracy of 1 part in a billion, a factor of 30,000 better than currently available. LATOR will perform a series of highly accurate tests of gravitation and cosmology in its search for cosmological remnants of a scalar field in the solar system. We present the science, technology, and mission design for the LATOR mission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urniezius, Renaldas
2011-03-14
The principle of maximum relative entropy optimization was analyzed for dead-reckoning localization of a rigid body from observation data collected by two attached accelerometers. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise on each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency within the time series data. The dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency within the time series data. Data from an autocalibration experiment were revisited, removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead-reckoning localization.
Wang, Yi Kan; Hurley, Daniel G.; Schnell, Santiago; Print, Cristin G.; Crampin, Edmund J.
2013-01-01
We develop a new regression algorithm, cMIKANA, for inference of gene regulatory networks from combinations of steady-state and time-series gene expression data. Using simulated gene expression datasets to assess the accuracy of reconstructing gene regulatory networks, we show that steady-state and time-series data sets can successfully be combined to identify gene regulatory interactions using the new algorithm. Inferring gene networks from combined data sets was found to be advantageous when using noisy measurements collected with either lower sampling rates or a limited number of experimental replicates. We illustrate our method by applying it to a microarray gene expression dataset from human umbilical vein endothelial cells (HUVECs) which combines time series data from treatment with growth factor TNF and steady state data from siRNA knockdown treatments. Our results suggest that the combination of steady-state and time-series datasets may provide better prediction of RNA-to-RNA interactions, and may also reveal biological features that cannot be identified from dynamic or steady state information alone. Finally, we consider the experimental design of genomics experiments for gene regulatory network inference and show that network inference can be improved by incorporating steady-state measurements with time-series data. PMID:23967277
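The combine-both-data-types idea can be sketched for a linear network dx/dt = Ax (this illustrates stacking time-series and steady-state constraints in one regression, not the cMIKANA algorithm itself; the 3-gene network and perturbation inputs are hypothetical):

```python
import numpy as np

# Hypothetical 3-gene linear network dx/dt = A x.
A_true = np.array([[-1.0, 0.5, 0.0],
                   [0.0, -1.0, 0.8],
                   [0.6, 0.0, -1.0]])

def simulate(x0, steps=200, dt=0.01):
    # Forward-Euler trajectory of the linear system.
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * A_true @ xs[-1])
    return np.array(xs)

# Time-series rows: finite-difference derivatives against the states.
ts = simulate(np.array([1.0, 0.5, -0.5]))
dxdt = (ts[1:] - ts[:-1]) / 0.01
X_ts, Y_ts = ts[:-1], dxdt

# Steady-state rows: perturbed systems with known input u satisfy A x_ss = -u.
U = np.eye(3)
X_ss = np.array([np.linalg.solve(A_true, -u) for u in U])
Y_ss = -U

# One stacked least-squares problem over both data types.
X = np.vstack([X_ts, X_ss])
Y = np.vstack([Y_ts, Y_ss])
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print("max |A_hat - A_true|:", np.abs(A_hat - A_true).max())
```

Each steady-state experiment contributes one extra row per gene, which is why the abstract finds steady-state measurements useful when time-series replicates or sampling rates are limited.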
NASA Astrophysics Data System (ADS)
Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng
2017-05-01
The contribution of this work is twofold: (1) a multimodality prediction method for chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide-and-conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can more precisely fit the chaotic time series. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction, but also the prediction confidence interval. SHC-EM outperforms traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and less susceptible to noise than variational learning. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.
Wong, Raymond
2013-01-01
Voice biometrics exploits physiological characteristics that make each person's voice unique. Owing to this uniqueness, voice classification has found useful applications in classifying speakers' gender, mother tongue or ethnicity (accent), and emotional state, as well as in identity verification, verbal command control, and so forth. In this paper, we adopt a new preprocessing method named Statistical Feature Extraction (SFX) for extracting important features for training a classification model, based on a piecewise transformation treating an audio waveform as a time series. Using SFX we can faithfully remodel the statistical characteristics of the time series; together with spectral analysis, a substantial number of features are extracted in combination. An ensemble is utilized to select only the influential features to be used in classification model induction. We focus on comparing the effects of various popular data mining algorithms on multiple datasets. Our experiments consist of classification tests over four typical categories of human voice data, namely Female and Male, Emotional Speech, Speaker Identification, and Language Recognition. The experiments yield encouraging results supporting the fact that heuristically choosing significant features from both time and frequency domains indeed produces better performance in voice classification than traditional signal processing techniques alone, like wavelets and LPC-to-CC. PMID:24288684
NASA Astrophysics Data System (ADS)
Chanard, Kristel; Fleitout, Luce; Calais, Eric; Rebischung, Paul; Avouac, Jean-Philippe
2018-04-01
We model surface displacements induced by variations in continental water, atmospheric pressure, and nontidal oceanic loading, derived from the Gravity Recovery and Climate Experiment (GRACE) for spherical harmonic degrees two and higher. As they are not observable by GRACE, we use at first the degree-1 spherical harmonic coefficients from Swenson et al. (2008, https://doi.org/10.1029/2007JB005338). We compare the predicted displacements with the position time series of 689 globally distributed continuous Global Navigation Satellite System (GNSS) stations. While GNSS vertical displacements are well explained by the model at a global scale, horizontal displacements are systematically underpredicted and out of phase with GNSS station position time series. We then reestimate the degree 1 deformation field from a comparison between our GRACE-derived model, with no a priori degree 1 loads, and the GNSS observations. We show that this approach reconciles GRACE-derived loading displacements and GNSS station position time series at a global scale, particularly in the horizontal components. Assuming that they reflect surface loading deformation only, our degree-1 estimates can be translated into geocenter motion time series. We also address and assess the impact of systematic errors in GNSS station position time series at the Global Positioning System (GPS) draconitic period and its harmonics on the comparison between GNSS and GRACE-derived annual displacements. Our results confirm that surface mass redistributions observed by GRACE, combined with an elastic spherical and layered Earth model, can be used to provide first-order corrections for loading deformation observed in both horizontal and vertical components of GNSS station position time series.
SPE5 Sub-Scale Test Series Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vandersall, Kevin S.; Reeves, Robert V.; DeHaven, Martin R.
2016-01-14
A series of 2 SPE5 sub-scale tests were performed to experimentally confirm that a booster system designed and evaluated in prior tests would properly initiate the PBXN-110 case charge fill. To conduct the experiments, a canister was designed to contain the nominally 50 mm diameter booster tube with an outer fill of approximately 150 mm diameter by 150 mm in length. The canisters were filled with PBXN-110 at NAWS-China Lake and shipped back to LLNL for testing in the High Explosives Applications Facility (HEAF). Piezoelectric crystal pins were placed on the outside of the booster tube before filling, and a series of piezoelectric crystal pins along with Photonic Doppler Velocimetry (PDV) probes were placed on the outer surface of the canister to measure the relative timing and magnitude of the detonation. The 2 piezoelectric crystal pins integral to the booster design were also utilized along with a series of either piezoelectric crystal pins or piezoelectric polymer pads on the top of the canister or outside case that utilized direct contact, gaps, or different thicknesses of RTV cushions to obtain time of arrival data to evaluate the response in preparation for the large-scale SPE5 test. To further quantify the margin of the booster operation, the 1st test (SPE5SS1) was functioned with both detonators and the 2nd test (SPE5SS2) was functioned with only 1 detonator. A full detonation of the material was observed in both experiments as observed by the pin timing and PDV signals. The piezoelectric pads were found to provide a greater measured signal magnitude during the testing with an RTV layer present, and the improved response is due to the larger measurement surface area of the pad. This report will detail the experiment design, canister assembly for filling, final assembly, experiment firing, presentation of the diagnostic results, and a discussion of the results.
Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L
2018-02-01
Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology, via Gaussian mixtures, is proposed for clustering the data from such visualizations. The methodology is theoretically justified, and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
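A minimal sketch of mixture-model clustering of imaging time series (synthetic traces and a simple 1-D EM on a stimulus-correlation feature; the paper's functional data analysis methodology is considerably richer than this):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: 200 calcium traces, roughly half responding to a stimulus.
t = np.linspace(0, 10, 100)
stim = np.sin(2 * np.pi * 0.5 * t)
active = rng.random(200) < 0.5
traces = rng.normal(0, 1, (200, t.size))
traces[active] += 2.0 * stim

# Feature: correlation of each trace with the stimulus regressor.
feat = np.array([np.corrcoef(tr, stim)[0, 1] for tr in traces])

def gmm_em_1d(x, iters=100):
    # Minimal EM for a two-component 1-D Gaussian mixture.
    mu = np.array([x.min(), x.max()])      # deterministic, well-separated init
    var = np.full(2, x.var())
    w = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return mu, resp.argmax(axis=1)

mu, z = gmm_em_1d(feat)
print("component means:", np.sort(mu).round(2))
```

The two recovered component means separate the stimulus-locked traces from the background, which is the clustering behavior the abstract describes at scale.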
Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M
2014-01-01
This paper forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with the nonparametric method of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting stock closing prices.
Explosive Infrasonic Events: Sensor Comparison Experiment (SCE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnurr, J. M.; Garces, M.; Rodgers, A. J.
The Sensor Comparison Experiment (SCE) series, SCE1 through SCE4, consists of four controlled above-ground explosions designed to provide new data on overpressure propagation. Infrasound data were collected by LLNL iPhones and other sensors. Origin times, locations, heights of burst (HOB), and yields are not being released at this time and are therefore not included in this report. This preliminary report will be updated as access to additional data changes or as instrument responses are determined.
Dutta, Debaditya; Mahmoud, Ahmed M.; Leers, Steven A.; Kim, Kang
2013-01-01
Large lipid pools in vulnerable plaques, in principle, can be detected using US based thermal strain imaging (US-TSI). One practical challenge for in vivo cardiovascular application of US-TSI is that the thermal strain is masked by the mechanical strain caused by cardiac pulsation. ECG gating is a widely adopted method for cardiac motion compensation, but it is often susceptible to electrical and physiological noise. In this paper, we present an alternative time series analysis approach to separate thermal strain from the mechanical strain without using ECG. The performance and feasibility of the time-series analysis technique was tested via numerical simulation as well as in vitro water tank experiments using a vessel mimicking phantom and an excised human atherosclerotic artery where the cardiac pulsation is simulated by a pulsatile pump. PMID:24808628
Deceleration-stats save much time during phototrophic culture optimization.
Hoekema, Sebastiaan; Rinzema, Arjen; Tramper, Johannes; Wijffels, René H; Janssen, Marcel
2014-04-01
In the case of phototrophic cultures, photobioreactor costs contribute significantly to the total operating costs. Therefore, if biomass or a biomass-associated product is the desired product, one of the most important parameters to be determined is the maximum biomass production rate. This is traditionally determined in time-consuming series of chemostat cultivations. The goal of this work is to assess the experimental time that can be saved by applying the deceleration-stat (D-stat) technique to determine the maximum biomass production rate of a phototrophic cultivation system, instead of a series of chemostat cultures. A mathematical model developed by Geider and co-workers was adapted to describe the rate of photosynthesis as a function of the local light intensity, which is essential for an accurate description of biomass productivity in phototrophic cultures. The presented simulations demonstrate that D-stat experiments executed in the absence of pseudo steady state (i.e., the arbitrary situation in which the observed specific growth rate deviates <5% from the dilution rate) can still be used to accurately determine the maximum biomass productivity of the system. Moreover, this approach saves up to 94% of the time required to perform a series of chemostat experiments of the same accuracy. If more information on the properties of the system is required, the reduction in experimental time is smaller but still significant. © 2013 Wiley Periodicals, Inc.
BATS: a Bayesian user-friendly software for analyzing time series microarray experiments.
Angelini, Claudia; Cutillo, Luisa; De Canditiis, Daniela; Mutarelli, Margherita; Pensky, Marianna
2008-10-06
Gene expression levels in a given cell can be influenced by different factors, such as pharmacological or medical treatments. The response to a given stimulus is usually different for different genes and may depend on time. One of the goals of modern molecular biology is the high-throughput identification of genes associated with a particular treatment or a biological process of interest. From a methodological and computational point of view, analyzing high-dimensional time-course microarray data requires a very specific set of tools, which are usually not included in standard software packages. Recently, the authors of this paper developed a fully Bayesian approach that allows one to identify differentially expressed genes in a 'one-sample' time-course microarray experiment, to rank them, and to estimate their expression profiles. The method is based on explicit expressions for calculations and is, hence, computationally very efficient. The software package BATS (Bayesian Analysis of Time Series) presented here implements this methodology. It allows a user to automatically identify and rank differentially expressed genes and to estimate their expression profiles when at least 5-6 time points are available. The package has a user-friendly interface. BATS successfully manages various technical difficulties that arise in time-course microarray experiments, such as a small number of observations, non-uniform sampling intervals, and replicated or missing data. BATS is free, user-friendly software for the analysis of both simulated and real microarray time-course experiments. The software, the user manual and a brief illustrative example are freely available online at the BATS website: http://www.na.iac.cnr.it/bats.
Challenges to validity in single-group interrupted time series analysis.
Linden, Ariel
2017-04-01
Single-group interrupted time series analysis (ITSA) is a popular evaluation methodology in which a single unit of observation is studied; the outcome variable is serially ordered as a time series, and the intervention is expected to "interrupt" the level and/or trend of the time series, subsequent to its introduction. The most common threat to validity is history-the possibility that some other event caused the observed effect in the time series. Although history limits the ability to draw causal inferences from single ITSA models, it can be controlled for by using a comparable control group to serve as the counterfactual. Time series data from 2 natural experiments (effect of Florida's 2000 repeal of its motorcycle helmet law on motorcycle fatalities and California's 1988 Proposition 99 to reduce cigarette sales) are used to illustrate how history biases results of single-group ITSA results-as opposed to when that group's results are contrasted to those of a comparable control group. In the first example, an external event occurring at the same time as the helmet repeal appeared to be the cause of a rise in motorcycle deaths, but was only revealed when Florida was contrasted with comparable control states. Conversely, in the second example, a decreasing trend in cigarette sales prior to the intervention raised question about a treatment effect attributed to Proposition 99, but was reinforced when California was contrasted with comparable control states. Results of single-group ITSA should be considered preliminary, and interpreted with caution, until a more robust study design can be implemented. © 2016 John Wiley & Sons, Ltd.
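The single-group ITSA described above is commonly fit as a segmented regression with level-change and slope-change terms; a minimal sketch on simulated data (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly outcome series with an intervention at t0 = 24:
# true level change -2.0 and true slope change -0.15.
n, t0 = 48, 24
t = np.arange(n)
post = (t >= t0).astype(float)
y = 10 + 0.1 * t - 2.0 * post - 0.15 * post * (t - t0) + rng.normal(0, 0.3, n)

# Segmented regression: intercept, pre-trend, level change, slope change.
X = np.column_stack([np.ones(n), t, post, post * (t - t0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]
print(f"level change: {level_change:.2f}, slope change: {slope_change:.2f}")
```

The model estimates the interruption parameters well here only because the simulation contains no co-occurring event; the abstract's point is that a real single-group fit cannot distinguish the intervention from history without a comparable control series.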
NASA Astrophysics Data System (ADS)
Cannas, Barbara; Fanni, Alessandra; Murari, Andrea; Pisano, Fabio; JET Contributors
2018-02-01
In this paper, the dynamic characteristics of type-I ELM time series from the JET tokamak, the world's largest magnetic confinement plasma physics experiment, have been investigated. The dynamic analysis has been focused on the detection of nonlinear structure in Dα radiation time series. Firstly, the method of surrogate data has been applied to evaluate the statistical significance of the null hypothesis of static nonlinear distortion of an underlying Gaussian linear process. Several nonlinear statistics have been evaluated, such as the time-delayed mutual information, the correlation dimension and the maximal Lyapunov exponent. The obtained results allow us to reject the null hypothesis, giving evidence of underlying nonlinear dynamics. Moreover, no evidence of low-dimensional chaos has been found; indeed, the analysed time series are better characterized by a power-law sensitivity to initial conditions, which can suggest a motion at the 'edge of chaos', at the border between chaotic and regular non-chaotic dynamics. This uncertainty makes it necessary to further investigate the nature of the nonlinear dynamics. For this purpose, a second surrogate test, designed to distinguish chaotic orbits from pseudo-periodic orbits, has been applied. In this case, we cannot reject the null hypothesis, which means that the ELM time series is possibly pseudo-periodic. In order to reproduce the pseudo-periodic dynamical properties, a state-of-the-art periodic model proposed to reproduce the ELM cycle has been corrupted by dynamical noise, yielding time series qualitatively in agreement with the experimental time series.
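The surrogate-data idea used above can be sketched compactly. This is an illustrative toy, not the paper's pipeline: phase-randomized surrogates preserve the amplitude spectrum (the linear structure) while destroying any nonlinear structure, and a simple nonlinearity statistic (time-reversal asymmetry) is compared between the data and the surrogate ensemble. A logistic map stands in for the ELM series.

```python
import numpy as np

rng = np.random.default_rng(1)

def phase_surrogate(x, rng):
    """Surrogate series: same amplitude spectrum, randomized Fourier phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                    # preserve the mean (DC component)
    if len(x) % 2 == 0:
        phases[-1] = 0.0               # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def rev_asym(x, lag=1):
    """Time-reversal asymmetry: near zero for Gaussian linear processes."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

# a deterministic nonlinear series (logistic map) standing in for the data
x = np.empty(1024)
x[0] = 0.3
for i in range(1023):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

stat_data = rev_asym(x)
stat_surr = [rev_asym(phase_surrogate(x, rng)) for _ in range(19)]
# if stat_data falls outside the surrogate distribution, reject the linear null
```

With 19 surrogates, a data statistic outside the surrogate range corresponds to a two-sided test at roughly the 10% level; the paper's tests use more elaborate statistics and null hypotheses.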
Mayaud, C; Wagner, T; Benischke, R; Birk, S
2014-04-16
The Lurbach karst system (Styria, Austria) is drained by two major springs and replenished by both autogenic recharge from the karst massif itself and a sinking stream that originates in low-permeability schists (allogenic recharge). Detailed data from two events recorded during a tracer experiment in 2008 demonstrate that an overflow from one of the sub-catchments to the other is activated if the discharge of the main spring exceeds a certain threshold. Time series analysis (autocorrelation and cross-correlation) was applied to examine to what extent the various available methods support the identification of the transient inter-catchment flow observed in this binary karst system. As inter-catchment flow is found to be intermittent, the evaluation was focused on single events. In order to support the interpretation of the results from the time series analysis, a simplified groundwater flow model was built using MODFLOW. The groundwater model is based on the current conceptual understanding of the karst system and represents a synthetic karst aquifer to which the same methods were applied. Using the wetting capability package of MODFLOW, the model simulated an overflow similar to what was observed during the tracer experiment. Various intensities of allogenic recharge were employed to generate synthetic discharge data for the time series analysis. In addition, geometric and hydraulic properties of the karst system were varied in several model scenarios. This approach helps to identify the effects of allogenic recharge and aquifer properties in the results from the time series analysis. Comparing the results from the time series analysis of the observed data with those of the synthetic data, a good agreement was found. For instance, the cross-correlograms show similar patterns with respect to time lags and maximum cross-correlation coefficients if appropriate hydraulic parameters are assigned to the groundwater model.
The comparable behaviors of the real and the synthetic system allow us to deduce that similar aquifer properties are relevant in both systems. In particular, the heterogeneity of aquifer parameters appears to be a controlling factor. Moreover, the location of the overflow connecting the sub-catchments of the two springs is found to be of primary importance regarding the occurrence of inter-catchment flow. This further supports our current understanding of an overflow zone located in the upper part of the Lurbach karst aquifer. Thus, time series analysis of single events can potentially be used to characterize the transient inter-catchment flow behavior of karst systems.
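The event-scale cross-correlation analysis can be illustrated in miniature. This sketch uses synthetic numbers, not the Lurbach data: discharge is built as a delayed, damped response to recharge, and the cross-correlogram recovers the response lag.

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_lag = 500, 7
rain = rng.exponential(1.0, n)                       # synthetic recharge events
kernel = np.r_[np.zeros(true_lag), 1.0, 0.5, 0.25]   # delayed, damped response
discharge = np.convolve(rain, kernel, "full")[:n]
discharge += rng.normal(0, 0.05, n)                  # measurement noise

def cross_corr(x, y, max_lag):
    """Pearson cross-correlation of y against x shifted by each lag."""
    m = len(x)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.array([np.mean(x[:m - k] * y[k:]) for k in range(max_lag + 1)])

ccf = cross_corr(rain, discharge, 20)
best_lag = int(np.argmax(ccf))          # recovers the 7-step response delay
```

In the paper's setting, the analogous correlogram features (lag of the maximum, width of the peak) are compared between observed and MODFLOW-generated series for single events.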
Ion beam plume and efflux characterization flight experiment study. [space shuttle payload
NASA Technical Reports Server (NTRS)
Sellen, J. M., Jr.; Zafran, S.; Cole, A.; Rosiak, G.; Komatsu, G. K.
1977-01-01
A flight experiment and flight experiment package for a shuttle-borne flight test of an 8-cm mercury ion thruster were designed to obtain charged-particle and neutral-particle material transport data that cannot be obtained in conventional ground-based laboratory testing facilities. By the use of both ground and space testing of ion thrusters, the flight worthiness of these ion thrusters for other spacecraft applications may be demonstrated. The flight experiment definition for the ion thruster initially specified a broadly ranging series of flight experiments and flight test sensors. From this larger test series and sensor list, an initial flight test configuration was selected, with measurements of charged-particle material transport, condensable neutral material transport, thruster internal erosion, ion beam neutralization, and ion thrust beam/space plasma electrical equilibration. These measurement areas may all be examined during a seven-day shuttle sortie mission within the available test time of 50-100 hours.
Remediation and recycling of WBP-treated lumber for use as flakeboard
Ronald Sabo; Jerrold E. Winandy; Carol A. Clausen; Altaf Basta
2008-01-01
Laboratory-scale experiments were conducted in which preservative metals (As, Cr, & Cu) were thermochemically extracted from CCA-treated spruce (Picea engelmannii) using oxalic acid and sodium hydroxide. The effects of extraction time, temperature, and pH were examined and laboratory scale optimization was achieved. Two series of experiments were carried out. In...
Meteorological conditions during the summer 1986 CITE 2 flight series
NASA Technical Reports Server (NTRS)
Shipham, Mark C.; Cahoon, Donald R.; Bachmeier, A. Scott
1990-01-01
An overview of meteorological conditions during the NASA Global Tropospheric Experiment/Chemical Instrumentation Testing and Evaluation (GTE/CITE 2) summer 1986 flight series is presented. Computer-generated isentropic trajectories are used to trace the history of air masses encountered along each aircraft flight path. The synoptic-scale wind fields are depicted based on Montgomery stream function analyses. Time series of aircraft-measured temperature, dew point, ozone, and altitude are shown to depict air mass variability. Observed differences between maritime tropical and maritime polar air masses are discussed.
Live from Antarctica: Then and Now
NASA Technical Reports Server (NTRS)
1994-01-01
This real-time educational video series, featuring Camille Jennings from Maryland Public Television, includes information from Antarctic scientists and interactive discussion between the scientists and school children from both Maryland and Hawaii. This is part of a 'Passport to Knowledge Special' series. In this part of the four part Antarctic series, the history of Antarctica from its founding to the present, its mammals, plants, and other life forms are shown and discussed. The importance of Antarctica as a research facility is explained, along with different experiments and research that the facilities there perform.
Cardiovascular response to acute stress in freely moving rats: time-frequency analysis.
Loncar-Turukalo, Tatjana; Bajic, Dragana; Japundzic-Zigon, Nina
2008-01-01
Spectral analysis of cardiovascular series is an important tool for assessing the features of the autonomic control of the cardiovascular system. In this experiment, Wistar rats equipped with an intra-arterial catheter for blood pressure (BP) recording were exposed to stress induced by blowing air. The problem of nonstationary data was overcome by applying the smoothed pseudo-Wigner-Ville (SPWV) time-frequency distribution. Spectral analysis was done before stress, during stress, immediately after stress and later in recovery. The spectral indices were calculated for both systolic blood pressure (SBP) and pulse interval (PI) series. The time evolution of the spectral indices showed a perturbed sympathovagal balance.
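The SPWV distribution itself lives in specialized toolboxes (e.g. the `tftb` package), but the underlying idea of tracking band-power indices over time can be sketched with an ordinary spectrogram. Everything below is illustrative: a synthetic series shifts from high-frequency to low-frequency content halfway through, and the LF/HF ratio tracks the "stress" transition.

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(3)
fs = 20.0                                    # Hz (evenly resampled beat series)
t = np.arange(0, 120, 1 / fs)                # 60 s "rest", then 60 s "stress"
half = len(t) // 2
rest = np.sin(2 * np.pi * 0.4 * t[:half])          # HF-dominated segment
stress = 2.0 * np.sin(2 * np.pi * 0.1 * t[half:])  # LF-dominated segment
x = np.concatenate([rest, stress]) + rng.normal(0, 0.01, len(t))

f, tseg, S = spectrogram(x, fs=fs, nperseg=512, noverlap=256)
lf = (f >= 0.04) & (f < 0.15)                # "sympathetic" band (rat-style bands vary)
hf = (f >= 0.15) & (f < 0.5)                 # "vagal" band
lf_hf = S[lf].sum(axis=0) / S[hf].sum(axis=0)   # LF/HF index over time
```

The SPWV distribution gives much finer joint time-frequency resolution than this windowed spectrogram, which is precisely why the authors used it for short nonstationary epochs.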
Distinguishing time-delayed causal interactions using convergent cross mapping
Ye, Hao; Deyle, Ethan R.; Gilarranz, Luis J.; Sugihara, George
2015-01-01
An important problem across many scientific fields is the identification of causal effects from observational data alone. Recent methods (convergent cross mapping, CCM) have made substantial progress on this problem by applying the idea of nonlinear attractor reconstruction to time series data. Here, we expand upon the technique of CCM by explicitly considering time lags. Applying this extended method to representative examples (model simulations, a laboratory predator-prey experiment, temperature and greenhouse gas reconstructions from the Vostok ice core, and long-term ecological time series collected in the Southern California Bight), we demonstrate the ability to identify different time-delayed interactions, distinguish between synchrony induced by strong unidirectional forcing and true bidirectional causality, and resolve transitive causal chains. PMID:26435402
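The core CCM step (without the time-lag extension this paper adds) can be sketched as follows. This is a simplified toy, not the authors' implementation: the effect variable's delay embedding is used to cross-map the cause, and high skill in that direction indicates causal forcing.

```python
import numpy as np

def embed(x, E, tau):
    """Delay embedding: row t is (x_t, x_{t-tau}, ..., x_{t-(E-1)tau})."""
    n = len(x) - (E - 1) * tau
    return np.column_stack(
        [x[(E - 1 - i) * tau:(E - 1 - i) * tau + n] for i in range(E)])

def ccm_skill(cause, effect, E=2, tau=1):
    """Cross-map `cause` from the shadow manifold of `effect` (simplified CCM)."""
    M = embed(effect, E, tau)
    target = cause[(E - 1) * tau:]
    preds = np.empty(len(M))
    for i in range(len(M)):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        nn = np.argsort(d)[:E + 1]           # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        preds[i] = np.dot(w, target[nn]) / w.sum()
    return np.corrcoef(preds, target)[0, 1]

# coupled logistic maps in which x unidirectionally forces y
n = 600
x = np.empty(n); y = np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])
x, y = x[100:], y[100:]                      # drop the transient

skill = ccm_skill(x, y)                      # should be high: x forces y
```

The paper's extension scans this computation over candidate lags of `target`, which is what lets it separate delayed causation from synchrony.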
NASA Astrophysics Data System (ADS)
Boudhina, Nissaf; Zitouna-Chebbi, Rim; Mekki, Insaf; Jacob, Frédéric; Ben Mechlia, Nétij; Masmoudi, Moncef; Prévot, Laurent
2018-06-01
Estimating evapotranspiration in hilly watersheds is paramount for managing water resources, especially in semiarid/subhumid regions. The eddy covariance (EC) technique allows continuous measurements of latent heat flux (LE). However, time series of EC measurements often contain large portions of missing data because of instrumental malfunctions or quality filtering. Existing gap-filling methods are questionable over hilly crop fields because of changes in airflow inclination and the subsequent aerodynamic properties. We evaluated the performances of different gap-filling methods before and after tailoring them to the conditions of hilly crop fields. The tailoring consisted of splitting the LE time series beforehand on the basis of upslope and downslope winds. The experiment was set up within an agricultural hilly watershed in northeastern Tunisia. EC measurements were collected throughout the growth cycle of three wheat crops, two of them located in adjacent fields on opposite hillslopes, and the third one located in a flat field. We considered four gap-filling methods: the REddyProc method, the linear regression between LE and net radiation (Rn), the multi-linear regression of LE against the other energy fluxes, and the use of the evaporative fraction (EF). Regardless of the method, the splitting of the LE time series did not impact the gap-filling rate, and it might improve the accuracy of LE retrievals in some cases. Regardless of the method, the obtained accuracies of LE estimates after gap filling were close to instrumental accuracies, and they were comparable to those reported in previous studies over flat and mountainous terrains. Overall, REddyProc was the most appropriate method, for both gap-filling rate and retrieval accuracy. Thus, it seems possible to conduct gap filling for LE time series collected over hilly crop fields, provided the LE time series are split beforehand on the basis of upslope-downslope winds. Future work should address consecutive vegetation growth cycles for a larger panel of conditions in terms of climate, vegetation, and water status.
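One of the four methods compared, the linear regression of LE against net radiation, reduces to a very small sketch. The numbers below are synthetic, not the Tunisian data; the wind-based tailoring described in the abstract would amount to fitting this regression separately for upslope and downslope records.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
# synthetic diurnal net radiation (W m^-2), clipped at night
rn = 400 * np.clip(np.sin(np.linspace(0, 8 * np.pi, n)), 0, None)
le = 0.6 * rn + 20 + rng.normal(0, 10, n)    # latent heat flux with noise
gaps = rng.random(n) < 0.3                   # ~30% of records missing
le_obs = np.where(gaps, np.nan, le)

# fit LE = a*Rn + b on the available data, then fill the gaps
ok = ~np.isnan(le_obs)
a, b = np.polyfit(rn[ok], le_obs[ok], 1)
le_filled = np.where(np.isnan(le_obs), a * rn + b, le_obs)
rmse = np.sqrt(np.mean((le_filled[gaps] - le[gaps]) ** 2))
```

A gap-filling evaluation of the kind the paper performs would hold out observed records artificially, fill them, and compare the filled values against the withheld truth, as `rmse` does here.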
Matsunaga, Yasuhiro; Sugita, Yuji
2018-05-03
Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. © 2018, Matsunaga et al.
Vanlaar, Ward; Robertson, Robyn; Marcoux, Kyla
2014-01-01
The objective of this study was to evaluate the impact of Winnipeg's photo enforcement safety program on speeding, i.e., "speed on green", and red-light running behavior at intersections as well as on crashes resulting from these behaviors. ARIMA time series analyses regarding crashes related to red-light running (right-angle crashes and rear-end crashes) and crashes related to speeding (injury crashes and property damage only crashes) occurring at intersections were conducted using monthly crash counts from 1994 to 2008. A quasi-experimental intersection camera experiment was also conducted using roadside data on speeding and red-light running behavior at intersections. These data were analyzed using logistic regression analysis. The time series analyses showed that for crashes related to red-light running, there had been a 46% decrease in right-angle crashes at camera intersections, but that there had also been an initial 42% increase in rear-end crashes. For crashes related to speeding, analyses revealed that the installation of cameras was not associated with increases or decreases in crashes. Results of the intersection camera experiment show that there were significantly fewer red light running violations at intersections after installation of cameras and that photo enforcement had a protective effect on speeding behavior at intersections. However, the data also suggest photo enforcement may be less effective in preventing serious speeding violations at intersections. Overall, Winnipeg's photo enforcement safety program had a positive net effect on traffic safety. Results from both the ARIMA time series and the quasi-experimental design corroborate one another. However, the protective effect of photo enforcement is not equally pronounced across different conditions so further monitoring is required to improve the delivery of this measure. Results from this study as well as limitations are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
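The flavor of the intervention analysis can be conveyed with a small autoregressive sketch. This is not the paper's fitted ARIMA models: a synthetic AR(1) crash-count series receives a step-change in level at the camera installation date, and conditional least squares recovers the implied long-run effect.

```python
import numpy as np

rng = np.random.default_rng(5)
n, t0 = 180, 120                        # monthly counts, cameras installed at t0
step = (np.arange(n) >= t0).astype(float)

# synthetic AR(1) series with a permanent level drop of 20 after the intervention
y = np.empty(n)
y[0] = 50.0
for i in range(1, n):
    mean_i = 50.0 - 20.0 * step[i]
    mean_prev = 50.0 - 20.0 * step[i - 1]
    y[i] = mean_i + 0.5 * (y[i - 1] - mean_prev) + rng.normal(0, 3)

# conditional least squares for y_t = c + phi*y_{t-1} + w*step_t + e_t
X = np.column_stack([np.ones(n - 1), y[:-1], step[1:]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
c, phi, w = beta
long_run_effect = w / (1 - phi)         # permanent level change implied by the fit
```

A percentage effect like the paper's 46% decrease would be `long_run_effect` expressed relative to the pre-intervention level; the paper's ARIMA models additionally handle seasonality and moving-average error structure.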
Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick
2018-01-01
When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. 
Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
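The anchor-and-adjust mechanism the abstract describes (anchoring on recent values with insufficient adjustment) is easy to sketch; this is only that heuristic, not the authors' ADAM agent-based model, and the window and adjustment parameters are illustrative.

```python
import numpy as np

def damped_forecast(series, window=5, adjustment=0.7):
    """Anchor on the mean of recent values, then adjust (insufficiently)
    toward the last observation: a simple trend-damping heuristic."""
    series = np.asarray(series, dtype=float)
    anchor = series[-window:].mean()
    return anchor + adjustment * (series[-1] - anchor)

upward = np.arange(10.0)               # 0, 1, ..., 9; the trend continues to 10
downward = np.arange(9.0, -1.0, -1.0)  # 9, 8, ..., 0; the trend continues to -1
f_up = damped_forecast(upward)         # falls short of 10: underestimates upward trends
f_down = damped_forecast(downward)     # exceeds -1: overestimates downward trends
```

In ADAM's terms, dynamic presentation corresponds to anchoring on more recent events (a smaller effective window), which is why it improves forecasts but not average estimates.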
Anguera, A; Barreiro, J M; Lara, J A; Lizcano, D
2016-01-01
One of the major challenges in the medical domain today is how to exploit the huge amount of data that this field generates. To do this, approaches are required that are capable of discovering knowledge that is useful for decision making in the medical field. Time series are data types that are common in the medical domain and require specialized analysis techniques and tools, especially if the information of interest to specialists is concentrated within particular time series regions, known as events. This research followed the steps specified by the so-called knowledge discovery in databases (KDD) process to discover knowledge from medical time series derived from stabilometric (396 series) and electroencephalographic (200 series) patient electronic health records (EHR). The view offered in the paper is based on the experience gathered as part of the VIIP project. Knowledge discovery in medical time series has a number of difficulties and implications that are highlighted by illustrating the application of several techniques that cover the entire KDD process through two case studies. This paper illustrates the application of different knowledge discovery techniques for the purposes of classification within the above domains. The accuracy of this application for the two classes considered in each case is 99.86% and 98.11% for epilepsy diagnosis in the electroencephalography (EEG) domain and 99.4% and 99.1% for early-age sports talent classification in the stabilometry domain. The KDD techniques achieve better results than other traditional neural network-based classification techniques.
Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T
2015-01-01
To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
NASA Technical Reports Server (NTRS)
Sabaka, T. J.; Rowlands, D. D.; Luthcke, S. B.; Boy, J.-P.
2010-01-01
We describe Earth's mass flux from April 2003 through November 2008 by deriving a time series of mascons on a global 2° × 2° equal-area grid at 10-day intervals. We estimate the mass flux directly from K-band range rate (KBRR) data provided by the Gravity Recovery and Climate Experiment (GRACE) mission. Using regularized least squares, we take into account the underlying process dynamics through continuous space- and time-correlated constraints. In addition, we place the mascon approach in the context of other filtering techniques, showing its equivalence to anisotropic, nonsymmetric filtering, least squares collocation, and Kalman smoothing. We produce mascon time series from KBRR data that have and have not been corrected (forward modeled) for hydrological processes and find that the former produce superior results in oceanic areas by minimizing signal leakage from strong sources on land. By exploiting the structure of the spatiotemporal constraints, we are able to use a much more efficient (in storage and computation) inversion algorithm based upon the conjugate gradient method. This allows us to apply continuous rather than piecewise continuous time-correlated constraints, which we show, via global maps and comparisons with ocean-bottom pressure gauges, to produce time series with reduced random variance and full systematic signal. Finally, we present a preferred global model, a hybrid whose oceanic portions are derived using forward modeling of hydrology but whose land portions are not, and which thus represents a pure GRACE-derived signal.
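The regularized least-squares machinery with correlation constraints can be shown in miniature. This toy is not the GRACE mascon estimator: a first-difference smoothness penalty stands in for the space/time-correlated constraints, and it makes an otherwise underdetermined problem solvable.

```python
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_par = 15, 20                    # fewer observations than parameters
A = rng.normal(size=(n_obs, n_par))      # toy observation operator
m_true = np.sin(np.linspace(0, 2 * np.pi, n_par))   # smooth "mascon" signal
y = A @ m_true + rng.normal(0, 0.5, n_obs)

# first-difference operator: penalizing ||D m|| correlates adjacent parameters
D = np.diff(np.eye(n_par), axis=0)
lam = 5.0
# regularized normal equations: (A'A + lam D'D) m = A'y
m_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
```

The plain normal equations `A.T @ A` here are rank-deficient (15 observations, 20 parameters); the constraint term restores invertibility, which is the same role the spatiotemporal constraint matrices play at vastly larger scale in the mascon inversion.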
Atmospheric Science Data Center
2017-12-22
The First ISCCP Regional Experiment (FIRE) is a series of field missions which have collected cirrus and marine stratocumulus observations.
A novel time series link prediction method: Learning automata approach
NASA Astrophysics Data System (ADS)
Moradabadi, Behnaz; Meybodi, Mohammad Reza
2017-09-01
Link prediction is a major social network challenge that uses the network structure to predict future links. Common link prediction approaches predict hidden links from a static graph representation, in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a common traditional approach that calculates a similarity metric for each non-connected link, sorts the links by their similarity metrics, and labels the links with higher similarity scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, using deterministic graphs to model and analyze a social network may not be appropriate. In the time-series link prediction problem, the time series of link occurrences is used to predict future links. In this paper, we propose a new time-series link prediction method based on learning automata. In the proposed algorithm, for each link that must be predicted there is one learning automaton, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series of link occurrences are considered.
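The paper's chain-of-stages construction is not reproduced here, but its core ingredient, a learning automaton that updates action probabilities from reward feedback, can be sketched. This is a generic two-action linear reward-inaction (L_RI) automaton under illustrative assumptions, trained on a synthetic link-occurrence history.

```python
import random

random.seed(5)

class LinkAutomaton:
    """Two-action linear reward-inaction (L_RI) automaton predicting
    whether a link will be present in the next network snapshot."""

    def __init__(self, lr=0.1):
        self.p = [0.5, 0.5]          # probabilities of predicting [absent, present]
        self.lr = lr

    def predict(self):
        return 1 if random.random() < self.p[1] else 0

    def update(self, action, correct):
        if correct:                  # reward: shift probability toward this action
            other = 1 - action
            self.p[action] += self.lr * self.p[other]
            self.p[other] *= (1 - self.lr)
        # L_RI scheme: do nothing on penalty (incorrect prediction)

# a link that is present in roughly 80% of the historical snapshots
history = [1 if random.random() < 0.8 else 0 for _ in range(300)]
la = LinkAutomaton()
for obs in history:
    a = la.predict()
    la.update(a, correct=(a == obs))
```

The automaton's action probabilities drift toward the action rewarded most often, so after training, `la.p[1]` reflects how strongly the link's recurrence supports predicting it in the next snapshot.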
Design of a 9-loop quasi-exponential waveform generator
NASA Astrophysics Data System (ADS)
Banerjee, Partha; Shukla, Rohit; Shyam, Anurag
2015-12-01
In an under-damped L-C-R series circuit, the current follows a damped sinusoidal waveform. But if a number of sinusoidal waveforms of decreasing time period, generated in L-C-R circuits, are combined within the first quarter cycle, a quasi-exponential output current waveform can be achieved. In an L-C-R series circuit, a quasi-exponential current waveform exhibits a rising current derivative and thereby finds many applications in pulsed power. Here, we describe the design and experimental details of a 9-loop quasi-exponential waveform generator, including the design details of its magnetic switches. In the experiment, an output current of 26 kA has been achieved, and we show how well the experimentally obtained output current profile matches the numerically computed output.
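The underdamped discharge current behind the design can be evaluated directly from the standard series-RLC solution. Component values below are illustrative, not the generator's: the current follows i(t) = (V0/ωL) e^{-αt} sin(ωt) with α = R/2L and ω = sqrt(1/LC - α²).

```python
import numpy as np

# underdamped series R-L-C capacitor discharge
R, L, C, V0 = 0.05, 1e-6, 1e-4, 10.0       # ohms, henries, farads, volts
alpha = R / (2 * L)                         # damping coefficient (1/s)
w0 = 1 / np.sqrt(L * C)                     # undamped natural frequency (rad/s)
w = np.sqrt(w0 ** 2 - alpha ** 2)           # damped frequency (underdamped: w0 > alpha)

t = np.linspace(0, np.pi / (2 * w), 500)    # the first quarter cycle
i = (V0 / (w * L)) * np.exp(-alpha * t) * np.sin(w * t)
peak = i.max()                              # current peaks just before the quarter cycle
```

The generator's trick is to stack several such loops with successively shorter periods so that their summed first-quarter-cycle currents approximate an exponential rise with an increasing derivative.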
A stochastic HMM-based forecasting model for fuzzy time series.
Li, Sheng-Tun; Cheng, Yi-Chung
2010-10-01
Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model, enhancing Sullivan and Woodall's work to allow the handling of two-factor forecasting problems. Moreover, in order to make the conjectural and random nature of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index, forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus the result statistically approximates the real mean of the target value being forecast.
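The Markov-based formulation that this line of work builds on can be sketched for the one-factor case. This toy is not the paper's HMM/Monte Carlo model: values are fuzzified into intervals ("states"), a transition matrix is estimated from consecutive state pairs, and the forecast is the probability-weighted interval midpoint.

```python
import numpy as np

rng = np.random.default_rng(6)
# synthetic daily average temperature series (degrees C)
temps = 25 + 3 * np.sin(np.linspace(0, 20, 400)) + rng.normal(0, 0.5, 400)

# fuzzify: partition the observed range into k intervals ("states")
k = 7
edges = np.linspace(temps.min(), temps.max() + 1e-9, k + 1)
states = np.clip(np.digitize(temps, edges) - 1, 0, k - 1)

# estimate the Markov transition matrix from consecutive state pairs
T = np.zeros((k, k))
for s, s2 in zip(states[:-1], states[1:]):
    T[s, s2] += 1
T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)   # row-normalize visited states

# one-step forecast: expected interval midpoint given the current state
mids = (edges[:-1] + edges[1:]) / 2
forecast = T[states[-1]] @ mids
```

The paper's extension adds a second observed factor via a hidden Markov model and replaces this expected-midpoint readout with Monte Carlo sampling of the outcome distribution.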
Apollo experience report: Manned thermal-vacuum testing of spacecraft
NASA Technical Reports Server (NTRS)
Mclane, J. C., Jr.
1974-01-01
Manned thermal-vacuum tests of the Apollo spacecraft presented many first-time problems in the areas of test philosophy, operational concepts, and program implementation. The rationale used to resolve these problems is explained and examined critically in view of actual experience. The series of 12 tests involving 1517 hours of chamber operating time resulted in the disclosure of numerous equipment and procedural deficiencies of significance to the flight mission. Test experience and results in view of subsequent flight experience confirmed that thermal-vacuum testing of integrated manned spacecraft provides a feasible, cost-effective, and safe technique with which to obtain maximum confidence in spacecraft flight worthiness early in the program.
Probing the Hofmeister series beyond water: Specific-ion effects in non-aqueous solvents
NASA Astrophysics Data System (ADS)
Mazzini, Virginia; Liu, Guangming; Craig, Vincent S. J.
2018-06-01
We present an experimental investigation of specific-ion effects in non-aqueous solvents, with the aim of elucidating the role of the solvent in perturbing the fundamental ion-specific trend. The focus is on the anions CH3COO− > F− > Cl− > Br− > I− > ClO4− > SCN− in the solvents water, methanol, formamide, dimethyl sulfoxide (DMSO), and propylene carbonate (PC). Two types of experiments are presented. The first experiment employs the technique of size exclusion chromatography to evaluate the elution times of electrolytes in the different solvents. We observe that the fundamental (Hofmeister) series is observed in water and methanol, whilst the series is reversed in DMSO and PC. No clear series is observed for formamide. The second experiment uses the quartz crystal microbalance technique to follow the ion-induced swelling and collapse of a polyelectrolyte brush. Here the fundamental series is observed in the protic solvents water, methanol, and formamide, and the series is once again reversed in DMSO and PC. These behaviours are not attributed to the protic/aprotic nature of the solvents, but rather to the polarisability of the solvents and are due to the competition between the interaction of ions with the solvent and the surface. A rule of thumb is proposed for ion specificity in non-aqueous solvents. In weakly polarisable solvents, the trends in specific-ion effects will follow those in water, whereas in strongly polarisable solvents the reverse trend will be observed. Solvents of intermediate polarisability will give weak specific-ion effects.
Crustal Movements and Gravity Variations in the Southeastern Po Plain, Italy
NASA Astrophysics Data System (ADS)
Zerbini, S.; Bruni, S.; Errico, M.; Santi, E.; Wilmes, H.; Wziontek, H.
2014-12-01
In mid-1996, we started a project of continuous GPS and gravity observations at the Medicina observatory, in the southeastern Po Plain, Italy. The experiment, focused on a comparison between height and gravity variations, is still ongoing; these uninterrupted time series constitute an important database both for reliably observing and estimating long-period behavior and for deriving deeper insight into the nature of the crustal deformation. Almost two decades of continuous GPS observations from two closely located receivers have shown that the coordinate time series are characterized by linear and non-linear variations as well as by sudden jumps. Over both long and short time scales, the GPS height series show signals induced by different phenomena, for example those related to mass transport in the Earth system. Seasonal effects are clearly recognizable and are mainly associated with the seasonal behavior of the water table. Understanding and separating the contributions of different forcings is not an easy task; to this end, the information provided by superconducting gravimeter observations and by absolute gravity measurements offers an important means to detect and understand mass contributions. In addition to the GPS and gravity data, time series of a number of environmental parameters, among them water table levels, are also regularly acquired at Medicina. We present the results of a study investigating correlations between the height, gravity, and environmental parameter time series.
Error-based Extraction of States and Energy Landscapes from Experimental Single-Molecule Time-Series
NASA Astrophysics Data System (ADS)
Taylor, J. Nicholas; Li, Chun-Biu; Cooper, David R.; Landes, Christy F.; Komatsuzaki, Tamiki
2015-03-01
Characterization of states, the essential components of the underlying energy landscapes, is one of the most intriguing subjects in single-molecule (SM) experiments due to the existence of noise inherent to the measurements. Here we present a method to extract the underlying state sequences from experimental SM time-series. Taking into account empirical error and the finite sampling of the time-series, the method extracts a steady-state network which provides an approximation of the underlying effective free energy landscape. The core of the method is the application of rate-distortion theory from information theory, allowing the individual data points to be assigned to multiple states simultaneously. We demonstrate the method's proficiency in its application to simulated trajectories as well as to experimental SM fluorescence resonance energy transfer (FRET) trajectories obtained from isolated agonist binding domains of the AMPA receptor, an ionotropic glutamate receptor that is prevalent in the central nervous system.
Culturally Diverse Cohorts: The Exploration of Learning in Context and Community
ERIC Educational Resources Information Center
Callaghan, Carolyn M.
2012-01-01
This dissertation explores the experiences of culturally diverse interactions and learning in adult cohorts. A cohort is defined as a group of students who enter a program of study together and complete a series of common learning experiences during a specified period of time (Saltiel & Russo, 2001). There is much research on the general use,…
NASA Technical Reports Server (NTRS)
Jackson, T. J.; Shiue, J.; Oneill, P.; Wang, J.; Fuchs, J.; Owe, M.
1984-01-01
The verification of a multi-sensor aircraft system developed to study soil moisture applications is discussed. This system consisted of a three-beam push broom L-band microwave radiometer, a thermal infrared scanner, a multispectral scanner, video and photographic cameras, and an onboard navigational instrument. Ten flights were made over agricultural sites in Maryland and Delaware with little or no vegetation cover. Comparisons of aircraft and ground measurements showed that the system was reliable and consistent. Time series analysis of microwave and evaporation data showed a strong similarity that indicates a potential direction for future research.
Complexity analysis of the turbulent environmental fluid flow time series
NASA Astrophysics Data System (ADS)
Mihailović, D. T.; Nikolić-Đorić, E.; Drešković, N.; Mimić, G.
2014-02-01
We have used the Kolmogorov complexity, sample entropy, and permutation entropy to quantify the degree of randomness in river flow time series of two mountain rivers in Bosnia and Herzegovina, representing the turbulent environmental fluid, for the period 1926-1990. In particular, we have examined the monthly river flow time series from two rivers (the Miljacka and the Bosnia) in the mountain part of their flow and then calculated the Kolmogorov complexity (KL) based on the Lempel-Ziv Algorithm (LZA) (lower, KLL, and upper, KLU), sample entropy (SE), and permutation entropy (PE) values for each time series. The results indicate that the KLL, KLU, SE, and PE values of the two rivers are close to each other regardless of the amplitude differences in their monthly flow rates. We have illustrated the changes in mountain river flow complexity by experiments using (i) the data set for the Bosnia River and (ii) anticipated human activities and projected climate changes. We have also explored the sensitivity of the considered measures to the length of the time series. In addition, we have divided the period 1926-1990 into three subintervals, (a) 1926-1945, (b) 1946-1965, and (c) 1966-1990, and calculated the KLL, KLU, SE, and PE values for the time series in these subintervals. We find that during the period 1946-1965 there is a decrease in complexity, with corresponding changes in the SE and PE, in comparison to the period 1926-1990. This complexity loss may be attributed primarily to (i) human interventions on these two rivers after the Second World War, for water consumption, and (ii) climate change in recent times.
Exploratory Causal Analysis in Bivariate Time Series Data
NASA Astrophysics Data System (ADS)
McCracken, James M.
Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, and data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools, including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and that the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between "voltage drives current" and "current drives voltage" as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools.
A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data sets, but little research exists on how these tools compare to each other in practice. This work introduces and defines exploratory causal analysis (ECA) to address this issue, along with the concept of data causality in the taxonomy of causal studies introduced in this work. The motivation is to provide a framework for exploring potential causal structures in time series data sets. ECA is used on several synthetic and empirical data sets, and it is found that all of the tested time series causality tools agree with each other (and with intuitive notions of causality) for many simple systems but can provide conflicting causal inferences for more complicated systems. It is proposed that such disagreements between different time series causality tools during ECA might provide deeper insight into the data than could be found otherwise.
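As a concrete example of the simplest tool in the comparison above, here is a minimal Granger-style check: does adding the other series' past reduce the autoregressive residual variance? The lag order and the plain variance-ratio form are simplifying assumptions; this is not the thesis's leaning or PAI statistic.

```python
import numpy as np

def ar_residual_var(target, predictors, lag):
    """In-sample residual variance of a least-squares AR model of `target`
    on `lag` past values of each series in `predictors`."""
    n = len(target)
    Y = target[lag:]
    cols = [np.ones(n - lag)]
    for p in predictors:
        for k in range(1, lag + 1):
            cols.append(p[lag - k: n - k])   # lag-k values aligned with Y
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    r = Y - X @ beta
    return float(r @ r) / len(r)

def granger_ratio(x, y, lag=2):
    """Ratio > 1 means y's past improves the prediction of x beyond
    x's own past (a Granger-style indication that y drives x)."""
    v_own = ar_residual_var(x, [x], lag)
    v_joint = ar_residual_var(x, [x, y], lag)
    return v_own / max(v_joint, 1e-12)
```

On a synthetic pair where y drives x but not vice versa, the ratio is clearly asymmetric, which is exactly the kind of behavior ECA compares across tools.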
Reinforcement Learning and Savings Behavior.
Choi, James J; Laibson, David; Madrian, Brigitte C; Metrick, Andrew
2009-12-01
We show that individual investors over-extrapolate from their personal experience when making savings decisions. Investors who experience particularly rewarding outcomes from saving in their 401(k)-a high average and/or low variance return-increase their 401(k) savings rate more than investors who have less rewarding experiences with saving. This finding is not driven by aggregate time-series shocks, income effects, rational learning about investing skill, investor fixed effects, or time-varying investor-level heterogeneity that is correlated with portfolio allocations to stock, bond, and cash asset classes. We discuss implications for the equity premium puzzle and interventions aimed at improving household financial outcomes.
Aircraft Cabin Turbulence Warning Experiment
NASA Technical Reports Server (NTRS)
Bogue, Rodney K.; Larcher, Kenneth
2006-01-01
New turbulence prediction technology offers the potential for advance warning of impending turbulence encounters, thereby allowing the necessary cabin preparation time prior to the encounter. The amount of time required for passengers and flight attendants to be securely seated (that is, seated with seat belts fastened) currently is not known. To determine secured-seating-based warning times, a consortium of aircraft safety organizations has conducted an experiment involving a series of timed secured-seating trials. This demonstrative experiment, conducted on October 1, 2, and 3, 2002, used a full-scale B-747 wide-body aircraft simulator, human passenger subjects, and supporting staff from six airlines. Active line-qualified flight attendants from three airlines participated in the trials. Definitive results have been obtained to provide secured-seating-based warning times for the developers of turbulence warning technology.
NASA Technical Reports Server (NTRS)
Paloski, William H.; Odette, Louis L.; Krever, Alfred J.; West, Allison K.
1987-01-01
A real-time expert system is being developed to serve as the astronaut interface for a series of Spacelab vestibular experiments. This expert system is written in a version of Prolog that is itself written in Forth. The Prolog contains a predicate that can be used to execute Forth definitions; thus, the Forth becomes an embedded real-time operating system within the Prolog programming environment. The expert system consists of a data base containing detailed operational instructions for each experiment, a rule base containing Prolog clauses used to determine the next step in an experiment sequence, and a procedure base containing Prolog goals formed from real-time routines coded in Forth. In this paper, we demonstrate and describe the techniques and considerations used to develop this real-time expert system, and we conclude that Forth-based Prolog provides a viable implementation vehicle for this and similar applications.
Monitoring volcano activity through Hidden Markov Model
NASA Astrophysics Data System (ADS)
Cassisi, C.; Montalto, P.; Prestifilippo, M.; Aliotta, M.; Cannata, A.; Patanè, D.
2013-12-01
During 2011-2013, Mt. Etna was mainly characterized by cyclic occurrences of lava fountains, totaling 38 episodes. During this time interval, Etna volcano's states (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN), whose automatic recognition is very useful for monitoring purposes, turned out to be strongly related to the trend of the RMS (Root Mean Square) of the seismic signal recorded by stations close to the summit area. Since the RMS time series behavior is considered to be stochastic, we can try to model the system generating its values, assuming it to be a Markov process, by using Hidden Markov models (HMMs). HMMs are a powerful tool for modeling any time-varying series. HMM analysis seeks to recover the sequence of hidden states from the observed emissions. In our framework, the observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique, which maps RMS time series values to discrete literal emissions. The experiments show how it is possible to infer volcano states by means of HMMs and SAX.
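The SAX discretization step mentioned above can be sketched as follows. The segment count, the 4-letter alphabet, and breakpoints at the standard normal quartiles are illustrative choices; the monitoring system's actual parameters are not given in the abstract.

```python
import statistics

def sax(series, n_segments=8, alphabet='abcd'):
    """Map a numeric series to a SAX word: z-normalize, reduce with
    piecewise aggregate approximation (PAA), then assign one letter per
    segment using N(0,1) breakpoints (quartiles, for a 4-letter alphabet)."""
    mu = statistics.mean(series)
    sd = statistics.pstdev(series) or 1.0     # guard against constant input
    z = [(v - mu) / sd for v in series]
    n = len(z)
    paa = []
    for k in range(n_segments):
        seg = z[k * n // n_segments:(k + 1) * n // n_segments]
        paa.append(sum(seg) / len(seg))
    breakpoints = [-0.6745, 0.0, 0.6745]      # quartiles of N(0, 1)
    return ''.join(alphabet[sum(v > b for b in breakpoints)] for v in paa)
```

The resulting literal emissions are the kind of symbol stream on which an HMM would then be trained.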
NASA Astrophysics Data System (ADS)
Reinovsky, R. E.; Levi, P. S.; Bueck, J. C.; Goforth, J. H.
The Air Force Weapons Laboratory, working jointly with Los Alamos National Laboratory, has conducted a series of experiments directed at exploring composite, or staged, switching techniques for use in opening switches in applications which require the conduction of very high currents (or current densities) with very low losses for relatively long times (several tens of microseconds), and the interruption of these currents in much shorter times (ultimately a few hundred nanoseconds). The results of those experiments are reported.
A new method of real-time detection of changes in periodic data stream
NASA Astrophysics Data System (ADS)
Lyu, Chen; Lu, Guoliang; Cheng, Bin; Zheng, Xiangwei
2017-07-01
Change point detection in periodic time series is highly desirable in many practical settings. We present a novel algorithm for this task, which comprises two phases: (1) anomaly measurement: on the basis of a typical regression model, we propose a new method to measure anomalies in a time series that does not require any reference data from other measurements; (2) change detection: we introduce a new martingale test for detection that can operate in an unsupervised and nonparametric way. We have conducted extensive experiments to systematically test our algorithm. The results suggest that our algorithm is directly applicable in many real-world change-point-detection applications.
NASA Astrophysics Data System (ADS)
Evans, K. D.; Early, A. B.; Northup, E. A.; Ames, D. P.; Teng, W. L.; Olding, S. W.; Krotkov, N. A.; Arctur, D. K.; Beach, A. L., III; Silverman, M. L.
2017-12-01
The role of NASA's Earth Science Data Systems Working Groups (ESDSWG) is to make recommendations relevant to NASA's Earth science data systems from users' experiences and community insight. Each group works independently, focusing on a unique topic. Progress of two of the 2017 Working Groups will be presented. In a single airborne field campaign, there can be several different instruments and techniques that measure the same parameter on one or more aircraft platforms. Many of these same parameters are measured during different airborne campaigns using similar or different instruments and techniques. The Airborne Composition Standard Variable Name Working Group is working to create a list of variable standard names that can be used across all airborne field campaigns in order to assist in the transition to the ICARTT Version 2.0 file format. The overall goal is to enhance the usability of ICARTT files and the search ability of airborne field campaign data. The Time Series Working Group (TSWG) is a continuation of the 2015 and 2016 Time Series Working Groups. In 2015, we started TSWG with the intention of exploring the new OGC (Open Geospatial Consortium) WaterML 2 standards as a means for encoding point-based time series data from NASA satellites. In this working group, we realized that WaterML 2 might not be the best solution for this type of data, for a number of reasons. Our discussion with experts from other agencies, who have worked on similar issues, identified several challenges that we would need to address. As a result, we made the recommendation to study the new TimeseriesML 1.0 standard of OGC as a potential NASA time series standard. The 2016 TSWG examined closely the TimeseriesML 1.0 and, in coordination with the OGC TimeseriesML Standards Working Group, identified certain gaps in TimeseriesML 1.0 that would need to be addressed for the standard to be applicable to NASA time series data. 
An engineering report was drafted based on the OGC Engineering Report template, describing recommended changes to TimeseriesML 1.0, in the form of use cases. In 2017, we are conducting interoperability experiments to implement the use cases and demonstrate the feasibility and suitability of these modifications for NASA and related user communities. The results will be incorporated into the existing draft engineering report.
NASA Astrophysics Data System (ADS)
Aliotta, M. A.; Cassisi, C.; Prestifilippo, M.; Cannata, A.; Montalto, P.; Patanè, D.
2014-12-01
During the last years, volcanic activity at Mt. Etna was often characterized by cyclic occurrences of lava fountains. In the period between January 2011 and June 2013, 38 episodes of lava fountains were observed. Automatic recognition of the volcano's states related to lava fountain episodes (Quiet, Pre-Fountaining, Fountaining, Post-Fountaining) is very useful for monitoring purposes. We discovered that such states are strongly related to the trend of the RMS (Root Mean Square) of the seismic signal recorded in the summit area. In the framework of the PON SIGMA (Integrated Cloud-Sensor System for Advanced Multirisk Management) project, we modeled the system generating the sampled values, assuming the RMS time series to be a stochastic process with Markov dynamics, by using Hidden Markov models (HMMs), which are a powerful tool for modeling any time-varying series. HMM analysis seeks to discover the sequence of hidden states from the observed emissions. In our framework, observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique, which maps RMS time series values to discrete literal emissions. Our experiments showed how to predict volcano states by means of SAX and HMMs.
Spectral Unmixing Analysis of Time Series Landsat 8 Images
NASA Astrophysics Data System (ADS)
Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.
2018-05-01
Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Nevertheless, using the temporal information can provide improved unmixing performance when compared to independent image analyses. Moreover, different land cover types may demonstrate different temporal patterns, which can aid their discrimination. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used for endmember initialization. First, nonnegative least squares (NNLS) is used to estimate abundance maps from the current endmembers. Then, each endmember is updated as the mean value of its "purified" pixels, i.e., the residual of each mixed pixel after excluding the contributions of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework yields the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimates than the "separate unmixing" approach.
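The two alternating steps can be sketched as follows. The projected-gradient solver stands in for a library NNLS routine, and the matrix shapes (pixels x bands for the image, bands x endmembers for the endmember matrix) are assumptions; this is a sketch of the described iteration, not the authors' implementation.

```python
import numpy as np

def nnls_abundances(E, y, iters=500):
    """Abundance step: min ||E a - y||^2 subject to a >= 0, solved by
    projected gradient descent. E is bands x endmembers, y is one pixel."""
    a = np.zeros(E.shape[1])
    lr = 1.0 / np.linalg.norm(E.T @ E, 2)   # step size from the Lipschitz constant
    for _ in range(iters):
        a = np.maximum(0.0, a - lr * (E.T @ (E @ a - y)))
    return a

def update_endmember(Y, A, E, j):
    """Endmember step: mean of the "purified" pixels for endmember j, i.e.
    the residual of each pixel after removing all other endmembers'
    contributions, rescaled by that pixel's abundance of j."""
    residual = Y - A @ E.T + np.outer(A[:, j], E[:, j])  # add back j's share
    w = A[:, j]
    mask = w > 1e-6                     # ignore pixels where j is absent
    purified = residual[mask] / w[mask, None]
    return purified.mean(axis=0)
```

Alternating `nnls_abundances` over all pixels with `update_endmember` over all endmembers, starting from VCA-extracted endmembers, gives the K-P-Means-style loop described in the abstract.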
Wang, Hong; Gao, Jian-en; Li, Xing-hua; Zhang, Shao-long; Wang, Hong-jie
2015-01-01
To evaluate the process of nitrate accumulation and leaching in surface and ground water, we conducted simulated rainfall experiments. The experiments were performed on plots of 5.3 m2 with bare slopes of 3° that were treated with two nitrogen fertilizer inputs, high (22.5 g/m2 NH4NO3) and control (no fertilizer), and subjected to 2 hours of rainfall. For the 1st through 7th experiments, the same amount of fertilizer mixed with soil was uniformly applied to the soil surface 10 minutes before rainfall, and no fertilizer was applied for the 8th through 12th experiments. Initially, the time-series nitrate concentration in the surface flow quickly increased; it then rapidly decreased and gradually stabilized at a low level during the fertilizer experiments. The nitrogen loss in the surface flow primarily occurred during the first 18.6 minutes of rainfall. For the continuous fertilizer experiments, the mean nitrate concentrations in the groundwater flow remained below 10 mg/L before the 5th experiment, and from the 7th experiment onward they were greater than 10 mg/L throughout the process. The time-series process of the changing concentration in the groundwater flow exhibited the same parabolic trend for each fertilizer experiment. However, the time at which the nitrate concentration began to change lagged behind the start time of groundwater flow by approximately 0.94 hours on average. In the experiments without fertilizer, the mean nitrate concentration of the groundwater initially increased continuously and then exhibited the same parabolic trend as in the fertilization experiments, with the nitrate concentration decreasing in the subsequent experiments. Eight days after the 12 rainfall experiments, 50.53% of the total nitrate applied remained in the experimental soil.
Nitrate residues mainly existed at the surface and in the bottom soil layers, which represents a potentially more dangerous pollution scenario for surface and ground water. The surface and subsurface flow would enter into and contaminate water bodies, thus threatening the water environment. PMID:26291616
Retrieving hydrological connectivity from empirical causality in karst systems
NASA Astrophysics Data System (ADS)
Delforge, Damien; Vanclooster, Marnik; Van Camp, Michel; Poulain, Amaël; Watlet, Arnaud; Hallet, Vincent; Kaufmann, Olivier; Francis, Olivier
2017-04-01
Because of their complexity, karst systems exhibit nonlinear dynamics. Moreover, the hidden behavior of a karst complicates the choice of the most suitable model. Therefore, both intensive investigation methods and nonlinear data analysis are needed to reveal the underlying hydrological connectivity as a prior for a consistent physically based modelling approach. Convergent Cross Mapping (CCM), a recent method, promises to identify causal relationships between time series belonging to the same dynamical system. The method is based on phase-space reconstruction and is suited to nonlinear dynamics. As an empirical causation detection method, it can be used to expose the hidden complexity of a karst system by revealing its inner hydrological and dynamical connectivity. Hence, if causal relationships can be linked to physical processes, the method shows great potential to support physically based model structure selection. We present the results of numerical experiments using karst model blocks combined in different structures to generate time series from actual rainfall series. CCM is applied between the time series to investigate whether the empirical causation detection is consistent with the hydrological connectivity suggested by the karst model.
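A minimal numerical sketch of the CCM idea (not the authors' implementation): delay-embed one series, then predict the other from nearest neighbours on the reconstructed manifold; high cross-map skill suggests a causal link. The coupled logistic maps and all parameter choices below are illustrative.

```python
import numpy as np

def delay_embed(x, E, tau):
    """Delay-coordinate embedding: rows are [x(t), x(t-tau), ..., x(t-(E-1)tau)]."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[(E - 1 - i) * tau : (E - 1 - i) * tau + n] for i in range(E)])

def ccm_skill(x, y, E=3, tau=1):
    """Cross-map y from the shadow manifold of x; returns correlation skill.
    In the CCM convention, high skill suggests y causally influences x."""
    Mx = delay_embed(x, E, tau)
    y_target = y[(E - 1) * tau :]
    n = len(Mx)
    preds = np.empty(n)
    for t in range(n):
        d = np.linalg.norm(Mx - Mx[t], axis=1)
        d[t] = np.inf                      # exclude the point itself
        nn = np.argsort(d)[: E + 1]        # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        preds[t] = np.sum(w * y_target[nn]) / np.sum(w)
    return np.corrcoef(preds, y_target)[0, 1]

# Coupled logistic maps: x drives y, so cross-mapping x from y's shadow
# manifold should succeed.
x = np.empty(500); y = np.empty(500); x[0], y[0] = 0.4, 0.2
for t in range(499):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.3 * x[t])
print(ccm_skill(y, x))  # skill for recovering the driver x from y's manifold
```

With more data points the skill converges upward for a true causal link, which is the diagnostic CCM uses.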
Physiological Evidence for Response Inhibition in Choice Reaction Time Tasks
ERIC Educational Resources Information Center
Burle, Boris; Vidal, Frank; Tandonnet, Christophe; Hasbroucq, Thierry
2004-01-01
Inhibition is a widely used notion proposed to account for data obtained in choice reaction time (RT) tasks. However, this concept is weakly supported by empirical facts. In this paper, we review a series of experiments using Hoffman reflex, transcranial magnetic stimulation and electroencephalography to study inhibition in choice RT tasks. We…
What's on Your Radar Screen? Distance-Rate-Time Problems from NASA
ERIC Educational Resources Information Center
Condon, Gregory W.; Landesman, Miriam F.; Calasanz-Kaiser, Agnes
2006-01-01
This article features NASA's FlyBy Math, a series of six standards-based distance-rate-time investigations in air traffic control. Sixth-grade students--acting as pilots, air traffic controllers, and NASA scientists--conduct an experiment and then use multiple mathematical representations to analyze and solve a problem involving two planes flying…
Ten Time-Saving Tips for Undergraduate Research Mentors
ERIC Educational Resources Information Center
Coker, Jeffrey Scott; Davies, Eric
2006-01-01
Undergraduate research experiences can be extremely valuable for students, but can also be very time-consuming for mentors. A series of surveys were administered to plant biologists during the last 4 years to understand the perspectives of mentors on training undergraduate and high school student researchers. The survey responses provided a wealth…
USDA-ARS?s Scientific Manuscript database
A series of experiments were conducted to examine reductions in bacterial contamination of broiler carcasses washed for various times in a spray cabinet with a 2% lauric acid (LA)-1% potassium hydroxide (KOH) (w/v) solution. Forty eviscerated carcasses and 5 ceca were obtained from the processing l...
IMP series report/bibliography
NASA Technical Reports Server (NTRS)
King, J. H.
1971-01-01
The main characteristics of the IMP spacecraft and experiments are considered and the scientific knowledge gained is presented in the form of abstracts of scientific papers using IMP data. Spacecraft characteristics, including temporal and spatial coverages, are presented followed by an annotated bibliography. Experiments conducted on all IMP's (including prelaunch IMP's H and J) are described. Figures are presented showing the time histories, through the end of 1970, of magnetic field, plasma, and energetic particle experiments.
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael
2017-12-01
In recent decades, decomposition techniques have enabled a growing range of applications in dimension reduction and in the extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and, more recently, independent component analysis (ICA) have been applied to extract statistically orthogonal (uncorrelated) and independent modes, respectively, that represent the maximum variance of the time series. PCA and ICA can be classified as stationary signal decomposition techniques, since they are based, respectively, on decomposing the autocovariance matrix and on diagonalizing higher (than two) order statistical tensors of centered time series. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part and their Hilbert-transformed series as its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex dataset in (a), and finally (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework with known truth.
Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016), and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans are used with the aim of demonstrating signal separations of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from the Gravity Recovery And Climate Experiment Terrestrial Water Storage (GRACE TWS) with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of errors in extracting NAO or AMO from SST data using the complex EOF (CEOF) approach reaches up to 50% of the signal itself, while it is reduced to 16% when applying CICA. Larger errors with magnitudes of 100% and 30% of the signal itself are found while separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that the CICA is more effective than CEOF in separating non-stationary patterns.
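Step (a) of the CICA construction can be sketched directly: augment each centered series with its Hilbert transform to obtain a complex dataset whose real part is the data and whose imaginary part carries the local phase. The toy two-channel signal below is illustrative; the cumulant-based complex ICA of step (b) is not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert

# Step (a): build the complex dataset from centered time series.
t = np.linspace(0, 10, 1000)
X = np.column_stack([np.cos(2 * np.pi * t),          # channel 1
                     np.sin(2 * np.pi * 1.7 * t)])   # channel 2
X = X - X.mean(axis=0)                 # center each series
Z = hilbert(X, axis=0)                 # analytic signal: real part is X itself
amplitude = np.abs(Z)                  # instantaneous amplitude
phase = np.unwrap(np.angle(Z), axis=0) # instantaneous phase propagation
```

A complex ICA (or, for comparison, a complex EOF) would then be applied to `Z` to separate the non-stationary modes.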
Textural changes of FER-A peridotite in time series piston-cylinder experiments at 1.0 GPa, 1300°C
NASA Astrophysics Data System (ADS)
Schwab, B. E.; Mercer, C. N.; Johnston, A.
2012-12-01
A series of eight 1.0 GPa, 1300°C partial melting experiments was performed using FER-A peridotite starting material to investigate potential textural changes in the residual crystalline phases over time. Powdered peridotite, with a layer of vitreous carbon spheres as a melt sink, was sealed in graphite-lined Pt capsules and run in CaF2 furnace assemblies in a 1.27 cm piston-cylinder apparatus at the University of Oregon. Run durations ranged from 4 to 128 hours. Experimental charges were mounted in epoxy, cut, and polished for analysis. In a first attempt to quantify the mineral textures, individual 500x BSE images were collected from selected, representative locations on each of the experimental charges using the FEI Quanta 250 ESEM at Humboldt State University. The Noran System Seven (NSS) EDS system was used to collect x-ray maps (spectral images) to aid in identification of phases. A combination of image analysis techniques within NSS and ImageJ software is being used to process the images and quantify the observed mineral textures. The goals are to quantify the size, shape, and abundance of residual olivine (ol), orthopyroxene (opx), clinopyroxene (cpx), and spinel crystals within the selected sample areas of the run products. Additional work will compare the results from the selected areas with larger (lower-magnification) images acquired using the same techniques. Preliminary results indicate that average grain area, minimum grain area, and average, maximum, and minimum grain perimeter change the most (generally decreasing) for ol, opx, and cpx between the shortest-duration (4-hour) experiment and the subsequent 8-hour experiment. The largest relative change in nearly all of these measurements appears to be for cpx. After the initial decrease, preliminary measurements remain relatively constant for ol, opx, and cpx in experiments from 8 to 128 hours in duration.
In contrast, measured parameters of spinel grains increase from the 4-hour to 8-hour experiment and continue to fluctuate over the time interval investigated. Spinel also represents the smallest number of individual grains (average n = 25) in any experiment. Average aspect ratios for all minerals remain relatively constant (~1.5-2) throughout the time series. Additional measurements and refinements are underway.
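The kind of per-grain measurement described (grain counts and area statistics from segmented images) can be sketched with standard labeling tools; the tiny binary mask below is a hypothetical stand-in for a segmented BSE image.

```python
import numpy as np
from scipy import ndimage

# Hypothetical segmented image: 1 = mineral phase of interest, 0 = background.
mask = np.zeros((8, 8), dtype=int)
mask[1:4, 1:4] = 1        # one 3x3 "grain"
mask[5:7, 5:8] = 1        # one 2x3 "grain"

labels, n_grains = ndimage.label(mask)  # connected-component labeling
areas = ndimage.sum(mask, labels, index=range(1, n_grains + 1))  # area per grain (pixels)
print(n_grains, areas.min(), areas.mean(), areas.max())  # prints: 2 6.0 7.5 9.0
```

Perimeters and aspect ratios follow the same pattern: measure each labeled region separately, then aggregate per experiment.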
ERIC Educational Resources Information Center
Zeller, William J., Ed.
2008-01-01
Residence life programs play a key role in recruiting students, helping them make a successful transition to a new institution, and in retaining them, whether students are enrolling for the first time, transferring from another institution, or entering graduate school. Chapters in this book address theories of learning and development, new…
ERIC Educational Resources Information Center
Ronnberg, Linda
2007-01-01
In 1999, after a series of far-reaching reforms aiming at decentralisation, deregulation and increased local autonomy in Swedish education, the Government decided to introduce a five-year experiment, which would develop these reform efforts even further. Even though Swedish compulsory schools already were the most autonomous in Europe with regard…
A little bit of Africa in Brazil: ethnobiology experiences in the field of Afro-Brazilian religions
Albuquerque, Ulysses Paulino
2014-01-27
This essay, which is the fourth in the series "Recollections, Reflections, and Revelations: Ethnobiologists and Their First Time in the Field", is a personal reflection by the researcher on his first field experience with the ethnobiology of so-called Afro-Brazilian cults. The author recounts his feelings and concerns associated with initial fieldwork. PMID:24467714
Dealing with gene expression missing data.
Brás, L P; Menezes, J C
2006-05-01
A comparative evaluation of methods for estimating missing values in microarray data is presented: weighted K-nearest neighbours imputation (KNNimpute), regression-based methods such as local least squares imputation (LLSimpute) and partial least squares imputation (PLSimpute), and Bayesian principal component analysis (BPCA). The influence on prediction accuracy of several factors is elucidated: the methods' parameters, the type of data relationships used in the estimation process (i.e. row-wise, column-wise or both), the missing rate and pattern, and the type of experiment [time series (TS), non-time series (NTS) or mixed (MIX) experiments]. Improvements are proposed based on the iterative use of data (iterative LLS and PLS imputation: ILLSimpute and IPLSimpute), the need to perform initial imputations (modified PLS and Helland PLS imputation: MPLSimpute and HPLSimpute) and the type of relationships employed (KNNarray, LLSarray, HPLSarray and alternating PLS, APLSimpute). Overall, it is shown that data set properties (type of experiment, missing rate and pattern) affect the data similarity structure and therefore influence the methods' performance. LLSimpute and ILLSimpute are preferable for data with a stronger similarity structure (TS and MIX experiments), whereas PLS-based methods (MPLSimpute, IPLSimpute and APLSimpute) are preferable when estimating NTS missing data.
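A minimal sketch of the row-wise KNN idea behind KNNimpute (not the paper's implementation): each missing entry is filled from the k rows most similar on co-observed columns.

```python
import numpy as np

def knn_impute(X, k=2):
    """Minimal row-wise KNN imputation: fill each missing entry with the mean of
    that column over the k nearest rows, with distance computed only on columns
    observed in both rows."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    for i in np.where(miss.any(axis=1))[0]:
        dists = []
        for j in range(len(X)):
            if j == i or miss[j][miss[i]].any():
                continue  # candidate must observe the target's missing columns
            shared = ~miss[i] & ~miss[j]
            if shared.any():
                dists.append((np.mean((X[i, shared] - X[j, shared]) ** 2), j))
        nn = [j for _, j in sorted(dists)[:k]]
        X[i, miss[i]] = X[nn][:, miss[i]].mean(axis=0)
    return X

X = np.array([[1.0, 2.0, 3.0],
              [1.1, np.nan, 3.1],
              [1.0, 2.2, 2.9],
              [9.0, 9.0, 9.0]])
print(knn_impute(X))  # the NaN is filled from the two similar rows, not the outlier
```

The regression- and PCA-based methods in the study replace the neighbour average with a fitted local model, which is what makes them sensitive to the data's similarity structure.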
Identification of ELF3 as an early transcriptional regulator of human urothelium.
Böck, Matthias; Hinley, Jennifer; Schmitt, Constanze; Wahlicht, Tom; Kramer, Stefan; Southgate, Jennifer
2014-02-15
Despite major advances in high-throughput and computational modelling techniques, understanding of the mechanisms regulating tissue specification and differentiation in higher eukaryotes, particularly man, remains limited. Microarray technology has been explored exhaustively in recent years and several standard approaches have been established to analyse the resultant datasets on a genome-wide scale. Gene expression time series offer a valuable opportunity to define temporal hierarchies and gain insight into the regulatory relationships of biological processes. However, unless datasets are exactly synchronous, time points cannot be compared directly. Here we present a data-driven analysis of regulatory elements from a microarray time series that tracked the differentiation of non-immortalised normal human urothelial (NHU) cells grown in culture. The datasets were obtained by harvesting differentiating and control cultures from finite bladder- and ureter-derived NHU cell lines at different time points using two previously validated, independent differentiation-inducing protocols. Due to the asynchronous nature of the data, a novel ranking analysis approach was adopted whereby we compared changes in the amplitude of experiment and control time series to identify common regulatory elements. Our approach offers a simple, fast and effective ranking method for genes that can be applied to other time series. The analysis identified ELF3 as a candidate transcriptional regulator involved in human urothelial cytodifferentiation. Differentiation-associated expression of ELF3 was confirmed in cell culture experiments and by immunohistochemical demonstration in situ. The importance of ELF3 in urothelial differentiation was verified by knockdown in NHU cells, which led to reduced expression of FOXA1 and GRHL3 transcription factors in response to PPARγ activation. 
The consequences of this were seen in the repressed expression of late/terminal differentiation-associated uroplakin 3a gene expression and in the compromised development and regeneration of urothelial barrier function. Copyright © 2014 Elsevier Inc. All rights reserved.
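The amplitude-based ranking idea in the urothelium study above can be sketched as follows; the gene names, expression values, and the amplitude measure (range of the trajectory) are all hypothetical simplifications of the authors' analysis.

```python
import numpy as np

# Because experiment and control series are asynchronous, compare each gene by
# the amplitude of its trajectory rather than time point by time point.
rng = np.random.default_rng(1)
genes = ["ELF3", "GENE_A", "GENE_B"]          # hypothetical gene set
expt = {"ELF3": np.array([1.0, 2.0, 5.0, 8.0, 8.0]),  # strongly induced
        "GENE_A": np.array([4.0, 4.0, 5.0, 4.0, 4.0]),
        "GENE_B": np.array([2.0, 3.0, 2.0, 3.0, 2.0])}
ctrl = {g: np.full(5, expt[g][0]) + rng.normal(0, 0.1, 5) for g in genes}

def amplitude(x):
    return x.max() - x.min()

# Rank genes by how much more their amplitude changes under differentiation.
score = {g: amplitude(expt[g]) - amplitude(ctrl[g]) for g in genes}
ranking = sorted(genes, key=score.get, reverse=True)
print(ranking[0])  # the gene with the largest differentiation-specific amplitude
```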
Kim, Jongrae; Bates, Declan G; Postlethwaite, Ian; Heslop-Harrison, Pat; Cho, Kwang-Hyun
2008-05-15
Inherent non-linearities in biomolecular interactions make the identification of network interactions difficult. One of the principal problems is that all methods based on the use of linear time-invariant models will have fundamental limitations in their capability to infer certain non-linear network interactions. Another difficulty is the multiplicity of possible solutions, since, for a given dataset, there may be many different possible networks which generate the same time-series expression profiles. A novel algorithm for the inference of biomolecular interaction networks from temporal expression data is presented. Linear time-varying models, which can represent a much wider class of time-series data than linear time-invariant models, are employed in the algorithm. From time-series expression profiles, the model parameters are identified by solving a non-linear optimization problem. In order to systematically reduce the set of possible solutions for the optimization problem, a filtering process is performed using a phase-portrait analysis with random numerical perturbations. The proposed approach has the advantages of not requiring the system to be in a stable steady state, of using time-series profiles which have been generated by a single experiment, and of allowing non-linear network interactions to be identified. The ability of the proposed algorithm to correctly infer network interactions is illustrated by its application to three examples: a non-linear model for cAMP oscillations in Dictyostelium discoideum, the cell-cycle data for Saccharomyces cerevisiae and a large-scale non-linear model of a group of synchronized Dictyostelium cells. The software used in this article is available from http://sbie.kaist.ac.kr/software
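An illustrative sketch, not the authors' algorithm: one simple way to fit a linear time-varying model dx/dt ≈ A(t)x is windowed least squares on finite-difference derivatives, so the estimated interaction matrix may drift between windows. The two-variable system and coupling schedule below are synthetic.

```python
import numpy as np

# Simulate a two-variable linear system whose coupling a12 drifts slowly.
T, dt = 400, 0.01
x = np.zeros((T, 2)); x[0] = [1.0, 0.5]
for t in range(T - 1):
    a12 = -1.0 + 0.5 * np.sin(0.01 * t)          # slowly varying interaction
    A = np.array([[-0.5, a12], [1.0, -0.5]])
    x[t + 1] = x[t] + dt * (A @ x[t])            # Euler step

def fit_A(x, t0, w, dt):
    """Least-squares estimate of the interaction matrix over window [t0, t0+w)."""
    dxdt = (x[t0 + 1 : t0 + w + 1] - x[t0 : t0 + w]) / dt
    B, *_ = np.linalg.lstsq(x[t0 : t0 + w], dxdt, rcond=None)  # dxdt = x @ A.T
    return B.T

A_early = fit_A(x, 0, 100, dt)
A_late = fit_A(x, 250, 100, dt)
# Constant entries are recovered in both windows; the drifting a12 differs.
```

The paper's filtering of the solution multiplicity (phase-portrait analysis under random perturbations) sits on top of an identification step of this general kind.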
Delay differential analysis of time series.
Lainscsek, Claudia; Sejnowski, Terrence J
2015-03-01
Nonlinear dynamical system analysis based on embedding theory has been used for modeling and prediction, but it also has applications to signal detection and classification of time series. An embedding creates a multidimensional geometrical object from a single time series. Traditionally either delay or derivative embeddings have been used. The delay embedding is composed of delayed versions of the signal, and the derivative embedding is composed of successive derivatives of the signal. The delay embedding has been extended to nonuniform embeddings to take multiple timescales into account. Both embeddings provide information on the underlying dynamical system without having direct access to all the system variables. Delay differential analysis is based on functional embeddings, a combination of the derivative embedding with nonuniform delay embeddings. Small delay differential equation (DDE) models that best represent relevant dynamic features of time series data are selected from a pool of candidate models for detection or classification. We show that the properties of DDEs support spectral analysis in the time domain where nonlinear correlation functions are used to detect frequencies, frequency and phase couplings, and bispectra. These can be efficiently computed with short time windows and are robust to noise. For frequency analysis, this framework is a multivariate extension of discrete Fourier transform (DFT), and for higher-order spectra, it is a linear and multivariate alternative to multidimensional fast Fourier transform of multidimensional correlations. This method can be applied to short or sparse time series and can be extended to cross-trial and cross-channel spectra if multiple short data segments of the same experiment are available. 
Together, this time-domain toolbox provides higher temporal resolution and richer frequency and phase coupling information, and it allows an easy and straightforward implementation of higher-order spectra across time compared with frequency-based methods such as the DFT and cross-spectral analysis.
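The two classical embeddings contrasted above can be sketched in a few lines; the sine series, embedding dimension, and delay are illustrative choices (a quarter-period delay turns a sine into a near-circular orbit in delay space).

```python
import numpy as np

t = np.linspace(0, 8 * np.pi, 800)
x = np.sin(t)

def delay_embedding(x, dims, tau):
    """Rows are [x(t), x(t-tau), ..., x(t-(dims-1)*tau)]."""
    n = len(x) - (dims - 1) * tau
    return np.column_stack([x[(dims - 1 - i) * tau : (dims - 1 - i) * tau + n]
                            for i in range(dims)])

def derivative_embedding(x, dims, dt):
    """Rows are [x(t), x'(t), x''(t), ...] via repeated numerical differentiation."""
    cols, cur = [x], x
    for _ in range(dims - 1):
        cur = np.gradient(cur, dt)
        cols.append(cur)
    return np.column_stack(cols)

D = delay_embedding(x, dims=2, tau=50)            # ~quarter-period delay for sin
DE = derivative_embedding(x, dims=2, dt=t[1] - t[0])
# D traces a near-circle: x(t) vs x(t - pi/2) ~ (sin t, -cos t).
```

A functional (DDE) embedding, as used in delay differential analysis, combines delayed and differentiated coordinates from these two constructions.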
Lawhern, Vernon; Hairston, W. David; Robbins, Kay
2013-01-01
Recent advances in sensor and recording technology have allowed scientists to acquire very large time-series datasets. Researchers often analyze these datasets in the context of events, which are intervals of time where the properties of the signal change relative to a baseline signal. We have developed DETECT, a MATLAB toolbox for detecting event time intervals in long, multi-channel time series. Our primary goal is to produce a toolbox that is simple for researchers to use, allowing them to quickly train a model on multiple classes of events, assess the accuracy of the model, and determine how closely the results agree with their own manual identification of events without requiring extensive programming knowledge or machine learning experience. As an illustration, we discuss application of the DETECT toolbox for detecting signal artifacts found in continuous multi-channel EEG recordings and show the functionality of the tools found in the toolbox. We also discuss the application of DETECT for identifying irregular heartbeat waveforms found in electrocardiogram (ECG) data as an additional illustration. PMID:23638169
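A hypothetical sketch of windowed event-interval detection in the spirit of what DETECT automates (DETECT itself is a MATLAB toolbox; this is not its API): flag windows whose mean deviates from a robust baseline, then merge consecutive flagged windows into event intervals.

```python
import numpy as np

# Synthetic single-channel recording with two injected "artifact" events.
rng = np.random.default_rng(3)
x = rng.normal(0, 1, 1000)
x[300:360] += 8.0
x[700:740] -= 8.0

def detect_events(x, win=20, thresh=4.0):
    baseline = np.median(x)
    scale = np.median(np.abs(x - baseline)) + 1e-12   # robust spread (MAD)
    flags = [abs(x[i:i + win].mean() - baseline) / scale > thresh
             for i in range(0, len(x) - win + 1, win)]
    # Merge consecutive flagged windows into (start, end) sample intervals.
    events, start = [], None
    for k, f in enumerate(flags):
        if f and start is None:
            start = k * win
        elif not f and start is not None:
            events.append((start, k * win)); start = None
    if start is not None:
        events.append((start, len(flags) * win))
    return events

print(detect_events(x))  # prints: [(300, 360), (700, 740)]
```

DETECT replaces the fixed threshold with a trained classifier over multiple event classes, but the windowing-and-merging structure is the same.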
NASA Technical Reports Server (NTRS)
Spruce, Joseph P.; Ryan, Robert E.; Smoot, James C.; Prados, Donald; McKellip, Rodney; Sader, Steven A.; Gasser, Jerry; May, George; Hargrove, William
2007-01-01
A NASA RPC (Rapid Prototyping Capability) experiment was conducted to assess the potential of VIIRS (Visible/Infrared Imager/Radiometer Suite) data for monitoring non-native gypsy moth (Lymantria dispar) defoliation of forests. This experiment compares defoliation detection products computed from simulated VIIRS and from MODIS (Moderate Resolution Imaging Spectroradiometer) time series products as potential inputs to a forest threat EWS (Early Warning System) being developed for the USFS (USDA Forest Service). Gypsy moth causes extensive defoliation of broadleaved forests in the United States and is specifically identified in the Healthy Forest Restoration Act (HFRA) of 2003. The HFRA mandates development of a national forest threat EWS. This system is being built by the USFS and NASA is aiding integration of needed satellite data products into this system, including MODIS products. This RPC experiment enabled the MODIS follow-on, VIIRS, to be evaluated as a data source for EWS forest monitoring products. The experiment included 1) assessment of MODIS-simulated VIIRS NDVI products, and 2) evaluation of gypsy moth defoliation mapping products from MODIS-simulated VIIRS and from MODIS NDVI time series data. This experiment employed MODIS data collected over the approximately 15 million acre mid-Appalachian Highlands during the annual peak defoliation time frame (approximately June 10 through July 27) during 2000-2006. NASA Stennis Application Research Toolbox software was used to produce MODIS-simulated VIIRS data and NASA Stennis Time Series Product Tool software was employed to process MODIS and MODIS-simulated VIIRS time series data scaled to planetary reflectance. MODIS-simulated VIIRS data was assessed through comparison to Hyperion-simulated VIIRS data using data collected during gypsy moth defoliation. Hyperion-simulated MODIS data showed a high correlation with actual MODIS data (NDVI R2 of 0.877 and RMSE of 0.023). 
MODIS-simulated VIIRS data for the same date showed moderately high correlation with Hyperion-simulated VIIRS data (NDVI R2 of 0.62 and RMSE of 0.035), even though the datasets were collected about half an hour apart during changing weather conditions. MODIS products (MOD02, MOD09, and MOD13) and MOD02-simulated VIIRS time series data were used to generate defoliation mapping products based on image classification and image differencing change detection techniques. Accuracy of the final defoliation mapping products was assessed by image interpretation of over 170 randomly sampled locations on Landsat and ASTER data in conjunction with defoliation map data from the USFS. The MOD02-simulated VIIRS 400-meter NDVI classification produced a similar overall accuracy (87.28 percent with 0.72 Kappa) to the MOD02 250-meter NDVI classification (86.71 percent with 0.71 Kappa). In addition, the VIIRS 400-meter NDVI, MOD02 250-meter NDVI, and MOD02 500-meter NDVI showed good user and producer accuracies for the defoliated forest class (70 percent) and acceptable Kappa values (0.66). MOD02 and MOD02-simulated VIIRS data both showed promise as data sources for regional monitoring of forest disturbance due to insect defoliation.
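The NDVI underlying these time series products is a simple band ratio; the reflectance values below are illustrative, not MODIS measurements.

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red); defoliation depresses NIR reflectance and
# raises red reflectance, lowering the index.
red = np.array([0.05, 0.08, 0.30])   # hypothetical: healthy, moderate, defoliated
nir = np.array([0.50, 0.40, 0.35])
ndvi = (nir - red) / (nir + red)
print(ndvi.round(3))  # prints: [0.818 0.667 0.077]
```

Change-detection products of the kind described then difference NDVI between a pre-defoliation baseline composite and the peak-defoliation period.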
Kennedy, Curtis E; Turley, James P
2011-10-24
Background: Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units, though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods: We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results: Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions: We have proposed a ten-step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778
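Steps 4 and 6 above (defining time windows and computing time series features as latent variables) can be sketched as follows; the heart rate series, window length, and feature set are hypothetical.

```python
import numpy as np

# Hypothetical heart-rate samples; deterioration shows as a rising trend.
hr = np.array([120, 122, 121, 125, 131, 138, 147, 158], dtype=float)

def window_features(x, win):
    """Summarize each fixed-duration window by mean, spread, and trend,
    yielding one feature row per window for a downstream prediction model."""
    rows = []
    for s in range(0, len(x) - win + 1, win):
        w = x[s:s + win]
        slope = np.polyfit(np.arange(win), w, 1)[0]   # per-sample trend
        rows.append([w.mean(), w.std(), slope])
    return np.array(rows)  # columns: mean, std, slope

feats = window_features(hr, win=4)
print(feats.round(2))  # second window shows higher spread and a steeper slope
```

These per-window feature rows are exactly the kind of "time series features as latent variables" that make deterioration visible to an otherwise static classifier.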
Optimized Design and Analysis of Sparse-Sampling fMRI Experiments
Perrachione, Tyler K.; Ghosh, Satrajit S.
2013-01-01
Optimized design and analysis of sparse-sampling FMRI experiments.
Perrachione, Tyler K; Ghosh, Satrajit S
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power. PMID:23616742
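As a companion to the record above, the interplay of hemodynamic response convolution and sparse acquisition timing can be sketched in a short simulation. This is an illustrative sketch only, not the authors' code: the double-gamma HRF parameters, the 4 s-on/8 s-off block design, and the 2 s acquisition plus 6 s silent delay are all assumed for the example.

```python
import math
import numpy as np

def hrf(t, a1=6.0, a2=16.0, ratio=1 / 6.0):
    # Canonical double-gamma hemodynamic response (common default
    # parameters, assumed here; positive peak near 5 s, later undershoot).
    t = np.asarray(t, dtype=float)
    h = (t ** (a1 - 1) * np.exp(-t) / math.gamma(a1)
         - ratio * t ** (a2 - 1) * np.exp(-t) / math.gamma(a2))
    h[t < 0] = 0.0
    return h

dt = 0.1                              # simulation resolution (s)
t = np.arange(0, 60, dt)
stim = np.zeros_like(t)
stim[(t % 12) < 4] = 1.0              # 4 s stimulation block every 12 s (assumed design)
bold = np.convolve(stim, hrf(np.arange(0, 30, dt)))[:t.size] * dt

# Sparse acquisition: one volume every TA + TR-delay seconds.
ta, tr_delay = 2.0, 6.0               # acquisition time and silent delay (assumed)
sample_times = np.arange(0, 60, ta + tr_delay)
samples = bold[(sample_times / dt).astype(int)]
print(samples.round(3))
```

Shortening `tr_delay` yields more samples per run (point 3 of the abstract), while a physiologically informed analysis would model `samples` as the convolved `bold` signal rather than a simple boxcar (point 1).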
Large-scale Granger causality analysis on resting-state functional MRI
NASA Astrophysics Data System (ADS)
D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel
2016-03-01
We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, the large-scale Granger causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at an individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, such as accomplished by the Louvain method. We demonstrate the effectiveness of our approach in recovering the motor and visual cortex from resting-state human brain fMRI data and compare it with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice coefficient (DC). The best DC obtained was 0.59, implying strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in resting-state fMRI.
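The directed-influence idea behind this record can be illustrated with the classical bivariate Granger index that lies at its core; lsGC additionally projects all voxel series onto principal components before prediction so the idea scales to whole-brain data, a step omitted here. The coupled pair of series below is synthetic.

```python
import numpy as np

def granger_index(x, y, p=2):
    # Granger causality x -> y: log ratio of residual sums of squares of
    # the restricted model (y's own past) vs. the full model (past of y
    # and x). Classical bivariate core only, not the full lsGC pipeline.
    T = len(y)
    target = y[p:]
    Zr = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    Zf = np.column_stack([Zr] + [x[p - k:T - k] for k in range(1, p + 1)])

    def rss(Z):
        Z1 = np.column_stack([np.ones(T - p), Z])
        beta, *_ = np.linalg.lstsq(Z1, target, rcond=None)
        r = target - Z1 @ beta
        return r @ r

    return np.log(rss(Zr) / rss(Zf))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.empty(500)
y[0] = 0.0
y[1:] = 0.9 * x[:-1] + 0.1 * rng.standard_normal(499)  # x drives y at lag 1

gc_xy = granger_index(x, y)   # large: x's past improves prediction of y
gc_yx = granger_index(y, x)   # near zero: x is white noise
print(gc_xy, gc_yx)
```

The asymmetry of the two indices is what makes the measure *directed*, in contrast to ordinary correlation.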
NASA Astrophysics Data System (ADS)
Xie, Hong-Bo; Dokos, Socrates
2013-06-01
We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.
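The central tendency measure itself is simple to state: plot successive first-order differences against each other and count the fraction of points falling near the origin (smooth, deterministic dynamics produce small successive differences). A minimal sketch follows; the radius convention is an assumption for illustration, since the papers choose it per application, and the SPCA de-noising step is not shown.

```python
import numpy as np

def ctm(x, rho=None):
    # Central tendency measure: fraction of points of the difference
    # scatter plot (d[n+1] vs d[n]) within radius rho of the origin.
    # rho defaults to half the signal SD (assumed convention).
    x = np.asarray(x, dtype=float)
    d = np.diff(x)
    if rho is None:
        rho = 0.5 * np.std(x)
    r = np.hypot(d[1:], d[:-1])
    return np.mean(r < rho)

rng = np.random.default_rng(1)
t = np.arange(2000)
deterministic = np.sin(0.05 * t)      # smooth: tiny successive differences
noise = rng.standard_normal(2000)     # white noise: large differences
print(ctm(deterministic), ctm(noise))
```

A value near 1 indicates a strong deterministic component; heavy noise drives the measure toward 0, which is exactly the degradation the hybrid SPCA step is meant to counteract.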
Two-dimensional NMR spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrar, T.C.
1987-06-01
This article is the second in a two-part series. In part one (ANALYTICAL CHEMISTRY, May 15) the authors discussed one-dimensional nuclear magnetic resonance (NMR) spectra and some relatively advanced nuclear spin gymnastics experiments that provide a capability for selective sensitivity enhancements. In this article an overview and some applications of two-dimensional NMR experiments are presented. These powerful experiments are important complements to the one-dimensional experiments. As in the more sophisticated one-dimensional experiments, the two-dimensional experiments involve three distinct time periods: a preparation period, t₀; an evolution period, t₁; and a detection period, t₂.
Guide to Using Onionskin Analysis Code (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fugate, Michael Lynn; Morzinski, Jerome Arthur
2016-09-15
This document is a guide to using R-code written for the purpose of analyzing onionskin experiments. We expect the user to be very familiar with statistical methods and the R programming language. For more details about onionskin experiments and the statistical methods mentioned in this document see Storlie, Fugate, et al. (2013). Engineers at LANL experiment with detonators and high explosives to assess performance. The experimental unit, called an onionskin, is a hemisphere consisting of a detonator and a booster pellet surrounded by explosive material. When the detonator explodes, a streak camera mounted above the pole of the hemisphere records when the shock wave arrives at the surface. The output from the camera is a two-dimensional image that is transformed into a curve that shows the arrival time as a function of polar angle. The statistical challenge is to characterize a baseline population of arrival time curves and to compare the baseline curves to curves from a new, so-called test series. The hope is that the new test series of curves is statistically similar to the baseline population.
de Vocht, Frank; Tilling, Kate; Pliakas, Triantafyllos; Angus, Colin; Egan, Matt; Brennan, Alan; Campbell, Rona; Hickman, Matthew
2017-09-01
Control of alcohol licensing at local government level is a key component of alcohol policy in England. There is, however, only weak evidence of any public health improvement. We used a novel natural experiment design to estimate the impact of new local alcohol licensing policies on hospital admissions and crime. We used Home Office licensing data (2007-2012) to identify (1) interventions: local areas where both a cumulative impact zone and increased licensing enforcement were introduced in 2011; and (2) controls: local areas with neither. Outcomes were 2009-2015 alcohol-related hospital admissions, violent and sexual crimes, and antisocial behaviour. Bayesian structural time series were used to create postintervention synthetic time series (counterfactuals) based on weighted time series in control areas. Intervention effects were calculated from differences between measured and expected trends. Validation analyses were conducted using randomly selected controls. 5 intervention and 86 control areas were identified. Intervention was associated with an average reduction in alcohol-related hospital admissions of 6.3% (95% credible intervals (CI) -12.8% to 0.2%) and, to a lesser extent, with a reduction in violent crimes, especially up to 2013 (-4.6%, 95% CI -10.7% to 1.4%). There was weak evidence of an effect on sexual crimes up to 2013 (-8.4%, 95% CI -21.4% to 4.6%) and insufficient evidence of an effect on antisocial behaviour as a result of a change in reporting. Moderate reductions in alcohol-related hospital admissions and violent and sexual crimes were associated with introduction of local alcohol licensing policies. This novel methodology holds promise for use in other natural experiments in public health. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
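The counterfactual logic of this design can be sketched in a few lines. The paper fits Bayesian structural time-series models; as a loudly simplified stand-in, the sketch below fits ordinary least-squares weights on control-area series over the pre-intervention period and extrapolates them as the synthetic counterfactual. All numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_controls, T0 = 120, 10, 84        # 120 months, intervention at month 84 (made up)
controls = rng.standard_normal((T, n_controls))
treated = controls[:, :3].mean(axis=1) + 0.1 * rng.standard_normal(T)
treated[T0:] -= 1.5                     # true post-intervention drop

# Fit weights on the pre-period only, then extrapolate the counterfactual.
X = np.column_stack([np.ones(T), controls])
w, *_ = np.linalg.lstsq(X[:T0], treated[:T0], rcond=None)
counterfactual = X @ w

# The intervention effect is measured against the synthetic series.
effect = treated[T0:] - counterfactual[T0:]
print(effect.mean())                    # recovers roughly the -1.5 shift
```

The Bayesian structural time-series version adds trend/seasonal state components and yields credible intervals for `effect`, which is where the paper's 95% CIs come from.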
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experiment results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
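The autoregressive core of such an imputer can be sketched as follows. ARLSimpute additionally exploits local similarity across genes and handles entirely missing columns; this toy Yule-Walker fit shows only the AR one-step prediction idea, on a simulated AR(2) profile.

```python
import numpy as np

def yule_walker(x, p):
    # AR(p) coefficients from the sample autocovariance (Yule-Walker).
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    acov = np.array([x[:len(x) - k] @ x[k:] for k in range(p + 1)]) / len(x)
    R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, acov[1:])

rng = np.random.default_rng(3)
phi = np.array([0.75, -0.25])          # true AR(2) coefficients (made up)
x = np.zeros(300)
for t in range(2, 300):
    x[t] = phi @ x[t - 2:t][::-1] + rng.standard_normal()

a = yule_walker(x, 2)
# One-step prediction of a "missing" final observation from its past.
x_hat = a @ x[-3:-1][::-1]
print(a.round(2), round(float(x_hat), 2))
```

In a full imputer the prediction would be combined across similar genes (rows) and iterated until the missing entries converge.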
Mapping deforestation and forest degradation using Landsat time series: a case of Sumatra—Indonesia
Belinda Arunarwati Margono
2013-01-01
Indonesia experiences the second highest rate of deforestation among tropical countries (FAO 2005, 2010). Consequently, timely and accurate forest data are required to combat deforestation and forest degradation in support of climate change mitigation and biodiversity conservation policy initiatives. Remote sensing is considered as a significant data source for forest...
Indicator saturation: a novel approach to detect multiple breaks in geodetic time series.
NASA Astrophysics Data System (ADS)
Jackson, L. P.; Pretis, F.; Williams, S. D. P.
2016-12-01
Geodetic time series can record long term trends, quasi-periodic signals at a variety of time scales from days to decades, and sudden breaks due to natural or anthropogenic causes. The causes of breaks range from instrument replacement to earthquakes to unknown (i.e. no attributable cause). Furthermore, breaks can be permanent or short-lived and range at least two orders of magnitude in size (mm to 100's mm). To account for this range of possible signal-characteristics requires a flexible time series method that can distinguish between true and false breaks, outliers and time-varying trends. One such method, Indicator Saturation (IS) comes from the field of econometrics where analysing stochastic signals in these terms is a common problem. The IS approach differs from alternative break detection methods by considering every point in the time series as a break until it is demonstrated statistically that it is not. A linear model is constructed with a break function at every point in time, and all but statistically significant breaks are removed through a general-to-specific model selection algorithm for more variables than observations. The IS method is flexible because it allows multiple breaks of different forms (e.g. impulses, shifts in the mean, and changing trends) to be detected, while simultaneously modelling any underlying variation driven by additional covariates. We apply the IS method to identify breaks in a suite of synthetic GPS time series used for the Detection of Offsets in GPS Experiments (DOGEX). We optimise the method to maximise the ratio of true-positive to false-positive detections, which improves estimates of errors in the long term rates of land motion currently required by the GPS community.
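The "break dummy at every point" idea can be made concrete with a toy step-indicator saturation. This is a deliberately simplified sketch: dummies are entered in two half-blocks so each regression stays estimable, and simple t-tests replace the general-to-specific search of the full IS method; the break size, noise level, and threshold are all made up.

```python
import numpy as np

def step_indicator_saturation(y, crit=2.5):
    # Toy step-indicator saturation: a step dummy at every time point,
    # entered in two half-blocks; only dummies with |t| > crit survive.
    n = len(y)
    kept = []
    for half in (range(1, n // 2), range(n // 2, n)):
        S = np.column_stack([(np.arange(n) >= k).astype(float) for k in half])
        X = np.column_stack([np.ones(n), S])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        dof = max(n - X.shape[1], 1)
        s2 = resid @ resid / dof
        se = np.sqrt(s2 * np.diag(np.linalg.pinv(X.T @ X)))
        tstat = beta / np.maximum(se, 1e-12)
        kept += [k for k, t in zip(half, tstat[1:]) if abs(t) > crit]
    return sorted(kept)

rng = np.random.default_rng(4)
y = 0.5 * rng.standard_normal(100)
y[60:] += 4.0                          # a single mean shift at t = 60
breaks = step_indicator_saturation(y)
print(breaks)
```

The DOGEX optimisation described in the abstract amounts to tuning thresholds like `crit` to balance true-positive against false-positive break detections.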
Impressionist Landscape Cartography
NASA Astrophysics Data System (ADS)
Todd, Stella W.
2018-05-01
Cartography helps to show us the world in which we reside by providing us a framework to explore space. We can select myriad themes to represent what is relevant to our lives: physical characteristics, human behaviors, hazards, opportunities. Themes are represented on a continuum between real-world images and pure abstractions. How we define cartography and what we expect from it changes with society and technology. We are now inundated with data but we still struggle with expressing our personal geographic experiences through cartography. In this age of information, we have become more cognizant of our individual experience of place and our need to determine our own paths and therefore create our own maps. In order to reflect our journey we can add individual details to cartographic products or generalize information to concentrate on what is meaningful to us. Since time and space are interrelated, we experience geography by viewing the landscape as changing scenes over time. This experience is both spatial and temporal since we experience geography by moving through space. Experiencing each scene is a separate event. This paper expands the personalization of maps to include our impressions of the travel experience. Rather than add art to cartography it provides geographic reference to art. It explores the use of a series of quick sketches drawn while traveling along roads using a single drawing pad to produce a time series of interpreted landscapes. With the use of geographic time stamps from global positioning systems these sketches are converted from a drawing to a map documenting the path of movement. Although the map scale varies between sketch entries, each scene impression can be linked to one or more maps of consistent scale. The result is an artistic piece that expresses a dynamic geographic experience that can be viewed in conjunction with more traditional maps.
Unlike mental maps which are constructed from memory, these maps reflect our direct impressions of the landscape. The use of art can help us convey our experience.
Reinforcement Learning and Savings Behavior*
Choi, James J.; Laibson, David; Madrian, Brigitte C.; Metrick, Andrew
2009-01-01
We show that individual investors over-extrapolate from their personal experience when making savings decisions. Investors who experience particularly rewarding outcomes from saving in their 401(k)—a high average and/or low variance return—increase their 401(k) savings rate more than investors who have less rewarding experiences with saving. This finding is not driven by aggregate time-series shocks, income effects, rational learning about investing skill, investor fixed effects, or time-varying investor-level heterogeneity that is correlated with portfolio allocations to stock, bond, and cash asset classes. We discuss implications for the equity premium puzzle and interventions aimed at improving household financial outcomes. PMID:20352013
Liu, Lizhe; Pilles, Bert M; Gontcharov, Julia; Bucher, Dominik B; Zinth, Wolfgang
2016-01-21
UV-induced formation of the cyclobutane pyrimidine dimer (CPD) lesion is investigated by stationary and time-resolved photosensitization experiments. The photosensitizer 2'-methoxyacetophenone with high intersystem crossing efficiency and large absorption cross-section in the UV-A range was used. A diffusion controlled reaction model is presented. Time-resolved experiments confirmed the validity of the reaction model and provided information on the dynamics of the triplet sensitization process. With a series of concentration dependent stationary illumination experiments, we determined the quantum efficiency for CPD formation from the triplet state of the thymine dinucleotide TpT to be 4 ± 0.2%.
Cornick, Matthew; Hunt, Brian; Ott, Edward; Kurtuldu, Huseyin; Schatz, Michael F
2009-03-01
Data assimilation refers to the process of estimating a system's state from a time series of measurements (which may be noisy or incomplete) in conjunction with a model for the system's time evolution. Here we demonstrate the applicability of a recently developed data assimilation method, the local ensemble transform Kalman filter, to nonlinear, high-dimensional, spatiotemporally chaotic flows in Rayleigh-Bénard convection experiments. Using this technique we are able to extract the full temperature and velocity fields from a time series of shadowgraph measurements. In addition, we describe extensions of the algorithm for estimating model parameters. Our results suggest the potential usefulness of our data assimilation technique to a broad class of experimental situations exhibiting spatiotemporal chaos.
Mesgouez, C; Rilliard, F; Matossian, L; Nassiri, K; Mandel, E
2003-03-01
The aim of this study was to determine the influence of operator experience on the time needed for canal preparation when using a rotary nickel-titanium (Ni-Ti) system. A total of 100 simulated curved canals in resin blocks were used. Four operators prepared a total of 25 canals each. The operators included practitioners with prior experience of the preparation technique, and practitioners with no experience. The working length for each instrument was precisely predetermined. All canals were instrumented with rotary Ni-Ti ProFile Variable Taper Series 29 engine-driven instruments using a high-torque handpiece (Maillefer, Ballaigues, Switzerland). The time taken to prepare each canal was recorded. Significant differences between the operators were analysed using the Student's t-test and the Kruskal-Wallis and Dunn nonparametric tests. Comparison of canal preparation times demonstrated a statistically significant difference between the four operators (P < 0.001). In the inexperienced group, a significant linear regression between canal number and preparation time occurred. Time required for canal preparation was inversely related to operator experience.
NASA Astrophysics Data System (ADS)
Honegger, D. A.; Haller, M. C.; Diaz Mendez, G. M.; Pittman, R.; Catalan, P. A.
2012-12-01
Land-based X-band marine radar observations were collected as part of the month-long DARLA-MURI / RIVET-DRI field experiment at New River Inlet, NC in May 2012. Here we present a synopsis of preliminary results utilizing microwave radar backscatter time series collected from an antenna located 400 m inside the inlet mouth and with a footprint spanning 1000 m beyond the ebb shoals. Two crucial factors in the forcing and constraining of nearshore numerical models are accurate bathymetry and offshore variability in the wave field. Image time series of radar backscatter from surface gravity waves can be utilized to infer these parameters over a large swath and during times of poor optical visibility. Presented are radar-derived wavenumber vector maps obtained from the Plant et al. (2008) algorithm and bathymetric estimates as calculated using Holman et al. (JGR, in review). We also evaluate the effects of tidal currents on the wave directions and depth inversion accuracy. In addition, shifts in the average wave breaking patterns at tidal frequencies shed light on depth- (and possibly current-) induced breaking as a function of tide level and tidal current velocity, while shifts over longer timescales imply bedform movement during the course of the experiment. Lastly, lowpass filtered radar image time series of backscatter intensity are shown to identify the structure and propagation of tidal plume fronts and multiscale ebb jets at the offshore shoal boundary.
Impact of missing data on the efficiency of homogenisation: experiments with ACMANTv3
NASA Astrophysics Data System (ADS)
Domonkos, Peter; Coll, John
2018-04-01
The impact of missing data on the efficiency of homogenisation with ACMANTv3 is examined with simulated monthly surface air temperature test datasets. The homogeneous database is derived from an earlier benchmarking of daily temperature data in the USA, and then outliers and inhomogeneities (IHs) are randomly inserted into the time series. Three inhomogeneous datasets are generated and used, one with relatively few and small IHs, another one with IHs of medium frequency and size, and a third one with large and frequent IHs. All of the inserted IHs are changes to the means. Most of the IHs are single sudden shifts or pairs of shifts resulting in platform-shaped biases. Each test dataset consists of 158 time series of 100 years length, and their mean spatial correlation is 0.68-0.88. For examining the impacts of missing data, seven experiments are performed, in which 18 series are left complete, while variable quantities (10-70%) of the data of the other 140 series are removed. The results show that data gaps have a greater impact on the monthly root mean squared error (RMSE) than the annual RMSE and trend bias. When data with a large ratio of gaps is homogenised, the reduction of the upper 5% of the monthly RMSE is the least successful, but even there, the efficiency remains positive. In terms of reducing the annual RMSE and trend bias, the efficiency is 54-91%. The inclusion of short and incomplete series with sufficient spatial correlation in all cases improves the efficiency of homogenisation with ACMANTv3.
ImpulseDE: detection of differentially expressed genes in time series data using impulse models.
Sander, Jil; Schultze, Joachim L; Yosef, Nir
2017-03-01
Perturbations in the environment lead to distinctive gene expression changes within a cell. Observed over time, those variations can be characterized by single impulse-like progression patterns. ImpulseDE is an R package suited to capture these patterns in high throughput time series datasets. By fitting a representative impulse model to each gene, it reports differentially expressed genes across time points from a single or between two time courses from two experiments. To optimize running time, the code uses clustering and multi-threading. By applying ImpulseDE, we demonstrate its power to represent underlying biology of gene expression in microarray and RNA-Seq data. ImpulseDE is available on Bioconductor (https://bioconductor.org/packages/ImpulseDE/). niryosef@berkeley.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
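The impulse model itself is a product of two sigmoids: an onset transition from baseline h0 to peak h1 at time t1, followed by an offset transition to steady state h2 at time t2. The sketch below uses the commonly cited Chechik and Koller parametrization; treat the exact scaling as an assumption rather than ImpulseDE's internal code.

```python
import numpy as np

def impulse(t, h0, h1, h2, t1, t2, beta):
    # Double-sigmoid impulse model: onset (h0 -> h1 at t1) times
    # offset (h1 -> h2 at t2), normalized by the peak level h1.
    s1 = h0 + (h1 - h0) / (1.0 + np.exp(-beta * (t - t1)))
    s2 = h2 + (h1 - h2) / (1.0 + np.exp(beta * (t - t2)))
    return s1 * s2 / h1

t = np.linspace(0, 10, 101)
y = impulse(t, h0=1.0, h1=8.0, h2=2.0, t1=2.0, t2=7.0, beta=3.0)
print(y[0].round(2), y.max().round(2), y[-1].round(2))
```

Fitting the six parameters per gene (by least squares against the observed expression profile) is what lets the package rank genes by how strongly an impulse-shaped response explains their time course.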
ERIC Educational Resources Information Center
Rattanavich, Saowalak
2017-01-01
This experimental study aims to investigate the effects of three vocational English classes, each one academic semester in duration, and using the concentrated language encounter approach and reciprocal peer teaching strategies. This study employed a time-series design with one pre-experiment and two post-experiments. Discourse and frequency…
ERIC Educational Resources Information Center
Ackerman, Richard H.; Maslin-Ostrowski, Pat
This book seeks to understand how school leaders cope with and respond to significant dilemmas in their practice and what the experience means to them. It is based on stories from conversations with self-described wounded leaders. By their accounts, these experiences are the kind that wound to the core (what some leaders call their integrity or…
Quasi-Experimental Designs for Causal Inference
ERIC Educational Resources Information Center
Kim, Yongnam; Steiner, Peter
2016-01-01
When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…
Role of man in flight experiment payloads, phase 1, appendices 1 and 2. [Spacelab project planning
NASA Technical Reports Server (NTRS)
Malone, T. B.; Kirkpatrick, M.
1974-01-01
The individual task durations are calculated in a series of time line realization problems, and a functional requirements data collection technique, designed to accommodate the data requirements for Spacelab payloads, is presented.
Including trait-based early warning signals helps predict population collapse
Clements, Christopher F.; Ozgul, Arpat
2016-01-01
Foreseeing population collapse is an on-going target in ecology, and this has led to the development of early warning signals based on expected changes in leading indicators before a bifurcation. Such signals have been sought for in abundance time-series data on a population of interest, with varying degrees of success. Here we move beyond these established methods by including parallel time-series data of abundance and fitness-related trait dynamics. Using data from a microcosm experiment, we show that including information on the dynamics of phenotypic traits such as body size into composite early warning indices can produce more accurate inferences of whether a population is approaching a critical transition than using abundance time-series alone. By including fitness-related trait information alongside traditional abundance-based early warning signals in a single metric of risk, our generalizable approach provides a powerful new way to assess what populations may be on the verge of collapse. PMID:27009968
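A composite abundance-plus-trait warning index of the kind described can be sketched with rolling-window statistics. This is a simplified illustration on synthetic data (critical slowing down is mimicked by a rising AR coefficient, and body size by a declining trend), not the paper's exact metric.

```python
import numpy as np

def rolling(x, w, fn):
    return np.array([fn(x[i:i + w]) for i in range(len(x) - w + 1)])

def composite_ews(abundance, trait, w=20):
    # Composite early-warning index: standardized rolling lag-1
    # autocorrelation and variance of abundance, plus the standardized
    # (declining) rolling mean of a fitness-related trait.
    ar1 = rolling(abundance, w, lambda s: np.corrcoef(s[:-1], s[1:])[0, 1])
    var = rolling(abundance, w, np.var)
    trt = rolling(trait, w, np.mean)
    z = lambda s: (s - s.mean()) / s.std()
    return z(ar1) + z(var) - z(trt)    # trait decline adds to risk

rng = np.random.default_rng(5)
n = 200
phi = np.linspace(0.1, 0.95, n)        # critical slowing down: rising memory
abund = np.zeros(n)
for t in range(1, n):
    abund[t] = phi[t] * abund[t - 1] + rng.standard_normal()
trait = np.linspace(5.0, 3.0, n) + 0.1 * rng.standard_normal(n)

index = composite_ews(abund, trait)    # rises as the transition approaches
```

A rising `index` flags the approaching transition earlier and more reliably than any of its three components alone, which is the paper's central claim.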
Safe and effective nursing shift handover with NURSEPASS: An interrupted time series.
Smeulers, Marian; Dolman, Christine D; Atema, Danielle; van Dieren, Susan; Maaskant, Jolanda M; Vermeulen, Hester
2016-11-01
Implementation of a locally developed evidence based nursing shift handover blueprint with a bedside-safety-check to determine the effect size on quality of handover. A mixed methods design with: (1) an interrupted time series analysis to determine the effect on handover quality in six domains; (2) descriptive statistics to analyze the intercepted discrepancies by the bedside-safety-check; (3) evaluation sessions to gather experiences with the new handover process. We observed a continued trend of improvement in handover quality and a significant improvement in two domains of handover: organization/efficiency and contents. The bedside-safety-check successfully identified discrepancies on drains, intravenous medications, bandages or general condition and was highly appreciated. Use of the nursing shift handover blueprint showed promising results on effectiveness as well as on feasibility and acceptability. However, to enable long term measurement of effectiveness, evaluation with large-scale interrupted time series or statistical process control is needed. Copyright © 2016 Elsevier Inc. All rights reserved.
Spectral decompositions of multiple time series: a Bayesian non-parametric approach.
Macaro, Christian; Prado, Raquel
2014-01-01
We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferraioli, Luigi; Hueller, Mauro; Vitale, Stefano
The scientific objectives of the LISA Technology Package experiment on board of the LISA Pathfinder mission demand accurate calibration and validation of the data analysis tools in advance of the mission launch. The level of confidence required in the mission outcomes can be reached only by intensively testing the tools on synthetically generated data. A flexible procedure allowing the generation of a cross-correlated stationary noise time series was set up. A multichannel time series with the desired cross-correlation behavior can be generated once a model for a multichannel cross-spectral matrix is provided. The core of the procedure comprises a noise-coloring multichannel filter designed via a frequency-by-frequency eigendecomposition of the model cross-spectral matrix and a subsequent fit in the Z domain. The common problem of initial transients in a filtered time series is solved with a proper initialization of the filter recursion equations. The noise generator performance was tested in a two-dimensional case study of the closed-loop LISA Technology Package dynamics along the two principal degrees of freedom.
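The per-frequency eigendecomposition step of this procedure can be illustrated directly in the frequency domain. Note this is a shortcut sketch with a made-up target spectrum: each frequency bin of white-noise spectra is colored by the matrix square root of the target cross-spectral matrix, whereas the LTP procedure fits a recursive Z-domain filter to the same decomposition, which permits streaming generation and careful initialization of the filter transients.

```python
import numpy as np

def correlated_noise(csd, n, seed=6):
    # Color two channels of white noise so their cross-spectral matrix
    # approximates csd(f), via a per-frequency eigendecomposition.
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    W = (rng.standard_normal((2, freqs.size))
         + 1j * rng.standard_normal((2, freqs.size)))
    X = np.empty_like(W)
    for i, f in enumerate(freqs):
        lam, V = np.linalg.eigh(csd(f))
        root = V @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ V.conj().T
        X[:, i] = root @ W[:, i]
    return np.fft.irfft(X, n=n, axis=1)

def csd(f):
    # Example target: two identical low-pass spectra with coherence 0.8
    # (a made-up model, not the LTP one).
    p = 1.0 / (1.0 + (f / 0.05) ** 2)
    return np.array([[p, 0.8 * p], [0.8 * p, p]])

x = correlated_noise(csd, 4096)
r = np.corrcoef(x)[0, 1]       # close to the target coherence of 0.8
print(round(r, 2))
```

The Z-domain filter variant trades this one-shot FFT synthesis for a recursion that can run indefinitely, which matters for long mission-simulation runs.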
Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng
2018-04-20
Functional connectivity is among the most important tools for studying the brain. The correlation coefficient between time series of different brain areas is the most popular method for quantifying functional connectivity. In practical use, the correlation coefficient assumes the data to be temporally independent. However, brain time series data can manifest significant temporal auto-correlation. We propose a widely applicable method for correcting temporal auto-correlation. We considered two types of time series models: (1) the auto-regressive-moving-average model, and (2) a nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are the most commonly used in neuroscience studies. We show that the respective asymptotic distributions share a unified expression. In numerical experiments, our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations, where existing methods measuring association (linear and nonlinear) fail. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
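The effect the abstract describes can be illustrated with a generic Bartlett-style variance correction; this is a stand-in, not the paper's model-based asymptotic distributions, and the function name is ours:

```python
import numpy as np

def corrected_corr(x, y, max_lag=None):
    """Correlation between two series with a variance correction for
    temporal auto-correlation (Bartlett-style approximation).
    Returns (r, z): the correlation and its ratio to the corrected
    standard error under the null of no cross-correlation."""
    n = len(x)
    max_lag = n // 4 if max_lag is None else max_lag
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    r = np.mean(x * y)
    acf = lambda u, k: np.mean(u[:n - k] * u[k:]) if k else 1.0
    # null variance of r inflated by the product of the two autocorrelations
    var = sum(acf(x, k) * acf(y, k) * (2.0 if k else 1.0)
              for k in range(max_lag)) / n
    return r, r / np.sqrt(var)

# two independent AR(1) series: a naive 1/n variance would overstate significance
rng = np.random.default_rng(2)
e = rng.normal(size=(2, 3000))
x, y = np.zeros(3000), np.zeros(3000)
for t in range(1, 3000):
    x[t] = 0.9 * x[t - 1] + e[0, t]
    y[t] = 0.9 * y[t - 1] + e[1, t]
r, z = corrected_corr(x, y)
```

For strongly autocorrelated but independent series, the corrected variance is several times the naive 1/n, which is exactly the type I error inflation the paper targets.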
Fast Fourier Transformation Algorithms: Experiments with Microcomputers.
1986-07-01
is, functions with a known discrete Fourier transform. Such functions are given in [1]. The functions TF1, TF2, and TF3 were used and are...the IBM PC, all with TF1 (Eq. 1). The compilers provided options to improve performance, as noted, for which a penalty in compiling time has to be...BASIC only. Series I: In this series the procedures were as follows: (i) Calculate the input values for TF1 of ar and the modulus |ar| (which is
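The snippet above is truncated in extraction, but its subject, FFT implementations on small machines, can be illustrated with a minimal radix-2 Cooley-Tukey FFT in pure Python (a sketch, not the report's BASIC/compiler implementations):

```python
import cmath

def fft(a):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2]), fft(a[1::2])   # split into even/odd index halves
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

x = [1.0, 2.0, 0.0, -1.0]
X = fft(x)
```

The recursion reduces the O(n²) direct DFT to O(n log n), which is what made spectral work feasible on the microcomputers the report benchmarks.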
Quasi-experimental study designs series-paper 1: introduction: two historical lineages.
Bärnighausen, Till; Røttingen, John-Arne; Rockers, Peter; Shemilt, Ian; Tugwell, Peter
2017-09-01
The objective of this study was to contrast the historical development of experiments and quasi-experiments and provide the motivation for a journal series on quasi-experimental designs in health research. A short historical narrative, with concrete examples, and arguments based on an understanding of the practice of health research and evidence synthesis. Health research has played a key role in developing today's gold standard for causal inference-the randomized controlled multiply blinded trial. Historically, allocation approaches developed from convenience and purposive allocation to alternate and, finally, to random allocation. This development was motivated both by concerns for manipulation in allocation as well as statistical and theoretical developments demonstrating the power of randomization in creating counterfactuals for causal inference. In contrast to the sequential development of experiments, quasi-experiments originated at very different points in time, from very different scientific perspectives, and with frequent and long interruptions in their methodological development. Health researchers have only recently started to recognize the value of quasi-experiments for generating novel insights on causal relationships. While quasi-experiments are unlikely to replace experiments in generating the efficacy and safety evidence required for clinical guidelines and regulatory approval of medical technologies, quasi-experiments can play an important role in establishing the effectiveness of health care practice, programs, and policies. The papers in this series describe and discuss a range of important issues in utilizing quasi-experimental designs for primary research and quasi-experimental results for evidence synthesis. Copyright © 2017 Elsevier Inc. All rights reserved.
Förster, Jens; Friedman, Ronald S; Liberman, Nira
2004-08-01
Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.
NASA Technical Reports Server (NTRS)
Carroll, Mark; Wooten, Margaret; DiMiceli, Charlene; Sohlberg, Robert; Kelly, Maureen
2016-01-01
The availability of a dense time series of satellite observations at moderate (30 m) spatial resolution is enabling unprecedented opportunities for understanding ecosystems around the world. A time series of data from Landsat was used to generate a series of three maps at a decadal time step to show how surface water has changed from 1991 to 2011 in the high northern latitudes of North America. Previous attempts to characterize the change in surface water in this region have been limited in either spatial or temporal resolution, or both. This series of maps was generated for the NASA Arctic and Boreal Vulnerability Experiment (ABoVE), which began in fall 2015. These maps show a nominal extent of surface water by using multiple observations to make a single map for each time step. This increases the confidence that any detected changes are related to climate or ecosystem changes, not simply caused by short-duration weather events such as flood or drought. The methods and a comparison to other contemporary maps of the region are presented here. Initial verification results indicate 96% producer accuracy and 54% user accuracy when compared to 2-m resolution WorldView-2 data. All water bodies that were omitted were one Landsat pixel or smaller, hence below the detection limits of the instrument.
Capturing Context-Related Change in Emotional Dynamics via Fixed Moderated Time Series Analysis.
Adolf, Janne K; Voelkle, Manuel C; Brose, Annette; Schmiedek, Florian
2017-01-01
Much of recent affect research relies on intensive longitudinal studies to assess daily emotional experiences. The resulting data are analyzed with dynamic models to capture regulatory processes involved in emotional functioning. Daily contexts, however, are commonly ignored. This may not only result in biased parameter estimates and wrong conclusions, but also ignores the opportunity to investigate contextual effects on emotional dynamics. With fixed moderated time series analysis, we present an approach that resolves this problem by estimating context-dependent change in dynamic parameters in single-subject time series models. The approach examines parameter changes of known shape and thus addresses the problem of observed intra-individual heterogeneity (e.g., changes in emotional dynamics due to observed changes in daily stress). In comparison to existing approaches to unobserved heterogeneity, model estimation is facilitated and different forms of change can readily be accommodated. We demonstrate the approach's viability given relatively short time series by means of a simulation study. In addition, we present an empirical application, targeting the joint dynamics of affect and stress and how these co-vary with daily events. We discuss potentials and limitations of the approach and close with an outlook on the broader implications for understanding emotional adaption and development.
Time-series analysis to study the impact of an intersection on dispersion along a street canyon.
Richmond-Bryant, Jennifer; Eisner, Alfred D; Hahn, Intaek; Fortune, Christopher R; Drake-Richman, Zora E; Brixey, Laurie A; Talih, M; Wiener, Russell W; Ellenson, William D
2009-12-01
This paper presents data analysis from the Brooklyn Traffic Real-Time Ambient Pollutant Penetration and Environmental Dispersion (B-TRAPPED) study to assess the transport of ultrafine particulate matter (PM) across urban intersections. Experiments were performed in a street canyon perpendicular to a highway in Brooklyn, NY, USA. Real-time ultrafine PM samplers were positioned on either side of an intersection at multiple locations along a street to collect time-series number concentration data. Meteorology equipment was positioned within the street canyon and at an upstream background site to measure wind speed and direction. Time-series analysis was performed on the PM data to compute a transport velocity along the direction of the street for the cases where background winds were parallel and perpendicular to the street. The data were analyzed for sampler pairs located (1) on opposite sides of the intersection and (2) on the same block. The time-series analysis demonstrated along-street transport, including across the intersection, when background winds were parallel to the street canyon, and minimal transport with no communication across the intersection when background winds were perpendicular to the street canyon. Low but significant values of the cross-correlation function (CCF) underscore the turbulent nature of plume transport along the street canyon. The low correlations suggest that flow switching around corners or traffic-induced turbulence at the intersection may have aided dilution of the PM plume from the highway. This observation supports similar findings in the literature. Furthermore, the time-series analysis methodology applied in this study is introduced as a technique for studying spatiotemporal variation in the urban microscale environment.
Causal judgments about empirical information in an interrupted time series design.
White, Peter A
2016-07-19
Empirical information available for causal judgment in everyday life tends to take the form of quasi-experimental designs, lacking control groups, more than the form of contingency information that is usually presented in experiments. Stimuli were presented in which values of an outcome variable for a single individual were recorded over six time periods, and an intervention was introduced between the fifth and sixth time periods. Participants judged whether and how much the intervention affected the outcome. With numerical stimulus information, judgments were higher for a pre-intervention profile in which all values were the same than for pre-intervention profiles with any other kind of trend. With graphical stimulus information, judgments were more sensitive to trends, tending to be higher when an increase after the intervention was preceded by a decreasing series than when it was preceded by an increasing series ending on the same value at the fifth time period. It is suggested that a feature-analytic model, in which the salience of different features of information varies between presentation formats, may provide the best prospect of explaining the results.
Pridemore, William Alex; Chamlin, Mitchell B; Cochran, John K
2007-06-01
The dissolution of the Soviet Union resulted in sudden, widespread, and fundamental changes to Russian society. The former social welfare system-with its broad guarantees of employment, healthcare, education, and other forms of social support-was dismantled in the shift toward democracy, rule of law, and a free-market economy. This unique natural experiment provides a rare opportunity to examine the potentially disintegrative effects of rapid social change on deviance, and thus to evaluate one of Durkheim's core tenets. We took advantage of this opportunity by performing interrupted time-series analyses of annual age-adjusted homicide, suicide, and alcohol-related mortality rates for the Russian Federation using data from 1956 to 2002, with 1992-2002 as the postintervention time-frame. The ARIMA models indicate that, controlling for the long-term processes that generated these three time series, the breakup of the Soviet Union was associated with an appreciable increase in each of the cause-of-death rates. We interpret these findings as being consistent with the Durkheimian hypothesis that rapid social change disrupts social order, thereby increasing the level of crime and deviance.
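The interrupted time-series logic used above can be sketched in its simplest form, segmented regression with a step indicator; the study itself fits ARIMA intervention models, which additionally model the series' autocorrelation, so this OLS version is a minimal stand-in and the function name is ours:

```python
import numpy as np

def level_shift(y, t0):
    """Segmented-regression estimate of an intervention's level shift in an
    interrupted time series: y_t = b0 + b1*t + b2*1(t >= t0) + e_t.
    Returns the estimated shift b2."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t, (t >= t0).astype(float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]

# noise-free example: trend of 0.1 per step plus a level shift of +5 at t = 25
y = 2.0 + 0.1 * np.arange(40) + 5.0 * (np.arange(40) >= 25)
shift = level_shift(y, 25)
```

Controlling for the pre-intervention trend is what distinguishes this from a naive before/after mean comparison; the ARIMA version plays the same role for "the long-term processes that generated these time series."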
Laser-driven planar Rayleigh-Taylor instability experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glendinning, S.G.; Weber, S.V.; Bell, P.
1992-08-24
We have performed a series of experiments on the Nova Laser Facility to examine the hydrodynamic behavior of directly driven planar foils with initial perturbations of varying wavelength. The foils were accelerated with a single, frequency-doubled, smoothed, and temporally shaped laser beam at 0.8×10¹⁴ W/cm². The experiments are in good agreement with numerical simulations using the computer codes LASNEX and ORCHID, which show growth rates reduced to about 70% of classical for this nonlinear regime.
ERIC Educational Resources Information Center
Taube, Carl A.; Bryant, E. Earl
This report of the findings of a survey of a sample of 1,073 resident institutions which provide nursing or personal care to the aged or chronically ill emphasizes employee work experience, special training, and wages. The median total experience for all nursing and professional employees in the type of job held at the time of the survey was 4.1…
Autoregressive modeling for the spectral analysis of oceanographic data
NASA Technical Reports Server (NTRS)
Gangopadhyay, Avijit; Cornillon, Peter; Jackson, Leland B.
1989-01-01
Over the last decade there has been a dramatic increase in the number and volume of data sets useful for oceanographic studies. Many of these data sets consist of long temporal or spatial series derived from satellites and large-scale oceanographic experiments. These data sets are, however, often 'gappy' in space, irregular in time, and always of finite length. The conventional Fourier transform (FT) approach to the spectral analysis is thus often inapplicable, or where applicable, it provides questionable results. Here, through comparative analysis with the FT for different oceanographic data sets, the possibilities offered by autoregressive (AR) modeling to perform spectral analysis of gappy, finite-length series, are discussed. The applications demonstrate that as the length of the time series becomes shorter, the resolving power of the AR approach as compared with that of the FT improves. For the longest data sets examined here, 98 points, the AR method performed only slightly better than the FT, but for the very short ones, 17 points, the AR method showed a dramatic improvement over the FT. The application of the AR method to a gappy time series, although a secondary concern of this manuscript, further underlines the value of this approach.
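The AR spectral estimator discussed above can be sketched with the Yule-Walker equations in numpy (a generic maximum-entropy estimator; the function name and order choice are ours, not the paper's specific fitting procedure):

```python
import numpy as np

def ar_spectrum(x, order, freqs):
    """AR (maximum-entropy) spectral estimate via the Yule-Walker equations.
    freqs are normalized frequencies in cycles/sample (0 <= f < 0.5)."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])      # Yule-Walker AR coefficients
    sigma2 = r[0] - a @ r[1:]          # innovation variance
    w = 2j * np.pi * np.asarray(freqs, dtype=float)
    denom = np.abs(1.0 - sum(a[k] * np.exp(-w * (k + 1))
                             for k in range(order))) ** 2
    return sigma2 / denom

# AR(1) example: the estimate should concentrate power at low frequency
rng = np.random.default_rng(3)
x = np.zeros(2000)
e = rng.normal(size=2000)
for t in range(1, 2000):
    x[t] = 0.8 * x[t - 1] + e[t]
spec = ar_spectrum(x, 1, [0.0, 0.45])
```

Because the spectrum is parameterized by a handful of AR coefficients rather than by one Fourier ordinate per frequency, the estimate stays smooth for the very short series (17 points) the abstract mentions.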
Reid, Brian J; Papanikolaou, Niki D; Wilcox, Ronah K
2005-02-01
The catabolic activity with respect to the systemic herbicide isoproturon was determined in soil samples by ¹⁴C-radiorespirometry. The first experiment assessed levels of intrinsic catabolic activity in soil samples that represented three dissimilar soil series under arable cultivation. Results showed average extents of isoproturon mineralisation (after a 240 h assay time) in the three soil series to be low. A second experiment assessed the impact of addition of isoproturon (0.05 μg kg⁻¹) into these soils on the levels of catabolic activity following 28 days of incubation. Increased catabolic activity was observed in all three soils. A third experiment assessed levels of intrinsic catabolic activity in soil samples representing a single soil series managed under either conventional agricultural practice (including the use of isoproturon) or organic farming practice (with no use of isoproturon). Results showed higher (and more consistent) levels of isoproturon mineralisation in the soil samples collected from conventional land use. The final experiment assessed the impact of isoproturon addition on the levels of inducible catabolic activity in these soils. The results showed no significant difference in the case of the conventional farm soil samples, while the induction of catabolic activity in the organic farm soil samples was significant.
Sigoillot, Frederic D; Huckins, Jeremy F; Li, Fuhai; Zhou, Xiaobo; Wong, Stephen T C; King, Randall W
2011-01-01
Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only 2 features, nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence timing of cell division.
Eliciting interval beliefs: An experimental study
Peeters, Ronald; Wolk, Leonard
2017-01-01
In this paper we study the interval scoring rule as a mechanism to elicit subjective beliefs under varying degrees of uncertainty. In our experiment, subjects forecast the termination time of a time series to be generated from a given but unknown stochastic process. Subjects gradually learn more about the underlying process over time and hence the true distribution over termination times. We conduct two treatments, one with a high and one with a low volatility process. We find that elicited intervals are better when subjects are facing a low volatility process. In this treatment, participants learn to position their intervals almost optimally over the course of the experiment. This is in contrast with the high volatility treatment, where subjects, over the course of the experiment, learn to optimize the location of their intervals but fail to provide the optimal length. PMID:28380020
The Impact of Nature Experience on Willingness to Support Conservation
Zaradic, Patricia A.; Pergams, Oliver R. W.; Kareiva, Peter
2009-01-01
We hypothesized that willingness to financially support conservation depends on one's experience with nature. In order to test this hypothesis, we used a novel time-lagged correlation analysis to look at time series data concerning nature participation, and evaluated its relationship with future conservation support (measured as contributions to conservation NGOs). Our results suggest that the type and timing of nature experience may determine future conservation investment. Time spent hiking or backpacking is correlated with increased conservation contributions 11–12 years later. On the other hand, contributions are negatively correlated with past time spent on activities such as public lands visitation or fishing. Our results suggest that each hiker or backpacker translates to $200–$300 annually in future NGO contributions. We project that the recent decline in popularity of hiking and backpacking will negatively impact conservation NGO contributions from approximately 2010–2011 through at least 2018. PMID:19809511
EFFECTS OF THERMAL TREATMENTS ON THE CHEMICAL REACTIVITY OF TRICHLOROETHYLENE
A series of experiments was completed to investigate abiotic degradation and reaction product formation of trichloroethylene (TCE) when heated. A quartz-tube apparatus was used to study short residence time and high temperature conditions that are thought to occur during thermal ...
Construction of regulatory networks using expression time-series data of a genotyped population.
Yeung, Ka Yee; Dombek, Kenneth M; Lo, Kenneth; Mittler, John E; Zhu, Jun; Schadt, Eric E; Bumgarner, Roger E; Raftery, Adrian E
2011-11-29
The inference of regulatory and biochemical networks from large-scale genomics data is a basic problem in molecular biology. The goal is to generate testable hypotheses of gene-to-gene influences and subsequently to design bench experiments to confirm these network predictions. Coexpression of genes in large-scale gene-expression data implies coregulation and potential gene-gene interactions, but provides little information about the direction of influences. Here, we use both time-series data and genetics data to infer directionality of edges in regulatory networks: time-series data contain information about the chronological order of regulatory events and genetics data allow us to map DNA variations to variations at the RNA level. We generate microarray data measuring time-dependent gene-expression levels in 95 genotyped yeast segregants subjected to a drug perturbation. We develop a Bayesian model averaging regression algorithm that incorporates external information from diverse data types to infer regulatory networks from the time-series and genetics data. Our algorithm is capable of generating feedback loops. We show that our inferred network recovers existing and novel regulatory relationships. Following network construction, we generate independent microarray data on selected deletion mutants to prospectively test network predictions. We demonstrate the potential of our network to discover de novo transcription-factor binding sites. Applying our construction method to previously published data demonstrates that our method is competitive with leading network construction algorithms in the literature.
ERIC Educational Resources Information Center
Fernandez, Chris
2015-01-01
Legally mandated student loan entrance counseling attempts to prepare first-time borrowers of federal student loans for this challenge; yet, researchers hypothesized that the online modules most borrowers use for this purpose have significant shortcomings. This report (the third in a series of five from TG Research) describes a study in which…
Steven P. Norman; William W. Hargrove; William M. Christie
2017-01-01
Mountainous regions experience complex phenological behavior along climatic, vegetational and topographic gradients. In this paper, we use a MODIS time series of the Normalized Difference Vegetation Index (NDVI) to understand the causes of variations in spring and autumn timing from 2000 to 2015, for a landscape renowned for its biological diversity. By filtering for...
ERIC Educational Resources Information Center
Compton-Lilly, Catherine
2012-01-01
While teachers cannot travel back in time to visit their students at earlier ages, they can draw on the rich sets of experiences and knowledge that students bring to classrooms. In her latest book, Catherine Compton-Lilly examines the literacy practices and school trajectories of eight middle school students and their families. Through a unique…
Modrák, Martin; Vohradský, Jiří
2018-04-13
Identifying regulons of sigma factors is a vital subtask of gene network inference. Integrating multiple sources of data is essential for correct identification of regulons and complete gene regulatory networks. Time series of expression data measured with microarrays or RNA-seq combined with static binding experiments (e.g., ChIP-seq) or literature mining may be used for inference of sigma factor regulatory networks. We introduce Genexpi: a tool to identify sigma factors by combining candidates obtained from ChIP experiments or literature mining with time-course gene expression data. While Genexpi can be used to infer other types of regulatory interactions, it was designed and validated on real biological data from bacterial regulons. In this paper, we put primary focus on CyGenexpi: a plugin integrating Genexpi with the Cytoscape software for ease of use. As a part of this effort, a plugin for handling time series data in Cytoscape called CyDataseries has been developed and made available. Genexpi is also available as a standalone command line tool and an R package. Genexpi is a useful part of gene network inference toolbox. It provides meaningful information about the composition of regulons and delivers biologically interpretable results.
Flight test experience using advanced airborne equipment in a time-based metered traffic environment
NASA Technical Reports Server (NTRS)
Morello, S. A.
1980-01-01
A series of test flights have demonstrated that time-based metering guidance and control was acceptable to pilots and air traffic controllers. The descent algorithm of the technique, with good representation of aircraft performance and wind modeling, yielded arrival time accuracy within 12 sec. It is expected that this will represent significant fuel savings (1) through a reduction of the time error dispersions at the metering fix for the entire fleet, and (2) for individual aircraft as well, through the presentation of guidance for a fuel-efficient descent. Air traffic controller workloads were also reduced, in keeping with the reduction of required communications resulting from the transfer of navigation responsibilities to pilots. A second series of test flights demonstrated that an existing flight management system could be modified to operate in the new mode.
Real-time flutter boundary prediction based on time series models
NASA Astrophysics Data System (ADS)
Gu, Wenjing; Zhou, Li
2018-03-01
For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models, each with a corresponding stability criterion, are adopted in this paper. The first method treats a long nonstationary response signal as many contiguous intervals, each of which is considered stationary; a traditional AR model is then established to represent each interval of the signal sequence. The second employs a time-varying AR model to characterize signals measured in flutter tests with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods are effective in predicting the flutter boundary at lower speed levels. A comparison between the two methods is also given in this paper.
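The per-interval AR stability check can be sketched as follows, assuming numpy; instead of evaluating Jury's criterion algebraically, this sketch tests the same characteristic polynomial by computing its roots directly, and the function name is ours:

```python
import numpy as np

def stability_margin(signal, order=4):
    """Fit an AR model to one interval of a response signal and return the
    largest characteristic-root magnitude.  Values approaching 1 indicate
    vanishing damping (Jury's criterion tests the same polynomial
    without computing roots)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    n = len(x)
    # least-squares AR fit: predict x[t] from x[t-1] ... x[t-order]
    X = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    roots = np.roots(np.concatenate([[1.0], -a]))
    return np.max(np.abs(roots))

# stable AR(2) oscillation with true root magnitude sqrt(0.8) ~ 0.894
rng = np.random.default_rng(4)
x = np.zeros(2000)
e = rng.normal(size=2000)
for t in range(2, 2000):
    x[t] = 1.6 * x[t - 1] - 0.8 * x[t - 2] + e[t]
margin = stability_margin(x, order=2)
```

Tracking this margin interval by interval as airspeed increases, and extrapolating it toward 1, is the essence of the real-time boundary prediction described above.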
Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data
Hallac, David; Vare, Sagar; Boyd, Stephen; Leskovec, Jure
2018-01-01
Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (i.e., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios. PMID:29770257
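A toy stand-in for the subsequence-clustering idea can be written in a few lines; TICC itself replaces the Euclidean assignment below with per-cluster Toeplitz inverse-covariance (MRF) likelihoods fit by EM and ADMM, so this sketch keeps only the window-stacking step, and the function name is ours:

```python
import numpy as np

def window_cluster(x, w, k, iters=20):
    """Toy subsequence clustering: stack length-w windows of a 1-D series
    and run plain k-means over them."""
    W = np.array([x[i:i + w] for i in range(len(x) - w + 1)])
    # deterministic init: evenly spaced windows as starting centers
    centers = W[np.linspace(0, len(W) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = ((W[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                 # assign each window to a center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = W[labels == j].mean(0)
    return labels

# two obvious regimes: windows within each regime should share a label
x = np.concatenate([np.zeros(100), 5.0 * np.ones(100)]) + \
    0.1 * np.sin(np.arange(200))
labels = window_cluster(x, w=10, k=2)
```

The label sequence is exactly the "timeline of a select few states" the abstract describes; TICC's MRF-per-cluster formulation adds interpretability and a temporal-consistency penalty this toy lacks.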
Evaluation of the significance of abrupt changes in precipitation and runoff process in China
NASA Astrophysics Data System (ADS)
Xie, Ping; Wu, Ziyi; Sang, Yan-Fang; Gu, Haiting; Zhao, Yuxi; Singh, Vijay P.
2018-05-01
Abrupt changes are an important manifestation of hydrological variability. How to accurately detect the abrupt changes in hydrological time series and evaluate their significance is an important issue, but methods for dealing with them effectively are lacking. In this study, we propose an approach to evaluate the significance of abrupt changes in time series at five levels: no, weak, moderate, strong, and dramatic. The approach was based on an index of correlation coefficient calculated for the original time series and its abrupt change component. A bigger value of correlation coefficient reflects a higher significance level of abrupt change. Results of Monte-Carlo experiments verified the reliability of the proposed approach, and also indicated the great influence of statistical characteristics of time series on the significance level of abrupt change. The approach was derived from the relationship between correlation coefficient index and abrupt change, and can estimate and grade the significance levels of abrupt changes in hydrological time series. Application of the proposed approach to ten major watersheds in China showed that abrupt changes mainly occurred in five watersheds in northern China, which have arid or semi-arid climate and severe shortages of water resources. Runoff processes in northern China were more sensitive to precipitation change than those in southern China. Although annual precipitation and surface water resources amount (SWRA) exhibited a harmonious relationship in most watersheds, abrupt changes in the latter were more significant. Compared with abrupt changes in annual precipitation, human activities contributed much more to the abrupt changes in the corresponding SWRA, except for the Northwest Inland River watershed.
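The correlation-coefficient index described above can be sketched for a single step change, assuming numpy; the split search and function name are ours, and the paper additionally grades the resulting coefficient into five significance levels (no, weak, moderate, strong, dramatic):

```python
import numpy as np

def step_change_index(x):
    """Correlation-coefficient index for an abrupt (step) change: find the
    split whose two-segment-mean step component best correlates with the
    series, and return (r, t): the index and the split point."""
    n = len(x)
    best_r, best_t = 0.0, None
    for t in range(2, n - 1):
        # abrupt change component: segment means before and after t
        step = np.where(np.arange(n) < t, x[:t].mean(), x[t:].mean())
        r = np.corrcoef(x, step)[0, 1]
        if abs(r) > abs(best_r):
            best_r, best_t = r, t
    return best_r, best_t

# a clear step of +5 at t = 50 under moderate noise
rng = np.random.default_rng(5)
x = np.concatenate([np.zeros(50), 5.0 * np.ones(50)]) + 0.5 * rng.normal(size=100)
r, t0 = step_change_index(x)
```

A larger coefficient means the step component explains more of the series' variance, which is the sense in which the index reflects a "higher significance level of abrupt change."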
Sharma, Ram C; Hara, Keitarou; Hirayama, Hidetake
2017-01-01
This paper presents the performance and evaluation of a number of machine learning classifiers for discriminating between vegetation physiognomic classes using satellite-based time series of surface reflectance data. The research dealt with the discrimination of six vegetation physiognomic classes: Evergreen Coniferous Forest, Evergreen Broadleaf Forest, Deciduous Coniferous Forest, Deciduous Broadleaf Forest, Shrubs, and Herbs. Rich-feature data were prepared from time series of the satellite data for the discrimination and cross-validation of the vegetation physiognomic types using a machine learning approach. A set of machine learning experiments, comprising a number of supervised classifiers with different model parameters, was conducted to assess how the discrimination of vegetation physiognomic classes varies with classifiers, input features, and ground truth data size. The performance of each experiment was evaluated using the 10-fold cross-validation method. The experiment using the Random Forests classifier provided the highest overall accuracy (0.81) and kappa coefficient (0.78). However, accuracy metrics did not vary much between experiments. Accuracy metrics were found to be very sensitive to input features and the size of the ground truth data. The results obtained in the research are expected to be useful for improving vegetation physiognomic mapping in Japan.
Majewsky, Vera; Scherr, Claudia; Arlt, Sebastian Patrick; Kiener, Jonas; Frrokaj, Kristina; Schindler, Tobias; Klocke, Peter; Baumgartner, Stephan
2014-04-01
Reproducibility of basic research investigations in homeopathy is challenging. This study investigated whether formerly observed effects of homeopathically potentised gibberellic acid (GA3) on the growth of duckweed (Lemna gibba L.) were reproducible. Duckweed was grown in GA3 potencies (14x-30x) and in once-succussed and unsuccussed water controls. The outcome parameter, area-related growth rate, was determined by a computerised image analysis system. Three series, each including five independent blinded and randomised potency experiments (PE), were carried out. System stability was controlled by three series of five systematic negative control (SNC) experiments. Gibbosity (a specific growth state of L. gibba) was investigated as a possibly essential factor for the reactivity of L. gibba towards potentised GA3 in one series of potency and SNC experiments, respectively. Only in the third series, with gibbous L. gibba, did we observe a significant effect (p = 0.009, F-test) of the homeopathic treatment. However, the growth rate increased, in contrast to the former study, and the most biologically active potency levels differed. Variability in PE was lower than in SNC experiments. The stability of the experimental system was verified by the SNC experiments. Gibbosity seems to be a necessary condition for the reactivity of L. gibba to potentised GA3; further, still unknown, conditions seem to govern effect direction and the pattern of active and inactive potency levels. When designing new reproducibility studies, the physiological state of the test organism must be considered. Variability might be an interesting parameter for investigating effects of homeopathic remedies in basic research. Copyright © 2014 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2000-01-01
The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis (ICA), a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components (a stronger constraint that uses higher-order statistics) instead of the classical decorrelation (a weaker constraint that uses only second-order statistics). Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
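As a hedged illustration of the contrast the abstract draws, the sketch below implements a bare-bones symmetric FastICA (tanh nonlinearity) and recovers two independent non-Gaussian sources from a linear mixture, something pure decorrelation cannot guarantee; the implementation details are this sketch's assumptions, not the authors' algorithm:

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Bare-bones symmetric FastICA; X has one mixed signal per row."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
    Xw = (E / np.sqrt(d)).T @ Xc                 # whiten to unit covariance
    W = np.random.default_rng(seed).standard_normal((X.shape[0],) * 2)
    for _ in range(n_iter):
        WX = np.tanh(W @ Xw)                     # nonlinearity g = tanh
        # fixed-point update: E{x g(w'x)} - E{g'(w'x)} w, row by row
        W = (WX @ Xw.T) / Xw.shape[1] - (1 - WX ** 2).mean(axis=1, keepdims=True) * W
        U, _, Vt = np.linalg.svd(W)              # symmetric decorrelation
        W = U @ Vt
    return W @ Xw                                # estimated sources
```

Recovered sources come back in arbitrary order and sign, so validation compares each true source against the best-matching output row.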
NASA Astrophysics Data System (ADS)
Kawauchi, Satoko; Matsuyama, Hiroko; Obara, Minoru; Ishihara, Miya; Arai, Tsunenori; Kikuchi, Makoto; Katoh, Masayoshi
1997-05-01
We developed a novel monitoring methodology for corneal surface hydration during photorefractive keratectomy (PRK) in order to address the undercorrection issue at the central part of the cornea (central island). We employed pulsed photothermal radiometry to monitor corneal surface hydration. We performed two kinds of experiments in vitro: gelatin gel experiments and porcine cornea experiments. In the gelatin gel experiments, the e-folding decay time of the transient infrared radiation waveform from the ArF-laser-irradiated surface was prolonged from 420 microseconds to 30 ms as the gelatin density decreased from 15% to 0.15%. These measured e-folding decay times were in good agreement with theoretical calculations. Using porcine cornea, we observed the e-folding decay time increase during the series of ArF excimer laser irradiations. Our method may make it possible to track changes in ablation efficiency and thereby improve the controllability of refractive correction in PRK.
Passenger train emergency systems : single-level commuter rail car egress experiments
DOT National Transportation Integrated Search
2015-04-01
Under FRA sponsorship, a series of three experimental egress trials was conducted in 2005 and 2006 to obtain human factors data relating to the amount of time necessary for individuals to exit from a passenger rail car. This final report describes th...
Ice Wedge Polygon Bromide Tracer Experiment in Subsurface Flow, Barrow, Alaska, 2015-2016
Nathan Wales
2018-02-15
Time series of bromide tracer concentrations at several points within a low-centered polygon and a high-centered polygon. Concentration values were obtained from the analysis of water samples via ion chromatography with an accuracy of 0.01 mg/l.
Therapeutic Assessment of Complex Trauma: A Single-Case Time-Series Study.
Tarocchi, Anna; Aschieri, Filippo; Fantini, Francesca; Smith, Justin D
2013-06-01
The cumulative effect of repeated traumatic experiences in early childhood incrementally increases the risk of adjustment problems later in life. Surviving traumatic environments can lead to the development of an interrelated constellation of emotional and interpersonal symptoms termed complex posttraumatic stress disorder (CPTSD). Effective treatment of trauma begins with a multimethod psychological assessment and requires the use of several evidence-based therapeutic processes, including establishing a safe therapeutic environment, reprocessing the trauma, constructing a new narrative, and managing emotional dysregulation. Therapeutic Assessment (TA) is a semistructured, brief intervention that uses psychological testing to promote positive change. The case study of Kelly, a middle-aged woman with a history of repeated interpersonal trauma, illustrates delivery of the TA model for CPTSD. Results of this single-case time-series experiment indicate statistically significant symptom improvement as a result of participating in TA. We discuss the implications of these findings for assessing and treating trauma-related concerns, such as CPTSD.
Automatic Detection of Driver Fatigue Using Driving Operation Information for Transportation Safety
Li, Zuojin; Chen, Liukui; Peng, Jun; Wu, Ying
2017-01-01
Fatigued driving is a major cause of road accidents. For this reason, the method in this paper is based on the steering wheel angles (SWA) and yaw angles (YA) information under real driving conditions to detect drivers’ fatigue levels. It analyzes the operation features of SWA and YA under different fatigue statuses, then calculates the approximate entropy (ApEn) features of a short sliding window on time series. Using the nonlinear feature construction theory of dynamic time series, with the fatigue features as input, designs a “2-6-6-3” multi-level back propagation (BP) Neural Networks classifier to realize the fatigue detection. An approximately 15-h experiment is carried out on a real road, and the data retrieved are segmented and labeled with three fatigue levels after expert evaluation, namely “awake”, “drowsy” and “very drowsy”. The average accuracy of 88.02% in fatigue identification was achieved in the experiment, endorsing the value of the proposed method for engineering applications. PMID:28587072
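The abstract above does not give the ApEn formula; a standard approximate-entropy implementation in Pincus's formulation, with the common parameter choices m = 2 and r = 0.2·SD (both assumptions of this sketch, not necessarily the paper's settings), looks like:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r): lower values indicate a more regular time series."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                        # common tolerance choice
    def phi(k):
        # embed the series in k-dimensional delay vectors
        emb = np.array([x[i:i + k] for i in range(len(x) - k + 1)])
        # Chebyshev distance between all pairs of embedded vectors
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        return np.log((d <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)
```

Applied over a short sliding window, as the abstract describes, this yields a per-window regularity feature that rises as steering behavior becomes more erratic.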
Thermodynamic Mixing Behavior Of F-OH Apatite Crystalline Solutions
NASA Astrophysics Data System (ADS)
Hovis, G. L.
2011-12-01
It is important to establish a thermodynamic data base for accessory minerals and mineral series that are useful in determining fluid composition during petrologic processes. As a starting point for apatite-system thermodynamics, Hovis and Harlov (2010, American Mineralogist 95, 946-952) reported enthalpies of mixing for a F-Cl apatite series. Harlov synthesized all such crystalline solutions at the GFZ-Potsdam using a slow-cooled molten-flux method. In order to expand thermodynamic characterization of the F-Cl-OH apatite system, a new study has been initiated along the F-OH apatite binary. Synthesis of this new series made use of National Institute of Standards and Technology (NIST) 2910a hydroxylapatite, a standard reference material made at NIST "by solution reaction of calcium hydroxide with phosphoric acid." Synthesis efforts at Lafayette College have been successful in producing fluorapatite through ion exchange between hydroxylapatite 2910a and fluorite. In these experiments, a thin layer of hydroxylapatite powder was placed on a polished CaF2 disc (obtained from a supplier of high-purity crystals for spectroscopy), pressed firmly against the disc, then annealed at 750 °C (1 bar) for three days. Longer annealing times did not produce further change in unit-cell dimensions of the resulting fluorapatite, but it is uncertain at this time whether this procedure produces a pure-F end member (chemical analyses to be performed in the near future). It is clear from the unit-cell dimensions, however, that the newly synthesized apatite contains a high percentage of fluorine, probably greater than 90 mol % F. Intermediate compositions for a F-OH apatite series were made by combining 2910a hydroxylapatite powder with the newly synthesized fluorapatite in various proportions, then conducting chemical homogenization experiments at 750 °C on each mixture. 
X-ray powder diffraction data indicated that these experiments were successful in producing chemically homogeneous intermediate series members, as doubled peaks merged into single diffraction maxima, the latter changing position systematically with bulk composition. All of the resulting F-OH apatite series members have hexagonal symmetry. The "a" unit-cell dimension behaves linearly with composition, and "c" is nearly constant across the series. Unit-cell volume also is linear with F:OH ratio, thus behaving in a thermodynamically ideal manner. Solution calorimetric experiments have been conducted in 20.0 wt % HCl at 50 °C on all series members. Enthalpies of F-OH mixing are nonexistent at F-rich compositions but have small negative values toward the hydroxylapatite end member. There is no enthalpy barrier, therefore, to complete F-OH mixing across the series, indicated as well by the ease of chemical homogenization for intermediate F:OH series members. In addition to the synthetic specimens described above, natural samples of hydroxylapatite, fluorapatite, and chlorapatite have been obtained for study from the U.S. National Museum of Natural History, as well as the American Museum of Natural History (our sincere appreciation to both museums for providing samples). Solution calorimetric results for these samples will be compared with data for the synthetic OH, F, and Cl apatite analogs noted above.
Teuchmann, K; Totterdell, P; Parker, S K
1999-01-01
Experience sampling methodology was used to examine how work demands translate into acute changes in affective response and thence into chronic response. Seven accountants reported their reactions 3 times a day for 4 weeks on pocket computers. Aggregated analysis showed that mood and emotional exhaustion fluctuated in parallel with time pressure over time. Disaggregated time-series analysis confirmed the direct impact of high-demand periods on the perception of control, time pressure, and mood and the indirect impact on emotional exhaustion. A curvilinear relationship between time pressure and emotional exhaustion was shown. The relationships between work demands and emotional exhaustion changed between high-demand periods and normal working periods. The results suggest that enhancing perceived control may alleviate the negative effects of time pressure.
NASA Astrophysics Data System (ADS)
Corzo, H. H.; Velasco, A. M.; Lavín, C.; Ortiz, J. V.
2018-02-01
Vertical excitation energies belonging to several Rydberg series of MgH have been inferred from 3+ electron-propagator calculations of the electron affinities of MgH+ and are in close agreement with experiment. Many electronically excited states with n > 3 are reported for the first time and new insight is given on the assignment of several Rydberg series. Valence and Rydberg excited states of MgH are distinguished respectively by high and low pole strengths corresponding to Dyson orbitals of electron attachment to the cation. By applying the Molecular Quantum Defect Orbital method, oscillator strengths for electronic transitions involving Rydberg states also have been determined.
Using satellite laser ranging to measure ice mass change in Greenland and Antarctica
NASA Astrophysics Data System (ADS)
Bonin, Jennifer A.; Chambers, Don P.; Cheng, Minkang
2018-01-01
A least squares inversion of satellite laser ranging (SLR) data over Greenland and Antarctica could extend gravimetry-based estimates of mass loss back to the early 1990s and fill any future gap between the current Gravity Recovery and Climate Experiment (GRACE) and the future GRACE Follow-On mission. The results of a simulation suggest that, while separating the mass change between Greenland and Antarctica is not possible at the limited spatial resolution of the SLR data, estimating the total combined mass change of the two areas is feasible. When the method is applied to real SLR and GRACE gravity series, we find significantly different estimates of inverted mass loss. There are large, unpredictable, interannual differences between the two inverted data types, making us conclude that the current 5×5 spherical harmonic SLR series cannot be used to stand in for GRACE. However, a comparison with the longer IMBIE time series suggests that on a 20-year time frame, the inverted SLR series' interannual excursions may average out, and the long-term mass loss estimate may be reasonable.
Opportunities for Policy Leadership on Afterschool Care. Policy Briefing Series. Issue 5
ERIC Educational Resources Information Center
Kang, Andrew; Weber, Julie
2010-01-01
For most full-time employed parents, the gap between the end of the school day and the time they arrive home from work adds up to about 20 to 25 hours per week. Thus, many parents look to afterschool programs to satisfy their desire for safe, enriching experiences for their children while they are working. "Afterschool" is the general term used to…
Results from field tests of the one-dimensional Time-Encoded Imaging System.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marleau, Peter; Brennan, James S.; Brubaker, Erik
2014-09-01
A series of field experiments was undertaken to evaluate the performance of the one-dimensional time-encoded imaging system. Significant detection of a 252Cf fission radiation source was demonstrated at a stand-off distance of 100 meters. Extrapolations to different quantities of plutonium equivalent at different distances are made. Hardware modifications to the system for follow-on work are suggested.
Empirical mode decomposition and long-range correlation analysis of sunspot time series
NASA Astrophysics Data System (ADS)
Zhou, Yu; Leung, Yee
2010-12-01
Sunspots, which are the best known and most variable features of the solar surface, affect our planet in many ways. The number of sunspots during a period of time is highly variable and arouses strong research interest. When multifractal detrended fluctuation analysis (MF-DFA) is employed to study the fractal properties and long-range correlation of the sunspot series, spurious crossover points can appear because of the periodic and quasi-periodic trends in the series. Many cycles of solar activity are reflected in the sunspot time series, the 11-year cycle being perhaps the most famous. These cycles pose problems for the investigation of the scaling behavior of sunspot time series, and different ways of handling the 11-year cycle can yield entirely different results. Using MF-DFA, Movahed and co-workers employed Fourier truncation to deal with the 11-year cycle and found that the series is long-range anti-correlated with a Hurst exponent, H, of about 0.12. However, Hu and co-workers proposed an adaptive detrending method for the MF-DFA and discovered long-range correlation characterized by H≈0.74. To get to the bottom of the problem, in the present paper empirical mode decomposition (EMD), a data-driven adaptive method, is first applied to extract the components with different dominant frequencies. MF-DFA is then employed to study the long-range correlation of the sunspot time series under the influence of these components. On removing the effects of these periods, the natural long-range correlation of the sunspot time series can be revealed. With the removal of the 11-year cycle, a crossover point located at around 60 months is discovered to be a reasonable point separating two different time scale ranges, H≈0.72 and H≈1.49. On removing all cycles longer than 11 years, we have H≈0.69 and H≈0.28.
The three cycle-removing methods—Fourier truncation, adaptive detrending and the proposed EMD-based method—are further compared, and possible reasons for the different results are given. Two numerical experiments are designed for quantitatively evaluating the performances of these three methods in removing periodic trends with inexact/exact cycles and in detecting the possible crossover points.
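The competing Hurst estimates above come from detrended fluctuation analysis; a minimal monofractal DFA-1 sketch is given below (the function name, scale choices, and test signals are assumptions of this illustration, not the paper's MF-DFA):

```python
import numpy as np

def dfa_hurst(x, scales=(4, 8, 16, 32, 64)):
    """DFA-1 scaling exponent: slope of log fluctuation vs. log window
    size after linear detrending of the integrated profile."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                  # integrated profile
    F = []
    for s in scales:
        segs = y[: (len(y) // s) * s].reshape(-1, s)
        t = np.arange(s)
        # residuals after fitting a line in each window of length s
        res = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.square(res))))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

For white noise the exponent is near 0.5 (no memory); integrating the noise into a random walk pushes it well above 1, which is the kind of contrast the H values in the abstract encode.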
Control of molt in birds: association with prolactin and gonadal regression in starlings.
Dawson, Alistair
2006-07-01
Despite the importance of molt to birds, very little is known about its environmental or physiological control. In starlings Sturnus vulgaris, and other species, under both natural conditions and experimental regimes, gonadal regression coincides with peak prolactin secretion. The prebasic molt starts at the same time. The aim of this series of experiments was to keep starlings on photo-schedules that would challenge the normally close relationship between gonadal regression and molt, to determine how closely the start of molt is associated with gonadal regression and/or associated changes in prolactin concentrations. In one series of experiments, photosensitive starlings were moved from a short photoperiod, 8 h light per day (8L), to 13 or 18L, and from 13 to 18L or 13 to 8L during testicular maturation. Later, photorefractory birds under 13L that had finished molting were moved to 18L. In another series of experiments, photorefractory starlings were moved from 18 to 8L for 7 weeks, 4 weeks, 2 weeks, 1 week, 3 days, 1 day, or 0 days, before being returned to 18L. There was no consistent relationship between photoperiod, or the increase in photoperiod, and the timing of the start of molt. Nor was there a consistent relationship between gonadal regression and the start of molt: molt could be triggered in the absence of a gonadal cycle. However, there was always an association between the start of molt and prolactin. In all cases where molt was induced, there had been an earlier increase in prolactin. However, the timing of molt was related to the time of peak prolactin, not the magnitude of that peak. This relationship between peak prolactin and the start of molt could explain the normally close relationship between the end of breeding activity and the start of molt.
DOT National Transportation Integrated Search
1996-12-01
Although the speed of some guided ground transportation systems continues to increase, the reaction time and the sensory : and information processing capacities of railroad personnel remain constant. This second report in a series examining : critica...
Power Grid Data Analysis with R and Hadoop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Gibson, Tara D.; Kleese van Dam, Kerstin
This book chapter presents an approach to analysis of large-scale time-series sensor information based on our experience with power grid data. We use the R-Hadoop Integrated Programming Environment (RHIPE) to analyze a 2TB data set and present code and results for this analysis.
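RHIPE itself runs R over Hadoop; as a language-neutral sketch of the same divide-and-recombine pattern (a map step computing partial summaries per data chunk, a reduce step combining them), with all names and the record layout being assumptions of this illustration:

```python
from collections import defaultdict

def map_phase(records):
    """records: (sensor_id, timestamp, value) tuples from one data chunk.
    Emit per-sensor partial sums: count, sum, sum of squares."""
    out = defaultdict(lambda: [0, 0.0, 0.0])
    for sensor, _, v in records:
        acc = out[sensor]
        acc[0] += 1
        acc[1] += v
        acc[2] += v * v
    return out

def reduce_phase(partials):
    """Combine partial sums across chunks into per-sensor (mean, variance)."""
    total = defaultdict(lambda: [0, 0.0, 0.0])
    for part in partials:
        for k, (n, s, ss) in part.items():
            t = total[k]
            t[0] += n
            t[1] += s
            t[2] += ss
    return {k: (s / n, ss / n - (s / n) ** 2)
            for k, (n, s, ss) in total.items()}
```

Because the per-chunk summaries are additive, the reduce step gives exactly the same answer as a single pass over the full 2TB series, which is what makes the pattern scale on Hadoop.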
NASA Astrophysics Data System (ADS)
Korotaev, S. M.; Serdyuk, V. O.; Kiktenko, E. O.; Budnev, N. M.; Gorohov, J. V.
Although the general theory of macroscopic quantum entanglement is still in its infancy, consideration of the matter in the framework of action-at-a-distance electrodynamics predicts, for random dissipative processes, the observability of advanced nonlocal correlations (time-reversal causality). These correlations were indeed revealed in our previous experiments with some large-scale heliogeophysical processes as the sources and lab detectors as the probes. Recently a new experiment has been performed on the basis of the Baikal Deep Water Neutrino Observatory. The thick water layer is an excellent shield against any local impacts on the detectors. The first annual series, 2012/2013, demonstrated that detector signals respond to the heliogeophysical (external) processes and that the causal connection of the signals is directed downwards, from the Earth's surface to the Baikal floor; this nonlocal connection proved to be in reverse time. In addition, an advanced nonlocal correlation of the detector signal with a regional source process, the random component of hydrological activity in the upper layer, was revealed, and the possibility of its forecast from nonlocal correlations was demonstrated. The strongest macroscopic nonlocal correlations, however, are observed at extremely low frequencies, that is, at periods of several months; therefore the above results should be verified in a longer experiment. We verify them with data from the second annual series, 2013/2014, of the Baikal experiment. All the results have been confirmed, although some quantitative parameters of the correlations and time-reversal causal links turned out to be different due to nonstationarity of the source processes. A new result is the appearance of an advanced response of the nonlocal-correlation detector to an earthquake. This opens up the prospect of earthquake forecasting on a new physical principle, although further confirmation in subsequent events is certainly needed.
Continuation of the Baikal experiment with an expanded program is a pressing task.
Testing and validating environmental models
Kirchner, J.W.; Hooper, R.P.; Kendall, C.; Neal, C.; Leavesley, G.
1996-01-01
Generally accepted standards for testing and validating ecosystem models would benefit both modellers and model users. Universally applicable test procedures are difficult to prescribe, given the diversity of modelling approaches and the many uses for models. However, the generally accepted scientific principles of documentation and disclosure provide a useful framework for devising general standards for model evaluation. Adequately documenting model tests requires explicit performance criteria, and explicit benchmarks against which model performance is compared. A model's validity, reliability, and accuracy can be most meaningfully judged by explicit comparison against the available alternatives. In contrast, current practice is often characterized by vague, subjective claims that model predictions show 'acceptable' agreement with data; such claims provide little basis for choosing among alternative models. Strict model tests (those that invalid models are unlikely to pass) are the only ones capable of convincing rational skeptics that a model is probably valid. However, 'false positive' rates as low as 10% can substantially erode the power of validation tests, making them insufficiently strict to convince rational skeptics. Validation tests are often undermined by excessive parameter calibration and overuse of ad hoc model features. Tests are often also divorced from the conditions under which a model will be used, particularly when it is designed to forecast beyond the range of historical experience. In such situations, data from laboratory and field manipulation experiments can provide particularly effective tests, because one can create experimental conditions quite different from historical data, and because experimental data can provide a more precisely defined 'target' for the model to hit. 
We present a simple demonstration showing that the two most common methods for comparing model predictions to environmental time series (plotting model time series against data time series, and plotting predicted versus observed values) have little diagnostic power. We propose that it may be more useful to statistically extract the relationships of primary interest from the time series, and test the model directly against them.
NASA Technical Reports Server (NTRS)
Remsberg, E. E.
2008-01-01
Results are presented on responses in 14-yr time series of stratospheric ozone and temperature from the Halogen Occultation Experiment (HALOE) of the Upper Atmosphere Research Satellite (UARS) to a solar cycle (SC-like) variation. The ozone time series are for ten, 20-degree wide, latitude bins from 45S to 45N and for thirteen "half-Umkehr" layers of about 2.5 km thickness and extending from 63 hPa to 0.7 hPa. The temperature time series analyses were restricted to pressure levels in the range of 2 hPa to 0.7 hPa. Multiple linear regression (MLR) techniques were applied to each of the 130 time series of zonally-averaged, sunrise plus sunset ozone points over that latitude/pressure domain. A simple, 11-yr periodic term and a linear trend term were added to the final MLR models after their seasonal and interannual terms had been determined. Where the amplitudes of the 11-yr terms were significant, they were in-phase with those of the more standard proxies for the solar uv-flux. The max minus min response for ozone is of order 2 to 3% from about 2 to 5 hPa and for the latitudes of 45S to 45N. There is also a significant max minus min response of order 1 K for temperature between 15S and 15N and from 2 to 0.7 hPa. The associated linear trends for ozone are near zero in the upper stratosphere. Negative ozone trends of 4 to 6%/decade were found at 10 to 20 hPa across the low to middle latitudes of both hemispheres. It is concluded that the analyzed responses from the HALOE data are of good quality and can be used to evaluate the responses of climate/chemistry models to a solar cycle forcing.
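The MLR model itself is not spelled out in the abstract above; a minimal version with an intercept, a linear trend term, and one 11-yr (132-month) sinusoidal pair can be fit by ordinary least squares (the function name and test values are this sketch's assumptions, not the HALOE analysis):

```python
import numpy as np

def fit_trend_and_cycle(t, y, period=132.0):
    """OLS fit of y(t) = b0 + b1*t + b2*sin(2*pi*t/P) + b3*cos(2*pi*t/P);
    t in months, P = 132 months (an 11-yr solar-cycle-like term)."""
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(beta[2], beta[3])       # 11-yr term amplitude
    return beta, amplitude
```

Fitting the sine/cosine pair rather than a single phased sinusoid keeps the model linear in its coefficients, so both the cycle amplitude and the linear trend come out of one least-squares solve, as in the regression described above.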
Proceedings of the Conference on the Design of Experiments (23rd) S
1978-07-01
of Statistics, Carnegie-Mellon University. [12] Duran, B. S. (1976). A survey of nonparametric tests for scale. Communications in Statistics A5, 1287... Host of the twenty-third Design of Experiments Conference was the U. S. Army Combat Development Experimentation Command, Fort Ord, California. Excellent... Prof. G. E. P. Box, Time Series Modelling, University of Wisconsin. Dr. Churchill Eisenhart was recipient this year of the Samuel S. Wilks Memorial...
Centrality measures in temporal networks with time series analysis
NASA Astrophysics Data System (ADS)
Huang, Qiangjuan; Zhao, Chengli; Zhang, Xue; Wang, Xiaojie; Yi, Dongyun
2017-05-01
The study of identifying important nodes in networks has wide application in different fields. However, current research is mostly based on static or aggregated networks. Recently, increasing attention to networks with time-varying structure has promoted the study of node centrality in temporal networks. In this paper, we define a supra-evolution matrix to depict the temporal network structure. Using time series analysis, the relationships between different time layers can be learned automatically. Based on the special form of the supra-evolution matrix, the eigenvector centrality calculation is turned into the calculation of eigenvectors of several low-dimensional matrices through iteration, which effectively reduces the computational complexity. Experiments are carried out on two real-world temporal networks, the Enron email communication network and the DBLP co-authorship network, the results of which show that our method is more efficient at discovering the important nodes than the common aggregating method.
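A hedged sketch of the supra-matrix idea: stack each time layer's adjacency matrix on the block diagonal, couple consecutive copies of each node, and take the leading eigenvector by power iteration (the coupling constant `omega` and all names are assumptions of this illustration, not the paper's supra-evolution matrix):

```python
import numpy as np

def supra_adjacency(layers, omega=1.0):
    """Block-diagonal layer adjacencies plus inter-layer identity coupling
    between copies of the same node in consecutive layers."""
    T, n = len(layers), layers[0].shape[0]
    S = np.zeros((T * n, T * n))
    for t, A in enumerate(layers):
        S[t * n:(t + 1) * n, t * n:(t + 1) * n] = A
    for t in range(T - 1):
        S[t * n:(t + 1) * n, (t + 1) * n:(t + 2) * n] += omega * np.eye(n)
        S[(t + 1) * n:(t + 2) * n, t * n:(t + 1) * n] += omega * np.eye(n)
    return S

def leading_eigenvector(S, n_iter=500):
    """Power iteration for the leading eigenvector (node centralities)."""
    v = np.ones(S.shape[0])
    for _ in range(n_iter):
        v = S @ v
        v /= np.linalg.norm(v)
    return v
```

Reshaping the resulting vector to (layers, nodes) gives one centrality score per node per time layer; the paper's contribution is avoiding the full supra-matrix eigenproblem, which this naive version solves directly.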
NASA Astrophysics Data System (ADS)
De Dreuzy, J. R.; Marçais, J.; Moatar, F.; Minaudo, C.; Courtois, Q.; Thomas, Z.; Longuevergne, L.; Pinay, G.
2017-12-01
Integration of hydrological and biogeochemical processes leads to emergent patterns at the catchment scale, and monitoring in rivers reflects the aggregation of these effects. While discharge time series have been measured for decades, high-frequency water quality monitoring in rivers now provides prominent measurements for characterizing the interplay between hydrological and biogeochemical processes, especially for inferring the processes that happen in the heterogeneous subsurface. However, we still lack frameworks to relate observed patterns to specific processes because of the "organized complexity" of hydrological systems. Indeed, it is unclear what controls, for example, patterns in concentration-discharge (C/Q) relationships arising from non-linear processes and hysteresis effects. Here we develop a computationally non-intensive, process-based model to test how the integration of different landforms (i.e. geological heterogeneities and structures, topographical features) with different biogeochemical reactivity assumptions (e.g. reactive zone locations) can shape the overall water quality time series. With numerical experiments, we investigate typical patterns in high-frequency C/Q relationships. In headwater basins, we found that typical hysteretic patterns in C/Q relationships observed in data time series can be attributed to differences in where water and solutes are stored across the hillslope. At the catchment scale, though, these effects tend to average out through the integration of contrasting hillslope landforms. Together these results suggest that information contained in headwater water quality monitoring can be used to understand how hydrochemical processes determine downstream conditions.
The ATLAS Experiment: Mapping the Secrets of the Universe (LBNL Summer Lecture Series)
Barnett, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Physics Division
2018-01-12
Summer Lecture Series 2007: Michael Barnett of Berkeley Lab's Physics Division discusses the ATLAS Experiment at the European Laboratory for Particle Physics' (CERN) Large Hadron Collider. The collider will explore the aftermath of collisions at the highest energy ever produced in the lab, and will recreate the conditions of the universe a billionth of a second after the Big Bang. The ATLAS detector is half the size of the Notre Dame Cathedral and required 2000 physicists and engineers from 35 countries for its construction. Its goals are to examine mini-black holes, identify dark matter, understand antimatter, search for extra dimensions of space, and learn about the fundamental forces that have shaped the universe since the beginning of time and will determine its fate.
Application of information theory methods to food web reconstruction
Moniz, L.J.; Cooch, E.G.; Ellner, S.P.; Nichols, J.D.; Nichols, J.M.
2007-01-01
In this paper we use information theory techniques on time series of abundances to determine the topology of a food web. At the outset, the food web participants (two consumers, two resources) are known; in addition we know that each consumer prefers one of the resources over the other. However, we do not know which consumer prefers which resource, and if this preference is absolute (i.e., whether or not the consumer will consume the non-preferred resource). Although the consumers and resources are identified at the beginning of the experiment, we also provide evidence that the consumers are not resources for each other, and the resources do not consume each other. We do show that there is significant mutual information between resources; the model is seasonally forced and some shared information between resources is expected. Similarly, because the model is seasonally forced, we expect shared information between consumers as they respond to the forcing of the resources. The model that we consider does include noise, and in an effort to demonstrate that these methods may be of some use in other than model data, we show the efficacy of our methods with decreasing time series size; in this particular case we obtain reasonably clear results with a time series length of 400 points. This approaches ecological time series lengths from real systems.
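The core quantity in such food-web reconstruction is the mutual information between pairs of abundance time series. Below is a minimal histogram ("plug-in") estimator, not the authors' implementation; the coupled/independent series, the 400-point length (matching the abstract's figure), and the bin count are illustrative.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in (histogram) estimate of mutual information, in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n = 400                                   # series length cited in the abstract
x = rng.normal(size=n)                    # stand-in "resource" abundance
coupled = x + 0.1 * rng.normal(size=n)    # stand-in "consumer" tracking it
independent = rng.normal(size=n)          # unrelated species
mi_strong = mutual_information(x, coupled)
mi_weak = mutual_information(x, independent)
```

A trophic link should show up as markedly higher mutual information than an unrelated pair; the plug-in estimate is always non-negative, so significance testing against surrogates is still needed in practice.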
NASA Astrophysics Data System (ADS)
Shaulov, S. B.; Besshapov, S. P.; Kabanova, N. V.; Sysoeva, T. I.; Antonov, R. A.; Anyuhina, A. M.; Bronvech, E. A.; Chernov, D. V.; Galkin, V. I.; Tkaczyk, W.; Finger, M.; Sonsky, M.
2009-12-01
The expedition to Lake Baikal carried out in March 2008 became an important stage in the development of the SPHERE experiment. During the expedition the SPHERE-2 installation was hoisted, for the first time, on the tethered balloon APA to a height of 700 m over the lake surface, which was covered with ice and snow. A series of test measurements was made. Preliminary results of the data processing are presented. The next stage of the SPHERE experiment is to begin accumulating statistics for constructing the CR spectrum in the energy range 10-10 eV.
Experimental relevance of global properties of time-delayed feedback control.
von Loewenich, Clemens; Benner, Hartmut; Just, Wolfram
2004-10-22
We show by means of theoretical considerations and electronic circuit experiments that time-delayed feedback control suffers from severe global constraints if transitions at the control boundaries are discontinuous. Subcritical behavior gives rise to small basins of attraction and thus limits the control performance. The reported properties are, on the one hand, universal since the mechanism is based on general arguments borrowed from bifurcation theory and, on the other hand, directly visible in experimental time series.
Chedid, Aljamir D.; Chedid, Marcio F.; Winkelmann, Leonardo V.; Filho, Tomaz J. M. Grezzana; Kruel, Cleber D. P.
2015-01-01
Perioperative mortality following pancreaticoduodenectomy has improved over time and is lower than 5% in selected high-volume centers. Based on several large literature series on pancreaticoduodenectomy from high-volume centers, some argue that high annual volumes are necessary for good outcomes after pancreaticoduodenectomy. We report here the outcomes of a low-annual-volume pancreaticoduodenectomy series after incorporating technical expertise from a high-volume center. We included all patients who underwent pancreaticoduodenectomy performed by a single surgeon (ADC) as treatment for periampullary malignancies from 1981 to 2005. Outcomes of this series were compared to those of three high-volume literature series. Additionally, outcomes for the first 10 cases in the present series were compared to those of the 37 remaining cases. A total of 47 pancreaticoduodenectomies were performed over a 25-year period. Overall in-hospital mortality was 2 cases (4.3%), and morbidity occurred in 23 patients (48.9%). Both mortality and morbidity were similar to those of each of the three high-volume center comparison series. Compared with the first 10 cases, the remaining 37 cases had lower mortality (20% versus 0%; P = 0.042), fewer tumor-positive margins (50% versus 13.5%; P = 0.024), lower use of intraoperative blood transfusions (90% versus 32.4%; P = 0.003), and a trend toward shorter in-hospital stay (20 versus 15.8 days; P = 0.053). Accumulation of surgical experience and incorporation of expertise from high-volume centers may enable satisfactory outcomes after pancreaticoduodenectomy in low-volume settings when referral to a high-volume center is limited. PMID:25875555
Parallel optimization of signal detection in active magnetospheric signal injection experiments
NASA Astrophysics Data System (ADS)
Gowanlock, Michael; Li, Justin D.; Rude, Cody M.; Pankratius, Victor
2018-05-01
Signal detection and extraction requires substantial manual parameter tuning at different stages in the processing pipeline. Time-series data depends on domain-specific signal properties, necessitating unique parameter selection for a given problem. The large potential search space makes this parameter selection process time-consuming and subject to variability. We introduce a technique to search and prune such parameter search spaces in parallel and select parameters for time series filters using breadth- and depth-first search strategies to increase the likelihood of detecting signals of interest in the field of magnetospheric physics. We focus on studying geomagnetic activity in the extremely and very low frequency ranges (ELF/VLF) using ELF/VLF transmissions from Siple Station, Antarctica, received at Québec, Canada. Our technique successfully detects amplified transmissions and achieves substantial speedup performance gains as compared to an exhaustive parameter search. We present examples where our algorithmic approach reduces the search from hundreds of seconds down to less than 1 s, with a ranked signal detection in the top 99th percentile, thus making it valuable for real-time monitoring. We also present empirical performance models quantifying the trade-off between the quality of signal recovered and the algorithm response time required for signal extraction. In the future, improved signal extraction in scenarios like the Siple experiment will enable better real-time diagnostics of conditions of the Earth's magnetosphere for monitoring space weather activity.
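The breadth-then-depth pruning of a filter-parameter space can be sketched generically. This is an illustrative sketch, not the authors' pipeline: the `score` function (a stand-in for, say, the SNR of the extracted ELF/VLF signal), the cutoff/bandwidth grids, and the beam width are all hypothetical.

```python
from itertools import product

def coarse_to_fine(score, cutoffs, bandwidths, beam=3):
    """Breadth-first sweep of the cutoff grid at a default bandwidth,
    then exhaustive refinement of only the top-`beam` survivors —
    pruning the full cutoffs x bandwidths product."""
    default_bw = bandwidths[0]
    survivors = sorted(cutoffs, key=lambda c: -score(c, default_bw))[:beam]
    return max(product(survivors, bandwidths), key=lambda p: score(*p))

# Hypothetical smooth score peaking at cutoff = 3.0, bandwidth = 0.5
score = lambda c, b: -((c - 3.0) ** 2 + (b - 0.5) ** 2)
cutoffs = [1.0, 2.0, 3.0, 4.0, 5.0]
bandwidths = [0.1, 0.3, 0.5, 0.7]
best = coarse_to_fine(score, cutoffs, bandwidths)
```

Instead of scoring all 20 grid points, only 5 + 12 evaluations are needed here; the saving grows rapidly with grid size, which is the source of the speedups the abstract reports.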
Alignment of time-resolved data from high throughput experiments.
Abidi, Nada; Franke, Raimo; Findeisen, Peter; Klawonn, Frank
2016-12-01
To better understand the dynamics of the underlying processes in cells, it is necessary to take measurements over a time course. Modern high-throughput technologies are often used for this purpose to measure the behavior of cell products such as metabolites, peptides, proteins, RNA, or mRNA at different points in time. Compared to classical time series, the number of time points is usually very limited, and the measurements are taken at irregular time intervals. The main reasons for this are the cost of the experiments and the fact that the dynamic behavior usually shows a strong reaction and fast changes shortly after a stimulus and then slowly converges to a stable state. Another reason might simply be missing values. It is common to repeat the experiments and to use replicates in order to carry out a more reliable analysis. The ideal assumptions that the initial stimulus started at exactly the same time for all replicates and that the replicates are perfectly synchronized are seldom satisfied. Therefore, the time-resolved data must first be adjusted, or aligned, before further analysis is carried out. Dynamic time warping (DTW) is one of the common alignment techniques for time series data with equidistant time points. In this paper, we modify the DTW algorithm so that it can align sequences with measurements at different, non-equidistant time points with large gaps in between. This type of data is usually known as time-resolved data, characterized by irregular time intervals between measurements as well as non-identical time points for different replicates. The new algorithm can easily be used to align time-resolved data from high-throughput experiments and to overcome problems such as time scarcity and noise in the measurements. We propose a modification of DTW that adapts to the requirements of time-resolved data by using monotone cubic interpolation splines.
Our approach provides a nonlinear alignment of two sequences that need neither equidistant time points nor measurements at identical time points. The proposed method is evaluated with artificial as well as real data. The software is available as an R package tra (Time-Resolved data Alignment), freely available at http://public.ostfalia.de/klawonn/tra.zip .
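The general idea (interpolate irregularly sampled replicates onto a common grid, then warp) can be sketched as follows. This is a minimal illustration, not the tra package: it uses classic DTW and linear interpolation via `np.interp`, whereas the paper uses monotone cubic splines; the example time courses are invented.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def align_irregular(t1, x1, t2, x2, n_grid=50):
    """Interpolate two replicates, sampled at different irregular time
    points, onto a shared grid before warping."""
    lo, hi = max(t1[0], t2[0]), min(t1[-1], t2[-1])
    grid = np.linspace(lo, hi, n_grid)
    return dtw_distance(np.interp(grid, t1, x1), np.interp(grid, t2, x2))

# Two replicates of a stimulus response, measured at different hours
t1 = np.array([0.0, 0.5, 2.0, 6.0, 24.0])
x1 = np.array([0.0, 2.0, 1.0, 0.4, 0.1])
t2 = np.array([0.0, 1.0, 3.0, 8.0, 24.0])
x2 = np.array([0.0, 2.0, 1.0, 0.4, 0.1])
d = align_irregular(t1, x1, t2, x2)
```

The dense early sampling and the large gap before 24 h mimic the "fast reaction, slow convergence" pattern the abstract describes.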
Series resonant converter with auxiliary winding turns: analysis, design and implementation
NASA Astrophysics Data System (ADS)
Lin, Bor-Ren
2018-05-01
Conventional series resonant converters have been researched and applied in high-efficiency power units owing to their low switching losses. Their main problems are wide frequency variation and high circulating current. Thus, resonant converters are limited to a narrow input voltage range, and a large input capacitor is normally adopted in commercial power units to meet the minimum hold-up time requirement when AC power is off. To overcome these problems, a resonant converter with auxiliary secondary windings is presented in this paper to achieve high voltage gain in the low-input-voltage case, such as during the hold-up time when utility power is off. Since the high voltage gain is used only at low input voltage, the frequency variation of the proposed converter is reduced compared to the conventional resonant converter. Compared to a conventional resonant converter, the hold-up time of the proposed converter is more than 40 ms. A larger magnetising inductance of the transformer is used to reduce circulating current losses. Finally, a laboratory prototype is constructed and experiments are provided to verify the converter performance.
Functional mixed effects spectral analysis
KRAFTY, ROBERT T.; HALL, MARTICA; GUO, WENSHENG
2011-01-01
In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437
Financial time series prediction using spiking neural networks.
Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam
2014-01-01
In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, to financial time series prediction is presented. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data of this kind. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks (a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison, three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-to-Noise ratio. This work demonstrates the applicability of the Polychronous Spiking Network to financial data forecasting and, in turn, indicates the potential of such networks over traditional systems in difficult-to-manage non-stationary environments.
Prediction of Sea Surface Temperature Using Long Short-Term Memory
NASA Astrophysics Data System (ADS)
Zhang, Qin; Wang, Hui; Dong, Junyu; Zhong, Guoqiang; Sun, Xin
2017-10-01
This letter adopts long short-term memory (LSTM) to predict sea surface temperature (SST); to our knowledge, this is the first attempt to use a recurrent neural network for SST prediction, making one-week and one-month daily predictions. We formulate SST prediction as a time series regression problem. LSTM is a special kind of recurrent neural network that introduces a gate mechanism into the vanilla RNN to prevent the vanishing or exploding gradient problem. It has a strong ability to model temporal relationships in time series data and can handle long-term dependencies well. The proposed network architecture is composed of two kinds of layers: an LSTM layer and a fully connected dense layer. The LSTM layer is utilized to model the time series relationship, and the fully connected layer maps the output of the LSTM layer to a final prediction. We explore the optimal settings of this architecture through experiments and report the prediction accuracy for the coastal seas of China to confirm the effectiveness of the proposed method. In addition, we also demonstrate its online updating capability.
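The "time series regression" framing means turning the SST record into supervised (history, target) pairs before any network sees it. The sketch below shows only that framing, not the LSTM itself; the sine series is a stand-in for real daily SST, and the one-week lookback is illustrative.

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Turn a 1-D series into (X, y) pairs: `lookback` past values
    predict the value `horizon` steps ahead."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1])
    return np.array(X), np.array(y)

# Stand-in for a daily SST series (300 days of a smooth seasonal signal)
sst = np.sin(np.linspace(0.0, 20.0, 300))
X, y = make_windows(sst, lookback=7, horizon=1)  # one-week history, next-day target
```

Each row of `X` would be fed through the LSTM layer and the dense layer would map its final state to the scalar in `y`; longer horizons reuse the same framing with `horizon=7` or `horizon=30`.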
NASA Astrophysics Data System (ADS)
Antonik, Piotr; Haelterman, Marc; Massar, Serge
2017-05-01
Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. Its hardware implementations have received much attention because of their simplicity and remarkable performance on a series of benchmark tasks. In previous experiments, the output was uncoupled from the system and, in most cases, simply computed off-line on a postprocessing computer. However, numerical investigations have shown that feeding the output back into the reservoir opens the possibility of long-horizon time-series forecasting. Here, we present a photonic reservoir computer with output feedback, and we demonstrate its capacity to generate periodic time series and to emulate chaotic systems. We study in detail the effect of experimental noise on system performance. In the case of chaotic systems, we introduce several metrics, based on standard signal-processing techniques, to evaluate the quality of the emulation. Our work significantly enlarges the range of tasks that can be solved by hardware reservoir computers and, therefore, the range of applications they could potentially tackle. It also raises interesting questions in nonlinear dynamics and chaos theory.
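The output-feedback idea (train a readout with teacher forcing, then feed that readout back so the reservoir free-runs as a generator) can be sketched with a software echo state network. This is a minimal numpy sketch of the general technique, not the photonic hardware; the reservoir size, spectral radius, ridge parameter, and sine target are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100                                            # reservoir size (illustrative)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius 0.9
w_fb = rng.uniform(-1.0, 1.0, size=N)              # output-feedback weights

# Teacher forcing: drive the reservoir with the target signal itself
target = np.sin(0.2 * np.arange(500))
x, states = np.zeros(N), []
for y in target:
    states.append(x.copy())
    x = np.tanh(W @ x + w_fb * y)
S, T = np.array(states)[100:], target[100:]        # discard the transient
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ T)  # ridge readout

def run_autonomous(x0, steps):
    """Feed the readout back in place of the teacher signal, so the
    reservoir generates the series on its own."""
    x, ys = x0.copy(), []
    for _ in range(steps):
        y = w_out @ x
        x = np.tanh(W @ x + w_fb * y)               # output feedback
        ys.append(y)
    return np.array(ys)

free_run = run_autonomous(x, 100)
```

In the experiment this feedback loop is closed in analog hardware, which is what exposes the system to the measurement noise the paper analyzes.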
Dean, Roger T; Dunsmuir, William T M
2016-06-01
Many articles on perception, performance, psychophysiology, and neuroscience seek to relate pairs of time series through assessments of their cross-correlations. Most such series are individually autocorrelated: they do not comprise independent values. Given this situation, an unfounded reliance is often placed on cross-correlation as an indicator of relationships (e.g., referent vs. response, leading vs. following). Such cross-correlations can indicate spurious relationships, because of autocorrelation. Given these dangers, we here simulated how and why such spurious conclusions can arise, to provide an approach to resolving them. We show that when multiple pairs of series are aggregated in several different ways for a cross-correlation analysis, problems remain. Finally, even a genuine cross-correlation function does not answer key motivating questions, such as whether there are likely causal relationships between the series. Thus, we illustrate how to obtain a transfer function describing such relationships, informed by any genuine cross-correlations. We illustrate the confounds and the meaningful transfer functions by two concrete examples, one each in perception and performance, together with key elements of the R software code needed. The approach involves autocorrelation functions, the establishment of stationarity, prewhitening, the determination of cross-correlation functions, the assessment of Granger causality, and autoregressive model development. Autocorrelation also limits the interpretability of other measures of possible relationships between pairs of time series, such as mutual information. We emphasize that further complexity may be required as the appropriate analysis is pursued fully, and that causal intervention experiments will likely also be needed.
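The core warning (independent but autocorrelated series can show sizeable cross-correlations, which prewhitening deflates) is easy to reproduce in simulation. This sketch is mine, not the authors' R code; the AR coefficient, series length, and lag range are illustrative, and the "prewhitening" here simply removes the known AR(1) structure.

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1(phi, n):
    """Generate an AR(1) series x[t] = phi * x[t-1] + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def max_abs_ccf(x, y, max_lag=20):
    """Largest absolute cross-correlation over lags 0..max_lag-1."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return max(abs(np.dot(x[:n - k], y[k:]) / n) for k in range(max_lag))

a, b = ar1(0.95, 500), ar1(0.95, 500)     # two *independent* AR(1) series
spurious = max_abs_ccf(a, b)              # often sizeable despite independence
# Prewhitening: cross-correlate the AR(1) residuals instead
white = max_abs_ccf(a[1:] - 0.95 * a[:-1], b[1:] - 0.95 * b[:-1])
```

Under strong autocorrelation the raw maximum typically far exceeds the white-noise benchmark of about 2/sqrt(n), while the prewhitened maximum falls back toward it; in real analyses the AR model must be estimated, as the article describes.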
Pridemore, William Alex; Chamlin, Mitchell B.; Cochran, John K.
2009-01-01
The dissolution of the Soviet Union resulted in sudden, widespread, and fundamental changes to Russian society. The former social welfare system-with its broad guarantees of employment, healthcare, education, and other forms of social support-was dismantled in the shift toward democracy, rule of law, and a free-market economy. This unique natural experiment provides a rare opportunity to examine the potentially disintegrative effects of rapid social change on deviance, and thus to evaluate one of Durkheim's core tenets. We took advantage of this opportunity by performing interrupted time-series analyses of annual age-adjusted homicide, suicide, and alcohol-related mortality rates for the Russian Federation using data from 1956 to 2002, with 1992-2002 as the postintervention time-frame. The ARIMA models indicate that, controlling for the long-term processes that generated these three time series, the breakup of the Soviet Union was associated with an appreciable increase in each of the cause-of-death rates. We interpret these findings as being consistent with the Durkheimian hypothesis that rapid social change disrupts social order, thereby increasing the level of crime and deviance. PMID:20165565
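The study fits ARIMA intervention models, which also account for the autocorrelated error process. As a minimal illustration of the interrupted time-series idea only, the sketch below estimates a level shift at a known intervention date by segmented OLS regression on noise-free synthetic data; the series and jump are invented.

```python
import numpy as np

def level_shift(y, t0):
    """OLS fit of y on [1, t, step(t >= t0)]; the step coefficient
    estimates the post-intervention change in level."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t, (t >= t0).astype(float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]

# Synthetic annual rate series: linear trend plus a jump of +5 at t = 30
t = np.arange(60, dtype=float)
y = 2.0 + 0.1 * t + 5.0 * (t >= 30)
est = level_shift(y, 30)
```

With real mortality data the errors are serially correlated, so the ARIMA machinery (identifying and modeling the noise process before testing the intervention term) is what makes the inference valid.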
Doing Your Thing: Fourth Grade.
ERIC Educational Resources Information Center
Potter, Beverly
The fourth grade instructional unit, part of a grade school level career education series, is designed to assist learners in relating present experiences to past and future ones. Before the main body of the lessons is described, field testing results are reported, and key items are presented: the concepts, the estimated instructional time, the…
Experiments with a Magnetically Controlled Pendulum
ERIC Educational Resources Information Center
Kraftmakher, Yaakov
2007-01-01
A magnetically controlled pendulum is used for observing free and forced oscillations, including nonlinear oscillations and chaotic motion. A data-acquisition system stores the data and displays time series of the oscillations and related phase plane plots, Poincare maps, Fourier spectra and histograms. The decay constant of the pendulum can be…
Does a Regional Accent Perturb Speech Processing?
ERIC Educational Resources Information Center
Floccia, Caroline; Goslin, Jeremy; Girard, Frederique; Konopczynski, Gabrielle
2006-01-01
The processing costs involved in regional accent normalization were evaluated by measuring differences in lexical decision latencies for targets placed at the end of sentences with different French regional accents. Over a series of 6 experiments, the authors examined the time course of comprehension disruption by manipulating the duration and…
Technique and interpretation in tree seed radiography
Howard B. Kriebel
1966-01-01
The study of internal seed structure by radiography requires techniques which will give good definition. To establish the best procedures, we conducted a series of experiments in which we manipulated the principal controllable variables affecting the quality of X-radiographs: namely, focus-to-film distance, film speed (grain), exposure time, kilovoltage, and...
COBRE Research Workshop on Higher Education: Equity and Efficiency.
ERIC Educational Resources Information Center
Chicago Univ., IL.
This document comprises 8 papers presented at the COBRE Research Workshop on Higher Education. The papers are: (1) "Schooling and Equality from Generation to Generation;" (2) "Time Series Changes in Personal Income Inequality: The United States Experience, 1939 to 1985;" (3) "Education, Income, and Ability;" (4) "Proposals for Financing Higher…
ERIC Educational Resources Information Center
Dillon, Naomi
2008-01-01
Life, in general, is a series of ever-increasing challenges. Ideally, the lessons learned from previous experiences prepare a person for future ones. That isn't always the case, particularly when puberty hits. Despite the many environmental and personal variables converging at the same time, schools can be instrumental in guiding teens through…
Authentic Assessment: A Handbook for Educators. Assessment Bookshelf Series.
ERIC Educational Resources Information Center
Hart, Diane
This book reviews the assessment movement, from the history of testing to the practical considerations of enhancing classroom experiences for teachers and students. It explores making time for assessment, tailoring assessment to desired outcomes, and scoring and evaluating student performance. The chapters are: (1) "Where We've Been: Standardized…
Stories of Success: Latinas Redefining Cultural Capital
ERIC Educational Resources Information Center
Gonzales, Leslie D.
2012-01-01
In this essay, the stories of successful Latina scholars are captured and shared through a series of interviews. Inquiring about the k-20 experience of the Latinas, the study provides timely insights that counter mainstream deficit perspectives on the Latino population. Specifically, these Latinas' stories show how they have been inspired by…
Field Experiments in Manpower Issues.
ERIC Educational Resources Information Center
Mobilization for Youth, Inc., New York, NY. Experimental Manpower Lab.
The first three reports in this series describe the data-based results of systematic experimentation and survey research concerned with the following timely manpower issues: (1) The Effects of Monetary Incentives on the Learning of Remedial English by Disadvantaged Trainees, (2) The Reward Preferences of Neighborhood Youth Corps Trainees, and (3)…
Effects of Noun Phrase Type on Sentence Complexity
ERIC Educational Resources Information Center
Gordon, Peter C.; Hendrick, Randall; Johnson, Marcus
2004-01-01
A series of self-paced reading time experiments was performed to assess how characteristics of noun phrases (NPs) contribute to the difference in processing difficulty between object- and subject-extracted relative clauses. Structural semantic characteristics of the NP in the embedded clause (definite vs. indefinite and definite vs. generic) did…
21 CFR 314.81 - Other postmarketing reports.
Code of Federal Regulations, 2011 CFR
2011-04-01
... to safety (for example, epidemiologic studies or analyses of experience in a monitored series of... times two copies of the following reports: (1) NDA—Field alert report. The applicant shall submit... information, for example, submit a labeling supplement, add a warning to the labeling, or initiate a new study...
Feature Assignment in Perception of Auditory Figure
ERIC Educational Resources Information Center
Gregg, Melissa K.; Samuel, Arthur G.
2012-01-01
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…
Small area population forecasting: some experience with British models.
Openshaw, S; Van Der Knaap, G A
1983-01-01
This study is concerned with the evaluation of various models, including time-series forecasts, extrapolation, and projection procedures, that have been developed to prepare population forecasts for planning purposes. These models are evaluated using data for the Netherlands. "As part of a research project at the Erasmus University, space-time population data has been assembled in a geographically consistent way for the period 1950-1979. These population time series are of sufficient length for the first 20 years to be used to build models and then evaluate the performance of the model for the next 10 years. Some 154 different forecasting models for 832 municipalities have been evaluated. It would appear that the best forecasts are likely to be provided by either a Holt-Winters model, or a ratio-correction model, or a low order exponential-smoothing model." excerpt
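Holt's linear-trend method is the non-seasonal core of the Holt-Winters family named above. The sketch below is a generic textbook implementation, not the study's models; the smoothing constants and the perfectly linear "population" series are illustrative.

```python
import numpy as np

def holt_forecast(y, alpha=0.8, beta=0.8, horizon=10):
    """Holt's linear-trend exponential smoothing: recursively smooth a
    level and a trend, then extrapolate linearly `horizon` steps ahead."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + trend * np.arange(1, horizon + 1)

pop = 1000.0 + 12.0 * np.arange(20)    # 20 years of linear "population" growth
fc = holt_forecast(pop, horizon=10)    # 10-year forecast continues the trend
```

On exactly linear data the method reproduces the trend without error; its value for small-area forecasting is that it adapts the level and trend as new observations arrive.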
NASA Astrophysics Data System (ADS)
Grunder, Anita L.; Laporte, Didier; Druitt, Tim H.
2005-04-01
The abrupt changes in character of variably welded pyroclastic deposits have invited decades of investigation and classification. We conducted two series of experiments using ash from the nonwelded base of the rhyolitic Rattlesnake Tuff of Oregon, USA, to examine conditions of welding. One series of experiments was conducted at atmospheric pressure (1 atm) in a muffle furnace with variable run times and temperature and another series was conducted at 5 MPa and 600 °C in a cold seal apparatus with variable run times and water contents. We compared the results to a suite of incipiently to densely welded, natural samples of the Rattlesnake Tuff. Experiments at 1 atm required a temperature above 900 °C to produce welding, which is in excess of the estimated pre-eruptive magmatic temperature of the tuff. The experiments also yielded globular clast textures unlike the natural tuff. During the cold-seal experiments, the gold sample capsules collapsed in response to sample densification. Textures and densities that closely mimic the natural suite were produced at 5 MPa, 600 °C and 0.4 wt.% H2O, over run durations of hours to 2 days. Clast deformation and development of foliation in 2-week runs were greater than in natural samples. Both more and less water reduced the degree of welding at otherwise constant run conditions. For 5 MPa experiments, changes in the degree of foliation of shards and of axial ratios of bubble shards and non-bubble (mainly platy) shards, are consistent with early densification related to compaction and partial rotation of shards into a foliation. Subsequent densification was associated with viscous deformation as indicated by more sintered contacts and deformation of shards. Sintering (local fusion of shard-shard contacts) was increasingly important with longer run times, higher temperatures, and greater pressures.
During runs with high water concentrations, sintering was rare and adhesion between clasts was dominated by precipitation of sublimates in pore spaces. A few tenths wt.% H2O in the rhyolite glass promote the development of welding by sharp reduction of glass viscosity. Large amounts of water inhibit welding by creating surface sublimates that interfere with sintering and may exert fluid pressure counter to lithostatic load if sintering and vapor-phase sublimates seal permeability in the tuff.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaffner, D. A.; Brown, M. R.; Rock, A. B.
The frequency spectrum of magnetic fluctuations as measured on the Swarthmore Spheromak Experiment is broadband and exhibits a nearly Kolmogorov 5/3 scaling. It features a steepening region which is indicative of dissipation of magnetic fluctuation energy, similar to that observed in fluid and magnetohydrodynamic turbulence systems. Two non-spectrum-based time-series analysis techniques are implemented on this data set in order to seek other possible signatures of turbulent dissipation beyond the steepening of fluctuation spectra. Presented here are results for the flatness, permutation entropy, and statistical complexity, each of which exhibits a particular character at spectral steepening scales which can then be compared to the behavior of the frequency spectrum.
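Permutation entropy, one of the time-series measures named above, has a compact standard (Bandt-Pompe) definition: the Shannon entropy of the distribution of ordinal patterns in the series. The sketch below is that generic definition, not the authors' analysis code; the test signals are illustrative.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy: Shannon entropy of the empirical
    distribution of ordinal patterns of length `order`, scaled to [0, 1]."""
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum() / np.log(factorial(order)))

rng = np.random.default_rng(0)
pe_noise = permutation_entropy(rng.normal(size=2000))  # near 1: disordered
pe_ramp = permutation_entropy(np.arange(2000.0))       # 0: fully ordered
```

Scanning the delay (or time scale) and watching where permutation entropy and statistical complexity change character is the kind of dissipation-scale diagnostic the abstract describes.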
Historical instrumental climate data for Australia - quality and utility for palaeoclimatic studies
NASA Astrophysics Data System (ADS)
Nicholls, Neville; Collins, Dean; Trewin, Blair; Hope, Pandora
2006-10-01
The quality and availability of climate data suitable for palaeoclimatic calibration and verification for the Australian region are discussed and documented. Details of the various datasets, including problems with the data, are presented. High-quality datasets, where such problems are reduced or even eliminated, are discussed. Many climate datasets are now analysed onto grids, facilitating the preparation of regional-average time series. Work is under way to produce such high-quality, gridded datasets for a variety of hitherto unavailable climate data, including surface humidity, pan evaporation, wind, and cloud. An experiment suggests that only a relatively small number of palaeoclimatic time series could provide a useful estimate of long-term changes in Australian annual average temperature.
Konrad, Peter E.; Neimat, Joseph S.; Yu, Hong; Kao, Chris C.; Remple, Michael S.; D'Haese, Pierre-François; Dawant, Benoit M.
2011-01-01
Background The microTargeting™ platform (MTP) stereotaxy system (FHC Inc., Bowdoin, Me., USA) was FDA approved in 2001 utilizing rapid-prototyping technology to create custom platforms for human stereotaxy procedures. It has also been called the STarFix (surgical targeting fixture) system since it is based on the concept of a patient- and procedure-specific surgical fixture. This is an alternative stereotactic method by which planned trajectories are incorporated into custom-built, miniature stereotactic platforms mounted onto bone fiducial markers. Our goal is to report the clinical experience with this system over a 6-year period. Methods We present the largest reported series of patients who underwent deep brain stimulation (DBS) implantations using customized rapidly prototyped stereotactic frames (MTP). Clinical experience and technical features for the use of this stereotactic system are described. Final lead location analysis using postoperative CT was performed to measure the clinical accuracy of the stereotactic system. Results Our series included 263 patients who underwent 284 DBS implantation surgeries at one institution over a 6-year period. The clinical targeting error without accounting for brain shift in this series was found to be 1.99 mm (SD 0.9). Operating room time was reduced through earlier incision time by 2 h per case. Conclusion Customized, miniature stereotactic frames, namely STarFix platforms, are an acceptable and efficient alternative method for DBS implantation. Its clinical accuracy and outcome are comparable to those associated with traditional stereotactic frame systems. PMID:21160241
Role of the ocean's AMOC in setting the uptake efficiency of transient tracers
NASA Astrophysics Data System (ADS)
Romanou, A.; Marshall, J.; Kelley, M.; Scott, J. R.
2017-12-01
The central role played by the ocean's Atlantic Meridional Overturning Circulation (AMOC) in the uptake and sequestration of transient tracers is studied in a series of experiments with the Goddard Institute for Space Studies and Massachusetts Institute of Technology ocean circulation models. Forced by observed atmospheric time series of CFC-11, both models exhibit realistic distributions in the ocean, with similar surface biases but different response over time. To better understand what controls uptake, we ran idealized forcing experiments in which the AMOC strength varied over a wide range, bracketing the observations. We found that differences in the strength and vertical scale of the AMOC largely accounted for the different rates of CFC-11 uptake and vertical distribution thereof. A two-box model enables us to quantify and relate uptake efficiency of passive tracers to AMOC strength and how uptake efficiency decreases in time. We also discuss the relationship between passive tracer and heat uptake efficiency, of which the latter controls the transient climate response to anthropogenic forcing in the North Atlantic. We find that heat uptake efficiency is substantially less (by about a factor of 5) than that for a passive tracer.
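The two-box intuition can be sketched numerically. The code below is not the authors' model; it is a minimal illustration with made-up parameters, in which a surface box takes up tracer from a prescribed atmospheric time series and an overturning exchange `q` (standing in for AMOC strength) ventilates a deep box.

```python
import numpy as np

def two_box_uptake(c_atm, q, k_gas=0.5, h_s=1.0, h_d=10.0, dt=0.1):
    """Euler integration of surface (c_s) and deep (c_d) tracer concentrations.
    q plays the role of AMOC strength: larger q ventilates the deep box faster."""
    c_s = c_d = 0.0
    history = []
    for ca in c_atm:
        air_sea = k_gas * (ca - c_s)   # uptake from the atmosphere
        overturn = q * (c_s - c_d)     # exchange with the deep box
        c_s += dt * (air_sea - overturn) / h_s
        c_d += dt * overturn / h_d
        history.append((c_s, c_d))
    return np.array(history)
```

With a CFC-like ramp forcing, increasing `q` raises the deep-box inventory, mirroring the paper's finding that the strength of the overturning largely sets the rate and vertical distribution of tracer uptake.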
Long-Term Stability Assessment of Sonoran Desert for Vicarious Calibration of GOES-R
NASA Astrophysics Data System (ADS)
Kim, W.; Liang, S.; Cao, C.
2012-12-01
Vicarious calibration refers to calibration techniques that do not depend on onboard calibration devices. Although sensors and onboard calibration devices undergo rigorous validation processes before launch, sensor performance often degrades after launch due to exposure to the harsh space environment and the aging of devices. Such in-flight changes can be identified and adjusted through vicarious calibration activities, where sensor degradation is measured in reference to exterior calibration sources such as the Sun, the Moon, and the Earth's surface. The Sonoran Desert is one of the best calibration sites in North America available for vicarious calibration of the GOES-R satellite. To accurately calibrate sensors onboard the GOES-R satellite (e.g., the Advanced Baseline Imager (ABI)), the temporal stability of the Sonoran Desert needs to be assessed precisely. However, short-/mid-term variations in top-of-atmosphere (TOA) reflectance caused by meteorological variables such as water vapor amount and aerosol loading are often difficult to retrieve, making it difficult to use raw TOA reflectance time series for the stability assessment of the site. In this paper, we address this issue of normalization of TOA reflectance time series using a time series analysis algorithm: the seasonal trend decomposition procedure based on LOESS (STL) (Cleveland et al., 1990). The algorithm is basically a collection of smoothing filters that decompose a time series into three additive components: seasonal, trend, and remainder. Since this non-linear technique is capable of extracting seasonal patterns in the presence of trend changes, the seasonal variation can be effectively identified in time series of remote sensing data subject to various environmental changes.
The experiment results performed with Landsat 5 TM data show that the decomposition results acquired for the Sonoran Desert area produce normalized series that have much less uncertainty than those of traditional BRDF models, which leads to more accurate stability assessment.
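STL proper uses LOESS smoothers at each stage; as a rough stand-in, a classical additive decomposition conveys the same trend/seasonal/remainder split (moving-average trend, period-wise seasonal means). This is an illustrative simplification, not the STL algorithm the paper applies.

```python
import numpy as np

def decompose(y, period):
    """Classical additive decomposition, a simplified stand-in for STL:
    moving-average trend, period-wise mean seasonal cycle, remainder."""
    y = np.asarray(y, dtype=float)
    trend = np.convolve(y, np.ones(period) / period, mode="same")
    detrended = y - trend
    # average each phase of the cycle across all periods
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal -= seasonal.mean()              # center the seasonal cycle
    seasonal_full = np.resize(seasonal, y.size)  # tile back to full length
    remainder = y - trend - seasonal_full
    return trend, seasonal_full, remainder
```

Applied to a series with both a trend and an annual cycle, the seasonal component tracks the cycle even as the trend changes, which is the property exploited for normalizing the desert reflectance record.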
Multi-locus analysis of genomic time series data from experimental evolution.
Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S
2015-04-01
Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
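The discrete-time Wright-Fisher model with selection that underlies the authors' Gaussian process approximation is compact. The single-locus sketch below (binomial drift after deterministic selection) is the textbook building block, not the multi-locus method itself.

```python
import numpy as np

def wright_fisher(n_pop, p0, s, n_gen, rng):
    """Single-locus Wright-Fisher trajectory with selection coefficient s.
    Each generation: deterministic selection, then binomial sampling drift."""
    freqs = [p0]
    p = p0
    for _ in range(n_gen):
        w = p * (1 + s) / (p * (1 + s) + (1 - p))  # post-selection frequency
        p = rng.binomial(n_pop, w) / n_pop          # drift in a finite population
        freqs.append(p)
    return np.array(freqs)
```

Replicate trajectories from this model are exactly the kind of allele-frequency time series an E&R experiment samples, and their first two moments are what the Gaussian process approximation matches.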
Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.
Ouyang, Yicun; Yin, Hujun
2018-05-01
Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently for one-step prediction, that is, predicting one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and uncertainty or error accumulation. The main existing approaches, iterative and independent, either use a one-step model recursively or treat each step of the multi-step task as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multi-steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in its component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
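The iterative and independent (direct) strategies the paper contrasts can be sketched with a plain least-squares AR model; the helper names and the AR setup below are illustrative, not the VLM model.

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares AR(p): y[t] ~ a1*y[t-1] + ... + ap*y[t-p]."""
    X = np.array([[y[t - i] for i in range(1, p + 1)] for t in range(p, len(y))])
    a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return a

def forecast_iterative(y, a, h):
    """Iterative strategy: feed one-step predictions back in as inputs."""
    hist = list(y)
    for _ in range(h):
        hist.append(sum(a[i] * hist[-1 - i] for i in range(len(a))))
    return hist[len(y):]

def forecast_direct(y, p, h):
    """Direct (independent) strategy: fit a separate model per horizon."""
    preds = []
    for step in range(1, h + 1):
        rows = range(p + step - 1, len(y))
        X = np.array([[y[t - step - i] for i in range(p)] for t in rows])
        a, *_ = np.linalg.lstsq(X, y[p + step - 1:], rcond=None)
        preds.append(sum(a[i] * y[len(y) - 1 - i] for i in range(p)))
    return preds
```

The iterative forecaster accumulates error by reusing its own outputs; the direct forecaster avoids that but ignores dependencies between horizons, which is the gap the VLM mixture is designed to close.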
Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes
Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide
2017-01-01
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889
NASA Astrophysics Data System (ADS)
Andrianova, Olga; Lomakov, Gleb; Manturov, Gennady
2017-09-01
The report presents the results of an analysis of benchmark experiments from the international ICSBEP Handbook (HEU-MET-INTER-005) carried out at the SSC RF - IPPE in cooperation with the Idaho National Laboratory (INL, USA), applicable to the verification of calculations for a wide range of tasks related to the safe storage of vitrified radioactive waste. Experiments on the BFS assemblies make it possible to perform a large series of studies needed for neutron data refinement, including measurements of reactivity effects, which allow testing the neutron cross section resonance structure. This series of studies is considered a sample joint analysis framework for differential and integral experiments required to correct nuclear data files of the ROSFOND evaluated neutron data library. Thus, it is shown that, despite the wide range of available experimental data, in so far as it relates to refinement in the resonance region, experiments on reactivity measurement make it possible to reflect the resonance structure peculiarities more subtly, in addition to the time-of-flight measurement method.
Jarukanont, Daungruthai; Bonifas Arredondo, Imelda; Femat, Ricardo; Garcia, Martin E.
2015-01-01
Chromaffin cells release catecholamines by exocytosis, a process that includes vesicle docking, priming and fusion. Although all these steps have been intensively studied, some aspects of their mechanisms, particularly those regarding vesicle transport to the active sites situated at the membrane, are still unclear. In this work, we show that it is possible to extract information on vesicle motion in Chromaffin cells from the combination of Langevin simulations and amperometric measurements. We developed a numerical model based on Langevin simulations of vesicle motion towards the cell membrane and on the statistical analysis of vesicle arrival times. We also performed amperometric experiments in bovine-adrenal Chromaffin cells under Ba2+ stimulation to capture neurotransmitter releases during sustained exocytosis. In the sustained phase, each amperometric peak can be related to a single release from a new vesicle arriving at the active site. The amperometric signal can then be mapped into a spike-series of release events. We normalized the spike-series resulting from the current peaks using a time-rescaling transformation, thus making signals coming from different cells comparable. We discuss why the obtained spike-series may contain information about the motion of all vesicles leading to release of catecholamines. We show that the release statistics in our experiments considerably deviate from Poisson processes. Moreover, the interspike-time probability is reasonably well described by two-parameter gamma distributions. In order to interpret this result we computed the vesicles’ arrival statistics from our Langevin simulations. As expected, assuming purely diffusive vesicle motion we obtain Poisson statistics. However, if we assume that all vesicles are guided toward the membrane by an attractive harmonic potential, simulations also lead to gamma distributions of the interspike-time probability, in remarkably good agreement with experiment. 
We also show that including the fusion-time statistics in our model does not produce any significant changes on the results. These findings indicate that the motion of the whole ensemble of vesicles towards the membrane is directed and reflected in the amperometric signals. Our results confirm the conclusions of previous imaging studies performed on single vesicles that vesicles’ motion underneath plasma membranes is not purely random, but biased towards the membrane. PMID:26675312
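The Poisson-versus-gamma distinction the authors draw can be checked with a simple interspike-interval statistic: exponential (Poisson) intervals have a coefficient of variation of 1, while gamma intervals with shape k have CV 1/sqrt(k). The illustration below uses synthetic spike trains, not the amperometric data.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of interspike intervals: ~1 for a Poisson
    process, 1/sqrt(k) for gamma-distributed intervals with shape k."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

rng = np.random.default_rng(1)
poisson_spikes = np.cumsum(rng.exponential(1.0, size=20000))
gamma_spikes = np.cumsum(rng.gamma(2.0, 0.5, size=20000))  # shape 2 -> CV ~ 0.707
```

A CV well below 1, as implied by a gamma fit with shape greater than 1, indicates release timing more regular than chance, consistent with the directed (rather than purely diffusive) vesicle motion the authors infer.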
Theoretical Series Elastic Element Length in Rana pipiens Sartorius Muscles
Matsumoto, Yorimi
1967-01-01
Assuming a two-component system for the muscle, a series elastic element and a contractile component, the analyses of the isotonic and isometric data points were related to obtain the series elastic stiffness, dP/dls, from the relation (See PDF for Equation). From the isometric data, dP/dt was obtained, and the shortening velocity, v, resulted from the isotonic experiments. Substituting (P0 - P)/T for dP/dt and b(P0 - P)/(P + a) for v, dP/dls = (P + a)/bT, where P < P0, and a, b are constants for any length l ≤ l0 (Matsumoto, 1965). If the isometric tension and the shortening velocity are recorded for a given muscle length, l0, then although the series elastic length, ls, and the contractile component length, lc, are changing, the total muscle length, l0, remains fixed, and therefore so does the time constant, T. Integrating (See PDF for Equation), the stress-strain relation for the series elastic element (See PDF for Equation) is obtained; lsc0 - ls + lc0, where lc0 equals the contractile component length for a muscle exerting a tension of P0. For a given P/P0, ls is uniquely determined and must be the same whether on the isotonic or the isometric length-tension-time curve. In fact, a locus on one surface curve can be associated with the corresponding locus on the other. PMID:6033578
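The integration step hidden behind the "See PDF for Equation" placeholders can be reconstructed under the stated assumptions (a, b, and T constant at fixed muscle length l0). Separating variables in the stiffness relation gives a logarithmic stress-strain curve; this is a reconstruction consistent with the surrounding text, not a substitute for the published equations.

```latex
\frac{dP}{dl_s} = \frac{P+a}{bT}
\;\Longrightarrow\;
\int_{l_s(P_1)}^{l_s(P_2)} dl_s \;=\; bT \int_{P_1}^{P_2} \frac{dP}{P+a}
\;\Longrightarrow\;
l_s(P_2) - l_s(P_1) \;=\; bT \,\ln\!\frac{P_2+a}{P_1+a}.
```

This makes explicit why ls is uniquely determined by P for a given muscle length: the extension of the series elastic element depends only on the tension reached, not on whether the record was isotonic or isometric.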
Time trends in recurrence of juvenile nasopharyngeal angiofibroma: Experience of the past 4 decades.
Mishra, Anupam; Mishra, Subhash Chandra
2016-01-01
An analysis of the time distribution of juvenile nasopharyngeal angiofibroma (JNA) recurrences from the last 4 decades is presented. Sixty recurrences were analyzed by actuarial survival methods. SPSS software was used to generate Kaplan-Meier (KM) curves, and time distributions were compared by the Log-rank, Breslow and Tarone-Ware tests. The overall recurrence rate was 17.59%. The majority underwent open transpalatal approach(es) without embolization. The probability of detecting a recurrence was 95% in the first 24 months, and the comparison of KM curves across 4 different time periods was not significant. This is the first and largest series to address the time distribution. The required follow-up period is 2 years. Our recurrence rate is just half that of the largest series reported so far, suggesting the superiority of transpalatal techniques. The similarity of the curves suggests that recent technical advances are unlikely to influence recurrence, which, per our hypothesis, more likely reflects tumor biology per se. Copyright © 2016 Elsevier Inc. All rights reserved.
Numerical analysis of transient fields near thin-wire antennas and scatterers
NASA Astrophysics Data System (ADS)
Landt, J. A.
1981-11-01
Under the premise that `accelerated charge radiates,' one would expect radiation on wire structures to occur from driving points, ends of wires, bends in wires, or locations of lumped loading. Here, this premise is investigated in a series of numerical experiments. The numerical procedure is based on a moment-method solution of a thin-wire time-domain electric-field integral equation. The fields in the vicinity of wire structures are calculated for short impulsive-type excitations, and are viewed in a series of time sequences or snapshots. For these excitations, the fields are spatially limited in the radial dimension, and expand in spheres centered about points of radiation. These centers of radiation coincide with the above list of possible source regions. Time retardation permits these observations to be made clearly in the time domain, similar to time-range gating. In addition to providing insight into transient radiation processes, these studies show that the direction of energy flow is not always defined by Poynting's vector near wire structures.
Yasuhara, Moriaki; Doi, Hideyuki; Wei, Chih-Lin; Danovaro, Roberto; Myhre, Sarah E
2016-05-19
The link between biodiversity and ecosystem functioning (BEF) over long temporal scales is poorly understood. Here, we investigate biological monitoring and palaeoecological records on decadal, centennial and millennial time scales from a BEF framework by using deep sea, soft-sediment environments as a test bed. Results generally show positive BEF relationships, in agreement with BEF studies based on present-day spatial analyses and short-term manipulative experiments. However, the deep-sea BEF relationship is much noisier across longer time scales compared with modern observational studies. We also demonstrate with palaeoecological time-series data that a larger species pool does not enhance ecosystem stability through time, whereas higher abundance as an indicator of higher ecosystem functioning may enhance ecosystem stability. These results suggest that BEF relationships are potentially time scale-dependent. Environmental impacts on biodiversity and ecosystem functioning may be much stronger than biodiversity impacts on ecosystem functioning at long, decadal-millennial, time scales. Longer time scale perspectives, including palaeoecological and ecosystem monitoring data, are critical for predicting future BEF relationships on a rapidly changing planet. © 2016 The Author(s).
Quasi-experimental study designs series-paper 6: risk of bias assessment.
Waddington, Hugh; Aloe, Ariel M; Becker, Betsy Jane; Djimeu, Eric W; Hombrados, Jorge Garcia; Tugwell, Peter; Wells, George; Reeves, Barney
2017-09-01
Rigorous and transparent bias assessment is a core component of high-quality systematic reviews. We assess modifications to existing risk of bias approaches to incorporate rigorous quasi-experimental approaches with selection on unobservables. These are nonrandomized studies using design-based approaches to control for unobservable sources of confounding such as difference studies, instrumental variables, interrupted time series, natural experiments, and regression-discontinuity designs. We review existing risk of bias tools. Drawing on these tools, we present domains of bias and suggest directions for evaluation questions. The review suggests that existing risk of bias tools provide, to different degrees, incomplete transparent criteria to assess the validity of these designs. The paper then presents an approach to evaluating the internal validity of quasi-experiments with selection on unobservables. We conclude that tools for nonrandomized studies of interventions need to be further developed to incorporate evaluation questions for quasi-experiments with selection on unobservables. Copyright © 2017 Elsevier Inc. All rights reserved.
Predicting disease progression from short biomarker series using expert advice algorithm
NASA Astrophysics Data System (ADS)
Morino, Kai; Hirata, Yoshito; Tomioka, Ryota; Kashima, Hisashi; Yamanishi, Kenji; Hayashi, Norihiro; Egawa, Shin; Aihara, Kazuyuki
2015-05-01
Well-trained clinicians may be able to provide diagnosis and prognosis from very short biomarker series using information and experience gained from previous patients. Although mathematical methods can potentially help clinicians to predict the progression of diseases, there is no method so far that estimates the patient state from very short time-series of a biomarker for making diagnosis and/or prognosis by employing the information of previous patients. Here, we propose a mathematical framework for integrating other patients' datasets to infer and predict the state of the disease in the current patient based on their short history. We extend a machine-learning framework of "prediction with expert advice" to deal with unstable dynamics. We construct this mathematical framework by combining expert advice with a mathematical model of prostate cancer. Our model predicted well the individual biomarker series of patients with prostate cancer that are used as clinical samples.
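"Prediction with expert advice" has a standard minimal form, the exponentially weighted average forecaster: each previous patient's model acts as an expert, and experts that predict the current patient poorly are down-weighted. The sketch below is that textbook form with squared-loss weighting, not the authors' extension to unstable dynamics.

```python
import math

def expert_advice_forecasts(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster: predict the weight-averaged
    expert opinion, then shrink each expert's weight by its squared error."""
    weights = [1.0] * len(expert_preds)
    forecasts = []
    for t, y in enumerate(outcomes):
        total = sum(weights)
        forecasts.append(sum(w * e[t] for w, e in zip(weights, expert_preds)) / total)
        # multiplicative update: heavier penalty for larger errors
        weights = [w * math.exp(-eta * (e[t] - y) ** 2)
                   for w, e in zip(weights, expert_preds)]
    return forecasts
```

After only a few observations the forecaster concentrates its weight on the experts whose histories resemble the current patient, which is exactly how a short biomarker record can borrow strength from earlier patients.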
Performance of FORTRAN floating-point operations on the Flex/32 multicomputer
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1987-01-01
A series of experiments has been run to examine the floating-point performance of FORTRAN programs on the Flex/32 (Trademark) computer. The experiments are described, and the timing results are presented. The time required to execute a floating-point operation is found to vary considerably depending on a number of factors. One factor of particular interest from an algorithm design standpoint is the difference in speed between common memory accesses and local memory accesses. Common memory accesses were found to be slower, and guidelines are given for determining when it may be cost effective to copy data from common to local memory.
Nitramine smokeless propellant research
NASA Technical Reports Server (NTRS)
1977-01-01
A transient ballistics and combustion model was derived to represent the closed vessel experiment that is widely used to characterize propellants. The model incorporates the nitramine combustion mechanisms. A computer program was developed to solve the time dependent equations, and was applied to explain aspects of closed vessel behavior. It is found that the rate of pressurization in the closed vessel is insufficient at pressures of interest to augment the burning rate by time dependent processes. Series of T-burner experiments were performed to compare the combustion instability characteristics of nitramine (HMX) containing propellants and ammonium perchlorate (AP) propellants. It is found that the inclusion of HMX consistently renders the propellant more stable.
Organics removal from landfill leachate and activated sludge production in SBR reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klimiuk, Ewa; Kulikowska, Dorota
2006-07-01
This study is aimed at estimating organic compound removal and sludge production in SBRs during treatment of landfill leachate. Four series were performed. In each series, experiments were carried out at hydraulic retention times (HRT) of 12, 6, 3 and 2 d. The series varied in SBR filling strategies, duration of the mixing and aeration phases, and the sludge age. In series 1 and 2 (a short filling period, mixing and aeration phases in the operating cycle), the relationship between organics concentration (COD) in the treated leachate and HRT followed pseudo-first-order kinetics. In series 3 (with mixing and aeration phases) and series 4 (aeration phase only), with leachate supplied by a peristaltic pump over 4 h of the cycle (filling during the reaction period), this relationship followed zero-order kinetics. Activated sludge production, expressed as the observed coefficient of biomass production (Y_obs), decreased correspondingly with increasing HRT. The smallest differences between reactors were observed in series 3, in which Y_obs was almost stable (0.55-0.6 mg VSS/mg COD). The elimination of the mixing phase in the cycle (series 4) caused Y_obs to decrease significantly, from 0.32 mg VSS/mg COD at HRT 2 d to 0.04 mg VSS/mg COD at HRT 12 d. The theoretical yield coefficient Y accounted for 0.534 mg VSS/mg COD (series 1) and 0.583 mg VSS/mg COD (series 2). In series 3 and 4, it was almost stable (0.628 mg VSS/mg COD and 0.616 mg VSS/mg COD, respectively). After the elimination of the mixing phase in the operating cycle, the specific biomass decay rate increased from 0.006 d^-1 (series 3) to 0.032 d^-1 (series 4). Operating conditions employing mixing/aeration or aeration-only phases enable regulation of the sludge production. SBRs operated under aerobic conditions are more favourable at a short hydraulic retention time. At a long hydraulic retention time, cell decay can lead to a decrease in biomass concentration in the SBR. By contrast, for the activated sludge at long HRT, a short filling period and an operating cycle with mixing and aeration phases seem the most favourable.
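The pseudo-first-order relationship between effluent COD and HRT reported for series 1 and 2 can be recovered by a log-linear regression. The numbers below are hypothetical illustrative values, not data from the study.

```python
import numpy as np

# hypothetical effluent COD (mg/L) at the four tested HRTs (d)
hrt = np.array([2.0, 3.0, 6.0, 12.0])
cod = np.array([820.0, 640.0, 310.0, 75.0])

# pseudo-first-order: C = C0 * exp(-k * HRT)  =>  ln C is linear in HRT
slope, intercept = np.polyfit(hrt, np.log(cod), 1)
k = -slope                 # removal rate constant, 1/d
c0 = np.exp(intercept)     # back-extrapolated initial COD, mg/L
```

A straight line through ln(COD) versus HRT confirms first-order behaviour; curvature or a constant removal per unit time would instead point to the zero-order kinetics seen in series 3 and 4.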
Three-dimensional time reversal communications in elastic media
Anderson, Brian E.; Ulrich, Timothy J.; Le Bas, Pierre-Yves; ...
2016-02-23
Our letter presents a series of vibrational communication experiments, using time reversal, conducted on a set of cast iron pipes. Time reversal has been used to provide robust, private, and clean communications in many underwater acoustic applications. Also, the use of time reversal to communicate along sections of pipes and through a wall is demonstrated here in order to overcome the complications of dispersion and multiple scattering. These demonstrations utilize a single source transducer and a single sensor, a triaxial accelerometer, enabling multiple channels of simultaneous communication streams to a single location.
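The refocusing property that makes time reversal attractive in dispersive, multiply scattering media can be seen in a few lines: convolving a channel's impulse response with its time-reversed copy concentrates the energy at a single instant. The toy illustration below uses a synthetic impulse response, not the pipe measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
# multipath-like impulse response: random arrivals with decaying envelope
h = rng.normal(size=n) * np.exp(-np.arange(n) / 60.0)

probe = h[::-1]                     # transmit the time-reversed response
received = np.convolve(probe, h)    # signal arriving back at the source point

peak = int(np.argmax(np.abs(received)))
# energy refocuses at lag n-1, where the convolution equals the channel's
# autocorrelation at zero lag, i.e. the total channel energy h @ h
```

By Cauchy-Schwarz, no other lag can exceed the zero-lag autocorrelation, so the scattered arrivals recombine coherently at one time sample: the basis for clean symbol detection through a dispersive pipe wall.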
A framework for periodic outlier pattern detection in time-series sequences.
Rasheed, Faraz; Alhajj, Reda
2014-05-01
Periodic pattern detection in time-ordered sequences is an important data mining task, which discovers in the time series all patterns that exhibit temporal regularities. Periodic pattern mining has a large number of applications in real life; it helps understanding the regular trend of the data along time, and enables the forecast and prediction of future events. An interesting related and vital problem that has not received enough attention is to discover outlier periodic patterns in a time series. Outlier patterns are defined as those which are different from the rest of the patterns; outliers are not noise. While noise does not belong to the data and it is mostly eliminated by preprocessing, outliers are actual instances in the data but have exceptional characteristics compared with the majority of the other instances. Outliers are unusual patterns that rarely occur, and, thus, have lesser support (frequency of appearance) in the data. Outlier patterns may hint toward discrepancy in the data such as fraudulent transactions, network intrusion, change in customer behavior, recession in the economy, epidemic and disease biomarkers, severe weather conditions like tornados, etc. We argue that detecting the periodicity of outlier patterns might be more important in many sequences than the periodicity of regular, more frequent patterns. In this paper, we present a robust and time efficient suffix tree-based algorithm capable of detecting the periodicity of outlier patterns in a time series by giving more significance to less frequent yet periodic patterns. Several experiments have been conducted using both real and synthetic data; all aspects of the proposed approach are compared with the existing algorithm InfoMiner; the reported results demonstrate the effectiveness and applicability of the proposed approach.
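The core idea, scoring a pattern by how consistently it recurs at a fixed phase of a candidate period, fits in a few lines. This is a naive counting sketch for single-symbol patterns, not the suffix-tree algorithm or InfoMiner.

```python
def periodic_patterns(seq, period, min_conf=0.8):
    """Score each (symbol, phase) pair by the fraction of cycles in which the
    symbol occupies that phase; report periodic patterns with their overall
    support, so rare-but-periodic (outlier) patterns stand out via low support."""
    cycles = len(seq) // period
    hits = []
    for phase in range(period):
        column = [seq[c * period + phase] for c in range(cycles)]
        for sym in set(column):
            conf = column.count(sym) / cycles
            if conf >= min_conf:
                support = seq.count(sym) / len(seq)  # overall rarity
                hits.append((sym, phase, conf, support))
    return hits
```

Sorting the results by ascending support surfaces exactly the patterns the paper targets: perfectly periodic yet infrequent events that frequency-oriented miners would discard.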
Pearce, Shane M; Pariser, Joseph J; Patel, Sanjay G; Anderson, Blake B; Eggener, Scott E; Zagaja, Gregory P
2016-02-01
To examine the effect of days off between cases on perioperative outcomes for robotic-assisted laparoscopic prostatectomy (RALP). We analyzed a single-surgeon series of 2036 RALP cases between 2003 and 2014. Days between cases (DBC) was calculated as the number of days elapsed since the surgeon's previous RALP, with second-start cases assigned 0 DBC. Surgeon experience was assessed by dividing sequential case experience into cases 0-99, cases 100-249, cases 250-999, and cases 1000+ based on previously reported learning-curve data for RALP. Outcomes included estimated blood loss (EBL), operative time (OT), and positive surgical margins (PSMs). Multiple linear regression was used to assess the impact of DBC and surgeon experience on EBL, OT, and PSM, while controlling for patient characteristics, surgical technique, and pathologic variables. Overall median DBC was 1 day (0-3) and declined with increasing surgeon case experience. Multiple linear regression demonstrated that each additional DBC was independently associated with increased EBL [β = 3.7, 95% CI (1.3 to 6.2), p < 0.01] and OT [β = 2.3 (1.4 to 3.2), p < 0.01], but was not associated with the rate of PSM [β = 0.004 (−0.003 to 0.010), p = 0.2]. Increased experience was also associated with reductions in EBL and OT (p < 0.01). Surgeon experience of 1000+ cases was associated with a 10% reduction in PSM rate (p = 0.03) compared to cases 0-99. In a large single-surgeon RALP series, DBC was associated with increased blood loss and operative time, but not with positive surgical margins, when controlling for surgeon experience.
Why didn't Box-Jenkins win (again)?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pack, D.J.; Downing, D.J.
This paper focuses on the forecasting performance of the Box-Jenkins methodology applied to the 111 time series of the Makridakis competition. It considers the influence of the following factors: (1) time series length, (2) time-series information (autocorrelation) content, (3) time-series outliers or structural changes, (4) averaging results over time series, and (5) forecast time origin choice. It is found that the 111 time series contain substantial numbers of very short series, series with obvious structural change, and series whose histories are relatively uninformative. If these series are typical of those that one must face in practice, the real message of the competition is that univariate time series extrapolations will frequently fail regardless of the methodology employed to produce them.
Multifractal analysis of visibility graph-based Ito-related connectivity time series.
Czechowski, Zbigniew; Lovallo, Michele; Telesca, Luciano
2016-02-01
In this study, we investigate multifractal properties of connectivity time series resulting from the visibility graph applied to normally distributed time series generated by Ito equations with multiplicative power-law noise. We show that the multifractality of the connectivity time series (i.e., the series of the number of links outgoing from each node) increases with the exponent of the power-law noise. The multifractality of the connectivity time series could be due to the width of the connectivity degree distribution, which can be related to the exit time of the associated Ito time series. Furthermore, the connectivity time series are characterized by persistence, although the original Ito time series are random; this is due to the visibility graph procedure that, in connecting the values of the time series, generates persistence but destroys most of the nonlinear correlations. Moreover, the visibility graph is sensitive in detecting wide "depressions" in the input time series.
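The connectivity series analyzed here comes from the natural visibility graph construction; a brute-force sketch (not the authors' code) makes the visibility criterion concrete:

```python
def visibility_degrees(x):
    """Degree sequence (number of visibility links per node) of the natural
    visibility graph of time series x, by brute-force O(n^2) check."""
    n = len(x)
    deg = [0] * n
    for a in range(n):
        for b in range(a + 1, n):
            # nodes a and b are linked if every intermediate sample lies
            # strictly below the line joining (a, x[a]) and (b, x[b])
            visible = all(
                x[c] < x[a] + (x[b] - x[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                deg[a] += 1
                deg[b] += 1
    return deg
```

For a collinear segment such as [1, 2, 3] the middle point blocks the long link, giving degrees [1, 2, 1]; a V-shaped segment such as [3, 1, 2] is fully connected.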
Campaign 1.7 Pu Aging. Development of Time of Flight Secondary Ion Mass Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venhaus, Thomas J.
2015-09-09
The first application of Time-of-Flight Secondary Ion Mass Spectroscopy (ToF-SIMS) to an aged plutonium surface has resulted in a rich set of surface chemistry data, as well as some unexpected results. FY15 was highlighted by not only the first mapping of hydrogen-containing features within the metal, but also a prove-in series of experiments using the system’s Sieverts Reaction Cell. These experiments involved successfully heating the sample to ~450 °C for nearly 24 hours while the sample was dosed several times with hydrogen, followed by an in situ ToF-SIMS analysis. During this year, the data allowed for better and more consistent identification of the myriad peaks that result from the SIMS sputter process. In collaboration with AWE (UK), the system was also fully aligned for sputter depth profiling for future experiments.
Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A.
2013-01-01
Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. 
Source and issue reports for MOBBED are maintained at http://vislab.github.com/MobbedMatlab/ PMID:24124417
ERIC Educational Resources Information Center
Phan, Huy P.; Ngu, Bing H.
2017-01-01
In social sciences, the use of stringent methodological approaches is gaining increasing emphasis. Researchers have recognized the limitations of cross-sectional, non-manipulative data in the study of causality. True experimental designs, in contrast, are preferred as they represent rigorous standards for achieving causal flows between variables.…
New Times, New Fathers = A temps moderne, papas modernes.
ERIC Educational Resources Information Center
Theilheimer, Ish, Ed.
1994-01-01
This theme issue of "Transition" features a series of articles on fatherhood and the changing role of fathers in parenting. The articles include: (1) "From Cloth to Paper Diapers and Back: Reflections on Fatherhood during Two Generations" (Robert Couchman), which relates experiences of a new father 20 years ago and today; (2)…
Penetration with Long Rods: A Theoretical Framework and Comparison with Instrumented Impacts
1981-05-01
program to begin probing the details of the interaction process. The theoretical framework underlying such a program is explained in detail. The theory of...the time sequence of events during penetration. Data from one series of experiments, reported in detail elsewhere, are presented and discussed within this theoretical framework.
ERIC Educational Resources Information Center
Reid, Alan; Payne, Phillip G.; Cutter-Mackenzie, Amy
2010-01-01
This not quite "final" ending of this special issue of "Environmental Education Research" traces a series of hopeful, if somewhat difficult and at times challenging, openings for researching experiences of environment and place through children's literature. In the first instance, we draw inspiration from the contributors who…
ERIC Educational Resources Information Center
Burkholder, Jessica Reno
2010-01-01
The research was guided by the research question: How do full-time single Turkish international graduate students conceptualize their experiences as international students? Participants in the study included three doctoral students and three master's students who participated in a series of semi-structured interviews. The data was transcribed and…
Processing Conversational Implicatures: Alternatives and Counterfactual Reasoning
ERIC Educational Resources Information Center
Tiel, Bob; Schaeken, Walter
2017-01-01
In a series of experiments, Bott and Noveck (2004) found that the computation of scalar inferences, a variety of conversational implicature, caused a delay in response times. In order to determine what aspect of the inferential process that underlies scalar inferences caused this delay, we extended their paradigm to three other kinds of…
Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design
ERIC Educational Resources Information Center
Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.
2010-01-01
Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…
District Self-Assessment Tool. College Readiness Indicator Systems Resource Series
ERIC Educational Resources Information Center
Annenberg Institute for School Reform at Brown University, 2014
2014-01-01
The purpose of the "District Self-Assessment Tool" is to provide school district and community stakeholders with an approach for assessing district capacity to support a college readiness indicator system and track progress over time. The tool draws on lessons from the collective implementation experiences of the four College Readiness…
Model Programs Compensatory Education: Mother-Child Home Program, Freeport, New York.
ERIC Educational Resources Information Center
American Institutes for Research in the Behavioral Sciences, Palo Alto, CA.
The Mother-Child Home Program was designed to modify the early cognitive experience of preschool disadvantaged children by "intervening" with a series of verbal stimulation activities planned to raise the child's measured IQ. Intervention was timed to occur with early speech development and within the context of family relationships. The…
DOT National Transportation Integrated Search
1996-12-01
Although the speed of some guided ground transportation systems continues to : increase, the reaction time and the sensory and information processing : capacities of railroad personnel remain constant. This second report in a : series examining criti...
Following and Giving Directions: Fifth Grade.
ERIC Educational Resources Information Center
Davis, Nancy
The fifth grade instructional unit, part of a grade school level career education series, is designed to assist learners in understanding how present experiences relate to past and future ones. Before the main body of the lessons is described, field test results are reported and key items are presented: the concepts, the estimated time for…
USDA-ARS?s Scientific Manuscript database
The objective of this study is to understand how soil microorganisms interact with cover crop-derived allelochemicals to suppress weed germination and growth following cover crop residue incorporation. We conducted a time series experiment by crossing sterilized and non-sterilized soil with four dif...
This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...
NASA Astrophysics Data System (ADS)
Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.
2017-11-01
In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method of Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for use with signals originating from airborne experiments. Their suitability is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data; they appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power spectral method. Hence, their application to short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.
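A minimal sketch of the classical zero-crossing estimate (not the paper's cutoff-corrected variants) proceeds in three steps: count sign changes of the velocity fluctuation, convert the crossing density to a Liepmann scale under Taylor's frozen-flow hypothesis, and apply the isotropic relation ε = 15 ν u'²/λ². Function and argument names are illustrative assumptions:

```python
import numpy as np

def dissipation_zero_crossings(u, fs, nu, advection_speed):
    """Classical zero-crossing estimate of the TKE dissipation rate.

    u               : velocity fluctuation series [m/s], sampled at fs [Hz]
    nu              : kinematic viscosity [m^2/s]
    advection_speed : mean speed for Taylor's frozen-flow hypothesis [m/s]
    """
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    # count sign changes of the fluctuation signal
    s = np.signbit(u)
    crossings = np.count_nonzero(s[1:] != s[:-1])
    record_length = advection_speed * len(u) / fs   # time converted to space
    density = crossings / record_length             # crossings per metre
    liepmann_scale = 1.0 / (np.pi * density)        # ~ Taylor microscale lambda
    return 15.0 * nu * u.var() / liepmann_scale**2  # isotropic relation

# synthetic fluctuation signal: band-limited noise
rng = np.random.default_rng(0)
u = np.convolve(rng.standard_normal(100_000), np.ones(20) / 20, mode="same")
eps = dissipation_zero_crossings(u, fs=10_000.0, nu=1.5e-5, advection_speed=50.0)
```

The estimate degrades exactly as the abstract warns when the sampling frequency is too low to resolve the finest crossings, which is the regime the paper's modified approaches address.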
High-resolution observations in the western Mediterranean Sea: the REP14-MED experiment
NASA Astrophysics Data System (ADS)
Onken, Reiner; Fiekas, Heinz-Volker; Beguery, Laurent; Borrione, Ines; Funk, Andreas; Hemming, Michael; Hernandez-Lasheras, Jaime; Heywood, Karen J.; Kaiser, Jan; Knoll, Michaela; Mourre, Baptiste; Oddo, Paolo; Poulain, Pierre-Marie; Queste, Bastien Y.; Russo, Aniello; Shitashima, Kiminori; Siderius, Martin; Thorp Küsel, Elizabeth
2018-04-01
The observational part of the REP14-MED experiment was conducted in June 2014 in the Sardo-Balearic Basin west of Sardinia (western Mediterranean Sea). Two research vessels collected high-resolution oceanographic data by means of hydrographic casts, towed systems, and underway measurements. In addition, a vast amount of data was provided by a fleet of 11 ocean gliders, time series were available from moored instruments, and information on Lagrangian flow patterns was obtained from surface drifters and one profiling float. The spatial resolution of the observations encompasses a spectrum over 4 orders of magnitude from 𝒪(10¹ m) to 𝒪(10⁵ m), and the time series from the moored instruments cover a spectral range of 5 orders from 𝒪(10¹ s) to 𝒪(10⁶ s). The objective of this article is to provide an overview of the huge data set which has been utilised by various studies, focusing on (i) water masses and circulation, (ii) operational forecasting, (iii) data assimilation, (iv) variability of the ocean, and (v) new payloads for gliders.
The RATIO method for time-resolved Laue crystallography
Coppens, Philip; Pitak, Mateusz; Gembicky, Milan; Messerschmidt, Marc; Scheins, Stephan; Benedict, Jason; Adachi, Shin-ichi; Sato, Tokushi; Nozawa, Shunsuke; Ichiyanagi, Kohei; Chollet, Matthieu; Koshihara, Shin-ya
2009-01-01
A RATIO method for analysis of intensity changes in time-resolved pump–probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam. PMID:19240334
Stochastic modeling of experimental chaotic time series.
Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram
2007-01-26
Methods developed recently to obtain stochastic models of low-dimensional chaotic systems are tested in electronic circuit experiments. We demonstrate that reliable drift and diffusion coefficients can be obtained even when no excessive time scale separation occurs. Crisis induced intermittent motion can be described in terms of a stochastic model showing tunneling which is dominated by state space dependent diffusion. Analytical solutions of the corresponding Fokker-Planck equation are in excellent agreement with experimental data.
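Drift and diffusion coefficients of the kind fitted here are conventionally estimated from conditional moments of the increments. The following Kramers-Moyal-style sketch (with illustrative binning and variable names, not the authors' method) recovers them for a simulated Ornstein-Uhlenbeck process:

```python
import numpy as np

def drift_diffusion(x, dt, nbins=20, min_count=50):
    """Estimate state-dependent drift D1(x) and diffusion D2(x) from a
    sampled time series via conditional moments of the increments."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, nbins - 1)
    centers, drift, diffusion = [], [], []
    for b in range(nbins):
        inc = dx[idx == b]
        if len(inc) < min_count:
            continue                                    # skip poorly sampled bins
        centers.append(0.5 * (edges[b] + edges[b + 1]))
        drift.append(inc.mean() / dt)                   # D1 = <dx | x> / dt
        diffusion.append((inc ** 2).mean() / (2 * dt))  # D2 = <dx^2 | x> / (2 dt)
    return np.array(centers), np.array(drift), np.array(diffusion)

# Ornstein-Uhlenbeck test process: dx = -x dt + sqrt(2 D) dW, with D = 0.5
rng = np.random.default_rng(1)
dt, n, d = 0.01, 50_000, 0.5
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] - x[i - 1] * dt + np.sqrt(2 * d * dt) * rng.standard_normal()
centers, d1, d2 = drift_diffusion(x, dt)
```

For this process the estimated drift should fall close to the line D1(x) = -x and the diffusion close to the constant D = 0.5, which is the kind of state-space-dependent reconstruction the abstract describes.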
Corroded Anchor Structure Stability/Reliability (CAS_Stab-R) Software for Hydraulic Structures
2017-12-01
This report describes software that provides a probabilistic estimate of time-to-failure for a corroding anchor strand system. These anchor...stability to the structure. A series of unique pull-test experiments conducted by Ebeling et al. (2016) at the U.S. Army Engineer Research and...Reliability (CAS_Stab-R) produces probabilistic remaining anchor lifetime estimates for anchor cables based upon the direct corrosion rate for the
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
Using exogenous variables in testing for monotonic trends in hydrologic time series
Alley, William M.
1988-01-01
One approach that has been used in performing a nonparametric test for monotonic trend in a hydrologic time series consists of a two-stage analysis. First, a regression equation is estimated for the variable being tested as a function of an exogenous variable. A nonparametric trend test such as the Kendall test is then performed on the residuals from the equation. By analogy to stagewise regression and through Monte Carlo experiments, it is demonstrated that this approach will tend to underestimate the magnitude of the trend and to result in some loss in power as a result of ignoring the interaction between the exogenous variable and time. An alternative approach, referred to as the adjusted variable Kendall test, is demonstrated to generally have increased statistical power and to provide more reliable estimates of the trend slope. In addition, the utility of including an exogenous variable in a trend test is examined under selected conditions.
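The contrast between the two procedures can be sketched directly. The adjusted-variable version below follows the added-variable idea (residuals of Y given X correlated against residuals of time given X) and is a plausible reading of the test, not the paper's exact formulation; names are illustrative:

```python
import numpy as np

def kendall_tau(a, b):
    """Kendall's tau correlation (simple O(n^2) version, no tie correction)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    s = 0.0
    for i in range(n):
        s += np.sum(np.sign((a[i + 1:] - a[i]) * (b[i + 1:] - b[i])))
    return s / (n * (n - 1) / 2)

def residual_kendall_trend(y, x):
    """Two-stage test: tau of time against residuals of y regressed on x."""
    t = np.arange(len(y), dtype=float)
    resid = y - np.polyval(np.polyfit(x, y, 1), x)
    return kendall_tau(t, resid)

def adjusted_variable_kendall(y, x):
    """Adjusted-variable tau: residuals of y|x against residuals of t|x."""
    t = np.arange(len(y), dtype=float)
    ry = y - np.polyval(np.polyfit(x, y, 1), x)
    rt = t - np.polyval(np.polyfit(x, t, 1), x)
    return kendall_tau(rt, ry)

# exogenous variable strongly correlated with time, plus a genuine trend in y
rng = np.random.default_rng(2)
n = 100
t = np.arange(n, dtype=float)
x = t + 5.0 * rng.standard_normal(n)
y = 0.1 * t + x + 0.01 * rng.standard_normal(n)

tau_stagewise = residual_kendall_trend(y, x)
tau_adjusted = adjusted_variable_kendall(y, x)
```

When the exogenous variable is itself correlated with time, the stagewise regression absorbs most of the trend, so the residual test sees little; the adjusted-variable statistic retains it, illustrating the loss of power the abstract describes.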
Tool Wear Monitoring Using Time Series Analysis
NASA Astrophysics Data System (ADS)
Song, Dong Yeul; Ohara, Yasuhiro; Tamaki, Haruo; Suga, Masanobu
A tool wear monitoring approach considering the nonlinear behavior of the cutting mechanism caused by tool wear and/or localized chipping is proposed, and its effectiveness is verified through cutting experiments and actual turning machining. Moreover, the variation in the surface roughness of the machined workpiece is also discussed using this approach. In this approach, the residual error between the actually measured vibration signal and the estimated signal obtained from the time series model corresponding to the dynamic model of cutting is introduced as the diagnostic feature. Consequently, it is found that the early tool wear state (i.e. flank wear under 40 µm) can be monitored, and that the optimal tool exchange time and the tool wear state in actual turning machining can be judged from this change in the residual error. Moreover, the variation of surface roughness Pz in the range of 3 to 8 µm can be estimated by monitoring the residual error.
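The residual-error feature can be sketched with an ordinary least-squares AR model standing in for the paper's time series model of cutting dynamics. The model order, signal lengths, and the impulsive "chipping" term are illustrative assumptions:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) coefficients: x[t] ~ sum_k a_k * x[t-k]."""
    y = x[p:]
    lags = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(lags, y, rcond=None)
    return coef

def residual_rms(x, coef):
    """RMS one-step prediction error of x under a fitted AR model."""
    p = len(coef)
    y = x[p:]
    lags = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    return float(np.sqrt(np.mean((y - lags @ coef) ** 2)))

# baseline "sharp tool" vibration signal: an AR(2) process
rng = np.random.default_rng(3)
n = 5_000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + rng.standard_normal()
coef = fit_ar(x, 2)

# "worn tool" signal: same dynamics plus occasional chipping-like impulses
worn = x + (rng.random(n) < 0.01) * rng.normal(0.0, 8.0, n)
```

The fitted baseline model predicts the sharp-tool signal well, while impulsive departures from the identified dynamics inflate the residual RMS, which is the monitored quantity in the abstract.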
1987-06-01
number of series among the 63 which were identified as a particular ARIMA form and were "best" modeled by a particular technique. Figure 1 illustrates a...th time from x_t's. The integrated autoregressive moving average model, denoted by ARIMA(p,d,q), is a result of combining the d-th differencing process...Experiments, (4) Data Analysis and Modeling, (5) Theory and Probabilistic Inference, (6) Fuzzy Statistics, (7) Forecasting and Prediction, (8) Small Sample
NASA Astrophysics Data System (ADS)
Champion, N.
2012-08-01
Contrary to aerial images, satellite images are often affected by the presence of clouds. Identifying and removing these clouds is one of the primary steps to perform when processing satellite images, as they may alter subsequent procedures such as atmospheric corrections, DSM production, or land cover classification. The main goal of this paper is to present the cloud detection approach developed at the French mapping agency. Our approach relies on the availability of multi-temporal satellite images (i.e. time series that generally contain between 5 and 10 images) and on a region-growing procedure. Seeds (corresponding to clouds) are first extracted through a pixel-to-pixel comparison between the images contained in the time series (the presence of a cloud is here assumed to be related to a high variation of reflectance between two images). Clouds are then delineated finely using a dedicated region-growing algorithm. The method, originally designed for panchromatic SPOT5-HRS images, is tested in this paper using time series with 9 multi-temporal satellite images. Our preliminary experiments show the good performance of our method. In the near future, the method will be applied to Pléiades images acquired during the in-flight commissioning phase of the satellite (launched at the end of 2011). In that context, it is a particular goal of this paper to show to what extent and in what way our method can be adapted to this kind of imagery.
He, Meilin; Shen, Wenbin; Chen, Ruizhi; Ding, Hao; Guo, Guangyi
2017-01-01
The solid Earth deforms elastically in response to variations of surface atmosphere, hydrology, and ice/glacier mass loads. Continuous geodetic observations by Global Positioning System (CGPS) stations and the Gravity Recovery and Climate Experiment (GRACE) record such deformations, allowing seasonal and secular mass changes to be estimated. In this paper, we present the seasonal variation of surface mass changes and crustal vertical deformation in the South China Block (SCB) identified by GPS and GRACE observations with records spanning from 1999 to 2016. We used 33 CGPS stations to construct time series of coordinate changes, which are decomposed by empirical orthogonal functions (EOFs) in the SCB. The average weighted root-mean-square (WRMS) reduction is 38% when we subtract GRACE-modeled vertical displacements from the GPS time series. The first common mode shows clear seasonal changes, indicating seasonal surface mass re-distribution in and around the South China Block. The correlation between GRACE and GPS time series is analyzed, which provides a reference for further improvement of the seasonal variation of CGPS time series. Inversion of the GRACE observations yields the surface deformation caused by surface mass-change loading, at a rate of about −0.4 to −0.8 mm/year; this is used to improve the long-term trend of the non-tectonic loading component of the GPS vertical velocity field and to further explain crustal tectonic movement in the SCB and surroundings. PMID:29301236
Climate-driven seasonal geocenter motion during the GRACE period
NASA Astrophysics Data System (ADS)
Zhang, Hongyue; Sun, Yu
2018-03-01
Annual cycles in the geocenter motion time series are primarily driven by mass changes in the Earth's hydrologic system, which includes land hydrology, atmosphere, and oceans. Seasonal variations of the geocenter motion have been reliably determined by Sun et al. (J Geophys Res Solid Earth 121(11):8352-8370, 2016) by combining Gravity Recovery And Climate Experiment (GRACE) data with an ocean model output. In this study, we reconstructed the observed seasonal geocenter motion with geophysical model predictions of mass variations in the polar ice sheets, continental glaciers, terrestrial water storage (TWS), and atmosphere and dynamic ocean (AO). The reconstructed geocenter motion time series is shown to be in close agreement with the solution based on GRACE data supported by an ocean bottom pressure model. Over 85% of the variance of the observed geocenter motion time series can be explained by the reconstructed solution, which allows a further investigation of the driving mechanisms. We then demonstrated that the AO component accounts for 54, 62, and 25% of the observed geocenter motion variances in the X, Y, and Z directions, respectively. The TWS component alone explains 42, 32, and 39% of the observed variances. The net mass changes over the oceans, together with self-attraction and loading effects, also contribute significantly (about 30%) to the seasonal geocenter motion in the X and Z directions. Other contributing sources, on the other hand, have a marginal (less than 10%) impact on the seasonal variations but introduce a linear trend into the time series.
TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series
NASA Astrophysics Data System (ADS)
Czerwinski, Fabian; Oddershede, Lene B.
2011-02-01
With modern data acquisition devices that work fast and very precise, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series of MHz sampling. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: For a photodiode detection system that tracks the position of an optically trapped particle and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile as the input can be any type of analog signal. Also, the data streaming software is simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification. Program summaryProgram title: TimeSeriesStreaming.VI Catalogue identifier: AEHT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 250 No. of bytes in distributed program, including test data, etc.: 63 259 Distribution format: tar.gz Programming language: LabVIEW ( http://www.ni.com/labview/) Computer: Any machine running LabVIEW 8.6 or higher Operating system: Windows XP and Windows 7 RAM: 60-360 Mbyte Classification: 3 Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled with high frequencies and possibly for long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods. 
Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken with high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses. Restrictions: Only tested in Windows-operating LabVIEW environments, must use TDMS format, acquisition cards must be LabVIEW compatible, driver DAQmx installed. Running time: As desirable: microseconds to hours
Regenerating time series from ordinal networks.
McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael
2017-03-01
Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discrete sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
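The construction-and-regeneration loop can be sketched compactly. This is a generic ordinal-partition network with count-weighted random walks, an illustrative reading of the procedure rather than the authors' implementation:

```python
import numpy as np
from collections import defaultdict

def ordinal_symbols(x, m):
    """Map each length-m window of x to its ordinal pattern (a permutation)."""
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

def ordinal_network(symbols):
    """Directed transition counts between consecutive ordinal patterns."""
    edges = defaultdict(lambda: defaultdict(int))
    for a, b in zip(symbols, symbols[1:]):
        edges[a][b] += 1
    return edges

def regenerate(edges, start, steps, rng):
    """Random walk on the ordinal network, weighted by transition counts."""
    walk, node = [start], start
    for _ in range(steps):
        nbrs = edges[node]
        if not nbrs:
            break                   # dead end: no outgoing transitions observed
        targets = list(nbrs)
        weights = np.array([nbrs[t] for t in targets], dtype=float)
        node = targets[rng.choice(len(targets), p=weights / weights.sum())]
        walk.append(node)
    return walk

# noisy oscillation as a stand-in for a sampled chaotic flow
x = (np.sin(np.linspace(0.0, 60.0, 600))
     + 0.05 * np.random.default_rng(4).standard_normal(600))
symbols = ordinal_symbols(x, m=3)
net = ordinal_network(symbols)
surrogate = regenerate(net, symbols[0], 200, np.random.default_rng(5))
```

The surrogate walk visits only patterns observed in the original series, with transition statistics matching the empirical ones; the paper's analyses (recurrence plots, Lyapunov exponents, correlation dimension) then quantify how much of the dynamics such surrogates retain.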
Discovering significant evolution patterns from satellite image time series.
Petitjean, François; Masseglia, Florent; Gançarski, Pierre; Forestier, Germain
2011-12-01
Satellite Image Time Series (SITS) provide us with precious information on land cover evolution. By studying these series of images we can both understand the changes of specific areas and discover global phenomena that spread over larger areas. Changes that occur throughout the sensing time can spread over very long periods and may have different start and end times depending on the location, which complicates the mining and the analysis of series of images. This work focuses on frequent sequential pattern mining (FSPM) methods, since this family of methods fits the above-mentioned issues: it finds the most frequent evolution behaviors, and is able to extract long-term changes as well as short-term ones, whenever the change may start and end. However, applying FSPM methods to SITS raises two main challenges, related to the characteristics of SITS and the domain's constraints. First, satellite images associate multiple measures with a single pixel (the radiometric levels of different wavelengths corresponding to infra-red, red, etc.), which makes the search space multi-dimensional and thus requires specific mining algorithms. Furthermore, the non-evolving regions, which are the vast majority and overwhelm the evolving ones, hinder the discovery of these patterns. We propose a SITS mining framework that enables discovery of these patterns despite these constraints and characteristics. Our proposal is inspired by FSPM and provides a relevant visualization principle. Experiments carried out on 35 images sensed over 20 years show that the proposed approach makes it possible to extract relevant evolution behaviors.
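The frequent-sequence counting at the heart of FSPM can be sketched in a few lines. This is a generic Apriori-style enumeration over single-band symbolic pixel evolutions, not the authors' multi-dimensional algorithm; the land-cover labels and support threshold are invented for illustration.

```python
def is_subsequence(pattern, sequence):
    # True if pattern's symbols occur in order (possibly with gaps) in sequence.
    it = iter(sequence)
    return all(symbol in it for symbol in pattern)

def frequent_patterns(sequences, min_support, max_len=3):
    # Keep every pattern whose support (fraction of pixel evolutions that
    # contain it) meets min_support; grow only frequent patterns, Apriori-style.
    symbols = sorted({s for seq in sequences for s in seq})
    frequent = {}
    candidates = [(s,) for s in symbols]
    length = 1
    while candidates and length <= max_len:
        kept = []
        for pat in candidates:
            support = sum(is_subsequence(pat, seq) for seq in sequences) / len(sequences)
            if support >= min_support:
                frequent[pat] = support
                kept.append(pat)
        candidates = [pat + (s,) for pat in kept for s in symbols]
        length += 1
    return frequent

# Each tuple is one pixel's land-cover evolution over three acquisition dates.
pixels = [
    ("bare", "crop", "crop"),
    ("bare", "crop", "urban"),
    ("water", "water", "water"),
]
patterns = frequent_patterns(pixels, min_support=0.6)
```

Here the evolution ("bare", "crop") is frequent (two pixels out of three), while the stable "water" pixel illustrates how non-evolving regions would dominate the counts at scale.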
GPS Position Time Series @ JPL
NASA Technical Reports Server (NTRS)
Owen, Susan; Moore, Angelyn; Kedar, Sharon; Liu, Zhen; Webb, Frank; Heflin, Mike; Desai, Shailen
2013-01-01
Different flavors of GPS time series analysis at JPL all start from the same GPS Precise Point Positioning Analysis raw time series; variations in time series analysis and post-processing are driven by different users:
- JPL Global Time Series/Velocities: researchers studying the reference frame, combining with VLBI/SLR/DORIS.
- JPL/SOPAC Combined Time Series/Velocities: crustal deformation for tectonic, volcanic, and ground water studies.
- ARIA Time Series/Coseismic Data Products: focused on hazard monitoring and response.
The ARIA data system is designed to integrate GPS and InSAR: GPS tropospheric delay is used for correcting InSAR, and Caltech's GIANT time series analysis uses GPS to correct orbital errors in InSAR. (Zhen Liu talks tomorrow on InSAR time series analysis.)
Zhang, Hong; Zhang, Sheng; Wang, Ping; Qin, Yuzhe; Wang, Huifeng
2017-07-01
Particulate matter with aerodynamic diameter below 10 μm (PM10) is difficult to forecast because of the uncertainties in describing the emission and meteorological fields. This paper proposed a wavelet-ARMA/ARIMA model to forecast short-term series of PM10 concentrations. It was evaluated by experiments using a 10-year data set of daily PM10 concentrations from 4 stations located in Taiyuan, China. The results indicated the following: (1) PM10 concentrations in Taiyuan had a decreasing trend from 2005 to 2012 but increased in 2013; PM10 concentrations had an obvious seasonal fluctuation related to coal-fired heating in winter and early spring. (2) Spatial differences among the four stations showed that PM10 concentrations in industrial and heavily trafficked areas were higher than those in residential and suburban areas. (3) Wavelet analysis revealed that the trend variation and the changes of the PM10 concentrations of Taiyuan were complicated. (4) The proposed model could be efficiently and successfully applied to PM10 forecasting: compared with the traditional ARMA/ARIMA methods, the wavelet-ARMA/ARIMA method effectively reduces the forecasting error, improves the prediction accuracy, and enables multiple-time-scale prediction. Wavelet analysis can filter noisy signals and identify the variation trend and the fluctuation of the PM10 time-series data; wavelet decomposition and reconstruction reduce the nonstationarity of the data and thus improve the accuracy of the prediction.
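The decompose-forecast-recombine idea can be sketched minimally. The one-level Haar transform and the intercept-free AR(1) fit below stand in for the paper's full multilevel wavelet-ARMA/ARIMA pipeline, and the PM10 numbers are invented; this is an illustration of the scheme, not the authors' model.

```python
def haar_step(x):
    # One-level Haar transform: pairwise means (trend) and half-differences
    # (fluctuation). Assumes an even-length series.
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return approx, detail

def ar1_forecast(x):
    # Least-squares AR(1) without intercept: x[t] ~ phi * x[t-1].
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(a * a for a in x[:-1])
    phi = num / den if den else 0.0
    return phi * x[-1]

def wavelet_ar_forecast(x):
    # Forecast each wavelet component separately, then recombine:
    # next approximation + next detail gives the next observation.
    approx, detail = haar_step(x)
    return ar1_forecast(approx) + ar1_forecast(detail)

pm10 = [80, 90, 70, 85, 75, 95, 78, 88]  # invented daily concentrations
forecast = wavelet_ar_forecast(pm10)
```

Forecasting the smooth and oscillatory components with separate models is what lets the combined method track both the seasonal trend and the day-to-day fluctuation.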
NASA Astrophysics Data System (ADS)
Krishnamurthy, V. V.; Russell, David J.; Hadden, Chad E.; Martin, Gary E.
2000-09-01
The development of a series of new, accordion-optimized long-range heteronuclear shift correlation techniques has been reported. A further derivative of the constant time variable delay introduced in the IMPEACH-MBC experiment, a STAR (Selectively Tailored Accordion F1 Refocusing) operator, is described in the present report. Incorporation of the STAR operator, with the capability of user-selected homonuclear modulation scaling as in the CIGAR-HMBC experiment, into a long-range heteronuclear shift correlation pulse sequence, ²J,³J-HMBC, affords for the first time in a proton-detected experiment the means of unequivocally differentiating two-bond (²JCH) from three-bond (³JCH) long-range correlations to protonated carbons.
Nonlinear effects in the time measurement device based on surface acoustic wave filter excitation.
Prochazka, Ivan; Panek, Petr
2009-07-01
A transversal surface acoustic wave filter has been used as a time interpolator in a time interval measurement device. We present the experiments and results of an analysis of the nonlinear effects in such a time interpolator. The analysis shows that the nonlinear distortion in the time interpolator circuits causes a deterministic measurement error which can be understood as the time interpolation nonlinearity. The dependence of this error on the time of the measured events can be expressed as a sparse Fourier series, and thus it usually oscillates very quickly in comparison to the clock period. The theoretical model is in good agreement with experiments carried out on an experimental two-channel timing system. Using highly linear amplifiers in the time interpolator and adjusting the filter excitation level to the optimum, we have achieved an interpolation nonlinearity below 0.2 ps. The overall single-shot precision of the experimental timing device is 0.9 ps rms in each channel.
Single-operator real-time ultrasound-guided spinal injection using SonixGPS™: a case series.
Brinkmann, Silke; Tang, Raymond; Sawka, Andrew; Vaghadia, Himat
2013-09-01
The SonixGPS™ is a novel needle tracking system that has recently been approved in Canada for ultrasound-guided needle interventions. It allows optimization of needle-beam alignment by providing a real-time display of current and predicted needle tip position. Currently, there is limited evidence on the effectiveness of this technique for performance of real-time spinal anesthesia. This case series reports performance of the SonixGPS system for real-time ultrasound-guided spinal anesthesia in elective patients scheduled for joint arthroplasty. In this single-centre case series, 20 American Society of Anesthesiologists' class I-II patients scheduled for lower limb joint arthroplasty were recruited to undergo real-time ultrasound-guided spinal anesthesia with the SonixGPS after written informed consent. The primary outcome for this clinical cases series was the success rate of spinal anesthesia, and the main secondary outcome was time required to perform spinal anesthesia. Successful spinal anesthesia for joint arthroplasty was achieved in 18/20 patients, and 17 of these required only a single skin puncture. In 7/20 (35%) patients, dural puncture was achieved on the first needle pass, and in 11/20 (55%) patients, dural puncture was achieved with two or three needle redirections. Median (range) time taken to perform the block was 8 (5-14) min. The study procedure was aborted in two cases because our clinical protocol dictated using a standard approach if spinal anesthesia was unsuccessful after three ultrasound-guided insertion attempts. These two cases were classified as failures. No complications, including paresthesia, were observed during the procedure. All patients with successful spinal anesthesia found the technique acceptable and were willing to undergo a repeat procedure if deemed necessary. This case series shows that real-time ultrasound-guided spinal anesthesia with the SonixGPS system is possible within an acceptable time frame. 
It proved effective with a low rate of failure and a low rate of complications. Our clinical experience suggests that a randomized trial is warranted to compare the SonixGPS with a standard block technique.
Lara, Juan A; Lizcano, David; Pérez, Aurora; Valente, Juan P
2014-10-01
There are now domains where information is recorded over a period of time, leading to sequences of data known as time series. In many domains, like medicine, time series analysis requires focusing on certain regions of interest, known as events, rather than analyzing the whole time series. In this paper, we propose a framework for knowledge discovery in both one-dimensional and multidimensional time series containing events. We show how our approach can be used to classify medical time series by means of a process that identifies events in time series, generates time series reference models of representative events and compares two time series by analyzing the events they have in common. We have applied our framework to time series generated in the areas of electroencephalography (EEG) and stabilometry. Framework performance was evaluated in terms of classification accuracy, and the results confirmed that the proposed schema has potential for classifying EEG and stabilometric signals. The proposed framework is useful for discovering knowledge from medical time series containing events, such as stabilometric and electroencephalographic time series. These results would be equally applicable to other medical domains generating iconographic time series, such as electrocardiography (ECG). Copyright © 2014 Elsevier Inc. All rights reserved.
Corazzini, Luca; Filippin, Antonio; Vanin, Paolo
2015-01-01
We report results from an incentivized laboratory experiment undertaken with the purpose of providing controlled evidence on the causal effects of alcohol consumption on risk-taking, time preferences and altruism. Our design disentangles the pharmacological effects of alcohol intoxication from those mediated by expectations, as we compare the behavior of three groups of subjects: those who participated in an experiment with no reference to alcohol, those who were exposed to the possibility of consuming alcohol but were given a placebo and those who effectively consumed alcohol. All subjects participated in a series of economic tasks administered in the same sequence across treatments. After controlling for both the willingness to pay for an object and the potential misperception of probabilities as elicited in the experiment, we detect no effect of alcohol in depleting subjects’ risk tolerance. However, we find that alcohol intoxication increases impatience and makes subjects less altruistic. PMID:25853520
Application of blind source separation to real-time dissolution dynamic nuclear polarization.
Hilty, Christian; Ragavan, Mukundan
2015-01-20
The use of a blind source separation (BSS) algorithm is demonstrated for the analysis of time series of nuclear magnetic resonance (NMR) spectra. This type of data is obtained commonly from experiments where analytes are hyperpolarized using dissolution dynamic nuclear polarization (D-DNP), in both in vivo and in vitro contexts. High signal gains in D-DNP enable rapid measurement of data sets characterizing the time evolution of chemical or metabolic processes. BSS is based on an algorithm that can be applied to separate the different components contributing to the NMR signal and determine the time dependence of the signals from these components. This algorithm requires minimal prior knowledge of the data, notably, no reference spectra need to be provided, and can therefore be applied rapidly. In a time-resolved measurement of the enzymatic conversion of hyperpolarized oxaloacetate to malate, the two signal components are separated into computed source spectra that closely resemble the spectra of the individual compounds. An improvement in the signal-to-noise ratio of the computed source spectra is found compared to the original spectra, presumably resulting from the presence of each signal more than once in the time series. The reconstruction of the original spectra yields the time evolution of the contributions from the two sources, which also corresponds closely to the time evolution of integrated signal intensities from the original spectra. BSS may therefore be an approach for the efficient identification of components and estimation of kinetics in D-DNP experiments, which can be applied at a high level of automation.
Combining neural networks and genetic algorithms for hydrological flow forecasting
NASA Astrophysics Data System (ADS)
Neruda, Roman; Srejber, Jan; Neruda, Martin; Pascenko, Petr
2010-05-01
We present a neural network approach to rainfall-runoff modeling for small river basins based on several time series of hourly measured data. Different neural networks are considered for short-term runoff predictions (from one to six hours lead time) based on runoff and rainfall data observed in previous time steps. Correlation analysis shows that runoff data, short-term rainfall history, and aggregated API values are the most significant inputs for the prediction. Neural models of multilayer perceptron and radial basis function networks with different numbers of units are used and compared with more traditional linear time series predictors. Out of a possible 48 hours of relevant history of all the input variables, the most important ones are selected by means of input filters created by a genetic algorithm. The genetic algorithm works with a population of binary encoded vectors defining input selection patterns. Standard genetic operators of two-point crossover, random bit-flipping mutation, and tournament selection were used. The evaluation of the objective function of each individual consists of several rounds of building and testing a particular neural network model. The whole procedure is computationally demanding (taking hours to days on a desktop PC), so a high-performance mainframe computer was used for our experiments. Results based on two years' worth of data from the Ploucnice river in Northern Bohemia suggest that the main problems with this approach to modeling are overtraining, which can lead to poor generalization, and the relatively small number of extreme events, which makes it difficult for a model to predict the amplitude of an event. Thus, experiments with both absolute and relative runoff predictions were carried out. In general, the neural models show about 5 per cent improvement in terms of efficiency coefficient over linear models. Multilayer perceptrons with one hidden layer trained by the back propagation algorithm and predicting relative runoff show the best behavior so far. Utilizing the genetically evolved input filter improves the performance by yet another 5 per cent. In the future we would like to continue with experiments in on-line prediction using real-time data from the Smeda River with a 6-hour lead-time forecast. Following operational reality, we will focus on classification of runoffs into flood alert levels, and reformulation of the time series prediction task as a classification problem. The main goal of all this work is to improve the flood warning system operated by the Czech Hydrometeorological Institute.
Rapid variability of Antarctic Bottom Water transport into the Pacific Ocean inferred from GRACE
NASA Astrophysics Data System (ADS)
Mazloff, Matthew R.; Boening, Carmen
2016-04-01
Air-ice-ocean interactions in the Antarctic lead to formation of the densest waters on Earth. These waters convect and spread to fill the global abyssal oceans. The heat and carbon storage capacity of these water masses, combined with their abyssal residence times that often exceed centuries, makes this circulation pathway the most efficient sequestering mechanism on Earth. Yet monitoring this pathway has proven challenging due to the nature of the formation processes and the depth of the circulation. The Gravity Recovery and Climate Experiment (GRACE) gravity mission is providing a time series of ocean mass redistribution and offers a transformative view of the abyssal circulation. Here we use the GRACE measurements to infer, for the first time, a 2003-2014 time series of Antarctic Bottom Water export into the South Pacific. We find this export highly variable, with a standard deviation of 1.87 sverdrup (Sv) and a decorrelation timescale of less than 1 month. A significant trend is undetectable.
Valdés, Julio J; Bonham-Carter, Graeme
2006-03-01
A computational intelligence approach is used to explore the problem of detecting internal state changes in time-dependent processes described by heterogeneous, multivariate time series with imprecise data and missing values. Such processes are approximated by collections of time-dependent non-linear autoregressive models represented by a special kind of neuro-fuzzy neural network. Grid and high-throughput computing model mining procedures based on neuro-fuzzy networks and genetic algorithms generate: (i) collections of models composed of sets of time lag terms from the time series, and (ii) prediction functions represented by neuro-fuzzy networks. The composition of the models and their prediction capabilities allow the identification of changes in the internal structure of the process. These changes are associated with the alternation of steady and transient states, zones with abnormal behavior, instability, and other situations. This approach is general, and its sensitivity for detecting subtle changes of state is revealed by simulation experiments. Its potential in the study of complex processes in earth sciences and astrophysics is illustrated with applications using paleoclimate and solar data.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model, based on estimating missing values followed by variable selection, to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated into an integrated dataset, ordered by date, as the research dataset. The proposed time-series forecasting model has three main steps. First, this study uses five imputation methods to handle the missing values. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiment shows that the proposed variable selection can help the forecasting methods used here improve their forecasting capability.
Generalized Feature Extraction for Wrist Pulse Analysis: From 1-D Time Series to 2-D Matrix.
Dimin Wang; Zhang, David; Guangming Lu
2017-07-01
Traditional Chinese pulse diagnosis, known as an empirical science, depends on subjective experience, and inconsistent diagnostic results may be obtained by different practitioners. A scientific way of studying the pulse is to analyze objectified wrist pulse waveforms. In recent years, many pulse acquisition platforms have been developed with the advances in sensor and computer technology, and pulse diagnosis using pattern recognition theory is attracting increasing attention. Though many papers on pulse feature extraction have been published, they handle the pulse signals as simple 1-D time series and ignore the information within the class. This paper presents a generalized method of pulse feature extraction, extending the feature dimension from a 1-D time series to a 2-D matrix. The conventional wrist pulse features correspond to a particular case of the generalized models. The proposed method is validated through pattern classification on actual pulse records. Both quantitative and qualitative results relative to the 1-D pulse features are given through diabetes diagnosis. The experimental results show that the generalized 2-D matrix feature is effective in extracting both periodic and nonperiodic information, and it is practical for wrist pulse analysis.
Efficient Bayesian inference for natural time series using ARFIMA processes
NASA Astrophysics Data System (ADS)
Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.
2015-11-01
Many geophysical quantities, such as atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long memory (LM). LM implies that these quantities experience non-trivial temporal memory, which potentially not only enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LM. In this paper we present a modern and systematic approach to the inference of LM. We use the flexible autoregressive fractional integrated moving average (ARFIMA) model, which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LM, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g., short-memory effects) can be integrated over in order to focus on long-memory parameters and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data and the central England temperature (CET) time series, with favorable comparison to the standard estimators. For CET we also extend our method to seasonal long memory.
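The long-memory parameter d at the core of ARFIMA enters through the fractional differencing operator (1 - B)^d, whose binomial expansion gives weights pi_0 = 1, pi_k = pi_{k-1} (k - 1 - d) / k. A minimal sketch of that expansion (not the paper's Bayesian machinery) shows how d = 1 recovers ordinary first differencing and fractional d gives slowly decaying weights:

```python
def fracdiff_weights(d, n):
    # Expansion of (1 - B)^d: pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k.
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def fracdiff(x, d):
    # Apply fractional differencing: y_t = sum_k pi_k * x_{t-k}.
    w = fracdiff_weights(d, len(x))
    return [sum(w[k] * x[t - k] for k in range(t + 1)) for t in range(len(x))]
```

For 0 < d < 0.5 the weights decay hyperbolically rather than geometrically, which is exactly the non-trivial temporal memory the abstract describes: distant past values never stop contributing.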
Schubert, Thomas W; Zickfeld, Janis H; Seibt, Beate; Fiske, Alan Page
2018-02-01
Feeling moved or touched can be accompanied by tears, goosebumps, and sensations of warmth in the centre of the chest. The experience has been described frequently, but psychological science knows little about it. We propose that labelling one's feeling as being moved or touched is a component of a social-relational emotion that we term kama muta (its Sanskrit label). We hypothesise that it is caused by appraising an intensification of communal sharing relations. Here, we test this by investigating people's moment-to-moment reports of feeling moved and touched while watching six short videos. We compare these to six other sets of participants' moment-to-moment responses watching the same videos: respectively, judgements of closeness (indexing communal sharing), reports of weeping, goosebumps, warmth in the centre of the chest, happiness, and sadness. Our eighth time series is expert ratings of communal sharing. Time series analyses show strong and consistent cross-correlations of feeling moved and touched and closeness with each other and with each of the three physiological variables and expert-rated communal sharing - but distinctiveness from happiness and sadness. These results support our model.
A Spatiotemporal Prediction Framework for Air Pollution Based on Deep RNN
NASA Astrophysics Data System (ADS)
Fan, J.; Li, Q.; Hou, J.; Feng, X.; Karimian, H.; Lin, S.
2017-10-01
Time series data in practical applications always contain missing values due to sensor malfunction, network failure, outliers, etc. In order to handle missing values in time series, as well as the lack of consideration of temporal properties in machine learning models, we propose a spatiotemporal prediction framework based on missing value processing algorithms and a deep recurrent neural network (DRNN). By using a missing tag and missing interval to represent time series patterns, we implement three different missing value fixing algorithms, which are further incorporated into a deep neural network that consists of LSTM (Long Short-term Memory) layers and fully connected layers. Real-world air quality and meteorological datasets (Jingjinji area, China) are used for model training and testing. Deep feed-forward neural networks (DFNN) and gradient boosting decision trees (GBDT) are trained as baseline models against the proposed DRNN. Performances of the three missing value fixing algorithms, as well as of the different machine learning models, are evaluated and analysed. Experiments show that the proposed DRNN framework outperforms both DFNN and GBDT, therefore validating the capacity of the proposed framework. Our results also provide useful insights for better understanding of different strategies that handle missing values.
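The missing tag and missing interval representation can be sketched directly. The decayed forward fill below is one plausible fixing algorithm of the kind described, not necessarily one of the paper's three; the decay rate gamma and the toy readings are invented.

```python
def missing_features(series):
    # Missing tag m_t (1 = missing) and missing interval d_t
    # (steps since the last observed value), usable as model inputs.
    tags, intervals, gap = [], [], 0
    for v in series:
        missing = v is None
        tags.append(1 if missing else 0)
        gap = gap + 1 if missing else 0
        intervals.append(gap)
    return tags, intervals

def decayed_fill(series, gamma=0.5):
    # Fill a gap with the last observation decayed toward the observed mean,
    # trusting the stale value less as the missing interval grows.
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    filled, last, gap = [], mean, 0
    for v in series:
        if v is None:
            gap += 1
            w = gamma ** gap  # weight on the stale value decays with the gap
            filled.append(w * last + (1 - w) * mean)
        else:
            last, gap = v, 0
            filled.append(v)
    return filled

readings = [10, None, None, 30]  # invented sensor series with a 2-step gap
```

Feeding the tag and interval columns to the network alongside the filled values is what lets the model learn how much to trust an imputed point.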
Complexity analysis based on generalized deviation for financial markets
NASA Astrophysics Data System (ADS)
Li, Chao; Shang, Pengjian
2018-03-01
In this paper, a new modified method, complexity analysis based on generalized deviation, is proposed as a measure to investigate the correlation between past price and future volatility for financial time series. In comparison with the former retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function provides an exhaustive way of quantifying financial market rules. Robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After analyzing the experimental model, we found that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.
Financial Time Series Prediction Using Spiking Neural Networks
Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam
2014-01-01
In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two “traditional”, rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618
NASA Astrophysics Data System (ADS)
Traversaro, Francisco; O. Redelico, Francisco
2018-04-01
In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure, and there has been little research on the statistical properties of this quantity used to characterize time series. The literature describes some resampling methods for quantities used in nonlinear dynamics - such as the largest Lyapunov exponent - but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes, the 1/fα noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been extensively used in epilepsy research for detecting dynamical changes in the electroencephalogram (EEG) signal with no consideration of the variability of this complexity measure. An application of the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
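The estimator whose sampling distribution the paper bootstraps can be sketched in a few lines: the Permutation Entropy is the Shannon entropy of the distribution of ordinal patterns of order m. The toy series below are invented; the bootstrap machinery itself is omitted.

```python
from math import log, factorial
from collections import Counter

def permutation_entropy(series, m=3, normalize=True):
    # Shannon entropy of the ordinal-pattern distribution of order m,
    # normalized by log(m!) so the value lies in [0, 1].
    patterns = [
        tuple(sorted(range(m), key=lambda i: series[t + i]))
        for t in range(len(series) - m + 1)
    ]
    counts = Counter(patterns)
    total = sum(counts.values())
    h = -sum((c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(m)) if normalize else h
```

A monotonic series yields a single pattern and entropy 0; an i.i.d. series spreads mass over all m! patterns and approaches 1. The parametric bootstrap would fit a symbolic model to the observed pattern sequence, regenerate many surrogate series, and take the spread of this estimator across them as its accuracy.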
Detection of a sudden change of the field time series based on the Lorenz system.
Da, ChaoJiu; Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan
2017-01-01
We conducted an exploratory study of the detection of a sudden change of a field time series based on the numerical solution of the Lorenz system. First, the times when the Lorenz path jumped between the regions on the left and right of the equilibrium point of the Lorenz system were quantitatively marked, giving the sudden change times of the Lorenz system. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered a vector time series. We transformed the vector time series into a scalar time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, the sudden changes of the resulting time series were detected using the sliding t-test method. Comparing the test results with the quantitatively marked times indicated that the method could detect every sudden change of the Lorenz path; thus the method is effective. Finally, we used the method to detect sudden changes in a pressure field time series and a temperature field time series, and obtained good results for both series, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between a field time series and a vector time series; thus, we provide a new method for the detection of sudden changes in field time series.
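The sliding t-test used in the third step compares the means of the windows before and after each index with a two-sample t statistic and flags indices where it exceeds a threshold. The window length, threshold, and step series below are invented for illustration; the paper applies this to the inner-product series of the Lorenz solution.

```python
from math import sqrt

def sliding_t(series, n=5):
    # Two-sample t statistic comparing the n points before and after index i.
    stats = {}
    for i in range(n, len(series) - n + 1):
        a, b = series[i - n:i], series[i:i + n]
        ma, mb = sum(a) / n, sum(b) / n
        va = sum((x - ma) ** 2 for x in a) / (n - 1)
        vb = sum((x - mb) ** 2 for x in b) / (n - 1)
        pooled = sqrt((va + vb) / n)  # equal window sizes
        stats[i] = (mb - ma) / pooled if pooled else float("inf")
    return stats

def sudden_changes(series, n=5, threshold=3.0):
    # Indices where the sliding t statistic exceeds the threshold.
    return [i for i, t in sliding_t(series, n).items() if abs(t) > threshold]

series = [0, 1] * 5 + [5, 6] * 5  # abrupt mean shift at index 10
changes = sudden_changes(series)
```

Note that a single abrupt shift is typically flagged at the shift index and one or two neighbours, since adjacent windows also straddle the jump.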
NASA Astrophysics Data System (ADS)
Olafsdottir, Kristin B.; Mudelsee, Manfred
2013-04-01
Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in the climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20.
One form of climate time series is output from numerical models that simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables when there is a 10 year lag between them, which is roughly the time it takes the Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
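A minimal sketch of a pairwise moving block bootstrap confidence interval for Pearson's r, in the spirit of the approach above. Note the simplifications: a percentile interval is shown here rather than the calibrated, standard-error-based Student's t interval the abstract describes, and the block length and replicate count are illustrative:

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def block_bootstrap_ci(x, y, block=5, n_boot=1000, alpha=0.05, seed=0):
    """Pairwise moving block bootstrap percentile CI for Pearson's r.
    Blocks are resampled jointly from both series, preserving serial
    correlation within blocks and the cross-correlation between series."""
    rng = random.Random(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        xb, yb = [], []
        while len(xb) < n:
            s = rng.randrange(n - block + 1)  # random block start
            xb.extend(x[s:s + block])
            yb.extend(y[s:s + block])
        reps.append(pearson(xb[:n], yb[:n]))
    reps.sort()
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

Calibration would wrap a second bootstrap loop around this one to adjust the nominal coverage level.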
Volatility of linear and nonlinear time series
NASA Astrophysics Data System (ADS)
Kalisky, Tomer; Ashkenazy, Yosef; Havlin, Shlomo
2005-07-01
Previous studies indicated that nonlinear properties of Gaussian distributed time series with long-range correlations, ui, can be detected and quantified by studying the correlations in the magnitude series ∣ui∣, the “volatility.” However, the origin of this empirical observation remains unclear, and the exact relation between the correlations in ui and the correlations in ∣ui∣ is still unknown. Here we develop analytical relations between the scaling exponent of a linear series ui and that of its magnitude series ∣ui∣. Moreover, we find that nonlinear time series exhibit stronger (or the same) correlations in the magnitude time series compared with linear time series with the same two-point correlations. Based on these results we propose a simple model that generates multifractal time series by explicitly inserting long range correlations in the magnitude series; the nonlinear multifractal time series is generated by multiplying a long-range correlated time series [that represents the magnitude series] with an uncorrelated time series [that represents the sign series sgn(ui)]. We apply our techniques to daily deep ocean temperature records from the equatorial Pacific, the region of the El-Niño phenomenon, and find: (i) long-range correlations from several days to several years with 1/f power spectrum, (ii) significant nonlinear behavior as expressed by long-range correlations of the volatility series, and (iii) a broad multifractal spectrum.
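The generating model described above, correlated magnitudes multiplied by an uncorrelated sign series, can be sketched as follows. As a labeled simplification, an AR(1) process stands in for the genuinely long-range correlated magnitude series used in the paper:

```python
import random

def correlated_magnitude(n, phi=0.9, seed=1):
    """AR(1) stand-in for a long-range correlated magnitude series
    (strictly positive)."""
    rng = random.Random(seed)
    z, out = 0.0, []
    for _ in range(n):
        z = phi * z + rng.gauss(0, 1)
        out.append(abs(z) + 0.1)
    return out

def sign_times_magnitude(n, seed=2):
    """Nonlinear series u_i = s_i * m_i: correlated magnitudes m_i multiplied
    by i.i.d. random signs s_i. Two-point correlations of u vanish, while
    |u_i| = m_i keeps the magnitude (volatility) correlations."""
    rng = random.Random(seed)
    return [m * rng.choice((-1.0, 1.0)) for m in correlated_magnitude(n)]

def acf1(x):
    """Lag-1 sample autocorrelation."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    return num / sum((v - m) ** 2 for v in x)
```

The resulting series is nearly uncorrelated at the two-point level while its magnitude series is strongly correlated, which is exactly the volatility signature the paper studies.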
Duality between Time Series and Networks
Campanharo, Andriana S. L. O.; Sirer, M. Irmak; Malmgren, R. Dean; Ramos, Fernando M.; Amaral, Luís A. Nunes.
2011-01-01
Studying the interaction between a system's components and the temporal evolution of the system are two common ways to uncover and characterize its internal workings. Recently, several maps from a time series to a network have been proposed with the intent of using network metrics to characterize time series. Although these maps demonstrate that different time series result in networks with distinct topological properties, it remains unclear how these topological properties relate to the original time series. Here, we propose a map from a time series to a network with an approximate inverse operation, making it possible to use network statistics to characterize time series and time series statistics to characterize networks. As a proof of concept, we generate an ensemble of time series ranging from periodic to random and confirm that application of the proposed map retains much of the information encoded in the original time series (or networks) after application of the map (or its inverse). Our results suggest that network analysis can be used to distinguish different dynamic regimes in time series and, perhaps more importantly, time series analysis can provide a powerful set of tools that augment the traditional network analysis toolkit to quantify networks in new and useful ways. PMID:21858093
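A simplified reconstruction of a quantile-bin transition map between a time series and a network, with an approximate inverse via a weighted random walk. The binning scheme and walk details here are illustrative assumptions, not necessarily the authors' exact map:

```python
import random

def series_to_network(x, q=4):
    """Map a time series to a weighted directed network: nodes are quantile
    bins, edge weights count transitions between successive bins."""
    xs = sorted(x)
    bounds = [xs[int(len(xs) * k / q)] for k in range(1, q)]  # bin boundaries
    def bin_of(v):
        return sum(v >= t for t in bounds)
    sym = [bin_of(v) for v in x]
    w = [[0] * q for _ in range(q)]
    for a, b in zip(sym, sym[1:]):
        w[a][b] += 1
    return w

def network_to_series(w, n, seed=0):
    """Approximate inverse: a weighted random walk on the network emits a
    sequence of bin indices."""
    rng = random.Random(seed)
    q = len(w)
    state, out = 0, []
    for _ in range(n):
        out.append(state)
        row = w[state]
        tot = sum(row)
        if tot == 0:
            state = rng.randrange(q)  # dead end: restart uniformly
            continue
        r = rng.uniform(0, tot)
        acc = 0
        for j, c in enumerate(row):
            acc += c
            if c > 0 and r <= acc:
                state = j
                break
    return out
```

A strictly periodic series yields a deterministic cycle in the network, so the inverse walk reproduces the original symbolic dynamics; a random series yields a densely connected network and a random walk.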
Analysis of EUVE Experiment Results
NASA Technical Reports Server (NTRS)
Horan, Stephen
1996-01-01
A series of tests to validate an antenna pointing concept for spin-stabilized satellites using a data relay satellite is described. These tests show that proper antenna pointing on an inertially-stabilized spacecraft can provide significant access time through the relay satellite even without active antenna pointing. We summarize the test results, the simulations modeling the effects of antenna pattern and space loss, and the expected contact times. We also show how antenna beamwidth affects the results.
2017-02-17
The time for the tomography and diffraction sweeps was approximately 42 min. In a typical quasi-static in-situ experiment, loading is halted and the ... data is used to extract individual grain-average stress tensors in a large aggregate of Ti-7Al grains (~500) over a time series of prescribed states ...
Metronome LKM: An open source virtual keyboard driver to measure experiment software latencies.
Garaizar, Pablo; Vadillo, Miguel A
2017-10-01
Experiment software is often used to measure reaction times gathered with keyboards or other input devices. In previous studies, the accuracy and precision of time stamps have been assessed through several means: (a) generating accurate square wave signals from an external device connected to the parallel port of the computer running the experiment software, (b) triggering the typematic repeat feature of some keyboards to get an evenly separated series of keypress events, or (c) using a solenoid handled by a microcontroller to press the input device (keyboard, mouse button, touch screen) that will be used in the experimental setup. Despite the advantages of these approaches in some contexts, none of them can isolate the measurement error caused by the experiment software itself. Metronome LKM provides a virtual keyboard for assessing an experiment's software. Using this open source driver, researchers can generate keypress events using high-resolution timers and compare the time stamps collected by the experiment software with those gathered by Metronome LKM (with nanosecond resolution). Our software is highly configurable (in terms of keys pressed, intervals, and SysRq activation) and runs on Linux kernels 2.6 through 4.8.
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data shows that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
Feature assignment in perception of auditory figure.
Gregg, Melissa K; Samuel, Arthur G
2012-08-01
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial two sounds, an object (a vowel) and a stream (a series of tones), were presented with one target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to one of the two sounds and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to three. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.
Reactor transient control in support of PFR/TREAT TUCOP experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrows, D.R.; Larsen, G.R.; Harrison, L.J.
1984-01-01
Unique energy deposition and experiment control requirements posed by the PFR/TREAT series of transient undercooling/overpower (TUCOP) experiments resulted in equally unique TREAT reactor operations. New reactor control computer algorithms were written and used with the TREAT reactor control computer system to perform such functions as early power burst generation (based on test train flow conditions), burst generation produced by a step insertion of reactivity following a controlled power ramp, and shutdown (SCRAM) initiators based on both test train conditions and energy deposition. Specialized hardware was constructed to simulate test train inputs to the control computer system so that computer algorithms could be tested in real time without irradiating the experiment.
NASA Technical Reports Server (NTRS)
Kemp, E.; Jacob, J.; Rosenberg, R.; Jusem, J. C.; Emmitt, G. D.; Wood, S.; Greco, L. P.; Riishojgaard, L. P.; Masutani, M.; Ma, Z.;
2013-01-01
NASA Goddard Space Flight Center's Software Systems Support Office (SSSO) is participating in a multi-agency study of the impact of assimilating Doppler wind lidar observations on numerical weather prediction. Funded by NASA's Earth Science Technology Office, SSSO has worked with Simpson Weather Associates to produce time series of synthetic lidar observations mimicking the OAWL and WISSCR lidar instruments deployed on the International Space Station. In addition, SSSO has worked to assimilate a portion of these observations (those drawn from the NASA fvGCM Nature Run) into the NASA GEOS-DAS global weather prediction system in a series of Observing System Simulation Experiments (OSSEs). These OSSEs will complement parallel OSSEs prepared by the Joint Center for Satellite Data Assimilation and by NOAA's Atlantic Oceanographic and Meteorological Laboratory. In this talk, we will describe our procedure and provide available OSSE results.
Contextual Cues Aid Recovery from Interruption: The Role of Associative Activation
ERIC Educational Resources Information Center
Hodgetts, Helen M.; Jones, Dylan M.
2006-01-01
A series of experiments introduced interruptions to the execution phase of simple Tower of London problems and found that the opportunity for preparation before the break in task reduced the time cost at resumption. Retrieval of the suspended goal was facilitated when participants were given the opportunity to encode retrieval cues during an…
USDA-ARS?s Scientific Manuscript database
A series of simulated rainfall-runoff experiments with applications of different manure types (cattle solid pats, poultry dry litter, swine slurry) were conducted across four seasons on a field containing 36 plots (0.75 × 2 m each), resulting in 144 rainfall-runoff events. Simulating time-varying re...
Reflecting on My Progressive Education
ERIC Educational Resources Information Center
Friend, Nina
2012-01-01
At the end of the 2011-12 version of the project course Schools Across Borders, Schools Across Time (SABSAT), a high school senior wrote a series of letters reflecting on the experience of participating in an unusual course with an unusual outcome. In the letters, the student wanted to evoke what was personal and what was critical--to herself, her…
ERIC Educational Resources Information Center
Earl, Lorna; Katz, Steven
2005-01-01
Using data for school reform is like painting a series of pictures--pictures that are subtle and capture the nuances of the subject. This is a far cry from drawing stick figures or paint-by-numbers. Imagine the experiences of the French painter Claude Monet as he wandered through his garden at Giverny at different times of the day and year,…
ERIC Educational Resources Information Center
Rana, K. P. S.; Kumar, Vineet; Mendiratta, Jatin
2017-01-01
One of the most elementary topics in the freshman Electrical Engineering curriculum is Resistance-Inductance-Capacitance (RLC) circuit fundamentals, that is, the time and frequency domain responses of RLC circuits. For a beginner, it is generally difficult to understand and appreciate the step and frequency responses, particularly the resonance. This…
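For the step response the abstract refers to, a minimal sketch of the underdamped capacitor-voltage solution for a series RLC circuit (the component values in the usage note are illustrative, not from the abstract):

```python
import math

def series_rlc_step(R, L, C, t):
    """Capacitor voltage of a series RLC circuit driven by a 1 V step,
    underdamped case (R^2 < 4L/C)."""
    alpha = R / (2 * L)                    # damping rate, s^-1
    w0 = 1 / math.sqrt(L * C)              # undamped resonance frequency, rad/s
    wd = math.sqrt(w0 ** 2 - alpha ** 2)   # damped oscillation frequency
    return 1 - math.exp(-alpha * t) * (math.cos(wd * t)
                                       + (alpha / wd) * math.sin(wd * t))
```

With R = 1 Ω, L = 1 mH, and C = 1 µF, for example, the response rings at roughly w0/2π ≈ 5 kHz before settling to 1 V, which makes the resonance visible in a single plot.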
Understanding Asperger Syndrome: A Professor's Guide [DVD
ERIC Educational Resources Information Center
Organization for Autism Research (NJ3), 2011
2011-01-01
College can be a trying time in any individual's life. For adults with Asperger Syndrome this experience can be overwhelming. This title in the new DVD series Asperger Syndrome and Adulthood focuses on educating professors, teaching assistants, and others on what it means to be a college student on the spectrum and how they might best be able to…
Natural phenomena exhibited by forest fires
J. S. Barrows
1961-01-01
Forest fire phenomena are presented through a series of motion pictures and 35 mm slides. These films have been taken by the staffs of the Southeastern, Pacific Southwest, and Intermountain Forest and Range Experiment Stations of the U. S. Forest Service and by Dr. Vincent J. Schaefer during the course of fire research activities. Both regular speed and time-lapse...
How Golden Is Silence? Teaching Undergraduates the Power and Limits of RNA Interference
ERIC Educational Resources Information Center
Kuldell, Natalie H.
2006-01-01
It is hard and getting harder to strike a satisfying balance in teaching. Time dedicated to student-generated models or ideas is often sacrificed in an effort to "get through the syllabus." I describe a series of RNA interference (RNAi) experiments for undergraduate students that simultaneously explores fundamental concepts in gene regulation,…
The Organization of Exploratory Behaviors in Infant Locomotor Planning
ERIC Educational Resources Information Center
Kretch, Kari S.; Adolph, Karen E.
2017-01-01
How do infants plan and guide locomotion under challenging conditions? This experiment investigated the real-time process of visual and haptic exploration in 14-month-old infants as they decided whether and how to walk over challenging terrain--a series of bridges varying in width. Infants' direction of gaze was recorded with a head-mounted eye…
Development of anomaly detection models for deep subsurface monitoring
NASA Astrophysics Data System (ADS)
Sun, A. Y.
2017-12-01
Deep subsurface repositories are used for waste disposal and carbon sequestration. Monitoring deep subsurface repositories for potential anomalies is challenging, not only because the number of sensor networks and the quality of data are often limited, but also because of the lack of labeled data needed to train and validate machine learning (ML) algorithms. Although physical simulation models may be applied to predict anomalies (or, for that matter, the system's nominal state), the accuracy of such predictions may be limited by inherent conceptual and parameter uncertainties. The main objective of this study was to demonstrate the potential of data-driven models for leakage detection in carbon sequestration repositories. Monitoring data collected during an artificial CO2 release test at a carbon sequestration repository were used, which include both scalar time series (pressure) and vector time series (distributed temperature sensing). For each type of data, separate online anomaly detection algorithms were developed using the baseline experiment data (no leak) and then tested on the leak experiment data. The performance of a number of different online algorithms was compared. Results show the importance of including contextual information in the dataset to mitigate the impact of reservoir noise and reduce the false positive rate. The developed algorithms were integrated into a generic Web-based platform for real-time anomaly detection.
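As a baseline illustration of online anomaly detection on a scalar time series such as pressure, here is a minimal rolling z-score detector. The window, threshold, and warm-up length are illustrative choices, not the algorithms compared in the study:

```python
from collections import deque

class OnlineZScoreDetector:
    """Streaming detector: flag a point when it lies more than k rolling
    standard deviations from the rolling mean of recent history."""

    def __init__(self, window=50, k=4.0, warmup=10):
        self.buf = deque(maxlen=window)  # bounded history of recent values
        self.k = k
        self.warmup = warmup

    def update(self, x):
        """Score one new observation; returns True if it is anomalous."""
        flagged = False
        if len(self.buf) >= self.warmup:
            n = len(self.buf)
            m = sum(self.buf) / n
            sd = (sum((v - m) ** 2 for v in self.buf) / (n - 1)) ** 0.5
            flagged = abs(x - m) > self.k * max(sd, 1e-9)  # guard zero spread
        self.buf.append(x)
        return flagged
```

Training such a detector on baseline (no-leak) data and running it on leak-experiment data mirrors the study's evaluation setup; contextual covariates would enter as additional inputs to the baseline model.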
ERIC Educational Resources Information Center
Scott, Daniel G.; Evans, Jessica
2010-01-01
This paper emerges from the continued analysis of data collected in a series of international studies concerning Childhood Peak Experiences (CPEs) based on developments in understanding peak experiences in Maslow's hierarchy of needs initiated by Dr Edward Hoffman. Bridging from the series of studies, Canadian researchers explore collected…
Spheromak Formation and Current Sustainment Using a Repetitively Pulsed Source
NASA Astrophysics Data System (ADS)
Woodruff, S.; Macnab, A. I. D.; Ziemba, T. M.; Miller, K. E.
2009-06-01
By repeated injection of magnetic helicity (K = 2φψ) on time-scales short compared with the dissipation time (τ_inj << τ_K), it is possible to produce toroidal currents relevant to POP-level experiments. Here we discuss an effective injection rate, due to the expansion of a series of current sheets and their subsequent reconnection to form spheromaks and compression into a copper flux-conserving chamber. The benefits of repeated injection are that the usual limits to current amplification can be exceeded and that an efficient quasi-steady sustainment scenario is possible (with minimum impact on confinement). A new experiment designed to address the physics of pulsed formation and sustainment is described.
The Working Experience. Teacher's Manual.
ERIC Educational Resources Information Center
Smith, Jeanne H.; Ringel, Harry
A teacher's manual is presented for "The Working Experience," a series of three texts for English-as-a-Second-Language (ESL) students. The series builds on oral skills to develop reading and writing ability while still expanding oral English-language proficiency. Because one of the basic principles underlying the series is the idea that students…
Atmospheric Boundary Layer temperature and humidity from new-generation Raman lidar
NASA Astrophysics Data System (ADS)
Froidevaux, Martin; Higgins, Chad; Simeonov, Valentin; Pardyjak, Eric R.; Parlange, Marc B.
2010-05-01
Mixing ratio and temperature data obtained with the EPFL Raman lidar during the TABLE-08 experiment are presented. The processing methods will be discussed along with the fundamental physics. An independent calibration is performed at different distances along the laser beam, demonstrating that the multi-telescope design of the lidar system is reliable for field application. The maximum achievable distance as a function of time and/or space averaging will also be discussed. During the TABLE-08 experiment, different types of lidar measurements were obtained, including horizontal and vertical time series as well as boundary layer "cuts", during day and night. The high-resolution data, 1 s in time and 1.25 m in space, are used to understand the response of the atmosphere to variations in surface variability.
NASA Astrophysics Data System (ADS)
Bernard, Ethan; LZ Collaboration
2013-10-01
Astrophysical and cosmological observations show that dark matter is concentrated in halos around galaxies and is approximately five times more abundant than baryonic matter. Dark matter has evaded direct detection despite a series of increasingly sensitive experiments. The LZ (LUX-ZEPLIN) experiment will use a two-phase liquid-xenon time projection chamber to search for elastic scattering of xenon nuclei by WIMP (weakly interacting massive particle) dark matter. The detector will contain seven tons of liquid xenon shielded by an active organic scintillator veto and a water tank within the Sanford Underground Research Facility (SURF) in Lead, South Dakota. The LZ detector scales up the demonstrated light-sensing, cryogenic, radiopurity and shielding technologies of the LUX experiment. Active shielding, position fiducialization, radiopurity control and signal discrimination will reduce backgrounds to levels subdominant to solar neutrino scattering. This experiment will reach a sensitivity to the WIMP-nucleon spin-independent cross section approaching ~2 × 10^-48 cm^2 for a 50 GeV WIMP mass, which is about three orders of magnitude smaller than current limits.
The effect of benoxinate on the tear stability of Hong Kong-Chinese.
Cho, P; Brown, B
1995-07-01
We conducted a series of experiments to examine the effect of local anaesthetic instillation on tear stability measurements. All experiments were conducted with the examiner masked with respect to treatments. We measured tear break-up time (TBUT) and non-invasive tear break-up time (NITBUT) 30 s after instillation of benoxinate (0.4%) in a single masked experiment and found that NITBUT was significantly increased while TBUT was unaffected. In separate experiments, tear stability was assessed 5 min after instillation of benoxinate, and there was no significant effect on either TBUT or NITBUT measurements. In a control experiment to examine the effect of instilling a drop of liquid into the eye, neither TBUT nor NITBUT was affected 30 s after the instillation of saline. No corneal staining was observed in any of the subjects after instillation of benoxinate. The results suggest that benoxinate does not affect the stability of the precorneal tear film, and that tear stability can be assessed after the instillation of unpreserved benoxinate.
Moore, Darrell; Van Nest, Byron N; Seier, Edith
2011-06-01
Classical experiments demonstrated that honey bee foragers trained to collect food at virtually any time of day will return to that food source on subsequent days with a remarkable degree of temporal accuracy. This versatile time-memory, based on an endogenous circadian clock, presumably enables foragers to schedule their reconnaissance flights to best take advantage of the daily rhythms of nectar and pollen availability in different species of flowers. It is commonly believed that the time-memory rapidly extinguishes if not reinforced daily, thus enabling foragers to switch quickly from relatively poor sources to more productive ones. On the other hand, it is also commonly thought that extinction of the time-memory is slow enough to permit foragers to 'remember' the food source over a day or two of bad weather. What exactly is the time-course of time-memory extinction? In a series of field experiments, we determined that the level of food-anticipatory activity (FAA) directed at a food source is not rapidly extinguished and, furthermore, the time-course of extinction is dependent upon the amount of experience accumulated by the forager at that source. We also found that FAA is prolonged in response to inclement weather, indicating that time-memory extinction is not a simple decay function but is responsive to environmental changes. These results provide insights into the adaptability of FAA under natural conditions.
Progress Report on the Airborne Metadata and Time Series Working Groups of the 2016 ESDSWG
NASA Astrophysics Data System (ADS)
Evans, K. D.; Northup, E. A.; Chen, G.; Conover, H.; Ames, D. P.; Teng, W. L.; Olding, S. W.; Krotkov, N. A.
2016-12-01
NASA's Earth Science Data Systems Working Groups (ESDSWG) was created over 10 years ago. The role of the ESDSWG is to make recommendations relevant to NASA's Earth science data systems from users' experiences. Each group works independently focusing on a unique topic. Participation in ESDSWG groups comes from a variety of NASA-funded science and technology projects, including MEaSUREs and ROSS. Participants include NASA information technology experts, affiliated contractor staff and other interested community members from academia and industry. Recommendations from the ESDSWG groups will enhance NASA's efforts to develop long term data products. The Airborne Metadata Working Group is evaluating the suitability of the current Common Metadata Repository (CMR) and Unified Metadata Model (UMM) for airborne data sets and to develop new recommendations as necessary. The overarching goal is to enhance the usability, interoperability, discovery and distribution of airborne observational data sets. This will be done by assessing the suitability (gaps) of the current UMM model for airborne data using lessons learned from current and past field campaigns, listening to user needs and community recommendations and assessing the suitability of ISO metadata and other standards to fill the gaps. The Time Series Working Group (TSWG) is a continuation of the 2015 Time Series/WaterML2 Working Group. The TSWG is using a case study-driven approach to test the new Open Geospatial Consortium (OGC) TimeseriesML standard to determine any deficiencies with respect to its ability to fully describe and encode NASA earth observation-derived time series data. To do this, the time series working group is engaging with the OGC TimeseriesML Standards Working Group (SWG) regarding unsatisfied needs and possible solutions. The effort will end with the drafting of an OGC Engineering Report based on the use cases and interactions with the OGC TimeseriesML SWG. 
Progress towards finalizing recommendations will be presented at the meeting.
Identification of Boolean Network Models From Time Series Data Incorporating Prior Knowledge.
Leifeld, Thomas; Zhang, Zhihua; Zhang, Ping
2018-01-01
Motivation: Mathematical models occupy an important place in science and engineering. A model can help scientists to explain the dynamic behavior of a system and to understand the functionality of system components. Since the length of a time series and the number of replicates are limited by the cost of experiments, Boolean networks, as a structurally simple and parameter-free logical model for gene regulatory networks, have attracted the interest of many scientists. In order to fit the biological context and to lower the data requirements, biological prior knowledge is taken into consideration during the inference procedure. In the literature, existing identification approaches can only deal with a subset of the possible types of prior knowledge. Results: We propose a new approach to identify Boolean networks from time series data incorporating prior knowledge, such as partial network structure, canalizing property, and positive and negative unateness. Using the vector form of Boolean variables and applying a generalized matrix multiplication called the semi-tensor product (STP), each Boolean function can be equivalently converted into a matrix expression. Based on this, the identification problem is reformulated as an integer linear programming problem to reveal the system matrix of the Boolean model in a computationally efficient way, whose dynamics are consistent with the important dynamics captured in the data. By using prior knowledge, the number of candidate functions can be reduced during the inference. Hence, identification incorporating prior knowledge is especially suitable for the case of small time series data sets and data without sufficient stimuli. The proposed approach is illustrated with the help of a biological model of the network of the oxidative stress response.
Conclusions: The combination of efficient reformulation of the identification problem with the possibility to incorporate various types of prior knowledge enables the application of computational model inference to systems with limited amount of time series data. The general applicability of this methodological approach makes it suitable for a variety of biological systems and of general interest for biological and medical research.
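To make the identification idea concrete, here is a toy brute-force version: given prior knowledge of each node's regulator set, search for a truth table consistent with the observed state transitions. The paper's STP/integer-programming formulation is far more scalable and handles richer kinds of prior knowledge; this sketch only illustrates the data-consistency constraint:

```python
from itertools import product

def identify_rules(states, regulators):
    """Identify each node's Boolean update rule (as a truth table over its
    regulators) from a state time series. Brute force over the 2**(2**k)
    candidate functions, so only practical for small regulator sets."""
    n = len(states[0])
    rules = []
    for node in range(n):
        regs = regulators[node]
        found = None
        for table in product((0, 1), repeat=2 ** len(regs)):
            consistent = True
            for t in range(len(states) - 1):
                # index into the truth table from the regulators' current values
                idx = 0
                for r in regs:
                    idx = (idx << 1) | states[t][r]
                if table[idx] != states[t + 1][node]:
                    consistent = False
                    break
            if consistent:
                found = table
                break
        rules.append(found)
    return rules
```

Prior knowledge such as unateness or canalization would simply prune the candidate truth tables before the consistency check, which is the same effect the ILP constraints achieve.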
Multiple Indicator Stationary Time Series Models.
ERIC Educational Resources Information Center
Sivo, Stephen A.
2001-01-01
Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…
2013-04-09
ISS035-E-015679 (10 April 2013) --- This is one of a series of close-up images photographed during a run of the Burning and Suppression of Solids (BASS) experiment onboard the Earth-orbiting International Space Station. Following a series of preparations, NASA astronaut Chris Cassidy (out of frame) conducted a series of runs of the experiment, which examines the burning and extinction characteristics of a wide variety of fuel samples in microgravity. The experiment is intended to guide strategies for extinguishing fires in microgravity. BASS results contribute to the combustion computational models used in the design of fire detection and suppression systems in microgravity and on Earth.
2013-04-09
ISS035-E-015827 (10 April 2013) --- This is one of a series of close-up images photographed during a run of the Burning and Suppression of Solids (BASS) experiment onboard the Earth-orbiting International Space Station. Following a series of preparations, NASA astronaut Chris Cassidy (out of frame) conducted a series of runs of the experiment, which examines the burning and extinction characteristics of a wide variety of fuel samples in microgravity. The experiment is intended to guide strategies for extinguishing fires in microgravity. BASS results contribute to the combustion computational models used in the design of fire detection and suppression systems in microgravity and on Earth.
Microforms in gravel bed rivers: Formation, disintegration, and effects on bedload transport
Strom, K.; Papanicolaou, A.N.; Evangelopoulos, N.; Odeh, M.
2004-01-01
This research aims to advance current knowledge on cluster formation and evolution by tackling some of the aspects associated with cluster microtopography and the effects of clusters on bedload transport. The specific objectives of the study are (1) to identify the bed shear stress range in which clusters form and disintegrate, (2) to quantitatively describe the spacing characteristics and orientation of clusters with respect to flow characteristics, (3) to quantify the effects clusters have on the mean bedload rate, and (4) to assess the effects of clusters on the pulsating nature of bedload. In order to meet the objectives of this study, two main experimental scenarios, namely, Test Series A and B (20 experiments overall), are considered in a laboratory flume under well-controlled conditions. Series A tests are performed to address objectives (1) and (2), while Series B is designed to meet objectives (3) and (4). Results show that cluster microforms develop in uniform sediment at 1.25 to 2 times the Shields parameter of an individual particle and start disintegrating at about 2.25 times the Shields parameter. It is found that during an unsteady flow event, the effects of clusters on bedload transport rate can be classified into three different phases: a sink phase where clusters absorb incoming sediment, a neutral phase where clusters do not affect bedload, and a source phase where clusters release particles. Clusters also increase the magnitude of the fluctuations in bedload transport rate, showing that clusters amplify the unsteady nature of bedload transport. A fourth-order autoregressive integrated moving average (ARIMA) model is employed to describe the time series of bedload and provide a formula for predicting bedload at different periods.
Finally, a change-point analysis enhanced with a binary segmentation procedure is performed to identify the abrupt changes in the bedload statistical characteristics due to the effects of clusters and to detect the different phases in the bedload time series using probability theory. The analysis verifies the experimental finding that three phases are detected in the structure of the bedload rate time series, namely, sink, neutral, and source. © ASCE / JUNE 2004.
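The binary segmentation idea used above can be sketched with a simple squared-error cost on a synthetic three-phase series; this is an illustrative stand-in with invented data, not the authors' probabilistic change-point procedure:

```python
import numpy as np

def best_split(x):
    """Best single change point by squared-error cost reduction."""
    n = len(x)
    total = ((x - x.mean()) ** 2).sum()
    gains = [(total
              - ((x[:k] - x[:k].mean()) ** 2).sum()
              - ((x[k:] - x[k:].mean()) ** 2).sum(), k)
             for k in range(2, n - 1)]
    return max(gains)

def binary_segmentation(x, min_gain=1.0):
    """Recursively split while the best split reduces the cost by more than min_gain."""
    x = np.asarray(x, float)
    if len(x) < 4:
        return []
    gain, k = best_split(x)
    if gain < min_gain:
        return []
    return (binary_segmentation(x[:k], min_gain) + [k]
            + [k + c for c in binary_segmentation(x[k:], min_gain)])

# Three synthetic "phases" with different mean bedload rates (sink / source / neutral-like)
x = np.r_[np.zeros(10), 5 * np.ones(10), np.full(10, 2.0)]
cps = binary_segmentation(x)   # change points separating the phases
```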
Experimental measurements of hydrodynamic instabilities on NOVA of relevance to astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budil, K S; Cherfils, C; Drake, R P
1998-09-11
Large lasers such as Nova allow the possibility of achieving regimes of high energy densities in plasmas of millimeter spatial scales and nanosecond time scales. In those plasmas where thermal conductivity and viscosity do not play a significant role, the hydrodynamic evolution is suitable for benchmarking hydrodynamics modeling in astrophysical codes. Several experiments on Nova examine hydrodynamically unstable interfaces. A typical Nova experiment uses a gold millimeter-scale hohlraum to convert the laser energy to a 200 eV blackbody source lasting about a nanosecond. The x-rays ablate a planar target, generating a series of shocks and accelerating the target. The evolving areal density is diagnosed by time-resolved radiography, using a second x-ray source. Data from several experiments are presented and diagnostic techniques are discussed.
Exploring Low-Amplitude, Long-Duration Deformational Transients on the Cascadia Subduction Zone
NASA Astrophysics Data System (ADS)
Nuyen, C.; Schmidt, D. A.
2017-12-01
The absence of long-term slow slip events (SSEs) in Cascadia is enigmatic on account of the diverse group of subduction zone systems that do experience long-term SSEs. In particular, long-term SSEs have been observed in southwest Japan, Alaska, New Zealand, and Mexico, with some of the larger events exhibiting centimeter-scale surface displacements over the course of multiple years. The conditions that encourage long-term slow slip are not well established due to the variability in thermal parameter and plate dip amongst subduction zones that host long-term events. The Cascadia Subduction Zone likely has the capacity to host long-term SSEs, and the lack of such events motivates further exploration of the observational data. In order to search for the existence of long-duration transients in surface displacements, we examine Cascadia GPS time series from PANGA and PBO to determine whether or not Cascadia has hosted a long-term slow slip event in the past 20 years. A careful review of the time series does not reveal any large-scale multi-year transients. In order to more clearly recognize possible small-amplitude long-term SSEs in Cascadia, the GPS time series are reduced with two separate methods. The first method involves manually removing (1) continental water loading terms, (2) transient displacements of known short-term SSEs, and (3) common mode signals that span the network. The second method utilizes a seasonal-trend decomposition procedure (STL) to extract a long-term trend from the GPS time series. Manual inspection of both of these products reveals intriguing long-term changes in the longitudinal component of several GPS stations in central Cascadia. To determine whether these shifts could be due to long-term slow slip, we invert the reduced surface displacement time series for fault slip using a principal component analysis-based inversion method.
We also utilize forward fault models of various synthetic long-term SSEs to better understand how these events may appear in the time series for a range of magnitudes and durations. Results from this research have direct implications for the possible slip modes in Cascadia and how variations in slip over time can impact stress and strain accumulations along the margin.
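A toy decomposition in the spirit of the STL step mentioned above can be sketched as follows; this is a moving-average approximation with synthetic data, not the loess-based STL procedure used in the study:

```python
import numpy as np

def simple_stl(y, period):
    """Toy seasonal-trend decomposition: moving-average trend, periodic mean
    seasonal component, and the remainder. Not true loess-based STL."""
    y = np.asarray(y, float)
    kernel = np.ones(period) / period
    trend = np.convolve(y, kernel, mode="same")      # moving-average trend
    detrended = y - trend
    # Seasonal component: mean of each position within the period
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(y) // period + 1)[:len(y)]
    remainder = y - trend - seasonal
    return trend, seasonal, remainder

# Synthetic "GPS-like" series: slow linear trend plus an annual cycle (period 12)
t = np.arange(96)
y = 0.05 * t + np.sin(2 * np.pi * t / 12)
trend, seasonal, remainder = simple_stl(y, 12)
```

The point of the reduction is visible in `trend`: once the periodic part is removed, a slow multi-year transient (here, the linear ramp) stands out.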
IDSP- INTERACTIVE DIGITAL SIGNAL PROCESSOR
NASA Technical Reports Server (NTRS)
Mish, W. H.
1994-01-01
The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on the various algorithms commonly used for digital signal analysis work. The processing of a digital time series to extract information is usually achieved by the application of a number of fairly standard operations. However, it is often desirable to "experiment" with various operations and combinations of operations to explore their effect on the results. IDSP is designed to provide an interactive and easy-to-use system for this type of digital time series analysis. The IDSP operators can be applied in any sensible order (even recursively), and can be applied to single time series or to simultaneous time series. IDSP is being used extensively to process data obtained from scientific instruments onboard spacecraft. It is also an excellent teaching tool for demonstrating the application of time series operators to artificially-generated signals. IDSP currently includes over 43 standard operators. Processing operators provide for Fourier transformation operations, design and application of digital filters, and Eigenvalue analysis. Additional support operators provide for data editing, display of information, graphical output, and batch operation. User-developed operators can be easily interfaced with the system to provide for expansion and experimentation. Each operator application generates one or more output files from an input file. The processing of a file can involve many operators in a complex application. IDSP maintains historical information as an integral part of each file so that the user can display the operator history of the file at any time during an interactive analysis. IDSP is written in VAX FORTRAN 77 for interactive or batch execution and has been implemented on a DEC VAX-11/780 operating under VMS. The IDSP system generates graphics output for a variety of graphics systems. 
The program requires the use of Versaplot and Template plotting routines and IMSL Math/Library routines. These software packages are not included in IDSP. The virtual memory requirement for the program is approximately 2.36 MB. The IDSP system was developed in 1982 and was last updated in 1986. Versaplot is a registered trademark of Versatec Inc. Template is a registered trademark of Template Graphics Software Inc. IMSL Math/Library is a registered trademark of IMSL Inc.
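The per-file operator history that IDSP maintains can be sketched as follows; `Series` and the `DETREND`/`SMOOTH` operators are hypothetical stand-ins for illustration, not IDSP's actual FORTRAN interface:

```python
import numpy as np

class Series:
    """Minimal sketch of an IDSP-style file: data plus its operator history."""
    def __init__(self, data, history=()):
        self.data = np.asarray(data, dtype=float)
        self.history = list(history)

    def apply(self, name, op):
        """Apply an operator and record it, mimicking IDSP's per-file history."""
        return Series(op(self.data), self.history + [name])

# Two stand-in operators: mean removal and a 3-point moving average
detrend = lambda x: x - x.mean()
smooth  = lambda x: np.convolve(x, np.ones(3) / 3, mode="same")

# Operators can be chained in any sensible order, each output recording its lineage
s = Series([1, 2, 3, 4, 5]).apply("DETREND", detrend).apply("SMOOTH", smooth)
print(s.history)   # ['DETREND', 'SMOOTH']
```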
Quasi-experimental study designs series-paper 7: assessing the assumptions.
Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian
2017-09-01
Quasi-experimental designs are gaining popularity in epidemiology and health systems research-in particular for the evaluation of health care practice, programs, and policy-because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
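The segmentation scheme described above (measure function, segment difference, dynamic programming) can be sketched as follows, assuming set union as the measure function and symmetric-difference size as the per-point difference; the data are invented:

```python
from functools import lru_cache

def seg_diff(points, i, j, measure):
    """Segment difference: sum of symmetric-difference sizes between the
    segment's item set (given by `measure`) and each time point's item set."""
    seg_set = measure(points[i:j + 1])
    return sum(len(seg_set ^ s) for s in points[i:j + 1])

def optimal_segmentation(points, k, measure):
    """DP over (start index, segments remaining); returns (cost, segment end indices)."""
    n = len(points)

    @lru_cache(maxsize=None)
    def best(i, segs):
        if segs == 1:
            return seg_diff(points, i, n - 1, measure), (n - 1,)
        return min(
            (seg_diff(points, i, j, measure) + best(j + 1, segs - 1)[0],
             (j,) + best(j + 1, segs - 1)[1])
            for j in range(i, n - segs + 1))

    return best(0, k)

union = lambda sets: set().union(*sets)   # one simple choice of measure function
pts = [{'a'}, {'a'}, {'a', 'b'}, {'c'}, {'c', 'd'}]
cost, bounds = optimal_segmentation(pts, 2, union)
```

Here the optimal 2-segmentation splits where the item sets change character, which is exactly the temporal content an equal-length segmentation would miss.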
The critical period of weed control in soybean (Glycine max (L.) Merr.) in north of Iran conditions.
Keramati, Sara; Pirdashti, Hemmatollah; Esmaili, Mohammad Ali; Abbasian, Arastoo; Habibi, Marjaneh
2008-02-01
A field study was conducted in 2006 at Sari Agricultural and Natural Resources University in order to determine the best time for weed control in the promising soybean line 033. The experiment was arranged in a randomized complete block design with 4 replications and two series of treatments. In the first series, weeds were kept in place until the crop reached V2 (second trifoliolate), V4 (fourth trifoliolate), V6 (sixth trifoliolate), R1 (beginning bloom, first flower), R3 (beginning pod), or R5 (beginning seed) and were then removed, with the crop kept weed-free for the rest of the season. In the second series, crops were kept weed-free until the above growth stages, after which weeds were allowed to grow in the plots for the rest of the season. Whole-season weedy and weed-free plots were included in the experiment for yield comparison. The results showed that among the studied traits, grain yield, pod number per plant, and weed biomass were affected significantly by the control and interference treatments. The highest number of pods per plant was obtained from plots kept weed-free for the whole season. Results showed that weed control should be carried out between the V2 (26 days after planting) and R1 (63 days after planting) stages of soybean to provide maximum grain yield. Thus, it is possible to optimize the timing of weed control, which can serve to reduce the costs and side effects of intensive chemical weed control.
Time-response of cultured deep-sea benthic foraminifera to different algal diets
NASA Astrophysics Data System (ADS)
Heinz, P.; Hemleben, Ch; Kitazato, H.
2002-03-01
The vertical distribution of benthic foraminifera in the surface sediment is influenced by environmental factors, mainly by food and oxygen supply. An experiment of three different time series was performed to investigate the response of deep-sea benthic foraminifera to simulated phytodetritus pulses under stable oxygen concentrations. Each series was fed constantly with one distinct algal species in equivalent amounts. The temporal reactions of the benthic foraminifera with regard to the vertical distribution in the sediment, the total number, and the species composition were observed and compared within the three series. Additionally, oxygen contents and bacterial cell numbers were measured to ensure that these factors were invariable and did not influence foraminiferal communities. The addition of algae leads to higher population densities 21 days after food was added. Higher numbers of individuals were probably caused by higher organic levels, which in turn induced reproduction. A stronger response is found after feeding with Amphiprora sp. and Pyramimonas sp., compared to Dunaliella tertiolecta. At a constant high oxygen supply, no migration to upper layers was observed after food addition, and more individuals were found in deeper layers. The laboratory results thus agree with the predictions of the TROX-model. An epifaunal microhabitat preference was shown for Adercotryma glomerata. Hippocrepina sp. was spread over the entire sediment depth with a shallow infaunal maximum. Melonis barleeanum preferred a deeper infaunal habitat. Bacterial cell concentrations were stable during the laboratory experiments and showed no significant response to higher organic fluxes.
A novel weight determination method for time series data aggregation
NASA Astrophysics Data System (ADS)
Xu, Paiheng; Zhang, Rong; Deng, Yong
2017-09-01
Aggregation in time series is of great importance in time series smoothing, prediction, and other analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced to the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator generates weights with respect to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
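A rough sketch of the two weighting ingredients, under simplifying assumptions: an exponential-decay stand-in for the IOWA weights (the true operator orders by an inducing variable) and natural-visibility-graph degrees for the VGA weights. The parameters `alpha` and `decay` are illustrative, not from the paper:

```python
import numpy as np

def visibility_degrees(x):
    """Node degrees of the natural visibility graph of series x."""
    n = len(x)
    deg = np.zeros(n)
    for a in range(n):
        for b in range(a + 1, n):
            # a "sees" b if every intermediate point lies below the line a-b
            if all(x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                deg[a] += 1
                deg[b] += 1
    return deg

def combined_weights(x, alpha=0.5, decay=0.9):
    """Linear mix of degree-based (VGA-style) and time-decay (IOWA-style) weights."""
    deg = visibility_degrees(x)
    w_vga = deg / deg.sum()
    w_decay = decay ** np.arange(len(x) - 1, -1, -1)   # recent points weigh more
    w_decay /= w_decay.sum()
    w = alpha * w_vga + (1 - alpha) * w_decay
    return w / w.sum()

x = np.array([2.0, 1.0, 3.0, 1.5, 2.5])
w = combined_weights(x)   # weights for aggregating the series
```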
Working Memory and Aging: Separating the Effects of Content and Context
Bopp, Kara L.; Verhaeghen, Paul
2009-01-01
In three experiments, we investigated the hypothesis that age-related differences in working memory might be due to the inability to bind content with context. Participants were required to find a repeating stimulus within a single series (no context memory required) or within multiple series (necessitating memory for context). Response time and accuracy were examined in two task domains: verbal and visuospatial. Binding content with context led to longer processing time and poorer accuracy in both age groups, even when working memory load was held constant. Although older adults were overall slower and less accurate than younger adults, the need for context memory did not differentially affect their performance. It is therefore unlikely that age differences in working memory are due to specific age-related problems with content-with-context binding. PMID:20025410
Functional quantitative susceptibility mapping (fQSM).
Balla, Dávid Z; Sanchez-Panchuelo, Rosa M; Wharton, Samuel J; Hagberg, Gisela E; Scheffler, Klaus; Francis, Susan T; Bowtell, Richard
2014-10-15
Blood oxygenation level dependent (BOLD) functional magnetic resonance imaging (fMRI) is a powerful technique, typically based on the statistical analysis of the magnitude component of the complex time-series. Here, we additionally interrogated the phase data of the fMRI time-series and used quantitative susceptibility mapping (QSM) in order to investigate the potential of functional QSM (fQSM) relative to standard magnitude BOLD fMRI. High spatial resolution data (1mm isotropic) were acquired every 3 seconds using zoomed multi-slice gradient-echo EPI collected at 7 T in single orientation (SO) and multiple orientation (MO) experiments, the latter involving 4 repetitions with the subject's head rotated relative to B0. Statistical parametric maps (SPM) were reconstructed for magnitude, phase and QSM time-series and each was subjected to detailed analysis. Several fQSM pipelines were evaluated and compared based on the relative number of voxels that were coincidentally found to be significant in QSM and magnitude SPMs (common voxels). We found that sensitivity and spatial reliability of fQSM relative to the magnitude data depended strongly on the arbitrary significance threshold defining "activated" voxels in SPMs, and on the efficiency of spatio-temporal filtering of the phase time-series. Sensitivity and spatial reliability depended slightly on whether MO or SO fQSM was performed and on the QSM calculation approach used for SO data. Our results present the potential of fQSM as a quantitative method of mapping BOLD changes. We also critically discuss the technical challenges and issues linked to this intriguing new technique. Copyright © 2014 Elsevier Inc. All rights reserved.
Automated smoother for the numerical decoupling of dynamics models.
Vilela, Marco; Borges, Carlos C H; Vinga, Susana; Vasconcelos, Ana Tereza R; Santos, Helena; Voit, Eberhard O; Almeida, Jonas S
2007-08-21
Structure identification of dynamic models for complex biological systems is the cornerstone of their reverse engineering. Biochemical Systems Theory (BST) offers a particularly convenient solution because its parameters are kinetic-order coefficients which directly identify the topology of the underlying network of processes. We have previously proposed a numerical decoupling procedure that allows the identification of multivariate dynamic models of complex biological processes. While described here within the context of BST, this procedure has a general applicability to signal extraction. Our original implementation relied on artificial neural networks (ANN), which caused slight, undesirable bias during the smoothing of the time courses. As an alternative, we propose here an adaptation of the Whittaker smoother and demonstrate its role within a robust, fully automated structure identification procedure. In this report we propose a robust, fully automated solution for signal extraction from time series, which is the prerequisite for the efficient reverse engineering of biological systems models. The Whittaker smoother is reformulated within the context of information theory and extended by the development of adaptive signal segmentation to account for heterogeneous noise structures. The resulting procedure can be used on arbitrary time series with a nonstationary noise process; it is illustrated here with metabolic profiles obtained from in-vivo NMR experiments. The smoothed solution that is free of parametric bias permits differentiation, which is crucial for the numerical decoupling of systems of differential equations.
The method is applicable to signal extraction from time series with a nonstationary noise structure and to the numerical decoupling of systems of differential equations into algebraic equations, and thus constitutes a rather general tool for the reverse engineering of mechanistic model descriptions from multivariate experimental time series.
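The non-adaptive core of the Whittaker smoother, which minimizes a fidelity-plus-roughness objective ||y − z||² + λ||Dz||², can be sketched as penalized least squares; the paper's information-theoretic reformulation and adaptive segmentation are omitted here, and the test signal is synthetic:

```python
import numpy as np

def whittaker_smooth(y, lam=10.0, d=2):
    """Whittaker smoother: minimize ||y - z||^2 + lam * ||D z||^2,
    where D is the d-th order finite-difference matrix."""
    n = len(y)
    D = np.diff(np.eye(n), d, axis=0)          # (n-d) x n difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, np.asarray(y, float))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50)   # noisy signal
z = whittaker_smooth(y, lam=50.0)
```

Because the smoothed `z` is a closed-form linear solution, it can be differentiated numerically without the parametric bias an ANN fit introduces, which is the property the decoupling step depends on.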
Linden, Ariel
2017-04-01
The basic single-group interrupted time series analysis (ITSA) design has been shown to be susceptible to the most common threat to validity, history: the possibility that some other event caused the observed effect in the time series. A single-group ITSA with a crossover design (in which the intervention is introduced and withdrawn 1 or more times) should be more robust. In this paper, we describe and empirically assess the susceptibility of this design to bias from history. Time series data from 2 natural experiments (the effect of multiple repeals and reinstatements of Louisiana's motorcycle helmet law on motorcycle fatalities and the association between the implementation and withdrawal of Gorbachev's antialcohol campaign with Russia's mortality crisis) are used to illustrate that history remains a threat to ITSA validity, even in a crossover design. Both empirical examples reveal that the single-group ITSA with a crossover design may be biased because of history. In the case of motorcycle fatalities, helmet laws appeared effective in reducing mortality (while repealing the law increased mortality), but when a control group was added, it was shown that this trend was similar in both groups. In the case of Gorbachev's antialcohol campaign, only when contrasting the results against those of a control group was the withdrawal of the campaign found to be the more likely culprit in explaining the Russian mortality crisis than the collapse of the Soviet Union. Even with a robust crossover design, single-group ITSA models remain susceptible to bias from history. Therefore, a comparable control group design should be included, whenever possible. © 2016 John Wiley & Sons, Ltd.
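The segmented regression underlying a single-group ITSA can be sketched as follows; the series, break point, and effect sizes are synthetic illustrations, not data from the paper:

```python
import numpy as np

def itsa_design(t, interventions):
    """Segmented-regression design matrix for a single-group ITSA:
    intercept and trend, plus a level-shift and slope-change term
    for each intervention start time (crossover = several start times)."""
    cols = [np.ones_like(t, float), t.astype(float)]
    for t0 in interventions:
        d = (t >= t0).astype(float)
        cols += [d, d * (t - t0)]          # level shift, slope change
    return np.column_stack(cols)

t = np.arange(40)
# Synthetic series: baseline trend with a hypothetical level drop at t = 20
y = 5 + 0.5 * t - 4.0 * (t >= 20)
X = itsa_design(t, [20])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta = [intercept, trend, level change, slope change]
```

The history threat discussed above is precisely that a nonzero level-change coefficient can be produced by a co-occurring event, which is why a control series fit with the same design is recommended.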
ERIC Educational Resources Information Center
College Board Advocacy & Policy Center, 2012
2012-01-01
In 2011, the National Office for School Counselor Advocacy launched a journal series to support and build awareness of the issues and challenges raised by the College Board Advocacy & Policy Center's research report, "The Educational Experience of Young Men of Color" (youngmenofcolor.collegeboard.org.) The intent of the series is to…
The Working Experience Books 1, 2, and 3.
ERIC Educational Resources Information Center
Smith, Jeanne H.; Ringel, Harry
Books 1, 2, and 3 of "The Working Experience," a series of texts for English-as-a-Second-Language (ESL) students, are contained in this document. The series builds on oral skills to develop reading and writing ability while still expanding oral English-language proficiency. Since one of the basic principles underlying the series is the idea that…
New Insights into Signed Path Coefficient Granger Causality Analysis.
Zhang, Jian; Li, Chong; Jiang, Tianzi
2016-01-01
Granger causality analysis, as a time series analysis technique derived from econometrics, has been applied in an ever-increasing number of publications in the field of neuroscience, including fMRI, EEG/MEG, and fNIRS. The present study mainly focuses on the validity of "signed path coefficient Granger causality," a Granger-causality-derived analysis method that has been adopted by many fMRI studies in the last few years. This method generally estimates the causality effect among the time series by an order-1 autoregression, and defines a positive or negative coefficient as an "excitatory" or "inhibitory" influence. In the current work we conducted a series of computations on resting-state fMRI data and simulation experiments to illustrate that the signed path coefficient method is flawed and untenable, because the autoregressive coefficients are not always consistent with the real causal relationships, which inevitably leads to erroneous conclusions. Overall, our findings suggest that the applicability of this kind of causality analysis is rather limited; hence, researchers should be more cautious in applying signed path coefficient Granger causality to fMRI data to avoid misinterpretation.
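The order-1 autoregression at the heart of the critiqued method can be sketched on synthetic data as follows. In this deliberately well-behaved case the fitted cross coefficient matches the true positive coupling; the paper's point is that in general the sign need not track the real causal relationship:

```python
import numpy as np

def order1_var_coeffs(x, y):
    """Fit y[t] = a*y[t-1] + b*x[t-1]; the sign of b is what the
    'signed path coefficient' method reads as excitatory/inhibitory."""
    X = np.column_stack([y[:-1], x[:-1]])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return coef  # (a, b)

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    # True generative model: x drives y with a positive coefficient of 0.8
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
a, b = order1_var_coeffs(x, y)
```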
NASA Astrophysics Data System (ADS)
Marcos-Garcia, Patricia; Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio
2016-04-01
Extreme natural phenomena, and more specifically droughts, constitute a serious environmental, economic, and social issue in Southern Mediterranean countries, and are common in the Mediterranean Spanish basins due to the high temporal and spatial rainfall variability. Drought events are characterized by their complexity, being often difficult to identify and quantify both in time and space, and a universally accepted definition does not even exist. This fact, along with future uncertainty about the duration and intensity of the phenomena on account of climate change, makes it necessary to improve knowledge about the impacts of climate change on droughts in order to design management plans and mitigation strategies. The present abstract aims to evaluate the impact of climate change on both meteorological and hydrological droughts through the use of a generalization of the Standardized Precipitation Index (SPI). We use the Standardized Flow Index (SFI) to assess hydrological drought, using flow time series instead of rainfall time series. In the case of meteorological droughts, the Standardized Precipitation and Evapotranspiration Index (SPEI) has been applied to assess the variability of temperature impacts. In order to characterize climate change impacts on droughts, we have used projections from the CORDEX project (Coordinated Regional Climate Downscaling Experiment). Future rainfall and temperature time series for the short (2011-2040) and medium terms (2041-2070) were obtained, applying a quantile mapping method to correct the bias of these time series. Regarding hydrological drought, the Témez hydrological model has been applied to simulate the impacts of future temperature and rainfall time series on runoff and river discharges. It is a conceptual, lumped hydrological model with few parameters. Nevertheless, it is necessary to point out the time lag between the meteorological and the hydrological droughts.
The case study is the Jucar river basin (Spain), a highly regulated system with a share of 80% of water use for irrigated agriculture. The results show that the climate change would increase the historical drought impacts in the river basin. Acknowledgments The study has been supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and European FEDER funds.
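A dependency-free sketch of an SPI-style standardization: an empirical CDF mapped to standard-normal quantiles. Operational SPI fits a gamma distribution to the aggregated precipitation instead, and the SFI mentioned above applies the same transform to a flow series; the rainfall values here are invented:

```python
import numpy as np
from statistics import NormalDist

def spi_empirical(precip):
    """Non-parametric SPI sketch: empirical CDF (Weibull plotting position)
    mapped to standard-normal quantiles. Negative values indicate drought."""
    precip = np.asarray(precip, float)
    ranks = precip.argsort().argsort() + 1        # ranks 1..n
    cdf = ranks / (len(precip) + 1.0)             # strictly inside (0, 1)
    return np.array([NormalDist().inv_cdf(p) for p in cdf])

# Hypothetical monthly rainfall totals (mm); the driest month gets the lowest SPI
rain = [30, 55, 12, 80, 40, 5, 60, 25, 70, 45]
spi = spi_empirical(rain)
```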
Association mining of dependency between time series
NASA Astrophysics Data System (ADS)
Hafez, Alaaeldin
2001-03-01
Time series analysis is considered a crucial component of strategic control over a broad variety of disciplines in business, science, and engineering. Time series data is a sequence of observations collected over intervals of time. Each time series describes a phenomenon as a function of time. Analysis of time series data includes discovering trends (or patterns) in a time series sequence. In the last few years, data mining has emerged and been recognized as a new technology for data analysis. Data mining is the process of discovering potentially valuable patterns, associations, trends, sequences, and dependencies in data. Data mining techniques can discover information that many traditional business analysis and statistical techniques fail to deliver. In this paper, we adapt and innovate data mining techniques to analyze time series data. Using data mining techniques, maximal frequent patterns are discovered and used in predicting future sequences or trends, where trends describe the behavior of a sequence. In order to include different types of time series (e.g., irregular and non-systematic), we consider past frequent patterns of the same time sequences (local patterns) and of other dependent time sequences (global patterns). We use the word 'dependent' instead of the word 'similar' to emphasize real-life time series where two time series sequences could be completely different (in values, shapes, etc.), but still react to the same conditions in a dependent way. In this paper, we propose the Dependence Mining Technique, which could be used in predicting time series sequences. The proposed technique consists of three phases: (a) for all time series sequences, generate their trend sequences; (b) discover maximal frequent trend patterns and generate pattern vectors (to keep information about frequent trend patterns); and (c) use the trend pattern vectors to predict future time series sequences.
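Phase (a), mapping a numeric series to a trend sequence, together with a simple frequent-pattern count, can be sketched as follows; this sliding-window count is an illustration, not the paper's maximal-pattern mining:

```python
from collections import Counter

def trend_sequence(series):
    """Map a numeric series to trend symbols: U(p), D(own), F(lat)."""
    return ''.join('U' if b > a else 'D' if b < a else 'F'
                   for a, b in zip(series, series[1:]))

def frequent_patterns(trends, length, min_count=2):
    """Count sliding windows of `length` trend symbols; keep the frequent ones."""
    windows = Counter(trends[i:i + length]
                      for i in range(len(trends) - length + 1))
    return {p: c for p, c in windows.items() if c >= min_count}

s = [1, 2, 3, 2, 1, 2, 3, 2, 1]      # a repeating up-up-down-down shape
tr = trend_sequence(s)                # 'UUDDUUDD'
pats = frequent_patterns(tr, 2)       # recurring local trend patterns
```

The recurring patterns ('UU', 'UD', 'DD') are the kind of local trend regularities the Dependence Mining Technique would collect, across one series (local) or dependent series (global).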
There's alcohol in my soap: portrayal and effects of alcohol use in a popular television series.
van Hoof, Joris J; de Jong, Menno D T; Fennis, Bob M; Gosselt, Jordy F
2009-06-01
Two studies are reported addressing media influences on adolescents' alcohol-related attitudes and behaviours. A content analysis was conducted to investigate the prevalence of alcohol portrayal in a Dutch soap series. The coding scheme covered alcohol consumption per soap character, drinking situations, and drinking times. Inter-coder reliability was satisfactory. The results showed that alcohol portrayal was prominent and that many instances of alcohol use reflected undesirable behaviours. To assess the influence of such alcohol cues on adolescents, a 2 × 2 experiment was conducted focusing on the separate and combined effects of alcohol portrayal in the soap series and surrounding alcohol commercials. Whereas the alcohol commercials had the expected effects on adolescents' attitudes, the alcohol-related soap content appeared to have only unexpected effects: adolescents who were exposed to the alcohol portrayal in the soap series had a less positive attitude towards alcohol and lower drinking intentions. Implications of these findings for health policy and future research are discussed.
NASA Astrophysics Data System (ADS)
Hermosilla, Txomin; Wulder, Michael A.; White, Joanne C.; Coops, Nicholas C.; Hobart, Geordie W.
2017-12-01
The use of time series satellite data allows for the temporally dense, systematic, transparent, and synoptic capture of land dynamics over time. Subsequent to the opening of the Landsat archive, several time series approaches for characterizing landscape change have been developed, often representing a particular analytical time window. The information richness and widespread utility of these time series data have created a need to maintain the currency of time series information via the addition of new data, as it becomes available. When an existing time series is temporally extended, it is critical that previously generated change information remains consistent, thereby not altering reported change statistics or science outcomes based on that change information. In this research, we investigate the impacts and implications of adding additional years to an existing 29-year annual Landsat time series for forest change. To do so, we undertook a spatially explicit comparison of the 29 overlapping years of a time series representing 1984-2012, with a time series representing 1984-2016. Surface reflectance values, and presence, year, and type of change were compared. We found that the addition of years to extend the time series had minimal effect on the annual surface reflectance composites, with slight band-specific differences (r ≥ 0.1) in the final years of the original time series being updated. The area of stand replacing disturbances and determination of change year are virtually unchanged for the overlapping period between the two time-series products. Over the overlapping temporal period (1984-2012), the total area of change differs by 0.53%, equating to an annual difference in change area of 0.019%. Overall, the spatial and temporal agreement of the changes detected by both time series was 96%. Further, our findings suggest that the entire pre-existing historic time series does not need to be re-processed during the update process. 
Critically, given the time series change detection and update approach followed here, science outcomes or reports representing one temporal epoch can be considered stable and will not be altered when a time series is updated with newly available data.
Gutenberg-Richter law for Internetquakes
NASA Astrophysics Data System (ADS)
Abe, Sumiyoshi; Suzuki, Norikazu
2003-03-01
The temporal behavior of the Internet is studied by performing Ping experiments. Sudden drastic changes in the Internet time series of round-trip times of the Ping signals (i.e., congestion of the network) are catastrophic and can be identified as “Internetquakes”. A magnitude for Internetquakes is defined, and the Gutenberg-Richter law is found to hold for the cumulative frequency of Internetquakes as a function of magnitude. Therefore, earthquakes, financial markets, and the Internet share a common scale-free nature in their temporal behaviors.
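The Gutenberg-Richter relation referenced here states that the cumulative number of events with magnitude at least m behaves as log10 N(≥ m) ≈ a − b·m. A minimal sketch of checking this on synthetic magnitudes follows; the paper's Ping-based magnitude definition is not reproduced, and all numbers are illustrative.

```python
import math, random

random.seed(0)
# Magnitudes drawn from an exponential law reproduce G-R behavior exactly:
# P(M >= m) = 10**(-b*m). The paper's Ping-based magnitudes are not used.
b_true = 1.0
mags = [random.expovariate(b_true * math.log(10)) for _ in range(5000)]

# Cumulative counts N(>= m) on a grid of magnitude thresholds.
xs, ys = [], []
for k in range(1, 16):
    m = 0.1 * k
    n = sum(1 for x in mags if x >= m)
    if n:
        xs.append(m)
        ys.append(math.log10(n))

# Least-squares slope of log10 N versus m; its negative estimates b.
npts = len(xs)
mx, my = sum(xs) / npts, sum(ys) / npts
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"estimated b = {-slope:.2f}")  # close to b_true = 1.0
```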
Omori's law in the Internet traffic
NASA Astrophysics Data System (ADS)
Abe, S.; Suzuki, N.
2003-03-01
The Internet is a complex system whose temporal behavior is highly nonstationary and exhibits sudden drastic changes regarded as main shocks or catastrophes. Here, analyzing a set of time series data of round-trip times measured in an echo experiment with the Ping command, the property of "aftershocks" (i.e., catastrophes of smaller scales) after a main shock is studied. It is found that the aftershocks obey Omori's law. Thus, the Internet shares with earthquakes and financial-market crashes a common scale-invariant feature in the temporal patterns of aftershocks.
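Omori's law says the aftershock rate decays as n(t) = K/(t + c)^p with time t after the main shock. As a hedged illustration of how such a decay can be checked, the sketch below generates synthetic aftershock times by thinning a Poisson process and recovers p from a log-log fit; K, c, p, and the binning are assumptions, not values from the study.

```python
import math, random

random.seed(1)
K, c, p = 200.0, 1.0, 1.0   # assumed Omori parameters, not from the study

# Generate aftershock times on (0, 100) by thinning a Poisson process of
# rate n(0) = K / c**p: accept an event at t with probability n(t) / n(0).
events, t, t_max = [], 0.0, 100.0
rate0 = K / c ** p
while t < t_max:
    t += random.expovariate(rate0)
    if t < t_max and random.random() < (K / (t + c) ** p) / rate0:
        events.append(t)

# Log-spaced bins; fit log(rate) against log(t + c) to recover -p.
edges = [0.1 * 10 ** (0.25 * k) for k in range(12)]
xs, ys = [], []
for lo, hi in zip(edges, edges[1:]):
    n = sum(1 for e in events if lo <= e < hi)
    if n:
        xs.append(math.log(0.5 * (lo + hi) + c))
        ys.append(math.log(n / (hi - lo)))

npts = len(xs)
mx, my = sum(xs) / npts, sum(ys) / npts
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"estimated p = {-slope:.2f}")
```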
Highly comparative time-series analysis: the empirical structure of time series and their methods.
Fulcher, Ben D.; Little, Max A.; Jones, Nick S.
2013-06-06
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines. PMID:23554344
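The "reduced representation" idea — summarizing each series by the outputs of many analysis operations and then working in that feature space — can be shown in miniature. The toy features below (mean, standard deviation, lag-1 autocorrelation) stand in for the thousands of operations the paper catalogues; every detail of this sketch is illustrative rather than taken from the work.

```python
import math, random

random.seed(4)

def features(x):
    """Toy feature vector: mean, standard deviation, lag-1 autocorrelation."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    sd = math.sqrt(var / n)
    ac1 = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / var
    return (mean, sd, ac1)

def distance(f, g):
    """Euclidean distance in feature space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, g)))

smooth = [math.sin(0.1 * i) for i in range(200)]
shifted = [math.sin(0.1 * i + 1.0) for i in range(200)]
noise = [random.gauss(0.0, 1.0) for _ in range(200)]

f_smooth, f_shifted, f_noise = map(features, (smooth, shifted, noise))
# The two sinusoids land near each other in feature space; the noise
# series, with near-zero lag-1 autocorrelation, sits far from both.
print(distance(f_smooth, f_shifted) < distance(f_smooth, f_noise))
```

Distances of this kind are what allow datasets to be organized automatically and alternative methods to be retrieved by behavioral similarity.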
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological processes are temporally dependent. Hydrological time series that include dependence components do not meet the consistency assumption required for hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for evaluating the significance of hydrological dependence, based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component, and selecting reasonable thresholds of the correlation coefficient, the method divides the significance of dependence into five degrees: no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from the first order to the p-th order, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonableness of the deduced formula was verified through Monte-Carlo experiments characterizing the relationship between the correlation coefficient and the auto-correlation coefficients. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
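One plausible reading of the correlation-coefficient grading can be sketched as follows: fit an AR(1) model, treat the fitted lag-1 term as the dependence component, and grade |r| between the series and that component against thresholds. The cutoffs and the AR(1) restriction here are assumptions for illustration; the paper derives its own thresholds.

```python
import math, random

def corr(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def dependence_grade(x):
    """Grade dependence by r between the series and its AR(1) component."""
    phi = corr(x[:-1], x[1:])              # lag-1 autocorrelation as AR(1) phi
    component = [phi * v for v in x[:-1]]  # fitted dependence component
    r = abs(corr(x[1:], component))
    # Illustrative cutoffs only; the paper derives its own thresholds.
    for cut, label in [(0.2, "no"), (0.4, "weak"),
                       (0.6, "mid"), (0.8, "strong")]:
        if r < cut:
            return r, label + " variability"
    return r, "drastic variability"

random.seed(2)
x = [0.0]
for _ in range(2000):                      # synthetic AR(1) series, phi = 0.7
    x.append(0.7 * x[-1] + random.gauss(0.0, 1.0))
r, grade = dependence_grade(x)
print(grade)   # "strong variability" for phi near 0.7
```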
The Da Vinci Xi and robotic radical prostatectomy-an evolution in learning and technique.
Goonewardene, S S; Cahill, D
2017-06-01
The da Vinci Xi robot has been introduced as the successor to the Si platform. The promise of the Xi is to open the door to new surgical procedures. For robotic-assisted radical prostatectomy (RARP)/pelvic surgery, the potential is better vision and longer instruments. How has the Xi impacted operative and pathological parameters as indicators of surgical performance? This is a comparison of an initial series of 42 RARPs performed with the Xi system in 2015 against a series using the Si system immediately before Xi uptake in the same calendar year, and an Si series performed by the same surgeon synchronously with the Xi series, using operative time, blood loss, and positive margins as surrogates of surgical performance. Subjectively and objectively, there is a learning curve to Xi uptake, reflected in longer operative times, but no impact on T2 positive margins, which are the single measure most reflective of RARP outcomes. Subjectively, the vision of the Xi is inferior to that of the Si system, and the integrated diathermy system and automated setup are quirky; all require experience to overcome. There is a learning curve in progressing from the Si to the Xi da Vinci surgical platform, but this does not negatively impact the outcome.
ERIC Educational Resources Information Center
Ngan, Chun-Kit
2013-01-01
Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…
A Review of Subsequence Time Series Clustering
Zolhavarieh, Seyedjamal; Aghabozorgi, Saeed; Teh, Ying Wah
2014-01-01
Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies. PMID:25140332
Interactive orbital proximity operations planning system
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Ellis, Stephen R.
1990-01-01
An interactive graphical planning system for on-site planning of proximity operations in the congested multispacecraft environment about the space station is presented. The system shows the astronaut a bird's eye perspective of the space station, the orbital plane, and the co-orbiting spacecraft. The system operates in two operational modes: (1) a viewpoint mode, in which the astronaut is able to move the viewpoint around in the orbital plane to range in on areas of interest; and (2) a trajectory design mode, in which the trajectory is planned. Trajectory design involves the composition of a set of waypoints which result in a fuel-optimal trajectory which satisfies all operational constraints, such as departure and arrival constraints, plume impingement constraints, and structural constraints. The main purpose of the system is to present the trajectory and the constraints in an easily interpretable graphical format. Through a graphical interactive process, the trajectory waypoints are edited until all operational constraints are satisfied. A series of experiments was conducted to evaluate the system. Eight airline pilots with no prior background in orbital mechanics participated in the experiments. Subject training included a stand-alone training session of about 6 hours duration, in which the subjects became familiar with orbital mechanics concepts and performed a series of exercises to familiarize themselves with the control and display features of the system. They then carried out a series of production runs in which 90 different trajectory design situations were randomly addressed. The purpose of these experiments was to investigate how the planning time, planning efforts, and fuel expenditures were affected by the planning difficulty. Some results of these experiments are presented.
Multiscale structure of time series revealed by the monotony spectrum.
Vamoş, Călin
2017-03-01
Observation of complex systems produces time series with specific dynamics at different time scales. The majority of the existing numerical methods for multiscale analysis first decompose the time series into several simpler components and the multiscale structure is given by the properties of their components. We present a numerical method which describes the multiscale structure of arbitrary time series without decomposing them. It is based on the monotony spectrum defined as the variation of the mean amplitude of the monotonic segments with respect to the mean local time scale during successive averagings of the time series, the local time scales being the durations of the monotonic segments. The maxima of the monotony spectrum indicate the time scales which dominate the variations of the time series. We show that the monotony spectrum can correctly analyze a diversity of artificial time series and can discriminate the existence of deterministic variations at large time scales from the random fluctuations. As an application we analyze the multifractal structure of some hydrological time series.
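The core quantities of the method — monotonic segments, their durations (local time scales), and their amplitudes — can be computed directly. The sketch below performs a single pass (the successive averagings that build the full spectrum are omitted) and is an illustrative reading, not the author's code.

```python
def monotonic_segments(x):
    """Split the series into maximal monotonic runs (index pairs)."""
    segs, start, rising = [], 0, None
    for i in range(1, len(x)):
        d = x[i] - x[i - 1]
        if d == 0:
            continue                      # flat steps extend the current run
        r = d > 0
        if rising is None:
            rising = r
        elif r != rising:
            segs.append((start, i - 1))   # close the run at the turning point
            start, rising = i - 1, r
    segs.append((start, len(x) - 1))
    return segs

def monotony_point(x):
    """Mean local time scale and mean amplitude over monotonic segments."""
    segs = monotonic_segments(x)
    scale = sum(j - i for i, j in segs) / len(segs)
    amp = sum(abs(x[j] - x[i]) for i, j in segs) / len(segs)
    return scale, amp

x = [0, 2, 4, 3, 5, 7, 6, 8]
print(monotony_point(x))   # (1.4, 2.4)
```

Repeating this after each averaging of the series, and plotting mean amplitude against mean local time scale, would trace out the monotony spectrum described above.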
Shimat V. Joseph; S. Kristine Braman; Jim Quick; James L. Hanula
2011-01-01
Hemlock woolly adelgid (HWA), Adelges tsugae Annand is a serious pest of eastern and Carolina hemlock in the eastern United States. A series of experiments compared commercially available and experimental insecticides, rates, application methods and timing for HWA control in Georgia and North Carolina. Safari 20 SG (dinotefuran) provided an average of 79 to 87%...
ERIC Educational Resources Information Center
Jingjit, Mathukorn
2015-01-01
This study aims to obtain more insight regarding the effect of multimedia learning on third-grade Thai primary pupils' achievement in Size and Depth Vocabulary of English. A quasi-experiment was applied using a "one group pretest-posttest design" combined with a "time series design," as well as data triangulation. The sample…
ERIC Educational Resources Information Center
Garcia, Juan R., Ed; And Others
This anthology compiles articles and essays on Chicano and Chicana political concerns in the 1980's, on cultural aspects of the Chicano experience, and on historical issues and events. The papers are: (1) "Chicano Politics after 1984" by Christine Marie Sierra; (2) "Hacia una Teoria para la Liberacion de la Mujer" (analysis of…
ERIC Educational Resources Information Center
Trent, John
2016-01-01
This article reports the results of a multiple qualitative case study which investigated the challenges that seven early career English language teachers in Hong Kong confronted as they constructed their professional and personal identities. A series of in-depth interviews with participants during the entire first year of their full-time teaching…
ERIC Educational Resources Information Center
Peace, Brian, Ed.; Foster, Keith, Ed.
The following papers are included: "Setting the Scene" (Brian Peace); "Different Training for Different Adult Educators?" (Michael Newman); "The Training of Part-Time Teachers in Adult Education: The UK Experience" (Brian Graham); "Adult Education Tutor Support" (Aileen Kelly); "Six Category…
An Experiment in Basic Airborne Electronics Training. Part 5: Evaluation of the Revised Courses.
ERIC Educational Resources Information Center
Baldwin, Robert O.; Johnson, Kirk A.
This is the fifth in a series on shortened versions of the Avionics Fundamentals and Aviation Electronics Technician R (Radar) Courses. The first four studies indicated that the initial revisions of the courses led to substantial savings in training time, but that the graduates from the revised courses were slightly inferior to graduates from the…
A Developmental Exploration of the Effects of Spacing on the Recall of Repeated Words.
ERIC Educational Resources Information Center
Wilson, William P.; Witryol, Sam L.
The purpose of this experiment was to examine lag function developmental parameters and to test a related developmental hypothesis and the predictions it generated. Fourth and eighth graders and adults were shown a series of words, one at a time, with some words presented twice. Between the two presentations of each repeated word there was one of…
ERIC Educational Resources Information Center
Teranishi, Robert T.
2010-01-01
Highly respected scholar Robert Teranishi draws on his vast research to present this timely and compelling examination of the experience of Asian Americans in higher education. "Asians in the Ivory Tower" explores why and how Asian Americans and Pacific Islanders (AAPIs) are important to our nation's higher education priorities and places the…
ERIC Educational Resources Information Center
Mooney, Gai
2010-01-01
Statistics is often presented to students as a series of algorithms to be learnt by heart and applied at the appropriate time to get "the correct answer". This approach, while it may in fact produce the right answer, has been shown to be minimally effective at helping students understand the underlying statistical concepts. As Holmes noted,…
ERIC Educational Resources Information Center
Scotland, James
2016-01-01
A time-series analysis was used to investigate Arabic undergraduate students' (n = 50) perceptions of assessed group work in a major government institution of higher education in Qatar. A longitudinal mixed methods approach was employed. Likert scale questionnaires were completed over the duration of a collaborative writing event. Additionally,…
Kapsenberg, Lydia; Kelley, Amanda L.; Shaw, Emily C.; Martz, Todd R.; Hofmann, Gretchen E.
2015-01-01
Understanding how declining seawater pH caused by anthropogenic carbon emissions, or ocean acidification, impacts Southern Ocean biota is limited by a paucity of pH time-series. Here, we present the first high-frequency in-situ pH time-series in near-shore Antarctica from spring to winter under annual sea ice. Observations from autonomous pH sensors revealed a seasonal increase of 0.3 pH units. The summer season was marked by an increase in temporal pH variability relative to spring and early winter, matching coastal pH variability observed at lower latitudes. Using our data, simulations of ocean acidification show a future period of deleterious wintertime pH levels potentially expanding to 7–11 months annually by 2100. Given the presence of (sub)seasonal pH variability, Antarctic marine species have an existing physiological tolerance of temporal pH change that may influence adaptation to future acidification. Yet, pH-induced ecosystem changes remain difficult to characterize in the absence of sufficient physiological data on present-day tolerances. It is therefore essential to incorporate natural and projected temporal pH variability in the design of experiments intended to study ocean acidification biology.
Francaviglia, Natale; Maugeri, Rosario; Odierna Contino, Antonino; Meli, Francesco; Fiorenza, Vito; Costantino, Gabriele; Giammalva, Roberto Giuseppe; Iacopino, Domenico Gerardo
2017-01-01
Cranioplasty represents a challenge in neurosurgery. Its goal is not only plastic reconstruction of the skull but also restoration and preservation of cranial function, improvement of cerebral hemodynamics, and mechanical protection of the neural structures. The ideal material for reconstructive procedures and the surgical timing are still controversial. Many alloplastic materials are available for performing cranioplasty and, among these, titanium still represents a widely proven and accepted choice. The aim of our study was to present our preliminary experience with a "custom-made" cranioplasty, using electron beam melting (EBM) technology, in a series of ten patients. EBM is a new sintering method for shaping titanium powder directly into three-dimensional (3D) implants. To the best of our knowledge, this is the first report of a skull reconstruction performed by this technique. At 1-year follow-up, no postoperative complications were observed and good clinical and esthetic outcomes were achieved. The higher costs compared with other types of titanium mesh, the longer production process, and the greater expertise needed for this technique are compensated by the achievement of the most complex skull reconstructions with a shorter operative time.
NASA Technical Reports Server (NTRS)
McGillicuddy, Dennis J., Jr.; Kosnyrev, V. K.
2001-01-01
An open boundary ocean model is configured in a domain bounded by the four TOPEX/Poseidon (T/P) ground tracks surrounding the US Joint Global Ocean Flux Study Bermuda Atlantic Time-Series Study (BATS) site. This implementation facilitates prescription of model boundary conditions directly from altimetric measurements (both T/P and ERS-2). The expected error characteristics for a domain of this size with periodically updated boundary conditions are established with idealized numerical experiments using simulated data. A hindcast simulation is then constructed using actual altimetric observations during the period October 1992 through September 1998. Quantitative evaluation of the simulation suggests significant skill. The correlation coefficient between predicted sea level anomaly and ERS observations in the model interior is 0.89; that for predicted versus observed dynamic height anomaly based on hydrography at the BATS site is 0.73. Comparison with the idealized experiments suggests that the main source of error in the hindcast is temporal undersampling of the boundary conditions. The hindcast simulation described herein provides a basis for retrospective analysis of BATS observations in the context of the mesoscale eddy field.
Impacts of GNSS position offsets on global frame stability
NASA Astrophysics Data System (ADS)
Griffiths, Jake; Ray, Jim
2015-04-01
Positional offsets appear in Global Navigation Satellite System (GNSS) time series for a variety of reasons. Antenna or radome changes are the most common cause for these discontinuities. Many others are from earthquakes, receiver changes, and different anthropogenic modifications at or near the stations. Some jumps appear for unknown or undocumented reasons. Accurate determination of station velocities, and therefore geophysical parameters and terrestrial reference frames, requires that positional offsets be correctly found and compensated. Williams (2003) found that undetected offsets introduce a random walk error component in individual station time series. The topic of detecting positional offsets has received considerable attention in recent years (e.g., Detection of Offsets in GPS Experiment; DOGEx), and most research groups using GNSS have adopted a mix of manual and automated methods for finding them. The removal of a positional offset from a time series is usually handled by estimating the average station position on both sides of the discontinuity. Except for large earthquake events, the velocity is usually assumed constant and continuous across the positional jump. This approach is sufficient in the absence of time-correlated errors. However, GNSS time series contain periodic and power-law (flicker) errors. In this paper, we evaluate the impact to individual station results and the overall stability of the global reference frame from adding increasing numbers of positional discontinuities. We use the International GNSS Service (IGS) weekly SINEX files, and iteratively insert positional offset parameters. Each iteration includes a restacking of the modified SINEX files using the CATREF software from Institut National de l'Information Géographique et Forestière (IGN). 
Comparisons of successive stacked solutions are used to assess the impacts on the time series of x-pole and y-pole offsets, along with changes in regularized position and secular velocity for stations with more than 2.5 years of data. Our preliminary results indicate that the change in polar motion scatter is logarithmic with increasing numbers of discontinuities. The best-fit natural logarithm to the changes in scatter for x-pole has R² = 0.58; the fit for the y-pole series has R² = 0.99. From these empirical functions, we find that polar motion scatter increases from zero when the total rate of discontinuities exceeds 0.2 (x-pole) and 1.3 (y-pole) per station, on average (the IGS has 0.65 per station). Thus, the presence of position offsets in GNSS station time series is likely already a contributor to IGS polar motion inaccuracy and global frame instability. Impacts to station position and velocity estimates depend on noise features found in that station's positional time series. For instance, larger changes in velocity occur for stations with shorter and noisier data spans. This is because an added discontinuity parameter for an individual station time series can induce changes in average position on both sides of the break. We will expand on these results, and consider remaining questions about the role of velocity discontinuities and the effects caused by non-core reference frame stations.
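The compensation step described above — estimating the average station position on both sides of a discontinuity while keeping the velocity continuous — reduces to a small least-squares problem. The sketch below is illustrative only: synthetic one-dimensional data, a known break epoch, and white noise, so none of the flicker-noise complications discussed in the text appear.

```python
import random

random.seed(3)
T = 50                                  # known discontinuity epoch (assumed)
true_v, true_step = 0.02, 5.0           # synthetic velocity and offset
times = list(range(100))
obs = [true_v * t + (true_step if t >= T else 0.0) + random.gauss(0.0, 0.1)
       for t in times]

# Design matrix: intercept before T, intercept after T, shared velocity.
rows = [[1.0 if t < T else 0.0, 1.0 if t >= T else 0.0, float(t)]
        for t in times]

def lstsq3(A, y):
    """Solve the 3-parameter least-squares problem via normal equations."""
    M = [[sum(r[i] * r[j] for r in A) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * v for r, v in zip(A, y)) for i in range(3)]
    for c in range(3):                   # Gaussian elimination with pivoting
        piv = max(range(c, 3), key=lambda k: abs(M[k][c]))
        M[c], M[piv], b[c], b[piv] = M[piv], M[c], b[piv], b[c]
        for k in range(c + 1, 3):
            f = M[k][c] / M[c][c]
            M[k] = [mk - f * mc for mk, mc in zip(M[k], M[c])]
            b[k] -= f * b[c]
    beta = [0.0, 0.0, 0.0]
    for c in (2, 1, 0):                  # back-substitution
        beta[c] = (b[c] - sum(M[c][k] * beta[k]
                              for k in range(c + 1, 3))) / M[c][c]
    return beta

a_pre, a_post, v = lstsq3(rows, obs)
print(f"velocity = {v:.3f}, offset = {a_post - a_pre:.2f}")
```

With time-correlated (flicker) noise, as the text notes, this simple estimator would no longer be sufficient, and each added break inflates the uncertainty of the recovered velocity.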
Gor, Ronak A; Long, Christopher J; Shukla, Aseem R; Kirsch, Andrew J; Perez-Brayfield, Marcos; Srinivasan, Arun K
2016-02-01
To review peri-procedural outcomes from a large, multi-institutional series of pediatric urology patients treated with laparoendoscopic single-site surgery (LESS) for major extirpative and reconstructive procedures. Consecutive LESS cases between January 2011 and May 2014 from three free-standing pediatric referral centers were reviewed. Data include age, sex, operative time, blood loss, length of stay, and complications according to the modified Clavien-Dindo classification. The Hasson technique was used for peritoneal entry, the GelPOINT advanced access platform was inserted, and standard 5-mm laparoscopic instruments were used. Fifty-nine patients (median age 5 years, range 4 months-17 years) met inclusion criteria: 29 nephrectomies, 9 nephroureterectomies, 3 bilateral nephrectomies, 5 heminephrectomies, 5 renal cyst decortications, 3 bilateral gonadectomies, 2 Malone antegrade continence enemas, 2 calyceal diverticulectomies, and 1 ovarian detorsion with cystectomy. Median operative times for each case type were comparable to published experiences with traditional laparoscopy. Overall mean and median length of stay were 36.2 hours and 1 day, respectively. There were two complications: a port-site hernia requiring surgical repair (Clavien IIIb) and a superficial port-site infection that resolved with antibiotics (Clavien II). Cosmetic outcomes were subjectively well received by patients and their parents. Operative time was significantly shorter in the second half of the experience than in the first (102 vs 70 minutes, P < .05). The LESS approach can be broadly applied across many major extirpative and reconstructive procedures within pediatric urology. Our series advances our field's utilization of this technique and its safety. Copyright © 2016 Elsevier Inc. All rights reserved.
HIGH PRESSURE COAL COMBUSTION KINETICS PROJECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stefano Orsino
As part of the U.S. Department of Energy (DOE) initiative to improve the efficiency of coal-fired power plants and reduce the pollution generated by these facilities, DOE has funded the High-Pressure Coal Combustion Kinetics (HPCCK) Project. A series of laboratory experiments was conducted on selected pulverized coals at elevated pressures, with the specific goals of providing new data on pressurized coal combustion that help extend burnout and pollutant-formation models to high pressure and validate them, and of generating samples of solid combustion products for analyses to fill crucial gaps in knowledge of char morphology and fly ash formation. Two series of high-pressure coal combustion experiments were performed using SRI's pressurized radiant coal flow reactor. The first series of tests characterized the near burner flame zone (NBFZ). Three coals were tested, two high volatile bituminous (Pittsburgh No. 8 and Illinois No. 6) and one sub-bituminous (Powder River Basin), at pressures of 1, 2, and 3 MPa (10, 20, and 30 atm). The second series of experiments, which covered high-pressure burnout (HPBO) conditions, utilized a range of substantially longer combustion residence times to produce char burnout levels from 50% to 100%. The same three coals were tested at 1, 2, and 3 MPa, as well as at 0.2 MPa. Tests were also conducted on Pittsburgh No. 8 coal in CO2 entrainment gas at 0.2, 1, and 2 MPa to begin establishing a database of experiments relevant to carbon sequestration techniques. The HPBO test series included use of an impactor-type particle sampler to measure the particle size distribution of fly ash produced under complete burnout conditions. The collected data have been interpreted with the help of CFD and detailed kinetics simulations to extend and validate devolatilization, char combustion, and pollutant models at elevated pressure. A global NOX production sub-model has been proposed. The sub-model reproduces the performance of the detailed chemical reaction mechanism for the NBFZ tests.
On the Reality of Illusory Conjunctions.
Botella, Juan; Suero, Manuel; Durán, Juan I
2017-01-01
The reality of illusory conjunctions in perception has sometimes been questioned, on the grounds that they can be explained by other mechanisms. Most relevant experiments are based on migrations along the space dimension, but the low rate of illusory conjunctions along space can easily hide them among other types of errors. As migrations over time are a more frequent phenomenon, they allow illusory conjunctions to be disentangled from other errors. We report an experiment in which series of colored letters were presented in several spatial locations, allowing for migrations over both space and time. The distributions of frequencies were fit by several multinomial tree models based on alternative hypotheses about illusory conjunctions and the potential sources of free-floating features. The best-fit model acknowledges that most illusory conjunctions are migrations in the time domain. Migrations in space are probably present, but their rate is very low. Other conjunction errors, such as those produced by guessing or by miscategorizations of the to-be-reported feature, are also present in the experiment. The main conclusion is that illusory conjunctions do exist.
4-station ultra-rapid EOP experiment with e-VLBI technique and automated correlation/analysis
NASA Astrophysics Data System (ADS)
Kurihara, S.; Nozawa, K.; Haas, R.; Lovell, J.; McCallum, J.; Quick, J.; Hobiger, T.
2013-08-01
Since 2007, the Geospatial Information Authority of Japan (GSI) and the Onsala Space Observatory (OSO) have performed ultra-rapid dUT1 experiments, which provide near real-time dUT1 values. The technical knowledge gained has already been adopted for the regular series of the Tsukuba-Wettzell intensive session. We have now carried out several 4-station ultra-rapid EOP experiments in association with Hobart and HartRAO, so that not only dUT1 but also the two polar motion parameters can be estimated. In these experiments, the new analysis software c5++, developed by the National Institute of Information and Communications Technology (NICT), was used. In this report we describe past developments, give an overview of the experiments, and conclude with their results.
NASA Astrophysics Data System (ADS)
Ezad, I.; Dobson, D. P.; Brodholt, J. P.; Thomson, A.; Hunt, S.
2017-12-01
The grain size of the transition zone is a poorly known but important geophysical parameter. Among other properties, grain size may control the rheology, seismic attenuation and radiative thermal conductivity of the mantle. However, the grain size of the transition zone minerals ringwoodite (Mg,Fe)2SiO4 and majorite garnet MgSiO3 under appropriate transition-zone conditions is currently unknown, and there are very few experiments with which to constrain it. In order to determine the grain size of the transition zone, the grain growth kinetics must be determined for a range of mantle compositions. We have, therefore, experimentally determined the grain growth kinetics of the lowermost transition zone minerals through multi-anvil experiments at University College London (UCL). This is achieved through a comprehensive set of time series experiments at pressures of 21 GPa and temperatures relevant to the transition zone. We have also determined the effect of varying water content, oxygen fugacity, iron content and aluminium content, as discussed by Dobson and Mariani (2014). Our initial grain growth experiments, conducted at 1200°C and 1400°C at 18 GPa, show extremely slow grain growth kinetics; time series experiments extended to 10^5.8 seconds are unable to produce grains larger than 100 nm. This suggests that fine-grained material at the base of the transition zone will persist on geological timescales. Such small grain sizes suggest that diffusion creep might be the dominant deformation mechanism in this region. Reference: Dobson, D.P., Mariani, E., 2014. The kinetics of the reaction of majorite plus ferropericlase to ringwoodite: Implications for mantle upwellings crossing the 660 km discontinuity. Earth Planet. Sci. Lett. 408, 110-118. doi:10.1016/j.epsl.2014.10.009
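Rate constants from time-series grain-growth experiments of this kind are commonly extracted from the normal growth law d^n - d0^n = k·t. A hedged sketch of that reduction step; the growth exponent n, sizes, and times below are invented for illustration, not the UCL measurements:

```python
import numpy as np

def grain_growth_rate(t, d, d0, n=2):
    """Estimate the rate constant k in the normal grain-growth law
    d**n - d0**n = k*t by least squares through the origin
    (k = sum(t*y) / sum(t*t), with y = d**n - d0**n)."""
    t = np.asarray(t, float)
    d = np.asarray(d, float)
    y = d ** n - d0 ** n
    return np.sum(t * y) / np.sum(t * t)

# Synthetic time series: initial size 50 nm, k = 10 nm^2/s, n = 2.
t = np.array([1e3, 1e4, 1e5])            # annealing durations, s
d = np.sqrt(50.0 ** 2 + 10.0 * t)        # grain sizes implied by the law
k = grain_growth_rate(t, d, d0=50.0, n=2)
```

With k (and its temperature dependence via an Arrhenius term) in hand, the law can be extrapolated to geological timescales, which is how the "fine grains persist" inference above is usually quantified.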
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-07
... Series, adjusted option series and any options series until the time to expiration for such series is... time to expiration for such series is less than nine months be treated differently. Specifically, under... series until the time to expiration for such series is less than nine months. Accordingly, the...
Marwaha, Puneeta; Sunkaria, Ramesh Kumar
2016-09-01
The sample entropy (SampEn) has been widely used to quantify the complexity of RR-interval time series. Higher complexity, and hence higher entropy, is associated with the RR-interval time series of healthy subjects. But SampEn suffers from the disadvantage that it assigns higher entropy to randomized surrogate time series, as well as to certain pathological time series, which is a misleading observation. This wrong estimation of the complexity of a time series may be due to the fact that the existing SampEn technique updates the threshold value as a function of the long-term standard deviation (SD) of the time series. However, the time series of certain pathologies exhibit substantial variability in beat-to-beat fluctuations. The SD of the first-order difference of the time series (the short-term SD) should therefore be considered when updating the threshold value, to account for the period-to-period variations inherent in a time series. In the present work, a new methodology, improved sample entropy (I-SampEn), is proposed, in which the threshold value is updated by considering the period-to-period variations of the time series. The I-SampEn technique assigns higher entropy values to age-matched healthy subjects than to patients suffering from atrial fibrillation (AF) or diabetes mellitus (DM). Our results are in agreement with the theory of reduced complexity of RR-interval time series in patients suffering from chronic cardiovascular and non-cardiovascular diseases.
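A minimal sketch of the idea: standard sample entropy, except that the tolerance r is tied to the SD of the first-order differences (the short-term SD) rather than the long-term SD. This illustrates the I-SampEn threshold choice only; it is not the authors' exact algorithm:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1-D series. The tolerance r defaults to
    0.2 * SD of the first-order differences (short-term variability),
    in the spirit of I-SampEn, instead of 0.2 * long-term SD."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * np.std(np.diff(x))
    N = len(x)

    def count_matches(m):
        # Count template pairs of length m whose Chebyshev distance
        # is within r (self-matches excluded).
        templates = np.array([x[i:i + m] for i in range(N - m)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d <= r)
        return c

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 300))  # highly predictable
noisy = rng.standard_normal(300)                   # white noise
se_regular = sample_entropy(regular)
se_noisy = sample_entropy(noisy)
```

As expected, the predictable series scores far lower entropy than the random one, which is the ordering the abstract argues a well-behaved complexity measure should preserve for healthy versus surrogate data.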
A method for generating high resolution satellite image time series
NASA Astrophysics Data System (ADS)
Guo, Tao
2014-10-01
There is an increasing demand for satellite remote sensing data with both high spatial and high temporal resolution in many applications, but it remains a challenge to improve spatial resolution and temporal frequency simultaneously, owing to the technical limits of current satellite observation systems. Years of R&D effort have led to some success in roughly two directions. On one hand, super-resolution, pan-sharpening, and similar methods can effectively enhance spatial resolution and generate good visual effects, but they hardly preserve spectral signatures and therefore have limited analytical value. On the other hand, time interpolation is a straightforward way to increase temporal frequency, but it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution images. Our method starts with a pair of high and low resolution data sets, and a spatial registration is performed by introducing an LDA model to map high and low resolution pixels to one another. Afterwards, temporal change information is captured through a comparison of the low resolution time series data, projected onto the high resolution data plane, and assigned to each high resolution pixel according to predefined temporal change patterns for each type of ground object. Finally, the simulated high resolution data is generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that the use of costly high resolution data can be reduced as far as possible; it thus offers a highly effective way to build an economically operational monitoring solution for applications such as agriculture, forestry, land use investigation, and environmental monitoring.
Stein, Richard R; Bucci, Vanni; Toussaint, Nora C; Buffie, Charlie G; Rätsch, Gunnar; Pamer, Eric G; Sander, Chris; Xavier, João B
2013-01-01
The intestinal microbiota is a microbial ecosystem of crucial importance to human health. Understanding how the microbiota confers resistance against enteric pathogens and how antibiotics disrupt that resistance is key to the prevention and cure of intestinal infections. We present a novel method to infer microbial community ecology directly from time-resolved metagenomics. This method extends generalized Lotka-Volterra dynamics to account for external perturbations. Data from recent experiments on antibiotic-mediated Clostridium difficile infection is analyzed to quantify microbial interactions, commensal-pathogen interactions, and the effect of the antibiotic on the community. Stability analysis reveals that the microbiota is intrinsically stable, explaining how antibiotic perturbations and C. difficile inoculation can produce catastrophic shifts that persist even after removal of the perturbations. Importantly, the analysis suggests a subnetwork of bacterial groups implicated in protection against C. difficile. Due to its generality, our method can be applied to any high-resolution ecological time-series data to infer community structure and response to external stimuli.
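The perturbed generalized Lotka-Volterra dynamics described above can be sketched with a simple forward-Euler integration. All parameter values below are invented for illustration; they are not inferred from the paper's metagenomic data:

```python
import numpy as np

def glv_step(x, growth, interactions, perturbation, dt=0.01):
    """One forward-Euler step of generalized Lotka-Volterra dynamics
    with an external perturbation term (e.g. an antibiotic):
    dx_i/dt = x_i * (mu_i + sum_j M_ij x_j + eps_i * u(t))."""
    dxdt = x * (growth + interactions @ x + perturbation)
    return np.clip(x + dt * dxdt, 0.0, None)  # abundances stay nonnegative

# Hypothetical two-species community.
mu = np.array([0.5, 0.8])                  # intrinsic growth rates
M = np.array([[-1.0, -0.2],
              [-0.8, -1.0]])               # interaction matrix
eps = np.array([-2.0, 0.0])                # antibiotic hits species 0 only

x = np.array([0.405, 0.476])               # start near the unperturbed equilibrium
for step in range(5000):
    u = 1.0 if 1000 <= step < 2000 else 0.0  # antibiotic pulse for t in [10, 20)
    x = glv_step(x, mu, M, eps * u)
    if step == 1999:
        x_pulse_end = x.copy()             # state at the end of the pulse
x_final = x
```

In this toy run the susceptible species collapses during the pulse and recovers only slowly afterwards, a caricature of how an antibiotic perturbation can leave a long-lived imprint on the community state.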
3D Simulations of the ``Keyhole'' Hohlraum for Shock Timing on NIF
NASA Astrophysics Data System (ADS)
Robey, H. F.; Marinak, M. M.; Munro, D. H.; Jones, O. S.
2007-11-01
Ignition implosions planned for the National Ignition Facility (NIF) require a pulse shape with a carefully designed series of steps, which launch a series of shocks through the ablator and DT fuel. The relative timing of these shocks must be tuned to better than +/- 100ps to maintain the DT fuel on a sufficiently low adiabat. To meet these requirements, pre-ignition tuning experiments using a modified hohlraum geometry are being planned. This modified geometry, known as the ``keyhole'' hohlraum, adds a re-entrant gold cone, which passes through the hohlraum and capsule walls, to provide an optical line-of-sight to directly measure the shocks as they break out of the ablator. In order to assess the surrogacy of this modified geometry, 3D simulations using HYDRA [1] have been performed. The drive conditions and the resulting effect on shock timing in the keyhole hohlraum will be compared with the corresponding results for the standard ignition hohlraum. [1] M.M. Marinak, et al., Phys. Plasmas 8, 2275 (2001).
Robust evaluation of time series classification algorithms for structural health monitoring
NASA Astrophysics Data System (ADS)
Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.
2014-03-01
Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.
Detecting dynamic causal inference in nonlinear two-phase fracture flow
NASA Astrophysics Data System (ADS)
Faybishenko, Boris
2017-08-01
Identifying dynamic causal inference involved in flow and transport processes in complex fractured-porous media is generally a challenging task, because nonlinear and chaotic variables may be positively coupled or correlated for some periods of time, but can then become spontaneously decoupled or non-correlated. In his 2002 paper (Faybishenko, 2002), the author performed a nonlinear dynamical and chaotic analysis of time-series data obtained from the fracture flow experiment conducted by Persoff and Pruess (1995), and, based on the visual examination of time series data, hypothesized that the observed pressure oscillations at both inlet and outlet edges of the fracture result from a superposition of both forward and return waves of pressure propagation through the fracture. In the current paper, the author explores an application of a combination of methods for detecting nonlinear chaotic dynamics behavior along with the multivariate Granger Causality (G-causality) time series test. Based on the G-causality test, the author infers that his hypothesis is correct, and presents a causation loop diagram of the spatial-temporal distribution of gas, liquid, and capillary pressures measured at the inlet and outlet of the fracture. The causal modeling approach can be used for the analysis of other hydrological processes, for example, infiltration and pumping tests in heterogeneous subsurface media, and climatic processes, for example, to find correlations between various meteorological parameters, such as temperature, solar radiation, barometric pressure, etc.
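A bivariate G-causality test of the kind invoked above reduces to an F-test comparing a restricted lag regression (y on its own past) against an unrestricted one (y on its own past plus lagged x). A sketch on synthetic data, not the Persoff-Pruess pressure series:

```python
import numpy as np
from scipy import stats

def granger_pvalue(y, x, lag=1):
    """F-test p-value for whether lagged x improves prediction of y
    beyond y's own past (bivariate Granger causality, single lag order)."""
    n = len(y)
    Y = y[lag:]
    ylags = np.column_stack([y[lag - k: n - k] for k in range(1, lag + 1)])
    xlags = np.column_stack([x[lag - k: n - k] for k in range(1, lag + 1)])
    ones = np.ones((len(Y), 1))
    R = np.hstack([ones, ylags])           # restricted design
    U = np.hstack([ones, ylags, xlags])    # unrestricted design
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(R), rss(U)
    df_num, df_den = lag, len(Y) - U.shape[1]
    F = ((rss_r - rss_u) / df_num) / (rss_u / df_den)
    return stats.f.sf(F, df_num, df_den)

rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()  # x drives y

p_xy = granger_pvalue(y, x)  # x should Granger-cause y: tiny p-value
p_yx = granger_pvalue(x, y)  # the reverse direction should not
```

The asymmetry of the two p-values is what lets a causation-loop diagram like the one described above be given a direction, subject to the usual caveat that G-causality is predictive, not mechanistic.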
Wan, Huafang; Cui, Yixin; Ding, Yijuan; Mei, Jiaqin; Dong, Hongli; Zhang, Wenxin; Wu, Shiqi; Liang, Ying; Zhang, Chunyu; Li, Jiana; Xiong, Qing; Qian, Wei
2016-01-01
Understanding the regulation of lipid metabolism is vital for genetic engineering of canola (Brassica napus L.) to increase oil yield or modify oil composition. We conducted time-series analyses of transcriptomes and proteomes to uncover the molecular networks associated with oil accumulation, and the dynamic changes in these networks, in canola. The expression levels of genes and proteins were measured at 2, 4, 6, and 8 weeks after pollination (WAP). Our results show that the biosynthesis of fatty acids is a dominant cellular process from 2 to 6 WAP, while degradation mainly happens after 6 WAP. We found that genes in almost every node of the fatty acid synthesis pathway were significantly up-regulated during oil accumulation. Moreover, significant expression changes of two genes, acetyl-CoA carboxylase and acyl-ACP desaturase, were detected at both the transcriptomic and proteomic levels. We confirmed the temporal expression patterns revealed by the transcriptomic analyses using quantitative real-time PCR experiments. The gene set association analysis shows that the biosynthesis of fatty acids and of unsaturated fatty acids are the most significant biological processes from 2-4 WAP and 4-6 WAP, respectively, which is consistent with the results of the time-series analyses. These results not only provide insight into the mechanisms underlying lipid metabolism, but also reveal novel candidate genes that are worth further investigation for their value in the genetic engineering of canola.
Smith, Andrea; Lalonde, Richard N; Johnson, Simone
2004-05-01
This study addressed the potential impact of serial migration on parent-child relationships and on children's psychological well-being. The experience of being separated from their parents during childhood and reunited with them at a later time was retrospectively examined for 48 individuals. A series of measures (e.g., self-esteem, parental identification) associated with appraisals at critical time periods during serial migration (separation, reunion, current) revealed that serial migration can disrupt parent-child bonding and unfavorably affect children's self-esteem and behavior. Time did not appear to be wholly effective in repairing rifts in the parent-child relationship. Risk factors for less successful reunions included lengthy separations and the addition of new members to the family unit in the child's absence. (c) 2004 APA
The morning morality effect: the influence of time of day on unethical behavior.
Kouchaki, Maryam; Smith, Isaac H
2014-01-01
Are people more moral in the morning than in the afternoon? We propose that the normal, unremarkable experiences associated with everyday living can deplete one's capacity to resist moral temptations. In a series of four experiments, both undergraduate students and a sample of U.S. adults engaged in less unethical behavior (e.g., less lying and cheating) on tasks performed in the morning than on the same tasks performed in the afternoon. This morning morality effect was mediated by decreases in moral awareness and self-control in the afternoon. Furthermore, the effect of time of day on unethical behavior was found to be stronger for people with a lower propensity to morally disengage. These findings highlight a simple yet pervasive factor (i.e., the time of day) that has important implications for moral behavior.
Time series, periodograms, and significance
NASA Astrophysics Data System (ADS)
Hernandez, G.
1999-05-01
The geophysical literature shows a wide and conflicting usage of methods employed to extract meaningful information on coherent oscillations from measurements. This makes it difficult, if not impossible, to relate the findings reported by different authors. Therefore, we have undertaken a critical investigation of the tests and methodology used for determining the presence of statistically significant coherent oscillations in periodograms derived from time series. Statistical significance tests are only valid when performed on the independent frequencies present in a measurement. Both the number of possible independent frequencies in a periodogram and the significance tests are determined by the number of degrees of freedom, which is the number of true independent measurements, present in the time series, rather than the number of sample points in the measurement. The number of degrees of freedom is an intrinsic property of the data, and it must be determined from the serial coherence of the time series. As part of this investigation, a detailed study has been performed which clearly illustrates the deleterious effects that the apparently innocent and commonly used processes of filtering, de-trending, and tapering of data have on periodogram analysis and the consequent difficulties in the interpretation of the statistical significance thus derived. For the sake of clarity, a specific example of actual field measurements containing unevenly-spaced measurements, gaps, etc., as well as synthetic examples, have been used to illustrate the periodogram approach, and pitfalls, leading to the (statistical) significance tests for the presence of coherent oscillations. 
Among the insights of this investigation are: (1) the concept of a time series being (statistically) band limited by its own serial coherence and thus having a critical sampling rate which defines one of the necessary requirements for the proper statistical design of an experiment; (2) the design of a critical test for the maximum number of significant frequencies which can be used to describe a time series, while retaining intact the variance of the test sample; (3) a demonstration of the unnecessary difficulties that manipulation of the data brings into the statistical significance interpretation of said data; and (4) the resolution and correction of the apparent discrepancy in significance results obtained by the use of the conventional Lomb-Scargle significance test, when compared with the long-standing Schuster-Walker and Fisher tests.
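The significance logic for a periodogram peak can be illustrated with the classical exponential null: for M independent frequencies, FAP = 1 - (1 - e^(-z))^M, where z is the normalized peak power. A sketch on synthetic, evenly spaced data; note that it approximates M by N/2, exactly the naive choice the abstract warns against when serial coherence makes the true number of degrees of freedom smaller:

```python
import numpy as np

def periodogram_fap(z, m_indep):
    """False-alarm probability of a normalized periodogram peak of
    height z given m_indep independent frequencies, under the
    classical exponential null: FAP = 1 - (1 - exp(-z))**m_indep."""
    return 1.0 - (1.0 - np.exp(-z)) ** m_indep

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
noise = rng.standard_normal(n)
signal = noise + 3.0 * np.sin(2 * np.pi * 10 * t / n)  # line at bin 10

def peak_z(x):
    # Normalized peak power: periodogram ordinate over its mean level
    # (DC and Nyquist bins excluded).
    p = np.abs(np.fft.rfft(x - x.mean()))[1:n // 2] ** 2
    return p.max() / p.mean()

fap_noise = periodogram_fap(peak_z(noise), n // 2)
fap_signal = periodogram_fap(peak_z(signal), n // 2)
```

A genuine coherent oscillation yields an essentially zero false-alarm probability, while the largest peak in pure noise does not; filtering, de-trending, or tapering the series changes the effective m_indep and, as the abstract stresses, silently invalidates this calculation if uncorrected.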
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
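The dynamic-programming core that DTW-S extends can be sketched in a few lines. This is generic dynamic time warping, not the authors' R package TimeShift:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series,
    via the cumulative-cost recursion
    D[i, j] = |a_i - b_j| + min(D[i-1, j], D[i, j-1], D[i-1, j-1])."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A series and a time-shifted copy: warping absorbs much of the shift,
# so the DTW cost is at most the cost of the rigid (diagonal) alignment.
t = np.linspace(0, 2 * np.pi, 50)
a = np.sin(t)
b = np.sin(t - 0.5)
d_self = dtw_distance(a, a)
d_shift = dtw_distance(a, b)
```

DTW-S layers onto this core the pieces the abstract lists: per-time-point shift estimates from the optimal path, interpolated time points, relaxed end-point matching, and a simulation-based false-positive rate for each estimated shift.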
NASA Technical Reports Server (NTRS)
1980-01-01
Progress in the development of systems which employ point-focusing distributed receiver technology is reported. Emphasis is placed on the first engineering experiment, the Small Community Solar Thermal Power Experiment. Procurement activities for the Military Module Power Experiment, the first of a series of experiments planned as part of the Isolated Load Series, are also included.
ERIC Educational Resources Information Center
Paulsen, Christine Andrews; Andrews, Jessica Rueter
2014-01-01
This article describes a transmedia learning experience for early school-aged children. The experience represented an effort to transition a primarily television-based series to a primarily web-based series. Children watched new animation, completed online activities designed to promote STEM (science, technology, engineering, and math)…
Effect of periodic changes of angle of attack on behavior of airfoils
NASA Technical Reports Server (NTRS)
Katzmayr, R
1922-01-01
This report presents the results of a series of experiments which gave some quantitative results on the effect of periodic changes in the direction of the relative air flow against airfoils. In the first series of experiments, the angle of attack of the wing model was changed by causing the model to oscillate about an axis parallel to the span and at right angles to the air flow. The second series embraced all the experiments in which the direction of the air flow itself was periodically changed.
Inference of quantitative models of bacterial promoters from time-series reporter gene data.
Stefan, Diana; Pinel, Corinne; Pinhal, Stéphane; Cinquemani, Eugenio; Geiselmann, Johannes; de Jong, Hidde
2015-01-01
The inference of regulatory interactions and quantitative models of gene regulation from time-series transcriptomics data has been extensively studied and applied to a range of problems in drug discovery, cancer research, and biotechnology. The application of existing methods is commonly based on implicit assumptions on the biological processes under study. First, the measurements of mRNA abundance obtained in transcriptomics experiments are taken to be representative of protein concentrations. Second, the observed changes in gene expression are assumed to be solely due to transcription factors and other specific regulators, while changes in the activity of the gene expression machinery and other global physiological effects are neglected. While convenient in practice, these assumptions are often not valid and bias the reverse engineering process. Here we systematically investigate, using a combination of models and experiments, the importance of this bias and possible corrections. We measure in real time and in vivo the activity of genes involved in the FliA-FlgM module of the E. coli motility network. From these data, we estimate protein concentrations and global physiological effects by means of kinetic models of gene expression. Our results indicate that correcting for the bias of commonly-made assumptions improves the quality of the models inferred from the data. Moreover, we show by simulation that these improvements are expected to be even stronger for systems in which protein concentrations have longer half-lives and the activity of the gene expression machinery varies more strongly across conditions than in the FliA-FlgM module. The approach proposed in this study is broadly applicable when using time-series transcriptome data to learn about the structure and dynamics of regulatory networks. 
In the case of the FliA-FlgM module, our results demonstrate the importance of global physiological effects and the active regulation of FliA and FlgM half-lives for the dynamics of FliA-dependent promoters.
Improving cluster-based missing value estimation of DNA microarray data.
Brás, Lígia P; Menezes, José C
2007-06-01
We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing value (MV) estimation in microarray data, based on the reuse of estimated data. The method is called iterative KNN imputation (IKNNimpute), as the estimation is performed iteratively, using the most recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM, by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time-series experiments, and in data sets comprising both time series and non-time-series data, because the information from the genes having MVs is used more efficiently and the iterative procedure allows the MV estimates to be refined. More importantly, IKNNimpute has a smaller detrimental effect on the detection of differentially expressed genes.
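The iterative reuse of estimates that distinguishes IKNNimpute from plain KNNimpute can be sketched as follows. This toy version (Euclidean distance, unweighted neighbour mean, row-mean initialization) simplifies the published method:

```python
import numpy as np

def iknn_impute(X, k=3, n_iter=5):
    """Iterative KNN imputation sketch: initialize missing entries with
    row means, then repeatedly re-estimate each one from the k nearest
    rows of the current, fully filled matrix, so that previous
    estimates are reused on later passes."""
    X = np.array(X, float)
    missing = np.isnan(X)
    filled = X.copy()
    row_means = np.nanmean(X, axis=1)
    filled[missing] = np.take(row_means, np.where(missing)[0])
    for _ in range(n_iter):
        for i, j in zip(*np.where(missing)):
            d = np.linalg.norm(filled - filled[i], axis=1)
            d[i] = np.inf                    # exclude the row itself
            neighbours = np.argsort(d)[:k]
            filled[i, j] = filled[neighbours, j].mean()
    return filled

# Toy expression matrix: the missing entry sits in a row whose profile
# resembles the first and third rows.
X = np.array([[1.0, 2.0, 3.0],
              [1.1, np.nan, 3.1],
              [0.9, 1.9, 2.9],
              [5.0, 9.0, 1.0]])
imputed = iknn_impute(X, k=2)
```

Here the estimate converges to the mean of the matching rows' values (1.95) rather than the row-mean starting guess (2.1), illustrating how iteration refines the initial fill.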
NASA Technical Reports Server (NTRS)
Abbott, Mark R.
1998-01-01
The objectives of the last six months were: (1) continue analysis of Hawaii Ocean Time-series (HOT) bio-optical mooring data; (2) recover instrumentation from JGOFS cruises in the Southern Ocean and analyze the data; (3) maintain documentation of MOCEAN algorithms and software for use by the MOCEAN and GLI teams; (4) continue chemostat experiments on the relationship of fluorescence quantum yield to environmental factors; and (5) continue to develop and expand the browser-based information system for in situ bio-optical data. Analysis of Field Data from Hawaii: We are continuing to analyze bio-optical data collected at the Hawaii Ocean Time-series mooring. The HOT bio-optical mooring was recovered in May 1998. After retrieving the data, the sensor package was serviced and redeployed. We now have over 18 months of data, which are being analyzed as part of a larger study of mesoscale processes at this JGOFS time-series site. We have had some failures in the data logger which have affected the fluorescence channels; these are being repaired. We also had an instrument housing failure, and minor modifications have been made to avoid subsequent problems. In addition, Ricardo Letelier is funded as part of the SeaWiFS calibration/validation effort (through a subcontract from the University of Hawaii, Dr. John Porter), and he is collecting bio-optical and fluorescence data as part of the HOT activity.
NASA Astrophysics Data System (ADS)
Chen, Tsing-Chang; Yen, Ming-Cheng; Wu, Kuang-Der; Ng, Thomas
1992-08-01
The time evolution of the Indian monsoon is closely related to the locations of the northward-migrating monsoon troughs and ridges, which can be well depicted with the 30-60-day filtered 850-mb streamfunction. Thus, long-range forecasts of the large-scale low-level monsoon can be obtained from those of the filtered 850-mb streamfunction. These long-range forecasts were made in this study in terms of the autoregressive (AR) moving-average process. The historical series of the AR model were constructed from four months of the 30-60-day filtered 850-mb streamfunction [ψ̃(850 mb)] time series. However, the phase of the last low-frequency cycle in the ψ̃(850 mb) time series can be skewed by the bandpass filtering. To reduce this phase skewness, a simple scheme is introduced. With this phase modification of the filtered 850-mb streamfunction, we performed pilot forecast experiments for three summers with the AR forecast process. The forecast errors in the positions of the northward-propagating monsoon troughs and ridges at Day 20 are generally within the range of 1-2 days behind the observed, except in some extreme cases.
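The autoregressive forecasting step can be illustrated with a least-squares AR fit and a forward recursion. This is a generic sketch, not the specific AR moving-average configuration used in the study; the sinusoid stands in for a bandpass-filtered oscillatory series.

```python
import numpy as np

def ar_fit(x, p=3):
    """Least-squares fit of AR(p) coefficients a: x[t] ~ sum_j a[j] * x[t-1-j].
    A simplified stand-in for the AR moving-average process of the study."""
    A = np.array([x[t - 1::-1][:p] for t in range(p, len(x))])
    coef, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return coef

def ar_forecast(x, coef, steps):
    """Iterate the fitted AR recursion forward `steps` points."""
    hist = list(x)
    for _ in range(steps):
        lags = hist[::-1][:len(coef)]          # most recent values first
        hist.append(float(np.dot(coef, lags)))
    return np.array(hist[len(x):])

x = np.sin(0.3 * np.arange(50))    # stand-in for a filtered oscillatory series
coef = ar_fit(x, p=2)              # a pure sinusoid is exactly AR(2)
forecast = ar_forecast(x, coef, 5)
```

A noise-free sinusoid satisfies x[t] = 2cos(ω)x[t-1] - x[t-2] exactly, so the fitted AR(2) recursion extends the oscillation cleanly; real filtered streamfunction data would of course fit only approximately.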
NASA Technical Reports Server (NTRS)
Rowlands, D. D.; Luthcke, S. B.; McCarthy J. J.; Klosko, S. M.; Chinn, D. S.; Lemoine, F. G.; Boy, J.-P.; Sabaka, T. J.
2010-01-01
The differences between mass concentration (mascon) parameters and standard Stokes coefficient parameters in the recovery of gravity information from Gravity Recovery and Climate Experiment (GRACE) intersatellite K-band range rate data are investigated. First, mascons are decomposed into their Stokes coefficient representations to gauge the range of solutions available using each of the two types of parameters. Next, a direct comparison is made between two time series of unconstrained gravity solutions, one based on a set of global equal-area mascon parameters (equivalent to 4° × 4° at the equator), and the other based on standard Stokes coefficients, with each time series using the same fundamental processing of the GRACE tracking data. It is shown that in unconstrained solutions, the type of gravity parameter being estimated does not qualitatively affect the estimated gravity field. It is also shown that many of the differences in mass flux derivations from GRACE gravity solutions arise from the type of smoothing being used, and that the type of smoothing that can be embedded in mascon solutions has distinct advantages over postsolution smoothing. Finally, a 1-year time series based on global 2° equal-area mascons estimated every 10 days is presented.
Zago, Myrka; Bosco, Gianfranco; Maffei, Vincenzo; Iosa, Marco; Ivanenko, Yuri P; Lacquaniti, Francesco
2004-04-01
Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. Here we present evidence in favor of a different view: the brain makes the best estimate about target motion based on measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from expected dynamics (kinetics). We projected a virtual target moving vertically downward on a wide screen with different randomized laws of motion. In the first series of experiments, subjects were asked to intercept this target by punching a real ball that fell hidden behind the screen and arrived in synchrony with the visual target. Subjects systematically timed their motor responses consistent with the assumption of gravity effects on an object's mass, even when the visual target did not accelerate. With training, the gravity model was not switched off but adapted to nonaccelerating targets by shifting the time of motor activation. In the second series of experiments, there was no real ball falling behind the screen. Instead the subjects were required to intercept the visual target by clicking a mouse button. In this case, subjects timed their responses consistent with the assumption of uniform motion in the absence of forces, even when the target actually accelerated. Overall, the results are in accord with the theory that motor responses evoked by visual kinematics are modulated by a prior of the target dynamics. The prior appears surprisingly resistant to modifications based on performance errors.
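The two internal models contrasted in these experiments correspond to two different time-to-contact computations: one purely kinematic (constant velocity) and one kinetic (gravitational acceleration). A minimal sketch, with illustrative numbers rather than the experimental parameters:

```python
import math

def ttc_constant_velocity(d, v):
    """First-order (kinematic) estimate: d = v*t, no forces assumed."""
    return d / v

def ttc_gravity(d, v, g=9.81):
    """Internal-model (kinetic) estimate: d = v*t + 0.5*g*t**2,
    taking the positive root of the quadratic in t."""
    return (-v + math.sqrt(v * v + 2.0 * g * d)) / g

# Illustrative numbers, not the experimental parameters:
t_const = ttc_constant_velocity(1.0, 2.0)   # 0.5 s
t_grav = ttc_gravity(1.0, 2.0)              # about 0.29 s: respond earlier
```

A subject using the gravity prior on a constant-velocity target would thus act too early, and one using the uniform-motion prior on an accelerating target too late, which is the signature the experiments exploit.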
NASA Astrophysics Data System (ADS)
El Yazidi, Abdelhadi; Ramonet, Michel; Ciais, Philippe; Broquet, Gregoire; Pison, Isabelle; Abbaris, Amara; Brunner, Dominik; Conil, Sebastien; Delmotte, Marc; Gheusi, Francois; Guerin, Frederic; Hazan, Lynn; Kachroudi, Nesrine; Kouvarakis, Giorgos; Mihalopoulos, Nikolaos; Rivier, Leonard; Serça, Dominique
2018-03-01
This study deals with the problem of identifying atmospheric data influenced by local emissions that can result in spikes in time series of greenhouse gases and long-lived tracer measurements. We considered three spike detection methods known as coefficient of variation (COV), robust extraction of baseline signal (REBS) and standard deviation of the background (SD) to detect and filter positive spikes in continuous greenhouse gas time series from four monitoring stations representative of the European ICOS (Integrated Carbon Observation System) Research Infrastructure network. The results of the different methods are compared to each other and against a manual detection performed by station managers. Four stations were selected as test cases to apply the spike detection methods: a continental rural tower of 100 m height in eastern France (OPE), a high-mountain observatory in the south-west of France (PDM), a regional marine background site in Crete (FKL) and a marine clean-air background site in the Southern Hemisphere on Amsterdam Island (AMS). This selection allows us to address spike detection problems in time series with different variability. Two years of continuous measurements of CO2, CH4 and CO were analysed. All methods were found to be able to detect short-term spikes (lasting from a few seconds to a few minutes) in the time series. Analysis of the results of each method leads us to exclude the COV method due to the requirement to arbitrarily specify an a priori percentage of rejected data in the time series, which may over- or underestimate the actual number of spikes. The two other methods freely determine the number of spikes for a given set of parameters, and the values of these parameters were calibrated to provide the best match with spikes known to reflect local emissions episodes that are well documented by the station managers. 
More than 96 % of the spikes manually identified by station managers were successfully detected by both the SD and REBS methods after the best adjustment of parameter values. At PDM, measurements made by two analyzers located 200 m from each other allow us to confirm that the CH4 spikes identified in one of the time series but not in the other correspond to a local source from a sewage treatment facility in one of the observatory buildings. From this experiment, we also found that the REBS method underestimates the number of positive anomalies in the CH4 data caused by local sewage emissions. In conclusion, we recommend the SD method, which also appears to be the easiest to implement in the automatic data processing used for operational filtering of spikes in greenhouse gas time series at global and regional monitoring stations of networks such as the ICOS atmosphere network.
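A moving-window standard-deviation spike filter in the spirit of the recommended SD method might look like the following. The window length and threshold are illustrative; as the abstract notes, the operational parameter values were calibrated per station.

```python
import numpy as np

def sd_spike_filter(x, window=60, n_sigma=3.0):
    """Flag points exceeding the trailing background mean by more than
    n_sigma trailing standard deviations (positive spikes only).
    Window length and threshold are illustrative; operational values
    were calibrated per station."""
    x = np.asarray(x, float)
    spikes = np.zeros(len(x), dtype=bool)
    for i in range(len(x)):
        bg = x[max(0, i - window):i]      # trailing background window
        if len(bg) < 10:
            continue                      # not enough background yet
        mu, sd = bg.mean(), bg.std()
        if sd > 0 and x[i] > mu + n_sigma * sd:
            spikes[i] = True
    return spikes

rng = np.random.default_rng(0)
x = 400.0 + rng.normal(0.0, 0.5, 200)     # synthetic baseline with noise
x[150] += 10.0                            # injected local-emission spike
spikes = sd_spike_filter(x)               # flags the injected spike
```

Because the background window trails the current point, a spike does not contaminate its own detection threshold, though it does briefly inflate the threshold for the points that follow it.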
Zhou, Renjie; Yang, Chen; Wan, Jian; Zhang, Wei; Guan, Bo; Xiong, Naixue
2017-01-01
Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes in entropy values even when the time series undergoes only small changes. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function that scores the similarity of two subsequences on a full range of values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis. PMID:28383496
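The contrast between sample entropy's hard similarity rule and a graded alternative can be sketched as follows. The smooth function shown is an illustration in the spirit of FMSE, not necessarily the exact similarity function defined in the paper.

```python
import numpy as np

def hard_similarity(d, r):
    """Sample entropy's Heaviside rule: 1 if distance <= r, else 0."""
    return (np.asarray(d, float) <= r).astype(float)

def flexible_similarity(d, r, sigma=0.5):
    """A graded alternative in the spirit of FMSE: similarity decays
    smoothly from 1 toward 0 beyond the tolerance r instead of jumping.
    This Gaussian-style form is an illustration, not necessarily the
    exact function defined in the paper."""
    d = np.asarray(d, float)
    return np.where(d <= r, 1.0,
                    np.exp(-((d - r) ** 2) / (2.0 * (sigma * r) ** 2)))

d = np.array([0.05, 0.10, 0.11, 0.30])
r = 0.10
hard = hard_similarity(d, r)        # [1, 1, 0, 0]: the near-miss counts as 0
soft = flexible_similarity(d, r)    # near-miss at 0.11 keeps most of its weight
```

Under the hard rule, a distance of 0.11 and a distance of 0.30 are treated identically, which is what makes entropy estimates jump under small perturbations of short series; the graded rule removes that discontinuity.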
DORIS-based point mascons for the long term stability of precise orbit solutions
NASA Astrophysics Data System (ADS)
Cerri, L.; Lemoine, J. M.; Mercier, F.; Zelensky, N. P.; Lemoine, F. G.
2013-08-01
In recent years, non-tidal time-varying gravity (TVG) has emerged as the most important contributor to the error budget of precision orbit determination (POD) solutions for altimeter satellite orbits. The Gravity Recovery and Climate Experiment (GRACE) mission has provided POD analysts with static and time-varying gravity models that are very accurate over the 2002-2012 time interval, but whose linear rates cannot be safely extrapolated before and after the GRACE lifespan. One such model, based on a combination of data from GRACE and LAGEOS from 2002-2010, is used in the dynamic POD solutions developed for the Geophysical Data Records (GDRs) of the Jason series of altimeter missions and the equivalent products from lower-altitude missions such as Envisat, CryoSat-2, and HY-2A. In order to accommodate long-term time-variable gravity variations not included in the background geopotential model, we assess the feasibility of using DORIS data to observe local mass variations using point mascons. In particular, we show that the point-mascon approach can stabilize the geographically correlated orbit errors which are of fundamental interest for the analysis of regional mean sea level trends based on altimeter data, and can therefore provide an interim solution in the event of GRACE data loss. The time series of point-mass solutions for Greenland and Antarctica show good agreement with independent series derived from GRACE data, indicating mass losses at rates of 210 Gt/year and 110 Gt/year, respectively.
Empirical method to measure stochasticity and multifractality in nonlinear time series
NASA Astrophysics Data System (ADS)
Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping
2013-12-01
An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of the time series from a Wiener process so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Ito process, the emergent markets are far from efficient. Differences about the multifractal structures and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
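One simple way to quantify deviation from a Wiener process, not necessarily the paper's exact algorithm, is to fit the scaling exponent of increment variances, which equals 1 for a Wiener process:

```python
import numpy as np

def scaling_exponent(x, lags=(1, 2, 4, 8, 16)):
    """Fit Var[x(t+tau) - x(t)] ~ tau**alpha over a set of lags.
    A Wiener process gives alpha = 1; departures from 1 indicate
    anomalous (sub- or super-diffusive) scaling."""
    x = np.asarray(x, float)
    v = [np.var(x[lag:] - x[:-lag]) for lag in lags]
    alpha, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return alpha

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(20000))  # discrete Wiener-like walk
alpha = scaling_exponent(walk)                # close to 1 for this walk
```

Applied to market returns, an exponent near 1 would be consistent with the abstract's finding that developed markets evolve much like an Ito process, while persistent departures would flag the inefficiencies attributed to emergent markets.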
Evaluation of Fast-Time Wake Models Using Denver 2006 Field Experiment Data
NASA Technical Reports Server (NTRS)
Ahmad, Nash’at N.; Pruis, Matthew J.
2015-01-01
The National Aeronautics and Space Administration conducted a series of wake vortex field experiments at Denver in 2003, 2005, and 2006. This paper describes the lidar wake vortex measurements and associated meteorological data collected during the 2006 deployment, and includes results of recent reprocessing of the lidar data using a new wake vortex algorithm, along with estimates of atmospheric turbulence obtained with a new algorithm that derives the eddy dissipation rate from the lidar data. The configuration and set-up of the 2006 field experiment allowed out-of-ground-effect vortices to be tracked in lateral transport farther than in any previous campaign, thereby providing an opportunity to study long-lived wake vortices in moderate to low crosswinds. An evaluation of NASA's fast-time wake vortex transport and decay models using this dataset shows performance similar to that of previous studies using other field data.
Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments
NASA Astrophysics Data System (ADS)
Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel
2017-03-01
This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. The principles and usage of this advanced evaluation method are described in detail, and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of one data set as the input parameters for the next, and the model-suitability crosscheck option of applying the procedure in both ascending and descending directions through the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
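The sequential seeding of each fit with the previous result, and the ascending/descending crosscheck, can be sketched generically. Here `fit` is a hypothetical stand-in for whatever NFS spectrum-fitting routine is used; the procedure itself is independent of it.

```python
def sequential_fit(spectra, fit, initial_params, descending=False):
    """Fit a series of time spectra, seeding each fit with the output of
    the previous one. Running the loop again with descending=True and
    comparing the two parameter tracks gives the model-suitability
    crosscheck described in the study. `fit` is a hypothetical stand-in
    for the actual NFS fitting routine."""
    order = range(len(spectra) - 1, -1, -1) if descending else range(len(spectra))
    params, results = initial_params, {}
    for i in order:
        params = fit(spectra[i], params)   # previous output seeds the next input
        results[i] = params
    return results

# Toy check with numbers in place of spectra and a trivial "fit":
ascending = sequential_fit([1, 2, 3], lambda s, p: p + s, 0)
```

If the ascending and descending passes converge to materially different parameter tracks for the same spectra, that disagreement is the signal that the chosen model is not adequate for the data.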
Purwar, Namrta; Tenboer, Jason; Tripathi, Shailesh; Schmidt, Marius
2013-09-13
Time-resolved spectroscopic experiments have been performed with protein in solution and in crystalline form using a newly designed microspectrophotometer. The time-resolution of these experiments can be as good as two nanoseconds (ns), which is the minimal response time of the image intensifier used. With the current setup, the effective time-resolution is about seven ns, determined mainly by the pulse duration of the nanosecond laser. The amount of protein required is small, on the order of 100 nanograms. Bleaching, which is an undesirable effect common to photoreceptor proteins, is minimized by using a millisecond shutter to avoid extensive exposure to the probing light. We investigate two model photoreceptors, photoactive yellow protein (PYP), and α-phycoerythrocyanin (α-PEC), on different time scales and at different temperatures. Relaxation times obtained from kinetic time-series of difference absorption spectra collected from PYP are consistent with previous results. The comparison with these results validates the capability of this spectrophotometer to deliver high quality time-resolved absorption spectra.
Design and Fabrication of TES Detector Modules for the TIME-Pilot [CII] Intensity Mapping Experiment
NASA Astrophysics Data System (ADS)
Hunacek, J.; Bock, J.; Bradford, C. M.; Bumble, B.; Chang, T.-C.; Cheng, Y.-T.; Cooray, A.; Crites, A.; Hailey-Dunsheath, S.; Gong, Y.; Kenyon, M.; Koch, P.; Li, C.-T.; O'Brient, R.; Shirokoff, E.; Shiu, C.; Staniszewski, Z.; Uzgil, B.; Zemcov, M.
2016-08-01
We are developing a series of close-packed modular detector arrays for TIME-Pilot, a new mm-wavelength grating spectrometer array that will map the intensity fluctuations of the redshifted 157.7 μm emission line of singly ionized carbon ([CII]) from redshift z ~ 5 to 9. TIME-Pilot's two banks of 16 parallel-plate waveguide spectrometers (one bank per polarization) will have a spectral range of 183-326 GHz and a resolving power of R ~ 100. The spectrometers use a curved diffraction grating to disperse and focus the light on a series of output arcs, each sampled by 60 transition edge sensor (TES) bolometers with gold micro-mesh absorbers. These low-noise detectors will be operated from a 250 mK base temperature and are designed to have a background-limited NEP of ~10^-17 W/Hz^(1/2). This proceeding presents an overview of the detector design in the context of the TIME-Pilot instrument. Additionally, a prototype detector module produced at the Microdevices Laboratory at JPL is shown.
Cowling, Thomas E; Majeed, Azeem; Harris, Matthew J
2018-01-22
The UK Government has introduced several national policies to improve access to primary care. We examined associations between patient experience of general practice and rates of visits to accident and emergency (A&E) departments and emergency hospital admissions in England. The study included 8124 general practices between 2011-2012 and 2013-2014. Outcome measures were annual rates of A&E visits and emergency admissions by general practice population, according to administrative hospital records. Explanatory variables included three patient experience measures from the General Practice Patient Survey: practice-level means of experience of making an appointment, satisfaction with opening hours and overall experience (on 0-100 scales). The main analysis used random-effects Poisson regression for cross-sectional time series. Five sensitivity analyses examined changes in model specification. Mean practice-level rates of A&E visits and emergency admissions increased from 2011-2012 to 2013-2014 (from 310.3 to 324.4 and from 98.8 to 102.9 per 1000 patients). Each patient experience measure decreased; for example, mean satisfaction with opening hours was 79.4 in 2011-2012 and 76.6 in 2013-2014. In the adjusted regression analysis, an SD increase in experience of making appointments (equal to 9 points) predicted decreases of 1.8% (95% CI -2.4% to -1.2%) in A&E visit rates and 1.4% (95% CI -1.9% to -0.9%) in admission rates. This equalled 301 174 fewer A&E visits and 74 610 fewer admissions nationally per year. Satisfaction with opening hours and overall experience were not consistently associated with either outcome measure across the main and sensitivity analyses. Associations between patient experience of general practice and use of emergency hospital services were small or inconsistent. In England, realistic short-term improvements in patient experience of general practice may only have modest effects on A&E visits and emergency admissions.
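The Poisson regression underlying the main analysis can be illustrated with a bare-bones iteratively reweighted least squares (IRLS) fit. This sketch omits the random effects and exposure offsets a full practice-level model would include, and the data are invented for illustration.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson regression log E[y] = X @ beta by iteratively
    reweighted least squares, starting from an OLS fit to log(y)
    (so all y must be positive in this toy version)."""
    beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                   # fitted event rates
        z = X @ beta + (y - mu) / mu            # working response
        W = mu                                  # Poisson working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Invented data: event counts rising with a practice-level covariate
X = np.column_stack([np.ones(6), np.arange(6.0)])
y = np.array([2.0, 3.0, 5.0, 8.0, 12.0, 20.0])
beta = poisson_irls(X, y)    # beta[1] > 0: higher covariate, higher rate
```

Because the model is log-linear, a coefficient translates directly into a percentage change in the rate per unit of the covariate, which is how effects like the quoted -1.8% per SD of patient experience are expressed.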