Sample records for coefficient time series

  1. Detecting PM2.5's Correlations between Neighboring Cities Using a Time-Lagged Cross-Correlation Coefficient.

    PubMed

    Wang, Fang; Wang, Lin; Chen, Yuming

    2017-08-31

    In order to investigate the time-dependent cross-correlations of fine particulate (PM2.5) series among neighboring cities in Northern China, in this paper we propose a new cross-correlation coefficient, the time-lagged q-L dependent height cross-correlation coefficient (denoted by ρq(τ, L)), which incorporates the time-lag factor and the fluctuation amplitude information into the analogous height cross-correlation analysis coefficient. Numerical tests are performed to illustrate that the newly proposed coefficient ρq(τ, L) can be used to detect cross-correlations between two series with time lags and to identify the different ranges of fluctuations at which two series possess cross-correlations. Applying the new coefficient to analyze the time-dependent cross-correlations of PM2.5 series between Beijing and the three neighboring cities of Tianjin, Zhangjiakou, and Baoding, we find that time lags between PM2.5 series with larger fluctuations are longer than those between PM2.5 series with smaller fluctuations. Our analysis also shows that cross-correlations between the PM2.5 series of two neighboring cities are significant and that the time lags between the two series are significantly non-zero. These findings provide new scientific support for the view that air pollution in neighboring cities affects one another not simultaneously but with a time lag.
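    The exact ρq(τ, L) statistic is not reproduced in this record. As a minimal sketch of the underlying idea, the snippet below computes a plain time-lagged Pearson cross-correlation and scans for the lag at which two series are most strongly coupled (function names and the simple Pearson form are illustrative assumptions, not the paper's coefficient):

```python
import numpy as np

def lagged_corr(x, y, tau):
    """Pearson correlation between x(t) and y(t + tau), tau >= 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if tau > 0:
        x, y = x[:-tau], y[tau:]
    return float(np.corrcoef(x, y)[0, 1])

def best_lag(x, y, max_tau):
    """Lag in [0, max_tau] at which the lagged correlation peaks."""
    corrs = [lagged_corr(x, y, t) for t in range(max_tau + 1)]
    return int(np.argmax(corrs)), corrs
```

A significantly non-zero best lag, as reported for the Beijing/Tianjin pairs, would show up here as the correlation peaking at some τ > 0.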

  2. Second-degree Stokes coefficients from multi-satellite SLR

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Müller, Horst; Gerstl, Michael; Štefka, Vojtěch; Bouman, Johannes; Göttl, Franziska; Horwath, Martin

    2015-09-01

    The long wavelength part of the Earth's gravity field can be determined, with varying accuracy, from satellite laser ranging (SLR). In this study, we investigate the combination of up to ten geodetic SLR satellites using iterative variance component estimation. SLR observations to different satellites are combined in order to identify the impact of each satellite on the estimated Stokes coefficients. The combination of satellite-specific weekly or monthly arcs reduces parameter correlations of the single-satellite solutions and leads to alternative estimates of the second-degree Stokes coefficients. This alternative time series might be helpful for assessing the uncertainty in the impact of the low-degree Stokes coefficients on geophysical investigations. In order to validate the obtained time series of second-degree Stokes coefficients, a comparison with the SLR RL05 time series of the Center for Space Research (CSR) is performed. This investigation shows that all time series are comparable to the CSR time series. The precision of the weekly/monthly and coefficients is analyzed by comparing mass-related equatorial excitation functions with geophysical model results and reduced geodetic excitation functions. In case of , the annual amplitude and phase of the DGFI solution agree better with three of four geophysical model combinations than the other time series do. In case of , all time series agree very well with each other. The impact of on the ice mass trend estimates for Antarctica is compared based on CSR GRACE RL05 solutions, in which different monthly time series are used as replacements. We found differences in the long-term Antarctic ice loss of Gt/year between the GRACE solutions induced by the different SLR time series of CSR and DGFI, which is about 13% of the total ice loss of Antarctica. This result shows that quantifications of Antarctic ice mass loss must be interpreted carefully.

  3. Introduction and application of the multiscale coefficient of variation analysis.

    PubMed

    Abney, Drew H; Kello, Christopher T; Balasubramaniam, Ramesh

    2017-10-01

    Quantifying how patterns of behavior relate across multiple levels of measurement typically requires long time series for reliable parameter estimation. We describe a novel analysis that estimates patterns of variability across multiple scales of analysis suitable for time series of short duration. The multiscale coefficient of variation (MSCV) measures the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation across all time samples. We first describe the MSCV analysis and provide an example analytical protocol with corresponding MATLAB implementation and code. Next, we present a simulation study testing the new analysis using time series generated by ARFIMA models that span white noise, short-term and long-term correlations. The MSCV analysis was observed to be sensitive to specific parameters of ARFIMA models varying in the type of temporal structure and time series length. We then apply the MSCV analysis to short time series of speech phrases and musical themes to show commonalities in multiscale structure. The simulation and application studies provide evidence that the MSCV analysis can discriminate between time series varying in multiscale structure and length.
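    The record describes the MSCV only verbally, as "the distance between local coefficient of variation estimates within particular time windows and the overall coefficient of variation." The sketch below implements one plausible reading of that sentence (the mean absolute distance between windowed CV estimates and the overall CV, per window size); it is not the authors' MATLAB protocol, and window handling is an assumption:

```python
import numpy as np

def cv(x):
    """Coefficient of variation: std / mean (assumes a positive-mean series)."""
    x = np.asarray(x, float)
    return float(np.std(x) / np.mean(x))

def mscv(series, window_sizes):
    """For each window size w, the average absolute distance between the CV
    inside non-overlapping windows of length w and the overall CV."""
    series = np.asarray(series, float)
    overall = cv(series)
    out = {}
    for w in window_sizes:
        wins = [series[i:i + w] for i in range(0, len(series) - w + 1, w)]
        out[w] = float(np.mean([abs(cv(win) - overall) for win in wins]))
    return out
```

A series whose variability is homogeneous across scales yields values near zero, while multiscale structure shows up as window-size-dependent distances.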

  4. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model includes first-order random walk time-varying coefficients for both the calendar trend and the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables.
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
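    The out-of-sample criterion named above, the mean absolute percentage error, is a standard quantity; a minimal sketch (not the authors' code) is:

```python
import numpy as np

def mape(observed, predicted):
    """Mean absolute percentage error in percent.
    Observed values must be non-zero (e.g. weekly case counts > 0)."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    return float(100.0 * np.mean(np.abs((observed - predicted) / observed)))
```

Low MAPE at a prediction point corresponds to the "low volatility periods in the dengue counts" noted in the abstract.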

  5. [Correlation coefficient-based principle and method for the classification of jump degree in hydrological time series].

    PubMed

    Wu, Zi Yi; Xie, Ping; Sang, Yan Fang; Gu, Hai Ting

    2018-04-01

    The phenomenon of jump is one of the important external forms of hydrological variability under environmental changes, representing the adaptation of hydrological nonlinear systems to the influence of external disturbances. Presently, the related studies mainly focus on methods for identifying the jump positions and jump times in hydrological time series. In contrast, few studies have focused on the quantitative description and classification of jump degree in hydrological time series, which makes it difficult to understand environmental changes and evaluate their potential impacts. Here, we propose a theoretically reliable and easy-to-apply method for the classification of jump degree in hydrological time series, using the correlation coefficient as a basic index. Statistical tests verified the accuracy, reasonability, and applicability of this method. The relationship between the correlation coefficient and the jump degree of a series was derived mathematically. Several thresholds of the correlation coefficient under different statistical significance levels were then chosen, based on which the jump degree could be classified into five levels: none, weak, moderate, strong, and very strong. Finally, our method was applied to five different observed hydrological time series with diverse geographic and hydrological conditions in China. The resulting classifications of jump degree accorded closely with the physical hydrological mechanisms of those series, indicating the practicability of our method.

  6. Bayesian dynamic modeling of time series of dengue disease case counts

    PubMed Central

    López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-01-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model includes first-order random walk time-varying coefficients for both the calendar trend and the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables.
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941

  7. [Correlation coefficient-based classification method of hydrological dependence variability: With auto-regression model as example].

    PubMed

    Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi

    2018-04-01

    Hydrological process evaluation is time-dependent. Hydrological time series that include dependence components do not meet the data consistency assumption for hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we propose a correlation coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance of dependence into five degrees: no variability, weak variability, moderate variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitudes of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficients. The method was then used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in the hydrological process.
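    The core quantity of the method above, the correlation between a series and its auto-regressive dependence component, can be sketched as follows. This assumes an AR(1) dependence component fitted by least squares; the paper's threshold derivation and five-degree classification are not reproduced:

```python
import numpy as np

def ar1_component(x):
    """Fit AR(1) by least squares on the demeaned series and return the
    fitted dependence component phi * x_{t-1}, aligned with x_t."""
    x = np.asarray(x, float)
    z = x - x.mean()
    phi = float(np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1]))
    return phi * z[:-1], z[1:]

def dependence_corr(x):
    """Correlation between the series and its AR(1) dependence component."""
    fitted, target = ar1_component(x)
    return float(np.corrcoef(fitted, target)[0, 1])
```

A strongly dependent series yields a correlation near its lag-1 autocorrelation, while white noise yields a value near zero, which is the contrast the classification thresholds exploit.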

  8. A new correlation coefficient for bivariate time-series data

    NASA Astrophysics Data System (ADS)

    Erdem, Orhan; Ceyhan, Elvan; Varli, Yusuf

    2014-11-01

    The correlation in time series has received considerable attention in the literature. Its use has attained an important role in the social sciences and finance. For example, pair trading in finance is concerned with the correlation between stock prices, returns, etc. In general, Pearson’s correlation coefficient is employed in these areas although it has many underlying assumptions which restrict its use. Here, we introduce a new correlation coefficient which takes into account the lag difference of data points. We investigate the properties of this new correlation coefficient. We demonstrate that it is more appropriate for showing the direction of the covariation of the two variables over time. We also compare the performance of the new correlation coefficient with Pearson’s correlation coefficient and Detrended Cross-Correlation Analysis (DCCA) via simulated examples.

  9. Detrended partial cross-correlation analysis of two nonstationary time series influenced by common external forces

    NASA Astrophysics Data System (ADS)

    Qian, Xi-Yuan; Liu, Ya-Min; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene

    2015-06-01

    When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended partial cross-correlation analysis (DPXA) to uncover the intrinsic power-law cross correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis that takes into account partial correlation analysis. We demonstrate the method by using bivariate fractional Brownian motions contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multiscale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to extract the intrinsic cross correlation between crude oil and gold futures by taking into consideration the impact of the U.S. dollar index. We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the multifractal DCCA method fails.

  10. Arbitrary-order corrections for finite-time drift and diffusion coefficients

    NASA Astrophysics Data System (ADS)

    Anteneodo, C.; Riera, R.

    2009-09-01

    We address a standard class of diffusion processes with linear drift and quadratic diffusion coefficients. These contributions to the dynamic equations can be drawn directly from data time series. However, real data are constrained to finite sampling rates, and it is therefore crucial to establish a suitable mathematical description of the required finite-time corrections. Based on Itô-Taylor expansions, we present the exact corrections to the finite-time drift and diffusion coefficients. These results allow the real hidden coefficients to be reconstructed from the empirical estimates. We also derive higher-order finite-time expressions for the third and fourth conditional moments, which furnish extra theoretical checks for this class of diffusion models. The analytical predictions are compared with the numerical outcomes of representative artificial time series.
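    The exact Itô-Taylor correction formulas are not given in this record. The sketch below shows only the lowest-order finite-time estimates of the drift and diffusion coefficients from conditional moments of the increments, i.e. the raw estimates that such corrections refine (binning scheme and minimum-count filter are illustrative choices):

```python
import numpy as np

def km_coefficients(x, dt, bins=20, min_count=50):
    """Lowest-order finite-time estimates of drift D1(x) and diffusion D2(x)
    from a sampled trajectory: conditional increment moments divided by dt."""
    x = np.asarray(x, float)
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    d1 = np.full(bins, np.nan)
    d2 = np.full(bins, np.nan)
    for b in range(bins):
        m = idx == b
        if m.sum() >= min_count:          # skip sparsely populated bins
            d1[b] = dx[m].mean() / dt
            d2[b] = (dx[m] ** 2).mean() / (2 * dt)
    return centers, d1, d2
```

For an Ornstein-Uhlenbeck process with drift -x and diffusion D, these estimates recover a drift slope near -1 and a flat D2 near D when dt is small; the paper's corrections quantify the bias when it is not.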

  11. Correlation tests of the engine performance parameter by using the detrended cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Dong, Keqiang; Gao, You; Jing, Liming

    2015-02-01

    The presence of cross-correlation in complex systems has long been noted and studied in a broad range of physical applications. We here focus on an aero-engine system as an example of a complex system. By applying the detrended cross-correlation (DCCA) coefficient method to aero-engine time series, we investigate the effects of the data length and the time scale on the detrended cross-correlation coefficients ρDCCA(T, s). We then show, for a twin-engine aircraft, that the engine fuel flow time series derived from the left and right engines exhibit much stronger cross-correlations than the engine exhaust-gas temperature series derived from the left and right engines do.
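    The DCCA coefficient referred to above is, in its standard form, the detrended covariance of the two integrated profiles normalized by the DFA fluctuation of each series. The sketch below uses linear detrending in non-overlapping boxes of length s, a simplified variant of published implementations:

```python
import numpy as np

def dcca_coefficient(x, y, s):
    """DCCA cross-correlation coefficient at scale s: detrended covariance of
    the integrated profiles, normalized by the DFA fluctuations of each series."""
    x = np.cumsum(np.asarray(x, float) - np.mean(x))   # integrated profiles
    y = np.cumsum(np.asarray(y, float) - np.mean(y))
    n = (len(x) // s) * s
    cov = varx = vary = 0.0
    t = np.arange(s, dtype=float)
    for start in range(0, n, s):
        xs, ys = x[start:start + s], y[start:start + s]
        # residuals after removing a linear trend within each box
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        cov += np.mean(rx * ry)
        varx += np.mean(rx * rx)
        vary += np.mean(ry * ry)
    return float(cov / np.sqrt(varx * vary))
```

The coefficient lies in [-1, 1]; scanning s reveals the scale dependence ρDCCA(T, s) that the study examines for fuel-flow versus exhaust-gas-temperature series.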

  12. A novel coefficient for detecting and quantifying asymmetry of California electricity market based on asymmetric detrended cross-correlation analysis.

    PubMed

    Wang, Fang

    2016-06-01

    In order to detect and quantify the asymmetry between two time series, a novel cross-correlation coefficient is proposed based on the recent asymmetric detrended cross-correlation analysis (A-DXA), which we call the A-DXA coefficient. The A-DXA coefficient, an important extension of the DXA coefficient ρDXA, contains two directional asymmetric cross-correlation indexes describing upward and downward asymmetric cross-correlations, respectively. By using the directional covariance function of the two time series and the directional variance function of each series itself, instead of the power law between the covariance function and the time scale, the proposed A-DXA coefficient can detect asymmetry between the two series regardless of whether the cross-correlation is significant. Applying the proposed A-DXA coefficient to the asymmetry of the California electricity market, we found that the asymmetry between prices and loads is not significant for daily average data in the 1999 market (before the electricity crisis) but extremely significant in the 2000 market (during the crisis). To further uncover the difference in asymmetry between 1999 and 2000, a modified H statistic (MH) and a ΔMH statistic are proposed. One contribution of the present work is showing that high MH values calculated for hourly data occur in the majority of months in the 2000 market. Another important conclusion is that downward cross-correlation dominates throughout 1999, whereas upward cross-correlation dominates in 2000.

  13. TaiWan Ionospheric Model (TWIM) prediction based on time series autoregressive analysis

    NASA Astrophysics Data System (ADS)

    Tsai, L. C.; Macalalad, Ernest P.; Liu, C. H.

    2014-10-01

    As described in a previous paper, a three-dimensional ionospheric electron density (Ne) model has been constructed from vertical Ne profiles retrieved from FormoSat-3/COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) GPS radio occultation measurements and worldwide ionosonde foF2 and foE data, and named the TaiWan Ionospheric Model (TWIM). The TWIM exhibits vertically fitted α-Chapman-type layers with distinct F2, F1, E, and D layers, and uses surface spherical harmonic approaches for the fitted layer parameters, including peak density, peak density height, and scale height. To improve the TWIM into a real-time model, we have developed a time series autoregressive model to forecast short-term TWIM coefficients. The time series of TWIM coefficients are treated as realizations of stationary stochastic processes within a processing window of 30 days. The autocorrelation coefficients are used to derive the autoregressive parameters and then forecast the TWIM coefficients, based on the least squares method and the Lagrange multiplier technique. The forecast root-mean-square relative TWIM coefficient errors are generally <30% for 1 day predictions. The forecast foE and foF2 values from the TWIM are also compared and evaluated using worldwide ionosonde data.
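    The paper's TWIM-specific estimation (Yule-Walker with a Lagrange multiplier) is not reproduced here; as a generic sketch of autoregressive coefficient forecasting, the following fits an AR(p) model by ordinary least squares and forecasts recursively. It assumes an approximately zero-mean coefficient series (demean first otherwise):

```python
import numpy as np

def ar_forecast(series, order, steps):
    """Least-squares AR(p) fit and recursive multi-step forecast.
    Assumes the series is (approximately) zero-mean."""
    x = np.asarray(series, float)
    # design matrix: row t holds [x[t-1], x[t-2], ..., x[t-order]]
    X = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    target = x[order:]
    phi, *_ = np.linalg.lstsq(X, target, rcond=None)
    hist = list(x[-order:])                  # most recent value last
    preds = []
    for _ in range(steps):
        nxt = float(sum(phi[k] * hist[-1 - k] for k in range(order)))
        preds.append(nxt)
        hist.append(nxt)
    return np.array(preds)
```

In the TWIM setting, one such model would be fitted per spherical-harmonic coefficient over the trailing 30-day window and advanced one day ahead.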

  14. Time series of low-degree geopotential coefficients from SLR data: estimation of Earth's figure axis and LOD variations

    NASA Astrophysics Data System (ADS)

    Luceri, V.; Sciarretta, C.; Bianco, G.

    2012-12-01

    The redistribution of mass within the Earth system induces changes in the Earth's gravity field. In particular, the second-degree geopotential coefficients reflect the behaviour of the Earth's inertia tensor of order 2, describing the main mass variations of our planet that impact the EOPs. Thanks to the long record of accurate and continuous laser ranging observations of Lageos and other geodetic satellites, SLR is the only current space technique capable of monitoring the long-term variability of the Earth's gravity field with adequate accuracy. Time series of low-degree geopotential coefficients are estimated in our analysis of SLR data (spanning more than 25 years) from several geodetic satellites in order to detect trends and periodic variations related to tidal effects and atmospheric/oceanic mass variations. This study focuses on the variations of the second-degree Stokes coefficients related to the Earth's principal figure axis and oblateness: C21, S21, and C20. Surface mass load variations, in turn, induce excitations in the EOPs that are proportional to the same second-degree coefficients. The time series of direct estimates of the low-degree geopotential and those derived from the EOP excitation functions are compared and presented together with their time and frequency analyses.

  15. New Insights into Signed Path Coefficient Granger Causality Analysis.

    PubMed

    Zhang, Jian; Li, Chong; Jiang, Tianzi

    2016-01-01

    Granger causality analysis, a time series analysis technique derived from econometrics, has been applied in an ever-increasing number of publications in the field of neuroscience, including fMRI, EEG/MEG, and fNIRS. The present study focuses on the validity of "signed path coefficient Granger causality," a Granger-causality-derived analysis method that has been adopted by many fMRI studies in the last few years. This method generally estimates the causal effect among the time series by an order-1 autoregression, and defines a positive or negative coefficient as an "excitatory" or "inhibitory" influence. In the current work we conducted a series of computations on resting-state fMRI data and simulation experiments to illustrate that the signed path coefficient method is flawed and untenable, because the autoregressive coefficients are not always consistent with the real causal relationships, which inevitably leads to erroneous conclusions. Overall, our findings suggest that the applicability of this kind of causality analysis is rather limited, and researchers should be more cautious in applying signed path coefficient Granger causality to fMRI data to avoid misinterpretation.

  16. An agreement coefficient for image comparison

    USGS Publications Warehouse

    Ji, Lei; Gallo, Kevin

    2006-01-01

    Combining datasets acquired from different sensor systems is necessary to construct long time-series datasets for remotely sensed land-surface variables. Assessing the agreement of data derived from various sources is an important issue in understanding data continuity through the time series. Some traditional measures, including the correlation coefficient, coefficient of determination, mean absolute error, and root mean square error, are not always optimal for evaluating data agreement. For this reason, we developed a new agreement coefficient for comparing two different images. The agreement coefficient has the following properties: it is non-dimensional, bounded, symmetric, and able to distinguish between systematic and unsystematic differences. The paper provides examples of agreement analyses for hypothetical data and actual remotely sensed data. The results demonstrate that the agreement coefficient does have these properties and is therefore a useful tool for image comparison.
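    The record does not state the formula. The sketch below follows the commonly cited Ji and Gallo (2006) form, AC = 1 - SSD/SPOD, where SPOD is the "sum of potential difference"; verify against the paper before relying on it:

```python
import numpy as np

def agreement_coefficient(x, y):
    """Agreement coefficient AC = 1 - SSD/SPOD for two images or series
    (formula as commonly cited from Ji & Gallo, 2006)."""
    x = np.asarray(x, float).ravel()
    y = np.asarray(y, float).ravel()
    mx, my = x.mean(), y.mean()
    ssd = np.sum((x - y) ** 2)                       # sum of squared differences
    spod = np.sum((abs(mx - my) + np.abs(x - mx)) *
                  (abs(mx - my) + np.abs(y - my)))   # sum of potential difference
    return float(1.0 - ssd / spod)
```

The test below checks the properties the abstract claims: a perfect match gives 1, and the coefficient is symmetric in its arguments.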

  17. A novel coefficient for detecting and quantifying asymmetry of California electricity market based on asymmetric detrended cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Fang

    2016-06-01

    In order to detect and quantify the asymmetry between two time series, a novel cross-correlation coefficient is proposed based on the recent asymmetric detrended cross-correlation analysis (A-DXA), which we call the A-DXA coefficient. The A-DXA coefficient, an important extension of the DXA coefficient ρDXA, contains two directional asymmetric cross-correlation indexes describing upward and downward asymmetric cross-correlations, respectively. By using the directional covariance function of the two time series and the directional variance function of each series itself, instead of the power law between the covariance function and the time scale, the proposed A-DXA coefficient can detect asymmetry between the two series regardless of whether the cross-correlation is significant. Applying the proposed A-DXA coefficient to the asymmetry of the California electricity market, we found that the asymmetry between prices and loads is not significant for daily average data in the 1999 market (before the electricity crisis) but extremely significant in the 2000 market (during the crisis). To further uncover the difference in asymmetry between 1999 and 2000, a modified H statistic (MH) and a ΔMH statistic are proposed. One contribution of the present work is showing that high MH values calculated for hourly data occur in the majority of months in the 2000 market. Another important conclusion is that downward cross-correlation dominates throughout 1999, whereas upward cross-correlation dominates in 2000.

  18. Statistical test for ΔρDCCA cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Guedes, E. F.; Brito, A. A.; Oliveira Filho, F. M.; Fernandez, B. F.; de Castro, A. P. N.; da Silva Filho, A. M.; Zebende, G. F.

    2018-07-01

    In this paper we propose a new statistical test for ΔρDCCA, the detrended cross-correlation coefficient difference, a tool to measure contagion/interdependence effects in time series of size N at different time scales n. For this purpose we analyzed simulated and real time series. The results show that the statistical significance of ΔρDCCA depends on the size N and the time scale n, and that a critical value for this dependency can be defined at the 90%, 95%, and 99% confidence levels, as shown in this paper.

  19. A comparative study of shallow groundwater level simulation with three time series models in a coastal aquifer of South China

    NASA Astrophysics Data System (ADS)

    Yang, Q.; Wang, Y.; Zhang, J.; Delgado, J.

    2017-05-01

    Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, three time series analysis methods, Holt-Winters (HW), integrated time series (ITS), and seasonal autoregressive integrated moving average (SARIMA), are explored to simulate the groundwater level in a coastal aquifer of South China. Monthly groundwater table depth data collected over a long time series, from 2000 to 2011, are simulated and compared with the three time series models. The error criteria are estimated using the coefficient of determination (R²), the Nash-Sutcliffe model efficiency coefficient (E), and the root-mean-squared error. The results indicate that all three models are accurate in reproducing the historical time series of groundwater levels. The comparison shows that the HW model is more accurate in predicting groundwater levels than the SARIMA and ITS models. It is recommended that additional studies explore this proposed method, which can in turn facilitate the development and implementation of more effective and sustainable groundwater management strategies.
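    Two of the error criteria named above have standard definitions; a minimal sketch (not the study's code) is:

```python
import numpy as np

def rmse(obs, sim):
    """Root-mean-squared error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency E: 1 is a perfect match; E <= 0 means the
    simulation is no better than simply predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) /
                 np.sum((obs - obs.mean()) ** 2))
```

Ranking the HW, ITS, and SARIMA simulations by E and RMSE against the observed depth series reproduces the kind of comparison the study reports.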

  20. Stochastic nature of series of waiting times.

    PubMed

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H; Salehi, E; Behjat, E; Qorbani, M; Nezhad, M Khazaei; Zirak, M; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the "waiting times" series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2

  1. Multilevel Dynamic Generalized Structured Component Analysis for Brain Connectivity Analysis in Functional Neuroimaging Data.

    PubMed

    Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S

    2016-06-01

    We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.

  2. Stochastic nature of series of waiting times

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H.; Salehi, E.; Behjat, E.; Qorbani, M.; Khazaei Nezhad, M.; Zirak, M.; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M. Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the “waiting times” series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2

  3. New Insights into Signed Path Coefficient Granger Causality Analysis

    PubMed Central

    Zhang, Jian; Li, Chong; Jiang, Tianzi

    2016-01-01

Granger causality analysis, a time series analysis technique derived from econometrics, has been applied in an ever-increasing number of publications in the field of neuroscience, including fMRI, EEG/MEG, and fNIRS. The present study focuses on the validity of “signed path coefficient Granger causality,” a Granger-causality-derived analysis method that has been adopted by many fMRI studies in the last few years. This method generally estimates the causal effect among the time series by an order-1 autoregression, and interprets a positive or negative coefficient as an “excitatory” or “inhibitory” influence. In the current work we conducted a series of computations on resting-state fMRI data and simulation experiments to illustrate that the signed path coefficient method is flawed and untenable: the autoregressive coefficients are not always consistent with the real causal relationships, which inevitably leads to erroneous conclusions. Overall, our findings suggest that the applicability of this kind of causality analysis is rather limited, and researchers should be more cautious in applying signed path coefficient Granger causality to fMRI data to avoid misinterpretation. PMID:27833547
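
    The order-1 autoregression at the heart of the signed path coefficient method can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code: a bivariate VAR(1) is fit by least squares, and the sign of an off-diagonal coefficient is what the criticized method would read as an "excitatory" or "inhibitory" influence. The series, coupling weights, and seed are invented.

```python
import numpy as np

def var1_coefficients(x, y):
    """Least-squares fit of an order-1 bivariate autoregression.

    Returns the 2x2 matrix A in z[t] = A @ z[t-1] + noise, z = (x, y).
    In "signed path coefficient" Granger analysis the sign of A[0, 1]
    (influence of y on x) would be read as excitatory (positive) or
    inhibitory (negative).
    """
    z = np.column_stack([x, y])
    Z_past, Z_now = z[:-1], z[1:]
    B, *_ = np.linalg.lstsq(Z_past, Z_now, rcond=None)
    return B.T  # rows: target variable, columns: source variable

rng = np.random.default_rng(0)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):  # y drives x with a positive weight of 0.3
    y[t] = 0.5 * y[t - 1] + rng.normal()
    x[t] = 0.4 * x[t - 1] + 0.3 * y[t - 1] + rng.normal()

A = var1_coefficients(x, y)
print(A[0, 1] > 0)  # recovered influence of y on x
```

    In this toy case the sign is recovered correctly; the paper's point is that in richer settings the sign of such coefficients need not match the true coupling.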

  4. Seasonal to multi-decadal trends in apparent optical properties in the Sargasso Sea

    NASA Astrophysics Data System (ADS)

    Allen, James G.; Nelson, Norman B.; Siegel, David A.

    2017-01-01

    Multi-decadal, monthly observations of optical and biogeochemical properties, made as part of the Bermuda Bio-Optics Project (BBOP) at the Bermuda Atlantic Time-series Study (BATS) site in the Sargasso Sea, allow for the examination of temporal trends in vertical light attenuation and their potential controls. Trends in the magnitude of the diffuse attenuation coefficient, Kd(λ), and a proxy for its spectral shape reflect changes in phytoplankton and chromophoric dissolved organic matter (CDOM) characteristics. The length and methodological consistency of this time series provide an excellent opportunity to extend analyses of seasonal cycles of apparent optical properties to interannual and decadal time scales. Here, we characterize changes in the magnitude and spectral shape proxy of diffuse attenuation coefficient spectra and compare them to available biological and optical data from the BATS time series program. The time series analyses reveal a 1.01%±0.18% annual increase of the magnitude of the diffuse attenuation coefficient at 443 nm over the upper 75 m of the water column while showing no significant change in selected spectral characteristics over the study period. These and other observations indicate that changes in phytoplankton rather than changes in CDOM abundance are the primary driver for the diffuse attenuation trends on multi-year timescales for this region. Our findings are inconsistent with previous decadal-scale global ocean water clarity and global satellite ocean color analyses yet are consistent with recent analyses of the BATS time series and highlight the value of long-term consistent observation at ocean time series sites.

  5. Fading channel simulator

    DOEpatents

    Argo, Paul E.; Fitzgerald, T. Joseph

    1993-01-01

Fading channel effects on a transmitted communication signal are simulated with both frequency and time variations using a channel scattering function to affect the transmitted signal. A conventional channel scattering function is converted to a series of channel realizations by multiplying the square root of the channel scattering function by a complex number whose real and imaginary parts are each independent variables. The two-dimensional inverse FFT of this complex-valued channel realization yields a matrix of channel coefficients that provides a complete frequency-time description of the channel. The transmitted radio signal is segmented to provide a series of signal segments, and each segment is subjected to an FFT to generate a series of signal coefficient matrices. The channel coefficient matrices and signal coefficient matrices are then multiplied and subjected to an inverse FFT to output a signal representing the received, channel-affected radio signal. A variety of channel scattering functions can be used to characterize the response of a transmitter-receiver system to such atmospheric effects.
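
    The channel-realization step described above can be sketched as follows. This is a simplified illustration, not the patented implementation: the 8x8 scattering function is invented, and only a single row of the channel coefficient matrix is applied to one signal segment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical delay-Doppler scattering function on an 8x8 grid.
S = np.exp(-np.arange(8)[:, None] / 2.0 - np.arange(8)[None, :] / 4.0)

# One channel realization: sqrt(S) weighted by independent Gaussian
# real and imaginary parts, then a 2-D inverse FFT gives a matrix of
# channel coefficients over frequency and time.
g = rng.normal(size=S.shape) + 1j * rng.normal(size=S.shape)
H = np.fft.ifft2(np.sqrt(S) * g)

# Apply the channel to one signal segment: FFT the segment, multiply
# by a row of channel coefficients, inverse FFT back to the time domain.
segment = rng.normal(size=8)
received = np.fft.ifft(np.fft.fft(segment) * H[0])
print(received.shape)
```

    Drawing a fresh Gaussian matrix g for each realization is what makes repeated draws statistically consistent with the chosen scattering function.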

  6. Study of Glycemic Variability Through Time Series Analyses (Detrended Fluctuation Analysis and Poincaré Plot) in Children and Adolescents with Type 1 Diabetes.

    PubMed

    García Maset, Leonor; González, Lidia Blasco; Furquet, Gonzalo Llop; Suay, Francisco Montes; Marco, Roberto Hernández

    2016-11-01

Time series analysis provides information on blood glucose dynamics that is unattainable with conventional glycemic variability (GV) indices. To date, no studies have been published on these parameters in pediatric patients with type 1 diabetes. Our aim is to evaluate the relationship of time series parameters with conventional GV indices and with glycosylated hemoglobin (HbA1c) levels. This is a cross-sectional study of 41 children and adolescents with type 1 diabetes. Glucose monitoring was carried out continuously for 72 h to study the following GV indices: standard deviation (SD) of glucose levels (mg/dL), coefficient of variation (%), interquartile range (IQR; mg/dL), mean amplitude of the largest glycemic excursions (MAGE), and continuous overlapping net glycemic action (CONGA). The time series analysis was conducted by means of detrended fluctuation analysis (DFA) and the Poincaré plot. Time series parameters (the DFA alpha coefficient and elements of the ellipse of the Poincaré plot) correlated well with the more conventional GV indices. Patients were grouped according to the terciles of these indices, to the terciles of eccentricity (1: 12.56-16.98, 2: 16.99-21.91, 3: 21.92-41.03), and to the value of the DFA alpha coefficient (> or ≤1.5). No differences were observed in the HbA1c of patients grouped by GV index criteria; however, significant differences were found in patients grouped by alpha coefficient and eccentricity, not only in HbA1c, but also in SD glucose, IQR, and the CONGA index. The loss of complexity in glycemic homeostasis is accompanied by an increase in variability.
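
    The DFA alpha coefficient used to group patients can be estimated with a short order-1 DFA routine. This is a generic sketch of the technique, not the study's software; the scale list and the test series are illustrative.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Order-1 detrended fluctuation analysis: returns the scaling
    exponent alpha, the slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        ms_res = []
        for seg in segs:  # remove a linear trend from each window
            a, b = np.polyfit(t, seg, 1)
            ms_res.append(np.mean((seg - (a * t + b)) ** 2))
        F.append(np.sqrt(np.mean(ms_res)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(2)
alpha_wn = dfa_alpha(rng.normal(size=4096))  # white noise: alpha near 0.5
```

    Uncorrelated noise gives alpha near 0.5, while smoother, less complex glucose dynamics push alpha upward, which is why the study dichotomizes patients at alpha = 1.5.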

  7. Comparing the structure of an emerging market with a mature one under global perturbation

    NASA Astrophysics Data System (ADS)

    Namaki, A.; Jafari, G. R.; Raei, R.

    2011-09-01

In this paper we investigate the Tehran Stock Exchange (TSE) and the Dow Jones Industrial Average (DJIA) in terms of perturbed correlation matrices. A stock market can be perturbed in two ways, locally or globally. In the local method, we replace a single correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, whereas in the global method, we reconstruct the correlation matrix after replacing the original return series with Gaussian-distributed time series. The local perturbation serves only as a technical check. We analyze these markets through two statistical approaches, random matrix theory (RMT) and the correlation coefficient distribution. Using RMT, we find that the largest eigenvalue represents an influence common to all stocks, and that this eigenvalue peaks during financial shocks. We find that a few strongly correlated stocks provide the essential robustness of the stock market, but by replacing their return time series with Gaussian-distributed time series, the mean correlation coefficient, the largest eigenvalue, and the fraction of eigenvalues deviating from the RMT prediction all fall sharply in both markets. Comparing the two markets, we see that the DJIA is more sensitive to global perturbations. These findings are crucial for risk management and portfolio selection.
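
    The effect of a global perturbation on the largest eigenvalue can be illustrated with a toy one-factor market. This is a sketch under invented parameters (20 "stocks", 1000 "days", a single common factor), not the paper's data or code.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 1000, 20

# Toy "market": every stock loads on one common factor, so the largest
# eigenvalue of the correlation matrix is a market-wide mode.
market = rng.normal(size=T)
returns = 0.6 * market[:, None] + rng.normal(size=(T, N))
lam_max = np.linalg.eigvalsh(np.corrcoef(returns.T)).max()

# Global perturbation: replace every return series by an independent
# Gaussian series and rebuild the correlation matrix.
perturbed = rng.normal(size=(T, N))
lam_max_pert = np.linalg.eigvalsh(np.corrcoef(perturbed.T)).max()

print(lam_max > lam_max_pert)  # the market mode collapses
```

    After the perturbation the spectrum falls back toward the random-matrix (Marchenko-Pastur) bulk, mirroring the sharp drop the authors report.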

  8. The influence of trading volume on market efficiency: The DCCA approach

    NASA Astrophysics Data System (ADS)

    Sukpitak, Jessada; Hengpunya, Varagorn

    2016-09-01

For a single market, the cross-correlation between market efficiency and trading volume, an indicator of market liquidity, is analysed in detail. The study begins by creating a time series of market efficiency, obtained by applying the time-varying Hurst exponent with a one-year sliding window to daily closing prices. The corresponding time series of trading volume, covering the same period, is derived from a one-year moving average of daily trading volume. The detrended cross-correlation coefficient is then employed to quantify the degree of cross-correlation between the two time series. We found that the cross-correlation coefficients of all considered stock markets are close to 0 and clearly outside the range in which correlations are considered significant, at almost every time scale. The results show that market liquidity, in terms of trading volume, has hardly any effect on market efficiency.
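
    A common form of the detrended cross-correlation coefficient (a sketch assuming Zebende's rho_DCCA definition: detrended covariance of the integrated profiles normalized by the two detrended variances) can be written compactly. The series and window size below are illustrative, not market data.

```python
import numpy as np

def dcca_coefficient(x, y, n):
    """Detrended cross-correlation coefficient at window size n."""
    X = np.cumsum(x - x.mean())  # integrated profiles
    Y = np.cumsum(y - y.mean())
    n_seg = len(X) // n
    t = np.arange(n)
    cov = vx = vy = 0.0
    for k in range(n_seg):
        xs = X[k * n:(k + 1) * n]
        ys = Y[k * n:(k + 1) * n]
        xd = xs - np.polyval(np.polyfit(t, xs, 1), t)  # detrend window
        yd = ys - np.polyval(np.polyfit(t, ys, 1), t)
        cov += np.mean(xd * yd)
        vx += np.mean(xd ** 2)
        vy += np.mean(yd ** 2)
    return cov / np.sqrt(vx * vy)

rng = np.random.default_rng(4)
common = rng.normal(size=2000)
a = common + 0.5 * rng.normal(size=2000)  # two series sharing a component
b = common + 0.5 * rng.normal(size=2000)
rho = dcca_coefficient(a, b, 16)
```

    Series sharing a common component give a coefficient well above 0, while independent series stay near 0, which is the significance criterion the study applies.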

  9. A recurrence network approach for the analysis of skin blood flow dynamics in response to loading pressure.

    PubMed

    Liao, Fuyuan; Jan, Yih-Kuen

    2012-06-01

This paper presents a recurrence network approach for the analysis of skin blood flow dynamics in response to loading pressure. Recurrence is a fundamental property of many dynamical systems, which can be explored in phase spaces constructed from observational time series. A visualization tool of recurrence analysis called the recurrence plot (RP) has proved highly effective for detecting transitions in the dynamics of a system. However, delay embedding can produce spurious structures in RPs. Network-based concepts have recently been applied to the analysis of nonlinear time series. We demonstrate that time series with different types of dynamics exhibit distinct global clustering coefficients and distributions of local clustering coefficients, and that the global clustering coefficient is robust to the embedding parameters. We applied the approach to study the response of skin blood flow oscillations (BFO) to loading pressure. The results showed that the global clustering coefficients of BFO significantly decreased in response to loading pressure (p<0.01). Moreover, surrogate tests indicated that this decrease was associated with a loss of nonlinearity of BFO. Our results suggest that the recurrence network approach can practically quantify the nonlinear dynamics of BFO.
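
    A recurrence network and its global clustering coefficient can be sketched as follows. This is a simplified illustration that links raw scalar values directly (no delay embedding, unlike the paper), with an invented threshold and test series.

```python
import numpy as np

def global_clustering(x, eps):
    """Recurrence network of a scalar series: nodes are time points,
    linked when |x_i - x_j| < eps; returns the global (transitivity)
    clustering coefficient, 3 x triangles / connected triples."""
    A = (np.abs(x[:, None] - x[None, :]) < eps).astype(float)
    np.fill_diagonal(A, 0.0)
    triangles = np.trace(A @ A @ A)     # counts each triangle 6 times
    deg = A.sum(axis=1)
    triples = np.sum(deg * (deg - 1))   # counts each connected triple twice
    return triangles / triples if triples else 0.0

rng = np.random.default_rng(5)
c_noise = global_clustering(rng.normal(size=300), eps=0.3)
```

    In the study, distributions of such coefficients (computed on embedded phase-space trajectories) separate dynamical regimes of blood flow under loading pressure.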

  10. Empirical forecast of quiet time ionospheric Total Electron Content maps over Europe

    NASA Astrophysics Data System (ADS)

    Badeke, Ronny; Borries, Claudia; Hoque, Mainul M.; Minkwitz, David

    2018-06-01

An accurate forecast of the atmospheric Total Electron Content (TEC) is helpful for investigating space weather influences on the ionosphere and for technical applications like satellite-receiver radio links. The purpose of this work is to compare four empirical methods for a 24-h forecast of vertical TEC maps over Europe under geomagnetically quiet conditions. TEC map data are obtained from the Space Weather Application Center Ionosphere (SWACI) and the Universitat Politècnica de Catalunya (UPC). The time-series methods, the Standard Persistence Model (SPM), a 27-day median model (MediMod), and a Fourier Series Expansion, are evaluated against maps for the entire year of 2015. As a representative of the climatological coefficient models, the forecast performance of the Global Neustrelitz TEC model (NTCM-GL) is also investigated. Time periods of magnetic storms, identified with the Dst index, are excluded from the validation. When forecasts are computed from the most recent maps, the time-series methods perform slightly better than the coefficient model NTCM-GL. The benefit of NTCM-GL is its independence from observational TEC data. Among the time-series methods, MediMod delivers the best overall performance regarding accuracy and data-gap handling. Quiet-time SWACI maps can be forecasted accurately and in real time by the MediMod time-series approach.
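
    The median-model idea can be sketched in a few lines, assuming a pixel-wise median of the preceding 27 daily maps; the actual SWACI MediMod implementation may differ in detail (for example, medians per time of day). The map dimensions and values below are invented.

```python
import numpy as np

def medimod_forecast(tec_maps):
    """27-day median model sketch: the forecast map for the next day
    is the pixel-wise median of the preceding 27 daily maps."""
    return np.median(tec_maps[-27:], axis=0)

rng = np.random.default_rng(6)
# Hypothetical 40 days of 5x5 TEC maps around a 10 TECU background.
maps = 10.0 + rng.normal(scale=0.5, size=(40, 5, 5))
forecast = medimod_forecast(maps)
print(forecast.shape)
```

    A median over a 27-day window (one solar rotation) suppresses day-to-day outliers and single-day data gaps, which is consistent with the robustness the study reports.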

  11. Sectoral risk research about input-output structure of the United States

    NASA Astrophysics Data System (ADS)

    Zhang, Mao

    2018-02-01

Research on economic risk at the sectoral level, which is significantly important for risk early warning, remains rare. This paper employs a status coefficient to measure the symmetry of the economic subnetwork, which is negatively correlated with sectoral risk. We then conduct empirical research in both the cross-sectional and time-series dimensions. In the cross-sectional dimension, we study the correlations between the sectoral status coefficient and sectoral volatility, earning rate, and Sharpe ratio, respectively, for the year 2015. Next, from the time-series perspective, we first investigate the changing correlation between the sectoral status coefficient and annual total output from 1997 to 2015. We then divide the 71 sectors in America into agriculture, manufacturing, services, and government, compare the trend terms of the average sectoral status coefficients of the four industries, and discuss the causes behind them. We also find an obvious abnormality in the housing sector. Finally, this paper puts forward some suggestions for the federal government.

  12. Method of estimating pulse response using an impedance spectrum

    DOEpatents

    Morrison, John L; Morrison, William H; Christophersen, Jon P; Motloch, Chester G

    2014-10-21

Electrochemical impedance spectrum data are used to predict the pulse performance of an energy storage device. The impedance spectrum may be obtained in situ. A simulation waveform includes a pulse wave with a period greater than or equal to the reciprocal of the lowest frequency used in the impedance measurement. Fourier series coefficients of the pulse train can then be obtained. The number of harmonic constituents in the Fourier series is selected so as to adequately resolve the response, but the maximum frequency should be less than or equal to the highest frequency used in the impedance measurement. Using a current pulse as an example, the Fourier coefficients of the pulse are multiplied by the impedance spectrum at the corresponding frequencies to obtain the Fourier coefficients of the voltage response to the desired pulse. The Fourier coefficients of the response are then summed and reassembled to obtain the overall time-domain estimate of the voltage using Fourier series analysis.
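
    The multiply-in-frequency procedure can be sketched as follows. The impedance model (an ohmic resistance in series with a parallel RC pair) and all parameter values are invented for illustration; the patent works from measured spectra instead.

```python
import numpy as np

# Hypothetical impedance spectrum: R0 in series with a parallel
# RC pair (R1, C1); values are made up for illustration.
def impedance(f, R0=0.05, R1=0.02, C1=500.0):
    return R0 + R1 / (1 + 2j * np.pi * f * R1 * C1)

period = 10.0   # pulse period in seconds
n_harm = 50     # harmonics kept; f_max = n_harm / period
t = np.linspace(0.0, period, 1000, endpoint=False)

# 1 A square current pulse, on for the first half of the period.
i_t = (t < period / 2).astype(float)

# Voltage response: DC term plus each harmonic Fourier coefficient of
# the current multiplied by the impedance at that frequency.
v_t = np.full_like(t, i_t.mean() * impedance(0.0).real)
for k in range(1, n_harm + 1):
    f_k = k / period
    c_k = 2.0 / len(t) * np.sum(i_t * np.exp(-2j * np.pi * f_k * t))
    v_t += np.real(c_k * impedance(f_k) * np.exp(2j * np.pi * f_k * t))
```

    Truncating at n_harm enforces the constraint from the abstract: the highest synthesized frequency must stay within the measured impedance band.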

  13. Global Mass Flux Solutions from GRACE: A Comparison of Parameter Estimation Strategies - Mass Concentrations Versus Stokes Coefficients

    NASA Technical Reports Server (NTRS)

    Rowlands, D. D.; Luthcke, S. B.; McCarthy J. J.; Klosko, S. M.; Chinn, D. S.; Lemoine, F. G.; Boy, J.-P.; Sabaka, T. J.

    2010-01-01

The differences between mass concentration (mascon) parameters and standard Stokes coefficient parameters in the recovery of gravity information from Gravity Recovery and Climate Experiment (GRACE) intersatellite K-band range rate data are investigated. First, mascons are decomposed into their Stokes coefficient representations to gauge the range of solutions available using each of the two types of parameters. Next, a direct comparison is made between two time series of unconstrained gravity solutions, one based on a set of global equal-area mascon parameters (equivalent to 4 deg x 4 deg at the equator), and the other based on standard Stokes coefficients, with each time series using the same fundamental processing of the GRACE tracking data. It is shown that in unconstrained solutions, the type of gravity parameter being estimated does not qualitatively affect the estimated gravity field. It is also shown that many of the differences in mass flux derivations from GRACE gravity solutions arise from the type of smoothing being used, and that the type of smoothing that can be embedded in mascon solutions has distinct advantages over postsolution smoothing. Finally, a 1 year time series based on global 2 deg equal-area mascons estimated every 10 days is presented.

  14. Linear and nonlinear trending and prediction for AVHRR time series data

    NASA Technical Reports Server (NTRS)

    Smid, J.; Volf, P.; Slama, M.; Palus, M.

    1995-01-01

    The variability of AVHRR calibration coefficient in time was analyzed using algorithms of linear and non-linear time series analysis. Specifically we have used the spline trend modeling, autoregressive process analysis, incremental neural network learning algorithm and redundancy functional testing. The analysis performed on available AVHRR data sets revealed that (1) the calibration data have nonlinear dependencies, (2) the calibration data depend strongly on the target temperature, (3) both calibration coefficients and the temperature time series can be modeled, in the first approximation, as autonomous dynamical systems, (4) the high frequency residuals of the analyzed data sets can be best modeled as an autoregressive process of the 10th degree. We have dealt with a nonlinear identification problem and the problem of noise filtering (data smoothing). The system identification and filtering are significant problems for AVHRR data sets. The algorithms outlined in this study can be used for the future EOS missions. Prediction and smoothing algorithms for time series of calibration data provide a functional characterization of the data. Those algorithms can be particularly useful when calibration data are incomplete or sparse.

  15. Evaluation of the significance of abrupt changes in precipitation and runoff process in China

    NASA Astrophysics Data System (ADS)

    Xie, Ping; Wu, Ziyi; Sang, Yan-Fang; Gu, Haiting; Zhao, Yuxi; Singh, Vijay P.

    2018-05-01

    Abrupt changes are an important manifestation of hydrological variability. How to accurately detect the abrupt changes in hydrological time series and evaluate their significance is an important issue, but methods for dealing with them effectively are lacking. In this study, we propose an approach to evaluate the significance of abrupt changes in time series at five levels: no, weak, moderate, strong, and dramatic. The approach was based on an index of correlation coefficient calculated for the original time series and its abrupt change component. A bigger value of correlation coefficient reflects a higher significance level of abrupt change. Results of Monte-Carlo experiments verified the reliability of the proposed approach, and also indicated the great influence of statistical characteristics of time series on the significance level of abrupt change. The approach was derived from the relationship between correlation coefficient index and abrupt change, and can estimate and grade the significance levels of abrupt changes in hydrological time series. Application of the proposed approach to ten major watersheds in China showed that abrupt changes mainly occurred in five watersheds in northern China, which have arid or semi-arid climate and severe shortages of water resources. Runoff processes in northern China were more sensitive to precipitation change than those in southern China. Although annual precipitation and surface water resources amount (SWRA) exhibited a harmonious relationship in most watersheds, abrupt changes in the latter were more significant. Compared with abrupt changes in annual precipitation, human activities contributed much more to the abrupt changes in the corresponding SWRA, except for the Northwest Inland River watershed.
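
    The correlation-coefficient index described above can be sketched as follows, under the simplifying assumption that the change point is already known (the paper's method also locates it); the series and change point below are invented.

```python
import numpy as np

def abrupt_change_corr(x, tau):
    """Correlation between a series and its fitted abrupt-change
    component: a step series holding the pre- and post-change means.
    A larger coefficient indicates a more significant abrupt change."""
    step = np.empty_like(x, dtype=float)
    step[:tau] = x[:tau].mean()
    step[tau:] = x[tau:].mean()
    return np.corrcoef(x, step)[0, 1]

rng = np.random.default_rng(7)
weak = rng.normal(size=200)                       # no real change
strong = np.concatenate([rng.normal(size=100),
                         3 + rng.normal(size=100)])  # jump of 3 sigma
print(abrupt_change_corr(strong, 100) > abrupt_change_corr(weak, 100))
```

    Grading the coefficient into bands (no/weak/moderate/strong/dramatic) then reduces to thresholding this value, with Monte-Carlo experiments calibrating the thresholds.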

  16. An Energy-Based Similarity Measure for Time Series

    NASA Astrophysics Data System (ADS)

    Boudraa, Abdel-Ouahab; Cexus, Jean-Christophe; Groussat, Mathieu; Brunagel, Pierre

    2007-12-01

A new similarity measure for time series analysis, called SimilB, based on the cross-Ψ_B-energy operator, is introduced. Ψ_B is a nonlinear measure which quantifies the interaction between two time series. Compared to the Euclidean distance (ED) or the Pearson correlation coefficient (CC), SimilB includes the temporal information and relative changes of the time series by using their first and second derivatives. SimilB is well suited for both nonstationary and stationary time series, particularly those presenting discontinuities. Some new properties of Ψ_B are presented. In particular, we show that Ψ_B as a similarity measure is robust to both scaling and time shift. SimilB is illustrated with synthetic time series and an artificial dataset and compared to the CC and the ED measures.

  17. Smoothing strategies combined with ARIMA and neural networks to improve the forecasting of traffic accidents.

    PubMed

    Barba, Lida; Rodríguez, Nibaldo; Montt, Cecilia

    2014-01-01

Two smoothing strategies, combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models, are presented to improve the forecasting of time series. The forecasting strategy is implemented in two stages. In the first stage, the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents in the Valparaíso region, Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO, with a MAPE of 15.51%.

  18. Estimating Geocenter Motion and Changes in the Earth's Dynamic Oblateness from a Statistically Optimal Combination of GRACE Data and Geophysical Models

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Ditmar, P.; Riva, R.

    2016-12-01

Time-varying gravity field solutions of the GRACE satellite mission enable observation of Earth's mass transport on a monthly basis since 2002. One of the remaining challenges is how to complement these solutions with sufficiently accurate estimates of very low-degree spherical harmonic coefficients, particularly the degree-1 coefficients and C20. An absence or inaccurate estimation of these coefficients may result in strong biases in mass transport estimates. Variations in the degree-1 coefficients reflect geocenter motion, and variations in the C20 coefficient describe changes in the Earth's dynamic oblateness (ΔJ2). In this study, we developed a novel methodology to estimate monthly variations in the degree-1 and C20 coefficients by combining GRACE data with oceanic mass anomalies (the combination approach). Unlike the method by Swenson et al. (2008), the proposed approach exploits noise covariance information from both input datasets and thus produces stochastically optimal solutions. A numerical simulation study is carried out to verify the correctness and performance of the proposed approach. We demonstrate that solutions obtained with the proposed approach have a significantly higher quality, as compared to the method by Swenson et al. Finally, we apply the proposed approach to real monthly GRACE solutions. To evaluate the obtained results, we calculate mass transport time series over selected regions where minimal mass anomalies are expected. A clear reduction in the RMS of the mass transport time series (more than 50%) is observed there when the degree-1 and C20 coefficients obtained with the proposed approach are used. In particular, the seasonal pattern in the mass transport time series disappears almost entirely. The traditional approach (degree-1 coefficients based on Swenson et al. (2008) and C20 based on SLR data), in contrast, does not reduce that RMS, or even makes it larger (e.g., over the Sahara desert).
We further show that the degree-1 variations play a major role in the observed improvement. At the same time, the usage of the C20 solutions obtained with the combination approach yields a similar accuracy of mass anomaly estimates, as compared to the results based on SLR analysis. The computed degree-1 and C20 coefficients will be made publicly available.

  19. Sector Identification in a Set of Stock Return Time Series Traded at the London Stock Exchange

    NASA Astrophysics Data System (ADS)

    Coronnello, C.; Tumminello, M.; Lillo, F.; Micciche, S.; Mantegna, R. N.

    2005-09-01

We compare several methods recently used in the literature to detect the existence of a certain degree of common behavior among stock returns belonging to the same economic sector. Specifically, we discuss methods based on random matrix theory and hierarchical clustering techniques. We apply these methods to a portfolio of stocks traded at the London Stock Exchange. The investigated time series are recorded both at a daily time horizon and at a 5-minute time horizon. The correlation coefficient matrix is very different at different time horizons, confirming that more structured correlation coefficient matrices are observed for long time horizons. All the considered methods are able to detect economic information and the presence of clusters characterized by the economic sector of the stocks. However, the methods differ in their degree of sensitivity with respect to different sectors. Our comparative analysis suggests that a single method alone may not be able to extract all the economic information present in the correlation coefficient matrix of a stock portfolio.

  20. Coastal Atmosphere and Sea Time Series (CoASTS)

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Berthon, Jean-Francoise; Zibordi, Giuseppe; Doyle, John P.; Grossi, Stefania; vanderLinde, Dirk; Targa, Cristina; McClain, Charles R. (Technical Monitor)

    2002-01-01

In this document, the first three years of a time series of bio-optical marine and atmospheric measurements are presented and analyzed. These measurements were performed from an oceanographic tower in the northern Adriatic Sea within the framework of the Coastal Atmosphere and Sea Time Series (CoASTS) project, an ocean color calibration and validation activity. The data set collected includes spectral measurements of the in-water apparent (diffuse attenuation coefficient, reflectance, Q-factor, etc.) and inherent (absorption and scattering coefficients) optical properties, as well as the concentrations of the main optical components (pigment and suspended matter concentrations). Clear seasonal patterns are exhibited by the marine quantities, on which an appreciable short-term variability (on the order of half a day to one day) is superimposed. This short-term variability is well correlated with changes in surface salinity resulting from the southward transport of freshwater coming from the northern rivers. Concentrations of chlorophyll a and total suspended matter span more than two orders of magnitude. The bio-optical characteristics of the measurement site pertain to both Case-I (about 64%) and Case-II (about 36%) waters, based on a relationship between the beam attenuation coefficient at 660 nm and the chlorophyll a concentration. Empirical algorithms relating in-water remote sensing reflectance ratios and optical components or properties of interest (chlorophyll a, total suspended matter, and the diffuse attenuation coefficient) are presented.

  1. Comparative analysis of death by suicide in Brazil and in the United States: descriptive, cross-sectional time series study.

    PubMed

    Abuabara, Alexander; Abuabara, Allan; Tonchuk, Carin Albino Luçolli

    2017-01-01

    The World Health Organization recognizes suicide as a public health priority. Increased knowledge of suicide risk factors is needed in order to be able to adopt effective prevention strategies. The aim of this study was to analyze and compare the association between the Gini coefficient (which is used to measure inequality) and suicide death rates over a 14-year period (2000-2013) in Brazil and in the United States (US). The hypothesis put forward was that reduction of income inequality is accompanied by reduction of suicide rates. Descriptive cross-sectional time-series study in Brazil and in the US. Population, death and suicide death data were extracted from the DATASUS database in Brazil and from the National Center for Health Statistics in the US. Gini coefficient data were obtained from the World Development Indicators. Time series analysis was performed on Brazilian and American official data regarding the number of deaths caused by suicide between 2000 and 2013 and the Gini coefficients of the two countries. The suicide trends were examined and compared. Brazil and the US present converging Gini coefficients, mainly due to reduction of inequality in Brazil over the last decade. However, suicide rates are not converging as hypothesized, but are in fact rising in both countries. The hypothesis that reduction of income inequality is accompanied by reduction of suicide rates was not verified.
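
    The Gini coefficient used as the inequality measure in this study has a standard pairwise-difference form (mean absolute difference between all pairs, normalized by twice the mean). A short sketch on toy income vectors:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient from the mean absolute difference between all
    pairs of incomes, normalized by twice the mean (0 = perfect
    equality; values approaching 1 = extreme inequality)."""
    x = np.asarray(incomes, dtype=float)
    n = len(x)
    diff_sum = np.abs(x[:, None] - x[None, :]).sum()
    return diff_sum / (2.0 * n * n * x.mean())

print(round(gini([1, 1, 1, 1]), 3))   # -> 0.0
print(round(gini([0, 0, 0, 10]), 3))  # -> 0.75
```

    Time series of this coefficient (here taken from the World Development Indicators) are what the study correlates with suicide rates in each country.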

2. Quantifying the range of cross-correlated fluctuations using a q-L dependent AHXA coefficient

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Wang, Lin; Chen, Yuming

    2018-03-01

Recently, based on analogous height cross-correlation analysis (AHXA), a cross-correlation coefficient ρ×(L) has been proposed to quantify the levels of cross-correlation on different temporal scales for bivariate series. A limitation of this coefficient is that it cannot capture the full information of cross-correlations in the amplitude of fluctuations. In fact, it only detects the cross-correlation at a specific order of fluctuation, which might neglect important information inherited from other orders. To overcome this disadvantage, in this work, based on the scaling of the qth order covariance and the time delay L, we define a two-parameter cross-correlation coefficient ρq(L) to detect and quantify the range and level of cross-correlations. The new ρq(L) coefficient leads to the formation of a ρq(L) surface, which not only quantifies the level of cross-correlations but also allows us to identify the range of fluctuation amplitudes that are correlated in two given signals. Applications to the classical ARFIMA models and the binomial multifractal series illustrate the feasibility of this new coefficient ρq(L). In addition, a statistical test is proposed to quantify the existence of cross-correlations between two given series. Applying our method to real-life empirical data from the 1999-2000 California electricity market, we find that the California power crisis in 2000 destroyed the cross-correlation between the price and load series, but did not affect the correlation of the load series during and before the crisis.

  3. An assessment of optical and biogeochemical multi-decadal trends in the Sargasso Sea

    NASA Astrophysics Data System (ADS)

    Allen, J. G.; Siegel, D.; Nelson, N. B.

    2016-02-01

Observations of optical and biogeochemical data, made as part of the Bermuda Bio-Optics Project (BBOP) at the Bermuda Atlantic Time-series Study (BATS) site in the Sargasso Sea, allow for the examination of temporal trends in vertical light attenuation and their potential controls. Trends in both the magnitude and spectral slope of the diffuse attenuation coefficient should reflect changes in chlorophyll and chromophoric dissolved organic matter (CDOM) concentrations in the Sargasso Sea. The length and methodological consistency of this time series provide an excellent opportunity to extend analyses of seasonal cycles of apparent optical properties to interannual and multi-year time scales. Here, we characterize changes in the size and shape of diffuse attenuation coefficient spectra and compare them to temperature, chlorophyll a concentration, and discrete measurements of phytoplankton and CDOM absorption. The time series analyses reveal up to a 1.2% annual increase in the magnitude of the diffuse attenuation coefficient over the upper 70 m of the water column while showing no significant change in the spectral slope of diffuse attenuation over the course of the study. These observations indicate that increases in phytoplankton pigment concentration rather than changes in CDOM are the primary driver for the attenuation trends on multi-year timescales for this region.

  4. Modeling and experiments for the time-dependent diffusion coefficient during methane desorption from coal

    NASA Astrophysics Data System (ADS)

    Cheng-Wu, Li; Hong-Lai, Xue; Cheng, Guan; Wen-biao, Liu

    2018-04-01

    Statistical analysis shows that in the coal matrix the diffusion coefficient for methane is time-varying, and its integral satisfies the formula μt^κ/(1 + βt^κ). Therefore, a so-called dynamic diffusion coefficient model (DDC model) is developed. To verify the suitability and accuracy of the DDC model, a series of gas diffusion experiments were conducted using coal particles of different sizes. The results show that the experimental data can be accurately described by both the DDC and bidisperse models, but the fit to the DDC model is slightly better. For all coal samples, as time increases, the effective diffusion coefficient first shows a sudden drop, followed by a gradual decrease before stabilizing at longer times. The effective diffusion coefficient is negatively related to coal particle size. Finally, the relationship between the constants of the DDC model and the effective diffusion coefficient is discussed. The constant α (= μ/R²) denotes the effective coefficient at the initial time, and the constants κ and β control the attenuation characteristics of the effective diffusion coefficient.
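    Reading the abstract's expression as μt^κ/(1 + βt^κ) for the integral of the diffusion coefficient (a hedged reconstruction of the garbled formula), the effective coefficient is its time derivative, which decays monotonically when κ < 1; the parameter values below are illustrative:

```python
import numpy as np

def cumulative_diffusion(t, mu, kappa, beta):
    # Hedged reading of the abstract: integral of the time-varying
    # diffusion coefficient Q(t) = mu * t**kappa / (1 + beta * t**kappa)
    return mu * t**kappa / (1 + beta * t**kappa)

def effective_D(t, mu, kappa, beta, dt=1e-6):
    # Effective diffusion coefficient as the numerical derivative of Q(t)
    return (cumulative_diffusion(t + dt, mu, kappa, beta)
            - cumulative_diffusion(t, mu, kappa, beta)) / dt

t = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])
D = effective_D(t, mu=1.0, kappa=0.8, beta=0.05)
```

With κ = 0.8 the sketch reproduces the qualitative behavior reported above: a fast initial drop followed by a slow decay toward a plateau.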

  5. The Effects of Population Size Histories on Estimates of Selection Coefficients from Time-Series Genetic Data

    PubMed Central

    Jewett, Ethan M.; Steinrücken, Matthias; Song, Yun S.

    2016-01-01

    Many approaches have been developed for inferring selection coefficients from time series data while accounting for genetic drift. These approaches have been motivated by the intuition that properly accounting for the population size history can significantly improve estimates of selective strengths. However, the improvement in inference accuracy that can be attained by modeling drift has not been characterized. Here, by comparing maximum likelihood estimates of selection coefficients that account for the true population size history with estimates that ignore drift by assuming allele frequencies evolve deterministically in a population of infinite size, we address the following questions: how much can modeling the population size history improve estimates of selection coefficients? How much can mis-inferred population sizes hurt inferences of selection coefficients? We conduct our analysis under the discrete Wright–Fisher model by deriving the exact probability of an allele frequency trajectory in a population of time-varying size and we replicate our results under the diffusion model. For both models, we find that ignoring drift leads to estimates of selection coefficients that are nearly as accurate as estimates that account for the true population history, even when population sizes are small and drift is high. This result is of interest because inference methods that ignore drift are widely used in evolutionary studies and can be many orders of magnitude faster than methods that account for population sizes. PMID:27550904

  6. qFeature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-14

    This package contains statistical routines for extracting features from multivariate time-series data, which can then be used in subsequent multivariate statistical analysis to identify patterns and anomalous behavior. It calculates local linear or quadratic regression fits over moving windows for each series and then summarizes the model coefficients across user-defined time intervals for each series. These methods are domain agnostic, but they have been successfully applied to a variety of domains, including commercial aviation and electric power grid data.
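    The local-regression idea can be sketched as follows (a minimal numpy version, not the package's actual R API): fit a line in each moving window and keep the coefficients as features.

```python
import numpy as np

def window_features(series, width, step):
    """Slope/intercept of a linear fit in each moving window
    (a sketch of the local-regression idea, not the package's API)."""
    series = np.asarray(series, float)
    t = np.arange(width)
    feats = []
    for start in range(0, len(series) - width + 1, step):
        slope, intercept = np.polyfit(t, series[start:start + width], 1)
        feats.append((slope, intercept))
    return np.array(feats)

y = np.arange(20, dtype=float)            # perfectly linear toy series
f = window_features(y, width=5, step=5)   # 4 non-overlapping windows
```

For a perfectly linear series every window yields slope 1, and the intercepts track the window start values, which makes the behavior easy to sanity-check.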

  7. Harmonic regression of Landsat time series for modeling attributes from national forest inventory data

    NASA Astrophysics Data System (ADS)

    Wilson, Barry T.; Knight, Joseph F.; McRoberts, Ronald E.

    2018-03-01

    Imagery from the Landsat Program has been used frequently as a source of auxiliary data for modeling land cover, as well as a variety of attributes associated with tree cover. With ready access to all scenes in the archive since 2008 due to the USGS Landsat Data Policy, new approaches to deriving such auxiliary data from dense Landsat time series are required. Several methods have previously been developed for use with finer temporal resolution imagery (e.g. AVHRR and MODIS), including image compositing and harmonic regression using Fourier series. This manuscript presents a study using Minnesota, USA, during the years 2009-2013 as the study area and timeframe. The study examined the relative predictive power of land cover models, in particular those related to tree cover, using predictor variables based solely on composite imagery versus those using estimated harmonic regression coefficients. The study used two common non-parametric modeling approaches (i.e. k-nearest neighbors and random forests) for fitting classification and regression models of multiple attributes measured on USFS Forest Inventory and Analysis plots, using all available Landsat imagery for the study area and timeframe. The estimated Fourier coefficients developed by harmonic regression of tasseled cap transformation time series data were shown to be correlated with land cover, including tree cover. Regression models using estimated Fourier coefficients as predictor variables showed a two- to threefold increase in explained variance for a small set of continuous response variables, relative to comparable models using monthly image composites. Similarly, the overall accuracies of classification models using the estimated Fourier coefficients were approximately 10-20 percentage points higher than those of the models using the image composites, with corresponding individual class accuracies between 6 and 45 percentage points higher.
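    The harmonic-regression step amounts to ordinary least squares on a design matrix of sine/cosine pairs; the annual period and the synthetic "reflectance" signal below are illustrative assumptions, not the study's data:

```python
import numpy as np

def harmonic_design(day, n_harmonics, period=365.25):
    """Design matrix: intercept plus sine/cosine pairs up to n_harmonics."""
    cols = [np.ones_like(day)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * day / period))
        cols.append(np.cos(2 * np.pi * k * day / period))
    return np.column_stack(cols)

# Synthetic "reflectance" with an annual cycle, sampled at irregular dates
rng = np.random.default_rng(1)
day = np.sort(rng.uniform(0, 5 * 365.25, 120))
refl = 0.3 + 0.2 * np.sin(2 * np.pi * day / 365.25) + 0.01 * rng.normal(size=120)

X = harmonic_design(day, n_harmonics=1)
coef, *_ = np.linalg.lstsq(X, refl, rcond=None)   # [mean, sin, cos] coefficients
```

A key advantage over compositing, as the abstract notes, is that the fit works directly on irregular acquisition dates and summarizes the full seasonal trajectory in a few coefficients.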

  8. Smoothing Strategies Combined with ARIMA and Neural Networks to Improve the Forecasting of Traffic Accidents

    PubMed Central

    Rodríguez, Nibaldo

    2014-01-01

    Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models to improve the forecasting of time series are presented. The forecasting strategy is implemented in two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents in the Valparaíso region of Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO, with a MAPE of 15.51%. PMID:25243200
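    A compact sketch of the first stage, with a stand-in for the second (a least-squares AR fit rather than full ARIMA, to keep the example self-contained):

```python
import numpy as np

def moving_average3(x):
    """Stage 1: 3-point moving average smoothing (series shrinks by 2)."""
    x = np.asarray(x, float)
    return (x[:-2] + x[1:-1] + x[2:]) / 3.0

def fit_ar(x, p):
    """Stage 2 stand-in: least-squares AR(p) coefficients for one-step forecasts."""
    x = np.asarray(x, float)
    X = np.column_stack([x[p - 1 - k:len(x) - 1 - k] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

t = np.arange(200)
raw = np.sin(0.1 * t) + 0.05 * np.random.default_rng(2).normal(size=200)
smooth = moving_average3(raw)
coef = fit_ar(smooth, p=2)
one_step_forecast = coef[0] * smooth[-1] + coef[1] * smooth[-2]
```

The smoothing step reduces noise before model fitting, which is the mechanism behind the MAPE improvements the abstract reports.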

  9. Option pricing from wavelet-filtered financial series

    NASA Astrophysics Data System (ADS)

    de Almeida, V. T. X.; Moriconi, L.

    2012-10-01

    We perform a wavelet decomposition of high-frequency financial time series into large and small time-scale components. Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small-scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet filtering is related to the fact that the non-Gaussian statistical structure of the original financial time series is essentially preserved for expiration times larger than just one trading day.
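    The low-pass Haar filtering described here amounts to discarding the detail coefficients at fine scales and reconstructing from the approximation alone; a minimal sketch on a synthetic trend-plus-noise series:

```python
import numpy as np

def haar_lowpass(x, levels):
    """Drop the Haar detail (small-scale) coefficients at `levels` scales
    and reconstruct only the large-scale component."""
    a = np.asarray(x, float)
    for _ in range(levels):                     # analysis: keep approximations
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
    for _ in range(levels):                     # synthesis with details zeroed
        up = np.empty(2 * len(a))
        up[0::2] = a / np.sqrt(2)
        up[1::2] = a / np.sqrt(2)
        a = up
    return a

# A slow trend plus fast noise: the low-pass component keeps the trend
rng = np.random.default_rng(3)
n = 1024
trend = np.sin(2 * np.pi * np.arange(n) / n)
noisy = trend + 0.2 * rng.normal(size=n)
recon = haar_lowpass(noisy, levels=4)
```

Four levels discard 15/16 of the coefficients; the paper's point is that an even more aggressive compression can leave option-pricing-relevant statistics intact.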

  10. Classification of damage in structural systems using time series analysis and supervised and unsupervised pattern recognition techniques

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr; de Lautour, Oliver R.

    2010-04-01

    Developed for studying long, periodic records of various measured quantities, time series analysis methods are inherently suited to, and offer interesting possibilities for, Structural Health Monitoring (SHM) applications. However, their use in SHM can still be regarded as an emerging application and deserves more study. In this research, autoregressive (AR) models were used to fit experimental acceleration time histories from two experimental structural systems, a 3-storey bookshelf-type laboratory structure and the ASCE Phase II SHM Benchmark Structure, in healthy and several damaged states. The coefficients of the AR models were chosen as damage-sensitive features. Preliminary visual inspection of the large, multidimensional sets of AR coefficients, to check for the presence of clusters corresponding to different damage severities, was achieved using Sammon mapping, an efficient nonlinear data compression technique. Systematic classification of damage into states based on the analysis of the AR coefficients was achieved using two supervised classification techniques, Nearest Neighbor Classification (NNC) and Learning Vector Quantization (LVQ), and one unsupervised technique, Self-Organizing Maps (SOM). This paper discusses the performance of AR coefficients as damage-sensitive features and compares the efficiency of the three classification techniques using experimental data.
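    The feature-extraction-plus-classification pipeline can be sketched with least-squares AR(1) coefficients and a nearest-neighbor rule; the "healthy"/"damaged" AR poles below are toy assumptions, not the paper's structures:

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1_coeff(x):
    """Least-squares AR(1) coefficient, used as the damage-sensitive feature."""
    x = np.asarray(x, float)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))

def simulate(phi, n=2000):
    """Toy AR(1) 'acceleration response' with pole phi."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

# Training features for two states: 0 = healthy, 1 = damaged (toy poles)
train = np.array([[ar1_coeff(simulate(p))] for p in (0.3, 0.3, 0.9, 0.9)])
labels = np.array([0, 0, 1, 1])

# Classify a new record by its nearest training feature (NNC)
test_feat = np.array([ar1_coeff(simulate(0.88))])
state = labels[np.argmin(np.linalg.norm(train - test_feat, axis=1))]
```

Damage that shifts the system's dynamics shifts the AR coefficients, so even this one-dimensional feature separates the two toy states cleanly.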

  11. Bayesian wavelet PCA methodology for turbomachinery damage diagnosis under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Shengli; Jiang, Xiaomo; Huang, Jinzhi; Yang, Shuhua; Wang, Xiaofang

    2016-12-01

    Centrifugal compressors often suffer various defects, such as impeller cracking, resulting in forced outages of the entire plant. Damage diagnostics and condition monitoring of such turbomachinery systems have become an increasingly important and powerful tool to prevent potential failures in components and reduce unplanned forced outages and maintenance costs, while improving the reliability, availability and maintainability of a turbomachinery system. This paper presents a probabilistic signal processing methodology for damage diagnostics using multiple time history data collected from different locations of a turbomachine, considering data uncertainty and multivariate correlation. The proposed methodology is based on the integration of three advanced state-of-the-art data mining techniques: discrete wavelet packet transform, Bayesian hypothesis testing, and probabilistic principal component analysis. The multiresolution wavelet analysis approach is employed to decompose a time series signal into different levels of wavelet coefficients. These coefficients represent multiple time-frequency resolutions of a signal. Bayesian hypothesis testing is then applied to each level of wavelet coefficients to remove possible imperfections. The Bayesian posterior-odds-ratio approach provides a direct means to assess whether there is imperfection in the decomposed coefficients, thus avoiding over-denoising. The power spectral density, estimated by the Welch method, is utilized to evaluate the effectiveness of the Bayesian wavelet cleansing method. Furthermore, the probabilistic principal component analysis approach is developed to reduce the dimensionality of multiple time series and to address multivariate correlation and data uncertainty for damage diagnostics.
The proposed methodology and generalized framework are demonstrated with a set of sensor data collected from a real-world centrifugal compressor with impeller cracks, through both time series and contour analyses of the vibration signals and principal components.
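    The dimensionality-reduction step can be sketched with plain PCA via the SVD (the paper's probabilistic PCA additionally models noise and yields uncertainty estimates); the three correlated "sensor channels" below are synthetic:

```python
import numpy as np

def principal_components(signals, k):
    """Project zero-meaned multichannel series onto the first k principal
    directions (plain PCA; probabilistic PCA adds an explicit noise model)."""
    X = signals - signals.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T, s

# Three synthetic sensor channels sharing one vibration component
rng = np.random.default_rng(5)
t = np.linspace(0, 10, 1000)
common = np.sin(2 * np.pi * t)
signals = np.column_stack([a * common + 0.05 * rng.normal(size=t.size)
                           for a in (1.0, 0.8, -0.5)])
scores, s = principal_components(signals, k=1)
```

One dominant singular value indicates that a single component captures the shared vibration across channels, which is the basis for reducing the multivariate monitoring problem to a few principal components.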

  12. Electron-temperature dependence of dissociative recombination of electrons with CO(+)·(CO)n-series ions

    NASA Technical Reports Server (NTRS)

    Whitaker, M.; Biondi, M. A.; Johnsen, R.

    1981-01-01

    The dependence on electron temperature of the coefficients for electron recombination with molecular cluster ions of the carbon monoxide series, CO(+)·(CO)n, is determined. A microwave discharge lasting approximately 0.1 msec was applied in 5-20 Torr neon containing a few tenths of a percent CO in an afterglow mass spectrometer apparatus, and the time histories of the various afterglow ions were measured. Expressions for the electron-temperature dependence of the recombination coefficients of the dimer and trimer ions CO(+)·CO and CO(+)·(CO)2 are obtained, which are found to be significantly different from those previously obtained for hydronium- and ammonium-series polar cluster ions, but similar to those of simple diatomic ions.

  13. Mapping Wetlands of Dongting Lake in China Using Landsat and SENTINEL-1 Time Series at 30M

    NASA Astrophysics Data System (ADS)

    Xing, L.; Tang, X.; Wang, H.; Fan, W.; Gao, X.

    2018-04-01

    Mapping and monitoring the wetlands of Dongting Lake using optical sensor data has been limited by cloud cover; open-access Sentinel-1 C-band data provide cloud-free SAR images with both high spatial and temporal resolution, offering new opportunities for monitoring wetlands. In this study, we combined optical and SAR data to map the wetlands of the Dongting Lake reserves in 2016. First, we generated two-monthly composited Landsat land surface reflectance, NDVI, NDWI, and TC-Wetness time series and Sentinel-1 (backscattering coefficient for VH and VV) time series. Second, we derived the surface water body at two-monthly frequency based on a threshold method using the Sentinel-1 time series. Permanent and seasonal water were then separated by the submergence ratio. Other land cover types were identified with an SVM classifier using the Landsat time series. Results showed that (1) the overall accuracies and kappa coefficients were above 86.6 % and 0.8; (2) natural wetlands, including permanent water body (14.8 %), seasonal water body (34.6 %), and permanent marshes (10.9 %), were the main land cover types, accounting for 60.3 % of the three wetland reserves, while human-made wetlands, such as rice fields, accounted for 34.3 % of the total area. Overall, this study proposes a new workflow for wetland mapping in Dongting Lake that combines multi-source remote sensing data; the two-monthly composited optical time series effectively compensated for data missing due to clouds and increased the possibility of precise wetland classification.

  14. Detrended fluctuation analysis made flexible to detect range of cross-correlated fluctuations

    NASA Astrophysics Data System (ADS)

    Kwapień, Jarosław; Oświecimka, Paweł; Drożdż, Stanisław

    2015-11-01

    The detrended cross-correlation coefficient ρDCCA has recently been proposed to quantify the strength of cross-correlations on different temporal scales in bivariate, nonstationary time series. It is based on the detrended cross-correlation and detrended fluctuation analyses (DCCA and DFA, respectively) and can be viewed as an analog of the Pearson coefficient in the case of the fluctuation analysis. The coefficient ρDCCA works well in many practical situations but by construction its applicability is limited to detection of whether two signals are generally cross-correlated, without the possibility to obtain information on the amplitude of fluctuations that are responsible for those cross-correlations. In order to introduce some related flexibility, here we propose an extension of ρDCCA that exploits the multifractal versions of DFA and DCCA: multifractal detrended fluctuation analysis and multifractal detrended cross-correlation analysis, respectively. The resulting new coefficient ρq not only is able to quantify the strength of correlations but also allows one to identify the range of detrended fluctuation amplitudes that are correlated in two signals under study. We show how the coefficient ρq works in practical situations by applying it to stochastic time series representing processes with long memory: autoregressive and multiplicative ones. Such processes are often used to model signals recorded from complex systems and complex physical phenomena like turbulence, so we are convinced that this new measure can successfully be applied in time-series analysis. In particular, we present an example of such application to highly complex empirical data from financial markets. The present formulation can straightforwardly be extended to multivariate data in terms of the q -dependent counterpart of the correlation matrices and then to the network representation.
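    The underlying detrended fluctuation function F(s), on which both ρDCCA and the proposed ρq build, can be sketched as follows (DFA of a single series; the cross-correlation variants replace the squared residuals with products of residuals from two series):

```python
import numpy as np

def dfa_fluctuation(x, scales):
    """DFA fluctuation function F(s): RMS residual around linear fits to the
    integrated series in non-overlapping windows of size s."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))
    F = []
    for s in scales:
        n = len(y) // s
        t = np.arange(s)
        res = [seg - np.polyval(np.polyfit(t, seg, 1), t)
               for seg in y[:n * s].reshape(n, s)]
        F.append(np.sqrt(np.mean(np.concatenate(res) ** 2)))
    return np.array(F)

rng = np.random.default_rng(6)
x = rng.normal(size=8192)                      # white noise: F(s) ~ s**0.5
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa_fluctuation(x, scales)), 1)[0]
```

For uncorrelated noise the scaling exponent is close to 0.5; long-memory processes such as the autoregressive and multiplicative models mentioned above shift it away from that value.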

  15. Determining a Prony Series for a Viscoelastic Material From Time Varying Strain Data

    NASA Technical Reports Server (NTRS)

    Tzikang, Chen

    2000-01-01

    In this study, a method of determining the coefficients of a Prony series representation of a viscoelastic modulus from rate-dependent data is presented. Load-versus-time test data for a sequence of different rate loading segments is least-squares fitted to a Prony series hereditary integral model of the material tested. A nonlinear least-squares regression algorithm is employed. The measured data include ramp loading, relaxation, and unloading stress-strain data. The resulting Prony series, which captures strain-rate loading and unloading effects, produces an excellent fit to the complex loading sequence.
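    If the relaxation times are fixed in advance, the Prony coefficients reduce to a linear least-squares problem (the paper fits a nonlinear regression to mixed ramp/relaxation/unloading data; this sketch fits a plain relaxation curve under that simplifying assumption):

```python
import numpy as np

def fit_prony(t, modulus, taus):
    """Linear least-squares fit of E(t) = E_inf + sum_i E_i exp(-t/tau_i)
    with the relaxation times tau_i fixed in advance."""
    A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
    coef, *_ = np.linalg.lstsq(A, modulus, rcond=None)
    return coef                                  # [E_inf, E_1, E_2, ...]

t = np.linspace(0, 50, 200)
modulus = 2.0 + 1.5 * np.exp(-t / 1.0) + 0.5 * np.exp(-t / 10.0)
coef = fit_prony(t, modulus, taus=[1.0, 10.0])
```

On noiseless synthetic data the fit recovers the generating coefficients exactly, which makes it a useful sanity check before attacking the full nonlinear problem.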

  16. The Effects of Population Size Histories on Estimates of Selection Coefficients from Time-Series Genetic Data.

    PubMed

    Jewett, Ethan M; Steinrücken, Matthias; Song, Yun S

    2016-11-01

    Many approaches have been developed for inferring selection coefficients from time series data while accounting for genetic drift. These approaches have been motivated by the intuition that properly accounting for the population size history can significantly improve estimates of selective strengths. However, the improvement in inference accuracy that can be attained by modeling drift has not been characterized. Here, by comparing maximum likelihood estimates of selection coefficients that account for the true population size history with estimates that ignore drift by assuming allele frequencies evolve deterministically in a population of infinite size, we address the following questions: how much can modeling the population size history improve estimates of selection coefficients? How much can mis-inferred population sizes hurt inferences of selection coefficients? We conduct our analysis under the discrete Wright-Fisher model by deriving the exact probability of an allele frequency trajectory in a population of time-varying size and we replicate our results under the diffusion model. For both models, we find that ignoring drift leads to estimates of selection coefficients that are nearly as accurate as estimates that account for the true population history, even when population sizes are small and drift is high. This result is of interest because inference methods that ignore drift are widely used in evolutionary studies and can be many orders of magnitude faster than methods that account for population sizes. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Although using standard Taylor series coefficients for finite-difference operators is optimal in the sense that in the limit of infinitesimal space and time discretization, the solution approaches the correct analytic solution to the acousto-dynamic system of differential equations, other finite-difference operators may provide optimal computational run time given certain error bounds or source bandwidth constraints. This report describes the results of investigation of alternative optimal finite-difference coefficients based on several optimization/accuracy scenarios and provides recommendations for minimizing run time while retaining error within given error bounds.
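    The standard Taylor-series weights referred to here come from solving a small Vandermonde system; the optimized operators in the report then modify these weights subject to error or bandwidth constraints:

```python
import math
import numpy as np

def fd_coefficients(offsets, order):
    """Taylor-series finite-difference weights for the given derivative
    order on a stencil of grid offsets (solve a small Vandermonde system)."""
    offsets = np.asarray(offsets, float)
    n = len(offsets)
    A = np.vander(offsets, n, increasing=True).T   # A[k, j] = offsets[j]**k
    b = np.zeros(n)
    b[order] = math.factorial(order)
    return np.linalg.solve(A, b)

second = fd_coefficients([-1, 0, 1], order=2)   # classic [1, -2, 1] stencil
first = fd_coefficients([-1, 0, 1], order=1)    # central first difference
```

Widening the stencil raises the formal order of accuracy but also the cost per grid point, which is exactly the run-time/accuracy trade-off the report investigates.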

  18. iVAR: a program for imputing missing data in multivariate time series using vector autoregressive models.

    PubMed

    Liu, Siwei; Molenaar, Peter C M

    2014-12-01

    This article introduces iVAR, an R program for imputing missing data in multivariate time series on the basis of vector autoregressive (VAR) models. We conducted a simulation study to compare iVAR with three methods for handling missing data: listwise deletion, imputation with sample means and variances, and multiple imputation ignoring time dependency. The results showed that iVAR produces better estimates for the cross-lagged coefficients than do the other three methods. We demonstrate the use of iVAR with an empirical example of time series electrodermal activity data and discuss the advantages and limitations of the program.
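    A simplified stand-in for iVAR's idea (not its actual R interface): fit a VAR(1) model by least squares and replace missing entries with one-step predictions from the previous time point.

```python
import numpy as np

def fit_var1(X):
    """Least-squares VAR(1): X[t] ~ A @ X[t-1]; rows of X are time points."""
    B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
    return B.T

def impute_var1(X, A):
    """Replace NaNs with one-step VAR predictions (assumes the previous
    row is observed; iVAR handles the general case)."""
    X = X.copy()
    for t in range(1, len(X)):
        miss = np.isnan(X[t])
        if miss.any():
            X[t, miss] = (A @ X[t - 1])[miss]
    return X

rng = np.random.default_rng(7)
A_true = np.array([[0.5, 0.2], [0.1, 0.6]])
X = np.zeros((300, 2))
for t in range(1, 300):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.normal(size=2)

A_hat = fit_var1(X)
X_missing = X.copy()
X_missing[100, 0] = np.nan
X_imputed = impute_var1(X_missing, A_hat)
```

Because the prediction uses the cross-lagged structure of all channels, it preserves time dependency that mean imputation or listwise deletion would destroy, which is the comparison the article reports.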

  19. A Langevin equation for the rates of currency exchange based on the Markov analysis

    NASA Astrophysics Data System (ADS)

    Farahpour, F.; Eskandari, Z.; Bahraminasab, A.; Jafari, G. R.; Ghasemi, F.; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2007-11-01

    We propose a method for analyzing the data for the rates of exchange of various currencies versus the U.S. dollar. The method analyzes the return time series of the data as a Markov process and develops an effective equation which reconstructs it. We find that the Markov time scale, i.e., the time scale over which the data are Markov-correlated, is one day for the majority of the daily exchange rates that we analyze. We derive an effective Langevin equation to describe the fluctuations in the rates. The equation contains two quantities, D(1) and D(2), representing the drift and diffusion coefficients, respectively. We demonstrate how the two coefficients are estimated directly from the data, without using any assumptions or models for the underlying stochastic time series that represent the daily rates of exchange of various currencies versus the U.S. dollar.
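    The drift and diffusion coefficients D(1) and D(2) can be estimated directly from conditional moments of the increments, as the abstract describes; an Ornstein-Uhlenbeck process makes a convenient check because its drift is -x and its diffusion is constant:

```python
import numpy as np

def drift_diffusion(x, dt, nbins=20, min_count=50):
    """Estimate D1(x) = <dx|x>/dt and D2(x) = <dx^2|x>/(2 dt) by binning."""
    x = np.asarray(x, float)
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, nbins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    D1 = np.full(nbins, np.nan)
    D2 = np.full(nbins, np.nan)
    for b in range(nbins):
        m = idx == b
        if m.sum() >= min_count:
            D1[b] = dx[m].mean() / dt
            D2[b] = (dx[m] ** 2).mean() / (2 * dt)
    return centers, D1, D2

# Ornstein-Uhlenbeck check: dx = -x dt + sqrt(2) dW, so D1(x) = -x, D2 = 1
rng = np.random.default_rng(8)
dt, n = 0.01, 200_000
noise = np.sqrt(2 * dt) * rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t - 1] * (1 - dt) + noise[t]

centers, D1, D2 = drift_diffusion(x, dt)
b = np.argmin(np.abs(centers - 1.0))    # inspect the bin near x = 1
```

Applied to returns instead of a simulated process, the same binned moments yield the empirical drift and diffusion functions of an exchange-rate series.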

  20. A comparative analysis of spectral exponent estimation techniques for 1/fβ processes with applications to the analysis of stride interval time series

    PubMed Central

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

    Background The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales by a power law in the frequency spectrum S(f) = 1/fβ. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509

  1. A comparative analysis of spectral exponent estimation techniques for 1/f(β) processes with applications to the analysis of stride interval time series.

    PubMed

    Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2014-01-30

    The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales by a power law in the frequency spectrum S(f) = 1/f(β). The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
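    A wavelet-variance estimator in the same family as the averaged wavelet coefficient method (an Abry-Veitch-style sketch, not the paper's exact method): for a 1/f^β signal, the log2-variance of Haar detail coefficients grows linearly in the decomposition level with slope β.

```python
import numpy as np

def synth_one_over_f(n, beta, rng):
    """1/f^beta noise via spectral synthesis with random phases."""
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-beta / 2)
    phases = np.exp(2j * np.pi * rng.random(f.size))
    return np.fft.irfft(amp * phases, n)

def wavelet_beta(x, levels):
    """Slope of log2(variance of Haar detail coefficients) vs. level ~ beta."""
    a = np.asarray(x, float)
    logvar = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation coefficients
        logvar.append(np.log2(np.mean(d ** 2)))
    return np.polyfit(np.arange(1, levels + 1), logvar, 1)[0]

rng = np.random.default_rng(9)
x = synth_one_over_f(2 ** 16, beta=0.8, rng=rng)
beta_hat = wavelet_beta(x, levels=8)
```

Because the estimate pools many coefficients per level, its variance is typically lower than a raw periodogram slope, which is in the spirit of the paper's finding that wavelet-based estimators are the most consistent.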

  2. The geometry of chaotic dynamics — a complex network perspective

    NASA Astrophysics Data System (ADS)

    Donner, R. V.; Heitzig, J.; Donges, J. F.; Zou, Y.; Marwan, N.; Kurths, J.

    2011-12-01

    Recently, several complex network approaches to time series analysis have been developed and applied to study a wide range of model systems as well as real-world data, e.g., geophysical or financial time series. Among these techniques, recurrence-based concepts, and prominently ɛ-recurrence networks, most faithfully represent the geometrical fine structure of the attractors underlying chaotic (and, less interestingly, non-chaotic) time series. In this paper we demonstrate that the well-known graph-theoretical properties of local clustering coefficient and global (network) transitivity can meaningfully be exploited to define two new local and two new global measures of dimension in phase space: local upper and lower clustering dimension, as well as global upper and lower transitivity dimension. Rigorous analytical as well as numerical results for self-similar sets and simple chaotic model systems suggest that these measures are well-behaved in most non-pathological situations and that they can be estimated reasonably well using ɛ-recurrence networks constructed from relatively short time series. Moreover, we study the relationship between clustering and transitivity dimensions on the one hand, and traditional measures like pointwise dimension or local Lyapunov dimension on the other. We also provide further evidence that the local clustering coefficients, or equivalently the local clustering dimensions, are useful for identifying unstable periodic orbits and other dynamically invariant objects from time series. Our results demonstrate that ɛ-recurrence networks exhibit an important link between dynamical systems and graph theory.
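    A minimal ε-recurrence network and its global transitivity can be computed as follows; the embedding parameters for the sine example are illustrative choices:

```python
import numpy as np

def recurrence_network(series, dim, delay, eps):
    """Adjacency matrix of an epsilon-recurrence network on time-delay
    embedded state vectors."""
    n = len(series) - (dim - 1) * delay
    emb = np.column_stack([series[i * delay:i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    A = (dists < eps).astype(int)
    np.fill_diagonal(A, 0)
    return A

def transitivity(A):
    """Global transitivity = (3 x triangles) / connected triples."""
    deg = A.sum(axis=1)
    closed = np.trace(A @ A @ A)        # each triangle counted 6 times
    triples = np.sum(deg * (deg - 1))   # each connected triple counted twice
    return closed / triples if triples else 0.0

t = np.linspace(0, 20 * np.pi, 400)
A = recurrence_network(np.sin(t), dim=2, delay=8, eps=0.3)
T = transitivity(A)
```

For an effectively one-dimensional attractor like this closed curve, transitivity is high, which is the geometric regularity the transitivity-dimension measures exploit.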

  3. Coastline detection with time series of SAR images

    NASA Astrophysics Data System (ADS)

    Ao, Dongyang; Dumitru, Octavian; Schwarz, Gottfried; Datcu, Mihai

    2017-10-01

    For maritime remote sensing, coastline detection is a vital task. With continuous coastline detection results from satellite image time series, the actual shoreline, the sea level, and environmental parameters can be observed to support coastal management and disaster warning. Established coastline detection methods are often based on SAR images and well-known image processing approaches. These methods involve a great deal of complicated data processing, which is a big challenge for remote sensing time series. Additionally, a number of SAR satellites operating with polarimetric capabilities have been launched in recent years, and many investigations of target characteristics in radar polarization have been performed. In this paper, a fast and efficient coastline detection method is proposed which comprises three steps. First, we calculate a modified correlation coefficient of two SAR images of different polarization. This coefficient differs from the traditional computation, where normalization is needed. Through this modified approach, the separation between sea and land becomes more prominent. Second, we set a histogram-based threshold to distinguish between sea and land within the given image. The histogram is derived from the statistical distribution of the polarized SAR image pixel amplitudes. Third, we extract continuous coastlines using a Canny image edge detector that is rather immune to speckle noise. Finally, the individual coastlines derived from time series of SAR images can be checked for changes.

  4. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in climate science. Various methods are used to estimate confidence intervals to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique which essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. Pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with the performance of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20.
One form of climate time series is output from numerical models which simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables at a 10 year lag, which is roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
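A pairwise moving block bootstrap of the kind used in PearsonT/PearsonT3 can be sketched as follows (a minimal percentile-interval version without the Student's t standardization or the second calibration loop; function and parameter names are illustrative):

```python
import numpy as np

def pairwise_block_bootstrap_ci(x, y, block_len=5, n_boot=2000,
                                alpha=0.05, seed=0):
    """Percentile bootstrap CI for Pearson's r. Both series are resampled
    with the SAME block indices (pairwise moving block bootstrap), which
    preserves the serial correlation within each series."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    max_start = n - block_len  # last valid block start
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, max_start + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len)
                              for s in starts])[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(r_boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

The block length would in practice be chosen from the estimated persistence times of the two series, as PearsonT does.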

  5. Optimal estimation of diffusion coefficients from single-particle trajectories

    NASA Astrophysics Data System (ADS)

    Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik

    2014-02-01

    How does one optimally determine the diffusion coefficient of a diffusing particle from a single-time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given some long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of bias caused by substrate fluctuations in CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycolase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
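The core of the CVE is two displacement moments; a minimal sketch for a one-dimensional trajectory, omitting the motion-blur parameter and noise-variance output of the full published estimator, might read:

```python
import numpy as np

def cve_diffusion(x, dt):
    """Covariance-based estimate of the diffusion coefficient from a single
    1-D trajectory x sampled at interval dt. The first term is the naive
    mean-squared-displacement estimate; the covariance of consecutive
    displacements in the second term corrects for localization noise,
    which anti-correlates successive measured displacements."""
    dx = np.diff(x)
    return np.mean(dx**2) / (2.0 * dt) + np.mean(dx[:-1] * dx[1:]) / dt
```

For pure diffusion without noise the second term averages to zero and the estimator reduces to the usual displacement-variance formula.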

  6. Complex network approach to fractional time series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manshour, Pouya

In order to extract the correlation information inherited in stochastic time series, the visibility graph algorithm has recently been proposed, by which a time series can be mapped onto a complex network. We demonstrate that the visibility algorithm is not an appropriate one to study the correlation aspects of a time series. We then employ the horizontal visibility algorithm, as a much simpler one, to map fractional processes onto complex networks. The degree distributions are shown to have parabolic exponential forms with a Hurst-dependent fitting parameter. Further, we take into account other topological properties, such as the maximum eigenvalue of the adjacency matrix and the degree assortativity, and show that such topological quantities can also be used to predict the Hurst exponent, with an exception for anti-persistent fractional Gaussian noises. To solve this problem, we take into account the Spearman correlation coefficient between nodes' degrees and their corresponding data values in the original time series.
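The horizontal visibility mapping itself is simple to state; a naive sketch that builds the graph from the definition and returns node degrees (the input to the degree-distribution analysis) is:

```python
import numpy as np

def horizontal_visibility_degrees(x):
    """Map a time series onto its horizontal visibility graph and return
    the degree of each node. Nodes i < j are linked iff every intermediate
    value lies strictly below min(x[i], x[j]). Naive version for clarity;
    linear-time algorithms exist."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg
```

For a monotone run such as [1, 2, 3] only adjacent points see each other, while a dip such as [3, 1, 2] lets the two outer points connect over the valley.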

  7. Multidimensional stock network analysis: An Escoufier's RV coefficient approach

    NASA Astrophysics Data System (ADS)

    Lee, Gan Siew; Djauhari, Maman A.

    2013-09-01

The current practice of stock network analysis is based on the assumption that the time series of closing stock prices can represent the behaviour of each stock. This assumption leads to the minimal spanning tree (MST) and sub-dominant ultrametric (SDU) being considered indispensable tools to filter the economic information contained in the network. Recently, researchers have attempted to represent a stock not only as a univariate time series of closing prices but as a bivariate time series of closing price and volume. In this case, they developed the so-called multidimensional MST to filter the important economic information. However, in this paper, we show that their approach is applicable only to that bivariate case. This leads us to introduce a new methodology for constructing the MST where each stock is represented by a multivariate time series. An example from the Malaysian stock exchange is presented and discussed to illustrate the advantages of the method.
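Escoufier's RV coefficient, which the title refers to, generalizes the squared correlation to pairs of multivariate series; a minimal sketch is:

```python
import numpy as np

def rv_coefficient(X, Y):
    """Escoufier's RV coefficient between two multivariate series
    (rows = time points, columns = variables). A matrix analogue of the
    squared correlation: the cosine between the centered configuration
    matrices X X' and Y Y', so the value always lies in [0, 1]."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sx = Xc @ Xc.T
    Sy = Yc @ Yc.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))
```

RV(X, X) = 1, and 1 - RV can serve as the distance fed into the MST construction when each stock is a multivariate series.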

  8. Analysis and generation of groundwater concentration time series

    NASA Astrophysics Data System (ADS)

    Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae

    2018-01-01

    Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude modulated autoregressive noise of order one with time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction by exchange with the mean mixing model is a special case consisting of a linear regression with constant coefficients.
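The generation scheme is described only verbally above; one plausible toy reading (a deterministic trend plus an amplitude-modulated AR(1) fluctuation with a time-varying parameter; the paper's linear regression terms are not reproduced, and all names are assumptions) is:

```python
import numpy as np

def generate_concentration_like(trend, amp, phi, seed=0):
    """Toy generator: trend(t) + amp * z(t), where z is an AR(1) process
    with time-varying autoregressive parameter phi(t). An assumed
    simplification of the algorithm described in the abstract."""
    n = len(trend)
    rng = np.random.default_rng(seed)
    z = np.zeros(n)
    for t in range(1, n):
        z[t] = phi[t] * z[t - 1] + rng.standard_normal()
    return trend + amp * z
```

With a constant phi this reduces to the classical interaction-by-exchange-with-the-mean mixing model mentioned at the end of the abstract.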

  9. On power series representing solutions of the one-dimensional time-independent Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Trotsenko, N. P.

    2017-06-01

    For the equation χ″( x) = u( x)χ( x) with infinitely smooth u( x), the general solution χ( x) is found in the form of a power series. The coefficients of the series are expressed via all derivatives u ( m)( y) of the function u( x) at a fixed point y. Examples of solutions for particular functions u( x) are considered.

  10. On the modular structure of the genus-one Type II superstring low energy expansion

    NASA Astrophysics Data System (ADS)

    D'Hoker, Eric; Green, Michael B.; Vanhove, Pierre

    2015-08-01

    The analytic contribution to the low energy expansion of Type II string amplitudes at genus-one is a power series in space-time derivatives with coefficients that are determined by integrals of modular functions over the complex structure modulus of the world-sheet torus. These modular functions are associated with world-sheet vacuum Feynman diagrams and given by multiple sums over the discrete momenta on the torus. In this paper we exhibit exact differential and algebraic relations for a certain infinite class of such modular functions by showing that they satisfy Laplace eigenvalue equations with inhomogeneous terms that are polynomial in non-holomorphic Eisenstein series. Furthermore, we argue that the set of modular functions that contribute to the coefficients of interactions up to order are linear sums of functions in this class and quadratic polynomials in Eisenstein series and odd Riemann zeta values. Integration over the complex structure results in coefficients of the low energy expansion that are rational numbers multiplying monomials in odd Riemann zeta values.

  11. A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.

    PubMed

    Tian, Siyu; Huang, Xiaoxia; Li, Hongga

    2017-03-15

Since Lagrangian model coefficients vary under different conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper proposes a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time series of images form a detected trajectory of a specific oil slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluations. These two parameters are used to evaluate the performance of the Lagrangian transport model with different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with related satellite observations. It is suggested that the new method is effective for calibrating Lagrangian models. Copyright © 2016 Elsevier Ltd. All rights reserved.
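The two evaluations can be made concrete with a hypothetical sketch (the paper's exact SDE construction may differ; here MCPD is the distance between the mean centers of two 2-D point sets and RD the difference of the major-axis angles of their sample covariances):

```python
import numpy as np

def mcpd_and_rd(p_sim, p_obs):
    """Mean center position distance (MCPD) between two 2-D point sets,
    and rotation difference (RD) between their standard deviational
    ellipses, taken here as the major-axis angles of the covariances."""
    mcpd = np.linalg.norm(p_sim.mean(axis=0) - p_obs.mean(axis=0))

    def major_axis_angle(p):
        cov = np.cov((p - p.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)
        major = v[:, np.argmax(w)]
        if major[1] < 0:  # fix the sign so the angle lies in [0, pi)
            major = -major
        return np.arctan2(major[1], major[0])

    rd = major_axis_angle(p_sim) - major_axis_angle(p_obs)
    return mcpd, rd
```

Calibration then amounts to searching the coefficient space for the combination minimizing both quantities against the ASAR-detected slicks.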

  12. Independent Research and Independent Exploratory Development Programs: FY92 Annual Report

    DTIC Science & Technology

    1993-04-01

transform of an ERP provides a record of ERP energy at different times and scales. It does this by producing a set of filtered time series at different...that the coefficients at any level are a series that measures energy within the bandwidth of that level as a function of time. For this reason it is...1 to 25 Hz, and decimated to a final sampling rate of 50 Hz. The prestimulus baseline (200 ms) was adjusted to zero to remove any DC offset

  13. An asymptotic theory for cross-correlation between auto-correlated sequences and its application on neuroimaging data.

    PubMed

    Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng

    2018-04-20

Functional connectivity is among the most important tools to study the brain. The correlation coefficient between time series of different brain areas is the most popular method to quantify functional connectivity. In practical use, the correlation coefficient assumes the data to be temporally independent; however, brain time series can manifest significant temporal auto-correlation. A widely applicable method is proposed for correcting temporal auto-correlation. We considered two types of time series models: (1) the auto-regressive-moving-average model, and (2) the nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are the most commonly used in neuroscience studies. We show that the respective asymptotic distributions share a unified expression. We verified the validity of our method and showed that it exhibits sufficient statistical power for detecting true correlations in numerical experiments. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations in numerical experiments where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
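A classical Bartlett-type result in the same spirit (not the paper's exact asymptotic expression) says the variance of the sample correlation between two independent auto-correlated series is inflated by the lagged autocorrelation products; a sketch:

```python
import numpy as np

def corr_variance_bartlett(x, y, max_lag=50):
    """Bartlett-type asymptotic variance of the sample correlation r
    between two independent but auto-correlated series:
        var(r) ~ (1/N) * sum_{k=-K..K} rho_x(k) * rho_y(k).
    For white noise this reduces to the familiar 1/N."""
    n = len(x)
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()

    def acf(z, k):
        return 1.0 if k == 0 else float(np.mean(z[:n - k] * z[k:]))

    total = 1.0 + 2.0 * sum(acf(xs, k) * acf(ys, k)
                            for k in range(1, max_lag + 1))
    return total / n
```

Testing r against a normal with this inflated variance, rather than against the i.i.d. null, is what restores type I error control for autocorrelated signals.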

  14. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  15. Combined temperature and density series for fluid-phase properties. I. Square-well spheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, J. Richard; Schultz, Andrew J.; Kofke, David A.

Cluster integrals are evaluated for the coefficients of the combined temperature- and density-expansion of pressure: Z = 1 + B_2(β)η + B_3(β)η^2 + B_4(β)η^3 + ⋯, where Z is the compressibility factor, η is the packing fraction, and the B_i(β) coefficients are expanded as a power series in reciprocal temperature, β, about β = 0. The methodology is demonstrated for square-well spheres with λ = [1.2-2.0], where λ is the well diameter relative to the hard core. For this model, the B_i coefficients can be expressed in closed form as a function of β, and we develop appropriate expressions for i = 2-6; these expressions facilitate derivation of the coefficients of the β series. Expanding the B_i coefficients in β provides a correspondence between the power series in density (typically called the virial series) and the power series in β (typically called thermodynamic perturbation theory, TPT). The coefficients of the β series result in expressions for the Helmholtz energy that can be compared to recent computations of TPT coefficients to fourth order in β. These comparisons show good agreement at first order in β, suggesting that the virial series converges for this term. Discrepancies for higher-order terms suggest that convergence of the density series depends on the order in β. With selection of an appropriate approximant, the treatment of Helmholtz energy that is second order in β appears to be stable and convergent at least to the critical density, but higher-order coefficients are needed to determine how far this behavior extends into the liquid.
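For illustration, the lowest of these closed-form coefficients, B_2(β), has a standard textbook expression for square-well spheres (written here in packing-fraction units so that Z = 1 + B_2 η + ⋯; this is the well-known result, not an expression taken from the paper):

```python
import numpy as np

def b2_square_well(beta, lam, eps=1.0):
    """Second virial coefficient of square-well spheres in packing-fraction
    units: B2 = 4 * [1 - (lam^3 - 1) * (exp(beta*eps) - 1)].
    At beta = 0 it reduces to the hard-sphere value B2 = 4; attraction
    (beta > 0) lowers it."""
    return 4.0 * (1.0 - (lam**3 - 1.0) * np.expm1(beta * eps))
```

Expanding exp(βε) - 1 in powers of β is exactly the step that converts this density-series coefficient into the corresponding TPT (β-series) contributions.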

  16. Multiscale limited penetrable horizontal visibility graph for analyzing nonlinear time series

    NASA Astrophysics Data System (ADS)

    Gao, Zhong-Ke; Cai, Qing; Yang, Yu-Xuan; Dang, Wei-Dong; Zhang, Shan-Shan

    2016-10-01

The visibility graph has established itself as a powerful tool for analyzing time series. In this paper, we develop a novel multiscale limited penetrable horizontal visibility graph (MLPHVG). We use nonlinear time series from two typical complex systems, i.e., EEG signals and two-phase flow signals, to demonstrate the effectiveness of our method. Combining MLPHVG and support vector machine, we detect epileptic seizures from the EEG signals recorded from healthy subjects and epilepsy patients, and the classification accuracy is 100%. In addition, we derive MLPHVGs from oil-water two-phase flow signals and find that the average clustering coefficient at different scales allows faithfully identifying and characterizing three typical oil-water flow patterns. These findings render our MLPHVG method particularly useful for analyzing nonlinear time series from the perspective of multiscale network analysis.

  17. Using self-organizing maps to infill missing data in hydro-meteorological time series from the Logone catchment, Lake Chad basin.

    PubMed

    Nkiaka, E; Nawaz, N R; Lovett, J C

    2016-07-01

Hydro-meteorological data are an important asset that can enhance management of water resources. But existing data often contain gaps, leading to uncertainties and compromising their use. Although many methods exist for infilling data gaps in hydro-meteorological time series, many of them require inputs from neighbouring stations, which are often not available, while other methods are computationally demanding. Computing techniques such as artificial intelligence can be used to address this challenge. Self-organizing maps (SOMs), a type of artificial neural network, were used for infilling gaps in hydro-meteorological time series from a Sudano-Sahel catchment. The coefficients of determination obtained were above 0.75 and 0.65, and the average topographic errors were 0.008 and 0.02, for the rainfall and river discharge time series, respectively. These results indicate that SOMs are a robust and efficient method for infilling missing gaps in hydro-meteorological time series.

  18. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.

  19. Inhomogeneous scaling behaviors in Malaysian foreign currency exchange rates

    NASA Astrophysics Data System (ADS)

    Muniandy, S. V.; Lim, S. C.; Murugan, R.

    2001-12-01

In this paper, we investigate the fractal scaling behaviors of foreign currency exchange rates with respect to the Malaysian currency, the Ringgit Malaysia. These time series are examined piecewise, before and after the currency control imposed on 1 September 1998, using the monofractal model based on fractional Brownian motion. The global Hurst exponents are determined using R/S analysis, detrended fluctuation analysis and the method of second moments using the correlation coefficients. The limitations of these monofractal analyses are discussed. The usual multifractal analysis reveals that there exists a wide range of Hurst exponents in each of the time series. A new method of modelling the multifractal time series based on multifractional Brownian motion with time-varying Hurst exponents is studied.
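The R/S analysis mentioned above reduces to a log-log regression of the rescaled range against window size; a compact sketch of the global Hurst exponent estimate is:

```python
import numpy as np

def rs_hurst(x, min_chunk=8):
    """Global Hurst exponent via classical rescaled-range (R/S) analysis:
    fit the slope of log mean(R/S) against log window size over dyadic
    window sizes."""
    n = len(x)
    sizes, mean_rs = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            y = np.cumsum(w - w.mean())  # cumulative deviations
            r = y.max() - y.min()        # range
            s = w.std()                  # scale
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        mean_rs.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(mean_rs), 1)
    return slope
```

Note that the plain R/S estimator is biased high for short windows (the Anis-Lloyd correction addresses this), which is one of the limitations of monofractal analyses the abstract alludes to.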

  20. Research on maximum level noise contaminated of remote reference magnetotelluric measurements using synthesized data

    NASA Astrophysics Data System (ADS)

    Gang, Zhang; Fansong, Meng; Jianzhong, Wang; Mingtao, Ding

    2018-02-01

Determining the magnetotelluric impedance precisely and accurately is fundamental to valid inversion and geological interpretation. This study aims to determine the minimum value of the signal-to-noise ratio (SNR) that maintains the effectiveness of the remote reference technique. A standard time series is simulated, different Gaussian noises are added to obtain time series with different SNRs, and the intermediate data, such as polarization direction, correlation coefficient, and impedance tensor, are analyzed. The results show that when the SNR is larger than 23.5743, the polarization directions are no longer morphologically disordered and a smooth and accurate sounding curve can be obtained. Under this condition, the correlation coefficient of nearly all complete segments between the base and remote stations is larger than 0.9, and the impedance tensor Zxy presents only one aggregation, which matches the characteristics of the natural magnetotelluric signal.

  1. Decomposing Time Series Data by a Non-negative Matrix Factorization Algorithm with Temporally Constrained Coefficients

    PubMed Central

    Cheung, Vincent C. K.; Devarajan, Karthik; Severini, Giacomo; Turolla, Andrea; Bonato, Paolo

    2017-01-01

The non-negative matrix factorization algorithm (NMF) decomposes a data matrix into a set of non-negative basis vectors, each scaled by a coefficient. In its original formulation, the NMF assumes the data samples and dimensions to be independently distributed, making it a less-than-ideal algorithm for the analysis of time series data with temporal correlations. Here, we seek to derive an NMF that accounts for temporal dependencies in the data by explicitly incorporating a very simple temporal constraint for the coefficients into the NMF update rules. We applied the modified algorithm to two multi-dimensional electromyographic data sets collected from the human upper limb to identify muscle synergies. We found that because it reduced the number of free parameters in the model, our modified NMF made it possible to use the Akaike Information Criterion to objectively identify a model order (i.e., the number of muscle synergies composing the data) that is more functionally interpretable, and closer to the numbers previously determined using ad hoc measures. PMID:26737046
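The baseline update rules that the modified algorithm builds on are the standard Lee-Seung multiplicative updates; a minimal sketch without the paper's temporal constraint on the coefficients:

```python
import numpy as np

def nmf(V, k, n_iter=200, seed=0):
    """Standard multiplicative-update NMF minimizing the Frobenius error
    ||V - W H||. The paper's variant additionally constrains the rows of
    H (the time-varying coefficients) to be temporally smooth, which is
    not reproduced here."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)  # update basis vectors
    return W, H
```

Because the updates are multiplicative, non-negativity of W and H is preserved at every step, which is what makes the factors interpretable as muscle synergies and activation coefficients.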

  2. Frequency domain system identification of helicopter rotor dynamics incorporating models with time periodic coefficients

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan

    1997-08-01

    One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response characteristics of such a linear time periodic system exhibits sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because the linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions of the identified harmonic transfer function were then formulated using the spectral density functions both with and without additive noise processes at input and/or output. A procedure was developed to identify parameters of a model to match the frequency response characteristics between measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link. The linear time periodic technique results were compared with the linear time invariant technique results. 
Also, the effects of noise processes and of the initial parameter guess on the identification procedure were investigated. To study the effect of elastic modes, a rigid blade with a trailing edge flap excited by a smart actuator was selected and the system parameters were successfully identified, but at some expense of computational storage and time. In conclusion, the linear time periodic technique substantially improved the identified parameter accuracy compared to the linear time invariant technique, and it was robust to noise and to the initial parameter guess. However, an elastic mode with a frequency high relative to the system pumping frequency tends to increase the computer storage requirement and computing time.

  3. Unraveling spurious properties of interaction networks with tailored random networks.

    PubMed

    Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus

    2011-01-01

    We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdös-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures--known for their complex spatial and temporal dynamics--we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis.

  4. Unraveling Spurious Properties of Interaction Networks with Tailored Random Networks

    PubMed Central

    Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus

    2011-01-01

    We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdös-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures – known for their complex spatial and temporal dynamics – we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis. PMID:21850239

  5. Using Derivative Estimates to Describe Intraindividual Variability at Multiple Time Scales

    ERIC Educational Resources Information Center

    Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.

    2009-01-01

    The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of…

  6. On new classes of solutions of nonlinear partial differential equations in the form of convergent special series

    NASA Astrophysics Data System (ADS)

    Filimonov, M. Yu.

    2017-12-01

    The method of special series with recursively calculated coefficients is used to solve nonlinear partial differential equations. The recurrence of finding the coefficients of the series is achieved due to a special choice of functions, in powers of which the solution is expanded in a series. We obtain a sequence of linear partial differential equations to find the coefficients of the series constructed. In many cases, one can deal with a sequence of linear ordinary differential equations. We construct classes of solutions in the form of convergent series for a certain class of nonlinear evolution equations. A new class of solutions of generalized Boussinesque equation with an arbitrary function in the form of a convergent series is constructed.

  7. 75 FR 3647 - Federal Agricultural Mortgage Corporation Funding and Fiscal Affairs; Risk-Based Capital...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-22

    ... secured borrowers within each year), the coefficients of variation of the time series of annual default... the method you use, please do not submit your comment multiple times via different methods. You may... component to directly recognize the credit risk on such loans.\\4\\ At the time of the Farm Bill's enactment...

  8. Forecasting and analyzing high O3 time series in educational area through an improved chaotic approach

    NASA Astrophysics Data System (ADS)

    Hamid, Nor Zila Abd; Adenan, Nur Hamiza; Noorani, Mohd Salmi Md

    2017-08-01

Forecasting and analyzing the ozone (O3) concentration time series is important because the pollutant is harmful to health. This study is a pilot study for forecasting and analyzing the O3 time series in a Malaysian educational area, namely Shah Alam, using a chaotic approach. In this approach, the observed hourly scalar time series is reconstructed into a multi-dimensional phase space, which is then used to forecast the future time series through the local linear approximation method. The main purpose is to forecast high O3 concentrations. The original method performed poorly, but the improved method addressed this weakness, enabling the high concentrations to be successfully forecast. The correlation coefficient between the observed and forecasted time series obtained with the improved method is 0.9159, and both the mean absolute error and the root mean squared error are low. Thus, the improved method is advantageous. Time series analysis by means of the phase space plot and the Cao method identified the presence of low-dimensional chaotic dynamics in the observed O3 time series. Results showed that at least seven factors affect the studied O3 time series, which is consistent with the factors identified by the diurnal variations investigation and the sensitivity analyses of past studies. In conclusion, the chaotic approach successfully forecasts and analyzes the O3 time series in the educational area of Shah Alam. These findings are expected to help stakeholders such as the Ministry of Education and the Department of Environment achieve better air pollution management.
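The phase-space reconstruction and local forecasting steps can be sketched as follows (a zeroth-order local approximation, i.e. averaging the successors of nearby states, as a simpler stand-in for the paper's local linear method; the embedding parameters are illustrative):

```python
import numpy as np

def embed(x, dim, tau):
    """Time-delay (phase space) reconstruction of a scalar series:
    row t is the state (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def local_mean_forecast(x, dim=3, tau=1, k=5):
    """One-step forecast by zeroth-order local approximation: average the
    successors of the k states nearest to the current state."""
    E = embed(x, dim, tau)
    current, past = E[-1], E[:-1]
    d = np.linalg.norm(past - current, axis=1)
    nn = np.argsort(d)[:k]
    # the successor of the state in row i is x[i + (dim - 1) * tau + 1]
    return np.mean([x[i + (dim - 1) * tau + 1] for i in nn])
```

A local *linear* variant would fit a linear map from the neighbours to their successors instead of taking the plain average, which is what improves performance on the high-concentration excursions.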

  9. Analysis of the ST-T complex of the electrocardiogram using the Karhunen-Loeve transform: adaptive monitoring and alternans detection

    NASA Technical Reports Server (NTRS)

    Laguna, P.; Moody, G. B.; Garcia, J.; Goldberger, A. L.; Mark, R. G.

    1999-01-01

    The Karhunen-Loeve transform (KLT) is applied to study the ventricular repolarisation period as reflected in the ST-T complex of the surface ECG. The KLT coefficients provide a sensitive means of quantifying ST-T shapes. A training set of ST-T complexes is used to derive a set of KLT basis vectors that permits representation of 90% of the signal energy using four KLT coefficients. As a truncated KLT expansion tends to favor representation of the signal over any additive noise, a time series of KLT coefficients obtained from successive ST-T complexes is better suited for representation of both medium-term variations (such as ischemic changes) and short-term variations (such as ST-T alternans) than discrete parameters such as the ST level or other local indices. For analysis of ischemic changes, an adaptive filter is described that can be used to estimate the KLT coefficients, yielding an increase in the signal-to-noise ratio of 10 dB (μ = 0.1), with a convergence time of about three beats. A beat spectrum of the unfiltered KLT coefficient series is used for detection of ST-T alternans. These methods are illustrated with examples from the European ST-T Database. About 20% of records revealed quasi-periodic salvos of ischemic ST-T change episodes and another 20% exhibited repetitive, but not clearly periodic, patterns of ST-T change episodes. About 5% of ischemic episodes were associated with ST-T alternans.
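    The truncated-expansion idea can be illustrated with synthetic data. This is a hedged sketch, not the paper's ECG processing: Gaussian bumps with random amplitude and shift stand in for real ST-T complexes, and the basis is obtained by an SVD of the centered training matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
# training set: a smooth template with random amplitude/shift plus noise
train = np.array([a * np.exp(-((t - 0.5 - s) ** 2) / 0.02)
                  for a, s in zip(rng.uniform(0.8, 1.2, 200),
                                  rng.uniform(-0.05, 0.05, 200))])
train += 0.01 * rng.standard_normal(train.shape)

mean = train.mean(axis=0)
X = train - mean
# KLT basis = eigenvectors of the sample covariance (obtained via SVD)
_, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 4
basis = Vt[:k]                       # leading k KLT basis vectors
coeffs = X @ basis.T                 # k coefficients per complex
energy = (s[:k] ** 2).sum() / (s ** 2).sum()   # energy fraction captured
```

    For this synthetic ensemble the four leading coefficients capture well over 90% of the centered signal energy, mirroring the compression figure quoted in the abstract.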

  10. Modeling of nutation-precession: Very long baseline interferometry results

    NASA Astrophysics Data System (ADS)

    Herring, T. A.; Mathews, P. M.; Buffett, B. A.

    2002-04-01

    Analysis of over 20 years of very long baseline interferometry (VLBI) data yields estimates of the coefficients of the nutation series with standard deviations ranging from 5 microseconds of arc (μas) for the terms with periods <400 days to 38 μas for the longest-period terms. The largest deviations between the VLBI estimates of the amplitudes of terms in the nutation series and the theoretical values from the Mathews-Herring-Buffett (MHB2000) nutation series are 56 +/- 38 μas (associated with two of the 18.6 year nutations). The amplitudes of nutational terms with periods <400 days deviate from the MHB2000 nutation series values at the level of their standard deviations. The estimated correction to the IAU-1976 precession constant is -2.997 +/- 0.008 mas yr-1 when the coefficients of the MHB2000 nutation series are held fixed, and is consistent with that inferred from the MHB2000 nutation theory. The secular change in the obliquity of the ecliptic is estimated to be -0.252 +/- 0.003 mas yr-1. When the coefficients of the largest-amplitude terms in the nutation series are estimated, the precession constant correction and obliquity rate are estimated to be -2.960 +/- 0.030 and -0.237 +/- 0.012 mas yr-1. Significant variations in the freely excited retrograde free core nutation mode are observed over the 20 years. During this time the amplitude has decreased from ~300 +/- 50 μas in the mid-1980s to nearly zero by the year 2000. There is evidence that the amplitude of the mode is now increasing again.

  11. Coastal Atmosphere and Sea Time Series (CoASTS)

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Zibordi, Giuseppe; Berthon, Jean-Francoise; Doyle, John P.; Grossi, Stefania; vanderLinde, Dirk; Targa, Cristina; Alberotanza, Luigi; McClain, Charles R. (Technical Monitor)

    2002-01-01

    The Coastal Atmosphere and Sea Time Series (CoASTS) Project, aimed at supporting ocean color research and applications, has ensured the collection of a comprehensive atmospheric and marine data set from an oceanographic tower located in the northern Adriatic Sea from 1995 up to the time of publication of this document. The instruments and the measurement methodologies used to gather quantities relevant for bio-optical modeling and for the calibration and validation of ocean color sensors are described. Particular emphasis is placed on four items: (1) the evaluation of perturbation effects in radiometric data (i.e., tower-shading, instrument self-shading, and bottom effects); (2) the intercomparison of seawater absorption coefficients from in situ measurements and from laboratory spectrometric analysis of discrete samples; (3) the intercomparison of two filter techniques for in vivo measurement of particulate absorption coefficients; and (4) the analysis of repeatability and reproducibility of the most relevant laboratory measurements carried out on seawater samples (i.e., particulate and yellow substance absorption coefficients, and pigment and total suspended matter concentrations). Sample data are also presented and discussed to illustrate the typical features characterizing the CoASTS measurement site, in support of the suitability of the CoASTS data set for bio-optical modeling and ocean color calibration and validation.

  12. Innovating patient care delivery: DSRIP's interrupted time series analysis paradigm.

    PubMed

    Shenoy, Amrita G; Begley, Charles E; Revere, Lee; Linder, Stephen H; Daiger, Stephen P

    2017-12-08

    Adoption of a Medicaid Section 1115 waiver is one of the many ways of innovating the healthcare delivery system. The Delivery System Reform Incentive Payment (DSRIP) pool, one of the two funding pools of the waiver, has four categories, viz., infrastructure development, program innovation and redesign, quality improvement reporting, and population health improvement. A metric of the fourth category, the preventable hospitalization (PH) rate, was analyzed in the context of eight conditions for two time periods, pre-reporting years (2010-2012) and post-reporting years (2013-2015), for two hospital cohorts, DSRIP participating and non-participating hospitals. The study explains how DSRIP impacted PH rates of eight conditions for both hospital cohorts within the two time periods. Eight PH rates were regressed as the dependent variable with time, intervention, and post-DSRIP intervention as independent variables. PH rates of the eight conditions were then consolidated into one rate and regressed on the same independent variables to evaluate the overall impact of DSRIP. An interrupted time series regression was performed after accounting for auto-correlation, stationarity and seasonality in the dataset. In the individual regression models, PH rates showed statistically significant coefficients for seven out of eight conditions in DSRIP participating hospitals. In the combined regression model, the PH rate showed a statistically significant decrease, with negative regression coefficients in DSRIP participating hospitals compared with positive (increased) regression coefficients in DSRIP non-participating hospitals. Several macro- and micro-level factors likely contributed to DSRIP participating hospitals outperforming DSRIP non-participating hospitals. Healthcare organization/provider collaboration, support from healthcare professionals, DSRIP's design, state reimbursement, and coordination in care delivery methods may have led to the likely success of DSRIP. Level of evidence: IV, a retrospective cohort study based on longitudinal data. Copyright © 2017 Elsevier Inc. All rights reserved.
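    The interrupted time-series design described above (a baseline trend, a level change at the intervention, and a post-intervention trend change) can be sketched with ordinary least squares on synthetic data. The study's corrections for auto-correlation, stationarity, and seasonality are omitted here, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t0 = 72, 36                                # 6 years monthly, intervention at month 36
time = np.arange(n)
interv = (time >= t0).astype(float)           # level-change indicator
post = np.where(time >= t0, time - t0, 0.0)   # post-intervention trend term
# synthetic "rate": baseline trend, then a drop and a steeper decline
y = 50 + 0.1 * time - 5 * interv - 0.3 * post + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), time, interv, post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta = [baseline level, baseline trend, level change, trend change]
```

    A negative coefficient on the intervention and post-intervention terms is what the abstract's "statistically significant decrease" refers to; a full analysis would add seasonal terms and Newey-West or ARMA error corrections.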

  13. Fast-GPU-PCC: A GPU-Based Technique to Compute Pairwise Pearson's Correlation Coefficients for Time Series Data-fMRI Study.

    PubMed

    Eslami, Taban; Saeed, Fahad

    2018-04-20

    Functional magnetic resonance imaging (fMRI) is a non-invasive brain imaging technique, which has been regularly used for studying the brain's functional activities in the past few years. A widely used measure for capturing functional associations in the brain is Pearson's correlation coefficient, which is commonly used for constructing functional networks and studying dynamic functional connectivity of the brain. These are useful measures for understanding the effects of brain disorders on connectivity among brain regions. fMRI scanners produce a huge number of voxels, and using traditional central processing unit (CPU)-based techniques for computing pairwise correlations is very time consuming, especially when a large number of subjects are being studied. In this paper, we propose a graphics processing unit (GPU)-based algorithm called Fast-GPU-PCC for computing pairwise Pearson's correlation coefficients. Based on the symmetric property of Pearson's correlation, this approach returns the N(N-1)/2 correlation coefficients located in the strictly upper triangular part of the correlation matrix. Storing the correlations in a one-dimensional array in the order proposed in this paper is useful for further processing. Our experiments on real and synthetic fMRI data for different numbers of voxels and varying lengths of time series show that the proposed approach outperformed state-of-the-art GPU-based techniques as well as the sequential CPU-based versions. We show that Fast-GPU-PCC runs 62 times faster than the CPU-based version and about 2 to 3 times faster than two other state-of-the-art GPU-based methods.
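    The output layout described above, the strictly upper triangle of the correlation matrix flattened into a one-dimensional array, can be reproduced on the CPU with a few lines of NumPy. This is only a reference sketch of the result, not the GPU algorithm.

```python
import numpy as np

def pairwise_pcc_upper(X):
    """X: (N, T) array of N time series. Returns the N*(N-1)/2 Pearson
    coefficients of the strictly upper triangle, flattened row by row."""
    Z = X - X.mean(axis=1, keepdims=True)
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # unit-norm centered rows
    C = Z @ Z.T                                     # full correlation matrix
    iu = np.triu_indices(len(X), k=1)               # strictly upper triangle
    return C[iu]

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 100))                   # 5 toy "voxel" series
r = pairwise_pcc_upper(X)                           # 5*4/2 = 10 coefficients
```

    The row-major upper-triangle order produced by `np.triu_indices` matches the pair ordering (0,1), (0,2), ..., (N-2,N-1), which is the kind of one-dimensional layout the abstract refers to.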

  14. On the Prony series representation of stretched exponential relaxation

    NASA Astrophysics Data System (ADS)

    Mauro, John C.; Mauro, Yihong Z.

    2018-09-01

    Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
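    The Prony-series idea can be sketched by fixing relaxation times on a logarithmic grid and solving a linear least-squares problem for the weights. The grid, the number of terms, and the fitting procedure here are illustrative assumptions, not the optimized coefficients reported in the paper; β = 3/7 is used only as an example exponent.

```python
import numpy as np

beta = 3.0 / 7.0
t = np.logspace(-3, 3, 400)
target = np.exp(-t ** beta)                   # stretched exponential

taus = np.logspace(-4, 4, 12)                 # fixed Prony relaxation times
A = np.exp(-t[:, None] / taus[None, :])       # design matrix of simple exponentials
w, *_ = np.linalg.lstsq(A, target, rcond=None)
prony = A @ w                                 # Prony-series approximation

max_err = np.max(np.abs(prony - target))
```

    Roughly one to two terms per decade of time already gives a close fit over the fitted window, consistent with the abstract's point that the "fat tail" is captured while the zero-time derivative singularity cannot be.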

  15. An Unsupervised Change Detection Method Using Time-Series of PolSAR Images from Radarsat-2 and GaoFen-3.

    PubMed

    Liu, Wensong; Yang, Jie; Zhao, Jinqi; Shi, Hongtao; Yang, Le

    2018-02-12

    Traditional unsupervised change detection methods based on the pixel level can only detect changes between two different times with the same sensor, and the results are easily affected by speckle noise. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of the time-series PolSAR data is calculated by omnibus test statistics, and difference images between any two images at different times are acquired by Rj test statistics. Secondly, the difference images are segmented with a Generalized Statistical Region Merging (GSRM) algorithm, which can suppress the effect of speckle noise. A Generalized Gaussian Mixture Model (GGMM) is then used to obtain the time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can not only detect time-series changes from different sensors, but can also better suppress the influence of speckle noise and improve the overall accuracy and Kappa coefficient.

  16. How to determine life expectancy change of air pollution mortality: a time series study

    PubMed Central

    2011-01-01

    Background Information on life expectancy (LE) change is of great concern for policy makers, as evidenced by discussions of the "harvesting" (or "mortality displacement") issue, i.e. how large an LE loss corresponds to the mortality results of time series (TS) studies. Whereas loss of LE attributable to chronic air pollution exposure can be determined from cohort studies, using life table methods, conventional TS studies have identified only deaths due to acute exposure, during the immediate past (typically the preceding one to five days), and they provide no information about the LE loss per death. Methods We show how to obtain information on population-average LE loss by extending the observation window (largest "lag") of TS to include a sufficient number of "impact coefficients" for past exposures ("lags"). We test several methods for determining these coefficients. Once all of the coefficients have been determined, the LE change is calculated as the time integral of the relative risk change after a permanent step change in exposure. Results The method is illustrated with results for daily data of non-accidental mortality from Hong Kong for 1985-2005, regressed against PM10 and SO2 with observation windows up to 5 years. The majority of the coefficients are statistically significant. The magnitude of the SO2 coefficients is comparable to that of the PM10 coefficients. But a window of 5 years is not sufficient, and the results for LE change are only a lower bound; this bound is consistent with what is implied by other studies of long-term impacts. Conclusions A TS analysis can determine the LE loss, but if the observation window is shorter than the relevant exposures, one obtains only a lower bound. PMID:21450107

  17. Coupled oscillators in identification of nonlinear damping of a real parametric pendulum

    NASA Astrophysics Data System (ADS)

    Olejnik, Paweł; Awrejcewicz, Jan

    2018-01-01

    A damped parametric pendulum with friction is identified twice, by means of a precise and an imprecise mathematical model. A laboratory test stand designed for experimental investigation of nonlinear effects determined by viscous resistance and the stick-slip phenomenon serves as the model mechanical system. The influence of the accuracy of the mathematical modeling on the time variability of the nonlinear damping coefficient of the oscillator is demonstrated. The free decay response of the precisely and imprecisely modeled physical pendulum depends on two different time-varying coefficients of damping. The coefficients of the analyzed parametric oscillator are identified with a new semi-empirical method based on a coupled-oscillators approach, utilizing the fractional-order derivative of the discrete measurement series treated as an input to the numerical model. Results of applying the proposed method to identify the nonlinear coefficients of the damped parametric oscillator are illustrated and extensively discussed.

  18. Modeling seasonal variation of hip fracture in Montreal, Canada.

    PubMed

    Modarres, Reza; Ouarda, Taha B M J; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre

    2012-04-01

    The investigation of the association of climate variables with hip fracture incidence is important in social health issues. This study examined and modeled the seasonal variation of the population-based monthly hip fracture rate (HFr) time series. The seasonal ARIMA (SARIMA) time series modeling approach is used to model monthly HFr incidence time series of female and male patients aged 40-74 and 75+ in Montreal, Québec, Canada, over the period 1993-2004. The correlation coefficients between meteorological variables, such as temperature, snow depth, rainfall depth and day length, and HFr are significant. The nonparametric Mann-Kendall test for trend assessment and the nonparametric Levene's and Wilcoxon's tests for checking the difference of HFr before and after a change point are also used. The seasonality in HFr indicated a sharp difference between winter and summer. The trend assessment showed decreasing trends in HFr of the female and male groups. The nonparametric tests also indicated a significant change of the mean HFr. A seasonal ARIMA model was applied to HFr time series without trend, and a time trend ARIMA model (TT-ARIMA) was developed and fitted to HFr time series with a significant trend. The multi-criteria evaluation showed the adequacy of the SARIMA and TT-ARIMA models for modeling seasonal hip fracture time series with and without significant trend. In the time series analysis of HFr of the Montreal region, the effects of the seasonal variation of climate variables on hip fracture are clear. The seasonal ARIMA model is useful for modeling HFr time series without trend. However, for time series with a significant trend, the TT-ARIMA model should be applied. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. General Series Solutions for Stresses and Displacements in an Inner-fixed Ring

    NASA Astrophysics Data System (ADS)

    Jiao, Yongshu; Liu, Shuo; Qi, Dexuan

    2018-03-01

    A general series solution approach is provided to obtain the stress and displacement fields in an inner-fixed ring. After choosing an Airy stress function in series form, the stresses are expressed through an infinite set of coefficients. Displacements are obtained by integrating the geometric equations. For an inner-fixed ring, the arbitrary loads acting on the outer edge are expanded into two sets of Fourier series, and the zero-displacement boundary conditions on the inner surface are applied. The stress (and displacement) coefficients are then expressed in terms of the loading coefficients. A numerical example shows the validity of this approach.

  20. Early-Time Solution of the Horizontal Unconfined Aquifer in the Buildup Phase

    NASA Astrophysics Data System (ADS)

    Gravanis, Elias; Akylas, Evangelos

    2017-10-01

    We derive the early-time solution of the Boussinesq equation for the horizontal unconfined aquifer in the buildup phase under constant recharge and zero inflow. The solution is expressed as a power series of a suitable similarity variable, which is constructed so as to satisfy the boundary conditions at both ends of the aquifer; that is, it is a polynomial approximation of the exact solution. The series turns out to be asymptotic, and it is regularized by resummation techniques of the kind used to define divergent series. The outflow rate in this regime is linear in time, and its (dimensionless) coefficient is calculated to eight significant figures. The local error of the series is quantified by its deviation from satisfying the self-similar Boussinesq equation at every point. The local error turns out to be everywhere positive; hence, so is the integrated error, which in turn quantifies the degree of convergence of the series to the exact solution.

  1. Using wavelet-feedforward neural networks to improve air pollution forecasting in urban environments.

    PubMed

    Dunea, Daniel; Pohoata, Alin; Iordache, Stefania

    2015-07-01

    The paper presents the screening of various feedforward artificial neural networks (FANN) and wavelet-feedforward neural networks (WFANN) applied to time series of ground-level ozone (O3), nitrogen dioxide (NO2), and particulate matter (PM10 and PM2.5 fractions) recorded at four monitoring stations located in various urban areas of Romania, to identify common configurations with optimal generalization performance. Two distinct model runs were performed: data processing using hourly-recorded time series of airborne pollutants during cold months (O3, NO2, and PM10), when residential heating increases the local emissions, and data processing using 24-h daily averaged concentrations (PM2.5) recorded between 2009 and 2012. Dataset variability was assessed using statistical analysis. Time series were passed through various FANNs. Each time series was also decomposed into four time-scale components using three-level wavelets; the components were passed through a FANN and recomposed into a single time series. The agreement between observed and modelled output was evaluated based on statistical significance (r coefficient and correlation between errors and data). A Daubechies db3 wavelet-Rprop FANN (6-4-1) gave positive results for the O3 time series, improving on the exclusive use of the FANN for hourly-recorded time series. NO2 was difficult to model due to the specificity of its time series, but wavelet integration improved FANN performance. The Daubechies db3 wavelet did not improve the FANN outputs for the PM10 time series. Both models (FANN/WFANN) overestimated PM2.5 forecasted values in the last quarter of the time series. A potential improvement of the forecasted values could be the integration of a smoothing algorithm to adjust the PM2.5 model outputs.

  2. Feature extraction across individual time series observations with spikes using wavelet principal component analysis.

    PubMed

    Røislien, Jo; Winje, Brita

    2013-09-20

    Clinical studies frequently include repeated measurements of individuals, often for long periods. We present a methodology for extracting common temporal features across a set of individual time series observations. In particular, the methodology explores extreme observations within the time series, such as spikes, as a possible common temporal phenomenon. Wavelet basis functions are attractive in this sense, as they are localized in both time and frequency domains simultaneously, allowing for localized feature extraction from a time-varying signal. We apply wavelet basis function decomposition of individual time series, with corresponding wavelet shrinkage to remove noise. We then extract common temporal features using linear principal component analysis on the wavelet coefficients, before inverse transformation back to the time domain for clinical interpretation. We demonstrate the methodology on a subset of a large fetal activity study aiming to identify temporal patterns in fetal movement (FM) count data in order to explore formal FM counting as a screening tool for identifying fetal compromise and thus preventing adverse birth outcomes. Copyright © 2013 John Wiley & Sons, Ltd.
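    A minimal sketch of the pipeline, under simplifying assumptions: a Haar wavelet transform (rather than the smoother bases typically used clinically), soft-threshold shrinkage with an arbitrary threshold, and PCA on the shrunken coefficients of synthetic spike-bearing series.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar decomposition of a length-2^k signal into one coefficient vector."""
    x = np.asarray(x, dtype=float)
    out = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2)
        det = (x[0::2] - x[1::2]) / np.sqrt(2)
        out.append(det)
        x = avg
    out.append(x)
    return np.concatenate(out[::-1])

def soft_threshold(c, thr):
    """Wavelet shrinkage: pull coefficients toward zero by thr."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 64)
# 30 "individuals" sharing a spike near t = 0.5 with varying height, plus noise
series = np.array([h * (np.abs(t - 0.5) < 0.03) + 0.1 * rng.standard_normal(64)
                   for h in rng.uniform(1, 3, 30)])

W = np.array([soft_threshold(haar_dwt(s), 0.15) for s in series])
Wc = W - W.mean(axis=0)
_, s_vals, Vt = np.linalg.svd(Wc, full_matrices=False)   # PCA via SVD
explained = s_vals[0] ** 2 / (s_vals ** 2).sum()         # PC1 variance share
```

    Because the shared spike dominates the shrunken coefficients, the first principal component carries most of the between-individual variance; the loading vector `Vt[0]` could be inverse-transformed for interpretation in the time domain, as the abstract describes.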

  3. Radiocarbon dating uncertainty and the reliability of the PEWMA method of time-series analysis for research on long-term human-environment interaction

    PubMed Central

    Carleton, W. Christopher; Campbell, David

    2018-01-01

    Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating—the most common chronometric technique in archaeological and palaeoenvironmental research—creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEWMA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20–30% of the time, providing the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence. PMID:29351329

  4. Radiocarbon dating uncertainty and the reliability of the PEWMA method of time-series analysis for research on long-term human-environment interaction.

    PubMed

    Carleton, W Christopher; Campbell, David; Collard, Mark

    2018-01-01

    Statistical time-series analysis has the potential to improve our understanding of human-environment interaction in deep time. However, radiocarbon dating, the most common chronometric technique in archaeological and palaeoenvironmental research, creates challenges for established statistical methods. The methods assume that observations in a time-series are precisely dated, but this assumption is often violated when calibrated radiocarbon dates are used because they usually have highly irregular uncertainties. As a result, it is unclear whether the methods can be reliably used on radiocarbon-dated time-series. With this in mind, we conducted a large simulation study to investigate the impact of chronological uncertainty on a potentially useful time-series method. The method is a type of regression involving a prediction algorithm called the Poisson Exponentially Weighted Moving Average (PEWMA). It is designed for use with count time-series data, which makes it applicable to a wide range of questions about human-environment interaction in deep time. Our simulations suggest that the PEWMA method can often correctly identify relationships between time-series despite chronological uncertainty. When two time-series are correlated with a coefficient of 0.25, the method is able to identify that relationship correctly 20-30% of the time, providing the time-series contain low noise levels. With correlations of around 0.5, it is capable of correctly identifying correlations despite chronological uncertainty more than 90% of the time. While further testing is desirable, these findings indicate that the method can be used to test hypotheses about long-term human-environment interaction with a reasonable degree of confidence.
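    A heavily simplified sketch of the prediction step: an exponentially weighted moving average of past counts serves as the one-step-ahead Poisson mean. The actual PEWMA model is a state-space formulation with a discount parameter; this stripped-down version (with an arbitrary smoothing weight lam) only illustrates the exponential weighting of recent observations.

```python
import numpy as np

def ewma_count_forecast(y, lam=0.3):
    """One-step-ahead forecasts of the Poisson mean for a count series y,
    via simple exponential smoothing of past counts."""
    mu = np.empty(len(y))
    mu[0] = y[0]                     # initialize at the first observation
    for t in range(1, len(y)):
        mu[t] = lam * y[t - 1] + (1 - lam) * mu[t - 1]
    return mu

rng = np.random.default_rng(3)
y = rng.poisson(10, 200)             # synthetic count time-series
mu = ewma_count_forecast(y)          # smoothed one-step-ahead means
```

    For a stationary Poisson(10) series the smoothed mean hovers around 10; in the full PEWMA framework, covariates (here, a second radiocarbon-dated series) enter the model for the mean, which is how cross-series relationships are tested.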

  5. The application of complex network time series analysis in turbulent heated jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charakopoulos, A. K.; Karakasidis, T. E., E-mail: thkarak@uth.gr; Liakopoulos, A.

    In the present study, we applied the methodology of complex network-based time series analysis to experimental temperature time series from a vertical turbulent heated jet. More specifically, we approach the hydrodynamic problem of discriminating time series corresponding to various regions relative to the jet axis, i.e., distinguishing time series corresponding to regions close to the jet axis from time series originating in regions with a different dynamical regime, based on the constructed network properties. Applying the phase space transformation method (k nearest neighbors) and also the visibility algorithm, we transformed time series into networks and evaluated topological properties of the networks such as degree distribution, average path length, diameter, modularity, and clustering coefficient. The results show that the complex network approach allows distinguishing, identifying, and exploring in detail various dynamical regions of the jet flow, and associating them with the corresponding physical behavior. In addition, in order to reject the hypothesis that the studied networks originate from a stochastic process, we generated random networks and compared their statistical properties with those of networks originating from the experimental data. As far as the efficiency of the two methods for network construction is concerned, we conclude that both methodologies lead to network properties with almost the same qualitative behavior and allow us to reveal the underlying system dynamics.

  6. The application of complex network time series analysis in turbulent heated jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charakopoulos, A. K.; Karakasidis, T. E., E-mail: thkarak@uth.gr; Liakopoulos, A.

    2014-06-15

    In the present study, we applied the methodology of complex network-based time series analysis to experimental temperature time series from a vertical turbulent heated jet. More specifically, we approach the hydrodynamic problem of discriminating time series corresponding to various regions relative to the jet axis, i.e., distinguishing time series corresponding to regions close to the jet axis from time series originating in regions with a different dynamical regime, based on the constructed network properties. Applying the phase space transformation method (k nearest neighbors) and also the visibility algorithm, we transformed time series into networks and evaluated topological properties of the networks such as degree distribution, average path length, diameter, modularity, and clustering coefficient. The results show that the complex network approach allows distinguishing, identifying, and exploring in detail various dynamical regions of the jet flow, and associating them with the corresponding physical behavior. In addition, in order to reject the hypothesis that the studied networks originate from a stochastic process, we generated random networks and compared their statistical properties with those of networks originating from the experimental data. As far as the efficiency of the two methods for network construction is concerned, we conclude that both methodologies lead to network properties with almost the same qualitative behavior and allow us to reveal the underlying system dynamics.
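    The visibility algorithm mentioned above can be sketched directly: two samples are linked when the straight line between them passes above every intermediate sample (natural visibility). The O(n^2) implementation below is a minimal illustration for short series, not the construction used in the study.

```python
import numpy as np

def visibility_graph(y):
    """Adjacency matrix of the natural visibility graph of series y."""
    n = len(y)
    adj = np.zeros((n, n), dtype=bool)
    for a in range(n):
        for b in range(a + 1, n):
            # every intermediate sample must lie strictly below the chord (a, b)
            visible = all(y[c] < y[a] + (y[b] - y[a]) * (c - a) / (b - a)
                          for c in range(a + 1, b))
            if visible:
                adj[a, b] = adj[b, a] = True
    return adj

rng = np.random.default_rng(4)
y = rng.standard_normal(60)          # toy temperature-like series
adj = visibility_graph(y)
degree = adj.sum(axis=0)             # node degrees for the degree distribution
```

    From `adj`, the topological quantities listed in the abstract (degree distribution, clustering coefficient, path lengths) follow by standard graph analysis; consecutive samples are always mutually visible, so every node has degree at least one.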

  7. Disparities in out-of-pocket inpatient expenditures in rural Shaanxi Province, western China from 2011 to 2014: a time series analysis.

    PubMed

    Yang, Caijun; Cai, Wenfang; Li, Zongjie; Fang, Yu

    2018-06-01

    To investigate the long-term trend in the disparity of monthly average out-of-pocket inpatient expenditures (OOP) between areas at different development levels since the new healthcare reform. Time series regression was used to assess the trend of disparities in OOP and monthly average inpatient expenditures (AIE) between areas at different development levels in rural Shaanxi Province, western China. Data on OOP and AIE in primary health institutions, secondary hospitals, tertiary hospitals, and all hospital levels combined were analysed separately, covering the period 2011 through 2014. The disparity of AIE across all hospital levels was increasing (coefficient = 0.003, P = 0.029), and only the disparity of AIE in secondary hospitals was statistically significant (coefficient = 0.003, P = 0.012) when hospital levels were considered separately. The disparity of OOP across all hospital levels was increasing (coefficient = 0.007, P = 0.001); OOP in primary hospitals contributed most of the disparity (coefficient = 0.019, P = 0.000), followed by OOP in secondary (coefficient = 0.008, P = 0.003) and tertiary hospitals (coefficient = 0.004, P = 0.091). A statistically significant absolute increase in the trend of disparities of OOP and AIE at all hospital levels was detected after the new healthcare reform in Shaanxi Province, western China. The rate of increase of the OOP disparity was greater than that of AIE. A modified health insurance plan should be proposed to guarantee equity in the future. © 2018 John Wiley & Sons Ltd.

  8. Rainfall disaggregation for urban hydrology: Effects of spatial consistence

    NASA Astrophysics Data System (ADS)

    Müller, Hannes; Haberlandt, Uwe

    2015-04-01

    For urban hydrology, rainfall time series with a high temporal resolution are crucial. Observed time series of this kind are in most cases too short to be usable. In contrast, time series with lower temporal resolution (daily measurements) exist for much longer periods. The objective is to derive long, high-resolution time series by disaggregating the time series of the non-recording stations with information from the time series of the recording stations. The multiplicative random cascade model is a well-known disaggregation model for daily time series. For urban hydrology it is often assumed that a day consists of only 1280 minutes in total as the starting point for the disaggregation process. We introduce a new variant of the cascade model that works without this assumption and also outperforms the existing approach regarding time series characteristics such as wet and dry spell duration, average intensity, fraction of dry intervals and extreme value representation. However, in both approaches the rainfall time series of different stations are disaggregated without consideration of surrounding stations. This results in unrealistic spatial patterns of rainfall. We apply a simulated annealing algorithm that has previously been used successfully for hourly values. Relative diurnal cycles of the disaggregated time series are resampled to reproduce the spatial dependence of rainfall. To describe spatial dependence we use bivariate characteristics such as probability of occurrence, continuity ratio and coefficient of correlation. The investigation area is a sewage system in Northern Germany. We show that the algorithm has the capability to improve spatial dependence. The influence of the chosen disaggregation routine and of the spatial dependence on overflow occurrences and volumes in the sewage system will be analyzed.
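    The 1280-minute convention exists because 1280 is repeatedly halvable: seven binary splits take one day down to 10-minute intervals. A minimal micro-canonical cascade sketch along those lines (the branching probabilities and weight model here are invented for illustration, not the paper's calibrated parameters):

```python
import random

def disaggregate(daily_total, levels, rng):
    """Micro-canonical multiplicative cascade: each interval's rainfall is
    split into two halves with weights (w, 1 - w), so total mass is conserved
    exactly.  With levels = 7, a 1280-minute day ends at 10-minute steps."""
    amounts = [daily_total]
    for _ in range(levels):
        nxt = []
        for a in amounts:
            if a == 0.0:                 # dry intervals stay dry
                nxt.extend([0.0, 0.0])
                continue
            u = rng.random()
            if u < 0.2:                  # all rain falls in the first half
                w = 1.0
            elif u < 0.4:                # all rain falls in the second half
                w = 0.0
            else:                        # rain is shared between the halves
                w = rng.random()
            nxt.extend([a * w, a * (1.0 - w)])
        amounts = nxt
    return amounts

rng = random.Random(42)
fine = disaggregate(12.8, 7, rng)        # 128 intervals of 10 minutes
```

    The all-or-nothing branches are what generate intermittency (dry spells) in the fine-scale series, one of the characteristics the abstract evaluates.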

  9. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in the model outputs as Hermite polynomials in standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated, and the uncertainties in the future time series can be further addressed. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy for complicated hydrological models and can provide probabilistic forecasting in a more computationally efficient manner than the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
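    The expansion basis mentioned above is standard: probabilists' Hermite polynomials obey He_0 = 1, He_1 = ξ, He_{n+1} = ξ·He_n − n·He_{n−1}, and a one-dimensional PCE output is Y ≈ Σ a_n He_n(ξ) with ξ standard normal. A sketch of just that machinery (the coefficient values are invented; the paper's collocation step for fitting them is not shown):

```python
def hermite_e(n, xi):
    """Probabilists' Hermite polynomial He_n(xi) via the three-term
    recurrence He_{n+1} = xi*He_n - n*He_{n-1}."""
    h0, h1 = 1.0, xi
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, xi * h1 - k * h0
    return h1

def pce_eval(coeffs, xi):
    """Evaluate a 1-D polynomial chaos expansion sum_n a_n * He_n(xi)."""
    return sum(a * hermite_e(n, xi) for n, a in enumerate(coeffs))

# Hypothetical PCE coefficients a_0..a_2 for one model output
val = pce_eval([1.0, 0.5, 0.25], 2.0)
```

    Because the He_n are orthogonal under the standard normal weight, the leading coefficient a_0 is directly the output mean, which is what makes moment extraction from a fitted PCE cheap.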

  10. AI-based (ANN and SVM) statistical downscaling methods for precipitation estimation under climate change scenarios

    NASA Astrophysics Data System (ADS)

    Mehrvand, Masoud; Baghanam, Aida Hosseini; Razzaghzadeh, Zahra; Nourani, Vahid

    2017-04-01

    Since statistical downscaling methods are the most widely used models in hydrologic impact studies under climate change scenarios, nonlinear regression models known as Artificial Intelligence (AI)-based models, such as the Artificial Neural Network (ANN) and the Support Vector Machine (SVM), have been used to spatially downscale the precipitation outputs of Global Climate Models (GCMs). The study has been carried out using GCM and station data over GCM grid points located around the Peace-Tampa Bay watershed weather stations. Before downscaling with the AI-based models, correlation coefficients were computed between a set of candidate large-scale predictor variables and the local-scale predictands to select the most effective predictors. The selected predictors were then assessed considering the grid location of the site in question. To increase the accuracy of the AI-based downscaling models, the precipitation time series were pre-processed: the precipitation data derived from the various GCMs were analyzed thoroughly to find the highest correlation coefficient between GCM-based historical data and station precipitation data. Both GCM and station precipitation time series were assessed by comparing means and variances over specific intervals. Results indicated a similar trend between GCM and station precipitation data; however, the station data form non-stationary time series while the GCM data do not. Finally, the AI-based downscaling models were applied to several GCMs with the selected predictors, targeting the local precipitation time series as the predictand. The outcomes of this step were used to produce multiple ensembles of downscaled AI-based models.

  11. A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer

    2018-04-01

    In this paper, a new mixed method based on Lucas and Fibonacci polynomials is developed for the numerical solution of 1D and 2D sinh-Gordon equations. First, the time variable is discretized by central finite differences; then the unknown function and its derivatives are expanded in Lucas series. With the help of these series expansions and Fibonacci polynomials, differentiation matrices are derived. With this approach, solving the sinh-Gordon equation is transformed into solving an algebraic system of equations. The Lucas series coefficients are acquired by solving this system of algebraic equations, and by plugging these coefficients into the Lucas series expansion the numerical solutions can be obtained successively. The main objective of this paper is to demonstrate that the Lucas polynomial based method is convenient for 1D and 2D nonlinear problems. The efficiency and performance of the proposed method are assessed by calculating the L2 and L∞ error norms of several 1D and 2D test problems. The accurate results acquired confirm the applicability of the method.
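    The Lucas polynomial basis itself is defined by the recurrence L0(x) = 2, L1(x) = x, Ln(x) = x·L(n−1)(x) + L(n−2)(x). A small sketch that builds the coefficient lists of this basis (only the basis generation, not the paper's differentiation matrices or solver):

```python
def lucas_polynomials(n_max):
    """Coefficient lists (lowest degree first) of the Lucas polynomials,
    from the recurrence L0 = 2, L1 = x, Ln = x*L_{n-1} + L_{n-2}."""
    polys = [[2], [0, 1]]
    for _ in range(2, n_max + 1):
        a, b = polys[-1], polys[-2]
        shifted = [0] + a                     # multiplication by x
        coeffs = [shifted[i] + (b[i] if i < len(b) else 0)
                  for i in range(len(shifted))]
        polys.append(coeffs)
    return polys

def eval_poly(coeffs, x):
    # Evaluate a coefficient list (lowest degree first) at x.
    return sum(c * x ** i for i, c in enumerate(coeffs))

polys = lucas_polynomials(5)
```

    Evaluating each polynomial at x = 1 recovers the Lucas numbers 2, 1, 3, 4, 7, 11, a quick sanity check on the recurrence.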

  12. 40 CFR Appendix F to Part 60 - Quality Assurance Procedures

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM... the daily zero (or low-level) CD or the daily high-level CD exceeds two times the limits of the... (or low-level) or high-level CD result exceeds four times the applicable drift specification in...

  13. 40 CFR Appendix F to Part 60 - Quality Assurance Procedures

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM...-level) CD or the daily high-level CD exceeds two times the limits of the applicable PS's in appendix B... result exceeds four times the applicable drift specification in appendix B during any CD check, the CEMS...

  14. 40 CFR Appendix F to Part 60 - Quality Assurance Procedures

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM...-level) CD or the daily high-level CD exceeds two times the limits of the applicable PS's in appendix B... result exceeds four times the applicable drift specification in appendix B during any CD check, the CEMS...

  15. 40 CFR Appendix F to Part 60 - Quality Assurance Procedures

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... plus the 2.5 percent error confidence coefficient of a series of tests divided by the mean of the RM...-level) CD or the daily high-level CD exceeds two times the limits of the applicable PS's in appendix B... result exceeds four times the applicable drift specification in appendix B during any CD check, the CEMS...

  16. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable and, if not, the time before loss of control. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable-length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  17. A comparison of performance of several artificial intelligence methods for forecasting monthly discharge time series

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Chuan; Chau, Kwok-Wing; Cheng, Chun-Tian; Qiu, Lin

    2009-08-01

    Developing a hydrological forecasting model based on past records is crucial to effective hydropower reservoir management and scheduling. Traditionally, time series analysis and modeling are used for building mathematical models to generate hydrologic records in hydrology and water resources. Artificial intelligence (AI), as a branch of computer science, is capable of analyzing long-series and large-scale hydrological data. In recent years, applying AI technology to hydrological forecasting has become a prominent research topic. In this paper, autoregressive moving-average (ARMA) models, artificial neural network (ANN) approaches, adaptive neural-based fuzzy inference system (ANFIS) techniques, genetic programming (GP) models and the support vector machine (SVM) method are examined using long-term observations of monthly river flow discharges. Four standard quantitative statistical performance evaluation measures, the coefficient of correlation (R), the Nash-Sutcliffe efficiency coefficient (E), the root mean squared error (RMSE) and the mean absolute percentage error (MAPE), are employed to evaluate the performances of the various models developed. Two case study river sites are also provided to illustrate their respective performances. The results indicate that the best performance can be obtained by ANFIS, GP and SVM, in terms of different evaluation criteria, during the training and validation phases.
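    The four evaluation measures named above have standard definitions and are easy to compute side by side. A self-contained sketch (the observed/simulated values in the demonstration are invented; MAPE assumes non-zero observations):

```python
import math

def metrics(obs, sim):
    """Correlation R, Nash-Sutcliffe efficiency E, RMSE, and MAPE (%) for
    paired observed and simulated series (obs must be non-zero for MAPE)."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    sq_err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    r = cov / (so * ss)
    e = 1.0 - sq_err / sum((o - mo) ** 2 for o in obs)
    rmse = math.sqrt(sq_err / n)
    mape = 100.0 / n * sum(abs((o - s) / o) for o, s in zip(obs, sim))
    return r, e, rmse, mape

# A perfect forecast gives R = E = 1 and RMSE = MAPE = 0
r, e, rmse, mape = metrics([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
```

    Note the asymmetry between R and E: a biased model can still score R = 1, while E penalizes bias, which is why both are usually reported together.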

  18. Erosion characteristics and horizontal variability for small erosion depths in the Sacramento-San Joaquin River Delta, California, USA

    NASA Astrophysics Data System (ADS)

    Schoellhamer, David H.; Manning, Andrew J.; Work, Paul A.

    2017-06-01

    Erodibility of cohesive sediment in the Sacramento-San Joaquin River Delta (Delta) was investigated with an erosion microcosm. Erosion depths in the Delta and in the microcosm were estimated to be about one floc diameter over a range of shear stresses and times comparable to half of a typical tidal cycle. Using the conventional assumption of horizontally homogeneous bed sediment, data from 27 of 34 microcosm experiments indicate that the erosion rate coefficient increased as eroded mass increased, contrary to theory. We believe that small erosion depths, erosion rate coefficient deviation from theory, and visual observation of horizontally varying biota and texture at the sediment surface indicate that erosion cannot solely be a function of depth but must also vary horizontally. We test this hypothesis by developing a simple numerical model that includes horizontal heterogeneity, use it to develop an artificial time series of suspended-sediment concentration (SSC) in an erosion microcosm, then analyze that time series assuming horizontal homogeneity. A shear vane was used to estimate that the horizontal standard deviation of critical shear stress was about 30% of the mean value at a site in the Delta. The numerical model of the erosion microcosm included a normal distribution of initial critical shear stress, a linear increase in critical shear stress with eroded mass, an exponential decrease of erosion rate coefficient with eroded mass, and a stepped increase in applied shear stress. The maximum SSC for each step increased gradually, thus confounding identification of a single well-defined critical shear stress as encountered with the empirical data. Analysis of the artificial SSC time series with the assumption of a homogeneous bed reproduced the original profile of critical shear stress, but the erosion rate coefficient increased with eroded mass, similar to the empirical data. Thus, the numerical experiment confirms the small-depth erosion hypothesis. 
A linear model of critical shear stress and eroded mass is proposed to simulate small-depth erosion, assuming that the applied and critical shear stresses quickly reach equilibrium.
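    The numerical model described above can be sketched compactly: patches with normally distributed initial critical shear stress, linear hardening of critical stress with eroded mass, an exponentially decaying erosion rate coefficient, and a stepped applied stress. All parameter values below are illustrative placeholders, not the paper's calibrated values:

```python
import math
import random

def simulate_ssc(n_patches=200, steps=(0.1, 0.2, 0.3, 0.4), dt=60.0,
                 n_dt=30, mean_tc=0.2, cv=0.3, a=2.0, m0=1e-3, b=5.0, seed=1):
    """Bed of patches with normally distributed initial critical shear
    stress tc0 (std = cv * mean).  A patch erodes while the applied stress
    tau exceeds its current critical stress tc0 + a * eroded; the erosion
    rate coefficient decays as m0 * exp(-b * eroded).  Applied stress rises
    in steps; returns the accumulated eroded mass (an SSC-like signal)."""
    rng = random.Random(seed)
    tc0 = [max(1e-3, rng.gauss(mean_tc, cv * mean_tc)) for _ in range(n_patches)]
    eroded = [0.0] * n_patches
    ssc = []
    for tau in steps:                        # stepped increase in applied stress
        for _ in range(n_dt):
            for i in range(n_patches):
                tc = tc0[i] + a * eroded[i]  # linear hardening with eroded mass
                if tau > tc:
                    eroded[i] += m0 * math.exp(-b * eroded[i]) * (tau - tc) * dt
            ssc.append(sum(eroded) / n_patches)
    return ssc

ssc = simulate_ssc()
```

    Because each patch stops once its hardened critical stress reaches the applied stress, each step produces a gradually saturating rise in SSC rather than a sharp threshold, reproducing the "gradually increasing maximum SSC" behavior the abstract describes.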

  19. Erosion characteristics and horizontal variability for small erosion depths in the Sacramento-San Joaquin River Delta, California, USA

    USGS Publications Warehouse

    Schoellhamer, David H.; Manning, Andrew J.; Work, Paul A.

    2017-01-01

    Erodibility of cohesive sediment in the Sacramento-San Joaquin River Delta (Delta) was investigated with an erosion microcosm. Erosion depths in the Delta and in the microcosm were estimated to be about one floc diameter over a range of shear stresses and times comparable to half of a typical tidal cycle. Using the conventional assumption of horizontally homogeneous bed sediment, data from 27 of 34 microcosm experiments indicate that the erosion rate coefficient increased as eroded mass increased, contrary to theory. We believe that small erosion depths, erosion rate coefficient deviation from theory, and visual observation of horizontally varying biota and texture at the sediment surface indicate that erosion cannot solely be a function of depth but must also vary horizontally. We test this hypothesis by developing a simple numerical model that includes horizontal heterogeneity, use it to develop an artificial time series of suspended-sediment concentration (SSC) in an erosion microcosm, then analyze that time series assuming horizontal homogeneity. A shear vane was used to estimate that the horizontal standard deviation of critical shear stress was about 30% of the mean value at a site in the Delta. The numerical model of the erosion microcosm included a normal distribution of initial critical shear stress, a linear increase in critical shear stress with eroded mass, an exponential decrease of erosion rate coefficient with eroded mass, and a stepped increase in applied shear stress. The maximum SSC for each step increased gradually, thus confounding identification of a single well-defined critical shear stress as encountered with the empirical data. Analysis of the artificial SSC time series with the assumption of a homogeneous bed reproduced the original profile of critical shear stress, but the erosion rate coefficient increased with eroded mass, similar to the empirical data. Thus, the numerical experiment confirms the small-depth erosion hypothesis. 
A linear model of critical shear stress and eroded mass is proposed to simulate small-depth erosion, assuming that the applied and critical shear stresses quickly reach equilibrium.

  20. Analysis of Correlation Tendency between Wind and Solar from Various Spatio-temporal Perspectives

    NASA Astrophysics Data System (ADS)

    Wang, X.; Weihua, X.; Mei, Y.

    2017-12-01

    Analysis of the correlation between wind resources and solar resources can explore their complementary features, enhance the utilization efficiency of renewable energy and further alleviate the carbon emission issues caused by fossil energy. In this paper, we discuss the correlation between wind and solar from various spatio-temporal perspectives (from east to west; across plain, plateau, hill and mountain terrain; and from hourly to daily, ten-day and monthly scales) with observed and modeled data from NOAA (National Oceanic and Atmospheric Administration) and NREL (National Renewable Energy Laboratory). Investigating the wind speed and solar radiation time series (period: 10 years, resolution: 1 h) of 72 stations located in various landforms and distributed dispersedly across the USA, the results show that the correlation coefficient, Kendall's rank correlation coefficient, changes from negative to positive from the east coast to the west coast of the USA, and this phenomenon becomes more obvious as the time scale of the resolution increases from daily to ten days and monthly. Furthermore, considering the differences in landforms, which influence the local meteorology, the Kendall coefficients of the diverse topographies are compared, and it is found that the coefficients descend from mountain to hill, plateau and plain. However, no such evident tendencies could be found at the daily scale. According to this research, the complementarity of wind resources and solar resources in the east or in the mountain areas of the USA is conspicuous. A subsequent study will try to further verify this analysis by investigating the operation status of wind and solar power stations.
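    Kendall's rank correlation, the statistic used above, counts concordant versus discordant pairs. A minimal tau-a sketch (no tie correction; production analyses on hourly data with ties would use tau-b):

```python
def kendall_tau(x, y):
    """Kendall's rank correlation (tau-a): (concordant - discordant) pairs
    divided by the total number of pairs; ties are not corrected for."""
    n = len(x)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

tau = kendall_tau([1, 2, 3, 4], [1, 2, 3, 4])
```

    A negative tau between wind speed and solar radiation is what signals complementarity: when one resource is weak, the other tends to be strong.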

  1. Organics removal from landfill leachate and activated sludge production in SBR reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimiuk, Ewa; Kulikowska, Dorota

    2006-07-01

    This study is aimed at estimating organic compound removal and sludge production in SBRs during the treatment of landfill leachate. Four series were performed. In each series, experiments were carried out at hydraulic retention times (HRT) of 12, 6, 3 and 2 d. The series varied in SBR filling strategy, duration of the mixing and aeration phases, and sludge age. In series 1 and 2 (a short filling period, with mixing and aeration phases in the operating cycle), the relationship between organics concentration (COD) in the treated leachate and HRT followed pseudo-first-order kinetics. In series 3 (with mixing and aeration phases) and series 4 (aeration phase only), with leachate supplied by a peristaltic pump during 4 h of the cycle (filling during the reaction period), this relationship followed zero-order kinetics. Activated sludge production, expressed as the observed coefficient of biomass production (Y_obs), decreased with increasing HRT. The smallest differences between reactors were observed in series 3, in which Y_obs was almost stable (0.55-0.6 mg VSS/mg COD). The elimination of the mixing phase in the cycle (series 4) caused Y_obs to decrease significantly, from 0.32 mg VSS/mg COD at HRT 2 d to 0.04 mg VSS/mg COD at HRT 12 d. The theoretical yield coefficient Y accounted for 0.534 mg VSS/mg COD (series 1) and 0.583 mg VSS/mg COD (series 2); in series 3 and 4, it was almost stable (0.628 and 0.616 mg VSS/mg COD, respectively). After the elimination of the mixing phase in the operating cycle, the specific biomass decay rate increased from 0.006 d^-1 (series 3) to 0.032 d^-1 (series 4). Operating conditions employing mixing/aeration or aeration-only phases enable regulation of the sludge production. SBRs operated under aerobic conditions are more favourable at short hydraulic retention times; at long hydraulic retention times, cell decay can lead to a decrease in biomass concentration in the SBR. By contrast, for activated sludge at long HRT, a short filling period and an operating cycle with both mixing and aeration phases seem most favourable.

  2. Partial-fraction expansion and inverse Laplace transform of a rational function with real coefficients

    NASA Technical Reports Server (NTRS)

    Chang, F.-C.; Mott, H.

    1974-01-01

    This paper presents a technique for the partial-fraction expansion of functions which are ratios of polynomials with real coefficients. The expansion coefficients are determined by writing the polynomials as Taylor series and obtaining the Laurent series expansion of the function. The general formula for the inverse Laplace transform is also derived.
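    For the special case of distinct simple poles, the expansion coefficients reduce to residues r_i = N(p_i)/D'(p_i), and the inverse Laplace transform is f(t) = Σ r_i e^{p_i t}. A numerical sketch of that special case only (the paper's Taylor/Laurent procedure, which also covers repeated poles, is not reproduced here):

```python
def eval_p(coeffs, x):
    # Horner evaluation; coefficients are highest degree first.
    v = 0.0
    for c in coeffs:
        v = v * x + c
    return v

def deriv(coeffs):
    # Derivative of a polynomial given highest-degree-first coefficients.
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def residues(num, den, poles):
    """Partial-fraction residues r_i = N(p_i)/D'(p_i) for distinct poles,
    so F(s) = sum r_i/(s - p_i) and f(t) = sum r_i * exp(p_i * t)."""
    dp = deriv(den)
    return [eval_p(num, p) / eval_p(dp, p) for p in poles]

# F(s) = 1/((s+1)(s+2)) = 1/(s+1) - 1/(s+2), so f(t) = e^{-t} - e^{-2t}
r = residues([1.0], [1.0, 3.0, 2.0], [-1.0, -2.0])
```

    The cover-up shortcut works because near a simple pole p_i the denominator behaves like D'(p_i)(s - p_i), so the residue is just the remaining ratio evaluated at the pole.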

  3. Understanding Aggregation and Estimating Seasonal Abundance of Chrysaora quinquecirrha Medusae from a Fixed-station Time Series in the Choptank River, Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Tay, J.; Hood, R. R.

    2016-02-01

    Although jellyfish exert strong control over marine plankton dynamics (Richardson et al. 2009, Robison et al. 2014) and negatively impact human commercial and recreational activities (Purcell et al. 2007, Purcell 2012), jellyfish biomass is not well quantified, due primarily to sampling difficulties with plankton nets or fisheries trawls (Haddock 2004). As a result, some of the longest records of jellyfish are visual shore-based surveys, such as the fixed-station time series of Chrysaora quinquecirrha that began in 1960 in the Patuxent River in Chesapeake Bay, USA (Cargo and King 1990). Time series counts from fixed-station surveys capture two signals: 1) demographic change at timescales on the order of reproductive processes and 2) spatial patchiness at shorter timescales as different parcels of water move in and out of the survey area by tidal and estuarine advection and turbulent mixing (Lee and McAlice 1979). In this study, our goal was to separate these two signals using a 4-year time series of C. quinquecirrha medusa counts from a fixed station in the Choptank River, Chesapeake Bay. Idealized modeling of tidal and estuarine advection was used to conceptualize the sampling scheme. Change point and time series analyses were used to detect demographic changes. Indices of aggregation (the Negative Binomial coefficient, Taylor's Power Law coefficient, and Morisita's Index) were calculated to describe the spatial patchiness of the medusae. Abundance estimates revealed a bloom cycle that differed in duration and magnitude in each of the study years. The indices of aggregation indicated that medusae were aggregated and that patches grew in the number of individuals, and likely in size, as abundance increased. Further inference from the conceptual modeling suggested that medusae patch structure was generally homogeneous over the tidal extent. This study highlights the benefits of using fixed-station shore-based surveys for understanding the biology and ecology of jellyfish.
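    Of the aggregation indices named above, Morisita's Index has the simplest closed form for quadrat (or per-survey) counts: I = n Σ x_i(x_i − 1) / [N(N − 1)], where n is the number of sampling units and N the total count. A minimal sketch with invented counts:

```python
def morisita_index(counts):
    """Morisita's index of dispersion for quadrat counts: values > 1
    indicate aggregation, ~1 a random pattern, < 1 a uniform pattern."""
    n = len(counts)
    total = sum(counts)
    return n * sum(x * (x - 1) for x in counts) / (total * (total - 1))

aggregated = morisita_index([10, 0, 0, 0])   # all individuals in one unit
uniform = morisita_index([2, 2, 2, 2])       # evenly spread individuals
```

    The index is attractive for survey data because it is largely insensitive to total abundance, so it can track patchiness across bloom years of very different magnitude.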

  4. A Novel Analysis Of The Connection Between Indian Monsoon Rainfall And Solar Activity

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, S.; Narasimha, R.

    2005-12-01

    The existence of possible correlations between the solar cycle period, as extracted from the yearly means of sunspot numbers, and any periodicities that may be present in the Indian monsoon rainfall has been addressed using wavelet analysis. The wavelet transform coefficient maps of the sunspot-number time series and those of the homogeneous Indian monsoon annual rainfall time series reveal striking similarities, especially around the 11-year period. A novel method to analyse and quantify this similarity by devising statistical schemes is suggested in this paper. The wavelet transform coefficient maxima at the 11-year period for the sunspot numbers and the monsoon rainfall have each been modelled as a point process in time, and a statistical scheme for identifying a trend or dependence between the two processes has been devised. A regression analysis of parameters in these processes reveals a nearly linear trend with small but systematic deviations from the regressed line. Suitable function models for these deviations have been obtained through an unconstrained error minimisation scheme. These models provide an excellent fit to the time series of the given wavelet transform coefficient maxima obtained from actual data. Statistical significance tests on these deviations suggest with 99% confidence that the deviations are sample fluctuations drawn from normal distributions. In fact, our earlier studies (see Bhattacharyya and Narasimha, 2005, Geophys. Res. Lett., Vol. 32, No. 5) revealed that average rainfall is higher during periods of greater solar activity for all cases, at confidence levels varying from 75% to 99%, being 95% or greater in 3 out of 7 of them. Analysis using standard wavelet techniques reveals higher power in the 8-16 y band during the higher solar activity period, in 6 of the 7 rainfall time series, at confidence levels exceeding 99.99%.
    Furthermore, a comparison between the wavelet cross spectra of solar activity with rainfall and with noise (including noise simulating the rainfall spectrum and probability distribution) revealed that, over the two test periods of high and low solar activity respectively, the average cross power of the solar activity index with rainfall exceeds that with the noise at z-test confidence levels exceeding 99.99% over period bands covering the 11.6 y sunspot cycle (see Bhattacharyya and Narasimha, SORCE 2005, 14-16th September, Durango, Colorado, USA). These results provide strong evidence for connections between Indian rainfall and solar activity. The present study additionally reveals the presence of subharmonics of the solar cycle period in the monsoon rainfall time series, together with information on their phase relationships.

  5. Initial Results from Fitting Resolved Modes using HMI Intensity Observations

    NASA Astrophysics Data System (ADS)

    Korzennik, Sylvain G.

    2017-08-01

    The HMI project recently started processing the continuum intensity images following global helioseismology procedures similar to those used to process the velocity images. The spatial decomposition of these images has produced time series of spherical harmonic coefficients for degrees up to l=300, using a different apodization than the one used for velocity observations. The first 360 days of observations were processed and made available. I present initial results from fitting these time series using my state-of-the-art fitting methodology and compare the derived mode characteristics to those estimated using co-eval velocity observations.

  6. Comparative analysis of seismic persistence of Hindu Kush nests (Afghanistan) and Los Santos (Colombia) using fractal dimension

    NASA Astrophysics Data System (ADS)

    Prada, D. A.; Sanabria, M. P.; Torres, A. F.; Álvarez, M. A.; Gómez, J.

    2018-04-01

    The study of persistence in the time series of seismic events in two of the most important nests, Hindu Kush in Afghanistan and Los Santos, Santander in Colombia, generates great interest due to their high telluric activity. The data were taken from the global seismological network. Using the Jarque-Bera test, the presence of a Gaussian distribution was analyzed; because the distributions of the series were asymmetric and not mesokurtic, the Hurst coefficient was calculated using the rescaled range method, from which the fractal dimension associated with these time series was found and from which it is possible to determine the persistence, antipersistence and volatility of these phenomena.
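    The rescaled range method referred to above estimates the Hurst exponent as the log-log slope of the R/S statistic against chunk size; H > 0.5 indicates persistence, H < 0.5 antipersistence, and the fractal dimension follows as D = 2 − H. A stdlib-only sketch (chunk sizes and the white-noise demonstration are illustrative choices):

```python
import math
import random

def hurst_rs(series, min_chunk=8):
    """Hurst exponent by rescaled range (R/S): average the R/S statistic
    over chunks of sizes 8, 16, 32, ... and fit log(R/S) vs log(size)."""
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs_list = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            m = sum(chunk) / size
            dev = [x - m for x in chunk]
            cum, c = [], 0.0
            for d in dev:                # cumulative deviations from the mean
                c += d
                cum.append(c)
            r = max(cum) - min(cum)      # range of cumulative deviations
            s = math.sqrt(sum(d * d for d in dev) / size)
            if s > 0.0:
                rs_list.append(r / s)
        sizes.append(size)
        rs_vals.append(sum(rs_list) / len(rs_list))
        size *= 2
    lx = [math.log(v) for v in sizes]
    ly = [math.log(v) for v in rs_vals]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((p - mx) * (q - my) for p, q in zip(lx, ly))
            / sum((p - mx) ** 2 for p in lx))

# Uncorrelated noise should yield an exponent near 0.5
rng = random.Random(0)
h = hurst_rs([rng.gauss(0.0, 1.0) for _ in range(1024)])
```

    For short records the raw R/S estimate is biased slightly above 0.5 even for white noise, which is why careful studies apply a small-sample (Anis-Lloyd type) correction before interpreting persistence.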

  7. Evaluation of agreement between temporal series obtained from electrocardiogram and pulse wave.

    NASA Astrophysics Data System (ADS)

    Leikan, GM; Rossi, E.; Sanz, MCuadra; Delisle Rodríguez, D.; Mántaras, MC; Nicolet, J.; Zapata, D.; Lapyckyj, I.; Siri, L. Nicola; Perrone, MS

    2016-04-01

    Heart rate variability allows studying the modulation of the cardiovascular autonomic nervous system. Usually, this signal is obtained from the electrocardiogram (ECG). A simpler method for recording the pulse wave (PW) is finger photoplethysmography (PPG), which also provides information about the duration of the cardiac cycle. In this study, the correlation and agreement between the time series of intervals between heartbeats obtained from the ECG and those obtained from the PPG were studied. The signals analyzed were obtained from young, healthy subjects at rest. For statistical analysis, the Pearson correlation coefficient and the Bland and Altman limits of agreement were used. Results show that the time series constructed from the PW would not replace the ones obtained from the ECG.
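    The Bland-Altman limits of agreement used above are the bias (mean difference) plus or minus 1.96 standard deviations of the paired differences. A minimal sketch (the RR-interval values in milliseconds are invented for illustration):

```python
import math

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement
    series: bias +/- 1.96 * SD of the differences (sample SD, n - 1)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    bias = sum(d) / n
    sd = math.sqrt(sum((x - bias) ** 2 for x in d) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical RR intervals (ms) from ECG vs. pulse wave
bias, lo, hi = bland_altman([800.0, 810.0, 790.0, 805.0],
                            [798.0, 812.0, 789.0, 806.0])
```

    Unlike the correlation coefficient, which can be high even when one method systematically over-reads, the limits of agreement quantify how far an individual PW-derived interval may stray from the ECG value, which is the clinically relevant question.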

  8. Temporal evolution of total ozone and circulation patterns over European mid-latitudes

    NASA Astrophysics Data System (ADS)

    Monge Sanz, B. M.; Casale, G. R.; Palmieri, S.; Siani, A. M.

    2003-04-01

    Linear correlation analysis and the running correlation technique are used to investigate the interannual and interdecadal variations of total ozone (TO) over several mid-latitude European locations. The study includes the longest series of ozone data, that of the Swiss station of Arosa. The TO series have been related to time series of two circulation indices, the North Atlantic Oscillation Index (NAOI) and the Arctic Oscillation Index (AOI). The analysis has been performed with monthly data, using both series containing all the months of the year and winter (DJFM) series. Special attention has been given to the winter series, which exhibit very high correlation coefficients with NAOI and AOI; interannual variations of this relationship are studied by applying the running correlation technique. The TO and circulation index data series have also been partitioned into their different time-scale components with the Kolmogorov-Zurbenko method. Long-term components indicate a strong inverse connection between total ozone and circulation patterns over the studied region during the last three decades. However, this relationship has not always held: in earlier periods, differences in the amplitude and sign of the correlation are detected.
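    The running correlation technique mentioned above is simply the Pearson correlation recomputed in a sliding window, giving a time-resolved view of how the TO-index relationship evolves. A minimal sketch (window length and demonstration series are invented):

```python
def running_correlation(x, y, window):
    """Pearson correlation of two series computed in a sliding window of
    fixed length; returns one coefficient per window position."""
    out = []
    for t in range(len(x) - window + 1):
        xs, ys = x[t:t + window], y[t:t + window]
        mx, my = sum(xs) / window, sum(ys) / window
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        var_x = sum((a - mx) ** 2 for a in xs)
        var_y = sum((b - my) ** 2 for b in ys)
        out.append(cov / (var_x * var_y) ** 0.5)
    return out

rc = running_correlation([1.0, 2.0, 3.0, 4.0, 5.0],
                         [2.0, 4.0, 6.0, 8.0, 10.0], 3)
```

    The choice of window length trades resolution against noise: short windows track sign changes like those reported above but produce volatile coefficients, so significance must be judged against the reduced effective sample size.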

  9. Discriminant Analysis of Time Series in the Presence of Within-Group Spectral Variability.

    PubMed

    Krafty, Robert T

    2016-07-01

    Many studies record replicated time series epochs from different groups with the goal of using frequency domain properties to discriminate between the groups. In many applications, there exists variation in cyclical patterns among time series in the same group. Although a number of frequency domain methods for the discriminant analysis of time series have been explored, there is a dearth of models and methods that account for within-group spectral variability. This article proposes a model for groups of time series in which transfer functions are modeled as stochastic variables that can account for both between-group and within-group differences in spectra identified from individual replicates. An ensuing discriminant analysis of stochastic cepstra under this model is developed to obtain parsimonious measures of relative power that optimally separate groups in the presence of within-group spectral variability. The approach possesses favorable properties in classifying new observations and can be consistently estimated through a simple discriminant analysis of a finite number of estimated cepstral coefficients. Benefits of accounting for within-group spectral variability are empirically illustrated in a simulation study and through an analysis of gait variability.
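    The cepstral coefficients on which that discriminant analysis operates are, in the standard construction, the inverse Fourier transform of the log power spectrum. A stdlib-only sketch of that construction (a naive O(n^2) DFT for illustration, not the paper's stochastic-cepstrum estimator; the input series is invented):

```python
import cmath
import math

def dft(x):
    # Naive O(n^2) discrete Fourier transform (use an FFT in practice).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def cepstral_coefficients(x, n_coeffs):
    """First n_coeffs cepstral coefficients: inverse DFT of the log power
    spectrum of the series (small epsilon guards against log of zero)."""
    n = len(x)
    logp = [math.log(abs(v) ** 2 + 1e-12) for v in dft(x)]
    return [sum(logp[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n_coeffs)]

ceps = cepstral_coefficients([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 0.0], 4)
```

    Truncating to a few leading coefficients is what yields the "parsimonious measures of relative power" the abstract describes: low-order cepstral coefficients summarize the smooth shape of the log spectrum while discarding fine detail.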

  10. Eisenstein series for infinite-dimensional U-duality groups

    NASA Astrophysics Data System (ADS)

    Fleig, Philipp; Kleinschmidt, Axel

    2012-06-01

    We consider Eisenstein series appearing as coefficients of curvature corrections in the low-energy expansion of type II string theory four-graviton scattering amplitudes. We define these Eisenstein series over all groups in the En series of string duality groups, and in particular for the infinite-dimensional Kac-Moody groups E9, E10 and E11. We show that, remarkably, the so-called constant term of Kac-Moody-Eisenstein series contains only a finite number of terms for particular choices of a parameter appearing in the definition of the series. This resonates with the idea that the constant term of the Eisenstein series encodes perturbative string corrections in BPS-protected sectors allowing only a finite number of corrections. We underpin our findings with an extensive discussion of physical degeneration limits in D < 3 space-time dimensions.

  11. Clock Synchronization Through Time-Variant Underwater Acoustic Channels

    DTIC Science & Technology

    2012-09-01

    stage, we analyze a series of chirp responses to identify the least time-varying multipath present in the channel between the two nodes. Based on the... based on the detected arrivals and determines the most stable one based on the correlation coefficient of a model fit to the time-of-arrival estimates...short periods of time. Nevertheless, signal fluctuations can occur due to transceiver motion or inherent changes within the propagation medium

  12. On the frequency spectra of the core magnetic field Gauss coefficients

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Wardinski, Ingo; Baerenzung, Julien; Holschneider, Matthias

    2018-03-01

    From monthly mean observatory data spanning 1957-2014, geomagnetic field secular variation values were calculated by annual differences. Estimates of the spherical harmonic Gauss coefficients of the core field secular variation were then derived by applying a correlation based modelling. Finally, a Fourier transform was applied to the time series of the Gauss coefficients. This process led to reliable temporal spectra of the Gauss coefficients up to spherical harmonic degree 5 or 6, and down to periods as short as 1 or 2 years depending on the coefficient. We observed that a k^-2 slope, where k is the frequency, is an acceptable approximation for these spectra, with a possible exception for the dipole field. The monthly estimates of the core field secular variation at the observatory sites also show that large and rapid variations of the latter occur. This is an indication that geomagnetic jerks are frequent phenomena and that significant secular variation signals at short time scales (i.e., less than 2 years) could still be extracted from data to reveal an unexplored part of the core dynamics.
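    A power-law slope of this kind can be estimated with a straight-line fit to the log-log periodogram; the sketch below is a generic illustration (the function name, mean removal, and fitting choices are ours, not the authors'):

```python
import numpy as np

def spectral_slope(x, dt=1.0):
    """Fit the log-log slope of the one-sided power spectrum of a series x."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]      # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2        # periodogram ordinates
    slope, _intercept = np.polyfit(np.log(freqs), np.log(power), 1)
    return slope
```

    For white noise the fitted slope is near 0, while an integrated (random-walk-like) series is markedly steeper; a k^-2 process should come out close to -2 at low frequencies.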

  13. A cluster merging method for time series microarray with production values.

    PubMed

    Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio

    2014-09-01

    A challenging task in time-course microarray data analysis is to cluster genes meaningfully combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal obtaining groups with highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured in the same time points) and merging them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacteria. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevant measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of time series and the same shape-based algorithm.

  14. Artificial Intelligence Techniques for Predicting and Mapping Daily Pan Evaporation

    NASA Astrophysics Data System (ADS)

    Arunkumar, R.; Jothiprakash, V.; Sharma, Kirty

    2017-09-01

    In this study, Artificial Intelligence techniques such as Artificial Neural Network (ANN), Model Tree (MT) and Genetic Programming (GP) are used to develop daily pan evaporation time-series (TS) prediction and cause-effect (CE) mapping models. Ten years of observed daily meteorological data such as maximum temperature, minimum temperature, relative humidity, sunshine hours, dew point temperature and pan evaporation are used for developing the models. For each technique, several models are developed by changing the number of inputs and other model parameters. The performance of each model is evaluated using standard statistical measures such as Mean Square Error, Mean Absolute Error, Normalized Mean Square Error and correlation coefficient (R). The results showed that the daily TS-GP (4) model predicted better than the other TS models, with a correlation coefficient of 0.959. Among the various CE models, CE-ANN (6-10-1) performed better than the MT and GP models, with a correlation coefficient of 0.881. Because of the complex non-linear inter-relationship among various meteorological variables, CE mapping models could not achieve the performance of TS models. From this study, it was found that GP performs better for recognizing single pattern (time series modelling), whereas ANN is better for modelling multiple patterns (cause-effect modelling) in the data.
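    The evaluation statistics named above are standard and easy to reproduce; a minimal sketch (the dictionary layout and the variance-based NMSE normalisation are our assumptions, since conventions for NMSE vary):

```python
import numpy as np

def skill_scores(obs, pred):
    """Mean Square Error, Mean Absolute Error, Normalized MSE and correlation R."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    nmse = mse / np.var(obs)               # one common normalisation choice
    r = np.corrcoef(obs, pred)[0, 1]       # Pearson correlation coefficient
    return {"MSE": mse, "MAE": mae, "NMSE": nmse, "R": r}
```

    Note that R alone can be misleading for biased models (a prediction offset by a constant still has R = 1), which is why the error measures are reported alongside it.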

  15. Statistical tests for power-law cross-correlated processes

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene

    2011-12-01

    For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically calculated the Cauchy inequality -1≤ρDCCA(T,n)≤1. Here we derive -1≤ρDCCA(T,n)≤1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
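    The coefficient ρDCCA(T,n) is the detrended covariance of the two integrated series normalised by their detrended variances. A minimal sketch with overlapping windows and linear detrending (an illustration under those assumptions, not the authors' code):

```python
import numpy as np

def rho_dcca(x, y, n):
    """Detrended cross-correlation coefficient rho_DCCA(T, n) for window size n."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.cumsum(x - x.mean())          # integrated profiles
    Y = np.cumsum(y - y.mean())
    t = np.arange(n + 1)
    f_xy = f_xx = f_yy = 0.0
    for i in range(len(x) - n):          # overlapping windows of n+1 points
        rx = X[i:i + n + 1] - np.polyval(np.polyfit(t, X[i:i + n + 1], 1), t)
        ry = Y[i:i + n + 1] - np.polyval(np.polyfit(t, Y[i:i + n + 1], 1), t)
        f_xy += np.mean(rx * ry)         # detrended covariance
        f_xx += np.mean(rx * rx)         # detrended variances (DFA)
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)
```

    By the Cauchy-Schwarz inequality this construction satisfies -1 ≤ ρDCCA ≤ 1, matching the bound discussed in the abstract.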

  16. Trend Estimation and Regression Analysis in Climatological Time Series: An Application of Structural Time Series Models and the Kalman Filter.

    NASA Astrophysics Data System (ADS)

    Visser, H.; Molenaar, J.

    1995-05-01

    The detection of trends in climatological data has become central to the discussion on climate change due to the enhanced greenhouse effect. To prove detection, a method is needed (i) to make inferences on significant rises or declines in trends, (ii) to take into account natural variability in climate series, and (iii) to compare output from GCMs with the trends in observed climate data. To meet these requirements, flexible mathematical tools are needed. A structural time series model is proposed with which a stochastic trend, a deterministic trend, and regression coefficients can be estimated simultaneously. The stochastic trend component is described using the class of ARIMA models. The regression component is assumed to be linear. However, the regression coefficients corresponding with the explanatory variables may be time dependent to validate this assumption. The mathematical technique used to estimate this trend-regression model is the Kalman filter. The main features of the filter are discussed. Examples of trend estimation are given using annual mean temperatures at a single station in the Netherlands (1706-1990) and annual mean temperatures at Northern Hemisphere land stations (1851-1990). The inclusion of explanatory variables is shown by regressing the latter temperature series on four variables: Southern Oscillation index (SOI), volcanic dust index (VDI), sunspot numbers (SSN), and a simulated temperature signal, induced by increasing greenhouse gases (GHG). In all analyses, the influence of SSN on global temperatures is found to be negligible. The correlations between temperatures and SOI and VDI appear to be negative. For SOI, this correlation is significant, but for VDI it is not, probably because of a lack of volcanic eruptions during the sample period. The relation between temperatures and GHG is positive, which is in agreement with the hypothesis of a warming climate because of increasing levels of greenhouse gases.
The prediction performance of the model is rather poor, and possible explanations are discussed.
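    The structural model above combines stochastic trends and regression terms; the simplest member of that family is the local level model, filtered recursively by the Kalman recursions. A minimal sketch (scalar state, hand-picked noise variances, function name ours; not the authors' full trend-regression model):

```python
import numpy as np

def local_level_filter(y, q=0.01, r=1.0):
    """Kalman filter for a random-walk ('local level') trend: mu_t = mu_{t-1} + w_t.

    q is the trend (state) noise variance, r the observation noise variance;
    their ratio controls how quickly the filtered trend follows the data.
    """
    mu, P = y[0], 1.0                   # initialise at the first observation
    trend = []
    for obs in y:
        P = P + q                       # predict: the trend is a random walk
        k = P / (P + r)                 # Kalman gain
        mu = mu + k * (obs - mu)        # update with the new observation
        P = (1 - k) * P
        trend.append(mu)
    return np.array(trend)
```

    Adding a deterministic trend and time-varying regression coefficients enlarges the state vector but leaves the predict/update structure unchanged.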

  17. Processing short-term and long-term information with a combination of polynomial approximation techniques and time-delay neural networks.

    PubMed

    Fuchs, Erich; Gruber, Christian; Reitmaier, Tobias; Sick, Bernhard

    2009-09-01

    Neural networks are often used to process temporal information, i.e., any kind of information related to time series. In many cases, time series contain short-term and long-term trends or behavior. This paper presents a new approach to capture temporal information with various reference periods simultaneously. A least squares approximation of the time series with orthogonal polynomials will be used to describe short-term trends contained in a signal (average, increase, curvature, etc.). Long-term behavior will be modeled with the tapped delay lines of a time-delay neural network (TDNN). This network takes the coefficients of the orthogonal expansion of the approximating polynomial as inputs, thus considering short-term and long-term information efficiently. The advantages of the method will be demonstrated by means of artificial data and two real-world application examples, the prediction of the user number in a computer network and online tool wear classification in turning.
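    The short-term descriptors (average, increase, curvature) are the low-order coefficients of an orthogonal polynomial least-squares fit over a window. A minimal sketch using Legendre polynomials (the specific basis and the mapping of the window onto [-1, 1] are our assumptions):

```python
import numpy as np

def window_poly_coeffs(segment, deg=2):
    """Least-squares Legendre coefficients of one time-series window.

    Coefficient 0 reflects the window average, 1 the linear increase,
    2 the curvature; these would feed the tapped delay lines of the TDNN.
    """
    t = np.linspace(-1.0, 1.0, len(segment))   # map the window onto [-1, 1]
    return np.polynomial.legendre.legfit(t, np.asarray(segment, float), deg)
```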

  18. Data imputation analysis for Cosmic Rays time series

    NASA Astrophysics Data System (ADS)

    Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.

    2017-05-01

    The occurrence of missing data concerning Galactic Cosmic Rays time series (GCR) is inevitable since loss of data is due to mechanical and human failure or technical problems and different periods of operation of GCR stations. The aim of this study was to perform multiple dataset imputation in order to depict the observational dataset. The study has used the monthly time series of GCR Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90% of missing data compared to observed ROME series, with 50 replicates. The CLMX station was then used as a proxy for the allocation of these scenarios. Three different methods for monthly dataset imputation were selected: Amelia II, which runs a bootstrap Expectation Maximization algorithm; MICE, which runs an algorithm via Multivariate Imputation by Chained Equations; and MTSDI, an Expectation Maximization algorithm-based method for imputation of missing values in multivariate normal time series. The synthetic time series compared with the observed ROME series has also been evaluated using several skill measures such as RMSE, NRMSE, Agreement Index, R, R2, F-test and t-test. The results showed that for CLMX and ROME, the R2 and R statistics were equal to 0.98 and 0.96, respectively. It was observed that increases in the number of gaps generate loss of quality of the time series. Data imputation was more efficient with the MTSDI method, with negligible errors and best skill coefficients. The results suggest an upper limit of about 60% missing data for the imputation of monthly averages. It is noteworthy that CLMX, ROME and KIEL stations present no missing data in the target period. This methodology allowed reconstructing 43 time series.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.

    Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. Here we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with the highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.

  20. Do foreign exchange and equity markets co-move in Latin American region? Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Bashir, Usman; Yu, Yugang; Hussain, Muntazir; Zebende, Gilney F.

    2016-11-01

    This paper investigates the dynamics of the relationship between foreign exchange markets and stock markets through time varying co-movements. In this sense, we analyzed monthly time series of Latin American countries for the period from 1991 to 2015. Furthermore, we apply Granger causality to verify the direction of causality between foreign exchange and stock markets, and the detrended cross-correlation approach (ρDCCA) to detect co-movements at different time scales. Our empirical results suggest a positive cross correlation between exchange rate and stock price for all Latin American countries. The findings reveal two clear patterns of correlation. First, Brazil and Argentina have positive correlation in both short and long time frames. Second, the remaining countries are negatively correlated in shorter time scales, gradually moving to positive. This paper contributes to the field in three ways. First, we verified the co-movements of exchange rate and stock prices that were rarely discussed in previous empirical studies. Second, the ρDCCA coefficient is a robust and powerful methodology to measure the cross correlation when dealing with non-stationarity of time series. Third, most of the studies employed one or two time scales using co-integration and vector autoregressive approaches. Not much is known about the co-movements at varying time scales between foreign exchange and stock markets. The ρDCCA coefficient facilitates the understanding of its explanatory depth.

  1. Optimal Selection of Time Series Coefficients for Wrist Myoelectric Control Based on Intramuscular Recordings

    DTIC Science & Technology

    2001-10-25

    considered static or invariant because the spectral behavior of EMG data is dependent on the specific muscle, contraction level, and limb function. However...produced at the onset of the muscle contraction. Because the units with lower conduction velocity (lower frequency components) are recruited first, the

  2. Extended AIC model based on high order moments and its application in the financial market

    NASA Astrophysics Data System (ADS)

    Mao, Xuegeng; Shang, Pengjian

    2018-07-01

    In this paper, an extended method of the traditional Akaike Information Criterion (AIC) is proposed to detect the volatility of time series by combining it with higher order moments, such as skewness and kurtosis. Since measures considering higher order moments are powerful in many aspects, the properties of asymmetry and flatness can be observed. Furthermore, in order to reduce the effect of noise and other incoherent features, we combine the extended AIC algorithm with multiscale wavelet analysis, in which the newly extended AIC algorithm is applied to wavelet coefficients at several scales and the time series are reconstructed by wavelet transform. After that, we create AIC planes to derive the relationship among AIC values using variance, skewness and kurtosis respectively. When we test this technique on the financial market, the aim is to analyze the trend and volatility of the closing price of stock indices and classify them. We also apply multiscale analysis to measure the complexity of time series over a range of scales. Empirical results show that the singularity of time series in the stock market can be detected via the extended AIC algorithm.

  3. Detecting a currency’s dominance using multivariate time series analysis

    NASA Astrophysics Data System (ADS)

    Syahidah Yusoff, Nur; Sharif, Shamshuritawati

    2017-09-01

    A currency exchange rate is the price of one country’s currency in terms of another country’s currency. Four different prices (opening, closing, highest, and lowest) can be obtained from daily trading activities. In the past, a lot of studies have been carried out by using the closing price only. However, those four prices are interrelated to each other. Thus, a multivariate time series can provide more information than a univariate time series. Therefore, the aim of this paper is to compare the results of two different approaches, the mean vector and Escoufier’s RV coefficient, in constructing similarity matrices of 20 world currencies. Consequently, both matrices are used to substitute the correlation matrix required by network topology. With the help of the degree centrality measure, we can detect the currency’s dominance for both networks. The pros and cons of both approaches will be presented at the end of this paper.
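    Escoufier's RV coefficient generalises the squared correlation to pairs of data matrices (here, the four daily prices of two currencies over the same dates); a minimal sketch (function name ours):

```python
import numpy as np

def rv_coefficient(X, Y):
    """Escoufier's RV coefficient between two data matrices with matched rows.

    RV = tr(A B) / sqrt(tr(A A) tr(B B)), where A and B are the centred
    inner-product (configuration) matrices; it lies in [0, 1], reaching 1
    when the two configurations coincide.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    A, B = Xc @ Xc.T, Yc @ Yc.T
    return np.trace(A @ B) / np.sqrt(np.trace(A @ A) * np.trace(B @ B))
```

    Computing this for every pair of currencies yields the similarity matrix that replaces the ordinary correlation matrix in the network construction.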

  4. Phase synchronization based minimum spanning trees for analysis of financial time series with nonlinear correlations

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, Srinivasan; Duvvuru, Arjun; Sultornsanee, Sivarit; Kamarthi, Sagar

    2016-02-01

    The cross correlation coefficient has been widely applied in financial time series analysis, specifically for understanding chaotic behaviour in terms of stock price and index movements during crisis periods. To better understand time series correlation dynamics, the cross correlation matrices are represented as networks, in which a node stands for an individual time series and a link indicates cross correlation between a pair of nodes. These networks are converted into simpler trees using different schemes. In this context, Minimum Spanning Trees (MST) are the most favoured tree structures because of their ability to preserve all the nodes and thereby retain essential information imbued in the network. Although cross correlations underlying MSTs capture essential information, they do not faithfully capture dynamic behaviour embedded in the time series data of financial systems because cross correlation is a reliable measure only if the relationship between the time series is linear. To address the issue, this work investigates a new measure called phase synchronization (PS) for establishing correlations among different time series which relate to one another, linearly or nonlinearly. In this approach the strength of a link between a pair of time series (nodes) is determined by the level of phase synchronization between them. We compare the performance of the phase synchronization based MST with the cross correlation based MST along selected network measures across a temporal frame that includes economically good and crisis periods. We observe agreement in the directionality of the results across these two methods. They show similar trends, upward or downward, when comparing selected network measures.
Though both methods give similar trends, the phase synchronization based MST is a more reliable representation of the dynamic behaviour of financial systems than the cross correlation based MST because of the former's ability to quantify nonlinear relationships among time series or relations among phase shifted time series.
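    A common way to build such trees is Prim's algorithm on the Mantegna distance d_ij = sqrt(2(1 - c_ij)), where c_ij is either the cross correlation or the phase synchronization strength. A minimal sketch (the deliberately simple loops and the edge-list output are our choices):

```python
import numpy as np

def correlation_mst(corr):
    """Prim's algorithm on the distance d_ij = sqrt(2 (1 - c_ij)).

    High similarity means small distance, so strongly linked series end up
    adjacent in the minimum spanning tree.
    """
    d = np.sqrt(2.0 * (1.0 - np.asarray(corr, float)))
    n = d.shape[0]
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:                       # cheapest edge leaving the tree
            for j in range(n):
                if j not in in_tree and (best is None or d[i, j] < best[2]):
                    best = (i, j, d[i, j])
        edges.append(best[:2])
        in_tree.append(best[1])
    return edges                                # the n-1 edges of the MST
```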

  5. Efficient Generation and Use of Power Series for Broad Application.

    NASA Astrophysics Data System (ADS)

    Rudmin, Joseph; Sochacki, James

    2017-01-01

    A brief history and overview of the Parker-Sochacki method of power series generation is presented. This method generates power series to order n in time n^2 for any system of differential equations that has a power series solution. The method is simple enough that novices to differential equations can easily learn it and immediately apply it. Maximal absolute error estimates allow one to determine the number of terms needed to reach desired accuracy. Ratios of coefficients in a solution with global convergence differ significantly from those for a solution with only local convergence. Divergence of the series prevents one from overlooking poles. The method can always be cast in polynomial form, which allows separation of variables in almost all physical systems, facilitating exploration of hidden symmetries, and is implicitly symplectic.
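    The core of the approach is that for polynomial right-hand sides the series coefficients satisfy a cheap recurrence. A minimal sketch for the single equation y' = y^2 (our illustrative example; with y(0) = 1 the solution 1/(1 - t) has all Maclaurin coefficients equal to 1):

```python
def power_series_y_squared(c0, n):
    """Maclaurin coefficients of the solution of y' = y^2, via the Cauchy product.

    The update for degree k costs O(k) operations, so computing n coefficients
    costs O(n^2) overall, matching the 'order n in time n^2' claim.
    """
    c = [float(c0)]
    for k in range(n - 1):
        cauchy = sum(c[i] * c[k - i] for i in range(k + 1))   # [y^2]_k
        c.append(cauchy / (k + 1))             # from (k+1) c_{k+1} = [y^2]_k
    return c
```

    Systems with non-polynomial right-hand sides are first recast in polynomial form by introducing auxiliary variables, after which the same recurrence applies.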

  6. Multivariate time series modeling of short-term system scale irrigation demand

    NASA Astrophysics Data System (ADS)

    Perera, Kushan C.; Western, Andrew W.; George, Biju; Nawarathna, Bandara

    2015-12-01

    Travel time limits the ability of irrigation system operators to react to short-term irrigation demand fluctuations that result from variations in weather, including very hot periods and rainfall events, as well as the various other pressures and opportunities that farmers face. Short-term system-wide irrigation demand forecasts can assist in system operation. Here we developed a multivariate time series (ARMAX) model to forecast irrigation demands with respect to aggregated service points flows (IDCGi, ASP) and off take regulator flows (IDCGi, OTR) based across 5 command areas, which included area covered under four irrigation channels and the study area. These command area specific ARMAX models forecast 1-5 days ahead daily IDCGi, ASP and IDCGi, OTR using the real time flow data recorded at the service points and the uppermost regulators and observed meteorological data collected from automatic weather stations. The model efficiency and the predictive performance were quantified using the root mean squared error (RMSE), Nash-Sutcliffe model efficiency coefficient (NSE), anomaly correlation coefficient (ACC) and mean square skill score (MSSS). During the evaluation period, NSE for IDCGi, ASP and IDCGi, OTR across the 5 command areas ranged from 0.78 to 0.98. These models were capable of generating skillful forecasts (MSSS ⩾ 0.5 and ACC ⩾ 0.6) of IDCGi, ASP and IDCGi, OTR for all 5 lead days, and IDCGi, ASP and IDCGi, OTR forecasts were better than using the long term monthly mean irrigation demand. Overall, the predictive performance of these ARMAX time series models was higher than in almost all previous studies we are aware of.
Further, IDCGi, ASP and IDCGi, OTR forecasts improved the operators' ability to react to near-future irrigation demand fluctuations, as the developed ARMAX time series models were self-adaptive, reflecting short-term changes in irrigation demand with respect to the various pressures and opportunities that farmers face, such as changing water policy, continued development of water markets, drought and changing technology.
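    An ARMAX-type model reduces, in its simplest noise-free form, to an ARX regression that can be fit by ordinary least squares. A minimal sketch (one exogenous input, no moving-average term; function name and layout ours, not the study's model):

```python
import numpy as np

def fit_arx(y, x, p=1):
    """Least-squares fit of an ARX model: y_t = sum_i a_i y_{t-i} + b x_t + c."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    # one design-matrix row per usable time step: lagged y, current x, intercept
    rows = [[y[t - i] for i in range(1, p + 1)] + [x[t], 1.0]
            for t in range(p, len(y))]
    coeffs, *_ = np.linalg.lstsq(np.array(rows), y[p:], rcond=None)
    return coeffs                      # (a_1 .. a_p, b, c)
```

    In the study the exogenous inputs were the observed flows and weather variables; forecasting several days ahead simply iterates the fitted equation forward.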

  7. Learning investment indicators through data extension

    NASA Astrophysics Data System (ADS)

    Dvořák, Marek

    2017-07-01

    Stock prices in the form of time series were analysed using univariate and multivariate statistical methods. After simple data preprocessing in the form of logarithmic differences, we augmented this univariate time series to a multivariate representation. This method makes use of sliding windows to calculate several dozen new variables using simple statistical tools, such as first and second moments, as well as more complicated statistics, such as autoregression coefficients and residual analysis, followed by an optional quadratic transformation that was further used for data extension. These were used as explanatory variables in a regularized logistic LASSO regression which tried to estimate a Buy-Sell Index (BSI) from real stock market data.
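    The sliding-window augmentation can be sketched as follows; the particular features (moments plus a lag-1 autocorrelation as a simple stand-in for autoregression coefficients) and the window handling are our assumptions:

```python
import numpy as np

def window_features(prices, w):
    """Sliding-window features of log-returns: mean, std, skewness, lag-1 autocorr."""
    r = np.diff(np.log(np.asarray(prices, float)))    # logarithmic differences
    feats = []
    for i in range(len(r) - w + 1):
        seg = r[i:i + w]
        mu, sd = seg.mean(), seg.std()
        skew = np.mean((seg - mu) ** 3) / sd ** 3 if sd > 0 else 0.0
        ac1 = np.corrcoef(seg[:-1], seg[1:])[0, 1] if sd > 0 else 0.0
        feats.append([mu, sd, skew, ac1])
    return np.array(feats)
```

    Each row of the output would then enter the LASSO-regularised logistic regression as one observation, optionally after the quadratic expansion mentioned above.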

  8. A new adaptive self-tuning Fourier coefficients algorithm for periodic torque ripple minimization in permanent magnet synchronous motors (PMSM).

    PubMed

    Gómez-Espinosa, Alfonso; Hernández-Guzmán, Víctor M; Bandala-Sánchez, Manuel; Jiménez-Hernández, Hugo; Rivas-Araiza, Edgar A; Rodríguez-Reséndiz, Juvenal; Herrera-Ruíz, Gilberto

    2013-03-19

    Torque ripple occurs in Permanent Magnet Synchronous Motors (PMSMs) due to the non-sinusoidal flux density distribution around the air-gap and variable magnetic reluctance of the air-gap due to the stator slots distribution. These torque ripples change periodically with rotor position and are apparent as speed variations, which degrade the PMSM drive performance, particularly at low speeds, because of low inertial filtering. In this paper, a new self-tuning algorithm is developed for determining the Fourier Series Controller coefficients with the aim of reducing the torque ripple in a PMSM, thus allowing for a smoother operation. This algorithm adjusts the controller parameters based on the component's harmonic distortion in time domain of the compensation signal. Experimental evaluation is performed on a DSP-controlled PMSM evaluation platform. Test results obtained validate the effectiveness of the proposed self-tuning algorithm, with the Fourier series expansion scheme, in reducing the torque ripple.

  9. Implicit Wiener series analysis of epileptic seizure recordings.

    PubMed

    Barbero, Alvaro; Franz, Matthias; van Drongelen, Wim; Dorronsoro, José R; Schölkopf, Bernhard; Grosse-Wentrup, Moritz

    2009-01-01

    Implicit Wiener series are a powerful tool to build Volterra representations of time series with any degree of non-linearity. A natural question is then whether higher order representations yield more useful models. In this work we shall study this question for ECoG data channel relationships in epileptic seizure recordings, considering whether quadratic representations yield more accurate classifiers than linear ones. To do so we first show how to derive statistical information on the Volterra coefficient distribution and how to construct seizure classification patterns over that information. As our results illustrate, a quadratic model seems to provide no advantages over a linear one. Nevertheless, we shall also show that the interpretability of the implicit Wiener series provides insights into the inter-channel relationships of the recordings.

  10. The Physical Significance of the Synthetic Running Correlation Coefficient and Its Applications in Oceanic and Atmospheric Studies

    NASA Astrophysics Data System (ADS)

    Zhao, Jinping; Cao, Yong; Wang, Xin

    2018-06-01

    In order to study the temporal variations of correlations between two time series, a running correlation coefficient (RCC) could be used. An RCC is calculated for a given time window, and the window is then moved sequentially through time. The current calculation method for RCCs is based on the general definition of the Pearson product-moment correlation coefficient, calculated with the data within the time window, which we call the local running correlation coefficient (LRCC). The LRCC is calculated via the two anomalies corresponding to the two local means; meanwhile, the local means themselves vary. We show that the LRCC reflects only the correlation between the two anomalies within the time window but fails to exhibit the contributions of the two varying means. To address this problem, two unchanged means obtained from all available data are adopted to calculate an RCC, which is called the synthetic running correlation coefficient (SRCC). When the anomaly variations are dominant, the two RCCs are similar. However, when the variations of the means are dominant, the difference between the two RCCs becomes obvious. The SRCC reflects the correlations of both the anomaly variations and the variations of the means. Therefore, the SRCCs from different time points are intercomparable. A criterion for the superiority of the RCC algorithm is that the average value of the RCC should be close to the global correlation coefficient calculated using all data. The SRCC always meets this criterion, while the LRCC sometimes fails. Therefore, the SRCC is better than the LRCC for running correlations. We suggest using the SRCC to calculate the RCCs.
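    The difference between the two definitions is only which means the anomalies are taken about: window-local means for the LRCC, global means for the SRCC. A minimal side-by-side sketch (function name ours):

```python
import numpy as np

def running_correlations(x, y, w):
    """LRCC uses window-local means; SRCC uses the global means of the full series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    gx, gy = x.mean(), y.mean()
    lrcc, srcc = [], []
    for i in range(len(x) - w + 1):
        xs, ys = x[i:i + w], y[i:i + w]
        lrcc.append(np.corrcoef(xs, ys)[0, 1])      # anomalies about local means
        dx, dy = xs - gx, ys - gy                   # anomalies about global means
        srcc.append(np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2)))
    return np.array(lrcc), np.array(srcc)
```

    Per the criterion in the abstract, the average SRCC should stay close to the global correlation coefficient computed from all the data, which the average LRCC need not.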

  11. Time-series analyses of air pollution and mortality in the United States: a subsampling approach.

    PubMed

    Moolgavkar, Suresh H; McClellan, Roger O; Dewanji, Anup; Turim, Jay; Luebeck, E Georg; Edwards, Melanie

    2013-01-01

    Hierarchical Bayesian methods have been used in previous papers to estimate national mean effects of air pollutants on daily deaths in time-series analyses. We obtained maximum likelihood estimates of the common national effects of the criteria pollutants on mortality based on time-series data from ≤ 108 metropolitan areas in the United States. We used a subsampling bootstrap procedure to obtain the maximum likelihood estimates and confidence bounds for common national effects of the criteria pollutants, as measured by the percentage increase in daily mortality associated with a unit increase in daily 24-hr mean pollutant concentration on the previous day, while controlling for weather and temporal trends. We considered five pollutants [PM10, ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulfur dioxide (SO2)] in single- and multipollutant analyses. Flexible ambient concentration-response models for the pollutant effects were considered as well. We performed limited sensitivity analyses with different degrees of freedom for time trends. In single-pollutant models, we observed significant associations of daily deaths with all pollutants. The O3 coefficient was highly sensitive to the degree of smoothing of time trends. Among the gases, SO2 and NO2 were most strongly associated with mortality. The flexible ambient concentration-response curve for O3 showed evidence of nonlinearity and a threshold at about 30 ppb. Differences between the results of our analyses and those reported from using the Bayesian approach suggest that estimates of the quantitative impact of pollutants depend on the choice of statistical approach, although results are not directly comparable because they are based on different data. In addition, the estimate of the O3-mortality coefficient depends on the amount of smoothing of time trends.

  12. Bayesian Estimation of Random Coefficient Dynamic Factor Models

    ERIC Educational Resources Information Center

    Song, Hairong; Ferrer, Emilio

    2012-01-01

    Dynamic factor models (DFMs) have typically been applied to multivariate time series data collected from a single unit of study, such as a single individual or dyad. The goal of DFMs application is to capture dynamics of multivariate systems. When multiple units are available, however, DFMs are not suited to capture variations in dynamics across…

  13. Ignition and combustion phenomena in Diesel engines

    NASA Technical Reports Server (NTRS)

    Sass, F

    1928-01-01

    Evidence was found that neither gasification nor vaporization of the injected fuel occurs before ignition, and that the hydrogen coefficient has no significance. However, knowledge of the ignition point and of the "time lag" is important. After ignition, the combustion proceeds in a series of reactions, the last of which, at least, are now known.

  14. Influential Observations in Time Series.

    DTIC Science & Technology

    1984-07-01

    Also, w(X) → z and (X'wX) → 0. In practice this result means that when w is large, all the estimated coefficients are pulled down towards

  15. Assessing backscatter change due to backscatter gradient over the Greenland ice sheet using Envisat and SARAL altimetry

    NASA Astrophysics Data System (ADS)

    Su, Xiaoli; Luo, Zhicai; Zhou, Zebing

    2018-06-01

    Knowledge of backscatter change is important for accurately retrieving elevation-change time series from satellite radar altimetry over continental ice sheets. Previously, backscatter coefficients have been generated in two cases, namely with and without accounting for the backscatter gradient (BG). However, the difference between backscatter time series obtained in these two cases, and its impact on retrieved elevation change, is not well known. Here we first compare the mean profiles of the Ku- and Ka-band backscatter over the Greenland ice sheet (GrIS), with results illustrating that the Ku-band backscatter is 3-5 dB larger than that of the Ka band. We then conduct a statistical analysis of the backscatter time series formed in the above two cases for both Ku and Ka bands over two regions of the GrIS. We find that the standard deviation of the backscatter time series becomes slightly smaller after removing the BG effect, which suggests that the method for the BG correction is effective. Furthermore, the impact on elevation change of backscatter change due to the BG effect is assessed separately for the Ku and Ka bands over the GrIS. We conclude that Ka-band altimetry would benefit from a BG-induced backscatter analysis (~10% over region 2). This study may provide a reference for forming backscatter time series towards refining elevation-change time series from satellite radar altimetry over ice sheets using repeat-track analysis.

  16. Dynamic and Regression Modeling of Ocean Variability in the Tide-Gauge Record at Seasonal and Longer Periods

    NASA Technical Reports Server (NTRS)

    Hill, Emma M.; Ponte, Rui M.; Davis, James L.

    2007-01-01

    Comparison of monthly mean tide-gauge time series to corresponding model time series based on a static inverted barometer (IB) for pressure-driven fluctuations and an ocean general circulation model (OM) reveals that the combined model successfully reproduces seasonal and interannual changes in relative sea level at many stations. Removal of the OM and IB from the tide-gauge record produces residual time series with a mean global variance reduction of 53%. The OM is mis-scaled for certain regions, and 68% of the residual time series contain significant seasonal variability after removal of the OM and IB from the tide-gauge data. Including OM admittance parameters and seasonal coefficients in a regression model for each station, with the IB also removed, produces residual time series with a mean global variance reduction of 71%. Examination of the regional improvement in variance from scaling the OM, including seasonal terms, or both, indicates weakness in the model's predictions of sea-level variation for constricted ocean regions. The model is particularly effective at reproducing sea-level variation for stations in North America, Europe, and Japan. The RMS residual for many stations in these areas is 25-35 mm. The production of "cleaner" tide-gauge time series, with oceanographic variability removed, is important for future analysis of nonsecular and regionally differing sea-level variations. Understanding the ocean model's strengths and weaknesses will allow for future improvements of the model.

  17. Exact Fourier expansion in cylindrical coordinates for the three-dimensional Helmholtz Green function

    NASA Astrophysics Data System (ADS)

    Conway, John T.; Cohl, Howard S.

    2010-06-01

    A new method is presented for Fourier decomposition of the Helmholtz Green function in cylindrical coordinates, which is equivalent to obtaining the solution of the Helmholtz equation for a general ring source. The Fourier coefficients of the Green function are split into their half-advanced plus half-retarded and half-advanced minus half-retarded components, and closed-form solutions for these components are then obtained in terms of a Horn function and a Kampé de Fériet function, respectively. Series solutions for the Fourier coefficients are given in terms of associated Legendre functions, Bessel and Hankel functions, and a hypergeometric function. These series are derived either from the closed-form two-dimensional hypergeometric solutions, from an integral representation, or from both. A simple closed-form far-field solution for the general Fourier coefficient is derived from the Hankel series. Numerical calculations comparing different methods of calculating the Fourier coefficients are presented. Fourth-order ordinary differential equations for the Fourier coefficients are also given and discussed briefly.

  18. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of water amount that will enter the reservoirs in the following month is of vital importance especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which then has been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.

  19. Fractional-order Fourier analysis for ultrashort pulse characterization.

    PubMed

    Brunel, Marc; Coetmellec, Sébastien; Lelek, Mickael; Louradour, Frédéric

    2007-06-01

    We report what we believe to be the first experimental demonstration of ultrashort pulse characterization using fractional-order Fourier analysis. The analysis is applied to the interpretation of spectral interferometry resolved in time (SPIRIT) traces [which are spectral phase interferometry for direct electric field reconstruction (SPIDER)-like interferograms]. First, the fractional-order Fourier transformation is shown to naturally allow the determination of the cubic spectral phase coefficient of pulses to be analyzed. A simultaneous determination of both cubic and quadratic spectral phase coefficients of the pulses using the fractional-order Fourier series expansion is further demonstrated. This latter technique consists of localizing relative maxima in a 2D cartography representing decomposition coefficients. It is further used to reconstruct or filter SPIRIT traces.

  20. Impact of AGN and nebular emission on the estimation of stellar properties of galaxies

    NASA Astrophysics Data System (ADS)

    Cardoso, Leandro Saul Machado

    The aim of this PhD thesis is to apply tools from stochastic modeling to wind power, speed and direction data, in order to reproduce their empirically observed statistical features. In particular, the wind energy conversion process is modeled as a Langevin process, which allows its dynamics to be described with only two coefficients, namely the drift and the diffusion coefficients. Both coefficients can be derived directly from collected time series, and this so-called Langevin method has proved to be successful in several cases. However, the application to empirical data subject to measurement noise in general, and the case of wind turbines in particular, poses several challenges, and this thesis proposes methods to tackle them. To apply the Langevin method it is necessary to have data that is both stationary and Markovian, which is typically not the case. Moreover, the available time series are often short and have missing data points, which affects the estimation of the coefficients. This thesis proposes a new methodology to overcome these issues by modeling the original data with a Markov chain prior to the Langevin analysis. The latter is then performed on data synthesized from the Markov chain model of wind data. Moreover, it is shown that the Langevin method can be applied to low-sample-rate wind data, namely 10-minute average data. The method is then extended in two different directions. First, to tackle non-stationary data sets. Wind data often exhibit daily patterns due to the solar cycle, and this thesis proposes a method to consider these daily patterns in the analysis of the time series. For that, a cyclic Markov model is developed for the data synthesis step and subsequently, for each time of the day, a separate Langevin analysis of the wind energy conversion system is performed. Second, to resolve the dynamical stochastic process when it is spoiled by measurement noise.
When working with measurement data, a further challenge is posed by the quality of the data itself. Measurement devices often add noise to the time series that differs from the intrinsic noise of the underlying stochastic process and can even be time-correlated. Such spoiled data, analyzed with the Langevin method, leads to distorted drift and diffusion coefficients. This thesis proposes a direct, parameter-free way to extract the Langevin coefficients, as well as the parameters of the measurement noise, from spoiled data. Put in a more general context, the method makes it possible to disentangle two superposed independent stochastic processes. Finally, since a characteristic of wind energy that motivates this stochastic modeling framework is the fluctuating nature of wind itself, several issues arise when it comes to reserve commitment or bidding on the liberalized energy market. This thesis proposes a measure to quantify the risk-return ratio associated with wind power production conditioned on a wind park state. The proposed state of the wind park takes into account data from all wind turbines constituting the park, and also their correlations at different time lags.

  1. Constraints and efficiency of cattle marketing in semiarid pastoral system in Kenya.

    PubMed

    Onono, Joshua Orungo; Amimo, Joshua Oluoch; Rushton, Jonathan

    2015-04-01

    Livestock keeping is regarded as a store of wealth for pastoralists in Kenya, besides its social and cultural functions. The objective of this study was to prioritize constraints to cattle marketing in a semiarid pastoral area of Narok in Kenya and to analyze the efficiency of cattle marketing in transit markets located in Garissa, Kajiado and Narok counties. Primary data collection from traders was done through participatory interviews and market surveys, while time series market price data were obtained from secondary sources. Five focus group interviews were organized with a total of 61 traders in markets from Narok County, while a total of 187 traders who purchased cattle from transit markets provided data on the number of cattle purchased, purpose of purchase, buying prices and mode of transport. Market performance was analyzed through traders' market shares, gross margins, the Gini coefficient and coefficients of correlation between time series price data. The marketing constraints ranked highest included lack of a market for meat, trekking of cattle to markets, lack of price information and occurrence of diseases. About 10% of traders purchased over 50% of the cattle supplied in the markets, revealing a high concentration index. Further, the gross marketing margin per animal purchased was positive in all markets, indicating profitability. Moderate correlation coefficients existed between time series market price data for cattle purchased from the Ewaso Ngiro and Mulot markets (r = 0.5; p < 0.05), while those between the Dagoretti and Garissa markets were weak (r = 0.2; p > 0.05). The integration of markets, occurrence of diseases and trekking of cattle to markets are factors which may increase the risk of infectious disease spread. These results call for support of disease surveillance activities within markets in pastoral areas, so that farms and systems which are connected are protected from threats of infectious diseases.
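    The concentration index mentioned above can be quantified with a Gini coefficient. A minimal numpy sketch (the trader counts below are hypothetical, not the study's data):

    ```python
    import numpy as np

    def gini(x):
        """Gini coefficient of non-negative values: 0 for perfectly equal
        shares, approaching 1 when a single participant dominates."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        cum = np.cumsum(x)
        # Equivalent to the mean absolute difference normalized by 2*mean
        return (n + 1 - 2.0 * np.sum(cum) / cum[-1]) / n

    # Hypothetical purchase counts for 10 traders, one of them dominant:
    purchases = np.array([5, 5, 5, 5, 5, 5, 5, 5, 10, 50])
    concentration = gini(purchases)  # ≈ 0.44, a fairly concentrated market
    ```

    A value near zero would indicate purchases spread evenly across traders; the study's finding that ~10% of traders bought over half the cattle corresponds to a high coefficient.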

  2. Removing tidal-period variations from time-series data using low-pass digital filters

    USGS Publications Warehouse

    Walters, Roy A.; Heston, Cynthia

    1982-01-01

    Several low-pass digital filters are examined for their ability to remove tidal-period variations from a time series of water-surface elevation for San Francisco Bay. The most efficient filter is one applied to the Fourier coefficients of the transformed data, with the filtered data recovered through an inverse transform. The ability of the filters to remove the tidal components increases in the following order: 1) the cosine-Lanczos filter; 2) the cosine-Lanczos squared filter; 3) the Godin filter; and 4) a transform filter. The Godin filter is not sufficiently sharp to prevent severe attenuation of the 2-3 day variations in surface elevation resulting from weather events.
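    The "transform filter" idea — zero out the Fourier coefficients in the tidal band, then invert the transform — can be sketched in a few lines of numpy (the cutoff and the synthetic record below are illustrative assumptions, not the paper's values):

    ```python
    import numpy as np

    def transform_filter(h, dt_hours=1.0, cutoff_hours=48.0):
        """Low-pass a water-level series by zeroing Fourier coefficients
        with periods shorter than `cutoff_hours`, then inverting the FFT."""
        H = np.fft.rfft(h)
        freqs = np.fft.rfftfreq(len(h), d=dt_hours)  # cycles per hour
        H[freqs > 1.0 / cutoff_hours] = 0.0          # remove the tidal band
        return np.fft.irfft(H, n=len(h))

    # Synthetic record: a 2-day "weather" oscillation plus an M2-like
    # (12.42 h) tide, sampled hourly for 30 days.
    t = np.arange(0, 24 * 30, 1.0)
    weather = 0.3 * np.sin(2 * np.pi * t / 48)
    h = weather + 1.0 * np.sin(2 * np.pi * t / 12.42)
    filtered = transform_filter(h)
    ```

    Unlike a time-domain convolution filter, the sharp spectral cutoff leaves sub-tidal weather variability essentially unattenuated, which is the property the abstract credits the transform filter with.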

  3. Recombination of electrons with NH4/+/-/NH3/n-series ions

    NASA Technical Reports Server (NTRS)

    Huang, C.-M.; Biondi, M. A.; Johnsen, R.

    1976-01-01

    The paper examines the recombination of electrons with ammonium-series cluster ions, NH4(+)-(NH3)n, for two reasons: (1) NH4(+) may be a significant ion in the lower atmospheres of the earth and the outer planets, and (2) to investigate the weak temperature dependence of the cluster ion's recombination coefficient. A microwave afterglow mass spectrometer was used to determine the recombination coefficients for the first five members of the ammonium series, (18+) through (86+), at temperatures between 200 and 410 K. The electron temperature dependence of the recombination coefficient was determined for (35+) and (52+), the n = 1 and 2 cluster ions, over the temperature range 300-3000 K.

  4. Approximate solutions for diffusive fracture-matrix transfer: Application to storage of dissolved CO2 in fractured rocks

    DOE PAGES

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.; ...

    2017-01-05

    Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. In this paper, we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with a highest relative approximation error of less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.

  5. TIDE TOOL: Open-Source Sea-Level Monitoring Software for Tsunami Warning Systems

    NASA Astrophysics Data System (ADS)

    Weinstein, S. A.; Kong, L. S.; Becker, N. C.; Wang, D.

    2012-12-01

    A tsunami warning center (TWC) typically decides to issue a tsunami warning bulletin when initial estimates of earthquake source parameters suggest the earthquake may be capable of generating a tsunami. A TWC, however, relies on sea-level data to provide prima facie evidence for the existence or non-existence of destructive tsunami waves and to constrain tsunami wave height forecast models. In the aftermath of the 2004 Sumatra disaster, the International Tsunami Information Center asked the Pacific Tsunami Warning Center (PTWC) to develop a platform-independent, easy-to-use software package to give nascent TWCs the ability to process WMO Global Telecommunications System (GTS) sea-level messages and to analyze the resulting sea-level curves (marigrams). In response PTWC developed TIDE TOOL, which has since steadily grown in sophistication to become PTWC's operational sea-level processing system. TIDE TOOL has two main parts: a decoder that reads GTS sea-level message logs, and a graphical user interface (GUI) written in the open-source, platform-independent graphical toolkit scripting language Tcl/Tk. This GUI consists of dynamic map-based clients that allow the user to select and analyze a single station or groups of stations by displaying their marigrams in strip-chart or screen-tiled forms. TIDE TOOL also includes detail maps of each station to show each station's geographical context, and reverse tsunami travel time contours to each station. TIDE TOOL can also be coupled to the GEOWARE™ TTT program to plot tsunami travel times and to indicate the expected tsunami arrival time on the marigrams. Because sea-level messages are structured in a rich variety of formats, TIDE TOOL includes a metadata file, COMP_META, that contains all of the information needed by TIDE TOOL to decode sea-level data as well as basic information such as the geographical coordinates of each station.
TIDE TOOL can therefore continuously decode these sea-level messages in real time and display the time-series data in the GUI as well. The GUI also includes mouse-clickable functions such as zooming or expanding the time-series display, measuring tsunami signal characteristics (arrival time, wave period and amplitude, etc.), and removing the tide signal from the time-series data. De-tiding of the time series is necessary to obtain accurate measurements of tsunami wave parameters and to maintain accurate historical tsunami databases. With TIDE TOOL, de-tiding is accomplished with a set of tide harmonic coefficients routinely computed and updated at PTWC for many of the stations in PTWC's inventory (~570). PTWC also uses the decoded time series files (the previous 3-5 days' worth) to compute on-the-fly tide coefficients. The latter is useful in cases where a station is new and a long-term stable set of tide coefficients is not available or cannot be easily obtained due to various non-astronomical effects. The international tsunami warning system is coordinated globally by the UNESCO IOC, and a number of countries in the Pacific, Indian Ocean, and Caribbean regions depend on TIDE TOOL to monitor tsunamis in real time.

  6. Validation of drift and diffusion coefficients from experimental data

    NASA Astrophysics Data System (ADS)

    Riera, R.; Anteneodo, C.

    2010-04-01

    Many fluctuation phenomena, in physics and other fields, can be modeled by Fokker-Planck or stochastic differential equations whose coefficients, associated with drift and diffusion components, may be estimated directly from the observed time series. Their correct characterization is crucial for determining the system's quantifiers. However, due to the finite sampling rates of real data, the empirical estimates may differ significantly from their true functional forms. In the literature, low-order corrections, or even no corrections, have been applied to the finite-time estimates. A frequent outcome consists of linear drift and quadratic diffusion coefficients. For this case, exact corrections have recently been found from Itô-Taylor expansions. Nevertheless, model validation constitutes a necessary step before determining and applying the appropriate corrections. Here, we exploit the consequences of the exact theoretical results obtained for the linear-quadratic model. In particular, we discuss whether the observed finite-time estimates are actually a manifestation of that model. The relevance of this analysis is demonstrated by its application to two contrasting real-data examples in which finite-time linear drift and quadratic diffusion coefficients are observed. In one case the linear-quadratic model is readily rejected, while in the other, although the model constitutes a very good approximation, low-order corrections are inappropriate. These examples give warning signs about the proper interpretation of finite-time analysis even in more general diffusion processes.
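    The finite-time estimates discussed above are conditional moments of the increments, binned by the current state. A minimal sketch (not the paper's procedure) using an Ornstein-Uhlenbeck process — linear drift, constant diffusion — as stand-in data; the parameters `a`, `D`, `dt` and the binning are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate dx = -a*x dt + sqrt(2*D) dW as a test time series.
    a, D, dt, n = 1.0, 0.5, 1e-3, 1_000_000
    x = np.empty(n)
    x[0] = 0.0
    kicks = rng.standard_normal(n - 1) * np.sqrt(2.0 * D * dt)
    for i in range(n - 1):
        x[i + 1] = x[i] - a * x[i] * dt + kicks[i]

    # Finite-time (Kramers-Moyal) estimators, binned by the state x:
    #   D1(x) ~ <x(t+dt) - x(t) | x(t)=x> / dt
    #   D2(x) ~ <(x(t+dt) - x(t))^2 | x(t)=x> / (2 dt)
    bins = np.linspace(-1.5, 1.5, 31)
    idx = np.digitize(x[:-1], bins)
    inc = np.diff(x)
    drift = np.full(bins.size + 1, np.nan)
    diffusion = np.full(bins.size + 1, np.nan)
    for b in range(1, bins.size):
        sel = idx == b
        if np.count_nonzero(sel) > 500:  # skip sparsely populated bins
            drift[b] = inc[sel].mean() / dt
            diffusion[b] = (inc[sel] ** 2).mean() / (2.0 * dt)
    ```

    For this process the estimated drift should be close to the linear function -a*x and the diffusion close to the constant D; with coarser sampling intervals the finite-time distortions the abstract warns about become visible.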

  7. Cartesian-Grid Simulations of a Canard-Controlled Missile with a Free-Spinning Tail

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The proposed paper presents a series of simulations of a geometrically complex, canard-controlled, supersonic missile with free-spinning tail fins. Time-dependent simulations were performed using an inviscid Cartesian-grid-based method with results compared to both experimental data and high-resolution Navier-Stokes computations. At fixed free stream conditions and canard deflections, the tail spin rate was iteratively determined such that the net rolling moment on the empennage is zero. This rate corresponds to the time-asymptotic rate of the free-to-spin fin system. After obtaining spin-averaged aerodynamic coefficients for the missile, the investigation seeks a fixed-tail approximation to the spin-averaged aerodynamic coefficients, and examines the validity of this approximation over a variety of freestream conditions.

  8. Dynamical Stochastic Processes of Returns in Financial Markets

    NASA Astrophysics Data System (ADS)

    Kim, Kyungsik; Kim, Soo Yong; Lim, Gyuchang; Zhou, Junyuan; Yoon, Seung-Min

    2006-03-01

    We show how the evolution of the probability distribution functions of the returns from tick data of Korean treasury bond (KTB) futures and the S&P 500 stock index can be described by means of the Fokker-Planck equation. We derive the Fokker-Planck equation from Kramers-Moyal coefficients estimated directly from the empirical data. By analyzing the statistics of the returns, we present the quantitative deterministic and random influences on both financial time series, for which we can give a simple physical interpretation. Finally, we remark that the diffusion coefficient should be considered carefully when constructing a portfolio.

  9. Rational trigonometric approximations using Fourier series partial sums

    NASA Technical Reports Server (NTRS)

    Geer, James F.

    1993-01-01

    A class of approximations {S(N,M)} to a periodic function f is introduced and studied, which uses the ideas of Padé, or rational-function, approximation based on the Fourier series representation of f, rather than on its Taylor series representation. Each approximation S(N,M) is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S(N,M) agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Padé' approximations converge pointwise to (f(x+) + f(x-))/2 more rapidly (in some cases by a factor of 1/k^(2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples, and an application to the solution of an initial-boundary value problem for the simple heat equation is presented.

  10. Toward a Global Horizontal and Vertical Elastic Load Deformation Model Derived from GRACE and GNSS Station Position Time Series

    NASA Astrophysics Data System (ADS)

    Chanard, Kristel; Fleitout, Luce; Calais, Eric; Rebischung, Paul; Avouac, Jean-Philippe

    2018-04-01

    We model surface displacements induced by variations in continental water, atmospheric pressure, and nontidal oceanic loading, derived from the Gravity Recovery and Climate Experiment (GRACE), for spherical harmonic degrees two and higher. As they are not observable by GRACE, we first use the degree-1 spherical harmonic coefficients from Swenson et al. (2008, https://doi.org/10.1029/2007JB005338). We compare the predicted displacements with the position time series of 689 globally distributed continuous Global Navigation Satellite System (GNSS) stations. While GNSS vertical displacements are well explained by the model at a global scale, horizontal displacements are systematically underpredicted and out of phase with GNSS station position time series. We then reestimate the degree-1 deformation field from a comparison between our GRACE-derived model, with no a priori degree-1 loads, and the GNSS observations. We show that this approach reconciles GRACE-derived loading displacements and GNSS station position time series at a global scale, particularly in the horizontal components. Assuming that they reflect surface loading deformation only, our degree-1 estimates can be translated into geocenter motion time series. We also address and assess the impact of systematic errors in GNSS station position time series at the Global Positioning System (GPS) draconitic period and its harmonics on the comparison between GNSS- and GRACE-derived annual displacements. Our results confirm that surface mass redistributions observed by GRACE, combined with an elastic spherical and layered Earth model, can be used to provide first-order corrections for loading deformation observed in both horizontal and vertical components of GNSS station position time series.

  11. Flicker Noise in GNSS Station Position Time Series: How much is due to Crustal Loading Deformations?

    NASA Astrophysics Data System (ADS)

    Rebischung, P.; Chanard, K.; Metivier, L.; Altamimi, Z.

    2017-12-01

    The presence of colored noise in GNSS station position time series was detected 20 years ago. It has been shown since then that the background spectrum of non-linear GNSS station position residuals closely follows a power-law process (known as flicker noise, 1/f noise or pink noise), with some white noise taking over at the highest frequencies. However, the origin of the flicker noise present in GNSS station position time series is still unclear. Flicker noise is often described as intrinsic to the GNSS system, i.e. due to errors in the GNSS observations or in their modeling, but no such error source has been identified so far that could explain the level of observed flicker noise, nor its spatial correlation. We investigate another possible contributor to the observed flicker noise, namely real crustal displacements driven by surface mass transports, i.e. non-tidal loading deformations. This study is motivated by the presence of power-law noise in the time series of low-degree (≤ 40) and low-order (≤ 12) Stokes coefficients observed by GRACE; power-law noise might also exist at higher degrees and orders, but is obscured by GRACE observational noise. By comparing GNSS station position time series with loading deformation time series derived from GRACE gravity fields, both with their periodic components removed, we therefore assess whether GNSS and GRACE both plausibly observe the same flicker behavior of surface mass transports / loading deformations. Taking into account GRACE observability limitations, we also quantify the amount of flicker noise in GNSS station position time series that could be explained by such flicker loading deformations.

  12. Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model

    DOE PAGES

    Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.

    2008-01-01

    This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transformation was applied to the original sensor data, and an AR model was then fitted to the transformed data to generate complex AR model coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: a backward-propagating neural network (BPNN), a radial basis function-principal component analysis (RBF-PCA) approach, and a radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This classification approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
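    The core idea — an AR coefficient vector as a compact, shift-tolerant feature set for a signal — can be sketched with a plain least-squares AR fit. This is a simplification of the paper's pipeline (which works on complex coefficients of the inverse-FFT'd signal); the order `p` and the synthetic AR(2) example are illustrative assumptions:

    ```python
    import numpy as np

    def ar_coefficients(y, p=8):
        """Least-squares fit of an AR(p) model y[t] ~ sum_k a[k]*y[t-k-1];
        the returned vector a serves as a compact feature set for y."""
        y = np.asarray(y, dtype=float)
        # Column k holds the lag-(k+1) values aligned with targets y[p:].
        X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
        a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
        return a

    # Sanity check: recover the coefficients of a synthetic AR(2) signal.
    rng = np.random.default_rng(1)
    y = np.zeros(5000)
    for t in range(2, 5000):
        y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()
    a = ar_coefficients(y, p=2)  # a ≈ [0.6, -0.3]
    ```

    Because the coefficients summarize the signal's dynamics rather than its sample positions, classifiers trained on them are less sensitive to time misalignment between chromatograms, which is the robustness property the abstract highlights.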

  13. The Gaussian Graphical Model in Cross-Sectional and Time-Series Data.

    PubMed

    Epskamp, Sacha; Waldorp, Lourens J; Mõttus, René; Borsboom, Denny

    2018-04-16

    We discuss the Gaussian graphical model (GGM; an undirected network of partial correlation coefficients) and detail its utility as an exploratory data analysis tool. The GGM shows which variables predict one another, allows for sparse modeling of covariance structures, and may highlight potential causal relationships between observed variables. We describe its utility in three kinds of psychological data sets: data sets in which consecutive cases are assumed independent (e.g., cross-sectional data), temporally ordered data sets (e.g., n = 1 time series), and a mixture of the two (e.g., n > 1 time series). In time-series analysis, the GGM can be used to model the residual structure of a vector autoregression (VAR) analysis, also termed graphical VAR. Two network models can then be obtained: a temporal network and a contemporaneous network. When analyzing data from multiple subjects, a GGM can also be formed on the covariance structure of the stationary means: the between-subjects network. We discuss the interpretation of these models and propose estimation methods to obtain these networks, which we implement in the R packages graphicalVAR and mlVAR. The methods are showcased in two empirical examples, and simulation studies on these methods are included in the supplementary materials.
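    The GGM's edge weights are partial correlations, obtainable by standardizing the inverse covariance (precision) matrix. A minimal numpy sketch (not the R packages' estimators, which add regularization); the three-variable chain below is a hypothetical example:

    ```python
    import numpy as np

    def partial_correlations(data):
        """Partial correlation matrix (GGM edge weights) for an
        n_samples x n_vars data matrix, via the precision matrix K:
        p_ij = -K_ij / sqrt(K_ii * K_jj)."""
        K = np.linalg.inv(np.cov(data, rowvar=False))
        d = np.sqrt(np.diag(K))
        P = -K / np.outer(d, d)
        np.fill_diagonal(P, 1.0)
        return P

    # Hypothetical chain X -> Y -> Z: X and Z correlate marginally,
    # but their partial correlation given Y is (near) zero, so the
    # GGM drops the X-Z edge.
    rng = np.random.default_rng(2)
    n = 20000
    x = rng.standard_normal(n)
    y = x + rng.standard_normal(n)
    z = y + rng.standard_normal(n)
    P = partial_correlations(np.column_stack([x, y, z]))
    ```

    This is the sense in which the GGM "shows which variables predict one another": an edge survives only if the association remains after conditioning on all other variables.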

  14. Cross-sample entropy of foreign exchange time series

    NASA Astrophysics Data System (ADS)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for a period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method of the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before the Asian currency crisis. Comparison with the correlation coefficient shows that cross-SampEn is superior for describing the correlation between time series.
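Cross-SampEn counts template matches of length m and m+1 between the two series and reports -ln(A/B); higher values mean greater asynchrony. A minimal sketch on synthetic data (z-scoring, tolerance r, and series lengths here are illustrative choices, not the paper's parameters):

```python
import numpy as np

def cross_sampen(u, v, m=2, r=0.2):
    """Cross-sample entropy between two equal-length series.

    Counts length-m and length-(m+1) template matches between u and v
    (Chebyshev distance <= r after z-scoring) and returns -ln(A/B).
    Higher values indicate greater asynchrony between the series.
    """
    u = (u - np.mean(u)) / np.std(u)
    v = (v - np.mean(v)) / np.std(v)
    n = len(u)

    def match_probability(length):
        count, total = 0, 0
        for i in range(n - length):
            tmpl = u[i:i + length]
            for j in range(n - length):
                total += 1
                if np.max(np.abs(tmpl - v[j:j + length])) <= r:
                    count += 1
        return count / total

    b = match_probability(m)       # length-m match probability
    a = match_probability(m + 1)   # length-(m+1) match probability
    return -np.log(a / b)

rng = np.random.default_rng(1)
x = rng.normal(size=300)
noisy_copy = x + 0.1 * rng.normal(size=300)     # synchronized pair
independent = rng.normal(size=300)              # asynchronous pair
print(cross_sampen(x, noisy_copy), cross_sampen(x, independent))
```

The synchronized pair yields the lower entropy, matching the paper's interpretation that rising cross-SampEn after the crisis signals rising asynchrony between exchange rates.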

  15. Efficient hemodynamic event detection utilizing relational databases and wavelet analysis

    NASA Technical Reports Server (NTRS)

    Saeed, M.; Mark, R. G.

    2001-01-01

    Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
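The storage-and-query pattern can be sketched with a coarse Haar wavelet transform and SQLite standing in for the paper's MySQL schema. The table layout, decomposition level, and injected "event" are all illustrative assumptions; the point is that a handful of detail coefficients summarize a long trend and a single no-join query flags abrupt changes:

```python
import sqlite3
import numpy as np

def haar_detail(x, level):
    """Coarse Haar detail coefficients of x at the given dyadic level."""
    c = np.asarray(x, dtype=float)
    for _ in range(level):
        c = (c[0::2] + c[1::2]) / np.sqrt(2)   # approximation coefficients
    return (c[0::2] - c[1::2]) / np.sqrt(2)    # one more step: details

rng = np.random.default_rng(2)
trend = rng.normal(0, 1, 4096)
trend[2040:2200] += 8.0                        # injected "hemodynamic event"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE coeffs (idx INTEGER, level INTEGER, value REAL)")
level = 4
details = haar_detail(trend, level)            # 128 rows instead of 4096 samples
db.executemany("INSERT INTO coeffs VALUES (?, ?, ?)",
               [(i, level, float(v)) for i, v in enumerate(details)])

# Event query: large-magnitude detail coefficients mark abrupt changes,
# retrieved without any table joins.
rows = db.execute(
    "SELECT idx FROM coeffs WHERE ABS(value) > 5 ORDER BY ABS(value) DESC"
).fetchall()
print(rows)
```

Each returned index maps back to a 32-sample window of the original trend, so the onset and offset of the injected step are located from the compact coefficient table alone.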

  16. Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory

    NASA Astrophysics Data System (ADS)

    Wang, Na; Li, Dong; Wang, Qiwen

    2012-12-01

    The visibility graph approach and complex network theory provide new insight into time series analysis. The inheritance of the visibility graph from the original time series is further explored in this paper. We found that degree distributions of visibility graphs extracted from Pseudo Brownian Motion series obtained by the Frequency Domain algorithm exhibit exponential behaviors, in which the exponential exponent is a binomial function of the Hurst index inherited in the time series. Our simulations showed that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function are different for different series and that the visibility graph inherits some important features of the original time series. Further, we convert several quarterly macroeconomic series, including the growth rates of value-added of three industry series and the growth rates of Gross Domestic Product series of China, to graphs by the visibility algorithm and explore the topological properties of the graphs derived from the four macroeconomic series, namely, the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis we find that the degree distributions of the networks derived from the growth rates of value-added of the three industry series are almost exponential, while those derived from the growth rates of GDP series are scale free. We also discuss the assortativity and disassortativity of the four networks as they relate to the evolutionary process of the original macroeconomic series. All the constructed networks have “small-world” features. The community structures of the networks suggest dynamic changes of the original macroeconomic series. We also detected the relationship among government policy changes, community structures of the networks and macroeconomic dynamics. We find that government policies in China strongly influence the dynamics of GDP and the adjustment of the three industries. The work in our paper provides a new way to understand the dynamics of economic development.
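The natural visibility algorithm itself is simple: points a and b are linked when every intermediate point lies strictly below the straight line joining them. A minimal sketch (the series and degree computation are illustrative):

```python
import numpy as np

def visibility_graph(series):
    """Natural visibility graph: nodes are time points; (a, b) are linked
    when every intermediate point lies strictly below the line joining them.
    Equivalently, b is visible from a iff the slope from a to b exceeds
    the slope from a to every intermediate point."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    edges = set()
    for a in range(n - 1):
        edges.add((a, a + 1))                  # neighbors always see each other
        max_slope = y[a + 1] - y[a]            # slope to the immediate neighbor
        for b in range(a + 2, n):
            slope = (y[b] - y[a]) / (b - a)
            if slope > max_slope:              # b is visible from a
                edges.add((a, b))
            max_slope = max(max_slope, slope)
    return edges

rng = np.random.default_rng(3)
g = visibility_graph(rng.normal(size=200))
degree = np.zeros(200, dtype=int)
for a, b in g:                                 # degree sequence of the graph
    degree[a] += 1
    degree[b] += 1
print(degree.mean())
```

From the resulting edge set one can compute the degree distribution, clustering coefficient, and path lengths that the paper analyzes with standard network tools.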

  17. A time-space domain stereo finite difference method for 3D scalar wave propagation

    NASA Astrophysics Data System (ADS)

    Chen, Yushu; Yang, Guangwen; Ma, Xiao; He, Conghui; Song, Guojie

    2016-11-01

    The time-space domain finite difference methods reduce numerical dispersion effectively by minimizing the error in the joint time-space domain. However, their interpolating coefficients are related to the Courant numbers, leading to significant extra time cost for loading the coefficients consecutively according to velocity in heterogeneous models. In the present study, we develop a time-space domain stereo finite difference (TSSFD) method for the 3D scalar wave equation. The method propagates both the displacements and their gradients simultaneously to keep more information of the wavefields, and minimizes the maximum phase velocity error directly using constant interpolation coefficients for different Courant numbers. We obtain the optimal constant coefficients by combining the truncated Taylor series approximation and the time-space domain optimization, and adjust the coefficients to improve the stability condition. Subsequent investigation shows that the TSSFD can suppress numerical dispersion effectively with high computational efficiency. The maximum phase velocity error of the TSSFD is just 3.09% even with only 2 sampling points per minimum wavelength when the Courant number is 0.4. Numerical experiments show that to generate wavefields with no visible numerical dispersion, the computational efficiency of the TSSFD is 576.9%, 193.5%, 699.0%, and 191.6% of those of the 4th-order and 8th-order Lax-Wendroff correction (LWC) method, the 4th-order staggered grid method (SG), and the 8th-order optimal finite difference method (OFD), respectively. Meanwhile, the TSSFD is compatible with the unsplit convolutional perfectly matched layer (CPML) boundary condition for absorbing artificial boundaries. The efficiency and capability to handle complex velocity models make it an attractive tool in imaging methods such as acoustic reverse time migration (RTM).

  18. Stochastic modeling of experimental chaotic time series.

    PubMed

    Stemler, Thomas; Werner, Johannes P; Benner, Hartmut; Just, Wolfram

    2007-01-26

    Methods developed recently to obtain stochastic models of low-dimensional chaotic systems are tested in electronic circuit experiments. We demonstrate that reliable drift and diffusion coefficients can be obtained even when no excessive time scale separation occurs. Crisis induced intermittent motion can be described in terms of a stochastic model showing tunneling which is dominated by state space dependent diffusion. Analytical solutions of the corresponding Fokker-Planck equation are in excellent agreement with experimental data.

  19. Structure of a financial cross-correlation matrix under attack

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Kim, Junghwan; Kim, Pyungsoo; Kang, Yoonjong; Park, Sanghoon; Park, Inho; Park, Sang-Bum; Kim, Kyungsik

    2009-09-01

    We investigate the structure of a perturbed stock market in terms of correlation matrices. For the purpose of perturbing a stock market, two distinct methods are used, namely local and global perturbation. The former involves replacing a correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series while the latter reconstructs the cross-correlation matrix just after replacing the original return series with Gaussian-distributed time series. Concerning the local case, it is a technical study only and there is no attempt to model reality. The term ‘global’ means the overall effect of the replacement on other untouched returns. Through statistical analyses such as random matrix theory (RMT), network theory, and the correlation coefficient distributions, we show that the global structure of a stock market is vulnerable to perturbation. However, apart from in the analysis of inverse participation ratios (IPRs), the vulnerability becomes dull under a small-scale perturbation. This means that these analysis tools are inappropriate for monitoring the whole stock market due to the low sensitivity of a stock market to a small-scale perturbation. In contrast, when going down to the structure of business sectors, we confirm that correlation-based business sectors are regrouped in terms of IPRs. This result gives a clue about monitoring the effect of hidden intentions, which are revealed via portfolios taken mostly by large investors.

  20. Time-series analysis in imatinib-resistant chronic myeloid leukemia K562-cells under different drug treatments.

    PubMed

    Zhao, Yan-Hong; Zhang, Xue-Fang; Zhao, Yan-Qiu; Bai, Fan; Qin, Fan; Sun, Jing; Dong, Ying

    2017-08-01

    Chronic myeloid leukemia (CML) is characterized by the accumulation of active BCR-ABL protein. Imatinib is the first-line treatment of CML; however, many patients are resistant to this drug. In this study, we aimed to compare the differences in expression patterns and functions of time-series genes in imatinib-resistant CML cells under different drug treatments. GSE24946 was downloaded from the GEO database, which included 17 samples of K562-r cells with (n=12) or without drug administration (n=5). Three drug treatment groups were considered for this study: arsenic trioxide (ATO), AMN107, and ATO+AMN107. Each group had one sample at each time point (3, 12, 24, and 48 h). Time-series genes with a ratio of standard deviation/average (coefficient of variation) >0.15 were screened, and their expression patterns were revealed based on Short Time-series Expression Miner (STEM). Then, the functional enrichment analysis of time-series genes in each group was performed using DAVID, and the genes enriched in the top ten functional categories were extracted to detect their expression patterns. Different time-series genes were identified in the three groups, and most of them were enriched in the ribosome and oxidative phosphorylation pathways. Time-series genes in the three treatment groups had different expression patterns and functions. Time-series genes in the ATO group (e.g. CCNA2 and DAB2) were significantly associated with cell adhesion, those in the AMN107 group were related to cellular carbohydrate metabolic process, while those in the ATO+AMN107 group (e.g. AP2M1) were significantly related to cell proliferation and antigen processing. In imatinib-resistant CML cells, ATO could influence genes related to cell adhesion, AMN107 might affect genes involved in cellular carbohydrate metabolism, and the combination therapy might regulate genes involved in cell proliferation.

  1. Investigating Causality Between Interacting Brain Areas with Multivariate Autoregressive Models of MEG Sensor Data

    PubMed Central

    Michalareas, George; Schoffelen, Jan-Mathijs; Paterson, Gavin; Gross, Joachim

    2013-01-01

    In this work, we investigate the feasibility of estimating causal interactions between brain regions based on multivariate autoregressive models (MAR models) fitted to magnetoencephalographic (MEG) sensor measurements. We first demonstrate the theoretical feasibility of estimating source level causal interactions after projection of the sensor-level model coefficients onto the locations of the neural sources. Next, we show with simulated MEG data that causality, as measured by partial directed coherence (PDC), can be correctly reconstructed if the locations of the interacting brain areas are known. We further demonstrate that, if a very large number of brain voxels is considered as potential activation sources, PDC is a less accurate measure for reconstructing causal interactions. In such a case the MAR model coefficients alone contain meaningful causality information. The proposed method overcomes the problems of model nonrobustness and large computation times encountered during causality analysis by existing methods. These methods first project MEG sensor time-series onto a large number of brain locations, after which the MAR model is built on this large number of source-level time-series. Instead, through this work, we demonstrate that by building the MAR model on the sensor level and then projecting only the MAR coefficients into source space, the true causal pathways are recovered even when a very large number of locations are considered as sources. The main contribution of this work is that by this methodology entire brain causality maps can be efficiently derived without any a priori selection of regions of interest. Hum Brain Mapp, 2013. © 2012 Wiley Periodicals, Inc. PMID:22328419

  2. Friction coefficient of skin in real-time.

    PubMed

    Sivamani, Raja K; Goodman, Jack; Gitis, Norm V; Maibach, Howard I

    2003-08-01

    Friction studies are useful in quantitatively investigating the skin surface. Previous studies utilized different apparatuses and materials for these investigations, but there was no real-time test parameter control or monitoring. Our studies incorporated the commercially available UMT Series Micro-Tribometer, a tribology instrument that permits real-time monitoring and calculation of the important parameters in friction studies, increasing the accuracy over previous tribology and friction measurement devices used on skin. Our friction tests were performed on four healthy volunteers and on abdominal skin samples. A stainless steel ball was pressed onto the skin at a pre-set load and then moved across the skin at a constant velocity of 5 mm/min. The UMT continuously monitored the friction force of the skin and the normal force of the ball to calculate the friction coefficient in real-time. Tests investigated the applicability of Amontons' law, the impact of increased and decreased hydration, and the effect of the application of moisturizers. The friction coefficient depends on the normal load applied, and Amontons' law does not provide an accurate description for the skin surface. Application of water to the skin increased the friction coefficient and application of isopropyl alcohol decreased it. Fast-acting moisturizers immediately increased the friction coefficient, but did not have the prolonged effect of the slow-acting, long-lasting moisturizers. The UMT is capable of making real-time measurements on the skin and can be used as an effective tool to study friction properties. Results from the UMT measurements agree closely with theory regarding the skin surface.

  3. Automated time series forecasting for biosurveillance.

    PubMed

    Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit

    2007-09-30

    For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
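The residual-forming step can be sketched with a hand-rolled additive Holt-Winters smoother: one-step-ahead forecasts are subtracted from observations, leaving residuals with the trend and day-of-week effects removed. The smoothing constants and the synthetic syndromic series below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def holt_winters_residuals(y, period=7, alpha=0.3, beta=0.05, gamma=0.2):
    """One-step-ahead forecast residuals from additive Holt-Winters smoothing.

    The residuals (observation minus forecast) are what a control-chart
    detector would monitor after trend and day-of-week removal.
    """
    y = np.asarray(y, dtype=float)
    level = y[:period].mean()
    trend = 0.0
    season = y[:period] - level
    resid = []
    for t in range(period, len(y)):
        forecast = level + trend + season[t % period]
        resid.append(y[t] - forecast)
        last_level = level
        level = alpha * (y[t] - season[t % period]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % period] = gamma * (y[t] - level) + (1 - gamma) * season[t % period]
    return np.array(resid)

rng = np.random.default_rng(4)
t = np.arange(20 * 7)
weekly = 5 * np.sin(2 * np.pi * t / 7)            # day-of-week effect
counts = 100 + 0.1 * t + weekly + rng.normal(0, 1, len(t))
r = holt_winters_residuals(counts)
print(np.std(r[14:]), np.std(counts))             # residual spread << raw spread
```

After a warm-up of roughly two seasonal cycles, the residual spread collapses toward the noise level, which is why the abstract reports lower 1-day-lag autocorrelation for Holt-Winters than for the regression baselines.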

  4. Comparative assessment of five water infiltration models into the soil

    NASA Astrophysics Data System (ADS)

    Shahsavaramir, M.

    2009-04-01

    The knowledge of soil hydraulic conditions, particularly soil permeability, is an important issue in hydrological and climatic studies. Because of its high spatial and temporal variability, a soil infiltration monitoring scheme was investigated in view of its application in infiltration modelling. Several models of infiltration into the soil have been developed; in this paper, we assess the capability of five such models and select the best one. We first designed a program in Quick Basic software and implemented the algorithms of five models: Kostiakov, Modified Kostiakov, Philip, S.C.S and Horton. We then obtained measured infiltration data by the double-ring method in 12 soil series of the Saveh plain, situated in Markazi province in Iran. After deriving the model coefficients, the equations were regenerated in Excel software, and the calculations of model accuracy against the observations, together with the related graphs, were done in this software. Infiltration parameters such as cumulative infiltration and infiltration rate were obtained from the fitted models and compared with the observed values. The results show that the Kostiakov and Modified Kostiakov models could quantify cumulative infiltration and infiltration rate over short, middle and long time periods. In three soil series, the Horton model determined infiltration amounts better than the others across the three time treatments. The Philip model gave a relatively good fit for the infiltration parameters in seven series; however, in five soil series its curve shape at longer times implied that the sorptivity (attraction) coefficient (S) was less than zero. Overall, the S.C.S model had the least capability for determining infiltration parameters.

  5. Real-time flutter boundary prediction based on time series models

    NASA Astrophysics Data System (ADS)

    Gu, Wenjing; Zhou, Li

    2018-03-01

    For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models accompanied by corresponding stability criteria are adopted in this paper. The first method treats a long nonstationary response signal as many contiguous intervals, each of which is considered stationary. A traditional AR model is then established to represent each interval of the signal sequence. The second employs a time-varying AR model to characterize actual measured signals in the flutter test with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods show significant effectiveness in predicting the flutter boundary at lower speed levels. A comparison between the two methods is also given in this paper.
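The AR-coefficient stability question can be sketched as follows: fit an AR model to a response interval and examine the moduli of its characteristic roots, which is the same question Jury's criterion answers algebraically from the coefficients without computing roots. The driven-oscillator signal and model order below are illustrative assumptions; the true pole modulus of the simulated AR(2) process is sqrt(0.9) ≈ 0.95.

```python
import numpy as np

def ar_stability_margin(signal, order=4):
    """Fit an AR model by least squares and return the largest
    characteristic-root modulus.

    A value approaching 1 means a pole migrating toward the unit
    circle, i.e. diminishing damping -- the stability question that
    Jury's criterion answers algebraically from the AR coefficients.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    # Characteristic polynomial z^p - a_1 z^(p-1) - ... - a_p
    roots = np.roots(np.concatenate(([1.0], -a)))
    return np.max(np.abs(roots))

rng = np.random.default_rng(5)
n = 4000
damped = np.zeros(n)
for t in range(2, n):   # lightly damped oscillator driven by noise
    damped[t] = 1.8 * damped[t - 1] - 0.9 * damped[t - 2] + rng.normal()
print(ar_stability_margin(damped, order=2))
```

Tracking this margin as airspeed increases gives a scalar stability parameter whose approach to 1 signals the flutter boundary.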

  6. Epileptic seizure classification of EEG time-series using rational discrete short-time fourier transform.

    PubMed

    Samiee, Kaveh; Kovács, Petér; Gabbouj, Moncef

    2015-02-01

    A system for epileptic seizure detection in electroencephalography (EEG) is described in this paper. One of the challenges is to distinguish rhythmic discharges from nonstationary patterns occurring during seizures. The proposed approach is based on an adaptive and localized time-frequency representation of EEG signals by means of rational functions. The corresponding rational discrete short-time Fourier transform (DSTFT) is a novel feature extraction technique for epileptic EEG data. A multilayer perceptron classifier is fed by the coefficients of the rational DSTFT in order to separate seizure epochs from seizure-free epochs. The effectiveness of the proposed method is compared with several state-of-the-art feature extraction algorithms used in offline epileptic seizure detection. The results of the comparative evaluations show that the proposed method outperforms competing techniques in terms of classification accuracy. In addition, it provides a compact representation of EEG time-series.

  7. Autocorrelation and cross-correlation in time series of homicide and attempted homicide

    NASA Astrophysics Data System (ADS)

    Machado Filho, A.; da Silva, M. F.; Zebende, G. F.

    2014-04-01

    We propose in this paper to establish the relationship between homicides and attempted homicides by a non-stationary time-series analysis. This analysis will be carried out by Detrended Fluctuation Analysis (DFA), Detrended Cross-Correlation Analysis (DCCA), and the DCCA cross-correlation coefficient, ρ(n). Through this analysis we can identify a positive cross-correlation between homicides and attempted homicides. At the same time, looked at from the point of view of autocorrelation (DFA), the analysis can be more informative depending on the time scale: on short scales (days) we cannot identify auto-correlations; on the scale of weeks DFA presents anti-persistent behavior; and on long time scales (n>90 days) DFA presents persistent behavior. Finally, the application of this new type of statistical analysis proved to be efficient and, in this sense, this paper can contribute to more accurate descriptive statistics of crime.
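The DCCA coefficient ρ(n) is the detrended covariance of the two integrated series divided by the geometric mean of their detrended variances, evaluated in boxes of size n. A minimal sketch on synthetic correlated series (non-overlapping boxes, linear detrending, and the box size are illustrative choices):

```python
import numpy as np

def detrended_covariance(x, y, n):
    """Average covariance of detrended integrated profiles in boxes of size n."""
    X = np.cumsum(x - np.mean(x))      # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(n)
    covs = []
    for start in range(0, len(X) - n + 1, n):
        xs, ys = X[start:start + n], Y[start:start + n]
        # Remove the local linear trend from each box
        xs = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ys = ys - np.polyval(np.polyfit(t, ys, 1), t)
        covs.append(np.mean(xs * ys))
    return np.mean(covs)

def dcca_coefficient(x, y, n):
    """DCCA cross-correlation coefficient rho(n), bounded in [-1, 1]."""
    f2xy = detrended_covariance(x, y, n)
    f2x = detrended_covariance(x, x, n)
    f2y = detrended_covariance(y, y, n)
    return f2xy / np.sqrt(f2x * f2y)

rng = np.random.default_rng(6)
common = rng.normal(size=5000)                 # shared driver of both series
x = common + 0.5 * rng.normal(size=5000)
y = common + 0.5 * rng.normal(size=5000)
print(round(dcca_coefficient(x, y, n=32), 2))
```

Evaluating ρ(n) over a range of box sizes n is what lets the paper distinguish short-scale, weekly, and long-scale correlation behavior.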

  8. Products of multiple Fourier series with application to the multiblade transformation

    NASA Technical Reports Server (NTRS)

    Kunz, D. L.

    1981-01-01

    A relatively simple and systematic method for forming the products of multiple Fourier series using tensor-like operations is demonstrated. This symbolic multiplication can be performed for any arbitrary number of series, and its use in transforming a set of linear differential equations with periodic coefficients from a rotating coordinate system to a nonrotating system is also demonstrated. It is shown that using Fourier operations to perform this transformation makes it easily understood, simple to apply, and generally applicable.

  9. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1995-01-01

    When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index (CI)' is developed as a quantitative indicator that the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code 'Sequitor'.

  10. Statistical analysis on multifractal detrended cross-correlation coefficient for return interval by oriented percolation

    NASA Astrophysics Data System (ADS)

    Deng, Wei; Wang, Jun

    2015-06-01

    We investigate and quantify the multifractal detrended cross-correlation of return interval series for Chinese stock markets and a proposed price model established by oriented percolation. The return interval describes the waiting time between two successive price volatilities that exceed some threshold; the present work is an attempt to quantify the level of multifractal detrended cross-correlation for the return intervals. Further, the concept of the MF-DCCA coefficient of return intervals is introduced, and the corresponding empirical research is performed. The empirical results show that the return intervals of SSE and SZSE are weakly positive multifractal power-law cross-correlated, and exhibit the fluctuation patterns of MF-DCCA coefficients. Similar behaviors of the return intervals for the price model are also demonstrated.

  11. Spatial and Temporal Variation of PATMOS-x AVHRR Lake Surface Temperatures in the Laurentian Great Lakes

    NASA Astrophysics Data System (ADS)

    White, C.; Heidinger, A. K.; Ackerman, S. A.; McIntyre, P. B.

    2017-12-01

    A thirty-four year lake surface water temperature (LSWT) time series over the North American Great Lakes was extracted from NOAA's Advanced Very High Resolution Radiometer (AVHRR) Global Area Coverage (GAC). The time series was cloud-cleared using the NOAA Pathfinder Atmospheres Extended (PATMOS-x) climate dataset and the Clouds from AVHRR Extended System (CLAVR-x) processing system, and was subsampled to a regular 0.05° grid. LSWT coefficients for each AVHRR platform were fit to NOAA National Data Buoy Center buoys with historical records spanning 1982 to 2016. Satellite to buoy matchups indicate an RMSE of 0.72 K for the entire time series across all five lakes. An empirically fit diurnal correction was applied to correct for orbital drift and varying observation times of NOAA-7,9,11,12,14-19, Metop-1 and Metop-2. Ordinary linear regression slopes on monthly mean LSWT show strong spatial heterogeneity in the long-term LSWT trends both within each lake and between lakes. Differences in long-term trends using nighttime only, daytime only, and both day and night are examined. Additionally, a coastal upwelling signal can be identified from the time series along with the indication of an earlier onset of spring stratification.

  12. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. 
A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series from a filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
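The trade-off that Tikhonov regularization manages, damping noise-amplified weak directions at the cost of some signal, can be sketched on a toy ill-conditioned inverse problem. This is not the GRACE estimation chain (no Lanczos bidiagonalization, and the regularization parameter is assessed here against a known synthetic truth rather than chosen by the L-curve, which works without the truth); the design matrix, noise level, and λ grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
m, p = 200, 50
# Ill-conditioned design: rapidly decaying column scales mimic the poorly
# observed high-degree coefficients of a gravity-style inverse problem.
A = rng.normal(size=(m, p)) * (0.9 ** np.arange(p))
x_true = rng.normal(size=p)
b = A @ x_true + 0.5 * rng.normal(size=m)      # noisy observations

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the normal equations."""
    p = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(p), A.T @ b)

# Recovery error versus the (normally unknown) truth for several lambdas:
errors = {lam: np.linalg.norm(tikhonov(A, b, lam) - x_true)
          for lam in (1e-6, 1e-2, 1.0, 100.0)}
print(errors)
```

An essentially unregularized solution (λ = 1e-6) is dominated by amplified noise in the weak directions, while a huge λ shrinks real signal away; a moderate λ, which the L-curve corner is designed to locate in practice, sits between the two.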

  13. Abrupt Change of the Transboundary Runoff and Its Influence on Water Security of Lanstang-Mekong River

    NASA Astrophysics Data System (ADS)

    Sang, Y. F.; Xie, P.; Ziyi, W.; Jiangyan, Z.; Qianjin, D.; Xu, L.

    2017-12-01

    As a significant manifestation of hydrological variability, abrupt change obviously impacts water security. To analyze what such variation brings under a changing environment, abrupt change detection is a basic task, together with variation level evaluation and hydrological frequency analysis. However, an effective method to address these tasks systematically has been lacking. Here we derived the correlation coefficient between the original series and the jump-component series, which is related to the difference in mean value before and after the abrupt change. Based on it, we proposed a moving-correlation-coefficient-based detection method and evaluated the significance of abrupt change at different levels related to the value of the correlation coefficient. Then, with the obtained results, we calculated the hydrological frequency in the different situations (before and after the abrupt change). The approach above was employed to investigate the transboundary runoff of the Lanstang-Mekong River at several time scales. We obtained the abrupt changes from the runoff series of the year, the flood season and the dry season, which are almost the same. All the abrupt changes were significant, reaching the moderate level. Compared with the past situation (before the abrupt change), the hydrological frequency in the current situation (after the abrupt change) indicated that the water security of water supply and flood control in the lower reaches of the Lanstang-Mekong River could be better guaranteed, owing to the construction and operation of the water conservancy projects on the upper Lanstang-Mekong River.

  14. Nonlinear Analysis of Auscultation Signals in TCM Using the Combination of Wavelet Packet Transform and Sample Entropy.

    PubMed

    Yan, Jian-Jun; Wang, Yi-Qin; Guo, Rui; Zhou, Jin-Zhuan; Yan, Hai-Xia; Xia, Chun-Ming; Shen, Yong

    2012-01-01

    Auscultation signals are nonstationary in nature. The wavelet packet transform (WPT) has become a very useful tool for analyzing nonstationary signals. Sample entropy (SampEn) has recently been proposed as a measure for quantifying the regularity and complexity of time series data. WPT and SampEn were combined in this paper to analyze auscultation signals in traditional Chinese medicine (TCM). SampEn values for WPT coefficients were computed to quantify the signals from qi-deficient, yin-deficient and healthy subjects. The complexity of the signal can be evaluated with this scheme at different time-frequency resolutions. First, the voice signals were decomposed into approximation and detail WPT coefficients. Then, SampEn values for the approximation and detail coefficients were calculated. Finally, SampEn values with significant differences among the three kinds of samples were chosen as the feature parameters for a support vector machine to identify the three types of auscultation signals. The recognition accuracy rates were higher than 90%.
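
The sample entropy measure used here is straightforward to compute. A plain-NumPy sketch follows (the WPT decomposition step, which would need a wavelet library, is omitted); parameter defaults m = 2 and r = 0.2 times the standard deviation are common conventions, not necessarily the authors' choices.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): count pairs of template vectors of lengths m and m+1
    whose Chebyshev distance is below r (self-matches excluded) and
    return -ln(A/B), where A and B are the m+1 and m match counts."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length]
                              for i in range(len(x) - length + 1)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d < r))
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```

A strictly periodic series yields SampEn near zero, while an irregular (e.g. random) series yields a much larger value, which is the regularity contrast the classifier features exploit.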

  16. Simple Analytical Forms of the Perpendicular Diffusion Coefficient for Two-component Turbulence. III. Damping Model of Dynamical Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gammon, M.; Shalchi, A., E-mail: andreasm4@yahoo.com

    2017-10-01

    In several astrophysical applications one needs analytical forms of cosmic-ray diffusion parameters. Some examples are studies of diffusive shock acceleration and solar modulation. In the current article we explore perpendicular diffusion based on the unified nonlinear transport theory. While we focused on magnetostatic turbulence in Paper I, we included the effect of dynamical turbulence in Paper II of the series. In the latter paper we assumed that the temporal correlation time does not depend on the wavenumber. More realistic models have been proposed in the past, such as the so-called damping model of dynamical turbulence. In the present paper we derive analytical forms for the perpendicular diffusion coefficient of energetic particles in two-component turbulence for this type of time-dependent turbulence. We present new formulas for the perpendicular diffusion coefficient and we derive a condition for which the magnetostatic result is recovered.

  17. [Spanish doctoral theses in emergency medicine (1978-2013)].

    PubMed

    Fernández-Guerrero, Inés María

    2015-01-01

    To quantitatively analyze the production of Spanish doctoral theses in emergency medicine. Quantitative synthesis of productivity indicators for 214 doctoral theses in emergency medicine found in the Spanish universities' thesis database (TESEO) from 1978 to 2013. We processed the data in 3 ways as follows: compilation of descriptive statistics, regression analysis (correlation coefficients of determination), and modeling of linear trend (time-series analysis). Most thesis supervisors (84.1%) oversaw only a single project, and no supervisor of 10 or more theses was identified. Analysis of cosupervision indicated there were 1.6 supervisors per thesis. The theses were defended in 67 departments (both general and specialist departments) because no emergency medicine departments had been established. The most productive universities were 2 large ones (Universitat de Barcelona and Universidad Complutense de Madrid) and 3 medium-sized ones (Universidad de Granada, Universitat Autònoma de Barcelona, and Universidad de La Laguna). Productivity over time, analyzed as the trend for 2-year periods in the time series, was expressed as a polynomial function with a coefficient of determination of R2 = 0.80. Spanish doctoral research in emergency medicine has grown markedly. Work has been done in various university departments in different disciplines and specialties. The findings confirm that emergency medicine is a disciplinary field.

  18. A Landsat Time-Series Stacks Model for Detection of Cropland Change

    NASA Astrophysics Data System (ADS)

    Chen, J.; Chen, J.; Zhang, J.

    2017-09-01

    Global, timely, accurate and cost-effective cropland monitoring at fine spatial resolution will dramatically improve our understanding of the effects of agriculture on greenhouse gas emissions, food safety, and human health. Time-series remote sensing imagery has shown particular potential for describing land cover dynamics. Traditional change detection techniques are often unable to detect land cover changes within time series that are strongly influenced by seasonal differences, and are therefore prone to generating pseudo-changes. Here we introduce and test the Landsat time-series stacks model (LTSM), an approach that improves on the previously proposed Continuous Change Detection and Classification (CCDC) algorithm to extract spectral trajectories of land surface change from dense Landsat time-series stacks (LTS). The method is expected to eliminate pseudo-changes caused by seasonally driven phenology. The main idea is that, using all available Landsat 8 images within a year, an LTSM consisting of a two-term harmonic function is estimated iteratively for each pixel in each spectral band. The LTSM then delineates change areas by differencing the predicted and observed Landsat images. The LTSM approach was compared with the change vector analysis (CVA) method. The results indicate that the LTSM method correctly detected the "true" changes without overestimating the "false" ones, whereas CVA flagged "true change" pixels along with a large number of "false changes". The detection of change areas achieved an overall accuracy of 92.37 %, with a kappa coefficient of 0.676.
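
The per-pixel harmonic regression at the heart of such CCDC-style models can be sketched as an ordinary least-squares fit. The design below (intercept, linear trend, plus annual and semi-annual harmonics) is an illustrative assumption about what "two-term harmonic function" means, not the authors' exact specification.

```python
import numpy as np

def fit_harmonic_model(t, y, period=365.25):
    """Least-squares fit of
        y(t) ~ c0 + c1*t + a1*cos(2*pi*t/T) + b1*sin(2*pi*t/T)
                        + a2*cos(4*pi*t/T) + b2*sin(4*pi*t/T)
    for one pixel in one band; the residuals between predicted and
    observed reflectance are what flag change."""
    t = np.asarray(t, dtype=float)
    w = 2 * np.pi * t / period
    X = np.column_stack([np.ones_like(w), t,
                         np.cos(w), np.sin(w),
                         np.cos(2 * w), np.sin(2 * w)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef
```

With observations generated by such a model, the fit recovers the coefficients exactly; on real reflectance series the residual magnitude, compared against a noise threshold, separates phenology from genuine land-cover change.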

  19. Adaptive mapping functions to the azimuthal anisotropy of the neutral atmosphere

    NASA Astrophysics Data System (ADS)

    Gegout, P.; Biancale, R.; Soudarin, L.

    2011-10-01

    The anisotropy of propagation of radio waves used by global navigation satellite systems is investigated using high-resolution observational data assimilations produced by the European Centre for Medium-Range Weather Forecasts. The geometry and the refractivity of the neutral atmosphere are built by introducing accurate geodetic heights and continuous formulations of the refractivity and its gradient. Hence the realistic ellipsoidal shape of the refractivity field above the topography is properly represented. Atmospheric delays are obtained by ray-tracing through the refractivity field, integrating the eikonal differential system. Ray-traced delays reveal the anisotropy of the atmosphere. To preserve the classical mapping function strategy, mapping functions can evolve to adapt to high-frequency atmospheric fluctuations and to account for the anisotropy of propagation by fitting, at each site and time, the zenith delays and the mapping function coefficients. Adaptive mapping functions (AMF) are designed with coefficients of the continued fraction form that depend on azimuth. The basic idea is to expand the azimuthal dependency of the coefficients in Fourier series, introducing a multi-scale azimuthal decomposition which slightly changes the elevation functions with the azimuth. AMF are used to approximate thousands of atmospheric ray-traced delays using a few tens of coefficients. Generic recursive definitions of the AMF and their partial derivatives show that truncating the continued fraction form at the third term and the azimuthal Fourier series at the fourth term is sufficient in usual meteorological conditions. Mapping functions for delays and elevations make it possible to store and retrieve the ray-tracing results and to solve the parallax problem at the observation level.
AMF are suitable for fitting the time-variable isotropic and anisotropic parts of the ray-traced delays at each site at each time step and for providing GPS range corrections at the measurement level with millimeter accuracy at low elevation. AMF are designed to adapt to complex weather conditions by adaptively changing their truncations.

  20. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    PubMed Central

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from Monte Carlo tests. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using an appropriate confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operated. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD; however, such a series exhibits purely random rather than autocorrelated behavior, so de-noising is no longer needed. PMID:25360533

  1. Monthly gravity field solutions based on GRACE observations generated with the Celestial Mechanics Approach

    NASA Astrophysics Data System (ADS)

    Meyer, Ulrich; Jäggi, Adrian; Beutler, Gerhard

    2012-09-01

    The main objective of the Gravity Recovery And Climate Experiment (GRACE) satellite mission consists of determining the temporal variations of the Earth's gravity field. These variations are captured by time series of gravity field models of limited resolution at, e.g., monthly intervals. We present a new time series of monthly models, which was computed with the so-called Celestial Mechanics Approach (CMA), developed at the Astronomical Institute of the University of Bern (AIUB). The secular and seasonal variations in the monthly models are tested for statistical significance. Calibrated errors are derived from inter-annual variations. The time-variable signal can be extracted at least up to degree 60, but the gravity field coefficients of orders above 45 are heavily contaminated by noise. This is why a series of monthly models is computed up to a maximum degree of 60, but only a maximum order of 45. Spectral analysis of the residual time-variable signal shows a distinctive peak at a period of 160 days, which shows up in particular in the C20 spherical harmonic coefficient. Basic filter- and scaling-techniques are introduced to evaluate the monthly models. For this purpose, the variability over the oceans is investigated, which serves as a measure for the noisiness of the models. The models in selected regions show the expected seasonal and secular variations, which are in good agreement with the monthly models of the Helmholtz Centre Potsdam, German Research Centre for Geosciences (GFZ). The results also reveal a few small outliers, illustrating the necessity for improved data screening. Our monthly models are available at the web page of the International Centre for Global Earth Models (ICGEM).

  2. A Millennial-length Reconstruction of the Western Pacific Pattern with Associated Paleoclimate

    NASA Astrophysics Data System (ADS)

    Wright, W. E.; Guan, B. T.; Wei, K.

    2010-12-01

    The Western Pacific Pattern (WP) is a lesser-known 500 hPa pressure pattern similar to the NAO or PNA. As defined, the poles of the WP index are centered on 60°N over the Kamchatka peninsula and the neighboring Pacific and on 32.5°N over the western North Pacific. However, the area of influence for the southern half of the dipole includes a wide swath from East Asia, across Taiwan, through the Philippine Sea, to the western North Pacific. Tree rings of Taiwanese Chamaecyparis obtusa var. formosana in this extended region show significant correlation with the WP, and with local temperature. The WP is also significantly correlated with atmospheric temperatures over Taiwan, especially at 850 hPa and 700 hPa, pressure levels that bracket the tree site. Spectral analysis indicates that variations in the WP occur at relatively high frequency, with most power at periods of less than 5 years. Simple linear regression against high-frequency variants of the tree-ring chronology yielded the most significant correlation coefficients. Two reconstructions are presented. The first uses a tree-ring time series produced as the first intrinsic mode function (IMF) from an Ensemble Empirical Mode Decomposition (EEMD), based on the Hilbert-Huang Transform. The regression using the EEMD-derived time series was much more significant than regressions using time series produced by traditional high-pass filtering. The second also uses the first IMF of a tree-ring time series, but the dataset was first sorted and partitioned at a specified quantile prior to EEMD decomposition, with the mean of the partitioned data forming the input to the EEMD. The partitioning was done to filter out the less climatically sensitive tree rings, a common problem with shade-tolerant trees. Time series statistics indicate that the first reconstruction is reliable back to 1241 of the Common Era.
Reliability of the second reconstruction depends on the development of statistics related to the quantile partitioning and the consequent reduction in sample depth. However, the correlation coefficients from regressions over the instrumental period greatly exceed those from any other method of chronology generation, and so the technique holds promise. Additional atmospheric parameters having significant correlations with the WP and the tree-ring time series, with similar spatial patterns, are also presented. These include vertical wind shear (850 hPa-700 hPa) over the northern Philippines and the Philippine Sea, and surface omega and 850 hPa v-winds over the East China Sea, Japan and Taiwan. Possible links to changes in the subtropical jet stream will also be discussed.

  3. [Wave-type time series variation of the correlation between NDVI and climatic factors].

    PubMed

    Bi, Xiaoli; Wang, Hui; Ge, Jianping

    2005-02-01

    Based on 1992-1996 data of 1 km monthly NDVI and on the monthly precipitation and mean temperature collected by 400 standard meteorological stations in China, this paper analyzed the temporal and spatial dynamics of the correlation between NDVI and climatic factors in different climate districts of the country. The results showed a significant correlation between monthly precipitation and NDVI. The wave-type time series model could simulate well the temporal dynamics of the correlation between NDVI and climatic factors, and the simulated results for the correlation between NDVI and precipitation were better than those for NDVI and temperature; the correlation coefficients (R2) were 0.91 and 0.86, respectively, for the whole country.

  4. A Symmetric Time-Varying Cluster Rate of Descent Model

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.

    2015-01-01

    A model of the time-varying rate of descent of the Orion vehicle was developed based on the observed correlation between canopy projected area and drag coefficient. This initial version of the model assumes cluster symmetry and only varies the vertical component of velocity. The cluster fly-out angle is modeled as a series of sine waves based on flight test data. The projected area of each canopy is synchronized with the primary fly-out angle mode. The sudden loss of projected area during canopy collisions is modeled at minimum fly-out angles, leading to brief increases in rate of descent. The cluster geometry is converted to drag coefficient using empirically derived constants. A more complete model is under development, which computes the aerodynamic response of each canopy to its local incidence angle.

  5. Geomagnetic temporal change: 1903-1982 - A spline representation

    NASA Technical Reports Server (NTRS)

    Langel, R. A.; Kerridge, D. J.; Barraclough, D. R.; Malin, S. R. C.

    1986-01-01

    The secular variation of the earth's magnetic field is itself subject to temporal variations. These are investigated with the aid of the coefficients of a series of spherical harmonic models of secular variation deduced from data for the interval 1903-1982 from the worldwide network of magnetic observatories. For some studies it is convenient to approximate the time variation of the spherical harmonic coefficients with a smooth, continuous, function; for this a spline fitting is used. The phenomena that are investigated include periodicities, discontinuities, and correlation with the length of day. The numerical data presented will be of use for further investigations and for the synthesis of secular variation at any place and at any time within the interval of the data - they are not appropriate for temporal extrapolations.

  6. Impact of the zero-markup drug policy on hospitalisation expenditure in western rural China: an interrupted time series analysis.

    PubMed

    Yang, Caijun; Shen, Qian; Cai, Wenfang; Zhu, Wenwen; Li, Zongjie; Wu, Lina; Fang, Yu

    2017-02-01

    To assess the long-term effects of the introduction of China's zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditures after reimbursement. An interrupted time series was used to evaluate the impact of the zero-markup drug policy on hospitalisation expenditure and hospitalisation expenditure after reimbursement at primary health institutions in Fufeng County of Shaanxi Province, western China. Two regression models were developed. Monthly average hospitalisation expenditure and monthly average hospitalisation expenditure after reimbursement in primary health institutions were analysed covering the period 2009 through to 2013. For the monthly average hospitalisation expenditure, the increasing trend was slowed down after the introduction of the zero-markup drug policy (coefficient = -16.49, P = 0.009). For the monthly average hospitalisation expenditure after reimbursement, the increasing trend was slowed down after the introduction of the zero-markup drug policy (coefficient = -10.84, P = 0.064), and a significant decrease in the intercept was noted after the second intervention of changes in reimbursement schemes of the new rural cooperative medical insurance (coefficient = -220.64, P < 0.001). A statistically significant absolute decrease in the level or trend of monthly average hospitalisation expenditure and monthly average hospitalisation expenditure after reimbursement was detected after the introduction of the zero-markup drug policy in western China. However, hospitalisation expenditure and hospitalisation expenditure after reimbursement were still increasing. More effective policies are needed to prevent these costs from continuing to rise. © 2016 John Wiley & Sons Ltd.
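
The segmented-regression form typically used in interrupted time series analyses of this kind can be sketched as below. This is a generic single-intervention formulation for illustration; the study adds a second intervention term and reports the trend-change coefficients quoted in the abstract.

```python
import numpy as np

def segmented_regression(y, t0):
    """Fit y_t = b0 + b1*t + b2*step_t + b3*(t - t0)*step_t + e_t,
    where step_t = 1 for t >= t0. b1 is the pre-intervention trend,
    b2 the level change at the intervention, and b3 the change in
    trend afterwards (the kind of coefficient the abstract reports)."""
    t = np.arange(len(y), dtype=float)
    step = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - t0) * step])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, trend, level change, trend change]
```

A negative trend-change coefficient, as in the abstract (-16.49 for hospitalisation expenditure), indicates the policy slowed the pre-existing upward trend without necessarily reversing it.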

  7. Improved nonlinear prediction method

    NASA Astrophysics Data System (ADS)

    Adenan, Nur Hamiza; Md Noorani, Mohd Salmi

    2014-06-01

    The analysis and prediction of time series data have been widely addressed by researchers. Many techniques have been developed for application in various areas, such as weather forecasting, financial markets and hydrological phenomena involving data that are contaminated by noise. Accordingly, various techniques have been introduced to improve the analysis and prediction of time series data. Given the importance of the analysis and of prediction accuracy, a study was undertaken to test the effectiveness of an improved nonlinear prediction method for data that contain noise. The improved nonlinear prediction method involves the formation of composite serial data based on the successive differences of the time series. Then, phase space reconstruction is performed on the composite (one-dimensional) data to reconstruct a number of space dimensions. Finally, the local linear approximation method is employed to make a prediction based on the phase space. This improved method was tested with logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that, using the improved method, the predictions were found to be in close agreement with the observed values. The correlation coefficient was close to one when the improved method was applied to data with up to 10% noise. Thus, an improvement for analyzing noisy data without involving any noise-reduction method was introduced to predict time series data.
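
The two core steps, delay embedding of the phase space and then a local approximation in the reconstructed space, can be sketched as follows. For brevity this uses a zeroth-order predictor (average of nearest-neighbour successors) as a stand-in for the paper's local linear approximation.

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Phase-space reconstruction: row j is (x[j], x[j+tau], ...)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def local_predict(x, dim=3, k=5):
    """Predict the next value of x by averaging the successors of the
    k nearest neighbours of the current state in embedding space."""
    x = np.asarray(x, dtype=float)
    emb = delay_embed(x, dim)
    query = emb[-1]
    cand = emb[:-1]                      # states with a known successor
    d = np.linalg.norm(cand - query, axis=1)
    idx = np.argsort(d)[:k]
    return x[idx + dim].mean()           # successor of each neighbour
```

On a smooth deterministic signal the neighbours' successors cluster tightly around the true next value; the paper's composite-difference series and local linear fit refine this basic scheme for noisy data.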

  8. Evolution of record-breaking high and low monthly mean temperatures

    NASA Astrophysics Data System (ADS)

    Anderson, A. L.; Kostinski, A. B.

    2011-12-01

    We examine the ratio of record-breaking highs to record-breaking lows with respect to extent of time-series for monthly mean temperatures within the continental United States (1900-2006) and ask the following question. How are record-breaking high and low surface temperatures in the United States affected by time period? We find that the ratio of record-breaking highs to lows in 2006 increases as the time-series extend further into the past. For example: in 2006, the ratio of record-breaking highs to record-breaking lows is ≈ 13 : 1 with 1950 as the first year and ≈ 25 : 1 with 1900 as the first year; both ratios are an order of magnitude greater than 3-σ for stationary simulations. We also find record-breaking events are more sensitive to trends in time-series of monthly averages than time-series of corresponding daily values. When we consider the ratio as it evolves with respect to a fixed start year, we find it is strongly correlated with the ensemble mean. Correlation coefficients are 0.76 and 0.82 for 1900-2006 and 1950-2006 respectively; 3-σ = 0.3 for pairs of uncorrelated stationary time-series. We find similar values for globally distributed time-series: 0.87 and 0.92 for 1900-2006 and 1950-2006 respectively. However, the ratios evolve differently: global ratios increase throughout (1920-2006) while continental United States ratios decrease from about 1940 to 1970. (Based on Anderson and Kostinski (2011), Evolution and distribution of record-breaking high and low monthly mean temperatures. Journal of Applied Meteorology and Climatology. doi: 10.1175/JAMC-D-10-05025.1)
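
Counting record-breaking highs and lows in a temperature series is simple to sketch. Here the first year counts as both a record high and a record low, a convention that varies across studies.

```python
import numpy as np

def record_ratio(temps):
    """Count record-breaking highs and lows in a series of monthly-mean
    temperatures for one calendar month (one value per year). A year
    sets a record if it exceeds (or falls below) every earlier year."""
    highs = lows = 0
    run_max, run_min = -np.inf, np.inf
    for v in temps:
        if v > run_max:
            highs += 1
            run_max = v
        if v < run_min:
            lows += 1
            run_min = v
    return highs, lows
```

For a stationary series the expected number of records after n years grows only like the harmonic number (about ln n), which is why high-to-low ratios far above 1, like the 13:1 and 25:1 in the abstract, signal a warming trend.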

  9. Impact of Autocorrelation on Functional Connectivity

    PubMed Central

    Arbabshirani, Mohammad R.; Damaraju, Eswar; Phlypo, Ronald; Plis, Sergey; Allen, Elena; Ma, Sai; Mathalon, Daniel; Preda, Adrian; Vaidya, Jatin G.; Adali, Tülay; Calhoun, Vince D.

    2014-01-01

    Although the impact of serial correlation (autocorrelation) in residuals of general linear models for fMRI time-series has been studied extensively, the effect of autocorrelation on functional connectivity studies has been largely neglected until recently. Some recent studies based on results from economics have questioned the conventional estimation of functional connectivity and argue that not correcting for autocorrelation in fMRI time-series results in “spurious” correlation coefficients. In this paper, first we assess the effect of autocorrelation on Pearson correlation coefficient through theoretical approximation and simulation. Then we present this effect on real fMRI data. To our knowledge this is the first work comprehensively investigating the effect of autocorrelation on functional connectivity estimates. Our results show that although FC values are altered, even following correction for autocorrelation, results of hypothesis testing on FC values remain very similar to those before correction. In real data we show this is true for main effects and also for group difference testing between healthy controls and schizophrenia patients. We further discuss model order selection in the context of autoregressive processes, effects of frequency filtering and propose a preprocessing pipeline for connectivity studies. PMID:25072392
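
The core phenomenon, that autocorrelation inflates the spread of the sample correlation between independent series, can be illustrated with a short simulation. The parameters below are illustrative, not the paper's setup.

```python
import numpy as np

def ar1(n, phi, rng):
    """Generate an AR(1) series x_t = phi * x_{t-1} + e_t."""
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

rng = np.random.default_rng(0)
n, phi, trials = 200, 0.9, 500
# Sample correlations between pairs of INDEPENDENT series:
r_white = [np.corrcoef(rng.standard_normal(n),
                       rng.standard_normal(n))[0, 1] for _ in range(trials)]
r_ar = [np.corrcoef(ar1(n, phi, rng), ar1(n, phi, rng))[0, 1]
        for _ in range(trials)]
# Both sets have zero true correlation, but the spread for the
# autocorrelated pairs is roughly sqrt((1 + phi**2)/(1 - phi**2)),
# about 3x, wider: the "spurious correlation" effect under study.
print(np.std(r_white), np.std(r_ar))
```

This wider null distribution is why uncorrected hypothesis tests on functional connectivity values can over-reject, even though the point estimates themselves are unbiased.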

  10. Carbon transfer dynamics from bomb-14C and δ13C time series of a laminated stalagmite from SW France - modelling and comparison with other stalagmite records

    NASA Astrophysics Data System (ADS)

    Genty, Dominique; Massault, Marc

    1999-05-01

    Twenty-two AMS 14C measurements were made on a modern stalagmite from SW France in order to reconstruct the 14C activity history of the calcite deposit. Annual growth laminae provide a chronology back to 1919 A.D. Results show that the stalagmite 14C activity time series is sensitive to modern atmospheric 14C activity changes such as those produced by the nuclear weapon tests. The comparison between the two 14C time series shows that the stalagmite time series is damped: its amplitude variation between pre-bomb and post-bomb values is 75% less, and the time delay between the peaks of the two time series is 16 ± 3 years. A model is developed using atmospheric 14C and 13C data, fractionation processes and three soil organic matter components whose mean turnover rates differ. The linear correlation coefficient between modeled and measured activities is 0.99. These results, combined with two other published stalagmite 14C time series and compared with local vegetation and climate, demonstrate that most of the carbon transfer dynamics are controlled in the soil by soil organic matter degradation rates. Where vegetation produces debris whose degradation is slow, the fraction of old carbon injected into the system increases, and the observed 14C time series is much more damped, with a longer lag time, than that observed under grassland sites. The same mixing model applied to 13C shows good agreement (R2 = 0.78) between modeled and measured stalagmite δ13C and demonstrates that the Suess effect due to fossil fuel combustion in the atmosphere is recorded in the stalagmite, but damped owing to the SOM degradation rate. The different sources of dead carbon in the seepage water are calculated and discussed.

  11. The geomagnetic jerk of 1969 and the DGRFs

    USGS Publications Warehouse

    Thompson, D.; Cain, J.C.

    1987-01-01

    Cubic spline fits to the DGRF/IGRF series indicate agreement with other analyses showing the 1969-1970 magnetic jerk in the ḣ₂¹ and ġ₂⁰ secular-change coefficients, and agreement that the ḣ₁¹ term showed no sharp change. The variation of the ġ₁⁰ term is out of phase with other analyses, indicating a likely error in its representation in the 1965-1975 interval. We recommend that future derivations of the 'definitive' geomagnetic reference models take into consideration the times of impulses or jerks so as not to be bound to a standard 5-year interval, and otherwise make more considered analyses before adopting sets of coefficients. © 1987.

  12. Application of the compensated Arrhenius formalism to self-diffusion: implications for ionic conductivity and dielectric relaxation.

    PubMed

    Petrowsky, Matt; Frech, Roger

    2010-07-08

    Self-diffusion coefficients are measured from -5 to 80 degrees C in a series of linear alcohols using pulsed field gradient NMR. The temperature dependence of these data is studied using a compensated Arrhenius formalism that assumes an Arrhenius-like expression for the diffusion coefficient; however, this expression includes a dielectric constant dependence in the exponential prefactor. Scaling temperature-dependent diffusion coefficients to isothermal diffusion coefficients so that the exponential prefactors cancel results in calculated energies of activation E(a). The exponential prefactor is determined by dividing the temperature-dependent diffusion coefficients by the Boltzmann term exp(-E(a)/RT). Plotting the prefactors versus the dielectric constant places the data on a single master curve. This procedure is identical to that previously used to study the temperature dependence of ionic conductivities and dielectric relaxation rate constants. The energies of activation determined from self-diffusion coefficients in the series of alcohols are strikingly similar to those calculated for the same series of alcohols from both dielectric relaxation rate constants and ionic conductivities of dilute electrolytes. The experimental results are described in terms of an activated transport mechanism that is mediated by relaxation of the solution molecules. This microscopic picture of transport is postulated to be common to diffusion, dielectric relaxation, and ionic transport.

  13. Spherical Harmonics Analysis of the ECMWF Global Wind Fields at the 10-Meter Height Level During 1985: A Collection of Figures Illustrating Results

    NASA Technical Reports Server (NTRS)

    Sanchez, Braulio V.; Nishihama, Masahiro

    1997-01-01

    Half-daily global wind speeds in the east-west (u) and north-south (v) directions at the 10-meter height level were obtained from the European Centre for Medium Range Weather Forecasts (ECMWF) data set of global analyses. The data set covered the period 1985 January to 1995 January. A spherical harmonic expansion to degree and order 50 was used to perform harmonic analysis of the east-west (u) and north-south (v) velocity field components. The resulting wind field is displayed, as well as the residual of the fit, at a particular time. The contribution of particular coefficients is shown. The time variability of the coefficients up to degree and order 3 is presented. Corresponding power spectrum plots are given. Time series analyses were applied also to the power associated with degrees 0-10; the results are included.

  14. Development of SMA Actuated Morphing Airfoil for Wind Turbine Load Alleviation

    NASA Astrophysics Data System (ADS)

    Karakalas, A.; Machairas, T.; Solomou, A.; Riziotis, V.; Saravanos, D.

    Wind turbine rotor upscaling has entered a range of rotor diameters where the blade structure cannot sustain the increased aerodynamic loads without novel load alleviation concepts. Research on load alleviation using morphing blade sections is presented. Antagonistic shape memory alloy (SMA) actuators are implemented to deflect the section trailing edge (TE) to target shapes and to track target time series relating TE movement to changes in lift coefficient. The challenges posed by the complex thermomechanical response of the morphing section, and the enhancement of the SMA transient response to achieve frequencies meaningful for aerodynamic load alleviation, are addressed. Using a recently developed finite element for SMA actuators [1], actuator configurations are considered for fast cooling and heating cycles. Numerical results quantify the attained ranges of TE angle movement, the moving time period, and the developed stresses. Estimates of the attained variations of lift coefficient vs. time are also presented to assess the performance of the morphing section.

  15. Damage classification and estimation in experimental structures using time series analysis and pattern recognition

    NASA Astrophysics Data System (ADS)

    de Lautour, Oliver R.; Omenzetter, Piotr

    2010-07-01

    Developed for studying long sequences of regularly sampled data, time series analysis methods are being increasingly investigated for use in Structural Health Monitoring (SHM). In this research, autoregressive (AR) models were used to fit the acceleration time histories obtained from two experimental structures, a 3-storey bookshelf structure and the ASCE Phase II Experimental SHM Benchmark Structure, in the undamaged state and in a limited number of damaged states. The coefficients of the AR models were treated as damage-sensitive features and used as input to an Artificial Neural Network (ANN). The ANN was trained to classify damage cases or estimate remaining structural stiffness. The results showed that the combination of AR models and ANNs is an efficient tool for damage classification and estimation, performing well with a small number of damage-sensitive features and limited sensors.
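
The feature-extraction step described above can be sketched as follows: fit an AR(p) model to an acceleration record by least squares and use the coefficient vector as the damage-sensitive feature. This is a generic sketch (the model order, data, and the ANN stage are assumptions omitted here), not the authors' exact implementation.

```python
import numpy as np

def ar_coefficients(x, p):
    """Least-squares fit of x[t] = a1*x[t-1] + ... + ap*x[t-p] + e;
    the vector (a1..ap) serves as a damage-sensitive feature."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # column j holds the lag-(j+1) values for t = p .. n-1
    X = np.column_stack([x[p - 1 - j : n - 1 - j] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Check on a simulated AR(2) process with known coefficients:
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
coef = ar_coefficients(x, 2)   # close to [0.6, -0.3]
```

Coefficient vectors extracted this way from records in different structural states would then be stacked as rows of the input matrix for a classifier.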

  16. A generalized conditional heteroscedastic model for temperature downscaling

    NASA Astrophysics Data System (ADS)

    Modarres, R.; Ouarda, T. B. M. J.

    2014-11-01

    This study describes a method for deriving the time-varying second-order moment, or heteroscedasticity, of local daily temperature and its association with large-scale Coupled Canadian General Circulation Model predictors. This is carried out by applying a multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) approach to construct the conditional variance-covariance structure between General Circulation Model (GCM) predictors and maximum and minimum temperature time series during 1980-2000. Two MGARCH specifications, namely diagonal VECH and dynamic conditional correlation (DCC), are applied, and 25 GCM predictors are selected for bivariate temperature heteroscedastic modeling. It is observed that the conditional covariance between predictors and temperature is not very strong and mostly depends on the interaction between the random processes governing the temporal variation of predictors and predictands. The DCC model reveals a time-varying conditional correlation between GCM predictors and temperature time series. No remarkable increasing or decreasing trend is observed in the correlation coefficients between GCM predictors and observed temperature during 1980-2000, while a weak winter-summer seasonality is clear for both conditional covariance and correlation. Furthermore, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) stationarity test and the Brock-Dechert-Scheinkman (BDS) nonlinearity test showed that the GCM predictors, temperature, and their conditional correlation time series are nonlinear but stationary during 1980-2000. However, the degree of nonlinearity of the temperature time series is higher than that of most of the GCM predictors.
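
DCC-type models are built on univariate GARCH filters. As a hedged sketch (the parameters below are hypothetical and the multivariate DCC step is omitted), the GARCH(1,1) conditional-variance recursion at the heart of such models looks like:

```python
import numpy as np

def garch11_variance(eps, omega, alpha, beta):
    """GARCH(1,1) filter: sigma2[t] = omega + alpha*eps[t-1]**2 + beta*sigma2[t-1],
    started from the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = np.empty(len(eps))
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Simulate a GARCH(1,1) path with hypothetical parameters, then verify that
# the filter recovers the simulated conditional variances exactly:
rng = np.random.default_rng(1)
n = 2000
z = rng.standard_normal(n)
eps = np.empty(n)
s2 = np.empty(n)
s2[0] = 1.0
eps[0] = z[0]
for t in range(1, n):
    s2[t] = 0.1 + 0.1 * eps[t - 1] ** 2 + 0.8 * s2[t - 1]
    eps[t] = np.sqrt(s2[t]) * z[t]

filtered = garch11_variance(eps, 0.1, 0.1, 0.8)
```

In a DCC model, such filters are run for each series, and a second recursion drives the time-varying correlation of the standardized residuals.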

  17. A new methodology for determining dispersion coefficient using ordinary and partial differential transport equations.

    PubMed

    Cho, Kyung Hwa; Lee, Seungwon; Ham, Young Sik; Hwang, Jin Hwan; Cha, Sung Min; Park, Yongeun; Kim, Joon Ha

    2009-01-01

    The present study proposes a methodology for determining the effective dispersion coefficient based on field measurements performed in Gwangju (GJ) Creek in South Korea, which is environmentally degraded by artificial interferences such as weirs and culverts. Many previous approaches to determining the dispersion coefficient were limited in application due to the complexity of, and artificial interferences in, natural streams. Therefore, the sequential combination of an N-Tank-In-Series (NTIS) model and an Advection-Dispersion-Reaction (ADR) model is proposed in this study for evaluating the dispersion process in a complex stream channel. A series of water quality data was intensively monitored in the field to determine the effective dispersion coefficient of E. coli on a rainy day. As a result, the suggested methodology reasonably estimates the dispersion coefficient for GJ Creek as 1.25 m²/s. The sequential combined method also provides Number of tanks-Velocity-Dispersion coefficient (NVD) curves for convenient evaluation of the dispersion coefficients of other rivers or streams. Compared with previous studies, the present methodology is quite general and simple for determining effective dispersion coefficients applicable to other rivers and streams.

  18. A New Adaptive Self-Tuning Fourier Coefficients Algorithm for Periodic Torque Ripple Minimization in Permanent Magnet Synchronous Motors (PMSM)

    PubMed Central

    Gómez-Espinosa, Alfonso; Hernández-Guzmán, Víctor M.; Bandala-Sánchez, Manuel; Jiménez-Hernández, Hugo; Rivas-Araiza, Edgar A.; Rodríguez-Reséndiz, Juvenal; Herrera-Ruíz, Gilberto

    2013-01-01

    Torque ripple occurs in Permanent Magnet Synchronous Motors (PMSMs) due to the non-sinusoidal flux density distribution around the air-gap and the variable magnetic reluctance of the air-gap caused by the stator slot distribution. These torque ripples change periodically with rotor position and appear as speed variations, which degrade PMSM drive performance, particularly at low speeds, because of low inertial filtering. In this paper, a new self-tuning algorithm is developed for determining the Fourier Series Controller coefficients with the aim of reducing the torque ripple in a PMSM, thus allowing for smoother operation. This algorithm adjusts the controller parameters based on the harmonic distortion of the compensation signal in the time domain. Experimental evaluation is performed on a DSP-controlled PMSM evaluation platform. Test results validate the effectiveness of the proposed self-tuning algorithm, with the Fourier series expansion scheme, in reducing the torque ripple. PMID:23519345
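
A minimal sketch of extracting Fourier series coefficients from one sampled period of a ripple signal (the harmonic orders and amplitudes below are hypothetical, and this is not the paper's self-tuning loop):

```python
import numpy as np

def harmonic_coefficients(signal, n_harmonics):
    """Cosine (a_k) and sine (b_k) Fourier coefficients of one uniformly
    sampled period of a periodic signal, for k = 1..n_harmonics."""
    N = len(signal)
    c = np.fft.rfft(signal) / N
    a = 2.0 * c.real[1 : n_harmonics + 1]
    b = -2.0 * c.imag[1 : n_harmonics + 1]
    return a, b

# Hypothetical torque ripple containing 6th and 12th harmonics of the
# rotor position, a common pattern for slotting-related ripple:
theta = 2.0 * np.pi * np.arange(256) / 256
ripple = 0.3 * np.cos(6 * theta) + 0.1 * np.sin(12 * theta)
a, b = harmonic_coefficients(ripple, 12)   # a[5] ~ 0.3, b[11] ~ 0.1
```

A self-tuning controller of the kind described would iteratively adjust such coefficients until the measured ripple harmonics are driven toward zero.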

  19. Symbolic computation of recurrence equations for the Chebyshev series solution of linear ODE's. [ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Geddes, K. O.

    1977-01-01

    If a linear ordinary differential equation with polynomial coefficients is converted into integrated form then the formal substitution of a Chebyshev series leads to recurrence equations defining the Chebyshev coefficients of the solution function. An explicit formula is presented for the polynomial coefficients of the integrated form in terms of the polynomial coefficients of the differential form. The symmetries arising from multiplication and integration of Chebyshev polynomials are exploited in deriving a general recurrence equation from which can be derived all of the linear equations defining the Chebyshev coefficients. Procedures for deriving the general recurrence equation are specified in a precise algorithmic notation suitable for translation into any of the languages for symbolic computation. The method is algebraic and it can therefore be applied to differential equations containing indeterminates.
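
The integration step behind these recurrences can be made concrete. For a series f = Σ aₙTₙ, the antiderivative's Chebyshev coefficients satisfy b₁ = a₀ - a₂/2 and b_k = (a_{k-1} - a_{k+1})/(2k) for k ≥ 2, which the sketch below implements and checks against numpy's Chebyshev utilities (the example coefficients are arbitrary):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_integrate(a):
    """Chebyshev coefficients of an antiderivative of sum a[n]*T_n(x)
    (integration constant set to zero)."""
    a = np.asarray(a, dtype=float)
    a_pad = np.concatenate([a, [0.0, 0.0]])
    b = np.zeros(len(a) + 1)
    b[1] = a_pad[0] - a_pad[2] / 2.0        # from T_0 -> T_1 and T_2 -> T_3/6 - T_1/2
    for k in range(2, len(b)):
        b[k] = (a_pad[k - 1] - a_pad[k + 1]) / (2.0 * k)
    return b

a = np.array([1.0, 2.0, 3.0, 4.0])
b = cheb_integrate(a)
# Differentiating the antiderivative recovers the original coefficients:
assert np.allclose(C.chebder(b), a)
# The non-constant terms agree with numpy's own Chebyshev integration:
assert np.allclose(b[1:], C.chebint(a)[1:])
```

In the symbolic setting of the paper, the same structural relations are manipulated with indeterminate coefficients rather than numeric ones.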

  20. Large-scale Granger causality analysis on resting-state functional MRI

    NASA Astrophysics Data System (ADS)

    D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel

    2016-03-01

    We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, the large-scale Granger causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at an individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, as accomplished by the Louvain method. We demonstrate the effectiveness of our approach by recovering the motor and visual cortex from resting-state human brain fMRI data and comparing it with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice coefficient (DC). The best DC obtained was 0.59, implying strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in resting-state fMRI.
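
The Dice coefficient used for the network comparison is simple to state; a minimal sketch over voxel index sets (the example sets are arbitrary):

```python
def dice_coefficient(a, b):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|); 1 means identical sets."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two hypothetical voxel sets sharing two voxels:
dc = dice_coefficient({1, 2, 3, 4}, {3, 4, 5})   # 2*2 / (4+3) = 4/7
```

Applied to binary network masks, the sets would contain the voxel indices assigned to each recovered network.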

  1. Fluctuations of sediments-related optical parameters on a megatidal beach in the Eastern English Channel

    NASA Astrophysics Data System (ADS)

    Xing, Q.; Schmitt, F.; Loisel, H.

    2009-04-01

    To investigate the influence of turbulence coupled with waves and tides on the re-suspension of sediments, a 4-hour field experiment was conducted on a surf-zone beach near Wimereux, France, on the Eastern English Channel, which is characterized by a semi-diurnal megatide (spring tidal range > 8 m). A sensor cluster was fixed 1.5 m above the sea bed at low tide. The particle scattering coefficient and the optical attenuation coefficient were measured as surrogates of the suspended sediment concentration (SSC); the water temperature, pressure, and horizontal 2-D velocity were also measured simultaneously in continuous mode at a frequency of 1 Hz. The pressure record was used to monitor the water level and to estimate the variation of surface wave heights by removing the local averages of the time series. The pressure time series show that the experiment started with a water level of about 3.7 m at 10 o'clock, ended with 4.5 m at 14 o'clock, and that the water level peaked at about 12 o'clock. The time series of current direction indicate a steady along-coast current with a direction of 218 degrees when the water level was near its maximum of 6 m, i.e., when the sensors were 4.5 m below the water surface. The particle scattering coefficient and the optical attenuation coefficient exhibit a similar fluctuating trend, with a correlation coefficient of 0.85 between them. Although there is a time lag of about 1000 s, a relation between the optical parameters and the square of U is observed, i.e., SSC is a function of U, where U is the vector product of the along-shore and cross-shore velocities (v and u).
The cross-shore velocity u fluctuates roughly about a mean of zero, and its variation decreases exponentially with increasing water level, consistent with the expectation that wave orbital motions decay exponentially with water depth. The variation of v differs slightly from that of u, and the mean of its fluctuations changes with the occurrence of the along-coast current. Power spectral analysis based on the Fast Fourier Transform (FFT) is used to study the scaling behaviors through an energy (E(f)) ~ frequency (f) relation of the form log(E(f)) ~ -p log(f). Temperature fluctuations correspond to passive-scalar turbulence, with p = 1.79. When f < 0.003 Hz, the values of p for the fluctuations of v and u lie between 5/3 and 3, closer to 3, which may suggest that wave orbital motions dominate the mixed behavior with turbulence. Particle scattering coefficients and water attenuation coefficients exhibit a similar scaling behavior to each other, and when f < 0.003 Hz the values of p are close to, and slightly larger than, 3, which also suggests a role of wave orbital motions in the re-suspension of sediments. In this experiment, a water volume of tens to one hundred cubic centimeters was sampled for the velocity measurement; a finer spatial resolution may be more suitable for observing turbulence as well as the sediment-related optical parameters.
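
The spectral exponent p in log(E(f)) ~ -p log(f) can be estimated with a log-log least-squares fit of the FFT power spectrum. The sketch below is generic (synthetic signals, not the beach data): white noise gives p near 0, and a random walk gives p near 2.

```python
import numpy as np

def spectral_slope(x, dt=1.0):
    """Estimate p in E(f) ~ f^-p from the periodogram by log-log regression."""
    f = np.fft.rfftfreq(len(x), dt)[1:]          # drop the zero frequency
    E = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(E), 1)
    return -slope

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
p_white = spectral_slope(noise)              # near 0 (flat spectrum)
p_brown = spectral_slope(np.cumsum(noise))   # near 2 (random walk)
```

In practice, the fit would be restricted to the frequency band of interest (e.g., f < 0.003 Hz in the study above) rather than the full spectrum.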

  2. Time Series Forecasting of Daily Reference Evapotranspiration by Neural Network Ensemble Learning for Irrigation System

    NASA Astrophysics Data System (ADS)

    Manikumari, N.; Murugappan, A.; Vinodhini, G.

    2017-07-01

    Time series forecasting has gained remarkable interest from researchers in the last few decades, and neural-network-based time series forecasting has been employed in various application areas. Reference evapotranspiration (ETO) is one of the most important components of the hydrologic cycle, and its precise assessment is vital in water balance and crop yield estimation and in water resources system design and management. This work aimed at achieving accurate time series forecasts of ETO using a combination of neural network approaches, and was carried out using data collected in the command area of the VEERANAM Tank in India during the period 2004-2014. The Neural Network (NN) models were combined by ensemble learning in order to improve the accuracy of forecasting daily ETO (for the year 2015). Bagged Neural Network (Bagged-NN) and Boosted Neural Network (Boosted-NN) ensemble learning were employed. The Bagged-NN and Boosted-NN ensemble models proved more accurate than individual NN models, and among the ensemble models, Boosted-NN reduced the forecasting errors compared to Bagged-NN and the individual NNs. The regression coefficient, Mean Absolute Deviation, Mean Absolute Percentage Error, and Root Mean Square Error also confirm that Boosted-NN leads to improved ETO forecasting performance.
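
The bagging principle behind Bagged-NN can be sketched with a simpler base learner. The snippet below is a hedged illustration (synthetic data, with linear least-squares models standing in for the neural networks): models are fit on bootstrap resamples and their predictions averaged.

```python
import numpy as np

def bagged_predict(X, y, X_new, n_models=25, seed=0):
    """Bagging sketch: average predictions of base models fit on bootstrap resamples."""
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, n)                       # bootstrap sample
        coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        preds.append(X_new @ coef)
    return np.mean(preds, axis=0)

# Synthetic regression problem with true weights [2, -1]:
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.standard_normal(200)
pred = bagged_predict(X, y, np.array([[1.0, 1.0]]))       # close to 2 - 1 = 1
```

Boosting differs in that base models are fit sequentially, each focusing on the residual errors of the ensemble so far.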

  3. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components, and the reconstruction back into time series, provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool
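
Two of the goodness-of-fit measures named above are short enough to state directly; a minimal sketch (the weighted rank method and the magnitude/sequence decomposition are beyond this snippet):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means no better
    than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

obs = [1.0, 2.0, 3.0, 4.0]
nse_perfect = nash_sutcliffe(obs, obs)        # 1.0
nse_mean = nash_sutcliffe(obs, [2.5] * 4)     # 0.0
```

Both measures capture combined magnitude-and-sequence error; separating the two requires the decomposition described above.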

  4. Monitoring cotton root rot by synthetic Sentinel-2 NDVI time series using improved spatial and temporal data fusion.

    PubMed

    Wu, Mingquan; Yang, Chenghai; Song, Xiaoyu; Hoffmann, Wesley Clint; Huang, Wenjiang; Niu, Zheng; Wang, Changyao; Li, Wang; Yu, Bo

    2018-01-31

    To better understand the progression of cotton root rot within the season, time series monitoring is required. In this study, an improved spatial and temporal data fusion approach (ISTDFA) was employed to combine 250-m Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) and 10-m Sentinel-2 NDVI data to generate a synthetic Sentinel-2 NDVI time series for monitoring this disease. The phenology of healthy and infected cotton was then modeled using a logistic model. Finally, several phenology parameters were calculated, including the onset day of greenness minimum (OGM), growing season length (GSL), onset of greenness increase (OGI), maximum NDVI value, and integral area of the phenology curve. The results showed that ISTDFA could combine time series MODIS and Sentinel-2 NDVI data with a correlation coefficient of 0.893. The logistic model could describe the phenology curves with R-squared values from 0.791 to 0.969. Moreover, the phenology curve of infected cotton showed a significant difference from that of healthy cotton: the maximum NDVI value, OGM, GSL and integral area of the phenology curve for infected cotton were reduced by 0.045, 30 days, 22 days, and 18.54%, respectively, compared with those for healthy cotton.
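
A logistic green-up curve and the derived phenology metrics can be sketched as below; the parameter values are hypothetical, chosen only to mimic the qualitative healthy-vs-infected contrast reported above (this is not the paper's fitted model).

```python
import numpy as np

def logistic_ndvi(t, base, amp, k, t_mid):
    """Logistic phenology curve: NDVI rises from `base` toward `base + amp`,
    with the inflection (rapid green-up) at day `t_mid`."""
    return base + amp / (1.0 + np.exp(-k * (t - t_mid)))

days = np.arange(200.0)
healthy = logistic_ndvi(days, 0.2, 0.6, 0.15, 80.0)
infected = logistic_ndvi(days, 0.2, 0.45, 0.15, 80.0)   # reduced amplitude

max_ndvi = healthy.max()       # plateau near base + amp = 0.8
area = healthy.sum()           # integral area as a unit-day Riemann sum
```

Metrics such as OGI and OGM would be read off as the days where the fitted curve crosses chosen fractions of its amplitude.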

  5. Modified DTW for a quantitative estimation of the similarity between rainfall time series

    NASA Astrophysics Data System (ADS)

    Djallel Dilmi, Mohamed; Barthès, Laurent; Mallet, Cécile; Chazottes, Aymeric

    2017-04-01

    Precipitation results from complex meteorological phenomena and can be described as an intermittent process. The spatial and temporal variability of this phenomenon is significant and covers large scales. To analyze and model this variability and/or structure, several studies use a network of rain gauges providing several time series of precipitation measurements. To compare these different time series, the authors compute for each one a set of parameters (PDF, rain peak intensity, occurrence, amount, duration, intensity, etc.). However, despite the calculation of these parameters, the comparison between two series of measurements remains qualitative. Due to advection processes, when different sensors of an observation network measure precipitation time series that are identical in terms of intermittency or intensities, there is a time lag between the measured series. Analyzing and extracting relevant information on physical phenomena from these precipitation time series requires the development of automatic analytical methods capable of comparing two time series of precipitation measured by different sensors or at two different locations, and thus quantifying their difference/similarity. The limits of the Euclidean distance for measuring similarity between precipitation time series have been well demonstrated and explained (the Euclidean distance is very sensitive to phase-shift effects: between two identical but slightly shifted time series, this distance is not negligible). To quantify and analyze these time lags, correlation functions are well established, normalized, and commonly used to measure the spatial dependences required by many applications. However, a considerable scatter of the inter-rain-gauge correlation coefficients obtained from individual pairs of rain gauges is generally observed.
Because of this substantial dispersion of the estimated time lags, the interpretation of the inter-correlation is not straightforward. We propose here an improvement of the Euclidean distance that integrates the global complexity of the rainfall series. Dynamic Time Warping (DTW), used in speech recognition, allows matching two time series that are shifted in time and provides the most probable time lag. However, the original formulation of DTW suffers from some limitations; in particular, it is not suited to rain intermittency. In this study we present an adaptation of DTW for the analysis of rainfall time series, using time series from the "Météo France" rain gauge network observed between January 1st, 2007 and December 31st, 2015 at 25 stations located in the Île-de-France area. We then analyze the results (e.g., the distance, and the relationship between the time lag detected by our method and other measured parameters such as wind speed and direction) to show the ability of the proposed similarity measure to provide useful information on the rain structure. The possibility of using this similarity measure to define a quality indicator for a sensor integrated into an observation network is also envisaged.
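
The classic dynamic-programming form of DTW (before any intermittency-specific adaptation such as the one proposed above) can be sketched as:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D series via the
    standard O(n*m) dynamic program with |.| as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two identical rain pulses, one shifted by a time step: the Euclidean
# distance is non-negligible, but DTW aligns them at zero cost.
a = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
b = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
```

The optimal warping path recovered from `D` by backtracking gives the most probable time lag between the two series.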

  6. A series solution for horizontal infiltration in an initially dry aquifer

    NASA Astrophysics Data System (ADS)

    Furtak-Cole, Eden; Telyakovskiy, Aleksey S.; Cooper, Clay A.

    2018-06-01

    The porous medium equation (PME) is a generalization of the traditional Boussinesq equation in which hydraulic conductivity is a power-law function of height. We analyze the horizontal recharge of an initially dry unconfined aquifer of semi-infinite extent, as would be found in an aquifer adjacent to a rising river. If the water level can be modeled as a power-law function of time, similarity variables can be introduced and the original problem reduced to a boundary value problem for a nonlinear ordinary differential equation. The position of the advancing front is not known ahead of time and must be found in the process of solution. We present an analytical solution in the form of a power series, with the coefficients of the series given by a recurrence relation. The analytical solution compares favorably with a highly accurate numerical solution, and only a small number of terms of the series are needed to achieve high accuracy in the scenarios considered here. We also conduct a series of physical experiments in an initially dry wedged Hele-Shaw cell, where flow is modeled by a special form of the PME. Our analytical solution closely matches the hydraulic head profiles in the Hele-Shaw cell experiment.

  7. Efficient Maize and Sunflower Multi-year Mapping with NDVI Time Series of HJ-1A/1B in Hetao Irrigation District of Inner Mongolia, China

    NASA Astrophysics Data System (ADS)

    Yu, B.; Shang, S.

    2016-12-01

    Food shortage is one of the major challenges facing human beings, and improved monitoring of the planting and distribution of the main crops is urgently needed to address the associated economic and social issues. Recently, the extensive use of remote sensing satellite data has provided favorable conditions for crop identification in large irrigation districts with complex planting structures. Differences in crop phenology are the main basis for crop identification, and normalized difference vegetation index (NDVI) time series can delineate the crop phenology cycle. The key to crop identification is therefore obtaining a high-quality NDVI time series. MODIS and Landsat TM satellite images are the most frequently used, but neither can guarantee both high temporal and high spatial resolution. Accordingly, this paper makes use of NDVI time series extracted from China Environment Satellite (HJ-1A/1B) data, which have a two-day revisit period and 30-m spatial resolution. The NDVI time series are fitted with an asymmetric logistic curve; the fit is good, with correlation coefficients greater than 0.9. Phenological parameters are derived from the fitted NDVI curves, and crop identification is carried out using the different relation ellipses between NDVI and its phenological parameters for different crops. Taking the Hetao Irrigation District of Inner Mongolia as an example, multi-year maize and sunflower are identified in the district with good results: compared with official statistics, the relative errors are both lower than 5%. The results show that the NDVI time series derived from the HJ-1A/1B CCD can delineate the crop phenology cycle accurately, demonstrating its application to crop identification in irrigated districts.

  8. Conformal expansions and renormalons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rathsman, J.

    2000-02-07

    The coefficients in perturbative expansions in gauge theories are factorially increasing, predominantly due to renormalons. This type of factorial increase is not expected in conformal theories. In QCD, conformal relations between observables can be defined in the presence of a perturbative infrared fixed-point. Using the Banks-Zaks expansion, the authors study the effect of the large-order behavior of the perturbative series on the conformal coefficients. The authors find that in general these coefficients become factorially increasing. However, when the factorial behavior genuinely originates in a renormalon integral, as implied by a postulated skeleton expansion, it does not affect the conformal coefficients. As a consequence, the conformal coefficients will indeed be free of renormalon divergence, in accordance with previous observations concerning the smallness of these coefficients for specific observables. The authors further show that the correspondence of the BLM method with the skeleton expansion implies a unique scale-setting procedure. The BLM coefficients can be interpreted as the conformal coefficients in the series relating the fixed-point value of the observable with that of the skeleton effective charge. Through the skeleton expansion the relevance of renormalon-free conformal coefficients extends to real-world QCD.

  9. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation for the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, the normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95% confidence limits, and that the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. The predicted series is also close to the original series, providing a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural, or industrial use.
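
The moment statistics used in such comparisons can be computed directly; a generic sketch (negative excess kurtosis indicates a platykurtic distribution, positive indicates leptokurtic):

```python
import numpy as np

def moment_summary(x):
    """Mean, median, coefficient of variation, skewness, and excess kurtosis
    (platykurtic < 0 < leptokurtic)."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    z = (x - m) / s
    return {"mean": m, "median": np.median(x), "cv": s / m,
            "skewness": np.mean(z ** 3), "kurtosis": np.mean(z ** 4) - 3.0}

# A uniform sample is platykurtic (population excess kurtosis is -1.2):
stats = moment_summary(np.random.default_rng(0).uniform(0.0, 1.0, 100_000))
```

For a monthly water-quality record, the same summary would be computed per parameter before fitting the ARIMA model.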

  10. Combined Vocal Exercises for Rehabilitation After Supracricoid Laryngectomy: Evaluation of Different Execution Times.

    PubMed

    Silveira, Hevely Saray Lima; Simões-Zenari, Marcia; Kulcsar, Marco Aurélio; Cernea, Claudio Roberto; Nemr, Kátia

    2017-10-27

    Supracricoid partial laryngectomy allows the preservation of laryngeal functions with good local cancer control. The aim was to assess laryngeal configuration and voice analysis data following the performance of a combination of two vocal exercises, the prolonged /b/ vocal exercise combined with the vowel /e/ using chest- and arm-pushing, for different durations among individuals who have undergone supracricoid laryngectomy. Eleven patients who had undergone supracricoid partial laryngectomy with cricohyoidoepiglottopexy (CHEP) were evaluated using voice recordings. Four judges separately performed an auditory-perceptual analysis of the voices, presented in random order; for the analysis of intrajudge reliability, 70% of the voices were repeated. The intraclass correlation coefficient was used to analyze the reliability of the judges. For each judge, the comparison between baseline (time point 0) and the recordings after the first through fifth series of exercises (time points 1 to 5) was made using the Friedman test with a significance level of 5%. The data on laryngeal configuration were subjected to descriptive analysis. The results of judge 1, which had the greatest reliability, were considered in the evaluation. There was an improvement in the general grade of vocal deviation, roughness, and breathiness from time point 4 (T4). The prolonged /b/ vocal exercise, combined with the vowel /e/ using chest- and arm-pushing exercises, was associated with an improvement in the overall grade of vocal deviation, roughness, and breathiness starting at time point 4 among patients who had undergone supracricoid laryngectomy with CHEP reconstruction. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  11. Statistical Analysis of Time-Series from Monitoring of Active Volcanic Vents

    NASA Astrophysics Data System (ADS)

    Lachowycz, S.; Cosma, I.; Pyle, D. M.; Mather, T. A.; Rodgers, M.; Varley, N. R.

    2016-12-01

    Despite recent advances in the collection and analysis of time-series from volcano monitoring, and the resulting insights into volcanic processes, challenges remain in forecasting and interpreting activity from near real-time analysis of monitoring data. Statistical methods have the potential to characterise the underlying structure of these time-series and facilitate their intercomparison, and so inform the interpretation of volcanic activity. We explore the utility of multiple statistical techniques that could be widely applicable to monitoring data, including Shannon entropy and detrended fluctuation analysis, through their application to various data streams from volcanic vents during periods of temporally variable activity. Each technique reveals changes through time in the structure of some of the data that were not apparent from conventional analysis. For example, we calculate the Shannon entropy (a measure of the randomness of a signal) of time-series from the recent dome-forming eruptions of Volcán de Colima (Mexico) and Soufrière Hills (Montserrat). The entropy of real-time seismic measurements and of the count rate of certain volcano-seismic event types from both volcanoes is found to be temporally variable, with these data generally having higher entropy during periods of lava effusion and/or larger explosions. In some instances, the entropy shifts prior to or coincident with changes in seismic or eruptive activity, some of which were not clearly recognised by real-time monitoring. Comparison with other statistics demonstrates the sensitivity of the entropy to the data distribution, but also that it is distinct from conventional statistical measures such as the coefficient of variation. We conclude that each analysis technique examined could provide valuable insights for the interpretation of diverse monitoring time-series.
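
A histogram-based Shannon entropy of a monitoring signal can serve as a generic sketch (the bin count is an arbitrary choice here, and the paper's exact estimator is not specified):

```python
import numpy as np

def shannon_entropy(x, bins=16):
    """Shannon entropy (bits) of a signal's histogram distribution:
    0 for a constant signal, higher for more random signals."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
h_const = shannon_entropy(np.ones(1000))             # 0 bits
h_noise = shannon_entropy(rng.uniform(size=10_000))  # near log2(16) = 4 bits
```

Applied in sliding windows over a seismic record, such an estimate would yield an entropy time-series of the kind compared against eruptive activity above.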

  12. Direct measurements of bed stress under swash in the field

    NASA Astrophysics Data System (ADS)

    Conley, Daniel C.; Griffin, John G.

    2004-03-01

    Utilizing flush mounted hot film anemometry, the bed stress under swash was measured directly in a field experiment conducted on Barret Beach, Fire Island, New York. The theory, development, and calibration of the instrument package are discussed, and results from the field experiment are presented. Examples of bed stress time series throughout a swash cycle are presented, and an ensemble averaged swash bed stress cycle is calculated. Strong asymmetry is observed between the uprush and backwash phases of the swash flow. The maximum bed shear stress exerted by the uprush is approximately double that of the backwash, while the duration of the backwash is 135% greater than that of the uprush. Friction coefficients in the swash zone are observed to be similar in magnitude to those from steady flow, with the mean observed friction coefficient equal to 0.0037. Swash friction coefficients derived from the current measurements exhibit a Reynolds number dependence similar to that observed for other flows. A systematic difference between coefficients for uprush and backwash is suggested.
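
    Friction coefficients like those reported can be computed from the measured bed stress, assuming the standard quadratic drag law τ = ½·ρ·c_f·u² (a common convention; the paper's exact definition may differ). The numbers below are illustrative, not the paper's data:

```python
def friction_coefficient(tau, u, rho=1025.0):
    """Swash friction coefficient c_f from the quadratic drag law
    tau = 0.5 * rho * c_f * u**2 (a standard convention; the paper's
    exact definition may differ). rho defaults to seawater density."""
    return 2.0 * tau / (rho * u ** 2)

# Illustrative numbers (not the paper's data): ~1.9 Pa of bed stress
# under a 1 m/s flow gives c_f close to the reported mean of 0.0037.
cf = friction_coefficient(1.9, 1.0)
print(cf)
```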

  13. Healthcare Coinsurance Elasticity Coefficient Estimation Using Monthly Cross-sectional, Time-series Claims Data.

    PubMed

    Scoggins, John F; Weinberg, Daniel A

    2017-06-01

    Published estimates of the healthcare coinsurance elasticity coefficient have typically relied on annual observations of individual healthcare expenditures, even though health plan membership and expenditures are traditionally reported in monthly units and several studies have stressed the need for demand models to recognize the episodic nature of healthcare. Summing individual healthcare expenditures into annual observations complicates two common challenges of statistical inference: heteroscedasticity and regressor endogeneity. This paper estimates the elasticity coefficient using a monthly panel data model that addresses the heteroscedasticity and endogeneity problems with relative ease. Healthcare claims data from employees of King County, Washington, during 2005 to 2011 were used to estimate the mean point elasticity coefficient: -0.314 (0.015 standard error) to -0.145 (0.015 standard error), depending on model specification. These estimates bracket the -0.2 point estimate (range: -0.22 to -0.17) derived from the famous Rand Health Insurance Experiment. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Cloud Tracking from Satellite Pictures.

    DTIC Science & Technology

    1981-07-01

    For sufficiently smooth contours, this information can be obtained from very few low-order coefficients. The inverse transform of the two lowest-order coefficients is an ellipse approximating the original contour.

  15. Simplifying the Reinsch algorithm for the Baker-Campbell-Hausdorff series

    NASA Astrophysics Data System (ADS)

    Van-Brunt, Alexander; Visser, Matt

    2016-02-01

    The Goldberg version of the Baker-Campbell-Hausdorff series computes the quantity Z(X, Y) = ln(e^X e^Y) = Σ_w g(w) w(X, Y), where X and Y need not commute, in terms of "words" w constructed from the {X, Y} "alphabet." The so-called Goldberg coefficients g(w) are the central topic of this article. This Baker-Campbell-Hausdorff series is a general purpose tool of very wide applicability in mathematical physics, quantum physics, and many other fields. The Reinsch algorithm for the truncated series permits one to calculate the Goldberg coefficients up to some fixed word length |w| by using nilpotent (|w| + 1) × (|w| + 1) matrices. We shall show how to further simplify the Reinsch algorithm, making its implementation (in principle) utterly straightforward using "off the shelf" symbolic manipulation software. Specific computations provide examples which help to provide a deeper understanding of the Goldberg coefficients and their properties. For instance, we shall establish some strict bounds (and some equalities) on the number of non-zero Goldberg coefficients. Unfortunately, we shall see that the number of non-zero Goldberg coefficients often grows very rapidly (in fact exponentially) with the word length |w|. Furthermore, the simplified Reinsch algorithm readily generalizes to many closely related but still quite distinct problems. We shall also present closely related results for the symmetric product S(X, Y) = ln(e^{X/2} e^Y e^{X/2}) = Σ_w g_S(w) w(X, Y). Variations on such themes are straightforward. For instance, one can just as easily consider the "loop" product L(X, Y) = ln(e^X e^Y e^{-X} e^{-Y}) = Σ_w g_L(w) w(X, Y). This "loop" type of series is of interest, for instance, when considering differential geometric parallel transport around a closed curve, non-Abelian versions of Stokes' theorem, or Wigner rotation/Thomas precession in special relativity. Several other closely related series are also briefly investigated.
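
    The nilpotent-matrix idea behind the Reinsch algorithm can be illustrated with exact rational arithmetic: for strictly upper-triangular 4 × 4 matrices the exponential and logarithm series terminate, so ln(e^X e^Y) can be computed exactly and compared with the BCH expansion truncated at word length 3 (every word of length ≥ 4 vanishes in this algebra). A minimal sketch, not the paper's implementation:

```python
from fractions import Fraction as F

N = 4  # strictly upper-triangular 4x4 matrices satisfy A^4 = 0

def mat(rows):
    return [[F(v) for v in row] for row in rows]

def eye():
    return [[F(int(i == j)) for j in range(N)] for i in range(N)]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def madd(A, B, s=F(1)):
    return [[A[i][j] + s * B[i][j] for j in range(N)] for i in range(N)]

def mexp(A):
    # exp(A) = I + A + A^2/2 + A^3/6 -- the series terminates (A^4 = 0).
    A2 = mmul(A, A)
    A3 = mmul(A2, A)
    out = eye()
    for T, c in ((A, 1), (A2, 2), (A3, 6)):
        out = madd(out, T, F(1, c))
    return out

def mlog(M):
    # log(I + B) = B - B^2/2 + B^3/3 with nilpotent B = M - I.
    B = madd(M, eye(), F(-1))
    B2 = mmul(B, B)
    B3 = mmul(B2, B)
    return madd(madd(B, B2, F(-1, 2)), B3, F(1, 3))

def comm(A, B):
    return madd(mmul(A, B), mmul(B, A), F(-1))

X = mat([[0, 1, 2, 0], [0, 0, 1, 1], [0, 0, 0, 2], [0, 0, 0, 0]])
Y = mat([[0, 2, 0, 1], [0, 0, 3, 0], [0, 0, 0, 1], [0, 0, 0, 0]])

# Exact Z = log(exp(X) exp(Y)) in the nilpotent algebra...
Z = mlog(mmul(mexp(X), mexp(Y)))

# ...equals the BCH series truncated at word length 3, since every word
# of length >= 4 is a product of four strictly upper-triangular matrices:
# Z = X + Y + (1/2)[X,Y] + (1/12)[X,[X,Y]] - (1/12)[Y,[X,Y]]
bch = madd(madd(X, Y), comm(X, Y), F(1, 2))
bch = madd(bch, comm(X, comm(X, Y)), F(1, 12))
bch = madd(bch, comm(Y, comm(X, Y)), F(-1, 12))
print(Z == bch)
```

    The Fraction arithmetic keeps every entry exact, which is the same reason the Reinsch construction yields the rational Goldberg coefficients exactly.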

  16. Comparison of circular orbit and Fourier power series ephemeris representations for backup use by the upper atmosphere research satellite onboard computer

    NASA Technical Reports Server (NTRS)

    Kast, J. R.

    1988-01-01

    The Upper Atmosphere Research Satellite (UARS) is a three-axis stabilized Earth-pointing spacecraft in a low-Earth orbit. The UARS onboard computer (OBC) uses a Fourier Power Series (FPS) ephemeris representation that includes 42 position and 42 velocity coefficients per axis, with position residuals at 10-minute intervals. New coefficients and 32 hours of residuals are uploaded daily. This study evaluated two backup methods that permit the OBC to compute an approximate spacecraft ephemeris in the event that new ephemeris data cannot be uplinked for several days: (1) extending the use of the FPS coefficients previously uplinked, and (2) switching to a simple circular orbit approximation designed and tested (but not implemented) for LANDSAT-D. The FPS method provides greater accuracy during the backup period and does not require additional ground operational procedures for generating and uplinking an additional ephemeris table. The tradeoff is that the high accuracy of the FPS will be degraded slightly by adopting the longer fit period necessary to obtain backup accuracy for an extended period of time. The results for UARS show that extended use of the FPS is superior to the circular orbit approximation for short-term ephemeris backup.

  17. Laboratory measurement of the absorption coefficient of riboflavin for ultraviolet light (365 nm).

    PubMed

    Iseli, Hans Peter; Popp, Max; Seiler, Theo; Spoerl, Eberhard; Mrochen, Michael

    2011-03-01

    Corneal cross-linking (CXL) is an increasingly used treatment technique for stabilizing the cornea in keratoconus. Cross-linking (polymerization) between collagen fibrils is induced by riboflavin (vitamin B2) and ultraviolet light (365 nm). Although the absorption coefficient has been reported to reach a constant value at higher riboflavin concentrations, the Lambert-Beer law predicts a linear increase with concentration. This work was carried out to determine the absorption behavior at different riboflavin concentrations, to further investigate the purported plateau in the absorption coefficient of riboflavin, and to identify possible bleaching effects. The Lambert-Beer law was used to calculate the absorption coefficient at various riboflavin concentrations. The investigated riboflavin solutions were prepared using a mixture of 0.5% riboflavin and 20% Dextran T500 dissolved in 0.9% sodium chloride solution, at concentrations of 0%, 0.02%, 0.03%, 0.04%, 0.05%, 0.06%, 0.08%, 0.1%, 0.2%, 0.3%, 0.4%, and 0.5%, and were measured with and without an aperture plate. An additional test series measured the transmitted power at selected riboflavin concentrations over time. In diluted solutions, a linear correlation exists between the absorption coefficient and riboflavin concentration. The absorption coefficient reaches a plateau, but this occurs at a higher riboflavin concentration (0.1%) than previously reported (just above 0.04%). Transmitted light power increases over time, indicating a bleaching effect of riboflavin. The riboflavin concentration can therefore be varied as a treatment parameter over a considerably broader range than previously thought. Copyright 2011, SLACK Incorporated.
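
    The absorption coefficient in a transmission measurement follows from the Lambert-Beer law, here in the natural-log convention I = I₀·exp(−μd); linearity of μ in concentration is what the law predicts below the plateau. A quick sketch with illustrative (not measured) numbers:

```python
import math

def absorption_coefficient(transmitted, incident, thickness_cm):
    """Absorption coefficient mu (1/cm) from the Lambert-Beer law in the
    natural-log convention: I = I0 * exp(-mu * d)  =>  mu = -ln(I/I0)/d."""
    return -math.log(transmitted / incident) / thickness_cm

# Illustrative numbers: under Lambert-Beer, doubling the concentration
# doubles mu, so transmission through the same 400-micron layer drops
# from 50% to 25%.
mu_a = absorption_coefficient(0.50, 1.0, 0.04)  # 50% transmission
mu_b = absorption_coefficient(0.25, 1.0, 0.04)  # 25% transmission
print(mu_a, mu_b / mu_a)
```

    The bleaching effect reported above would show up as `transmitted` rising over time at fixed concentration, i.e. an apparent decrease in μ.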

  18. Newly found evidence of Sun-climate relationships

    NASA Technical Reports Server (NTRS)

    Kim, Hongsuk H.; Huang, Norden E.

    1993-01-01

    Solar radiation cycles drive climatic changes intercyclically. These interdecadal changes were detected as variations in solar total irradiances over the time period of recorded global surface-air-temperature (SAT) and have been restored utilizing Earth Radiation Budget Channel 10C measurements (1978-1990), Greenwich Observatory faculae data (1874-1975), and Taipei Observatory Active Region data (1964-1991). Analysis of the two separate events was carried out by treating each as a discrete time series determined by the length of each solar cycle. The results show that the global SAT responded closely to the input of solar cyclical activities, S, with a quantitative relation of T = 1.62 * S with a correlation coefficient of 0.61. This correlation peaks at 0.71 with a built-in time lag of 32 months in temperature response. Solar forcing on the interannual time scale was also detected, and the derived relationship of T = 0.17 * S with a correlation coefficient of 0.66 was observed. Our analysis shows the derived climate sensitivities approximately fit the theoretical feedback slope, 4T^3.

  19. Interannual Variations In the Low-Degree Components of the Geopotential derived from SLR and the Connections With Geophysical/Climatic Processes

    NASA Technical Reports Server (NTRS)

    Chao, Benjamin F.; Cox, Christopher M.; Au, Andrew Y.

    2004-01-01

    Recent Satellite Laser Ranging derived long wavelength gravity time series analysis has focused to a large extent on the effects of the recent large changes in the Earth's J2, and the potential causes. However, it is difficult to determine whether there are corresponding signals in the shorter wavelength zonals from the existing SLR-derived time variable gravity results, although it appears that geophysical fluid transport is being observed. For example, the recovered J3 time series shows remarkable agreement with NCEP-derived estimates of atmospheric gravity variations. Likewise, some of the non-zonal spherical harmonic coefficient series have significant interannual signal that appears to be related to mass transport. The non-zonal degree 2 terms show reasonable correlation with atmospheric signals, as well as climatic effects such as El Nino Southern Oscillation. While the formal uncertainty of these terms is significantly higher than that for J2, it is also clear that there is useful signal to be extracted. Consequently, the SLR time series is being reprocessed to improve the time variable gravity field recovery. We will present recent updates on the J2 evolution, as well as a look at other components of the interannual variations of the gravity field, complete through degree 4, and possible geophysical and climatic causes.

  20. Analysis of stochastic characteristics of the Benue River flow process

    NASA Astrophysics Data System (ADS)

    Otache, Martins Y.; Bakir, Mohammad; Li, Zhijia

    2008-05-01

    Stochastic characteristics of the Benue River streamflow process are examined under conditions of data austerity. The streamflow process is investigated for trend, non-stationarity and seasonality over a time period of 26 years. Results of trend analyses with the Mann-Kendall test show that there is no trend in the annual mean discharges. Monthly flow series examined with the seasonal Kendall test indicate the presence of a positive change in trend for some months, especially August, January, and February. For the stationarity test, daily and monthly flow series appear to be stationary, whereas at the 1%, 5%, and 10% significance levels the stationarity alternative hypothesis is rejected for the annual flow series. Though monthly flow appears to be stationary according to this test, because of high seasonality it could be said to exhibit periodic stationarity based on the seasonality analysis. The following conclusions are drawn: (1) There is seasonality in both the mean and variance, with unimodal distribution. (2) Days with high mean also have high variance. (3) Skewness coefficients for the months within the dry season are greater than those of the wet season, and seasonal autocorrelations for streamflow during the dry season are generally larger than those of the wet season; indeed, they are significantly different for most of the months. (4) The autocorrelation functions estimated “over time” are greater in absolute value for data that have not been deseasonalised but were initially normalised by logarithmic transformation only, while the autocorrelation functions for i = 1, 2, ..., 365 estimated “over realisations” have coefficients significantly different from the other coefficients.
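
    The Mann-Kendall test used for the trend analysis is simple to implement: it counts concordant minus discordant pairs and normalizes by the no-trend variance. A minimal sketch, ignoring the tie correction:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns (S, Z). S > 0 suggests an upward
    trend; |Z| > 1.96 is significant at the 5% level. Uses the no-ties
    variance formula for simplicity."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var)    # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    return s, z

rising = list(range(30))   # a monotone series: every pair is concordant
s, z = mann_kendall(rising)
print(s, z)
```

    A seasonal Kendall test, as used for the monthly series, amounts to computing S separately for each month and summing before normalizing.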

  1. Carbon-dioxide emissions trading and hierarchical structure in worldwide finance and commodities markets.

    PubMed

    Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel N; Stanley, H Eugene

    2013-01-01

    In a highly interdependent economic world, the nature of relationships between financial entities is becoming an increasingly important area of study. Recently, many studies have shown the usefulness of minimal spanning trees (MST) in extracting interactions between financial entities. Here, we propose a modified MST network whose metric distance is defined in terms of cross-correlation coefficient absolute values, enabling the connections between anticorrelated entities to manifest properly. We investigate 69 daily time series, comprising three types of financial assets: 28 stock market indicators, 21 currency futures, and 20 commodity futures. We show that though the resulting MST network evolves over time, the financial assets of similar type tend to have connections which are stable over time. In addition, we find a characteristic time lag between the volatility time series of the stock market indicators and those of the EU CO(2) emission allowance (EUA) and crude oil futures (WTI). This time lag is given by the peak of the cross-correlation function of the volatility time series EUA (or WTI) with that of the stock market indicators, and is markedly different (>20 days) from 0, showing that the volatility of stock market indicators today can predict the volatility of EU emissions allowances and of crude oil in the near future.

  2. Carbon-dioxide emissions trading and hierarchical structure in worldwide finance and commodities markets

    NASA Astrophysics Data System (ADS)

    Zheng, Zeyu; Yamasaki, Kazuko; Tenenbaum, Joel N.; Stanley, H. Eugene

    2013-01-01

    In a highly interdependent economic world, the nature of relationships between financial entities is becoming an increasingly important area of study. Recently, many studies have shown the usefulness of minimal spanning trees (MST) in extracting interactions between financial entities. Here, we propose a modified MST network whose metric distance is defined in terms of cross-correlation coefficient absolute values, enabling the connections between anticorrelated entities to manifest properly. We investigate 69 daily time series, comprising three types of financial assets: 28 stock market indicators, 21 currency futures, and 20 commodity futures. We show that though the resulting MST network evolves over time, the financial assets of similar type tend to have connections which are stable over time. In addition, we find a characteristic time lag between the volatility time series of the stock market indicators and those of the EU CO2 emission allowance (EUA) and crude oil futures (WTI). This time lag is given by the peak of the cross-correlation function of the volatility time series EUA (or WTI) with that of the stock market indicators, and is markedly different (>20 days) from 0, showing that the volatility of stock market indicators today can predict the volatility of EU emissions allowances and of crude oil in the near future.
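
    The characteristic time lag reported here is the lag at which the cross-correlation function of two volatility series peaks. A bare-bones sketch of that computation, on a synthetic series rather than the market data:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

def peak_lag(x, y, max_lag):
    """Lag at which corr(x[t], y[t + lag]) peaks; positive => x leads y."""
    n = len(x)
    window = range(max_lag, n - max_lag)
    xs = [x[t] for t in window]
    return max(range(-max_lag, max_lag + 1),
               key=lambda k: pearson(xs, [y[t + k] for t in window]))

x = [float(t % 20) for t in range(200)]   # a periodic "volatility" proxy
y = [0.0] * 5 + x[:-5]                    # y is x delayed by 5 samples
print(peak_lag(x, y, max_lag=10))
```

    A peak at a lag markedly greater than zero, as found for EUA and WTI versus the stock indicators, is what supports the predictability claim.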

  3. Near-Surface Flow Fields Deduced Using Correlation Tracking and Time-Distance Analysis

    NASA Technical Reports Server (NTRS)

    DeRosa, Marc; Duvall, T. L., Jr.; Toomre, Juri

    1999-01-01

    Near-photospheric flow fields on the Sun are deduced using two independent methods applied to the same time series of velocity images observed by SOI-MDI on SOHO. Differences in travel times between f modes entering and leaving each pixel, measured using time-distance helioseismology, are used to determine sites of supergranular outflows. Alternatively, correlation tracking analysis of mesogranular scales of motion applied to the same time series is used to deduce the near-surface flow field. These two approaches provide the means to assess the patterns and evolution of horizontal flows on supergranular scales even near disk center, which is not feasible with direct line-of-sight Doppler measurements. We find that the locations of the supergranular outflows seen in flow fields generated from correlation tracking coincide well with the locations of the outflows determined from the time-distance analysis, with a mean correlation coefficient after smoothing of r̄_s = 0.840. Near-surface velocity field measurements can be used to study the evolution of the supergranular network, as merging and splitting events are observed to occur in these images. The data consist of one 2048-minute time series of high-resolution (0.6" pixels) line-of-sight velocity images taken by MDI on 1997 January 16-18 at a cadence of one minute.

  4. Annual Estimates of Global Anthropogenic Methane Emissions: 1860-1994

    DOE Data Explorer

    Stern, David I. [Boston Univ., MA (United States); Kaufmann, Robert K. [Boston Univ., MA (United States)

    1998-01-01

    The authors provide the first estimates, by year, of global man-made emissions of methane, from 1860 through 1994. The methods, including the rationale for the various coefficients and assumptions used in deriving the estimates, are described fully in Stern and Kaufmann (1995, 1996), which provides the estimates for the period 1860-1993; the data presented here are revised and updated through 1994. Some formulae and coefficients were also revised in that process. Estimates are provided for total anthropogenic emissions, as well as emissions for the following component categories: Flaring and Venting of Natural Gas; Oil and Gas Supply Systems, Excluding Flaring; Coal Mining; Biomass Burning; Livestock Farming; Rice Farming and Related Activities; Landfills. Changes in emissions over time were estimated by treating emissions as a function of variables (such as population or coal production) for which historical time series are available.

  5. Nonlinear ARMA models for the D(st) index and their physical interpretation

    NASA Technical Reports Server (NTRS)

    Vassiliadis, D.; Klimas, A. J.; Baker, D. N.

    1996-01-01

    Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model to the nonlinear damped oscillator physical model. The oscillator parameters, the growth and decay, the oscillation frequencies and the coupling strength to the input are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
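
    The conversion from filter coefficients to oscillator parameters can be illustrated in the simplest oscillatory case, a linear AR(2) filter: complex-conjugate poles r·e^{±iθ} of the characteristic equation give a decay rate −ln(r)/Δt and an angular frequency θ/Δt. A sketch of that mapping (not the paper's full nonlinear ARMA machinery):

```python
import cmath
import math

def ar2_to_oscillator(a1, a2, dt=1.0):
    """Map AR(2) coefficients x[t] = a1*x[t-1] + a2*x[t-2] + input to
    damped-oscillator parameters (decay rate gamma, angular frequency
    omega), assuming complex-conjugate poles (the oscillatory regime)."""
    disc = a1 * a1 + 4.0 * a2
    if disc >= 0:
        raise ValueError("real poles: no oscillation")
    pole = (a1 + cmath.sqrt(disc)) / 2.0   # pole = r * exp(i*theta)
    r, theta = abs(pole), cmath.phase(pole)
    return -math.log(r) / dt, abs(theta) / dt

# Exact AR(2) coefficients of a sampled damped oscillator with decay
# g = 0.1 and frequency w = 0.8 are a1 = 2*exp(-g*dt)*cos(w*dt) and
# a2 = -exp(-2*g*dt); the mapping should recover g and w.
g, w, dt = 0.1, 0.8, 1.0
a1 = 2.0 * math.exp(-g * dt) * math.cos(w * dt)
a2 = -math.exp(-2.0 * g * dt)
gamma, omega = ar2_to_oscillator(a1, a2, dt)
print(gamma, omega)
```

    Tracking these derived parameters over short intervals, as the paper does for its nonlinear model, turns filter fits into physically interpretable growth/decay and frequency estimates.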

  6. Effect of Time Varying Gravity on DORIS processing for ITRF2013

    NASA Astrophysics Data System (ADS)

    Zelensky, N. P.; Lemoine, F. G.; Chinn, D. S.; Beall, J. W.; Melachroinos, S. A.; Beckley, B. D.; Pavlis, D.; Wimert, J.

    2013-12-01

    Computations are under way to develop a new time series of DORIS SINEX solutions to contribute to the development of the new realization of the terrestrial reference frame (c.f. ITRF2013). One of the improvements that are envisaged is the application of improved models of time-variable gravity in the background orbit modeling. At GSFC we have developed a time series of spherical harmonics to degree and order 5 (using the GOC02S model as a base), based on the processing of SLR and DORIS data to 14 satellites from 1993 to 2013. This is compared with the standard approach used in ITRF2008, based on the static model EIGEN-GL04S1 which included secular variations in only a few select coefficients. Previous work on altimeter satellite POD (c.f. TOPEX/Poseidon, Jason-1, Jason-2) has shown that the standard model is not adequate and orbit improvements are observed with application of more detailed models of time-variable gravity. In this study, we quantify the impact of TVG modeling on DORIS satellite POD, and ascertain the impact on DORIS station positions estimated weekly from 1993 to 2013. The numerous recent improvements to SLR and DORIS processing at GSFC include a more complete compliance to IERS2010 standards, improvements to SLR/DORIS measurement modeling, and improved non-conservative force modeling to DORIS satellites. These improvements will affect gravity coefficient estimates, POD, and the station solutions. Tests evaluate the impact of time varying gravity on tracking data residuals, station consistency, and the geocenter and scale reference frame parameters.

  7. Personalized State-space Modeling of Glucose Dynamics for Type 1 Diabetes Using Continuously Monitored Glucose, Insulin Dose, and Meal Intake: An Extended Kalman Filter Approach.

    PubMed

    Wang, Qian; Molenaar, Peter; Harsh, Saurabh; Freeman, Kenneth; Xie, Jinyu; Gold, Carol; Rovine, Mike; Ulbrecht, Jan

    2014-03-01

    An essential component of any artificial pancreas is the prediction of blood glucose levels as a function of exogenous and endogenous perturbations such as insulin dose, meal intake, physical activity, and emotional tone under natural living conditions. In this article, we present a new data-driven state-space dynamic model with time-varying coefficients that are used to explicitly quantify the time-varying patient-specific effects of insulin dose and meal intake on blood glucose fluctuations. Using the 3-variate time series of glucose level, insulin dose, and meal intake of an individual type 1 diabetic subject, we apply an extended Kalman filter (EKF) to estimate the time-varying coefficients of the patient-specific state-space model. We evaluate our empirical modeling using (1) the FDA-approved UVa/Padova simulator with 30 virtual patients and (2) clinical data of 5 type 1 diabetic patients under natural living conditions. Compared to a forgetting-factor-based recursive ARX model of the same order, the EKF model predictions have higher fit and significantly better temporal gain and J index, and thus are superior in early detection of upward and downward trends in glucose. The EKF-based state-space model developed in this article is particularly suitable for model-based state-feedback control designs, since the Kalman filter estimates the state variable of the glucose dynamics based on the measured glucose time series. In addition, since the model parameters are estimated in real time, this model is also suitable for adaptive control. © 2014 Diabetes Technology Society.
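
    The idea of estimating time-varying coefficients with a Kalman filter can be shown in a linear special case: treat the coefficient vector as a random-walk state and each observation as a noisy linear measurement of it. The sketch below uses synthetic data and hypothetical coefficient names; the paper's actual model is a nonlinear state-space model handled with an EKF:

```python
import random

def kalman_tv_coeffs(ys, regressors, q=1e-3, r=1.0):
    """Track time-varying regression coefficients with a Kalman filter:
    the state is the coefficient vector b[t], modelled as a random walk,
    and each observation is y[t] = h[t] . b[t] + noise (variance r)."""
    nb = len(regressors[0])
    b = [0.0] * nb
    P = [[float(i == j) for j in range(nb)] for i in range(nb)]
    history = []
    for y, h in zip(ys, regressors):
        # Predict: the random walk keeps b, covariance grows by q*I.
        P = [[P[i][j] + q * (i == j) for j in range(nb)] for i in range(nb)]
        # Update with the scalar measurement.
        Ph = [sum(P[i][j] * h[j] for j in range(nb)) for i in range(nb)]
        s = sum(h[i] * Ph[i] for i in range(nb)) + r   # innovation variance
        K = [Ph[i] / s for i in range(nb)]             # Kalman gain
        innov = y - sum(h[i] * b[i] for i in range(nb))
        b = [b[i] + K[i] * innov for i in range(nb)]
        P = [[P[i][j] - K[i] * Ph[j] for j in range(nb)] for i in range(nb)]
        history.append(list(b))
    return history

# Synthetic example with hypothetical names: a fixed "insulin" effect of
# -2.0 and "meal" effect of +0.5, recovered from 400 noisy observations.
random.seed(0)
data = [(random.random(), random.random()) for _ in range(400)]
ys = [-2.0 * u + 0.5 * v + random.gauss(0, 0.05) for u, v in data]
b_ins, b_meal = kalman_tv_coeffs(ys, data, q=1e-5, r=0.05 ** 2)[-1]
print(b_ins, b_meal)
```

    With a larger process noise q, the filter would track coefficients that drift over time, which is the behaviour the paper exploits for patient-specific modelling.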

  8. Financial time series prediction using spiking neural networks.

    PubMed

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.

  9. Disentangling the stochastic behavior of complex time series

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Tabar, M. Reza Rahimi; Peinke, Joachim; Lehnertz, Klaus

    2016-10-01

    Complex systems involving a large number of degrees of freedom generally exhibit non-stationary dynamics, which can result in either continuous or discontinuous sample paths of the corresponding time series. The latter sample paths may be caused by discontinuous events - or jumps - with some distributed amplitudes, and disentangling effects caused by such jumps from effects caused by normal diffusion processes is a main problem for a detailed understanding of the stochastic dynamics of complex systems. Here we introduce a non-parametric method to address this general problem. By means of stochastic dynamical jump-diffusion modelling, we separate deterministic drift terms from different stochastic behaviors, namely diffusive and jumpy ones, and show that all of the unknown functions and coefficients of this modelling can be derived directly from measured time series. We demonstrate the applicability of our method to empirical observations by a data-driven inference of the deterministic drift term and of the diffusive and jumpy behavior in brain dynamics from ten epilepsy patients. In particular, these different stochastic behaviors provide extra information that can be regarded as valuable for diagnostic purposes.
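
    The drift and diffusion functions mentioned above can be estimated non-parametrically from conditional moments of the increments (the Kramers-Moyal approach, shown here without the jump part): D1(x) = ⟨Δx | x⟩/Δt and D2(x) = ⟨Δx² | x⟩/(2Δt). A sketch on a simulated Ornstein-Uhlenbeck process, for which the true values are D1(x) = −x and D2 = 1:

```python
import math
import random

def km_coefficients(x, dt, edges):
    """Conditional-moment (Kramers-Moyal) estimates of the drift D1(x)
    and diffusion D2(x) of a sampled time series, binned by state value."""
    stats = [[0.0, 0.0, 0] for _ in range(len(edges) - 1)]
    for a, b in zip(x[:-1], x[1:]):
        for i in range(len(edges) - 1):
            if edges[i] <= a < edges[i + 1]:
                d = b - a
                stats[i][0] += d        # sum of increments
                stats[i][1] += d * d    # sum of squared increments
                stats[i][2] += 1
                break
    out = {}
    for i, (sd, sdd, n) in enumerate(stats):
        if n > 50:  # require enough samples per bin
            centre = 0.5 * (edges[i] + edges[i + 1])
            out[centre] = (sd / n / dt, sdd / n / (2 * dt))
    return out

# Simulated Ornstein-Uhlenbeck process dx = -x dt + sqrt(2) dW,
# so the true coefficients are D1(x) = -x and D2(x) = 1.
random.seed(1)
dt, x = 0.01, [0.0]
for _ in range(200000):
    x.append(x[-1] - x[-1] * dt + math.sqrt(2 * dt) * random.gauss(0, 1))

coeffs = km_coefficients(x, dt, [0.5 * k for k in range(-4, 5)])
d1, d2 = coeffs[1.25]   # bin centred at x = 1.25: expect D1 near -1.2, D2 near 1
print(d1, d2)
```

    The paper's contribution is to extend exactly this kind of moment-based inference so that jump amplitudes and rates are separated from the diffusive part.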

  10. Estimated SLR station position and network frame sensitivity to time-varying gravity

    NASA Astrophysics Data System (ADS)

    Zelensky, Nikita P.; Lemoine, Frank G.; Chinn, Douglas S.; Melachroinos, Stavros; Beckley, Brian D.; Beall, Jennifer Wiser; Bordyugov, Oleg

    2014-06-01

    This paper evaluates the sensitivity of ITRF2008-based satellite laser ranging (SLR) station positions estimated weekly using LAGEOS-1/2 data from 1993 to 2012 to non-tidal time-varying gravity (TVG). Two primary methods for modeling TVG from degree-2 are employed. The operational approach applies an annual GRACE-derived field, and IERS recommended linear rates for five coefficients. The experimental approach uses low-order/degree coefficients estimated weekly from SLR and DORIS processing of up to 11 satellites (tvg4x4). This study shows that the LAGEOS-1/2 orbits and the weekly station solutions are sensitive to more detailed modeling of TVG than prescribed in the current IERS standards. Over 1993-2012 tvg4x4 improves SLR residuals by 18 % and shows 10 % RMS improvement in station stability. Tests suggest that the improved stability of the tvg4x4 POD solution frame may help clarify geophysical signals present in the estimated station position time series. The signals include linear and seasonal station motion, and motion of the TRF origin, particularly in Z. The effect on both POD and the station solutions becomes increasingly evident starting in 2006. Over 2008-2012, the tvg4x4 series improves SLR residuals by 29 %. Use of the GRGS RL02 series shows similar improvement in POD. Using tvg4x4, secular changes in the TRF origin Z component double over the last decade and although not conclusive, it is consistent with increased geocenter rate expected due to continental ice melt. The test results indicate that accurate modeling of TVG is necessary for improvement of station position estimation using SLR data.

  11. A time-series analysis of the relation between unemployment rate and hospital admission for acute myocardial infarction and stroke in Brazil over more than a decade.

    PubMed

    Katz, Marcelo; Bosworth, Hayden B; Lopes, Renato D; Dupre, Matthew E; Morita, Fernando; Pereira, Carolina; Franco, Fabio G M; Prado, Rogerio R; Pesaro, Antonio E; Wajngarten, Mauricio

    2016-12-01

    The effect of socioeconomic stressors on the incidence of cardiovascular disease (CVD) is currently open to debate. Using time-series analysis, our study aimed to evaluate the relationship between the unemployment rate and hospital admissions for acute myocardial infarction (AMI) and stroke in Brazil over a recent 11-year span. Data on monthly hospital admissions for AMI and stroke from March 2002 to December 2013 were extracted from the Brazilian Public Health System Database. The monthly unemployment rate was obtained from the Brazilian Institute for Applied Economic Research for the same period. The autoregressive integrated moving average (ARIMA) model was used to test the association of the temporal series. Statistical significance was set at p<0.05. From March 2002 to December 2013, 778,263 admissions for AMI and 1,581,675 for stroke were recorded. During this period, the unemployment rate decreased from 12.9% in 2002 to 4.3% in 2013, while admissions due to AMI and stroke increased. However, the adjusted ARIMA model showed a positive association between the unemployment rate and admissions for AMI but not for stroke (estimated coefficient=2.81±0.93; p=0.003 and estimated coefficient=2.40±4.34; p=0.58, respectively). From 2002 to 2013, hospital admissions for AMI and stroke increased, whereas the unemployment rate decreased. However, the adjusted ARIMA model showed a positive association between the unemployment rate and admissions due to AMI but not for stroke. Further studies are warranted to validate our findings and to better explore the mechanisms by which socioeconomic stressors, such as unemployment, might impact the incidence of CVD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Impeller leakage flow modeling for mechanical vibration control

    NASA Technical Reports Server (NTRS)

    Palazzolo, Alan B.

    1996-01-01

    HPOTP and HPFTP vibration test results have exhibited transient and steady characteristics which may be due to impeller leakage path (ILP) related forces. For example, an axial shift in the rotor could suddenly change the ILP clearances and lengths, yielding dynamic coefficient and subsequent vibration changes. ILP models are more complicated than conventional single-component annular seal models due to their radial flow component (Coriolis and centrifugal acceleration), complex geometry (axial/radial clearance coupling), internal boundary (transition) flow conditions between mechanical components along the ILP, and longer length, requiring moment as well as force coefficients. Flow coupling between mechanical components results from mass and energy conservation applied at their interfaces. Typical components along the ILP include an inlet seal, curved shroud, and an exit seal, which may be a stepped labyrinth type. Von Pragenau (MSFC) has modeled labyrinth seals as a series of plain annular seals for leakage and dynamic coefficient prediction. These multi-tooth components increase the total number of 'flow coupled' components in the ILP. Childs developed an analysis for an ILP consisting of a single, constant-clearance shroud with an exit seal represented by a lumped flow-loss coefficient. This same geometry was later extended to include compressible flow. The objectives of the current work are to: supply ILP leakage-force impedance-dynamic coefficient modeling software to MSFC engineers, based on incompressible/compressible bulk flow theory; design the software to model a generic-geometry ILP described by a series of components lying along an arbitrarily directed path; validate the software by comparison to available test data, CFD and bulk models; and develop a hybrid CFD-bulk flow model of an ILP to improve modeling accuracy within practical run time constraints.

  13. Virial series expansion and Monte Carlo studies of equation of state for hard spheres in narrow cylindrical pores

    NASA Astrophysics Data System (ADS)

    Mon, K. K.

    2018-05-01

    In this paper, the virial series expansion and constant pressure Monte Carlo method are used to study the longitudinal pressure equation of state for hard spheres in narrow cylindrical pores. We invoke dimensional reduction and map the model into an effective one-dimensional fluid model with interacting internal degrees of freedom. The one-dimensional model is extensive. The Euler relation holds, and longitudinal pressure can be probed with the standard virial series expansion method. Virial coefficients B2 and B3 were obtained analytically, and numerical quadrature was used for B4. A range of narrow pore widths (2Rp), Rp < (√3 + 2)/4 = 0.9330... (in units of the hard sphere diameter), was used, corresponding to fluids in the important single-file formations. We have also computed the virial pressure series coefficients B2', B3', and B4' to compare a truncated virial pressure series equation of state with accurate constant pressure Monte Carlo data. We find very good agreement for a wide range of pressures for narrow pores. These results contribute toward increasing the rather limited understanding of virial coefficients and the equation of state of hard sphere fluids in narrow cylindrical pores.

  14. Automatic Tracking Of Remote Sensing Precipitation Data Using Genetic Algorithm Image Registration Based Automatic Morphing: September 1999 Storm Floyd Case Study

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.

    Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, the digital morphing technique was applied manually, which is time consuming. In order to avoid human intervention in the process, an automatic procedure for image morphing is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, the automatic morphing technique was integrated with a Genetic Algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd, which occurred over the US eastern region on September 16, 1999, at 00:00, 01:00, 02:00, 03:00, and 04:00am were used. The GRAM technique was applied to data collected at 00:00 and 04:00am. These images were also manually morphed. Images at 01:00, 02:00 and 03:00am were interpolated from the GRAM and manual morphing and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing. The correlation coefficients between the images generated using manual morphing are 0.905, 0.900, and 0.905 for the images at 01:00, 02:00, and 03:00am, while the corresponding correlation coefficients are 0.946, 0.911, and 0.913, respectively, based on the GRAM technique. Index terms - Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD

  15. Fisher information framework for time series modeling

    NASA Astrophysics Data System (ADS)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least squares constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction is exemplified with time series obtained from (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the PhysioNet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.

  16. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    PubMed

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture-for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments-as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
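    The decomposition step can be sketched in a few lines of numpy. This is a hedged, minimal stand-in for STL: a classic additive seasonal-trend split using a moving-average trend and phase averaging, whereas STL proper uses iterated LOESS smoothers; the synthetic five-year daily series, its amplitudes, and the seed are illustrative assumptions, not the authors' Poaceae data.

```python
import numpy as np

def decompose(y, period):
    """Classic additive seasonal-trend decomposition: a simplified
    stand-in for STL, which replaces these moving averages with
    iterated LOESS smoothers."""
    n = y.size
    # Trend: centered moving average over one full period
    trend = np.convolve(y, np.ones(period) / period, mode="same")
    detrended = y - trend
    # Seasonal: average the detrended values at each phase, skipping the
    # first and last period where the moving-average trend has edge artifacts
    core = detrended[period:n - period]
    seasonal = np.array([core[p::period].mean() for p in range(period)])
    seasonal -= seasonal.mean()
    seasonal_full = np.tile(seasonal, n // period + 1)[:n]
    residual = y - trend - seasonal_full
    return trend, seasonal_full, residual

# Synthetic "daily pollen" series: an annual cycle plus noise
rng = np.random.default_rng(0)
t = np.arange(5 * 365)
y = 50 + 40 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 5, t.size)
trend, seasonal, residual = decompose(y, 365)
```

The residual component is the piece the authors go on to model with PLSR against daily temperature and rainfall.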

  17. Process air quality data

    NASA Technical Reports Server (NTRS)

    Butler, C. M.; Hogge, J. E.

    1978-01-01

    Air quality sampling was conducted. Data for air quality parameters, recorded on written forms, punched cards or magnetic tape, are available for 1972 through 1975. Computer software was developed to (1) calculate several daily statistical measures of location, (2) plot time histories of the data or the calculated daily statistics, (3) calculate simple correlation coefficients, and (4) plot scatter diagrams. Computer software was developed for processing air quality data to include time series analysis and goodness-of-fit tests. Computer software was developed to (1) calculate a larger number of daily statistical measures of location, and a number of daily, monthly and yearly measures of location, dispersion, skewness and kurtosis, (2) decompose the extended time series model, and (3) perform some goodness-of-fit tests. The computer program is described, documented and illustrated by examples. Recommendations are made for continued development of air quality data processing research.

  18. Influence of the time scale on the construction of financial networks.

    PubMed

    Emmert-Streib, Frank; Dehmer, Matthias

    2010-09-30

    In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients. That is, an edge is included in the network only if the corresponding correlation coefficient is statistically significantly different from zero. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks is studied in this paper. Numerical analysis of four different measures in dependence on the time scale for the construction of networks allows us to gain insights about the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis.
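    The construction described above, keeping an edge only where the correlation is significantly non-zero, can be sketched as follows. This is a hedged illustration: a Fisher z-threshold stands in for the significance test (the abstract does not specify the exact test), and a three-stock toy factor model replaces the DJIA data.

```python
import numpy as np

def correlation_network(prices, z_crit=2.58):
    """Unweighted, undirected network from daily closing prices: nodes
    are stocks; an edge is added only if the return correlation is
    significantly non-zero (Fisher z statistic; 2.58 is roughly the
    two-sided 1% level)."""
    returns = np.diff(np.log(prices), axis=0)        # log-returns
    n_obs = returns.shape[0]
    r = np.corrcoef(returns, rowvar=False)
    z = np.arctanh(np.clip(r, -0.999999, 0.999999)) * np.sqrt(n_obs - 3)
    adj = np.abs(z) > z_crit                         # boolean adjacency matrix
    np.fill_diagonal(adj, False)                     # no self-loops
    return adj

# Toy example: stocks 0 and 1 share a common factor, stock 2 is independent
rng = np.random.default_rng(1)
common = rng.normal(0, 0.01, 500)
r0 = common + rng.normal(0, 0.005, 500)
r1 = common + rng.normal(0, 0.005, 500)
r2 = rng.normal(0, 0.01, 500)
prices = np.exp(np.cumsum(np.column_stack([r0, r1, r2]), axis=0)) * 100
adj = correlation_network(prices)
```

Stocks 0 and 1 share a factor and end up connected; whether spurious edges appear elsewhere is governed by the chosen significance level, which is exactly the knob that interacts with the interval length studied in the paper.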

  19. Bounds of memory strength for power-law series.

    PubMed

    Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao

    2017-05-01

    Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1<α≤3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α>3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.
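    The permutation idea can be illustrated numerically. A hedged sketch, not the authors' derivation: sorting i.i.d. power-law samples gives approximately the largest attainable lag-1 autocorrelation, and comparing a heavy tail (α = 1.5) with a lighter one (α = 3.5) shows the upper bound rising toward +1 as α grows; the sample sizes and seed are arbitrary choices.

```python
import numpy as np

def lag1_autocorr(x):
    """First-order (lag-1) autocorrelation, the memory-strength measure."""
    xc = x - x.mean()
    return (xc[:-1] * xc[1:]).sum() / (xc ** 2).sum()

def sorted_upper_bound(alpha, n=2000, trials=20, seed=0):
    """Average lag-1 autocorrelation of sorted i.i.d. power-law samples;
    sorting is the arrangement that (approximately) maximizes memory.
    numpy's pareto(a) has tail exponent a + 1, so we pass a = alpha - 1."""
    rng = np.random.default_rng(seed)
    vals = [lag1_autocorr(np.sort(1.0 + rng.pareto(alpha - 1.0, n)))
            for _ in range(trials)]
    return float(np.mean(vals))

ub_heavy = sorted_upper_bound(1.5)   # heavy tail: bound well below +1
ub_light = sorted_upper_bound(3.5)   # lighter tail: bound approaches +1
```

The heavy-tailed case stays far from +1 because a single extreme sample dominates the variance, which is the mechanism behind the non-trivial bounds reported in the paper.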

  20. Bounds of memory strength for power-law series

    NASA Astrophysics Data System (ADS)

    Guo, Fangjian; Yang, Dan; Yang, Zimo; Zhao, Zhi-Dan; Zhou, Tao

    2017-05-01

    Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents α. By permuting the independently drawn samples from a power-law distribution, we present nontrivial bounds on the memory strength (first-order autocorrelation) as a function of α, which are markedly different from the ordinary ±1 bounds for Gaussian or uniform distributions. When 1 < α ≤ 3, as α grows bigger, the upper bound increases from 0 to +1 while the lower bound remains 0; when α > 3, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and the browsing behavior of Taobao, we find that empirical power-law-distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks and challenge the validity of measures such as autocorrelation and assortativity coefficient in heterogeneous systems.

  1. Identifying presence of correlated errors in GRACE monthly harmonic coefficients using machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sra, Gurveer; Karantaidis, George; Sideris, Michael G.

    2017-04-01

    A new method for identifying correlated errors in Gravity Recovery and Climate Experiment (GRACE) monthly harmonic coefficients has been developed and tested. Correlated errors are present in the differences between monthly GRACE solutions, and can be suppressed using a de-correlation filter. In principle, the de-correlation filter should be applied only to coefficient series with correlated errors, to avoid losing useful geophysical information. In previous studies, two main methods of implementing the de-correlation filter have been utilized. In the first, the de-correlation filter is applied from a specific minimum order up to the maximum order of the monthly solution examined. In the second, the de-correlation filter is applied only to specific coefficient series, selected by statistical testing. The method proposed in the present study exploits the capabilities of supervised machine learning algorithms such as neural networks and support vector machines (SVMs). The pattern of correlated errors can be described by several numerical and geometric features of the harmonic coefficient series. The features of extreme cases of both correlated and uncorrelated coefficients are extracted and used for the training of the machine learning algorithms. The trained algorithms are then used to identify correlated errors and provide the probability of a coefficient series being correlated. Regarding SVM algorithms, an extensive study is performed with various kernel functions in order to find the optimal training model for prediction. The selection of the optimal training model is based on the classification accuracy of the trained SVM algorithm on the same samples used for training.
    Results show excellent performance of all algorithms, with a classification accuracy of 97%-100% on a pre-selected set of training samples, both in the validation stage of the training procedure and in the subsequent use of the trained algorithms to classify independent coefficients. This accuracy is also confirmed by external validation of the trained algorithms using the hydrology model GLDAS NOAH. The proposed method meets the requirement of identifying and de-correlating only coefficients with correlated errors. Also, there is no need to apply statistical testing or other techniques that require prior de-correlation of the harmonic coefficients.

  2. Dynamical stochastic processes of returns in financial markets

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Yoon, Seong-Min; Jung, Jae-Won; Kim, Kyungsik

    2007-03-01

    We study the evolution of probability distribution functions of returns, from the tick data of the Korean treasury bond (KTB) futures and the S&P 500 stock index, which can be described by means of the Fokker-Planck equation. We show that the Kramers-Moyal coefficients, and with them the Fokker-Planck and Langevin equations, can be estimated directly from the empirical data. By analyzing the statistics of the returns, we quantify the deterministic and random influences on the financial time series of both markets, for which we can give a simple physical interpretation. We particularly focus on the diffusion coefficient, which may be important for the creation of a portfolio.
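    The estimation step can be sketched on a process whose drift and diffusion are known in advance. A minimal illustration, not the authors' procedure or market data: the first two Kramers-Moyal coefficients are read off as binned conditional moments of the increments of a simulated Ornstein-Uhlenbeck process with arbitrary parameters.

```python
import numpy as np

# Simulate an Ornstein-Uhlenbeck process as a stand-in for return data;
# drift -theta*x and diffusion sigma are known, so the estimates can be checked
rng = np.random.default_rng(2)
theta, sigma, dt, n = 2.0, 0.5, 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]

# First two Kramers-Moyal coefficients from conditional moments of the
# increments: D1(x) = <dx|x>/dt (drift), D2(x) = <dx^2|x>/(2 dt) (diffusion)
dx = np.diff(x)
bins = np.linspace(-0.6, 0.6, 13)
centers = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(x[:-1], bins) - 1
D1 = np.array([dx[idx == b].mean() / dt for b in range(12)])
D2 = np.array([(dx[idx == b] ** 2).mean() / (2 * dt) for b in range(12)])

drift_slope = np.polyfit(centers, D1, 1)[0]   # should be close to -theta
```

For this process the recovered drift should follow D1(x) = -θx and the diffusion should sit near D2 = σ²/2; with real tick data the same binned moments yield the empirical Fokker-Planck coefficients.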

  3. Structural changes in the minimal spanning tree and the hierarchical network in the Korean stock market around the global financial crisis

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Maeng, Seong Eun; Ha, Gyeong Gyun; Lee, Jae Woo

    2015-04-01

    This paper considers stock prices in the Korean stock market during the 2008 global financial crisis by focusing on three time periods: before, during, and after the crisis. Complex networks are extracted from cross-correlation coefficients between the normalized logarithmic returns of the stock price time series of firms. The minimal spanning trees (MSTs) and the hierarchical network (HN) are generated from the cross-correlation coefficients. Before and after the crisis, securities firms are located at the center of the MST. During the crisis, however, the center of the MST changes to a firm in heavy industry and construction. During the crisis, the MST shrinks in comparison to that before and that after the crisis. This topological change in the MST during the crisis reflects a distinct effect of the global financial crisis. The cophenetic correlation coefficient increases during the crisis, indicating an increase in the hierarchical structure in this period. When crisis hits the market, firms behave synchronously, and their correlations are higher than those during a normal period.
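    The MST construction from cross-correlations can be sketched as follows, assuming scipy is available; the eight-stock factor model, seed, and sample length are illustrative assumptions, not the Korean market data. Distances use the standard metric d_ij = sqrt(2(1 - ρ_ij)), so highly correlated firms sit close together in the tree.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_from_returns(returns):
    """Minimal spanning tree from cross-correlations of normalized
    log-returns, using the metric d_ij = sqrt(2 * (1 - rho_ij))."""
    rho = np.corrcoef(returns, rowvar=False)
    d = np.sqrt(np.maximum(2.0 * (1.0 - rho), 0.0))
    return minimum_spanning_tree(d).toarray()

# Synthetic returns: a common market mode plus one "sector" shared by stocks 0-3
rng = np.random.default_rng(3)
market = rng.normal(0, 1, (600, 1))
sector = rng.normal(0, 1, (600, 1))
noise = rng.normal(0, 1, (600, 8))
loads = np.zeros(8)
loads[:4] = 1.0
returns = 0.5 * market + loads * sector + noise
mst = mst_from_returns(returns)
n_edges = np.count_nonzero(mst)
```

A spanning tree over N stocks always has N - 1 edges; here the four sector stocks are mutually closer than anything else, so they form their own branch of the tree, the same clustering effect that moves the MST center during the crisis.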

  4. Air kerma to Hp(3) conversion coefficients for a new cylinder phantom for photon reference radiation qualities.

    PubMed

    Behrens, R

    2012-09-01

    The International Organization for Standardization (ISO) has issued a standard series on photon reference radiation qualities (ISO 4037). This series contains no conversion coefficients for the quantity personal dose equivalent at 3 mm depth, Hp(3). In the past, a slab phantom was recommended as the calibration phantom for this quantity; however, a cylinder phantom approximates the shape of a human head much better than a slab phantom. Therefore, in this work, conversion coefficients from air kerma to Hp(3) for the cylinder phantom are supplied for the X- and gamma-radiation qualities defined in ISO 4037.

  5. Gravity Tides Extracted from Relative Gravimeter Data by Combining Empirical Mode Decomposition and Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Hongjuan; Guo, Jinyun; Kong, Qiaoli; Chen, Xiaodong

    2018-04-01

    The static observation data from a relative gravimeter contain noise and signals such as gravity tides. This paper focuses on the extraction of the gravity tides from static relative gravimeter data, applying for the first time the combined method of empirical mode decomposition (EMD) and independent component analysis (ICA), called the EMD-ICA method. The experimental results from CG-5 gravimeter (SCINTREX Limited, Ontario, Canada) data show that the gravity tides time series derived by EMD-ICA are consistent with the theoretical reference (Longman formula), and the RMS of their differences reaches only 4.4 μGal. The time series of the gravity tides derived by EMD-ICA have a strong correlation with the theoretical time series, with a correlation coefficient greater than 0.997. The accuracy of the gravity tides estimated by EMD-ICA is comparable to the theoretical model and is slightly higher than that of independent component analysis (ICA) alone. EMD-ICA overcomes the limitation of ICA having to process multiple observations and slightly improves the extraction accuracy and reliability of gravity tides from relative gravimeter data compared to estimates from ICA.

  6. Water permeation through anion exchange membranes

    NASA Astrophysics Data System (ADS)

    Luo, Xiaoyan; Wright, Andrew; Weissbach, Thomas; Holdcroft, Steven

    2018-01-01

    An understanding of water permeation through solid polymer electrolyte (SPE) membranes is crucial to offset the unbalanced water activity within SPE fuel cells. We examine water permeation through an emerging class of anion exchange membranes, hexamethyl-p-terphenyl poly(dimethylbenzimidazolium) (HMT-PMBI), and compare it against a series of membrane thicknesses of a commercial anion exchange membrane (AEM), Fumapem® FAA-3, and a series of proton exchange membranes, Nafion®. The HMT-PMBI membrane is found to possess higher water permeability than Fumapem® FAA-3 and permeability comparable to Nafion® (H+). By measuring water permeation through membranes of different thicknesses, we are able to decouple, for the first time, internal and interfacial water permeation resistances of anion exchange membranes. The permeation resistance at the liquid/membrane interface is found to be negligible compared to that at the vapor/membrane interface for both series of AEMs. Correspondingly, the resistance to liquid water permeation is found to be one order of magnitude smaller than that to vapor water permeation. HMT-PMBI possesses a larger effective internal water permeation coefficient than both Fumapem® FAA-3 and Nafion® membranes (60% and 18% larger, respectively). In contrast, the effective interfacial permeation coefficient of HMT-PMBI is found to be similar to that of Fumapem® (±5%) but smaller than that of Nafion® (H+) (by 14%).

  7. Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.

    2017-12-01

    When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with a spread of 100-600 km) was found. R2 from correlations between point observations and collocated grid box area means determined from satellite data was found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m2 for monthly (annual) time series. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
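    The e-folding (decorrelation) distance can be estimated from R² versus station separation with a simple log-linear fit. A sketch on synthetic values only: the exponential-decay form, the 400 km scale, and the noise level are assumptions echoing the abstract, not the actual station data.

```python
import numpy as np

# Synthetic R^2-vs-distance values following an exponential decay with
# e-folding (decorrelation) length L_true, plus multiplicative noise
rng = np.random.default_rng(4)
L_true = 400.0                               # km, the scale quoted above
dist = np.linspace(25, 600, 24)              # station separations in km
r2 = np.exp(-dist / L_true) * np.exp(rng.normal(0, 0.05, dist.size))

# The e-folding distance is recovered from a linear fit of log(R^2):
# log R^2 = -dist / L, so L = -1 / slope
slope = np.polyfit(dist, np.log(r2), 1)[0]
L_hat = -1.0 / slope
```

The same fit applied to real station pairs would also expose the quoted 100-600 km spread, since individual pairs scatter around the mean decay.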

  8. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula of approximation errors of hyperbolic cross truncations for bivariate stochastic Fourier cosine series. Moreover we propose a kind of Fourier cosine expansions with polynomials factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842

  9. Non-parametric directionality analysis - Extension for removal of a single common predictor and application to time series.

    PubMed

    Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob

    2016-08-01

    The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low-order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions, and to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Extended causal modeling to assess Partial Directed Coherence in multiple time series with significant instantaneous interactions.

    PubMed

    Faes, Luca; Nollo, Giandomenico

    2010-11-01

    The Partial Directed Coherence (PDC) and its generalized formulation (gPDC) are popular tools for investigating, in the frequency domain, the concept of Granger causality among multivariate (MV) time series. PDC and gPDC are formalized in terms of the coefficients of an MV autoregressive (MVAR) model which describes only the lagged effects among the time series and forsakes instantaneous effects. However, instantaneous effects are known to affect linear parametric modeling, and are likely to occur in experimental time series. In this study, we investigate the impact on the assessment of frequency domain causality of excluding instantaneous effects from the model underlying PDC evaluation. Moreover, we propose the utilization of an extended MVAR model including both instantaneous and lagged effects. This model is used to assess PDC either in accordance with the definition of Granger causality when considering only lagged effects (iPDC), or with an extended form of causality, when we consider both instantaneous and lagged effects (ePDC). The approach is first evaluated on three theoretical examples of MVAR processes, which show that the presence of instantaneous correlations may produce misleading profiles of PDC and gPDC, while ePDC and iPDC derived from the extended model provide here a correct interpretation of extended and lagged causality. It is then applied to representative examples of cardiorespiratory and EEG MV time series. They suggest that ePDC and iPDC are better interpretable than PDC and gPDC in terms of the known cardiovascular and neural physiologies.
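    PDC's definition from MVAR coefficients can be written compactly. A minimal sketch, not the authors' extended-model code: for a bivariate VAR(1) in which series 0 drives series 1 with a lag, the PDC is non-zero only in the driving direction; the coefficient values below are arbitrary.

```python
import numpy as np

def pdc(ar_coefs, n_freq=64):
    """Partial Directed Coherence from MVAR coefficient matrices
    ar_coefs[k] (lag k+1). out[i, j, f] quantifies the lagged influence
    of series j on series i at normalized frequency f in [0, 0.5]."""
    p, n, _ = ar_coefs.shape
    freqs = np.linspace(0.0, 0.5, n_freq)
    out = np.empty((n, n, n_freq))
    for fi, f in enumerate(freqs):
        # A(f) = I - sum_k A_k exp(-i 2 pi f (k+1))
        A_f = np.eye(n, dtype=complex)
        for k in range(p):
            A_f -= ar_coefs[k] * np.exp(-2j * np.pi * f * (k + 1))
        denom = np.sqrt((np.abs(A_f) ** 2).sum(axis=0))   # column norms
        out[:, :, fi] = np.abs(A_f) / denom
    return out

# Bivariate VAR(1) in which series 0 drives series 1 but not vice versa
A1 = np.array([[[0.5, 0.0],
                [0.4, 0.5]]])
P = pdc(A1)
```

Columns of the PDC matrix are normalized so that the squared entries sum to one at every frequency, reflecting how the lagged outflow from each source splits across targets; this normalization is why instantaneous correlations, which this model omits, can distort the profiles.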

  11. A new method for gravity field recovery based on frequency analysis of spherical harmonics

    NASA Astrophysics Data System (ADS)

    Cai, Lin; Zhou, Zebing

    2017-04-01

    All existing methods for gravity field recovery are based on the space-wise or time-wise approach, whose core processes are constructing the observation equations and solving them by the least squares method. It should be noted that the least squares solution is an approximation. On the other hand, in 1-D data (time series) analysis we can obtain the coefficients of harmonics directly and precisely by computing the Fast Fourier Transform (FFT). The question of whether we can likewise obtain spherical harmonic coefficients directly and precisely by computing the 2-D FFT of satellite gravity mission measurements is therefore of great significance, since this may lead to a new understanding of the signal components of the gravity field and allow us to determine it quickly by taking advantage of the FFT. Like the 1-D case, the 2-D FFT of satellite measurements can be computed rapidly. If we can determine the relationship between spherical harmonics and 2-D Fourier frequencies, and the transfer function from measurements to spherical harmonic coefficients, the question above can be solved. The objective of this research project is therefore to establish a new method based on frequency analysis of spherical harmonics, which computes the spherical harmonic coefficients of the gravity field directly and thus differs from recovery by least squares. In 1-D FFT there is a one-to-one correspondence between the frequency spectrum and the time series, and the 2-D FFT has a similar relationship. A complication is that any spherical harmonic of degree or order higher than one contains multiple frequencies, and these frequencies may be aliased. Fortunately, the constituent frequencies of each spherical harmonic and their ratios can be determined, so the coefficients of the spherical harmonics can be computed from the 2-D FFT. This relationship can be written as a system of equations, equivalent to a matrix, which is fixed and can be derived in advance. The relationship has now been determined.
    Some preliminary results, computing only lower-degree spherical harmonics, indicate that the difference between the input (EGM2008) and the output (recovered coefficients) is smaller than 5E-17, while the machine precision of the software used (Matlab) is 2.2204E-16.
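    The 1-D property the method builds on is easy to demonstrate: for a band-limited periodic series sampled on its own grid, the FFT returns every harmonic coefficient exactly (to machine precision), with no least-squares fitting step. The signal and coefficient values below are illustrative.

```python
import numpy as np

# A band-limited periodic signal with known harmonic coefficients
n = 64
t = np.arange(n)
true = {1: (3.0, 0.0), 5: (0.0, -2.0), 9: (1.5, 0.5)}   # k: (cos, sin) amplitude
y = np.zeros(n)
for k, (a, b) in true.items():
    y += a * np.cos(2 * np.pi * k * t / n) + b * np.sin(2 * np.pi * k * t / n)

# The FFT yields every harmonic coefficient directly:
# a_k = 2 Re(Y_k) / n,  b_k = -2 Im(Y_k) / n  (for 0 < k < n/2)
Y = np.fft.fft(y)
a5, b5 = 2 * Y[5].real / n, -2 * Y[5].imag / n
```

The 2-D analogue is the crux of the proposed method: each spherical harmonic maps to a fixed set of 2-D Fourier frequencies, so a precomputed matrix converts 2-D FFT output into spherical harmonic coefficients.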

  12. Methods and Applications of Time Series Analysis. Part I. Regression, Trends, Smoothing, and Differencing.

    DTIC Science & Technology

    1980-07-01

    [Garbled OCR fragment. Recoverable content: Figure 3.4, a rectangular periodic function g(t) centered at 0 with period n; Table 4.2, "The Average Bi-Monthly Expenses of a Family in Kabiria and Their Fourier Representation", giving the bi-monthly expenses of a typical family in Kabiria (a city in Northern Algeria) over the period Jan.-Feb. 1975 through Nov.-Dec. 1977 and their Fourier coefficients.]

  13. Retrieving the optical parameters of biological tissues using diffuse reflectance spectroscopy and Fourier series expansions. I. theory and application.

    PubMed

    Muñoz Morales, Aarón A; Vázquez Y Montiel, Sergio

    2012-10-01

    The determination of optical parameters of biological tissues is essential for the application of optical techniques in the diagnosis and treatment of diseases. Diffuse reflectance spectroscopy is a widely used technique for analyzing the optical characteristics of biological tissues. In this paper we show that, using diffuse reflectance spectra and a new mathematical model, we can retrieve the optical parameters through a nonlinear least-squares fit to the data. In our model we represent the spectra using a Fourier series expansion, finding mathematical relations between the polynomial coefficients and the optical parameters. In this first paper we use spectra generated by the Monte Carlo Multilayered technique to simulate the propagation of photons in turbid media. Using these spectra, we determine the behavior of the Fourier series coefficients when varying the optical parameters of the medium under study. With this procedure we find mathematical relations between the Fourier series coefficients and the optical parameters. Finally, the results show that our method can retrieve the optical parameters of biological tissues with accuracy that is adequate for medical applications.

  14. Wavelet-linear genetic programming: A new approach for modeling monthly streamflow

    NASA Astrophysics Data System (ADS)

    Ravansalar, Masoud; Rajaee, Taher; Kisi, Ozgur

    2017-06-01

    Streamflow is an important and effective factor in stream ecosystems, and its accurate prediction is an essential issue in water resources and environmental engineering systems. A hybrid wavelet-linear genetic programming (WLGP) model, which combines a discrete wavelet transform (DWT) with linear genetic programming (LGP), was used in this study to predict the monthly streamflow (Q) at two gauging stations, Pataveh and Shahmokhtar, on the Beshar River near Yasuj, Iran. In the proposed WLGP model, the wavelet analysis was linked to the LGP model: the original streamflow time series was decomposed into sub-time series comprising wavelet coefficients. The results were compared with single LGP, artificial neural network (ANN), hybrid wavelet-ANN (WANN) and multiple linear regression (MLR) models. The comparisons were made using several commonly utilized statistics. The Nash coefficients (E) were found to be 0.877 and 0.817 for the WLGP model at the Pataveh and Shahmokhtar stations, respectively. The comparison of the results showed that the WLGP model could significantly increase the streamflow prediction accuracy at both stations. Since the results demonstrate a closer approximation of the peak streamflow values by the WLGP model, this model could be utilized for predicting streamflow one month ahead.
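
    The Nash coefficient (E) used above to score the WLGP model is the Nash-Sutcliffe efficiency. A minimal sketch, with invented flow values rather than the Beshar River data:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency E: 1 is a perfect fit; <= 0 means the
    simulation is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

# Invented monthly flows, for illustration only.
obs = np.array([10., 14., 22., 30., 18., 12.])
sim = np.array([11., 13., 20., 28., 19., 13.])
E = nash_sutcliffe(obs, sim)
```
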

  15. Quantum correlations in non-inertial cavity systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harsij, Zeynab, E-mail: z.harsij@ph.iut.ac.ir; Mirza, Behrouz, E-mail: b.mirza@cc.iut.ac.ir

    2016-10-15

    Non-inertial cavities are utilized to store and send Quantum Information between mode pairs. A two-cavity system is considered where one is inertial and the other is accelerated for a finite time. Maclaurin series are applied to expand the related Bogoliubov coefficients and the problem is treated perturbatively. It is shown that Quantum Discord, which is a measure of the quantumness of correlations, is degraded periodically. This is almost in agreement with previous results for accelerated systems, where an increment of acceleration decreases the degree of quantum correlations. As another finding of the study, it is explicitly shown that the degradation of Quantum Discord disappears when the state is in a single cavity which is accelerated for a finite time. This feature makes accelerating cavities useful instruments in Quantum Information Theory. - Highlights: • Non-inertial cavities are utilized to store and send information in Quantum Information Theory. • Cavities include boundary conditions which will protect the entanglement once it has been created. • The problem is treated perturbatively and Maclaurin series are applied to expand the related Bogoliubov coefficients. • When two cavities are considered, degradation in the degree of quantum correlation happens, and it appears periodically. • The interesting issue is that when a single cavity is studied, the degradation in quantum correlations disappears.

  16. Runoff Response at Three Spatial Scales from a Burned Watershed

    NASA Astrophysics Data System (ADS)

    Moody, J. A.; Kinner, D. A.

    2007-12-01

    The hypothesis that the magnitude and timing of runoff from burned watersheds are functions of the properties of flow paths at multiple scales was investigated at three nested spatial scales within an area burned by the 2005 Harvard Fire near Burbank, California. Water depths were measured using pressure sensors: at the outlet of a subwatershed (10000 m2); in 3-inch Parshall flumes near the outlets of three mini-watersheds (820-1780 m2) within the subwatershed; and by 12 overland-flow detectors in 6 micro-watersheds (~11-15 m2) within one of the mini-watersheds. Rainfall intensities were measured using recording raingages deployed around the perimeter of the mini-watersheds and at the subwatershed outlet. Time-to-concentration, TC, and lag time, TL, were computed for the 15 largest of 30 rainstorms (maximum 30-minute intensities were 3.3-13.0 mm/h) between December 2005 and April 2006. TC, elapsed time from the beginning of the rain until the first increase in water depth, averaged 1.0 hours at the micro-scale, 1.7 hours at the mini-scale, and 1.5 hours at the subwatershed scale. TL is the lag time that produced the maximum cross-correlation coefficient between the time series of rainfall intensities and the series of water depths. TL averaged 0.15 hours at the micro-scale, 0.35 hours at the mini-scale, and 0.39 hours at the subwatershed scale. The coefficient was >0.50 for 43% (N=168) of the measurements at the micro-scale, for 61% (N=54) at the mini-scale, and for 67% (N=6) at the subwatershed scale, indicating the runoff response lagged but was often well correlated with the time-varying rainfall intensity.
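
    The lag time TL defined above (the shift maximizing the rain-depth cross-correlation) can be sketched as follows; the rainfall and depth series are synthetic, not the Harvard Fire measurements:

```python
import numpy as np

def lag_of_max_xcorr(rain, depth):
    """Lag (in samples) at which the cross-correlation of depth against
    rain peaks; a positive lag means depth responds after rain."""
    r = rain - rain.mean()
    d = depth - depth.mean()
    cc = np.correlate(d, r, mode="full")    # full cross-correlation
    lags = np.arange(-len(r) + 1, len(r))   # lag axis matching cc's indices
    return lags[np.argmax(cc)]

# Synthetic storm: depth is the rain pulse delayed by 3 samples.
rain = np.array([0., 0., 5., 9., 4., 1., 0., 0., 0., 0., 0., 0.])
depth = np.roll(rain, 3)                    # delayed response
lag = lag_of_max_xcorr(rain, depth)
```
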

  17. Object-Oriented Classification of Sugarcane Using Time-Series Middle-Resolution Remote Sensing Data Based on AdaBoost

    PubMed Central

    Zhou, Zhen; Huang, Jingfeng; Wang, Jing; Zhang, Kangyu; Kuang, Zhaomin; Zhong, Shiquan; Song, Xiaodong

    2015-01-01

    Most areas planted with sugarcane are located in southern China. However, remote sensing of sugarcane has been limited because useable remote sensing data are limited due to the cloudy climate of this region during the growing season and severe spectral mixing with other crops. In this study, we developed a methodology for automatically mapping sugarcane over large areas using time-series middle-resolution remote sensing data. For this purpose, two major techniques were used, the object-oriented method (OOM) and data mining (DM). In addition, time-series Chinese HJ-1 CCD images were obtained during the sugarcane growing period. Image objects were generated using a multi-resolution segmentation algorithm, and DM was implemented using the AdaBoost algorithm, which generated the prediction model. The prediction model was applied to the HJ-1 CCD time-series image objects, and then a map of the sugarcane planting area was produced. The classification accuracy was evaluated using independent field survey sampling points. The confusion matrix analysis showed that the overall classification accuracy reached 93.6% and that the Kappa coefficient was 0.85. Thus, the results showed that this method is feasible, efficient, and applicable for extrapolating the classification of other crops in large areas where the application of high-resolution remote sensing data is impractical due to financial considerations or because qualified images are limited. PMID:26528811

  18. Object-Oriented Classification of Sugarcane Using Time-Series Middle-Resolution Remote Sensing Data Based on AdaBoost.

    PubMed

    Zhou, Zhen; Huang, Jingfeng; Wang, Jing; Zhang, Kangyu; Kuang, Zhaomin; Zhong, Shiquan; Song, Xiaodong

    2015-01-01

    Most areas planted with sugarcane are located in southern China. However, remote sensing of sugarcane has been limited because useable remote sensing data are limited due to the cloudy climate of this region during the growing season and severe spectral mixing with other crops. In this study, we developed a methodology for automatically mapping sugarcane over large areas using time-series middle-resolution remote sensing data. For this purpose, two major techniques were used, the object-oriented method (OOM) and data mining (DM). In addition, time-series Chinese HJ-1 CCD images were obtained during the sugarcane growing period. Image objects were generated using a multi-resolution segmentation algorithm, and DM was implemented using the AdaBoost algorithm, which generated the prediction model. The prediction model was applied to the HJ-1 CCD time-series image objects, and then a map of the sugarcane planting area was produced. The classification accuracy was evaluated using independent field survey sampling points. The confusion matrix analysis showed that the overall classification accuracy reached 93.6% and that the Kappa coefficient was 0.85. Thus, the results showed that this method is feasible, efficient, and applicable for extrapolating the classification of other crops in large areas where the application of high-resolution remote sensing data is impractical due to financial considerations or because qualified images are limited.
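
    The Kappa coefficient reported above is Cohen's kappa computed from the confusion matrix. A minimal sketch with an invented 2-class matrix (not the study's actual matrix; the numbers are chosen to be easy to check by hand):

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2  # chance agreement
    return (po - pe) / (1.0 - pe)

# Invented 2-class matrix (e.g. sugarcane vs. other), for illustration only.
cm = [[90, 10],
      [ 5, 95]]
kappa = cohens_kappa(cm)
```
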

  19. A transient response analysis of the space shuttle vehicle during liftoff

    NASA Technical Reports Server (NTRS)

    Brunty, J. A.

    1990-01-01

    A proposed transient response method is formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each time step. These coefficients are obtained by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method. However, when a large number of Lanczos vectors is computed, input/output time increases, raising the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.

  20. Truncation of Spherical Harmonic Series and its Influence on Gravity Field Modelling

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Gruber, T.; Rummel, R.

    2009-04-01

    Least-squares adjustment is a very common and effective tool for the calculation of global gravity field models in terms of spherical harmonic series. However, since the gravity field is a continuous field function, its optimal representation by a finite series of spherical harmonics is connected with a set of fundamental problems. Particularly worth mentioning here are cut-off errors and aliasing effects. These problems stem from the truncation of the spherical harmonic series and from the fact that the spherical harmonic coefficients cannot be determined independently of each other within the adjustment process in the case of discrete observations. The latter is shown by the non-diagonal variance-covariance matrices of gravity field solutions. Sneeuw described in 1994 that the off-diagonal matrix elements - at least if data are equally weighted - are the result of a loss of orthogonality of Legendre polynomials on regular grids. The poster addresses questions arising from the truncation of spherical harmonic series in spherical harmonic analysis and synthesis. Such questions are: (1) How does the high-frequency data content (outside the parameter space) affect the estimated spherical harmonic coefficients? (2) Where should the spherical harmonic series be truncated in the adjustment process in order to avoid high-frequency leakage? (3) Given a set of spherical harmonic coefficients resulting from an adjustment, what is the effect of using only a truncated version of it?

  1. Evaluation of an eddy resolving global model at the Bermuda Atlantic Time-series Study site

    NASA Astrophysics Data System (ADS)

    Hiron, L.; Goncalves Neto, A.; Bates, N. R.; Johnson, R. J.

    2016-02-01

    The Bermuda Atlantic Time-series Study (BATS) commenced monthly sampling in 1988 and thus provides an invaluable 27 years of ocean temperature and salinity profiles for inferring climate-relevant processes. However, the passage of mesoscale eddies through this site complicates the local heat and salinity budgets due to inadequate spatial and temporal sampling of these eddy systems. Thus, application of high-resolution operational numerical models potentially offers a framework for estimating the horizontal transport due to mesoscale processes. The goal of this research was to analyze the accuracy of the MERCATOR operational 1/12° global ocean model at the BATS site by comparing temperature, salinity and heat budgets for the years 2008-2015. Overall agreement in the upper 540 m for temperature and salinity is found to be very encouraging, with significant (P < 0.01) correlations at all depths for both fields. The highest value of the correlation coefficient for the temperature field is 0.98 at the surface, which decreases to 0.66 at 150 m and then reaches a minimum of 0.50 at 320 to 540 m. Similarly, the highest correlation coefficient for salinity is found at the surface, with a value of 0.83, and then decreases to a minimum of 0.25 in the subtropical mode water, though it increases again to 0.5 at 540 m. Mixing in the MERCATOR model is also very well captured, with a mixed layer depth (MLD) correlation coefficient of 0.92 for the seven-year period. Finally, the total heat budget (0-540 m) from MERCATOR varies coherently with the BATS observations, as shown by a high correlation coefficient of 0.84 (P < 0.01). According to these analyses, daily output from the MERCATOR model accurately represents the temperature, salinity, heat budget and MLD at the BATS site. We propose this model can be used in future research at the BATS site by providing information about mesoscale structure and, importantly, advective fluxes at this site.

  2. Legendre-tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.

  3. Optimal trajectory generation for mechanical arms. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Iemenschot, J. A.

    1972-01-01

    A general method of generating optimal trajectories between an initial and a final position of an n degree of freedom manipulator arm with nonlinear equations of motion is proposed. The method is based on the assumption that the time history of each of the coordinates can be expanded in a series of simple time functions. By searching over the coefficients of the terms in the expansion, trajectories which minimize the value of a given cost function can be obtained. The method has been applied to a planar three degree of freedom arm.

  4. Legendre-Tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1983-01-01

    The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximations is made.

  5. Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Martini, Michael C.

    2011-01-01

    A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time.
The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
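
    The recurrence-based Taylor integration described above can be sketched on a toy problem. This is not the SNAP implementation or its force model: the example integrates y' = -y, whose series coefficients follow from the simple recurrence y_k = -y_{k-1}/k, so each step needs only a short loop.

```python
# Toy variable-order Taylor step for y' = -y. The local series
# y(t0 + h) = sum_k y_k h^k has coefficients y_k = -y_{k-1} / k,
# a recurrence obtained directly from the ODE.
def taylor_step(y0, h, order=20):
    coeff = y0
    total = y0
    for k in range(1, order + 1):
        coeff = -coeff / k              # recurrence from the ODE
        total += coeff * h**k           # accumulate the series
    return total

y = 1.0
for _ in range(10):                     # ten steps of h = 0.1
    y = taylor_step(y, 0.1)             # y approximates exp(-1)
```

A high order makes each step accurate far beyond a fixed-order Runge-Kutta step of the same size, which is the source of the speedups cited above.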

  6. Program for the analysis of time series. [by means of fast Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Brown, T. J.; Brown, C. G.; Hardin, J. C.

    1974-01-01

    A digital computer program for the Fourier analysis of discrete time data is described. The program was designed to handle multiple channels of digitized data on general purpose computer systems. It is written, primarily, in a version of FORTRAN 2 currently in use on CDC 6000 series computers. Some small portions are written in CDC COMPASS, an assembler level code. However, functional descriptions of these portions are provided so that the program may be adapted for use on any facility possessing a FORTRAN compiler and random-access capability. Properly formatted digital data are windowed and analyzed by means of a fast Fourier transform algorithm to generate the following functions: (1) auto and/or cross power spectra, (2) autocorrelations and/or cross correlations, (3) Fourier coefficients, (4) coherence functions, (5) transfer functions, and (6) histograms.
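
    One of the outputs listed above, an auto power spectrum from windowed data via the FFT, can be sketched in modern Python (NumPy standing in for the original FORTRAN; the test signal, sampling rate, and normalization are illustrative assumptions):

```python
import numpy as np

fs = 100.0                                  # assumed sampling rate, Hz
t = np.arange(0, 10, 1/fs)
# Illustrative signal: 5 Hz tone plus noise.
x = np.sin(2*np.pi*5*t) + 0.5*np.random.default_rng(0).normal(size=t.size)

w = np.hanning(x.size)                      # window the record before the FFT
X = np.fft.rfft(x * w)
psd = (np.abs(X)**2) / (fs * np.sum(w**2))  # one common PSD normalization
freqs = np.fft.rfftfreq(x.size, 1/fs)

peak = freqs[np.argmax(psd)]                # dominant frequency in the record
```
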

  7. Analysis of the correlation dimension for inertial particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustavsson, Kristian; Department of Physics, Göteborg University, 41296 Gothenburg; Mehlig, Bernhard

    2015-07-15

    We obtain an implicit equation for the correlation dimension which describes clustering of inertial particles in a complex flow onto a fractal measure. Our general equation involves a propagator of a nonlinear stochastic process in which the velocity gradient of the fluid appears as additive noise. When the long-time limit of the propagator is considered, our equation reduces to an existing large-deviation formalism from which it is difficult to extract concrete results. In the short-time limit, however, our equation reduces to a solvability condition on a partial differential equation. In the case where the inertial particles are much denser than the fluid, we show how this approach leads to a perturbative expansion of the correlation dimension, for which the coefficients can be obtained exactly and in principle to any order. We derive the perturbation series for the correlation dimension of inertial particles suspended in three-dimensional spatially smooth random flows with white-noise time correlations, obtaining the first 33 non-zero coefficients exactly.

  8. Extreme learning machine: a new alternative for measuring heat collection rate and heat loss coefficient of water-in-glass evacuated tube solar water heaters.

    PubMed

    Liu, Zhijian; Li, Hao; Tang, Xindong; Zhang, Xinyu; Lin, Fan; Cheng, Kewei

    2016-01-01

    Heat collection rate and heat loss coefficient are crucial indicators for the evaluation of in-service water-in-glass evacuated tube solar water heaters. However, their direct determination requires complex detection devices and a series of standard experiments, consuming considerable time and manpower. To address this problem, we previously used artificial neural networks and support vector machines to develop precise knowledge-based models for predicting the heat collection rates and heat loss coefficients of water-in-glass evacuated tube solar water heaters, setting the properties measured by "portable test instruments" as the independent variables. A robust software tool for the determination was also developed. However, in previous results, the prediction accuracy for heat loss coefficients could still be improved compared to that for heat collection rates. Also, in practical applications, even a small reduction in root mean square errors (RMSEs) can sometimes significantly improve the evaluation and business processes. As a further study, in this short report, we show that using a novel and fast machine learning algorithm, the extreme learning machine, can generate better predicted results for the heat loss coefficient, reducing the average RMSE to 0.67 in testing.
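
    An extreme learning machine reduces training to a single linear solve: the input weights are random and fixed, and only the output weights are fitted. A minimal sketch on synthetic data (not the solar-water-heater measurements; the network size and target function are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))       # 200 samples, 3 features (invented)
y = X[:, 0] - 2*X[:, 1]**2 + 0.5*X[:, 2]    # synthetic target to learn

W = rng.normal(size=(3, 50))                # random input weights, never trained
b = rng.normal(size=50)                     # random hidden biases
H = np.tanh(X @ W + b)                      # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only output weights are fitted

rmse = np.sqrt(np.mean((H @ beta - y)**2))  # training error of the fit
```

Because the only fitted parameters come from one least-squares solve, training is orders of magnitude faster than backpropagation, which is the appeal noted in the report.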

  9. Transient electromagnetic scattering by a radially uniaxial dielectric sphere: Debye series, Mie series and ray tracing methods

    NASA Astrophysics Data System (ADS)

    Yazdani, Mohsen

    Transient electromagnetic scattering by a radially uniaxial dielectric sphere is explored using three well-known methods: Debye series, Mie series, and ray tracing theory. In the first approach, the general solutions for the impulse and step responses of a uniaxial sphere are evaluated using the inverse Laplace transformation of the generalized Mie series solution. Following high frequency scattering solution of a large uniaxial sphere, the Mie series summation is split into the high frequency (HF) and low frequency terms where the HF term is replaced by its asymptotic expression allowing a significant reduction in computation time of the numerical Bromwich integral. In the second approach, the generalized Debye series for a radially uniaxial dielectric sphere is introduced and the Mie series coefficients are replaced by their equivalent Debye series formulations. The results are then applied to examine the transient response of each individual Debye term allowing the identification of impulse returns in the transient response of the uniaxial sphere. In the third approach, the ray tracing theory in a uniaxial sphere is investigated to evaluate the propagation path as well as the arrival time of the ordinary and extraordinary returns in the transient response of the uniaxial sphere. This is achieved by extracting the reflection and transmission angles of a plane wave obliquely incident on the radially oriented air-uniaxial and uniaxial-air boundaries, and expressing the phase velocities as well as the refractive indices of the ordinary and extraordinary waves in terms of the incident angle, optic axis and propagation direction. The results indicate a satisfactory agreement between Debye series, Mie series and ray tracing methods.

  10. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing

    NASA Astrophysics Data System (ADS)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture—for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments—as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS) smoothing (STL). The data series—daily Poaceae pollen concentrations over the period 2006-2014—was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
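
    The decomposition idea above (separating a seasonal cycle from a stochastic residual before modeling each part) can be sketched with a simpler period-wise-mean decomposition; this is not the LOESS-based STL or the PLSR model of the study, and the series is synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
period = 12                                 # assumed seasonal period (months)
t = np.arange(10 * period)                  # ten synthetic "years"
series = 10*np.sin(2*np.pi*t/period) + rng.normal(0, 1, t.size)

# Seasonal component: the mean of each position within the cycle.
seasonal = np.array([series[t % period == k].mean() for k in range(period)])
fitted = seasonal[t % period]               # repeat the mean seasonal cycle
residual = series - fitted                  # stochastic remainder to model next
```
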

  11. The VCOP Scale: a measure of overprotection in parents of physically vulnerable children.

    PubMed

    Wright, L; Mullen, T; West, K; Wyatt, P

    1993-11-01

    A scale is developed for measuring the overprotecting vs. optimal developmental stimulation tendencies for parents of physically "vulnerable" children. A series of items were administered to parents whose parenting techniques had been rated as either highly overprotective or as optimal by a group of MDs and other professionals. Correlations were estimated between each of the items and parental tendencies as rated by professionals. Twenty-eight items were selected that provided maximum prediction of over-protection. The resulting R2 was extraordinarily high (.94). Coefficient alpha and test-retest coefficients were acceptable. It is hoped that release of the new instrument (VCOPS) at this time will allow others to join in determining the clinical and experimental validity of this scale.

  12. Analysis of persistence in fluctuation of the Cauca river through the Hurst coefficient

    NASA Astrophysics Data System (ADS)

    Prada, D. A.; Sanabria, M. P.; Torres, A. F.; Acevedo, A.; Gómez, J.

    2018-04-01

    Studying the continuous changes in the fluctuations of watershed levels is of great importance because it allows predictions to be adjusted about behaviors that can lead to floods or droughts. The Cauca River is one of the most important rivers in Colombia due to its 1350 km length and its drainage area of 59,074 km2, which represents 5% of the national territory. The government entity Cormagdalena records daily levels of the Cauca River at La Mojana. From these data, we developed a time series to which normality tests were applied, and we calculated the Hurst coefficient and the fractal dimension to determine the persistence associated with this behavior.
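
    The Hurst coefficient mentioned above can be estimated by rescaled-range (R/S) analysis: compute the average rescaled range over windows of several sizes and fit the slope of log(R/S) against log(window size). A sketch on synthetic white noise (for which H should be near 0.5), not the Cauca River levels:

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Hurst exponent from the log-log slope of the rescaled range R/S."""
    x = np.asarray(x, float)
    rs = []
    for n in window_sizes:
        vals = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())             # cumulative deviation profile
            r = z.max() - z.min()                   # range of the profile
            s = w.std()                             # standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(3)
h = hurst_rs(rng.normal(size=4096))         # white noise: estimate near 0.5
```

Small-window R/S estimates are biased upward, so uncorrected values for white noise typically land slightly above 0.5.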

  13. Revisiting the Plane Electromagnetic Wave Transmission and Reflection Coefficients for a Layer with an Alternating-Sign Disturbance of Relative Dielectric Permittivity

    NASA Astrophysics Data System (ADS)

    Milov, V. R.; Kogan, L. P.; Gorev, P. V.; Kuzmichev, P. N.; Egorova, P. A.

    2017-01-01

    In this paper, we consider the incidence of a plane electromagnetic wave on an inhomogeneity with an arbitrary profile of the relative permittivity disturbance. The modulus of the Neumann series remainder is estimated for the field of the wave passing through the inhomogeneous section. On this basis, the number of summands in the series required to calculate the transmission and reflection coefficients to a given accuracy is determined.

  14. Influenza newspaper reports and the influenza epidemic: an observational study in Fukuoka City, Japan.

    PubMed

    Hagihara, Akihito; Onozuka, Daisuke; Miyazaki, Shougo; Abe, Takeru

    2015-12-30

    We examined whether the weekly number of newspaper articles reporting on influenza was related to the incidence of influenza in a large city. Prospective, non-randomised, observational study. Registry data of influenza cases in Fukuoka City, Japan. A total of 83,613 influenza cases that occurred between October 1999 and March 2007 in Fukuoka City, Japan. A linear model with autoregressive time series errors was fitted to time series data on the incidence of influenza and the accumulated number of influenza-related newspaper articles with different time lags in Fukuoka City, Japan. In order to obtain further evidence that the number of newspaper articles in a week with specific time lags is related to the incidence of influenza, Granger causality was also tested. Of the 16 models including 'number of newspaper articles' with different time lags between 2 and 17 weeks (xt-2 to t-17), the β coefficients of 'number of newspaper articles' at time lags between t-5 and t-13 were significant. However, the β coefficients of 'number of newspaper articles' that were significant with respect to the Granger causality tests (p<0.05) were the weekly numbers of newspaper articles at time lags between t-6 and t-10 (time shift of 10 weeks, β=-0.301, p<0.01; time shift of 9 weeks, β=-0.200, p<0.01; time shift of 8 weeks, β=-0.156, p<0.01; time shift of 7 weeks, β=-0.122, p<0.05; time shift of 6 weeks, β=-0.113, p<0.05). We found that the number of newspaper articles reporting on influenza in a week was related to the incidence of influenza 6-10 weeks after media coverage in a large city in Japan.
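
    The lagged-regression idea above (regressing this week's incidence on the article count k weeks earlier and examining which lags matter) can be sketched as follows. The data are synthetic with a true lag of 6 built in, and this is a plain OLS scan over lags, not the paper's autoregressive-error model or its Granger causality test:

```python
import numpy as np

rng = np.random.default_rng(4)
articles = rng.poisson(5, size=120).astype(float)   # weekly article counts
cases = np.empty(120)
cases[:6] = rng.normal(0, 1, 6)
cases[6:] = 3.0*articles[:-6] + rng.normal(0, 1, 114)  # respond 6 weeks later

def r2_at_lag(x, y, k):
    """R^2 of the regression y_t = a + b * x_{t-k}."""
    xk, yk = x[:-k], y[k:]                  # pair x_{t-k} with y_t
    A = np.column_stack([np.ones_like(xk), xk])
    coef, *_ = np.linalg.lstsq(A, yk, rcond=None)
    resid = yk - A @ coef
    return 1 - resid.var() / yk.var()

# Scan candidate lags and keep the best-fitting one.
best = max(range(2, 14), key=lambda k: r2_at_lag(articles, cases, k))
```
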

  15. Influenza newspaper reports and the influenza epidemic: an observational study in Fukuoka City, Japan

    PubMed Central

    Hagihara, Akihito; Onozuka, Daisuke; Miyazaki, Shougo; Abe, Takeru

    2015-01-01

    Objectives We examined whether the weekly number of newspaper articles reporting on influenza was related to the incidence of influenza in a large city. Design Prospective, non-randomised, observational study. Setting Registry data of influenza cases in Fukuoka City, Japan. Participants A total of 83 613 cases of influenza that occurred between October 1999 and March 2007 in Fukuoka City, Japan. Main outcome measure A linear model with autoregressive time series errors was fitted to time series data on the incidence of influenza and the accumulated number of influenza-related newspaper articles with different time lags in Fukuoka City, Japan. In order to obtain further evidence that the weekly number of newspaper articles with specific time lags is related to the incidence of influenza, Granger causality was also tested. Results Of the 16 models including ‘number of newspaper articles’ with different time lags between 2 and 17 weeks (x(t-2) to x(t-17)), the β coefficients of ‘number of newspaper articles’ at time lags between t-5 and t-13 were significant. However, the β coefficients of ‘number of newspaper articles’ that were significant with respect to the Granger causality tests (p<0.05) were those for the weekly number of newspaper articles at time lags between t-6 and t-10 (time shift of 10 weeks, β=−0.301, p<0.01; time shift of 9 weeks, β=−0.200, p<0.01; time shift of 8 weeks, β=−0.156, p<0.01; time shift of 7 weeks, β=−0.122, p<0.05; time shift of 6 weeks, β=−0.113, p<0.05). Conclusions We found that the number of newspaper articles reporting on influenza in a week was related to the incidence of influenza 6–10 weeks after media coverage in a large city in Japan. PMID:26719323

  16. Second- and Higher-Order Virial Coefficients Derived from Equations of State for Real Gases

    ERIC Educational Resources Information Center

    Parkinson, William A.

    2009-01-01

    Derivation of the second- and higher-order virial coefficients for models of the gaseous state is demonstrated by employing a direct differential method and subsequent term-by-term comparison to power series expansions. This communication demonstrates the application of this technique to van der Waals representations of virial coefficients.…
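    The term-by-term comparison can be checked numerically: expanding Z = V/(V - b) - a/(RTV) in powers of 1/V gives the second virial coefficient B(T) = b - a/(RT), which the dilute-gas limit of (Z - 1)V recovers. A minimal sketch, assuming approximate van der Waals constants for CO2 (illustrative values, not from the article):

```python
import numpy as np

# Approximate van der Waals constants for CO2 (SI units); illustrative only.
a, b, R, T = 0.3640, 4.267e-5, 8.314, 300.0

def Z(V):
    """Compressibility factor of a van der Waals gas."""
    return V / (V - b) - a / (R * T * V)

# Z = 1 + B/V + C/V**2 + ...  so  (Z - 1)*V -> B(T) in the dilute limit.
V = 10.0 ** np.arange(2, 6)            # increasingly dilute molar volumes
B_numeric = ((Z(V) - 1.0) * V)[-1]     # most dilute point approximates B
B_exact = b - a / (R * T)              # closed form from the expansion
print(B_numeric, B_exact)              # negative at 300 K (attraction dominates)
```

    The same expansion gives the third virial coefficient C = b^2 by comparing the 1/V^2 terms.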

  17. Combination of GRACE monthly gravity field solutions from different processing strategies

    NASA Astrophysics Data System (ADS)

    Jean, Yoomin; Meyer, Ulrich; Jäggi, Adrian

    2018-02-01

    We combine the publicly available GRACE monthly gravity field time series to produce gravity fields with reduced systematic errors. We first compare the monthly gravity fields in the spatial domain in terms of signal and noise. Then, we combine the individual gravity fields with comparable signal content, but diverse noise characteristics. We test five different weighting schemes: equal weights, non-iterative coefficient-wise, order-wise, or field-wise weights, and iterative field-wise weights applying variance component estimation (VCE). The combined solutions are evaluated in terms of signal and noise in the spectral and spatial domains. Compared to the individual contributions, they in general show lower noise. In case the noise characteristics of the individual solutions differ significantly, the weighted means are less noisy, compared to the arithmetic mean: The non-seasonal variability over the oceans is reduced by up to 7.7% and the root mean square (RMS) of the residuals of mass change estimates within Antarctic drainage basins is reduced by 18.1% on average. The field-wise weighting schemes in general show better performance, compared to the order- or coefficient-wise weighting schemes. The combination of the full set of considered time series results in lower noise levels, compared to the combination of a subset consisting of the official GRACE Science Data System gravity fields only: The RMS of coefficient-wise anomalies is smaller by up to 22.4% and the non-seasonal variability over the oceans by 25.4%. This study was performed in the frame of the European Gravity Service for Improved Emergency Management (EGSIEM; http://www.egsiem.eu) project. The gravity fields provided by the EGSIEM scientific combination service (ftp://ftp.aiub.unibe.ch/EGSIEM/) are combined, based on the weights derived by VCE as described in this article.
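    A toy version of the iterative field-wise weighting can be sketched as follows (synthetic series with made-up noise levels, not the EGSIEM combination code): each iteration re-estimates a variance component per input series from its residuals to the current weighted mean, then uses the inverse variances as weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Several noisy copies of one signal, with noise levels unknown to the
# estimator; sigmas and the signal are illustrative assumptions.
truth = np.sin(np.linspace(0, 4 * np.pi, 120))
sigmas = [0.05, 0.10, 0.40]
series = np.array([truth + rng.normal(0, s, truth.size) for s in sigmas])

w = np.ones(len(series))                        # start from equal weights
for _ in range(20):
    mean = np.average(series, axis=0, weights=w)
    var = ((series - mean) ** 2).mean(axis=1)   # variance component per series
    w = 1.0 / var                               # inverse-variance weights

rmse_weighted = np.sqrt(((np.average(series, axis=0, weights=w) - truth) ** 2).mean())
rmse_equal = np.sqrt(((series.mean(axis=0) - truth) ** 2).mean())
print("weights:", w / w.sum(), "RMSE:", rmse_weighted, "vs", rmse_equal)
```

    As in the study, the weighted mean beats the arithmetic mean precisely when the noise levels of the contributions differ substantially.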

  18. NASA Satellite Monitoring of Water Clarity in Mobile Bay for Nutrient Criteria Development

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir; Holekamp, Kara; Spiering, Bruce A.

    2009-01-01

    This project has demonstrated the feasibility of deriving time series of water clarity parameters from MODIS daily measurements, providing coverage of a specific location or area of interest on 30-50% of days. Time series derived for estuarine and coastal waters display much higher variability than time series of ecological parameters (such as vegetation indices) derived for land areas. (Temporal filtering often applied in terrestrial studies cannot be used effectively in ocean color processing.) IOP-based algorithms for retrieval of the diffuse light attenuation coefficient and TSS concentration perform well for the Mobile Bay environment: only a minor adjustment was needed in the TSS algorithm, despite the generally recognized dependence of such algorithms on local conditions. The current IOP-based algorithm for retrieval of chlorophyll a concentration has not performed as well: a more reliable algorithm is needed, which may be based on IOPs at additional wavelengths or on remote sensing reflectance from multiple spectral bands. The CDOM algorithm also needs improvement to provide better separation between the effects of gilvin (gelbstoff) and detritus. (Identification or development of such an algorithm requires more data from in situ measurements of CDOM concentration in Gulf of Mexico coastal waters; ongoing collaboration with the EPA Gulf Ecology Division.)

  19. Recurrence network measures for hypothesis testing using surrogate data: Application to black hole light curves

    NASA Astrophysics Data System (ADS)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2018-01-01

    Recurrence networks and the associated statistical measures have become important tools in the analysis of time series data. In this work, we test how effective the recurrence network measures are in analyzing real world data involving two main types of noise, white noise and colored noise. We use two prominent network measures as the discriminating statistic for hypothesis testing using surrogate data, for the specific null hypothesis that the data are derived from a linear stochastic process. We show that the characteristic path length is especially efficient as a discriminating measure, with conclusions that remain reasonably accurate even with a limited number of data points in the time series. We also highlight an additional advantage of the network approach in identifying the dimensionality of the system underlying the time series through a convergence measure derived from the probability distribution of the local clustering coefficients. As examples of real world data, we use the light curves from a prominent black hole system and show that a combined analysis using three primary network measures can provide vital information regarding the nature of temporal variability of light curves from different spectroscopic classes.
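    The machinery involved can be sketched in a few lines of numpy (a 1-d logistic-map series stands in for a light curve; the epsilon threshold is an arbitrary illustrative choice): build the epsilon-recurrence adjacency matrix, then compute the characteristic path length with breadth-first searches.

```python
import numpy as np
from collections import deque

def recurrence_network(x, eps):
    """Unweighted, undirected adjacency matrix: recurrence within eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps) & ~np.eye(x.size, dtype=bool)

def characteristic_path_length(A):
    """Mean shortest-path length over all connected node pairs (BFS)."""
    n = len(A)
    total, pairs = 0, 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        reached = dist > 0
        total += dist[reached].sum()
        pairs += reached.sum()
    return total / pairs

x = np.empty(500)
x[0] = 0.4
for i in range(499):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])       # chaotic logistic map
A = recurrence_network(x, eps=0.05)
cpl = characteristic_path_length(A)
print("characteristic path length:", cpl)
```

    In a surrogate test, the same statistic would be computed for an ensemble of surrogate series and compared against the value for the data.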

  20. Complexity analysis based on generalized deviation for financial markets

    NASA Astrophysics Data System (ADS)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method, the complexity analysis based on generalized deviation, is proposed as a measure to investigate the correlation between past price and future volatility for financial time series. In comparison with the former retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function provides an exhaustive way of quantifying financial market rules. Robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After analyzing the experimental model, we find that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.

  1. Financial Time Series Prediction Using Spiking Neural Networks

    PubMed Central

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two “traditional”, rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  2. JMR Noise Diode Stability and Recalibration Methodology after Three Years On-Orbit

    NASA Technical Reports Server (NTRS)

    Brown, Shannon; Desai, Shailen; Keihm, Stephen; Ruf, Christopher

    2006-01-01

    The Jason Microwave Radiometer (JMR) is included on the Jason-1 ocean altimeter satellite to measure the wet tropospheric path delay (PD) experienced by the radar altimeter signal. JMR is nadir pointing and measures the radiometric brightness temperature (T(sub B)) at 18.7, 23.8 and 34.0 GHz. JMR is a Dicke radiometer and it is the first radiometer to be flown in space that uses noise diodes for calibration. Therefore, monitoring the long-term stability of the noise diodes is essential. Each channel has three redundant noise diodes which are individually coupled into the antenna signal to provide an estimate of the gain. Two significant jumps in the JMR path delays, relative to ground truth, were observed around 300 and 700 days into the mission. Slow drifts in the retrieved products were also evident over the entire mission. During a recalibration effort, it was determined that a single set of calibration coefficients was not able to remove the calibration jumps and drifts, suggesting that there was a change in the hardware and time-dependent coefficients would be required. To facilitate the derivation of time-dependent coefficients, an optimal estimation based calibration system was developed which iteratively determines the set of calibration coefficients that minimizes the RMS difference between the JMR TBs and on-Earth hot and cold absolute references. This optimal calibration algorithm was used to fine tune the front end path loss coefficients and derive a time series of the JMR noise diode brightness temperatures for each of the nine diodes. Jumps and drifts, on the order of 1% to 2%, are observed among the noise diodes in the first three years on-orbit.

  3. Cross-correlation analysis between Chinese TF contracts and treasury ETF based on high-frequency data

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Chen, Shi

    2016-02-01

    In this paper, we investigate the high-frequency cross-correlation relationship between Chinese treasury futures contracts and treasury ETF. We analyze the logarithmic return of these two price series, from which we can conclude that both return series are not normally distributed and the futures markets have greater volatility. We find significant cross-correlation between these two series. We further confirm the relationship using the DCCA coefficient and the DMCA coefficient. We quantify the long-range cross-correlation with DCCA method, and we further show that the relationship is multifractal. An arbitrage algorithm based on DFA regression with stable return is proposed in the last part.
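    The DCCA coefficient mentioned above (the detrended cross-correlation coefficient) can be sketched as follows, on synthetic series sharing a common component rather than the actual futures/ETF prices; the shared-factor construction and the scale n=50 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# rho_DCCA(n): detrended covariance of the integrated series over boxes of
# size n, normalized by the two detrended variances.
def dcca_coefficient(x, y, n):
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    t = np.arange(n)
    f_xx = f_yy = f_xy = 0.0
    for k in range(len(X) // n):
        xs, ys = X[k * n:(k + 1) * n], Y[k * n:(k + 1) * n]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # detrend each box
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f_xx += (rx * rx).mean()
        f_yy += (ry * ry).mean()
        f_xy += (rx * ry).mean()
    return f_xy / np.sqrt(f_xx * f_yy)

common = rng.normal(size=2000)                  # shared component
x = common + 0.5 * rng.normal(size=2000)
y = common + 0.5 * rng.normal(size=2000)
rho = dcca_coefficient(x, y, 50)
rho_self = dcca_coefficient(x, x, 50)           # self-coefficient is 1
print("rho_DCCA:", rho, "self:", rho_self)
```

    Like an ordinary correlation coefficient, rho_DCCA lies in [-1, 1], but it is computed scale by scale on the detrended profiles.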

  4. The Space-Time Asymmetry Research (STAR) program

    NASA Astrophysics Data System (ADS)

    Buchman, Sasha

    Stanford University, NASA Ames, and international partners propose the Space-Time Asymmetry Research (STAR) program, a series of three Science and Technology Development Missions, which will probe the fundamental relationships between space, time and gravity. What is the nature of space-time? Is space truly isotropic? Is the speed of light truly isotropic? If not, what is its direction and location dependency? What are the answers beyond Einstein? How will gravity and the standard model ultimately be combined? The first mission, STAR-1, will measure the absolute anisotropy of the velocity of light to one part in 10^17, derive the Kennedy-Thorndike (KT) coefficient to 7x10^-10 (a 150-fold improvement over modern ground measurements), derive the Michelson-Morley (MM) coefficient to 10^-11 (confirming the ground measurements), and derive the coefficients of Lorentz violation in the Standard Model Extension (SME), in the range 7x10^-17 to 10^-13 (an order of magnitude improvement over ground measurements). The follow-on missions will achieve a factor of 100 higher sensitivities. The core instruments are high stability optical cavities and high accuracy gas spectroscopy frequency standards using the "NICE-OHMS" technique. STAR-1 is accomplished with a fully redundant instrument flown on a standard-bus, spin-stabilized spacecraft with a mission lifetime of two years. Spacecraft and instrument have a total mass of less than 180 kg and consume less than 200 W of power. STAR-1 would launch in 2015 as a secondary payload in a 650 km, sun-synchronous orbit. We describe the STAR-1 mission in detail and the STAR series in general, with a focus on how each mission will build on the development and success of the previous missions, methodically enhancing both the capabilities of the STAR instrument suite and our understanding of this important field. By coupling state-of-the-art scientific instrumentation with proven and cost-effective small satellite technology, in an environment designed for research and leadership participation by university students, the STAR program will bring new answers to some of the most important physics questions of our time, questions that have faced physicists for over 100 years.

  5. Estimation of methane emission rate changes using age-defined waste in a landfill site.

    PubMed

    Ishii, Kazuei; Furuichi, Toru

    2013-09-01

    Long term methane emissions from landfill sites are often predicted by first-order decay (FOD) models, in which the default coefficients of the methane generation potential and the methane generation rate given by the Intergovernmental Panel on Climate Change (IPCC) are usually used. However, previous studies have demonstrated the large uncertainty in these coefficients because they are derived from a calibration procedure under ideal steady-state conditions, not actual landfill site conditions. In this study, the coefficients in the FOD model were estimated by a new approach to predict more precise long term methane generation by considering region-specific conditions. In the new approach, age-defined waste samples, which had been under the actual landfill site conditions, were collected in Hokkaido, Japan (a cold region), and the time series data on the age-defined waste samples' methane generation potential were used to estimate the coefficients in the FOD model. The degradation coefficients were 0.0501/y and 0.0621/y for paper and food waste, and the methane generation potentials were 214.4 mL/g-wet waste and 126.7 mL/g-wet waste for paper and food waste, respectively. These coefficients were compared with the default coefficients given by the IPCC. Although the degradation coefficient for food waste was smaller than the default value, the other coefficients were within the range of the default coefficients. With these new coefficients to calculate methane generation, the long term methane emissions from the landfill site were estimated at 1.35×10^4 m^3-CH4, which corresponds to approximately 2.53% of the total carbon dioxide emissions in the city (5.34×10^5 t-CO2/y). Copyright © 2013 Elsevier Ltd. All rights reserved.
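    With the estimated coefficients quoted above, the first-order-decay generation curve for a unit of freshly buried waste is straightforward to evaluate. A minimal sketch of the standard FOD form (per-gram rate q(t) = L0·k·exp(-k·t)), not the authors' full multi-year site accounting:

```python
import numpy as np

# Coefficients estimated in the study: k [1/y], L0 [mL CH4 per g wet waste].
coeffs = {
    "paper": (0.0501, 214.4),
    "food":  (0.0621, 126.7),
}

def methane_rate(t, k, L0, mass_g=1.0):
    """CH4 generation rate [mL/y] t years after burial of mass_g grams."""
    return mass_g * L0 * k * np.exp(-k * t)

t = np.linspace(0.0, 50.0, 6)
for waste, (k, L0) in coeffs.items():
    print(waste, np.round(methane_rate(t, k, L0), 2))
```

    A site total would sum this curve over the burial history, weighting each year's deposits by their mass and composition.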

  6. Absorption dynamics and delay time in complex potentials

    NASA Astrophysics Data System (ADS)

    Villavicencio, Jorge; Romo, Roberto; Hernández-Maldonado, Alberto

    2018-05-01

    The dynamics of absorption is analyzed by using an exactly solvable model that deals with an analytical solution to Schrödinger’s equation for cutoff initial plane waves incident on a complex absorbing potential. A dynamical absorption coefficient which allows us to explore the dynamical loss of particles from the transient to the stationary regime is derived. We find that the absorption process is characterized by the emission of a series of damped periodic pulses in time domain, associated with damped Rabi-type oscillations with a characteristic frequency, ω = (E + ε)/ℏ, where E is the energy of the incident waves and ‑ε is energy of the quasidiscrete state of the system induced by the absorptive part of the Hamiltonian; the width γ of this resonance governs the amplitude of the pulses. The resemblance of the time-dependent absorption coefficient with a real decay process is discussed, in particular the transition from exponential to nonexponential regimes, a well-known feature of quantum decay. We have also analyzed the effect of the absorptive part of the potential on the dynamical delay time, which behaves differently from the one observed in attractive real delta potentials, exhibiting two regimes: time advance and time delay.

  7. Influence of the Time Scale on the Construction of Financial Networks

    PubMed Central

    Emmert-Streib, Frank; Dehmer, Matthias

    2010-01-01

    Background In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. Methodology/Principal Findings For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients. That means we include an edge in the network only if a correlation coefficient is statistically significantly different from zero. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices in non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks will be studied in this paper. Conclusions/Significance Numerical analysis of four different measures in dependence on the time scale for the construction of networks allows us to gain insights about the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis. PMID:20949124
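    The edge rule described (keep a link only where the correlation coefficient is significantly non-zero) can be sketched with a Fisher z-test on synthetic returns; the one-factor block, sample sizes and 5% level are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily returns for 10 "stocks"; stocks 0-3 share a common factor,
# so their pairwise correlations are genuinely non-zero.
n_days, n_stocks = 250, 10
returns = rng.normal(size=(n_days, n_stocks))
returns[:, :4] += rng.normal(size=(n_days, 1))

r = np.corrcoef(returns.T)                       # correlation coefficient matrix
# Fisher z-transform: under the null rho=0, z ~ N(0, 1); keep |z| > 1.96 (~5%).
z = np.arctanh(np.clip(r, -0.999999, 0.999999)) * np.sqrt(n_days - 3)
A = (np.abs(z) > 1.96) & ~np.eye(n_stocks, dtype=bool)   # unweighted, undirected
print("number of edges:", A.sum() // 2)
```

    Shortening the estimation window widens the null distribution of the correlation coefficient, so fewer edges survive the test — one concrete way the time scale shapes the resulting network.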

  8. On the Long-Term "Hesitation Waltz" Between the Earth's Figure and Rotation Axes

    NASA Astrophysics Data System (ADS)

    Couhert, A.; Mercier, F.; Bizouard, C.

    2017-12-01

    The principal figure axis of the Earth refers to its axis of maximum inertia. In the absence of external torques, the latter should closely coincide with the rotation pole, when averaged over many years. However, because of tidal and non-tidal mass redistributions within the Earth system, the rotational axis executes a circular motion around the figure axis, essentially at seasonal time scales. In between, it is not clear what happens at decadal time spans and how well the two axes are aligned. The long record of accurate Satellite Laser Ranging (SLR) observations to Lageos makes it possible to directly measure the long-term displacement of the figure axis with respect to the crust, through the determination of the degree 2, order 1 geopotential coefficients for the 34-year period 1983-2017. On the other hand, the pole coordinate time series (mainly from GNSS and VLBI data) yield the motion of the rotation pole with even greater accuracy. This study focuses on the analysis of the long-term behavior of the two time series, as well as the derivation of possible explanations for their discrepancies.

  9. Deterministic chaos in atmospheric radon dynamics

    NASA Astrophysics Data System (ADS)

    Cuculeanu, Vasile; Lupu, Alexandru

    2001-08-01

    The correlation dimension and Lyapunov exponents have been calculated for two time series of atmospheric radon daughter concentrations obtained from four daily measurements during the period 1993-1996. A number of about 6000 activity concentration values of 222Rn and 220Rn daughters have been used. The measuring method is based on aerosol collection on filters. In order to determine the filter activity, a low background gross beta measuring device with Geiger-Müller counter tubes in anticoincidence was used. The small noninteger value of the correlation dimension (≃2.2) and the existence of a positive Lyapunov exponent prove that deterministic chaos is present in the time series of atmospheric 220Rn daughters. This shows that a simple diffusion equation with a parameterized turbulent diffusion coefficient is insufficient for describing the dynamics in the near-ground layer where turbulence is not fully developed and coherent structures dominate. The analysis of 222Rn series confirms that the dynamics of the boundary layer cannot be described by a system of ordinary differential equations with a low number of independent variables.

  10. Validating the WRF-Chem model for wind energy applications using High Resolution Doppler Lidar data from a Utah 2012 field campaign

    NASA Astrophysics Data System (ADS)

    Mitchell, M. J.; Pichugina, Y. L.; Banta, R. M.

    2015-12-01

    Models are important tools for assessing the potential of wind energy sites, but the accuracy of their projections has not been properly validated. In this study, High Resolution Doppler Lidar (HRDL) data obtained with high temporal and spatial resolution at heights of modern turbine rotors were compared to output from the WRF-Chem model in order to help improve the performance of the model in producing accurate wind forecasts for the industry. HRDL data were collected from January 23 to March 1, 2012 during the Uintah Basin Winter Ozone Study (UBWOS) field campaign. The model validation method was based on qualitative comparison of wind field images, time-series analysis, and statistical analysis of the observed and modeled wind speed and direction, both for case studies and for the whole experiment. To compare the WRF-Chem model output to the HRDL observations, the model heights and forecast times were interpolated to match the observed times and heights. Then, time-height cross-sections of the HRDL and WRF-Chem wind speeds and directions were plotted to select case studies. Cross-sections of the differences between the observed and forecasted wind speed and direction were also plotted to visually analyze the model performance in different wind flow conditions. The statistical analysis includes the calculation of vertical profiles and time series of bias, correlation coefficient, root mean squared error, and coefficient of determination between the two datasets. The results from this analysis reveal where and when the model typically struggles in forecasting winds at heights of modern turbine rotors, so that in the future the model can be improved for the industry.
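    The comparison statistics named above (bias, RMSE, correlation coefficient, coefficient of determination) reduce to a few array operations. A sketch on synthetic observed/modeled wind-speed series, not the HRDL or WRF-Chem data:

```python
import numpy as np

rng = np.random.default_rng(3)

def validation_stats(obs, model):
    """Bias, RMSE, Pearson r, and R^2 of model against observations."""
    resid = model - obs
    bias = resid.mean()
    rmse = np.sqrt((resid ** 2).mean())
    r = np.corrcoef(obs, model)[0, 1]
    r2 = 1.0 - (resid ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    return bias, rmse, r, r2

obs = 8.0 + 2.0 * np.sin(np.linspace(0, 6 * np.pi, 200))   # cyclic "wind speed"
model = obs + 0.5 + rng.normal(0, 0.8, obs.size)            # biased, noisy forecast
bias, rmse, r, r2 = validation_stats(obs, model)
print("bias=%.2f  rmse=%.2f  r=%.2f  R2=%.2f" % (bias, rmse, r, r2))
```

    Note that this R^2 (computed from residuals against the observations) penalizes bias, while the correlation coefficient does not — which is why the study reports both.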

  11. Use of a polar ionic liquid as second column for the comprehensive two-dimensional GC separation of PCBs.

    PubMed

    Zapadlo, Michal; Krupcík, Ján; Májek, Pavel; Armstrong, Daniel W; Sandra, Pat

    2010-09-10

    The orthogonality of three columns coupled in two series was studied for the congener-specific comprehensive two-dimensional GC separation of polychlorinated biphenyls (PCBs). A non-polar capillary column coated with poly(5%-phenyl-95%-methyl)siloxane was used as the first-dimension (1D) column in both series. A polar capillary column coated with 70% cyanopropyl-polysilphenylene-siloxane or a capillary column coated with the ionic liquid 1,12-di(tripropylphosphonium)dodecane bis(trifluoromethane-sulfonyl)imide were used as the second-dimension (2D) columns. Nine multi-congener standard PCB solutions containing subsets of all 209 native PCBs, a mixture of all 209 PCBs, and Aroclor 1242 and 1260 formulations were used to study the orthogonality of both column series. Retention times of the corresponding PCB congeners on the 1D and 2D columns were used to construct retention time dependences (apex plots) for assessing the orthogonality of both columns coupled in series. For a visual assessment of the peak density of PCB congeners on the retention plane, 2D images were compared. The degree of orthogonality of both column series was evaluated, alongside the visual assessment of the distribution of PCBs on the retention plane, by Pearson's correlation coefficient, obtained by correlating the retention times t(R,i,2D) and t(R,i,1D) of corresponding PCB congeners on both column series. It was demonstrated that the apolar+ionic liquid column series is almost orthogonal, both for the 2D separation of PCBs present in the Aroclor 1242 and 1260 formulations and for the separation of all 209 PCBs. All toxic, dioxin-like PCBs, with the exception of PCB 118, which overlaps with PCB 106, were resolved by the apolar/ionic liquid series, while on the apolar/polar column series three toxic PCBs overlapped (105+127, 81+148 and 118+106). Copyright 2010 Elsevier B.V. All rights reserved.

  12. Serial Founder Effects During Range Expansion: A Spatial Analog of Genetic Drift

    PubMed Central

    Slatkin, Montgomery; Excoffier, Laurent

    2012-01-01

    Range expansions cause a series of founder events. We show that, in a one-dimensional habitat, these founder events are the spatial analog of genetic drift in a randomly mating population. The spatial series of allele frequencies created by successive founder events is equivalent to the time series of allele frequencies in a population of effective size ke, the effective number of founders. We derive an expression for ke in a discrete-population model that allows for local population growth and migration among established populations. If there is selection, the net effect is determined approximately by the product of the selection coefficients and the number of generations between successive founding events. We use the model of a single population to compute analytically several quantities for an allele present in the source population: (i) the probability that it survives the series of colonization events, (ii) the probability that it reaches a specified threshold frequency in the last population, and (iii) the mean and variance of the frequencies in each population. We show that the analytic theory provides a good approximation to simulation results. A consequence of our approximation is that the average heterozygosity of neutral alleles decreases by a factor of 1 – 1/(2ke) in each new population. Therefore, the population genetic consequences of surfing can be predicted approximately by the effective number of founders and the effective selection coefficients, even in the presence of migration among populations. We also show that our analytic results are applicable to a model of range expansion in a continuously distributed population. PMID:22367031
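    The central approximation, a heterozygosity loss of a factor 1 - 1/(2ke) per founder event, can be checked with a quick simulation; the parameter values below are arbitrary illustrations, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

# Each new colony along a 1-d chain is founded by sampling 2*ke gene copies
# from the previous colony, the spatial analog of one generation of drift.
ke, n_colonies, n_reps = 10, 20, 20000
p = np.full(n_reps, 0.5)                       # allele frequency in the source
for _ in range(n_colonies):
    p = rng.binomial(2 * ke, p) / (2 * ke)     # founder sampling per colony
H_sim = (2 * p * (1 - p)).mean()               # mean heterozygosity, last colony
H_pred = 0.5 * (1 - 1 / (2 * ke)) ** n_colonies
print(H_sim, H_pred)
```

    Under binomial sampling, E[2p(1-p)] shrinks by exactly 1 - 1/(2ke) per event, so the simulated mean tracks the prediction closely.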

  13. Serial founder effects during range expansion: a spatial analog of genetic drift.

    PubMed

    Slatkin, Montgomery; Excoffier, Laurent

    2012-05-01

    Range expansions cause a series of founder events. We show that, in a one-dimensional habitat, these founder events are the spatial analog of genetic drift in a randomly mating population. The spatial series of allele frequencies created by successive founder events is equivalent to the time series of allele frequencies in a population of effective size ke, the effective number of founders. We derive an expression for ke in a discrete-population model that allows for local population growth and migration among established populations. If there is selection, the net effect is determined approximately by the product of the selection coefficients and the number of generations between successive founding events. We use the model of a single population to compute analytically several quantities for an allele present in the source population: (i) the probability that it survives the series of colonization events, (ii) the probability that it reaches a specified threshold frequency in the last population, and (iii) the mean and variance of the frequencies in each population. We show that the analytic theory provides a good approximation to simulation results. A consequence of our approximation is that the average heterozygosity of neutral alleles decreases by a factor of 1-1/(2ke) in each new population. Therefore, the population genetic consequences of surfing can be predicted approximately by the effective number of founders and the effective selection coefficients, even in the presence of migration among populations. We also show that our analytic results are applicable to a model of range expansion in a continuously distributed population.

  14. Correlation filtering in financial time series (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Aste, T.; Di Matteo, Tiziana; Tumminello, M.; Mantegna, R. N.

    2005-05-01

    We apply a method to filter relevant information from the correlation coefficient matrix by extracting a network of relevant interactions. This method succeeds in generating networks with the same hierarchical structure as the Minimum Spanning Tree but containing a larger number of links, resulting in a richer network topology that allows loops and cliques. In Tumminello et al. [1], we have shown that this method, applied to a financial portfolio of 100 stocks in the USA equity markets, is quite efficient in filtering relevant information about the clustering of the system and its hierarchical structure, both for the whole system and within each cluster. In particular, we have found that triangular loops and 4-element cliques have important and significant relations with the market structure and properties. Here we apply this filtering procedure to the analysis of correlation in two different kinds of interest rate time series (16 Eurodollar and 34 US interest rates).
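    As a baseline for the filtering method discussed, here is a sketch of the Minimum Spanning Tree construction it enriches, using the standard correlation distance d_ij = sqrt(2(1 - rho_ij)) and Prim's algorithm; the uncorrelated synthetic returns are a stand-in for real asset data.

```python
import numpy as np

rng = np.random.default_rng(5)

n_assets, n_obs = 12, 500
returns = rng.normal(size=(n_obs, n_assets))
rho = np.corrcoef(returns.T)
d = np.sqrt(2.0 * (1.0 - rho))                 # correlation-based distance

# Prim's algorithm: grow the tree one nearest outside node at a time.
in_tree = np.zeros(n_assets, dtype=bool)
in_tree[0] = True
best = d[0].copy()                             # distance from tree to each node
parent = np.zeros(n_assets, dtype=int)
edges = []
for _ in range(n_assets - 1):
    j = int(np.argmin(np.where(in_tree, np.inf, best)))
    edges.append((int(parent[j]), j))
    in_tree[j] = True
    closer = d[j] < best                       # j may now be the nearest tree node
    best[closer] = d[j][closer]
    parent[closer] = j
print("MST edges:", edges)
```

    The MST keeps exactly N-1 links; the filtering method of the record retains the same hierarchy while admitting additional links, and hence loops and cliques.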

  15. Time series modeling and forecasting using memetic algorithms for regime-switching models.

    PubMed

    Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel

    2012-11-01

    In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive with models commonly used in the field.

  16. Cross over of recurrence networks to random graphs and random geometric graphs

    NASA Astrophysics Data System (ADS)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2017-02-01

    Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems, where the connection between two nodes is determined by the recurrence threshold. This condition makes the topology of every recurrence network unique, with the degree distribution determined by the probability density variations of the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures and show how recurrence networks differ from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding a sufficient amount of noise to the time series, and to classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing small changes in the network characteristics.
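The construction described above can be sketched in a few lines: embed the scalar series, then connect any two states that recur within the threshold. A minimal illustration (the time-delay embedding and maximum-norm distance are assumed choices; real analyses tune dimension, delay and threshold to the attractor):

```python
def recurrence_network(series, eps, dim=2, delay=1):
    # Time-delay embedding of a scalar series, then an undirected link
    # between nodes i and j whenever their state vectors lie within the
    # recurrence threshold eps (maximum norm). Returns the adjacency matrix.
    vecs = [tuple(series[i + k * delay] for k in range(dim))
            for i in range(len(series) - (dim - 1) * delay)]
    n = len(vecs)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if max(abs(a - b) for a, b in zip(vecs[i], vecs[j])) < eps:
                adj[i][j] = adj[j][i] = 1
    return adj
```

Adding noise to `series` blurs the attractor's density variations, which is exactly the mechanism by which the recurrence network drifts toward a random geometric graph.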

  17. Ice Mass Change in Greenland and Antarctica Between 1993 and 2013 from Satellite Gravity Measurements

    NASA Technical Reports Server (NTRS)

    Talpe, Matthieu J.; Nerem, R. Steven; Forootan, Ehsan; Schmidt, Michael; Lemoine, Frank G.; Enderlin, Ellyn M.; Landerer, Felix W.

    2017-01-01

    We construct long-term time series of Greenland and Antarctic ice sheet mass change from satellite gravity measurements. A statistical reconstruction approach is developed based on a principal component analysis (PCA) to combine high-resolution spatial modes from the Gravity Recovery and Climate Experiment (GRACE) mission with the gravity information from conventional satellite tracking data. Uncertainties of this reconstruction are rigorously assessed; they include temporal limitations for short GRACE measurements, spatial limitations for the low-resolution conventional tracking data measurements, and limitations of the estimated statistical relationships between low- and high-degree potential coefficients reflected in the PCA modes. Trends of mass variations in Greenland and Antarctica are assessed against a number of previous studies. The resulting time series for Greenland show a higher rate of mass loss than other methods before 2000, while the Antarctic ice sheet appears heavily influenced by interannual variations.

  18. User's manual for the Graphical Constituent Loading Analysis System (GCLAS)

    USGS Publications Warehouse

    Koltun, G.F.; Eberle, Michael; Gray, J.R.; Glysson, G.D.

    2006-01-01

    This manual describes the Graphical Constituent Loading Analysis System (GCLAS), an interactive cross-platform program for computing the mass (load) and average concentration of a constituent that is transported in stream water over a period of time. GCLAS computes loads as a function of an equal-interval streamflow time series and an equal- or unequal-interval time series of constituent concentrations. The constituent-concentration time series may be composed of measured concentrations or a combination of measured and estimated concentrations. GCLAS is not intended for use in situations where concentration data (or an appropriate surrogate) are collected infrequently or where an appreciable number of the concentration values are censored. It is assumed that the constituent-concentration time series used by GCLAS adequately represents the true time-varying concentration. Commonly, measured constituent concentrations are collected at a frequency that is less than ideal (from a load-computation standpoint), so estimated concentrations must be inserted in the time series to better approximate the expected chemograph. GCLAS provides tools to facilitate estimation and entry of instantaneous concentrations for that purpose. Water-quality samples collected for load computation frequently are collected in a single vertical or at a single point in a stream cross section. Several factors, some of which may vary as a function of time and (or) streamflow, can affect whether the sample concentrations are representative of the mean concentration in the cross section. GCLAS provides tools to aid the analyst in assessing whether concentrations in samples collected in a single vertical or at a single point in a stream cross section exhibit systematic bias with respect to the mean concentrations. In cases where bias is evident, the analyst can construct coefficient relations in GCLAS to reduce or eliminate the observed bias.
GCLAS can export load and concentration data in formats suitable for entry into the U.S. Geological Survey's National Water Information System. GCLAS can also import and export data in formats that are compatible with various commonly used spreadsheet and statistics programs.
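The load computation amounts to integrating the product of streamflow and (interpolated) concentration over time. A minimal sketch of that idea, assuming trapezoidal integration, linear interpolation of the unequal-interval concentration series, and a placeholder unit-conversion constant k (this is not GCLAS's actual implementation):

```python
def constituent_load(times, flows, conc_times, concs, k=1.0):
    # Load = integral of Q(t) * C(t) dt over the streamflow series,
    # with C(t) linearly interpolated from a (possibly unequal-interval)
    # concentration series. k converts units (e.g. cfs*mg/L*s to kg).
    def interp(t):
        if t <= conc_times[0]:
            return concs[0]
        if t >= conc_times[-1]:
            return concs[-1]
        for a, b, ca, cb in zip(conc_times, conc_times[1:], concs, concs[1:]):
            if a <= t <= b:
                return ca + (cb - ca) * (t - a) / (b - a)
    load = 0.0
    for t0, t1, q0, q1 in zip(times, times[1:], flows, flows[1:]):
        load += 0.5 * (q0 * interp(t0) + q1 * interp(t1)) * (t1 - t0) * k
    return load
```

The accuracy of such an integral hinges on the estimated concentrations tracking the true chemograph, which is why GCLAS emphasizes tools for filling the concentration series between samples.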

  19. Zernike ultrasonic tomography for fluid velocity imaging based on pipeline intrusive time-of-flight measurements.

    PubMed

    Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy

    2014-11-01

    In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Having intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) at the input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). Principally, by representing these velocities as Zernike polynomial series, we reduce the tomography problem to an ill-posed problem of finding the coefficients of the series, relying on the acquired ultrasonic measurements. Thereupon, this problem is treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity by employing SVD-based filtering in selecting the Zernike polynomials to be included in the series. The first approach, Tikhonov regularization without filtering, is retained because it proves to be the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show a relative residual norm and an error in flow estimation of, respectively, ~0.3% and ~1.6% for the less turbulent flow and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by proximate matching of the derived tomograms with a physical flow model.
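The Tikhonov step reduces to solving the regularized normal equations (A^T A + lambda*I) x = A^T b for the series coefficients x given the measurement operator A and data b. A minimal dense-solver sketch at toy scale (pure Python with Gaussian elimination; a real tomographic inversion would use a linear-algebra library):

```python
def tikhonov_solve(A, b, lam):
    # Solve (A^T A + lam*I) x = A^T b: the Tikhonov-regularized
    # least-squares estimate of the coefficient vector x.
    m, n = len(A), len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

Increasing `lam` shrinks the coefficients toward zero, trading data fit for stability, which is what makes the ill-posed coefficient-recovery problem tractable.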

  20. Resuspension of ash after the 2014 phreatic eruption at Ontake volcano, Japan

    NASA Astrophysics Data System (ADS)

    Miwa, Takahiro; Nagai, Masashi; Kawaguchi, Ryohei

    2018-02-01

    We determined the resuspension process of an ash deposit after the phreatic eruption of September 27th, 2014 at Ontake volcano, Japan, by analyzing the time series data of particle concentrations obtained using an optical particle counter and the characteristics of an ash sample. The time series of particle concentration was obtained by an optical particle counter installed 11 km from the volcano from September 21st to October 19th, 2014. The time series contains counts of dust particles (ash and soil), pollen, and water drops, and was corrected to calculate the concentration of dust particles based on a polarization factor reflecting the optical anisotropy of particles. The dust concentration was compared with the time series of wind velocity. The dust concentration was high and the correlation coefficient with wind velocity was positive from September 28th to October 2nd. Grain-size analysis of an ash sample confirmed that the ash deposit contains abundant very fine particles (< 30 μm). Simple theoretical calculations revealed that the daily peaks of the moderate wind (a few m/s at 10 m above the ground surface) were comparable with the threshold wind velocity for resuspension of an unconsolidated deposit with a wide range of particle densities. These results demonstrate that moderate wind drove the resuspension of an ash deposit containing abundant fine particles produced by the phreatic eruption.
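Threshold calculations of this kind are typically based on a threshold-friction-velocity expression balancing particle weight against interparticle cohesion. A sketch in the spirit of the Shao and Lu (2000) parameterization (the coefficient values a_n and gamma below are illustrative assumptions, not the values used in the paper):

```python
import math

def threshold_friction_velocity(d, rho_p, rho_a=1.2, g=9.81,
                                a_n=0.0123, gamma=3.0e-4):
    # u*_t = sqrt(a_n * (rho_p/rho_a * g * d + gamma / (rho_a * d)))
    # d: particle diameter [m]; rho_p: particle density [kg/m^3];
    # rho_a: air density [kg/m^3]; gamma: cohesion parameter (assumed).
    return math.sqrt(a_n * (rho_p / rho_a * g * d + gamma / (rho_a * d)))
```

Cohesion makes very fine particles harder to lift despite their small weight, so the threshold has a minimum at intermediate sizes (tens of micrometers), which is consistent with moderate winds being able to resuspend a deposit rich in such particles.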

  1. Gravity Field Recovery from the Cartwheel Formation by the Semi-analytical Approach

    NASA Astrophysics Data System (ADS)

    Li, Huishu; Reubelt, Tilo; Antoni, Markus; Sneeuw, Nico; Zhong, Min; Zhou, Zebing

    2016-04-01

    Past and current gravimetric satellite missions have contributed substantially to our knowledge of the Earth's gravity field. Nevertheless, several geoscience disciplines push for even higher requirements on the accuracy, homogeneity, and time and space resolution of the Earth's gravity field. Apart from better instruments or new observables, alternative satellite formations could improve the signal and error structure. With respect to other methods, one significant advantage of the semi-analytical approach is its effective pre-mission error assessment for gravity field missions. The semi-analytical approach builds a linear analytical relationship between the Fourier spectrum of the observables and the spherical harmonic spectrum of the gravity field. The spectral link between observables and gravity field parameters is given by the transfer coefficients, which constitute the observation model. In connection with a stochastic model, it can be used for pre-mission error assessment of gravity field missions. The cartwheel formation is formed by two satellites on elliptic orbits in the same plane. The time-dependent ranging is considered in the transfer coefficients via convolution, including the series expansion of the eccentricity functions. The transfer coefficients are applied to assess the error patterns for range-rate and range acceleration caused by different orientations of the cartwheel. This work presents the isotropy and magnitude of the formal errors of the gravity field coefficients for different orientations of the cartwheel.

  2. Recursive-operator method in vibration problems for rod systems

    NASA Astrophysics Data System (ADS)

    Rozhkova, E. V.

    2009-12-01

    Using linear differential equations with constant coefficients describing one-dimensional dynamical processes as an example, we show that the solutions of these equations and systems are related to the solution of the corresponding numerical recursion relations and one does not have to compute the roots of the corresponding characteristic equations. The arbitrary functions occurring in the general solution of the homogeneous equations are determined by the initial and boundary conditions or are chosen from various classes of analytic functions. The solutions of the inhomogeneous equations are constructed in the form of integro-differential series acting on the right-hand side of the equation, and the coefficients of the series are determined from the same recursion relations. The convergence of formal solutions as series of a more general recursive-operator construction was proved in [1]. In the special case where the solutions of the equation can be represented in separated variables, the power series can be effectively summed, i.e., expressed in terms of elementary functions, and coincide with the known solutions. In this case, to determine the natural vibration frequencies, one obtains algebraic rather than transcendental equations, which permits exactly determining the imaginary and complex roots of these equations without using the graphic method [2, pp. 448-449]. The correctness of the obtained formulas (differentiation formulas, explicit expressions for the series coefficients, etc.) can be verified directly by appropriate substitutions; therefore, we do not prove them here.
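As a concrete instance of the recursion idea for constant-coefficient equations, the power-series solution of y'' + omega^2*y = 0 with y(0) = 1, y'(0) = 0 can be generated term by term from the recursion a_{k+2} = -omega^2 * a_k / ((k+1)(k+2)), obtained by substituting the series into the equation; the series sums to the elementary solution cos(omega*t). A minimal sketch:

```python
import math

def series_solution_cos(omega, t, terms=30):
    # Power-series solution of y'' + omega^2*y = 0, y(0)=1, y'(0)=0.
    # Coefficients follow a_{k+2} = -omega^2 * a_k / ((k+1)*(k+2)),
    # starting from a_0 = 1, a_1 = 0; the sum equals cos(omega*t).
    a = [0.0] * (2 * terms)
    a[0] = 1.0
    for k in range(len(a) - 2):
        a[k + 2] = -omega ** 2 * a[k] / ((k + 1) * (k + 2))
    return sum(c * t ** k for k, c in enumerate(a))
```

This illustrates the point of the abstract: the recursion alone generates the solution, with no need to find the roots of the characteristic equation first.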

  3. Inter-annual variability and long term predictability of exchanges through the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Boutov, Dmitri; Peliz, Álvaro; Miranda, Pedro M. A.; Soares, Pedro M. M.; Cardoso, Rita M.; Prieto, Laura; Ruiz, Javier; García-Lafuente, Jesus

    2014-03-01

    Inter-annual variability of calculated barotropic (netflow) and simulated baroclinic (inflow and outflow) exchanges through the Strait of Gibraltar is analyzed, and their response to the main modes of atmospheric variability is investigated. Time series of the outflow obtained by high resolution simulations and estimated from in-situ Acoustic Doppler Current Profiler (ADCP) current measurements are compared. The time coefficients (TC) of the leading empirical orthogonal function (EOF) modes that describe zonal atmospheric circulation in the vicinity of the Strait (1st and 3rd of Sea-Level Pressure (SLP) and 1st of the wind) show significant covariance with the inflow and outflow. Based on these analyses, a regression model between these SLP TCs and the outflow of Mediterranean Water was developed. This regression outflow time series was compared with estimates based on current meter observations, and the predictability and reconstruction of past exchange variability based on atmospheric pressure fields are discussed. The simple regression model seems to reproduce the outflow evolution reasonably well, with the exception of the year 2008, which is apparently anomalous and for which no physical explanation is yet available. The exchange time series show a reduced inter-annual variability (less than 1%, 2.6% and 3.1% of total 2-day variability, for netflow, inflow and outflow, respectively). From a statistical point of view, no clear long-term tendencies were revealed. Anomalously high baroclinic fluxes are reported for the years 2000-2001, coincident with a strong impact on the Alboran Sea ecosystem. The origin of the anomalous flow is associated with a strong negative anomaly (~ -9 hPa) in the atmospheric pressure fields settled north of the Iberian Peninsula and extending over the central Atlantic, favoring an increased zonal circulation in winter 2000/2001. These low pressure fields forced intense and durable westerly winds in the Gulf of Cadiz-Alboran system.
The signal of this anomaly is also seen in time coefficients of the most significant EOF modes. The predictability of the exchanges for future climate is discussed.

  4. Development of a real-time wave field reconstruction TEM system (II): correction of coma aberration and 3-fold astigmatism, and real-time correction of 2-fold astigmatism.

    PubMed

    Tamura, Takahiro; Kimura, Yoshihide; Takai, Yoshizo

    2018-02-01

    In this study, a function for the correction of coma aberration, 3-fold astigmatism and real-time correction of 2-fold astigmatism was newly incorporated into a recently developed real-time wave field reconstruction TEM system. The aberration correction function was developed by modifying the image-processing software previously designed for auto focus tracking, as described in the first article of this series. Using the newly developed system, the coma aberration and 3-fold astigmatism were corrected using the aberration coefficients obtained experimentally before the processing was carried out. In this study, these aberration coefficients were estimated from an apparent 2-fold astigmatism induced under tilted-illumination conditions. In contrast, 2-fold astigmatism could be measured and corrected in real time from the reconstructed wave field. Here, the measurement precision for 2-fold astigmatism was found to be ±0.4 nm and ±2°. All of these aberration corrections, as well as auto focus tracking, were performed at a video frame rate of 1/30 s. Thus, the proposed novel system is promising for quantitative and reliable in situ observations, particularly in environmental TEM applications.

  5. Variable diffusion in stock market fluctuations

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2015-02-01

    We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. Each of the five most actively traded indices contains two time intervals during the day in which the variance of increments can be fit by power-law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with a linear variable diffusion coefficient as a lowest-order approximation to the real dynamics of financial markets, and use it to test the effects of the time-averaging techniques typically used in financial time series analysis. We find that our model replicates the major stylized facts associated with empirical financial time series. We also find that ensemble-averaging techniques can identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble-average approaches will yield new insight into the study of financial market dynamics, and our proposed model provides new insight into modeling financial market dynamics on microscopic time scales.

  6. Most suitable mother wavelet for the analysis of fractal properties of stride interval time series via the average wavelet coefficient

    PubMed Central

    Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan

    2016-01-01

    Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker of related diseases. A previous study showed that the average wavelet method provides the most accurate results for estimating this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents that indicate a range of physiologically conceivable fractal signals. Five candidate mother wavelets were chosen due to their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother wavelet without the burden of considering the signal length. PMID:27960102
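The average-wavelet-coefficient (AWC) idea can be illustrated with the simplest mother wavelet, the Haar wavelet: for a fractal signal with scaling exponent H, the mean absolute detail coefficient at dyadic level j grows roughly as 2^{j(H + 1/2)}, so H is recovered from a log-log slope. A minimal sketch (Haar only; the paper compares sixteen mother wavelets):

```python
import math
import random

def haar_awc_exponent(x, levels=6):
    # Mean absolute Haar detail coefficient per dyadic level, then a
    # least-squares fit of log2(mean |detail|) against level j; for a
    # signal with scaling exponent H the slope is approximately H + 1/2.
    approx = list(x)
    pts = []
    for j in range(1, levels + 1):
        details = [(approx[2 * i] - approx[2 * i + 1]) / math.sqrt(2.0)
                   for i in range(len(approx) // 2)]
        approx = [(approx[2 * i] + approx[2 * i + 1]) / math.sqrt(2.0)
                  for i in range(len(approx) // 2)]
        pts.append((j, math.log2(sum(abs(d) for d in details) / len(details))))
    n = len(pts)
    mj = sum(j for j, _ in pts) / n
    mv = sum(v for _, v in pts) / n
    slope = (sum((j - mj) * (v - mv) for j, v in pts)
             / sum((j - mj) ** 2 for j, _ in pts))
    return slope - 0.5
```

For an ordinary random walk (scaling exponent H = 0.5) the estimate should land near 0.5, with some bias and scatter at short signal lengths, which is exactly the regime the paper's mother-wavelet comparison addresses.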

  7. In search of functional association from time-series microarray data based on the change trend and level of gene expression

    PubMed Central

    He, Feng; Zeng, An-Ping

    2006-01-01

    Background The increasing availability of time-series expression data opens up new possibilities to study functional linkages of genes. Present methods used to infer functional linkages between genes from expression data are mainly based on a point-to-point comparison. Change trends between consecutive time points in time-series data have so far not been well explored. Results In this work we present a new method based on extracting main features of the change trend and level of gene expression between consecutive time points. The method, termed trend correlation (TC), includes two major steps: (1) calculating a maximal local alignment of change trend score by dynamic programming and a change trend correlation coefficient between the maximal matched change levels of each gene pair; and (2) inferring relationships of gene pairs based on two statistical extraction procedures. The new method considers time shifts and inverted relationships in a similar way as the local clustering (LC) method, but the latter is merely based on a point-to-point comparison. The TC method is demonstrated with data from the yeast cell cycle and compared with the LC method and the widely used Pearson correlation coefficient (PCC) based clustering method. The biological significance of the gene pairs is examined with several large-scale yeast databases. Although the TC method predicts an overall lower number of gene pairs than the other two methods at the same p-value threshold, the additional number of gene pairs inferred by the TC method is considerable: e.g. 20.5% compared with the LC method and 49.6% with the PCC method for a p-value threshold of 2.7E-3. Moreover, the percentage of the inferred gene pairs consistent with databases by our method is generally higher than the LC method and similar to the PCC method.
A significant number of the gene pairs only inferred by the TC method are process-identity or function-similarity pairs or have well-documented biological interactions, including 443 known protein interactions and some known cell cycle related regulatory interactions. It should be emphasized that the overlapping of gene pairs detected by the three methods is normally not very high, indicating a necessity of combining the different methods in search of functional association of genes from time-series data. For a p-value threshold of 1E-5 the percentage of process-identity and function-similarity gene pairs among the shared part of the three methods reaches 60.2% and 55.6% respectively, building a good basis for further experimental and functional study. Furthermore, the combined use of methods is important to infer more complete regulatory circuits and network as exemplified in this study. Conclusion The TC method can significantly augment the current major methods to infer functional linkages and biological network and is well suitable for exploring temporal relationships of gene expression in time-series data. PMID:16478547

  8. Currency crises and the evolution of foreign exchange market: Evidence from minimum spanning tree

    NASA Astrophysics Data System (ADS)

    Jang, Wooseok; Lee, Junghoon; Chang, Woojin

    2011-02-01

    We examined the time series properties of the foreign exchange market for 1990-2008 in relation to the history of the currency crises using the minimum spanning tree (MST) approach and made several meaningful observations about the MST of currencies. First, around currency crises, the mean correlation coefficient between currencies decreased whereas the normalized tree length increased. The mean correlation coefficient dropped dramatically passing through the Asian crisis and remained at the lowered level after that. Second, the Euro and the US dollar showed a strong negative correlation after 1997, implying that the prices of the two currencies moved in opposite directions. Third, we observed that Asian countries and Latin American countries moved away from the cluster center (USA) passing through the Asian crisis and Argentine crisis, respectively.

  9. Complementary effects of surface water and groundwater on soil moisture dynamics in a degraded coastal floodplain forest

    NASA Astrophysics Data System (ADS)

    Kaplan, D.; Muñoz-Carpena, R.

    2011-02-01

    Restoration of degraded floodplain forests requires a robust understanding of surface water, groundwater, and vadose zone hydrology. Soil moisture is of particular importance for seed germination and seedling survival, but is difficult to monitor and often overlooked in wetland restoration studies. This research hypothesizes that the complex effects of surface water and shallow groundwater on the soil moisture dynamics of floodplain wetlands are spatially complementary. To test this hypothesis, 31 long-term (4-year) hydrological time series were collected in the floodplain of the Loxahatchee River (Florida, USA), where watershed modifications have led to reduced freshwater flow, altered hydroperiod and salinity, and a degraded ecosystem. Dynamic factor analysis (DFA), a time series dimension reduction technique, was applied to model temporal and spatial variation in 12 soil moisture time series as linear combinations of common trends (representing shared, but unexplained, variability) and explanatory variables (selected from 19 additional candidate hydrological time series). The resulting dynamic factor models yielded good predictions of observed soil moisture series (overall coefficient of efficiency = 0.90) by identifying surface water elevation, groundwater elevation, and net recharge (cumulative rainfall minus cumulative evapotranspiration) as important explanatory variables. Strong and complementary linear relationships were found between floodplain elevation and surface water effects (slope = 0.72, R2 = 0.86, p < 0.001), and between elevation and groundwater effects (slope = -0.71, R2 = 0.71, p = 0.001), while the effect of net recharge was homogenous across the experimental transect (slope = 0.03, R2 = 0.05, p = 0.242).
This study provides a quantitative insight into the spatial structure of groundwater and surface water effects on soil moisture that will be useful for refining monitoring plans and developing ecosystem restoration and management scenarios in degraded coastal floodplains.
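The "coefficient of efficiency" used above to score the dynamic factor models is commonly the Nash-Sutcliffe measure: one minus the ratio of squared model error to the variance of the observations. A minimal sketch, assuming that standard definition:

```python
def coefficient_of_efficiency(obs, sim):
    # Nash-Sutcliffe efficiency: 1 for a perfect model, 0 for a model
    # no better than predicting the observed mean, negative if worse.
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst
```

An overall value of 0.90, as reported, means the model removes 90% of the squared deviation that a constant-mean prediction would leave.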

  10. Leverage effect and its causality in the Korea composite stock price index

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Yong

    2012-02-01

    In this paper, we investigate the leverage effect and its causality in the time series of the Korea Composite Stock Price Index from November of 1997 to September of 2010. The leverage effect, which can be quantitatively expressed as a negative correlation between past return and future volatility, is measured by using the cross-correlation coefficient of different time lags between the two time series of the return and the volatility. We find that past return and future volatility are negatively correlated and that the cross correlation is moderate and decays over 60 trading days. We also carry out a partial correlation analysis in order to confirm that the negative correlation between past return and future volatility is neither an artifact nor influenced by the traded volume. To determine the causality of the leverage effect within the decay time, we additionally estimate the cross correlation between past volatility and future return. With the estimate, we perform a statistical hypothesis test to demonstrate that the causal relation is in favor of the return influencing the volatility rather than the other way around.
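The quantity estimated above is a lagged Pearson coefficient between the return series and a volatility series. A minimal sketch of the lagged correlation itself (using squared returns as a volatility proxy would be an assumption; the paper's exact volatility definition may differ):

```python
def lagged_cross_corr(x, y, lag):
    # Pearson correlation between x_t and y_{t+lag}. For the leverage
    # effect, x is the return series and y a volatility series; a
    # negative value at positive lag means past returns anticipate
    # future volatility.
    if lag > 0:
        a, b = x[:-lag], y[lag:]
    elif lag < 0:
        a, b = x[-lag:], y[:lag]
    else:
        a, b = x, y
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    sa = (sum((u - ma) ** 2 for u in a) / n) ** 0.5
    sb = (sum((v - mb) ** 2 for v in b) / n) ** 0.5
    return cov / (sa * sb)
```

Comparing the curve at positive lags (return leading volatility) with the curve at negative lags is precisely the causality check the abstract describes.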

  11. Statistical process control of mortality series in the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database: implications of the data generating process.

    PubMed

    Moran, John L; Solomon, Patricia J

    2013-05-24

    Statistical process control (SPC), an initiative originating in industry, has recently been applied in health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increase in false-alarm frequency. Monthly mean raw mortality (at hospital discharge) time series, 1995-2009, at the individual intensive care unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for series (i) autocorrelation and seasonality was demonstrated using (partial) autocorrelation ((P)ACF) function displays and classical series decomposition, and (ii) "in-control" status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series methods to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. The overall data set, 1995-2009, consisted of 491,324 records from 137 ICU sites; average raw mortality was 14.07%; average (SD) raw and expected mortalities ranged from 0.012 (0.113) and 0.013 (0.045) to 0.296 (0.457) and 0.278 (0.247), respectively. For the raw mortality series, 71 sites had continuous data for assessment up to or beyond lag 40, and 35% had autocorrelation through to lag 40; of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series.
Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues.
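The EWMA chart referred to above smooths each new observation into z_t = lam*x_t + (1 - lam)*z_{t-1} and signals when z_t leaves the control limits. A minimal, non-risk-adjusted sketch (lam = 0.2 and 3-sigma limits are conventional choices; the paper's charts additionally risk-adjust against the expected mortality series):

```python
import math

def ewma_chart(series, target, sigma, lam=0.2, width=3.0):
    # Returns (z, lcl, ucl, signal) per observation, using the exact
    # time-varying control limits for the EWMA statistic.
    z = target
    out = []
    for t, x in enumerate(series, start=1):
        z = lam * x + (1.0 - lam) * z
        half = width * sigma * math.sqrt(
            lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t)))
        out.append((z, target - half, target + half,
                    not (target - half <= z <= target + half)))
    return out
```

Because the EWMA accumulates memory of recent observations, autocorrelation in the input series inflates its variance relative to the independence assumption behind the limits, which is the false-positive mechanism the study documents.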

  12. Kinematic Validation of a Multi-Kinect v2 Instrumented 10-Meter Walkway for Quantitative Gait Assessments.

    PubMed

    Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn

    2015-01-01

    Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body point's time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point's time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point's time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters' walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman's bias and limits of agreement. Body point's time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point's time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. 
Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
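The between-systems agreement statistic used above can be sketched as a two-way random-effects, single-measures, absolute-agreement ICC(2,1); the simulated stride-length numbers and two-system setup below are illustrative assumptions, not the study's data:

```python
import numpy as np

def icc_2_1(data):
    """Two-way random, single-measures, absolute-agreement ICC(2,1).

    data: (n_targets, k_raters) array; here the two 'raters' are the
    Kinect set-up and the reference motion-registration system.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-targets mean square
    msc = ss_cols / (k - 1)                 # between-systems mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
truth = rng.normal(1.3, 0.2, 30)            # e.g. 30 stride lengths (m)
system_a = truth + rng.normal(0, 0.01, 30)  # reference-like system
system_b = truth + rng.normal(0, 0.02, 30)  # Kinect-like system
icc = icc_2_1(np.column_stack([system_a, system_b]))
```

With measurement noise much smaller than the between-subject spread, the ICC approaches 1, matching the "high agreement" interpretation of ICC ≥ 0.888 above.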

  14. Shortened acquisition protocols for the quantitative assessment of the 2-tissue-compartment model using dynamic PET/CT 18F-FDG studies.

    PubMed

    Strauss, Ludwig G; Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2011-03-01

    (18)F-FDG kinetics are quantified by a 2-tissue-compartment model. The routine use of dynamic PET is limited because of this modality's 1-h acquisition time. We evaluated shortened acquisition protocols of up to 0-30 min with respect to the accuracy of data analysis with the 2-tissue-compartment model. Full dynamic series for 0-60 min were analyzed using a 2-tissue-compartment model. The time-activity curves and the resulting parameters for the model were stored in a database. Shortened acquisition data were generated from the database using the following time intervals: 0-10, 0-16, 0-20, 0-25, and 0-30 min. Furthermore, the impact of adding a 60-min uptake value to the dynamic series was evaluated. The datasets were analyzed using dedicated software to predict the results of the full dynamic series. The software is based on a modified support vector machines (SVM) algorithm and predicts the compartment parameters of the full dynamic series. The SVM-based software provides user-independent results and was accurate at predicting the compartment parameters of the full dynamic series. If a squared correlation coefficient of 0.8 (corresponding to 80% explained variance of the data) was used as a limit, a shortened acquisition of 0-16 min was accurate at predicting the 60-min 2-tissue-compartment parameters. If a limit of 0.9 (90% explained variance) was used, a dynamic series of at least 0-20 min together with the 60-min uptake value is required. Shortened acquisition protocols can be used to predict the parameters of the 2-tissue-compartment model. Either a dynamic PET series of 0-16 min or a combination of a dynamic PET/CT series of 0-20 min and a 60-min uptake value is accurate for analysis with a 2-tissue-compartment model.

  15. Direct solution for thermal stresses in a nose cap under an arbitrary axisymmetric temperature distribution

    NASA Technical Reports Server (NTRS)

    Davis, Randall C.

    1988-01-01

    The design of a nose cap for a hypersonic vehicle is an iterative process requiring a rapid, easy to use and accurate stress analysis. The objective of this paper is to develop such a stress analysis technique from a direct solution of the thermal stress equations for a spherical shell. The nose cap structure is treated as a thin spherical shell with an axisymmetric temperature distribution. The governing differential equations are solved by expressing the stress solution to the thermoelastic equations in terms of a series of derivatives of the Legendre polynomials. The process of finding the coefficients for the series solution in terms of the temperature distribution is generalized by expressing the temperature along the shell and through the thickness as a polynomial in the spherical angle coordinate. Under this generalization the orthogonality property of the Legendre polynomials leads to a sequence of integrals involving powers of the spherical shell coordinate times the derivative of the Legendre polynomials. The coefficients of the temperature polynomial appear outside of these integrals. Thus, the integrals are evaluated only once and their values tabulated for use with any arbitrary polynomial temperature distribution.
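The reusable integrals described above (powers of the coordinate times derivatives of Legendre polynomials) can be tabulated once with standard polynomial routines. This numpy sketch illustrates the idea and is not the paper's implementation:

```python
import numpy as np
from numpy.polynomial import legendre as L
from numpy.polynomial import polynomial as P

def power_times_dlegendre_integral(k, n):
    """I(k, n) = integral over [-1, 1] of x**k * P_n'(x) dx.

    These integrals are evaluated once and tabulated; the coefficients of
    any polynomial temperature distribution then sit outside them.
    """
    dleg = L.legder([0] * n + [1])         # Legendre coefficients of P_n'
    poly = L.leg2poly(dleg)                # -> ordinary power-series form
    prod = P.polymul([0] * k + [1], poly)  # multiply by x**k
    anti = P.polyint(prod)                 # antiderivative
    return P.polyval(1.0, anti) - P.polyval(-1.0, anti)

# Build a small table; e.g. P_1'(x) = 1, so I(0, 1) = 2 and I(1, 1) = 0.
table = {(k, n): power_times_dlegendre_integral(k, n)
         for k in range(4) for n in range(1, 5)}
```

In the actual shell problem the integrals involve the spherical angle rather than the raw Legendre argument, but the tabulate-once structure is the same.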

  16. The mean-variance relationship reveals two possible strategies for dynamic brain connectivity analysis in fMRI.

    PubMed

    Thompson, William H; Fransson, Peter

    2015-01-01

    When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections should be considered significant in the analysis can be addressed in a rather straightforward manner by a statistical thresholding that is based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity, and the problem of deciding which brain connections are to be considered relevant in the context of dynamical changes in connectivity offers further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding suggests that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity, whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed.
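A connectivity time-series of the kind discussed above is commonly built with sliding-window correlations; this sketch (window length, signal model and noise level are assumptions, not the study's pipeline) shows how its mean and variance are obtained:

```python
import numpy as np

def sliding_correlation(x, y, win):
    """Sliding-window Pearson correlation between two equal-length series."""
    r = []
    for start in range(len(x) - win + 1):
        xs, ys = x[start:start + win], y[start:start + win]
        r.append(np.corrcoef(xs, ys)[0, 1])
    return np.asarray(r)

rng = np.random.default_rng(2)
n, win = 400, 50
shared = rng.normal(size=n)              # common "network" signal
roi_a = shared + 0.5 * rng.normal(size=n)
roi_b = shared + 0.5 * rng.normal(size=n)

r_t = sliding_correlation(roi_a, roi_b, win)
mean_r, var_r = r_t.mean(), r_t.var(ddof=1)  # the two thresholding axes
```

Magnitude-based thresholding ranks edges by `mean_r`, variance-based thresholding by `var_r`; the abstract's point is that these two rankings favour different connections.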

  17. Clustering change patterns using Fourier transformation with time-course gene expression data.

    PubMed

    Kim, Jaehee

    2011-01-01

    To understand the behavior of genes, it is important to explore how the patterns of gene expression change over a period of time, because biologically related gene groups can share the same change patterns. In this study, the problem of finding similar change patterns is reduced to clustering with the derivative Fourier coefficients. This work is aimed at discovering gene groups with similar change patterns which share similar biological properties. We developed a statistical model using derivative Fourier coefficients to identify similar change patterns of gene expression. We used a model-based method to cluster the Fourier series estimation of derivatives. We applied our model to cluster change patterns of yeast cell cycle microarray expression data with alpha-factor synchronization. Because the method clusters data according to neighborhood probabilities, the model-based clustering with our proposed model yielded biologically interpretable results. We expect that our proposed Fourier analysis with suitably chosen smoothing parameters could serve as a useful tool in classifying genes and interpreting possible biological change patterns.

  18. An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling

    NASA Astrophysics Data System (ADS)

    Wang, Enjiang; Liu, Yang

    2018-01-01

    The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirement. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive least-squares-optimisation-based implicit spatial FD coefficients. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus provide more accurate wavefields.
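The Taylor-series-based FD coefficients mentioned above come from solving a small linear system; this sketch derives explicit central weights only (the paper's implicit and least-squares-optimised coefficients refine this baseline):

```python
import numpy as np
from math import factorial

def fd_weights(offsets, order):
    """Taylor-series FD weights on the given stencil offsets (in units of
    the grid spacing) for the derivative of the given order."""
    offsets = np.asarray(offsets, dtype=float)
    n = len(offsets)
    # Moment conditions: sum_j w_j * s_j**i = i! * delta(i, order)
    A = np.vander(offsets, n, increasing=True).T  # A[i, j] = offsets[j]**i
    b = np.zeros(n)
    b[order] = factorial(order)
    return np.linalg.solve(A, b)

# Classical 3-point second-derivative stencil: [1, -2, 1] / h**2 (h = 1).
w3 = fd_weights([-1, 0, 1], order=2)
w5 = fd_weights([-2, -1, 0, 1, 2], order=2)
```

Longer stencils reduce spatial dispersion at higher cost, which is exactly the trade-off the implicit and optimised schemes above aim to improve.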

  19. Frequency analysis via the method of moment functionals

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.; Pan, J. Q.

    1990-01-01

    Several variants are presented of a linear-in-parameters least squares formulation for determining the transfer function of a stable linear system at specified frequencies given a finite set of Fourier series coefficients calculated from transient nonstationary input-output data. The basis of the technique is Shinbrot's classical method of moment functionals using complex Fourier based modulating functions to convert a differential equation model on a finite time interval into an algebraic equation which depends linearly on frequency-related parameters.

  20. Prediction with Pooled Cross-Section and Time-Series Data: Two Case Studies.

    DTIC Science & Technology

    1982-02-01

    It may be that Venezuela's role as a major oil exporter makes the inflationary behavior ... so different that nothing can be gained by observing ... to first test for the overall homogeneity (equality) of the coefficients. If this ... [R. C. Vogel, "The Dynamics of Inflation in Latin ...", 1977]

  1. Fault Tolerant Signal Processing Using Finite Fields and Error-Correcting Codes.

    DTIC Science & Technology

    1983-06-01

    Decimation in Frequency Form, Fast Inverse Transform ... Part of Decimation in Time Form, Fast Inverse Transform ... Intermediate Variables in a Fast Inverse Transform ... component polynomials may be transformed to an equivalent series of multiplications of the related transform coefficients. The inverse transform of ...

  2. Columbia: The first five flights entry heating data series. Volume 2: The OMS Pod

    NASA Technical Reports Server (NTRS)

    Williams, S. D.

    1983-01-01

    Entry heating flight data and wind tunnel data on the OMS Pod are presented for the first five flights of the Space Shuttle Orbiter. The heating rate data are presented in terms of normalized film heat transfer coefficients as a function of angle-of-attack, Mach number, and normal shock Reynolds number. The surface heating rates and temperatures were obtained via the JSC NONLIN/INVERSE computer program. Time history plots of the surface heating rates and temperatures are also presented.

  3. The impact of an electronic health record on nurse sensitive patient outcomes: an interrupted time series analysis.

    PubMed

    Dowding, Dawn W; Turley, Marianne; Garrido, Terhilda

    2012-01-01

    To evaluate the impact of electronic health record (EHR) implementation on nursing care processes and outcomes. Interrupted time series analysis, 2003-2009. A large US not-for-profit integrated health care organization. 29 hospitals in Northern and Southern California. An integrated EHR including computerized physician order entry, nursing documentation, risk assessment tools, and documentation tools. Percentage of patients with completed risk assessments for hospital acquired pressure ulcers (HAPUs) and falls (process measures) and rates of HAPU and falls (outcome measures). EHR implementation was significantly associated with an increase in documentation rates for HAPU risk (coefficient 2.21, 95% CI 0.67 to 3.75); the increase for fall risk was not statistically significant (0.36; -3.58 to 4.30). EHR implementation was associated with a 13% decrease in HAPU rates (coefficient -0.76, 95% CI -1.37 to -0.16) but no decrease in fall rates (-0.091; -0.29 to 0.11). Irrespective of EHR implementation, HAPU rates decreased significantly over time (-0.16; -0.20 to -0.13), while fall rates did not (0.0052; -0.01 to 0.02). Hospital region was a significant predictor of variation for both HAPU (0.72; 0.30 to 1.14) and fall rates (0.57; 0.41 to 0.72). The introduction of an integrated EHR was associated with a reduction in the number of HAPUs but not in patient fall rates. Other factors, such as changes over time and hospital region, were also associated with variation in outcomes. The findings suggest that EHR impact on nursing care processes and outcomes is dependent on a number of factors that should be further explored.
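An interrupted time series model of the kind reported above is a segmented regression with a level step and a slope change at the intervention; this simulated sketch (all rates and the intervention month are invented, not the study's data) shows the design matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n, t0 = 72, 36                        # 72 months, "EHR go-live" at month 36
t = np.arange(n)
post = (t >= t0).astype(float)

# Simulated HAPU-like rate: baseline trend plus a post-intervention drop.
y = 5.0 - 0.02 * t - 0.8 * post + rng.normal(0, 0.1, n)

# Segmented-regression design: intercept, trend, level step, slope change.
X = np.column_stack([np.ones(n), t, post, post * (t - t0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change = beta[2]                # estimated step at the intervention
```

The coefficient on the step term plays the role of the EHR-associated change reported above, while the bare trend term captures the secular decline that occurs irrespective of the intervention.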

  4. How Does Variability in Aragonite Saturation Proxies Impact Our Estimates of the Intensity and Duration of Exposure to Aragonite Corrosive Conditions in a Coastal Upwelling System?

    NASA Astrophysics Data System (ADS)

    Abell, J. T.; Jacobsen, J.; Bjorkstedt, E.

    2016-02-01

    Determining aragonite saturation state (Ω) in seawater requires measurement of two parameters of the carbonate system: most commonly dissolved inorganic carbon (DIC) and total alkalinity (TA). The routine measurement of DIC and TA is not always possible on frequently repeated hydrographic lines or at moored time-series stations that collect hydrographic data at short time intervals. In such cases a proxy can be developed that relates the saturation state as derived from one-time or infrequent DIC and TA measurements (Ωmeas) to more frequently measured parameters such as dissolved oxygen (DO) and temperature (Temp). These proxies are generally based on best-fit parameterizations that utilize reference values of DO and Temp and adjust linear coefficients until the error between the proxy-derived saturation state (Ωproxy) and Ωmeas is minimized. Proxies have been used to infer Ω from moored hydrographic sensors and gliders, which routinely collect DO and Temp data but do not include carbonate parameter measurements. Proxies can also calculate Ω in regional oceanographic models which do not explicitly include carbonate parameters. Here we examine the variability and accuracy of Ωproxy along a near-shore hydrographic line and at a moored time-series station at Trinidad Head, CA. The saturation state is determined using proxies from different coastal regions of the California Current Large Marine Ecosystem and from different years of sampling along the hydrographic line. We then calculate the variability and error associated with the use of different proxy coefficients, the sensitivity to reference values and the inclusion of additional variables. We demonstrate how this variability affects estimates of the intensity and duration of exposure to aragonite corrosive conditions on the near-shore shelf and in the water column.
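A linear Ω proxy of the form described above can be fit by least squares against DO and Temp anomalies; the coefficients, reference values and synthetic "measured" Ω below are illustrative assumptions, not published proxy values:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
do = rng.uniform(100, 300, n)        # dissolved oxygen, umol/kg
temp = rng.uniform(8, 14, n)         # temperature, deg C
do_ref, t_ref = 200.0, 11.0          # reference values for the proxy

# Synthetic stand-in for DIC/TA-derived saturation state (Omega_meas).
omega_meas = (1.4 + 0.004 * (do - do_ref) + 0.10 * (temp - t_ref)
              + rng.normal(0, 0.05, n))

# Best-fit linear proxy: Omega ~ a0 + a1*(DO - DO_ref) + a2*(T - T_ref).
X = np.column_stack([np.ones(n), do - do_ref, temp - t_ref])
coef, *_ = np.linalg.lstsq(X, omega_meas, rcond=None)
omega_proxy = X @ coef
rmse = np.sqrt(np.mean((omega_proxy - omega_meas) ** 2))
corrosive_fraction = np.mean(omega_proxy < 1.0)  # Omega < 1: corrosive
```

The sensitivity analyses described above amount to repeating this fit with coefficients or reference values borrowed from other regions or years and propagating the resulting error into the Ω < 1 exposure estimates.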

  5. Anisotropic diffusion of fluorescently labeled ATP in rat cardiomyocytes determined by raster image correlation spectroscopy

    PubMed Central

    Vendelin, Marko; Birkedal, Rikke

    2008-01-01

    A series of experimental data points to the existence of profound diffusion restrictions of ADP/ATP in rat cardiomyocytes. This assumption is required to explain the measurements of kinetics of respiration, sarcoplasmic reticulum loading with calcium, and kinetics of ATP-sensitive potassium channels. To be able to analyze and estimate the role of intracellular diffusion restrictions on bioenergetics, the intracellular diffusion coefficients of metabolites have to be determined. The aim of this work was to develop a practical method for determining diffusion coefficients in anisotropic medium and to estimate the overall diffusion coefficients of fluorescently labeled ATP in rat cardiomyocytes. For that, we have extended raster image correlation spectroscopy (RICS) protocols to be able to discriminate the anisotropy in the diffusion coefficient tensor. Using this extended protocol, we estimated diffusion coefficients of ATP labeled with the fluorescent conjugate Alexa Fluor 647 (Alexa-ATP). In the analysis, we assumed that the diffusion tensor can be described by two values: diffusion coefficient along the myofibril and that across it. The average diffusion coefficients found for Alexa-ATP were as follows: 83 ± 14 μm2/s in the longitudinal and 52 ± 16 μm2/s in the transverse directions (n = 8, mean ± SD). Those values are ∼2 (longitudinal) and ∼3.5 (transverse) times smaller than the diffusion coefficient value estimated for the surrounding solution. Such uneven reduction of average diffusion coefficient leads to anisotropic diffusion in rat cardiomyocytes. Although the source for such anisotropy is uncertain, we speculate that it may be induced by the ordered pattern of intracellular structures in rat cardiomyocytes. PMID:18815224

  6. Non-target time trend screening: a data reduction strategy for detecting emerging contaminants in biological samples.

    PubMed

    Plassmann, Merle M; Tengstrand, Erik; Åberg, K Magnus; Benskin, Jonathan P

    2016-06-01

    Non-targeted mass spectrometry-based approaches for detecting novel xenobiotics in biological samples are hampered by the occurrence of naturally fluctuating endogenous substances, which are difficult to distinguish from environmental contaminants. Here, we investigate a data reduction strategy for datasets derived from a biological time series. The objective is to flag reoccurring peaks in the time series based on increasing peak intensities, thereby reducing peak lists to only those which may be associated with emerging bioaccumulative contaminants. As a result, compounds with increasing concentrations are flagged while compounds displaying random, decreasing, or steady-state time trends are removed. As an initial proof of concept, we created artificial time trends by fortifying human whole blood samples with isotopically labelled standards. Different scenarios were investigated: eight model compounds had a continuously increasing trend in the last two to nine time points, and four model compounds had a trend that reached steady state after an initial increase. Each time series was investigated at three fortification levels and one unfortified series. Following extraction, analysis by ultra performance liquid chromatography high-resolution mass spectrometry, and data processing, a total of 21,700 aligned peaks were obtained. Peaks displaying an increasing trend were filtered from randomly fluctuating peaks using time trend ratios and Spearman's rank correlation coefficients. The first approach was successful in flagging model compounds spiked at only two to three time points, while the latter approach resulted in all model compounds ranking in the top 11 % of the peak lists. Compared to initial peak lists, a combination of both approaches reduced the size of datasets by 80-85 %. Overall, non-target time trend screening represents a promising data reduction strategy for identifying emerging bioaccumulative contaminants in biological samples. 
Graphical abstract: Using time trends to filter out emerging contaminants from large peak lists.
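The two flagging criteria described above (a time trend ratio and Spearman's rank correlation against time) can be sketched as follows; the toy peak-intensity series and thresholds are assumptions for illustration:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no-ties case) via Pearson on ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def trend_ratio(intensities, n_last=3):
    """Mean of the last n points over the mean of the rest: values well
    above 1 suggest a recently increasing (possibly bioaccumulating) peak."""
    x = np.asarray(intensities, dtype=float)
    return x[-n_last:].mean() / max(x[:-n_last].mean(), 1e-12)

time_points = np.arange(9)
increasing = np.array([1, 1, 1, 1, 1, 2, 4, 8, 16], dtype=float)  # spiked trend
random_peak = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5], dtype=float)  # fluctuating

rho_inc = spearman_rho(time_points, increasing)
rho_rand = spearman_rho(time_points, random_peak)
flag_inc = rho_inc > 0.8 and trend_ratio(increasing) > 2.0
flag_rand = rho_rand > 0.8 and trend_ratio(random_peak) > 2.0
```

Applied across a full aligned peak list, such filters retain only monotonically rising features, which is how the dataset reduction of roughly 80-85 % described above is achieved.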

  7. Quasi-measures on the group G{sup m}, Dirichlet sets, and uniqueness problems for multiple Walsh series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plotnikov, Mikhail G

    2011-02-11

    Multiple Walsh series (S) on the group G{sup m} are studied. It is proved that every at most countable set is a uniqueness set for series (S) under convergence over cubes. The recovery problem is solved for the coefficients of series (S) that converge outside countable sets or outside sets of Dirichlet type. A number of analogues of the de la Vallee Poussin theorem are established for series (S). Bibliography: 28 titles.

  8. A method to identify differential expression profiles of time-course gene data with Fourier transformation.

    PubMed

    Kim, Jaehee; Ogden, Robert Todd; Kim, Haseong

    2013-10-18

    Time course gene expression experiments are an increasingly popular method for exploring biological processes. Temporal gene expression profiles provide an important characterization of gene function, as biological systems are both developmental and dynamic. With such data it is possible to study gene expression changes over time and thereby to detect differential genes. Much of the early work on analyzing time series expression data relied on methods developed originally for static data and thus there is a need for improved methodology. Since time series expression is a temporal process, its unique features such as autocorrelation between successive points should be incorporated into the analysis. This work aims to identify genes that show different gene expression profiles across time. We propose a statistical procedure to discover gene groups with similar profiles using a nonparametric representation that accounts for the autocorrelation in the data. In particular, we first represent each profile in terms of a Fourier basis, and then we screen out genes that are not differentially expressed based on the Fourier coefficients. Finally, we cluster the remaining gene profiles using a model-based approach in the Fourier domain. We evaluate the screening results in terms of sensitivity, specificity, FDR and FNR, compare with the Gaussian process regression screening in a simulation study and illustrate the results by application to yeast cell-cycle microarray expression data with alpha-factor synchronization.The key elements of the proposed methodology: (i) representation of gene profiles in the Fourier domain; (ii) automatic screening of genes based on the Fourier coefficients and taking into account autocorrelation in the data, while controlling the false discovery rate (FDR); (iii) model-based clustering of the remaining gene profiles. Using this method, we identified a set of cell-cycle-regulated time-course yeast genes. 
The proposed method is general and can potentially be used to identify genes that share the same patterns or biological processes, and can help face the present and forthcoming challenges of data analysis in functional genomics.
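The screening step above, representing each profile in the Fourier domain and flagging genes whose non-constant coefficients carry excess energy, can be sketched with synthetic profiles (the cyclic-gene fraction, noise level and quantile threshold are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(5)
n_genes, n_time = 200, 18
t = np.linspace(0, 2 * np.pi, n_time, endpoint=False)

# 20 "cell-cycle" genes with a sinusoidal profile, the rest flat noise.
profiles = rng.normal(0, 0.3, (n_genes, n_time))
profiles[:20] += np.sin(t + rng.uniform(0, 2 * np.pi, (20, 1)))

coeffs = np.fft.rfft(profiles, axis=1) / n_time
# Screening statistic: energy in the non-constant Fourier coefficients.
energy = (np.abs(coeffs[:, 1:]) ** 2).sum(axis=1)
threshold = np.quantile(energy, 0.85)
flagged = np.flatnonzero(energy > threshold)
```

The retained genes would then be passed to the model-based clustering step in the Fourier domain; the FDR-controlled screening in the paper replaces the fixed quantile used here.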

  9. Improving cluster-based missing value estimation of DNA microarray data.

    PubMed

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNNimpute has a smaller detrimental effect on the detection of differentially expressed genes.
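The reuse-of-estimates idea behind iterative KNN imputation can be sketched as follows; this is a simplified unweighted variant for illustration, not the authors' exact algorithm, and the synthetic gene matrix is an assumption:

```python
import numpy as np

def iknn_impute(X, k=3, n_iter=5):
    """Iterative KNN imputation sketch: initialise missing entries with
    column means, then repeatedly re-estimate each one from the k nearest
    rows (genes), reusing the current estimates in the distances."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[miss] = np.take(col_mean, np.where(miss)[1])
    for _ in range(n_iter):
        for i in np.unique(np.where(miss)[0]):
            d = np.linalg.norm(X - X[i], axis=1)
            d[i] = np.inf                      # exclude the row itself
            nn = np.argsort(d)[:k]
            for j in np.where(miss[i])[0]:
                X[i, j] = X[nn, j].mean()      # reuse current estimates
    return X

rng = np.random.default_rng(6)
base = np.outer(rng.normal(size=40), np.ones(8)) + rng.normal(0, 0.1, (40, 8))
true = base.copy()
base[rng.random(base.shape) < 0.05] = np.nan   # ~5% missing values
imputed = iknn_impute(base)
nrmse = np.sqrt(np.mean((imputed - true) ** 2)) / true.std()
```

Because the neighbours' own imputed values improve each pass, later iterations refine the estimates, which is the mechanism credited above for the gains at high missing rates.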

  10. Functional magnetic resonance imaging activation detection: fuzzy cluster analysis in wavelet and multiwavelet domains.

    PubMed

    Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali

    2005-09-01

    To present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and their poor signal-to-noise ratio (SNR) that limits accurate classification of fMRI time series according to their activation contents. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare different methods, several points of the receiver operating characteristics (ROC) curves, using simulated data, are estimated and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to the conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features. (c) 2005 Wiley-Liss, Inc.

  11. Multilevel Preconditioners for Discontinuous Galerkin Approximations of Elliptic Problems with Jump Coefficients

    DTIC Science & Technology

    2010-12-01

    discontinuous coefficients on geometrically nonconforming substructures. Technical Report Serie A 634, Instituto de Matematica Pura e Aplicada, Brazil, 2009...Instituto de Matematica Pura e Aplicada, Brazil, 2010. submitted. [41] M. Dryja, M. V. Sarkis, and O. B. Widlund. Multilevel Schwarz methods for

  12. Tracking variable sedimentation rates in orbitally forced paleoclimate proxy series

    NASA Astrophysics Data System (ADS)

    Li, M.; Kump, L. R.; Hinnov, L.

    2017-12-01

    This study addresses two fundamental issues in cyclostratigraphy: quantitative testing of orbital forcing in cyclic sedimentary sequences and tracking variable sedimentation rates. The methodology proposed here addresses these issues as an inverse problem, and estimates the product-moment correlation coefficient between the frequency spectra of orbital solutions and paleoclimate proxy series over a range of "test" sedimentation rates. It is inspired by the ASM method (1). The number of orbital parameters involved in the estimation is also considered. The method relies on the hypothesis that orbital forcing had a significant impact on the paleoclimate proxy variations, and thus this hypothesis is also tested. The null hypothesis of no astronomical forcing is evaluated using the Beta distribution, for which the shape parameters are estimated using a Monte Carlo simulation approach. We introduce a metric to estimate the most likely sedimentation rate using the product-moment correlation coefficient, H0 significance level, and the number of contributing orbital parameters, i.e., the CHO value. The CHO metric is applied with a sliding window to track variable sedimentation rates along the paleoclimate proxy series. Two forward models with uniform and variable sedimentation rates are evaluated to demonstrate the robustness of the method. The CHO method is applied to the classical Late Triassic Newark depth rank series; the estimated sedimentation rates match closely with previously published sedimentation rates and provide a more highly time-resolved estimate (2,3). References: (1) Meyers, S.R., Sageman, B.B., Amer. J. Sci., 307, 773-792, 2007; (2) Kent, D.V., Olsen, P.E., Muttoni, G., Earth-Sci. Rev., 166, 153-180, 2017; (3) Li, M., Zhang, Y., Huang, C., Ogg, J., Hinnov, L., Wang, Y., Zou, Z., Li, L., 2017, Earth Planet. Sci. Lett., doi:10.1016/j.epsl.2017.07.015.
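The scan over test sedimentation rates can be illustrated by correlating the depth-to-time-converted spectrum against a template peaked at orbital frequencies; this toy sketch uses a Gaussian-peak template and synthetic obliquity/precession cycles, and is not the CHO implementation:

```python
import numpy as np

orbital_freqs = np.array([1/405, 1/125, 1/95, 1/41, 1/21])  # cycles/kyr

def spectrum_correlation(depth_series, dz, sed_rate, freqs):
    """Pearson correlation between the record's amplitude spectrum (depth
    converted to time with a test rate, m/kyr) and an orbital template."""
    n = len(depth_series)
    dt = dz / sed_rate                      # kyr per sample at this rate
    f = np.fft.rfftfreq(n, dt)
    amp = np.abs(np.fft.rfft(depth_series - np.mean(depth_series)))
    template = sum(np.exp(-0.5 * ((f - f0) / (0.05 * f0)) ** 2)
                   for f0 in freqs)
    return np.corrcoef(amp, template)[0, 1]

# Synthetic record: 41 kyr obliquity + 21 kyr precession cycles deposited
# at a true rate of 0.02 m/kyr, sampled every 0.01 m.
dz, true_rate = 0.01, 0.02
t = np.arange(2000) * dz / true_rate        # kyr
series = np.sin(2 * np.pi * t / 41) + 0.6 * np.sin(2 * np.pi * t / 21)

test_rates = np.linspace(0.005, 0.05, 91)
corr = [spectrum_correlation(series, dz, s, orbital_freqs) for s in test_rates]
best_rate = test_rates[int(np.argmax(corr))]
```

The correlation peaks where both cycles line up with their orbital frequencies at once; the CHO metric additionally folds in the H0 significance level and the number of contributing orbital parameters.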

  13. A soil water based index as a suitable agricultural drought indicator

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, J.; González-Zamora, A.; Sánchez, N.; Gumuzzio, A.

    2015-03-01

    Currently, the availability of soil water databases is increasing worldwide. The presence of a growing number of long-term soil moisture networks around the world and the impressive progress of remote sensing in recent years have allowed the scientific community and, in the near future, a diverse group of users to obtain precise and frequent soil water measurements. Therefore, it is reasonable to consider soil water observations as a potential approach for monitoring agricultural drought. In the present work, a new approach to define the soil water deficit index (SWDI) is analyzed to use a soil water series for drought monitoring. In addition, simple and accurate methods using a soil moisture series solely to obtain the soil water parameters (field capacity and wilting point) needed for calculating the index are evaluated. The application of the SWDI in an agricultural area of Spain yielded good results at both daily and weekly time scales when compared to two climatic water deficit indicators (average correlation coefficient, R, 0.6) and to agricultural production. The long-term minimum, the growing season minimum and the 5th percentile of the soil moisture series are good estimators (coefficient of determination, R2, 0.81) for the wilting point. The minimum of the maximum values of the growing seasons is the best estimator (R2, 0.91) for field capacity. The use of these types of tools for drought monitoring can aid better management of agricultural lands and water resources, mainly under the current scenario of climate uncertainty.
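A series-derived SWDI computation can be sketched as below; the synthetic moisture series is an assumption, the 5th percentile follows the wilting-point estimator above, and the 95th percentile is used here only as a simple stand-in for the seasonal-maximum-based field-capacity estimator:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic daily volumetric soil moisture (m3/m3) over ~3 seasons.
days = np.arange(3 * 365)
theta = (0.22 + 0.08 * np.sin(2 * np.pi * days / 365)
         + rng.normal(0, 0.01, days.size))

# Soil water parameters estimated from the series itself:
theta_wp = np.percentile(theta, 5)    # wilting-point estimator (paper's 5th pct)
theta_fc = np.percentile(theta, 95)   # stand-in field-capacity estimator

# SWDI: zero or above means no water deficit; increasingly negative
# values indicate increasing agricultural drought.
swdi = (theta - theta_fc) / (theta_fc - theta_wp)
drought_days = int((swdi < -1.0).sum())
```

The appeal of this formulation is that everything, index and parameters alike, comes from the soil moisture series itself, so any long-term in-situ or satellite record suffices.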

  14. Bayesian inference of selection in a heterogeneous environment from genetic time-series data.

    PubMed

    Gompert, Zachariah

    2016-01-01

    Evolutionary geneticists have sought to characterize the causes and molecular targets of selection in natural populations for many years. Although this research programme has been somewhat successful, most statistical methods employed were designed to detect consistent, weak to moderate selection. In contrast, phenotypic studies in nature show that selection varies in time and that individual bouts of selection can be strong. Measurements of the genomic consequences of such fluctuating selection could help test and refine hypotheses concerning the causes of ecological specialization and the maintenance of genetic variation in populations. Herein, I propose a Bayesian nonhomogeneous hidden Markov model to estimate effective population sizes and quantify variable selection in heterogeneous environments from genetic time-series data. The model is described and then evaluated using a series of simulated data sets, including cases where selection occurs on a trait with a simple or polygenic molecular basis. The proposed method accurately distinguished neutral loci from non-neutral loci under strong selection, but not from those under weak selection. Selection coefficients were accurately estimated when selection was constant or when the fitness values of genotypes varied linearly with the environment, but these estimates were less accurate when fitness was polygenic or the relationship between the environment and the fitness of genotypes was nonlinear. Past studies of temporal evolutionary dynamics in laboratory populations have been remarkably successful. The proposed method makes similar analyses of genetic time-series data from natural populations more feasible and thereby could help answer fundamental questions about the causes and consequences of evolution in the wild. © 2015 John Wiley & Sons Ltd.

  15. Urban green land cover changes and their relation to climatic variables in an anthropogenically impacted area

    NASA Astrophysics Data System (ADS)

    Zoran, Maria A.; Dida, Adrian I.

    2017-10-01

    Urban green areas are experiencing rapid land cover change caused by human-induced land degradation and extreme climatic events. Vegetation index time series provide a useful way to monitor urban vegetation phenological variations. This study quantitatively describes Normalized Difference Vegetation Index (NDVI)/Enhanced Vegetation Index (EVI) and Leaf Area Index (LAI) temporal changes for the Bucharest metropolitan region land cover in Romania from the perspective of vegetation phenology and its relation to climate change and extreme climate events. Time series from 2000 to 2016 of NOAA AVHRR and MODIS Terra/Aqua satellite data were analyzed to extract anomalies. Time series of climatic variables were also analyzed through anomaly detection techniques and the Fourier transform. Correlations between NDVI/EVI time series and climatic variables were computed. Temperature, rainfall and radiation were significantly correlated with almost all land-cover classes for the harmonic analysis amplitude term. However, vegetation phenology was not correlated with climatic variables for the harmonic analysis phase term, suggesting a delay between climatic variations and vegetation response. Training and validation were based on a reference dataset collected from IKONOS high-resolution remote sensing data. The mean detection accuracy for the period 2000-2016 was assessed to be 87%, with a reasonable balance between change commission errors of 19.3%, change omission errors of 24.7%, and a Kappa coefficient of 0.73. This paper demonstrates the potential of moderate- and high-resolution multispectral imagery to map and monitor the evolution of the physical urban green land cover under climate and anthropogenic pressure.

  16. The computation of ICRP dose coefficients for intakes of radionuclides with PLEIADES: biokinetic aspects.

    PubMed

    Fell, T P

    2007-01-01

    The ICRP has published dose coefficients for the ingestion or inhalation of radionuclides in a series of reports covering intakes by workers and members of the public including children and pregnant or lactating women. The calculation of these coefficients conveniently divides into two distinct parts--the biokinetic and dosimetric. This paper gives a brief summary of the methods used to solve the biokinetic problem in the generation of dose coefficients on behalf of the ICRP, as implemented in the Health Protection Agency's internal dosimetry code PLEIADES.

  17. Automated smoother for the numerical decoupling of dynamics models.

    PubMed

    Vilela, Marco; Borges, Carlos C H; Vinga, Susana; Vasconcelos, Ana Tereza R; Santos, Helena; Voit, Eberhard O; Almeida, Jonas S

    2007-08-21

    Structure identification of dynamic models for complex biological systems is the cornerstone of their reverse engineering. Biochemical Systems Theory (BST) offers a particularly convenient solution because its parameters are kinetic-order coefficients, which directly identify the topology of the underlying network of processes. We have previously proposed a numerical decoupling procedure that allows the identification of multivariate dynamic models of complex biological processes. While described here within the context of BST, this procedure has general applicability to signal extraction. Our original implementation relied on artificial neural networks (ANN), which caused slight, undesirable bias during the smoothing of the time courses. As an alternative, we propose here an adaptation of Whittaker's smoother and demonstrate its role within a robust, fully automated structure identification procedure. In this report we propose a robust, fully automated solution for signal extraction from time series, which is the prerequisite for the efficient reverse engineering of biological systems models. Whittaker's smoother is reformulated within the context of information theory and extended by the development of adaptive signal segmentation to account for heterogeneous noise structures. The resulting procedure can be used on arbitrary time series with a nonstationary noise process; it is illustrated here with metabolic profiles obtained from in-vivo NMR experiments. The smoothed solution, free of parametric bias, permits differentiation, which is crucial for the numerical decoupling of systems of differential equations. The method applies to signal extraction from time series with a nonstationary noise structure and to the numerical decoupling of systems of differential equations into algebraic equations, and thus constitutes a rather general tool for the reverse engineering of mechanistic model descriptions from multivariate experimental time series.
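The penalized least-squares form of Whittaker's smoother is compact enough to sketch directly. This is a minimal dense-matrix version with second-order differences and a fixed smoothing parameter; the paper's information-theoretic reformulation and adaptive segmentation are not reproduced here.

```python
import numpy as np

def whittaker_smooth(y, lam=100.0, d=2):
    """Whittaker smoother: minimize ||y - z||^2 + lam * ||D z||^2,
    where D is the d-th order difference matrix, by solving the
    normal equations (I + lam * D.T @ D) z = y."""
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)   # (n-d) x n difference operator
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, np.asarray(y, dtype=float))

# Noisy signal: the smoothed output should track the underlying trend.
t = np.linspace(0, 4 * np.pi, 200)
rng = np.random.default_rng(0)
y = np.sin(t) + rng.normal(0.0, 0.3, t.size)
z = whittaker_smooth(y, lam=200.0)
```

Larger `lam` gives a stiffer fit; because the solution is a smooth explicit function of the data, it can be differentiated numerically, which is the property the decoupling step relies on.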

  18. Geostatistical Analysis of Surface Temperature and In-Situ Soil Moisture Using LST Time-Series from Modis

    NASA Astrophysics Data System (ADS)

    Sohrabinia, M.; Rack, W.; Zawar-Reza, P.

    2012-07-01

    The objective of this analysis is to provide a quantitative estimate of the fluctuations of land surface temperature (LST) with varying near-surface soil moisture (SM) on different land-cover (LC) types. The study area is located in the Canterbury Plains in the South Island of New Zealand. Time series of LST from the MODerate resolution Imaging Spectro-radiometer (MODIS) have been analysed statistically to study the relationship between the surface skin temperature and near-surface SM. In-situ measurements of the skin temperature and surface SM with a quasi-experimental design over multiple LC types are used for validation. Correlations between MODIS LST and in-situ SM, as well as in-situ surface temperature and SM, are calculated. The in-situ measurements and MODIS data are collected from various LC types. Pearson's r correlation coefficient and linear regression are used to fit the MODIS LST and surface skin temperature with near-surface SM. The initial analysis showed no significant correlation between time series of MODIS LST and near-surface SM; however, careful analysis of the data showed significant correlation between the two parameters. Night-time series of the in-situ surface temperature and SM from a 12-hour period over Irrigated-Crop, Mixed-Grass, Forest, Barren and Open-Grass showed inverse correlations of -0.47, -0.68, -0.74, -0.88 and -0.93, respectively. These results indicated that the relationship between near-surface SM and LST over short terms (12 to 24 hours) is strong; however, remotely sensed LST with higher temporal resolution is required to establish this relationship at such time scales. This method can be used to study near-surface SM using more frequent LST observations from a geostationary satellite over the study area.
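The correlation used throughout is the ordinary Pearson product-moment coefficient. A minimal sketch with illustrative (synthetic, not measured) night-time values:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

# Illustrative 12-hour night-time records: surface temperature falling
# while near-surface soil moisture rises, producing an inverse
# correlation of the kind reported for the in-situ series.
hours = np.arange(12)
lst = 20.0 - 0.8 * hours + np.random.default_rng(1).normal(0.0, 0.3, 12)
sm = 0.20 + 0.01 * hours
r = pearson_r(lst, sm)  # strongly negative
```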

  19. Structural connectome topology relates to regional BOLD signal dynamics in the mouse brain

    NASA Astrophysics Data System (ADS)

    Sethi, Sarab S.; Zerbi, Valerio; Wenderoth, Nicole; Fornito, Alex; Fulcher, Ben D.

    2017-04-01

    Brain dynamics are thought to unfold on a network determined by the pattern of axonal connections linking pairs of neuronal elements; the so-called connectome. Prior work has indicated that structural brain connectivity constrains pairwise correlations of brain dynamics ("functional connectivity"), but it is not known whether inter-regional axonal connectivity is related to the intrinsic dynamics of individual brain areas. Here we investigate this relationship using a weighted, directed mesoscale mouse connectome from the Allen Mouse Brain Connectivity Atlas and resting state functional MRI (rs-fMRI) time-series data measured in 184 brain regions in eighteen anesthetized mice. For each brain region, we measured degree, betweenness, and clustering coefficient from weighted and unweighted, and directed and undirected versions of the connectome. We then characterized the univariate rs-fMRI dynamics in each brain region by computing 6930 time-series properties using the time-series analysis toolbox, hctsa. After correcting for regional volume variations, strong and robust correlations between structural connectivity properties and rs-fMRI dynamics were found only when edge weights were accounted for, and were associated with variations in the autocorrelation properties of the rs-fMRI signal. The strongest relationships were found for weighted in-degree, which was positively correlated to the autocorrelation of fMRI time series at time lag τ = 34 s (partial Spearman correlation ρ = 0.58 ), as well as a range of related measures such as relative high frequency power (f > 0.4 Hz: ρ = - 0.43 ). Our results indicate that the topology of inter-regional axonal connections of the mouse brain is closely related to intrinsic, spontaneous dynamics such that regions with a greater aggregate strength of incoming projections display longer timescales of activity fluctuations.

  20. Mapping croplands, cropping patterns, and crop types using MODIS time-series data

    NASA Astrophysics Data System (ADS)

    Chen, Yaoliang; Lu, Dengsheng; Moran, Emilio; Batistella, Mateus; Dutra, Luciano Vieira; Sanches, Ieda Del'Arco; da Silva, Ramon Felipe Bicudo; Huang, Jingfeng; Luiz, Alfredo José Barreto; de Oliveira, Maria Antonia Falcão

    2018-07-01

    The importance of mapping regional and global cropland distribution in timely ways has been recognized, but separation of crop types and multiple cropping patterns is challenging due to their spectral similarity. This study developed a new approach to identify crop types (including soy, cotton and maize) and cropping patterns (Soy-Maize, Soy-Cotton, Soy-Pasture, Soy-Fallow, Fallow-Cotton and Single crop) in the state of Mato Grosso, Brazil. The Moderate Resolution Imaging Spectroradiometer (MODIS) normalized difference vegetation index (NDVI) time series data for 2015 and 2016 and field survey data were used in this research. The major steps of the proposed approach are: (1) reconstructing NDVI time series data by removing cloud-contaminated pixels using a temporal interpolation algorithm, (2) identifying the best periods and developing temporal indices and phenological parameters to distinguish croplands from other land cover types, and (3) developing crop temporal indices to extract cropping patterns using NDVI time-series data and grouping cropping patterns into crop types. A decision tree classifier was used to map cropping patterns based on these temporal indices. Croplands from Landsat imagery in 2016, cropping pattern samples from a field survey in 2016, and the planted area of crop types in 2015 were used for accuracy assessment. Overall accuracies of approximately 90%, 73% and 86% were obtained for croplands, cropping patterns, and crop types, respectively. The adjusted coefficients of determination of total crop, soy, maize, and cotton areas with the corresponding statistical areas were 0.94, 0.94, 0.88 and 0.88, respectively. This research indicates that the proposed approach is promising for mapping large-scale croplands, their cropping patterns and crop types.

  1. A hybrid wavelet analysis-cloud model data-extending approach for meteorologic and hydrologic time series

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Ding, Hao; Singh, Vijay P.; Shang, Xiaosan; Liu, Dengfeng; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing

    2015-05-01

    For scientific and sustainable management of water resources, hydrologic and meteorologic data series often need to be extended. This paper proposes a hybrid approach, named WA-CM (wavelet analysis-cloud model), for data series extension. Wavelet analysis has time-frequency localization features, known as a "mathematical microscope," that can decompose and reconstruct hydrologic and meteorologic series by wavelet transform. The cloud model is a mathematical representation of fuzziness and randomness and has strong robustness for uncertain data. The WA-CM approach first employs the wavelet transform to decompose the measured nonstationary series and then uses the cloud model to develop an extension model for each decomposition layer series. The final extension is obtained by summing the extension results of each layer. Two kinds of meteorologic and hydrologic data sets with different characteristics and different influence of human activity from six (three pairs of) representative stations are used to illustrate the WA-CM approach. The approach is also compared with four other methods: the conventional correlation extension method, the Kendall-Theil robust line method, the artificial neural network method (back propagation, multilayer perceptron, and radial basis function), and the single cloud model method. To evaluate the model performance thoroughly, five measures are used: relative error, mean relative error, standard deviation of relative error, root mean square error, and the Theil inequality coefficient. Results show that the WA-CM approach is effective, feasible, and accurate and is found to be better than the other four methods compared. The theory employed and the approach developed here can be applied to extension of data in other areas as well.
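The five evaluation measures are all standard statistics; under their usual definitions (which the paper may refine in detail) they can be computed as:

```python
import numpy as np

def extension_metrics(obs, ext):
    """Relative error, mean relative error, standard deviation of the
    relative error, RMSE, and Theil's inequality coefficient
    (standard textbook definitions assumed)."""
    obs, ext = np.asarray(obs, float), np.asarray(ext, float)
    re = (ext - obs) / obs                      # relative error per point
    rmse = np.sqrt(np.mean((ext - obs) ** 2))
    theil_u = rmse / (np.sqrt(np.mean(obs ** 2)) + np.sqrt(np.mean(ext ** 2)))
    return {"re": re, "mre": re.mean(), "sd_re": re.std(ddof=1),
            "rmse": rmse, "theil_u": theil_u}

obs = np.array([3.1, 2.8, 3.5, 4.0, 3.3])   # observed series (illustrative)
ext = np.array([3.0, 2.9, 3.4, 4.2, 3.2])   # extended values
m = extension_metrics(obs, ext)
```

Theil's inequality coefficient is bounded between 0 (perfect extension) and 1, which makes it convenient for comparing methods across series with different scales.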

  2. Seasonal to Decadal-Scale Variability in Satellite Ocean Color and Sea Surface Temperature for the California Current System

    NASA Technical Reports Server (NTRS)

    Mitchell, B. Greg; Kahru, Mati; Marra, John (Technical Monitor)

    2002-01-01

    Support for this project was used to develop satellite ocean color and temperature indices (SOCTI) for the California Current System (CCS) using the historic record of the CZCS West Coast Time Series (WCTS), OCTS, SeaWiFS and AVHRR SST. The ocean color satellite data have been evaluated in relation to CalCOFI data sets for chlorophyll (CZCS) and for ocean spectral reflectance and chlorophyll (OCTS and SeaWiFS). New algorithms for the three missions have been implemented based on in-water algorithm data sets, or in the case of CZCS, by comparing retrieved pigments with ship-based observations. New algorithms for absorption coefficients, diffuse attenuation coefficients and primary production have also been evaluated. Satellite retrievals are being evaluated based on our large data set of pigments and optics from CalCOFI.

  3. Broken Ergodicity in Ideal, Homogeneous, Incompressible Turbulence

    NASA Technical Reports Server (NTRS)

    Morin, Lee; Shebalin, John; Fu, Terry; Nguyen, Phu; Shum, Victor

    2010-01-01

    We discuss the statistical mechanics of numerical models of ideal homogeneous, incompressible turbulence and their relevance for dissipative fluids and magnetofluids. These numerical models are based on Fourier series, and the relevant statistical theory predicts that Fourier coefficients of fluid velocity and magnetic fields (if present) are zero-mean random variables. However, numerical simulations clearly show that certain coefficients have a non-zero mean value that can be very large compared to the associated standard deviation. We explain this phenomenon in terms of 'broken ergodicity', which is defined to occur when dynamical behavior does not match ensemble predictions on very long time-scales. We review the theoretical basis of broken ergodicity, apply it to 2-D and 3-D fluid and magnetohydrodynamic simulations of homogeneous turbulence, and show new results from simulations using GPU (graphical processing unit) computers.

  4. Impact of improved momentum transfer coefficients on the dynamics and thermodynamics of the north Indian Ocean

    NASA Astrophysics Data System (ADS)

    Parekh, Anant; Gnanaseelan, C.; Jayakumar, A.

    2011-01-01

    Long time series of in situ observations from the north Indian Ocean are used to compute the momentum transfer coefficients over the north Indian Ocean. The transfer coefficients behave nonlinearly for low winds (<4 m/s), where most of the known empirical relations assume linear behavior. The impact of momentum transfer coefficients on the upper ocean parameters is studied using an ocean general circulation model. The model experiments revealed that the Arabian Sea and equatorial Indian Ocean are more sensitive to the momentum transfer coefficients than the Bay of Bengal and south Indian Ocean. The impact of momentum transfer coefficients on sea surface temperature is up to 0.3°C-0.4°C, on mixed layer depth up to 10 m, and on thermocline depth up to 15 m. Furthermore, the impact on the zonal current is maximum over the equatorial Indian Ocean (i.e., about 0.12 m/s in May and 0.15 m/s in October; both May and October are periods of Wyrtki jets, and the difference in current has a potential impact on the seasonal mass transport). The Sverdrup transport shows the maximum impact in the Bay of Bengal (3 to 4 Sv in August), whereas the Ekman transport shows the maximum impact in the Arabian Sea (4 Sv during May to July). These results highlight the potential impact of accurate momentum forcing on the results from current ocean models.

  5. Clustering Coefficients for Correlation Networks.

    PubMed

    Masuda, Naoki; Sakaki, Michiko; Ezaki, Takahiro; Watanabe, Takamitsu

    2018-01-01

    Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among the major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients were strongly correlated with the node's connectivity and therefore may be confounded by it. The proposed methods are expected to help us to understand clustering and the lack thereof in correlational brain networks, such as those derived from functional time series and across-participant correlation in neuroanatomical properties.
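The key idea, measuring the association between two neighbours of a focal node with that node partialled out, can be sketched with the standard three-way partial correlation formula. This is a rough reading of the idea under assumed weighting, not the paper's exact estimator.

```python
import numpy as np

def partial_corr(R, j, k, i):
    """Correlation between j and k with the focal node i partialled out:
    r_{jk.i} = (r_jk - r_ji * r_ik) / sqrt((1 - r_ji^2)(1 - r_ik^2))."""
    return (R[j, k] - R[j, i] * R[i, k]) / np.sqrt(
        (1.0 - R[j, i] ** 2) * (1.0 - R[i, k] ** 2))

def local_clustering(R, i):
    """Weighted average, over pairs of neighbours (j, k) of the focal
    node i, of their partial correlation given i, weighted by
    |r_ij * r_ik| (an assumed weighting for illustration)."""
    n = R.shape[0]
    num = den = 0.0
    for j in range(n):
        for k in range(j + 1, n):
            if i in (j, k):
                continue
            w = abs(R[i, j] * R[i, k])
            num += w * partial_corr(R, j, k, i)
            den += w
    return num / den if den else 0.0

# Correlation matrix from hypothetical multivariate time series.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))
X[:, 1] += X[:, 0]                      # induce one correlated pair
R = np.corrcoef(X, rowvar=False)
c = [local_clustering(R, i) for i in range(5)]
```

Because each triangle's contribution is a partial correlation, indirect paths through the focal node no longer inflate the apparent clustering, which is the pseudo-correlation problem the abstract describes.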

  6. Clustering Coefficients for Correlation Networks

    PubMed Central

    Masuda, Naoki; Sakaki, Michiko; Ezaki, Takahiro; Watanabe, Takamitsu

    2018-01-01

    Graph theory is a useful tool for deciphering structural and functional networks of the brain on various spatial and temporal scales. The clustering coefficient quantifies the abundance of connected triangles in a network and is a major descriptive statistic of networks. For example, it finds an application in the assessment of small-worldness of brain networks, which is affected by attentional and cognitive conditions, age, psychiatric disorders and so forth. However, it remains unclear how the clustering coefficient should be measured in a correlation-based network, which is among the major representations of brain networks. In the present article, we propose clustering coefficients tailored to correlation matrices. The key idea is to use three-way partial correlation or partial mutual information to measure the strength of the association between the two neighboring nodes of a focal node relative to the amount of pseudo-correlation expected from indirect paths between the nodes. Our method avoids the difficulties of previous applications of clustering coefficient (and other) measures in defining correlational networks, i.e., thresholding on the correlation value, discarding of negative correlation values, the pseudo-correlation problem and full partial correlation matrices whose estimation is computationally difficult. For proof of concept, we apply the proposed clustering coefficient measures to functional magnetic resonance imaging data obtained from healthy participants of various ages and compare them with conventional clustering coefficients. We show that the clustering coefficients decline with age. The proposed clustering coefficients are more strongly correlated with age than the conventional ones are. We also show that the local variants of the proposed clustering coefficients (i.e., abundance of triangles around a focal node) are useful in characterizing individual nodes. In contrast, the conventional local clustering coefficients were strongly correlated with the node's connectivity and therefore may be confounded by it. The proposed methods are expected to help us to understand clustering and the lack thereof in correlational brain networks, such as those derived from functional time series and across-participant correlation in neuroanatomical properties. PMID:29599714

  7. Permeability and compression characteristics of municipal solid waste samples

    NASA Astrophysics Data System (ADS)

    Durmusoglu, Ertan; Sanchez, Itza M.; Corapcioglu, M. Yavuz

    2006-08-01

    Four series of laboratory tests were conducted to evaluate the permeability and compression characteristics of municipal solid waste (MSW) samples. Two of the series were conducted using a conventional small-scale consolidometer; the other two were conducted in a large-scale consolidometer specially constructed for this study. In each consolidometer, the MSW samples were tested at two different moisture contents, i.e., original moisture content and field capacity. A scale effect between the two differently sized consolidometers was investigated. The tests were carried out on samples reconsolidated to pressures of 123, 246, and 369 kPa. Time-settlement data gathered from each load increment were employed to plot strain versus log-time graphs. The data acquired from the compression tests were used to back-calculate primary and secondary compression indices. The consolidometers were later adapted for permeability experiments. The values of the indices and the coefficient of compressibility for the MSW samples tested were within a relatively narrow range regardless of consolidometer size and specimen moisture content. The values of the coefficient of permeability were within a band of two orders of magnitude (10-6-10-4 m/s). The data presented in this paper agreed very well with the data reported by previous researchers. It was concluded that the scale effect in the compression behavior was significant. However, there was usually no linear relationship between the results obtained in the tests.

  8. The relationship between pay day and violent death in Guatemala: a time series analysis

    PubMed Central

    Ramírez, Dorian E; Branas, Charles C; Richmond, Therese S; Bream, Kent; Xie, Dawei; Velásquez-Tohom, Magda; Wiebe, Douglas J

    2016-01-01

    Objective To assess if violent deaths were associated with pay days in Guatemala. Design Interrupted time series analysis. Setting Guatemalan national autopsy databases. Participants Daily violence-related autopsy data for 22 418 decedents from 2009 to 2012. Data were provided by the Guatemalan National Institute of Forensic Sciences. Multiple pay-day lags and other important days such as holidays were tested. Outcome measures Absolute and relative estimates of excess violent deaths on pay days and holidays. Results The occurrence of violent deaths was not associated with pay days. However, a significant association was observed for national holidays, and this association was more pronounced when national holidays and pay days occurred simultaneously. This effect was observed mainly in males, who constituted the vast majority of violent deaths in Guatemala. An estimated 112 (coefficient=3.12; 95% CI 2.15 to 4.08; p<0.01) more male violent deaths occurred on holidays than were expected. An estimated 121 (coefficient=4.64; 95% CI 3.41 to 5.88; p<0.01) more male violent deaths than expected occurred on holidays that coincided with the first 2 days following a pay day. Conclusions Men in Guatemala experience violent deaths at an elevated rate when pay days coincide with national holidays. Efforts to be better prepared for violence during national holidays and to prevent violent deaths by rescheduling pay days when these days co-occur with national holidays should be considered. PMID:27697828

  9. Continuous hydrologic simulation of runoff for the Middle Fork and South Fork of the Beargrass Creek basin in Jefferson County, Kentucky

    USGS Publications Warehouse

    Jarrett, G. Lynn; Downs, Aimee C.; Grace-Jarrett, Patricia A.

    1998-01-01

    The Hydrological Simulation Program-FORTRAN (HSPF) was applied to an urban drainage basin in Jefferson County, Kentucky, to integrate the large amounts of information being collected on water quantity and quality into an analytical framework that could be used as a management and planning tool. Hydrologic response units were developed using geographic data and a K-means analysis to characterize important hydrologic and physical factors in the basin. The Hydrological Simulation Program-FORTRAN Expert System (HSPEXP) was used to calibrate the model parameters for the Middle Fork Beargrass Creek Basin for 3 years (June 1, 1991, to May 31, 1994) of 5-minute streamflow and precipitation time series, and 3 years of hourly pan-evaporation time series. The calibrated model parameters were applied to the South Fork Beargrass Creek Basin for confirmation. The model confirmation results indicated that the model simulated the system within acceptable tolerances. The coefficient of determination and coefficient of model-fit efficiency between simulated and observed daily flows were 0.91 and 0.82, respectively, for model calibration and 0.88 and 0.77, respectively, for model confirmation. The model is most sensitive to estimates of the area of effective impervious land in the basin; the spatial distribution of rainfall; and the lower-zone evapotranspiration, lower-zone nominal storage, and infiltration-capacity parameters during recession and low-flow periods. The error contribution from these sources varies with season and antecedent conditions.
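The two fit statistics quoted are commonly the squared Pearson correlation between simulated and observed daily flows and the Nash-Sutcliffe efficiency; assuming those standard definitions (the report may define the model-fit efficiency differently), they can be computed as:

```python
import numpy as np

def r_squared(obs, sim):
    # Coefficient of determination: squared Pearson correlation.
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)

def nse(obs, sim):
    # Nash-Sutcliffe model-fit efficiency: 1 minus the residual sum of
    # squares over the variance of the observations; 1.0 is a perfect fit.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

obs = np.array([1.2, 3.4, 2.2, 5.1, 4.0, 2.8])   # illustrative daily flows
sim = np.array([1.0, 3.1, 2.5, 4.8, 4.3, 2.6])
r2, e = r_squared(obs, sim), nse(obs, sim)
```

Unlike the coefficient of determination, the Nash-Sutcliffe efficiency penalizes bias and amplitude errors, which is why it is typically the lower of the two values, as in the figures quoted above.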

  10. A Systematic Approach to Time-series Metabolite Profiling and RNA-seq Analysis of Chinese Hamster Ovary Cell Culture.

    PubMed

    Hsu, Han-Hsiu; Araki, Michihiro; Mochizuki, Masao; Hori, Yoshimi; Murata, Masahiro; Kahar, Prihardi; Yoshida, Takanobu; Hasunuma, Tomohisa; Kondo, Akihiko

    2017-03-02

    Chinese hamster ovary (CHO) cells are the primary host used for biopharmaceutical protein production. The engineering of CHO cells to produce higher amounts of biopharmaceuticals has been highly dependent on empirical approaches, but recent high-throughput "omics" methods are changing the situation in a rational manner. Omics data analyses using gene expression or metabolite profiling make it possible to identify key genes and metabolites in antibody production. Systematic omics approaches using different types of time-series data are expected to further enhance understanding of cellular behaviours and molecular networks for rational design of CHO cells. This study developed a systematic method for obtaining and analysing time-dependent intracellular and extracellular metabolite profiles, RNA-seq data (enzymatic mRNA levels) and cell counts from CHO cell cultures to capture an overall view of the CHO central metabolic pathway (CMP). We then calculated correlation coefficients among all the profiles and visualised the whole CMP by heatmap analysis and metabolic pathway mapping, to classify genes and metabolites together. This approach provides an efficient platform to identify key genes and metabolites in CHO cell culture.

  11. Reconstructing disturbance history for an intensively mined region by time-series analysis of Landsat imagery.

    PubMed

    Li, Jing; Zipper, Carl E; Donovan, Patricia F; Wynne, Randolph H; Oliphant, Adam J

    2015-09-01

    Surface mining disturbances have attracted attention globally due to extensive influence on topography, land use, ecosystems, and human populations in mineral-rich regions. We analyzed a time series of Landsat satellite imagery to produce a 28-year disturbance history for surface coal mining in a segment of the eastern USA's central Appalachian coalfield, southwestern Virginia. The method was developed and applied as a three-step sequence: vegetation index selection, persistent vegetation identification, and mined-land delineation by year of disturbance. The overall classification accuracy and kappa coefficient were 0.9350 and 0.9252, respectively. Most surface coal mines were identified correctly by location and by time of initial disturbance. More than 8% of southwestern Virginia's >4000 km² coalfield area was disturbed by surface coal mining over the 28-year period. Approximately 19.5% of the Appalachian coalfield surface within the most intensively mined county (Wise County) has been disturbed by mining. Mining disturbances expanded steadily and progressively over the study period. The information generated can be applied to gain further insight concerning mining influences on ecosystems and other essential environmental features.

  12. An experimental 392-year documentary-based multi-proxy (vine and grain) reconstruction of May-July temperatures for Kőszeg, West-Hungary

    NASA Astrophysics Data System (ADS)

    Kiss, Andrea; Wilson, Rob; Bariska, István

    2011-07-01

    In this paper, we present a 392-year-long preliminary temperature reconstruction for western Hungary. The reconstructed series is based on five vine- and grain-related historical phenological series from the town of Kőszeg. We apply dendrochronological methods both for signal assessment of the phenological series and for the resultant temperature reconstruction. As a proof of concept, the present reconstruction explains 57% of the variance of May-July Budapest mean temperatures and is well verified, with coefficient of efficiency values in excess of 0.45. The developed temperature reconstruction portrays warm conditions during the late seventeenth and early eighteenth centuries, with a period of cooling until the coldest reconstructed period centred around 1815, which was followed by a period of warming until the 1860s. The phenological evidence analysed here represents an important data source from which non-biased estimates of past climate can be derived and which may provide information at all possible time-scales.

  13. A more general system for Poisson series manipulation.

    NASA Technical Reports Server (NTRS)

    Cherniack, J. R.

    1973-01-01

    The design of a working Poisson series processor system is described that is more general than those currently in use. This system is the result of a series of compromises among efficiency, generality, ease of programming, and ease of use. The most general form of coefficients that can be multiplied efficiently is pointed out, and the place of general-purpose algebraic systems in celestial mechanics is discussed.
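
    As an illustration of the kind of arithmetic a Poisson series processor performs, the sketch below multiplies two cosine-only series keyed by integer multiplier tuples over a fixed set of angles, using cos A · cos B = (cos(A − B) + cos(A + B))/2. Real processors also carry sine terms and polynomial coefficients, so this is a toy model, not the described system:

```python
from collections import defaultdict

def multiply(s1, s2):
    """Multiply two cosine-only Poisson series.

    A series maps an integer multiplier tuple (j1, j2, ...) over fixed
    angles (theta1, theta2, ...) to a numeric coefficient, representing
    sum_j c_j * cos(j . theta).
    """
    out = defaultdict(float)
    for j1, c1 in s1.items():
        for j2, c2 in s2.items():
            for arg in (tuple(a - b for a, b in zip(j1, j2)),
                        tuple(a + b for a, b in zip(j1, j2))):
                # cos is even: keep the canonical sign of each argument
                neg = tuple(-a for a in arg)
                out[max(arg, neg)] += 0.5 * c1 * c2
    return dict(out)

# cos(t) * cos(t) = 1/2 + (1/2) cos(2t):
print(multiply({(1,): 1.0}, {(1,): 1.0}))  # → {(0,): 0.5, (2,): 0.5}
```

    Keyed storage like this is one reason general Poisson-series coefficients can still be multiplied efficiently: the product of two trigonometric terms produces exactly two terms, which are merged back into the dictionary.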

  14. Time-series photometric spot modeling. 2: Fifteen years of photometry of the bright RS CVn binary HR 7275

    NASA Technical Reports Server (NTRS)

    Strassmeier, K. G.; Hall, D. S.; Henry, G. W.

    1994-01-01

    We present a time-dependent spot modeling analysis of 15 consecutive years of V-band photometry of the long-period (P_orb = 28.6 days) RS CVn binary HR 7275. This baseline is one of the longest uninterrupted intervals over which a spotted star has been observed. The spot modeling analysis yields a total of 20 different spots throughout the time span of our observations. The distribution of the observed spot migration rates is consistent with solar-type differential rotation and suggests a lower limit of the differential-rotation coefficient of 0.022 +/- 0.004. The observed maximum lifetime of a single spot (or spot group) is 4.5 years and the minimum lifetime is approximately one year, but an average spot lives for 2.2 years. If we assume that the mechanical shear from differential rotation sets the upper limit to the spot lifetime, the observed maximum lifetime in turn sets an upper limit to the differential-rotation coefficient, namely 0.04 +/- 0.01. This would be differential rotation only 5 to 8 times weaker than the solar value and one of the strongest among active binaries. We found no conclusive evidence for the existence of a periodic phenomenon that could be attributed to a stellar magnetic cycle.

  15. A hybrid least squares support vector machines and GMDH approach for river flow forecasting

    NASA Astrophysics Data System (ADS)

    Samsudin, R.; Saad, P.; Shabri, A.

    2010-06-01

    This paper proposes a novel hybrid forecasting model, known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables for the LSSVM model, and the LSSVM performs the time series forecasting. In this study, the application of GLSSVM to monthly river flow forecasting for the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA) model, and the GMDH and LSSVM models, using long-term observations of monthly river flow discharge. Standard statistical measures, the root mean square error (RMSE) and the coefficient of correlation (R), are employed to evaluate the performance of the various models developed. Experimental results indicate that the hybrid model is a powerful tool for modelling discharge time series and can be applied successfully in complex hydrological modelling.

  16. Multi-frequency complex network from time series for uncovering oil-water flow structure.

    PubMed

    Gao, Zhong-Ke; Yang, Yu-Xuan; Fang, Peng-Cheng; Jin, Ning-De; Xia, Cheng-Yi; Hu, Li-Dan

    2015-02-04

    Uncovering complex oil-water flow structure represents a challenge in diverse scientific disciplines. This challenge stimulates us to develop a new distributed conductance sensor for measuring local flow signals at different positions and then propose a novel approach based on a multi-frequency complex network to uncover the flow structures from experimental multivariate measurements. In particular, based on the Fast Fourier transform, we demonstrate how to derive a multi-frequency complex network from multivariate time series. We construct complex networks at different frequencies and then detect community structures. Our results indicate that the community structures faithfully represent the structural features of oil-water flow patterns. Furthermore, we investigate the network statistics of each derived network at different frequencies and find that the frequency clustering coefficient makes it possible to uncover the evolution of flow patterns and yields deep insights into the formation of flow structures. These results present a first step towards a network visualization of complex flow patterns from a community structure perspective.
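
    The frequency clustering coefficient mentioned above builds on the standard local clustering coefficient of a network node; a minimal sketch on a hypothetical adjacency structure (not the paper's sensor-derived networks):

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient of node v in an undirected graph,
    adj = {node: set of neighbours}."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Count links among the neighbours of v (each unordered pair once)
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

# Hypothetical 4-node network built from pairwise correlations in one
# frequency band: nodes 1-3 form a triangle, node 4 hangs off node 1.
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
print(clustering_coefficient(adj, 1))  # 1 closed pair out of 3 -> 1/3
print(clustering_coefficient(adj, 2))  # its two neighbours are linked -> 1.0
```

    Tracking this statistic per frequency band is what (in spirit) lets the authors follow how tightly knit each band's network is as the flow pattern evolves.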

  17. Transformation between surface spherical harmonic expansion of arbitrary high degree and order and double Fourier series on sphere

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2018-02-01

    In order to accelerate the spherical harmonic synthesis and/or analysis of an arbitrary function on the unit sphere, we developed a pair of procedures to transform between a truncated spherical harmonic expansion and the corresponding two-dimensional Fourier series. First, we obtained an analytic expression for the sine/cosine series coefficients of the 4π fully normalized associated Legendre function in terms of the rectangle values of the Wigner d function. Then, we elaborated the existing method of transforming the coefficients of a surface spherical harmonic expansion to those of the double Fourier series so as to handle arbitrarily high degree and order. Next, we created a new method to inversely transform a given double Fourier series to the corresponding surface spherical harmonic expansion. The key of the new method is a couple of new recurrence formulas to compute the inverse transformation coefficients: a decreasing-order, fixed-degree, fixed-wavenumber three-term formula for general terms, and an increasing-degree-and-order, fixed-wavenumber two-term formula for diagonal terms. Meanwhile, the two seed values are analytically prepared. Both the forward and inverse transformation procedures are confirmed to be sufficiently accurate and applicable at extremely high degree/order/wavenumber, as high as 2^30 ≈ 10^9. The developed procedures will be useful not only in the synthesis and analysis of spherical harmonic expansions of arbitrarily high degree and order, but also in the evaluation of the derivatives and integrals of such expansions.

  18. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  19. On the accuracy of the interdiffusion coefficient measurements of high-temperature binary mixtures under ISS conditions

    NASA Astrophysics Data System (ADS)

    Saez, Núria; Ruiz, Xavier; Pallarés, Jordi; Shevtsova, Valentina

    2013-04-01

    An accelerometric record from the IVIDIL experiment (ESA Columbus module) has been exhaustively studied. The analysis involved the determination of basic statistical properties such as the auto-correlation and the power spectrum (second-order statistical analyses). Taking into account the shape of the associated histograms, we also address another important question, the non-Gaussian nature of the time series, using the bispectrum and the bicoherence of the signals. Extrapolating the above-mentioned results, a computational model of a high-temperature shear cell has been developed. A scalar indicator has been used to quantify the accuracy of the diffusion coefficient measurements in the case of binary mixtures involving photovoltaic silicon or liquid Al-Cu binary alloys. Three different initial arrangements have been considered: the so-called interdiffusion, centred thick layer, and lateral thick layer arrangements. Results allow us to conclude that, under the conditions of the present work, the diffusion coefficient is insensitive to the environmental conditions, that is to say, accelerometric disturbances and the initial shear cell arrangement.

  20. Estimates of bottom roughness length and bottom shear stress in South San Francisco Bay, California

    USGS Publications Warehouse

    Cheng, R.T.; Ling, C.-H.; Gartner, J.W.; Wang, P.-F.

    1999-01-01

    A field investigation of the hydrodynamics and the resuspension and transport of particulate matter in a bottom boundary layer was carried out in South San Francisco Bay (South Bay), California, during March-April 1995. Using broadband acoustic Doppler current profilers, detailed measurements of the turbulent mean velocity distribution within 1.5 m above the bed were obtained. A global method of data analysis was used for estimating the bottom roughness length zo and the bottom shear stress (or friction velocity u*). Field data were examined by dividing the time series of velocity profiles into 24-hour periods and independently analyzing the velocity profile time series for flooding and ebbing periods. The global method of solution gives consistent properties of bottom roughness length zo and bottom shear stress (or friction velocity u*) in South Bay. Estimated mean values of zo and u* for flooding and ebbing cycles differ. The differences in mean zo and u* are shown to be caused by tidal current flood-ebb inequality rather than by the flooding or ebbing of the tidal currents as such. The bed shear stress correlates well with a reference velocity; the slope of the correlation defines a drag coefficient. Forty-three days of field data in South Bay show two regimes of zo (and drag coefficient) as a function of the reference velocity. When the mean velocity is >25-30 cm s⁻¹, ln zo (and thus the drag coefficient) is inversely proportional to the reference velocity. The reduction of roughness length is hypothesized to result from sediment erosion by intensifying tidal currents, which reduces bed roughness. When the mean velocity is <25-30 cm s⁻¹, the correlation between zo and the reference velocity is less clear. A plausible explanation of the scattered values of zo under this condition may be sediment deposition. Measured sediment data were inadequate to support this hypothesis, which warrants further field investigation.
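
    Estimates of this kind commonly come from fitting the logarithmic law of the wall, u(z) = (u*/κ) ln(z/z0), to a measured velocity profile: regressing u against ln z gives u* from the slope and z0 from the intercept. A sketch on synthetic data (a simple per-profile least-squares fit, illustrative rather than the study's global method):

```python
import math

KAPPA = 0.41  # von Karman constant

def fit_log_profile(z, u):
    """Fit u(z) = (u*/kappa) * ln(z / z0) by least squares on x = ln z."""
    n = len(z)
    x = [math.log(zi) for zi in z]
    mx, mu = sum(x) / n, sum(u) / n
    slope = (sum((xi - mx) * (ui - mu) for xi, ui in zip(x, u))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = mu - slope * mx
    u_star = KAPPA * slope          # slope = u*/kappa
    z0 = math.exp(-intercept / slope)  # intercept = -(u*/kappa) ln z0
    return u_star, z0

# Synthetic profile with z0 = 0.001 m and u* = 0.02 m/s:
z = [0.25, 0.5, 1.0, 1.5]                      # heights above the bed (m)
u = [0.02 / KAPPA * math.log(zi / 0.001) for zi in z]
u_star, z0 = fit_log_profile(z, u)
print(round(u_star, 3), round(z0, 4))  # → 0.02 0.001
```

    Because z0 enters through the intercept of a logarithmic fit, small scatter in the measured velocities translates into large scatter in z0, which is consistent with the noisier low-velocity regime described above.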

  1. Temporal Downscaling of Crop Coefficient and Crop Water Requirement from Growing Stage to Substage Scales

    PubMed Central

    Shang, Songhao

    2012-01-01

    Crop water requirement is essential for agricultural water management, but such data are usually available only for whole crop growing stages. However, crop water requirement values at monthly or weekly scales are more useful for water management. A method was proposed to downscale the crop coefficient and water requirement from growing-stage to substage scales, based on the interpolation of accumulated crop and reference evapotranspiration calculated from their values in growing stages. The proposed method was compared with two straightforward methods, namely direct interpolation of crop evapotranspiration and of the crop coefficient, assuming that stage-average values occur in the middle of the stage. These methods were tested with a simulated daily crop evapotranspiration series. Results indicate that the proposed method is more reliable, the downscaled crop evapotranspiration series being very close to the simulated one. PMID:22619572
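
    The downscaling idea, accumulating crop (ETc) and reference (ET0) evapotranspiration at stage boundaries, interpolating the accumulated curves, and differencing to substage totals, can be sketched as follows. Linear interpolation and the stage values are illustrative simplifications; the paper's interpolation of the accumulated curves may differ:

```python
def interp(xs, ys, x):
    """Piecewise-linear interpolation on the accumulated curve (xs ascending)."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside range")

def downscale(stage_days, etc_stage, et0_stage, step):
    """Substage crop coefficients Kc = dETc/dET0 from stage totals,
    via interpolation of the accumulated ETc and ET0 curves."""
    bounds, acc_etc, acc_et0 = [0], [0.0], [0.0]
    for d, e, r in zip(stage_days, etc_stage, et0_stage):
        bounds.append(bounds[-1] + d)       # stage boundary (day)
        acc_etc.append(acc_etc[-1] + e)     # accumulated ETc
        acc_et0.append(acc_et0[-1] + r)     # accumulated ET0
    out, t = [], 0
    while t + step <= bounds[-1]:
        d_etc = interp(bounds, acc_etc, t + step) - interp(bounds, acc_etc, t)
        d_et0 = interp(bounds, acc_et0, t + step) - interp(bounds, acc_et0, t)
        out.append(d_etc / d_et0)
        t += step
    return out

# Two 30-day stages with stage-average Kc of 0.5 then 1.1 (ET0 = 60 mm/stage);
# 20-day substages straddle the stage boundary, so the middle value blends them:
kc_sub = downscale([30, 30], [30.0, 66.0], [60.0, 60.0], step=20)
print([round(k, 2) for k in kc_sub])  # → [0.5, 0.8, 1.1]
```

    Note how the substage spanning the boundary receives an intermediate Kc rather than a stage-average value dropped at mid-stage, which is the behaviour the comparison methods approximate.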

  2. Statistical process control of mortality series in the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database: implications of the data generating process

    PubMed Central

    2013-01-01

    Background Statistical process control (SPC), an initiative from the industrial sphere, has recently been applied in health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increase in false-alarm frequency. Methods Monthly mean raw mortality (at hospital discharge) time series, 1995–2009, at the individual Intensive Care Unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for series (i) autocorrelation and seasonality was demonstrated using (partial) autocorrelation ((P)ACF) function displays and classical series decomposition, and (ii) "in-control" status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Time-series methods were applied to an exemplar complete ICU series (1995-(end)2009) via the Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. Results The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average (SD) raw and expected mortalities ranged from 0.012 (0.113) and 0.013 (0.045) to 0.296 (0.457) and 0.278 (0.247), respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag 40, and 35% had autocorrelation through to lag 40; of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series.
Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. Conclusions The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues. PMID:23705957
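
    A plain (non-risk-adjusted) EWMA chart with time-varying 3-sigma limits, of the kind referred to above, can be sketched as follows; the mortality series, target, and sigma below are hypothetical:

```python
import math

def ewma_chart(x, target, sigma, lam=0.2, L=3.0):
    """EWMA control chart: returns (EWMA values, out-of-control flags).

    z_t = lam * x_t + (1 - lam) * z_{t-1}, z_0 = target, with limits
    target +/- L * sigma * sqrt(lam/(2-lam) * (1 - (1-lam)**(2t))).
    """
    z = target
    out, flags = [], []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        half_width = L * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        out.append(z)
        flags.append(abs(z - target) > half_width)
    return out, flags

# Hypothetical monthly mortality proportions drifting above a 14% target:
series = [0.14, 0.13, 0.15, 0.14, 0.18, 0.19, 0.20, 0.21]
ewma, flags = ewma_chart(series, target=0.14, sigma=0.01)
print(flags)  # the chart signals once the smoothed series escapes the limits
```

    An autocorrelated series would push the EWMA outside these limits far more often than the nominal false-alarm rate suggests, which is the false-positive signalling problem the time-series (residual chart) approach resolves.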

  3. Statistics of time delay and scattering correlation functions in chaotic systems. II. Semiclassical approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, Marcel

    2015-06-15

    We consider S-matrix correlation functions for a chaotic cavity having M open channels, in the absence of time-reversal invariance. Relying on a semiclassical approximation, we compute the average over E of the quantities Tr[S†(E − ε)S(E + ε)]^n, for general positive integer n. Our result is an infinite series in ε, whose coefficients are rational functions of M. From this, we extract moments of the time delay matrix Q = −iħS†dS/dE and check that the first 8 of them agree with the random matrix theory prediction from our previous paper [M. Novaes, J. Math. Phys. 56, 062110 (2015)].

  4. Endogenous time-varying risk aversion and asset returns.

    PubMed

    Berardi, Michele

    2016-01-01

    Stylized facts about the statistical properties of short-horizon returns in financial markets have been identified in the literature, but a satisfactory understanding of their origin is yet to be achieved. In this work, we show that a simple asset pricing model with a representative agent is able to generate time series of returns that replicate such stylized facts if the risk aversion coefficient is allowed to change endogenously over time in response to unexpected excess returns under evolutionary forces. The same model, under constant risk aversion, would instead generate returns that are essentially Gaussian. We conclude that an endogenous time-varying risk aversion represents a very parsimonious way to make the model match real data on key statistical properties, and it therefore deserves careful consideration from economists and practitioners alike.

  5. Diffusion of organic pollutants within a biofilm in porous media

    NASA Astrophysics Data System (ADS)

    Fan, Chihhao; Kao, Chen-Fei; Liu, You-Hsi

    2017-04-01

    The occurrence of aquatic pollution is an inevitable environmental impact of human civilization and societal advancement. Whether from natural or anthropogenic sources, aqueous contaminants enter the natural environment and degrade its quality. To protect aquatic environment quality, attached-growth biological degradation is often applied to remove organic contaminants by introducing contaminated water into a porous medium covered by microorganisms. Many natural aquatic systems also develop a similar mechanism that increases their self-purification capability. To better understand this transport phenomenon and the degradation mechanism in the biofilm for future application, the mathematical characterization of organic contaminant diffusion within the biofilm requires further exploration. The present study aimed to formulate a mathematical representation quantifying the diffusion of organic contaminants in the biofilm. BOD was selected as the target contaminant. A series of experiments was conducted to quantify BOD diffusion in the biofilm under influent BOD varying from 50 to 300 mg/L, COD:N:P ratios of 100:5:1 and 100:15:3, with or without auxiliary aeration. For the diffusion coefficient calculation, a boundary condition of zero diffusion at the interface between the microbial phase and the contact media was assumed. By conservation of mass, the contaminants removed equal those that diffuse into the biofilm, which yields eq 1, and the diffusion coefficient (eq 2) can be obtained through calculus using a table of integrals:
    D_f (∂²S_f/∂z²) = R_f (1)
    D_f = (Q S_in − Q S_out)² Y / [2 μ_max x_f (S_b + K_s ln(K_s/(S_b + K_s)))] (2)
    Using the obtained experimental data, the diffusion coefficient was calculated to be 2.02×10⁻⁶ m²/d with influent COD of 50 mg/L at a COD:N:P ratio of 100:5:1 with aeration, and this coefficient increased to 6.02×10⁻⁶ m²/d as the influent concentration increased to 300 mg/L. Meanwhile, the diffusion coefficient decreased to 2.61×10⁻⁷ m²/d as the retention time increased to 3 hours. Generally, the variation in diffusion coefficients between the different COD:N:P ratios exhibits a similar pattern, with a slight decrease for the ratio of 100:15:3. The difference in diffusion coefficients between 1 and 2 hours was appreciably greater than that between 2 and 3 hours, implying that diffusion was a critical factor for contaminant removal at retention times of 1 hour or less, because a longer retention time leads to better microbial degradation owing to sufficient contact time for biological reactions. For a 1-hour retention time, the increase in the diffusion coefficient became limited once the influent COD concentration was equal to or above 150 mg/L. The obtained diffusion coefficients were applied to estimating the treatment efficiency for real domestic sewage; the estimated effluent BOD concentrations were quite comparable to those obtained through experimental measurements.

  6. Scaling for Robust Empirical Modeling and Predictions of Net Ecosystem Exchange (NEE) from Diverse Wetland Ecosystems

    NASA Astrophysics Data System (ADS)

    Ishtiaq, K. S.; Abdul-Aziz, O. I.

    2014-12-01

    We developed a scaling-based, simple empirical model for spatio-temporally robust prediction of the diurnal cycles of wetland net ecosystem exchange (NEE) by using an extended stochastic harmonic algorithm (ESHA). A reference-time observation from each diurnal cycle was utilized as the scaling parameter to normalize and collapse hourly observed NEE of different days into a single, dimensionless diurnal curve. The modeling concept was tested by parameterizing the unique diurnal curve and predicting hourly NEE of May to October (summer growing and fall seasons) between 2002-12 for diverse wetland ecosystems, as available in the U.S. AmeriFLUX network. As an example, the Taylor Slough short hydroperiod marsh site in the Florida Everglades had data for four consecutive growing seasons from 2009-12; results showed impressive modeling efficiency (coefficient of determination, R2 = 0.66) and accuracy (ratio of root-mean-square-error to the standard deviation of observations, RSR = 0.58). Model validation was performed with an independent year of NEE data, indicating equally impressive performance (R2 = 0.68, RSR = 0.57). The model included a parsimonious set of estimated parameters, which exhibited spatio-temporal robustness by collapsing onto narrow ranges. Model robustness was further investigated by analytically deriving and quantifying parameter sensitivity coefficients and a first-order uncertainty measure. The relatively robust, empirical NEE model can be applied for simulating continuous (e.g., hourly) NEE time-series from a single reference observation (or a set of limited observations) at different wetland sites of comparable hydro-climatology, biogeochemistry, and ecology. The method can also be used for a robust gap-filling of missing data in observed time-series of periodic ecohydrological variables for wetland or other ecosystems.
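
    The reference-time scaling step can be sketched directly: dividing each diurnal NEE cycle by its observation at a fixed reference hour collapses cycles of different amplitude onto one dimensionless curve. The two diurnal cycles below are hypothetical:

```python
def normalise_diurnal(nee, ref_index):
    """Collapse a diurnal NEE cycle into a dimensionless curve by dividing
    by the reference-time observation (a minimal sketch of the scaling idea)."""
    ref = nee[ref_index]
    return [v / ref for v in nee]

# Two hypothetical diurnal cycles differing only in amplitude
# (negative NEE = net uptake during daylight):
day1 = [-1.0, -4.0, -8.0, -4.0, -1.0]
day2 = [-1.5, -6.0, -12.0, -6.0, -1.5]
c1 = normalise_diurnal(day1, ref_index=2)  # midday value as the scaler
c2 = normalise_diurnal(day2, ref_index=2)
print(c1 == c2)  # both days collapse onto the same dimensionless curve
```

    Run in reverse, the single parameterised dimensionless curve plus one reference observation per day regenerates a full hourly NEE series, which is the basis of the gap-filling use mentioned above.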

  7. Distributed lags time series analysis versus linear correlation analysis (Pearson's r) in identifying the relationship between antipseudomonal antibiotic consumption and the susceptibility of Pseudomonas aeruginosa isolates in a single Intensive Care Unit of a tertiary hospital.

    PubMed

    Erdeljić, Viktorija; Francetić, Igor; Bošnjak, Zrinka; Budimir, Ana; Kalenić, Smilja; Bielen, Luka; Makar-Aušperger, Ksenija; Likić, Robert

    2011-05-01

    The relationship between antibiotic consumption and selection of resistant strains has been studied mainly by employing conventional statistical methods. A time delay in effect must be anticipated and this has rarely been taken into account in previous studies. Therefore, distributed lags time series analysis and simple linear correlation were compared in their ability to evaluate this relationship. Data on monthly antibiotic consumption for ciprofloxacin, piperacillin/tazobactam, carbapenems and cefepime as well as Pseudomonas aeruginosa susceptibility were retrospectively collected for the period April 2006 to July 2007. Using distributed lags analysis, a significant temporal relationship was identified between ciprofloxacin, meropenem and cefepime consumption and the resistance rates of P. aeruginosa isolates to these antibiotics. This effect was lagged for ciprofloxacin and cefepime [1 month (R=0.827, P=0.039) and 2 months (R=0.962, P=0.001), respectively] and was simultaneous for meropenem (lag 0, R=0.876, P=0.002). Furthermore, a significant concomitant effect of meropenem consumption on the appearance of multidrug-resistant P. aeruginosa strains (resistant to three or more representatives of classes of antibiotics) was identified (lag 0, R=0.992, P<0.001). This effect was not delayed and it was therefore identified both by distributed lags analysis and the Pearson's correlation coefficient. Correlation coefficient analysis was not able to identify relationships between antibiotic consumption and bacterial resistance when the effect was delayed. These results indicate that the use of diverse statistical methods can yield significantly different results, thus leading to the introduction of possibly inappropriate infection control measures. Copyright © 2010 Elsevier B.V. and the International Society of Chemotherapy. All rights reserved.
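
    The contrast drawn above, a lag-aware analysis detecting a delayed effect that a simple Pearson correlation misses, can be illustrated with a toy lagged-correlation calculation (a minimal stand-in for a distributed-lags analysis; the monthly series are invented):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def lagged_correlation(consumption, resistance, lag):
    """Correlate consumption with resistance observed `lag` months later."""
    if lag == 0:
        return pearson(consumption, resistance)
    return pearson(consumption[:-lag], resistance[lag:])

# Hypothetical monthly series in which resistance tracks consumption
# with a 2-month delay:
use = [10, 12, 15, 11, 18, 20, 16, 22, 25, 21, 28, 30]
res = [5, 6] + [u / 2 for u in use[:-2]]  # delayed, rescaled copy of `use`
print(round(lagged_correlation(use, res, 0), 2))  # weaker at lag 0
print(round(lagged_correlation(use, res, 2), 2))  # near-perfect at lag 2
```

    A lag-0 coefficient alone would understate (or miss) the association; scanning lags recovers the delayed dependence, which is the point the abstract makes about distributed-lags versus plain Pearson analysis.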

  8. Possible Noise Nature of Elsässer Variable z- in Highly Alfvénic Solar Wind Fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, X.; Tu, C.-Y.; He, J.-S.; Wang, L.-H.; Yao, S.; Zhang, L.

    2018-01-01

    It has been a long-standing debate on the nature of Elsässer variable z- observed in the solar wind fluctuations. It is widely believed that z- represents inward propagating Alfvén waves and interacts nonlinearly with z+ (outward propagating Alfvén waves) to generate energy cascade. However, z- variations sometimes show a feature of convective structures. Here we present a new data analysis on autocorrelation functions of z- in order to get some definite information on its nature. We find that there is usually a large drop on the z- autocorrelation function when the solar wind fluctuations are highly Alfvénic. The large drop observed by Helios 2 spacecraft near 0.3 AU appears at the first nonzero time lag τ = 81 s, where the value of the autocorrelation coefficient drops to 25%-65% of that at τ = 0 s. Beyond the first nonzero time lag, the autocorrelation coefficient decreases gradually to zero. The drop of z- correlation function also appears in the Wind observations near 1 AU. These features of the z- correlation function may suggest that z- fluctuations consist of two components: high-frequency white noise and low-frequency pseudo structures, which correspond to flat and steep parts of z- power spectrum, respectively. This explanation is confirmed by doing a simple test on an artificial time series, which is obtained from the superposition of a random data series on its smoothed sequence. Our results suggest that in highly Alfvénic fluctuations, z- may not contribute importantly to the interactions with z+ to produce energy cascade.
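
    The artificial-series test described above is easy to reproduce in outline: superpose white noise on (a scaled copy of) its own smoothed sequence and inspect the autocorrelation function, which drops sharply at the first nonzero lag and then decays gradually. A sketch, with the window length and scaling chosen purely for illustration:

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation coefficient of series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n
    return ck / c0

def smooth(x, w):
    """Centred moving average with (odd) window w, shrinking at the edges."""
    h = w // 2
    return [sum(x[max(0, i - h):i + h + 1]) / len(x[max(0, i - h):i + h + 1])
            for i in range(len(x))]

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(5000)]
# White noise superposed on an amplified copy of its own smoothed sequence:
series = [n + 4.0 * s for n, s in zip(noise, smooth(noise, 41))]

print(round(autocorr(series, 0), 2))    # 1.0 by definition
print(round(autocorr(series, 1), 2))    # sharp drop at the first nonzero lag
print(round(autocorr(series, 100), 2))  # decays towards zero at large lags
```

    The white-noise component kills the correlation immediately at the first nonzero lag, while the smoothed (pseudo-structure) component supplies the slowly decaying remainder, mirroring the two-component interpretation of z⁻ proposed above.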

  9. Effectiveness of an Integrated Approach to HIV and Hypertension Care in Rural South Africa: Controlled Interrupted Time-Series Analysis.

    PubMed

    Ameh, Soter; Klipstein-Grobusch, Kerstin; Musenge, Eustasius; Kahn, Kathleen; Tollman, Stephen; Gómez-Olivé, Francesc Xavier

    2017-08-01

    South Africa faces a dual burden of HIV/AIDS and noncommunicable diseases. In 2011, a pilot integrated chronic disease management (ICDM) model was introduced by the National Health Department into selected primary health care (PHC) facilities. The objective of this study was to assess the effectiveness of the ICDM model in controlling patients' CD4 counts (>350 cells/mm³) and blood pressure [BP (<140/90 mm Hg)] in PHC facilities in the Bushbuckridge municipality, South Africa. A controlled interrupted time-series study was conducted using data from patients' clinical records collected multiple times before and after the ICDM model was initiated in PHC facilities in Bushbuckridge. Patients ≥18 years were recruited by proportionate sampling from the pilot (n = 435) and comparison (n = 443) PHC facilities from 2011 to 2013. Health outcomes for patients were retrieved from facility records for 30 months. We performed controlled segmented regression to model the monthly averages of individuals' propensity scores, using an autoregressive moving average model at the 5% significance level. The pilot facilities had a 6% greater likelihood of controlling patients' CD4 counts than the comparison facilities (coefficient = 0.057; 95% confidence interval: 0.056 to 0.058; P < 0.001). Compared with the comparison facilities, the pilot facilities had a 1.0% greater likelihood of controlling patients' BP (coefficient = 0.010; 95% confidence interval: 0.003 to 0.016; P = 0.002). Application of the model had a small effect in controlling patients' CD4 counts and BP but showed no overall clinical benefit for the patients; hence the need to more extensively leverage the HIV program for hypertension treatment.

  10. On interrelations of recurrences and connectivity trends between stock indices

    NASA Astrophysics Data System (ADS)

    Goswami, B.; Ambika, G.; Marwan, N.; Kurths, J.

    2012-09-01

    Financial data has been extensively studied for correlations using Pearson's cross-correlation coefficient ρ as the point of departure. We employ an estimator based on recurrence plots - the correlation of probability of recurrence (CPR) - to analyze connections between nine stock indices spread worldwide. We suggest a slight modification of the CPR approach in order to get more robust results. We examine trends in CPR for an approximately 19-month window moved along the time series and compare them to trends in ρ. Binning CPR into three levels of connectedness (strong, moderate, and weak), we extract the trends in number of connections in each bin over time. We also look at the behavior of CPR during the dot-com bubble by shifting the time series to align their peaks. CPR mainly uncovers that the markets move in and out of periods of strong connectivity erratically, instead of moving monotonically towards increasing global connectivity. This is in contrast to ρ, which gives a picture of ever-increasing correlation. CPR also exhibits that time-shifted markets have high connectivity around the dot-com bubble of 2000. We use significance tests using twin surrogates to interpret all the measures estimated in the study.

  11. Stock market context of the Lévy walks with varying velocity

    NASA Astrophysics Data System (ADS)

    Kutner, Ryszard

    2002-11-01

    We developed the most general Lévy walks with varying velocity, called the Weierstrass walks (WW) model for short, by which one can describe both stationary and non-stationary stochastic time series. We considered a non-Brownian random walk where the walker moves, in general, with a velocity that assumes a different constant value between successive turning points, i.e., the velocity is a piecewise constant function. This model is a kind of Lévy walk where we assume a hierarchical, self-similar (in the stochastic sense) spatio-temporal representation of the main quantities, such as the waiting-time distribution and the sojourn probability density (the principal quantities in the continuous-time random walk formalism). The WW model makes it possible to analyze both the structure of the Hurst exponent and the power-law behavior of the kurtosis. This structure results from the hierarchical spatio-temporal coupling between the walker displacement and the corresponding time of the walks. The analysis uses both the fractional diffusion and the super Burnett coefficients. We constructed a diffusion phase diagram which distinguishes regions occupied by classes of different universality. We study only those classes that are characteristic of stationary situations. We thus have a model ready for describing data presented, e.g., in the form of moving averages, an operation often applied to stochastic time series, especially financial ones. The model was inspired by properties of financial time series and tested on empirical data extracted from the Warsaw stock exchange, since it offers an opportunity to study in an unbiased way several features of a stock exchange in its early stage.

  12. Nursing home bed capacity in the States, 1978-86

    PubMed Central

    Harrington, Charlene; Swan, James H.; Grant, Leslie A.

    1988-01-01

    Trends in nursing home bed supply in the States show large variations in beds per population and a gradual decline in supply per aged population. A cross-sectional time-series regression analysis was used to examine some factors associated with nursing home bed supply. Variation was accounted for by economic factors, supply of alternative services, and climate. State Medicaid reimbursement rates had negative coefficients with supply, suggesting that States may be increasing rates to improve access where supply is limited. Medicaid waiver policy was not found to be significant. PMID:10312634

  13. Global ocean tide mapping using TOPEX/Poseidon altimetry

    NASA Technical Reports Server (NTRS)

    Sanchez, Braulio V.; Cartwright, D. E.; Estes, R. H.; Williamson, R. G.; Colombo, O. L.

    1991-01-01

    The investigation's main goals are to produce accurate tidal maps of the main diurnal, semidiurnal, and long-period tidal components in the world's deep oceans. This will be done by the application of statistical estimation techniques to long time series of altimeter data provided by the TOPEX/POSEIDON mission, with additional information provided by satellite tracking data. In the prelaunch phase, we will use in our simulations and preliminary work data supplied by previous oceanographic missions, such as Seasat and Geosat. These results will be of scientific interest in themselves. The investigation will also be concerned with the estimation of new values, and their uncertainties, for tidal currents and for the physical parameters appearing in the Laplace tidal equations, such as bottom friction coefficients and eddy viscosity coefficients. This will be done by incorporating the altimetry-derived charts of vertical tides as boundary conditions in the integration of those equations. The methodology of the tidal representation will include the use of appropriate series expansions such as ocean-basin normal modes and spherical harmonics. The results of the investigation will be space-determined tidal models of coverage and accuracy superior to that of the present numerical models of the ocean tides, with the concomitant benefits to oceanography and associated disciplinary fields.

  14. Monitoring irrigation water consumption using high resolution NDVI image time series (Sentinel-2 like). Calibration and validation in the Kairouan plain (Tunisia)

    NASA Astrophysics Data System (ADS)

    Saadi, Sameh; Simonneaux, Vincent; Boulet, Gilles; Mougenot, Bernard; Zribi, Mehrez; Lili Chabaane, Zohra

    2015-04-01

    Water scarcity is one of the main factors limiting agricultural development in semi-arid areas. It is thus of major importance to design tools allowing a better management of this resource. Remote sensing has long been used for computing evapotranspiration estimates, an input for crop water balance monitoring. Up to now, only medium- and low-resolution data (e.g. MODIS) have been available on a regular basis to monitor cultivated areas. However, the increasing availability of high-resolution, high-repetitivity VIS-NIR remote sensing, like the forthcoming Sentinel-2 mission to be launched in 2015, offers an unprecedented opportunity to improve this monitoring. In this study, regional crop water consumption was estimated with the SAMIR software (Satellite of Monitoring Irrigation) using the FAO-56 dual crop coefficient water balance model fed with high-resolution NDVI image time series providing estimates of both the actual basal crop coefficient (Kcb) and the vegetation fraction cover. The model includes a soil water model requiring knowledge of the soil water holding capacity, maximum rooting depth, and water inputs. As irrigations are usually not known over large areas, they are simulated based on rules reproducing farmer practices. The main objective of this work is to assess the operationality and accuracy of SAMIR at plot and perimeter scales, when several land use types (winter cereals, summer vegetables…), irrigation and agricultural practices are intertwined in a given landscape, including complex canopies such as sparse orchards. Meteorological ground stations were used to compute the reference evapotranspiration and obtain the rainfall depths. Two time series of ten and fourteen high-resolution SPOT5 images were acquired for the 2008-2009 and 2012-2013 hydrological years over an irrigated area in central Tunisia. They span the various successive crop seasons.
    The images were radiometrically corrected, first using the SMAC6s algorithm and then using invariant objects located in the scene, identified by visual inspection of the images. From these time series, a Normalized Difference Vegetation Index (NDVI) profile was generated for each pixel. SAMIR was first calibrated based on ground measurements of evapotranspiration obtained using eddy-correlation devices installed on irrigated wheat and barley plots. After calibration, the model was run to spatialize irrigation over the whole area, and a validation was done using cumulated seasonal water volumes obtained from ground surveys at both plot and perimeter scales. The results show that although determination of model parameters was successful at plot scale, the irrigation rules required an additional calibration, which was achieved at perimeter scale.
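
The water-balance step described above follows the FAO-56 dual crop coefficient formulation, ETc = (Ks·Kcb + Ke)·ET0, with Kcb estimated from NDVI. The sketch below is a minimal illustration; the linear NDVI-to-Kcb coefficients and the Ke/Ks values are hypothetical stand-ins, not SAMIR's calibrated parameters.

```python
def kcb_from_ndvi(ndvi, a=1.5, b=-0.1):
    # Hypothetical linear NDVI-to-Kcb relation; the coefficients a, b are
    # illustrative and would be calibrated per crop in practice.
    return max(0.0, a * ndvi + b)

def et_crop(ndvi, et0, ke=0.1, ks=1.0):
    # FAO-56 dual crop coefficient: ETc = (Ks*Kcb + Ke) * ET0, with Kcb the
    # basal crop coefficient, Ke the soil-evaporation coefficient, and Ks a
    # water-stress reduction factor (all in mm/day units of ET0).
    kcb = kcb_from_ndvi(ndvi)
    return (ks * kcb + ke) * et0
```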

  15. Single event time series analysis in a binary karst catchment evaluated using a groundwater model (Lurbach system, Austria).

    PubMed

    Mayaud, C; Wagner, T; Benischke, R; Birk, S

    2014-04-16

    The Lurbach karst system (Styria, Austria) is drained by two major springs and replenished by both autogenic recharge from the karst massif itself and a sinking stream that originates in low-permeability schists (allogenic recharge). Detailed data from two events recorded during a tracer experiment in 2008 demonstrate that an overflow from one of the sub-catchments to the other is activated if the discharge of the main spring exceeds a certain threshold. Time series analysis (autocorrelation and cross-correlation) was applied to examine to what extent the various available methods support the identification of the transient inter-catchment flow observed in this binary karst system. As inter-catchment flow is found to be intermittent, the evaluation was focused on single events. In order to support the interpretation of the results from the time series analysis, a simplified groundwater flow model was built using MODFLOW. The groundwater model is based on the current conceptual understanding of the karst system and represents a synthetic karst aquifer for which the same methods were applied. Using the wetting capability package of MODFLOW, the model simulated an overflow similar to what has been observed during the tracer experiment. Various intensities of allogenic recharge were employed to generate synthetic discharge data for the time series analysis. In addition, geometric and hydraulic properties of the karst system were varied in several model scenarios. This approach helps to identify effects of allogenic recharge and aquifer properties in the results from the time series analysis. Comparing the results from the time series analysis of the observed data with those of the synthetic data, a good agreement was found. For instance, the cross-correlograms show similar patterns with respect to time lags and maximum cross-correlation coefficients if appropriate hydraulic parameters are assigned to the groundwater model.
    The comparable behaviors of the real and the synthetic system allow us to deduce that similar aquifer properties are relevant in both systems. In particular, the heterogeneity of aquifer parameters appears to be a controlling factor. Moreover, the location of the overflow connecting the sub-catchments of the two springs is found to be of primary importance regarding the occurrence of inter-catchment flow. This further supports our current understanding of an overflow zone located in the upper part of the Lurbach karst aquifer. Thus, time series analysis of single events can potentially be used to characterize the transient inter-catchment flow behavior of karst systems.
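
The cross-correlograms discussed above rest on a lagged cross-correlation estimate, which can be sketched generically as follows (biased normalisation by the full-series standard deviations; not the specific implementation used in the study).

```python
def cross_correlation(x, y, max_lag):
    # Normalized cross-correlation r(tau) between two equal-length series
    # for lags 0..max_lag, with y shifted forward by tau relative to x.
    # The lag of the maximum indicates the delay between the two signals.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    out = []
    for tau in range(max_lag + 1):
        num = sum((x[i] - mx) * (y[i + tau] - my) for i in range(n - tau))
        out.append(num / (sx * sy))
    return out
```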

  16. Single event time series analysis in a binary karst catchment evaluated using a groundwater model (Lurbach system, Austria)

    PubMed Central

    Mayaud, C.; Wagner, T.; Benischke, R.; Birk, S.

    2014-01-01

    Summary The Lurbach karst system (Styria, Austria) is drained by two major springs and replenished by both autogenic recharge from the karst massif itself and a sinking stream that originates in low-permeability schists (allogenic recharge). Detailed data from two events recorded during a tracer experiment in 2008 demonstrate that an overflow from one of the sub-catchments to the other is activated if the discharge of the main spring exceeds a certain threshold. Time series analysis (autocorrelation and cross-correlation) was applied to examine to what extent the various available methods support the identification of the transient inter-catchment flow observed in this binary karst system. As inter-catchment flow is found to be intermittent, the evaluation was focused on single events. In order to support the interpretation of the results from the time series analysis, a simplified groundwater flow model was built using MODFLOW. The groundwater model is based on the current conceptual understanding of the karst system and represents a synthetic karst aquifer for which the same methods were applied. Using the wetting capability package of MODFLOW, the model simulated an overflow similar to what has been observed during the tracer experiment. Various intensities of allogenic recharge were employed to generate synthetic discharge data for the time series analysis. In addition, geometric and hydraulic properties of the karst system were varied in several model scenarios. This approach helps to identify effects of allogenic recharge and aquifer properties in the results from the time series analysis. Comparing the results from the time series analysis of the observed data with those of the synthetic data, a good agreement was found. For instance, the cross-correlograms show similar patterns with respect to time lags and maximum cross-correlation coefficients if appropriate hydraulic parameters are assigned to the groundwater model.
    The comparable behaviors of the real and the synthetic system allow us to deduce that similar aquifer properties are relevant in both systems. In particular, the heterogeneity of aquifer parameters appears to be a controlling factor. Moreover, the location of the overflow connecting the sub-catchments of the two springs is found to be of primary importance regarding the occurrence of inter-catchment flow. This further supports our current understanding of an overflow zone located in the upper part of the Lurbach karst aquifer. Thus, time series analysis of single events can potentially be used to characterize the transient inter-catchment flow behavior of karst systems. PMID:24748687

  17. The gravitational potential due to uniform disks and rings

    NASA Astrophysics Data System (ADS)

    Lass, H.; Blitzer, L.

    1983-07-01

    The gravitational potential of bodies possessing axial symmetry can be expressed as a power series in distance, with the Legendre polynomials as coefficients. Such series, however, converge so slowly in the neighborhood of thin, uniform disks and rings that too many series terms must be summed in order to obtain an accurate field measure. A gravitational potential expression is presently obtained in closed form, in terms of complete elliptic integrals.
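
As an illustration of such a closed form, the potential of a uniform ring can be written as Phi(r, z) = -(2GM/pi) * K(m) / sqrt((a+r)^2 + z^2) with m = 4ar/((a+r)^2 + z^2). The sketch below evaluates the complete elliptic integral K via the arithmetic-geometric mean and uses illustrative units G*M = 1 and ring radius a = 1.

```python
import math

def ellipk(m):
    # Complete elliptic integral of the first kind, parameter m = k^2,
    # via the arithmetic-geometric mean: K(m) = pi / (2 * AGM(1, sqrt(1-m))).
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)

def ring_potential(r, z, a=1.0, gm=1.0):
    # Potential of a uniform ring of radius a and mass M (gm = G*M) at
    # cylindrical coordinates (r, z); reduces to -G*M/sqrt(a^2 + z^2)
    # on the symmetry axis (r = 0, where m = 0 and K = pi/2).
    d2 = (a + r) ** 2 + z ** 2
    m = 4.0 * a * r / d2
    return -(2.0 * gm / math.pi) * ellipk(m) / math.sqrt(d2)
```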

  18. The symbolic computation of series solutions to ordinary differential equations using trees (extended abstract)

    NASA Technical Reports Server (NTRS)

    Grossman, Robert

    1991-01-01

    Algorithms previously developed by the author give formulas which can be used for the efficient symbolic computation of series expansions of solutions to nonlinear systems of ordinary differential equations. As a by-product of this analysis, formulas are derived which relate trees to the coefficients of the series expansions, similar to the work of Leroux and Viennot, and of Lamnabhi, Leroux and Viennot.

  19. Effect of the Matching Circuit on the Electromechanical Characteristics of Sandwiched Piezoelectric Transducers.

    PubMed

    Lin, Shuyu; Xu, Jie

    2017-02-10

    The input electrical impedance is capacitive when a piezoelectric transducer is excited near its resonance frequency. In order to increase the energy transmission efficiency, a series or parallel inductor should be used to compensate the capacitive impedance of the piezoelectric transducer. In this paper, the effect of the series matching inductor on the electromechanical characteristics of the piezoelectric transducer is analyzed. The dependence of the resonance/anti-resonance frequency, the effective electromechanical coupling coefficient, the electrical quality factor and the electro-acoustical efficiency on the matching inductor is obtained. It is shown that, apart from compensating the capacitive impedance of the piezoelectric transducer, the series matching inductor can also change the electromechanical characteristics of the piezoelectric transducer. When the series matching inductor is increased, the resonance frequency decreases while the anti-resonance frequency remains unchanged, and the effective electromechanical coupling coefficient increases. For the electrical quality factor and the electroacoustic efficiency, the dependence on the matching inductor is different when the transducer is operated at the resonance and the anti-resonance frequency. The electromechanical characteristics of the piezoelectric transducer with a series matching inductor are measured. It is shown that the theoretically predicted relationship between the electromechanical characteristics and the series matching inductor is in good agreement with the experimental results.
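
The qualitative behaviour described above can be checked numerically on a Butterworth-Van Dyke equivalent circuit with an added series inductor; the component values below are illustrative assumptions, not the transducer parameters from the paper.

```python
import math

def bvd_impedance(f, ls, r1=10.0, l1=0.05, c1=2e-10, c0=2e-9):
    # Butterworth-Van Dyke equivalent circuit of a piezoelectric transducer:
    # motional branch R1-L1-C1 in parallel with the clamped capacitance C0,
    # plus an added series matching inductor Ls. Values are illustrative.
    w = 2.0 * math.pi * f
    zm = complex(r1, w * l1 - 1.0 / (w * c1))   # motional branch
    zc0 = complex(0.0, -1.0 / (w * c0))         # clamped capacitance
    zpar = zm * zc0 / (zm + zc0)
    return complex(0.0, w * ls) + zpar

def extrema_freqs(ls, f_lo=40e3, f_hi=60e3, n=20000):
    # Scan |Z(f)|: resonance = impedance minimum, anti-resonance = maximum.
    freqs = [f_lo + (f_hi - f_lo) * i / n for i in range(n + 1)]
    mags = [abs(bvd_impedance(f, ls)) for f in freqs]
    return freqs[mags.index(min(mags))], freqs[mags.index(max(mags))]
```

Running `extrema_freqs` with and without a series inductor reproduces the stated trend: the resonance (impedance minimum) moves down in frequency, while the anti-resonance (a pole of the parallel branch, unaffected by a series element) stays put.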

  20. Effect of the Matching Circuit on the Electromechanical Characteristics of Sandwiched Piezoelectric Transducers

    PubMed Central

    Lin, Shuyu; Xu, Jie

    2017-01-01

    The input electrical impedance is capacitive when a piezoelectric transducer is excited near its resonance frequency. In order to increase the energy transmission efficiency, a series or parallel inductor should be used to compensate the capacitive impedance of the piezoelectric transducer. In this paper, the effect of the series matching inductor on the electromechanical characteristics of the piezoelectric transducer is analyzed. The dependence of the resonance/anti-resonance frequency, the effective electromechanical coupling coefficient, the electrical quality factor and the electro-acoustical efficiency on the matching inductor is obtained. It is shown that, apart from compensating the capacitive impedance of the piezoelectric transducer, the series matching inductor can also change the electromechanical characteristics of the piezoelectric transducer. When the series matching inductor is increased, the resonance frequency decreases while the anti-resonance frequency remains unchanged, and the effective electromechanical coupling coefficient increases. For the electrical quality factor and the electroacoustic efficiency, the dependence on the matching inductor is different when the transducer is operated at the resonance and the anti-resonance frequency. The electromechanical characteristics of the piezoelectric transducer with a series matching inductor are measured. It is shown that the theoretically predicted relationship between the electromechanical characteristics and the series matching inductor is in good agreement with the experimental results. PMID:28208583

  1. A comparison of monthly precipitation point estimates at 6 locations in Iran using integration of soft computing methods and GARCH time series model

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2017-11-01

    Precipitation plays an important role in determining the climate of a region. Precise estimation of precipitation is required to manage and plan water resources, as well as for other related applications such as hydrology, climatology, meteorology and agriculture. Time series of hydrologic variables such as precipitation are composed of deterministic and stochastic parts. Despite this fact, the stochastic part of the precipitation data is not usually considered in modeling the precipitation process. As an innovation, the present study introduces three new hybrid models by integrating soft computing methods including multivariate adaptive regression splines (MARS), Bayesian networks (BN) and gene expression programming (GEP) with a time series model, namely generalized autoregressive conditional heteroscedasticity (GARCH), for modeling of the monthly precipitation. For this purpose, the deterministic (obtained by soft computing methods) and stochastic (obtained by the GARCH time series model) parts are combined with each other. To carry out this research, monthly precipitation data of Babolsar, Bandar Anzali, Gorgan, Ramsar, Tehran and Urmia stations with different climates in Iran were used for the period 1965-2014. Root mean square error (RMSE), relative root mean square error (RRMSE), mean absolute error (MAE) and coefficient of determination (R2) were employed to evaluate the performance of the conventional/single MARS, BN and GEP models, as well as the proposed MARS-GARCH, BN-GARCH and GEP-GARCH hybrid models. It was found that the proposed novel models are more precise than the single MARS, BN and GEP models. Overall, the MARS-GARCH and BN-GARCH models yielded better accuracy than GEP-GARCH. The results of the present study confirmed the suitability of the proposed methodology for precise modeling of precipitation.
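
The four evaluation criteria are standard and can be sketched directly; note that R2 is computed here as one minus the ratio of residual to total sum of squares (other conventions exist).

```python
def error_metrics(obs, pred):
    # RMSE, relative RMSE (normalised by the observed mean), MAE, and
    # coefficient of determination R^2 = 1 - SS_res / SS_tot.
    n = len(obs)
    mean_obs = sum(obs) / n
    sq_err = sum((o - p) ** 2 for o, p in zip(obs, pred))
    rmse = (sq_err / n) ** 0.5
    rrmse = rmse / mean_obs
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1.0 - sq_err / ss_tot
    return rmse, rrmse, mae, r2
```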

  2. Automatic segmentation of invasive breast carcinomas from dynamic contrast-enhanced MRI using time series analysis.

    PubMed

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva

    2014-08-01

    To accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and CADstream output, computed in terms of the DSC was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
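
The overlap measure used above, the Dice similarity coefficient, is straightforward to compute for two binary masks:

```python
def dice_coefficient(mask_a, mask_b):
    # Dice similarity coefficient between two binary masks given as flat
    # 0/1 sequences: DSC = 2 * |A intersect B| / (|A| + |B|).
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```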

  3. Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis

    PubMed Central

    Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A.; Gombos, Eva

    2013-01-01

    Purpose Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise and fitting algorithms. The aim was to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Methods We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist’s segmentation and the output of a commercial software, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured in terms of the Dice similarity coefficient (DSC). Results The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared to the radiologist’s segmentation and 82.1% accuracy and 100% sensitivity when compared to the CADstream output. The overlap of the algorithm output with the radiologist’s segmentation and CADstream output, computed in terms of the DSC, was 0.77 and 0.72 respectively. The algorithm also shows robust stability to imaging noise. Simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series. The amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC=0.95. Conclusion The time-series analysis based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. PMID:24115175

  4. A Comprehensive Set of Impact Data for Common Aerospace Metals

    DOE PAGES

    Brake, Matthew; Reu, Phil L.; Aragon, Dannelle S.

    2017-05-16

    Our results for the two sets of impact experiments are reported here. In order to assist with model development using the impact data reported, the materials are mechanically characterized using a series of standard experiments. The first set of impact data comes from a series of coefficient of restitution experiments, in which a 2 meter long pendulum is used to study "in context" measurements of the coefficient of restitution for eight different materials (6061-T6 Aluminum, Phosphor Bronze alloy 510, Hiperco, Nitronic 60A, Stainless Steel 304, Titanium, Copper, and Annealed Copper). The coefficient of restitution is measured via two different techniques: digital image correlation and laser Doppler vibrometry. Due to the strong agreement of the two different methods, only results from the digital image correlation are reported. The coefficient of restitution experiments are "in context" as the scales of the geometry and impact velocities are representative of common features in the motivating application for this research. Finally, a series of compliance measurements are detailed for the same set of materials. The compliance measurements are conducted using both nano-indentation and micro-indentation machines, providing sub-nm displacement resolution and µN force resolution. Good agreement is seen for load levels spanned by both machines. As the transition from elastic to plastic behavior occurs at contact displacements on the order of 30 nm, this data set provides a unique insight into the transitionary region.
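
The kinematic coefficient of restitution underlying these experiments can be sketched in two common generic forms (from velocities, or from pendulum drop/rebound heights); these are textbook definitions, not the paper's DIC-based measurement procedure.

```python
def coefficient_of_restitution(v_approach, v_rebound):
    # Kinematic definition for impact against a fixed massive target:
    # e = |rebound speed| / |approach speed|, 0 <= e <= 1.
    return abs(v_rebound) / abs(v_approach)

def cor_from_heights(h_drop, h_rebound):
    # Pendulum/drop form: speed scales as sqrt(height), so
    # e = sqrt(h_rebound / h_drop).
    return (h_rebound / h_drop) ** 0.5
```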

  5. A Comprehensive Set of Impact Data for Common Aerospace Metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brake, Matthew; Reu, Phil L.; Aragon, Dannelle S.

    Our results for the two sets of impact experiments are reported here. In order to assist with model development using the impact data reported, the materials are mechanically characterized using a series of standard experiments. The first set of impact data comes from a series of coefficient of restitution experiments, in which a 2 meter long pendulum is used to study "in context" measurements of the coefficient of restitution for eight different materials (6061-T6 Aluminum, Phosphor Bronze alloy 510, Hiperco, Nitronic 60A, Stainless Steel 304, Titanium, Copper, and Annealed Copper). The coefficient of restitution is measured via two different techniques: digital image correlation and laser Doppler vibrometry. Due to the strong agreement of the two different methods, only results from the digital image correlation are reported. The coefficient of restitution experiments are "in context" as the scales of the geometry and impact velocities are representative of common features in the motivating application for this research. Finally, a series of compliance measurements are detailed for the same set of materials. The compliance measurements are conducted using both nano-indentation and micro-indentation machines, providing sub-nm displacement resolution and µN force resolution. Good agreement is seen for load levels spanned by both machines. As the transition from elastic to plastic behavior occurs at contact displacements on the order of 30 nm, this data set provides a unique insight into the transitionary region.

  6. Solving ODE Initial Value Problems With Implicit Taylor Series Methods

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2000-01-01

    In this paper we introduce a new class of numerical methods for integrating ODE initial value problems. Specifically, we propose an extension of the Taylor series method which significantly improves its accuracy and stability while also increasing its range of applicability. To advance the solution from t_n to t_(n+1), we expand a series about the intermediate point t_(n+mu) := t_n + mu*h, where h is the stepsize and mu is an arbitrary parameter called an expansion coefficient. We show that, in general, a Taylor series of degree k has exactly k expansion coefficients which raise its order of accuracy. The accuracy is raised by one order if k is odd, and by two orders if k is even. In addition, if k is three or greater, local extrapolation can be used to raise the accuracy two additional orders. We also examine stability for the problem y' = lambda*y, Re(lambda) < 0, and identify several A-stable schemes. Numerical results are presented for both fixed and variable stepsizes. It is shown that implicit Taylor series methods provide an effective integration tool for most problems, including stiff systems and ODEs with a singular point.
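
For orientation, the classical explicit Taylor series method (expansion about t_n rather than the paper's intermediate point t_n + mu*h) applied to the test problem y' = lambda*y reduces to multiplying by a truncated exponential series:

```python
import math

def taylor_step(y, lam_h, k):
    # One explicit Taylor-series step of degree k for y' = lam*y:
    # y_{n+1} = y_n * sum_{j=0}^{k} (lam*h)^j / j!
    s, term = 1.0, 1.0
    for j in range(1, k + 1):
        term *= lam_h / j
        s += term
    return y * s

def integrate(lam, y0, t_end, n, k):
    # Integrate y' = lam*y from 0 to t_end in n fixed steps of degree k.
    h = t_end / n
    y = y0
    for _ in range(n):
        y = taylor_step(y, lam * h, k)
    return y
```

Raising the degree k raises the order of accuracy, which is the starting point that the paper's expansion coefficients improve upon.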

  7. Mutual connectivity analysis (MCA) using generalized radial basis function neural networks for nonlinear functional connectivity network recovery in resting-state functional MRI

    NASA Astrophysics Data System (ADS)

    D'Souza, Adora M.; Abidin, Anas Zainul; Nagarajan, Mahesh B.; Wismüller, Axel

    2016-03-01

    We investigate the applicability of a computational framework, called mutual connectivity analysis (MCA), for directed functional connectivity analysis in both synthetic and resting-state functional MRI data. This framework comprises first evaluating non-linear cross-predictability between every pair of time series prior to recovering the underlying network structure using community detection algorithms. We obtain the non-linear cross-prediction score between time series using Generalized Radial Basis Functions (GRBF) neural networks. These cross-prediction scores characterize the underlying functionally connected networks within the resting brain, which can be extracted using non-metric clustering approaches, such as the Louvain method. We first test our approach on synthetic models with known directional influence and network structure. Our method is able to capture the directional relationships between time series (with an area under the ROC curve = 0.92 +/- 0.037) as well as the underlying network structure (Rand index = 0.87 +/- 0.063) with high accuracy. Furthermore, we test this method for network recovery on resting-state fMRI data, where results are compared to the motor cortex network recovered from a motor stimulation sequence, resulting in a strong agreement between the two (Dice coefficient = 0.45). We conclude that our MCA approach is effective in analyzing non-linear directed functional connectivity and in revealing underlying functional network structure in complex systems.

  8. Cross-entropy clustering framework for catchment classification

    NASA Astrophysics Data System (ADS)

    Tongal, Hakan; Sivakumar, Bellie

    2017-09-01

    There is an increasing interest in catchment classification and regionalization in hydrology, as they are useful for identification of appropriate model complexity and transfer of information from gauged catchments to ungauged ones, among others. This study introduces a nonlinear cross-entropy clustering (CEC) method for classification of catchments. The method specifically considers embedding dimension (m), sample entropy (SampEn), and coefficient of variation (CV) to represent dimensionality, complexity, and variability of the time series, respectively. The method is applied to daily streamflow time series from 217 gauging stations across Australia. The results suggest that a combination of linear and nonlinear parameters (i.e. m, SampEn, and CV), representing different aspects of the underlying dynamics of streamflows, could be useful for determining distinct patterns of flow generation mechanisms within a nonlinear clustering framework. For the 217 streamflow time series, nine hydrologically homogeneous clusters that have distinct patterns of flow regime characteristics and specific dominant hydrological attributes with different climatic features are obtained. Comparison of the results with those obtained using the widely employed k-means clustering method (which results in five clusters, with the loss of some information about the features of the clusters) suggests the superiority of the cross-entropy clustering method. The outcomes from this study provide a useful guideline for employing the nonlinear dynamic approaches based on hydrologic signatures and for gaining an improved understanding of streamflow variability at a large scale.
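
Two of the three clustering features above, the coefficient of variation and sample entropy, can be sketched with generic implementations; taking the tolerance r as a fraction of the series standard deviation is a common convention, and the study's exact settings are not assumed here.

```python
import math

def coefficient_of_variation(x):
    # CV = population standard deviation / mean, a simple measure of
    # streamflow variability.
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    return sd / mean

def sample_entropy(x, m=2, r_frac=0.2):
    # SampEn(m, r) = -ln(A/B): B counts template pairs of length m within
    # tolerance r (Chebyshev distance), A the same for length m + 1; both
    # counts range over the first n - m template start points.
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    r = r_frac * sd

    def matches(length):
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + t] - x[j + t]) for t in range(length)) <= r:
                    c += 1
        return c

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b)
```

Lower SampEn indicates a more regular (more predictable) flow series; together with m and CV this gives the three-dimensional feature space used for clustering.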

  9. Detrended Cross Correlation Analysis: a new way to figure out the underlying cause of global warming

    NASA Astrophysics Data System (ADS)

    Hazra, S.; Bera, S. K.

    2016-12-01

    Analysing non-stationary time series is a challenging task in the earth sciences, seismology, solar physics, climate, biology, finance, etc. In most cases, external noise such as oscillations, high-frequency noise, and low-frequency noise at different scales leads to erroneous results. Many statistical methods have been proposed to find the correlation between two non-stationary time series. N. Scafetta and B. J. West, Phys. Rev. Lett. 90, 248701 (2003), reported a strong relationship between solar flare intermittency (SFI) and global temperature anomalies (GTA) using diffusion entropy analysis. It has recently been shown that detrended cross-correlation analysis (DCCA) is a better technique for removing the effects of unwanted signals as well as local and periodic trends, which makes it more suitable for finding the correlation between two non-stationary time series. With this technique, the correlation coefficient at different scales can be estimated. Motivated by this, we have applied a new DCCA technique to find the relationship between SFI and GTA. We have also applied this technique to the relationships between GTA and carbon dioxide density, and between GTA and methane density in the Earth's atmosphere. In future work we will examine the relationships between GTA and aerosols, water vapour density, and ozone depletion in the Earth's atmosphere. This analysis will contribute to a better understanding of the causes of global warming.
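
A minimal sketch of the scale-dependent DCCA cross-correlation coefficient follows (in the spirit of Zebende's rho_DCCA; details such as overlapping boxes or the detrending polynomial order may differ from the authors' variant).

```python
import numpy as np

def dcca_coefficient(x, y, scale):
    # rho_DCCA(n): integrate both series into profiles, split the profiles
    # into non-overlapping boxes of length `scale`, remove a linear trend
    # in each box, then normalise the detrended covariance by the two
    # DFA fluctuation functions. Values lie in [-1, 1].
    x, y = np.asarray(x, float), np.asarray(y, float)
    px = np.cumsum(x - x.mean())
    py = np.cumsum(y - y.mean())
    t = np.arange(scale)
    f_xy = f_xx = f_yy = 0.0
    for b in range(len(x) // scale):
        sx = px[b * scale:(b + 1) * scale]
        sy = py[b * scale:(b + 1) * scale]
        rx = sx - np.polyval(np.polyfit(t, sx, 1), t)  # detrended residuals
        ry = sy - np.polyval(np.polyfit(t, sy, 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)
```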

  10. Mutual Connectivity Analysis (MCA) Using Generalized Radial Basis Function Neural Networks for Nonlinear Functional Connectivity Network Recovery in Resting-State Functional MRI.

    PubMed

    DSouza, Adora M; Abidin, Anas Zainul; Nagarajan, Mahesh B; Wismüller, Axel

    2016-03-29

    We investigate the applicability of a computational framework, called mutual connectivity analysis (MCA), for directed functional connectivity analysis in both synthetic and resting-state functional MRI data. This framework comprises first evaluating non-linear cross-predictability between every pair of time series, prior to recovering the underlying network structure using community detection algorithms. We obtain the non-linear cross-prediction score between time series using Generalized Radial Basis Function (GRBF) neural networks. These cross-prediction scores characterize the underlying functionally connected networks within the resting brain, which can be extracted using non-metric clustering approaches, such as the Louvain method. We first test our approach on synthetic models with known directional influence and network structure. Our method is able to capture the directional relationships between time series (with an area under the ROC curve = 0.92 ± 0.037) as well as the underlying network structure (Rand index = 0.87 ± 0.063) with high accuracy. Furthermore, we test this method for network recovery on resting-state fMRI data, where results are compared to the motor cortex network recovered from a motor stimulation sequence, resulting in a strong agreement between the two (Dice coefficient = 0.45). We conclude that our MCA approach is effective in analyzing non-linear directed functional connectivity and in revealing underlying functional network structure in complex systems.

  11. Interpretation of a compositional time series

    NASA Astrophysics Data System (ADS)

    Tolosana-Delgado, R.; van den Boogaart, K. G.

    2012-04-01

    Common methods for multivariate time series analysis use linear operations, from the definition of a time-lagged covariance/correlation to the prediction of new outcomes. However, when the time series response is a composition (a vector of positive components showing the relative importance of a set of parts in a total, like percentages and proportions), then linear operations are afflicted by several problems. For instance, it has long been recognised that (auto/cross-)correlations between raw percentages are spurious, more dependent on which other components are being considered than on any natural link between the components of interest. Also, a long-term forecast of a composition in models with a linear trend will ultimately predict negative components. In general terms, compositional data should not be treated on a raw scale, but after a log-ratio transformation (Aitchison, 1986: The statistical analysis of compositional data. Chapman and Hall). This is so because the information conveyed by compositional data is relative, as stated in their definition. The principle of working in coordinates allows one to apply any sort of multivariate analysis to a log-ratio transformed composition, as long as this transformation is invertible. This principle fully applies to time series analysis. We will discuss how results (both auto/cross-correlation functions and predictions) can be back-transformed, viewed and interpreted in a meaningful way. One view is to use the exhaustive set of all possible pairwise log-ratios, which allows the results to be expressed as D(D - 1)/2 separate, interpretable sets of one-dimensional models showing the behaviour of each possible pairwise log-ratio. Another view is the interpretation of estimated coefficients or correlations back-transformed in terms of compositions. These two views are compatible and complementary. 
These issues are illustrated with time series of seasonal precipitation patterns at different rain gauges of the USA. In this data set, the proportion of annual precipitation falling in winter, spring, summer and autumn is considered a 4-component time series. Three invertible log-ratios are defined for calculations, balancing rainfall in autumn vs. winter, in summer vs. spring, and in autumn-winter vs. spring-summer. Results suggest a 2-year correlation range, and certain oscillatory behaviour in the last balance, which does not occur in the other two.
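    The log-ratio principle described above can be sketched in a few lines. The centred log-ratio (clr) transform shown here is one invertible choice (the three balances used in the rainfall example are another); a hypothetical 4-part seasonal precipitation composition stands in for the real data.

```python
import numpy as np

def clr(composition):
    """Centred log-ratio transform: log of each part over the geometric mean.
    Maps a positive composition to unconstrained coordinates that sum to 0."""
    x = np.asarray(composition, float)
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))
    return np.log(x / g)

def clr_inverse(z):
    """Map clr coordinates back to a composition summing to 1 (closure)."""
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

# hypothetical shares of annual precipitation: winter, spring, summer, autumn
p = np.array([0.40, 0.25, 0.15, 0.20])
z = clr(p)                 # safe for linear time series modelling
p_back = clr_inverse(z)    # back-transform of model output
```

    Linear models (trends, ARMA terms, correlations) are fitted to z; forecasts are mapped back with `clr_inverse`, which guarantees positive components, avoiding the negative-component pathology noted in the abstract.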

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, N.; Lawson, M.; Yu, Y. H.

    WEC-Sim is a midfidelity numerical tool for modeling wave energy conversion devices. The code uses the MATLAB SimMechanics package to solve multibody dynamics and models wave interactions using hydrodynamic coefficients derived from frequency-domain boundary-element methods. This paper presents the new modeling features introduced in the latest release of WEC-Sim. The first feature discussed is the conversion of the fluid memory kernel to a state-space form. This enhancement offers a substantial computational benefit after the hydrodynamic body-to-body coefficients are introduced and the number of interactions increases exponentially with each additional body. Additional features include the ability to calculate the wave-excitation forces based on the instantaneous incident wave angle, allowing the device to weathervane, as well as to import a user-defined wave elevation time series. A review of the hydrodynamic theory for each feature is provided and the successful implementation is verified using test cases.
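    The computational point behind the state-space conversion can be illustrated with a toy scalar kernel (this is not WEC-Sim code): a radiation-memory convolution with kernel K(t) = e^(-t) is reproduced by a one-state linear system, so the cost per time step stops growing with the history length.

```python
import numpy as np

# Toy demonstration: the memory integral F(t) = ∫ K(t - s) v(s) ds with
# K(t) = exp(-t) is equivalent to the one-state system
#   dx/dt = -x + v,   F = x,
# which costs O(1) per step instead of O(history length).
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
v = np.sin(2 * np.pi * t)            # body velocity (toy input)

# direct convolution (simple Riemann sum over the whole history)
K = np.exp(-t)
F_conv = np.convolve(K, v)[:len(t)] * dt

# state-space march (forward Euler)
x = 0.0
F_ss = np.empty_like(t)
for i, vi in enumerate(v):
    F_ss[i] = x
    x += dt * (-x + vi)

max_err = np.max(np.abs(F_conv - F_ss))  # O(dt) discretization mismatch
```

    Real radiation kernels are fitted with a few states (A, B, C matrices) rather than one, but the cost argument is identical.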

  13. Parameterization using Fourier series expansion of the diffuse reflectance of human skin to vary the concentration of the melanocytes

    NASA Astrophysics Data System (ADS)

    Narea, J. Freddy; Muñoz, Aarón A.; Castro, Jorge; Muñoz, Rafael A.; Villalba, Caroleny E.; Martinez, María. F.; Bravo, Kelly D.

    2013-11-01

    Human skin has been studied in numerous investigations, given the interest in knowing information about its physiology, morphology and chemical composition. These parameters can be determined in vivo using non-invasive optical techniques, such as diffuse reflectance spectroscopy. Human skin color is determined by many factors, but primarily by the amount and distribution of the pigment melanin. Melanin is produced by the melanocytes in the basal layer of the epidermis. This research characterizes the spectral response of the human skin using the coefficients of a Fourier series expansion. The radiative transfer equation is simulated with the Monte Carlo method while varying the concentration of the melanocytes (fme) in a simplified model of human skin, and a fit relating the Fourier series coefficient a0 to fme is obtained. It is therefore possible to recover this skin biophysical parameter.
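    The fit of a0 against fme is not reproduced here, but extracting the leading Fourier coefficients from a sampled reflectance curve is straightforward. A minimal sketch, assuming one period of uniformly sampled data (`fourier_coefficients` is an illustrative name, not the paper's code):

```python
import numpy as np

def fourier_coefficients(y, n_terms=3):
    """Leading real Fourier series coefficients (a0, a_k, b_k) of a curve
    sampled uniformly over one period, via discrete orthogonality."""
    y = np.asarray(y, float)
    n = len(y)
    theta = 2 * np.pi * np.arange(n) / n
    a0 = np.mean(y)                                   # constant term
    a = np.array([2 * np.mean(y * np.cos(k * theta))  # cosine terms
                  for k in range(1, n_terms + 1)])
    b = np.array([2 * np.mean(y * np.sin(k * theta))  # sine terms
                  for k in range(1, n_terms + 1)])
    return a0, a, b
```

    Once a0 has been computed for spectra simulated at several fme values, a one-dimensional regression of a0 on fme gives the inverse map used to recover the biophysical parameter.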

  14. A Metric to Quantify Shared Visual Attention in Two-Person Teams

    NASA Technical Reports Server (NTRS)

    Gontar, Patrick; Mulligan, Jeffrey B.

    2015-01-01

    1) Introduction: Critical tasks in high-risk environments are often performed by teams, the members of which must work together efficiently. In some situations, the team members may have to work together to solve a particular problem, while in others it may be better for them to divide the work into separate tasks that can be completed in parallel. We hypothesize that these two team strategies can be differentiated on the basis of shared visual attention, measured by gaze tracking. 2) Methods: Gaze recordings were obtained for two-person flight crews flying a high-fidelity simulator (Gontar & Hoermann, 2014). Gaze was categorized with respect to 12 areas of interest (AOIs). We used these data to construct time series of 12-dimensional vectors, with each vector component representing one of the AOIs. At each time step, each vector component was set to 0, except for the one corresponding to the currently fixated AOI, which was set to 1. This time series could then be averaged in time, with the averaging window time (t) as a variable parameter. For example, when we average with a t of one minute, each vector component represents the proportion of time that the corresponding AOI was fixated within the corresponding one-minute interval. We then computed the Pearson product-moment correlation coefficient between the gaze proportion vectors for each of the two crew members, at each point in time, resulting in a signal representing the time-varying correlation between gaze behaviors. We determined criteria for concluding correlated gaze behavior using two methods: first, a permutation test was applied to the subjects' data. When one crew member's gaze proportion vector is correlated with a random time sample from the other crew member's data, a distribution of correlation values is obtained that differs markedly from the distribution obtained from temporally aligned samples. 
In addition to validating that the gaze tracker was functioning reasonably well, this also allows us to compute probabilities of coordinated behavior for each value of the correlation. As an alternative, we also tabulated distributions of correlation coefficients for synthetic data sets, in which the behavior was modeled as a first-order Markov process, and compared correlation distributions for identical processes with those for disparate processes, allowing us to choose criteria and estimate error rates. 3) Discussion: Our method of gaze correlation is able to measure shared visual attention, and can distinguish between activities involving different instruments. We plan to analyze whether pilots' strategies of sharing visual attention can predict performance. Possible measurements of performance include expert ratings from instructors, fuel consumption, total task time, and failure rate. While developed for two-person crews, our approach can be applied to larger groups, using intra-class correlation coefficients instead of the Pearson product-moment correlation.
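    The windowed gaze-proportion correlation described above can be sketched as follows (the AOI encoding, window length, and function name are illustrative, not the authors' code):

```python
import numpy as np

def gaze_correlation(aoi_a, aoi_b, n_aoi=12, window=60):
    """Time-varying Pearson correlation between two crew members' windowed
    gaze-proportion vectors.

    aoi_a, aoi_b: integer AOI index fixated at each time step.
    Returns one correlation value per full averaging window.
    """
    def proportions(aoi):
        one_hot = np.eye(n_aoi)[np.asarray(aoi)]           # T x n_aoi indicator
        trimmed = one_hot[:len(one_hot) // window * window]
        return trimmed.reshape(-1, window, n_aoi).mean(axis=1)

    out = []
    for u, v in zip(proportions(aoi_a), proportions(aoi_b)):
        u, v = u - u.mean(), v - v.mean()                  # centre the vectors
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        out.append(u @ v / denom if denom > 0 else 0.0)
    return np.array(out)
```

    Feeding one crew member's sequence against a time-shuffled copy of the other's, as in the permutation test, yields the null distribution against which observed correlations are judged.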

  15. Retrospective Analog Year Analyses Using NASA Satellite Data to Improve USDA's World Agricultural Supply and Demand Estimates

    NASA Technical Reports Server (NTRS)

    Teng, William; Shannon, Harlan

    2011-01-01

    The USDA World Agricultural Outlook Board (WAOB) is responsible for monitoring weather and climate impacts on domestic and foreign crop development. One of WAOB's primary goals is to determine the net cumulative effect of weather and climate anomalies on final crop yields. To this end, a broad array of information is consulted, including maps, charts, and time series of recent weather, climate, and crop observations; numerical output from weather and crop models; and reports from the press, USDA attachés, and foreign governments. The resulting agricultural weather assessments are published in the Weekly Weather and Crop Bulletin, to keep farmers, policy makers, and commercial agricultural interests informed of weather and climate impacts on agriculture. Because both the amount and timing of precipitation significantly affect crop yields, WAOB often uses precipitation time series to identify growing seasons with similar weather patterns and help estimate crop yields for the current growing season, based on observed yields in analog years. Historically, these analog years are visually identified; however, the qualitative nature of this method sometimes precludes the definitive identification of the best analog year. Thus, one goal of this study is to derive a more rigorous, statistical approach for identifying analog years, based on a modified coefficient of determination, termed the analog index (AI). A second goal is to compare the performance of AI for time series derived from surface-based observations vs. satellite-based measurements (NASA TRMM and other data).
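    The abstract does not give the modified coefficient of determination behind AI, so the sketch below substitutes a plain R² on cumulative precipitation curves purely to illustrate the selection logic; `analog_index` and `best_analog_year` are hypothetical names, not WAOB's definition.

```python
import numpy as np

def analog_index(current, candidate):
    """Illustrative stand-in for the analog index (AI): coefficient of
    determination between the current season's cumulative precipitation
    curve and a past year's. The paper's modified definition differs."""
    c = np.cumsum(np.asarray(current, float))
    p = np.cumsum(np.asarray(candidate, float))
    ss_res = np.sum((c - p) ** 2)
    ss_tot = np.sum((c - np.mean(c)) ** 2)
    return 1.0 - ss_res / ss_tot

def best_analog_year(current, history):
    """Pick the past year whose precipitation series matches best."""
    scores = {year: analog_index(current, series)
              for year, series in history.items()}
    return max(scores, key=scores.get)
```

    The statistical version of the workflow then reads off the observed yield of the selected analog year as a first estimate for the current season.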

  16. Structure coefficients for different initial metallicities for use in stellar analysis

    NASA Astrophysics Data System (ADS)

    Inlek, Gulay; Budding, Edwin; Demircan, Osman

    2017-09-01

    Internal structure coefficients for zero age Main Sequence (ZAMS) model stars with different initial metallicities are presented. A series of (Eggleton) stellar models with masses between 1 and 40 M⊙ and metallicities Z=0.0001, Z=0.001, Z=0.004, Z=0.01, Z=0.02, and Z=0.03 were used. We have also calculated the same coefficients for a recommended solar metallicity value Z=0.0134 (Asplund et al. in Annu. Rev. Astron. Astrophys. 47:481, 2009). For each model, values of the internal structure constants k2, k3, k4 and related coefficients have been derived by numerically integrating Radau's equation with the (FORTRAN) program RADAU. The (Eggleton) stellar models used come from the `EZ-Web' compilation of the Dept. of Astronomy, University of Wisconsin, Madison. The calculations follow the procedure given by Inlek and Budding (Astrophys. Space Sci. 342:365, 2012). These new results were compared with others in the literature. We deduce that the current state of theoretical evaluation of structure coefficients is generally in sufficient agreement with data obtained from apsidal advance rates of selected well-observed eccentric eclipsing binary stars at the present time, given the probable errors of the latter. However, new results coming from more precise and extensive data sets in the wake of the Kepler Mission, or similar future surveys, may call for further theoretical specification or refinement. The derivation of structure coefficients from observations of apsidal motion in close eccentric binary systems requires specification of relevant parameters from light curve analysis. A self-consistent treatment then implies inclusion of the structure coefficients within the fitting function of such analysis.

  17. Determination of carrier lifetime and diffusion length in Al-doped 4H-SiC epilayers by time-resolved optical techniques

    NASA Astrophysics Data System (ADS)

    Liaugaudas, Gediminas; Dargis, Donatas; Kwasnicki, Pawel; Arvinte, Roxana; Zielinski, Marcin; Jarašiūnas, Kęstutis

    2015-01-01

    A series of p-type 4H-SiC epilayers with aluminium concentration ranging from 2 × 10¹⁶ to 8 × 10¹⁹ cm⁻³ were investigated by time-resolved optical techniques in order to determine the effect of aluminium doping on high-injection carrier lifetime at room temperature and the diffusion coefficient at different injections (from ≈3 × 10¹⁸ to ≈5 × 10¹⁹ cm⁻³) and temperatures (from 78 to 730 K). We find that the defect-limited carrier lifetime τSRH decreases from 20 ns in the low-doped samples down to ≈0.6 ns in the heavily doped epilayers. Accordingly, the ambipolar diffusion coefficient decreases from Da = 3.5 cm² s⁻¹ down to ≈0.6 cm² s⁻¹, corresponding to hole mobilities of µh = 70 cm² V⁻¹ s⁻¹ and 12 cm² V⁻¹ s⁻¹, respectively. In the highly doped epilayers, the injection-induced decrease of the diffusion coefficient, due to the transition from the minority carrier diffusion to the ambipolar diffusion, provided the electron diffusion coefficient of De ≈ 3 cm² s⁻¹. The Al-doping resulted in the gradual decrease of the ambipolar diffusion length, from LD = 2.7 µm down to LD = 0.25 µm in the epilayers with the lowest and highest aluminium concentrations.

  18. Statistical Considerations in Choosing a Test Reliability Coefficient. ACT Research Report Series, 2012 (10)

    ERIC Educational Resources Information Center

    Woodruff, David; Wu, Yi-Fang

    2012-01-01

    The purpose of this paper is to illustrate alpha's robustness and usefulness, using actual and simulated educational test data. The sampling properties of alpha are compared with the sampling properties of several other reliability coefficients: Guttman's lambda[subscript 2], lambda[subscript 4], and lambda[subscript 6]; test-retest reliability;…

  19. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    ERIC Educational Resources Information Center

    Bishara, Anthony J.; Hittner, James B.

    2015-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…

  20. A Simple Student Laboratory on Osmotic Flow, Osmotic Pressure, and the Reflection Coefficient.

    ERIC Educational Resources Information Center

    Feher, Joseph J.; Ford, George D.

    1995-01-01

    Describes a laboratory exercise containing a practical series of experiments that novice students can perform within two hours. The exercise provides a confirmation of van't Hoff's law while placing more emphasis on osmotic flow than pressure. Students can determine parameters such as the reflection coefficient which stress the interaction of both…

  1. Time series modelling of increased soil temperature anomalies during long period

    NASA Astrophysics Data System (ADS)

    Shirvani, Amin; Moradi, Farzad; Moosavi, Ali Akbar

    2015-10-01

    Soil temperature just beneath the soil surface is highly dynamic, has a direct impact on plant seed germination, and is probably the most distinct and recognisable factor governing emergence. An autoregressive integrated moving average (ARIMA) stochastic model was developed to predict the weekly soil temperature anomalies at 10 cm depth, one of the most important soil parameters. The weekly soil temperature anomalies for the periods January 1986-December 2011 and January 2012-December 2013 were used to construct and test the ARIMA models. The proposed ARIMA(2,1,1) model had the minimum value of the Akaike information criterion, and its estimated coefficients were different from zero at the 5% significance level. Prediction of the weekly soil temperature anomalies during the test period using this model yielded a high correlation coefficient between the observed and predicted data - 0.99 for a lead time of 1 week. Linear trend analysis indicated that the soil temperature anomalies warmed significantly, by 1.8°C, during the period 1986-2011.
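    As a dependency-free sketch of the idea (the paper fits a full ARIMA(2,1,1); the MA term is omitted here for brevity, so this is an ARIMA(2,1,0) approximation), the AR coefficients of the once-differenced series can be estimated by least squares:

```python
import numpy as np

def fit_ar2_on_diff(y):
    """Least-squares AR(2) fit on the first-differenced series:
    d[t] = c + phi1*d[t-1] + phi2*d[t-2] + noise.
    Returns (c, phi1, phi2)."""
    d = np.diff(np.asarray(y, float))
    X = np.column_stack([np.ones(len(d) - 2), d[1:-1], d[:-2]])
    coef, *_ = np.linalg.lstsq(X, d[2:], rcond=None)
    return coef

def forecast_next(y, coef):
    """One-step-ahead forecast of the original series (undo differencing)."""
    d = np.diff(np.asarray(y, float))
    d_next = coef[0] + coef[1] * d[-1] + coef[2] * d[-2]
    return y[-1] + d_next
```

    Model selection as in the abstract would compare such candidate orders by an information criterion (e.g. AIC) rather than fixing (2,1) in advance.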

  2. An experimental study of an adaptive-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Celik, Zeki; Roberts, Leonard

    1988-01-01

    A series of adaptive-wall ventilated wind tunnel experiments was carried out to demonstrate the feasibility of using the side wall pressure distribution as the flow variable for the assessment of compatibility with free-air conditions. Iterative and one-step convergence methods were applied using the streamwise velocity component, the side wall pressure distribution, and the normal velocity component in order to investigate their relative merits. The advantage of using the side wall pressure as the flow variable is that it reduces the data-taking time, which is one of the major contributors to the total testing time. In ventilated adaptive-wall wind tunnel testing, side wall pressure measurements require simple instrumentation, as opposed to the Laser Doppler Velocimetry used to measure the velocity components. In ventilated adaptive-wall tunnel testing, influence coefficients are required to determine the pressure corrections in the plenum compartment. Experiments were carried out to evaluate the influence coefficients from side wall pressure distributions, and from streamwise and normal velocity distributions at two control levels. Velocity measurements were made using a two-component Laser Doppler Velocimeter system.

  3. Solar panel acceptance testing using a pulsed solar simulator

    NASA Technical Reports Server (NTRS)

    Hershey, T. L.

    1977-01-01

    Utilizing specific parameters such as the area of an individual cell, the number of cells in series and parallel, and established temperature coefficients of current and voltage, a solar array irradiated with one solar constant at AM0 and at ambient temperature can be characterized by a current-voltage curve for different intensities, temperatures, and even different configurations. Calibration techniques include: uniformity in area, depth and time; absolute and transfer irradiance standards; and dynamic and functional checkout procedures. Typical data are given for items ranging from an individual cell (2x2 cm) to a complete flat solar array (5x5 feet) with 2660 cells, and for cylindrical test items with up to 10,000 cells. The time and energy savings of such testing techniques are emphasized.

  4. Drainage-system development in consecutive melt seasons at a polythermal, Arctic glacier, evaluated by flow-recession analysis and linear-reservoir simulation.

    PubMed

    Hodgkins, Richard; Cooper, Richard; Tranter, Martyn; Wadham, Jemma

    2013-07-26

    The drainage systems of polythermal glaciers play an important role in high-latitude hydrology, and are determinants of ice flow rate. Flow-recession analysis and linear-reservoir simulation of runoff time series are here used to evaluate seasonal and inter-annual variability in the drainage system of the polythermal Finsterwalderbreen, Svalbard, in 1999 and 2000. Linear-flow recessions are pervasive, with mean coefficients of a fast reservoir varying from 16 h (1999) to 41 h (2000), and mean coefficients of an intermittent, slow reservoir varying from 54 h (1999) to 114 h (2000). Drainage-system efficiency is greater overall in the first of the two seasons, the simplest explanation of which is more rapid depletion of the snow cover. Reservoir coefficients generally decline during each season (at 0.22 h d⁻¹ in 1999 and 0.52 h d⁻¹ in 2000), denoting an increase in drainage efficiency. However, coefficients do not exhibit a consistent relationship with discharge. Finsterwalderbreen therefore appears to behave as an intermediate case between temperate glaciers and other polythermal glaciers with smaller proportions of temperate ice. Linear-reservoir runoff simulations exhibit limited sensitivity to a relatively wide range of reservoir coefficients, although the use of fixed coefficients in a spatially lumped model can generate significant subseasonal error. At Finsterwalderbreen, an ice-marginal channel with the characteristics of a fast reservoir, and a subglacial upwelling with the characteristics of a slow reservoir, both route meltwater to the terminus. This suggests that drainage-system components of significantly contrasting efficiencies can coexist spatially and temporally at polythermal glaciers.
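    The reservoir coefficients quoted above come from flow-recession analysis: under the linear-reservoir assumption Q(t) = Q0·exp(-t/k), the coefficient k follows from the slope of ln Q against time. A minimal sketch (function name illustrative):

```python
import numpy as np

def reservoir_coefficient(q, dt_hours=1.0):
    """Estimate the linear-reservoir coefficient k (hours) from a recession
    limb sampled at fixed intervals: ln Q is linear in t with slope -1/k."""
    q = np.asarray(q, float)
    t = np.arange(len(q)) * dt_hours
    slope, _ = np.polyfit(t, np.log(q), 1)
    return -1.0 / slope
```

    Applied to successive recession limbs through a melt season, the sequence of k estimates reveals the declining trend (increasing drainage efficiency) reported in the abstract.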

  5. Quantifying the Temporal Inequality of Nutrient Loads with a Novel Metric

    NASA Astrophysics Data System (ADS)

    Gall, H. E.; Schultz, D.; Rao, P. S.; Jawitz, J. W.; Royer, M.

    2015-12-01

    Inequality is an emergent property of many complex systems. For a given series of stochastic events, some events generate a disproportionately large contribution to system responses compared to other events. In catchments, such responses cause streamflow and solute loads to exhibit strong temporal inequality, with the vast majority of discharge and solute loads exported during the short periods of time in which high-flow events occur. These periods of time are commonly referred to as "hot moments". Although this temporal inequality is widely recognized, there is currently no uniform metric for assessing it. We used a novel application of the Lorenz curve, a method commonly used in economics to quantify income inequality, to quantify the spatial and temporal inequality of streamflow and nutrient (nitrogen and phosphorus) loads exported to the Chesapeake Bay. The Lorenz curve and the corresponding Gini coefficient provide an analytical tool for quantifying inequality that can be applied at any temporal or spatial scale. The Gini coefficient (G) is a formal measure of inequality that varies from 0 to 1, with a value of 0 indicating perfect equality (i.e., fluxes and loads are constant in time) and 1 indicating perfect inequality (i.e., all of the discharge and solute loads are exported during one instant in time). Therefore, G is a simple yet powerful tool for providing insight into the temporal inequality of nutrient transport. We will present the results of our detailed analysis of streamflow and nutrient time series data collected since the early 1980s at 30 USGS gauging stations in the Chesapeake Bay watershed. The analysis is conducted at an annual time scale, enabling trends and patterns to be assessed both temporally (over time at each station) and spatially (for the same period of time across stations). 
The results of this analysis have the potential to create a transformative new framework for identifying "hot moments", improving our ability to temporally and spatially target implementation of best management practices to ultimately improve water quality in the Chesapeake Bay. This method also provides insight into the temporal scales at which hydrologic and biogeochemical variability dominate nutrient export dynamics.
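    The Gini coefficient described above has a simple closed form once the loads are sorted. This is the standard discrete estimator derived from the Lorenz curve, not necessarily the exact variant used by the authors:

```python
import numpy as np

def gini(loads):
    """Gini coefficient G of a series of nonnegative loads.

    G = 0 when export is perfectly even in time; G -> 1 when a single
    instant carries the entire load. Uses the standard rank formula
    G = 2*sum(i * x_i) / (n * sum(x)) - (n + 1)/n on sorted values.
    """
    x = np.sort(np.asarray(loads, float))
    n = len(x)
    i = np.arange(1, n + 1)
    return (2 * np.sum(i * x)) / (n * np.sum(x)) - (n + 1) / n
```

    Computed year by year on daily nutrient loads, G directly quantifies how concentrated each year's export was in "hot moments".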

  6. Reevaluation of Performance of Electric Double-layer Capacitors from Constant-current Charge/Discharge and Cyclic Voltammetry

    PubMed Central

    Allagui, Anis; Freeborn, Todd J.; Elwakil, Ahmed S.; Maundy, Brent J.

    2016-01-01

    The electric characteristics of electric double-layer capacitors (EDLCs) are determined by their capacitance, which is usually measured in the time domain from constant-current charging/discharging and cyclic voltammetry tests, and in the frequency domain using nonlinear least-squares fitting of spectral impedance. The time-voltage and current-voltage profiles from the first two techniques are commonly treated by assuming ideal RsC behavior in spite of the nonlinear response of the device, which in turn provides inaccurate values for its characteristic metrics. In this paper we revisit the calculation of capacitance, power and energy of EDLCs from the time-domain constant-current step response and linear voltage waveform, under the assumption that the device behaves as an equivalent fractional-order circuit consisting of a resistance Rs in series with a constant phase element CPE(Q, α), with Q being a pseudocapacitance and α a dispersion coefficient. In particular, we show with the derived (Rs, Q, α)-based expressions that the corresponding nonlinear effects in voltage-time and current-voltage can be encompassed through nonlinear terms that are functions of the coefficient α, which is not possible with the classical RsC model. We validate our formulae with the experimental measurements of different EDLCs. PMID:27934904
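    The Rs-CPE constant-current step response underlying expressions of this kind can be written directly, since the fractional integral of a constant current i is i·t^α/Γ(1+α). A hedged sketch (function name illustrative, not the paper's notation); setting α = 1 recovers the ideal RsC response with C = Q:

```python
from math import gamma

def edlc_voltage(t, i_charge, r_s, q, alpha):
    """Voltage of Rs in series with CPE(Q, alpha) under constant current:
    v(t) = i*Rs + i * t**alpha / (Q * gamma(1 + alpha)).
    alpha = 1 reduces to the ideal RsC response v = i*Rs + i*t/C, C = Q."""
    return i_charge * r_s + i_charge * t ** alpha / (q * gamma(1.0 + alpha))
```

    The sublinear (α < 1) growth of v(t) is the nonlinearity that the classical RsC treatment misses when reading capacitance off a charge/discharge slope.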

  7. DCCA analysis of renewable and conventional energy prices

    NASA Astrophysics Data System (ADS)

    Paiva, Aureliano Sancho Souza; Rivera-Castro, Miguel Angel; Andrade, Roberto Fernandes Silva

    2018-01-01

    Here we investigate the inter-influence of oil prices and renewable energy sources. The non-stationary time series are scrutinized within the Detrended Cross-Correlation Analysis (DCCA) framework, where the resulting DCCA coefficient provides a useful and reliable index to evaluate the cross-correlation between events at the same time instant as well as at suitably chosen time lags. The analysis is based on the quotient of two successive daily closing oil prices and composite indices of renewable energy sources in the USA and Europe in the period 2006-2015, which was subject to several social and economic driving forces, such as the increase of social pressure in favor of the use of non-fossil energy sources and the worldwide economic crisis that started in 2008. The DCCA coefficient is evaluated for different window sizes, extracting information on short- and long-term correlation between the indices. In particular, strong correlation between the behaviors of the two distinct economic sectors is observed for large time intervals during the worst period of the economic crisis (2008-2012), hinting at a very cautious behavior of the economic agents. Before and after this period, the behaviors of the two economic sectors are overwhelmingly uncorrelated or only very weakly correlated. The results reported here may be useful to select proper strategies in future similar scenarios.

  8. Reevaluation of Performance of Electric Double-layer Capacitors from Constant-current Charge/Discharge and Cyclic Voltammetry

    NASA Astrophysics Data System (ADS)

    Allagui, Anis; Freeborn, Todd J.; Elwakil, Ahmed S.; Maundy, Brent J.

    2016-12-01

    The electric characteristics of electric double-layer capacitors (EDLCs) are determined by their capacitance, which is usually measured in the time domain from constant-current charging/discharging and cyclic voltammetry tests, and in the frequency domain using nonlinear least-squares fitting of spectral impedance. The time-voltage and current-voltage profiles from the first two techniques are commonly treated by assuming ideal RsC behavior in spite of the nonlinear response of the device, which in turn provides inaccurate values for its characteristic metrics. In this paper we revisit the calculation of capacitance, power and energy of EDLCs from the time-domain constant-current step response and linear voltage waveform, under the assumption that the device behaves as an equivalent fractional-order circuit consisting of a resistance Rs in series with a constant phase element CPE(Q, α), with Q being a pseudocapacitance and α a dispersion coefficient. In particular, we show with the derived (Rs, Q, α)-based expressions that the corresponding nonlinear effects in voltage-time and current-voltage can be encompassed through nonlinear terms that are functions of the coefficient α, which is not possible with the classical RsC model. We validate our formulae with the experimental measurements of different EDLCs.

  9. Reevaluation of Performance of Electric Double-layer Capacitors from Constant-current Charge/Discharge and Cyclic Voltammetry.

    PubMed

    Allagui, Anis; Freeborn, Todd J; Elwakil, Ahmed S; Maundy, Brent J

    2016-12-09

    The electric characteristics of electric double-layer capacitors (EDLCs) are determined by their capacitance, which is usually measured in the time domain from constant-current charging/discharging and cyclic voltammetry tests, and in the frequency domain using nonlinear least-squares fitting of spectral impedance. The time-voltage and current-voltage profiles from the first two techniques are commonly treated by assuming ideal RsC behavior in spite of the nonlinear response of the device, which in turn provides inaccurate values for its characteristic metrics [corrected]. In this paper we revisit the calculation of capacitance, power and energy of EDLCs from the time-domain constant-current step response and linear voltage waveform, under the assumption that the device behaves as an equivalent fractional-order circuit consisting of a resistance Rs in series with a constant phase element CPE(Q, α), with Q being a pseudocapacitance and α a dispersion coefficient. In particular, we show with the derived (Rs, Q, α)-based expressions that the corresponding nonlinear effects in voltage-time and current-voltage can be encompassed through nonlinear terms that are functions of the coefficient α, which is not possible with the classical RsC model. We validate our formulae with the experimental measurements of different EDLCs.

  10. Europe's Preparation For GOCE Gravity Field Recovery

    NASA Astrophysics Data System (ADS)

    Suenkel, H.

    2001-12-01

    The European Space Agency ESA is preparing for its first dedicated gravity field mission GOCE (Gravity Field and Steady-state Ocean Circulation Explorer) with a proposed launch in fall 2005. The mission's goal is the mapping of the Earth's static gravity field with very high resolution and utmost accuracy on a global scale. GOCE is a drag-free mission, flown in a circular and sun-synchronous orbit at an altitude between 240 and 250 km. Each of the two operational phases will last for 6 months. GOCE is based on a sensor fusion concept combining high-low satellite-to-satellite tracking (SST) and satellite gravity gradiometry (SGG). The transformation of the GOCE sensor data into a scientific product of utmost quality and reliability requires a well-coordinated effort of experts in satellite geodesy, applied mathematics and computer science. Several research groups in Europe have this expertise and decided to form the "European GOCE Gravity Consortium (EGG-C)". The EGG-C activities are subdivided into tasks such as standard and product definition, data base and data dissemination, precise orbit determination, global gravity field model solutions and regional solutions, solution validation, communication and documentation, and the interfacing to level 3 product scientific users. The central issue of GOCE data processing is, of course, the determination of the global gravity field model using three independent mathematical-numerical techniques which had been designed and pre-developed in the course of several scientific preparatory studies of ESA: 1. The direct solution, which is a least squares adjustment technique based on a preconditioned conjugate gradient method (PCGM). The method is capable of efficiently transforming the calibrated and validated SST and SGG observations, directly or via lumped coefficients, into harmonic coefficients of the gravitational potential. 2. The time-wise approach, which considers both SST and SGG data as a time series. 
For an idealized repeat mission such a time series can be very efficiently transformed into lumped coefficients using fast Fourier techniques. For a realistic mission scenario this transformation has to be extended by an iteration process. 3. The space-wise approach which, after having transformed the original observations onto a spatial geographical grid, transforms the pseudo-observations into harmonic coefficients using a fast collocation technique. A successful mission presupposed, GOCE will finally deliver the Earth's gravity field with a resolution of about 70 km half wavelength and a global geoid with an accuracy of about 1 cm.
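The least-squares step of the direct solution can be illustrated with a minimal conjugate-gradient sketch on the normal equations. This is an unpreconditioned, toy-scale stand-in for the PCGM described above; the design matrix and "observations" are synthetic, not GOCE data:

```python
import numpy as np

def cg_least_squares(A, y, tol=1e-10, max_iter=200):
    """Solve min ||A x - y||_2 by conjugate gradients applied to the
    normal equations A^T A x = A^T y (no preconditioner, for brevity)."""
    AtA = A.T @ A
    b = A.T @ y
    x = np.zeros(A.shape[1])
    r = b - AtA @ x          # residual of the normal equations
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = AtA @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy example: recover the coefficients of an overdetermined linear model.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
x_true = np.arange(1.0, 6.0)
x_hat = cg_least_squares(A, A @ x_true)
```

In the real mission processing the system is vastly larger and a preconditioner is essential; the sketch only shows the iteration structure.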

  11. On the connection coefficients and recurrence relations arising from expansions in series of Laguerre polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2003-05-01

    A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of a certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach for building and solving recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultraspherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.
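For low degrees, Hermite-Laguerre connection coefficients can be cross-checked numerically. The sketch below uses NumPy's polynomial basis conversions rather than the recurrence-based approach of the paper; it expresses the Hermite polynomial H2(x) = 4x^2 - 2 in the Laguerre basis:

```python
import numpy as np
from numpy.polynomial import hermite, laguerre

# By hand: x = L0 - L1 and x^2 = 2 L0 - 4 L1 + 2 L2, so
# H2(x) = 4 x^2 - 2 = 6 L0 - 16 L1 + 8 L2.
power = hermite.herm2poly([0.0, 0.0, 1.0])   # H2 in the power basis: [-2, 0, 4]
conn = laguerre.poly2lag(power)              # connection coefficients in {L0, L1, L2}
```

The same two-step conversion (source basis to power basis to target basis) works for any of NumPy's classical polynomial families, though it becomes numerically delicate at high degree, which is precisely where closed-form connection formulae are valuable.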

  12. On the construction of recurrence relations for the expansion and connection coefficients in series of Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2004-01-01

    Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of a certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expansion coefficients is also given. A simple approach for constructing and solving recursively for the connection coefficients between Jacobi-Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi polynomials are developed.

  13. Integrating a Linear Signal Model with Groundwater and Rainfall time-series on the Characteristic Identification of Groundwater Systems

    NASA Astrophysics Data System (ADS)

    Chen, Yu-Wen; Wang, Yetmen; Chang, Liang-Cheng

    2017-04-01

    Groundwater resources play a vital role in regional water supply. To avoid irreversible environmental impacts such as land subsidence, characteristic identification of a groundwater system is crucial before sustainable management of the groundwater resource. This study proposes a signal-processing approach to identify the character of groundwater systems based on long-term hydrologic observations, including groundwater level and rainfall. The study process contains two steps. First, a linear signal model (LSM) is constructed and calibrated to simulate the variation of underground hydrology based on the time series of groundwater levels and rainfall. The mass balance equation of the proposed LSM contains three major terms: the net rate of horizontal exchange, the rate of rainfall recharge, and the rate of pumpage; four parameters are required for calibration. Because reliable records of pumpage are rare, the time-variant groundwater amplitudes of daily frequency (P) calculated by the short-time Fourier transform (STFT) are assumed to be linear indicators of pumpage instead of pumpage records. Time series obtained from 39 observation wells and 50 rainfall stations in and around the study area, the Pingtung Plain, are paired for model construction. Second, the well-calibrated parameters of the linear signal model can be used to interpret the characteristics of the groundwater system. For example, the rainfall recharge coefficient (γ) represents the transform ratio between rainfall intensity and groundwater level rise. An area around an observation well with higher γ indicates that the saturated zone there is easily affected by rainfall events and that the material of the unsaturated zone might be gravel or coarse sand with a high infiltration ratio. Considering the spatial distribution of γ, the values of γ decrease from the upstream to the downstream of major rivers and are also correlated with the spatial distribution of the grain size of surface soil. 
Via the time series of groundwater levels and rainfall, the well-calibrated parameters of the LSM are able to identify the characteristics of the aquifer.
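The mass balance structure described above (horizontal exchange, rainfall recharge, pumpage) can be sketched as a simple daily update rule. All parameter names and values below are illustrative stand-ins, not the paper's calibrated model:

```python
import numpy as np

def simulate_lsm(rain, pump_index, gamma, kappa, beta, h_boundary, h0):
    """Daily groundwater-level simulation with three mass-balance terms:
    rainfall recharge (gamma * rain), net horizontal exchange
    (kappa * (h_boundary - h)), and pumpage, proxied here by a
    pumping-activity index (beta * pump_index)."""
    h = np.empty(len(rain) + 1)
    h[0] = h0
    for t in range(len(rain)):
        recharge = gamma * rain[t]
        exchange = kappa * (h_boundary - h[t])
        pumping = beta * pump_index[t]
        h[t + 1] = h[t] + recharge + exchange - pumping
    return h

rain = np.array([0.0, 20.0, 5.0, 0.0, 0.0])   # mm/day, synthetic
pump = np.ones(5)                              # constant pumping proxy
levels = simulate_lsm(rain, pump, gamma=0.05, kappa=0.1, beta=0.02,
                      h_boundary=10.0, h0=10.0)
```

Calibrating gamma, kappa, and beta against observed level series is what lets the parameters be read physically, e.g. a high gamma flagging a readily recharged, coarse-grained unsaturated zone.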

  14. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results. Part II; Cloud Coverage

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    This is the second part of a study on how temporal sampling frequency affects satellite retrievals in support of the Deep Space Climate Observatory (DSCOVR) mission. Continuing from Part 1, which looked at Earth's radiation budget, this paper presents the effect of sampling frequency on DSCOVR-derived cloud fraction. The output from NASA's Goddard Earth Observing System version 5 (GEOS-5) Nature Run is used as the "truth". The effect of temporal resolution on potential DSCOVR observations is assessed by subsampling the full Nature Run data. A set of metrics, including uncertainty and absolute error in the subsampled time series, correlation between the original and the subsamples, and Fourier analysis have been used for this study. Results show that, for a given sampling frequency, the uncertainties in the annual mean cloud fraction of the sunlit half of the Earth are larger over land than over ocean. Analysis of correlation coefficients between the subsamples and the original time series demonstrates that even though sampling at certain longer time intervals may not increase the uncertainty in the mean, the subsampled time series is further and further away from the "truth" as the sampling interval becomes larger and larger. Fourier analysis shows that the simulated DSCOVR cloud fraction has underlying periodical features at certain time intervals, such as 8, 12, and 24 h. If the data is subsampled at these frequencies, the uncertainties in the mean cloud fraction are higher. These results provide helpful insights for the DSCOVR temporal sampling strategy.
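The sampling-frequency effect described above can be reproduced on a toy series: sampling at a multiple of an underlying cycle locks onto one phase of the cycle and biases the mean, while a co-prime interval walks through all phases. The series here is purely illustrative, not Nature Run output:

```python
import numpy as np

# Hourly "truth" with a 24-h cycle around a mean of 10
# (a stand-in for sunlit-disk cloud fraction).
t = np.arange(0, 24 * 200)                 # 200 days, hourly
truth = 10.0 + np.sin(2 * np.pi * t / 24)

def subsampled_mean(series, interval, offset=0):
    """Mean of the series sampled every `interval` hours."""
    return series[offset::interval].mean()

# 24-h sampling sees the same phase every day -> biased mean;
# 5-h sampling (co-prime with 24) averages over all phases.
bias_24h = abs(subsampled_mean(truth, 24, offset=6) - truth.mean())
bias_5h = abs(subsampled_mean(truth, 5) - truth.mean())
```

This mirrors the abstract's finding that subsampling at the series' underlying periods (8, 12, 24 h) inflates the uncertainty in the mean.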

  15. The relationship between pay day and violent death in Guatemala: a time series analysis.

    PubMed

    Ramírez, Dorian E; Branas, Charles C; Richmond, Therese S; Bream, Kent; Xie, Dawei; Velásquez-Tohom, Magda; Wiebe, Douglas J

    2017-04-01

    To assess if violent deaths were associated with pay days in Guatemala. Interrupted time series analysis. Guatemalan national autopsy databases. Daily violence-related autopsy data for 22 418 decedents from 2009 to 2012. Data were provided by the Guatemalan National Institute of Forensic Sciences. Multiple pay-day lags and other important days such as holidays were tested. Absolute and relative estimates of excess violent deaths on pay days and holidays. The occurrence of violent deaths was not associated with pay days. However, a significant association was observed for national holidays, and this association was more pronounced when national holidays and pay days occurred simultaneously. This effect was observed mainly in males, who constituted the vast majority of violent deaths in Guatemala. An estimated 112 (coefficient=3.12; 95% CI 2.15 to 4.08; p<0.01) more male violent deaths occurred on holidays than were expected. An estimated 121 (coefficient=4.64; 95% CI 3.41 to 5.88; p<0.01) more male violent deaths than expected occurred on holidays that coincided with the first 2 days following a pay day. Men in Guatemala experience violent deaths at an elevated rate when pay days coincide with national holidays. Efforts to be better prepared for violence during national holidays and to prevent violent deaths by rescheduling pay days when these days co-occur with national holidays should be considered. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
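The excess-deaths estimate above comes from regression on daily counts with day-type indicators. A minimal sketch of that idea, on synthetic data with a known injected holiday effect (not the Guatemalan autopsy data, and ordinary least squares rather than the study's interrupted time series specification):

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 1460                             # four years of daily counts
holiday = np.zeros(n_days)
holiday[::73] = 1.0                       # synthetic holiday calendar
baseline = 15.0
excess = 3.0                              # injected excess deaths per holiday
counts = baseline + excess * holiday + rng.normal(0.0, 1.0, n_days)

# OLS with an intercept and a holiday indicator; coef[1] estimates the
# excess count on holidays relative to ordinary days.
X = np.column_stack([np.ones(n_days), holiday])
coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
```

Additional indicator columns (pay day, pay day within a holiday) would give the interaction effects the study reports.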

  16. Bus electrode having same thermal expansion coefficient as crystalline silicon solar cell

    NASA Astrophysics Data System (ADS)

    Kato, T.; Morita, H.; Nakano, H.; Washida, H.; Onoe, A.; Inomata, K.; Mori, F.; Sugai, S.

    1982-01-01

    It is well known that the bus electrode plays a main role in the series resistance of solar cells. Bus electrodes composed of bare leads whose thermal expansion coefficients are less than that of the cell, coated with highly conducting metals, were investigated. These leads exhibited a lower expansion coefficient than expected from the empirical law, and the origins of this phenomenon are explained. The effect of work hardening on the expansion coefficient was then measured. Solar cell fabrication with these leads and rigid solders rationalized assembly processing. Cell characteristics proved to be excellent compared with conventional ones. Finally, lead costs were compared for various materials.

  17. Electrochemical measurement of lateral diffusion coefficients of ubiquinones and plastoquinones of various isoprenoid chain lengths incorporated in model bilayers.

    PubMed Central

    Marchal, D; Boireau, W; Laval, J M; Moiroux, J; Bourdillon, C

    1998-01-01

    The long-range diffusion coefficients of isoprenoid quinones in a model of lipid bilayer were determined by a method avoiding fluorescent probe labeling of the molecules. The quinone electron carriers were incorporated in supported dimyristoylphosphatidylcholine layers at physiological molar fractions (<3 mol%). The elaborate bilayer template contained a built-in gold electrode at which the redox molecules solubilized in the bilayer were reduced or oxidized. The lateral diffusion coefficient of a natural quinone like UQ10 or PQ9 was (2.0 ± 0.4) × 10⁻⁸ cm² s⁻¹ at 30 °C, two to three times smaller than the diffusion coefficient of a lipid analog in the same artificial bilayer. The lateral mobilities of the oxidized or reduced forms could be determined separately and were found to be identical over the pH range 4-13. For a series of isoprenoid quinones, UQ2 or PQ2 to UQ10, the diffusion coefficient exhibited a marked dependence on the length of the isoprenoid chain. The data fit very well the quantitative behavior predicted by a continuum fluid model in which the isoprenoid chains are taken as rigid particles moving in the less viscous part of the bilayer and rubbing against the more viscous layers of lipid heads. The present study supports the concept of a homogeneous pool of quinone located in the less viscous region of the bilayer. PMID:9545054

  18. Arithmetical functions and irrationality of Lambert series

    NASA Astrophysics Data System (ADS)

    Duverney, Daniel

    2011-09-01

    We use a method of Erdős to prove the linear independence over Q of the numbers 1, ∑_{n=1}^{+∞} 1/(q^{n²}−1), and ∑_{n=1}^{+∞} n/(q^{n²}−1) for every q ∈ Z with |q| ≥ 2. The main idea consists in considering the two series above as Lambert series, which allows expanding them as power series in 1/q. The Taylor coefficients of these expansions are arithmetical functions whose properties allow the application of an elementary irrationality criterion, which yields the result.
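The Lambert-series expansion mentioned above can be made concrete: with x = 1/q, expanding each term of ∑ x^(n²)/(1 − x^(n²)) geometrically gives ∑_k x^(k·n²), so the coefficient of x^m is the number of integers n with n² dividing m, an arithmetical function. A small numerical check of this identity (an illustration consistent with the abstract's idea, not the paper's proof):

```python
# Power-series coefficients of sum_{n>=1} x^(n^2) / (1 - x^(n^2)) up to x^M.
M = 50
coeff = [0] * (M + 1)
n = 1
while n * n <= M:
    # x^(n^2)/(1 - x^(n^2)) = x^(n^2) + x^(2 n^2) + x^(3 n^2) + ...
    for k in range(1, M // (n * n) + 1):
        coeff[k * n * n] += 1
    n += 1

def count_square_divisors(m):
    """Direct count of n with n^2 | m, for cross-checking."""
    return sum(1 for n in range(1, m + 1) if n * n <= m and m % (n * n) == 0)
```

For example, the coefficient of x^4 is 2 (n = 1 and n = 2 both have n² | 4).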

  19. The combined use of dynamic factor analysis and wavelet analysis to evaluate latent factors controlling complex groundwater level fluctuations in a riverside alluvial aquifer

    NASA Astrophysics Data System (ADS)

    Oh, Yun-Yeong; Yun, Seong-Taek; Yu, Soonyoung; Hamm, Se-Yeong

    2017-12-01

    To identify and quantitatively evaluate complex latent factors controlling groundwater level (GWL) fluctuations in a riverside alluvial aquifer influenced by barrage construction, we developed the combined use of dynamic factor analysis (DFA) and wavelet analysis (WA). Time series data of GWL, river water level and precipitation were collected for 3 years (July 2012 to June 2015) from an alluvial aquifer underneath an agricultural area of the Nakdong river basin, South Korea. Based on the wavelet coefficients of the final approximation, the GWL data was clustered into three groups (WCG1 to WCG3). Two dynamic factors (DFs) were then extracted using DFA for each group; thus, six major factors were extracted. Next, the time-frequency variability of the extracted DFs was examined using multiresolution cross-correlation analysis (MRCCA) with the following steps: 1) major driving forces and their scales in GWL fluctuations were identified by comparing maximum correlation coefficients (rmax) between DFs and the GWL time series and 2) the results were supplemented using the wavelet transformed coherence (WTC) analysis between DFs and the hydrological time series. Finally, relative contributions of six major DFs to the GWL fluctuations could be quantitatively assessed by calculating the effective dynamic efficiency (Def). 
The characteristics and relevant processes of the six identified DFs are: 1) WCG1DF4,1, indicative of seasonal agricultural pumping (scales = 64-128 days; rmax = 0.68-0.89; Def ≤ 23.1%); 2) WCG1DF4,4, representing the cycle of regional groundwater recharge (scales = 64-128 days; rmax = 0.98-1.00; Def ≤ 11.1%); 3) WCG2DF4,1, indicating the complex interaction between the episodes of precipitation and direct runoff (scales = 2-8 days; rmax = 0.82-0.91; Def ≤ 35.3%) and seasonal GW-RW interaction (scales = 64-128 days; rmax = 0.76-0.91; Def ≤ 14.2%); 4) WCG2DF4,4, reflecting the complex effects of seasonal pervasive pumping and the local recharge cycle (scales = 64-128 days; rmax = 0.86-0.94; Def ≤ 16.4%); 5) WCG3DF4,2, the result of temporal pumping (scales = 2-8 days; rmax = 0.98-0.99; Def ≤ 7.7%); and 6) WCG3DF4,4, indicating the local recharge cycle (scales = 64-128 days; rmax = 0.76-0.91; Def ≤ 34.2%). This study shows that major driving forces controlling GWL time series data in a complex hydrological setting can be identified and quantitatively evaluated by the combined use of DFA and WA and applying MRCCA and WTC.
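The core of the rmax comparison above is a maximum lagged correlation between a dynamic factor and a hydrological series. A single-scale, simplified stand-in for the multiresolution cross-correlation analysis (synthetic series, not the Nakdong basin data):

```python
import numpy as np

def max_lagged_corr(x, y, max_lag):
    """Maximum Pearson correlation between x and y over integer lags,
    returning (r_max, lag at which it occurs)."""
    best_r, best_lag = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag

t = np.arange(300)
driver = np.sin(2 * np.pi * t / 50)   # e.g. a recharge cycle
response = np.roll(driver, 7)         # response lags the driver by 7 steps
r_max, lag = max_lagged_corr(response, driver, max_lag=20)
```

In the multiresolution setting, the same computation is repeated on wavelet-filtered versions of the series so that r_max can be attributed to a scale band (e.g. 2-8 days vs 64-128 days).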

  20. Excellence of numerical differentiation method in calculating the coefficients of high temperature series expansion of the free energy and convergence problem of the expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S., E-mail: chixiayzsq@yahoo.com; Solana, J. R.

    2014-12-28

    In this paper, it is shown that the numerical differentiation method in performing the coupling parameter series expansion [S. Zhou, J. Chem. Phys. 125, 144518 (2006); AIP Adv. 1, 040703 (2011)] excels at calculating the coefficients a_i of the hard sphere high temperature series expansion (HS-HTSE) of the free energy. Both canonical ensemble and isothermal-isobaric ensemble Monte Carlo simulations for a fluid interacting through a hard sphere attractive Yukawa (HSAY) potential with extremely short ranges and at very low temperatures are performed, and the resulting two sets of thermodynamic data are in excellent agreement with each other, and well qualified to be used for assessing convergence of the HS-HTSE for the HSAY fluid. Results of the evaluation are that (i) by referring to the results of a hard sphere square well fluid [S. Zhou, J. Chem. Phys. 139, 124111 (2013)], it is found that the existence of a partial sum limit of the high temperature series expansion and consistency between the limit value and the true solution depend on both the potential shapes and the temperatures considered. (ii) For the extremely short range HSAY potential, the HS-HTSE coefficients a_i fall rapidly with the order i, and the HS-HTSE converges from the fourth order on; however, it does not converge exactly to the true solution at reduced temperatures lower than 0.5, where the difference between the partial sum limit of the HS-HTSE series and the simulation result tends to become more evident. Worth mentioning is that before the convergence order is reached, each truncation is improved by the succeeding one, and the fourth- and higher-order truncations give the most dependable and qualitatively correct thermodynamic results for the HSAY fluid even at reduced temperatures as low as 0.25.

  1. A method to identify differential expression profiles of time-course gene data with Fourier transformation

    PubMed Central

    2013-01-01

    Background Time course gene expression experiments are an increasingly popular method for exploring biological processes. Temporal gene expression profiles provide an important characterization of gene function, as biological systems are both developmental and dynamic. With such data it is possible to study gene expression changes over time and thereby to detect differential genes. Much of the early work on analyzing time series expression data relied on methods developed originally for static data and thus there is a need for improved methodology. Since time series expression is a temporal process, its unique features such as autocorrelation between successive points should be incorporated into the analysis. Results This work aims to identify genes that show different gene expression profiles across time. We propose a statistical procedure to discover gene groups with similar profiles using a nonparametric representation that accounts for the autocorrelation in the data. In particular, we first represent each profile in terms of a Fourier basis, and then we screen out genes that are not differentially expressed based on the Fourier coefficients. Finally, we cluster the remaining gene profiles using a model-based approach in the Fourier domain. We evaluate the screening results in terms of sensitivity, specificity, FDR and FNR, compare with the Gaussian process regression screening in a simulation study and illustrate the results by application to yeast cell-cycle microarray expression data with alpha-factor synchronization. The key elements of the proposed methodology: (i) representation of gene profiles in the Fourier domain; (ii) automatic screening of genes based on the Fourier coefficients and taking into account autocorrelation in the data, while controlling the false discovery rate (FDR); (iii) model-based clustering of the remaining gene profiles. Conclusions Using this method, we identified a set of cell-cycle-regulated time-course yeast genes. 
The proposed method is general and can be potentially used to identify genes which have the same patterns or biological processes, and help facing the present and forthcoming challenges of data analysis in functional genomics. PMID:24134721
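The screening step described above (keep profiles whose low-order Fourier coefficients carry appreciable energy) can be sketched in a few lines. This is a bare-bones illustration of the idea, not the paper's FDR-controlled procedure, and the profiles and threshold are synthetic:

```python
import numpy as np

def fourier_screen(profiles, n_coef=3, threshold=1.0):
    """Return a boolean mask keeping profiles whose first `n_coef`
    non-DC Fourier coefficients carry energy above `threshold`;
    flat (non-differential) profiles are screened out."""
    spectra = np.fft.rfft(profiles, axis=1)
    energy = np.abs(spectra[:, 1:1 + n_coef]) ** 2   # skip the DC term
    score = energy.sum(axis=1)
    return score > threshold

t = np.linspace(0, 2 * np.pi, 24, endpoint=False)   # 24 time points
cyclic = np.sin(t)                                  # cell-cycle-like profile
flat = np.zeros(24)                                 # non-differential profile
keep = fourier_screen(np.vstack([cyclic, flat]))
```

In the full method, the threshold is calibrated so that the false discovery rate is controlled, and the surviving Fourier-domain representations are then clustered with a model-based approach.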

  2. Analysis of oscillatory motion of a light airplane at high values of lift coefficient

    NASA Technical Reports Server (NTRS)

    Batterson, J. G.

    1983-01-01

    A modified stepwise regression is applied to flight data from a light research airplane operating at high angles of attack. The well-known phenomenon referred to as buckling or porpoising is analyzed and modeled using both power series and spline expansions of the aerodynamic force and moment coefficients associated with the longitudinal equations of motion.
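The model-structure-determination idea behind stepwise regression can be sketched with a greedy forward selection over candidate power-series terms. This is a generic illustration on synthetic data, not the paper's modified algorithm or flight data; the pitching-moment model below is invented for the example:

```python
import numpy as np

def forward_stepwise(X, y, max_terms=3):
    """Greedy forward stepwise regression: at each step, add the
    candidate column that most reduces the residual sum of squares.
    Returns the selected column indices and the final RSS."""
    selected = []
    for _ in range(max_terms):
        best_j, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = float(((y - cols @ beta) ** 2).sum())
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected, best_rss

# Candidate regressors: powers of angle of attack (power-series terms).
rng = np.random.default_rng(5)
alpha = rng.uniform(-1.0, 1.0, 200)
X = np.column_stack([alpha**k for k in range(1, 6)])
cm = 0.8 * alpha - 4.0 * alpha**3 + rng.normal(0.0, 0.01, 200)
terms, rss = forward_stepwise(X, cm, max_terms=3)
```

A practical implementation also removes terms that become insignificant after later additions, which is what distinguishes stepwise regression from pure forward selection.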

  3. Scale effects on information theory-based measures applied to streamflow patterns in two rural watersheds

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Pachepsky, Yakov A.; Guber, Andrey K.; McPherson, Brian J.; Hill, Robert L.

    2012-01-01

    Summary: Understanding streamflow patterns in space and time is important for improving flood and drought forecasting, water resources management, and predictions of ecological changes. Objectives of this work include (a) characterizing the spatial and temporal patterns of streamflow using information theory-based measures at two thoroughly monitored agricultural watersheds located in different hydroclimatic zones with similar land use, and (b) elucidating and quantifying temporal and spatial scale effects on those measures. We selected two USDA experimental watersheds to serve as case study examples, including the Little River experimental watershed (LREW) in Tifton, Georgia and the Sleepers River experimental watershed (SREW) in North Danville, Vermont. Both watersheds possess several nested sub-watersheds and more than 30 years of continuous data records of precipitation and streamflow. Information content measures (metric entropy and mean information gain) and complexity measures (effective measure complexity and fluctuation complexity) were computed based on the binary encoding of 5-year streamflow and precipitation time series data. We quantified patterns of streamflow using probabilities of joint or sequential appearances of the binary symbol sequences. Results of our analysis illustrate that information content measures of streamflow time series are much smaller than those for precipitation data, and the streamflow data also exhibit higher complexity, suggesting that the watersheds effectively act as filters of the precipitation information that leads to the observed additional complexity in streamflow measures. Correlation coefficients between the information-theory-based measures and time intervals are close to 0.9, demonstrating the significance of temporal scale effects on streamflow patterns. 
Moderate spatial scale effects on streamflow patterns are observed with absolute values of correlation coefficients between the measures and sub-watershed area varying from 0.2 to 0.6 in the two watersheds. We conclude that temporal effects must be evaluated and accounted for when the information theory-based methods are used for performance evaluation and comparison of hydrological models.
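The binary-encoding and metric-entropy computation described above can be sketched compactly. The encoding rule (above/below the series median) and word length are illustrative choices, not necessarily those of the study, and the series are synthetic:

```python
import numpy as np

def metric_entropy(series, word_len=3):
    """Metric entropy of a binary-encoded series: Shannon entropy of the
    distribution of overlapping binary words of length `word_len`,
    normalized by the word length (bits per symbol)."""
    binary = (series > np.median(series)).astype(int)
    words = ["".join(map(str, binary[i:i + word_len]))
             for i in range(len(binary) - word_len + 1)]
    _, counts = np.unique(words, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum() / word_len

rng = np.random.default_rng(1)
noisy_rain = rng.random(2000)                  # unstructured "precipitation"
smooth_flow = np.sin(np.arange(2000) / 50.0)   # persistent "streamflow"
```

Consistent with the abstract, the persistent (filtered) series has a much lower information content than the unstructured one, because its binary encoding consists of long runs of identical symbols.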

  4. Osmosis in Cortical Collecting Tubules

    PubMed Central

    Schafer, James A.; Patlak, Clifford S.; Andreoli, Thomas E.

    1974-01-01

    This paper reports a theoretical analysis of osmotic transients and an experimental evaluation both of rapid time resolution of lumen to bath osmosis and of bidirectional steady-state osmosis in isolated rabbit cortical collecting tubules exposed to antidiuretic hormone (ADH). For the case of a membrane in series with unstirred layers, there may be considerable differences between initial and steady-state osmotic flows (i.e., the osmotic transient phenomenon), because the solute concentrations at the interfaces between membrane and unstirred layers may vary with time. A numerical solution of the equation of continuity provided a means for computing these time-dependent values, and, accordingly, the variation of osmotic flow with time for a given set of parameters including: Pf (cm s–1), the osmotic water permeability coefficient, the bulk phase solute concentrations, the unstirred layer thickness on either side of the membrane, and the fractional areas available for volume flow in the unstirred layers. The analyses provide a quantitative frame of reference for evaluating osmotic transients observed in epithelia in series with asymmetrical unstirred layers and indicate that, for such epithelia, Pf determinations from steady-state osmotic flows may result in gross underestimates of osmotic water permeability. In earlier studies, we suggested that the discrepancy between the ADH-dependent values of Pf and PDw (cm s–1, the diffusional water permeability coefficient) was the consequence of cellular constraints to diffusion. In the present experiments, no transients were detectable 20–30 s after initiating ADH-dependent lumen to bath osmosis; and steady-state ADH-dependent osmotic flows from bath to lumen and lumen to bath were linear and symmetrical. 
An evaluation of these data in terms of the analytical model indicates: First, cellular constraints to diffusion in cortical collecting tubules could be rationalized in terms of a 25-fold reduction in the area of the cell layer available for water transport, possibly due in part to transcellular shunting of osmotic flow; and second, such cellular constraints resulted in relatively small, approximately 15%, underestimates of Pf. PMID:4846767
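The "numerical solution of the equation of continuity" underlying the transient analysis can be sketched with a one-dimensional explicit finite-difference scheme for solute concentration in a single unstirred layer. The geometry and parameter values below are arbitrary illustrations, not the paper's model of the collecting tubule:

```python
import numpy as np

def unstirred_layer_transient(n, d, dx, t_end, c_bulk, flux):
    """Explicit finite-difference solution of dc/dt = D d2c/dx2 in one
    unstirred layer: a constant solute flux is imposed at the membrane
    face (x = 0) and the bulk face is held at c_bulk. The membrane-face
    concentration drifts until the linear steady-state profile forms,
    which is the osmotic-transient mechanism discussed above."""
    r = 0.4                                # d*dt/dx^2, explicit-stable
    dt = r * dx * dx / d
    c = np.full(n, c_bulk)
    for _ in range(int(t_end / dt)):
        lap = np.zeros(n)
        lap[1:-1] = c[2:] - 2.0 * c[1:-1] + c[:-2]
        # ghost-node treatment of the imposed flux at the membrane face
        lap[0] = 2.0 * (c[1] - c[0]) + 2.0 * dx * flux / d
        c += r * lap
        c[-1] = c_bulk                     # bulk face pinned
    return c

profile = unstirred_layer_transient(n=50, d=1e-5, dx=1e-4, t_end=10.0,
                                    c_bulk=1.0, flux=1e-4)
```

At steady state the profile is linear with membrane-face concentration c_bulk + flux·L/D (here 1.049 with L = 49·dx), and the time taken to reach it is what makes the initial and steady-state osmotic flows differ.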

  5. Category-Specific Comparison of Univariate Alerting Methods for Biosurveillance Decision Support

    PubMed Central

    Elbert, Yevgeniy; Hung, Vivian; Burkom, Howard

    2013-01-01

    Objective For a multi-source decision support application, we sought to match univariate alerting algorithms to surveillance data types to optimize detection performance. Introduction Temporal alerting algorithms commonly used in syndromic surveillance systems are often adjusted for data features such as cyclic behavior but are subject to overfitting or misspecification errors when applied indiscriminately. In a project for the Armed Forces Health Surveillance Center to enable multivariate decision support, we obtained 4.5 years of outpatient, prescription and laboratory test records from all US military treatment facilities. A proof-of-concept project phase produced 16 events with multiple evidence corroboration for comparison of alerting algorithms for detection performance. We used the representative streams from each data source to compare the sensitivity of 6 algorithms to injected spikes, and we used all data streams from 16 known events to compare them for detection timeliness. Methods The six methods compared were:
1. Holt-Winters generalized exponential smoothing method;
2. automated choice between daily methods, regression and an exponentially weighted moving average (EWMA);
3. adaptive daily Shewhart-type chart;
4. adaptive one-sided daily CUSUM;
5. EWMA applied to 7-day means with a trend correction; and
6. 7-day temporal scan statistic.
Sensitivity testing: We conducted comparative sensitivity testing for categories of time series with similar scales and seasonal behavior. We added multiples of the standard deviation of each time series as single-day injects in separate algorithm runs. For each candidate method, we then used as a sensitivity measure the proportion of these runs for which the output of each algorithm was below alerting thresholds estimated empirically for each algorithm using simulated data streams. We identified the algorithm(s) whose sensitivity was most consistently high for each data category. 
For each syndromic query applied to each data source (outpatient, lab test orders, and prescriptions), 502 authentic time series were derived, one for each reporting treatment facility. Data categories were selected in order to group time series with similar expected algorithm performance:
- Median > 10
- 0 < Median ≤ 10
- Median = 0
- Lag-7 autocorrelation coefficient ≥ 0.2
- Lag-7 autocorrelation coefficient < 0.2
Timeliness testing: For the timeliness testing, we avoided the artificiality of simulated signals by measuring alerting detection delays in the 16 corroborated outbreaks. The multiple time series from these events gave a total of 141 time series with outbreak intervals for timeliness testing. The following measures were computed to quantify timeliness of detection:
- Median Detection Delay: median number of days to detect the outbreak.
- Penalized Mean Detection Delay: mean number of days to detect the outbreak, with outbreak misses penalized as 1 day plus the maximum detection time.
Results Based on the injection results, the Holt-Winters algorithm was most sensitive among time series with positive medians. The adaptive CUSUM and the Shewhart methods were most sensitive for data streams with median zero. Table 1 provides timeliness results using the 141 outbreak-associated streams in sparse (Median = 0) and non-sparse data categories.
Table 1. Detection delay (days) by method and data category:
Data median   Measure          Holt-Winters  Regression/EWMA  Adaptive Shewhart  Adaptive CUSUM  7-day Trend-adj. EWMA  7-day Temporal Scan
Median = 0    Median           3             2                4                  2               4.5                    2
Median = 0    Penalized mean   7.2           7                6.6                6.2             7.3                    7.6
Median > 0    Median           2             2                2.5                2               6                      4
Median > 0    Penalized mean   6.1           7                7.2                7.1             7.7                    6.6
In the original table, gray shading indicated the methods with the shortest detection delays for sparse and non-sparse data streams. The Holt-Winters method was again superior for non-sparse data. For data with median = 0, the adaptive CUSUM was superior for a daily false alarm probability of 0.01, but the Shewhart method was timelier for more liberal thresholds. 
Conclusions Both kinds of detection performance analysis showed the method based on Holt-Winters exponential smoothing superior on non-sparse time series with day-of-week effects. The adaptive CUSUM and Shewhart methods proved optimal on sparse data and data without weekly patterns.
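The adaptive one-sided CUSUM compared above can be sketched as follows. The baseline window length, guard band, slack k, and threshold h are illustrative choices, not the study's settings, and the counts are synthetic:

```python
import numpy as np

def one_sided_cusum(series, k=0.5, h=4.0):
    """Adaptive one-sided CUSUM on standardized daily counts: the
    statistic accumulates positive standardized deviations beyond a
    slack k and alarms when it crosses threshold h. The baseline
    mean/sd are re-estimated from a sliding window with a 2-day guard
    band so recent outbreak days do not contaminate the baseline."""
    alarms = []
    s = 0.0
    for t in range(28, len(series)):
        window = series[t - 28:t - 2]        # 26-day baseline
        mu, sd = window.mean(), max(window.std(), 1e-6)
        z = (series[t] - mu) / sd
        s = max(0.0, s + z - k)
        if s > h:
            alarms.append(t)
            s = 0.0                          # reset after an alarm
    return alarms

rng = np.random.default_rng(7)
counts = rng.poisson(5.0, 120).astype(float)
counts[100:103] += 15.0                      # injected 3-day outbreak
alarms = one_sided_cusum(counts)
```

This adaptivity (rolling baseline, standardized increments) is what lets the same chart run over hundreds of heterogeneous facility-level streams.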

  6. Detection of anomalous signals in temporally correlated data (Invited)

    NASA Astrophysics Data System (ADS)

    Langbein, J. O.

    2010-12-01

    Detection of transient tectonic signals in data obtained from large geodetic networks requires the ability to detect signals that are both temporally and spatially coherent. In this report I describe a modification to an existing method that estimates both the coefficients of a temporally correlated noise model and an efficient filter based on the noise model. This filter, when applied to the original time-series, effectively whitens (or flattens) the power spectrum. The filtered data provide the means to calculate running averages which are then used to detect deviations from the background trends. For large networks, time-series of signal-to-noise ratio (SNR) can be easily constructed since, by filtering, each of the original time-series has been transformed into one that is closer to having a Gaussian distribution with a variance of 1.0. Anomalous intervals may be identified by counting the number of GPS sites for which the SNR exceeds a specified value. For example, during one time interval, if there were 5 out of 20 time-series with SNR > 2, this would be considered anomalous; typically, one would expect at 95% confidence about 1 out of 20 time-series with an SNR > 2. For time intervals with an anomalously large number of high SNR, the spatial distribution of the SNR is mapped to identify the location of the anomalous signal(s) and their degree of spatial clustering. Estimating the filter that should be used to whiten the data requires modification of the existing methods that employ maximum likelihood estimation to determine the temporal covariance of the data. In these methods, it is assumed that the noise components in the data are a combination of white, flicker and random-walk processes and that they are derived from three different and independent sources. 
Instead, in this new method, the covariance matrix is constructed assuming that only one source is responsible for the noise and that source can be represented as a white-noise random-number generator convolved with a filter whose spectral properties are frequency (f) independent at the highest frequencies, 1/f at the middle frequencies, and 1/f² at the lowest frequencies. For data sets with no gaps in their time-series, construction of covariance and inverse covariance matrices is extremely efficient. Application of the above algorithm to real data potentially involves several iterations, as small tectonic signals of interest are often indistinguishable from background noise. Consequently, simply plotting the time-series of each GPS site is used to identify the largest outliers and signals independent of their cause. Any analysis of the background noise levels must factor in these other signals, while the gross outliers need to be removed.
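The exceedance-counting step described above is easy to sketch once the series are whitened to unit variance. The data here are synthetic white-noise residuals with an injected common transient, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_days = 20, 100
# Whitened, unit-variance residual series for each GPS site.
snr = rng.normal(0.0, 1.0, (n_sites, n_days))
snr[:, 60] += 5.0        # common transient injected at day 60

# Count, per day, how many of the 20 sites exceed SNR = 2. Under the
# background model roughly 20 * 0.023, i.e. about half a site per day,
# exceeds 2 one-sided, so days with many exceedances are flagged.
exceed = (snr > 2.0).sum(axis=0)
anomalous_days = np.where(exceed >= 5)[0]
```

In the real method the flagged days would then be mapped spatially to check whether the exceedances cluster, distinguishing a coherent tectonic transient from scattered site-specific noise.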

  7. Decoupling the electrical conductivity and Seebeck coefficient in the RE2SbO2 compounds through local structural perturbations.

    PubMed

    Wang, Peng L; Kolodiazhnyi, Taras; Yao, Jinlei; Mozharivskyj, Yurij

    2012-01-25

    The compromise between electrical conductivity and the Seebeck coefficient limits the efficiency of chemical doping in thermoelectric research. An alternative strategy, involving control of the local crystal structure, is demonstrated to improve the thermoelectric performance of the RE2SbO2 system. The RE2SbO2 phases, adopting a disordered anti-ThCr2Si2-type structure (I4/mmm), were prepared for RE = La, Nd, Sm, Gd, Ho, and Er. Traversing the rare earth series gradually reduces the lattice parameters of the RE2SbO2 phases, thus increasing the chemical pressure on the Sb environment. As the Sb displacements are perturbed, different charge carrier activation mechanisms dominate the transport properties of these compounds. As a result, the electrical conductivity and Seebeck coefficient are improved simultaneously, while the number of charge carriers in the series remains constant. © 2012 American Chemical Society

  8. The internal dosimetry code PLEIADES.

    PubMed

    Fell, T P; Phipps, A W; Smith, T J

    2007-01-01

    The International Commission on Radiological Protection (ICRP) has published dose coefficients for the ingestion or inhalation of radionuclides in a series of reports covering intakes by workers and members of the public, including children and pregnant or lactating women. The calculation of these coefficients divides naturally into two distinct parts-the biokinetic and dosimetric. This paper describes in detail the methods used to solve the biokinetic problem in the generation of dose coefficients on behalf of the ICRP, as implemented in the Health Protection Agency's internal dosimetry code PLEIADES. A summary of the dosimetric treatment is included.

  9. Continuous Change Detection and Classification (CCDC) of Land Cover Using All Available Landsat Data

    NASA Astrophysics Data System (ADS)

    Zhu, Z.; Woodcock, C. E.

    2012-12-01

    A new algorithm for Continuous Change Detection and Classification (CCDC) of land cover using all available Landsat data is developed. The algorithm is capable of detecting many kinds of land cover change as new images are collected, while at the same time providing land cover maps for any given time. To better identify land cover change, a two-step cloud, cloud shadow, and snow masking algorithm is used to eliminate "noisy" observations. Next, a time series model with components of seasonality, trend, and break estimates the surface reflectance and temperature. The time series model is updated continuously with newly acquired observations. Due to the high variability in spectral response for different kinds of land cover change, the CCDC algorithm uses a data-driven threshold derived from all seven Landsat bands. When the difference between observed and predicted values exceeds the thresholds three consecutive times, a pixel is identified as land cover change. Land cover classification is done after change detection. Coefficients from the time series models and the Root Mean Square Error (RMSE) from model fitting are used as classification inputs for the Random Forest Classifier (RFC). We applied this new algorithm to one Landsat scene (Path 12 Row 31) that includes all of Rhode Island as well as much of eastern Massachusetts and parts of Connecticut. A total of 532 Landsat images acquired between 1982 and 2011 were processed. During this period, 619,924 pixels were detected to change once (91% of total changed pixels) and 60,199 pixels to change twice (8% of total changed pixels). The most frequent land cover change category is from mixed forest to low-density residential, which accounts for more than 8% of total land cover change pixels.
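    The detection logic (fit a seasonal-plus-trend model, then flag three consecutive exceedances) can be sketched on a single synthetic band. One harmonic and a 3×RMSE threshold stand in for the seven bands and the data-driven threshold of the actual algorithm; the period, coefficients, and break magnitude are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)

# Synthetic reflectance: trend + one annual harmonic (46 obs/yr is a stand-in
# for Landsat revisit density) + noise, with an abrupt drop at t = 120
# representing a land cover change.
period = 46.0
y = 0.3 + 0.001 * t + 0.05 * np.sin(2 * np.pi * t / period) \
    + rng.normal(0, 0.01, t.size)
y[120:] -= 0.15

# Fit the time series model (intercept, trend, one harmonic pair) on the
# stable early portion of the record.
train = slice(0, 100)
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
resid = y - X @ coef
rmse = np.sqrt(np.mean(resid[train] ** 2))

# Flag change when |observed - predicted| exceeds 3*RMSE three times in a row.
exceed = np.abs(resid) > 3 * rmse
change = next((i for i in range(2, len(t))
               if exceed[i] and exceed[i - 1] and exceed[i - 2]), None)
print(change)
```

    After a detected break, the model would be re-initialized on post-break observations, and the fitted coefficients plus RMSE would feed the classifier.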

  10. Development of a takeoff performance monitoring system. Ph.D. Thesis. Contractor Report, Jan. 1984 - Jun. 1985

    NASA Technical Reports Server (NTRS)

    Srivatsan, Raghavachari; Downing, David R.

    1987-01-01

    Discussed are the development and testing of a real-time takeoff performance monitoring algorithm. The algorithm is made up of two segments: a pretakeoff segment and a real-time segment. One-time inputs of ambient conditions and airplane configuration information are used in the pretakeoff segment to generate scheduled performance data for that takeoff. The real-time segment uses the scheduled performance data generated in the pretakeoff segment, runway length data, and measured parameters to monitor the performance of the airplane throughout the takeoff roll. Airplane and engine performance deficiencies are detected and annunciated. An important feature of this algorithm is the one-time estimation of the runway rolling friction coefficient. The algorithm was tested using a six-degree-of-freedom airplane model in a computer simulation. Results from a series of sensitivity analyses are also included.

  11. A hybrid Jaya algorithm for reliability-redundancy allocation problems

    NASA Astrophysics Data System (ADS)

    Ghavidel, Sahand; Azizivahed, Ali; Li, Li

    2018-04-01

    This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on the standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then tested on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence for its better and acceptable optimization performance compared to the original Jaya algorithm and other reported optimal results.

  12. Molecular surface representation using 3D Zernike descriptors for protein shape comparison and docking.

    PubMed

    Kihara, Daisuke; Sael, Lee; Chikhi, Rayan; Esquivel-Rodriguez, Juan

    2011-09-01

    The tertiary structures of proteins have been solved at an increasing pace in recent years. To capitalize on the enormous effort devoted to accumulating structure data, efficient and effective computational methods need to be developed for comparing, searching, and investigating interactions of protein structures. We introduce the 3D Zernike descriptor (3DZD), an emerging technique for describing molecular surfaces. The 3DZD is a series expansion of a three-dimensional mathematical function, so a tertiary structure is represented compactly by a vector of coefficients of the terms in the series. A strong advantage of the 3DZD is that it is invariant to rotation of the target object. These two characteristics, compactness and rotation invariance, allow rapid comparison of surface shapes, which is sufficient for real-time structure database screening. In this article, we review various applications of the 3DZD that have been recently proposed.

  13. Detection of Unexpected High Correlations between Balance Calibration Loads and Load Residuals

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2014-01-01

    An algorithm was developed for the assessment of strain-gage balance calibration data that makes it possible to systematically investigate potential sources of unexpected high correlations between calibration load residuals and applied calibration loads. The algorithm investigates correlations on a load-series-by-load-series basis. The linear correlation coefficient is used to quantify the correlations. It is computed for all possible pairs of calibration load residuals and applied calibration loads that can be constructed for the given balance calibration data set. An unexpected high correlation between a load residual and a load is detected if three conditions are met: (i) the absolute value of the correlation coefficient of a residual/load pair exceeds 0.95; (ii) the maximum of the absolute values of the residuals of a load series exceeds 0.25% of the load capacity; (iii) the load component of the load series is intentionally applied. Data from a baseline calibration of a six-component force balance are used to illustrate the application of the detection algorithm to a real-world data set. This analysis also showed that the detection algorithm can identify load alignment errors, provided the balance calibration data set contains repeat load series that do not suffer from load alignment problems.
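    The three detection conditions translate directly into a short check. The load series and residuals below are invented numbers for illustration, not balance calibration data:

```python
import math

def pearson(x, y):
    """Linear (Pearson) correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical load series: an applied load and two residual sets, one of
# which drifts linearly with the load (as an alignment error would).
capacity = 1000.0
loads = [100.0 * i for i in range(1, 9)]
resid_ok = [0.15, -0.1, 0.05, -0.2, 0.1, -0.05, 0.15, -0.1]        # noise-like
resid_bad = [0.004 * L + e for L, e in zip(loads, resid_ok)]       # load-correlated

def suspicious(residuals, loads, capacity, intentional=True):
    """Flag a residual/load pair per the three conditions in the abstract."""
    r = pearson(residuals, loads)
    big = max(abs(v) for v in residuals) > 0.0025 * capacity   # 0.25 % of capacity
    return abs(r) > 0.95 and big and intentional

print(suspicious(resid_ok, loads, capacity),
      suspicious(resid_bad, loads, capacity))
```

    Condition (ii) prevents flagging pairs whose residuals are too small to matter, and condition (iii) excludes load components that were never intentionally applied.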

  14. Columbia: The first five flights entry heating data series. Volume 4: The lower windward wing 50 percent and 80 percent semispans

    NASA Technical Reports Server (NTRS)

    Williams, S. D.

    1983-01-01

    Entry heating flight data and wind tunnel data on the lower wing 50% and 80% semispans are presented for the first five flights of the Space Shuttle Orbiter. The heating rate data are presented in terms of normalized film heat transfer coefficients as a function of angle of attack, Mach number, and normal-shock Reynolds number. The surface heating rates and temperatures were obtained via the JSC NONLIN/INVERSE computer program. Time history plots of the surface heating rates and temperatures are also presented.

  15. Statistical indicators of collective behavior and functional clusters in gene networks of yeast

    NASA Astrophysics Data System (ADS)

    Živković, J.; Tadić, B.; Wick, N.; Thurner, S.

    2006-03-01

    We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.
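    The construction of functional clusters from correlation networks can be sketched as follows. The two "modules" sharing a common expression driver are a synthetic illustration, not yeast data, and the 0.4 threshold is arbitrary:

```python
import random, math

random.seed(7)
T = 300

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Two hypothetical functional modules of 6 genes each; genes within a module
# share a common driver signal plus independent noise.
drivers = [[random.gauss(0, 1) for _ in range(T)] for _ in range(2)]
genes = [[0.8 * drivers[g // 6][t] + 0.6 * random.gauss(0, 1)
          for t in range(T)] for g in range(12)]

# Correlation network: link gene pairs with |r| above the threshold, then
# extract connected components ("functional clusters") with union-find.
parent = list(range(12))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

threshold = 0.4
for i in range(12):
    for j in range(i + 1, 12):
        if abs(pearson(genes[i], genes[j])) > threshold:
            parent[find(i)] = find(j)

clusters = {find(i) for i in range(12)}
print(len(clusters))
```

    Sweeping the threshold from high to low and watching when the clusters merge into a giant component is the percolation-style analysis the abstract describes.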

  16. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

    NASA Astrophysics Data System (ADS)

    Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

    2017-09-01

    We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded in the identification of the coefficients of an autoregressive model, the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and the assessment of the distance of this position from the unit circle in the complex plane. The MSC method was tested on simulations and applied to short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of cardiac autonomic control, namely the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded in information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrease of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
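    The band-restricted pole-distance index can be sketched as follows. The AR coefficients here are constructed by hand to place one oscillatory pole pair in the LF band; they are illustrative, not fitted to heart period data:

```python
import numpy as np

# Hypothetical AR model x_t = a1*x_{t-1} + a2*x_{t-2} + ... + w_t with an
# oscillatory pole pair at radius r and frequency f0 (fs = 1 Hz sampling).
fs = 1.0
r, f0 = 0.9, 0.1
a_pair = [2 * r * np.cos(2 * np.pi * f0 / fs), -r ** 2]  # poles r*exp(+-i*2*pi*f0)

# Combine with a well-damped real pole pair to form the full AR polynomial
# z^4 - a1*z^3 - ... ; the second factor's roots are 0.3 and 0.2.
poly = np.polymul([1, -a_pair[0], -a_pair[1]], [1, -0.5, 0.06])
poles = np.roots(poly)

# MSC-style index: mean distance from the unit circle of the poles whose
# frequency falls inside the LF band (0.04-0.15 Hz).
freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)
lf = (freqs >= 0.04) & (freqs <= 0.15)
distance = float(np.mean(1 - np.abs(poles[lf])))
print(round(distance, 3))
```

    Poles close to the unit circle (small distance) indicate narrow-band, regular oscillations in that band; a larger distance corresponds to broader, more irregular dynamics.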

  17. Time Series Analysis and Forecasting of Wastewater Inflow into Bandar Tun Razak Sewage Treatment Plant in Selangor, Malaysia

    NASA Astrophysics Data System (ADS)

    Abunama, Taher; Othman, Faridah

    2017-06-01

    Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee sufficient treatment of wastewater before discharging it to the environment. The main objectives of this study are to statistically analyze and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years' weekly influent data (156 weeks) was conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) were tried to select the best-fitting model, which was then used to forecast the wastewater inflow rates. Linear regression analysis was applied to test the correlation between the observed and predicted influents. The ARIMA (3, 1, 3) model was selected for its highest R-square and lowest normalized Bayesian Information Criterion (BIC) value, and the wastewater inflow rates were accordingly forecast for an additional 52 weeks. The linear regression analysis between the observed and predicted wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.
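    The order-selection idea can be sketched in a reduced form. The real study searched a grid of ARIMA(p, d, q) orders; this sketch only ranks pure AR(p) fits on the once-differenced series by BIC, the criterion named in the abstract, and the synthetic inflow series is invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic weekly inflow built as an integrated AR(1) process, so its first
# difference (d = 1) is exactly AR(1) with coefficient 0.6.
n = 156
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal(0, 1.0)
inflow = 500.0 + np.cumsum(x)

diff = np.diff(inflow)  # differencing recovers the AR(1) part

def bic_ar(series, p):
    """BIC of an AR(p) least-squares fit to a zero-mean series."""
    X = np.column_stack([series[p - k - 1:len(series) - k - 1]
                         for k in range(p)])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    m = len(y)
    return m * np.log(rss / m) + p * np.log(m)

best_p = min(range(1, 6), key=lambda p: bic_ar(diff, p))
print(best_p)
```

    The log(m) penalty in the BIC discourages the extra parameters of higher-order fits unless they buy a real reduction in residual variance.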

  18. Modelling the dynamics of scarlet fever epidemics in the 19th century.

    PubMed

    Duncan, S R; Scott, S; Duncan, C J

    2000-01-01

    Annual deaths from scarlet fever in Liverpool, UK during 1848-1900 have been used as a model system for studying the historical dynamics of the epidemics. Mathematical models are developed which include the growth of the population and the death rate from scarlet fever. Time-series analysis of the results shows that there were two distinct phases to the disease: (i) 1848-1880, regular epidemics (wavelength = 3.7 years) consistent with the system being driven by an oscillation in the transmission coefficient (δβ) at its resonant frequency, probably associated with dry conditions in winter; (ii) 1880-1900, an undriven SEIR system with a falling endemic level and decaying epidemics. This latter period was associated with improved nutritive levels. There is also evidence from time-series analysis that raised wheat prices in pregnancy caused increased susceptibility in the subsequent children. The pattern of epidemics and the demographic characteristics of the population can be replicated in the modelling, which provides insights into the detailed epidemiology of scarlet fever in this community in the 19th century.
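    A seasonally forced SEIR system of the kind described can be sketched with forward-Euler integration. All parameter values below are illustrative, chosen so the natural inter-epidemic period lands near the 3.7-year forcing period; they are not fitted to the Liverpool mortality series:

```python
import math

# SEIR with a forced transmission coefficient
# beta(t) = beta0 * (1 + delta * cos(2*pi*t/P)).
N0 = 100000.0
R0, gamma, sigma = 4.0, 1 / 5.0, 1 / 2.0       # reproduction no., recovery, latency
beta0, delta, P = R0 * gamma, 0.15, 3.7 * 365  # forcing period ~3.7 years (days)
mu = 1 / (50 * 365.0)                          # per-capita mortality
births = mu * N0                               # keeps total population near N0

S, E, I, R = 90000.0, 100.0, 100.0, 10000.0
dt = 1.0
infected = []
for step in range(int(40 * 365 / dt)):         # 40 simulated years
    beta = beta0 * (1 + delta * math.cos(2 * math.pi * step * dt / P))
    new_inf = beta * S * I / N0
    S += dt * (births - new_inf - mu * S)
    E += dt * (new_inf - sigma * E - mu * E)
    I += dt * (sigma * E - gamma * I - mu * I)
    R += dt * (gamma * I - mu * R)
    infected.append(I)

late = infected[-10 * 365:]                    # last decade of prevalence
print(round(min(late), 6), round(max(late), 1))
```

    With delta = 0 the oscillations decay toward the endemic level; forcing near the resonant period is what sustains the regular epidemics of phase (i).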

  19. Mapping tropical rainforest canopies using multi-temporal spaceborne imaging spectroscopy

    NASA Astrophysics Data System (ADS)

    Somers, Ben; Asner, Gregory P.

    2013-10-01

    The use of imaging spectroscopy for floristic mapping of forests is complicated by the spectral similarity among coexisting species. Here we evaluated an alternative spectral unmixing strategy combining a time series of EO-1 Hyperion images and an automated feature selection strategy in MESMA. Instead of using the same spectral subset to unmix each image pixel, our modified approach allowed the spectral subsets to vary on a per pixel basis such that each pixel is evaluated using a spectral subset tuned towards maximal separability of its specific endmember class combination or species mixture. The potential of the new approach for floristic mapping of tree species in Hawaiian rainforests was quantitatively demonstrated using both simulated and actual hyperspectral image time-series. With a Cohen's Kappa coefficient of 0.65, our approach provided a more accurate tree species map compared to MESMA (Kappa = 0.54). In addition, by the selection of spectral subsets our approach was about 90% faster than MESMA. The flexible or adaptive use of band sets in spectral unmixing as such provides an interesting avenue to address spectral similarities in complex vegetation canopies.

  20. Topological Characteristics of the Hong Kong Stock Market: A Test-based P-threshold Approach to Understanding Network Complexity

    NASA Astrophysics Data System (ADS)

    Xu, Ronghua; Wong, Wing-Keung; Chen, Guanrong; Huang, Shuo

    2017-02-01

    In this paper, we analyze the relationship among stock networks by focusing on the statistically reliable connectivity between financial time series, which accurately reflects the underlying pure stock structure. To do so, we first filter out the effect of the market index on the correlations between paired stocks, and then take a t-test based P-threshold approach to lessening the complexity of the stock network based on the P values. We demonstrate the superiority of its performance in understanding network complexity by examining the Hong Kong stock market. By comparing with other filtering methods, we find that the P-threshold approach extracts purely and significantly correlated stock pairs, which reflect the well-defined hierarchical structure of the market. In analyzing the dynamic stock networks with fixed-size moving windows, our results show that the three global financial crises covered by the long-range time series can be clearly identified from the network-topological and evolutionary perspectives. In addition, we find that the assortativity coefficient can manifest the financial crises and can therefore serve as a good indicator of financial market development.
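    The two filtering steps (remove the index, then keep only statistically significant residual correlations) can be sketched as follows. The four-stock factor construction is purely illustrative, not Hong Kong data, and the normal critical value 1.96 is a large-sample stand-in for the exact two-sided 5% t-test P-threshold:

```python
import math, random

random.seed(5)
T = 250  # length of the hypothetical return series

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def residualize(y, x):
    """Regress y on x and return the residuals (filters out the market index)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
            sum((a - mx) ** 2 for a in x))
    return [b - my - beta * (a - mx) for a, b in zip(x, y)]

# Four hypothetical stocks: all load on a common market index; stocks 2 and 3
# also share a sector factor.
index = [random.gauss(0, 1) for _ in range(T)]
sector = [random.gauss(0, 1) for _ in range(T)]
loadings = [(1.0, 0.0), (1.0, 0.0), (1.0, 0.8), (1.0, 0.8)]
stocks = [[li * index[t] + ls * sector[t] + 0.5 * random.gauss(0, 1)
           for t in range(T)] for li, ls in loadings]

resid = [residualize(s, index) for s in stocks]

# t-test the residual correlations: t = r * sqrt((T-2)/(1-r^2)).
edges = []
for i in range(4):
    for j in range(i + 1, 4):
        r = pearson(resid[i], resid[j])
        t_stat = r * math.sqrt((T - 2) / (1 - r * r))
        if abs(t_stat) > 1.96:
            edges.append((i, j))
print(edges)
```

    Only the pair that genuinely shares structure beyond the market factor should survive the threshold.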

  1. Forest canopy growth dynamic modeling based on remote sensing products and meteorological data in Daxing'anling of Northeast China

    NASA Astrophysics Data System (ADS)

    Wu, Qiaoli; Song, Jinling; Wang, Jindi; Xiao, Zhiqiang

    2014-11-01

    Leaf Area Index (LAI) is an important biophysical variable for vegetation. Compared with vegetation indexes such as NDVI and EVI, LAI is more capable of monitoring forest canopy growth quantitatively. GLASS LAI is a spatially complete and temporally continuous product derived from AVHRR and MODIS reflectance data. In this paper, we present an approach to building dynamic LAI growth models for young and mature Larix gmelinii forest in northern Daxing'anling in Inner Mongolia, China, using the Dynamic Harmonic Regression (DHR) model and the Double Logistic (D-L) model, respectively, based on time series extracted from multi-temporal GLASS LAI data. We also used the dynamic threshold method to extract the key phenological phases of the Larix gmelinii forest from the simulated time series. Through analysis of the relationship between phenological phases and meteorological factors, we found that the annual peak LAI and the annual maximum temperature are well correlated. The results indicate that this dynamic model of forest canopy growth is effective in predicting forest canopy LAI and extracting canopy growth dynamics.
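    The double logistic curve and the dynamic threshold extraction of a phenological phase can be sketched as follows. The functional form is the standard D-L model, but all parameter values are illustrative for a boreal larch canopy, not fitted GLASS values:

```python
import math

# Double logistic LAI curve:
# LAI(t) = base + amp * (1/(1+exp(-a*(t-sos_mid))) - 1/(1+exp(-b*(t-eos_mid))))
base, amp = 0.4, 3.2
a, sos_mid, b, eos_mid = 0.15, 130.0, 0.12, 270.0

def lai(doy):
    up = 1.0 / (1.0 + math.exp(-a * (doy - sos_mid)))     # spring green-up
    down = 1.0 / (1.0 + math.exp(-b * (doy - eos_mid)))   # autumn senescence
    return base + amp * (up - down)

# Dynamic threshold method: start of season is the first day the curve rises
# above 20 % of its seasonal amplitude (the 20 % fraction is a common choice,
# not necessarily the one used in the study).
curve = [lai(d) for d in range(1, 366)]
lo, hi = min(curve), max(curve)
threshold = lo + 0.2 * (hi - lo)
sos = next(d for d, v in zip(range(1, 366), curve) if v >= threshold)
print(sos, round(hi, 2))
```

    Because the threshold is a fraction of each pixel's own amplitude, the method adapts to dense and sparse canopies alike.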

  2. Multiple elastic scattering of electrons in condensed matter

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2017-01-01

    Since the 1940s, much attention has been devoted to the problem of accurate theoretical description of electron transport in condensed matter. The information needed for describing different aspects of the electron transport is the angular distribution of electron directions after multiple elastic collisions. This distribution can be expanded into a series of Legendre polynomials with coefficients, A_l. In the present work, a database of these coefficients for all elements up to uranium (Z=92) and a dense grid of electron energies varying from 50 to 5000 eV has been created. The database makes possible the following applications: (i) accurate interpolation of the coefficients A_l for any element and any energy in the above range, (ii) fast calculations of the differential and total elastic-scattering cross sections, (iii) determination of the angular distribution of directions after multiple collisions, (iv) calculations of the probability of elastic backscattering from solids, and (v) calculations of the calibration curves for determination of the inelastic mean free paths of electrons. The last two applications provide data with accuracy comparable to Monte Carlo simulations, yet the running time is decreased by several orders of magnitude. All of the above applications are implemented in the Fortran program MULTI_SCATT. Numerous illustrative runs of this program are described. Despite the relatively large volume of the database of coefficients A_l, the program MULTI_SCATT can be readily run on personal computers.
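    Given a set of coefficients, the angular distribution f(θ) = Σ_l A_l P_l(cos θ) is a direct Legendre series evaluation. The coefficients below are made-up illustrative numbers, not values from the database:

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical expansion coefficients A_l (illustrative only). Positive,
# decaying coefficients produce a forward-peaked angular distribution.
A = [1.0, 0.7, 0.4, 0.2, 0.1]

theta = np.linspace(0, np.pi, 181)
# legval evaluates sum_l A_l * P_l(x) at x = cos(theta).
f = legendre.legval(np.cos(theta), A)

# At theta = 0, P_l(1) = 1 so f = sum(A); at theta = pi, P_l(-1) = (-1)^l.
print(float(f[0]), float(f[-1]))
```

    Evaluating the closed-form series in this way is what makes the tabulated-coefficient approach orders of magnitude faster than re-running a Monte Carlo simulation.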

  3. Factorization of the association rate coefficient in ligand rebinding to heme proteins

    NASA Astrophysics Data System (ADS)

    Young, Robert D.

    1984-01-01

    A stochastic theory of ligand migration in biomolecules is used to analyze the recombination of small ligands with heme proteins after flash photolysis. The stochastic theory is based on a generalized sequential barrier model in which a ligand binds by overcoming a series of barriers formed by the solvent-protein interface, the protein matrix, and the heme-distal histidine system. The stochastic theory shows that the association rate coefficient λ_on factorizes into three terms: the rate coefficient γ_12 for passage from the heme pocket to the heme binding site, the equilibrium pocket occupation factor, and N_out, the fraction of heme proteins which do not undergo geminate recombination of a flashed-off ligand. The factorization of λ_on holds for any number of barriers and with no assumptions regarding the various rate coefficients, so long as the exponential solvent process occurs. Transitions of a single ligand are allowed between any two sites, with two crucial exceptions: (i) the heme binding site acts as a trap, so that thermal dissociation of a bound ligand does not occur within the time of the measurement; (ii) the final step in the rebinding process always has a ligand in the heme pocket, from where the ligand binds to the heme iron.

  4. Dispersion of thermooptic coefficients of soda-lime-silica glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, G.

    1995-01-01

    The thermooptic coefficients, i.e., the variation of refractive index with temperature (dn/dT), are analyzed in a physically meaningful model for two series of soda-lime-silica glasses: 25Na2O·xCaO·(75 − x)SiO2 and (25 − x)Na2O·xCaO·75SiO2. This model is based on three physical parameters, namely the thermal expansion coefficient and the excitonic and isentropic optical bands that lie in the vacuum ultraviolet region, instead of on consideration of the temperature coefficient of electronic polarizability, as suggested in 1960. The model is capable of predicting and analyzing the thermooptic coefficients throughout the transmission region of the optical glasses at any operating temperature.

  5. Virial coefficients and demixing in the Asakura-Oosawa model.

    PubMed

    López de Haro, Mariano; Tejero, Carlos F; Santos, Andrés; Yuste, Santos B; Fiumara, Giacomo; Saija, Franz

    2015-01-07

    The problem of demixing in the Asakura-Oosawa colloid-polymer model is considered. The critical constants are computed using truncated virial expansions up to fifth order. While the exact analytical results for the second and third virial coefficients are known for any size ratio, analytical results for the fourth virial coefficient are provided here, and fifth virial coefficients are obtained numerically for particular size ratios using standard Monte Carlo techniques. We have computed the critical constants by successively considering the truncated virial series up to the second, third, fourth, and fifth virial coefficients. The results for the critical colloid and (reservoir) polymer packing fractions are compared with those that follow from available Monte Carlo simulations in the grand canonical ensemble. Limitations and perspectives of this approach are pointed out.

  6. Synchronization of Human Autonomic Nervous System Rhythms with Geomagnetic Activity in Human Subjects

    PubMed Central

    McCraty, Rollin; Atkinson, Mike; Stolc, Viktor; Alabdulgader, Abdullah A.; Vainoras, Alfonsas

    2017-01-01

    A coupling between geomagnetic activity and the human nervous system’s function was identified by virtue of continuous monitoring of heart rate variability (HRV) and the time-varying geomagnetic field over a 31-day period in a group of 10 individuals who went about their normal day-to-day lives. A time series correlation analysis identified a response of the group’s autonomic nervous systems to various dynamic changes in the solar, cosmic ray, and ambient magnetic field. Correlation coefficients and p values were calculated between the HRV variables and environmental measures during three distinct time periods of environmental activity. There were significant correlations between the group’s HRV and solar wind speed, Kp, Ap, solar radio flux, cosmic ray counts, Schumann resonance power, and the total variations in the magnetic field. In addition, the time series data were time synchronized and normalized, after which all circadian rhythms were removed. It was found that the participants’ HRV rhythms synchronized across the 31-day period at a period of approximately 2.5 days, even though all participants were in separate locations. Overall, this suggests that daily autonomic nervous system activity not only responds to changes in solar and geomagnetic activity, but is synchronized with the time-varying magnetic fields associated with geomagnetic field-line resonances and Schumann resonances. PMID:28703754

  7. An open-access database of grape harvest dates for climate research: data description and quality assessment

    NASA Astrophysics Data System (ADS)

    Daux, V.; Garcia de Cortazar-Atauri, I.; Yiou, P.; Chuine, I.; Garnier, E.; Ladurie, E. Le Roy; Mestre, O.; Tardaguila, J.

    2012-09-01

    We present an open-access dataset of grape harvest date (GHD) series compiled from international, French, and Spanish literature and from unpublished documentary sources held by public organizations and wine-growers. As of June 2011, the GHD dataset comprises 380 series, mainly from France (93% of the data), as well as series from Switzerland, Italy, Spain, and Luxemburg. The series are of variable length (from 1 to 479 data points, mean length of 45) and contain gaps of variable sizes (mean ratio of observations to series length of 0.74). The longest and most complete ones are from Burgundy, Switzerland, the Southern Rhône valley, Jura, and Ile-de-France. The earliest harvest date in the dataset is 1354, in Burgundy. The GHD series were grouped into 27 regions according to their location, to geomorphological and geological criteria, and to past and present grape varieties. The GHD regional composite series (GHD-RCS) were calculated and compared pairwise to assess their reliability, assuming that series close to one another are highly correlated. Most of the pairwise correlations are significant (p-value < 0.001) and strong (mean pairwise correlation coefficient of 0.58). As expected, the correlations tend to be higher when the vineyards are closer. The highest correlation (R = 0.91) is obtained between the High Loire Valley and the Ile-de-France GHD-RCS. The strong dependence of the vine cycle on temperature, and therefore the strong link between harvest dates and growing-season temperature, was also used to test the quality of the GHD series. The strongest correlations are obtained between the GHD-RCS and the temperature series of the nearest weather stations. Moreover, the GHD-RCS/temperature correlation maps show spatial patterns similar to temperature correlation maps. The stability of the correlations over time is explored. The most striking feature is their generalised deterioration in the late 19th and early 20th centuries. The possible effects on GHD of the phylloxera crisis, which took place at this time, are discussed. The median of all the standardized GHD-RCS was calculated. The distribution of the extreme years of this general series is not homogeneous. Extremely late years all occur during a two-century window from the early 17th to the early 19th century, while extremely early years are frequent during the 16th century and since the mid-19th century.

  8. Precision of channel catfish catch estimates using hoop nets in larger Oklahoma reservoirs

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2012-01-01

    Hoop nets are rapidly becoming the preferred gear type for sampling channel catfish Ictalurus punctatus, and many managers have reported that hoop nets effectively sample channel catfish in small impoundments (<200 ha). However, the utility and precision of this approach in larger impoundments have not been tested. We sought to determine how the number of tandem hoop net series affected the catch of channel catfish and the time involved in using 16 tandem hoop net series in larger impoundments (>200 ha). Each hoop net series was fished once and set for 3 d; we then used Monte Carlo bootstrapping techniques to estimate the number of net series required to achieve two levels of precision (relative standard errors [RSEs] of 15 and 25) at two levels of confidence (80% and 95%). Sixteen hoop net series were effective at obtaining an RSE of 25 with 80% and 95% confidence in all but one reservoir. Achieving an RSE of 15 was often less effective and required 18-96 hoop net series, depending on the desired level of confidence. We estimated that an hour was needed, on average, to deploy and retrieve three hoop net series, which meant that 16 hoop net series per reservoir could each be set and retrieved within a day. The estimated number of net series needed to achieve an RSE of 25 or 15 was positively associated with the coefficient of variation (CV) of the sample but not with reservoir surface area or relative abundance. Our results suggest that hoop nets are capable of providing reasonably precise estimates of channel catfish relative abundance and that the relationship with the sample CV reported herein can be used to determine the sampling effort for a desired level of precision.
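    The bootstrap sample-size logic can be sketched as follows. The per-series catch counts and the use of the bootstrap median are illustrative choices, not the study's data or exact procedure:

```python
import random, statistics

random.seed(3)

# Hypothetical per-series channel catfish catches (fish per hoop-net series).
catches = [4, 0, 7, 12, 2, 9, 1, 15, 3, 6, 8, 0, 5, 11, 2, 10]

def rse_for_n(n, reps=2000):
    """Median relative standard error (%) of the mean catch when n series
    are resampled with replacement (Monte Carlo bootstrap)."""
    rses = []
    for _ in range(reps):
        sample = [random.choice(catches) for _ in range(n)]
        m = statistics.mean(sample)
        if m == 0:
            continue  # RSE undefined for an all-zero resample
        se = statistics.stdev(sample) / n ** 0.5
        rses.append(100 * se / m)
    return statistics.median(rses)

# More net series -> lower RSE; find the smallest n meeting RSE <= 25.
n_needed = next(n for n in range(2, 100) if rse_for_n(n) <= 25)
print(n_needed)
```

    Repeating this for each reservoir, and regressing the required n on the sample CV, reproduces the kind of effort-versus-precision relationship the abstract reports.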

  9. Investigation on Quantitative Structure Activity Relationships of a Series of Inducible Nitric Oxide.

    PubMed

    Sharma, Mukesh C; Sharma, S

    2016-12-01

    A series of 2-dihydro-4-quinazolin compounds acting as potent and highly selective inhibitors of inducible nitric oxide synthase was subjected to quantitative structure-activity relationship (QSAR) analysis. Statistically significant equations with a high correlation coefficient (r^2 = 0.8219) were developed. The k-nearest neighbor model showed good cross-validated and external-validation correlation coefficients of 0.7866 and 0.7133, respectively. The selected electrostatic field descriptors, indicated by the blue balls around R1 and R4 of the quinazolinamine moiety, show that electronegative groups at these positions are favorable for nitric oxide synthase activity. The QSAR models may reveal the structural requirements of inducible nitric oxide synthase inhibitors and help in the design of new compounds.

  10. Asymptotically Exact Solution of the Problem of Harmonic Vibrations of an Elastic Parallelepiped

    NASA Astrophysics Data System (ADS)

    Papkov, S. O.

    2017-11-01

An asymptotically exact solution of the classical problem of elasticity concerning the steady-state forced vibrations of an elastic rectangular parallelepiped is constructed. The general solution of the vibration equations is constructed in the form of double Fourier series with undetermined coefficients, and an infinite system of linear algebraic equations is obtained for determining these coefficients. An analysis of the infinite system permits determining the asymptotics of the unknowns, which are used to convolve the double series both in the equations of the infinite system and in the displacement and stress components. The efficiency of this approach is illustrated by numerical examples and comparison with known solutions. The spectrum of the symmetric vibrations of the parallelepiped is studied for various ratios of its sides.

  11. Asymmetric collapse by dissolution or melting in a uniform flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rycroft, Chris H.; Bazant, Martin Z.

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton-Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). In conclusion, the model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems.

  12. Asymmetric collapse by dissolution or melting in a uniform flow

    PubMed Central

    Bazant, Martin Z.

    2016-01-01

    An advection–diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton–Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). The model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems. PMID:26997890
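Because the collapse-point condition is a non-analytic function of z (it involves the complex conjugate), ordinary complex Newton iteration does not apply; a generalized Newton-Raphson instead treats f as a map from R² to R² with a numerical Jacobian. The sketch below uses a toy non-analytic function in place of the paper's actual condition, which depends on the flow velocity and Laurent coefficients.

```python
def newton2d(f, z0, tol=1e-12, h=1e-7, max_iter=50):
    """Generalized Newton-Raphson for a complex-valued f(z) that may depend
    on conj(z) (i.e. is non-analytic), treated as a map R^2 -> R^2."""
    z = z0
    for _ in range(max_iter):
        fz = f(z)
        if abs(fz) < tol:
            return z
        # numerical Jacobian of (Re f, Im f) w.r.t. (Re z, Im z)
        fx = (f(z + h) - fz) / h
        fy = (f(z + 1j * h) - fz) / h
        a, b = fx.real, fy.real
        c, d = fx.imag, fy.imag
        det = a * d - b * c
        # Cramer's rule for the 2x2 Newton step
        dx = (d * fz.real - b * fz.imag) / det
        dy = (a * fz.imag - c * fz.real) / det
        z = z - complex(dx, dy)
    return z

# toy non-analytic function standing in for the collapse-point condition;
# its root is z = 2/3 (real part: 1.5x = 1, imaginary part: 0.5y = 0)
f = lambda z: z + 0.5 * z.conjugate() - 1.0
root = newton2d(f, 0.1 + 0.1j)
print(root)
```

For the linear toy function the finite-difference Jacobian is exact and the iteration converges in a single step; for the paper's function the same loop would simply take a few more iterations.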

  13. Asymmetric collapse by dissolution or melting in a uniform flow

    DOE PAGES

    Rycroft, Chris H.; Bazant, Martin Z.

    2016-01-06

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton-Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). In conclusion, the model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems.

  14. Orthogonality of spherical harmonic coefficients

    NASA Astrophysics Data System (ADS)

    McLeod, M. G.

    1980-08-01

Orthogonality relations are obtained for the spherical harmonic coefficients of functions defined on the surface of a sphere. Following a brief discussion of the orthogonality of Fourier series coefficients, consideration is given to the spherical harmonic coefficients, averaged over all orientations of the coordinate system, of a function defined on the surface of a sphere. These can be expressed in terms of Legendre polynomials, first for the special case where the function is the sum of two delta functions located at two different points on the sphere, and then for an essentially arbitrary function. It is noted that the orthogonality relations derived here have found applications in statistical studies of the geomagnetic field.
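The Legendre-polynomial orthogonality underlying such relations, ∫ P_n(x) P_m(x) dx = 2/(2n+1) for n = m over [-1, 1] and 0 otherwise, can be checked numerically. The recurrence and quadrature below are standard tools, not taken from the paper.

```python
def legendre(n, x):
    """P_n(x) via the three-term recurrence (k+1)P_{k+1} = (2k+1)xP_k - kP_{k-1}."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def inner(n, m, steps=2000):
    """Trapezoidal estimate of the integral of P_n P_m over [-1, 1]."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps + 1):
        x = -1.0 + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * legendre(n, x) * legendre(m, x)
    return total * h

print(inner(3, 3))  # ~ 2/7
print(inner(3, 5))  # ~ 0
```

The same diagonal structure is what survives, on average over coordinate-system orientations, in the spherical harmonic coefficients discussed above.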

  15. Excess entropy scaling for the segmental and global dynamics of polyethylene melts.

    PubMed

    Voyiatzis, Evangelos; Müller-Plathe, Florian; Böhm, Michael C

    2014-11-28

The range of validity of the Rosenfeld and Dzugutov excess entropy scaling laws is analyzed for unentangled linear polyethylene chains. We consider two segmental dynamical quantities, i.e. the bond and torsional relaxation times, and two global ones, i.e. the chain diffusion coefficient and the viscosity. The excess entropy is approximated either by a series expansion of the entropy in terms of the pair correlation function or by an equation of state for polymers developed in the context of the self-associating fluid theory. For the whole range of temperatures and chain lengths considered, the two estimates of the excess entropy are linearly correlated. The scaled bond and torsional relaxation times collapse onto a master curve irrespective of the chain length and the employed scaling scheme. Both quantities depend non-linearly on the excess entropy. For a fixed chain length, the reduced diffusion coefficient and viscosity scale linearly with the excess entropy. An empirical reduction to a chain length-independent master curve is accessible for both dynamic quantities. The Dzugutov scheme predicts an increased value of the scaled diffusion coefficient with increasing chain length, which contradicts physical expectations. The origin of this trend can be traced back to the density dependence of the scaling factors. This finding has not been observed previously for Lennard-Jones chain systems (Macromolecules, 2013, 46, 8710-8723). Thus, it limits the applicability of the Dzugutov approach to polymers. In connection with diffusion coefficients and viscosities, the Rosenfeld scaling law appears to be of higher quality than the Dzugutov approach. An empirical excess entropy scaling is also proposed which leads to a chain length-independent correlation. It is expected to be valid for polymers in the Rouse regime.
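The Rosenfeld reduction and the exponential scaling ansatz can be sketched as follows. The reduction formula D* = D ρ^(1/3) (m/kT)^(1/2) is the standard Rosenfeld one; the fitted data below are synthetic and purely illustrative, not simulation results from the paper.

```python
import math

def rosenfeld_reduced_D(D, rho, m, kT):
    """Rosenfeld-reduced diffusion coefficient D* = D rho^(1/3) sqrt(m/kT)."""
    return D * rho ** (1.0 / 3.0) * math.sqrt(m / kT)

def fit_scaling(s_ex, D_star):
    """Least-squares fit of ln D* = ln A + B * s_ex; returns (A, B)."""
    n = len(s_ex)
    y = [math.log(d) for d in D_star]
    sx, sy = sum(s_ex), sum(y)
    sxx = sum(x * x for x in s_ex)
    sxy = sum(x * yi for x, yi in zip(s_ex, y))
    B = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    lnA = (sy - B * sx) / n
    return math.exp(lnA), B

# synthetic data obeying D* = 0.6 * exp(0.8 * s_ex); s_ex is negative
s_ex = [-3.0, -2.5, -2.0, -1.5, -1.0]
D_star = [0.6 * math.exp(0.8 * s) for s in s_ex]
A, B = fit_scaling(s_ex, D_star)
print(A, B)
```

A scaling law "holds" in this sense when state points of different temperature and density collapse onto a single (A, B) line; the chain-length dependence criticized above appears as a drift of the fitted parameters with chain length.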

  16. Exploring the Linkage of Sea Surface Temperature Variability on Three Spatial Scales

    NASA Astrophysics Data System (ADS)

    Luo, L.; Capone, D. G.; Hutchins, D.; Kiefer, D.

    2011-12-01

As part of a project examining climate change in the Southern California Bight at the University of Southern California, we studied the linkage of the variability of sea surface temperature (SST) across three nested spatial scales: the North Pacific Basin, the West Coast of North America, and the Southern California Bight. Specifically, we analyzed daily GHRSST images between September 1981 and July 2009. In order to remove seasonal changes in temperature and focus upon differences between years, we calculated the weekly mean temperature for each pixel from the time series, and then subjected the anomalies for the three spatial scales to empirical orthogonal function (EOF) analysis. The corresponding temporal expansion coefficients and spatial components (eigenvectors) for each EOF mode were then generated to examine the temporal and spatial patterns of SST change. The results showed that the El Nino Southern Oscillation (ENSO) has a clear influence on the SST variability across all three spatial scales, especially in the 1st EOF mode, which represents the largest variance. The comparison between the time coefficients of the 1st EOF mode and the Oceanic Nino Index (ONI) suggested that the EOF mode 1 of the Pacific Basin region matched almost all the El Nino and La Nina signals, while the West Coast of North America captured only the strong signals and the Southern California Bight captured still fewer of the signals. This clearly indicated that the Southern California Bight is relatively insensitive to the ENSO signal compared with other locations along the West Coast. The 1st EOF mode for the West Coast of North America was also clearly influenced by upwelling. The cross-correlation coefficients between each pair of the EOF mode 1 temporal expansion coefficients for the three spatial scales suggested that they were significantly correlated with each other.
The effect of the Pacific Decadal Oscillation (PDO) on the SST change was also demonstrated by the temporal variability of the temporal expansion coefficients of the 2nd EOF mode. However, the correlations of the 2nd EOF mode time coefficients between the three scales appeared relatively low compared with those of the 1st EOF mode. In summary, sea surface temperature in the Southern California Bight behaves like a node that is relatively insensitive to ENSO, PDO, and upwelling signals.
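The leading EOF mode is simply the dominant singular vector of the time-by-space anomaly matrix, with the temporal expansion coefficients given by projecting each time step onto it. The sketch below recovers it by power iteration on a tiny hypothetical anomaly matrix; real EOF analysis of GHRSST fields differs only in scale.

```python
def leading_eof(anomalies, iters=500):
    """Leading EOF (spatial pattern) and its temporal expansion coefficients
    for a time x space anomaly matrix X, via power iteration on X^T X."""
    nt, ns = len(anomalies), len(anomalies[0])
    v = [1.0] * ns
    for _ in range(iters):
        # w = X^T (X v)
        xv = [sum(anomalies[t][j] * v[j] for j in range(ns)) for t in range(nt)]
        w = [sum(anomalies[t][j] * xv[t] for t in range(nt)) for j in range(ns)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    pcs = [sum(anomalies[t][j] * v[j] for j in range(ns)) for t in range(nt)]
    return v, pcs

# hypothetical rank-1 anomaly field: one spatial mode times a time amplitude
pattern = [0.6, 0.8]           # normalized spatial mode (two "pixels")
amps = [1.0, -2.0, 0.5, 1.5]   # temporal expansion coefficients
X = [[a * p for p in pattern] for a in amps]
eof, pcs = leading_eof(X)
print(eof, pcs)
```

Comparing the recovered `pcs` series against an index such as the ONI, as done above, is then an ordinary time-series correlation.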

  17. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2017-03-01

The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, but its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method, which guarantees great accuracy at small wavenumbers, while retaining the property of the MA method of keeping the numerical errors within a limited bound. Thus, it leads to great accuracy for the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
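The conventional TE coefficients that this paper improves upon come from matching odd-order Taylor terms of the staggered stencil, which yields a small linear system. The sketch below solves that system for the explicit part of a staggered-grid first derivative; it reproduces the classical coefficients (9/8, -1/24 for M = 2) but not the paper's Remez-optimized MTE coefficients.

```python
def staggered_fd_coeffs(M):
    """Taylor-series (TE) coefficients c_1..c_M for a 2M-point staggered-grid
    first derivative f'(x) ~ (1/h) * sum_m c_m [f(x+(m-1/2)h) - f(x-(m-1/2)h)].
    Matching Taylor terms gives sum_m c_m (2m-1)^(2k-1) = delta_{k,1}, k=1..M."""
    A = [[(2 * m - 1) ** (2 * k - 1) for m in range(1, M + 1)]
         for k in range(1, M + 1)]
    b = [1.0] + [0.0] * (M - 1)
    n = M
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

print(staggered_fd_coeffs(2))  # [1.125, -0.041666...] i.e. 9/8, -1/24
```

The MTE/MA approach described above replaces these exact-matching conditions with a minimax objective over a wavenumber band, trading a little small-wavenumber accuracy for a bounded error over a wider range.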

  18. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

There are two basic methods for testing the quality of an algorithm intended to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, the image contrast is initially examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably over time. A few examples using this proposed procedure are presented.
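The subimage correlation criterion amounts to a Pearson correlation over co-registered pixel values: a correction that removes atmospheric differences between the two dates should raise it. A minimal sketch, with hypothetical pixel values:

```python
def correlation(a, b):
    """Pearson correlation coefficient between two equally sized subimages,
    flattened to pixel lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

img_t1 = [10, 12, 14, 20, 22, 30]   # hypothetical pixel values, date 1
img_t2 = [11, 13, 15, 21, 24, 29]   # same (unchanged) region, date 2
print(correlation(img_t1, img_t2))
```

In the procedure above, this coefficient is evaluated over the grid of candidate atmospheric parameters and the combination maximizing it (together with the contrast criterion) is retained.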

  19. The harmonic development of the Earth tide generating potential due to the direct effect of the planets

    NASA Astrophysics Data System (ADS)

    Hartmann, Torsten; Wenzel, Hans-Georg

    1994-09-01

The time-harmonic development of the Earth tide generating potential due to the direct effect of the planets Venus, Jupiter, Mars, Mercury and Saturn has been computed. The catalog of the fully normalized potential coefficients contains 1483 waves. It is based on the DE102 numerical ephemeris of the planets between the years 1900 and 2200. Gravity tides due to the planets computed from the catalog at the surface of the Earth have an accuracy of about 0.027 pm/s^2 (1 pm/s^2 = 10^-12 m/s^2 = 0.1 ngal) rms, with maximum errors of 0.160 pm/s^2 in the time domain and 0.008 pm/s^2 in the frequency domain, relative to the new benchmark tidal gravity series (Wenzel 1994).

  20. Separation of atmospheric, oceanic and hydrological polar motion excitation mechanisms based on a combination of geometric and gravimetric space observations

    NASA Astrophysics Data System (ADS)

    Göttl, F.; Schmidt, M.; Seitz, F.; Bloßfeld, M.

    2015-04-01

The goal of our study is to determine accurate time series of geophysical Earth rotation excitations to learn more about global dynamic processes in the Earth system. For this purpose, we developed an adjustment model which allows us to combine precise observations from space geodetic observation systems, such as Satellite Laser Ranging (SLR), Global Navigation Satellite Systems, Very Long Baseline Interferometry, Doppler Orbit determination and Radiopositioning Integrated on Satellite, satellite altimetry and satellite gravimetry, in order to separate geophysical excitation mechanisms of Earth rotation. Three polar motion time series are applied to derive the polar motion excitation functions (integral effect). Furthermore, we use five time variable gravity field solutions from the Gravity Recovery and Climate Experiment to determine not only the integral mass effect but also the oceanic and hydrological mass effects by applying suitable filter techniques and a land-ocean mask. For comparison, the integral mass effect is also derived from degree-2 potential coefficients estimated from SLR observations. The oceanic mass effect is additionally determined from sea level anomalies observed by satellite altimetry by reducing the steric sea level anomalies derived from temperature and salinity fields of the oceans. By combining all geodetically estimated excitations, the weaknesses of the individual processing strategies can be reduced and the technique-specific strengths can be accounted for. The formal errors of the adjusted geodetic solutions are smaller than the RMS differences of the geophysical model solutions. The improved excitation time series can be used to improve geophysical modeling.

  1. Fluctuations in air pollution give risk warning signals of asthma hospitalization

    NASA Astrophysics Data System (ADS)

    Hsieh, Nan-Hung; Liao, Chung-Min

    2013-08-01

Recent studies have implicated air pollution in asthma exacerbations. However, the key link between specific air pollutants and the consequent impact on asthma has not been shown. The purpose of this study was to quantify the fluctuations in air pollution time-series dynamics and to correlate the resulting statistical indicators with age-specific asthma hospital admissions. An indicators-based regression model was developed to predict the time-trend of asthma hospital admissions in Taiwan in the period 1998-2010. Five major pollutants were included: particulate matter with aerodynamic diameter less than 10 μm (PM10), ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), and carbon monoxide (CO). We used Spearman's rank correlation to detect the relationships between the time-series based statistical indicators of standard deviation, coefficient of variation, skewness, and kurtosis and monthly asthma hospitalization. We further used the indicators-guided Poisson regression model to test and predict the impact of target air pollutants on asthma incidence. Here we show that the standard deviation of the PM10 data was the indicator most correlated with asthma hospitalization for all age groups, particularly for the elderly. The skewness of the O3 data gave the highest correlation for adult asthmatics. The proposed regression model shows better predictability of annual asthma hospitalization trends for pediatric patients. Our results suggest that a set of statistical indicators inferred from time-series information on major air pollutants can provide advance risk warning signals in complex air pollution-asthma systems and aid in asthma management, which depends heavily on monitoring the dynamics of asthma incidence and environmental stimuli.
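The indicator computation and rank correlation described above are straightforward to sketch. The data below are hypothetical monthly values, and the Spearman implementation ignores tie correction for brevity.

```python
import statistics

def indicators(series):
    """The four time-series indicators used as risk-warning candidates."""
    m = statistics.mean(series)
    sd = statistics.stdev(series)
    n = len(series)
    skew = sum(((x - m) / sd) ** 3 for x in series) / n
    kurt = sum(((x - m) / sd) ** 4 for x in series) / n - 3.0
    return {"sd": sd, "cv": sd / m, "skewness": skew, "kurtosis": kurt}

def spearman(x, y):
    """Spearman rank correlation (no tie correction, for illustration)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

pm10_sd = [22.0, 25.0, 31.0, 28.0, 35.0, 30.0]   # hypothetical monthly PM10 SDs
admissions = [110, 123, 150, 141, 170, 148]       # hypothetical admissions
print(spearman(pm10_sd, admissions))  # 1.0 (identical rank order)
```

In the study's design, each indicator series (per pollutant, per window) is screened this way, and the best-correlated indicators feed the Poisson regression.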

  2. Efficient algorithms for construction of recurrence relations for the expansion and connection coefficients in series of Al-Salam Carlitz I polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2005-12-01

Two formulae expressing explicitly the derivatives and moments of Al-Salam-Carlitz I polynomials of any degree and for any order in terms of the Al-Salam-Carlitz I polynomials themselves are proved. Two other formulae for the expansion coefficients of general-order derivatives D_q^p f(x), and for the moments x^l D_q^p f(x), of an arbitrary function f(x) in terms of its original expansion coefficients are also obtained. Application of these formulae for solving q-difference equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Al-Salam-Carlitz I polynomials and any system of basic hypergeometric orthogonal polynomials, belonging to the q-Hahn class, is described.

  3. Recurrences and explicit formulae for the expansion and connection coefficients in series of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2004-08-01

    A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of one single Bessel polynomial of certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae for solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel are also developed.

  4. Modeling the Secondary Drying Stage of Freeze Drying: Development and Validation of an Excel-Based Model.

    PubMed

    Sahni, Ekneet K; Pikal, Michael J

    2017-03-01

Although several mathematical models of primary drying have been developed over the years, with significant impact on the efficiency of process design, models of secondary drying have been confined to highly complex formulations. The simple-to-use Excel-based model developed here is, in essence, a series of steady-state calculations of heat and mass transfer in the two halves of the dry layer: drying time is divided into a large number of time steps, within each of which steady-state conditions are assumed to prevail. Water desorption isotherm and mass transfer coefficient data are required. We use the Excel "Solver" to estimate the parameters that define the mass transfer coefficient by minimizing the deviations in water content between the calculation and a calibration drying experiment. This tool allows the user to input the parameters specific to the product, process, container, and equipment. Temporal variations in average moisture contents and product temperatures are outputs and are compared with experiment. We observe good agreement between experiments and calculations, generally well within experimental error, for sucrose at various concentrations, temperatures, and ice nucleation temperatures. We conclude that this model can serve as an important process development tool for process design and manufacturing problem-solving. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
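The quasi-steady time-stepping idea can be illustrated with a deliberately simplified stand-in: first-order desorption toward an equilibrium water content, stepped in small increments. The real model couples the desorption isotherm and heat transfer in the two dry-layer halves; the rate constant, moisture values, and units below are hypothetical.

```python
def secondary_drying(c0, c_eq, k, dt, t_end):
    """Quasi-steady time-stepping of residual moisture during secondary
    drying: within each step the desorption flux is taken as constant and
    proportional to the distance from the equilibrium water content."""
    c, t = c0, 0.0
    history = [(t, c)]
    while t < t_end:
        c += -k * (c - c_eq) * dt   # first-order mass transfer, frozen per step
        t += dt
        history.append((round(t, 6), c))
    return history

# hypothetical values: 5% initial water, 0.5% equilibrium, k = 0.4 per hour
curve = secondary_drying(c0=5.0, c_eq=0.5, k=0.4, dt=0.1, t_end=10.0)
print(curve[-1])  # moisture approaches c_eq
```

Fitting the rate parameter (here `k`) to a calibration drying run, as the paper does with Excel's Solver, is then a one-dimensional least-squares problem over such simulated curves.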

  5. Theoretical determination of chemical rate constants using novel time-dependent methods

    NASA Technical Reports Server (NTRS)

    Dateo, Christopher E.

    1994-01-01

The work completed within the grant period 10/1/91 through 12/31/93 falls primarily in the area of reaction dynamics using both quantum and classical mechanical methodologies. Essentially four projects have been completed and have been, or are in preparation of being, published. The majority of the time was spent on the determination of reaction rate coefficients for hydrocarbon fuel combustion reactions relevant to NASA's High Speed Research Program (HSRP). These rate coefficients are important in the design of novel jet engines with low NOx emissions, which, through a series of catalytic reactions, contribute to the deterioration of the earth's ozone layer. A second area of research concerned the control of chemical reactivity using ultrashort (femtosecond) laser pulses. Recent advances in pulsed-laser technologies have opened up a vast new field to be investigated both experimentally and theoretically. The photodissociation of molecules adsorbed on surfaces using novel time-independent quantum mechanical methods was a third project. And finally, using state-of-the-art, high-level ab initio electronic structure methods in conjunction with accurate quantum dynamical methods, the rovibrational energy levels of a triatomic molecule with two nonhydrogen atoms (HCN) were calculated to unprecedented levels of agreement between theory and experiment.

  6. Tests on thirteen navy type model propellers

    NASA Technical Reports Server (NTRS)

    Durand, W F

    1927-01-01

The tests on these model propellers were undertaken for the purpose of determining the performance coefficients and characteristics for certain selected series of propellers of form and type as commonly used in recent navy designs. The first series includes seven propellers with pitch ratio varying in steps of 0.10 up to 1.10, the area, form of blade, thickness, etc., representing an arbitrary standard propeller which had shown good results. The second series covers changes in thickness of blade section, other things equal, and the third series, changes in blade area, other things equal. These models are all of 36-inch diameter. Propellers A to G form the series on pitch ratio; C, N, I, J the series on thickness of section; and K, M, C, L the series on area. (author)

  7. Time Series Analysis of Remote Sensing Observations for Citrus Crop Growth Stage and Evapotranspiration Estimation

    NASA Astrophysics Data System (ADS)

    Sawant, S. A.; Chakraborty, M.; Suradhaniwar, S.; Adinarayana, J.; Durbha, S. S.

    2016-06-01

Satellite based earth observation (EO) platforms have proved their capability to monitor changes on the earth's surface spatio-temporally. Long term satellite missions have provided a huge repository of optical remote sensing datasets, and the United States Geological Survey (USGS) Landsat program is one of the oldest sources of optical EO datasets. This historical and near real time EO archive is a rich source of information to understand the seasonal changes in horticultural crops. Citrus (Mandarin / Nagpur Orange) is one of the major horticultural crops cultivated in central India. Erratic behaviour of rainfall and dependency on groundwater for irrigation have a wide impact on the citrus crop yield. Also, wide variations are reported in temperature and relative humidity, causing early fruit onset and an increase in crop water requirement. Therefore, there is a need to study the crop growth stages and crop evapotranspiration at spatio-temporal scale for managing the scarce resources. In this study, an attempt has been made to understand the citrus crop growth stages using Normalized Difference Vegetation Index (NDVI) time series data obtained from the Landsat archives (http://earthexplorer.usgs.gov/). In total, 388 Landsat 4, 5, 7 and 8 scenes (from 1990 to Aug. 2015) for Worldwide Reference System (WRS) 2, path 145 and row 45, were selected to understand seasonal variations in citrus crop growth. Given the 30 m spatial resolution of Landsat, only orchards with crop cover larger than 2 hectares were selected in order to obtain homogeneous pixels. To account for the change in wavelength bandwidth (radiometric resolution) across the Landsat sensors (i.e. 4, 5, 7 and 8), NDVI was selected to obtain a continuous, sensor-independent time series. The obtained crop growth stage information has been used to estimate citrus basal crop coefficient (Kcb) values. The satellite-based Kcb estimates were used together with relevant weather parameters observed by a proximal agrometeorological sensing system for crop ET estimation.
The results show that time series EO based crop growth stage estimates provide better information about geographically separated citrus orchards. Attempts are being made to estimate regional variations in citrus crop water requirement for effective irrigation planning. In future, high resolution Sentinel 2 observations from the European Space Agency (ESA) will be used to fill the time gaps and to gain a better understanding of citrus crop canopy parameters.
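The NDVI index underlying the time series above is a simple band ratio of near-infrared and red reflectance. The sketch below computes it for a few hypothetical Landsat surface-reflectance pairs; the dates and values are invented for illustration.

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

# hypothetical red/NIR surface-reflectance pairs across a season
scenes = [
    ("1995-01", 0.12, 0.30),
    ("1995-04", 0.08, 0.45),
    ("1995-07", 0.10, 0.38),
]
series = [(date, round(ndvi(r, n), 3)) for date, r, n in scenes]
print(series)  # [('1995-01', 0.429), ('1995-04', 0.698), ('1995-07', 0.583)]
```

Because NDVI is a normalized ratio, it is relatively insensitive to the band-width differences between Landsat 4/5/7/8 sensors, which is the rationale given above for choosing it to build a continuous multi-sensor series.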

  8. Identification of spikes associated with local sources in continuous time series of atmospheric CO, CO2 and CH4

    NASA Astrophysics Data System (ADS)

    El Yazidi, Abdelhadi; Ramonet, Michel; Ciais, Philippe; Broquet, Gregoire; Pison, Isabelle; Abbaris, Amara; Brunner, Dominik; Conil, Sebastien; Delmotte, Marc; Gheusi, Francois; Guerin, Frederic; Hazan, Lynn; Kachroudi, Nesrine; Kouvarakis, Giorgos; Mihalopoulos, Nikolaos; Rivier, Leonard; Serça, Dominique

    2018-03-01

    This study deals with the problem of identifying atmospheric data influenced by local emissions that can result in spikes in time series of greenhouse gases and long-lived tracer measurements. We considered three spike detection methods known as coefficient of variation (COV), robust extraction of baseline signal (REBS) and standard deviation of the background (SD) to detect and filter positive spikes in continuous greenhouse gas time series from four monitoring stations representative of the European ICOS (Integrated Carbon Observation System) Research Infrastructure network. The results of the different methods are compared to each other and against a manual detection performed by station managers. Four stations were selected as test cases to apply the spike detection methods: a continental rural tower of 100 m height in eastern France (OPE), a high-mountain observatory in the south-west of France (PDM), a regional marine background site in Crete (FKL) and a marine clean-air background site in the Southern Hemisphere on Amsterdam Island (AMS). This selection allows us to address spike detection problems in time series with different variability. Two years of continuous measurements of CO2, CH4 and CO were analysed. All methods were found to be able to detect short-term spikes (lasting from a few seconds to a few minutes) in the time series. Analysis of the results of each method leads us to exclude the COV method due to the requirement to arbitrarily specify an a priori percentage of rejected data in the time series, which may over- or underestimate the actual number of spikes. The two other methods freely determine the number of spikes for a given set of parameters, and the values of these parameters were calibrated to provide the best match with spikes known to reflect local emissions episodes that are well documented by the station managers. 
More than 96 % of the spikes manually identified by station managers were successfully detected by both the SD and the REBS methods after the best adjustment of parameter values. At PDM, measurements made by two analyzers located 200 m from each other allowed us to confirm that the CH4 spikes identified in one of the time series but not in the other correspond to a local source from a sewage treatment facility in one of the observatory buildings. From this experiment, we also found that the REBS method underestimates the number of positive anomalies in the CH4 data caused by local sewage emissions. In conclusion, we recommend the use of the SD method, which also appears to be the easiest one to implement in the automatic data processing used for the operational filtering of spikes in greenhouse gas time series at global and regional monitoring stations of networks like the ICOS atmosphere network.
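The flavor of an SD-type filter can be sketched as follows: flag a point as a positive spike when it exceeds the background mean by a multiple of the background standard deviation. This is a simplified stand-in, not the operational algorithm: the real SD method's background definition and parameter values differ, and the data below are synthetic.

```python
import statistics

def sd_spikes(series, window=10, factor=3.0):
    """Flag positive spikes exceeding the trailing-background mean by
    `factor` standard deviations (a simplified SD-type filter)."""
    flags = []
    for i, x in enumerate(series):
        bg = series[max(0, i - window):i]
        if len(bg) < 3:
            flags.append(False)   # not enough background yet
            continue
        m = statistics.mean(bg)
        s = statistics.stdev(bg)
        flags.append(s > 0 and x > m + factor * s)
    return flags

co2 = [400.1, 400.3, 400.2, 400.4, 400.2, 415.0, 400.3, 400.1]  # synthetic ppm
print([i for i, f in enumerate(sd_spikes(co2)) if f])  # [5]
```

One practical refinement visible even in this toy version: once the spike at index 5 enters the trailing window, it inflates the background standard deviation, so operational implementations typically exclude already-flagged points from the background iteratively.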

  9. Price-volume multifractal analysis of the Moroccan stock market

    NASA Astrophysics Data System (ADS)

    El Alaoui, Marwane

    2017-11-01

In this paper, we analyzed price-volume multifractal cross-correlations of the Moroccan Stock Exchange. We chose the period from January 1st 2000 to January 20th 2017 to investigate the multifractal behavior of the price change and volume change series. We then used the multifractal detrended cross-correlation analysis method (MF-DCCA) and multifractal detrended fluctuation analysis (MF-DFA) to analyze the series. We computed the bivariate generalized Hurst exponent, Rényi exponent and singularity spectrum for each pair of indices to measure cross-correlations quantitatively. Furthermore, we used the detrended cross-correlation coefficient (DCCA) and the cross-correlation test (Q(m)) to analyze cross-correlations quantitatively and qualitatively. We found evidence of price-volume multifractal cross-correlations, and the spectrum width indicates a strong multifractal cross-correlation. Analysis of the generalized Hurst exponent for all moments q showed that the volume change series is anti-persistent. The cross-correlation test showed the presence of a significant cross-correlation. However, the DCCA coefficient had a small positive value, which means that the level of correlation is not very significant. Finally, we analyzed the sources of multifractality and their degree of contribution to the series.
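The DCCA coefficient mentioned above is the detrended covariance of two integrated series divided by the product of their detrended fluctuations. MF-DCCA extends this over many scales and moments q; the sketch below computes only the plain DCCA coefficient at a single box size, on synthetic data, to show the mechanics.

```python
import random

def _profile(x):
    """Cumulative sum of mean-removed increments (the 'profile')."""
    m = sum(x) / len(x)
    out, s = [], 0.0
    for v in x:
        s += v - m
        out.append(s)
    return out

def _linfit_residuals(y):
    """Residuals of a least-squares line fit to y against 0..len(y)-1."""
    n = len(y)
    xs = list(range(n))
    mx, my = sum(xs) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, y))
         / sum((xi - mx) ** 2 for xi in xs))
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(xs, y)]

def dcca_coefficient(x, y, scale):
    """rho_DCCA = F2_xy / (F_x * F_y) at one box size `scale`."""
    px, py = _profile(x), _profile(y)
    n_boxes = len(x) // scale
    fxy = fxx = fyy = 0.0
    for b in range(n_boxes):
        seg = slice(b * scale, (b + 1) * scale)
        rx = _linfit_residuals(px[seg])
        ry = _linfit_residuals(py[seg])
        fxy += sum(u * v for u, v in zip(rx, ry))
        fxx += sum(u * u for u in rx)
        fyy += sum(v * v for v in ry)
    return fxy / (fxx * fyy) ** 0.5

rng = random.Random(7)
noise = [rng.gauss(0, 1) for _ in range(400)]
price = [n + 0.3 * rng.gauss(0, 1) for n in noise]   # synthetic correlated pair
volume = [n + 0.3 * rng.gauss(0, 1) for n in noise]
print(dcca_coefficient(price, volume, scale=20))
```

A value near +1 indicates strong detrended cross-correlation at that scale; the small positive value reported above for price-volume indicates a weak one.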

  10. Zero-lag synchronization in coupled time-delayed piecewise linear electronic circuits

    NASA Astrophysics Data System (ADS)

    Suresh, R.; Srinivasan, K.; Senthilkumar, D. V.; Raja Mohamed, I.; Murali, K.; Lakshmanan, M.; Kurths, J.

    2013-07-01

    We investigate and report an experimental confirmation of zero-lag synchronization (ZLS) in a system of three coupled time-delayed piecewise linear electronic circuits via dynamical relaying with different coupling configurations, namely mutual and subsystem coupling configurations. We have observed that when there is a feedback between the central unit (relay unit) and at least one of the outer units, ZLS occurs in the two outer units whereas the central and outer units exhibit inverse phase synchronization (IPS). We find that in the case of mutual coupling configuration ZLS occurs both in periodic and hyperchaotic regimes, while in the subsystem coupling configuration it occurs only in the hyperchaotic regime. Snapshots of the time evolution of outer circuits as observed from the oscilloscope confirm the occurrence of ZLS experimentally. The quality of ZLS is numerically verified by correlation coefficient and similarity function measures. Further, the transition to ZLS is verified from the changes in the largest Lyapunov exponents and the correlation coefficient as a function of the coupling strength. IPS is experimentally confirmed using time series plots and also can be visualized using the concept of localized sets which are also corroborated by numerical simulations. In addition, we have calculated the correlation of probability of recurrence to quantify the phase coherence. We have also analytically derived a sufficient condition for the stability of ZLS using the Krasovskii-Lyapunov theory.
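The two measures used above to verify ZLS quality can be sketched directly. The similarity function below follows the standard definition S²(τ) = ⟨(y(t+τ) − x(t))²⟩ / √(⟨x²⟩⟨y²⟩); the signals are synthetic stand-ins, not the circuit equations from the paper.

```python
import numpy as np

def similarity_function(x, y, tau):
    """Similarity function S^2(tau); it vanishes at tau = 0 for
    zero-lag synchronized signals."""
    if tau > 0:
        d = y[tau:] - x[:-tau]
    elif tau < 0:
        d = y[:tau] - x[-tau:]
    else:
        d = y - x
    return np.mean(d**2) / np.sqrt(np.mean(x**2) * np.mean(y**2))

t = np.linspace(0, 20, 2000)
x = np.sin(t) + 0.3 * np.sin(3.1 * t)   # stand-in for one outer-unit signal
y = x.copy()                            # perfectly zero-lag synchronized copy
corr = np.corrcoef(x, y)[0, 1]          # correlation coefficient -> 1 for ZLS
s_zero = similarity_function(x, y, 0)
s_shift = similarity_function(x, y, 50)
```

A correlation coefficient of 1 together with a similarity-function minimum of zero at τ = 0 is the numerical signature of ZLS used in such studies.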

  11. Clustering of change patterns using Fourier coefficients.

    PubMed

    Kim, Jaehee; Kim, Haseong

    2008-01-15

To understand the behavior of genes, it is important to explore how the patterns of gene expression change over time, because biologically related gene groups can share the same change patterns. Many clustering algorithms have been proposed to group observation data, but because of the complexity of the underlying functions there have not been many studies on grouping data based on change patterns. In this study, the problem of finding similar change patterns is reduced to clustering with the derivative Fourier coefficients. The sample Fourier coefficients not only provide information about the underlying functions, but also reduce the dimension. In addition, as their limiting distribution is multivariate normal, a model-based clustering method incorporating statistical properties is appropriate. This work is aimed at discovering gene groups with similar change patterns that share similar biological properties. We developed a statistical model using derivative Fourier coefficients to identify similar change patterns of gene expression and used a model-based method to cluster the Fourier series estimates of the derivatives. The model-based method is advantageous in our proposed model because the sample Fourier coefficients asymptotically follow a multivariate normal distribution. Change patterns are automatically estimated with the Fourier representation in our model. Our model was tested in simulations and on real gene data sets. The simulation results showed that model-based clustering with the sample Fourier coefficients has a lower clustering error rate than K-means clustering, even when the number of repeated time points was small. We also applied our model to cluster change patterns of yeast cell cycle microarray expression data with alpha-factor synchronization. Because the method clusters with the probability-neighboring data, the model-based clustering with our proposed model yielded biologically interpretable results. We expect that our proposed Fourier analysis, with suitably chosen smoothing parameters, could serve as a useful tool in classifying genes and interpreting possible biological change patterns. The R program is available upon request.
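The dimension-reduction step described above, representing each expression profile by its first few Fourier coefficients before clustering, can be sketched as follows. This illustration clusters the coefficients with a minimal k-means rather than the paper's model-based (multivariate-normal) method; the profile shapes are invented toy data.

```python
import numpy as np

def fourier_features(profiles, n_coef=3):
    """First n_coef Fourier coefficients per profile, stacked as
    real/imag pairs: a low-dimensional summary of each change pattern."""
    F = np.fft.rfft(profiles, axis=1)[:, 1:n_coef + 1]
    return np.hstack([F.real, F.imag])

def two_means(X, n_iter=20):
    """Minimal 2-cluster k-means on the coefficient vectors."""
    centers = X[[0, len(X) // 2]].copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

# Two opposite change patterns, ten slightly phase-shifted profiles each
t = np.linspace(0, 2 * np.pi, 24, endpoint=False)
up = np.array([np.sin(t + 0.1 * i) for i in range(10)])
down = np.array([-np.sin(t + 0.1 * i) for i in range(10)])
labels = two_means(fourier_features(np.vstack([up, down])))
```

The two pattern families separate cleanly in coefficient space, which is the property the model-based clustering exploits with proper statistical weighting.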

  12. Monitoring monthly surface water dynamics of Dongting Lake using Sentinel-1 data at 10 m.

    PubMed

    Xing, Liwei; Tang, Xinming; Wang, Huabin; Fan, Wenfeng; Wang, Guanghui

    2018-01-01

High temporal resolution water distribution maps are essential for surface water monitoring because surface water exhibits significant intra-annual variation; therefore, high-frequency remote sensing data are needed for surface water mapping. Dongting Lake, the second-largest freshwater lake in China, is famous for the seasonal fluctuations of its inundation extent in the middle reaches of the Yangtze River, and it is also greatly affected by the Three Gorges Project. In this study, we used Sentinel-1 data to generate surface water maps of Dongting Lake at 10 m resolution. First, we generated the Sentinel-1 time series backscattering coefficients for VH and VV polarizations at 10 m resolution using a monthly composition method. Second, we derived thresholds for mapping surface water at 10 m resolution with monthly frequency from the Sentinel-1 data. We then produced the monthly surface water distribution product of Dongting Lake for 2016 and analyzed the intra-annual surface water dynamics. The results showed that: (1) The thresholds were -21.56 dB and -15.82 dB for the VH and VV backscattering coefficients, respectively; the overall accuracy and Kappa coefficient were above 95.50% and 0.90 for the VH backscattering coefficient, and above 94.50% and 0.88 for the VV backscattering coefficient. The VV backscattering coefficient achieved lower accuracy because wind-induced roughness affects the water surface. (2) The maximum and minimum surface water areas were 2040.33 km² in July and 738.89 km² in December, and the surface water area of Dongting Lake varied most significantly in April and August. The permanent water area in 2016 was 556.35 km², accounting for 19.65% of the total area of Dongting Lake, and the seasonal water area was 1525.21 km². This study proposed a method to automatically generate monthly surface water maps at 10 m resolution, which may contribute to monitoring surface water in a timely manner.
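The core classification step, thresholding a backscattering-coefficient image in dB, can be sketched with the thresholds reported above. This is only the thresholding; the monthly compositing, speckle handling, and accuracy assessment from the study are omitted, and the tiny image below is invented toy data.

```python
import numpy as np

# Thresholds from the study (dB); pixels darker than the threshold are water.
VH_THRESHOLD = -21.56
VV_THRESHOLD = -15.82

def classify_water(backscatter_db, threshold):
    """Binary water mask from a backscattering-coefficient image in dB:
    open water scatters specularly away from the sensor and appears dark."""
    return np.asarray(backscatter_db) < threshold

# Toy 3x3 VH "image": open-water pixels sit well below -21.56 dB
vh = np.array([[-25.0, -10.2, -23.1],
               [-9.8,  -24.7, -11.5],
               [-22.0, -8.9,  -26.3]])
mask = classify_water(vh, VH_THRESHOLD)
water_fraction = mask.mean()
```

Summing the mask per monthly composite is what yields area series like the 2040.33 km² July maximum quoted above.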

  13. Dynamic cross correlation studies of wave particle interactions in ULF phenomena

    NASA Technical Reports Server (NTRS)

    Mcpherron, R. L.

    1979-01-01

    Magnetic field observations made by satellites in the earth's magnetic field reveal a wide variety of ULF waves. These waves interact with the ambient particle populations in complex ways, causing modulation of the observed particle fluxes. This modulation is found to be a function of species, pitch angle, energy and time. The characteristics of this modulation provide information concerning the wave mode and interaction process. One important characteristic of wave-particle interactions is the phase of the particle flux modulation relative to the magnetic field variations. To display this phase as a function of time a dynamic cross spectrum program has been developed. The program produces contour maps in the frequency time plane of the cross correlation coefficient between any particle flux time series and the magnetic field vector. This program has been utilized in several studies of ULF wave-particle interactions at synchronous orbit.
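The idea of tracking a cross-correlation coefficient as a function of time can be sketched with sliding windows. This is a time-domain simplification; the program described above works in the frequency-time plane, and the signals below are synthetic stand-ins for a particle flux and one magnetic field component.

```python
import numpy as np

def sliding_cross_correlation(flux, field, window=128, step=32):
    """Cross-correlation coefficient between two series in sliding
    windows, giving a coarse picture of how their phase relation
    evolves in time."""
    coeffs = []
    for start in range(0, len(flux) - window + 1, step):
        f = flux[start:start + window]
        b = field[start:start + window]
        coeffs.append(np.corrcoef(f, b)[0, 1])
    return np.array(coeffs)

t = np.arange(1024) / 32.0
field = np.sin(2 * np.pi * t)   # ULF wave in one field component
flux = np.sin(2 * np.pi * t)    # in-phase particle flux modulation...
flux[512:] = -flux[512:]        # ...with a phase flip halfway through
c = sliding_cross_correlation(flux, field)
```

The coefficient starts near +1 and ends near -1, resolving the mid-record phase reversal, which is the kind of wave-particle phase information the dynamic cross-spectrum displays per frequency.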

  14. A Generalized Framework for Reduced-Order Modeling of a Wind Turbine Wake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Nicholas; Viggiano, Bianca; Calaf, Marc

A reduced-order model for a wind turbine wake is sought from large eddy simulation data. Fluctuating velocity fields are combined in the correlation tensor to form the kernel of the proper orthogonal decomposition (POD). The POD modes resulting from the decomposition represent the spatially coherent turbulence structures in the wind turbine wake; the eigenvalues delineate the relative amount of turbulent kinetic energy associated with each mode. Back-projecting the POD modes onto the velocity snapshots produces dynamic coefficients that express the amplitude of each mode in time. A reduced-order model of the wind turbine wake (wakeROM) is defined through a series of polynomial parameters that quantify mode interaction and the evolution of each POD mode coefficient. The resulting system of ordinary differential equations models the wind turbine wake composed only of the large-scale turbulent dynamics identified by the POD. Tikhonov regularization is used to recalibrate the dynamical system by adding constraints to the minimization that seeks the polynomial parameters, reducing error in the modeled mode coefficients. The wakeROM is periodically reinitialized with new initial conditions found by relating the incoming turbulent velocity to the POD mode coefficients through a series of open-loop transfer functions. The wakeROM reproduces mode coefficients to within 25.2%, quantified through the normalized root-mean-square error. A high-level view of the modeling approach is provided as a platform to discuss promising research directions, alternate processes that could benefit stability and efficiency, and desired extensions of the wakeROM.
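The POD building block of the pipeline above, modes, energy fractions, and dynamic coefficients from snapshot data, can be sketched via the SVD. This is a generic snapshot-POD sketch on a toy 1-D field, not the wakeROM itself (no polynomial Galerkin system or Tikhonov recalibration).

```python
import numpy as np

def pod(snapshots):
    """Snapshot POD via SVD: columns of `modes` are spatial POD modes,
    `energies` their relative turbulent-kinetic-energy share, and
    `coeffs[t, k]` the dynamic coefficient of mode k at snapshot t."""
    X = snapshots - snapshots.mean(axis=0)       # fluctuating part
    U, s, Vt = np.linalg.svd(X.T, full_matrices=False)
    modes = U                                    # spatial modes (columns)
    energies = s**2 / np.sum(s**2)               # eigenvalue energy fractions
    coeffs = X @ U                               # back-projection in time
    return modes, energies, coeffs

# Toy field: two coherent space-time structures plus weak noise
rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 200)
space = np.linspace(0, 1, 50)
field = (np.outer(np.sin(t), np.sin(np.pi * space))
         + 0.3 * np.outer(np.cos(2 * t), np.sin(2 * np.pi * space))
         + 0.01 * rng.normal(size=(200, 50)))
modes, energies, coeffs = pod(field)
```

The first two modes capture essentially all of the energy here; truncating to such dominant modes and evolving their coefficients with an ODE system is the reduced-order-modeling step the entry describes.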

  15. Dental Technicians' Pneumoconiosis; Illness Behind a Healthy Smile – Case Series of a Reference Center in Turkey

    PubMed Central

    Alici, Nur Şafak; Beyan, Ayşe Coşkun; Demıral, Yücel; Çimrin, Arif

    2018-01-01

Background: Dental laboratories involve many hazards and risks. Dental technicians working in unfavorable work environments in Turkey and other parts of the world may develop pneumoconiosis as a result of dust exposure, depending on exposure time. In this study, we aimed to investigate the clinical and laboratory findings of dental technicians. Materials and Methods: The study consists of a case series. Between 2013 and 2016, a total of 70 patients who were working as dental technicians and were referred to our clinic with suspicion of occupational disease were evaluated. Comprehensive work history, complaints, physical examination, functional status, chest X-ray, and high-resolution computed tomography (HRCT) findings were evaluated. Results: In all, 46 (65.7%) of the 70 dental technicians were diagnosed with pneumoconiosis; 45 (97.8%) subjects were male and 1 (2.2%) was female. The mean age at starting work was 15.89 ± 2.79 (11-23) years. The mixed dust exposure time was 176.13 ± 73.97 (18-384) months. Small round opacities were the most common finding. In 16 patients, high profusion (2/3 and above) was identified, and large opacities were detected in 11 patients. Radiological profusion had a weak negative correlation with FEV1 and FVC (correlation coefficients -0.18, P = 0.210 and -0.058, P = 0.704) and a moderate negative correlation with FEV1/FVC (correlation coefficient -0.377, P = 0.010). In addition, no correlation was observed between the age at start of work and the duration of exposure. Conclusion: Pneumoconiosis remains present among dental technicians in Turkey, especially because there is an early childhood apprenticeship culture and almost all workers in this period have a history of sandblasting. PMID:29743783

  16. Time series analysis of temporal networks

    NASA Astrophysics Data System (ADS)

    Sikdar, Sandipan; Ganguly, Niloy; Mukherjee, Animesh

    2016-01-01

A common but important feature of all real-world networks is that they are temporal in nature, i.e., the network structure changes over time. Due to this dynamic nature, it becomes difficult to propose suitable growth models that can explain the various important characteristic properties of these networks. In fact, in many application-oriented studies, only knowing these properties is sufficient: for instance, if one wishes to launch a targeted attack on a network, this can be done even without knowledge of the full network structure; an estimate of some of the properties is sufficient to launch the attack. In this paper, we show that even if the network structure at a future time point is not available, one can still estimate its properties. We propose a novel method that maps a temporal network to a set of time series instances, analyzes them, and, using a standard time series forecast model, predicts the properties of the temporal network at a later time instance. To this aim, we consider eight properties, such as the number of active nodes, average degree, and clustering coefficient, and apply our prediction framework to them. We mainly focus on the temporal network of human face-to-face contacts and observe that it represents a stochastic process with memory that can be modeled as an Auto-Regressive Integrated Moving Average (ARIMA) process. We use cross-validation techniques to find the percentage accuracy of our predictions. An important observation is that the frequency domain properties of the time series, obtained from spectrogram analysis, can be used to refine the prediction framework by identifying beforehand the cases where the prediction error is likely to be high. This leads to an improvement of 7.96% (for error level ≤20%) in prediction accuracy on average across all datasets. As an application, we show how such a prediction scheme can be used to launch targeted attacks on temporal networks.
Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.
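The forecasting step, fitting an autoregressive model to a network-property time series and predicting its next value, can be sketched as follows. A plain least-squares AR(2) fit stands in for the paper's ARIMA modeling (statsmodels' ARIMA would be the natural tool in practice), and the "average degree" series is synthetic.

```python
import numpy as np

def fit_ar(series, p=2):
    """Least-squares AR(p) fit: regress x_t on [1, x_{t-p}, ..., x_{t-1}]."""
    x = np.asarray(series, dtype=float)
    rows = [x[i:i + p] for i in range(len(x) - p)]
    A = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coefs, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return coefs

def forecast(series, coefs, steps=1, p=2):
    """Roll the fitted AR model forward `steps` time points."""
    x = list(series)
    for _ in range(steps):
        x.append(coefs[0] + np.dot(coefs[1:], x[-p:]))
    return x[len(series):]

# Synthetic "average degree per day" series with a stable AR(2) structure
rng = np.random.default_rng(3)
deg = [10.0, 10.5]
for _ in range(200):
    deg.append(2.0 + 0.5 * deg[-2] + 0.3 * deg[-1] + rng.normal(0, 0.05))
coefs = fit_ar(deg)
pred = forecast(deg, coefs, steps=1)[0]
```

Repeating this per property (active nodes, clustering coefficient, etc.) and cross-validating the forecasts is the scheme the entry applies to face-to-face contact networks.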

  17. Description of data on the Nimbus 7 LIMS map archive tape: Ozone and nitric acid

    NASA Technical Reports Server (NTRS)

    Remsberg, E. E.; Kurzeja, R. J.; Haggard, K. V.; Russell, J. M., III; Gordley, L. L.

    1986-01-01

The Nimbus 7 Limb Infrared Monitor of the Stratosphere (LIMS) data set has been processed into a Fourier coefficient representation with a Kalman filter algorithm applied to profile data at individual latitudes and pressure levels. The algorithm produces synoptic data at noon Greenwich Mean Time (GMT) from the asynoptic orbital profiles. This form of the data set is easy to use and is appropriate for time series analysis and further data manipulation and display. Ozone and nitric acid results are grouped together in this report because the LIMS vertical fields of view (FOVs) and analysis characteristics for these species are similar. A comparison of the orbital input data with mixing ratios derived from Kalman filter coefficients indicates errors in mixing ratio of generally less than 5 percent, with 15 percent being a maximum error. The high quality of the mapped data was indicated by coherence of both the phases and the amplitudes of waves with latitude and pressure. Examples of the mapped fields are presented, and details are given concerning the importance of diurnal variations, the removal of polar stratospheric cloud signatures, and the interpretation of bias effects in the data near the tops of profiles.

  18. Use of a Spreadsheet to Help Students Understand the Origin of the Empirical Equation that Allows Estimation of the Extinction Coefficients of Proteins

    ERIC Educational Resources Information Center

    Sims, Paul A.

    2012-01-01

    A brief history of the development of the empirical equation that is used by prominent, Internet-based programs to estimate (or calculate) the extinction coefficients of proteins is presented. In addition, an overview of a series of related assignments designed to help students understand the origin of the empirical equation is provided. The…

  19. A diffusivity model for predicting VOC diffusion in porous building materials based on fractal theory.

    PubMed

    Liu, Yanfeng; Zhou, Xiaojun; Wang, Dengjia; Song, Cong; Liu, Jiaping

    2015-12-15

Most building materials are porous media, and the internal diffusion coefficients of such materials have an important influence on the emission characteristics of volatile organic compounds (VOCs). The pore structure of porous building materials has a significant impact on the diffusion coefficient, but its complex structural characteristics make model development difficult, and the existing prediction models of the diffusion coefficient are flawed and need improvement. Using scanning electron microscope (SEM) observations and mercury intrusion porosimetry (MIP) tests of typical porous building materials, this study developed a new diffusivity model: the multistage series-connection fractal capillary-bundle (MSFC) model. The model considers the variable-diameter capillaries formed by macropores connected in series as the main mass transfer paths, with the diameter distribution of the capillary bundles obeying a fractal power law in the cross section. In addition, the tortuosity of the macrocapillary segments with different diameters is obtained by fractal theory. Mesopores serve as connections between the macrocapillary segments rather than as the main mass transfer paths. The theoretical results obtained using the MSFC model yielded highly accurate predictions of the diffusion coefficients and were in good agreement with VOC concentration measurements in an environmental test chamber. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Study on drag coefficient of rising bubble in still water

    NASA Astrophysics Data System (ADS)

    Shi, M. Y.; Qi, Mei; Yi, C. G.; Liu, D. Y.; Zhang, K. X.

    2017-09-01

Research on the behavior of a rising bubble in still water is based on Newtonian classical mechanics. We develop a computational analysis and an experimental procedure for bubble rising behavior in order to find an appropriate way of evaluating the drag coefficient, which is the key element of this problem. We analyze the adaptability of the drag coefficient and compare the theoretical model with the experimental observations of rising bubble behavior. The results show that the rate of change of the bubble radius can be ignored, the acceleration phase is transient, and the final velocity and bubble diameter depend on the drag coefficient but have no obvious relation to the depth of water. After a series of inference analyses of the bubble behavior and experimental demonstrations, a new drag coefficient and computing method are proposed.
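The underlying force balance, buoyancy against drag at terminal velocity, can be made concrete. This is the textbook balance for a spherical bubble, not the paper's new drag-coefficient model, and the drag coefficient value below is an assumed illustrative input.

```python
import math

def terminal_velocity(d, cd, rho_w=998.0, rho_g=1.2, g=9.81):
    """Terminal rise velocity (m/s) of a spherical bubble of diameter d (m)
    from the buoyancy-drag balance:
        (rho_w - rho_g) * g * (pi/6) d^3 = 0.5 * cd * rho_w * (pi/4) d^2 * v^2
    Note the water depth does not enter, consistent with the finding above."""
    return math.sqrt(4.0 * g * d * (rho_w - rho_g) / (3.0 * cd * rho_w))

# A 2 mm bubble with an assumed drag coefficient of 1.0
v = terminal_velocity(2e-3, 1.0)
```

Because the final velocity depends on cd and d but not on depth, measuring the terminal velocity is exactly what lets such experiments back out the drag coefficient.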
